asterisk/rest-api/api-docs/bridges.json

{
"_copyright": "Copyright (C) 2012 - 2013, Digium, Inc.",
"_author": "David M. Lee, II <dlee@digium.com>",
"_svn_revision": "$Revision$",
"apiVersion": "2.0.0",
"swaggerVersion": "1.1",
"basePath": "http://localhost:8088/ari",
"resourcePath": "/api-docs/bridges.{format}",
"apis": [
{
"path": "/bridges",
"description": "Active bridges",
"operations": [
{
"httpMethod": "GET",
"summary": "List all active bridges in Asterisk.",
"nickname": "list",
"responseClass": "List[Bridge]"
},
{
"httpMethod": "POST",
"summary": "Create a new bridge.",
"notes": "This bridge persists until it has been shut down, or Asterisk has been shut down.",
"nickname": "create",
"responseClass": "Bridge",
"parameters": [
{
"name": "type",
"description": "Comma separated list of bridge type attributes (mixing, holding, dtmf_events, proxy_media).",
"paramType": "query",
"required": false,
"allowMultiple": false,
"dataType": "string"
},
{
"name": "bridgeId",
"description": "Unique ID to give to the bridge being created.",
"paramType": "query",
"required": false,
"allowMultiple": false,
"dataType": "string"
},
{
"name": "name",
"description": "Name to give to the bridge being created.",
"paramType": "query",
"required": false,
"allowMultiple": false,
"dataType": "string"
}
]
}
]
},
{
"path": "/bridges/{bridgeId}",
"description": "Individual bridge",
"operations": [
{
"httpMethod": "POST",
"summary": "Create a new bridge or updates an existing one.",
"notes": "This bridge persists until it has been shut down, or Asterisk has been shut down.",
"nickname": "createWithId",
"responseClass": "Bridge",
"parameters": [
{
"name": "type",
"description": "Comma separated list of bridge type attributes (mixing, holding, dtmf_events, proxy_media) to set.",
"paramType": "query",
"required": false,
"allowMultiple": false,
"dataType": "string"
},
{
"name": "bridgeId",
"description": "Unique ID to give to the bridge being created.",
"paramType": "path",
"required": true,
"allowMultiple": false,
"dataType": "string"
},
{
"name": "name",
"description": "Set the name of the bridge.",
"paramType": "query",
"required": false,
"allowMultiple": false,
"dataType": "string"
}
]
},
{
"httpMethod": "GET",
"summary": "Get bridge details.",
"nickname": "get",
"responseClass": "Bridge",
"parameters": [
{
"name": "bridgeId",
"description": "Bridge's id",
"paramType": "path",
"required": true,
"allowMultiple": false,
"dataType": "string"
}
],
"errorResponses": [
{
"code": 404,
"reason": "Bridge not found"
}
]
},
{
"httpMethod": "DELETE",
"summary": "Shut down a bridge.",
"notes": "If any channels are in this bridge, they will be removed and resume whatever they were doing beforehand.",
"nickname": "destroy",
"responseClass": "void",
"parameters": [
{
"name": "bridgeId",
"description": "Bridge's id",
"paramType": "path",
"required": true,
"allowMultiple": false,
"dataType": "string"
}
],
"errorResponses": [
{
"code": 404,
"reason": "Bridge not found"
}
]
}
]
},
{
"path": "/bridges/{bridgeId}/addChannel",
"description": "Add a channel to a bridge",
"operations": [
{
"httpMethod": "POST",
"summary": "Add a channel to a bridge.",
"nickname": "addChannel",
"responseClass": "void",
"parameters": [
{
"name": "bridgeId",
"description": "Bridge's id",
"paramType": "path",
"required": true,
"allowMultiple": false,
"dataType": "string"
},
{
"name": "channel",
"description": "Ids of channels to add to bridge",
"paramType": "query",
"required": true,
"allowMultiple": true,
"dataType": "string"
},
{
"name": "role",
"description": "Channel's role in the bridge",
"paramType": "query",
"required": false,
"allowMultiple": false,
"dataType": "string"
}
],
"errorResponses": [
{
"code": 400,
"reason": "Channel not found"
},
{
"code": 404,
"reason": "Bridge not found"
},
{
"code": 409,
"reason": "Bridge not in Stasis application; Channel currently recording"
},
{
"code": 422,
"reason": "Channel not in Stasis application"
}
]
}
]
},
{
"path": "/bridges/{bridgeId}/removeChannel",
"description": "Remove a channel from a bridge",
"operations": [
{
"httpMethod": "POST",
"summary": "Remove a channel from a bridge.",
"nickname": "removeChannel",
"responseClass": "void",
"parameters": [
{
"name": "bridgeId",
"description": "Bridge's id",
"paramType": "path",
"required": true,
"allowMultiple": false,
"dataType": "string"
},
{
"name": "channel",
"description": "Ids of channels to remove from bridge",
"paramType": "query",
"required": true,
"allowMultiple": true,
"dataType": "string"
}
],
"errorResponses": [
{
"code": 400,
"reason": "Channel not found"
},
{
"code": 404,
"reason": "Bridge not found"
},
{
"code": 409,
"reason": "Bridge not in Stasis application"
},
{
"code": 422,
"reason": "Channel not in this bridge"
}
]
}
]
},
{
"path": "/bridges/{bridgeId}/videoSource/{channelId}",
"description": "Set a channel as the video source in a multi-party bridge",
"operations": [
{
"httpMethod": "POST",
"summary": "Set a channel as the video source in a multi-party mixing bridge. This operation has no effect on bridges with two or fewer participants.",
"nickname": "setVideoSource",
"responseClass": "void",
"parameters": [
{
"name": "bridgeId",
"description": "Bridge's id",
"paramType": "path",
"required": true,
"allowMultiple": false,
"dataType": "string"
},
{
"name": "channelId",
"description": "Channel's id",
"paramType": "path",
"required": true,
"allowMultiple": false,
"dataType": "string"
}
],
"errorResponses": [
{
"code": 404,
"reason": "Bridge or Channel not found"
},
{
"code": 409,
"reason": "Channel not in Stasis application"
},
{
"code": 422,
"reason": "Channel not in this Bridge"
}
]
}
]
},
{
"path": "/bridges/{bridgeId}/videoSource",
"description": "Removes any explicit video source",
"operations": [
{
"httpMethod": "DELETE",
"summary": "Removes any explicit video source in a multi-party mixing bridge. This operation has no effect on bridges with two or fewer participants. When no explicit video source is set, talk detection will be used to determine the active video stream.",
"nickname": "clearVideoSource",
"responseClass": "void",
"parameters": [
{
"name": "bridgeId",
"description": "Bridge's id",
"paramType": "path",
"required": true,
"allowMultiple": false,
"dataType": "string"
}
],
"errorResponses": [
{
"code": 404,
"reason": "Bridge not found"
}
]
}
]
},
{
"path": "/bridges/{bridgeId}/moh",
"description": "Play music on hold to a bridge",
"operations": [
{
"httpMethod": "POST",
"summary": "Play music on hold to a bridge or change the MOH class that is playing.",
"nickname": "startMoh",
"responseClass": "void",
"parameters": [
{
"name": "bridgeId",
"description": "Bridge's id",
"paramType": "path",
"required": true,
"allowMultiple": false,
"dataType": "string"
},
{
"name": "mohClass",
"description": "Channel's id",
"paramType": "query",
"required": false,
"allowMultiple": false,
"dataType": "string"
}
],
"errorResponses": [
{
"code": 404,
"reason": "Bridge not found"
},
{
"code": 409,
"reason": "Bridge not in Stasis application"
}
]
},
{
"httpMethod": "DELETE",
"summary": "Stop playing music on hold to a bridge.",
"notes": "This will only stop music on hold being played via POST bridges/{bridgeId}/moh.",
"nickname": "stopMoh",
"responseClass": "void",
"parameters": [
{
"name": "bridgeId",
"description": "Bridge's id",
"paramType": "path",
"required": true,
"allowMultiple": false,
"dataType": "string"
}
],
"errorResponses": [
{
"code": 404,
"reason": "Bridge not found"
},
{
"code": 409,
"reason": "Bridge not in Stasis application"
}
]
}
]
},
{
"path": "/bridges/{bridgeId}/play",
"description": "Play media to the participants of a bridge",
"operations": [
{
"httpMethod": "POST",
"summary": "Start playback of media on a bridge.",
"notes": "The media URI may be any of a number of URI's. Currently sound:, recording:, number:, digits:, characters:, and tone: URI's are supported. This operation creates a playback resource that can be used to control the playback of media (pause, rewind, fast forward, etc.)",
"nickname": "play",
"responseClass": "Playback",
"parameters": [
{
"name": "bridgeId",
"description": "Bridge's id",
"paramType": "path",
"required": true,
"allowMultiple": false,
"dataType": "string"
},
{
"name": "media",
"description": "Media URIs to play.",
"paramType": "query",
"required": true,
"allowMultiple": true,
"dataType": "string"
},
{
"name": "lang",
"description": "For sounds, selects language for sound.",
"paramType": "query",
"required": false,
"allowMultiple": false,
"dataType": "string"
},
{
"name": "offsetms",
"description": "Number of milliseconds to skip before playing. Only applies to the first URI if multiple media URIs are specified.",
"paramType": "query",
"required": false,
"allowMultiple": false,
"dataType": "int",
"defaultValue": 0,
"allowableValues": {
"valueType": "RANGE",
"min": 0
}
},
{
"name": "skipms",
"description": "Number of milliseconds to skip for forward/reverse operations.",
"paramType": "query",
"required": false,
"allowMultiple": false,
"dataType": "int",
"defaultValue": 3000,
"allowableValues": {
"valueType": "RANGE",
"min": 0
}
},
{
"name": "playbackId",
"description": "Playback Id.",
"paramType": "query",
"required": false,
"allowMultiple": false,
"dataType": "string"
}
],
"errorResponses": [
{
"code": 404,
"reason": "Bridge not found"
},
{
"code": 409,
"reason": "Bridge not in a Stasis application"
}
]
}
]
},
{
"path": "/bridges/{bridgeId}/play/{playbackId}",
"description": "Play media to a bridge",
"operations": [
{
"httpMethod": "POST",
"summary": "Start playback of media on a bridge.",
"notes": "The media URI may be any of a number of URI's. Currently sound:, recording:, number:, digits:, characters:, and tone: URI's are supported. This operation creates a playback resource that can be used to control the playback of media (pause, rewind, fast forward, etc.)",
"nickname": "playWithId",
"responseClass": "Playback",
"parameters": [
{
"name": "bridgeId",
"description": "Bridge's id",
"paramType": "path",
"required": true,
"allowMultiple": false,
"dataType": "string"
},
{
"name": "playbackId",
"description": "Playback ID.",
"paramType": "path",
"required": true,
"allowMultiple": false,
"dataType": "string"
},
{
"name": "media",
"description": "Media URIs to play.",
"paramType": "query",
"required": true,
"allowMultiple": true,
"dataType": "string"
},
{
"name": "lang",
"description": "For sounds, selects language for sound.",
"paramType": "query",
"required": false,
"allowMultiple": false,
"dataType": "string"
},
{
"name": "offsetms",
"description": "Number of milliseconds to skip before playing. Only applies to the first URI if multiple media URIs are specified.",
"paramType": "query",
"required": false,
"allowMultiple": false,
"dataType": "int",
"defaultValue": 0,
"allowableValues": {
"valueType": "RANGE",
"min": 0
}
},
{
"name": "skipms",
"description": "Number of milliseconds to skip for forward/reverse operations.",
"paramType": "query",
"required": false,
"allowMultiple": false,
"dataType": "int",
"defaultValue": 3000,
"allowableValues": {
"valueType": "RANGE",
"min": 0
}
}
],
"errorResponses": [
{
"code": 404,
"reason": "Bridge not found"
},
{
"code": 409,
"reason": "Bridge not in a Stasis application"
}
]
}
]
},
{
"path": "/bridges/{bridgeId}/record",
"description": "Record audio on a bridge",
"operations": [
{
"httpMethod": "POST",
"summary": "Start a recording.",
"notes": "This records the mixed audio from all channels participating in this bridge.",
"nickname": "record",
"responseClass": "LiveRecording",
"parameters": [
{
"name": "bridgeId",
"description": "Bridge's id",
"paramType": "path",
"required": true,
"allowMultiple": false,
"dataType": "string"
},
{
"name": "name",
"description": "Recording's filename",
"paramType": "query",
"required": true,
"allowMultiple": false,
"dataType": "string"
},
{
"name": "format",
"description": "Format to encode audio in",
"paramType": "query",
"required": true,
"allowMultiple": false,
"dataType": "string"
},
{
"name": "maxDurationSeconds",
"description": "Maximum duration of the recording, in seconds. 0 for no limit.",
"paramType": "query",
"required": false,
"allowMultiple": false,
"dataType": "int",
"defaultValue": 0,
"allowableValues": {
"valueType": "RANGE",
"min": 0
}
},
{
"name": "maxSilenceSeconds",
"description": "Maximum duration of silence, in seconds. 0 for no limit.",
"paramType": "query",
"required": false,
"allowMultiple": false,
"dataType": "int",
"defaultValue": 0,
"allowableValues": {
"valueType": "RANGE",
"min": 0
}
},
{
"name": "ifExists",
"description": "Action to take if a recording with the same name already exists.",
"paramType": "query",
"required": false,
"allowMultiple": false,
"dataType": "string",
"defaultValue": "fail",
"allowableValues": {
"valueType": "LIST",
"values": [
"fail",
"overwrite",
"append"
]
}
},
{
"name": "beep",
"description": "Play beep when recording begins",
"paramType": "query",
"required": false,
"allowMultiple": false,
"dataType": "boolean",
"defaultValue": false
},
{
"name": "terminateOn",
"description": "DTMF input to terminate recording.",
"paramType": "query",
"required": false,
"allowMultiple": false,
"dataType": "string",
"defaultValue": "none",
"allowableValues": {
"valueType": "LIST",
"values": [
"none",
"any",
"*",
"#"
]
}
}
],
"errorResponses": [
{
"code": 400,
"reason": "Invalid parameters"
},
{
"code": 404,
"reason": "Bridge not found"
},
{
"code": 409,
"reason": "Bridge is not in a Stasis application; A recording with the same name already exists on the system and can not be overwritten because it is in progress or ifExists=fail"
},
{
"code": 422,
"reason": "The format specified is unknown on this system"
}
]
}
]
}
],
"models": {
"Bridge": {
"id": "Bridge",
"description": "The merging of media from one or more channels.\n\nEveryone on the bridge receives the same audio.",
"properties": {
"id": {
"type": "string",
"description": "Unique identifier for this bridge",
"required": true
},
"technology": {
"type": "string",
"description": "Name of the current bridging technology",
"required": true
},
"bridge_type": {
"type": "string",
"description": "Type of bridge technology",
"required": true,
"allowableValues": {
"valueType": "LIST",
"values": [
"mixing",
"holding"
]
}
},
"bridge_class": {
"type": "string",
"description": "Bridging class",
"required": true
},
"creator": {
"type": "string",
"description": "Entity that created the bridge",
"required": true
},
"name": {
"type": "string",
"description": "Name the creator gave the bridge",
"required": true
},
"channels": {
"type": "List[string]",
"description": "Ids of channels participating in this bridge",
"required": true
},
"video_mode": {
"type": "string",
"description": "The video mode the bridge is using. One of 'none', 'talker', or 'single'.",
"required": false
},
"video_source_id": {
"type": "string",
"description": "The ID of the channel that is the source of video in this bridge, if one exists.",
"required": false
}
}
}
}
}
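
For reference, a minimal client sketch (not part of the spec file) showing how a few of the /bridges operations documented above might be driven over HTTP. It assumes ARI is reachable at the basePath given in this document (http://localhost:8088/ari), that an ARI user and password have been configured, and that a channel id would normally come from a StasisStart event; the credentials and the channel id below are illustrative only.

```python
# Hypothetical ARI client sketch; values marked "illustrative" are assumptions.
import requests

ARI_BASE = "http://localhost:8088/ari"      # basePath from this spec
AUTH = ("ari_user", "ari_password")         # illustrative ARI credentials

# POST /bridges - create a mixing bridge (operation "create")
resp = requests.post(
    f"{ARI_BASE}/bridges",
    params={"type": "mixing", "name": "demo-bridge"},
    auth=AUTH,
)
resp.raise_for_status()
bridge = resp.json()  # a Bridge model, as described in "models" above

# POST /bridges/{bridgeId}/addChannel - put a channel into the bridge
requests.post(
    f"{ARI_BASE}/bridges/{bridge['id']}/addChannel",
    params={"channel": "1612345678.42"},    # illustrative channel id
    auth=AUTH,
).raise_for_status()

# POST /bridges/{bridgeId}/play - play a prompt to everyone in the bridge
requests.post(
    f"{ARI_BASE}/bridges/{bridge['id']}/play",
    params={"media": "sound:tt-monkeys"},
    auth=AUTH,
).raise_for_status()
```

Note that, per the spec above, every operation takes only path and query parameters; none of the /bridges operations uses a JSON request body.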