🌐 API
Setting the input image or video directly.
This document explores the endpoints available for a Pop. Note that you must first fetch the Pop's worker server address and pipeline ID before making any other API calls.
GET
https://api.eyepop.ai/api/v1/user/pops/{POP_UUID}/config
Responds with the worker's base URL, pipeline ID, Pop type, and more.
Path Parameters

Name | Type | Description |
---|---|---|
POP_UUID* | String | The UUID that identifies your Pop |

Headers

Name | Type | Description |
---|---|---|
Authorization* | Bearer {token} | Your authentication token |
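A minimal sketch of this call in Python, assuming the `requests` library. The response field names used here (`base_url`, `pipeline_id`) mirror the `PopConfig` placeholders in the endpoints below, but confirm them against a live response:

```python
import requests

POP_UUID = "your-pop-uuid"   # the UUID that identifies your Pop
TOKEN = "your-access-token"  # your authentication token

# Fetch the Pop's worker server address and pipeline ID
resp = requests.get(
    f"https://api.eyepop.ai/api/v1/user/pops/{POP_UUID}/config",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
config = resp.json()

# Needed for every subsequent pipeline call
base_url = config["base_url"]        # PopConfig.base_url
pipeline_id = config["pipeline_id"]  # PopConfig.pipeline_id
```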
POST
{PopConfig.base_url}/pipelines/{PopConfig.pipeline_id}/source
Changes the live pipeline to use the uploaded image or video file as a new source.
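A minimal upload sketch under the same assumptions (`requests`, plus `base_url`, `pipeline_id`, and `TOKEN` from the config call above); the `mode` and `processing` values are illustrative:

```python
import requests

# base_url, pipeline_id and TOKEN as obtained in the config sketch above.
# Upload a local image as the pipeline's new source and wait for the result.
with open("photo.jpg", "rb") as f:
    resp = requests.post(
        f"{base_url}/pipelines/{pipeline_id}/source",
        params={"mode": "queue", "processing": "sync"},  # required query parameters
        headers={"Authorization": f"Bearer {TOKEN}"},
        files={"file": f},  # the new source, sent as form data
    )
resp.raise_for_status()
result = resp.json()
```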
PATCH
{PopConfig.base_url}/pipelines/{PopConfig.pipeline_id}/source
Changes the live pipeline to use a new source specified by a URL.
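A sketch of switching the live pipeline to a publicly accessible URL, under the same assumptions:

```python
import requests

# base_url, pipeline_id and TOKEN as obtained in the config sketch above
resp = requests.patch(
    f"{base_url}/pipelines/{pipeline_id}/source",
    params={"mode": "preempt", "processing": "async"},  # illustrative values
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "sourceType": "URL",
        "url": "https://example.com/video.mp4",  # must be publicly accessible
    },
)
resp.raise_for_status()
```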
PATCH
{PopConfig.base_url}/pipelines/{PopConfig.pipeline_id}/source
Changes the live pipeline to use a new source specified by a list of image URLs synchronously.
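A sketch of submitting a list of image URLs. Per the parameter reference below, list sources are processed synchronously; the exact shape of each `sources` entry (plain URL strings here) is an assumption based on the "List of URLs" parameter description:

```python
import requests

# base_url, pipeline_id and TOKEN as obtained in the config sketch above
resp = requests.patch(
    f"{base_url}/pipelines/{pipeline_id}/source",
    params={"mode": "queue", "processing": "sync"},  # list sources require sync
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "sourceType": "LIST",
        "sources": [
            "https://example.com/image1.jpg",
            "https://example.com/image2.jpg",
        ],
    },
)
resp.raise_for_status()
results = resp.json()
```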
PATCH
{PopConfig.base_url}/pipelines/{PopConfig.pipeline_id}/source
Changes the live pipeline to use a new streaming video source via the WHIP protocol.
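A sketch of pointing the pipeline at a live WHIP stream; the ingress ID is assumed to come from a prior WHIP negotiation:

```python
import requests

# base_url, pipeline_id and TOKEN as obtained in the config sketch above
resp = requests.patch(
    f"{base_url}/pipelines/{pipeline_id}/source",
    params={"mode": "preempt", "processing": "async"},  # illustrative values
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "sourceType": "LIVE_INGRESS",
        "liveIngressId": "your-ingress-id",  # from the WHIP negotiation
    },
)
resp.raise_for_status()
```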
EyePop.ai's computer vision API provides detailed results concerning the contents of the analyzed images. These results are structured in JSON format.
Here is an illustrative excerpt for a single frame from a video input source. The values below are fabricated for illustration; the field names follow the descriptions in the rest of this section:
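```json
{
  "source_width": 1280,
  "source_height": 720,
  "objects": [
    {
      "category": "common-objects",
      "classLabel": "person",
      "confidence": 0.92,
      "x": 212,
      "y": 104,
      "width": 385,
      "height": 519,
      "objects": [
        {
          "classLabel": "face",
          "confidence": 0.88,
          "classes": [
            { "category": "expression", "classLabel": "smile", "confidence": 0.74 }
          ]
        }
      ],
      "keyPoints": {
        "points": [
          { "classLabel": "nose", "confidence": 0.97, "x": 401, "y": 182 }
        ]
      }
    }
  ]
}
```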
The results include the dimensions of the source image which was analyzed:
`result.source_width`: The width of the source image.
`result.source_height`: The height of the source image.
The API identifies various objects within the image and provides details about each of these objects:
`result.objects`: An array containing information about each predicted object.
`result.objects[0].confidence`: The confidence score associated with the predicted object. This value indicates the certainty of the detection.
`result.objects[0].category`: The category of the class. Categories are used to group classes together logically. Examples include EyePop predefined categories like "common-objects", "vehicles" or "person". For custom-trained models, these categories are assigned at Pop creation time.
`result.objects[0].classLabel`: The label of the class, e.g. "person".
`result.objects[0].x`, `result.objects[0].y`: The top-left coordinates of the predicted object within the image.
`result.objects[0].width`, `result.objects[0].height`: The width and height of the predicted object (see the scaling sketch after this list).
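Because `x`, `y`, `width` and `height` are pixel values relative to `source_width` and `source_height`, overlays rendered at a different resolution need to be rescaled first. A minimal sketch under those assumptions:

```python
def scale_box(obj: dict, source_w: int, source_h: int,
              target_w: int, target_h: int) -> tuple:
    """Scale a predicted object's pixel box from the analyzed source
    dimensions to a target canvas (e.g. a resized preview)."""
    sx = target_w / source_w
    sy = target_h / source_h
    return (obj["x"] * sx, obj["y"] * sy, obj["width"] * sx, obj["height"] * sy)

# Hypothetical usage with a parsed result dict:
# box = scale_box(result["objects"][0], result["source_width"],
#                 result["source_height"], 640, 360)
```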
For some primary predicted objects, the API might identify secondary objects within them. For example, within a "person" object, the API could detect a "face" object.
`result.objects[0].objects`: An array containing secondary objects predicted within a primary object.
`result.objects[0].objects[0].classLabel`: The label of the class of the secondary object, e.g. "face".
For some secondary objects, the API performs classifications to identify various attributes:
`result.objects[0].objects[0].classes`: Contains the classifications for the predicted object.
`result.objects[0].objects[0].classes[0].confidence`: The confidence score associated with the predicted classification value.
`result.objects[0].objects[0].classes[0].category`: The category of the class. Categories are used to group classes together logically. Examples include EyePop predefined categories like "age-range", "gender" or "expression". For custom-trained models, these categories are assigned at Pop creation time.
`result.objects[0].objects[0].classes[0].classLabel`: The label of the predicted classification value.
For some primary objects like "person", the API identifies body keypoints, indicating specific landmarks such as the eyes, nose, wrists, etc.
`result.objects[0].keyPoints.points`: An array containing information about each predicted keypoint.
`result.objects[0].keyPoints.points[0].confidence`: Confidence score for the predicted keypoint.
`result.objects[0].keyPoints.points[0].classLabel`: The label of the keypoint, e.g. "nose".
`result.objects[0].keyPoints.points[0].x`, `result.objects[0].keyPoints.points[0].y`: The coordinates of the predicted keypoint within the image.
Custom Pop types can predict classifications on the entire image content. In this case, the result includes a top-level "classes" array, analogous to the per-object classifications above.
`result.classes`: Contains the classifications for the entire image.
`result.classes[0].confidence`: The confidence score associated with the predicted classification value.
`result.classes[0].category`: The category of the class. Categories are used to group classes together logically.
`result.classes[0].classLabel`: The label of the predicted classification value.
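Putting the fields above together, a small sketch that walks a result dict and prints every prediction (assuming the field names described in this section; optional fields are tolerated):

```python
def summarize(result: dict) -> None:
    """Print primary objects, nested objects, classifications and keypoints."""
    for obj in result.get("objects", []):
        print(f"{obj['classLabel']} ({obj['confidence']:.2f})")
        # Secondary objects, e.g. a "face" inside a "person"
        for sub in obj.get("objects", []):
            print(f"  - {sub['classLabel']}")
            for cls in sub.get("classes", []):
                print(f"    {cls['category']}: {cls['classLabel']} ({cls['confidence']:.2f})")
        # Body keypoints, e.g. "nose"
        for pt in obj.get("keyPoints", {}).get("points", []):
            print(f"  * {pt['classLabel']} at ({pt['x']}, {pt['y']})")
    # Whole-image classifications from custom Pop types
    for cls in result.get("classes", []):
        print(f"image: {cls['classLabel']} ({cls['confidence']:.2f})")
```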
Parameter reference for the source endpoints above. Fields marked * are required.

POST {PopConfig.base_url}/pipelines/{PopConfig.pipeline_id}/source (file upload)

Path Parameters

Name | Type | Description |
---|---|---|
SERVER_URL* | String | The base URL received from the GET config request above |
PIPELINE_ID* | String | The pipeline ID of the active Pop |

Headers

Name | Type | Description |
---|---|---|
Authorization* | Bearer {token} | Your authentication token |

Query Parameters

Name | Type | Description |
---|---|---|
mode* | String | How to handle concurrent sources. One of: reject (the pipeline continues its previous task and rejects this one), preempt (this task takes over whatever the pipeline was working on), or queue (this task is added to the end of the Pop's task queue). |
processing* | String | Whether to wait for the result (sync, synchronous) or fire and forget (async, asynchronous). |

Request Body (form data)

Name | Type | Description |
---|---|---|
file* | File | The new source as an image or video file, provided in the form data. |

PATCH {PopConfig.base_url}/pipelines/{PopConfig.pipeline_id}/source (URL source)

Path Parameters

Name | Type | Description |
---|---|---|
SERVER_URL* | String | The base URL received from the GET config request above |
PIPELINE_ID* | String | The pipeline ID of the active Pop |

Query Parameters

Name | Type | Description |
---|---|---|
mode* | String | How to handle concurrent sources: reject, preempt, or queue (see above). |
processing* | String | Whether to wait for the result (sync) or fire and forget (async). |

Request Body

Name | Type | Description |
---|---|---|
sourceType | String | Must be "URL". |
url | String | The URL of the image or video to ingest. Warning: must be publicly accessible. |

PATCH {PopConfig.base_url}/pipelines/{PopConfig.pipeline_id}/source (URL list source)

Path Parameters

Name | Type | Description |
---|---|---|
SERVER_URL* | String | The base URL received from the GET config request above |
PIPELINE_ID* | String | The pipeline ID of the active Pop |

Query Parameters

Name | Type | Description |
---|---|---|
mode* | String | How to handle concurrent sources: reject, preempt, or queue (see above). |
processing* | String | Must be set to sync; list sources are processed synchronously. |

Request Body

Name | Type | Description |
---|---|---|
sourceType | String | Must be "LIST". |
sources | List of URLs | The list of image URLs to process. |

PATCH {PopConfig.base_url}/pipelines/{PopConfig.pipeline_id}/source (live WHIP source)

Path Parameters

Name | Type | Description |
---|---|---|
SERVER_URL* | String | The base URL received from the GET config request above |
PIPELINE_ID* | String | The pipeline ID of the active Pop |

Query Parameters

Name | Type | Description |
---|---|---|
mode* | String | How to handle concurrent sources: reject, preempt, or queue (see above). |
processing* | String | Whether to wait for the result (sync) or fire and forget (async). |

Request Body

Name | Type | Description |
---|---|---|
sourceType | String | Must be "LIVE_INGRESS". |
liveIngressId | String | The ingress ID used during the WHIP communication (see the WHIP/WHEP protocol documentation). |
sourceId | String | The source ID used during the WHIP communication (see the WHIP/WHEP protocol documentation). |