API
Setting the input image or video directly.
This document covers the endpoints available for a Pop. Note that you must first fetch the Pop's worker server address (base URL) and pipeline ID before making any other API calls.
Endpoints
Get Pop Config
GET
https://api.eyepop.ai/api/v1/user/pops/{POP_UUID}/config
Responds with a URL, Pipeline ID, Pop Type, and more. See response description for further details.
Query Parameters
POP_UUID*
String
The UUID that identifies your Pop
Headers
Authorization*
Bearer {token}
Your authentication token
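As a sketch of this first step, the snippet below builds the Get Pop Config request with Python's standard library. The request is constructed but not sent; `POP_UUID` and the token are placeholders, and the exact response field names should be checked against the actual response (the `PopConfig.base_url` / `PopConfig.pipeline_id` placeholders used in the endpoints below suggest their shape).

```python
import json
from urllib.request import Request, urlopen

def config_request(pop_uuid: str, token: str) -> Request:
    """Build (but do not send) the GET request for a Pop's config."""
    return Request(
        f"https://api.eyepop.ai/api/v1/user/pops/{pop_uuid}/config",
        headers={"Authorization": f"Bearer {token}"},
        method="GET",
    )

req = config_request("00000000-0000-0000-0000-000000000000", "MY_TOKEN")
# To actually send it (requires a valid token):
#   config = json.load(urlopen(req))
#   base_url, pipeline_id = config["base_url"], config["pipeline_id"]
```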
Image / Video Upload
POST
{PopConfig.base_url}/pipelines/{PopConfig.pipeline_id}/source
Changes the live pipeline to use the uploaded image or video file as a new source.
Path Parameters
SERVER_URL*
The base URL returned by the Get Pop Config request
PIPELINE_ID*
The pipeline id of the active Pop
Headers
Authorization*
Bearer {token}
Your authentication token
Request Body
file*
The new source as an image or video file, provided as multipart form data.
mode*
String
The mode for handling concurrent sources. Available values: "reject", "preempt", and "queue". This is a required query parameter. "reject": the new task is rejected and the pipeline continues its previous tasks. "preempt": the new task takes over whatever the pipeline was previously working on. "queue": the task is added to the end of the Pop's task queue.
processing*
String
The processing mode, either to wait for the result or to fire and forget. Available values: "sync" (synchronous) and "async" (asynchronous). This is a required query parameter.
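Since `mode` and `processing` travel as query parameters while the file goes in the form data, a small helper can assemble the upload URL. This is a minimal sketch: the host and pipeline ID below are made-up placeholders, and the commented `requests` call at the end assumes the third-party `requests` library.

```python
from urllib.parse import urlencode

def source_url(base_url: str, pipeline_id: str, mode: str, processing: str) -> str:
    """Build the /source endpoint URL with the required query parameters."""
    if mode not in ("reject", "preempt", "queue"):
        raise ValueError(f"invalid mode: {mode}")
    if processing not in ("sync", "async"):
        raise ValueError(f"invalid processing: {processing}")
    qs = urlencode({"mode": mode, "processing": processing})
    return f"{base_url}/pipelines/{pipeline_id}/source?{qs}"

url = source_url("https://worker.eyepop.example", "pipeline-123", "queue", "sync")
# Then POST the image/video as multipart form data under the "file" field, e.g.:
#   requests.post(url, headers={"Authorization": f"Bearer {token}"},
#                 files={"file": open("photo.jpg", "rb")})
```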
Image / Video Url
PATCH
{PopConfig.base_url}/pipelines/{PopConfig.pipeline_id}/source
Changes the live pipeline to use a new source specified by a URL.
Path Parameters
SERVER_URL*
String
The base URL returned by the Get Pop Config request
PIPELINE_ID*
String
The pipeline id of the active Pop
mode*
String
The mode to handle concurrent sources. Available values: "reject", "preempt", and "queue". This is a required query parameter.
processing*
String
The processing mode, either to wait for the result or to fire and forget. Available values: "sync" (synchronous) and "async" (asynchronous). This is a required query parameter.
Request Body
sourceType
"URL"
url
String
The URL of the image or video to be processed. Warning: it must be publicly accessible.
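The request body for this PATCH can be sketched as a small JSON payload built from the two fields above. The example URL is a placeholder; the body must be sent with a `Content-Type: application/json` header and the Bearer token.

```python
import json

def url_source_body(image_url: str) -> bytes:
    """JSON body for PATCH .../source, switching the pipeline to a public URL."""
    return json.dumps({"sourceType": "URL", "url": image_url}).encode()

body = url_source_body("https://example.com/cat.jpg")
# PATCH this body to {PopConfig.base_url}/pipelines/{PopConfig.pipeline_id}/source
# with the mode and processing query parameters set.
```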
Image / Video Url List
PATCH
{PopConfig.base_url}/pipelines/{PopConfig.pipeline_id}/source
Changes the live pipeline to use a new source specified by a list of image URLs synchronously.
Path Parameters
SERVER_URL*
String
The base URL returned by the Get Pop Config request
PIPELINE_ID*
String
The pipeline id of the active Pop
mode*
String
The mode to handle concurrent sources. Available values: "reject", "preempt", and "queue". This is a required query parameter.
processing
String
The processing mode, set to "sync" for synchronous processing. This is a required query parameter.
Request Body
sourceType
"LIST"
sources
List of urls
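Analogously, the list variant's body can be sketched as below. The description above says only "List of urls", so this sketch assumes each entry in `sources` is a plain URL string; verify the exact element shape against the live API.

```python
import json

def list_source_body(urls: list[str]) -> bytes:
    """JSON body for PATCH .../source with a list of image URLs.
    Assumes plain URL strings in `sources`, per the 'List of urls' description."""
    return json.dumps({"sourceType": "LIST", "sources": urls}).encode()

body = list_source_body([
    "https://example.com/a.jpg",
    "https://example.com/b.jpg",
])
```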
Streaming Video (WHIP)
PATCH
{PopConfig.base_url}/pipelines/{PopConfig.pipeline_id}/source
Changes the live pipeline to use a new streaming video source via WHIP (WebRTC-HTTP Ingestion Protocol).
Path Parameters
SERVER_URL*
String
The base URL returned by the Get Pop Config request
PIPELINE_ID*
String
The pipeline id of the active Pop
mode*
String
The mode to handle concurrent sources. Available values: "reject", "preempt", and "queue". This is a required query parameter.
processing*
String
The processing mode, either to wait for the result or to fire and forget. Available values: "sync" (synchronous) and "async" (asynchronous). This is a required query parameter.
Request Body
sourceType
String
"LIVE_INGRESS"
liveIngressId
String
sourceId
String
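The WHIP request body mirrors the field list above. Note that the meaning and provenance of `liveIngressId` and `sourceId` are not documented here (they come from the live-ingress setup flow), so the values below are placeholders only.

```python
import json

def live_ingress_body(live_ingress_id: str, source_id: str) -> bytes:
    """JSON body for switching the pipeline to a WHIP live stream.
    liveIngressId/sourceId come from the live-ingress setup, not shown here."""
    return json.dumps({
        "sourceType": "LIVE_INGRESS",
        "liveIngressId": live_ingress_id,
        "sourceId": source_id,
    }).encode()

body = live_ingress_body("ingress-abc", "source-1")
```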
Results Details
EyePop.ai's computer vision API returns detailed results describing the contents of the analyzed images, structured as JSON.
The fields of a typical result from a video input source are described below:
Source Image Dimensions:
The results include the dimensions of the source image which was analyzed:
`result."source_width"`: The width of the source image.
`result."source_height"`: The height of the source image.
Objects Predicted in Image:
The API identifies various objects within the image and provides details about each of these objects:
`result."objects"`: An array containing information about each object predicted.
`result.objects[0]."confidence"`: The confidence score associated with the predicted object. This value indicates the certainty of the detection.
`result.objects[0]."category"`: The category of the class. Categories are used to group classes together logically. Examples include EyePop predefined categories like "common-objects", "vehicles" or "person". For custom trained models, these categories are assigned at Pop creation time.
`result.objects[0]."classLabel"`: The label of the class, e.g., "person".
`result.objects[0]."x"`, `result.objects[0]."y"`: The top-left coordinates of the predicted object within the image.
`result.objects[0]."width"`, `result.objects[0]."height"`: The width and height dimensions of the predicted object.
Objects Predicted Within Objects:
For some primary predicted objects, the API might identify secondary objects within them. For example, within a "person" object, the API could detect a "face" object.
`result.objects[0]."objects"`: An array containing secondary objects predicted within a primary object.
`result.objects[0]."objects"[0]."classLabel"`: The label of the class of the secondary object, e.g., "face".
Classifications on Objects within Objects:
For some secondary objects, the API performs classifications to identify various attributes:
`result.objects[0]."objects"[0]."classes"`: Contains the classifications for the predicted object.
`result.objects[0]."objects"[0]."classes"[0].confidence`: The confidence score associated with the predicted classification value.
`result.objects[0]."objects"[0]."classes"[0].category`: The category of the class. Categories are used to group classes together logically. Examples include EyePop predefined categories like "age-range", "gender" or "expression". For custom trained models, these categories are assigned at Pop creation time.
`result.objects[0]."objects"[0]."classes"[0].classLabel`: The label of the predicted classification value.
Body Keypoints:
For some primary objects like "person", the API identifies body keypoints, indicating specific landmarks such as the eyes, nose, wrists, etc.
`result.objects[0]."keyPoints"."points"`: An array containing information about each keypoint predicted.
`result.objects[0]."keyPoints"."points"[0]."confidence"`: Confidence score for the predicted keypoint.
`result.objects[0]."keyPoints"."points"[0]."classLabel"`: The label of the keypoint, e.g., "nose".
`result.objects[0]."keyPoints"."points"[0]."x"`, `result.objects[0]."keyPoints"."points"[0]."y"`: The coordinates of the predicted keypoint within the image.
Classifications of Image Content:
Custom Pop types can predict classifications over the entire image content. In this case the result includes a "classes" array similar to the classifications on objects.
`result."classes"`: Contains the classifications for the whole image.
`result."classes"[0].confidence`: The confidence score associated with the predicted classification value.
`result."classes"[0].category`: The category of the class. Categories are used to group classes together logically.
`result."classes"[0].classLabel`: The label of the predicted classification value.
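The schema above can be walked with a few nested loops. The sample result below is illustrative only: the values are made up, but the field names follow the descriptions in this section.

```python
# Illustrative sample result; values are invented, field names match the schema above.
result = {
    "source_width": 1280,
    "source_height": 720,
    "objects": [
        {
            "confidence": 0.94, "category": "common-objects", "classLabel": "person",
            "x": 212, "y": 87, "width": 340, "height": 560,
            "objects": [
                {
                    "classLabel": "face",
                    "classes": [
                        {"confidence": 0.81, "category": "expression", "classLabel": "smile"},
                    ],
                },
            ],
            "keyPoints": {
                "points": [
                    {"confidence": 0.97, "classLabel": "nose", "x": 330, "y": 140},
                ],
            },
        },
    ],
}

# Primary objects, nested secondary objects with their classes, and body keypoints:
for obj in result.get("objects", []):
    print(obj["classLabel"], obj["confidence"], obj["x"], obj["y"])
    for inner in obj.get("objects", []):
        for cls in inner.get("classes", []):
            print(" ", inner["classLabel"], cls["classLabel"], cls["confidence"])
    for pt in obj.get("keyPoints", {}).get("points", []):
        print(" ", pt["classLabel"], pt["x"], pt["y"])
```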