🌐 API

Setting the input image or video directly.

This document covers the endpoints available for a Pop. Note that we'll first need to fetch the Pop's worker server address and pipeline ID before making any other API calls.


Endpoints

Get Pop Config

GET https://api.eyepop.ai/api/v1/user/pops/{POP_UUID}/config

Responds with a URL, Pipeline ID, Pop Type, and more. See the example response below for further details.

Path Parameters

  • POP_UUID* (String): The UUID that identifies your Pop.

Headers

  • Authorization* (Bearer {token}): Your authentication token.

Example response:

{
    "name": "MyFirstPop",
    "pop_type": "api",
    "base_url": "https://worker-222-example-222-222.cloud.eyepop.ai:2222",
    "pipeline_id": "example-efadsads-example-bd0adsads8-example",
    "input": {
        "name": "webcam_on_page",
        "params": {}
    },
    "draw": [
        {
            "type": "box",
            "targets": [
                "*"
            ]
        },
        {
            "type": "pose",
            "targets": [
                "person"
            ]
        }
    ]
}
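
As a minimal sketch, this call can be made with the fetch API in TypeScript. The PopConfig interface below is an assumption that covers only the fields used in the rest of this document, and error handling is kept to a bare minimum:

// Sketch: fetch a Pop's config to obtain the worker base_url and pipeline_id.
// Assumes Node 18+ or a browser (global fetch) and a valid bearer token.
interface PopConfig {
  name: string;
  pop_type: string;
  base_url: string;    // worker server address for all pipeline calls below
  pipeline_id: string; // id of the live pipeline behind this Pop
}

async function getPopConfig(popUuid: string, token: string): Promise<PopConfig> {
  const res = await fetch(
    `https://api.eyepop.ai/api/v1/user/pops/${popUuid}/config`,
    { headers: { Authorization: `Bearer ${token}` } }
  );
  if (!res.ok) throw new Error(`Config request failed: ${res.status}`);
  return (await res.json()) as PopConfig;
}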

Image / Video Upload

POST {PopConfig.base_url}/pipelines/{PopConfig.pipeline_id}/source

Changes the live pipeline to use the uploaded image or video file as a new source.

Path Parameters

  • SERVER_URL* (String): The worker server URL (PopConfig.base_url) received from the Get Pop Config response.

  • PIPELINE_ID* (String): The pipeline ID (PopConfig.pipeline_id) of the active Pop.

Query Parameters

  • mode* (String): How to handle concurrent sources. Available values: 'reject', 'preempt', and 'queue'. 'reject' means the pipeline continues operating on its previous task and rejects the new source; 'preempt' means this task takes over whatever the pipeline was previously working on; 'queue' means the task is added to the end of the Pop's task queue.

  • processing* (String): The processing mode, either to wait for the result or to fire and forget. Available values: 'sync' (synchronous) and 'async' (asynchronous).

Headers

  • Authorization* (Bearer {token}): Your authentication token.

Request Body

  • file* (Binary): The new source as an image or video file, provided in the multipart form data.
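
A hedged sketch of the upload call, reusing the PopConfig from the sketch above. The mode and processing values shown are arbitrary example choices, and the assumption that a 'sync' request returns the results JSON in the response body follows from the parameter description above:

// Sketch: upload an image or video file as the pipeline's new source.
// Assumes Node 18+ or a browser (global fetch/FormData) and the PopConfig
// interface from the previous sketch.
async function uploadSource(
  config: PopConfig,
  token: string,
  file: Blob
): Promise<unknown> {
  const url =
    `${config.base_url}/pipelines/${config.pipeline_id}/source` +
    `?mode=queue&processing=sync`; // example choices; see parameter docs above
  const form = new FormData();
  form.append('file', file); // the required 'file' form field
  const res = await fetch(url, {
    method: 'POST',
    headers: { Authorization: `Bearer ${token}` }, // fetch sets the multipart boundary
    body: form,
  });
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
  return res.json(); // with processing=sync, the inference results
}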

Image / Video URL

PATCH {PopConfig.base_url}/pipelines/{PopConfig.pipeline_id}/source

Changes the live pipeline to use a new source specified by a URL.

Path Parameters

  • SERVER_URL* (String): The worker server URL (PopConfig.base_url) received from the Get Pop Config response.

  • PIPELINE_ID* (String): The pipeline ID (PopConfig.pipeline_id) of the active Pop.

Query Parameters

  • mode* (String): How to handle concurrent sources. Available values: 'reject', 'preempt', and 'queue'.

  • processing* (String): The processing mode, either to wait for the result or to fire and forget. Available values: 'sync' (synchronous) and 'async' (asynchronous).

Request Body

  • sourceType (String): "URL"

  • url (String): The URL of the image or video to use as the new source. Warning: must be publicly accessible.
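
The same pattern, sketched for the URL variant. The sourceType and url fields follow the request body description above, while the query parameter values are again arbitrary examples:

// Sketch: point the live pipeline at a publicly accessible image/video URL.
async function setSourceUrl(
  config: PopConfig,
  token: string,
  sourceUrl: string
): Promise<unknown> {
  const endpoint =
    `${config.base_url}/pipelines/${config.pipeline_id}/source` +
    `?mode=preempt&processing=sync`;
  const res = await fetch(endpoint, {
    method: 'PATCH',
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ sourceType: 'URL', url: sourceUrl }),
  });
  if (!res.ok) throw new Error(`PATCH source failed: ${res.status}`);
  return res.json();
}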

Image / Video URL List

PATCH {PopConfig.base_url}/pipelines/{PopConfig.pipeline_id}/source

Changes the live pipeline to use a new source specified by a list of image URLs synchronously.

Path Parameters

  • SERVER_URL* (String): The worker server URL (PopConfig.base_url) received from the Get Pop Config response.

  • PIPELINE_ID* (String): The pipeline ID (PopConfig.pipeline_id) of the active Pop.

Query Parameters

  • mode* (String): How to handle concurrent sources. Available values: 'reject', 'preempt', and 'queue'.

  • processing* (String): The processing mode; must be set to 'sync' for synchronous processing of URL lists.

Request Body

  • sourceType (String): "LIST"

  • sources (List of URLs): The image URLs to process.
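
A sketch of the list variant. The body treats sources as a plain array of URL strings, which is an assumption based on the "List of urls" description above; note that processing is pinned to 'sync' here:

// Sketch: submit a list of image URLs as the new source (sync only).
async function setSourceList(
  config: PopConfig,
  token: string,
  urls: string[]
): Promise<unknown> {
  const endpoint =
    `${config.base_url}/pipelines/${config.pipeline_id}/source` +
    `?mode=queue&processing=sync`; // 'sync' is required for URL lists
  const res = await fetch(endpoint, {
    method: 'PATCH',
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ sourceType: 'LIST', sources: urls }), // assumed shape
  });
  if (!res.ok) throw new Error(`PATCH source list failed: ${res.status}`);
  return res.json();
}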

Streaming Video (WHIP)

PATCH {PopConfig.base_url}/pipelines/{PopConfig.pipeline_id}/source

Changes the live pipeline to use a new streaming video source via the WHIP protocol.

Path Parameters

  • SERVER_URL* (String): The worker server URL (PopConfig.base_url) received from the Get Pop Config response.

  • PIPELINE_ID* (String): The pipeline ID (PopConfig.pipeline_id) of the active Pop.

Query Parameters

  • mode* (String): How to handle concurrent sources. Available values: 'reject', 'preempt', and 'queue'.

  • processing* (String): The processing mode, either to wait for the result or to fire and forget. Available values: 'sync' (synchronous) and 'async' (asynchronous).

Request Body

  • sourceType (String): "LIVE_INGRESS"

  • liveIngressId (String): The ingress ID used during the WHIP communication. See the WHIP/WHEP protocol documentation for details.

  • sourceId (String): The source ID used during the WHIP communication.
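
A sketch for the live case. Obtaining liveIngressId and sourceId from a WHIP session is outside this document's scope, so they are passed in as opaque strings; 'async' is used here on the assumption that you do not want to block on a live stream:

// Sketch: switch the live pipeline to a WHIP ingress stream.
async function setLiveSource(
  config: PopConfig,
  token: string,
  liveIngressId: string,
  sourceId: string
): Promise<void> {
  const endpoint =
    `${config.base_url}/pipelines/${config.pipeline_id}/source` +
    `?mode=preempt&processing=async`; // fire and forget for a live stream
  const res = await fetch(endpoint, {
    method: 'PATCH',
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ sourceType: 'LIVE_INGRESS', liveIngressId, sourceId }),
  });
  if (!res.ok) throw new Error(`PATCH live source failed: ${res.status}`);
}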


Results Details

EyePop.ai's computer vision API returns detailed results describing the contents of the analyzed images. These results are structured as JSON.

Here is an example excerpt from a video input source:

response.body = [
    {
        "source_width": 1080,
        "source_height": 1920,
        "source_id": "UuidExample-213132-123231-32423xsdfa234",
        "system_timestamp": 1706293665659876000,
        "seconds": 0,
        "timestamp": 0,
        "count": 1,
        "objects": [
            {
                "classId": 0,
                "classLabel": "person",
                "confidence": 0.95,
                "x": 595.079,
                "y": 933.953,
                "width": 353.567,
                "height": 449.213,
                "id": 1459523,
                "inferId": 1,
                "orientation": 0,
                "objects": [
                    {
                        "classId": 0,
                        "classLabel": "pose",
                        "confidence": 0.524,
                        "width": 404,
                        "height": 472,
                        "id": 1459525,
                        "inferId": 3,
                        "orientation": -1.447,
                        "x": 30,
                        "y": 67,
                        "keyPoints": [
                            {
                                "type": "body-mediapipe-33",
                                "points": [
                                    {
                                        "classId": 0,
                                        "classLabel": "nose",
                                        "confidence": 0.85,
                                        "id": 1459527,
                                        "inferId": 4,
                                        "x": 341.168,
                                        "y": 208.944,
                                        "z": -311.207
                                    },
                                    {
                                        "classId": 1,
                                        "classLabel": "left eye (inner)",
                                        "confidence": 0.839,
                                        "id": 1459528,
                                        "inferId": 4,
                                        "x": 341.962,
                                        "y": 187.347,
                                        "z": -292.833
                                    }, ...
                                ]
                            }
                        ]
                    }
                ]
            }
        ]
    }
];

Source Image Dimensions:

The results include the dimensions of the source image which was analyzed:

  • result.source_width: The width of the source image.

  • result.source_height: The height of the source image.

Objects Detected in Image:

The API identifies various objects within the image and provides details about each of these objects:

  • result."objects"’: An array containing information about each object detected.

  • result.objects[0]."confidence"’: The confidence score associated with the detected object. This value indicates the certainty of the detection.

  • result.objects[0]."classId"’: An integer identifier for the class of the detected object.

  • result.objects[0]."classLabel"’: The label of the class, e.g., "person".

  • result.objects[0]."x", result.objects[0]."y"’: The top-left coordinates of the detected object within the image.

  • result.objects[0]."width", result.objects[0]."height"’: The width and height dimensions of the detected object.

Objects Detected Within Objects:

For some primary detected objects, the API might identify secondary objects within them. For example, within a "person" object, the API could detect a "face" object.

  • result.objects[0]."objects"’: An array containing secondary objects detected within a primary object.

  • result.objects[0]."objects"[0]."classLabel"’: The label of the class of the secondary object, e.g., "face".

Classifications on Objects within Objects:

For some secondary objects, the API performs classifications to identify various attributes:

  • result.objects[0]."objects"[0]."classes"’: Contains the classifications for the secondary object.

    • "inferId": 4: Refers to Emotion classification.

    • "inferId": 5: Refers to Age classification.

    • "inferId": 6: Refers to Gender classification.

    • "inferId": 7: Refers to Race classification

Body Keypoints:

For some primary objects like "person", the API identifies body keypoints, indicating specific landmarks such as the eyes, nose, wrists, etc.

  • result.objects[0]."keyPoints"."points"’: An array containing information about each keypoint detected.

  • result.objects[0]."keyPoints"."points"[0]."confidence"’: Confidence score for the detected keypoint.

  • result.objects[0]."keyPoints"."points"[0]."classLabel"’: The label of the keypoint, e.g., "nose".

  • result.objects[0]."keyPoints"."points"[0]."x"’, ‘result.objects[0]."keyPoints"."points"[0]."y"’: The coordinates of the detected keypoint within the image.
