
🎨JavaScript SDK

Streamline working with your Pop on the web

EyePop.ai has introduced a dedicated npm package for JavaScript developers, streamlining the process of integrating your Pop on the web. This package ensures an efficient and user-friendly setup experience.

https://www.npmjs.com/package/@eyepop.ai/javascript-sdk

npm:

npm install @eyepop.ai/javascript-sdk

CDN:

<script src="https://cdn.jsdelivr.net/npm/@eyepop.ai/javascript-sdk"></script>


Quick Start

Refer to the following page to quickly get the EyePopSDK JavaScript demo repository up and running on your local machine.

👾JavaScript SDK Demos

SDK Usage

Here's a barebones example of using the SDK to upload an image and display it with object detection overlays.


<!-- Placeholder form for the SDK's "file_upload" input -->
<form>
</form>

<!-- Container in which the uploaded media and its overlays are displayed -->
<div style="height: 600px; width: 600px;"></div>

<script src="https://cdn.jsdelivr.net/npm/@eyepop.ai/javascript-sdk"></script>

<script>
    var config = {};

    // replace with your endpoint UUID
    const pop_endpoint = '<POP_UUID>';

    // leave this empty to launch a 'login' popup, or enter a temporary token
    const token = '';

    // first, fetch the Pop info
    EyePopSDK.EyePopAPI.FetchPopConfig(pop_endpoint, token)
        .then((response) => {
            config = response;

            config.input = {
                "name": "file_upload"
            };

            console.log("EyePopSDK config: ", config);

            // then start the Pop
            EyePopSDK.EyePopSDK.init(config);
        });
</script>

API

EyePopSDK

instance

The static instance of the SDK.

init(config)

The initialization function required by the SDK. It starts all media streaming and uploading, as well as drawing on top of the provided canvas. The config object is created with a call to EyePopAPI.FetchPopConfig(pop_endpoint, token) and extended with the following options:

  • Parameters:

config.input

This is where a Pop's media input type and source are set.


// An object with the following properties:

config.input = {}


    // A string name of the input media; options are: "webcam_on_page", "screen", "webcam_off_site", "url", "file_upload"

    config.input.name = ""


    // A string URL of the input media, ex: "https://www.example.com/video.mp4"

    config.input.url = ""
 
config.draw

The SDK supports an array of drawing methods, organized alphabetically by type. To use them, set the draw parameter on the config object used to initialize the EyePopSDK.

Overwrite the config.draw array to enable or disable drawing passes. Pass an empty array to disable all drawing.

User configurable elements of the config are as follows:


// An array of objects, each specifying a different drawing pass. You are free to enable as many drawing passes as required.
config.draw = [ {} ]


    // A string, options include: "box", "pose", "hand", "face", "posefollow", "clip", "custom"

    config.draw[ i ].type = ""


    // An array of strings, options include "*", "person", and other class labels

    config.draw[ i ].targets = [ "" ]


    // An array of strings, options include all pose labels, such as "right eye", "left eye", etc.

    config.draw[ i ].anchors = [ "" ]


    // A string path to an image to be anchored to our anchor points, ex: "./fun/sunglasses3.png?raw=true"

    config.draw[ i ].image = ""


    // A number to scale the anchored image by, ex: 2.6

    config.draw[ i ].scale = 2.6
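
For example, a single pass that draws bounding boxes around every detected object:


config.draw = [
    { "type": "box", "targets": [ "*" ] }
];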

Draw Types

box:

  • Targets: Specific objects or parts of objects you wish to enclose within a box. E.g., "person.face". A value of "*" means all labels.

box with Tracking:

  • Targets: A list of objects to enclose within a box. E.g., "apple", "backpack", and more. A value of "*" means all labels.

  • Tracking:

    • Tracking allows you to receive a unique ID for each person in a scene. This ID is held stable while a person remains on screen. Tracking must be enabled for your account, so please ask an EyePop.ai team member to turn it on for you.

  • Labels: An array of assigned labels for the tracking augmentation, as sketched below.
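
A minimal sketch of a tracking-enabled box pass. The "labels" key is an assumption based on the Labels option above; confirm the exact key with the EyePop.ai team when tracking is enabled for your account:


config.draw = [
    {
        "type": "box",
        "targets": [ "person" ],
        "labels": [ "id" ] // assumed key and value for the tracking labels
    }
];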

pose:

  • Targets: Currently, the pose type can only target "person", highlighting body key points.
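
For example:


config.draw = [
    { "type": "pose", "targets": [ "person" ] }
];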

posefollow:

  • Targets: Objects like "person" that you aim to track. A value of "*" means all labels.

  • Anchors: Specific points within the target object where the augmentation is anchored, such as "right wrist".

  • Image: The image placed at the anchor point, like "./images/sunglasses3.png".

  • Scale: Augmentation image's scaling factor, e.g., 1.
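
Putting these options together (the image path is illustrative):


config.draw = [
    {
        "type": "posefollow",
        "targets": [ "person" ],
        "anchors": [ "right wrist" ],
        "image": "./images/sunglasses3.png",
        "scale": 1
    }
];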

posefollow with multiple Anchors:

  • Targets: Objects to track, such as "person". A value of "*" means all labels.

  • Anchors: Multiple anchor points within the target object, like "right eye", "left eye".

  • Image: Image to be superimposed on the anchor points, such as "./images/scream.png".

  • Scale: The image's scaling factor, e.g., 3.5.

  • Example:

var config = {};
EyePopSDK.EyePopAPI.FetchPopConfig(pop_endpoint, token)
    .then((response) =>
    {
        config = response;

        // First we set our input type
        config.input = {
            "name": "url", // "webcam_on_page", "screen", "webcam_off_site", "url", "file_upload"
            "url": "https://www.example.com/video.mp4"
        };

        // Then we enable the following visualizations, including a
        // posefollow pass anchored to both eyes
        config.draw = [
            { "type": "box",  "targets": [ "*" ] },
            { "type": "pose", "targets": [ "*" ] },
            { "type": "hand", "targets": [ "*" ] },
            { "type": "face", "targets": [ "*" ] },
            {
                "type": "posefollow",
                "targets": [ "person" ],
                "anchors": [ "right eye", "left eye" ],
                "image": "./images/scream.png",
                "scale": 3.5
            }
        ];

        EyePopSDK.EyePopSDK.init(config);
    }
);

EyePopAPI

lastmsg

The last message received from the Pop WebSocket. Useful for synchronizing video and the drawing loop.

  • Example:

EyePopSDK.EyePopAPI.instance.OnDrawFrame = function ()
{
    // cached_data and findClosestIndex are user-defined: an array of previously
    // received prediction messages, and a helper that finds the entry closest
    // to the video's current playback time
    var closestIndex = findClosestIndex(cached_data, video.currentTime);
    EyePopSDK.EyePopAPI.instance.lastmsg = cached_data[ closestIndex ];
}

OnDrawFrame()

The callback method fired at the beginning of the draw loop.

  • Example:

EyePopSDK.EyePopAPI.instance.OnDrawFrame = function ()
{
    console.log("Drawing frame");      
}

OnDrawFrameEnd(jsonData)

The callback method fired at the end of the draw loop.

  • Example:

EyePopSDK.EyePopAPI.instance.OnDrawFrameEnd = function (jsonData)
{
    console.log("Finished drawing frame: ", jsonData);      
}

OnPrediction(jsonData)

The callback method fired when a new prediction message is received.

  • Example:

EyePopSDK.EyePopAPI.instance.OnPrediction = function (jsonData)
{
    console.log("New prediction received: ", jsonData);
}

OnPredictionTarget()

The callback method fired when a target is found in the prediction data.

  • Example:

EyePopSDK.EyePopAPI.instance.OnPredictionTarget = function ()
{
    console.log("Target found!");      
}

OnPredictionEnd()

The callback method fired when the analysis is completed.

  • Example:

EyePopSDK.EyePopAPI.instance.OnPredictionEnd = function ()
{
    console.log("Analyzed 100%");      
}

OnPredictionEndBase()

The callback method fired when the Pop has closed for any reason.

  • Example:

EyePopSDK.EyePopAPI.instance.OnPredictionEndBase = function ()
{
    console.error("Pop socket closed!");      
}

Rules

The Rules class has been specifically crafted for processing the outputs of EyePop.ai's computer vision system. It provides functions for constructing semantic rules, helping to identify and extract specific features and attributes from photos and videos.
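
Each rule pairs a human-readable description with a condition callback that receives the prediction result set; Rules.Check evaluates every rule against that data. A minimal sketch, assuming a GetPrediction helper (used throughout the examples below) that returns a Pop's prediction message:


// tracks rule evaluation state across calls to Rules.Check
var rulesState = [];

var rules = [{
  description: "At least one person detected!",
  condition: (resultSet) => {
    // resultSet is the prediction data passed to Rules.Check
    return Rules.FindObject("person", resultSet.objects).length > 0;
  }
}];

// GetPrediction is an assumed helper returning prediction data for an image
var predictionData = GetPrediction(image);
var rulesResult = Rules.Check(predictionData, rules, rulesState);
console.log(rulesResult);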

FindObject(label, objects)

Purpose: Filters the provided list of objects based on the specified class label.

  • Parameters:

    • label: String representing the class label of the desired object.

    • objects: Array of objects.

  • Returns:

    • Array of objects that match the specified class label.

  • Example:

var rulesState = [];
var rules = [{
  description: "Person detected!",
  condition: (resultSet) => {
    const resultArray = Rules.FindObject("person", resultSet.objects);
    return resultArray.length > 0;
  }
}];
var predictionData = GetPrediction(image);
var rulesResult = Rules.Check(predictionData, rules, rulesState);
console.log(rulesResult);

Biggest(label, objects)

Purpose: Identifies the object with the largest bounding box area for a specific class label.

  • Parameters:

    • label: String representing the class label of the object to compare.

    • objects: Array of objects.

  • Returns:

    • Single object with the largest area.

  • Example:

var rulesState = [];
var rules = [{
  description: "Largest person found!",
  condition: (resultSet) => {
    return Rules.Biggest("person", resultSet.objects);
  }
}];
var predictionData = GetPrediction(image);
var rulesResult = Rules.Check(predictionData, rules, rulesState);
console.log(rulesResult);

Area(object, source_width, source_height)

Purpose: Computes the relative area of an object to the source's dimensions.

  • Parameters:

    • object: Object whose area needs to be determined.

    • source_width: Width of the source.

    • source_height: Height of the source.

  • Returns:

    • Relative area (fraction) of the object with respect to the source dimensions.

  • Example:

var rulesState = [];
var rules = [{
  description: "Person found in normalized coordinates.",
  condition: (resultSet) => {
    return Rules.Area("person", source_width, source_height);
  }
}];
var predictionData = GetPrediction(image);
var rulesResult = Rules.Check(predictionData, rules, rulesState);
console.log(rulesResult);

Between(x, min, max)

Purpose: Checks if a given value lies within a specified range.

  • Parameters:

    • x: The value to be checked.

    • min: Minimum value of the range.

    • max: Maximum value of the range.

  • Returns:

    • Boolean value indicating whether x lies between min and max.

  • Example:

var rulesState = [];
var rules = [{
  description: "Is x between min_x and mix_x?",
  condition: (resultSet) => {
    return Rules.Between(x, min_x, max_x);
  }
}];
var predictionData = GetPrediction(image);
var rulesResult = Rules.Check(predictionData, rules, rulesState);
console.log(rulesResult);

Amount(label, objects)

Purpose: Counts the number of objects that match a specific class label.

  • Parameters:

    • label: String representing the class label.

    • objects: Array of objects.

  • Returns:

    • Number of objects that match the specified class label.

  • Example:

var rulesState = [];
var rules = [{
  description: "Person detected!",
  condition: (resultSet) => {
    return Rules.Amount("person", resultSet.objects) > 0;
  }
}];
var predictionData = GetPrediction(image);
var rulesResult = Rules.Check(predictionData, rules, rulesState);
console.log(rulesResult);

PosePoint(label, personObject)

Purpose: Determines if a person object contains a specific pose point label.

  • Parameters:

    • label: Pose point label.

    • personObject: Object containing pose information.

  • Returns:

    • Boolean indicating presence of the pose point label.

  • Example:

var rulesState = [];
var rules = [{
  description: "Person detected!",
  condition: (resultSet) => {
    return Rules.PosePoint("person", resultSet.objects);
  }
}];
var predictionData = GetPrediction(image);
var rulesResult = Rules.Check(predictionData, rules, rulesState);
console.log(rulesResult);

Emotion(emotionLabel, personObject)

Purpose: Checks the inferred emotion on a person's face.

  • Parameters:

    • emotionLabel: The desired emotion label.

    • personObject: Object containing facial information.

  • Returns:

    • Boolean indicating the presence of the specified emotion.

  • Example:

var rulesState = [];
var rules = [{
  description: "Person detected!",
  condition: (person) => {
    return Rules.Emotion("happy", person);
  }
}];
var predictionData = GetPrediction(image);
var rulesResult = Rules.Check(predictionData, rules, rulesState);
console.log(rulesResult);

Gender(genderLabel, personObject)

Purpose: Checks the inferred gender label of a person based on the identified facial features.

  • Parameters:

    • genderLabel: Gender label to check.

    • personObject: Person object containing facial information.

  • Returns:

    • Boolean indicating whether the person's inferred gender matches the specified label.

  • Example:

var rulesState = [];
var rules = [{
  description: "Gender detected!",
  condition: (resultSet) => {
    const person = Rules.FindObject("person", resultSet.objects)[0];
    return Rules.Gender("female", person);
  }
}];
var predictionData = GetPrediction(image);
var rulesResult = Rules.Check(predictionData, rules, rulesState);
console.log(rulesResult);

Position(object1, direction, object2)

Purpose: Compares the relative positions of two objects based on the specified direction.

  • Parameters:

    • object1: First object.

    • direction: String representing the desired direction ("above", "below", "left", "right").

    • object2: Second object.

  • Returns:

    • Boolean indicating the relative position of object1 with respect to object2 based on the given direction.

  • Example:

var rulesState = [];
var person1 = null, person2 = null;

var rules = [{
  description: "Person 1 is to the left of person 2!",
  condition: (resultSet) => {
    return Rules.Position(person1, "left", person2);
  }
}];

var predictionData = GetPrediction(image);
// pick two people out of the prediction data
var people = Rules.FindObject("person", predictionData.objects);
person1 = people[0];
person2 = people[1];
var rulesResult = Rules.Check(predictionData, rules, rulesState);
console.log(rulesResult);

Check(resultSet, rules, rulesState)

Purpose: Evaluates a set of conditions on the provided resultSet and tracks the state of rule evaluations.

  • Parameters:

    • resultSet: Data to be evaluated.

    • rules: Array containing conditions to evaluate on the resultSet.

    • rulesState: Object to track the state of rule evaluations.

  • Returns:

    • Array of results for each rule evaluation.

  • Example:

var rulesState = [];
var rules = [ ... ];
var predictionData = GetPrediction(image);
var rulesResult = Rules.Check(predictionData, rules, rulesState);
console.log(rulesResult);
