EyePop.ai has introduced a dedicated npm package for JavaScript developers, streamlining the process of integrating your Pop on the web. This package ensures an efficient and user-friendly setup experience.
Refer to the following page to quickly get the EyePopSDK JavaScript demo repository up and running on your local machine.
SDK Usage
Here's a barebones example of using the SDK to upload an image and display it with object identification overlays.
<form>
</form>
<div style="height: 600px; width:600px;">
</div>
<script src="https://cdn.jsdelivr.net/npm/@eyepop.ai/javascript-sdk"></script>
<script>
var config = {};
// replace with your endpoint UUID
const pop_uuid = '<POP_UUID>';
// leave this empty to launch a 'login' popup, or enter a temporary token
const token = '';
// first, fetch the Pop info
EyePopSDK.EyePopAPI.FetchPopConfig(pop_uuid, token)
.then((response) => {
config = response;
config.input = {
"name": "file_upload"
};
console.log("EyePopSDK config: ", config);
// then start the Pop
EyePopSDK.EyePopSDK.init(config);
});
</script>
The initialization function required for the SDK. This starts all media streaming and uploading, as well as drawing on top of the provided canvas. The config object is created with a call to EyePopAPI.FetchPopConfig(pop_uuid, token) and extended with the following options:
Parameters:
config.input
This is where a Pop's media input type and source are set.
// An object with the following properties:
config.input = {}
// A string name of the input media, options are: "webcam_on_page", "screen", "webcam_off_site", "url", "file_upload"
config.input.name = ""
// A string URL of the input media, ex: "https://www.example.com/video.mp4"
config.input.url = ""
config.draw
The SDK supports an array of drawing methods, organized alphabetically by type. To use them, change the draw parameter on the config object used to initialize the EyePopSDK.
Overwrite the config.draw array to enable or disable drawing passes. Pass an empty array to disable all drawing.
User configurable elements of the config are as follows:
// An array of objects, each specifying a different drawing pass. You are free to enable as many drawing passes as required.
config.draw = [ {} ]
// A string, options include: "box", "pose", "hand", "face", "posefollow", "clip", "custom"
config.draw[ i ].type = ""
// An array of strings, options include: "*", "people", etc.
config.draw[ i ].targets = [ "" ]
// An array of strings, options include all pose labels, such as "right eye", "left eye", etc
config.draw[ i ].anchors = [ "" ]
// A string path to an image to be anchored to our Anchor points, ex: "./fun/sunglasses3.png?raw=true",
config.draw[ i ].image = ""
// A number to scale the anchored image by, ex: 2.6
config.draw[ i ].scale = 2.6
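Putting the parameters above together, a sketch of a two-pass draw configuration — bounding boxes around every label, plus an image anchored to a person's eyes (the image path is illustrative):

```javascript
var config = {};

config.draw = [
    // Pass 1: draw bounding boxes around all detected labels
    { type: "box", targets: ["*"] },
    // Pass 2: anchor a scaled image to each detected person's eyes
    {
        type: "posefollow",
        targets: ["person"],
        anchors: ["right eye", "left eye"],
        image: "./fun/sunglasses3.png?raw=true",
        scale: 2.6
    }
];

// To disable all drawing instead, pass an empty array:
// config.draw = [];
```

Each object in the array is an independent pass, so passes can be added or removed without affecting the others.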
Draw Types
box:
Targets: Specific objects or parts of objects you wish to encircle within a box. E.g., "person.face". A value of "*" means all labels.
box with Tracking:
Targets: A list of objects to enclose within a box. E.g., "apple", "backpack", and more. A value of "*" means all labels.
Tracking:
Tracking allows you to receive a unique ID for each person in a scene. This ID is held stable while a person remains on screen. Tracking has to be turned on for your account, so please ask an EyePop.ai team member to enable it for you.
Labels: An array of assigned labels for the tracking augmentation.
pose:
Targets: In the current version, the pose type can only target "person", highlighting body key points.
posefollow:
Targets: Objects like "person" that you aim to track. A value of "*" means all labels.
Anchors: Specific points within the target object where the augmentation is anchored, such as "right wrist".
Image: The image placed at the anchor point, like "./images/sunglasses3.png".
OnPrediction()
The callback method fired when a new prediction message is received.
Example:
EyePopSDK.EyePopAPI.instance.OnPrediction = function ()
{
console.log("Finished drawing frame");
}
OnPredictionTarget()
The callback method fired when a target is found in the prediction data.
Example:
EyePopSDK.EyePopAPI.instance.OnPredictionTarget = function ()
{
console.log("Target found!");
}
OnPredictionEnd()
The callback method fired when the analysis is completed.
Example:
EyePopSDK.EyePopAPI.instance.OnPredictionEnd = function ()
{
console.log("Analyzed 100%");
}
OnPredictionEndBase()
The callback method fired when the Pop has closed for any reason.
Example:
EyePopSDK.EyePopAPI.instance.OnPredictionEndBase = function ()
{
console.error("Pop socket closed!");
}
Rules
The Rules class has been specifically crafted for processing the outputs of EyePop.ai's computer vision system. It provides functionalities to construct semantic rules, helping in the identification and extraction of specific features and attributes from photos and videos.
FindObject(label, objects)
Purpose: Filters the provided list of objects based on the specified class label.
Parameters:
‘label’: String representing the class label of the desired object.
‘objects’: Array of objects.
Returns:
Array of objects that match the specified class label.
Example:
var rulesState = [];
var rules = [{
description: "Person detected!",
condition: (resultSet) => {
const resultArray = Rules.FindObject("person", resultSet.objects);
return resultArray.length > 0;
}
}];
var predictionData = GetPrediction(image);
var rulesResult = Rules.Check(predictionData, rules, rulesState);
console.log(rulesResult);
Biggest(label, objects)
Purpose: Identifies the object with the largest bounding box area for a specific class label.
Parameters:
‘label’: String representing the class label of the object to compare.
‘objects’: Array of objects.
Returns:
Single object with the largest area.
Example:
var rulesState = [];
var rules = [{
description: "Largest person found!",
condition: (resultSet) => {
return Rules.Biggest("person", resultSet.objects);
}
}];
var predictionData = GetPrediction(image);
var rulesResult = Rules.Check(predictionData, rules, rulesState);
console.log(rulesResult);
Area(object, source_width, source_height)
Purpose: Computes the relative area of an object to the source's dimensions.
Parameters:
‘object’: Object whose area needs to be determined.
‘source_width’: Width of the source.
‘source_height’: Height of the source.
Returns:
Relative area (fraction) of the object with respect to the source dimensions
Example:
var rulesState = [];
var rules = [{
description: "Relative area of the largest person.",
condition: (resultSet) => {
// Area() takes an object, so select one first
const person = Rules.Biggest("person", resultSet.objects);
return Rules.Area(person, source_width, source_height);
}
}];
var predictionData = GetPrediction(image);
var rulesResult = Rules.Check(predictionData, rules, rulesState);
console.log(rulesResult);
Between(x, min, max)
Purpose: Checks if a given value lies between a specified range.
Parameters:
‘x’: The value to be checked.
‘min’: Minimum value of the range.
‘max’: Maximum value of the range.
Returns:
Boolean value indicating whether x lies between min and max.
Example:
var rulesState = [];
var rules = [{
description: "Is x between min_x and mix_x?",
condition: (resultSet) => {
return Rules.Between(x, min_x, max_x);
}
}];
var predictionData = GetPrediction(image);
var rulesResult = Rules.Check(predictionData, rules, rulesState);
console.log(rulesResult);
Amount(label, objects)
Purpose: Counts the number of objects that match a specific class label.
Parameters:
‘label’: String representing the class label.
‘objects’: Array of objects.
Returns:
Number of objects that match the specified class label.
Example:
var rulesState = [];
var rules = [{
description: "Person detected!",
condition: (resultSet) => {
return Rules.Amount("person", resultSet.objects) > 0;
}
}];
var predictionData = GetPrediction(image);
var rulesResult = Rules.Check(predictionData, rules, rulesState);
console.log(rulesResult);
PosePoint(label, personObject)
Purpose: Determines if a person object contains a specific pose point label.