
☁️ React/Node SDK

Simplify integrating AI on your Node server, React Web app, or React Native App

Get Started

The EyePop.ai Node SDK provides convenient access to EyePop.ai's inference API from applications written in TypeScript or JavaScript.

Installation

React/Node

npm install --save @eyepop.ai/eyepop

Browser

<script src="https://cdn.jsdelivr.net/npm/@eyepop.ai/eyepop/dist/eyepop.min.js"></script>

Configuration

The EyePop SDK needs to be configured with the Pop Id and your Authentication Credentials. Credentials can be provided as:

  1. Api Key, server side only, because this key must be kept secret

  2. Session generated from an Api Key: a server-side generated session transported to the client over a trusted channel

  3. Current Browser Session, for developers running client code in the same browser session that is logged into their EyePop Dashboard.

Configuration via Environment (Server Side)

While you can provide the secret key as an explicit configuration option, we recommend using dotenv to add EYEPOP_SECRET_KEY="My API Key" to your .env file so that your API Key is not stored in source control. By default, the SDK will read the following environment variables:

  • EYEPOP_POP_ID: The Pop Id to use as an endpoint. You can copy and paste this string from your EyePop Dashboard in the Pop -> Settings section.

  • EYEPOP_SECRET_KEY: Your Secret Api Key. You can create Api Keys in the profile section of your EyePop dashboard.

  • EYEPOP_URL: (Optional) URL of the EyePop API service, if you want to use an endpoint other than the production default https://api.eyepop.ai

Authentication with Api Key

Configuration and authorization with explicit defaults:
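
A minimal sketch, assuming the endpoint factory accepts an auth object with a secretKey plus a popId option (the exact option names may differ between SDK versions):

import { EyePop } from '@eyepop.ai/eyepop'

// Pass explicitly the same values the SDK would otherwise read from the environment.
const endpoint = await EyePop.endpoint({
    auth: { secretKey: process.env.EYEPOP_SECRET_KEY }, // assumed option name
    popId: process.env.EYEPOP_POP_ID,                   // assumed option name
}).connect()
try {
    // ... submit jobs ...
} finally {
    await endpoint.disconnect()
}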

Equivalent, but shorter:
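
With the environment variables set, the same configuration can be left implicit:

import { EyePop } from '@eyepop.ai/eyepop'

// Reads EYEPOP_SECRET_KEY and EYEPOP_POP_ID (and optionally EYEPOP_URL) from the environment.
const endpoint = await EyePop.endpoint().connect()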

Authentication with session generated from Api Key

Server Side
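
A sketch of the server side, assuming the connected endpoint exposes a session() method that returns a short-lived session object (verify the exact name against your SDK version):

import { EyePop } from '@eyepop.ai/eyepop'

// Authenticates with EYEPOP_SECRET_KEY; the secret never leaves the server.
const endpoint = await EyePop.endpoint().connect()
const session = await endpoint.session() // assumed method name
// Hand `session` to the client over a trusted channel, e.g. an authenticated HTTPS route.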

Client Side
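
On the client, the received session is used instead of a secret key; the auth: { session } option shape is an assumption:

import { EyePop } from '@eyepop.ai/eyepop'

// `session` was fetched from your own server over a trusted channel.
const endpoint = await EyePop.endpoint({
    auth: { session: session }, // assumed option shape
}).connect()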

Authentication with Current Browser Session
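
A sketch for developers whose browser is already logged into the EyePop Dashboard; the auth: { oAuth2: true } option is an assumption and the Pop Id placeholder must be filled in:

import { EyePop } from '@eyepop.ai/eyepop'

const endpoint = await EyePop.endpoint({
    auth: { oAuth2: true }, // assumed option: reuse the current dashboard login
    popId: '<Pop Id>',      // copy from Pop -> Settings in your dashboard
}).connect()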

Usage Examples

Uploading and processing one single image
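
A minimal sketch of the workflow described in the steps below:

import { EyePop } from '@eyepop.ai/eyepop'

const endpoint = await EyePop.endpoint().connect()
try {
    // Upload a local file and iterate the streamed predictions.
    const results = await endpoint.process({ path: 'examples/example.jpg' })
    for await (const prediction of results) {
        console.log(prediction)
    }
} finally {
    await endpoint.disconnect()
}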

  1. EyePop.endpoint() returns a local endpoint object that will authenticate with the Api Key found in EYEPOP_SECRET_KEY and load the worker configuration for the Pop identified by EYEPOP_POP_ID.

  2. Call endpoint.connect() before any job is submitted and endpoint.disconnect() to release all resources.

  3. endpoint.process({path:'examples/example.jpg'}) initiates the upload of the local file to the worker service. The image will be queued and processed as soon as the worker becomes available. The result of endpoint.process() implements AsyncIterable<Prediction>, which can be iterated with 'for await' as shown in the example above. Predictions become available as the submitted file is processed by the worker, and results are efficiently streamed back to the calling client. If the uploaded file is a video (e.g. 'video/mp4') or an image container format (e.g. 'image/gif'), the client will receive one prediction per frame until the entire file has been processed.

  4. As an alternative to path, process() also accepts a readable stream with a mandatory mime type, as shown in the sketch below:
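
For example, streaming a local file with an explicit mime type, reusing the connected endpoint from the example above; the stream and mimeType option names are assumptions:

import { createReadStream } from 'node:fs'

const stream = createReadStream('examples/example.jpg')
const results = await endpoint.process({
    stream: stream,         // assumed option name
    mimeType: 'image/jpeg', // mandatory when passing a stream
})
for await (const prediction of results) {
    console.log(prediction)
}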

Visualizing Results

Visualization components are provided as separate modules. Please refer to the module's documentation for usage examples.

Asynchronous uploading and processing of images

The synchronous approach above, process() then iterate all results, works well for individual images or reasonably sized batches. For larger batches, or a continuous stream of images, don't await the results; instead, use then() on the returned promise.
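
A sketch of this asynchronous pattern, reusing the connected endpoint from above (the file paths are hypothetical):

const paths = ['examples/1.jpg', 'examples/2.jpg', 'examples/3.jpg'] // hypothetical batch

await Promise.all(paths.map(path =>
    endpoint.process({ path: path }).then(async (results) => {
        // Consume each file's predictions as soon as they arrive.
        for await (const prediction of results) {
            console.log(path, prediction)
        }
    })
))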

This results in the most efficient processing: uploads are performed in parallel (up to five HTTP connections per endpoint) and results are handled by your code as soon as they become available.

Loading images from URLs

As an alternative to uploading files, you can also submit a publicly accessible URL for processing (see the example after this list). Supported protocols are:

  • HTTP(s) URLs with response Content-Type image/* or video/*

  • RTSP (live-streaming)

  • RTMP (live-streaming)
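
For example, assuming process() accepts a url option (the option name and the URL are assumptions):

const results = await endpoint.process({ url: 'https://example.com/image.jpg' }) // assumed option name
for await (const prediction of results) {
    console.log(prediction)
}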

Processing Videos

You can process videos via upload or from public URLs. This example shows how to process all frames of a video retrieved from a public URL.
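
A sketch, assuming a url option on process() and a per-frame timestamp field on each prediction (both assumptions):

const results = await endpoint.process({ url: 'https://example.com/video.mp4' }) // hypothetical URL
for await (const prediction of results) {
    // One prediction per video frame; `seconds` (assumed field) is the frame timestamp.
    console.log(prediction.seconds, prediction)
}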

Canceling Jobs

Any job that has been queued or is in progress can be cancelled. For example, stop video processing after predictions for the first 10 seconds of the video have been received.
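
A sketch, assuming the returned results object exposes a cancel() method and predictions carry a seconds timestamp (both assumptions):

const results = await endpoint.process({ url: 'https://example.com/video.mp4' }) // hypothetical URL
for await (const prediction of results) {
    console.log(prediction)
    if (prediction.seconds >= 10.0) {
        // Cancel the queued/in-progress job once 10 seconds of video have been processed.
        await results.cancel() // assumed method name
        break
    }
}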

Live Stream Processing (e.g. from getUserMedia or RTSP)

You can connect EyePop to a real-time video stream (e.g., from a webcam or an RTSP feed) and continuously process incoming frames. This example demonstrates how to do this with getUserMedia, but the same logic applies to other stream sources:
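
A sketch using getUserMedia in the browser; liveIngress() and ingressId() are assumed API names, so verify them against your SDK version:

import { EyePop } from '@eyepop.ai/eyepop'

const endpoint = await EyePop.endpoint().connect()

// Request a camera MediaStream from the browser.
const mediaStream = await navigator.mediaDevices.getUserMedia({ video: true })

// Hand the live stream to the worker, then process the resulting ingress (assumed API).
const ingress = await endpoint.liveIngress(mediaStream)                     // assumed method
const results = await endpoint.process({ ingressId: ingress.ingressId() }) // assumed option and method

for await (const prediction of results) {
    console.log(prediction) // continuous per-frame predictions until the stream stops
}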

⚠️ Note: This method requires a readable MediaStream object from the browser or other source. If using getUserMedia, make sure your app requests proper camera permissions and handles stream lifecycle cleanly.

Other Usage Options

Auto start workers

By default, EyePop.endpoint().connect() will start a worker if none is running yet. To disable this behavior, create an endpoint with EyePop.endpoint({autoStart: false}).

Stop pending jobs

By default, EyePop.endpoint().connect() will cancel all currently running or queued jobs on the worker, as it is assumed that the caller takes full control of that worker. To disable this behavior, create an endpoint with EyePop.endpoint({stopJobs: false}).
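
Both options can be combined when creating the endpoint:

const endpoint = await EyePop.endpoint({
    autoStart: false, // don't start a worker if none is running
    stopJobs: false,  // leave other queued or running jobs untouched
}).connect()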
