Composable Pops

📦 EyePop.ai — Composable Pops SDK Guide

This guide helps you build, run, and visualize Composable Pops—configurable chains of AI tasks like object detection, OCR, pose estimation, and tracking—using the EyePop SDK.

🧠 What Are Composable Pops?

Composable Pops are dynamic pipelines that link multiple AI components together. Each component can perform inference, forward cropped regions to other models, or trace objects over time. With this system, you can chain object detection → cropping → pose estimation → tracking, and more.


✅ Prerequisites

Make sure you have:

• Node.js >= 18

• Installed packages:

  • @eyepop.ai/eyepop

  • @eyepop.ai/eyepop-render-2d (only needed for the Render2d visualization shown below)

• An EyePop developer account and access to model UUIDs

• An API key available to the SDK (typically exported as the EYEPOP_API_KEY environment variable)


🔌 Base Imports

import {
  ContourType, EndpointState, EyePop,
  ForwardOperatorType, InferenceType,
  PopComponentType, TransientPopId
} from '@eyepop.ai/eyepop'

🧱 Anatomy of a Pop

A Pop is defined with a list of components, each specifying the fields below (a minimal skeleton follows the list):

• type: the kind of processing (INFERENCE, TRACING, CONTOUR_FINDER)

• inferenceTypes: what to infer (e.g. OBJECT_DETECTION, OCR)

• modelUuid: model to use

• forward: what to do with the output (e.g. crop, trace, or chain more components)
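Putting those fields together, a minimal single-component Pop looks like this; the modelUuid value is a placeholder, so substitute a UUID (or an ability from the list in the next section) that your API key can access:

// Minimal Pop: one detection component, no forwarding.
// 'your-model-uuid:...' is a placeholder; replace it with a model available to your account.
const MINIMAL_POP = {
  components: [{
    type: PopComponentType.INFERENCE,
    inferenceTypes: [InferenceType.OBJECT_DETECTION],
    modelUuid: 'your-model-uuid:...'
  }]
}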

📋 Available Models & Abilities

All EyePop models are available as abilities by appending :latest to the model name. Use these in your Pop components with the ability parameter:

{
  type: PopComponentType.INFERENCE,
  ability: "eyepop.person:latest"  // Any ability from the list below
}

Object Detection & Classification

  • eyepop.coco.yolov7:latest - COCO object detection with YOLOv7

  • eyepop.coco.yolov7-tiny:latest - Lightweight COCO object detection

  • eyepop.coco.yolov8m:latest - COCO object detection with YOLOv8 medium

  • eyepop.coco.yolov8n:latest - COCO object detection with YOLOv8 nano

  • eyepop.common-objects:latest - Detect common everyday objects

  • eyepop.imagenet.classify:latest - ImageNet classification

  • eyepop.animal:latest - Animal detection and classification

  • eyepop.vehicle:latest - Vehicle detection

  • eyepop.device:latest - Electronic device detection

  • eyepop.sports:latest - Sports equipment and activity detection

Visual Intelligence & Analysis

  • eyepop.image-contents:latest - Prompt-based visual analysis and understanding

  • eyepop.localize-objects:latest - Find objects with custom labels and bounding boxes

  • eyepop.localize-objects.preview:latest - Preview version of object localization

  • eyepop.image-captions:latest - Generate image captions

  • eyepop.vlm.preview:latest - Vision Language Model preview

Person Analysis

  • eyepop.person:latest - Person detection

  • eyepop.age:latest - Age estimation

  • eyepop.gender:latest - Gender classification

  • eyepop.expression:latest - Facial expression analysis

  • eyepop.person.pose:latest - Human pose estimation

  • eyepop.person.2d-body-points:latest - 2D body keypoint detection

  • eyepop.person.3d-body-points.full:latest - Full 3D body pose

  • eyepop.person.3d-body-points.heavy:latest - Heavy 3D body pose model

  • eyepop.person.3d-body-points.lite:latest - Lightweight 3D body pose

  • eyepop.person.3d-hand-points:latest - 3D hand keypoint detection

  • eyepop.person.face-mesh:latest - Detailed facial mesh

  • eyepop.person.face.long-range:latest - Face detection for distant subjects

  • eyepop.person.face.short-range:latest - Face detection for close subjects

  • eyepop.person.palm:latest - Palm detection

  • eyepop.person.reid:latest - Person re-identification

  • eyepop.person.segment:latest - Person segmentation

Text Recognition

  • eyepop.text:latest - Text detection

  • eyepop.text.recognize.landscape:latest - OCR for landscape text

  • eyepop.text.recognize.landscape-tiny:latest - Lightweight landscape OCR

  • eyepop.text.recognize.square:latest - OCR for square/document text

Segmentation Models (SAM)

  • eyepop.sam.small:latest - Small Segment Anything Model

  • eyepop.sam.tiny:latest - Tiny Segment Anything Model

  • eyepop.sam2.decoder:latest - SAM2 decoder component

  • eyepop.sam2.encoder.large:latest - SAM2 large encoder

  • eyepop.sam2.encoder.small:latest - SAM2 small encoder

  • eyepop.sam2.encoder.tiny:latest - SAM2 tiny encoder

Quick Reference

// Common usage patterns
{
  type: PopComponentType.INFERENCE,
  ability: "eyepop.person:latest",           // Detect people
  confidenceThreshold: 0.8
}

{
  type: PopComponentType.INFERENCE,
  ability: "eyepop.image-contents:latest",   // Visual intelligence
  params: {
    prompts: [{
      prompt: "What is this person wearing?"
    }]
  }
}

{
  type: PopComponentType.INFERENCE,
  ability: "eyepop.localize-objects:latest", // Custom object labels
  params: {
    prompts: [
      {prompt: 'dog', label: 'Best Friend'}
    ]
  }
}

💡 For detailed Visual Intelligence capabilities, see our Visual Intelligence Documentation which covers prompt-based analysis with eyepop.image-contents:latest.


🧪 Example Pops

1. Text on Detected Objects

const TEXT_ON_OBJECTS = {
  components: [{
    type: PopComponentType.INFERENCE,
    inferenceTypes: [InferenceType.OBJECT_DETECTION],
    modelUuid: 'yolov7:...', // your object detector
    forward: {
      operator: { type: ForwardOperatorType.CROP },
      targets: [{
        type: PopComponentType.INFERENCE,
        inferenceTypes: [InferenceType.OBJECT_DETECTION],
        modelUuid: 'eyepop-text:...',
        forward: {
          operator: { type: ForwardOperatorType.CROP },
          targets: [{
            type: PopComponentType.INFERENCE,
            inferenceTypes: [InferenceType.OCR],
            modelUuid: 'PARSeq:...'
          }]
        }
      }]
    }
  }]
}

Use case: Extract text within labeled objects.
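To sanity-check this chain, run it and dump the raw result JSON; nested components typically attach their output inside the parent detection's entry, so printing the full structure is the quickest way to see where the OCR text lands. A sketch, assuming an endpoint connected as in the "Running the Pop" section below:

await endpoint.changePop(TEXT_ON_OBJECTS)

// Print each prediction verbatim to locate the nested OCR output.
const results = await endpoint.process({ path: 'image.jpg' })
for await (const result of results) {
  console.log(JSON.stringify(result, null, 2))
}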


2. Object Tracking

const OBJECT_TRACKING = {
  components: [{
    type: PopComponentType.INFERENCE,
    inferenceTypes: [InferenceType.OBJECT_DETECTION],
    modelUuid: 'yolov7:...',
    forward: {
      operator: { type: ForwardOperatorType.CROP },
      targets: [{ type: PopComponentType.TRACING }]
    }
  }]
}

Use case: Detect and track object movements across frames.
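Tracking only pays off across consecutive frames, so point this Pop at a video source. A sketch, assuming an endpoint connected as in the "Running the Pop" section below; the seconds and objects fields follow EyePop's standard prediction JSON, but verify them against your own output:

await endpoint.changePop(OBJECT_TRACKING)

// Each iterated result corresponds to one analyzed video frame; the TRACING
// component assigns stable IDs to objects so they can be followed over time.
const results = await endpoint.process({ path: 'video.mp4' })
for await (const frameResult of results) {
  console.log(frameResult.seconds, frameResult.objects?.length)
}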


3. Detect Objects and Pose on People

const OBJECT_PLUS_PERSON = {
  components: [
    {
      type: PopComponentType.INFERENCE,
      inferenceTypes: [InferenceType.OBJECT_DETECTION],
      modelUuid: 'yolov7:...'
    },
    {
      type: PopComponentType.INFERENCE,
      inferenceTypes: [InferenceType.OBJECT_DETECTION],
      modelUuid: 'eyepop-person:...',
      categoryName: 'person',
      confidenceThreshold: 0.8,
      forward: {
        operator: { type: ForwardOperatorType.CROP },
        targets: [{
          type: PopComponentType.INFERENCE,
          inferenceTypes: [InferenceType.KEY_POINTS],
          categoryName: '2d-body-points',
          modelUuid: 'Mediapipe:...'
        }]
      }
    }
  ]
}

Use case: Detect all objects and analyze human posture when a person is found.


4. Segmentation and Contour Extraction

const OBJECT_SEGMENTATION = {
  components: [{
    type: PopComponentType.INFERENCE,
    inferenceTypes: [InferenceType.OBJECT_DETECTION],
    modelUuid: 'yolov7:...',
    forward: {
      operator: {
        type: ForwardOperatorType.CROP,
        crop: { boxPadding: 0.25 }
      },
      targets: [{
        type: PopComponentType.INFERENCE,
        inferenceTypes: [InferenceType.SEMANTIC_SEGMENTATION],
        modelUuid: 'EfficientSAM:...',
        forward: {
          operator: { type: ForwardOperatorType.FULL },
          targets: [{
            type: PopComponentType.CONTOUR_FINDER,
            contourType: ContourType.POLYGON
          }]
        }
      }]
    }
  }]
}

Use case: Get polygon contours of detected objects for precise shape/area analysis.
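With polygon contours in hand, shape and area metrics fall out directly. Here is a sketch, meant to run inside the result loop from the next section, that computes each polygon's area via the shoelace formula; the contours, points, and classLabel field names are assumptions based on EyePop's prediction JSON, so confirm them against your own results:

// Shoelace formula: area = |sum(x_i * y_(i+1) - x_(i+1) * y_i)| / 2
function polygonArea(points: { x: number; y: number }[]): number {
  let sum = 0
  for (let i = 0; i < points.length; i++) {
    const p = points[i]
    const q = points[(i + 1) % points.length]
    sum += p.x * q.y - q.x * p.y
  }
  return Math.abs(sum) / 2
}

// Assumed shape: each detected object carries a list of contours,
// each contour a list of {x, y} points.
for (const obj of result.objects ?? []) {
  for (const contour of obj.contours ?? []) {
    console.log(obj.classLabel, 'area (px^2):', polygonArea(contour.points))
  }
}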


🧪 Running the Pop

// Render2d ships separately in @eyepop.ai/eyepop-render-2d
import { Render2d } from '@eyepop.ai/eyepop-render-2d'

const endpoint = await EyePop.workerEndpoint({
  popId: TransientPopId.Transient,
  logger // any console-compatible logger (see Troubleshooting below)
}).connect()

await endpoint.changePop(OBJECT_SEGMENTATION)

const results = await endpoint.process({ path: 'image.jpg' })

// ctx is a 2D canvas context for the source image (e.g. from node-canvas)
for await (const result of results) {
  Render2d.renderer(ctx, [Render2d.renderPose(), Render2d.renderText(), Render2d.renderContour()])
          .draw(result)
}

await endpoint.disconnect()

🧠 Tips for Building Your Own Pops

• Chain components by specifying forward.targets (an array, so one component can fan out to several targets; see the sketch after this list)

• Use ForwardOperatorType.CROP to pass cropped regions

• Use categoryName and confidenceThreshold to filter by label

• Use TRACING or CONTOUR_FINDER for post-processing
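For example, because targets is an array, a single detector can forward the same crops to several downstream components at once. A sketch, assuming fan-out works like the single-target examples above and using abilities from the list in this guide:

// One detector fans out crops to both tracking and pose estimation.
const FAN_OUT_POP = {
  components: [{
    type: PopComponentType.INFERENCE,
    ability: 'eyepop.person:latest',
    confidenceThreshold: 0.8,
    forward: {
      operator: { type: ForwardOperatorType.CROP },
      targets: [
        // Track each person across frames...
        { type: PopComponentType.TRACING },
        // ...and, in parallel, estimate 2D body keypoints on the same crops.
        {
          type: PopComponentType.INFERENCE,
          ability: 'eyepop.person.2d-body-points:latest'
        }
      ]
    }
  }]
}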


🔄 Reusability

Wrap Pops in a function and dynamically switch:

function makeTrackingPop(modelUuid: string) {
  return {
    components: [{
      type: PopComponentType.INFERENCE,
      inferenceTypes: [InferenceType.OBJECT_DETECTION],
      modelUuid,
      forward: {
        operator: { type: ForwardOperatorType.CROP },
        targets: [{ type: PopComponentType.TRACING }]
      }
    }]
  }
}
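Switching pipelines at runtime is then a single call:

// Swap in a tracking pipeline for any detector model you have access to.
await endpoint.changePop(makeTrackingPop('yolov7:...'))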

🛠️ Troubleshooting

• ✅ Ensure model UUIDs are accessible to your API key

• ⚠️ Use logger.info() to debug endpoint state changes

• 🧪 Try rendering intermediate results to validate your pipeline
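Since two of these steps lean on logging, here is a minimal logger setup to pass into workerEndpoint; EyePop's Node examples commonly use pino, but that choice is an assumption here, and any console-compatible logger should work:

import pino from 'pino'

// Debug level surfaces EndpointState transitions while the Pop is (re)configured.
const logger = pino({ level: 'debug', name: 'eyepop' })

const endpoint = await EyePop.workerEndpoint({
  popId: TransientPopId.Transient,
  logger
}).connect()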
