Composable Pops
📦 EyePop.ai — Composable Pops SDK Guide
This guide helps you build, run, and visualize Composable Pops—configurable chains of AI tasks like object detection, OCR, pose estimation, and tracking—using the EyePop SDK.
🧠 What Are Composable Pops?
Composable Pops are dynamic pipelines that link multiple AI components together. Each component can perform inference, forward cropped regions to other models, or trace objects over time. With this system, you can chain object detection → cropping → pose estimation → tracking… and more.
✅ Prerequisites
Make sure you have:
• Node.js >= 18
• The @eyepop.ai/eyepop package installed
• An EyePop developer account and access to model UUIDs
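The SDK package can be installed from npm:

```shell
npm install @eyepop.ai/eyepop
```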
🔌 Base Imports
import {
ContourType, EndpointState, EyePop,
ForwardOperatorType, InferenceType,
PopComponentType, TransientPopId
} from '@eyepop.ai/eyepop'
🧱 Anatomy of a Pop
A Pop is defined with a list of components, each specifying:
• type: the kind of processing (INFERENCE, TRACING, CONTOUR_FINDER)
• inferenceTypes: what to infer (e.g. OBJECT_DETECTION, OCR)
• modelUuid: model to use
• forward: what to do with the output (e.g. crop, trace, or chain more components)
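Putting these fields together, a minimal single-component Pop might look like this (the modelUuid is a placeholder for your own detector):

```javascript
const MINIMAL_POP = {
  components: [{
    type: PopComponentType.INFERENCE,
    inferenceTypes: [InferenceType.OBJECT_DETECTION],
    modelUuid: 'yolov7:...' // your object detector
    // no forward: raw detections are returned as-is
  }]
}
```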
📋 Available Models & Abilities
All EyePop models are available as abilities by appending :latest to the model name. Use these in your Pop components with the ability parameter:
{
  type: PopComponentType.INFERENCE,
  ability: "eyepop.person:latest" // Any ability from the list below
}
Object Detection & Classification
• eyepop.coco.yolov7:latest - COCO object detection with YOLOv7
• eyepop.coco.yolov7-tiny:latest - Lightweight COCO object detection
• eyepop.coco.yolov8m:latest - COCO object detection with YOLOv8 medium
• eyepop.coco.yolov8n:latest - COCO object detection with YOLOv8 nano
• eyepop.common-objects:latest - Detect common everyday objects
• eyepop.imagenet.classify:latest - ImageNet classification
• eyepop.animal:latest - Animal detection and classification
• eyepop.vehicle:latest - Vehicle detection
• eyepop.device:latest - Electronic device detection
• eyepop.sports:latest - Sports equipment and activity detection
Visual Intelligence & Analysis
• eyepop.image-contents:latest - Prompt-based visual analysis and understanding
• eyepop.localize-objects:latest - Find objects with custom labels and bounding boxes
• eyepop.localize-objects.preview:latest - Preview version of object localization
• eyepop.image-captions:latest - Generate image captions
• eyepop.vlm.preview:latest - Vision Language Model preview
Person Analysis
• eyepop.person:latest - Person detection
• eyepop.age:latest - Age estimation
• eyepop.gender:latest - Gender classification
• eyepop.expression:latest - Facial expression analysis
• eyepop.person.pose:latest - Human pose estimation
• eyepop.person.2d-body-points:latest - 2D body keypoint detection
• eyepop.person.3d-body-points.full:latest - Full 3D body pose
• eyepop.person.3d-body-points.heavy:latest - Heavy 3D body pose model
• eyepop.person.3d-body-points.lite:latest - Lightweight 3D body pose
• eyepop.person.3d-hand-points:latest - 3D hand keypoint detection
• eyepop.person.face-mesh:latest - Detailed facial mesh
• eyepop.person.face.long-range:latest - Face detection for distant subjects
• eyepop.person.face.short-range:latest - Face detection for close subjects
• eyepop.person.palm:latest - Palm detection
• eyepop.person.reid:latest - Person re-identification
• eyepop.person.segment:latest - Person segmentation
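Abilities can be chained the same way as models. As an untested sketch, a Pop that forwards cropped person detections to the age estimator might look like:

```javascript
const PERSON_AGE = {
  components: [{
    type: PopComponentType.INFERENCE,
    ability: 'eyepop.person:latest',
    forward: {
      operator: { type: ForwardOperatorType.CROP },
      targets: [{
        type: PopComponentType.INFERENCE,
        ability: 'eyepop.age:latest' // runs on each cropped person region
      }]
    }
  }]
}
```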
Text Recognition
• eyepop.text:latest - Text detection
• eyepop.text.recognize.landscape:latest - OCR for landscape text
• eyepop.text.recognize.landscape-tiny:latest - Lightweight landscape OCR
• eyepop.text.recognize.square:latest - OCR for square/document text
Segmentation Models (SAM)
• eyepop.sam.small:latest - Small Segment Anything Model
• eyepop.sam.tiny:latest - Tiny Segment Anything Model
• eyepop.sam2.decoder:latest - SAM2 decoder component
• eyepop.sam2.encoder.large:latest - SAM2 large encoder
• eyepop.sam2.encoder.small:latest - SAM2 small encoder
• eyepop.sam2.encoder.tiny:latest - SAM2 tiny encoder
Quick Reference
// Common usage patterns
{
  type: PopComponentType.INFERENCE,
  ability: "eyepop.person:latest", // Detect people
  confidenceThreshold: 0.8
}
{
  type: PopComponentType.INFERENCE,
  ability: "eyepop.image-contents:latest", // Visual intelligence
  params: { prompts: [{ prompt: "What is this person wearing?" }] }
}
{
  type: PopComponentType.INFERENCE,
  ability: "eyepop.localize-objects:latest", // Custom object labels
  params: { prompts: [{ prompt: 'dog', label: 'Best Friend' }] }
}
💡 For detailed Visual Intelligence capabilities, see our Visual Intelligence Documentation, which covers prompt-based analysis with eyepop.image-contents:latest.
🧪 Example Pops
1. Text on Detected Objects
const TEXT_ON_OBJECTS = {
components: [{
type: PopComponentType.INFERENCE,
inferenceTypes: [InferenceType.OBJECT_DETECTION],
modelUuid: 'yolov7:...', // your object detector
forward: {
operator: { type: ForwardOperatorType.CROP },
targets: [{
type: PopComponentType.INFERENCE,
inferenceTypes: [InferenceType.OBJECT_DETECTION],
modelUuid: 'eyepop-text:...',
forward: {
operator: { type: ForwardOperatorType.CROP },
targets: [{
type: PopComponentType.INFERENCE,
inferenceTypes: [InferenceType.OCR],
modelUuid: 'PARSeq:...'
}]
}
}]
}
}]
}
Use case: Extract text within labeled objects.
2. Object Tracking
const OBJECT_TRACKING = {
components: [{
type: PopComponentType.INFERENCE,
inferenceTypes: [InferenceType.OBJECT_DETECTION],
modelUuid: 'yolov7:...',
forward: {
operator: { type: ForwardOperatorType.CROP },
targets: [{ type: PopComponentType.TRACING }]
}
}]
}
Use case: Detect and track object movements across frames.
3. Detect Objects and Pose on People
const OBJECT_PLUS_PERSON = {
components: [
{
type: PopComponentType.INFERENCE,
inferenceTypes: [InferenceType.OBJECT_DETECTION],
modelUuid: 'yolov7:...'
},
{
type: PopComponentType.INFERENCE,
inferenceTypes: [InferenceType.OBJECT_DETECTION],
modelUuid: 'eyepop-person:...',
categoryName: 'person',
confidenceThreshold: 0.8,
forward: {
operator: { type: ForwardOperatorType.CROP },
targets: [{
type: PopComponentType.INFERENCE,
inferenceTypes: [InferenceType.KEY_POINTS],
categoryName: '2d-body-points',
modelUuid: 'Mediapipe:...'
}]
}
}
]
}
Use case: Detect all objects and analyze human posture when a person is found.
4. Segmentation and Contour Extraction
const OBJECT_SEGMENTATION = {
components: [{
type: PopComponentType.INFERENCE,
inferenceTypes: [InferenceType.OBJECT_DETECTION],
modelUuid: 'yolov7:...',
forward: {
operator: {
type: ForwardOperatorType.CROP,
crop: { boxPadding: 0.25 }
},
targets: [{
type: PopComponentType.INFERENCE,
inferenceTypes: [InferenceType.SEMANTIC_SEGMENTATION],
modelUuid: 'EfficientSAM:...',
forward: {
operator: { type: ForwardOperatorType.FULL },
targets: [{
type: PopComponentType.CONTOUR_FINDER,
contourType: ContourType.POLYGON
}]
}
}]
}
}]
}
Use case: Get polygon contours of detected objects for precise shape/area analysis.
🧪 Running the Pop
const endpoint = await EyePop.workerEndpoint({
popId: TransientPopId.Transient,
  logger // optional: your logger instance
}).connect()
await endpoint.changePop(OBJECT_SEGMENTATION)
const results = await endpoint.process({ path: 'image.jpg' })
for await (const result of results) {
  // Render2d comes from the separate @eyepop.ai/eyepop-render-2d package;
  // ctx is your 2D canvas rendering context
  Render2d.renderer(ctx, [Render2d.renderPose(), Render2d.renderText(), Render2d.renderContour()])
    .draw(result)
}
🧠 Tips for Building Your Own Pops
• Chain components by specifying forward.targets
• Use ForwardOperatorType.CROP to pass cropped regions
• Use categoryName to label outputs and confidenceThreshold to drop low-confidence detections
• Use TRACING or CONTOUR_FINDER for post-processing
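Deeply nested forward.targets chains get verbose quickly. As a sketch in plain JavaScript (this helper is hypothetical, not part of the SDK), a linear crop-and-forward chain can be built from a flat list of components:

```javascript
// Hypothetical helper: nests a flat list of component definitions into a
// single chain, linking each stage to the next via forward.targets with
// the given operator type (e.g. ForwardOperatorType.CROP).
function chainWithCrops(components, operatorType) {
  return components.reduceRight(
    (inner, component) =>
      inner === null
        ? { ...component } // last stage: no forward
        : {
            ...component,
            forward: {
              operator: { type: operatorType },
              targets: [inner],
            },
          },
    null
  )
}
```

For example, chainWithCrops([detector, textDetector, ocr], ForwardOperatorType.CROP) would reproduce the nesting of the three-stage text-on-objects example.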
🔄 Reusability
Wrap Pops in a function and dynamically switch:
function makeTrackingPop(modelUuid: string) {
return {
components: [{
type: PopComponentType.INFERENCE,
inferenceTypes: [InferenceType.OBJECT_DETECTION],
modelUuid,
forward: {
operator: { type: ForwardOperatorType.CROP },
targets: [{ type: PopComponentType.TRACING }]
}
}]
}
}
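Then switch detectors at runtime by rebuilding and reapplying the Pop (the UUID here is a placeholder):

```javascript
await endpoint.changePop(makeTrackingPop('my-other-detector:...'))
```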
🛠️ Troubleshooting
• ✅ Ensure model UUIDs are accessible to your API key
• ⚠️ Use logger.info() to debug endpoint state changes
• 🧪 Try rendering intermediate results to validate your pipeline