# Pretrained Models & Abilities

This guide helps you build and run *Composable Pops*: configurable chains of AI tasks such as object detection, OCR, pose estimation, and tracking, all built with the EyePop SDK.

### What Are Composable Pops?

Composable Pops are dynamic pipelines that link multiple AI components together. Each component can perform inference, forward cropped regions to other models, or trace objects over time. With this system, you can chain object detection → cropping → pose estimation → tracking… and more.

***

### Anatomy of a Pop

A Pop is defined with a list of components, each specifying:

* **type**: the kind of processing (`INFERENCE`, `TRACING`, `CONTOUR_FINDER`)
* **inferenceTypes**: what to infer (e.g. `OBJECT_DETECTION`, `OCR`)
* **modelUuid / ability**: the model or ability to use
* **forward**: what to do with the output (e.g. crop, trace, or chain more components)
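Putting those fields together, a minimal single-component Pop looks like the sketch below. In application code, `PopComponentType` and `InferenceType` are imported from the EyePop SDK; `forward` is omitted here because nothing is chained yet:

```javascript
// Minimal Pop: one detection component, no forwarding
{
  components: [{
    type: PopComponentType.INFERENCE,
    inferenceTypes: [InferenceType.OBJECT_DETECTION],
    ability: 'eyepop.common-objects:latest'
  }]
}
```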

### Available Models & Abilities

All EyePop models are available as abilities by appending `:latest` to the model name. Use these in your Pop components with the `ability` parameter:

```javascript
{  
    type: PopComponentType.INFERENCE,  
    ability: "eyepop.person:latest"
}
```

#### Abilities Hub

* Browse the full list of abilities here: <https://www.eyepop.ai/abilities>
* Create your own Abilities with a prompt on your [EyePop.ai Dashboard](https://dashboard.eyepop.ai/dashboard)

#### Object Detection & Classification

* `eyepop.common-objects:latest` - Detect common everyday objects
* `eyepop.animal:latest` - Animal detection and classification
* `eyepop.vehicle:latest` - Vehicle detection
* `eyepop.device:latest` - Electronic device detection
* `eyepop.sports:latest` - Sports equipment and activity detection
* `eyepop.localize-objects:latest` - Find arbitrary objects from a text prompt

#### Person Analysis

* `eyepop.person:latest` - Person detection
* `eyepop.age:latest` - Age estimation
* `eyepop.gender:latest` - Gender classification
* `eyepop.expression:latest` - Facial expression analysis
* `eyepop.person.pose:latest` - Human pose estimation
* `eyepop.person.2d-body-points:latest` - 2D body keypoint detection
* `eyepop.person.3d-body-points.full:latest` - Full 3D body pose
* `eyepop.person.3d-body-points.heavy:latest` - Heavy 3D body pose model
* `eyepop.person.3d-body-points.lite:latest` - Lightweight 3D body pose
* `eyepop.person.3d-hand-points:latest` - 3D hand keypoint detection
* `eyepop.person.face-mesh:latest` - Detailed facial mesh
* `eyepop.person.face.long-range:latest` - Face detection for distant subjects
* `eyepop.person.face.short-range:latest` - Face detection for close subjects
* `eyepop.person.palm:latest` - Palm detection
* `eyepop.person.reid:latest` - Person re-identification
* `eyepop.person.segment:latest` - Person segmentation
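These person abilities compose with each other just like the example Pops further down. For instance, the sketch below detects faces and forwards each cropped face to the age and expression models in parallel. This specific pairing is an illustration, not a verified recipe; check the Abilities Hub for each ability's expected input:

```javascript
// Detect faces, then estimate age and expression on each face crop
{
  components: [{
    type: PopComponentType.INFERENCE,
    inferenceTypes: [InferenceType.OBJECT_DETECTION],
    ability: 'eyepop.person.face.short-range:latest',
    forward: {
      operator: { type: ForwardOperatorType.CROP },
      targets: [
        { type: PopComponentType.INFERENCE, ability: 'eyepop.age:latest' },
        { type: PopComponentType.INFERENCE, ability: 'eyepop.expression:latest' }
      ]
    }
  }]
}
```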

#### Text Recognition

* `eyepop.text:latest` - Text detection
* `eyepop.text.recognize.landscape:latest` - OCR for landscape text
* `eyepop.text.recognize.landscape-tiny:latest` - Lightweight landscape OCR
* `eyepop.text.recognize.square:latest` - OCR for square/document text
* See the [Abilities Hub](https://www.eyepop.ai/abilities) for more structured OCR abilities.
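The recognizers pair naturally with the `eyepop.text` detector: detect text regions first, then forward each crop to a recognizer. The sketch below uses the landscape recognizer; which recognizer fits best depends on your footage, so treat the choice here as an assumption:

```javascript
// Find text regions, then run OCR on each cropped region
{
  components: [{
    type: PopComponentType.INFERENCE,
    inferenceTypes: [InferenceType.OBJECT_DETECTION],
    ability: 'eyepop.text:latest',
    forward: {
      operator: { type: ForwardOperatorType.CROP },
      targets: [{
        type: PopComponentType.INFERENCE,
        inferenceTypes: [InferenceType.OCR],
        ability: 'eyepop.text.recognize.landscape:latest'
      }]
    }
  }]
}
```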

#### Quick Reference

```javascript
// Common usage patterns

// Detect people 
{  
    type: PopComponentType.INFERENCE,  
    ability: "eyepop.person:latest",     
    confidenceThreshold: 0.8
}

// Visual intelligence  
{  
    type: PopComponentType.INFERENCE,  
    ability: "eyepop.image-contents:latest",   
    params: {    
        prompts: [{      prompt: "What is this person wearing?"    }]  
    }
}

// Custom object labels  
{  
    type: PopComponentType.INFERENCE,  
    ability: "eyepop.localize-objects:latest", 
    params: {    prompts: [      {prompt: 'dog', label: 'Best Friend'}    ]  }
}
```

***

#### Example Pops

1\. Text on Detected Objects

```javascript
{
  components: [{
    type: PopComponentType.INFERENCE,
    inferenceTypes: [InferenceType.OBJECT_DETECTION],
    ability: 'eyepop.common-objects:latest', 
    forward: {
      operator: { type: ForwardOperatorType.CROP },
      targets: [{
        type: PopComponentType.INFERENCE,
        inferenceTypes: [InferenceType.OBJECT_DETECTION],
        ability: 'eyepop.text:latest',
        forward: {
          operator: { type: ForwardOperatorType.CROP },
          targets: [{
            type: PopComponentType.INFERENCE,
            inferenceTypes: [InferenceType.OCR],
            ability: 'PARSeq:latest'
          }]
        }
      }]
    }
  }]
}
```

Use case: Extract text *within* labeled objects.

***

2\. Object Tracking

```javascript
{
  components: [{
    type: PopComponentType.INFERENCE,
    inferenceTypes: [InferenceType.OBJECT_DETECTION],
    ability: 'eyepop.common-objects:latest',
    forward: {
      operator: { type: ForwardOperatorType.CROP },
      targets: [{ type: PopComponentType.TRACING }]
    }
  }]
}
```

Use case: Detect and track object movements across frames.

***

3\. Detect Objects and Pose on People

```javascript
{
  components: [
    {
      type: PopComponentType.INFERENCE,
      inferenceTypes: [InferenceType.OBJECT_DETECTION],
      ability: 'eyepop.common-objects:latest',
    },
    {
      type: PopComponentType.INFERENCE,
      inferenceTypes: [InferenceType.OBJECT_DETECTION],
      ability: "eyepop.person:latest",     
      categoryName: 'person',
      confidenceThreshold: 0.8,
      forward: {
        operator: { type: ForwardOperatorType.CROP },
        targets: [{
          type: PopComponentType.INFERENCE,
          inferenceTypes: [InferenceType.KEY_POINTS],
          categoryName: '2d-body-points',
          ability: 'eyepop.person.2d-body-points:latest'
        }]
      }
    }
  ]
}
```

Use case: Detect all objects and analyze human posture when a person is found.

***

4\. Segmentation and Contour Extraction

```javascript
const OBJECT_SEGMENTATION = {
  components: [{
    type: PopComponentType.INFERENCE,
    inferenceTypes: [InferenceType.OBJECT_DETECTION],
    ability: 'eyepop.common-objects:latest',
    forward: {
      operator: {
        type: ForwardOperatorType.CROP,
        crop: { boxPadding: 0.25 }
      },
      targets: [{
        type: PopComponentType.INFERENCE,
        inferenceTypes: [InferenceType.SEMANTIC_SEGMENTATION],
        modelUuid: 'EfficientSAM:latest',
        forward: {
          operator: { type: ForwardOperatorType.FULL },
          targets: [{
            type: PopComponentType.CONTOUR_FINDER,
            contourType: ContourType.POLYGON
          }]
        }
      }]
    }
  }]
}
```

Use case: Get polygon contours of detected objects for precise shape/area analysis.

***

#### Running the Pop

```javascript
const endpoint = await EyePop.workerEndpoint({ auth: { API_KEY } }).connect()

await endpoint.changePop(YOUR_POP_DEFINITION) // one of the Pop definitions above

const results = await endpoint.process({ path: 'image.jpg' })

for await (const result of results) {
  Render2d.renderer(ctx, [Render2d.renderPose(), Render2d.renderText(), Render2d.renderContour()])
         .draw(result)
}

```

#### Tips for Building Your Own Pops

* Chain components by specifying `forward.targets`
* Use `ForwardOperatorType.CROP` to pass cropped regions downstream
* Use `categoryName` to label results and `confidenceThreshold` to filter out low-confidence detections
* Use `TRACING` or `CONTOUR_FINDER` components for post-processing
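Deeply nested `forward.targets` chains can get verbose to write by hand. Since a Pop definition is plain data, you can build it programmatically. The helper below is a hypothetical convenience, not part of the SDK, and uses plain-string stand-ins for the SDK's `PopComponentType` / `ForwardOperatorType` enum values:

```javascript
// Hypothetical helper: fold a flat list of components into a nested
// crop chain. stages[0] becomes the root component; each later stage
// receives crops forwarded from the stage before it.
function cropChain(stages) {
  return stages.reduceRight((inner, stage) =>
    inner === null
      ? { ...stage }
      : { ...stage, forward: { operator: { type: 'crop' }, targets: [inner] } },
    null);
}

// Usage: detect objects -> crop -> find text -> crop -> run OCR
const pop = {
  components: [cropChain([
    { type: 'inference', ability: 'eyepop.common-objects:latest' },
    { type: 'inference', ability: 'eyepop.text:latest' },
    { type: 'inference', ability: 'PARSeq:latest' }
  ])]
};
```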

Contact us at <help@eyepop.ai> or on our Discord for help creating a Composable Pop!
