Composable Pops

EyePop SDK Developer Guide: Using Composable Pops

This guide shows how to build and run Composable Pops using EyePop’s Python SDK. Composable Pops let you chain models and post-processing steps (e.g., object detection β†’ crop β†’ OCR) into reusable pipelinesβ€”all defined in Python.


🧠 What is a Composable Pop?

A Composable Pop is a flexible visual processing pipeline. Each Pop is made up of components, where each component can:

β€’ Run inference with a specific model

β€’ Crop output and pass it to another component

β€’ Trace objects over time

β€’ Find contours or keypoints

All without needing to train your own model or write any machine learning code.


πŸ”§ Installation & Setup

pip install eyepop-sdk requests pillow webui pybars3

Make sure you have:

β€’ An EyePop developer key from your EyePop.ai dashboard
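A minimal credentials sketch. The environment-variable names below (EYEPOP_SECRET_KEY, EYEPOP_POP_ID) are the ones the SDK conventionally reads when the worker endpoint is created without explicit arguments; confirm them against your dashboard:

```python
import os

# Shown inline for illustration only -- in practice, export these in your
# shell or deployment environment rather than hard-coding them.
os.environ.setdefault("EYEPOP_SECRET_KEY", "<your developer key>")
os.environ.setdefault("EYEPOP_POP_ID", "<your pop id>")
```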


πŸ“¦ Composable Pop Examples

Each Pop is defined using the Pop object and one or more component objects. The examples below assume these imports from the SDK's worker types:

from eyepop.worker.worker_types import (
    Pop, InferenceComponent, ContourFinderComponent,
    CropForward, FullForward, ContourType, ComponentParams,
)

1. Detect people

Pop(components=[
    InferenceComponent(
        model='eyepop.person:latest',
        categoryName="person"
    )
])

2. 2D body pose estimation

Pop(components=[
    InferenceComponent(
        model='eyepop.person:latest',
        categoryName="person",
        forward=CropForward(
            maxItems=128,
            targets=[
                InferenceComponent(
                    model='eyepop.person.2d-body-points:latest',
                    categoryName="2d-body-points",
                    confidenceThreshold=0.25
                )
            ]
        )
    )
])

3. Face mesh from people

Pop(components=[
    InferenceComponent(
        model='eyepop.person:latest',
        categoryName="person",
        forward=CropForward(
            maxItems=128,
            targets=[
                InferenceComponent(
                    model='eyepop.person.face.short-range:latest',
                    categoryName="2d-face-points",
                    forward=CropForward(
                        boxPadding=1.5,
                        orientationTargetAngle=-90.0,
                        targets=[
                            InferenceComponent(
                                model='eyepop.person.face-mesh:latest',
                                categoryName="3d-face-mesh"
                            )
                        ]
                    )
                )
            ]
        )
    )
])

4. OCR on text regions

Pop(components=[
    InferenceComponent(
        model='eyepop.text:latest',
        categoryName="text",
        confidenceThreshold=0.7,
        forward=CropForward(
            maxItems=128,
            targets=[
                InferenceComponent(
                    model='eyepop.text.recognize.landscape:latest',
                    confidenceThreshold=0.1
                )
            ]
        )
    )
])

5. Semantic segmentation with contours

Pop(components=[
    InferenceComponent(
        model='eyepop.sam.small:latest',
        forward=FullForward(
            targets=[
                ContourFinderComponent(
                    contourType=ContourType.POLYGON,
                    areaThreshold=0.005
                )
            ]
        )
    )
])
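A note on areaThreshold: it is expressed as a fraction of the overall image area (an assumption worth verifying against your results), so 0.005 keeps only contours covering at least 0.5% of the frame. A quick sanity check in plain Python:

```python
def min_contour_area_px(width: int, height: int, area_threshold: float = 0.005) -> float:
    """Smallest contour area (in pixels) that survives filtering,
    assuming areaThreshold is a fraction of the total image area."""
    return width * height * area_threshold

print(min_contour_area_px(1920, 1080))  # -> 10368.0 pixels for a 1080p frame
```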

πŸš€ Running a Pop

import json

from eyepop import EyePopSdk

with EyePopSdk.workerEndpoint() as endpoint:
    endpoint.set_pop(pop_examples["2d-body-points"])  # e.g. the 2D body pose Pop from example 2
    job = endpoint.upload("my-image.jpg")             # or use endpoint.load_from(url)

    while result := job.predict():                    # predict() returns one prediction per call
        print(json.dumps(result, indent=2))
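Each prediction is a plain JSON-style dict, so post-processing is ordinary Python. The sketch below uses an illustrative result whose field names (objects, classLabel, confidence) follow the shape of typical EyePop prediction output, but the exact schema can vary by model, so check the printed JSON first:

```python
# Hand-written sample standing in for one prediction from a person-detection Pop.
sample_result = {
    "source_width": 1280,
    "source_height": 720,
    "objects": [
        {"classLabel": "person", "confidence": 0.92,
         "x": 40, "y": 12, "width": 300, "height": 600},
        {"classLabel": "person", "confidence": 0.41,
         "x": 700, "y": 80, "width": 240, "height": 560},
    ],
}

def confident_labels(prediction: dict, threshold: float = 0.5) -> list:
    """Collect labels of detections at or above a confidence threshold."""
    return [obj["classLabel"]
            for obj in prediction.get("objects", [])
            if obj.get("confidence", 0.0) >= threshold]

print(confident_labels(sample_result))  # only the 0.92 detection passes
```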

πŸ” Using Points, Boxes, or Prompts

params = [
    ComponentParams(componentId=1, values={  # componentId selects which component in the Pop receives these values
        "roi": {
            "points": [{"x": 100, "y": 150}],
            "boxes": [{"topLeft": {"x": 10, "y": 20}, "bottomRight": {"x": 200, "y": 300}}]
        }
    })
]
job = endpoint.upload("image.jpg", params=params)
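Since the ROI payload is just nested dicts, a small helper keeps hand-built boxes consistent (the helper name is ours for illustration, not part of the SDK):

```python
def roi_box(x1: int, y1: int, x2: int, y2: int) -> dict:
    """Build a box in the same shape as the "boxes" entries above."""
    return {"topLeft": {"x": x1, "y": y1}, "bottomRight": {"x": x2, "y": y2}}

# Same ROI as the example above, built programmatically.
roi = {"points": [{"x": 100, "y": 150}], "boxes": [roi_box(10, 20, 200, 300)]}
```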

🧱 Build Your Own Pops

All components follow this general structure:

InferenceComponent(
    model='your-model:latest',
    categoryName='your-label',
    forward=CropForward(
        boxPadding=0.25,
        targets=[...]
    )
)

Use:

β€’ CropForward for passing regions

β€’ FullForward to process the whole image

β€’ ContourFinderComponent for polygon detection

β€’ ComponentParams to pass prompts or ROI


πŸ› οΈ Debug Tips

β€’ Log levels can be adjusted using logging.getLogger().setLevel(...)
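For example, to surface the SDK's debug output (assuming its loggers live under the "eyepop" namespace):

```python
import logging

# Basic config for your own output, plus a narrower debug level
# scoped to the SDK's loggers.
logging.basicConfig(level=logging.INFO)
logging.getLogger("eyepop").setLevel(logging.DEBUG)
```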
