Composable Pops

EyePop SDK Developer Guide: Using Composable Pops

This guide shows how to build and run Composable Pops using EyePop’s Python SDK. Composable Pops let you chain models and post-processing steps (e.g., object detection → crop → OCR) into reusable pipelines—all defined in Python.


🧠 What is a Composable Pop?

A Composable Pop is a flexible visual processing pipeline. Each Pop is made up of components, where each component can:

• Run inference with a specific model

• Crop detected regions and pass them to another component

• Trace objects over time

• Find contours or keypoints

All without needing to train your own model or write any machine learning code.


🔧 Installation & Setup

pip install eyepop-sdk requests pillow webui pybars3

Make sure you have:

• An EyePop developer key


📦 Composable Pop Examples

Each Pop is defined with a Pop object containing one or more component objects.

1. Detect people
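
A minimal sketch of a single-component Pop that runs a person-detection model on the whole frame. The import path, the InferenceComponent class name, and the model ID are assumptions based on EyePop's published examples; check the SDK reference and your dashboard for the exact values.

```python
from eyepop.worker.worker_types import Pop, InferenceComponent

# One component: run a person-detection model on the full frame.
# The model ID below is an assumption; use a model available to your account.
detect_people = Pop(components=[
    InferenceComponent(model='eyepop.person:latest')
])
```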


2. 2D body pose estimation
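
A sketch of a two-stage Pop: detect people, then crop each detection and forward the crop to a 2D body-pose model. The CropForward class name comes from this guide; the constructor arguments and model IDs are assumptions.

```python
from eyepop.worker.worker_types import Pop, InferenceComponent, CropForward

# Stage 1 detects people; CropForward crops each person box and passes the
# crop to a 2D body-pose model in stage 2. Model IDs are assumptions.
body_pose = Pop(components=[
    InferenceComponent(
        model='eyepop.person:latest',
        forward=CropForward(targets=[
            InferenceComponent(model='eyepop.person.2d-body-points:latest')
        ])
    )
])
```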


3. Face mesh from people
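
A sketch of a nested Pop: people are detected, each person crop goes to a face detector, and each face crop goes to a face-mesh model. All three model IDs are assumptions.

```python
from eyepop.worker.worker_types import Pop, InferenceComponent, CropForward

# People -> face crops -> face mesh. Each forward step narrows the region
# passed to the next model. Substitute the model IDs from your account.
face_mesh = Pop(components=[
    InferenceComponent(
        model='eyepop.person:latest',
        forward=CropForward(targets=[
            InferenceComponent(
                model='eyepop.person.face.short:latest',
                forward=CropForward(targets=[
                    InferenceComponent(model='eyepop.person.face-mesh:latest')
                ])
            )
        ])
    )
])
```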


4. OCR on text regions
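
A sketch of the detection-then-recognition pattern for OCR: a text-region detector finds text boxes, and each box is cropped and passed to a text-recognition model. Both model IDs are assumptions.

```python
from eyepop.worker.worker_types import Pop, InferenceComponent, CropForward

# Detect text regions, crop them, and run OCR on each crop.
# Model IDs are assumptions; check your EyePop dashboard.
ocr = Pop(components=[
    InferenceComponent(
        model='eyepop.text:latest',
        forward=CropForward(targets=[
            InferenceComponent(model='eyepop.text.recognize.square:latest')
        ])
    )
])
```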


5. Semantic segmentation with contours
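
A sketch combining a segmentation model with the ContourFinderComponent mentioned later in this guide: FullForward passes the segmentation output on, and the contour finder turns masks into polygons. The model ID and the contourType argument are assumptions.

```python
from eyepop.worker.worker_types import Pop, InferenceComponent, FullForward, ContourFinderComponent

# Run a segmentation model, then convert its masks into polygon contours.
# The model ID and the contourType argument name/value are assumptions.
segmentation = Pop(components=[
    InferenceComponent(
        model='eyepop.sam.small:latest',
        forward=FullForward(targets=[
            ContourFinderComponent(contourType='POLYGON')
        ])
    )
])
```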


🚀 Running a Pop
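
A minimal sketch of running the person-detection Pop from example 1 on a local image. The EyePopSdk.workerEndpoint() and upload(...).predict() pattern follows EyePop's published examples; the set_pop() method name and the EYEPOP_SECRET_KEY environment variable are assumptions to verify against your SDK version.

```python
from eyepop import EyePopSdk
from eyepop.worker.worker_types import Pop, InferenceComponent

detect_people = Pop(components=[
    InferenceComponent(model='eyepop.person:latest')  # assumed model ID
])

# workerEndpoint() reads the developer key from the environment
# (EYEPOP_SECRET_KEY in EyePop's examples -- verify for your setup).
with EyePopSdk.workerEndpoint() as endpoint:
    endpoint.set_pop(detect_people)                    # assumed method name
    result = endpoint.upload('example.jpg').predict()  # blocks until the prediction is ready
    print(result)
```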


🔍 Using Points, Boxes, or Prompts
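
This guide uses ComponentParams to pass prompts or a region of interest into a component (see the component list under Build Your Own Pops). The sketch below shows the general idea; the promptable model ID and the exact ComponentParams fields are assumptions, so check the SDK reference for the parameter names your model accepts.

```python
from eyepop.worker.worker_types import Pop, InferenceComponent, ComponentParams

# Sketch: pass a free-text prompt to a promptable localization model.
# The model ID and the ComponentParams fields are assumptions.
prompted = Pop(components=[
    InferenceComponent(
        model='eyepop.localize-objects:latest',
        params=ComponentParams(prompts=[{'prompt': 'person wearing a red jacket'}])
    )
])
```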


🧱 Build Your Own Pops

All components follow this general structure:
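
The sketch below shows the shape: a model to run, optional parameters, and an optional forward step that routes output to downstream components. Treat the argument names as assumptions and check them against the SDK reference.

```python
from eyepop.worker.worker_types import Pop, InferenceComponent, CropForward

# General shape of a component. Placeholders are illustrative, not real model IDs.
pop = Pop(components=[
    InferenceComponent(
        model='<model id>',                 # which model this component runs
        # params=ComponentParams(...),      # optional: prompts or ROI (see list below)
        forward=CropForward(targets=[       # optional: route output onward
            InferenceComponent(model='<downstream model id>')
        ])
    )
])
```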

Use:

• CropForward to crop detected regions and pass them to the next component

• FullForward to process the whole image

• ContourFinderComponent for polygon detection

• ComponentParams to pass prompts or a region of interest (ROI)


🛠️ Debug Tips

• Log levels can be adjusted using logging.getLogger().setLevel(...); see the example below
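
For example, to see what the SDK sends to the worker endpoint (the 'eyepop' logger name is an assumption; adjust to the loggers your SDK version exposes):

```python
import logging

# Verbose logging for debugging; dial back to INFO or WARNING in production.
logging.basicConfig(level=logging.DEBUG)
logging.getLogger('eyepop').setLevel(logging.DEBUG)
```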
