Composable Pops
EyePop SDK Developer Guide: Using Composable Pops
This guide shows how to build and run Composable Pops using EyePop’s Python SDK. Composable Pops let you chain models and post-processing steps (e.g., object detection → crop → OCR) into reusable pipelines—all defined in Python.
🧠 What is a Composable Pop?
A Composable Pop is a flexible visual processing pipeline. Each Pop is made up of components, where each component can:
• Run inference with a specific model
• Crop output and pass it to another component
• Trace objects over time
• Find contours or keypoints
All without needing to train your own model or write any machine learning code.
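The chaining idea can be sketched in plain Python. This is a conceptual illustration only: the functions below are made-up stand-ins, not the EyePop API, which expresses the same detection → crop → OCR shape declaratively with Pop and component objects.

```python
# Conceptual sketch of a "Pop": a chain of processing steps.
# These functions are illustrative stand-ins, not the EyePop API.

def detect_text(image):
    # Pretend detector: returns bounding boxes of text regions.
    return [{"x": 10, "y": 20, "w": 100, "h": 30}]

def crop(image, box):
    # Pretend crop: returns the sub-image for one box.
    return {"region": box}

def recognize_text(region):
    # Pretend OCR on a cropped region.
    return "hello"

def run_pop(image):
    # detection -> crop -> OCR, the same shape as a Composable Pop.
    return [recognize_text(crop(image, box)) for box in detect_text(image)]

print(run_pop("my-image.jpg"))  # ['hello']
```

In the real SDK, each stage is a component object and the forwarding between stages is declared with forward targets, as shown in the examples below.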
🔧 Installation & Setup
pip install eyepop requests pillow webui pybars3
Make sure you have:
• An EyePop developer key (the SDK typically reads credentials from the EYEPOP_SECRET_KEY and EYEPOP_POP_ID environment variables)
📦 Composable Pop Examples
Each Pop is defined using the Pop and component classes from the Python SDK (InferenceComponent, CropForward, FullForward, ContourFinderComponent, ContourType, ComponentParams); in recent SDK versions these are importable from the eyepop package's worker types module.
1. Detect people
Pop(components=[
    InferenceComponent(
        model='eyepop.person:latest',
        categoryName="person"
    )
])
2. 2D body pose estimation
Pop(components=[
    InferenceComponent(
        model='eyepop.person:latest',
        categoryName="person",
        forward=CropForward(
            maxItems=128,
            targets=[
                InferenceComponent(
                    model='eyepop.person.2d-body-points:latest',
                    categoryName="2d-body-points",
                    confidenceThreshold=0.25
                )
            ]
        )
    )
])
3. Face mesh from people
Pop(components=[
    InferenceComponent(
        model='eyepop.person:latest',
        categoryName="person",
        forward=CropForward(
            maxItems=128,
            targets=[
                InferenceComponent(
                    model='eyepop.person.face.short-range:latest',
                    categoryName="2d-face-points",
                    forward=CropForward(
                        boxPadding=1.5,
                        orientationTargetAngle=-90.0,
                        targets=[
                            InferenceComponent(
                                model='eyepop.person.face-mesh:latest',
                                categoryName="3d-face-mesh"
                            )
                        ]
                    )
                )
            ]
        )
    )
])
4. OCR on text regions
Pop(components=[
    InferenceComponent(
        model='eyepop.text:latest',
        categoryName="text",
        confidenceThreshold=0.7,
        forward=CropForward(
            maxItems=128,
            targets=[
                InferenceComponent(
                    model='eyepop.text.recognize.landscape:latest',
                    confidenceThreshold=0.1
                )
            ]
        )
    )
])
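When an OCR Pop like this runs, the prediction JSON nests the recognition results under each detected text region. A small helper can walk the result and collect classLabel values at any depth. This is a sketch: the key name classLabel and the nesting shape are assumptions based on typical EyePop results, so print your own output first and adjust.

```python
def collect_labels(node):
    """Recursively collect 'classLabel' values from a nested prediction dict."""
    labels = []
    if isinstance(node, dict):
        if "classLabel" in node:
            labels.append(node["classLabel"])
        for value in node.values():
            labels.extend(collect_labels(value))
    elif isinstance(node, list):
        for item in node:
            labels.extend(collect_labels(item))
    return labels

# Made-up result in the nested shape described above (not real SDK output):
sample = {
    "objects": [
        {"classLabel": "text", "objects": [{"classLabel": "HELLO"}]}
    ]
}
print(collect_labels(sample))  # ['text', 'HELLO']
```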
5. Semantic segmentation with contours
Pop(components=[
    InferenceComponent(
        model='eyepop.sam.small:latest',
        forward=FullForward(
            targets=[
                ContourFinderComponent(
                    contourType=ContourType.POLYGON,
                    areaThreshold=0.005
                )
            ]
        )
    )
])
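The areaThreshold above filters out small contours on the server side. If you want to apply a similar filter yourself on returned polygon points, the shoelace formula gives the polygon's area. This sketch assumes contour points arrive as a list of {"x": ..., "y": ...} dicts; verify against your actual results.

```python
def polygon_area(points):
    """Shoelace formula for the area of a polygon given as [{'x': ..., 'y': ...}, ...]."""
    n = len(points)
    area = 0.0
    for i in range(n):
        j = (i + 1) % n  # next vertex, wrapping around to the first
        area += points[i]["x"] * points[j]["y"]
        area -= points[j]["x"] * points[i]["y"]
    return abs(area) / 2.0

# A 100 x 50 axis-aligned rectangle has area 5000:
rect = [{"x": 0, "y": 0}, {"x": 100, "y": 0}, {"x": 100, "y": 50}, {"x": 0, "y": 50}]
print(polygon_area(rect))  # 5000.0
```

To mirror the Pop's areaThreshold (a fraction of the image), divide this area by the image's width times height before comparing.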
🚀 Running a Pop
import json

from eyepop import EyePopSdk

with EyePopSdk.workerEndpoint() as endpoint:
    endpoint.set_pop(pop_examples["2d-body-points"])  # pop_examples: your dict of the Pop definitions above
    job = endpoint.upload("my-image.jpg")  # or use endpoint.load_from(url)
    result = job.predict()  # one prediction per uploaded image
    print(json.dumps(result, indent=2))
🔍 Using Points, Boxes, or Prompts
params = [
    ComponentParams(componentId=1, values={
        "roi": {
            "points": [{"x": 100, "y": 150}],
            "boxes": [{"topLeft": {"x": 10, "y": 20}, "bottomRight": {"x": 200, "y": 300}}]
        }
    })
]
job = endpoint.upload("image.jpg", params=params)
🧱 Build Your Own Pops
All components follow this general structure:
InferenceComponent(
    model='your-model:latest',
    categoryName='your-label',
    forward=CropForward(
        boxPadding=0.25,
        targets=[...]
    )
)
Use:
• CropForward for passing regions
• FullForward to process the whole image
• ContourFinderComponent for polygon detection
• ComponentParams to pass prompts or ROI
🛠️ Debug Tips
• Log levels can be adjusted using logging.getLogger().setLevel(...)