Composable Pops
EyePop SDK Developer Guide: Using Composable Pops
This guide shows how to build and run Composable Pops using EyePop's Python SDK. Composable Pops let you chain models and post-processing steps (e.g., object detection → crop → OCR) into reusable pipelines, all defined in Python.
What is a Composable Pop?
A Composable Pop is a flexible visual processing pipeline. Each Pop is made up of components, where each component can:
• Run inference with a specific model
• Crop output and pass it to another component
• Trace objects over time
• Find contours or keypoints
All without needing to train your own model or write any machine learning code.
Installation & Setup
pip install eyepop-sdk requests pillow webui pybars3
Make sure you have:
• An EyePop developer key
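The SDK typically picks up credentials from environment variables. A minimal sketch, assuming the variable names `EYEPOP_SECRET_KEY` and `EYEPOP_POP_ID` (check your EyePop dashboard and the SDK docs for the exact names your version reads):

```shell
# Assumed variable names; copy the actual values from your EyePop dashboard.
export EYEPOP_SECRET_KEY="<your developer key>"
export EYEPOP_POP_ID="<your pop id>"
```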
Composable Pop Examples
Each Pop is defined using the Pop and Component objects.
1. Detect people
Pop(components=[
    InferenceComponent(
        model='eyepop.person:latest',
        categoryName="person"
    )
])
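Under the hood, a Pop is just serializable configuration that the endpoint receives. As a rough illustration only (the `type` discriminator and exact field names here are assumptions, not an official schema), the single-component Pop above corresponds to a small JSON document:

```python
import json

# Illustrative plain-dict equivalent of the person-detection Pop above.
# The real SDK builds and validates this from Pop/InferenceComponent objects;
# "type" is an assumed discriminator field for this sketch.
pop = {
    "components": [
        {
            "type": "inference",
            "model": "eyepop.person:latest",
            "categoryName": "person",
        }
    ]
}

print(json.dumps(pop, indent=2))
```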
2. 2D body pose estimation
Pop(components=[
    InferenceComponent(
        model='eyepop.person:latest',
        categoryName="person",
        forward=CropForward(
            maxItems=128,
            targets=[
                InferenceComponent(
                    model='eyepop.person.2d-body-points:latest',
                    categoryName="2d-body-points",
                    confidenceThreshold=0.25
                )
            ]
        )
    )
])
3. Face mesh from people
Pop(components=[
    InferenceComponent(
        model='eyepop.person:latest',
        categoryName="person",
        forward=CropForward(
            maxItems=128,
            targets=[
                InferenceComponent(
                    model='eyepop.person.face.short-range:latest',
                    categoryName="2d-face-points",
                    forward=CropForward(
                        boxPadding=1.5,
                        orientationTargetAngle=-90.0,
                        targets=[
                            InferenceComponent(
                                model='eyepop.person.face-mesh:latest',
                                categoryName="3d-face-mesh"
                            )
                        ]
                    )
                )
            ]
        )
    )
])
4. OCR on text regions
Pop(components=[
    InferenceComponent(
        model='eyepop.text:latest',
        categoryName="text",
        confidenceThreshold=0.7,
        forward=CropForward(
            maxItems=128,
            targets=[
                InferenceComponent(
                    model='eyepop.text.recognize.landscape:latest',
                    confidenceThreshold=0.1
                )
            ]
        )
    )
])
5. Semantic segmentation with contours
Pop(components=[
    InferenceComponent(
        model='eyepop.sam.small:latest',
        forward=FullForward(
            targets=[
                ContourFinderComponent(
                    contourType=ContourType.POLYGON,
                    areaThreshold=0.005
                )
            ]
        )
    )
])
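The `areaThreshold` appears to act as a fraction of the frame area (an assumption based on its magnitude); under that reading, 0.005 on a 1920×1080 frame suppresses contours smaller than roughly ten thousand pixels:

```python
# Back-of-the-envelope check: minimum contour area implied by
# areaThreshold=0.005 on a 1920x1080 frame, assuming the threshold
# is a fraction of the total image area.
width, height = 1920, 1080
min_area_px = 0.005 * width * height
print(min_area_px)  # 10368.0
```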
Running a Pop
import json

from eyepop import EyePopSdk

with EyePopSdk.workerEndpoint() as endpoint:
    endpoint.set_pop(pop_examples["2d-body-points"])  # pop_examples: a dict of the Pops defined above
    job = endpoint.upload("my-image.jpg")             # or use endpoint.load_from(url)
    for result in job.predict():
        print(json.dumps(result, indent=2))
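Each result is a plain JSON-like dict. A hedged sketch of pulling labels and confidences out of one result — the `objects`, `classLabel`, and `confidence` field names are assumptions, so inspect your own printed output for the exact schema:

```python
# Example result shape (assumed, abbreviated) and a small extraction step.
result = {
    "objects": [
        {"classLabel": "person", "confidence": 0.92,
         "x": 14, "y": 8, "width": 120, "height": 310},
    ]
}

# Collect (label, confidence) pairs; missing "objects" yields an empty list.
detections = [
    (obj["classLabel"], obj["confidence"])
    for obj in result.get("objects", [])
]
print(detections)  # [('person', 0.92)]
```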
Using Points, Boxes, or Prompts
params = [
    ComponentParams(componentId=1, values={
        "roi": {
            "points": [{"x": 100, "y": 150}],
            "boxes": [{"topLeft": {"x": 10, "y": 20}, "bottomRight": {"x": 200, "y": 300}}]
        }
    })
]
job = endpoint.upload("image.jpg", params=params)
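The ROI values in the example above look like pixel coordinates of the source image (an assumption based on the magnitudes used). If you track regions in normalized 0–1 form, scale them up before building the params, e.g.:

```python
# Hedged sketch: scale a normalized (0-1) box to pixel coordinates before
# using it as an ROI, assuming ROI values are expressed in source-image pixels.
img_w, img_h = 640, 480
norm = {"topLeft": {"x": 0.1, "y": 0.1}, "bottomRight": {"x": 0.5, "y": 0.5}}

roi_box = {
    "topLeft": {"x": norm["topLeft"]["x"] * img_w,
                "y": norm["topLeft"]["y"] * img_h},
    "bottomRight": {"x": norm["bottomRight"]["x"] * img_w,
                    "y": norm["bottomRight"]["y"] * img_h},
}
print(roi_box)
```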
Build Your Own Pops
All components follow this general structure:
InferenceComponent(
    model='your-model:latest',
    categoryName='your-label',
    forward=CropForward(
        boxPadding=0.25,
        targets=[...]
    )
)
Use:
• CropForward to pass cropped regions to downstream components
• FullForward to pass the whole image to downstream components
• ContourFinderComponent for polygon detection
• ComponentParams to pass prompts or an ROI
Debug Tips
• Log levels can be adjusted using logging.getLogger().setLevel(...)
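For example, to surface the SDK's debug output while keeping everything else at INFO — assuming the SDK logs under a logger named "eyepop"; adjust the name if your logs show otherwise:

```python
import logging

# INFO for the application as a whole, DEBUG for the (assumed) "eyepop" logger.
logging.basicConfig(level=logging.INFO)
logging.getLogger("eyepop").setLevel(logging.DEBUG)
```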