Deployment

As you enter the deployment step, you will see information about your model including the final precision vs. recall output, a representative image, the total number of images in your dataset, and the version. You can then add other models into the pipeline for deployment. EyePop.ai currently has four models integrated which you can choose to include in your deployment:

  • Text Detection sets up a pipeline that finds the object, looks for text inside the object, and returns the text data.

  • Person (2D Pipeline) allows you to combine your model with any people found in the scene, including key points that correspond to body landmarks. For example, if you want to know whether someone is looking at or holding the object, the returned key points tell you which body landmarks the object is next to when your data comes back.

  • Tracking should be used if you’re processing video. This tracks specific instances of the object over time, frame by frame. Turning on tracking is how you know that the object found in frame one is the same object found in frame two.

  • Segmentation allows you to get the outline or mask of the object. This can be used for lifting the object from the background of a scene and for precisely calculating the area that the object takes up within a scene.
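The add-on models above enrich the prediction data your deployment returns. As a rough sketch of how you might combine your model's detections with Person key points to answer "is someone holding the object?", the snippet below walks a hypothetical prediction payload (the field names are illustrative, not the exact EyePop.ai response schema):

```python
# Hypothetical prediction payload -- field names are illustrative,
# not the exact EyePop.ai response schema.
result = {
    "objects": [
        {"classLabel": "bottle", "confidence": 0.91,
         "x": 120, "y": 80, "width": 60, "height": 140},
        {"classLabel": "person", "confidence": 0.88,
         "x": 40, "y": 10, "width": 220, "height": 400,
         "keyPoints": [{"label": "left wrist", "x": 150, "y": 120}]},
    ]
}

def objects_near_keypoint(result, kp_label, max_dist=80):
    """Return labels of objects whose box center lies within max_dist
    pixels of the named body key point."""
    points = [
        kp for obj in result["objects"]
        for kp in obj.get("keyPoints", [])
        if kp["label"] == kp_label
    ]
    hits = []
    for obj in result["objects"]:
        if "keyPoints" in obj:
            continue  # skip the person itself
        cx = obj["x"] + obj["width"] / 2
        cy = obj["y"] + obj["height"] / 2
        for kp in points:
            if ((cx - kp["x"]) ** 2 + (cy - kp["y"]) ** 2) ** 0.5 <= max_dist:
                hits.append(obj["classLabel"])
                break
    return hits

print(objects_near_keypoint(result, "left wrist"))  # ['bottle']
```

The same pattern extends to the other add-ons: tracking adds a stable instance identifier per object across frames, and segmentation adds a per-object mask alongside the bounding box.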

You can also build your own pipeline in code and add that model to the deployment.
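Conceptually, a pipeline built in code is a composition of stages like the ones listed above. The structure below is purely illustrative (it is not the actual EyePop.ai pipeline schema), but it shows the shape of such a definition:

```python
# Illustrative pipeline description -- hypothetical structure,
# not the actual EyePop.ai configuration schema.
pipeline = {
    "model": "my-custom-model",   # hypothetical model name
    "stages": [
        {"type": "object-detection"},
        {"type": "tracking"},      # stable instance IDs across video frames
        {"type": "segmentation"},  # per-object masks
    ],
}

# A quick sanity check on the composition before deploying it.
known = {"object-detection", "text-detection", "person-2d",
         "tracking", "segmentation"}
assert all(stage["type"] in known for stage in pipeline["stages"])
print([stage["type"] for stage in pipeline["stages"]])
```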

Deployment Location

Choose where to deploy your model.

Deploying to the EyePop.ai Cloud creates a Pop that you can reuse from your dashboard as an API endpoint for your code. An AI worker server at EyePop.ai performs the analysis, and the model runs in the cloud.
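Once the Pop exists, your code can send images to it through the EyePop.ai Python SDK (`pip install eyepop`). The sketch below assumes the SDK's `EyePopSdk.workerEndpoint()` / `upload(...).predict()` calls and the `EYEPOP_POP_ID` / `EYEPOP_SECRET_KEY` environment variables; exact parameters may differ from your SDK version, so treat it as a starting point rather than a definitive integration:

```python
import os

def summarize(prediction):
    """Collapse a prediction dict into (label, confidence) pairs.
    Field names here are illustrative of a detection payload."""
    return [(o["classLabel"], round(o["confidence"], 2))
            for o in prediction.get("objects", [])]

if os.environ.get("EYEPOP_POP_ID"):
    # Hedged sketch of the EyePop.ai Python SDK; credentials are read
    # from EYEPOP_POP_ID / EYEPOP_SECRET_KEY.
    from eyepop import EyePopSdk
    with EyePopSdk.workerEndpoint() as endpoint:
        prediction = endpoint.upload("example.jpg").predict()
        print(summarize(prediction))
else:
    # Offline demonstration with a mocked prediction.
    print(summarize({"objects": [{"classLabel": "bottle",
                                  "confidence": 0.914}]}))
```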

Optimizing for the edge with the Qualcomm AI Hub allows you to run the model on Snapdragon devices and provides code examples to walk you through that process.
