Human Review


Last updated 4 months ago

Now you’ve reached human review (that’s you!). On the right side of the screen, EyePop.ai displays the images prioritized for human review based on your model’s current health. The image data shown on the left includes the number of images in the dataset, the percentage that are labeled, each image’s Image ID, and its priority (how critical it is to train the model on that particular image).

Your task is to verify that all target objects are properly labeled. Approve accurate labels, ignore irrelevant or low-quality images, and save negative images that correctly contain no detections of the object. You can also adjust the bounding boxes to fine-tune the labeled areas. Keyboard shortcuts are your friends here: click the keyboard icon above the image data to see the full list. Some of the most commonly used include:

  • S (Save): everything looks good; keep the image and continue on.

  • R (Ignore): excludes the image from the dataset.

  • Arrow keys: navigate left and right through the images.

  • Pinch zoom or scroll with the mouse to enlarge images.

Once you have an acceptable amount of data, the app will show a Go to Training button (it pops up in the bottom right corner). Click it to move on to the pretraining check.

Common Labeling Mistakes

  1. Over-sized Box: Likely caused by manual copying. This can reduce prediction accuracy because:

    • It’s hard to verify correctness due to massive overlaps.

    • It could cover other nearby objects.

  2. Double Boxes: Likely caused by the auto-labeler. Duplicate boxes on the same object are very harmful to model learning.

  3. Missed Box: Likely caused by manual review errors. When objects are grouped together, it’s easy to miss one box among several due to overlapping annotations.
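Double boxes in particular can be caught programmatically before training: two labels on the same object will have a very high Intersection-over-Union (IoU). The sketch below is not part of the EyePop.ai tooling; it is a hypothetical offline sanity check you could run on exported annotations, assuming boxes are given as `(x_min, y_min, x_max, y_max)` pixel tuples.

```python
# Hypothetical pre-training sanity check (not an EyePop.ai API):
# flag likely "double boxes" by finding box pairs whose IoU is very high.
from itertools import combinations

def iou(a, b):
    """Intersection-over-Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def find_double_boxes(boxes, threshold=0.8):
    """Return index pairs whose overlap suggests a duplicated label."""
    return [(i, j)
            for (i, a), (j, b) in combinations(enumerate(boxes), 2)
            if iou(a, b) >= threshold]

# Example: the first two boxes nearly coincide, the third is separate.
boxes = [(10, 10, 50, 50), (12, 11, 51, 52), (100, 100, 140, 140)]
print(find_double_boxes(boxes))  # → [(0, 1)]
```

A lower threshold (e.g. 0.5) would also surface partially overlapping boxes, which may be legitimate for crowded scenes, so the cutoff is a judgment call for your dataset.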
