Human Review
Now you’ve reached human review (that’s you!). On the right side of the screen, EyePop.ai displays the images that are prioritized for human review based on your model’s health. Image data shown on the left includes the number of images in the dataset, the percentage that are labeled, the Image ID for each image, and the image priority (how critical it is to train the model on this particular image).
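EyePop.ai does not publish the underlying schema for this panel, but it can help to think of each entry in the review queue as a small record sorted by priority. The names below (ReviewImage, image_id, is_labeled, priority, review_order) are purely illustrative assumptions, not the product’s API.

```python
from dataclasses import dataclass

@dataclass
class ReviewImage:
    """One entry in the human-review queue (hypothetical field names)."""
    image_id: str     # the Image ID shown in the left-hand panel
    is_labeled: bool  # contributes to the "percentage labeled" figure
    priority: float   # how critical this image is for training; higher means review sooner

def review_order(images: list[ReviewImage]) -> list[ReviewImage]:
    """Sort the queue so the most critical images are reviewed first."""
    return sorted(images, key=lambda img: img.priority, reverse=True)
```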
Your task is to verify that all target objects are properly labeled. Approve accurate labels, ignore irrelevant or low-quality images, and save negative images (images where the target object is correctly absent). You can also adjust the bounding boxes to fine-tune the labeled areas. Keyboard shortcuts are your friends here: click the keyboard icon above the image data to see the full list. Some of the most commonly used shortcuts (also sketched after this list) include:
S (Save): everything looks good; keep the image and continue on.
R (Ignore): exclude the image from the dataset.
Arrow keys can be used to navigate left and right through the images.
Pinch to zoom or scroll with the mouse to enlarge images.
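The app handles these shortcuts internally; the sketch below only illustrates how the S/R keys and arrow keys map onto review decisions. ReviewAction, KEYMAP, and handle_key are hypothetical names, not part of EyePop.ai’s API.

```python
from enum import Enum

class ReviewAction(Enum):
    SAVE = "save"      # S: labels look good, keep the image
    IGNORE = "ignore"  # R: exclude the image from the dataset
    NEXT = "next"      # right arrow: go to the next image
    PREV = "prev"      # left arrow: go to the previous image

# Hypothetical key-to-action mapping mirroring the shortcuts above.
KEYMAP = {
    "s": ReviewAction.SAVE,
    "r": ReviewAction.IGNORE,
    "ArrowRight": ReviewAction.NEXT,
    "ArrowLeft": ReviewAction.PREV,
}

def handle_key(key: str) -> ReviewAction | None:
    """Translate a key press into a review action (None if the key is unmapped)."""
    return KEYMAP.get(key)
```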
Once you have an acceptable amount of data, the app shows a Go to Training button in the bottom-right corner. Click it to move on to the pretraining check.
Over-sized Box: Likely caused by manual copying. This can reduce prediction accuracy because:
It’s hard to verify correctness due to massive overlaps.
It could cover other objects if they are close.
Double Boxes: Likely caused by the auto-labeler. Duplicate boxes on the same object are very harmful to model learning; a simple overlap check (sketched after this list) can help flag both double and over-sized boxes.
Missed Box: Likely caused by manual review errors. When objects are grouped together, it’s easy to miss a box among multiple overlapping annotations.
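Neither the auto-labeler nor the review UI exposes such a check directly; the sketch below is just one way to flag likely double boxes and over-sized boxes with an intersection-over-union (IoU) test before they reach human review. The (x_min, y_min, x_max, y_max) box format and the thresholds are assumptions.

```python
def area(box):
    """Area of an (x_min, y_min, x_max, y_max) box."""
    return max(0.0, box[2] - box[0]) * max(0.0, box[3] - box[1])

def intersection(a, b):
    """Area of overlap between two boxes."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0.0, w) * max(0.0, h)

def iou(a, b):
    """Intersection over union: 1.0 means identical boxes, 0.0 means no overlap."""
    inter = intersection(a, b)
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def flag_suspect_boxes(boxes, double_iou=0.8, cover_ratio=0.9):
    """Return (i, j, reason) pairs worth a second look during human review."""
    flags = []
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if iou(boxes[i], boxes[j]) >= double_iou:
                flags.append((i, j, "possible double box"))
            elif intersection(boxes[i], boxes[j]) >= cover_ratio * min(area(boxes[i]), area(boxes[j])):
                flags.append((i, j, "possible over-sized box covering a neighbor"))
    return flags

# Example: the first two boxes nearly coincide (double box); the third swallows both.
print(flag_suspect_boxes([(10, 10, 50, 50), (12, 11, 49, 52), (0, 0, 200, 200)]))
```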