Deep Dives (FAQ)
Good Negative Examples: What They Are and Why They Matter
When collecting data for your computer vision model on EyePop.ai, it’s important to include negative examples in your dataset. Negative examples are images that do not contain the object or feature you want the model to detect or classify. These examples are critical for training a robust model that minimizes false positives.
1. What Are Good Negative Examples?
Definition: Images that lack the object(s) or feature(s) you are training your model to detect.
Purpose: Teach the model to distinguish between relevant and irrelevant features.
Characteristics of Good Negative Examples:
Represent the same environment or context where your target object might appear.
Include objects or features that could be mistaken for the target.
Have similar lighting, angles, or backgrounds as your positive examples.
2. Why Are Negative Examples Important?
Reduce False Positives: Helps the model avoid misidentifying unrelated objects as the target.
Improve Generalization: Ensures the model performs well on real-world data by understanding both relevant and irrelevant cases.
Enhance Model Confidence: Strengthens the model’s ability to differentiate between target and non-target data.
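The payoff of adding negatives can be measured directly: run the trained model over a held-out set of object-free images and count how often it fires. A minimal sketch of that check, where the confidence scores and the 0.5 threshold are illustrative stand-ins rather than part of EyePop.ai's API:

```python
def false_positive_rate(detections_per_negative, threshold=0.5):
    """Fraction of negative (object-free) images where the model fired.

    detections_per_negative: one list of confidence scores per image;
    an empty inner list means the model found nothing in that image.
    """
    if not detections_per_negative:
        return 0.0
    fired = sum(
        1
        for scores in detections_per_negative
        if any(score >= threshold for score in scores)
    )
    return fired / len(detections_per_negative)

# Three negative images: the model fired above threshold on one of them.
rate = false_positive_rate([[0.8], [0.1, 0.2], []])
```

A falling false-positive rate across training rounds is a good sign that the negatives you added are doing their job.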
3. Examples of Good Negative Examples
Context-Dependent Negatives:
For a model detecting dogs, include images of other animals (e.g., cats, foxes) in similar settings.
For detecting medical sample bags, include images of hands, tables, and other lab equipment without the bags.
Similar Backgrounds Without the Target Object:
Street scenes without cars for a vehicle detection model.
Office settings without laptops for a laptop detection model.
Challenging Distractors:
Objects or patterns that resemble the target but are not.
Shadows, reflections, or partial objects that might confuse the model.
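One common way to find challenging distractors is hard-negative mining: score your pool of object-free images with the current model and keep the ones it is most (wrongly) confident about. A minimal sketch, where the image IDs and scores are hypothetical model outputs, not an EyePop.ai API:

```python
def mine_hard_negatives(scored_negatives, top_k=3):
    """Pick the negatives the model is most confident about.

    scored_negatives: (image_id, max_confidence) pairs, where
    max_confidence is the highest detection score the model produced
    on that object-free image. The top scorers are the distractors
    most worth adding to the training set.
    """
    ranked = sorted(scored_negatives, key=lambda pair: pair[1], reverse=True)
    return [image_id for image_id, _ in ranked[:top_k]]

hard = mine_hard_negatives(
    [("img_a", 0.05), ("img_b", 0.91), ("img_c", 0.40), ("img_d", 0.77)],
    top_k=2,
)
# hard == ["img_b", "img_d"]
```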
4. Common Mistakes in Choosing Negative Examples
Irrelevant Negatives:
Images that are completely unrelated to your target domain (e.g., beach scenes for a warehouse inventory model).
Overly Simplistic Examples:
Plain backgrounds or empty images that don’t reflect the operational context.
Lack of Diversity:
Using only a narrow set of negatives that don’t cover the variety of distractions your model may encounter.
5. Tips for Collecting High-Quality Negative Examples
Use the same sources as your positive examples for consistency.
Ensure a mix of backgrounds, objects, and lighting conditions.
Include ambiguous cases to challenge the model (e.g., objects partially occluded or in unusual orientations).
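Before uploading, it can help to sanity-check the mix of positives and negatives in your collection. A minimal sketch, assuming a local folder convention of `positive/` and `negative/` subfolders and an illustrative 20% minimum negative share (neither is an EyePop.ai requirement):

```python
from pathlib import Path

def dataset_mix(root, min_negative_share=0.2):
    """Count positive vs. negative images and flag an unbalanced mix.

    Assumes images are sorted into root/positive and root/negative
    folders -- an illustrative local convention, not an EyePop.ai one.
    """
    exts = {".jpg", ".jpeg", ".png"}
    counts = {}
    for split in ("positive", "negative"):
        folder = Path(root) / split
        counts[split] = (
            sum(1 for p in folder.glob("*") if p.suffix.lower() in exts)
            if folder.is_dir()
            else 0
        )
    total = counts["positive"] + counts["negative"]
    counts["negative_share"] = counts["negative"] / total if total else 0.0
    counts["balanced"] = counts["negative_share"] >= min_negative_share
    return counts
```

The right share of negatives depends on your domain; the point of the check is simply to catch a dataset that has almost none before you spend time labelling.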
If not all the pictures have been fully labelled (i.e., don’t have the green check box), can we still train the model, or will that cause issues?
Yes, you can start training whenever you'd like. Any unlabelled images are simply ignored.
After images are uploaded to EyePop.ai and someone has drawn boxes over the desired objects for only a portion of the total uploaded images (e.g., 3,000 images uploaded, but only 500 boxed), can the AI be re-trained on just those 500 boxed images to improve its accuracy when reviewing the remaining 2,500?
Yes. As soon as you finish training a model, we automatically re-run auto-labelling across all uploaded images, which makes the remaining ones easier to label.
Can two people be logged in and working on labelling at the same time?
Yes, just make sure two people aren't working on the same image simultaneously.