Advanced Mode - Some documentation on the 'Model to detect animals, vehicles, and persons' options

Dear AddaxAIers,

Just sharing some information here for those who were looking for more details on the ‘Model to detect animals, vehicles, and persons’ setting in the advanced mode of AddaxAI.

We currently (as per my clean install of AddaxAI v6.23 for Windows) have the following options:

1. MegaDetector 1000 Redwood
2. MegaDetector 1000 Spruce
3. MegaDetector 5a
4. MegaDetector 5b
5. Custom model

So, below are some pointers on each of them.

1. MegaDetector 1000 Redwood

If you want to join me on the cutting edge, or if you have large reptiles and/or you experience the “boxes in the sky” problem with MDv5, use MDv1000-redwood.

MD1000-redwood (the largest model) is the one I reach for (alongside MDv5) when I’m responsible for sending the most accurate results I can to a user. For this model, I chose YOLOv5x6 (the same architecture as MDv5) because (a) it is still (around) the highest accuracy on COCO among Ultralytics model families, (b) it was the last Ultralytics model whose recommended inference code carried a GPL (rather than AGPL) license, and (c) using the same architecture as MDv5 allowed me to do a pretty apples-to-apples comparison of the whole training process against MDv5.
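As a side note on the “boxes in the sky” problem: whichever detector you run, the spurious boxes usually sit at low confidence, so they can often be filtered out of the results file. Below is a minimal sketch of that idea, assuming the standard MegaDetector batch-output JSON format (an `images` list with per-detection `category`, `conf`, and `bbox`, plus a `detection_categories` map); the 0.2 threshold and the inline sample data are purely illustrative, not official values.

```python
# Minimal sketch: drop low-confidence detections from MegaDetector-style output.
# Assumptions: standard MegaDetector batch-output JSON structure; the inline
# sample dict stands in for a real results file, and 0.2 is an illustrative
# threshold, not a recommended value.

md_output = {
    "detection_categories": {"1": "animal", "2": "person", "3": "vehicle"},
    "images": [
        {
            "file": "cam01/IMG_0001.JPG",
            "detections": [
                {"category": "1", "conf": 0.92, "bbox": [0.1, 0.2, 0.3, 0.4]},
                # A low-confidence box at the top of the frame ("box in the sky")
                {"category": "1", "conf": 0.08, "bbox": [0.5, 0.0, 0.1, 0.1]},
            ],
        }
    ],
}

def confident_detections(output, threshold):
    """Return (file, category_name, conf) for detections at/above threshold."""
    names = output["detection_categories"]
    kept = []
    for image in output["images"]:
        for det in image.get("detections", []):
            if det["conf"] >= threshold:
                kept.append((image["file"], names[det["category"]], det["conf"]))
    return kept

kept = confident_detections(md_output, 0.2)
print(kept)  # only the 0.92 animal box survives; the 0.08 box is dropped
```

In practice you would load a real results file with `json.load()` instead of the inline dict, and pick a threshold that suits your own data.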

As per https://github.com/agentmorris/MegaDetector/blob/main/docs/release-notes/mdv1000-release.md

2. MegaDetector 1000 Spruce

MDv1000-spruce (based on YOLOv5s) was built specifically for the Conservation X Labs Sentinel device, a module that adds edge AI capabilities and connectivity to existing camera traps.

As per https://github.com/agentmorris/MegaDetector/blob/main/docs/release-notes/mdv1000-release.md

3. MegaDetector 5a

If you’re starting from scratch and you want the safe thing that has been studied extensively in the literature, use MDv5a.

As per https://github.com/agentmorris/MegaDetector/blob/main/docs/release-notes/mdv1000-release.md

MegaDetector v5a was trained on all MDv4 training data, plus new private data, and new public data.

As per https://github.com/agentmorris/MegaDetector/blob/main/megadetector.md#can-you-share-the-training-data

The first thing we always run is MDv5a… 95% of the time, the flowchart stops here. That’s in bold because we want to stress that this whole section is about the unusual case, not the typical case. There are enough complicated things in life, don’t make choosing MegaDetector versions more complicated than it needs to be.

As per https://github.com/agentmorris/MegaDetector/blob/main/megadetector.md#pro-tips-for-coaxing-every-bit-of-accuracy-out-of-megadetector

4. MegaDetector 5b

MDv5 is actually two models (MDv5a and MDv5b), differing only in their training data. Both appear to be more accurate than MDv4, and both are 3x-4x faster than MDv4, but each MDv5 model can outperform the other slightly, depending on your data. When in doubt, for now, try them both. If you really twist our arms to recommend one… we recommend MDv5a. But try them both and tell us which works better for you!

As per https://github.com/agentmorris/MegaDetector/releases

MegaDetector v5b was trained on all MDv4 training data, plus new private data, and new public data.

As per https://github.com/agentmorris/MegaDetector/blob/main/megadetector.md#can-you-share-the-training-data

If anything looks off, specifically if you’re missing animals that you think MegaDetector should be getting, or if you just want to see if you can squeeze a little more precision out, try MDv5b. Usually, we’ve found that MDv5a works at least as well as MDv5b, but every dataset is different.

As per https://github.com/agentmorris/MegaDetector/blob/main/megadetector.md#pro-tips-for-coaxing-every-bit-of-accuracy-out-of-megadetector
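If you do end up running both MDv5a and MDv5b on the same image set, one quick way to compare them is to count above-threshold detections per category in each results file. A minimal sketch, assuming both files use the standard MegaDetector output JSON format; the inline dicts stand in for the two real results files, and the threshold is illustrative:

```python
# Minimal sketch: compare two MegaDetector-style results files by counting
# above-threshold detections per category. Assumptions: standard MegaDetector
# output JSON structure; the inline dicts are stand-ins for the real files
# (e.g. loaded via json.load()), and 0.2 is an illustrative threshold.
from collections import Counter

def count_by_category(output, threshold=0.2):
    """Count above-threshold detections per category name."""
    names = output["detection_categories"]
    counts = Counter()
    for image in output["images"]:
        for det in image.get("detections", []):
            if det["conf"] >= threshold:
                counts[names[det["category"]]] += 1
    return counts

# Stand-ins for MDv5a and MDv5b results on the same images.
mdv5a_output = {
    "detection_categories": {"1": "animal", "2": "person", "3": "vehicle"},
    "images": [
        {"file": "a.jpg", "detections": [
            {"category": "1", "conf": 0.90, "bbox": [0.1, 0.1, 0.2, 0.2]}]},
    ],
}
mdv5b_output = {
    "detection_categories": {"1": "animal", "2": "person", "3": "vehicle"},
    "images": [
        {"file": "a.jpg", "detections": [
            {"category": "1", "conf": 0.85, "bbox": [0.1, 0.1, 0.2, 0.2]},
            {"category": "1", "conf": 0.40, "bbox": [0.6, 0.5, 0.2, 0.2]}]},
    ],
}

print("MDv5a:", count_by_category(mdv5a_output))
print("MDv5b:", count_by_category(mdv5b_output))
```

Raw counts only tell you where the two models disagree, of course; you would still need to eyeball the differing images to decide which model is actually right on your data.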

5. Custom model

This option allows you to load your own YOLOv5 model, i.e. a custom-trained detector in the YOLOv5 format.
