New object detector model trained on FathomVerse data
- Joost Daniels
- Jun 19
We recently deployed a new object detector model in FathomVerse. Object detector models predict where in an image an object of interest is located, without trying to classify it. This model generates the AI proposals you see in Bound.
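If you're curious what that looks like under the hood, here is a rough Python sketch of the kind of output a class-agnostic detector produces: boxes and confidence scores, but no labels. The names and the threshold below are illustrative placeholders, not our actual code.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    x_min: float  # box coordinates in pixels
    y_min: float
    x_max: float
    y_max: float
    score: float  # detector confidence between 0 and 1; note: no class label

def keep_confident(detections: list[Proposal], threshold: float = 0.5) -> list[Proposal]:
    """Keep only confident detections to surface as AI proposals.

    The 0.5 threshold is a made-up placeholder for illustration.
    """
    return [d for d in detections if d.score >= threshold]
```

Deciding what is inside each box is a separate problem; this model only cares about where things are.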

This new model was trained exclusively on consensus data that FathomVerse players generated by playing Bound. That amounts to about 50,000 localizations across 10,000 unique images, including the “urchin fields” and “coral cliffs” that have been a popular topic of discussion on the FathomVerse Discord.
We excluded a portion of the dataset from training so we could evaluate the model's performance and compare it with other models. A key metric for assessing our new model is recall, which measures the percentage of desired objects that the model detects. Compared against our player consensus data, the new model has a recall of 81%, while our previous model detected only 44% of the regions identified by players. This indicates that the new model significantly outperforms the old one on the recently added data.
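For the technically minded, a recall number like this can be computed by counting each player-labeled box as "found" when some model detection overlaps it sufficiently. Below is a simplified sketch; the 0.5 intersection-over-union threshold and the one-sided matching are illustrative assumptions, not necessarily the exact evaluation procedure we used.

```python
Box = tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max) in pixels

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union: how much two boxes overlap (0 = none, 1 = identical)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def recall(ground_truth: list[Box], detections: list[Box], iou_thresh: float = 0.5) -> float:
    """Fraction of player-labeled boxes matched by at least one model detection."""
    if not ground_truth:
        return 1.0
    found = sum(
        1 for gt in ground_truth
        if any(iou(gt, det) >= iou_thresh for det in detections)
    )
    return found / len(ground_truth)

# Example: the model finds one of two player-labeled boxes -> recall = 0.5
players = [(10, 10, 50, 50), (60, 60, 90, 90)]
model = [(12, 11, 48, 52)]
print(recall(players, model))  # 0.5
```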
One reason for this improvement is that the original model, nicknamed “Megalodon,” was trained on a dataset with different animals, substrates, and lighting conditions than much of our current data. Additionally, the source data for the old model was not intended to provide full coverage of every animal in the scene, unlike the annotations our Bound players are creating now.
What does this mean for players?
You should start seeing more accurate AI proposals in Bound soon! We will continue to use player contributions to improve this model. We also adjusted the minimum size of AI proposals provided to players, so you might find some smaller boxes as well. Check out the images below to see the new object detector in action!
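And if you're wondering what adjusting the minimum proposal size means in practice: it's essentially a size filter applied to the detector's output before proposals reach you. A minimal sketch, with a purely illustrative pixel value:

```python
Box = tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max) in pixels

def drop_tiny_boxes(boxes: list[Box], min_side_px: float = 16.0) -> list[Box]:
    """Discard proposals whose width or height is below the minimum side length.

    Lowering min_side_px lets smaller boxes through; 16 px is a made-up
    placeholder, not the value actually used in the game.
    """
    return [
        (x0, y0, x1, y1)
        for (x0, y0, x1, y1) in boxes
        if (x1 - x0) >= min_side_px and (y1 - y0) >= min_side_px
    ]
```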