An evaluation of platforms for processing camera-trap data using artificial intelligence

Juliana Vélez, William McShea, Hila Shamon, Paula J. Castiblanco-Camacho, Michael A. Tabak, Carl Chalmers, Paul Fergus, John Fieberg

Research output: Contribution to journal › Review article › peer-review

17 Scopus citations

Abstract

Camera traps have quickly transformed the way in which many ecologists study the distribution of wildlife species, their activity patterns and interactions among members of the same ecological community. Although they provide a cost-effective method for monitoring multiple species over large spatial and temporal scales, the time required to process the data can limit the efficiency of camera-trap surveys. Thus, there has been considerable attention given to the use of artificial intelligence (AI), specifically deep learning, to help process camera-trap data. Using deep learning for these applications involves training algorithms, such as convolutional neural networks (CNNs), to use particular features in the camera-trap images to automatically detect objects (e.g. animals, humans, vehicles) and to classify species. To help overcome the technical challenges associated with training CNNs, several research communities have recently developed platforms that incorporate deep learning in easy-to-use interfaces. We review key characteristics of four AI platforms—Conservation AI, MegaDetector, MLWIC2: Machine Learning for Wildlife Image Classification and Wildlife Insights—and two auxiliary platforms—Camelot and Timelapse—that incorporate AI output for processing camera-trap data. We compare their software and programming requirements, AI features, data management tools and output format. We also provide R code and data from our own work to demonstrate how users can evaluate model performance. We found that species classifications from Conservation AI, MLWIC2 and Wildlife Insights generally had low to moderate recall. Yet, the precision for some species and higher taxonomic groups was high, and MegaDetector and MLWIC2 had high precision and recall when classifying images as either ‘blank’ or ‘animal’. These results suggest that most users will need to review AI predictions, but that AI platforms can improve the efficiency of camera-trap-data processing by allowing users to filter their dataset into subsets (e.g. of certain taxonomic groups or blanks) that can be verified using bulk actions. By reviewing features of popular AI-powered platforms and sharing an open-source GitBook that illustrates how to manage AI output to evaluate model performance, we hope to facilitate ecologists' use of AI to process camera-trap data.
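As an illustration of the model-performance evaluation the abstract describes (the paper's own companion code is in R), precision and recall for a binary 'blank' versus 'animal' image classification can be computed as below. The labels are hypothetical, for illustration only.

```python
# Precision and recall for a binary 'animal' vs. 'blank' classifier.
# The labels below are hypothetical; the paper's companion GitBook
# provides R code and real camera-trap data for this kind of evaluation.

true_labels = ["animal", "animal", "blank", "animal", "blank", "blank", "animal"]
pred_labels = ["animal", "blank",  "blank", "animal", "animal", "blank", "animal"]

# Count true positives, false positives and false negatives,
# treating 'animal' as the positive class.
tp = sum(t == "animal" and p == "animal" for t, p in zip(true_labels, pred_labels))
fp = sum(t == "blank" and p == "animal" for t, p in zip(true_labels, pred_labels))
fn = sum(t == "animal" and p == "blank" for t, p in zip(true_labels, pred_labels))

precision = tp / (tp + fp)  # of images predicted 'animal', fraction truly containing one
recall = tp / (tp + fn)     # of images truly containing an animal, fraction detected

print(f"precision = {precision:.2f}, recall = {recall:.2f}")
# → precision = 0.75, recall = 0.75
```

High precision with low recall (as reported for several species classifiers) means that accepted predictions are usually correct, but many animal images are missed and still require manual review.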

Original language: English (US)
Pages (from-to): 459-477
Number of pages: 19
Journal: Methods in Ecology and Evolution
Volume: 14
Issue number: 2
DOIs
State: Published - Feb 2023

Bibliographical note

Funding Information:
We thank Juan David Rodríguez and volunteers for their assistance with camera-trap data collection. We also appreciate the constant support from César Barrera and Eduardo Enciso who allowed us to set up camera traps on their properties and facilitated field expeditions. Thank you to American Prairie for allowing us to sample on their lands, and to John and Adrienne Mars for supporting Smithsonian's Great Plains Science Program. We acknowledge the valuable comments provided by Tanya Birch, Dan Morris, Saul Greenberg, and two anonymous reviewers, that considerably improved the manuscript. This research was made possible thanks to funding from the Colciencias-Fulbright Scholarship, the WWF's Russell E. Train Education for Nature Program (EFN), the Interdisciplinary Center for the Study of Global Change Fellowship, the Department of Fisheries, Wildlife and Conservation Biology at the University of Minnesota and the Smithsonian's National Zoo and Conservation Biology Institute. John Fieberg received partial salary support from the Minnesota Agricultural Experimental Station.

Publisher Copyright:
© 2022 The Authors. Methods in Ecology and Evolution published by John Wiley & Sons Ltd on behalf of British Ecological Society.

Keywords

  • artificial intelligence
  • camera traps
  • computer vision
  • data processing
  • deep learning
  • image classification
  • remote sensing
  • review
