
<div align="center"> <font size="6"> A Collaborative Deep Learning Framework for Conservation </font> <br> <hr> <a href="https://pypi.org/project/PytorchWildlife"><img src="https://img.shields.io/pypi/v/PytorchWildlife?color=limegreen" /></a> <a href="https://pypi.org/project/PytorchWildlife"><img src="https://static.pepy.tech/badge/pytorchwildlife" /></a> <a href="https://pypi.org/project/PytorchWildlife"><img src="https://img.shields.io/pypi/pyversions/PytorchWildlife" /></a> <a href="https://huggingface.co/spaces/ai-for-good-lab/pytorch-wildlife"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Demo-blue" /></a> <a href="https://colab.research.google.com/drive/1rjqHrTMzEHkMualr4vB55dQWCsCKMNXi?usp=sharing"><img src="https://img.shields.io/badge/Colab-Demo-blue?logo=GoogleColab" /></a> <!-- <a href="https://colab.research.google.com/drive/16-OjFVQ6nopuP-gfqofYBBY00oIgbcr1?usp=sharing"><img src="https://img.shields.io/badge/Colab-Video detection-blue?logo=GoogleColab" /></a> --> <a href="https://cameratraps.readthedocs.io/en/latest/"><img src="https://img.shields.io/badge/read-docs-yellow?logo=ReadtheDocs" /></a> <a href="https://github.com/microsoft/CameraTraps/blob/main/LICENSE"><img src="https://img.shields.io/pypi/l/PytorchWildlife" /></a> <a href="https://discord.gg/TeEVxzaYtm"><img src="https://img.shields.io/badge/any_text-Join_us!-blue?logo=discord&label=Discord" /></a> <br><br> </div>

🐾 Introduction

At the core of our mission is the desire to create a harmonious space where conservation scientists from all over the globe can unite to share, grow, and use datasets and deep learning architectures for wildlife conservation. We have been inspired by the potential and capabilities of MegaDetector, and we deeply value its contributions to the community. As we forge ahead with Pytorch-Wildlife, under which MegaDetector now resides, please know that we remain committed to supporting, maintaining, and developing MegaDetector, ensuring its continued relevance, expansion, and utility.

Pytorch-Wildlife is pip installable:

```bash
pip install PytorchWildlife
```

To use the newest version of MegaDetector with all the existing functionalities, you can use our Hugging Face interface or simply load the model with Pytorch-Wildlife. The weights will be automatically downloaded:

```python
from PytorchWildlife.models import detection as pw_detection
detection_model = pw_detection.MegaDetectorV6()
```

For those interested in accessing the previous MegaDetector repository, which utilizes the same MegaDetectorV5 model weights and was primarily developed by Dan Morris during his time at Microsoft, please visit the archive directory, or you can visit this forked repository that Dan Morris is actively maintaining.

> [!TIP]
> If you have any questions regarding MegaDetector and Pytorch-Wildlife, please email us or join us in our Discord channel.

📣 Announcements

🎉🎉🎉 Pytorch-Wildlife Version 1.1.0 is out!

<details> <summary><font size="3">👉 Click for more updates</font></summary> <ul> <li> Issues <a href="https://github.com/microsoft/CameraTraps/issues/523">#523</a>, <a href="https://github.com/microsoft/CameraTraps/issues/524">#524</a> and <a href="https://github.com/microsoft/CameraTraps/issues/526">#526</a> have been solved! </li> <li> PyTorchWildlife is now compatible with Supervision 0.23+ and Python 3.10+! </li> <li> CUDA 12.x compatibility. </li> </ul> </details>

:racing_car::dash::dash: MegaDetectorV6: SMALLER, BETTER, and FASTER!

After a few months of public beta testing, we are finally ready to officially release the 6th version of MegaDetector, MegaDetectorV6! This generation of MegaDetector focuses on computational efficiency, performance, modernized model architectures, and licensing. We have trained multiple new models using different architectures, including Yolo-v9, Yolo-v11, and RT-Detr, for maximum user flexibility. We have a rolling release schedule for the different versions of MegaDetectorV6; as the first step, we are releasing the compact version of MegaDetectorV6 with Yolo-v9 (MDv6-ultralytics-yolov9-compact, MDv6-c for short). From now on, we encourage our users to use MegaDetectorV6 as their default animal detection model.

This MDv6-c model is optimized for performance and low-budget devices. It has only one-sixth (SMALLER) of the parameters of the previous MegaDetectorV5 and exhibits 12% higher recall (BETTER) on animal detection in our validation datasets. In other words, MDv6-c has significantly fewer false negatives when detecting animals, making it a more robust animal detection model than MegaDetectorV5. Furthermore, one of our testers reported that the speed of MDv6-c is at least 5 times FASTER than MegaDetectorV5 on their datasets.
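The "5 times FASTER" figure above is a single tester's report, so it is worth measuring throughput on your own hardware. A minimal, model-agnostic timing harness is sketched below; the two stand-in workloads are placeholders so the sketch runs anywhere, and in practice you would pass a real call such as `lambda: detection_model.single_image_detection(img)` instead.

```python
import time

def seconds_per_call(run_inference, n_calls=50):
    """Average wall-clock seconds per call of a zero-argument function."""
    start = time.perf_counter()
    for _ in range(n_calls):
        run_inference()
    return (time.perf_counter() - start) / n_calls

# Stand-in workloads only; replace with real model calls, e.g.
# lambda: detection_model.single_image_detection(img)
light = lambda: sum(range(1_000))
heavy = lambda: sum(range(100_000))

print(seconds_per_call(heavy) > seconds_per_call(light))  # heavier workload is slower
```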

| Models | Parameters | Precision | Recall |
|--------|------------|-----------|--------|
| MegaDetectorV5 | 121M | 0.96 | 0.73 |
| MegaDetectorV6-c | 22M | 0.92 | 0.85 |
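For reference, the precision and recall figures in the table reduce to simple counts of true positives (TP), false positives (FP), and false negatives (FN); fewer false negatives is exactly what drives the higher recall. A small sketch of the arithmetic, with made-up counts chosen purely for illustration (they are not the real validation numbers):

```python
def precision(tp, fp):
    """Fraction of predicted detections that are correct."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of ground-truth animals that were detected."""
    return tp / (tp + fn)

# Illustrative counts only: out of 100 animals, a detector
# finds 85 and raises 7 false alarms.
tp, fp, fn = 85, 7, 15
print(round(precision(tp, fp), 2))  # 0.92
print(round(recall(tp, fn), 2))     # 0.85
```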

Learn how to use MegaDetectorV6 in our image demo and video demo.

:bangbang: Model licensing (IMPORTANT!!)

The Pytorch-Wildlife package is under MIT; however, some of the models in the model zoo are not. For example, MegaDetectorV5, which was trained using the Ultralytics package, is under AGPL-3.0 and is not available for closed-source commercial uses.

> [!IMPORTANT]
> THIS IS TRUE FOR ALL EXISTING MEGADETECTORV5 MODELS IN ALL EXISTING FORKS THAT ARE TRAINED USING YOLOV5, AN ULTRALYTICS-DEVELOPED MODEL.

We want to make Pytorch-Wildlife a platform where models with different licenses can be hosted, enabling different use cases. To reduce user confusion, our model zoo section lists all existing and planned future models, their corresponding licenses, and their release schedules.

In addition, since the Pytorch-Wildlife package is under MIT, all the utility functions, including the data pre-/post-processing functions and model fine-tuning functions in this package, are under MIT as well.

:mag: Model Zoo and Release Schedules

Detection models

| Models | License | Release |
|--------|---------|---------|
| MegaDetectorV5 | AGPL-3.0 | Released |
| MegaDetectorV6-Ultralytics-YoloV9-Compact | AGPL-3.0 | Released |
| HerdNet-general | CC BY-NC-SA-4.0 | Released |
| HerdNet-ennedi | CC BY-NC-SA-4.0 | Released |
| MegaDetectorV6-Ultralytics-YoloV9-Extra | AGPL-3.0 | November 2024 |
| MegaDetectorV6-Ultralytics-YoloV10-Compact (even smaller and no NMS) | AGPL-3.0 | November 2024 |
| MegaDetectorV6-Ultralytics-YoloV10-Extra (extra large model and no NMS) | AGPL-3.0 | November 2024 |
| MegaDetectorV6-MIT-YoloV9-Compact | MIT | December 2024 |
| MegaDetectorV6-MIT-YoloV9-Extra | MIT | December 2024 |
| MegaDetectorV6-Ultralytics-YoloV11-Compact (better performance) | AGPL-3.0 | December 2024 |
| MegaDetectorV6-Ultralytics-YoloV11-Extra (better performance) | AGPL-3.0 | December 2024 |
| MegaDetectorV6-Apache-RTDetr-Compact | Apache | January 2025 |
| MegaDetectorV6-Apache-RTDetr-Extra | Apache | January 2025 |

Classification models

| Models | License | Release |
|--------|---------|---------|
| AI4G-Opossum | MIT | Released |
| AI4G-Amazon | MIT | Released |
| AI4G-Serengeti | MIT | Released |

👋 Welcome to Pytorch-Wildlife

PyTorch-Wildlife is a platform to create, modify, and share powerful AI conservation models. These models can be used for a variety of applications, including camera trap images, overhead images, underwater images, or bioacoustics. Your engagement with our work is greatly appreciated, and we eagerly await any feedback you may have.

The Pytorch-Wildlife library allows users to directly load the MegaDetector model weights for animal detection. We've fully refactored our codebase, prioritizing ease of use in model deployment and expansion. In addition to MegaDetector, Pytorch-Wildlife also accommodates a range of classification weights, such as those derived from the Amazon Rainforest dataset and the Opossum classification dataset. Explore the codebase and functionalities of Pytorch-Wildlife through our interactive HuggingFace web app or our local demos and notebooks, which are designed to showcase the practical applications of our enhancements. You can find more information in our documentation.

👇 Here is a brief example of how to perform detection and classification on a single image using PyTorch-Wildlife:

```python
import numpy as np
from PytorchWildlife.models import detection as pw_detection
from PytorchWildlife.models import classification as pw_classification

img = np.random.randn(3, 1280, 1280)

# Detection
detection_model = pw_detection.MegaDetectorV6()  # Model weights are automatically downloaded.
detection_result = detection_model.single_image_detection(img)

# Classification
classification_model = pw_classification.AI4GAmazonRainforest()  # Model weights are automatically downloaded.
classification_results = classification_model.single_image_classification(img)
```
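The `img` above is random noise standing in for a real photo. Real camera-trap images usually arrive as height x width x channel uint8 arrays (from PIL, OpenCV, and similar loaders); the sketch below shows one way to convert such an array to the channel-first float layout used in the demo. Whether a given model version wants HWC uint8 or CHW floats is version-dependent, so treat this purely as an illustration of the layout conversion.

```python
import numpy as np

# Fabricate an HxWx3 uint8 "photo" so the sketch is self-contained; in
# practice this would come from e.g. np.asarray(PIL.Image.open(path)).
hwc = np.random.randint(0, 256, size=(1280, 1280, 3), dtype=np.uint8)

# Channel-first float array in [0, 1], matching the (3, 1280, 1280)
# shape used in the demo above.
chw = hwc.transpose(2, 0, 1).astype(np.float32) / 255.0

print(chw.shape)  # (3, 1280, 1280)
```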

⚙️ Install Pytorch-Wildlife

```bash
pip install PytorchWildlife
```

Please refer to our installation guide for more installation information.

🕵️ Explore Pytorch-Wildlife and MegaDetector with our Demo User Interface

If you want to directly try Pytorch-Wildlife with the AI models available, including MegaDetector, you can use our Gradio interface. This interface allows users to directly load the MegaDetector model weights for animal detection. In addition, Pytorch-Wildlife also has two classification models in our initial version. One is trained from an Amazon Rainforest camera trap dataset and the other from a Galapagos opossum classification dataset (more details of these datasets will be published soon). To start, please follow the installation instructions on how to run the Gradio interface! We also provide multiple Jupyter notebooks for demonstration.


🛠️ Core Features

What are the core components of Pytorch-Wildlife?

(Figure: Pytorch-Wildlife core component diagram)

🌐 Unified Framework:

Pytorch-Wildlife integrates four pivotal elements:

▪ Machine Learning Models<br> ▪ Pre-trained Weights<br> ▪ Datasets<br> ▪ Utilities<br>

👷 Our work:

In the provided diagram, boxes outlined in red represent elements that will be added and remain fixed, while those outlined in blue will be part of our ongoing development.

🚀 Inaugural Model:

We're kickstarting with YOLO as our first available model, complemented by pre-trained weights from MegaDetector. We have MegaDetectorV5, which is the same MegaDetector v5 model from the previous repository, and several versions of MegaDetectorV6 for different use cases.

📚 Expandable Repository:

As we move forward, our platform will welcome new models and pre-trained weights for camera traps and bioacoustic analysis. We're excited to host contributions from global researchers through a dedicated submission platform.

📊 Datasets from LILA:

Pytorch-Wildlife will also incorporate the vast datasets hosted on LILA, making it a treasure trove for conservation research.

🧰 Versatile Utilities:

Our set of utilities spans from visualization tools to task-specific utilities, many inherited from Megadetector.

💻 User Interface Flexibility:

While we provide a foundational user interface, our platform is designed to inspire. We encourage researchers to craft and share their unique interfaces, and we'll list both existing and new UIs from other collaborators for the community's benefit.

Let's shape the future of wildlife research, together! 🙌

🖼️ Examples

Image detection using MegaDetector

<img src="https://microsoft.github.io/CameraTraps/assets/animal_det_1.JPG" alt="animal_det_1" width="400"/><br> Credits to Universidad de los Andes, Colombia.

Image classification with MegaDetector and AI4GAmazonRainforest

<img src="https://microsoft.github.io/CameraTraps/assets/animal_clas_1.png" alt="animal_clas_1" width="500"/><br> Credits to Universidad de los Andes, Colombia.

Opossum ID with MegaDetector and AI4GOpossum

<img src="https://microsoft.github.io/CameraTraps/assets/opossum_det.png" alt="opossum_det" width="500"/><br> Credits to the Agency for Regulation and Control of Biosecurity and Quarantine for Galápagos (ABG), Ecuador.

🔥 Future highlights

To check the full version of the roadmap with completed tasks and long-term goals, please click here!

🤜🤛 Collaboration with EcoAssist!

We are thrilled to announce our collaboration with EcoAssist, a powerful user interface that enables users to directly load models from the PyTorch-Wildlife model zoo for image analysis on local computers. With EcoAssist, you can now utilize MegaDetectorV5 and the classification models AI4GAmazonRainforest and AI4GOpossum for automatic animal detection and identification, alongside a comprehensive suite of pre- and post-processing tools. This partnership aims to enhance the overall user experience with PyTorch-Wildlife models for a general audience. We will work closely together to bring more features for more efficient and effective wildlife analysis in the future.

:fountain_pen: Cite us!

We have recently published a summary paper on Pytorch-Wildlife. The paper has been accepted as an oral presentation at the CV4Animals workshop at CVPR 2024. Please feel free to cite us!

```bibtex
@misc{hernandez2024pytorchwildlife,
      title={Pytorch-Wildlife: A Collaborative Deep Learning Framework for Conservation},
      author={Andres Hernandez and Zhongqi Miao and Luisa Vargas and Rahul Dodhia and Juan Lavista},
      year={2024},
      eprint={2405.12930},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

🤝 Contributing

This project is open to your ideas and contributions. If you want to submit a pull request, we'll have some guidelines available soon.

We have adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact us with any additional questions or comments.

License

This repository is licensed under the MIT license.

👥 Existing Collaborators

The extensive collaborative efforts of Megadetector have genuinely inspired us, and we deeply value its significant contributions to the community. As we continue to advance with Pytorch-Wildlife, our commitment to delivering technical support to our existing partners on MegaDetector remains the same.

Here we list a few of the organizations that have used MegaDetector. We're only listing organizations who have given us permission to refer to them here or have posted publicly about their use of MegaDetector.

<details> <summary><font size="3">👉 Full list of organizations</font></summary>

(Newly Added) TerrOïko (OCAPI platform)

Arizona Department of Environmental Quality

Blackbird Environmental

Camelot

Canadian Parks and Wilderness Society (CPAWS) Northern Alberta Chapter

Conservation X Labs

Czech University of Life Sciences Prague

EcoLogic Consultants Ltd.

Estación Biológica de Doñana

Idaho Department of Fish and Game

Island Conservation

Myall Lakes Dingo Project

Point No Point Treaty Council

Ramat Hanadiv Nature Park

SPEA (Portuguese Society for the Study of Birds)

Synthetaic

Taronga Conservation Society

The Nature Conservancy in Wyoming

TrapTagger

Upper Yellowstone Watershed Group

Applied Conservation Macro Ecology Lab, University of Victoria

Banff National Park Resource Conservation, Parks Canada (https://www.pc.gc.ca/en/pn-np/ab/banff/nature/conservation)

Blumstein Lab, UCLA

Borderlands Research Institute, Sul Ross State University

Capitol Reef National Park / Utah Valley University

Center for Biodiversity and Conservation, American Museum of Natural History

Centre for Ecosystem Science, UNSW Sydney

Cross-Cultural Ecology Lab, Macquarie University

DC Cat Count, led by the Humane Rescue Alliance

Department of Fish and Wildlife Sciences, University of Idaho

Department of Wildlife Ecology and Conservation, University of Florida

Ecology and Conservation of Amazonian Vertebrates Research Group, Federal University of Amapá

Gola Forest Programme, Royal Society for the Protection of Birds (RSPB)

Graeme Shannon's Research Group, Bangor University

Hamaarag, The Steinhardt Museum of Natural History, Tel Aviv University

Institut des Sciences de la Forêt Tempérée (ISFORT), Université du Québec en Outaouais

Lab of Dr. Bilal Habib, the Wildlife Institute of India

Mammal Spatial Ecology and Conservation Lab, Washington State University

McLoughlin Lab in Population Ecology, University of Saskatchewan

National Wildlife Refuge System, Southwest Region, U.S. Fish & Wildlife Service

Northern Great Plains Program, Smithsonian

Quantitative Ecology Lab, University of Washington

Santa Monica Mountains Recreation Area, National Park Service

Seattle Urban Carnivore Project, Woodland Park Zoo

Serra dos Órgãos National Park, ICMBio

Snapshot USA, Smithsonian

Wildlife Coexistence Lab, University of British Columbia

Wildlife Research, Oregon Department of Fish and Wildlife

Wildlife Division, Michigan Department of Natural Resources

Department of Ecology, TU Berlin

Ghost Cat Analytics

Protected Areas Unit, Canadian Wildlife Service

School of Natural Sciences, University of Tasmania (story)

Kenai National Wildlife Refuge, U.S. Fish & Wildlife Service (story)

Australian Wildlife Conservancy (blog, blog)

Felidae Conservation Fund (WildePod platform) (blog post)

Alberta Biodiversity Monitoring Institute (ABMI) (WildTrax platform) (blog post)

Shan Shui Conservation Center (blog post) (translated blog post)

Irvine Ranch Conservancy (story)

Wildlife Protection Solutions (story, story)

Road Ecology Center, University of California, Davis (Wildlife Observer Network platform)

The Nature Conservancy in California (Animl platform)

San Diego Zoo Wildlife Alliance (Animl R package)

</details><br>

> [!IMPORTANT]
> If you would like to be added to this list or have any questions regarding MegaDetector and Pytorch-Wildlife, please email us or join us in our Discord channel.