<img src="img/droneaid-logo.png" height="100" alt="DroneAid logo">
# An aerial scout for first responders

DroneAid uses machine learning to detect calls for help on the ground placed by those in need. At the heart of DroneAid is a Symbol Language that is used to train a visual recognition model. That model analyzes video from a drone to detect and count specific symbols. A dashboard can be used to plot those locations on a map and initiate a response.
DroneAid consists of several components:
- The DroneAid Symbol Language that represents need and quantities
- A mechanism for rendering the symbols in virtual reality to train a model
- The trained model that can be applied to drone livestream video
- A dashboard that renders the location of needs captured by a drone
The current implementation can be extended beyond a single drone model to additional drones, airplanes, and satellites. The Symbol Language can also be used to train other visual recognition implementations.
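Conceptually, these components form a simple pipeline: detect symbols in a video frame, attach the drone's location, and report the result to the dashboard. The JavaScript sketch below is purely illustrative; every function and field name in it is a placeholder for the corresponding component above, not actual DroneAid code.

```javascript
// Illustrative pipeline only: detectSymbols and reportToDashboard are
// placeholders for the trained model and the dashboard component.
async function droneAidPipeline(videoFrame, telemetry) {
  // 1. The trained model detects symbols in one frame of drone video.
  const detections = await detectSymbols(videoFrame);

  // 2. Pair each detection with the drone's position at capture time.
  const needs = detections.map((d) => ({
    symbol: d.label,
    location: telemetry.gps,
  }));

  // 3. The dashboard plots the needs on a map to initiate a response.
  await reportToDashboard(needs);
}
```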
The original version of DroneAid was created by Pedro Cruz in August 2018. A refactored version was released as a Call for Code® with The Linux Foundation open source project in October 2019. DroneAid is currently hosted at The Linux Foundation.
## Get started
- The DroneAid origin story
- DroneAid Symbol Language
- See it in action
- Use the pre-trained visual recognition model on the Symbol Language
- Setting up and training the model
- Frequently asked questions
- Project roadmap
- Technical charter
- Built with
- Contributing
- Authors
- License
## The DroneAid origin story

Pedro Cruz explains his inspiration for DroneAid, based on his experience in Puerto Rico after Hurricane Maria. He flew his drone around his neighborhood and saw handwritten messages indicating what people needed, and realized he could standardize a solution to provide a response.
## DroneAid Symbol Language
The DroneAid Symbol Language provides a way for those affected by natural disasters to express their needs and make them visible to drones, planes, and satellites when traditional communications are not available.
Victims can use a pre-packaged symbol kit that has been manufactured and distributed to them, or recreate the symbols manually with whatever materials they have available.
These symbols include those below, which represent a subset of the icons provided by The United Nations Office for the Coordination of Humanitarian Affairs (OCHA). They can be complemented with numbers to quantify need, such as the number of people who need water.
| Symbol | Meaning | Symbol | Meaning |
|---|---|---|---|
| <img src="img/icons/icon-sos.png" width="100" alt="SOS"> | Immediate Help Needed<br>(orange; downward triangle over SOS) | <img src="img/icons/icon-shelter.png" width="100" alt="Shelter"> | Shelter Needed<br>(cyan; person standing in structure) |
| <img src="img/icons/icon-ok.png" width="100" alt="OK"> | No Help Needed<br>(green; upward triangle over OK) | <img src="img/icons/icon-firstaid.png" width="100" alt="FirstAid"> | First Aid Kit Needed<br>(yellow; case with first aid cross) |
| <img src="img/icons/icon-water.png" width="100" alt="Water"> | Water Needed<br>(blue; water droplet) | <img src="img/icons/icon-children.png" width="100" alt="Children"> | Area with Children in Need<br>(lilac; baby with diaper) |
| <img src="img/icons/icon-food.png" width="100" alt="Food"> | Food Needed<br>(red; pan with wheat) | <img src="img/icons/icon-elderly.png" width="100" alt="Elderly"> | Area with Elderly in Need<br>(purple; person with cane) |
## See it in action

A demonstration implementation takes the video stream of a DJI Tello drone and analyzes its frames to find and count symbols. See tello-demo for instructions on how to get it running.
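For a rough sense of how such a demo works (this is not the tello-demo's actual code), the sketch below samples a playing video element on an interval and keeps a running tally of detected symbols. `detectSymbolLabels` is a hypothetical stand-in for the inference step covered in the next section.

```javascript
// Sketch only: assumes the Tello stream has already been decoded into a
// playing <video> element, and that detectSymbolLabels() returns an
// array of symbol labels for one frame.
function monitorFeed(videoElement, intervalMs = 1000) {
  const counts = {};
  setInterval(async () => {
    const labels = await detectSymbolLabels(videoElement);
    for (const label of labels) {
      counts[label] = (counts[label] || 0) + 1;
    }
    console.table(counts); // running tally of symbols seen so far
  }, intervalMs);
}
```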
## Use the pre-trained visual recognition model on the Symbol Language

- See the TensorFlow.js example.
- See the TensorFlow.js example deployed to Code Engine.
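As a minimal sketch of what running the model in the browser can look like, the snippet below loads a converted model with TensorFlow.js and runs it on an image or video element. The model URL is a placeholder, and the exact pre- and post-processing depend on how the DroneAid model was exported; see the examples above for the authoritative code.

```javascript
import * as tf from '@tensorflow/tfjs';

// Hypothetical URL; point this at the actual hosted model.json.
const MODEL_URL = 'https://example.com/droneaid/model.json';

let model;

async function detectSymbols(imgOrVideoElement) {
  // Load once and cache; tf.loadGraphModel is the standard TF.js call
  // for models converted to the graph format.
  if (!model) {
    model = await tf.loadGraphModel(MODEL_URL);
  }

  // Turn the DOM element into a tensor and add a batch dimension.
  const input = tf.tidy(() =>
    tf.browser.fromPixels(imgOrVideoElement).expandDims(0));

  // executeAsync handles graphs with control-flow ops, which is
  // common for object-detection models.
  const output = await model.executeAsync(input);

  input.dispose();
  return output; // Post-process boxes/scores/classes per the model's spec.
}
```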
## Setting up and training the model

In order to train the model, we must place the symbols into simulated environments so that the system learns to detect them under a variety of conditions (e.g., distorted, faded, or in low light).
See SETUP.md
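As a loose illustration of such conditions (not the project's actual training pipeline, which generates its imagery with Lens Studio), the sketch below fades an image tensor and adds sensor-style noise using TensorFlow.js.

```javascript
import * as tf from '@tensorflow/tfjs';

// Illustrative only: simulates low light (dimming) and sensor grain
// (noise) on an image tensor with values in [0, 1].
function augment(image, brightness = 0.5, noiseStd = 0.05) {
  return tf.tidy(() => {
    const dimmed = image.mul(brightness);             // fading / low light
    const noise = tf.randomNormal(image.shape, 0, noiseStd);
    return dimmed.add(noise).clipByValue(0, 1);       // keep a valid range
  });
}
```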
## Frequently asked questions

See FAQ.md

## Project roadmap

See ROADMAP.md

## Technical charter

See DroneAid-Technical-Charter.pdf
## Built with

- TensorFlow.js - Used to run inference in the browser
- Cloud Annotations - Used for training the model
- Lens Studio - Used to create the augmented reality scenes and generate the training imageset
## Contributing

Please read CONTRIBUTING.md for details on our code of conduct and the process for submitting pull requests to DroneAid.

## Authors

## License

This project is licensed under the Apache 2 License - see the LICENSE file for details.