# Magnebot
Magnebot is a high-level robotics-like API for TDW. The Magnebot can move around the scene and manipulate objects by picking them up with "magnets". The simulation is entirely driven by physics.
The Magnebot can be loaded into a wide variety of scenes populated by interactable objects.
At a low level, the Magnebot is driven by robotics commands such as `set_revolute_target`, which will turn a revolute drive. The high-level API combines the low-level commands into "actions", such as `grasp(target_object)` or `move_by(distance)`.
The Magnebot API supports both single-agent and multi-agent simulations.
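For a rough sense of the difference between the two levels, here is a hedged sketch (not an excerpt from the Magnebot documentation; the joint lookup, robot ID attribute, and target angle are assumptions made for illustration):

```python
from magnebot import MagnebotController, ArmJoint

c = MagnebotController()
c.init_scene()

# Low level: a raw TDW robotics command that sets the target angle of a single revolute drive.
# The joint and robot IDs are read from the Magnebot's static data (attribute names assumed here).
joint_id = c.magnebot.static.arm_joints[ArmJoint.elbow_left]
c.communicate({"$type": "set_revolute_target",
               "id": c.magnebot.static.robot_id,
               "joint_id": joint_id,
               "target": 45})

# High level: a single "action" that simulates an entire motion until it ends.
c.move_by(2)

c.end()
```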
<img src="https://raw.githubusercontent.com/alters-mit/magnebot/main/doc/images/reach_high.gif" />

## Requirements
See TDW requirements.
## Installation
- `pip3 install magnebot`
- (Linux servers only): Download the latest TDW build and unzip it. For more information on setting up a TDW build on a server, read this. On a personal computer, the build will be downloaded and launched automatically.
### Test if your installation was successful
- Run this controller:
```python
from magnebot import MagnebotController

c = MagnebotController()  # On a server, change this to MagnebotController(launch_build=False)
c.init_scene()
c.move_by(2)
print(c.magnebot.dynamic.transform.position)
c.end()
```
- (Linux servers only): Launch the TDW build. On a personal computer, the build will launch automatically.
### Update an existing installation
- `pip3 install tdw -U`
- `pip3 install magnebot -U`
- (Linux servers only): Download the latest TDW build and unzip it. On a personal computer, the build will automatically be upgraded the next time you create a TDW controller.
If you are upgrading from Magnebot 1.3.2 or earlier, be aware that there are many changes to the API in Magnebot 2.0.0 and newer. Read the changelog for more information.
## Manual
### General
#### TDW Documentation
Before using Magnebot, we recommend you read TDW's documentation to familiarize yourself with some of the underlying concepts in this API:
### MagnebotController (single-agent, high-level API)

The `MagnebotController` offers a simplified API for single-agent simulations. Actions are non-interruptible; `self.move_by(2)` will simulate motion until the action ends (i.e. when the Magnebot has moved forward by 2 meters). This API mode is optimized for ease of use and simulation speed. A short sketch of a few `MagnebotController` actions follows the list of documentation links below.
- Overview
- Scene setup
- Output data
- Actions
- Moving, turning, and collision detection
- Arm articulation
- Grasp action
- Camera rotation
- Third-person cameras
- Occupancy maps
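A minimal sketch of a few `MagnebotController` actions, assuming the default empty test room created by `init_scene()` (the target position and angles below are arbitrary example values):

```python
from magnebot import MagnebotController, Arm

c = MagnebotController()
c.init_scene()
c.turn_by(45)                                       # Turn by 45 degrees.
c.move_by(1.5)                                      # Drive forward 1.5 meters.
c.reach_for(target={"x": 0.2, "y": 0.4, "z": 0.5},  # Reach for a position in worldspace
            arm=Arm.left)                           # coordinates with the left magnet.
c.reset_arm(arm=Arm.left)                           # Fold the arm back to its neutral pose.
c.rotate_camera(pitch=-25)                          # Pitch the camera downward.
c.end()
```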
### Magnebot (n-agent, lower-level API)

`Magnebot` is a TDW add-on that must be added to a TDW controller to be usable. `Magnebot` can be used in multi-agent simulations, but it requires a more extensive understanding of TDW than `MagnebotController`. A minimal sketch of the add-on pattern follows the list of documentation links below.
- Overview
- Scene setup
- Output data
- Actions
- Moving, turning, and collision detection
- Arm articulation
- Grasp action
- Camera
- Third-person cameras
- Occupancy maps
- Multi-agent simulations
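A minimal sketch of the add-on pattern, assuming an empty test room. Unlike `MagnebotController`, the controller advances the simulation itself by calling `communicate()` until the action is no longer ongoing:

```python
from tdw.controller import Controller
from tdw.tdw_utils import TDWUtils
from magnebot import Magnebot, ActionStatus

c = Controller()
# Create the Magnebot add-on and append it to the controller's add-ons.
magnebot = Magnebot(robot_id=c.get_unique_id(), position={"x": 0, "y": 0, "z": 0})
c.add_ons.append(magnebot)
# Create a simple scene; the add-on initializes itself on the next communicate() call.
c.communicate(TDWUtils.create_empty_room(12, 12))
# Start an action, then advance the simulation frame by frame until the action ends.
magnebot.move_by(2)
while magnebot.action.status == ActionStatus.ongoing:
    c.communicate([])
print(magnebot.dynamic.transform.position)
c.communicate({"$type": "terminate"})
```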
### Actions

It is possible to define custom Magnebot actions by extending the `Action` class; a rough sketch of a custom action follows the list of documentation links below.
- Overview
- Move and turn actions
- Arm articulation actions
- Inverse kinematics (IK) actions
- Camera actions
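For orientation, here is a rough sketch of what a custom action can look like. The import paths and method signatures follow the `Action` interface as described in the custom-action documentation, but treat them as assumptions; `WaitFrames` itself is a made-up example, not part of the API:

```python
from magnebot.actions.action import Action
from magnebot import ActionStatus


class WaitFrames(Action):
    """A hypothetical custom action that does nothing for a fixed number of frames."""

    def __init__(self, num_frames: int):
        super().__init__()
        self.num_frames: int = num_frames
        self.frame_count: int = 0

    def get_initialization_commands(self, resp, static, dynamic, image_frequency):
        # Defer to the base class for standard initialization commands.
        return super().get_initialization_commands(resp=resp, static=static, dynamic=dynamic,
                                                   image_frequency=image_frequency)

    def get_ongoing_commands(self, resp, static, dynamic):
        # Called every frame while the action is ongoing; end after num_frames frames.
        self.frame_count += 1
        if self.frame_count >= self.num_frames:
            self.status = ActionStatus.success
        return []
```

A custom action like this would then be assigned to the agent (e.g. `magnebot.action = WaitFrames(num_frames=10)`), per the pattern described in the custom-action documentation.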
## API
## Examples
Example controllers demonstrate how to use the API in actual use cases.
Other controllers in this repo:
- Promo controllers are meant to be used to generate promo videos or images; they include low-level TDW commands that you ordinarily won't need to use.
- Test controllers load the Magnebot into an empty room and test basic functionality.
## Higher-level APIs
The Magnebot API relies on the `tdw` Python module. Every action in this API uses combinations of low-level TDW commands and output data, typically across multiple simulation steps.
This API is designed to be used as-is or as the base for an API with higher-level actions, such as the Transport Challenge.
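As a sketch of what "the base for an API with higher-level actions" can mean in practice (`FetchController` and `fetch()` are invented for illustration and are not part of the Transport Challenge):

```python
from magnebot import MagnebotController, Arm


class FetchController(MagnebotController):
    """A hypothetical higher-level API in which each method composes several Magnebot actions."""

    def fetch(self, object_id: int) -> None:
        # Approach the object, pick it up with the left magnet, and carry it to the origin.
        self.move_to(target=object_id)
        self.grasp(target=object_id, arm=Arm.left)
        self.reset_arm(arm=Arm.left)
        self.move_to(target={"x": 0, "y": 0, "z": 0})
```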
<img src="https://raw.githubusercontent.com/alters-mit/magnebot/main/doc/images/api_hierarchy.png" style="zoom:67%;" />

| API | Description |
| --- | --- |
| Transport Challenge | Transport objects from room to room using containers as tools. |
| Multimodal Challenge | Perceive objects in the scene using visual and audio input. |