# iMapper: Interaction-guided Scene Mapping from Monocular Videos
## i3DB
### Download videos and images

```bash
sh get_data.sh
```
### Open a ground truth scenelet
#### Setup

```bash
conda create --name iMapper python=3 numpy -y
```
#### Usage

```bash
conda activate iMapper
export PYTHONPATH=$(pwd); python3 ./example.py
```
## Run iMapper on a video
### Requirements
- CUDA-capable GPU
- docker
- nvidia-docker
### How to run:
- Have a folder containing:
  - a video (e.g., `video.mp4`), and
  - camera intrinsics in `intrinsics.json`, e.g., `[[1920.0, 0.0, 960.0], [0.0, 1920.0, 540.0], [0.0, 0.0, 1.0]]` (see the sketch after this list).
- Replace the variables `PATH_TO_FOLDER_CONTAINING_VIDEO` and `VIDEO` below (an example follows the command block).
- Adjust the GPU ID if needed.
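A minimal sketch of writing `intrinsics.json`, assuming the file holds the bare 3x3 pinhole matrix shown above (focal length in pixels on the diagonal, principal point at the image centre of a 1920x1080 video); replace the numbers with your own camera parameters:

```bash
# Sketch only: writes the example intrinsics matrix from the list above.
# fx = fy = 1920 px, principal point (cx, cy) = (960, 540) for 1920x1080 frames.
cat > intrinsics.json << 'EOF'
[[1920.0, 0.0, 960.0],
 [0.0, 1920.0, 540.0],
 [0.0, 0.0, 1.0]]
EOF
```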
```bash
git clone https://github.com/amonszpart/iMapper.git
cd iMapper
docker build -t imapper imapper/docker
nvidia-docker run -it --name iMapper \
  -v ${PATH_TO_FOLDER_CONTAINING_VIDEO}:/opt/iMapper/i3DB/MyScene:rw \
  imapper \
  bash -c "CUDA_VISIBLE_DEVICES=0 \
           python3 run_video.py /opt/iMapper/i3DB/MyScene/${VIDEO} --gpu-id 0"
```
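For example, with a hypothetical folder `/data/my_scene` containing `video.mp4` and `intrinsics.json`, the two variables could be set in the same shell before invoking the `nvidia-docker run` command above:

```bash
# Hypothetical example values; adjust to your own setup.
PATH_TO_FOLDER_CONTAINING_VIDEO=/data/my_scene   # folder with video.mp4 and intrinsics.json
VIDEO=video.mp4                                  # video file name inside that folder
```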