<h1 align="center"> equilib </h1>
<h4 align="center"> Processing Equirectangular Images with Python </h4>

<div align="center">
  <a href="https://badge.fury.io/py/pyequilib"><img src="https://badge.fury.io/py/pyequilib.svg" alt="PyPI version"></a>
  <a href="https://pypi.org/project/pyequilib"><img src="https://img.shields.io/pypi/pyversions/pyequilib"></a>
  <a href="https://github.com/haruishi43/equilib/actions"><img src="https://github.com/haruishi43/equilib/workflows/ci/badge.svg"></a>
  <a href="https://github.com/haruishi43/equilib/blob/master/LICENSE"><img alt="GitHub license" src="https://img.shields.io/github/license/haruishi43/equilib"></a>
</div>

<img src=".img/equilib.png" alt="equilib" width="720"/>

- A library for processing equirectangular images that runs on Python.
- Developed using Python>=3.6 (`c++` is WIP).
- Compatible with `cuda` tensors for faster processing.
- No dependencies other than `numpy` and `torch`.
- Added functionality such as creating rotation matrices, batched processing, and automatic type detection.
- Works with various input modalities.
- Highly modular.
If you found this module helpful to your project, please cite this repository:

```bibtex
@software{pyequilib2021github,
  author = {Haruya Ishikawa},
  title = {PyEquilib: Processing Equirectangular Images with Python},
  url = {http://github.com/haruishi43/equilib},
  version = {0.5.0},
  year = {2021},
}
```
## Installation:

Prerequisites:

- Python (>=3.6)
- PyTorch (tested on 1.12)

```bash
pip install pyequilib
```
For developing, use:

```bash
git clone --recursive https://github.com/haruishi43/equilib.git
cd equilib

pip install -r requirements.txt

pip install -e .
# or
python setup.py develop
```
NOTE: this might not work with PyTorch>=2.0. If you have any issues, please open an issue.
## Basic Usage:

`equilib` provides different transforms of equirectangular (or cubemap) images (note that each transform has `class` and `func` APIs):

- `Cube2Equi`/`cube2equi`: cubemap to equirectangular transform
- `Equi2Cube`/`equi2cube`: equirectangular to cubemap transform
- `Equi2Equi`/`equi2equi`: equirectangular to equirectangular transform
- `Equi2Pers`/`equi2pers`: equirectangular to perspective transform
There are no real differences between the `class` and `func` APIs:

- `class` APIs allow instantiating a class that you can call many times without having to specify configurations each time (`class` APIs call the `func` API internally)
- `func` APIs are useful when there are no repetitive calls
- both `class` and `func` APIs are extensible, so you can extend them to your use-cases or create a method that is more optimized (pull requests are welcome, by the way)
Each API automatically detects the input type (`numpy.ndarray` or `torch.Tensor`), and outputs are the same type.
The arguments for each `class` or `func` depend on the transform, but here are the common arguments (a short sketch passing them appears after the examples below):

- `z_down (bool)`: whether to use a coordinate system with the z-axis pointing down, defaults to `False`
- `mode (str)`: interpolation mode, defaults to `"bilinear"`
- `clip_output (bool)`: whether to clip values based on the range of the input values, defaults to `True`
An example for `Equi2Pers`/`equi2pers`:

`class` API:

```python
import numpy as np
from PIL import Image
from equilib import Equi2Pers

# Input equirectangular image
equi_img = Image.open("./some_image.jpg")
equi_img = np.asarray(equi_img)
equi_img = np.transpose(equi_img, (2, 0, 1))

# rotations
rots = {
    'roll': 0.,
    'pitch': np.pi / 4,  # rotate vertical
    'yaw': np.pi / 4,  # rotate horizontal
}

# Initialize equi2pers
equi2pers = Equi2Pers(
    height=480,
    width=640,
    fov_x=90.0,
    mode="bilinear",
)

# Obtain perspective image
pers_img = equi2pers(
    equi=equi_img,
    rots=rots,
)
```

`func` API:

```python
import numpy as np
from PIL import Image
from equilib import equi2pers

# Input equirectangular image
equi_img = Image.open("./some_image.jpg")
equi_img = np.asarray(equi_img)
equi_img = np.transpose(equi_img, (2, 0, 1))

# rotations
rots = {
    'roll': 0.,
    'pitch': np.pi / 4,  # rotate vertical
    'yaw': np.pi / 4,  # rotate horizontal
}

# Run equi2pers
pers_img = equi2pers(
    equi=equi_img,
    rots=rots,
    height=480,
    width=640,
    fov_x=90.0,
    mode="bilinear",
)
```
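As a quick, hedged sketch of the common arguments listed earlier (continuing from the `func` API example above; the values are arbitrary):

```python
# Continuing from the func API example above (equi_img and rots already defined)
pers_img = equi2pers(
    equi=equi_img,
    rots=rots,
    height=480,
    width=640,
    fov_x=90.0,
    z_down=True,        # use the coordinate system with the z-axis pointing down
    mode="bilinear",    # interpolation mode (this is the default)
    clip_output=True,   # clip outputs to the input value range (this is the default)
)
```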
For more information about how each API works, take a look at `.readme/` or go through the example code in `tests/` or `scripts/`.
## Coordinate System:

Right-handed rule XYZ global coordinate system. The `x-axis` faces forward and the `z-axis` faces up.

- `roll`: counter-clockwise rotation about the `x-axis`
- `pitch`: counter-clockwise rotation about the `y-axis`
- `yaw`: counter-clockwise rotation about the `z-axis`

You can change the right-handed coordinate system so that the `z-axis` faces down by adding `z_down=True` as a parameter.
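As a rough illustration of this convention, here is a plain `numpy` sketch of right-handed, counter-clockwise rotations about each axis. This is not the library's internal code, and the composition order (yaw, then pitch, then roll) is an assumption for illustration:

```python
import numpy as np

def rotation_matrix(roll: float, pitch: float, yaw: float) -> np.ndarray:
    """Counter-clockwise rotations about x (roll), y (pitch), z (yaw)."""
    R_x = np.array([
        [1, 0, 0],
        [0, np.cos(roll), -np.sin(roll)],
        [0, np.sin(roll), np.cos(roll)],
    ])
    R_y = np.array([
        [np.cos(pitch), 0, np.sin(pitch)],
        [0, 1, 0],
        [-np.sin(pitch), 0, np.cos(pitch)],
    ])
    R_z = np.array([
        [np.cos(yaw), -np.sin(yaw), 0],
        [np.sin(yaw), np.cos(yaw), 0],
        [0, 0, 1],
    ])
    # Assumed composition order: roll first, then pitch, then yaw
    return R_z @ R_y @ R_x
```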
See the demo scripts under `scripts/`.
## Grid Sampling

To process equirectangular images quickly, for example when cropping perspective images out of an equirectangular image, the library takes advantage of grid sampling techniques.
Some sampling techniques are already implemented elsewhere, such as `scipy.ndimage.map_coordinates` and `cv2.remap`.
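As a rough illustration of what grid sampling does (a generic `scipy` sketch, not equilib's own code): given a map of sub-pixel (y, x) coordinates, interpolate the source image at those locations.

```python
import numpy as np
from scipy.ndimage import map_coordinates

img = np.arange(16, dtype=np.float64).reshape(4, 4)  # toy single-channel image

# Sub-pixel sampling locations: one (y, x) pair per output pixel
ys = np.array([[0.5, 1.5], [2.5, 3.0]])
xs = np.array([[0.5, 1.5], [2.5, 3.0]])

# Bilinear interpolation (order=1) at the given coordinates
out = map_coordinates(img, np.stack([ys, xs]), order=1)
print(out)  # sampled 2x2 patch
```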
This project's goal was to reduce these dependencies and use `cuda` and batch processing with `torch` and `c++` for faster processing of equirectangular images.
There were not many projects online for these purposes.
In this library, we implement a variety of methods using `c++`, `numpy`, and `torch`.
This part of the code benefits from `cuda` acceleration because grid sampling is parallelizable.
For `torch`, the built-in `torch.nn.functional.grid_sample` function is very fast and reliable.
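For reference, a minimal use of the native function looks like this (a generic PyTorch sketch, not equilib-specific; the grid values are random and only for illustration):

```python
import torch
import torch.nn.functional as F

img = torch.rand(1, 3, 64, 128)          # (N, C, H, W) source image
grid = torch.rand(1, 32, 64, 2) * 2 - 1  # (N, H_out, W_out, 2) coords in [-1, 1]

out = F.grid_sample(img, grid, mode="bilinear", align_corners=True)
print(out.shape)  # torch.Size([1, 3, 32, 64])
```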
I have also implemented a pure `torch` version of `grid_sample` that is very customizable (though it might not be as fast as the native function).
For `numpy`, I have implemented grid sampling methods that are faster than `scipy` and more robust than `cv2.remap`.
Just like the `torch` implementation, the `numpy` implementation is just as customizable.
It is also possible to pass `scipy`'s and `cv2`'s grid sampling functions through the `override_func` argument of `grid_sample`.
Developing faster approaches and `c++` methods is WIP.
See here for more info on implementations.
Some notes:

- By default, `numpy`'s `grid_sample` will use the pure `numpy` implementation. It is possible to override this implementation with `scipy`'s and `cv2`'s implementations using `override_func`.
- By default, `torch`'s `grid_sample` will use the official implementation.
- Benchmarking code is stored in `tests/`. For example, the benchmarking code for `numpy`'s `equi2pers` is located in `tests/equi2pers/numpy_run_baselines.py`, and you can benchmark the runtime performance using different parameters against `scipy` and `cv2`.
## Develop:

Test files for `equilib` are included under `tests/`.

Running tests:

```bash
pytest tests
```

Note that I have added code to benchmark every step of the process so that it is possible to optimize it. If you find more optimal implementations or any bugs, all pull requests and issues are welcome.

Check `CONTRIBUTING.md` for more information.
## TODO:

- Documentation for each transform
- Add a table and statistics for speed improvements
- Batch processing for `numpy`
- Mixed precision for `torch`
- `c++` version of grid sampling
- More accurate intrinsic matrix formulation using vertical FOV for `equi2pers`
- Multiprocessing support (slow when running on `torch.distributed`)