Genesis

Experimental Unity package that auto-generates depth textures for skyboxes created with Skybox Lab.

https://user-images.githubusercontent.com/26555424/221649787-1bf45ce8-4c71-4647-a9be-c02128687d2a.mp4

Disclaimer

As of right now, this project's main purpose is experimentation. Feel free to give it a go with your own skyboxes (but expect things to break).

The image-based rendering techniques used here do not perform any sophisticated inpainting. The depth estimation still has many limitations and artifacts (outlined below), and how well it performs depends highly on the content (e.g. indoor vs. outdoor scenes). Depth estimation can fail badly, especially on highly non-realistic images.

The motivation, in some ways, is to see how far current 2D image generation models can be pushed for building 3D worlds, and to prototype workflows for the inevitable future of AI-assisted game development.

You're welcome to provide feedback here on GitHub, on Twitter, or join the discussion on the Blockade Labs Discord.

Requirements

This package requires Unity 2020 or newer and uses the Built-in Render Pipeline.

Installation

Go to the releases section, download the Unity Package, and import it into any Unity project. The package is a 'Hybrid Package', so it will install into your Packages folder as a local package.

Usage


The generated prefab uses a material with two properties: Depth Multiplier and Max Depth Cutoff.
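If you want to adjust these properties from a script rather than the Inspector, a minimal sketch could look like the following. Note that the shader property names (`_DepthMultiplier`, `_MaxDepthCutoff`) are assumptions here; check the actual shader for the exact names.

```csharp
using UnityEngine;

// Sketch: drive the skybox depth material's two exposed properties at runtime.
// The property names below are assumed, not confirmed by this package.
public class SkyboxDepthTweaker : MonoBehaviour
{
    [SerializeField] private Material skyboxDepthMaterial;

    [Range(0f, 2f)] public float depthMultiplier = 1f;
    public float maxDepthCutoff = 0.95f;

    void Update()
    {
        skyboxDepthMaterial.SetFloat("_DepthMultiplier", depthMultiplier);
        skyboxDepthMaterial.SetFloat("_MaxDepthCutoff", maxDepthCutoff);
    }
}
```

Tweaking the values while in Play Mode is a quick way to find settings that suit a particular skybox.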

Ideas

Not a roadmap, just some general thoughts and ideas I have for the future.

Known Issues / Limitations

Depth Quality

The generated depth is not very high quality right now. The depth maps we generate are only 256x256. Limitations in Barracuda require us to use the smaller variant of an older depth estimation model (MiDaS v2.1), but that is only part of the issue: MiDaS alone is not really suitable for high-resolution 360° images with heavy distortion.
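For context, running a MiDaS-style model through Barracuda boils down to a few API calls. The sketch below shows the general shape, assuming a 256x256 RGB input and a single-channel output; the model asset name and tensor handling are illustrative, not this package's actual pipeline.

```csharp
using Unity.Barracuda;
using UnityEngine;

// Sketch: monocular depth inference with Barracuda, in-Editor or at runtime.
// Assumes a MiDaS v2.1 (small) ONNX model imported as an NNModel asset.
public class DepthEstimator : MonoBehaviour
{
    [SerializeField] private NNModel midasModelAsset;
    private IWorker worker;

    void Start()
    {
        var model = ModelLoader.Load(midasModelAsset);
        worker = WorkerFactory.CreateWorker(WorkerFactory.Type.Auto, model);
    }

    public Texture2D EstimateDepth(Texture2D input)
    {
        // Wrap the texture as a 3-channel input tensor (assumes 256x256).
        using (var tensor = new Tensor(input, channels: 3))
        {
            worker.Execute(tensor);
            Tensor output = worker.PeekOutput();

            // Copy the single-channel depth prediction into a float texture.
            var depth = new Texture2D(256, 256, TextureFormat.RFloat, false);
            float[] values = output.ToReadOnlyArray();
            var pixels = new Color[values.Length];
            for (int i = 0; i < values.Length; i++)
                pixels[i] = new Color(values[i], 0f, 0f);
            depth.SetPixels(pixels);
            depth.Apply();

            output.Dispose();
            return depth;
        }
    }

    void OnDestroy() => worker?.Dispose();
}
```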

There is active research on extending the capabilities of depth estimation models through various approaches. Naively, one could run the model on patches of the full panoramic image, but from what I've read, realigning and blending the results back together becomes tricky fast, so I would rather build on an existing approach. Notably, 360monodepth does just that and seems to be the state of the art for generating high-resolution depth for panoramic images.

The neat thing about using Unity's Barracuda is that we can run inference directly in the Unity Editor, or even at runtime. Still, if we want better depth, the setup instructions of the 360monodepth repository suggest it might require building a web service to do the depth estimation instead.

Depth Discontinuities

The seams and depth discontinuities could probably be fixed in a quick-and-dirty way, but I'm not sure it's worth doing before tackling the larger issue of finding a better depth estimation approach.

Acknowledgements

Thanks go out to these wonderful projects: