ShapePFCN

Shape Projective Fully Convolutional Network

This is the implementation of the ShapePFCN architecture described in this paper:

Evangelos Kalogerakis, Melinos Averkiou, Subhransu Maji, Siddhartha Chaudhuri, "3D Shape Segmentation with Projective Convolutional Networks", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017 (oral presentation)

Project page: http://people.cs.umass.edu/~kalo/papers/shapepfcn/index.html

Most recent arXiv version: https://arxiv.org/abs/1612.02808


To compile in Linux (we assume 32 threads for compilation; change make's -j32 option according to your system):

  1. First compile Siddhartha Chaudhuri's "Thea" library:
     cd TheaDepsUnix/Source/
     ./install-defaults.sh --user <your_user_name> --with-osmesa -j32
     cd ../../
     cp -R TheaDepsUnix/Source/Installations/include/GL TheaDepsUnix/Source/Mesa/mesa-11.0.7/include
     cd Thea/Code/Build
     cmake -DTHEA_INSTALLATIONS_ROOT=../../../TheaDepsUnix/Source/Installations/ -DTHEA_GL_OSMESA=TRUE -DOSMesa_INCLUDE_DIR=../../../TheaDepsUnix/Source/Mesa/mesa-11.0.7/include/ -DOSMesa_GLU_LIBRARIES=../../../TheaDepsUnix/Source/Mesa/mesa-11.0.7/lib -DOPENCL_INCLUDE_DIRS=/usr/local/cuda75/toolkit/7.5.18/include  -DOPENCL_LIBRARIES=/usr/local/cuda75/toolkit/7.5.18/lib64 -DCMAKE_BUILD_TYPE=Release
     make -j32
     cd ../../../
     ln -s Thea/Code/Build/Output/lib lib
  2. Given that Thea's libraries were compiled successfully (for questions related to Thea, please email Siddhartha Chaudhuri), the next step is to compile our version of caffe (we modified caffe to incorporate our own data and projection layers) and to generate the header used for parsing protobuf schemas in Caffe:
     cd caffe-ours   
     make -j32
     sh generate_proto.sh     
     cd ../

(note: you may need to adjust the library paths in `caffe-ours/Makefile.config` according to your system, and you also need to install the libraries that caffe requires: http://caffe.berkeleyvision.org/installation.html)
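Before moving on, a quick sanity check can confirm that the expected build outputs from the steps above are in place. This is just an optional convenience sketch we add here; the paths are inferred from the commands in this README (the `lib` symlink to Thea's build output and caffe's build tree):

```shell
#!/bin/sh
# Optional sanity check: report whether the build artifacts from the
# steps above are present. Paths are inferred from this README.
check_built() {
  if [ -e "$1" ]; then echo "ok: $1"; else echo "missing: $1"; fi
}
check_built lib                    # symlink to Thea's compiled libraries
check_built caffe-ours/build/lib   # our caffe's shared libraries
```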

  3. Given that caffe was compiled successfully, you can now compile ShapePFCN. In the root directory of ShapePFCN, type:
     make -j32

(note: you may need to adjust the library paths in Makefile.config according to your system)

  4. Download the VGG model pretrained on ImageNet from https://drive.google.com/file/d/1YjMyTsdpsI17pV998_bc145j66i4cum1/view?usp=sharing (we train starting from a pretrained VGG model). Place it in the ShapePFCN root directory (i.e., frontend_vgg_train_net.txt and vgg_conv.caffemodel should be in the same directory).
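As a quick check that the files ended up in the right place, something like the following can be run from the ShapePFCN root. The helper is ours, not part of ShapePFCN; the file names are the ones mentioned in the step above:

```shell
#!/bin/sh
# Check that the pretrained model and the training net definition are
# side by side in the current directory, as the training step expects.
files_ready() {
  for f in "$@"; do
    [ -f "$f" ] || { echo "missing: $f"; return 1; }
  done
  echo "all files present"
}
files_ready frontend_vgg_train_net.txt vgg_conv.caffemodel || true
```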

To run the net training procedure (the first command renders images, the second runs the network training):

     ./build_release/mvfcn.bin --skip-testing --do-only-rendering --train-meshes-path  <your_path_to_training_data>
     ./build_release/mvfcn.bin --skip-testing --skip-train-rendering --train-meshes-path  <your_path_to_training_data> --gpu-use 0
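The two commands above can be wrapped in a small driver script. This is only a sketch: TRAIN_DATA is a placeholder you must point at your data, and the DRY_RUN switch is a convenience we add here (not a feature of mvfcn.bin) that prints the commands instead of executing them:

```shell
#!/bin/sh
# Sketch of a training driver for the two commands above.
# TRAIN_DATA is a placeholder; set DRY_RUN=0 to actually run the binary.
TRAIN_DATA=${TRAIN_DATA:-/path/to/training_data}
DRY_RUN=${DRY_RUN:-1}
run() {
  # In dry-run mode, print the command; otherwise execute it.
  if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi
}
run ./build_release/mvfcn.bin --skip-testing --do-only-rendering --train-meshes-path "$TRAIN_DATA"
run ./build_release/mvfcn.bin --skip-testing --skip-train-rendering --train-meshes-path "$TRAIN_DATA" --gpu-use 0
```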

Notes:

Before running, you may need to extend LD_LIBRARY_PATH so that our caffe build and its dependent libraries are found at runtime. The paths below are from our system; adjust them to match yours:

     LD_LIBRARY_PATH=$LD_LIBRARY_PATH:./caffe-ours/build/lib:/usr/local/hdf5_18/1.8.17/lib:/usr/local/openblas/0.2.18/lib:/usr/local/boost/lib:/usr/local/cuda75/toolkit/7.5.18/lib64:/usr/local/cudnn/5.1/lib64/:/usr/local/apps/cuda-driver/libs/375.20/lib64/
     export LD_LIBRARY_PATH
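A small helper (ours, not part of ShapePFCN) can flag LD_LIBRARY_PATH entries that do not exist on your machine, which is a common cause of runtime link errors:

```shell
#!/bin/sh
# Warn about LD_LIBRARY_PATH entries that do not exist on this machine.
check_ld_paths() {
  echo "$1" | tr ':' '\n' | while IFS= read -r d; do
    if [ -n "$d" ] && [ ! -d "$d" ]; then echo "warning: $d not found"; fi
  done
}
check_ld_paths "$LD_LIBRARY_PATH"
```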

To run the testing procedure (after you execute training!):

     ./build_release/mvfcn.bin --skip-training --do-only-rendering --test-meshes-path  <your_path_to_test_data> --train-meshes-path <your_path_to_training_data>
     ./build_release/mvfcn.bin --skip-training --skip-test-rendering --test-meshes-path  <your_path_to_test_data> --train-meshes-path <your_path_to_training_data> --gpu-use 0
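As with training, these two commands can be driven by a small sketch script. TEST_DATA and TRAIN_DATA are placeholders, and the DRY_RUN switch is our own convenience (not a feature of mvfcn.bin) that prints the commands instead of executing them:

```shell
#!/bin/sh
# Sketch of a test driver; set DRY_RUN=0 to actually run the binary.
TEST_DATA=${TEST_DATA:-/path/to/test_data}
TRAIN_DATA=${TRAIN_DATA:-/path/to/training_data}
DRY_RUN=${DRY_RUN:-1}
run() {
  # In dry-run mode, print the command; otherwise execute it.
  if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi
}
run ./build_release/mvfcn.bin --skip-training --do-only-rendering --test-meshes-path "$TEST_DATA" --train-meshes-path "$TRAIN_DATA"
run ./build_release/mvfcn.bin --skip-training --skip-test-rendering --test-meshes-path "$TEST_DATA" --train-meshes-path "$TRAIN_DATA" --gpu-use 0
```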

The same notes as above apply with respect to GPU usage, memory, shape orientation, and "baseline" rendering.

For any questions related to the compilation and execution of ShapePFCN and our caffe version, you may contact Evangelos Kalogerakis.


Regarding training/test data format:

Our repository includes the airplanes from the L-PSB dataset (http://people.cs.umass.edu/~kalo/papers/LabelMeshes/index.html) as an example of the data format that ShapePFCN supports. There are two possible formats:

For testing, no OBJ groups or labels txt files are needed. If they are found in the test directory, they will simply be used for evaluating test accuracy.