Action Recognition with Deep Learning

This branch hosts the code for the technical report "Towards Good Practices for Very Deep Two-Stream ConvNets", and more.

Updates

Full Change Log

Features

Usage

See more in the Wiki.

Usage is generally the same as in the original Caffe; see the original README below. The following instructions cover the features listed above. More detailed documentation is on the way.

# build with MPI support enabled for parallel (multi-GPU) training
mkdir build && cd build
cmake .. -DUSE_MPI=ON
make && make install
# launch training with 4 MPI processes (typically one per GPU)
mpirun -np 4 ./install/bin/caffe train --solver=<Your Solver File> [--weights=<Pretrained caffemodel>]

Note: the actual batch size will be num_device times the batch_size specified in the network's prototxt. For example, with 4 devices and batch_size: 32 in the prototxt, the effective batch size is 128.

Working Examples

Extension

Currently, all existing data layers sub-classed from BasePrefetchingDataLayer support parallel training. If you have a newly added layer that is also sub-classed from BasePrefetchingDataLayer, simply implement the virtual method

inline virtual void advance_cursor();

It should advance the "data cursor" in your data layer by one step. Your new layer will then be able to support parallel training. A sketch of such an override follows.
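
For illustration, here is a minimal sketch, not taken from this repo, of what such an override might look like. The layer name MyListDataLayer, its members lines_ and lines_id_, and the include path are assumptions made for the example; the setup and batch-loading code a real layer needs is omitted. Adapt the cursor logic to whatever bookkeeping your layer actually keeps.

// Hypothetical data layer; only the cursor-related code is shown.
#include <string>
#include <utility>
#include <vector>

#include "caffe/data_layers.hpp"  // adjust to wherever BasePrefetchingDataLayer lives in your Caffe version

namespace caffe {

template <typename Dtype>
class MyListDataLayer : public BasePrefetchingDataLayer<Dtype> {
 public:
  explicit MyListDataLayer(const LayerParameter& param)
      : BasePrefetchingDataLayer<Dtype>(param), lines_id_(0) {}

 protected:
  // Move the data cursor forward by exactly one sample, wrapping around at
  // the end of the list, so per-device copies of the layer read different samples.
  inline virtual void advance_cursor() {
    ++lines_id_;
    if (lines_id_ >= lines_.size()) {
      lines_id_ = 0;  // wrap around to the start of the list
    }
  }

  // Hypothetical sample list (e.g. image path + label) and cursor position.
  std::vector<std::pair<std::string, int> > lines_;
  size_t lines_id_;
};

}  // namespace caffe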

Questions

Contact

Citation

You are encouraged to also cite the following report if you find this repo helpful:

@article{MultiGPUCaffe2015,
  author    = {Limin Wang and
               Yuanjun Xiong and
               Zhe Wang and
               Yu Qiao},
  title     = {Towards Good Practices for Very Deep Two-Stream ConvNets},
  journal   = {CoRR},
  volume    = {abs/1507.02159},
  year      = {2015},
  url       = {http://arxiv.org/abs/1507.02159},
}

The following is the original README of Caffe.

Caffe

Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by the Berkeley Vision and Learning Center (BVLC) and community contributors.

Check out the project site for all the details and step-by-step examples.

Please join the caffe-users group or gitter chat (https://gitter.im/BVLC/caffe) to ask questions and talk about methods and models. Framework development discussions and thorough bug reports are collected on Issues.

Happy brewing!

License and Citation

Caffe is released under the BSD 2-Clause license. The BVLC reference models are released for unrestricted use.

Please cite Caffe in your publications if it helps your research:

@article{jia2014caffe,
  Author = {Jia, Yangqing and Shelhamer, Evan and Donahue, Jeff and Karayev, Sergey and Long, Jonathan and Girshick, Ross and Guadarrama, Sergio and Darrell, Trevor},
  Journal = {arXiv preprint arXiv:1408.5093},
  Title = {Caffe: Convolutional Architecture for Fast Feature Embedding},
  Year = {2014}
}