plenoptic


plenoptic is a Python library for model-based synthesis of perceptual stimuli. For plenoptic, models are those of visual[^1] information processing: they accept an image as input, perform some computations, and return some output, which can be mapped to neuronal firing rate, fMRI BOLD response, behavior on some task, image category, etc. The intended audience is researchers in neuroscience, psychology, and machine learning. The generated stimuli enable interpretation of model properties through examination of features that are enhanced, suppressed, or discarded. More importantly, they can facilitate the scientific process, through use in further perceptual or neural experiments aimed at validating or falsifying model predictions.
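To make the notion of a "model" concrete, here is a minimal sketch of the kind of object these methods operate on: a pytorch module that takes an image tensor and returns a response. The `Blur` class below is purely illustrative and not part of plenoptic; a Gaussian blur stands in for, e.g., a predicted neural response.

```python
import torch

class Blur(torch.nn.Module):
    """A toy 'visual model': Gaussian blur standing in for a predicted response."""

    def __init__(self, kernel_size=7, sigma=2.0):
        super().__init__()
        coords = (torch.arange(kernel_size) - kernel_size // 2).float()
        g = torch.exp(-coords**2 / (2 * sigma**2))
        g = g / g.sum()
        # separable 2D Gaussian kernel, shape (1, 1, k, k) for conv2d
        self.register_buffer("kernel", torch.outer(g, g)[None, None])

    def forward(self, image):
        # image: (batch, channel, height, width) -> blurred "response"
        return torch.nn.functional.conv2d(image, self.kernel, padding="same")

model = Blur()
response = model(torch.rand(1, 1, 64, 64))  # e.g. a random grayscale image
```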

Getting started

Installation

The best way to install plenoptic is via pip:

$ pip install plenoptic

or conda:

$ conda install plenoptic -c conda-forge

> [!WARNING]
> We do not currently support conda installs on Windows, due to the lack of a Windows pytorch package on conda-forge. See here for the status of that issue.

Our dependencies include pytorch and pyrtools. Installation should take care of them (along with our other dependencies) automatically, but if you have an installation problem (especially on a non-Linux operating system), it is likely that the problem lies with one of those packages. Open an issue and we'll try to help you figure out the problem!
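If an install seems to have gone wrong, a quick check like the following can help narrow down which package is the culprit (the `__version__` attributes are an assumption, though recent releases of these packages expose them):

```python
# Confirm that plenoptic and its core dependencies import cleanly.
import plenoptic
import pyrtools
import torch

print("plenoptic:", plenoptic.__version__)
print("pyrtools:", pyrtools.__version__)
print("torch:", torch.__version__)
```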

See the installation page for more details, including how to set up a virtual environment and jupyter.

ffmpeg and videos

Several methods in this package generate videos. There are several possible backends for saving the animations to file; see the matplotlib documentation for more details. In order to convert them to HTML5 for viewing (and thus, to view in a jupyter notebook), you'll need ffmpeg installed and on your path as well. Depending on your system, this might already be installed, but if not, the easiest way is probably through [conda](https://anaconda.org/conda-forge/ffmpeg): conda install -c conda-forge ffmpeg.
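To check whether ffmpeg is visible on your path before trying to render a video, a sketch like this should work:

```python
import shutil

# shutil.which returns the path to the ffmpeg executable, or None if it
# isn't on your PATH (in which case, install it, e.g. via conda-forge).
print(shutil.which("ffmpeg"))
```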

To change the backend, run `matplotlib.rcParams['animation.writer'] = writer` before calling any of the animate functions. If you try to set that rcParam with a random string, matplotlib will tell you the available choices.
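For example, to list the writers matplotlib found on your system and then select one (a sketch; `'ffmpeg'` assumes you have it installed as described above):

```python
import matplotlib
import matplotlib.animation

# Writers that matplotlib detected on this system (e.g. ['ffmpeg', 'html', 'pillow'])
print(matplotlib.animation.writers.list())

# Select one before calling any of the animate functions
matplotlib.rcParams["animation.writer"] = "ffmpeg"
```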

Contents

Synthesis methods

Models, Metrics, and Model Components

Getting help

We communicate via several channels on GitHub:

In all cases, please follow our code of conduct.

Citing us

If you use plenoptic in a published academic article or presentation, please cite both the code (by its DOI) as well as the JOV paper. If you are not using the code, but just discussing the project, please cite the paper. You can click on Cite this repository on the right side of the GitHub page to get a copyable citation for the code.

See the citation guide for more details, including citations for the different synthesis methods and computational models included in plenoptic.

Support

This package is supported by the Simons Foundation Flatiron Institute's Center for Computational Neuroscience.

[^1]: These methods also work with auditory models, such as in Feather et al., 2019, though we haven't yet implemented examples. If you're interested, please post in Discussions!