StoryBench: A Multifaceted Benchmark for Continuous Story Visualization

This is the implementation of the approaches described in the paper:

Emanuele Bugliarello, Hernan Moraldo, Ruben Villegas, Mohammad Babaeizadeh, Mohammad Taghi Saffar, Han Zhang, Dumitru Erhan, Vittorio Ferrari, Pieter-Jan Kindermans, Paul Voigtlaender. StoryBench: A Multifaceted Benchmark for Continuous Story Visualization. Advances in Neural Information Processing Systems 36 (NeurIPS 2023).

We provide our text annotations, guidelines for human evaluation, and the code for computing automatic metrics.

Leaderboards are available on Papers With Code.

Data

data/ contains the evaluation data for StoryBench.

Training data can be downloaded from the following links:

While human-annotated evaluation files are recommended (see 'metrics/data/'), we also share our automatically generated Oops validation data, which we used to assess the robustness of our data transformation pipeline:

Metrics

metrics/ contains the source code to perform automatic evaluation of generated videos.

To install the required Python packages (for example, inside a virtual environment), run:

pip install -r metrics/requirements.txt
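
For example, you could first create and activate a fresh virtual environment with Python's built-in venv module (the environment name below is arbitrary):

python3 -m venv storybench_env  # arbitrary name for the virtual environment
source storybench_env/bin/activate
pip install -r metrics/requirements.txt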

To compute a given metric (e.g., FID with InceptionV3), run the following:

MODEL_NAME="phenaki"
TASK="action_exe"  # [action_exe, story_cont, story_gen]
DATA_SPLIT="oops_test"  # [{oops,uvo,didemo}_{val,test}]
DATA_DIR="/tmp/datadir/"
OUT_DIR="/tmp/out/"

python3 -m metrics.fid_inception --batch_size=256 --model="ground_truth" --task=${TASK} --dataset=${DATA_SPLIT} --data_dir=${DATA_DIR} --output_dir=${OUT_DIR} --num_videos=1

python3 -m metrics.fid_inception --batch_size=256 --model=${MODEL_NAME} --task=${TASK} --dataset=${DATA_SPLIT} --data_dir=${DATA_DIR} --output_dir=${OUT_DIR} --num_videos=4

In this example, we run the same script twice: first to extract features from the ground-truth videos, and then to extract features from the videos generated by a text-to-video model (here, phenaki). Note that we set --num_videos=4 in the latter case because we sample four videos per text prompt when generating videos with our models.

If you do not use our extracted features (see above), you only need to run the first command (which extracts the ground-truth features) once, as its output can be reused across models.
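
If you want to evaluate a model on every task and data split with this metric, the same two commands can be wrapped in a loop. The sketch below simply reuses the flags shown above and assumes that all task/split combinations are present in your data directory:

MODEL_NAME="phenaki"
DATA_DIR="/tmp/datadir/"
OUT_DIR="/tmp/out/"

for TASK in action_exe story_cont story_gen; do
  for DATA_SPLIT in oops_val oops_test uvo_val uvo_test didemo_val didemo_test; do
    # Ground-truth features (one video per prompt).
    python3 -m metrics.fid_inception --batch_size=256 --model="ground_truth" --task=${TASK} --dataset=${DATA_SPLIT} --data_dir=${DATA_DIR} --output_dir=${OUT_DIR} --num_videos=1
    # Features of the generated videos (four samples per prompt).
    python3 -m metrics.fid_inception --batch_size=256 --model=${MODEL_NAME} --task=${TASK} --dataset=${DATA_SPLIT} --data_dir=${DATA_DIR} --output_dir=${OUT_DIR} --num_videos=4
  done
done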

The inputs to these scripts are npz files containing the (ground-truth or generated) videos as NumPy arrays.
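
To quickly check that a file matches this format, you can print the array names, shapes, and dtypes stored in an npz file. The path below is just an example taken from the directory layout shown next; the exact array keys and shapes expected by each metric are defined in the corresponding script under metrics/:

python3 -c "import numpy as np; d = np.load('data/phenaki/action_exe/oops_test/raw/fn0.npz'); print([(k, d[k].shape, d[k].dtype) for k in d.files])"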

We rely on publicly available models and code to compute our automatic metrics. For reference, our working directory is structured as follows.

<details> <summary>Click to expand</summary>
checkpoints/
    | DOVER.pth
    | InternVideo-MM-L-14.ckpt
    | ViT-L-14-336px.pt
    | convnext_tiny_1k_224_ema.pth
    | i3d_torchscript.pt
    | pt_inception-2015-12-05-6726825d.pth
data/
    | ground_truth/
    |   | action_exe/
    |   |   | oops_test/
    |   |   |   | raw/
    |   |   |   |   | fn0.npz
    |   |   |   |   | ...
    |   |   |   | features/
    |   |   |   |   | fid_clip/
    |   |   |   |   |   | embeddings_0.npz
    |   |   |   |   | fid_inception/
    |   |   |   |   |   | embeddings_0.npz
    |   |   |   |   | ...
    |   |   |   |   | vtm_internvideo/
    |   |   |   |   |   | embeddings_0.npz
    |   |   | ...
    |   | ...
    | phenaki/
    |   | action_exe/
    |   |   | oops_test/
    |   |   |   | raw/
    |   |   |   |   | fn0.npz
    |   |   |   |   | ...
    |   |   | ...
    |   | ...
outputs/
    | phenaki/
    |   | action_exe/
    |   |   | oops_test/
    |   |   |   | features/
    |   |   |   |   | embeddings_0.npz
    |   |   |   |   | embeddings_1.npz
    |   |   |   |   | embeddings_2.npz
    |   |   |   |   | embeddings_3.npz
    |   |   | ...
    |   | ...

Note that:

</details>

License

This work is licensed under the Apache License. See LICENSE for details.

We rely on third-party software and models, released under MIT and Apache licenses, to compute automatic evaluation metrics.

The annotations are licensed by Google LLC under a CC BY 4.0 license.

If you find our code/data/models or ideas useful in your research, please consider citing the paper:

@inproceedings{bugliarello-etal-2023-storybench,
    author = {Bugliarello, Emanuele and Moraldo, Hernan and Villegas, Ruben and Babaeizadeh, Mohammad and Taghi Saffar, Mohammad and Zhang, Han and Erhan, Dumitru and Ferrari, Vittorio and Kindermans, Pieter-Jan and Voigtlaender, Paul},
    title = "{{StoryBench}: {A} Multifaceted Benchmark for Continuous Story Visualization}",
    booktitle = {Advances in Neural Information Processing Systems},
    publisher = {Curran Associates, Inc.},
    url = {https://arxiv.org/pdf/2308.11606.pdf},
    volume = {36},
    year = {2023}
}