Japanese-CLIP
This repository contains code for Japanese CLIP (Contrastive Language-Image Pre-training) variants by rinna Co., Ltd.

Table of Contents
News
Pretrained Models
Usage
Citation
License

News

July 2022

v0.2.0 was released!

Pretrained Models

Model Name                         TOP1*   TOP5*
rinna/japanese-cloob-vit-b-16      54.64   72.86
rinna/japanese-clip-vit-b-16       50.69   72.35
sonoisa/clip-vit-b-32-japanese-v1  38.88   60.71
multilingual-CLIP                  14.36   27.28

*Zero-shot top-1/top-5 accuracy on the ImageNet validation set.
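For reference, a zero-shot evaluation like this roughly works as follows: each class name is encoded as text, every validation image is scored against all class embeddings by cosine similarity, and a prediction counts as correct if the true class appears among the top-k scores. The sketch below is illustrative and not part of this package; it assumes precomputed image and text features.

import torch

def zero_shot_topk_accuracy(image_features, text_features, labels, k=5):
    # Illustrative helper, not part of japanese_clip.
    # image_features: (N, D) embeddings of validation images
    # text_features:  (C, D) embeddings of the class-name prompts
    # labels:         (N,)   ground-truth class indices
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    sims = image_features @ text_features.T   # (N, C) cosine similarities
    topk = sims.topk(k, dim=-1).indices       # k highest-scoring classes per image
    hits = (topk == labels.unsqueeze(-1)).any(dim=-1)
    return hits.float().mean().item()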

Usage

  1. Install the package
$ pip install git+https://github.com/rinnakk/japanese-clip.git
  2. Run
from PIL import Image
import torch
import japanese_clip as ja_clip

device = "cuda" if torch.cuda.is_available() else "cpu"
# ja_clip.available_models()
# ['rinna/japanese-clip-vit-b-16', 'rinna/japanese-cloob-vit-b-16']
# If you want v0.1.0 models, set `revision='v0.1.0'`
model, preprocess = ja_clip.load("rinna/japanese-clip-vit-b-16", cache_dir="/tmp/japanese_clip", device=device)
tokenizer = ja_clip.load_tokenizer()

image = preprocess(Image.open("./data/dog.jpeg")).unsqueeze(0).to(device)
encodings = ja_clip.tokenize(
    texts=["犬", "猫", "象"],
    max_seq_len=77,
    device=device,
    tokenizer=tokenizer,  # optional; if omitted, the tokenizer is reloaded on every call
)

with torch.no_grad():
    image_features = model.get_image_features(image)
    text_features = model.get_text_features(**encodings)
    
    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", text_probs)  # prints: [[1.0, 0.0, 0.0]]
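To turn those probabilities into a human-readable prediction, you can pair them back with the label strings passed to the tokenizer. A small usage sketch, continuing from the variables above:

labels = ["犬", "猫", "象"]  # dog, cat, elephant
for label, prob in zip(labels, text_probs[0].tolist()):
    print(f"{label}: {prob:.4f}")
print("Predicted label:", labels[text_probs.argmax(dim=-1).item()])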

Citation

To cite this repository:

@inproceedings{japanese-clip,
  author = {Shing, Makoto and Zhao, Tianyu and Sawada, Kei},
  title = {日本語における言語画像事前学習モデルの構築と公開 (Construction and Public Release of Japanese Language-Image Pre-training Models)},
  booktitle = {The 25th Meeting on Image Recognition and Understanding},
  year = {2022},
  month = jul,
}

License

This repository is released under the Apache 2.0 License.