text2prompt

This is an extension for the Stable Diffusion web UI by AUTOMATIC1111 that generates a prompt from simple input text.
Currently, only prompts consisting of Danbooru tags can be generated.

Installation

Extensions tab on WebUI

Copy https://github.com/toshiaki1729/stable-diffusion-webui-text2prompt.git into "Install from URL" tab and "Install".

Install Manually

Clone the repository into the extensions directory and restart the web UI.
From the web UI directory, run the following command to install:

git clone https://github.com/toshiaki1729/stable-diffusion-webui-text2prompt.git extensions/text2prompt

Usage

  1. Type some words into "Input Theme"
  2. Type some unwanted words into "Input Negative Theme"
  3. Press the "Generate" button

Tips

How it works

It does nothing special:

  1. Danbooru tags and their descriptions are stored in the data folder
  2. Tokenize your input text and calculate its cosine similarity with all tag descriptions
  3. Choose some tags depending on their similarities
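The three steps above can be sketched in plain Python (an illustrative simplification, not the extension's actual code: in the real extension the vectors come from a sentence-transformer model such as all-mpnet-base-v2, while here they are just lists of floats):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_tags(text_vec, tag_vecs):
    """Return (tag, similarity) pairs, most similar first.

    tag_vecs maps a tag name to the embedding of its description.
    """
    sims = {tag: cosine_similarity(text_vec, v) for tag, v in tag_vecs.items()}
    return sorted(sims.items(), key=lambda kv: kv[1], reverse=True)
```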

Database (Optional)

You can choose one of the following datasets if needed.
Download one, unzip it, and put its contents into text2prompt-root-dir/data/danbooru/.

| Tag description | all-mpnet-base-v2 | all-MiniLM-L6-v2 |
|---|---|---|
| well filtered (recommended) | download (preinstalled) | download |
| normal (same as previous one) | download | download |
| full (noisy) | download | download |

well filtered: Tags are removed if their description includes the title of some work. Such tags are strongly tied to a specific work, meaning they are not "general" tags.
normal: Tags containing the title of a work in their name, like tag_name(work_name), are removed.
full: All tags are included.


More detailed description

Let $i \in N = \{1, 2, \dots, n\}$ be the index of a tag,
$s_i = S_C(d_i, t)$ the cosine similarity between the tag description $d_i$ and your text $t$, and
$P_i$ the probability of the tag being chosen.

"Method to convert similarity into probability"

"Cutoff and Power"

$$p_i = \text{clamp}(s_i, 0, 1)^{\text{Power}} = \text{max}(s_i, 0)^{\text{Power}}$$
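In code, this conversion is a one-liner (a sketch; the `power` argument is assumed to correspond to the "Power" slider, and the simplification to `max(s, 0)` works because cosine similarity never exceeds 1):

```python
def cutoff_and_power(similarities, power=2.0):
    # Clamp negative similarities to zero, then raise to the given power.
    # Since cosine similarity is at most 1, clamp(s, 0, 1) == max(s, 0).
    return [max(s, 0.0) ** power for s in similarities]
```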

"Softmax"

$$p_i = \sigma(\{s_n \mid n \in N\})_i = \dfrac{e^{s_i}}{\sum_{j \in N} e^{s_j}}$$
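The same softmax can be sketched as follows (subtracting the maximum before exponentiating is a standard numerical-stability trick and does not change the result):

```python
import math

def softmax(similarities):
    # Shift by the maximum so the exponentials cannot overflow.
    m = max(similarities)
    exps = [math.exp(s - m) for s in similarities]
    total = sum(exps)
    return [e / total for e in exps]
```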

"Sampling method"

Strictly speaking, it doesn't sample the way "true" language models do, so "Filtering method" might be a better name.

"NONE"

$$P_i = p_i$$

"Top-k"

$$ P_i = \begin{cases} \dfrac{p_i}{\sum_{j \in \text{top-}k} p_j} & \text{if } p_i \text{ is among the top-}k \text{ largest in } \{p_n \mid n \in N\} \\ 0 & \text{otherwise} \end{cases} $$
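Top-k filtering can be sketched like this (a hypothetical helper, not the extension's own function: zero out everything outside the k largest probabilities, then renormalize the survivors):

```python
def top_k_filter(p, k):
    # Indices of the k largest probabilities.
    if k >= len(p):
        kept = set(range(len(p)))
    else:
        kept = set(sorted(range(len(p)), key=lambda i: p[i], reverse=True)[:k])
    total = sum(p[i] for i in kept)
    # Renormalize the kept entries; everything else becomes 0.
    return [p[i] / total if i in kept else 0.0 for i in range(len(p))]
```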

"Top-p (Nucleus)"

$$ P_i = \begin{cases} \dfrac{p_i}{\sum_{j \in N_p} p_j} & \text{if } i \in N_p \\ 0 & \text{otherwise} \end{cases} $$

where $N_p$ is the smallest set of indices, taken in order of decreasing $p_i$, whose probabilities sum to at least $p$.
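Top-p (nucleus) filtering can be sketched the same way (again a hypothetical helper: the nucleus is built greedily from the largest probabilities until their cumulative sum reaches the threshold):

```python
def top_p_filter(p, top_p):
    # Walk the probabilities from largest to smallest, accumulating
    # until the cumulative mass reaches top_p; that set is the nucleus.
    order = sorted(range(len(p)), key=lambda i: p[i], reverse=True)
    kept, cum = set(), 0.0
    for i in order:
        kept.add(i)
        cum += p[i]
        if cum >= top_p:
            break
    total = sum(p[i] for i in kept)
    # Renormalize inside the nucleus; everything else becomes 0.
    return [p[i] / total if i in kept else 0.0 for i in range(len(p))]
```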

Finally, tags are chosen at random according to $P_i$, keeping the number of chosen tags $\leq$ "Max number of tags".
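One plausible reading of this final step, sketched below (the extension's actual selection logic may differ): repeatedly draw a tag weighted by its probability, without replacement, until the limit is reached.

```python
import random

def choose_tags(tags, probs, max_tags, rng=None):
    # Pick up to max_tags distinct tags, each draw weighted by its
    # probability; tags with zero probability are never chosen.
    rng = rng or random.Random()
    pool = [(t, p) for t, p in zip(tags, probs) if p > 0.0]
    chosen = []
    while pool and len(chosen) < max_tags:
        total = sum(p for _, p in pool)
        r = rng.random() * total
        acc = 0.0
        for idx, (t, p) in enumerate(pool):
            acc += p
            if acc >= r:
                chosen.append(t)
                pool.pop(idx)  # draw without replacement
                break
    return chosen
```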