<!-- README.md is generated from README.Rmd. Please edit that file -->

# ollamar <a href="https://hauselin.github.io/ollama-r/"><img src="man/figures/logo.png" align="right" height="117" alt="ollamar website" /></a>

<!-- badges: start -->
<!-- badges: end -->

The Ollama R library is the easiest way to integrate R with Ollama, which lets you run language models locally on your own machine.
The library also makes it easy to work with data structures (e.g., conversational/chat histories) that are standard for different LLMs (such as those provided by OpenAI and Anthropic). It also lets you specify different output formats (e.g., dataframes, text/vector, lists) that best suit your needs, allowing easy integration with other libraries/tools and parallelization via the httr2 library.
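For example, chat histories use the standard list-of-messages format (role/content pairs) familiar from the OpenAI and Anthropic APIs. Below is a minimal sketch of building such a history and passing it to `chat()`, assuming the llama3.1 model has already been pulled (see Example usage below):

```r
library(ollamar)

# chat histories are lists of role/content messages, the same structure
# used by the OpenAI and Anthropic APIs
messages <- list(
  list(role = "system", content = "You are a helpful assistant."),
  list(role = "user", content = "Why is the sky blue?")
)
resp <- chat("llama3.1", messages, output = "text")  # assumes llama3.1 is pulled
```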
To use this R library, ensure the Ollama app is installed. Ollama can use GPUs to accelerate LLM inference; see the Ollama GPU documentation for more information.

See Ollama's GitHub page for more information. This library uses the Ollama REST API (see the API documentation for details) and has been tested on Ollama v0.1.30 and above; it was last tested on Ollama v0.3.10.
Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
## Ollama R vs Ollama Python/JS
This library has been inspired by the official Ollama Python and Ollama JavaScript libraries. If you’re coming from Python or JavaScript, you should feel right at home. Alternatively, if you plan to use Ollama with Python or JavaScript, using this R library will help you understand the Python/JavaScript libraries as well.
## Installation
1. Download and install the Ollama app.
    - macOS
    - Windows preview
    - Linux: `curl -fsSL https://ollama.com/install.sh | sh`
    - Docker image
2. Open/launch the Ollama app to start the local server.
3. Install either the stable or latest/development version of ollamar.
Stable version:

```r
install.packages("ollamar")
```
For the latest/development version with more features/bug fixes (see the latest changes here), you can install it from GitHub using the `install_github` function from the `remotes` library. If it doesn't work or you don't have the `remotes` library, please run `install.packages("remotes")` in R or RStudio before running the code below.
# install.packages("remotes") # run this line if you don't have the remotes library
remotes::install_github("hauselin/ollamar")
## Example usage
Below is a basic demonstration of how to use the library. For details, see the getting started vignette on our main page.
ollamar uses the `httr2` library to make HTTP requests to the Ollama server, so many functions in this library return an `httr2_response` object by default. If the response object says `Status: 200 OK`, then the request was successful.
```r
library(ollamar)

test_connection()  # test connection to Ollama server
# if you see "Ollama local server not running or wrong server," Ollama app/server isn't running

# download a model
pull("llama3.1")  # download a model (equivalent bash code: ollama run llama3.1)

# generate a response/text based on a prompt; returns an httr2 response by default
resp <- generate("llama3.1", "tell me a 5-word story")
resp

#' interpret httr2 response object
#' <httr2_response>
#' POST http://127.0.0.1:11434/api/generate  # endpoint
#' Status: 200 OK  # if successful, status code should be 200 OK
#' Content-Type: application/json
#' Body: In memory (414 bytes)

# get just the text from the response object
resp_process(resp, "text")

# get the text as a tibble dataframe
resp_process(resp, "df")

# alternatively, specify the output type when calling the function initially
txt <- generate("llama3.1", "tell me a 5-word story", output = "text")

# list available models (models you've pulled/downloaded)
list_models()
```
```
             name   size parameter_size quantization_level            modified
1    codegemma:7b   5 GB             9B               Q4_0 2024-07-27T23:44:10
2 llama3.1:latest 4.7 GB           8.0B               Q4_0 2024-07-31T07:44:33
```
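The introduction above mentions parallelization via the httr2 library. Below is a minimal sketch of making several requests in parallel; it assumes `generate()` can return an unsent httr2 request via `output = "req"` (an assumption; check the getting started vignette for the supported interface):

```r
library(ollamar)
library(httr2)

# assumption: output = "req" returns an unsent httr2 request object
prompts <- c("tell me a 5-word story", "tell me a 10-word story")
reqs <- lapply(prompts, function(p) generate("llama3.1", p, output = "req"))

resps <- req_perform_parallel(reqs)  # perform all requests in parallel
sapply(resps, resp_process, "text")  # extract the text from each response
```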
## Citing ollamar
If you use this library, please cite this paper using the following BibTeX entry:
```bibtex
@article{Lin2024Aug,
  author = {Lin, Hause and Safi, Tawab},
  title = {{ollamar: An R package for running large language models}},
  journal = {PsyArXiv},
  year = {2024},
  month = aug,
  publisher = {OSF},
  doi = {10.31234/osf.io/zsrg5},
  url = {https://doi.org/10.31234/osf.io/zsrg5}
}
```