From IDE plugins to external chatbots and running LLMs locally, these new and emerging tools are bringing the generative AI revolution to R.

My previous article focused on some of the best tools for incorporating LLMs into your R scripts and workflows. We’ll expand on that theme here, offering up a collection of generative AI tools you can use to get help with your R programming or to run LLMs locally with R.
Coding help for R developers
Getting help with writing code is one of the most popular uses for large language models. Some developers prefer to use these tools within their integrated development environments (IDEs); others are content to copy and paste into external tools. R programmers have options for both.
gander
The gander package is a bit like GitHub Copilot Light for R scripts. It’s an IDE add-in for RStudio and Positron that can be aware of code in scripts around it as well as variables in your working environment. If you’ve selected code when you invoke gander, you’ll be asked whether you want the model’s result to replace that code or be added either before or after.
gander’s interface asks if you want to replace selected code or put its suggestions either before or after your selection.
You can choose any model supported by ellmer to use in gander. As of early 2025, package author and Posit developer Simon Couch recommended Anthropic’s Claude Sonnet for its R prowess. You can set that as your default with options(.gander_chat = ellmer::chat_claude()).
As always when using a commercial LLM provider, you need to make an API key available in your working environment. As with many current R packages for working with LLMs, you can also use a local model powered by ollama. Note that you can also use ellmer as a chatbot to ask questions about R and run LLMs locally. (See my previous article for more about ellmer and ollama.)
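For example, configuring gander’s default chat backend might look like the following sketch. The commercial option requires an Anthropic API key in your environment; the local option assumes you have Ollama running and have already pulled the model named here, which is only an example:

```r
# Sketch: two ways to set gander's default chat backend via ellmer.
# Model names are illustrative; use whatever you have access to.

# Option 1: a commercial provider (requires ANTHROPIC_API_KEY in your environment)
options(.gander_chat = ellmer::chat_claude())

# Option 2: a local model served by Ollama (no API key needed)
options(.gander_chat = ellmer::chat_ollama(model = "llama3.2:3b"))
```

You could put either line in your .Rprofile so the setting persists across sessions.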
Installing gander
gander is available for R scripts in both RStudio and Positron. You can download it from CRAN or install the development version with pak::pak("simonpcouch/gander").
That command also installs an add-in within RStudio, where you can choose a keyboard shortcut to invoke it.
For Positron, there are instructions on the package website’s homepage for opening the Positron command palette and adding code to the keybindings.json file.
To see what data gander sent to the model—the background information from your script and working session, as well as the question you typed—you can run gander_peek() after running the add-in.
Some gander settings can be changed with R’s options() function. The tool’s default style instructions are:
Use tidyverse style and, when relevant, tidyverse packages. For example, when asked to plot something, use ggplot2, or when asked to transform data, using dplyr and/or tidyr unless explicitly instructed otherwise.
You can change that default with code such as options(.gander_style = "Use base R when possible."). See more customization possibilities by running ?gander_options in your R console.
You can learn more about gander on the gander package website and Posit blog.
chatgpt
An “interface to ChatGPT from R,” chatgpt features around 10 RStudio add-ins for things such as opening an interactive chat session, commenting selected code, creating unit tests, documenting code, and optimizing selected code. If you don’t select any code for the programming-specific tasks, it will evaluate the entire active file.
This could be a good choice for people who don’t use large language models often or don’t want to write their own prompts for regular programming tasks. And if you don’t want to call on different add-ins for each task, you can use its functions in the R console, such as comment_code() and complete_code().
You can customize model settings with R environment variables, such as OPENAI_MODEL (gpt-4o-mini is the default), OPENAI_TEMPERATURE (which defaults to 1—my choice would be 0), and OPENAI_MAX_TOKENS (which defaults to 256).
Note that the chatgpt package only supports OpenAI models.
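Those environment variables can be set from R before loading the package. A minimal sketch, with illustrative values (you would substitute your own API key):

```r
# Sketch: configuring the chatgpt package via environment variables.
# The key value is a placeholder; the other settings mirror the defaults
# discussed above, with temperature lowered for coding tasks.
Sys.setenv(
  OPENAI_API_KEY     = "your-api-key-here",
  OPENAI_MODEL       = "gpt-4o-mini",
  OPENAI_TEMPERATURE = "0",
  OPENAI_MAX_TOKENS  = "1024"
)
library(chatgpt)
```

Setting these in your .Renviron file instead keeps the key out of your scripts.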
gptstudio
gptstudio, on CRAN, is another RStudio add-in that offers access to LLMs. It features defined options for spelling and grammar, chat, and code commenting. I found the interface a bit more disruptive than some other options, but opinions will likely vary on that.
The gptstudio package supports HuggingFace, Ollama, Anthropic, Perplexity, Google, Azure, and Cohere along with OpenAI.
pkgprompt
pkgprompt can turn an R package’s documentation—all of it or only specific topics—into a single character string using the pkg_prompt() function. This makes it easy to send that documentation to an LLM as part of a prompt. For example, the command
library(pkgprompt)
pkg_docs <- pkg_prompt(pkg = "dplyr", topics = c("across", "coalesce"))
returns the documentation for the dplyr package’s across and coalesce functions as a single character string. You can then add the string to any LLM prompt, either within R or by copying and pasting to an external tool. This is another R package by Simon Couch. Install it with pak::pak("simonpcouch/pkgprompt").
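One way to use that string within R is to paste it into a prompt for an ellmer chat. This is a sketch, not the package’s prescribed workflow; it assumes an API key for your chosen provider is available in your environment:

```r
library(pkgprompt)

# Collect docs for two dplyr functions into one string
pkg_docs <- pkg_prompt(pkg = "dplyr", topics = c("across", "coalesce"))

# Sketch: include the docs as context in an LLM prompt via ellmer
# (assumes ANTHROPIC_API_KEY is set; any ellmer-supported chat would work)
chat <- ellmer::chat_claude()
chat$chat(paste(
  "Using the documentation below, show how to apply coalesce",
  "across several columns of a data frame.\n\n",
  pkg_docs
))
```

Grounding the prompt in current documentation this way can help when a model’s training data predates a package’s latest API.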
Help outside your IDE
If you’re willing to go outside your coding environment to query an LLM, there are some R-specific options in addition to a general-purpose chatbot like Claude that knows some R.
Shiny Assistant
If you build Shiny web apps, Posit’s Shiny Assistant is a great resource. This free web-based tool uses an LLM to answer questions about building Shiny apps for both R Shiny and Shiny for Python. Note that the Shiny team says they may log and look at your queries to improve the tool, so don’t use the web version for sensitive work. You can also download the Shiny Assistant code from GitHub and tweak it to run yourself.
R and R Studio Tutor + Code Nerd
R and R Studio Tutor is a custom GPT that adds specific R information to basic ChatGPT. It was developed by Jose A Fernandez Calvo and is designed to answer questions specifically about R and RStudio.
Code Nerd by Christian Czymara is another custom GPT that answers R questions.
R Tutor + Chatilize
One of the earliest entries in the GenAI for R space, R Tutor still exists online. Upload a data set, ask a question, and watch as it generates R code and your results, including graphics.
R Tutor will let you ask questions about a data set and generate R code in response.
The code for RTutor is available on GitHub, so you can install your own local version. However, licensing only allows using the app for nonprofit or non-commercial use, or for commercial testing. RTutor was a personal project of Dr. Steven Ge, a professor of bioinformatics at South Dakota State University, but is now developed and maintained by a company he founded, Orditus LLC.
Chatilize, a newer option that is similar to R Tutor, can generate Python as well as R.
Interacting with LLMs installed locally
Large cloud-based LLMs may still be more capable than models you can download and run locally, but smaller open-weight models are getting better all the time, and they may already be good enough for some specific tasks—the only way to know is to try. Local models also have the huge advantage of privacy, since you never have to send your data to someone else’s server for analysis. Plus, you don’t have to worry about a model you like being deprecated, and they’re free beyond whatever it costs to run your desktop PC. The general-purpose ellmer package lets you run local LLMs, too, but there are also R packages specifically designed for local generative AI.
rollama and ollamar
Both the rollama and ollamar packages let you use R to run local models via the popular Ollama project, but their syntax is different. If you want an Ollama-specific package, I’d suggest trying both to see which you prefer.
In addition to one (or both) of these R packages, you’ll need the Ollama application itself. Download and install Ollama as a conventional software package—that is, not an R library—for Windows, Mac, or Linux. If Ollama isn’t already running, you can run ollama serve from a command prompt or terminal (not the R console) to start the Ollama server. (Setting up Ollama to run in the background when your system starts up is worth doing if you use it frequently.)
After loading the rollama R package with library(rollama), you can test with the ping_ollama() function to discover whether R sees an Ollama server running.
If you don’t already have local LLMs installed with Ollama, you can install one from R with pull_model("the_model_name") or by running ollama pull the_model_name in a terminal. You can check which models are available for Ollama on the Ollama website.
To download the 3B-parameter Llama 3.2 model, for instance, you could run pull_model("llama3.2:3b") in R.
To set a model as your default for the session, use the syntax options(rollama_model = "the_model_name"). For example: options(rollama_model = "llama3.2:3b").
For a single question, use the query() function:
query("How do you rotate text on the x-axis of a ggplot2 graph?")
If you want a chat where previous questions and answers remain in memory, use chat(). Both functions’ optional arguments include screen (whether answers should be printed to the screen) and model_params (a named list of parameters such as temperature). query() also includes a format argument whose returned value can be a response object, text, list, data.frame, httr2_response, or httr2_request.
Queries and chats can also include uploaded images with the images argument.
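Putting those pieces together, a rollama session might look like the following sketch. It assumes a running Ollama server with llama3.2:3b already pulled; the temperature setting is illustrative:

```r
library(rollama)

ping_ollama()                           # check that an Ollama server is reachable
options(rollama_model = "llama3.2:3b")  # set the session's default model

# One-off question, with temperature lowered for more deterministic answers
query(
  "How do you rotate text on the x-axis of a ggplot2 graph?",
  model_params = list(temperature = 0)
)

# Multi-turn chat: earlier questions and answers stay in memory
chat("What does dplyr::across() do?")
chat("Show a short example using it with mutate().")
```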
ollamar
The ollamar package starts up similarly, with a test_connection() function to check that R can connect to a running Ollama server, and pull("the_model_name") to download a model, such as pull("gemma3:4b") or pull("gemma3:12b").
The generate() function generates one completion from an LLM and returns an httr2_response, which can then be processed by the resp_process() function.
library(ollamar)
resp <- generate("gemma2", "What is ggplot2?")
resp_text <- resp_process(resp)
Or, you can request a text response directly with syntax such as resp <- generate("gemma2", "What is ggplot2?", output = "text"). There is an option to stream the text with stream = TRUE:
resp <- generate("gemma2", "Tell me about the data.table R package", output = "text", stream = TRUE)
ollamar has other functionality, including generating text embeddings, defining and calling tools, and requesting formatted JSON output. See details on GitHub.
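A multi-turn exchange with ollamar’s chat() function might be sketched as follows. The messages structure follows the role/content convention ollamar uses for chat requests; the model name is an example, and a running Ollama server with that model pulled is assumed:

```r
library(ollamar)

# Sketch: a multi-turn chat request with ollamar.
# Each message is a list with a role ("system", "user", or "assistant")
# and its content; the whole conversation is sent together.
messages <- list(
  list(role = "system", content = "You are a concise R tutor."),
  list(role = "user",   content = "What does tidyr::pivot_longer() do?")
)
resp_text <- chat("gemma2", messages, output = "text")
```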
rollama was created by Johannes B. Gruber; ollamar by Hause Lin.
Roll your own
If all you want is a basic chatbot interface for Ollama, one easy option is combining ellmer, shiny, and the shinychat package to make a simple Shiny app. Once those are installed, assuming you also have Ollama installed and running, you can run a basic script like this one:
library(shiny)
library(shinychat)
ui <- bslib::page_fluid(
  chat_ui("chat")
)
server <- function(input, output, session) {
  chat <- ellmer::chat_ollama(system_prompt = "You are a helpful assistant", model = "phi4")
  observeEvent(input$chat_user_input, {
    stream <- chat$stream_async(input$chat_user_input)
    chat_append("chat", stream)
  })
}
shinyApp(ui, server)
That should open an extremely basic chat interface with a hardcoded model. If you don’t pick a model, the app won’t run; instead, you’ll get an error message instructing you to specify one, along with a list of the models you’ve already installed locally.
I’ve built a slightly more robust version of this, including dropdown model selection and a button to download the chat. You can see that code here.
Conclusion
There are a growing number of options for using large language models with R, whether you want to add functionality to your scripts and apps, get help with your code, or run LLMs locally with Ollama. It’s worth trying a couple of options for your use case to find the one that best fits both your needs and preferences.