How to Use ellmer in R

Tags: llm, ellmer

Learn to use the ellmer package for LLM interactions in R. A tidyverse-friendly interface supporting OpenAI, Claude, Ollama, and more with consistent syntax.

Published: April 4, 2026
Introduction

ellmer is Posit’s official R package for working with Large Language Models. It provides a consistent, tidyverse-friendly interface that works with multiple LLM providers.

Key features:
  • Consistent API across providers (OpenAI, Claude, Ollama, etc.)
  • Streaming output support
  • Tool/function calling
  • Structured data extraction
  • Conversation history management

Getting Started

install.packages("ellmer")
library(ellmer)

Supported Providers

| Provider           | Function       | API Key Variable     |
|--------------------|----------------|----------------------|
| OpenAI             | chat_openai()  | OPENAI_API_KEY       |
| Anthropic (Claude) | chat_claude()  | ANTHROPIC_API_KEY    |
| Google (Gemini)    | chat_gemini()  | GOOGLE_API_KEY       |
| Ollama (local)     | chat_ollama()  | None (local)         |
| Azure OpenAI       | chat_azure()   | AZURE_OPENAI_API_KEY |
| AWS Bedrock        | chat_bedrock() | AWS credentials      |
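
Before creating a chat, each cloud provider needs its key available as an environment variable. A minimal sketch (the key value shown is a placeholder, not a real key):

```r
# Set a key for the current R session only (placeholder value)
Sys.setenv(OPENAI_API_KEY = "your-key-here")

# For persistence across sessions, add the key to ~/.Renviron instead,
# e.g. by opening it with usethis::edit_r_environ()

# Confirm a key is visible to R without printing it
nchar(Sys.getenv("OPENAI_API_KEY")) > 0
```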

Basic Usage

Create a chat

# OpenAI
chat <- chat_openai()

# Claude
chat <- chat_claude()

# Local (Ollama)
chat <- chat_ollama()

Send a message

chat <- chat_openai()
response <- chat$chat("What is the tidyverse?")
response

Multi-turn conversation

chat <- chat_claude()

chat$chat("I'm analyzing sales data.")
chat$chat("How should I handle missing values?")
chat$chat("Show me code to do that.")

System Prompts

Control assistant behavior:

chat <- chat_openai(
  system_prompt = "You are an R programming expert. Always provide working code examples using tidyverse conventions."
)

chat$chat("How do I join two data frames?")

Role-specific assistants

# Data analyst
analyst <- chat_claude(
  system_prompt = "You are a senior data analyst. Explain concepts clearly and suggest best practices for data analysis in R."
)

# Code reviewer
reviewer <- chat_openai(
  system_prompt = "You are a code reviewer. Analyze R code for bugs, inefficiencies, and style issues. Be constructive."
)

# Statistics tutor
tutor <- chat_claude(
  system_prompt = "You are a statistics tutor. Explain statistical concepts in simple terms with R examples."
)

Switching Between Providers

Same code works across providers:

# Function that works with any provider
analyze_with_llm <- function(chat, data_description) {
  chat$chat(paste(
    "I have this data:",
    data_description,
    "Suggest 3 analyses I could perform."
  ))
}

# Use with different providers
openai_chat <- chat_openai()
claude_chat <- chat_claude()
local_chat <- chat_ollama()

# Same function, different backends
analyze_with_llm(openai_chat, "Customer purchase history")
analyze_with_llm(claude_chat, "Customer purchase history")
analyze_with_llm(local_chat, "Customer purchase history")

Streaming Output

For long responses, stream the output as it is generated:

chat <- chat_claude()

# Prints incrementally as response generates
chat$stream("Write a comprehensive guide to data visualization in R.")
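
Depending on the ellmer version, chat$stream() may return a generator of text chunks rather than printing directly; in that case the chunks can be consumed with the coro package (a hedged sketch):

```r
chat <- chat_claude()
stream <- chat$stream("Write a short haiku about R.")

# Iterate over chunks as they arrive, printing each one immediately
coro::loop(for (chunk in stream) {
  cat(chunk)
})
```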

Tool Calling

Tool calling lets LLMs execute R functions to get real data.

Define a function

calculate_stats <- function(numbers) {
  list(mean = mean(numbers), sd = sd(numbers), median = median(numbers))
}

Register it as a tool

chat <- chat_claude()

chat$register_tool(
  name = "calculate_stats",
  description = "Calculate summary statistics for a vector of numbers",
  arguments = list(
    numbers = tool_arg("array", "A vector of numbers", items = type_number())
  ),
  func = calculate_stats
)

# LLM can now call this function
chat$chat("What are the statistics for the numbers 1, 5, 3, 9, 2, 7?")

Multiple tools

# Tool 1: Read data
read_data <- function(filename) {
  if (file.exists(filename)) {
    head(read.csv(filename), 5)
  } else {
    "File not found"
  }
}

# Tool 2: Plot data
create_plot <- function(x_col, y_col) {
  paste("Creating plot of", y_col, "vs", x_col)
}

chat <- chat_openai()

chat$register_tool(
  name = "read_data",
  description = "Read first 5 rows of a CSV file",
  arguments = list(filename = tool_arg("string", "Path to CSV file")),
  func = read_data
)

chat$register_tool(
  name = "create_plot",
  description = "Create a scatter plot",
  arguments = list(
    x_col = tool_arg("string", "X axis column"),
    y_col = tool_arg("string", "Y axis column")
  ),
  func = create_plot
)

chat$chat("Read data.csv and plot mpg vs wt")

Structured Data Extraction

Extract data in specific formats. For a deep dive, see How to Extract Structured Data with LLMs.

Simple extraction

Define a schema for the data you want:

person_type <- type_object(
  name = type_string("Person's full name"),
  age = type_integer("Person's age"),
  occupation = type_string("Person's job")
)

Extract structured data from text:

chat <- chat_claude()

result <- chat$extract_data(
  "John Smith is a 35-year-old software engineer from Seattle.",
  type = person_type
)

result$name        # "John Smith"
result$age         # 35
result$occupation  # "software engineer"

Extract arrays

# Schema for multiple items
products_type <- type_array(
  items = type_object(
    name = type_string("Product name"),
    price = type_number("Price in dollars"),
    category = type_string("Product category")
  )
)

text <- "We have laptops at $999, headphones for $199, and keyboards at $79."

chat <- chat_openai()
products <- chat$extract_data(text, type = products_type)

Classification

sentiment_type <- type_enum(
  values = c("positive", "negative", "neutral"),
  description = "The sentiment of the text"
)

chat <- chat_claude()
sentiment <- chat$extract_data(
  "This product is amazing!",
  type = sentiment_type
)
# "positive"

Conversation Management

View history

chat <- chat_claude()
chat$chat("Hello!")
chat$chat("What's 2+2?")

# Get all turns
turns <- chat$get_turns()

Clear history

# Start fresh
chat$clear()

Save and restore

# Save conversation
turns <- chat$get_turns()
saveRDS(turns, "conversation.rds")

# Restore later
saved_turns <- readRDS("conversation.rds")
chat <- chat_claude(turns = saved_turns)

Practical Examples

Code generation pipeline

generate_and_run <- function(task) {
  chat <- chat_claude(
    system_prompt = "Return only valid R code. No explanations. No markdown."
  )

  code <- chat$chat(task)

  # Try to run the code
  tryCatch({
    result <- eval(parse(text = code))
    list(code = code, result = result, success = TRUE)
  }, error = function(e) {
    list(code = code, error = e$message, success = FALSE)
  })
}

generate_and_run("Calculate the mean of mtcars$mpg")

Data summarization

summarize_data <- function(df, chat = chat_claude()) {
  description <- paste(
    "Dataset with", nrow(df), "rows and", ncol(df), "columns.",
    "Columns:", paste(names(df), collapse = ", "),
    "First few values:",
    # capture.output() returns one string per line; collapse them
    # so paste() produces a single description rather than a vector
    paste(capture.output(head(df, 3)), collapse = "\n")
  )

  chat$chat(paste(
    "Here's a dataset:",
    description,
    "Provide a brief summary of what this data contains."
  ))
}

summarize_data(mtcars)

Batch processing with rate limiting

library(purrr)

process_texts <- function(texts, system_prompt, delay = 0.5) {
  map_chr(texts, \(text) {
    chat <- chat_claude(system_prompt = system_prompt)
    Sys.sleep(delay)
    chat$chat(text)
  })
}

reviews <- c(
  "Great product, highly recommend!",
  "Terrible, waste of money",
  "It's okay, nothing special"
)

sentiments <- process_texts(
  reviews,
  system_prompt = "Classify as positive/negative/neutral. One word."
)

Error Handling

safe_chat <- function(prompt, provider = "claude") {
  chat_fn <- switch(provider,
    "claude" = chat_claude,
    "openai" = chat_openai,
    "ollama" = chat_ollama,
    stop("Unknown provider: ", provider)  # default branch
  )

  tryCatch({
    chat <- chat_fn()
    chat$chat(prompt)
  }, error = function(e) {
    warning("LLM Error: ", e$message)
    NA_character_
  })
}
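
Transient failures such as rate limits or network hiccups often succeed on a retry. A sketch of a retry wrapper with exponential backoff, built only on base R control flow (the retry count and delays are illustrative, not provider recommendations):

```r
chat_with_retry <- function(chat, prompt, max_tries = 3, base_delay = 1) {
  for (attempt in seq_len(max_tries)) {
    result <- tryCatch(
      chat$chat(prompt),
      error = function(e) e
    )
    if (!inherits(result, "error")) {
      return(result)
    }
    if (attempt < max_tries) {
      # Exponential backoff: 1s, 2s, 4s, ...
      Sys.sleep(base_delay * 2^(attempt - 1))
    }
  }
  warning("All ", max_tries, " attempts failed: ", conditionMessage(result))
  NA_character_
}
```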

Common Mistakes

1. Creating new chat for each message

# Wrong - loses context
chat_claude()$chat("My name is Bob")
chat_claude()$chat("What's my name?")  # Doesn't know!

# Right - reuse chat object
chat <- chat_claude()
chat$chat("My name is Bob")
chat$chat("What's my name?")  # "Bob"

2. Forgetting to set API keys

# Check if keys are set
Sys.getenv("OPENAI_API_KEY")
Sys.getenv("ANTHROPIC_API_KEY")

# Set them
usethis::edit_r_environ()
# Add: OPENAI_API_KEY=your-key
# Add: ANTHROPIC_API_KEY=your-key

3. Not handling rate limits

# Add delays in loops
results <- map(items, \(item) {
  Sys.sleep(0.5)  # Delay
  chat$chat(item)
})

Summary

| Task          | Code                            |
|---------------|---------------------------------|
| OpenAI chat   | chat_openai()                   |
| Claude chat   | chat_claude()                   |
| Local Ollama  | chat_ollama()                   |
| System prompt | chat_*(system_prompt = "...")   |
| Send message  | chat$chat("message")            |
| Stream output | chat$stream("message")          |
| Register tool | chat$register_tool(...)         |
| Extract data  | chat$extract_data(text, type)   |
| View history  | chat$get_turns()                |

  • ellmer provides consistent syntax across providers
  • Use system prompts to control response style
  • Reuse chat objects to maintain context
  • Use tool calling for R function integration
  • Use structured extraction for reliable data parsing
