# How to Use Claude API in R

## Introduction
Claude is Anthropic’s flagship AI model and one of the most popular coding assistants available. It excels at writing, understanding, and debugging R code. The ellmer package provides a tidyverse-friendly way to use Claude’s API in R.
Alternatives: See also OpenAI API or run models locally with Ollama for free.
What you’ll learn:

- Set up Claude API access
- Use ellmer’s `chat_claude()` function
- Create multi-turn conversations
- Use tool calling for R functions
- Extract structured data
## Getting Started

### Install ellmer

```r
install.packages("ellmer")
library(ellmer)
```

### Get your API key
- Create an account at console.anthropic.com
- Go to API Keys section
- Create a new key
- Add billing information (required for API access)
Note: A Claude Pro subscription does NOT include API access. You need a separate developer account.
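Before wiring the key into R (next section), you can check whether one is already configured in your environment — a quick sanity check, not an official setup step:

```r
# TRUE if ANTHROPIC_API_KEY is set to a non-empty value
nzchar(Sys.getenv("ANTHROPIC_API_KEY"))
```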
### Set your API key

```r
# Option 1: Set for current session
Sys.setenv(ANTHROPIC_API_KEY = "sk-ant-your-key-here")

# Option 2: Add to .Renviron (recommended)
usethis::edit_r_environ()
# Add: ANTHROPIC_API_KEY=sk-ant-your-key-here
```

## Basic Chat
### Create a chat session

```r
library(ellmer)

chat <- chat_claude()
chat$chat("What is R programming?")
```

### Single question
```r
chat <- chat_claude()
response <- chat$chat("Explain what a data frame is in R.")
response
```

### Specify model
```r
# Use specific Claude model
chat <- chat_claude(model = "claude-opus-4-6")
```

## Available Models
### Latest Models (Claude 4.6)

| Model | Best For | Context | Max Output |
|---|---|---|---|
| `claude-opus-4-6` | Agents, complex coding | 1M tokens | 128k tokens |
| `claude-sonnet-4-6` | Speed + intelligence | 1M tokens | 64k tokens |
| `claude-haiku-4-5` | Fastest, budget | 200k tokens | 64k tokens |
### Legacy Models (still available)

| Model | Context |
|---|---|
| `claude-opus-4-5` | 200k tokens |
| `claude-sonnet-4-5` | 200k tokens |
| `claude-sonnet-4-0` | 200k tokens |
```r
# Claude Opus 4.6 (most capable)
chat <- chat_claude(model = "claude-opus-4-6")

# Claude Sonnet 4.6 (balanced, recommended)
chat <- chat_claude(model = "claude-sonnet-4-6")

# Claude Haiku 4.5 (fastest, cheapest)
chat <- chat_claude(model = "claude-haiku-4-5")
```

Note: Claude 4.6 models support extended thinking and have 1M token context windows.
## System Prompts
Control Claude’s behavior:
```r
chat <- chat_claude(
  system_prompt = "You are an expert R programmer. Always provide working code examples. Be concise."
)
chat$chat("How do I calculate the mean of a column?")
```

### Specialized assistants
```r
# Data analysis assistant
analyst <- chat_claude(
  system_prompt = "You are a data analyst expert in R and the tidyverse.
  When asked questions, provide clear explanations with code examples using dplyr and ggplot2."
)
analyst$chat("How do I find outliers in my data?")
```

## Multi-turn Conversations
ellmer maintains conversation history automatically:
```r
chat <- chat_claude()

# First message
chat$chat("I have a dataset of customer purchases.")

# Follow-up (Claude remembers context)
chat$chat("How would I calculate total spending per customer?")

# Another follow-up
chat$chat("Now how do I visualize this?")
```

### View conversation history
```r
# See all turns
chat$get_turns()
```

## Practical Examples
### Generate R code
```r
chat <- chat_claude(
  system_prompt = "You are an R expert. Return only executable R code with comments. No explanations outside code."
)

code <- chat$chat("
Create a function that:
1. Takes a data frame and column name
2. Removes outliers (values beyond 1.5*IQR)
3. Returns the cleaned data frame
")
cat(code)
```

### Explain existing code
```r
chat <- chat_claude()

code_to_explain <- "
mtcars |>
  group_by(cyl) |>
  summarise(across(where(is.numeric), mean)) |>
  pivot_longer(-cyl)
"
chat$chat(paste("Explain this R code step by step:", code_to_explain))
```

### Debug errors
```r
chat <- chat_claude(
  system_prompt = "You are an R debugging expert. When shown errors, explain the cause and provide a fix."
)

error_message <- "Error in select(df, name) : object 'name' not found"
chat$chat(paste("I got this error:", error_message,
                "My code was: df |> select(name)"))
```

### Analyze data descriptions
```r
chat <- chat_claude()
chat$chat("
I have a dataset with these columns:
- customer_id (integer)
- purchase_date (date)
- amount (numeric)
- category (character: 'electronics', 'clothing', 'food')

Suggest 5 interesting analyses I could perform.
")
```

## Tool Calling
Tool calling lets Claude execute R functions to get real data. This is powerful for building AI agents.
### Define your function
First, create a regular R function:
```r
get_weather <- function(city) {
  # In practice, this would call a weather API
  paste("Weather in", city, ": 72°F, sunny")
}
```

### Register it as a tool
Tell Claude about the function:
```r
chat <- chat_claude()

# Describe the function with tool(); argument types use type_string() etc.
weather_tool <- tool(
  get_weather,
  name = "get_weather",
  description = "Get current weather for a city",
  arguments = list(
    city = type_string("The city name")
  )
)
chat$register_tool(weather_tool)
```

### Claude calls it automatically
When you ask about weather, Claude recognizes it should use the tool:
```r
chat$chat("What's the weather in New York?")
# Claude calls get_weather("New York") and returns the result
```

### Tool for data analysis
Create a function that analyzes data:
```r
analyze_data <- function(column_name) {
  data <- mtcars[[column_name]]
  list(mean = mean(data), sd = sd(data), min = min(data), max = max(data))
}
```

Register and use it:
```r
chat <- chat_claude()

analysis_tool <- tool(
  analyze_data,
  name = "analyze_data",
  description = "Get summary statistics for a column in mtcars",
  arguments = list(
    column_name = type_string("Column name")
  )
)
chat$register_tool(analysis_tool)

chat$chat("What are the statistics for the mpg column?")
```

## Structured Output
Extract data in a specific format instead of free text. This is useful for parsing reviews, extracting entities, or classifying content. For more examples, see How to Extract Structured Data with LLMs.
### Define a schema

Specify the structure you want:

```r
review_schema <- type_object(
  sentiment = type_string("positive, negative, or neutral"),
  confidence = type_number("confidence score 0-1"),
  summary = type_string("one sentence summary")
)
```

### Extract structured data
Pass text and schema to get clean output:
```r
chat <- chat_claude()

result <- chat$extract_data(
  "This product exceeded my expectations! Great quality and fast shipping.",
  type = review_schema
)
```

### Result is a list
```r
result$sentiment
# "positive"

result$confidence
# 0.95
```

## Streaming Responses
For long responses, stream output:
```r
chat <- chat_claude()

# Stream response (prints as it generates)
chat$stream("Write a detailed guide to ggplot2 themes.")
```

## Error Handling
```r
safe_chat <- function(prompt) {
  tryCatch({
    chat <- chat_claude()
    chat$chat(prompt)
  }, error = function(e) {
    message("API Error: ", e$message)
    NA
  })
}

result <- safe_chat("What is 2+2?")
```

### Common errors
| Error | Cause | Solution |
|---|---|---|
| 401 Unauthorized | Invalid API key | Check ANTHROPIC_API_KEY |
| 429 Rate limit | Too many requests | Add delays |
| Credit balance too low | No funds on the account | Add billing credits |
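For rate-limit errors specifically, retrying with exponential backoff is usually more robust than a fixed delay. A generic sketch — the helper name `with_retry` is mine, not part of ellmer:

```r
# Retry a function call, doubling the wait after each failure
with_retry <- function(fn, max_tries = 3, base_delay = 1) {
  last_error <- NULL
  for (i in seq_len(max_tries)) {
    result <- tryCatch(fn(), error = function(e) e)
    if (!inherits(result, "error")) return(result)
    last_error <- result
    if (i < max_tries) Sys.sleep(base_delay * 2^(i - 1))  # 1 s, 2 s, 4 s, ...
  }
  stop("All attempts failed: ", conditionMessage(last_error))
}

# Usage (assumes a configured API key):
# with_retry(function() chat_claude()$chat("Hello"))
```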
## Cost Management

### Token usage
```r
# Check usage after a chat
chat <- chat_claude()
chat$chat("Hello!")

# View input/output token counts per turn
chat$get_tokens()
```

### Use appropriate models
```r
# Use Haiku for simple tasks (cheapest)
simple_chat <- chat_claude(model = "claude-haiku-4-5")

# Use Sonnet for coding (balanced, recommended)
code_chat <- chat_claude(model = "claude-sonnet-4-6")

# Use Opus only for complex reasoning (most capable)
complex_chat <- chat_claude(model = "claude-opus-4-6")
```

## Batch Processing
Process multiple texts with purrr’s map() functions.
### Create a processing function
```r
classify_sentiment <- function(text) {
  chat <- chat_claude(
    system_prompt = "Classify as positive/negative/neutral. One word only."
  )
  Sys.sleep(0.5)  # Rate limiting - important!
  chat$chat(text)
}
```

### Apply to multiple texts
```r
library(purrr)

texts <- c("Great product!", "Terrible service", "It's okay I guess")
results <- map_chr(texts, classify_sentiment)
# "positive", "negative", "neutral"
```

The `Sys.sleep(0.5)` prevents hitting rate limits when processing many items.
## Common Mistakes

### 1. Using Claude Pro subscription for API
```r
# Claude Pro (claude.ai) is NOT the same as API access
# You need a developer account at console.anthropic.com
```

### 2. Forgetting conversation state
```r
# Each chat_claude() creates a NEW conversation
chat1 <- chat_claude()
chat1$chat("My name is Alice")

chat2 <- chat_claude()  # New conversation!
chat2$chat("What's my name?")  # Claude doesn't know

# Reuse the same chat object for context
chat1$chat("What's my name?")  # "Alice"
```

### 3. Not setting appropriate system prompts
```r
# Generic (may give verbose responses)
chat <- chat_claude()

# Better (focused responses)
chat <- chat_claude(
  system_prompt = "Be concise. Respond in 2-3 sentences max."
)
```

## Summary
| Task | Code |
|---|---|
| Basic chat | `chat <- chat_claude(); chat$chat("Hi")` |
| Set API key | `Sys.setenv(ANTHROPIC_API_KEY = "key")` |
| System prompt | `chat_claude(system_prompt = "...")` |
| Specific model | `chat_claude(model = "claude-opus-4-6")` |
| Tool calling | `chat$register_tool(...)` |
| Structured output | `chat$extract_data(text, type)` |
- Claude Sonnet is best for R code generation
- Use system prompts to control response style
- Reuse chat objects to maintain conversation context
- Add delays when processing multiple items