Access Anthropic's Claude API directly from Duso scripts with an options-based, idiomatic interface.
Set your API key as an environment variable:
export ANTHROPIC_API_KEY=sk-ant-xxxxx
duso script.du
Or pass it explicitly in your script:
claude = require("claude")
response = claude.prompt("Hello", {key = "sk-ant-xxxxx"})
claude = require("claude")
response = claude.prompt("What is Duso?")
print(response)
claude = require("claude")
chat = claude.session({
system = "You are a helpful assistant"
})
response1 = chat.prompt("What is a closure?")
response2 = chat.prompt("Can you give me an example?")
print(chat.usage) // Check token usage
claude = require("claude")
// Lower temperature = more deterministic
response = claude.prompt("Solve this math problem: 2 + 2", {
temperature = 0.5
})
// Higher temperature = more creative
response = claude.prompt("Write a poem about code", {
temperature = 1.0
})
claude = require("claude")
// Define a tool using standard format
var calculator = {
name = "calculator",
description = "Performs basic math operations",
parameters = {
operation = {type = "string"},
a = {type = "number"},
b = {type = "number"}
},
required = ["operation", "a", "b"],
handler = function(input)
if input.operation == "add" then return input.a + input.b end
if input.operation == "multiply" then return input.a * input.b end
end
}
// Create agent - handler is automatically extracted!
agent = claude.session({
tools = [calculator]
})
// Ask the agent - it will automatically call tools
response = agent.prompt("What is 15 * 27?")
print(response) // "405"
Set a custom timeout for API requests (default 30 seconds):
claude = require("claude")
// 10 second timeout for fast failure on slow connections
chat = claude.session({
system = "You are helpful",
timeout = 10
})
response = chat.prompt("Quick question")
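A short timeout pairs well with the module's try/catch error handling: if the request cannot complete in time, the failure can be caught like any other API error. A sketch, assuming a timed-out request surfaces as a catchable error:

```
claude = require("claude")

chat = claude.session({
timeout = 5
})

try
response = chat.prompt("Quick question")
print(response)
catch (error)
// Assumed: a timeout surfaces here like any other API error
print("Request failed or timed out: " + error)
end
```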
Enable caching on system prompts and tool definitions to reduce API costs; cached content is reused across requests. Note that Claude requires explicit cache_control markers, whereas OpenAI caches automatically on GPT-4o and later.
claude = require("claude")
var tools = [
{
name = "search",
description = "Search the knowledge base",
parameters = {query = {type = "string"}},
required = ["query"]
}
]
// Enable caching on system prompt and tools
chat = claude.session({
system = "You are a helpful research assistant with access to tools.",
tools = tools,
cache_control = {
system = true, // Cache the system prompt
tools = true // Cache all tool definitions
}
})
// First request: builds cache (costs normal tokens)
response1 = chat.prompt("Search for information about prompt caching")
// Second request: reuses cache (saves ~90% on system + tools input tokens!)
response2 = chat.prompt("What did you find?")
print(chat.usage)
Benefits:

- Roughly 90% savings on cached system-prompt and tool-definition input tokens after the first request
- Lower latency on subsequent requests, since cached content is not reprocessed
Send a one-shot query to Claude.
Parameters:
- message (string, required) - Your prompt
- config (object, optional) - Configuration options:
  - system - System prompt defining behavior
  - model - Model ID (default: claude-haiku-4-5-20251001)
  - max_tokens - Max tokens in response (default: 2048)
  - temperature - Sampling temperature 0-2 (default: 1.0)
  - top_p - Nucleus sampling parameter
  - top_k - Top-k sampling parameter
  - key - API key (if not in ANTHROPIC_API_KEY)

Returns:

- string - Claude's response

Examples:
// Basic
response = claude.prompt("What is the capital of France?")
// With system prompt
response = claude.prompt("Translate 'hello' to Spanish", {
system = "You are a translator"
})
// With a higher token limit
response = claude.prompt("Solve this complex problem", {
max_tokens = 4096
})
// With temperature
response = claude.prompt("Write a story", {
temperature = 1.5
})
Create a multi-turn conversation session.
Parameters:
config (object, optional) - Configuration options (same as prompt() plus):

- tools - Array of tool definitions
- tool_handlers - Object mapping tool names to handler functions
- auto_execute_tools - Auto-execute tools in response loop (default: true)
- tool_choice - Tool selection strategy: "auto", "any", "none" (default: "auto")

Returns:

A session object with these methods and fields:

- prompt(message) - Send a message, returns the text response
- add_tool_result(tool_use_id, result) - Manually add a tool result (for manual tool handling)
- continue_conversation() - Continue the conversation after a manual tool result
- clear() - Reset the conversation and usage stats
- messages - Array of all messages in the conversation
- usage - Token usage: {input_tokens = N, output_tokens = M}

Examples:
// Basic conversation
chat = claude.session({
system = "You are a helpful assistant",
temperature = 0.8
})
response1 = chat.prompt("Tell me about Duso")
response2 = chat.prompt("What are its main features?")
print(chat.usage) // {input_tokens = 234, output_tokens = 567}
// With tools
agent = claude.session({
tools = [my_tool],
tool_handlers = {my_tool_name = my_handler_function},
auto_execute_tools = true
})
response = agent.prompt("Use the tool to answer this")
// Manual tool handling
chat = claude.session({
tools = [my_tool],
tool_handlers = {},
auto_execute_tools = false
})
response = chat.prompt("Use the tool")
// Process response.content manually
chat.add_tool_result(tool_use_id, result)
chat.continue_conversation()
Define tools using the standard format with name, description, parameters, required, and optional handler:
Tool Structure:

- name - Function name (required, string)
- description - Function description (string)
- parameters - Object with parameter definitions (keys → type objects)
- required - Array of required parameter names
- handler - Handler function that executes the tool (optional); receives an input object with the parameters

Examples:
// Tool with handler
var greet = {
name = "greet",
description = "Greet someone",
parameters = {name = {type = "string"}},
required = ["name"],
handler = function(input)
return "Hello, " + input.name + "!"
end
}
// Tool without handler (manual handling)
var info = {
name = "get_info",
description = "Get information",
parameters = {topic = {type = "string"}},
required = ["topic"]
}
// Use in session
agent = claude.session({
tools = [greet, info]
})
List all models available for your account.
Parameters:
- key (string, optional) - API key (if not in ANTHROPIC_API_KEY)

Returns:

- array - Array of model objects with id, type, display_name, created_at

Example:
models = claude.models()
for i = 0; i < len(models); i = i + 1
print(models[i].id)
end
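The ids returned by models() can be fed straight back into the model config option. A minimal sketch, assuming the account has at least one model available:

```
claude = require("claude")

models = claude.models()
// Pin the session to the first listed model instead of the default
chat = claude.session({model = models[0].id})
response = chat.prompt("Hello")
print(response)
```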
All config options that can be passed to prompt() or session():
| Option | Type | Default | Description |
|---|---|---|---|
| model | string | claude-haiku-4-5-20251001 | Model ID to use |
| max_tokens | number | 2048 | Maximum tokens in response |
| temperature | number | 1.0 | Sampling temperature (0-2) |
| top_p | number | nil | Nucleus sampling parameter (0-1) |
| top_k | number | nil | Top-k sampling parameter |
| system | string | nil | System prompt |
| tools | array | nil | Array of tool definitions |
| tool_handlers | object | {} | Map of tool names to handler functions |
| auto_execute_tools | bool | true | Auto-execute tools in response loop |
| tool_choice | string | auto | Tool selection: auto, any, none |
| cache_control | object | nil | Prompt caching config: {system = true, tools = true} |
| timeout | number | 30 | Request timeout in seconds |
| skills | array | nil | Agent skills (experimental) |
| key | string | nil | API key (uses ANTHROPIC_API_KEY if not provided) |
As of 2025, Anthropic's latest models are:
- claude-opus-4-6 - Most capable, best for complex tasks
- claude-sonnet-4-5-20250929 - Fast and powerful
- claude-haiku-4-5-20251001 - Fast and affordable

See Anthropic's models page for the latest.
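Any of these ids can be passed via the model config option to override the default per call. A sketch:

```
claude = require("claude")

// Default is claude-haiku-4-5-20251001; use a more capable model for hard tasks
response = claude.prompt("Plan a multi-step refactor of a large codebase", {
model = "claude-opus-4-6",
max_tokens = 4096
})
print(response)
```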
Define tools using the standard format and handlers are automatically extracted:
var my_tool = {
name = "web_search",
description = "Search the web",
parameters = {
query = {type = "string", description = "Search query"}
},
required = ["query"],
handler = function(input)
// Implement search
return results
end
}
// Handler is automatically extracted and registered!
chat = claude.session({
tools = [my_tool]
})
response = chat.prompt("Search for Duso")
You can also include vendor-specific extras in tools that other vendors will ignore:
var multi_vendor_tool = {
name = "tool_name",
description = "...",
parameters = {...},
required = ["..."],
handler = function(input) ... end,
some_vendor_field = "value" // ignored by other vendors
}
session = claude.session({
tools = [multi_vendor_tool]
})
When auto_execute_tools = true (default), Claude's tool calls are automatically executed and results integrated into the conversation. When false, you can process tool calls manually:
chat = claude.session({
tools = [my_tool],
auto_execute_tools = false
})
response = chat.prompt("Use the tool")
// Check if Claude requested tool use
if contains(response, "tool") then
// Process manually
chat.add_tool_result(tool_id, result)
response = chat.continue_conversation()
end
Control response creativity with temperature and sampling parameters:
temperature (0-2): Controls randomness; lower values are more deterministic
top_p (0-1): Nucleus sampling - keeps top probability mass
top_k (integer): Keep top k most likely tokens
Typical configurations:
// Analytical
{temperature = 0.5, top_p = 0.9}
// Balanced
{temperature = 1.0}
// Creative
{temperature = 1.5, top_p = 0.95}
ANTHROPIC_API_KEY - Your API key (required if not passed in config)

Wrap API calls in try/catch to handle failures:

try
claude = require("claude")
response = claude.prompt("Hello")
print(response)
catch (error)
print("Error: " + error)
end
Common errors:
- Missing API key - set ANTHROPIC_API_KEY or pass key = in config

Claude API uses pay-as-you-go pricing based on tokens:
See Anthropic pricing for current rates.
Tips to reduce costs:
- Set max_tokens to limit response length

Example scripts:

- examples/simple.du - One-shot queries with temperature
- examples/conversation.du - Multi-turn conversation
- examples/tools.du - Tool use and agent patterns

This module uses Duso's best practices:

- default_config(overrides) for clean config merging

For complex agent patterns, disable auto execution and handle tools manually:
chat = claude.session({
tools = [my_tool],
auto_execute_tools = false
})
// In a loop (has_tool_calls, extract_tool_calls, and execute_tool
// are user-defined helpers):
response = chat.prompt(user_input)
if has_tool_calls(response) then
tool_calls = extract_tool_calls(response)
results = {}
for i = 0; i < len(tool_calls); i = i + 1
results[tool_calls[i].id] = execute_tool(tool_calls[i])
end
chat.add_tool_result(...)
response = chat.continue_conversation()
end