Access OpenAI's API directly from Duso scripts with an options-based, idiomatic interface.
Set your API key as an environment variable:
export OPENAI_API_KEY=sk-proj-xxxxx
duso script.du
Or pass it explicitly in your script:
openai = require("openai")
response = openai.prompt("Hello", {key = "sk-proj-xxxxx"})
openai = require("openai")
response = openai.prompt("What is Duso?")
print(response)
openai = require("openai")
chat = openai.session({
system = "You are a helpful assistant"
})
response1 = chat.prompt("What is a closure?")
response2 = chat.prompt("Can you give me an example?")
print(chat.usage) // Check token usage
openai = require("openai")
// Lower temperature = more deterministic
response = openai.prompt("Solve this math problem: 2 + 2", {
temperature = 0.5
})
// Higher temperature = more creative
response = openai.prompt("Write a poem about code", {
temperature = 1.0
})
openai = require("openai")
// Define a tool using standard format
var calculator = {
name = "calculator",
description = "Performs basic math operations",
parameters = {
operation = {type = "string"},
a = {type = "number"},
b = {type = "number"}
},
required = ["operation", "a", "b"],
handler = function(input)
if input.operation == "add" then return input.a + input.b end
if input.operation == "multiply" then return input.a * input.b end
end
}
// Create agent - handler is automatically extracted!
agent = openai.session({
tools = [calculator]
})
// Ask the agent - it will automatically call tools
response = agent.prompt("What is 15 * 27?")
print(response) // "405"
OpenAI automatically caches prompts on GPT-4o and newer models. No special configuration needed!
openai = require("openai")
// Automatic caching: any prompt 1024+ tokens is cached on GPT-4o+
// Cache hits get 50% discount on cached token cost
chat = openai.session({
system = "Your long system prompt here",
model = "gpt-4o" // or gpt-4o-mini, o1-preview, o1-mini
})
response1 = chat.prompt("First question")
response2 = chat.prompt("Second question") // Uses cache!
Note: The cache_control config is accepted for API compatibility but ignored (OpenAI handles caching automatically). Caching requires GPT-4o or newer and 1024+ prompt tokens.
Set a custom timeout for API requests (default 30 seconds):
openai = require("openai")
// 15 second timeout for responsive user interfaces
chat = openai.session({
system = "You are helpful",
timeout = 15
})
response = chat.prompt("Quick question")
OpenAI reuses cached system prompts and tool definitions across requests, reducing API costs. The cache_control option shown below is accepted for cross-provider compatibility; on OpenAI, caching happens automatically:
openai = require("openai")
var tools = [
{
name = "search",
description = "Search the knowledge base",
parameters = {query = {type = "string"}},
required = ["query"]
}
]
// Enable caching on system prompt and tools
chat = openai.session({
system = "You are a helpful research assistant",
tools = tools,
cache_control = {
system = true, // Cache the system prompt
tools = true // Cache tool definitions
}
})
// First request: normal token cost (builds cache)
response1 = chat.prompt("What is machine learning?")
// Second request: reuses cached system prompt & tools (50% cost savings on cached tokens!)
response2 = chat.prompt("Tell me more about neural networks")
print(chat.usage)
Benefits:
- 50% discount on cached input tokens
- No special configuration required on GPT-4o and newer models
Send a one-shot query to OpenAI.
Parameters:
- message (string, required) - Your prompt
- config (object, optional) - Configuration options:
  - system - System prompt defining behavior
  - model - Model ID (default: gpt-4o-mini)
  - max_tokens - Max tokens in response (default: 2048)
  - temperature - Sampling temperature 0-2 (default: 1.0)
  - top_p - Nucleus sampling parameter
  - key - API key (if not in OPENAI_API_KEY)

Returns:
- string - Assistant's response

Examples:
// Basic
response = openai.prompt("What is the capital of France?")
// With system prompt
response = openai.prompt("Translate 'hello' to Spanish", {
system = "You are a translator"
})
// With model override
response = openai.prompt("Solve this complex problem", {
model = "gpt-4o",
max_tokens = 4096
})
// With temperature
response = openai.prompt("Write a story", {
temperature = 1.5
})
Create a multi-turn conversation session.
Parameters:
- config (object, optional) - Configuration options (same as prompt() plus):
  - tools - Array of tool definitions (OpenAI format)
  - tool_handlers - Object mapping tool names to handler functions
  - auto_execute_tools - Auto-execute tools in the response loop (default: true)
  - tool_choice - Tool selection strategy: "auto", "any", "none" (default: "auto")

Returns:
A session object with:
- prompt(message) - Send a message, returns the text response
- add_tool_result(tool_call_id, result) - Manually add a tool result (for manual tool handling)
- continue_conversation() - Continue the conversation after manual tool results
- clear() - Reset conversation and usage stats
- messages - Array of all messages in the conversation
- usage - Token usage: {input_tokens = N, output_tokens = M}

Examples:
// Basic conversation
chat = openai.session({
system = "You are a helpful assistant",
temperature = 0.8
})
response1 = chat.prompt("Tell me about Duso")
response2 = chat.prompt("What are its main features?")
print(chat.usage) // {input_tokens = 234, output_tokens = 567}
// With tools
agent = openai.session({
tools = [my_tool],
tool_handlers = {my_tool_name = my_handler_function},
auto_execute_tools = true
})
response = agent.prompt("Use the tool to answer this")
// Manual tool handling
chat = openai.session({
tools = [my_tool],
tool_handlers = {},
auto_execute_tools = false
})
response = chat.prompt("Use the tool")
// Process response.content manually
chat.add_tool_result(tool_call_id, result)
chat.continue_conversation()
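The session fields and methods above (messages, usage, clear()) can be combined to reset state between unrelated topics; a minimal sketch, assuming the session API behaves as documented:

```duso
openai = require("openai")

chat = openai.session({
system = "You are a helpful assistant"
})

response = chat.prompt("Explain recursion briefly")
print(len(chat.messages)) // Conversation history grows with each turn
print(chat.usage) // {input_tokens = N, output_tokens = M}

// Reset history and usage stats before a new topic
chat.clear()
response = chat.prompt("Now explain closures")
```

Clearing between topics also keeps the conversation history short, which reduces input-token costs.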
Define tools using the standard format with name, description, parameters, required, and optional handler:
Tool Structure:
- name - Function name (required, string)
- description - Function description (string)
- parameters - Object with parameter definitions (keys → type objects)
- required - Array of required parameter names
- handler - Handler function that executes the tool (optional)

The handler receives an input object containing the call's parameters.

Examples:
// Tool with handler
var greet = {
name = "greet",
description = "Greet someone",
parameters = {name = {type = "string"}},
required = ["name"],
handler = function(input)
return "Hello, " + input.name + "!"
end
}
// Tool without handler (manual handling)
var info = {
name = "get_info",
description = "Get information",
parameters = {topic = {type = "string"}},
required = ["topic"]
}
// Use in session
agent = openai.session({
tools = [greet, info]
})
List all models available for your account.
Parameters:
- key (string, optional) - API key (if not in OPENAI_API_KEY)

Returns:
- array - Array of model objects with id, owned_by, created, etc.

Example:
models = openai.models()
for i = 0; i < len(models); i = i + 1
print(models[i].id)
end
All config options that can be passed to prompt() or session():
| Option | Type | Default | Description |
|---|---|---|---|
| model | string | gpt-4o-mini | Model ID to use |
| max_tokens | number | 2048 | Maximum tokens in response |
| temperature | number | 1.0 | Sampling temperature (0-2) |
| top_p | number | nil | Nucleus sampling parameter (0-1) |
| system | string | nil | System prompt |
| tools | array | nil | Array of tool definitions (OpenAI format) |
| tool_handlers | object | {} | Map of tool names to handler functions |
| auto_execute_tools | bool | true | Auto-execute tools in response loop |
| tool_choice | string | auto | Tool selection: auto, any, none |
| cache_control | object | nil | Prompt caching config: {system = true, tools = true} |
| timeout | number | 30 | Request timeout in seconds |
| key | string | nil | API key (uses OPENAI_API_KEY if not provided) |
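As a sketch, several of these options can be combined in a single session config (the model and values here are chosen only for illustration):

```duso
openai = require("openai")

chat = openai.session({
model = "gpt-4o",
max_tokens = 1024,
temperature = 0.7,
system = "You are a concise technical reviewer",
tool_choice = "auto",
timeout = 20
})

response = chat.prompt("Review this API design")
```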
As of 2025, OpenAI's latest models are:
- gpt-4o - Most capable
- gpt-4-turbo - Previous generation
- gpt-4o-mini - Affordable, fast
- gpt-3.5-turbo - Budget option

See OpenAI's models page for the latest.
Tools enable the assistant to take actions. Define them using the standard format:
var my_tool = {
name = "web_search",
description = "Search the web",
parameters = {
query = {type = "string", description = "Search query"}
},
required = ["query"],
handler = function(input)
// Implement search
return results
end
}
// Handler is automatically extracted and registered!
chat = openai.session({
tools = [my_tool]
})
response = chat.prompt("Search for Duso")
You can also include vendor-specific extras in tools:
var multi_vendor_tool = {
name = "tool_name",
description = "...",
parameters = {...},
required = ["..."],
handler = function(input) ... end,
some_vendor_field = "value"
}
session = openai.session({
tools = [multi_vendor_tool]
})
When auto_execute_tools = true (default), the assistant's tool calls are automatically executed and results integrated into the conversation. When false, you can process tool calls manually:
chat = openai.session({
tools = [my_tool],
auto_execute_tools = false
})
response = chat.prompt("Use the tool")
// Process manually
chat.add_tool_result(tool_call_id, result)
response = chat.continue_conversation()
Control response creativity with temperature and sampling parameters:
- temperature (0-2): Controls randomness
- top_p (0-1): Nucleus sampling - keeps only the top probability mass
Typical configurations:
// Analytical
{temperature = 0.5, top_p = 0.9}
// Balanced
{temperature = 1.0}
// Creative
{temperature = 1.5, top_p = 0.95}
- OPENAI_API_KEY - Your API key (required if not passed in config)

Wrap API calls in try/catch to handle errors:

try
openai = require("openai")
response = openai.prompt("Hello")
print(response)
catch (error)
print("Error: " + error)
end
Common errors:
- Missing API key - Set OPENAI_API_KEY or pass key = in config

OpenAI uses pay-as-you-go pricing based on tokens.
See OpenAI pricing for current rates.
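Because billing is token-based, the usage counters on a session can be logged to monitor spend; a sketch assuming the usage fields shown in the session() docs:

```duso
openai = require("openai")

chat = openai.session({
system = "You are helpful"
})

chat.prompt("Summarize prompt caching in one sentence")

// Compare these counts against OpenAI's current per-token rates
print(chat.usage.input_tokens)
print(chat.usage.output_tokens)
```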
Tips to reduce costs:
- Set max_tokens to limit response length

Example scripts:
- examples/simple.du - One-shot queries with temperature
- examples/conversation.du - Multi-turn conversations
- examples/tools.du - Tool use and agent patterns

This module uses Duso's best practices.
This module uses OpenAI's native API format, making it compatible with other providers that expose OpenAI-compatible endpoints.
To use a compatible provider, you typically only need to change the API endpoint. Future Duso provider modules will follow this same interface.
For complex agent patterns, disable auto execution and handle tools manually:
chat = openai.session({
tools = [my_tool],
auto_execute_tools = false
})
// In a loop:
response = chat.prompt(user_input)
tool_calls = extract_tool_calls(response)
if len(tool_calls) > 0 then
for tool_call in tool_calls
result = execute_tool(tool_call)
chat.add_tool_result(tool_call.id, result)
end
response = chat.continue_conversation()
end
The model indicates pending tool calls with finish_reason == "tool_calls".