Tool Integration & Function Calling
Function Calling Syntax Across Platforms
Each major AI platform has its own function calling syntax. Understanding these patterns lets you build portable tool integrations and optimize for each platform's strengths.
Claude's Function Calling (Anthropic)
Tool Definition Format
{
"tools": [
{
"name": "get_weather",
"description": "Get current weather for a location",
"input_schema": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "City name or coordinates"
},
"units": {
"type": "string",
"enum": ["celsius", "fahrenheit"]
}
},
"required": ["location"]
}
}
]
}
Tool Use Response (Claude Opus 4.5/Sonnet 4.5)
{
"content": [
{
"type": "tool_use",
"id": "toolu_01ABC123",
"name": "get_weather",
"input": {
"location": "San Francisco",
"units": "fahrenheit"
}
}
],
"stop_reason": "tool_use"
}
Tool Result Format
{
"role": "user",
"content": [
{
"type": "tool_result",
"tool_use_id": "toolu_01ABC123",
"content": "72°F, sunny, humidity 45%"
}
]
}
Claude Code's Internal Format
Claude Code uses structured tags for internal tool calls:
[function_calls]
[invoke name="Read"]
[parameter name="file_path"]/src/index.ts[/parameter]
[/invoke]
[/function_calls]
[function_results]
Content of the file...
[/function_results]
OpenAI's Function Calling (GPT-5.2)
Tool Definition Format
{
"tools": [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get current weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "City name"
}
},
"required": ["location"]
}
}
}
],
"tool_choice": "auto"
}
Tool Call Response
{
"choices": [
{
"message": {
"tool_calls": [
{
"id": "call_abc123",
"type": "function",
"function": {
"name": "get_weather",
"arguments": "{\"location\": \"Tokyo\"}"
}
}
]
},
"finish_reason": "tool_calls"
}
]
}
Tool Result Message
{
"role": "tool",
"tool_call_id": "call_abc123",
"content": "22°C, partly cloudy"
}
GPT-5.2 Parallel Tool Calls
GPT-5.2 supports parallel function calls:
{
"tool_calls": [
{
"id": "call_1",
"function": {"name": "get_weather", "arguments": "{\"location\": \"NYC\"}"}
},
{
"id": "call_2",
"function": {"name": "get_weather", "arguments": "{\"location\": \"LA\"}"}
}
]
}
Google's Function Calling (Gemini 3)
Tool Definition Format
{
"tools": [
{
"function_declarations": [
{
"name": "get_weather",
"description": "Returns weather for location",
"parameters": {
"type": "object",
"properties": {
"location": {"type": "string"}
},
"required": ["location"]
}
}
]
}
]
}
Function Call Response
{
"candidates": [
{
"content": {
"parts": [
{
"functionCall": {
"name": "get_weather",
"args": {"location": "London"}
}
}
]
}
}
]
}
Function Response Format
{
"parts": [
{
"functionResponse": {
"name": "get_weather",
"response": {
"temperature": "15",
"condition": "rainy"
}
}
}
]
}
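Gemini's shapes are plain JSON as well, so a thin helper pair is enough to extract the call and build the reply part. This is a sketch against the response shapes shown above; all field names come from those snippets.

def extract_gemini_call(response: dict) -> tuple[str, dict]:
    """Pull the first functionCall out of a generateContent response."""
    parts = response["candidates"][0]["content"]["parts"]
    call = next(p["functionCall"] for p in parts if "functionCall" in p)
    return call["name"], call["args"]

def build_gemini_function_response(name: str, result: dict) -> dict:
    """Build the content that carries the tool result back to the model."""
    return {"parts": [{"functionResponse": {"name": name, "response": result}}]}

# The response shape shown above
response_json = {"candidates": [{"content": {"parts": [
    {"functionCall": {"name": "get_weather", "args": {"location": "London"}}}
]}}]}

name, args = extract_gemini_call(response_json)  # ("get_weather", {"location": "London"})
reply = build_gemini_function_response(name, {"temperature": 15, "condition": "rainy"})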
Comparison Table
| Feature | Claude | OpenAI | Gemini |
|---|---|---|---|
| Schema | input_schema | parameters | parameters |
| Call ID | tool_use_id | tool_call_id | implicit |
| Result role | user (tool_result block) | tool | functionResponse part |
| Parallel calls | Yes | Yes | Yes |
| Streaming | Yes | Yes | Yes |
MCP (Model Context Protocol)
Claude Code uses MCP for tool extensibility:
{
"mcpServers": {
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem"],
"env": {
"ALLOWED_PATHS": "/home/user/projects"
}
}
}
}
MCP Tool Discovery
{
"jsonrpc": "2.0",
"method": "tools/list",
"id": 1
}
Response:
{
"tools": [
{
"name": "read_file",
"description": "Read file contents",
"inputSchema": {...}
}
]
}
MCP Tool Invocation
{
"jsonrpc": "2.0",
"method": "tools/call",
"params": {
"name": "read_file",
"arguments": {
"path": "/src/main.ts"
}
},
"id": 2
}
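In practice Claude Code speaks MCP through an SDK, but the wire protocol is simple enough to sketch by hand. The stand-alone client below assumes the stdio transport's newline-delimited JSON-RPC framing and a recent protocol version string; the handshake fields come from the MCP specification, and the directory path is illustrative.

import json
import subprocess

# Spawn the filesystem server from the config above; the path is illustrative
proc = subprocess.Popen(
    ["npx", "-y", "@modelcontextprotocol/server-filesystem", "/home/user/projects"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

def rpc(method, params=None, msg_id=None):
    """Send one newline-delimited JSON-RPC message; read a reply only for requests."""
    msg = {"jsonrpc": "2.0", "method": method}
    if params is not None:
        msg["params"] = params
    if msg_id is not None:
        msg["id"] = msg_id
    proc.stdin.write(json.dumps(msg) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline()) if msg_id is not None else None

# Initialize handshake (the protocol version is an assumption; check the MCP spec)
rpc("initialize", {"protocolVersion": "2025-06-18", "capabilities": {},
                   "clientInfo": {"name": "demo-client", "version": "0.1"}}, msg_id=0)
rpc("notifications/initialized")

tools = rpc("tools/list", msg_id=1)
result = rpc("tools/call",
             {"name": "read_file", "arguments": {"path": "/src/main.ts"}}, msg_id=2)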
Best Practices for Cross-Platform Tools
1. Use JSON Schema Standard
{
"type": "object",
"properties": {
"query": {
"type": "string",
"minLength": 1,
"maxLength": 1000
}
},
"required": ["query"],
"additionalProperties": false
}
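A schema like this can be enforced before a tool ever runs. One way, assuming the third-party jsonschema package, is a small validation guard:

from jsonschema import ValidationError, validate  # pip install jsonschema

QUERY_SCHEMA = {
    "type": "object",
    "properties": {"query": {"type": "string", "minLength": 1, "maxLength": 1000}},
    "required": ["query"],
    "additionalProperties": False,
}

def check_args(args: dict) -> None:
    # Raises ValidationError with a readable message when the input is invalid
    validate(instance=args, schema=QUERY_SCHEMA)

check_args({"query": "weather in Tokyo"})  # passes
try:
    check_args({"q": "typo"})
except ValidationError as exc:
    print(exc.message)  # "'query' is a required property"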
2. Normalize Response Handling
import json

def handle_tool_response(platform, response):
    """Extract the tool call's argument dict from a raw API response."""
    if platform == "claude":
        # Pick the tool_use block; content may also contain text blocks
        return next(b for b in response["content"] if b["type"] == "tool_use")["input"]
    elif platform == "openai":
        # OpenAI nests tool_calls under the first choice, with JSON-string arguments
        call = response["choices"][0]["message"]["tool_calls"][0]
        return json.loads(call["function"]["arguments"])
    elif platform == "gemini":
        return response["candidates"][0]["content"]["parts"][0]["functionCall"]["args"]
3. Unified Error Format
{
"error": {
"type": "tool_error",
"code": "INVALID_PARAMS",
"message": "Missing required parameter: location",
"recoverable": true
}
}
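A wrapper that funnels every tool failure into that shape might look like the sketch below; the EXECUTION_FAILED code is an assumed addition, not part of the format above.

def run_tool(handler, args: dict) -> dict:
    """Run a tool handler and normalize any failure into the unified error shape."""
    try:
        return {"result": handler(**args)}
    except TypeError as exc:  # e.g. a missing required parameter
        return {"error": {"type": "tool_error", "code": "INVALID_PARAMS",
                          "message": str(exc), "recoverable": True}}
    except Exception as exc:  # anything the tool itself raised
        return {"error": {"type": "tool_error", "code": "EXECUTION_FAILED",
                          "message": str(exc), "recoverable": False}}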
Tool Calling Modes
Different invocation strategies:
| Mode | Description | Use Case |
|---|---|---|
| auto | Model decides | General use |
| required | Must call tool | Data retrieval |
| none | No tools | Pure generation |
| specific | Force specific tool | Controlled flows |
// Claude: tool_choice
{"type": "auto"}
{"type": "any"}
{"type": "tool", "name": "get_weather"}
// OpenAI: tool_choice
"auto"
"required"
{"type": "function", "function": {"name": "get_weather"}}
// Gemini: tool_config
{"function_calling_config": {"mode": "AUTO"}}
Key Insight: While syntax differs across platforms, the core concepts are identical: define tools with JSON Schema, receive structured calls, return formatted results. Building abstraction layers lets you write tools once and use them everywhere.
Next, we'll explore MCP integration and tool orchestration patterns.