Structured output across vendors
Tool schemas — vendor-specific differences
The previous lesson established that all three vendors support function calling and described its high-level shape. This lesson covers the implementation details: when you write the actual tools array, the JSON differs across vendors in small but consequential ways. Knowing these differences ahead of time saves a debugging session per port.
The same conceptual schema, three serialisations
Imagine you want to expose a function get_order_status(order_id: string) -> string to the model. Each vendor takes that intent and serialises it differently.
Anthropic (Claude)
{
  "tools": [
    {
      "name": "get_order_status",
      "description": "Look up the current shipping status of an order",
      "input_schema": {
        "type": "object",
        "properties": {
          "order_id": {
            "type": "string",
            "description": "Order ID like 4821"
          }
        },
        "required": ["order_id"]
      }
    }
  ]
}
OpenAI (GPT)
{
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_order_status",
        "description": "Look up the current shipping status of an order",
        "parameters": {
          "type": "object",
          "properties": {
            "order_id": {
              "type": "string",
              "description": "Order ID like 4821"
            }
          },
          "required": ["order_id"]
        }
      }
    }
  ]
}
Google (Gemini)
{
  "tools": [
    {
      "function_declarations": [
        {
          "name": "get_order_status",
          "description": "Look up the current shipping status of an order",
          "parameters": {
            "type": "object",
            "properties": {
              "order_id": {
                "type": "string",
                "description": "Order ID like 4821"
              }
            },
            "required": ["order_id"]
          }
        }
      ]
    }
  ]
}
Where the differences actually matter
Three points to flag when porting:
- The wrapper shape. Anthropic puts input_schema directly on the tool. OpenAI nests it under function.parameters. Google nests it under function_declarations[].parameters. The inner JSON Schema is the same in all three; the wrapping is not.
- Type system support. All three accept the standard JSON Schema subset (string, number, boolean, array, object, enum), but the edge cases differ. Anthropic accepts format: "uri" and pattern: "^regex$" for strings. OpenAI's strict mode (strict: true) is the only one that guarantees the model's output matches the schema exactly, but strict mode forbids additionalProperties and some other JSON Schema features. Google's strictness is the loosest of the three.
- Streaming behaviour. Function-call arguments come back streamed in chunks on all three. Anthropic streams the JSON as content_block_delta events. OpenAI streams via delta.tool_calls[].function.arguments. Google streams them as part of parts. If your code parses arguments as they arrive, the parser is vendor-specific.
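The wrapper differences reduce to a few lines of code. A minimal sketch in Python, using the three JSON shapes shown above — the canonical TOOL dict and the to_* function names are illustrative, not part of any vendor SDK:

```python
# One canonical tool definition, three vendor-specific wrappers.
# The shapes below mirror the JSON examples in this lesson.

SCHEMA = {
    "type": "object",
    "properties": {
        "order_id": {"type": "string", "description": "Order ID like 4821"}
    },
    "required": ["order_id"],
}

TOOL = {
    "name": "get_order_status",
    "description": "Look up the current shipping status of an order",
    "schema": SCHEMA,
}

def to_anthropic(tool):
    # Anthropic: the schema sits directly on the tool as input_schema.
    return {"name": tool["name"],
            "description": tool["description"],
            "input_schema": tool["schema"]}

def to_openai(tool):
    # OpenAI: wrapped in {"type": "function", "function": {...}},
    # schema under function.parameters.
    return {"type": "function",
            "function": {"name": tool["name"],
                         "description": tool["description"],
                         "parameters": tool["schema"]}}

def to_google(tool):
    # Google: a function_declarations list inside each tools entry.
    return {"function_declarations": [{"name": tool["name"],
                                       "description": tool["description"],
                                       "parameters": tool["schema"]}]}
```

Note that the inner schema object is passed through untouched in all three cases; only the envelope changes, which is exactly why a thin translation layer is enough.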
A portable abstraction layer
In production, most teams write an internal interface that accepts a single schema description and emits the right vendor-specific JSON. The layer typically lives in 100-200 lines of TypeScript or Python and is the right place to centralise these differences. The layer also normalises the response shape — your business code sees { tool_name, arguments } regardless of vendor.
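The response-normalisation half can be sketched the same way. The payloads below are simplified stand-ins for the real response objects (real ones carry ids, indices, and more fields), and the normalise_tool_call helper is illustrative, not a library API:

```python
import json

def normalise_tool_call(vendor, response):
    """Map a vendor tool-call payload to {tool_name, arguments}.

    Input shapes are simplified sketches of each vendor's response,
    not exhaustive representations.
    """
    if vendor == "anthropic":
        # Anthropic: a tool_use content block with a parsed "input" dict.
        block = next(b for b in response["content"] if b["type"] == "tool_use")
        return {"tool_name": block["name"], "arguments": block["input"]}
    if vendor == "openai":
        # OpenAI: tool_calls whose arguments arrive as a JSON *string*.
        call = response["choices"][0]["message"]["tool_calls"][0]
        return {"tool_name": call["function"]["name"],
                "arguments": json.loads(call["function"]["arguments"])}
    if vendor == "google":
        # Google: a functionCall part with a parsed "args" dict.
        part = response["candidates"][0]["content"]["parts"][0]
        return {"tool_name": part["functionCall"]["name"],
                "arguments": part["functionCall"]["args"]}
    raise ValueError(f"unknown vendor: {vendor}")
```

One asymmetry worth the comment: OpenAI returns arguments as a JSON string that you must parse, while Anthropic and Google hand you already-parsed objects — a classic source of bugs when porting.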
If you are starting today, do not write your own. The Vercel AI SDK and LangChain both ship vendor-portable function-calling abstractions. Pick one. The interesting work is the schema design and the tool implementations, not the JSON wrapping.
Next: failure modes per vendor when function calling goes wrong.