Let’s talk about something that’s flying under the radar but has huge implications—especially if you're building automation workflows: OpenAI's new Responses API.
What Is the Responses API?
The Responses API is OpenAI’s most advanced and future-focused interface for working with their models (like GPT-4o or GPT-3.5 Turbo). It consolidates and expands on previous APIs—especially the Completions and Assistants APIs—into a single, powerful toolset.
Here’s what it brings to the table:
- Text Generation – Like the classic Completions API: simple prompt, smart response.
- Stateful Conversations – Maintains memory across turns, similar to the Assistants API.
- Multimodal Input – Supports both images and text, ideal for GPT-4o’s capabilities.
- Native Tools Integration – Direct access to:
  - File Search (RAG-style document querying)
  - Web Browsing (live info retrieval)
- Function Calling – Trigger external APIs or tools based on the model’s output.
- Structured Outputs (My Favorite) – Use a JSON Schema to force the model to return data in a predefined structure.
From Assistants to Responses: A Welcome Evolution
If you’ve worked with the Assistants API, you know it was already a step up—especially for stateful interaction and tool integration. I built most of my earlier automation projects on Assistants, and while it never got the love it deserved, it was incredibly useful.
But OpenAI is moving forward. The Responses API will replace the Assistants API—and honestly, that’s a good thing.
Why? Because the Responses API gives you all the control that made Assistants valuable, while making the process easier, more capable, and far more flexible. It’s less rigid, more powerful, and deeply customizable.
Spotlight: Structured Outputs
The game-changing feature for automation is Structured Outputs—and this is where things get exciting.
Structured Outputs allow you to define a strict JSON Schema, which forces the AI to return data in a consistent and machine-readable format. This matters not just for clarity, but because it makes the response immediately useful in other systems—like a CRM, a training module, or a dashboard.
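To make that concrete, here's a minimal sketch of what such a schema could look like (the field names are illustrative, chosen for a wine-scoring bot; note that strict structured outputs require every property to be listed in `required` and `additionalProperties` set to `false`):

```json
{
  "type": "object",
  "properties": {
    "feedback": { "type": "string" },
    "grapeScore": { "type": "integer" },
    "justificationScore": { "type": "integer" },
    "totalScore": { "type": "integer" },
    "bestGrape": { "type": "string" },
    "bestRegion": { "type": "string" }
  },
  "required": ["feedback", "grapeScore", "justificationScore", "totalScore", "bestGrape", "bestRegion"],
  "additionalProperties": false
}
```

With this in place, the model can't return a missing score or an extra field—every response fits the same shape.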
Let’s look at a side-by-side example of freeform vs structured output in a "Wine Tutor" bot scenario.
Input Data
{
  "setting": "Harvest Dinner",
  "scenario": "You're attending an exclusive harvest dinner...",
  "Best Grape": "Cabernet Sauvignon",
  "Region": "Napa Valley",
  "Pairing Rationale": "A bold Napa Valley Cabernet Sauvignon...",
  "Student Response": "I think a Pinot from Oregon might be the best option..."
}
Freeform AI Response
"Ah, my dear aspirant... For choosing a Pinot Noir, rather than the crowning jewel of a Napa Valley Cabernet Sauvignon, I must award you a mere 7 points... You did catch the red category, so an additional 2 points to you... Your rationale, however, demonstrated a grasp... deserving of a 3 for its attempt... Remember this: At events where richness abounds, let Napa Valley's Cabernet Sauvignon—bold like the sun and full of grace—be your guiding star."
Trying to extract structured values (like total score, grape recommendation, or rationale quality) from that paragraph reliably? Not fun. You’d likely have to build a custom parser, use regex, or rely on brittle keyword matching—none of which is scalable or reliable in a production automation environment. Worse, any slight variation in how the AI phrases its response could completely break your downstream logic. That’s a risky bet when you need consistent data for powering other systems like CRMs, LMS platforms, or analytics dashboards.
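A quick sketch shows how fragile this gets. Below, a hypothetical regex (not from any real workflow) extracts the score from one phrasing of the freeform response, then silently fails when the model rewords the same idea:

```python
import re

# A regex tuned to one particular phrasing of the freeform response...
pattern = re.compile(r"award you a mere (\d+) points")

response_v1 = "I must award you a mere 7 points for that choice."
response_v2 = "That choice earns you 7 points, I'm afraid."  # same meaning, new phrasing

match_v1 = pattern.search(response_v1)
match_v2 = pattern.search(response_v2)

print(match_v1.group(1) if match_v1 else None)  # "7"
print(match_v2)                                 # None – the parser silently breaks
```

One harmless rewording and your downstream scenario receives no score at all—exactly the failure mode Structured Outputs eliminates.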
Structured Output (Easy to Parse and Use)
{
  "feedback": "Ah, my dear aspirant... let Napa Valley's Cabernet Sauvignon—bold like the sun and full of grace—be your guiding star.",
  "grapeScore": 7,
  "justificationScore": 3,
  "totalScore": 10,
  "bestGrape": "Cabernet Sauvignon",
  "bestRegion": "Napa Valley"
}
Now that we understand structured data and the Responses API, the remaining question is how to use them in an automation tool like Make.com.
Implementing in Make.com: The HTTP Module & Double Parsing
So how do you do this in Make.com? As of now, there isn’t a dedicated Responses API module. Instead, we use the HTTP - Make a request module.
- Setup:
  - URL: https://api.openai.com/v1/responses
  - Method: POST
  - Headers:
    - Authorization: Bearer YOUR_API_KEY
    - Content-Type: application/json
    - OpenAI-Beta: assistants=v2
  - Body Type: Raw, JSON
- Payload:
  - model: gpt-4o or gpt-3.5-turbo
  - input: Your prompt (e.g., "I am eating {{40.meal}}. What wine pairs best?")
  - format: Use json_schema
  - schema: Your defined JSON structure
  - instructions: Guide the model (e.g., "Return feedback, scores, and pairings.")
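Assembled, the raw request body might look roughly like this. Treat it as a sketch: the exact nesting of the structured-output options (for instance, whether they sit under a `text.format` wrapper) has shifted across API versions, so verify against the current Responses API reference. The schema fields here are the illustrative wine-tutor ones:

```json
{
  "model": "gpt-4o",
  "input": "I am eating {{40.meal}}. What wine pairs best?",
  "instructions": "Return feedback, scores, and pairings.",
  "text": {
    "format": {
      "type": "json_schema",
      "name": "wine_feedback",
      "strict": true,
      "schema": {
        "type": "object",
        "properties": {
          "feedback": { "type": "string" },
          "totalScore": { "type": "integer" },
          "bestGrape": { "type": "string" },
          "bestRegion": { "type": "string" }
        },
        "required": ["feedback", "totalScore", "bestGrape", "bestRegion"],
        "additionalProperties": false
      }
    }
  }
}
```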
- Double Parse:
  - First, parse the response's data object.
  - Then, extract the output[].content[].text field and parse it again to get the actual JSON data.
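The double parse is easiest to see in code. Here's a self-contained Python sketch: the `raw_body` string is a simplified stand-in for what the HTTP module's data field returns (the real envelope carries more fields), but the key point—the model's structured output arrives as a JSON *string* inside `output[].content[].text`—is exactly the shape described above:

```python
import json

# Illustrative raw body, simplified to the fields the double parse touches.
raw_body = json.dumps({
    "output": [
        {
            "type": "message",
            "content": [
                {
                    "type": "output_text",
                    "text": json.dumps({
                        "feedback": "Let Napa Valley's Cabernet Sauvignon be your guiding star.",
                        "grapeScore": 7,
                        "justificationScore": 3,
                        "totalScore": 10,
                        "bestGrape": "Cabernet Sauvignon",
                        "bestRegion": "Napa Valley"
                    })
                }
            ]
        }
    ]
})

# First parse: the HTTP response body itself.
envelope = json.loads(raw_body)

# Second parse: the structured output, stored as a JSON string in the text field.
inner_text = envelope["output"][0]["content"][0]["text"]
result = json.loads(inner_text)

print(result["totalScore"])   # 10
print(result["bestGrape"])    # Cabernet Sauvignon
```

In Make.com you'd do the same thing with two Parse JSON steps instead of two `json.loads` calls, but the data flow is identical.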
- Use the Data:
  - After parsing, you'll have clean variables: feedback, wine, region, totalScore, etc.
The Power Unleashed
Why this matters:
- Reliability – No more guesswork when parsing AI output.
- Simplicity – Eliminate brittle regex and string searches.
- Efficiency – Focus on building logic, not fixing parsing issues.
- Control – You dictate exactly what you want and how.
So while the Assistants API had its moment, the Responses API—especially when paired with Structured Outputs and Make.com—is a massive leap forward. It may take a few extra steps, but the payoff is clean, predictable, and scalable automation.