How To Build AI Agents with Multi-Step Tool Calling in Bubble šŸ¤– šŸ› ļø

A concise, modular blueprint you can adapt to any app. I call this a blueprint because it isn’t a copy-paste recipe; it just gives you some ideas that might help you get unstuck.


Why this guide exists

I just landed on a multi-step AI agent workflow that I’m happy with and wanted to share with you all as I haven’t seen many solid approaches for this.

Bubble makes tool calls tricky for two main reasons: JSON strings are awkward to manipulate in Bubble, and looping is hard in Bubble. This guide lays out:

  1. The minimum data model - option sets & types that keep things tidy.
  2. A repeat-until-done backend workflow that lets the AI call tools in-line.
  3. A routing pattern so each tool’s logic lives in its own custom event and is modular and maintainable.

I won’t spell out every field and value you need to pass for every part, because I’m lazy, and I’ll be the first to say that I’m not the best at explaining everything. However, if you’ve gotten this far with AI in Bubble, you can probably read between the lines to understand the concept of what we’re trying to do here, and get enough of an idea of what’s going on to try it out yourself.

This guide is specifically for OpenAI-compatible APIs. I recommend OpenRouter as it allows you to hot-swap models and use exactly the same logic for all AI models and providers. So you don’t need two sets of logic for OpenAI and Anthropic, for example.


Core concepts

| Concept | What it is | Why it matters |
|---|---|---|
| Message | One chat turn: System, Assistant, User, or a Tool Call response. | We pass a list of Messages to a looping backend workflow that generates a response, stops if it’s done, or uses a tool and reschedules itself to pass the tool call result back to the AI. |
| Tool Call | A Thing we create when the AI wants to use a tool. Holds the tool name, arguments (the JSON the AI provided for this tool call), and id. | Lets you store what the AI requested so we can use it later and in future messages (each message in the history must also include any tool calls requested in that message). |
| AI Tool (Option Set) | List of tools (name + JSON function schema). You may add an attribute to filter which tools are accessible in which areas. | Edit once; Bubble and the prompt stay in sync. Makes it easy to modularly provide access to some/all tools. |
| Process AI Response | Backend workflow that: 1) calls the model with the list of Messages, 2) updates the latest Message, 3) runs the tool call and reschedules itself if a tool call was requested. | The engine that keeps the conversation going until the AI is satisfied. |
| Use Tool (Router) | Custom event that creates a placeholder Message and passes the tool call’s arguments to the correct per-tool event. | Decouples routing from business logic; adding a new tool is one option + one custom event. |
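To make the moving parts concrete, here is roughly what the repeat-until-done engine does, sketched in plain Python against an OpenAI-compatible chat completions shape. This is illustrative, not Bubble-specific: `call_model` and `run_tool` are placeholder hooks for the API call and your per-tool logic.

```python
import json

def agent_loop(call_model, tools, run_tool, messages, max_turns=10):
    """Repeat-until-done loop: call the model, run any requested tools,
    append the tool results, and call again until no tool is requested."""
    for _ in range(max_turns):
        reply = call_model(messages=messages, tools=tools)  # one assistant turn
        messages.append(reply)
        tool_calls = reply.get("tool_calls") or []
        if not tool_calls:
            return messages  # the AI is satisfied; stop looping
        for tc in tool_calls:
            # the model sends arguments as a JSON *string*, so parse it first
            args = json.loads(tc["function"]["arguments"])
            result = run_tool(tc["function"]["name"], args)
            messages.append({
                "role": "tool",
                "tool_call_id": tc["id"],  # must echo the id the model assigned
                "content": result,
            })
    return messages
```

In Bubble, "call again" becomes "reschedule the backend workflow with the updated message list", which is what the rest of this guide sets up.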

Data structure

Option Sets

AI Role

Stores the four AI message roles so the frontend can style messages and we can associate each Message with a particular Role.

| Display | id |
|---|---|
| System | system |
| Assistant | assistant |
| Tool Call | tool |
| User | user |

AI Tool

One option per tool. name is what the model refers to the tool by; function is the raw JSON schema you pass as tools in the API call to OpenRouter/OpenAI.

| Display | name | function |
|---|---|---|
| Query knowledgebase | query_knowledgebase | `{ "type": "function", "function": { … } }` |
| Get weather | get_weather | `{ "type": "function", "function": { … } }` |
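For reference, a filled-in function value follows the OpenAI function schema format. The get_weather fields below are illustrative; the description and parameters are up to you:

```json
{
  "type": "function",
  "function": {
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
      "type": "object",
      "properties": {
        "city": { "type": "string", "description": "City name" }
      },
      "required": ["city"]
    }
  }
}
```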

AI Model

Useful if you want pricing, context limits, or to switch providers with zero API-Connector edits.

| Display | id | $/M Input | $/M Output |
|---|---|---|---|
| Gemini 2.5 Flash | google/gemini-2.5-flash-preview | 0.15 | 0.60 |
| Claude 4 Sonnet | anthropic/claude-4-sonnet | 3.00 | 15.00 |

Data Types

Conversation

| Field | Type | Notes |
|---|---|---|
| title | text | - |
| Users | List of User | Access control |

Message

| Field | Type | Notes |
|---|---|---|
| Users | List of User | Access control |
| content | text | Empty when it’s a placeholder for a tool call |
| isGenerating | yes/no | Used to show loaders in the front-end etc. |
| error | yes/no | Set to yes to flag an error and show it in the front-end |
| AI Role | AI Role (option set) | system, assistant, user, or tool |
| Tool Call | Tool Call (optional) | Set when the AI requests to use a tool |

Tool Call

You could argue these fields should just live directly on the Message data type, which is fine, but that would make it harder to support multiple tool calls in one message in the future.

| Field | Type | Notes |
|---|---|---|
| id | text | The id the LLM assigned |
| AI Tool | AI Tool (option set) | Which function to run |
| arguments | text | Raw JSON string from the model |

Logic overview

  1. Process AI Response (backend workflow). Inputs: Conversation, List of Messages.
  • Call OpenRouter (or OpenAI) with the messages and the available tools.
  • You’ll need to :format as text your messages and make sure you handle edge cases appropriately.
  • If this Message’s Tool Call is not empty (meaning the AI requested a tool in that message), pass that tool call’s id/name/arguments into this part of the prompt (see the OpenAI and OpenRouter tool-calling docs).
  • Save the model’s reply into a new assistant message.
  • If the reply includes a tool call → trigger the Use Tool custom event; else stop.
  2. Use Tool (custom event)
  • Create a tool message (isGenerating = yes) so users see ā€œthinkingā€¦ā€ or whatever you want in the front-end.
  • Use ā€˜only when’ conditions so each per-tool custom event only runs for its tool name.
  3. Per-tool custom event — example: Query Knowledgebase
  • Call your own backend workflow with the arguments to convert the string into usable JSON.
  • Do whatever logic you want (e.g. get embeddings and query Pinecone).
  • Return content from the custom event so it can be accessed from the Use Tool custom event.
  4. Use Tool (continued)
  • Fill in the previously created message’s content with the data returned from the per-tool custom event, and set isGenerating = no.
  • Reschedule Process AI Response and pass the updated messages list (the AI needs to respond again now that it has the tool result).
  • The loop repeats until no tool is requested.
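Concretely, the message list you rebuild and send on each loop iteration needs to include the assistant’s tool call and a matching tool result, in the OpenAI chat completions format. The ids and content below are illustrative:

```json
[
  { "role": "user", "content": "What's the weather in Paris?" },
  {
    "role": "assistant",
    "content": null,
    "tool_calls": [
      {
        "id": "call_abc123",
        "type": "function",
        "function": { "name": "get_weather", "arguments": "{\"city\": \"Paris\"}" }
      }
    ]
  },
  { "role": "tool", "tool_call_id": "call_abc123", "content": "18Ā°C, clear skies" }
]
```

Note that tool_call_id on the tool message must match the id the model assigned, or the API will reject the request.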

Diagram of the full workflow (sorry it’s imperfectly rendered):

Hope this helps!


Can you give an example of what arguments you mean? I’m making a call using the responses endpoint and have provided it with a JSON Schema and set this to ā€œstrictā€. Then to create an object that is easier to manipulate in bubble I use a ā€œdetect dataā€ backend workflow. Do you do something different and would you see problems with how I’m skinning the cat? :sweat_smile:

We pass the OpenAI response’s tool call arguments (the JSON) to the custom event:

Have a public backend workflow called ā€˜Get JSON’. This takes a ā€˜text’ parameter and has a single action, Return data from API. It returns, as application/json, the text we provided.

If I provide {"query": "latest news"} to this API call, it will return it as useable JSON. This is necessary because OpenAI returns the JSON function call as a string, not as a JSON object that can be detected from the API connector.
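Outside Bubble, this ā€˜Get JSON’ step is nothing more than parsing a string. A minimal Python equivalent, purely for illustration:

```python
import json

def get_json(arguments_text):
    """Equivalent of the 'Get JSON' backend workflow: turn the tool call's
    raw arguments string into a structured object you can read fields from."""
    return json.loads(arguments_text)

# e.g. get_json('{"query": "latest news"}') gives a dict with a "query" key
```

The Bubble version just outsources this parse to a Return data from API action, since Bubble has no native JSON-parsing operator.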

Also, posting this simpler diagram which just shows the logical flow if you don’t want so much detail:


Cheers George, Looks like a simpler way without having to worry so much about detecting data and fussing with initialising. Will experiment and who knows, I may comment later. I’m very much trying to use my first tool so this post has been great. Thank you.


Thanks for sharing your thoughts. My main concern with using option sets is that they’re accessible on the client side. Anyone with basic programming skills can inspect the browser and access all your AI agent prompts and the JSON schema. Would it be better to use data types with restricted privacy rules instead?


Yes - this is a valid concern if you want the tools the AI uses to be a trade secret. Note that only the function schema is public (the JSON structure the AI responds with) - not the tool itself.

With that said, you don’t need the option set for AI Tools, it just helps with maintainability.

I can envision a ā€˜Get AI Tool’ backend custom event which takes a name as a parameter and returns a specific tool (or all tools) which have been hard coded into custom event return data parameters.

Equally, a data type approach could probably work even if it ā€˜feels’ a bit weird (you can’t reference a specific data type like you can with an option set so some expressions might be a bit clunky)

Point is, it’s a blueprint, so you don’t have to carbon copy and can adjust for your needs!


Am I doing something incorrectly if my response is just {}?