Hi! I am building an AI evaluation tool in Bubble, and I am stuck on how to display the API response fields in my UI. I am new to Bubble so I may be misunderstanding how parent groups, workflow outputs, and dynamic data work.
What I have so far
Users enter text, click a Validate button, and the app sends the text to an OpenAI API call that returns JSON with the following fields:
- authenticity_score
- evaluation_text
- hallucination_flag
- agreement_score
I am using the Bubble API Connector. The call initializes correctly, and Bubble detects all fields.
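For reference, here is a rough sketch of what I believe the response body looks like and how the fields split apart once the JSON is parsed (the values below are made up; the field names are the ones Bubble detected):

```python
import json

# Hypothetical example of the JSON body the OpenAI call returns.
# Field names match what the API Connector detected; values are invented.
raw = """{
    "authenticity_score": 0.87,
    "evaluation_text": "The answer is well supported by the source.",
    "hallucination_flag": false,
    "agreement_score": 0.91
}"""

response = json.loads(raw)

# Each value is a plain top-level key, so "separating" the response
# just means reading one field at a time.
print(response["authenticity_score"])
print(response["evaluation_text"])
```

In Bubble the API Connector does this parsing for you, which is why each field shows up individually when the call is initialized.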
Data structure
I have two main data types:
- Submission
- ModelEvaluation
UI setup
On my page I have a fully designed layout. There are three fixed panels, one for each model. Inside each panel I have:
- A placeholder “–” where the authenticity score should appear
- A small label for the evaluation
- A larger text box where the full evaluation_text should appear
These elements already exist in the design. I only need to connect them to the API response.
Where I am stuck
I am unsure about the following:
- How to structure the workflow for the Validate button. Should I create the Submission first, then run the API call, then create a ModelEvaluation, or is there a simpler flow?
- How to display the JSON results in each text field. I do not understand how to use “Result of step X” to pull specific values into the text elements.
- How to link fields to my existing layout without turning everything into a repeating group. I want to keep the three fixed panels as they are.
- How to separate the API response into the individual fields. Since the API returns one JSON object, I am not sure how to place each value into its own text element.
What I am looking for
I need a straightforward explanation of how to:
- Set up the workflow steps
- Decide whether to store the results or display them directly
- Bind authenticity_score and evaluation_text to the placeholders
- Keep the “–” placeholder until the API response is back
- Handle this without restructuring the entire page
Thanks in advance for any guidance. I can also provide screenshots if that helps.
Hi @ivuturan, welcome to the community!
Totally get where you’re stuck; Bubble’s dynamic data and workflows can be tricky at first. Here’s a simple way to handle this without restructuring your layout:
- Workflow order:
- When the user clicks Validate, you can either:
- Option A (simpler for MVP): Run the API call first, then create the Submission and ModelEvaluation using the API results.
- Option B: Create the Submission first, then run the API call, and create ModelEvaluation afterward. Either works, but I usually go with Option A if you just need the API data for display.
- Displaying JSON fields:
- In your text elements, use “Insert dynamic data → Result of step X (API call) → [field name]”.
- Example: authenticity_score → Result of step 1 (API call) → authenticity_score.
- Do the same for evaluation_text, hallucination_flag, etc.
- Keeping the fixed panels:
- No need for repeating groups. Just bind each text element to the API result as above.
- For the “–” placeholder, add a conditional on the text element: when the API result is empty, show “–”. Once the response comes back, the dynamic value automatically replaces it.
- Storing vs displaying:
- If you just want users to see results temporarily, you can display directly from the API call.
- If you want to keep a record, create a Submission and ModelEvaluation entry in your database after the API call. Then bind the text elements to the database fields instead of the API directly.
Basically: API call → grab individual fields → display in each text element → optionally save to DB. You don’t need to touch the page layout or repeaters at all. Hope this helps.
Hey @ivuturan ,
Here’s the approach I would take; let me know if this helps:
When Button “Validate” is clicked:
Step 1: Create a new Submission
- input_text = Input’s value
- status = “processing”
Step 2: Call OpenAI API (API Connector)
Step 3: Create a new ModelEvaluation
- authenticity_score = Result of Step 2’s authenticity_score
- evaluation_text = Result of Step 2’s evaluation_text
- model_name = “GPT-4” (or whatever)
- submission = Result of Step 1
Step 4: Make changes to Result of Step 1 (Submission)
- hallucination_flag = Result of Step 2’s hallucination_flag
- agreement_score = Result of Step 2’s agreement_score
- validated_answer = Result of Step 2’s evaluation_text
- status = “complete”
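Outside Bubble, the same four steps would read roughly like the Python sketch below. Everything here is a stand-in: Bubble runs these as visual workflow actions, not code, and the canned API result replaces the real OpenAI call.

```python
# Hypothetical sketch of the four workflow steps above in plain Python.
# Bubble's "Create a new thing" / "Make changes to a thing" actions are
# modeled as plain dicts; the API Connector call is faked.

def call_openai(text):
    # Stand-in for Step 2 (the API Connector call); returns a canned result.
    return {
        "authenticity_score": 0.87,
        "evaluation_text": "Looks consistent with the source.",
        "hallucination_flag": False,
        "agreement_score": 0.91,
    }

def validate(input_text):
    # Step 1: create the Submission with a "processing" status.
    submission = {"input_text": input_text, "status": "processing"}

    # Step 2: run the API call.
    result = call_openai(input_text)

    # Step 3: create the ModelEvaluation from the API result,
    # linked back to the Submission from Step 1.
    evaluation = {
        "authenticity_score": result["authenticity_score"],
        "evaluation_text": result["evaluation_text"],
        "model_name": "GPT-4",
        "submission": submission,
    }

    # Step 4: copy the remaining fields onto the Submission and mark it done.
    submission.update(
        hallucination_flag=result["hallucination_flag"],
        agreement_score=result["agreement_score"],
        validated_answer=result["evaluation_text"],
        status="complete",
    )
    return submission, evaluation
```

The ordering matters for the same reason it does in Bubble: Step 3 needs the Step 2 result, and Step 4 needs both Step 1’s Submission and Step 2’s result.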
Display Results Without Repeating Groups
Use a Parent Group with either Custom States or the Group’s Data Source. The Data Source route is simpler:
Group “Model 1 Panel” (Type: ModelEvaluation)
Text: Parent group’s ModelEvaluation’s authenticity_score
Text: Parent group’s ModelEvaluation’s evaluation_text
Text: (label)
In your workflow, add a final step:
Step 5: Display data in group
- Element: Group “Model 1 Panel”
- Data to display: Result of Step 3 (the ModelEvaluation you created)
Now all text elements inside that group can reference:
- Parent group’s ModelEvaluation’s authenticity_score
- Parent group’s ModelEvaluation’s evaluation_text