Can anyone help me get my AssemblyAI/LeMUR calls to transcribe and summarize a video file without the Bubble workflow rushing ahead and executing my next command before the return data it needs is in the database?
I’m running an AssemblyAI transcription on one page of my app and using a webhook to catch the result and feed it into a LeMUR summary, which a workflow on the following page is supposed to use. That workflow then calls ChatGPT with the LeMUR summary returned from AssemblyAI to generate an output. But every time I try it with a file over 100MB (very small for a video file), the summary arrives too late: Bubble makes the ChatGPT call with a blank field, and ChatGPT returns an incorrect response.
I’m running the LeMUR call in the back end to try to keep things in order, and I’ve tried adding as much as 3 minutes of wait time before the ChatGPT call. But even then, the summary always comes back between 10 and 60 seconds after the ChatGPT workflow has already been initiated.
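For context, here’s roughly what the transcription step is doing once you strip away the Bubble API Connector wrapping (a simplified sketch; the key, file URL, and webhook URL are placeholders, not my real settings):

```python
import requests

ASSEMBLYAI_KEY = "..."  # placeholder API key

# Submit the video for transcription. AssemblyAI is asynchronous: it returns
# a transcript id right away and POSTs to webhook_url only when the transcript
# is actually finished, which for a large file can take several minutes.
resp = requests.post(
    "https://api.assemblyai.com/v2/transcript",
    headers={"authorization": ASSEMBLYAI_KEY},
    json={
        "audio_url": "https://example.com/my-video.mp4",  # placeholder file URL
        # placeholder for my Bubble backend endpoint that catches the result:
        "webhook_url": "https://myapp.bubbleapps.io/api/1.1/wf/inbound_lemur",
    },
)
transcript_id = resp.json()["id"]

# Nothing after this point can safely assume the transcript exists yet.
# The only reliable "it's done" signal is the incoming webhook, not a timer.
```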
Any thoughts on how I can get Bubble to delay the ChatGPT call until after the AssemblyAI LeMUR call has completed the summary and the data is in my Bubble database?
Here’s a screenshot of the log entry showing the ChatGPT call going out, and then…
Can’t you just make the next step start off the backend workflow that’s receiving this webhook? Then it literally only runs when the transcription is done?
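In plain code terms, the idea is roughly this (just a sketch with placeholder keys, a made-up endpoint name, and my reading of the LeMUR endpoint rather than your exact API Connector setup): the backend endpoint that catches the AssemblyAI webhook does the LeMUR call, saves the summary, and only then fires the ChatGPT step, so the order is enforced by the chain instead of a timer.

```python
from flask import Flask, request
import requests

app = Flask(__name__)
ASSEMBLYAI_KEY = "..."  # placeholder
OPENAI_KEY = "..."      # placeholder
DB = {}                 # stand-in for the Bubble database

@app.route("/inbound_lemur", methods=["POST"])
def inbound_lemur():
    # AssemblyAI only calls this once the transcript is finished.
    payload = request.get_json()
    if payload.get("status") != "completed":
        return "", 200
    transcript_id = payload["transcript_id"]

    # Step 1: LeMUR summary of the finished transcript
    # (endpoint and params as I understand the LeMUR docs).
    summary = requests.post(
        "https://api.assemblyai.com/lemur/v3/generate/summary",
        headers={"authorization": ASSEMBLYAI_KEY},
        json={"transcript_ids": [transcript_id]},
    ).json()["response"]
    DB[transcript_id] = {"summary": summary}  # stand-in for "Create/Modify a thing"

    # Step 2: only now call ChatGPT, so the summary is guaranteed to exist.
    answer = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {OPENAI_KEY}"},
        json={
            "model": "gpt-4o",
            "messages": [{"role": "user", "content": f"Use this summary: {summary}"}],
        },
    ).json()["choices"][0]["message"]["content"]
    DB[transcript_id]["answer"] = answer       # stand-in for another database write

    return "", 200
```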
Thanks, @Oliver-wholegraintech! Kicking it off in the very next step sounds like a great idea, and I have a backend workflow that’s designed to do just what you suggest. But how do I launch it from the next step in my workflow? My backend workflow is set to “detect data” for the webhook, and I don’t see how to do what you’re describing. Here’s what I’m doing now:
The initial workflow that triggers AssemblyAI’s transcription of the video is set off by the user clicking a button on one page, shown here:
My backend workflow, “inbound_lemur”, is set to detect the webhook signal, so, as I understand it, it never needs to be triggered manually since it’s in detect mode for that endpoint. Here’s what that backend workflow looks like:
You can see “inbound_lemur” is set to “detect data” on the webhook from the “AssemblyAI - lemur call” step. But Bubble still runs right past it: on the next page, when the user clicks “create”, the workflow goes on to the ChatGPT step and makes that call before the webhook response has come in.
When I try to manually activate the “inbound_lemur” workflow within the same workflow, as you suggest, there’s no way to set it to “detect data” in a frontend workflow, and there’s no option to detect “request data” when I insert the “AssemblyAI - lemur call” step on the front end. Do you know a way to make the LeMUR step happen on the same page as the webhook so it can get that data faster? Or am I misunderstanding your idea?
Yeah, so I would make the front-end actions wait until the webhook has run, and use a field on one of the objects you’re processing to define whether the front end can proceed or not. Sorry for the lack of detail. I’ll shoot you a DM, though, and we can screen-share and talk this one through if you’d like.
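In the meantime, here’s the gist of that gating pattern in rough code terms (a minimal sketch with made-up field and function names; in Bubble the waiting “loop” is really a “Do when condition is true” event watching that field):

```python
import time

# Stand-in for the Bubble record being processed; "summary_ready" is a made-up field.
video_job = {"summary": None, "summary_ready": False}

def on_summary_saved(summary_text):
    # Backend workflow (webhook side): save the LeMUR summary, then flip the gate field.
    video_job["summary"] = summary_text
    video_job["summary_ready"] = True

def call_chatgpt(summary):
    # Placeholder for the real ChatGPT call on the next page.
    print("Calling ChatGPT with:", summary)

def frontend_proceed_when_ready():
    # Front-end side: don't run the ChatGPT step until the gate field is set.
    while not video_job["summary_ready"]:
        time.sleep(1)  # stands in for Bubble re-evaluating the event condition
    call_chatgpt(video_job["summary"])
```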