I’m having an issue integrating the OpenAI Assistant into a Backend Workflow. I trigger the workflow using “Schedule API Workflow” from another action, and at the end of this workflow I wait 2 minutes before checking whether the response is ready.
The issue is that I don’t get the response live, as soon as it’s ready. A bigger problem occurs when I try to run it for more than two users simultaneously: it only processes the first request and stops there.
I’m attaching a video showing how it’s currently set up. If anyone has an idea or a solution, I’d love to hear it.
It’s hard to tell from the screenshots since we can’t see all of the data sources inside. But I think there is an issue where you search for your Transcription’s last item. Is there a way to send the transcription along instead of searching for it? Usually a backend workflow can accept a value like the transcription itself, so you know exactly which one you are working with. That is the first thing I noticed.
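To illustrate why “last item” is fragile under concurrency: if two users trigger the workflow at nearly the same time, both backend runs can pick up the same (most recent) record. Passing the record’s unique id as a workflow parameter avoids that. Here is a minimal sketch in Python of the idea, with hypothetical field names (Bubble itself is no-code, so this is just the shape of the payload you would send):

```python
def build_workflow_payload(transcription_id: str, user_id: str) -> dict:
    """Build the body for a 'Schedule API Workflow'-style call.

    Passing the transcription's unique id means the backend workflow
    operates on exactly this record, instead of doing a
    'Search for Transcriptions: last item', which can grab another
    user's record when two requests arrive at the same time.
    (Field names here are hypothetical.)
    """
    return {
        "transcription_id": transcription_id,  # the record to process
        "user_id": user_id,                    # who triggered the run
    }

# Each concurrent user sends their own id, so requests never collide:
payload_a = build_workflow_payload("tr_123", "user_a")
payload_b = build_workflow_payload("tr_456", "user_b")
```

The design point is simply that the identity of the record travels with the request, rather than being re-derived inside the workflow.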
There might be a few ways to get the information back live as soon as it is ready. You could use streaming; I think there are some plugins available for that now. Or check the run’s status in a loop until it is ‘completed’. https://platform.openai.com/docs/assistants/deep-dive
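The polling approach can be sketched as a small loop that re-checks the run status at a short interval instead of waiting a fixed 2 minutes. The helper below is generic (it takes any status-fetching callable); the commented usage shows how it might wrap the OpenAI Assistants run-retrieval call, but treat those SDK names as assumptions to verify against the docs:

```python
import time

def poll_until_complete(fetch_status, interval_s=2.0, timeout_s=120.0):
    """Call fetch_status() repeatedly until it returns a terminal status.

    fetch_status: zero-argument callable returning a status string,
    e.g. a wrapper around the Assistants 'retrieve run' call.
    Returns the final status, or raises TimeoutError.
    """
    terminal = {"completed", "failed", "cancelled", "expired"}
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in terminal:
            return status
        time.sleep(interval_s)  # back off briefly between checks
    raise TimeoutError("run did not finish within timeout")

# With the official openai Python SDK this wrapper might look like
# (assumed names -- check the Assistants docs):
#   client = OpenAI()
#   fetch = lambda: client.beta.threads.runs.retrieve(
#       thread_id=thread_id, run_id=run_id).status
#   poll_until_complete(fetch)
```

Compared with a fixed 2-minute wait, this returns as soon as the run finishes and surfaces failed or expired runs instead of silently reading a missing response.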