I created API calls with the API Connector that send a message to OpenAI and then retrieve the run, so I can use the response from OpenAI's Assistants API. In the test version it works perfectly, but in the live version the workflow is interrupted, as if it were not receiving a response from OpenAI. Can anyone tell me why?
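For context, the calls I set up in the API Connector correspond roughly to these Assistants endpoints; the sketch below is just a Python illustration of the sequence, with a placeholder key and IDs:

```python
import requests

API_KEY = "sk-..."          # placeholder key
THREAD_ID = "thread_abc"    # created earlier in the workflow
ASSISTANT_ID = "asst_abc"   # placeholder assistant ID

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "OpenAI-Beta": "assistants=v2",   # or v1, depending on the setup
}

# 1. Add the user's message to the thread
requests.post(
    f"https://api.openai.com/v1/threads/{THREAD_ID}/messages",
    headers=headers,
    json={"role": "user", "content": "..."},
)

# 2. Start a run with the assistant
run = requests.post(
    f"https://api.openai.com/v1/threads/{THREAD_ID}/runs",
    headers=headers,
    json={"assistant_id": ASSISTANT_ID},
).json()

# 3. Retrieve the run to check whether the response is ready
status = requests.get(
    f"https://api.openai.com/v1/threads/{THREAD_ID}/runs/{run['id']}",
    headers=headers,
).json()["status"]
```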
Hmm, good question. Can you share a screenshot of the workflow, or a video? It's hard to know what's happening unless we can see it. Also, does it throw an error in the logs? Or can you run the step-by-step debugger and see if it throws any error?
Are you using the API connector? Do you have ‘Include errors in response and allow workflow actions to continue’ checked?
Two things:
(1) Ask the same question in both the test and the live version.
(2) Use the debugger and check the run status returned by the retrieve_a_run API call, to see whether it is still queued or already completed (see the sketch below).
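Outside Bubble, the retrieve-a-run step is just a GET on the run, and the interesting part is the status field on the run object. A minimal Python sketch, assuming the standard Assistants endpoint and placeholder IDs:

```python
import requests

headers = {
    "Authorization": "Bearer sk-...",   # same key you use in the API Connector
    "OpenAI-Beta": "assistants=v2",     # or v1, depending on your setup
}

run = requests.get(
    "https://api.openai.com/v1/threads/thread_abc/runs/run_abc",  # placeholder IDs
    headers=headers,
).json()

# "queued" / "in_progress"            -> not ready yet, check again later
# "completed"                         -> the assistant's reply is now on the thread
# "failed" / "expired" / "cancelled"  -> stop looping and surface the error
print(run["status"])
```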
Also, does it throw an error in the logs? No, the workflow is simply interrupted because it calls itself too many times, as if the OpenAI Assistants API weren't responding.
Or can you do the step-by-step debugger and see if it throws any error? Just did. In debug mode there's no problem: the thread, the message, and the run are all created, but there's no answer from OpenAI.
Here are some screenshots:
As you can see in the log, the thread is created and so is the run, but the workflow is terminated by infinite recursion protection.
I see in the logs that infinite recursion protection is on and that it is blocking the workflow. Basically, you have a loop that is running more iterations than you allow yourself, and I think that is why it keeps stopping.
So are you saying it normally gets a response before 10 loops? Maybe try raising the limit to see if it helps.
In test mode it receives a response long before reaching the loop limit. It is not possible to increase this limit.
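If the limit can't be raised, the other lever is to space the checks out so a slow run needs fewer of them before it completes. Outside Bubble that would look roughly like the sketch below (hypothetical helper name, placeholder IDs); in Bubble terms it amounts to scheduling the next check a few seconds in the future instead of immediately:

```python
import time
import requests

def wait_for_run(thread_id, run_id, api_key, max_checks=10, delay_seconds=3):
    """Check the run at most max_checks times, pausing between checks."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "OpenAI-Beta": "assistants=v2",
    }
    for _ in range(max_checks):
        run = requests.get(
            f"https://api.openai.com/v1/threads/{thread_id}/runs/{run_id}",
            headers=headers,
        ).json()
        if run["status"] not in ("queued", "in_progress"):
            return run               # completed, failed, expired, ...
        time.sleep(delay_seconds)    # give the run time to finish before re-checking
    return run                       # gave up while still queued/in_progress
```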
Hmm. Can you share the API Connector configuration so we can see if there is anything missing there? Hope we can figure it out.
OK. What I think might be happening is that you are calling a different assistant ID in live than in test. You have the assistant ID marked as private in some places but not all, so you might be changing it somewhere in the workflow.
If you don't need to create a new assistant anywhere, then try to keep the ID consistent. Maybe mark it as private everywhere and see if that helps.
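One quick way to sanity-check which assistant each environment is actually calling is to retrieve it by ID with the same API key. A minimal sketch (hypothetical helper name, placeholder values):

```python
import requests

def describe_assistant(assistant_id, api_key):
    """Fetch the assistant so you can confirm the ID used in live really exists."""
    resp = requests.get(
        f"https://api.openai.com/v1/assistants/{assistant_id}",
        headers={
            "Authorization": f"Bearer {api_key}",
            "OpenAI-Beta": "assistants=v2",
        },
    )
    resp.raise_for_status()   # a 404 here means this key doesn't know that assistant ID
    data = resp.json()
    return data["id"], data.get("name"), data.get("model")

# Compare what test and live are actually sending:
# print(describe_assistant("asst_from_test", "sk-..."))
# print(describe_assistant("asst_from_live", "sk-..."))
```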
That’s just my best guess. Let me know if that helps.
There are still some missing screenshots, and some important information is redacted, so it's hard to know for sure what you are actually doing.