Why are OpenAI API calls 2x slower via Bubble?

I am building a basic app that accesses the OpenAI API. It is very slow, making the app unusable.

Every call I make via Bubble takes at least twice as long as the same call made from a command-line cURL request with the exact same parameters:

Example: a 1000-token completion from gpt-4 takes 38s directly, but 1:25 (85s) via Bubble.

I have tried different models and different token sizes. I have run the tests back-to-back and at different times of day, so load on OpenAI's side doesn't explain it.
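One way to narrow down where the time goes is to have cURL itself report a timing breakdown, which separates connection time and time-to-first-byte from total time. A sketch (the endpoint and body are the standard Chat Completions API; `OPENAI_API_KEY` is assumed to be set in your environment):

```shell
# Time a chat completion and break out where the time is spent.
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Write me 300 words"}],
    "max_tokens": 1000
  }' \
  -o response.json \
  -w "connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n"
```

If Bubble's total matches cURL's `time_total` once you account for its own processing, the time is being spent on OpenAI's side; if not, the extra latency is in how Bubble handles the request.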

Is OpenAI slowing requests from Bubble?

Is there anything that I can do on my side to speed things up?


There’s no reason OpenAI would slow requests from Bubble, and I’ve never found that to be the case. Bubble adds a bit of latency, but not 30 seconds’ worth. Maybe there’s something else in your config that we’re not seeing?

If you initialize the OpenAI call with a message like ‘Write me 300 words’, time it, and calculate the tokens generated per second in the completion, I doubt there’ll be much difference compared to the cURL script (allow about 3 seconds for Bubble’s behind-the-scenes initialisation before it shows you the response).
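As a back-of-the-envelope sketch of that comparison, using the figures from the question (38s direct, 85s via Bubble) and assuming both runs generated roughly the same ~1000 completion tokens — which is exactly what checking `usage.completion_tokens` in each response would confirm:

```python
def tokens_per_second(completion_tokens, elapsed_s, overhead_s=0.0):
    """Generation rate after subtracting a fixed platform overhead."""
    return completion_tokens / (elapsed_s - overhead_s)

# Figures from the question: ~1000 tokens, 38s via cURL, 85s via Bubble.
# The 3s overhead is the rough Bubble initialisation allowance suggested above.
direct = tokens_per_second(1000, 38)
via_bubble = tokens_per_second(1000, 85, overhead_s=3)
print(f"direct: {direct:.1f} tok/s, via Bubble: {via_bubble:.1f} tok/s")
```

If the per-token rates come out similar once overhead is subtracted, the difference is just fixed platform latency; if Bubble's rate is genuinely half, compare the actual `completion_tokens` in both responses before concluding anything, since the two calls may not be generating the same amount of text.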