OpenAI API call optimization

I have a simple UI on Bubble that uses the API Connector to call the OpenAI API. I'm using the gpt-3.5-turbo model to generate content of around 500 words, and the call takes around 30 seconds.
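For context, this is roughly what the body of my API Connector call to the chat completions endpoint (https://api.openai.com/v1/chat/completions) looks like. The actual prompt is simplified here, and the max_tokens value is just an approximation for ~500 words of output:

```json
{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "content": "<my content-generation prompt goes here>"
    }
  ],
  "max_tokens": 700
}
```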

I have seen a few Bubble apps using the OpenAI API that return a similar amount of output in around 10 seconds.

What can I do to optimize my call and bring the response time down to around 10 seconds?
Has anyone faced a similar issue and used any optimization techniques?

Any leads in this regard would help!