Up until a few days ago my ChatGPT API connector was working just fine. I could ask it to generate a bunch of ideas to solve a problem and it would. Ever since the OpenAI outage, the same API call generates random ideas with no relationship to the problem being solved. What happened?
According to the OpenAI forums, they just broke the GPT-3.5-turbo model. They suggested I try GPT-3.5-turbo-16k, but it still cannot solve the problems I give it. Has anyone else experienced this?
PS: the same prompt works fine through the Chat interface, and otherwise works … so it is generating content and passing it back. It's just that the response is now junk.
It's the first I've heard of it, but you might just be early to a problem. In the meantime I'm thinking of a quick fix for you, other than reviewing the API and making sure nothing has changed, which I'm sure you've already done.
If you have access to the previous responses that were correct (before the outage), maybe you could copy 2 or 3 of them and include them in your initial prompt to GPT-3.5, saying something along the lines of "Follow this format" or "Match your response to this standard". Just an idea, but it would show GPT every time what its responses should look like.
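To make that concrete, here's a minimal sketch of what I mean, assuming you're calling the Chat Completions API from Python. The idea is to replay your saved good answers as `user`/`assistant` message pairs before the real question, so the model sees the expected format every time. The example problems and responses below are made up; substitute your actual saved outputs.

```python
def build_messages(problem, good_examples):
    """Build a chat `messages` list seeded with known-good few-shot examples.

    good_examples: list of (problem, response) pairs saved from before the outage.
    """
    messages = [{
        "role": "system",
        "content": ("Generate ideas that directly address the user's problem. "
                    "Follow the format of the example responses."),
    }]
    # Replay each saved exchange so the model sees what a good answer looks like.
    for example_problem, example_response in good_examples:
        messages.append({"role": "user", "content": example_problem})
        messages.append({"role": "assistant", "content": example_response})
    # The real question goes last.
    messages.append({"role": "user", "content": problem})
    return messages

# Illustrative example data (not real saved responses):
examples = [(
    "How can we cut checkout abandonment?",
    "1. Guest checkout\n2. Saved payment details\n3. Progress indicator",
)]
msgs = build_messages("How can we reduce support ticket backlog?", examples)
```

You'd then pass `msgs` as the `messages` parameter of your chat completion call. Not a guaranteed fix, but few-shot examples like this usually pull the output back toward the format you want.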
I'll keep an ear out and see if there are any fixes to the problem you're having, if others are having it too.
This is embarrassing … I was passing the wrong parameters into the API call … doh!
On the plus side, I have learned a bit more about the available GPT models and temperature settings.
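For anyone who lands here with the same symptom: here's the kind of parameter set worth double-checking before blaming the model (values illustrative, not my actual call; sketched in Python). A temperature near 2 in particular will produce loosely related, "random"-looking ideas even when everything else is correct.

```python
# Request parameters for a chat completion call -- the fields that are easy
# to get wrong. These are illustrative values, not a recommended config.
params = {
    "model": "gpt-3.5-turbo",      # or "gpt-3.5-turbo-16k" for longer contexts
    "temperature": 0.7,            # 0 = near-deterministic; values near 2 = near-random
    "max_tokens": 512,             # cap on the length of the generated reply
    "messages": [
        {"role": "user", "content": "Generate ideas to solve this problem: ..."},
    ],
}

# Sanity checks before sending -- catches the kind of mistake I made:
assert 0 <= params["temperature"] <= 2, "temperature out of range"
assert params["messages"][-1]["role"] == "user", "last message should be the user's"
```

With the `openai` Python library you would pass these as keyword arguments to the chat completion call; the point is just to inspect what you're actually sending, since a single wrong parameter can make the responses look broken.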