I would like to generate long articles (1,500-2,500 words) with the ChatGPT API on Bubble. The problem is that when we send an instruction to GPT, it answers with 700-800 words maximum, so I generate my article in several parts.
Example :
Me : Generate the title, the summary and the intro for an article that is about …
GPT : …
Me : Now generate the first 2 parts of the article with at least 400 words each
GPT : …
Me : Now generate the last 2 parts of the article with at least 400 words each
GPT : …
etc …
The goal here is to store each of GPT's answers (in the database, I guess) and put them back together in order to display the full article.
I created an Article data type with associated fields like “intro”, “part 1”, etc.
Now I don't know how to build the workflow to do all of that, and especially how to store each of GPT's answers in the right place and put them all back together in one single place (in the database if possible, so I can create a storage page with a repeating group listing all the current user's articles).
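The "put them back together" part is just concatenating the stored fields in reading order. A minimal sketch in Python, using a plain dict to stand in for the Article data type (the field names here mirror the ones described above and are otherwise made up):

```python
def assemble_article(article: dict) -> str:
    """Concatenate the stored GPT answers in reading order, skipping empty fields."""
    order = ["title", "summary", "intro", "part 1", "part 2", "part 3", "part 4"]
    pieces = [article[key] for key in order if article.get(key)]
    return "\n\n".join(pieces)

# Example article with some sections already generated and saved
article = {
    "title": "My Title",
    "summary": "A short summary.",
    "intro": "The introduction...",
    "part 1": "First section...",
    "part 2": "Second section...",
}
full_text = assemble_article(article)
```

On Bubble the equivalent would be a text element (or a saved "full article" field) built from the Article's fields joined with line breaks; the point is only that the ordering lives in one place.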
I thought about making separate API calls, but if I do that there will be no chat memory, so the different steps won't make sense, and I will need too many tokens to pick up where I left off.
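To make the "no chat memory" point concrete: with the plain Chat Completions endpoint the server keeps no state, so every call must resend the entire conversation so far. A small illustrative sketch of how that history list grows (no actual API call is made here):

```python
# Each step must carry the full history -- this is why the token cost
# grows with every part of the article you generate this way.
history = [
    {"role": "system", "content": "You are a blog-writing assistant."},
    {"role": "user", "content": "Generate the title, summary and intro for an article about X."},
]

def next_step(history, assistant_reply, new_instruction):
    """Append the previous answer plus the new instruction before the next call."""
    history.append({"role": "assistant", "content": assistant_reply})
    history.append({"role": "user", "content": new_instruction})
    return history  # this whole list is what gets re-sent to the API

next_step(history, "Title: ... Summary: ... Intro: ...",
          "Now generate the first 2 parts with at least 400 words each")
```

By the last part of a 2,000-word article, that list contains the entire article so far, which is exactly the token problem described above.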
To take advantage of chat memory you have to use the Assistants API, which is a bit tricky to set up for this purpose.
I can show you my implementation if you really need the help, but in essence this is how I built it:
-First, create an assistant in OpenAI.
-In your database you will need a Blog Post data type and a Blog Section data type.
In your Blog Post data type you will host the list of headers, the keyword, the meta description and so on…
In the Blog Section data type you will host the H title in one field and the text in another, plus a Blog Post reference.
-To use the assistant you first have to create a thread; save this thread into the Blog Post as well.
-Add your first message to the thread (for example: generate a list of H1, H2 and H3 headings for a blog post about “Keyword”, comma separated).
-Then run the assistant.
-Then, after running it, retrieve the response with a GET messages call.
-Take the :Last Item and save it into the headings list field of your Blog Post data type.
-Take the headings list (say 20 headings) and run the thread again for each heading, one by one, asking the assistant on the same thread to write the blog section.
–Each time you add a message you have to run the assistant right after, then get the message right after that, and process the :Last Item of the response BEFORE adding the message for the next heading.
–This is important because if you add all the messages before running, it will only run for the last added message. You HAVE to RUN the assistant RIGHT AFTER each message you add so it answers one by one. Getting the message right after the response is also very important because it makes your life easier: done this way, each section will be in the :Last Item of the GET messages response, so you can easily take it and add it to your database.
It's easier said than done because you have to constantly poll for the run to finish before doing the GET messages, so the whole setup spans 8 or 9 backend workflows that run conditionally throughout the whole blog creation process…
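The add-message → run → poll → get-:Last-Item loop above can be sketched like this. This is a sketch of the sequencing only: the actual Assistants API calls are abstracted behind a `client` object (here a fake in-memory one), since in Bubble each of these steps is an API Connector call wired into backend workflows:

```python
import time

class FakeClient:
    """Stand-in for the real API calls (add message, run, poll status,
    list messages) so the ordering logic can be shown without network access."""
    def __init__(self):
        self.messages = {}
    def add_message(self, thread_id, text):
        self.messages.setdefault(thread_id, []).append(text)
    def run_assistant(self, thread_id):
        return "run_1"
    def run_status(self, thread_id, run_id):
        return "completed"  # a real run passes through "queued"/"in_progress" first
    def last_message(self, thread_id):
        # The :Last Item -- the assistant's answer to the most recent message
        return "Section text for: " + self.messages[thread_id][-1]

def write_sections(client, thread_id, headings, poll_interval=2.0):
    """For each heading: add ONE message, run the assistant, poll until the
    run finishes, then read the :Last Item BEFORE moving to the next heading."""
    sections = {}
    for heading in headings:
        client.add_message(thread_id, f"Write the blog section for: {heading}")
        run_id = client.run_assistant(thread_id)
        # Poll until the run completes -- on Bubble this is the part that
        # spans several conditionally-triggered backend workflows.
        while client.run_status(thread_id, run_id) != "completed":
            time.sleep(poll_interval)
        # Newest message = the section for THIS heading only, because the
        # assistant was run immediately after the message was added.
        sections[heading] = client.last_message(thread_id)
    return sections

sections = write_sections(FakeClient(), "thread_1", ["H2: Intro", "H2: Benefits"])
```

If you instead queued all 20 messages and ran once, only the last message would get an answer, which is exactly the pitfall called out above.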
Send me a DM so I can show you how I implemented the whole thing