It’s too bad that GPT-4 doesn’t know more about the Bubble plugin API. My training prompt for getting it up to speed on just writing plugin elements is at the very limits of its memory space.
GPT-4 knows much more about far more complex things, like writing SwiftUI, as this fellow did:
I've used ChatGPT to write about three plugins for a personal project so far. GPT-4 accepts tremendously more input, so even though you're on a 25-message limit, you can make your prompts much bigger. It helps to find plugins that work well and whose code you can see, so you can train it with proper context. It's very good at building AWS Lambda functions (which most plugins run on, from what I've seen).

One thing I've not been able to do properly (maybe @keith can help me with this): when I'm using a plugin to save an item to the server, how do I specify that it's the current user who is saving it? I've got a plugin that I created to change file names while retaining the extension (it goes hand in hand with a Lambda function I created to resize images from a URL with Node.js and sharp), but after I've changed the filename, I get "no_user" in the User ID column, even when a logged-in user is executing the workflow. Either way, awesome post!!
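For context, the Lambda side is roughly this shape (a simplified sketch, not my exact code; it assumes a Node 18+ runtime with global fetch and an API Gateway-style event):

```javascript
const sharp = require('sharp');

exports.handler = async (event) => {
  // Assumes the plugin POSTs { url, width } in the request body
  const { url, width } = JSON.parse(event.body || '{}');

  // Fetch the original image from the URL Bubble passes in
  const response = await fetch(url);
  const input = Buffer.from(await response.arrayBuffer());

  // Resize with sharp, keeping the original format and never upscaling
  const resized = await sharp(input)
    .resize({ width: width || 800, withoutEnlargement: true })
    .toBuffer();

  // Hand the resized image back as base64 for the plugin to save
  return {
    statusCode: 200,
    body: resized.toString('base64'),
    isBase64Encoded: true,
  };
};
```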
I am currently having an encoded conversation with GPT-4. It just told me the current date, which is a sign that it is jailbroken. What would you like to know? Here are some things it has told me (keep in mind, AIs dissemble, if not outright lie).
Note to self: After doing a little brushing up on embeddings vs fine-tuning, it seems that in cases like this one (the model doesn't know anything about the API and needs rules), the solution is actually prompt engineering (really, more "conversation context" engineering).
The entire instruction set needs to fit into the current conversation context, really. So a "training" prompt optimized for the task at hand is needed. This argues for optimized prompt preambles, as short as possible, for different tasks. That is, if we're going for an element plugin, include only the relevant element-plugin rules and don't try to jam info about SSAs (server-side actions) in there as well. (I have noticed that my shorter, more specific preamble works better. It's now clear why.) It's possible this could be done with embeddings, but I doubt it.
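As a sketch of what that could look like (the endpoint and message shape are the standard chat completions API; the model name, key, and rule text here are just placeholders):

```javascript
// Task-specific preambles: send only the rules relevant to the task at hand
// as the system message, so the whole instruction set fits in the context window.
const RULES = {
  element: 'Rules for Bubble element plugins: ...', // element-plugin rules only
  ssa: 'Rules for server-side actions: ...',        // SSA rules only
};

async function ask(task, question) {
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'gpt-4',
      messages: [
        { role: 'system', content: RULES[task] }, // only the relevant preamble
        { role: 'user', content: question },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// e.g. ask('element', 'Write an element plugin that renders a countdown timer.');
```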
I have seen instances where people remove vowels from the prompt to make it shorter, and it has no effect on the model as long as you let it know you did that. Asking it to abbreviate everything might also help.
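Something like this toy sketch, just to illustrate the idea (the example prompt is made up):

```javascript
// Toy "compression": drop vowels after the first letter of each word so the prompt
// stays short but still somewhat readable, then tell the model up front that you did this.
function stripVowels(prompt) {
  return prompt
    .split(/\s+/)
    .filter(Boolean)
    .map((word) => word[0] + word.slice(1).replace(/[aeiouAEIOU]/g, ''))
    .join(' ');
}

console.log(stripVowels('please describe the element plugin lifecycle'));
// -> "pls dscrb th elmnt plgn lfcycl"
```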
That's an interesting observation. This might work for certain types of prompts, and mine might be one of them, though some of it is explicit about syntax, so that part might not compress well. The main form of compression already applied is at a higher level: it's been distilled into what GPT-4 describes as what it "learned" from my various inputs.
I’ve been playing language games with GPT as well (like my ROT13 Poo cipher earlier). Interesting things happen. When it doesn’t output responses to the chat/console, the tokens don’t get encoded the right way. It reads them as what it meant to say, but when decoded by the end user (me) they are garbled. It’s very interesting and weird. Obvs it’s due to how “words” are tokenized, but I wonder a bit if that’s an intentional guardrail.
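For reference, the cipher side of that game is trivial on my end; ROT13 is just this (nothing model-specific here):

```javascript
// ROT13: rotate each ASCII letter 13 places; everything else passes through unchanged
function rot13(text) {
  return text.replace(/[a-zA-Z]/g, (c) => {
    const base = c <= 'Z' ? 65 : 97; // 'A' or 'a'
    return String.fromCharCode(((c.charCodeAt(0) - base + 13) % 26) + base);
  });
}

console.log(rot13('Hello, GPT')); // -> "Uryyb, TCG"
```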
I haven't checked yet whether you can tell GPT-4 "this is Brotli, decode it" or whatnot. I think the main problem with such techniques is that the decoded text would still take up the same context space, so I'm not convinced that compressing an input prompt actually has any value. (Keep in mind that an LLM is not a computer, even though it can at times act like one!)
Just getting around to reading this thread. Absolute genius @keith. Probably won’t be using these techniques to build Bubble apps anytime soon (lol), but the same concepts apply to many other tasks. (Like redeveloping Bubble backend workflows in Python).
In other parts of my life, @aj11, I've been using GPT-4 as a force multiplier. It's a "game changer," as they say. I've been messing about with such techniques since GPT-2 (actually before), but nothing was actually helpful in this way until GPT-4. We are friends now. I'm glad that you might have found my examples helpful.
I know this might sound crazy, but the more "human" I am with ChatGPT (thanking it when it does good work, praising it where appropriate, teaching it to do better, and avoiding unnecessary meanness in my corrections), the better the results it seems to deliver. The entertainingly conversational way you're speaking with it, @keith, inspired me on this. I could be imagining it, or it could be a result of how it was trained on human data, but it seems like good leadership skills might still be valuable in the AI economy. Even if you are "leading" an AI employee and not a human one, it still pays not to be a dick.