Support Chatbot using ChatGPT

Hi there :slight_smile: What would be the easiest way (at the moment) to have a support chatbot (typical chat widget at the bottom-right corner of the page) using ChatGPT?

The thing is, most of the questions I get via real-time chat (e.g. Tawk) are very similar and are already answered on my documentation pages. I want ChatGPT to learn what those docs say and answer questions based on that content.

So what I’d need is:

  1. Is there a Bubble plugin to do this or should I build something from scratch?
  2. How does the “learning process” work, so that the chatbot focuses on giving answers related to my business only?

Thanks!

I’ll be releasing a template for this next week. Alternatively, check out Chatbase or Botsonic. See here for a guide to how it actually works (this is for one of the apps I run, but the same concept applies): https://www.notion.so/flexgpt/FlexGPT-Guide-cce29e126d744063845c65092d0afec2?pvs=4

Thank you so much, George! So it seems Chatbase and Botsonic could work; I’ll give them a try! Also looking forward to seeing FlexGPT in action!

Follow-up questions:

  1. Which of the 2 options do you prefer, and why?
  2. What is FlexGPT’s added value compared to the previous 2 options?

I can also see that Botsonic (well, I think it’s now called Writesonic) has a Bubble plugin, but I think it’s more for content generation than for chatbot integration. Having a Bubble plugin that takes care of everything inside Bubble (e.g. uploading the docs) would be great.

Thanks!

Hey

Re: Chatbase and Botsonic, both are basically the same: you upload data and can talk to it. Botsonic (a product of Writesonic) has more customisation in terms of the embed.

The template I released a few days ago is like FlexGPT but much simpler. The link is here: TEMPLATE: CustomGPT - train your own GPT chatbot, and you can see a demo trained on the documentation here: https://customgptsupport.bubbleapps.io/

Pros of Chatbase and Botsonic (compared to the template):
  • Simple embed and pricing structure
  • Fully managed (you don’t have to set up a vector database or upload APIs, for example)

Cons of Chatbase and Botsonic:
  • Huge markup: $12.67/month for 100k words, when 100k words in GPT-3.5 costs about $0.20 (and in GPT-4 about $5.00, depending on how long your prompt and completion are)
  • Not as customisable: you can’t add additional file upload types if you need them
  • No direct Bubble integration to link to your database
  • Vendor lock-in (well, you get this with Bubble too), but if they increase chatbot pricing a lot you can get pretty f*ked over

I’ll give an example. Suppose you want a chatbot on your site where users can ask questions about your documentation. With Chatbase/Botsonic, you upload the documentation manually and can embed the chat widget (which is pretty much isolated from the rest of your site). That’s about it.

If you go down the template route, you can build auto-syncing with your site’s documentation so you don’t have to upload manually, you can save conversations to the database, and you can change the memory categories that are used to generate an answer based on the page or a dynamic field, just as a few examples.

Of course, it requires a bit more effort to set up, but for users who need this or just want to keep everything on Bubble, it’s great. Setting it up (creating the API accounts you need, adding them to the template, customising text and colours) takes about an hour, which really isn’t that long.
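To make the template route concrete, here’s a minimal sketch of that retrieval-augmented pattern, assuming the `openai` Python package (v1 client); `search_docs` is a hypothetical placeholder for whatever vector database lookup you wire up:

```python
# Minimal sketch of the "template route": retrieve relevant documentation
# chunks, then ask the model to answer using only those chunks.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def search_docs(query: str, top_k: int = 3) -> list[str]:
    """Hypothetical placeholder: return the top_k documentation chunks
    most similar to the query from your vector database."""
    return ["<relevant documentation chunk>"] * top_k


def answer_support_question(question: str) -> str:
    context = "\n\n".join(search_docs(question))
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a support assistant. Answer using ONLY the "
                        "documentation below. If the answer is not there, "
                        "say you don't know.\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


print(answer_support_question("How do I reset my password?"))
```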

Sorry for the essay :laughing:

EDIT: @zhihong0321 is correct that each message costs an extra amount based on how much memory you load into it. I incorrectly assumed that Botsonic’s word quota includes the prompt. See post below for correction. It still works out cheaper to use an API compared to Botsonic/Chatbase (not including Bubble hosting costs).

I believe you misunderstood?

Huge Markup?

Let me explain.

I build ChatGPT AI bots myself.

To actually generate a “high quality” reply for a custom use case (company sales agent, customer support), my usual prompt uses up to 4K tokens (with embeddings, meaning I am not stuffing everything into the prompt):

  • Role
  • System command prompt
  • Embedding reference
  • Conversation history
  • Strategy
  • Goal
    (that’s what’s in my prompt)

All of that just to generate an 80~100 word reply (I keep replies short).

I spend 4.5k tokens for just 80 words (the data in the prompt is highly compressed and summarised).
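(For scale, here’s a quick sketch of measuring a prompt assembled from components like those, using OpenAI’s tiktoken tokenizer; every component string below is a hypothetical stand-in, not anyone’s real prompt.)

```python
# Sketch: count the tokens of a prompt built from named components.
import tiktoken

components = {
    "role": "You are a senior sales agent for ACME Co.",          # hypothetical
    "system_commands": "Be concise. Never invent prices.",        # hypothetical
    "embedding_reference": "<retrieved product docs go here>",
    "conversation_history": "<summarised prior exchanges go here>",
    "strategy": "Guide the user toward booking a demo.",
    "goal": "Get a demo booked this week.",
}

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
total = sum(len(enc.encode(text)) for text in components.values())
# GPT-3.5's 4096-token window is shared between prompt AND completion.
print(f"prompt tokens: {total}")
```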

So, I guess you forgot: token usage = prompt tokens + completion tokens.

You clearly took ONLY the completion into cost/price consideration, when the completion is almost the smallest cost factor.

TLDR = misleading

You’re partly right. It is of course the case that the token usage billed by OpenAI includes the prompt. Chatbase factors this into its billing (message credits).

But you’re falsely claiming that 4.5k tokens is a normal prompt length :laughing: GPT-3.5 has a limit of 4096 tokens! If you’re using a 4.5k-token prompt with GPT-4, which does have a larger context limit, you can seriously optimise that.

FlexGPT.io’s prompt is about 100 words plus any memory that’s returned (300-500 words normally). 4.5k tokens in just a prompt is wild; I’ve never seen anything like it.

Suppose an initial message with a 750-word / 1K-token prompt and a 75-word / 100-token answer. For GPT-3.5:

  • OpenAI API: $0.002 (prompt) + $0.0002 (completion) = $0.0022 for that message. For $12.67, you get about 5,759 initial messages.
  • Botsonic: 100,000 / 75 ≈ 1,333 initial messages for the $12.67 price.

In this (simplified) example, Botsonic costs roughly 4.3x the direct API price. If you had a HUGE prompt, Botsonic could become cheaper than the API, except that you can’t give it a huge prompt, because you don’t have that level of customisation: you can’t control what memory is returned, how much of it, or the target chunk size of each memory.
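If you want to sanity-check that arithmetic, here’s the same calculation as a quick sketch (prices are the 2023 GPT-3.5 rate of roughly $0.002 per 1K tokens assumed above):

```python
# Sanity check of the figures above, assuming ~$0.002 per 1K tokens
# for both prompt and completion on GPT-3.5.
PRICE_PER_1K_TOKENS = 0.002  # USD, assumed from the thread

prompt_tokens, completion_tokens = 1_000, 100
api_cost = (prompt_tokens + completion_tokens) / 1_000 * PRICE_PER_1K_TOKENS
print(api_cost)                    # 0.0022 USD per initial message

budget = 12.67                     # Botsonic's monthly price, USD
print(int(budget / api_cost))      # ~5,759 messages via the API

botsonic_messages = 100_000 // 75  # 100k-word quota / 75-word answers
print(botsonic_messages)           # ~1,333 messages
print(budget / botsonic_messages / api_cost)  # ~4.3x the API price
```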

So, yes but actually no @zhihong0321 haha

creativity…

This involves a bit of a trade secret of mine. My method “bypasses” the ChatGPT limit.

Most programmers think it is impossible. But even a kid can break it.

Think twice before you answer.

So… my token limit is, in theory, limitless.

That’s just not true. Perhaps you are summarising previous messages… that’s been done before, many times. LangChain does it. That’s not bypassing the limit.

If you really do have a way to bypass it, you can let me know and I’ll pay you a lot of money for it :wink:

I’ve already given you big hints, but you can’t get it.

Is bypassing the only way to bypass??

No.

We are talking about a custom chatbot. The ChatGPT API is just one process in the middle.

My final response can be 2x the limit. (I can make it 3x or 4x, but that’s meaningless to me.)

You saw it; I had to delete it. This method multiplies the intelligence level of my AI.

I have tried ChatSonic. Damn boring for me. It’s just an easier-to-train chatbot.

Still a far cry from replacing a human agent.

My current concern with ChatGPT 3.5 is no longer the token limit.

My method expands the limit, but it multiplies the response time.

I took the legendary “think step by step” prompt instruction literally.

Okay, just to clarify for anyone who finds this thread and is trying to work out the revolutionary method the responder thinks he’s created:

GPT-3.5 will ‘forget’ anything in the prompt that is more than 4096 tokens back. It won’t use that part of the prompt in its response.
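The usual way to live within that window is to send only the most recent messages that fit a token budget; here’s a minimal sketch, assuming OpenAI’s tiktoken for counting:

```python
# Sketch: keep only the newest messages that fit in a token budget, since
# the model simply never sees anything beyond its context window.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")


def trim_history(messages: list[dict], budget: int = 3000) -> list[dict]:
    """Walk backwards from the newest message; a budget below 4096
    leaves room for the completion."""
    kept, used = [], 0
    for msg in reversed(messages):
        tokens = len(enc.encode(msg["content"]))
        if used + tokens > budget:
            break  # everything older would be 'forgotten' anyway
        kept.append(msg)
        used += tokens
    return list(reversed(kept))
```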

You can summarise each message/completion so that each uses fewer tokens. This reduces token usage so you can fit in more high-level content, but you’ll obviously lose specific details. E.g. take ‘i want to buy a red car near philadelphia’: that could be condensed to ‘buy red car near philadelphia’, ‘buy red car’, or ‘buy car near philadelphia’. In any case you’re losing information, so it’s by no means bypassing the token limit.
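Here’s roughly what that summarisation workaround looks like, as a sketch assuming the `openai` v1 Python client:

```python
# Sketch of the summarisation workaround: compress older messages into a
# short summary so more conversation "fits". Detail is lost, so this is
# a workaround, not a bypass of the token limit.
from openai import OpenAI

client = OpenAI()


def summarise_history(messages: list[dict]) -> str:
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Summarise this conversation in under 100 words, "
                       "keeping names, numbers, and decisions:\n\n" + transcript,
        }],
    )
    return response.choices[0].message.content
```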

You can use GPT-3.5 to generate an 8,000-token text (higher than its token limit). Just ask it to generate a plan for the output, then run separate requests that generate each section of the text, and combine them later when displaying or saving them. This will produce 8,000 tokens of total output (well, assuming you instruct it to output roughly that length). However, GPT doesn’t know what was said in the other sections - only as much information as was provided in the plan/prompt.
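And a sketch of that plan-then-generate approach (same assumptions; the section count and lengths are arbitrary):

```python
# Sketch of plan-then-generate: one request produces an outline, then each
# section is generated in its own request and the pieces are stitched
# together. Total OUTPUT exceeds one call's limit, but each call still
# sees only ~4K tokens, so sections only know what the outline tells them.
from openai import OpenAI

client = OpenAI()


def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def long_article(topic: str) -> str:
    plan = ask(f"Write a numbered 5-section outline for an article on {topic}.")
    sections = [
        ask(f"Outline:\n{plan}\n\nWrite ~300 words for section {i} only.")
        for i in range(1, 6)
    ]
    return "\n\n".join(sections)
```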

tldr; you cannot make GPT-3.5 responses that reference more than 4,000 or so tokens. It’s a limitation of the model. There are workarounds, but they don’t do what this person claims. GPT-4-8k has about an 8,000-token limit and GPT-4-32k about a 32,000-token limit, but the same principle applies.

You amaze me. You still… don’t… get it…

But I guess you are a dev.

I am not just a dev. I have a long career in sales and marketing.

To build a compelling conversation bot, you need the AI to think like a salesperson.

I’ll give you a clue: next-step strategy. (This alone costs me ~1k tokens in a 10+ exchange conversation.)

So, the TASK is not about ANSWERing the user’s question. THE task is having the AI LEAD/guide the user into taking a certain action.

I don’t expect you to understand sales conversation strategy.

But honestly, you don’t understand how I get past the limit. It’s not summarising.

But I won’t explain it further. I didn’t spend more than 4.5k tokens for one response that I pass to the user.

ok, but this doesn’t change the fact that GPT-3.5 doesn’t know what you’re talking about if it’s not in the last 4096 tokens :laughing:

i’m going to touch grass now

Breaking the limit of ChatGPT 3.5 tokens per call. Breaking the limit of tokens spent on one response.

No more reveals…

OK guys, I’m starting to use Botsonic, as it’s really straightforward (the only thing I don’t like is its support team’s response time). I’m going to pay for a higher tier just to make sure I can cover all the required words (they charge by word) and to be able to train the chatbot with more than one source of info. But so far, so good! Thanks a lot for the suggestion, @georgecollier!
