[FREE PLUGIN] AI Proxy - ChatGPT, OpenAI, Claude, LLaMA, PaLM Real-Time Streaming

Thanks for trying out AI Proxy by CoAlias. I have a couple of favours to ask everyone here.

I am currently busy developing new features for AI Proxy and would like to know a little more about you. Could you answer the following questions? I am awarding a $100 Amazon gift card to one of the respondents!

1. What are you currently working on (and using AI for)?

2. What AI solutions are you currently using for your Bubble app?

3. What features could we add to the AI Proxy plugin that you would value?
For example: adding documents/websites via an API, chatting with them via an API, or fine-tuning models.

4. Could you review the AI Proxy plugin?

Could you email your answers to ab@coalias.com?


I definitely need this token count added back—specifically for OpenRouter. If we’re going to manage the context length for end users, this is critical.

Unless you wanted to somehow handle context length for us… in which case, even better! :joy:
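Until the plugin exposes a token count again, a rough client-side estimate can tide you over for context-length management. This is a sketch under the common ~4-characters-per-token heuristic for English text, not the plugin's own counter; for exact counts you would need a real tokenizer such as OpenAI's tiktoken.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: roughly 4 characters per token for English text.
    # For exact counts, use a real tokenizer (e.g. OpenAI's tiktoken).
    return max(1, len(text) // 4)

def messages_tokens(messages: list[dict]) -> int:
    # Estimate the context size of a whole conversation, adding a small
    # per-message overhead for role/formatting tokens.
    return sum(estimate_tokens(m["content"]) + 4 for m in messages)
```

A guard like "if `messages_tokens(history)` exceeds the model's limit, trim the oldest messages" is usually enough to keep end users inside the context window.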

Hi @gaimed, is any user data submitted to this plugin accessed or stored on any servers?


@Stackapp No. But if you like, you can host your own server using Cloudflare: https://github.com/CoAlias/ai-proxy-custom
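Assuming the repo follows Cloudflare's standard Wrangler workflow (the exact steps live in the repo's README, so treat these commands as a sketch rather than the authoritative instructions), self-hosting looks roughly like:

```shell
# Hypothetical deploy steps -- check the ai-proxy-custom README for the
# authoritative instructions.
git clone https://github.com/CoAlias/ai-proxy-custom
cd ai-proxy-custom
npm install
npx wrangler deploy   # publishes the Worker to your own Cloudflare account
```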

I think I found a bug, and I would rather disclose it privately. What’s the best way to get a hold of you @gaimed?

Just DM it. I'll look into it :slight_smile:

100% uptime in the last 40 days!!


Since yesterday the responses have been extremely slow in my application. Anyone going through the same?

Not on my side, and nothing in the logs either.

Hi @gaimed! Thanks for this plugin. I like its simplicity and how we can use custom JSON and endpoints. Very flexible.

I have a few questions:

  1. I have already found a way to do this, but I would like your advice on best practice: I would like to send my past messages back to the API as context. How do we do that with “Add a message”? (Each of my messages is created as an item in a table, referencing a conversation.)

  2. :raising_hand_man: When will the token count be put back, sir? It's very important, as we are tracking usage.
    (The token count can also be computed after streaming is completed; it does not need to be real-time :slight_smile:)

  3. If we do not want the token to expire, can we leave the expiration time blank? (For example, if we are storing the API key in the database for each user, then unlike the demo page they do not need to enter it every time.)

  4. Will setting it up on my own Cloudflare Worker provide 100% privacy?
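On question 1, one way to picture the custom-JSON approach is to assemble the stored conversation items into a chat-completions-style `messages` array before handing it to Generate Token. This is only a sketch; the field names `role` and `text` below are hypothetical stand-ins for whatever your Bubble table actually uses:

```python
import json

def build_request_body(rows, new_message, model="gpt-3.5-turbo"):
    # `rows` stands in for the conversation items loaded from your table,
    # each assumed to carry `role` and `text` fields (hypothetical names).
    messages = [{"role": r["role"], "content": r["text"]} for r in rows]
    # Append the user's newest message at the end of the history.
    messages.append({"role": "user", "content": new_message})
    return json.dumps({"model": model, "messages": messages})
```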

Seconded on the token count.

Absent eyeballs on the tokens, can anyone say whether using an agent is more cost-effective in tokens than posting snowballing JSON? (That is, each reply in a thread re-sends all the old characters plus the new post, and keeps growing with every reply to maintain context: the first post might be 280 tokens, the second 500, the third 1,180, and by the fourth you could send just the word “turtle” at a cost of 1,181 tokens.)
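The snowballing cost described above is easy to quantify: if every request re-sends the full history, the per-request cost is the running total of everything sent so far. A minimal sketch, using the post's own numbers:

```python
def cumulative_context_tokens(turn_sizes):
    # Per-request token cost when each request re-sends the whole history
    # plus the new message (the "snowballing JSON" pattern).
    totals, context = [], 0
    for added in turn_sizes:
        context += added
        totals.append(context)  # every request pays for the full history
    return totals

# New content of 280, 220, 680 and 1 ("turtle") tokens per turn:
print(cumulative_context_tokens([280, 220, 680, 1]))  # [280, 500, 1180, 1181]
```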

  1. Hi Web4, you can pass an entire JSON body to Generate Token. I don't have an action to add multiple messages yet; I'll create that in the new version. In the meantime, you can build your own JSON and pass it to Generate Token.

  2. I think in a couple of weeks.

  3. I will also add that to the new version. Currently, a token can only be used once.

  4. Yes.

About the bug: that's a Bubble bug, not an AI Proxy bug.


Working on that!

Hi @gaimed ,
Thank you so much for your response !

  1. Yes, currently I am passing the array of context messages through custom JSON in Generate Token, and it works great. It's simple to use. No hurry on multi-messages.

  2. :red_circle: Token count for OpenAI is crucial. Please do expedite it, sir. :sob:

Thanks for the great work! Already using it in production.


Hey there

Thank you for this plugin! I recently implemented it on an app and it works perfectly. Now I want to use it for one of my apps that is a bit larger: I get on average 200 new users every day, and for each of them I want to generate an initial message. I was wondering if this plugin can handle that?



Hi there, we are handling 1,000,000 AI requests a month on average :wink: Could you leave a review?


The link to the plugin page doesn’t seem to work

Which link?

Is AI proxy’s server down? @gaimed

Getting error responses nonstop.

Nope. Fully online: AI Proxy