🤖 ᴺᴱᵂ ᴾᴸᵁᴳᴵᴺ Google AI Gemini (incl. Gemini Pro & Gemini Pro Vision & Function Calling) - ChatGPT-Like Streaming [Keeps your keys secure] [Works in all Countries]

Hey Bubblers!

Leverage the power of Google Gemini models, including Gemini Pro and Gemini Pro Vision, in your app with this plugin, which enables ChatGPT-like multi-turn conversations with streaming capabilities.

These AI engines, particularly the Pro version, are designed for seamless multi-threaded and multimodal interactions, effortlessly managing simultaneous conversations.
Built on advanced machine learning and vast datasets, they excel at interpreting the nuances of language and context with remarkable precision.
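For the curious, here is a minimal sketch of the kind of REST call a plugin like this likely wraps: Gemini's `streamGenerateContent` endpoint returns the reply as server-sent events, which is what makes ChatGPT-like token-by-token streaming possible. The endpoint and payload shape follow Google's public Generative Language API; nothing here is taken from the plugin's source, and the helper name is hypothetical.

```python
# Sketch (assumption: plugin wraps Google's public Generative Language API).
# streamGenerateContent with alt=sse streams the answer as server-sent events.

API_ROOT = "https://generativelanguage.googleapis.com/v1beta"

def build_stream_request(history, user_message, model="gemini-pro"):
    """Build the URL and JSON body for a streaming multi-turn request.

    `history` is a list of (role, text) tuples with roles "user"/"model";
    each call carries the whole conversation, since the API is stateless --
    that is how multi-turn chat works.
    """
    contents = [
        {"role": role, "parts": [{"text": text}]} for role, text in history
    ]
    contents.append({"role": "user", "parts": [{"text": user_message}]})
    url = f"{API_ROOT}/models/{model}:streamGenerateContent?alt=sse"
    return url, {"contents": contents}

url, body = build_stream_request(
    [("user", "Hi!"), ("model", "Hello, how can I help?")],
    "Describe what a context window is.",
)
```

An HTTP client would POST `body` to `url` (with an API key) and read the event stream chunk by chunk to render the answer as it is generated.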

With the introduction of the Gemini models, we’re stepping into a new era of AI-driven communication, blurring the lines between human and machine interactions.

You can test out our Google AI - Gemini Chat Streaming Plugin with the live demo.


Enjoy!
Made with :black_heart: by wise:able
Discover our other Artificial Intelligence-based Plugins.


Hello, I have been testing the plugin but I have had some problems with the response. For some reason I always get the same response no matter what image I send. My prompt is very basic: I only ask it to describe the image. I am attaching some images of the workflow and the responses. Thank you very much in advance.

@proteuscrypto can you please send me your pictures in their original format via DM?


@proteuscrypto apparently you are using another plugin, not mine, which works :wink:


Hey guys!

Informing you that function calling has been implemented :slight_smile: so you can trigger workflows in your app and feed the resulting data back to Google Gemini.
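To show what function calling looks like under the hood, here is a minimal sketch using the shape of Google's public REST API (`tools` with `function_declarations`). The function name and its parameters below are hypothetical examples standing in for one of your app's workflows; they are not part of the plugin.

```python
# Sketch of Gemini function calling (assumption: public REST API shape).
# The declared tool stands in for a workflow in your app.

def build_function_calling_body(user_message):
    """Declare a tool so the model can ask the app to run a workflow."""
    return {
        "contents": [{"role": "user", "parts": [{"text": user_message}]}],
        "tools": [{
            "function_declarations": [{
                "name": "lookup_order_status",  # hypothetical workflow
                "description": "Look up the status of an order by its id.",
                "parameters": {
                    "type": "object",
                    "properties": {"order_id": {"type": "string"}},
                    "required": ["order_id"],
                },
            }]
        }],
    }

def build_function_response_turn(previous_contents, name, result):
    """After the model replies with a functionCall part, the app runs the
    workflow and feeds the result back as a functionResponse part."""
    return previous_contents + [{
        "role": "function",
        "parts": [{"functionResponse": {"name": name, "response": result}}],
    }]
```

The round trip is: the model answers with a `functionCall` part naming the tool and its arguments, your app's workflow produces the data, and the next request appends a `functionResponse` turn so the model can compose its final answer from it.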


Hey guys!

Just to let you know that the plugin has been updated to support Gemini 1.5 Pro. Quoting Google:

An AI model’s “context window” is made up of tokens, which are the building blocks used for processing information. Tokens can be entire parts or subsections of words, images, videos, audio or code. The bigger a model’s context window, the more information it can take in and process in a given prompt — making its output more consistent, relevant and useful.

Through a series of machine learning innovations, we’ve increased 1.5 Pro’s context window capacity far beyond the original 32,000 tokens for Gemini 1.0. We can now run up to 1 million tokens in production.

This means 1.5 Pro can process vast amounts of information in one go — including 1 hour of video, 11 hours of audio, codebases with over 30,000 lines of code or over 700,000 words. In our research, we’ve also successfully tested up to 10 million tokens.