Hey everyone,
I’ve been working on a tool to simplify and secure the way we interact with various Large Language Model (LLM) APIs, and I wanted to share the idea with the Bubble community. My team initially built this as an internal solution, but after some feedback, I’m considering turning it into a public service. Before I do that, I’d love to validate whether this could be helpful to others in the community.
Here are some of the challenges I’ve faced when working with LLM APIs in Bubble:
- Handling nested JSON responses: LLMs often return the JSON you actually want as a string nested inside the provider's own JSON response, which makes it difficult to access directly in Bubble without plugins like readJson, and those can be quite cumbersome (see the snippet after this list).
- Exposing sensitive data: When making LLM API calls from the client side, prompts, model information, and other sensitive details are exposed. This is less than ideal for those of us who want to keep certain information private without relying on backend workflows.
- Prompt updates require redeployment: If we need to update a prompt, the entire application has to be redeployed. This is especially frustrating when the change is urgent and we aren't ready to deploy the rest of our work.
- No prompt version control: It's difficult to track and manage different versions of prompts within Bubble. There's no built-in version control, which becomes a hassle when updates are frequent or handled by different team members.
- Non-technical users struggle with API management: Prompt adjustments often need to be made by non-technical team members, and raw API calls and configuration are daunting for them.
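To make that first point concrete, here's roughly what comes back when you ask a model for JSON through a typical chat completions endpoint (field names follow OpenAI's response format; other providers look slightly different). The JSON you care about is a string inside the provider's envelope, so it needs a second parse before its fields are usable:

```typescript
// Typical shape of a chat completions response when the model is asked for JSON
// (field names follow OpenAI's format; other providers differ slightly).
const apiResponse = {
  choices: [
    {
      message: {
        role: "assistant",
        // The data you actually want is a JSON *string* inside the provider's
        // JSON envelope, so Bubble can't map its fields directly.
        content: '{"sentiment": "positive", "score": 0.92}',
      },
    },
  ],
};

// A second parse is needed before the inner fields become usable.
const inner = JSON.parse(apiResponse.choices[0].message.content);
console.log(inner.sentiment); // "positive"
```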
What We Built:
To address these pain points, we created a platform that acts as a secure middleware for API calls to LLMs. The tool hides and protects proprietary prompts, model details, and API configurations, which would otherwise be exposed in client-side applications. Here’s how it works:
• Custom API Templates: Users can set up API templates with variables and receive a unique endpoint that reroutes requests to the LLM while shielding sensitive data (there's a rough sketch of this after the list).
• No Redeployment Needed: Instead of redeploying the entire app, teams can update prompts directly on the platform. This allows for multiple versions of prompts to be managed, tested, and updated in real time.
• Non-Technical Friendly: It simplifies the process so that non-technical users can manage and update prompts without needing to get into the technical details of the API calls themselves.
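For anyone curious what this looks like under the hood, here's a rough sketch of the pattern, not our exact implementation: a simplified Node/Express example where the route, template name, and {{variable}} syntax are made up purely for illustration. The prompt template and provider key live server-side, and the Bubble app only posts variable values:

```typescript
// Minimal sketch of the middleware idea (hypothetical names throughout):
// the prompt template and API key stay on the server; the Bubble app only
// sends variable values to a generic endpoint and gets clean JSON back.
import express from "express";

const app = express();
app.use(express.json());

// Prompt templates live server-side and can be edited or versioned here
// without touching or redeploying the Bubble app.
const templates: Record<string, { model: string; prompt: string }> = {
  "summarize-v2": {
    model: "gpt-4o-mini",
    prompt: "Summarize the following text in a {{tone}} tone:\n\n{{text}}",
  },
};

app.post("/run/:templateId", async (req, res) => {
  const template = templates[req.params.templateId];
  if (!template) return res.status(404).json({ error: "Unknown template" });

  // Fill {{variables}} from the request body, e.g. { "tone": "friendly", "text": "..." }.
  const prompt = template.prompt.replace(
    /\{\{(\w+)\}\}/g,
    (_, name) => String(req.body[name] ?? "")
  );

  // Call the LLM provider server-side; the key and prompt never reach the browser.
  const llmRes = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: template.model,
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data: any = await llmRes.json();

  // Return only the text the app actually needs, already unwrapped.
  res.json({ result: data.choices?.[0]?.message?.content ?? "" });
});

app.listen(3000);
```

Your Bubble API Connector call then just points at the /run/... endpoint with the variable values in the body, so the prompt, model choice, and provider key stay hidden, and editing or versioning a template doesn't require redeploying the app.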
Key Benefits:
- Secure your LLM-related IP: Protect prompts, model details, and API configurations from being exposed.
- Version control for prompts: Manage prompt versions without the need for code redeployment, keeping things agile and efficient.
- Simplify LLM integration: Eliminate the need for complex backend setups, allowing you to securely route API calls with minimal effort.
If any of this resonates with you, I’d love to hear your thoughts! I’m still trying to figure out if this tool could be of value to others, so if you’re working with LLMs and think this would be useful, feel free to reach out and I’ll be happy to provide access.
Thanks!