API Connector Streaming (SSE) Feature Not Working with Dify API

Hello Bubble Community,

I’m trying to use Bubble’s API Connector to call the streaming API (e.g., for chat completion) of an LLM platform called Dify. I want to utilize the API Connector’s streaming (Server-Sent Events, SSE) functionality, but it doesn’t seem to be working correctly.

Upon investigation, I found several significant differences between the SSE format returned by the Dify API and the format used by the ChatGPT (OpenAI) API, which Bubble seems to support more commonly. I suspect these differences might be the cause of the issue.

Below is a summary of the main discrepancies identified based on example responses from Dify and ChatGPT (referencing dify_response.txt and chatgpt_response.txt), along with specific examples:

  1. JSON Structure in data: Field and Text Chunk Retrieval:
  • Difference: Dify returns text chunks in the answer key within the JSON object in the data: field, whereas ChatGPT often uses the delta key. The event type names also differ (agent_message vs. response.output_text.delta).

  • Dify Example:

    event: proxy bTOYc1
    data: data: {"event": "agent_message", ... "answer": "Okay, I understand. ..."} [cite: 2]
    data: 
    data: 
    
  • ChatGPT Example:

    event: proxy bTOYc1
    data: event: response.output_text.delta
    data: data: {"type":"response.output_text.delta", ... "delta":"Certainly"} [cite: 12]
    data: 
    data:
    
  • Concern: It’s unclear whether Bubble’s API Connector can be configured to extract text from Dify’s answer field, or whether it specifically expects the delta structure used by OpenAI (see the parsing sketch below).
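
For reference, here is a minimal sketch (TypeScript) of the branching a parser would need to support both shapes. The field names come from the logs above; the helper itself is hypothetical, not Bubble’s actual parser:

    // Hypothetical helper: pull the text chunk out of either payload shape.
    // Field names (answer, delta, event, type) are taken from the logs above.
    function extractTextChunk(json: any): string | undefined {
      if (json.event === "agent_message") return json.answer;            // Dify
      if (json.type === "response.output_text.delta") return json.delta; // OpenAI
      return undefined;
    }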

  2. Stream Termination Signal:
  • Difference: Dify signals the end of the stream with an event: message_end event. The provided ChatGPT log instead ends with an event: response.completed event containing final metadata, while the standard OpenAI API often terminates with data: [DONE].

  • Dify Example:

    event: proxy bTOYc1
    data: data: {"event": "message_end", ... "metadata": {"usage": {...}}} [cite: 9]
    data: 
    data:
    
  • ChatGPT Example:

    event: proxy bTOYc1
    data: event: response.completed
    data: data: {"type":"response.completed","response":{ ... usage ... }} [cite: 117]
    data: 
    data:
    
  • Concern: If Bubble specifically expects a signal like data: [DONE], it might not correctly recognize Dify’s message_end event, potentially causing issues with stream termination or data handling (a hypothetical mapping is sketched below).
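
If an intermediary ends up being needed (see the questions below), the terminators could in principle be mapped. A hypothetical one-liner, assuming Bubble keys on the classic data: [DONE]:

    // Hypothetical mapping: translate Dify's end-of-stream event into the
    // classic OpenAI terminator (assumed to be what Bubble recognizes).
    function mapTerminator(json: any): string | null {
      return json.event === "message_end" ? "data: [DONE]\n\n" : null;
    }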

  3. Raw SSE Formatting (Duplicate data: Prefix):

  • Difference: The Dify response example shows multiple instances of data: data: {JSON}, where the data: prefix is duplicated. This deviates from the standard SSE format and could cause parsing issues. The ChatGPT log appears to follow the more standard event: <name>\ndata: <JSON>\n\n format.

  • Dify Example:

    event: proxy bTOYc1
    data: data: {"event": "agent_thought", ... } // Double 'data:' prefix [cite: 1]
    data: 
    data: data: {"event": "agent_mes         // Double 'data:' prefix [cite: 1]
    
    event: proxy bTOYc1
    data: sage", ... "answer": "..."} [cite: 2]
    data: 
    data: data: {"event": "message_end", ... } // Double 'data:' prefix [cite: 9]
    data: 
    data:
    
  • ChatGPT Example:

    event: proxy bTOYc1
    data: event: response.output_text.delta // Standard event line [cite: 12]
    data: data: {"type":"response.output_text.delta", ... "delta":"..."} // Standard data line [cite: 12]
    data: 
    data:
    
  • Concern: The data: data: format is highly likely to cause errors in Bubble’s SSE parser (a cleanup helper is sketched below).
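
This one at least looks mechanically fixable upstream of Bubble. A hypothetical pre-processing helper:

    // Hypothetical cleanup: "data: data: {...}" -> "data: {...}".
    function stripDuplicatePrefix(line: string): string {
      return line.startsWith("data: data: ") ? line.slice("data: ".length) : line;
    }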

Questions:

  • Is Bubble’s API Connector streaming feature flexible enough to handle these differences in SSE format (specifically the JSON structure, termination signal, and the data: data: formatting)?
  • Or is it primarily designed assuming an OpenAI-compatible format?
  • Are there specific configurations or known workarounds (e.g., using an intermediary server, specific plugins) within Bubble to correctly parse streams from APIs like Dify? A sketch of the intermediary-server approach follows below.
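
In case it helps the discussion, here is a rough sketch of the intermediary-server workaround from the last question: a small Node/TypeScript relay that calls Dify, normalizes the three differences above (answer → delta-style chunks, message_end → data: [DONE], and the duplicated data: prefix), and re-streams an OpenAI-style SSE response. The endpoint URL, environment variable names, and the exact output shape Bubble expects are all assumptions, not verified behavior:

    // Rough relay: Bubble -> this server -> Dify, with Dify's SSE stream
    // rewritten into OpenAI-style chunks on the way back. The URL, env var
    // names, and the output shape are assumptions made for illustration.
    import http from "node:http";

    const DIFY_URL = process.env.DIFY_URL ?? "https://api.dify.ai/v1/chat-messages";
    const DIFY_KEY = process.env.DIFY_API_KEY ?? "";

    http.createServer(async (req, res) => {
      // Collect the incoming JSON body (assumed to already be Dify-shaped,
      // including "response_mode": "streaming").
      let body = "";
      for await (const c of req) body += c;

      const upstream = await fetch(DIFY_URL, {
        method: "POST",
        headers: {
          Authorization: `Bearer ${DIFY_KEY}`,
          "Content-Type": "application/json",
        },
        body,
      });

      res.writeHead(200, {
        "Content-Type": "text/event-stream",
        "Cache-Control": "no-cache",
        Connection: "keep-alive",
      });

      const decoder = new TextDecoder();
      const reader = upstream.body!.getReader();
      let buf = "";
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        buf += decoder.decode(value, { stream: true });
        let nl: number;
        while ((nl = buf.indexOf("\n")) >= 0) {
          let line = buf.slice(0, nl).trimEnd();
          buf = buf.slice(nl + 1);
          // Collapse the duplicated prefix seen in the Dify logs.
          if (line.startsWith("data: data: ")) line = line.slice("data: ".length);
          if (!line.startsWith("data: ")) continue; // skip event:/blank lines
          let json: any;
          try { json = JSON.parse(line.slice("data: ".length)); } catch { continue; }
          if (json.event === "agent_message") {
            // Re-emit the text chunk in an OpenAI-style delta (assumed shape).
            const out = { choices: [{ delta: { content: json.answer } }] };
            res.write(`data: ${JSON.stringify(out)}\n\n`);
          } else if (json.event === "message_end") {
            res.write("data: [DONE]\n\n"); // classic OpenAI terminator
          }
        }
      }
      res.end();
    }).listen(3000);

Bubble’s API Connector would then be pointed at this relay instead of at Dify directly.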

Any advice from those who have encountered similar issues or know potential solutions would be greatly appreciated.

We’re seeing the same thing but with another (non-OpenAI) LLM.
When initializing the API call, Bubble does pick up on the chunks within the response, and we’ve configured the actual text chunk to push to the text stream response field, but it’s just not working in our workflow.
We see the API call being made and the streamed data coming back, but the results never make it to the group we’ve defined on our page (whereas with OpenAI, this works perfectly).
