User presence system design (user online/offline status)

I want to share a user presence system design I’ve used in one of my Bubble apps to discern when users are online or offline. This design uses Bubble nocode tools only, i.e. no third-party service integrations, JavaScript, or plugins.


Disclaimer: this design is no longer a viable solution since it is WU-consumption intensive, and probably not economically viable in most cases. So, consider it just an exercise in what can be done with pure Bubble nocode elements, not a practical strategy.


Introduction

User presence, also known as user online status or user availability, is a feature needed in any app that informs users about the availability or online status (or presence) of other users. Think of any messaging app where you want to know whether the other person is online at a given time.

As I developed a real-time Bubble app myself, I faced the need to add such a feature. So, I investigated the problem and came up with different strategies. Today I will share just one of them: a strategy based solely on Bubble “native” nocode elements, i.e. no external plugins, no JavaScript, no third-party service integrations, just workflows and the Bubble DB.

Architecture

The system design I’m describing here is composed of two elements: a server-initiated heartbeat and a server-side presence monitor. More on both below.

Heartbeat

A heartbeat is a signal that a component sends to indicate that it is alive. In our case, the client side, i.e. the app running in the user’s browser, is the one signalling.

In this design, the heartbeat is requested by the server (ping-pong schema): the server asks the client (ping) for a heartbeat (pong). The process is synchronous in the sense that all clients receive the ping at the same time.

The ping must be sent periodically by the server, so a process must always be running on the server side. The ping request scales O(n), as it needs to execute one action (a DB write) per user.
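
Purely for illustration, here is the ping written as a minimal TypeScript sketch, with an in-memory array standing in for the Bubble DB; in the actual design this is a recurring backend workflow, not code.

```typescript
// Sketch only: the server-side ping. In Bubble this is a recurring backend
// workflow; the User list and field names here are illustrative stand-ins.
interface User {
  id: string;
  flagHeartbeatRequestedByServer: boolean;
}

// In-memory stand-in for the Bubble DB.
const users: User[] = [
  { id: "u1", flagHeartbeatRequestedByServer: false },
  { id: "u2", flagHeartbeatRequestedByServer: false },
];

const PING_PERIOD_MS = 10_000; // ping every 10 seconds

function pingAllUsers(): void {
  // One DB write per user: this is why the ping scales O(n).
  for (const u of users) {
    u.flagHeartbeatRequestedByServer = true; // ping; each client answers with a pong
  }
}

setInterval(pingAllUsers, PING_PERIOD_MS);
```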

Presence monitoring

Presence monitoring evaluates client heartbeats and decides on the user presence status. That is, with the information provided by the heartbeat architecture element, presence monitoring evaluates whether a given user is present or not.

In this design, we rely on the server side for presence monitoring of all users. A User field presenceStatus is needed to keep track of user availability; it is the source of truth for any agent that needs to know whether a user is present or not.

The server does a periodic sweep that allows for a certain tolerance, to avoid false Offline positives. There is a single process running on the server for all users, but actions still take place for each user, so it also scales O(n) with the number of users.
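
Again as a sketch only, the sweep logic could be expressed like this in TypeScript; the field names and threshold are assumptions matching the implementation described below.

```typescript
// Sketch only: the server-side presence sweep. In Bubble this is a backend
// workflow over a filtered list of Users.
type PresenceStatus = "Online" | "Offline";

interface User {
  id: string;
  presenceStatus: PresenceStatus;
  lastHeartbeat: Date;
}

// A tolerance above the ping period avoids false Offline positives when a
// pong is merely late rather than missing.
const OFFLINE_THRESHOLD_MS = 25_000;

function sweep(users: User[], now: Date = new Date()): void {
  for (const u of users) {
    const silentForMs = now.getTime() - u.lastHeartbeat.getTime();
    if (u.presenceStatus === "Online" && silentForMs > OFFLINE_THRESHOLD_MS) {
      u.presenceStatus = "Offline"; // one DB write per stale user: O(n) overall
    }
  }
}
```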

Implementation

Summary

  • The heartbeat is registered in the User’s field lastHeartbeat.
  • The user presence status (Online/Offline) is stored in the User’s field presenceStatus.
  • The User’s field flagHeartbeatRequestedByServer lets the server signal the user when it should send a new heartbeat.
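
Purely as notation, the three fields can be written as a TypeScript interface; in Bubble they are ordinary fields on the User data type.

```typescript
// The three User fields used by this design, as a TypeScript interface
// purely for notation; in Bubble they are fields on the User data type.
interface PresenceFields {
  lastHeartbeat: Date;                     // set by the client on each pong
  presenceStatus: "Online" | "Offline";    // source of truth for presence
  flagHeartbeatRequestedByServer: boolean; // set to yes by the server's ping
}
```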

A diagram of the proposed architecture can be found here and in the figure below.

Client-side

Keeping webapp active

If we want to consider our users present even when the webapp is not focused, i.e. the browser is minimized, the tab where the app is running is not focused, etc., the client side must ensure the tab is kept active. Otherwise, the client side will stop responding to the server and the user will be considered not present. For that purpose, this design relies on the Audio / Sound Player Howler.js Lite plugin to play a light sound in the background, so that the browser keeps the tab active.


Disclaimer: this is the only plugin we use, and it is not directly related to the heartbeat/presence monitoring mechanism itself.


Workflow “active user procedure”

We update the value of lastHeartbeat with the active user procedure, which performs the following actions:

  • Updates the User’s lastHeartbeat (fig. 3)
  • Sets User’s presenceStatus to Online (fig. 4)

The active user procedure workflow is executed under two circumstances (a code sketch follows the list):

  1. When the page is loaded, so that the User is set Online as soon as they open the app (fig. 5)
  2. When the User’s flagHeartbeatRequestedByServer is set to yes (fig. 6), i.e. when the server-side asks the user to send a heartbeat
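
For illustration only, here is the procedure as a TypeScript sketch; in Bubble it is a custom event plus the two triggers above, not code.

```typescript
// Sketch only: the client-side "active user procedure".
interface CurrentUser {
  lastHeartbeat: Date;
  presenceStatus: "Online" | "Offline";
  flagHeartbeatRequestedByServer: boolean;
}

function activeUserProcedure(user: CurrentUser): void {
  user.lastHeartbeat = new Date();             // fig. 3: register the heartbeat
  user.presenceStatus = "Online";              // fig. 4: (re)assert presence
  user.flagHeartbeatRequestedByServer = false; // acknowledge the server's ping
}

// Trigger 1: on page load (fig. 5).
// Trigger 2: "Do when Current User's flagHeartbeatRequestedByServer is yes" (fig. 6).
```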

Server side

API workflow “initiate inactivity detection process”

The superadmin user schedules (fig. 7) the initiate inactivity detection process API workflow (fig. x), used for starting the inactivity detection process (10-20-25), which does various things

  1. Signals users (ping) to send a heartbeat (pong)
  2. Checks users’ lastHeartbeat to mark them as Offline when it’s older than a threshold
  3. Keeps itself running without user intervention by re-scheduling itself

API workflow “inactivity detection process (10-20-25)”

The initiate inactivity detection process API workflow calls the inactivity detection process (10-20-25) API workflow (fig. 9).

The code 10-20-25 stands for:

  • (x) 10 seconds is the periodicity of server signals (ping), asking for users’ heartbeats
  • (2x) 20 seconds is the threshold for process health check. It’s set to two times the ping periodicity so as to give some tolerance for new pings to be scheduled
  • (2.5x) 25 seconds is the threshold for inactive user detection. It’s set to 2.5 times the ping periodicity so as to give some tolerance for clients to respond with heartbeats (pong)

The thresholds and periodicity can be changed, but doing so directly affects WU consumption and the inactivity detection margin.
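
As a sketch, the scheme boils down to three constants derived from the base period x:

```typescript
// The 10-20-25 timing scheme as constants. Rescaling the base period x
// trades WU consumption against inactivity detection margin.
const PING_PERIOD_S = 10;                           // x: ping periodicity
const HEALTH_CHECK_THRESHOLD_S = 2 * PING_PERIOD_S; // 2x = 20 s
const INACTIVITY_THRESHOLD_S = 2.5 * PING_PERIOD_S; // 2.5x = 25 s
```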

This API workflow does various things (sketched in code after the list):

  1. Triggers the Custom event request heartbeat from users (fig. 10)
  2. Schedules itself 10 seconds in the future. The ID of this scheduled API workflow will be used later. This is done to keep the process running ad infinitum (fig. 11)
  3. Schedules the API workflow check process health 20 seconds in the future, providing the ID of the inactivity detection process (10-20-25) API workflow just scheduled 10 seconds in the future, as an argument called lastInactivityDetectionProcessId (fig. 12)
  4. Stores the ID of the scheduled inactivity detection process (10-20-25) API workflow in the superadmin User’s nextInactivityDetectionProcessId field (fig. 13). The superadmin user is the one that started the whole process.
  5. Schedules the API workflow check users inactivity (+25 sec) 25 seconds in the future (fig. 14).
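
To make the control flow explicit, here is a TypeScript sketch of one iteration; scheduleWorkflow is a hypothetical stand-in for Bubble’s Schedule API Workflow action, which returns the scheduled workflow’s ID.

```typescript
// Sketch only: one iteration of "inactivity detection process (10-20-25)".
let scheduledCount = 0;
function scheduleWorkflow(name: string, delayS: number, args?: object): string {
  // Stand-in: real scheduling happens inside Bubble and returns an ID.
  return `wf-${scheduledCount++}`;
}

function requestHeartbeatFromUsers(): void {
  // Custom event (fig. 10): sets flagHeartbeatRequestedByServer on every User.
}

interface SuperAdmin { nextInactivityDetectionProcessId: string; }

function inactivityDetectionProcess(superadmin: SuperAdmin): void {
  requestHeartbeatFromUsers();                                                    // step 1
  const nextId = scheduleWorkflow("inactivity detection process (10-20-25)", 10); // step 2
  scheduleWorkflow("check process health", 20, {                                  // step 3
    lastInactivityDetectionProcessId: nextId,
  });
  superadmin.nextInactivityDetectionProcessId = nextId;                           // step 4
  scheduleWorkflow("check users inactivity (+25 sec)", 25);                       // step 5
}
```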

Custom event “request heartbeat from users”

The Custom event request heartbeat from users sets the flagHeartbeatRequestedByServer value of every User to yes (fig. 15).

API workflow “check process health”

The API workflow check process health (fig. 16) reinitiates the process by calling the initiate inactivity detection process API workflow in case the process is dead, i.e. there is no new inactivity detection process (10-20-25) scheduled in the future (fig. 17).
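
A sketch of that health check, with reinitiate standing in for the initiate inactivity detection process workflow:

```typescript
// Sketch only: "check process health". If no newer iteration has been
// scheduled since this check was queued, the chain is dead; restart it.
interface SuperAdmin { nextInactivityDetectionProcessId: string; }

function checkProcessHealth(
  superadmin: SuperAdmin,
  lastInactivityDetectionProcessId: string,
  reinitiate: () => void, // stand-in for "initiate inactivity detection process"
): void {
  // If the stored "next" ID still equals the one passed 20 s ago, the process
  // never rescheduled itself in the meantime.
  if (superadmin.nextInactivityDetectionProcessId === lastInactivityDetectionProcessId) {
    reinitiate();
  }
}
```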

API workflow “check users inactivity (+25 sec)”

  1. The API workflow check users inactivity (+25 sec) applies the API workflow apply inactive user protocol to a user to a list of Users (fig. 18)

  2. The API workflow apply inactive user protocol to a user sets the User’s presenceStatus to Offline (fig. 19)

  3. The list of Users contains the subset of Users matching the following conditions (see the sketch after this list):
    a. lastHeartbeat is older than 20 seconds
    b. presenceStatus is Online
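
Rendered as a TypeScript sketch, with the same illustrative field names as before:

```typescript
// Sketch only: "check users inactivity (+25 sec)" selects stale Online users
// and applies the inactive-user protocol to each.
interface User {
  id: string;
  presenceStatus: "Online" | "Offline";
  lastHeartbeat: Date;
}

const STALE_MS = 20_000; // lastHeartbeat older than 20 seconds

function checkUsersInactivity(users: User[], now: Date = new Date()): void {
  const stale = users.filter(
    (u) =>
      u.presenceStatus === "Online" &&
      now.getTime() - u.lastHeartbeat.getTime() > STALE_MS,
  );
  for (const u of stale) {
    u.presenceStatus = "Offline"; // "apply inactive user protocol to a user"
  }
}
```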

Execution timeline

  1. (T0) server-side. Admin executes API workflow initiate inactivity detection process (Fig. 7). This action must be done only once, via a superadmin dashboard.

  2. (T0) server-side. An API workflow inactivity detection process (10-20-25) is scheduled 0 sec in the future (Fig. 9)

  3. (T0) server-side. An API workflow inactivity detection process (10-20-25) executes.

  4. (T0) server-side. Custom event request heartbeat from users is executed.

  5. (T0) server-side. All users get flagHeartbeatRequestedByServer set to yes (Fig. 15)

  6. (T0) client-side. All present users get Do when Current User’s flagHeartbeatRequestedByServer is yes workflow triggered (Fig. 6).

  7. (T0) client-side. All present users trigger the custom event active user procedure

  8. (T0) client-side. All present users update their lastHeartbeat to current time and set their flagHeartbeatRequestedByServer = no (Fig. 3)

  9. (T0) client-side. Present users who had their status set to Offline reset it to Online (Fig. 4).

  10. (T0) server-side. An API workflow inactivity detection process (10-20-25) is scheduled 10 sec. in the future (Fig. 11).

  11. (T0) server-side. An API workflow check process health is scheduled 20 sec. in the future, providing the ID of the scheduled API workflow of the previous step, in the lastInactivityDetectionProcessId argument. (Fig. 12)

  12. (T0) server-side. The same ID is saved in the superadmin User’s field nextInactivityDetectionProcessId (Fig. 13).

  13. (T0) server-side. An API workflow check users inactivity (+25 sec) is scheduled 25 sec. in the future. (Fig. 14)

  14. (T+10) server-side. API workflow inactivity detection process (10-20-25) executes. Steps 3 to 13 repeat.

  15. (T+20) server-side. API workflow inactivity detection process (10-20-25) executes. Steps 3 to 13 repeat.

  16. (T+20) server-side. If the superadmin user’s nextInactivityDetectionProcessId equals lastInactivityDetectionProcessId, the process has problems and must be reinitiated. We do this by invoking the initiate inactivity detection process API workflow (return to Step 1) (fig. x).

  17. (T+25) server-side. API workflow check users inactivity (+25 sec) runs, executing the API workflow apply inactive user protocol to a user on a list of Users. The list contains users whose lastHeartbeat is older than 20 sec. and whose presenceStatus is not Offline (fig. 18).

  18. (starts T+25) server-side. Each user selected in step 17 has its status set to “Offline”.

Performance analysis

  • 2 users continuously present consume ~2920 WU/h
  • 5 users consume ~4886 WU/h

We can divide this consumption between:

  • overhead operations, e.g. scheduling the backend process, changes to the superadmin user, etc.
  • operations per present user

We can quickly estimate how many WU are consumed per user by taking the difference between both measurements and dividing by the difference in user count (3 users):

  • 4886 WU/h - 2920 WU/h = 1966 WU/h
  • 1966 WU/h / 3 users = 655.33 WU/h/user

The overhead WU consumption is approximately:

  • 655.33 WU/h/user * 5 users ≈ 3276.7 WU/h
  • 4886 WU/h - 3276.7 WU/h ≈ 1609.3 WU/h
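
The same estimate, written out as a few lines of code for anyone who wants to plug in their own measurements:

```typescript
// Back-of-the-envelope split of WU consumption into per-user and overhead
// parts, from the two measurements above.
const wuFor2Users = 2920; // WU/h, 2 users continuously present
const wuFor5Users = 4886; // WU/h, 5 users continuously present

const perUser = (wuFor5Users - wuFor2Users) / (5 - 2); // ≈ 655.33 WU/h/user
const overhead = wuFor5Users - perUser * 5;            // ≈ 1609.33 WU/h

console.log({ perUser, overhead });
```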

Other presence system plugins and solutions

Updated Miro dashboard

Hang on, what am I missing? Why can’t you just have a lastSeen date on the relevant data type, and say Do a search for X where lastSeen > 15 minutes ago to get online users?

A lastSeen field proactively updated by the client side needs some kind of periodic process, e.g. a Do every X seconds workflow. The solution you propose is valid unless you care about these two edge cases:

  • Client-side suffers a connection outage
  • OS on client-side goes on hibernation mode

Updates to lastSeen based on the client running periodic actions start misbehaving when recovering from a connection outage or exiting hibernation mode.

I think it’s overcomplicated. Just have a page is loaded workflow, set the lastLogin date if the user’s last login date is more than 5 minutes ago, and then you have an easy way to show all online users (all users who have loaded a page in the last 5 minutes) - that’s pretty fine, I think, especially given that in SPAs, changes to URL params load the page again so will run the workflow if necessary.

You’ll always need something triggering that param change for the When page is loaded event to execute and the lastSeen field to update, which sends us back to periodic processes (if you really want something close to “real-time” detection) that suffer from the edge cases I stated above.

I don’t claim this solution is for everyone, but it fits the most demanding use cases, like the one I had at hand.

That’s a lot of WU. I wonder if something like MQTT could be implemented here, as it’s less resource-intensive than websockets and you’re really just sending a ping to the broker to say you’re still there, but I haven’t seen anyone do that yet in Bubble.

Indeed, that’s why I don’t consider it a viable solution anymore.

If I ever have to implement this again, I’ll explore a websockets-based architecture with a third-party service like PubNub or Ably.

Changing URL params doesn’t load the page again, just triggers the page is loaded event again, so yes, the workflows will run again, but there are no loading wait times for data fetching etc., and all custom state values will remain intact, which are important caveats especially for those building SPAs.

As a proof of concept, I spun up a lightweight Redis instance on Railway using Docker, then built a plugin to track user presence without touching Bubble’s backend or database.

The plugin takes 3 inputs:

  1. user_id – The ID of the user to track.
  2. polling_interval – How often to send a ping (default: 10s).
  3. idle_threshold – Interval after which the user is considered away (default: 30s).

Every polling_interval, the plugin sends a POST request to the server with the user ID and current timestamp. If the server doesn’t receive a ping within a set expiry, the user is considered offline.

If the user’s mouse hasn’t moved in 15 seconds, the plugin updates their status to “away” instead of “online”. And if they move again while marked “away”, an immediate “online” ping is sent to reflect their return.
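
Assuming a hypothetical endpoint and payload shape (the actual plugin’s API may differ), the client loop could look roughly like this:

```typescript
// Sketch only: client-side presence pings with mouse-idle detection.
// The endpoint URL, payload, and user ID are illustrative assumptions.
const POLLING_INTERVAL_MS = 10_000; // polling_interval
const MOUSE_IDLE_MS = 15_000;       // away after 15 s without mouse movement

let lastMouseMove = Date.now();

async function sendPing(userId: string, status: "online" | "away"): Promise<void> {
  await fetch("https://example.com/presence/ping", { // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ user_id: userId, status, ts: Date.now() }),
  });
}

document.addEventListener("mousemove", () => {
  const wasAway = Date.now() - lastMouseMove > MOUSE_IDLE_MS;
  lastMouseMove = Date.now();
  if (wasAway) void sendPing("u123", "online"); // immediate return-to-online ping
});

setInterval(() => {
  const status = Date.now() - lastMouseMove > MOUSE_IDLE_MS ? "away" : "online";
  void sendPing("u123", status); // server-side expiry marks missing pings offline
}, POLLING_INTERVAL_MS);
```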

To check if a user is online/offline/away, I am working on a second plugin element that does the exact same thing but in reverse (rough sketch after the list):

  • Every 10 seconds, it sends a GET request to the server with a user ID.
  • It then publishes the status (online, away, or offline) to the plugin’s state so it can be used for conditionals in the app.
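
A rough sketch of that reader element, again with a hypothetical endpoint and a setState stand-in for publishing to the element’s state:

```typescript
// Sketch only: poll the server for one user's status and publish it to the
// plugin element's state so it can drive conditionals in the app.
type Presence = "online" | "away" | "offline";

async function pollStatus(
  userId: string,
  setState: (s: Presence) => void, // stand-in for publishing a plugin state
): Promise<void> {
  const res = await fetch(`https://example.com/presence/${userId}`); // hypothetical
  const { status } = (await res.json()) as { status: Presence };
  setState(status);
}

setInterval(() => void pollStatus("u123", (s) => console.log(s)), 10_000);
```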

I tested the first plugin with 5 users and it resulted in 0 WU generated from the plugin. The only WU generated was from the page is loaded workflow I already had, but nothing related to the plugin itself.

Because this is client-driven, it still runs into the usual “Bubble tab inactive” behavior if the page is backgrounded for too long. I think the only fix for that is something like Howler.

Does the first plugin keep sending pings under the following conditions?

  1. From Chrome dev tools → Network → Set Network conditions to Offline → Keep it a while, e.g. 300s → Reset to No throttling, no further click on Chrome window (simulating connection outage while in background)
  2. Hibernate/suspend OS → Keep it a while, e.g. 300s → Just exit suspension (and log in if necessary)

I am very interested in those two use cases because they were the whole point of designing a server-led procedure.

To save API calls, it would be useful to get a list of Online/Away/Offline users in one call. But I suppose you only need single-user queries, since you are updating a single plugin element state associated with a single user.

I guess it consumes 0 WU to set that state? I was thinking of a server-side action updating all users at once, but that indeed potentially consumes a lot of WU.

I guess you are making API calls directly from within the plugin’s JS code, without using the API Connector? Does that method not consume any WU?

I haven’t tested this; however, if the server doesn’t receive a ping within a certain timeframe, it defaults to offline.

Totally agree. After consideration, it might be better to make this a websocket, as it will be more scalable with more users and cheaper for the external server. I did find a plugin that approached it this way; rather than doing 1000 API calls to get users’ statuses, you could do a single API call that retrieves “online” users.

I haven’t set the state yet, but it was going to be internal to the plugin. The plan was to avoid touching the Bubble DB or running any Bubble server-side WF, due to their markup.

That’s correct: if the API call is done client-side within the plugin, it doesn’t use any WU.
