How much do you use Cursor or any AI-powered IDE?

Personally, I used to use it more just for fun, to play around, build small projects for my own use, and see how capable AI really was.

I went a while without using it, but I came back to it over the past few weeks, and it’s been a really great experience.

It has helped me get ideas out of my head quickly before moving them into Bubble, and to build amazing plugins much faster and more efficiently. And it’s noticeable how much LLMs have evolved this year. I remember that at the beginning of the year, it was common for the AI to sometimes break your code because of a syntax error — a comma, a quote, something small like that. But that basically doesn’t happen anymore. It’s really becoming an excellent tool to boost our productivity.

Now I’m curious — how much and in what ways have you been using AI-powered IDEs or similar tools? :slightly_smiling_face:

In the last 3 months, I’ve used 10.3 billion tokens and $7,734 worth of OpenAI and Claude models.

Which apps did you use most of these tokens on? Buildprint?

It seems like I need to use more AI to get close to those 10 billion tokens in 3 months, hehe.

I used that just last month :joy:

My last 3 months are just shy of 25 billion tokens but this last month has been very high usage from several code refactors.

That’s gonna jump a lot once those companies aim for profitability over adoption/growth. Scary to think how much AI has already caused prices to go up for electricity, software and hardware.

3 Likes

About 25B tokens, on Cursor and Warp, mostly with the Opus model. Luckily, it’s cheaper now with 4.5.

Getting to over $1,000 monthly now, refactoring backends and UIs.

You’re absolutely right!

Though in the meantime, I love the fact that the competitive marketplace is working exactly as it should.

1 Like

I’ve used LLMs to make some superb custom plugins for Bubble. Nowhere near the amount of tokens you guys have used (I pay around $50 a month), but I’ve still managed to create a PDF license generator, a zip creator, a keyboard nav system, an audio player, S3 tools and many more.

In fact, I’ve loved the Bubble + code/plugin experience so much that I believe it’s a big sleeper selling point for Bubble that they’ve not fully realised yet: they should consider making their plugin-building environment a first-class citizen in the Bubble ecosystem, really polish it up, and add a slick LLM-based IDE (keeping it all under one roof). They could also then legitimately say that people can ‘keep their own code’, etc. They could charge for the tokens used, and provide proper syntax checking and a rapid testing environment. Could be so good.

Merry Christmas everyone :snowman::sun::christmas_tree::smiling_face::alien:

I use VS Copilot a lot. I don’t understand the appeal of bragging about token usage. I never exceed my subscription limits, and I use AI in every facet of my full-time work and business. :melting_face:

I have a few AI pipelines used by around 30 users every day. I’d be horrified if they bragged about how many tokens they burn.

2 Likes

I use Lovable for the initial mockups/designs, also used Bubble AI for 1 app. Not perfect but definitely a good starting point.

Meh, I’m paying for a subscription, so I’ve spent about $800 over the last 3 months, nothing close to the API cost of $7,700. Obviously, if you pay per token, you can be a bit more judicious about what you get AI to do.

VC-to-B-to-C

Yeah, enjoy that free gift from the VCs while it lasts.

1 Like

How come you pay these prices for pure AI IDE coding? I pay $15 for Windsurf, which often has free usage for GPT Codex or whatever new model comes out, and Antigravity has generous Gemini 3 and Opus 4.5 limits on the free plan. They work in parallel for like 6-8 hours a day :grinning_face:

In agent workflows, higher usage usually just means the model is doing more of the work itself. In that context, token usage is shorthand for how much human work is being replaced.

Totally fair. Windsurf and similar tools are great value and very good in assist mode.

If you use AI as an actual agent, something like Cline with Claude Sonnet 4.5, it becomes a different category altogether and can easily justify a four-to-five-figure monthly budget…

Wow, I honestly didn’t expect this post to spark such an intense and high-quality discussion.

When I created it, I had no idea it would turn into such a deep debate about tokens and cost–benefit. But that alone shows how relevant this topic is for all of us.

A few reflections after reading all the comments:

About the insane token numbers

First off, @georgecollier and @mitchbaylis — 10 billion and 25 billion tokens?! That’s not just heavy usage, that’s literally replacing human labor at scale.

About “bragging” with token usage

@ihsanzainal84 raised a fair point. This isn’t about flexing how many tokens you burn. It’s about transparency. If you’re paying $50/month and delivering the same results as someone spending $1,000, then you’re clearly more efficient. But if you’re spending $1,000 and delivering 20× more, that also makes perfect sense. Context is everything.

What really stood out to me

The suggestion from @asked111 about Bubble heavily investing in an AI-integrated plugin environment is brilliant. Seriously. Keeping everything under one roof, charging for tokens, fast testing environments… that would remove so much friction from our current workflow.

My current reality

I’m much closer to the $50–$100/month range than anything like $7k. I mostly use AI to:

  • Prototype logic before building it in Bubble
  • Debug plugins I’ve built
  • Generate base code that I then refine manually

I’m nowhere near fully autonomous agents yet. But after reading this thread, I’m seriously considering experimenting with heavier workflows.

The future pricing question

@boston85719 and @rico.trevisan brought up the elephant in the room. Yes, VCs are subsidizing our productivity right now. When the bill comes due, it’s going to hurt. But until then? I’m going to squeeze every available token to build competitive advantage.

Question for the community

For those spending $500+/month on tokens: what’s the real ROI? Are you able to measure how much time you’re saving versus how much you’re spending? That kind of insight would really help the rest of us decide whether it’s worth scaling usage or keeping it lean.

And for those who spend very little: do you feel like you’re leaving productivity on the table, or do you think you’re already at the sweet spot?

Huge thanks to everyone who turned a simple question into such a rich discussion. This is exactly why this community is awesome.

Merry Christmas :christmas_tree:

3 Likes

Carlos, great follow-up.

I’d push back on the idea that spend has to scale proportionally with output.

The bar isn’t 20× more results; it’s whether these agents replace more human time than they cost. At $50/month you might clear some low-hanging work and save a handful of hours. At $1,000/month you might save 25–30 hours. That’s not proportional, but it’s still an easy trade.

So it’s not about how much you spend, it’s about how many hours you buy back.

Yeah, costs will go up once the subsidies dry up (or the industry’s leaders are defined). That’s unavoidable.

However, you can switch AI providers seamlessly if prices get out of hand, so cheaper options that get you most of the value will keep pressure on pricing. Compare with Bubble’s 8x pricing jump: some people left, but many stayed because they were stuck. Migration and retraining costs meant it wasn’t really a free market, so it couldn’t course-correct.

Add to that the fact that models and users both get more efficient over time, and the cost per task usually goes down. So prices may rise, but competition and efficiency should keep them manageable. :crossed_fingers:

Once you’re spending mid four figures, you’re not optimizing prompts anymore; you’re using agents to replace seats. At that level you’re easily offsetting one or two senior engineers’ worth of work, which makes the ROI pretty straightforward. (It’s also more easily scalable up and down, etc.)

Is that what you tell your clients when their AI expenses go through the roof?

  1. Token usage varies model by model, so the same workflow with the same input will consume different token counts on different models. Some models generate more output tokens to process the same input; Gemini 3, for example, does this.

  2. When building any pipeline that runs model inference, you can establish a median token usage by running some unit tests and then measuring during UAT. So when I talk about higher token usage, I mean usage that deviates from that norm. This can mean a few things, but here are some common causes:

    • You have leakage: somewhere in the pipeline a model is running amok. It could be “overthinking” (looping due to an input error).
    • Bad actors are trying to break your pipeline. Maybe it’s time to add a prompt-validation sequence, or to improve your guardrails.
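To make the idea concrete, here’s a minimal sketch of that deviation check: collect per-request token counts during unit tests/UAT as a baseline, then flag any request that blows past a multiple of the baseline median. The function name, threshold, and numbers are mine, purely for illustration, not from any specific library or pipeline.

```python
from statistics import median

def flag_token_anomaly(baseline_usages, new_usage, threshold=2.0):
    """Return True if a request's token usage deviates from the norm.

    baseline_usages: per-request token counts gathered during tests/UAT.
    new_usage: token count of the request being checked.
    threshold: multiple of the baseline median treated as "running amok".
    """
    return new_usage > threshold * median(baseline_usages)

# Baseline gathered during UAT; a looping ("overthinking") call stands out.
baseline = [1200, 1350, 1100, 1280, 1400]   # median = 1280
print(flag_token_anomaly(baseline, 1500))   # False: within normal variance
print(flag_token_anomaly(baseline, 9000))   # True: likely leakage or abuse
```

In a real pipeline you’d log the flagged request for inspection rather than just print, and you’d probably use a more robust spread measure than a fixed multiplier, but the principle is the same: compare against the norm you established before launch.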

I’ve been bootstrapping for 4-5 years. We’re in the green, but every penny counts. We build budgets the old way; AI is still a luxury. I wish I could Scrooge McDuck into all that money you obviously swim in.

1 Like

I think we’re still talking about different use cases.

You’re describing production pipelines where spikes in token usage usually point to bugs, abuse, or inefficiency. In that context, I agree with you that rising token usage is something to control, not celebrate, and I obviously wouldn’t tell a client that runaway token usage is a sign of success.

I’m referring to agent workflows, not shared pipelines. In those cases, token usage isn’t compared to a median at all. If higher usage replaces hours or days of manual work, that spend can make sense even if it would look inefficient by pipeline standards.

You’re right that token usage varies by model, which is why I don’t treat the raw numbers as precise. But for models that actually do end-to-end work, they’re all in the same (or at least similar) order of magnitude, so it’s still a useful signal for how much work is being delegated.

Same metric, different context.

And for what it’s worth, I don’t itemize AI usage but bake it into the project price.

1 Like

Yes, that’s the right thing to do. My view is to use it now to build stuff that doesn’t require AI later. If you build systems today that depend on AI tomorrow, then when that bill comes due, you may have to build a new system.

It’s crazy that AI companies are already lobbying the US government for bailout guarantees, as that speaks to their own belief that they will fail.

Warren wants Trump White House to promise it won’t bail out OpenAI | FedScoop

Plus, Sam Altman has a history of not being the most trustworthy when speaking about user numbers or payouts to community members. I’m not sure why anybody is trusting him now.

AI is hype: trillions invested over the next 10 years to replace mostly minimum-wage jobs.

https://www.axios.com/2025/08/21/ai-wall-street-big-tech

95% of companies see no real return on their AI investments.