There is an energy cost: a more powerful model consumes more energy, which is a higher input cost, so it needs a higher price to the user. On top of that, building a more powerful model requires more investment than previous models, again a higher input cost that results in a higher price.
Yes, the user may get better at using it and rebuild their systems around it, which is also a cost.
In regard to deploying AI, I come from a different perspective. There are 2 related roles that I fill in my full-time job:
Recommending, budgeting and deploying AI tools for my department of 8 people.
Recommending tools and infrastructure for a sector with around 800 users, consisting of staff and stakeholders.
If you're deploying agents from third-party services, then you're most likely just paying a subscription. Then overages become a problem. Unless I can actually quantify productivity, I cannot justify it with "overages < productivity".
That said, I do have tools deployed in my department to somewhat quantify productivity. Yet it's a different set of features when labeling for 70 organizations, each with its own definition of productivity.
On the flip side, generally speaking, large variable costs commonly come with custom pipelines. If you're building bespoke agents, I'll refer to my previous points.
Maybe for most specialized models within the same scope. Not true for large thinking models, e.g. Gemini 3 usually spends double the output tokens.
Larger models tend to be a lot more persistent, resulting in "over-thinking" and hallucinations, compared to specialized models that have been trained to stop and say they "don't know".
Token usage is too broad a metric to quantify productivity. Better signals are actual logs that show when, how, and why an agent was called.
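As a sketch of what such a log might capture, here is a minimal schema; the field names are my own invention, not from any particular framework:

```typescript
// Hypothetical schema for an agent-call log entry; fields are illustrative.
interface AgentCallLog {
  timestamp: string;  // when the agent was called
  caller: string;     // what triggered it (user, CI job, cron, ...)
  reason: string;     // why it was invoked
  tokensUsed: number; // raw usage, kept as context rather than the metric
  outcome: "completed" | "escalated" | "abandoned";
}

// Completion rate over a batch of calls: a more direct productivity
// signal than total token usage.
function completionRate(logs: AgentCallLog[]): number {
  if (logs.length === 0) return 0;
  const done = logs.filter((l) => l.outcome === "completed").length;
  return done / logs.length;
}
```

Aggregating by `caller` or `reason` would then show which invocations actually pay off, which raw token counts cannot.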
So true. This gets overlooked a lot. There's an important economic cost to AI adoption that must be factored in when deploying AI. The effect of the scale required to support AI trickles down to the user. We can already see this in memory prices. Users still need the hardware to use our tools.
This is especially true for enterprises. The tech is also very much frontier, and like all frontier tech, it comes with a shifting amalgamation of services and open questions:
What will it cost in the next 6 months?
Which new frameworks will become standard?
Will the technology we've invested money and man-hours in today become obsolete tomorrow?
Will the third-party services I integrate continue to exist?
I'm a noob in this, but how is Cline different? I haven't tried it, but apart from paying for actual token usage and data privacy, don't others achieve the same output in the end? What can Cline do that Windsurf or Antigravity can't?
Windsurf/Antigravity et al. are incentivised to neuter your context window because they pay per token to upstream providers. Cline doesn't have that perverse incentive: you pay for your own tokens, so the idea is that you have full control over what the AI sees.
That study gets thrown around a lot, but it says more about enterprise adoption than whether AI actually delivers value.
The ROI claim is largely based on interviews the report itself describes as directional, not hard financial reporting, and on a broad scan of AI initiatives without clear evaluation guidelines. It's less a clean ROI study and more a snapshot of how hard it is for large enterprises to operationalize AI.
That's not surprising. Big companies move like ocean liners. Changing workflows, incentives, and governance takes time, and ROI often lags early adoption.
Timing matters too. A lot of the agent-style tooling people are discussing now didn't exist when the data for the MIT study was gathered. Cline didn't announce an enterprise offering until October 2025, and MCP-style integrations have grown a lot since then, making it easier for agents to interact with real systems rather than just generate text. That redefines what "usable" even looks like.
And finally, this is mostly an enterprise story. For smaller teams and developers, the ROI is much easier to see. When AI directly replaces human hours, especially in development, the value is often immediate and measurable.
Models have energy costs, but energy isn't a fixed input. It's a market with pricing, timing, and tradeoffs. I already sell power back to my utility today, just not at great rates. There's no reason to assume AI compute costs will always be priced in one rigid way.
What actually matters is cost per unit of useful work. As agent workflows improve, you can plan up front, route work to different models, or spin things up during low-cost periods if needed. There are a lot of levers to pull before prices have to rise, and even if prices increase dramatically, the relationship between cost and real-world ROI isn't a straight line.
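To make "cost per unit of useful work" concrete, here's a minimal sketch with deliberately made-up prices and success rates; none of these numbers come from real vendors:

```typescript
// Illustrative only: hypothetical prices and rates, not real pricing.
interface ModelProfile {
  name: string;
  usdPerMTokens: number; // cost per million output tokens (made up)
  tokensPerTask: number; // average tokens spent per task
  successRate: number;   // fraction of tasks that actually complete
}

// Cost per *successful* task. A model that is cheap per token but fails
// often can cost more per unit of useful work than a pricier one.
function costPerUsefulTask(m: ModelProfile): number {
  const costPerAttempt = (m.tokensPerTask / 1_000_000) * m.usdPerMTokens;
  return costPerAttempt / m.successRate;
}

// One routing lever: pick whichever model is cheaper per useful task.
function cheaperModel(a: ModelProfile, b: ModelProfile): ModelProfile {
  return costPerUsefulTask(a) <= costPerUsefulTask(b) ? a : b;
}
```

The point is that token price is only one term in the equation; success rate and tokens-per-task are levers too.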
One thing that's worth clarifying is that with Cline you're not locked into a single model or credit system. You can use OpenRouter to choose whatever models you want and even switch between them for planning vs. execution. Since you're paying for the tokens directly, there's no incentive for Cline to throttle context.
That openness is a big reason it's taken off. The ~6,000 forks aren't about polish; they're about flexibility. People can change models, set their own rules, decide when context shrinks and what gets dropped, separate planning from execution, hook in MCPs, run it via the CLI, and build agents to build their own agents…
That's really the difference. Windsurf and similar tools are great inside the IDE, but they're opinionated and fairly locked down by design: they make most of the decisions around context management, planning vs. execution, and guardrails for you.
Cline is built to be tweaked and extended, which is why it shows up more in agent-style workflows. Agent-style workflows let you hand off real chunks of work, not just get better autocomplete, and that's also where the major ROI difference comes from.
When the VC money stops flowing, prices could go 2-3x, and OpenAI will make a hefty profit margin by then. So actually, I think it is one of the most sound investments you can make. With 20 years of IT experience I can safely say that in an average month I do about $500,000 of market-value coding while paying about $200 for AI subscriptions. Even $600 would be dirt cheap. And that's just coding. I have many first-hand experiences where ChatGPT handed me in minutes what a health issue was, including a treatment plan. Imagine what we could save on these two domains alone. And if you have a car, take a picture of your engine and ask some questions, just for fun. You will be amazed. No, AI is dirt cheap, but it will take years for the masses to understand that. The same was true of the dotcom years: it was all air and useless. How differently do we think about that now…
About token usage: AI does make little mistakes that cause big refactors upstream. So the best way to save tokens is to spend 80% of the time discussing and fine-tuning what you want and need, and then let it code. And of course have proper guardrails. Simple example: one of my guardrails is that every script needs to include my logger.ts script (the first thing you need to build when working with AI) and call it properly, so that I get rich logs the AI can use to fix issues. Had I not done this at the start (which I didn't at the beginning), it would cost me lots of tokens now to add it to my ~30k-line code base.
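For illustration, here is a stripped-down sketch of what such a logger.ts guardrail could look like. This is my own minimal version under the assumptions above, not the commenter's actual file:

```typescript
// logger.ts: minimal structured-logger sketch. Every script imports and
// calls this, so the AI gets uniform, machine-readable logs to debug
// against. (Illustrative sketch, not the commenter's real implementation.)
type Level = "debug" | "info" | "warn" | "error";

export function log(level: Level, scope: string, msg: string, data?: unknown): string {
  const line = JSON.stringify({
    ts: new Date().toISOString(),
    level,
    scope, // which script/module emitted this line
    msg,
    data: data ?? null,
  });
  console.log(line);
  return line; // returned so callers (and tests) can inspect it
}
```

Because every script emits the same JSON shape, an agent can grep and parse the logs instead of guessing where a failure happened.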
I've integrated it into my full workflow, mostly for context-aware suggestions. The way it indexes the whole codebase is what makes it better than just copy-pasting into a browser. I noticed it handles complex stuff much better than it did six months ago, especially when it comes to maintaining consistent naming conventions across files. It still struggles with very niche libraries sometimes, but for standard Python or JS, the accuracy is pretty high. It's basically replaced my documentation searches.
The central issue here is separating AI as a fundamental capability from AI as a specific vendor implementation.
The cost exists, and it is real.
Infrastructure, memory, compute, vendor lock-in... and in the end, the user still needs capable hardware. That cost doesn't disappear. It just moves around the stack and eventually lands in someone's pocket.
In enterprise environments, this becomes even more obvious.
But hereâs the point:
Adopting AI is not a mistake. The mistake is adopting it without a defensive architecture.
The companies that will survive are not the ones betting everything on a single model, framework, or vendor. They are the ones that build abstraction layers from day one.
Yeah, tbh I don't get how anyone can justify not spending a ton of money on AI. You're literally paying for a superintelligent white-collar worker that works when you want, however you want, and doesn't get burned out. If you can't get ROI from that, then I don't know what to say.
Justifications depend on the problem being solved. You can't just throw money around because something is shiny. Hiring meatbag workers still requires complex justifications.
For example, as head of my department, I can justify subscriptions to ChatGPT Team, Notion Team, and Gamma. Then, as lead for digital transformation, I am confident in justifying that the organization upgrades to Google Workspace with AI.
ChatGPT is easily justifiable: 70% of our org of 60 uses it to support their daily work. They openly use it during meetings and brainstorming. Same for Notion and Gamma; it's apparent when I create meeting summaries with transcripts and presentations.
I can also easily justify Google Workspace with Gemini by focusing on Workspace Workflows. Demo some workflows (I have my personal Workspace account) and show how AI automation reduces redundancies.
Most importantly, these are fixed subscriptions. When integrating and deploying complex solutions where workflows need to be built and variable costs need to be considered, it's not as straightforward.
For example my sector is considering deploying multiple data and analytics pipelines through Azure (personally not a fan). We have to propose to multiple layers: internal stakeholders > sector policy office > sector digitalization office > GovTec > statutory board senior management.
The worst thing that will happen in the next few years is that employees get AI in their hands and think they can do more. It is essential to be able to ask the right questions, and this is extremely hard. Ask any very smart person and they will tell you this.
So what happens when people of mediocre intelligence or capability start doing their work with AI? Their mediocre intelligence and capabilities will grow exponentially, leaving the smart people in the dust, as they often already are. Organizations will start to make more mistakes faster, and those mistakes will be bigger mistakes.
I think youâre mixing up two different arguments.
@georgecollier isn't saying approvals are easy or that AI agents have fixed costs. He's making a simpler point: when it comes to AI in dev, the ROI should be obvious. If someone can't get ROI from AI in that situation, the issue isn't red tape or budgeting; it's the analysis.
Procurement, governance, and cost predictability are real constraints, but they affect what gets approved, not whether value exists.
And honestly, understanding that distinction is part of the job of a digital transformation lead. The role isn't just picking tools that are easy to justify; it's spotting leverage, turning it into a clear business case, and determining what work can be reduced or replaced.
Bottom line: if AI applied to dev environments can't be justified on ROI, something in the thinking is broken. Getting it approved can be hard. Seeing the value shouldn't be.
Have you actually worked in a similar role? That's just one part of the role, and that's a naive statement. The reality is that you spend more time fighting the status quo and bureaucracy.
That is beside the fact that AI is frontier tech.
As mentioned, I have already proposed similar solutions that have been deployed and are running. Now I am working on proposing scaled solutions, sweating over large budget proposals. At the very least I can use my own experiences for reflection and study.
Even if you say ROI, what defines it? Organizational needs and goals.
I already know and appreciate the value of AI and automation. I can even confidently define ROIs for the different operations that make up the day to day in my sector. But that's me, the tech guy who loves tech.
I cannot say the same for the people who have to sign off on my proposals and for the people who have to adjust their established work process (menial and inefficient as they are).
There are 70 different organisations in my sector. 70 different groups of decision makers outside of HQ. Each one with their own opinions of what returns look like.
This is where I think the conversation keeps slipping categories.
ROI isn't subjective. It's not "organizational goals" or "priorities." ROI is a simple question: does the value returned exceed the cost? An organization can choose not to pursue something with positive ROI for many reasons, but that doesn't make the ROI disappear.
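In the narrow sense used here, that check is just arithmetic; the numbers below are deliberately made up:

```typescript
// ROI in the narrow sense used here: (value returned - cost) / cost.
function roi(valueReturned: number, cost: number): number {
  return (valueReturned - cost) / cost;
}

// Hypothetical example: an agent replaces 10 hours of $100/hr work
// ($1,000 of value) for $50 in tokens. roi(1000, 50) = 19, i.e. 1900%,
// positive regardless of which KPI framework an organization layers on top.
```

How an organization frames or reports that number can vary; whether it is positive does not.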
When you say that agentic AI does not reduce the actual costs of software development, that's a claim about economics, not bureaucracy. And that's the claim I've been responding to. If agentic AI replaces more human hours than it costs, then by definition it reduces development cost, even if procurement, governance, or change management prevents adoption.
Bringing in 70 decision makers explains why adoption is difficult, which Iâve already acknowledged multiple times. It doesnât change whether the ROI exists in the first place.
Calling this "naive" also doesn't really land when I explicitly said that changing the status quo is the hard part. That's not what's under debate. The question here is whether agentic AI can be positive-ROI in development. On that narrow point, the math is straightforward.
So again, two separate things:
⢠Organizational reality can block adoption of positive-ROI tech. Agreed.
• That doesn't mean the ROI isn't there, or that agentic AI doesn't reduce real development costs.
Those shouldnât be conflated.
At this point, I've made the distinction as clearly as I can and responded in good faith several times. Unless there's a new argument about the economics themselves, I don't think there's much more to add.
You are wrong: ROI is subjective. What one organization defines as ROI depends on its needs and goals. You need to clearly define what the investment is and what the expected returns are.
Simple example for you: deploying automation stacks. A non-profit organization will not define ROI the same way as a for-profit organization.
I already shared my experience on this: what I defined as ROIs in my own research may not be the expected ROIs of the organization. Here's what usually happens in my dull old full-time job:
I present a draft proposal with some estimated ROIs to my committee.
The feedback I get will include some tweaks to the ROI to make it more presentable to HQ management.
I then have to work out a separate set of ROIs for presenting to stakeholders at the lower organizational level.
Why? Because HQ directors want to know how approving the expenditure will translate into achieving their own KPIs. Then, since the project is paid for by HQ, lower-level organizations want to know how investing the time to retrain workers will translate into better processes.
You're trying to dumb down a pretty complex business process that's been in practice since forever.
No, it doesn't. It only does if a company removes a human from the picture.
For example: I too am hustling as a bootstrapped founder while working full-time. I've always been a one-man dev team. I now use AI to increase my output, and I pay subscriptions for the AI tools. In my case, how does it reduce the actual cost of my development?
My rebuttals, historically, have always been simple: don't simplify matters that should not be simplified. There are layers to software development just as there are layers to business management, and the two are more often than not distinct parts of business ops.
You can simplify to teach and explain, but don't simplify to make a point. That said, I too make that mistake every so often.