AI Coding Tools: Short-Term Gains, Long-Term Vendor Dependency

Nobody’s really talking about what AI coding tools are training us to do. I’ve spent the last few weeks updating my indie Unity workflow to 2026 standards with Claude Code and OpenAI Codex. The productivity gains are real. But the new pipeline keeps bothering me.

The “correct” workflow is not just prompting for code. It’s building layers of supplemental files, context docs, specs, and refactor instructions so the model can keep generating, updating, and cleaning up code. And all of it runs through the meter. That is the part I cannot stop thinking about. Development starts to feel less like writing software and more like orchestrating token consumption.

You can already see where this goes. Development cost forecasts will eventually carry token usage alongside salaries. Engineering velocity will be measured partly in API consumption. Entire pipelines will be built around AI being present at every step. That is the future these companies are trying to create.

The current token pricing is obviously designed to drive adoption. Get developers and studios to rebuild their workflows around your API first. Once switching costs are real, reprice. At that point it is cheaper to stay than to rip the system back out. It is the classic platform playbook, just applied to software production itself.

To be clear, I am not anti-tool. Claude Code and OpenAI Codex are genuinely useful. I am using them myself. But people should be honest about the trade: short-term productivity gains in exchange for long-term pricing exposure to a vendor whose ideal outcome is your dependency. Know what you are signing up for.
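To make the "token usage alongside salaries" point concrete, here is a minimal back-of-the-envelope sketch. Every number in it (tokens per developer per day, blended price per million tokens, team size) is an illustrative assumption, not a quote of any vendor's actual pricing:

```python
# Back-of-the-envelope forecast of monthly token spend for a small team.
# All inputs are illustrative assumptions, not real vendor prices.

TOKENS_PER_DEV_PER_DAY = 2_000_000   # assumed: heavy agentic use, context docs included
PRICE_PER_MILLION_TOKENS = 10.00     # assumed blended input/output price, USD
WORKDAYS_PER_MONTH = 21
TEAM_SIZE = 5

monthly_tokens = TOKENS_PER_DEV_PER_DAY * WORKDAYS_PER_MONTH * TEAM_SIZE
monthly_spend = monthly_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

print(f"Monthly tokens: {monthly_tokens:,}")        # 210,000,000
print(f"Monthly API spend: ${monthly_spend:,.2f}")  # $2,100.00
```

The exact figures do not matter; the point is that the spend scales with every generate-update-cleanup loop the pipeline adds, which is exactly the pricing exposure the post describes.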

Gregor H. Max Koch, MSc in ISM

Chronos Games GmbH

5d

Are you using the CLI or the desktop app? Did you build an orchestration with RAG and MCP? Is your agent aware of the GDD, and so on?

Patrick Koorevaar

Q-Vend B.V.

6d

Just wait a bit longer. Over time the models will be good enough to run locally, and the hardware will also get better. You can already run Claude Code locally with the latest Ollama, or use Aider. I posted about how to set this up: https://www.linkedin.com/posts/patrickkoorevaar_aicoding-aider-ollama-share-7441260510265696256-rj45?utm_source=share&utm_medium=member_ios&rcm=ACoAAAL4D2YBjXab9WXUuwLGd_c1W5yPbdO1Xa8
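For reference, Ollama exposes an OpenAI-compatible endpoint, so a locally hosted model can be reached with the standard openai client. A minimal sketch, assuming Ollama is running on its default port and a model has already been pulled (the model name "llama3" is illustrative):

```python
from openai import OpenAI

# Point the standard OpenAI client at the local Ollama server.
client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
    api_key="ollama",                      # required by the client, ignored by Ollama
)

response = client.chat.completions.create(
    model="llama3",  # illustrative; use whichever model you have pulled
    messages=[{"role": "user", "content": "Explain object pooling in Unity in two sentences."}],
)
print(response.choices[0].message.content)
```

No tokens are metered here; the trade-off shifts to local hardware and model quality instead of API spend.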

Samantha Cook

Freelance

6d

As per the latest Hard Fork podcast, apparently major tech companies are rewarding the people who use the most tokens at work, and at least partially judging people's performance (positively) based on high token use. It's a hell of an expensive way to decide people are doing their jobs...

Ash R.

BlogCore app ✍ Agentic…

6d

Funny irony in this: those context docs and specs you build to keep the model on track? That's just good engineering practice that most teams never bothered with before. It took an LLM to make people write design docs again.

Roberto Bianchini

Mindgear Studio

5d

"It is the classic platform playbook, just applied to software production itself." Just the enshitification of development, like Cory Doctorow explained for social networks and other kind of SaS.

Patrick Heney

Heney LLC

4d

That's not an accident. The ruling class wants to turn life into a subscription service. Once they normalized the idea with things like games and movie services, it just snowballed. The next big step was Microsoft's push to move from Office as a product to Office 365, a service. It's been building ever since. Now the "tech bros" are building tools that spit out "serviceable" code and charging by the "token." And notice how a "token" is vaguely defined? That's deliberate. Generative AIs use tokens internally, but there was no need to carry that unit into the pricing scheme. The whole system is designed to create a constant drain on everybody's wallet and, ideally, to obfuscate the actual costs as much as possible.

The survival of the big AI companies is predicated on people buying token credits from them. When models get good enough, small enough, and fast enough to run locally without breaking the bank, the very same vendors currently building our future to align with their financial interests may well build themselves into obsolescence.

That's why I invented and open-sourced PAP: context minimization, agents that stop executing over time when ignored, and more. https://baur-software.github.io/pap/pap/

Aleksandr Bogdanov

Helium9 Games

4d

I agree with this. These tools are genuinely useful, but for many ordinary companies in most countries, the economics still do not look that straightforward. Without serious workflow optimization, some agentic pipelines can start to feel less like efficiency and more like a shift of engineering budget into API spend. For top-tier companies with very high developer costs, that trade may be easy to justify, but for many teams the picture is less clear, especially given the current limitations of the models.

Michael Dobekidis

Kaizen Gaming

3d

My main concern is what happens during an outage once, after all is said and done, we are producing 100x what we produce now. Up until now, no service disruption could completely stop code from being written; we could develop even without internet (slower, but still better than nothing). So what happens if online agents are no longer available? Enterprises need to think about local, in-house solutions as a fallback before an extended outage knocks on their door.
