AI coding tools work best when they can plan, act, and iterate - not just respond. Watch Ado Kukic's talk on building agentic, autonomous workflows across the developer lifecycle and how to get the most out of Claude Code: https://lnkd.in/gv-uJF_4
Building Agentic Workflows with AI Coding Tools
-
AI coding tools are already in engineers’ hands. Claude Code and Cursor are accelerating how products get built, but in regulated environments, speed alone isn’t enough. In this session at Validated AI on Thursday, April 9, Gabriel Pascualy will explore what changes when AI is writing production code: how code review must evolve, where CI/CD needs new guardrails, and how quality teams can adapt without slowing innovation. Explore the full lineup of speakers and RSVP here: https://lnkd.in/eyjrdKye
-
Over time, experimenting with AI-assisted coding, I’ve stopped treating the process as a “magic black box that does magic things for me” and started treating it as a structured workflow. This morning I turned my latest project into a small "project template" I now intend to use for every new AI‑assisted trinket I make. It's a simple approach to keeping ideas, plans, design notes, and reference documentation consistent. This is what I find really helps Future Me (and Future AI collaborator) go back to any project, pick up where we left off, and keep shipping. If you’re building with Claude Code (or similar) and find every project drifts into chaos over time, this might be useful (or at least a good “version 1” to argue with). Full post and repo link here: https://lnkd.in/eqf3HuUw
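For illustration, a minimal sketch of how such a template might be scaffolded. The directory and file names here (ideas/, plans/, design/, reference/, CLAUDE.md) are assumptions for the example, not the layout of the linked repo:

```python
from pathlib import Path

# Hypothetical layout -- the linked repo's actual structure may differ.
TEMPLATE = {
    "ideas/BACKLOG.md": "# Ideas\n",
    "plans/PLAN.md": "# Current plan\n",
    "design/NOTES.md": "# Design notes\n",
    "reference/LINKS.md": "# Reference docs\n",
    "CLAUDE.md": "# Project context for the AI collaborator\n",
}

def scaffold(root: str) -> None:
    """Create the template files so every new project starts consistent."""
    for rel_path, contents in TEMPLATE.items():
        target = Path(root) / rel_path
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(contents)

scaffold("my-new-trinket")
```

The point is less the script than the habit: every project gets the same slots for plans and notes, so both Future Me and the agent know where to look.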
-
'AI coding tools are now the default': Top engineering teams double their output as nearly two-thirds of code production shifts to AI generation — and could reach 90% within a year https://flip.it/nFYJaz
-
Vibe coding is a simple idea: describe what needs to be built in plain language, let AI generate the working code, then refine it inside an IDE. This video breaks down what vibe coding actually is (and what it is not), then walks through setting up VS Code.
Vibe Coding Explained: Build a Website Using AI in VS Code (No Coding Needed)
https://www.youtube.com/
-
AI coding tools are getting smarter every week. They reason better. They write better code. They can even restructure systems. So vibe coding is everywhere right now. Founders are shipping demos fast. Teams are building prototypes in days. But smarter tools don’t remove one very human problem. The things that never made it into the prompt. Scaling assumptions. Future integrations. Edge cases you haven’t discovered yet. AI can reason deeply about what it sees. Your blind spots still sit outside that window. We explored this idea further in a short read. If this carousel made you think, the full piece adds a few layers to it 👇 https://lnkd.in/gMMTACUQ Because sometimes the biggest risk in a product isn’t bad code. It’s the question nobody thought to ask.
-
If you missed last week's webinar on AI coding agents, here's what you need to know. Krzysztof Wróbel (our CTO) built a full-stack internal tool almost entirely with Claude Code. He and Paweł Gawliczek spent an hour breaking down the full process: requirements, architecture, implementation, QA, deployment, and where the wheels might come off along the way. A few things came up that we hadn't seen talked about much:
▪️ The teams that struggle most with AI coding tools aren't the ones who use them too little. They're the ones who use them without engineering discipline, and then spend weeks untangling the results.
▪️ Context management is a skill. Dumping 100 documents into an agent doesn't make it smarter. Knowing what to feed it, and when, is most of the work.
▪️ Paweł built his own Jira-like MCP server because he kept losing context between sessions. Practical problem, practical fix.
You can find the full recording here: https://lnkd.in/dsFB6vcG
How to use AI coding agents without losing engineering standards? | CodiLime
https://www.youtube.com/
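A toy illustration of the "knowing what to feed it" point: instead of dumping everything into the agent, rank candidate documents by relevance to the task and keep only the top few. Real setups would use embeddings or an MCP server; the scoring here is deliberately naive and all names are hypothetical.

```python
def select_context(task: str, docs: dict[str, str], k: int = 3) -> list[str]:
    """Rank documents by naive keyword overlap with the task; keep the top k."""
    task_words = set(task.lower().split())

    def score(text: str) -> int:
        # Count how many task words appear in the document.
        return len(task_words & set(text.lower().split()))

    ranked = sorted(docs, key=lambda name: score(docs[name]), reverse=True)
    return ranked[:k]

docs = {
    "auth.md": "login session token auth flow",
    "billing.md": "invoice payment stripe billing",
    "deploy.md": "kubernetes deploy rollout pipeline",
}
print(select_context("fix the login token refresh bug", docs, k=1))  # ['auth.md']
```

Even this crude filter captures the discipline: the agent sees one relevant page instead of a hundred marginal ones.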
-
Ralph loops and autonomous coding agents are all the rage right now. Here are three quick thoughts after building and using one myself across two projects. For the uninitiated: a Ralph loop calls an AI agent repeatedly, making incremental progress each iteration until the task is done. 1. Most tools already do this natively. Ask Claude Code to fix tests and it'll alternate between running tests and patching code on its own. The original case for Ralph loops (forcing more "complete" output by re-running the same prompt) is mostly moot. 2. Short iterations help with context drift. Progressing through a plan step-by-step in different sessions works well. Focused, stateless sessions are less likely to go off-script than one long, accumulating task. 3. Shorter sessions are cheaper. Every prompt in a long session includes the full prior history. Breaking a task into many clean mini-sessions cuts token usage significantly. --- So is it worth using? For me it is, but for reasons I didn't expect. How about you? #AI #CodingAgents #SoftwareEngineering #ClaudeCode #DeveloperProductivity #EngineeringLeadership
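Stripped to its core, a Ralph loop is just this (a sketch: `agent_step` stands in for an actual CLI invocation such as `claude -p "<prompt>"`, and the names are illustrative, not from any real harness):

```python
def ralph_loop(agent_step, done_check, max_iters: int = 10) -> int:
    """Call agent_step repeatedly until done_check passes.

    Each call is a fresh, stateless session: the agent sees the prompt and
    the current repo state, not the prior transcript, which keeps context
    drift (and token cost) from accumulating.
    """
    for i in range(1, max_iters + 1):
        agent_step()      # e.g. subprocess.run(["claude", "-p", prompt])
        if done_check():  # e.g. "do the tests pass now?"
            return i
    raise RuntimeError(f"no convergence after {max_iters} iterations")

# Simulated agent that needs three iterations to finish the task.
progress = {"steps": 0}
iters = ralph_loop(
    agent_step=lambda: progress.update(steps=progress["steps"] + 1),
    done_check=lambda: progress["steps"] >= 3,
)
print(iters)  # 3
```

The `max_iters` cap matters in practice: without it, an agent stuck on an impossible task will happily burn tokens forever.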
-
OpenAI built Codex: roughly 1 million lines of code across 1,500 PRs, zero manually written. But before they got there, they were spending 20% of every week just cleaning up AI slop. Their solution: "harness engineering" - shaping the environment around coding agents so they can act reliably. In the latest episode of Fragmented, we break down the two sides of harness engineering.
A. Shaping the harness, and the 5 themes that matter:
1. Agent legibility — making the repo navigable by agents, not just humans
2. Closed feedback loops — giving agents the tooling to see why things broke
3. Persistent memory — so agents stop relearning the same lessons
4. Entropy control — because agent-generated code accelerates codebase disorder
5. Blast radius controls — scoped permissions and approval gates
B. Building the harness: when generic tools stop being enough. Stripe forked Goose, built custom agents, and now ships 1,000+ PRs a week. What does building a custom Claude Code look like for your team? Open Code gives you an open-source starting point.
Incremental gains in AI coding don't always come from smarter models; they come from better scaffolding around the model you already have. That and more in the new episode. https://lnkd.in/gKn42ZJt #AIEngineering #HarnessEngineering
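To make theme 5 concrete, here is a minimal sketch of a blast-radius control: a path-scoped policy that decides whether a proposed agent edit is allowed outright, needs human approval, or is out of bounds. The subtree names and the three-way verdict are assumptions for the example, not anything from the episode.

```python
from pathlib import PurePosixPath

# Hypothetical policy: the agent may only touch these subtrees.
ALLOWED_ROOTS = ("src/", "tests/")
# High-blast-radius code gets an approval gate instead of a free pass.
NEEDS_APPROVAL = ("src/payments/",)

def gate(path: str) -> str:
    """Return 'allow', 'ask', or 'deny' for a proposed agent edit."""
    p = PurePosixPath(path).as_posix()
    if any(p.startswith(root) for root in NEEDS_APPROVAL):
        return "ask"    # route to a human before the edit lands
    if any(p.startswith(root) for root in ALLOWED_ROOTS):
        return "allow"
    return "deny"       # everything else is outside the agent's scope

print(gate("src/utils/strings.py"))      # allow
print(gate("src/payments/charge.py"))    # ask
print(gate(".github/workflows/ci.yml"))  # deny
```

The design choice is the ordering: approval checks run before the general allowlist, so sensitive subtrees can't be reached through a broader permission.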
-
This is the real work teams need to put in to get the best results from their AI coding agents. Listen to our latest episode on Harness Engineering.
-
The testing crisis nobody's talking about... Teams are shipping 10x more code with AI coding agents. But testing hasn't changed at all. You're still writing tests by hand. Still maintaining brittle selectors. Still playing whack-a-mole with flaky CI. Meanwhile, AI-generated code is piling up with zero verification that it actually works. I've been thinking about this problem obsessively for the last few months. We're building something different. More soon.