We have been using Anthropic's Claude for the past few months, and the results have been impressive. The growing share of AI-generated code in our codebase has significantly reduced time spent on repetitive tasks and minimised errors. That said, leveraging Claude's full potential still involves a learning curve. For new users, here are a few things to keep in mind:

1. When asked for help with a specific part of the code, Claude tends to focus solely on that part and can overlook the broader context, which sometimes results in redundant actions or duplicated steps. When these issues are pointed out, though, Claude usually responds with valuable suggestions that improve the solution.

2. Occasionally, Claude modifies perfectly functioning code and introduces unintended bugs. To minimise this, it helps to explicitly instruct Claude to make only the minimal necessary changes.

3. Claude often adds extensive logging to help diagnose issues, which is generally useful. Over time, however, this can accumulate into excessive code changes that make debugging harder. Manage versions carefully and don't let unnecessary changes pile up.

Despite these quirks, Claude remains one of the most powerful coding tools available, and understanding its nuances can unlock even greater productivity gains. Have you encountered any other quirks while working with Claude?
Shantanu Bhattacharyya’s Post
More Relevant Posts
Finding where two nodes meet in a Binary Search Tree? 🌳 Let's uncover the logic behind the Lowest Common Ancestor! ⚡

Hey everyone! Day 281 of my 365-day coding journey took me into a classic tree problem: LeetCode's "Lowest Common Ancestor of a Binary Search Tree" (Problem 235). This problem beautifully showcases how leveraging BST properties can simplify complex tree logic. Let's break it down!

🛠️ The Problem
Given a Binary Search Tree (BST) and two nodes, p and q, find their Lowest Common Ancestor (LCA): the lowest node that has both p and q as descendants (a node counts as a descendant of itself). The key idea? Use the inherent ordering property of BSTs for efficient searching.

🎯 The Approach: BST-based LCA (Using Tree Properties)
1️⃣ Start at the root node.
2️⃣ Compare the values of p and q with the current node:
• If both p and q are smaller, move to the left subtree.
• If both are larger, move to the right subtree.
• If one is smaller and the other is larger (or one equals the current node), you've found the LCA, the point where their paths split.
This walk descends the tree until it hits the split point or one of the nodes.

🧠 Key Takeaways
- The BST property is your superpower: it turns what could be a complex traversal into a clean O(log N) search for balanced trees (O(h) in general, where h is the tree height).
- Remember: a node can be its own ancestor.
- This problem reinforces how understanding tree structure deeply leads to elegant and efficient solutions.

💡 Challenge for you!
How would your logic change if this were a general Binary Tree instead of a BST? Think about how the absence of ordering impacts your traversal! 💬

📺 Check out my video walkthrough
I break down the full BST-based LCA solution step-by-step in my latest video: https://lnkd.in/gCjyJR6J

🔥 Let's Connect!
If you're exploring data structures or tackling daily coding challenges, let's connect! Always great to learn, share, and grow together.
🚀 #CodingJourney #DSA #BinarySearchTree #LeetCode #TreeTraversal #ProblemSolving #Algorithms #DataStructures #JavaScript #Python #DeveloperLife #CodeNewbies #TechLearning #365DaysOfCode #LearningEveryDay
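The descent described above can be sketched in a few lines of Python. The TreeNode shape is the usual LeetCode one, assumed here for illustration; the second function is one common answer to the general-binary-tree challenge, included as a sketch rather than the post's own solution.

```python
class TreeNode:
    # Assumed node shape, matching the standard LeetCode definition.
    def __init__(self, val):
        self.val = val
        self.left = None
        self.right = None


def lowest_common_ancestor(root, p, q):
    """BST LCA: walk down from the root until the paths to p and q split."""
    node = root
    while node:
        if p.val < node.val and q.val < node.val:
            node = node.left    # both targets lie in the left subtree
        elif p.val > node.val and q.val > node.val:
            node = node.right   # both targets lie in the right subtree
        else:
            return node         # paths diverge here (or node is p or q)
    return None


def lca_binary_tree(root, p, q):
    """General binary tree variant: without ordering, recurse on both sides.

    A node whose left and right recursions both find a target is the LCA;
    otherwise pass up whichever side found something.
    """
    if root is None or root is p or root is q:
        return root
    left = lca_binary_tree(root.left, p, q)
    right = lca_binary_tree(root.right, p, q)
    if left and right:
        return root
    return left or right
```

The BST version visits one node per level (O(h) time, O(1) space); the general version must touch every node in the worst case, which is exactly the cost of losing the ordering property.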
The Future of Work: What Actually Changes (and What Doesn't)

I recently came across an article by Edem Kumodzi that I thought was fascinating. It asked a simple but powerful question: are technologies replacing professionals, or just changing what they do?

The article described decades of predictions that new tools would make software engineers obsolete. Yet, each time, the profession didn't disappear; it evolved. Could the same be true for other fields?

In accounting, for instance, we've moved from electronic spreadsheets → integrated accounting systems (ERP) → cloud-based automation → AI-assisted audit and tax tools. Each innovation has changed how accountants work, but not why they work. The profession adapted rather than vanished.

Many fields today are facing genuine disruption, though not necessarily obsolescence. What really matters is identifying what won't change, while learning to adapt to what will.

The article drew an important distinction between essential complexities (the inherent aspects of a profession that resist automation) and accidental complexities (the parts most likely to be automated). Perhaps our focus should be on understanding the essential parts of our work, the reasoning, judgement, and human interpretation, while embracing tools that enhance them. AI tools, like many before them, are likely to become amplifiers of human capability, not replacements.

🤔 What parts of your own profession feel truly irreplaceable, and which are already beginning to shift? Always interesting to see how these debates unfold across industries.

P.S. For those curious, look up Jevons' Paradox, an economic idea suggesting that as technology makes something more efficient and cheaper, overall use can actually increase. A reminder that innovation doesn't always simplify; it often amplifies.
I wrote my first line of code 20 years ago. In that time, I've watched:
• Visual Basic promise to democratize programming
• Low-code platforms promise to eliminate developers
• AI promise to automate all coding

The predictions are always the same. The results are always different. Why? Because every tool that was supposed to replace us actually expanded what we could build, creating more demand, not less.

I researched seven decades of these failed predictions and found a fascinating pattern that keeps repeating. https://lnkd.in/duCNaKFg
"Last year the most useful exercise for getting a feel for how good LLMs were at writing code was vibe coding (before that name had even been coined) - seeing if you could create a useful small application through prompting alone. Today I think there's a new, more ambitious and significantly more intimidating exercise: spend a day working on real production code through prompting alone, making no manual edits yourself. This doesn't mean you can't control exactly what goes into each file - you can even tell the model "update line 15 to use this instead" if you have to - but it's a great way to get more of a feel for how well the latest coding agents can wield their edit tools." 👉 And if you really want to push it further, do it by voice. Dictate your prompts, corrections, and instructions aloud — no typing at all. It forces you to think differently about code, intent, and collaboration with AI, and reveals how close we are to hands-free programming. As J.C.R. Licklider envisioned in Man-Computer Symbiosis, this is the next step toward a genuine partnership between human intuition and machine precision — where thought flows through language, not keystrokes. https://lnkd.in/e-UkFhcR
My favourite graph & article on catching software bugs: https://lnkd.in/drFnU23i One possible take-away is that we are under-investing in formal design reviews, especially in the era of AI-assisted programming. LLMs do a pretty good job of writing code, so the quality of the outcome now derives mostly from the quality of the plan. (This is a follow-up to the previous argument about the virtues of planning docs for AI-assisted programming: https://lnkd.in/dNzXVUft)
People are starting to use agentic coding tools for more than just building apps. They're more like an engineering sidekick that's always available to patiently help with any technical task.

Take, for example, a challenge Eitan Rovero-Shein recently brought to Memex: not to build an app, but to make sense of a massive dataset. 625,000 rows of data. Dozens of questions.

He started by describing what he wanted to understand. Memex helped him design the methodology, refine it multiple times, and surface better ways to see the problem. Instead of just following instructions, Memex collaborated:
↳ Challenging assumptions.
↳ Suggesting better filters.
↳ Adjusting when he changed direction.

It picked the right stack (Python + pandas), handled setup, ran the analysis locally, and produced 50+ clean CSVs. Along the way, it visualized patterns, highlighted anomalies, and helped interpret what the data meant. All executed locally.

That's what I love most about what we're building. It's not just about accelerating output; it's about amplifying reasoning. When a tool can push back, suggest alternatives, and grow with your questions, it stops being a tool and becomes a thought partner.
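To make the "625,000 rows → 50+ summary CSVs" workflow concrete, here is a minimal pandas sketch of the kind of per-question summary such an analysis might emit. The column names, values, and function name are invented for illustration; the post doesn't describe the actual dataset.

```python
import pandas as pd


def summarize_by(df: pd.DataFrame, group_col: str, value_col: str) -> pd.DataFrame:
    """One per-question summary: row counts and a numeric aggregate per group.

    An analysis like the one described would run many of these, each
    written out as its own CSV (e.g. out.to_csv(f"summary_{group_col}.csv")).
    """
    return (
        df.groupby(group_col)
          .agg(rows=(value_col, "size"), mean_value=(value_col, "mean"))
          .sort_values("rows", ascending=False)
          .reset_index()
    )
```

Everything here runs locally, which matches the post's point: the agent's value was in choosing and refining this kind of methodology, not in the few lines of pandas themselves.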
Some time back, maybe two or three months ago, everyone seemed to be glorifying Vibe Coding: just type a prompt, and AI builds an entire app within seconds. Everyone clapped. You posted. I posted. It was fun. Then people tried running those projects for real. Here's the story of what happened next.

Act 1: Weekend win. You spin up a tiny app, upload a CSV, see a chart, and it works. It looks clean, the demo lands, and everyone's excited. Perfect for proving the idea fast.

Act 2: Monday arrives. Real usage hits. Files get big, actions collide, SSO is required, and there's PII in the mix. The project now needs guardrails, not just a simple demo.

Act 3: The invisible work. This is where engineering shows up: lock schemas and run migrations so data doesn't drift, watch p95 latency and fix N+1 queries, add roles and audit logs, make retries idempotent, write unit and contract tests, ship behind flags with a rollback plan, and wire up logs, traces, and metrics. Boring to read, essential to run.

Act 4: Where Vibe Coding actually shines. Vibe is fantastic for idea testing, internal tools, data exploration, and scaffolding boring boilerplate. It speeds up seniors and gives juniors a helping hand. It is not a substitute for architecture or actual engineering.

Act 5: The new workflow. What this really means is simple. Sketch the thing fast with AI. Lock the contracts. Write the tests that matter. Add auth, rate limits, and guardrails. Watch it under load. Fix the bottlenecks. Ship with flags. Observe. Iterate.

Vibe Coding didn't replace coders. Real engineering is everything that happens after the demo works.