Mastering Coding Challenges

Explore top LinkedIn content from expert professionals.

  • View profile for Andrew Ng

    DeepLearning.AI, AI Fund and AI Aspire

    2,404,684 followers

    Last week, I described four design patterns for AI agentic workflows that I believe will drive significant progress: Reflection, Tool use, Planning and Multi-agent collaboration. Instead of having an LLM generate its final output directly, an agentic workflow prompts the LLM multiple times, giving it opportunities to build step by step to higher-quality output.

    Here, I'd like to discuss Reflection. It's relatively quick to implement, and I've seen it lead to surprising performance gains. You may have had the experience of prompting ChatGPT/Claude/Gemini, receiving unsatisfactory output, delivering critical feedback to help the LLM improve its response, and then getting a better response. What if you automate the step of delivering critical feedback, so the model automatically criticizes its own output and improves its response? This is the crux of Reflection.

    Take the task of asking an LLM to write code. We can prompt it to generate the desired code directly to carry out some task X. Then, we can prompt it to reflect on its own output, perhaps as follows:

    "Here's code intended for task X: [previously generated code] Check the code carefully for correctness, style, and efficiency, and give constructive criticism for how to improve it."

    Sometimes this causes the LLM to spot problems and come up with constructive suggestions. Next, we can prompt the LLM with context including (i) the previously generated code and (ii) the constructive feedback, and ask it to use the feedback to rewrite the code. This can lead to a better response. Repeating the criticism/rewrite process might yield further improvements. This self-reflection process allows the LLM to spot gaps and improve its output on a variety of tasks including producing code, writing text, and answering questions.

    And we can go beyond self-reflection by giving the LLM tools that help evaluate its output; for example, running its code through a few unit tests to check whether it generates correct results on test cases, or searching the web to double-check text output. Then it can reflect on any errors it found and come up with ideas for improvement.

    Further, we can implement Reflection using a multi-agent framework. I've found it convenient to create two agents, one prompted to generate good outputs and the other prompted to give constructive criticism of the first agent's output. The resulting discussion between the two agents leads to improved responses.

    Reflection is a relatively basic type of agentic workflow, but I've been delighted by how much it improved my applications' results. If you're interested in learning more about reflection, I recommend:
    - Self-Refine: Iterative Refinement with Self-Feedback, by Madaan et al. (2023)
    - Reflexion: Language Agents with Verbal Reinforcement Learning, by Shinn et al. (2023)
    - CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing, by Gou et al. (2024)

    [Original text: https://lnkd.in/g4bTuWtU ]
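The generate → critique → rewrite loop described above can be sketched in a few lines of Python. The `llm` function below is a placeholder (an assumption, not a real API) that returns canned strings so the sketch runs end to end; swap in a real chat-completion client to make it do actual work.

```python
def llm(prompt: str) -> str:
    """Stand-in for a real LLM call (replace with your API client)."""
    # Canned responses so the sketch runs without an API key.
    if "constructive criticism" in prompt:
        return "The function does not handle an empty list; add a guard."
    if "rewrite" in prompt:
        return "def mean(xs):\n    return sum(xs) / len(xs) if xs else 0.0"
    return "def mean(xs):\n    return sum(xs) / len(xs)"

def reflect(task: str, rounds: int = 2) -> str:
    """Generate code, then repeat the critique/rewrite cycle."""
    output = llm(f"Write Python code for this task: {task}")
    for _ in range(rounds):
        critique = llm(
            f"Here is code intended for the task '{task}':\n{output}\n"
            "Check it carefully for correctness, style, and efficiency, "
            "and give constructive criticism for how to improve it."
        )
        output = llm(
            f"Task: {task}\nPrevious code:\n{output}\n"
            f"Feedback:\n{critique}\n"
            "Please rewrite the code using this feedback."
        )
    return output

print(reflect("compute the mean of a list"))
```

With a real model behind `llm`, each round feeds the previous code plus its critique back in, which is exactly the automated-feedback step the post describes.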

  • View profile for Rajya Vardhan Mishra

    Engineering Leader @ Google | Mentored 300+ Software Engineers | Building high-performance teams | Tech Speaker | Led $1B+ programs | Cornell University | Lifelong learner driven by optimism & growth mindset

    111,555 followers

    In the last 15 years, I have interviewed 800+ Software Engineers across Google, Paytm, Amazon & various startups. Here are the most actionable tips I can give you on how to approach solving coding problems in interviews. (My DMs are always flooded with this particular question.)

    1. Use a Heap for K Elements
       - When finding the top K largest or smallest elements, heaps are your best tool.
       - They efficiently handle priority-based problems with O(log K) operations.
       - Example: Find the 3 largest numbers in an array.

    2. Binary Search or Two Pointers for Sorted Inputs
       - Sorted arrays often point to Binary Search or Two Pointer techniques.
       - These methods drastically reduce time complexity to O(log n) or O(n).
       - Example: Find two numbers in a sorted array that add up to a target.

    3. Backtracking
       - Use Backtracking to explore all combinations or permutations.
       - It's great for generating subsets or solving puzzles.
       - Example: Generate all possible subsets of a given set.

    4. BFS or DFS for Trees and Graphs
       - Trees and graphs are often solved using BFS for shortest paths or DFS for traversals.
       - BFS is best for level-order traversal, while DFS is useful for exploring paths.
       - Example: Find the shortest path in a graph.

    5. Convert Recursion to Iteration with a Stack
       - Recursive algorithms can be converted to iterative ones using a stack.
       - This approach provides more control over memory and avoids stack overflow.
       - Example: Iterative in-order traversal of a binary tree.

    6. Optimize Arrays with HashMaps or Sorting
       - Replace nested loops with HashMaps for O(n) solutions or sorting for O(n log n).
       - HashMaps are perfect for lookups, while sorting simplifies comparisons.
       - Example: Find duplicates in an array.

    7. Use Dynamic Programming for Optimization Problems
       - DP breaks problems into smaller overlapping sub-problems for optimization.
       - It's often used for maximization, minimization, or counting paths.
       - Example: Solve the 0/1 knapsack problem.

    8. HashMap or Trie for Common Substrings
       - Use HashMaps or Tries for substring searches and prefix matching.
       - They efficiently handle string patterns and reduce redundant checks.
       - Example: Find the longest common prefix among multiple strings.

    9. Trie for String Search and Manipulation
       - Tries store strings in a tree-like structure, enabling fast lookups.
       - They're ideal for autocomplete or spell-check features.
       - Example: Implement an autocomplete system.

    10. Fast and Slow Pointers for Linked Lists
       - Use two pointers moving at different speeds to detect cycles or find midpoints.
       - This approach avoids extra memory usage and works in O(n) time.
       - Example: Detect if a linked list has a loop.

    💡 Save this for your next interview prep!
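Two of the tips above are short enough to show directly. This Python sketch uses the standard library `heapq` for tip 1 and a hand-rolled two-pointer scan for tip 2; the input arrays are made up for illustration.

```python
import heapq

# Tip 1: find the 3 largest numbers with a heap.
# heapq.nlargest keeps a size-K heap internally, O(n log K) overall.
nums = [5, 1, 9, 3, 7, 8, 2]
print(heapq.nlargest(3, nums))  # [9, 8, 7]

# Tip 2: two pointers on a sorted array to find a target sum in O(n).
def pair_with_sum(sorted_nums, target):
    lo, hi = 0, len(sorted_nums) - 1
    while lo < hi:
        s = sorted_nums[lo] + sorted_nums[hi]
        if s == target:
            return sorted_nums[lo], sorted_nums[hi]
        if s < target:
            lo += 1   # sum too small: advance the left pointer
        else:
            hi -= 1   # sum too big: pull in the right pointer
    return None

print(pair_with_sum([1, 2, 4, 7, 11], 9))  # (2, 7)
```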

  • View profile for Akash Keshri

    Worked @upGrad, @ByteXL, @HackerEarth | IIITian | Expert @Codeforces | Knight @Leetcode | Educator @CodingNinjas | Speaker | LinkedIn Top 200 India | DM For Collaboration

    78,623 followers

    Clean code is nice. But scalable architecture? That's what makes you irreplaceable. Early in my journey, I thought "writing clean code" was enough… Until systems scaled. Teams grew. Bugs multiplied. That's when I discovered Design Patterns, and things started making sense. Here's a simple breakdown that can save you hundreds of hours of confusion.

    🔷 Creational Patterns: Master Object Creation
    These patterns handle how objects are created. Perfect when you want flexibility, reusability, and less tight coupling.
    💡 Use these when:
    - You want only one instance (Singleton)
    - You need blueprints to build complex objects step-by-step (Builder)
    - You want to switch object types at runtime (Factory, Abstract Factory)
    - You want to duplicate existing objects efficiently (Prototype)

    🔷 Structural Patterns: Organise the Chaos
    Think of this as the architecture layer. These patterns help you compose and structure code efficiently.
    💡 Use these when:
    - You're bridging mismatched interfaces (Adapter)
    - You want to wrap and enhance existing objects (Decorator)
    - You need to simplify a complex system into one entry point (Facade)
    - You're building object trees (Composite)
    - You want memory optimization (Flyweight)
    - You want to control access (Proxy) or decouple an abstraction from its implementation (Bridge)

    🔷 Behavioural Patterns: Handle Interactions & Responsibilities
    These deal with how objects interact and share responsibilities. It's about communication, delegation, and dynamic behavior.
    💡 Use these when:
    - You want to notify multiple observers of changes (Observer)
    - You're navigating through collections (Iterator)
    - You want to encapsulate operations or algorithms (Command, Strategy)
    - You need undo/redo functionality (Memento)
    - You need to manage state transitions (State)
    - You're passing tasks down a chain (Chain of Responsibility)

    📌 Whether you're preparing for interviews or trying to scale your application, understanding these 3 categories is a must:
    🔹 Creational → Creating Objects
    🔹 Structural → Assembling Objects
    🔹 Behavioral → Object Interaction & Responsibilities

    Mastering these gives you a mental map to write scalable, reusable, and testable code. It's not about memorising them, it's about knowing when and why to use them.

    #softwareengineering #systemdesign #linkedintech #sde #connections #networking LinkedIn LinkedIn News India
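As a concrete taste of the behavioural category, here is a minimal Observer sketch in Python. The `Subject` class and the stock-alert scenario are illustrative inventions, not from the original post.

```python
# Observer pattern: a subject keeps a list of subscribers and
# notifies each of them when something changes.

class Subject:
    def __init__(self):
        self._observers = []

    def subscribe(self, callback):
        """Register an observer; any callable taking one event argument."""
        self._observers.append(callback)

    def notify(self, event):
        """Push the event to every registered observer, in order."""
        for callback in self._observers:
            callback(event)

events = []
stock = Subject()
stock.subscribe(lambda e: events.append(f"alert: {e}"))
stock.subscribe(lambda e: events.append(f"log: {e}"))
stock.notify("price dropped")
print(events)  # ['alert: price dropped', 'log: price dropped']
```

The point of the pattern is that `Subject` never needs to know who is listening; adding a third subscriber requires no change to the notifying code.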

  • View profile for Armand Ruiz

    building AI systems

    204,370 followers

    Meta just dropped a new kind of code model, and it's not just bigger. It's different. The new Code World Model (CWM), a 32B-parameter LLM for code generation, is not "just another code model." What makes it different? CWM was trained not only on code, but on what code does at runtime. Most LLMs learn code like they learn prose: predict the next token. CWM learns code like developers do: by simulating its execution.

    This shift is critical because:
    - When humans debug or write code, we think in terms of state changes, side effects, and what happens next.
    - CWM learns from execution traces of Python functions and agentic behaviors in Dockerized Bash environments. It doesn't just guess the next line; it reasons like it's living inside the terminal.

    This unlocks:
    - Stronger reasoning in multi-step problems
    - Simulation-based debugging
    - More accurate code generation in real-world workflows
    - Potential for autonomous "neural debuggers" that think in traces, not just tokens

    On benchmarks, it's already competitive:
    - 68.6% on LiveCodeBench v5
    - 76% on AIME 2024
    - 65.8% on SWE-bench Verified

    And it's open weights. Meta is betting that world modeling + RL fine-tuning is the next frontier for coding LLMs, not just scale. Is this a glimpse of what post-token-prediction AI looks like?

    Get started with the links below:
    - Tech Report: https://lnkd.in/eV7YirjC
    - Model Weights: https://lnkd.in/e2CTzsxr
    - On Huggingface: https://lnkd.in/e_S4R-P4
    - Inference Code: https://lnkd.in/eVHeW8VV
    ___
    If you like this content and it resonates, follow me Armand Ruiz for more like it.

  • View profile for Navveen Balani

    LinkedIn Top Voice | Google Cloud Fellow | Chair - Standards Working Group @ Green Software Foundation | Driving Sustainable AI Innovation & Specification | Award-winning Author | Let's Build a Responsible Future

    12,099 followers

    From Code Generation to System Integration: Why AI Coding Tools and Agentic IDEs Must Evolve to Solve Real Software Development Challenges

    Since GPT-3 went mainstream, AI coding tools have sprinted through three waves.
    1. First came smart autocomplete.
    2. Then came cloud companions tuned to specific stacks.
    3. Now we're in the agent wave – tools that read whole repos, open terminals, run tests and raise pull requests on their own.

    Every cycle starts the same way: Wow. Impressive. Look at how much this can do for me. But the uncomfortable truth is this: most of what these tools automate is commodity knowledge. Framework boilerplate, CRUD patterns, standard integration glue, typical test shapes – once a pattern exists in public code, a model can learn it and repeat it very well. That used to feel like expertise. Now it's autocomplete on steroids.

    The real problems have barely moved:
    • Design and architecture. Not just file-by-file edits, but coherent system design: boundaries, contracts, data flows, failure modes, performance budgets – a holistic solution, not local patchwork.
    • End-to-end SDLC integration. How change actually flows from idea to production: design, review, CI, approvals, environments, rollout strategies and on-call ownership.
    • Change management and legacy transformation. How to evolve decade-old systems, untangle hidden dependencies, migrate behaviour safely and avoid breaking everything that still quietly depends on "that old module".
    • Traceability. Knowing who or what changed what, why, and what else was impacted – across code, configs, data pipelines and policies.
    • How strongly workflows enforce the top 10 principles like reliability, security, cost and maintainability that were outlined in the earlier post – not as posters on a wall, but as gates every change must pass through.

    This is where vibe-coding tools become dangerous. The model writes the feature, generates the tests, explains the diff. Everything looks green. It feels safe enough to ship on vibe. Without deep expertise and a solid workflow around it, that is not productivity. It is an efficient way to inject new risk into a live system.

    If code patterns are now cheap, differentiation shifts somewhere else:
    • To how clearly an organisation defines how systems should be built and evolved
    • To how tightly AI tools are integrated with that SDLC, not just with the editor
    • To how well workflows embody design principles, change discipline and traceability by default

    Writing code is becoming a commodity. However, writing holistic, thoughtful systems, and continuously evolving and governing them safely, is where the true value lies. AI coding copilots and agentic IDEs now need to evolve from "look what I can generate" to "look how I help you integrate, operate and transform". That's when it stops being "wow, impressive demo" and becomes "yes – this is finally solving the real problem."

  • View profile for Satyam Jyottsana Gargee

    Software engineer | AI & Tech | LinkedIn Top Voice 2025 | Ex-Microsoft | walmart | 260k+ community | Featured on Time Square | Josh Talk speaker

    210,097 followers

    𝐇𝐨𝐰 𝐦𝐮𝐜𝐡 𝐃𝐒𝐀 𝐢𝐬 𝐞𝐧𝐨𝐮𝐠𝐡 𝐭𝐨 𝐜𝐫𝐚𝐜𝐤 𝐌𝐢𝐜𝐫𝐨𝐬𝐨𝐟𝐭, 𝐆𝐨𝐨𝐠𝐥𝐞 𝐨𝐫 𝐖𝐚𝐥𝐦𝐚𝐫𝐭?

    This is the most common DM I get from juniors: "Ma'am, I've solved 300+ questions but still can't solve new ones. How many do I really need to do?"

    When I started, I had the same doubt. Some seniors said 300 questions, others said 500+ to be safe. So I rushed to hit those numbers. But here's the truth: it's not about the count, it's about the patterns. Once you master patterns, every new problem feels familiar.

    Here are the 15 patterns you must know for placements:
    1. 𝐓𝐰𝐨 𝐏𝐨𝐢𝐧𝐭𝐞𝐫 𝐓𝐞𝐜𝐡𝐧𝐢𝐪𝐮𝐞 – Solve pair/relationship problems in arrays/linked lists.
    2. 𝐒𝐥𝐢𝐝𝐢𝐧𝐠 𝐖𝐢𝐧𝐝𝐨𝐰 – Efficiently handle subarray/substring problems.
    3. 𝐇𝐚𝐬𝐡𝐢𝐧𝐠 / 𝐅𝐫𝐞𝐪𝐮𝐞𝐧𝐜𝐲 𝐂𝐨𝐮𝐧𝐭𝐢𝐧𝐠 – O(1) lookups for counts, duplicates, mapping.
    4. 𝐏𝐫𝐞𝐟𝐢𝐱 𝐒𝐮𝐦𝐬 – Answer range queries fast.
    5. 𝐁𝐢𝐧𝐚𝐫𝐲 𝐒𝐞𝐚𝐫𝐜𝐡 (𝐚𝐧𝐝 𝐯𝐚𝐫𝐢𝐚𝐧𝐭𝐬) – For sorted arrays or monotonic conditions.
    6. 𝐆𝐫𝐞𝐞𝐝𝐲 – Local choices that lead to global solutions.
    7. 𝐃𝐲𝐧𝐚𝐦𝐢𝐜 𝐏𝐫𝐨𝐠𝐫𝐚𝐦𝐦𝐢𝐧𝐠 – Break down overlapping subproblems.
    8. 𝐁𝐚𝐜𝐤𝐭𝐫𝐚𝐜𝐤𝐢𝐧𝐠 – Explore all possibilities (subsets, permutations).
    9. 𝐁𝐅𝐒 – Shortest paths, level-by-level traversals.
    10. 𝐃𝐅𝐒 – Explore all paths, detect cycles.
    11. 𝐇𝐞𝐚𝐩 / 𝐓𝐨𝐩-𝐊 – Manage largest/smallest efficiently.
    12. 𝐌𝐞𝐫𝐠𝐞 𝐈𝐧𝐭𝐞𝐫𝐯𝐚𝐥𝐬 – Handle overlaps in schedules.
    13. 𝐔𝐧𝐢𝐨𝐧-𝐅𝐢𝐧𝐝 – Manage connectivity in graphs.
    14. 𝐓𝐫𝐢𝐞 – Prefix-based search and storage.
    15. 𝐌𝐨𝐧𝐨𝐭𝐨𝐧𝐢𝐜 𝐒𝐭𝐚𝐜𝐤 / 𝐐𝐮𝐞𝐮𝐞 – Solve next/previous greater/smaller problems.

    And remember, DSA is not a sprint, it is a marathon. Rejections will happen, and that is normal. But every attempt makes you sharper, and every failure teaches you a pattern in life too.

    #DSA #Placements #CodingInterviews #ProblemSolving #CareerAdvice
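To make "patterns over count" concrete, here is pattern 2 (Sliding Window) applied to a classic placement question, longest substring without repeating characters; a minimal Python sketch.

```python
def longest_unique_substring(s: str) -> int:
    """Length of the longest substring of s with no repeated characters."""
    last_seen = {}      # char -> index of its most recent occurrence
    start = best = 0    # window start and best length so far
    for i, ch in enumerate(s):
        if ch in last_seen and last_seen[ch] >= start:
            start = last_seen[ch] + 1   # shrink window past the repeat
        last_seen[ch] = i
        best = max(best, i - start + 1)
    return best

print(longest_unique_substring("abcabcbb"))  # 3, for "abc"
```

The window only ever moves forward, so the whole string is scanned once: O(n) time, versus O(n²) or worse for the brute-force check of every substring.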

  • View profile for Parikh Jain

    Founder @ ProPeers | Ex-Amazon | Ex-Founding Member at Coding Ninjas | Youtuber(80k+) | DTU

    172,662 followers

    From leaving TCS in Jan 2025… to landing an SDE-1 (L59) offer from Microsoft India in April 2025… This is the story of one of my mentees. (He's asked me not to tag him, but I'm sharing his full DSA + LeetCode strategy because his approach fixed what most candidates get wrong.) Most students struggle with DSA, not because their logic is weak, but because their approach is broken.

    ✅ Step 1: Choose Your Language Based on Speed
    Don't waste time debating Java vs C++ vs Python. Pick what you're fastest at when stressed.
    Tip: If you're solving string-heavy questions, Python will save time.

    ✅ Step 2: Learn Pattern-Wise
    This was the biggest game-changer. Instead of solving problems randomly, he studied DSA patterns:
    → Sliding Window
    → Two Pointers
    → Hashing
    → Recursion + DP
    → Backtracking
    → Binary Search on Answers
    → Monotonic Stack / Queue
    Every new question became easier to recognize, not just solve.
    👉 Here's my pattern-based learning guide he followed: https://bit.ly/3SmlAzr
    More on learning DSA pattern-wise here: https://lnkd.in/dkW_CkH5

    ✅ Step 3: Get Solid on Time + Space
    You're expected to:
    — Estimate time/space complexity fast
    — Start with brute force, then optimize
    — Know tradeoffs (code clarity vs performance)
    📌 Before interviews, revise:
    — Time/space of all core data structures
    — Big-O notations (keep a printed cheat sheet)

    ✅ Step 4: Master the Fundamentals First
    Don't rush to DP or Graphs if your basics aren't solid. He spent his first 4 weeks on:
    — Arrays
    — LinkedLists
    — HashMaps
    — Stack/Queue operations
    — Custom implementations like LRU Cache, HashMap
    📌 These showed up in his interviews at Microsoft.

    ✅ Step 5: Use a Clear, Ordered Roadmap
    His DSA roadmap looked like this:
    1. Arrays, LinkedLists, HashMaps
    2. Searching + Sorting
    3. Trees (Pre, In, Post order)
    4. Graphs, Heaps, Backtracking
    5. Recursion
    6. DP, Tries, Union-Find
    He didn't jump between the dozens of DSA sheets out there.

    ✅ Step 6: Practice With Depth
    — 10 problems done deeply >> 50 done shallowly
    — Always ask: "What pattern does this question follow?"
    Examples:
    — Max sum of subarray? → Sliding Window
    — Repeated sub-problems? → Try Recursion → then DP
    — Multiple paths? → Backtracking

    ✅ Step 7: Interview Like a Problem Solver
    — Start with brute force
    — Explain your thinking out loud
    — Clarify the question before writing code
    — Know your edge cases (nulls, empty inputs, one element)

    Your goal isn't to become a LeetCode machine. It's to become the person who knows how to think, debug, and deliver when it matters. And that's what this offer from Microsoft proved.
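Step 4 mentions custom implementations like an LRU Cache. Here is a compact Python sketch built on `collections.OrderedDict`; this is one common approach, though interviewers may also expect the hand-rolled HashMap + doubly linked list version.

```python
from collections import OrderedDict

class LRUCache:
    """Fixed-capacity cache that evicts the least recently used entry."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()   # insertion order doubles as recency order

    def get(self, key):
        if key not in self.data:
            return -1
        self.data.move_to_end(key)          # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict least recently used

cache = LRUCache(2)
cache.put(1, "a")
cache.put(2, "b")
cache.get(1)          # touch key 1 so it becomes most recent
cache.put(3, "c")     # capacity exceeded: evicts key 2
print(cache.get(2))   # -1 (evicted)
print(cache.get(1))   # a
```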

  • View profile for Deeksha Pandey

    SWE III at Google | Building scalable AI systems | Tech Creator | Open to collaborate

    253,208 followers

    Top 5 Must-Know DSA Patterns👇🏻👇🏻

    DSA problems often follow recurring patterns. Mastering these patterns can make problem-solving more efficient and help you ace coding interviews. Here's a quick breakdown:

    1. Sliding Window
    • Use Case: Solves problems involving contiguous subarrays or substrings.
    • Key Idea: Slide a window over the data to dynamically track subsets.
    • Examples: Maximum sum of subarray of size k; longest substring without repeating characters.

    2. Two Pointers
    • Use Case: Optimizes array problems involving pairs or triplets of elements.
    • Key Idea: Use two pointers to traverse from opposite ends or incrementally.
    • Examples: Pair with target sum in a sorted array; trapping rainwater problem.

    3. Binary Search
    • Use Case: Efficiently solves problems with sorted data or requiring optimization.
    • Key Idea: Repeatedly halve the search space to narrow down the solution.
    • Examples: Find an element in a sorted array; search in a rotated sorted array.

    4. Dynamic Programming (DP)
    • Use Case: Handles problems with overlapping subproblems and optimal substructure.
    • Key Idea: Build solutions iteratively using a table to store intermediate results.
    • Examples: 0/1 Knapsack problem; longest common subsequence.

    5. Backtracking
    • Use Case: Solves problems involving all possible combinations, subsets, or arrangements.
    • Key Idea: Incrementally build solutions and backtrack when a condition is not met.
    • Examples: N-Queens problem; Sudoku solver.

    Why These Patterns? By focusing on patterns, you can identify the right approach quickly, saving time and improving efficiency in problem-solving.
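As one worked example of pattern 4, here is longest common subsequence built bottom-up with a table of intermediate results, exactly the key idea described above; a minimal Python sketch.

```python
def lcs(a: str, b: str) -> int:
    """Length of the longest common subsequence of a and b."""
    # dp[i][j] = LCS length of a[:i] and b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                # Matching characters extend the best previous subsequence.
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                # Otherwise take the better of skipping one char from a or b.
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

print(lcs("abcde", "ace"))  # 3, for "ace"
```

The table is the point: each cell is an overlapping subproblem solved once, turning an exponential recursion into O(len(a) × len(b)) work.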

  • View profile for Ryan Mitchell

    O'Reilly / Wiley Author | LinkedIn Learning Instructor | Principal Software Engineer @ GLG

    30,077 followers

    I've been working on a massive prompt that extracts structured data from unstructured text. It's effectively a program, developed over the course of weeks, in plain English. Each instruction is precise. The output format is strict. The logic flows. It should Just Work™. And the model? Ignores large swaths of it. Not randomly, but consistently and stubbornly. This isn't a "program," it's a probability engine with auto-complete.

    This is because LLMs don't "read" like we do, or execute prompts like a program does. They run everything through the "attention mechanism," which mathematically weighs which tokens matter in relation to others. Technically speaking: Each token is transformed into a query, key, and value vector. The model calculates dot products between the query vector and all key vectors to assign weights. Basically: "How relevant is this other token to what I'm doing right now?" Then it averages the values using those weights and moves on. No state. No memory. Just a rolling calculation over a sliding window of opaquely-chosen context.

    It's kind of tragic, honestly. You build this beautifully precise setup, but because your detailed instructions are buried in the middle of a long prompt -- or phrased too much like background noise -- they get low scores. The model literally pays less attention to them. We thought we were vibe coding, but the real vibe coder was the LLM all along!

    So how to fix it? Don't just write accurate instructions. Write ATTENTION-WORTHY ones.
    - 🔁 Repeat key patterns. Repetition increases token relevance, especially when you're relying on specific phrasing to guide the model's output.
    - 🔝 Push constraints to the top. Instructions buried deep in the prompt get lower attention scores. Front-load critical rules so they have a better chance of sticking.
    - 🗂️ Use structure to force salience. Consistent headers, delimiters, and formatting cues help key sections stand out. Markdown, line breaks, and even ALL CAPS (sparingly) can help direct the model's focus to what actually matters.
    - ✂️ Cut irrelevant context. The less junk in the prompt, the more likely your real instructions are to be noticed and followed.

    You're not teaching a model. You're gaming a scoring function.
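The dot-product-and-softmax scoring described above can be illustrated with a toy calculation. The vectors below are made up, and real models use learned projections, scaling across many heads, and far higher dimensions, so treat this strictly as a sketch of the scoring idea.

```python
import math

def attention_weights(query, keys):
    """Scaled dot-product scores for one query against each key, softmaxed."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    # Softmax (shifted by the max score for numerical stability).
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

query = [1.0, 0.0]        # what the current token is "looking for"
keys = [[1.0, 0.0],       # a very relevant token
        [0.0, 1.0],       # an orthogonal (irrelevant) token
        [0.5, 0.5]]       # a partially relevant token
weights = attention_weights(query, keys)
print([round(w, 2) for w in weights])
```

The aligned key gets the largest weight and the orthogonal one the smallest, which is the whole point of the post: tokens that score as relevant get attended to, and everything else fades into background noise.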

  • View profile for Himanshu Kumar

    Building India’s Best AI Job Search Platform | LinkedIn Growth for Forbes 30u30 & YC Founders & Investors | Building your personal brand | 200+ Profiles, 150+ Mn Impressions | Marketing & Brand Building

    281,432 followers

    I analyzed the top 50 coding interview questions, and they all boil down to these 18 patterns.

    People ask me how to crack technical interviews without spending years on LeetCode. My secret? I don't memorize solutions. People ask how I identify the right approach for a brand new problem instantly. My secret? I look for the underlying pattern, not the specific question. But the truth is... There is no secret. Just pure Pattern Recognition. To master DSA, you have to stop treating every problem as unique. You need to map them to these 18 fundamental branches. Here is the technical breakdown to get you started:

    1. Optimization & Pointers
    - Two Pointers: Essential for sorted arrays and linked lists. Use this to detect cycles (Floyd's algorithm), remove elements, or find target pairs in O(N) time.
    - Sliding Window: The standard for subarray problems. Perfect for calculating running averages, finding the longest substring under constraints, or optimization within a linear data structure.
    - Intervals: When dealing with time ranges or overlaps, use Merge Intervals or Catalan logic.

    2. Search & Traversal
    - Binary Search: Beyond finding numbers. Use on rotated sorted arrays or to find "first/last occurrence" boundaries in O(log N).
    - Tree Traversal: Master the recursion. Know when to use Level-order (BFS) vs. Pre/In/Post-order (DFS) for tasks like serialization or finding the Lowest Common Ancestor.
    - Graph Traversal: Solves island counting, cycle detection, and topological sorting.

    3. Complex Structures & Logic
    - Heaps: The most efficient way to handle "Top K" elements, scheduling tasks, or finding medians in a data stream.
    - Tries: The go-to design pattern for autocomplete systems, spell checkers, and prefix searches.
    - Backtracking: For when you need all possibilities. Used in generating permutations, N-Queens, and Sudoku solvers.

    4. The Heavy Hitters
    - Dynamic Programming (DP): For overlapping subproblems. Includes 1D/2D arrays, Longest Common Subsequence (LCS), and the Knapsack problem.
    - Graph Optimization (Union Find): Critical for network connectivity. Uses Disjoint Set Union and MST algorithms like Kruskal's or Prim's.

    Want to be a software engineer? Stop memorizing. Start recognizing. Remember, seeing the pattern is 90% of the solution. The code is just syntax. Which of these patterns do you find most difficult to implement?

    ♻️ Repost to help a connection ace their technical interview.
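For the Union-Find entry above, here is a minimal Disjoint Set Union sketch in Python with path compression (union-by-rank omitted for brevity), the structure behind network-connectivity and island problems.

```python
class UnionFind:
    """Disjoint Set Union with path compression."""

    def __init__(self, n: int):
        self.parent = list(range(n))    # each node starts as its own root

    def find(self, x: int) -> int:
        """Return the root of x, flattening the path as we go."""
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def union(self, a: int, b: int) -> None:
        """Merge the sets containing a and b."""
        self.parent[self.find(a)] = self.find(b)

    def connected(self, a: int, b: int) -> bool:
        return self.find(a) == self.find(b)

uf = UnionFind(5)
uf.union(0, 1)
uf.union(1, 2)
print(uf.connected(0, 2))  # True
print(uf.connected(0, 4))  # False
```

With path compression (and union-by-rank, if added), each operation runs in near-constant amortized time, which is why DSU is the backbone of Kruskal's MST algorithm.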
