Last week, I described four design patterns for AI agentic workflows that I believe will drive significant progress: Reflection, Tool use, Planning, and Multi-agent collaboration. Instead of having an LLM generate its final output directly, an agentic workflow prompts the LLM multiple times, giving it opportunities to build step by step to higher-quality output.

Here, I'd like to discuss Reflection. It's relatively quick to implement, and I've seen it lead to surprising performance gains. You may have had the experience of prompting ChatGPT/Claude/Gemini, receiving unsatisfactory output, delivering critical feedback to help the LLM improve its response, and then getting a better response. What if you automate the step of delivering critical feedback, so the model automatically criticizes its own output and improves its response? This is the crux of Reflection.

Take the task of asking an LLM to write code. We can prompt it to generate the desired code directly to carry out some task X. Then, we can prompt it to reflect on its own output, perhaps as follows:

"Here’s code intended for task X: [previously generated code]. Check the code carefully for correctness, style, and efficiency, and give constructive criticism for how to improve it."

Sometimes this causes the LLM to spot problems and come up with constructive suggestions. Next, we can prompt the LLM with context including (i) the previously generated code and (ii) the constructive feedback, and ask it to use the feedback to rewrite the code. This can lead to a better response. Repeating the criticism/rewrite process might yield further improvements. This self-reflection process allows the LLM to spot gaps and improve its output on a variety of tasks including producing code, writing text, and answering questions.
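As a rough sketch, the generate → critique → rewrite loop could be wired up like this. `call_llm` is a hypothetical stand-in for any chat-completion API (stubbed here so the example runs end to end); the prompts mirror the ones in the text.

```python
def call_llm(prompt: str) -> str:
    # Stub standing in for a real LLM API call; replace with your provider.
    if "constructive criticism" in prompt:
        return "Add input validation and a docstring."
    return "def add(a, b):\n    return a + b"

def reflect_and_refine(task: str, rounds: int = 2) -> str:
    # Step 1: generate a first draft directly.
    draft = call_llm(f"Write code for this task: {task}")
    for _ in range(rounds):
        # Step 2: ask the model to critique its own output.
        critique = call_llm(
            f"Here's code intended for task {task}:\n{draft}\n"
            "Check the code carefully for correctness, style, and efficiency, "
            "and give constructive criticism for how to improve it."
        )
        # Step 3: rewrite using (i) the previous code and (ii) the feedback.
        draft = call_llm(
            f"Task: {task}\nPrevious code:\n{draft}\n"
            f"Reviewer feedback:\n{critique}\n"
            "Rewrite the code using this feedback."
        )
    return draft

print(reflect_and_refine("add two numbers"))
```

With a real model behind `call_llm`, each pass through the loop gives the LLM a fresh chance to catch and fix its own mistakes.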
And we can go beyond self-reflection by giving the LLM tools that help evaluate its output; for example, running its code through a few unit tests to check whether it generates correct results on test cases, or searching the web to double-check text output. Then it can reflect on any errors it found and come up with ideas for improvement.

Further, we can implement Reflection using a multi-agent framework. I've found it convenient to create two agents, one prompted to generate good outputs and the other prompted to give constructive criticism of the first agent's output. The resulting discussion between the two agents leads to improved responses.

Reflection is a relatively basic type of agentic workflow, but I've been delighted by how much it improved my applications’ results. If you’re interested in learning more about Reflection, I recommend:

- Self-Refine: Iterative Refinement with Self-Feedback, by Madaan et al. (2023)
- Reflexion: Language Agents with Verbal Reinforcement Learning, by Shinn et al. (2023)
- CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing, by Gou et al. (2024)

[Original text: https://lnkd.in/g4bTuWtU ]
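One way to wire unit tests into the loop, sketched with a stubbed model: execute the generated code against a few test cases and feed any failure messages back as critique. `call_llm` is again a hypothetical stand-in; the stub returns a deliberately buggy draft, then a fix once it sees a failure report.

```python
def call_llm(prompt: str) -> str:
    # Stub for a real chat API: buggy first draft, corrected version
    # once the prompt contains a failure report.
    if "FAILED" in prompt:
        return "def mean(xs):\n    return sum(xs) / len(xs)"
    return "def mean(xs):\n    return sum(xs) / (len(xs) - 1)"  # buggy draft

def run_tests(code: str) -> list:
    # Tool step: execute the generated code and collect failure messages.
    env = {}
    exec(code, env)  # caution: only run untrusted code inside a sandbox
    failures = []
    for xs, want in [([1, 2, 3], 2.0), ([4], 4.0)]:
        try:
            got = env["mean"](xs)
        except Exception as exc:
            failures.append(f"FAILED: mean({xs}) raised {exc!r}")
            continue
        if got != want:
            failures.append(f"FAILED: mean({xs}) returned {got}, expected {want}")
    return failures

code = call_llm("Write a Python function mean(xs) that averages a list.")
for _ in range(3):  # criticize/rewrite until the tests pass
    failures = run_tests(code)
    if not failures:
        break
    code = call_llm("Fix this code:\n" + code + "\n" + "\n".join(failures))

print(run_tests(code))  # → [] once the rewrite passes
```

The key point is that the critique no longer has to come from the model's own judgment; concrete test failures are a much stronger feedback signal.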
Sales Feedback Loops
-
Microsoft just promoted 4 sales leaders to EVP. The press release buried the real story.

One line stood out: "Keep the feedback loop between customers and product decisions as small as possible."

Read that again. Microsoft—a company with 220,000 employees—is restructuring its entire sales leadership to shrink the distance between what customers need and what product builds.

Why this matters for CROs:

Microsoft isn't doing this for fun. They're doing it because "AI is being adopted at extraordinary speed, and customers expect these capabilities to come to life in their business faster than ever before." Translation: The old model—sales captures feedback, passes it to product, product builds it 18 months later—is dead. Customers won't wait. Competitors won't wait. Your org structure can't wait either.

The feedback loop problem in most sales orgs:
• Reps hear what customers actually need
• That insight gets buried in CRM notes nobody reads
• Product builds features based on internal roadmaps
• Customers churn because their problems never get solved
• Everyone blames "alignment issues"

The loop is too long. The signal gets lost. Deals die in the gap.

What Microsoft understands:

When you elevate sales leaders to EVP and give them direct lines to product strategy, you're not just promoting people. You're compressing the feedback loop by design. Customer pain → Sales leadership → Product decision. No 6-month committee reviews. No "we'll add it to the backlog." No lost-in-translation moments.

The CRO question:

How long does it take for customer feedback from your sellers to influence a product decision at your company? If the answer is "months" or "I don't know," your feedback loop is a competitive liability. The companies winning in AI aren't just deploying faster. They're learning faster. And learning speed is a function of feedback loop length.

How compressed is your customer-to-product feedback loop?
-
How Does a Control Valve Positioner Really Work?

A control valve positioner is essentially a closed-loop electropneumatic servo mechanism that ensures the valve stem reaches and maintains the exact position demanded by the controller. Here’s the technical flow:

➡️ Signal Conversion and Loop Power
The DCS or PLC sends a 4–20 mA analog control signal, which also powers most 2-wire loop-powered positioners. This current represents the requested valve position (setpoint).

➡️ I/P Conversion (Electro-Pneumatic Interface)
Inside the positioner, the electrical signal drives an I/P converter, often using flapper-nozzle systems, piezoelectric valves, or force-balance torque motors. This converts 4–20 mA into a standardized 3–15 psi pneumatic output for actuator control.

➡️ Pneumatic Relay and Pressure Amplification
The low I/P output is boosted by a pneumatic relay/booster to actuator-level pressures (typically 20–80 psi). This ensures fast stroking and stable control under high thrust or shutoff force conditions.

➡️ Actuator Motion
The actuator converts pressure into mechanical motion through diaphragms, springs, cylinders, rack-and-pinion or Scotch yoke mechanisms. This motion drives the valve stem or shaft toward the demanded position.

➡️ Stem Position Feedback (Closed-Loop Control)
A feedback element (mechanical linkage, magnetic sensor, potentiometer, Hall-effect sensor, or optical encoder) measures the actual stem position. The positioner continuously compares actual vs. commanded position and corrects air pressure until the error is zero. This creates a true PID-like servo loop operating directly on the valve.
Example conversion:
4 mA → 3 psi → valve 0% open
12 mA → 9 psi → valve 50% open
20 mA → 15 psi → valve 100% open

Advanced technical insights:
• Auto-stroking and adaptive tuning to minimize hysteresis and deadband
• Friction/stiction detection for early identification of packing or actuator issues
• Valve signature and step-response curves for predictive maintenance
• Partial Stroke Testing (PST) for ESD/SIS valves
• Air consumption optimization and reduced bleed losses
• Self-diagnostics for travel deviation, supply pressure issues, or I/P drift

Modern digital positioners are no longer simple signal converters. They operate as intelligent field-level control devices that improve accuracy, reduce process variability, enhance reliability, and support SIL and IEC 61511 requirements when used with safety-instrumented valves.
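The example conversion is plain linear scaling with a live zero at 4 mA. A minimal sketch (function names are illustrative, and it assumes a direct-acting, air-to-open valve where 3 psi means fully closed):

```python
def ma_to_psi(ma: float) -> float:
    # 4-20 mA maps linearly onto 3-15 psi (live-zero signal)
    return 3.0 + (ma - 4.0) * (15.0 - 3.0) / (20.0 - 4.0)

def psi_to_percent_open(psi: float) -> float:
    # 3-15 psi maps linearly onto 0-100% valve travel
    return (psi - 3.0) / (15.0 - 3.0) * 100.0

for ma in (4.0, 12.0, 20.0):
    psi = ma_to_psi(ma)
    print(f"{ma:.0f} mA -> {psi:.0f} psi -> {psi_to_percent_open(psi):.0f}% open")
# 4 mA -> 3 psi -> 0% open
# 12 mA -> 9 psi -> 50% open
# 20 mA -> 15 psi -> 100% open
```

The nonzero "live zero" at 4 mA / 3 psi is what lets the loop distinguish a legitimate 0% command from a broken wire or lost air supply.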
-
Following user feedback is a Product Management virtue. Is there an actual way to implement it, between all the noise, bugs, and stakeholder requests? Well… Most teams claim they are customer-driven. Yet the moment you open Zendesk, App Store reviews, survey results, and Slack threads, you instantly remember why everyone quietly avoids this work. Feedback is everywhere, contradictory, emotional, duplicated, and nearly impossible to turn into decisions. It is chaos disguised as “insights.” This is why the new Amplitude AI Feedback release caught my attention and made it all the easier to decide to partner with them on this update. It successfully connects what users say with what they actually do, in one workflow. No extra tools. No extra tabs. You see their words, frustrations, and praise. You see their behavior. And AI transforms it into ranked themes, rising trends, top requests, and complaints. Noise turns into clarity. Opinions turn into patterns. Patterns turn into action. And because it is native inside Amplitude, it kills the biggest problem in feedback work: Fragmentation. Everything flows into analytics, session replay, and cohorts, creating a full loop from insight to fix. You can trace why an issue matters, how many users care, how it impacts behavior, and which actions you should take. Finally, a single source of truth for PMs, UX, CX, and marketing. I’m also genuinely impressed with the supported sources of feedback: App Store, Google Play, Zendesk, Intercom, Freshdesk, Salesforce Service, Gong, Trustpilot, G2, Reddit, Discord, and X. Slack arrives in Q1, and there will be more! If you ever felt overwhelmed by feedback, this is one of the first attempts I have seen that genuinely solves the operational pain, not just the reporting part. It launches… Today! Take a look: https://lnkd.in/dAJKeTez What was the most successful update you know that came from the product’s users? Let me know in the comments. #productmanagement #productmanager #userfeedback
-
Unlock the Power of High-Quality Performance Reviews

'Tis the season for annual performance reviews. They are dreaded by some (both managers and direct reports alike), but a GOLDEN opportunity for growth, alignment, and acceleration when done right! When I became a people manager for the first time, I had no formal training on how to conduct a formal performance evaluation, which made the process more intimidating and time-consuming than effective. It took me a while to develop some best practices, which I still use today. Here are some actionable tips for making these conversations transformative instead of transactional:

Best Practices for Managers:
1️⃣ Make it a Dialogue, Not a Monologue: Listen as much as you speak. Performance reviews should be a two-way street.
2️⃣ Focus on Specifics: Give actionable, evidence-based feedback tied to clear examples—not vague generalizations.
3️⃣ Balance Praise with Growth Opportunities: Celebrate wins but also highlight areas for improvement with a clear path forward.
4️⃣ Set Goals, Not Just Grades: Use reviews to align on SMART goals for the future.
5️⃣ Document & Follow Up: Don’t let feedback vanish post-meeting. Document outcomes and revisit them regularly.

Common Mistakes to Avoid:
🚫 Waiting Until Review Time: Feedback should be ongoing—not a once-a-year surprise.
🚫 Being Too General: Saying "Good job" or "Needs improvement" without specifics leaves employees guessing.
🚫 Avoiding Tough Conversations: Constructive feedback can be uncomfortable, but it’s essential for growth.
🚫 Ignoring Employee Input: This isn’t just your show—make space for their perspective!

Tips for Employees: Get Better Feedback
1️⃣ Be Proactive: Ask for feedback regularly—not just during reviews. Questions like “What’s one thing I could do better?” show initiative and openness.
2️⃣ Come Prepared: Bring accomplishments, challenges, and goals to the table. Show ownership of your growth.
3️⃣ Clarify Expectations: Ask, "What does success look like in my role / on this project?" This helps align your work with manager expectations.

Year-Round Impact:
✔️ Schedule Regular Check-Ins: Quarterly or monthly conversations keep feedback fresh and actionable.
✔️ Use Tools to Track Progress: Utilize shared documents or platforms to monitor goals throughout the year.
✔️ Create a Feedback Culture: Encourage real-time recognition and coaching on a weekly basis.

A high-quality performance review isn’t just a meeting—it’s a tool for growth, alignment, and stronger relationships. Let’s move away from the “annual checkbox” and toward continuous improvement!

What’s your secret to impactful performance reviews? Drop your tips in the comments! #Leadership #Feedback #PerformanceManagement #CareerGrowth
-
Treating AI like a chatbot (you ask a question → it gives an answer) is only scratching the surface. Underneath, modern AI agents are running continuous feedback loops - constantly perceiving, reasoning, acting, and learning to get smarter with every cycle. Here’s a simple way to visualize what’s really happening 👇

1. Perception Loop – The agent collects data from its environment, filters noise, and builds real-time situational awareness.
2. Reasoning Loop – It processes context, forms logical hypotheses, and decides what needs to be done.
3. Action Loop – It executes those plans using tools, APIs, or other agents, then validates outcomes.
4. Reflection Loop – After every action, it reviews what worked (and what didn’t) to improve future reasoning.
5. Learning Loop – This is where it gets powerful: the model retrains itself based on new knowledge, feedback, and data patterns.
6. Feedback Loop – It uses human and system feedback to refine outputs and improve alignment with goals.
7. Memory Loop – Stores and retrieves both short-term and long-term context to maintain continuity.
8. Collaboration Loop – Multiple agents coordinate, negotiate, and execute tasks together, almost like a digital team.

These loops are what make AI agents more human-like in how they reason and self-improve. Leveraging these loops moves AI systems from “prompt and reply” to “observe, reason, act, reflect, and learn.” #AIAgents
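The core of these loops can be boiled down to a toy control flow. This is purely illustrative (every name here is hypothetical, and real frameworks add planning, tool schemas, and retries), but it shows how perceive → reason → act → reflect chain together, with memory carrying context between cycles:

```python
class Agent:
    def __init__(self):
        self.memory = []  # memory loop: context retained across cycles

    def perceive(self, observation: str) -> str:
        # Perception loop: filter noise from the raw observation.
        return observation.strip().lower()

    def reason(self, percept: str) -> str:
        # Reasoning loop: turn the percept into a plan.
        return f"handle:{percept}"

    def act(self, plan: str) -> str:
        # Action loop: execute the plan (via tools/APIs in a real agent).
        return f"done:{plan}"

    def reflect(self, plan: str, outcome: str) -> bool:
        # Reflection loop: record what worked to inform future cycles.
        success = outcome.startswith("done")
        self.memory.append((plan, outcome, success))
        return success

    def run(self, observation: str) -> str:
        percept = self.perceive(observation)
        plan = self.reason(percept)
        outcome = self.act(plan)
        self.reflect(plan, outcome)
        return outcome

agent = Agent()
print(agent.run("  Check Server Status "))  # → done:handle:check server status
```

A production agent would loop `run` continuously, feed `memory` back into `reason`, and periodically fold accumulated reflections into training data (the learning loop).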
-
Behaviors are learned and reinforced. To make performance evaluations more inclusive, you need to proactively craft new practices. 🧠 Unbiasing nudges, intentional and subtle adjustments I craft with my clients, can play a pivotal role in achieving an objective and inclusive performance assessment. 👇 Here is what to consider: 🔎 Key Decision Points Analyze your evaluation process to identify key decision points. In my practice, focusing on assessment, performance goal setting, and feedback processes has proven crucial. Introduce inclusive prompts at each stage to guide unbiased decision-making. 🔎 Common Biases Examine previous reviews to unearth prevailing biases. Halo/horn effects, recency bias, and affinity bias often surface. Counteract these biases by crafting nudges tailored to your organization, integrating them seamlessly into your review spreadsheets. 🔎 Behavioral Prompts I usually develop concise pre-decision checklists tailored to each organization. The goal is to support raters' metacognition and introduce timed prompts during the evaluation process. 🔎 Feedback Loops Begin with small-scale implementation and collect feedback. Compare perceptions of both raters and ratees to gauge effectiveness. 🔎 Ongoing Training Avoid off-the-shelf solutions; instead, tailor training to your organization's unique context and patterns. Your trainer should understand your specific needs and design a continuous training program that reinforces these unbiasing nudges, providing managers with the necessary competencies. 🔎 Pilot and Evaluation Define metrics to measure progress and impact. Pilot your unbiasing nudges and regularly evaluate their effectiveness. Adjust based on feedback and insights gained during the pilot phase. 👉 Crafting inclusive performance evaluations is an ongoing journey. Yet, I believe, it's one of the most important ones. Each evaluation matters as it defines a person's career and sometimes even the future. 
________________________________________ Are you looking for more DEI x Performance-related recommendations like this? 📨 Join my free DEI Newsletter:
-
If you’re building with AI in 2025, you need to understand how agents self-evolve. LLMs gave us static reasoning. Agents go further - they adapt, retain, and improve over time. Here’s how that actually works 👇

🤔 When does evolution happen?
→ Intra-task evolution happens during inference. Agents adapt mid-task using in-context learning, memory lookup, or dynamic tool usage.
→ Inter-task evolution happens across episodes. This includes supervised fine-tuning, reinforcement learning, or meta-learning to improve behavior between tasks.
Strong systems combine both - fast task-level adaptation and longer-term improvement across workflows.

🤖 How do agents evolve?
→ Reward-based: Learning from success signals, proxy metrics, or human feedback.
→ Imitation-based: Learning from demos, whether human, self-generated, or from other agents.
→ Population-based: Evolving across agent variants running in parallel, selecting the best performers.
Most real-world systems blend these - imitation for bootstrapping, reward for refinement, and population methods for scaling.

📝 What tradeoffs are you managing?
→ Online vs offline learning: Do you allow the agent to adapt in production or only in training windows?
→ On-policy vs off-policy: Is the agent learning from its own actions or from broader data like replay buffers, past runs, or human examples?
→ Granularity: Are you evolving the prompt stack, the memory schema, routing logic, or the core policy?
These choices define how fast you can evolve, how stable it is, and what infrastructure is required.

✅ Where does self-evolution work best?
→ General-purpose agents operate across broad, unpredictable tasks. Feedback is noisy, which makes evolution harder, but worth it.
→ Domain-specific agents - for coding, GUI automation, finance, or healthcare - benefit from structured environments and clearer reward signals, which accelerate feedback loops and enable faster evolution.

⚖️ How do you evaluate progress? You can’t rely on static benchmarks.
You need to measure across five axes: Adaptivity → Retention → Generalization → Efficiency → Safety Use both short-horizon and long-horizon evaluation setups to capture real gains over time. 〰️〰️〰️ Follow me (Aishwarya Srinivasan) for real-world insights on AI agents and GenAI systems. Subscribe to my Substack for weekly breakdowns: https://lnkd.in/dpBNr6Jg
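Of the three evolution mechanisms above, population-based selection is the easiest to sketch: score several agent variants on a reward signal, keep the top performers, and mutate them into the next generation. Here the "variants" are just scalar settings and the reward is a toy function, both assumptions made for brevity.

```python
import random

random.seed(0)  # deterministic for the example

def reward(variant: float) -> float:
    # Toy stand-in for task performance: closer to 0.7 is better.
    return -abs(variant - 0.7)

population = [0.1, 0.5, 0.9, 1.3]
for generation in range(5):
    ranked = sorted(population, key=reward, reverse=True)
    survivors = ranked[:2]                                          # selection
    children = [v + random.uniform(-0.1, 0.1) for v in survivors]   # mutation
    population = survivors + children                               # next gen

best = max(population, key=reward)
print(f"best variant: {best:.2f}")  # drifts toward the 0.7 optimum
```

Because survivors are carried forward unchanged, the best score never regresses between generations; real systems pair this with reward-based fine-tuning of the surviving variants.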
-
360-degree feedback... when used well, it can open doors to genuine growth, BUT when used poorly, it can do more harm than good. I've seen both sides.

I’m using 360s in my coaching programmes, but I have a few rules now that I didn’t have when I started out:

👉 I’ll only administer a 360 to someone who has already been through coaching and worked on their self-awareness first (preferably through my 6-month starter programme).
👉 Very clear instructions and rationale must be provided to every person who is asked to fill in a 360 (I work with HR/managers to achieve this).
👉 360-degree assessment results must be accompanied by a minimum of 3 coaching sessions over 3 months (goals and priority leadership development areas are identified and worked on as part of this).

Without that foundation, feedback can feel threatening instead of constructive. But with it, leaders are open to learning and genuinely able to use the insights to improve. 360s are also a way to measure impact and improvement in leader development efforts.

What are your thoughts on 360s? Do you find them useful? Or stressful? Leave your thoughts below 🙏😊

Image source: https://lnkd.in/eZYNcnsF
-
Underrated leadership lesson: Be radically transparent.

Feedback shouldn't happen just once a year. It should be a daily, continuous loop. During my 10 years at Bridgewater, I received 12,385 pieces of feedback. And it wasn't just reserved for formal reviews. Feedback was given LIVE throughout the day. In the middle of a presentation? Feedback. Right after answering a question? Feedback.

Truthfully, as an employee, I didn't always love it. But I valued it. After all, they're called blind spots for a reason. This was all the result of one key principle: Radical transparency. A system that integrates candid feedback into daily work life, allowing employees to constantly assess and be assessed.

Here's why it works:
✅ Good thinking and behavior increase
↳ Processes improve when logic is analyzed in real time.
✅ High standards are maintained
↳ Problems get fixed faster when everything is visible.
✅ No more workplace hierarchies
↳ Continuous improvement happens when everyone is accountable.

It's a principle that didn't just change my resilience to feedback. It completely transformed my leadership as a whole. So managers, consider implementing radical transparency for these 7 reasons:

1. Faster problem-solving
↳ Small issues are easier to fix than big ones.
2. Openness saves time
↳ Less time wasted on gossip and tracking information.
3. Accelerated learning
↳ Teams grow faster when they understand each other’s thinking.
4. Long-term success
↳ Ongoing feedback improves leadership and the organization.
5. Building an idea meritocracy
↳ Transparency builds trust and rewards good ideas.
6. Reduced workplace inefficiencies
↳ Open communication cuts wasted time and confusion.
7. Proactive issue resolution
↳ Fixing small problems early prevents bigger ones.

While getting scored live mid-presentation may not be for everyone, becoming more transparent has real, tangible benefits and can put you on a streamlined path to success.

Leaders - are you brave enough to try it?
♻️ Repost to help other leaders become radically transparent. 🔔 And follow Dave Kline for more.