After 1,000 hours of prompt engineering, these 6 patterns work best. Here's the KERNEL framework:
---
✦ I saw it here: https://lnkd.in/dj8Ax6BT
✦ I tested it, and it's quite effective!
✦ I have written numerous blog posts on prompt engineering:
✦ 7 Sins of Prompting: https://lnkd.in/duP3Za5W
✦ What Do People Prompt: https://lnkd.in/dGYgcQ_7
✦ How to Search: https://lnkd.in/dxzSBEjW
✦ ChatGPT-5: https://lnkd.in/gVx_ZPh3
---
K - Keep it simple
Bad: 500 words of context. Good: one clear goal.
Example: instead of "I need help writing something about Redis," use "Write a technical tutorial on Redis caching."
Result: 70% less token usage, 3x faster responses.

E - Easy to verify
Your prompt needs clear success criteria. Replace "make it engaging" with "include 3 code examples." If you can't verify success, AI can't deliver it.
My testing: 85% success rate with clear criteria vs. 41% without.

R - Reproducible results
Avoid temporal references ("current trends," "latest best practices"). Use specific versions and exact requirements. The same prompt should work next week and next month.
94% consistency across 30 days in my tests.

N - Narrow scope
One prompt = one goal. Don't combine code + docs + tests in one request; split complex tasks.
Single-goal prompts: 89% satisfaction vs. 41% for multi-goal.

E - Explicit constraints
Tell the AI what NOT to do.
"Python code" → "Python code. No external libraries. No functions over 20 lines."
Constraints reduce unwanted outputs by 91%.

L - Logical structure
Format every prompt like:
Context (input)
Task (function)
Constraints (parameters)
Format (output)

A real example from my work last week:
Before KERNEL: "Help me write a script to process some data files and make them more efficient"
Result: 200 lines of generic, unusable code.
After KERNEL:
Task: Python script to merge CSVs
Input: Multiple CSVs, same columns
Constraints: Pandas only, <50 lines
Output: Single merged.csv
Verify: Run on test_data/
Result: 37 lines, worked on first try.

Actual metrics from applying KERNEL to 1,000 prompts:
First-try success: 72% → 94%
Time to useful result: -67%
Token usage: -58%
Accuracy improvement: +340%
Revisions needed: 3.2 → 0.4

Advanced tip: chain multiple KERNEL prompts instead of writing complex ones. Each prompt does one thing well and feeds into the next.

The best part? This works consistently across GPT-5, Claude, Gemini, even Llama. It's model-agnostic.
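The "Logical structure" pattern lends itself to a small template helper. A minimal sketch, assuming only the Task/Input/Constraints/Output/Verify fields from the post's own example (the function name and field order are illustrative, not part of the framework):

```python
def kernel_prompt(task: str, input_desc: str, constraints: str,
                  output: str, verify: str) -> str:
    """Assemble a single-goal prompt with explicit, verifiable parts."""
    return "\n".join([
        f"Task: {task}",
        f"Input: {input_desc}",
        f"Constraints: {constraints}",
        f"Output: {output}",
        f"Verify: {verify}",
    ])

# The post's own worked example, rebuilt with the helper:
prompt = kernel_prompt(
    task="Python script to merge CSVs",
    input_desc="Multiple CSVs, same columns",
    constraints="Pandas only, <50 lines",
    output="Single merged.csv",
    verify="Run on test_data/",
)
print(prompt)
```

Keeping the fields as function arguments makes the "narrow scope" and "explicit constraints" patterns hard to skip: a missing argument fails loudly instead of producing a vague prompt.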
-
The world's largest sand battery has been inaugurated in Finland. Developed by Polar Night Energy, this high-temperature thermal energy storage system stores heat in sand using low-cost, clean electricity. The project is a powerful example of how thermal storage can enhance grid flexibility, decarbonise heating, and accelerate the energy transition.
- It can store up to 100 MWh of thermal energy.
- It has a round-trip efficiency of 90%.
- It offers a cost-effective alternative to lithium-ion batteries for long-term heat storage.
- By replacing an old woodchip plant, the sand battery is expected to cut the local heating network's carbon emissions by 70%.
-
I frequently see conversations where terms like LLMs, RAG, AI Agents, and Agentic AI are used interchangeably, even though they represent fundamentally different layers of capability. This visual guide explains how these four layers relate—not as competing technologies, but as an evolving intelligence architecture. Here's a deeper look:

1. 𝗟𝗟𝗠 (𝗟𝗮𝗿𝗴𝗲 𝗟𝗮𝗻𝗴𝘂𝗮𝗴𝗲 𝗠𝗼𝗱𝗲𝗹)
This is the foundation. Models like GPT, Claude, and Gemini are trained on vast corpora of text to perform a wide array of tasks:
– Text generation
– Instruction following
– Chain-of-thought reasoning
– Few-shot/zero-shot learning
– Embedding and token generation
However, LLMs are inherently limited to the knowledge encoded during training and struggle with grounding, real-time updates, or long-term memory.

2. 𝗥𝗔𝗚 (𝗥𝗲𝘁𝗿𝗶𝗲𝘃𝗮𝗹-𝗔𝘂𝗴𝗺𝗲𝗻𝘁𝗲𝗱 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝗼𝗻)
RAG bridges the gap between static model knowledge and dynamic external information by integrating techniques such as:
– Vector search
– Embedding-based similarity scoring
– Document chunking
– Hybrid retrieval (dense + sparse)
– Source attribution
– Context injection
RAG enhances the quality and factuality of responses. It enables models to "recall" information they were never trained on and grounds answers in external sources—critical for enterprise-grade applications.

3. 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁
RAG is still a passive architecture—it retrieves and generates. AI Agents go a step further: they act. Agents perform tasks, execute code, call APIs, manage state, and iterate via feedback loops. They introduce key capabilities such as:
– Planning and task decomposition
– Execution pipelines
– Long- and short-term memory integration
– File access and API interaction
– Use of frameworks like ReAct, LangChain Agents, AutoGen, and CrewAI
This is where LLMs become active participants in workflows rather than just passive responders.

4. 𝗔𝗴𝗲𝗻𝘁𝗶𝗰 𝗔𝗜
This is the most advanced layer—where we go beyond a single autonomous agent to multi-agent systems with role-specific behavior, memory sharing, and inter-agent communication. Core concepts include:
– Multi-agent collaboration and task delegation
– Modular role assignment and hierarchy
– Goal-directed planning and lifecycle management
– Protocols like MCP (Anthropic's Model Context Protocol) and A2A (Google's Agent-to-Agent)
– Long-term memory synchronization and feedback-based evolution
Agentic AI is what enables truly autonomous, adaptive, and collaborative intelligence across distributed systems.

Whether you're building enterprise copilots, AI-powered ETL systems, or autonomous task orchestration tools, knowing what each layer offers—and where it falls short—will determine whether your AI system scales or breaks.

If you found this helpful, share it with your team or network. If there's something important you think I missed, feel free to comment or message me—I'd be happy to include it in the next iteration.
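The retrieval step that RAG adds can be illustrated without any vector database. A minimal, dependency-free sketch assuming bag-of-words vectors in place of learned embeddings (real systems use an embedding model and a vector store; the chunk texts and helper names here are illustrative):

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str]) -> str:
    """Return the chunk most similar to the query (the 'vector search' step)."""
    qv = Counter(query.lower().split())
    return max(chunks, key=lambda c: cosine(qv, Counter(c.lower().split())))

chunks = [
    "Redis supports in-memory caching with TTL-based eviction.",
    "PostgreSQL is a relational database with ACID guarantees.",
]
question = "How does Redis caching work?"
context = retrieve(question, chunks)
# 'Context injection': ground the model's answer in retrieved text.
prompt = f"Context: {context}\nQuestion: {question}"
```

Swapping `cosine` over word counts for an embedding model, and the list of chunks for an indexed store, turns this toy into the dense-retrieval pipeline the post describes.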
-
We’re planting trees — but losing biodiversity. Global efforts to restore forests are gathering pace, driven by promises of combating climate change, conserving biodiversity, and improving livelihoods. Yet a recent paper published in Nature Reviews Biodiversity warns that the biodiversity gains from these initiatives are often overstated — and sometimes absent altogether. Forest restoration is at the heart of Target 2 of the Kunming-Montreal Global Biodiversity Framework, which aims to place 30% of degraded ecosystems under effective restoration by 2030. But the gap between ambition and outcome is wide. "Biodiversity will remain a vague buzzword rather than an actual outcome" unless projects explicitly prioritize it, the authors caution. Restoration has typically prioritized utilitarian goals such as timber production, carbon sequestration, or erosion control. This bias is reflected in the widespread use of monoculture plantations or low-diversity agroforests. Nearly half of the Bonn Challenge’s forest commitments consist of commercial plantations of exotic species — a trend that risks undermining biodiversity rather than enhancing it. Scientific evidence shows that restoring biodiversity requires more than planting trees. Methods like natural regeneration — allowing forests to recover on their own — can often yield superior biodiversity outcomes, though they face social and economic barriers. By contrast, planting a few fast-growing species may sequester carbon quickly but offers little for threatened plants and animals. Biodiversity recovery is influenced by many factors: the intensity of prior land use, the surrounding landscape, and the species chosen for restoration. Recovery is slow — often measured in decades — and tends to lag for rare and specialist species. Alarmingly, most projects stop monitoring after just a few years, long before ecosystems stabilize. However, the authors say there are reasons for optimism. 
Biodiversity markets, including emerging biodiversity credit schemes and carbon credits with biodiversity safeguards, could mobilize new financing. Meanwhile, technologies like environmental DNA sampling, bioacoustics, and remote sensing promise to improve monitoring at scale. To turn good intentions into reality, the paper argues, projects must define explicit biodiversity goals, select suitable methods, and commit to long-term monitoring. Social equity must also be central. "Improving biodiversity outcomes of forest restoration… could contribute to mitigating power asymmetries and inequalities," the authors write, citing examples from Madagascar and Brazil. If designed well, forest restoration could help address the twin crises of biodiversity loss and climate change. But without a deliberate shift, billions of dollars risk being spent on projects that plant trees — and little else. 🔬 Brancalion et al (2025): https://lnkd.in/gG6X36WP
-
Scaling from 50 to 100 employees almost killed our company. Until we discovered a simple org structure that unlocked $100M+ in annual revenue.

In my 10+ years of experience as a founder, one of the biggest challenges I faced in scaling was bridging the organizational gap between startup and enterprise. We hit that wall at around 100 employees. What worked beautifully with a small team suddenly became our biggest obstacle to growth.

The problem was our functional org structure: engineers reporting to engineering, product to product, business to business. This created a complex dependency web:
• Planning took weeks
• No clear ownership
• Business threw Jira tickets over the fence and prayed for them to get completed
• Engineers didn't understand priorities and worked on problems that didn't align with customer needs

That was when I studied Amazon's Single-Threaded Owner (STO) model, in which dedicated GMs run independent business units with their own cross-functional teams and manage P&L. It looked great for Amazon's scale but felt impossible for growing companies like ours. These 2 critical barriers made it impractical for our scale:

1. Engineering Squad Requirements: True STO demands complete engineering teams (including managers) reporting to a single owner. At our size, we couldn't justify full engineering squads for each business unit. To make it work, we would have had to quadruple our engineering headcount.

2. P&L Owner Complexity: STO leaders need unicorn-level skills: deep business acumen and P&L management experience. Not only are these leaders rare and expensive, but requiring all these skills in one person would have limited our talent pool and slowed our ability to launch new initiatives.

What we needed was a model that captured STO's focus and accountability but worked for our size and growth needs. That's when we created Mission-Aligned Teams (MATs), a hybrid model that changed our execution (for good).

Key principles:
• Each team owns a specific mission (e.g., improving customer service, optimizing payment flow)
• Teams are cross-functional and self-sufficient
• Leaders can be anyone (engineer, PM, marketer) who's good at execution
• People still report functionally for career development
• Leaders focus on execution, not people management

The results exceeded our highest expectations: new MAT leads launched new products, each generating $5-10M in revenue within a year with teams of under 10 people. Planning became streamlined. Ownership became clear.

But it's NOT for everyone (just as STO wasn't for us). If you're under 50 people, the overhead probably isn't worth it. If you're Amazon-scale, pure STO might be better. MAT works best in the messy middle: when you're too big for everyone to be in one room but too small for a full enterprise structure.

Image courtesy of Manu Cornet
------
If you liked this, follow me Henry Shi as I share insights from my journey of building and scaling a $1B/year business.
-
Exciting updates on Project GR00T! We discovered a systematic way to scale up robot data, tackling the most painful pain point in robotics. The idea is simple: a human collects demonstrations on a real robot, and we multiply that data 1,000x or more in simulation. Let's break it down:

1. We use Apple Vision Pro (yes!!) to give the human operator first-person control of the humanoid. Vision Pro parses the human's hand pose and retargets the motion to the robot hand, all in real time. From the human's point of view, they are immersed in another body, like in Avatar. Teleoperation is slow and time-consuming, but we can afford to collect a small amount of data.

2. We use RoboCasa, a generative simulation framework, to multiply the demonstration data by varying the visual appearance and layout of the environment. In Jensen's keynote video below, the humanoid is now placing the cup in hundreds of kitchens with a huge diversity of textures, furniture, and object placement. We only have 1 physical kitchen at the GEAR Lab in NVIDIA HQ, but we can conjure up infinite ones in simulation.

3. Finally, we apply MimicGen, a technique to multiply the above data even further by varying the *motion* of the robot. MimicGen generates a vast number of new action trajectories based on the original human data and filters out failed ones (e.g., those that drop the cup) to form a much larger dataset.

To sum up: given 1 human trajectory with Vision Pro, RoboCasa produces N trajectories (varying visuals), and MimicGen further augments these to NxM (varying motions). This is the way to trade compute for expensive human data via GPU-accelerated simulation.

A while ago, I mentioned that teleoperation is fundamentally not scalable, because we are always limited by 24 hrs/robot/day in the world of atoms. Our new GR00T synthetic data pipeline breaks this barrier in the world of bits. Scaling has been so much fun for LLMs, and it's finally our turn to have fun in robotics!

We are creating tools to enable everyone in the ecosystem to scale up with us:
- RoboCasa: our generative simulation framework (Yuke Zhu). It's fully open-source! Here you go: http://robocasa.ai
- MimicGen: our generative action framework (Ajay Mandlekar). The code is open-source for robot arms, but we will have another version for humanoids and 5-finger hands: https://lnkd.in/gsRArQXy
- We are building a state-of-the-art Apple Vision Pro -> humanoid robot "Avatar" stack. Xiaolong Wang's group's open-source libraries laid the foundation: https://lnkd.in/gUYye7yt
- Watch Jensen's keynote yesterday. He cannot hide his excitement about Project GR00T and robot foundation models! https://lnkd.in/g3hZteCG

Finally, the GEAR lab is hiring! We want the best roboticists in the world to join us on this moon-landing mission to solve physical AGI: https://lnkd.in/gTancpNK
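The data-multiplication arithmetic in the pipeline above (1 human trajectory -> N visual variants -> NxM motion variants, minus filtered failures) can be sketched in a few lines. The function name, parameters, and numbers are illustrative, not from the GR00T pipeline itself:

```python
def synthetic_dataset_size(human_demos: int, visual_variants: int,
                           motion_variants: int, keep_rate: float) -> int:
    """Trajectories remaining after RoboCasa-style visual variation,
    MimicGen-style motion variation, and filtering of failed rollouts."""
    generated = human_demos * visual_variants * motion_variants
    return round(generated * keep_rate)

# Illustrative: 10 teleoperated demos, 100 simulated scenes each,
# 10 motion variants per scene, 80% of rollouts pass the filter.
print(synthetic_dataset_size(10, 100, 10, 0.8))
```

The point of the sketch is the scaling law: human time grows only with `human_demos`, while the dataset grows with the product NxM, which is why compute can substitute for teleoperation hours.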
-
Drone shows are increasingly incorporating AI technologies to enhance their performance. What do you think about this one? Here are several ways in which #AI is being utilized in drone shows:

1. Autonomous Navigation
Path Planning: AI algorithms assist drones in planning and optimizing flight paths for intricate aerial displays.
Collision Avoidance: AI enables real-time analysis of the environment, helping drones avoid collisions and maintain safe distances.

2. Formation Flying
Coordination Algorithms: AI algorithms coordinate the movements of multiple drones to achieve precise formations.
Real-Time Adjustments: Drones can dynamically adjust their positions in response to environmental factors or unexpected changes.

3. Swarm Intelligence
Collective Behavior: AI-driven swarm intelligence allows drones to exhibit collective behavior, creating synchronized and mesmerizing patterns.
Adaptability: Drones in a swarm can adapt their behavior based on the actions of neighboring drones.

4. Real-Time Data Analysis
Environmental Sensors: Drones equipped with sensors provide real-time data on weather conditions, wind speed, and other factors.
Adjusting Performances: AI analyzes this data to make real-time adjustments to the drone show, ensuring optimal performance.

5. Light and Color Choreography
Dynamic Lighting: AI algorithms control the lighting elements on drones, creating dynamic and customizable light shows.
Color Synchronization: Drones can synchronize their colors and lighting patterns in real time for visually stunning effects.

6. AI-Generated Patterns
Generative Algorithms: AI is used to generate unique and artistic patterns for drone formations.
Variability: Each show can be different, adding an element of surprise and creativity.

7. Gesture Recognition
Audience Interaction: AI-powered gesture recognition systems allow drones to respond to audience movements or gestures.
Interactive Shows: Audience members can influence the show in real time.

8. Dynamic Choreography
Learning Algorithms: AI can learn from previous performances, adjusting choreography based on audience reactions and preferences.
Continuous Improvement: Drones can adapt and improve their performances over time.

9. Logistics Optimization
Efficient Deployment: AI assists in optimizing the deployment and retrieval of drones before and after shows.
Battery Management: Algorithms manage drone battery usage for extended performances.

10. Safety Measures
Emergency Protocols: AI can implement emergency protocols to ensure the safety of the drone show, such as automated landing in case of malfunctions.
Monitoring Systems: AI monitors drones for any irregularities in flight behavior.

11. Sound Integration
Audio-Synchronized Displays: AI synchronizes drone movements with music or other audio elements for a fully immersive experience.

#ai #innovation via @ zzmenx #drone #dronetechnology
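Swarm behaviors like those above are typically built from simple local rules rather than any central controller. A minimal sketch of one such rule, separation (each drone steers away from neighbours inside a safety radius); the function name, distances, and gain are illustrative, not any specific vendor's system:

```python
import math

def separation_velocity(pos, neighbours, safe_dist=5.0, gain=1.0):
    """Velocity nudging a drone away from neighbours closer than safe_dist.

    pos: (x, y) of this drone; neighbours: list of (x, y) positions.
    """
    vx = vy = 0.0
    for nx, ny in neighbours:
        dx, dy = pos[0] - nx, pos[1] - ny
        d = math.hypot(dx, dy)
        if 0 < d < safe_dist:
            # Push harder the closer the neighbour: magnitude ~ gain / d.
            vx += gain * dx / (d * d)
            vy += gain * dy / (d * d)
    return vx, vy

# Neighbour 1 unit to the right pushes this drone left; a neighbour
# 10 units away is outside the safety radius and is ignored.
v = separation_velocity((0.0, 0.0), [(1.0, 0.0), (10.0, 0.0)])
```

Combined with matching cohesion and alignment rules evaluated per drone per tick, rules like this produce the collective formations the post describes.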
-
#Diversity in high-tech fields remains critically low. The Equal Employment Opportunity Commission (EEOC) recently reported that #Black and #Latino professionals are underrepresented in high-tech roles, especially in leadership. These numbers highlight ongoing structural barriers in hiring, promotion and retention. This gap is a missed opportunity to tap into a wealth of diverse talent and perspectives essential to the future of tech. However, addressing and thoroughly fixing these challenges will require time, consistent effort and a long-term commitment to systemic change. Companies can support the progression of representation in tech by investing in training, mentorship and internship opportunities that open doors for people who were historically shut out. Programs like internXL, a platform that is committed to increasing diversity and inclusion in the internship hiring process for top companies, are making a significant impact. Similarly, the expansion of STEM education at institutions like Cornell University is helping to connect talented young people from underrepresented communities with opportunities for high-tech careers. When we work together to remove these barriers, we’re fostering a more inclusive workforce and strengthening innovation, problem-solving and leadership in the industry. Let’s build a tech future that reflects the diversity of our society. https://bit.ly/3UNtOCh
-
5 key developments this month in Wearable Devices supporting Digital Health, ranging from current innovations to exciting future breakthroughs. And I made it all the way through without mentioning AI… until now. Oops! >>

🔘 Movano Health has received FDA 510(k) clearance for its EvieMED Ring, a wearable that tracks metrics like blood oxygen, heart rate, mood, sleep, and activity. This approval enables the company to expand into remote patient monitoring, clinical trials, and post-trial management, with upcoming collaborations including a pilot study with a major payor and a clinical trial at MIT.

🔘 ŌURA has launched Symptom Radar, a new feature for its smart rings that analyzes heart rate, temperature, and breathing patterns to detect early signs of respiratory illness before symptoms fully develop. While it doesn't diagnose specific conditions, it provides an "illness warning light" so users can prioritize rest and potentially recover more quickly.

🔘 A temporary scalp tattoo made from conductive polymers can measure brain activity without bulky electrodes or gels, simplifying EEG recordings and reducing patient discomfort. Printed directly onto the head, it currently works well on bald or buzz-cut scalps, and future modifications, like specialized nozzles or robotic 'fingers', may enable use with longer hair.

🔘 Researchers have developed a wearable ultrasound patch that continuously and non-invasively monitors blood pressure, showing accuracy comparable to clinical devices in tests. The soft skin-patch sensor could offer a simpler, more reliable alternative to traditional cuffs and invasive arterial lines, with future plans for large-scale trials and wireless, battery-powered versions.

🔘 According to researchers, a new generation of wearable sensors will continuously track biochemical markers such as hydration levels, electrolytes, inflammatory signals, and even viruses from bodily fluids like sweat, saliva, tears, and breath. By providing minimally invasive data and alerting users to subtle health changes before they become critical, these devices could accelerate diagnosis, improve patient monitoring, and reduce discomfort (see image).

👇Links to related articles in comments
#DigitalHealth #Wearables