What happens when a robot loses a leg mid-mission? Most robots would fail immediately. But watch this one figure out how to walk again in just a few tries.

The researcher deliberately damages the robot: cuts off a leg, adds weights, attaches wheels to limbs. Each time, the robot experiments with different gaits until it finds one that works.

This is omni-bodied intelligence. The software doesn't panic when the hardware changes. It adapts.

Here's why this matters: we talk about robots in homes and factories, but we rarely talk about what happens after six months of use. Parts break. Joints wear out. Sensors fail. If robots can't handle imperfection, they'll never leave the lab.

This approach treats adaptability as a core feature, not an edge case. That's the difference between a demo and a tool you can actually rely on.

Video credits: SkildAI

---

Interested in starting your robotics career? Check out our free robotics career guide to get you started: https://lnkd.in/gpPVTPKE
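The adapt-by-trial idea can be sketched as a search loop: propose candidate gaits, score them on the damaged body, keep the best. Here is a minimal illustration in Python, with a toy scoring function standing in for real hardware trials (all names and numbers are made up, not Skild AI's actual method):

```python
import random

def walk_quality(gait, broken_leg):
    """Toy simulator: score a gait (tuple of per-leg amplitudes).
    The damaged leg contributes nothing and actually costs effort,
    so good gaits shift work onto the remaining legs."""
    return sum(a for i, a in enumerate(gait) if i != broken_leg) - 0.5 * gait[broken_leg]

def adapt(n_legs=6, broken_leg=0, trials=20, seed=0):
    """Random-search adaptation: try candidate gaits on the damaged
    robot and keep the best one found after a few tries."""
    rng = random.Random(seed)
    best_gait, best_score = None, float("-inf")
    for _ in range(trials):
        gait = tuple(rng.uniform(0.0, 1.0) for _ in range(n_legs))
        score = walk_quality(gait, broken_leg)
        if score > best_score:
            best_gait, best_score = gait, score
    return best_gait, best_score

gait, score = adapt()
```

Real systems replace the random search with far smarter strategies (e.g. pre-computed gait repertoires), but the shape of the loop, try, score, keep, is the same.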
Robotics In Everyday Work
Explore top LinkedIn content from expert professionals.
-
Car inspection driven by AI! 🔦

BMW Group has become the first automaker to use AI-driven robots at scale for paint inspection and processing, rolling out the technology at its German plant. These robots from KUKA inspect, sand, polish, and mark vehicle surfaces, ensuring higher quality and shorter lead times.

Using pattern projection and advanced cameras, the system detects even the smallest flaws in the paintwork, creating a 3D image to guide the robots. Unlike traditional automation, these AI-powered robots adapt their process for each vehicle, performing 1,000 unique inspections daily.

While robots handle most of the work, human workers still refine edges and tight spaces. AI assists by projecting laser guidance, ensuring precise manual finishing. 👨🏻‍🔧

BMW is now exploring further enhancements, such as real-time fault prevention and automated documentation. What a time to be a robotics guy! 😮‍💨

🔔 Hit the bell on my profile to never miss a robot story.
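As a rough illustration of the detection step: the 3D image built from pattern projection can be thought of as a height map compared cell by cell against the reference surface, with deviations beyond a tolerance becoming flaw locations for the robots to sand and polish. The grid values and tolerance below are invented for illustration, not BMW's actual pipeline:

```python
def find_flaws(measured, reference, tolerance=0.05):
    """Compare a measured height map against the reference surface;
    any cell deviating more than `tolerance` is flagged as a flaw,
    along with its signed deviation (bump or dent)."""
    flaws = []
    for y, (m_row, r_row) in enumerate(zip(measured, reference)):
        for x, (m, r) in enumerate(zip(m_row, r_row)):
            if abs(m - r) > tolerance:
                flaws.append((x, y, m - r))  # position and deviation depth
    return flaws

reference = [[0.0] * 4 for _ in range(3)]  # ideally flat panel
measured  = [[0.0, 0.0, 0.12, 0.0],        # a bump at (2, 0)
             [0.0, 0.0, 0.0, 0.0],
             [0.0, -0.09, 0.0, 0.0]]       # a dent at (1, 2)
flaws = find_flaws(measured, reference)
```

The flaw list is exactly what a downstream robot needs: where to go, and whether to sand down a bump or fill and polish a dent.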
-
It's clear that humanoids won't fully replace humans in most jobs, but they will undoubtedly transform industries. Humanoid waiters can work long hours without breaks and can perform tasks more efficiently than human waiters. What do you think about this humanoid waitress?

- The complexity of human work demands cognitive depth and emotional intelligence beyond current humanoid capabilities.
- Social skills like empathy and trust are challenging for machines to replicate.
- Creativity and innovative thinking remain strong human advantages.
- Humanoids still face limitations in strength, dexterity, and endurance compared to humans.
- Ethical concerns surround job displacement and social inequality.

Nevertheless, humanoids offer significant enhancements:

- They excel at dangerous or repetitive tasks, boosting safety.
- Humanoids analyze data to aid decision-making.
- They offer personalized customer service and task assistance.
- In healthcare, humanoids support surgery, patient care, and research.

In essence, humanoids will augment human capabilities, not replace them. The future workforce will likely involve a synergistic blend of human and machine strengths.

#robots #Innovation #Technology
-
Breaking Space News 🚨 India's first space robotic arm in action aboard POEM-4. India says hello to the domain of space robotics today. Onwards and upwards!

The Relocatable Robotic Manipulator-TD (RRM-TD) is developed by ISRO-IISU. RRM-TD, aka the Walking Robotic Arm, is a technology demonstrator for the type of robotic arms that will be used on the Bharatiya Antariksh Station to help with its construction and maintenance.

The specialty of this type of robotic arm is its ability to relocate itself along the body of its parent spacecraft by "walking" end over end, similar to how an inchworm or leech moves, grabbing onto fixed grappling points and re-routing power and data connections.
-
🦾 Great milestone for open-source robotics: pi0 & pi0.5 by Physical Intelligence are now on Hugging Face, fully ported to PyTorch in LeRobot and validated side-by-side with OpenPI, for everyone to experiment with, fine-tune, and deploy on their robots!

π₀.₅ is a Vision-Language-Action model that represents a significant evolution from π₀, addressing a big challenge in robotics: open-world generalization. While robots can perform impressive tasks in controlled environments, π₀.₅ is designed to generalize to entirely new environments and situations that were never seen during training.

Generalization must occur at multiple levels:
- Physical level: understanding how to pick up a spoon (by the handle) or a plate (by the edge), even with unseen objects in cluttered environments
- Semantic level: understanding task semantics, such as where to put clothes and shoes (the laundry hamper, not the bed) and which tools are appropriate for cleaning spills
- Environmental level: adapting to "messy" real-world environments like homes, grocery stores, offices, and hospitals

The breakthrough innovation in π₀.₅ is co-training on heterogeneous data sources. The model learns from:
- Multimodal web data: image captioning, visual question answering, object detection
- Verbal instructions: humans coaching robots through complex tasks step by step
- Subtask commands: high-level semantic behavior labels (e.g., "pick up the pillow" for an unmade bed)
- Cross-embodiment robot data: data from various robot platforms with different capabilities
- Multi-environment data: static robots deployed across many different homes
- Mobile manipulation data: ~400 hours of mobile robot demonstrations

This diverse training mixture creates a "curriculum" that enables generalization across physical, visual, and semantic levels simultaneously.

Huge thanks to the Physical Intelligence team & contributors
Model: https://lnkd.in/eAEr7Yk6
LeRobot: https://lnkd.in/ehzQ3Mqy
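To make the co-training "curriculum" idea concrete, here is a minimal sketch of mixture sampling: each training batch draws examples from the heterogeneous sources in fixed proportions, so every gradient step sees web data, robot data, and instruction data side by side. The weights below are invented for illustration; the actual π₀.₅ mixture proportions are not given in this post:

```python
import random

# Hypothetical source weights; real proportions are not published here.
MIXTURE = {
    "web_multimodal": 0.30,
    "verbal_instructions": 0.10,
    "subtask_commands": 0.15,
    "cross_embodiment": 0.20,
    "multi_environment": 0.15,
    "mobile_manipulation": 0.10,
}

def sample_batch(batch_size, rng):
    """Draw one training batch whose examples are spread across
    data sources in proportion to the mixture weights."""
    sources = list(MIXTURE)
    weights = [MIXTURE[s] for s in sources]
    return rng.choices(sources, weights=weights, k=batch_size)

rng = random.Random(42)
batch = sample_batch(64, rng)
```

The key design choice is that no single source dominates: the robot-specific data teaches motor skills, while the web and instruction data carry the semantic knowledge that makes open-world generalization possible.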
-
MIT researchers paired 2,310 people into human-human and human-AI teams to create real ads in a collaborative workspace, tracking 183K messages, 2M copy edits, and over 5M ad impressions, with some fascinating outcomes. The paper "Collaborating with AI Agents: Field Experiments on Teamwork, Productivity, and Performance" examined many facets of the dynamics of human-AI collaboration and what was most effective.

Some of the valuable insights:

🤖 AI changes how teams talk and work together. Human-AI teams sent 45% more messages than human-only teams, with a focus on task execution (suggestions, instructions, and planning), while human teams sent more social and emotional messages. Despite this shift, both team types rated teamwork quality similarly, showing that collaboration can remain strong even when social interaction drops.

🧍➕🤖 One person plus AI can match or beat human teams. Individuals in human-AI teams produced 60% to 73% more ads than individuals in human-human teams, closing the productivity gap that usually favors groups. Despite having only one human per team, human-AI groups created just as many ads overall as two-human teams.

🧠 Human-AI success depends on psychological compatibility. When a conscientious person worked with a conscientious AI, message volume increased by 62%, signaling better engagement. But mismatches had negative effects: for example, extraverted humans working with conscientious AIs saw drops in text, image, and click quality across the board.

📊 AI lets people shift from doing to directing. Participants in human-AI teams made 60% fewer direct text edits than those in human-only teams. Instead of rewriting content themselves, they communicated what needed to be done, refocusing effort from manual changes to guiding and refining AI-generated output.

🔄 AI redistributes cognitive workload and changes who does what. With AI handling routine and complex text generation, humans shifted attention from editing to strategic input and idea generation. This redesigns roles within teams, suggesting new ways to organize work where humans steer and AI constructs.

Humans + AI is the future. This research provides more valuable foundations for understanding how to do this well.
-
Will robots replace humans at Amazon? It's a question I'm often asked, and one I discussed with Jimmy McLoughlin OBE on Jimmy's Jobs of the Future Podcast.

There's this perception that robots will take jobs away, but the reality is much more nuanced, and in many ways far more promising, for Amazon and beyond.

When we first introduced robots into warehouses, some outside the company raised questions about job losses. But over the years, we've found that robotics has unlocked new opportunities. Not only did they double our capacity and boost productivity at Amazon, but they have also increased employment. In fact, our warehouses (or what we call fulfilment centres) with robotics employ 50% more people than traditional sites.

The reason? Robotics has created demand for new roles that didn't exist before. Yes, we still need people for picking, packing, and shipping, but now we also need robotics engineers, technicians, and specialists who can operate, maintain, and improve these systems.

As technology advances, the future of fulfilment isn't about replacing people. It's about expanding possibilities and creating more varied and specialised roles than ever before, while delivering more for customers.

#Amazon #Innovation #Robotics
-
I watched our humanoid make coffee in the office kitchen. The milk was not where it was yesterday. A mug was half blocked by plates. Nothing was scripted. The robot adapted and kept going.

That is the point. Kitchens are messy. Objects move. Layouts change. Interruptions happen. If a robot can operate there without freezing or breaking things, it can handle the edge cases that matter in logistics and industry.

Most robotics demos avoid this. Fixed objects. Clean setups. Repeatable motions. That is not the real world.

At Sereact we build for live, unscripted environments. The coffee is irrelevant. Learning to generalise is everything. If it works in our kitchen, it works in your warehouse.
-
MIT Engineers teach household robots common sense 🤖🧹

Researchers at MIT have developed a way for robots to handle "surprises" in household tasks by drawing on large language models. This approach allows robots to adjust to disruptions on their own, improving efficiency in completing tasks without human intervention.

𝗪𝗵𝘆 𝘀𝗵𝗼𝘂𝗹𝗱 𝘆𝗼𝘂 𝗸𝗻𝗼𝘄❓
1. Robots can already mimic human actions, but they struggle with errors and disruptions.
2. The innovative approach allows robots to self-correct, reducing the need for human intervention.
3. This could significantly advance household robotics and bring robots with enhanced problem-solving abilities.

𝗛𝗼𝘄 𝗱𝗼𝗲𝘀 𝗶𝘁 𝘄𝗼𝗿𝗸⚙️
- The team connected robot motion data with large language models.
- This allowed them to break tasks into subtasks for easier adjustment and correction.
- The algorithm then converted training data into robust robot behavior, despite external perturbations.

👉🏻 𝗔𝗿𝗲 𝘆𝗼𝘂 𝘂𝘀𝗶𝗻𝗴 𝗿𝗼𝗯𝗼𝘁𝗶𝗰 𝗮𝗽𝗽𝗹𝗶𝗮𝗻𝗰𝗲𝘀 𝗮𝗹𝗿𝗲𝗮𝗱𝘆 𝗮𝘁 𝗵𝗼𝗺𝗲?

Investment theme: Automation and Robotics
Source: MIT News / TechCrunch
#investing #robotics #robots #Thematic #investment #Litrendingtopics

PS: Did you know that the world's first prototype of a 'robotic' vacuum cleaner was presented in 1997 by Electrolux? (Source: New Atlas)
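A toy sketch of why subtask decomposition enables self-correction: if the robot can always map its current state to the first unfinished subtask, a disruption simply changes where it resumes, with no human intervention and no restart from scratch. The task, state flags, and completion checks below are hypothetical stand-ins (the actual MIT system grounds subtasks using an LLM over robot motion data):

```python
def current_subtask(state, subtasks):
    """Map the robot's observed state to the first unfinished subtask.
    Re-running this lookup after a perturbation tells the robot where
    to resume instead of replaying the whole demonstration."""
    for name, is_done in subtasks:
        if not is_done(state):
            return name
    return "done"

# Toy "put the cup in the sink" task
state = {"cup_grasped": True, "cup_in_sink": False}
subtasks = [
    ("grasp cup",         lambda s: s["cup_grasped"]),
    ("carry cup to sink", lambda s: s["cup_in_sink"]),
]
step = current_subtask(state, subtasks)       # resumes at the carry step

# Perturbation: someone bumps the cup out of the gripper
state["cup_grasped"] = False
recovered = current_subtask(state, subtasks)  # falls back to re-grasping
```

Without the decomposition, the robot would only know "the trajectory failed"; with it, the disruption maps to a specific subtask to redo, which is exactly the self-correcting behavior described above.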
-
What does 2025 mean for robots?

I recently interviewed Matha Chen, Head of Global Marketing at KEENON Robotics, a cutting-edge robotics company and market leader in service robots, to explore how AI and robotics are transforming our world.

Matha shared how Keenon's service robots (dining, delivery, cleaning, and even a butler bot) are already making life easier in hotels and restaurants around the globe, each uniquely adapted to local needs. My own experience with Keenon's dining bot in Singapore showed just how seamlessly functional robots fit into our daily routines, navigating obstacles and delivering orders with remarkable precision.

While humanoids often capture the spotlight, Matha emphasized that robots come in all shapes and sizes, each serving a specific purpose. And that's the real takeaway: service robots are already here, quietly reshaping industries without us even noticing.

Curious about the next wave of humanoids and special-purpose bots? Check out my full interview with Matha to learn more!