Reachy The Robot Gets A Mini (Kit) Version

Reachy Mini is a kit for a compact, open-source robot designed explicitly for AI experimentation and human interaction. The kit is available from Hugging Face, which is itself a repository and hosting service for machine learning models. Reachy seems to be one of their efforts at branching out from pure software.

Reachy Mini is intended as a development platform, allowing people to make and share models for different behaviors, hence the Hugging Face integration to make that easier. On the inside of the full version is a Raspberry Pi, and we suspect some form of Stewart Platform is responsible for the movement of the head. There’s also a cheaper (299 USD) “lite” version intended for tethered use, and a planned simulator to allow development and testing without access to a physical Reachy at all.
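
If the Stewart platform guess is right, the head's motion control reduces to a pleasingly compact bit of inverse kinematics: for any desired pose, each actuator's length is simply the distance from its base anchor to the rotated-and-translated platform anchor. Here's a quick sketch of that computation with entirely made-up geometry (we don't know Reachy Mini's actual mechanism or dimensions):

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

// Rotate v by roll/pitch/yaw (Rz*Ry*Rx order), then translate by t.
Vec3 transform(Vec3 v, double roll, double pitch, double yaw, Vec3 t) {
  double cr = cos(roll),  sr = sin(roll);
  double cp = cos(pitch), sp = sin(pitch);
  double cy = cos(yaw),   sy = sin(yaw);
  return {
    cy*cp*v.x + (cy*sp*sr - sy*cr)*v.y + (cy*sp*cr + sy*sr)*v.z + t.x,
    sy*cp*v.x + (sy*sp*sr + cy*cr)*v.y + (sy*sp*cr - cy*sr)*v.z + t.y,
    -sp*v.x   + cp*sr*v.y              + cp*cr*v.z              + t.z
  };
}

double dist(Vec3 a, Vec3 b) {
  return std::sqrt((a.x-b.x)*(a.x-b.x) + (a.y-b.y)*(a.y-b.y) + (a.z-b.z)*(a.z-b.z));
}

int main() {
  const double PI = std::acos(-1.0);
  // Made-up geometry: six anchors on 40 mm (base) and 30 mm (head) circles.
  Vec3 base[6], head[6];
  for (int i = 0; i < 6; i++) {
    double a = i * PI / 3.0;
    base[i] = {40*std::cos(a), 40*std::sin(a), 0};
    head[i] = {30*std::cos(a + PI/6), 30*std::sin(a + PI/6), 0};
  }
  // Desired pose: head lifted 60 mm and pitched forward 10 degrees.
  double pitch = 10 * PI / 180;
  for (int i = 0; i < 6; i++) {
    Vec3 p = transform(head[i], 0, pitch, 0, {0, 0, 60});
    std::printf("actuator %d length: %.1f mm\n", i, dist(base[i], p));
  }
}
```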

Reachy has a distinctive head and face, so if you’re thinking it looks familiar, that’s probably because we first covered Reachy the humanoid robot as a project from Pollen Robotics (Hugging Face acquired Pollen Robotics in April 2025).

The idea behind the smaller Reachy Mini seems to be to provide a platform to experiment with expressive human communication via cameras and audio, rather than to be the kind of robot that moves around and manipulates objects.

The project is still in its early days; you can find a bit more information about Reachy Mini at Pollen’s site, and see it move in the short video embedded just below.

A Lockpicking Robot That Can Sense The Pins

A robot that can quickly pick any lock, unsupervised and with the skills of a professional human lockpicker, has been a dream for many years. A major issue with lockpicking robots, however, is the lack of any sensing of the pins – or their equivalent – as the pick works its magic inside. One approach to solving this was attempted by the [Sparks and Code] channel on YouTube, who built a robot that uses thin wires in a hollow key, load cells, and servos to imitate the experience of a human lockpicker working their way through a pin-tumbler style lock.

Although the experience was mostly a frustrating series of setbacks and failures, it does show an interesting approach to sensing the resistance from the pin stack in each channel. The goal when picking a pin-tumbler lock is to find the binding pin, set it at the shear line, and sense any false gates from security pins that may also be in the pin stack. This is not an easy puzzle to solve, which is probably why most lockpicking robots end up just brute-forcing all possible combinations.
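
There's no firmware to quote here, but the sensing idea is simple enough to sketch in Arduino terms. Assuming one load cell per channel read through an HX711 amplifier, with a small servo lifting each feeler wire (every pin number, channel count, and timing below is invented for illustration), a minimal binding-pin probe might look like this:

```cpp
#include <HX711.h>
#include <Servo.h>

// Hypothetical wiring: one HX711 + one feeler servo per pin channel.
const int CHANNELS = 5;
const byte DOUT[CHANNELS]      = {2, 3, 4, 5, 6};
const byte SCK_PIN[CHANNELS]   = {7, 8, 9, 10, 11};
const byte SERVO_PIN[CHANNELS] = {A0, A1, A2, A3, A4};

HX711 cells[CHANNELS];
Servo feelers[CHANNELS];

void setup() {
  Serial.begin(115200);
  for (int i = 0; i < CHANNELS; i++) {
    cells[i].begin(DOUT[i], SCK_PIN[i]);
    cells[i].tare();               // zero with no pressure on the pin
    feelers[i].attach(SERVO_PIN[i]);
    feelers[i].write(0);           // feeler wire fully retracted
  }
}

// Nudge each pin in turn and report which one pushes back hardest.
void loop() {
  int binding = -1;
  float maxForce = 0;
  for (int i = 0; i < CHANNELS; i++) {
    feelers[i].write(10);            // lift the pin slightly
    delay(150);
    float f = cells[i].get_units(3); // average of 3 readings
    feelers[i].write(0);
    if (f > maxForce) { maxForce = f; binding = i; }
  }
  Serial.print("binding channel: ");
  Serial.println(binding);
  delay(500);
}
```

Under rotational tension, the binding pin pushes back noticeably harder than its free-moving neighbors, which is exactly the signal a human picker feels through the pick.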

Perhaps a more traditional turner-and-pick approach – with one or more load cells on the pick and turner – or a design inspired by the very effective Lishi decoding tools would be more effective here. Regardless, the idea of making lockpicking robots more sensitive is a good one, albeit a tough nut to crack. The jobs of YouTube-based lockpicking enthusiasts are still safe from the robots, for now.

Hackaday Links: June 22, 2025

Hold onto your hats, everyone — there’s stunning news afoot. It’s hard to believe, but it looks like over-reliance on chatbots to do your homework can turn your brain into pudding. At least that seems to be the conclusion of a preprint paper out of the MIT Media Lab, which looked at 54 adults between the ages of 18 and 39, who were tasked with writing a series of essays. They divided participants into three groups — one that used ChatGPT to help write the essays, one that was limited to using only Google search, and one that had to do everything the old-fashioned way. They recorded the brain activity of writers using EEG, in order to get an idea of brain engagement with the task. The brain-only group had the greatest engagement, which stayed consistently high throughout the series, while the ChatGPT group had the least. More alarmingly, the engagement for the chatbot group went down even further with each essay written. The ChatGPT group produced essays that were very similar between writers and were judged “soulless” by two English teachers. Go figure.

3D Pen Used To Build Cleaning Robot That Picks Up Socks

Your average 3D printer is just a nozzle shooting out hot plastic while being moved around by a precise robotic mechanism. There’s nothing stopping you from replacing the robot and moving the plastic-squirting nozzle yourself. That’s precisely what [3D Sanago] did to produce this cute little robot.

The beginning of the video sets the tone. “First we create the base that will become the robot vacuum’s body,” explains [3D Sanago]. “I quickly and precisely make a 15 x 15 cm square almost as if I were a 3D printer.” It’s tedious and tiring to move the 3D printing pen through the motions to build simple parts, but that’s the whole gimmick here. What’s wild is how good the results are. With the right post-processing techniques using an iron, [3D Sanago] is able to produce quite attractive plastic parts that almost justify the huge time investment.

The robot itself works in a fairly straightforward fashion. It’s got four gear motors driving four omniwheels, which let it move in any direction with ease. They’re under the command of an Arduino Uno paired with a multi-channel motor driver board. The robot also has a servo-controlled arm for moving small objects. It has no autonomy on board; instead, [3D Sanago] gave it a wireless module so it can be commanded with a PS4 controller. Despite being referred to as a “robot vacuum,” it’s more of a general “cleaning robot,” since it has only an arm to move objects and no actual vacuum hardware. Its prime use? Picking up socks.
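
The video doesn't dwell on firmware, but the mixing math for four 45-degree omniwheels is a textbook exercise, and a quick sketch makes the move-in-any-direction trick concrete. This illustrates the standard X-layout mixing, not [3D Sanago]'s actual code; the pin mapping and the source of the stick values are left out:

```cpp
// Illustrative omniwheel mixing, not [3D Sanago]'s actual firmware.
// X layout: vx = strafe, vy = forward, w = rotate, each in -255..255.
void mixOmni(int vx, int vy, int w, int out[4]) {
  out[0] = vy + vx + w;  // front-left
  out[1] = vy - vx - w;  // front-right
  out[2] = vy - vx + w;  // rear-left
  out[3] = vy + vx - w;  // rear-right
  // Rescale so no wheel command exceeds full PWM.
  int peak = 0;
  for (int i = 0; i < 4; i++) peak = max(peak, abs(out[i]));
  if (peak > 255)
    for (int i = 0; i < 4; i++) out[i] = (long)out[i] * 255 / peak;
}

void setup() {}

void loop() {
  int speeds[4];
  // A pad driver would supply real stick values; this just strafes right.
  mixOmni(200, 0, 0, speeds);
  // speeds[] would then go to the motor driver's PWM inputs.
}
```

The signs are the only real content here: the 45-degree rollers let each wheel push diagonally, and the four diagonals sum to whatever direction you like.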

We’ve seen [3D Sanago]’s fine work before, too. Video after the break.

Honey, I Blew Up The Line Follower Robot

Some readers may recall building a line-following robot during their school days. Involving some IR LEDs, perhaps a bit of LEGO, and plenty of trial-and-error, it was fun on a tiny scale. Now imagine that—but rideable. That’s exactly what [Austin Blake] did, scaling up a classroom robotics staple into a full-size vehicle you can actually sit on.

The robot uses a whopping 32 IR sensors to follow a black line across a concrete workshop floor, adjusting its path using a steering motor salvaged from a power wheelchair. An Arduino Mega Pro Mini handles the logic, sending PWM signals to a DIY servo. The chassis consists of a modified Crazy Cart, selected for its absurdly tight turning radius. With each prototype iteration, [Blake] improved sensor precision and motor control, turning a bumpy ride into a smooth glide.
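
None of [Blake]'s code is shown here, but the classic line-follower control loop scales up essentially unchanged: compute the line position as the average index of the sensors currently seeing the line, then steer proportionally to the offset from center. A minimal sketch (pin assignments, sensor polarity, gain, and servo limits are all illustrative guesses):

```cpp
#include <Servo.h>

// Illustrative line-follower loop, not [Blake]'s firmware.
// 32 digital IR sensors on Mega pins 22-53, steering on a servo output.
const int NUM_SENSORS = 32;
const int FIRST_PIN   = 22;
const int STEER_PIN   = 9;
Servo steering;

void setup() {
  steering.attach(STEER_PIN);
  for (int i = 0; i < NUM_SENSORS; i++) pinMode(FIRST_PIN + i, INPUT);
}

void loop() {
  long sum = 0;
  int count = 0;
  for (int i = 0; i < NUM_SENSORS; i++) {
    if (digitalRead(FIRST_PIN + i) == LOW) { // LOW = sensor over the line
      sum += i;
      count++;
    }
  }
  if (count > 0) {
    float position = (float)sum / count;               // 0..31 across the bar
    float error = position - (NUM_SENSORS - 1) / 2.0;  // 0 when centered
    int angle = constrain(90 + (int)(error * 4), 45, 135); // simple P control
    steering.write(angle);
  }
  delay(10);
}
```

A wider sensor bar pays off in exactly this loop: more sensors under the line means a finer-grained position estimate and smoother steering corrections.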

The IR sensor array, which on the palm-sized vehicle consisted of just a handful of components, evolved into a PCB-backed bar nearly 0.5 meters wide. Potentiometer tuning was a fiddly affair, but worth it. Crashes? Sure. But the kind that makes you grin like your teenage self. If it looks like fun, you could either build one yourself, or upgrade a similar LEGO project.

Hackaday Links: May 4, 2025

By now, you’ve probably heard about Kosmos 482, a Soviet probe destined for Venus in 1972 that fell a bit short of the mark and stayed in Earth orbit for the last 53 years. Soon enough, though, the lander will make its fiery return; exactly where and when remain a mystery, but it should be sometime in the coming week. We talked about the return of Kosmos briefly on this week’s podcast and even joked a bit about how cool it would be if the parachute that would have been used for the descent to Venus had somehow deployed over its half-century in space. We might have been onto something, as astrophotographer Ralf Vandebergh has taken some pictures of the spacecraft that seem to show a structure connected to and trailing behind it. The chute is probably in pretty bad shape after 50 years of UV torture, but how cool is that?

Parachute or not, chances are good that the 495-kilogram spacecraft, built to not only land on Venus but to survive the heat, pressure, and corrosive effects of the hellish planet’s atmosphere, will at least partially survive reentry into Earth’s more welcoming environs. That’s a good news, bad news thing: good news that we might be able to recover a priceless artifact of late-Cold War space technology, bad news to anyone on the surface near where this thing lands. If Kosmos 482 does manage to do some damage, it won’t be the first time. Shortly after launch, pieces of titanium rained down on New Zealand after the probe’s booster failed to send it on its way to Venus, damaging crops and starting some fires. The Soviets, ever secretive about their space exploits until they could claim complete success, disavowed the debris and denied responsibility for it. That made the farmers whose fields they fell in the rightful owners, which is also pretty cool. We doubt that the long-lost Kosmos lander will get the same treatment, but it would be nice if it did.

Hackaday Links: April 27, 2025

Looks like the Simpsons had it right again, now that an Australian radio station has been caught using an AI-generated DJ for its midday slot. Station CADA, a Sydney-based broadcaster that’s part of the Australian Radio Network, revealed that “Workdays with Thy” isn’t actually hosted by a person; rather, “Thy” is a generative AI text-to-speech system that has been on the air since November. An actual employee of the ARN finance department provided Thy’s voice model and headshot, which adds a bit to the creepy factor.
