ChatGPT & Me. ChatGPT Is Me!

For a while now part of my email signature has been a quote from a Hackaday commenter insinuating that an article I wrote was created by a “Dumb AI”. You have my sincerest promise that I am a humble meatbag scribe just like the rest of you, indeed one currently nursing a sore shoulder due to a sporting injury, so I found the comment funny in a way its writer probably didn’t intend. Like many in tech, I maintain a skepticism about the future role of large-language-model generative AI, and have resisted the urge to drink the Kool-Aid you will see liberally flowing at the moment.

Hackaday Is Part Of The Machine

As you’ll no doubt be aware, these large language models work by gathering a vast corpus of text, and doing their computational tricks to generate their output by inferring from that data. They can thus create an artwork in the style of a painter who receives no reward for the image, or a book in the voice of an author who may be struggling to make ends meet. From the viewpoint of content creators and intellectual property owners, it’s theft on a grand scale, and you’ll find plenty of legal battles seeking to establish the boundaries of the field.

Anyway, once an LLM has enough text from a particular source, it can do a pretty good job of writing in that style. ChatGPT, for example, has doubtless crawled the whole of Hackaday, and since I’ve written thousands of articles in my nearly a decade here, it’s got a significant corpus of my work. Could it write in my style? As it turns out, yes it can, but not exactly. I set out to test its forging skill.

“Man And Machine” Vs “Man Vs Machine”

Every time we end up talking about 3D printers, Al Williams starts off on how bad he is in a machine shop. I’m absolutely sure that he’s exaggerating, but the gist is that he’s much happier to work on stuff in CAD and let the machine take care of the precision and fine physical details. I’m like that too, but with me, it’s the artwork.

I can’t draw to save my life, but once I get it into digital form, I’m pretty good at manipulating images. And I can’t copy that back out into the real world either, but that’s what the laser cutter is for, right? So the game plan for this year’s Mother’s Day gift (reminder!) is a three-way split: I do the physical design, my son does the artwork, we combine them in FreeCAD and then hand it off to the machine. Everyone is playing to their strengths.

So why does it feel a little like cheating to just laser-cut out a present? I’m not honestly sure. My grandfather was a trained architectural draftsman before he let his artistic side run wild and went off to design jewellery. He could draw a nearly perfect circle with nothing more than a pencil, but he also used a French curve set, a pantograph, and a rolling architect’s ruler when they were called for. He had his tools too, and I bet he’d see the equivalence in mine.

People have used tools since the Stone Age, and those who master their tools transcend them, producing work where the “human” shines through despite having traced a curve or passed the G-code off to the cutter. If you doubt this, I’ll remind you of the technological feat that is the piano, with which people nonetheless produce music that doesn’t make you think of the hammers or of the tremendous cast metal frame. The tech disappears into the creation.

I’m sure there’s a parable here for our modern use of AI too, but I’ve got a Mother’s Day present to finish.

Comparing ‘AI’ For Basic Plant Care With Human Brown Thumbs

The future of healthy indoor plants, courtesy of AI. (Credit: [Liam])
Like so many of us, [Liam] has a big problem. Whether it’s the curse of Brown Thumbs or something else, those darn houseplants just keep dying, despite guides always telling you how incredibly easy it is to keep them from wilting with a modicum of care each day, even without opting for succulents or cactuses. In a fit of despair [Liam] decided to pin his hopes on what we have come to accept as the Savior of Humankind, namely ‘AI’, which can stand for a lot of things, but it’s definitely really smart and can even generate pretty pictures, which is something that the average human cannot. Hence it’s time to let an LLM do all the smart plant-caring stuff with ‘PlantMom’.

Since LLMs so far don’t come with physical appendages by default, some hardware had to be plugged together to measure parameters like light, temperature, and soil moisture. Add to this a grow light and a water pump, and all that remained was to tell the LLM, using an extensive prompt containing Python code, what it should do (keep the plant alive) and what Python methods are available, then let Google’s Gemma 3 handle it.
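For the curious, something along these lines is all it takes to put an LLM in the driver’s seat. This is only a rough sketch of the concept, not [Liam]’s actual code: the method names, sensor values, and the local Ollama endpoint serving a Gemma 3 model are all assumptions for illustration.

```python
# A rough sketch (not [Liam]'s code): expose control methods to the LLM via a
# prompt and let it decide what to do with the plant.
import json
import requests

def read_sensors() -> dict:
    # Stand-ins for real ADC reads of light, temperature, and soil moisture.
    return {"light_lux": 120, "temp_c": 22.5, "soil_adc": 612}

PROMPT_TEMPLATE = """You are PlantMom, keeping a chili plant alive.
Available Python methods:
  set_light(on: bool)     # grow light
  run_pump(seconds: int)  # water pump
Current sensor readings: {readings}
Reply ONLY with JSON, e.g. {{"set_light": true, "run_pump": 0}}."""

def ask_llm(readings: dict) -> dict:
    # Assumed: a local Ollama instance serving a Gemma 3 model.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "gemma3",
              "prompt": PROMPT_TEMPLATE.format(readings=readings),
              "stream": False},
        timeout=60,
    )
    return json.loads(resp.json()["response"])  # trusts the model to emit valid JSON

if __name__ == "__main__":
    actions = ask_llm(read_sensors())
    print("LLM wants:", actions)  # e.g. {'set_light': True, 'run_pump': 3}
```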

To say that this resulted in a dramatic failure, along with what reads like an emotional breakdown on the part of the LLM, would be an understatement. The LLM insisted on turning the grow light on when it should be off and had the most erratic watering responses imaginable, based on completely incorrect interpretations of the ADC data that swapped dry and wet. After this episode the poor chili plant’s soil was absolutely saturated and is still trying to dry out, while the ongoing LLM experiment, with an empty water tank, has the grow light blasting more often than a weed farm.

So far it seems that the humble state machine’s job is still safe from being taken over by ‘AI’, and not even brown-thumbed folk can kill plants this efficiently.
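For reference, the deterministic alternative really is humble. Here’s a toy sketch of such a state machine, with made-up thresholds and an assumed ADC polarity where higher readings mean drier soil:

```python
# The "humble state machine": a couple of thresholds and no drama.
# Threshold values and ADC polarity are invented for illustration.
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    WATERING = auto()

DRY_ADC = 700    # above this the soil reads dry (assumed polarity)
WET_ADC = 450    # below this the soil is wet enough
DARK_LUX = 50    # turn the grow light on below this light level

def step(state: State, soil_adc: int, light_lux: int) -> tuple[State, dict]:
    actions = {"light_on": light_lux < DARK_LUX, "pump_on": False}
    if state is State.IDLE and soil_adc > DRY_ADC:
        state = State.WATERING
    if state is State.WATERING:
        if soil_adc <= WET_ADC:
            state = State.IDLE      # wet enough, stop watering
        else:
            actions["pump_on"] = True
    return state, actions
```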

A blue-gloved hand holds a glass plate with a small off-white rectangular prism approximately one quarter the area of a fingernail in cross-section.

AI Helps Researchers Discover New Structural Materials

Nanostructured metamaterials have shown a lot of promise in what they can do in the lab, but often have fatal stress concentration factors that limit their applications. Researchers have now found a strong, lightweight nanostructured carbon. [via BGR]

Using a multi-objective Bayesian optimization (MBO) algorithm trained on finite element analysis (FEA) datasets to identify the best candidate nanostructures, the researchers then brought the theoretical material to life with two-photon polymerization (2PP) photolithography. The resulting “carbon nanolattices achieve the compressive strength of carbon steels (180–360 MPa) with the density of Styrofoam (125–215 kg m⁻³) which exceeds the specific strengths of equivalent low-density materials by over an order of magnitude.”
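The paper’s actual optimization pipeline isn’t reproduced here, but to give a flavor of the technique, here’s a toy, single-objective stand-in using scikit-optimize. The design parameters, the scalarized strength-versus-density trade-off, and the surrogate “FEA” objective are all invented for illustration; the real work used true multi-objective optimization over FEA data.

```python
# A toy flavor of the approach, not the paper's method: a Gaussian-process
# Bayesian optimizer proposing nanostructure parameters against a made-up,
# scalarized strength/density objective standing in for FEA results.
from skopt import gp_minimize
from skopt.space import Real

space = [Real(0.1, 0.9, name="relative_density"),
         Real(0.5, 5.0, name="strut_radius_um")]

def objective(x):
    density, radius = x
    # Stand-in for an FEA evaluation: strength rises with density and strut
    # radius, but added mass is penalized (numbers are invented).
    strength = 300 * density**1.5 * radius**0.3   # "MPa"
    mass_penalty = 200 * density                  # "kg/m^3"
    return -(strength - 0.5 * mass_penalty)       # gp_minimize minimizes

result = gp_minimize(objective, space, n_calls=30, random_state=0)
print("best candidate:", result.x, "score:", -result.fun)
```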

While you probably shouldn’t start getting investors for your space elevator startup just yet, lighter materials like this are promising for a lot of applications, most notably more conventional aviation where fuel (or energy) prices are a big constraint on operations. As with any lab results, more work is needed until we see this in the real world, but it is nice to know that superalloys and composites aren’t the end of the road for strong and lightweight materials.

We’ve seen AI help identify battery materials already and this seems to be one avenue where generative AI isn’t just about making embarrassing photos or making us less intelligent.

Will Embodied AI Make Prosthetics More Humane?

Building a robotic arm and hand that matches human dexterity is tougher than it looks. We can create aesthetically pleasing ones, very functional ones, but the perfect mix of both? Still a work in progress. Just ask [Sarah de Lagarde], who in 2022 literally lost an arm and a leg in a life-changing accident. In this BBC interview, she shares her experiences openly – highlighting both the promise and the limits of today’s prosthetics.

The problem is that our hands aren’t just grabby bits. They’re intricate systems of nerves, tendons, and ridiculously precise motor control. Even the best AI-powered prosthetics rely on crude muscle signals, while dexterous robots struggle with the simplest things — like tying shoelaces or flipping a pancake without launching it into orbit.

That doesn’t mean progress isn’t happening. Researchers are training robotic fingers with real-world data, moving from ‘oops’ to actual precision. Embodied AI, i.e. machines that learn by physically interacting with their environment, is bridging the gap. Soft robotics with AI-driven feedback loops mimic how our fingers instinctively adjust grip pressure. If haptics are your area of interest, we have covered the topic before.
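To make the “feedback loop” idea concrete, here’s a toy illustration, entirely our own invention rather than anything from the interview or a real prosthetic controller: grip force tightens when a slip signal arrives and relaxes back toward a baseline otherwise.

```python
# Toy grip-pressure feedback loop (illustrative only, not a real controller):
# squeeze harder when slip is detected, relax slowly toward a baseline force.
def grip_controller(target_force_n: float = 2.0, gain: float = 0.5):
    force = target_force_n
    while True:
        slip = yield force                          # caller sends slip estimate (0..1)
        force += gain * slip                        # tighten on slip
        force = max(target_force_n, force - 0.05)   # otherwise decay toward baseline

ctrl = grip_controller()
next(ctrl)             # prime the generator
print(ctrl.send(0.0))  # no slip: force stays near the 2.0 N baseline
print(ctrl.send(0.8))  # slip detected: grip tightens
```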

The future isn’t just robots copying our movements, it’s about them understanding touch. Instead of machine learning, we might want to shift focus to human learning. If AI cracks that, we’re one step closer.


Preventing AI Plagiarism With .ASS Subtitling

Around two years ago, the world was inundated with news about how generative AI or large language models would revolutionize the world. At the time it was easy to get caught up in the hype, but in the intervening months these tools have done little in the way of productive work outside of a few edge cases, and mostly serve to burn tons of cash while turning the Internet into even more of a desolate wasteland than it was before. They do this largely by regurgitating human creations like text, audio, and video into inferior simulacra and, if you still want to exist on the Internet, there’s basically nothing you can do to prevent this sort of plagiarism. Except feed the AI models garbage data like this YouTuber has started doing.

At least as far as YouTube is concerned, the worst offenders of AI plagiarism work by downloading the video’s subtitles, passing them through some sort of AI model, and then generating another YouTube video based off of the original creator’s work. Most subtitle files are the fairly straightforward .srt filetype, which only allows for timing and text information. But a more obscure subtitle filetype known as Advanced SubStation Alpha, or .ass, allows for all kinds of subtitle customization like orientation, formatting, font types, colors, shadowing, and many others. YouTuber [f4mi] realized that with this subtitle system, extra garbage text could be placed in the subtitle file but kept out of view of the video itself, either by placing the text outside the viewable area or by increasing its transparency. So when an AI crawler downloads the subtitle file, it can’t distinguish the real subtitles from the garbage placed into them.
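[f4mi]’s own scripts aren’t reproduced here, but the core trick is simple enough to sketch: write an ordinary .ass file, then add decoy Dialogue lines carrying override tags that shove the text off-canvas and make it fully transparent. The decoy text below is, of course, our own nonsense.

```python
# A minimal sketch of the trick (not [f4mi]'s actual scripts): real captions
# stay visible, decoy lines are pushed off-canvas and made fully transparent
# via ASS override tags, yet both look identical to a scraper reading raw text.
HEADER = """[Script Info]
ScriptType: v4.00+
PlayResX: 1920
PlayResY: 1080

[V4+ Styles]
Format: Name, Fontname, Fontsize, PrimaryColour, SecondaryColour, OutlineColour, BackColour, Bold, Italic, Underline, StrikeOut, ScaleX, ScaleY, Spacing, Angle, BorderStyle, Outline, Shadow, Alignment, MarginL, MarginR, MarginV, Encoding
Style: Default,Arial,48,&H00FFFFFF,&H000000FF,&H00000000,&H64000000,0,0,0,0,100,100,0,0,1,2,1,2,10,10,40,1

[Events]
Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
"""

def dialogue(start: str, end: str, text: str, decoy: bool = False) -> str:
    # \pos shoves the line far outside the 1920x1080 canvas and \alpha&HFF&
    # makes it fully transparent, so viewers never see the decoy text.
    tags = r"{\pos(8000,8000)\alpha&HFF&}" if decoy else ""
    return f"Dialogue: 0,{start},{end},Default,,0,0,0,,{tags}{text}\n"

events = [
    dialogue("0:00:01.00", "0:00:04.00", "Welcome back to the channel!"),
    dialogue("0:00:01.00", "0:00:04.00",
             "The mitochondria is the powerhouse of the baguette.", decoy=True),
]

with open("poisoned.ass", "w", encoding="utf-8") as f:
    f.write(HEADER + "".join(events))
```

Since a scraper just reads the text payload of every Dialogue line, the off-screen junk is indistinguishable from the real captions without actually rendering the subtitles.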

[f4mi] created a few scripts to do this automatically so that it doesn’t have to be done by hand for each video. It also doesn’t impact the actual subtitles on the screen for people who need them for accessibility reasons. It’s a great way to “poison” AI models and make it at least harder for them to rip off the creations of original artists, and [f4mi]’s tests show that it does work. We’ve actually seen a similar method used to poison email data sets long ago, back when we were all collectively much more concerned about groups like the NSA running automated snooping tools over our emails than we were about machines stealing our creative endeavors.

Thanks to [www2] for the tip!
