We're living in interesting times...
A few months back, I thought AI was quite weak at writing Excel formulas - it often made up functions, messed up syntax, and so on. And there are plausible reasons why Excel formulas would be harder to train on than most other computer code, which tends to be more structured.
But recently, Thomas Marcelo showed that Google Gemini could do a pretty solid job of writing single complex formulas (which is both much easier and much harder than building a full model), and got the Excel esports community in a spin about AI coming for us.
Then Eric Ashton won the last esports challenge using an AI-assisted approach (this is explicitly permitted in the rules, by the way).
So I decided to experiment a little with having AI solve a case, specifically aiming for a setup that would allow for minimal human guidance at solve time. This involved doing a few hand-holding solves, taking note of the things I had to keep telling it, and including those in an up-front prompt.
The result was a little wobbly (it didn't get everything on the first go), but it still completed the solve much faster than the top humans, with no more helpful steer from me than "that gave X instead of Y on the example" when the formula it gave me didn't work.
This is a familiar feeling for many at the moment: I thought very recently that we were still quite a way off this... but here we are.
As an aside, even after seeing this, I am NOT worried about AI taking my job for a variety of reasons, but that's a whole post for another day...
Here's the video link:
https://lnkd.in/eMCEQ6fU
P.S.: I'll share more details about the workings behind the prompt soon. But the main thing to tell you is that my total time experimenting with this, up to and including this solve, was maybe 4 hours - this was not the product of months of hard work!