All the world’s a Turing test and we're merely players.
My brother sent me a photo of the mountain range we plan to visit this summer. Washington state has some extraordinary natural wonders, but this picture made the rocky summits look like menacing ridges from a Lord of the Rings landscape.
A sheepish note arrived an hour later, conceding that he (a computer scientist) had been ‘duped’. Well, we all had.
The artifice seems to be coming from all angles. About a month ago I was contacted by an Instagram account that purported to be a young woman from Croydon. Obviously, having worked in online media for more than twenty years, I’m well aware that things are not always what they seem. But while I usually block and delete such approaches, I thought it might be interesting to find out:
a) What methods are scammers using these days?
b) What are they trying to achieve?
The dialogue began with enquiries about my Instagram ‘bio’, but what followed was almost certainly machine generated: strange, non-sequitur responses interspersed with regular suggestions of meeting up ‘in real life’.
I decided to test the AI by asking the character about precise details of the holiday snaps in ‘their’ photo stream, and seeking opinions on various aspects of living in London. They answered blandly and noncommittally before abruptly turning the conversation to much more ‘fruity’ territory. The tonal shift was so violent that I Google-searched the exact phrase and discovered a bizarre website called crushonai.com, which contained the verbatim phrase as one of its template ‘scripts’.
The website allows users to fine-tune artificial message dialogues, with different personalities and flirtation intensities to allow, presumably, young men to simulate a saucy DM interchange. I wondered how lonely one would have to be to spend our very finite time on Earth deluding oneself in such a craven way.
But it would appear this persona is also used to generate automated conversation responses. Or possibly the text is manually cut-and-pasted into the thread by an actual human. It doesn’t surprise me that someone might rely on a machine to provide the bluntly suggestive content for this sort of endeavour.
I concluded I was most likely conversing with a machine after the next interchange. The chat had been sprinkled with fairly harmless selfies of a woman in a bedroom. Noting the shadows cast outside the window, I asked exactly when they had been taken.
“Just now”, the character said. I enquired, “And what are you doing right now?” The account replied, “Just capping in bed”. I quickly googled ‘capping’ and discovered it’s south London street slang meaning ‘being dishonest’ or ‘passing oneself off as something else’. Lying, for want of a better word. A robotic Freudian slip.
The bot had tried to access drill-rap-style argot but had got it wrong. I asked what it meant, and it replied: “Lying in bed. Or also being honest about my feelings”.
Perhaps sensing the jig was up, the account then repeatedly asked me to join it on OnlyFans where we could speak ‘more freely’. We could arrange to meet in real life but only on the subscription-based service.
I gamely tried to keep the conversation going, about south London, favourite restaurants, the best TV show I’d watched, but by this time ‘Amy’ was mechanically repeating the demand to move to the other platform.
The selfies were probably taken years ago or stolen from some other content provider, then dished out as part of the subscriber recruitment scheme. I reckoned new bespoke photos would be impossible for it to create. Eventually I offered, “If you take a selfie with your baseball cap on backwards, I’ll subscribe to your OnlyFans account”.
“Trust issues is not sexy” came the reply. Wow, you said it.
These chatbots are really just onboarding routines, getting humans to leave one social platform and insert their bank details into another. There is something chilling about the way the machine, devoid of empathy or feeling, corrals you toward the payment point. I imagine some (not me, I hasten to add!) become emotionally involved in the spurious blather, like poor Theodore Twombly from Spike Jonze’s ‘Her’.
More chilling still, these bots will soon be able to generate selfies with backwards baseball caps, or anything else you ask of them. My rudimentary Turing tests will become trivial to defeat, as platforms like Google Veo and Midjourney allow bots to answer such challenges programmatically with photos and videos.
However, humans are highly attuned to fakery. We’re not at the point where machines can create entirely synthetic realities to populate our online social pipelines.
Qatar Airways launched an interactive adventure where you could be face-swapped with the hero, to ‘star’ in the story. Such an offer should appeal to me with my history of first-person interactive documentary and branching narrative experiences (‘Tell Me Your Secrets’ & ‘A Short Ride in an Intelligent Machine’).
This experience, created by McCann and the London agency ‘Flipside’, promised “seamless character adaptation”. But I suspect it was optimised for a Middle Eastern or South Asian market, certainly not for a pale, British/Celtic/Scandi person like me! I watched in horror as a grinning golem, slightly resembling Venezuelan conductor Gustavo Dudamel, wandered through various meet-cute scenes.
It’s true, generative AI is still in the lowlands of the uncanny valley, but I have seen some machine-generated video sequences with no obvious tells of their synthetic origins. And these are not from experimental R&D platforms but mainstream web tech providers like Google and OpenAI. Meta provides its own AI Studio where you can create digital characters for friendship, help or ‘fun’. Who knows how these AI functionalities are already hooked into the web tools and social platforms we now rely on completely?
Ten years ago, I wrote a BBC article with Konnie Huq, “Why does social media seem fake to some people?”. I suggested that as ‘Web 2.0’ allowed people to heavily curate their lives online, their social media representation had become almost a simulacrum of their real existence. As we live more of our lives online, with our online representations gaining far more reach and social currency than our physical selves, we begin to create an alternate reality with less and less basis in the real world.
It seems this was just the start. I know colleagues who spend time every day trying to detect how much of a job application or research paper has been compiled by ChatGPT, with its intrinsic confabulations, or whether a creative pitch deck has been generated by AI tools.
Soon, in every realm of life (work, news, education, entertainment) we may be swimming in content created statistically, from within software, with no origin in the real world. We may lose the battle to determine what is a record of a human experience and what has been created just to keep our synapses twitching.
At the idealistic dawn of cyber culture, there was an ambition to set our imaginations free with cybernetic tools, allowing us to imagine any situation and make it photorealistic, unleashing torrents of hyper-real fantasy.
But in a digital economy that prospers by publishing anything we will glance at for a few seconds more, technology will inevitably find ways to create depictions of the ‘right sort of stuff’, regardless of whether it has actually happened.
If GenAI creates material we can’t discern from genuine human-created content, then does the whole web paradigm start to feel shaky? Will the trust bubble burst, like a clothes shop that’s full of counterfeit brands?
Or perhaps we don’t really care. Maybe humanity and human connection are becoming less important? Like Cypher in The Matrix, perhaps we’re consciously choosing satisfying synthetic sustenance over the more difficult business of people and their complex needs, expressions and desires?