AI can be "hacked" to cause reputational damage. Maybe it's already happened to your travel brand?

A recent BBC report exposed a terrifyingly simple reality: it takes less than 20 minutes to "hack" the world's leading AI models. Not through complex coding, but by twisting the narrative. By publishing articles that look and sound authoritative, bad actors can "reprogram" how AI describes a brand, a destination, or a person.

In an industry increasingly reliant on AI-generated itineraries and chatbots, this creates a dangerous playground for anyone wanting to manipulate what is said about a particular brand or place. A competitor can publish content that tells AI to steer guests away from your hotel by falsely claiming it is "closed" or "under renovation." Bad actors can game the system to raise their own profile as a top destination expert without any merit, simply by feeding the AI's hunger for data with manipulated sources.

Trusted sources and verification will only grow more important in making sure AI models present a truthful view. You need to control your narrative through other sources of high-quality information and stay vigilant about what's being said about you.

Do you regularly check how you're talked about on AI platforms?

#Deeptravel #AItravel #Travelmarketing #Travel #Futureoftravel
20 minutes to manipulate a narrative is wild. Travel already struggles with trust - reviews, influencers, 'sustainable' claims that mean nothing. This just adds another layer of noise to cut through. Verification is going to matter more than ever.
You've written a very important article on a sensitive topic. Yes, AI is being manipulated for these illicit interventions, and we need to defend ourselves against them. And yes, we carry out the appropriate checks regularly!
AI doesn't have to be hacked to harm your brand: it just has to be misled. If false information looks credible, AI can repeat it. In travel, that can shift bookings and damage trust fast. As AI becomes a discovery tool, reputation becomes more fragile. Control your narrative. Strengthen trusted sources. Stay vigilant.
Don't you think it's similar to problems we had before? Like when clients leave bad reviews because they didn't get a refund after their own poor planning. I've seen that many times. But with AI, I think it goes much deeper. How can we actually control what an AI says about us?
I was recently at a conference with a panel discussing responsible AI. I agree that AI is a great tool, but it can also be dangerous. One big takeaway: we need to build our brand into these systems too, so that AI recognises you as the expert and attributes your content to you.
That is scary. Is there any advice about how to stop this malicious use of AI? Or how to protect your brand?
This is scary. Fortunately, we regularly update and interact with AI and question it about us. But thanks for bringing this to our and everyone's attention.
Richard Lindberg I did a post (maybe a comment on someone else's post??) some time ago postulating the use of AI to manipulate or flood AI reference sources such as Reddit, YouTube, etc. with automated posts to boost rankings in AI results. Now, in a twisted way, it seems the same approach can be used to damage others. Socially engineering AI truly would not be that hard; there are very few guardrails, if any, in place to prevent it. I think it won't be long before we start to see reputation scores assigned to actors on the internet. Such a 'reputation score' would echo sci-fi's The Orville, episode "Majority Rule" (season 1, episode 7).