CNN [excerpt]: According to a report by Stockholm University’s Varieties of Democracy Project, published in March this year, #Taiwan for the 10th consecutive year received the greatest amount of #disinformation from outside its borders, highlighting the need for effective fact-checking mechanisms on the island.

A growing security risk

Disinformation is something Taiwan’s security agencies are particularly alert to. At a recent closed-door security briefing attended by CNN, Taiwan’s intelligence community warned that #China has been working to influence Taiwan’s upcoming #election through a series of disinformation, military and economic operations, with the goal of boosting the chances of opposition candidates who favor improving ties with Beijing. According to Taiwanese intelligence, Wang Huning, the fourth-ranking leader in the Chinese Communist Party, recently convened a meeting to coordinate efforts to influence the election while reducing the likelihood that external parties could find evidence of such interference.

“They hope that the party they dislike will lose the election,” said a senior Taiwanese security official, referring to the ruling Democratic Progressive Party (DPP), which views Taiwan as a de facto sovereign nation and has prioritized elevating Taipei’s ties with Western powers since taking office in 2016. The DPP’s candidate, Vice President Lai Ching-te, is currently leading in the polls and is openly loathed by Chinese officials. Lai is ahead of two other candidates – Hou Yu-ih of the Kuomintang and Ko Wen-je of the Taiwan People’s Party – who are seen as favoring closer relations with Beijing.

Among the different strategies deployed by Beijing, Taiwan believes China’s cognitive warfare operations – which include spreading disinformation in Taiwan and magnifying talking points that favor China-friendly candidates – are the most sophisticated, multiple officials said at the briefing.
The officials alleged that China’s information operations are multifaceted, going well beyond operating content farms and fake accounts on #socialmedia. Other tactics attributed to Beijing include working with private companies to impersonate genuine #news websites; handpicking soundbites that fit Beijing’s narratives from Taiwanese television programs and repackaging them into short social media videos; and illicitly funding small news organizations in Taiwan that mostly report on local livelihood issues but occasionally post content casting doubt on candidates unfavorable to Beijing. ...Besides spreading rumors, Beijing has also been pressuring Taiwanese businesses with investments in mainland China to toe the party line, and luring Taiwanese politicians with discounted trips to mainland cities in an attempt to generate support for candidates lobbying for closer ties with Beijing, the officials claimed. #geopolitics
Political Propaganda Techniques
-
Microsoft’s second Threat Intelligence Election report for the USA published today. BLUF: Russian efforts are focused on undermining U.S. support for Ukraine, while China seeks to exploit societal polarization and diminish faith in U.S. democratic systems.

🇷🇺 For example, the actor Microsoft tracks as Storm-1516 has successfully laundered anti-Ukraine narratives into U.S. audiences using a consistent pattern across multiple languages. Typically, this group follows a three-stage laundering process, with a fourth stage of organic spread:
1️⃣ An individual posing as a whistleblower or citizen journalist seeds a narrative on a purpose-built video channel.
2️⃣ The video is then covered by a seemingly unaffiliated global network of covertly managed websites.
3️⃣ Russian expats, officials, and fellow travellers amplify this coverage.
4️⃣ Ultimately, U.S. audiences repeat and repost the disinformation, likely unaware of its original source.

🇨🇳 China is using a multi-tiered approach in its election-focused activity. It capitalizes on existing socio-political divides and aligns its attacks with partisan interests to encourage organic circulation.

💻 China’s increasing use of AI in election-related influence campaigns is where it diverges from Russia. While Russia’s use of AI continues to evolve in impact, People’s Republic of China (PRC) and Chinese Communist Party (CCP)-linked actors leverage generative AI technologies to effectively create and enhance images, memes, and videos. 🤡 Audiences do fall for generative AI content on occasion, though the scenarios that succeed involve considerable nuance.
The following factors contribute to generative AI risk to elections in 2024:
✔️ AI-enhanced content is more influential than fully AI-generated content
✔️ AI audio is more impactful than AI video
✔️ Fake content purporting to come from a private setting, such as a phone call, is more effective than fake content from a public setting, such as a deepfake video of a world leader
✔️ Disinformation messaging has more cut-through during times of crisis and breaking news
✔️ Impersonations of lesser-known people work better than impersonations of very well-known people such as world leaders
Report: https://lnkd.in/d-DjesN6
-
Russia is employing new tactics to spread disinformation in Scotland, now targeting small language communities through automated websites in minority languages. One recent example is the Pravda Alba website, which publishes fake news in Gaelic – a language spoken by only 1 in 40 Scots – to fuel ethnic tensions and discredit local politicians such as Scottish Labour leader Anas Sarwar. The site falsely claims, without evidence, that he is working to allow Pakistani Muslims to dictate what is taught in schools, among other racist and xenophobic insinuations: https://lnkd.in/dUdcYy3p.

This campaign is part of a broader Russian strategy to destabilize Western democracies through disinformation, now using artificial intelligence for automated translation and mass dissemination of content in dozens of languages. The network of sites, bearing the name Pravda (“truth” in Russian), covers more than 80 countries and 130 different platforms, including those targeting Maori in New Zealand and Welsh speakers in the UK, and is already well known to information operations and disinformation researchers. The goal is to flood the internet ecosystem with pro-Russian fake news so that even chatbots and search engines start reproducing these narratives as credible information.

Experts note that the choice of small language communities is deliberate: disinformation spreads more easily where there is a lack of high-quality content in those languages, and automated translation allows for the rapid and cheap production of large volumes of material. The strategy also relies on the idea that even a small portion of the affected community might pass these false narratives on to the English-speaking majority. In the case of Pravda Alba, the content is machine-translated from Russian, often with grammatical and semantic errors, highlighting the campaign’s mass-produced rather than personalized nature.
Although the site does not have a large audience, specialists warn that flooding the internet with such materials has a long-term effect, undermining trust in the media and making it easier for disinformation to penetrate automated systems and search engines. Russian disinformation operations in Scotland are not new. Moscow has previously attempted to influence public opinion through outlets like Sputnik, social media, and campaigns to sow distrust in institutions and societal division. What’s new is that, with the help of artificial intelligence, this strategy can now be applied even to the smallest linguistic and cultural communities, making it even harder to counter.
-
⏰ New paper, now out in Political Psychology. We created and tested humorous videos to help people spot 6 malign rhetorical techniques: fake experts, polarisation, conspiracy theories, the straw man fallacy, whataboutism, and moving the goalposts. The videos are hilarious (developed by Luke Newbold and Sean Sears at LENS CHANGE LTD). They explain how and why people might use these techniques to mislead you, and why they're fallacious.

For example, in the "whataboutism" video, a fake medical doctor (Dr Trusmi) is on trial for selling grapefruit juice as a cure for a broken arm; his argument that his rival, Dr Scamu, is not in jail for selling apple juice as a cure for a broken leg is sadly not accepted by the judge. Another example is the "moving the goalposts" fallacy: during a TV quiz (Facts Galore), quiz participant Professor Thorow presents a 200-page rebuttal of her opponent's thesis that cows lay eggs. The opponent, Mean Jean the Facts Machine, counters this ostensibly persuasive rebuttal by saying that it failed to include a recent eyewitness report from a farmer who claimed he saw one of his cows lay an egg just two weeks ago. Therefore, she argues, the jury is still out. The audience votes for who had the best argument, and of course Mean Jean wins handily.

In two studies (N1 = 1,583 and N2 = 1,603) we tested how well the videos worked at boosting recognition of these techniques. This time, we find somewhat mixed results; most videos were successful at improving technique recognition, but there were also some unintended effects on people's evaluation of non-misleading content. This happens sometimes in these types of studies (and with different types of interventions). The story is nuanced and fun to dig into, so please have a look :) The study was led by the very excellent Mikey Biddlestone, together with Jane Suiter, Eileen Culloty, and Sander van der Linden.
Here's the link to the paper: https://lnkd.in/evKe969g Watch the videos here: https://lnkd.in/ezccbG5a
-
𝗪𝗶𝗹𝗹 𝘁𝗵𝗲 🇺🇸 𝟮𝟬𝟮𝟰 𝗨𝗦 𝗣𝗿𝗲𝘀𝗶𝗱𝗲𝗻𝘁𝗶𝗮𝗹 𝗘𝗹𝗲𝗰𝘁𝗶𝗼𝗻 𝗕𝗲 𝗟𝗶𝗸𝗲 𝟮𝟬𝟭𝟲 🐘 𝗼𝗿 𝟮𝟬𝟮𝟬 🐎 ?

In our latest study, published in the Journal of Business Research, Prof. dr. Koen Pauwels, Dr. Kai Manke, and I analyze over 200 million social media posts to map the dynamic system and "echoverse" of political marketing. By combining campaign advertising data with media coverage, both online and offline word-of-mouth (WoM) data, disinformation, and candidates’ own social media posts, we demonstrate that the political marketing system is far more dynamic than the traditional marketing echoverse (as shown by Hewett et al., 2016). Our empirical analysis uncovers numerous bi-directional effects among the various stakeholders in this system, which can be leveraged to generate attention, engagement, media coverage, and ultimately, support for a candidate.

Our key findings include:
💯 Both social media and traditional TV campaigns influence polls; however, social media is becoming more impactful than traditional TV.
📱 Candidates’ social media actions drive online discussions, which in turn influence polling numbers.
📣💩 Social media chatter and polling data significantly drive disinformation volume, which leads to more media attention and further impacts polls.
📰 Traditional media coverage is predominantly driven by social media discussions and disinformation, amplifying online debates and ultimately enabling disinformation to influence polls.
🛑 While external events do affect political support, these impacts are much smaller than those driven by marketing, media, and WoM effects.

🗳 What should we expect in 2024? It seems the media still hasn’t learned its lesson. Trump continues to receive significantly more media coverage and attention than Harris. Once again, media coverage is heavily driven by social media interest and outcry, amplifying the MAGA narrative due to the rivalry between traditional and social media.
Both candidates are seeing high social media engagement, but Trump benefits from a larger established user base, while Harris has lost momentum over the past three to four weeks. Where does that leave us? Certainly, in a very close race that is difficult to predict, but with a slight advantage for Trump. Study link in the first comment!
-
Rhetorical Engineering: How Trump Algorithmized Political Persuasion

🗣 Trump's use of algorithmic rhetoric represents a fascinating evolution in political communication, where classical persuasive techniques merge with computational pattern optimization. This approach demonstrates how modern communication can be systematized and optimized like a computer algorithm while maintaining powerful emotional impact.

The first striking element is the mathematical precision in Trump's language construction, with a measurable 78% single-syllable word usage. This isn't random simplification but rather a carefully optimized token reduction, similar to how machine learning models minimize computational complexity. The systematic placement of impact words at sentence endings creates a predictable pattern that maximizes message retention and emotional response.

Second, Trump's speech patterns implement a sophisticated feedback loop system. Like machine learning algorithms that optimize based on response data, his rhetoric adapts to audience reactions, reinforcing successful patterns and discarding ineffective ones. This is evidenced by his frequent references to what "people are saying" and his adjustment of the message based on public response.

Third, the emotional loading of key terms follows a pattern similar to sentiment analysis in natural language processing. Words like "tremendous," "problem," and "harm" are strategically deployed with specific frequency patterns, creating an emotional architecture that's both predictable and effective.

The effectiveness of this algorithmic approach stems from its ability to bridge the gap between computational optimization and human psychology. By reducing language complexity while maximizing emotional impact, it creates a highly efficient communication system. The approach mirrors how machine learning models find optimal solutions through iterative improvement, but applies this to human communication.
What makes this particularly powerful is its scalability and reproducibility. Like a well-designed algorithm, these communication patterns can be deployed consistently across different contexts while maintaining their effectiveness. The systematic use of repetition, simple vocabulary, and emotional triggers creates a predictable yet powerful effect on audiences. This algorithmic rhetoric connects directly to broader trends in modern communication, particularly in social media and digital platforms. The optimization of message delivery for maximum impact parallels how algorithms optimize content for engagement. It also relates to changes in public discourse, where traditional eloquence has given way to optimized persuasion techniques.
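The lexical claims above (e.g. a high share of single-syllable words) are the kind of thing that can be checked mechanically. Here is a minimal Python sketch that estimates the single-syllable word ratio of a text using a rough vowel-group heuristic; the heuristic and the sample sentence are illustrative assumptions of mine, not taken from the post, and the 78% figure remains the post author's claim.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate: count runs of vowels, discounting a silent final 'e'."""
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    n = len(groups)
    # 'done' -> 1 syllable, but keep 'little'/'free' style endings intact
    if word.endswith("e") and n > 1 and not word.endswith(("le", "ee")):
        n -= 1
    return max(n, 1)

def monosyllable_ratio(text: str) -> float:
    """Fraction of words judged to have exactly one syllable."""
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    mono = sum(1 for w in words if count_syllables(w) == 1)
    return mono / len(words)

# Invented sample in the register the post describes (not an actual quote)
sample = ("We will win, and we will win big. They said it could not be done, "
          "but we did it, and it was tremendous.")
print(f"{monosyllable_ratio(sample):.0%}")
```

Run over a real speech transcript, the same two functions would give an empirical figure to compare against the post's 78% claim; a production analysis would want a dictionary-based syllable counter rather than this heuristic.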
-
Love linguistics - ad hominem fallacy

Yesterday, I explored equivocation https://lnkd.in/e25Q_PdQ ; today it’s the ad hominem fallacy, which occurs when an argument is dismissed or attacked on the basis of the person making it, rather than the merits of the argument itself. Instead of addressing the reasoning, attention is shifted to irrelevant personal traits, motives, or circumstances.

There are several common forms. An abusive ad hominem involves directly insulting the opponent. A circumstantial ad hominem dismisses an argument by pointing to the speaker’s background, bias, or interests. A tu quoque (‘you too’) argument accuses the opponent of hypocrisy rather than addressing the issue itself.

In literature, Shakespeare provides an example in Julius Caesar: Caesar waves away the threat posed by Cassius by remarking that he has a ‘lean and hungry look’, judging the man’s character rather than any argument. Similarly, in Orwell’s Animal Farm, the pigs deflect criticism by blaming Snowball’s supposed treachery, rather than confronting the accusations levelled against them.

In politics, ad hominem attacks are frequent. Churchill’s arguments were sometimes brushed aside because of his drinking habits, rather than the strength of his speeches. In modern debates, climate activists are often labelled ‘naïve’ or ‘privileged’, rather than having their evidence addressed. More modern examples come from across the world of politics. Donald Trump has regularly used ad hominem attacks on his opponents: branding Hillary Clinton ‘Crooked Hillary’, Joe Biden ‘Sleepy Joe’, and Ted Cruz ‘Lyin’ Ted’. These nicknames targeted character and personality rather than policy. Biden, too, sometimes replied in kind, calling Trump ‘a clown’ - again, an attack on the man rather than an engagement with his argument. The United Kingdom offers its own share of examples.
Boris Johnson was frequently mocked for his messy hair and shambolic appearance, with critics using these traits to undermine his competence instead of engaging with his arguments. Keir Starmer has often been dismissed by opponents as ‘boring’ or ‘wooden’, rather than addressing Labour’s policies. During Brexit debates, those who opposed leaving the EU were labelled ‘Remoaners’ - an example of a circumstantial ad hominem, focusing on alleged attitudes rather than the reasoning behind their position. In everyday rhetoric, it appears in remarks such as: ‘You can’t take financial advice from him - he’s divorced and broke.’ ‘Don’t listen to his views on healthcare - he’s overweight.’ The danger of ad hominem is that it distracts from substance. By making the debate personal, it diverts attention from whether the reasoning is sound, leaving arguments untested and often unchallenged.
-
Major respect to Saman Nazari, Maria V. and Pavlo Kryvenko, and contributor Aleksandra Wójtowicz for their incredible work furthering research into the Doppelganger disinformation network, specifically on #Poland 🇵🇱. Shout out to Marie-Doha Besancenot for highlighting this article this morning.

➤ What Alliance4Europe and Debunk.org Exposed:
• Russia’s Doppelganger network reactivated — targeting the Polish presidential elections with anti-EU, anti-Ukraine, anti-establishment narratives.
• 279 coordinated inauthentic posts pushing divisive, high-emotion narratives on social media (mostly on X).
• False amplification tactics: fake Polish citizen personas, mass spam via bot networks, and real Polish news articles repurposed to inject disinfo.
• The Social Design Agency (a sanctioned Russian entity) is directly behind the operation.
• Clear election interference intent: inflame divisions, degrade political trust, and facilitate state propaganda.

➤ Tactics Identified (per DISARM Framework):
✴ Divide society (using cost of living, climate, immigration).
✴ Degrade adversaries (smear campaigns against Donald Tusk, EU leaders).
✴ Amplify pro-Kremlin narratives under the guise of local discourse.
✴ Fabricate legitimacy by using stolen or purchased X accounts dating back years.

➤ The Broader Implications:
⚡ Influence operations are moving from fake news creation to hijacking legitimate media and manufacturing grassroots voices.
⚡ Cognitive warfare now involves persistent, low-attribution manipulation — the information space is under continuous hostile shaping.
⚡ Platform defenses (like X’s) are lagging. Rapid detection and disruption capabilities are critical — especially during elections. The vignette regarding the lack of reaction from X once alerted to the bot network is particularly alarming.
⚡ Defense tech must integrate agentic cognitive defenses — we can’t just monitor narratives anymore; we need dynamic counteraction at machine speed.
The battlefield isn't just physical or cyber anymore — it's inside public opinion, inside societies, and inside democracies. Read the article here: https://lnkd.in/emGssaba #CognitiveWarfare #InformationWarfare #DefenseTech #Disinformation #HybridWarfare #NationalSecurity #Doppelganger #Alliance4Europe #VannevarLabs #CounterInfluence
-
As Canadians prepare for the upcoming federal elections, it’s crucial to recognize the evolving tactics used by state actors to influence democratic processes. From deepfakes to AI-generated disinformation, these tools are being weaponized to distort facts, manipulate opinions, and undermine trust in our institutions. The findings of the Foreign Interference Commission have already highlighted tangible instances of interference in past elections. This is a stark reminder that such acts cannot be ruled out in the future.

Here’s why it matters:
1. Deepfakes – hyper-realistic fake videos or audio – can mislead voters by fabricating statements or actions of political candidates. (You may have seen a number of these videos popping up in your social media feeds lately.)
2. AI-driven disinformation spreads rapidly, making it harder to distinguish between fact and fiction.
3. Coordinated campaigns by foreign actors aim to exploit divisions and sway public opinion.

As citizens, we hold the power to safeguard our democracy. Here’s how:
1️⃣ Fact-check any news or claims about candidates before sharing or believing them.
2️⃣ Stay informed about the tools and tactics being used to manipulate information.
3️⃣ Vote wisely – make decisions based on verified facts, not fabricated narratives.

Canada’s democracy is strong, but all of us must be vigilant. Let’s exercise our democratic rights responsibly and ensure that our elections reflect the true will of the people. Vote for Canada #Democracy #CanadaElections #FactCheck #AI #Deepfakes #ElectionIntegrity
-
Nartey, M. (2019). ‘I shall prosecute a ruthless war on these monsters …' a critical metaphor analysis of discourse of resistance in the rhetoric of Kwame Nkrumah. Critical Discourse Studies, 16(2), 113–130. https://lnkd.in/gNaavppa ABSTRACT: In recent years, studies on discourses of resistance in politics have become prevalent, focusing mainly on the language of radical movements and rebel groups, but not the discourses on colonialism, imperialism, and repression which can be considered as potential sites for discourses of resistance. To fill this gap, this paper critically explores how an independence leader utilized metaphor to construct a discourse of resistance against colonialism and imperialism. It analyzes a number of speeches delivered by Kwame Nkrumah, a pioneering Pan-Africanist and Ghana's independence leader, using a combination of models, including critical metaphor analysis and membership categorization analysis. The analysis illustrates that Nkrumah deployed war/conflict/military and religious metaphors in conjunction with other discursive strategies such as labeling or stereotyping, category work, sentimentalism, victim-playing, and negative other-presentation to formulate a resistance discourse against colonialism and imperialism. These metaphors were exploited through representations of (e)vilification, enemification, demonization, freedom and justice, and attack and defense. This paper provides insight into the use of language in the service of resistance and activism, thereby demonstrating that the use of metaphor by political actors serves manipulative and/or ideological purposes (rather than achieving a literary/stylistic effect) and illustrating that metaphor is essential to a leader's persuasive force. KEYWORDS: Discourse of resistance, Kwame Nkrumah, colonialism, critical metaphor analysis, membership categorization analysis, metaphor