AI will deeply influence election results. Everywhere, and forever.

How AI will influence elections

Politics remains partly untouched by newer concepts such as marketing/propaganda automation, machine learning, AI, and quantum computing. Its decision-making process still resembles what politicians did 100 years ago or more.

Despite the constantly growing impact of digital channels, political propaganda still relies massively on old-fashioned advertising channels: posters, events, TV, newspapers, and pointless gadgets (which in most cases end up as waste and pollution).

Recent posters for national elections in Ireland and Italy, respectively.

Beyond elections, government choices still rely on personal interactions, private meetings, lobbyists, etc. Where's AI? Where's the automation of political decision-making? Are political powers willing to rely on machines, or is the pursuit of personal interests something that logic cannot mediate?

Whatever politicians think, AI is entering politics, at least on the electoral side, and that sounds pretty scary to a sizeable part of the establishment, let's say the most conservative part (not necessarily in the ideological sense).

Machines can generate fake news, and they do it better than humans

Machines will take over part of the journalist's role, in particular for recurrent, repetitive, boring reports, and they can take on the writer's role too. Deep learning can train a machine to generate compelling stories from a body of data far larger than even the most creative human could ever absorb. So in the future, when journalists or writers have to shape a story or write one from scratch, machines may well do a better job of persuading a particular audience.

Researchers at the non-profit AI research group OpenAI (backed, among others, by Elon Musk) set out to train their new text-generation software to predict the next word in a sentence. It blew away all of their expectations and was so good at mimicking human writing that they decided to pump the brakes on the research while they explored the damage it could do.

The researchers used 40GB of data pulled from 8 million web pages to train the GPT-2 software.

In one example, the software was fed this paragraph:

In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.

Based on those two sentences, it was able to continue the whimsical news story for another nine paragraphs, in a fashion that could believably have been written by a human being. Here are the next few paragraphs the machine produced:

The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.

Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.

Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.

The software GPT-2 seemed remarkably good at adapting to the style and content of the prompts it was given.

Read more on Gizmodo.
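GPT-2's training objective, predicting the next word in a sentence, can be illustrated with a far simpler relative: a bigram (Markov-chain) model that learns which word tends to follow which, then samples a continuation. The sketch below is a toy illustration, not OpenAI's actual system; the miniature corpus and function names are invented for the example.

```python
import random
from collections import defaultdict, Counter

def train_bigram_model(text):
    """Count, for each word, which words follow it in the corpus."""
    words = text.split()
    follows = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        follows[cur][nxt] += 1
    return follows

def generate(model, start, length=10, seed=0):
    """Continue a prompt by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:
            break
        words, counts = zip(*candidates.items())
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)

# Tiny invented corpus, standing in for GPT-2's 40GB of web text
corpus = ("the unicorns spoke perfect english . "
          "the unicorns lived in a remote valley . "
          "the valley had a natural fountain .")
model = train_bigram_model(corpus)
print(generate(model, "the", length=6, seed=42))
```

GPT-2 replaces these simple word counts with a large neural network conditioned on much longer context, which is what lets it stay on-topic for whole paragraphs rather than a few words.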

Let a famous person tell the fake news with his own fake face

Ever heard of the software FakeApp? This is way beyond morphing.

Using some of the latest AI techniques, comedian Jordan Peele ventriloquizes Barack Obama, having him voice his opinion on Black Panther (“Killmonger was right”) and call President Donald Trump “a total and complete dipshit.”

The video was made by Peele’s production company using a combination of old and new technology: Adobe After Effects and the AI face-swapping tool FakeApp. The latter is the most prominent example of how AI can facilitate the creation of photorealistic fake videos. It started life on Reddit as a tool for making fake celebrity porn, but it has since become a worrying symbol of the power of AI to generate misinformation and fake news.

Read more on The Verge.

Spread the word to a qualified audience that will multiply the reach on social media

The idea here comes from Larry Kim's article Facebook Ads, Fake News and the Shockingly Low Cost of Influencing an Election [DATA]:

  • Step 1: Create a Fake News Website
  • Step 2: Create a Fake News Page on Facebook
  • Step 3: Create a Facebook Ad, Promoting the Fake News on Your Fake Page

Donald Trump and Hillary Clinton spent a combined $6.8 billion in their bid to become president in 2016. But the U.S. election is remarkably easy (and cheap) to hack. That’s because the outcome of presidential elections often hinges on just a few thousand votes (e.g. Michigan).

Every Facebook ad is given a Relevancy Score between 1 and 10; Facebook rewards advertisers whose ads are highly engaging. According to the author, the fake-news ads scored 7/10.

Let AI improve your targeting

Create channels for both supporters and opponents of a specific brand/cause/person/party under the same umbrella. It's important to attract enemies/opponents for at least three reasons:

  • Exclude opponents from future targeting. It's fine not to preach to the converted (although seeding is important, since it puts supporters to work for you); it's more important not to waste money on the lost battle of trying to convert enemies, at least directly. Let them change their minds later on.
  • Confuse them by creating slightly divergent pathways. The Romans used to say “DIVIDE ET IMPERA” (divide and rule). An “I agree with you, but…” where the “but” starts a new, potentially disruptive saga in the community.
  • Spread soft fake news through apparently supportive channels to distract from the core topics that the opponent's leaders push on their own channels. Are they massively promoting A? Let's debate B, but don't forget C.

Seed with interesting, softly fake stories: AI-generated, perhaps powered by fake interviews in which leaders claim things they might plausibly have done, but that were never confirmed.

Then let your qualified audience, selected through conversion-optimised targeting, spread the news online, including outside Facebook, for example in the dark zone of messengers (WhatsApp, Facebook Messenger, Instagram messages, Twitter Direct Messages, LinkedIn Messages, Viber, Telegram, Line, etc.) that can't be tracked, although it is said to feed retargeting functionality. These channels can help make the stories go viral.

In parallel, let the machine build massive audiences on both sides (pro and against). These audiences can then be extended via Lookalike Audiences (an example is shown below).
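Facebook's Lookalike Audiences are proprietary, but the underlying idea, expanding a seed audience towards users who resemble it, can be sketched as a simple similarity ranking: compute a profile of the seed audience and score candidate users against it. Everything below (user names, engagement features) is hypothetical; real systems use far richer signals.

```python
import math

def centroid(vectors):
    """Mean feature vector of the seed audience."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def lookalikes(seed, candidates, top_k=2):
    """Rank candidate users by similarity to the seed audience's centroid."""
    c = centroid(list(seed.values()))
    scored = sorted(candidates.items(),
                    key=lambda kv: cosine(kv[1], c),
                    reverse=True)
    return [name for name, _ in scored[:top_k]]

# Hypothetical engagement features: [shares, comments, petition signatures]
seed = {"u1": [5.0, 2.0, 1.0], "u2": [4.0, 3.0, 1.0]}
candidates = {"c1": [4.5, 2.5, 1.0],   # behaves like the seed users
              "c2": [0.0, 0.0, 9.0],   # behaves very differently
              "c3": [5.0, 2.0, 0.9]}
print(lookalikes(seed, candidates, top_k=2))
```

The top-ranked candidates form the expanded audience; in practice an advertiser only sets the seed and a size threshold, and the platform does the ranking.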

Again, let AI optimize the campaign for you. Assign a custom conversion to specific actions, such as sharing a campaign, signing a petition, voting online, or responding to a survey in a certain way.

Then the algorithm, if instructed to optimise for that specific custom conversion, will keep showing your ad to similar people who were not initially reachable through the standard targeting categories. Many users don't express themselves publicly but still have an opinion: they are potential dormant amplifiers who can be reached through sophisticated targeting and triggered with stories that touch their instincts (A/B testing helps fine-tune the messages, again a job for AI). The target audience will also be triggered by a growing community of unexpected users campaigning for a specific cause (here, a bunch of fake users can speed up the process).

It is important to keep political branding away from the initial stages of the funnel. The brand will softly emerge as the solution, the only one or the best one, to the issues that generated negative sentiment among users.

Don’t forget search and native ads

Some users are naturally doubtful and will search for proof: let's reassure them. Make sure to have side stories on the same topic and/or revive old scandals to reinforce the seed concept, then boost these stories by remarketing them via programmatic native ads (advertorials appearing below real news on similar topics). Want to do more? Buy keywords related to your opponents and, using custom-built landing pages enriched with relevant keywords, divert search traffic to the entrance of a new political-fiction (pol-fi) saga.

Wait for the Snowball Effect

Larry Kim states, “Buying an Election through social media channels does not cost much, and Facebook is profiting from it big time.”

Now, wait for the snowball effect, when users start to spread the news, adding their own views and doubts, enriching it with other fake stories, and creating a fake memeplex. A seed can generate viral effects in a very short time, with partly unpredictable outcomes.
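The snowball effect can be caricatured as a branching process: each generation of viewers re-shares the story to a new batch of people. The numbers below are invented and real virality is far noisier, but the sketch shows why a small seed can reach a large audience once the effective branching factor exceeds 1.

```python
def snowball_reach(seed_reach, share_rate, avg_friends, generations):
    """Total reach of a story under a crude branching-process model.

    Each viewer re-shares with probability `share_rate`, and each
    share reaches `avg_friends` new people.
    """
    total = seed_reach
    current = seed_reach
    for _ in range(generations):
        # effective branching factor = share_rate * avg_friends
        current = current * share_rate * avg_friends
        total += current
    return total

# Hypothetical numbers: 1,000 seeded viewers, 5% share to 40 friends each
print(snowball_reach(1000, 0.05, 40, generations=5))
```

With these assumed numbers the branching factor is 2, so each generation doubles the previous one; drop the share rate below 2.5% and the same seed fizzles out instead.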

And the attack can go on, generating new fake stories from promising seeds spotted during the viral phase.

While the entities under attack must spend time denying accusations (which, being built on some true seeds, can never be 100% denied), the attackers have time to strike again and to present their own electoral programme, appearing clean, immaculate, stronger: winners. Many voters tend to side with winners.

According to the author, “fake news is cheap and effective”: with a few dollars it is possible to reach thousands of users and let them spread the ‘news’ for free. A fake-news machine powered by AI and other tools (like FakeApp), aimed at selectively targeted users/multipliers, is something old-fashioned campaigners can't fight unless they change their weapons.

After Cambridge Analytica, why all this fear of Facebook?

(What are we talking about? Read more on Facebook-Cambridge Analytica data ‘scandal’)

It's understandable: the constituted powers, i.e. the nation-state apparatuses recycled from generation to generation, fear that new narratives, whether true or false, can be disseminated through new channels that have gained the upper hand over the traditional ones, which are usually piloted, instructed and conniving or, when alternative, marginalised.

Numerous publishers and journalists help create framing that pleases those powers. The fact that outsiders, at any scale, can break into this system by altering or distorting such framing for precise purposes is considered a huge danger to the system's stability. People, i.e. voters, believe whatever they hear/read/see and can multiply messages, creating an uncontrollable snowball effect.

If we love freedom, even the freedom to misinform (something that happens often, since the press and the regimes are in love with each other), we must defend channels like Facebook and work towards more conscious voters rather than less harmful media.

Many run cross-targeting, but nobody says so. Many violate the GDPR, but no one dares denounce it, because it is impossible to police everyone in detail; so the heart of the problem is never struck. Meanwhile, democratic countries like Germany have also decided to censor Facebook (as well as YouTube and Google Maps), much as North Korea does with its own citizens. Censorship can only reinforce the willingness to spread alternative news; it doesn't help in the long term.

Can we trust election results heavily undermined by the growing power of AI, which engulfs masses of voters lacking critical spirit? Maybe not.

To improve the quality of democracy in the current scenario, one answer could be a different way of selecting representatives: skipping elections and giving algorithms clear, transparent instructions that mirror citizens' will, in order to generate the best outcome for the society politicians are supposed to represent. More to come. Stay tuned…

Infographic: the fake-news generation and dissemination process mentioned in the article.