Fake news is one of the most dangerous concepts to come out of this election cycle. Unfortunately, the concept and the tools used to propagate it are here to stay.
Before I get into the really scary stuff, let's define fake news (no quotes) as real fake news: hyperbolic headlines and mostly fictional content meant to incite emotions and/or play to our confirmation bias. Then there is "fake news," which I will define as a straw-man argument used to discount any news that doesn't fit your agenda or beliefs.
A prime example of the straw-man argument is 45's use of "fake news." Beyond the irony of a serial liar and exaggerator throwing the phrase around with reckless abandon, the tactic is obviously meant to undermine and destroy the credibility of the press, which plays a crucial role in our country. The "fake news" claim is never backed up with facts, just more hyperbole. In fact, just yesterday, he claimed the leaks that led to Flynn's resignation were real, but the news about them fake. Can't have it both ways, big guy!
On to the scary stuff, because it shows how dangerous this all becomes when combined with technology.
Not only is fake news hard to detect (although Facebook and Google are hard at work on it), it's also highly profitable for all sides. The content is so profitable because it plays into our confirmation bias: it sells us an idea that we want to believe, or already believe, is true.
Add a platform like Facebook into the mix, where it is super easy to target this content at the specific groups and users most prone to confirmation bias on the post's subject. Pay Facebook some money and watch the likes, shares, and traffic pile up. Traffic for the fake news sites means advertising revenue: upwards of $3,000 a day, according to one report.
Some more specific examples of the viral nature of fake news during the last election cycle:
Real news is costly and not as profitable. The news industry hasn't quite figured out a business model for the internet yet, though the "failing" NY Times is reporting record subscriber numbers.
Hopefully, as the social sites and search engines take the revenue out of the mix, we’ll see less and less of the “in it for the money” publishers.
That's all well and good, except the most dangerous thing to come out of this election and the social media age is the rise of AI-driven propaganda and social engineering.
According to the report in Scout, the fake news phenomenon is much darker:
By leveraging automated emotional manipulation alongside swarms of bots, Facebook dark posts, A/B testing, and fake news networks, a company called Cambridge Analytica has activated an invisible machine that preys on the personalities of individual voters to create large shifts in public opinion. Many of these technologies have been used individually to some effect before, but together they make up a nearly impenetrable voter manipulation machine that is quickly becoming the new deciding factor in elections around the world.
This is interesting because Cambridge Analytica is owned by one of Trump's biggest donors (the Mercer family) and has a prominent board member in Steve Bannon, Trump's Chief Strategist and a member of the National Security Council.
So what does this mean? In simpler terms, Cambridge Analytica has developed an algorithm that, from just 10 Facebook likes, can assess a person's character better than a coworker can. With 70 likes it knows a person better than a friend does; with 150, better than a parent; and with 300, better than the person's partner.
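To make the idea concrete, here is a deliberately simplified sketch of how likes can map to a personality score. Every page name, weight, and scale below is invented for illustration; the real models behind these claims are trained on millions of profiles, but the core mechanic is just aggregating learned per-page signals.

```python
# Hypothetical illustration of likes-based personality scoring.
# Page names and weights are invented; a real model learns these
# weights from millions of profiles.

# Each liked page carries a learned weight for one trait, here
# "openness" on an arbitrary -1..1 scale.
LIKE_WEIGHTS = {
    "ModernArtDaily": 0.6,
    "SciFiBookClub": 0.4,
    "MonsterTruckRally": -0.3,
    "CountryMusicNow": -0.2,
}

def openness_score(likes):
    """Average the trait weights of the user's liked pages we know about."""
    known = [LIKE_WEIGHTS[p] for p in likes if p in LIKE_WEIGHTS]
    return sum(known) / len(known) if known else 0.0

user_likes = ["ModernArtDaily", "SciFiBookClub", "CountryMusicNow"]
print(round(openness_score(user_likes), 2))  # 0.27
```

With only a handful of likes the estimate is noisy, which is why the accuracy claims above scale with the number of likes: more pages, more signal to average.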
With access to your public Facebook data, shopping data, land ownership data, church attendance, what stores you visit, and what magazines you subscribe to (all this and much more available from data brokers, by the way), Analytica can develop a robust picture of the US electorate:
Nix likes to boast that Analytica’s personality model has allowed it to create a personality profile for every adult in the U.S. -- 220 million of them, each with up to 5,000 data points. And those profiles are being continually updated and improved the more data you spew out online.
Scary, right? But that’s just the start. What do they do with this data?
“Your behavior is driven by your personality and actually the more you can understand about people’s personality as psychological drivers, the more you can actually start to really tap in to why and how they make their decisions,” Nix explained to Bloomberg’s Sasha Issenberg. “We call this behavioral microtargeting and this is really our secret sauce, if you like. This is what we’re bringing to America.”
This profile information:
not only identifies which voters are most likely to swing for their causes or candidates; they use that information to predict and then change their future behavior.
And this “microtargeting” can lead to actionable items like:
Where traditional pollsters might ask a person outright how they plan to vote, Analytica relies not on what they say but what they do, tracking their online movements and interests and serving up multivariate ads designed to change a person’s behavior by preying on individual personality traits.
Here’s a specific example the article gave:
For Analytica, the feedback is instant and the response automated: Did this specific swing voter in Pennsylvania click on the ad attacking Clinton’s negligence over her email server? Yes? Serve her more content that emphasizes failures of personal responsibility. No? The automated script will try a different headline, perhaps one that plays on a different personality trait -- say the voter’s tendency to be agreeable toward authority figures. Perhaps: “Top Intelligence Officials Agree: Clinton’s Emails Jeopardized National Security.”
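The feedback loop described in that example can be sketched in a few lines: serve an ad, watch for the click, and either double down on the theme that worked or retarget the voter's strongest personality trait. The headlines, trait names, and scores below are all invented for this illustration; a real system would run this across millions of users through an ad platform.

```python
# A loose sketch of the automated feedback loop described above.
# All headlines, traits, and scores are hypothetical.

HEADLINES = {
    "responsibility": "Clinton's Emails: A Failure of Personal Responsibility",
    "authority": "Top Intelligence Officials Agree: "
                 "Clinton's Emails Jeopardized National Security",
}

def next_ad(voter_traits, last_headline=None, clicked=False):
    """Choose the next headline to serve to one voter."""
    if clicked and last_headline:
        return last_headline  # the theme resonated; serve more of it
    # No click: fall back to the trait this voter scores highest on,
    # skipping the variant that just failed.
    for trait in sorted(voter_traits, key=voter_traits.get, reverse=True):
        if HEADLINES[trait] != last_headline:
            return HEADLINES[trait]
    return last_headline  # nothing new left to try

# A swing voter who responds to authority cues more than to
# personal-responsibility framing:
voter = {"responsibility": 0.2, "authority": 0.9}
first = HEADLINES["responsibility"]
print(next_ad(voter, last_headline=first, clicked=False))
```

The point of the sketch is the automation: no human decides which voter sees which headline, the click data decides.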
And one example, specifically for Trump:
Based on users’ response to these posts, Cambridge Analytica was able to identify which of Trump’s messages were resonating and where. That information was also used to shape Trump’s campaign travel schedule. If 73 percent of targeted voters in Kent County, Mich. clicked on one of three articles about bringing back jobs? Schedule a Trump rally in Grand Rapids that focuses on economic recovery.
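Turning click-through rates into travel decisions, as in the Kent County example, amounts to a simple threshold query over per-county engagement data. The counties, numbers, and 70% threshold below are invented for illustration.

```python
# Hypothetical sketch of click-through data driving rally scheduling,
# modeled on the Kent County example above. All figures are invented.

CLICKS = {  # county -> (clicks on jobs articles, targeted voters reached)
    "Kent County, MI": (730, 1000),
    "Dane County, WI": (410, 1000),
}

def rally_targets(clicks, threshold=0.70):
    """Counties where the jobs message resonated strongly enough
    to justify a rally focused on economic recovery."""
    return [
        county for county, (hits, reached) in clicks.items()
        if hits / reached >= threshold
    ]

print(rally_targets(CLICKS))  # ['Kent County, MI']
```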
Dark posts are extra dangerous for politics:
Because dark posts are only visible to the targeted users, there’s no way for anyone outside of Analytica or the Trump campaign to track the content of these ads. In this case, there was no FEC oversight, no public scrutiny of Trump’s attack ads. Just the rapid-eye-movement of millions of individual users scanning their Facebook feeds.
Then add fake news sites, bots, fake social media profiles, automated YouTube video creation, and more to this equation, and you get a machine that can deploy, improve, and amplify messages at an insane pace, all while targeting your specific personality traits to exploit the viral nature of social media, play to your biases, and alter your decisions.
Truly scary stuff, especially when many of us have been “conditioned” to distrust real journalism and science, and even to question facts:
For years, as a conservative radio talk show host, I played a role in that conditioning by hammering the mainstream media for its bias and double standards. But the price turned out to be far higher than I imagined. The cumulative effect of the attacks was to delegitimize those outlets and essentially destroy much of the right’s immunity to false information. We thought we were creating a savvier, more skeptical audience. Instead, we opened the door for President Trump, who found an audience that could be easily misled.
So how do we combat this? I'm not 100% sure. But I do know we need to stay vigilant and engaged in politics, and not just during presidential elections. We need to support high-quality journalism. We need to be open-minded and curious, despite our preconceived notions or what we hear from our leaders every day. The stakes are too high.
Update: another perspective on Cambridge Analytica, in which campaign staffers downplayed its impact and referred to much of its work, promises, and boasts as snake oil.