According to recent research, including this paper from scientists at the University of Tennessee and the Rensselaer Polytechnic Institute, we're going to need more than just clever algorithms to fix our broken discourse.

The problem is simple: AI can't do anything a person can't do. Sure, it can do plenty of things faster and more efficiently than people – like counting to a million – but, at its core, artificial intelligence only scales things people can already do. And people really suck at identifying fake news.

According to the researchers, the problem lies in what's called "confirmation bias." Basically, when people think they already know something, they're less likely to be swayed by a "fake news" tag or a "dubious source" description. As the team's paper notes, this makes it incredibly difficult to design, develop, and train an AI system to spot fake news. While most of us may think we can spot fake news when we see it, the truth is that the bad actors creating misinformation aren't working in a void: they're better at lying than we are at telling the truth – at least when they're saying something we already believe.

On the flip side, people were less likely to make the same mistake when the news presented was part of a novel news situation. In other words: when we think we know what's going on, we're more likely to agree with fake news that lines up with our preconceived notions.

While the researchers do go on to identify several methods for using this information to shore up our ability to warn people when they're presented with fake news, the gist of it is that accuracy isn't the issue. Even when the AI gets it right, we're still less likely to believe a real news article when the facts don't line up with our personal bias.

This isn't surprising. Why should someone trust a machine built by big tech over the word of a human journalist? If you're thinking "because machines don't lie," you're absolutely wrong.
When an AI system is built to identify fake news, it typically has to be trained on pre-existing data. To teach a machine to recognize and flag fake news in the wild, we have to feed it a mixture of real and fake articles so it can learn to spot which is which. And the datasets used to train AI are usually labeled by hand – by humans.

As long as humans are biased, we'll continue to see fake news thrive. Not only does confirmation bias make it difficult for us to differentiate facts we don't agree with from lies we do, but the perpetuation and acceptance of outright lies and misinformation by celebrities, our family members, peers, bosses, and the highest political offices makes it difficult to convince people otherwise.

While AI systems can certainly help identify egregiously false claims, especially those made by news outlets that regularly engage in fake news, the fact remains that whether or not a news article is true isn't really an issue for most people.

Take, for instance, the most-watched cable network on television: Fox News. Fox News' own lawyers have repeatedly argued, in effect, that numerous programs – including the second most-viewed program on the network, hosted by Tucker Carlson – are fake news. In a defamation case against Carlson, U.S. District Judge Mary Kay Vyskocil, a Trump appointee, ruled in favor of Carlson and Fox after finding that reasonable viewers wouldn't take the host's everyday rhetoric as literally truthful.

And that's why, under the current news paradigm, it may be impossible to create an AI system that can definitively determine whether any given news statement is true or false. If the news outlets themselves, the general public, elected officials, big tech, and the so-called experts can't decide whether a given news article is true or false without bias, there's no way we can trust an AI system to do so.
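To make the labeling-and-training pipeline described above concrete, here's a minimal sketch of how such a classifier is typically built: hand-labeled "real" and "fake" articles go in, and a model that flags new text comes out. This is not the researchers' system – the toy headlines, labels, and model choice (TF-IDF features plus logistic regression via scikit-learn) are illustrative assumptions, and any real system would need a far larger, human-labeled corpus, which is exactly where human bias creeps in.

```python
# Sketch of a supervised fake-news classifier: hand-labeled
# articles in, a text classifier out. Toy data for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-labeled training data (hypothetical examples, not a real dataset).
# In practice, these labels come from human annotators -- and inherit
# whatever biases those annotators bring with them.
articles = [
    "Senate passes budget bill after lengthy debate",
    "City council approves funding for bridge repairs",
    "Study finds moderate exercise improves heart health",
    "Miracle fruit cures every disease overnight, doctors stunned",
    "Secret lizard elite controls all world governments",
    "Moon landing footage was filmed in a Hollywood basement",
]
labels = ["real", "real", "real", "fake", "fake", "fake"]

# Turn text into TF-IDF feature vectors, then fit a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(articles, labels)

# Score a new, unseen headline.
prediction = model.predict(["Aliens secretly run the federal reserve"])[0]
print(prediction)
```

Note that the model can only echo its labels: if the annotators tagged a true-but-uncomfortable claim as "fake," the classifier will learn to do the same.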
As long as the truth remains as subjective as a given reader's politics, we'll be inundated with fake news.