Can AI Help Us Combat Fake News?

Fake news and misinformation are rising exponentially, multiplying competing and contradictory narratives. Spread through fraudulent sites, social media, and bots, fake news poses significant threats to democracy and public health, particularly in light of the election and the pandemic. Alongside fabricated text, the rise of deep fakes further amplifies the danger; deep fakes are fabricated videos that can convincingly impersonate real individuals.

AI’s Work With Fake News

Many of the perpetrators of fake news use artificial intelligence as a tool to enable misinformation. As the technology develops, both its peril and its potential are significant. While AI models such as GPT-3 cannot yet create sophisticated fake content without human prompts, according to Nandu Nandakumar, CTO at Razorthink, models that can are likely to arise in the next decade or so. In the meantime, AI can be used to control the dissemination of information and to gauge its legitimacy.

“AI can assist with bucketing news into different places as well as establishing provenance/origin. Companies such as Cloudflare can assist with this–you direct your traffic through them, and they not only stop all threats but also find patterns, do a correlated look at different IPs and see different patterns emerge. Standard models can help establish provenance, do sentence analysis and classify categories of news,” said Nandu.
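As a rough illustration of what such “standard models” might look like, here is a minimal sketch of a news-category classifier using scikit-learn. The toy articles, labels, and pipeline choices are assumptions for illustration, not a description of Razorthink's or Cloudflare's actual systems.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative dataset; a real system would train on a large labeled corpus.
articles = [
    "Senate passes new infrastructure spending bill",
    "Vaccine trial reports strong results in phase three",
    "Miracle cure discovered that doctors don't want you to know about",
    "Local team wins championship after dramatic overtime",
]
labels = ["politics", "health", "suspect", "sports"]

# TF-IDF features plus a linear classifier: a deliberately simple baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(articles, labels)

print(model.predict(["New spending bill heads to the Senate floor this week"]))
```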

Establishing provenance, in other words the origin and creator identity of content, is an essential component. While AI can be used to establish provenance, provenance is also easy to fabricate and can therefore be difficult to establish legitimately.

According to Mike McKenna, a data scientist at CVS, “Training AI to recognize that news has model-based provenance is difficult because those models can be easily tweaked through transfer learning. After the tweak, the provenance detection AI will be less effective.” One way to combat this is to embed signatures within content, so that legitimate distributors have a standard, built-in verification mechanism within the articles they release.

However, Nandu points out that while signatures are a good way to establish provenance, there are clever ways in which they can be tweaked. The ultimate challenge with software is that regulation is only ever temporary; you cannot stop people from sharing. Yet an effective provenance-tracking mechanism is essential to countering fake news, and Nandu also emphasized the importance of a verification authority for news and content. Dan Foehner, SVP of Marketing at Razorthink, spoke about the possibility of an industry standard or practice whereby editors and distributors embed their unique company signatures into content, thus validating its authenticity.
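As a sketch of the kind of embedded-signature scheme described here, the following uses Ed25519 public-key signatures via the Python cryptography library; the key management and distribution details are assumptions, not an existing industry standard.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: sign the article body with the outlet's private key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()  # published so anyone can verify

article = b"Full text of the article exactly as released by the publisher."
signature = private_key.sign(article)

# Platform or reader side: verify the embedded signature against the
# publisher's public key before treating the content as authentic.
def is_authentic(content: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, content)
        return True
    except InvalidSignature:
        return False

print(is_authentic(article, signature))                 # True: untouched content
print(is_authentic(article + b" (edited)", signature))  # False: tampered content
```

Any change to the signed text invalidates the signature, which is what makes the scheme useful for tracking provenance, though as Nandu notes it only helps if the verification step is actually adopted and enforced.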

Social Media Is Our Biggest Problem

Currently, social media sites are the biggest conduits of fake news. Over the years, they have established mechanisms to detect it: users can now flag content they view as dubious, and companies such as Facebook employ teams of reviewers and moderators to assess and filter content. However, one of the biggest issues is that these strategies are not preemptive; in fast-moving news cycles, the damage is often done before anything is flagged.

According to Dan, “One way to do this is through a combination of trained AI models and human reviewers/fact checkers whereby every piece of news goes into a review queue to be assessed for accuracy and legitimacy.” 

Nandu emphasized that the human element is still very much necessary. AI cannot currently achieve this on its own. 
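A minimal sketch of the hybrid setup Dan describes might look like the following: a model scores each item, and anything it is not confident about is routed to a human review queue. The thresholds and the stand-in scoring function are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ReviewQueue:
    """Route items the model is unsure about to human fact checkers."""
    model: Callable[[str], float]      # returns an estimated probability the item is fake
    high: float = 0.9                  # above this: flag automatically
    low: float = 0.1                   # below this: publish automatically
    human_queue: List[str] = field(default_factory=list)

    def triage(self, article: str) -> str:
        p_fake = self.model(article)
        if p_fake >= self.high:
            return "flagged"
        if p_fake <= self.low:
            return "published"
        self.human_queue.append(article)   # uncertain cases go to human reviewers
        return "needs_human_review"

# Stand-in scorer; in practice this would be a trained model.
queue = ReviewQueue(model=lambda text: 0.5)
print(queue.triage("Breaking: unverified claim spreads rapidly"))  # needs_human_review
```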

“If you think about computer viruses, an antivirus is a preventative tactic. Companies that make these antiviruses are trying to dominate the market so they have the upper hand. In the case of deep fakes, the equivalent of the antivirus is the person who is reading or watching. It is difficult to control this content, mostly because it is evolving so quickly and because deep fakes are notoriously difficult to differentiate.”

Changes In The Future

“In the long term, as deep fakes become more common, trust levels in video content will go down significantly, and in the short term, AI can be used to tell the difference, but eventually, this will become more and more difficult,” said Nandu.

Furthermore, there is a real risk of legitimate content being filtered out by these AI technologies. Perspective, an AI tool built by Jigsaw (a technology incubator within Google) to detect inappropriate language, often flags legitimate content, producing false positives and demonstrating that while these technologies are useful, they are far from perfect.
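For context, Perspective exposes a REST API that returns a toxicity score for a piece of text. The sketch below is based on the publicly documented quickstart; the exact endpoint, fields, and any thresholds applied on top of the score should be checked against the current documentation.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: obtain a key from the Perspective API project
URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def toxicity_score(text: str) -> float:
    """Return Perspective's TOXICITY summary score (0.0 to 1.0) for a piece of text."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, params={"key": API_KEY}, json=body)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Example (requires a valid key):
#   toxicity_score("I totally disagree with this take.")
# A benign sentence can still score high, which is the false-positive problem
# described above: a fixed threshold will sometimes flag legitimate content.
```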

Of course, even with effective regulation and technology, social media sites are businesses first. Some have argued that there is a fundamental tension between the way social media sites are structured and the spread of fake news: “Data plus ad-tech leads to mental and cognitive paralysis.” Their incentive is to increase user engagement; curbing misinformation is not necessarily a priority. Still, the responsibility they bear is immense.

Nandu emphasized that not only is money the primary objective of these sites, there is also a strong incentive for them to stand out from the noise. In this sense, provocative and oftentimes false content actually works in their favor. “Unless it is egregious, social media sites are unlikely to police or curtail false content.”

Dan, who used to work at Facebook, agreed that there is an inherent bias in the models of these social media sites. “Posts that receive really good engagement send signals back to the algorithm to increase organic distribution. With fake news, if a post receives engagement very quickly then distribution will be automatically increased, especially if it is not flagged as fake, making the post go viral. 

“There are thousands of reviewers scoring content against a variety of different objectives. All of these signals go back and feed the AI models, which determine a number of things: whether a post is fake, likely to go viral, a suicide video, or otherwise harmful. More than 10,000 human reviewers work alongside the machine learning that is monitoring content.

“Facebook has the capability to control this content, but they don’t seem incentivized to do so. They have initiatives around news literacy and the Facebook Journalism Project, but the biggest issue remains their lack of transparency about the signals coming in for these articles in real time.”
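To make the feedback loop Dan describes concrete, here is a deliberately simplified caricature in code; the multipliers and thresholds are invented for illustration and bear no relation to Facebook's actual ranking system.

```python
# Simplified caricature of an engagement-driven feedback loop;
# the multipliers and thresholds are invented for illustration only.
def next_distribution(current_reach: int, engagement_rate: float, flagged_as_fake: bool) -> int:
    if flagged_as_fake:
        return int(current_reach * 0.1)   # demote flagged content
    if engagement_rate > 0.05:            # strong early engagement
        return int(current_reach * 2.0)   # amplify organic distribution
    return current_reach

reach = 1_000
for hour in range(5):
    reach = next_distribution(reach, engagement_rate=0.08, flagged_as_fake=False)
print(reach)  # an unflagged, highly engaging post keeps compounding its reach
```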

The organizations that Facebook partners with to “vet content” are small nonprofits, and the motives behind these partnerships seem to be protecting Facebook rather than assisting news organizations and users, especially because none of these methods are preemptive. Similarly, Twitter has instituted policies to control misinformation and bots, but the specifics of these strategies are not revealed to the public.

Government Regulation

Furthermore, governmental regulation often becomes embroiled in issues of partisanship, as was apparent at last week’s Senate hearing on Section 230. Section 230 gives companies such as Facebook and Twitter “immunity for user-generated content” while also giving them the power to moderate that content. At the hearing, Senator Deb Fischer said that more transparency was needed on how these tech giants “moderate content.” Dan Foehner believes that these policies are in fact public; they are just inconsistent, and their complexity makes it difficult for the public to fully understand them. Partisanship, along with rapidly evolving technologies and the complexity of these systems, makes regulation, and more broadly the controlling and moderation of content, a challenge, especially when moderation of offensive content is labeled “censorship.”

The regulation of deep fakes is fraught with even more complexity. Software that helps detect deep fakes does exist and is an effective tool that journalists and media entities can use to verify sources. Social media platforms have also instituted policies banning deep fakes.
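As a rough sketch of how such detection software is typically applied, the workflow below samples frames from a video and averages a per-frame score; the per-frame detector itself is a placeholder, not a real model or a specific vendor's tool.

```python
import cv2  # OpenCV, used here only for frame extraction

def score_frame(frame) -> float:
    """Placeholder per-frame detector: a real tool would run a trained
    deep-fake classifier here and return the probability the frame is fake."""
    return 0.5  # placeholder value for illustration only

def video_fake_score(path: str, sample_every: int = 30) -> float:
    """Average the per-frame fake probability over sampled frames of a video."""
    cap = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % sample_every == 0:
            scores.append(score_frame(frame))
        index += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0

# Example (hypothetical file): video_fake_score("interview_clip.mp4")
```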

In early 2020, Facebook instituted a policy forbidding manipulated audio and video that depict a person saying something they did not say (excluding satire and parody). Critics noted that manipulated audio and video depicting a person doing something they did not actually do were still allowed. Many have argued that the policy does not go far enough, particularly in regulating “cheapfakes,” minor modifications to existing videos that can be used to mislead audiences.

Incentivizing social media platforms to adequately regulate fake news can be difficult. Whether AI is being used to create fake news or to combat it, it is ultimately a tool that requires human input. When it comes to gauging fake news, AI still cannot operate on its own, mostly because it is limited in its ability to understand all the nuances and inherent inaccuracies of language.

“Human language is inaccurate and imprecise; it is important to provide context. AI will get to the point where it can provide information for the person making the decision, but not act on its own. Watson or another AI technology could provide judgement, but we are not at the point of live, real-time AI mechanisms that operate without human input,” said Nandu.

The stigma associated with AI is pervasive; however, the threats and the potential of this technology depend on how it is wielded.

“The power of AI is growing exponentially. The question of threat is more a philosophical one,” said Nandu. 

Fake news and misinformation, especially deep fakes, are likely to grow in intensity. In addition to developing the requisite technologies and making them available to journalists and media companies, fostering corporate responsibility is an essential facet of combating fake news. Ultimately, it is the human component that creates and enables the harm and misinformation perpetuated by AI. If AI is a tool, then it is as dangerous or as beneficial as we make it.