Tuesday, September 3, 2019

Report: Military to Fight Fake News, Disinformation


Fake news and social media posts are such a threat to U.S. security that the Defense Department is launching a project to repel “large-scale, automated disinformation attacks,” as the top Republican in Congress blocks efforts to protect the integrity of elections.

Bloomberg reports the Defense Advanced Research Projects Agency wants custom software that can unearth fakes hidden among more than 500,000 stories, photos, videos and audio clips. If the system proves successful after four years of trials, it may be expanded to detect malicious intent and prevent viral fake news from polarizing society.

“A decade ago, today’s state-of-the-art would have registered as sci-fi — that’s how fast the improvements have come,” said Andrew Grotto at the Center for International Security and Cooperation at Stanford University. “There is no reason to think the pace of innovation will slow any time soon.”

U.S. officials have been working on plans to prevent outside hackers from flooding social channels with false information ahead of the 2020 election. The effort has been hindered by Senate Majority Leader Mitch McConnell’s refusal to consider election-security legislation. Critics who say he has left the U.S. vulnerable to Russian meddling have labeled him #MoscowMitch, a charge he has dismissed as “modern-day McCarthyism.”

President Donald Trump has repeatedly rejected allegations that dubious content on platforms like Facebook, Twitter and Google aided his election win. Hillary Clinton supporters claimed a flood of fake items may have helped sway the results in 2016.

False news stories and so-called deepfakes are increasingly sophisticated, making them harder for data-driven software to spot. AI-generated imagery has advanced rapidly in recent years and is now used in Hollywood productions, the fashion industry and facial recognition systems. Researchers have shown that the underlying technique, generative adversarial networks, or GANs, can be used to create convincing fake videos.
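
The report does not describe DARPA's planned methods or how real deepfakes are built; as a rough illustration of the adversarial idea behind GANs, the sketch below trains a toy generator and discriminator on synthetic one-dimensional data. All names, layer sizes and hyperparameters here are hypothetical choices for the example, not anything drawn from the program described above.

# Illustrative sketch only: a toy GAN on synthetic 1-D data.
# The generator learns to produce samples that the discriminator
# cannot distinguish from "real" data drawn from a Gaussian.
import torch
import torch.nn as nn

latent_dim = 8        # size of the random noise vector fed to the generator
batch_size = 64

# Generator: maps random noise to a fake "sample" (here, a single number).
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

# Discriminator: scores how likely a sample is to be real rather than generated.
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # "Real" data: samples from a Gaussian centered at 4.0.
    real = torch.randn(batch_size, 1) + 4.0
    noise = torch.randn(batch_size, latent_dim)
    fake = generator(noise)

    # Train the discriminator to separate real from fake samples.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(batch_size, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(batch_size, 1)))
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(batch_size, 1))
    g_loss.backward()
    g_opt.step()

print("mean of generated samples:", generator(torch.randn(1000, latent_dim)).mean().item())

The same adversarial setup, scaled up to images or video frames, is what lets GANs produce fakes realistic enough to challenge detection software.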
