Monday, July 15, 2019

'Deepfaking' Called Potential Trust Issue for Journalism

President Donald Trump has been warning about “fake news” throughout his political career, casting a dark cloud over the journalism profession. And now the real wolf may be just around the corner, one that industry experts should be alarmed about, reports CNBC.

The threat is called “deepfaking,” a product of advances in AI and machine learning that allows computers to produce completely false yet remarkably realistic videos depicting events that never happened or people saying things they never said.

The danger goes far beyond manipulating 1980s thrillers. Deepfake technology is allowing organizations that produce fake news to augment their “reporting” with seemingly legitimate videos, blurring the line between reality and fiction like never before — and placing the reputation of journalists and the media at greater risk.

Ben Zhao, a computer science professor at the University of Chicago, thinks the age of getting news on social media makes consumers very susceptible to this sort of manipulation.

“What the last couple years has shown is basically fake news is quite compelling even in [the] absence of actual proof. ... So the bar is low,” Zhao said.

The bar to produce a convincing doctored video is lower than people might assume.

Earlier this year a clip purporting to show Democratic leader Nancy Pelosi slurring her words when speaking to the press was shared widely on social media, including at one point by Trump’s attorney Rudy Giuliani. However, closer inspection revealed that the video had been slowed to 75% of its normal speed to achieve this slurring effect, according to the Washington Post. Even with the real video now widely accessible, Hany Farid, a professor at UC Berkeley’s School of Information and a digital forensics expert, said he still regularly receives emails from people insisting the slowed video is the legitimate one.
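The doctoring in this case was simple arithmetic on playback timing. A minimal sketch of that timestamp math, with an illustrative helper name (real tools such as ffmpeg's setpts filter stretch frame timestamps the same way):

```python
def slow_timestamps(timestamps_s, speed=0.75):
    """Map each frame timestamp to its position in a slowed clip.

    At 75% speed, a frame originally shown at t seconds is shown at
    t / 0.75 seconds, so a 60-second clip stretches to 80 seconds.
    This is the whole trick behind the slurred-speech effect.
    """
    if not 0 < speed <= 1:
        raise ValueError("speed must be in (0, 1] for slow-motion")
    return [t / speed for t in timestamps_s]

# A 4-frame clip at 1 frame per second: the last frame, originally
# shown at 3.0 s, lands at 4.0 s in the slowed version.
slowed = slow_timestamps([0.0, 1.0, 2.0, 3.0])
```

No generative model is involved at all, which is why forensics researchers often class clips like this as "cheapfakes" rather than deepfakes.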



To make one of these fake videos, computers digest thousands of still images of a subject to help researchers build a 3-D model of the person. This method has some limitations, according to Zhao, who noted the subjects in many deepfake videos today never blink, since almost all photographs are taken with a person’s eyes open.
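The missing-blink tell can be checked mechanically. A common heuristic in the forensics literature is the eye aspect ratio (EAR): two vertical eye-landmark gaps divided by the horizontal eye width, which collapses toward zero when the eye closes. The landmark coordinates and threshold below are illustrative assumptions, not details from the article:

```python
from math import dist

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """EAR from six 2-D eye landmarks: p1/p4 are the horizontal
    corners, (p2, p6) and (p3, p5) are vertical pairs. An open eye
    scores roughly 0.25-0.35; a closed eye falls well below that."""
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def looks_closed(ear, threshold=0.2):
    # 0.2 is a common rule-of-thumb cutoff; a real detector would
    # track EAR across frames and flag a video whose EAR never dips.
    return ear < threshold

# Toy landmarks: a wide-open eye vs. a nearly shut one.
open_ear = eye_aspect_ratio((0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1))
shut_ear = eye_aspect_ratio((0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1))
```

A subject whose EAR never crosses the closed threshold over minutes of footage is exactly the anomaly Zhao describes, though newer deepfake models trained on video rather than still photos have largely closed this gap.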

The journalism industry is going to face a massive consumer trust problem, according to Zhao. He fears it will be hard even for top-tier media outlets to distinguish a real video from a doctored one, let alone for news consumers who stumble across the video on Twitter.
