
The U.S. House Intelligence Committee last week heard expert testimony on the growing threat posed by “deepfakes” — altered videos and other artificial intelligence-generated false information — and what it could mean for the 2020 general elections, as well as the country’s national security overall.

The technologies collectively known as “deepfakes” can be used to combine or superimpose existing images and videos with other images or videos by utilizing AI or machine learning “generative adversarial network” techniques.
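In rough terms, a generative adversarial network pits two models against each other: a generator that produces fakes and a discriminator that tries to tell fakes from real samples, each improving against the other. The following is a minimal, self-contained numpy sketch of that adversarial loop on toy one-dimensional data — the data, network shapes, and learning rate are illustrative assumptions, not taken from any actual deepfake tool, which would use deep convolutional networks on images.

```python
import numpy as np

# Toy GAN: real data ~ N(4, 1). The generator learns a shift "theta"
# applied to noise; the discriminator is a logistic regressor (w, b).
rng = np.random.default_rng(0)
sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

theta = 0.0          # generator parameter (shift applied to noise)
w, b = 0.1, 0.0      # discriminator parameters
lr, batch = 0.05, 64

for _ in range(2000):
    real = rng.normal(4.0, 1.0, batch)
    fake = rng.normal(0.0, 1.0, batch) + theta

    # Discriminator step: ascend log D(real) + log(1 - D(fake))
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    b += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: ascend log D(fake), i.e. try to fool the discriminator
    d_fake = sigmoid(w * (rng.normal(0.0, 1.0, batch) + theta) + b)
    theta += lr * np.mean((1 - d_fake) * w)

# The generator's shift should drift toward the real mean of 4,
# at which point the discriminator can no longer separate the two.
print(round(theta, 2))
```

The same adversarial pressure is what makes deepfake output so convincing: the generator is trained precisely until a learned detector can no longer tell its samples apart from real footage.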

These capabilities have allowed the creation of fake celebrity videos — including pornography — as well as the distribution of fake news and other malicious hoaxes.

The hearing followed the widespread online distribution of a doctored video of House Speaker Nancy Pelosi, D-Calif., which made her appear impaired. The video made the rounds on social media and was viewed more than 2.5 million times on Facebook.

Deepfakes have become a bipartisan issue, with both Democrats and Republicans expressing concerns over the use of manipulated videos as a tool of disinformation.

The House Intelligence Committee heard testimony from four experts in AI and disinformation about the potential risks deepfakes pose to the U.S. government and even to democracy itself. One expert also warned of the threat deepfakes could pose to the private sector. One such scenario might involve a deepfake video showing a CEO committing a crime; putting that type of video in circulation could impact a company’s stock price.

Whether in politics or the business world, even if a video is debunked, the damage can be lasting.

Deep History

The term “deepfakes” was coined in 2017, but the ability to modify and manipulate videos goes back to the Video Rewrite program, published in 1997. It allowed users to modify video footage of a person speaking to depict that person mouthing the words from a completely different audio track.

The technique of combining videos and changing what was said has been used in Hollywood even longer, but it generally was a costly and time-consuming endeavor. The film Forrest Gump, for example, required a team of artists to render the character, played by Tom Hanks, into historical footage. More than 20 years later, those painstaking results pale next to what today’s software can do.

Simple programs such as FakeApp — released in January 2018 — allow users to manipulate videos easily, swapping one face for another. The app utilizes an artificial neural network and just 4 GB of storage to generate the videos.

The quality and detail of the videos depend on the amount of visual material that can be provided. Given that today’s political figures appear in hundreds of hours of footage, it is easy enough to make a compelling video.

Fighting the Fakes

Technology to combat deepfakes is in development. The USC Information Sciences Institute (ISI) developed a tool that can detect fakes with up to 96 percent accuracy. It is able to detect subtle face and head movements, as well as unique video “artifacts” — the noticeable distortion of media caused by compression, which can also indicate that a video has been manipulated.

Previous methods for detecting deepfakes required frame-by-frame analysis of the video, but the USC ISI researchers developed a tool that has been tested on more than 1,000 videos and has proven to be less computationally intensive.

It has the potential to scale and to detect fakes automatically — and, more importantly, quickly — as videos are uploaded to Facebook and other social media platforms. This could allow near real-time detection, which would keep such videos from going viral.

The USC ISI researchers rely on a two-step process. It first requires that hundreds of examples of verified videos of a person be uploaded. A deep learning algorithm known as a “convolutional neural network” then allows researchers to identify features and patterns in an individual’s face. The tool can then determine whether a video has been manipulated by comparing its motions and facial features against that baseline.
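The comparison step of such a pipeline can be sketched as follows. This is a simplified illustration, not the USC ISI method: a real system would embed face and head motion with a trained convolutional network, whereas the hypothetical `embed` function here just reduces each clip to summary statistics so the baseline-versus-candidate comparison is runnable on its own.

```python
import numpy as np

# Hypothetical stand-in for a CNN feature extractor: reduce a clip
# (frames x features) to per-feature means and standard deviations.
def embed(clip: np.ndarray) -> np.ndarray:
    return np.concatenate([clip.mean(axis=0), clip.std(axis=0)])

def build_baseline(verified_clips):
    """Average embedding (and its spread) over verified footage of one person."""
    embs = np.stack([embed(c) for c in verified_clips])
    return embs.mean(axis=0), embs.std(axis=0) + 1e-8

def looks_manipulated(clip, baseline, threshold=3.0):
    """Flag the clip if its embedding sits far outside the baseline spread."""
    mean, spread = baseline
    z = np.abs(embed(clip) - mean) / spread
    return float(z.mean()) > threshold

# Synthetic demo data: 50 verified clips, then one statistically shifted "fake".
rng = np.random.default_rng(1)
genuine = [rng.normal(0.0, 1.0, (120, 8)) for _ in range(50)]
baseline = build_baseline(genuine)
fake = rng.normal(3.0, 1.0, (120, 8))

print(looks_manipulated(genuine[0], baseline),
      looks_manipulated(fake, baseline))  # → False True
```

The design mirrors the article's point about baselines: the detector is only as good as the verified footage used to build the reference distribution.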

The results are similar to a biometric reader that can recognize a face, retina scan or fingerprint — but just as with those technologies, a baseline is required for comparison. That could be easy for famous individuals such as Speaker Pelosi or actor Tom Hanks, but for the average person it probably won’t be as easy, as the database of existing video footage may be limited or nonexistent.

Potential to Be Weaponized

Deepfakes have the potential to be far worse and do far more damage than “Photoshopped” images — both on an individual and even a national level.

“There is a world of difference between Photoshopped images and AI-aided videos, and people should be concerned with deepfakes because of their heightened realism and potential for weaponization,” warned Usman Rahim, digital security and operations manager for The Media Trust.

One reason is that today people accept that photos can be altered, so much so that these have earned the moniker “cheapfakes.” Video is a new frontier.

“Much fewer are aware of how realistic fake videos have become and how easily they can be made in order to spread disinformation, destroy reputations, or disrupt democratic processes,” Rahim told the E-Commerce Times.

“In the wrong hands, deepfakes spread through the Internet, especially social media, can have a large impact on individuals — and more broadly, societies and economies,” he added.

“Aside from the national security risk — e.g., a deepfake video of a world leader used to incite terrorist activity — the political risk is especially high in a competitive national election such as 2020, with multiple candidates seeking to unseat a controversial incumbent,” noted Larry Parnell, associate professor and strategic public relations program director in the Graduate School of Political Management at George Washington University.

“Either side might be tempted to engage in this activity, and that would make ‘old school’ dirty tricks seem mundane and quaint,” he told the E-Commerce Times. “We have already seen how social media can be used to impact a national election in 2016. That will seem like child’s play compared to how advanced this technology has become in the last two to three years.”

Beyond Politics and Security Risks

Deepfakes could present a problem on a much more personal and individual level. The technology already has been used to create revenge porn videos, and the potential is there to use it for other sinister or nefarious purposes.

“In the hands of unsupervised kids, deepfakes can raise cyberbullying to a new level,” said The Media Trust’s Rahim.

“Imagine what happens if our own or our children’s images are used anddistributed online,” he added.

“We might even see fake videos and social media posts being used in legal proceedings as evidence against a controversial figure to silence them or destroy their credibility,” warned GW’s Parnell.

There already have been calls to hold the tech industry responsible for the creation of deepfakes.

“If you create software that allows a user to create deepfakes, well, then you will be held liable for significant damages, maybe even held criminally liable,” argued Anirudh Ruhil, a professor in the Voinovich School of Leadership and Public Affairs at Ohio University.

“Should you be a social media or other tech platform that disseminates deepfakes, you will be held liable and pay damages, maybe even face jail time,” he told the E-Commerce Times.

“These are your only policy options, because otherwise you will have the social media platforms and websites going scot-free for pushing deepfakes to the mass public,” Ruhil added.

It is possible the authors of such heinous videos may not be found easily, and in some cases they could be a world away, making prosecution a non-starter.

“In some ways, this policy is similar to what someone might argue about gun control: Target the sellers of weapons capable of causing massive damage,” explained Ruhil. “If we allow the tech industry to skate free, you will see repeats of the same struggles we have had policing Facebook, YouTube, Twitter and the like.”

Fighting Back

The good news about deepfakes is that in many cases the technology still isn’t perfect, and there are plenty of telltale signs that a video has been manipulated.

Also, there are already tools that can help researchers and the media tell fact from fiction.

“Social media and platforms, and traditional media can use these tools to identify deepfakes and either remove them or label them as such, so users aren’t fooled,” said Rahim.

Another solution could be as simple as adding “digital noise” to images and files, making it harder to use them to produce deepfakes.
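The basic idea behind such protective noise can be shown in a few lines. This is a conceptual sketch only: the noise scale below is an illustrative assumption, not a value tuned against any real face-swapping tool, and effective protective perturbations are crafted adversarially rather than drawn at random.

```python
import numpy as np

# Perturb an image just enough to interfere with automated face-modelling
# tools while remaining effectively invisible to a human viewer.
def add_protective_noise(image: np.ndarray, scale: float = 2.0,
                         seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-scale, scale, image.shape)
    return np.clip(image + noise, 0, 255).astype(np.uint8)

image = np.full((64, 64, 3), 128, dtype=np.uint8)   # stand-in for a photo
protected = add_protective_noise(image)

# No pixel moves by more than 2 intensity levels out of 255.
print(int(np.abs(protected.astype(int) - image.astype(int)).max()))  # → 2
```

The trade-off is the same one the article raises next: a perturbation that defeats today's generators may be filtered out or ignored by tomorrow's.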

However, just as in the world of cybersecurity, it’s likely the bad actors will stay one step ahead — so today’s solutions may not counter tomorrow’s methods for producing deepfakes.

It may be necessary to put more effort into solving this problem before it grows too large to solve.

“While it may be a constant and expensive process, the major tech companies should invest now in emerging technology to spot deepfake videos,” suggested Parnell.

“Software is being developed by DARPA and other government and private sector organizations that could be utilized, as the alternative is to be caught flat-footed and publicly criticized for not doing so — and to suffer the serious reputation damage that will result,” he added.

For now the best thing that can happen is for publishers and social media platforms to call out and root out deepfakes, which will help restore trust.

“If they don’t, their credibility will continue to dive, and they will have a hand in their own business’ demise,” said Rahim.

“Distrust of social media platforms in particular is rising, and they are seen almost as much of a threat as hackers,” he warned.

“The era of prioritizing the monetization of consumer data at the expense of maintaining or regaining consumer trust is giving way to a new era where online trust works hand-in-glove with growing your bottom line,” Rahim pointed out. “Social and traditional media can also be a force for good by outing bad actors and raising consumers’ awareness of the prevalence and threats of deepfakes.”
