Analysis | The Technology 202: New video editing technology raises disinformation worries

Ctrl + N

A new algorithm developed by Stanford University engineers is putting the spotlight on advances in video editing that could make it more difficult to separate fact from fiction online.

A team of researchers has developed new technology that allows editors to alter the words of anyone who appears on video framed from the shoulders up, making the change as easy as typing into a word processing program. In practice, this could be a talking head, a politician, a news anchor or any other person who influences political discourse.

The researchers say this technology could be used to adapt instructional videos or quickly make edits to movies -- but experts warn it could have more sinister effects if applied to politics. It raises serious ethical concerns because it could make it far easier for bad actors to manipulate videos from typically trusted sources.

Here's how the new technology works: Editors can simply delete or add words in a transcript, and the application assembles the right sounds and mouth movements from other points in the video, then uses machine learning to render the edited footage so that the change appears seamless to the naked eye.
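In rough terms, the first step is a matching problem. The snippet below is a deliberately simplified illustration of how such a pipeline could work, not the researchers' code: it only finds which frame ranges of the original recording contain the sounds needed for the newly typed words, while the real system layers machine learning on top to make the spliced result look seamless. All names and numbers here are hypothetical.

from dataclasses import dataclass

@dataclass
class AlignedPhoneme:
    phoneme: str      # e.g. "K", "AE", "T"
    start_frame: int  # first video frame in which this sound is spoken
    end_frame: int    # last video frame in which this sound is spoken

def plan_edit(new_phonemes, recording):
    """Greedily cover the inserted words' sounds with the longest matching
    runs of sounds found elsewhere in the same recording, and return the
    frame ranges an editor would splice together."""
    segments, i = [], 0
    while i < len(new_phonemes):
        best = None  # (matched_length, start_frame, end_frame)
        for j in range(len(recording)):
            k = 0
            while (i + k < len(new_phonemes) and j + k < len(recording)
                   and recording[j + k].phoneme == new_phonemes[i + k]):
                k += 1
            if k and (best is None or k > best[0]):
                best = (k, recording[j].start_frame, recording[j + k - 1].end_frame)
        if best is None:
            raise ValueError(f"no source footage for sound {new_phonemes[i]!r}")
        segments.append((best[1], best[2]))
        i += best[0]
    return segments

# Example: building the word "cat" (K AE T) out of sounds spoken elsewhere.
recording = [
    AlignedPhoneme("K", 10, 14), AlignedPhoneme("AE", 15, 22),
    AlignedPhoneme("B", 40, 44), AlignedPhoneme("T", 45, 50),
]
print(plan_edit(["K", "AE", "T"], recording))  # [(10, 22), (45, 50)]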

Jack Clark, policy director at San Francisco artificial intelligence research center OpenAI, warned that if the technology were widely released, it could make it far cheaper to spread propaganda.

“The fact that this technology exists means people are now going to question the veracity of information sources that they wouldn’t have questioned before,” Clark told me in an interview. “We need a whole-of-society response to this.”

The new research comes as Congress holds its first hearing this week to address an expected scourge of deepfakes, videos altered using artificial intelligence to make it appear as if people said or did things that never actually happened. Lawmakers are just starting to grapple with the ways video manipulation could exacerbate disinformation online amid mounting fears about how deepfakes could be abused ahead of the 2020 election.

Rep. Adam B. Schiff (D-Calif.) said in an interview with CNN last week that he feared Russia could use such technology to engage in a "severe escalation" of its influence campaign targeting the United States ahead of the 2020 presidential election.

"And the most severe escalation might be the introduction of a deep fake — a video of one of the candidates saying something they never said," Schiff told CNN.

The hearing follows a recent fracas over a video of Nancy Pelosi, which appeared to be slowed down and altered using traditional editing techniques to make the House speaker appear intoxicated. Although the video wasn’t a deepfake, it raised awareness among lawmakers about how vulnerable politicians are to even the most crude forms of video manipulation. (Read our previous Technology 202 coverage here). Deepfakes remain difficult to produce, and to date no deepfakes of U.S. politicians have been deployed in a viral campaign.

Thursday’s hearing will convene experts to discuss the state of deepfakes today and how they can be abused. Clark will be among the witnesses at the hearing and told me no one is ready for the challenges such distorted videos will present — and it’s going to take cooperation from lawmakers, technologists and institutions like social media companies to address them.

Clark is planning to use his time in the hot seat to urge lawmakers to do a better job of monitoring the research being done on video manipulation -- like the work out of Stanford -- so that lawmakers and social media companies aren’t caught off guard by the technology’s evolution.

Lawmakers won't be able to address this problem alone. It's also important for researchers to disclose ethical trade-offs as they do more work on how artificial intelligence can be used to manipulate videos, Clark said. He praised the team of researchers from Stanford, Princeton and Adobe Research, who were transparent about such potential problems with their video editing algorithm. They also suggested technical safeguards, such as watermarking that would identify any content that had been edited and provide a full history of the edits.
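As a rough illustration of that edit-history idea, here is a minimal sketch in Python assuming a simple hash-chained log. The researchers did not publish a format, so every field and name below is hypothetical; the point is only that each recorded edit commits to the one before it, making silent tampering detectable.

import hashlib, json, time

def record_edit(history, video_hash, description):
    """Append a tamper-evident entry: each entry commits to the previous
    one, so removing or reordering edits breaks the chain."""
    entry = {
        "timestamp": time.time(),
        "video_hash": video_hash,    # hash of the edited video file
        "description": description,  # e.g. 'replaced word 12'
        "prev_hash": history[-1]["entry_hash"] if history else "",
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    history.append(entry)

def verify(history):
    """Return True only if no entry has been altered, dropped or reordered."""
    prev = ""
    for entry in history:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev or entry["entry_hash"] != expected:
            return False
        prev = entry["entry_hash"]
    return True

history = []
record_edit(history, "hash-of-original.mp4", "original capture")
record_edit(history, "hash-of-edit-1.mp4", "replaced word 12 in transcript")
print(verify(history))  # True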

“Unfortunately, technologies like this will always attract bad actors,” said Ohad Fried, a post-doctoral scholar at Stanford, in a news release. “But the struggle is worth it given the many creative video editing and content creation applications this enables.” The researchers were not immediately available for an interview.

BITS, NIBBLES AND BYTES

BITS: Google made $4.7 billion from the work of news publishers via search and Google News, according to a study to be released today by the News Media Alliance. The organization -- which represents more than 2,000 publications across the country, including The Post -- says journalists deserve a piece of that, Marc Tracy reports in the New York Times.

“They make money off this arrangement,” David Chavern, the News Media Alliance president and chief executive, told the Times, “and there needs to be a better outcome for news publishers.”

That figure is almost as much as the $5.1 billion the U.S. news industry as a whole generated from digital advertising last year. The News Media Alliance warned that its estimate of Google’s news revenue was conservative; it didn't count the value of the personal data the company collects on consumers every time they click on an article.

“The study blatantly illustrates what we all know so clearly and so painfully,” Terrance C.Z. Egger, the chief executive of Philadelphia Inquirer PBC, which publishes The Philadelphia Inquirer, The Philadelphia Daily News and philly.com, told the Times. “The current dynamics in the relationships between the platforms and our industry are devastating.”

The study comes as House lawmakers are set to review the impact Big Tech has had on the news industry as they kick off their broad probe of tech's market power. The News Media Alliance has been advocating for Congress to pass the Journalism Competition and Preservation Act, which would give publications a safe harbor from antitrust laws so they can collectively bargain with the tech platforms for revenue sharing.

NIBBLES: YouTube may be inadvertently playing a role in radicalizing young men online — sometimes pushing far-right content at them whether they seek it or not, according to a report from Kevin Roose of the New York Times.

YouTube's business model, which rewards provocative videos with high advertising dollars, and its algorithms, which make personalized recommendations aimed at keeping viewers watching, have created a perfect storm, Kevin writes.

“There’s a spectrum on YouTube between the calm section — the Walter Cronkite, Carl Sagan part — and Crazytown, where the extreme stuff is,” former Google design ethicist Tristan Harris told Kevin. “If I’m YouTube and I want you to watch more, I’m always going to steer you toward Crazytown.”

YouTube in 2015 tweaked its artificial intelligence to redirect users toward newer content instead of pushing them toward content similar to what they had just viewed. The technology, known as Reinforce, was "a kind of long-term addiction machine," Kevin writes. It managed to increase sitewide views by nearly 1 percent, or millions more hours of daily watch time, which Kevin notes can have major consequences in an environment of extreme politics.
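Kevin's article doesn't detail how Reinforce works, but the general recipe it gestures at -- recommendations that earn more watch time become more likely to be shown again -- can be illustrated with a toy policy-gradient loop. The sketch below is a generic REINFORCE-style update on made-up data, not YouTube's system; every name and number is illustrative.

import numpy as np

rng = np.random.default_rng(0)
n_videos = 5
scores = np.zeros(n_videos)  # one preference score per candidate video

def recommend():
    """Sample a video from a softmax over the current scores."""
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return rng.choice(n_videos, p=probs), probs

def update(chosen, probs, watch_time, lr=0.1):
    """Policy-gradient step: nudge the policy toward choices that earned
    more watch time (the 'reward')."""
    grad = -probs
    grad[chosen] += 1.0  # gradient of the log-probability of the chosen video
    scores[:] += lr * watch_time * grad

# Pretend video 3 reliably keeps this viewer watching the longest.
watch_times = np.array([1.0, 1.0, 1.0, 5.0, 1.0])
for _ in range(500):
    chosen, probs = recommend()
    update(chosen, probs, watch_times[chosen])

print(np.round(scores, 2))  # video 3's score ends up dominating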

Kevin's story focuses on one 26-year-old West Virginia man, Caleb Cain, who says he "fell down the alt-right rabbit hole" on YouTube five years ago as a college dropout. He now disavows the movement, but he described being radicalized by what he called a "decentralized cult" of far-right YouTube personalities. 

“I just kept falling deeper and deeper into this, and it appealed to me because it made me feel a sense of belonging,” he told Kevin. “I was brainwashed.”

This is a common theme in internet culture, especially among white men with an interest in video games, Kevin writes. YouTube has long denied allegations that its algorithm pushes users toward more extreme content. The company told Kevin that users watching extreme content are recommended more moderate content, though it declined to provide any data to support the claim.

BYTES: Russell Vought, the White House’s acting budget chief, is pushing to delay implementation of a law that would ban government contractors from using Huawei technology, the Wall Street Journal’s Dan Strumpf reports. Vought cited concerns about the burden the ban could place on U.S. companies that rely on Huawei equipment.

Vought wrote in a letter to Vice President Pence and nine members of Congress that the ban will result in a "dramatic reduction" in the number of U.S. companies able to service the government. The contractor restrictions are set to take effect in 2020, but Vought is requesting that they not be enacted until 2022. 

This is just the latest warning that Washington’s actions targeting Huawei could have ripple effects on U.S. businesses. American tech companies have escalated warnings that a separate action, which prohibits American companies from doing business with Huawei, could have devastating effects. Chip makers and software companies that work with Huawei are already applying for licenses to continue selling to the company, my colleague Reed Albergotti reports. Last month, the Information Technology Industry Council, which represents Google, Microsoft and several other tech powerhouses, publicly asked the Commerce Department to consider the "unintended consequences" of the plan. But Reed reports that many tech companies are fearful of speaking out publicly against the administration.

China could also respond by increasingly restricting access to technology as trade negotiations with the U.S. continue to spiral, the Associated Press's Ken Moritsugu reports. The People’s Daily newspaper said yesterday that the government is creating a system that will build a strong firewall to "strengthen the nation’s ability to innovate and to accelerate the development of key technologies." No specifics have been released about what China is calling "a national technological security management list."

PRIVATE CLOUD

— News from the private sector:

PUBLIC CLOUD

— News from the public sector:

CHECK-INS

— Coming soon:

  • Uber will host its Uber Elevate Summit in Washington, D.C., on Tuesday and Wednesday.
  • The House Judiciary Committee will host the first of its series of hearings on online platforms and market power on Tuesday, starting with the impact of big tech on "The Free and Diverse Press."
  • E3, the country's biggest gaming expo, kicks off on Tuesday in Los Angeles.
  • The House Intelligence Committee on Thursday hosts a hearing on the "National Security Challenges of Artificial Intelligence, Manipulated Media, and Deepfakes."

FAST FWD

— News about tech workforce and culture:

#TRENDING

— News generating buzz online:

WIRED IN



https://wapo.st/2K7Zxez
