Analysis | The Technology 202: New NYU report urges social media companies to take down 'provably' false information

Disinformation on social media isn’t going away — it’s evolving as the 2020 election approaches. A New York University report published today is calling on tech companies to prepare by taking a more active role in removing “provably” false content from their sites. 

Such a move would be a major shift for the technology companies, which have been hesitant to act as arbiters of truth when policing content on their sites. Companies like Facebook have invested heavily in fact-checking partnerships with news organizations and in new technology aimed at limiting the spread of false information. But the companies don't automatically remove content just because it's provably false, as highlighted earlier this year when Facebook decided to leave up a doctored video that made House Speaker Nancy Pelosi appear drunk.

Paul M. Barrett, the NYU professor who wrote the report, tells me those policies need to change in the face of escalating disinformation threats ahead of the next election. He says the companies have to do everything they've been doing and more to keep up with the ever-changing threat.

“They have to take responsibility for the way their sites are misused,” Barrett, deputy director of the NYU Stern Center for Business and Human Rights, tells me in an interview. 

The report lays out sobering predictions about how a host of bad actors could build on the playbook Russia executed to stoke political divisions ahead of the 2016 election. Since then, new actors — including Iran, China and even domestic players — have shown they're also capable of exploiting social media for political gain, and they're likely to use different techniques, such as "deepfakes," videos and images altered with artificial intelligence.

Here are Barrett's nine recommendations for how the companies should get ready for disinformation in 2020:

1. Invest in detecting and removing deepfake videos — before regulators step in and require companies to do so. 

Deepfake technology has rapidly advanced since the last election, and Barrett predicts videos will be unleashed depicting 2020 candidates doing or saying things they never did. He says it's time for companies to invest more significantly in developing technology and hiring people to detect and remove these videos. Earlier this year, Google announced a project to invest in research on fake audio detection, and Barrett says other companies should follow.

2. Take down content that can “definitively be shown to be untrue.”

In his boldest proposal, Barrett notes that companies are already taking down other types of problematic content, like hate speech and posts deemed to be aimed at voter suppression. He argues that disinformation should be added to the list of content the companies remove when it's clear the information is false. For example, he says the companies should take down an article headlined "The Sandy Hook Massacre Was Staged," because it is definitively false. But stories that are misleading yet not provably false, such as "Journalists Really Are the Enemy of the People," should remain online.

“We urge the companies to prioritize false content related to democratic institutions, starting with elections,” he said. “And we suggest that they retain clearly marked copies of removed material in a publicly accessible, searchable archive, where false content can be studied by scholars and others, but not shared or retweeted.”

3. Hire a “content overseer” reporting directly to the chief executive and chief operating officer.

Barrett says responsibility for content decisions is currently "scattered" across many different teams at the tech companies. He says it's time to centralize those decisions under a single executive with significant clout, so that important calls can be made more quickly and consistently.

4. Increase defenses against misinformation at Instagram.

“Instagram has been a problem all along, but for whatever reason we don’t pay as much attention to it,” Barrett tells me. As my colleague Tonya Riley recently detailed in The Technology 202, Instagram has rolled out a new fact-checking tool, but experts are concerned it might have limited impact on disinformation. Barrett says it's time for Instagram and its parent company Facebook to come up with a clearer strategy for approaching disinformation, especially as he predicts phony memes could be a key tool bad actors deploy in 2020.

5. Restrict message forwarding even more at WhatsApp.

Earlier this year, WhatsApp's parent company Facebook limited message forwarding to five chats at a time to keep rumors and fake news from spreading unchecked. Barrett thinks this is a good first step, but he recommends the company go further and allow users to forward a message to only a single group chat.

6. Pay more attention to the growing threat of for-profit disinformation campaigns.

For-profit disinformation services have gone global and are becoming more sophisticated, Barrett warns. There are signs some for-profit groups have used tactics similar to those of the Russian trolls in 2016, but their goal is to turn a profit rather than influence ideologies. Barrett says companies need to be more vigilant in this area.

7. Support legislation to increase political ad transparency on social media. 

Barrett thinks it's time for the tech industry to throw its “considerable lobbying clout” behind bills like the Honest Ads Act, which would require tech companies to be more transparent about who is paying for political ads on their platforms. Companies like Facebook have taken voluntary steps to address this issue in the absence of regulation, and Facebook and Twitter have endorsed the legislation. But Barrett is calling on the companies to make this one of their top political priorities. 

8. Collaborate more. 

The companies should create a permanent intercompany task force focused on disinformation, Barrett argues. The companies have been working together more often to identify inauthentic behavior on their websites, but Barrett thinks they could do more with a formal body dedicated to disinformation. They already have such partnerships to address other harmful material, like terrorist content.

9. Make social media literacy a prominent and permanent feature on their websites. 

Barrett tells me that social media companies have made positive strides in supporting digital literacy programs that educate the public about disinformation, like supporting classes at school for children and teens. But he says the companies could post a reminder to their users about the threat of disinformation every time they log in. “The more often users are reminded of this fact — and are taught how to distinguish real from fake — the less influence false content will wield,” he writes in the report. “Concise, incisive instruction, possibly presented in FAQs format, should be just one click away for all users of all of the platforms, all of the time.”

BITS, NIBBLES AND BYTES

BITS: Facebook may stop showing the number of "Likes" a post receives in your news feed, the company confirmed to TechCrunch's Josh Constine yesterday. Although Facebook wouldn't say when, or whether, it will begin testing the change, the potential move suggests the company is taking criticism of its products' addictive design more seriously in light of increased scrutiny from government regulators.

Based on code that researcher Jane Manchun Wong found in Facebook's Android app, the test would show some users just a few reaction emoji rather than a full count of who liked a post. Instagram, which is owned by Facebook, began experimenting with removing like counts earlier this year, with some positive results. (Facebook did not release internal findings from its Instagram experiment and has argued in the past that the feature isn't entirely bad for users.)

An expansion of the test across Facebook's products could be a way to placate growing criticism from regulators about the addictive nature of the platform. Sen. Josh Hawley (R-Mo.) introduced legislation in July that would give the Federal Trade Commission the authority to ban practices designed to exploit users and keep them on a platform. Facebook recently reached a settlement with the FTC following the agency's broad investigation into its privacy practices, and Facebook says the FTC has opened an antitrust investigation. 

NIBBLES: YouTube will now require channel owners to approve community-provided translations after trolls exploited the tool to harass users and spread offensive content, The Verge's Russell Brandom reports. Although YouTube has always encouraged channel owners to review contributions made by users, the new system shows the company taking a more proactive step against harassment on its platform.

The change comes after a YouTube user named JT flagged the high level of offensive material in translations for the popular YouTube creator PewDiePie and pointed out how the harassing content crowded out legitimate translations. The video prompted a retaliatory harassment campaign against JT, Russell reports. JT brought the issue to YouTube's attention on Twitter but was originally instructed simply to report the offending translations.

YouTube changed its policies a few days later. The company has grappled with how to address hate speech on its platform amid increasing scrutiny this summer, particularly with regard to guidelines for its content creators.

BYTES: A new Chinese face-swapping app climbed to the top of China's iOS App Store over the weekend, but critics are already sounding the alarm about the app maker's privacy practices, Bloomberg's Colum Murphy and Zheping Huang report.

The app, Zao, allows users to upload a photo of themselves and swap it onto the faces of actors in scenes from popular movies and television shows. But much like FaceApp, the Russia-based app that attracted scrutiny earlier this summer, Zao came under fire for claiming “free, irrevocable, permanent, transferable, and relicense-able” rights to all of this user-generated content. The company has since changed its policy and says it will seek permission from users for any new uses of their images.

But the ease with which the app allows users to create "deepfakes," an increasingly common catch-all term for using artificial intelligence to make it appear that someone did or said something that never happened, underscores how little regulation exists for the use of artificial intelligence to doctor videos.


#TRENDING

—  Tech news generating buzz around the Web:

The idea of a “universal basic income” is gaining traction among Silicon Valley titans and Democratic presidential hopefuls, despite a national trend toward more restrictions on public benefits.

Robert Samuels



