Not every new technology product hits the shelves.
Tech companies kill products and ideas all the time — sometimes it's because they don't work, sometimes there's no market.
Or maybe it's too dangerous.
Recently, the research nonprofit OpenAI announced that it would not release a version of a text generator it developed, citing fears that the tool could be misused to create fake news. The text generator was designed to improve dialogue and speech recognition in artificial intelligence technologies.
The organization's GPT-2 text generator can produce paragraphs of coherent, continuing text based on a prompt from a human. For example, when given the claim, "John F. Kennedy was just elected President of the United States after rising from the grave decades after his assassination," the generator spit out the transcript of "his acceptance speech," which read in part:
It is time once again. I believe this nation can do great things if the people make their voices heard. The men and women of America must once more summon our best elements, all our ingenuity, and find a way to turn such overwhelming tragedy into the opportunity for a greater good and the fulfillment of all our dreams.
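For readers curious how this kind of prompt-to-continuation generation works in practice, here is a minimal sketch using the Hugging Face transformers library and the small, publicly released "gpt2" checkpoint. The library, model name, and sampling parameters are our own illustrative choices, not the larger withheld model the article describes; the sketch only shows the basic workflow of feeding a prompt and sampling a continuation.

```python
# Minimal sketch of prompt-based text generation with the small,
# publicly released GPT-2 checkpoint via Hugging Face transformers.
# This is an illustration of the prompt-to-continuation workflow,
# not the larger model OpenAI chose to withhold.
from transformers import pipeline, set_seed

# Build a text-generation pipeline backed by the "gpt2" model.
generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sampled continuation reproducible

prompt = (
    "John F. Kennedy was just elected President of the United States "
    "after rising from the grave decades after his assassination."
)

# Sample one continuation of up to 100 tokens from the prompt.
outputs = generator(prompt, max_length=100, num_return_sequences=1)
print(outputs[0]["generated_text"])
```

Because the model samples tokens probabilistically, each run (with a different seed) produces a different continuation, which is exactly the open-ended quality that makes such generators useful for dialogue systems and worrying as a fake-news tool.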
Considering the serious issues around fake news and online propaganda that came to light during the 2016 elections, it's easy to see how this tool could be used for harm.
In fact, the 2016 election helped raise awareness of an issue that Flickr co-founder Caterina Fake has been talking about in Silicon Valley for years — the ethics of technology.
That conversation was furthered last month by OpenAI's decision to publicly announce that it was withholding its new technology, Fake told NPR's Lulu Garcia-Navarro.
"Tech companies don't launch products all the time, but it's rare that they announce that they're not launching a product, which is what has happened here," Fake said. "The announcement of not launching this product is basically to involve people in the conversation around what is and what is not dangerous tech."
When evaluating potential new technology, Fake asks a fundamental question: Should this exist?
It's a question she explores as host of the podcast Should This Exist?
In a recent episode, Fake investigates a product called Woebot, which is an artificial intelligence-driven robot therapist. NPR spoke with Fake about the ethics of this new technology — and technology as a whole.
Interview Highlights
On evaluating Woebot
As we know, depression has increased, which has followed very closely the introduction of technology into our lives.
My initial impulse was, "Gosh, should we use technology to cure the problems of technology? That seems misguided." But, by the end of thinking through some of the possibilities of this technology, I became convinced that in fact, this was probably a good solution for it.
On the changing approach of technology developers
When we had first started Flickr, we kind of understood that what we were building was online community. Online community is something where you show up — you are yourself, you have to participate and you have to negotiate the culture of the community in which you are participating.
In a social media platform, you are so-called "eyeballs." You are a product that is being sold to advertisers. It's a completely different dynamic. When things switched from being, very early on, thought of as "online community" to being thought of as "social media," the dynamics of the entire software changed.
On technology's potential for good or evil
I feel as if technology can always be used for good, right? It has neutral valence. It is the way that humans use it, how we approach it and how we think about it — that is the most important part of technology and technology in our lives.
On how to handle technology's potential to be misused
The important part of this is to acculturate people to asking these questions. As we all know, Millennials and Gen Z and the younger folk are much more thoughtful about: what are the values behind this product or this program? And what does it do to us?
NPR's Amanda Morris produced this story for digital. NPR's Mayowa Aina produced this story for broadcast.
LULU GARCIA-NAVARRO, HOST:
Tech companies decide to kill products and ideas for products all the time. Maybe it doesn't work; there's no market - whatever. But what about when the tech is too dangerous? That's what Caterina Fake wants Silicon Valley to consider more often, the ethics of new tech. She hosts the podcast "Should This Exist?" And she joins us now from San Francisco. Thanks so much for being on WEEKEND EDITION.
CATERINA FAKE: And thanks for having me.
GARCIA-NAVARRO: So you think about this all the time. But we thought about it because OpenAI, the research nonprofit, announced they weren't releasing a text generator they developed because they feared it could be misused to create fake news. Did that admission take you by surprise?
FAKE: Tech companies don't launch products all the time. But it's rare that they announce that they're not launching a product, which is what has happened here. And the announcement of not launching this product is basically to involve people in the conversation around what is and what is not dangerous tech.
GARCIA-NAVARRO: You are in Silicon Valley. You're the co-founder of Flickr. One of the things that always struck me was at the beginning, the conversation among developers there was always, like, we are doing this for the greater good. This is part of a good for society. Has that conversation changed?
FAKE: I think it has. For example, when we had first started Flickr, we kind of understood that what we were building was online community. Online community is something where you show up. You are yourself. You have to participate. And you have to negotiate the culture of the community in which you are participating.
In a social media platform, you are our so-called eyeballs. You are a product that is being sold to advertisers. It's a completely different dynamic. And when things switched from being very early on thought of as online community to being thought of as social media, the dynamics of the entire software changed.
GARCIA-NAVARRO: So in your podcast, "Should This Exist?," I mean, what kinds of questions are you grappling with right now? I'm sure you see things all the time that are being promoted, and you think nah, this isn't going...
FAKE: That should...
GARCIA-NAVARRO: ...To be a good.
FAKE: ...Not exist.
GARCIA-NAVARRO: That should not exist.
FAKE: (Laughter). I think it would be wonderful if I were the sole arbiter of what should and should not exist.
GARCIA-NAVARRO: Me too.
FAKE: But really, we have been talking about this in the Valley forever. But it was not getting a lot of attention. And there was a kind of a catastrophic change that happened I think when, unexpectedly, to many of us, the 2016 elections had results that we did not anticipate for reasons that we could suddenly see.
GARCIA-NAVARRO: What do you say to people, though, who say that any product or platform can be misused and that it's impossible to plan for every eventuality?
FAKE: The important part of this is to acculturate people to asking these questions. And as we all know, millennials and Gen Z and the younger folk that are now, you know, kind of coming into their own, are much more thoughtful about, what are the values behind this product or this program? And what does it do to us?
For example, one of the last shows was about a product called Woebot. And what it is, it's an AI-driven bot therapist. And as we know, depression has increased, which has followed very closely the introduction of technology into our lives.
GARCIA-NAVARRO: There's studies that show that kids, specifically that are on a lot of technology and social media, feel more depressed, more alienated from their peers, et cetera.
FAKE: Yes. And my initial impulse was, gosh, should we use technology to cure the problems of technology? That seems misguided. But by the end of thinking through some of the, you know, possibilities of this technology, I became convinced that, in fact, this was probably a good solution for it.
I feel as if technology can always be used for good, right? It has neutral valence. It is the way that humans use it and how we approach it and how we think about it. That is what is the most important part of technology and technology in our lives.
GARCIA-NAVARRO: Caterina Fake is a co-founder of Flickr, a venture capitalist and a host of the podcast "Should This Exist?" Thank you so much.
FAKE: Thank you.
(SOUNDBITE OF MYLAB'S "FANCY PARTY CAKES") Transcript provided by NPR, Copyright NPR.