Privacy beyond HIPAA in voice technology - MobiHealthNews

While voice has been touted as an emerging technology with the ability to lower the barrier to entry, industry players are starting to warn of privacy gaps. Amazon Alexa and Google Home devices have become common household items, used for everything from ordering a new wardrobe to helping with homework. 

But when it is used in the medical industry, the technology needs to be handled differently than in the consumer world. 

“When it comes to healthcare and voice design, we have several challenges we face every day,” Freddie Feldman, voice design director at Wolters Kluwer Health, said at The Voice of Healthcare Summit at Harvard Medical School last week. “HIPAA is a big topic on everyone’s mind nowadays, and it is one we take seriously. The first thing most people think about when they hear HIPAA is securing servers and platforms, but there is more to it. We have to consider things like the unintended audience for a call.”

He said that, because of the nature of voice, even disclosures not expressly prohibited by HIPAA can be inappropriate. For example, if voice technology intended for home use announces a message from the radiology department to the whole household, it is giving away too much information, he said. 

Much of it comes down to appropriate use. Placing smart speakers in a hospital room, for example, poses a different set of challenges. 

“I think as far as smart speakers and virtual assistants [go], Amazon right now only has HIPAA-eligible environments, so basically turning on and off HIPAA for specific skills, enabling HIPAA for a particular voice app or voice skill,” Sarah Lindenauer, product and portfolio manager of the Innovation and Digital Health Accelerator (IDHA) at Boston Children’s Hospital, said at MassBio’s forum on using clinical data from wearables and voice. 

“Right now, it’s invite-only and only select developers or partners would develop skills that process PHI in a HIPAA-compliant manner. That is great and a step in the right direction. But when you think about placing a smart speaker in a hospital room, there is a whole host of considerations. … The entire platform isn’t HIPAA-compliant, and you can’t tell, if it’s patient-facing, how the patient is going to use it and the type of information they are going to share, and whether they are going to stay in that one skill that is deemed HIPAA-compliant.”

The provider setting

Providers are also starting to use voice in their work, both in the exam room and during preparation. 

“In healthcare you have to [look at] the design considerations and the advantages and disadvantages of speaking. You have to think about some of the privacy considerations of public versus private, and the concept of someone listening to you and also overhearing things that you might not want said out loud,” Dr. Yaa Kumah, assistant professor of biomedical informatics at Vanderbilt University Medical Center, said at The Voice of Healthcare Summit. 

She went on to say that there are many use cases where voice could be appropriate. For example, the technology could help providers pull up critical information during patient exams, allowing clinicians more face-to-face time with patients. This scenario also comes with its own set of challenges. She brought up the situation where a patient may come into the room with a support person. 

“If the patient is in the room and they are there with a friend, and you are asking, ‘What medications are you being treated for?’ Is that something that you need a voice assistant to assert out loud, or do you suppress different kinds of information in the chart that could be considered sensitive? If you do that, does that add to the stigma? These are all things we are figuring out.”

Another situation where voice could be beneficial is patient briefings. For example, when doctors are in the privacy of their own cars, they could get updates on their patients’ conditions as they commute to work. But this scenario has its own concerns: Kumah said you wouldn’t want to use the same type of technology in a public location. 

The appropriate setting 

“Transitioning from one place to another [is important]. You are rounding on your patients and need to know something about the next person before you see them,” Kumah said. “So, unlike the car that is more or less private, you are on an elevator or in the hallway, so maybe it is not appropriate to speak aloud the entire summary for the patient. But maybe the tech can [sense] if you have headphones in and only then will it speak it aloud rather than displaying it on the [phone].”
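As a rough illustration of the kind of context-aware behavior Kumah describes, the sketch below (in Python, with hypothetical stand-in functions for the headphone check, text-to-speech, and on-screen display, since the article names no specific platform) shows a patient summary being read aloud only when a private audio channel is available.

    def headphones_connected() -> bool:
        """Hypothetical stand-in for a platform audio-route check."""
        return False  # default to the cautious assumption: audio is not private

    def speak(text: str) -> None:
        print(f"[audio] {text}")   # placeholder for a text-to-speech call

    def display(text: str) -> None:
        print(f"[screen] {text}")  # placeholder for an on-screen notification

    def deliver_patient_summary(summary: str) -> None:
        # Read the summary aloud only when the audio channel is private;
        # otherwise fall back to a silent, on-screen display.
        if headphones_connected():
            speak(summary)
        else:
            display(summary)

    deliver_patient_summary("Summary for the next patient on rounds.")

This is a sketch of the design choice, not a specific product: the point is that the output channel, not just the content, is part of the privacy decision.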

It’s not only providers who need to be able to transition between modes of communication, but also patients. 

“For example, there are some private conversations you might want to have by text because you don’t want to be walking down the street and your phone goes off and says, ‘Don’t forget to take your Viagra today,’" Shwen Gwee, co-founder of Novartis Biome and global head of Open Innovations at Novartis, said during Voice Summit. "When you are at home, voice is much easier to do."

At the end of the day, industry players say that for this technology to work, it comes down to figuring out when to use it and when not to. 

“I always like to ask what is the best channel to ask this question, and what is the best channel to receive it,” Lexi Kaplin, co-founder and chief product officer at conversationHEALTH, said during the voice forum. 


