Monday, January 6, 2020

What's next for voice technology in healthcare?

Change is ahead, perhaps sooner than we think. One of the more striking aspects of the flood of Voice First innovations is the near-feverish pace at which predictions turn into practice, faster than a device can say “Hello.” Below are just a few of the possibilities that, by this time next year, will be, so to speak, yesterday’s news. By then, thorny questions about privacy, personalization, and patient-doctor voice communication will be closer to resolution. Perhaps doctors will no longer save notes for the evening if they can enter them by speaking during or just after an appointment. And in five years, perhaps scribes will no longer be required. Consider:

  • Detecting health conditions by voice.  Healthcare providers are becoming interested in the use of voice technology to provide biomarkers, offering information from the sound of a person’s voice as a health status indicator. Recent studies show the ability to recognize severely compromised breathing based on sound alone. Firms like Beyond Verbal position themselves in the emotion-detection voice analytics space, analyzing the way a person speaks and possibly correlating it with an illness like coronary artery disease, or detecting the sound of an improperly used inhaler. Other companies in the ‘emotion analytics’ segment, including AudioAnalytic and Affectiva, are capable of detecting an individual’s emotional status in context, with the goal to “put an emotion chip into everything.” During 2018, Amazon patented a version of its Alexa technology that could perhaps evaluate whether a person is ill.
  • Who is speaking? Multi-factor voice authentication becomes a reality.  Just as with banking transactions, healthcare providers and patients will want to verify, and be verified as, the person for whom the information is intended. Verification and HIPAA compliance together will make voice-enabled care plans a reality, and technologists have already begun thinking about the use of two-factor authentication in healthcare. The voice biomarker is ‘the highest common factor’ and the easiest for the user, according to Douwe Korff of ValidSoft, or as he says, “Just Speak!”
  • Possibility of a healthcare-trained voice agent.  Early in 2019, Intuition Robotics introduced PlatformQ, a platform for ‘proactive goal-oriented agents within specific domains based on context and user learning.’ It is already used in some new cars, which can proactively take action to reduce drowsiness or alert a driver to a potential driving mistake. How might this work in a healthcare context?
  • Standards for voice services interoperability.  Perhaps a user of an Alexa voice assistant might wish to turn on a feature in Google Home, or data stored in one cloud service might be useful to a user whose data sits in a different cloud service. To date, that may not be straightforward, but the platform players are beginning to talk among themselves about standards. In September, the Voice Interoperability Initiative was announced, with the intent of enabling consumers to interact with multiple voice services on a single device. And in mid-December, more than 30 companies, including Amazon, Apple, Google, and Samsung, announced a collaboration on a set of smart home standards as part of Project Connected Home over IP. The healthcare industry should look at these efforts as signals for future healthcare voice assistant interoperability.

[Excerpt from Voice, Health and Wellbeing 2020 report published in January 2020.]

from Tips For Aging In Place https://www.ageinplacetech.com/blog/whats-next-voice-technology-healthcare
