JAMIA study asks: What is the most accurate & efficient way to input data into an EHR?

A recent study published in the Journal of the American Medical Informatics Association (JAMIA), referenced in the 2018 ONC report, “Strategy on Reducing Regulatory and Administrative Burden Relating to the Use of Health IT and EHRs,” asked a simple but important question: which is the more accurate and efficient way to input data into an EHR, speech recognition software or manual typed entry?

Most people would assume that speech recognition would be both more efficient and more accurate, but the results of the study point in a surprisingly different direction.

The study focused on emergency department physicians: each physician was assigned eight standardized clinical documentation tasks using a commercial EHR and Dragon Medical. The tasks were split between simple and complex scenarios, and efficiency and accuracy were measured for each task completion.

The results of these tests show that medical speech recognition on its own does not improve the efficiency of entering patient documentation. The numbers are quite surprising, and they cannot be attributed to participants’ inexperience with the EHR or the speech recognition software.

For the tasks undertaken in the study, creating EHR clinical documentation with the assistance of speech recognition was nearly 20% slower for both simple and complex tasks.

In addition, there were far more system/integration errors, including network delays in transmitting the speech recognition output and data landing in the incorrect location in the chart. The study emphasizes that these errors could be avoided with more seamless integration of speech recognition products or more speech-friendly EHR design.

And this is the important caveat to the study’s findings: speech recognition itself can be much more efficient and accurate for inputting data into the EHR, but only with better integration between EHRs and speech recognition solutions.

The benefits of better integration are spelled out in the study’s concluding discussion: the authors write that combining speech recognition with natural language processing produced time and usability benefits within EHR use. The study further suggests that if EHR companies designed their EHRs with speech recognition in mind (rather than just keyboard and mouse), additional efficiencies would be realized, particularly in the area of system integration.

The takeaway here is important: EHRs have grown into more complex applications due to various regulatory requirements, payer needs, and the desire to improve patient outcomes. As a result, stand-alone speech recognition requires additional functionality and integration to be of value in the process of entering patient records.

A fully integrated solution is the only way to reduce the burden of entering patient documentation, add this kind of valuable functionality, save physicians time, and reduce input errors. Three primary needs must be met to achieve these user efficiencies (a minimal sketch of the resulting pipeline follows the list):

  • Medical speech recognition must be paired with the proper input device to capture the clinician’s words and convert them into text.
  • Artificial intelligence and natural language processing must be employed to take the text produced by the speech recognition software and parse out the structured data elements the EHR expects.
  • Seamless integration with the EHR must occur via the vendor’s own supplied interfaces (APIs), so that the resulting data, both narrative and structured, lands in the proper areas of the EHR, with the required coding in place and without navigating from screen to screen.
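To make these three requirements concrete, here is a minimal sketch in Python of how the stages could fit together. Everything in it is hypothetical: transcribe_audio, extract_structured_elements, EHRClient, and the sample data are illustrative placeholders, not Samantha’s internals or any EHR vendor’s actual API.

```python
# Hypothetical sketch of the three-stage pipeline described above:
# speech recognition -> NLP structuring -> placement in the EHR via API.
# None of these names correspond to a real product or vendor API.
from dataclasses import dataclass, field

@dataclass
class StructuredNote:
    narrative: str                               # free-text portion of the note
    fields: dict = field(default_factory=dict)   # discrete EHR data elements

def transcribe_audio(audio: bytes) -> str:
    """Stage 1: speech recognition turns dictation into text.
    Stubbed with a canned transcript for this sketch."""
    return "Patient reports headache; continue aspirin 81 mg daily."

def extract_structured_elements(text: str) -> StructuredNote:
    """Stage 2: NLP parses the transcript into the structured elements
    the EHR expects. A real system would use entity extraction and
    medical coding; this toy rule just flags one medication mention."""
    note = StructuredNote(narrative=text)
    if "aspirin" in text.lower():
        note.fields["medication"] = {"name": "aspirin", "dose": "81 mg daily"}
    return note

class EHRClient:
    """Stage 3: thin wrapper over an EHR vendor's supplied API (hypothetical).
    Here it just prints where each piece of data would be placed."""
    def place_narrative(self, encounter_id: str, text: str) -> None:
        print(f"[{encounter_id}] narrative -> progress note: {text}")

    def place_field(self, encounter_id: str, name: str, value: dict) -> None:
        print(f"[{encounter_id}] {name} -> structured section: {value}")

def document_encounter(audio: bytes, encounter_id: str, ehr: EHRClient) -> None:
    """Run dictated audio through all three stages for one encounter."""
    text = transcribe_audio(audio)
    note = extract_structured_elements(text)
    ehr.place_narrative(encounter_id, note.narrative)
    for name, value in note.fields.items():
        ehr.place_field(encounter_id, name, value)

if __name__ == "__main__":
    document_encounter(b"<audio bytes>", "enc-001", EHRClient())
```

The point of the sketch is the division of labor: recognition produces text, NLP turns that text into the discrete elements the chart needs, and the API layer, not the clinician, handles navigation and placement. In a real deployment, stage 3 would call the EHR vendor’s published interfaces so narrative and structured data land in the correct chart sections automatically.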

At NoteSwift, we’re passionate about meeting this need for physicians. Our fully integrated solution, Samantha, is available now with seamless, API-based interfaces into many EHRs. Samantha works with any speech recognition software and provides the artificial intelligence, natural language processing, and complete “out-of-the-box” seamless integration necessary not only to save physicians hours of time, but also to improve the accuracy of their EHR notes and support better patient outcomes.

This study is another reminder that technology alone does not make our health care better. However, when we bring together powerful technologies like speech recognition, artificial intelligence, and natural language processing into integrated solutions, we can create better EHR workflows for physicians and help them focus on what’s most important: caring for patients.

Click here to access the complete ONC report: Strategy on Reducing Regulatory and Administrative Burden Relating to the Use of Health IT and EHRs.

Click here to access the complete JAMIA study: Efficiency and safety of speech recognition for documentation in the electronic health record.

 


About Wayne Crandall

Wayne Crandall’s career in technology spans sales, marketing, product management, strategic development and operations. Wayne was a co-founder, executive officer, and senior vice president of sales, marketing and business development at Nuance Communications and was responsible for growing the company to over $120M following the acquisition of Dragon and SpeechWorks.

Wayne joined NoteSwift, Inc. at its inception, working with founder Dr. Chris Russell to build the team from the ground up. As President & CEO, Wayne has continued to guide the company’s growth and evolution, resulting in the development of the industry’s first AI-powered, real-time EHR transcriptionist, Samantha™.
