Metrics for Comparing User Experiences in Health IT

By David Classen
January 24, 2013 | Discussion Paper

Introduction

In virtually all Institute of Medicine (IOM) reports that have touched on patient safety during the last 15 years, information technology (IT) has played a critical role in the future view of safer patient care. This reflects an emerging view of patient safety in the IT era, with ever greater adoption of health IT across the continuum of care. Health IT was once used almost exclusively in financial, patient billing, claims, and other administrative systems in health care. This is no longer the case, as the Health Information Technology for Economic and Clinical Health Act and subsequent regulations encouraging the meaningful use of health IT have spurred wide adoption of electronic health records (EHRs) in actual patient care across the continuum. Looking forward with respect to health IT and patient safety, the IOM recently issued a report, Health IT and Patient Safety: Building Safer Systems for Better Care (2012), that outlines how health IT may be used successfully to improve safety. Reporting of safety problems was a key part of the report, with a focus on both voluntary reporting and surveillance. Drawing from aviation and other modes of transportation, the report recommended a National Transportation Safety Board–like approach to collect, analyze, and investigate health IT–related patient safety problems, something quite unusual for health care. The report also called for Food and Drug Administration regulation of health IT if all the other recommendations failed to improve health IT safety.

Finally, the IOM report outlined a series of steps to increase the transparency of health IT vendor performance through oversight approaches that lead to publication of evaluations of health IT products, and the public sharing of user experiences with those products. Access to details of patient safety risks for health IT products “is essential to a properly functioning market where users identify the product that best suits their needs. Users need to [be able to] share information about risks and adverse events with other users and vendors as well” (IOM, 2012). Currently, health IT users cannot easily communicate their experiences with health IT effectively to each other or to the public. In other industries, “product reviews are available where users can rate their experiences with products and share lessons learned” (IOM, 2012). A consumer guide for health IT safety that included user experiences could help identify safety concerns and increase system transparency. The specific recommendations in the IOM report in this area are outlined below:

Recommendation 2: The Secretary of HHS [the Department of Health and Human Services] should ensure insofar as possible that health IT vendors support the free exchange of information about health IT experiences and issues and not prohibit sharing of such information, including details (e.g., screenshots) relating to patient safety.

Recommendation 3: The Office of the National Coordinator (ONC) should work with the private and public sectors to make comparative user experiences across vendors publicly available.

This paper comments on a discussion held during an IOM workshop, Comparative User Experiences for Health IT–Related Patient Safety Events, on March 20, 2012. The purpose of that workshop was to add further detail to Recommendations 2 and 3 above. This paper focuses on the following specific question discussed in the workshop: What types of metrics could be used to make comparative user experiences available for health IT vendor products?

 

Framework for Developing Metrics

The current market has failed to deliver any effective mechanism that enables users of health IT products to share their experiences, especially those related to safety. Given this market failure, strong federal leadership is needed to create a public-private partnership that can facilitate the development and public sharing of comparative user experiences. The only federal agency well positioned to lead this effort is the ONC, and its leadership will be needed for any public-private partnership to succeed.

Any approach to developing metrics for the sharing of user experiences with health IT vendor products should use the IOM sociotechnical model as an overall framework for viewing user experiences; that framework encompasses people, technology (hardware and software), process, organization, and the external environment (IOM, 2012). The metrics need to address not only technical issues, such as software usability, but also sociological issues, such as how these vendor products are actually used by patients and by frontline providers. Metrics development should start with the “goal” of sharing user experiences related to (1) health IT product vendors’ ability to prevent health IT–related patient safety problems and (2) the ability of health IT products to improve the safety of care for patients. The overall approach also needs to make clear that these user experience metrics are meant to increase the transparency of product performance through public reporting.

Health IT user experience metrics should be organized along several dimensions, including health IT product category (e.g., EHRs, personal health records, health information exchanges), specific vendor and product names, and type of user organization (in this case, “user organization” can be defined as the setting of care, such as an integrated health care system, hospital (large, medium, or small), ambulatory clinic (large or small), long-term care, home care, ambulatory pharmacy, or ambulatory surgery and imaging center). The specific types of user experience metrics could include various measures to compare products, such as the structural, process, and outcome measures listed below; a brief illustrative sketch of how such metrics might be organized follows the list.

  • Structural measures might include the ONC’s EHR safety checklist (user-level), the ONC’s vendor certification criteria (product-level), user training, and competency training and credentialing.
  • Process measures might include the Centers for Medicare & Medicaid Services’ meaningful use measures at both the health care organization and vendor levels. These measures could track compliance through an organization’s meaningful use reports, subsequent meaningful use compliance audits at the organization level, or health IT vendor certification results.
  • Outcome measures might include the results of the Texas Medical Institute of Technology’s (TMIT’s) computerized provider order entry (CPOE) test, which it operates for CPOE research and development and provides to the Leapfrog Group (Metzger et al., 2010), as well as the National Institute of Standards and Technology’s usability measure (NIST, 2012) results by vendor or even by user type.
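
To make the organization above concrete, the following minimal sketch (in Python) shows one hypothetical way a reporting program could index each user experience metric by the dimensions discussed: product category, specific vendor and product, setting of care, and measure type. The class and field names, enumerated values, and the example record are illustrative assumptions for this paper’s discussion, not a proposed standard or any existing system’s schema.

```python
from dataclasses import dataclass
from enum import Enum


class ProductCategory(Enum):
    """Health IT product categories named in the text."""
    EHR = "electronic health record"
    PHR = "personal health record"
    HIE = "health information exchange"


class CareSetting(Enum):
    """Type of user organization, defined here as the setting of care."""
    INTEGRATED_SYSTEM = "integrated health care system"
    HOSPITAL = "hospital"
    AMBULATORY_CLINIC = "ambulatory clinic"
    LONG_TERM_CARE = "long-term care"
    HOME_CARE = "home care"
    AMBULATORY_PHARMACY = "ambulatory pharmacy"
    SURGERY_OR_IMAGING_CENTER = "ambulatory surgery or imaging center"


class MeasureType(Enum):
    """Measure families discussed in the list above."""
    STRUCTURAL = "structural"  # e.g., safety checklist, certification criteria
    PROCESS = "process"        # e.g., meaningful use compliance
    OUTCOME = "outcome"        # e.g., CPOE test or usability results


@dataclass
class UserExperienceMetric:
    """One comparative user experience observation, indexed by the
    dimensions outlined in the text so results can be aggregated and
    publicly reported by product, vendor, and setting of care."""
    product_category: ProductCategory
    vendor: str               # specific vendor name
    product: str              # specific product name (and version, if known)
    care_setting: CareSetting
    measure_type: MeasureType
    measure_name: str         # e.g., "Leapfrog CPOE test score"
    value: float
    reporting_period: str     # e.g., "2012-Q4"


# Example record; the vendor, product, and value are invented for illustration.
example = UserExperienceMetric(
    product_category=ProductCategory.EHR,
    vendor="ExampleVendor",
    product="ExampleEHR 4.2",
    care_setting=CareSetting.HOSPITAL,
    measure_type=MeasureType.OUTCOME,
    measure_name="Leapfrog CPOE test score",
    value=0.62,
    reporting_period="2012-Q4",
)
```

Indexing each observation this way would let a public reporting program aggregate comparative results by vendor, by product, or by setting of care without changing the underlying measures.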

 

Other types of health IT comparative user metrics might include voluntary reports, such as user-identified problems or safety reports, and nonvoluntary reports, such as automated triggers or data-mining measures that detect safety problems in operational health IT vendor products.

User-generated reports may also be a good source for learning about health IT safety hazards or safeguards. For example, health IT vendors typically have online forums or user groups where customers can exchange knowledge and experiences. Such information could also be gathered by professional organizations such as the Healthcare Information and Management Systems Society or the Association of Medical Directors of Information Systems, from social networking sites, or from health IT vendor evaluation organizations such as KLAS.

Another focus of health IT comparative user experience metrics might relate to a recommendation in the IOM report that strongly advocates ongoing post-deployment testing of operational health IT vendor systems. Although the TMIT CPOE/EHR test used by Leapfrog is the best example, different types of post-deployment testing provide other useful information, such as the work by Adelman and colleagues on triggers for EHR CPOE errors (Adelman et al., 2012). Because the TMIT CPOE test used by Leapfrog is an officially endorsed National Quality Forum (NQF) safe practice, any approach to developing metrics should include surveying the broad reach of NQF-endorsed health IT–related measures (Metzger et al., 2010).
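
As a concrete illustration of a nonvoluntary, trigger-style measure, the sketch below flags order sequences suggestive of wrong-patient CPOE errors in the spirit of the trigger work cited above: an order retracted shortly after entry and then re-placed by the same provider, for the same item, on a different patient. The data model, field names, and time windows here are illustrative assumptions for this sketch, not the published measure specification.

```python
from datetime import datetime, timedelta
from typing import List, NamedTuple, Optional, Tuple


class OrderEvent(NamedTuple):
    provider_id: str
    patient_id: str
    order_code: str                   # e.g., a medication or lab order identifier
    placed_at: datetime
    retracted_at: Optional[datetime]  # None if the order was never retracted


# Illustrative windows only; a real trigger would follow the published
# measure specification rather than values chosen for this sketch.
RETRACT_WINDOW = timedelta(minutes=10)
REORDER_WINDOW = timedelta(minutes=10)


def retract_and_reorder_events(
    orders: List[OrderEvent],
) -> List[Tuple[OrderEvent, OrderEvent]]:
    """Return (retracted order, reorder) pairs suggestive of a wrong-patient
    order: an order retracted soon after entry and then placed again by the
    same provider, for the same item, on a different patient shortly after."""
    flagged = []
    retracted = [
        o for o in orders
        if o.retracted_at is not None
        and o.retracted_at - o.placed_at <= RETRACT_WINDOW
    ]
    for r in retracted:
        for o in orders:
            if (
                o.provider_id == r.provider_id
                and o.order_code == r.order_code
                and o.patient_id != r.patient_id
                and timedelta(0) <= o.placed_at - r.retracted_at <= REORDER_WINDOW
            ):
                flagged.append((r, o))
    return flagged
```

Run over an operational system’s order logs, the rate of such flagged pairs (for example, per 1,000 orders) could serve as one automatically derived, comparative user experience metric of the kind described above.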

 

Who Should Develop These Metrics?

Which organizations are candidates to develop these comparative user experience metrics? One of the IOM report recommendations is for the Secretary of HHS to “fund a new Health IT Safety Council to evaluate criteria for assessing and monitoring the safe use of health IT and the use of health IT to enhance safety. This council should operate within an existing voluntary consensus standards organization.” The organization should have the following qualifications:

  • prior experience with performance measure development,
  • health IT, human factors, and cognitive engineering expertise,
  • data standards, vocabulary, and interoperability expertise, and
  • consumer expertise (IOM, 2012).

 

Organizations that might have this capability include, in addition to the NQF, the Institute for Healthcare Improvement, the RAND Corporation, the Agency for Healthcare Research and Quality (AHRQ) (as a grantor), the American Medical Association through its Physician Consortium for Performance Improvement program, the Institute for Safe Medication Practices, the National Committee for Quality Assurance, or a large academic research center with expertise in the health IT arena.

 

Collection, Analysis, and Reporting of Measures

Assuming that health IT vendor experience metrics are created, how might they be collected, analyzed, and reported? The collection and analysis of these metrics could be handled separately from the reporting function by different organizations. The IOM report envisioned a public-private partnership as essential to improving safety and health IT together. As such, a group of organizations that have participated in public-private partnerships might be a reasonable choice to operate such a program. These organizations could include KLAS, HIMSS Analytics, AmericanEHR Partners, the ECRI Institute, Consumers Union, or the contractors involved in generating AHRQ’s National Quality Report. The program could start as a pilot, for example, a proof of concept with five high-impact health IT vendor user experience metrics.

 

Conclusion

With the increasing adoption and use of health IT across health care organizations and various settings of care, increasing emphasis will be placed on the ability of these organizations to improve the safety of care. As recent reports have outlined, there will also be increased scrutiny on events where health IT has caused safety problems—including patient injury and death. Both of these issues will need to be addressed moving forward, as outlined in the IOM report Health IT and Patient Safety. As in other industries, greater public transparency about health IT vendor product performance will be a critical part of learning how to improve these products continuously, not only to prevent them from harming patients, but also to maximize their ability to improve the safety of care delivered. A key element of public transparency will be creating effective ways for users of these health IT products to share their own experiences without fear of reprisal. This paper outlines a potential approach to developing and implementing metrics that could facilitate the public sharing and reporting of health IT user experiences.

 


References

  1. Adelman, J. S., G. E. Kalkut, C. B. Schechter, J. M. Weiss, M. A. Berger, S. H. Reissman, H. W. Cohen, S. J. Lorenzen, D. A. Burack, and W. N. Southern. 2012. Understanding and preventing wrong-patient orders: A randomized controlled trial. Journal of the American Medical Informatics Association. https://doi.org/10.1136/amiajnl-2012-001055
  2. IOM (Institute of Medicine). 2012. Health IT and Patient Safety: Building Safer Systems for Better Care. Washington, DC: The National Academies Press. https://doi.org/10.17226/13269.
  3. Metzger, J., E. Welebob, S. Levitz, D. Bates, and D. C. Classen. 2010. Meaningful use of EHRs and CPOE to improve medication safety. Health Affairs 29(4):1-8.
  4. NIST (National Institute of Standards and Technology), S. Z. Lowry, M. T. Quinn, M. Ramaiah, R. M. Schumacher, E. S. Patterson, R. North, J. Zhang, M. C. Gibbons, and P. Abbott. 2012. Technical evaluation, testing, and validation of the usability of electronic health records. NISTIR 7804. Washington, DC. Available at: https://www.nist.gov/publications/nistir-7804-technical-evaluation-testing-and-validation-usability-electronic-health (accessed May 15, 2020).

 

DOI

https://doi.org/10.31478/201301c

Suggested Citation

Classen, D. 2013. Metrics for Comparing User Experiences in Health IT. NAM Perspectives. Discussion Paper, National Academy of Medicine, Washington, DC. https://doi.org/10.31478/201301c

Disclaimer

The views expressed in this discussion paper are those of the author and not necessarily of the author’s organizations or of the Institute of Medicine. The paper is intended to help inform and stimulate discussion. It has not been subjected to the review procedures of the Institute of Medicine and is not a report of the Institute of Medicine or of the National Research Council.

