Comparative User Experiences of Health IT Products: How User Experiences Would Be Reported and Used

By Christine A. Sinsky, Jason Hess, Ben-Tzion Karsh, James P. Keller, Ross Koppel
September 14, 2012 | Discussion Paper

Introduction

Electronic health records (EHRs) and other forms of health information technology (IT) hold the promise of transforming the way health care is delivered in the United States, improving quality and safety while lowering costs. But after a decade of experience, concerns have been raised that such positive outcomes have not been widely achieved (Jones et al., 2012; Mandl and Kohane, 2012).

The Office of the National Coordinator for Health Information Technology (ONC), charged with promoting widespread application of health IT, asked the Institute of Medicine (IOM) to address safety concerns. In its report Health IT and Patient Safety: Building Safer Systems for Better Care, released in November 2011, the IOM recommended that comparative user experiences be collected and made publicly available (IOM, 2012):

Recommendation 3: The ONC should work with the private and public sectors to make comparative user experiences across vendors publicly available. (p. 7)

The report explains:

As one step, [the Department of Health and Human Services] should ensure that vendors support users in freely exchanging information about health IT experiences and issues, including details relating to patient safety. The ability to generate, develop, and share details of safety risks is essential to a properly functioning market in which health care providers have the ability to choose products that best suit their needs. …The ONC should also work with the private sector to make comparative user experiences publicly available. …A consumer guide for health IT safety could help identify safety concerns and increase system transparency …HHS should establish a mechanism for both vendors and users to report health IT-related…unsafe conditions.… Strategies also should be developed to encourage reporting; such efforts might include removing any perceptual, contractual, legal, and logistical barriers to reporting. (IOM, 2011)

Subsequent to the release of the report, the ONC requested that the IOM bring together individuals for a discussion on the report, specifically covering ways to generate, house, disseminate, and thus maximally leverage comparative user experience to improve the health IT environment and improve patient care. The IOM held a workshop in which participants were asked to discuss three components of Recommendation 3 from Health IT and Patient Safety: (1) identify the metrics needed and who should develop and maintain the metrics; (2) develop a plan to report comparative user experiences; and (3) consider the feasibility of reporting and guidelines for product identification.

This discussion paper provides the authors’ views on how comparative user experience data could be generated, formatted, housed, and reported.

 

Why Collect and Publicly Report Comparative User Experiences?

Systematically collecting data on user experiences and user suggestions for improvements in safety and efficiency holds the possibility of improving health IT products and their use in the service of safer care. In this context, the term “users” refers to clinicians and health care organizations.

We hope public reporting of user experiences with different health IT products will promote a more robust environment for improvements in health IT design, as it does in other industries. Consumer Reports ranks products, influencing purchasers in their decisions and, arguably, manufacturers in their future product development. Websites and social media forums provide public vehicles for users to share experiences, learn from others, and move the marketplace, and can thus harness the collective experiences of customers to effect change.

We anticipate that similar transparency in health IT user experiences will facilitate cross-vendor comparisons and the sharing of lessons learned, and provide an additional incentive for vendors to make product improvements. It may also redistribute some of the power from the vendor to the purchaser/user.

At present, some vendors prohibit users from sharing screenshots and otherwise effectively communicating with others about a problem with an EHR. There is currently no place for health IT users to share publicly the experiences they have had with their health IT products. However, even if a place were designated and developed a following, its use would be limited because of contractual prohibitions on sharing screenshots.

A voluntary multi-modal, multi-stakeholder approach to health IT safety reporting and communication may deter a more heavy-handed approach to regulating health IT vendors. Regulation by the Food and Drug Administration is a serious possibility on the horizon if improvement in health IT safety and usability is not achieved through a voluntary process.

 

Audience for Comparative User Experience Data

Anticipated consumers of data on comparative user experience include hospitals, health systems, physician practices, clinicians, researchers, informaticians, IT professionals, regulators, insurance providers, Regional Extension Center staff, database developers, and vendors.

The research community could use comparative user experience data to develop a better understanding of how health IT works in real-world clinical settings. This knowledge would support the development of conceptual models for new generations of health IT products as well as new models of workflows and personnel to employ in clinical care.

Health IT developers could utilize the data to identify flaws in their products, generate ideas for product improvements or for market research, and identify areas in which their educational materials could be improved or expanded.

Planners and purchasers for provider organizations could use this information to select products that best fit their needs.

Implementation of a health IT product is one of the most challenging and time-consuming stages of the health IT life cycle. A database of user experience, including identification of new safety risks, would provide organizations with suggestions for ways to improve the efficiency, effectiveness, and safety of implementation. These organizations will benefit from lessons learned by similar health care organizations about how to improve the performance and safety of their existing systems. The information may include ideas on effective product upgrade practices or applications, safety risks associated with a specific vendor’s product, warnings about data entry combinations that could result in erroneous clinical orders, or suggestions for ways to enhance user training.

Health professionals responsible for setting up and managing clinical information exchange will benefit from data on user experiences. The integration of health IT products with other clinical IT products (e.g., picture archiving and communications systems, pharmacy systems) and devices is complex and requires a significant amount of upfront configuration to ensure that data are not lost or corrupted along any phase of the information exchange. A user experience database will help identify integration-related problems to avoid.

Health care organizations should also find comparative user experiences useful for workflow redesigns and new personnel infrastructures needed in a health IT–enabled health care system. In addition, organizations need to plan for the appropriate time and approaches to decommission an existing system. This can be facilitated through review of reports submitted by other health care organizations at similar stages of their health IT products’ life cycles. Many of these organizations will likely be working on the process of replacing their products with new products and should have information to share about what has worked well for them and what problems they encountered during the transition.

Ideally, this information could be parsed so that consumers of the data could easily find information relevant to their particular questions or contexts. In addition, an open and transparent approach to sharing user experience could create organic learning communities among different categories of professionals within the health care industry.

 

Multi-Modal Approach to Characterizing and Comparing User Experiences

No single measure can meet all needs for comparative user experiences. Different audiences—clinicians, vendors, implementers (e.g., chief medical informatics officer [CMIO]), organizational decision-makers (e.g., board of directors)—will have different needs. Furthermore, no single measure can fully capture the strengths and weaknesses of a particular health IT product. Thus, multiple modalities of acquiring and reporting user experiences are recommended, including

  • in vitro “flight simulator” laboratory evaluation of test scenarios;
  • in vivo point-of-use reporting;
  • data mining of use patterns;
  • third party–administered user surveys;
  • direct user-to-public reporting; and
  • a formalized system of hazards reporting (see Table 1).

 

Some modalities will have more scientific rigor, such as formal surveys and “flight simulator” lab testing, while other modalities will be less scientific, but will provide an opportunity for input from a large number of users and hold the potential for discovery of unanticipated user experiences.

Multiple modalities allow for a more robust understanding of hazards. For example, although a user survey may capture the subset of safety hazards that users perceive, such a survey will miss hazards users fail to recognize. Similarly, experienced users will become accustomed to a particular product and develop ingrained workarounds that preclude them from recognizing some of the product’s hazards, whereas naïve users of the product may be more perceptive of these problems. The “flight simulator” laboratory data would capture these types of hazards.

 

In Vitro: “Flight Simulator” Testing

In a laboratory setting, controlled comparisons could be made of different vendors’ product performance in test scenarios. For example, a complex patient care scenario could be tested in multiple products, determining variables such as

  • time to accomplish the task,
  • cognitive workload associated with the task,
  • situational awareness developed,
  • user perception of ease of use,
  • accuracy of the resultant care decisions,
  • ease with which the user makes an error in judgment or execution, and
  • time to train to minimum competency.

 

Cross-vendor comparisons could also analyze how individual EHRs facilitate or inhibit new models of care, such as team-based care, by looking at ease of login handoffs among team members and ease of sharing tasks among team members.

Data from testing labs might be reported following the National Institute of Standards and Technology’s Common Industry Format for usability testing, and may be amenable to histogram or other graphical summaries in addition to explanatory text.
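
To make the comparison concrete, below is a minimal sketch, in Python, of how per-vendor summaries might be computed from individual test-scenario trials. The vendor names, metric fields, and values are invented for illustration; a production report would follow the Common Industry Format rather than this ad hoc structure.

```python
# Illustrative sketch: summarizing "flight simulator" test-scenario results
# across vendors. All vendor names, field names, and values are hypothetical.
from statistics import mean

# Each record is one tester completing the same scripted scenario in one product.
trials = [
    {"vendor": "Vendor A", "task_seconds": 212, "errors": 1, "ease_rating": 4},
    {"vendor": "Vendor A", "task_seconds": 245, "errors": 0, "ease_rating": 3},
    {"vendor": "Vendor B", "task_seconds": 318, "errors": 2, "ease_rating": 2},
    {"vendor": "Vendor B", "task_seconds": 290, "errors": 1, "ease_rating": 3},
]

def summarize(trials):
    """Aggregate per-vendor metrics for side-by-side comparison."""
    summary = {}
    for vendor in sorted({t["vendor"] for t in trials}):
        rows = [t for t in trials if t["vendor"] == vendor]
        summary[vendor] = {
            "mean_task_seconds": mean(r["task_seconds"] for r in rows),
            "mean_errors": mean(r["errors"] for r in rows),
            "mean_ease_rating": mean(r["ease_rating"] for r in rows),
            "n_trials": len(rows),
        }
    return summary

for vendor, stats in summarize(trials).items():
    print(vendor, stats)
```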

 

In Vivo: Point-of-Use Reporting

Point-of-use reporting could begin to leverage the experience of thousands of users to identify opportunities to improve safety and efficiency. Many users will not take the time to complete extensive hazard reports in the hectic course of clinical care. Some users may even have learned that such effort is futile, feeling powerless to effect change in the tools that impact their ability to carry out their professional responsibilities.

A “report here” button on every screen could reengage the end-user in the improved design of the tools they rely on. Such point-of-use reporting could include discrete, structured text options, allowing users to categorize the nature of their experiences (e.g., confusing display, cluttered layout, missing information, slow download), as well as a comment field for details.
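
As a rough sketch of what such a report might look like as a data structure, the following Python example models a single point-of-use report. The category list, field names, and example product are assumptions for illustration, not a proposed standard.

```python
# Illustrative sketch of what a single point-of-use report might carry.
# The categories and field names are assumptions, not a proposed standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Category(Enum):
    CONFUSING_DISPLAY = "confusing display"
    CLUTTERED_LAYOUT = "cluttered layout"
    MISSING_INFORMATION = "missing information"
    SLOW_DOWNLOAD = "slow download"

@dataclass
class PointOfUseReport:
    product: str               # product name, captured automatically
    version: str               # product version, captured automatically
    screen_id: str             # which screen the "report here" button was on
    categories: list           # one or more Category values chosen by the user
    comment: str = ""          # optional free-text detail
    anonymous: bool = True     # reporter identity is optional
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# Example: a clinician flags a cluttered medication list with one click.
report = PointOfUseReport(
    product="ExampleEHR", version="4.2", screen_id="med-list",
    categories=[Category.CLUTTERED_LAYOUT],
    comment="Active and discontinued meds are interleaved.")
print(report)
```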

Researchers, administrators, and product design experts could use these data to identify and prioritize areas for redesign of both clinical workflows and the health IT products themselves. For example, if 3,000 users felt the display on a particular page presented a safety hazard, then trainers could alert clinicians to the potential for confusion, administrative staff could submit requests for fixes, and, with public reporting of these data, vendors would have additional incentive to prioritize an improvement.

 

Data Mining

Behind-the-scenes data mining of use behavior could unearth valuable information to drive improvements and innovations. The number of clicks required to complete a task or the number of times the user returns to a given screen during an episode of interaction can point to inefficiencies in design and/or use and present opportunities for improvement.
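
A minimal sketch of this kind of analysis, assuming a hypothetical click-stream log format, is shown below: it derives two simple metrics, total clicks per task and repeat visits to a screen within a task.

```python
# Illustrative sketch: mining a click-stream log for two simple use metrics.
# The event format is hypothetical; real products log far richer telemetry.
from collections import Counter

# (user, task, screen, action) events in chronological order
events = [
    ("u1", "order-entry", "med-search", "click"),
    ("u1", "order-entry", "med-search", "click"),
    ("u1", "order-entry", "dose-form", "click"),
    ("u1", "order-entry", "med-search", "click"),  # returned to earlier screen
    ("u1", "order-entry", "sign", "click"),
]

# Metric 1: total clicks to complete each task
clicks_per_task = Counter((u, t) for u, t, _, a in events if a == "click")

# Metric 2: how often a user re-enters a screen within one task,
# a possible sign of a confusing flow
revisits = Counter()
seen = set()
prev_screen = None
for user, task, screen, _ in events:
    key = (user, task, screen)
    if key in seen and screen != prev_screen:
        revisits[key] += 1
    seen.add(key)
    prev_screen = screen

print(dict(clicks_per_task))   # {('u1', 'order-entry'): 5}
print(dict(revisits))          # {('u1', 'order-entry', 'med-search'): 1}
```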

 

Surveys

Surveys could also be designed for periodic point-of-use sampling, asking users, for example, “Was this decision support tool helpful?” with the option for comments on what could make the decision support more useful.

In addition to point-of-use surveys, formal surveys—administered by third-party organizations to a sample of users of similar products from different vendors—have the potential to generate helpful comparative user experience data. Probes can be built into the surveys, tailored to the needs of multiple stakeholders. For example, to discover “lessons learned” about product selection, a probe might be the prompt “If I had to purchase this EHR again, what I would do differently this time is…” Similarly, to determine the impact of different EHRs on workflow, a probe might be “What are the positive and negative impacts of this EHR on your workflow? What did not change? What was unexpected, in a good or a bad way?” The goal of such probes would be to minimize the cognitive burden on the reporter by making the desired information obvious and natural from the probes themselves.

Surveys might also provide product rankings on a variety of scales that could be made available as raw or summarized data, the way that many online product reviews appear. The key is to make data available in a format that meets the varying user needs.

 

Formal Hazard Reporting System

There is an urgent need for a system that allows clinicians who encounter health IT–related patient safety hazards to report those hazards. Such a system must be rapid and frictionless. It also should allow for both anonymous and self-identified reporting, provide contextual information (e.g., what system, what version, what problem), and not interrupt the clinician’s workflow. Once reports are made, they must be acknowledged, and efforts must also be made to correct the problems—lest the reporting system become an exercise in frustration, which will attenuate continued reporting.

Reporters should never be penalized, as it is better to know of hazards than to ignore them. Reports should be received with acknowledgment and appreciation, rather than with defensiveness, anger, or retribution.

The coding of the problems or hazards may fit well into the schema proposed by Walker and Hassol (2011). As urged by the IOM report and the authors of this paper, any hazard reporting system should be independent of those primarily seeking to market health IT products.

 

Removing Barriers to Reporting

It is likely that only a small subset of user experiences, of either benefit or hazard, are captured in current reporting structures. Barriers to reporting problems with usability are multifaceted and include users’ fear of retribution, vendors’ fear of liability, lack of trust in the impartiality of the entity collecting the data, time required of individual users to generate a report, cynicism about the futility of reporting (users may feel discouraged when no response to earlier reports has occurred; users may become fatigued by the automatic attribution of hazards to “user error”), and the sheer volume of reports that may be generated.

Consider a nurse who recognizes a near-miss in the course of clinical care. She may wonder if it was her fault (“Did I miss something in training?”). She may be reluctant to contact her local helpdesk to report the problem, having learned from previous experience that her concern will be dismissed as “user error” or that she may risk being labeled as technophobic or as unsupportive of her organization’s EHR implementation. Once received by the helpdesk or CMIO, the issue may be categorized as a training problem rather than the result of a product design flaw. That is, some medical and IT leaders have invested their reputations, and their organization’s time and money, in the software program; complaints that expose large problems may not be appreciated or carried forward.

If the CMIO passes the request for improvements or modifications to the vendors, the vendors will impose their own priorities, reflected in what they acknowledge, accept, and address. To start, the vendor may define the problem as due to local implementation, customizations, user error, insufficient training, or the many systems with which the vendor’s system must interact (e.g., pharmacy IT, outside laboratories, radiological recordkeeping systems) in each setting. In other words, the problem may be seen as not the vendor’s responsibility or as one that requires additional training (see Figure 1).

 

There are logistical barriers to reporting usability hazards as well. Existing reporting systems are slow and “clunky.” Reporting a usability problem may require clinicians to abandon both the task at hand and the software they are using in the practice of health care, obliging them to shift to a different platform and software application. This breaks concentration and demands time that few clinicians can afford. Existing reporting systems, moreover, focus on specific missing functions rather than on usability difficulties, and their question formats demand a style of explanation that is poorly suited to describing usability issues. Finally, there is an intimidation factor: these systems require the name and organization of the reporter.

To fully leverage the experiences of individual users, we propose that public reporting include direct paths between the user and the public sphere, without filtering through parties with a potential conflict of interest (see Figure 2).

 

Public Reporting of Comparative User Experiences

Just as multiple modalities to probe for comparative user experience are necessary, so are multiple modalities and venues for reporting user experiences (see Figure 3).

 

We recommend a single website, hosted by a trusted government entity such as the Agency for Healthcare Research and Quality (AHRQ) or a trusted private entity, to serve as a hub for comparative user experiences. This hub could link to the various sources of data regarding comparative user experience, including third-party evaluations from organizations such as the ECRI Institute, as well as other resources, such as the American College of Physicians’ AmericanEHR Partners (see Figure 4). The hub would also link to vendor demonstrations, social media posts, and the results of flight simulator lab testing, user surveys, and point-of-use reports.

 

We also recommend that users be given the ability to filter reports from laboratory simulations, in vivo point-of-use reporting, data mining, or direct posts according to their needs, allowing, for example, the CMIO of a small hospital to select user reports from others in situations similar to her own.
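
One hypothetical way such filtering might work is sketched below; the report attributes (organization type, bed count, product) and their values are invented for illustration.

```python
# Illustrative sketch: filtering a pool of user reports by organizational
# context. Attribute names and values are invented for illustration.
reports = [
    {"org_type": "hospital", "beds": 90,  "product": "ExampleEHR",
     "text": "Login handoff between nurses takes ~40 seconds."},
    {"org_type": "hospital", "beds": 900, "product": "ExampleEHR",
     "text": "Interface engine drops pharmacy messages on upgrade."},
    {"org_type": "clinic",   "beds": 0,   "product": "OtherEHR",
     "text": "Decision support fires on discontinued meds."},
]

def matching(reports, **criteria):
    """Return reports whose attributes satisfy every criterion.

    A criterion value may be a constant or a predicate function."""
    def ok(report):
        for key, want in criteria.items():
            have = report.get(key)
            if callable(want):
                if not want(have):
                    return False
            elif have != want:
                return False
        return True
    return [r for r in reports if ok(r)]

# A CMIO at a small hospital pulls reports from peers in similar settings.
for r in matching(reports, org_type="hospital", beds=lambda b: b < 200):
    print(r["text"])
```

Allowing a criterion to be a predicate, not just a constant, is one way to support range filters such as “hospitals under 200 beds” alongside exact matches.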

Because of the diversity of clinical situations and user needs, a single, meaningful summative rating (e.g., a 5-star global rating) across all measurement modalities is not the goal. Individual users will likely assess the significance, weighting, and trustworthiness of each modality and each measurement domain differently. Some will place more weight on security, others on efficiency of use. Some will trust user posts, others ECRI Institute–type establishments.

Instead of a single, overall summative rating, sub-summative ratings could be reported for each modality. That is, a summary score for clinician survey reports could be developed, similar to those used on websites that average the number of stars reviewers assign to products. Similarly, a summary score could be developed by each independent laboratory that tests health IT products. These summative scores within report modalities would provide users with relative rankings within the limitations of each modality and would subsequently allow them to weigh the value of each.
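
As a simple illustration of sub-summative scoring, the sketch below averages hypothetical star ratings within each modality rather than rolling them into one global score; the modality labels and ratings are invented.

```python
# Illustrative sketch: sub-summative star ratings computed within each
# reporting modality rather than combined into a single global score.
# Modality labels and ratings are hypothetical.
from collections import defaultdict
from statistics import mean

ratings = [
    ("clinician_survey", 4), ("clinician_survey", 3), ("clinician_survey", 5),
    ("lab_testing", 2), ("lab_testing", 3),
    ("point_of_use", 1), ("point_of_use", 2), ("point_of_use", 2),
]

by_modality = defaultdict(list)
for modality, stars in ratings:
    by_modality[modality].append(stars)

# Report one summary score per modality; users weigh modalities themselves.
for modality, stars in sorted(by_modality.items()):
    print(f"{modality}: {mean(stars):.1f} stars (n={len(stars)})")
```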

We envision that different users will report different kinds of comparative data. These data will range from quantitative to qualitative and will span the full continuum from subjective to objective.

 

Transparency

A health IT environment that fosters continuous learning and improvement is dependent on

  1. the ability of users to report usability issues;
  2. the ability for all to see those reports (transparency);
  3. the ability to report anonymously;
  4. the ability of vendors to freely comment on the user reports without intimidating the reporters; and
  5. the availability of side-by-side comparisons of comparable functions performed in controlled trials.

 

Fundamental to these accountability efforts is the need to abandon the nondisclosure clauses in many contracts between vendor and purchaser. If clinicians are unable to inform each other of problems with their health IT products, the entire framework of knowledge exchange collapses. The removal of these contractual threats to transparency would provide a floor on which improved accountability could be built.

 

Conclusion

The goal of collecting and publicly reporting user experiences is to improve products across the industry and promote safety. After a decade of development and experience, EHRs and other health IT products have not advanced sufficiently; nor have they been adopted widely and enthusiastically, in step with other consumer products such as smartphones and iPads. Some have referred to this as a market failure (Mandl and Kohane, 2012). With EHRs, unlike other consumer product areas, there has been little opportunity for cross-vendor comparison, which has stifled the evolution of this technology.

We believe that the development of meaningful metrics of comparative user experiences in domains such as cognitive workload, accuracy of decision making, time required to perform tasks, and implementation experience will support the purchaser in making wise decisions when choosing a health IT product, and will simultaneously provide the vendor community with incentives to improve products.

Finally, we believe that public reporting of user experiences, in a variety of forums, is essential to leveraging the power of the user and purchaser to effect change.

 


References

  1. Institute of Medicine (IOM). 2012. Health IT and patient safety: Building safer systems for better care. Washington, DC: The National Academies Press. https://doi.org/10.17226/13269.
  2. Institute of Medicine (IOM). 2011. Report brief: Health IT and patient safety: Building safer systems for better care. Available at: http://www.nationalacademies.org/hmd/~/media/Files/Report%20Files/2011/Health-IT/HealthITandPatientSafetyreportbrieffinal_new.pdf (accessed February 3, 2020).
  3. Jones, S. S., P. S. Heaton, R. S. Rudin, and E. C. Schneider. 2012. Unraveling the IT productivity paradox—lessons for health care. New England Journal of Medicine 366(24):2243-2245. https://doi.org/10.1056/NEJMp1204980.
  4. Mandl, K. D., and I. S. Kohane. 2012. Escaping the EHR trap—the future of health IT. New England Journal of Medicine 366(24):2240-2242. https://doi.org/10.1056/NEJMp1203102.
  5. Walker, J., and A. Hassol. 2011. Health IT hazard manager: Design and demo. Paper presented at the Agency for Healthcare Research and Quality 2011 Annual Conference, Rockville, MD.

 

DOI

https://doi.org/10.31478/201209e

Suggested Citation

Sinsky, C. A., J. Hess, B. T. Karsh, J. P. Keller, and R. Koppel. 2012. Comparative user experiences of health IT products: How user experiences would be reported and used. NAM Perspectives. Discussion Paper, National Academy of Medicine, Washington, DC. https://doi.org/10.31478/201209e

Disclaimer

The views expressed in this discussion paper are those of the authors and not necessarily of the authors’ organizations or of the Institute of Medicine. The paper is intended to help inform and stimulate discussion. It has not been subjected to the review procedures of the Institute of Medicine and is not a report of the Institute of Medicine or of the National Research Council.

