This was the dream: we would use technology to create a seamless healthcare system, one where people, computers and machines would work together to improve patient care in many different ways. Health care would be more efficient, it would be safer, it would be less expensive, we would be able to transfer health-related information quickly and accurately.
After spending three days at a meeting this past week with some of the top experts in the field, I am not so certain that the dream is going to come true anytime soon. Perhaps more concerning, the problems–including patient safety issues–that are cropping up in so many areas are very troubling.
The meeting was organized by three federal agencies involved in the oversight of medical devices, applications and health information technology: the Food and Drug Administration, the Office of the National Coordinator for Health Information Technology (housed in the Executive Branch) and the Federal Communications Commission. Those three agencies recently released a report describing their vision for regulation of health information technology. The purpose of the meeting was to extend the discussion. (I have listed links to several relevant documents at the end of this blog.)
I came away with a sense that this socio-technical ecosystem (their term, which I thought was actually very interesting) that we call health information technology is far more complex and the problems more pervasive than a lot of us understand–even those who work in health care and use these systems every day. The inevitable question is what would people think if they really knew how serious the issues are? More importantly, who is going to fix them?
As a consumer of health care, you probably assume that when you go to your doctor’s office or receive care in a hospital there is a reasonable certainty that the computer systems your hospital and doctor rely on are up to date and work as intended. Well, hopefully, most of the time they do. But how do we even know when they don’t? How do we know they are tested to be safe? How do we know if the latest upgrades have been installed? The answer is apparently we don’t because they aren’t.
For example, the computer programs that these systems rely on are built layer upon layer over years. They are customized in many instances. But the people who provided the original architecture for a particular system–sometimes a long time ago–aren't around, and there is no simple record of what they did. And then we put all sorts of new programs in place on top of the old programs–but they can use different computer languages and don't always play nice with each other. One hospital computer specialist told us their hospital installed over 600 new applications into their system in just one year. Others told of the complexities of making sure the systems work well together–and that they often don't. One unsettling theme mentioned frequently was that some of these situations can become dangerous, especially if appropriate compatibility and error testing is not done.
Then we get to the machines that are computer driven and are supposed to integrate within these systems. Most of the time they work; sometimes they don't. One physician made the point that even setting the time correctly across all the systems and machines is problematic (and that was while we were sitting in an auditorium where two "atomic clocks" blinked at us with the absolute, undeniable, officially government-sanctioned correct time; by the way, we all checked and set our watches accordingly).
Interoperability is becoming a key operational word in this morass of systems. Being able to exchange information accurately from one computer to another or from a machine to a computer can actually be a huge challenge. Since the computer and the machine are likely made by different companies using different standards, there is at times a “no man’s land” in between when it comes to figuring out who is responsible to make this work. We are finding that this is not an easy problem to solve given the vagaries of the multitude of systems and processes.
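To make the "no man's land" concrete, here is a deliberately simplified sketch of what exchanging a single lab result between a device and a hospital computer can involve. This is illustrative only: the segment format loosely resembles an HL7 v2-style observation message, but real interfaces must cope with vendor-specific separators, encodings, units, and segment ordering, which is exactly where responsibility gets murky.

```python
# Illustrative toy parser for a simplified HL7 v2-style observation
# (OBX) segment. Real-world interfaces are far messier: different
# vendors vary the separators, field usage, code systems, and units.

def parse_obx(segment: str, field_sep: str = "|") -> dict:
    """Split one simplified OBX segment into named fields."""
    fields = segment.split(field_sep)
    return {
        "set_id": fields[1],       # sequence number of the observation
        "value_type": fields[2],   # e.g. NM = numeric
        "test_code": fields[3],    # local or coded identifier for the test
        "value": fields[5],        # the reported result
        "units": fields[6],        # units as the *sender* labels them
    }

# A device and an EHR may label the very same glucose result differently;
# if the receiver expects mmol/L and the sender transmits mg/dL, a
# "successful" exchange can still produce a clinically wrong number.
device_msg = "OBX|1|NM|GLU^Glucose||105|mg/dL"
print(parse_obx(device_msg))
```

The point of the sketch is that parsing the bytes is the easy part; agreeing on what the fields mean, and who fixes it when two systems disagree, is the hard part.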
Make no mistake: the most disturbing moments of the meeting were public comments from physicians who talked about patients harmed and lives lost because of computer malfunctions. Wrong doses of medication, critical problems missed that the computers were supposed to catch, alarm systems that were turned off–the list goes on and on.
And it’s not all about the computer systems not functioning correctly. There are human issues as well, such as who enters the information into the computer, how simple it is to enter information, whether the information is accurate and how the information is displayed to those using the computers–all of these contribute to the problems.
I certainly don't have an answer as to how we are going to make all of this work. Suffice to say, it is going to be much more difficult than many of us had thought. It has been hard to demonstrate that using all of this technology has resulted in the anticipated cost savings, improvements in patient safety, or more effective care. Oversight of these systems is going to take a lot of resources, including time and money–and there isn't a lot of either to go around these days.
Even as I wrote this blog, someone coincidentally shared with me an incident that happened that very day, where a system failed to flag an incorrect order that would have resulted in a massive drug overdose to a patient. Whether the issue was faulty data entry, the computer not transmitting the information correctly, a failure to flag the order, or some other problem, the event was real and could have had tragic consequences had a diligent pharmacist not intervened. The patient never knew what happened, but the nurse and the physician involved were shaken and disturbed by the experience. The bottom line: incidents like these may not be as uncommon as we think, and they lower confidence that these systems will do what those who use them expect–and in fact rely on–them to do.
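The flagging step that failed in that incident is, at its core, a simple idea. Here is a minimal sketch of the kind of dose-range check a pharmacy system is expected to perform; the drug name and limit are hypothetical illustrations I made up for the example, not clinical values, and real systems layer on patient weight, renal function, route, and interaction checks.

```python
# A minimal sketch of a dose-range safety check. The drug and its
# maximum dose below are hypothetical, chosen only for illustration.

MAX_SINGLE_DOSE_MG = {
    "examplamycin": 500,  # hypothetical drug and hypothetical limit
}

def check_order(drug: str, dose_mg: float) -> str:
    """Return 'OK', an ALERT, or a request for manual review."""
    limit = MAX_SINGLE_DOSE_MG.get(drug)
    if limit is None:
        # An unrecognized drug should never pass silently.
        return "UNKNOWN DRUG: route to pharmacist for manual review"
    if dose_mg > limit:
        return f"ALERT: {dose_mg} mg exceeds maximum of {limit} mg"
    return "OK"

# A tenfold data-entry error should trigger the alert, not sail through.
print(check_order("examplamycin", 5000))
```

What the incident shows is that the failure mode is rarely the arithmetic; it is the plumbing around it–data entered wrong, information not transmitted, or the alert never reaching a human.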
The good news is that at least these questions are being raised. There is a nascent dialogue beginning that must continue. We must have a sense of optimism and commitment that we can find the right blend of oversight to assure all of us that we will not be harmed by that which was intended to help. But we also must have a sense of urgency that these problems cannot be ignored any longer. We will need the right combination of private initiative, government regulation, and public-private partnerships: one that encourages the innovation we need without choking the process with excessively burdensome regulation, while providing reasonable assurance that the systems are safe and will perform in the real world as anticipated when they were designed.
As we all know, achieving that goal will be difficult. Notwithstanding some of the pessimism that surrounded me this week, I remain an optimist that at the least we have a solid beginning to make effective medical care and patient safety "job one" when it comes to improving health through computer-based technologies.
Here are some reports that you may find useful in learning more about this topic:
1) The FDASIA report referenced above, which describes the federal agencies' comments on oversight of health information technology, including proposed structure and future plans:
2) A 2011 report from the Institute of Medicine that highlights patient safety issues:
3) A September 2013 guidance from the FDA on the regulation of medical applications:
4) A February 2013 report from the Bipartisan Policy Center:
5) A July 2013 report from the Office of the National Coordinator on HIT safety:
6) Finally, a "real world" consulting report from the National Colorectal Cancer Roundtable that shows the problems clinicians face in using health information technology to try to improve one aspect of clinical care, namely colorectal cancer screening: