Evidence-based health informatics

February 11, 2016 § 6 Comments

Have we reached peak e-health yet?

Anyone who works in the e-health space lives in two contradictory universes.

The first universe is that of our exciting digital health future. This shiny gadget-laden paradise sees technology in harmony with the health system, which has become adaptive, personal, and effective. Diseases tumble under the onslaught of big data and miracle smart watches. Government, industry, clinicians and people off the street hold hands around the bonfire of innovation. Teeth are unfeasibly white wherever you look.

The second universe is Dickensian. It is the doomy world in which clinicians hide in shadows, forced to use clearly dysfunctional IT systems. Electronic health records take forever to use, and don’t fit clinical work practice. Health providers hide behind burning barricades when the clinicians revolt. Government bureaucrats in crisp suits dissemble in velvet-lined rooms, softly explaining the latest cost overrun, delay, or security breach. Our personal health files get passed by street urchins hand-to-hand on dirty thumb drives, until they end up in the clutches of Fagin-like characters.

Both of these universes are real. We live in them every day. One is all upside, the other mostly down. We will have reached peak e-health the day that the downside exceeds the upside and stays there. Depending on who you are and what you read, for many clinicians, we have arrived at that point.

The laws of informatics

To understand why e-health often disappoints requires some perspective and distance. Informed observers see the same pattern again and again: large technology-driven projects suck up all the e-health oxygen and resources, and then fail to deliver. Clinicians see that the technology they can buy as consumers is more beautiful and more useful than anything they encounter at work.

I remember a meeting I attended with Branko Cesnik. After a long presentation about a proposed new national e-health system, focusing entirely on technical standards and information architectures, Branko piped up: “Excuse me, but you’ve broken the first law of informatics”. What he meant was that the most basic premise for any clinical information system is that it exists to solve a clinical problem. If you start with the technology, and ignore the problem, you will fail.

There are many corollary informatics laws and principles. Never build a clinical system to solve a policy or administrative problem unless it is also solving a clinical problem. Technology is just one component of the socio-technical system, and building technology in isolation from that system just builds an isolated technology [3].

Breaking the laws of informatics

So, no e-health project starts in a vacuum of memory. Rarely do we need to design a system from first principles. We have many decades of experience to tell us what the right thing to do is. Many decades of what not to do sits on the shelf next to it. Next to these sits the discipline of health informatics itself. Whilst it borrows heavily from other disciplines, it has its own central reason to exist – the study of the health system, and of how to design ways of changing it for the better, supported by technology. Informatics has produced research in volume.

Yet today it would be fair to say that most people who work in the e-health space don’t know that this evidence exists, and if they know it does exist, they probably discount it. You might hear “N of 1” excuse making, which is the argument that the evidence “does not apply here because we are different” or “we will get it right where others have failed because we are smarter”. Sometimes system builders say that the only evidence that matters is their personal experience. We are engineers after all, and not scientists. What we need are tools, resources, a target and a deadline, not research.

Well, you are not different. You are building a complex intervention in a complex system, where causality is hard to understand, let alone control. While the details of your system might differ, from a complexity science perspective, each large e-health project ends up confronting the same class of nasty problem.

The results of ignoring evidence from the past are clear to see. If many of the clinical information systems I have seen were designed according to basic principles of human factors engineering, I would like to know what those principles are. If most of today’s clinical information systems are designed to minimize technology-induced harm and error, I will hold a party and retire, my life’s work done.

The basic laws of informatics exist, but they are rarely applied. Case histories are left in boxes under desks, rather than taught to practitioners. The great work of the informatics research community sits gathering digital dust in journals and conference proceedings, and does not inform much of what is built and used daily.

None of this story is new. Many other disciplines have faced identical challenges. The very name Evidence-based Medicine (EBM), for example, is a call to arms to move from anecdote and personal experience, towards research and data driven decision-making. I remember in the late ‘90s, as the EBM movement started (and it was as much a social movement as anything else), just how hard the push back was from the medical profession. The very name was an insult! EBM was devaluing the practical, rich daily experience of every doctor, who knew their patients ‘best’, and every patient was ‘different’ to those in the research trials. So, the evidence did not apply.

EBM remains a work in progress. All you need to do today is look at a map of clinical variation to understand that much of what is done remains without an evidence base to support it. Why is one kind of prosthetic hip joint used in one hospital, but a different one in another, especially given the differences in cost, failure and infection rates? Why does one developed country have high caesarean section rates when a comparable one does not? These are the result of pragmatic ‘engineering’ decisions by clinicians – to attack the solution to a clinical problem one way, and not another. I don’t think healthcare delivery is so different to informatics in that respect.

Is it time for evidence-based health informatics?

It is time we made the praxis of informatics evidence-based.

That means we should strive to see that every decision that is made about the selection, design, implementation and use of an informatics intervention is based on rigorously collected and analyzed data. We should choose the option that is most likely to succeed based on the very best evidence we have.

For this to happen, much needs to change in the way that research is conducted and communicated, and much needs to happen in the way that informatics is practiced as well:

  • We will need to develop a rich understanding of the kinds of questions that informatics professionals ask every day;
  • Where the evidence to answer a question exists, we need robust processes to synthesize and summarize that evidence into practitioner actionable form;
  • Where the evidence does not exist and the question is important, then it is up to researchers to conduct the research that can provide the answer.

In EBM, there is a lovely notion that we need problem oriented evidence that matters (POEM) [1] (covered in some detail in Chapter 6 of The Guide to Health Informatics). It is easy enough to imagine the questions that can be answered with informatics POEMs:

  • What is the safe limit to the number of medications I can show a clinician in a drop-down menu?
  • I want to improve medication adherence in my Type 2 Diabetic patients. Is a text message reminder the most cost-effective solution?
  • I want to reduce the time my docs spend documenting in clinic. What is the evidence that an EHR can reduce clinician documentation time?
  • How gradually should I roll out the implementation of the new EHR in my hospital?
  • What changes will I need to make to the workflow of my nursing staff if I implement this new medication management system?

EBM also emphasises that the answer to any question is never an absolute one based on the science, because the final decision is also shaped by patient preferences. A patient with cancer may choose a treatment that is less likely to cure them, because it is also less likely to have major side-effects, which is important given their other goals. The same obviously holds in evidence-based health informatics (EBHI).

The Challenges of EBHI

Making this vision come true would see some significant long term changes to the business of health informatics research and praxis:

  • Questions: Practitioners will need to develop a culture of seeking evidence to answer questions, and not simply do what they have always done, or what their colleagues do. They will need to be clear about their own information needs, and to be trained to ask clear and answerable questions. There will need to be a concerted partnership between practitioners and researchers to understand what an answerable question looks like. EBM has a rich taxonomy of question types, and the questions in informatics will be different, emphasizing engineering, organizational, and human factors issues amongst others. There will always be questions with no answer, and that is when experience and judgment come to the fore. Even here, though, analytic tools can help informaticians explore historical data to find the best historical evidence to support choices.
  • Answers: The Cochrane Collaboration helped pioneer the development of robust processes of meta-analysis and systematic review, and the translation of these into knowledge products for clinicians. We will need to develop a new informatics knowledge translation profession that is responsible for understanding informatics questions, and for finding methods to extract the most robust answers to them from the research literature and historical data. As much of this evidence does not typically come from randomised controlled trials, methods other than meta-analysis will be needed. Case libraries, which no doubt exist today, will be enhanced and shaped to support the EBHI enterprise. Because we are informaticians, we will clearly favor automated over manual ways of searching for, and summarizing, the research evidence [2]. We will also hopefully excel at developing the tools that practitioners use to frame their questions and get the answers they need. There are surely both public good and commercial drivers to support the creation of the knowledge products we need.
  • Bringing implementation science to informatics: We know that informatics interventions are complex interventions in complex systems, and that the effect of these interventions vary depending on the organisational context. So, the practice of EBHI will of necessity see answers to questions being modified because of local context. I suspect that this will mean that one of the major research challenges to emerge from embracing EBHI is to develop robust and evidence-based methods to support localization or contextualisation of knowledge. While every context is no doubt unique, we should be able to draw upon the emerging lessons of implementation science to understand how to support local variation in a way that is most likely to see successful outcomes.
  • Professionalization: Along with culture change would come changes to the way informatics professionals are accredited, and reaccredited. Continuing professional education is a foundation of the reaccreditation process, and provides a powerful opportunity for professionals to catch up with the major changes in science, and how those changes impact the way they should approach their work.
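To make the ‘Answers’ bullet above a little more concrete: the quantitative core of a Cochrane-style fixed-effect meta-analysis is simply inverse-variance weighting of study results, and it can be sketched in a few lines. This is only an illustration of the weighting idea; the study effect sizes and variances below are invented.

```python
import math

def fixed_effect_pool(effects, variances):
    """Fixed-effect (inverse-variance) meta-analysis sketch.

    Each study's effect estimate is weighted by the inverse of its
    variance, so precise studies count for more in the pooled estimate.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled effect
    return pooled, se

# Two hypothetical studies: a small imprecise one and a larger precise one.
pooled, se = fixed_effect_pool(effects=[0.2, 0.5], variances=[0.04, 0.01])
print(round(pooled, 3), round(se, 3))  # prints: 0.44 0.089
```

Real evidence synthesis adds heterogeneity assessment and random-effects models on top of this; the point here is only that the precise study dominates the pooled answer, which is why, as noted above, evidence that does not come from controlled trials will need different pooling methods.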


There comes a moment when surely it is time to declare that enough is enough. There is an unspoken crisis in e-health right now. The rhetoric of innovation, renewal, modernization and digitization makes us all want to be believers. Yet the long and growing list of failed large-scale e-health projects, and the uncomfortable silence that hangs when good people talk about the safety risks of technology, make some think that e-health is an ill-conceived if well-intentioned moment in the evolution of modern health care. It does not have to be.

To avoid peak e-health we need to not just minimize the downside of what we do by avoiding mistakes. We also have to maximize the upside, and seize the transformative opportunities technology brings.

Everything I have seen in medicine’s journey to become evidence-based tells me that this will not be at all easy to accomplish, and that it will take decades. But until we do, the same mistakes will likely be rediscovered and remade.

We have the tools to create a different universe. What is needed is evidence, will, a culture of learning, and hard work. Less Dickens and dystopia. More Star Trek and utopia.

Further reading:

Since I wrote this post, a collection of important papers on how we evaluate health informatics and choose which technologies are fit for purpose has been published in the book Evidence-based Health Informatics.


  1. Slawson DC, Shaughnessy AF, Bennett JH. Becoming a medical information master: feeling good about not knowing everything. The Journal of Family Practice 1994;38(5):505-13
  2. Tsafnat G, Glasziou PP, Choong MK, et al. Systematic Review Automation Technologies. Systematic Reviews 2014;3(1):74
  3. Coiera E. Four rules for the reinvention of healthcare. BMJ 2004;328(7449):1197-99


 An Italian translation of this article is available


A brief guide to the health informatics research literature

February 8, 2016 § Leave a comment

Every year the body of research evidence in health informatics grows. To stay on top of that research, you need to know where to look for research findings, and what the best quality sources of it are. If you are new to informatics, or don’t have research training, then you may not know where or how to look. This page is for you.

There are a large number of journals that publish only informatics research. Many mainstream health journals will also have an occasional (and important) informatics paper in them. Rather than collecting a long list of all of these possible sources, I’d like to offer the following set of resources as a ‘core’ to start with.

(There are many other very good health informatics journals, and their omission here is not meant to imply they are not also worthwhile. We just have to start somewhere. If you have suggestions for this page I really would welcome them, and I will do my best to update the list).


If you require an overview of the recent health informatics literature, especially if you are new to the area, then you really do need to sit down and read through one of the major textbooks in the area. These will outline the different areas of research, and summarise the recent state of the art.

I am of course biased and want you to read the Guide to Health Informatics.

A collection of important papers on the topic of how we evaluate health informatics and choose which technologies are fit for purpose can be found in the book Evidence-based Health Informatics.

Another text that has a well-earned reputation is Ted Shortliffe’s Biomedical Informatics.

Health Informatics sits on the shoulders of the information and computer sciences, psychology, sociology, management science and more. A mistake many make is to think that you can get a handle on these topics just from a health informatics text. You won’t. Here are a few classic texts from these ‘mother’ disciplines:

Computer Networks (5th ed). Tanenbaum and Wetherall. Pearson. 2010.

Engineering Psychology & Human Performance (4th ed.). Wickens et al. Psychology Press. 2012.

Artificial Intelligence: A Modern Approach (3rd ed). Russell and Norvig. Pearson. 2013.


Google Scholar: A major barrier to accessing the research literature is that much of it is trapped behind paywalls. Unless you work at a university and can access journals via the library, some publishers will ask you to pay an exorbitant fee to read even individual papers. However, many journals are now open access, or make some of their papers available free on publication. Most journals also allow authors to freely place an early copy of a paper onto a university or other repository.

The most powerful way to find these research articles is Google Scholar. Scholar does a great job of finding all the publicly available copies of a paper, even if the journal’s version is still behind a paywall. Getting comfortable with using Scholar, and exploring what it does, gives you a major tool for accessing the research literature.

Yearbook of Medical Informatics. The International Medical Informatics Association (IMIA) is the peak global academic body for health informatics and each year produces a summary of the ‘best’ of the last year’s research from the journals in the form of the Yearbook of Medical Informatics. The recent editions of the yearbook are all freely available online.

Next, I’d suggest the following ‘core’ journals for you to skim on a regular basis. Once you are familiar with these you will no doubt move on to the many others that publish important informatics research.

JAMIA. The Journal of the American Medical Informatics Association (AMIA) is the peak general informatics journal, and a great place to keep tabs on recent trends. While it requires a subscription, all articles are placed into open access 12 months after publication (so you can find them using Scholar) and several articles every month are free. You can keep abreast of papers as they are published through the advanced access page.

JMIR. The Journal of Medical Internet Research is a high impact specialist journal focusing on Web-based informatics interventions. It is open access which means that all articles are free.

To round out your regular research scan, you might want to add the following journals, which are all very well regarded.


Whilst journals typically publish well-polished work, there is often a lag of a year or more before submitted papers appear. The advantage of research conferences is that you get more recent work, sometimes at an earlier stage of development, but also closer to the cutting edge.

There is a plethora of informatics conferences internationally, but the following publish their papers freely online, and are typically of high quality.

AMIA Annual Symposium. AMIA holds what is probably the most prestigious annual health informatics conference, and releases all papers via NLM. An associated AMIA Summit on Translational Sciences/Bioinformatics is also freely available.

Medinfo. IMIA holds a biennial international conference, and given its status as the peak global academic society, Medinfo papers have a truly international flavour. Papers are open access and made available by IOS Press through its Studies in Health Technology and Informatics series (where many other free proceedings can be found). Recent Medinfo proceedings include 2015 and 2013.

As with textbooks and journals, it is worth remembering that much of importance to health informatics is published in other ‘mother’ disciplines. For example it is well worth keeping abreast of the following conferences for recent progress:

WWW conference. The World Wide Web Conference is organised by the ACM and is an annual conference looking at innovations in the Internet space. Recent proceedings include 2015 and 2014.

The ACM Digital Library, which contains WWW, is a cornucopia of information and computer science conference proceedings. Many a rainy weekend can be wasted browsing here. You may, however, need to hunt down the website of the actual conference to get free access, as the ACM will often try to charge for papers that are freely available on the conference’s own home page.

Other strategies

Browsing journals is one way to keep up to date. The other is to follow the work of individual researchers whose interests mirror your own. The easiest way to do this is to find their personal page on Google Scholar (and if they don’t have one tell them to make one!). Here is mine, as an example. There are two basic ways to attack a scholar page. When you first see a Scholar page, the papers are ranked by their impact (as measured by other people citing the papers). This will give you a feeling about the work the researcher is most noted for. The second way is to click the year button. You will then see papers in date order, starting with the most recent. This is a terrific way of seeing what your pet researcher has been up to lately.

There is a regularly updated list of biomedical informatics researchers, ranked by citation impact, and this is a great way to discover health informatics scientists. Remember that when researchers work in more specialised fields, they may not have as many citations and so be lower down the list.

Once you find a few favourite researchers, try to see what they have done recently, follow them on Twitter, and if they have a blog, try to read it.

The Forgetting Health System

October 7, 2015 § 2 Comments

Learning health systems are the next big thing. Through the use of information technology, the hope is that we can analyse all the data captured in electronic health records to speed both the process of scientific discovery and the translation of these discoveries into routine practice [1,2]. Every patient’s data, their response to treatment, and final outcome, will no longer be filed away, but will feed the care of future patients [3]. It’s an exciting vision, and if we can achieve it, there is no doubt healthcare delivery would be transformed.

If we were to step back, we might conclude that, although this is an admirable vision, the machinery of science, for all its failings, is already working faster than we can handle. The arena where organizational learning most needs to take hold is the way we deliver health services. It is clear that we could do so much better here. There is too much variation in patient care, too much waste and harm in the system.

So, if what we have today is not yet a learning health system, then by definition we must have the opposite – a forgetting health system. If that is the case, then here are two working definitions to contrast the two ideas:

  1. Learning system: The past shapes the future. Today’s mistake is tomorrow’s wisdom.
  2. Forgetting system: The past is the future. Today’s mistakes are forgotten quickly and are repeated tomorrow.

With this perspective it is easy to see examples everywhere of such ‘forgetting’. The history of large-scale e-health is a litany of the same case study being repeated. A large health IT project is started (usually by a government and usually as a technology innovation project rather than to fix a defined health problem). It quickly runs over time, over budget, and is treated with dismay as its users find it doesn’t do what they were promised. The end result is new problems, workarounds to circumvent the intruder technology, and in some cases, the eventual removal of the system.

The solution to this mess, of course, seems to be to start a new large-scale e-health system, run by different people. Yet, like moths to a flame, these new protagonists seem to make the same set of strategic mistakes, only in new ways. Today’s large-scale health IT projects seem to be in a perpetual state of Groundhog Day, and must be a classic example of a forgetting system. This may be because there is no learning, because we convince ourselves that this time is ‘different’ and the past has nothing to teach us, or because when we look at past failures, we are unable to draw any conclusions for lack of ability or imagination.

Information is lost

If a system ‘forgets’ then by implication that means that information that existed in the system has been lost, preventing its reuse to guide future events. If you think about it for one moment, the health system loses information every moment of every day, in unimaginable amounts. We only record a fraction of the events that occur. Most of these are not directly captured but rather are reconstructed from the memory of the individual making the record.

If we were to transform our forgetting system into a learning system, so that health system improvements are driven by experience, what do we need to do? For a system to ‘remember’ an event, a teachable moment, it firstly needs to be detected. You can’t remember what you never saw. Next, after detection, it needs to be recorded. Finally, this recording needs to be somehow aggregated with other events, for discovery of new knowledge to occur.
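The detect, record, aggregate loop just described can be sketched as a toy pipeline. Everything here is illustrative (the event types, field names and class are invented, not drawn from any real clinical system); the point is only that events outside the detection net are lost forever, which is exactly how a system ‘forgets’.

```python
from collections import Counter

class LearningLoop:
    """Toy sketch of a learning system: detect -> record -> aggregate."""

    def __init__(self, watched_types):
        self.watched = set(watched_types)  # the only events we can detect
        self.log = []                      # recorded events: the 'memory'

    def detect(self, event):
        # You can't remember what you never saw: unwatched events vanish.
        return event["type"] in self.watched

    def record(self, event):
        if self.detect(event):
            self.log.append(event)

    def aggregate(self):
        # Pool recorded events so patterns (new knowledge) can emerge.
        return Counter(e["type"] for e in self.log)

loop = LearningLoop({"medication_error", "readmission"})
for e in [{"type": "medication_error"}, {"type": "readmission"},
          {"type": "hallway_conversation"}, {"type": "medication_error"}]:
    loop.record(e)
print(loop.aggregate())  # prints: Counter({'medication_error': 2, 'readmission': 1})
```

Note that the hallway conversation never reaches the aggregate step, no matter how clinically important it was: in this sketch, as in the health system, what is never detected can never be learned from.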

When it comes to treatment and diagnosis, we have built very precise mechanisms to detect important events or processes through a variety of diagnostic methods. The data captured are increasingly being recorded in electronic systems, and analysed. That’s the ‘big data’ movement in healthcare in a nutshell.

When it comes to health services, we still don’t know what events we should detect, or what processes of service delivery should be instrumented, and there is still little thought given to pooling system measurements in a way that allows cross-organizational analysis and learning. So, if we are to make our health services learning ones, the task is much bigger than installing patient records.

One of the challenges to learning in health services is that, whilst it is easy to imagine that organizational memories can be stored in bits on a database, more often they will sit in the heads of those who work within them [4]. So, a good indicator that you are working in a forgetting system is that your organization has a high staff turnover – because it is very likely that when staff walk out the door they also walk away with everything they know about the system they are leaving, and no one has tried to capture that rich web of knowledge (if it could so easily be captured).

When learning also requires forgetting

Memories, however, also exist in other ways in an organisation. Crucially, they persist in the processes, protocols and built structures of the organisation, and in the workarounds and annotations that happen to physical spaces [5]. These structural memories are not inert or idly awaiting someone’s analysis. They sit there every moment shaping work, and altering human perceptions, actions and intent.

With time, accreted structural memories can lead to inertia, to an immovable status quo [6]. These kinds of memories therefore need to be managed. Some need to be revisited and revised, others are the jewels of the past that we should never forget, and some are impeding the emerging purpose of an organization, and need to be actively forgotten. The need for organizational apoptosis, or active forgetting, is something I’ve written about before [6].

From learning to adaptive systems

So, learning clearly is not nearly enough. Recording and analysing, at least as far as organisations are concerned, can only take you so far. The living walls and praxis of an organisation are already busy recording, even learning, but may not make it possible to do anything about what has been learned. For organisational inertia to be broken, and for repeated failures to be avoided, we not only need to learn, we need to actively, wisely, carefully, forget.

That will be a true learning system, because learning systems also need the capacity to change.


  1. Etheredge LM. A rapid-learning health system. Health affairs. 2007;26(2):w107-w118.
  2. Friedman CP, Wong AK, Blumenthal D. Achieving a nationwide learning health system. Science translational medicine. 2010;2(57):57cm29-57cm29.
  3. Gallego B, Walter SR, Day RO, et al. Bringing cohort studies to the bedside: framework for a ‘green button’ to support clinical decision-making. Journal of Comparative Effectiveness Research. 2015(0):1-7.
  4. Coiera E. When conversation is better than computation. Journal of the American Medical Informatics Association. 2000;7(3):277-286.
  5. Coiera E. Communication spaces. Journal of the American Medical Informatics Association. May 1, 2014 2014;21(3):414-422.
  6. Coiera E. Why system inertia makes health reform so hard. British Medical Journal. 2011;343:27-29.


Clinical Safety of the Australian Personally Controlled Electronic Health Record (PCEHR)

November 29, 2013 § Leave a comment

Like many nations, Australia has begun to develop nation-scale e-health infrastructure. In our case it currently takes the form of a Personally Controlled Electronic Health Record (PCEHR). It is a Federal government-created and operated system that caches clinical documents, including discharge summaries; summary care records that are electively created, uploaded and maintained by a designated general practitioner; and some administrative data, including information on medications prescribed and dispensed, as well as claims data from Medicare, our universal public health insurer. It is personally controlled in that consumers must elect to opt in to the system, and may elect to hide documents or data should they not wish them to be seen – a point that has many clinicians concerned but is equally celebrated by consumer groups.

With ongoing concerns in some sectors about system cost, apparent low levels of adoption and clinical use, as well as a change in government, the PCEHR is now being reviewed to determine its fate. International observers might detect echoes of recent English and Canadian experiences here, and we will all watch with interest to see what unfolds after the Committee reports.

My submission to the PCEHR Review Committee focuses only on the clinical safety of the system, and the governance processes designed to ensure clinical safety. You can read my submission here.

As background to the submission, I have written about clinical safety of IT in health care for several years now, and some of the more relevant papers are collected here:

  1. J. S. Ash, M. Berg, E. Coiera, Some Unintended Consequences of Information Technology in Health Care: The Nature of Patient Care Information System Related Errors, Journal of the American Medical Informatics Association, 11(2), 104-112, 2004.
  2. Coiera E, Westbrook J, Should clinical software be regulated? Medical Journal of Australia. 2006:184(12);600-1.
  3. Coiera E, Westbrook JI, Wyatt J (2006) The safety and quality of decision support systems, Methods Of Information In Medicine 45: 20-25 Suppl. 1, 2006.
  4. Magrabi F, Coiera E. Quality of prescribing decision support in primary care: still a work in progress. Medical Journal of Australia 2009; 190 (5): 227-228.
  5. Coiera E, Do we need a national electronic summary care record? Medical Journal of Australia 2011; 194(2), 90-2.
  6. [Paywall] Coiera E, Kidd M, Haikerwal M, A call to national e-health clinical safety governance, Med J Aust 2012; 196 (7): 430-431.
  7. Coiera E, Aarts J, Kulikowski C. The Dangerous Decade. Journal of the American Medical Informatics Association, 2012;19(1):2-5.
  8. [Paywall] Magrabi F, Aarts J, Nohr C, et al. A comparative review of patient safety initiatives for national health information technology. International journal of medical informatics 2012;82(5):e139-48.
  9. [Paywall] Coiera E, Why E-health is so hard, Medical Journal of Australia, 2013; 198(4),178-9.

Along with research colleagues I have been working on understanding the nature and extent of likely harms, largely through reviews of reported incidents in Australia, the US and England. A selection of our papers can be found here:

  1. Magrabi F, Ong M, Runciman W, Coiera E. An analysis of computer-related patient safety incidents to inform the development of a classification. Journal of the American Medical Informatics Association 2010;17:663-670.
  2. Magrabi F, Li SYW, Day RO, Coiera E, Errors and electronic prescribing: a controlled laboratory study to examine task complexity and interruption effects. Journal of the American Medical Informatics Association 2010 17: 575-583.
  3. Magrabi F, Ong M, Runciman W, Coiera E. Patient safety problems associated with healthcare information technology: an analysis of adverse events reported to the US Food and Drug Administration. AMIA 2011 Annual Symposium, Washington DC, October 2011:853-858.
  4. Magrabi F, Ong MS, Runciman W, Coiera E. Using FDA reports to inform a classification for health information technology safety problems. Journal of the American Medical Informatics Association 2012;19:45-53.
  5. Magrabi, F., M. Baker, I. Sinha, M.S. Ong, S. Harrison, M. R. Kidd, W. B. Runciman and E. Coiera. Clinical safety of England’s national programme for IT: A retrospective analysis of all reported safety events 2005 to 2011. International Journal of Medical Informatics 2015;84(3): 198-206.

Submission to the PCEHR Review Committee 2013

November 29, 2013 § 4 Comments

Professor Enrico Coiera, Director Centre for Health Informatics, Australian Institute of Health Innovation, UNSW

Date: 21 November 2013

The Clinical Safety of the Personally Controlled Electronic Health Record (PCEHR)

This submission comments on the consultations undertaken during PCEHR development and on barriers to clinician and patient uptake and utility, and makes suggestions to accelerate adoption. The lens for these comments is patient safety.

The PCEHR, like any healthcare technology, may do good or harm. Correct information provided at a crucial moment may improve care; misleading, missing or incorrect information may lead to mistakes and harm. There is clear evidence, nationally and internationally, that health IT can cause such harm [1-5].

To mitigate such risks, most industries adopt safety systems and processes at software design, build, implementation and operation. User trust that a system is safe enhances its adoption, and forces system design to be simple, user-focused, and well tested.

The current PCEHR has multiple safety risks including:

  1. Using administrative data (e.g. PBS data and Prescribe/Dispense information) for clinical purposes (ascertaining current medications) – a use for which the data were never intended;
  2. Using clinical documents (e.g. discharge summaries) instead of fine-grained patient data such as allergies. Ensuring data integrity is often not possible within documents (e.g. identifying contradictory, missing or out-of-date data);
  3. Together, these create an electronic form of a hybrid record with no unitary view of the clinical ‘truth’. Hybrid records can lead to clinical error by impeding data search or by triggering incorrect decisions based on a partial view of the record [6];
  4. Shifting the onus for data integrity to a custodian GP, which avoids the PCEHR operator taking responsibility for data quality (a barrier to GP engagement, and a risk because integrity checking requires sophisticated, often automated, methods);
  5. The absence of any national process or standards to ensure that clinical software and updates (and indeed the PCEHR itself) are clinically safe.

The need for clinical safety to be managed within the PCEHR was raised with the PCEHR program formally [7], via internal NEHTA briefings, and at public presentations at which PCEHR leadership were present; it was also clear from the academic literature. Indeed, a 2010 MJA editorial on the risks and benefits of likely PCEHR architectures highlighted recent evidence suggesting many approaches were problematic. It suggested, tongue in cheek, that perhaps GPs should ‘curate’ the record, only to then point out the risks of that approach [8].

Yet, at the beginning of 2012, no formal clinical safety governance arrangements existed for the PCEHR program. The notable exception was the Clinical Safety Unit within NEHTA, whose limited role was to examine the safety of standards as designed, but not as implemented. There was no process to ensure software connecting to the PCEHR was safe (in the sense that patients would not be harmed from the way information was entered, stored, retrieved or used), only that it interoperated technically. No ongoing safety incident monitoring or response function existed, beyond any internal processes the system operator might have had.

Concerns that insufficient attention was being paid to clinical safety prompted a 2012 MJA editorial on the need for national clinical safety governance, both for the PCEHR and for e-health more broadly [9]. In response, a clinical governance oversight committee was created within the Australian Commission on Safety and Quality in Health Care (ACSQHC) to review PCEHR incidents monthly, but with no remit to examine the clinical software that connects to the PCEHR. There is, however, no public record of how clinical incidents are identified, what incidents are reported, their risk levels or resulting harms, or how they are made safe. A major lesson from patient safety is that open disclosure is essential to ensure patient and clinician trust in a system, and to maximize dissemination of lessons learned. This lack of transparency is likely a major barrier to uptake, especially given sporadic media reports of errors in PCEHR data (such as incorrect medications) with the potential to lead to harm.

We recently reviewed governance arrangements for health IT safety internationally; a wide variety of arrangements are possible, from self-certification through to formal regulation [10]. The English NHS has a mature approach that ensures clinical software connecting to the national infrastructure complies with safety standards, closely monitors incidents, and has a dedicated team to investigate and make safe any reported near miss or actual harm.

Our recent awareness of large-scale events across national e-health systems – where potentially many thousands of patient records are affected at once – is another reason PCEHR and national e-health safety should be a priority. With the English NHS, we recently completed an analysis of 850 of their incidents; 23% (191) were large-scale, involving between 10 and 66,000 patients each. Tracing all affected patients becomes difficult in a complex system composed of loosely interacting components, such as the PCEHR.


To address these risks, I make the following recommendations:

  1. A whole-of-system safety audit and risk assessment of the PCEHR and its feeder systems should be conducted, using all available internal data, and made public. The risks of using administrative data for clinical purposes, and of the hybrid record structure, need immediate assessment.
  2. A strong safety case for continued use of administrative data needs to be made or it should be withdrawn from the PCEHR.
  3. We need a whole-of-system (not just PCEHR) approach: software (and updates) designed and tested to be certifiably safe, active monitoring for harm events, and a response function to investigate and make safe the root causes of any event. Without this it is not possible, for example, to certify that a GP desktop system interoperating with the PCEHR is built and operated safely when it uploads to or downloads from the PCEHR.
  4. Existing PCEHR clinical safety governance functions need to be brought together in one place. The nature, size, and structure of this function, and the degree to which it should be legislated to mandate safety, is a discussion that must be had. Such bodies exist in other industries, e.g. the Civil Aviation Safety Authority (CASA). The ACSQHC is a possible home for this function but would need substantial changes to its mandate, resourcing, remit, and skill set.
  5. Reports of incidents and their remedies need to be made public, in the same way that aviation incidents are reported. This will build trust amongst the public and clinicians, contribute to safer practice and design, and mitigate negative press when incidents invariably become public.


[See parent blog for links to papers that are not linked here]

1. Coiera E, Aarts J, Kulikowski C. The dangerous decade. Journal of the American Medical Informatics Association 2012;19:2-5

2. Magrabi F, Ong M, Runciman W, Coiera E. Patient safety problems associated with healthcare information technology: an analysis of adverse events reported to the US Food and Drug Administration. AMIA Annual Symposium Proceedings; 2011:853-58.

3. Institute of Medicine. Health IT and Patient Safety: Building Safer Systems for Better Care. Washington, DC: The National Academies Press, 2012.

4. Sparnon E, Marella W. The role of the electronic health record in patient safety events. Pa Patient Saf Advis 2012;9(4):113-21.

5. Coiera E, Westbrook J. Should clinical software be regulated? MJA 2006;184(12):600-01

6. Sparnon E. Spotlight on Electronic Health Record Errors: Paper or Electronic Hybrid Workflows. Pa Patient Saf Advis 2013(10):2

7. McIlwraith J, Magrabi F. Submission. Personally Controlled Electronic Health Record (PCEHR) System: Legislation Issues Paper 2011.

8. Coiera E. Do we need a national electronic summary care record? Med J Aust 2011 (online 9/11/2010);194(2):90-92.

9. Coiera E, Kidd M, Haikerwal M. A call to national e-health clinical safety governance. Med J Aust 2012;196(7):430-31.

10. Magrabi F, Aarts J, Nohr C, et al. A comparative review of patient safety initiatives for national health information technology. International Journal of Medical Informatics 2012;82(5):e139-48.

Are standards necessary?

November 1, 2013 § 10 Comments

A common strategy for structuring complex human systems is to demand that everything be standards-based. The standards movement has taken hold in education and healthcare, and technical standards are seen as a prerequisite for information technology.

In healthcare, standards are visible in three critical areas, typical of many sectors: 1/ Evidence-based practice, where synthesis of the latest research generates best-practice recommendations; 2/ Safety, where performance indicators flag when processes are sub-optimal; and 3/ Technical standards, especially in information systems, which are designed to ensure different technical systems can interoperate with each other, or comply with minimum standards required for safe operation. There is a belief that ‘standardisation’ will be a forcing function, with compliance ensuring the “system” moves to the desired goal – whether that be safe care, appropriate adoption of recommended practices, or technology that actually works once implemented.

In the world of healthcare information systems, the mantra of standards and interoperability is near a religion. Standards bodies proclaim them, governments mandate them, and, as much as they can without being noticed, industry pays lip service to them, satisficing wherever it can. For such a pervasive technology – and we should see technical standards as exactly that, another technical artifact – it is surprising that there appears to be no evidence base that supports the case for their use. There seem to be no scientific trials showing that working with standards is better than working without them. Common sense, communities of practice, vested interests and sunk costs, along with the weight of belief, sustain the standards enterprise.

For those who advocate standards as a solution to system change, I believe the growing challenge of system inertia has one disturbing consequence. From first principles, the inevitable result of an ever-growing supply of standards meeting scarce human attention and resources is a new ‘Malthusian’ law of standards: the fraction of standards produced that are actually complied with will, over time, asymptote toward zero[1]. To paraphrase Nobelist Herb Simon’s famous quip about information and attention, a wealth of standards leads to a poverty of their implementation[1].
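This Malthusian claim can be made concrete with a toy model (my illustration, not from the original argument; all numbers are arbitrary assumptions): suppose the supply of new standards compounds each year, while the collective capacity to implement them stays fixed. The fraction of published standards ever complied with then shrinks toward zero.

```python
# Toy 'Malthusian law of standards' sketch: geometric growth in published
# standards meets a fixed yearly implementation capacity. The parameter
# values are illustrative assumptions, not empirical estimates.

def compliance_fraction(years, initial_supply=10.0, growth=1.1, capacity=10.0):
    """Fraction of all published standards implemented after `years` years."""
    total = implemented = 0.0
    supply = initial_supply
    for _ in range(years):
        total += supply              # standards published so far (compounding)
        supply *= growth
        # fixed implementation capacity, never exceeding the current backlog
        implemented += min(capacity, total - implemented)
    return implemented / total

for y in (5, 20, 100):
    print(y, round(compliance_fraction(y), 4))
```

Under these assumptions the fraction falls monotonically: early on nearly everything is implemented, but after a century of compounding supply, well under 1% of standards have ever been complied with.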

It should come as no surprise, then, that standardisation is widely resisted, except perhaps by standards makers – and even they tend to aggregate into competing tribes, each pushing one version of a standard over another. Unsurprisingly, safety goals remain elusive, and evidence-based practice seems to many clinicians an academic fantasy. Given that clinical standards are often not themselves evidence-based, such resistance may not be inappropriate[2 3].

In IT, standards committees sit for years arguing over what the ‘right’ standard is, only to find that once published, there are competing standards in the marketplace, and that technology vendors resist because of the cost of upgrading their systems to meet the new standard. Pragmatic experience in healthcare indicates standards can stifle local innovation and expertise[4]. In resource-constrained settings, trying to become standards compliant simply moves crucial resources away from front-line service provision.

There is a growing recognition that standards are a worthy and critical research topic[5]. Most standards research is empirical and case-based. An important but small literature examines the ‘standardisation problem’[6] – the decision to choose amongst a set of competing standards. Economists have used agent-based modelling in a limited way to study the rate and extent of standards adoption[7]. Crucially, current research treats standards adoption as an end in itself, and there seems to be little work examining the effect of standardisation on system behaviour. Are standards always a good thing? There seems to be no work on the core questions of when to standardise, what to standardise, and how much of any standard one should comply with.

Clearly, some standardisation may be needed to allow the different elements of a complex human system to work together, but it is not clear how much ‘standard’ is enough, or what should go into such a standard. My theoretical work on the continuum between information and communication system design provides some guidance on when formalisation of information processes makes sense, and when things are best left fluid[8]. That framework showed that in dynamic settings with high task uncertainty, standardisation is not a great idea. Further, information system design can be shaped by understanding the dynamics of the ‘conversation’ between an IT system and its user, and by the task-specific costs and benefits associated with technology choice[9 10].

It is remarkable that these questions are not being asked more widely. What is now needed is a rigorous analysis of how system behaviour is shaped and constrained by the act of standardisation, and whether we can develop more adaptive, dynamic approaches to standardisation that avoid system inertia and deliver flexible and sustainable human systems.

This blog is excerpted from my paper “Stasis and Adaptation“, which I gave in Copenhagen earlier this year, to open the Context-Sensitive Healthcare Conference. For an even more polemic paper from the same conference, check out Lars Botin’s paper How Standards will Degrade the Concepts of the Art of Medicine.

1. Coiera E. Why system inertia makes health reform so hard. British Medical Journal 2011;343:27-29. doi:10.1136/bmj.d3693

2. Lee DH, Vielemeyer O. Analysis of Overall Level of Evidence Behind Infectious Diseases Society of America Practice Guidelines. Arch Intern Med 2011;171:18-22

3. Tricoci P, Allen JM, Kramer JM, et al. Scientific evidence underlying the ACC/AHA clinical practice guidelines. JAMA 2009;301:831-41

4. Coiera E. Building a national health IT system from the middle out. J Am Med Inform Assoc 2009;16(3):271-73. doi:10.1197/jamia.M3183

5. Lyytinen K, King JL. Standard making: A critical research frontier for information systems research. MIS Quarterly 2006;30:405-11

6. The standardisation problem – an economic analysis of standards in information systems. Proceedings of the 1st IEEE Conference on Standardization and Innovation in Information Technology (SIIT ’99), 1999.

7. Weitzel T, Beimborn D, Konig W. A unified economic model of standard diffusion: the impact of standardisation cost, network effects and network topology. MIS Quarterly 2006;30:489-514

8. Coiera E. When conversation is better than computation. Journal of the American Medical Informatics Association 2000;7(3):277-86

9. Coiera E. Mediated agent interaction. In: Quaglini S, Barahona P, Andreassen S, eds. 8th Conference on Artificial Intelligence in Medicine. Berlin: Springer Lecture Notes in Artificial Intelligence No. 2101, 2001:1-15.

10. Coiera E. Interaction design theory. International Journal of Medical Informatics 2003;69:205-22

