Clinical Safety of the Australian Personally Controlled Electronic Health Record (PCEHR)

November 29, 2013

Like many nations, Australia has begun to develop nation-scale E-health infrastructure. In our case it currently takes the form of a Personally Controlled Electronic Health Record (PCEHR). The PCEHR is a Federal government created and operated system that caches clinical documents, including discharge summaries; summary care records that are electively created, uploaded and maintained by a designated general practitioner; and some administrative data, including information on medications prescribed and dispensed, as well as claims data from Medicare, our universal public health insurer. It is personally controlled in that consumers must elect to opt in to the system, and may elect to hide documents or data should they not wish them to be seen – a point that has many clinicians concerned but is equally celebrated by consumer groups.

With ongoing concerns in some sectors about system cost and apparently low levels of adoption and clinical use, as well as a change in government, the PCEHR is now being reviewed to determine its fate. International observers might detect echoes of recent English and Canadian experiences here, and we will all watch with interest to see what unfolds after the Committee reports.

My submission to the PCEHR Review Committee focuses only on the clinical safety of the system, and the governance processes designed to ensure clinical safety. You can read my submission here.

As background to the submission, I have written about clinical safety of IT in health care for several years now, and some of the more relevant papers are collected here:

  1. Ash JS, Berg M, Coiera E. Some unintended consequences of information technology in health care: the nature of patient care information system-related errors. Journal of the American Medical Informatics Association 2004;11(2):104-112.
  2. Coiera E, Westbrook J. Should clinical software be regulated? Medical Journal of Australia 2006;184(12):600-1.
  3. Coiera E, Westbrook JI, Wyatt J. The safety and quality of decision support systems. Methods of Information in Medicine 2006;45(Suppl 1):20-25.
  4. Magrabi F, Coiera E. Quality of prescribing decision support in primary care: still a work in progress. Medical Journal of Australia 2009;190(5):227-228.
  5. Coiera E. Do we need a national electronic summary care record? Medical Journal of Australia 2011;194(2):90-2.
  6. [Paywall] Coiera E, Kidd M, Haikerwal M. A call for national e-health clinical safety governance. Medical Journal of Australia 2012;196(7):430-431.
  7. Coiera E, Aarts J, Kulikowski C. The dangerous decade. Journal of the American Medical Informatics Association 2012;19(1):2-5.
  8. [Paywall] Magrabi F, Aarts J, Nohr C, et al. A comparative review of patient safety initiatives for national health information technology. International Journal of Medical Informatics 2012;82(5):e139-48.
  9. [Paywall] Coiera E. Why e-health is so hard. Medical Journal of Australia 2013;198(4):178-9.

Along with research colleagues I have been working on understanding the nature and extent of likely harms, largely through reviews of reported incidents in Australia, the US and England. A selection of our papers can be found here:

  1. Magrabi F, Ong M, Runciman W, Coiera E. An analysis of computer-related patient safety incidents to inform the development of a classification. Journal of the American Medical Informatics Association 2010;17:663-670.
  2. Magrabi F, Li SYW, Day RO, Coiera E. Errors and electronic prescribing: a controlled laboratory study to examine task complexity and interruption effects. Journal of the American Medical Informatics Association 2010;17:575-583.
  3. Magrabi F, Ong M, Runciman W, Coiera E. Patient safety problems associated with healthcare information technology: an analysis of adverse events reported to the US Food and Drug Administration. AMIA 2011 Annual Symposium, Washington DC, October 2011:853-8.
  4. Magrabi F, Ong MS, Runciman W, Coiera E. Using FDA reports to inform a classification for health information technology safety problems. Journal of the American Medical Informatics Association 2012;19:45-53.
  5. Magrabi F, Baker M, Sinha I, Ong MS, Harrison S, Kidd MR, Runciman WB, Coiera E. Clinical safety of England’s national programme for IT: a retrospective analysis of all reported safety events 2005 to 2011. International Journal of Medical Informatics 2015;84(3):198-206.

Submission to the PCEHR Review Committee 2013

November 29, 2013

Professor Enrico Coiera, Director, Centre for Health Informatics, Australian Institute of Health Innovation, UNSW

Date: 21 November 2013

The Clinical Safety of the Personally Controlled Electronic Health Record (PCEHR)

This submission comments on the consultations undertaken during PCEHR development and on the barriers to clinician and patient uptake and utility, and it makes suggestions to accelerate adoption. The lens for these comments is patient safety.

The PCEHR, like any healthcare technology, may do good or harm. Correct information at a crucial moment may improve care; misleading, missing or incorrect information may lead to mistakes and harm. There is clear evidence nationally and internationally that health IT can cause such harm [1-5].

To mitigate such risks, most industries adopt safety systems and processes across software design, build, implementation and operation. User trust that a system is safe enhances its adoption, and earning that trust forces system design to be simple, user-focused and well tested.

The current PCEHR has multiple safety risks, including:

  1. Using administrative data (e.g. PBS data and Prescribe/Dispense information) for clinical purposes (ascertaining current medications) – a use never intended;
  2. Using clinical documents (discharge summaries) instead of fine-grained patient data, e.g. allergies. Ensuring data integrity is often not possible within documents, e.g. identifying contradictory, missing or out-of-date data (see the sketch after this list);
  3. Together these create an electronic form of a hybrid record with no unitary view of the clinical ‘truth’. Hybrid records can lead to clinical error by impeding data search or by triggering incorrect decisions based on a partial view of the record [6];
  4. Shifting the onus for data integrity to a custodian GP, which avoids the PCEHR operator taking responsibility for data quality (a barrier to GP engagement and a risk, because integrity requires sophisticated, often automated checking); and
  5. No national process or standards to ensure that clinical software and updates (and indeed the PCEHR itself) are clinically safe.
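
To make the data-integrity point concrete, here is a minimal sketch in Python (hypothetical data and field names invented for illustration; it reflects no actual PCEHR structure). With fine-grained data, contradictions between sources are machine-detectable; with free-text documents, the same contradiction passes silently:

    # Fine-grained (atomic) data: a disagreement between two sources can
    # be detected automatically by comparing values field by field.
    gp_record = {"allergy_penicillin": True}
    hospital_record = {"allergy_penicillin": False}

    def find_contradictions(a, b):
        # Fields present in both records whose values disagree.
        return [k for k in a.keys() & b.keys() if a[k] != b[k]]

    print(find_contradictions(gp_record, hospital_record))
    # -> ['allergy_penicillin']  : flagged for human review

    # Document-based data: the same facts buried in free text. Nothing
    # short of reliable natural language processing can surface the
    # disagreement, so an out-of-date summary can silently stand as the
    # record's 'truth'.
    gp_summary = "Patient is allergic to penicillin."
    hospital_summary = "No known drug allergies."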

The need for clinical safety to be managed within the PCEHR was fed into the PCEHR process formally [7], via internal NEHTA briefings and at public presentations at which PCEHR leadership were present, and it was clear from the academic literature. Indeed, a 2010 MJA editorial on the risks and benefits of likely PCEHR architectures highlighted recent evidence suggesting many approaches were problematic. Tongue in cheek, it suggested that perhaps GPs should ‘curate’ the record, only to then point out the risks of that approach [8].

Yet, at the beginning of 2012, no formal clinical safety governance arrangements existed for the PCEHR program. The notable exception was the Clinical Safety Unit within NEHTA, whose limited role was to examine the safety of standards as designed, but not as implemented. There was no process to ensure software connecting to the PCEHR was safe (in the sense that patients would not be harmed from the way information was entered, stored, retrieved or used), only that it interoperated technically. No ongoing safety incident monitoring or response function existed, beyond any internal processes the system operator might have had.

Concerns that insufficient attention was being paid to clinical safety prompted a 2012 MJA editorial on the need for national clinical safety governance, both for the PCEHR and for E-health more broadly [9]. In response, a clinical governance oversight committee was created within the Australian Commission on Safety and Quality in Health Care (ACSQHC) to review PCEHR incidents monthly, but with no remit to look at the clinical software that connects to the PCEHR. There is, however, no public record of how clinical incidents are determined, what incidents are reported, their risk levels or resulting harms, nor how they are made safe. A major lesson from patient safety is that open disclosure is essential to ensure patient and clinician trust in a system, and to maximize dissemination of lessons learned. This lack of transparency is likely a major barrier to uptake, especially given sporadic media reports of errors in PCEHR data (such as incorrect medications) with the potential to lead to harm.

We recently reviewed governance arrangements for health IT safety internationally; a wide variety of arrangements are possible, from self-certification through to regulation [10]. The English NHS has a mature approach that ensures clinical software connecting to the national infrastructure complies with safety standards, closely monitors incidents, and has a dedicated team to investigate and make safe any reports of near misses or actual harms.

Our recent awareness of large-scale events across national e-health systems – where potentially many thousands of patient records are affected at once – is another reason PCEHR and national e-health safety should be a priority. We recently completed, with the English NHS, an analysis of 850 of their incidents; 191 (23%) were large-scale, involving between 10 and 66,000 patients each. Tracing all affected patients becomes difficult when dealing with a complex system composed of loosely interacting components, such as the PCEHR.

Recommendations:

  1. A whole-of-system safety audit and risk assessment of the PCEHR and its feeder systems should be conducted, using all internal data available, and made public. The risks of using administrative data for clinical purposes, and of the hybrid record structure, need immediate assessment.
  2. A strong safety case for continued use of administrative data needs to be made, or that data should be withdrawn from the PCEHR.
  3. We need a whole-of-system (not just PCEHR) approach that designs and tests software (and updates) to be certifiably safe, actively monitors for harm events, and provides a response function to investigate and make safe the root causes of any event. Without this it is not possible, for example, to certify that a GP desktop system that interoperates with the PCEHR is built and operated safely when it uploads to or downloads from the PCEHR.
  4. Existing PCEHR clinical safety governance functions need to be brought together in one place. The nature, size, structure, and degree to which this function is legislated to mandate safety is a discussion that must be had. Such bodies exist in other industries, e.g. the Civil Aviation Safety Authority (CASA). The ACSQHC is a possible home for this function but would need substantial changes to its mandate, resourcing, remit, and skill set.
  5. Reports of incidents and their remedies need to be made public, in the same way that aviation incidents are reported. This will build trust amongst the public and clinicians, contribute to safer practice and design, and mitigate negative press when incidents invariably become public.

References

[See parent blog for links to papers that are not linked here]

1. Coiera E, Aarts J, Kulikowski C. The dangerous decade. Journal of the American Medical Informatics Association 2012;19:2-5

2. Magrabi F, Ong M, Runciman W, Coiera E. Patient safety problems associated with healthcare information technology: an analysis of adverse events reported to the US Food and Drug Administration. AMIA Annual Symposium Proceedings 2011:853-8.

3. Institute of Medicine. Health IT and Patient Safety: Building Safer Systems for Better Care. Washington, DC: The National Academies Press, 2012.

4. Sparnon E, Marella W. The Role of the Electronic Health Record in Patient Safety Events. Pa Patient Saf Advis 2012;9(4):113-21

5. Coiera E, Westbrook J. Should clinical software be regulated? Med J Aust 2006;184(12):600-01

6. Sparnon E. Spotlight on Electronic Health Record Errors: Paper or Electronic Hybrid Workflows. Pa Patient Saf Advis 2013;10(2)

7. McIlwraith J, Magrabi F. Submission. Personally Controlled Electronic Health Record (PCEHR) System: Legislation Issues Paper 2011.

8. Coiera E. Do we need a national electronic summary care record? Med J Aust 2011 (online 9/11/2010);194(2):90-92

9. Coiera E, Kidd M, Haikerwal M. A call for national e-health clinical safety governance. Med J Aust 2012;196(7):430-31.

10. Magrabi F, Aarts J, Nohr C, et al. A comparative review of patient safety initiatives for national health information technology. International Journal of Medical Informatics 2012;82(5):e139-48

Are standards necessary?

November 1, 2013

A common strategy for structuring complex human systems is to demand that everything be standards-based. The standards movement has taken hold in education and healthcare, and technical standards are seen as a prerequisite for information technology.

In healthcare, standards are visible in three critical areas, typical of many sectors: 1/ Evidence-based practice, where synthesis of the latest research generates best-practice recommendations; 2/ Safety, where performance indicators flag when processes are sub-optimal; and 3/ Technical standards, especially in information systems, which are designed to ensure different technical systems can interoperate with each other, or comply with minimum standards required for safe operation. There is a belief that ‘standardisation’ will be a forcing function, with compliance ensuring the “system” moves to the desired goal – whether that be safe care, appropriate adoption of recommended practices, or technology that actually works once implemented.

In the world of healthcare information systems, the mantra of standards and interoperability is near a religion. Standards bodies proclaim them, governments mandate them, and, as much as it can without being noticed, industry pays lip service to them, satisficing wherever it can. For such a pervasive technology – and we should see technical standards as exactly that, another technical artifact – it is surprising that there appears to be no evidence base that supports the case for their use. There seem to be no scientific trials to show that working with standards is better than not. Common sense, communities of practice, vested interests and sunk costs, along with the weight of belief, sustain the standards enterprise.

For those who advocate standards as a solution to system change, I believe the growing challenge of system inertia has a disturbing consequence. The inevitable result of an ever-growing supply of standards meeting scarce human attention and resources should, from first-principles reasoning, lead to a new ‘Malthus’ law of standards – that the fraction of standards produced that are actually complied with will, with time, asymptote toward zero [1]. To paraphrase Nobelist Herb Simon’s famous quip on information and attention, a wealth of standards leads to a poverty of their implementation [1].
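
The Malthusian structure of this claim can be made concrete with a minimal formalisation (the notation is mine, an illustration rather than anything drawn from the cited paper). Suppose the capacity to implement standards is bounded at c standards per year, while the cumulative stock of published standards grows geometrically from an initial stock S_0 at rate g. The compliance fraction f(t) is then at best

    f(t) = \frac{c\,t}{S_0\,e^{g t}} \;\longrightarrow\; 0 \quad \text{as } t \to \infty

since linear growth in implementations cannot keep pace with geometric growth in supply – arithmetic capacity meeting geometric production, exactly as in Malthus.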

It should come as no surprise, then, that standardisation is widely resisted, except perhaps by standards makers – and even they tend to aggregate into competing tribes, each pushing one version of a standard over another. Unsurprisingly, safety goals remain elusive, and evidence-based practice seems to many clinicians an academic fantasy. Given that clinical standards are often not evidence-based, such resistance may not be inappropriate [2 3].

In IT, standards committees sit for years arguing over what the ‘right’ standard is, only to find that, once it is published, there are competing standards in the marketplace, and that technology vendors resist because of the cost of upgrading their systems to meet the new standard. Pragmatic experience in healthcare indicates standards can stifle local innovation and expertise [4]. In resource-constrained settings, trying to become standards-compliant simply moves crucial resources away from front-line service provision.

There is a growing recognition that standards are a worthy and critical research topic [5]. Most standards research is empirical and case based. An important but small literature examines the ‘standardisation problem’ [6] – the decision to choose amongst a set of standards. Economists have used agent-based modelling in a limited way to study the rate and extent of standards adoption [7] (a toy version of this approach is sketched below). Crucially, standards adoption is seen as an end in itself in current research, and there seems to be little work examining the effect of standardisation on system behaviour. Are standards always a good thing? There seems to be no work on the core questions of when to standardise, what to standardise, and how much of any standard one should comply with.
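
To give a flavour of what such agent-based models look like – this is an illustrative sketch only, not the model from Weitzel et al., and every parameter here is hypothetical – consider agents on a random network who adopt a standard once the network-effect benefit from adopting neighbours outweighs a fixed switching cost:

    import random

    def simulate_adoption(n_agents=200, n_links=6, cost=0.5,
                          benefit_per_neighbour=1.0, n_rounds=20, seed=42):
        """Toy agent-based model of standard diffusion with network effects.

        Each agent adopts the standard once the benefit from
        already-adopting neighbours exceeds a fixed switching cost
        (a simple threshold rule).
        """
        rng = random.Random(seed)
        # Build a random undirected network.
        neighbours = {i: set() for i in range(n_agents)}
        for i in range(n_agents):
            while len(neighbours[i]) < n_links:
                j = rng.randrange(n_agents)
                if j != i:
                    neighbours[i].add(j)
                    neighbours[j].add(i)
        adopted = set(rng.sample(range(n_agents), 5))  # seed early adopters
        history = [len(adopted)]
        for _ in range(n_rounds):
            for i in range(n_agents):
                if i not in adopted:
                    gain = benefit_per_neighbour * len(neighbours[i] & adopted)
                    if gain > cost:  # threshold rule
                        adopted.add(i)
            history.append(len(adopted))
        return history

    print(simulate_adoption())
    # With these defaults adoption tips toward near-universal uptake;
    # raising `cost` makes it stall at the early adopters - the
    # rate-and-extent behaviour this style of model is used to explore.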

Clearly, some standardisation may be needed to allow the different elements of a complex human system to work together, but it is not clear how much ‘standard’ is enough, or what goes into such a standard. My theoretical work on the continuum between information and communication system design provides some guidance on when formalisation of information processes makes sense, and when things are best left fluid [8]. That framework showed that in dynamic settings where there is task uncertainty, standardisation is not a great idea. Further, information system design can be shaped by understanding the dynamics of the ‘conversation’ between IT system and user, and by the task-specific costs and benefits associated with technology choice [9 10].

It is remarkable that these questions are not being asked more widely. What is now needed is a rigorous analysis of how system behaviour is shaped and constrained by the act of standardisation, and whether we can develop more adaptive, dynamic approaches to standardisation that avoid system inertia and deliver flexible and sustainable human systems.

This blog is excerpted from my paper “Stasis and Adaptation”, which I gave in Copenhagen earlier this year to open the Context-Sensitive Healthcare Conference. For an even more polemical paper from the same conference, check out Lars Botin’s paper How Standards will Degrade the Concepts of the Art of Medicine.

1. Coiera E. Why system inertia makes health reform so hard. British Medical Journal 2011;343:27-29. doi:10.1136/bmj.d3693

2. Lee DH, Vielemeyer O. Analysis of Overall Level of Evidence Behind Infectious Diseases Society of America Practice Guidelines. Arch Intern Med 2011;171:18-22

3. Tricoci P, Allen JM, Kramer JM, et al. Scientific Evidence Underlying the ACC/AHA Clinical Practice Guidelines. JAMA 2009;301:831-41

4. Coiera E. Building a National Health IT System from the Middle Out. J Am Med Inform Assoc 2009;16(3):271-73. doi:10.1197/jamia.M3183

5. Lyytinen K, King JL. Standard making: A critical research frontier for information systems research. MIS Quarterly 2006;30:405-11

6. The standardisation problem – an economic analysis of standards in information systems. Proceedings of the 1st IEEE Conference on Standardization and Innovation in Information Technology (SIIT ’99), 1999.

7. Weitzel T, Beimborn D, Konig W. A unified economic model of standard diffusion: the impact of standardisation cost, network effects and network topology. MIS Quarterly 2006;30:489-514

8. Coiera E. When conversation is better than computation. Journal of the American Medical Informatics Association 2000;7(3):277-86

9. Coiera E. Mediated agent interaction. In: Quaglini S, Barahona P, Andreassen S, eds. 8th Conference on Artificial Intelligence in Medicine. Berlin: Springer Lecture Notes in Artificial Intelligence No. 2101, 2001:1-15.

10. Coiera E. Interaction design theory. International Journal of Medical Informatics 2003;69:205-22

 
