A modest e-health proposal to government

May 12, 2015 § 4 Comments

Dear [insert country name] Government,

E-health is hard. I think we can all agree on that by now. You have spent [insert currency] [insert number] billion on e-health programs of one form or another over the last decade, and no one knows better than you how hard it is to demonstrate that you are making a difference to the quality, safety or efficiency of health care.

You also know that so much of e-health needs to happen in the public domain that, irrespective of your desire to privatise the problem, you will end up holding the can for much of what happens. E-health is your responsibility, and your citizens will, rightly or wrongly, hold you accountable.

It is so hard to get good strategic advice on e-health. You recently commissioned [insert large international consultancy firm] to prepare a new national e-health strategy, and it didn’t come cheap at [insert currency] [insert number] million. In the end it told you nothing you didn’t really already know, but at least you can say you tried.

You also commissioned [insert large international consultancy firm] to prepare a business case to back up that strategy, and it didn’t come cheaply either at [insert currency] [insert number] million. The numbers they came up with were big enough to convince Treasury to fund the national strategy, but deep in your heart of hearts you know you’ll never see a fraction of the [insert currency] promised.

It’s also really hard to find organisations that can deliver nation-scale e-health to time, to budget, and to a quality that the professions and the voters all agree is a good thing. You want the IT folks who build these systems to understand health care, its needs and challenges, deeply. Just because they can build a great payroll system or website does not qualify them to jump in and manage an e-health project. Do you remember how [insert large IT company] ended up crashing and burning when they took on the [insert now legendary e-health project disaster]? We can all agree that didn’t go as planned, and that you didn’t exactly enjoy the coverage in the press and social media.

What you really want, first, is impartial, cheap and informed expert advice, because you are in the end driven to do the right thing. Given the heated and partisan nature of politics, that advice needs to come from safe and trusted individuals. That often means the advice comes from within the tent of government, or from paid consultancies where legal contracts and the promise of future work secure your trust. You also want the IT folks who build your systems to be deeply trained in the complexities of implementing systems for e-health. The health professions, and indeed the voters, also need to be sophisticated enough to understand how to use these systems, and their limitations. That will maximise your chances of success, as well as blunt the uninformed chatter that so often derails otherwise good policy.

Our proposal is a simple one. We suggest you set aside 10% of the e-health budget to train the next generation of e-health designers, builders, and users. Use the funds to resource training programs at Masters level for future e-health policy leaders, as well as system designers, builders and implementers. Let us provide incentives to include e-health in health profession training, both in primary degrees and in continuing education. Let us also invest in training the public in the safe and effective use of e-health. Investing in creating a critical mass of skilled people over 5 years will be your best insurance that, when you are again faced with e-health, you have a real chance of doing the right thing.

Given how little outcome you have had for your e-health investments over the last decade, and the harsh reality that little will change over the next, this is a chance to rewrite the script. Invest in people and skills, and you might find that with time e-health isn’t so hard after all.

[insert name of concerned citizen, NGO, or professional association]

[insert date]

Clinical communication, interruption and the design of e-health

September 23, 2014 § Leave a comment

The different ways clinicians interact do not just shape the success of the communication act. Our propensity to interrupt each other, and to multitask as we handle communication tasks alongside other duties, has a direct effect on how well we carry out everything we do. Interruption, for example, has the capacity to distort human memory processes, leading to memory lapses as well as memory distortions.

Earlier this year I was interviewed by Dr. Robert Wachter, the Editor of the Agency for Healthcare Research and Quality (AHRQ) WebM&M. In that interview we covered the roles that interruption and multitasking play in patient safety, discussing both their risks, as well as strategies for minimising their effects. The interview also looked at the implications these communication and task management styles have for the design of information technologies.

The transcript of the interview as well as the podcast are available here.

A related 2012 editorial on the research challenges associated with interruption appeared in BMJ Quality & Safety.

It is clear that our clinical information systems are not designed to be used in busy, interrupt-driven environments, and that they suffer because of it. Not only do they fail to fit the way people, of necessity, communicate and work; they also introduce additional risks and have the potential to harm patients. It perplexes me that information system designers still work on the blind assumption that their users are giving their full attention to the software systems they have built. E-health systems need to be tolerant of interruption, and must be designed to support recovery from such events. Memory prompts, task markers, and retention of context once an action has been completed are essential for the safe design of e-health systems.
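To make the idea of interruption-tolerant design a little more concrete, here is a minimal sketch of my own (the class and method names are invented for illustration, not drawn from any particular clinical system). It checkpoints a task context at the moment of interruption and replays it as a memory prompt when the user returns, rather than assuming the user will remember where they were up to.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class TaskContext:
    """Snapshot of where a clinician is up to in an interruptible task."""
    task_name: str
    patient_id: str
    completed_steps: list = field(default_factory=list)
    pending_step: str | None = None
    saved_at: datetime | None = None


class InterruptionAwareSession:
    """Toy session wrapper that checkpoints context and replays it on resumption."""

    def __init__(self, context: TaskContext):
        self.context = context
        self.suspended = False

    def complete_step(self, step: str) -> None:
        self.context.completed_steps.append(step)
        self.context.pending_step = None

    def interrupt(self, next_step: str) -> None:
        # Checkpoint the task marker so nothing is lost while attention is elsewhere.
        self.context.pending_step = next_step
        self.context.saved_at = datetime.now(timezone.utc)
        self.suspended = True

    def resume(self) -> str:
        # Re-present the saved context as a memory prompt rather than relying on recall.
        self.suspended = False
        done = ", ".join(self.context.completed_steps) or "nothing yet"
        return (f"Resuming '{self.context.task_name}' for patient "
                f"{self.context.patient_id}: completed {done}; "
                f"next step is '{self.context.pending_step}'.")


# Example: an order-entry task interrupted midway through.
session = InterruptionAwareSession(
    TaskContext(task_name="medication order", patient_id="MRN-0001"))
session.complete_step("selected drug")
session.interrupt(next_step="confirm dose and route")
print(session.resume())
```

The point is simply that state is captured when the interruption occurs and is actively re-presented on return; real systems would of course need far richer task models than this.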

 

The many tribes of informatics

August 29, 2014 § 2 Comments

My learned and senior informatics colleagues spend much time debating the different professional roles that together are needed to support the practice of informatics (e.g. informatician, informaticist). Over the years I have assembled a set of definitions for these professional roles, as well as allied concepts. I feel this list is unlikely to help the debate at all.

Informatocyst [In-for-mata-s-ist, n] (see also, legacy system). A collection or build-up of information, walled off from the greater information system by a barrier of incompatible or aged interchange standards.

Infocystation [In-foh-sis-tay-shun, n] An outbreak of informatocysts. Also, the state of co-existence with these informational endoparasites.

Informablution [In-for-mah-bloo-shun, n]. Ritual cleansing of past errors. Application of holy oils extracted by data analysts from the sacred dashboard. Thought to bring enlightenment and understanding of the underlying nature of things. See also Informagic.

Informatocist [In-for-mata-s-ist, n] A practitioner specialising in the identification and removal of informatocysts, trained in the uses of tools such as the infolance (not to be confused with a ‘Standards body’, which is a secret cabal that believes the simple chanting of the names of the holy standards causes informatocysts to spontaneously rupture).

Informationist [In-for-may-shun-ist, n] One who believes that all information was created by God four thousand years ago (see also, Creationist, Infolutionist).

Informatics [In-for-mat-eeks, n] An ancient religion. Also, one divided by zero; Everything; Nothing (origins obscure).

Informagic [In-for-ma-gik, n] Informagical adv. The process by which purchase of a computer immediately improves clinical outcomes. (see also meaningless use, informagical thinking, informagician)

Infomortician [In-fo-mohr-tee-shun, n] A practitioner specialising in the preservation of dead information languages (see also mortician, Cobol, Fortran, MUMPS).

Informatrician [In-for-ma-tree-shun, n] Low caste technologist, responsible for designing, building and maintaining real information systems. Does all the work, gets no recognition or rewards. H-index of zero.

Informatelist [In-for-mat-er-lys-t, n] A collector of information system definitions.

Informately [In-for-mata-lee, n] The collection and study of information systems, standards, and related terms; stamp collecting (see also, terminology). Informatelic, informatelical adj. Informatelically adv.

Informatology [In-for-ma-tolo-gee, n] Study of the information fundament (see also, proctology, semantics, ontology).

Interoperative [Een-ta-hop-era-teev, n]. Double agent. Works for health services but secretly acts for industry. Preaches tolerance and diversity but may fabricate evidence of infocystation to undermine local systems in favour of industry “standard” products.

Telemethodist [Te-lee-meth-o-deest, n] Member of breakaway missionary sect that eschews belief in information for its own sake, emphasises its delivery to the disadvantaged through speaking tubes.

Guide to Health Informatics 3rd Edition

July 15, 2014 § 7 Comments

It’s now almost 20 years since I started to write the first edition, and over 10 years since I wrote the second. I’m very happy to announce that the text for the updated and much expanded third edition is now completed.

The 3rd Edition of the Guide to Health Informatics comes in paper and e-book versions. Purchase of the print version comes bundled with access to the VitalSource e-book version.

A 20% discount is available when you order direct from the publisher – just quote code BHP01 at checkout.

You can also buy it from Amazon UK or Amazon US, and other bookstores (ISBN-13: 978-1444170498). If you wish to purchase the e-book only, several options are available, including a Kindle edition and the VitalSource edition, which offers time-limited rental, full purchase, and bulk purchase for classes.

Complimentary textbook e-inspection copies are available to qualifying instructors for review prior to course adoption.

The book has a strong emphasis on demonstrating what works and what does not work in informatics. I have created a new evaluative framework that runs through the book, to help us understand why some classes of intervention appear to work so much better than others. As a taster, the new edition has 34 chapters and is significantly longer than the 2nd edition. The new chapters are each quite extensive, and focus as much as possible on basic concepts and principles rather than simple narrative descriptions of the topics. New chapters include:

  • Implementation
  • Information system safety
  • Social networks and social media interventions
  • Model Building for Decision Support, Data Analysis and Scientific Discovery
  • Population surveillance and public health informatics
  • Clinical bioinformatics and Personalised medicine
  • Consumer Informatics

I want to thank all of those who made so many suggestions to me earlier on about what was needed in the new book. I hope I have covered the most important topics for you. As always, the balance is between creating an introductory work that has some longevity, explores the core concepts needed to understand our discipline, and speaks with a single and unified voice, and writing an encyclopaedic multi-author work that tries to do everything, but has too many voices, becomes out of date quickly, and overwhelms students. At least for this edition I think we have managed to keep the book a ‘single voice’ overview – although I have had many expert colleagues help me with sourcing and structuring the material and checking what has been written. All the old chapters have had overhauls, most of them very significant (a lot has happened in the last 10 years).

For those who are looking to use the 3rd edition as a part of a course, here is the new table of contents.

Part 1 – Basic Concepts in Informatics 1. Models 2. Information 3. Systems

Part 2 – Clinical Informatics Skills 4. Communication 5. Structuring 6. Questioning 7. Searching 8. Making decisions

Part 3 – Information Systems in Healthcare 9. Information management systems 10. The Electronic Health Record 11. Designing and evaluating information and communication systems 12. Implementation 13. Information System safety 14. Information economics

Part 4 – Guideline and Protocol-based Systems 15. Guidelines, protocols and evidence-based healthcare 16. Computer-based protocol systems 17. Designing, disseminating and applying protocols

Part 5 – Communication Systems in Healthcare 18. Communication system basics; Interlude – The Internet and World Wide Web; 19. Information and Communication networks 20. Social networks and social media interventions 21. Telehealth and mobile health

Part 6 – Language, Coding and Classification 22. Terms, codes and classification 23. Healthcare terminologies and classification systems 24. Natural language and formal terminology

Part 7 – Clinical Decision Support and Analytics 25. Clinical Decision Support Systems; Interlude – Artificial Intelligence in Medicine; 26. Computational reasoning methods 27. Model Building for Decision Support, Data Analysis and Scientific Discovery

Part 8 – Specialized applications for health informatics 28. Patient monitoring and control 29. Population surveillance and public health informatics 30. Bioinformatics 31. Clinical Bioinformatics and Personalized Medicine 32. Consumer Informatics

And I’ve broken with tradition from the earlier editions, and picked a gorgeous new cover.

Clinical Safety of the Australian Personally Controlled Electronic Health Record (PCEHR)

November 29, 2013 § Leave a comment

Like many nations, Australia has begun to develop nation-scale e-health infrastructure. In our case it currently takes the form of a Personally Controlled Electronic Health Record (PCEHR). Created and operated by the Federal government, the PCEHR caches clinical documents, including discharge summaries and summary care records that are electively created, uploaded and maintained by a designated general practitioner, along with some administrative data, including information on medications prescribed and dispensed and claims data from Medicare, our universal public health insurer. It is personally controlled in that consumers must elect to opt in to the system, and may elect to hide documents or data should they not wish them to be seen – a point that has many clinicians concerned but is equally celebrated by consumer groups.

With ongoing concerns in some sectors about system cost, apparent low levels of adoption and clinical use, as well as a change in government, the PCEHR is now being reviewed to determine its fate. International observers might detect echoes of recent English and Canadian experiences here, and we will all watch with interest to see what unfolds after the Committee reports.

My submission to the PCEHR Review Committee focuses only on the clinical safety of the system, and the governance processes designed to ensure clinical safety. You can read my submission here.

As background to the submission, I have written about clinical safety of IT in health care for several years now, and some of the more relevant papers are collected here:

  1. Ash JS, Berg M, Coiera E. Some Unintended Consequences of Information Technology in Health Care: The Nature of Patient Care Information System Related Errors. Journal of the American Medical Informatics Association 2004;11(2):104-112.
  2. Coiera E, Westbrook J. Should clinical software be regulated? Medical Journal of Australia 2006;184(12):600-1.
  3. Coiera E, Westbrook JI, Wyatt J. The safety and quality of decision support systems. Methods of Information in Medicine 2006;45(Suppl 1):20-25.
  4. Magrabi F, Coiera E. Quality of prescribing decision support in primary care: still a work in progress. Medical Journal of Australia 2009;190(5):227-228.
  5. Coiera E. Do we need a national electronic summary care record? Medical Journal of Australia 2011;194(2):90-2.
  6. [Paywall] Coiera E, Kidd M, Haikerwal M. A call for national e-health clinical safety governance. Medical Journal of Australia 2012;196(7):430-431.
  7. Coiera E, Aarts J, Kulikowski C. The Dangerous Decade. Journal of the American Medical Informatics Association 2012;19(1):2-5.
  8. [Paywall] Magrabi F, Aarts J, Nohr C, et al. A comparative review of patient safety initiatives for national health information technology. International Journal of Medical Informatics 2012;82(5):e139-48.
  9. [Paywall] Coiera E. Why e-health is so hard. Medical Journal of Australia 2013;198(4):178-9.

Along with research colleagues I have been working on understanding the nature and extent of likely harms, largely through reviews of reported incidents in Australia, the US and England. A selection of our papers can be found here:

  1. Magrabi F, Ong M, Runciman W, Coiera E. An analysis of computer-related patient safety incidents to inform the development of a classification. Journal of the American Medical Informatics Association 2010;17:663-670.
  2. Magrabi F, Li SYW, Day RO, Coiera E. Errors and electronic prescribing: a controlled laboratory study to examine task complexity and interruption effects. Journal of the American Medical Informatics Association 2010;17:575-583.
  3. Magrabi F, Ong MS, Runciman W, Coiera E. Patient Safety Problems Associated with Healthcare Information Technology: an Analysis of Adverse Events Reported to the US Food and Drug Administration. AMIA 2011 Annual Symposium, Washington DC, October 2011, 853-8.
  4. Magrabi F, Ong MS, Runciman W, Coiera E. Using FDA reports to inform a classification for health information technology safety problems. Journal of the American Medical Informatics Association 2012;19:45-53.
  5. Magrabi, F., M. Baker, I. Sinha, M.S. Ong, S. Harrison, M. R. Kidd, W. B. Runciman and E. Coiera. Clinical safety of England’s national programme for IT: A retrospective analysis of all reported safety events 2005 to 2011. International Journal of Medical Informatics 2015;84(3): 198-206.

Submission to the PCEHR Review Committee 2013

November 29, 2013 § 4 Comments

Professor Enrico Coiera, Director Centre for Health Informatics, Australian Institute of Health Innovation, UNSW

Date: 21 November 2013

The Clinical Safety of the Personally Controlled Electronic Health Record (PCEHR)

This submission comments on the consultations during PCEHR development and on barriers to clinician and patient uptake and utility, and makes suggestions to accelerate adoption. The lens for these comments is patient safety.

The PCEHR, like any healthcare technology, may do good or harm. Correct information at a crucial moment may improve care. Misleading, missing or incorrect information may lead to mistakes and harm. There is clear evidence nationally and internationally that health IT can cause such harm [1-5].

To mitigate such risks, most industries adopt safety systems and processes at software design, build, implementation and operation. User trust that a system is safe enhances its adoption, and forces system design to be simple, user focused, and well tested.

The current PCEHR has multiple safety risks including:

  1. Using administrative data (e.g. PBS data and Prescribe/Dispense information) for clinical purposes (ascertaining current medications) – a use never intended;
  2. Using clinical documents (discharge summaries) instead of fine-grained patient data (e.g. allergies). Ensuring data integrity is often not possible within documents (e.g. identifying contradictory, missing or out-of-date data);
  3. Together these create an electronic form of a hybrid record with no unitary view of the clinical ‘truth’. Hybrid records can lead to clinical error by impeding data search or by triggering incorrect decisions based on a partial view of the record [6];
  4. Shifting the onus for data integrity to a custodian GP avoids the PCEHR operator taking responsibility for data quality (a barrier to GP engagement, and a risk because integrity requires sophisticated, often automated checking);
  5. No national process or standards to ensure that clinical software and updates (and indeed the PCEHR itself) are clinically safe.

The need for clinical safety to be managed within the PCEHR was fed into the PCEHR process formally [7], via internal NEHTA briefings and at public presentations at which PCEHR leadership were present, and it was clear from the academic literature. Indeed, a 2010 MJA editorial on the risks and benefits of likely PCEHR architectures highlighted recent evidence suggesting many approaches were problematic. It suggested, tongue in cheek, that perhaps GPs should ‘curate’ the record, only to then point out the risks of that approach [8].

Yet, at the beginning of 2012, no formal clinical safety governance arrangements existed for the PCEHR program. The notable exception was the Clinical Safety Unit within NEHTA, whose limited role was to examine the safety of standards as designed, but not as implemented. There was no process to ensure software connecting to the PCEHR was safe (in the sense that patients would not be harmed from the way information was entered, stored, retrieved or used), only that it interoperated technically. No ongoing safety incident monitoring or response function existed, beyond any internal processes the system operator might have had.

Concerns that insufficient attention was being paid to clinical safety prompted a 2012 MJA editorial on the need for national clinical safety governance, both for the PCEHR and for E-health more broadly [9]. In response, a clinical governance oversight committee was created within the Australian Commission on Safety and Quality in Health Care (ACSQHC) to review PCEHR incidents monthly, but with no remit to look at clinical software that connects to the PCEHR. There is, however, no public record of how clinical incidents are determined, what incidents are reported, their risk levels or resulting harms, nor how they are made safe. A major lesson from patient safety is that open disclosure is essential to ensure patient and clinician trust in a system, and to maximize dissemination of lessons learned. This lack of transparency is likely a major barrier to uptake, especially given the sporadic media reports of errors in PCEHR data (such as incorrect medications) with the potential to lead to harm.

We recently reviewed governance arrangements for health IT safety internationally, and a wide variety of arrangements are possible from self-certification through to regulation [10]. The English NHS has a mature approach that ensures clinical software connecting to the national infrastructure complies with safety standards, closely monitors incidents and has a dedicated team to investigate and make safe any reports of near misses or actual harms.

Our recent awareness of large-scale events across national e-health systems – where potentially many thousands of patient records are affected at once – is another reason PCEHR and national e-health safety should be a priority. We recently completed, with the English NHS, an analysis of 850 of their incidents. 23% (191) of incidents were large-scale, involving between 10 and 66,000 patients. Tracing all affected patients becomes difficult when dealing with a complex system composed of loosely interacting components, such as the PCEHR.

Recommendations:

  1. A whole of system safety audit and risk assessment of the PCEHR and feeder systems should be conducted, using all internal data available, and made public. The risks of using administrative data for clinical purposes and the hybrid record structure need immediate assessment.
  2. A strong safety case for continued use of administrative data needs to be made or it should be withdrawn from the PCEHR.
  3. We need a whole of system (not just PCEHR) approach to designing and testing software (and updates) that are certifiably safe, to actively monitor for harm events, and a response function to investigate and make safe root causes of any event. Without this it is not possible for example to certify that a GP desktop system that interoperates with the PCEHR is built and operated safely when it uploads or downloads from the PCEHR.
  4. Existing PCEHR clinical safety governance functions need to be brought together in one place. The nature, size, structure, and degree to which this function is legislated to mandate safety is a discussion that must be had. Such bodies exist in other industries, e.g. the Civil Aviation Safety Authority (CASA). ACSQHC is a possible home for this but would need to substantially change its mandate, resourcing, remit, and skill set.
  5. Reports of incidents and their remedies need to be made public in the same way that aviation incidents are reported. This will build trust amongst the public and clinicians, contribute to safer practice and design, and mitigate negative press when incidents invariably become public.

References

[See parent blog for links to papers that are not linked here]

1. Coiera E, Aarts J, Kulikowski C. The dangerous decade. Journal of the American Medical Informatics Association 2012;19:2-5

2. Magrabi F, Ong MS, Runciman W, Coiera E. Patient safety problems associated with healthcare information technology: an analysis of adverse events reported to the US Food and Drug Administration. AMIA Annual Symposium Proceedings; 2011. American Medical Informatics Association.

3. Institute of Medicine. Health IT and Patient Safety: Building Safer Systems for Better Care. Washington, DC: The National Academies Press, 2012.

4. Sparnon E, Marella W. The Role of the Electronic Health Record in Patient Safety Events. Pa Patient Saf Advis 2012;9(4):113-21

5. Coiera E, Westbrook J. Should clinical software be regulated? MJA 2006;184(12):600-01

6. Sparnon E. Spotlight on Electronic Health Record Errors: Paper or Electronic Hybrid Workflows. Pa Patient Saf Advis 2013;10(2)

7. McIlwraith J, Magrabi F. Submission. Personally Controlled Electronic Health Record (PCEHR) System: Legislation Issues Paper 2011.

8. Coiera E. Do we need a national electronic summary care record? Med J Aust 2011 (online 9/11/2010);194(2):90-92

9. Coiera E, Kidd M, Haikerwal M. A call for national e-health clinical safety governance. Med J Aust 2012;196(7):430-31.

10. Magrabi F, Aarts J, Nohr C, et al. A comparative review of patient safety initiatives for national health information technology. International journal of medical informatics 2012;82(5):e139-48

Are standards necessary?

November 1, 2013 § 10 Comments

A common strategy for structuring complex human systems is to demand that everything be standards-based. The standards movement has taken hold in education and healthcare, and technical standards are seen as a prerequisite for information technology.

In healthcare, standards are visible in three critical areas, typical of many sectors: 1/ Evidence-based practice, where synthesis of the latest research generates best-practice recommendations; 2/ Safety, where performance indicators flag when processes are sub-optimal; and 3/ Technical standards, especially in information systems, which are designed to ensure different technical systems can interoperate with each other, or comply with minimum standards required for safe operation. There is a belief that ‘standardisation’ will be a forcing function, with compliance ensuring the “system” moves to the desired goal – whether that be safe care, appropriate adoption of recommended practices, or technology that actually works once implemented.

In the world of healthcare information systems, the mantra of standards and interoperability is near a religion. Standards bodies proclaim them, governments mandate them, and, as much as they can without being noticed, industry pays lip service to them, satisficing wherever they can. For such a pervasive technology – and we should see technical standards as exactly that, another technical artifact – it is surprising that there appears to be no evidence base that supports the case for their use. There seem to be no scientific trials to show that working with standards is better than not. Common sense, communities of practice, vested interests and sunk costs, along with the weight of belief, sustain the standards enterprise.

For those who advocate standards as a solution to system change, I believe the growing challenge of system inertia has a disturbing consequence. The inevitable result of an ever-growing supply of standards meeting scarce human attention and resources should, from first principles, be a new ‘Malthus’ law of standards – that the fraction of standards produced that are actually complied with will, with time, asymptote toward zero[1]. To paraphrase Nobelist Herb Simon’s famous quip on information and attention, a wealth of standards leads to a poverty of their implementation[1].
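To make the claimed law a little more concrete, here is an illustrative formalisation of my own (it is not taken from the cited paper; the symbols are assumptions for the sketch):

```latex
% Illustrative only. S(t): cumulative standards published by time t.
% K: bounded capacity of implementers to comply with standards per unit time.
\[
  f(t) \;=\; \frac{\text{standards actually complied with by time } t}{S(t)}
       \;\le\; \frac{K\,t}{S(t)}
\]
% If S(t) grows super-linearly (say S(t) ~ e^{rt}) while K stays fixed,
% then f(t) -> 0 as t -> infinity: the 'Malthus law of standards' above.
```

Nothing in the sketch depends on the exact growth curve; it only requires that the supply of standards outpaces the bounded capacity to implement them.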

It should come as no surprise, then, that standardisation is widely resisted, except perhaps by standards makers. Even then, they tend to aggregate into competing tribes, each pushing one version of a standard over another. Unsurprisingly, safety goals remain elusive, and evidence-based practice seems to many clinicians an academic fantasy. Given that clinical standards are often not evidence-based, such resistance may not be inappropriate[2 3].

In IT, standards committees sit for years arguing over what the ‘right’ standard is, only to find that once published, there are competing standards in the marketplace, and that technology vendors resist because of the cost of upgrading their systems to meet the new standard. Pragmatic experience in healthcare indicates standards can stifle local innovation and expertise[4]. In resource-constrained settings, trying to become standards compliant simply moves crucial resources away from front-line service provision.

There is a growing recognition that standards are a worthy and critical research topic[5]. Most standards research is empirical and case-based. An important but small literature examines the ‘standardisation problem’[6] – the decision to choose amongst a set of standards. Economists have used agent-based modelling in a limited way to study the rate and extent of standards adoption[7]. Crucially, standards adoption is seen as an end in itself in current research, and there seems to be little work examining the effect of standardisation on system behaviour. Are standards always a good thing? There seems to be no work on the core questions of when to standardise, what to standardise, and how much of any standard one should comply with.

Clearly, some standardisation may be needed to allow the different elements of a complex human system to work together, but it is not clear how much ‘standard’ is enough, or what goes into such a standard. My theoretical work on the continuum between information and communication system design provides some guidance on when formalisation of information processes makes sense, and when things are best left fluid[8]. That framework showed that in dynamic settings where there is task uncertainty, standardisation is not a great idea. Further, information system design can be shaped by understanding the dynamics of the ‘conversation’ between IT system and user, and by the task-specific costs and benefits associated with technology choice[9 10].

It is remarkable that these questions are not being asked more widely. What is now needed is a rigorous analysis of how system behaviour is shaped and constrained by the act of standardisation, and whether we can develop more adaptive, dynamic approaches to standardisation that avoid system inertia and deliver flexible and sustainable human systems.

This blog is excerpted from my paper “Stasis and Adaptation”, which I gave in Copenhagen earlier this year to open the Context-Sensitive Healthcare Conference. For an even more polemical paper from the same conference, check out Lars Botin’s paper How Standards will Degrade the Concepts of the Art of Medicine.

1. Coiera E. Why system inertia makes health reform so hard. British Medical Journal 2011;343:27-29. doi:10.1136/bmj.d3693

2. Lee DH, Vielemeyer O. Analysis of Overall Level of Evidence Behind Infectious Diseases Society of America Practice Guidelines. Arch Intern Med 2011;171:18-22

3. Tricoci P, Allen JM, Kramer JM, et al. Scientific Evidence Underlying the ACC/AHA Clinical Practice Guidelines. JAMA 2009;301:831-41

4. Coiera E. Building a National Health IT System from the Middle Out. J Am Med Inform Assoc 2009;16(3):271-73. doi:10.1197/jamia.M3183

5. Lyytinen K, King JL. Standard making: A critical research frontier for information systems research. MIS Quarterly 2006;30:405-11

6. The Standardisation problem – an economic analysis of standards in information systems. Proceedings of the 1st IEEE Conference on Standardization and Innovation in Information Technology (SIIT ’99), 1999.

7. Weitzel T, Beimborn D, Konig W. A unified economic model of standard diffusion: the impact of standardisation cost, network effects and network topology. MIS Quarterly 2006;30:489-514

8. Coiera E. When conversation is better than computation. Journal of the American Medical Informatics Association 2000;7(3):277-86

9. Coiera E. Mediated agent interaction. In: Quaglini S, Barahona P, Andreassen S, eds. 8th Conference on Artificial Intelligence in Medicine. Berlin: Springer Lecture Notes in Artificial Intelligence No. 2101, 2001:1-15.

10. Coiera E. Interaction design theory. International Journal of Medical Informatics 2003;69:205-22

 

Bending the eHealth benefits curve

June 8, 2013 § 6 Comments

Wise heads no longer look for savings in the health system. We no longer expect our new technologies, re-organisations, and programs to find a penny. The idea that money can somehow be ‘released’ through change, to then be reapplied elsewhere, is gone. Healthcare has so much pent up demand, so many unmet needs, that all our improvements can do is allow more of those needs to be met. Never comes the day that we find ourselves idle, our resources available for redeployment elsewhere.

That is why the new language in health is all about “bending the cost curve” – the idea that the very best innovation can do is to slow the growth in total system costs. No one who is informed expects you to save money anymore, just not to see as much relentless growth in the bills.

Foremost amongst the tools for bending the cost curve sits information technology. The benefits of automation and better-informed decision making are to make current processes both more efficient (so we can do more with the same) and safer (so we don’t pay for as many costly mistakes).

There is a problem, however, and it is a discussion still only at the fringes. At least at scale, health IT is not delivering the benefits we expected. A recent report on the realised (as opposed to predicted) benefits of the hugely expensive English National Programme for IT (NPfIT) shows that the whole effort might at best break even, and that in some parts of the programme the realised benefit is as low as 2% of that predicted[1]. England is not especially bad as an exemplar country; it is just especially honest. E-health, it seems, is much harder than we thought, at least at nation scale[2].

So, what is behind this apparent poor performance? The first explanation is simple and straightforward. We are in uncharted territory. No one has ever done anything like this before, so there are no manuals on how to build nation scale e-health systems. Worse, every country is different, with different populations, different burdens of disease, different economies, political imperatives and health delivery systems. So, it turns out that every national program for e-health is an uncontrolled, n of 1 case study. Sure, countries can talk to each other, share experiences and intelligence, but local context is all in the delivery.

Next, there is clearly a problem with how potential benefits are calculated. Healthcare is a complex system, and it is a brave individual that uses linear extrapolation to come up with the numbers for the expected benefits of an e-health program. Just because there is a 2% error rate in a process, and you can show your automation will detect it, does not mean that you can claim all that as your benefit. The automation may never be used (clinicians are like that – they are busy), or will be ignored (clinicians are also like that – sometimes they do know better, sometimes they don’t). Or the current system would have detected or remedied the error in other ways further downstream in the process.
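To make that discounting explicit, here is a toy calculation of my own (the factors and numbers are illustrative assumptions, not figures from any benefits review):

```latex
% Illustrative only. A predicted benefit B_pred is realised only to the extent
% that the system is actually used, its output is acted on, and the error
% would not have been caught downstream anyway:
\[
  B_{\text{realised}} \;\approx\; B_{\text{pred}}
      \times p_{\text{use}}
      \times p_{\text{act}}
      \times \bigl(1 - p_{\text{caught anyway}}\bigr)
\]
% e.g. with p_use = 0.5, p_act = 0.5 and p_caught anyway = 0.6,
% only 10% of the predicted benefit survives.
```

Each factor is usually well below one in practice, which is how an impressively large predicted benefit can shrink to a very modest realised one.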

More importantly, the costs and benefits of information are subject to network effects [3]. The marginal value of buying a fax machine was always dependent on how many other people owned a fax machine, and the same is true today for owning a Facebook account, or uploading a shared health summary onto a national system. The likelihood that the vital information your doctor uploaded onto an information system is actually seen and affects your care depends vitally on how many other doctors have done the same.
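A minimal way to see the network effect is a toy model of my own (the numbers are purely illustrative): if a fraction p of clinicians both upload to and consult the shared record, then a given encounter can only benefit when both the uploading and the viewing clinician are participants.

```latex
% Toy model: p = fraction of clinicians who both upload to and consult the record.
\[
  \Pr(\text{shared information actually reaches the point of care}) \;\approx\; p^{2}
\]
% At 20% adoption only ~4% of encounters can benefit; at 80% adoption ~64% can.
% Marginal value therefore depends strongly on how many others participate.
```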

Another reason that benefits are not being realised is simply that the systems being built are the wrong ones. They solve problems no one has asked to be solved, or they build highways no one especially wants to travel down[2]. Or sometimes, they just don’t work – there comes a point when large-scale IT programs that are much delayed, always asking for more resource, always promising that success is just around the corner, need to be called for what they are. Never confuse the means and the ends, because all you end up with is means without end.

Perhaps it is time to step back and talk not about bending the cost curve down, but about bending the benefits curve up. We should not be looking just for where we can optimise; we should also be looking for where we have the best chance of succeeding.

Which clinical tasks are best suited to automation? [4] It’s a simple question and one we never seem to ask. There is an assumption that, just because information technology is a universal tool, it can be universally applied. The truth is you can throw an awful lot of money at a poorly specified problem and get nothing back. Equally, you can spend relatively small amounts of money in the right part of the problem space and reap great rewards. There is an unchallenged myth that large-scale national infrastructure projects will always release large-scale benefits everywhere – like fluoridation of water or better urban sanitation projects. That increasingly seems not to be the case.

What is the alternative, however? If we are to focus on solving specific clinical tasks rather than building central infrastructure, are we not stuck? There is so much local variation in the way things are done that imposing standard ways of working will not get very far either. There are strong hints coming from the world of consumer systems like the smartphone. For example, the reason that ‘apps’ seem to be such a successful idea is not that they are computer programs (we’ve had those for a while) but that they are cheap, disposable, substitutable, and bespoke. Information – what we need it for, how we use it, and how we access it – is a very local affair, and that is not about to change. Health information for now is also most likely to be captured on the local systems of your hospital or GP. The logic of duplicating all or part of that local information, and shipping it to a central store, seems not to make sense either technically or financially.

So, what might once have been a radical idea – that we need to architect health IT like an app store [5] – is perhaps now not so radical. There is more than a grain of truth to the proposition. We have built a world of interconnection, of personalisation, and we should embrace it. We also have the good fortune that the information technology industry has already pioneered the technologies and business models that make much of this new world possible.
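As a rough sketch of what an ‘app store’ architecture could mean in practice, here is a toy example of my own (the interface and names are invented for illustration; this is not the proposal in [5] or any existing standard). Small, substitutable apps run against a narrow interface to whatever local record store already exists, rather than against a central national copy of the data.

```python
from abc import ABC, abstractmethod
from typing import Iterable


class LocalRecordStore(ABC):
    """Minimal read interface that a hospital or GP system might expose locally."""

    @abstractmethod
    def current_medications(self, patient_id: str) -> Iterable[str]:
        ...


class HealthApp(ABC):
    """A small, disposable, substitutable app that runs against any local store."""

    @abstractmethod
    def run(self, store: LocalRecordStore, patient_id: str) -> str:
        ...


class DuplicateTherapyCheck(HealthApp):
    """Example app: flags the same drug name appearing more than once in a list."""

    def run(self, store: LocalRecordStore, patient_id: str) -> str:
        meds = [m.lower() for m in store.current_medications(patient_id)]
        duplicates = {m for m in meds if meds.count(m) > 1}
        if duplicates:
            return f"Possible duplicates for {patient_id}: {', '.join(sorted(duplicates))}"
        return f"No duplicates found for {patient_id}."


class InMemoryStore(LocalRecordStore):
    """Stand-in for a real GP or hospital system, for demonstration only."""

    def __init__(self, data: dict[str, list[str]]):
        self._data = data

    def current_medications(self, patient_id: str) -> Iterable[str]:
        return self._data.get(patient_id, [])


# The app depends only on the narrow local interface, not on a central store,
# so it can be swapped out or discarded without touching national infrastructure.
store = InMemoryStore({"MRN-0001": ["metformin", "Metformin", "ramipril"]})
print(DuplicateTherapyCheck().run(store, "MRN-0001"))
```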

Is it now time to move on, to write off sunk costs, and say good-bye to old business models and technology providers? The centralised, inflexible ‘old iron’ model of automation that has dominated e-health for a generation is probably on its last legs.

References

1. National Audit Office. Review of the final benefits statement for programmes previously managed under the National Programme for IT in the NHS, 2013. http://www.nao.org.uk/wp-content/uploads/2013/06/10171-001_NPfiT_Review.pdf

2. Coiera E. Why e-health is so hard. The Medical Journal of Australia 2013;198(4):178-9

3. Coiera E. Information economics and the Internet. Journal of the American Medical Informatics Association 2000;7:215-21

4. Sintchenko V, Coiera E. Which clinical decisions benefit from automation? A task complexity approach. International Journal of Medical Informatics 2003;70:309-16

5. Mandl KD, Kohane IS. No small change for the health information economy. New England Journal of Medicine 2009;360(13):1278-81

© Enrico Coiera 2013

Occupy Healthcare – Social media do have the potential to revolutionize medicine

May 30, 2013 § Leave a comment

Can you smell revolution in the air? Social media like Twitter and Facebook helped catalyze the Arab Spring and the Occupy movement’s global protests. Social media are often beyond the control of government, and allow citizen groups to form, share information and respond more quickly and with greater reach than ever before. With so much disaffection with modern healthcare, will healthcare too soon have its own Arab Spring? Will old power structures be taken apart, and the compact between clinician, patient, industry and government reassembled into something new?

This is a theme I explore in a Croakey Health Blog:

http://blogs.crikey.com.au/croakey/2013/05/30/occupy-healthcare-social-media-do-have-the-potential-to-revolutionize-medicine/

It is a companion to my BMJ paper Social networks, social media and social disease.

There is a BMJ podcast to accompany the paper, and you can listen in on my commentary, which starts at 9:54 in the audio recording. The ABC Radio National Health Report also ran an interview on the topic and has both audio and a transcript.

Interesting times for healthcare I think ….

Help us write the 3rd Edition of the Guide to Health Informatics

May 28, 2013 § 23 Comments

The Guide to Health Informatics 2nd Edition was published in 2003, and has endured surprisingly well over the following decade. One of the guiding principles for selecting material in that text was to focus on core ideas with a long half-life. In other words, the focus was less on the ever-changing “bleeding edge” of technology and its application, and more on foundational principles and topics.

Well, we are now beavering away at the third edition, and hope for the totally revised text to be completed by the end of 2013.

We would very much welcome feedback from the community about what you would like to see in the third edition. What new topics would you like to see covered (remembering that we are going to focus on long half-life ideas and topics)? What new features would you like to see in the chapters? We currently have questions and further reading at the end of each chapter. We will use this website as a place for online teaching materials (such as a PowerPoint deck with the figures used in the text, for teachers to download).

The table of contents for the 2nd edition is here if you want to look at it again. Some of the old topics will be substantially revised (for example all the material on the internet in health). All existing chapters are being updated with the latest material.

New chapters or topic sections are being prepared for:

  • The safety of e-health (what can go wrong, how do you minimize risks)
  • Nation-scale health IT systems – their designs, functions, risks and benefits (including HIEs).
  • Consumer Informatics
  • Social networks and media
  • Modeling and analyzing large scale data sets (big data).
  • Computational discovery systems

What else do you want? Now is your chance to help shape the text!

We will use this blog to keep you up to date with progress on the new edition, and will continue to ask for feedback as it progresses.
