Are standards necessary?

November 1, 2013

A common strategy for structuring complex human systems is to demand that everything be standards-based. The standards movement has taken hold in education and healthcare, and technical standards are seen as a prerequisite for information technology.

In healthcare, standards are visible in three critical areas, typical of many sectors:

1/ Evidence-based practice, where synthesis of the latest research generates best-practice recommendations;

2/ Safety, where performance indicators flag when processes are sub-optimal; and

3/ Technical standards, especially in information systems, which are designed to ensure different technical systems can interoperate with each other, or comply with minimum standards required for safe operation.

There is a belief that ‘standardisation’ will be a forcing function, with compliance ensuring the ‘system’ moves to the desired goal – whether that be safe care, appropriate adoption of recommended practices, or technology that actually works once implemented.

In the world of healthcare information systems, the mantra of standards and interoperability is near a religion. Standards bodies proclaim them, governments mandate them, and, as much as they can without being noticed, industry pays lip service to them, satisficing wherever it can. For such a pervasive technology, and we should see technical standards as exactly that – another technical artifact – it is surprising that there appears to be no evidence base that supports the case for their use. There seem to be no scientific trials to show that working with standards is better than not. Commonsense, communities of practice, vested interests and sunk costs, along with the weight of belief, sustain the standards enterprise.

For those who advocate standards as a solution to system change, I believe the growing challenge of system inertia has a disturbing consequence. The inevitable result of an ever-growing supply of standards meeting scarce human attention and resources should, from first principles, be a new ‘Malthus’ law of standards – that the fraction of standards produced that are actually complied with will, with time, asymptote toward zero[1]. To paraphrase Nobelist Herb Simon’s famous quip on information and attention, a wealth of standards leads to a poverty of their implementation[1].
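The arithmetic behind such a law is easy to sketch. If the stock of standards compounds year on year while the capacity to implement them stays roughly fixed, the complied-with fraction must shrink toward zero. The small Python sketch below is only an illustration of that first-principles argument; the growth rate and capacity figures are arbitrary assumptions, not data from any standards body.

```python
# Illustrative sketch of a 'Malthus law of standards': the supply of standards
# compounds each year while implementation capacity stays fixed, so the fraction
# of standards actually complied with drifts toward zero. All numbers are
# arbitrary assumptions chosen for illustration.

def compliance_fraction(years, initial_new=20.0, growth=1.1, capacity_per_year=10.0):
    """Fraction of the cumulative stock of standards that has been implemented, year by year."""
    fractions = []
    new = initial_new
    total_standards = 0.0
    total_implemented = 0.0
    for _ in range(years):
        total_standards += new
        new *= growth  # the supply of new standards compounds ('geometric' growth)
        backlog = total_standards - total_implemented
        total_implemented += min(capacity_per_year, backlog)  # fixed human capacity
        fractions.append(total_implemented / total_standards)
    return fractions


if __name__ == "__main__":
    for year, frac in enumerate(compliance_fraction(40), start=1):
        if year % 10 == 0:
            print(f"after {year} years: {frac:.0%} of standards complied with")
```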

It should come as no surprise, then, that standardisation is widely resisted, except perhaps by standards makers. Even then, they tend to aggregate into competing tribes, each pushing one version of a standard over another. Unsurprisingly, safety goals remain elusive, and to many clinicians evidence-based practice seems an academic fantasy. Given that clinical standards are often not evidence-based, such resistance may not be inappropriate[2,3].

In IT, standards committees sit for years arguing over what the ‘right’ standard is, only to find that, once it is published, there are competing standards in the marketplace, and that technology vendors resist because of the cost of upgrading their systems to meet the new standard. Pragmatic experience in healthcare indicates that standards can stifle local innovation and expertise[4]. In resource-constrained settings, trying to become standards-compliant simply moves crucial resources away from front-line service provision.

There is a growing recognition that standards are a worthy and critical research topic[5]. Most standards research is empirical and case-based. An important but small literature examines the ‘standardisation problem’[6] – the decision to choose amongst a set of standards. Economists have used agent-based modelling in a limited way to study the rate and extent of standards adoption[7]. Crucially, in current research standards adoption is seen as an end in itself, and there seems to be little work examining the effect of standardisation on system behaviour. Are standards always a good thing? There seems to be no work on the core questions of when to standardise, what to standardise, and how much of any standard one should comply with.

Clearly, some standardisation may be needed to allow the different elements of a complex human system to work together, but it is not clear how much ‘standard’ is enough, or what goes into such a standard. My theoretical work on the continuum between information and communication system design provides some guidance on when formalisation of information processes makes sense, and when things are best left fluid[8]. That framework showed that standardisation is not a great idea in dynamic settings where there is task uncertainty. Further, information system design can be shaped by understanding the dynamics of the ‘conversation’ between IT system and user, and by the task-specific costs and benefits associated with technology choice[9,10].

It is remarkable that these questions are not being asked more widely. What is now needed is a rigorous analysis of how system behaviour is shaped and constrained by the act of standardisation, and whether we can develop more adaptive, dynamic approaches to standardisation that avoid system inertia and deliver flexible and sustainable human systems.

This blog is excerpted from my paper “Stasis and Adaptation”, which I gave in Copenhagen earlier this year, to open the Context-Sensitive Healthcare Conference. For an even more polemical paper from the same conference, check out Lars Botin’s paper How Standards will Degrade the Concepts of the Art of Medicine.

1. Coiera E. Why system inertia makes health reform so hard. British Medical Journal 2011;343:27-29. doi:10.1136/bmj.d3693

2. Lee DH, Vielemeyer O. Analysis of Overall Level of Evidence Behind Infectious Diseases Society of America Practice Guidelines. Arch Intern Med 2011;171:18-22

3. Tricoci P, Allen JM, Kramer JM, et al. Scientific Evidence Underlying the ACC/AHA Clinical Practice Guidelines. JAMA 2009;301:831-41

4. Coiera E. Building a National Health IT System from the Middle Out. J Am Med Inform Assoc 2009;16(3):271-73. doi:10.1197/jamia.M3183

5. Lyytinen K, King JL. Standard making: A critical research frontier for information systems research. MIS Quarterly 2006;30:405-11

6. The standardisation problem – an economic analysis of standards in information systems. Proceedings of the 1st IEEE Conference on Standardization and Innovation in Information Technology (SIIT ’99), 1999.

7. Weitzel T, Beimborn D, Konig W. A unified economic model of standard diffusion: the impact of standardisation cost, network effects and network topology. MIS Quarterly 2006;30:489-514

8. Coiera E. When conversation is better than computation. Journal of the American Medical Informatics Association 2000;7(3):277-86

9. Coiera E. Mediated agent interaction. In: Quaglini S, Barahona P, Andreassen S, eds. 8th Conference on Artificial Intelligence in Medicine. Berlin: Springer Lecture Notes in Artificial Intelligence No. 2101, 2001:1-15.

10. Coiera E. Interaction design theory. International Journal of Medical Informatics 2003;69:205-22

 

Bending the eHealth benefits curve

June 8, 2013

Wise heads no longer look for savings in the health system. We no longer expect our new technologies, re-organisations, and programs to find a penny. The idea that money can somehow be ‘released’ through change, to then be reapplied elsewhere, is gone. Healthcare has so much pent up demand, so many unmet needs, that all our improvements can do is allow more of those needs to be met. Never comes the day that we find ourselves idle, our resources available for redeployment elsewhere.

That is why the new language in health is all about “bending the cost curve” – the idea that the very best that innovation can do is to slow the growth in total system costs. No one who is informed expects you to save money anymore, just not to see as much relentless growth in the bills.

Foremost amongst the tools for bending the cost curve sits information technology. The benefits of automation, and of better-informed decision making, are to make current processes both more efficient (so we can do more with the same) and safer (so we don’t pay for as many costly mistakes).

There is a problem, however, and it is a discussion still only at the fringes. At least at scale, health IT is not delivering the benefits we expected. A recent report on the realised (as opposed to predicted) benefits of the hugely expensive English National Programme for IT (NPfIT) shows that the whole effort might at best break even, and that in some parts of the program the realised benefit is as low as 2% of that predicted[1]. England is not especially bad as an exemplar country; it is just especially honest. E-health, it seems, is much harder than we thought, at least at nation scale[2].

So, what is behind this apparent poor performance? The first explanation is simple and straightforward. We are in uncharted territory. No one has ever done anything like this before, so there are no manuals on how to build nation-scale e-health systems. Worse, every country is different, with different populations, different burdens of disease, different economies, political imperatives and health delivery systems. So, it turns out that every national program for e-health is an uncontrolled, n-of-1 case study. Sure, countries can talk to each other, share experiences and intelligence, but local context is all in the delivery.

Next, there is clearly a problem with how potential benefits are calculated. Healthcare is a complex system, and it is a brave individual who uses linear extrapolation to come up with the numbers for the expected benefits of an e-health program. Just because there is a 2% error rate in a process, and you can show your automation will detect it, does not mean that you can claim all of that as your benefit. The automation may never be used (clinicians are like that – they are busy), or will be ignored (clinicians are also like that – sometimes they do know better, sometimes they don’t). Or the current system would have detected or remedied the error in other ways further downstream in the process.
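To see how quickly a claimed benefit can evaporate, here is a hypothetical worked example; every figure is an assumption invented for illustration, not a number from the NAO report or any real program.

```python
# Hypothetical benefits arithmetic for an error-detection system.
# All inputs are invented assumptions used only to illustrate the argument.

orders_per_year = 100_000
error_rate = 0.02        # 2% of orders contain an error
cost_per_error = 500     # assumed average downstream cost of an uncaught error

# The benefit as typically claimed: every error caught, every catch banked.
claimed_benefit = orders_per_year * error_rate * cost_per_error

use_rate = 0.7           # the system only helps when clinicians actually use it
acted_on_rate = 0.5      # many alerts are overridden or ignored
already_caught = 0.6     # share of errors the existing process would catch anyway

realised_benefit = claimed_benefit * use_rate * acted_on_rate * (1 - already_caught)

print(f"claimed benefit:  ${claimed_benefit:,.0f} per year")
print(f"realised benefit: ${realised_benefit:,.0f} per year "
      f"({realised_benefit / claimed_benefit:.0%} of the claim)")
```

Under these made-up but not implausible assumptions, only 14% of the headline benefit is ever realised.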

More importantly, the costs and benefits of information are subject to network effects[3]. The marginal value of buying a fax machine was always dependent on how many other people owned a fax machine, and the same is true today for owning a Facebook account, or uploading a shared health summary onto a national system. The likelihood that the vital information your doctor uploaded onto an information system is actually seen and affects your care depends critically on how many other doctors have done the same.
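A rough way to picture that network effect: if a fraction p of a patient’s other clinicians are connected to the system, the chance that an uploaded summary is seen by at least one of the k clinicians involved downstream is 1 - (1 - p)^k. The sketch below uses assumed values of p and k purely for illustration.

```python
# Illustrative network-effect calculation for a shared health summary.
# adoption_fraction (p) and downstream_clinicians (k) are assumed values.

def chance_summary_is_seen(adoption_fraction: float, downstream_clinicians: int) -> float:
    """Probability that at least one downstream clinician is on the system and sees the summary."""
    return 1 - (1 - adoption_fraction) ** downstream_clinicians

for p in (0.05, 0.25, 0.50, 0.90):
    seen = chance_summary_is_seen(p, downstream_clinicians=4)
    print(f"adoption {p:.0%}: chance the summary is seen = {seen:.0%}")
```

At 5% adoption the summary is seen less than one time in five; at 50% adoption it is almost always seen, which is why the value of each upload depends so heavily on everyone else’s behaviour.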

Another reason that benefits are not being realised is simply that the systems being built are the wrong ones. They solve problems no one is asking to be solved, or they build highways no one especially wants to travel down[2]. Or sometimes, they just don’t work – there comes a point when large-scale IT programs that are much delayed, always asking for more resource, always promising that success is just around the corner, need to be called for what they are. Never confuse the means and the ends, because all you end up with is means without end.

Perhaps it is time to step back and talk, not about bending the cost curve down, but about bending the benefits curve up. We should not be looking just for where we can optimise; we should also be looking for where we have the best chance of succeeding.

Which clinical tasks are best suited to automation?[4] It’s a simple question and one we never seem to ask. There is an assumption that, because information technology is a universal tool, it can be universally applied. The truth is you can throw an awful lot of money at a poorly specified problem and get nothing back. Equally, you can spend relatively small amounts of money in the right part of the problem space and reap great rewards. There is an unchallenged myth that large-scale national infrastructure projects will always release large-scale benefits everywhere – like fluoridation of water or better urban sanitation projects. That increasingly seems not to be the case.

What is the alternative, however? If we are to focus on solving specific clinical tasks rather than building central infrastructure, are we not stuck? There is so much local variation in the way things are done that imposing standard ways of working will not get very far either. There are strong hints coming from the world of consumer systems like the smartphone. For example, the reason that ‘apps’ seem to be such a successful idea is not that they are computer programs (we’ve had those for a while) but that they are cheap, disposable, substitutable, and bespoke. Information, and what we need it for, how we use it, and how we access it, is a very local affair, and that is not about to change. Health information for now is also most likely to be captured on the local systems of your hospital or GP. The logic of duplicating all or part of that local information, and shipping it to a central store, seems not to make sense either technically or financially.

So, what might once have been a radical idea – that we need to architect health IT like an app store[5] – is perhaps now not so radical. There is more than a grain of truth to the proposition. We have built a world of interconnection, of personalisation, and we should embrace it. We also have the good fortune that the information technology industry has already pioneered the technologies and business models that make much of this new world possible.

Is it now time to move on, to write off sunk costs, and say good-bye to old business models and technology providers? The centralised, inflexible ‘old iron’ model of automation that has dominated e-health for a generation is probably on its last legs.

References

1. National Audit Office. Review of the final benefits statement for programmes previously managed under the National Programme for IT in the NHS, 2013. http://www.nao.org.uk/wp-content/uploads/2013/06/10171-001_NPfiT_Review.pdf

2. Coiera E. Why e-health is so hard. The Medical journal of Australia 2013;198(4):178

3. Coiera E. Information economics and the Internet. Journal of the American Medical Informatics Association 2000;7:215-21

4. Sintchenko V, Coiera E. Which clinical decisions benefit from automation? A task complexity approach. Int J Med Inform 2003;70:309-16

5. Mandl KD, Kohane IS. No small change for the health information economy. New England Journal of Medicine 2009;360(13):1278-81

© Enrico Coiera 2013
