Quantifying
a physician's performance is like trying to catch a cloud with a butterfly net.
Some of the residents of Hyde, the town in Cheshire, England,
where the late Dr. Harold Shipman practiced family medicine, used to say, “He’s
a good doctor, but you don’t live long.” Indeed not: it is now believed that
Dr. Shipman, over a period lasting a quarter of a century, murdered 200 or more
of his elderly patients with injections of morphine or heroin.
If the preservation of life be not the definition of a good
doctor, what is? Here is the definition published in a recent edition of the New
England Journal of Medicine:
The habitual and judicious use of communication,
knowledge, technical skills, clinical reasoning, emotions, values,
and reflection in daily practice for the benefit of the individual and
the community being served.
Whatever one thinks of this definition, it is clear that it would
not make the goodness of doctors altogether easy to measure.
It does not follow from the unmeasurability of something, however,
that it does not exist or is unimportant: nor, unfortunately, that what is
measurable truly exists or is at all important. Nothing is easier to measure in
an activity as complex as medical practice than the trivial, and nothing is
easier to miss than the important.
The above definition of a good doctor appeared in an article on
the need for Obamacare to ensure that doctors provide value for money so that
they can be paid by result. This is a potential problem whenever there is a
financial intermediary between the doctor and the patient. From then on it is
not the patient who decides what he wants from a doctor but an insurance
company or, increasingly under Obamacare, the government.
But as the article points out, measuring a doctor’s performance is
very difficult. Most doctors perform a large number of tasks, only a tiny proportion
of which can be measured at any one time. Moreover, what is measured may not,
and often does not, reflect his performance as a whole. For example,
radiologists have been graded according to the exposure time of patients during
fluoroscopy, the taking of moving pictures under x-ray exposure. This is not
unimportant, of course, because x-rays cause burns and exposure to x-rays
increases the risks of developing cancer later; but fluoroscopy is only a small
part of a radiologist’s work. As the article points out, a radiologist’s
“primary role is to provide accurate and complete interpretations of imaging
studies.” Time of exposure of patients to x-rays under fluoroscopy – which may
vary with the patient as well as the radiologist – is not an adequate measure
of the radiologist’s overall competence.
Like must always be compared with like for any valid comparison to
be drawn, and this is difficult, time-consuming and expensive to do. Even if it
were not the case that measuring a doctor’s performance is like trying to catch
a cloud with a butterfly net, the gathering of information is not without cost,
both financial and psychological (a point the authors do not make). It is not
difficult to take up half or more of a doctor's time by gathering from him the information
necessary to prove that he is efficient in whatever time is left to
him. It reminds me of what Karl Popper once accused Wittgenstein of doing:
perpetually polishing spectacles but never actually looking through them.
He who pays the piper calls the tune (there is a very good reason
why this should be a cliché). Moreover, there is a tendency for measurement in
all modern systems to escape its ostensible purpose, to become an end in itself
as well as an employment opportunity for bureaucratic mediocrities. The process
seems as inevitable as ageing.
*****
Theodore Dalrymple, a physician, is a contributing editor of City Journal and the
Dietrich Weismann Fellow at the Manhattan Institute. His new book is
Second Opinion: A Doctor's Notes from the Inner City.