
Being human

Ben Hoban is a GP in Exeter.

You have to know quite a bit to be a doctor, but it’s remarkable how easy it is to feel stupid. Guidance changes constantly, based on the latest research, or more often on economic or organisational grounds, and any patient with an internet connection and some time to kill in the waiting room can self-diagnose half a dozen conditions you’ve never even heard of. Wouldn’t it be nice to be able to download the latest updates in your sleep and go to work knowing that you were fully NHS-compliant, and without having to dodge any of those awkward questions about Segawa Syndrome? It certainly feels as if you need to be a bit of a machine to keep up sometimes.

The steady flow of extra work from other parts of the system often reinforces this impression, as lists of GP to-do tasks land on our desks, compiled by someone at another desk juggling their own lists, who either thinks these jobs are necessary but isn’t in a position to do them, or is just passing them on from someone else. It can feel at times as if we’re stuck inside a vast machine, cogs turned by other cogs, with little real agency; it’s just a bit dehumanising.


At the same time as we’re trying to make people work like machines, we’re also busy trying to make machines more like people. Alan Turing famously proposed, as a test of Artificial Intelligence, a machine’s ability to convince a person that they were talking to another person. This assumes that people think and communicate in typically “human” ways, which machines can learn to imitate, but what if machines and humans are simply converging as each tries to imitate the other? Have you ever struggled to complete a CAPTCHA online to prove that you’re a real person?

Underlying much of our ambivalence about AI is the suspicion that humanity is simply a function of intelligence, and that intelligence is the same as whatever computers have that keeps on getting bigger, faster or generally more impressive: at some point we will be left behind. We can accept the idea that a computer is made from microchips containing millions of tiny switches that allow it to store information, but I suspect that for most of us, it’s unclear how this allows us to write up our notes, send e-mails or watch videos of kittens on the internet. We default to the analogy that computers are like brains, and it’s not a big jump to start giving them names and talking to them as if they were people. Alexa, think for me.

Still, few would go so far as to say that intelligence is all it takes to be human. Consider the evil genius character in so many stories, whose humanity seems diminished by an intellect lacking in other human qualities to balance it: computational power alone is not enough, and yet we sometimes hanker after it as if it were. It is not the sentient robots from which we need to be protected, but our own thinking that makes them seem desirable, the fever-dream of perfect solutions implemented by interchangeable drones.

Where does this leave us then, as doctors and as people? We are a mixed bag, fallible and inconsistent, and our fascination with AI has something of the evil genius about it, as if by force of will we could remake humanity without its flaws. In fact, we have already gone down this path by basing our healthcare system on knowledge, standardisation and professionalism at the expense of more meaningful personal interactions. This means in theory that we will always get the best treatment for our condition, regardless of who provides it. In practice, however, such a system necessarily privileges the biomedical understanding of ill health and is too intolerant of ambiguity, giving the message that only a person who has been medically investigated and optimised can be said to be healthy. The result is disempowered patients and overloaded doctors. It is a system in which machines could care perfectly for other machines, but one that creates a hostile environment for people on both sides of the desk.

When we feel helpless or overwhelmed by all that we face at work, it is not because we lack the brains to come up with a solution. It is rather that we are being human in a context in which humanity is routinely undervalued, and sometimes there are no solutions. We all need a sense of agency, an understanding that what we’re doing is meaningful in a way that goes beyond merely fixing things, and this comes not from perfect knowledge, but from relationships in which both sides extend goodwill to the other and reach an understanding of how to proceed in the face of uncertainty. Our strength lies not just in our intelligence, but also in our other human qualities, and our patients need both. Trying to keep up with the machines is a race we can never win, nor is it one we need enter. Let’s keep being human.

 

Featured Photo by Aideal Hwa on Unsplash
