Richard Hamming on legal challenges computers face in medicine

The mathematician Richard Hamming on the legal challenges that delay a broader deployment of computers to medical diagnostics:

One major trouble is, among others, the legal problem. With human doctors so long as they show “due prudence” (in the legal language), then if they make a mistake the law forgives them — they are after all only human (to err is human).

But with a machine error whom do you sue? The machine? The programmer? The experts who were used to get the rules? Those who formulated the rules in more detail? Those who organized them into some order? Or those who programmed these rules?

With a machine you can prove by detailed analysis of the program, as you cannot prove with the human doctor, that there was a mistake, a wrong diagnosis. Hence my prediction is you will find a lot of computer-assisted diagnosis made by doctors, but for a long time there will be a human doctor at the end between you and the machine.

We will slowly get personal programs which will let you know a lot more about how to diagnose yourself but there will be legal troubles with such programs. For example, I doubt you will have the authority to prescribe the needed drugs without a human doctor to sign the order.

You, perhaps, have already noted all the computer programs you buy explicitly absolve the sellers from any, and I mean any responsibility for the product they sell! Often the legal problems of new applications are the main difficulty, not the engineering!

Building on Hamming’s observations, I would speculate that much of the conversation about AI paradoxes (e.g. the trolley problem applied to self-driving cars) also stems from similar challenges of accountability.

We are used to treating humans as agents who can be held accountable for the consequences of their actions (except for, say, children and the elderly with diminished mental capacity).

Our present model of accountability rests on two premises:

  1. For all practical matters, humans have free will
  2. Humans have things to lose — we “suffer” if money, freedom, or reputation are taken from us

The question then becomes: how do we translate these two premises to a world where machines are ubiquitous and ever smarter? Will we wait until they seem to have free will and things to lose?


Category  The Future
Tags  Richard Hamming · Artificial Intelligence · Medicine · Moral & Ethics
Source  The Art of Doing Science and Engineering: Learning to Learn