Nuggets


Richard Hamming on legal challenges computers face

The always-sharp Richard Hamming on the legal challenges delaying a broader deployment of computers to medical diagnostics:

One major trouble is, among others, the legal problem. With human doctors so long as they show “due prudence” (in the legal language), then if they make a mistake the law forgives them – they are after all only human (to err is human).

But with a machine error whom do you sue? The machine? The programmer? The experts who were used to get the rules? Those who formulated the rules in more detail? Those who organized them into some order? Or those who programmed these rules?

With a machine you can prove by detailed analysis of the program, as you cannot prove with the human doctor, that there was a mistake, a wrong diagnosis. Hence my prediction is you will find a lot of computer-assisted diagnosis made by doctors, but for a long time there will be a human doctor at the end between you and the machine.

We will slowly get personal programs which will let you know a lot more about how to diagnose yourself but there will be legal troubles with such programs. For example, I doubt you will have the authority to prescribe the needed drugs without a human doctor to sign the order.

You, perhaps, have already noted all the computer programs you buy explicitly absolve the sellers from any, and I mean any responsibility for the product they sell! Often the legal problems of new applications are the main difficulty, not the engineering!

Building on Hamming’s insights, I would speculate that much of the conversation about AI paradoxes (e.g. the trolley problem applied to self-driving cars) also stems from challenges in accountability.

We are used to treating humans as agents who can be held accountable for the consequences of their actions (with exceptions such as children or the elderly with diminished mental capacity).

Our present model of accountability is based on two premises:

  1. For all practical matters, humans have free will
  2. Humans have things to lose — we “suffer” if money, freedom, or reputation is taken from us

The question then becomes: how do we translate these two premises to a world where machines are ubiquitous and ever smarter? Will we wait until they seem to have free will and things to lose?

In Future
Tagged with Richard Hamming · AI · Medicine · Ethics
via The Art of Doing Science and Engineering: Learning to Learn 📚

How Prague’s Charles Bridge was built

I have come across this amazing video via @Rainmaker1973:

This digital model was created for the project of the virtual exhibition “Prague at the time of Charles IV” and shows how the construction of the Charles Bridge took place in the 14th century.

Isn’t it amazing how ingenious humankind can be?

By the way, we take most construction and civil engineering for granted nowadays, but the wide range of technologies that people over millennia had to invent for us to get to where we are today never ceases to amaze me.

In Future
Tagged with Progress · Engineering
via @Rainmaker1973 🐦

Gwern on OpenAI’s bet in the scaling hypothesis

Gwern gives a good explanation of the bet OpenAI is making (and how it differs from that of competitors like DeepMind):

As far as I can tell, this is what is going on: they do not have any such thing, because Google Brain (GB) and DeepMind (DM) do not believe in the scaling hypothesis the way that Sutskever, Amodei and others at OpenAI (OA) do.

GB is entirely too practical and short-term focused to dabble in such esoteric & expensive speculation, although Quoc’s group (1, 2) occasionally surprises you. They’ll dabble in something like GShard, but mostly because they expect to be likely to be able to deploy it or something like it to production in Google Translate.

DM (particularly Hassabis, I’m not sure about Legg’s current views) believes that AGI will require effectively replicating the human brain module by module, and that while these modules will be extremely large and expensive by contemporary standards, they still need to be invented and finetuned piece by piece, with little risk or surprise until the final assembly. That is how you get DM contraptions like Agent57 which are throwing the kitchen sink at the wall to see what sticks, and why they place such emphasis on neuroscience as inspiration and cross-fertilization for reverse-engineering the brain. When someone seems to have come up with a scalable architecture for a problem, like AlphaZero or AlphaStar, they are willing to pour on the gas to make it scale, but otherwise, incremental refinement on ALE and then DMLab is the game plan. They have been biting off and chewing pieces of the brain for a decade, and it’ll probably take another decade or two of steady chewing if all goes well. Because they have locked up so much talent and have so much proprietary code and believe all of that is a major moat to any competitor trying to replicate the complicated brain, they are fairly easygoing. You will not see DM ‘bet the company’ on any moonshot; Google’s cashflow isn’t going anywhere, and slow and steady wins the race.

OA, lacking anything like DM’s long-term funding from Google or its enormous headcount, is making a startup-like bet that they know an important truth which is a secret: “the scaling hypothesis is true” and so simple DRL algorithms like PPO on top of large simple architectures like RNNs or Transformers can emerge and meta-learn their way to powerful capabilities, enabling further funding for still more compute & scaling, in a virtuous cycle. And if OA is wrong to trust in the God of Straight Lines On Graphs, well, they never could compete with DM directly using DM’s favored approach, and were always going to be an also-ran footnote.

While all of this hypothetically can be replicated relatively easily (never underestimate the amount of tweaking and special sauce it takes) by competitors if they wished (the necessary amounts of compute budgets are still trivial in terms of Big Science or other investments like AlphaGo or AlphaStar or Waymo, after all), said competitors lack the very most important thing, which no amount of money or GPUs can ever cure: the courage of their convictions. They are too hidebound and deeply philosophically wrong to ever admit fault and try to overtake OA until it’s too late. This might seem absurd, but look at the repeated criticism of OA every time they release a new example of the scaling hypothesis, from GPT-1 to Dactyl to OA5 to GPT-2 to iGPT to GPT-3… (When faced with the choice between having to admit all their fancy hard work is a dead-end, swallow the bitter lesson, and start budgeting tens of millions of compute, or instead writing a tweet explaining how, “actually, GPT-3 shows that scaling is a dead end and it’s just imitation intelligence” — most people will get busy on the tweet!)

What I’ll be watching for is whether orgs beyond ‘the usual suspects’ (MS ZeRO, Nvidia, Salesforce, Allen, DM/GB, Connor/EleutherAI, FAIR) start participating or if they continue to dismiss scaling.

The original article at LessWrong has some data points and estimates worth checking out as well.

In Future
Tagged with AI · OpenAI · Gwern · GPT-3 · Google Brain · DeepMind
via Are we in an AI overhang? – Comment 🌐