Pause Giant AI Experiments: An Open Letter

The letter has some flaws. Who determines “more powerful than GPT-4” and how? Is this just about LLMs?

Still, much of the AI community is well-intentioned but too focused on the long term. Huge medium-term risks need addressing too. We’ve all seen the consequences of out-of-control digital technology races…

I have signed :writing_hand:


This feels too close to ‘contraband software’ for me to agree with it. I really don’t like how a lot of this text focuses on large technology companies. Many of the proposed interventions are things that could only be reasonably implemented in a proprietary system. Will it be illegal to create free software AI?

I’m certainly concerned about a lot of things, but well, you know that proverb about the genie and the bottle…

I don’t know what the right answer is, but I seriously doubt any government will be able to legislate the dangers away, even given a 6-month holiday. I don’t think society will ever adapt to anyone having the ability to generate convincing images, videos, and audio clips of real people with fabricated context, or to copywriters generating thousands of words of perfectly grammatical word salad every minute.

This is an impractical solution for many reasons:

provenance and watermarking systems to help distinguish real from synthetic
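
To make that concrete, here is a minimal Python sketch of the kind of statistical watermark usually proposed, a “green-list” scheme loosely in the spirit of Kirchenbauer et al. (2023). Every name and function in it is my own illustration, not any real system’s API:

```python
import hashlib

# Toy sketch of a "green-list" statistical text watermark, loosely in the
# spirit of Kirchenbauer et al. (2023). Everything here is illustrative;
# it is not the API of any real detector.

def is_green(prev_token: str, token: str) -> bool:
    # Hash the (previous token, token) pair to pseudo-randomly assign it
    # to the "green" half of the vocabulary. A watermarking generator
    # would bias its sampling toward green continuations.
    digest = hashlib.sha256(f"{prev_token} {token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    # Fraction of adjacent token pairs that land in the green set.
    # Unwatermarked text should hover near 0.5; watermarked text
    # scores noticeably higher.
    tokens = text.split()
    if len(tokens) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

print(green_fraction("the quick brown fox jumps over the lazy dog"))
```

The catch is visible in that last function: any light paraphrase or translation re-rolls the token pairs and erases the signal, and nothing stops someone from generating text with a model that never embedded a watermark in the first place.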

As always, humanity is screwed and it’s our fault. Again.

Should we automate away all the jobs, including the fulfilling ones?

Yes. Job automation is a good thing. I firmly disagree that humans should be spending their lives performing jobs; they should be doing what they want to do instead. If you find your job fulfilling, wouldn’t it be better to have so much money you never had to worry about anything, and to keep doing that thing anyway? Perhaps not as a job, but as a hobby, or as a full-fledged business you can run better than you ever could before.

Of course, humans need money to do what they want, a lot of them need jobs to get it, and the number of jobs they can do decreases with more automation. But ultimately, I don’t think people want to see benshi (the live narrators of silent films, whose trade vanished with the talkies) anymore.

Humans are hired to perform jobs for a business so it can turn a profit. If there’s a way to turn a higher profit with better accuracy and similar quality, that’s a good thing for the business and its customers. The problem with a lot of businesses is their lack of automation: it makes them inefficient, slow, and hard to deal with, and tends to lead to business failure.

On the other hand, there are bad forms of automation, which is not the same as job automation being a bad thing. You need to use tools the right way, lest you injure yourself. You don’t hold knives by the blade, for example…


Relatedly, I received an email from Mozilla today about their new initiative, Mozilla.ai.

Announcement here: Introducing Mozilla.ai: Investing in trustworthy AI

This new wave of AI has generated excitement, but also significant apprehension. We aren’t just wondering What’s possible? and How can people benefit? We’re also wondering What could go wrong? and How can we address it? Two decades of social media, smartphones and their consequences have made us leery.

So, today we are announcing Mozilla.ai: A startup — and a community — that will build a trustworthy and independent open-source AI ecosystem. Mozilla will make an initial $30M investment in the company.

The text has some flaws. For example, who decides what counts as “more powerful than GPT-4”, and how? Is this just about LLMs? Still, that’s not a reason not to speak up.

I’m not, of course, expecting a pause, and I doubt many signatories expect one either. Indeed, the market was at 5% when I last looked: Will FLI's "Pause Giant AI Experiments: An Open Letter" result in a successful 6+ month pause of training powerful AIs? | Manifold Markets

There is no right answer, but not speaking up is a wrong answer, I think. Even Geoffrey Hinton spoke up with his concerns this week. We cannot expect a rapid, or good, response from governments. And some are pushing the accelerator.

Note that there were 5,720 signatories of the AI Principles open letter in 2017. Prominent names have diverged from that commitment, in my opinion. I’m sure they would argue otherwise.

Yes, this is interesting. We expect to have an announcement on the same topic in April.


I don’t think society has the capacity to do what the letter is asking, specifically because self-interest and profit will supersede any other reasoning.

For example, look at something like Remote Area Medical (YouTube) which provides free medical care to poor Americans. Tell me that we don’t already have profound risks to society and humanity. In terms of suffering and inequality, AI is nothing new.

I don’t think automation is bad. But a society that can’t meet people’s needs today won’t get any better when AI comes.

If it were up to me, I would take care of people whether AI existed or not.


Revisiting this today, I see Ars has written an article about this letter: Fearing “loss of control,” AI critics call for 6-month pause in AI development | Ars Technica

I think this is the most salient excerpt:

“The risks and harms have never been about ‘too powerful AI,’” Bender wrote in a tweet that mentions her paper, On the Dangers of Stochastic Parrots (2021). “Instead,” she continued, “They’re about concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem (through profligate use of energy resources).”

I’m certainly not an expert on AI (I don’t even consider myself well-informed), but I worry more about the people who control these services than about the services themselves. In light of this, I agree with implementing the AI governance systems the letter recommends, at least for models controlled by corporations like OpenAI.

I think other suggestions in this letter overreach, and the perennial problem with over-regulation is that it favors the big guys over the little guys. Again, I have no idea what the right answer is… but my gut tells me it’s not this.

And yes, measuring “AI strength” like some sort of power level does not seem feasible. Well, you could probably build an AI to determine that?


I think I might have re-evaluated some of my opinions on the need for regulation and restraints, and on the degree of them: 'He Would Still Be Here': Man Dies by Suicide After Talking with AI Chatbot, Widow Says

I don’t think, for instance, that experiments like this should be allowed to be orchestrated without consequence: Controversy erupts over non-consensual AI mental health experiment [Updated] | Ars Technica