Motivated Reasoning Abounds: Deconstructing Marc Andreessen's Essay About AI
<Too tired for witty subtitle; pretend I came up with one>
Introduction
The essay “Why AI Will Save the World” by Marc Andreessen is probably the best example I have ever seen of motivated reasoning. It’s a perfect demonstration of how somebody who is otherwise perceived as smart can go about Frankenstein-ing nonsensical arguments together and thinking that they’ve actually proven their point.
I’m not even saying that I disagree with his ultimate conclusion. Whether or not AI will turn out to be the saviour of humanity is still up in the air.
That being said, Andreessen’s arguments show that he cannot even begin to engage with basic criticisms of his position.
Let’s take the essay step by step; for the sake of comparison, I have given my post the same structure as the original essay.
The Baptists and Bootleggers of AI
Andreessen starts off by explaining that people who have a negative view of AI generally fall into one of two categories:
“Baptists” who genuinely believe it will be harmful
“Bootleggers”, i.e. grifters and fearmongers.
He has this to say about their motivations:
A cynic would suggest that some of the apparent Baptists are also Bootleggers – specifically the ones paid to attack AI by their universities, think tanks, activist groups, and media outlets. If you are paid a salary or receive grants to foster AI panic…you are probably a Bootlegger.
Sure. I agree that people will retroactively formulate a worldview based on their personal incentives, but it’s ironic that he doesn’t extend the same line of reasoning to himself.
Further, he has this to say about what happens when the Baptists get their way:
Congress passed the Dodd-Frank Act of 2010, which was marketed as satisfying the Baptists’ goal, but in reality was coopted by the Bootleggers – the big banks. The result is that the same banks that were “too big to fail” in 2008 are much, much larger now.
This actually undermines his central thesis. The quote above is an example of the unintended consequences that arise when you make changes to society, whether regulatory or technological.
Wouldn’t it be perfectly reasonable to assume that the deployment of AI will also have its own landscape of unintended consequences, many of which could have the exact opposite effect of what was initially intended? If somebody makes such a postulation, why is it reasonable to brand them a “Baptist” by default?
Will AI kill us all?
Andreessen admits here that many people who criticize AI actually have technical credentials in the field.
Now, obviously, there are true believers in killer AI... Some of these true believers are even actual innovators of the technology.
Again, this is another delicious example of irony, considering Andreessen goes on every podcast imaginable giving his opinion on AI because he presumably believes he’s an authority on the subject.
Look, I’m not saying that smart people can’t be wrong. I’m just saying that smart people are smart. If I get cancer, I would trust my GP more than some random person on the street. Moreover, I would trust a specialized oncologist more than I would trust the GP. Even further, I would trust the world’s leading oncologist over any regular oncologist.
Sure, people can abuse their authority — but when it comes to increasingly complex subjects, only other authorities will have an idea of when this is happening.
Contrary to what many heterodox sense-makers would have you believe, it’s impossible to operate in this society thinking about literally everything from first principles.
Moving on, Andreessen goes on to explain that AI won’t kill us because it doesn’t have those sorts of intentions:
In short, AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive. And AI is a machine – is not going to come alive any more than your toaster will.
Again, this is a rhetorical bait and switch. The question wasn’t “Will AI want to kill us?” The question is whether AI has the capacity to kill us.
Sure, AI does not have goals in the human sense, but the neural networks which comprise it are certainly trained to minimize a loss function. To say that AI doesn’t drive towards specific outcomes and away from others is disingenuous.
Saying that AI doesn’t have goals because it’s just math and code is like saying that humans don’t have goals because they’re just synapses in our brains firing off in a coordinated sequence.
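If it helps, here’s a toy sketch in plain Python (every number in it is made up for illustration, and this is nothing like how a real model is trained) of how optimization drives a system toward a specific outcome without the system “wanting” anything:

```python
# Toy illustration: a single parameter w gets nudged toward whatever value
# minimizes a loss function. Nothing here "wants" anything, yet the process
# reliably drives toward a specific outcome.

TARGET = 3.0  # hypothetical value the training signal rewards

def loss(w):
    # Squared distance from the target: lower is "better" by definition.
    return (w - TARGET) ** 2

def gradient(w):
    # Derivative of the loss with respect to w.
    return 2 * (w - TARGET)

w = 0.0  # arbitrary starting parameter
for _ in range(100):
    w -= 0.1 * gradient(w)  # gradient descent: step toward lower loss

print(round(w, 4))  # ~3.0: no desire involved, but the outcome is reached anyway
```

Scale that loop up by a few billion parameters and you have the crux of the concern: relentless pressure toward an objective, no desire required.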
Another quote:
And the reality, which is obvious to everyone in the Bay Area but probably not outside of it, is that “AI risk” has developed into a cult, which has suddenly emerged into the daylight of global press attention and the public conversation.
Again, he doesn’t even try to address the criticism when it’s levelled in the opposite direction. Many people would say that the culture of Silicon Valley has developed into something of a cult — and that he is square in the centre of it. Or did microdosing LSD and having a ping pong table in the middle of the office emerge from nowhere?
Will AI ruin our society?
Andreessen has this to say regarding any sort of governance around AI:
On the other hand, the slippery slope is not a fallacy, it’s an inevitability.
… the thought police are breathtakingly arrogant and presumptuous – and often outright criminal, at least in the US – and in fact are seeking to become a new kind of fused government-corporate-academic authoritarian speech dictatorship ripped straight from the pages of George Orwell’s 1984.
I mean, this is just… *chef’s kiss*
Translation: “Basically if you impose any sort of restrictions or governance around AI then you are a fascist who is trying to bring 1984 into reality.”
Moreover, he contradicts himself in literally the previous paragraph:
There is no absolutist free speech position. First, every country, including the United States, makes at least some content illegal. Second, there are certain kinds of content, like child pornography and incitements to real world violence, that are nearly universally agreed to be off limits – legal or not – by virtually every society. So any technological platform that facilitates or generates content – speech – is going to have some restrictions.
Will AI take our jobs?
Here Andreessen gives the lazy argument which so many technological evangelists have put forward before him: that, in the long run, technology has always created more jobs than it has destroyed.
Setting aside the question of the increasing prevalence of bullshit jobs, as well as the stagnant (and in many cases declining) labour force participation rate, he once again fails to acknowledge why AI is categorically different from the technologies that came before it, strawmanning the concern.
Specifically, AI is autopoietic, meaning it has the ability to create a better version of itself. Steel factories cannot create more steel factories; steam engines cannot create more steam engines. AI, however, very much has the capability of creating better AI.
Beyond this, he doesn’t seem to acknowledge that artificial intelligence and human beings operate on different timescales.
Again, I am entirely agnostic as to whether AI will ultimately destroy jobs or create them. But it’s not “Baptist doomspeak” to say that it takes time for a human to retrain for a new job, a psychologically taxing process, never mind the finances. It would be even more stressful to go through a college degree only to realize that the job you’ve been retraining for is already obsolete, because a better version of the AI got there faster.
It’s clear that, whichever way future society ends up looking, our current society depends on the majority of prime-age men and women working, not just from an economic perspective but a cultural one as well.
As an aside, here’s an interesting passage from the essay I wanted to highlight:
For, as Milton Friedman observed, “Human wants and needs are endless” – we always want more than we have. A technology-infused market economy is the way we get closer to delivering everything everyone could conceivably want, but never all the way there.
Again, Andreessen fails to turn the mirror back on himself — “Maybe I’m the one who’s in a cult?”
Because the quote above is a perfect example of the dangers of the Silicon Valley mindset. He simply takes it for granted that we all have insatiable desires. The idea that there are people out there in the world who are happy with their lot (or at least the basics) is entirely foreign to him.
In his eyes, we are always desperate for more, more, more — and the only way to get more, more, more is to let Silicon Valley develop AI unhindered.
Will AI lead to crippling inequality?
Andreessen uses the good old trickle-down economics argument here:
Would Elon be even richer if he only sold cars to rich people today? No. Would he be even richer than that if he only made cars for himself? Of course not. No, he maximizes his own profit by selling to the largest possible market, the world.
Sure, this is one way to get rich, and I agree with him that technology and innovation generally tend to suppress prices, making things more affordable for the working class.
But the idea that he’s made a compelling argument is ridiculous, considering that Bernard Arnault, owner of LVMH and currently the richest person in the world, uses the luxury-brand strategy: catering to a small percentage of the population while explicitly excluding everybody else.
Anybody who’s taken a basic marketing course knows that you can increase your overall revenues using (broadly speaking) one of two strategies (a toy numerical sketch follows below):
Flood the market, and rack up the sales through quantity
Or otherwise make the product scarce, and make a ton of profit for each incremental unit sold
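Here’s that sketch in plain Python, with numbers invented purely for illustration; it shows only that a scarce, high-margin strategy can land on the same profit as a high-volume one:

```python
# Made-up numbers: two opposite strategies landing on the same profit.

def profit(units_sold, margin_per_unit):
    return units_sold * margin_per_unit

volume_play = profit(units_sold=1_000_000, margin_per_unit=10)  # flood the market
luxury_play = profit(units_sold=1_000, margin_per_unit=10_000)  # scarce and exclusive

print(volume_play, luxury_play)  # 10000000 10000000: same profit, opposite playbooks
```

Which playbook an AI company ends up on is an open question; my point is only that “sell to everyone” is not the sole path to maximizing profit.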
“But luxury handbags are hardly the same as technology.”
Sure, but we’re talking about the market dynamics here, and technology is not immune. For example, the top 0.01% of bitcoin holders already have about 30% of the total wealth. There’s nothing to suggest that AI (and the companies that wrap around it) will be any different — or at least, it’s not “Baptism” to suggest that it might be subject to the same dynamic.
There will likely be a winner, and that winner will likely take all. Andreessen himself implicitly understands this, considering how much he’s pushing for the development of AI as fast as possible.
Will AI lead to people doing bad things?
Here Andreessen actually gives a slightly more intellectually honest version of his position, explicitly stating that “The cat is out of the bag.”
But even here, his argument is contradictory. Remember, in the previous section he explains how any sort of governance is 1984, but in this section he tries to explain bad actors away by saying that we will just use AI, alongside existing regulation, to empower good actors.
So, which is it? Is regulation of any kind 1984? Or is using AI for regulation good?
The contradiction can be explained by his own personal incentives. “If you let those other people create regulations, they will only slow things down and make everything worse. But if you allow me to use AI to create regulations, then everything will be great.”
Further, the arguments in this section become even more ironic when contrasted against the next section…
The actual risk of not pursuing AI
In this section he ironically recognizes that people do have the ability to deploy AI in net-negative ways; in this case, China. Specifically, he details how they are going to create a society very much like… you guessed it… 1984.
Again, nobody is being a “Baptist” or “Bootlegger” if they don’t trust Silicon Valley to steward AI any better than the Chinese Communist Party. I agree that people in Silicon Valley (probably) aren’t as malicious — but you don’t need to be malicious in order to create a disastrous outcome.
We’ve already seen this play out before with social media companies. When Instagram was created, its founders had no idea that it was going to be a major source of depression for teenage girls; when Facebook first launched, its founders had no idea that it would play a crucial role in elections.
But that’s the whole point. Sometimes tools can have unintended consequences; and the more powerful the tool, the more unpredictable and potentially disastrous the consequences become.
Again, it’s not doomsaying to point this out when the Silicon Valley elite have spent the last decade and a half preaching the mantra of “Move fast and break things.” AI is one of those things where we simply cannot afford that mentality.
Conclusion
I honestly have no idea how AI is going to play out. Whether artificial intelligence generalizes in five years, 50 years, or 500 years, I don’t have the slightest intuition.
But this post doesn’t really have anything to do with AI. It has to do with motivated reasoning, and how people like Andreessen will twist and contort arguments and conclusions to fit their preconceived notions.
But don’t take it from me, because I’m just one authority. But wait, actually I’m not… So maybe do take it from me.
And if you think I’m a dumb person, remember Marc Andreessen himself said that the smart people work for the dumb people — so I guess that puts me in charge.
Thanks for reading; as a reward, here’s a video I found when trying to look up arm triangle chokes for Jiu Jitsu: