
If hyped tech is divisive, AGI is singularly so. Why spend years on a hard problem if a superintelligence could solve it in a day? If we’re going to make an AGI anytime soon, you wouldn’t just be missing out on a gold rush. Your life’s work would be made pointless.
Maybe you should quit your hard problem and work on AGI instead. This might end up being the most effective way to solve the problem, anyway. First make a god, then have the god solve your problem. Simple, right?
Several young people I know have followed this line of reasoning recently. Their logic is solid, given that their AGI timelines are short. Unfortunately, this is not a good idea, because…
This is a deferred life plan
“First I’ll do AGI and then I’ll use it to work on my core interest” is a mutation of the common meme of a deferred life plan. Here’s what the AGI prophet has to say about deferred life plans:
[Some] people say some version of the following sentence:
“My life’s work is to build rockets, so what I’m going to do is make $100M in the next 4 years trading cryptocurrency with my crypto hedge fund… and then I’m going to build rockets.”
And they never do either. I believe that if these people would just pick one thing or the other, they would succeed at either. But the problem with this… one of the many problems with the deferred life plan, is that everyone can kind of tell that’s what you’re on, and that you’re not that committed to either.
And it just never works. I mean it must work sometimes, but I’ve seen it fail a lot… I would say, the deferred life plan, empirically, usually does not work, and if what you want to work on is an ambitious project, and you’re in Silicon Valley, and you’re competent, you can very likely figure out a way to just work on the problem you actually care about.
Here Sam is talking about deferred life plans that involve making a lot of money. But why should the pitfalls not apply to changing your area of focus? Either way, you’re bound to run into the following problems:
You’re not working on the thing you really care about, and it follows that it’ll be hard for you to commit.
Others can see that you’re not committed, and won’t want to help you.
You can work on the thing you actually care about without AGI.
Empirically, deferred life plans involving money sound good on paper, but they don’t work. AGI deferred life plans probably won’t either.
It’s always later than you think.
Timelines are currently indefinite
There seem to be a large number of compounding factors in predicting AI timelines:
Moore’s law might be dead, or not
The scaling hypothesis might be true, or not
We might need a small-scale breakthrough in algorithms (DNNs → transformers), or not
We might need to completely change our learning algorithms (whole-brain emulation, etc.) or not
LLMs might be showing signs of intelligence right now, or not
If there were a definite timeline, it might be easy to reason about whether or not you should jump in. But imagine that you spent 20 years working on this technology, and it didn’t end up being applicable to your problem of interest. That would suck!
Life is too short to not work on what interests you.
Does AGI really make you obsolete?
There are still good reasons to stick with your pet problem even if we make an AGI. I think that we broadly underestimate how hard it is to integrate new technologies with existing processes. This is why self-driving cars are 5-10 years late, and why we’re not using drones or VTOLs right now.
If GPT-4 ends up being “intelligent” — how is that going to impact the billions of people that move atoms around for a living? There’s good reason to believe that takeoff will be slow in many areas of interest.
Every deployment is going to be a rat’s nest of complexity. And each use case is going to require fine-tuning. If you want an AGI to design your spaceship, it’s probably best to be a company that is already designing and making spaceships. You’ll be in the perfect spot to use AGI when it starts to exist.
(Of course, if you believe in strong AI very soon, the calculus changes. But think quite hard about whether you believe that or not; timelines are indefinite!)
Integration challenges are underestimated.
You may be copping out
Say you’re working on a hard problem, like neurotech. Hard problems are hard, so you’re not making much progress. It may be frustrating not to make progress, or to be working on a roadmap that seems to be 10 or 15 years away from delivering results.
But problems are soluble. So far humans have proved to be universal machines for solving hard problems. Progress will surely be made over the long run. Arguing that “an AGI will solve it” very well may be a cope. Now instead of solving one hard problem, you have to solve two! First, you have to create an intelligent agent. Then, you have to teach it neuroscience!
AGI may be a “grass is greener on the other side” sort of deal. AGI progress might look promising, so you may decide that you want to work on it so you can eventually use it to solve your problem. But in that case, you should probably continue to work on your problem, and adopt AGI as a tool when it’s ready to be used!
Problems are soluble.
Good reasons to work on AI
The progress we are making in AI is staggering today. You may become interested in AI for its own sake. This would be a great reason to start working on AI. Teaching sand and rocks to think is a truly magnificent problem. I pray you may minimize all the cross-entropy losses.
You might also come to the conclusion that AI poses an existential risk to society, and that may compel you to work on alignment or safety. If you get interested in this work for its own sake, that’s great! I pray you may create the most robust visualizations, or whatever it is alignment researchers do.
Generative AI will create a lot of economic value. If you’re working on a generative AI startup, that’s great too! I pray you may reach all the PMFs.
But if your core interest lies elsewhere, beware of the deferred life plan. It is not a path that leads to happiness. Life is too short to not work on what interests you.
deferred life plan worked for me ¯\_(ツ)_/¯