When I was 18, my vision of the future was:
- Go to a good university and have valuable life experiences (a hybrid between an MGMT music video and the images on the median college brochure)
- Graduate with a good GPA, solid internships, and a grasp on what I wanted to do in my career
- Work really hard at a tech company that's making cool technology
- Fall in love, date for a few years, get married, start a family
- (About 50 years unaccounted for here)
- Leave the world a better place than I found it
I'm now in my early twenties, and this frankly still seems like a great life to me. In the past several years, though, I've become less and less certain that I'll be able to live it in the way I envisioned, or live it at all. This is mostly due to my progressively shorter AGI timelines.
I don't think we have AGI yet. I do think we're close, however, and without the hindsight of the last few years, even the capabilities of today's publicly accessible frontier models would have been dismissed as fictions invented in the mind of a sleep-deprived grad student.
I think we're extremely lucky to have this technology! For one, I've been using Claude every day since sonnet-3.5 was released, for absolutely everything: from code to recipes to life advice. My life would be worse without it.
I also think there are enormous risks, existential and otherwise, as capabilities continue to improve, and that avoiding them almost certainly requires global coordination at an ~unprecedented [0] level. I don't know whether we'll get that coordination, but I am pessimistic by default about our species' collective ability to coordinate and ensure good outcomes in the face of so many backward incentives [1].
So what are you even supposed to do in a situation like this? I don't know, but I do have some early thoughts.
Do something about it
You could focus on helping. Even if global coordination is hard, I think there are concrete ways for individuals to steer our world into better timelines. For instance, despite my default pessimism, I think highly driven people working in AI governance can convince policymakers and world leaders to cooperate and make good decisions. I also think technical AI safety is extremely important, and I'm personally transitioning aggressively into working in that field.
Become more resilient
You could do your best to survive and thrive in whatever lies ahead by becoming more resilient. I broadly agree with this thread and this post on LessWrong, but here are a few takes of my own:
- I'm not sure whether money will be important in a post-AGI world, but if it is, then it's probably very important that you have some. That said, I'm not confident property rights will continue to be universally respected in such a world (again, see [1]), and I haven't yet thought of robust strategies to overcome this. I also don't know what the right balance is between maximizing profit before the value of labor collapses (if that happens at all) and maximizing the impact of working on AI safety.
- I think some of the existing takes on working on your spirituality are correct, though I would rephrase spirituality here as something like memetic defenses, or a strong attachment to reality. I anticipate this would be helpful if the future becomes very weird in unpredictable ways, which could easily be psychologically destabilizing if you're unprepared.
- I think maintaining and growing your relationships (co-workers, friends, family, etc.) is underrated. Related to the previous point, I think psychological safety and resilience will be extremely important, and social support is a big component of both. It's also much easier to navigate uncertainty when you're working together rather than alone.
Live life anyway
There are big error bars around all of these predictions. However unlikely I think it is, there is a timeline where nothing particularly bad happens, or where the scope of harm is limited. In such a timeline, it would be bad if you spent years overly stressed about the future, or made decisions that left you worse off (e.g., committing crimes or taking out big loans).
This would also be bad in a timeline where the worst happens: it would mean you spent your last few good years not truly enjoying your life and not optimizing for virtuous things like the happiness of your loved ones.
Given this, I think it's still worthwhile to interleave the life you envisioned anyway. Doing so takes time and energy away from more urgent things, but it's empathic irrationalities like this that I think make humanity worth preserving.
What now?
I think we're still in a good position to make the future go well. I'll be doing my best to help make that happen. In the meantime, I'll probably go and get married.
[0] I don't think the NPT or other post-hoc global treaties are good existence proofs for this.
[1] This is partially informed by my experiences as an Indonesian, though regulations make it unwise for me to discuss them online.