How much should we value life?
Until recently, I've largely ignored the question of how long I expect to live, and similarly, how much value I place on my life. Why? For starters, thinking about mortality is scary, and I don't like to do things that are scary unless the benefits outweigh the cost.
So, do they? My answer has always been, "Nah. At least not in the short-to-mid term. I'll probably have to think hard about it at some point in the future though." Whatever conclusion I drew, high value on life or low, I didn't see how it would change my behavior right now, or how it would be actionable.
Recently I've been changing my mind. There have been various things I've been thinking about that all seem to hinge on this question of how much we should value life.
If you expect (as in expected value) to live another e.g. 50 years, and thus place a typical valuation of e.g. $10M on your life:
- You probably don't have to worry much about Covid and can roughly return to normal.
- Cryonics becomes a wacky idea that won't work and thus isn't worth paying for.
- Driving is well worth it. The price you pay for the risk of dying is something like $250/year.
But... if you expect (as in expected value) to live something like 10k years:
- Covid becomes too risky to justify returning to normal.
- Cryonics becomes an intriguing idea that is well worth paying for.
- Driving becomes something where the risk of dying costs you more like $25k/year, and thus probably is not worth doing.
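To make the driving arithmetic explicit: the expected annual cost of the fatality risk is just the yearly chance of dying in a crash times the value you place on your life, so it scales linearly with that valuation. Here's a minimal sketch; the roughly 1-in-40,000 annual risk is an assumed round number chosen to reproduce the figures above, not an official statistic.

```python
# Hedged sketch: expected yearly cost of driving's fatality risk.
# The risk number is an illustrative assumption, not a real statistic.
ANNUAL_DEATH_RISK = 2.5e-5  # assumed ~1-in-40,000 chance per year

def risk_cost(value_of_life: float) -> float:
    """Expected dollars lost per year to the chance of dying while driving."""
    return ANNUAL_DEATH_RISK * value_of_life

print(risk_cost(10_000_000))     # ≈ $250/year at a $10M valuation
print(risk_cost(1_000_000_000))  # ≈ $25k/year at a 100x higher valuation
```

Note that a 50-year vs 10k-year lifespan is a 200x jump, while the $250 → $25k figures only require the valuation to grow 100x; either way the conclusion flips on the order of magnitude, not the exact multiplier.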
What else would the implications be of expecting (I'll stop saying "as in expected value" moving forward) to live 10k years? I'm not sure. I'll think more about it another time. For now, Covid, cryonics and driving are more than enough to make this a question that I am interested in exploring.
Focusing on the parts that matter
There are a lot of parameters you could play around with when exploring these questions. Some are way more important than others though. For example, in looking at how risky Covid is, you could spend time exploring whether a QALY should be valued at $200k or $175k, but that won't really impact your final conclusion too much. It won't bring you from "I shouldn't go to that restaurant" to "I should go to that restaurant". On the other hand, moving from an expectation of a 100 year lifespan to a 1,000 year lifespan could very well end up changing your final conclusion, so I think that those are the types of questions we should focus on.
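To make that sensitivity point concrete, here's a toy calculation. Every input is an assumption invented for illustration: the infection and fatality probabilities are made up, and the QALY values are the ones from the paragraph above.

```python
# Toy sensitivity check for "should I go to that restaurant?".
# Every input here is an illustrative assumption.
P_INFECTION = 1e-4  # assumed chance of catching Covid at the restaurant
P_DEATH = 5e-3      # assumed infection fatality rate

def expected_cost(qaly_value: float, remaining_years: float) -> float:
    """Expected dollar cost of the restaurant's Covid risk."""
    return P_INFECTION * P_DEATH * qaly_value * remaining_years

# Small parameter: $175k vs $200k per QALY barely moves the answer.
print(expected_cost(175_000, 50))  # ≈ $4.38
print(expected_cost(200_000, 50))  # ≈ $5.00

# Big parameter: 100 vs 1,000 remaining years changes it tenfold.
print(expected_cost(200_000, 100))
print(expected_cost(200_000, 1_000))
```

A few dollars either way won't flip a restaurant decision; an extra order of magnitude of lifespan very well might.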
Why not focus on both? The big, and the small. Well, limited time and energy is part of the answer, but it's not the main part. The main part is that the small questions are distracting. They consume cognitive resources. When you have 100 small questions that you are juggling, it is difficult to maintain a view of the bigger picture. But when you reduce something to the cruxiest four big questions, I think it becomes much easier to think about, so that's what I will be trying to do in this post.
Where on earth do you get this 10k number from?
Earth? I didn't get it from earth. I got it from dath ilan.
The short answer is as follows.
Experts largely believe that there is a realistic chance of the singularity happening in our lifetime. If you take that idea seriously, I believe it follows that you should place wildly high valuations on life. Suppose that there is even a 10% chance of living 100k years. That gives you an expectation of 10k years.
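That arithmetic, spelled out as a toy two-scenario distribution (the 10% and 100k figures are from above; the 80-year fallback lifespan is an assumption):

```python
# Expected lifespan under a toy two-scenario distribution.
scenarios = [
    (0.10, 100_000),  # 10% chance: radical life extension
    (0.90, 80),       # otherwise: an ordinary lifespan (assumed)
]
expected_years = sum(p * years for p, years in scenarios)
print(round(expected_years))  # ≈ 10,072 years, dominated by the long tail
```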
To be honest, it actually surprises me that, from what I can tell, very few other people think like this.
And now for the long answer.
Taking ideas seriously
Taking an idea seriously means:
Looking at how a new idea fits in with your model of reality and checking for contradictions or tensions that may indicate the need of updating a belief, and then propagating that belief update through the entire web of beliefs in which it is embedded. When a belief or a set of beliefs change that can in turn have huge effects on your overarching web of interconnected beliefs. (The best example I can think of is religious deconversion: there are a great many things you have to change about how you see the world after deconversion, even deconversion from something like deism. I sometimes wish I could have had such an experience. I can only imagine that it must feel both terrifying and exhilarating.) Failing to propagate that change leads to trouble. Compartmentalization is dangerous.
When Eliezer first read about the idea of a Singularity, he didn't do exactly what I and probably almost anybody in the world would have done at that moment: he didn't think "Wow, that's pretty neat!" and then go on to study string theory. He immediately saw that this was an idea that needed to be taken seriously, a belief node of great importance that necessarily affects every other belief in the web. It's something that I don't have naturally (not that it's either binary or genetic), but it's a skill that I'm reasonably sure can be picked up and used immediately, as long as you have a decent grasp of the fundamentals of rationality (as can be found in the Sequences).
Before getting into all of this, I want to anticipate something and address it in advance. I predict that a lot of people will interpret the claim of "you should expect to live for 10k years" as wacky, and not take it seriously.
I think this takes various forms. It's a spectrum. On one end of the spectrum are people who dismiss it upfront without ever giving it a chance. On the other end are people who are just the slightest bit biased against it. Who have ventured slightly away from "Do I believe this?" and towards "Must I believe this?".
To move towards the middle on this spectrum, the rationalist skill of Taking Ideas Seriously is necessary (amongst other things). Consider whether you are mentally prepared to take my wacky-sounding idea seriously before continuing to read.
(I hope that doesn't sound rude. I lean away from saying things that might sound rude. Probably too far. But here I think it was a pretty important thing to say.)
To some extent (perhaps a large extent), all that I do in this post is attempt to take the singularity seriously. I'm not really providing any unique thoughts or novel insights. I'm just gluing together the work that others have done. Isn't that what all progress is? Standing on the shoulders of giants and stuff? Yes and no. It's one thing to be provided with puzzle pieces. It's another thing for those puzzle pieces to be mostly already connected for you. Where all you need to do is give them a nudge and actually put them together. That's all I feel like I am doing. Nudging those pieces together.
Edit: Shut Up And Multiply is another one that is worth mentioning.
Will the singularity really happen? If so, when?
Wait But Why
I'll start with some excerpts from the Wait But Why post The Artificial Intelligence Revolution. I think Tim Urban does a great job at breaking things down and that this is a good starting place.
And for reasons we’ll discuss later, a huge part of the scientific community believes that it’s not a matter of whether we’ll hit that tripwire, but when. Kind of a crazy piece of information.
Let’s start with the first part of the question: When are we going to hit the tripwire?
i.e. How long until the first machine reaches superintelligence?
Not shockingly, opinions vary wildly and this is a heated debate among scientists and thinkers. Many, like professor Vernor Vinge, scientist Ben Goertzel, Sun Microsystems co-founder Bill Joy, or, most famously, inventor and futurist Ray Kurzweil, agree with machine learning expert Jeremy Howard when he puts up this graph during a TED Talk:
Those people subscribe to the belief that this is happening soon—that exponential growth is at work and machine learning, though only slowly creeping up on us now, will blow right past us within the next few decades.
Others, like Microsoft co-founder Paul Allen, research psychologist Gary Marcus, NYU computer scientist Ernest Davis, and tech entrepreneur Mitch Kapor, believe that thinkers like Kurzweil are vastly underestimating the magnitude of the challenge and believe that we’re not actually that close to the tripwire.
The Kurzweil camp would counter that the only underestimating that’s happening is the underappreciation of exponential growth, and they’d compare the doubters to those who looked at the slow-growing seedling of the internet in 1985 and argued that there was no way it would amount to anything impactful in the near future.
The doubters might argue back that the progress needed to make advancements in intelligence also grows exponentially harder with each subsequent step, which will cancel out the typical exponential nature of technological progress. And so on.
A third camp, which includes Nick Bostrom, believes neither group has any ground to feel certain about the timeline and acknowledges both A) that this could absolutely happen in the near future and B) that there’s no guarantee about that; it could also take a much longer time.
Still others, like philosopher Hubert Dreyfus, believe all three of these groups are naive for believing that there even is a tripwire, arguing that it’s more likely that ASI won’t actually ever be achieved.
So what do you get when you put all of these opinions together?
In 2013, Vincent C. Müller and Nick Bostrom conducted a survey that asked hundreds of AI experts at a series of conferences the following question: “For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for such HLMI to exist?” It asked them to name an optimistic year (one in which they believe there’s a 10% chance we’ll have AGI), a realistic guess (a year they believe there’s a 50% chance of AGI—i.e. after that year they think it’s more likely than not that we’ll have AGI), and a safe guess (the earliest year by which they can say with 90% certainty we’ll have AGI). Gathered together as one data set, here were the results:
Median optimistic year (10% likelihood): 2022
Median realistic year (50% likelihood): 2040
Median pessimistic year (90% likelihood): 2075
So the median participant thinks it’s more likely than not that we’ll have AGI 25 years from now. The 90% median answer of 2075 means that if you’re a teenager right now, the median respondent, along with over half of the group of AI experts, is almost certain AGI will happen within your lifetime.
A separate study, conducted recently by author James Barrat at Ben Goertzel’s annual AGI Conference, did away with percentages and simply asked when participants thought AGI would be achieved—by 2030, by 2050, by 2100, after 2100, or never. The results:
By 2030: 42% of respondents
By 2050: 25%
By 2100: 20%
After 2100: 10%
Never: 2%
Pretty similar to Müller and Bostrom’s outcomes. In Barrat’s survey, over two thirds of participants believe AGI will be here by 2050 and a little less than half predict AGI within the next 15 years. Also striking is that only 2% of those surveyed don’t think AGI is part of our future.
But AGI isn’t the tripwire, ASI is. So when do the experts think we’ll reach ASI?
Müller and Bostrom also asked the experts how likely they think it is that we’ll reach ASI A) within two years of reaching AGI (i.e. an almost-immediate intelligence explosion), and B) within 30 years. The results:
The median answer put a rapid (2 year) AGI → ASI transition at only a 10% likelihood, but a longer transition of 30 years or less at a 75% likelihood.
We don’t know from this data the length of this transition the median participant would have put at a 50% likelihood, but for ballpark purposes, based on the two answers above, let’s estimate that they’d have said 20 years. So the median opinion—the one right in the center of the world of AI experts—believes the most realistic guess for when we’ll hit the ASI tripwire is [the 2040 prediction for AGI + our estimated prediction of a 20-year transition from AGI to ASI] = 2060.
Of course, all of the above statistics are speculative, and they’re only representative of the center opinion of the AI expert community, but it tells us that a large portion of the people who know the most about this topic would agree that 2060 is a very reasonable estimate for the arrival of potentially world-altering ASI. Only 45 years from now.
Grace et al
The Wait But Why post largely focused on that Müller and Bostrom survey. That's just one survey though. And it took place between 2012 and 2014. Can this survey be corroborated with a different survey? Is there anything more recent?
Yes, and yes. In 2016-2017, Grace et al surveyed machine learning researchers, contacting 1,634 experts (352 of whom responded).
Each individual respondent estimated the probability of HLMI arriving in future years. Taking the mean over each individual, the aggregate forecast gave a 50% chance of HLMI occurring within 45 years and a 10% chance of it occurring within 9 years.
That seems close enough to the Müller and Bostrom survey, which surveyed 550 people, to count as corroboration.
Astral Codex Ten
There's an Astral Codex Ten post called Updated Look At Long-Term AI Risks that discusses a recent 2021 survey. That survey was 1) smaller and 2) only surveyed people in AI safety related fields rather than people who work in AI more broadly. But more to the point, 3) it didn't explicitly ask about timelines, from what I could tell. Still, I get the vibe from this survey that there haven't been any drastic changes in opinions on timelines since the Bostrom and Grace surveys.
I also get that vibe from keeping an eye on LessWrong posts over time. If opinions on timelines changed drastically, or even moderately, I'd expect to be able to recall reading about it on LessWrong, and I do not. Absence of evidence is evidence of absence. Perhaps it isn't strong evidence, but it seems worth mentioning. Actually, it seems moderately strong.
There is a blog post I really liked called When Will AI Be Created? that was published on MIRI's website in 2013, and authored by Luke Muehlhauser. I take it to be representative of Luke's views of course, but also reasonably representative of the views of MIRI more broadly. Which is pretty cool. I like MIRI a lot. If there was even moderate disagreement from others at MIRI, I would expect the post to have been either altered or not published.
In the first footnote, Luke talks about lots of surveys that have been done on AI timelines. He's a thorough guy and this is a MIRI blog post rather than a personal blog post, so I expect that this footnote is a good overview of what existed at the time. And it seems pretty similar to the Bostrom and the Grace surveys. So then, I suppose at this point we have a pretty solid grasp on what the AI experts think.
But can we trust them? Good question! Luke asks and answers it.
Should we expect experts to be good at predicting AI, anyway? As Armstrong & Sotala (2012) point out, decades of research on expert performance suggest that predicting the first creation of AI is precisely the kind of task on which we should expect experts to show poor performance — e.g. because feedback is unavailable and the input stimuli are dynamic rather than static. Muehlhauser & Salamon (2013) add, “If you have a gut feeling about when AI will be created, it is probably wrong.”
Damn, that's disappointing to hear.
I wonder whether we should expect expert predictions to be underconfident or overconfident here. On the one hand, people tend to be overconfident due to the planning fallacy, and I sense that experts fall for this about as badly as normal people do. On the other hand, people underestimate the power of exponential growth. I think experts probably do a much better job at avoiding this than the rest of us, but still, exponential growth is so hard to take seriously.
So, we've got Planning Fallacy vs Exponential Growth. Who wins? I don't know. I lean slightly towards Planning Fallacy, but it's tough.
Anyway, what else can we do aside from surveying experts? Luke proposes trend extrapolation. But this is also tough to do.
Hence, it may be worth searching for a measure for which (a) progress is predictable enough to extrapolate, and for which (b) a given level of performance on that measure robustly implies the arrival of Strong AI. But to my knowledge, this has not yet been done, and it’s not clear that trend extrapolation can tell us much about AI timelines until such an argument is made, and made well.
Well, this sucks. Still, we can't just throw our arms up in despair. We have to work with what we've got. And Luke does just that.
Given these considerations, I think the most appropriate stance on the question “When will AI be created?” is something like this:
We can’t be confident AI will come in the next 30 years, and we can’t be confident it’ll take more than 100 years, and anyone who is confident of either claim is pretending to know too much.
How confident is “confident”? Let’s say 70%. That is, I think it is unreasonable to be 70% confident that AI is fewer than 30 years away, and I also think it’s unreasonable to be 70% confident that AI is more than 100 years away.
This statement admits my inability to predict AI, but it also constrains my probability distribution over “years of AI creation” quite a lot.
That makes enough sense to me. I think I will adopt those beliefs myself. I trust Luke. I trust MIRI. The thought process seems good. I lean towards thinking timelines are longer than what the experts predict. I like that he was thorough in his survey of surveys of experts, and that he considered the question of whether surveying experts is even the right move in the first place. This is fine for now.
Will you be alive for the singularity?
It doesn't actually matter all that much for our purposes when exactly the singularity happens. The real question is whether or not you will be alive for it. If you are alive, you get to benefit from the drastic increases in lifespan that will follow. If not, you won't.
Warning: Very handwavy math is forthcoming.
Let's say you are 30 years old. From the Wait But Why article, the median expert prediction for ASI is 2060. Which is roughly 40 years from now. Let's assume there is a 50% chance we get ASI at or before then. A 30 year old will be 70 in 2060. Let's assume they are still alive at the age of 70. With this logic, a 30 year old has at least a 50% chance of being alive for the singularity. Let's use this as a starting point and then make some adjustments.
Life expectancy of 85 years
Suppose that we don't reach ASI by 2060. Suppose it takes longer. Well, you'll only be 70 years old in 2060. People currently live to roughly 85, so let's say you have another 15 years to wait on ASI. I'll eyeball that as bumping us up to a 60% chance of being alive for the singularity.
Modest increases in life expectancy
There's something I've never understood about life expectancy. Let's say that people right now, in 2021, are dying around age 85. That means that people who were born in 1936 got to live 85 years. But if you're 30 years old in 2021, you were born in 1991. Shouldn't someone born in 1991 have a longer life expectancy than someone born in 1936? (Quoted life expectancy figures are usually "period" life expectancies, computed from today's mortality rates, so they ignore future medical progress.)
I think the answer has got to be "yes". Let's assume it's an extra 15 years. That a 30 year old today can expect to live to be 100. And let's say this gives us another 10% boost for our chances of being alive for the singularity. From 60% to 70%.
Does this 70% number hold water? Let's do a little sanity check.
If we're expecting to live another 70 years, that'll be the year 2090 (to use a round number). From Bostrom's survey:
Median pessimistic year (90% likelihood): 2075
From there it'll take some time to reach ASI. I'm just going to wave my hands and say that yeah, the 70% number passes the sanity check. Onwards.
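For bookkeeping, here is the chain of eyeballed adjustments so far, written out explicitly (all three numbers are guesses from the preceding sections, not data):

```python
# The eyeballed adjustment chain, made explicit. All numbers are guesses.
p_alive = 50   # baseline %: ~50% chance ASI arrives by 2060, while alive
p_alive += 10  # living to ~85 buys another 15 years of waiting
p_alive += 10  # modest life-expectancy gains for someone born in 1991
print(f"{p_alive}% chance of being alive for the singularity")
```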
Exponential increases in life expectancy
In the previous section, we realized that someone born in the year 1991 will probably live longer than someone born in the year 1936. Duh.
But how much longer? In the previous section, I assumed a modest increase of 15 years. I think that is too conservative though. Check this out:
Technology progresses exponentially, not linearly. Tim Urban, as always, gives us a nice, intuitive explanation of this.
Imagine taking a time machine back to 1750—a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay. When you get there, you retrieve a dude, bring him to 2015, and then walk him around and watch him react to everything. It’s impossible for us to understand what it would be like for him to see shiny capsules racing by on a highway, talk to people who had been on the other side of the ocean earlier in the day, watch sports that were being played 1,000 miles away, hear a musical performance that happened 50 years ago, and play with my magical wizard rectangle that he could use to capture a real-life image or record a living moment, generate a map with a paranormal moving blue dot that shows him where he is, look at someone’s face and chat with them even though they’re on the other side of the country, and worlds of other inconceivable sorcery. This is all before you show him the internet or explain things like the International Space Station, the Large Hadron Collider, nuclear weapons, or general relativity.
This experience for him wouldn’t be surprising or shocking or even mind-blowing—those words aren’t big enough. He might actually die.
But here’s the interesting thing—if he then went back to 1750 and got jealous that we got to see his reaction and decided he wanted to try the same thing, he’d take the time machine and go back the same distance, get someone from around the year 1500, bring him to 1750, and show him everything. And the 1500 guy would be shocked by a lot of things—but he wouldn’t die. It would be far less of an insane experience for him, because while 1500 and 1750 were very different, they were much less different than 1750 to 2015. The 1500 guy would learn some mind-bending shit about space and physics, he’d be impressed with how committed Europe turned out to be with that new imperialism fad, and he’d have to do some major revisions of his world map conception. But watching everyday life go by in 1750—transportation, communication, etc.—definitely wouldn’t make him die.
No, in order for the 1750 guy to have as much fun as we had with him, he’d have to go much farther back—maybe all the way back to about 12,000 BC, before the First Agricultural Revolution gave rise to the first cities and to the concept of civilization. If someone from a purely hunter-gatherer world—from a time when humans were, more or less, just another animal species—saw the vast human empires of 1750 with their towering churches, their ocean-crossing ships, their concept of being “inside,” and their enormous mountain of collective, accumulated human knowledge and discovery—he’d likely die.
And then what if, after dying, he got jealous and wanted to do the same thing. If he went back 12,000 years to 24,000 BC and got a guy and brought him to 12,000 BC, he’d show the guy everything and the guy would be like, “Okay what’s your point who cares.” For the 12,000 BC guy to have the same fun, he’d have to go back over 100,000 years and get someone he could show fire and language to for the first time.
In order for someone to be transported into the future and die from the level of shock they’d experience, they have to go enough years ahead that a “die level of progress,” or a Die Progress Unit (DPU) has been achieved. So a DPU took over 100,000 years in hunter-gatherer times, but at the post-Agricultural Revolution rate, it only took about 12,000 years. The post-Industrial Revolution world has moved so quickly that a 1750 person only needs to go forward a couple hundred years for a DPU to have happened.
This pattern—human progress moving quicker and quicker as time goes on—is what futurist Ray Kurzweil calls human history’s Law of Accelerating Returns. This happens because more advanced societies have the ability to progress at a faster rate than less advanced societies—because they’re more advanced. 19th century humanity knew more and had better technology than 15th century humanity, so it’s no surprise that humanity made far more advances in the 19th century than in the 15th century—15th century humanity was no match for 19th century humanity.
Bostrom gives us a similar explanation in his book Superintelligence:
A few hundred thousand years ago growth was so slow that it took on the order of one million years for human productive capacity to increase sufficiently to sustain an additional one million individuals living at subsistence level. By 5,000 B.C., following the Agricultural Revolution, the rate of growth had increased to the point where the same amount of growth took just two centuries. Today, following the Industrial Revolution, the world economy grows on average by that amount every ninety minutes.
So, let's say you buy this idea of exponential increases in technology. How does that affect life expectancy? With the modest increases in the previous section we went from 85 years to 100 years. How much should we bump it up once we factor in this exponential stuff?
I don't know. Remember, artificial general intelligence comes before ASI, and AGI has got to mean lots of cool bumps in life expectancy. So... 150 years? I really don't know. That's probably undershooting it. I'm ok with eyeballing it at 150 years though. Let's do that.
A 150-year life expectancy would mean living to the year 2140. If we look at the expert surveys and then add some buffer, it seems like there'd be at least a 90% chance of living to ASI.
Didn't you say that you expect expert surveys on AI timelines to be overconfident?
Yes, I did. And I know that I referenced those same surveys a lot in this section. I just found it easier that way, and I don't think it really changes things.
Here's what I mean. I think my discussion above already has a decent amount of buffer. I think it undersells the increases in life expectancy that will happen. I think this underselling more than makes up for the fact that I was using AI timelines that were too optimistic.
Also, I'm skipping ahead here, but as we'll see later on in the post, even if the real number is something like 60% instead of 90%, it doesn't actually change things much. Orders of magnitude are what will ultimately matter.
Assuming you live to the singularity, how long would you expect to live?
Friendly reminder: Taking Ideas Seriously.
Unfortunately, this is a question that is both 1) really important and 2) really difficult to answer. I spent some time googling around and didn't really find anything. I wish there were lots of expert surveys available for this like there are for AI timelines.
Again, we can't just throw our arms up in despair at this situation. We have to make our best guesses and work with them. That's how Bayesian probability works. That's how expected value works.
For starters, let's remind ourselves of how totally insane ASI is.
It's CRAZY powerful.
Because of this power, Bostrom as well as other scientists believe that it could very well lead to our immortality. How's that for an answer to the question of life extension?
And while most scientists I’ve come across acknowledge that ASI would have the ability to send humans to extinction, many also believe that used beneficially, ASI’s abilities could be used to bring individual humans, and the species as a whole, to a second attractor state—species immortality. Bostrom believes species immortality is just as much of an attractor state as species extinction, i.e. if we manage to get there, we’ll be impervious to extinction forever—we’ll have conquered mortality and conquered chance. So even though all species so far have fallen off the balance beam and landed on extinction, Bostrom believes there are two sides to the beam and it’s just that nothing on Earth has been intelligent enough yet to figure out how to fall off on the other side.
If Bostrom and others are right, and from everything I’ve read, it seems like they really might be, we have two pretty shocking facts to absorb:
The advent of ASI will, for the first time, open up the possibility for a species to land on the immortality side of the balance beam.
The advent of ASI will make such an unimaginably dramatic impact that it’s likely to knock the human race off the beam, in one direction or the other.
It may very well be that when evolution hits the tripwire, it permanently ends humans’ relationship with the beam and creates a new world, with or without humans.
And here's Feynman:
It is one of the most remarkable things that in all of the biological sciences there is no clue as to the necessity of death. If you say we want to make perpetual motion, we have discovered enough laws as we studied physics to see that it is either absolutely impossible or else the laws are wrong. But there is nothing in biology yet found that indicates the inevitability of death. This suggests to me that it is not at all inevitable and that it is only a matter of time before the biologists discover what it is that is causing us the trouble and that this terrible universal disease or temporariness of the human’s body will be cured.
Kurzweil talks about intelligent wifi-connected nanobots in the bloodstream who could perform countless tasks for human health, including routinely repairing or replacing worn down cells in any part of the body. If perfected, this process (or a far smarter one ASI would come up with) wouldn’t just keep the body healthy, it could reverse aging. The difference between a 60-year-old’s body and a 30-year-old’s body is just a bunch of physical things that could be altered if we had the technology. ASI could build an “age refresher” that a 60-year-old could walk into, and they’d walk out with the body and skin of a 30-year-old. Even the ever-befuddling brain could be refreshed by something as smart as ASI, which would figure out how to do so without affecting the brain’s data (personality, memories, etc.). A 90-year-old suffering from dementia could head into the age refresher and come out sharp as a tack and ready to start a whole new career. This seems absurd—but the body is just a bunch of atoms and ASI would presumably be able to easily manipulate all kinds of atomic structures—so it’s not absurd.
Remember when I said this?
Suppose that there is even a 10% chance of living 100k years. That gives you an expectation of 10k years.
Doesn't sound so crazy now does it?
So, what value should we use? Bostrom and others say infinity. But there's some probability that they are wrong. But some percent of infinity is still infinity! But that's an idea even I am not ready to take seriously.
Let's ask the question again: what value should we use? A trillion? A billion? A million? 100k? 10k? 1k? 100? Let's just say 100k for now, and then revisit this question later. I want to be conservative, and it's a question I'm really struggling to answer.
What will the singularity be like?
So far we've asked "Will you be alive for the singularity?" and "Assuming you live to the singularity, how long would you expect to live?". We have preliminary answers of "90% chance" and "100k years". This gives us a rough life expectancy of 90k years, which sounds great! But there are still more questions to ask.
Utopia or dystopia?
What if we knew for a fact that the singularity would be a dystopia? A shitty place to be, where an evil robot tortures everyone all day long. You'd rather be dead than endure it.
Well, in that case, that 90% chance at living an extra 100k years doesn't sound so good. You'd choose to end your life instead of living in that post-singularity world, so you're not actually going to live those 100k years. You're just going to live the 100 years or whatever until we have a singularity, and then commit suicide. So your life expectancy would just be a normal 100 years.
Now let's suppose that there is a 20% chance that the post-singularity world is a good place to be. Well, now we can say that there is a:
- 90% * 20% = 18% chance that you will live another 100k pleasant years
- 90% * 80% = 72% chance that you live to the singularity but it sucks and you commit suicide and thus live your normal 100 year lifespan
- 10% chance that you don't make it to the singularity at all
The last two outcomes contribute little to the expected value, so we can forget about them. The main thing is that 18% chance of living 100k pleasant years. That is an expectation of 18k years. A 100 year lifespan is small potatoes compared to that.
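That arithmetic, spelled out as a quick sketch (variable names are mine; the 90%, 20%, and 100k figures are the ones from the text):

```python
# Expected lifespan under the outcomes above.
p_singularity = 0.9     # chance of surviving to the singularity
p_good = 0.2            # chance the post-singularity world is worth living in
good_years = 100_000    # years lived if the outcome is good

p_good_outcome = p_singularity * p_good        # 18% chance of 100k pleasant years
expected_years = p_good_outcome * good_years   # ~18,000 expected years
print(f"{p_good_outcome:.0%} chance -> {expected_years:,.0f} expected years")
```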
The point of this is to demonstrate that we have to ask this question of utopia or dystopia. How likely is it that we actually want to live in the post-singularity world?
Well, in reality, there are more than two possible outcomes. Maybe there's a:
- 30% chance of it being 10/10 awesome
- 20% chance of it being 9/10 really cool
- 10% chance of it being 4/10 meh
- 40% chance of it being 0/10 awful
There are a lot more possibilities than that, actually, and a more thorough analysis would incorporate all of them, but here I think it makes sense as a simplification to assume that there are only two possible outcomes.
So then, what is the probability that we get the good outcome?
I have no clue. Again, I'm just a guy who writes repetitive CRUD code for web apps, and they never taught me any of this in my coding bootcamp. So let's turn to the experts again.
Fortunately, Bostrom asked this question in his survey of AI experts. Let's let Tim Urban break it down for us again.
Who or what will be in control of that power, and what will their motivation be?
The answer to this will determine whether ASI is an unbelievably great development, an unfathomably terrible development, or something in between.
Of course, the expert community is again all over the board and in a heated debate about the answer to this question. Müller and Bostrom’s survey asked participants to assign a probability to the possible impacts AGI would have on humanity and found that the mean response was that there was a 52% chance that the outcome will be either good or extremely good and a 31% chance the outcome will be either bad or extremely bad. For a relatively neutral outcome, the mean probability was only 17%. In other words, the people who know the most about this are pretty sure this will be a huge deal. It’s also worth noting that those numbers refer to the advent of AGI—if the question were about ASI, I imagine that the neutral percentage would be even lower.
I agree with Urban about the neutral percentage being lower if it were asking about ASI instead of AGI. Since we're being handwavy with our math, let's just assume that the respondents would assign a 52 / (52 + 31) = 63% ≈ 60% chance of ASI leading to a good outcome. With that, we can use 60% instead of 20%, and say that we can expect to live 90% * 60% * 100k years = 54k years. Pretty good!
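As a quick check on that handwavy math (the 52 and 31 are the survey percentages quoted above, and 90% is our earlier survival estimate):

```python
# "Good outcome" probability derived from the Müller-Bostrom survey numbers,
# then the rough life expectancy it implies.
p_good_raw = 52 / (52 + 31)   # ≈ 0.63, which we round down to 0.6
p_good = 0.6
p_singularity = 0.9

expected_years = p_singularity * p_good * 100_000   # 54,000 years
print(round(p_good_raw, 2), f"{expected_years:,.0f} years")
```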
How much is a SALY worth?
There is this term that people use called a QALY. It is pronounced "qually" and stands for "quality adjusted life year". The idea is that life years aren't created equal. Losing the years from 85 to 90 when you have cancer isn't as bad as losing the years from 25 to 30 when you're in the prime of your life. Maybe the latter years are worth $200k each to you while the former are only worth $30k.
I want to use a different term: SALY. Singularity adjusted life year. How much would a post-singularity year be worth?
Well, it is common nowadays to assign a value of $200k to a year of life. What about after the singularity? If we get the good outcome instead of the bad outcome, the singularity seems like it'll be totally fucking awesome. I'll hand it back over to Tim Urban again to explain.
Armed with superintelligence and all the technology superintelligence would know how to create, ASI would likely be able to solve every problem in humanity. Global warming? ASI could first halt CO2 emissions by coming up with much better ways to generate energy that had nothing to do with fossil fuels. Then it could create some innovative way to begin to remove excess CO2 from the atmosphere. Cancer and other diseases? No problem for ASI—health and medicine would be revolutionized beyond imagination. World hunger? ASI could use things like nanotech to build meat from scratch that would be molecularly identical to real meat—in other words, it would be real meat. Nanotech could turn a pile of garbage into a huge vat of fresh meat or other food (which wouldn’t have to have its normal shape—picture a giant cube of apple)—and distribute all this food around the world using ultra-advanced transportation. Of course, this would also be great for animals, who wouldn’t have to get killed by humans much anymore, and ASI could do lots of other things to save endangered species or even bring back extinct species through work with preserved DNA. ASI could even solve our most complex macro issues—our debates over how economies should be run and how world trade is best facilitated, even our haziest grapplings in philosophy or ethics—would all be painfully obvious to ASI.
The possibilities for new human experience would be endless. Humans have separated sex from its purpose, allowing people to have sex for fun, not just for reproduction. Kurzweil believes we’ll be able to do the same with food. Nanobots will be in charge of delivering perfect nutrition to the cells of the body, intelligently directing anything unhealthy to pass through the body without affecting anything. An eating condom. Nanotech theorist Robert A. Freitas has already designed blood cell replacements that, if one day implemented in the body, would allow a human to sprint for 15 minutes without taking a breath—so you can only imagine what ASI could do for our physical capabilities. Virtual reality would take on a new meaning—nanobots in the body could suppress the inputs coming from our senses and replace them with new signals that would put us entirely in a new environment, one that we’d see, hear, feel, and smell.
How much more awesome is that world than today's world? I don't know. It sounds pretty awesome to me. I could see there being legitimate arguments for it being 10x or 100x more awesome, and thus SALYs would be worth $2M or $20M respectively (since we're saying QALYs are worth $200k). I could even see arguments that push things up a few more orders of magnitude. Remember, this is a world where a god-like superintelligence has fine-grained control over the world at the nanometer scale.
Personally, I suspect something like 100x or more, but it's a hard idea to take seriously. Let's just say that a SALY is worth a humble $500k and move forward. We could revisit this assumption in the future if we want to.
Why would anyone want to live for that long?
Check out "Skeptic Type 5: The person who, regardless of whether cryonics can work or not, thinks it’s a bad thing" in Why Cryonics Makes Sense.
Piecing things together
We've done a lot of work so far. Let's zoom out and get a feel for the big picture.
We are trying to see how much value we should place on life. Answering that question depends on a lot of stuff. You could approach it from various angles. I like to look at the following parameters:
- How likely is it that you live to the singularity?
- How likely is it that the singularity will be good rather than bad?
- How many years do you expect to live post-singularity?
- How valuable are each of those years?
Here are my preliminary answers:
Here is what those answers imply:
- There is a 90% * 60% = 54% chance that you find yourself alive in an awesome post-singularity world.
- You can expect to live 54% * 100k years = 54k (post-singularity) years.
- The value of those 54k post-singularity years is 54k * $500k = $27B.
There are also those meager ~100 pre-singularity years worth ~$200k each, so your pre-singularity years are worth something like $20M, but that is pretty negligible next to the $27B value of your post-singularity years, right? Right. So let's not think about that pre-singularity stuff.
Note that this $27B figure gives us a lot of wiggle room if we're making the argument that life is crazy valuable.
Let's say it is $10B to make the math easier. We normally value life at about $10M. That is three orders of magnitude less. So then, our $27B could be off by a full two orders of magnitude, and life would still be a full order of magnitude more valuable than we currently value it. Eg. maybe you only think there's a 10% chance of living to the singularity, and an expectation of living 10k years post-singularity instead of 100k. Those seem like pretty conservative estimates, but even if you use them, the normal values that we place on life would still be off by an order of magnitude.
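That robustness claim can be checked numerically. In the sketch below (function name and structure are mine), the "conservative" inputs are the ones floated in the paragraph above: a 10% chance of reaching the singularity and 10k post-singularity years, keeping the other parameters fixed:

```python
# Value of life = P(reach singularity) * P(good outcome) * years * $/year.
def value_of_life(p_singularity, p_good, years, dollars_per_year):
    return p_singularity * p_good * years * dollars_per_year

baseline = value_of_life(0.9, 0.6, 100_000, 500_000)      # ≈ $27B
conservative = value_of_life(0.1, 0.6, 10_000, 500_000)   # ≈ $300M

# Even the conservative figure is well above the usual ~$10M valuation.
print(f"${baseline / 1e9:.0f}B vs ${conservative / 1e6:.0f}M")
```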
On top of this, I was trying to be conservative in my initial assumptions.
Professor Quirrell from HPMoR is one of my favorite people in the world. When I'm lucky, he makes his way into my head and reminds me that people suck and that I should be pessimistic. Let's see what happens if I listen to him here.
- Humans are going to screw something up and end the world before they ever come close to reaching ASI, let alone AGI. You've explained how technology progresses exponentially. Putting that sort of technology in the hands of a human is like putting a machine gun in the hands of a toddler: it's only a matter of time before the room is up in flames. You've got to assign at least some probability to that happening. Personally, I think it is quite high.
- Alignment problems are hard. You know this. And for some reason, you are not acting as if it is true. If you create an ASI without first solving the alignment problems, you get a paperclip maximizer. Which is game over. And there's no reset button. Humans are going to rush their way into ASI without coming close to solving the alignment problem first, and we are all going to be transformed into gray goo.
- This does seem reasonable on the surface. However, things always have a way of going wrong.
- Well done, Mr. Zerner. You've managed to be more pessimistic than even I would have been here.
Let's try out some Professor Quirrell adjusted values here:
This gives us a value of 10% * 10% * 10k years * $1M/year = $100M. So, one order of magnitude larger than the typical value of life.
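Plugging Professor Quirrell's adjusted values into the same multiplication (variable names are mine; the figures, including the $1M/year, are his from the text):

```python
# Professor Quirrell's pessimistic inputs.
p_live_to_singularity = 0.10
p_good_outcome = 0.10
post_singularity_years = 10_000
dollars_per_year = 1_000_000   # $1M/year

value = (p_live_to_singularity * p_good_outcome
         * post_singularity_years * dollars_per_year)   # $100M
print(f"${value / 1e6:.0f}M")
```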
Responding to Professor Quirrell
Thank you for your input.
How likely is it that you live to the singularity?
Damn, I totally failed to think about the possibility that you die due to some existential risk type of thing before the singularity. My initial analysis was just focused on dying of natural causes. I'm glad I talked to you.
Let's start by looking at expert surveys on how likely it is that you die of existential risk types of stuff. I spent some time googling, and it wasn't too fruitful. In Bostrom's Existential Risk Prevention as Global Priority paper from 2012, he opens with the following:
Estimates of 10-20% total existential risk in this century are fairly typical among those who have examined the issue, though inevitably such estimates rely heavily on subjective judgment. The most reasonable estimate might be substantially higher or lower.
Which points to the following footnote:
One informal poll among mainly academic experts on various global catastrophic risks gave a median estimate of 19% probability that the human species will go extinct before the end of this century (Sandberg and Bostrom 2008). These respondents' views are not necessarily representative of the wider expert community. The U.K.'s influential Stern Review on the Economics of Climate Change (2006) used an extinction probability of 0.1% per year in calculating an effective discount rate. This is equivalent to assuming a 9.5% risk of human extinction within the next hundred years (UK Treasury 2006, Chapter 2, Technical Appendix, p. 47).
This is only talking about existential risks though. It's also possible that I die eg. if Russia bombs the US or something. Or if there is a new pandemic that I'm not able to protect myself from. Things that wouldn't necessarily be considered existential risks. So that pushes the probability of me dying before the singularity up. On the other hand, those estimates presumably factor in existential risk from unfriendly AI, so that pushes the probability of me dying before the singularity down. Let's just call those two considerations a wash.
Let's look at another piece of evidence. In Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards, Bostrom says the following:
In combination, these indirect arguments add important constraints to those we can glean from the direct consideration of various technological risks, although there is not room here to elaborate on the details. But the balance of evidence is such that it would appear unreasonable not to assign a substantial probability to the hypothesis that an existential disaster will do us in. My subjective opinion is that setting this probability lower than 25% would be misguided, and the best estimate may be considerably higher.
Hm, that's not good to hear. What does "considerably higher" mean? 80%?
What proportion of this risk comes from UFAI? It sounds like it's not that much. Bostrom says:
This is the most obvious kind of existential risk. It is conceptually easy to understand. Below are some possible ways for the world to end in a bang. I have tried to rank them roughly in order of how probable they are, in my estimation, to cause the extinction of Earth-originating intelligent life; but my intention with the ordering is more to provide a basis for further discussion than to make any firm assertions.
And then he ranks UFAI fourth on the list. So I don't think we have to filter this out much. Let's just suppose that Bostrom's best personal estimate of existential risk from something other than UFAI is 70%.
Let's look at The Case For Reducing Existential Risks from 80,000 Hours now. Bingo! This has some good information.
For instance, an informal poll in 2008 at a conference on catastrophic risks found they believe it’s pretty likely we’ll face a catastrophe that kills over a billion people, and estimate a 19% chance of extinction before 2100.
And provides the following table:
It looks like it might be referencing that same survey Bostrom mentioned though, and we can't double count evidence. However, the article also gives us these data points:
In our podcast episode with Will MacAskill we discuss why he puts the risk of extinction this century at around 1%.
In his book The Precipice: Existential Risk and the Future of Humanity, Dr Toby Ord gives his guess at our total existential risk this century as 1 in 6 — a roll of the dice. Listen to our episode with Toby.
What should we make of these estimates? Presumably, the researchers only work on these issues because they think they’re so important, so we should expect their estimates to be high (“selection bias”). But does that mean we can dismiss their concerns entirely?
Given this, what’s our personal best guess? It’s very hard to say, but we find it hard to confidently ignore the risks. Overall, we guess the risk is likely over 3%.
That's good... but weird. Why are their estimates all so low when Bostrom said anything under 25% is misguided? I feel like I am missing something. I think highly of the people at 80,000 Hours, but I also think highly of Bostrom.
Let's just call it 30%, acknowledge that it might be more like 80%, and move on.
So what does that mean overall for this parameter of "How likely is it that you live to the singularity?"? Well, we said before that there was a 10% chance that you don't make it there due to dying of something "normal", like natural causes. Now we're saying that there's a 30% chance or so of dying of something "crazy" like nuclear war. I think we're fine adding those two numbers up to get a 40% chance of dying before the singularity, and thus a 60% chance of living to the singularity. Less than our initial 90%, but more than Professor Quirrell's pessimistic 10%.
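A tiny sketch of that combination, with a caveat: straight addition of the two death probabilities is an approximation (the events aren't strictly mutually exclusive), but as the text says, at this level of precision it's fine:

```python
p_die_normal = 0.10   # dying of "normal" causes before the singularity
p_die_crazy = 0.30    # dying of "crazy" causes (nuclear war, x-risk, etc.)

# Simple addition, as in the text; slightly overstates the combined risk.
p_die_before = p_die_normal + p_die_crazy   # 40%
p_live_to_singularity = 1 - p_die_before    # 60%
```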
How likely is it that the singularity will be good rather than bad?
Professor Quirrell says the following:
Alignment problems are hard. You know this. And for some reason, you are not acting as if it is true.
So then, the question is whether you think the experts underestimated the difficulty. If you think they didn't factor it in enough, you can lower your confidence accordingly.
(Warning: Spoilers for HPMoR in the following quote)
"I do not wish to speak ill of the departed," the old witch said. "But since time immemorial, the Line of Merlin Unbroken has passed to those who have thoroughly demonstrated themselves to be, not only good people, but wise enough to distinguish successors who are themselves both good and wise. A single break, anywhere along the chain, and the succession might go astray and never return! It was a mad act for Dumbledore to pass the Line to you at such a young age, even having made it conditional upon your defeat of You-Know-Who. A tarnish upon Dumbledore's legacy, that is how it will be seen." The old witch hesitated, her eyes still watching Harry. "I think it best that nobody outside this room ever learn of it."
"Um," Harry said. "You... don't think very much of Dumbledore, I take it?"
"I thought..." said the old witch. "Well. Albus Dumbledore was a better wizard than I, a better person than I, in more ways than I can easily count. But the man had his faults."
"Because, um. I mean. Dumbledore knew everything you just said. About my being young and how the Line works. You're acting like you think Dumbledore was unaware of those facts, or just ignoring them, when he made his decision. It's true that sometimes stupid people, like me, make decisions that crazy. But not Dumbledore. He was not mad." Harry swallowed, forcing a sudden moisture away from his eyes. "I think... I'm beginning to realize... Dumbledore was the only sane person, in all of this, all along. The only one who was doing the right things for anything like the right reasons..."
I could definitely see the experts underestimating the difficulty and would be happy to adjust downwards in response. It's hard to say how much though. I spent some time looking through Eliezer's twitter and stuff. I have a lot of respect for his opinions, so hearing from him might cause me to adjust downwards, and I could have sworn there was a thread from him somewhere expressing pessimism. I couldn't find it though, and his pessimism doesn't seem too different from the mainstream, so I don't know how much I can adjust downwards from the 60% number we had from the surveys. I'll eyeball it at 40%.
How many years do you expect to live post-singularity?
I think even Professor Quirrell can be convinced that we should assign some probability to having a life expectancy of something like 10B years. Even a 1 in 1000 chance of living that long is an expectation of 10M years. Remember, immortality was the answer that Nick Bostrom gave here. He is a very prominent and trustworthy figure here, and we're merely giving a 0.1% chance of him being right.
But we don't have to go that far. Again, maybe we should go that far, but 100k years seems fine as a figure to use for this parameter, despite Professor Quirrell's pessimism.
How valuable are each of those years?
I appreciate the respect. Let's just return to $500k/year though.
Our new values are:
That gives us a total value of $12B this time. In contrast to $27B the first time, $100M for Professor Quirrell, and $10M as the default we use today. So this time we're about three orders of magnitude higher than the baseline.
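The post-Quirrell multiplication, spelled out (variable names are mine; the inputs are the updated values from the text):

```python
# Updated inputs after responding to Professor Quirrell's objections.
p_live_to_singularity = 0.60
p_good_outcome = 0.40
post_singularity_years = 100_000
dollars_per_year = 500_000   # $500k per SALY

value = (p_live_to_singularity * p_good_outcome
         * post_singularity_years * dollars_per_year)   # $12B
print(f"${value / 1e9:.0f}B")
```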
Adjusting for cryonics
All of this assumes that you aren't signed up for cryonics. Unfortunately, very few people are actually signed up, so this assumption is usually going to be true. But if you are signed up, it does change things.
We're talking about the possibility of a utopian singularity here. If you die before then and are cryonically frozen, it seems pretty likely that when the singularity comes, you'll be revived.
Suppose that you die in the year 2020, are frozen, the singularity happens in the year 2220, they reach the ability to revive you in 2230, and then you live an extra 100k years after that. In that scenario, you only lose out on those 210 years from 2020 to 2230. 210 years is negligible compared to 100k, so dying, being cryonically frozen, and then being revived is basically the same thing as not dying at all.
When I first realized this, I was very excited. For a few moments I felt a huge feeling of relief. I felt like I can finally approach things like a normal human does, and stop this extreme caution regarding death. Then I realized that cryonics might not work. Duh.
How likely is it to succeed? I don't know. Again, let's look at some data points.
- Steven Harris (Alcor): 0.2-15%
- Michael Perry (Alcor): 13-77%
- Robin Hanson: 6%
- Experienced LWers: 15%
- Inexperienced LWers: 21%
- Ralph Merkle: >85% (conditional on things like good preservation, no dystopia, and nanotech)
- A lot of people don’t give numbers because it’s too speculative. (Ben Best)
- 69 super smart leading scientists say that it’s a “credible possibility”. (Although there’s suspiciously few neuroscientists)
- A lot of smart people who I respect are signed up for cryonics (Eliezer Yudkowsky, Robin Hanson, Tim Urban, Nick Bostrom, James Miller, Ralph Merkle, Ray Kurzweil, Peter Thiel…), demonstrating a revealed preference.
It seems like we're at something like a 30% chance it succeeds, let's say. So it's like there's a 30% chance we get those 100k years that we lost back. Or a 70% chance that we actually lose the years. So using our updated values:
- There's a 60% chance of living to the singularity and a 40% chance the singularity is good, so it's a 24% chance of living to a good singularity.
- Now we can multiply by 0.7 and get a 16.8% chance that we actually lose out on those life years. Because in the 30% of worlds where cryonics works, we don't actually lose those life years.
- So there's a 16.8% chance of us losing 100k life years, which is an expectation of 16.8k life years lost.
- Each of those years is worth $500k, so $8.4B.
Wait, that's just 70% of the $12B we had before. I'm bad at math.
So ultimately, cryonics helps but it is not enough to significantly change the conclusion.
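The cryonics adjustment as a sketch (variable names are mine; the 30% success chance and the other inputs are the ones settled on above):

```python
# With cryonics, you only lose the post-singularity years in the worlds
# where cryonics fails.
p_good_singularity = 0.60 * 0.40     # 24% chance of a good singularity
p_cryonics_fails = 1 - 0.30          # 70%
p_actually_lose_years = p_good_singularity * p_cryonics_fails   # 16.8%

expected_loss = p_actually_lose_years * 100_000 * 500_000       # $8.4B
# Which is just 70% of the earlier $12B figure.
```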
Let's say we use $10B as the value of life. It's a round number and it's roughly in between our $12B and $8.4B numbers. What are the implications of that? I'll list two.
For a healthy, vaccinated, 30 year old, what are the chances that you die after being infected with covid? Personally, I refer to this post for an answer. It says 0.004%, but acknowledges that it is very tough to calculate. Let's go with 0.004%.
How much does a microcovid cost? Well, since we're using 0.004%, a microcovid is equal to a 0.000001 * 0.00004 = 4 * 10^-11 chance of dying. And since we're valuing life at $10B, that is a 4 * 10^-11 chance of losing $10B. So a microcovid costs 4 * 10^-11 * $10B = $0.40.
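The same calculation as a few lines of code (the 0.004% death rate and $10B valuation are the assumptions from above; variable names are mine):

```python
# Dollar cost of one microcovid under the stated assumptions.
microcovid = 1e-6               # one-in-a-million chance of catching covid
p_death_if_infected = 0.00004   # 0.004%, for a healthy vaccinated 30 year old
value_of_life = 10e9            # $10B

cost = microcovid * p_death_if_infected * value_of_life   # $0.40
print(f"${cost:.2f} per microcovid")
```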
You can go on https://microcovid.org to see what that means for various activities in your area, but to use an example, for me, it means that eating indoors costs $400, which clearly is not worth it. Even if we went down an order of magnitude, paying an extra $40 to eat indoors is something I'd almost always avoid. On top of that, there are wide error bars everywhere. Personally, that makes me even more hesitant.
I wrote a post about this. Check it out. Let's reason through it a little bit differently here though.
In that post I looked at the cost per year of driving. Since writing it, I realized that looking at something like the cost per mile is more useful, so let's try that.
In the US, there is about one fatality per 100M vehicle miles traveled (VMT). So by traveling one mile, that's like a 1 in 100M chance of dying. Or a 1 in 100M chance of losing $10B, since that's what we valued life at. So then, driving a mile costs 1/100M * $10B = $100. I have a hard time imagining realistic scenarios where that would be worth it.
But you are a safer driver than average, right? How much does that reduce your risk? In the post, I observe the fact that there is a 2.5-to-1 ratio of non-alcohol to alcohol related fatalities, wave my hand, and say that maybe as a safer driver you only take 1/4 of the risk. So $25/mile, which puts a 10 mile trip at $250. Less extreme, but still not something you'd want to do willy nilly.
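Spelled out (the 1-in-100M fatality rate, the $10B valuation, and the 1/4 safer-driver factor are the assumptions above; note that 1e-8 times $10B works out to $100 per mile):

```python
# Per-mile cost of driving risk under the stated assumptions.
fatality_rate = 1 / 100_000_000   # ~1 death per 100M vehicle miles (US average)
value_of_life = 10e9              # $10B

cost_per_mile = fatality_rate * value_of_life   # $100/mile at average risk
safer_cost_per_mile = cost_per_mile / 4         # $25/mile at 1/4 the risk
print(cost_per_mile, safer_cost_per_mile)
```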
Since writing the post, I realized something important that is worth noting. I was in NYC and took an Uber up to Central Park. It might have been the craziest ride of my life. The driver was swerving, accelerating, stopping short, getting really close to other cars. However, the speeds were only like 10-20mph. I can't see an accident in that context causing death. And I think the same thing is true in various other contexts of lower speed traffic.
A cursory google search seems to support that. The risk of a given collision resulting in death is definitely higher at higher speeds. But collisions seem to happen more frequently at lower speeds, and people still die of those. I'm not clear on how these factors balance each other out. I didn't spend that much time looking into it. Eyeballing it, maybe taking a local trip at slow speeds is $1.00/mile, and taking a trip on the highway is somewhat more.
But you already take larger risks in your day-to-day life. So if you're going to argue that life is worth $10B and you shouldn't drive or take any covid risks, then you may as well just live in a bubble. No thanks, I'll pass.
I anticipate something like this being a highly upvoted comment, so I may as well respond to it now.
Salads are better for you than pizza. You know you should be eating salads, but sometimes, perhaps often times, you find yourself eating pizza. Oops.
Imagine that someone came up to you and said:
You're such a hypocrite. You're saying you don't want to get chinese food with me because you should be eating salads, but I see you eating pizza all of the time.
I think the logical response is:
Yeah, I do make that mistake. But just because I've made mistakes in the past doesn't mean I shouldn't still take a Begin Again attitude and try my best in the present.
Also, I haven't had the chance to research what the common death-related risks are in everyday life, other than cars and covid. If someone can inform me, that would be great.
What if life is even more valuable?
I've been trying to be conservative in my assumptions. But what if I am off by orders of magnitude in that direction? What if life should be valued at ten trillion dollars instead of ten billion?
That could very well be the case if we bump up our estimates of how many post-singularity years we'd live, and of how valuable each of those years are. If life is actually this valuable, perhaps we should do even more to avoid death than avoiding cars and covid.
"Sufficient? Probably not," Harry said. "But at least it will help. Go ahead, Deputy Headmistress."
"Just Professor will do," said she, and then, "Wingardium Leviosa."
Harry looked at his father.
"Huh," Harry said.
His father looked back at him. "Huh," his father echoed.
Then Professor Verres-Evans looked back at Professor McGonagall. "All right, you can put me down now."
His father was lowered carefully to the ground.
Harry ruffled a hand through his own hair. Maybe it was just that strange part of him which had already been convinced, but... "That's a bit of an anticlimax," Harry said. "You'd think there'd be some kind of more dramatic mental event associated with updating on an observation of infinitesimal probability -" Harry stopped himself. Mum, the witch, and even his Dad were giving him that look again. "I mean, with finding out that everything I believe is false."
Seriously, it should have been more dramatic. His brain ought to have been flushing its entire current stock of hypotheses about the universe, none of which allowed this to happen. But instead his brain just seemed to be going, All right, I saw the Hogwarts Professor wave her wand and make your father rise into the air, now what?
It is one thing to have your Logical Self decide that your life is worth eg. $10B. It's another thing for your Emotional Self to feel this. And it is yet another thing for your actual self to act on it.
I don't think I can help much in dealing with the difficulties in keeping these selves aligned. However, I do have a perspective I'd like to share that I think has some chance at being helpful.
Imagine that you had a piggy bank with $21.47 of coins inside. Would you care a lot about it getting stolen? Nah.
Now imagine that you had a piggy bank with 21.47 million dollars inside of it. Now you'd be pretty protective of it huh? What if it was billion instead of million? Trillion instead of billion?
How protective you are of a thing depends on how valuable the thing is. If your piggy bank exploded in value, you'd become much more protective of it. Similarly, if your life exploded in value, you should become much more protective of it as well.
Assume the hypothetical
Here's another perspective. Assume that your life expectancy is 100k years instead of 80. With such a long time left to live, how would you feel about getting into a two ton metal object moving at 70mph, driven at 8am by a guy who is still drowsy because he didn't get enough sleep last night, amongst dozens of other similar objects in the same situation?
Really try to assume that you would otherwise live 100k years. No illness is going to kill you until then. No war. No natural disaster. No existential risk. The only things that can kill you are things you have control over.
Would you feel good about getting in the car?
If you have any thoughts, I'd love to discuss them over email: email@example.com.