August 13, 2003

Singularity Sky

Charles Stross (2003), Singularity Sky (New York: Ace; ISBN 0441010725).

Before the Singularity, human beings living on Earth had looked at the stars and consoled themselves in their isolation with the comforting belief that the universe didn't care. Unfortunately, they were mistaken.

Posted by DeLong at August 13, 2003 01:43 PM

Comments

Ah, Stross again! I still fondly remember "A Colder War", in which Cthulhu destroys the world with a little help from Ronald Reagan.

"Looking up at the stars, I know quite well
That for all they care, I can go to hell.
But on Earth, indifference is the least
We have to fear from man or beast."

-- W.H. Auden

Posted by: Bruce Moomaw on August 13, 2003 07:42 PM

Ok, clearly you are a hard science fiction fan.

So you are familiar with Fermi's paradox, and its implications. [1]

One day, you will have to tell us your answer to Fermi's paradox.


[1] http://www.faughnan.com/setifail.html

Posted by: John Faughnan on August 13, 2003 07:43 PM

Ah, this is a good opportunity for me to continue my campaign against anthropocentric and cultural bias. In finance there is survivorship bias, which is analogous - we tend to focus on the winners and forget the losers in forming our views on what is typical.

Fermi's paradox is thus solved by attaching an unbiased expected lifetime - time to discontinuation, that is, not time to development into something new - to complex systems.

Human-like species typically go extinct: there were a handful of them (like the Neanderthals), and now we are the only one left. Mammals typically go extinct. Cultural systems usually perish; agriculture and (post-)industrialism are the only ones left.

Complexity in systems (biological, cultural, economic) might well tend to grow exponentially in time (towards this "Singularity"). But the probability of discontinuation might also increase at a fast rate with complexity.

We don't expect cockroaches to go extinct, but we are afraid that many mammals, and especially apes, will. There are a lot of single atoms in the universe but few molecules. Turn up the heat enough and molecules immediately turn into single atoms. When you cool things down again, you might have to wait quite a while for molecules to form.

The probability of discontinuation (of a subsystem) increases with complexity. This balances the tendency of complexity (in a subsystem) to grow with time.

And that solves Fermi's paradox.
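
(A toy formalization of that balance - the functional forms and numbers here are illustrative assumptions, not anything specified above. Let complexity grow exponentially, C(t) = exp(r*t), and let the hazard of discontinuation be proportional to complexity, k*C(t); survival to time t is then exp(-(k/r)*(exp(r*t) - 1)), which collapses doubly exponentially, so old, highly complex systems become vanishingly rare.)

    import math

    def survival(t, r=0.05, k=1e-4):
        # Complexity C(t) = exp(r*t); hazard rate = k * C(t).
        # Integrating the hazard: S(t) = exp(-(k/r) * (exp(r*t) - 1)).
        return math.exp(-(k / r) * (math.exp(r * t) - 1))

    for t in (0, 50, 100, 150, 200):
        print(t, survival(t))
    # Survival stays near 1 for a long while, then collapses once
    # complexity takes off: 1.0, 0.98, 0.75, 0.03, ~7e-20.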

Posted by: Mats on August 14, 2003 02:27 AM

On Fermi's paradox (i.e., if there are extraterrestrial civilizations, where are they?):

I think, historically, the bulk of travel can be explained by four things:

1) money
2) money
3) tourism
4) money

Now, I think we can rule out tourism as a _major_ cause of interplanetary space travel. It's not just you that's going on vacation, it's your whole bloodline.

There are various reasons people travel for money, including conventions, trade, and immigration. I think all of these things are goofy to do over interplanetary distances (however, I can't count out religion-based emigration...). Except one: trade of information. But you don't need spaceships to do this; all you need are really nifty telescopes and really powerful transmitters.

Unfortunately, our telescopes are still so bad that we can't even see Earth-like planets, let alone try to receive signals emanating from these planets. And we have no way of replying. And since replies would take tens or hundreds of years to reach their destination, I would imagine our would-be conversants are quite patient.

But then again, setting up an intergalactic trading mechanism for information would be really hard to do. And if one wasn't set up, you'd basically need government support to run these signalling systems. But imagine for a moment what a transgalactic government would look like: it would be responsible for, at least, some kind of monetary system, possibly some intellectual property system, as well as protection against fraudulent solar systems. All of this would have to be set up in "slow motion", taking possibly thousands of years. And any time a new solar system joined, you'd have the problem of not only learning their language, but also bringing them up to speed on the bureaucracy and how to handle themselves in the intergalactic community.

But then again, am I making the assumption that other civilizations would have a market-style economy? Maybe this isn't a Fair and Balanced assumption. On the other hand, if species are either more individualistic (e.g., cats?) or more communal (e.g., bees) than humans, would they necessarily need to evolve the ability to communicate linguistically? Hmmmm...

Posted by: Amit Dubey on August 14, 2003 05:48 AM

One of the most difficult parameters in the Drake equation to estimate is f_c, the fraction of machine civilisations which are willing and able to communicate. In Brad's example on his puzzle page, this is put as high as 0.1 and as low as 0.01. Note that this is the average value over the whole life of the civilisation, whose length is suggested at 10-50 years.
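
(For reference - the Drake equation is N = R* x fp x ne x fl x fi x fc x L, the expected number of communicating civilisations in the galaxy. A minimal sketch, with parameter values that are illustrative guesses rather than the figures from Brad's puzzle page, showing that with everything else held fixed, N scales linearly in f_c:)

    def drake(R_star=10.0,  # star formation rate, stars per year
              fp=0.5,       # fraction of stars with planets
              ne=2.0,       # habitable planets per planetary system
              fl=0.3,       # fraction of those on which life arises
              fi=0.1,       # fraction of those developing intelligence
              fc=0.1,       # fraction willing and able to communicate
              L=50.0):      # years a civilisation keeps signalling
        return R_star * fp * ne * fl * fi * fc * L

    print(drake(fc=0.1))    # high estimate: N = 1.5
    print(drake(fc=0.01))   # low estimate: N = 0.15, a tenfold drop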

There's good reason to take a low estimate. Prudent civilisations would be worried about the horrific risk of contacting bad aliens (BEMs, kzin). Moral civilisations would be worried about causing catastrophic culture shock and reducing rich, self-reliant civilisations to depressed cargo cults. Respect for the moral autonomy of others (Kant) implies not interfering with them in unpredictable and possibly harmful ways.

A civilisation would have to be both prudent and moral (at least internally - and reason requires extending concern to other intelligent beings) to survive a long time, and it's far from clear that there's a possible evolutionary path to reliably virtuous prudence. Sadly, there's a contradiction in the idea of a DARPA PAM-style futures contract on the collapse of Earth civilisation in the next 50 years, but it wouldn't trade at zero.

What a prudent, rational civilisation might do is explore with robots. They would take great care to keep the robots' reporting untraceable, with heavily defended and/or self-destructing cutouts in case they found the bad guys. If you come across an alien explorer robot, don't touch. But since its AI would inherit the prudence and probable virtue of its creators, you could safely put it on the ballot for Governor of California.

Posted by: James Wimberley on August 14, 2003 06:02 AM

Be isolated no more - but watch out for the post-singularity universe knocking you upside the head, in a very, very caring manner.

Also, post-singularity, it was decided that it was super nice to have relatives around to do some free babysitting.

Posted by: northernLights on August 14, 2003 06:17 AM

All good responses, though with one exception no-one was immodest enough to claim the paradox was solved. Remember, it's not MY paradox. It's been puzzled over by people far smarter than I.

I do appreciate the thought though!!

The catch of the Fermi Paradox, though, is that if even ONE civilization propagates across star systems, it quickly (within galactic time scales) covers the galaxy. So the sieve preventing that from happening has to be VERY tight.
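
(The arithmetic behind "quickly", with assumed numbers - the wavefront speed in particular is a made-up figure, though the conclusion is insensitive to it:)

    GALAXY_DIAMETER_LY = 1.0e5   # Milky Way diameter, light-years
    GALAXY_AGE_YR = 1.0e10       # rough age of the galactic disk, years
    WAVEFRONT_SPEED = 0.005      # colonization speed as a fraction of c (assumed)

    crossing_time_yr = GALAXY_DIAMETER_LY / WAVEFRONT_SPEED
    print(crossing_time_yr)                  # 2.0e7 years to cross the galaxy
    print(crossing_time_yr / GALAXY_AGE_YR)  # 0.002 -- an instant, galactically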

That's what's interesting about the paradox. What could be THAT tight? Mats proposes a common solution to the paradox, but that solution implies a universal law -- all civilizations crash. Mats argues this by analogy to biological species, but I'm not sure the analogy holds. For that matter, why use a species as the analogy rather than a "kingdom"?

We could easily become extinct, for example, but our "memes" could propagate into abiological entities.

James suggests exploring with robots. I suspect no biological entities actually ever cross between star systems -- that might be a hard barrier. But that falls into the post-singular category of answers to Fermi's paradox.

That's the tight sieve I prefer. Civilizations are never stable. They either crash and burn (Mats's preference) or they go post-singular. When they are post-singular, they aren't interested in exploration. It's not something post-singular entities ever do. I don't know why; I'm definitely pre-singular.


So, there might be such a universal law, analogous to Gödel's Theorem: "Any nervous system complex enough to create a technological civilization will destroy itself." That would be interesting to know. But it's not self-evident.

Posted by: John Faughnan on August 14, 2003 01:50 PM

Point taken, John - my posts have displayed me as being the least humble, and the one that likes crashing and burning the most. Let me try to repair:

Assume that advanced civilizations actually are prone to go extinct. Then only those with rock-solid "security thinking" will survive: non-proliferation, global control of emissions of things like greenhouse gases, and so on. Wouldn't this security thinking also make them impose strong "radio discipline", so that they wouldn't be detected by possibly hostile foreign civilizations?

Posted by: Mats on August 14, 2003 02:55 PM

Who else read the classic sci-fi short story from...probably the 40s, in which a human ship and an alien ship meet up in deep space? Neither can trust the other, and either could possibly destroy the other's civilization based on the technology on display in the two ships. Ultimately, the two crews agree to exchange ships and go back to their homeworlds...

I would also add that the sound from the beginning of "Contact" is exactly the same as the "womb" setting on the sound machine for my new baby.

I hope that has dragged everyone OT enough.

Posted by: Robert Green on August 14, 2003 02:58 PM

"Who else read the classic sci-fi short story from...probably the 40s, in which a human ship and an alien ship meet up in deep space. Neither can trust the other, and either could possibly destroy the other's civilization based on the technology on display in the two ships. ultimately, the two crews agree to exchange ships and go back to their homeworlds..."

Murray Leinster, "First Contact". Excellent story. Highlights include both sides realizing that they were using the same techniques to disguise their origins, and, just before the two ships part, the members of each crew passing the time telling each other 'dirty jokes'. (It shows when the story was written that both crews were 'all male'.)

Posted by: Oscar Zoalaster on August 14, 2003 03:20 PM

Let me get this straight. The "hard science fiction" and "transhuman" types believe:

* That one day we will become immortal
* That this immortality will involve leaving behind our physical body
* That all of our wants will be satisfied without physical effort
* That the day is close at hand when all this will happen

and now that

* There are people out in the sky who care about us.

Remind me again, chaps, why you gave up believing in God?

Posted by: dsquared on August 14, 2003 11:26 PM

They don't believe in God. They believe that there *will be* a God. (Or perhaps that they *will be* God.)

Posted by: Brad DeLong on August 15, 2003 11:15 AM

They don't believe in God. They believe that there *will be* a God. (Or perhaps that they *will be* God.)
Posted by: Brad DeLong on August 15, 2003 11:15 AM

One of the obvious solutions to Fermi's Paradox is that the universe was designed so that there would be very few sentient species. Maybe only one.

Of course this solution to the paradox doesn't say much about the nature of the Designer. The Designer might be more like a cosmic computer geek than a wise God.

I don't think that's actually part of transhumanism, just Fermi's Paradox.

Posted by: John Faughnan on August 15, 2003 09:53 PM
