OMG! I MISSED THE SINGULARITY? by Extropia DaSilva


I had to share this; it’s too good to just link to. The link to the original is below, and I highly suggest you check out her blog, but this article is quite well written.

OMG! I MISSED THE SINGULARITY?

by Extropia DaSilva
http://extropiadasilva.wordpress.com/2012/03/13/omg-i-missed-the-singularity/

If you have visited the H+ Magazine website, you are probably familiar with the advertisement showing a comic book rendering of a worried woman asking herself, “OMG…I missed the Singularity?”.

Now, there’s an interesting thought. Would it, in fact, be possible for the Singularity to happen without being noticed?

I think there are many reasons to believe people will miss the occurrence of the Singularity. Reasons such as:

1: The belief that the Singularity = the creation of artificial intelligence.

But that is A pathway to Singularity, not THE path. Vernor Vinge described several other technological developments that could lead to the Singularity:

“The IA Scenario: We enhance human intelligence through human-to-computer interfaces–that is, we achieve intelligence amplification (IA).

The Biomedical Scenario: We directly increase our intelligence by improving the neurological operation of our brains.

The Internet Scenario: Humanity, its networks, computers, and databases become sufficiently effective to be considered a superhuman being.

The Digital Gaia Scenario: The network of embedded microprocessors becomes sufficiently effective to be considered a superhuman being”.

If you think the Singularity is all about making an artificial general intelligence and no machine qualifying as such is on the horizon, you might mistakenly deny it is happening, when actually one or more of those other pathways has enabled a transcendence towards a super-intelligence.

Also, what kind of AI are you focusing on? Let’s face it, when most people think of artificial intelligence, they are imagining some machine acting like a person. Following the triumph of IBM’s Watson, professional sceptic Michael Shermer pointed out that Watson could not feel triumphant about its victory. Watson cannot feel anything, it is just a machine.

Now, in this instance Shermer was probably making a fair point. We should not get too carried away with the kinds of higher-order intentionality this computer possesses. Watson did not know it won in the sense that its human rivals know they lost. It does not have the level of self-awareness necessary for the processing of such higher-order concepts. But, when anticipating the Singularity, should we really focus on ‘AI that has humanlike feelings’? What about software designed to forage through gargantuan databases, detecting patterns that humans cannot recognize? So far as I know, Yudkowsky’s Seed AI has nothing to do with making a robot that can convincingly emote; it is entirely to do with engineering impersonal AI specialized to develop software.

2: The belief that some single thing will accelerate us towards superintelligence.

In other words, we look to a specific technology to carry us over the threshold: ‘Cyborg implants improve with every generation, until we have chips in our heads making us supergeniuses.’

But the last two scenarios Vinge outlined make it possible for superintelligence to arise out of networks of technologies. Consider that most basic of web-browsing activities, following a hyperlink. To me this action is utterly trivial, but it is giving away useful information, in that every mouse click, every tap on a touchscreen, signals that ‘this is interesting to someone’. When combined with the hyperlink following of everyone else, you get (in Michael Chorost’s words) “human declarative knowledge, human choices about that knowledge”. Combine that with search engines and you get “a computer system that collects votes about those choices”. Adding the Internet gives you “a high-speed, far-flung communications network that integrates them all”. Put it all together, and what you get is a planetary-wide system that is beginning to look like a brain. But you only see it that way if you cast your gaze over a sufficiently wide network of networks.
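The ‘votes’ idea can be made concrete with a toy sketch (the click log and URLs here are entirely hypothetical): each followed hyperlink is tallied as a vote that something is interesting to someone, and ranking by vote count is the crudest possible version of what Chorost calls “a computer system that collects votes about those choices”.

```python
from collections import Counter

# Hypothetical click log: each entry is a link somebody chose to follow.
# Every click is a tiny "vote" saying "this is interesting to someone".
clicks = [
    "example.org/singularity",
    "example.org/ai",
    "example.org/singularity",
    "example.org/gaia",
    "example.org/singularity",
    "example.org/ai",
]

# Tally the votes and rank links by how often they were chosen.
votes = Counter(clicks)
ranking = [link for link, _ in votes.most_common()]

print(ranking)  # most-clicked links first
```

Real search engines obviously weigh far more signals than raw clicks, but the point survives even in this minimal form: no single click matters, yet the aggregate of everyone’s trivial choices produces collective knowledge that no individual supplied.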

3: The belief that the Singularity will happen by 2045 (or 2030, or…).

In other words, thinking in terms of ‘singularity as event’ as if somebody one day will throw a switch and, behold! The Singularity happens. But maybe we should learn from the transition of mere matter to ‘life’. The modern view is that there was no event we can call the origin of life, because it is decidedly arbitrary to pinpoint the moment when a system of increasing complexity becomes ‘alive’. To paraphrase Rodney Brooks, the origin of life was a period, not an event. It seems reasonable to assume that the transition of a system of increasing complexity into a state of superintelligence will also be a period rather than an event. That this may be so becomes most apparent when you consider these words of Ray Kurzweil (which obviously are also relevant to point 2):

“The kinds of scenarios I’m talking about 20 or 30 years from now are not being developed because there’s one lab that’s sitting there creating a human-level AI in a machine. They’re happening because it’s the inevitable end result of thousands of little steps. Each step is conservative, not radical, and makes perfect sense. Each one is just the next generation in some company’s product.”

By focusing on the prophesied ‘event’ of the Singularity, we may miss the period of time in which cumulative and convergent technologies evolve into superintelligence. Also, those conservative steps may conspire to take us over the threshold without our noticing it is happening. What Kurzweil said about each step being conservative, not radical, and perfectly sensible applies at all times. This is because any new technology can only be brought into existence using methods and components that already exist: invention results from people taking what is known at the time, adding a modicum of inspiration, and combining bits and pieces that already exist in order to create that new technology (which then becomes a potential building block for newer inventions). Therefore, the people of 2045 will react to nanosystems or mind uploading or Artilects from the perspective of the enabling technologies and sciences of their day. To them, such things will likely be as ordinary as iPads and streaming gaming services are to us.

We may find that when we get to 2045 we live in fast times, but we can see on the horizon upcoming technologies that will make our current capabilities seem quite mundane. So we defer announcing ‘the Singularity is here’ until that REALLY gosh-wow stuff arrives. Then, when it does and we look to the future, once again we see technologies coming that make our current capabilities seem mundane, so once again we think “Oh, this is not the Singularity, THAT is!”, and so on, ad infinitum.

Whenever you apply any of these beliefs about the singularity (it is AI, it will come from A technology, it will be an event…) you artificially reduce the probability space in which the singularity can arise. The more of those beliefs apply to your way of thinking, the smaller your probability space will be compared to the actual probability space. That increases the chances of the Singularity occurring in ways and places you were not looking for it. It could happen and you would miss it.

The EPR Paradox


In her book Punk Science: Inside the Mind of God, Dr Manjir Samantha-Laughton explains particle entanglement as a connection between two particles that causes them to have “equal and opposite spin” (pg. 73). But she goes on to state that you can’t know which state a particle is in until you measure it. Einstein recognized that determining the spin of the “first particle determines the spin of the second” (pg. 73), which seemed to mean that the particles communicate instantaneously, no matter the distance, faster than the speed of light. If that were true, it would completely violate his theory of relativity. Later, John Bell and Alain Aspect’s team proved that “entangled particles do display these spooky non-local connections” (pg. 74).
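The “equal and opposite spin” correlation can be illustrated with a toy simulation (this is a classical sketch of my own, not the book’s, and it captures only the anti-correlation itself, not the Bell-inequality violations that Aspect’s experiments measured). Each measurement outcome is random, yet the two particles always disagree; and because each side, seen alone, is just a coin flip, the correlation carries no usable message.

```python
import random

def measure_entangled_pair(rng):
    """Toy model of 'equal and opposite spin': each outcome is random,
    but the pair is always perfectly anti-correlated."""
    spin_a = rng.choice(["up", "down"])
    spin_b = "down" if spin_a == "up" else "up"
    return spin_a, spin_b

rng = random.Random(42)  # fixed seed so the run is repeatable
pairs = [measure_entangled_pair(rng) for _ in range(1000)]

# Perfect anti-correlation, every single time...
assert all(a != b for a, b in pairs)

# ...yet particle A on its own looks like a fair coin (roughly 50/50),
# which is why the correlation alone cannot be used to send a signal.
ups = sum(a == "up" for a, _ in pairs)
print(ups / 1000)
```

The seeming paradox lives in that gap: the outcomes are correlated across any distance, but neither side can steer its own result, so nothing travels faster than light in a way you could exploit.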

Two things bother me about this. First, it seems as though someone or something is mocking us: making things appear relatively “easy” to figure out, letting us find the answers, only to prove us wrong with a “Haha, it’s actually WAY more complex than that.”

Secondly, if this is true, it could make the Singularity possible, among other things. I don’t like the idea that I could ‘upload’ my thoughts to the “consciousness” and someone else could read them, or “like” them, let alone that everyone else could do the same. The human mind is meant to be a private thing. We aren’t meant to be able to express what we are actually feeling, or have people really understand the truth of what we are saying (or the lack of it). Non-local connections are exactly as Einstein said: “spooky”. Only, we are just coming to realize that they are much spookier than we could have expected.

Imagine trying to go out on a first date. All your anxiety and nerves would be passed on to the person you’re meeting (or at least could be). So not only would you have your own anxiety to worry about, you would have the other person’s as well, which would make things twice as awkward. Plus, how could you possibly prevent “hacking”, or mind control, or “false advertising” (“Come join us, we know how to make you a stronger, better person.”)? I don’t want the contents of my mind pouring out to the rest of the world any more than they already do in the collective unconscious. I mean, it’s weird enough to have the same topic come up with completely separate individuals in a single day, let alone have them “tap in” to your every thought and feeling. Imagine the overload we’d have.

Of course, all that depends on the assumption that our brains couldn’t handle it. But we can’t possibly know that. If the most basic of particles is able to communicate non-locally, the human brain should definitely have the capability. Perhaps it’s only a matter of getting the brain to use more of its capacity at any given time. Maybe then it could handle a whole ton of things that we can’t even imagine.

We know so little. About our world, ourselves, and everything we’re made of. Let alone everything else. It’s scary, spooky even.

A Few Updates


Russell Eponym played at the grand opening of Philadelphia, the “Brotherly Love City” in SL. It was beautiful, and his voice is always enchanting. I imagine around twenty people showed up.

Today is the last day I’ll be on the internet until the twenty-seventh. I’ll be back then for the E&S reading and philosophy discussion at 10am SLT/PST. I’ve not yet decided on a topic, so if you have any suggestions please feel free to comment.

Not much of an update today, I’m afraid. I haven’t been going to many events in SL because of packing and the holidays. I did manage to make it to Extropia DaSilva’s Annual Christmas Lecture, called “Thinkers Lecture 2011: Pondscum, Scared Mice, and the Global Brain”. You can find the entire transcript over on her blog, here. It was excellent, and many, many people showed up. It’s probably the lecture that’s made me think the most, and it was very well done. It presented the idea that technology may enable the human mind to connect to a “global brain”: a Facebook-like feature in our brains that would allow our friends to feel what we feel and tune into our thoughts, sensations, emotions, etc. Communication on a grand scale. It is definitely an interesting read, and I highly suggest you read it if you weren’t able to make it to the online lecture.

I’m excited for the events to start up again in the new year. Hopefully I’ll be able to get around.