Artificial Intelligence, the Singularity and humanity's technological future

Some anticipate that an artificial intelligence (AI) will be able to pass the Turing test within as little as 15 years - and that 15 years after that it will trigger the technological Singularity (a kind of threshold after which the pace of technological evolution is so rapid that no human can keep track of it, let alone understand it).


I wouldn't hold my breath for 2029 as the key date; however, I am convinced that a superhuman AI will emerge at some point and then evolve exponentially from there. It is not merely possible, it is inevitable. Resistance is futile.


First we need a good theory of how the brain processes information. Then we need to measure brain activity at the finest scale possible and use the measurements to refine the theory until it is approximately correct. Then we build a bigger brain and use that to build even bigger thinking entities. Size means more memory and more information processing capacity, in short a genius, that could help develop even bigger, faster and, sooner or later, qualitatively more competent brains. And then it is out of our hands, unless we merge with the AIs.

1.    First, the human brain managed to emerge spontaneously from the primordial soup. It would surprise me if no one could ever reverse engineer the brain's functions, first just replicating and then enhancing them, perhaps biologically at first and then in a much more efficient and robust substrate. The level of intelligence (pattern recognition and hierarchical symbolism in the neocortex) is limited by the size of the cranium, whereas an identical structure outside the skull could be expanded by orders of magnitude.

2.    Second, intelligence is not magic; intelligence seems "simply" to be based on recursive pattern recognition. The brain survives by correctly observing patterns in its environment and anticipating and avoiding lethal threats. One important aspect of the environment is other people; another is the brain itself and its body. By modelling people (including itself), the brain gives rise to awareness.

3.    Third, whatever intelligence is, it is made of matter (the brain is made of matter, and if not, all bets are off), and it is not likely that the current design and material are the best conceivable. Signal speeds in an ordinary computer, for example, are about one million times faster than in organic matter like the human nervous system. Just transferring the brain as-is to a 2014 computer substrate would make it one million times faster (a back-of-the-envelope check follows below). By improving the actual information processing architecture, completely new orders of speed and capability are highly likely to emerge, judging by the recent history of hardware and software technology.
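
A minimal Python sketch of where that "million times" figure comes from. The numbers are rough, commonly cited orders of magnitude that I am assuming here, not measurements:

```python
# Rough sanity check of the "one million times faster" claim.
# All figures are assumed orders of magnitude, not measured data.

neuron_firing_rate_hz = 200        # a fast biological neuron, ~10^2 Hz
transistor_clock_hz = 2e9          # a 2014-era CPU clock, ~10^9 Hz

nerve_conduction_m_per_s = 120     # fastest myelinated axons
wire_signal_m_per_s = 2e8          # roughly 2/3 the speed of light in a conductor

print(f"Switching speedup:   ~{transistor_clock_hz / neuron_firing_rate_hz:.0e}x")
print(f"Propagation speedup: ~{wire_signal_m_per_s / nerve_conduction_m_per_s:.0e}x")
# Both ratios land in the 10^6-10^7 range, i.e. around a million times faster.
```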

4.    One exciting avenue for explaining, exploring and evolving past the human level of intelligence is Kurzweil's theory of hierarchical pattern recognizers working on hidden Markov model principles (a toy code sketch of the idea follows after point e below).

a.    On the first level, specialized, simple pattern recognizers (PRs) are triggered by external stimuli, e.g. a straight horizontal line, a vertical one, a curve, or some other fundamental visual pattern.

      i.    E.g., a horizontal straight line primes the next level of PRs, excites them, for anything that usually contains a horizontal line, like the letter "A", or the horizon, or a stick... a LOT of things contain horizontal lines. If a "diagonal line" PR is triggered simultaneously, the likelihood that an "A" is being seen increases, and the corresponding PRs are excited and extra prepared to detect cues pertaining to an "A".

      ii.    The likelihood of detecting a diagonal line is also increased, since a horizontal line often comes with one of those.

      iii.    Other letter PRs, like "E" and "B", also get excited, since they too have horizontal straight lines in them.

b.    Higher-level PRs also get excited, such as word PRs with the letter "A", "E" or "B" in them ("Apple", "Ape", "BANANA", "Adam", etc.).

      i.    Since everything happens in parallel in the brain, a horizontal line stimulates, to various degrees, all levels of PRs, from letter and word PRs to smell PRs (apple, banana) and even childhood memories of old relatives and apple pie. The PRs get ready to detect all this without you knowing it.

      ii.    If an "A" is more or less firmly established, the threshold for detecting things that usually come with an "A" is lowered, such as "B" (in the alphabet) or "P" and "E" (as in "apple" or "ape"). That also explains why we can read a word that is at an angle or partly covered.

c.    Once the word "Apple" is detected (actually in parallel, of course), it becomes easier to detect "fruit", "pie", "oranges", "vitamins" or whatever has historically occurred next to the word "apple" for the brain in question.

d.    Even higher up in the hierarchy, other PRs get ready to detect biblical stories of "Adam" or other tales of knowledge, shame or whatever has occurred in connection with apples and Adam before. Simultaneously, more letter PRs at the more fundamental level get ready, in a cascade of recognized patterns and excited PRs, to read the whole sentence or page, if that is what was detected.

e.    Depending on how intelligent the brain is, it has a certain upper limit to its hierarchy, where very complex patterns like jealousy or love reside, but there is no reason to assume that it has to end there. A future person or AI could have an arbitrary number of levels and correspondingly complex prepared patterns or "emotions".
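
To make the mechanism above concrete, here is a toy Python sketch of two levels of pattern recognizers with bottom-up triggering and top-down priming. The class, the thresholds and the "excitement" variable are my own illustrative simplifications, not Kurzweil's actual model and not a real hidden Markov implementation:

```python
# A toy model of hierarchical pattern recognizers (PRs). The design below
# (class, thresholds, "excitement") is an illustrative simplification of
# the idea, not Kurzweil's actual model or a hidden Markov implementation.

class PatternRecognizer:
    def __init__(self, name, inputs, threshold):
        self.name = name
        self.inputs = inputs        # lower-level patterns this PR listens for
        self.threshold = threshold  # evidence needed before it "fires"
        self.excitement = 0.0       # top-down priming lowers the effective bar

    def fires(self, detected):
        """True if enough of this PR's inputs are among the detected patterns."""
        evidence = sum(1 for p in self.inputs if p in detected)
        return evidence >= self.threshold - self.excitement

# Level 1: letter PRs built from stroke features.
letter_A = PatternRecognizer("A", ["horizontal line", "diagonal line"], threshold=2)
letter_E = PatternRecognizer("E", ["horizontal line", "vertical line"], threshold=2)

# Bottom-up: two strokes arrive as external stimuli.
strokes = {"horizontal line", "diagonal line"}
letters = {pr.name for pr in (letter_A, letter_E) if pr.fires(strokes)}
print(letters)  # {'A'} - the diagonal line tips the balance toward "A", not "E"

# Level 2: a word PR built from letters. Seeing "A" excites it (top-down
# priming), lowering the bar for its remaining letters, which is why a
# partly covered word can still be recognized.
word_apple = PatternRecognizer("APPLE", ["A", "P", "L", "E"], threshold=3)
if "A" in letters:
    word_apple.excitement = 1.0
print(word_apple.fires({"A", "P"}))  # True, despite the missing "L" and "E"
```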

5.    If the theory above, or something similar, turns out to be close to the material truth, an iterative process of modelling the brain and comparing the model with the brain could commence. Gradually, as the models get more and more accurate and the resolution of brain scanning and imaging improves in both the spatial and the temporal dimension, nothing seems able to prevent a future point, within a handful of decades, where we know how and when a neuron fires, how different neurons interact to form pattern recognizers, and how these in turn are organized in hierarchies to manage a symbolic representation of the environment.

6.    Once we have a functional model that behaves exactly like the brain, it will be possible, depending on the relative state of biotechnology, computing and nanotechnology, to expand a brain by:

a.    Transplanting more neocortex into a surgically enlarged cranium

b.    Fusing the biological neocortex with an artificial, computer-based hierarchy of pattern recognizers

c.    Replicating the brain's functions stand-alone, independently of a human, in a computer or robot

7.    After that, it is only a matter of mechanically expanding the number of PRs (from the current roughly 300 million) and the number of levels of PRs to create an entity with more memory and more, higher-level processing capacity than the unenhanced human brain. That entity would be a genius surpassing the information processing and pattern recognition capacity of, e.g., Einstein or Newton. If we create enough of those, sooner or later they will be able to improve the brain model faster than any unenhanced person could, thus triggering an exponential evolution of intelligence.

8.    Once the functional model is there, and once the cycle of one AI creating the next level of AI, which creates an even higher level of AI, and so on, is in place, things will go very fast. Different AIs may compete for the lead, or they may merge to evolve even faster. Why would two half-witted AIs stand by and watch somebody else take the lead if they could simply merge and take it themselves? And why would three other AIs stand idly by and watch that process instead of merging themselves...

9.    And why would any sane human being not seize the opportunity to expand his own intelligence by fusing or merging with as high an order of intelligence as possible?

10.  Will we thus become the Borg collective of Star Trek? Would that be bad?


Some have asked me why the Singularity might mean that the universe wakes up, becomes intelligent. Well, nothing about the Singularity is clear, or even possible to analyze; that is part of the notion. Just as it is impossible for a single-celled organism or a small parasite to speculate about human ambitions or goals, we can't think meaningfully about what would drive a future superintelligent entity.

On the other hand, we can make an educated guess about the starting trajectory, based on how we achieve the first artificial intelligence, and extrapolate from there:

  • Scientists are driven by an urge to explore and explain their surroundings and the universe
  • Entrepreneurs want to see their creations grow
  • The IT industry specifically has always strived to create computer systems with greater capacity (memory, processing power, software algorithms)
  • Astrophysicists and IT entrepreneurs constantly demand more computing power to measure and model the universe, to measure client needs and behaviour, and to market products that sell better
  • AI researchers try to enhance the algorithms and the hardware they use to create the first general artificial intelligence. Once they almost succeed in achieving a generalized, hierarchical, symbol-based information manager modelled on the brain's architecture, on a hardware platform powerful enough to process input and output in real time...

...they will take the next step...

...i.e., using faster hardware, more processing capacity and better information management algorithms to attain roughly the human capacity for thought and self-reference, which is needed to understand the environment, including itself and other creatures, and to communicate its thoughts to them.

Then you take another step, probably in the same general direction, with more and faster hardware and more capable, more recursive software.

I think that after the last one hundred years of logarithmically straight and predictable development in computing capability per dollar, we will continue in the same direction as long as possible. As long as companies can gain a competitive advantage from better, faster, smarter, more individualized AI agents in-house, in marketing and in research, the AI hardware and software arms race will continue.

Ray Kurzweil has a public wager going in which he claims that by 2029 an artificial intelligence created by humans will pass a very demanding test of consciousness and most likely convince a large number of people that the AI actually thinks and knows that it thinks. That would be more credit than many Americans gave their fellow black citizens just 200 years ago.

Just a couple of years after the awareness test, the AI's intelligence level would increase by a factor of 2 every year, or, pessimistically projected, every second year. By 2039, an AI costing as much as the first GAI of 2029 could be 1000x as intelligent and fast as an average human brain. Imagine that you or your friends, or scientists, or even Einstein or Newton, had 1000 years to develop their theories every calendar year. Imagine networking 1000 brains, each with 1000x Einstein's capacity for symbolic thinking and complex hierarchies. And imagine that they already had access to all human knowledge and were not limited by the human brain's difficulty in intuitively modelling more than 3-4 dimensions...

And this is only 25 years from now. Add another 10 years at a time to this thought experiment (each decade adding a factor of 1000x, or pessimistically 30x; a quick check of the arithmetic follows below) and consider more and more people getting access to the same capabilities, learning about them, caring about them and wanting to network with each other at AI speeds. The cost of food, shelter and clean energy would fall to almost zero, freeing up the time and imagination of every living person and AI for computing, collaboration and the exchange of digital products. The most creative and the fastest would carry the highest value.
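
A quick Python check of the doubling arithmetic in the two paragraphs above; the doubling times are the projection's assumptions, not data:

```python
# Sanity check of the doubling arithmetic. The doubling times (one year,
# or pessimistically two) are assumptions of the projection, not data.

def growth_factor(years, doubling_time):
    """Capability multiple after `years` of steady doubling."""
    return 2 ** (years / doubling_time)

print(growth_factor(10, 1))  # 1024.0 -> the "~1000x" per decade, optimistic
print(growth_factor(10, 2))  # 32.0   -> the "~30x" per decade, pessimistic
```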

That is the trajectory we set out on from the beginning. Growth and propagation are ingrained in us, as in all creatures, and we seem destined to make AIs in our own image.

Somewhere, or rather sometime, along this path, it is reasonable to expect that an unenhanced human being will stand no chance of following or understanding the development. It was easy when several generations passed with no discernible technological change. It was easy when the steam engine, electricity, the combustion engine (cars, airplanes), radio, TV, the computer, the internet, search engines and social networks arrived centuries, decades or years apart. But what happens when crucial steps are taken every year, every six months, every quarter, every month, every week, every day or every hour?

Thereabouts lies the Singularity, when extreme AIs, in collaboration with enhanced humans, compete to be the fastest and most creative, and do it so well that they build generation upon generation at a speed no one can follow.

The point is that to get there, we have to take every step along the way: we have to want to be faster, find and understand hierarchical patterns, model them, create higher levels of abstraction, automate the very process of abstraction and make the AIs do this themselves, and implement it all on ever faster, and eventually ever bigger, physical platforms once the computation modules are packed as efficiently as possible.

Everything that leads up to the Singularity also points toward claiming more and more matter and energy for computation. Unless the AIs reverse the very process of their own birth and growth, the natural direction of development seems to be toward turning all available matter into a kind of artificial brain matter.

Resigning ourselves to statements such as "We can't know what an AI will do" means ignoring the trajectory clearly mapped out over at least the 150 years leading up to the creation of a GAI and the Singularity. That we don't understand what they can do, or their motives, thoughts and feelings, does not change the likelihood of their wanting more of everything: multiplying, spreading, increasing their intelligence, their information gathering and their processing power, sucking up more and more energy, and understanding more and more of the universe, perhaps ultimately becoming able to alter the universe's own future or start new baby universes.

I think Gardner's vision of an intelligence sphere expanding at the speed of light is the reasonable conclusion about the future, given an analysis of the IT era so far.

Sometime during this process, an enhanced human or AI should be able to analyze, maintain and improve the human body enough to sustain life indefinitely. By that point, however, I think many would prefer uploading to a more robust and much faster-thinking substrate.

At parties, however, I do find it difficult to make a comeback after opening with "I think mankind will merge with an intelligence sphere expanding at the speed of light" and "Yes, it will happen in our lifetime, and we will be immortal".

Comments:

  1. This was a great and interesting read. I'm a computer engineering student, and AI is one of the areas of engineering that interest me the most, along with robotics. I'll share some of my (perhaps naïve) thoughts down below.

    "Just as impossible as it is for a single-celled organism or a small parasite to speculate about human ambitions or goals, we can't think meaningfully about what would drive a future super intelligent entity."

    – And to add to that: as modern humans, we are currently in a Plato's Cave where the chained entity is our perception of what the mind can achieve; we cannot (or I cannot) even conceive of what kind of intelligence will surface once the singularity of artificial intelligence is reached. With heightened intelligence and creativity, mankind (or AI-kind) will not only improve upon already existing areas of study (ranging from art to psychology to technology), but invent new fields, philosophies and ideas that we can't even imagine at the moment.

    In other fields of study, we can think of what will happen once the technology gets more advanced. In AI, it gets harder, because our minds are not enough right now.

    Thank you for the value. I found your blog through Ludvig Sunström, and will definitely be reading more of your articles. I also read your latest post (6 necessary precautions), which was intriguing as well. You have certainly piqued my interest in the implications of AI.

  2. I agree with Alex: Really interesting.

    Fascinating and daunting.

    I cannot see how a technological singularity is avoidable, and I cannot see how such an entity would not deem mankind potentially lethal vermin that has to be exterminated.

    Maybe, if mankind could influence this entity with compassion.

    Have you read The Selfish Gene?

    Reply:
    1. I think the Singularity and SGAI are the elephant in the room that nobody talks about (but that I guess everybody is at least slightly aware of). It's mankind's greatest threat and opportunity at the same time.

      I don't think fleshy humans have much of a future. Hence, before we are made obsolete or squashed, we need to merge completely with technology. I therefore don't think we'll be eradicated any more than forgotten primitive tribes in the Amazon have been. "Fleshies" will simply be left behind if they want to be, while "Enhanceds" will move on to ventures unimaginable to humans.

      I never did read The Selfish Gene. However, I've read reviews and a synopsis, so I'm familiar with its message.
