Singularity World

Towards a full brain simulation

Aug. 14, 2010, under Brain Simulation

The Mysterious Brain

The human brain is capable of holding more ideas than there are atoms in the known universe, by constantly firing chemical and electrical signals across roughly 500 trillion synaptic connections between some 100 billion neurons.

All this magic fits in a box of roughly 1,400 cm³, weighs only about 1.4 kg, and consumes just 10-20 watts of energy.

While neuroscientists around the world are uncovering the mystery little by little, we still have very little idea of how physical brain activity translates into the intellectual and emotional planes, as illustrated by Discover Magazine's list of the top 10 unsolved brain mysteries.

An engineering perspective

A computer architect will easily notice that the structure of the brain exhibits many desirable characteristics of a distributed system: cheap, highly redundant, simple, loosely specialized units, each responsible for just a few basic functions. Interconnections evolve autonomously to follow the organic flow of information. Certain parts of the system are specialized and optimized for specific functionality, whether it is a rapid, hard-wired response or a slower, "neuro-bureaucratic" chain of logic and inference leading to a rational decision.

An Artificial Intelligence perspective

Humankind has a long history of attempts to understand, formalize, model, and recreate consciousness, reasoning, and intelligence; some of those attempts date back to Aristotle. The history of AI is strongly intertwined with the history of scientific and technological development: paradigm shifts led from mechanical "thinking" machines, through electric circuitry, analog electronics, and digital electronics, to the age of software AI, which itself spans rule-based symbolic reasoning and logical inference as well as uncertainty-embracing probabilistic machine-learning approaches such as artificial neural networks. In February 2010, MIT research scientist Noah Goodman introduced a "grand unified theory of AI" that combines the rule-based and probabilistic approaches. The theory has not yet proven itself in industrial applications, but it is believed to hold the potential of becoming the holy grail of AI.
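
To make the contrast concrete, here is a minimal, hypothetical sketch in Python (an illustration of the general idea, not Goodman's actual formalism): hard symbolic IF-THEN rules are softened into conditional probabilities, and the likelihood of each conclusion is estimated by sampling.

    import random

    # Toy illustration only: each classical IF-THEN rule is given a
    # probability of holding, and the probability of every conclusion is
    # estimated by repeatedly sampling which rules fire.
    RULES = {
        # premise: (conclusion, probability that the rule holds)
        "is_bird":    ("can_fly", 0.9),   # most birds fly, but not all
        "is_penguin": ("is_bird", 1.0),   # a penguin is certainly a bird
    }

    def infer(facts, rules, n_samples=10_000):
        """Estimate P(fact) by forward-chaining the rules in many sampled worlds."""
        counts = {}
        for _ in range(n_samples):
            # Decide once per sample whether each uncertain rule holds.
            active = {prem: concl for prem, (concl, p) in rules.items()
                      if random.random() < p}
            known = set(facts)
            changed = True
            while changed:
                changed = False
                for premise, conclusion in active.items():
                    if premise in known and conclusion not in known:
                        known.add(conclusion)
                        changed = True
            for fact in known:
                counts[fact] = counts.get(fact, 0) + 1
        return {fact: c / n_samples for fact, c in counts.items()}

    print(infer({"is_penguin"}, RULES))
    # e.g. {'is_penguin': 1.0, 'is_bird': 1.0, 'can_fly': ~0.9}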

Brain modeling and simulation approach

The Blue Brain Project takes the opposite approach: reverse-engineering the brain by modeling the neural topology at the macro scale and building empirical models of individual neurons at the micro scale. These models already form the basis of a brain-activity simulation, powered in a distributed manner by 8,192 processors totaling roughly 28 teraflops of computing power. As of 2010, a rack of servers is required to simulate small subsets of brain functionality. The Blue Brain Project leads me to the main question I would like to raise in this post.
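
A quick back-of-envelope calculation, taking the figures quoted in this post at face value (8,192 processors, ~28 teraflops, and the ~10'000-neuron column mentioned in the FAQ below), gives a feel for the cost per simulated neuron:

    # Back-of-envelope cost per simulated neuron, using the figures quoted
    # in this post (not official specifications).
    processors      = 8_192
    total_flops     = 28e12      # ~28 teraflops
    neurons_per_ncc = 10_000     # one detailed neocortical column (see FAQ below)

    print(f"{total_flops / neurons_per_ncc:.1e} flop/s per detailed neuron")   # ~2.8e9
    print(f"{neurons_per_ncc / processors:.2f} neurons per processor")         # ~1.22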

What would it take to create a complete simulation of the human brain?

I’d like to quote four questions from the FAQ section of the Blue Brain website:

Q: What computer power would you need to simulate the whole brain?

A: The human neocortex has many millions of NCCs (neocortical columns). For this reason we would first need an accurate replica of the NCC, and then we would simplify it before we begin duplications. The other approach is to convert the software NCC into a hardware version – a chip, a Blue Gene on a chip – and then make as many copies as one wants.

The number of neurons in the neocortex varies markedly, from 10-100 billion in the human brain down to millions in small animals. At this stage the important issue is how to build one column. A column has 10'000-100'000 neurons depending on the species and the particular neocortical region, and there are millions of columns.

We have estimated that we may approach real-time simulation of an NCC with 10'000 morphologically complex neurons interconnected by roughly 10^8 synapses on an 8-12'000-processor Blue Gene/L machine. Simulating a human brain, with its millions of NCCs, will probably require more than proportionately more processing power. That should give an idea of how much computing power will need to increase before we can simulate the human brain at the cellular level in real time. Simulating the human brain at the molecular level is unlikely with current computing systems.
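
Extrapolating the FAQ's own figures gives a sense of the scale involved. The one-million-column count and the ten-fold "more than proportional" overhead below are rough assumptions for illustration only:

    # Rough extrapolation from the FAQ's figures. The column count and the
    # overhead factor are assumptions for illustration.
    processors_per_ncc = 10_000        # midpoint of the 8-12'000 quoted above
    ncc_count          = 1_000_000     # order-of-magnitude column count
    overhead_factor    = 10            # guessed cost of inter-column wiring

    processors_needed = processors_per_ncc * ncc_count * overhead_factor
    print(f"~{processors_needed:.0e} Blue Gene/L-class processors")   # ~1e+11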

Q: Do you believe a computer can ever be an exact simulation of the human brain?

A: This is neither likely nor necessary. It will be very difficult because, in the brain, every molecule is a powerful computer and we would need to simulate the structure and function of trillions upon trillions of molecules as well as all the rules that govern how they interact. You would literally need computers that are trillions of times bigger and faster than anything existing today. Mammals can make very good copies of each other; we do not need to make computer copies of mammals. That is not our goal. We want to try to understand how the biological system functions and malfunctions so that this knowledge can benefit mankind.

Q: The Blue Gene is one of the fastest supercomputers around, but is it enough?

A: Our Blue Gene is only just enough to launch this project. It is enough to simulate about 50’000 fully complex neurons close to real-time. Much more power will be needed to go beyond this. We can also simulate about 100 million simple neurons with the current power. In short, the computing power and not the neurophysiological data is the limiting factor.

Q: You are using 8’000 processors to simulate 10’000 neurons — is this a 1 neuron/processor model?

A: There is no software in the world currently that can run such simulations properly. The first version will place about one neuron per processor – some processors will hold more than one because those neurons are less demanding. We can in principle simulate about 50’000 neurons by placing many neurons on a processor. The first version of Blue Gene cannot hold more than a few neurons on each processor; later versions will probably be able to hold hundreds.
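
The mapping described here – demanding neurons getting a processor to themselves while simpler ones share – is essentially a packing problem. The sketch below is a toy illustration with made-up neuron costs, not Blue Brain's actual load-balancing scheme:

    # Toy illustration of packing neurons onto processors: demanding neurons
    # occupy a processor alone, simple ones share. Costs and the per-processor
    # budget are made-up numbers.
    def assign_neurons(costs, budget=1.0):
        """Greedy first-fit packing; returns the number of processors used."""
        remaining = []                       # spare budget on each processor
        for cost in costs:
            for i, spare in enumerate(remaining):
                if cost <= spare:
                    remaining[i] -= cost
                    break
            else:                            # nothing has room: new processor
                remaining.append(budget - cost)
        return len(remaining)

    # 1'000 neurons: a few complex ones (cost 1.0) and many simple ones (0.1)
    costs = [1.0] * 200 + [0.1] * 800
    print(assign_neurons(costs))             # 280: simple cells share processors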

Conclusions

Today’s CPUs are capable of performing several billion operations per second.

Assuming there are no more than 10 billion computers on Earth, and roughly one detailed neuron per processor (as in the Blue Brain approach), simulating all 100 billion neurons of the human brain with the current architecture would require the computing resources of the entire planet, and then some.
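
A rough sanity check of that claim, reusing the per-neuron cost estimated earlier and assuming about 10 billion operations per second per machine:

    # Rough sanity check; the per-computer throughput is an assumption.
    neurons            = 100e9     # whole brain, cellular-level model
    flops_per_neuron   = 2.8e9     # from the Blue Brain figures quoted earlier
    flops_per_computer = 1e10      # assumed ~10 billion op/s per machine
    computers_on_earth = 10e9      # the upper bound assumed above

    print(f"required:  {neurons * flops_per_neuron:.1e} flop/s")              # 2.8e+20
    print(f"available: {computers_on_earth * flops_per_computer:.1e} flop/s") # 1.0e+20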

The famous diagram by Ray Kurzweil reflects the long-term trend of exponential growth in calculations per second per $1,000 over the years.
Moore’s law predicts a doubling of the number of transistors on an integrated circuit roughly every two years.

Hans Moravec, principal research scientist at the Robotics Institute of Carnegie Mellon University, estimates the human brain’s processing power at around 100 teraflops and its memory capacity at around 100 terabytes.

Extrapolating these trends, we’re roughly 25 years away from a “brain in a box” for $1,000.
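
That figure can be checked roughly: with compute per $1,000 doubling about every two years, and assuming $1,000 bought on the order of 10 gigaflops in 2010, reaching Moravec's 100 teraflops takes a bit over 25 years:

    import math

    # How many doublings until $1,000 of hardware reaches Moravec's estimate?
    # The 2010 baseline and the doubling period are assumptions.
    target_flops   = 100e12   # ~100 teraflops, Moravec's brain estimate
    baseline_flops = 10e9     # assumed flop/s per $1,000 of hardware in 2010
    doubling_years = 2        # Moore's-law-style doubling period

    doublings = math.log2(target_flops / baseline_flops)
    print(f"~{doublings * doubling_years:.0f} years")   # ~27 years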

This short research has led me to many more questions.

  • Is there any way of getting to that point in a shorter period?
  • Is brain simulation indeed a good approach for achieving general AI?
  • Will Moore’s law last for 25 more years?
  • Are current CPU models sufficient for these kinds of tasks?