I’d like to thank the Junction for inviting me to give a talk about Scala.
Congratulations on moving to the new place. It's exciting to see the space and the community evolve.
The Junction is an open house for entrepreneurs, uniquely creating a "pay-it-forward" acceleration model. Active entrepreneur teams (regardless of their idea) who are working full-time and are willing to help out other entrepreneurs are welcome to join, be a part of, and work at The Junction. Each member gets their own desk space, internet & as much coffee as they can drink for a period of 3 months (what we call a wave).
The Junction was founded by Genesis Partners, a leading Israeli early stage venture capital firm, as part of our partnership and collaboration with the entrepreneur community and as part of our efforts to encourage and be a part of Israeli innovation.
About the talk
Single-core performance has hit a ceiling, and building web-scale multi-core applications using imperative programming models is extremely difficult. Parallel programming creates a new set of challenges, best practices and design patterns. Scala is designed to enable building scalable systems, elegantly blending functional and object oriented paradigms into an expressive and concise language, while retaining interoperability with Java. Scala is the fastest growing JVM programming language, being rapidly adopted by leading companies such as Twitter, LinkedIn and Foursquare.
This presentation provides a comprehensive overview of the language, which manages to increase type safety while feeling more dynamic, growing more concise and improving readability at the same time. We will see how Scala simplifies real-life problems by empowering the developer with powerful functional programming primitives, without giving up on the object oriented paradigm. The overview includes tools for multi-core programming in Scala, the type system, the collection framework and domain-specific languages. We'll explore the power of compile-time meta-programming, made possible by the newly released Scala 2.10, and get a glimpse into what to expect from 2.11 in 2014.
We will also see how Scala helps overcome the inherent limitations of Java, such as type erasure, array covariance and boxing overhead.
Multiple examples emphasize how Scala pushes the JVM harder than any other mainstream language, through its countless boilerplate busters, increased type safety and productivity boosters, seen from a Java developer's perspective.
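To make the "boilerplate buster" point concrete, here is a small illustrative sketch of my own (not taken from the talk's slides): a single case class replaces the constructor, getters, `equals`, `hashCode` and `toString` that the equivalent Java class would need, and the collections library turns a filter-sort-project task into one expression.

```scala
// A case class: constructor, accessors, equals, hashCode,
// toString and copy all come for free.
case class Restaurant(name: String, rating: Double, distanceKm: Double)

val nearby = List(
  Restaurant("Sakura", 4.5, 0.8),
  Restaurant("Trattoria", 4.1, 1.6),
  Restaurant("Falafel Bar", 4.8, 0.3)
)

// Collections + first-class functions: filter, sort and project
// in a single readable expression.
val bestFirst = nearby
  .filter(_.rating >= 4.2)
  .sortBy(_.distanceKm)
  .map(_.name)
// bestFirst == List("Falafel Bar", "Sakura")
```

The same pipeline in pre-8 Java would take an explicit loop, a comparator class and a hand-written data class of several dozen lines.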
The Junction at night
Many notable inventions have been inspired by nature's ingenuity; numerous engineering problems were solved by mimicking the results of its hidden intelligence, the products of hundreds of millions of years of trial and error. Nature is a testament to the amazing ability of ordered chaos to lead to unbelievably innovative solutions, often for nearly unsolvable problems. In this post I'm going to explore the enabling principles of natural innovation.
The traditional problem solving methodology encourages top-down thinking and step-wise design, where analysis and decomposition steps are used to break down a problem or a system into a set of smaller, simpler sub-problems, which are, in turn, refined and further broken down, until the sub-problems are reduced to basic, atomic and often previously solved units. Over millions of years, the human brain has evolved to become optimized for problem solving, decomposition, analysis and goal setting. This leads to an unavoidable friction whenever we try to think against our own (top-down, mental) stream.
The phenomenon of chaotic mutations of living organisms is nature's bottom-up design pillar. Counter-intuitively, solutions are waiting for problems: if a mutated specimen happens to be at the right place and time to meet the right problem, it may have the critical slight advantage needed to become the next step in the species' evolution.
The bottom-up nature of chaotic mutations alone is not sufficient to produce this plethora of innovation. Survival of the fittest, one of the three pillars of Darwinian evolution, is the balancing top-down guiding force. In an insanely dynamic environment, species must evolve to survive. Food-chain dependencies then create a cascade effect, forcing dependent species to evolve as well, accelerating the diffusion of natural innovation even further. The fittest, the one destined to find the solution to its own existential problem, is more likely to survive, making favorable traits become more common in successive generations.
Metaphorically speaking, a whirlpool of innovation forms when cold top-down design winds collide with hot winds of bottom-up opportunities, when diversity is constrained by the cruel reality of survival.
Could humanity mimic nature's hidden intelligence by creating an environment for accelerated Darwinian processes, mixing the top-down and the bottom-up in the same unique way? What would the benefits and the challenges be?
While humanity faces grand environmental, social and economic, often existential challenges, governments' attempts to provide effective solutions fail time after time. Is it worth trying a radically different approach to solving grand challenges? If a time machine existed, and a man from the 70s were teleported to 2012, he would be surprised by the stagnation in the space program, and by the fact that diseases that could once be easily treated with antibiotics have become intractable and are making a comeback. Governments these days just don't have incentives like the Cold War to fuel ambitious projects like the Apollo program, which gradually makes human space travel a lost art.
Non-profit initiatives like the X PRIZE Foundation have emerged over the last decade and started to play an important role in filling the void. These projects are funded by the private sector and foster grand challenges that encourage technological development. These initiatives seek the balance between ambitious and lean, between pragmatic and innovative, with the goal of catalyzing radical development that could disrupt the slow pace of progress towards solving the most important problems facing society.
The Ansari X PRIZE for Suborbital Spaceflight was the first prize from the foundation. It successfully challenged teams to build private spaceships to open the space frontier. The first part of the Ansari X PRIZE requirements was fulfilled by Mike Melvill on September 29, 2004 in the Burt Rutan designed, Microsoft co-founder Paul Allen financed spacecraft SpaceShipOne, when Melvill broke the 100-kilometer (62.5 mi) mark, internationally recognized as the boundary of outer space. Brian Binnie completed the second part of the requirements on October 4, 2004. As a result, US$10 million was awarded to the winner, but more than $100 million was invested in new technologies in pursuit of the prize. Today, Sir Richard Branson, Jeff Bezos and others are actively creating a personal spaceflight industry. [Wikipedia]
Competitions, just like Darwinian evolutionary dynamics, essentially crowd-source innovation. They have the right ingredients for pluralism to prosper. The pseudo-chaos created by a diversity of approaches, backgrounds and techniques leads to incredible solutions when constrained by a clearly defined goal. This symbiotic relationship exists in the vast majority of innovative eco-systems.
Here are several more examples of eco-systems that have demonstrated successful crowd-sourced innovation:
- The free market – A diversity of business models, technology and marketing strategies, constrained by market demand.
- Wikipedia – millions of authors (having billions of opinions), constrained by an efficient moderation eco-system.
- Kaggle – a platform and a marketplace for data crunching challenges that allows organizations to share their datasets and problems and have them scrutinized by the world's best data scientists. In exchange for a prize, winning participants provide the algorithms that beat all other methods of solving a data-crunching problem.
- Linux – Thousands of contributors, distributions and implementations, addressing different problems, constrained by interfaces at integration points. Eric S. Raymond published his observations of the Linux kernel development process in the book "The Cathedral and the Bazaar", where the fine balance of top-down and bottom-up approaches is described in great detail.
The web brings new opportunities to scale, accelerate and manage the infinite complexity of innovation by harnessing the extelligence, the wisdom of the crowds.
In the next post we’ll explore this opportunity in detail.
The web becomes increasingly complex under the hood, to support the increasingly demanding information needs, new information types, formats & capacities. At the same time, on the surface, the web is actually becoming simpler & friendlier.
Information complexity is one of the challenges I’m especially excited about.
As in any other field, the best method to pass the complexity barrier is through introducing new levels of abstraction and modularity.
How far are we from being able to write:
“Use the current user location, display nearby restaurants on a map, add the best real time deals for the restaurants, and list them in a table, sorted by the distance from the user. Display tweets mentioning the restaurants and friends from Facebook nearby at the moment, and perform sentiment analysis on the tweets to display positive and negative feedback on each restaurant”
In the realm of data aware apps, this description is transformed into a data flow graph, which is the DNA of a data aware application.
When you take a multitude of data sources and add a semantic abstraction layer over them, a new paradigm emerges. Data sources suddenly have a common language. Information exchange on a higher level is possible.
Applications can now be described in terms of the information flow graph, starting from user input, or real time feed, going through refinement and manipulation, then fed to an API. The results are filtered, sorted & aggregated, directed to a second API, augmented by data from another data source, and displayed using data aware UI elements.
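As a sketch of this flow-graph idea (all names and data are hypothetical stubs, not a real API), each stage can be modeled as a plain function, and the application is simply their composition:

```scala
// A toy "data aware" record flowing through the graph.
case class Venue(name: String, distanceKm: Double, sentiment: Double)

// Stage 1: a stubbed data source keyed by the user's location.
def nearbyVenues(lat: Double, lon: Double): List[Venue] =
  List(Venue("Cafe A", 1.2, 0.7), Venue("Diner B", 0.4, -0.2))

// Stage 2: refinement - filter out negative sentiment, sort by distance.
def refine(vs: List[Venue]): List[Venue] =
  vs.filter(_.sentiment > 0).sortBy(_.distanceKm)

// Stage 3: presentation - feed a stubbed data-aware UI element.
def render(vs: List[Venue]): String =
  vs.map(v => s"${v.name} (${v.distanceKm} km)").mkString(", ")

// The whole application is one path through the information flow graph.
val page = render(refine(nearbyVenues(32.07, 34.78)))
// page == "Cafe A (1.2 km)"
```

The point is not the stub data but the shape: sources, refinements and sinks compose into a graph that could, in principle, be generated from a declarative description like the one quoted above.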
You're invited to read more in this article on semanticweb.com.
Also, see slides of my talk at the Big Data meetup, on information complexity & data aware apps.
I was born into a world of intertwined, multi-layered, living networks, surrounding me from both within my body & mind as well as from the outside world. I’m now waking up from my sleep, as a vast network of neurons is firing wake-up signals to my body. I’m now pouring a glass of water from a grid transmitting liquid H2O, and boiling an egg using the grid of propane gas. The nutrients from breakfast are now travelling through the blood vascular network to my body cells. I’m turning the TV on and watching news, streamed through a network of cables and satellites. The news has just enriched the graph of synaptic connections that my brain contains. I’ll later share those connections with friends from my social network through the communication networks. I’m heading to work now, through a complex transportation network, driving on a network of roads, concrete bridges and steel rails. Finally, I’m at work. I’m plugging in my computer to the electrical grid, reading some messages from my network of colleagues, about databases for graphs and networks.
All these networks become denser every day, in an accelerating way. New technology enables us to create more pieces of information every second, in a wider variety of forms, and to connect them in new ways, aided by automatic connection mining technology such as Facebook friend suggestions. We have more social connections today than ever before, and new roads are being constructed as population growth, longer lifespans and urbanization drive demand for denser supporting infrastructure grids. Existing grids constantly evolve and new kinds emerge. The Internet is the newest grid out there, but it's definitely not the last one. Have you tried to speculate what the next global grid will be? Perhaps it will be a grid of raw elements supply, for the microwave-like nano-assembly machines that every household will have. These will be used to produce every possible kind of food from molecules. It seems like every newly introduced grid eliminates just another reason for us to leave our homes.
There are also many types of non-physical networks. They are usually related to volatile relationships between objects and events, and have decaying geo-temporal characteristics. A few examples of such networks: The network of diseases that people transmit, the network of people who passed coins and the network of the biochemical interactions within the living cells. By analysing trends and patterns on such networks, one can try to infer and predict future network structure (e.g. progression of the swine flu spread).
All these networks never stop evolving. They expand as new nodes and connections are formed, and contract as they disappear. Node clusters collapse and split, in a seemingly chaotic nature. In fact, studies of natural networks discovered that they have surprisingly predictable growth patterns. Massive clusters of nodes tend to attract more connections to other nodes and form dense cliques; they act as "black holes" influencing nearby parts of the network, making the graph fold into itself as they become super-nodes. Every kind of network has a unique fingerprint of evolution forces.
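This "rich get richer" growth pattern is known as preferential attachment (the Barabási–Albert model). A minimal sketch, under my own simplifying assumption that each new node adds a single edge:

```scala
import scala.util.Random

// When a new node joins, it attaches to an existing node with
// probability proportional to that node's current degree.
def preferentialPick(degrees: Vector[Int], rnd: Random): Int = {
  var r = rnd.nextInt(degrees.sum)
  var i = 0
  while (r >= degrees(i)) { r -= degrees(i); i += 1 }
  i
}

// Grow a network of n nodes from a seed edge (node 0 -- node 1),
// returning the degree of every node.
def grow(n: Int, seed: Long): Vector[Int] = {
  val rnd = new Random(seed)
  var degrees = Vector(1, 1)
  for (_ <- 2 until n) {
    val target = preferentialPick(degrees, rnd)
    degrees = degrees.updated(target, degrees(target) + 1) :+ 1
  }
  degrees
}

val d = grow(1000, seed = 42L)
// The earliest, best-connected nodes end up as the "super-nodes"
// described above: their degree dwarfs the network average of ~2.
```

Running this repeatedly shows the predicted fingerprint: a few hubs with large degree and a long tail of degree-1 nodes, rather than the bell curve a uniformly random network would give.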
It is fascinating to observe our inner and outer worlds, both physical and abstract, and discover new hidden networks, exposing more layers of connections that explain the extremely connected and dynamic nature of our universe.
Here’s a short list of applications that my friends and I would like to develop using the SDK:
Auto hunger detection
You are at work and you start to feel hungry. You're not even consciously aware of the hunger at this stage yet. The hunger is automatically detected, and your lunch is ordered from your restaurant of choice. You know those days when you just want to have some sushi, and the next day you don't want to hear about sushi? Your meal choice is correlated with your neuro-signals, and a program will be able to "feel" and guess what you would like to have for lunch.
Temperature auto tuning
You won’t use the air conditioner remote control when your new air conditioner automatically adjusts the temperature as you start to feel cold or hot.
Your cognitive and mental states will one day become as priceless as the pictures that you took on your vacation. Captured cognitive & emotional signals, combined with location data from GPS, and recorded audio, will allow us to learn important correlations between places, activities, events and mental states. Those important metrics will be the next stage in the revolution that was started by bio-feedback, and will allow us to reach new levels of self-control. Once we learn to recreate emotions, we could augment movies and pictures with the emotional state that you were in when you took the picture, and create more comprehensive capture of reality.
Emotion based interactive movies
Interactive movie experience, in which you implicitly navigate the plot using your emotions. Your emotional state is constantly fed into the playing device, which chooses the plot which will create the most enjoyable experience for you. The movie literally learns your taste and lets your hidden emotions navigate through the plot.
Thought activated brake lights
The brake light turns on automatically as you think of braking, before you even start moving your foot to push the brake pedal. The extra 400ms could prevent the cars behind you from bumping into your car in situations of sudden braking.
Listening to music
You are in the middle of listening to a great song or podcast when you realize you need to go take something from another room. As you move farther from the speakers, the sound system senses that you don't hear it well enough, and adjusts the volume so that you get an uninterrupted listening experience. You can also think "Pause" from the other room, and the song will wait for you to come back.
Emotion driven drawing
Could we reach a new level of expressiveness if we could transform our emotional-cognitive state into colors and shapes? Perhaps common drawing controls such as color, brush style, width and opacity could be operated faster, allowing finer detailed drawings with far less effort.
Emotion augmented chat
The emotions of the chat partner are expressed though background music, sounds, colors and images to add emotional layers to the communication.
Emotion triggering game
The computer draws a random emotion: excitement, fear, happiness, anxiety…
You get a point when you trigger the drawn emotion in the opponent first.
Brain control headset + headphones + smartphone
A fully interactive brain-controlled hands-free experience, which allows you to think of "email" or "calendar" and have your new messages and meetings read aloud. This solution is great for situations when your hands are not free and there's too much noise for speech recognition. You can think of "answer call" or "call mom".
Communication augments the individual, it creates communities, and it is vital for the survival of the majority of species. It shaped and was shaped by the course of evolution. It is impossible to imagine a day in our life without communicating with our family, friends and colleagues.
Is there any pattern in the course of communication paradigms?
What will the next major paradigm shift in communication be like?
Is there any fundamental limit to the rate of information exchange between humans?
The above questions are all intertwined. The evolution of communication was shaped and limited by the inherent physical characteristics of the species. A clear pattern in the evolution of communication would allow us to speculate on the next paradigm shift. The evolution of communication between living organisms is tightly tied to the evolution of sensing, processing and emitting organs. Nature has always found many fascinating ways for animals to communicate.
The history of communication goes back 4 billion years, to the age of bacteria. Single cell organisms exchange information through chemical signals. Bacteria talk to each other, distinguish between species, and have both a common inter-species language and intra-species languages. Some species of bacteria have hundreds of different behaviors, governed by a cell community voting mechanism. Cells within our body communicate through a variety of chemical, thermal and electrical signals. Chemical communication works only over small distances, it is impossible to distinguish the exact source of the information, and the information exchange rate is limited.
Around 600 million years ago, during a burst of rapid evolution dubbed the "Cambrian explosion", primitive nervous systems and vision started to evolve. A new medium emerged, and it was soon utilized for communication. Sea creatures developed complex light emitting capabilities, based on symbiotic relations with luminescent bacteria or on complex bio-luminescence organs, lenses and color filters. This was a significant paradigm shift, which introduced new dimensions into the language: fine granularity of the light spectrum, and the ability to create moving, colorful, three-dimensional light shapes. Interestingly, these luminescence capabilities are dominated by primitive deep-sea creatures. We can only imagine what could be done by covering our skin with LCDs and wiring every pixel to our intellectually and socially evolved brains.
Mammals and birds communicate mostly through sound, due to its ubiquitous nature: no line of sight is required, and as long as you are close enough to the origin, you'll get the message. Due to the sexual nature of mammals and birds, rituals have evolved, whether to impress a female or to beat a rival male. Primates have developed a more complex language structure which involves hoots and gestures.
Humans, during their short period of existence, have reinvented communication more times and in more ways than ever seen in nature before. This time it wasn't due to the introduction of a new sense, but rather of a very complex processing organ. We have learned and evolved to think metaphorically. Combined with our ability to produce consonants in addition to vowels, we found a completely new way to communicate: sequences of vowels and consonants were assigned to abstract concepts, feelings, metaphors and everyday objects. From the dawn of humanity, we have been very social creatures, for a good evolutionary reason. We always had to collaborate in order to survive, and collaboration is simply impossible without an effective way of communicating. Information wasn't really liberated at this point, though.
Then, around 30,000 years ago, there came the first human who painted a picture on a cave wall. He projected a metaphor onto the visual medium, and changed history again. Writing was born. For the first time, one could say something which could be heard over and over again, at different times. Information could finally be saved, perpetuated, persisted. The bandwidth increased dramatically because the producer of the information did not have to repeat the same information over and over again. The wall literally augmented, enhanced and empowered him.
The wall was stationary, and thus required the participants to physically stand before it. This all changed when humans started to write on non-stationary objects, such as clay tablets (3500 BC, Sumer), bones (1400 BC, China), stone tokens, papyrus (2200 BC) and paper. New concepts and metaphors were created all the time, and it just wasn't practical to create corresponding symbols for every new concept and spread their meaning to the people. Pictographs appeared around 3500 BC; the oldest one discovered to date is from Sumer.
Then came the alphabet. It literally liberated language. The alphabet is basically a projection from the auditory to the visual space. Literate people could finally say what they could read, and write what they could say, without the need to constantly learn new symbols.
Once you have an alphabet, it's much easier to automate the process of writing: a prototype is created, and cloned over and over again. Printing was so inevitable that it was independently invented in China around 200 AD (woodblock printing), in the Holy Roman Empire by the German Johannes Gutenberg around 1440, and more times throughout history. Writing got faster. This was a step forward in the distribution of information, rather than a paradigm shift.
The ability to transmit information without physically moving matter is extremely powerful. Hundreds of different telegraphs have appeared over the last few millennia. The Greek telegraph, used around 500 BC, included the use of trumpets, drums, shouting, beacon fires, smoke signals, mirrors and semaphores. When we gained the ability to generate and conduct electric current through wires, it was only a matter of time until ideas started to travel on those wires. Now there were even easier, faster and cheaper ways to spread information.
The next information liberation step was not related to transfer speed, but rather to the associative nature of our minds. Hypertext was born. Nonlinear reading had been used hundreds of years before the invention of hypertext: dictionaries and encyclopedias included references and interlinking, and books had tables of contents. However, for the first time, nonlinear reading became practical, simply because hyperlinks eliminated manual paging and lookup.
We’re in the dawn of the next revolution. The Semantic Web revolution, which takes nonlinear reading even further. It makes the interlinking dynamic and automatic. Personalized nonlinear reading flows can be created for the reader, based on his intent, implicit goals, reading history, past knowledge, reading style, time constraints and interests.
Roughly at the same time, another revolution is taking place: computers are learning to understand voice. We talk much more than we type. We can talk while we drive, walk or eat. However, most verbal information doesn't leverage the technological advances described above, simply because it is not digitized. There are two major limiting factors: computers are not yet ubiquitous, and speech recognition quality is still relatively low. When those limiting factors disappear, exabytes of verbal information will be processed, stored, transmitted, shared, indexed, searched, interlinked and consumed.
It’s 2010. We’ve come a long way in developing our communication capabilities. Information is being transmitted at the speed of light, broadcast to billions of people everywhere in the globe, as fast as it is created. It can be instantly found, edited and shared.
Are we done here?
The speculation part starts here. What is the next paradigm shift? No one knows, but we have billions of years of history to try to spot the trends. We've created a ridiculous situation where the rate of information transmission is limited by how fast we can move our tongues, and by the rate at which we can parse and understand human speech. It has been found that the average reading, listening, typing and braille reading rate rarely exceeds 400 words per minute. There might be simple reasons for those limitations: we simply never needed to pass information at a rate faster than our lips move, and our hearing and brain never had the need to understand speech at faster rates. These days we're literally flooded by information; the news industry is exploding as users start to share and contribute events, and the increasingly connected nature of information creates an infinite stream of juicy information to consume.
EEG based mind-reading headsets are becoming affordable these days. A decent Emotiv EPOC headset can be purchased for $300. That’s just the first generation of the technology, but it gives a clue on the exciting possibilities that are going to be created, once it gets more mature.
What if we could leverage the high-bandwidth optic nerve to create a direct broadband connection to the brain, bypassing the 400 words/minute language bottleneck? We would need to create an engineered language which could be naturally transmitted through the optic nerve and processed by the visual cortex. We would have to train our brains to understand that language. Probably, some sorts of ideas will be more naturally transmitted this way than others. Some of them would have to be transformed on many levels (e.g. semantic abstractions/metaphors, visual transformations, aggregations/summarizations). An efficient communication framework should have a common vocabulary and ontology, an extensible concepts framework, be multi-layered, and impose low information redundancy.
The optic nerve is responsible for transmitting visual information from the eye to the brain. It contains around 1.2 million nerve fibers (compared to 30k in the auditory nerve), capable of transmitting 8.7 megabits per second. This ultra-high bandwidth channel might hold the key to the next paradigm shift in communication. During a typical one hour professional meeting, about 2 thousand words are exchanged between the participants. This is less than 10 kilobytes of information. Theoretically, direct mental communication via the optic nerves could enable us to complete such a meeting faster than we pronounce a single word verbally.
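A quick back-of-envelope check of these figures (the words-per-meeting and bytes-per-word numbers are rough assumptions, not measurements):

```scala
// Figures from the text, plus an assumed ~5 bytes per English word.
val opticNerveBitsPerSec = 8.7e6   // ~8.7 megabits per second
val meetingWords         = 2000
val bytesPerWord         = 5
val meetingBits          = meetingWords * bytesPerWord * 8.0  // 80,000 bits

val secondsToTransfer = meetingBits / opticNerveBitsPerSec
// ~0.009 s: the whole meeting fits in under 10 milliseconds,
// well below the time it takes to pronounce a single word.
```

So under these assumptions the claim holds with a wide margin; even a meeting with ten times the word count would transfer in under a tenth of a second.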
The first adopters are likely to be the military, police, hospitals and other emergency services, where the ability to shorten discussions to fractions of a second could be the difference between life and death. Businesses will benefit greatly from the productivity boost. Academics and students, who deal with information and knowledge transfer most of the time, will adopt the new medium to speed up their learning and research processes. It is interesting to ask whether and when we will adopt telepathy for personal talks among friends and family. This paradigm shift will drive further profound changes, which will make us question the purpose of everyday concepts such as business trips, meeting rooms, podcasts, lectures, keyboards and text.
There are still many unanswered questions, regarding the feasibility and implications of such technology.
Will an adoption of telepathy boost the human evolution, and the technological development?
What are the ethical implications?
Are our brains capable of processing such quantities of information? How will it affect our psychology?
Will it require us to rebuild the existing Internet infrastructure?
Will a new language be required for telepathy?
The Mysterious Brain
It is capable of having more ideas than the number of atoms in the known universe, by constantly firing chemical and electrical signals over 500 trillion synaptic connections between 100 billion neurons.
All this magic fits in a 1,400 cm³ box, weighs only 1.4 kg and consumes just 10-20 watts of energy.
While the mystery is being uncovered little by little by neuroscientists around the world, we still have very little idea of how physical brain activity translates to the intellectual and emotional planes, as illustrated by the top 10 unsolved brain mysteries from Discover Magazine:
- How is information coded in neural activity?
- How are memories stored and retrieved?
- What does the baseline activity in the brain represent?
- How do brains simulate the future?
- What are emotions?
- What is intelligence?
- How is time represented in the brain?
- Why do brains sleep and dream?
- How do the specialized systems of the brain integrate with one another?
- What is consciousness?
An engineering perspective
A computer architect will easily notice that the structure of the brain exhibits many desired characteristics of a distributed system: cheap, highly redundant, simple, weakly specialized units, each of them responsible for just a few basic functions. Interconnections autonomously evolve towards the organic flow of information. Certain parts of the system are specialized and optimized for specific functionality, whether it's a rapid yet hard-wired response, or a neuro-bureaucratic, logic and inference driven rational decision.
An Artificial Intelligence perspective
Humankind has a long history of attempts to understand, formalize, model and recreate consciousness, reasoning and intelligence. Some of those attempts date back to the time of Aristotle. The history of AI is strongly intertwined with the history of scientific and technological development: paradigm shifts from mechanical "thinking" machines, through electric circuitry, analog electronics and digital electronics, to the age of software AI, spanning from rule based symbolic reasoning and logical inference to uncertainty-embracing probabilistic machine learning approaches, such as artificial neural networks. In February 2010, MIT research scientist Noah Goodman introduced a grand unified theory of AI, combining the rule based and the probabilistic approaches. The theory hasn't yet been proven in industrial applications, but is believed to hold the potential of becoming the holy grail of AI.
Brain modeling and simulation approach
The Blue Brain project takes the opposite approach: reverse-engineering the brain, modeling the neural topology on the macro scale as well as building empirical models of individual neurons on the micro scale. Those models are already the basis of a brain activity simulation. 8,192 processors power the simulations in a distributed manner, totaling 28 teraflops of computing power. Currently, in 2010, a rack of servers is required to simulate small subsets of the brain's functionality. The Blue Brain project leads me to the main question I would like to raise in this post.
What does it take to create a complete simulation of the human brain?
I'd like to quote 4 questions from the FAQ section of the Blue Brain website:
Q: What computer power would you need to simulate the whole brain?
A: The human neocortex has many millions of NCCs. For this reason we would need first an accurate replica of the NCC, and then we will simplify the NCC before we begin duplications. The other approach is to convert the software NCC into a hardware version – a chip, a Blue Gene on a chip – and then make as many copies as one wants.
The number of neurons varies markedly in the neocortex, with values ranging from 10-100 billion in the human brain to millions in small animals. At this stage the important issue is how to build one column. This column has 10-100'000 neurons depending on the species and particular neocortical region, and there are millions of columns.
We have estimated that we may approach real-time simulations of an NCC with 10'000 morphologically complex neurons interconnected with 10^8 synapses on an 8-12'000 processor Blue Gene/L machine. Simulating a human brain with around millions of NCCs will probably require more than proportionately more processing power. That should give an idea of how much computing power will need to increase before we can simulate the human brain at the cellular level in real time. Simulating the human brain at the molecular level is unlikely with current computing systems.
Q: Do you believe a computer can ever be an exact simulation of the human brain?
A: This is neither likely nor necessary. It will be very difficult because, in the brain, every molecule is a powerful computer and we would need to simulate the structure and function of trillions upon trillions of molecules as well as all the rules that govern how they interact. You would literally need computers that are trillions of times bigger and faster than anything existing today. Mammals can make very good copies of each other; we do not need to make computer copies of mammals. That is not our goal. We want to try to understand how the biological system functions and malfunctions so that this knowledge can benefit mankind.
Q: The Blue Gene is one of the fastest supercomputers around, but is it enough?
A: Our Blue Gene is only just enough to launch this project. It is enough to simulate about 50’000 fully complex neurons close to real-time. Much more power will be needed to go beyond this. We can also simulate about 100 million simple neurons with the current power. In short, the computing power and not the neurophysiological data is the limiting factor.
Q: You are using 8’000 processors to simulate 10’000 neurons — is this a 1 neuron/processor model?
A: There is no software in the world currently that can run such simulations properly. The first version will place about one neuron per processor – some will have more because the neurons are less demanding. We can in principle simulate about 50’000 neurons, placing many neurons on a processor. The first version of Blue Gene cannot hold more than a few neurons on each processor. Later versions will probably be able to hold hundreds.
Today’s CPUs are capable of performing several billion operations per second.
The famous diagram by Ray Kurzweil reflects the long-term trend of exponential increase in calculations/second/$1k over the years.
Moore's law predicts a doubling of the number of transistors on an integrated circuit roughly every two years.
Hans Moravec, principal research scientist at the Robotics Institute of Carnegie Mellon University, estimates the human brain's processing power to be around 100 teraflops, with a memory capacity of 100 terabytes.
According to Moore’s law, we’re around 25 years away from the point of a brain in a box for $1000.
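A rough sanity check of that estimate, under assumed starting figures (that a $1000 machine in 2010 delivers on the order of 100 gigaflops, and that performance per dollar doubles every two years; both are my assumptions, not figures from the sources above):

```scala
import scala.math.{log, ceil}

val brainFlops    = 100e12   // Moravec's ~100 teraflops estimate
val pcFlopsPer1k  = 100e9    // assumed 2010 figure for a $1000 machine
val doublingYears = 2.0      // assumed Moore's-law-style doubling period

// How many doublings until $1000 buys brain-level compute?
val doublings = log(brainFlops / pcFlopsPer1k) / log(2.0)  // ~10 doublings
val years     = ceil(doublings * doublingYears).toInt      // ~20 years
// In the same ballpark as the ~25 years quoted above; the exact figure
// depends heavily on the assumed doubling period and starting point.
```

With a yearly doubling the answer drops to ~10 years, and with a 2.5-year period it rises to ~25, which shows how sensitive such predictions are to the assumed exponent.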
This short research has led me to many more questions.
- Is there any way of getting to that point in a shorter period?
- Is brain simulation indeed a good approach for achieving general AI?
- Will Moore’s law last for 25 more years?
- Are current CPU models sufficient for these kinds of tasks?