By JOHN MARKOFF
Published: December 28, 2013
PALO ALTO, Calif. — Computers have entered the age when they are able to learn from their own mistakes, a development that is about to turn the digital world on its head.
The first commercial version of the new kind of computer chip is scheduled to be released in 2014. Not only can it automate tasks that now require painstaking programming — for example, moving a robot’s arm smoothly and efficiently — but it can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.
The new computing approach, already in use by some large technology companies, is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.
In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That could have enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.
Designers say the computing style can clear the way for robots that can safely walk and drive in the physical world, though a thinking or conscious computer, a staple of science fiction, is still far off on the digital horizon.
“We’re moving from engineering computing systems to something that has many of the characteristics of biological computing,” said Larry Smarr, an astrophysicist who directs the California Institute for Telecommunications and Information Technology, one of many research centers devoted to developing these new kinds of computer circuits.
Conventional computers are limited by what they have been programmed to do. Computer vision systems, for example, only “recognize” objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation.
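To make the "recipe" idea concrete, here is a minimal, purely illustrative Python sketch of that fixed, rule-based style of recognition. The function name, the thresholds and the categories are all invented for illustration; they are not drawn from any real vision system.

# A fixed "recipe": step-by-step rules written in advance by a programmer.
# The program can only ever recognize what these hand-coded steps anticipate.
def classify_image(brightness, edge_count):
    """Toy rule-based classifier; all thresholds are invented for illustration."""
    if brightness > 200 and edge_count < 10:
        return "blank wall"
    if edge_count > 500:
        return "cluttered scene"
    return "unknown"   # anything the recipe did not anticipate is simply missed

print(classify_image(brightness=220, edge_count=4))    # -> blank wall
print(classify_image(brightness=90, edge_count=120))   # -> unknown

The point of the sketch is only that every case must be spelled out ahead of time, which is the limitation the learning systems described below are meant to escape.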
But last year, Google researchers were able to get a machine-learning algorithm, known as a neural network, to perform an identification task without supervision. The network scanned a database of 10 million images, and in doing so trained itself to recognize cats.
In June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately.
The new approach, used in both hardware and software, is being driven by the explosion of scientific knowledge about the brain. Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, said that is also its limitation, as scientists are far from fully understanding how brains function.
“We have no clue,” he said. “I’m an engineer, and I build things. There are these highfalutin theories, but give me one that will let me build something.”
Until now, the design of computers was dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of 1s and 0s. They generally store that information separately in what is known, colloquially, as memory, either in the processor itself, in adjacent storage chips or in higher capacity magnetic disk drives.
The data — for instance, temperatures for a climate model or letters for word processing — are shuttled in and out of the processor’s short-term memory while the computer carries out the programmed action. The result is then moved to its main memory.
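As a rough sketch of that shuttling pattern (and only a sketch, since real processors are vastly more complicated), the short Python loop below moves each value from a list standing in for main memory into a single working variable, operates on it, and writes the result back. The values and the adjustment applied to them are arbitrary stand-ins.

# Toy illustration of the von Neumann pattern: fetch a value from "memory",
# compute on it in the "processor", then store the result back to memory.
main_memory = [12.1, 13.4, 11.8, 14.0]    # e.g., temperatures for a climate model

for address in range(len(main_memory)):
    register = main_memory[address]        # fetch into short-term working storage
    register = register + 0.5              # the programmed action (an arbitrary correction)
    main_memory[address] = register        # move the result back to main memory

print(main_memory)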
The new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.
They are not “programmed.” Rather, the connections between the circuits are “weighted” according to correlations in data that the processor has already “learned.” Those weights are then altered as data flows into the chip, causing them to change their values and to “spike.” That generates a signal that travels to other components and, in reaction, changes the neural network, in essence programming the next actions much the same way that information alters human thoughts and actions.
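The behavior described above can be sketched in a few lines of software, with the caveat that this is an illustrative simplification, not the design of any actual neuromorphic chip: a unit accumulates weighted input, "spikes" when a threshold is crossed, and then strengthens the weights of whichever inputs were active (a simplified Hebbian-style update). The threshold and learning rate below are invented.

# Simplified sketch of a weighted, spiking, self-adjusting unit.
class SpikingUnit:
    def __init__(self, n_inputs, threshold=1.0, learning_rate=0.05):
        self.weights = [0.2] * n_inputs    # connection "weights"
        self.threshold = threshold
        self.learning_rate = learning_rate
        self.potential = 0.0

    def step(self, inputs):
        # Accumulate weighted input, like charge building up in a neuron.
        self.potential += sum(w * x for w, x in zip(self.weights, inputs))
        if self.potential < self.threshold:
            return 0                        # no spike this step
        # The unit "spikes": reset, then strengthen the weights of the active
        # inputs, so incoming data reshapes the network rather than a programmer.
        self.potential = 0.0
        self.weights = [w + self.learning_rate * x
                        for w, x in zip(self.weights, inputs)]
        return 1

unit = SpikingUnit(n_inputs=3)
for _ in range(6):
    print(unit.step([1, 0, 1]), [round(w, 2) for w in unit.weights])

Run repeatedly on the same input pattern, the unit spikes more readily over time because the weights on the active inputs keep growing; that self-adjustment is the loose software analogue of the learning the chips are meant to perform in hardware.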
“Instead of bringing data to computation as we do today, we can now bring computation to data,” said Dharmendra Modha, an I.B.M. computer scientist who leads the company’s cognitive computing research effort. “Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.”
The new computers, which are still based on silicon chips, will not replace today’s computers, but will augment them, at least for now. Many computer designers see them as coprocessors, meaning they can work in tandem with other circuits that can be embedded in smartphones and in the giant centralized computers that make up the cloud. Modern computers already consist of a variety of coprocessors that perform specialized tasks, like producing graphics on your cellphone and converting visual, audio and other data for your laptop.
One great advantage of the new approach is its ability to tolerate glitches. Traditional computers are precise, but they cannot work around the failure of even a single transistor. With the biological designs, the algorithms are ever changing, allowing the system to continuously adapt and work around failures to complete tasks.
Traditional computers are also remarkably energy inefficient, especially when compared with actual brains, which the new neurons are built to mimic.
I.B.M. announced last year that it had built a supercomputer simulation of the brain that encompassed roughly 10 billion neurons — more than 10 percent of a human brain. It ran about 1,500 times more slowly than an actual brain. Further, it required several megawatts of power, compared with just 20 watts of power used by the biological brain.
Running the program, known as Compass, which attempts to simulate a brain, at the speed of a human brain would require a flow of electricity in a conventional computer that is equivalent to what is needed to power both San Francisco and New York, Dr. Modha said.
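The scale of that gap follows from simple arithmetic on the figures quoted above. The article says only "several megawatts," so the 3 megawatts used below is an assumed placeholder, and the linear scaling with speed and size is a deliberate oversimplification; the point is the order of magnitude, not the exact number.

# Back-of-envelope arithmetic on the figures quoted above.
sim_power_watts = 3_000_000      # assumed placeholder for "several megawatts"
slowdown = 1500                  # the simulation ran ~1,500x slower than a brain
brain_fraction = 0.10            # ~10 billion neurons, roughly 10% of a brain
brain_power_watts = 20           # a biological brain, per the article

# Naively assume power scales linearly with speed (a large simplification).
at_brain_speed = sim_power_watts * slowdown
print(f"Same simulation at brain speed: ~{at_brain_speed / 1e9:.1f} gigawatts")

# Scaling to a full brain would add roughly another factor of ten.
full_brain = at_brain_speed / brain_fraction
print(f"Full brain at brain speed:      ~{full_brain / 1e9:.0f} gigawatts")
print(f"Efficiency gap vs. a 20 W brain: ~{full_brain / brain_power_watts:.0e}x")

Even with generous assumptions, the estimate lands in the gigawatt range, which is why the comparison to powering entire cities is not hyperbole.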
I.B.M. and Qualcomm, as well as the Stanford research team, have already designed neuromorphic processors, and Qualcomm has said that it is coming out in 2014 with a commercial version, which is expected to be used largely for further development. Moreover, many universities are now focused on this new style of computing. This fall the National Science Foundation financed the Center for Brains, Minds and Machines, a new research center based at the Massachusetts Institute of Technology, with Harvard and Cornell.
The largest class on campus this fall at Stanford was a graduate-level machine-learning course covering both statistical and biological approaches, taught by the computer scientist Andrew Ng. More than 760 students enrolled. “That reflects the zeitgeist,” said Terry Sejnowski, a computational neuroscientist at the Salk Institute, who pioneered early biologically inspired algorithms. “Everyone knows there is something big happening, and they’re trying to find out what it is.”