new AI "brains": goodbye linear number-crunching, hello self-improving pattern-p

Started by Darren Dirt, January 02, 2014, 11:19:09 AM


Darren Dirt

I guess the next design of computer hardware is going to be a "reboot" based on a whole different approach, rather than just a small evolutionary improvement over the design that was essentially laid down in (and for the tasks of) the 1960s...

Quote from: http://www.dailymail.co.uk/sciencetech/article-2532550/Computer-chip-carries-tasks-without-asked-learns-mistakes.html
The current project is part of the same research that led to IBM's announcement in 2009 that it had simulated a cat's cerebral cortex, the thinking part of the brain, using a massive supercomputer.

Using progressively bigger supercomputers, IBM had previously simulated 40 per cent of a mouse's brain in 2006, a rat's full brain in 2007, and one per cent of a human's cerebral cortex in 2009.

Eventually, computer scientists want to use the chip to build a system that can mimic the entire brain, using ten billion 'neurons' and a hundred trillion 'synapses'.


Hmmm... this is Good News, right? ...

I guess you could say that IBM researchers are putting themselves to the fullest possible use, which is all that any conscious entity can ever hope to do...
_____________________

Strive for progress. Not perfection.
_____________________

Mr. Analog

I'm just saying, 10 years from now: IBM robo-consultants

(but don't worry, they're just as lazy and/or inept)
By Grabthar's Hammer

Darren Dirt

Quote from: Mr. Analog on January 02, 2014, 11:21:48 AM
I'm just saying, 10 years from now: IBM robo-consultants

(but don't worry, they're just as lazy and/or inept)


The actual NY Times article:
Quote from: http://www.nytimes.com/2013/12/29/science/brainlike-computers-learning-from-experience.html
The new computing approach, already in use by some large technology companies, is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.

...last year, Google researchers were able to get a machine-learning algorithm, known as a neural network, to perform an identification task without supervision. The network scanned a database of 10 million images, and in doing so trained itself to recognize cats.

The new processors consist of electronic components that can be connected by wires that mimic biological synapses. ... They are not 'programmed.' Rather the connections between the circuits are 'weighted' according to correlations in data that the processor has already 'learned.' Those weights are then altered as data flows in to the chip, causing them to change their values and to 'spike.' That generates a signal that travels to other components and, in reaction, changes the neural network, in essence programming the next actions much the same way that information alters human thoughts and actions.
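
If I'm following that description, the basic loop is "weighted connections + spikes + nudging the weights". Here's a little Python toy I threw together to make that concrete -- a gross simplification, with made-up numbers and names of my own, nothing to do with IBM's actual chip internals:

import random

# Toy "neuromorphic" sketch: one spiking neuron whose connection weights
# drift toward whatever input pattern keeps showing up.
# (My own simplification -- not IBM's design.)

NUM_INPUTS = 4
THRESHOLD = 1.0      # membrane potential needed to fire a "spike"
LEARN_RATE = 0.05    # how hard each spike nudges the weights
LEAK = 0.9           # potential leaks away a bit every step

class ToyNeuron:
    def __init__(self):
        self.weights = [random.uniform(0.2, 0.5) for _ in range(NUM_INPUTS)]
        self.potential = 0.0

    def step(self, inputs):
        # accumulate weighted input, with a bit of leak
        self.potential = self.potential * LEAK + sum(
            w * x for w, x in zip(self.weights, inputs))
        if self.potential < THRESHOLD:
            return False  # no spike this step
        # spike: reset, then strengthen the weights of the active inputs
        # (a crude Hebbian-style "fire together, wire together" update)
        self.potential = 0.0
        for i, x in enumerate(inputs):
            if x > 0:
                self.weights[i] += LEARN_RATE * (1.0 - self.weights[i])
        return True

if __name__ == "__main__":
    neuron = ToyNeuron()
    pattern = [1, 1, 0, 0]   # the "cat picture" it keeps seeing
    for _ in range(200):
        neuron.step(pattern)
    print("learned weights:", [round(w, 2) for w in neuron.weights])

Run it and the weights on the two active inputs creep up toward 1.0 while the idle ones stay where they started -- i.e. nobody "programmed" it, the connections just ended up reflecting whatever pattern the data kept feeding in. That's the part the article is describing, minus about a million transistors' worth of cleverness.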

And just how far do these designers imagine this "adjusting" will take us?
http://www.youtube.com/watch?v=HwBmPiOmEGQ
_____________________

Strive for progress. Not perfection.
_____________________

Mr. Analog

Given the lack of details in the article, I'm not worried about the Cybermen attacking just yet

(however that may explain all those 'give us your gold' commercials...)
By Grabthar's Hammer