Hack 52. Robust Processing Using Parallelism

Neural networks process information in parallel rather than serially. This means that, as different aspects of a problem are processed simultaneously, partial results from one aspect can quickly be used to disambiguate the processing of the others.

Neural networks are massively parallel computers. Compare this to your PC, which is a serial computer. Yeah, sure, it can emulate a parallel processor, but only because it is really quick. However quickly it does things, though, it does them only one at a time.

Neural processing is glacial by comparison. A neuron in the visual cortex is unlikely to fire more than once every 5 milliseconds, even at its maximum activation. Auditory cells have higher firing rates, but even they have an absolute minimum gap of 2 ms between sending signals. This means that for actions that take 0.5 to 1 second, such as noticing a ball coming toward you and catching it (and many of the things cognitive psychologists test), the brain can manage a maximum of around 100 consecutive computations. This is the so-called 100-step rule [1].
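To see where that figure comes from, here is a quick back-of-envelope calculation, sketched in Python. The only inputs are the 5 ms gap between spikes and the 0.5 to 1 second task times mentioned above, and the answer is an order-of-magnitude estimate, nothing more.

    # Back-of-envelope arithmetic behind the "100-step rule": if a neuron
    # needs roughly 5 ms between spikes, how many strictly serial steps
    # fit into a task lasting half a second to a second?
    MS_PER_STEP = 5            # rough minimum gap between spikes (ms)
    for task_ms in (500, 1000):
        steps = task_ms // MS_PER_STEP
        print(task_ms, "ms task -> at most", steps, "consecutive steps")
    # 500 ms task -> at most 100 consecutive steps
    # 1000 ms task -> at most 200 consecutive steps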

The reason your brain doesn't run like a PC with a 0.0001 MHz processor is that the average neuron connects to between 1,000 and 10,000 other neurons. Information is routed, and routed back, between multiple interconnected neural modules, all in parallel. This allows the slow speed of each neuron to be overcome, and it also makes it natural, and necessary, for all aspects of a computational job to be processed simultaneously rather than in stages.

Any decision you make or perception you have (because what your brain decides to provide you with as a coherent experience is a kind of decision too) is made up of the contributions of many processing modules, all running simultaneously. There's no time for them to run sequentially, so they all have to be able to run with raw data and whatever else they can get hold of at the time, rather than waiting for the output of other modules.

4.10.1. In Action

A good example of simultaneous processing is in understanding language. As you hear or read, you use the context of what is being said, the possible meanings of the individual words, the syntax of the sentences, and how each word sounds (or how its letters look) to figure out what is being said.

Consider this sentence: "For breakfast I had bacon and ****." You don't need to know the last word to understand the sentence, and you can make a good guess at what it should be.

Can you tell the meaning of "Buy v!agra" if I email it to you? Of course you can; you don't need to have the correct letter in the second word to know what it is (if it doesn't get stopped by your spam filters first, that is).
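To get a feel for why one wrong character barely matters, here is a toy sketch in Python. It treats any non-letter as a wildcard and checks which known words still fit; the mini vocabulary is made up purely to stand in for your mental lexicon.

    import re

    # A toy version of "you don't need the right letter to know the word":
    # treat any non-letter as a wildcard and see which known words still fit.
    VOCABULARY = {"viagra", "visible", "vicar", "viaduct", "against"}

    def candidates(garbled, vocabulary=VOCABULARY):
        # Letters must match; anything else may be any letter.
        pattern = "".join(ch if ch.isalpha() else "[a-z]" for ch in garbled.lower())
        return [w for w in vocabulary if re.fullmatch(pattern, w)]

    print(candidates("v!agra"))   # ['viagra'] -- one character of noise doesn't matter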

4.10.2. How It Works

The different contributions (the different clues you use in reading) inform one another, filling in missing information and correcting mismatched information. This is one of the reasons typos can be hard to spot in text, particularly your own, in which your understanding of the text autocorrects the typos in your mind before you notice them. It's also why you're able to hold conversations in loud bars. The parallel processing of different aspects of the input provides robustness to errors and incompleteness, and allows the different processes to disambiguate one another interactively.
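Here is a minimal sketch of that interactive disambiguation, using the breakfast sentence from earlier. Each candidate word is scored by how well it matches the noisy letters and by how plausible it is in context, and the two cues together settle an ambiguity that neither settles alone. The candidate words and all the numbers are invented purely for illustration.

    # Two clues scored together: how well the letters match what arrived,
    # and how plausible the word is in this context.
    def letter_evidence(candidate, heard):
        # Fraction of letter positions where the noisy input agrees.
        matches = sum(a == b for a, b in zip(candidate, heard))
        return matches / max(len(candidate), len(heard))

    # Context: "For breakfast I had bacon and ____."
    context_prior = {"eggs": 0.6, "ears": 0.05, "eels": 0.05}
    heard = "ea_s"   # a noisy rendering that, letter for letter, fits "ears" best

    scores = {w: letter_evidence(w, heard) * context_prior[w] for w in context_prior}
    print(max(scores, key=scores.get))   # eggs -- context outvotes the noisy letters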

Do you remember the email circular that went around (http://www.mrc-cbu.cam.ac.uk/personal/matt.davis/Cmabrigde) saying that you can write your sentences with the internal letters rearranged and still be understood just as well? Apparently, it deosn't mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer be at the rghit pclae. The rset can be a toatl mses and you can sitll raed it wouthit porbelm.

It's not true, of course; you understand such scrambled sentences nearly, but not quite, as well as unscrambled ones. We can figure out what the sentence is in this context because of the high redundancy of the information we're given. We know the sentence makes sense, so that constrains the range of possible words that can be in it, just as the syntax does: the rules of grammar mean only some words are allowed in some positions. The word-length information is also there, as are the letters in each word. The only thing missing is the position information for the internal letters. Compensating for that is an easy bit of processing for your massively parallel, multiple-constraint-satisfying language faculty.
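Those constraints are enough to do the job mechanically, too. The sketch below keeps only the clues just listed: word length, the first and last letters, and the set of internal letters in any order. The tiny lexicon is a made-up stand-in for the reader's vocabulary.

    # Length, first and last letter, and the internal letters in any order
    # are enough to pin each scrambled word down against a lexicon.
    LEXICON = {"what", "wait", "order", "older", "the", "letters", "litters"}

    def fits(scrambled, candidate):
        return (len(scrambled) == len(candidate)
                and scrambled[0] == candidate[0]
                and scrambled[-1] == candidate[-1]
                and sorted(scrambled[1:-1]) == sorted(candidate[1:-1]))

    def unscramble(word):
        matches = [w for w in LEXICON if fits(word, w)]
        return matches[0] if len(matches) == 1 else word  # leave ambiguous words alone

    print(" ".join(unscramble(w) for w in "waht oredr the ltteers".split()))
    # what order the letters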

Perhaps the reason it seems surprising that we can read scrambled sentences is that a computer faced with the same problem would be utterly confused. Computers have to have each word fit exactly to their template for that word. No exact match, no understanding. OK, so Google can suggest correct spellings for you, but type in "i am cufosned" and it's stuck, whereas a human could take a guess (they face off in Figure 4-5).
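The kind of matching a simple spelling suggester relies on is something like edit distance: how many single-letter changes separate what you typed from a dictionary word. (This is a generic sketch of that idea, not Google's actual algorithm.) A one-letter typo is an easy fix; a thoroughly scrambled word is not.

    # Classic dynamic-programming Levenshtein distance between two strings.
    def edit_distance(a, b):
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                 # delete from a
                                curr[j - 1] + 1,             # insert into a
                                prev[j - 1] + (ca != cb)))   # substitute
            prev = curr
        return prev[-1]

    print(edit_distance("confusd", "confused"))   # 1 -- an easy fix for a spell-checker
    print(edit_distance("cufosned", "confused"))  # 4 -- too far for a simple suggester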

Figure 4-5. Google and my friend William go head to head


This same kind of process works in vision. You have areas of visual cortex responsible for processing different elements. Some provide color information, some information on motion or depth or orientation. The interconnections between them mean that when you look at a scene they all start working and cooperatively figure out what the best fit to the incoming data is. When a fit is found, your perception snaps to it and you realize what you're looking at. This massive parallelism and interactivity mean that it can be misleading to label individual regions as "the bit that does X"; truly, no bit of the brain ever operates without every other bit of the brain operating simultaneously, and outside of that environment single brain regions wouldn't work at all.
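As a cartoon of that cooperative best-fit process, imagine just two modules, each with weak evidence for two interpretations, plus constraints on which combinations go together. The sketch below (all names and numbers invented for illustration) simply picks the joint interpretation that evidence and constraints agree on; the brain settles on its answer through ongoing interaction between many regions rather than a single calculation like this, but the flavor is the same.

    import itertools

    # Two toy "modules" (say, shape and motion), each with weak evidence
    # for two candidate interpretations.
    evidence_a = {"ball": 0.55, "frisbee": 0.45}
    evidence_b = {"arcing": 0.5, "gliding": 0.5}

    # Mutual constraints: how well each pair of interpretations fits together.
    compatible = {("ball", "arcing"): 1.0, ("ball", "gliding"): 0.2,
                  ("frisbee", "arcing"): 0.3, ("frisbee", "gliding"): 0.9}

    # Score every joint interpretation by evidence plus mutual support,
    # then "snap" to the best-fitting combination.
    best = max(itertools.product(evidence_a, evidence_b),
               key=lambda pair: evidence_a[pair[0]] * evidence_b[pair[1]] * compatible[pair])
    print(best)   # ('ball', 'arcing') -- the combination the constraints agree on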

4.10.3. End Note

  1. Feldman, J. A., & Ballard, D. H. (1982). Connectionist models and their properties. Cognitive Science, 6, 205-254 (http://cognitrn.psych.indiana.edu/rgoldsto/cogsci/Feldman.pdf).


