You say you want a convolution


David Byron

David Byron is a software developer working for the military-industrial complex. At Popehat, he writes about art, language, theater (mostly magic), technology, lyrics, and aleatory ephemera. Serious or satirical poetry spontaneously overflows from him while he's recollecting in tranquility. @dcbyron


18 Responses

  1. James Pollock says:

    Of course, artificial intelligence would be a big step towards understanding the natural kind. The easiest way to study something is to note the differences and similarities between it and something that is already understood. So… build an artificial intelligence, and we understand how it works. Then compare the known to the unknown, and gain insights into how THAT works.

  2. Grifter says:

    Well, it's hard to quantify "artificial intelligence" without appealing to some nebulous concept that makes it unintelligible. At what point does it become consciousness?

  3. Ae Viescas says:

    So what would a computer program based on the way our brains actually work be like?

    Something like an ANN (artificial neural network), a type of AI that's been steadily developing for decades and has already solved problems traditional AI hasn't been able to touch?

    And while our understanding of our own brains is still rather error-prone, we've made remarkable progress in developing robotic interfaces with simpler brains.

    I wouldn't call our state of AI development "incompetent" given the problems it's been able to solve already.

    Of course, we aren't anywhere close to Skynet, but expectation matching is a bitch. =P
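To make the ANN idea above concrete, here is a minimal sketch of the simplest possible artificial neural network: a single perceptron learning the OR function. This toy example is my own illustration (none of the names or numbers come from the thread), not any particular library's API.

```python
# A single artificial neuron (perceptron) trained on OR.
# Weights are nudged toward each target by a small learning rate.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The OR truth table as training data.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
# predict(w, b, 1, 1) -> 1; predict(w, b, 0, 0) -> 0
```

A perceptron only handles linearly separable problems (OR, not XOR), which is roughly why multi-layer networks, and eventually the architectures discussed in the post, were developed.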

  4. Ae Viescas says:

    Also, quantifying artificial intelligence is easy: use benchmarks like the Turing test. The difficulty lies in picking a useful benchmark, but you sure as heck can put a number to a capability of something (that's the whole idea behind the Turing test).

  5. CptR says:

    A System Shock reference, ponies, and frequent e-thug beatdowns? I can rest easy knowing someone has all the important bases covered.

  6. Ae Viescas says:

    Oh, and I should have mentioned: the Turing test itself is not a great quantifier (it's too complex); it's just the general idea that comparing artificial forms of intelligence to existing forms of intelligence gives you a lot of stuff to test.

  7. David Josselyn says:

    The Turing test is not at all quantitative. It is qualitative. You only get a score out of it by presenting your program to multiple people, asking them to assign a quantity to their qualitative experience, and then averaging that result with the responses given by other testers.
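The averaging scheme described above is simple enough to sketch. The 0-10 rating scale and the judge values here are hypothetical, purely to show how a qualitative judgment gets turned into a number.

```python
# Hypothetical scoring: each judge rates a chat session 0-10 for
# "how human did this feel?"; the quantitative score is the mean.
def turing_score(ratings):
    if not ratings:
        raise ValueError("need at least one judge")
    return sum(ratings) / len(ratings)

judges = [7, 4, 8, 6, 5]      # made-up ratings from five judges
score = turing_score(judges)  # mean of the qualitative judgments: 6.0
```

Note that the number produced this way measures the judges as much as the program, which is exactly the objection being raised.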

  8. David says:

    @Ae Viescas, This ain't your grandpappy's neural network, no. And as a software guy, I can reassure you that ANN ("that's been steadily developing for decades and has already solved problems traditional AI hasn't been able to touch") is in its infancy, much like software engineering itself.

    The particular neural network modeling technique Hawkins is discussing here is sparse distributed representation, which is particularly suited to predictive projection based on patterned temporal sequences. I mention this for those who would prefer to google rather than watch.
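For googlers who want the flavor of sparse distributed representation before watching: the core idea is a very wide binary vector with only a few active bits, where similarity is measured by overlap. The sizes and names below are illustrative (loosely in the spirit of HTM-style SDRs), not Hawkins's actual implementation.

```python
# Toy sparse distributed representation (SDR): a concept is a small set
# of active bit indices out of a large bit space; similarity between two
# concepts is simply the count of shared active bits.
import random

N_BITS = 2048   # total bits in the representation (illustrative)
N_ACTIVE = 40   # ~2% sparsity, typical of SDR examples

def random_sdr(rng):
    return frozenset(rng.sample(range(N_BITS), N_ACTIVE))

def overlap(a, b):
    return len(a & b)

rng = random.Random(42)
cat = random_sdr(rng)
dog = random_sdr(rng)

# Two independent random SDRs at 2% sparsity share almost no bits,
# so even a small overlap threshold makes accidental matches unlikely.
same = overlap(cat, cat)   # full overlap with itself: 40
cross = overlap(cat, dog)  # near zero for unrelated codes
```

That robustness to noise and the huge capacity of the code space are a big part of why SDRs suit the temporal-prediction work the talk describes.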

  9. Josh C says:

    You are vastly overestimating our understanding of how the nervous system works. Take the peripheral nervous system for example, which is much simpler than the CNS:

    Nerves are long stringy white things, and their anatomy is fairly well understood. Inside each are something like a dozen fascicles (though the number changes as you move along the nerve), which are bundles of axons. The axons are tiny, roughly 1 micron in diameter, and are tightly packed. Fascicles are roughly 5-20 microns in diameter, so the number of axons per fascicle varies pretty widely.

    Fascicles join and diverge, so mapping axons will not be trivial. To my knowledge, no one has mapped fascicles yet in any mammal, let alone humans. There was one lab which successfully stained an individual fascicle with silver, but that used rabies, took six months (i.e., started six months before the animal was sacrificed), and has not been reproducible (though my knowledge there is a couple of years out of date). I don't know of anyone who has even tried to map axons, which are what actually carry the signal. Oh, and for bonus fun, axons combine as you get older, so you lose selectivity. I was taught that it's a physical change; I don't know how that was established.
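Taking the figures above at face value (1-micron axons packed into 5-20-micron fascicles), a quick area-ratio calculation bounds how many axons a fascicle could hold. This is a back-of-envelope sketch of why the count "varies widely", not a measurement.

```python
# Crude upper bound on axons per fascicle: ratio of cross-sectional
# areas. Real packing is looser, so actual counts are lower.
import math

def max_axons(fascicle_um, axon_um=1.0):
    fascicle_area = math.pi * (fascicle_um / 2) ** 2
    axon_area = math.pi * (axon_um / 2) ** 2
    # The pi and /2 factors cancel: this is just (fascicle_um/axon_um)**2.
    return round(fascicle_area / axon_area)

low = max_axons(5.0)    # 5-micron fascicle: at most ~25 axons
high = max_axons(20.0)  # 20-micron fascicle: at most ~400 axons
```

A 16x spread in capacity from diameter alone, before counting merges and splits, is consistent with the "widely variable" claim.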

    I have some pictures, if anyone wants. You can see the fascicles quite well; the axons were visible in the microscope, but the camera resolution was too low to pick them up.

    When you look at nerve-interfacing neuroprostheses, the state of the art is stimulating small groups of fascicles, because the assumption is that axons bundled into fascicles probably go to about the same place (see: FINE and SPINE electrodes, though I doubt they've been approved for studies in humans yet).

    Those are just the cables going into the "black box" brain. They basically act like wires (though weird and robust wires; wikipedia has a good summary). Once you get into the brain proper, the geometries get an order of magnitude more complex; the logic goes from "high/low" to Rube Goldberg-style accumulators of several dozen different kinds of neurotransmitters at each synapse (and each synapse may have different rules); and you go from end-to-end transmission through roughly three cells to "neural net" geometry through dozens or hundreds of cells (for simple things).

    And just for funsies, not all your processing is actually done in the brain. I'm just going to say the word "hormones," and then back quickly away from that.

    Now, none of that's to say that the stuff we (as a species) are doing isn't really, really cool. It's like we've got a radio transmitter, and by very, very precisely measuring the heat coming off the processor, we're able to pick out a few words. That is both amazing and useful, but circuit design it ain't.

  10. David says:

    You are vastly overestimating our understanding of how the nervous system works.

    @Josh C, Whom are you addressing here?

    Remember– I'm the guy who said ANN (Artificial Neural Networks) is a discipline in its infancy, and that we merely "stand at the threshold of understanding the brain's architecture and adaptability", and that we're "still pretty incompetent" in this domain, and that frankly, "we're baffled".

    Thanks for adding your neurological insights, though. It's fascinating stuff.

  11. Kelly says:

    The scientist part of me is excited to see progress in AI tech. The pessimist and geek parts are all agog wondering if we are opening ourselves to Skynet, Cylons, or Cybermen.

  12. Josh C says:


    You, actually, though possibly due to my own poor reading skills. "Thanks to functional MRI and kindred advances in technology, such as electron microscopy and laser-scanning light microscopy, we (as a species) now stand at the threshold of understanding the brain's architecture and adaptability" seemed overly optimistic.

    This also intersected with an old hobbyhorse, though, so I might have jumped the gun somewhat.

  13. David says:

    Possibly so. I deal in castles and cathedrals, so standing at the threshold counts as a beginning in my repertory of metaphors.

  14. DMC-12 says:

    "Next stop, SHODAN."

    Absolutely fantastic. That reminds me: I planned to launch a Kickstarter project to build a cortex reaver. For Christmas, of course.

  15. princessartemis says:

    This is fascinating stuff. Also frightening, I think; as you say, we have barely begun understanding our own noggins. If we, as a whole, were better able to understand our own brains, we'd be much less likely to snap off fragile pieces whilst we charge in trying to 'fix' things with a sledgehammer.

    But of course, we must do these things; we are creators, every one of us, and it is in our nature to fumble about creation trying to figure out how it ticks and then to attempt to create on our own.

  16. mojo says:

    I'd be a little leery of anything marked "Strangeloop 2012".

  17. mojo says:

    Or possibly "Leary", IYKWIM
