You say you want a convolution

Why bother with artificial intelligence when we're still pretty incompetent with natural intelligence? And yet the fact that a venture is ill advised has never stopped us before.

We aspire to control others without being able to control ourselves.

We judge others more harshly than we judge ourselves.

We take more readily than we give.

Let's talk for a moment about our brain. No, not "our brain" as in us, the crosier of Popehat. (Some blogs have a staff; we have a crosier.) I mean "our brain" as in us, the species Homo sapiens somewhat laughably sapiens.

What I want to say is this: we're certainly not going to let the fact that we're baffled by our real brains impede us from trying to build fake ones, right? Perhaps aiming for artifice in matters brainial will help us grasp things actually intracranial.

Of course, if we really knew how to exercise the natural contents of our collective brainboxen, then faced with the prospect of artificial intelligence, we'd all be running around screaming, "No! Stop! Skynet! Nexus!" (Of course, some of us would be doing it with the intonations of Gene Wilder's Willy Wonka, but hey.) We'd all recognize that if we can so easily rationalize our own hypocrisy, then even if we had an anthrobotic system that was tweaked to honor the n laws of robotics, someone somewhere would hack hypocrisy and rationalization right into it. Next stop, SHODAN.

Anyhow, we are blissfully oblivious to risks. And thanks to functional MRI and kindred advances in technology, such as electron microscopy and laser-scanning light microscopy, we (as a species) now stand at the threshold of understanding the brain's architecture and adaptability. We have begun to recognize that "neural circuits tell activity how to propagate, and neural activity tells circuits how to change". It's a great time to be alive, if only for the advent of much better sci-fi.

So what would a computer program based on the way our brains actually work be like? Not one inspired by cheesy 1980s intuitions about fuzzy logic, but a rigorous adaptation of principles actually embedded in our wetware?

Happily, thanks to Jeff Hawkins (the dude who founded Palm and Handspring) we can now begin to understand the answer to that question.

Last 5 posts by David Byron


  1. James Pollock says

    Of course, artificial intelligence would be a big step towards understanding the natural kind. The easiest way to study something is to note the differences and similarities between it and something that is already understood. So… build an artificial intelligence, and we understand how it works. Then compare the known to the unknown, and gain insights into how THAT works.

  2. Grifter says

    Well, it's hard to quantify "artificial intelligence" without appealing to some nebulous concept that makes it unintelligible. At what point does consciousness come in?

  3. Ae Viescas says

    So what would a computer program based on the way our brains actually work be like?

    Something like an ANN (artificial neural network), a type of AI that's been steadily developing for decades and has already solved problems traditional AI hasn't been able to touch?

    And while we're still rather prone to error on matters of our own brains, we've made remarkable progress in developing robotic interfaces with simpler brains.

    I wouldn't call our state of AI development "incompetent" given the problems it's been able to solve already.

    Of course, we aren't anywhere close to Skynet, but expectation matching is a bitch. =P

  4. Ae Viescas says

    Also, quantifying artificial intelligence is easy: use benchmarks like the Turing test. The difficulty lies in picking a useful benchmark, but you sure as heck can put a number to a capability of something (that's the whole idea behind the Turing test).

  5. CptR says

    A System Shock reference, ponies, and frequent e-thug beatdowns? I can rest easy knowing someone has all the important bases covered.

  6. Ae Viescas says

    Oh, and I should have mentioned: the Turing test itself is not a great quantifier (it's too complex); it's just the general idea that comparing artificial forms of intelligence to existing forms of intelligence gives you a lot of stuff to test.

  7. David Josselyn says

    The Turing test is not at all quantitative. It is qualitative. You only get a score out of it by presenting your program to multiple people, asking them to assign a quantity to their qualitative experience, and then averaging that result with the responses given by other testers.
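    The procedure described above reduces to simple arithmetic. A minimal sketch, with made-up ratings purely for illustration:

```python
# Hypothetical illustration of the scoring procedure: each judge assigns
# a number to their qualitative impression of the program, and the
# program's "score" is the average across judges. The ratings below are
# invented; nothing here is a real Turing-test protocol.

judge_ratings = [7, 4, 6, 8, 5]  # made-up 0-10 "how human did it seem?" scores

score = sum(judge_ratings) / len(judge_ratings)
print(score)  # 6.0
```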

  8. says

    @Ae Viescas, This ain't your grandpappy's neural network, no. And as a software guy, I can assure you that ANN ("that's been steadily developing for decades and has already solved problems traditional AI hasn't been able to touch") is in its infancy, much like software engineering itself.

    The particular neural network modeling technique Hawkins is discussing here is sparse distributed representation, which is particularly suited to predictive projection based on patterned temporal sequences. I mention this for those who would prefer to google rather than watch.
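    The core ideas behind sparse distributed representations can be caricatured in a few lines: wide binary vectors with only a few active bits, overlap of active bits as a similarity measure, and sequence prediction as learning which SDR tends to follow which. A toy sketch, with all names and bit indices invented; this is not Hawkins's actual implementation:

```python
# Toy illustration of sparse distributed representations (SDRs).
# Each SDR is modeled as a set of active bit indices out of a notionally
# wide (e.g. 2048-bit) vector; overlap of active bits measures similarity.

def overlap(a, b):
    """Similarity = number of shared active bits."""
    return len(a & b)

class ToySequenceMemory:
    """Remembers first-order transitions between SDRs."""
    def __init__(self):
        self.transitions = {}  # frozenset(prev SDR) -> set(next SDR)

    def learn(self, prev, nxt):
        self.transitions[frozenset(prev)] = set(nxt)

    def predict(self, current):
        """Return the stored successor of the best-matching known SDR."""
        best = max(self.transitions, key=lambda k: overlap(k, current), default=None)
        return self.transitions.get(best, set())

# Three SDRs, each shown as its few active bit indices:
A, B, C = {3, 17, 42}, {5, 17, 99}, {8, 42, 77}

mem = ToySequenceMemory()
mem.learn(A, B)  # after A comes B
mem.learn(B, C)  # after B comes C

print(mem.predict({3, 17, 42}))  # exact match for A -> predicts B
print(mem.predict({3, 17, 50}))  # noisy A (2 of 3 bits) still predicts B
```

    The noise tolerance in the last line is the point: because representations are sparse and wide, a partial match still overlaps the right stored pattern far more than any wrong one.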

  9. Josh C says

    You are vastly overestimating our understanding of how the nervous system works. Take the peripheral nervous system for example, which is much simpler than the CNS:

    Nerves are long stringy white things. Their anatomy is fairly well understood. Inside them are something like a dozen fascicles (though the number changes as you move along each nerve), which are bundles of axons. The axons are tiny, roughly 1 micron in diameter, and are tightly packed. Fascicles are roughly 5-20 microns in diameter, so the number of axons per fascicle is pretty widely variable. Fascicles join and diverge, so mapping axons will not be trivial. To my knowledge, no one has mapped fascicles yet in any mammal, let alone humans. There was one lab which successfully stained an individual fascicle with silver, but that used rabies, took six months (i.e. started six months before the animal was sacrificed), and has not been reproducible (though my knowledge there is a couple of years out of date). I don't know of anyone who has even tried to map axons, which are what actually carry the signal. Oh, and for bonus fun, axons combine as you get older, so you lose selectivity. I was taught that it's a physical change; I don't know how that was established.

    I have some pictures, if anyone wants. You can see the fascicles quite well; the axons were visible in the microscope, but the camera resolution was too low to pick them up.

    When you look at nerve-interfacing neuroprostheses, the state of the art is stimulating small groups of fascicles, because the assumption is that axons bundled into fascicles probably go to about the same place (see FINE and SPINE electrodes, though I doubt they've been approved for studies in humans yet).

    Those are just the cables going into the "black box" brain. They basically act like wires (though weird and robust wires; Wikipedia has a good summary). Once you get into the brain proper, the geometries get an order of magnitude more complex; the logic goes from "high/low" to Rube Goldberg-style accumulators of several dozen different kinds of neurotransmitters at each synapse (and each synapse may have different rules); and you go from end-to-end transmission through roughly three cells to "neural net" geometry through dozens or hundreds of cells (for simple things).
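    The "accumulator" contrast can be caricatured in code: an axon carries an all-or-nothing high/low signal, while a synapse integrates several transmitters, each with its own effect and decay. A loose sketch, not a real neuron model; every name, weight, and threshold below is invented:

```python
# Toy caricature of a synapse as an accumulator of multiple
# neurotransmitters. Invented weights: excitatory positive,
# inhibitory negative. Not biologically accurate.

class ToySynapse:
    """Accumulates transmitter effects; fires when over threshold."""
    WEIGHTS = {"glutamate": +1.0, "GABA": -1.5, "dopamine": +0.3}

    def __init__(self, threshold=2.0, decay=0.5):
        self.level = 0.0
        self.threshold = threshold
        self.decay = decay

    def receive(self, transmitter, amount=1.0):
        self.level += self.WEIGHTS.get(transmitter, 0.0) * amount

    def step(self):
        """One time step: report whether it fires, then decay toward zero."""
        fired = self.level >= self.threshold
        self.level *= self.decay
        return fired

syn = ToySynapse()
syn.receive("glutamate")
syn.receive("glutamate")  # accumulated to 2.0, at threshold
print(syn.step())         # True: fires
syn.receive("GABA")       # inhibition pushes the level back down
print(syn.step())         # False: below threshold
```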

    And just for funsies, not all your processing is actually done in the brain. I'm just going to say the word "hormones," and then back quickly away from that.

    Now, none of that's to say that the stuff we (as a species) are doing isn't really, really cool. It's like we've got a radio transmitter, and by very, very precisely measuring the heat coming off the processor, we're able to pick out a few words. That is both amazing and useful, but circuit design it ain't.

  10. says

    You are vastly overestimating our understanding of how the nervous system works.

    @Josh C, Whom are you addressing here?

    Remember: I'm the guy who said ANN (Artificial Neural Networks) is a discipline in its infancy, and that we merely "stand at the threshold of understanding the brain's architecture and adaptability", and that we're "still pretty incompetent" in this domain, and that frankly, "we're baffled".

    Thanks for adding your neurological insights, though. It's fascinating stuff.

  11. Kelly says

    The scientist part of me is excited to see progress in AI tech. The pessimist and geek parts are all agog wondering if we are opening ourselves to Skynet, Cylons, or Cybermen.

  12. Josh C says


    You, actually, though possibly due to poor reading skills on my part. "Thanks to functional MRI and kindred advances in technology, such as electron microscopy and laser-scanning light microscopy, we (as a species) now stand at the threshold of understanding the brain's architecture and adaptability," seemed overly optimistic.

    This also intersected with an old hobbyhorse though, so I might have jumped the gun somewhat.

  13. says

    Possibly so. I deal in castles and cathedrals, so standing at the threshold counts as a beginning in my repertory of metaphors.

  14. DMC-12 says

    "Next stop, SHODAN."

    Absolutely fantastic. That reminds me: I planned to launch a Kickstarter project to build a cortex reaver. For Christmas, of course.

  15. princessartemis says

    This is fascinating stuff. Also frightening, I think; as you say, we have barely begun understanding our own noggins. If we, as a whole, were better able to understand our own brains, we'd be much less likely to snap off fragile pieces whilst we charge in trying to 'fix' things with a sledgehammer.

    But of course, we must do these things; we are creators, every one of us, and it is in our nature to fumble about creation trying to figure out how it ticks and then to attempt to create on our own.