
Oh, Hal

Aristotle believed the heart was all-important and the brain was simply a radiator to keep the heart cool. Descartes, impressed by the hydraulic action of fountains in the royal gardens, developed a hydraulic analogy for the action of the brain. Thomas Henry Huxley thought of the brain as analogous to a steam engine. Now we think brains are computers and so, therefore, computers are brains.
However, leaving aside the fact that the truth of a statement does not imply the truth of its converse, there is exactly zero evidence and exactly zero theoretical reason to believe that computers can be what brains are. To believe it nonetheless is an article of faith, not an article of science. There are, in fact, some reasons to believe that they are not.
The computer/brain analogy is compelling because computers can do some things we find extremely difficult, things we attribute high intelligence to people for being able to do. Playing chess, for example. Most of these things involve what amounts to effective lookup of useful information in a large database. The things that we find simple to do, like perceiving and navigating through a complex world, computers find extraordinarily difficult. Somehow, we do not see that as a lack of intelligence.
Computers are fundamentally dualist. Brains are not. By that I mean that computers are a hardware substrate on which an algorithm created by an external entity executes. That at least suggests that the analogy between brains and computers could be just that -- an analogy. As French neuroscientist Yves Frégnac put it, "big data is not knowledge."
Large Language Models (i.e., chatbots like ChatGPT and Bing) are an elaborate way of accessing big data. What strikes me about what enthusiasts elide about LLMs is that what they do is not driven by knowledge. It is driven by a pastiche of things that humans have said in the past. So when they argue that LLMs indicate the imminent arrival of true artificial intelligence, they are in effect claiming that intelligence does not depend in any way on actual knowledge. That strikes me as nonsense. The intelligence is in the data they access (that's us), not in the mechanism of access.
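To make "pastiche" concrete, here is a toy sketch: a bigram Markov chain, which is nothing like a real transformer and vastly simpler, but shares the relevant property. The corpus and function names here are invented for illustration. By construction, every word pair this thing emits was already said by its "training data":

```python
import random

# Toy "training data": everything the model will ever know.
corpus = ("the brain is a computer the computer is a brain "
          "the heart is a radiator").split()

# Build a lookup table: each word -> list of words that followed it.
table = {}
for prev, nxt in zip(corpus, corpus[1:]):
    table.setdefault(prev, []).append(nxt)

def pastiche(start, length, seed=0):
    """Generate text by recombining fragments of the corpus."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        followers = table.get(words[-1])
        if not followers:
            break  # dead end: nothing ever followed this word
        words.append(random.choice(followers))
    return " ".join(words)

print(pastiche("the", 8))
```

The output can look novel, but every transition in it is a quotation of the past; nothing in the mechanism knows what a brain or a radiator is.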
Brains are not radiators. They are not fountains. They are not steam engines. They may not even be computers. No one really knows that yet. Another French neuroscientist, Romain Brette, has challenged this metaphor in some detail. Brette points out that in thinking of brains as running code, researchers unconsciously drift between different meanings of the word "code." Starting from a technical sense, in which code means there is a link between a stimulus and the activity of a neuron, they drift into a very different, representational, meaning in which neural codes represent that stimulus, without justifying, or even consciously acknowledging, that shift.
This is dangerously close to a homunculus model. The unstated implication, under the representational meaning of code, is that the activity of neural networks is presented to an ideal observer or reader within the brain, often described as "downstream structures," that has access to optimal ways of decoding the signals. With LLMs, it is pretty obvious that the downstream structure is us.
The cognitive revolution in psychology, starting in the 1970s, has pretty clearly demonstrated that viewing the brain as a passive computer that responds to inputs and processes data is wrong. Brains exist in bodies. Those bodies interact with and intervene in the world, and a considerable portion of whatever it is that brains do is based on sensorimotor metaphors derived from those interactions. And I should point out here that the meaning of metaphor is not the usual "Shall I compare thee to a summer's day" sense. Rather, the cognitive theory of metaphor involves the wholesale export of reasoning methods from one domain into a completely different one, e.g. using the brain's ability to reason about navigation to think about mathematics instead. This is what a number line is. When the metaphor changes, the meaning changes as well. In mathematics, the metaphor shifted from numbers as enumerations of objects to numbers as labels for positions along a path. The enumeration metaphor excluded zero and the irrational numbers from the world of numbers -- the Pythagorean position -- while the path metaphor requires them to be numbers, since otherwise those positions along the path would lack labels.
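The enumeration-versus-path shift can be sketched in a few lines. This is a playful illustration, not a piece of mathematics; the function names are mine:

```python
import math

# Enumeration metaphor: a "number" answers "how many objects?"
# Zero objects is nothing to count; sqrt(2) objects is meaningless.
def is_enumeration(x):
    return x > 0 and float(x).is_integer()

# Path metaphor: a "number" labels a position along a line,
# so 0 and sqrt(2) are as legitimate as 3.
def is_position(x):
    return math.isfinite(x)

for x in (3, 0, math.sqrt(2)):
    print(x, is_enumeration(x), is_position(x))
```

Under the first predicate, 0 and sqrt(2) are not numbers at all; under the second, they are unremarkable. Same objects, different metaphor, different meaning.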
In 2015, the roboticist Rodney Brooks chose the computational metaphor of the brain as his pet hate in his contribution to a collection of essays entitled This Idea Must Die. Less dramatically, but drawing similar conclusions, two decades earlier the historian S. Ryan Johansson argued that “endlessly debating the truth or falsity of a metaphor like ‘the brain is a computer’ is a waste of time. The relationship proposed is metaphorical, and it is ordering us to do something, not trying to tell us the truth.”
Reverse engineering a computer is often used as a thought experiment to show how, in principle, we might understand the brain. Inevitably, these thought experiments are successful, encouraging us to pursue this way of understanding the squishy organs in our heads. But in 2017, a pair of neuroscientists, Eric Jonas and Konrad Paul Kording, decided to actually do the experiment on a real (and simple) computer chip, the MOS 6507 processor that was used in popular video games in the '70s and '80s. Things did not go as expected.
They deployed the entire analytical armament of modern neuroscience to attempt reverse engineering the CPU. Despite the fact that there is a clear explanation for how the chip works, they were unable to detect from outside the hierarchy of information processing that occurs inside it. As Jonas and Kording put it, the techniques fell short of producing “a meaningful understanding”. Their conclusion was bleak: “Ultimately, the problem is not that neuroscientists could not understand a microprocessor, the problem is that they would not understand it given the approaches they are currently taking.” This bears directly on neural networks in general, which are the blackest of black boxes. No one knows how they convert input into output, and this experiment suggests that such knowledge cannot be obtained with current techniques. Absent that knowledge, claims of "sentience" or "intelligence" are specious.
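A drastically simplified, hypothetical version of the Jonas and Kording exercise can be run in a few lines: take a circuit whose workings are fully known (here a one-bit full adder standing in for the MOS 6507, with gate names of my own invention) and apply a neuroscience-style "lesion" analysis, silencing one internal gate at a time and recording what breaks:

```python
def full_adder(a, b, cin, lesion=None):
    def gate(name, value):
        # A "lesion" silences one internal gate, the way a lesion
        # study silences one brain region.
        return 0 if name == lesion else value
    x1 = gate("x1", a ^ b)      # partial sum of the two inputs
    a1 = gate("a1", a & b)      # carry generated by the inputs
    a2 = gate("a2", x1 & cin)   # carry propagated through the partial sum
    s = gate("s", x1 ^ cin)     # sum bit
    cout = gate("c", a1 | a2)   # carry-out bit
    return s, cout

# Lesion each internal gate in turn and count which of the 8 input
# patterns now produce wrong output.
inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
for g in ("x1", "a1", "a2"):
    broken = [i for i in inputs
              if full_adder(*i, lesion=g) != full_adder(*i)]
    print(g, "breaks", len(broken), "of 8 input patterns")
```

The lesion map tells you which gate matters for which inputs, which sounds like progress. It never tells you the one thing worth knowing: that the circuit adds.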
See also: Introducing the AI Mirror Test, which very smart people keep failing
