Mental Processes

Mental Processes is a collection of H. Christopher Longuet-Higgins’s scientific papers (many co-authored), chosen by the author himself to represent his contribution to cognitive science over the past twenty years. From, in fact, the day in 1967 when “Richard Gregory and I packed our bags for Edinburgh…in the shared conviction that the workings of the mind could not possibly be as tedious as the psychologists made them out to be, or as peripheral as the physical scientists tended to assume”.

The road that Gregory and Longuet-Higgins trod was certainly new, breaking with old-style cognitive psychology in its enthusiastic incorporation of ideas from machine intelligence, Chomskyan linguistics, computational models of language acquisition, algorithmic methods drawn from computer science, the technology of holography, and a wide array of mathematical techniques. And they trod it differently. Gregory, besotted with machines and drawn to the concrete, designer of experiments, inventor of black boxes, gifted popularizer; Longuet-Higgins, at home with the abstract and technical, intellectually adventurous, able to beam the same no-nonsense analytical engine on everything from the human soul to the game of Tic-Tac-Toe.

On the evidence gathered here, their conviction on the road to Edinburgh has been justified. There is nothing either tedious or peripheral about Longuet-Higgins’s examinations of human memory, such as “A Holographic Model of Temporal Recall”, or attempts to articulate our cognitive relation to music in “The Grammar of Music” and “The Perception of Melodies”, or investigations of linguistic modeling in “The Algorithmic Description of Natural Language”, or contributions to the understanding of sight such as “The Role of the Vertical Dimension in Stereoscopic Vision”.

The papers on language and vision are, as Longuet-Higgins himself admits, technically difficult, while those on memory employ mathematical techniques that will surely baffle anyone not well briefed in such matters. This leaves music. Here the contributions are more accessible. “On Interpreting Bach”, for example, is an original and fascinating attempt to construct a program which would assign particular fugue subjects to particular keys; and “The Three Dimensions of Harmony” cleverly teases out and illuminates the ambiguities of standard written musical notation.

What, then, from a general standpoint, are we to make of this brave new science? It is clearly a giant step up from its parent – rat science and the crushing simplicities of stimulus/response behaviourism – though it nevertheless offers itself, as behaviourism did, as a hard-edged scientific discourse on human psychology. But does it, to use Longuet-Higgins’s modest expression of hope for his own work on human memory, allow one to escape from the “mindless world of atoms and molecules into the real world of subjective experience”? I think not, either in its present state or in the near future. And the reasons for saying this have nothing to do with the quality of Longuet-Higgins’s own work, which is undeniably first-rate, and little to do with the fact that the subject is still new and yet to fulfil its promise.

Three objections could be made to cognitive science as Longuet-Higgins conceives of it. First, it rests on the twin notions of algorithmic procedures as models of thought processes and von Neumann’s conception of a computer as a model of brain hardware. Recent experience in artificial intelligence and elsewhere has cast serious doubt on the adequacy of this paradigm. Not only is it the case that algorithms (mechanically repeatable loops) seem vastly inferior to sets of meaningful rules (expert systems) as approximations to many human thought processes, but the von Neumann conception of a computer (a sequential, one-thing-at-a-time machine whose input is precise and error-free), which encourages such algorithmics, is absurdly restricted. The brain appears to do many things at once, and runs on data that are often inaccurate, contradictory and massively redundant. Attempts to incorporate these features into a new model of computing – the so-called parallel processing machine – are where the future of computer science, and therefore of cognitive science, is pointing. Of course, if these attempts are successful then a new and powerful scientific paradigm will emerge, one which, by patterning the architecture of computers on the neurology of parallel processes, will be in a strong, virtually irrefutable position to provide a “computational” model of the brain. But such a model will still leave untouched the question: can an adequate and convincing account of subjective experience emerge from a description of what is still, after all, only mechanism?
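To make the architectural contrast concrete, here is a minimal sketch in Python – my own illustration, not anything from the book – of one estimation task done von Neumann-style, one exact datum at a time, and then across many concurrent channels fed noisy, redundant, occasionally contradictory readings. All names here are hypothetical, and Python threads only gesture at genuine parallelism; the point is the shape of the computation, not its speed.

```python
# A loose illustration, not the book's: a one-at-a-time loop versus many
# simultaneous channels, both running on inaccurate, redundant data.

import random
from concurrent.futures import ThreadPoolExecutor

def noisy_sensor(true_value: float) -> float:
    """One redundant channel: inaccurate, and occasionally contradictory."""
    reading = true_value + random.gauss(0, 0.5)
    if random.random() < 0.1:          # a wildly wrong datum, now and then
        reading = -reading
    return reading

def sequential_estimate(true_value: float, n: int = 100) -> float:
    # von Neumann-style: poll each channel in turn, one thing at a time
    readings = sorted(noisy_sensor(true_value) for _ in range(n))
    return readings[n // 2]            # the median tolerates the bad data

def parallel_estimate(true_value: float, n: int = 100) -> float:
    # many channels at once; the same aggregation, a different architecture
    with ThreadPoolExecutor() as pool:
        readings = sorted(pool.map(noisy_sensor, [true_value] * n))
    return readings[n // 2]

print(sequential_estimate(3.0), parallel_estimate(3.0))
```

Either version arrives at a usable answer because it aggregates redundant channels instead of trusting a single precise input; the parallel version simply matches that tolerance with a matching architecture.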

The second objection can be located in cognitive science’s uncritical enthusiasm for, and incorporation of, Chomsky’s work in linguistics and, in particular, his attempt to solve the problem of meaning in language by subordinating it to a prior, given description of grammar. Many students and disciples of Chomsky have swallowed this syntax-before-semantics line and searched for a “semantic component” that grammatically correct sentences would plug into in order to acquire their meaning. All efforts to fulfil this programme have so far failed. Longuet-Higgins shares this passion for Chomsky’s syntactical approach, and at the same time offers a misdescription of it that is very illuminating. He writes that “Chomsky’s special contribution to linguistic thought was his comparison between the grammatical sentences of a natural language and the theorems of a logical system. Modern logic is conducted in symbols…and there are strict rules for determining whether a string of symbols represents a theorem…” Not so. Chomsky compared grammatical sentences with the well-formed formulas (wffs) of logical systems, not with theorems, which are wffs that have been given a proof. It is certainly true that wffs can be generated by an algorithm, and therefore strict rules exist for telling whether a string is a wff. But no such rules exist or can exist for theorems – as mathematical logicians demonstrated in a variety of ways during the 1930s. Longuet-Higgins can certainly be forgiven for getting Chomsky wrong since, it could be argued, Chomsky’s whole approach to meaning is itself permeated by a transposed version of the wff/theorem confusion.
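The asymmetry is easy to exhibit. Below is a small sketch – my illustration, with a toy propositional grammar assumed purely for the example, not anything in the book – of a decision procedure for well-formedness. It terminates on every input, which is precisely what Church’s 1936 result shows no procedure can do for theoremhood in full first-order logic.

```python
# A sketch: deciding well-formedness is purely mechanical.
# Toy grammar, assumed for illustration only:
#   wff := atom | '~' wff | '(' wff op wff ')'   with op in {'&', '|', '>'}
# where atoms are single lowercase letters.

def is_wff(s: str) -> bool:
    """Decide whether s is a wff of the toy language. Always terminates,
    because each step consumes input: well-formedness is decidable."""
    return _parse(s, 0) == len(s)       # exactly one wff, nothing left over

def _parse(s: str, i: int):
    """Return the index just past one wff starting at position i, or None."""
    if i >= len(s):
        return None
    if s[i].islower():                  # atom
        return i + 1
    if s[i] == '~':                     # negation: ~ wff
        return _parse(s, i + 1)
    if s[i] == '(':                     # binary: ( wff op wff )
        j = _parse(s, i + 1)
        if j is None or j >= len(s) or s[j] not in '&|>':
            return None
        k = _parse(s, j + 1)
        if k is None or k >= len(s) or s[k] != ')':
            return None
        return k + 1
    return None

assert is_wff('(p&~q)') and not is_wff('(p&&q)')

# No analogous total procedure exists for theoremhood in first-order logic:
# Church (1936) showed it undecidable. A machine can verify a supplied
# proof, but it cannot decide outright whether a wff is a theorem.
```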

Finally, there are the questions of human agency and subjectivity. Analytic philosophers have thought much about what the differences are, if any, between mechanically specifiable actions and what we all think of as our mental activities. And one of the conclusions to have emerged is that it seems impossible to give an account of such mental activity without invoking the concept of intentionality; without going beyond explanations couched in the language of pure mechanism and assigning a part to conscious human agency; without, to put it crudely, invoking what in pre-behaviourist psychology used to be called the will. Allied to this is the question of subjectivity. Out of the large-scale debates about the “subject” which have preoccupied many in the humanities over the past two decades, certain insistences seem difficult to avoid. One such is that the subject (and therefore what we experience as “subjectivity”) is constituted, is the work of culture. It is a social, historical, semiotic construct that is no more explicable in terms of the brain mechanisms of individuals than the content and significance of the images on a television screen are reducible to the characteristics of its wiring. So it looks as if cognitive science is further from subjective experience, and more deeply embedded in the “mindless world of atoms and molecules”, than Longuet-Higgins would have us believe.
