Suggested Technological Imperatives for the Research1
Shine inward, and the mind through all her powers
Irradiate, there plant eyes, all mist from thence
Purge and disperse, that I may see and tell
Of things invisible to mortal sight.
Milton
In terms of our everyday expectations, it is hard to imagine how a programme of research such as this might have been conceived at all, let alone actually implemented. Given the enormity of the ethical transgression it implied, we need to look beyond the simple disinterested pursuit of scientific knowledge for a credible motivation. One problem for the lay public in facing up to the probable reality behind this disclosure is that an acknowledgement of its truth first requires an understanding of the urgent necessity behind the proposal; i.e., as a scientific and technological imperative. In the absence of that understanding, the tendency will be for lay opinion to revert to denial, in natural defence against the extraordinary horror incited by the prospect that the disclosure is true.
Some speculative discussion of the meaning of the research, in terms of its implications for the advancement of science and technology viewed within the historical trajectory of the mid-1960s, is therefore required in order to understand how this research proposal was expected to fulfil the promise of access to knowledge that could not have been acquired by any other possible (i.e., ethical) means. In the absence of such means, recondite data considered indispensable to the further progress of certain technologies during this period was understood to be simply beyond the reach of contemporary scientific discovery.
The scientific understanding of the executive functions of the brain, in terms of either: a) the localisation of functions within specific parts of the brain and the interrelationship of those functional parts; or b) the neurophysiological and neurochemical operations at the cellular-synaptic level, had previously been limited, in the case of a), to neuropsychological studies of brain-damaged patients (deductions of localised cerebral function arrived at by matching impairments in motor or executive functions to specific localised injuries); and, in the case of b), to the post-mortem dissection of dead brain tissue. Both forms of investigation were rather limited in scope. Neuropsychological investigations might have been successful in isolating which areas of the brain were necessary to certain discrete cerebral or motor functions, but could establish little definitive information about the exact order and sequence of cerebral processes and their dependencies. Likewise, microscopic examination of dead brain tissue led only to hypotheses about the activity of neurones and neurotransmitters in a living brain.
The post-war period was characterised, in technological terms, by a drive towards the codifying of information electronically, i.e., digitally. Alan Turing’s success in breaking the Enigma cipher during WWII had suggested to information scientists that many of the processes involved in the collation, sorting, and adjudication of information might be handled more efficiently, and in ways that might guarantee freedom from human error, if they could be ‘outsourced’ to machines. Turing had precipitated this trend with his experimental concept of intelligent machines. Turing’s belief was that mental operations could be broken down into a series of finite logical steps, and that it was therefore theoretically possible to build a computational machine which could imitate these operations in their entirety. Technological development in this area, however, faced two major limitations. Firstly, early computers had to be enormous in size due to the multiplicity of non-solid-state electronic components (valves) requiring individual connections, and storage media were limited to paper punch-cards and magnetic tape – limitations which foreseeably would be reduced gradually along with the advancement and refinement of materials and electronics. Secondly, what level of sophistication of intelligent operations was it reasonable to expect from machines? While these two factors were clearly interconnected, an answer to the second problem was more difficult to perceive by projecting forward advancements in electronics, as it involved putting the question: What is the nature of intelligence?
Turing’s idea was that conventional machines, i.e., those limited to a fixed number of discrete states, or phases, should be distinguished from theoretically possible ‘intelligent machines’ on the basis of the prospective ability of the latter to imitate any conventional machine, at least in virtual terms, by incorporating into their mechanism a potentially unlimited number of new routines through successive digital encoding. The digital computer, as a basic theoretical concept, is thus understood as a universal machine. The defining characteristic of digital computers is therefore their capacity to ‘learn’ new routines, or programmes, and the only limitations on this potential are the practical ones of available digital storage and processing power. In this distinctive learning capacity, the digital computer is conceived to be analogous to the brain of a child (as exemplified by a child’s special ability to absorb new languages rapidly, for example).
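To make the idea concrete, the following is a minimal sketch (my own construction, not drawn from Turing's paper) of how one fixed simulator can imitate any 'conventional' machine once that machine's routine is supplied to it as data. The example routine, an encoded transition table that adds one to a binary number, is entirely hypothetical and chosen only for illustration.

```python
# A minimal sketch of the 'universal machine' idea: one fixed program that can
# imitate any simple, fixed-state machine, provided the latter is handed to it
# as data (an encoded transition table). The example machine below is
# hypothetical and chosen only for illustration.

def run_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Simulate any single-tape machine described by `transitions`.

    `transitions` maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is -1 (left), +1 (right), or 0 (stay). The simulator itself
    never changes; only the table it is given does, which is the sense in
    which it is 'universal'.
    """
    tape = dict(enumerate(tape))   # sparse tape, indexed by cell position
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += move
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, blank) for i in cells).strip(blank)

# Example 'conventional machine': add 1 to a binary number (head starts at the left).
increment = {
    ("start", "0"): ("start", "0", +1),   # scan right over the digits
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("carry", "_", -1),   # fell off the end: begin carrying leftwards
    ("carry", "1"): ("carry", "0", -1),   # 1 + carry = 0, keep carrying
    ("carry", "0"): ("halt",  "1",  0),   # 0 + carry = 1, done
    ("carry", "_"): ("halt",  "1",  0),   # ran past the left edge: write a new digit
}

print(run_machine(increment, "1011"))  # -> 1100
```

The point of the sketch is only that 'learning' a new routine amounts to handing the same unchanging simulator a different table; nothing in it should be read as a claim about how any historical machine was actually built.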
In his 1950 paper Computing Machinery and Intelligence2, which is widely accepted as a seminal treatise in the emergence of the discipline of Artificial Intelligence, Turing sets out a formative agenda for the process by which digital computers might succeed in imitating the functions of an adult brain:
“Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain […] Our hope is that there is so little mechanism in the child brain that something like it can be easily programmed. The amount of work in the education we can assume, as a first approximation, to be much the same as for the human child.” (Turing, 1950, p.456)
And further:
“We may hope that machines will eventually compete with men in all purely intellectual fields […] It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child.” (ibid., p.460)
As an expression perhaps of the sublimated aspirations of scientific advancement, the 1950s and 1960s saw an expansion of the genre of Science Fiction, which populated an imaginary universe with aliens, humanoids, androids, and robots of varying degrees of sophistication. Naturally, the fictional products of the literary imagination generally outstrip what is achievable in terms of everyday scientific reality; but the former tend to set the dimensions of conceivable expectation with reference to the latter. Certainly, from this period onwards academic discourse in the areas of Experimental Psychology and the Philosophy of Mind began to orient itself to the discipline of Artificial Intelligence, eventually leading to the development of Cognitive Science as an academic discipline. It would have been difficult for anyone, even the most down-to-earth scientist, to conceive a model of the future which did not involve forms of robotic technology employing a cybernetic model of intelligence based on human intelligence. The desire for the establishment of such a cybernetic model therefore gained support not only from futuristic projections of technological advancement, but also from the present-day need to define more accurately the scope and direction of primitive computational ‘intelligent machines’. While those machines might begin by making basic approximations of human intellectual processes, their future development required a more sophisticated understanding of the workings of the human brain, with particular emphasis on the developing child’s brain; more sophisticated, that is, than anything which had so far been deducible within the fields of Neuropsychology, Experimental Psychology, or Behavioural Psychology, or from the study of dead brain tissue.
Artificial Intelligence is not a discovery, nor is it a fact. It is a model – an attempt at a copy, or a reduction, of human intelligence in so far as the latter is understood as a logical mechanism. This leaves much about human intuitive and associative thought processes untouched and unexplained.3 Nevertheless, an understanding of such a logical mechanism, as it pertains to the operational neural networks of the brain and as might be required to imbue machines with the power of something-akin-to-a-thought-process, was lacking in the mid-1960s. Hence the appearance of a technocratic imperative to overcome this hurdle in the advancement of scientific knowledge, perhaps once and for all time. The problem with such a research demand was that the neurological processes under examination, that is, live in vivo cerebral functions at molecular scales, are not accessible to normal scientific observation and measurement without some means of invasive probing of an active human brain in a conscious living subject. The project therefore faced an immediate ethical hurdle – not only would the methods required be unprecedented and previously untested, but the application of those methods, in order to unlock the secret of a person’s intimate cerebral processes, would, in any conceivable practical context, be highly morally objectionable.
Such was the degree of imperative attached to this research project that, in the face of anticipated public disapprobation, it demanded that the subject of the research be kept entirely unaware of the methods by which he was being examined; it would otherwise never have passed public ethical acceptance. In addition, and as a consequence of this necessary secrecy, the information required needed to be collected remotely (and therefore continuously), which in turn necessitated the illicit bodily implantation of a series of devices to record and transmit this information discreetly. We may infer further that this requirement necessitated the arrangement of a surgical opportunity, on the convenient pretext of a routine tonsillectomy (in any case, a medical procedure frequently employed proactively upon essentially healthy children), whereby these devices could be implanted permanently and irreversibly, and in such a way as to guarantee that they might not later be discovered coincidentally by routine medical examination.
It is more difficult to speculate on the exact form or content of the information thereby transmitted, without some more intimate knowledge of the research programme. But I think it fair to assume that, as a minimum, some form of representation of brain activity from differing functional areas of the brain (cortical, parietal, occipital, limbic, etc.) was required to be measured (with particular attention to the brain stem – the medulla – as the ‘basic input-output system’ of the brain), so that the correspondences between these areas during various executive tasks could be appreciated sequentially, probably in the form of a series of matrices. It might then be possible to construct a categorical model of brain functions in terms of the interrelations of executive functions, sensory functions, short-term and long-term memory, storage, retrieval, search, association, etc., by combining existing neuropsychological knowledge regarding the localisation of cerebral functions with new data signalling the interactions and dependencies between those functional parts. As an example of the conceptual parallels existing between contemporary neuropsychology and cybernetics, the former employs such concepts as “the Central Executive” and “Working Memory” when referring to the topology of cerebral functions – compare these two in particular with the Information Technology categories: Central Processing Unit (CPU) and Random Access Memory (RAM).
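Purely by way of illustration, and without claiming any knowledge of how the actual data was structured, the sketch below (my own construction, with hypothetical area names and random numbers standing in for any real recordings) shows how activity sampled from a handful of functional areas during a given task might be reduced to a matrix of pairwise correspondences, one matrix per task:

```python
# An illustrative sketch only: activity sampled from a few functional areas over
# time, reduced to a matrix of pairwise correspondences per task. The area names
# and the random data are placeholders, not recovered measurements.

import numpy as np

areas = ["frontal", "parietal", "occipital", "limbic", "brainstem"]

def correspondence_matrix(activity):
    """activity: array of shape (n_samples, n_areas) recorded during one task.

    Returns an n_areas x n_areas correlation matrix: one entry per pair of
    areas, indicating how strongly their recorded activity varied together.
    """
    return np.corrcoef(activity, rowvar=False)

rng = np.random.default_rng(0)
tasks = ["reading", "recall", "motor"]

# One matrix per task: the "series of matrices" mentioned above.
series = {task: correspondence_matrix(rng.normal(size=(200, len(areas))))
          for task in tasks}

for task, m in series.items():
    print(task, m.shape)   # each task yields a 5 x 5 matrix of area-to-area values
```

Such matrices would say nothing by themselves about causation or sequence; they merely formalise the notion of 'interactions and dependencies between functional parts' in a way that lends itself to the kind of categorical modelling described above.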
There is little more that I can confidently assert from the evidence available to me – the full extent of the connectivity of the devices is not so readily accessible from the images presented on other pages in this section, that is, without more specialised training in neuroanatomy, and further dedicated scan procedures, in particular MRI scans of my complete thoracic cavity.4 It remains to say that this research programme was clearly atypical in its design and scope – there was no apparent requirement, for instance, for the kind of representative sampling of research subjects which is characteristic of medical research in general. Perhaps any normally functioning brain would have satisfied requirements, but it seems I was selected in part for my above-average intelligence. It is unlikely that I would have been the sole research subject, but there would certainly have been few others. It was also atypical in the sense that it was not research directed principally at improvements in medical treatment and care, but seems to have gained its chief impetus from scientific and technological imperatives outside the field of medicine.5 Clearly, this research programme was intended to supply information that would be seminal and irreplaceable, and might not need to be repeated in quite the same form. Most importantly, it could conceivably be kept tightly secret.
Of course, the industry which has benefitted perhaps more than any other from advancements in information technology is that of the weapons and defence industry. For this reason I think it is reasonable to speculate further that a key impetus for this research programme will have been provided by the UK Ministry of Defence. The burgeoning technocracy of the post-war period has been engaged in a relentless pursuit of progress whereby the ends, however imperfectly conceived, can always be made to justify the means. The global stalemate in nuclear threat that was such a defining characteristic of the sixties, seventies, and eighties has given way to an imperious domination in asymmetrical conventional warfare, assisted principally by advancements in electronic communications and information technology. In the key area of military supremacy, western technocracies acquired by expedience the mandate to prevail over all human considerations – for the ‘greater good’ of homeland security – and this mandate impels a kind of sheep-like obedience to the imperatives of technological advancement. In order to fulfil the dream of technological prowess, certain moral and human sacrifices must be made. The question which must be asked is: How did the progress of scientific and technological advancement become so prepossessed with the idea of its own nobility that it is now capable of forgiving itself the grossest of ethical atrocities?
Speaking as a victim of this kind of atrocity, I am acutely aware that my complicity was not a prerequisite. Quite the opposite, for it depended on my absolute ignorance, to be maintained at all costs. My identity in this matter is of little consequence – it could have been anyone, though pitifully it had to be a child, and one of high intellectual capacity, and of sufficiently young age to be at a formative stage of linguistic development – a factor which in turn effectively inhibited the processes of understanding that might have enabled me to conceptualise what it was that had actually happened to me at the age of five.
September 2020
Footnotes:
1. The content of this page is extracted from the Analysis section of my report, pp.41-46. [back]
2. Turing, Alan, Computing Machinery and Intelligence (October 1950), Mind LIX (236), pp.433-460: http://somr.info/lib/Mind-1950-TURING-433-60.pdf [back]
3. For further discussion of the philosophical context of Artificial Intelligence, see the pages: Is Artificial Intelligence a Fallacy?, and: Mind: Before & Beyond Computation in the Xcetera section. [back]
4. At the time of first writing this (in 2003), there were no existing MRI scans of my thoracic cavity. Since 2015, there have emerged three such scans of my thoracic/cervical spine – these are discussed in detail at: C-Spine MRI Scan (July 2020); and in Part 2 of my report, pp.72-82. [back]
5. While it seems reasonable to conclude that the dominant impetus for the research arose from within the cognitive sciences, vis-à-vis the pursuit of Artificial Intelligence, it is not unreasonable to speculate, since the research clearly provided an unprecedented and unique opportunity for the study of in vivo neurological processes, that the knowledge acquired may have facilitated a range of consequential advancements across a diversity of medical fields. [back]