Warning: this website discloses medical evidence proving that British surgeons, working within the NHS, conducted a covert experimental neurosurgical operation on the brain of a five-year-old child, illicitly and without medical justification, at the North Staffordshire Infirmary in 1967. These extraordinary and shocking revelations will challenge your faith in ethical medicine.

Suggested Technological Imperatives for the Research1

... thou Celestial light
Shine inward, and the mind through all her powers
Irradiate, there plant eyes, all mist from thence
Purge and disperse, that I may see and tell
Of things invisible to mortal sight.

Milton

In order to understand how and why such a medical undertaking as this was possible during the mid-1960s, and not only possible, but came to be considered an imperative in terms of the advancement of prevailing scientific knowledge, it is necessary to speculate a little further on its probable technological value, and to situate it within an historical trajectory. Taking into account the scale of the ethical transgression which it implied, the research programme must have promised access to knowledge that could not have been acquired by any other means.

The scientific understanding of the executive functions of the brain had previously been limited to two approaches: a) the localisation of functions within specific parts of the brain, and the interrelationship of those functional parts, investigated through the neuropsychological study of brain-damaged patients (deductions of localised cerebral function arrived at by matching impairments in motor or executive functions to specific localised injuries); and b) the neurophysical and neurochemical operations at the cellular-synaptic level, investigated through the post-mortem dissection of dead brain-tissue. Both forms of investigation were rather limited in scope. Neuropsychological investigations might have been successful in isolating which areas of the brain were necessary to certain discrete cerebral or motor functions, but were able to establish little definitive information about the exact order and sequence of cerebral processes. Likewise, microscopic examination of dead brain-tissue led only to rather hypothetical conclusions about the activity of neurones and neurotransmitters in a living brain.

The post-war period was characterised, in technological terms, by a drive towards the codifying of information electronically, i.e., digitally. Alan Turing's successes in breaking the Enigma Code during WWII had suggested to information scientists that many of the processes involved in the collation, sorting, and adjudication of information might be handled more efficiently, and in ways that might guarantee freedom from error, if they could be 'outsourced' to machines. Turing had precipitated this trend in his experimental concept of Intelligent Machines. Turing's belief was that mental operations could be broken down into a series of finite logical steps, and that it was therefore theoretically possible to build a computational machine which could imitate these operations in their entirety. Again, technological development in this area faced two major limitations. Firstly, early computers had to be enormous in size, owing to the multiplicity of non-solid-state electronic components (valves) requiring individual connections, and storage media were limited to paper punch-cards and magnetic tape – limitations which would foreseeably be reduced through the gradual advancement and refinement of materials and electronics. Secondly, what level of sophistication of intelligent operations was it reasonable to expect from machines? While these two factors were clearly interconnected, an answer to the second problem was more difficult to reach by projecting forward advancements in electronics, as it involved putting the question: What is the nature of intelligence?

Turing's idea was that what distinguished a theoretically possible 'intelligent machine' from a conventional machine, i.e., one limited to a fixed number of discrete states, or phases, was its ability to imitate any conventional machine, at least in virtual terms, by incorporating into its mechanism a potentially unlimited number of new routines, through methods of successive digital encoding. The digital computer, as a basic theoretical concept, is thus understood as a universal machine. The defining characteristic of digital computers is therefore their ability to 'learn' new routines, or programmes, and the only limitations on this potential are the practical ones of available digital storage and processing power. In its distinctive learning ability, the digital computer is conceived to be analogous to the brain of a child (as exemplified by a child's special ability to absorb new languages rapidly).
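Purely by way of illustration, and not as anything drawn from Turing's own papers, the 'universal machine' idea can be sketched in a few lines of Python: a single fixed mechanism reads a stored table of rules (the 'programme'), and exchanging that table alone makes the same mechanism behave like a different machine. The rule table shown here is hypothetical and chosen only for brevity.

```python
# A minimal sketch of a one-tape Turing machine: the mechanism below never
# changes; only the rule table ('programme') passed to it does.

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """rules maps (state, symbol) -> (next_state, symbol_to_write, move),
    where move is -1 (left) or +1 (right); the machine halts on state 'halt'."""
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells))

# One hypothetical 'programme': flip every bit on the tape. Loading a different
# table would make the identical mechanism compute something else entirely.
flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt",  "_", +1),
}

print(run_turing_machine(flip_bits, "10110"))   # -> 01001_
```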

In his 1950 paper: Computing Machinery and Intelligence2, which is accepted as a seminal treatise in the emergence of the discipline of Artificial Intelligence, Turing sets a formative agenda for the process by which digital computers might succeed in imitating the functions of an adult brain:

"Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain […] Our hope is that there is so little mechanism in the child brain that something like it can be easily programmed. The amount of work in the education we can assume, as a first approximation, to be much the same as for the human child." (Turing, 1950, p.456)

And further:

"We may hope that machines will eventually compete with men in all purely intellectual fields […] It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child." (ibid., p.460)

As an expression perhaps of the sublimated aspirations of scientific advancement, the 1950s and 1960s saw an expansion of the genre of Science Fiction, which populated an imaginary universe with aliens, humanoids, androids, and robots of varying degrees of sophistication. Naturally, the fictional products of the literary imagination generally outstrip what is achievable in terms of everyday scientific reality, but the former tends to set the dimensions of conceivable expectation with reference to the latter. Certainly, from this period onwards academic discourse in the areas of Experimental Psychology and the Philosophy of Mind began to orient itself towards the discipline of Artificial Intelligence, eventually leading to the emergence of Cognitive Science as a discipline in its own right. It would have been difficult for anyone, even the most down-to-earth scientist, to conceive of a model for the future which did not involve forms of robotic technology, employing a cybernetic model of intelligence based on human intelligence. The desire for the establishment of such a cybernetic model gained support both from futuristic projections of technological advancement and from the present-day need to define more accurately the scope and direction of primitive computational 'intelligent machines'. If those machines were to begin by making basic approximations of human intellectual processes, their future development would require a more sophisticated understanding of the workings of the human brain, with particular emphasis on the developing child's brain; more sophisticated, that is, than anything so far deducible within the fields of Neuropsychology, Experimental Psychology, or Behavioural Psychology, or from the study of dead brain-tissue.

Artificial Intelligence is not a discovery, nor is it a fact. It is a model – an attempt at a copy, or a reduction, of human intelligence in so far as the latter is understood as a logical mechanism. This leaves much about human intuitive and associative thought processes untouched and unexplained.3 Nevertheless, an understanding of such a logical mechanism, pertaining to the operational neural networks of the brain, such as might be required to imbue machines with the power of something akin to a thought-process, was lacking in the mid-1960s. Hence the appearance of a technocratic imperative to overcome this hurdle in the advancement of scientific knowledge, perhaps once and for all time. The problem with such a research demand was that the neurological processes under examination, that is, live in vivo cerebral functions at molecular scales, are not accessible to normal scientific observation and measurement without some means of invasive probing of an active human brain in a conscious living subject. The project therefore faced an immediate ethical hurdle. Not only would the methods required be unprecedented and previously untested, but their application, in order to unlock the secret of a person's intimate cerebral processes, would, in any conceivable practical context, be highly morally objectionable.

Such was the degree of imperative attached to this research project that, in the face of anticipated public disapprobation, it demanded that the subject of the research be kept entirely unaware of the methods by which he was being examined; otherwise it could never have gained public ethical acceptance. In addition, and as a consequence of this necessary secrecy, the information required needed to be collected remotely (and therefore continuously), which in turn necessitated the illicit bodily implantation of a series of devices to record and transmit this information discreetly. We may infer further that this requirement necessitated the arrangement of a surgical 'opportunity', on the convenient pretext of a routine tonsillectomy (in any case, a medical procedure frequently employed proactively on essentially healthy children), whereby these devices could be implanted permanently and irreversibly, and in such a way as to guarantee that they would not later be discovered coincidentally by routine medical examination.

It is more difficult to speculate on the exact form or content of the information thereby transmitted, without some more intimate knowledge of the research programme. But I think it fair to assume that, as a minimum, some form of representation of brain activity from differing functional areas of the brain (cortical, parietal, occipital, limbic, etc.) was required to be measured, so that the correspondences between these areas during various executive tasks could be appreciated sequentially, probably in the form of a series of matrices. It might then be possible to construct a categorical model of brain functions in terms of the interrelations of executive functions, sensory functions, short-term and long-term memory, storage, retrieval, search, association, etc. Contemporary neuropsychology employs such concepts as "the Central Executive" and "Working Memory" when referring to the topology of cerebral functions – compare these two in particular with the Information Technology categories: Central Processing Unit (CPU) and Random Access Memory (RAM).
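To make the speculation concrete, the kind of 'matrix of correspondences' imagined above might, for a single task window, take the form sketched below. This is entirely my own illustration: the region labels simply follow the list given in the text, and the data are random placeholders, since the actual form of any recorded signals is unknown to me.

```python
import numpy as np

# Illustrative only: pairwise correspondence (correlation) between activity
# traces from several regions during one task window. A sequence of such
# matrices, one per task, would give the 'series of matrices' described above.

rng = np.random.default_rng(0)
regions = ["cortical", "parietal", "occipital", "limbic"]   # placeholder labels

# One task window: rows = regions, columns = successive samples (random here).
activity = rng.standard_normal((len(regions), 500))

correspondence = np.corrcoef(activity)    # regions x regions matrix

for name, row in zip(regions, correspondence):
    print(f"{name:>10}", np.round(row, 2))
```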

There is little more that I can confidently assert from the evidence available to me – the full extent of the connectivity of the devices is not readily accessible from the images available, that is, without more specialised training in neuroanatomy, and without further dedicated scan procedures, in particular MRI scans of my complete thoracic cavity. It remains to say that this research programme was clearly atypical in its design and scope – there was no apparent requirement, for instance, for the kind of representative sampling of research subjects which is characteristic of medical research in general. Perhaps any normally functioning brain would have satisfied requirements, but it seems I was selected in part for my above-average intelligence. It is unlikely that I would have been the sole research subject, but there would certainly have been few others. It was also atypical in the sense that it was not research directed principally at improvements in medical treatment and care, but seems to have gained its chief impetus from scientific and technological imperatives outside the field of medicine.4 Clearly, this research programme was intended to supply information that would be seminal and irreplaceable, and might not need to be repeated in quite the same form. Most importantly, it could conceivably be kept tightly secret.

Of course, the industry which has benefitted perhaps more than any other from advancements in information technology is the weapons and defence industry. For this reason I think it reasonable to suspect that a key impetus for this research programme was provided by the UK Ministry of Defence, who are at least accustomed to accepting the lives of subjects as occupational sacrifices in the pursuit of national interests, and who might therefore have had fewer qualms over ethical constraints.

The burgeoning technocracy of the post-war period has been engaged in a relentless pursuit of progress, whereby the ends, however imperfectly conceived, can always be made to justify the means. The global stalemate in nuclear threat, which was such a defining characteristic of the sixties, seventies, and eighties, has given way to an imperious domination in asymmetrical conventional warfare, assisted principally by advancements in electronic communications and information technology. In the key area of military supremacy, Western technocracies acquired by expedience the mandate to supervene over all human considerations – for the greater good of 'homeland security' – and this mandate impels a kind of sheep-like obedience to the imperatives of technological advancement. In order to fulfil the dream of technological prowess, certain moral and human sacrifices must be made. The question which must be asked is: How did the progress of scientific and technological advancement become so prepossessed with the idea of its own nobility that it is now capable of forgiving itself the grossest of ethical atrocities?

Speaking as a victim of this kind of atrocity, I am acutely aware that my complicity was not a prerequisite. Quite the opposite, for it depended on my absolute ignorance, to be maintained at all costs. My identity in this matter is of little consequence – it could have been anyone, though pitifully it had to be a child, and one of high intellectual capacity, and of sufficiently young age, so as to effectively inhibit the processes of understanding which could have enabled me to conceptualise what it was that had actually happened to me.

  1. The content of this page is extracted from the Analysis section of my report, pp.41-46.
  2. Turing, Alan, Computing Machinery and Intelligence (October 1950), Mind LIX (236), pp.433-460: http://somr.info/lib/Mind-1950-TURING-433-60.pdf
  3. For further discussion of the philosophical context of Artificial Intelligence, see the pages: Is Artificial Intelligence a Fallacy?, and: Mind: Before & Beyond Computation in the X.cetera section.
  4. While it seems reasonable to conclude that the dominant impetus for the research arose from within the cognitive sciences, vis-à-vis the pursuit of Artificial Intelligence, it is not unreasonable to speculate, since the research clearly provided an unprecedented and unique opportunity for the study of in vivo neurological processes, that the knowledge acquired may have facilitated a range of consequential advancements across a diversity of medical fields.
