An Important Mathematical Oversight

The original intention for this website was to encourage public awareness of an historical medical crime, one that has remained a tightly-kept British state secret now for more than five decades. The matter is of enormous public interest, not least because the motivation behind the crime itself was that of advancing scientific research into areas that would come to provide the seminal knowledge behind much of the technological progress of the last half-century. My investigation into the matter inspired a parallel enquiry into some of the fundamental principles that underpin that scientific and technological impulse.

There are therefore two principal concerns of this website, and if there is acknowledged to be a substantive connection between them, that has inevitably to do with late 20th Century developments in science and information technologies, and more broadly with the idea of a burgeoning technocracy – the suggestion of a growing alliance between corporate technology and state power – one which might be judged to have atrophied the powers conventionally assigned to liberal-democratic institutions. This link therefore serves as a segue to emphasise the equal importance, to my mind, of what is going on in the X.cetera section of the site, so that that section should not appear, from the point of view of the other, as some kind of 'afterthought'.

X.cetera is concerned with a problem in mathematics and science to do with the way we think about numbers. As a subset of the category defined as integers, elements in the series of the natural numbers are generally held to represent quantities as their absolute, or 'integral', properties. On the page: The Limits of Rationality I have made a criticism of this standard definition of integers as indices of self-contained values, on the basis that the definition obscures the fact that the relations of proportion between integers are derived from their membership of a restrictive group of characters as defined by the decimal rational schema; and that those ratios of proportion cannot be assumed to apply to the same values when transcribed into alternative radices such as binary, or octal, or hexadecimal, for instance.

This means that, while the values of individual integers so transcribed will be ostensibly equal across those alternative radices, the ratios of proportion between groups of those values will not be preserved, as these must be determined uniquely according to the range of available digits within any respective radix (0-9 in decimal, 0-7 in octal, for instance); one consequence of which, of course, is the variable relative frequency (or 'potentiality') of specific individual digits when compared across radices. This observation has serious implications for the logical consistency of data produced within digital information systems, as the logic of those systems generally relies upon the seamless correspondence, not only of 'integral' values when transcribed between decimal and the aforementioned radices, but ultimately upon the relations of proportion between those values.
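By way of illustration only – the following sketch (not part of the original argument, with function names of my own devising) shows the uncontroversial computational half of this observation: the same run of integer values, written out in different radices, yields a different distribution of digit characters in each radix.

```python
from collections import Counter

def to_base(n, base):
    """Render a non-negative integer as a digit string in the given base (2-16)."""
    digits = "0123456789abcdef"
    if n == 0:
        return "0"
    out = []
    while n:
        out.append(digits[n % base])
        n //= base
    return "".join(reversed(out))

def digit_frequencies(limit, base):
    """Tally how often each digit character occurs when 0..limit-1 are written in `base`."""
    tally = Counter()
    for n in range(limit):
        tally.update(to_base(n, base))
    return tally

# The same values 0..99 produce a different digit distribution in each radix:
for base in (2, 8, 10, 16):
    print(base, dict(sorted(digit_frequencies(100, base).items())))
```

Whether one draws from this the further conclusions argued on The Limits of Rationality page is a separate question; the sketch simply makes the varying relative frequency of individual digits across radices concrete.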

Information Science tends to treat the translation and recording of conventional analogue information into digital format unproblematically. The digital encoding of written, spoken, or visual information is seen to have little effect on the representational content of the message. The process is taken to be neutral, faithful, transparent. The assessment of quantitative and qualitative differences at the level of the observable world retains its accuracy despite at some stage involving a reduction, at the level of machine code, to the form of a series of simple binary (or 'logical') distinctions between '1' and '0' – positive and negative. This idea relies upon a tacit assumption that there exists such a level of fine-grained logical simplicity as the basis of a hierarchy of logical relationships, and which transcends all systems of conventional analogue (or indeed sensory) representation (be they linguistic, visual, sonic, or whatever); and that therefore we may break down these systems of representation to this level – the digital level – and then re-assemble them, as it were, without corruption.
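The round-trip that Information Science takes for granted can be made explicit in a few lines (a minimal sketch of my own, not drawn from the original text): written information is reduced to a series of binary digits and then reassembled, apparently without corruption, at the level of the individual symbol.

```python
def encode_to_bits(text):
    """UTF-8 encode the text, then spell each byte as eight binary digits."""
    return "".join(f"{byte:08b}" for byte in text.encode("utf-8"))

def decode_from_bits(bits):
    """Regroup the bit string into bytes and decode back to text."""
    data = bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

message = "faithful, transparent?"
bits = encode_to_bits(message)
# The symbol-level round-trip is lossless:
assert decode_from_bits(bits) == message
```

It is precisely this demonstrable symbol-level fidelity that lends the assumption of neutrality its plausibility; the argument of the X.cetera section concerns what the round-trip does not, by itself, establish.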

However, in the X.cetera section I am concerned to point out that the logical relationship between '1' and '0' in a binary system (which equates in quantitative terms with what we understand as their proportional relationship) is derived specifically from their membership of a uniquely defined group of digits (in the case of binary, limited to two members). It does not derive from a set of transcendent logical principles arising elsewhere and having universal applicability (a proposition that will come as a surprise to many mathematicians and information scientists alike).

It follows that the proportional relationships affecting quantitative expressions within binary, being uniquely and restrictively determined, cannot be assumed to apply (with proportional consistency) to translations of the same expressions into decimal (or into any other number radix, such as octal, or hexadecimal). By extension, therefore, the logical relationships within a binary system of codes, being subject to the same restrictive determinations, cannot be applied with logical consistency to conventional analogue representations of the observable world, as this would be to invest binary code with a transcendent logical potential that it simply cannot possess – they may be applied to such representations, and the results may appear to be internally consistent, but they will certainly not be logically consistent with the world of objects.

The issue of a failure of logical consistency is one that concerns the relationships between data objects – it does not concern the specific accuracy or internal content of data objects themselves (just as the variation in proportion across radices concerns the dynamic relations between integers, rather than their specific 'integral' numerical values). This means that, from a conventional scientific-positivist perspective, which generally relies for its raw data upon information derived from discrete acts of measurement, the problem will be difficult to recognise or detect (as the data might well appear to possess internal consistency). One will however experience the effects of the failure (while being rather mystified as to its causes) in the lack of a reliable correspondence between expectations derived from data analyses, and real-world events.

So that's some of what X.cetera is all about… if you think you're 'ard enough!

Download my 157-page report: Special Operations in Medical Research [1.9MB]
Download my Open Letter to the British Prime Minister & Health Secretary [612KB]
The Limits of Rationality (an important mathematical oversight) [384KB]
Mind: Before & Beyond Computation [461KB]
Dawkins' Theory of Memetics – A Biological Assault on the Cultural [351KB]
Randomness, Non-Randomness, & Structural Selectivity [273KB]

Suggested Technological Imperatives for the Research [1]

... thou Celestial light
Shine inward, and the mind through all her powers
Irradiate, there plant eyes, all mist from thence
Purge and disperse, that I may see and tell
Of things invisible to mortal sight.

– Milton, Paradise Lost (Book III)

In terms of our everyday expectations, it is hard to imagine how a program of research such as this might have been conceived at all, let alone actually implemented. Taking into account the enormity of the ethical transgression that it implied, we will need to look beyond the simple disinterested pursuit of scientific knowledge for a credible motivation. One problem for the lay public in facing up to the probable reality of this disclosure is that an acknowledgement of its truth first requires an understanding of the urgent necessity behind the proposal; i.e., as a scientific and technological imperative. In the absence of that understanding, the tendency will be for lay opinion to revert to denial, in natural defence against the extraordinary horror incited by the prospect of the truth of the disclosure.

Some speculative discussion of the meaning of the research – in terms of its implications for the advancement of science and technology, viewed within the historical trajectory of the mid-1960s – is therefore required in order to understand how this research proposal was expected to fulfil the promise of access to knowledge that could not have been acquired by any other possible (i.e., ethical) means; knowledge in the absence of which certain recondite data, considered indispensable to the further progress of particular technologies during this period, was understood to be simply beyond the reach of contemporary scientific discovery.

The scientific understanding of the executive functions of the brain, in terms of either: a) the localisation of functions within specific parts of the brain, and the interrelationship of those functional parts; or in terms of: b) the neurophysical and neurochemical operations at the cellular-synaptic level, had previously been limited, in terms of a), to the neuropsychological study of brain-damaged patients (deductions of localised cerebral function arrived at by matching impairments in motor or executive functions to specific localised injuries); or, in terms of b), to the post-mortem dissection of dead brain tissue. Both these forms of investigation were rather limited in scope. Neuropsychological investigations might have been successful in isolating which areas of the brain were necessary to certain discrete cerebral or motor functions, but were able to establish little definitive information about the exact order and sequence of cerebral processes and their dependencies. Likewise, microscopic examination of dead brain tissue led only to hypotheses about the activity of neurones and neurotransmitters in a living brain.

The post-war period was characterised, in technological terms, by a drive towards the codifying of information electronically, i.e., digitally. Alan Turing's successes in breaking the Enigma Code during WWII had suggested to information scientists that many of the processes involved in the collation, sorting, and adjudication of information might be handled more efficiently, and in ways that might guarantee freedom from error, if they could be 'outsourced' to machines. Turing had precipitated this trend in his experimental concept of intelligent machines. Turing's belief was that mental operations could be broken down into a series of finite logical steps, and that it was therefore theoretically possible to build a computational machine which could imitate these operations in their entirety. Again, technological development in this area faced two major limitations. Firstly, early computers had to be enormous in size due to the multiplicity of non-solid-state electronic components (valves) requiring individual connections; and storage media were limited to paper punch-cards and magnetic tape – limitations which would foreseeably be reduced through the gradual advancement and refinement of materials and electronics. Secondly, what level of sophistication of intelligent operations was it reasonable to expect from machines? While these two factors were clearly interconnected, an answer to the second problem was more difficult to perceive by projecting forward advancements in electronics, as it involved putting the question: What is the nature of intelligence?

Turing's idea was that a distinction between conventional machines, i.e., those limited to a fixed number of discrete states, or phases, and theoretically possible 'intelligent machines', should be made on the basis of the prospective ability of the latter to imitate any conventional machine, at least in virtual terms, by the incorporation into its mechanism of a potentially unlimited number of new routines, by methods of successive digital encoding. The digital computer, as a basic theoretical concept, is thus understood as a universal machine. The defining characteristic of digital computers is therefore their capacity to 'learn' new routines, or programmes, and the only limitations on this potential are the practical ones of available digital storage and processing power. In its distinctive learning capacity, the digital computer is conceived to be analogous to the brain of a child (as exemplified by a child's special ability to rapidly absorb new languages, for example).
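Turing's distinction can be made concrete in a toy sketch (my own, purely illustrative): a single general interpreter can imitate any conventional fixed-state machine simply by being handed that machine's transition table as data – its 'programme'. Loading a different table makes the same mechanism a different machine.

```python
def run_machine(table, start, accept, inputs):
    """Drive whichever finite-state machine `table` describes over the input symbols."""
    state = start
    for symbol in inputs:
        state = table[(state, symbol)]
    return state in accept

# One 'programme': a parity checker accepting strings with an even number of 1s.
parity = {("even", "0"): "even", ("even", "1"): "odd",
          ("odd", "0"): "odd",  ("odd", "1"): "even"}

print(run_machine(parity, "even", {"even"}, "1101"))  # prints False (three 1s)

# Handing run_machine a different table turns the same interpreter
# into an entirely different conventional machine – Turing's universality
# claim, in miniature.
```

This is of course only the finite-state shadow of Turing's universal machine, but it captures the point at issue in the passage above: the 'intelligent' machine is defined by its capacity to absorb new routines as encoded data.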

In his 1950 paper: Computing Machinery and Intelligence [2], which is accepted as a seminal treatise in the emergence of the discipline of Artificial Intelligence, Turing sets a formative agenda for the process by which digital computers might succeed in imitating the functions of an adult brain:

"Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain […] Our hope is that there is so little mechanism in the child brain that something like it can be easily programmed. The amount of work in the education we can assume, as a first approximation, to be much the same as for the human child." (Turing, 1950, p.456)

And further:

"We may hope that machines will eventually compete with men in all purely intellectual fields […] It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child." (ibid., p.460)

As an expression perhaps of the sublimated aspirations of scientific advancement, the 1950s and 1960s saw an expansion of the genre of Science Fiction, which populated an imaginary universe with aliens, humanoids, androids, and robots of varying degrees of sophistication. Naturally, the fictional products of the literary imagination generally outstrip what is achievable in terms of everyday scientific reality; but the former tend to set dimensions of conceivable expectation with reference to the latter. Certainly, from this period onwards academic discourse in the areas of Experimental Psychology and the Philosophy of Mind began to orient itself to the discipline of Artificial Intelligence, eventually leading to the development of Cognitive Science as an academic discipline. It would have been difficult for anyone, even the most down-to-earth scientist, to conceive a model for the future which did not involve forms of robotic technology, employing a cybernetic model of intelligence based on human intelligence. The desire for the establishment of such a cybernetic model therefore gained support both from futuristic projections of technological advancement and from the present-day need to define more accurately the scope and direction of primitive computational 'intelligent machines'. If those machines should begin by making basic approximations of human intellectual processes, their future development required a more sophisticated understanding of the workings of the human brain, with particular emphasis on the developing child's brain; more sophisticated, that is, than those which had so far been deducible within the fields of Neuropsychology, Experimental Psychology, Behavioural Psychology, or from the study of dead brain tissue.

Artificial Intelligence is not a discovery, nor is it a fact. It is a model – an attempt at a copy, or a reduction, of human intelligence in so far as the latter is understood as a logical mechanism. This leaves much about human intuitive and associative thought processes untouched and unexplained. [3] Nevertheless, an understanding of such a logical mechanism pertaining to the operational neural networks of the brain, such as might be appropriate to imbue machines with the power of something-akin-to-a-thought-process, was lacking in the mid-1960s. Hence the appearance of a technocratic imperative to overcome this hurdle in the advancement of scientific knowledge, perhaps once and for all time. The problem with such a research demand was that the neurological processes under examination, that is, live in vivo cerebral functions at molecular scales, are not accessible to normal scientific observation and measurement without some means of invasive probing of an active human brain in a conscious living subject. The project therefore faced an immediate ethical hurdle – not only would the methods required be unprecedented and previously untested, but the application of those methods, in order to unlock the secret of a person's intimate cerebral processes, would, in any conceivable practical context, be highly morally objectionable.

Such was the degree of imperative attached to this research project that, in the face of anticipated public disapprobation, it demanded the subject of the research be kept entirely unaware of the methods by which he was being examined, or it would never pass public ethical acceptance. In addition, and as a consequence of this necessary secrecy, the information required needed to be collected remotely (and therefore continuously), which in turn necessitated the illicit bodily implantation of a series of devices to record and transmit this information discreetly. We may infer further that this requirement necessitated the arrangement of a surgical opportunity, on the convenient pretext of a routine tonsillectomy (in any case, a medical procedure frequently employed proactively upon essentially healthy children), whereby these devices could be implanted permanently, and irreversibly, and in such a way as to guarantee that they might not later be discovered coincidentally by routine medical examination.

It is more difficult to speculate on the exact form or content of the information thereby transmitted, without some more intimate knowledge of the research programme. But I think it fair to assume that, as a minimum, some form of representation of brain activity from differing functional areas of the brain (cortical, parietal, occipital, limbic, etc.) was required to be measured (with particular attention to the brain stem – medulla – as the 'basic input-output system' for the brain), so that the correspondences between these areas during various executive tasks could be appreciated sequentially, probably in the form of a series of matrices. It might then be possible to construct a categorical model of brain functions in terms of the interrelations of executive functions, sensory functions, short-term and long-term memory, storage, retrieval, search, association, etc., by combining existing neuropsychological knowledge regarding the localisation of cerebral functions with new data signalling the interactions and dependencies between those functional parts. As an example of the conceptual parallels existing between contemporary neuropsychology and cybernetics, the former employs such concepts as "the Central Executive" and "Working Memory" when referring to the topology of cerebral functions – compare these two in particular with the Information Technology categories: Central Processing Unit (CPU) and Random Access Memory (RAM).

There is little more that I can confidently assert from the evidence available to me – the full extent of the connectivity of the devices is not so readily accessible from the images presented on other pages in this section, that is, without more specialised training in neuroanatomy, and further dedicated scan procedures, in particular MRI scans of my complete thoracic cavity. [4] It remains to say that this research programme was clearly atypical in its design and scope – there was no apparent requirement, for instance, for the kind of representative sampling of research subjects which is characteristic of medical research in general. Perhaps any normal functioning brain would have satisfied requirements, but it seems I was selected in part for my above-average intelligence. It is unlikely that I would have been the sole research subject, but there would certainly have been few others. It was also atypical in the sense that it was not research directed principally at improvements in medical treatment and care, but seems to have gained its chief impetus from scientific and technological imperatives outside the field of medicine. [5] Clearly, this research programme was intended to supply information that would be seminal and irreplaceable, and might not require to be repeated in quite the same form. Most importantly, it could conceivably be kept tightly secret.

Of course, the industry which has benefitted perhaps more than any other from advancements in information technology is that of the weapons and defence industry. For this reason I think it is reasonable to speculate further that a key impetus for this research programme will have been provided by the UK Ministry of Defence. The burgeoning technocracy of the post-war period has been engaged in a relentless pursuit of progress whereby the ends, however imperfectly conceived, can always be made to justify the means. The global stalemate in nuclear threat that was such a defining characteristic of the sixties, seventies, and eighties, has given way to an imperious domination in asymmetrical conventional warfare, assisted principally by advancements in electronic communications and information technology. In the key area of military supremacy, western technocracies acquired by expedience the mandate to supervene over all human considerations – for the 'greater good' of homeland security – and this mandate impels a kind of sheep-like obedience to the imperatives of technological advancement. In order to fulfil the dream of technological prowess, certain moral and human sacrifices must be made. The question which must be asked is: How did the progress of scientific and technological advancement become so prepossessed with the idea of its own nobility, such that it is now capable of forgiving itself the grossest of ethical atrocities?

Speaking as a victim of this kind of atrocity, I am acutely aware that my complicity was not a prerequisite. Quite the opposite, for it depended on my absolute ignorance, to be maintained at all costs. My identity in this matter is of little consequence – it could have been anyone, though pitifully it had to be a child, and one of high intellectual capacity, and of sufficiently young age, so as to effectively inhibit the processes of understanding that might have enabled me to conceptualise what it was that had actually happened to me at the age of five.

September 2020

Footnotes:

  1. The content of this page is extracted from the Analysis section of my report, pp.41-46.
  2. Turing, Alan, Computing Machinery and Intelligence (October 1950), Mind LIX (236), pp.433-460: http://somr.info/lib/Mind-1950-TURING-433-60.pdf
  3. For further discussion of the philosophical context of Artificial Intelligence, see the pages: Is Artificial Intelligence a Fallacy?, and: Mind: Before & Beyond Computation in the X.cetera section.
  4. At the time of first writing this (in 2003), there were no existing MRI scans of my thoracic cavity. Since 2015, there have emerged three such scans of my thoracic/cervical spine – these are discussed in detail at the: C-Spine MRI Scan (July 2020) page; and in Part 2 of my report, pp.72-80.
  5. While it seems reasonable to conclude that the dominant impetus for the research arose from within the cognitive sciences, vis-à-vis the pursuit of Artificial Intelligence, it is not unreasonable to speculate, since the research clearly provided an unprecedented and unique opportunity for the study of in vivo neurological processes, that the knowledge acquired may have facilitated a range of consequential advancements across a diversity of medical fields.
