An Important Mathematical Oversight

The original intention for this website was to encourage public awareness of an historical medical crime, one that has remained a tightly-kept British state secret now for more than five decades. The matter is of enormous public interest, not least because the motivation behind the crime itself was that of advancing scientific research into areas that would come to provide the seminal knowledge behind much of the technological progress of the last half-century. My investigation into the matter inspired a parallel enquiry into some of the fundamental principles that underpin that scientific and technological impulse.

There are therefore two principal concerns of this website, and if there is acknowledged to be a substantive connection between them, that has inevitably to do with late 20th Century developments in science and information technologies, and more broadly with the idea of a burgeoning technocracy – the suggestion of a growing alliance between corporate technology and state power – one which might be judged to have atrophied the powers conventionally assigned to liberal-democratic institutions. This link therefore serves as a segue to emphasise the equal importance, to my mind, of what is going on in the X.cetera section of the site, so that that section should not appear, from the point of view of the other, as some kind of 'afterthought'.

X.cetera is concerned with a problem in mathematics and science to do with the way we think about numbers. As a subset of the category defined as integers, elements in the series of the natural numbers are generally held to represent quantities as their absolute, or 'integral', properties. On the page The Limits of Rationality, I have made a criticism of this standard definition of integers as indices of self-contained values, on the basis that the definition obscures the fact that the relations of proportion between integers are derived from their membership of a restrictive group of characters as defined by the decimal rational schema; and that those ratios of proportion cannot be assumed to apply to the same values when transcribed into alternative radices such as binary, or octal, or hexadecimal, for instance.
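
By way of a neutral illustration of what such a transcription involves (the sketch below, and its choice of example value, are mine rather than part of the argument), the same integer value can be rendered as a digit string in decimal, binary, octal, and hexadecimal, each rendering being assembled from the restricted set of digit characters available within its radix:

```python
# A minimal sketch: rendering one integer value in several radices.
# The repeated division by the radix is what confines each rendering
# to that radix's restricted set of digit characters.

def to_base(n: int, base: int) -> str:
    """Render a non-negative integer as a digit string in the given base (2-16)."""
    digits = "0123456789abcdef"
    if n == 0:
        return "0"
    out = []
    while n > 0:
        out.append(digits[n % base])
        n //= base
    return "".join(reversed(out))

value = 2021
for base in (10, 2, 8, 16):
    print(f"base {base:>2}: {to_base(value, base)}")
# base 10: 2021
# base  2: 11111100101
# base  8: 3745
# base 16: 7e5
```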

This means that, while the values of individual integers so transcribed will be ostensibly equal across those alternative radices, the ratios of proportion between groups of those values will not be preserved, as these must be determined uniquely according to the range of available digits within any respective radix (0-9 in decimal, 0-7 in octal, for instance). One consequence of this, of course, is the variable relative frequency (or 'potentiality') of specific individual digits when compared across radices. This observation has serious implications for the logical consistency of data produced within digital information systems, as the logic of those systems generally relies upon the seamless correspondence not only of 'integral' values when transcribed between decimal and the aforementioned radices, but ultimately of the relations of proportion between those values.
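
To make the observation about relative digit frequency concrete (the following sketch is my own illustration; the choice of value range and of the digit '7' is arbitrary), one can tally how often a given digit character occurs when the same range of integer values is written out in different radices:

```python
from collections import Counter

# Tally digit characters for the same range of values written out in
# decimal, octal, and hexadecimal, and report the share taken by '7'.
values = range(1, 1000)
for label, spec in (("decimal", "d"), ("octal", "o"), ("hexadecimal", "x")):
    counts = Counter(ch for v in values for ch in format(v, spec))
    total = sum(counts.values())
    print(f"{label:11}: digit '7' accounts for {counts['7'] / total:.3f} of all digits written")
```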

Information Science tends to treat the translation and recording of conventional analogue information into digital format unproblematically. The digital encoding of written, spoken, or visual information is seen to have little effect on the representational content of the message. The process is taken to be neutral, faithful, transparent. The assessment of quantitative and qualitative differences at the level of the observable world is taken to retain its accuracy despite at some stage involving a reduction, at the level of machine code, to a series of simple binary (or 'logical') distinctions between '1' and '0' – positive and negative. This idea relies upon a tacit assumption that there exists such a level of fine-grained logical simplicity as the basis of a hierarchy of logical relationships, one which transcends all systems of conventional analogue (or indeed sensory) representation (be they linguistic, visual, sonic, or whatever); and that we may therefore break down these systems of representation to this level – the digital level – and then re-assemble them, as it were, without corruption.
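
That assumed lossless round-trip can be pictured in a few lines (the sketch below is my own illustration of the conventional picture being described here, not of the critique that follows): a fragment of text is reduced to a string of binary digits and then re-assembled from them:

```python
# A small sketch of the conventional 'reduce and re-assemble' picture:
# text -> bytes -> binary digits -> bytes -> text, without loss.
message = "analogue signal"
bits = "".join(format(byte, "08b") for byte in message.encode("utf-8"))
rebuilt = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode("utf-8")

print(bits[:24] + "...")      # the first few of the binary digits
print(rebuilt == message)     # True: the round-trip is exact at this level
```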

However, in the X.cetera section I am concerned to point out that the logical relationship between '1' and '0' in a binary system (which equates in quantitative terms with what we understand as their proportional relationship) is derived specifically from their membership of a uniquely defined group of digits (in the case of binary, limited to two members). It does not derive from a set of transcendent logical principles arising elsewhere and having universal applicability (a proposition that will come as a surprise to many mathematicians and information scientists alike).
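
A neutral arithmetical illustration of how far the quantitative force of a digit string is tied to the digit group it is drawn from (again, the sketch is mine): the same string of digit characters denotes a different quantity in each radix, because every position is weighted by a power of that radix:

```python
# The digit string "101" read against different radices: each position is
# weighted by a power of the radix, so the quantity denoted changes with
# the size of the digit group.
digit_string = "101"
for base in (2, 8, 10, 16):
    value = sum(int(ch) * base ** i for i, ch in enumerate(reversed(digit_string)))
    print(f"'{digit_string}' read in base {base:>2} denotes {value}")
# base  2 -> 5, base  8 -> 65, base 10 -> 101, base 16 -> 257
```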

It follows that the proportional relationships affecting quantitative expressions within binary, being uniquely and restrictively determined, cannot be assumed to apply (with proportional consistency) to translations of the same expressions into decimal (or into any other number radix, such as octal, or hexadecimal). By extension, the logical relationships within a binary system of codes, being subject to the same restrictive determinations, cannot therefore be applied with logical consistency to conventional analogue representations of the observable world, as this would be to invest binary code with a transcendent logical potential that it simply cannot possess – they may be applied to such representations, and the results may appear to be internally consistent, but they will certainly not be logically consistent with the world of objects.

The issue of a failure of logical consistency is one that concerns the relationships between data objects – it does not concern the specific accuracy or internal content of data objects themselves (just as the variation in proportion across radices concerns the dynamic relations between integers, rather than their specific 'integral' numerical values). This means that, from a conventional scientific-positivist perspective, which generally relies for its raw data upon information derived from discrete acts of measurement, the problem will be difficult to recognise or detect (as the data might well appear to possess internal consistency). One will however experience the effects of the failure (while being rather mystified as to its causes) in the lack of a reliable correspondence between expectations derived from data analyses, and real-world events.

So that's some of what X.cetera is all about… If you think you're 'ard enough!

Download my 157-page report: Special Operations in Medical Research [1.9MB]
Download my Open Letter to the British Prime Minister & Health Secretary [612KB]
The Limits of Rationality (an important mathematical oversight) [384KB]
Mind: Before & Beyond Computation [461KB]
Dawkins' Theory of Memetics – A Biological Assault on the Cultural [351KB]
Randomness, Non-Randomness, & Structural Selectivity [273KB]

Fifty Shades of Digital

In other pages under this section, with reference to research into some of the mathematical principles that underpin the construction of digital algorithms and digital information systems, I have tried to draw attention to what appears from that research to be a critical problem affecting the exchange of data across domains within digital information systems generally. Those systems have been with us for a good while already; indeed some of us are young enough never to have experienced a time when digital technologies did not play a decisive role in the forms of our social and economic organisation. There is therefore a considerable industrial momentum already established in favour of the successful rolling-out of those technologies, together with the appearance of a growing alliance between corporate and state power – one that relies essentially upon the progress of that deployment continuing without significant interruption. Nevertheless, those technologies remain by and large experimental in terms of their extended effects in practice, and while there have been important criticisms raised in response to some of those effects (for example by the 2019 AI Now Report [1]), these have been necessarily reactive, and do not address the kind of foundational criticism to which I have tried to draw attention on this website – with its emphasis upon the specific problem of the logical inconsistency of data when exchanged across digital domains.

Within the full text of the title page in this section, I have tried to show why the logic that informs the ontological value of data within its domain of origin should not be considered freely transferable outside of that domain, and why the torrid exchange of data across domains without regard to that issue is a potential source of global confusion and disarray. It has nevertheless been a tacit assumption of information scientists that computational logic is somehow transcendent of those limits. This assumption is an error-in-principle. I have argued that in order for any digital data to retain its logical consistency it cannot be considered independently of the particular set of algorithmic rules under which it was derived; that those rules exhibit no universal applicability; and that all further uses of the data outside its domain of origin must be fully qualified in terms of those original rules – i.e., with respect to the original purposes and intents of the data. It has indeed been part of the purpose of the recent General Data Protection Regulation ('GDPR') to establish regulatory limits upon the reprocessing of subject data that arbitrarily exceeds its original intents and purposes. [2] However, the problem of logical inconsistency is not limited to ethical issues concerning the integrity of individuals' personal data; it is one that potentially infringes upon the logical consistency of all data universally.
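
One way to picture the requirement that data remain qualified by its original rules is sketched below. This is a purely hypothetical illustration – the class name, fields, and purpose labels are my own inventions, not a scheme proposed here or prescribed by the GDPR – but it shows a record travelling together with its domain of origin and declared purposes, so that any proposed reuse can be checked against them:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a data record that carries its domain of origin and
# its declared purposes with it, so that reuse outside those purposes can
# at least be detected rather than silently permitted.
@dataclass(frozen=True)
class QualifiedData:
    payload: dict
    domain_of_origin: str
    declared_purposes: frozenset = field(default_factory=frozenset)

    def permits(self, purpose: str) -> bool:
        """True only if the proposed reuse matches a declared purpose."""
        return purpose in self.declared_purposes

record = QualifiedData(
    payload={"postcode": "SW1A 1AA"},
    domain_of_origin="customer-billing",
    declared_purposes=frozenset({"billing", "service-notifications"}),
)
print(record.permits("billing"))              # True
print(record.permits("advertising-profile"))  # False: outside the original purposes
```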

We may have experienced a range of failings and irregularities in our use of digital technologies which, where the faults were not expected to lie within the data itself, we have been accustomed to attribute to human or systemic errors in the management and processing of the data, or to weaknesses in the security of its storage. There is a sense in which errors appear to 'just happen', owing to an essential incompatibility between the technology itself and the ways in which we are accustomed to work with it. I do not go so far as to claim that all of these problems are ultimately attributable to the logical inconsistency I have highlighted (which by itself might suggest to the industry a new and fairly urgent requirement to consider remedial limitations on the liberal transfer of data across domains). But in drawing attention to that issue as a problem inherent in data-sharing – one that has yet to be openly acknowledged by the industry itself – there is now less reassurance available in the idea that any data problem might be eradicated by removing the factor of human error, or by simply throwing more resources at it.

Whether it may be associated with the problem of inherent logical inconsistency or otherwise, it seems to me that all digital data is at least potentially redundant (out-of-date or simply incorrect) as soon as it is compiled. This is in the nature of data produced and stored digitally, as it is essentially static and resistant to change. How often have you come across personal or other data through the Internet which is incorrect in one or more essential details, but for which there seems to be no available means of amendment, nor any shared interest in maintaining its accuracy – in the absence of which the misinformation promises to remain indelible? It is not an excessively wild extrapolation to project that the petty confusion and helplessness provoked in the researcher by such 'misinfo' is only the microcosm of a related global data-disarray – one not limited to that shared by the millions, if not billions, of users tapping incredulous queries into their mendacious devices, who are lucky if they can act upon fifty percent of what they find there.

While there are clearly variations in reliability between different categories of data (according to the relative integrity of their sources), serious and unforeseen vulnerabilities arise from the sheer ubiquity of data and the expectations placed upon it, in terms of its ability to faithfully retain its ontological value. A particular weakness is the hidden tenuousness of the value attached to subject-provided data (which somehow assumes that individuals never, knowingly or unknowingly, provide incorrect information on forms). Whichever way you look at it, there is inevitable disagreement between the body of data (however its limits are conceived) and its reference points – disagreement which is generally unanticipated and whose scale cannot be estimated. This should be understood in terms of increasing 'entropy' in the system, i.e., a tendency towards increasing disorder.
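
The 'entropy' metaphor can be given a loose numerical sense (the sketch below is my own framing, using Shannon entropy purely as an illustration): as inconsistent variants of what is nominally the same value accumulate in a dataset, the entropy of that field rises, i.e. the field becomes more disordered:

```python
from collections import Counter
from math import log2

def shannon_entropy(values):
    """Shannon entropy (in bits) of the distribution of values in a field."""
    counts = Counter(values)
    total = len(values)
    return sum((c / total) * log2(total / c) for c in counts.values())

clean = ["London"] * 100
noisy = ["London"] * 80 + ["Londn"] * 10 + ["LONDON "] * 6 + ["Lodnon"] * 4

print(f"clean field: {shannon_entropy(clean):.3f} bits")   # 0.000 bits
print(f"noisy field: {shannon_entropy(noisy):.3f} bits")   # > 0: disorder has increased
```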

Furthermore, there is a serious imbalance between the level of reprocessing generally done to data and the work done in evaluating it; so that while data may enjoy unwarranted liquidity in the degree to which it is exchanged as a commodity, it nevertheless remains static and resistant to change. Computational systems are imbued with imaginary super-human capabilities, which promise to do all the work for us. A not entirely unintended consequence of rapid digital innovation has been the marginalisation of human engagement and concern in the granular management of all kinds of information, because digital technology frees us in varying degrees from the labour of that engagement; at the same time it encourages us to dispense with the methods and wisdom through which we previously exercised such engagement and concern. And should the technology fail, any post-digital solution to that failure will be incommensurable with those once-trusted methods.

I think it is important to point out a factor which I'm sure every person with the least experience of digital encoding has felt, but the significance of which has not been fully appreciated by experts in the field – that there is a 'top-heavy' relationship between the degree of coding, testing, and hence debugging required to manage the concomitant distributed effects of deploying any particular digital procedure, and the limited practical needs intended to be served by that procedure. This unforgiving ratio creates a backlog of inertia and failure in information systems, the effects of which will tend to be remote from their source, and as such are largely imperceptible as to their causes. [3] If the causes of the regular failures we experience in the use of novel information technology are for this reason largely imperceptible, the problem is already widely out of control. Some radical and forensic reassessment of the use of digital IT in principle is therefore required as a remedial measure.

With the highlighted problem of logical inconsistency in mind, we should firstly consider: Is there any real ontological value in the sharing of any data outside its domain of origin? This must be the point at which the integrity of the data is first compromised and its vulnerabilities exposed – the point where its exchange-value outstrips its use-value. Exchange-value and use-value work quasi-independently, according to different logics, so that the newly emerging exchange-value of data is calculated on the basis of a somewhat mythical (redundant) conception of its original use-value. Any new use of the data is both promiscuous and precarious, as it is too remote from that original use-value.

If, as noted above, the causes of failures in data processes tend to remain opaque to us, and also to those who design and manage those systems, will those failures ever indeed be fully remediable, whether through improvements in the technology itself, or in our methods of applying it? To answer in the affirmative is to express some underlying faith in the idea that information technology is essentially 'unmotivated', 'neutral', and 'impartial' – that it is implicitly benign, and effectively 'at our service', if only we could learn how to design it or to manipulate it appropriately. This needs some unpacking. The belief is firstly quite oblivious to the prospect that, aside from the effects of any human input, digital processes might in themselves be inherently responsible for the generation of inconsistencies and failures in the systems they populate. While that perception may once have remained occult and easily dismissible, by drawing alert attention, as I have attempted to do on these pages, to the unforeseen but very real problem of inherent logical inconsistency, such confidence is at least undermined.

New data tools tend to be marketed on the basis of their seductiveness as novel solutions to well-established problems. This strategy engenders a wide-eyed approach to problem-solving that is prepared to abandon established and proven methodologies in favour of 'revolutionary' and unprecedented solutions – an approach which is blind to the unforeseen and deleterious consequences that tend to arise, with apparent inevitability, from the use of these novel technologies. It is a form of recklessness born of the idea that any use of technology (by virtue of the fact that it displaces human involvement) cannot in itself be the cause of error, because technology is by its nature unmotivated and impartial, and in that sense implicitly benign. The error must therefore result from the fact that we have employed the technology in some unrefined manner – that we are in effect 'infants' in the use of a technology which is itself in its infancy. We have apparently committed ourselves to a very steep learning-curve, abandoning previous wisdoms and skills, in exchange for a rather vain expectation that technology will ultimately provide some form of complete solution; while we remain resistant to the realisation that any single technological advance is likely to create as many new intractable problems as those it purports to solve.

[continues]

4 April 2021 (revised: 16 August 2022)


Footnotes:

  1. The 2019 AI Now Report, produced by the AI Now Institute, New York University. This Report addresses a range of socially regressive effects that follow from the use of advanced AI technologies, particularly within the labour market with respect to the 'gig economy' and the use of zero-hours contracts – practices which depend upon the widespread divestment of employment rights from workers, and which encouraged Yanis Varoufakis, during a recent TV interview, to identify the dominant features of this new economy under the attribute of "techno-feudalism", suggesting that the rights enjoyed by gig-economy workers were little better than those of a sort of motorised medieval serfdom. The Report is also concerned about the regressive social consequences following the rapid expansion of public surveillance technologies, particularly in the area of facial-recognition systems, and their implications for individual privacy. Written from a majority-female perspective, the Report emphasises a tendency for AI technologies to create inherent algorithmic biases, typically entrenching existing patterns of inequality and discrimination, and to result in the further consolidation of power amongst the already powerful, through the "private automation of public infrastructure". CITATION: Kate Crawford, Roel Dobbe, Theodora Dryer, Genevieve Fried, Ben Green, Elizabeth Kaziunas, Amba Kak, Varoon Mathur, Erin McElroy, Andrea Nill Sánchez, Deborah Raji, Joy Lisi Rankin, Rashida Richardson, Jason Schultz, Sarah Myers West, and Meredith Whittaker; AI Now 2019 Report. New York: AI Now Institute, 2019: https://ainowinstitute.org/AI_Now_2019_Report.html.
  2. Article 5 §1(b) of GDPR states:

    "Personal data shall be:
    […]
    collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes; further processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes shall, in accordance with Article 89(1), not be considered to be incompatible with the initial purposes ('purpose limitation');".

  3. A particularly poignant example of this problem – that the causes of software and systems failures tend to remain opaque not only to a large proportion of users, but also to those who manage those systems – is the catalogue of systemic errors experienced by sub-postmasters in the UK following the Post Office's rolling-out of its Horizon branch accounting IT system, which began as a pilot scheme in 1996. The system was the cause of widespread shortfalls in the accounts being submitted by the company's sub-post offices. These shortfalls were first reported in the year 2000. The Post Office initially failed to investigate the problem, instead pursuing spurious allegations of false-accounting, fraud, and theft against as many as 900 sub-postmasters, 736 of whom were successfully prosecuted, with many being either jailed or bankrupted as a result. A team of forensic accountants, Second Sight, appointed by the Post Office in 2013, declared the Horizon system "not fit for purpose", and reported that it regularly failed to track certain specific forms of transaction. The Post Office, at that point already committed to private prosecutions against hundreds of innocent sub-postmasters, dismissed Second Sight's critical report, and five senior Post Office executives declared, with spectacular arrogance: "We cannot conceive of there being failings in our Horizon system". The Post Office has since generally relied upon confidentiality clauses as a means of deterring further enquiry and investigation into this monumental miscarriage of justice, and the cases of the falsely convicted sub-postmasters are only now beginning to be heard by the Court of Appeal, thanks in part to the Justice For Subpostmasters Alliance (JFSA). See also Nick Wallis' extensive blog on the case at: https://www.postofficetrial.com.
