An Important Mathematical Oversight

The original intention for this website was to encourage public awareness of an historical medical crime, one that has remained a tightly-kept British state secret now for more than five decades. The matter is of enormous public interest, not least because the motivation behind the crime itself was that of advancing scientific research into areas that would come to provide the seminal knowledge behind much of the technological progress of the last half-century. My investigation into the matter inspired a parallel enquiry into some of the fundamental principles that underpin that scientific and technological impulse.

There are therefore two principal concerns of this website, and if there is acknowledged to be a substantive connection between them, that has inevitably to do with late 20th Century developments in science and information technologies, and more broadly with the idea of a burgeoning technocracy – the suggestion of a growing alliance between corporate technology and state power – one that might be judged to have atrophied the powers conventionally assigned to liberal-democratic institutions. This link therefore serves as a segue to emphasise the equal importance, to my mind, of what is going on in the X.cetera section of the site, so that that section should not appear, from the point of view of the other, as some kind of afterthought.

X.cetera is concerned with a problem in mathematics and science to do with the way we think about numbers. As a subset of the category defined as integers, elements in the series of the natural numbers are generally held to represent quantities as their absolute, or ‘integral’, properties. It is argued that this conventional understanding of integers, which is the one widely held amongst mathematicians and scientists adopting mathematical principles, is the cause of a significant oversight with regard to changes in the relations of proportion between numerical values, i.e., when those values are transposed out of the decimal rational schema into alternative numerical radices such as those of binary, octal, and hexadecimal, etc.

On the page: The Limits of Rationality it is argued that the relations of proportion between integers are dictated principally by their membership of the restricted group of characters (0-9) as defined by the decimal rational schema; and that corresponding ratios of proportion cannot be assumed to apply between otherwise numerically equal values when transposed into alternative numerical radices having either reduced (as in binary or octal, for instance) or extended (as in hexadecimal) member-ranges.

This is shown to be objectively the case by the results published at: Radical Affinity and Variant Proportion in Natural Numbers, which show that for a series of exponential values in decimal, where the logarithmic ratios between those values are consistently equal to 1, the corresponding series of values when transposed into any radix from binary to nonary (base-9) results in logarithmic ratios having no consistent value at all, in each case producing a graph showing a series of variegated peaks and troughs displaying proportional inconsistency.

These findings have so far gone unacknowledged by mathematicians and information scientists alike, but their import is that, while the discrete values of individual integers transposed into alternative radices will be ostensibly equal across those radices, the ratios of proportion between those values will not be preserved, as these ratios must be determined uniquely according to the range of available digits within any respective radix (0-9 in decimal, 0-7 in octal, for instance); one consequence of which is the variable relative frequency (or ‘potentiality’) of specific individual digits when compared across radices. This observation has serious consequences in terms of its implications for the logical consistency of data produced within digital information systems, as the logic of those systems generally relies upon the seamless correspondence, not only of ‘integral’ values when transcribed between decimal and the aforementioned radices, but ultimately upon the relations of proportion between those values.

Information Science tends to treat the translation and recording of conventional analogue information into digital format unproblematically. The digital encoding of written, spoken, or visual information is seen to have little effect on the representational content of the message. The process is taken to be neutral, faithful, transparent. While the assessment of quantitative and qualitative differences at the level of the observable world necessarily entails assessments of proportion, the digital encoding of those assessments ultimately involves a reduction, at the level of machine code, to the form of a series of simple binary (or ‘logical’) distinctions between ‘1’ and ‘0’ – positive and negative. The process relies upon a tacit assumption that there exists such a level of fine-grained logical simplicity as the basis of a hierarchy of logical relationships, and which transcends all systems of conventional analogue (or indeed sensory) representation (be they linguistic, visual, sonic, or whatever); and that therefore we may break down these systems of representation to this level – the digital level – and then re-assemble them, as it were, without corruption. Logic is assumed to operate consistently without limits, as a sort of ‘ambient’ condition of information systems.

In the X.cetera section I am concerned to point out however that the logical relationship between ‘1’ and ‘0’ in a binary system (which equates in quantitative terms with what we understand as their proportional relationship) is derived specifically from their membership of a uniquely defined group of digits limited to two members. It does not derive from a set of transcendent logical principles arising elsewhere and having universal applicability (a proposition that, despite its apparent simplicity, may well come as a surprise to many mathematicians and information scientists alike).

As the proportional relationships affecting quantitative expressions within binary are uniquely and restrictively determined, they cannot be assumed to apply (with proportional consistency) to translations of the same expressions into decimal (or into any other number radix, such as octal or hexadecimal). By extension, the logical relationships within a binary system of codes, being subject to the same restrictive determinations, cannot be applied with logical consistency to conventional analogue representations of the observable world, as this would be to invest binary code with a transcendent logical potential that it simply cannot possess – they may be applied to such representations, and the results may appear to be internally consistent, but they will certainly not be logically consistent with the world of objects.

The issue of a failure of logical consistency is one that concerns the relationships between data objects – it does not concern the specific accuracy or internal content of data objects themselves (just as the variation in proportion across radices concerns the dynamic relations between integers, rather than their specific ‘integral’ numerical values). This means that, from a conventional scientific-positivist perspective, which generally relies for its raw data upon information derived from discrete acts of measurement, the problem will be difficult to recognise or detect (as the data might well appear to possess internal consistency). One will however experience the effects of the failure (while being rather mystified as to its causes) in the lack of a reliable correspondence between expectations derived from data analyses, and real-world events.

So that’s some of what X.cetera is all about… If you think you’re ‘ard enough!


The Limits of Rationality (An important mathematical oversight)

The pages linked under this section are intended as a complement to the main content of this site – they refer to some recent ongoing research projects, which in some sense were provoked in tandem with the main content, though they are independent from it, and still very much work-in-progress. At any rate, I hope it might offset some of the heavy seriousness of my principal exposition, which is rather inescapable. These notes and essays remain fairly discursive, but also, considering the depths of the subject matter, reasonably concise and, I hope, accessible.

The following discussion involves an enquiry into some of the properties of the natural numbers, and entails a critique of the conventional definition of an integer as a stable index of numeric value. This is with concern to the fact that digital information systems require numeric values to be represented across a range of numerical radices (decimal, binary, octal, hexadecimal, etc.), and for them to serve not only as an index of quantity, but also ultimately as the basis of the machine-code for a multitude of operational processing instructions upon data. Later in the discussion I resolve upon a critique of digital information systems insofar as it is judged that those systems may be characterised by a tendency towards inherent logical inconsistency.

Rationality and Proportion in the Natural Numbers

In the following I use both the terms ‘number’ and ‘integer’ interchangeably, although I should point out that this enquiry is predominantly concerned with the set of the natural numbers, i.e., those positive whole numbers (including zero) we conventionally employ to count. Natural numbers are a subset of the category integers, as the latter also includes negative whole numbers, with which I am unconcerned. I am concerned however with the technical definition of an integer – i.e., as an entity in itself, whose properties are generally understood to be self-contained (‘integral’) – a definition hence inherited by the sub-category natural numbers.

We are familiar with the term ‘irrational numbers’ in maths – referring to examples such as √2, or π – which implies that the figure cannot be expressed exactly as the ratio of any two whole numbers (e.g., 22/7 – a close rational approximation to π), and therefore does not resolve to a finite number of decimal places, or to a settled pattern of recurring digits following the decimal point. Irrational numbers have played a decisive role in the history of mathematics because, as they are impossible to define as discrete and finite magnitudes by means of number, they cannot be represented proportionally without resorting to geometry; while much of the modern development of mathematics has involved the shift from an emphasis upon Classical geometry as its foundation to one based upon abstract algebraic notation. The motivation towards abstraction was therefore determined to a great extent by the need to represent the irrational numbers without requiring their explanation in terms of continuous geometric magnitudes.1

The abstract representation of irrational numbers within algebra allows them to enter into calculations which also involve the rational quantities of discrete integers and finite fractions, thereby combining elements that were previously considered incommensurable in terms of their proportion. The effect of this was to subvert the Classical distinction between discrete and continuous forms of magnitude. In the Greek understanding of discrete magnitudes, number always related to the being in existence of “a definite number of definite objects”,2 and so the idea of their proportion was similarly grounded in the idea of a number of separable objects in existence. There is a different mode of proportion that applies to geometrical objects such as lines and planes – one that involves a continuously divisible scale, in comparison to the ‘staggered’ scale that would apply in the case of a number of discrete objects.

In terms of abstract algebraic notation, the Cartesian coordinates (x,y), for instance, might commonly stand in for any unknown integer value; but they may also take on the value of irrationals such as √2, which in the Greek tradition could only be represented diagrammatically. Therefore, as a complement to this newly empowered, purely intellectual form of mathematical discourse (epitomised in Descartes’ project for a mathesis universalis), the idea of proportion (similarly, of logic) demanded a comparable abstraction, so that proportion is no longer seen to derive ecologically according to the forms of distribution of the objects under analysis, and instead becomes applied axiomatically – from without.

In the received definition of an integer as a measure of abstract quantity, the basis of an integer’s integrity (hence also that of the natural numbers) is no longer dependent upon its relation to the phenomenal identity of objects in existence, but to the purely conceptual identity that inheres in the unit ‘1’ – a value which is nevertheless understood to reside intrinsically and invariably within the concept.3 Hence, integers also acquire an axiomatic definition, and any integer will display proportional invariance with respect to its constituent units (to say ‘5’ is for all purposes equivalent to saying ‘1+1+1+1+1’ – the former is simply a more manageable expression); and by virtue of this we can depend upon them as signifiers of pure quantity, untroubled by issues of quality. The difficulty with this received understanding is that the proportional unit ‘1’, as an abstract entity, is only ever a symbolic construct, derived under the general concept of number, and which stands in, by a sort of tacit mental agreement, as an index for value. As such it is a character that lacks a stable substantial basis, unless, that is, we assert that certain mental constructs possess transcendental objectivity. If we consider the unit ‘1’ in the context of binary notation, for instance, we perceive that in addition to its quantitative value it has also come to acquire an important syntactical property – it is now invested with the quality of ‘positivity’, it being the only alternative character to the ‘negative’ ‘0’. Can such syntactic properties be contained within the transcendental objectivity of the unit ‘1’, considering that they do not similarly apply to ‘1’ in decimal? In this case clearly not, as the property arises only as a condition of the restrictive binary relationship between the two digits within that particular system of notation.

Hence, somewhat antithetically to received understanding, it appears as a necessary conclusion that there are dynamic, context-specific attributes associated with particular integers which are not absolute or fixed (intrinsic), but variable, and which are determined extrinsically, according to the relative frequency of individual elements within the restricted range of available characters circumscribed by the terms of the current working radix (0-9 in decimal, 0-7 in octal, for instance). These attributes inevitably impose certain syntactical dependencies upon the characters within those notations. In that case, the proportionality that we are accustomed to apply axiomatically to the set of the natural numbers as self-contained entities should be reconsidered as a characteristic which rather depends exclusively upon the system of their notation within the decimal rational schema – one which will not automatically transfer as a given property to the same values when transcribed across alternative numerical radices. The conditions of proportionality that obtain between integers in a decimal system will be inconsistent with those obtaining between their corresponding (numerically equal) values in an alternative radix. As far as I am aware, this problem is one that has not been previously reported amongst mathematicians and so it will help at this stage to be able to refer to some empirical proof.

The page Radical Affinity and Variant Proportion in Natural Numbers in this section (and associated pdf file) presents a series of numerical datasets of the decimal exponential series x^0, [...], x^10, beginning with the decimal value x=10 (extended for x=(2, [...], 9) in the pdf), in comparison with corresponding series from all number bases from binary to nonary (base-9). It then displays tables and graphs of the values of the logarithmic differences between successive exponential values in each series; i.e., employing the derived radical logarithms (log to base b) for each respective radix b. In each case, with a few exceptions, the graphs reveal a failure of logical consistency. The ratios between successive exponentials of, for instance, 12₈ (=10₁₀) when treated as octal logarithms, display a series that cannot be determined on any rational principles. The problem arises due to the fact that octal logarithms (log₈) are derived from ‘common’ or decimal logarithms (log₁₀ – also written as ‘Log’), according to the formula: log₈x = log₁₀x / log₁₀8. If one performs the same exercise for successive exponential values in the decimal series, and produces a series of graphs showing the distributions of values for constant values of x, with the exponential index z occupying the horizontal axis, the results are a series of horizontal straight lines at y = Log x. In the examples for the radical series described above however, horizontal straight lines occur only in a limited number of cases4. The distributions revealed are mostly irregular series of variegated peaks and troughs displaying proportional inconsistency – see for example the graph of the octal series in the image shown below.

Graph to show logarithmic differences between octal correspondents of sequential exponentials of x=10 (decimal).

r = (log₈ x^z) − (log₈ x^(z−1)), for x = 12₈
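By way of illustration only, the following is a minimal sketch (in Python) of one way a series of this general kind can be generated. It rests on an assumption of my own – not a statement of the method used in Radical Affinity etc. – namely that the octal ‘correspondents’ of the decimal exponentials are their octal numerals read off as written digit strings, with the base-8 logarithm then obtained via the change-of-base formula quoted above (log₈x = log₁₀x / log₁₀8).

```python
import math

def to_radix(n, b):
    """Return the digit string of the natural number n written in base b (2 <= b <= 10)."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, r = divmod(n, b)
        digits.append(str(r))
    return "".join(reversed(digits))

def log_in_base(value, b):
    """Change-of-base formula: log_b(value) = log10(value) / log10(b)."""
    return math.log10(value) / math.log10(b)

base = 8
# Decimal exponentials 10^0 .. 10^10, each rewritten as an octal numeral and that numeral
# then read off as a plain digit string (my assumption about the procedure).
series = [int(to_radix(10**z, base)) for z in range(11)]

# Logarithmic differences between successive terms: r = log_8(x_z) - log_8(x_(z-1)).
diffs = [log_in_base(series[z], base) - log_in_base(series[z - 1], base)
         for z in range(1, len(series))]

for z, r in enumerate(diffs, start=1):
    print(f"z={z:2d}  octal numeral {to_radix(10**z, base):>14}  r = {r:.4f}")
```

On that reading the successive differences do not settle on a single value, whereas the same procedure applied to the decimal numerals themselves returns the constant difference Log 10 = 1 throughout.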

These findings therefore pose problems for use of the logarithmic function, as logarithms express common ratios of proportion, and logarithms for diverse number bases (log to base b) are conventionally assumed to be perfectly derivable from ‘common’ logarithms (log₁₀). If the logarithmic differences between successive exponentials in, for instance, the octal series (12₈)^z (derived from the decimal value x=10 – see also the Octal section of Radical Affinity etc.) do not produce a horizontal straight line, then these values are not proportionally consistent with their corresponding (numerically ‘equal’) values in the decimal series (10₁₀)^z, whose logarithmic differences do produce a horizontal straight line. This disclosure of a failure in consistency through use of the logarithmic function undermines the principle of rational proportionality conventionally understood to hold between diverse numerical radices and indicates that rationality operates effectively only under formally circumscribed limits, where previously, in terms of conventional mathematical understanding, no such limits had been perceived or considered.

These empirical findings confirm in principle that there are qualitative (or ‘behavioural’) properties that arise out of the relational (group) characteristics of particular integers; otherwise, the restrictive proportional rules that appear to be native to individual numerical radices would be empirically impossible, or absurd, and therefore this must undermine the standard assumption of absolute proportional invariance between integers. However, I feel that it would be a mistake to consider such behavioural properties inhering mysteriously as intrinsic properties of integers themselves. Contrary to the standard definition of an integer (i.e., as an ‘integral whole’, or entity in itself), numbers are primarily constructions of the intellect, and as such do not really have the status of phenomenal objects capable of holding any intrinsic properties, aside from their notional quantities. Therefore, if they also exhibit empirical behavioural properties, it is likely these arise out of the sequential relationships between numerical characters (digits) with respect to their relative frequency as members of a limited group of available characters. The fact that in binary, for instance, the available characters are limited to ‘0’ and ‘1’, means that an instance of ‘1’ in binary is quite differently potentiated from the same instance in decimal, even though the values 1₂ and 1₁₀ are quantitatively identical.
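To make the point about relative frequency concrete, the short sketch below (the sample of values 1–100 is an arbitrary choice of my own) tallies how often each available character appears when the same quantities are written out in binary, octal, and decimal.

```python
from collections import Counter

def to_radix(n, b):
    """Digit string of the natural number n written in base b (2 <= b <= 10)."""
    digits = []
    while n:
        n, r = divmod(n, b)
        digits.append(str(r))
    return "".join(reversed(digits)) or "0"

values = range(1, 101)  # an arbitrary sample of natural numbers

for b in (2, 8, 10):
    tally = Counter(d for v in values for d in to_radix(v, b))
    total = sum(tally.values())
    shares = {d: round(c / total, 3) for d, c in sorted(tally.items())}
    print(f"base {b:2d}: relative digit frequencies {shares}")
```

In binary every numeral is drawn from only two characters, so an occurrence of ‘1’ carries a far greater share of the written notation there than the same character does in decimal.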

The logical ‘either/or’ (positive/negative) characteristic of binary notation noted earlier is of course what enables digital computer systems to employ binary code principally to convey a series of processing instructions, rather than serving merely as an index of quantity. The behavioural properties of individual digits according to the system of their notation may then be extrapolated in principle from the binary example, so that the factor of the relative frequency of individual digits according to the range of available digits within their respective radix (binary, octal, decimal, hexadecimal, etc.) comes to determine the logical potential of those digits uniquely in accordance with the rules that organise each respective radix, and which distinguish it from all alternative radices.

This analysis leads us to conclude that the exercise of rational proportionality (proportional invariance) in terms of quantitative understanding, as a governing principle, with universal applicability (therefore across diverse numerical radices), entails a basic technical misapprehension: it fails to perceive that the ratios of proportion obtaining in any quantitative system will depend implicitly on the terms of a signifying regime (i.e., the restrictive array of select digits at our disposal); the proportional rules of which will vary according to the range of available signifying elements, and the relative frequency (or ‘logical potentiality’) of individual elements therein.

An Inconvenient Truth Revealed

It is unfortunate that this recognition of the principle of variant proportionality between numerically equal integer values when expressed across diverse number radices (which has so far gone entirely unremarked by mathematicians and information scientists alike) was not made prior to the emergence in the late 20th Century of digital computing and digital information systems, for, as I will attempt to show in what follows, the issue has serious consequences for the logical consistency of data produced within those systems.

Information Science tends to treat the translation and recording of conventional analogue information into digital format unproblematically. The digital encoding of written, spoken, or visual information is seen to have little effect on the representational content of the message; the process is taken to be neutral, faithful, transparent. The assessment of quantitative and qualitative differences at the level of the observable world retains its accuracy despite at some stage involving a reduction, at the level of machine code, to the form of a series of simple binary (or ‘logical’) distinctions between ‘1’ and ‘0’ – positive and negative. This idea relies upon a tacit assumption that there exists such a level of fine-grained logical simplicity as the basis of a hierarchy of logical relationships, and which transcends all systems of conventional analogue (or indeed sensory) representation (be they linguistic, visual, sonic, or whatever); and that therefore we may break down these systems of representation to this level – the digital level – and then re-assemble them, as it were, without corruption.
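For concreteness, the routine being described – the reduction of a message to binary digits and its reassembly, conventionally assumed to be lossless – can be sketched as follows (the message text is arbitrary):

```python
# The conventional round trip: text -> bytes -> bits -> bytes -> text.
message = "quantitative and qualitative differences"

bits = "".join(f"{byte:08b}" for byte in message.encode("utf-8"))

# Reassemble: gather the bits back into bytes and decode them again as text.
rebuilt = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode("utf-8")

assert rebuilt == message  # the encoding is treated as neutral, faithful, transparent
print(bits[:32], "...", rebuilt)
```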

However, as should now be clear from the analysis indicated above, the logical relationship between ‘1’ and ‘0’ in a binary system (which equates in quantitative terms with what we understand as their proportional relationship) is derived specifically from their membership of a uniquely defined group of digits (in the case of binary, limited to two members). It does not derive from a set of transcendent logical principles arising elsewhere and having universal applicability (a proposition that may well be unfamiliar, and perhaps unwelcome, to many mathematicians and information scientists alike).

It follows that the proportional relationships affecting quantitative expressions within binary, being uniquely and restrictively determined, cannot be assumed to apply (with proportional consistency) to translations of the same expressions into decimal (or into any other number radix, such as octal or hexadecimal). By extension, the logical relationships within a binary (and hence digital) system of codes, being subject to the same restrictive determinations, cannot be applied with logical consistency to conventional analogue representations of the observable world, as this would be to invest binary code with a transcendent logical potential that it simply cannot possess – they may be applied to such representations, and the results may appear to be internally consistent, but there is insufficient reason to expect that they will be logically consistent with the world of objects.

The issue of a failure of logical consistency is one which concerns the relationships between data objects – it does not concern the specific accuracy or internal content of data objects themselves (just as the variation in proportion across radices concerns the dynamic relations between integers, rather than their specific ‘integral’ numerical values). This means that, from a conventional scientific-positivist perspective, which generally relies for its raw data upon information derived from discrete acts of measurement, the problem will be difficult to recognise or detect (as the data might well appear to possess internal consistency). One will however experience the effects of the failure (while being rather mystified as to its causes) in the lack of a reliable correspondence between expectations derived from data analyses, and real-world events.

Logical Inconsistency is Inherent in Digital Information Systems

The extent of the problem of logical inconsistency is not limited however to the effects upon data arising from transformations of existing analogue information into digital format. Nor, unfortunately, is it a saving feature of digital information systems that, although not quite fully consistent with traditional analogue means of speaking about and depicting the world, they nevertheless result in a novel digitally-enhanced view through which they are able to maintain their own form of technologically-informed consistency. Rather, logical inconsistency is a recurrent and irremediable condition of data derived out of digital information processes, once that data is treated in isolation from the specific algorithmic processes under which it has been derived.

The principle that it is possible to encode information from a variety of non-digital sources into digital format and to reproduce that information with transparency depends implicitly on the idea that logic (i.e., proportionality) transcends the particular method of encoding logical values, implying that the rules of logic operate universally and are derived from somewhere external to the code. According to the analysis indicated above however, it is suggested that the ratios between numeric values expressed in any given numerical radix will be proportionally inconsistent with the ratios between the same values when expressed in an alternative radix, due to the fact that the rules of proportionality, understood correctly, are in fact derived uniquely and restrictively according to the internal characterological requirements of the specific codebase employed. This tells us that the principle widely employed in digital information systems5 – that of the seamless correspondence of logical values whether they are expressed as decimal, octal, hexadecimal, or as binary values – is now revealed as a mathematical error-in-principle.
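The ‘seamless correspondence’ in question is, in practice, the routine interchange sketched below, in which a single value is written out in binary, octal, decimal, and hexadecimal and each written form is read back to the same integer (the value 100 is an arbitrary example):

```python
n = 100

# One value, four notations: the correspondence that digital systems treat as seamless.
renderings = {"binary": bin(n), "octal": oct(n), "decimal": str(n), "hexadecimal": hex(n)}
print(renderings)  # {'binary': '0b1100100', 'octal': '0o144', 'decimal': '100', 'hexadecimal': '0x64'}

# Reading each written form back, with its radix made explicit, returns the same integer.
assert all(int(text, 0) == n for text in renderings.values())
```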

As I have indicated in the previous section above, this makes problematic the assumption of consistency and transparency in the conversion of analogue information into digital format. However, the problem of logical inconsistency as a consequence of the non-universality of the rules of the various codebases employed is not limited to the (mostly unseen) machine-level translation of strictly numerical values from decimal or hexadecimal values back and forth into binary ones. The issue also has a bearing at the programming level – the level at which data objects are consciously selected and manipulated, and at which computational algorithms are constructed. Even at this level – at which most of the design and engineering component of digital information processing takes place – there is an overriding assumption that the logic of digital processes derives from a given repository of functional objects that possess universal logical potential, and that the resulting algorithmic procedures are merely instantiations of (rather than themselves constituting unique constructions of) elements of a system of logic that is preordained in the design of the various programming languages and programming interfaces.

But there is no universal programming language, and, in addition to that, there are no universal rules for the formulation of computational procedures, and hence of algorithms; so that each complete and functional algorithm must establish its own unique set of rules for the manipulation of its requisite data objects. Therefore, the data that is returned as the result of any algorithmic procedure (program) owes its existence and character to the unique set of rules established by the algorithm, from which it exclusively derives; which is to say that the returned data is qualitatively determined by those rules (rather than by some non-existent set of universal logical principles arising elsewhere) and has no absolute value or significance considered independently of that qualification.

To clarify these statements, we should consider what exactly is implied in the term ‘algorithm’, in order to understand why any particular algorithmic procedure must be considered as comprising a set of rules that are unique, and why its resultant data should therefore be understood as non-transferable. That is to say, when considered independently from the rules under which it is derived, the resultant data possesses no universally accessible logical consistency.

Not all logical or mathematical functions are computable6, but the ones which are computable are referred to as ‘algorithms’, and are exactly those functions defined as recursive functions. A recursive function is one in which the definition of the function includes an instance of the function nested within itself. For instance, the set of natural numbers is subject to a recursive definition: “Zero is a natural number” defines the base case as the nested instance of the function – its functional properties being given a priori as a) wholeness; b) serving as an index of quantity; and c) having a successor. The remainder of the natural numbers are then defined as the (potentially infinite) succession of each member by another (sharing identical functional properties) in an incremental series. It is the recursive character of the function that makes it computable (that is, executable by a hypothetical machine, or Turing machine). In an important (simplified) sense then, computable functions (algorithms), as examples of recursive functions, are directly analogous in principle to the recursive function that defines the set of the natural numbers.7
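As a concrete illustration of the recursive form described here – a base case together with a rule producing each successor – the following is a minimal Peano-style sketch (the function names are my own and purely illustrative):

```python
def nat(n):
    """Render the natural number n as nested successors of zero: the recursive definition."""
    if n == 0:
        return "0"                 # base case: zero is a natural number
    return f"S({nat(n - 1)})"      # every other natural number is the successor of another

def add(m, n):
    """Addition defined recursively, purely in terms of the successor relation."""
    if n == 0:
        return m                   # base case
    return add(m, n - 1) + 1       # recursive step: m + n = (m + (n - 1)) + 1

print(nat(3))     # S(S(S(0)))
print(add(2, 3))  # 5
```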

The nested function has the property of being discrete and isolable – these characteristics being transferable, by definition, to each other instance of the function. In these terms, the function defining the natural numbers has the (perhaps paradoxical) characteristic of ‘countable infinity’ – as each instance of the function is discrete, there is the possibility of identifying each individual instance by giving it a unique name. In spite however of its potential in theory to proceed, as in the case of the natural numbers, to infinity, a computable function must at some stage know when to stop and return a result (as there is no appreciable function served by an endlessly continuous computation). At that point then the algorithm must know how to name its product, i.e., to give it a value; and therefore must have a system of rules for the naming of its products, and one that is uniquely tailored according to the actions the algorithm is designed to perform on its available inputs.

What is missing from the definition given above for the algorithm defining the natural numbers? We could not continue to count the natural numbers (potentially to infinity) without the ability to give each successive integer its unique identifier. However, nor could we continue to count them on the basis of absolutely unique identifiers, as it would be impossible to remember them all, and we would be unable to tell at a glance the scalar location of any particular integer in relation to the series as a whole. Therefore, we must have a system of rules which ‘recycles’ the names in a cascading series of registers (for example, in the series: 5, 25, 105, 1005, etc.); and that set of rules is exactly the one pertaining to the radix of the number system, which defines the set of available digits in which the series may be written, including the maximum writable digit for any single register, before that register must ‘roll over’ to zero, and either spawn a new register to the left with the value ‘1’, or increment the existing register to the left by 1. We can consider each distinct number radix (e.g., binary, ternary, octal, hexadecimal etc.) as a distinct computable function, each requiring its own uniquely tailored set of rules, analogously with our general definition of computable functions given above.8
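In computational terms, that ‘recycling’ of digit names across cascading registers is the familiar divide-and-carry conversion sketched below (the choice to allow bases up to 16 is mine; the discussion above concerns bases 2–10):

```python
def to_radix(n, base):
    """Write the natural number n in the given radix, reusing the digits 0..base-1 across
    cascading registers: each register 'rolls over' to zero once it would exceed base-1."""
    assert n >= 0 and 2 <= base <= 16
    digit_names = "0123456789abcdef"
    registers = []
    while True:
        n, r = divmod(n, base)      # r is the current (rightmost) register; n carries leftward
        registers.append(digit_names[r])
        if n == 0:
            break
    return "".join(reversed(registers))

print(to_radix(1005, 10))  # '1005'
print(to_radix(1005, 8))   # '1755'
print(to_radix(1005, 2))   # '1111101101'
```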

For most everyday counting purposes, and particularly in terms of economics and finance, we naturally employ the decimal (or ‘denary’) system of notation in the counting of natural numbers. The algorithmic rules that define the decimal system are therefore normally taken for granted – we do not need to state them explicitly. However, the rules are always employed implicitly – they may not be abandoned or considered as irrelevant, or our system of notation would then become meaningless. If, for instance, we were performing a series of translations of numerical values between different radices, we would of course need to make explicit the relevant radix in the case of each written value, including those in decimal, to avoid confusion. The essential point is that, when considering expressions of value (numerical or otherwise) as the returned results of algorithmic functions (such as that of the series of natural numbers, or indeed any other computable function), the particular and unique set of rules that constitute each distinct algorithmic procedure, and through which data values are always exclusively derived, are indispensable to and must always be borne in mind in any proportionate evaluation of the data – they may not be left behind and considered as irrelevant, or the data itself will become meaningless and a source only of confusion.
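The need to make the working radix explicit can be shown with a one-line contrast (the numeral ‘12’ is an arbitrary example): the same written figure names a different quantity under each radix, and under some radices is not a readable numeral at all.

```python
numeral = "12"

# The same written figure, evaluated under different (explicitly stated) radices.
for base in (2, 8, 10, 16):
    try:
        print(f"'{numeral}' read in base {base:2d} -> {int(numeral, base)}")
    except ValueError:
        # '2' is not among the available digits of binary, so the figure is unreadable there.
        print(f"'{numeral}' read in base {base:2d} -> not a valid numeral in this radix")
```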

It is important to emphasise in this analysis that, in accordance with the definition of recursive functions outlined above, a computational algorithm is functionally defined by reference to itself, through a nested instance of the function, rather than by reference to any universally available functional definition. In the broad context of data derived through digital information processes, it is essential therefore to the proportionate evaluation of all resultant data, that the data be qualified with respect to the particular algorithmic procedures through which it has been derived. There is no magical property of external logical consistency that accrues to the data simply because it has been derived through a dispassionate mechanical procedure – the data is consistent only with respect to the rules under which it has been processed, and which therefore must be made explicit in all quotations or comparisons of the data, to avoid confusion and disarray.

Such qualifications however are rarely made these days in the context of the general mêlée of data sharing that accompanies our collective online activity. Consider the Internet as an essentially unregulated aggregation of information from innumerable sources where there are no established standards or guidelines that specifically require any contributor to make explicit qualifications for its data with respect to the rules that define it and give it its unique and potent existence. We should not be overly surprised therefore if this laxity should contribute, as an unintended consequence, to the problem of society appearing progressively to lose any reliable criteria of objective truth with regard to information made available through it to the public domain.

The alacrity with which data tends to be mined, exchanged, and reprocessed, reflects a special kind of feverish momentum that belongs to a particular category of emerging commodity – much like that attached to oil and gold at various stages in the history of the United States. Our contemporary ‘data rush’ is really concerned with but a limited aspect of most data – its brute exchangeability – which implies symptomatically that those who gain from the merchandising of data are prone to suppress any obligation to reflect upon or to evaluate the actual relevance of the data they seek to market to its purported real-world criteria.

Conclusion

It was stated above that computable functions (algorithms) performed upon data values are defined as recursive functions, and are analogous, as a matter of principle, to the recursive function that defines the set of natural numbers. Logical consistency in digital information processes is therefore directly analogous to proportional consistency in the set of the natural numbers, which the preceding analysis now reveals as a principle that depends locally upon the rules (i.e., the restrictive array of available writable digits) governing the particular numerical radix we happen to be working in, and cannot be applied with consistency across alternative numerical radices. We should then make the precautionary observation that the logical consistency of data in a digital information system must likewise arise as a unique product of the particular algorithmic rules governing the processing of that data. It should not be taken for granted that two independent sets of data produced under different algorithmic rules, but relating to the same real-world criteria, will be logically consistent with each other merely by virtue of their shared ontological content. That is to say that the sharing of referential criteria between independent sets of data is always a notional one – one that requires each set of data to be qualified with respect to the rules under which the data has been derived.

Nevertheless, since the development of digital computing, and most significantly for the last three decades, computer science has relied upon the assumption of logical consistency as an integral, that is to say, as a given, transcendent property of data produced by digital means; and as one ideally transferable across multiple systems. It has failed to appreciate logical consistency as a property conditional upon the specific non-universal rules under which data is respectively processed. This technical misapprehension derives ultimately from a mathematical oversight, under which it has been assumed that the proportional consistency of a decimal system might be interpreted as a governing universal principle, applicable across diverse number radices. The analysis presented here indicates rather that the adoption of decimal notation as the universal method of numerical description is an arbitrary choice, and that the limited and restrictive proportional rules which govern that system can no longer be tacitly assumed as having any universal applicability.

May 2016
(revised: 3 November 2023)


Footnotes:

  1. There are numerous references that might be cited here, but the distinction in Greek mathematical thought between ‘discrete’ and ‘continuous’ (or ‘homogeneous’) magnitudes and its influence upon modern algebra is discussed at length in two articles by Daniel Sutherland: Kant on Arithmetic, Algebra, and the Theory of Proportions, Journal of the History of Philosophy, vol.44, no.4 (2006), pp.533-558; and: Kant’s Philosophy of Mathematics and the Greek Mathematical Tradition, The Philosophical Review, Vol. 113, No. 2 (April 2004), pp.157-201. See also: Renaissance notions of number and magnitude, Malet, A., Historia Mathematica, 33 (2006), pp.63-81. See also Jacob Klein’s influential book on the development of algebra and the changing conception of number in the pre-modern period: Greek Mathematical Thought and the Origin of Algebra, Brann, E. (tr.), Cambridge: MIT, 1968 (repr. 1992). [back]
  2. Klein, ibid., Ch.6, pp.46-60. [back]
  3. Even in complete abstraction from the material world of objects therefore, integers somehow retain a trace of their Classical role, through a reification of the idea of intrinsic value, which still inheres as it were ‘magically’ in our system of notation – that which, as a methodological abstraction, is now derived purely under the concept of number. The reification of abstract numerical quantities, particularly during the early modern period, may also be viewed as a form of psychic recompense for the fact that in England during the 17th Century the cash base of society lost its essential intrinsic value in gold and silver, due in part to a relentless debasing of the coinage by ‘clipping’ and counterfeiting, and hence began to be replaced by paper notes and copper coinage around the turn of the century, following Sir Isaac Newton’s stewardship of the Royal Mint during the period 1696-1727 (see: Newton and the Counterfeiter, Levenson, T., Mariner Books, 2010). This fundamental shift in the conception of monetary value from one based upon the intrinsic value of an amount of bullion necessarily present in any exchange relationship, to one that merely represented that value as existing elsewhere by way of a promise to pay, thus significantly enhancing the liquidity of finance and exchange in the motivation of trade, was one that occurred in parallel with the progressive realisation of Descartes’ (and Leibniz’s) project for a universal mathesis based upon abstract formal notation. [back]
  4. In the resulting distributions horizontal straight lines are found to occur only where the decimal value of x (prior to its conversion to base-b) is equal to the value b, or to b² or b³ (also, it is assumed, by extension to bⁿ) – see Comments on pages 31 & 51 of the extended pdf version of Radical Affinity etc. [back]
  5. In terms of the largely unseen hardware-instruction (machine-code) level, digital information systems have made extensive use of octal and hexadecimal (base-16), in place of decimal, as the radices for conversions of strings of binary code into more manageable quantitative units. Historically, in older 12- or 24-bit computer architectures, octal was employed because the relationship of octal to binary is more hardware-efficient than that of decimal, as each octal digit is easily converted into a maximum of three binary digits, while decimal requires four. More recently, it has become standard practice to express a string of eight binary digits (a byte) by dividing it into two groups of four, and representing each group by a single hexadecimal digit (e.g., the binary 10011111 is split into 1001 and 1111, and represented as 9F in hexadecimal – corresponding to 9 and 15 in decimal). [back]
  6. One explanation given for this is that, while the set of the natural numbers is ‘countably infinite’, the number of possible functions upon the natural numbers is uncountable. Any computable function may be represented in the form of a hypothetical Turing machine, and, as individual Turing machines may be represented as unique sequences of coded instructions in binary notation, those binary sequences may be converted into their decimal correspondents, so that every possible computable function is definable as a unique decimal serial number. The number of possible Turing machines is therefore clearly countable, and as the number of possible functions on the natural numbers is uncountable, the number of possible functions is by definition greater than the number of computable ones. For further elaboration and specific proof of this principle see: Section 5 of: Barker-Plummer, D., Turing Machines, The Stanford Encyclopedia of Philosophy, Summer 2013 Edition, Edward N. Zalta (ed.): http://plato.stanford.edu/archives/sum2013/entries/turing-machine/ (accessed 09/12/2014).

    Turing’s formulation of the Turing machine hypothesis, in his 1936 paper: On Computable Numbers..., was largely an attempt to answer the question of whether there was in principle some general mechanical procedure that could be employed as a method of resolving all mathematical problems. The question became framed in terms of whether there exists a general algorithm (i.e., Turing machine) which would be able to determine whether another Turing machine Tₙ ever stops (i.e., computes a result) for a given input m. This became known as the “Halting problem”, by way of which Turing addressed Hilbert’s “Entscheidungsproblem”. Turing’s conclusion was that there was no such algorithm. From that conclusion it follows that there are mathematical problems for which there exists no computational (i.e., mechanical) solution. See: On Computable Numbers, with an Application to the Entscheidungsproblem; Proceedings of the London Mathematical Society, 2 (1937) 42: 230-65: http://somr.info/lib/Turing_paper_1936.pdf. See also: Ch. 2, pp.45-83 of: Penrose, R., The Emperor’s New Mind, OUP, 1989; as well as pp.168-177 of the same, with reference to Diophantine equations and other examples of non-recursive mathematics. [back]

  7. The principle of recursion is nicely illustrated by the characteristics of a series of Russian Dolls. It is important to recognise that not all of the properties of the base case are transferable – for instance, zero is unique amongst the natural numbers in not having a predecessor. [back]
  8. For a discussion of these criteria in relation to Turing machines, see the section: Turing Machines & Logical Inconsistency, in the page: Mind: Before & Beyond Computation. [back]
