


Informational optimality of decimal numeration system
Dr. David Kisets
The National Physical Laboratory of Israel,
Danziger "A" bldg., Hebrew University, Givat Ram, Jerusalem,
91904, Fax: 026520797, Email: kisets@netvision.net.il
Abstract: The paper presents an information-based approach to the investigation of numeration systems, aimed at recognizing the degree of their informational optimality. The method of investigation is based on Benford's Law and the principle of information cyclicity. The end result of the investigation is the discovery of the unique optimality of the commonly used decimal system. Along with its theoretical significance, this result is of practical use in applied metrology.
Keywords: probability, information, optimization,
numeration system.
1 Introduction
A variety of numeration systems are in use. The most widely used is the decimal system, with base 10. The binary system uses base 2 and is important because of its application to modern computers; two other systems, octal (base 8) and hexadecimal (base 16), serve the same purpose. In addition, the duodecimal system, which uses 12 as a base, is practiced and has some advantages. A system of base 60, used by the ancient Babylonians, still survives in our smaller divisions of both time and angle, i.e., minutes and seconds. The list of examples could be continued. However, regardless of any technological or traditional reasons for its use, it is the decimal system that is of particular concern to mankind.
In general, any integer z greater than one can be used as the base of a numeration system, and the system will employ z different digits. We use the decimal or another numeration system, which (to the author's knowledge) has never been investigated with regard to its influence on measurement and/or estimation information. Nevertheless, the quality of the information depends on the numeration system used. This holds owing to the mathematical theorem known as Benford's Law [1] and the principle of information cyclicity [2]. The harmonization of measurements, for instance, requires the numeration system to be optimal; otherwise an additional loss or excess of measurement information is inevitable.
The paper proves the informational optimality of the commonly used decimal system. Despite the brief presentation, the author believes that this paper may be of interest to specialists in mathematics, metrology, qualimetry, and other academic fields.
2 Benford’s probabilities
According to Benford's Law, in any numeration system of base z the probability P(d) of occurrence of the first significant digit d, for d from 1 to (z − 1), is calculated as follows:
P (d) = log_{z} (1 + 1/d)
The probabilities P(d) form a complete group of independent events, i.e. their sum equals 1, and the logarithmic sequence has an obvious classification character. If, for instance, z = 10, then the pattern for d = 1–9 repeats for the subgroups 10–90, 100–900, etc., owing to the first digits of each subgroup. The different probabilities of the digits on the one hand, and the infinite diversity of potential numeration systems on the other, suggest the existence of an optimum numeration system that can be determined by examining the logarithmic sequences.
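The complete-group property is easy to verify numerically. The following Python sketch (an illustration added here, not part of the original text) computes the Benford probabilities P(d) = log_z(1 + 1/d) for base 10 and checks that they sum to 1:

```python
import math

def benford_probs(z):
    """First-digit probabilities P(d) = log_z(1 + 1/d) for d = 1 .. z - 1."""
    return [math.log(1 + 1 / d, z) for d in range(1, z)]

probs = benford_probs(10)
print(round(sum(probs), 10))   # the complete group sums to 1.0
print(round(probs[0], 4))      # P(1) = log10(2), about 0.301
```

The sum telescopes: the product of (d + 1)/d over d = 1 .. z − 1 equals z, so the probabilities sum to 1 in every base, as the complete-group statement requires.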
3 Optimization criterion
The minimum and maximum Benford's probabilities, i.e. P_{min} = P[d = (z − 1)] and P_{max} = P(d = 1), form a specific system of two components, whose ratio indicates the degree of optimality of a numeration system. In terms of information theory [3], the optimum ratio (P_{min}/P_{max})_{o} can be determined with the recently discovered principle of information cyclicity. According to this principle, an approximate cyclical connection between the sufficient and maximum probabilities always exists in a system of independent events characterized by a complete group of probabilities. The ratio of the sufficient probability to the maximum probability is called the accuracy coefficient, r_{o} = 1/(2π). The derivation of information cyclicity is briefly dealt with in the Appendix. From these considerations, the optimization criterion for Benford's probabilities may be expressed as follows:
(P_{min}/P_{max})_{o} = P[d = (z − 1)] / P(d = 1) = 1/(2π)
4 Optimum numeration system
In accordance with the above-established criterion, the base of the optimum numeration system z_{o} (that is, the logarithm base) is determined as the rounded value of the minimizing argument:
z_{o} = arg min_{z} | log_{z}[1 + 1/(z − 1)] / log_{z}(1 + 1) − 1/(2π) | = 10
Therefore, the optimality of the decimal system may be considered proven.
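The minimization can be checked directly. In the deviation |log_{z}[1 + 1/(z − 1)]/log_{z}(1 + 1) − 1/(2π)| the base-z logarithms cancel, so a simple search over candidate bases (a numerical sketch, not the paper's own code) confirms the result:

```python
import math

TARGET = 1 / (2 * math.pi)  # the optimum ratio (P_min / P_max)_o

def deviation(z):
    # |P_min / P_max - 1/(2*pi)|; the choice of log base cancels in the ratio.
    return abs(math.log(1 + 1 / (z - 1)) / math.log(2) - TARGET)

z_opt = min(range(3, 61), key=deviation)
print(z_opt)  # 10
```

For comparison, the deviation at z = 9 is about 0.011 and at z = 11 about 0.022, against about 0.007 at z = 10.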
The same result can be achieved with the equation log_{z}(1 + 1)/(2π) = log_{z}[1 + 1/(z_{o} − 1)], from which it follows that z_{o} = 1 + [exp(ln 2/(2π)) − 1]^{−1} ≈ 9.57 = 10 (as the rounded value).
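The closed-form expression can likewise be evaluated directly (again an illustrative check added here):

```python
import math

# z_o = 1 + [exp(ln 2 / (2*pi)) - 1]^(-1), the closed form above
z_o = 1 + 1 / (math.exp(math.log(2) / (2 * math.pi)) - 1)
print(round(z_o, 3))  # 9.574
print(round(z_o))     # 10
```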
This value does not depend on z, which in a sense signifies the absoluteness of the informational optimality of decimal numeration and makes it possible to consider z_{o} = 10 as one of the information constants (together with the golden section f_{o} = 0.618, the mathematical measure of harmonious relation, and the optimum accuracy coefficient r_{o} = 1/(2π)), whose rounded product r_{o} · f_{o} · z_{o} = 1 may be called the Law of Information Constants.
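The rounded product of the three constants can be checked numerically, using the values given in the text (a sketch, not part of the original derivation):

```python
import math

r_o = 1 / (2 * math.pi)  # optimum accuracy coefficient, about 0.1592
f_o = 0.618              # golden section, as given in the text
z_o = 10                 # optimum numeration base

product = r_o * f_o * z_o
print(round(product, 3))  # 0.984
print(round(product))     # rounds to 1
```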
5 Conclusion
The obtained result demonstrates the unique informational optimality of the commonly used decimal system. In other words, any numeration system other than the decimal system is informationally either insufficient or excessive.
Appendix: Information cyclicity
The derivation of information cyclicity is based on the equivalence between the entropy H of a system of n components characterized by probabilities p_{i} and the maximum entropy (ln j) of a system of j informative components of equal probability (p_{j} = 1/j). From the equality of these entropies, the number j is calculated as j = exp(H). The approximate (2% estimation error) cyclical connection n/(n − j) = p_{1}/p_{j} = 2π, called the principle of information cyclicity, was discovered when analyzing the formula for a linear diagram of probabilities (Δp_{j} = p_{j} − p_{j+1} = constant > 0), proven to be the best calculation model. This method contains a fundamental estimation uncertainty due to the inequality of probabilities in the redundant part (from j to n) of the linear diagram. This part, considered as a separate subsystem of the complete group of probabilities, possesses its own redundancy. This local redundancy causes both the optimization insufficiency and the uncertainty in the main informative part (from 1 to j) of the system of n components.
A more accurate calculation can be achieved by taking into account the redundant number of subsystem components, equal to (n − j)(1 − j/n). The same number of the weightiest subsystem components may be considered as the limit of possible addition to j, or the interval of uncertainty in determining the informative components. The optimum number j_{o} lies within the limits from j to [j + (n − j)(1 − j/n)]. By replacing the sum with an integral and letting n → ∞, the relative value j_{o}/n and the uncertainty of its estimation U(j_{o}/n) are determined as follows:
j_{o}/n = 0.5{1 + [(j_{o}/n)′]^{2}} = 0.840 = (1 − 1/(2π));
U(j_{o}/n) = 0.5[1 − (j_{o}/n)′]^{2} = 0.015 = 0.1(1/(2π)),
where j = exp(H); K_{1} = 2/(n + 1);
(j_{o}/n)′ = {[(n + 1)/n^{2}] exp(0.5 − ln 2)} = 0.824.
With a 0.5% estimation error, the ratio n/(n − j) = 2π matches the principle of information cyclicity.
For a system of two components the derivation is simplified. In that case the goal is to find the ratio between the two probabilities for which the calculated number of informative components satisfies the following equation:
j = exp(−p_{1} ln p_{1} − p_{2} ln p_{2}) = 1.5
This equation describes the most uncertain (50% confidence) situation regarding allowing or ignoring the lesser (by probability) of the two components. The calculation leads to the conclusion that, with a 0.5% estimation error, the ratio p_{2}/p_{1} = 1/(2π), which is in agreement with the principle of information cyclicity.
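For the two-component case the check is direct: with p_{2}/p_{1} = 1/(2π) the number of informative components j = exp(H) comes out close to 1.5. A short numerical sketch (illustrative, not the author's code):

```python
import math

def informative_components(p1):
    """j = exp(H) for a two-component system with probabilities p1 and 1 - p1."""
    p2 = 1 - p1
    h = -p1 * math.log(p1) - p2 * math.log(p2)  # Shannon entropy in nats
    return math.exp(h)

# Choose p1 so that p2/p1 = 1/(2*pi), with p1 + p2 = 1.
p1 = 1 / (1 + 1 / (2 * math.pi))
print(round(informative_components(p1), 3))  # about 1.492, i.e. j ~ 1.5
```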
References
[1] Malcolm W. Browne, Following Benford's Law, or Looking Out for No. 1, The Standard, ASQ, 1998, Vol. 984, 18–20.
[2] D. Kisets, Optimum Traceability Type Hierarchies. OIML Bulletin, 1997, XXXVIII (2), 30–36.
[3] C. E. Shannon, A mathematical theory
of communication, Bell Syst. Tech. J. 27 (1948) 379–423.
Technical College - Bourgas, All rights reserved, © March 2000