American Journal of Computational and Applied Mathematics

p-ISSN: 2165-8935    e-ISSN: 2165-8943

2018;  8(5): 93-102

doi:10.5923/j.ajcam.20180805.02

 

h, k, NA: Evaluating the Relative Uncertainty of Measurement

Boris Menin

Mechanical & Refrigeration Consultation Expert, Beer-Sheba, Israel

Correspondence to: Boris Menin, Mechanical & Refrigeration Consultation Expert, Beer-Sheba, Israel.

Email:

Copyright © 2018 The Author(s). Published by Scientific & Academic Publishing.

This work is licensed under the Creative Commons Attribution International License (CC BY).
http://creativecommons.org/licenses/by/4.0/

Abstract

For verification of the measurement accuracy of fundamental physical constants, the unique CODATA technique is used. This procedure is a complex process based on a careful discussion of the input data, and the justification and construction of tables of values sufficient for the direct use of the relative uncertainty are conducted using modern advanced statistical methods and powerful computers. However, at every stage of data processing, researchers must rely on common sense, that is, on expert judgement. In this article, the author proposes a theoretically and informationally grounded justification for calculating the relative uncertainty by using the comparative uncertainty. A detailed description of the data and of the processing procedures does not require considerable time. Measurement results for three fundamental constants are analysed by applying the information-oriented approach. A comparison of the achieved relative and comparative uncertainties is introduced and discussed.

Keywords: Avogadro number, Boltzmann constant, Planck constant, Information theory, Comparative and relative uncertainties

Cite this paper: Boris Menin, h, k, NA: Evaluating the Relative Uncertainty of Measurement, American Journal of Computational and Applied Mathematics, Vol. 8, No. 5, 2018, pp. 93-102. doi: 10.5923/j.ajcam.20180805.02.

1. Introduction

It is expected that in 2018 the International System of Units (SI) will be redefined on the basis of fixed values of certain fundamental constants [1]. This is a dramatic change, one consequence of which is that there will no longer be a clear distinction between the base quantities and derived quantities [2].
Recommended CODATA values and units for the constants [3] are based on the conventions of the current SI, and any modification of these conventions will have implications for the units. Mathematics itself does not provide information on how to include units in the analysis of physical phenomena; one of the goals of the SI is to provide a systematic structure for including units in the equations that describe physical phenomena.
The desire to reduce the value of uncertainty in the measurement of fundamental physical constants is due to several reasons. First, the achievement of an accurate quantitative description of the physical universe depends on the numerical values of the constants that appear in theories. Secondly, the general consistency and validity of the basic theories of physics can be proved by carefully studying the numerical values of these constants, determined from different experiments in different fields of physics.
One needs to note the main feature of the CODATA technique in determining the relative uncertainty of one or another fundamental physical constant. Using the CODATA technique, which is based on solid principles of probability and statistics, tables of values that allow the direct use of relative uncertainty are constructed with modern advanced statistical methods and powerful computers. This, in turn, allows checking the consistency of the input data and of the output set of values. However, at every stage of data processing, one needs to use her or his intuition, knowledge and experience (one's personal philosophical leanings [4]). In addition, it should be noted that the concept of relative uncertainty is used when considering the accuracy of the achieved results (the absolute value and absolute uncertainty of the separate quantities and criteria) during the measurement process in different applications. However, this method of identifying the measurement accuracy does not indicate the direction of deviation from the true value of the main quantity. In addition, it involves an element of subjective judgement [5].
Let us start with a disclaimer: the author neither promotes the CODATA method nor blames it. The author only draws attention to the fact that a statistical-expert method for estimating the accuracy of measurement of a fundamental physical constant assumes, by default, the existence of nonprofessional criteria that are not accepted at scientific conferences, are not approved for publication in scientific journals, and are simply not mentioned in personal conversations: the desire to promote one's own project, the desire to obtain additional investment to continue experiments, and the wish to achieve international recognition of the achieved results. Apparently, this situation suits most researchers.
The aim of this paper is to introduce the information-oriented approach [6] for analysing measurements of fundamental physical constants in terms of a comparative uncertainty alongside the relative uncertainty, and to compare the results. Within the framework of the information approach, a theoretical and informational grounding and justification is provided for calculating the relative uncertainty.

2. Information Approach

During the modelling process, scientists use quantities defined in the SI. The SI is generated by the collective imagination; it is an instrument characterised by the equiprobable accounting of any quantity by a conscious observer who develops the model according to his or her knowledge, intuition, and experience. Each quantity allows the researcher to obtain a certain amount of information about the studied object. The total number of quantities can be calculated, and this corresponds to the maximum amount of information contained in the SI.
In addition, every experimenter selects a particular class of phenomena (CoP) to study the measurement process of the fundamental physical constant. A CoP is a set of physical phenomena and processes described by a finite number of base and derived quantities that characterize certain features of the object [7]. For example, in mechanics, the SI uses the basis {length L, mass M, time T}, that is, CoPSI ≡ LMT.
Surprisingly, one can calculate the total number of dimensional and dimensionless quantities inherent in the SI and, from that, the first-born absolute uncertainty in determining the dimensionless main researched quantity "embedded" in a physical-mathematical model and caused only by the limited number of chosen quantities. This can be organized in the following steps:
(1) There are ξ = 7 base quantities: L is the length, M is the mass, T is the time, I is the electric current, Θ is the thermodynamic temperature, J is the luminous intensity, and F is the amount of substance [8];
(2) The dimension of any derived quantity q can only be expressed as a unique combination of dimensions of the main base quantities to different powers:
q ∝ L^l · M^m · T^t · I^i · Θ^θ · J^j · F^f    (1)
(3) l, m, …, f are the exponents of the base quantities; they take only integer values, and the range of each has a maximum and a minimum value:
−3 ≤ l ≤ +3, −1 ≤ m ≤ +1, −4 ≤ t ≤ +4    (2)
−2 ≤ i ≤ +2, −4 ≤ θ ≤ +4    (3)
−1 ≤ j ≤ +1, −1 ≤ f ≤ +1    (4)
where el = 7, em = 3, et = 9, ei = 5, eθ = 9, ej = 3, ef = 3 are the numbers of choices of the exponent for each base quantity; for example, L-3 is used in the formula for density, and a temperature exponent of 4 appears in the Stefan-Boltzmann law.
(4) The total number of dimension options of physical quantities equals
Ψ° = el·em·et·ei·eθ·ej·ef − 1 = 7·3·9·5·9·3·3 − 1 = 76,544    (5)
where "−1" corresponds to the case in which all the exponents of the base quantities in formula (1) are equal to zero (the dimensionless case).
(5) The value Ψ° includes both direct and inverse quantities (for example, L¹ is the length, L-1 is the running length). The object can be judged knowing only one of its symmetrical parts, whereas the others, structurally duplicating this part, may be regarded as information-empty. Therefore, the number of dimension options may be halved. This means that the total number of dimension options of physical quantities without inverse quantities equals Ψ = Ψ°/2 = 38,272.
(6) According to π-theorem [9], the number μSI of possible dimensionless criteria with ξ = 7 base quantities for SI will be
μSI = Ψ − ξ = 38,272 − 7 = 38,265    (6)
(7) Then, let there be a situation, wherein all quantities µSI of SI can be taken into account, provided the choice of these quantities is considered, a priori, equally probable. In this case, µSI corresponds to a certain value of entropy and may be calculated by the following formula [10]:
H = kb·ln(μSI)    (7)
where H is the entropy of the SI with all quantities accounted as equally probable, and kb is the Boltzmann constant.
(8) When a researcher chooses the influencing factors (the conscious limitation of the number of quantities that describe an object, in comparison with the total number µSI), entropy of the mathematical model changes a priori. Then one can write
ΔA′ = Q·(Hpr − Hps) = kb·ln[μSI/(z′ − β′)]    (8)
where ΔA′ is the a priori amount of information pertaining to the observed object due to the choice of the CoP; Hpr − Hps is the difference in entropy between the a priori (pr) and a posteriori (ps) cases; Q is the efficiency of the experimental observation (for a thought experiment, in which no distortion is brought into the real system, Q = 1); z′ is the number of physical quantities in the selected CoP; β′ is the number of base quantities in the selected CoP.
(9) The value ΔA′ is linked to the a priori absolute uncertainty of the model caused only by the choice of the CoP, Δpmm′, and to S, the dimensionless interval of observation of the main researched dimensionless quantity u, through the following dependence [6, 10]:
ΔA′ = kb·ln(S/Δpmm′), hence Δpmm′ = S·(z′ − β′)/μSI    (9)
(10) Following the same reasoning, it can be shown that the a priori absolute uncertainty of a model of the observed object, caused by the number of recorded dimensionless criteria chosen in the model, Δpmm" takes the following form:
Δpmm″ = S·(z″ − β″)/(z′ − β′)    (10)
where z" is the number of dimensional quantities recorded in a mathematical model; β" is the number of base quantities recorded in a model; Δpmm'' cannot be defined without declaring the chosen CoP (Δpmm').
(11) Summarizing (9) and (10), it is possible to calculate the dimensionless total absolute uncertainty Δupmm in determining the dimensionless main quantity u:
Δupmm = Δpmm′ + Δpmm″ = S·[(z′ − β′)/μSI + (z″ − β″)/(z′ − β′)]    (11)
where ε = Δupmm/S is the comparative uncertainty [6, 10].
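To make the bookkeeping in (11) concrete, the following minimal Python sketch evaluates the comparative uncertainty from the four counts z′, β′, z″, β″; the function name and the illustrative numbers are chosen for this sketch and are not taken from the original text.
```python
# A minimal sketch of Eq. (11): eps = (z' - b')/mu_SI + (z'' - b'')/(z' - b').
MU_SI = 38265  # total number of dimensionless criteria in the SI, Eq. (6)

def comparative_uncertainty(z1, b1, z2, b2, mu_si=MU_SI):
    """Dimensionless comparative uncertainty eps = Delta_u_pmm / S of Eq. (11)."""
    cop_criteria = z1 - b1    # criteria available in the chosen CoP (z' - beta')
    model_criteria = z2 - b2  # criteria actually recorded in the model (z'' - beta'')
    return cop_criteria / mu_si + model_criteria / cop_criteria

# illustrative call: a CoP offering 468 criteria, a model that records 6 of them
print(round(comparative_uncertainty(472, 4, 10, 4), 4))  # ~0.025
```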
The overall uncertainty of the model, which also includes inaccurate input data, physical assumptions, the approximate solution of the integral-differential equations, and so on, will be larger than Δupmm. Thus, Δupmm is only the first-born and smallest component of a possible mismatch between a real measurement process of a fundamental constant and its modelling results.
The relationship (11) expresses a conformity principle: in nature, there is a fundamental limit to the accuracy of measuring any observed material object, which cannot be surpassed by any improvement of instruments, measurement methods, or the model's computerization. It sets a limit on the expedient increase of measurement accuracy when conducting experimental studies of the fundamental constants.
Within the abovementioned approach and for a given CoP, one could define the actual value of the minimum comparative uncertainty inherent in a model with a chosen finite number of quantities for each specific CoP.
Equating the derivative of Δupmm/S (11), with respect to z'-β' to zero, gives the following condition for achieving the minimum comparative uncertainty for a particular CoP:
(z′ − β′)/μSI = (z″ − β″)/(z′ − β′), i.e., z″ − β″ = (z′ − β′)²/μSI    (12)
By using (12), one can find the values of the lowest achievable comparative uncertainties for different CoPSI; moreover, the values of the comparative uncertainties and the numbers of the chosen variables are different for each CoPSI. For example, at measurements of the Boltzmann constant, CoPSI ≡ LMTΘF is usually used. In this case the minimum comparative uncertainty (εmin)LMTΘF can be calculated in the following way:
(z′ − β′)LMTΘF = (el·em·et·eθ·ef − 1)/2 − 5 = (7·3·9·9·3 − 1)/2 − 5 = 2,546    (13)
(εmin)LMTΘF = (z′ − β′)/μSI + (z″ − β″)/(z′ − β′) = 2·(z′ − β′)/μSI    (14)
where the second equality follows from condition (12), so that
(εmin)LMTΘF = 2·2,546/38,265 ≈ 0.1331    (15)
To reach (εmin)LMTΘF, the required number of dimensionless criteria z″ − β″ equals:
z″ − β″ = (z′ − β′)²/μSI = 2,546²/38,265 ≈ 169    (16)
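As a cross-check of the reconstructed exponent ranges in (2)-(4) and of the procedure (12)-(16), a short Python sketch can reproduce μSI and the minimum comparative uncertainties quoted in this paper for several CoPs (0.0245 for LMTI, 0.0146 for LMTF, 0.1331 for LMTΘF; LMTΘ comes out near 0.044). The exponent counts below are assumptions of this sketch.
```python
from math import prod

# assumed numbers of admissible integer exponents per base quantity, Eqs. (2)-(4)
E = {'L': 7, 'M': 3, 'T': 9, 'I': 5, 'Theta': 9, 'J': 3, 'F': 3}

psi0 = prod(E.values()) - 1    # Eq. (5): 76,544 dimension options
psi = psi0 // 2                # direct quantities only: 38,272
mu_si = psi - len(E)           # Eq. (6): 38,265 dimensionless criteria

def eps_min(cop):
    """Minimum comparative uncertainty of a CoP, using condition (12)."""
    z1_b1 = (prod(E[q] for q in cop) - 1) // 2 - len(cop)  # criteria in the CoP, cf. (13)
    z2_b2 = z1_b1 ** 2 / mu_si                             # optimal criteria count, cf. (16)
    return z1_b1 / mu_si + z2_b2 / z1_b1                   # cf. (14)-(15)

for cop in (['L','M','T','I'], ['L','M','T','F'],
            ['L','M','T','Theta'], ['L','M','T','Theta','F']):
    print(''.join(cop), round(eps_min(cop), 4))
```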
Below is a summary Table 1 of comparative uncertainties and the optimal number of dimensionless criteria considered in the model for each CoP.
Table 1. Comparative uncertainties and optimal number of dimensionless criteria
It is to be noted that the relative and comparative uncertainties of the dimensional quantity U and the dimensionless quantity u are equal
r = ΔU/U = (a·Δu)/(a·u) = Δu/u = R,  ε = Δu/S = (a·Δu)/(a·S) = ΔU/S*    (17)
where S, Δu are the dimensionless quantities, respectively, the range of variations and the total absolute uncertainty in determining the dimensionless quantity u; S*, ΔU are the dimensional quantities, respectively, the range of variations and the total absolute uncertainty in determining the dimensional quantity U; a is the dimensional scale parameter with the same dimension as that of U and S*; r is a relative uncertainty of the dimensional quantity U; R is a relative uncertainty of the dimensionless quantity u.
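A quick numerical illustration of the reconstructed identities in (17): scaling a dimensionless quantity by a dimensional parameter a changes neither the relative nor the comparative uncertainty. All numbers below are made up for the illustration.
```python
a = 6.6e-34                            # dimensional scale parameter
u, du, S = 1.00002, 3.0e-6, 1.0e-4     # dimensionless value, uncertainty, range
U, dU, S_star = a * u, a * du, a * S   # corresponding dimensional quantities

R, r = du / u, dU / U                  # relative uncertainties (dimensionless / dimensional)
eps_u, eps_U = du / S, dU / S_star     # comparative uncertainties

print(abs(r - R) < 1e-15, abs(eps_U - eps_u) < 1e-12)  # True True
```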

3. Formulation of Procedure

If the range of observation S is not defined, the information obtained during the observation/measurement cannot be determined, and the entropic price becomes infinitely large [10].
In the framework of the information-oriented approach, the theoretical limit of the absolute and relative uncertainties depends on an empirical value, that is, on the possible interval of placement (the observed range of variations) S of the measured physical constant. In other words, the results will be completely different if a larger interval of changes of the measured fundamental physical constant is considered. This is true; however, if S is not declared, the information obtained in the measurement cannot be determined. Any specific measurement requires certain (finite) a priori information about the components of the measurement and the interval of observation of the measured quantity. These requirements are so universal that they act as a postulate of metrology [11]. The observed range of variations thus depends on the knowledge of the researcher before undertaking the study. "If nothing is known about the system studied, then S is determined by the limits of the measuring devices used" [10]. That is why, taking into account Brillouin's suggestions, there are two options for applying the conformity principle to analyze the measurement data of the fundamental physical constants.
First, this principle dictates analyzing the magnitude of the relative uncertainty achievable at the moment, taking into account the latest measurement results. An extended range of changes in the quantity under study S indicates an imperfection of the measuring devices, which leads to a large value of the relative uncertainty. The development of measuring technology, the increase in the accuracy of measuring instruments, and the improvement of existing and newly created measurement methods together lead to an increase in knowledge of the object under study and, consequently, to a decrease in the magnitude of the achievable relative uncertainty. However, this process is not infinite and is limited by the conformity principle. The reader should bear in mind that this conformity principle is not a shortcoming of the measurement equipment or of the engineering device, but of the way human brains work. When predicting the behavior of any physical process, physicists are, in fact, predicting the perceivable output of instrumentation. It is true that, according to the µ-hypothesis, observation is not a measurement, but a process that creates a unique physical world with respect to each particular observer. Thus, in this case, the range of observation (possible interval of placement) of the fundamental physical constant S is chosen as the difference between the maximum and minimum values of the physical constant measured by different scientific groups during a certain period of recent years. Only in the presence of the results of various experiments can one speak about the possible appearance of the measured value in a certain range. Thus, using the smallest attainable comparative uncertainty inherent in the selected class of phenomena for measuring the fundamental constant, it is possible to calculate the recommended minimum relative uncertainty, which is then compared with the relative uncertainty of each published study. In what follows, this method is denoted as IARU and includes the following steps (a short computational sketch is given after the list):
(1) From the published data of each experiment, the value z, the relative uncertainty rz and the standard uncertainty uz (the possible interval of placement) of the fundamental physical constant are chosen;
(2) The experimental absolute uncertainty Δz is calculated by multiplying the fundamental physical constant value z and its relative uncertainty rz attained during the experiment, Δz = z · rz;
(3) The maximum zmax and minimum zmin values of the measured physical constant are selected from the list of measured values zi of the fundamental physical constant mentioned in different studies;
(4) As a possible interval for placing the observed fundamental constant Sz, the difference between the maximum and minimum values is calculated, Sz = zmax - zmin;
(5) The selected comparative uncertainty εT (Table 1) inherent in the model describing the measurement of the fundamental constant is multiplied by the possible interval of placement of the observed fundamental constant Sz to obtain the absolute experimental uncertainty value ΔIARU in accordance with the IARU, ΔIARU = εT · Sz;
(6) To calculate the relative uncertainty rIARU in accordance with the IARU, this absolute uncertainty ΔIARU is divided by the arithmetic mean of the selected maximum and minimum values, rIARU = ΔIARU / ((zmax + zmin)/2);
(7) The relative uncertainty obtained rIARU is compared with the experimental relative uncertainties ri achieved in various studies;
(8) According to IARU, the comparative experimental uncertainty of each study, εIARUi, is calculated by dividing the experimental absolute uncertainty of each study Δz by the difference between the maximum and minimum values of the measured fundamental constant Sz, εIARUi = Δz / Sz. These calculated comparative uncertainties are also compared with the selected comparative uncertainty εT (Table 1).
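A minimal Python sketch of the IARU steps above; the function name, argument layout, and the toy numbers are illustrative and not taken from the paper.
```python
def iaru(values, rel_uncertainties, eps_T):
    """values: constant values z_i reported by the studies; rel_uncertainties:
    their relative uncertainties r_i; eps_T: comparative uncertainty of the
    chosen CoP (Table 1)."""
    abs_unc = [z * r for z, r in zip(values, rel_uncertainties)]  # step (2)
    z_max, z_min = max(values), min(values)                       # step (3)
    S_z = z_max - z_min                                           # step (4)
    delta_iaru = eps_T * S_z                                      # step (5)
    r_iaru = delta_iaru / ((z_max + z_min) / 2)                   # step (6)
    eps_i = [d / S_z for d in abs_unc]                            # step (8)
    return r_iaru, eps_i  # to be compared with r_i (step 7) and eps_T (step 8)

# toy usage with two fictitious Planck-like determinations
print(iaru([6.62607004e-34, 6.62606979e-34], [2.0e-8, 9.1e-8], 0.0245))
```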
Second, S is determined by the limits of the measuring devices used [10]. This means that the standard uncertainty reported for the measurement of the physical constant in each particular experiment is taken as the observation interval in which the expected true value of the measured fundamental physical constant is located. Compared with various fields of technology, experimental physics has the advantage that in all studies the experimenters report the output data of the measurement with uncertainty bars. At the same time, it should be remembered that the standard uncertainty of a particular measurement is subjective, because the conscious observer may not have taken into account one or another source of uncertainty: the experimenters calculate the standard uncertainty taking into account all the uncertainty contributions they have noticed. Then, one calculates the ratio between the absolute uncertainty reached in an experiment and the standard uncertainty, the latter acting as the possible interval for allocating the fundamental physical constant. So, in the framework of the information approach, the comparative uncertainties achieved in the studies are calculated, which in turn are compared with the theoretically achievable comparative uncertainty inherent in the chosen class of phenomena. The standard uncertainty can also be calculated for quantities that are not normally distributed; the transformation of different types of uncertainty sources into a standard uncertainty is very important. In what follows, this method is denoted as IACU and includes the following steps (a short computational sketch follows the list).
(1) From the published data of each experiment, the value z, relative uncertainty rz and standard uncertainty uz (possible interval of placing) of the fundamental physical constant are chosen;
(2) The experimental absolute uncertainty Δz is calculated by multiplying the fundamental physical constant value z and its relative uncertainty rz attained during the experiment, Δz = z · rz;
(3) The achieved experimental comparative uncertainty of each published research, εIACUi, is calculated by dividing the experimental absolute uncertainty Δz by the standard uncertainty uz, εIACUi = Δz / uz;
(4) The experimental calculated comparative uncertainty εIACUi is compared with the selected comparative uncertainty εT (Table 1) inherent in the model, which describes the measurement of the fundamental constant.
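And a matching sketch of the IACU steps; again, the names and numbers are illustrative.
```python
def iacu(value, rel_uncertainty, std_uncertainty, eps_T):
    """value: reported constant z; rel_uncertainty: r_z; std_uncertainty: u_z,
    taken as the observation interval; eps_T: comparative uncertainty (Table 1)."""
    delta_z = value * rel_uncertainty      # step (2): absolute uncertainty
    eps_exp = delta_z / std_uncertainty    # step (3): achieved comparative uncertainty
    return eps_exp, eps_exp / eps_T        # step (4): value and its ratio to the CoP limit

# toy usage with fictitious numbers
print(iacu(1.3806485e-23, 3.7e-7, 6.0e-29, 0.1331))
```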
We will apply IARU and IACU in analyzing the data measurement of the three fundamental physical constants h, k, NA.

4. Applications

4.1. Planck Constant h

Planck constant h is of great importance in modern physics. It is explained by the following reasons [12]:
(a) It defines the quanta (minimum amount) for the energy of light and therefore also the energies of electrons in atoms. The existence of a smallest unit of light energy is one of the foundations of quantum mechanics.
(b) It is a factor in the Uncertainty Principle, discovered by Werner Heisenberg in 1927;
(c) Planck constant has enabled the construction of the transistors, integrated circuits, and chips that have revolutionized our lives.
(d) For over a century, the kilogram has been defined by a physical artefact, but that could change in 2018 under a new proposal that would base it on the Planck constant.
Therefore, a huge amount of research has been dedicated to the measurement of the Planck constant [13]. The most comprehensive summary of data published in scientific journals in recent years on the standard uncertainty of the Planck constant and Boltzmann constant measurements is presented in [14].
The measurements, made during 2011-2018, were analyzed for this study. The data are summarized in Table 2 [15-27]. It has been demonstrated that two methods have the capability of realizing the kilogram according to its future definition with relative standard uncertainties of a few parts in 10^8: the Kibble balance (CoPSI ≡ LMTI) and the x-ray crystal density (XRCD) method (CoPSI ≡ LMTF).
Table 2. Planck constant determinations and relative and comparative uncertainties achieved
Following the method IARU, one can argue about the order of the desired value of the relative uncertainty (rmin)LMTI. For this purpose, we take into account the following data: (εmin)LMTI = 0.0245 (Table 1); an estimated observation interval of h is chosen as the difference between its values obtained from the experimental results of two projects: hmax = 6.62607040577·10-34 m²·kg·s-1 [22] and hmin = 6.626069216·10-34 m²·kg·s-1 [23]. In this case, the possible observed range Sh of h placement is equal to:
Sh = hmax − hmin = (6.62607040577 − 6.626069216)·10-34 ≈ 1.19·10-40 m²·kg·s-1    (18)
Then, the lowest possible absolute uncertainty for CoPSI LMТI equals
(Δmin)LMTI = (εmin)LMTI·Sh = 0.0245·1.19·10-40 ≈ 2.9·10-42 m²·kg·s-1    (19)
In this case, taking into account (19), the lowest relative uncertainty (rmin)LMTI for CoPSI ≡ LMTI is the following:
(rmin)LMTI = (Δmin)LMTI/((hmax + hmin)/2) ≈ 2.9·10-42/(6.626·10-34) ≈ 4.4·10-9    (20)
This value is of the same order as the recommendation of 1.0·10-8 mentioned in [27] and should be satisfactory to the existing mass-standards community. The capability to predict the attainable relative uncertainty of the Planck constant by means of the comparative uncertainty thus improves our fundamental comprehension of a complex phenomenon and allows this comprehension to be applied to the solution of specific problems.
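A quick numerical check of the reconstructed Eqs. (18)-(20), using the h values quoted above and εLMTI = 0.0245:
```python
h_max, h_min = 6.62607040577e-34, 6.626069216e-34  # J*s, values from [22] and [23]
S_h = h_max - h_min                                # Eq. (18): ~1.19e-40 J*s
delta_min = 0.0245 * S_h                           # Eq. (19): ~2.9e-42 J*s
r_min = delta_min / ((h_max + h_min) / 2)          # Eq. (20): ~4.4e-9
print(S_h, delta_min, r_min)
```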
It is obvious that an additional view of the problem will, most likely, help to understand the existing situation and to identify concrete ways to resolve it. Reducing the value of the comparative uncertainty of the Planck constant measurements obtained from different experiments toward the value of 0.0245 would serve as a convincing argument for the professionals involved in the evolution of the SI.
Following the method IARU, it is seen from the data given in Table 2 that there was a dramatic improvement in the accuracy of the measurement of the Planck constant during the last decade. This holds true when judged by the calculated relative uncertainty. Judging the data by the comparative uncertainty, following IACU, one can see that the measurement accuracy has also changed significantly. At the same time, there is still a gap between the comparative uncertainty calculated according to the information-oriented approach, εLMTI = 0.0245, and the experimental magnitudes achieved when measuring h, for example, 0.0557 [27]. It must be mentioned that, most likely, the exactness of the Planck constant, like that of other fundamental physical constants, cannot be infinite. Therefore, the development of a larger number of designs and the improvement of the various experimental facilities for the measurement of the Planck constant are an absolute must [29]. The requirements of accuracy and methodological diversity, which are prerequisites for the redefinition of the unit of mass and for its realization in terms of a constant of a fundamental nature, must continue to be met.
The results of the determinations of the Planck constant (IACU), obtained from various measurements of recent years, are very consistent. This issue may become more significant for future comparisons, when the uncertainty of the implementation experiments becomes smaller. Current research and development, as well as the improvement of measurement methods, will probably reduce some components of uncertainty in the future and, therefore, will steadily increase the accuracy of measurement of the Planck constant.

4.2. Boltzmann Constant k

The analysis of the Boltzmann constant k plays an increasingly important role in physics today, to ensure a correct contribution to the next CODATA value and to the new definition of the kelvin. This task is more difficult and more crucial because the true target value is not known. This is the case for any methodology intended to look at the problem from a possibly different point of view and which may have different constraints and need special discussion.
A detailed analysis of the measurements of the Boltzmann constant made since 1973 is available in [27, 30]. The more recent of these measurements, made during 2011-2018 [27, 30-38], were analyzed for this study. The data are summarized in Table 3. The cited scientific articles in most cases belong to CoPSI ≡ LMTΘF [31-35, 37], and some to CoPSI ≡ LMTΘI [36, 38]. Although the authors of the research studies cited in these papers mentioned all the possible sources of uncertainty, the values of the absolute and relative uncertainties can still differ by more than a factor of two. A similar situation exists in the spread of the values of the comparative uncertainty.
Table 3. Boltzmann constant determinations and relative and comparative uncertainties achieved
Following the method IARU, one can argue about the order of the desired value of the relative uncertainty for CoPSI ≡ LMTΘF, which is usually used for measurements of the Boltzmann constant. An estimated observation interval of k is chosen as the difference between its values obtained from the experimental results of two projects: kmax = 1.3806517·10-23 m²·kg·s-2·K-1 [31] and kmin = 1.3806482·10-23 m²·kg·s-2·K-1 [36]. In this case, the possible observed range Sk of k placement is equal to:
Sk = kmax − kmin = (1.3806517 − 1.3806482)·10-23 = 3.5·10-29 m²·kg·s-2·K-1    (21)
For this purpose, one can arrive at the lowest comparative uncertainty εLMTΘF using the following conditions:
(z′ − β′)LMTΘF = (el·em·et·eθ·ef − 1)/2 − 5 = (7·3·9·9·3 − 1)/2 − 5 = 2,546    (22)
(z″ − β″)LMTΘF = (z′ − β′)²/μSI = 2,546²/38,265 ≈ 169    (23)
where "−1" corresponds to the case in which all the exponents of the base quantities in formula (1) are zero; 5 corresponds to the five base quantities L, M, T, Θ and F; the division by 2 indicates that there are direct and inverse quantities, for example, L1 is the length and L-1 is the running length. The object can be judged based on the knowledge of only one of its symmetrical parts, whereas the other parts that structurally duplicate it may be regarded as information-empty; therefore, the number of dimension options may be halved.
According to equations (22) and (23),
εLMTΘF = (z′ − β′)/μSI + (z″ − β″)/(z′ − β′) = 2,546/38,265 + 169/2,546 ≈ 0.1331    (24)
Then, the lowest possible absolute uncertainty for CoPSI ≡ LMTΘF is given by the following:
(Δmin)LMTΘF = εLMTΘF·Sk = 0.1331·3.5·10-29 ≈ 4.7·10-30 m²·kg·s-2·K-1    (25)
In this case, the lowest possible relative uncertainty (rmin)LMTΘF for CoPSI ≡ LMTΘF is the following:
(rmin)LMTΘF = (Δmin)LMTΘF/((kmax + kmin)/2) ≈ 4.7·10-30/(1.38·10-23) ≈ 3.4·10-7    (26)
This value agrees well with the recommendation (3.7·10-7) cited in [27, 30].
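A quick numerical check of the reconstructed Eqs. (21)-(26), using the k values quoted above:
```python
k_max, k_min = 1.3806517e-23, 1.3806482e-23  # J/K, values from [31] and [36]
S_k = k_max - k_min                          # Eq. (21): 3.5e-29 J/K
eps = 2 * 2546 / 38265                       # Eqs. (22)-(24): ~0.1331
delta_min = eps * S_k                        # Eq. (25): ~4.7e-30 J/K
r_min = delta_min / ((k_max + k_min) / 2)    # Eq. (26): ~3.4e-7
print(eps, delta_min, r_min)
```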
Guided by the method IACU, one can calculate the comparative uncertainty achieved in each experiment (Table 3). There is no significant difference between the comparative uncertainty calculated according to the information-oriented approach, εLMTΘF = 0.1331, and the experimental magnitudes achieved when measuring k, for example, 0.1460 [30]. This means that further improvements of the test benches could be recommended. That is why the information approach can be used as an additional tool for the new definition of the kelvin and for revising the International System of Units.

4.3. Avogadro Constant NA

The Avogadro constant, NA, is the physical constant that connects microscopic and macroscopic quantities, and is indispensable especially in the field of chemistry. In addition, the Avogadro constant is closely related to the fundamental physical constants, namely, the electron relative atomic mass, fine-structure constant, Rydberg constant, and Planck constant.
During the period 2011 to 2017, several scientific publications were analyzed, based on the available relative and comparative uncertainty values [15, 16, 18, 22, 24-26, 39-41], and the results are summarized in Table 4.
Table 4. Avogadro constant determinations and relative and comparative uncertainties achieved
The true and precise value of the Avogadro constant is not known at the moment. Therefore, the CODATA Task Group on Fundamental Constants (TGFC) periodically reviews and declares its recommended value of the Avogadro constant and its relative uncertainty.
Following the method IARU, one can argue about the order of the desired value of the relative uncertainty (rmin)LMTI for CoPSI ≡ LMTI. An estimated observation interval of the Avogadro number is chosen as the difference between its values obtained from the experimental results of two projects: NAmin = 6.0221405235·1023 mol-1 [26] and NAmax = 6.0221414834·1023 mol-1 [40]. Then, the possible observed range SN of NA variations is given by the following:
SN = NAmax − NAmin = (6.0221414834 − 6.0221405235)·1023 ≈ 9.6·1016 mol-1    (27)
The author's choice of (NAmax − NAmin) may seem subjective and arbitrary. However, we need to emphasize that only in the presence of the results of various experiments can one speak about the possible occurrence of a measured quantity within a certain range.
The following values are considered: (εmin)LMTI = 0.0245 (Table 1) and SN = 9.6·1016 mol-1 (27). Then, the lowest possible absolute uncertainty is given by the following:
(Δmin)LMTI = (εmin)LMTI·SN = 0.0245·9.6·1016 ≈ 2.4·1015 mol-1    (28)
In this case, the lowest possible relative uncertainty (rmin)LMTI is as follows:
(rmin)LMTI = (Δmin)LMTI/((NAmax + NAmin)/2) ≈ 2.4·1015/(6.022·1023) ≈ 3.9·10-9    (29)
This value (29) agrees well with the recommendations mentioned in [41], 9.1·10-9, and can be particularly relevant in the run-up to the adoption of new definitions for SI units.
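A quick numerical check of the reconstructed Eqs. (27)-(29), using the NA values quoted above and εLMTI = 0.0245:
```python
NA_max, NA_min = 6.0221414834e23, 6.0221405235e23  # 1/mol, values from [40] and [26]
S_N = NA_max - NA_min                              # Eq. (27): ~9.6e16 1/mol
delta_min = 0.0245 * S_N                           # Eq. (28): ~2.4e15 1/mol
r_min = delta_min / ((NA_max + NA_min) / 2)        # Eq. (29): ~3.9e-9
print(S_N, delta_min, r_min)
```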
It seems that the theoretical limit of the absolute and relative uncertainties depends on an empirical value, that is, on the observed range of variations S. In other words, the results will be completely different if a larger interval of changes of the Avogadro number is considered. This is indeed so. For example, take NPmax = 7.15·1023 mol-1 [42]; Perrin's experiments belong to CoPSI ≡ LMTθ. Then, taking into account that NAmin = 6.0221405235·1023 mol-1 [26] and (εmin)LMTθ = 0.0446 (Table 1), the lowest possible absolute uncertainty (Δmin)PLMTθ and relative uncertainty (rmin)PLMTθ would be equal to the following:
SP = NPmax − NAmin = (7.15 − 6.0221405235)·1023 ≈ 1.13·1023 mol-1    (30)
(Δmin)PLMTθ = (εmin)LMTθ·SP = 0.0446·1.13·1023 ≈ 5.0·1021 mol-1    (31)
(rmin)PLMTθ = (Δmin)PLMTθ/((NPmax + NAmin)/2) ≈ 5.0·1021/(6.59·1023) ≈ 7.6·10-3    (32)
Thus, within the framework of the proposed information approach, and with the imperfect measurement equipment of 100 years ago, the achievable relative uncertainty is 7.6·10-3 (32), which is much larger than the 3.9·10-9 (29) that can be achieved with the accuracy of modern measuring instruments and the present knowledge about the true target magnitude of the Avogadro constant.
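The same arithmetic for the Perrin-era range of Eqs. (30)-(32), with εLMTθ = 0.0446, makes the contrast explicit:
```python
NP_max, NA_min = 7.15e23, 6.0221405235e23            # 1/mol, Perrin [42] vs. [26]
S_P = NP_max - NA_min                                # Eq. (30): ~1.13e23 1/mol
r_min_1909 = 0.0446 * S_P / ((NP_max + NA_min) / 2)  # Eqs. (31)-(32): ~7.6e-3
print(r_min_1909)
```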
According to the data given in Table 4, there was an impressive improvement in the accuracy of the measurement of the Avogadro constant during the last decade. This holds true when judged by the calculated relative uncertainty (IARU). Judging the data by the comparative uncertainty, following IACU, one can see that the measurement accuracy has not changed significantly. Unfortunately, there is a significant difference between the comparative uncertainty calculated according to the information-oriented approach, εLMTF = 0.0146, and the experimental magnitudes achieved when measuring NA, for example, 0.3033 [41]. The difference may be explained by the fact that experimenters take into account a very different number of quantities in comparison with the number formulated according to the information-oriented approach. This means that further improvements of the test benches could be recommended.
With all this, the ability to predict the relative uncertainty of the Avogadro number by using the comparative uncertainty allows for improving the fundamental comprehension of complex phenomena, as well as for applying this comprehension to solving specific problems.

5. Discussion and Conclusions

The proposed information approach makes it possible to calculate the absolute minimum uncertainty in the measurement of the investigated quantity of the phenomenon using formula (13). The calculation of the recommended relative uncertainty is a useful consequence of the formulated μ-hypothesis and is presented for application in calculating the relative uncertainty of measurement of various physical constants.
The information-oriented approach, in particular, IARU, makes it possible to calculate with high accuracy the relative uncertainty, which is in good agreement with the recommendations of CODATA. The principal difference of this method, in comparison with the existing statistical and expert methodology of CODATA, is the fact that the information method is theoretically justified.
Significant differences between the values of the comparative uncertainties achieved in the experiments and those calculated in accordance with the IACU can be explained as follows. The very concept of comparative uncertainty, within the framework of the information approach, assumes an equally probable account of various quantities, regardless of their specific choice by scientists when formulating a model for measuring a particular fundamental constant. Based on their experience, intuition and knowledge, researchers build a model containing a small number of quantities, which, in their opinion, reflects the fundamental essence of the process under investigation. In this case, many phenomena that are perhaps insignificant or secondary, and which are characterized by specific quantities, are not taken into account.
For example, when measuring the value of the Planck constant with the LNE Kibble balance (CoPSI ≡ LMTI), located inside an enclosure and shielded, the temperature (base quantity Θ) and humidity are controlled, and the air density (related to the base quantity F) is calculated [26]. Thus, the possible influence of temperature variations and of the use of another type of gas, for example, an inert gas, is neglected by the developers. In this case, we get a paradoxical situation. On the one hand, different groups of scientists dealing with the problem of measuring a certain fundamental constant and using the same measurement method "learn" from each other and improve the test stands to reduce the uncertainties known to them. This is clearly seen using the IARU method: when measuring h, k, and NA, all the comparative uncertainties are very consistent, especially for measurements made in recent years. However, ignoring the large number of secondary factors neglected by the experimenters leads to a significant variance in the comparative uncertainties calculated by the IACU method.
Although the goal of our work is to obtain a fundamental restriction on the measurement of fundamental physical constants, we can also ask whether it is possible to reach this limit with a physically well-formulated model. Since our estimate is obtained by optimization with respect to the achieved comparative uncertainty and the observation interval, it is clear that in a practical case the limit cannot be reached. This is due to the fact that there is an unavoidable uncertainty of the model, implied by the initial preferences of the researcher, based on his intuition, knowledge and experience, in the process of its formulation. The magnitude of this uncertainty is an indicator of how likely it is that the researcher's personal philosophical inclinations will affect the outcome of this process. When a person mentally builds a model, at each stage of its construction there is some probability that the model will not match the phenomenon with a high degree of accuracy.
Our framework is not limited to measuring the fundamental physical constants: all considerations within it are purely information-theoretic in nature and apply to any model of experimental physics and technology.
It would seem that the CODATA scientists have the analytical ability and extensive knowledge on their side. Why, then, do the actual results of the information method point in a different direction? The most important reason is that the analysis is based on factual data and on a theoretically grounded approach, rather than on a biased, statistical-expert treatment motivated by particular beliefs or preferences. Moreover, the facts necessary for such a scientific analysis have appeared only in recent years. Indeed, there was no precedent for an amateur suddenly presenting a theoretically grounded approach based on information theory; this happened for the first time in 2015.
An information-oriented approach leads us to the following conclusions. If the mathematics and physics that describe the surrounding reality are effective human creations, then we must take into account the relationship between human consciousness and reality. In addition, the ultimate limits of theoretical, computational, experimental and observational methods, even using the best computers and the most complex experiments, such as the Large Hadron Collider, are bounded by the μ-hypothesis, which is applicable to any human activity. Undoubtedly, the current unprecedented scientific and technological progress will continue. However, since a limit for this advance exists, the speed of discoveries will slow down. This remark is especially important for artificial intelligence, which seeks to create a truly superintelligent machine.

References

[1]  M.J.T. Milton, R. Davis, and N. Fletcher, “Towards a new SI: a review of progress made since 2011,” Metrologia, 51, 21-30, 2014.
[2]  I.M. Mills, P.J. Mohr, T.J. Quinn, B.N. Taylor, and E.R. Williams, “Adapting the international system of units to the twenty-first century,” Phil. Trans. R. Soc. A, 369, 3907-24, 2011.
[3]  P.J. Mohr, B.N. Taylor, and D.B. Newell, “CODATA recommended values of the basic physical constants: 2010,” Rev. Mod. Phys., 84, 1527-1605, 2012. http://sci-hub.tw/10.1103/RevModPhys.84.1527.
[4]  B. Dodson, “Quantum Physics and the Nature of Reality (QPNR) survey: 2011,” 2013. https://goo.gl/z6HCRQ.
[5]  M. Henrion and B. Fischhoff, “Assessing uncertainty in physical constants,” Am. J. of Phys., 54(9), 791-798, 1986. https://goo.gl/WFjryK.
[6]  B.M. Menin, “Information Measure Approach for Calculating Model Uncertainty of Physical Phenomena,” American Journal of Computational and Applied Mathematics, 7(1), 11-24, 2017. https://goo.gl/m3ukQi.
[7]  L.I. Sedov, Similarity and Dimensional Methods in Mechanic, CRC Press, Florida, 1993.
[8]  NIST Special Publication 330 (SP330), “The International System of Units (SI),” 2008. http://goo.gl/4mcVwX.
[9]  L. Yarin, The Pi-Theorem, Springer-Verlag, Berlin, 2012. https://goo.gl/dtNq3D.
[10]  L. Brillouin, Scientific uncertainty and information, Academic Press, New York, 1964.
[11]  V.A. Balalaev, V.A. Slayev, and A.I. Sinyakov, Potential accuracy of measurements: Scientific publication –Textbook / Edited by Slayev, V.А., ANO NPO Professional, St. Petersburg, 2005, In Russian. http://www.vniim.ru/files/PotTochIzm.pdf.
[12]  P.J. Mohr, B.N. Taylor, and D.B. Newell, “CODATA recommended values of the fundamental physical constants: 2006,” Rev. Modern Phys., 80, 1– 98, 2008. https://goo.gl/2MFJoV.
[13]  R. Steiner, “History and progress on accurate measurements of the Planck constant,” Rep. Prog. Phys., 76(1), 1-46, 2013. http://goo.gl/s1GomR.
[14]  P.J. Mohr, D.B. Newell, B.N. Taylor, and E. Tiesinga, “Data and analysis for the CODATA 2017 special fundamental constants adjustment,” Metrologia, vol. 55, pp. 125–146, 2018. https://ws680.nist.gov/publication/get_pdf.cfm?pub_id=923939.
[15]  B. Andreas et al, “Determination of the Avogadro constant by counting the atoms in a 28Si crystal,” Phys. Rev. Lett., 106 030801, 1-4, 2011.
[16]  B. Andreas et al, “Counting the atoms in a 28Si crystal for a new kilogram definition,” Metrologia, 48, 1-14, 2011. http://sci-hub.tw/10.1088/0026-1394/48/2/S01.
[17]  C.A. Sanchez, D.M. Wood, R.G. Green, J. O. Liard, and D. Inglis, “A determination of Planck’s constant using the NRC watt balance,” Metrologia, 51(2), 5-14, 2014. http://sci-hub.tw/10.1088/0026-1394/51/2/S5.
[18]  Y. Azuma et al, “Improved measurement results for the Avogadro constant using a 28Si-enriched crystal,” Metrologia, 52, 360–375, 2015.
[19]  S. Schlamminger et al, “A summary of the Planck constant measurements using a watt balance with a superconducting solenoid at NIST,” Metrologia, 52, 1–5, 2015. http://sci-hub.tw/10.1088/0026-1394/52/2/L5.
[20]  D. Haddad et al, “A precise instrument to determine the Planck constant, and the future kilogram,” Rev. Sci. Instrum., 87, 061301, 2016. http://sci-hub.tw/10.1063/1.4953825.
[21]  B.M. Wood, C.A. Sanchez, R.G. Green, J.O. Liard, “A summary of the Planck constant determinations using the NRC Kibble balance,” Metrologia, 54, 399–409, 2017. http://iopscience.iop.org/article/10.1088/1681-7575/aa70bf/pdf.
[22]  G. Bartl et al, “A new 28Si single crystal: counting the atoms for the new kilogram definition,” Metrologia, vol. 54(5), pp. 693-715, 2017. http://iopscience.iop.org/article/10.1088/1681-7575/aa7820/pdf.
[23]  Z. Li et al, “The first determination of the Planck constant with the joule balance NIM-2,” Metrologia, 54(5), 763-774, 2017. http://sci-hub.tw/10.1088/1681-7575/aa7a65.
[24]  D. Haddad et al, “Measurement of the Planck constant at the National Institute of Standards and Technology from 2015 to 2017,” Metrologia, 54, 633–641, 2017. http://iopscience.iop.org/article/10.1088/1681-7575/aa7bf2/pdf.
[25]  N. Kuramoto et al, “Determination of the Avogadro constant by the XRCD method using a 28Si-enriched sphere,” Metrologia, 54, 716-729, 2017. http://sci-hub.tw/10.1088/1681-7575/aa77d1.
[26]  M. Thomas et al, “A determination of the Planck constant using the LNE Kibble balance in air,” Metrologia, 54, 468–480, 2017. http://sci-hub.tw/10.1088/1681-7575/aa7882.
[27]  D.B. Newell et al, “The CODATA 2017 Values of h, e, k, and NA,” Metrologia, 54, 1-6, 2017. http://sci-hub.tw/10.1088/1681-7575/aa950a#.
[28]  A. Possolo et al, “Evaluation of the accuracy, consistency, and stability of measurements of the Planck constant used in the redefinition of the International system of units,” Metrologia, 55, 29–37, 2018. https://pdfs.semanticscholar.org/4515/25a1afe173535b59993d5ba787d1890be6c8.pdf.
[29]  A. Eichenberger, G. Geneve, and P. Gournay, “Determination of the Planck constant by means of a watt balance,” Eur. Phys. J. Special Topics, 172, 363–383, 2009. http://sci-hub.bz/10.1140/epjst/e2009-01061-3.
[30]  J. Fischer et al, “The Boltzmann project,” Metrologia, 55(2), 1-36, 2018. http://sci-hub.tw/10.1088/1681-7575/aaa790.
[31]  S.P. Benz et al, “An electronic measurement of the Boltzmann constant,” Metrologia, 48, 142-153, 2011. http://sci-hub.tw/10.1088/0026-1394/48/3/008.
[32]  L. Pitre et al, “Determination of the Boltzmann constant k from the speed of sound in helium gas at the triple point of water,” Metrologia, 52, 263– 273, 2015. http://sci-hub.tw/10.1088/0026-1394/52/5/S263.
[33]  R.M. Gavioso et al, “A determination of the molar gas constant R by acoustic thermometry in helium,” Metrologia, 52, 274–304, 2015. http://sci-hub.tw/10.1088/0026-1394/52/5/S274.
[34]  L. Pitre et al, “New measurement of the Boltzmann constant k by acoustic thermometry of helium-4 gas,” Metrologia, 54(6), 856-873, 2017. https://ws680.nist.gov/publication/get_pdf.cfm?pub_id=923465.
[35]  M. de Podesta et al, “Re-estimation of argon isotope ratios leading to a revised estimate of the Boltzmann constant,” Metrologia, vol. 54(5), pp. 683-692, 2017. http://sci-hub.tw/10.1088/1681-7575/aa7880.
[36]  C. Gaiser et al, “Final determination of the Boltzmann constant by dielectric-constant gas thermometry,” Metrologia, 54(3), 280-289, 2017. http://sci-hub.tw/10.1088/1681-7575/aa62e3.
[37]  X.J. Feng et al, “Determination of the Boltzmann constant with cylindrical acoustic gas thermometry: new and previous results combined,” Metrologia, 54(5), 748-762, 2017. http://sci-hub.tw/10.1088/1681-7575/aa7b4a.
[38]  Q. Jifeng et al, “An improved electronic determination of the Boltzmann constant by Johnson noise thermometry,” Metrologia, 54(4), 549-558, 2017. http://sci-hub.tw/10.1088/1681-7575/aa781e.
[39]  P. Mohr, D. Newell, and B. Taylor, “CODATA recommended values of the fundamental constants: 2014,” Rev. Mod. Phys., 88, 1-73, 2016. http://sci-hub.tw/10.1103/RevModPhys.88.035009.
[40]  D. Haddad et al, “A precise instrument to determine the Planck constant, and the future kilogram,” Rev. Sci. Instrum., 87(6), 1-14, 2016. https://aip.scitation.org/doi/pdf/10.1063/1.4953825.
[41]  B.M. Wood, C.A. Sanchez, R.G. Green, and J. O. Liard, “A summary of the Planck constant determinations using the NRC Kibble balance,” Metrologia, 54, 399–409, 2017. http://iopscience.iop.org/article/10.1088/1681-7575/aa70bf/pdf.
[42]  J.B. Perrin, “Brownian movement and molecular reality,” Trans. F. Soddy, London: Taylor and Francis 1910. This is a translation of an article appeared in Annales de Chimie et de Physique, 8me Series, 1909.