American Journal of Computational and Applied Mathematics

p-ISSN: 2165-8935    e-ISSN: 2165-8943

2018;  8(4): 70-79

doi:10.5923/j.ajcam.20180804.02

 

Novel Method for Calculating the Measurement Relative Uncertainty of the Fundamental Constants

Boris Menin

Mechanical & Refrigeration Consultation Expert, Beer-Sheba, Israel

Correspondence to: Boris Menin, Mechanical & Refrigeration Consultation Expert, Beer-Sheba, Israel.


Copyright © 2018 The Author(s). Published by Scientific & Academic Publishing.

This work is licensed under the Creative Commons Attribution International License (CC BY).
http://creativecommons.org/licenses/by/4.0/

Abstract

Heisenberg's 90-year-old tenet on uncertainties in physics asserts that, in nature, the accuracy with which the coordinates and momentum of any material object can be determined has a fundamental limit. Moreover, Planck's constant is vanishingly small with respect to macroscopic bodies, so this limit has no practical consequences for them. In this paper, the author proposes another, novel limit, based on the concept that every model contains a certain amount of information about the object under study and must therefore have an optimal number of selected quantities. The author demonstrates how, for routine measurements of fundamental physical constants, the proposed limit can be applied to estimate the permissible absolute and relative uncertainties of the quantity being measured. For this purpose, information theory is used to give a theoretical explanation and grounding of the experimental results that determine the precision of different fundamental constants. It is shown that this new fundamental limit, characterizing the discrepancy between a model and the observed object, cannot be overcome by any improvement in measuring instruments, mathematical methods or super-powerful computers.

Keywords: Computational modelling, Information theory, Measurement of fundamental physical constants, Theory of measurements, Theory of similarity

Cite this paper: Boris Menin, Novel Method for Calculating the Measurement Relative Uncertainty of the Fundamental Constants, American Journal of Computational and Applied Mathematics, Vol. 8 No. 4, 2018, pp. 70-79. doi: 10.5923/j.ajcam.20180804.02.

1. Basic Thesis

Modeling is an information process through which information about a state and the behaviour of an observed object is obtained from the developed model. This information is the main subject of interest in modeling theory [1].
Let a specific object under investigation be considered. The modeller, during a thought experiment (no distortion is introduced into the real system), chooses, according to his or her knowledge, intuition and experience, specific quantities that characterize the studied process. The choice of the set of quantities is constrained not only by the possible duration of the study and its permitted cost. The main problem of the modeling process is that the observer selects quantities from a vast but finite set of quantities that are defined within, for example, the International System of Units (SI). When modeling a physical phenomenon, one group of scientists may choose quantities that differ substantially from those chosen by another group, as happened, for example, during the study of electrons that behave like particles or waves. That is why SI can be characterized by an equally probable accounting of any quantity chosen by the modeler. The SI includes seven base quantities: L – length, M – mass, T – time, I – electric current, Θ – thermodynamic temperature, J – luminous intensity and F – amount of substance [2].
SI is accepted by consensus among scientists. Moreover, modeling of phenomena is impossible without SI, which is considered the basis of people's knowledge about the nature surrounding them. SI includes the base and the derived quantities used for describing different classes of phenomena (CoP). For example, in mechanics the basis used is {L – length, M – mass, T – time}, i.e. CoP_SI ≡ LMT.
It is known [3] that the dimension of any derived quantity q can be expressed as a unique product of the base quantities raised to certain exponents l, …, f, which can take only integer values and vary over specific ranges:
q ∝ L^l · M^m · T^t · I^i · Θ^θ · J^j · F^f              (1)
−3 ≤ l ≤ +3,  −1 ≤ m ≤ +1,  −4 ≤ t ≤ +4,  −2 ≤ i ≤ +2,  −4 ≤ θ ≤ +4,  −1 ≤ j ≤ +1,  −1 ≤ f ≤ +1              (2)
el = 7,  em = 3,  et = 9,  ei = 5,  eθ = 9,  ej = 3,  ef = 3              (3)
where ∝ means "corresponds to a dimension"; el, …, ef denote the numbers of possible choices of the exponent for each base quantity. For example, L⁻³ appears in the formula for density, and Θ⁴ in the Stefan-Boltzmann law.
Because SI is an Abelian finite group [4, 5] with the natural structure of a module over the ring of integers, the exponents of the base quantities in formula (1) take only integer values. Owing to this fact, and considering (1)-(3) and the π-theorem [6], the total number of possible dimensionless criteria µSI of SI can be calculated:
µSI = (Ψ − 1)/2 − ξ = (7·3·9·5·9·3·3 − 1)/2 − 7 = 38265              (4)
where Ψ = el·em·et·ei·eθ·ej·ef is the maximum number of distinct dimensions in SI; "−1" corresponds to the case when all the base quantity exponents in formula (1) are zero; ξ = 7 corresponds to the seven base quantities L, M, T, I, Θ, J and F; division by 2 indicates that there are direct and inverse quantities, e.g. L¹ is the length and L⁻¹ is the run length. The object can be judged based on knowing only one of its symmetrical parts, while others that structurally duplicate this part may be regarded as information-empty; therefore, the number of options of dimensions may be reduced by a factor of 2. µSI, called the group number, corresponds to the maximum amount of information contained in SI. Each quantity allows the researcher to obtain a certain amount of information about the studied object. The main definitions and estimates of the amount of information used in an experiment were clearly formulated by L. Brillouin [7].
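To make the counting in (4) concrete, the following short Python sketch (illustrative only; the variable names are the author's notation, not part of any library) reproduces the value of µSI under the exponent ranges assumed in (2)-(3):
# Sketch: counting the dimensionless criteria of SI according to (4),
# assuming the numbers of exponent choices from (2)-(3):
# 7 for L, 3 for M, 9 for T, 5 for I, 9 for Theta, 3 for J, 3 for F.
choices = {"L": 7, "M": 3, "T": 9, "I": 5, "Theta": 9, "J": 3, "F": 3}

psi = 1
for n in choices.values():
    psi *= n                 # Psi, the total number of distinct dimensions: 76545

xi = 7                       # the seven base quantities
mu_si = (psi - 1) // 2 - xi  # "-1" drops the all-zero-exponent case,
                             # "//2" merges direct and inverse quantities
print(mu_si)                 # 38265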
In this context, one should note that condition (1) is a very strong constraint. It is well known that not every physical system can be represented as an Abelian group, and representing experimental results by a formula in which the main quantity is expressed as a product of functions of the individual selected quantities has many limitations. However, in this study, condition (1) can be applied successfully to a system that does not exist in nature but is constructed by agreement, namely SI. In this system, the derived quantities are always represented as products of the base quantities raised to different powers.
In view of the above, it is possible to treat the choice of a quantity as a random process in which each particular quantity is equally probable. This approach completely ignores the human evaluation of information. In other words, a set of 100 notes played by chimpanzees carries exactly the same amount of information as the 100-note melody of the Andante of Mozart's Piano Concerto No. 21. It should be noted that the assumption of the equiprobable occurrence of quantities is justified by the purpose of the research – finding the minimum absolute uncertainty Δpmm of the researched quantity dictated by the level of detail of the observed object. Indeed, any other distribution yields less information, which leads to a larger uncertainty of the model in comparison with the uncertainty calculated under the equally probable accounting of quantities. Then, let there be a situation wherein all µSI quantities of SI can be taken into account, provided the choice of these quantities is considered, a priori, equally probable. In this case, µSI corresponds to a certain value of entropy, which may be calculated by the following formula [4, 7]:
H = kb · ln(µSI)              (5)
where H is the entropy of SI with its µSI equally probable accounted quantities, and kb is the Boltzmann constant.
When a researcher chooses the influencing factors (a conscious limitation of the number of quantities that describe an object, in comparison with the total number µSI), the entropy of the mathematical model changes a priori. The entropy change is generally measured as follows:
ΔH = Hpr − Hps = kb · ln(µpr) − kb · ln(µps)              (6)
where ΔH is the entropy difference between two cases, pr – "a priori" and ps - "a posteriori".
"The efficiency Q of the experimental observation method can be defined as the ratio of the information obtained to the entropy change accompanying the observation" [7]. During a thought experiment, no distortion is brought into the real system, that is why Q=1. Then one can write it according to (6) [7]:
ΔA = Q · ΔH = ΔH              (7)
where ΔA is the a priori information quantity pertaining to the observed object.
Using equations (5)-(7) and introducing the notation z' for the number of physical dimensional quantities in the selected CoP and β' for the number of base quantities in the selected CoP, one arrives at the following equation:
ΔA' = kb · ln[µSI/(z' − β')]              (8)
where ΔA' is the a priori amount of information pertaining to the observed object due to the choice of the CoP.
The value ΔA' is linked to Δpmm', the a priori absolute uncertainty of the model caused only by the choice of the CoP, and to S, the dimensionless interval of observation of the main researched dimensionless quantity u, through the following dependence [7]:
ΔA' = kb · ln(S/Δpmm')              (9)
Substitution of (8) into (9) gives the following dependence:
Δpmm' = S · (z' − β')/µSI              (10)
Following the same reasoning, it can be shown that Δpmm'', the a priori absolute uncertainty of a model of the observed object caused by the number of dimensionless criteria actually recorded in the model, takes the following form:
Δpmm'' = S · (z'' − β'')/(z' − β')              (11)
where Δpmm'' cannot be defined without declaring the chosen CoP (Δpmm'); z'' is the number of physical dimensional quantities recorded in the mathematical model and β'' is the number of base quantities recorded in the model.
According to the theorem in [8], the total amount of information can be separated into the information identifying the element of a partition plus the information identifying an element within a subset of the partition. Therefore, the total absolute uncertainty Δpmm in determining the dimensionless main researched quantity u is represented as the sum of two terms, the first of which corresponds to Δpmm' and the second to Δpmm'':
Δpmm = Δpmm' + Δpmm''              (12)
where Δpmm is caused only by the dimensionality of the physical-mathematical model (i.e., by the limited number of chosen quantities). Δpmm is a property of the model that reflects a certain number of characteristics of the researched phenomenon.
All the above derivations can be summarized in the form of the µSI hypothesis: in model formulation, let there be a system of base quantities with a total number of dimensional physical quantities Ψ, of which ξ are base quantities with independent dimensions. Within the framework of the chosen class of phenomena (z' is the total number of dimensional quantities and β' is the number of base quantities), there is a dimensionless main quantity u that varies within a given range of values S. Then the absolute uncertainty Δpmm in determining u (for a given number z'' of physical dimensional quantities recorded in a model, of which β'' are base quantities) can be determined from the relationship:
Δpmm/S = ε = (z' − β')/µSI + (z'' − β'')/(z' − β')              (13)
where ε = Δpmm/S is called the comparative uncertainty.
Equation (13) is surprisingly simple. Absolute and relative uncertainties are familiar to physicists, but comparative uncertainty is not, because it is seldom mentioned. Yet the comparative uncertainty is of great importance in the application of information theory to physics and the engineering sciences [7].
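As a numerical illustration of (13), a minimal Python sketch follows; µSI = 38265 is taken from (4), and the trial values of z' − β' and z'' − β'' are hypothetical:
# Sketch: comparative uncertainty epsilon = Delta_pmm / S from (13).
MU_SI = 38265  # from (4)

def comparative_uncertainty(z1_minus_beta1, z2_minus_beta2):
    """epsilon = (z' - beta')/mu_SI + (z'' - beta'')/(z' - beta')."""
    return z1_minus_beta1 / MU_SI + z2_minus_beta2 / z1_minus_beta1

# Hypothetical model: 279 dimensionless criteria in the chosen CoP,
# of which 2 are actually recorded in the model.
print(comparative_uncertainty(279, 2))   # ~0.0145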
The overall absolute uncertainty of the model, which also includes inaccurate input data, physical assumptions, the approximate solution of the integral-differential equations, etc., will be larger than Δpmm. Thus, Δpmm is the primary and smallest component of a possible mismatch between a real object and its modeling results.
In fact, equation (13) can be regarded as a conformity principle (uncertainty relation) for the process of model development. No model can produce results that contradict relation (13). That is, any change in the level of detail of the description of the observed object (z'' − β''; z' − β') causes a change in the minimum comparative uncertainty Δpmm/S of the model of a specific CoP and in the achievable accuracy of each main quantity characterizing the internal structure of the object. In other words, the conformity principle establishes the fundamental accuracy limit (for a given class of phenomena) of simultaneously defining a pair of quantities observed by a conscious researcher, namely the absolute uncertainty in the measurement of the investigated quantity and the interval of its changes.
Thus, it follows that the fuzziness (inaccurate representation) of the object in the eyes of the researcher depends both on the chosen class of phenomena and on the number of quantities taken into account by the conscious observer; the latter depends directly on the knowledge, life experience and intuition of the researcher. Objectively, the factors already stated above make it possible to consider the choice of a quantity as a random process with an equally probable account of each particular quantity.
The µSI hypothesis asserts that, in nature, there is a fundamental limit to the accuracy of measuring any process, which cannot be surpassed by any improvement of instruments, measurement methods or the model's computerization. The value of this limit is much higher and stricter than what the Heisenberg uncertainty relation provides.
It is to be noted that the relative and comparative uncertainties of the dimensional quantity U and the dimensionless quantity u are equal:
ε = Δu/S = (a·Δu)/(a·S) = ΔU/S*;   R = Δu/u = (a·Δu)/(a·u) = ΔU/U = r              (14)
where S, Δu are the dimensionless quantities, respectively, the range of variations and the total absolute uncertainty in determining the dimensionless quantity u; S*, ΔU are the dimensional quantities, respectively, the range of variations and the total absolute uncertainty in determining the dimensional quantity U; a is the dimensional scale parameter with the same dimension as that of U and S*; r is a relative uncertainty of the dimensional quantity U; R is a relative uncertainty of the dimensionless quantity u.
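The equality in (14) can be checked numerically: multiplying the dimensionless quantities by the dimensional scale parameter a changes neither the relative nor the comparative uncertainty. The values in the Python sketch below are purely illustrative:
# Sketch: invariance of relative (r, R) and comparative uncertainties
# under the dimensional scale parameter a of (14).
u, S, delta_u = 2.0, 10.0, 0.03          # illustrative dimensionless values
a = 1.3806e-23                           # an arbitrary dimensional scale factor

U, S_star, delta_U = a * u, a * S, a * delta_u   # dimensional counterparts

R, eps = delta_u / u, delta_u / S                 # dimensionless case
r, eps_star = delta_U / U, delta_U / S_star       # dimensional case

assert abs(R - r) < 1e-15 and abs(eps - eps_star) < 1e-15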
Similarity theory is used here for several reasons. When studying phenomena occurring in the world around us, it is advisable to consider not individual quantities but their combinations, or criteria, which have a certain physical meaning. The methods of similarity theory, which are based on the analysis of integral-differential equations and boundary conditions, allow these criteria to be identified. In addition, the transition from dimensional physical quantities to dimensionless ones reduces the number of counted quantities. A given value of a dimensionless criterion can be obtained by various combinations of the dimensional physical quantities included in the criterion. This means that when considering new problems one has to take into account not a single case but a number of events united by common properties. It is important to note that the universality of similarity transformations is determined by invariant relations that characterize the structure of all laws of nature, including the laws of relativistic nuclear physics. Moreover, from the viewpoint of its mathematical apparatus, dimensional analysis has a group structure, and the similarity criteria are invariants of groups. The concept of a group is a mathematical representation of the concept of symmetry, which is one of the most fundamental concepts of modern physics. That is why the conclusions and calculations made in accordance with the proposed method can be applied to all the dimensional fundamental physical constants.
Equating the derivative of Δpmm/S in (13) with respect to (z' − β') to zero gives the following condition for achieving the minimum comparative uncertainty for a particular CoP:
(z'' − β'') = (z' − β')²/µSI              (15)
By using (15), one can find the lowest achievable comparative uncertainties for different CoP_SI; moreover, the values of the comparative uncertainties and the numbers of the chosen variables are different for each CoP. For example, all measurements of the Avogadro number belong to CoP_SI ≡ LMTF. Considering the aforementioned explanations, as well as (3) and (15), the lowest comparative uncertainty εLMTF can be reached under the following conditions:
(z' − β')LMTF = (7·3·9·3 − 1)/2 − 4 = 279              (16)
(z'' − β'')LMTF = (z' − β')²LMTF/µSI = 279²/38265 ≈ 2              (17)
where "-1" corresponds to the case wherein all the base quantities exponents are zero in formula (1); 4 corresponds to the four base quantities L, M, T and F; division by 2 indicates that there are direct and inverse quantities, e.g. L1 is the length and L-1 is the run length. The object can be judged based on knowing only one of its symmetrical parts, while the other parts that structurally duplicate this part may be regarded as information-empty. Therefore, the number of options of dimensions may be reduced by 2 times.
According to (13),
εLMTF = (z' − β')LMTF/µSI + (z'' − β'')LMTF/(z' − β')LMTF = 2·279/38265 ≈ 0.0146              (18)
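The chain (16)-(18) can also be reproduced numerically. The Python sketch below follows the same counting rules as in (4), restricted to the four base quantities L, M, T and F; it is a check under those assumptions, not an independent derivation:
# Sketch: lowest comparative uncertainty for CoP_SI = LMTF, per (15)-(18),
# assuming mu_SI = 38265 and the exponent ranges of (2)-(3).
MU_SI = 38265
psi_lmtf = 7 * 3 * 9 * 3                        # exponent choices for L, M, T, F

z1_b1 = (psi_lmtf - 1) // 2 - 4                 # (16): z' - beta' = 279
z2_b2 = z1_b1 ** 2 / MU_SI                      # (17): z'' - beta'' ~ 2

eps_lmtf = z1_b1 / MU_SI + z2_b2 / z1_b1        # (18)
print(z1_b1, round(z2_b2), round(eps_lmtf, 4))  # 279 2 0.0146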
This information approach has already been applied to different engineering applications [4]. Several examples of analyzing measurements of the fundamental physical constants are presented below to convince readers that this metric is universal, and, surprisingly, the results are not coincidental, but trustworthy.

2. Applications

2.1. Avogadro Number NA

During the period 2001 to 2015, several scientific publications were analyzed on the basis of the available relative and comparative uncertainty values [9-16], and the results are summarized in Table 1. In order to apply the stated approach, an estimated observation interval of the Avogadro number is chosen as the difference between its values obtained from the experimental results of two projects: Namin = 6.0221339(27)·10²³ mol⁻¹ [9] (De Bievre et al., 2001) and Namax = 6.022140857(74)·10²³ mol⁻¹ [15].
Table 1. Avogadro number determinations and relative and comparative uncertainties achieved
Then the possible observed range SN of NA variations is given by the following:
SN = Namax − Namin = (6.022140857 − 6.0221339)·10²³ ≈ 6.9·10¹⁷ mol⁻¹              (19)
The author's choice of (Namax − Namin) may seem subjective and arbitrary. However, it should be emphasized that the standard deviation of a particular measurement cannot be chosen as the interval of changes of the measured variable, because of the subjectivity of the conscious observer, who may not have taken into account one or another uncertainty. Only when the results of various experiments are available can one speak about the possible occurrence of the measured quantity within a certain range.
The true and precise value of the Avogadro number is not known at the moment. Therefore, the CODATA Task Group on Fundamental Constants (TGFC) periodically reviews and declares its recommended value of the Avogadro number and its relative uncertainty.
Applying the present approach, one can argue about the order of the desired value of the relative uncertainty (rmin)LMTF for CoP_SI ≡ LMTF. For this purpose, the following values are considered: (εmin)LMTF = 0.0146 from (18) and SN = 6.9·10¹⁷ mol⁻¹ from (19). Then, the lowest possible absolute uncertainty is given by the following:
ΔLMTF = (εmin)LMTF · SN = 0.0146 · 6.9·10¹⁷ ≈ 1.0·10¹⁶ mol⁻¹              (20)
In this case, the lowest possible relative uncertainty (rmin)LMTF is as follows:
(rmin)LMTF = ΔLMTF/NA ≈ 1.0·10¹⁶/6.0221·10²³ ≈ 1.7·10⁻⁸              (21)
This value (21) agrees well with the recommendation mentioned in [16], 2·10⁻⁸, and can be particularly relevant in the run-up to the adoption of new definitions of the SI units.
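The numbers in (19)-(21) follow directly from the two experimental bounds quoted above; the Python sketch below simply repeats the arithmetic with the values of [9] and [15] as cited in the text:
# Sketch: observed range, lowest absolute and relative uncertainties of the
# Avogadro number for CoP_SI = LMTF, per (19)-(21).
NA_MIN = 6.0221339e23        # mol^-1, De Bievre et al. [9]
NA_MAX = 6.022140857e23      # mol^-1, CODATA 2014 [15]
EPS_LMTF = 0.0146            # from (18)

s_n = NA_MAX - NA_MIN                 # (19): ~6.9e17 mol^-1
delta_lmtf = EPS_LMTF * s_n           # (20): ~1.0e16 mol^-1
r_min = delta_lmtf / NA_MAX           # (21): ~1.7e-8
print(f"{s_n:.2e}  {delta_lmtf:.2e}  {r_min:.2e}")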
It may seem that the theoretical limit of uncertainty depends on an empirical value, i.e. on the observed range of variations S. In other words, the results would be completely different if a larger interval of changes of the Avogadro number were considered. That is true. However, if S is not declared, the information obtained in the measurement cannot be determined. Any specific measurement requires certain (finite) a priori information about the components of the measurement and the interval of observation of the measured quantity. These requirements are so universal that they act as a postulate of metrology [17]. This a priori range of changes depends on the knowledge of the researcher prior to undertaking the study. "If nothing is known about the system studied, then S is determined by the limits of the measuring devices used" [7]. An extended range of changes of the quantity under study S indicates an imperfection of the measuring devices, which leads to a large value of the relative uncertainty. The development of measuring technology, the increase in the accuracy of measuring instruments and the improvement of existing and newly created measurement methods together lead to an increase in knowledge of the object under study, and consequently the magnitude of the achievable relative uncertainty decreases. However, this process is not infinite and is limited by the conformity principle. It is important to mention that this conformity principle is not a shortcoming of the measurement equipment or engineering device, but of the way the human brain works. When predicting the behavior of any physical process, physicists are in fact predicting the perceivable output of instrumentation. It is true that, according to the µ-hypothesis, observation is not a measurement, but a process that creates a unique physical world with respect to each particular observer. In effect, this principle dictates the magnitude of the relative uncertainty achievable at the moment, taking into account the latest measurement results.
For example, in the case NPmax = 7.15·10²³ mol⁻¹ [18], Perrin's experiments belong to CoP_SI ≡ LMTθ. Then, taking into account that Namin = 6.0221339(27)·10²³ mol⁻¹ [9] and (εmin)LMTθ = 0.0446 [4], the lowest possible absolute uncertainty ΔPLMTθ and relative uncertainty rPLMTθ would be equal to the following:
(22)
(23)
(24)
Thus, within the framework of the proposed information approach, and with 100-year-old imperfect measurement equipment, the achievable relative uncertainty is 2.5·10⁻³ (24), which is much higher than the 1.7·10⁻⁸ of (21) that can be achieved with the accuracy of modern measuring instruments and the present knowledge about the true-target magnitude of the Avogadro constant.
With all this, the ability to predict the relative uncertainty of the Avogadro number by using the comparative uncertainty allows for improving the fundamental comprehension of complex phenomena, as well as for applying this comprehension to solving specific problems.

2.2. Boltzmann Constant kb

A detailed analysis of the measurements of the Boltzmann constant made since 1973 is available in [19]. The more recent of these measurements, made during 2015-2018 [19-25], were analyzed for this study. In order to apply the stated approach, an estimated observation interval of kb is chosen as the difference between its values obtained from the experimental results of two projects: kbmax = 1.3806513·10⁻²³ m²·kg·s⁻²·K⁻¹ [22] and kbmin = 1.380648428·10⁻²³ m²·kg·s⁻²·K⁻¹ [2]. In this case, the possible observed range Sk of kb variation is equal to:
Sk = kbmax − kbmin = (1.3806513 − 1.380648428)·10⁻²³ ≈ 2.9·10⁻²⁹ m²·kg·s⁻²·K⁻¹              (25)
The data are summarized in Table 2. Although the authors of the research studies cited in this paper mention all the possible sources of uncertainty, the values of the absolute and relative uncertainties still differ by more than a factor of ten, and a similar situation exists in the spread of the values of the comparative uncertainty.
Table 2. Boltzmann constant determinations and relative and comparative uncertainties achieved
One can argue about the order of the desired value of the relative uncertainty for CoP_SI ≡ LMTθF, which is usually used for measurements of the Boltzmann constant. For this purpose, taking into account (3), (4) and (15), one arrives at the lowest comparative uncertainty εLMTθF under the following conditions:
(z' − β')LMTθF = (7·3·9·9·3 − 1)/2 − 5 = 2546              (26)
(z'' − β'')LMTθF = (z' − β')²LMTθF/µSI = 2546²/38265 ≈ 169              (27)
where "-1" corresponds to the case whose all the base quantities exponents are zero in formula (1); 5 corresponds to the five base quantities and F; division by 2 indicates that there are direct and inverse quantities, e.g. L1 is the length and L-1 is the run length. The object can be judged based on the knowledge of only one of its symmetrical parts, while the other parts that structurally duplicate this part may be regarded as information-empty. Therefore, the number of options of dimensions may be reduced by 2 times.
According to (13), (26) and (27),
εLMTθF = 2·2546/38265 ≈ 0.1331              (28)
Then, the lowest possible absolute uncertainty for kb is given by the following:
ΔLMTθF = εLMTθF · Sk = 0.1331 · 2.9·10⁻²⁹ ≈ 3.8·10⁻³⁰ m²·kg·s⁻²·K⁻¹              (29)
In this case, the following is the lowest possible relative uncertainty rLMTθF for kb:
rLMTθF = ΔLMTθF/kb ≈ 3.8·10⁻³⁰/1.3806·10⁻²³ ≈ 2.8·10⁻⁷              (30)
This value agrees well with the recommendation (3.7·10⁻⁷) cited in [19, 25]. That is why the information approach can be used as an additional tool for the new definition of the kelvin and for revising the International System of Units.
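The same steps, repeated for CoP_SI ≡ LMTθF with the two bounds of kb quoted above, reproduce (25)-(30); the Python sketch below is again a check under the stated assumptions:
# Sketch: lowest comparative and relative uncertainties of the Boltzmann
# constant for CoP_SI = LMT-Theta-F, per (25)-(30); mu_SI = 38265 assumed.
MU_SI = 38265
KB_MAX = 1.3806513e-23       # m^2 kg s^-2 K^-1, Johnson-noise thermometry [22]
KB_MIN = 1.380648428e-23     # m^2 kg s^-2 K^-1, value quoted in the text

psi_lmtqf = 7 * 3 * 9 * 9 * 3                 # exponent choices for L, M, T, Theta, F
z1_b1 = (psi_lmtqf - 1) // 2 - 5              # (26): z' - beta' = 2546
z2_b2 = z1_b1 ** 2 / MU_SI                    # (27): z'' - beta'' ~ 169

eps = z1_b1 / MU_SI + z2_b2 / z1_b1           # (28): ~0.133
s_k = KB_MAX - KB_MIN                         # (25): ~2.9e-29
delta = eps * s_k                             # (29): ~3.8e-30
r_min = delta / KB_MIN                        # (30): ~2.8e-7
print(f"{eps:.4f}  {s_k:.2e}  {delta:.2e}  {r_min:.2e}")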

2.3. Summarized Data

Following an analogous procedure, the measurement results for the Planck constant, the fine-structure constant, the Rydberg constant, the Avogadro number, the proton mass, the proton magnetic moment (in nuclear magnetons) and the W-boson mass were analyzed, and the results are summarized in Table 3 [16, 25-32]. The discrepancy between the published and the calculated values of the relative uncertainty could be due to an insufficient volume of data, which the author hopes to overcome in future work. Another reason is the need to improve experimental test benches. One must hope for the best, namely the continuation of the financing of experimental research, since ideas for improving measurement methods never end.
Table 3. Fundamental physical constants: recommended and calculated relative uncertainties
All the calculated relative uncertainties are smaller than the published values, except for the Rydberg constant [27] and the proton mass [30], whose published values are smaller than those calculated according to the proposed method. On the one hand, if there is not complete agreement between the results of one's work and experiment, one should not be too discouraged, because the discrepancy may well be due to minor features that are not properly taken into account and that will be cleared up with further development of the theory [33]. On the other hand, the question of reliability is crucial, as the refinement of fundamental constants through pioneering methods is extremely vulnerable [34]. For example, in the case of the Rydberg constant, there was scatter in the data, although this was not reflected by an increase in uncertainty [34]. That is, there was a clear deterioration of the situation in a case where the result, by itself, is important. Moreover, the accuracy of most input data was determined not by statistical but by systematic uncertainties, whose evaluation is often the most important part of the experiment or calculation. In this case, pioneering research is affected by the lack of previous experience, even though the specialists are highly qualified to use the latest technologies. Thus, a paradoxical situation develops, wherein more inconsistencies produce more serious vulnerabilities in new measurement and computing technologies [34]. This situation calls for a new information approach, which can play a positive role in anticipating and adopting new definitions of units for the International System SI.
One needs to note the fundamental difference between the described method and the CODATA technique for determining the relative uncertainty of one fundamental physical constant or another. The CODATA technique, based on solid principles of probability and statistics, constructs tables of values that allow direct use of the relative uncertainty, using modern advanced statistical methods and powerful computers. This, in turn, allows the consistency of the input data and the output set of values to be checked. However, at every stage of data processing, one needs to use her or his intuition, knowledge and experience (one's personal philosophical leanings) [35]. In the framework of the presented approach, a theoretical and informational grounding and justification are carried out for calculating the relative uncertainty. Detailed data description and processing do not require considerable time. Thus, the µ-hypothesis is an exact mathematical and, thus, scientific concept.

3. Discussion and Conclusions

The proposed information approach has its own implications. Any physical process, from quantum mechanics to palpitation, can be viewed by the observer only through an idiosyncratic "lens." Its material is a combination not only of mathematical equations, but also of the researcher's desire, intuition, experience and knowledge. These, in turn, are framed by SI, which is chosen by the consensus of researchers. Thus, a sort of aberration, a distortion of reality, creeps into modeling prior to the formulation of any physical, or even mathematical, statement. The degree of distortion of the image in comparison with the actual process depends essentially on the chosen class of phenomena and on the number of the "quantities created by observation" [36].
The accuracy of a model of any physical phenomenon can no longer be considered limited only by the boundaries determined by the Heisenberg uncertainty relation. The "potential accuracy of the measurement" [17] is limited by the initially known, unrecoverable comparative uncertainty determined by the µ-hypothesis, which depends on the class of phenomena and on the number of quantities chosen by the strong-willed researcher. This is where equation (13) can be considered a kind of compromise solution between future possibilities, limitations in improving measuring devices, diversity in mathematical calculation methods, and the increasing power of computers.
By the unrecoverable uncertainty of the model, we mean the initial preferences of the researcher, based on his intuition, knowledge and experience, in the process of its formulation. The magnitude of this uncertainty is an indicator of how likely it is that one's personal philosophical leanings will affect the outcome of this process. Therefore, modeling, like any information process, looks like any similar process in nature: noisy, with random fluctuations, in our case an equiprobable choice of quantities that depends on the observer. When a person mentally builds a model, at each stage of its construction there is some probability that the model will not match the phenomenon to a high degree of accuracy.
The quality of a scientific hypothesis should be judged not only by its correspondence to empirical data, but also by its predictions. In this study, information theory was used to give a theoretical explanation and grounding of the experimental results that determine the precision of different fundamental constants. A focus on the real is what allows the information measure approach to explore new avenues in different physical theories and technologies. The approach proposed here can answer one fundamental question, namely how do we see, because it is based on a fundamental subject, the International System of Units. The information approach allows a meaningful picture of future results to be crafted, because it is based on the realities of the present. In this sense, when applying the results of precision research to the limitations that constrain modern physics, it is necessary to clearly understand the research framework and the way the original data can be modified [34]. This can be considered an additional reason for speedy implementation of the µSI hypothesis, the concept of SI and, in general, the information approach for analyzing existing experimental data on the measurement of fundamental physical constants. The experimental physics segment is expected to be the most rewarding application for the information method, thanks to a greater demand for high-accuracy measurements. The proposed information approach allows the absolute minimum uncertainty of the measurement of the investigated quantity of the phenomenon to be calculated using formula (13). Calculation of the recommended relative uncertainty is a useful consequence of the formulated µ-hypothesis and is presented for application in the calculation of the relative measurement uncertainty of different physical constants.
The main purpose of most measurement models is to make predictions verifying the true-target magnitude of the researched quantity. The quantities that need to be predicted are generally not experimentally observable before the prediction, since otherwise no prediction would be needed. Assessing the credibility of such extrapolative predictions is challenging. In validating CODATA's approach, the model outputs for observed quantities are constructed, using modern advanced statistical methods and powerful computers, to determine whether they are consistent. By itself, this consistency only ensures that the model can predict the measured physical constants under the conditions of the observations [37]. This limitation dramatically reduces the utility of the CODATA effort for decision making, because it implies nothing about predictions for scenarios outside the range of observations. The µ-hypothesis proposes and explores a predictive assessment process for the relative uncertainty that supports extrapolative predictions for models of measurement of the fundamental physical constants.
The findings of this study are applicable to all models in physics and engineering, including climate, heat- and mass-transfer, and theoretical and experimental physics systems, in which there is always a trade-off between a model's complexity and the accuracy required. On the other hand, the proposed method is not claimed to be universally applicable, because it does not answer the question of selecting the specific physical quantities that best represent the surrounding world. The information-oriented approach for estimating a model's uncertainty does not involve any spatio-temporal or causal relationship between the quantities involved; instead, it considers only the differences between their numbers. However, it can be firmly asserted that the findings presented here reveal, contrary to what is generally believed, that the precision of physics and engineering devices is fundamentally bounded by certain constraints and cannot be improved to an arbitrarily high degree of accuracy. The outcome of this study, which seems too good to be true, indeed turns out to be a real breakthrough.
It is now possible to design optimal models that use the required number of dimensional quantities corresponding to the selected system of base quantities (SBQ), chosen according to engineering and experimental physics considerations.
The theory of measurements and its concepts remain correct science today, in the twenty-first century, and will remain faithful forever (a paraphrase of Prof. L.B. Okun [38]). The use of the µSI hypothesis only limits the scope of measurement theory to uncertainties exceeding the uncertainty of the physical-mathematical model due to its finiteness. The key idea is that, although the basic principles of measurement remain valid, they need to be applied judiciously, depending on the stage of the model's computerization.
Though the summarized data and the explanations of Tables 1-3 appear to confirm the predictive power of the µ-hypothesis, the present author is skeptical of considering them "confirmation." In fact, the µ-hypothesis may be considered a Black Swan [39] among the existing theories related to checking the discrepancy between a model and the observed object, because none of the existing methods for validating and verifying the constructed model takes into account the smallest absolute uncertainty of the model's measured quantity, caused by the choice of the class of phenomena and the number of quantities created by observation.
“Our knowledge of the world begins not with matter but with perceptions” [40]. According to the µ-hypothesis, there are no physical quantities independent of the observer; instead, all physical quantities refer to the observer. This is motivated by the fact that, according to the information approach, different observers can take account of the same sequence of events differently. Therefore, each observer may be assumed to "dwell" in his own physical world, as determined by the context of his own observations.
Finally, because the values of the comparative uncertainties and the required numbers of chosen quantities are completely independent and different for each class of phenomena, the proposed approach can now, in principle, serve as an impartial metric for comparing different models that describe the same recognized object. In this way, the information measure approach will radically alter the present understanding of the modeling process. In conclusion, it must be said that, fortunately or unfortunately, one sees everything in the world around him or her through a haze of doubts and errors, excepting love and friendship. Without knowing about the µ-hypothesis, one would not come to this conclusion.

References

[1]  Kunes, J. (2012). Similarity and modelling in science and technology, Camb. Int. Sci. Pub. https://goo.gl/CnZfT1.
[2]  NIST Special Publication 330 (SP330) (2008). The International System of Units (SI), 2008. http://goo.gl/4mcVwX.
[3]  Sonin, A.A. (2001). The physical basis of dimensional analysis, 2nd ed. Department of Mechanical Engineering, MIT, Cambridge. goo.gl/2BaQM6.
[4]  Menin, B.M. (2017). Information Measure Approach for Calculating Model Uncertainty of Physical Phenomena. American Journal of Computational and Applied Mathematics, 7(1): 11-24. https://goo.gl/m3ukQi.
[5]  Laszlo, A. (1964). Systematization of dimensionless quantities by group theory. Int. J. Heat Mass Transfer, 7(4): 423-430. http://sci-hub.tw/10.1016/0017-9310(64)90134-6.
[6]  Yarin, L. (2012). The Pi-Theorem. Springer-Verlag. Berlin. https://goo.gl/dtNq3D.
[7]  Brillouin, L. (1964). Scientific uncertainty and information. Academic Press, New York.
[8]  Schroeder, M.J. (2004). An alternative to entropy in the measurement of information. Entropy, 6: 388-412. goo.gl/vg8fk5.
[9]  De Bievre, P. et al. (2001) A Reassessment of the Molar Volume of Silicon and of the Avogadro Constant. IEEE Transactions on instrumentation and measurement, 50(2): 593-597. https://goo.gl/uC284d.
[10]  Becker, P. et al. (2003). Determination of the Avogadro constant via the silicon route. Metrologia, 40(5): 271–287. http://sci-hub.tw/10.1088/0026-1394/40/5/010.
[11]  Mohr, P. J., Taylor, B. N. and Newell, D. B. (2006). CODATA recommended values of the fundamental physical constants: 2006. Rev. Modern Phys., 80: 1–98. https://physics.nist.gov/cuu/Constants/RevModPhys_80_000633acc.pdf.
[12]  International Avogadro project (2011). Phys. Rev. Lett., 106. https://www.bipm.org/en/bipm/mass/avogadro/.
[13]  Andreas, B. et al. (2011). Determination of the Avogadro Constant by Counting the Atoms in a 28Si Crystal. Phys. Rev. Lett., 106: 1-4. https://doi.org/10.1103/PhysRevLett.106.030801.
[14]  Mohr, P.J., Taylor, B.N. and Newell, D. B. (2012). CODATA recommended values of the Fundamental Physical Constants: 2010. Journal of Physical and Chemical Reference Data, 41(4), 043109. https://physics.nist.gov/cuu/pdf/JPCRD2010CODATA.pdf.
[15]  CODATA Recommended Values of the Fundamental Physical Constants: 2014. https://goo.gl/d1BrYL.
[16]  Azuma, Y. et al. (2015). Improved measurement results for the Avogadro constant using a 28Si-enriched crystal. Metrologia, 52(2): 360-375. https://goo.gl/PURKaG.
[17]  Balalaev, V.A., Slayev, V.A. and Sinyakov, A.I. (2005). Potential accuracy of measurements: Scientific publication Textbook / Edited by Slayev, V.А., ANO NPO Professional, St. Petersburg. In Russian. http://www.vniim.ru/files/PotTochIzm.pdf.
[18]  Perrin, J.B. (1909). Brownian Movement and Molecular Reality, Trans. F. Soddy, London: Taylor and Francis, 1910. This is a translation of an article appeared in Annales de Chimie et de Physique, 8me Series.
[19]  Fischer, J. et al. (2018). The Boltzmann project. Metrologia, 55(2): 1-36. http://sci-hub.tw/10.1088/1681-7575/aaa790.
[20]  Gavioso, R.M. et al. (2015). A determination of the molar gas constant R by acoustic thermometry in helium. Metrologia, 52(5): 274-304. http://sci-hub.tw/10.1088/0026-1394/52/5/S217.
[21]  Christof, G., Zandt, T. and Fellmuth, B. (2015). Dielectric-constant gas thermometry. Metrologia, 52: 217–226. http://sci-hub.tw/10.1088/0026-1394/52/5/S217#.
[22]  Qu, J., Benz, S.P., Pollarolo, A., Rogalla, H., Tew, W.L., White, R. and Zhou, K. (2015) Improved electronic Measurement of the Boltzmann constant by Johnson noise thermometry. Metrologia, 52: 242-256. https://goo.gl/0i1nYq.
[23]  Feng, X. J., Zhang, J.T., Lin, H., Gillis, K.A., Mehl, J.B., Moldover, M.R., Zhang K. and Duan, Y.N. (2017). Determination of the Boltzmann constant with cylindrical acoustic gas thermometry. Metrologia, 54: 748-762. http://sci-hub.tw/10.1088/1681-7575/aa7b4a#.
[24]  Pitre, L. et al. (2017). New measurement of the Boltzmann constant k by acoustic thermometry of helium-4 gas. Metrologia, 54, pp. 856-873. http://ws680.nist.gov/publication/get_pdf.cfm?pub_id=923465.
[25]  Newell, D.B. et al. (2017). The CODATA 2017 Values of h, e, k, and NA, Metrologia, 5(4): 1-6. http://sci-hub.tw/10.1088/1681-7575/aa950a#.
[26]  Mohr, P.J. et al. (2008). CODATA recommended values of the fundamental physical constants: 2006. Rev. Modern Phys., 80: 1-98.
[27]  Mohr, P.J. et al. (2016). CODATA Recommended Values of the Fundamental Physical Constants: 2014. Reviews of Modern Physics, 88. https://goo.gl/O0x4cv.
[28]  Wood, B.M., Sanchez, C.A., Green, R.G. and Liard, J.O. (2017). A summary of the Plank constant determinations using the NRC Kibble balance. Metrologia, 54: 399-409. http://sci-hub.tw/10.1088/1681-7575/aa70bf.
[29]  Kirakosyan, G. S. (2010). The correlation of the fine structure constant with the redistribution of intensities in interference of the circularly polarized Compton’s wave. Gen. Phys.: 1-7. http://n-t.ru/tp/ng/fs1.pdf.
[30]  Heiße, F. et al. (2017). High-Precision Measurement of the Proton’s Atomic Mass. Physical Review Letters, 119, 033001: 1-6. https://goo.gl/iRozRW.
[31]  Schneider, G. et al. (2017). Double-trap measurement of the proton magnetic moment at 0.3 parts per billion precision. Science, 358(6366): 1081-1084. http://sci-hub.tw/10.1126/science.aan0207.
[32]  ATLAS Collaboration (2018). Measurement of the W-boson mass in pp collisions at √ s = 7TeV with the ATLAS detector. Eur. Phys. J. C, 78(110): 1-61. https://link.springer.com/article/10.1140%2Fepjc%2Fs10052-017-5475-4.
[33]  Dirac, P.A.M. (1963). The Evolution of the Physicist’s Picture of Nature. Scientific American, 208(5): 45-53. http://sci-hub.tw/10.1038/scientificamerican0563-45.
[34]  Karshenboim, S.G. (2013). Progress in the accuracy of the fundamental physical constants: 2010 CODATA recommended values. UFN, 183(9): 935-962. https://goo.gl/rj8RJK.
[35]  Dodson, B. (2013). Quantum Physics and the Nature of Reality (QPNR) survey: 2011. https://goo.gl/z6HCRQ.
[36]  Bernardo, K. (2017). Making Sense of the Mental Universe. Philosophy and Cosmology, 19: 33-49. https://philarchive.org/archive/KASMSO.
[37]  Oliver, T. A., Terejanu, G., Simmons, C. S. and Moser, R. D. (2015). Validating predictions of unobserved quantities. Computer Methods in Applied Mechanics and Engineering, 283: 1310-1335. http://sci-hub.tw/10.1016/j.cma.2014.08.023.
[38]  Okun, L. B. (2008). Theory of relativity and the theorem of Pythagoras. UFN, 51(622): 1-38. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.419.2666&rep=rep1&type=pdf.
[39]  Taleb, N.N. (2007). The Black Swan. Random House Trade paperback, NY.
[40]  Linde, A. (2015). Universe, Life, Consciousness: 1-13. https://www.scienceandnonduality.com/universe-life-consciousness-by-andrei-linde/.