American Journal of Intelligent Systems
p-ISSN: 2165-8978 e-ISSN: 2165-8994
2018; 8(1): 6-11
doi:10.5923/j.ajis.20180801.02

Rohit Raturi
Enterprise Solutions, KPMG LLP, Montvale, USA
Correspondence to: Rohit Raturi, Enterprise Solutions, KPMG LLP, Montvale, USA.
Copyright © 2018 The Author(s). Published by Scientific & Academic Publishing.
This work is licensed under the Creative Commons Attribution International License (CC BY).
http://creativecommons.org/licenses/by/4.0/

In this article we study the function interpolation problem from the point of view of interpolating polynomials and artificial neural networks. Function interpolation plays a very important role in many areas of experimental and theoretical science. The usual means of function interpolation are interpolating polynomials (Lagrange, Newton, splines, Bézier, etc.). Here we show that a specific strategy of function interpolation realized by means of artificial neural networks is much more efficient than, e.g., the Lagrange interpolating polynomial.
Keywords: Function interpolation, Approximation, Artificial intelligence, Input layer, Output layer, Hidden layer, Optimization, Weight, Bias, Machine learning
Cite this paper: Rohit Raturi, Large Data Analysis via Interpolation of Functions: Interpolating Polynomials vs Artificial Neural Networks, American Journal of Intelligent Systems, Vol. 8 No. 1, 2018, pp. 6-11. doi: 10.5923/j.ajis.20180801.02.
Consider a finite data set

$$\{(x_i, y_i) \,:\, i \in I\},$$

where $I$ is a finite set of indexes. The problem is to construct a function $f$ such that

$$f(x_i) = y_i, \quad i \in I.$$

Here $[a, b]$ is the given interval of the variable $x$ on which the interpolation is required. Naturally, $x_i \in [a, b]$ for all $i \in I$. Depending on the specific problem under consideration, $[a, b]$ can be bounded or unbounded.

Applications of function interpolation range from the numerical solution of ordinary and partial differential equations to the solution of best-fitting problems for finite sets of data. Usually, real-life observations or measurements at discrete time-instants $t_i$ are gathered in a data set like $\{(t_i, y_i) \,:\, i \in I\}$.
For example, in astronomical observations of planetary motion, the $t_i$ are the time-instants at which the coordinate $y_i$ of the center of mass of a planet is measured. In a bio-laboratory, $y_i$ may be the state of a mouse at time-instant $t_i$. The radiation intensity $y_i$ is measured at discrete points $t_i$. Finally, in particle-tracing experiments, $y_i$ may be the coordinate of a particle observed at the time-instant $t_i$. In all these examples, it is a very important and challenging problem to know the measured quantity at all (even non-measured) time-instants. This problem is usually solved by the methods of function interpolation theory [1-3].
There are numerous interpolation techniques: polynomial interpolation, interpolation by series expansions (when the node set is infinite), Hermite interpolation (when both the values of a function and its derivatives are given), wavelets, Gaussian processes (when the interpolating data contain some noise), etc. The choice of a specific interpolating method depends on such criteria as interpolation accuracy, low-cost computer realization, the ability to cover sparse data sets, etc.

Large and sparse data sets require the most delicate analysis. For instance, in the case of large data sets, the interval $[a, b]$ must be split into a large number of sub-intervals where the data (observations, measurements, etc.) are easier to interpolate. Interpolating functions are then constructed in each sub-interval, and eventually the unknown function is represented as the totality of sub-interpolants. In other words, let $\{I_j\}_{j \in J}$ be a splitting of the set $I$, i.e.,

$$I = \bigcup_{j \in J} I_j,$$

and let $f_j$ be the corresponding interpolants, i.e.,

$$f_j(x_i) = y_i, \quad i \in I_j.$$

Then, the unknown function on the whole $[a, b]$ is defined as follows:

$$f(x) = f_j(x), \quad x \in [a_j, b_j], \quad j \in J,$$

where $[a_j, b_j]$ denotes the sub-interval containing the nodes $\{x_i : i \in I_j\}$. Here $J$ is a finite set of indexes. The choice of $f_j$ is conditioned by the choice of the splitting $\{I_j\}_{j \in J}$. If $I_j$ contains only several points, then simple interpolation tools such as piecewise constant or linear interpolation can be used efficiently. A schematic representation of the above is shown in Figure 1.
The simplest situation arises when the data set consists of only two couples, $(x_0, y_0)$ and $(x_1, y_1)$. In such cases, the interpolating function is given as follows:

$$f(x) = y_0 + \frac{y_1 - y_0}{x_1 - x_0}\,(x - x_0).$$

Sometimes, piecewise constant (linear) interpolation is combined with the splitting method above to construct a linear interpolating function within each interval of the splitting:

$$f_j(x) = y_j + \frac{y_{j+1} - y_j}{x_{j+1} - x_j}\,(x - x_j), \quad x \in [x_j, x_{j+1}].$$
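As a minimal illustration (not code from the paper), the splitting method with linear sub-interpolants takes only a few lines of Python; the nodes and data below are assumptions for the example, and numpy.interp implements the same piecewise linear rule:

```python
import numpy as np

# Sample data: measurements y_i at nodes x_0 < x_1 < ... < x_n on [a, b].
x_nodes = np.linspace(-1.0, 1.0, 11)
y_nodes = np.abs(x_nodes)                      # illustrative data y_i = |x_i|

def piecewise_linear(x, xs, ys):
    """Evaluate the linear sub-interpolant f_j on each [x_j, x_{j+1}]."""
    x = np.asarray(x, dtype=float)
    j = np.clip(np.searchsorted(xs, x) - 1, 0, len(xs) - 2)
    slope = (ys[j + 1] - ys[j]) / (xs[j + 1] - xs[j])
    return ys[j] + slope * (x - xs[j])

x_eval = np.linspace(-1.0, 1.0, 201)
f_manual = piecewise_linear(x_eval, x_nodes, y_nodes)
f_numpy = np.interp(x_eval, x_nodes, y_nodes)  # built-in, same rule
assert np.allclose(f_manual, f_numpy)
```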
In the case of regularly distributed points, polynomial interpolation is usually used. The most common polynomial interpolating functions are:

1. the Lagrange interpolating polynomial (see the sketch after this list):

$$L(x) = \sum_{i=0}^{n} y_i\,\ell_i(x), \qquad \ell_i(x) = \prod_{\substack{j=0 \\ j \neq i}}^{n} \frac{x - x_j}{x_i - x_j};$$

2. the Newton interpolating polynomial:

$$N(x) = \sum_{i=0}^{n} [y_0, \dots, y_i] \prod_{j=0}^{i-1} (x - x_j),$$

where $[y_0, \dots, y_i]$ is the $i$th divided difference;

3. the Chebyshev interpolating polynomials of the first and second kinds:

$$T_n(\cos\theta) = \cos(n\theta), \qquad U_n(\cos\theta) = \frac{\sin\big((n+1)\theta\big)}{\sin\theta}.$$
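For concreteness, here is a direct Python sketch of the Lagrange form above; the nodes and the sampled function are illustrative assumptions, not data from the paper.

```python
import numpy as np

def lagrange_interpolate(x, x_nodes, y_nodes):
    """Evaluate L(x) = sum_i y_i * l_i(x), with the basis
    l_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)."""
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for i, (xi, yi) in enumerate(zip(x_nodes, y_nodes)):
        basis = np.ones_like(x)
        for j, xj in enumerate(x_nodes):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total

x_nodes = np.linspace(-1.0, 1.0, 7)
y_nodes = np.cos(np.pi * x_nodes)              # assumed sample function
x_eval = np.linspace(-1.0, 1.0, 101)
L = lagrange_interpolate(x_eval, x_nodes, y_nodes)
# An interpolant must reproduce the data exactly at the nodes:
assert np.allclose(lagrange_interpolate(x_nodes, x_nodes, y_nodes), y_nodes)
```

This direct evaluation costs $O(n^2)$ per point; for many evaluation points, the barycentric form of the same polynomial is the standard, more stable choice.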
In the case of very dense data sets, spline interpolation is preferred to polynomial interpolation. As is well known, splines are defined piecewise in terms of polynomials of the required order:

$$S(x) = S_j(x), \quad x \in [x_j, x_{j+1}], \quad j = 0, 1, \dots, n - 1,$$

where each $S_j$ is a polynomial of the required order. Thus, instead of piecewise constant (linear) interpolating functions, splines provide polynomial interpolation within each sub-interval of the splitting. In the case of periodic or almost periodic data, such as tidal or climate-change observations, trigonometric polynomials are used:

$$T(x) = a_0 + \sum_{k=1}^{n} \big(a_k \cos(kx) + b_k \sin(kx)\big).$$
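Cubic splines of exactly this piecewise kind are available off the shelf; here is a minimal sketch using SciPy's CubicSpline, where the sampled signal and grid are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# A dense sample of a smooth signal on [0, 2*pi].
x_nodes = np.linspace(0.0, 2.0 * np.pi, 25)
y_nodes = np.sin(x_nodes)

# Piecewise cubic S_j on each [x_j, x_{j+1}], C^2-continuous at the nodes.
spline = CubicSpline(x_nodes, y_nodes)

x_eval = np.linspace(0.0, 2.0 * np.pi, 500)
max_err = np.max(np.abs(spline(x_eval) - np.sin(x_eval)))
print(f"max spline error: {max_err:.2e}")      # small on a dense grid
```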
The rectangular function is defined as

$$\operatorname{rect}(t) = \begin{cases} 1, & |t| < \tfrac{1}{2}, \\ \tfrac{1}{2}, & |t| = \tfrac{1}{2}, \\ 0, & |t| > \tfrac{1}{2}. \end{cases}$$

The rectangular function has many applications in electrical engineering and signal processing. Besides, it serves as an activation function for many advanced neural networks. The rect function can also be expressed in terms of the Heaviside function $H$ as follows:

$$\operatorname{rect}(t) = H\left(t + \tfrac{1}{2}\right) - H\left(t - \tfrac{1}{2}\right),$$

or

$$\operatorname{rect}(t) = H\left(\tfrac{1}{2} - |t|\right).$$

Recall that the Heaviside function is defined as follows (the half-maximum convention $H(0) = \tfrac{1}{2}$ keeps the two expressions above consistent at $|t| = \tfrac{1}{2}$):

$$H(t) = \begin{cases} 0, & t < 0, \\ \tfrac{1}{2}, & t = 0, \\ 1, & t > 0. \end{cases}$$
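These definitions translate directly into a few lines of NumPy; the sketch below (illustrative, not from the paper) checks that the two Heaviside expressions for rect agree on a grid:

```python
import numpy as np

def heaviside(t):
    # H(t) with the half-maximum convention H(0) = 1/2
    return np.heaviside(t, 0.5)

def rect(t):
    # rect(t) = H(t + 1/2) - H(t - 1/2)
    return heaviside(t + 0.5) - heaviside(t - 0.5)

t = np.linspace(-1.0, 1.0, 201)
# The two Heaviside expressions for rect agree everywhere, including |t| = 1/2:
assert np.allclose(rect(t), heaviside(0.5 - np.abs(t)))
```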
Figures 2-4 show a numerical comparison of the interpolation of the rect function by means of the Lagrange interpolating polynomial and artificial neural networks. Figure 3 shows that when we consider 50 nodes within the interval, the quadratic error between the Lagrange interpolating polynomial and the artificial neural network interpolation decreases dramatically up to the 6th epoch. From the 7th epoch on, the quadratic error remains almost unaltered and its order stays fixed.

Figure 2. Approximation of the rect function with 50 nodes (upper) and 100 nodes (lower): the ANN algorithm requires 10 and 15 iterations, respectively

Figure 3. Quadratic error of the approximation of the rect function with 100 nodes

Figure 4. Function fit (upper), regression behaviour (middle), and network performance (lower) for the rect function with 100 nodes
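The paper does not reproduce its network code, so the following is only a minimal sketch of this kind of experiment, assuming a single-hidden-layer feed-forward regressor (scikit-learn's MLPRegressor), an interval of [-1, 1], and illustrative hyperparameters:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def rect(t):
    return np.heaviside(t + 0.5, 0.5) - np.heaviside(t - 0.5, 0.5)

# 100 interpolation nodes (the node count follows the figures);
# the interval [-1, 1] is an assumption.
x_nodes = np.linspace(-1.0, 1.0, 100).reshape(-1, 1)
y_nodes = rect(x_nodes).ravel()

# One hidden layer of 20 sigmoid units -- an assumed architecture.
net = MLPRegressor(hidden_layer_sizes=(20,), activation="logistic",
                   solver="lbfgs", max_iter=2000, random_state=0)
net.fit(x_nodes, y_nodes)

x_eval = np.linspace(-1.0, 1.0, 1000).reshape(-1, 1)
quad_err = np.mean((net.predict(x_eval) - rect(x_eval).ravel()) ** 2)
print(f"quadratic error of the ANN interpolant: {quad_err:.2e}")
```

Training such a network on the nodes and evaluating the mean squared (quadratic) error against the target function is the comparison that Figures 2-4 report epoch by epoch.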
The ramp function is defined as the convolution of the Heaviside function with itself:

$$R(t) = (H * H)(t) = \int_{-\infty}^{+\infty} H(\tau)\,H(t - \tau)\,d\tau,$$

where $*$ denotes the convolution operation. The ramp function has many applications in engineering (it is used in the so-called half-wave rectification, which converts alternating current into direct current by allowing only positive voltages), artificial neural networks (it serves as an activation function), finance, statistics, fluid mechanics, etc. According to the definition of the Heaviside function, the ramp function can also be represented as

$$R(t) = t\,H(t) = \max(t, 0).$$
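The identity $R(t) = t\,H(t) = (H * H)(t)$ can be checked numerically; a short sketch, where the grid and tolerance are illustrative choices:

```python
import numpy as np

def heaviside(t):
    return np.heaviside(t, 0.5)

def ramp(t):
    # R(t) = t * H(t), equivalently max(t, 0)
    return t * heaviside(t)

dt = 0.01
t = np.arange(-2.0, 2.0 + dt, dt)
h = heaviside(t)

# Discrete convolution as a Riemann sum; the index offset realigns the
# time axis, since np.convolve's k-th output lives at time 2*t[0] + k*dt.
offset = int(round(-t[0] / dt))
conv = np.convolve(h, h, mode="full")[offset : offset + t.size] * dt
assert np.max(np.abs(conv - ramp(t))) < 0.05
```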
The numerical comparison of the interpolation of the ramp function by means of the Lagrange interpolating polynomial and artificial neural networks is shown in Figures 5-7. Figure 6 shows that when we consider 50 nodes within the interval, the quadratic error between the Lagrange interpolating polynomial and the artificial neural network interpolation decreases dramatically up to the 5th epoch. From the 6th epoch on, the quadratic error remains almost unaltered and its order stays fixed.

Figure 5. Approximation of the ramp function with 50 nodes (upper) and 100 nodes (lower): the ANN algorithm requires 15 and 29 iterations, respectively

Figure 6. Quadratic error of the approximation of the ramp function with 100 nodes

Figure 7. Function fit (upper), regression behavior (middle), and network performance (lower) for the ramp function with 100 nodes
On the other hand, the triangular function is defined as the convolution of the rectangular function with itself, i.e.,

$$\operatorname{tri}(t) = (\operatorname{rect} * \operatorname{rect})(t).$$

Therefore,

$$\operatorname{tri}(t) = \begin{cases} 1 - |t|, & |t| \le 1, \\ 0, & |t| > 1. \end{cases}$$

Using the relation between the rectangular and Heaviside functions, it is possible to express the triangular function in terms of the Heaviside function. More specifically, since

$$\operatorname{rect}(t) = H\left(t + \tfrac{1}{2}\right) - H\left(t - \tfrac{1}{2}\right),$$

then, by virtue of the equality $(H * H)(t) = t\,H(t) = R(t)$,

$$\operatorname{tri}(t) = R(t + 1) - 2R(t) + R(t - 1) = (t + 1)H(t + 1) - 2t\,H(t) + (t - 1)H(t - 1).$$
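A quick numerical check of the Heaviside/ramp expression for tri (an illustrative sketch, not the paper's code):

```python
import numpy as np

def heaviside(t):
    return np.heaviside(t, 0.5)

def ramp(t):
    return t * heaviside(t)

def tri(t):
    # Closed form: tri(t) = max(1 - |t|, 0)
    return np.maximum(1.0 - np.abs(t), 0.0)

t = np.linspace(-2.0, 2.0, 401)
# Heaviside/ramp expression of tri, as derived above:
tri_heaviside = ramp(t + 1.0) - 2.0 * ramp(t) + ramp(t - 1.0)
assert np.allclose(tri_heaviside, tri(t))
```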
The numerical comparison of the interpolation of the tri function by means of the Lagrange interpolating polynomial and artificial neural networks is shown in Figures 8-10. Figure 9 shows that when we consider 50 nodes within the interval, the quadratic error between the Lagrange interpolating polynomial and the artificial neural network interpolation decreases dramatically up to the 10th epoch; from then on, the quadratic error remains almost unaltered and its order stays fixed.

Figure 8. Approximation of the triangular function with 50 nodes (upper) and 100 nodes (lower): the ANN algorithm requires 12 and 30 iterations, respectively

Figure 9. Quadratic error of the approximation of the triangular function with 100 nodes

Figure 10. Function fit (upper), regression behavior (middle), and network performance (lower) for the triangular function with 100 nodes