Electrical and Electronic Engineering

p-ISSN: 2162-9455    e-ISSN: 2162-8459

2012;  2(5): 277-283

doi: 10.5923/j.eee.20120205.06

Linear Invariant Statistics for Signal Parameter Estimation

Vyacheslav Latyshev

Moscow Aviation Institute (National Research University), Department of Radio Electronics of Aircraft, Moscow, 125993, Russia

Correspondence to: Vyacheslav Latyshev, Moscow Aviation Institute (National Research University), Department of Radio Electronics of Aircraft, Moscow, 125993, Russia.


Copyright © 2012 Scientific & Academic Publishing. All Rights Reserved.

Abstract

This paper presents an approach to obtaining invariant statistics for estimating signal parameters of interest independently of unwanted parameters in two-dimensional parameter problems. The proposed algorithm is based on excluding the Fisher information about the unwanted parameter while maintaining the information about the parameter of interest. Simultaneous reduction of the partial Fisher information matrices to diagonal form provides the key step for separating the signal space into two orthogonal subspaces containing the Fisher information about the different parameters. The proposed approach requires knowledge of the statistical distributions of the signals of interest. Application examples with the time delay and the Doppler shift as the parameters are provided to demonstrate the advantages of the theory.

Keywords: Fisher's Information Concentration, Independent Time Delay and Doppler Shift Estimation, Invariant Statistics

Cite this paper: Vyacheslav Latyshev, "Linear Invariant Statistics for Signal Parameter Estimation", Electrical and Electronic Engineering, Vol. 2 No. 5, 2012, pp. 277-283. doi: 10.5923/j.eee.20120205.06.

1. Introduction

In signal processing tasks the available data usually depend on several parameters at the same time. The most common are the time delay, the Doppler shift, the initial phase and the amplitude when a signal is reflected from or transmitted by a moving target. Depending on the problem to be solved, some of these parameters are of interest, while others are not significant. For example, the signal processing task of a GPS receiver can be divided into two fundamental parts: signal acquisition and signal tracking. Acquisition is by far the more computationally demanding task, requiring a search across a two-dimensional space of unknown time delay and Doppler shift for each GPS satellite to be acquired. Even after acquisition is completed, there are always some errors in the estimates of the Doppler shift and the time delay due to the coarseness of the search grid[1,2].
The processing algorithm for these tasks may be simplified if we can find statistics that depend on the time delay and are independent of the Doppler shift, and vice versa. If statistics do not depend on changes of the nuisance parameters, we call them invariant.
To determine the position of a signal on the time axis, the Doppler shift is not required. The latter can be regarded as an unwanted parameter, because its changes complicate the processing of the available data[1,2]. If only the time delay is to be estimated, statistics invariant to the Doppler shift are needed.
Conversely, to determine an object's speed it is important to estimate the Doppler shift. In this case, the errors in the range (time delay) estimate can in turn be considered as the unwanted or nuisance parameter, and it is desirable to have statistics invariant to the time delay estimation errors.
Statistics that are independent of the unwanted parameters can be obtained by averaging the conditional probability density of the data over the unwanted parameters, taking into account their a priori distribution[3]. However, the averaging procedure itself requires a large amount of computation, and the relevant a priori distributions of the unwanted parameters are usually not available.
In this paper we propose a method for finding invariant statistics in two-parameter problems. These statistics can be used to estimate the parameters independently of one another.
The method is based on an orthogonal decomposition of the observed data with the concentration of the Fisher information in the first terms of the series[4]. Such an orthogonal series can be used to improve the accuracy of maximum likelihood estimates for parameters that are nonlinearly related to the signals[5-7]. Besides, data dimension reduction with the concentration of the Fisher information in a small number of terms can dramatically reduce the complexity of Bayesian estimation[8,9]. Combining orthogonal decompositions that concentrate the Fisher information about the Doppler shift and about the time delay provides a means to obtain statistics that depend on one of the parameters and do not depend on the other. The ideas presented in[10,11] allow one to obtain invariant statistics for independent estimates of the time delay and the Doppler shift. A similar approach for independent estimates of the Doppler shift and the initial phase is presented in[12].
This article presents a method for obtaining invariant statistics in the two-parameter problem in which the parameters are the time delay and the Doppler shift. We then show how to use the singular value decomposition to obtain the required statistics. Finally, we illustrate the method with the results of numerical simulations.

2. Theoretical Background

Let x = s(α, β) + n denote the available data column-vector, where α is the parameter of interest and β is an unwanted parameter. The additive noise vector n is a zero-mean circular Gaussian vector with nonsingular covariance matrix K.
First of all, recall that the accuracy of an unbiased estimate of an arbitrary parameter α is determined by the Cramer–Rao inequality (CRI)[3]. In accordance with the CRI the variance is inversely proportional to the Fisher information about the parameter α:

var(α̂) ≥ 1/I(α),   I(α) = E[(∂ ln p(x|α)/∂α)²]   (1)

where E denotes an expectation and p(x|α) is the conditional probability density function of the data for a given value of the parameter with known a priori probability density. The larger the Fisher information, the higher the achievable accuracy.
The proposed method is based on two facts. First, we use an orthogonal decomposition of the observed data that concentrates the main part of the Fisher information in the first few terms of the series. This allows us to preserve the information about the estimated parameter in the required statistics; the corresponding theorem was proved in[4] and is presented in the appendix. Second, if statistics do not depend on a parameter, it is impossible to estimate this parameter from them: such statistics contain no Fisher information about the parameter and can be considered invariant to its changes. We therefore seek a vector of statistics suitable for estimating α and at the same time invariant to changes of β. To obtain it we have to suppress the Fisher information concerning the parameter β and keep the Fisher information concerning the parameter α.
Consider the matrices J_α and J_β built from the derivatives of the signal with respect to α and β; the superscript H denotes the complex conjugate (Hermitian) matrix transpose. The matrices J_α and J_β can be interpreted as the mean partial Fisher information matrices with respect to α and β, respectively. In the appendix the eigenvalues and eigenvectors of an analogous matrix in (34) are used to accumulate the Fisher information in the diagonal elements. The diagonal form of J_β reveals the actual dimension of the subspace of the initial signal space that contains the most part of the Fisher information about β. To provide statistics invariant to β we can obtain the projection onto this subspace and exclude it from the data. The same reasoning applies to J_α: its diagonal form reveals the actual dimension of the subspace containing the Fisher information about α. We have to keep this information, because it is connected with the accuracy of the estimation in accordance with the CRI[3]. To satisfy both requirements simultaneously, let us bring into use the following auxiliary matrix
(2)
(3)
where I is the N×N identity matrix, Λ is the diagonal matrix containing the ordered eigenvalues λ1 ≥ λ2 ≥ ... ≥ λN, and Φ is the corresponding matrix of eigenvectors. Note that Φ provides a diagonal form for the other partial Fisher information matrix too[13]:
(4)
So both partial Fisher information matrices are diagonalized by the same transformation. This gives the criterion for separating the original signal space into two orthogonal subspaces containing the Fisher information about α and β, respectively, and for excluding the Fisher information about the unwanted parameter. Suppose m1 is the actual dimension of the subspace which contains the most part of the Fisher information about α. Let T be the m1 × N matrix which consists of the first m1 rows of Φ^H. Using the linear transformation y = Tx we can obtain invariant statistics for the estimation of α. On the other hand, the matrix built from these rows defines the orthogonal projector onto the subspace invariant with respect to the parameter β.
Note that, according to[13], (3) can be satisfied if Λ and Φ are chosen as the matrices of the eigenvalues and eigenvectors of the corresponding generalized eigenvalue problem for the pair of partial Fisher information matrices. A small numerical sketch of this step is given below.
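As a minimal sketch (in Python with NumPy/SciPy, not part of the original derivation), the simultaneous diagonalization described above can be carried out through a generalized eigenvalue problem. The inputs J_alpha and J_beta stand for the mean partial Fisher information matrices; since equations (2)-(4) are not reproduced here, the convention that large generalized eigenvalues correspond to the parameter of interest and the threshold used to choose m1 are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh


def invariant_statistics_matrix(J_alpha, J_beta, lam_threshold=0.5):
    """Split the signal space by simultaneously diagonalizing the two mean
    partial Fisher information matrices (a sketch of the step around (2)-(4)).

    The generalized eigenproblem  J_alpha @ phi = lam * (J_alpha + J_beta) @ phi
    is solved; SciPy normalizes the eigenvectors so that Phi^H Q Phi = I with
    Q = J_alpha + J_beta, hence Phi^H J_alpha Phi and Phi^H J_beta Phi are both
    diagonal and their eigenvalues lie in [0, 1].
    """
    Q = J_alpha + J_beta                       # auxiliary matrix (assumed nonsingular here)
    lam, Phi = eigh(J_alpha, Q)                # generalized eigenvalue problem
    order = np.argsort(lam)[::-1]              # sort eigenvalues in descending order
    lam, Phi = lam[order], Phi[:, order]
    # directions with lam close to 1 carry information mostly about alpha and
    # almost none about beta; the 0.5 threshold is a heuristic choice
    m1 = int(np.count_nonzero(lam > lam_threshold))
    T = Phi[:, :m1].conj().T                   # first m1 rows of Phi^H
    return T, lam
```

The invariant statistics are then y = T @ x. When the auxiliary matrix is ill-conditioned this direct route fails, which is exactly the case handled by the SVD-based construction of the next section.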
Next, we consider this approach on specific examples.

3. Simulation

Let the N-dimensional data column-vector x be an additive mixture of the deterministic signal component and the distortion vector n:

x = s(τ, f) + n   (5)

where the signal s(τ, f) depends on the a priori unknown time delay τ and Doppler shift f. Assume both parameters are statistically mutually independent. The parameter vector (τ, f) has a bounded domain of variation, on which a two-dimensional probability density is specified.
Suppose it is necessary to estimate only the time delay τ from the observation. In this case, the Doppler shift f can be considered as a nuisance parameter. To find an independent estimate of τ, it is desirable to obtain statistics in the form of linear functions of the vector x that are not affected by f. It is also necessary to minimize the possible deterioration of the estimation accuracy when τ is estimated using these statistics.
We use the following two matrices A_τ and A_f, which are similar to the partial Fisher information matrices entering (2):
(6)
(7)
Here, s_τ and s_f are the derivative vectors of the signal; the differentiation is performed with respect to the parameter indicated by the subscript τ or f. The symbol E denotes an expectation over the vector random variable (τ, f).
In accordance with (2), let us introduce the auxiliary matrix
(8)
In this way, on the basis of the generalized eigenvectors of this matrix pair we can divide the observation space into two mutually orthogonal subspaces containing the Fisher information about the time delay τ and the Doppler shift f.
Finding the matrix C may be complicated if the auxiliary matrix is ill-conditioned. In this case we use the following approach. We divide the entire range of the Doppler shift into L discrete intervals. For each of them we have the derivative column vectors s_τ(f_l) and s_f(f_l), where f_l is the middle of the corresponding interval, l = 1, ..., L. Let us form the following matrices:
(9)
(10)
and combine the matrices (9) and (10), denoted D_τ and D_f below, into the matrix D = [D_τ D_f]. Then the auxiliary matrix (8) can be obtained as follows:
(11)
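As an illustration of how the matrices (9) and (10) and the combined matrix D might be assembled numerically, the sketch below forms the derivative columns over the Doppler grid by central finite differences. The callable signal(tau, f), the reference delay tau0, and the step sizes are assumptions introduced only for this sketch.

```python
import numpy as np


def derivative_matrices(signal, f_grid, tau0=0.0, d_tau=1e-3, d_f=1e-3):
    """Build the N x L matrices of derivative column vectors over a Doppler
    grid, as described around (9)-(11).  `signal(tau, f)` is assumed to
    return the sampled complex envelope as a 1-D NumPy array.
    """
    s_tau_cols, s_f_cols = [], []
    for f in f_grid:                           # f is the middle of a Doppler interval
        # central-difference derivative with respect to the time delay tau
        ds_dtau = (signal(tau0 + d_tau, f) - signal(tau0 - d_tau, f)) / (2 * d_tau)
        # central-difference derivative with respect to the Doppler shift f
        ds_df = (signal(tau0, f + d_f) - signal(tau0, f - d_f)) / (2 * d_f)
        s_tau_cols.append(ds_dtau)
        s_f_cols.append(ds_df)
    D_tau = np.column_stack(s_tau_cols)        # delay derivatives, cf. (9)
    D_f = np.column_stack(s_f_cols)            # Doppler derivatives, cf. (10)
    return D_tau, D_f, np.hstack((D_tau, D_f))
```

The combined matrix returned last is the one whose singular value decomposition is taken in (12).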
Instead of inverting a matrix, we compute the singular value decomposition of the matrix D[14]:
D = U Σ V^H   (12)
where Σ is the diagonal matrix containing the singular values of D, and the matrices U and V are composed of the left and right singular vectors, respectively. Suppose the rank of D equals r; the singular values are positive numbers ordered such that σ1 ≥ σ2 ≥ ... ≥ σr > 0.
From the analysis of the singular values we choose the number m to obtain a certain well-defined approximation of D:

D ≈ U_m Σ_m V_m^H   (13)

where U_m and V_m consist of the first m columns of U and V, corresponding to the m largest singular values. If we use the link of the SVD with eigenvalue decompositions[15]:
(14)
we obtain a projector which implements the first equation of (3):
(15)
To satisfy the second equation in (3), we transform the matrix A_f to
(16)
The singular value decomposition gives the orthonormal transformation that diagonalizes this matrix. That is,
(17)
The combination of (13) and (15) gives the overall transformation matrix:
(18)
This matrix diagonalizes the three symmetric matrices (6)-(8) simultaneously. Analysis of the eigenvalues of the transformed matrix reveals the dimension of the subspace containing all or the most part of the Fisher information with respect to the Doppler shift. Suppose it is equal to m_f. Then the subspace invariant to the Doppler shift has the dimension m - m_f. Let C1 denote the matrix consisting of the first m - m_f columns of C. The projector onto this subspace is
(19)
Similarly, let m_τ be the dimension of the subspace containing all or the most part of the Fisher information with respect to the time delay. Then the subspace invariant to the time delay has the dimension m - m_τ. Let C2 denote the matrix consisting of the last m - m_τ columns of C. The projector onto this subspace is
(20)
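The chain of steps (12)-(20) can be summarized in the following sketch. Since the equations themselves are not reproduced above, the normalization of the auxiliary matrix, the assumed form of A_f as a grid average of the Doppler-derivative outer products, and the thresholds used to choose m and m_f are illustrative assumptions rather than the author's exact choices.

```python
import numpy as np


def invariant_projectors(D_tau, D_f, m=None, energy_fraction=0.99):
    """Sketch of the SVD route (12)-(20): rank-m approximation of the combined
    derivative matrix, whitening of the auxiliary matrix built from it,
    diagonalization of the transformed Doppler-related matrix, and the
    projectors onto the Doppler-invariant and delay-invariant subspaces."""
    L = D_tau.shape[1]
    D = np.hstack((D_tau, D_f))                        # combined N x 2L matrix

    # (12)-(13): SVD of D and rank-m truncation
    U, sigma, _ = np.linalg.svd(D, full_matrices=False)
    if m is None:                                      # keep the dominant part of the spectrum
        energy = np.cumsum(sigma ** 2) / np.sum(sigma ** 2)
        m = int(np.searchsorted(energy, energy_fraction)) + 1
    U_m, sigma_m = U[:, :m], sigma[:m]

    # (15)-(16): whitening W such that W Q W^H = I for the assumed Q = D D^H / L
    W = np.sqrt(L) * (U_m / sigma_m).conj().T          # m x N

    # (17): diagonalize the transformed Doppler-related matrix (assumed form of (7))
    A_f = D_f @ D_f.conj().T / L
    A_f_t = W @ A_f @ W.conj().T
    lam_f, V2 = np.linalg.eigh((A_f_t + A_f_t.conj().T) / 2)   # ascending eigenvalues

    # (18): overall transformation; small lam_f -> Doppler-invariant directions
    C = W.conj().T @ V2                                # N x m

    m_f = int(np.count_nonzero(lam_f > 1e-3 * lam_f.max()))    # Doppler-informative directions
    C1, C2 = C[:, : m - m_f], C[:, m - m_f:]

    # (19)-(20): orthogonal projectors onto the two subspaces
    P_doppler_invariant = C1 @ np.linalg.pinv(C1)
    P_delay_invariant = C2 @ np.linalg.pinv(C2)
    return P_doppler_invariant, P_delay_invariant, lam_f
```

For an observation x, the Doppler-invariant statistics used below are obtained as P_doppler_invariant @ x, and similarly for the delay-invariant ones.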
To illustrate this approach consider the sampled chirp signal:
(21)
(22)
The complex envelope for this signal:
(23)
Thus the column vector s(τ, f) is a discrete version of this complex envelope.
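For numerical experiments, the sampled complex envelope s(τ, f) can be generated, for example, as follows; the number of samples, the normalized duration, and the sweep rate are placeholders, since the exact waveform is specified by (21)-(23).

```python
import numpy as np


def chirp_envelope(tau, f, n_samples=128, duration=1.0, sweep_rate=60.0):
    """Sampled complex envelope of a delayed, Doppler-shifted linear chirp
    (illustrative parameterization, not the exact signal of (21)-(23))."""
    t = np.linspace(0.0, duration, n_samples, endpoint=False)
    s = np.zeros(n_samples, dtype=complex)
    inside = (t >= tau) & (t < tau + duration)          # rectangular pulse of length `duration`
    ts = t[inside] - tau                                # time measured from the pulse start
    # linear frequency sweep plus a normalized Doppler shift f
    s[inside] = np.exp(1j * np.pi * sweep_rate * ts ** 2) * np.exp(2j * np.pi * f * t[inside])
    return s
```

Such a function can be passed directly to derivative_matrices defined earlier.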
To illustrate the influence of the parameters τ and f we can use the Mahalanobis distance between signals[13]:

d^2(s1, s2) = (s1 - s2)^H K^(-1) (s1 - s2)   (24)
If the covariance matrix is the identity matrix, the Mahalanobis distance reduces to the Euclidean distance.
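A direct transcription of the distance (24) for complex column vectors might look as follows; passing K = None corresponds to the identity-covariance (Euclidean) case.

```python
import numpy as np


def mahalanobis_distance(s1, s2, K=None):
    """Mahalanobis distance (24) between two complex signal vectors; with
    K = None it reduces to the ordinary Euclidean distance."""
    d = s1 - s2
    if K is None:
        return float(np.linalg.norm(d))
    return float(np.sqrt(np.real(d.conj() @ np.linalg.solve(K, d))))
```

Evaluating this distance on a grid of (τ, f) pairs against the reference signal s(0, 0) produces reliefs like those shown in the figures below.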
Figure 1 shows the Euclidean distance between the delayed signals with a Doppler shift and the signal with zero values of these parameters. The number of samples and the normalized duration of the signal are fixed; the normalized Doppler shift is a random variable whose behavior is governed by a uniform probability density inside a bounded range. The relief of the Euclidean distance has a narrow canyon located at a certain angle to the time axis. In what follows we consider the time delay as the parameter of interest and the Doppler shift as the unwanted parameter.
Figure 1. The Euclidean distance between the chirp signals as a function of the time delay and the Doppler shift
The transformation (19) gives statistics which should be invariant to the Doppler shift. Figure 2 shows the central part of the relief of the Mahalanobis distances for the projections of the signals onto the 23-dimensional subspace invariant to the Doppler shift:
(25)
where the projections are obtained by applying the projector (19) to the delayed and Doppler-shifted signals. The approximate matrix (13) has rank m.
Figure 2. The Mahalanobis distance between the projections of the chirp signals onto a subspace invariant to Doppler shift
In contrast to the previous figure, we see that the relief has a narrow canyon located strictly parallel to the Doppler shift axis. The resulting statistics are invariant to Doppler shift changes; hence the time delay estimate may be obtained regardless of the Doppler shift magnitude. If we calculate the amount of Fisher information in the projection onto the specified 23-dimensional subspace, we find that it contains 99% of the amount contained in the original observations. Therefore, estimation of the time delay using the projections is possible without loss of accuracy.
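The quoted 99% figure can be checked numerically under a white-noise simplification, in which the Fisher information about the time delay is proportional to the squared norm of the delay-derivative vector. The sketch below uses this simplification, which is an assumption of the sketch rather than the paper's exact computation.

```python
import numpy as np


def retained_fisher_fraction(P, D_tau):
    """Fraction of the (white-noise) Fisher information about the time delay
    that survives the projection P, averaged over the Doppler grid."""
    full = np.sum(np.abs(D_tau) ** 2)          # total information, up to a common factor
    kept = np.sum(np.abs(P @ D_tau) ** 2)      # information remaining after projection
    return float(kept / full)
```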
On the other hand, the time delay estimates are characterized by a certain estimation accuracy: the position of the signal on the time axis is determined with a small error. To eliminate the effect of this error on the accuracy of the Doppler shift measurement, it is desirable to use statistics that are invariant to small errors in the time delay determination. Figure 3 shows the Mahalanobis distances for the projections of the chirp signals onto the 1-dimensional subspace that is invariant to small errors in the time delay estimation.
Figure 3. The Mahalanobis distance between the projections of the chirp signals onto a subspace invariant to small errors in the time delay estimation
Here the canyon is strictly parallel to the time delay axis. Therefore the resulting statistics are invariant to errors in the time delay estimation; hence the Doppler shift estimate may be obtained regardless of small errors in the time delay estimates. The following two figures refer to the Gold code of 7 bits. Both figures show the relief of the distance between signals.
Figure 4. The Euclidean distance for Gold code of 7 bits
Figure 4 corresponds to the Euclidean distance. The parameters are the same as in the previous example. We see here a local dip near the true values of τ and f.
Figure 5 corresponds to the distance (25) for the projections of the signals onto the 4-dimensional subspace containing the Fisher information about the time delay τ only. Now the narrow canyon is parallel to the frequency axis, which implies that the true value of τ may be estimated independently of f. The amount of Fisher information in the projection onto the specified 4-dimensional subspace equals 86% of the amount contained in the observations. Therefore, estimation of the time delay using the projections is possible with some loss of accuracy.
Figure 5. The Mahalanobis distance between the projections of the Gold code onto a subspace invariant to Doppler shift
Figure 6. The Euclidean distance for the periodic sequences of the Gold code
Figure 7. The Mahalanobis distance for the projections of the periodic sequences of the Gold code onto a subspace invariant to Doppler shift
Figures 6 and 7 show similar functions for the periodic sequences of the Gold code. The signal consists of three consecutive periods. It is evident that the invariance property is preserved in this case.

4. Conclusions

We have presented an approach to obtaining statistics for signal parameter estimation that are invariant to the unwanted parameter in two-dimensional parameter problems. The approach is based on excluding the Fisher information concerning the unwanted parameter and preserving the Fisher information about the parameter of interest. The generalized eigenvectors of a matrix pair are used as a means of dividing the observation space into two mutually orthogonal subspaces containing the Fisher information about the time delay and the Doppler shift. Numerical results show that this approach is effective. To illustrate the procedure, a chirp signal and a Gold code are used, where the time delay is the parameter of interest and the Doppler shift is the unwanted parameter. The presented approach can be used both for estimation problems and for signal recognition.

Appendix

Let the observation space correspond to sets of N observations. Each set can be thought of as a point in an N-dimensional space and can be denoted by a column vector x = s(α) + n, where s(α) and n are the N-dimensional signal and noise vectors, respectively. The vector n is Gaussian with nonsingular covariance matrix K, and we assume that K is known. In general, the variable α appears in the signal in a nonlinear manner.
To obtain an m-dimensional vector y with m < N we use a linear transformation with an m × N transformation matrix W. We need a matrix that guarantees minimal loss of estimation accuracy of the parameter when the vector y is used. In addition to the foregoing requirement, we try to represent y in a new coordinate system in which the noise components are statistically independent random variables with identity covariance matrix. It is convenient to write the transformation matrix in the form W = A^H K^(-1/2), where K^(1/2) is the symmetric square root of K. Then we have
(26)
Taking into account the Gaussian distribution of the noise, from expression (1) we obtain the Fisher information about the parameter α in the observation x:
(27)
where the subscript N is used to distinguish the initial dimension of the observation from the new reduced dimension m, and ds(α)/dα denotes the column vector of derivatives.
The Fisher information in the vector y[4]:
(28)
The loss of the Fisher information:
(29)
The mean of the loss of the Fisher information:
(30)
where the expectation is taken over the random variable α. Thus we need the transformation matrix which provides the minimal value of the mean loss.
Theorem: the linear transformation with the matrix W provides the minimal mean loss of the Fisher information if the column vectors of A are the orthonormal eigenvectors of
(31)
corresponding to its m largest eigenvalues. At the same time
(32)
where λi are the eigenvalues of the matrix (31).
Proof: let us rewrite (30) in the following form:
(33)
The first term does not depend on the transformation matrix. Therefore the mean loss takes its minimal value if the subtrahend is maximal; denote this subtrahend by G. Interchanging averaging with summation and taking into account the equality:
(34)
we have:
(35)
The expression in brackets is a symmetric matrix:
(36)
In compliance with the theorem about eigenvalues and eigenvectors[14], the maximal value of G is attained if the columns of A are the orthonormal eigenvectors of the matrix (36) corresponding to its m largest eigenvalues, and
(37)
This equality implies, for the trace of the matrix (36):
(38)
On the other hand, the trace of the matrix (36) equals the sum of all its eigenvalues. This implies:
(39)
Note that the subspace spanned by the column vectors of A is the m-dimensional subspace of the observation space with the maximal Fisher information content about the parameter α among all other m-dimensional subspaces.
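A compact numerical reading of the theorem: given samples of the derivative vector over the parameter range, the matrix in (31) can be approximated by a sample average of whitened derivative outer products, and the compression matrix is built from its leading eigenvectors. The factorization W = A^H K^(-1/2) and the sample-average form of (31) used below are assumptions consistent with the description above, not a verbatim transcription of the paper's formulas.

```python
import numpy as np
from scipy.linalg import eigh


def compression_matrix(s_dot_samples, K, m):
    """Build the m x N compression matrix W = A^H K^(-1/2), where the columns
    of A are the eigenvectors, for the m largest eigenvalues, of the average
    of the whitened derivative outer products over the parameter (cf. (31)).

    `s_dot_samples` is assumed to be an N x L array whose columns are the
    derivative vectors ds/d(alpha) at L sample values of the parameter.
    """
    # inverse symmetric square root of the noise covariance K
    w, V = eigh(K)
    K_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.conj().T
    Z = K_inv_sqrt @ s_dot_samples             # whitened derivative vectors
    B = Z @ Z.conj().T / Z.shape[1]            # sample average over the parameter, cf. (31)
    lam, vecs = eigh((B + B.conj().T) / 2)     # eigenvalues in ascending order
    A = vecs[:, ::-1][:, :m]                   # eigenvectors of the m largest eigenvalues
    W = A.conj().T @ K_inv_sqrt                # m x N transformation matrix
    return W, lam[::-1]
```

In the spirit of (32), the eigenvalues discarded here measure the mean loss of Fisher information caused by the compression.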

References

[1]  Luke M. B. Winternitz, William A. Bamford, and Gregory W. Heckler, "A GPS Receiver for High-Altitude Satellite Navigation," IEEE Journal of Selected Topics in Signal Processing, vol. 3, no. 4, pp. 541-556, August 2009.
[2]  Letizia Lo Presti, Xuefen Zhu, Maurizio Fantino, and Paolo Mulassano, "GNSS Signal Acquisition in the Presence of Sign Transition," IEEE Journal of Selected Topics in Signal Processing, vol. 3, no. 4, pp. 557-569, August 2009.
[3]  H. L. Van Trees, “Detection, Estimation, and Modulation Theory,” John Wiley and Sons, Inc. New York, 1968, part 1.
[4]  V. Latyshev, Data compression for the problems of the parameter estimation, Journal of Communications Technology and Electronics , vol.33, №3, pp.635-637, 1988, Russia.
[5]  V. Latyshev, O. Nikerova. Improved accuracy of the time delay estimation. Journal of Communications Technology and Electronics , vol.52, №5, pp.562-573, 2007, Russia.
[6]  V. Latyshev, V. Data compression for the problems of the Doppler shift estimation. 10 International Conference «Digital signal processing and its application», Moscow, Russia, March, 2008, Proceedings, pp.225-228 (in Russian).
[7]  V. Latyshev, D. Dunin. Parameter estimation for a signal of known waveform using a dimension reduction of data. 11 International Conference «Digital signal processing and its application», Moscow, Russia, March 25-27, 2009, Proceedings, pp. 337-340 (in Russian).
[8]  V. Latyshev, D. Dunin. Bayesian estimation of the Doppler shift using of data dimension reduction. Journal of Communications Technology and Electronics, vol.55, №2, pp.193-202, 2010, Russia.
[9]  V. Latyshev, D. Dunin. Data dimension reduction with the Fisher’s information preservation in the signal parameter estimation problems. Information-measuring and Control systems, v.9, №8, pp.32-43, 2011, (in Russian).
[10]  V. Latyshev. Subspace-based estimation of time of arrival and Doppler shift for a signal of known waveform. International Journal of Microwave and Wireless Technologies, Volume 1, Special Issue 03, June 2009, pp 209-214, Published by Cambridge University Press 15 May 2009.
[11]  V. Latyshev. Linear invariant statistics for signal parameter estimation. International Radar Symposium (IRS2011) Leipzig, Germany, 2011, 7.09-9.11. Proceedings, 224 –229.
[12]  V. Latyshev, D. Dunin. Subspace-based estimation of Doppler shift and phase. International Radar Symposium. Hamburg, Germany, 2009, 8.09-11.09. Proceedings, 683 –686.
[13]  K. Fukunaga, “Introduction to Statistical Pattern Recognition,” Academic Press, San Diego, 1990.
[14]  G. Golub and C. F. Van Loan, Matrix Computations. Baltimore, MD: Johns Hopkins Univ. Press, 1984.
[15]  A. Van Der Veen, E. F. Deprettere, and A. L. Swindlehurst, "Subspace-Based Signal Analysis Using Singular Value Decomposition," Proceedings of the IEEE, vol. 81, no. 9, pp. 1277-1308, 1993.