
An Introduction to the Kalman Filter
by
Greg Welch¹ and Gary Bishop²
Department of Computer Science
University of North Carolina at Chapel Hill
Chapel Hill, NC 27599-3175
Abstract
In 1960, R.E. Kalman published his famous paper describing a recursive solution to the discrete-data linear filtering problem. Since that time, due in large part to advances in digital computing, the Kalman filter has been the subject of extensive research and application, particularly in the area of autonomous or assisted navigation.
The Kalman filter is a set of mathematical equations that provides an efficient computational (recursive) solution of the least-squares method. The filter is very powerful in several aspects: it supports estimations of past, present, and even future states, and it can do so even when the precise nature of the modeled system is unknown.
The purpose of this paper is to provide a practical introduction to the discrete Kalman filter. This introduction includes a description and some discussion of the basic discrete Kalman filter, a derivation, description and some discussion of the extended Kalman filter, and a relatively simple (tangible) example with real numbers and results.
2. gb@cs.unc.edu, www.cs.unc.edu/~gb
1 The Discrete Kalman Filter
In 1960, R.E. Kalman published his famous paper describing a recursive solution to the discrete-data linear filtering problem [Kalman60]. Since that time, due in large part to advances in digital computing, the Kalman filter has been the subject of extensive research and application, particularly in the area of autonomous or assisted navigation. A very "friendly" introduction to the general idea of the Kalman filter can be found in Chapter 1 of [Maybeck79], while a more complete introductory discussion can be found in [Sorenson70], which also contains some interesting historical narrative. More extensive references include [Gelb74], [Maybeck79], [Lewis86], [Brown92], and [Jacobs93].
The Process to be Estimated
The Kalman filter addresses the general problem of trying to estimate the state $x \in \Re^n$ of a discrete-time controlled process that is governed by the linear stochastic difference equation

$$x_{k+1} = A_k x_k + B u_k + w_k ,$$   (1.1)

with a measurement $z \in \Re^m$ that is

$$z_k = H_k x_k + v_k .$$   (1.2)

The random variables $w_k$ and $v_k$ represent the process and measurement noise (respectively). They are assumed to be independent (of each other), white, and with normal probability distributions

$$p(w) \sim N(0, Q) ,$$   (1.3)
$$p(v) \sim N(0, R) .$$   (1.4)

The $n \times n$ matrix $A$ in the difference equation (1.1) relates the state at time step $k$ to the state at step $k+1$, in the absence of either a driving function or process noise. The $n \times l$ matrix $B$ relates the control input $u \in \Re^l$ to the state $x$. The $m \times n$ matrix $H$ in the measurement equation (1.2) relates the state to the measurement $z_k$.
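To make these quantities concrete, here is a minimal sketch of one possible model of the form (1.1) and (1.2): a hypothetical object moving in one dimension with (nominally) constant velocity, where we measure position only. This example is not from the paper; all of the matrices and numeric values below are assumptions chosen purely for illustration.

```python
import numpy as np

# Hypothetical example: state x = [position, velocity],
# sampled every dt seconds; we measure position only.
dt = 0.1

A = np.array([[1.0, dt],      # position += velocity * dt
              [0.0, 1.0]])    # velocity carried forward    (n x n)
B = np.array([[0.0],
              [dt]])          # control input = acceleration (n x l)
H = np.array([[1.0, 0.0]])    # measurement picks off position (m x n)

Q = 0.01 * np.eye(2)          # process noise covariance, as in (1.3)
R = np.array([[0.1]])         # measurement noise covariance, as in (1.4)

# One step of the model, with sampled noise:
rng = np.random.default_rng(0)
x = np.array([[0.0], [1.0]])                            # true state
u = np.array([[0.0]])                                   # no control input
w = rng.multivariate_normal(np.zeros(2), Q).reshape(2, 1)
x_next = A @ x + B @ u + w                              # equation (1.1)
v = rng.multivariate_normal(np.zeros(1), R).reshape(1, 1)
z = H @ x_next + v                                      # equation (1.2)
```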
The Computational Origins of the Filter
We define $\hat{x}_k^- \in \Re^n$ (note the "super minus") to be our a priori state estimate at step $k$ given knowledge of the process prior to step $k$, and $\hat{x}_k \in \Re^n$ to be our a posteriori state estimate at step $k$ given measurement $z_k$. We can then define a priori and a posteriori estimate errors as

$$e_k^- \equiv x_k - \hat{x}_k^- , \quad \text{and} \quad e_k \equiv x_k - \hat{x}_k .$$

The a priori estimate error covariance is then

$$P_k^- = E[e_k^- e_k^{-T}] ,$$   (1.5)
and the a posteriori estimate error covariance is

$$P_k = E[e_k e_k^T] .$$   (1.6)
In deriving the equations for the Kalman filter, we begin with the goal of finding an equation that computes an a posteriori state estimate $\hat{x}_k$ as a linear combination of an a priori estimate $\hat{x}_k^-$ and a weighted difference between an actual measurement $z_k$ and a measurement prediction $H_k \hat{x}_k^-$, as shown below in (1.7). Some justification for (1.7) is given in "The Probabilistic Origins of the Filter" found below.

$$\hat{x}_k = \hat{x}_k^- + K (z_k - H_k \hat{x}_k^-)$$   (1.7)

The difference $(z_k - H_k \hat{x}_k^-)$ in (1.7) is called the measurement innovation, or the residual. The residual reflects the discrepancy between the predicted measurement $H_k \hat{x}_k^-$ and the actual measurement $z_k$. A residual of zero means that the two are in complete agreement.
The $n \times m$ matrix $K$ in (1.7) is chosen to be the gain or blending factor that minimizes the a posteriori error covariance (1.6). This minimization can be accomplished by first substituting (1.7) into the above definition for $e_k$, substituting that into (1.6), performing the indicated expectations, taking the derivative of the trace of the result with respect to $K$, setting that result equal to zero, and then solving for $K$. For more details see [Maybeck79], [Brown92], or [Jacobs93]. One form of the resulting $K$ that minimizes (1.6) is given by¹

$$K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1} = \frac{P_k^- H_k^T}{H_k P_k^- H_k^T + R_k} .$$   (1.8)

1. Equation (1.8) presents the Kalman gain in one popular form.
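As an informal check on this derivation, the scalar case can be minimized symbolically. The sketch below is an illustration not found in the paper: it substitutes (1.7) into (1.6) for scalar $P^-$, $H$, and $R$, differentiates with respect to $K$, and recovers the scalar form of (1.8).

```python
import sympy as sp

# Scalar case: substituting (1.7) into (1.6) gives
# P = (1 - K*H)^2 * P_minus + K^2 * R.
K, H, Pm, R = sp.symbols('K H P_minus R', positive=True)
P = (1 - K * H)**2 * Pm + K**2 * R

# Minimize: set dP/dK = 0 and solve for K.
K_opt = sp.solve(sp.diff(P, K), K)[0]
print(sp.simplify(K_opt))   # -> H*P_minus/(H**2*P_minus + R), i.e. scalar (1.8)
```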
Looking at (1.8) we see that as the measurement error covariance $R_k$ approaches zero, the gain $K$ weights the residual more heavily. Specifically,

$$\lim_{R_k \to 0} K_k = H_k^{-1} .$$

On the other hand, as the a priori estimate error covariance $P_k^-$ approaches zero, the gain $K$ weights the residual less heavily. Specifically,

$$\lim_{P_k^- \to 0} K_k = 0 .$$

Another way of thinking about the weighting by $K$ is that as the measurement error covariance $R_k$ approaches zero, the actual measurement $z_k$ is "trusted" more and more, while the predicted measurement $H_k \hat{x}_k^-$ is trusted less and less. On the other hand, as the a priori estimate error covariance $P_k^-$ approaches zero, the actual measurement $z_k$ is trusted less and less, while the predicted measurement $H_k \hat{x}_k^-$ is trusted more and more.
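These limits are easy to see numerically. The short sketch below (an illustration, not from the paper) evaluates the scalar gain $K = P^- H / (H^2 P^- + R)$ for shrinking $R$ and shrinking $P^-$:

```python
# Scalar form of (1.8); with H = 1 it reduces to K = P_minus / (P_minus + R).
def kalman_gain(P_minus, H, R):
    return P_minus * H / (H * P_minus * H + R)

H = 1.0
# As R -> 0, K -> 1/H: the measurement is trusted completely.
for R in (1.0, 0.01, 1e-6):
    print(f"R  = {R:<8g} K = {kalman_gain(1.0, H, R):.6f}")

# As P_minus -> 0, K -> 0: the prediction is trusted completely.
for P_minus in (1.0, 0.01, 1e-6):
    print(f"P- = {P_minus:<8g} K = {kalman_gain(P_minus, H, 1.0):.6f}")
```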
The Probabilistic Origins of the Filter
The justification for (1.7) is rooted in the probability of the a priori estimate $\hat{x}_k^-$ conditioned on all prior measurements $z_k$ (Bayes' rule). For now let it suffice to point out that the Kalman filter maintains the first two moments of the state distribution,

$$E[x_k] = \hat{x}_k ,$$
$$E[(x_k - \hat{x}_k)(x_k - \hat{x}_k)^T] = P_k .$$

The a posteriori state estimate (1.7) reflects the mean (the first moment) of the state distribution; it is normally distributed if the conditions of (1.3) and (1.4) are met. The a posteriori estimate error covariance (1.6) reflects the variance of the state distribution (the second non-central moment). In other words,

$$p(x_k \mid z_k) \sim N(E[x_k], E[(x_k - \hat{x}_k)(x_k - \hat{x}_k)^T]) = N(\hat{x}_k, P_k) .$$

For more details on the probabilistic origins of the Kalman filter, see [Maybeck79], [Brown92], or [Jacobs93].
The Discrete Kalman Filter Algorithm
We will begin this section with a broad overview, covering the “high-level” operation of one form of the discrete Kalman filter (see the previous footnote). After presenting this high-level view, we will narrow the focus to the specific equations and their use in this version of the filter.
The Kalman filter estimates a process by using a form of feedback control: the filter estimates the process state at some time and then obtains feedback in the form of (noisy) measurements. As such, the equations for the Kalman filter fall into two groups: time update equations and measurement update equations. The time update equations are responsible for projecting forward (in time) the current state and error covariance estimates to obtain the a priori estimates for the next time step. The measurement update equations are responsible for the feedback, i.e., for incorporating a new measurement into the a priori estimate to obtain an improved a posteriori estimate.
The time update equations can also be thought of as predictor  equations, while the measurement update equations can be thought of as corrector equations. Indeed the final estimation algorithm resembles that of a predictor-corrector  algorithm for solving numerical problems as shown below in Figure 1-1.
Figure 1-1. The ongoing discrete Kalman filter cycle. The time update projects the current state estimate ahead in time. The measurement update adjusts the projected estimate by an actual measurement at that time.
The specific equations for the time and measurement updates are presented below in Table 1-1 and Table 1-2.

Table 1-1: Discrete Kalman filter time update equations.

$$\hat{x}_{k+1}^- = A_k \hat{x}_k + B u_k$$   (1.9)
$$P_{k+1}^- = A_k P_k A_k^T + Q_k$$   (1.10)

Table 1-2: Discrete Kalman filter measurement update equations.

$$K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1}$$   (1.11)
$$\hat{x}_k = \hat{x}_k^- + K_k (z_k - H_k \hat{x}_k^-)$$   (1.12)
$$P_k = (I - K_k H_k) P_k^-$$   (1.13)

Again notice how the time update equations in Table 1-1 project the state and covariance estimates from time step $k$ to step $k+1$. $A$ and $B$ are from (1.1), while $Q$ is from (1.3). Initial conditions for the filter are discussed in the earlier references.

The first task during the measurement update is to compute the Kalman gain, $K_k$. Notice that the equation given here as (1.11) is the same as (1.8). The next step is to actually measure the process to obtain $z_k$, and then to generate an a posteriori state estimate by incorporating the measurement as in (1.12). Again (1.12) is simply (1.7) repeated here for completeness. The final step is to obtain an a posteriori error covariance estimate via (1.13).

After each time and measurement update pair, the process is repeated with the previous a posteriori estimates used to project or predict the new a priori estimates. This recursive nature is one of the very appealing features of the Kalman filter: it makes practical implementations much more feasible than (for example) an implementation of a Wiener filter [Brown92], which is designed to operate on all of the data directly for each estimate. The Kalman filter instead recursively conditions the current estimate on all of the past measurements. Figure 1-2 below offers a complete picture of the operation of the filter, combining the high-level diagram of Figure 1-1 with the equations from Table 1-1 and Table 1-2.
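Putting the two tables together, the following sketch implements one complete predict/correct cycle, using the same hypothetical constant-velocity system as the earlier sketch. It is an illustration rather than code from the paper; a production implementation would add numerical safeguards (for example, a symmetrized covariance update).

```python
import numpy as np

def time_update(x_hat, P, A, B, u, Q):
    """Predict: project the state and covariance ahead (a priori)."""
    x_hat_minus = A @ x_hat + B @ u                       # (1.9)
    P_minus = A @ P @ A.T + Q                             # (1.10)
    return x_hat_minus, P_minus

def measurement_update(x_hat_minus, P_minus, H, z, R):
    """Correct: fold the measurement into the estimate (a posteriori)."""
    S = H @ P_minus @ H.T + R                             # innovation covariance
    K = P_minus @ H.T @ np.linalg.inv(S)                  # (1.11)
    x_hat = x_hat_minus + K @ (z - H @ x_hat_minus)       # (1.12)
    P = (np.eye(P_minus.shape[0]) - K @ H) @ P_minus      # (1.13)
    return x_hat, P

# Toy system (same illustrative assumptions as before):
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.1]])

x_hat = np.zeros((2, 1))       # initial a posteriori state estimate
P = np.eye(2)                  # initial error covariance
u = np.array([[0.0]])
z = np.array([[0.55]])         # a hypothetical position measurement

x_hat_minus, P_minus = time_update(x_hat, P, A, B, u, Q)
x_hat, P = measurement_update(x_hat_minus, P_minus, H, z, R)
print(x_hat.ravel(), "\n", P)
```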
Filter Parameters and Tuning
In the actual implementation of the filter, each of the measurement error covariance matrix $R$ and the process noise $Q$ (given by (1.4) and (1.3) respectively) might be measured prior to operation of the filter. In the case of the measurement error covariance $R$ in particular this makes sense: because we need to be able to measure the process (while operating the filter), we should generally be able to take some off-line sample measurements in order to determine the variance of the measurement error.
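As a sketch of that off-line procedure, one might hold the measured quantity fixed, record repeated sensor readings, and take their sample variance as $R$. The readings below are simulated stand-ins; in practice they would come from the real sensor.

```python
import numpy as np

# Simulate 1000 repeated readings of a fixed quantity with noise
# variance 0.1 (the "true" R we hope to recover).
rng = np.random.default_rng(1)
true_value = 2.0
samples = true_value + rng.normal(0.0, np.sqrt(0.1), size=1000)

# Sample variance of the readings estimates the measurement noise R.
R_est = np.atleast_2d(np.var(samples, ddof=1))
print(R_est)   # close to 0.1
```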
In the case of $Q$, the choice is often less deterministic. For example, this noise source is often used to represent the uncertainty in the process model (1.1). Sometimes a very poor model can be used simply by "injecting" enough uncertainty via the selection of $Q$. Certainly in this case one would hope that the measurements of the process would be reliable.

In either case, whether or not we have a rational basis for choosing the parameters, superior filter performance (statistically speaking) can often be obtained by "tuning" the filter parameters $Q$ and $R$. The tuning is usually performed off-line, frequently with the help of another (distinct) Kalman filter.
Figure 1-2. A complete picture of the operation of the Kalman filter, combining the high-level diagram of Figure 1-1 with the equations from Table 1-1 and Table 1-2.
In closing we note that under conditions where $Q_k$ and $R_k$ are constant, both the estimation error covariance $P_k$ and the Kalman gain $K_k$ will stabilize quickly and then remain constant (see the filter update equations in Figure 1-2). If this is the case, these parameters can be pre-computed by either running the filter off-line, or for example by solving (1.10) for the steady-state value of $P_k$ by defining $P_k^- \equiv P_k$ and solving for $P_k$.
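A sketch of that off-line pre-computation, for the same hypothetical constant-velocity system used earlier: iterate (1.10), (1.11), and (1.13) with constant $A$, $H$, $Q$, and $R$ until $P_k$ stops changing, then store the resulting steady-state gain for on-line use.

```python
import numpy as np

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.1]])

P = np.eye(2)
for _ in range(1000):
    P_minus = A @ P @ A.T + Q                                     # (1.10)
    K = P_minus @ H.T @ np.linalg.inv(H @ P_minus @ H.T + R)      # (1.11)
    P_new = (np.eye(2) - K @ H) @ P_minus                         # (1.13)
    if np.allclose(P_new, P, atol=1e-12):                         # converged?
        break
    P = P_new

print("steady-state gain K =\n", K)   # can be pre-computed and reused on-line
```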
It is frequently the case however that the measurement error (in particular) does not remain constant. For example, when sighting beacons in our optoelectronic tracker ceiling panels, the noise in measurements of nearby beacons will be smaller than that in far-away beacons. Also, the process noise $Q_k$ is sometimes changed dynamically during filter operation in order to adjust to different dynamics. For example, in the case of tracking the head of a user of a 3D virtual environment we might reduce the magnitude of $Q_k$ if the user seems to be moving slowly, and increase the magnitude if the dynamics start changing rapidly. In such a case $Q_k$ can be used to model not only the uncertainty in the model, but also the uncertainty of the user's intentions.
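A sketch of what such on-line scheduling might look like, in the spirit of the beacon and head-tracking examples above. The scaling rules and thresholds below are purely illustrative assumptions, not prescriptions from the paper.

```python
import numpy as np

def measurement_noise(base_R, distance):
    """Inflate R for far-away beacons (noisier sightings); illustrative rule."""
    return base_R * (1.0 + distance ** 2)

def process_noise(base_Q, speed, threshold=0.5):
    """Shrink Q while the user moves slowly; grow it when dynamics pick up."""
    scale = 0.1 if speed < threshold else 10.0
    return base_Q * scale

base_R = np.array([[0.1]])
base_Q = 0.01 * np.eye(2)
print(measurement_noise(base_R, distance=3.0))   # R grows with beacon distance
print(process_noise(base_Q, speed=0.2))          # Q shrinks for slow motion
```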
