diff --git a/00_Preface.ipynb b/00-Preface.ipynb
similarity index 100%
rename from 00_Preface.ipynb
rename to 00-Preface.ipynb
diff --git a/01_g-h_filter.ipynb b/01-g-h-filter.ipynb
similarity index 100%
rename from 01_g-h_filter.ipynb
rename to 01-g-h-filter.ipynb
diff --git a/02_Discrete_Bayes.ipynb b/02-Discrete-Bayes.ipynb
similarity index 100%
rename from 02_Discrete_Bayes.ipynb
rename to 02-Discrete-Bayes.ipynb
diff --git a/03_Gaussians.ipynb b/03-Gaussians.ipynb
similarity index 100%
rename from 03_Gaussians.ipynb
rename to 03-Gaussians.ipynb
diff --git a/04_One_Dimensional_Kalman_Filters.ipynb b/04-One-Dimensional-Kalman-Filters.ipynb
similarity index 100%
rename from 04_One_Dimensional_Kalman_Filters.ipynb
rename to 04-One-Dimensional-Kalman-Filters.ipynb
diff --git a/05_Multivariate_Gaussians.ipynb b/05-Multivariate-Gaussians.ipynb
similarity index 100%
rename from 05_Multivariate_Gaussians.ipynb
rename to 05-Multivariate-Gaussians.ipynb
diff --git a/05_Multivariate_Kalman_Filters.ipynb b/06-Multivariate-Kalman-Filters.ipynb
similarity index 99%
rename from 05_Multivariate_Kalman_Filters.ipynb
rename to 06-Multivariate-Kalman-Filters.ipynb
index 1d6c8f1..8a0a3b3 100644
--- a/05_Multivariate_Kalman_Filters.ipynb
+++ b/06-Multivariate-Kalman-Filters.ipynb
@@ -1742,9 +1742,7 @@
"source": [
"In other words, the *Kalman gain* equation is doing nothing more than computing a ratio based on how much we trust the prediction vs the measurement. If we are confident in our measurements and unconfident in our predictions $\\mathbf{K}$ will favor the measurement, and vice versa. The equation is complicated because we are doing this in multiple dimensions via matrices, but the concept is simple - scale by a ratio, same as the univariate case.\n",
"\n",
- "Without going into the derivation of $\\mathbf{K}$, I'll say that this equation is the result of finding a value of $\\mathbf{K}$ that optimizes the *mean-square estimation error*. It does this by finding the minimal values for $\\mathbf{P}$ along its diagonal. Recall that the diagonal of $\\mathbf{P}$ is the variance for each state variable. So, this equation for $\\mathbf{K}$ ensures that the Kalman filter output is optimal. To put this in concrete terms, for our dog tracking problem this means that the estimates for both position and velocity will be optimal in a least squares sense.\n",
- "\n",
- "fork"
+ "Without going into the derivation of $\\mathbf{K}$, I'll say that this equation is the result of finding a value of $\\mathbf{K}$ that optimizes the *mean-square estimation error*. It does this by finding the minimal values for $\\mathbf{P}$ along its diagonal. Recall that the diagonal of $\\mathbf{P}$ is the variance for each state variable. So, this equation for $\\mathbf{K}$ ensures that the Kalman filter output is optimal. To put this in concrete terms, for our dog tracking problem this means that the estimates for both position and velocity will be optimal in a least squares sense."
]
},
{
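The hunk above reads the Kalman gain as a trust ratio between prediction and measurement. As an editor's illustration (not code from the notebook; the matrices and noise values below are hypothetical), here is a minimal NumPy sketch of $\mathbf{K} = \mathbf{PH}^\mathsf{T}(\mathbf{HPH}^\mathsf{T} + \mathbf{R})^{-1}$ behaving exactly that way:

```python
# Minimal sketch: the Kalman gain as a prediction-vs-measurement trust ratio.
# Hypothetical values; not taken from the book's notebooks.
import numpy as np

H = np.array([[1., 0.]])          # measure position only
P = np.diag([4., 9.])             # prior covariance (position, velocity)

def kalman_gain(P, H, R):
    S = H @ P @ H.T + R           # innovation (system) covariance
    return P @ H.T @ np.linalg.inv(S)

print(kalman_gain(P, H, np.array([[0.1]])))   # trusted measurement -> gain near 1
print(kalman_gain(P, H, np.array([[100.]])))  # noisy measurement   -> gain near 0
```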
@@ -1753,19 +1751,19 @@
"source": [
"**Residual**\n",
"\n",
- "$\\textbf{y} = \\textbf{z} - \\textbf{Hx}$\n",
+ "$\\mathbf{y} = \\mathbf{z} - \\mathbf{Hx}$\n",
"\n",
- "This is an easy one as we've covered this equation while designing the measurement function $\\mathbf{H}$. Recall that the measurement function converts a state into a measurement. So $\\textbf{Hx}$ converts $\\textbf{x}$ into an equivalent measurement. Once that is done, we can subtract it from the measurement $\\textbf{z}$ to get the residual - the difference between the measurement and prediction.\n",
+ "This is an easy one as we've covered this equation while designing the measurement function $\\mathbf{H}$. Recall that the measurement function converts a state into a measurement. So $\\mathbf{Hx}$ converts $\\mathbf{x}$ into an equivalent measurement. Once that is done, we can subtract it from the measurement $\\mathbf{z}$ to get the residual - the difference between the measurement and prediction.\n",
"\n",
"**State Update**\n",
"\n",
- "$\\mathbf{x} =\\mathbf{x^-} +\\mathbf{Ky} $\n",
+ "$\\mathbf{x} = \\mathbf{x}^- + \\mathbf{Ky}$\n",
"\n",
- "We select our new state to be along the residual, scaled by the Kalman gain. The scaling is performed by $\\mathbf{Ky}$, which then needs to be added to the prior: $\\mathbf{x} =\\mathbf{x^-} +\\mathbf{Ky} $\n",
+ "We select our new state to be along the residual, scaled by the Kalman gain. The scaling is performed by $\\mathbf{Ky}$, which then needs to be added to the prior: $\\mathbf{x} =\\mathbf{x}^- + \\mathbf{Ky}$.\n",
"\n",
"**Covariance Update**\n",
"\n",
- "$\\mathbf{P} = (\\mathbf{I}-\\mathbf{KH})\\mathbf{P^-}$\n",
+ "$\\mathbf{P} = (\\mathbf{I}-\\mathbf{KH})\\mathbf{P}^-$\n",
"\n",
"$\\mathbf{I}$ is the identity matrix, and is the way we represent $1$ in multiple dimensions. $\\mathbf{H}$ is our measurement function, and is a constant. So, simplified, this is simply $\\mathbf{P} = (1-c\\mathbf{K})\\mathbf{P}$. $\\mathbf{K}$ is our ratio of how much prediction vs measurement we use. So, if $\\mathbf{K}$ is large then $(1-\\mathbf{cK})$ is small, and $\\mathbf{P}$ will be made smaller than it was. If $\\mathbf{K}$ is small, then $(1-\\mathbf{cK})$ is large, and $\\mathbf{P}$ will be made larger than it was. So we adjust the size of our uncertainty by some factor of the *Kalman gain*.\n",
"\n",
@@ -2634,7 +2632,7 @@
"source": [
"Keep looking at these plots until you grasp how to interpret the covariance matrix $\\mathbf{P}$. When you start dealing with a, say, $9{\\times}9$ matrix it may seem overwhelming - there are 81 numbers to interpret. Just break it down - the diagonal contains the variance for each state variable, and all off diagonal elements are the product of two variances and a scaling factor $p$. You will not be able to plot a $9{\\times}9$ matrix on the screen because it would require living in 10-D space, so you have to develop your intuition and understanding in this simple, 2-D case. \n",
"\n",
- "> **sidebar**: when plotting covariance ellipses, make sure to always use `ax.set_aspect('equal')` or `plt.axis('equal')` in your code (the former lets you set the xlim and ylim values). If the axis use different scales the ellipses will be drawn distorted. For example, the ellipse may be drawn as being taller than it is wide, but it may actually be wider than tall."
+ "> **sidebar**: when plotting covariance ellipses, make sure to always use ax.set_aspect('equal') or plt.axis('equal') in your code (the former lets you set the xlim and ylim values). If the axis use different scales the ellipses will be drawn distorted. For example, the ellipse may be drawn as being taller than it is wide, but it may actually be wider than tall."
]
},
{
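The sidebar in the hunk above warns that covariance ellipses look distorted without equal axis scaling. A short matplotlib sketch of that point (an editor's illustration with a hypothetical covariance, not the book's plotting code):

```python
# Draw a 1-sigma covariance ellipse; ax.set_aspect('equal') keeps its true shape.
# The covariance matrix here is hypothetical.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse

P = np.array([[4., 2.4],
              [2.4, 3.]])                 # example 2x2 covariance
vals, vecs = np.linalg.eigh(P)            # eigenvalues ascending; columns are axes
angle = np.degrees(np.arctan2(vecs[1, -1], vecs[0, -1]))
width, height = 2*np.sqrt(vals[-1]), 2*np.sqrt(vals[0])

fig, ax = plt.subplots()
ax.add_patch(Ellipse((0, 0), width, height, angle=angle, fill=False))
ax.set_xlim(-5, 5)
ax.set_ylim(-5, 5)
ax.set_aspect('equal')                    # without this the ellipse appears distorted
plt.show()
```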
diff --git a/06_Kalman_Filter_Math.ipynb b/07-Kalman-Filter-Math.ipynb
similarity index 100%
rename from 06_Kalman_Filter_Math.ipynb
rename to 07-Kalman-Filter-Math.ipynb
diff --git a/07_Designing_Kalman_Filters.ipynb b/08-Designing-Kalman-Filters.ipynb
similarity index 100%
rename from 07_Designing_Kalman_Filters.ipynb
rename to 08-Designing-Kalman-Filters.ipynb
diff --git a/08_Nonlinear_Filtering.ipynb b/09-Nonlinear-Filtering.ipynb
similarity index 100%
rename from 08_Nonlinear_Filtering.ipynb
rename to 09-Nonlinear-Filtering.ipynb
diff --git a/09_Unscented_Kalman_Filter.ipynb b/10-Unscented-Kalman-Filter.ipynb
similarity index 100%
rename from 09_Unscented_Kalman_Filter.ipynb
rename to 10-Unscented-Kalman-Filter.ipynb
diff --git a/10_Extended_Kalman_Filters.ipynb b/11-Extended-Kalman-Filters.ipynb
similarity index 100%
rename from 10_Extended_Kalman_Filters.ipynb
rename to 11-Extended-Kalman-Filters.ipynb
diff --git a/11_Particle_Filters.ipynb b/12-Particle-Filters.ipynb
similarity index 100%
rename from 11_Particle_Filters.ipynb
rename to 12-Particle-Filters.ipynb
diff --git a/12_Smoothing.ipynb b/13-Smoothing.ipynb
similarity index 100%
rename from 12_Smoothing.ipynb
rename to 13-Smoothing.ipynb
diff --git a/13_Adaptive_Filtering.ipynb b/14-Adaptive-Filtering.ipynb
similarity index 100%
rename from 13_Adaptive_Filtering.ipynb
rename to 14-Adaptive-Filtering.ipynb
diff --git a/Appendix_A_Installation.ipynb b/Appendix-A-Installation.ipynb
similarity index 100%
rename from Appendix_A_Installation.ipynb
rename to Appendix-A-Installation.ipynb
diff --git a/Appendix_B_Symbols_and_Notations.ipynb b/Appendix-B-Symbols-and-Notations.ipynb
similarity index 100%
rename from Appendix_B_Symbols_and_Notations.ipynb
rename to Appendix-B-Symbols-and-Notations.ipynb
diff --git a/Appendix_C_Walking_Through_KF_Code.ipynb b/Appendix-C-Walking-Through-KF-Code.ipynb
similarity index 100%
rename from Appendix_C_Walking_Through_KF_Code.ipynb
rename to Appendix-C-Walking-Through-KF-Code.ipynb
diff --git a/Appendix_D_HInfinity_Filters.ipynb b/Appendix-D-HInfinity-Filters.ipynb
similarity index 100%
rename from Appendix_D_HInfinity_Filters.ipynb
rename to Appendix-D-HInfinity-Filters.ipynb
diff --git a/Appendix_E_Ensemble_Kalman_Filters.ipynb b/Appendix-E-Ensemble-Kalman-Filters.ipynb
similarity index 100%
rename from Appendix_E_Ensemble_Kalman_Filters.ipynb
rename to Appendix-E-Ensemble-Kalman-Filters.ipynb
diff --git a/Appendix_F_FilterPy_Code.ipynb b/Appendix-F-FilterPy-Code.ipynb
similarity index 100%
rename from Appendix_F_FilterPy_Code.ipynb
rename to Appendix-F-FilterPy-Code.ipynb
diff --git a/Appendix_G_Designing_Nonlinear_Kalman_Filters.ipynb b/Appendix-G-Designing-Nonlinear-Kalman-Filters.ipynb
similarity index 100%
rename from Appendix_G_Designing_Nonlinear_Kalman_Filters.ipynb
rename to Appendix-G-Designing-Nonlinear-Kalman-Filters.ipynb
diff --git a/Appendix_H_Least_Squares_Filters.ipynb b/Appendix-H-Least-Squares-Filters.ipynb
similarity index 100%
rename from Appendix_H_Least_Squares_Filters.ipynb
rename to Appendix-H-Least-Squares-Filters.ipynb
diff --git a/pdf/merge_book.py b/pdf/merge_book.py
index b6fb051..8bb6a94 100644
--- a/pdf/merge_book.py
+++ b/pdf/merge_book.py
@@ -34,23 +34,23 @@ if __name__ == '__main__':
'../Appendix_A_Installation.ipynb'])'''
merge_notebooks(f,
- ['../00_Preface.ipynb',
- '../01_g-h_filter.ipynb',
- '../02_Discrete_Bayes.ipynb',
- '../03_Gaussians.ipynb',
- '../04_One_Dimensional_Kalman_Filters.ipynb',
- '../05_Multivariate_Gaussians.ipynb',
- '../05_Multivariate_Kalman_Filters.ipynb',
- '../06_Kalman_Filter_Math.ipynb',
- '../07_Designing_Kalman_Filters.ipynb',
- '../08_Nonlinear_Filtering.ipynb',
- '../09_Unscented_Kalman_Filter.ipynb',
- '../10_Extended_Kalman_Filters.ipynb',
- '../11_Particle_Filters.ipynb',
- '../12_Smoothing.ipynb',
- '../13_Adaptive_Filtering.ipynb',
- '../Appendix_A_Installation.ipynb',
- '../Appendix_B_Symbols_and_Notations.ipynb',
- '../Appendix_C_Walking_Through_KF_Code.ipynb',
- '../Appendix_D_HInfinity_Filters.ipynb',
- '../Appendix_E_Ensemble_Kalman_Filters.ipynb'])
+ ['../00-Preface.ipynb',
+ '../01-g-h-filter.ipynb',
+ '../02-Discrete-Bayes.ipynb',
+ '../03-Gaussians.ipynb',
+ '../04-One-Dimensional-Kalman-Filters.ipynb',
+ '../05-Multivariate-Gaussians.ipynb',
+ '../06-Multivariate-Kalman-Filters.ipynb',
+ '../07-Kalman-Filter-Math.ipynb',
+ '../08-Designing-Kalman-Filters.ipynb',
+ '../09-Nonlinear-Filtering.ipynb',
+ '../10-Unscented-Kalman-Filter.ipynb',
+ '../11-Extended-Kalman-Filters.ipynb',
+ '../12-Particle-Filters.ipynb',
+ '../13-Smoothing.ipynb',
+ '../14-Adaptive-Filtering.ipynb',
+ '../Appendix-A-Installation.ipynb',
+ '../Appendix-B-Symbols-and-Notations.ipynb',
+ '../Appendix-C-Walking-Through-KF-Code.ipynb',
+ '../Appendix-D-HInfinity-Filters.ipynb',
+ '../Appendix-E-Ensemble-Kalman-Filters.ipynb'])
diff --git a/table_of_contents.ipynb b/table_of_contents.ipynb
index 4c82b6e..d3eecd5 100644
--- a/table_of_contents.ipynb
+++ b/table_of_contents.ipynb
@@ -10,119 +10,121 @@
"Table of Contents\n",
"-----\n",
"\n",
- "[**Preface**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/00_Preface.ipynb)\n",
+ "[**Preface**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/00-Preface.ipynb)\n",
" \n",
"Motivation behind writing the book. How to download and read the book. Requirements for IPython Notebook and Python. github links.\n",
"\n",
"\n",
- "[**Chapter 1: The g-h Filter**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/01_g-h_filter.ipynb)\n",
+ "[**Chapter 1: The g-h Filter**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/01-g-h-filter.ipynb)\n",
"\n",
"Intuitive introduction to the g-h filter, also known as the $\\alpha$-$\\beta$ Filter, which is a family of filters that includes the Kalman filter. Once you understand this chapter you will understand the concepts behind the Kalman filter. \n",
"\n",
"\n",
- "[**Chapter 2: The Discrete Bayes Filter**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/02_Discrete_Bayes.ipynb)\n",
+ "[**Chapter 2: The Discrete Bayes Filter**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/02-Discrete-Bayes.ipynb)\n",
"\n",
"Introduces the discrete Bayes filter. From this you will learn the probabilistic (Bayesian) reasoning that underpins the Kalman filter in an easy to digest form.\n",
"\n",
- "[**Chapter 3: Gaussian Probabilities**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/03_Gaussians.ipynb)\n",
+ "[**Chapter 3: Gaussian Probabilities**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/03-Gaussians.ipynb)\n",
"\n",
"Introduces using Gaussians to represent beliefs in the Bayesian sense. Gaussians allow us to implement the algorithms used in the discrete Bayes filter to work in continuous domains.\n",
"\n",
"\n",
- "[**Chapter 4: One Dimensional Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/04_One_Dimensional_Kalman_Filters.ipynb)\n",
+ "[**Chapter 4: One Dimensional Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/04-One-Dimensional-Kalman-Filters.ipynb)\n",
"\n",
"Implements a Kalman filter by modifying the discrete Bayes filter to use Gaussians. This is a full featured Kalman filter, albeit only useful for 1D problems. \n",
"\n",
"\n",
- "[**Chapter 5: Multivariate Kalman Filter**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/05_Multivariate_Kalman_Filters.ipynb)\n",
+ "[**Chapter 5: Multivariate Gaussians**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/05-Multivariate-Gaussians.ipynb)\n",
"\n",
- "We extend the Kalman filter developed in the previous chapter to the full, generalized filter for linear problems. After reading this you will understand how a Kalman filter works and how to design and implement one for a (linear) problem of your choice.\n",
+ "Extends Gaussians to multiple dimensions, and demonstrates how 'triangulation' and hidden variables can vastly improve estimates.\n",
"\n",
+ "[**Chapter 6: Multivariate Kalman Filter**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/06-Multivariate-Kalman-Filters.ipynb)\n",
"\n",
- "[**Chapter 6: Kalman Filter Math**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/06_Kalman_Filter_Math.ipynb)\n",
+ "We extend the Kalman filter developed in the univariate chapter to the full, generalized filter for linear problems. After reading this you will understand how a Kalman filter works and how to design and implement one for a (linear) problem of your choice.\n",
+ "\n",
+ "[**Chapter 7: Kalman Filter Math**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/07-Kalman-Filter-Math.ipynb)\n",
"\n",
"We gotten about as far as we can without forming a strong mathematical foundation. This chapter is optional, especially the first time, but if you intend to write robust, numerically stable filters, or to read the literature, you will need to know the material in this chapter. Some sections will be required to understand the later chapters on nonlinear filtering. \n",
"\n",
"\n",
- "[**Chapter 7: Designing Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/07_Designing_Kalman_Filters.ipynb)\n",
+ "[**Chapter 8: Designing Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/08-Designing-Kalman-Filters.ipynb)\n",
"\n",
"Building on material in Chapters 5 and 6, walks you through the design of several Kalman filters. Only by seeing several different examples can you really grasp all of the theory. Examples are chosen to be realistic, not 'toy' problems to give you a start towards implementing your own filters. Discusses, but does not solve issues like numerical stability.\n",
"\n",
"\n",
- "[**Chapter 8: Nonlinear Filtering**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/08_Nonlinear_Filtering.ipynb)\n",
+ "[**Chapter 9: Nonlinear Filtering**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/09-Nonlinear-Filtering.ipynb)\n",
"\n",
"Kalman filters as covered only work for linear problems. Yet the world is nonlinear. Here I introduce the problems that nonlinear systems pose to the filter, and briefly discuss the various algorithms that we will be learning in subsequent chapters.\n",
"\n",
"\n",
- "[**Chapter 9: Unscented Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/09_Unscented_Kalman_Filter.ipynb)\n",
+ "[**Chapter 10: Unscented Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/10-Unscented-Kalman-Filter.ipynb)\n",
"\n",
"Unscented Kalman filters (UKF) are a recent development in Kalman filter theory. They allow you to filter nonlinear problems without requiring a closed form solution like the Extended Kalman filter requires.\n",
"\n",
"This topic is typically either not mentioned, or glossed over in existing texts, with Extended Kalman filters receiving the bulk of discussion. I put it first because the UKF is much simpler to understand, implement, and the filtering performance is usually as good as or better then the Extended Kalman filter. I always try to implement the UKF first for real world problems, and you should also.\n",
"\n",
"\n",
- "[**Chapter 10: Extended Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/10_Extended_Kalman_Filters.ipynb)\n",
+ "[**Chapter 11: Extended Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/11-Extended-Kalman-Filters.ipynb)\n",
"\n",
"Extended Kalman filters (EKF) are the most common approach to linearizing non-linear problems. A majority of real world Kalman filters are EKFs, so will need to understand this material to understand existing code, papers, talks, etc. \n",
"\n",
"\n",
- "[**Chapter 11: Particle Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/11_Particle_Filters.ipynb)\n",
+ "[**Chapter 12: Particle Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/12-Particle-Filters.ipynb)\n",
"\n",
- "**in progress, not checked in**\n",
"Particle filters uses Monte Carlo techniques to filter data. They easily handle highly nonlinear and non-Gaussian systems, as well as multimodal distributions (tracking multiple objects simultaneously) at the cost of high computational requirements.\n",
"\n",
"\n",
- "[**Chapter 12: Smoothing**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/12_Smoothing.ipynb)\n",
+ "[**Chapter 13: Smoothing**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/13-Smoothing.ipynb)\n",
"\n",
"Kalman filters are recursive, and thus very suitable for real time filtering. However, they work extremely well for post-processing data. After all, Kalman filters are predictor-correctors, and it is easier to predict the past than the future! We discuss some common approaches.\n",
"\n",
"\n",
- "[**Chapter 13: Adaptive Filtering**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/13_Adaptive_Filtering.ipynb)\n",
+ "[**Chapter 14: Adaptive Filtering**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/13-Adaptive-Filtering.ipynb)\n",
" \n",
"Kalman filters assume a single process model, but manuevering targets typically need to be described by several different process models. Adaptive filtering uses several techniques to allow the Kalman filter to adapt to the changing behavior of the target.\n",
"\n",
"\n",
- "[**Appendix A: Installation, Python, NumPy, and FilterPy**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix_A_Installation.ipynb)\n",
+ "[**Appendix A: Installation, Python, NumPy, and FilterPy**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix-A-Installation.ipynb)\n",
"\n",
"Brief introduction of Python and how it is used in this book. Description of the companion\n",
"library FilterPy. \n",
" \n",
"\n",
- "[**Appendix B: Symbols and Notations**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix_B_Symbols_and_Notations.ipynb)\n",
+ "[**Appendix B: Symbols and Notations**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix-B-Symbols-and-Notations.ipynb)\n",
"\n",
"Most books opt to use different notations and variable names for identical concepts. This is a large barrier to understanding when you are starting out. I have collected the symbols and notations used in this book, and built tables showing what notation and names are used by the major books in the field.\n",
"\n",
"*Still just a collection of notes at this point.*\n",
"\n",
"\n",
- "[**Appendix C: Walking through the Kalman Filter code**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix_C_Walking_Through_KF_Code.ipynb)\n",
+ "[**Appendix C: Walking through the Kalman Filter code**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix-C-Walking-Through-KF-Code.ipynb)\n",
"\n",
"A brief walkthrough of the KalmanFilter class from FilterPy.\n",
"\n",
"\n",
- "[**Appendix D: H-Infinity Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix_D_HInfinity_Filters.ipynb)\n",
+ "[**Appendix D: H-Infinity Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix-D-HInfinity-Filters.ipynb)\n",
" \n",
"Describes the $H_\\infty$ filter. \n",
"\n",
"*I have code that implements the filter, but no supporting text yet.*\n",
"\n",
"\n",
- "[**Appendix E: Ensemble Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix_E_Ensemble_Kalman_Filters.ipynb)\n",
+ "[**Appendix E: Ensemble Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix-E-Ensemble-Kalman-Filters.ipynb)\n",
"\n",
"Discusses the ensemble Kalman Filter, which uses a Monte Carlo approach to deal with very large Kalman filter states in nonlinear systems.\n",
"\n",
"\n",
- "[**Appendix F: FilterPy Source Code**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix_F_Filterpy_Code.ipynb)\n",
+ "[**Appendix F: FilterPy Source Code**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix-F-Filterpy-Code.ipynb)\n",
"\n",
"Listings of important classes from FilterPy that are used in this book.\n",
"\n",
"\n",
- "[*Appendix G: Designing Nonlinear Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix_G_Designing_Nonlinear_Kalman_Filters.ipynb)\n",
+ "[*Appendix G: Designing Nonlinear Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix-G-Designing-Nonlinear-Kalman-Filters.ipynb)\n",
"\n",
"Works through some examples of the design of Kalman filters for nonlinear problems. *This is still very much a work in progress.*\n",
"\n",
"\n",
- "[**Appendix H: Least Squares Filter**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix_H_Least_Squares_Filters.ipynb)\n",
+ "[**Appendix H: Least Squares Filter**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix-H-Least-Squares-Filters.ipynb)\n",
"\n",
"**not written yet**\n",
"\n",