TK-MIP for the PC

Do more in less time! TK-MIP helps guide you to do it right the VERY first time! Representative TK-MIP GUI screens are depicted above as examples of our clear user-prompting style.

(Our navigation buttons are at the TOP of each screen.)

Get free TK-MIP® tutorial software that demonstrates TeK Associates’ software development style.

Microsoft Word™ is to WordPerfect™ as TK-MIP™ is to ... everything else that claims to perform comparable processing!

Harness the strength and power of a polymath to benefit you! It’s encapsulated within TK-MIP!

Some Relevant Facts:

· Kalman Filters (KF) [or their computational and algorithmic equivalent] are used in ALL modern navigation systems and in ALL modern radar and optical target trackers.
· KF processing is needed for situations where the sensor measurement data (and/or the associated underlying system model) are significantly noise-corrupted.
· The Kalman Filter ameliorates the effects of the noises and significantly improves the ability to accurately assess “what is actually going on” regarding the true system state!
· KF methodology enables the setting of rational specifications for required accuracies and requisite measurement sampling rates even before hardware is built (via use of Covariance Analysis as one aspect of the KF methodology).
· Most Kalman Filter applications (with a well-known tally of the flops and CPU memory required for their mechanization) need to process in “real-time” to keep up with the steadily sampled stream of sensor measurement data.

Who needs TK-MIP?

· Systems Engineers
· Operations Research Engineers
· Radar Engineers
· Navigation Engineers
· Some Process Control Engineers
· Systems Analysts
· Control Engineers
· Some Applied Mathematicians
· Some Physicists
· Some Statisticians (and graduate students in the above respective disciplines)
· Analysis Houses (like us)
· Aerospace & Defense Primes
· Aerospace & Defense subcontractors
· Auto/Transportation R&D
· Federally Funded Research & Development Centers (FFRDC’s)

Question again: “Who Needs TK-MIP?” Answer: “Anyone with a need to do any of the following:”

Perform Kalman filtering (KF) or other more up-to-date statistical estimation processing to ameliorate the effect of noises in systems or in their sensor measurements;

Assess or predict system behavior & achievable accuracies even before starting the actual hardware development (allowing rational allotment of SPECS for constituent hardware components beforehand from “error budgets” calculated via one aspect of the KF methodology);

Compute Linear Quadratic Gaussian (LQG) cost function minimization or LQG/Loop Transfer Recovery (LTR) feedback control gains (both prominently utilizing a KF in the feedback path);

Provide concise and correct technical explanations as snapshots of technological milestones achieved in Kalman Filtering and related Statistical Estimation theory. As massive numbers of senior workers retire from both the Aerospace & Defense Industries and from NASA, there is a need to provide precedents that explain to new personnel why things are done a certain way and what historical technical developments paved the way as justification. The clear tutorials in TK-MIP fill this void nicely.

The above capabilities of TK-MIP should be of high interest to potential customers because:

  1. Kalman Filters are in widespread use in a variety of applications.

  2. For KF applications: Linear Time Invariant (LTI) case <=> EASY!

  3. For KF applications: Nonlinear or Time-varying cases <=> HARD!

  4. Real-world applications are almost always nonlinear and time-varying!

  5. For KF & LQG/LTR: LTI is what students learn and practice in school.

  6. Many competitors have capable, young, energetic employees with excellent computer skills. However, few are familiar with the above reality!

  7. TK-MIP handles Nonlinear & Time-Varying without pain. TK-MIP was developed in the background over 15 years by TeK Associates personnel intimately familiar with both the theory and its relevant applications (and their respective goals), and with writing out the solutions and then coding them up.

  8. TK-MIP handles nonlinear & time-varying right “out-of-the-box”.

  9. User sets initial switches within TK-MIP from a dropdown menu to match the architecture needed for the specific application at hand!

  10. No programming is necessary to use and run TK-MIP (yet some customization of a sort is availed, if needed, but restricted to specific places being: Input/Output (I/O) types and associated baud and sample rates & rotational and bias offset transformations invoked [all possibly time-varying]). Restrictions are enforced to prevent Users from inadvertently clobbering existing coded algorithms within TK-MIP that have already been validated by TeK Associates as performing correctly.

  11. TK-MIP also handles any algebraic loops encountered in the feedback path with finesse rather than with brute force (as otherwise encountered if user needs to invoke Gear’s Integration for state or estimator propagation in time).

  12. On 19 February 2008, Rudolf Kalman received the prestigious $500,000 Charles Stark Draper Prize in Washington, D.C. for his original development, consisting of 3 papers, one published in 1959 and the other two in the early 1960's. It is finally universally recognized that these algorithms now permeate, and have revolutionized, all of modern technology (although they were first recognized early on with regard to weapons systems and national defense).

The reader may further pursue any of the underlined topics presented here, of their own volition, merely by clicking on the underlined links that follow. We offer a detailed description of our stance on the use of State Variable and Descriptor representations, our strong apprehension concerning use of the Matrix Signum Function, MatLab’s apparent mishandling of Level Crossing situations, our view of existing problems with certain Random Number Generators (RNG’s), and other Potentially Embarrassing Historical Loose Ends further below. These particular viewpoints motivated the design of our TK-MIP software to avoid these particular problems, which we are aware of and also seek to alert others to. We are cognizant of historical state-of-the-art software as well [39]-[42]. At the General Electric Corporate Research & Development Center, Dr. Kerr was a protégé of his fellow coworkers: Joe Watson, Hal Moore, and Dr. Glen Roe.

Please click on the Summary & Pricing button at the top of this screen for a free demo download representative of our TK-MIP® software. If any potential customer has further interest in purchasing our TK-MIP® software, a detailed order form for printout (to be sent back to us via mail or fax) is available within the free demo by clicking on the obvious Menu Item appearing at the top of the primary demo screen (which is the Tutorials Screen within the actual TK-MIP® software). We also include representative numerical algorithms (fundamental to our software) for users to test for numerical accuracy and computational speed, to satisfy themselves regarding its efficacy and efficiency before making any commitment to purchase. We have followed the development of this optimal and sub-optimal estimation technology for over 35 years and continue to do so with eternal vigilance so that TK-MIP users can immediately reap the benefits of our experience and expertise (without having to be experts in these areas themselves).

TeK Associates® is currently offering a high quality Windows™ 9x\ME\WinNT\2000\XP\Vista\Windows 7\Gazelle (see Note [1] at the bottom of this web page) compatible and intuitive menu-driven PC software product, TK-MIP, for sale (to professional engineers, scientists, statisticians, and mathematicians and to the students in these respective disciplines) that performs Kalman Filtering for applications possessing linear models and exclusively white Gaussian noises with known covariance statistics (and can perform many alternative approximate nonlinear estimation variants for practically handling nonlinear applications), as well as performing Kalman Smoothing\Monte-Carlo System Simulation and (optionally) Linear Quadratic Gaussian\Loop Transfer Recovery (LQG\LTR) Optimal Feedback Regulator Control, and which provides:

·        Clear on-line tutorials in the supporting theory, including explicit block diagrams describing TK-MIP processing options,

·        a clear, self-documenting, intuitive Graphical User Interface (GUI). No user manual is needed. This GUI is easy to learn; hard to forget; and built-in prompting is conveniently at hand as a reminder for the user who needs to use TK-MIP only intermittently (as, say, with a manager or other multi-disciplinary scientist or engineer).

·        a synopsis of significant prior applications (with details from our own pioneering experience),

·        use of Military Grid Coordinates (a.k.a. World Coordinates) as well as standard Latitude, Longitude, and Elevation coordinates for object and sensor locations, both being available within TK-MIP (for quick and easy conversions back and forth for possible map output),

·        implementation considerations and a repertoire of test problems for on-line software calibration\validation,

·        use of a self-contained on-line consultant feature which includes our own TK-MIP Textbook and also provides answers and solutions to many cutting edge problems in the specific topic areas,

·        how to access our 1000+ entry bibliography of recent (and historical) critical technological innovations that are included in a searchable database (directly accessible and able to be quickly searched  using any derived keyword present in the reference citation, e.g., GPS, author’s name, name of conference or workshop, date, etc.),

·        option of invoking the Jonker-Volgenant-Castanon (J-V-C) algorithm for solving the inherent “Assignment Problem” of Operations Research that arises within Multi-Target Tracking (MTT) utilizing estimators,

·       compliance with Department of Defense (DoD) Directive 8100.2 for software, 

·        and guidance on effective and efficient TK-MIP use

so Users can learn and progress at their own rate (or as time allows) with a quick review always readily at hand on-line on their own system, without having to search for a misplaced or unintentionally discarded User manual or for an Internet connection. Besides allowing system simulations for measurement data generation and its subsequent processing, actual Application Data can also be processed by TK-MIP, by accessing sensor data via serial port, via parallel port, or via a variety of commercially available Data Acquisition Cards (DAQ) using RS-232, PXI, VXI, GPIB, or Ethernet protocols (as directed by the User) from TK-MIP’s MAIN MENU. TK-MIP is independent stand-alone software unrelated to MATLAB® or SIMULINK® and, as such, does NOT rely on or need these products, and TK-MIP RUNS in only 16 Megabytes of RAM. Since TK-MIP outputs its results in an ASCII formatted matrix data stream, these outputs may be passed on to MATLAB® or SIMULINK® after making simple standard accommodations. TeK Associates is committed to keeping TK-MIP affordable by holding the price at only $499 for a single User license (plus $7.00 for Shipping and Handling via regular mail and $15.00 via FedEx or UPS Blue). Please click on the Summary & Pricing button at the top of this screen for a free software demo download.

An ability to perform certain TK-MIP computations for IMM is provided within TK-MIP, but doing so in parallel within the framework of Grid Computing is NOT being pursued at this time for two reasons: (1) The a priori quantifiable CPU load of this Kalman filter-based variation is modest (as are the CPU burdens of Maximum Likelihood Least Squares-based estimation and LQG/LTR control algorithms as well). (2) Moreover, it was revealed in 2005 that there is a serious security hole in the SSH (Secure Shell) used by almost all systems currently engaged in grid computing [37].

TK-MIP offers pre-specified general structures, with options (already built-in and tested) that are merely to be User-selected at run time (as accessed through an easy-to-use, logical, and intuitive menu structure that exactly matches this application technology area). This structure expedites implementation by availing easy cross-checking via a copyrighted proprietary methodology [involving proposed software benchmarks of known closed-form solution, applicable to test any software that professes to solve these same types of problems] and by requiring less than a week to accomplish a full scale simulation and evaluation (versus the 6 weeks to 6 months usually needed to implement from scratch by $pecialists). The User still has to enter the matrix parameters that characterize or describe his application (an afternoon’s work if he already has this description available, as is frequently the case) but is not required to do any programming per se when the application structure is exclusively linear and time-invariant. TK-MIP outputs can also be User-directed afterwards for display within the low cost PC-based Maptitude™ GIS software by Caliper, or within the more widely known MAPINFO®, or within DeLorme’s or ESRI’s mapping software.

An earlier version of TeK Associates’ commercial software product, TK-MIP Version 1, was unveiled and first demonstrated at our Booth 4304 at IEEE Electro ’95 in Boston, MA (21-23 June 1995). Our marketing techniques rely on maintaining a strong presence in the open technical literature by offering new results, new solutions, and new applications. Since the TK-MIP Version 2.0 approach allows Users to quickly perform Kalman Filtering\Smoothing\Simulation\Linear Quadratic Gaussian\Loop Transfer Recovery (LQG\LTR) Optimal Feedback Regulator Control, there is no longer a need for the User to explicitly program these activities (thus avoiding any encounters with unpleasant software bugs inadvertently introduced) and the User may instead focus more on the particular application at hand (and its associated underlying design of experiment). This TK-MIP software has been validated to also correctly handle time-varying linear systems, as routinely arise in linearizing general nonlinear systems occurring in realistic applications. An impressive array of auxiliary supporting functions is also included within this software, such as: spreadsheet inputting and editing of system description matrices; User-selectable color display plotting on screen and from a printer, with a capability to simply specify the detailed format of output graphs individually and in arrays in order to convey a story through useful juxtaposed comparisons; alternative tabular display of outputs; pre-formatted printing of results for ease-of-use and clarity by pre-engineered design; automatic conversion from continuous-time state variable or auto-regressive (AR) or auto-regressive moving average (ARMA) mathematical model system representation to discrete-time state variable representation (i.e., as a Linear System Realization); and facilities for simulating vector colored noise of specified character rather than just conventional white noise (by providing the capability to perform Matrix Spectral Factorization (MSF) to obtain the requisite preliminary shaping filter). [These last two features are, to date, only found in TK-MIP. There is more on MSF below.]

Another advantage possessed by TK-MIP over any of our competitors’ software is that we provide the only software that successfully implements Matrix Spectral Factorization (MSF). MSF is a Kalman Filtering accoutrement that enables the rigorous, routine handling (or system modeling) of serially time-correlated measurement noises and process noises that would otherwise be too general and more challenging than could normally be handled within a standard Kalman filter framework, which expects only additive Gaussian White Noise (GWN) as input. Such unruly, more general noises are accommodated within TK-MIP via a two-step procedure of (1) using MSF to decompose them into an associated dynamical system Multi-Input Multi-Output (MIMO) transfer function ultimately stimulated by GWN and then (2) explicitly implementing an algorithm for specifying a corresponding linear system realization representing the particular specified MIMO time-correlation matrix or, equivalently, its power spectral matrix. Systems structurally decomposed in this way can still be handled or processed within a standard estimation theory framework by augmenting these noise dynamics into the known dynamics of the original system (thus increasing the original system’s dimension), after which both can be handled within a standard state-variable or descriptor system formulation of somewhat higher dimension. These techniques are discussed and demonstrated in:

Kerr, T. H., “Applying Stochastic Integral Equations to Solve a Particular Stochastic Modeling Problem,” Ph.D. Thesis in the Department of Electrical Engineering, University of Iowa, Iowa City, Iowa, January 1971.
Kerr, T. H., Multichannel Shaping Filter Formulations for Vector Random Process Modeling Using Matrix Spectral Factorization, MIT Lincoln Laboratory Report No. PA-500, Lexington, MA, 27 March 1989.
Kerr, T. H., “Emulating Random Process Target Statistics (Using MSF),” IEEE Trans. on Aerospace and Electronic Systems, Vol. 30, No. 2, pp. 556-577, Apr. 1994.

A more realistically detailed model may be used for the system simulation while alternative reduced-order models can be used for estimation and\or control, as usually occurs in practice because of constraints on tolerable computational delay and on the computational capacity of supporting processing resources, which usually restrict the fidelity of the model used in practical implementations to principal components that hopefully “capture the essence” of the true system behavior. Simulated noise in TK-MIP may be Gaussian white noise, Poisson white noise, or a weighted mixture of the two (with specified variances being provided for both as User-specified inputs) as a worst-case consideration in a sensitivity evaluation of application designs. Filtering and\or control performance sensitivities may also be revealed under variations in underlying statistics, initial conditions, and variations in model structure, in model order, or in its specific parameter values. Prior to pursuing the above described detailed activities, Observability\Controllability testing can be performed automatically (for linear time-invariant applications) to assure that structural conditions are suitable for performing Kalman filtering, smoothing, and optimal control.

A novel aspect is that there is a full on-line tutorial on how to use the software as well as on the theoretical underpinnings, along with block diagrams and explanatory equations, since TK-MIP is also designed for novice student Users from various disciplines as well as for experts in electrical or mechanical engineering, where these techniques originated. The pedagogical detail may be turned off (and is automatically unloaded from RAM during actual signal processing so that it doesn’t contribute any unnecessary overhead) but may be turned back on any time a gentle reminder is again sought. A sophisticated proprietary pedagogical technique is used that is much more responsive to immediate User questions than would otherwise be availed by merely interrogating a standard WINDOWS 9X/2000/ME/NT/XP Help system, and this is enhanced by exhibiting descriptive equations when appropriate for knowledgeable Users. TK-MIP also includes the option of activating a modern version of square root Kalman filtering (for effectively achieving double precision accuracy without actually invoking double precision calculations nor incurring additional time delay beyond what is present for a standard version of Kalman filtering), which is a numerically stable version for real-time on-line use, where this type of robustness is important over the long haul of continuous operation.
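As a concrete illustration of the observability\controllability pre-check just described (for the LTI case), here is a minimal Python/numpy sketch using the standard rank tests on the stacked observability and controllability matrices; the model matrices are illustrative and this is not TK-MIP's internal code.

```python
import numpy as np

def observability_matrix(A, C):
    """Stack [C; CA; CA^2; ...; CA^(N-1)] for an N-state LTI pair (A, C)."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

def controllability_matrix(A, B):
    """Stack [B, AB, A^2 B, ..., A^(N-1) B] for an N-state LTI pair (A, B)."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

# Illustrative 2-state constant-velocity model sampled at dt = 1 s
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
C = np.array([[1.0, 0.0]])   # position-only measurement
B = np.array([[0.5],
              [1.0]])

obs_ok  = np.linalg.matrix_rank(observability_matrix(A, C)) == A.shape[0]
ctrl_ok = np.linalg.matrix_rank(controllability_matrix(A, B)) == A.shape[0]
print("observable:", obs_ok, " controllable:", ctrl_ok)
```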

Simulation Demos at earlier IEEE Electro: Although the TK-MIP software is general purpose, TeK Associates originally demonstrated this software in Boston, MA on 21-23 June 1995 for (1) the situation of an aircraft equipped with an inertial navigation system (INS) using a Global Positioning System (GPS) receiver for periodic NAV updates; and (2) several benchmark test problems of known closed-form solution (for comparison purposes in verification\validation). These are both now included within the package as standard examples to help familiarize new Users with the workings of TK-MIP.

This TK-MIP software program utilizes a discrete-time SYSTEM dynamics state variable model of the following form:

                                x(k+1) = A x(k) + F w(k) + [ B u(k) ],

with

                                x(0) = x0 (the initial condition)

and the discrete-time Sensor MEASUREMENT (data) observation model is of the following algebraic form:

                                z(k) = C x(k) + G v(k),

where:

                          x(k) is the system state at time-step k, 

                         A is the N by N System Transition Matrix,

                          z is the observed sensor data measurements,

                          C is the M by N Observation Matrix,

                          w(k) & v(k) are INDEPENDENT Gaussian white noise

                                    (GWN) sequences with specified variances: Q & R, 

                          F is the process noise Gain Matrix,

                          G is the measurement noise Gain Matrix,

                          u is an (optional) exogenous control input sequence,

                          B is the (optional) Gain Matrix for the above control.

[The above matrices A, C, F, G, B, Q, R can be time-varying explicit functions of the time index k or be merely constant matrices.]

The white noise w(.):

·         is of zero mean: E [ w(k) ] = 0 for all time steps k,

·         is INDEPENDENT (uncorrelated) as

            E [ w(k) wT(j) ] = 0 for all k not equal to j,

      and as

          E [ w(k) wT(k) ] = Q for all k,

      where 

           Q = QT > 0 or Q = QT ≥ 0,

       (i.e., Q is a symmetric and positive semi-definite matrix),

·         is Gaussianly distributed [denoted by w(k) ~ N(0,Q) ] at each time-step k,

·         is INDEPENDENT of the Gaussian initial condition x(0) as: 

            E [ x(0) wT(k) ] = 0 for all k,

      and is also nominally INDEPENDENT of the Gaussian white measurement noise v(k):

          E [ w(k) vT(j) ] = 0 for all k not equal to j,

      and likewise for the definition of the zero mean measurement noise v(k) except that its variance is R, where in the above  

              wT(·)

      represents the transpose of w(·) and the symbol E [ · ] denotes the standard unconditional expectation operator.
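To make the notation above concrete, the following Python/numpy sketch simulates the discrete-time model x(k+1) = A x(k) + F w(k) and z(k) = C x(k) + G v(k) and runs the standard Kalman filter recursion on the simulated data. It is an illustrative sketch only (all numerical values are made up for the example), not TK-MIP code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-state constant-velocity model, position-only measurements
A = np.array([[1.0, 1.0], [0.0, 1.0]])      # N x N transition matrix
F = np.eye(2)                                # process-noise gain matrix
C = np.array([[1.0, 0.0]])                   # M x N observation matrix
G = np.eye(1)                                # measurement-noise gain matrix
Q = np.diag([1e-4, 1e-3])                    # process-noise covariance
R = np.array([[0.25]])                       # measurement-noise covariance

K_steps = 100
x = np.array([0.0, 1.0])                     # true initial state x(0)
x_hat = np.zeros(2)                          # filter's initial estimate
P = np.eye(2)                                # initial error covariance P0

for k in range(K_steps):
    # --- truth simulation ---
    w = rng.multivariate_normal(np.zeros(2), Q)
    x = A @ x + F @ w                        # (optional B u(k) term omitted)
    v = rng.multivariate_normal(np.zeros(1), R)
    z = C @ x + G @ v

    # --- Kalman filter: time update (predict) ---
    x_hat = A @ x_hat
    P = A @ P @ A.T + F @ Q @ F.T

    # --- Kalman filter: measurement update (correct) ---
    S = C @ P @ C.T + G @ R @ G.T            # innovation covariance
    K = P @ C.T @ np.linalg.inv(S)           # Kalman gain
    x_hat = x_hat + K @ (z - C @ x_hat)
    P = (np.eye(2) - K @ C) @ P

print("final estimate:", x_hat, " true state:", x)
```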

Elaborating further on TK-MIP Version 2.0 CURRENT CAPABILITIES: This Software package can:

·        SIMULATE any Gauss-Markov process specified in linear time-invariant (or time-varying) state space form, of arbitrary dimension N.

·        TEST for WHITENESS and NORMALITY of SIMULATED Gaussian White Noise (GWN) sequence: w(k) for k=1, 2, 3,... (from Pseudo-Random Number [PRN] generator).
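A minimal sketch of the kind of whiteness and normality check listed in the item above, assuming Python with numpy and scipy (the 1.96/sqrt(K) confidence band and the D'Agostino-Pearson normality test are our illustrative choices, not necessarily the tests TK-MIP itself applies):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
w = rng.standard_normal(2000)                 # simulated GWN sequence w(k)
K = len(w)

# Whiteness: sample autocorrelation at nonzero lags should stay inside the
# approximate 95% band +/- 1.96/sqrt(K) for a genuinely white sequence.
w0 = w - w.mean()
acf = np.correlate(w0, w0, mode="full")[K - 1:]
acf = acf / acf[0]                            # normalize so acf[0] = 1
band = 1.96 / np.sqrt(K)
white_ok = np.all(np.abs(acf[1:21]) < band)   # inspect the first 20 lags

# Normality: D'Agostino-Pearson K^2 test from scipy.stats
stat, p_value = stats.normaltest(w)
normal_ok = p_value > 0.05

print(f"whiteness (first 20 lags inside +/-{band:.3f} band): {white_ok}")
print(f"normality p-value: {p_value:.3f} -> {'accept' if normal_ok else 'reject'}")
```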

·     SIMULATE sensor observation data as the sum of a Gauss-Markov process, C x(k), and an INDEPENDENT Gaussian White Measurement Noise, G v(k).

·     DESIGN and RUN a Kalman FILTER with specified  gains, constant or propagated, on REAL (Actual) or SIMULATED observed sensor measurement data.

·        DESIGN and RUN a Kalman SMOOTHER (now also known as Retrodiction) or a Maximum Likelihood Batch Least Squares algorithm on REAL (actual) or SIMULATED data.

·        DISPLAY both DATA and ESTIMATES graphically in COLOR  and\or output to a printer or to a file (in ASCII).

·        CALCULATE SIMULATOR and ESTIMATOR response to ANY (Optional) Control Input.

·        EXHIBIT the behavior of the Error Propagation (Riccati) Equation (its covariance recursion is illustrated in a sketch a few items below).

·        For full generality, unlike most other Kalman filter software packages or add-ons (such as MATLAB with SIMULINK and their associated Control Systems toolkit), TK-MIP avoids using Schur computational techniques, which are only applicable to a narrowly restricted set of time-invariant system matrices. TK-MIP also avoids invoking the Matrix Signum function for any calculations because it routinely fails for marginally stable systems (a condition which is not usually warned of prior to invoking such calculations within MATLAB®). See Notes in Reference [2] at the bottom of this web page for more perspective.

·        From the current session, the User can SAVE-ALL at the END of each Major Step (to gracefully accommodate any external interruptions imposed upon the USER) or RESURRECT-ALL results previously saved earlier, even from prior sessions. [This feature is only available for the simpler situation of linear time-invariant (LTI) models and not for time-varying models, nor for nonlinear situations, nor for Interactive Multiple Model (IMM) filtering situations, all of which can be handled within TK-MIP after slight modifications (as directed by the User from the MAIN MENU, Model Specification, and Filtering screens); these more challenging scenarios do NOT offer the SAVE-ALL and RESURRECT-ALL capability because of the additional layers of complexity encountered for these special cases, which cause them to not be amenable to being handled within a single structural form.]

·        OFFER EXACT discrete-time EQUIVALENT to continuous-time white noise (as a User option) for greater accuracy in SIMULATION (and closer agreement between underlying model representation in FILTERING and SMOOTHING).
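One widely used way to construct the exact discrete-time equivalent of continuous-time white noise mentioned in the item above is Van Loan's matrix-exponential method; the sketch below (Python with scipy) is an illustrative implementation under that assumption, not TK-MIP's internal routine.

```python
import numpy as np
from scipy.linalg import expm

def van_loan_discretize(A_c, F_c, Q_c, dt):
    """Exact discrete equivalent (A_d, Q_d) of dx/dt = A_c x + F_c w, where
    E[w(t) w(tau)'] = Q_c * delta(t - tau), over a sample interval dt."""
    n = A_c.shape[0]
    M = np.zeros((2 * n, 2 * n))
    M[:n, :n] = -A_c
    M[:n, n:] = F_c @ Q_c @ F_c.T
    M[n:, n:] = A_c.T
    E = expm(M * dt)
    A_d = E[n:, n:].T                 # transition matrix over dt
    Q_d = A_d @ E[:n, n:]             # exact discrete process-noise covariance
    return A_d, Q_d

# Illustrative double-integrator driven by white acceleration noise
A_c = np.array([[0.0, 1.0], [0.0, 0.0]])
F_c = np.array([[0.0], [1.0]])
Q_c = np.array([[0.2]])
A_d, Q_d = van_loan_discretize(A_c, F_c, Q_c, dt=0.5)
print(A_d, Q_d, sep="\n")
```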

·        AVAIL the User with special TEST PROBLEMS\TEST CASES (of known closed-form solution) to confirm PROPER PERFORMANCE of any software of this type.

·        OFFER ACCESS to time histories of User-designated Kalman Filter\Smoother COVARIANCES (in order to avail a separate autonomous covariance analysis capability). Benefit of having a standard Covariance Analysis capability is that it can be used to establish Estimation Error Budget Specifications (before systems are built).
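The covariance-analysis capability described in the item above needs no measurement data at all: the error-covariance (Riccati) recursion can be propagated by itself to produce 1-sigma error-budget time histories. A minimal illustrative Python sketch (matrix values made up) follows:

```python
import numpy as np

def covariance_history(A, C, F, G, Q, R, P0, steps):
    """Propagate the discrete Riccati (error-covariance) recursion with no data:
    predict P = A P A' + F Q F', then correct through the Kalman gain."""
    P = P0.copy()
    history = []
    for _ in range(steps):
        P = A @ P @ A.T + F @ Q @ F.T                   # time update
        S = C @ P @ C.T + G @ R @ G.T                   # innovation covariance
        K = P @ C.T @ np.linalg.inv(S)                  # Kalman gain
        P = (np.eye(A.shape[0]) - K @ C) @ P            # measurement update
        history.append(np.sqrt(np.diag(P)))             # 1-sigma per state
    return np.array(history)

# Illustrative numbers only
A = np.array([[1.0, 1.0], [0.0, 1.0]]); C = np.array([[1.0, 0.0]])
F = np.eye(2); G = np.eye(1)
Q = np.diag([1e-4, 1e-3]); R = np.array([[0.25]]); P0 = 10.0 * np.eye(2)
sigmas = covariance_history(A, C, F, G, Q, R, P0, steps=50)
print("steady-state 1-sigma values:", sigmas[-1])
```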

·        ACCEPT output transformations to change coordinate reference for the Simulator, the Filter state OUTPUTS and associated Covariance OUTPUTS [a need that arises in NAVIGATION and Target Tracking applications as a User-specified time-varying orthogonal transformation with User-specified coordinate off-set (also known as possessing a specified time-varying bias)]. TK-MIP supplies a wide repertoire of pre-tested standard coordinate transforms for the user to choose from or to concatenate to meet their application needs (which avoids the need to insert less well validated User code here for this purpose). 

·        OFFER Pade Approximation Technique as a more accurate alternative (for the same number of terms retained) to use of standard Taylor Series approach for calculating the Transition Matrix or Matrix Exponential.

·        PERFORM CHOLESKI DECOMPOSITION (to specify F and\or G from specified Q and\or R) as

Q = F · FT,

      and as 

R = G · GT,

      where outputted decomposition factors FT and GT are upper triangular.

·        CHOLESKI DECOMPOSITION can also be used to investigate a matrix’s positive definiteness\ semi-definiteness (as arises for Q, R, P0, P, and M [defined further below]).
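Both uses of the Choleski decomposition listed above can be illustrated with numpy; note that numpy returns the lower-triangular factor F, so its transpose is the upper-triangular FT referred to in the text. This is an illustrative sketch, and the fallback eigenvalue test is our own choice for distinguishing semi-definite from indefinite matrices:

```python
import numpy as np

Q = np.array([[4.0, 2.0],
              [2.0, 3.0]])                 # illustrative covariance matrix

# Factor Q = F FT; numpy returns the lower-triangular factor F,
# so FT is the upper-triangular factor referred to in the text.
F = np.linalg.cholesky(Q)
print(np.allclose(F @ F.T, Q))             # True

def is_positive_semidefinite(M, tol=1e-12):
    """Choleski succeeds only for positive-definite M; fall back to an
    eigenvalue check to distinguish semi-definite from indefinite."""
    try:
        np.linalg.cholesky(M)
        return True                        # positive definite
    except np.linalg.LinAlgError:
        return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

print(is_positive_semidefinite(Q))                                   # True
print(is_positive_semidefinite(np.array([[1.0, 2.0], [2.0, 1.0]])))  # indefinite -> False
```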

·        PERFORMS Matrix Spectral Factorization (to handle any serially time-correlated noises encountered in application system modeling by putting them in the standard KF form via state augmentation) [e.g., In the frequency domain, the known associated matrix power spectrum is factored to be of the form

Sww(s) = WT(-s) · W(s),

      (where s is the bilateral Laplace Transform variable) then one can perform a REALIZATION from one of the above output factors as

 WT(-s) = C2 (s·Inxn - A2)^(-1) F2,

      to now accomplish a complete specification of the three associated subsystem matrices on the right hand side above (where both Matrix Spectral Factorization and specifying system realizations of a specified Matrix Transfer Function of the above form are completely automated within TK-MIP). The above three matrices are used to augment the original state variable representation of the system as [C|C2], [A|A2], [F|F2] so that the newly augmented system (now of a slightly higher system state dimension) ultimately has only GWN contaminating it as system and measurement noises (again putting the associated resulting system into the standard form to which Kalman filtering directly applies).]
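The full matrix version of spectral factorization is beyond a short sketch, but the state-augmentation step it feeds can be illustrated for the simplest scalar case: a first-order Markov (exponentially time-correlated) measurement noise, generated by a one-state shaping filter driven by white noise, is appended to the state vector so that the standard Kalman form applies. All numbers below are illustrative, and this is our sketch rather than TK-MIP's internal code:

```python
import numpy as np

# Original discrete model: x(k+1) = A x(k) + F w(k),  z(k) = C x(k) + n(k),
# where n(k) is exponentially correlated (first-order Markov) rather than white:
#     n(k+1) = a n(k) + e(k),  with  a = exp(-dt / tau)  and  e(k) white.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
F = np.eye(2)
C = np.array([[1.0, 0.0]])
dt, tau = 1.0, 5.0
a = np.exp(-dt / tau)                      # shaping-filter pole

# Augment the noise state: x_a = [x; n].  The augmented system is driven
# only by white noise, so the standard Kalman filter applies directly.
A_aug = np.block([[A,                np.zeros((2, 1))],
                  [np.zeros((1, 2)), np.array([[a]])]])
F_aug = np.block([[F,                np.zeros((2, 1))],
                  [np.zeros((1, 2)), np.array([[1.0]])]])
C_aug = np.hstack([C, np.array([[1.0]])])  # measurement sees x plus the noise state

print(A_aug, F_aug, C_aug, sep="\n")
```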

·        OUTPUT results:

*         to the MONITOR Screen DISPLAY,

*         to the PRINTER (as a User option) and can have COMPRESSED OUTPUT by sampling results at LARGER time steps (or at the SAME time step) or for fewer intermediate variables,

*         to a FILE (ASCII) on the hard disk (as a User option) [separately from SIMULATOR, Kalman FILTER, Kalman SMOOTHER (for both estimates and covariance time histories)].

·        OUTPUTS final Pseudo Random Number (PRN) generator seed value so that if subsequent runs are to resume, with START TIME being the same as prior FINAL TIME, the NEW run can dovetail with the OLD as a continuation of the SAME sample function (by using OLD final seed for PRN as NEW starting seed for resumed PRN).
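The seed hand-off described above can be mimicked in Python by carrying the generator state across runs, as in this illustrative numpy sketch (TK-MIP's own PRN bookkeeping differs in detail):

```python
import numpy as np

# Run 1: simulate the first segment of a sample function, then save the state.
rng = np.random.default_rng(seed=42)
segment_1 = rng.standard_normal(500)
saved_state = rng.bit_generator.state        # "final seed" of the OLD run

# Run 2 (possibly in a later session): restore the state so the NEW run
# continues the SAME sample function rather than restarting it.
rng_resumed = np.random.default_rng()
rng_resumed.bit_generator.state = saved_state
segment_2 = rng_resumed.standard_normal(500)

# segment_1 followed by segment_2 forms one continuous realization.
full_run = np.concatenate([segment_1, segment_2])
print(full_run.shape)
```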

·        SOLVE FOR Linear Quadratic Gaussian\Loop Transfer Recovery (LQG\LTR) OPTIMAL Linear feedback Regulator Control [of the following feedback form, respectively, involving either the explicit state or the estimated state, depending upon which is more conveniently available in the particular application, either as:

      u(k) = - M(k) x(k),

      or as

      u(k) = - M(k) x̂(k),

      for both by utilizing a planning interval forward in time either over a FINITE horizon or over an INFINITE horizon cost index (i.e., as transient or steady-state cases, respectively, where M in the above is constant only for the steady-state case). Strictly speaking, only the last expression is LQG\LTR since the former expression is LQ feedback control.
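The gain M(k) above comes from a backward Riccati recursion on the control side (the dual of the filtering recursion). A minimal finite-horizon LQ sketch in Python follows, with illustrative weights; the LQG\LTR case applies the same gain to the Kalman estimate x̂(k) instead of the true state:

```python
import numpy as np

def finite_horizon_lq_gains(A, B, Qc, Rc, horizon):
    """Backward Riccati recursion for u(k) = -M(k) x(k) minimizing
    the quadratic cost sum of x'Qc x + u'Rc u over the planning horizon."""
    S = Qc.copy()                       # terminal cost weight
    gains = []
    for _ in range(horizon):
        M = np.linalg.inv(Rc + B.T @ S @ B) @ (B.T @ S @ A)
        S = Qc + A.T @ S @ (A - B @ M)
        gains.append(M)
    gains.reverse()                     # gains[k] is M(k), time-varying in general
    return gains

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
Qc = np.eye(2); Rc = np.array([[1.0]])
gains = finite_horizon_lq_gains(A, B, Qc, Rc, horizon=50)
# gains[0] (start of a long horizon) is close to the steady-state gain
print("M(0) =", gains[0], "  M(N-1) =", gains[-1])
```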

·        Provide Password SECURITY capability, compliant with the National Security Agency’s (NSA’s) Greenbook specifications (Reference [3] at the bottom of this web page), to prevent general unrestricted access to data and units utilized in applications in TK-MIP that may be sensitive or proprietary. It is mandatory that easy access by outsiders be prevented, especially for Navigation applications, since enlightened extrapolations from known gyro drift-rates in airborne applications can reveal targeting, radio-silent rendezvous, and bombing accuracies [which are typically Classified]; hence critical NAV system parameters (of internal gyros and accelerometers) are usually Classified as well, except for those that are so very coarse that they are of no interest in tactical or strategic missions.

·        Offer information on how the User may proceed to get an appropriate mathematical model representation for common applications of interest by offering concrete examples & pointers to prior published precedents in the open literature, and provide pointers to third party software & planned updates to TK-MIP to eventually include this capability of model inference/model-order and structure determination from on-line measured data. Indeed, a journal to provide such information has begun, entitled Mathematical Modeling of Systems: Methods, Tools and Applications in Engineering and Related Sciences, Swets & Zeitlinger Publishers, P.O. Box 825, 2160 SZ LISSE, Netherlands (Mar. 1995).

Click here to get the (draft copy) of AIAA Standards for Atmospheric Modeling, as specified by the various U.S. and other International agencies.

Click here to get the (draft copy) of AIAA Standards for Flight Mechanics Modeling, as specified by the pertinent responsible U.S. and other International agencies.

·        Depicts other standard alternative symbol notations (and identifies their source as a precedent) that have historically arisen within the Kalman filter context and that have been adhered to (for a while, anyway) by pockets of practitioners. This can be an area of considerable confusion, especially when notation has been standardized for decades and then modern writers (unaware of the prior standardization) use different notation in their more recent textbooks on the subject, thus driving a schism between the old and new generations of practitioners. “A rose by any other name....”

·        Provides a mechanism for our “shrink-wrap” TK-MIP software product to perform Extended Kalman Filtering (EKF) and be compatible with other PC-based software (accomplished through successful cross-program or inter-program communications and hand-shaking and facilitated by TeK Associates recognizing and complying with existing Microsoft Software Standards for software program interaction such as abiding by that of ActiveX or COM).  Therefore, TK-MIP® can be used either in a stand alone fashion or in conjunction with other software for performing the estimation and tracking function, as indicated below:  

TK-MIP® can interact with other software (by having adhered to the Microsoft COM standard):

[Diagram: TeK Associates’ TK-MIP® intercommunicates to and fro via COM API’s with other software abiding by COM, such as AGI’s STK®§, NI’s LabView®, The MathWorks’ MatLab®/Simulink®, or even MathSoft’s MathCad®.]

§AGI also provides their HTTP/IP-based CONNECT® API methodology to enable cross-communication with other external software programs (as well as providing the more recent COM/ActiveX option) and AGI promises to continue to support CONNECT® in order to have complete backwards compatibility with what older versions of STK could do.

Linux Operating Systems can also be accommodated using Mono®, a program that allows Linux Operating Systems to run .NET® applications (i.e., Microsoft products developed in Studio.NET normally require at least Windows XP or Windows 2000 as the host Operating System for .NET applications). Version 1 of Mono® was released by 29 October 2005. (By July 2007, two other software products, Mainsoft® and Wine®, had also emerged for providing compatibility of Windows-based software with a Linux Operating System.)

While we vigorously vouch for the veracity of our TK-MIP software, for routine legal reasons we must invoke the standard “shrink-wrap” software disclaimers and other prudent limitations on any liability exposure.

·     We also offer an improved methodology for implementing an Iterated EKF (IEKF), all within TK-MIP. However, for these less standard application specializations, additional intermediate steps must be performed by the User external to TK-MIP, using symbol manipulation programs (such as Maple®, Mathematica®, MacSyma®, Reduce, Derive®, etc.) to specify the 1st derivative Jacobians in closed-form, as needed (or else just obtain this necessary information manually or from previously published results), and then enter it into TK-MIP where needed and as prompted (a small illustration of this Jacobian step appears below). TK-MIP also provides a mechanism for performing Interactive Multiple Model (IMM) filtering for both the linear and nonlinear cases (where applicability of on-line probability calculations for the nonlinear case is perhaps more heuristic).
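As one freely available alternative to the symbol-manipulation programs named above, sympy can produce the closed-form Jacobian needed by the EKF/IEKF step; the dynamics below are purely illustrative:

```python
import sympy as sp

# Illustrative nonlinear discrete dynamics f(x); the EKF needs the Jacobian
# d f / d x evaluated along the current estimate.
x1, x2, dt = sp.symbols("x1 x2 dt")
f = sp.Matrix([x1 + dt * sp.sin(x2),
               x2 + dt * sp.cos(x1)])
jacobian = f.jacobian(sp.Matrix([x1, x2]))
print(jacobian)            # closed-form Jacobian, ready to transcribe where prompted

# Numerical evaluation at a specific estimate (dt = 0.1, x = [0.2, 0.3]):
print(jacobian.subs({x1: 0.2, x2: 0.3, dt: 0.1}))
```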

·    TK-MIP utilizes the J-V-C approach to MTT. The Kalman filtering technology of either a standard Kalman Filter or an EKF or an Interactive Multiple Model (IMM) bank-of-filters appears to be more suitable for use with Multitarget Tracking (MTT) data association algorithms (as input for the initial stage of creating “gates” by using on-line real-time filter-computed covariances (more specifically, their square roots or standard deviations) centered about the prior “best” computed target estimate in order to associate newly received measurements with existing targets or to spawn new targets for those measurements with no prior target association) than, say, use of Kalman smoothing, retrodiction, or Batch Least Squares Maximum Likelihood (BLS) curve-fits, because the former impose a fixed, a priori known, in-place computational burden in CPU time and computer memory size allocations, which is not the case with BLS and the other “smoothing” variants. Examples of alternative algorithmic approaches to implementing Multi-target tracking (MTT) in conjunction with Kalman Filter technology (in roughly historical order) are through the joint use of either (1) Munkres’ algorithm, (2) the generalized Hungarian algorithm, (3) Murty’s algorithm (1968), (4) the zero-one Integer Programming approach of Morefield, (5) Jonker-Volgenant-Castanon (J-V-C), or (6) Multiple Hypothesis Testing [MHT], all of which either assign radar-returns-to-targets or targets-to-radar-returns, respectively, like assigning resources to tasks as a solution to the “Assignment Problem” of Operations Research. Also see the recent discussion of the most computationally burdensome MHT approach in Blackman, S. S., “Multiple Hypothesis Tracking for Multiple Target Tracking,” Systems Magazine Tutorials of IEEE Aerospace and Electronic Sys., Vol. 19, No. 1, pp. 5-18, Jan. 2004. Use of track-before-detect in conjunction with approximate or exact GLR has some optimal properties (as recently recognized in 2008 IEEE publications) and is also a much lesser computational burden than MHT. Also see Miller, M. L., Stone, H. S., Cox, I. J., “Optimizing Murty’s Ranked Assignment Method,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 33, No. 7, pp. 851-862, July 1997. Another: Frankel, L., and Feder, M., “Recursive Expectation-Maximizing (EM) Algorithms for Time-Varying Parameters with Applications to Multi-target Tracking,” IEEE Trans. on Signal Processing, Vol. 47, No. 2, pp. 306-320, February 1999. Yet another: Buzzi, S., Lops, M., Venturino, L., Ferri, M., “Track-before-Detect Procedures in a Multi-Target Environment,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 44, No. 3, pp. 1135-1150, July 2008. Mahdavi, M., “Solving NP-Complete Problems by Harmony Search,” pp. 53-70 in Music-Inspired Harmony Search Algorithms, Zong Woo Geem (Ed.), Springer-Verlag, NY, 2009.
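For readers who want to experiment with the assignment step outside TK-MIP, SciPy's linear_sum_assignment routine (documented as a modified Jonker-Volgenant method) solves the same measurement-to-track cost-matrix problem; the cost values below are illustrative gate distances only:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative cost matrix: rows = existing tracks, columns = new measurements,
# entries = normalized (Mahalanobis-style) distances from the gating step.
cost = np.array([[0.4, 5.1, 9.0],
                 [4.8, 0.3, 7.7],
                 [8.9, 6.2, 0.9]])

rows, cols = linear_sum_assignment(cost)       # optimal one-to-one assignment
for track, meas in zip(rows, cols):
    print(f"track {track} <- measurement {meas}  (cost {cost[track, meas]:.1f})")
print("total assignment cost:", cost[rows, cols].sum())
```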

·    Current and prior versions of TK-MIP were designed to handle out-of-sequence sensor measurement data as long as each individual measurement  is time-tagged (synonym: time-stamped), as is usually the case with modern data sensors. Out-of-sequence measurements are handled by TK-MIP only when it is used in the standalone mode. When TK-MIP is used via COM within another application, the out-of-sequence sensor measurements must be handled at the higher level by that specific application since TK-MIP usage via COM will intentionally be handling one measurement at a time (either for a single Kalman filter, for IMM, or for Extended Kalman filter). However, in the COM mode, TK-MIP also outputs the transition matrix from the prior measurement to the current measurement, as needed for higher level handling of out-of-sequence measurements. Proper methodology for handling these situations is discussed in Bar-Shalom, Y., Chen, H., Mallick, M., “One-Step Solution for the Multi-step Out-Of-Sequence-Measurement Problem in Tracking,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 40, No. 1, pp. 27-37, Jan. 2004, in Bar-Shalom, Y., Chen, H., “IMM Estimator with Out-of-Sequence Measurements,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 41, No. 1, pp. 90-98, Jan. 2005, and in Bar-Shalom, Y., Chen, H., “Removal of Out-Of-Sequence Measurements from Tracks,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 45, No. 2, pp. 612-619, Apr. 2009.

·     One further benefit of using TK-MIP is that, by utilizing a state variable formulation exclusively rather than the “wiring diagrams” that our competitors use, it is more straightforward to recognize algebraic loops, which may occur within feedback configurations. Identifying any algebraic loops that exist allows this recognizable structure to be isolated and exploited to advantage by then invoking a descriptor system formulation, which actually reduces the number of integrators required for implementation and thus reduces the total computational burden of integrating the underlying differential equations constituting the particular system’s representation as its fundamental mathematical model. Our competitors, with their “wiring diagrams” or block diagrams, typically invoke “Gear [8] integration techniques” as the option to use when algebraic loops are encountered and then just plow through them with massive computational force rather than with finesse, since in complicated wiring diagrams the algebraic loops are not as immediately recognizable nor as easily isolated, and “Gear integration techniques” are notoriously large computational burdens and CPU-time sinks.

·        Hooks and tutorials are already present and in-place for future add-ons for model parameter identification, robust control, robust Kalman filtering, and a multi-channel approximation to maximum entropy spectral estimation (exact in the SISO case). The last algorithm is important for handling the spectral analysis of multi-channel data that is likely correlated (e.g., in-phase and quadrature-phase signal reception for modulated signals, primary polarization and orthogonal polarization signal returns from the same targets for radar, principal component analysis in statistics). The standard Burg algorithm is exact but merely Single Input-Single Output (SISO), as are standard lattice filter approaches to spectral estimation. The current situation is analogous to 50 years ago in network synthesis, when Van Valkenburg, Guillemin, and others showed how to implement SISO transfer functions by “Reza resistance extraction”, or multiple Cauer, and Foster form alternatives, but had to wait for Robert W. Newcomb to lead the way in 1966 in synthesizing MIMO networks (in his Linear Multiport Synthesis textbook) based on a simpler early version of Matrix Spectral Factorization. Harry Y.-F. Lam’s (Bell Telephone Laboratories, Inc.) 1979 Analog and Digital Filters: Design and Realization textbook correctly characterizes Digital Filtering at that time as largely a re-packaging and re-naming of the Network Synthesis results of the prior 30 years, but with a different slant.

APPLICATIONS: Over the past forty-nine years, Kalman filters (KF) have been used in telephone line echo-cancellers, missiles, aircraft, ships, submarines, tanks that shoot-on-the-run, air traffic control (ATC) radars, defense and targeting radar, Global Positioning System (GPS) sets, and other standard navigation equipment. Historically, Kalman filters have even been used in modeling to account for the deleterious effect of human reaction times on the ultimate performance of a control system having a man-in-the-loop. In recent years, GPS\Kalman filter combinations in conjunction with laser disk-based digital map technology are being considered for use in future automobiles (as well as in ships using displays rather than paper charts) to tell the driver\pilot where he is and how to get where he wants to be. Commercial products as well as military vehicles and platforms rely on Kalman filters. Computers are used to implement Kalman filters and to test out their performance (assess the speed of response and the accuracy of their estimates) beforehand in extensive simulations. Indeed, Kalman filter variations are now being used for supervised learning in some Neural Networks. (For more examples, see the special March 1983 issue of IEEE Transactions on Automatic Control, Vol. AC-28, No. 3, entirely devoted to other less well known applications of Kalman filters. Also see March 1982 NATO AGARDograph No. 256 and February 1970 NATO AGARDograph No. 139 for standard applications.) [How to maximally exploit the inherent mathematical duality between Kalman filters and Linear Quadratic regulators is explained in: (1) Kristin M. Strong and John R. Sesak, “Optimal Compensator Design with One Riccati Equation,” A Collection of Technical Papers, Pt 1, Proceedings of AIAA Guidance, Navigation, and Control Conference, Portland, Oregon, pp. 105-113, 20-22 Aug. 1990. (2) Lin-Fan Wei and Fang-Bo Yeh, “On Dual Algebraic Riccati Equations,” IEEE Transactions on Automatic Control, Vol. AC-36, No. 4, pp. 511-512, Apr. 1991. (3) Blackmore, P. A., Bitmead, R. R., “Duality Between the Discrete-Time Kalman Filter and LQ Control Law,” IEEE Transactions on Automatic Control, Vol. AC-40, No. 8, pp. 1442-1443, Aug. 1995.]

Acknowledging Possible Limitations of TK-MIP: In the interest of full disclosure, TeK Associates admits to and warns potential users that, unlike what is available within MatLab, TK-MIP does not inherently possess the capability of exactly modeling as wide an array of transcendental functions, but it definitely does admirably handle the most prevalent and most fundamental ones. TK-MIP does handle the expected standard rational functions (i.e., ratios of polynomials), squareroot, exponential, sine, cosine, tangent, and y^x, as well as those other 21 math functions that are derived from these intrinsic math functions (in well-known ways that TK-MIP offers as a summary reminder in a list immediately below) and composite function combinations of the aforementioned standard functions. Additionally, TK-MIP does, in fact, offer several of the most frequently encountered transcendental functions (as specifically called out following the 21 functions listed in the table below).

However, the capability that TK-MIP does offer should suffice in all practical engineering and scientific situations. Physical systems rarely (if ever) need anything more than what TK-MIP already provides. [By limiting TK-MIP in this way to have only the most likely functions that arise in practice be internally available, we are able to keep the TK-MIP computer resource usage footprint reasonably small thus avoiding unnecessary memory overhead and expensive add-on toolboxes.] Beyond what TK-MIP offers right “out-of-the-box”, arbitrary transcendental functions can always be closely approximated using TK-MIP’s standard repertoire of functions (using well-known equivalences found in Abramowitz and Stegun’s 1964 book, Handbook of Mathematical Functions) or by using either rational function approximations, using Taylor series expansions, using Pade approximations, or using “spline” function interpolated approximations or any other interpolation function approach (e.g., radial basis functions) to whatever degree of closeness desired. 

TeK Associates is well aware that many special transcendental functions arise in conjunction with solving Maxwell’s equations, encountered along with the resulting Sturm-Liouville 2nd order ODE’s and their associated orthogonal function solution series used to match the inherent physical application boundary conditions obtained when separation-of-variables is applied to Maxwell’s Wave Equation or to the more specialized Helmholtz Equation (encountered when a single frequency sinusoidal excitation is exclusively present throughout Maxwell’s Wave Equation). However, the well known and enumerated “eleven distinctly different coordinate systems of Eisenhart” (depending upon what symmetry is present, if any), for the Neumann, Dirichlet, or Cauchy mixed boundary matching situations just described, pertain to physical systems represented by Partial Differential Equations (PDE’s) and, again, TK-MIP is expected to be used exclusively for physical and theoretical systems described by Ordinary Differential Equations (ODE’s) [except for TK-MIP’s single 2D Image processing example screen].

If the user is really in a bind to get an answer quickly for some nonstandard cases beyond what TK-MIP was designed to anticipate handling, the user can always resort to use of either Maple®, Mathematica, MacSyma, Reduce, or Derive® to obtain the associated transition matrix needed for his problem, to evaluate it numerically, and then to feed it back into TK-MIP since TK-MIP accepts pre-calculated transition matrices or any other matrices that constitute a valid state variable model (as accessed via TK-MIP’s Model Specification Menu Screen [which accepts AutoRegressive Moving Average models ARMA/AR/MA as well and automatically converts them to state variable form by well-known prescribed algorithmic rules]). 
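The ARMA/AR-to-state-variable conversion mentioned above follows well-known companion-form rules; a minimal AR(p) illustration in Python is sketched below (the coefficients are made up, and this is one standard realization choice, not TK-MIP's internal routine):

```python
import numpy as np

def ar_to_state_space(a):
    """Realize y(k) = a1*y(k-1) + ... + ap*y(k-p) + w(k) in companion form
    x(k+1) = A x(k) + F w(k), y(k) = C x(k), with x holding the last p outputs."""
    p = len(a)
    A = np.zeros((p, p))
    A[0, :] = a                    # first row holds the AR coefficients
    A[1:, :-1] = np.eye(p - 1)     # sub-diagonal shifts past outputs down
    F = np.zeros((p, 1)); F[0, 0] = 1.0
    C = np.zeros((1, p)); C[0, 0] = 1.0
    return A, F, C

A, F, C = ar_to_state_space([0.6, -0.2, 0.1])   # illustrative AR(3) coefficients
print(A, F, C, sep="\n")
```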

Of course, TK-MIP routinely includes the standard matrix exponential algorithm (based on a multi-variable Taylor series expansion) for calculating the transition matrix for Linear Time-Invariant systems (or a small time interval approximation of a time-varying linear or even of a nonlinear system by such an LTI system). Similarly, TK-MIP also includes the alternative Pade approximate approach to computing the matrix exponential (with the numerical benefit that the truncation error inevitably incurred, instead of being worst at the point furthest from the point of series expansion, vacillates above and below over the entire step), a beneficial property of using Pade approximations that has been well-known for over 40 years.
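The difference between the truncated-Taylor and Pade routes just described can be seen numerically: scipy.linalg.expm uses a scaling-and-squaring Pade method, and the truncated Taylor sum below is the naive alternative (the system matrix is illustrative):

```python
import numpy as np
from scipy.linalg import expm

def transition_matrix_taylor(A, dt, terms=8):
    """Truncated Taylor series for the transition matrix Phi = exp(A*dt)."""
    Phi = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ (A * dt) / k
        Phi = Phi + term
    return Phi

A = np.array([[0.0, 1.0],
              [-4.0, -0.4]])                  # illustrative lightly damped oscillator
dt = 0.5
Phi_taylor = transition_matrix_taylor(A, dt, terms=8)
Phi_pade = expm(A * dt)                       # scaling-and-squaring Pade method
print("max |difference|:", np.abs(Phi_taylor - Phi_pade).max())
```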


Click here to download a 1MByte pdf file that serves as an example of our expertise in radar target tracking and in recent developments in Estimation Theory and Kalman Filtering.

TeK Associates’ stance on why GUI’s for Simulation & Implementation should subscribe to a State Variable perspective:

Before initially embarking on developing TK-MIP, we at TeK Associates surveyed what products were available for pursuing such computations typically encountered in the field of “Signals and Systems”. We continue to be up-to-date on what competitors offer. However, we are appalled that so many simulation packages, to date, use a Graphical User Interface (GUI) metaphor that harkens back to 45+ years ago (prior to Kalman’s break-through use of state variables in discovering the solution structure of the Kalman filter and of the optimal LQG feedback regulator, both accomplished in 1960). The idea of having to “wire together” various system components is totally avoided when the inherent state variable model structure is properly recognized and exploited to advantage for the insight that it provides, as done in TK-MIP. The cognizant analyst need only specify the necessary models in state variable form, and this notion is no longer somewhat foreign (as it was before 1960) but is prevalent and dominant today in the various technology areas that seek either simulations for analysis or real-time processing implementations for an encompassing solution mechanized on the PC or on some other adequately agile processors.

Why go backwards in time with technological representations? It is so much easier and clearer to “play the ball where it lies” (to use a metaphorical expression from the sport of golf). Likewise, Capital Equipment Corp.’s (CEC) contemporary software product, TesTPoint®, for accessing measurement data from various transducers in designing Test and Measurement and Data Acquisition solutions, also avoids implementing “wiring diagrams”. CEC makes somewhat similar chastising statements in the literature accompanying TesTPoint® (along the same lines as are mentioned above about not needing wiring diagrams), since CEC also recognizes that such a technological representation is no longer necessary (even though some engineers are evidently comfortable with such a familiar metaphor, it ultimately slows down progress since it is an unnecessary step that takes up time with busy work). CEC’s approach is object oriented. CEC is now a wholly owned subsidiary of National Instruments (NI), so blatantly provocative statements about lacking any need for “wiring diagrams” will likely be suppressed in the future since NI’s LabView and LabWindows rely on such constructs. Perhaps the old way is still so prevalent as a metaphor utilized within today’s GUI’s because it appeals to the comfort zone of archaic managers by being similar to the “wiring diagrams” that they had used in 1960’s era simulations with IBM’s Continuous System Modeling Program (CSMP®), in General Electric’s DYNASAR®, or in General Electric’s 1970’s version: Automated Dynamic Analyzer (ADA®). See Refs. [11] to [13]. One could reasonably ask “how does TK-MIP cope with situations where a time delay must be modeled without using a ‘wiring diagram’?” Answer: use a Pade rational function approximation to the ideal time delay, otherwise represented in the frequency domain as exp{-sTi}. An approximation to the delay involving two-term polynomials in both numerator and denominator, while being a Pade approximation to the ideal time delay, is also known in the parlance of Proportional-Integral-Derivative (PID) control as a lag network. It can be included or incorporated within the standard state variable representation of the system merely by augmenting the system state with these additional dynamics. Higher order polynomials in the numerator and denominator of more accurate Pade approximations may be handled similarly as merely slightly higher order system models. A Zero Order Hold (ZOH) is modeled using Pade approximations in Appendix A of [58]. An alternative is to implement the effect of the time delay as a differential-difference equation (of the retarded type) to be solved rather than as an ODE. [If more discussion on differential-difference equations is needed, a thorough explanation is provided on this specific topic by no less a mathematician than the late Richard Bellman himself in one of his books published in the 1960’s with coauthor Kenneth Cooke [51]; also see [52].]
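As one concrete reading of the delay-handling remarks above: the first-order Pade approximation exp{-sT} ≈ (1 - sT/2)/(1 + sT/2) is a single first-order lag-type element, so it adds exactly one state when augmented ahead of the plant. The Python sketch below builds that augmentation for an illustrative first-order plant (our illustration, not TK-MIP's internal code):

```python
import numpy as np

def augment_with_delay(Ap, Bp, T):
    """Append a first-order Pade approximation of a pure input delay exp(-s T),
    H(s) = (1 - sT/2)/(1 + sT/2), realized as xd' = -(2/T) xd + (4/T) u with
    u_delayed = xd - u, connected in series ahead of the plant (Ap, Bp)."""
    n = Ap.shape[0]
    Ad, Bd, Cd, Dd = -2.0 / T, 4.0 / T, 1.0, -1.0
    A_aug = np.block([[Ap,                Bp * Cd],
                      [np.zeros((1, n)),  np.array([[Ad]])]])
    B_aug = np.vstack([Bp * Dd, np.array([[Bd]])])
    return A_aug, B_aug

# Illustrative first-order plant x' = -x + u with a 0.3 s input delay
Ap = np.array([[-1.0]]); Bp = np.array([[1.0]])
A_aug, B_aug = augment_with_delay(Ap, Bp, T=0.3)
print(A_aug, B_aug, sep="\n")
```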

Checking a state space model for the presence or absence of proper components in its defining state variable 1st order vector ordinary differential equation (ODE) is more straightforward and actually corresponds to the proper connections existing within “wiring diagrams”. Use of state variables is closer to the underlying physics of the situation, as appropriately captured by the mathematics that describes it. The dynamic system corresponds to a coupled system of ordinary differential equations, whose solution evolves in time and requires certain boundary or initial conditions to be specified and associated technical conditions to be satisfied (in order to be uniquely described); while the measurement data collection, in general, merely corresponds to a set of algebraic equations that describe the situation. Additionally, noises may be present in both the dynamics and in the measurements. (TeK Associates acknowledges that certain process control applications, with a massive number of production steps, may prefer to represent the simulation model in block diagram form and to mask details at different hierarchical levels for simplicity in dealing with it altogether or for concealing proprietary details from the user who does not have a “need to know”.)

Use of Descriptor System representations enable other structural benefits to accrue:

(“Descriptor Systems” is the name used for a slight special case generalization of the usual state variable representation but which offers considerable computational advantages for specific system structures exhibiting algebraic loops within the system dynamics.)

using the minimum number of integrators to capture the essence and behavior of the candidate system, thus avoiding redundancies and extra expense associated with implementing more integrators than are actually needed (a benefit recognized in the 1960’s investigations of the minimum realization of a system, as pursued to avoid “unobservable” internal instabilities of an over-determined model);

allowing certain simplifying structures present to be recognized and exploited to an advantage such as certain systems being classified as being “bilinear” and consequently possessing a more tractable calculation of the associated transition matrix using the “Lie Group” techniques (elucidated by Prof. Roger Brockett (Harvard) in Ref. [4] and further elucidated in Chap. 5 of [14]) than would otherwise be the case for general nonlinear systems;

allowing other structural details to be obvious at-a-glance from the state variable representation that would otherwise be somewhat obscured in a “wiring diagram” such as whether or not the system should be classified as being time-varying linear, or as nonlinear, or as autonomous;

properly converting a continuous-time model into an equivalent discrete-time model [5] (which is more efficient for processing).
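As a concrete illustration of the last item above, here is a minimal Python/SciPy sketch (assuming a zero-order hold on the input over each sample period dt; the function name is ours) that performs the continuous-to-discrete conversion with a single matrix exponential of a block matrix:

    import numpy as np
    from scipy.linalg import expm

    def c2d_zoh(A, B, dt):
        """Exact zero-order-hold discretization: expm([[A, B], [0, 0]]*dt) = [[Ad, Bd], [0, I]]."""
        n, m = B.shape
        M = np.zeros((n + m, n + m))
        M[:n, :n] = A
        M[:n, n:] = B
        Phi = expm(M * dt)
        Ad = Phi[:n, :n]
        Bd = Phi[:n, n:]
        return Ad, Bd

    # Example: double integrator sampled at dt = 0.1 s
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([[0.0], [1.0]])
    Ad, Bd = c2d_zoh(A, B, dt=0.1)   # Ad = [[1, 0.1], [0, 1]], Bd = [[0.005], [0.1]]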

QUESTION: Which simulation products use “wiring diagrams”? ANSWER: National Instruments’ LabView and MatrixX®, The MathWorks’ Simulink, MathSoft’s MathCad, Visual Solutions’ VisSim, ProtoSim, and the list goes on and on. Why don’t they come on up to the new millennium instead of continuing to dwell back in the 1960’s when Linpack and Eispack were in vogue as the “cutting edge” scientific numerical computation software packages? 

Notice that The MathWorks offers use of a state variable representation but only for Linear Time Invariant  (LTI) systems (such as occurs, for example, in estim.m, dlqr.m, dreg.m, lge.m, h2lqg.m, lqe.m, dreg.m, ltry.m, ltru.m, lqg.m, lqe2.m, lqry.m, dh2lqg.m for MatLab). 

Practical systems are seldom so benign as to be merely LTI; instead, they are usually time-varying and/or nonlinear (yet a state variable formulation is still a natural fit to these situations as well). Indeed, the linearization of a nonlinear system is, in general, time-varying. However, the user needs to be cognizant of when and why LTI solution methodologies break down in the face of practical systems (and TK-MIP keeps the User well informed of the limitations and assumptions that must be met in order that LTI-based results remain valid approximations for the non-LTI system structures encountered in practical applications, and of what must be done to properly handle the situations when the LTI approximations are not valid). MatLab users currently must “roll their own” non-LTI state variable customizations from scratch (and MatLab and Simulink offer no constructive hints of when it is necessary for the user to do so). To date, only technical specialists who are adequately informed beforehand know when to use existing LTI tools and when they must use more generally applicable solution techniques and methodologies that usually impose a larger computational burden (and which the users themselves must create and include on their own initiative, under the assumption that they have adequate knowledge of how to properly handle the modeling situation at hand using state-of-the-art solution techniques for the particular structure present).

When models are contorted into “wiring diagrams”, other aspects become more complicated too. An example is the assortment of different algorithms that MatLab and Simulink must offer for performing numerical integration. MatLab’s Numerical Differentiation Formulas (NDF’s) are highly touted in Ref. [6], below, for being able to handle “the integration of ‘stiff systems’ (that had historically been the bane of software packages for system simulation) of the form:
                                            M(t) dy/dt = f(y,t) over the time interval [t0, tF]
where the mass matrix, M(t), pre-multiplying on the left hand side is non-singular and (usually) sparse”, as conditions explicitly stated in [6]. If M(t) were, in fact, square and nonsingular, then both sides of the previous equation could be pre-multiplied throughout by Inv[M(t)] to put this ODE into the more standard and familiar state-variable form, or at worst into “descriptor” system form. Descriptor systems can, all the more, handle an M(t) that is in fact singular [7]. (The quoted result of Ref. [7] is generalized even further in Ref. [8].)
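A minimal Python sketch of that pre-multiplication step (assuming a square, nonsingular M(t); the wrapper name and the example right-hand side are purely illustrative):

    import numpy as np

    def standard_form_rhs(M, f):
        """Given a mass-matrix ODE M(t) dy/dt = f(y, t) with M(t) square and nonsingular,
        return the right-hand side of the equivalent standard-form ODE
        dy/dt = Inv[M(t)] f(y, t) (computed by a linear solve, not an explicit inverse)."""
        def g(t, y):
            return np.linalg.solve(M(t), f(y, t))
        return g

    # Illustrative 2-state example with a constant, nonsingular mass matrix
    M = lambda t: np.array([[2.0, 0.0], [0.0, 1.0]])
    f = lambda y, t: np.array([-y[0] + y[1], -y[1]])
    g = standard_form_rhs(M, f)   # g(t, y) is now suitable for any standard ODE solver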

“Stiff systems” typically exhibit behavior corresponding to many separate time constants being present that span a huge range encompassing extremely long as well as extremely short response times. As a consequence, they adversely affect standard solution techniques [such as a Runge-Kutta 4th or higher order predictor-corrector with normally adaptive step size selection]: the associated error criterion for a stiff system dictates that the adaptive step size be forced to the very shortest available [as controlled by the fastest time constant present], so “itsy-bitsy” that progress is slow and extremely expensive in total CPU time consumed. The very worst case for integrators, and even for stiff integration algorithms, is when one portion or loop of the system has a finite time constant and another portion has a time constant of zero (being an instantaneous response with no integrators at all in the loop and thus devoid of any lag). However, Ref. [7] (on page 130, Ex. 21) historically demonstrated that such structures can be routinely decomposed and re-expressed by extracting an algebraic equation along with a lower dimensional Ordinary Differential Equation (ODE) of the simple standard form:
                                           dy/dt = f(y,t) over [t0, tF]
that would no longer need the sophisticated, more CPU time-consuming “stiff”, Gear-like (Ref. [8]) “implicit integration” solution algorithms (involving Jacobian derivatives) for obtaining accurate answers as the solution is numerically computed forwards in time. Such are the benefits of posing system models and their corresponding simulation GUI’s in a form where structure can be revealed, recognized at-a-glance, and exploited to a computational advantage by extricating the simpler underlying structure. Such structural observations are obscured in “wiring diagrams”, so the more costly integration algorithms need to be included for utilization in the newfangled (but misleading and, in TeK’s opinion, misguided) GUI system representations because the alternative straightforward approach, available from Refs. [7], [9], [15]-[18], [38], [43]-[49] (for which simpler numerical integration routines suffice), would be stymied. Such structure would be obvious from a state variable perspective within a GUI that utilizes the state variable convention to visually represent complex practical systems, as offered in TK-MIP. (Also see Ref. [10] for more on NDF’s.) Descriptor system representations decompose the short-circuit fast loop or short time constant into an algebraic equation devoid of dynamics along with a lower dimensional representation of the residual dynamics. Both of these operations reduce the complexity in adequately representing such otherwise “stiff” systems and, moreover, obviate the need for special Gear-type implicit integration algorithms entirely in these particular situations, since simpler Runge-Kutta predictor-corrector algorithms suffice. However, in systems that have a wide range of effective time constants or fast and slow inner and outer control loops (as in fighter aircraft guidance laws), or because of the presence of fast and slow sampling rates, Gear-type integration may still be needed but invoked (more parsimoniously) only in the relatively rare instances when a descriptor representation doesn’t suffice in totally removing the problem. We are familiar with Sampled Data Systems and the benefits of lower duty cycles and know how to handle such situations within a purely State Variable or Descriptor System context with samplers, zero-order holds, first-order holds (or higher order holds if necessary). We are also familiar with how to handle delay-differential equations of the retarded type. (A small numerical illustration of the stiff-integration penalty appears immediately below.)   Go to Top
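The stiffness penalty described above is easy to demonstrate; the following minimal Python/SciPy sketch (an illustrative two-time-constant example, not a TK-MIP algorithm) compares an explicit Runge-Kutta solver against an implicit, Gear-type (BDF) solver on the same system:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Two widely separated time constants: 1 s and 0.001 s (stiffness ratio of 1000)
    A = np.array([[-1.0, 0.0], [0.0, -1000.0]])

    def rhs(t, y):
        return A @ y

    y0 = np.array([1.0, 1.0])
    explicit = solve_ivp(rhs, (0.0, 10.0), y0, method='RK45')  # step size pinned by the fast mode
    implicit = solve_ivp(rhs, (0.0, 10.0), y0, method='BDF')   # Gear-type implicit integration

    print("RK45 right-hand-side evaluations:", explicit.nfev)
    print("BDF  right-hand-side evaluations:", implicit.nfev)
    # The explicit solver typically needs one to two orders of magnitude more evaluations here.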

Problems in MatLab’s  Apparent Handling of Level Crossing Situations (as frequently arise in a variety of practical Applications)

Another perceived problem with The MathWorks’ MatLab concerns its ability to detect the instant of a level-crossing occurrence (as when a test statistic exceeds a specified constant decision threshold, or exceeds a deterministically specified time-varying decision threshold, as arises, say, in Constant False Alarm Rate [CFAR] detection implementations and in other significant scientific and engineering applications).

This capability has existed within MatLab since 1995, as announced at the yearly International MatLab User Conference, but only for completely deterministic situations, since the underlying integration algorithms utilized are of the Runge-Kutta predictor/corrector type and are stymied when noise (albeit pseudo-random noise [PRN]) is present in the simulation for application realism. The presence of noise has been the bane of all but the coarsest and simplest of integration algorithm methodologies since the earliest days of IBM’s CSMP. However, engineering applications where threshold comparisons are crucial usually include the presence of noise too, as in standard Signal Detection (i.e., is the desired signal present in the receiver input, or just noise only?). This situation arises in radar and communications applications, in Kalman filter-based Failure Detection or in its mathematical dual as Maneuver Detection applications, or in peak picking as it arises in sonar/sonobuoy processing or in Image Processing. The 1995 MatLab function for determining when a level crossing event had occurred availed the instant of crossing to infinite precision, yet it could only be invoked for the integration evolution of purely deterministic ODE’s devoid of noise. Noise discrimination is the fundamental goal in all “Signal Detection” situations faced by practical applications engineers.
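As one common-sense workaround for sampled, noise-corrupted data, the crossing instant can be bracketed between adjacent samples and estimated by linear interpolation; a minimal Python sketch follows (illustrative only; in practice a persistence or m-of-n test is usually added to guard against noise-induced false crossings):

    import numpy as np

    def first_upcrossing(t, y, threshold):
        """Return an interpolated estimate of the first instant at which the sampled
        signal y(t) rises through 'threshold', or None if no crossing occurs."""
        above = y >= threshold
        idx = np.flatnonzero(~above[:-1] & above[1:])   # indices k with y[k] < c <= y[k+1]
        if idx.size == 0:
            return None
        k = idx[0]
        frac = (threshold - y[k]) / (y[k + 1] - y[k])   # linear interpolation between samples
        return t[k] + frac * (t[k + 1] - t[k])

    # Noisy ramp crossing a constant decision threshold
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 501)
    y = 2.0 * t + 0.05 * rng.standard_normal(t.size)
    print(first_upcrossing(t, y, threshold=1.0))   # near t = 0.5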

           Go to Top

Existing problems with certain Random Number Generators (RNG’s)

TeK Associates is aware of recent thinking and explicit numerical comparisons regarding the veracity of uniform (pseudo-)random number generators (RNG’s) as, say, reported in [19], and we have instituted the necessary remedies within our TK-MIP®, as prescribed in [19]. (Please see L’Ecuyer’s article [and Web Site: http://www.iro.umontreal.ca/~lecuyer] for explicit quantifications of RNG’s for Microsoft’s Excel® and for Microsoft’s Visual Basic® as well as for what had been available in Sun’s JAVA®.) Earlier warnings about the failings of many popular RNG’s have been offered in the technical literature for the last 35 years by George Marsaglia, who, for quite a while, was the only “voice in the wilderness” alerting and warning analysts and software implementers to the problems existing in many standard, popular (pseudo-)RNG’s, since they exhibit significant patterns such as “random numbers falling mainly in the planes” when generated by the linear congruential method of [21].

Prior to the cautions mentioned above, the prevalent view regarding the efficacy of RNG’s for the last 35 years had been conveyed in [20], which endorsed use of only the linear congruential method consisting of an iteration equation of the following form:

    x(n+1) = a·x(n) + b (mod T), starting with n = 0 and proceeding on, with x(0) being the initial seed,

with specific choices of the three constant parameters a, b, and T to be used for proper implementation with a particular computer register size being specified in [20]; however, variates generated by this algorithm are, in fact, sequentially correlated with known correlation between variates s-steps apart according to:

    ρ(s) = { [ 1 − 6 (β(s)/T)(1 − (β(s)/T)) ] / a(s) } + µ,
where this expression, along with the constant parameters appearing in it, is defined and explained on the first page of Sec. 26.8 of [21]. Marsaglia had found this linear congruential approach to be somewhat faulty, as mentioned above. An article by Cleve Moler [22] apparently conveys a more benign situation than actually exists for numerically-based PRN generators of the linear congruential type depicted above; however, [22] was not peer reviewed, nor did it appear in the open literature. The problems with existing RNG’s were acknowledged publicly by mathematicians in a session at the 1994 meeting of the American Association for the Advancement of Science. A talk was presented by Persi Diaconis on a minor scandal of sorts (this was their exact choice of words, as reported in the AAAS publication, Science) concerning the lack of a well-developed theory for random number generators. For most applications, random numbers are generated by numerical algorithms whose outputs, when subjected to various tests, appear to be random. Diaconis described the series of tests for randomness proposed by George Marsaglia in the mid-1980’s; all existing generators of the time FAILED at least one of these tests. Also see [53]. Prof. L’Ecuyer offers improvements over what was conveyed by Persi Diaconis and is up to date with modern computer languages and constructs. Even better results are reported for a new hardware-based simulation approach in [23].
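To see the kind of structure being criticized, the following minimal Python sketch implements a linear congruential generator and checks two of its properties empirically (the default parameters are those of IBM’s notoriously poor historical generator RANDU, used here purely for illustration):

    import numpy as np

    def lcg(seed, a=65539, b=0, T=2**31, n=10000):
        """Linear congruential generator x(n+1) = a*x(n) + b (mod T), returned as
        uniforms on (0, 1).  Defaults are IBM's infamous RANDU, shown only to
        illustrate the structural defects discussed in the text."""
        x = seed
        out = np.empty(n)
        for i in range(n):
            x = (a * x + b) % T
            out[i] = x / T
        return out

    u = lcg(seed=12345)

    # Empirical lag-1 serial correlation of successive variates
    print("lag-1 correlation:", np.corrcoef(u[:-1], u[1:])[0, 1])

    # Marsaglia's "planes": for RANDU, 9*u(n) - 6*u(n+1) + u(n+2) is an integer, so
    # consecutive triples are confined to a handful of parallel planes in the unit cube.
    resid = (9 * u[:-2] - 6 * u[1:-1] + u[2:]) % 1.0
    print("fraction of triples on the planes:", np.mean(np.minimum(resid, 1 - resid) < 1e-6))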

Many sources recommend use of historically well-known Monte-Carlo simulation techniques to emulate a Gaussian vector random process that possesses the matrix autocorrelation function inputted as the prescribed symmetric positive semidefinite WGN intensity matrix. The Gaussianness that is also the associated goal for the generated output process may be obtained by any one of four standard approaches listed in Sec. 26.8.6a of [21] for a random number generator of uniform variates used as the input driver. However, this approach specifically uses the technique of summing six independent uniformly distributed random variables (r.v.’s) to closely approximate a Gaussianly distributed variate. The theoretical justification is that the probability density function (pdf) of the sum of two statistically independent r.v.’s is the convolution of their respective underlying probability density functions. For the sum of two independent uniform r.v.’s, the resulting pdf is triangular; for the sum of three independent uniform r.v.’s, the resulting pdf is a trapezoid; and, in like manner, the more uniform r.v.’s included in the sum, the more bell-shaped is the result. The Central Limit Theorem (CLT) can be invoked, which states that the suitably normalized sum of independent identically distributed (i.i.d.) r.v.’s converges to a Gaussian (in distribution). The sum of just six is a sufficiently good engineering approximation for practical purposes. A slight wrinkle in the above is that supposedly ideal Gaussian uncorrelated white noise is eventually obtained from operations on independent uniformly distributed random variables, where the uniform random variables are generated via the above standard linear congruential method, with the pitfall of possessing the known correlation between successive variates already discussed above. This correlated aspect may be remedied or compensated for (since it is known) via use of a Choleski decomposition to achieve the theoretical ideal uncorrelated white noise, a technique illustrated in Example 2, pp. 306-312 of [24], which is, perhaps, comparable to what is also reported later in [25]. Historically related investigations are [54]-[57].
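A minimal Python sketch of the two-step recipe just described, summing six uniforms to approximate a Gaussian variate and then using a Choleski (Cholesky) factor to impose a prescribed covariance on the generated vector (NumPy’s built-in uniform generator is used here only for brevity; the function names are ours):

    import numpy as np

    rng = np.random.default_rng(1)

    def approx_standard_normal(size):
        """Sum six independent uniform(0,1) variates; subtract the mean (3) and divide
        by the standard deviation sqrt(6/12) to get an approximately N(0,1) variate."""
        s = rng.uniform(0.0, 1.0, size=(6,) + tuple(np.atleast_1d(size)))
        return (s.sum(axis=0) - 3.0) / np.sqrt(0.5)

    def correlated_gaussian_vector(P, n_samples):
        """Color approximately-white Gaussian vectors so that their covariance is the
        prescribed symmetric positive semidefinite matrix P, via x = L z with P = L L^T."""
        L = np.linalg.cholesky(P)
        z = approx_standard_normal((P.shape[0], n_samples))
        return L @ z

    P = np.array([[2.0, 0.8], [0.8, 1.0]])      # prescribed covariance (illustrative)
    x = correlated_gaussian_vector(P, 100000)
    print(np.cov(x))                            # should be close to P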

Now regarding cross-platform confirmation or comparison of an algorithm’s performance and behavior in the presence of PRN: a problem had previously existed in easily confirming exact one-to-one correspondence of output results on two machines if the respective machine register size or word length differed (and, at that time, only the linear congruential method was of interest as the fundamental generator of uniformly distributed PRN). Lincoln Laboratory’s Larry S. Segal had a theoretical solution for adapting the cross-platform comparison so that identical PRN sequences were generated, despite differences in word sizes between the two different computers. However, as we see now, this particular solution technique is for the wrong PRN (which, unfortunately, was the only one in vogue back then (1969 to about 2000), as advocated by Donald Knuth in The Art of Computer Programming, Volume 2). A more straightforward, common-sense solution is to just generate the PRN numbers on the first machine (assuming access to it) and output them to a double precision file that is then subsequently read as input by the second machine as its source of uniformly distributed PRN. The algorithms’ inputs would then be identical, so the outputs would be expected to correspond within the slight computational leeway or epsilon tolerance allotted (which should evidently be granted, as associated with the effects of round-off and truncation error [which are highly likely to be slightly different between the two machines]). (A small sketch of this file-based exchange appears below.)     Go to Top
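A minimal Python sketch of that file-based exchange (the file name and array length are hypothetical); writing 17 significant digits guarantees that IEEE double-precision values round-trip exactly through the text file:

    import numpy as np

    # Machine 1: generate the uniform PRN stream once and write it out in full precision
    rng = np.random.default_rng(42)
    u = rng.uniform(0.0, 1.0, size=100000)
    np.savetxt("uniform_prn_stream.txt", u, fmt="%.17g")

    # Machine 2: read the identical stream back and feed it to the algorithm under test
    u_replayed = np.loadtxt("uniform_prn_stream.txt")
    assert np.array_equal(u, u_replayed)   # bit-for-bit identical inputs on both machines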

Other Potentially Embarrassing Historical Loose Ends

A so-designated “backwards error analysis” had previously been performed by Wilkinson and Reinsch for the Singular Value Decomposition (SVD) implementation utilized within EISPACK®, so that an approximate measure of the condition number is ostensibly available (as asserted in Refs. [26, p. 78], [27]) for user monitoring as a gauge of the degree of numerical ill-conditioning encountered during the computations, which consequently dictates the degree of confidence to be assigned to the final answer that is the ultimate output. (Less reassuring open research questions pertaining to SVD condition numbers are divulged in Refs. [28], [29], indicating that some aspects of SVD were STILL open questions in 1978, even though the earlier 1977 user manual [26] offers only reassurances of the validity of the SVD-related expressions presented there for use in the calculation of the associated SVD condition number.) An update to the SVD condition number calculation has more recently become available [30] (please compare this to the result in [31, pp. 289-301]). These and related issues, along with analytic closed-form examples, counterexamples, summaries of which existing SVD subroutines work (and which do not work as well), and many application issues, are discussed in detail in [32] (as a more in-depth discussion beyond what was availed in its peer-reviewed precursor [33]).
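For reference, the 2-norm condition number being monitored is simply the ratio of the largest to the smallest singular value; a minimal Python sketch (illustrative only):

    import numpy as np

    def svd_condition_number(A):
        """2-norm condition number sigma_max / sigma_min from the singular values;
        a huge (or infinite) value flags numerical ill-conditioning of A."""
        s = np.linalg.svd(A, compute_uv=False)
        return np.inf if s[-1] == 0.0 else s[0] / s[-1]

    A = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-10]])   # nearly rank-deficient example
    print(svd_condition_number(A))                   # very large, warning of unreliable results
    print(np.linalg.cond(A))                         # NumPy's built-in 2-norm equivalent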

While there is much current activity in 2005 in the parallelization of algorithms, in efficiently using computing machines or embedded processors that have more than one processor available for simultaneous computations, and in pursuing networked computers for group processing objectives (known as grid computing), the validation of the SVD (and of the other matrix routines within EISPACK®) by seventeen voluntarily cooperative but independent universities and government laboratories across the USA was depicted in only a page or two within [26]. Unfortunately, that executive summary enabled a perceived comparison of the efficiency of different machines in solving identical matrix test problems (of different dimensions). The Burroughs computers, which were specifically designed to handle parallel processing and which were (arguably) decades ahead of the other computer manufacturers in this respect in the mid 1970’s, should have “blown everyone else out of the water” if only EISPACK had been based on “column-wise operations” instead of on “row-wise” operations. Unfortunately, EISPACK was inherently partitioned and optimized for only row-wise implementation. Taking the IBM 360’s performance as a benchmark within [26], the performance of the Burroughs computers was inadvertently depicted as taking two orders of magnitude longer for the same calculations (this depiction, or more properly the reader’s interpretation of it, arose because the Burroughs computer testers, in this situation, did not exploit the strengths that the Burroughs computers possessed, since the implementers at each site did not know beforehand that a head-to-head comparison between sites would be made later). The Burroughs Corporation was put at a significant disadvantage immediately thereafter and subsequently joined with Sperry-UNIVAC to form Unisys in the middle 1980’s. Instead of discussing the millions of things The MathWorks does right (and impressively so, as a quality product), we have instead homed in here on its few prior problems so that we at TeK Associates would not make the same mistakes in our development of TK-MIP and, besides, just a few mistakes constitute a much shorter list and these flaws are more fun to point out. [Just ask any wife!] However, out of admiration, TeK Associates feels compelled to acknowledge that in the 1990’s The MathWorks was a leader on the scene in discovering the “Microsoft .dll bug” as it adversely affected almost all other new product installations (and which severely afflicted all MathCad installations that year), and The MathWorks compensated for this problem by rolling back to a previous bug-free version of that same Microsoft .dll before most others had even figured out what was causing the problem. Similarly, with the bug in multiple precision integer operations (that a West Virginia number theorist was the first to uncover) that adversely affected Intel machines that year (due to an incompatibility problem with the underlying BIOS being utilized at the time), The MathWorks was quick to figure out early on how to compensate for the adverse effect in long integer calculations so that their product still gave correct answers even in the face of that pervasive flaw that bit so many. We will not be so petty as to dwell on the small evolutionary problems with (1) The MathWorks’ tutorial on MatLab as it ran in 1992 on Intel 486 machines after The MathWorks had originally developed it on 386 machines: the screens shot past the User like a flash, with no pause included that would enable a User to actually read what they said. Similarly, we do not dwell on (2) a utility screen in Simulink that was evidently developed by someone with a very large screen monitor. When viewed by users with average sized monitor screens, there was no way to see the decision option buttons located way down below the bottom of the screen (where no vertical scroll bars were made available to move the screen up enough for the user to see them), which were to be selected (i.e., clicked on) by the User as the main concluding User action solicited by that particular screen in order to proceed. Nor do we dwell on the early days of the mid 1990’s, when (3) The MathWorks relied exclusively upon Ghostscript for converting MatLab outputs into images for reports and for printout. At least those conversions worked (but they sometimes caused computer crashes and “The Blue Screen of Death”). Of course, Microsoft Windows was no longer subject to “General Protection Faults” (GPF) after the introduction of Windows 95 (since Microsoft changed the name of what this type of computer crash was called, as an example of “doublespeak” from George Orwell’s novel 1984). However, “The Blue Screen of Death” still occurred.

 

The MathWorks cautions that, since MatLab is matrix-based, any nested loops (using Do…While, For…Do) should be avoided or, at most, only one of several nested loops should be explicitly in MatLab and the other remaining loops should be written in a different computer language such as C (and, historically, possibly in Fortran for the older versions of MatLab). This means that standard programming constructs such as linked lists cannot be conveniently handled within MatLab. When nested loops are encountered within MatLab, an extraordinarily and exorbitantly long time for completion or convergence is usually experienced. An example of such a CPU-intensive, time-wasting situation for achieving convergence to a final answer is the inverse problem of Case 6 (algorithm EDEVCI [available from The MathWorks’ Web Site, associated with David Hu’s book]) in [50]: specifying the 3-D Error Ellipsoid for a completely general Gaussian with different major and minor axis variances and nonzero cross-correlations for a particular specified containment probability [being 0.97 within the National Missile Defense (NMD) program rather than the standard SEP, which is defined for ellipsoidal containment with probability 0.50].
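The general point about nested loops in matrix-oriented environments is easy to illustrate outside of MatLab as well; a minimal Python/NumPy sketch contrasts an explicit nested-loop computation with the equivalent single matrix operation (timings are machine-dependent, but the vectorized form is typically orders of magnitude faster):

    import time
    import numpy as np

    rng = np.random.default_rng(3)
    H = rng.standard_normal((300, 300))

    # Explicit nested loops (the style that matrix-oriented interpreters handle poorly)
    t0 = time.perf_counter()
    G_loop = np.zeros((300, 300))
    for i in range(300):
        for j in range(300):
            G_loop[i, j] = sum(H[i, k] * H[j, k] for k in range(300))
    t1 = time.perf_counter()

    # Equivalent single vectorized matrix product
    G_vec = H @ H.T
    t2 = time.perf_counter()

    print("loops: %.3f s,  vectorized: %.5f s" % (t1 - t0, t2 - t1))
    print("max discrepancy:", np.abs(G_loop - G_vec).max())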

  Go to Top

One advantage that we at TeK Associates have is that we already know who the experts are in various areas (e.g., [34]-[36], [38]) and where the leading edge lies within the many related constituent fields that feed into simulation and into statistical signal processing, Kalman filtering, and in the many other diverse contributing areas and various consequential application areas. We know the relevant history and so we do not turn a blind eye to the many drawbacks of LQG theory and its associated solution techniques but, instead, we explicitly point out its drawbacks (for a clarification example, please click the Navigation Button at the top entitled Assorted Details I and then view the top screen of this series) and its compensating mechanisms like Loop Transfer Recovery (LTR).   Go to Top   

                      

“We don’t just talk and write about what is the right thing to do! We actually practice what we preach (both in software development and in life)!”

REFERENCES:

[1] WINDOWS® is a registered trademark of Microsoft Corporation. TK-MIP is a registered trademark of TeK Associates. MATLAB® and SIMULINK® are registered trademarks of The MathWorks.

[2] RESURGENCE OF MATRIX SIGNUM FUNCTION TECHNIQUES-AND ITS VULNERABILITY

In analogy to the scalar Newton iteration that is routinely used within floating point digital computer implementations to calculate explicit square roots recursively as:

            e(i+1) = 1/2 [ e(i) + a/e(i) ], then (i=i+1); initialized with e(0) = a,

in order to obtain the square root of the real number “a” (the matrix iteration below is this same recursion aimed at a square root of the identity matrix, which is what yields the sign), there exists the notion of the Matrix Signum Function, defined similarly (but in terms of matrices), which iteratively loops using the following:

            E(i+1) = 0.5 [ E(i) + Inv[E(i)] ], then (i=i+1); initialized with E(0) = A,

in order to obtain the Matrix Signum of matrix A as Signum(A). There has apparently been a recent resurgence of interest and use of this methodology to obtain numerical solutions to a wide variety of problems, as in:

Laub, A. J., “A Schur Method for Solving Algebraic Riccati Equations,” MIT Report No. LIDS-R-859, Laboratory for Information and Decision Systems, Oct. 1978.

Gardiner, J. D., and Laub, A. J., “A Generalization of the Matrix-Sign-Function Solution for Algebraic Riccati Equations,” International Journal of Control, Vol. 44, No. 3, pp. 823-832, 1986.

Kenney, C., and Laub, A. J., “Rational Iterative Methods for the Matrix Sign Function,” SIAM Journal of Matrix Analysis Applications, Vol. 12, No. 2, pp. 273-291, April 1991.

Kenney, C. S., Laub, A. J., “The Matrix Sign Function,” IEEE Trans. on Automatic Control, Vol. AC-40, No. 8, pp. 1330-1348, Aug. 1995.

It was historically pointed out, using both theory and numerical counterexamples, that the notion of a scalar signum, denoted as sgn(s) (being +1 for s > 0, -1 for s < 0, and 0 for s = 0), has no valid analogy for s being a complex variable, so the Matrix Signum Function is not well-defined for matrices that have eigenvalues that are not exclusively real numbers (corresponding to system matrices for systems that are not strictly stable, by having some eigenvalues on the imaginary axis). In the early 1970’s, there was considerable enthusiasm for use of Matrix Signum Function techniques in numerical computations, as evidenced by:

Beavers, A. N., and Denman, E. D., “A Computational Method for Eigenvalues and Eigenvectors of a Matrix with Real Eigenvalues,” Numerical Mathematics, Vol. 21, pp. 389-396, 1973.

Popeea, C. and Lupas, L., “Numerical Stabilizability Tests by a Matrix Sign Function,” IEEE Trans. on Automatic Control, Vol. AC-22, No. 4, pp. 654-656, Aug. 1977.

Denman, E. D., and Beavers, A. N., “The Matrix Sign Function and Computations in Systems,” Applied Mathematics and Computation, Vol. 2, No 1, pp. 63-94, 1976.

However, significant counterexamples were presented to elucidate areas of likely numerical difficulties with the above technique in:

Barnett, S., “Comments on ‘The Matrix Sign Function and Computation in Systems’,” Applied Mathematics and Computation, Vol. 4, No. 3, pp. 277-279, 1978.

and, moreover, even when these techniques are applicable (and it is known a priori that the matrices have strictly stable and exclusively real eigenvalues as a given [perhaps unrealistic] condition or hypothesized assumption), the following reference:

Barrand, A. Y., “Comments on ‘The Numerical Solution of A^T Q + QA = -C’,” IEEE Trans. on Automatic Control, Vol. AC-24, No. 4, pp. 671-672, Aug. 1977.

identifies more inherent weakness in using Schur. In particular, it provides additional comments to delineate the large number of iterations to be expected prior to convergence of the iterative formula, discussed above, which is used to define the signum of a matrix as:
            The limit, as i goes to infinity, of E(i) generated by E(i+1) = 0.5 [ E(i) + Inv[E(i)] ] equals signum(A) = sgn(A).
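For concreteness, a minimal Python/NumPy sketch of this basic iteration (illustrative only; as the counterexamples cited above warn, it presumes that A has no eigenvalues on the imaginary axis, and convergence can be slow or can fail otherwise):

    import numpy as np

    def matrix_sign(A, tol=1e-12, max_iter=100):
        """Basic Newton iteration E <- 0.5*(E + Inv[E]) for signum(A).  Valid only when
        A has no eigenvalues on the imaginary axis; may need many iterations otherwise."""
        E = np.array(A, dtype=float)
        for _ in range(max_iter):
            E_next = 0.5 * (E + np.linalg.inv(E))
            if np.linalg.norm(E_next - E, ord='fro') <= tol * np.linalg.norm(E_next, ord='fro'):
                return E_next
            E = E_next
        raise RuntimeError("matrix sign iteration did not converge within max_iter steps")

    A = np.array([[-3.0, 1.0], [0.0, 2.0]])   # real eigenvalues -3 and 2, none on the imaginary axis
    print(matrix_sign(A))                     # the eigenvalues of the result are -1 and +1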

It appears that technologists should be cautious and less sure about relying on Schur. Also see

Petkov, P. H., Christov, N. D., Konstantinov, M. M., “On the Numerical Properties of the Schur Approach for Solving the Matrix Riccati Equation,” System Control Letters, Vol. 9, No. 3, pp. 197-201, 1987.

(Notice that The MathWorks doesn’t warn users about the above cited weaknesses and restrictions in their Schur-based algorithms that their consultant and contributing numerical analyst, Prof. Alan J. Laub, evidently strongly endorses, as reflected in his four publications, cited above, that span three decades. If the prior objections, mentioned explicitly above, had been refuted or placated, we would not be so concerned now but that is not the case, which is why TeK Associates brings these issues up again.)          Go to Top

                          

[3] “Password Management Guidelines,” Doc. No. CSC-STD-002-35, DoD Computer Security Center, Ft. Meade, MD, 12 April 1985 [also known as (a.k.a.) “The Green book”].

[4] Brockett, R., Finite Dimensional Linear Systems, Wiley, NY, 1970.

[5] Gupta, S. C., “Phase-Locked Loops,” Proceedings of the IEEE, Vol. 68, No. 2, 1975. 

[6] Shampine, L. F., Reichelt, M. W., “The MatLab ODE Suite,” SIAM Journal on Scientific Computing, Vol. 18, pp. 1-22, 1997.

[7] Luenberger, D. G., Introduction to Dynamic Systems: Theory, Models, and Applications, John Wiley & Sons, NY, 1979.

[8] Gear, C. W., Watanabe, D. S., “Stability and Convergence of Variable Order Multi-step Methods,” SIAM Journal of Numerical Analysis, Vol. 11, pp. 1044-1058, 1974. (Also see Gear, C. W., Automatic Multirate Methods for Ordinary Differential Equations, Rept. No. UIUCDCS-T-80-1000, Jan. 1980.)  [The numerical analyst, C. W. Gear, at the University of Illinois, devised these integration techniques that also involve computation of the Jacobian or gradient corresponding to all the components participating in the integration. A synonym for Gear integration is Implicit Integration.]

[9] Luenberger, D. G., “Dynamic Equations in Descriptor Form,” IEEE Trans. on Automatic Control, Vol. 22, No. 3, pp. 312-321, Jun. 1977.

[10] Kagiwada, H., Kalaba, R., Rasakhoo, N., Spingarn, K., Numerical Derivatives and Nonlinear Analysis, Angelo Miele (Ed.), Mathematical Concepts and Methods in Science and Engineering, Vol. 31, Plenum Press, NY, 1986.

[11] Kerr, T. H., ADA70 Steady-State Initial-Value Convergence Techniques, General Electric Class 2 Report, Technical Information Series No. 72 CRD095, 1972.

[12] Kerr, T. H., “A Simplified Approach to Obtaining the Steady-State Initial Conditions for Linear System Simulations,” Proceedings of the Fifth Annual Pittsburgh Conference on Modeling and Simulation, 1974.

[13] Kerr, T. H., “Exact Methodology for Testing Linear System Software Using Idempotent Matrices and Other Closed-Form Analytic Results,” Proceedings of SPIE, Session 4473: Tracking Small Targets, pp. 142-168, San Diego, CA, 29 July-3 Aug. 2001.

[14] Mohler, R. R., Nonlinear Systems: Vol. II: Application to Bilinear Control, Prentice-Hall, Englewood Cliffs, NJ, 1991.

[15] Nikoukhah, R., Campbell, S. L., Delebecque, F., “Kalman Filtering for General Discrete-Time Linear Systems,” IEEE Trans. on Automatic Control, Vol. 44, No. 10, pp. 1829-1839, Oct. 1999.

[16] Nikoukhah, R., Taylor, D., Willsky, A. S., Levy, B. C., “Graph Structure and Recursive Estimation of Noisy Linear Relations,” Journal of Mathematical Systems, Estimation, and Control, Vol. 5, No. 4, pp. 1-37, 1995.

[17] Nikoukhah, R., Willsky, A. S., Levy, B. C., “Kalman Filtering and Riccati Equations for Descriptor Systems,” IEEE Trans. on Automatic Control, Vol. 37, pp. 1325-1342, 1992.

[18] Lin, C., Wang, Q.-G., Lee, T. H., “Robust Normalization and Stabilization of Uncertain Descriptor Systems with Norm-Bounded Perturbations,” IEEE Trans. on Automatic Control, Vol. 50,  No. 4, pp. 515-519, Apr. 2005.  

[19] L’Ecuyer, P., “Software for Uniform Random Number Generation: Distinguishing the Good from the Bad,” Proceedings of the 2001 Winter Simulation Conference entitled 2001: A Simulation Odyssey, Edited by B. A. Peters, J. S. Smith, D. J. Medeiros, M. W. Rohrer, Vol. 1, pp. 95-105, Arlington, VA, 9-12 Dec. 2001. (Several years ago, but still after 2000, Prof. L’Ecuyer allegedly received a contract from The MathWorks to bring them up to date by fixing their random number generator.)

[20] Knuth, D., The Art of Computer Programming, Vol. 2, Addison-Wesley, Reading, MA, 1969 (with a 1996 revision). 

[21] Abramowitz, M., and Stegun, I. A., Handbook of Mathematical Functions, National Bureau of Standards, AMS Series 55, 1966.

[22] Moler, C., “Random thoughts: 10^435 years is a very long time,” MATLAB News & Notes: The Newsletter for MATLAB, SIMULINK, and Toolbox Users, Fall 1995.

[23] Callegari, S., Rovatti, R., Setti, G., “Embeddable ADC-Based True Random Number Generator for Cryptographic Applications Exploiting Nonlinear Signal Processing and Chaos,” IEEE Trans. on Signal Processing, Vol. 53, No. 2, pp. 793-805, Feb. 2005. [TeK Comment: this approach may be too strong for  easy decryption but results still germane to excellent computational simulation of systems without subtle cross-correlations in the random number generators contaminating or degrading output results.]

[24] Kerr, T. H., Applying Stochastic Integral Equations to Solve a Particular Stochastic Modeling Problem, Ph.D. thesis, Department of Electrical Engineering, Univ. of Iowa, Iowa City, IA, 1971.

[25] Kay, S., “Efficient Generation of Colored Noise,” Proceedings of IEEE, Vol. 69, No. 4, pp. 480-481, April 1981.

[26] Garbow, B. S., Boyle, J. M., Dongarra, J. J., and Moler, C. B., Matrix Eigensystem Routines, EISPACK guide extension, Lecture Notes in Comput. Science, Vol. 51, 1977.

[27] Dongarra, J. J., Moler, C. B., Bunch, J. R., and Stewart, G. W., LINPACK User’s Guide, SIAM, Philadelphia, PA, 1979.

[28] Moler, C. B, Three Research Problems in Numerical Linear Algebra, Numerical Analysis Proceedings of Symposium in Applied Mathematics, Vol. 22, 1978.

[29] Stewart, G. W., “On the Perturbations of Pseudo-Inverses, Projections, and Least Square Problems,” SIAM Review, Vol. 19, pp. 634-662, 1977.

[30] Byers, R., “A LINPACK-style Condition Estimator for the Equation AX - XBT = C,” IEEE Trans. on Automatic Control, Vol. 29, No. 10, pp. 926-928, 1984.

[31] Stewart, G. W., Introduction to Matrix Computations, Academic Press, NY 1973.

[32] Kerr, T. H., “Computational Techniques for the Matrix Pseudoinverse in Minimum Variance Reduced-Order Filtering and Control,” a chapter in Control and Dynamic Systems-Advances in Theory and Applications, Vol. XXVIII: Advances in Algorithms and computational Techniques for Dynamic Control Systems, Part 1 of 3, C. T. Leondes (Ed.), Academic Press, N.Y., pp. 57-107, 1988. 

[33] Kerr, T. H., “The Proper Computation of the Matrix Pseudo-Inverse and its Impact in MVRO Filtering,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 21, No. 5, pp. 711-724, Sep. 1985.

[34] Miller, K. S., Some Eclectic Matrix Theory, Robert E. Krieger Publishing Company, Malabar, FL, 1987.

[35] Miller, K. S., and Walsh, J. B., Advanced Trigonometry, Robert E. Krieger Publishing Company, Huntington, NY, 1977.

[36] Greene, D. H., Knuth, D. E., Mathematics for the Analysis of Algorithms, Second Edition, Birkhauser, Boston, 1982. 

[37] Roberts, P. F., “MIT research and grid hacks reveal SSH holes,” eWeek, Vol. 22, No. 20, pp.7, 8, 16 May 2005. [This article points out an existing vulnerability of large-scale computing environments such as occur with grid computing network environments and with supercomputer clusters (as widely used by universities and research networks). TEK Associates avoids a Grid Computing mechanization for this reason and because TK-MIP’s computational requirements are both modest and quantifiable.]  

[38] Yan, Z., Duan, G., “Time Domain Solution to Descriptor Variable Systems,” IEEE Trans. on Automatic Control, Vol. 50, No. 11, pp. 1796-1798, November 2005. 

[39] Roe, G. M., Pseudorandom Sequences for the Determination of System Response Characteristics: Sampled Data Systems, General Electric Research and Development Center Class 1 Report No. 63-RL-3341E, Schenectady, NY, June 1963.

[40] Watson, J. M. (Editor), Technical Computations State-Of-The-Art by Computations Technology Workshops, General Electric Information Sciences Laboratory Research and Development Center Class 2 Report No. 68-G-021, Schenectady, NY, December 1968.

[41] Carter, G. K. (Editor), Computer Program Abstracts--Numerical Methods by Numerical Methods Workshop, General Electric Information Sciences Laboratory Research and Development Center Class 2 Report No. 69-G-021, Schenectady, NY, August 1969.

[42] Carter, G. K. (Editor), Computer Program Abstracts--Numerical Methods by Numerical Methods Workshop, General Electric Information Sciences Laboratory Research and Development Center Class 2 Report No. 72GEN010, Schenectady, NY, April 1972.  

[43] Stykel, T., “On Some Norms for Descriptor Systems,” IEEE Trans. on Automatic Control, Vol. 51, No. 5, pp. 842-847, May 2006.

[44] Fridman, E., “Descriptor Discretized Lyapunov Functional Method: Analysis and Design,” IEEE Trans. on Automatic Control, Vol. 51, No. 5, pp. 890-897, May 2006. 

[45] Zhang, L., Lam, J., Zhang, Q., “Lyapunov and Riccati Equations of Discrete-Time Descriptor Systems,” IEEE Trans. on Automatic Control, Vol. 44, No. 11, pp. 2134-2139, November 1999.

[46] Koening, D., “Observer Design for Unknown Input Nonlinear Descriptor Systems via Convex Optimization,” IEEE Trans. on Automatic Control, Vol. 51, No. 6, pp. 1047-1052, June 2006.

[47] Gao, Z., Ho, D. W. C., “State/Noise Estimator for Descriptor Systems with Application to Sensor Fault Diagnosis,” IEEE Trans. on Signal Processing, Vol. 54, No. 4, pp. 1316-1326, April 2006.   

[48] Ishihara, J. Y., Terra, M. H., Campos, J. C. T., “Robust Kalman Filter for Descriptor Systems,” IEEE Trans. on Automatic Control, Vol. 51, No. 8, pp. 1354-1358, August 2006.

[49] Terra, M. H., Ishihara, J. Y., Padoan, Jr., A. C., “Information Filtering and Array Algorithms for Descriptor Systems Subject to Parameter Uncertainties,” IEEE Trans. on Signal Processing, Vol. 55, No. 1, pp. 1-9, Jan. 2007.

[50]  Hu, David, Y., Spatial Error Analysis, IEEE Press, NY, 1999.

[51]  Bellman, R. and Cooke, Kenneth, Differential Difference Equations: mathematics in science and engineering, Academic Press, NY, Dec. 1963.

[52]  Bierman, G. T., “Square-root Information Filter for Linear Systems with Time Delay,” IEEE Trans. on Automatic Control, Vol. 32, pp. 110-113, Dec. 1987. 

[53] Bach, E., “Efficient Prediction of Marsaglia-Zaman Random Number Generator,” IEEE Trans. on Information Theory, Vol. 44, No. 3, pp. 1253-1257, May 1998.

[54] Chan, Y. K., Edsinger, R. W., “A Correlated Random Number Generator and Its Use to Estimate False Alarm Rates,” IEEE Trans. on Automatic control, June 1981.

[55] Morgan, D. R., “Analysis of Digital Random Numbers Generated from Serial Samples of Correlated Gaussian Noise,” IEEE Trans. on Information Theory, Vol. 27, No. 2, March 1981.

[56] Atkinson, A. C., “Tests of Pseudo-random Numbers,” Applied Statistics, Vol. 29, No. 2, pp. 154-171, 1980.

[57] Sarwate, D. V., and Pursley, M. B., “Crosscorrelation Properties of Pseudo-random and Related Sequences,” Proceedings of the IEEE, Vol. 68, No. 5, pp. 593-619, May 1980.

[58] Bossert, D. E., Lamont, G. B., Horowitz, I., “Design of Discrete Robust Controllers using Quantitative Feedback Theory and a Pseudo-Continuous-Time Approach,” pp. 25-31 in Osita D. T. Nwokah (Ed.), Recent Developments in Quantitative Feedback Theory: Work by Prof. Isaac Horowitz, presented at the Winter Annual Meeting of the American Society of Mechanical Engineers, Dallas, TX, 25-30 Nov. 1990.

            Go to Top

These bugs may plague other software, but our frogs eradicate them completely as they keep all programming bugs far away from TK-MIP!

TeK Associates has an assortment of watch dog frogs that eradicate such software “bugs”.

Go to Top

(If you wish to print information from Web Sites with black backgrounds, we recommend that you first invert colors.)

 

TeK Associates’ motto: “We work hard to make your job easier!”