Consulting Service
Up Sensor Fusion Proposals & Reports Cramer-Rao Analysis KF for Nav & Tracking Event Detection Software & Simulation Recent Clients

 

  Engineering Consulting in Kalman filtering theory and its applications

(Our navigation buttons are at the TOP of each screen.)

Key Benefits: Metaphorically speaking, we can provide a participant for a “Tiger Team”, review board, or technical “hit man” (we will be a “Boon” to your projects, as in Richard Boone, who starred in the title role from 1957 until 1963).

Tap into our 50+ year reservoir of hands-on experience in linear Kalman filter estimation (for INS, for JTIDS RelNav, for ICNIA [a predecessor to JTRS (pronounced “Jitters”), both explicitly defined further below], and for GPS, and with the prior NavSat [a.k.a. Transit] from older NOVA and DELTA satellites) and approximate nonlinear estimation in Early Warning Radar (EWR) and sonar\sonobuoy target tracking (related to Lofar/Difar and LAMPS).
Familiarity with historical application constraints and specs for many platforms (especially including C-3 Poseidon, C-4 Poseidon-back-fit, and D-1 Trident SSBN and SSN attack submarine mission objectives, scenarios, and countermeasures). We have had first-hand shipboard experience in San Diego in the 1980’s and earlier weapons system and fire control training (regarding numbers and mixes of Reentry Vehicles [RV’s]) at Dam Neck, VA in the 1970’s. We have also been aboard the Compass Island (sister ship of Cobra Judy, used for strategic radar and at-sea missile tracking) in the 1970’s, where components being planned for use within the SSBN Navigation Room are tested beforehand (in a room that was laid out identically but “bass-ackwards” from how it is oriented within actual SSBN’s). The U.S.S. Compass Island was replaced in this role in the late 1970’s by the U.S.S. Vanguard (as obtained from NASA). We are aware of vintage 1970’s vibration tests for submarine INS components using “Big Bertha and the Little Chippers” operating on the deck directly above them. Present-day barge tests with submerged C-4 plastic explosives emulating depth charges, and with 300-pound swinging hammers capable of impacting at up to 100 g’s, reveal weaknesses or non-compliance of electronics within the expected dangerous environments; such testing is just as important today (even if the test rigs’ names are no longer as colorful). We have also performed GPS testing, both dockside and at sea, onboard the SSN-701 La Jolla in the 1980’s at the San Diego, CA submarine base (for NADC).
Saves customers’ time and money in these and related areas.
Reduces customers’ project risk.
Willing to perform our work at our customers’ facilities when requested to do so because of security concerns (but at customers’ expense).

Click here to download a 1MByte pdf file that serves as an example of our expertise in strategic radar target tracking and in recent developments in Estimation Theory and in Kalman Filter-related technology.

Click here to download a 1.72MByte pdf file discussing and analyzing existing pitfalls associated with improper use of “shaping filters”.

Click here to download an 88.8 KByte discussion and analysis of weaknesses in a new approach to linear system realizations by Chris Jekeli.

Click here to download a 500KByte pdf file with a detailed account of the historical and current status of the rigorous handling of nonlinear control systems with stochastic inputs (a.k.a. random noise inputs) circa 1969 (a topic that we still follow).

Secondary Table of Contents listing Informational topics further below (corresponding to pertinent Historical Technical Events in related areas)

Click here to jump to our Capabilities.

Click here to jump to Who We Are and What We Have Done and What We Can do for You

Click here to jump to A View of Several Topics that were Missing from 9th Annual High Performance Embedded Computer Workshop (September 2005).

Click here to jump to Important Points conveyed at EDA Tech Forum (7 October 2005).

Click here to jump to Important Aspects of New England Chinese Information & Network Association (NECINA) (29 October 2005)

Click here to jump to Important Aspects of National Instruments Symposium for Measurement & Automation (1 November 2005)

Click here to jump to Important Aspects of Ziff Davis Endpoint Security Innovations Road Show (7 December 2005)

Click here to jump to Important Aspects of Analytical Graphics Inc. Technical Symposium for STK 7 (30 January 2006)

Click here to jump to Important Aspects of The MathWorks Technical Symposium on using Simulink for Signal Processing and Communications System Design (31 January 2006)

Click here to jump to Important Aspects of National Instruments Technical Symposium for LabView 8.0 Developer Education Day (30 March 2006)

Click here to jump to Important Aspects of Lecroy and The MathWorks Technical Presentation on Data Customization (20 April 2005).

Click here to jump to Important Aspects of IEEE Life Fellow William P. Delaney Lincoln Laboratory talk on Space-Based Radar (15 April 2006).

Click here to jump to Important Footnote to Aspects of IEEE Life Fellow William P. Delaney Lincoln Laboratory talk on Space-Based Radar (15 April 2006).

Click here to jump to Important Aspects of Open Architecture DOD Planning Seminar (9 May 2006).

Click here to jump to Important Aspects of Microsoft Windows Embedded Product Sessions (23 April 2006).

Click here to jump to Important Aspects of AGI Missile Defense Seminar 2006 (10 August 2006).

Click here to jump to TeK Associates' THOUGHTS Following Two Half-Day Presentations by COMSOL on COMSOL Multiphysics (6 March 2009).

Click here to jump to TeK Associates' Objections Following HP "Rethinking Server Virtualization" workshop.

Click here to jump to Status of Microsoft Software Security.

Click here to jump to Unsettling Thought for the Day.

Click here to jump to a second Unsettling Thought for the Day.

Click here to jump to yet a third Unsettling Thought for the Day.

Click here to jump to yet a fourth Unsettling Thought for the Day.

Click here to jump to yet a fifth Unsettling Thought for the Day.

Click here to jump to yet a sixth Unsettling Thought for the Day.

Click here to jump to References cited for sections below.

Click here to jump to the screen that contains our Primary Table of Contents 

Capabilities:

   Consulting for engineering design, analysis, and performance evaluations of alternative algorithms.

   Proposal\report preparation for mathematically-based methodologies and algorithmic topics.

   Independent Verification and Validation (IV&V) of software provided by others.

   Preparation of Software Requirements Specification (SRS) in the estimation area for navigation or Early Warning Radar (EWR) target tracking.

   Implementing software prototypes and exercising them in MatLab®\Simulink® (or in TK-MIP™) simulations as a precursor performance baseline (from which software may subsequently be generated automatically using a cross-platform compiler from a Simulink® code base; see DeRose, L., Padua, D., “A MATLAB to Fortran 90 Translator and its Effectiveness,” Proceedings of the 10th ACM International Conference on Supercomputing (ICS ’96), Philadelphia, PA, pp. 309-316, May 1996, and DeRose, L., Padua, D., “Techniques for the Translation of MATLAB Programs to Fortran 90,” ACM Transactions on Programming Languages and Systems, Vol. 21, No. 2, pp. 285-322, Mar. 1999).

   Preparing clear, easily understood final status reports to accompany completion of all initiatives.

   Marketing...sometimes blatantly advertising capabilities and past successes.

We go through the detailed epsilon-delta arguments so that you don’t have to (by our bending over backwards to explain things in simple terms that are understandable up and down the line at all levels of sophistication and interests).     Go To Table of Contents


TeK Associates’ areas of special expertise: Decentralized Kalman filters; automatic Event Detection (i.e., detecting owncraft NAV component failure or detecting enemy target vehicle maneuvers) by further post-processing Kalman filter outputs; specifying Kalman filters for INS\GPS Navigation applications; investigating approximate nonlinear filtering for Reentry Vehicle (RV) target tracking in strategic National Missile Defense (NMD) scenarios using radar, InfraRed (IR), & other Angle-Only Tracking (AOT) methodologies. (We also have experience with optimal control algorithms and follow its supporting literature. There is a well-known and documented "duality" between the results and techniques of these two fields.) We also have experience in the area of “Search and Screening” and search-rate exposure and sensor behavior, as arise in military surveillance considerations and countermeasure concerns.
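As a simple illustration of what “post-processing Kalman filter outputs” for event detection can look like, here is a minimal sketch of a generic normalized-innovations-squared (chi-squared) test. It is a textbook-style illustration only (not TeK Associates’ CR2 two-confidence-region method), and the function name, dimensions, and false-alarm level are illustrative assumptions:

    import numpy as np
    from scipy.stats import chi2

    def innovations_event_test(z, z_pred, H, P_pred, R, alpha=0.01):
        # Flag a possible event (e.g., NAV component failure or target maneuver)
        # when the Kalman filter's normalized innovations squared exceeds a
        # chi-squared threshold.  alpha is the per-sample false-alarm probability.
        nu = z - z_pred                               # innovation (measurement residual)
        S = H @ P_pred @ H.T + R                      # innovation covariance
        d2 = float(nu.T @ np.linalg.solve(S, nu))     # normalized innovations squared
        threshold = chi2.ppf(1.0 - alpha, df=len(z))  # chi-squared acceptance threshold
        return d2 > threshold, d2, threshold

In practice a single-sample test like this would usually be smoothed over a sliding window of residuals before declaring an event, trading detection delay against false-alarm rate.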

Our capabilities and experience carry over to investigations of other similar mathematics-based algorithms as well (such as to multi-channel Maximum Entropy spectral estimation techniques, which are also model-based). We can also perform a baseline assessment of expected interactions with other algorithms present on a particular platform such as interactions with multi-target tracking or with clutter suppression algorithms for radar applications. We have also looked into aspects of Sidelobe Canceller (SLC) algorithms and of Space-Time Adaptive Processing (STAP). We are aware of STAP’s severe vulnerability to jammers that are non-stationary (in the statistical sense). We are also aware of how to numerically quantify the adverse effects of enemy jamming on tracking algorithm performance. We have experience in analyzing the performance of Inertial Navigation Systems (INS), Global Positioning System (GPS) receivers, and its NAVSAT predecessor as they affect INS navigation outputs, and with certain particular radar applications as they relate to target tracking, Kalman filtering, and other estimation algorithms and approaches. From cradle to grave.... We cover the waterfront. Icelandic: “Frá upphafi til enda.” (“From beginning to end.”)

We are not intimidated by modern mathematics nor by evolving terminology and buzz words but enjoy the challenge of dealing with them. We know the difference between affine and linear systems and we are as comfortable with Zak and Abel transforms as we are with conventional FFT’s and Laplace transforms in the parlance of classical mainstream Electrical Engineering. We are aware of the significance of Lie Group theory in modern time-frequency signal processing applications, as well as of Lie algebras having previously arisen in bilinear systems and in investigations of when tractable finite dimensional realizations occur in seeking to implement certain exact nonlinear filters (and how Lie Groups also arise in the more controversial String and Super-String theories currently being pursued by some researchers in quantum mechanical ties to cosmology [85], [86]), and of the practical origins and historical “roots” of Lie Groups in investigations of how to properly perform “separation-of-variables” in seeking solutions to challenging partial differential equations (PDE’s) in applications.  (Hey, we have our roots in Electrical Engineering and so are familiar with Maxwell’s equations [transmission lines, characteristic impedance z0, Smith Charts, Voltage Standing Wave Ratio (VSWR), waveguides with TM or TE modes, standard TEM waves in space, optical fibers and their various modes, and their High Altitude Electromagnetic Pulse (HEMP) bleaching vulnerability unless adequately shielded or doped with halides, etc.], as well as with the role and techniques of Schrödinger’s equation in modern physics and quantum mechanics [with mesons, pions, and muons, charm and color quarks, fermions, and bosons], and the challenging Navier-Stokes Partial Differential Equations [PDE’s] arising in aerodynamics and fluid flow.) Schrödinger’s equation is somewhat similar to the Kolmogorov equation (where there are, in general, forward and backward Kolmogorov equations that describe the statistical estimation situation in continuous time), also known as the Fokker-Planck equation, arising in optimal statistical estimation for both the linear Gaussian case (which is very tractable and simplifies to the standard Kalman filter) and the general nonlinear case (usually very intractable and computationally tedious for all except the simplest of problems that, unfortunately, are neither realistic nor practical). The similarity is that both deal with the time evolution of probability density functions (pdf’s) or information flow. [For Schrödinger’s equation, the solution must first be multiplied by its own conjugate and normalized before it is a true pdf.] With use of COMSOL Multiphysics, there is now hope for new theoretical breakthroughs associated with computational insights gained. Compare this to [97]. There is a similar, even more challenging PDE that arises for stochastic control [93], related to the Bucy-Mortensen-Kushner PDE (see page 176 in [94] for a clear, concise perspective), that may perhaps now be solved using COMSOL Multiphysics without invoking the bogus so-called Separation Theorem for nonlinear systems. It is bogus only for nonlinear systems, where it is a standard assumption made merely to gain tractability (whether or not it is true and warranted, and usually it is not). COMSOL Multiphysics can also handle the Navier-Stokes Equations of Computational Fluid Dynamics (CFD). Also see Pavel B. Bochev, Max G. Gunzburger, Least-Squares Finite Element Methods, Applied Mathematical Sciences, Vol. 166, Springer Science + Business Media, LLC, NY, 2009.
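For orientation, the equation behind the discussion above can be written out explicitly. For a scalar diffusion dx = f(x,t) dt + g(x,t) dw driven by Brownian motion w(t), the forward Kolmogorov (Fokker-Planck) equation governing the probability density p(x,t) is, in standard textbook form (the scalar-state case is shown only for simplicity):

    \frac{\partial p(x,t)}{\partial t}
      = -\frac{\partial}{\partial x}\bigl[ f(x,t)\, p(x,t) \bigr]
        + \frac{1}{2}\,\frac{\partial^{2}}{\partial x^{2}}\bigl[ g^{2}(x,t)\, p(x,t) \bigr]

When f is linear in x, g does not depend on x, and the measurements are linear with additive Gaussian noise, p(x,t) remains Gaussian, so only its mean and covariance need to be propagated; that special case is exactly the standard Kalman filter mentioned above.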

We are as comfortable in a Hilbert Space or in a Banach Space context as we are with analysis in standard finite dimensional Euclidean spaces with the usual metric. We are aware of the utility of counterexamples that serve as cautionary guideposts in an analytic quest but we use balance and common sense as we proceed. We know the difference between point-set topology and algebraic topology. Foliations, manifolds, and trees don’t faze us. We understand when integrals and limit taking can be validly interchanged (and when they can’t) and we know when it is important to distinguish between different types of integrals, both deterministic and stochastic. We are aware of the various alternative and approved analytic measures that yield different answers for the estimate of the fractal dimension of the same exact problem. We also routinely track and use new developments in statistics and random process theory (including importance sampling, bispectra and trispectra techniques, and alpha-stable distributions). We are familiar with martingales, semi-martingales, and nested increasing sigma-algebras and their relationship to conditional expectation. We know about the Scottish Book and have a copy. However, we are also very results-oriented and so usually downplay the detailed analytic underpinnings when we report results to a customer unless they specifically request that such detail be supplied. Normally, we convey results at a high level and seek to avoid inundating busy readers with details that they may not appreciate nor want to hear about for their application.

                            

We know that customers are busy handling their own more pressing problems, fighting fires, chasing deadlines, and want to trust us to do the “right thing” in these supporting analytic areas; however, we will always make customers aware of any problems encountered along these lines if these types of problems become an issue or a “show stopper”. We can be of value in making the transition from initial theory to software implementation of new algorithms because of our experience and wide understanding of the underlying analytics and by our proven track record of ensuring that critical aspects of a solution are not lost during the translation into working computer code.

We state issues simply and write them down clearly in a manner that is easy for customers to comprehend. We don’t seek to impress by using contorted compound-complex sentences or multi-syllabic words. We try to keep things as simple as possible (but not more so). This is our quest and our forte.   Go to Top  Go To Secondary Table of Contents

A partial list of our prior accomplishments (constrained here merely to our main specialty: the area of Kalman filter related topics and concerns):

Developed the theory and implemented a real-time failure detection scheme, denoted as the Two Confidence Region (CR2) approach (for the navigation systems of the U.S. Navy’s SINS/ESGM of C-4 Trident and C-4 backfit Poseidon Submarines) based on subsequent processing of Kalman filter output estimates and covariances (from the 7-state SINS STAR [Statistical Reset] Navigation filter and 15-state SINS/ESGM Navigation filter) in order to deduce presence or absence of ellipsoidal overlap. Development included implementation of truth model and filter model simulation, performance evaluations, decision threshold settings, theoretical and practical evaluation of associated Receiver Operating Characteristics (ROC), evaluation of performance with real system data as well as with simulation, and evaluation of the effect of any imperfect (i.e., practical) Failure Detection, Identification, and Reconfiguration [FDIR] methodology (such as this) on total navigation system Reliability/Availability, both theoretically and practically [1]-[6], [13]-[15], [19]-[21], [26];
Posed the problem of submarine navaid fix utilization while evading enemy surveillance as a “cat and mouse” game of “sensor schedule optimization” within the Kalman filtering context [7], [8], [12] (methodology is unclassified but quantifications using parameters from JHU/APL are SECRET);
Investigated use of a decentralized filtering formulation within U.S. Navy’s Joint Tactical Information Distribution System (JTIDS) Relative Navigation (RelNav) and demonstrated how stability of the collection of JTIDS filters of the participants could be established using associated Lyapunov functions (if a particular structural form of decentralized filter were adopted and used as we recommended) [9]-[11];
Found flaws (as specifically identified by us) in many software implementations by others associated with computational tests of matrix positive semi-definiteness as used both in the inputted covariance models of Q, R, and P0 for the Kalman filter and in the on-line computed covariances availed as output from the Kalman filters used for U.S. Navy Inertial Navigation System (INS) and Sonobuoy Target Tracking filters [22], [24], [25], [52] (a small illustrative sketch of such a positive semi-definiteness test appears just after this list);
Developed and evolved a catalog of analytic test problems of known closed-form solution to test and verify various critical aspects of software code devoted to Kalman filter [23], [27]-[32] implementation (useful for software IV&V in both the Kalman filter and Linear Quadratic Gaussian feedback control areas) [27]-[32], [34];
Found errors in calculation of matrix pseudoinverse as it arose in a particular reduced order filter formulation [33], [34] and corrected prevalent misconception in the asserted computer burden of existing Minimum Variance Reduced Order Filter (MVRO) formulation [33], [34];
Critically reported the status of all failure detection techniques that we had encountered by the middle 1980’s as we investigated decentralized filtering and FDIR for the Air Force’s Integrated Communications Navigation, Identification for Avionics (ICNIA). This was decentralized filtering to be used aboard a single self-contained platform to ameliorate the effect of any battle damage or anomalous component failures in Navaid subsystem sensors (as a consequence of existing component Mean-Time-To-Failure/Mean-Time-Before-Failure [MTTF/MTBF]) to still provide a measure of self-healing, fail-safe operation, or limp home capability in the face of potential failures “by doing the best that one can with what was still available” [16]-[18], [40];
Researched and Implemented a 6-state Radar Target Tracking filter for strategic Reentry Vehicles (RV’s). Since the mathematical model within the tracking filter is nonlinear in both dynamics and measurements, we used an Iterated Extended Kalman Filter (IEKF) as an approximate implementation that met the goal of being real-time while finding an analytic simplification that yielded high accuracy without incurring as much computational burden as had previously been the case for IEKF’s [35];
Implemented and simulated an Angle-Only Tracking (AOT) filter [also known as a Bearings-Only Tracking (BOT) Filter] for the situation of two or more Radars cooperatively coordinating to track strategic RV targets when enemy jamming causes the usual range measurements to be denied. This situation is even more nonlinear and potentially sensitive and unstable than the usual unjammed case where both range and angle measurements are available to the target tracking filter (which is still an approximate nonlinear filter but not as taxing to work with as in AOT) [36];
Identified a pitfall in an existing evaluation methodology purported to be useful for calculating Cramer-Rao Lower Bounds (CRLB) for nonlinear situations (as arise in radar target tracking, AOT, passive directional sonar, or sonobuoy target tracking, where some form of an Extended Kalman Filter [EKF] is typically used) [37];
Developed a decentralized 2-D Kalman filter-based sensor fusion approach for handling image enhancement [38], [39];
Evaluated Cramer-Rao Lower Bounds for the situation of exoatmospheric target tracking using a recursive estimator such as an EKF [41]-[43];
Participated in writing the software specifications for Updated Early Warning Radar (UEWR) as part of National Missile Defense (NMD);
Participated in the performance evaluations for UEWR between Maximum Likelihood Batch Least Squares (BLS) processing (which is iterative over the whole time interval) versus use of an Extended Kalman Filter (RVCC or UVW EKF) implementation (which is merely a recursive filter offering computed outputs at each measurement time step). Of great interest were the online computed covariances. Only the EKF provided these in real-time although the BLS provided associated covariances that were more accurate or “truthful” [44]-[46];
Developed and simulated GPS/INS integration within an Airborne platform used to support electronic terrain board data collection [47];
Developed a Kalman filter-based covariance analysis program in MatLab® and exercised it by performing quantitative analyses of the relative pointing accuracy associated with each of several alternative candidate INS platforms of varying gyro drift-rate quality (and cost) by using high quality GPS external position and velocity fix alternatives: (1) P(Y)-code, (2) differential mode, or (3) kinematic mode at higher rates to enhance the INS with frequent updates to compensate for gyro drift degradations that otherwise adversely increase in magnitude and severity to the system as time elapses;
Thirty years later, returned to the topic of the first bullet and obtained new results [48], [49];
Thirty years later, returned to the topic of the second bullet and obtained new results [50], [51];
Developed TK-MIP™ to capture and encapsulate our Kalman filter knowledge and applications experience (and LQG/LTR experience) and put it on a platter to make it easily and affordably available and accessible to others (both novices and experts) to enjoy by expediting their simulation and/or real-time processing and evaluation tasks and, in the case of the novices, learning those topics that are germane. Clarity in Graphical User Interface (GUI) interaction was one of our primary goals, and one that we met. Please click on the TK-MIP version 2.0 for PC button at the top of our Home Page screen to proceed to a representative free demo download of our TK-MIP® software. If any potential customer has further interest in purchasing our TK-MIP® software, a detailed order form for printout (to be sent back to us) is available within the free demo by clicking on the obvious Menu Item appearing at the top of the primary demo screen (which is the Tutorials Screen within the actual TK-MIP® software). We also include representative numerical algorithms (fundamental to our software) for users to test for numerical accuracy and computational speed to satisfy themselves regarding its efficacy and efficiency before making any commitment to purchase.
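As promised in the list above, here is a minimal sketch of the kind of matrix positive semi-definiteness test referred to there. It is our own illustrative example (written here in Python/NumPy, not taken from any of the flawed implementations cited), and the tolerance value is an assumed, representative choice:

    import numpy as np

    def is_positive_semidefinite(M, tol=1e-10):
        # Test whether a covariance matrix (e.g., Q, R, P0, or an on-line computed
        # Kalman filter covariance) is symmetric positive semi-definite, to within
        # an illustrative numerical tolerance scaled by the largest eigenvalue.
        M = np.asarray(M, dtype=float)
        if not np.allclose(M, M.T, atol=tol):
            return False                      # a covariance must be (numerically) symmetric
        eigvals = np.linalg.eigvalsh(M)       # eigenvalues of the symmetric matrix
        return bool(eigvals.min() >= -tol * max(1.0, abs(eigvals.max())))

    # Example: symmetric, but NOT positive semi-definite (eigenvalues are 3 and -1)
    P = np.array([[1.0, 2.0],
                  [2.0, 1.0]])
    print(is_positive_semidefinite(P))        # prints False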
Go to Top   Go To Secondary Table of Contents

We say what we see!

A View of Several Topics that were Missing from 9th Annual HPEC

MIT Lincoln Laboratory’s 9th Annual High Performance Embedded Computer (HPEC) Workshop (20-22 September 2005) was, as in the past, of high quality and well worth attending. However, the present author, well known for possessing a critical eye, will now sacrifice diplomacy and brevity for the sake of clarity in the hope that such explicitness will be useful to others for future remedy. To this end, we make the following observations: although Linux, IBM, and Sun Microsystems technology thrusts were prominently displayed and discussed at the 9th annual HPEC Workshop, as well as the contributions of other third party vendors, notably absent was any mention of what Microsoft has accomplished along those same development lines. The U.S. Navy AEGIS cruiser U.S.S. Yorktown debacle of being “dead in the water” for 15 minutes when it used Windows NT® some years back [In September 2009, at the Embedded System Conference (ESC) at the Hynes Convention Center in Boston, MA, it was revealed that this failure was due to operator error by initiating a “divide-by-zero” operation without any error handling present in the software to mitigate its effect] was cited at this workshop and, perhaps, was motivation for leaving Microsoft out of the mix as other Commercial-Off-The-Shelf (COTS) products were being considered more seriously by DoD. However, nothing is static and Microsoft has continued to improve and correct past mistakes as well! Microsoft has a considerable R&D budget for innovative improvements and a running joke is that Microsoft usually “doesn’t get it right until Version 3”. (More will be said at the very end about how Microsoft has been turning itself around regarding computer security and their prior lack of it.) However, I never bet against Bill Gates. As an actual software developer myself, I naturally have a love-hate relationship with Microsoft, as do most people, but you have to respect their accomplishments.

At the HPEC Workshop, the following items, in my opinion, were somewhat slanted in the facts that were portrayed or somewhat objectionable in overlooking other lucrative alternatives:

Automatic garbage collection for C was mentioned as an accomplishment or milestone target to be achieved. (Microsoft’s C# has already had that feature for more than two years in .NET®);
IBM is developing a single Integrated Development Environment (IDE) that will be used in common for developing software with several different computer languages. (Microsoft has had this capability for over two years in Microsoft’s Studio.NET®, with about 20 planned alternative computer languages present there for software developers to choose from [including an eventual capability to write code targeted to an Apple computer, as would be cross-compiled]);
Another two year old West Coast (in Oregon) standards service (discussed in a presentation that was a last minute substitution) offers education, training, and certification in the fast paced evolution of VITA I/O standards for $10,000/yr for an organizational membership (while the older, well-known Instrumentation, Systems, and Automation Society [ISA] does the same thing [i.e., education, training, and software certification] for $85/yr for an individual membership and offers [and welcomes] grassroots corporate participation to any desired degree of involvement in defining and/or critiquing the evolving standards). Also not mentioned at HPEC, other standards related to VHDL® and Verilog® are available for free from Accellera, which passes them off to the IEEE (e.g., Standards 1800, 1850, 1476.4, 1481 and SystemVerilog accoutrements such as Property Specification Languages [PSL], Open Verilog Library [OVL], Open Kit [OK], Verilog AMS, Interface Technology [ITC], and Test Compression in a single chip system), which, in turn, works through the IEC in Geneva, Switzerland for international acceptance. Accellera is an entity-based consortium, with only one vote per organization being allowed, and any non-member can monitor and inspect technical subcommittee (TSC) work for free. (Only early preliminary versions of these standards are available for free from Accellera as the standards are evolving, since once they are finalized and accepted by the IEEE any free downloads would be a violation of IEEE’s copyright agreement because the IEEE sells these IEEE Standards as a way of generating revenue.) Moreover, although not mentioned at this HPEC Workshop, easy development paths are currently available through use of:

1.  Capital Equipment Corp.’s (CEC) contemporary software product, TesTPoint®, for accessing measurement data from various transducers in designing Test and Measurement and Data Acquisition solutions,

2.  National Instruments’ (NI) LabView® version 7.1 and 8 (version 8 being advertised as of 1 October 2005) along with its real-time toolkit,

3.  The MathWorks, Inc., using MatLab® and Simulink® and its Data Acquisition Toolbox and its Fixed Point Toolbox, and, in particular, using a third party product (from Altera), can automatically generate VHDL and Verilog for targeted FPGAs or ASICs (where the repertoire of target processors is currently somewhat limited, e.g., XILINX, Synplify DSP) directly from the model simulation in Simulink on a desktop or laptop PC. There is automatic generation of the system-level Simulink testbench and the ModelSim testbench, and system-level verification is provided (as presented at EDA Tech Forum in Waltham, MA on 7 Oct. 2005);

where, in items 1 and 2 above, TesTPoint can only be a Server but LabView can be either a Client or a Server or both (as of 2005, CEC is a wholly owned subsidiary of NI);

Despite widespread agreement among software developers and computer science practitioners about 9 years ago that the number of function points achieved (i.e., the significant objectives accomplished within a certain number of lines of code written by the programmer, with Microsoft’s Visual Basic® being way out in front of the pack of available software development languages, as assessed in the mid 1990’s and reported at one of the monthly meetings of the Boston Section of the IEEE Computer Society) was a better measure of programmer or project productivity than merely the number of lines of code produced, the mere number of lines of code (LoC) was again resorted to by both MPT, Inc. and Lincoln Laboratory as a significant component of their proposed software productivity measures. (Follow-up comment in 2009: If only Lincoln Laboratory had exploited available resources on the MIT main campus, such as tapping the brain of Prof. Barbara Jane Liskov, who has been at MIT since 1972 and who received the ACM’s 2008 A. M. Turing Award, often called “the Nobel Prize of computing,” for inventing data abstraction, which organized code into modules and avoided “GOTO’s”.)
No mention was made of a big pitfall or standard caution associated with use of any version of MatLab, be it standard desktop MatLab or MatLab® within a Grid Computing environment: The MathWorks cautions that since MatLab is matrix-based, any nested loops (using Do…While, For…Do) should be avoided or, at most, only one of several nested loops should be explicitly in MatLab and the other remaining loops should be written in a different computer language such as C (and, historically, possibly Fortran for the older versions of MatLab). This means that standard programming constructs such as linked lists cannot be conveniently handled within MatLab. When nested loops are encountered within MatLab, an extraordinarily and exorbitantly long time for completion or convergence is usually experienced. An example of such a CPU-intensive, time-wasting situation for achieving convergence to a final answer is the inverse problem of Case 6 (algorithm EDEVCI [available from The MathWorks’ Website, associated with David Hu’s book]) in [53] of specifying the 3-D Error Ellipsoid for a completely general Gaussian with different major and minor axis variances and nonzero cross-correlations for a particular specified containment probability [being 0.97 within the National Missile Defense (NMD) program rather than the standard Spherical Error Probable (SEP), which is defined for ellipsoidal containment with probability 0.50]. (Notice that these practitioners actually tinker with fundamental definitions that up to then had been standardized and fixed; this represents an opportunity to bamboozle!);
No mention of the security flaw discovered by two MIT researchers this past year as existing in all current Grid Computing environments and in most Super Computing environments as well (see [54]). Lincoln Laboratory mentioned, instead, that “their Grid Computing is behind their Firewall”. Evidently they have never heard of disgruntled employees wreaking havoc within their own premises. (Onboard U.S. submarines, there are “man-amuck” drills that are routinely practiced to deal harshly with subversives onboard, which, historically, have existed and required crew protection from.) Optimistic developments for the affordable enabling of Grid Computing were already available in 2004 for C/C++ implementations and for JAVA implementations, and now in 2005 for other less restricted, more general language implementations including Fortran (via Digipede, as discussed in [70], which gives it a 4 star rating out of a possible 5). [After 2005, other approaches to Grid Computing also exist, like the Berkeley Open Infrastructure for Network Computing (BOINC), which accommodates volunteer participants running Windows, Mac OS X, Linux, and FreeBSD. As a “quasi-supercomputing” platform, BOINC had about 527,880 active computers (hosts) worldwide processing, on average, 5.428 petaFLOPS as of August 8, 2010, which tops the processing power of the then-fastest supercomputer system (the Cray XT5 “Jaguar”, with a sustained processing rate of 1.759 PFLOPS). BOINC is free software released under the GNU Lesser General Public License.]
Apparent reluctance of several analysts at HPEC (e.g., QinetiQ, Ltd., AccelChip Inc., MIT Lincoln Laboratory) to consider the further option of using a Householder Transformation in seeking solutions to a general over-determined simultaneous system of linear equations Ax=b, where A is (m x n) of real entries, b is an m-vector of known real valued entries being specified, with m ≥ n. Instead, these analysts confine the candidates to be merely use of:
  1. Cholesky algorithm,

  2. Singular Value Decomposition (SVD),

  3. QR algorithm,

even though numerical analysts depict Householder in [55] as being adequate for obtaining the solution and actually having the lowest operations count or flop count. The subset of cases to consider was contorted by the presenters (AccelChip Inc.) beyond what most mathematicians would consider as the following adequately rigorous statement: “If the coefficient matrix A and the augmented matrix [A|b] have the same rank=p, then solutions exist (but may be an [n-p]-fold infinite number of solutions). However, if rank[A] = rank[A|b] = n, where n is the number of unknowns, then the solution exists and is unique.” Effects of round-off and machine precision in floating point applications and in fixed point applications should only slightly alter this straightforward statement of the theoretical underpinnings for solving simultaneous systems of linear equations. (A small numerical sketch illustrating this rank condition appears just after this list of observations.) Charlie Rader (Lincoln Laboratory) was said to have strongly influenced the above analysts’ choice in narrowing the scope to only the above three options. Charlie Rader and Alan Steinhardt had worked with hyperbolic Householder transformations 20 years ago and published their work on this [56] but evidently ended up using a different algorithm in the actual final practical application;

Lincoln Laboratory Benchmark Test includes the requirement to implement the Space-Time Adaptive Processing (STAP) algorithm in software. While this algorithm provides excellent processing results for phased arrays in ideal situations, the less well known downside is that STAP is extremely vulnerable to enemy jammers that are more sophisticated than mere Barrage jammers (i.e., wideband stationary Gaussian White Noise [GWN] of constant power). Nonstationary (in the statistical sense) GWN jammers or synchronized blinking jammers operating at a sufficiently high blink rate (i.e., faster than the known and published convergence rates of the published beam-forming null-steering algorithms of the phased array or sidelobe canceller) wreak havoc with STAP [57], as does nonstationary clutter (i.e., nonstationarity of clutter is the usual case), where adverse sensitivity to nonstationary clutter is admitted in [58] on pages 77, 107, and in Sec. 2.5 (sensitivity, as just used, is too soft a word since it is a show stopper; ergodicity of the covariance, whereby a good estimate of the actual covariance may be obtained from time samples, requires that the random process being sampled be stationary [in the statistical sense]; otherwise, the covariance cannot be obtained from time samples; the conclusion is as simple as that and it is not at odds with what is conveyed in [82], [102]);
Sun Microsystems (as presented by Guy Steele) is working on a variant of Fortran called Fortress® for parallel processing. (Fortran 95 accommodates parallel processing implementations already and is one of the languages included within Studio.NET®. Also see HP/Compaq’s Digital Visual Fortran® and Absoft’s Pro Fortran®, both having had versions of Fortran 95 for over 5 years for Windows® platforms.)
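To make the rank statement quoted in the item on over-determined systems Ax=b above concrete, here is a small numerical sketch of our own; the particular matrix and right-hand side are arbitrary illustrative choices, and NumPy's QR factorization is computed with Householder reflections via LAPACK, in the spirit of the Householder option discussed in that item:

    import numpy as np

    # Illustrative over-determined system A x = b with m = 4 equations, n = 2 unknowns
    A = np.array([[1.0, 1.0],
                  [1.0, 2.0],
                  [1.0, 3.0],
                  [1.0, 4.0]])
    b = np.array([6.0, 5.0, 7.0, 10.0])

    # rank[A] = 2 but rank[A|b] = 3 here, so the equations are inconsistent:
    # no exact solution exists, and one settles for the least-squares solution.
    print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(np.column_stack([A, b])))

    # QR factorization (computed with Householder reflections inside LAPACK),
    # followed by back-substitution, gives the least-squares solution.
    Q, R = np.linalg.qr(A)            # A = Q R with R (2 x 2) upper triangular
    x = np.linalg.solve(R, Q.T @ b)   # least-squares estimate, approximately [3.5, 1.4]
    print(x)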

We found 2004’s 8th Annual HPEC discussion by Chris Richmond (Lincoln Laboratory) somewhat objectionable along the following lines: Unlike what Lincoln Laboratory asserted, the current Pentek Website portrays the empirical “Moore’s Law”, as adhered to for over the last 40+ years, to still be viable for hardware. Moreover, Lincoln Laboratory’s Fast Fourier Transform (FFT) example of where algorithms can take up the slack in development applications by satisfying “Moore’s Law” used the 1965 J. W. Cooley & J. W. Tukey result as its date of inception instead of the more recent revelation [59] that Carl Friedrich Gauss was the originator, as published in 1805. A time scale of 200 years (=2005-1805) vice 40 years (=2005-1965) refutes last year’s conclusion that FFT innovative developments adhere to “Moore’s Law”. https://www.technologyreview.com/s/615226/were-not-prepared-for-the-end-of-moores-law/ (As an aside, Winograd’s version of the FFT, although possessing the smallest operations count, needed higher level managerial control and bookkeeping actions that caused the entire algorithm to actually take longer on von Neumann sequential machines than did the Cooley-Tukey FFT. Winston (Win) Smith’s Swift (Sequence-Wide Investigation with Fourier Transform) algorithm for computing the FFT using number theoretic primes that are not a power of two turned out to be a disappointment by not possessing the symmetric rounding characteristics of the hardware “butterfly” implementations associated with Cooley-Tukey FFT’s that ameliorate the effect of repetitive round-off errors that adversely accumulate for long run times. The hope had been that data to be transformed would no longer need to be padded with zeroes to be a power of 2 long, since padding like this reduces the intensity or clarity of the resulting output of the FFT processing. However, this hope evaporated. Currently, the newly declared “winner” is FFTW. [See fairly recent issues of Dr. Dobb’s Journal since 2001 for more detail about the Fastest Fourier Transform in the West: FFTW.]) In 2010, TeK Associates strongly suggests that, with the prevalent routine availability of multi-core machines from several different manufacturers, signal processing technologists should reexamine the Winograd version of the FFT, mentioned above, for likely improvements in speed beyond that of FFTW: the managerial aspects of the Winograd algorithm could be confined to one core and handled in parallel in a more straightforward fashion, without getting in the way of the multiplications, additions, and shifting operations of the FFT on another core, with fast cross-communication (due to the close proximity between them) for proper inter-process coordination.

Further external substantiation of the unabated continuation of “Moore’s Law”, at least in the near term, can be found in the October 2005 issue of Inc. magazine within Michael S. Hopkins’ article: “75 Reasons to be Glad You’re an American Entrepreneur Right Now,” pp. 89-95; in particular, Item 17 (page 90) says: “Moore’s Law - despite anyone who says it no longer applies, we guarantee that tomorrow the computer your company needs will again be faster, better, and cheaper than it is today.” There is further recent evidence confirming the trend of “Moore’s Law” in [73], [81], [83]. Intel claims that while there is physics that dictates encountering limitations in seeking to push electrons through smaller and smaller dimensions at faster and faster speeds, thus causing more heating for diminishing returns (due to Ohm’s law, since the heating increases the resistance), the way “Moore’s Law” can be maintained on course is through parallelism or, more specifically, by not demanding faster processing speeds of individual chips but, instead, by partitioning software solutions across several processor chips in parallel (a nontrivial pursuit that is challenging in and of itself but still possible with sufficient engineering creativity and insight), where chips themselves may be manufactured to have a parallel structure or even a 3-D structure.

Molecular afterburners mean more “Moore” (from the 24 July 2009 issue of Science, as a later follow-up on this topic of “Moore’s Law”)
Years of research into molecular computing have done little to supplant silicon in the computing world, which has led to creative suggestions for solving a major roadblock to the continuation of “Moore’s Law”: uneven doping that is acceptable at large sizes becomes unacceptable at the nanoscale. A solution, invented at Rice University, involves coating the silicon instead with a layer of dopant “afterburner”. Admittedly a temporary solution, but effective nonetheless.

As a 2018 follow-up on “Moore’s Law”, see Item 21 of https://www.vox.com/2014/11/24/7272929/global-poverty-health-crime-literacy-good-news
As a 29 October 2018 follow-up on “Moore’s Law”: “continuing to scale down the size of transistor features has become costlier and trickier in recent years. So much so that only four manufacturers of logic chips—GlobalFoundries, Intel, Samsung, and TSMC—were even planning to continue the multibillion-dollar effort. Those ranks have now thinned, and schedules for the remaining companies are slipping. But don’t count “Moore’s Law” out quite yet. If you’ve got the cash, you can now hold evidence of its power in your hand in the form of at least two smartphones. And new ways to improve performance without shrinking transistors appear to be in the offing.” (The following link is accessible only by signing in to LinkedIn: https://www.linkedin.com/pulse/good-bad-weird-3-directions-moores-law-alvin-lieberman-/?)

Extending Moore’s Law:
https://www.linkedin.com/pulse/quantum-strangeness-gives-rise-new-electronics-alvin-lieberman-/ 

https://asunow.asu.edu/20190211-quantum-strangeness-gives-rise-new-electronics 

https://www.technologyreview.com/s/615226/were-not-prepared-for-the-end-of-moores-law/ 

https://newsroom.ibm.com/2021-05-06-IBM-Unveils-Worlds-First-2-Nanometer-Chip-Technology,-Opening-a-New-Frontier-for-Semiconductors

The following discussion about the utility of decentralized Kalman filters ends in a discussion of some recent development options offered by Microsoft.

As a precedent, decentralized Kalman filters were used for C-4 Trident and C-4 backfit Poseidon submarine navigation in the 1970’s and 1980’s within the Ships Inertial Navigation System (SINS), Electrostatically Supported Gyro Monitor (ESGM), and joint SINS/ESGM operation (each system having its own filter running simultaneously, as blessed by Hy Strell and Norman Zabb [Sperry Systems Management]) to ultimately provide only one output to the Navigator (and to Fire Control). These systems were jointly analyzed by engineers at SSM, Johns Hopkins University Applied Physics Laboratory (APL), Rockwell International/Autonetics Division (RI/AD), The Analytical Sciences Corporation (TASC), and Dynamics Research Corporation (DRC). Decentralized Kalman filters also naturally arise in networked radio systems that attempt to provide navigation connectivity, such as done in the Navy version of the Joint Tactical Information Distribution System (JTIDS RelNav) for Relative Navigation, and by the Air Force Integrated Communications Navigation and Identification for Avionics (ICNIA), and possibly useful in the current Joint Tactical Radio System (JTRS), pronounced “Jitters”. [Integrated Communication, Navigation, and Identification for Avionics (ICNIA) ~1983 (by ITT, Nutley, NJ and by TRW in Redondo Beach, CA) consisted of an Air Force airborne architecture for simultaneous handling of a combination of almost the exact same radio systems, as a precedent twenty five years ago for the Advanced Tactical Fighter (ATF), as is now being pursued by JTRS.] An even earlier Air Force initiative (~1980), first spearheaded at C. S. Draper Laboratory in 1979 and known as the Multi-Frequency Multi-Band Airborne Radio System (MFBARS) architecture, simultaneously handled within 3 ATR cases (one full, two half full, within 100 lbs., within 1.7 cubic feet) what had previously occupied 13 ATR cases of various sizes, within 300 lbs., within 7.0 cubic feet. (Both MFBARS and ICNIA were being pursued for Wright-Patterson AFB, the latter having an associated Advanced or Adaptive Modular Antenna [AMA], with revolutionary joint antenna concepts spearheaded in the 1980’s by Jerry Covert at WPAFB.) The participating systems are depicted in the table below:

Software radios first appeared in 1980. The Army’s HAVEQUICK came later than the other radio systems depicted above.

Motivation for JTRS is that a smaller, lighter software radio will enable future combat troops in the field to carry more water instead of heavy communications equipment, as the standard pragmatic trade-off. A drawback to the use of JTRS beyond this field situation is that current platforms already have adequate radio communications, and JTRS will have to match current form, fit, and function, F3, so that any changes in power consumption, or cooling, or volume expected for JTRS (even if all are less) within a line replaceable unit (LRU) will still cause a huge expense in backfitting platforms to accommodate JTRS or anything new. Another problem is that, in order to be flexible enough to accommodate a change in radio protocol (i.e., JTRS changing mode from one particular radio system to another for its advertised interoperability, or changing frequency, or changing waveform [of the current 30+ waveforms in the JTel collection], or changing communications protocol), there is a need to reboot the processor. The time constraints for doing so are tight. A maximum of 4 seconds to reboot the operating system may suffice (as an objective being sought, pushing the envelope pretty hard) but an implementation taking or needing as much as 40 seconds to reboot is much too long to be practically accommodated by JTRS since it needs to be agile in switching modes. (See Software Communications Architecture [SCA] MIL-STD-188-110B. Also see “ ‘Jitters’ Radio To Provide Stealthy Communication Links,” Aviation Week & Space Technology, pp. 67-68, 15 April 2002.) (Also see the 45 year old MIT Lincoln Laboratory report by Fred C. Schweppe on selecting optimal waveforms.) Hmmm...Draper Laboratory spearheaded the effort in this area for 25+ years yet MIT Lincoln Laboratory gets the Air Force contract for JTRS. Maybe we should ask someone about it? Perhaps Vince Vitto knows? Maybe Jim Shields would know?

BAE Systems’ New Software-defined Radio Assembly Makes Space Missions More Flexible:
https://apnews.com/Business%20Wire/0cadb145a2ca44cdb9eeb6d95e786060 

As a pleasant surprise, Microsoft’s Windows XPe (XPembedded) using Ardence’s (previously Venturcom’s) ReadyOn appears to be able to reboot within 4 to 7 seconds, as a feat demonstrated for all in the audience to see at the Real-Time Embedded Computer Conference (RTECC) in Framingham, MA on 24 May 2005. Use of such an operating system (a reduced footprint duplicate for embedded processors of a Desktop Windows XP) in conjunction with Ardence’s excellent third party tool, ReadyOn (now, in May 2006, it is called ArdenceSelect with Instant On/Off [In 2009, IBM has a version of this same idea that they advertise as being TurboRAM.]) within embedded architectures leverages all experience with the existing wide set of readily available and familiar favorite software development tools and Integrated Development Environments (IDE’s) currently on Desktop or Laptop Personal Computers under Windows XP that can now be used for developing the software eventually intended for the embedded target machines. Moreover, in the embedded environment, there are tools that suppress any pop-up messages that usually need a mouse click or keyboard key press that may be routinely encountered in a desktop or laptop software implementation that would otherwise plague an embedded implementation (that is likely without any monitor screen, or mouse, or keyboard in the target application). Similarly, recall that in DOS on a desktop machine running Microsoft Windows, programmers can routinely suppress DOS messages to the user or error reports that would normally appear within a DOS Window on the monitor screen merely by redirecting the message to a file and then killing the file (by deleting it or its contents). Other options for embedded operating systems from Microsoft are Windows CE and Windows NTe (NTembedded).

After software development is completed on the host machine, as a planned development vehicle for the designated target machine, Microsoft has automated the task of selecting the subset of operating system software that is to be ported over to the target machine to support successful running of the particular software program that the user developed and to automatically include only those portions of the embedded operating system needed for successful running, including all dependencies that the casual or less experienced user may be unaware of as being needed. In this way, the final target software footprint and its supporting operating system may be kept small without exceeding hardware resources. For timeliness of response and reconfiguration, alternate reboot strategies (e.g., use of flash memory) are also supported by Windows XPe for target processor systems with perhaps no hard disk present. Such is the situation now for embedded devices. A wireless development path also exists within the Microsoft Windows XPe framework adhering to existing standards and which doesn’t force a proprietary turnkey approach on customers or users. All these new and recent Microsoft software products and high quality, compatible, third party tools make it easier to pursue Commercial Off-The-Shelf (COTS) implementations using readily available hardware processors with known capabilities and characteristics and possessing low expense because they are already manufactured in high volumes and already have a path in place to avoid future obsolescence (so as to not disappoint its existing, substantial customer base). Microsoft also offers other operating systems for embedded applications (to match the PC operating systems upon which the software was originally developed), which potentially eases eventual migration to embedded platforms, with proper subsets of operating systems being automatically extractable and tailored to support what a particular software application solution requires (thus enabling a reduced footprint for the operating system invoked for use in the particular application). Microsoft also guarantees support for these embedded Operating Systems for at least 10 years, and, for large volume customers, promises to give them the actual operating system source code too (for developers to provide their own long term support beyond what Microsoft now offers), and Microsoft now says that “they are not expecting to get paid until the software developer gets paid”. This is ideal risk mitigation that also leverages the experience of the army of PC programmers already available as a payoff for sticking with the Microsoft approach. The conventional wisdom of the 1980’s and 1990’s is still just as true: “Never bet against Bill Gates!” [Windows 7 after 22 October 2009 also has an embedded option similar to what was just described but more uniformly quantized into component parts that support a particular user application.] Additionally in 2006, National Instruments (NI) has several new products for LabView that can expedite software radio analysis and simulation both theoretically and in hardware prototypes: NI RF Modulation Toolkit (for AM, FM, PM, ASK, FSK, MSK, GMSK, PSK, QPSK, PAM, and QAM) complements the NI PXI-5660 RF Vector Signal Analyzer (with $15K, $13K, $21K options) and the PXI-5671 RF Vector Signal Generator (with $19K, $16K, $21K options) with $95 RF cabling and NI PXI-2592 500 MHz Multiplexer ($1,595).
NI Partners: SeaSolve, MindReady, and UK-based AMFAX have existing software solutions for handling several different standards including Bluetooth 1.2 & 2.0, AM/FM/RDS, 802.11 a,b,g, DySPAN 802.2, Wi-Fi/WiMAX/WLAN/WPAN, ZigBee (802.15.4) and for Industrial, Scientific, and Medical (ISM) band (viz., 59-64 GHz designated for unlicensed users at no charge) usage, and also accommodating PXI-6682 GPS Time-Stamp and Location within a slot of PCIe-compatible NI PXI Modules. They can also comply with constraints imposed by the Defense Security Service (DSS) for properly handling and declassifying received data. Click here to see a white paper with descriptions of newer, perhaps less familiar, modulation conventions and protocols.

To read about inherent benefits of having a background in HAM Radio operations for military operators working in more state-of-the-art defense systems, please click on:

https://forums.qrz.com/index.php?threads/military-experts-say-radio-amateurs-highly-knowledgeable-asset-in-hf-communication.729052/ 

Comments on the above article by Scott Sminkey:
"I've been a ham radio operator for about 45 years. My comments on the article are:

- Don't kid yourself about the knowledge needed to get a ham radio license. It's pretty easy. Unless things have changed dramatically, the test is multiple-choice and the question pool is publicly available, due to concerns in all sorts of government exams about cultural bias and probably other things. Being a ham radio operator and doing more than using a simple "appliance" handheld VHF/UHF radio can expose you to some real, practical technical things. In the HF spectrum, radio wave propagation is something you'll need to learn to make effective use of the different frequency bands available versus time-of-day and solar activity, for example. Working in the microwave region is likely to require building your own antennas and possibly transmitters.

- "Having the latest software, drivers, and firmware is essential in achieving successful HF operation." This is certainly not true for ham radio, giving it an advantage over military HF radios (if I understand correctly that paragraph in the article). Ham radios, except for those that transmit data rather than voice or Morse code, require only a power source (often 12VDC, sometimes at high current), radio, and antenna with no connections to anything else.

- The article mentions "QRPX." In ham lingo, "QRP" means low power, often battery powered and/or solar powered. To work at really low power, Morse code is used. It's slow to get info from point A to point B, but usually quite reliable even under conditions of interference (man-made and natural) and unfavorable propagation. During a solar peak year, I was able to log contacts with 40-something countries in two days with only 5 watts of power and simple wire antenna about 20 feet above ground.

- The one huge advantage of amateur radio is that there is no central point of failure, i.e., nothing that a hostile entity can attack. Furthermore, the operators are not in well-known locations like military bases. Trained ham radio operators are also used to operating in a directed network of independent stations where the director ("net control") can be any of them, if need be. Good examples of this can be seen at public events like the Boston marathon.

Anyway...that's my take on these topics addressed in the above link."

Subsequent Comments on the above by Thomas H. Kerr:

"Thanks for your comments Scott! Also, thanks for taking the time to write them down.

I was never a Ham operator myself but I did have a crystal set as a kid and did the grounding to my radiator (after scraping off the white paint [as prescribed]) in order to have a good electrical connection, and I ran a 50 ft. wire out of my second story window down to a 6 foot high steel chain-link fence (the wire antenna served as the hypotenuse of the triangle that was formed). Everything worked well, and I was pleasantly surprised. I later substituted a cardboard cigar box speaker for my headphones and that worked fine too. I had to cut out a 1-inch hole in the middle to use the top of the box as a sounding board and used the carbon rods from two D-cells that I sacrificed for this to bracket the hole. I notched the rods with a "finger nail" file after gluing them securely to the box on either side of the hole and placed a loose mechanical pencil lead in the grooves so that it would span both of the two carbon rods over the hole and would vibrate when an AM signal was received. The carbon rods still had their battery caps. The leads that previously went to the earphones were now attached to these metal caps that were retained at the end of the carbon rods from the D-cell batteries.

I was in grade school (mid 1950's) when I put this together. The crystal had a cat's whisker. I had used a wire-wound choke too for tuning, with a pick-off that somewhat resembled that of a rheostat or variable resistor. I know that a "choke" is an inductor. My first message received was a performance of Gian Carlo Menotti's "Amahl and the Night Visitors" (at Christmas time, when it was repeatedly broadcast ad infinitum back in those days).

Different Topic: Later, I put a motor together (when I was in 6th grade) and had to put windings on the armature and use strips of thin copper as brushes for the slip rings on the armature. I was an amateur with the armature! I also had a real telegraph in Jr. High School. Later, as an undergrad in college (1963-1967), I could appreciate brushless motors that avoided that wrinkle. Having brushes let me see that, over time, they lost their springiness, and I also had to keep them lubricated with conductive oil. As time went on, the brushes were less effective. They became somewhat floppy from metal fatigue.

I had an electrical lab at my technical high school. Nothing there was as complicated as what I had already done as a child in grade school. However, the use of multiple on/off switches for controlling the same light fixtures from both upstairs and downstairs was interesting.

In 1979, I worked on a frequency-hopped Time Division Multiple Access (TDMA) radio system called JTIDS RelNav for DoD Air Force, Navy, and Army customers on several different types of platforms (e.g., ships, aircraft, land vehicles, soldier man-packs), which also had a "designated controller" and no central point of vulnerability. If the controller was taken out, another transmitting station stepped up to take its place. Candidates were ranked on the basis of their respective absolute navigation accuracies. Power levels were so high that it was not allowed to be used in the CONUS during peacetime (or else it would swamp out other things of importance). The JTIDS JPO was at Hanscom Field. Operating at 960-1215 MHz, it had notches at the IFF, VOR/DME, and TACAN frequencies, and stayed away from the GPS frequencies (L1 = 1575.42 MHz, L2 = 1227.6 MHz) too. There are so many more GPS frequencies nowadays and more structure to the various signals [e.g., GPS IIIF: L1, L2, L3, L5, L1C, L2C, P-code, C/A-code, M-code, L1M, L2M, L1C compatibility with Global Navigation Satellite Systems (GNSS)].
Both JTIDS and GPS have a preamble to their respective signal structures, and the structure was published (but the scaling wasn't widely available). JTIDS also used a Reed-Solomon code, while GPS uses Gold codes. A later version of JTIDS for the Air Force was DTDMA rather than merely TDMA, where the new "D" preceding it stands for "Distributed".

While working on U.S. submarines and other things, I had to deal with Loran-C as well. The 1992 Federal Navigation Plan called for shutting down Loran-C; however, since the Coast Guard had just obtained their Loran-C receivers and had them installed and working the way that they wanted, the U.S. postponed the planned shutdown and continued using them until some time within the span 2009 to 2015, during the Obama administration. There are now plans to bring it back as eLoran to help both civilian and defense users thwart enemy "spoofing" of GPS, since Loran and eLoran are very repeatable (with detailed transmission path length corrections for over land, over foliage, over ice, and over water, both fresh and salt) and reveal enemy walk-off of GPS-indicated locations. 

I was also somewhat familiar with an historical U.S. RF-based navigation system, OMEGA, which is no longer supported: OMEGA was the first global-range radio navigation system, operated by the United States in cooperation with six partner nations. Like Loran-C, it was a hyperbolic navigation system, enabling ships and aircraft to determine their position by receiving very low frequency (VLF) radio signals in the range 10 to 14 kHz, transmitted by a global network of eight fixed terrestrial radio beacons, using a navigation receiver unit. It became operational around 1971 and was shut down in 1997. Drs. Triveni Upadhyay and Duncan Cox worked on OMEGA at Draper Laboratory before they broke off to form Mayflower Communications, which deals more exclusively with GPS applications now. 

I suspect that the above article on the link was to encourage communications and navigation operators currently in military service to better hone their RF skills by becoming Ham operators in their spare time. It would keep them out of trouble, let them communicate with home, and serve to further expand the radio skills that are depended upon in the service." 

A word of caution: the cognizant older programmers are likely to also be familiar with “objects”, “classes”, and “collections”, but may tend to avoid using these constructs and methods in favor of the earlier alternate constructs and methods that are standard in Structured Programming, because use of “objects”, “classes”, and “collections” is more time consuming in implementation, which is a real-time constraint consideration in embedded applications. Use of Assembly Language (ASM) may still be useful for achieving the necessary speed-up in computationally intensive situations of repetitive and/or intensive signal processing. The prevalence of “Banker’s Rounding” or “Gaussian Rounding” in place of standard “scientific rounding” in several products from Microsoft should be viewed as a boon and not a bust, as many have recently observed.  (We recently observed that The MathWorks now offers the option of using Banker’s rounding within MatLab too.)
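As a small, hedged illustration of the distinction (our own example, not tied to any particular Microsoft or MathWorks product): Python's decimal module exposes both rounding rules, and a few half-way cases show why round-half-to-even ("Banker's") rounding avoids the systematic upward bias of round-half-up when many rounded quantities are accumulated.

```python
# Illustrative sketch (not from any vendor's documentation): "Banker's"
# (round-half-to-even) rounding versus conventional round-half-away-from-zero.
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

samples = [Decimal("0.5"), Decimal("1.5"), Decimal("2.5"), Decimal("3.5")]

for x in samples:
    bankers = x.quantize(Decimal("1"), rounding=ROUND_HALF_EVEN)   # ties go to the even digit
    classic = x.quantize(Decimal("1"), rounding=ROUND_HALF_UP)     # ties go away from zero
    print(f"{x}:  banker's -> {bankers},  half-up -> {classic}")

# Over many half-way ties, banker's rounding has zero mean bias, whereas
# half-up rounding biases accumulated sums upward by about 0.5 per tie.
bias_bankers = sum(x.quantize(Decimal("1"), rounding=ROUND_HALF_EVEN) - x for x in samples)
bias_halfup  = sum(x.quantize(Decimal("1"), rounding=ROUND_HALF_UP)  - x for x in samples)
print("accumulated tie bias:", bias_bankers, "vs", bias_halfup)
```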

We at TeK Associates agree with this favorable assessment of the effect of “Banker’s Rounding”, as encountered and directly tested by us within our own considerable computational numerical DSP experience (e.g., we obtained exact closed form solutions to many well-known eigenvalue-eigenvector problems and determinant evaluations, for both exclusively real-valued matrices and matrices with complex numbers as entries). Prior to this surprising revelation, we had expected the old 1970’s era IEEE Standard for round-off implementation to be better since it was widely endorsed by numerical analysts at that time. So much for that! Blah! (Evidently, you can’t trust anybody but must always test it yourself! Experience has taught me that this is a good principle to live by [especially in software development]!) (Prior scientific round-off used in most software was described in Virginia Klema and Alan Laub, "The Singular Value Decomposition: Its Computation and Some Applications", IEEE T-AC, pp. 164-176, April 1980.) Please click here to view an important historical application of square-root filtering used to remedy a round-off problem that previously existed in the Patriot Missile Radar.
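To convey the flavor of the square-root filtering referred to above (a minimal sketch of our own, with illustrative matrices, not the actual Patriot mechanization): propagating a Cholesky-like factor of the covariance through one QR factorization keeps the propagated covariance symmetric and non-negative definite even when the underlying matrix is badly scaled.

```python
# Illustrative square-root covariance time update.  Instead of forming
# P(+) = F P F' + Q directly (which can lose symmetry/positive-definiteness
# under round-off), propagate a factor S with P = S S' via one QR factorization.
import numpy as np

def srcf_time_update(S, F, Q_sqrt):
    """S: n x n factor with P = S S';  F: n x n transition;  Q_sqrt: n x n with Q = Q_sqrt Q_sqrt'."""
    n = S.shape[0]
    A = np.hstack([F @ S, Q_sqrt])        # n x 2n block, so that A A' = F P F' + Q
    R = np.linalg.qr(A.T, mode='r')       # A' = Q_orth R  =>  A A' = R' R
    return R[:n, :].T                     # triangular factor of the updated covariance

# toy, badly scaled example (values are purely illustrative)
F = np.array([[1.0, 0.1], [0.0, 1.0]])
S = np.linalg.cholesky(np.diag([1e-8, 1e2]))
Q_sqrt = np.diag([1e-6, 1e-3])
S_new = srcf_time_update(S, F, Q_sqrt)
P_new = S_new @ S_new.T
print(np.allclose(P_new, F @ (S @ S.T) @ F.T + Q_sqrt @ Q_sqrt.T))   # True
```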

A plethora of new books on the subject of software radios has recently been published: [87]-[92]. Please notice that [89] is written by people with Laboratory and company affiliations that did not participate in the earlier programs and so are not tainted by real knowledge of what has already transpired and been accomplished in the existing Air Force and Navy JTIDS RelNav programs nor in the Air Force MFBARS and ICNIA programs of 20+ years ago. Indeed, at least one of the authors of [89] earned his Ph.D. in Chemistry, not in Electrical Engineering. For more about the “Cognitive Radio” initiative, please see articles and reports by Dr. John Chapin (Vanu, Inc.), to whom then-President Clinton awarded the Presidential Early Career Award for Scientists and Engineers (PECASE). Dr. Chapin is chairman of the 1900sg-a project on certification of radios with dynamic spectrum access. [This is a far cry from the old crystal radio with a “whisker connection” that we personally built as an 8 year old child back in 1953, having a 50 foot wire antenna strung out of our second story back window (in Northwest Washington, D.C.) and the other side of the receiver grounded in my bedroom to our house radiator that had to have the paint scratched off to make good electrical contact. Adequate speakers (or a microphone) could be made from an empty cigar box as an acoustic resonating cavity with a small 2 inch diameter hole cut in the center of one of the two flat sides of larger area; then the carbon rods from inside two defunct D batteries (with nicks made in each using a fingernail file) were glued to the left and right sides of the cigar box bracketing the hole, and an unused carbon “lead” from a mechanical pencil was placed gently to bridge between the two nicks over the hole, not fastened or glued at all but left free to vibrate. Wires were run from the two metal caps of the D battery carbon rods to the output leads of the crystal radio and its whisker if the cigar box were being used as a speaker. The crystal radio receiver worked best at night. Ah, the good old days! Why is all-digital software-based radio to be preferred over digital Monolithic Microwave Integrated Circuit (MMIC) radio technology with comparable savings in size, power, volume, and cooling requirements, and a likely capability to accommodate adaptive waveforms too?] Technical issues describing the differences between software radio, software-defined radio, and software receivers (and other GNSS-related answers) may be found on the GPS World’s Tech Talk BLOG: http://www.gpsworld.com/techtalk.   

The above views about Microsoft capabilities are my own and are not taken from any earlier Microsoft literature. I wouldn’t believe their advertising literature anyway. I only believe what I see and what I have seen and done myself in this and almost every other software area. (The late president Harry S. Truman and I perhaps share this same personality trait, even though “I’m not from Missouri”.) Hey, I remember the 3-tier Client-Server capabilities that Microsoft claimed as being possible with its Visual Basic product; then, 3 years later, a representative of Microsoft admitted at VBITS that Microsoft could now actually do what it had claimed 3 years earlier. Big whoopee! The situation is apparently much better for Microsoft now regarding security and quality control. Please see further below for the evidence.

Some historical concerns regarding MIT Lincoln Laboratory’s ASAP Workshop:

We are also awaiting investigations into why Space Time Adaptive Processing (STAP) algorithms assume the enemy threat is merely stationary white Gaussian noise (WGN) “barrage” jamming. As mentioned above, STAP appears to be very vulnerable to non-stationary jamming, just as Joe Guerci acknowledges in his Space Time Adaptive Processing book, where a similar, almost identical situation is a failing for STAP if the clutter present is nonstationary (and therefore can’t be measured on-line and compensated for). Many STAP algorithms to date have utilized Wiener filters (which only handle time-invariant situations in the frequency domain) instead of using Kalman filters (which can handle non-stationary, time-varying situations directly in the time domain). It is well-known that Wiener filters are a special, more restrictive case of a Kalman filter [61, p. 142, 242] and that Multi-Input-Multi-Output (MIMO) Wiener filters carry the more challenging extra baggage of needing Matrix Spectral Factorization (MSF) [60] instead of equivalently just solving the more benign Riccati equation. In the early 1990’s, in an award winning paper, Prof. Tom Kailath (Stanford) and his student established that most adaptive filters in current use are merely special cases of Kalman filters [72].  See [98] and [99] for possible mitigating circumstances.
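As a minimal sketch of the distinction being drawn (our own toy model and noise statistics, not any fielded STAP design): a discrete-time Kalman filter recomputes its gain at every step from possibly time-varying F, H, Q, and R, which is exactly the nonstationary case that a frequency-domain Wiener design cannot accommodate.

```python
# Minimal discrete-time Kalman filter recursion with deliberately time-varying
# measurement noise (all matrices below are illustrative choices of our own).
import numpy as np

def kf_step(x, P, z, F, H, Q, R):
    x_pred = F @ x                                   # time update (prediction)
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R                         # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)              # gain recomputed every step
    x_new = x_pred + K @ (z - H @ x_pred)            # measurement update
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

rng = np.random.default_rng(0)
x_est, P = np.zeros(2), np.eye(2)
truth = np.array([1.0, 0.5])
for k in range(50):
    dt = 0.1
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = 1e-4 * np.eye(2)
    R = np.array([[0.01 * (1.0 + k / 10.0)]])        # nonstationary measurement noise
    truth = F @ truth
    z = H @ truth + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)
    x_est, P = kf_step(x_est, P, z, F, H, Q, R)
print("final estimate:", x_est, " truth:", truth)
```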

Based on my prior experience at performing Receiver Operating Characteristic (ROC) trade-offs [13], [14], [23], another hot button of mine is that the Generalized Likelihood Ratio (GLR) test (frequently featured at past ASAP Workshops), although of interest in several diverse applications over the last 45 years, still has not had its decision threshold specification rigorously defined. The specification of the test statistic itself has been done over and over again. Some applications, notably speech recognition performed by Prof. Solo at MIT ~2000, use GLR without any decision threshold at all. The so-called “GLR of Ed Kelly” (Lincoln Laboratory, retired) is not a GLR per se but, rather, a pseudo-GLR, useful nonetheless. More needs to be done in specifying Pd and Pfa for both these test statistics so that proper ROC curves can be elucidated, which in turn will allow proper specification of the decision thresholds (the threshold usually being the slope of the tangent to the ROC curve at the particular operating point being utilized).  
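To make that threshold/ROC relationship concrete (a toy Gaussian mean-shift problem of our own, not any of the GLR applications above): for this simple case the slope of the ROC curve at an operating point numerically equals the likelihood-ratio value at the threshold that produces that operating point.

```python
# Toy illustration: detecting a known mean shift mu in unit-variance Gaussian noise.
# Pfa = Q(t) and Pd = Q(t - mu); the ROC slope dPd/dPfa equals the likelihood ratio
# at threshold t -- the classical link between operating point and decision threshold.
import numpy as np
from scipy.stats import norm

mu = 1.5                                   # assumed signal amplitude (illustrative)
t = np.linspace(-4.0, 6.0, 2001)           # candidate decision thresholds
Pfa = norm.sf(t)                           # P(statistic > t | H0)
Pd = norm.sf(t - mu)                       # P(statistic > t | H1)

roc_slope = np.gradient(Pd, Pfa)           # numerical dPd/dPfa along the ROC
lik_ratio = norm.pdf(t - mu) / norm.pdf(t) # likelihood ratio at each threshold

i = np.argmin(np.abs(Pfa - 1e-2))          # pick the Pfa = 1% operating point
print(f"threshold t = {t[i]:.3f},  Pd = {Pd[i]:.3f}")
print(f"ROC slope = {roc_slope[i]:.3f},  likelihood ratio = {lik_ratio[i]:.3f}")
```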

 Go To Secondary Table of Contents

Further Outcome of Attending EDA Tech Forum on 7 October 2005:

TeK Associates asked our Accellera® presenter, Dennis Brophy, currently vice-chairman of Accellera and, until recently, its chairman, whether VHDL or Verilog can analyze the cross-heating effects that may cause computations to proceed at a slower clock rate than originally anticipated during the engineering design. I mentioned that such considerations usually involve analysis invoking the Heat Conduction equation from Partial Differential Equations (PDE’s), which can be quite challenging, involved, computationally intensive, and not usually real-time (a minimal sketch of such a calculation appears below). I had seen optimization programs at Northeastern University, in conjunction with Lockheed Sanders in ~2000 (from Dr. Paul Kolodny and others), that attempted to select chip layout and geometry by squeezing more and more components into the chip real estate on each consecutive iteration without any consideration of heating, the need for heat sinks, power analysis, or the presence of cooling fans or other cooling mechanisms. Dennis Brophy said that certain research projects are currently underway at Princeton University along these lines but that nothing like this is currently available within VHDL or within Verilog. They don’t currently consider heating due to close proximity of components on the chips output from these design tools. TeK Associates’ comments on this: “This can be a cause for concern. People have known about this hole since the early 1980’s, as VHDL was being designed. Have we been ostriches with our heads stuck in the sand? Are people seeking to avoid seeing this obvious hole in the current design process? It appears that this deplorable situation is 30+ years overdue for fixing!” [By 23 January 2009, TeK Associates became aware that COMSOL Multiphysics® may offer a solution to this problem, but COMSOL Multiphysics® has yet to identify this within their advertising literature as an application capability. TeK Associates has already alerted them to it. COMSOL Multiphysics® ver. 3.5a can currently import SPICE but not VHDL® and Verilog® yet. COMSOL ver. 4.0, released somewhat after Spring 2010, has a totally new GUI face and offers many new capabilities, such as CAD LiveLinks to Pro/ENGINEER®, SolidWorks®, and Inventor®.]             Go To Secondary Table of Contents
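For readers unfamiliar with what such a thermal analysis entails, the following is a minimal, hedged sketch (our own toy "floorplan", grid, and material constants) of the kind of steady-state heat-conduction solve referred to above; real chip-level tools would of course require far more fidelity.

```python
# Toy steady-state heat-conduction solve on a small chip "floorplan":
#   k * (d2T/dx2 + d2T/dy2) = -q(x, y),
# relaxed with a plain Jacobi finite-difference iteration (fixed-temperature boundary).
import numpy as np

n, h, k = 50, 1e-3, 150.0            # grid size, 1 mm cell spacing, thermal conductivity (W/m-K)
q = np.zeros((n, n))                 # volumetric heat sources (W/m^3), illustrative values
q[10:20, 10:20] = 5e7                # "hot" block A
q[15:25, 30:40] = 5e7                # "hot" block B placed nearby -- the cross-heating concern
T = np.zeros((n, n))                 # boundary cells held at 0 C rise (heat-sunk package edge)

for _ in range(5000):                # plain Jacobi relaxation of the 5-point stencil
    T[1:-1, 1:-1] = 0.25 * (T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
                            + h * h * q[1:-1, 1:-1] / k)

print("peak steady-state temperature rise on this toy floorplan: %.1f C" % T.max())
```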

Historical Problems Beyond HPEC and ASAP:

For over 55 years, the Linear Quadratic Gaussian (LQG) feedback optimal control paradigm, which originated at MIT Lincoln Laboratory with Michael Athans and has been widely disseminated from MIT’s Electronic Systems Laboratory (ESL) [later known as the Laboratory for Information and Decision Systems (LIDS)] to several government agencies (including the Federal Reserve during the dark days of “stagflation” in the early 1980’s, when they were willing to try anything), is marginally stable by possessing zero phase margin and, as such, is easily perturbed to instability by high frequency system dynamics that were, perhaps, not adequately modeled (as is usually the case in most situations), or by the normal aging of components, which causes design parameters to change significantly. Ref. [62] ostensibly addresses this issue but, although the title explicitly says LQG, the authors merely address the LQ feedback control paradigm in [62]. The purely deterministic LQ feedback control paradigm (worked out by Rudolf Kalman himself), which utilizes the exact system states, is more benign than LQG, which utilizes a linear Kalman filter to obtain estimates of the unknown system states that are then fed back through the system control gain matrix K(t), as specified by the LQ theory. Problems that are prevalent with using this LQG methodology were identified by others in [63], [64], [65]. Within the last 20 years, a remedy has been found by “robustifying” the LQG methodology by augmenting it with an additional step of Loop Transfer Recovery (LTR), so that LQG/LTR is more satisfactory than just LQG alone [66]. No warnings about these detrimental aspects of LQG came from AlphaTech (now part of BAE), even though this product was developed and taught by the founders of AlphaTech to several generations of MIT Ph.D. engineering graduates (and propagated to others through the technical literature, which they controlled at the time);
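For concreteness (an illustrative second-order plant and weights of our own choosing, not any production design), the sketch below shows the two Riccati solves that make up an LQG compensator: the deterministic LQ state-feedback gain and the Kalman filter gain. Nothing in this construction by itself guarantees stability margins; that is precisely the gap LTR is intended to close.

```python
# Hedged LQG construction sketch: LQ gain K from the control Riccati equation and
# Kalman gain L from its dual.  Plant, weights, and noise intensities are illustrative.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-1.0, -0.2]])           # lightly damped second-order plant (assumed)
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Qx, Ru = np.diag([10.0, 1.0]), np.array([[1.0]])   # LQ state/control weights
W, V = np.diag([0.1, 0.1]), np.array([[0.01]])     # process/measurement noise intensities

# LQ regulator: K = R^-1 B' P, with P from the control Riccati equation
P = solve_continuous_are(A, B, Qx, Ru)
K = np.linalg.solve(Ru, B.T @ P)

# Kalman filter (dual problem): L = S C' V^-1, with S from the filter Riccati equation
S = solve_continuous_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(V)

print("LQ gain K =", K)
print("Kalman gain L =", L.ravel())
```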

The sensor fusion methodology of Covariance Intersection (CI) has already been demonstrated to be useless [67], but, despite these revelations, AlphaTech continued with the CI methodology [68], which usually degenerates to the cases where the CI weight ω = 0 or ω = 1, rather than the more useful situations where 0 < ω < 1. Now, with the advent of [68], even more of the computed situations fall into the degenerate cases of ω = 0 or ω = 1 (and not the useful case of 0 < ω < 1). Technology is not supposed to go backwards, especially after people know better.
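For readers unfamiliar with the mechanics of CI, the following hedged sketch (toy numbers of our own) implements the basic fusion rule with the weight chosen by minimizing the trace of the fused covariance; it also shows how readily the optimizer pins the weight at an endpoint, the degenerate outcome noted above.

```python
# Basic Covariance Intersection fusion rule (toy numbers):
#   P_ci^-1 = w Pa^-1 + (1 - w) Pb^-1,   x_ci = P_ci (w Pa^-1 xa + (1 - w) Pb^-1 xb),
# with w in [0, 1] chosen here by minimizing trace(P_ci).  When one input covariance
# dominates, the minimizer lands essentially at w = 0 or w = 1.
import numpy as np
from scipy.optimize import minimize_scalar

def ci_fuse(xa, Pa, xb, Pb):
    Ia, Ib = np.linalg.inv(Pa), np.linalg.inv(Pb)
    cost = lambda w: np.trace(np.linalg.inv(w * Ia + (1.0 - w) * Ib))
    w = minimize_scalar(cost, bounds=(0.0, 1.0), method='bounded').x
    P = np.linalg.inv(w * Ia + (1.0 - w) * Ib)
    x = P @ (w * Ia @ xa + (1.0 - w) * Ib @ xb)
    return x, P, w

xa, Pa = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
xb, Pb = np.array([1.2, 0.1]), np.diag([0.2, 0.3])   # much tighter estimate
x, P, w = ci_fuse(xa, Pa, xb, Pb)
print("chosen weight w =", round(w, 3), " fused estimate =", x)
```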

An evident trend now materializing is that all the IEEE editors for Kalman filter topics (with applications in INS and GPS navigation and radar target tracking) hail from the same school: the University of Connecticut (Storrs). [However, unlike MIT and its LQG already discussed above, no egregious transgressions have been perpetrated by the University of Connecticut that we are aware of.] This lack of “checks and balances” in publications was why LQG persisted and prevailed for so long without any associated warnings from others (because almost all opposing cautions were suppressed from the earlier [and later] literature). There are several current worries about the efficacy of the Interacting Multiple Model (IMM) methodology that, likewise, will probably never see the light of day in the open literature. This is the time to worry. However, someone with an affiliation different from UConn published an IMM paper in IEEE AES in 2009 [100] that strongly relied on using the technique of Covariance Intersection (CI) as a critical component! Other researchers (as well as TeK Associates) have already dismissed Covariance Intersection as being both unreliable and demonstrably useless to invoke within an estimation context. Go To Secondary Table of Contents

      Information Gained from Attending the New England Chinese Information & Network Association (NECINA) “Open Source Conference” (Radisson Hotel, Chelmsford, MA; 29 October 2005):

        According to Bill Hewitt, Senior Vice President and Chief Marketing Officer for Novell (previously of Peoplesoft, which was recently absorbed by Oracle), Novell has sponsored 8 programmers (from 8 different countries) working together to develop Mono, a program that will allow Linux Operating Systems to run .NET applications (i.e., Microsoft products developed in VisualStudio.NET normally require WindowsXP or Windows2000 to be the host Operating System for .NET applications). Version 1 of Mono had already been released by 29 October 2005. (By July 2007, two other software products, Mainsoft and Wine, had also emerged for providing compatibility of Windows-based software with a Linux Operating System.)

        According to Bill Hewitt, since starting to use OpenOffice, he has been extremely pleased with the results. While the actions required to accomplish a particular objective are different, the results are (in his opinion) even better than what is produced by Microsoft Word. (This may, perhaps, have merely been a purely politically motivated statement, entirely without merit);

      Conference lead organizer and moderator, Richard Wang (evidently with Oracle), gave a splendid rebuttal afterwards to Bill Hewitt’s disparaging remarks about Oracle’s lack of activities in the Open Source Software area. Richard enumerated the several Open Source Software projects ongoing at Oracle;

        Embedded Linux is being used for a large number of devices needing embedded Operating Systems;

        MIT Media Lab has a $100 Laptop project to make computers available to disadvantaged people so that they and their progeny can be computer savvy. While hardware cost of parts manufacturing and assembly of a Laptop is below $100, the cost of software raises the total price to about $500 unless Open Source Software is used and Linux is the OS. [In 2009, it was revealed that India has trumped the U.S. by producing a $10 Laptop (or perhaps a $20 Laptop proclaim their squealing detractors, obviously upset by being trumped)];

        is a widely used security product for both open and proprietary applications. Astaro is an up and coming company for Internet Security;

        Bugzilla is the new product for software defect tracking;

        Mozilla’s free downloadable Firefox Browser is very popular in 2005 and users can easily customize it so their personal version looks uniquely different from everybody else’s. [Is this really a good thing in a corporate environment, where standardization is usually a benefit?].

       There are 60 different versions of Open Source Licenses approved by the Open Source Initiative (OSI), but 600 other versions exist that are not approved. One has to be careful since some require that any product that uses the Open Source software must itself be Open Source. Black Duck Software (Doug Levin, CEO) automates the tracking and compliance issues of Open Source Software Licenses;

        According to Dr. Wuqiang Li (Consulate General in New York, People’s Republic of China), China has developed an application, SciLab®, which is essentially MatLab but runs on Linux platforms. Apparently, The MathWorks had nothing to do with it. [In 2009, “R” is a new open source scientific and symbol manipulation software package with a format very similar to the others just mentioned];

        PRC (China) has the largest percentage of PC’s in the world using Open Source Software and Linux OS (because, in the words of Dr. Wuqiang Li, “their western regions are so poor and they cannot afford expensive software”);

        Since 2003, agreements have been signed between the following entities: IBM & PRC; Novell & PRC; Germany & PRC; France & PRC; Japan, Korea, & PRC; India & PRC;

       Using grants from their Ministry of Education (MoE), now 40 universities in PRC are teaching Linux.

      According to the four panelists from different segments of industry, Open Source Software is the big “new thing” in the last 4 years that has U.S. Venture Capitalists (VCs) excited. Corporations that purchase Open Source Software don’t usually actually modify it. They just use it “as is” but are willing to pay for support contracts as a type of insurance policy to guarantee that it will always be working properly. Widespread use of Open Source Software offers the hope of significantly reducing the current operating budgets of existing corporate IT departments (by 75%). General Electric was mentioned as one such adopter of a particular Open Source product for world-wide operations. It was mentioned that IBM has invested $2 billion to promote Linux. Novell says IBM didn’t purchase the Linux patents (from them) when they licensed use of Linux, so people can just ignore the numerous lawsuits bandied about to and fro as just being frivolous. 

Go To Secondary Table of Contents

Information Gained from Attending National Instruments Technical Symposium for Measurement and Automation (Radisson Hotel, Chelmsford, MA; 1 November 2005):

        LabView Ver. 8 (just released 3 weeks earlier) has about 100 new worthwhile features and runs only on Microsoft WindowsXP and Windows2000;

        Current Version 8 of LabView accommodates integration with standard Source Code Control and Configuration Management software (like Rational Rose®);

        LabView 8 has more than 75 new math and analysis functions (for a total of more than 500 advanced math functions);

        Can have full integration of LabView 8 with .NET;

        Measurement Studio can be used without needing LabView®; the Enterprise version contains both versions of Measurement Studio;

        LabView 8 has interactive driver development wizard, and has Real-Time toolkits and Fixed Point Toolkits and other specialized toolkits for developing applications targeted to FPGA’s (currently only XILINX) and ASICS, and any 3rd party 32-bit processor;

        LabView 8 can be used for distributed target applications that are asynchronous and use shared variables for convenience of cross-communication and can synchronize threads;

        LabView 8 can now be targeted to a PDA or to wireless applications using Bluetooth protocol.  

Go To Secondary Table of Contents

Information obtained by attending Ziff Davis’s half day “Endpoint Security Innovation Road Show” on Wednesday, 7 December 2005 at Four Seasons Hotel in Boston (featuring joint Intel, LANDesk, and Macrovision):

“Software downtime costs organizations billions every year,” as quoted in Macrovision’s FlexNet AdminStudio.
“Hacker” penetration into the networks of commercial corporations has recently increased significantly, with a severe inflection point thrust upwards occurring in year 2000 and maintained thereafter as an almost exponential growth in the number and severity of such attacks.
Attacks are no longer merely to call attention to what the hackers can do (as in, “hey, look at me”) but are much more malicious, being for the hacker’s economic gain (e.g., “phishing”, “denial of service” by system message swamping, and associated blackmail or “protection rackets” like those perpetrated in the 1920’s and ’30’s by mobsters).
Technical innovation has occurred in an attempt to (hopefully) better meet the increased challenge of outside penetration and to prevent more chaos from being introduced into the already challenging mix, by combining the functions of automated, instantaneous hardware and software “Status Assessment” of all local networked PC’s, as enabled by new chip technology recently developed by Intel called Intel AMT (Active Management Technology), which includes direct http-links (dormant by default, but which can be remotely turned on by those with such privileges) and nonvolatile memory, both built in. 128-bit encryption is also available to prevent others from exploiting this new architecture. There is no need for the networked PC’s to be turned on: there is a trickle of sufficient power even in the “turned-off” state to enable and support the interrogation and status assessment goals. These new chips are already shipping on new Intel laptops and predictions are that they will be in new desktops by the first quarter of 2006. Currently, backfits to prior PC’s are not possible. Only the new Intel PC’s will have this capability, which enables the following capabilities from Macrovision’s FlexNet AdminStudio in conjunction with LANDesk/Intel AMT:
  1. Automated remote external firewall enforcement and maintenance;
  2. Remote administering of diagnosis and repair of problems on hundreds or even on thousands of networked PC’s (with diverse operating systems or version numbers or patch updates and heterogeneous hardware, bios, and firmware) all from the administrator’s consoles in an automated fashion with full visibility into the hardware and software inventory and status of each PC on the network;
  3. Remotely handle new installations in response to user requests for new software applications and better ascertain steady-state usage so that licensing contracts do not exceed the actual need (as a cost saving measure that is advertised to usually pay for all these new pricey capabilities within a few months);
  4. Network administration of patches (e.g., those multiple patches arriving from Microsoft on “Patch Tuesday” [the 2nd Tuesday of each month]) can be combined with prudent Test Lab “proof of concept” practices that ensure that such updates don’t break any of the existing diverse systems on the network (but still simultaneously allows immediate dissemination of updates or new software application requests to the requisite PC target machines during times of low network traffic but remain dormant and invisible to the user until after administrator completes adequate Test Laboratory demonstrations of safety [at which time a short, simple low traffic message may be sent in an automated fashion enabling activation of the updates or new application at each local PC]);
  5. Combining all of the above functions with automated virus protection and inoculation and enforced policies of password maintenance and possible use of SSL 3.1 128-bit encryption, where necessary;
  6. Recommendation that users not be allowed to alter their locally assigned configurations and settings, and that user freedom for any contraband local software installations be disallowed, by automating enforcement of a restrictive [but safe] policy through the immediate isolation from the network of offending PC’s no longer in compliance (so that any possibility of contagion or cross-contamination is avoided by such an enforced quarantine). Affected PC’s can also be remotely brought back into compliance by the administrator and then returned to service for the user.

LANDesk runs on Microsoft Server 2000 and on Microsoft Server 2003.

Perimeter security controls access: enforced limited access and a strong recommendation that users with administrative privileges go through a thorough security screening so that only “trusted personnel” have such access. Networks are only as strong as their weakest link!

The Gartner Research publication entitled “Magic Quadrant for PC Life Cycle Configuration Management, 2005”, ID Number: G00131185, 17 Oct. 2005, was distributed at this meeting, and it rated LANDesk’s trend and track record. The audience was told that LANDesk had originally been a part of Intel but was spun off (and is now 13 years old). The presenter for LANDesk at this meeting and one of its founding fathers, Dave R. Taylor, was an exceptionally impressive and knowledgeable speaker who had also previously worked for Symantec. This Gartner document did fault Microsoft for [previously] “having an imaging capability that only integrates its own imaging format and not 3rd party tools such as Ghost”; however, “by August 2005 Microsoft delivered the ability to use the more reliable Windows Update Scanning Agent (rather than the beleaguered Microsoft Baseline Security Analyzer) for patching under SMS 2003.”  Go To Secondary Table of Contents

Information Gained from Attending Analytical Graphics, Inc. (AGI) Technical Symposium for STK7® (Westin Hotel, Waltham, MA; 30 January 2006):

 AGI’s STK7®

STK7 Astrogator offers both “impulsive burn” and “finite burn” options for an orbit insertion vehicle, and the user can account for fuel consumed by reducing the mass after each burn to appropriately reflect the residual mass of the vehicle’s remaining fuel. Continuous burn did not appear to be an option, even though it had been completely handled in Murray R. Spiegel’s 1963 book on Differential Equations for the problem of “from earth to moon”. Newer information about version 8.1 in July 2007 indicates that “continuous burn” is indeed provided as an option;

While AGI’s Space Systems Scenario demonstration of their Astrogator trajectory solver (under the topic of Backwards Targeting) was very intuitive, and was obviously intended to convince the audience that it was sufficient for any orbit insertion need by offering a forward ODE solver (to be run forwards in time) for the initial portion and a backwards solver for the end-game or final portion (to be run backwards in time), with the two then joined up where they intersect smoothly in both position and velocity, “in analogy to the transcontinental railroad construction feat of building from West to East and from East to West, joining up in 1869 at Promontory Point with the so-called Golden Spike,” unfortunately (based only on what was said and presented at the meeting) this current capability offered by STK7 is not yet sufficient for the more general tasks encountered in realistic transfer orbit scenarios. AGI provides a good first step in the right direction but has yet to follow through completely. What they currently offer will “chew up” and waste a lot of computer computation time and not necessarily obtain a final solution, yet the overview that AGI provided gives the false impression (through the coarseness of the output trajectory segment within high level graphics and the fact that AGI chose an example that appeared to be completely planar) that AGI’s technique will eventually work, through extensive trial-and-error, to match up the two 3-D positions and 3-D velocities perfectly for the backwards and forward trajectory segments. In reality, this perfect match-up is not likely to happen without a top level mechanism in place that forces convergence to a final satisfactory solution, rather than just relying on a manual operator or analyst-in-the-loop making an heroic attempt at joining up the results of calculating in both the forwards and backwards directions. Where the analogy fails or departs from the historical train travel situation is that, in general, the starting orbit may likely be in one plane while the target final ending orbit may be in another plane. If this were merely a 2-body problem with the central force of gravity of one body operating on a point mass vehicle (thus defining an osculating plane in which the vehicle’s motion is confined, as determined by its initial conditions at the time of cut-off as it proceeds in a ballistic trajectory), and if all subsequent thrusting of the vehicle were similarly confined or constrained to occur only within that same plane, then both positions and velocities could indeed be treated as purely 2-D, as AGI depicted in their presentation. The same can also be said of the backwards calculations as determined from the final parking orbit (where the real final conditions are treated as initial conditions in a backwards solver). Exactly matching up both 2-D positions and 2-D velocities would be an incredible coincidence (i.e., not likely to happen). However, the rub is that the desired parking orbit is not likely to be in the same plane as all the other motion spawned from the actual initial conditions, so that the match-up of 6 quantities of interest as a continuous solution is even less likely than the perfect match-up of just 4 quantities of interest. This is obviously a 3-body problem and not merely the conjoining of two 2-body problems. 
What is needed is an analytic technique to tie these objectives together, enforce compliance, and guarantee improvement on each successive iteration, with each exact boundary condition being preserved as a hard constraint. Such solution techniques have been available for decades for this general nonlinear Two-Point Boundary Value Problem (TPBVP) by invoking either the well-known “Shooting Method” or “Invariant Imbedding”, both being merely approximate techniques (but good enough) for nonlinear situations and exact for linear TPBVP’s. Consulting the results in historical Numerical Analysis textbooks should easily get AGI out of this current unpleasant predicament or dilemma, where AGI apparently has been punting instead of “going for the gold”. Engineers always approximate, but for this situation there is a better solution than what AGI has embraced to date (as of 30 January 2006). This complaint is based solely on what AGI handed out and presented then. The results in [84] may also be of interest for helping solve AGI’s problem. (If there was more depth to AGI’s methodology than was presented there in the name of expediency of presentation, then we apologize. We only bring these topics up in order to help them see what we, from the audience, see as lacking, as AGI embarks on a 10 city tour.)
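For readers unfamiliar with the “Shooting Method” just mentioned, the following is a minimal sketch on a toy scalar two-point boundary value problem of our own choosing (not AGI’s orbit-transfer problem): guess the unknown initial slope, integrate forward, and drive the terminal boundary-condition miss to zero with a root finder.

```python
# Toy shooting-method sketch: solve y'' = -y + x with y(0) = 0 and y(1) = 1 by
# guessing the unknown initial slope s, integrating forward, and zeroing the
# terminal miss y(1; s) - 1.  (The exact answer here is y = x, i.e., slope 1.)
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def terminal_miss(s):
    def rhs(x, y):                      # state: y[0] = y, y[1] = y'
        return [y[1], -y[0] + x]
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, s], rtol=1e-9, atol=1e-9)
    return sol.y[0, -1] - 1.0           # how far we miss the far-end boundary condition

s_star = brentq(terminal_miss, -10.0, 10.0)   # bracket assumed wide enough for this toy case
print("initial slope that satisfies both boundary conditions:", s_star)
```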

AGI’s tools mesh nicely with Microsoft Office products like M/S Word, M/S Powerpoint, M/S Excel, etc. by adhering to M/S standard COM software technology conventions to enable copy, cut, and paste between these products and AGI computed outputs.

Some subtle problems within AGI’s presentation for Astrogator that I can’t resist pointing out (verifiable from what is explicitly depicted in the meeting handout) are:

Listing of satellite orbits that can be handled as being only LEO, MEO, HEO, GEO, 

Need explicit mention of the ability to handle Molniya orbital applications too (which are, simultaneously, both LEO and HEO),

Explicitly mentioned handling only Hohmann orbital transfers,

Need to also handle or mention gravity assisted transfers,

Need to also handle or mention aero-assisted transfers (as a newer approach, mentioned as "Yet Another Unsettling Thought for the Day" at the bottom of this screen before the references);

AGI has an excellent product in STK7 that is apparently the premiere graphical orbital mechanics product, while also being fairly easy and straightforward to use. AGI also makes it easy to distribute output results to users’ managers and clients by also providing the services of their AGI GlobeServer via the Internet. Once AGI solves the problem described immediately above, we would unabashedly endorse their results without qualification. However, we would caution that STK7, in general, apparently only gives the results for an ideal case or idealized situation. Kalman filter-like sensitivity techniques need to be utilized to elucidate the fundamental trade-offs that arise all along the way in seeking a practical implementation of the first cut solutions that are initially found by trail-blazing with STK7.  Go To Secondary Table of Contents

Information Gained from Attending The MathWorks Technical Symposium on Using Simulinkฎ for Signal Processing and Defense Communications System Design (Marriott Hotel, Burlington, MA; 31 January 2006):

The MathWorks’ Simulink® has many endearing features and runs on Microsoft WindowsXP and Windows2000. Toolboxes are specialty functions used with MatLab Version 7. Seven toolboxes of immediate interest for these types of applications are:

Data Acquisition Toolbox,

Instrument Control Toolbox,

Fixed Point Toolbox (explicitly verifiable on page 72 of the meeting handout),

Image Acquisition Toolbox (explicitly verifiable on page 66 of the meeting handout),

Video and Image Processing Toolbox (explicitly verifiable support for TI Digital Media DM642 on page 62 of the meeting handout),

RF Toolbox,

Mapping Toolbox (new major update seeking to challenge AGI, as mentioned above);

      Blocksets contain the specialty function blocks used with Simulink 6. Five new blocksets featured at this presentation that are now available are:

Communications Blockset 3,

Signal Processing Blockset 6,

Fixed Point Blockset,

RF (Radio Frequency) Blockset 1,

Video and Image Processing Blockset;

    Users can now have fixed-point support (explicitly verifiable on page 73 of the meeting handout) for:

Simulink 6 (IFFT example explicitly verifiable on pages 68 and 69 of the meeting handout),

Signal Processing Blockset 6,

Stateflow; 

An impressively wide spectrum of application examples was demonstrated at this presentation: from hypothesized JTRS Software radio transmitter and receiver components, to Automatic Pattern Recognition and Template matching in shape and color for vision and video, to GPS receivers (for L1 only, for Clear/Acquisition only, no precise code, no L2, nor L3, nor L5, no Gold Code, but it did use a GPS signal emulator for some degree of realism), to showing (only) the steps to pursue for automatic code generation for specific hardware targets like some of those manufactured by Analog Devices, Inc. (SHARC, TigerSHARC, Blackfin via use of SDL’s DSPdeveloper), by Texas Instruments (TI C6000™, TI C2000™), and by Motorola® (MPC555). For other DSP 3rd party targets, one can use The MathWorks Real-Time Workshop in conjunction with the Real-Time Embedded Coder to obtain transportable C code (but users must develop their own drivers for hardware peripherals). This additional required driver design task could be a significant hurdle for this particular development approach. (Hey, National Instruments’ LabView 7 possesses an automatic template for personal user hardware driver development. Also see the excellent textbook [71] for creating your own drivers. Early scuttlebutt about the M/S Vista Operating System is that any drivers that one attempts to introduce into the platform must have been digitally “signed” and previously “approved” by Microsoft. Yes, such a procedure would reduce likely driver chaos on the various machines, just like the use of the M/S System Registry did for controlling inadvertent and unwanted automatic overwrites by earlier versions of DLL’s within Windows95 to WindowsXP.)

Benefits and mechanism for The MathWorks’ handling of “cross-platform code generation” has already been treated in detail in an earlier fall 2005 meeting report, offered above;

The MathWorks pushed hard on the concept of having the output of Hardware and Software Specifications be an executable Implementation-Independent Model (IIM), to then be able to verify such abstract specifications beforehand. This concept is extremely interesting but just as controversial. Besides violating well-known criteria and definitions of what should constitute a Software specification as merely desiderata, without imposing any unnecessary design constraints or early decisions on how to proceed, any executable model, no matter how high level or how coarse, incorporates a degree of design considerations having already been made. “Track-before-Detect” radar algorithms or “Streak Processing” algorithms would never have had a technological resurgence if such constraints were in force. These now extremely useful algorithmic processing options would have been designed out from the start;

The MathWorks claimed that MatLab is the de facto industry standard (perhaps only for them). Others in the running are National Instruments’ LabView 7, LabWindows, and ControlX = MatrixX (the latter having won top awards from the Federal Government DoD and Aerospace sectors in the mid 1990’s for accelerating aircraft manufacturing productivity through automatic generation of efficient C code and/or, alternatively, efficient Ada code; MatrixX was previously under The MathWorks, but that is no longer the case after it was given to National Instruments as part of the settlement of a lawsuit between The MathWorks and NI), as well as AGI’s STK7, OPNET (according to the SCA), and Stephen Wolfram’s Mathematica from England (each tool having its own particular advocates for very good reasons);

Compatible hardware resources (explicitly verifiable on page 93 of the meeting handout) are:

(Canadian) Lyrtech Signal Master (www.lyrtech.com):

  1. DSP-in-the-loop, co-simulation of Simulink models,

  2. TI C67X or C62X,

  3. ADI 2116X or ADI 2106X,

  4. Xilinx Virtex series.

Nallatech Fuse Toolbox for MatLab (www.nallatech.com):

  1. Data transfer and device download directly from MatLab,

  2. Rapid interfacing and integration with Nallatech’s DIME products.

Some obvious failings within current version (on 31 January 2006) of RF Blockset that I can’t resist pointing out (explicitly verifiable on page 50 of the meeting handout) are:

Lack of units or magnitudes on “source impedance”,

Lack of any obvious way to make this impedance complex (most likely the case),

Lack of units on the “Maximum length of the impulse response”,

The bottom check box option to “add noise” doesn’t specify the type of noise, its distribution, or its pertinent parameters (such as its mean, its variance, etc.);

Perhaps the above RF Blockset is only in Beta Testing (or should be in Beta testing, based on our objections above).  Go To Secondary Table of Contents

Information Gained from Attending National Instruments Technical Symposium for LabView Developer Education Day (Radisson Hotel, Chelmsford, MA; 30 March  2006):

        Excellent discussion of tools, techniques, and best practices to use with LabView Ver. 8 in architecting and developing a large professional application. It gives good advice and design principles for developing an easily decipherable Graphical User Interface (GUI) or a clean, understandable Control Panel for the user or customer who needs to interact with it;

      Advanced NI-DAQmx Programming techniques with LabView.

        LabView communication techniques for distributed applications (e.g., consider benefits and drawbacks in use of TCP/IP, of shared variables, of data streaming, and of distributed application automation).

        LabView 8 with data management and storage strategies (e.g., use of technical data management [TDM], which merges XML flexibility for recording self-describing header information along with having an internally designated structure and hierarchy [at the file, at the group, or at the channel level] with the compactness of a binary representation of the actual data entries). Also discussed the new DIAdem DataFinder Technology with ease of maintenance and deployment (without any need to involve the IT department per se). Makes it easy to search for and retrieve any data logged or to modify what is stored, the number of channels conjoined, and the format to be utilized, as needed;

     We at TeK Associates find a minor fault with the NI slide that depicts Nyquist’s Sampling Theorem as being different (or requiring less frequent sampling) for preserving signal integrity or identity in the frequency domain than in the time domain. It’s simply not true. The situation is the same for both domains. This conclusion follows directly from the proof of Nyquist’s Sampling Theorem;
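A quick numerical illustration of the point (our own tone and sample-rate numbers): a tone sampled below twice its frequency aliases to the same lower apparent frequency whether one inspects the samples directly or their spectrum, so the Nyquist requirement is one and the same in both domains.

```python
# A 70 Hz tone sampled at 100 Sa/s (below 2*70 = 140 Sa/s) shows up at |70 - 100| = 30 Hz,
# whether one looks at the spectrum or at the raw samples themselves.
import numpy as np

f_sig, fs, N = 70.0, 100.0, 1000
t = np.arange(N) / fs
x = np.sin(2 * np.pi * f_sig * t)

spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(N, d=1.0 / fs)
print("apparent tone from the spectrum: %.1f Hz (true tone is %.1f Hz)"
      % (freqs[np.argmax(spec)], f_sig))

# In the time domain the samples are, sample for sample, a (phase-inverted) 30 Hz tone:
alias = np.sin(2 * np.pi * (fs - f_sig) * t)
print("max sample difference vs. a phase-inverted 30 Hz tone:", np.max(np.abs(x + alias)))
```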

        We at TeK Associates find a minor fault with the NI slide that depicts the necessary transmission line interpretation for long leads at high frequencies as having both a characteristic impedance and a capacitance. The term characteristic impedance, when applied to transmission lines or antennas, has both real and reactive components incorporated within it by definition. When input and output impedances match, then there are no degrading reflections and no need to monitor distortion or higher harmonics of the signal of interest. When there is a serious mismatch present, there are existing analysis techniques for quantifying the effects such as through use of a Smith chart, quantification in terms of the Voltage Standing Wave Ratio (VSWR), etc. All this is classical electrical engineering;

        We at TeK Associates find a minor fault with the NI lunch time plenary speaker’s claim that LabView 8’s incorporation of the Levenberg-Marquardt method is statistical curve fitting on the cutting edge. Recall that the same algorithm appears in the 1986 Cambridge University Press book by William Vetterling, Saul Teukolsky, William Press, and Brian Flannery entitled Numerical Recipes - Example Book (FORTRAN), pp. 197-209. The method was introduced by Levenberg in 1944 and rediscovered by Donald Marquardt, a prominent numerical analyst at DuPont, in the 1960’s; see Marquardt, D., “An Algorithm for Least Squares Estimation of Nonlinear Parameters,” SIAM Journal of Applied Mathematics, Vol. 11, 1963, and also Press, W. H., Teukolsky, S. A., et al., Numerical Recipes: The Art of Scientific Computing, Cambridge University Press, NY, 1992, (2nd Edition) 1996, (3rd Edition) 2007. Treatment of outliers has been standard in statistical analysis for over 30 years, as can be gleaned from N. L. Johnson’s and F. C. Leone’s textbook on The Design of Experiments for Engineers and Scientists, John Wiley, NY, 1965. The behavior of the Levenberg-Marquardt algorithm is described as interpolating or alternating between the behavior of a Gauss-Newton algorithm and the method of Gradient Descent (or Steepest Descent). Click here for a nice description of this algorithm in Wikipedia;
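For concreteness, here is a compact, hedged sketch of the Levenberg-Marquardt idea on a toy exponential-decay fit of our own (not NI's or The MathWorks' implementation): the damping parameter blends Gauss-Newton behavior with gradient-descent behavior, tightening or loosening as steps succeed or fail.

```python
# Compact Levenberg-Marquardt sketch: damped normal equations with Marquardt's
# diagonal scaling; lam -> 0 behaves like Gauss-Newton, large lam like gradient descent.
import numpy as np

def lm_fit(f, jac, p0, x, y, n_iter=50, lam=1e-2):
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = y - f(x, p)                                     # residuals
        J = jac(x, p)                                       # Jacobian of the model
        A = J.T @ J + lam * np.diag(np.diag(J.T @ J))       # damped normal equations
        step = np.linalg.solve(A, J.T @ r)
        if np.sum((y - f(x, p + step)) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5                    # good step: more Gauss-Newton-like
        else:
            lam *= 2.0                                      # bad step: more gradient-descent-like
    return p

model = lambda x, p: p[0] * np.exp(-p[1] * x)
jac = lambda x, p: np.column_stack([np.exp(-p[1] * x), -p[0] * x * np.exp(-p[1] * x)])

rng = np.random.default_rng(1)
x = np.linspace(0.0, 4.0, 60)
y = model(x, [2.0, 0.7]) + 0.02 * rng.standard_normal(x.size)
print("fitted parameters:", lm_fit(model, jac, [1.0, 1.0], x, y))   # approaches [2.0, 0.7]
```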

        We at TeK Associates express concern that NI’s new DIAdem product cannot yet handle encryption/decryption of data, other than to suggest that an outside 3rd party tool be used. The reason that this suggestion is not satisfactory is that it could interfere with the very real benefit that DIAdem offers: handling self-describing headings within the designated structure of XML and handling the associated data compactly in binary, all automatically. The challenge is in knowing what to encrypt and what not to encrypt in order to preserve the XML capabilities and not inadvertently clobber them when seeking to retrieve the data. DIAdem should be able to easily incorporate encryption automatically itself. Such a capability is needed for a classified processing mode, a need that many defense companies routinely have;

        We at TeK Associates suspect that NI could likely benefit from a detailed knowledge of certain design principles relating to historical Automatic Gain Control (AGC) designs, which correctly handle any magnitude of signal received without the user having to know and explicitly supply or enter maximum and minimum values beforehand, since such entries may be a totally unrealistic constraint for many practical applications. Analog AGC designs for continuous time and sampled signals have been used within DoD applications for at least the last 40 years;
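A minimal sketch of the kind of AGC loop we have in mind (our own loop constants, purely illustrative): the gain adapts to hold the output at a reference level with no prior knowledge of the input signal's maximum or minimum amplitude.

```python
# Simple digital AGC loop: track smoothed output power and nudge the gain toward
# the reference level; no maximum/minimum input amplitude needs to be supplied.
import numpy as np

def agc(x, target_rms=1.0, alpha=0.02, beta=0.01):
    g, p = 1.0, target_rms ** 2
    y = np.empty_like(x)
    for n, s in enumerate(x):
        y[n] = g * s
        p = (1.0 - alpha) * p + alpha * y[n] ** 2             # smoothed output power estimate
        g *= (target_rms ** 2 / (p + 1e-12)) ** (beta / 2.0)  # gentle multiplicative gain correction
    return y

t = np.arange(20000) / 8000.0                                  # 2.5 s at 8 kSa/s
x = np.sin(2 * np.pi * 50 * t) * np.where(t < 1.25, 0.001, 40.0)   # ~92 dB step in input level
y = agc(x)
print("output RMS before the step: %.2f   after the step (settled): %.2f"
      % (np.sqrt(np.mean(y[8000:10000] ** 2)), np.sqrt(np.mean(y[18000:] ** 2))))
```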

        We at TeK Associates suspect that NI could likely benefit from learning more about the prevalence of other types of corrupting noises besides just Gaussian or Normally distributed bell-shaped noises. Techniques also exist for handling or taming the adverse corrupting effects of any noises that may be present not just in the measurements but also in the systems themselves. NI needs to know “how to play the ball where it lies” in order to know how to assist others faced with these problems so that their users are not “on their own” in facing such problems that plague many. The use of Kalman filters is just one of many tools for ameliorating the effect  of noises in dynamic systems.  

        Follow-Up: By 30 November 2006, NI had announced that LabView 8.1 and LabWindows/CVI 8.1 can both now run on Linux.  Go To Secondary Table of Contents

Information Gained from Attending Lecroy & The MathWorks Technical Presentation on Data Customization (Marriott Hotel, Burlington, MA; 20 April  2006):

       An excellent point was made that Lecroy scopes can run their partner’s MatLab code to customize the definition of pulse rise time (or any other pertinent parameter of interest) if it differs from what is already inherently coded within Lecroy scopes as factory settings (which nominally adhere to current definitions prescribed by the IEEE). An example of when one would want to alter this standard definition of rise time was given in Electromagnetic Compatibility (EMC) testing. It is useful to have such flexibility available and easy to invoke;

      See page 228, Sec. 10.4 Simultaneous Amplitude and Phase Approximations in [77] for why it would be a bad idea to include as high as a 500th order Butterworth Filter in any reasonable setting where an order no greater than 10 would usually suffice (and even lower would be preferred). For an explicit modern application, please see [96]. Such issues arise in seeking to approximate an ideal low pass filter from the perspective of viewing from the frequency domain. For a maximally flat approximation (monotonic, with no ripple in either the pass-band or the stop-band), the Butterworth filter is the usual first choice for an implementation before making further refinements. Evidently, the numerical analyst responsible for implementing the MatLab capability in this area has no concept regarding the adverse phase consequences incurred by having such a high order Butterworth Filter, as indicated in the figure below by advertising a capability of 500th order, even though the resulting magnitude more closely approaches that of an ideal filter the higher the degree (a small numerical illustration of this phase penalty appears immediately below);
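A quick numerical illustration of that phase penalty (our own sample rate, cutoff, and orders, kept far below 500): the pass-band group delay of a Butterworth low-pass grows with the filter order even as the magnitude response more closely approximates the ideal.

```python
# Group delay of Butterworth low-pass filters versus order (illustrative numbers).
# Order 500 is not even attempted: designs that high are numerically meaningless
# in ordinary transfer-function form.
import numpy as np
from scipy import signal

fs, fc = 1000.0, 100.0                        # sample rate and cutoff (Hz), illustrative
for order in (4, 10, 20):
    sos = signal.butter(order, fc, fs=fs, output='sos')
    w, h = signal.sosfreqz(sos, worN=8192, fs=fs)
    phase = np.unwrap(np.angle(h))
    gd = -np.gradient(phase, w) * fs / (2.0 * np.pi)   # group delay in samples
    passband = w < 0.8 * fc
    print("order %2d: mean pass-band group delay = %5.1f samples" % (order, gd[passband].mean()))
```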

Information Gained from Attending IEEE Life Members meeting by William P. Delaney on “Visions of Radars in Space” about constellation of Space-Based Radar satellites (between 8 or 9 up to about 18) continuously viewing the earth (MIT Lincoln Laboratory, Lexington, MA; 25 April  2006):

       Delaney characterized technologists as being of three different categories: “those in favor of space-based radar, those opposed to it because it would change the status quo and undermine their existing authority of being in charge of a current alternative surveillance approach and its resources that would later be threatened with being supplanted by Space-Based Radar (SBR), and those who didn’t give a rat’s ass and had never thought about it.” We hasten to add that a fourth category that Delaney overlooked would be those who had thought about it and see perils and pitfalls and are intimately aware of the CON’s (in both senses);

      Upon considering the pros and cons of alternative surveillance approaches to get an adequate view of the New England coastal region in case of earthquake or other national emergency (with consideration of the significant mountain masking present in the geography under consideration), Delaney only mentioned the options of using multiple AWACS aircraft or using a UAV such as Global Hawk. As an audience member, I brought up the alternative option of temporarily hanging a balloon-borne radar up higher than AWACS but lower and less expensive to deploy than satellite-borne radar. [Addendum: I may need to retract that speculation about the possible usefulness of balloons or blimps after Raytheon’s JLENS blimp drifted off to the state of Pennsylvania, dragging its failed mooring, when it was supposed to be monitoring threats in and around Washington, D.C., and also suffered from a miss in not seeing “the postman flying his slow moving lawn chair equipped with balloons” that landed near the White House from several states away (ostensibly because it was temporarily turned off).];

        To my mind, land-based jammers would be the greatest threat to a finitely powered satellite-borne radar because of the possibility of having very large amounts of power (perhaps even from dedicated nuclear plants) to swamp such a satellite-borne space radar system. Multiple synchronized blinking jammers could play havoc with the convergence of standard null steering algorithms, which would be kept in a continuous state of flux and not be allowed to converge to null out the pesky jammers (via a methodology clearly discussed in Paladin Press books published two decades ago in the open literature and having been standard reading for terrorists and soldiers of fortune for decades as well);

       A 5000-element, 2.5 degree beam-width X-band satellite antenna array of the dimensions speculated on for space-based radar would likely experience flexure modes, vibrations, and oscillations needing to be damped out and actively controlled, and needing to be analyzed as a distributed large-scale structure (as has been done for the space station as a precedent). Delaney didn’t mention whether such considerations arose in Lincoln Laboratory’s final analysis report completed in 2002. Dr. Robert W. Miller had already retired by then and moved to Virginia. Dr. Miller had been the cognizant Kalman filter target tracker expert for Space-Based Radar in the 1980’s and 1990’s;

       Delaney speculated that Space-Time Adaptive Processing (STAP) could be brought to bear on Space-Based Radar (SBR) to solve all the existing clutter problems and  refine Moving Target Indicators for this platform. Reference [79] mentions that such techniques are not applicable to nonstationary (in the statistical sense) clutter (nor is STAP, for the same reasons,  applicable to nonstationary jammers [57]). See [98], [99], and [101] for possible mitigating circumstances;

      Unlike GPS satellites in their semi-synchronous orbits at roughly 22,000 kilometers, the Low Earth Orbit (LEO) advocated by Lincoln Laboratory for Space-Based Radar would encounter drag from the earth’s atmosphere and so would need to perform station keeping, requiring extra on-board fuel for such activities, thus shortening each satellite’s useful life and increasing the payload weight. Delaney did not discuss this aspect. Delaney did mention satellites lasting well beyond their original design life: NavSat (or SatNav, used by U.S. submarines for at-sea position fixes to compensate for the gyro drift of the onboard Inertial Navigation System), the satellite-borne predecessor to GPS whose later satellites included the Nova series, was designed for a 6-year lifetime but lasted well beyond 18 years. Unfortunately, while the spacing of NavSat fly-overs had originally been every two hours, over the ensuing 18 years for these LEO satellites the coverage severely degraded, tabulated in the mid-1970’s as exhibiting gaps of up to 4 to 6 hours in certain geographical locations. An enemy could exploit this situation in knowing when to look;

      There was no discussion of safety issues of general world population exposure to space-based X-band microwave radiation. The Soviets were always more conservative than the U.S. in setting lower limits on maximum tolerable human radar exposure. Lincoln Laboratory managers who have done tours of duty on the Marshall Islands at Kwajalein, near several large Early Warning Radar test sites (such as TRADEX, ALTAIR, and a more recent X-band radar that is both electronically scanned and mechanically rotated), have seemingly been more susceptible to testicular cancer and detached retinas than is the case within the general population. If the nearby chain-link fence rattles from EM radiation, then human genes exposed to it are likely susceptible too!

      While Delaney suggested making Space-Based Radar results available for research by universities to perfect better processing algorithms, there was no indication that encryption of the down-link was being considered in the current estimate of a trillion bits per second of signal processing load. Lincoln Laboratory has a history of overlooking or ignoring mandatory encryption, since their main objective is usually just “proof of concept”; an absence of encryption considerations also occurred with Herb Kottler’s Division 9 miniature UAV design of the late 1980’s. [In late 2009, it was revealed in the nightly news that the U.S. had deployed UAVs to Afghanistan that lacked encryption and whose images were in fact accessed and exploited to an advantage by the enemy.] Since the Forward Edge of the Battle Area (FEBA) could be observed via Space-Based Radar and be potentially exploited to an advantage by an adversary, existing Air Force, Navy, and Army protocols dictate that such information be handled by RED/BLACK cabling approaches involving encryption of the down-link (which appears to be at odds with giving universities live feeds for researching new processing algorithms, as had been originally suggested by Delaney [to pass the buck]).

         Two other aspects that were also overlooked: (1) a lack of redundant gyros, since using only three orthogonal single-degree-of-freedom conventional mechanical spinning-rotor gyros means the UAV design was not robust with respect to incurring even a single routine gyro failure (or accelerometer failure), which would then completely compromise or jeopardize the success of its mission; (2) a lack of any calibration procedure to get the inertial navigation system up and operating well (i.e., accurately) after having been stored on a shelf for a while. The Micron gyro, an electromagnetically supported spherical gyro, possesses two input axes and is known as a two-degree-of-freedom gyro, so having just two provides one redundant input axis, and use of three Rockwell Micron gyros has full redundancy. Ring Laser gyros have excellent shelf-life characteristics, tending to exhibit the same constant white-noise level and (random) constant-bias trends that were rigorously established numerically during initial calibration of the system weeks, months, or even years earlier. This was politely pointed out from the audience upon hearing the first public presentation of the overall UAV design, as briefed to members of Division 9. It is indeed a pity that they did not run these aspects by any knowledgeable individuals at Charles Stark Draper Laboratory (who could confirm or deny these apprehensions that were expressed from the audience by this particular employee trying to be a good team player and warn them when there was a problem that they were evidently unaware of).

There is a simple explanation for any perceived venom (ha!) exhibited on this Website by me regarding MIT Lincoln Laboratory: having been employed there from 1986 until 1992 without ever being assigned or availed of a reasonably working PC of my own (despite repeated requests for such), which required that I return to work every night and most weekends for 6 years (thus jeopardizing my family affiliations) to perform my assigned tasks on any PC that was available at night. Assistant Group 95 Leader, Mr. Bill Brown, had asked me, being new to Group 95, to be the first to use the LaTeX typesetting language to document my own reports and to also show the Group 95 secretaries how to use it. I prepared my first report using LaTeX, labeled the front cover of the printout of the source-code version "BEFORE" and the front cover of the printout of the OUTPUTTED version "AFTER" (hand printed in red pen), and gave both to the Head Secretary so that she would have both "before" and "after" illustrative examples (along with the somewhat cryptic textbook by Leslie Lamport, the only textbook available at the time on the subject of "how to use LaTeX").

Comedy of Errors 1: Unbeknownst to me, Group 95 Leader, Dr. Bill Ince, had grabbed a copy of my report off of the Head Secretary’s desk, attempted to read it, and was unable to do so easily because of all the little symbols throughout, so he complained directly to me about its lack of legibility (in his opinion). Even though Bill Brown was supposed to be my line supervisor and knew what was going on, Bill Ince had inadvertently grabbed the LaTeX source-code version (i.e., the wrong version) of the two that were on her desk, which is why he had trouble going through it. When I explained that to him, he was suspicious of my explanation and seemed to classify it "as an excuse". 
Months later, three project reports that I had personally written in LaTeX and submitted to the Lincoln Laboratory Publications Department for timely dissemination as completed documents were, instead, confiscated and shelved for 12 months by my Group 95 Leader, (the late) Bill Ince, and subsequently made available a year late through no fault of mine; I had submitted them on time but was by then working in Group 53 after his latest "fit of temper", described next.

Comedy of Errors 2: I had previously worked for "Intermetrics" from 1979 until 1986, as stated on my resume and as appeared as my affiliation on my prior published papers, including the one for which I had just received the IEEE AES M. Barry Carlton Award and Medal a year earlier for "Best Paper" in IEEE AES Transactions that year. In 1989, Bill Ince called "Infometrics" and found that I had never worked there and so confronted me about it. When I explained that I had worked at "Intermetrics" (founded in 1969 as a spin-off of Draper Laboratory) and not at "Infometrics", Bill Ince’s response was "stop giving me your excuses", even though Bill Ince was the person who had phoned the wrong company! Much later, Phil Waldron, head of Lincoln’s HR department, directed Group 53 Leader, (the late) Al Schwendtner, to get me a good PC on which to work, since that had previously been an issue in Group 95. When HR checked afterwards, Schwendtner said that he had done so (when, in fact, he had not). HR did not cross-check but took him at his word.

The last straw was when I was criticized, as punishment, by Group 95 Consultant, Prof. Arthur B. Baggeroer (MIT), over my publishing a paper on the topic of Fred C. Schweppe’s Likelihood Ratio test that I had originally written as an internal memo 12 years earlier at TASC, but that had later been pirated by Jim Kain at TASC, an action that was known to William O’Halloran (a Division Leader at TASC at the time I had written it), to John Fagan, and to many others (e.g., Dr. Stephen Alter) at TASC to whom I had personally given copies two years before Jim Kain had it retyped and appended his name to it. Proof is that my version had my name and an earlier run-time date directly within the listing of the Fortran computer code that I had run remotely via telephone to the mainframe GE computers with which I was already familiar from my prior affiliation with the General Electric Corporate R&D Center in Schenectady, NY from 1971 until 1973. Further proof was in the difference or distinction at the end of the paper, published by me in the IEEE Trans. on Aerospace and Electronic Systems in 1989, where I additionally worked out the answer for handling a random process described by the same state-variable model structure but which, additionally, had a non-zero mean. This variation was sought in a particular exercise at the end of the pertinent chapter in Harry Van Trees's textbook series, Vol. 3.

Upon initially joining Group 95, I submitted my work plan compatible with and in compliance with what Bill Brown had asked me to do: (1) work with Prof. Charles Therrien (NPS) in reporting his approach to solving the problem using 2-D spectral analysis techniques and jointly write progress reports with Therrien (as I did several times); (2) pursue my own approach to solving the problem using matrix spectral factorization (MSF) based on results in my approved 1971 Ph.D. thesis; and (3) help report the findings that Prof. Art Baggeroer and his Master's Degree student, William Huang, had already accomplished before I arrived, but still help them document their results and supporting theory, which I had no hand in developing but was still required to read and comprehend in order to document.

Comedy of Errors 3: The above #2 was my main focus, but Prof. Baggeroer seemed to think that I should be pursuing #3, continuing to ask me to make minor cosmetic changes and revisions (in the documentation of their approach #3 that I had provided) ad infinitum, plus perhaps additional supporting numerical results. (I later, ~1990, saw a Lincoln Laboratory rule [in effect before I arrived at MIT Lincoln Laboratory] that warned against using results from ongoing academic degree theses that had not yet been officially approved by an institution, results which, as such, should not be a part of ongoing Lincoln Laboratory projects. I can indeed see why not, since I was in fact caught between Prof. Baggeroer and his Master's Degree student, Bill Huang, who had yet to submit his completed and approved thesis in 1988-89 (and who was then working on Neural Networks in a different Group with a different supervisor even before I arrived). I was not working on supplying more numerical results for this #3. I had other higher-priority tasks that I was performing, according to the work schedule that I had initially submitted to Group 95 leaders in 1986. [I alone was responsible for #2 above, along with the work of my assigned programmer.] Prof. Baggeroer was not my professor. I had been working in industry on DoD work for 17 years following my Ph.D., and I did not want to dilute my current efforts by concentrating on #3. The astute reader may aptly summarize my situation as that I was full of #2!)

In Group 53, Al Schwendtner also said to me that the only reason that I have my publications is that I work for Lincoln Laboratory. I thought to myself, how does that explain my 41 publications before I came there? I also wonder how that explains my 30+ publications since my no longer being confined there. Evidently, these two Group Leaders at LLMIT (the 1st from the U.K., the 2nd from South Africa) "didn’t have their heads screwed on straight". It’s their problem, not mine!

In another situation for Group 53, I had performed a preliminary project investigation and had written a report recommending use of GPS external navaid fixes in conjunction with the planned use of the Honeywell Ring Laser Gyro Navigator aboard the existing Grumman G-1 aircraft for collecting the pertinent terrain-board multi-sensor data within which candidate targets were to be placed. I suggested use of GPS because both the LaserNav INS and GPS provided sufficient waypoints to cover the entire area to be swept by the aircraft from above and would allow everything to be done in real-time with appropriate preflight grooming beforehand! Group 76 notified me that they were going to follow my suggestion and that they had procured a GPS set that Lincoln already had available. They merely needed to provide the necessary cabling hook-up to the LaserNav. When I tried to convey the good news to Group 53, the Group Leader, Schwendtner, demanded "that I never use the term GPS in his presence ever again since they could not afford one" (despite the fact that Group 76 was currently pursuing just that, unbeknownst to him). Schwendtner used my suggested use of GPS against me in my yearly review. 
I was thus stymied in attempting to help Group 76 with their cabling issues, since I was precluded from informing them that their current GPS manufacturer had both types of cabling options preinstalled in their particular model of GPS set; they needed merely to request a GPS replacement with the cabling that they sought rather than continue trying to "roll their own" with the cabling that they had (which was an apparent bottleneck, as they were "reinventing the wheel"). I was, instead, directed by Group 53 to pursue use of optical marker placement, which (IN MY VIEW) would be a ridiculous burden on the flight crew in real-time, requiring them both to visually observe external retro-reflector markers and then to communicate such sightings to the pilot regarding how his instantaneous course should be altered; such Herculean tasks would be entirely unnecessary with rational use of the ample number of waypoints available from both GPS and LaserNav for a preplanned flight. There are existing routine procedures, both visual and aural, for a pilot to properly head and fly to the next waypoint in the consecutive sequence that completely constitutes the course. Why complicate it unnecessarily (for no good reason)? Dr. Robert Hull was present during all my interactions with Group 53 Leaders. At least I got some good practice using LaTeX at Lincoln Laboratory, where I was self-taught by reading. In the process of going from Group 95 to Group 53, I found out that there were two different implementations of LaTeX that had been distributed at Lincoln Laboratory: a cheap $30.00 version (within Group 95) that relied too heavily on a random number generator when deciding where to place page breaks, and a more expensive PCTeX version of LaTeX (within Group 53) that exhibited more consistency and was easier to use. It was the latter version of LaTeX that I later purchased for my own personal use at my company, TeK Associates. Always the professional, I wrote a final memo to Group 95 leaders to alert them to the differences between the two LaTeX flavors before I departed. Go To Secondary Table of Contents 

       Meowwwwwwwww! Get it? Me, ow!

Information Gained from Attending Open Architecture Seminars (Arrow local office, 35 Upton Drive, Wilmington, MA; 9 May 2006):

       The U.S. Navy website provides the Navy Open Architecture standards and guidance documents for public download. CORBA is being adhered to, as is RTI’s DDS methodology for relational distributed databases (ODBC- and JDBC-compatible) using merely SQL commands. LynuxWorks’ BlueCat Linux was approved for use in embedded applications rather than use of Red Hat Linux.

        An open-standards operating system such as the LynxOS RTOS must be used as the operating system in all new U.S. Navy systems, according to Navy Open Architecture (OA), to ensure future interoperability and to support software reuse. This includes the DD(X) next-generation warship; SSDS (Ship Self-Defense System); COTS for AEGIS-equipped cruiser conversion; the SPY radar program; TMS UK Navy sonar systems (display and communications); the Patriot Missile trainer and simulator; the Joint Tactical Combat Training System for DD(X); the BSG-1 Nuclear Tomahawk Missile Program; the Naval Undersea Warfare Center (NUWC) submarine Trainer; and NSWC SGS/AC (Shipboard Gridlock System and Automatic Correlation).

       The Navy wants various subsystems to be interchangeable across several platforms to reduce initial procurement costs and life-cycle support expenses. The toy Lego™ analogy was invoked of being able to “mix and match” with pieces always able to fit together. One of the speakers on the panel said that he was attempting to standardize Inertial Navigation Systems (INS) used on military platforms so that, by utilizing economy of scale, prices for such INS’s would naturally come down. (I expressed my worry that in order to do so unequivocally, they would have to standardize on the most expensive one that arises for the platform that has the most stringent operational constraints. My example was that SSBN’s have the most taxing operational environment and also the need for the most accurate Inertial Navigation Systems (but lower external NAVAID fix update rates than other platforms). The speaker challenged my assertion, saying that helicopters have a more severe vibration environment than the submarines. I countered by pointing out that the standard submarine wartime operating environment must withstand the impact of depth charges in fairly close proximity. The speaker clearly had never been challenged before.) 

         A “system conforming to specifications” versus a “system being compliant with specifications” was explained regarding POSIX.1 Certification, POSIX.1b (for real-time extensions), and POSIX.1c (for Pthreads, i.e., parallel threads for parallel processing). (Evidently being “compliant” is weaker and means that where the system does not conform exactly is known and documented.) For more clarification and elaboration, please see http://standards.ieee.org/reqauth/POSIX/POSIX2.html or http://standards.ieee.org/regauth/POSIX/index.html or http://www.eeglossary.com/posix.htm. The FAA’s ARINC 653 and DO-178B were mentioned as originally blazing the trail. LynxOS-178, LynxOS-SE, and LynxSECURE were discussed within this context as being relied upon to satisfy hard real-time requirements.

         RTI (a spin-off from Stanford University) has a very nice distributed database system (using DDS) that, by being distributed, avoids a single-point vulnerability. When asked about any multi-level security (MLS) being available within their DDS distributed database product, RTI said that MLS had not been included but that “hooks” were included so that creative software developers may (perhaps) be able to engineer such a capability; no MLS is currently available for it as it comes out-of-the-box. (RTI claims to have a ready list of precedents where contractors and Primes have tailored their distributed database product to their DoD-mandated MLS needs for a variety of applications.)

        Follow-Up on Security Vulnerability in Linux (possible buffer overflow in Linux DVD driver portion of the Linux kernel). However, this vulnerability may be exploited through other hardware mechanisms besides just use of DVD. For example, it is possible to exploit this vulnerability by using a custom USB storage device which, when plugged in, has root access to the system. This DVD driver-related security vulnerability was introduced into Linux in version 2.2.16 back in the year 2000 and has continued to be present up until version 2.6.17.3. As of the beginning of July 2006, there were no fixes for this bug yet.    

        Follow-up in 2009: In comparison,  Ubuntu Linux appears to be the big player in the non-military commercial world in 2009 with respect to installing applications on servers and possessing ample tools for ease in installation (according to the Best in eWeek in 2009).          

Go To Secondary Table of Contents

Information Gained from Attending Microsoft Windows Embedded Product Sessions (Microsoft local office, 201 Jones Rd., 6th Floor, Waltham, MA; 23 April  2006):

       All presentations can be found at: www.microsoft.com/windows/embedded/techseminar.mspx ;

         Microsoft has an impressive array of new products and pricing strategies to better map the cost effectiveness of the software operating system in use within an embedded application to the business line and targeted end-customer needs. One example is the flexibility of Windows Point-of-Service (WINPOS) pricing and its ability to accept Windows OS updates and OEM application updates for no additional charge;

        Microsoft promises Operating System support for their array of Embedded Operating Systems for 10 years and in some cases for 15 years. This is much longer than Microsoft offers for their prevalent desktop Operating Systems;

         Microsoft claims that its motto for OEMs is now that “Microsoft doesn’t expect to get paid until the OEM developers get paid” (and the price and royalties are now more reasonable; moreover, if the number of embedded items sold exceeds 5,000, then, for certain Microsoft plan options, developers can actually have copies of the particular Microsoft embedded OS source code and can further modify it to suit their customization needs).   Go To Table of Contents

Information Gained from Attending Analytical Graphics, Inc. (AGI) Missile Defense Seminar 2006 (Marriott Hotel, Burlington, MA; 10 August 2006):

       A great “dog and pony show” featuring capabilities of STK 7.1 (released May 2006) and excellent presenters Victor Alvarez (Product Manager) and Amanda Brewer (Technical Marketing Engineer);

         AGI’s STK 8® is expected out by October 2006 and the current version of STK/MMT 7.1 (Missile Modeling Tools) was released in July 2006. AGI mentioned that the Missile Modeling Tools were developed in conjunction with SAIC’s Advanced Technology Group (SAIC/ATG), Huntsville, AL;

         AGI’s STK® was ostensibly validated several years ago by Aerospace Corporation, but presenters were not specific or convincing about the date and were not specific about who within Aerospace Corporation did the validation or even whether STK® was officially sanctioned by them;

         AGI’s STK® Vector Geometry Tool contains more than 50 pre-configured specialized coordinate frames to aid in the visualization of complex geometry;

         STK 7® OPTISIG, developed by Teledyne Brown in Huntsville, AL, is due out in an upcoming release. OPTISIG contains electro-optic and infrared sensor modeling.   Go To Secondary Table of Contents

TeK Associates’ Thoughts Following Two Half-Day Presentations by COMSOL, Inc. of Its Product, COMSOL Multiphysics (at New England Executive Park in Burlington, MA on 6 March 2009):

Particularly good discussions by COMSOL Multiphysics regarding electromagnetic modeling (and a lucid, helpful review):
https://www.comsol.com/blogs/computational-electromagnetics-modeling-which-module-to-use/?utm_content=buffera1aa0&utm_medium=Social&utm_source=LinkedIn&utm_campaign=comsol_social_pages 
and optics modeling:
https://www.comsol.com/blogs/introducing-ray-optics-module/ 

We are very enthusiastic about the capabilities of COMSOL Multiphysics® for optimization. Of course, it is vulnerable to the same difficulties that every other optimization algorithm is vulnerable to, but it can reap the benefits of 40+ years of optimization conference and workshop experience. COMSOL Multiphysics® uses a conjugate gradient technique, and the developers are looking into gradient-free methods (for cases where the gradient does not exist or cannot be computed conveniently). We users cannot ask for any more than this! COMSOL Multiphysics® evidently also incorporates some aspects of randomized search, so it cannot be easily fooled into converging to merely local optima rather than to a global optimum. This is also important for getting out of bad situations where a gradient search algorithm chatters back and forth orthogonally along a ridge between two peaks (a situation resembling the White Mountains of New Hampshire at Mt. Lincoln, Mt. Liberty, and Mt. Lafayette [near Little Haystack Mountain]) for an inordinate number of iterations while making only slow incremental progress toward the true maximum.
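For readers who want to experiment with the local-optimum issue just mentioned, the following minimal Python sketch (our own illustration, assuming NumPy and SciPy; it is not COMSOL's actual algorithm) combines a conjugate-gradient local search with randomized restarts on a deliberately multi-modal toy cost surface:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def cost(x):
    # illustrative multi-modal surface: a convex bowl with sinusoidal "foothills" added
    return (x[0]**2 + x[1]**2) + 2.0 * np.sin(3.0 * x[0]) * np.sin(3.0 * x[1])

best = None
for _ in range(20):                        # multi-start: 20 random initial guesses
    x0 = rng.uniform(-3.0, 3.0, size=2)    # randomized restart point
    res = minimize(cost, x0, method='CG')  # conjugate-gradient local search from this start
    if best is None or res.fun < best.fun:
        best = res
print("best value found:", best.fun, "at", best.x)

A single local search launched from a poor starting point can be trapped in one of the sinusoidal dimples; the random restarts are what give the overall procedure a fighting chance of landing near the global minimum.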

We were especially pleased because, by having its current structure, COMSOL Multiphysics® is already set up to also successfully handle Multi-Objective Optimization (involving more than one cost function). The existing 40+ year-old theory says that only the real line can be totally ordered (for any two elements s and t, either s < t, or t < s, or s = t); no such total ordering exists in two or more dimensions. However, the theory of Pareto-optimality can still find the Pareto-optimal set (rather than a single optimal point) for multiple costs. Such issues arise in realistic trade-off analyses, where there is typically more than just one design facet under consideration for which an optimization of sorts is being sought.

Suppose that one has three scalar cost functions of interest and concern, say, J1, J2, and J3. And suppose that one seeks to simultaneously choose the best parameter or function u that drives toward min[J1(u)], max[J2(u)], and min[J3(u)]. First, make the optimization go in the same direction for each by instead seeking the u that drives toward min[J1(u)], min[-J2(u)], and min[J3(u)].

Again from a 40+ year old body of theory, there is the “Method-of-Linear-Combinations” that tackles solving the above problem by converting it into the following single scalar cost function that must be optimized multiple times:

By finding u to minimize J(u) = μ1[J1(u)] + μ2[-J2(u)] + μ3[J3(u)], where, for the fixed positive scalars (μ1, μ2, μ3), we have that μ1 + μ2 + μ3 = 1. 

This optimization must be performed again and again for different values of (μ1, μ2, μ3), over the full span of possibilities, in order to fully elucidate the entire Pareto-optimal set. Practical considerations dictate that the actual values used for the fixed (μ1, μ2, μ3) be incrementally quantized. No point in the Pareto-optimal set is any better than any other point within the set with regard to the above three cost functions; some other criterion must be imposed to pick out just one “winner”. Use of the "Method-of-Linear-Combinations", just described, only works as a way to elucidate the Pareto-optimal set when all of the cost functions involved are convex or bowl-shaped (or, at worst, weakly convex, by allowing some flatness in some of the cost functions corresponding to being merely positive semi-definite).
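A minimal Python sketch of the "Method-of-Linear-Combinations" just described follows, assuming NumPy and SciPy; the three convex cost functions J1, J2, and J3 below are illustrative stand-ins (not the actual costs from any real design study), and the weight grid is deliberately coarse:

import numpy as np
from scipy.optimize import minimize

def J1(u): return (u[0] - 1.0)**2 + u[1]**2            # to be minimized
def J2(u): return -((u[0] + 1.0)**2 + u[1]**2)         # to be maximized (concave), so -J2 is used below
def J3(u): return u[0]**2 + (u[1] - 2.0)**2            # to be minimized

pareto_candidates = []
# quantized sweep over weights (mu1, mu2, mu3) with mu1 + mu2 + mu3 = 1
for mu1 in np.linspace(0.0, 1.0, 11):
    for mu2 in np.linspace(0.0, 1.0 - mu1, 11):
        mu3 = 1.0 - mu1 - mu2
        scalar_cost = lambda u, m=(mu1, mu2, mu3): m[0]*J1(u) + m[1]*(-J2(u)) + m[2]*J3(u)
        res = minimize(scalar_cost, x0=np.zeros(2), method='BFGS')   # one single-scalar optimization
        pareto_candidates.append(((mu1, mu2, mu3), res.x, (J1(res.x), J2(res.x), J3(res.x))))

print(len(pareto_candidates), "candidate Pareto-optimal points generated")

Each pass through the inner loop is one of the repeated scalar optimizations called for above; sweeping the quantized weights traces out an approximation to the Pareto-optimal set, and, because every cost here is convex (with -J2 convex as well), the linear-combination scalarization is legitimate. Strictly speaking, the theory calls for positive weights, so the boundary cases with a zero weight should be viewed as limiting cases.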

In the 1970’s, we (now at TeK Associates) applied this Multi-Objective Optimization (involving more than one cost function) “Method-of-Linear-Combinations” under contract to Navy SP-2413 for their missile-launching submarine C-4 backfit and D-1 scenarios, from the point of view of parsimoniously using the alternative external navaids that were necessary to compensate for the internal drift rate of gyros within the submarine SINS/ESGM Navigation Systems, in order to maintain requisite navigation accuracy (in case they were called upon to launch a missile, which inherits its starting position error from its host submarine) while minimizing exposure of the submarine to enemy surveillance while using those navaids. These navigation systems utilized several Kalman filters within their mechanizations, hence our involvement and the presence of Kalman filters within the model. The underlying models were merely ODE’s rather than PDE’s. Optimization was performed on a mainframe and cost $1,000 per iteration until the algorithm converged. There exist Kalman filter constructs for models better described by PDE’s, but PDE’s are unnecessary for submarine navigation considerations. 

In order to compute a model, you need to select, add, and run a study. How do you choose the right one?

Watch this 20-minute video on how to determine which study is most appropriate for your specific modeling scenario, as well as how to add and run studies in COMSOL Multiphysics:

https://www.comsol.com/video/adding-and-running-studies-for-models-in-comsol-multiphysics?utm_content=buffer591a0&utm_medium=Social&utm_source=LinkedIn&utm_campaign=comsol_social_pages 

Go to Top   Go To Secondary Table of Contents

TeK Associates’ Objections Following HP “Rethinking Server Virtualization” Workshop (at the Hyatt Regency in Cambridge, MA on Wednesday, 24 June 2009):

While our objections here do not relate directly to the VMware product vSphere® per se, and we are aware of their other quality products like VMware Fusion®, our objections below focus on the fact that VMware was not immediately forthcoming about the nature of the problem that they are ostensibly solving with vSphere, by not directly addressing the real issues and the design parameters and active constraints that are encountered.

One diagram depicted their (VMware vSphere®’s) underlying virtualization philosophy for handling Fault Tolerance using both hardware- and software-controlled data redundancy to create “virtual machines”, yet they had two memory banks, one being active and the other echoing all operations passively as a warm standby system ready to replace the primary system if it goes down. While the idea of having a warm standby system to switch to “instantaneously” is a very desirable idealization, their approach ignored the reality that any failure first needs to be detected before the desired switch to a new configuration for processing reliance takes place, and that the necessary intermediate fault detection algorithm must always trade off false alarm rate versus missed-detection rate, neither being perfect (i.e., identically zero). A finite latency also occurs before any real fault detection algorithm can finalize the decision that a system failure has occurred. TeK Associates’ view is that it is well nigh impossible to instantaneously identify which of the two systems had failed if voting occurs only between just these two systems, as initially indicated within this VMware vSphere® presentation. It takes three or more voting systems to isolate the source of a failure, and even with three identical systems voting to determine the odd-man-out, the decision latency is still non-zero (an alternative rule to use is “mid-point select”). These representatives of VMware vSphere® appeared to be thrown into a quandary when we asked, “What if the passive backup system failed first while the primary system is still performing adequately?” They did not have a ready answer for this rather obvious question that anyone could reasonably raise. (For an interesting historical perspective and precedent: Stratus Computer and Tandem had focused on fault-tolerant computing over 25 years ago.)
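To make the two-versus-three redundancy point concrete, here is a minimal Python sketch (our own illustration of the generic voting idea, not VMware's design); the measurement values and the agreement tolerance are illustrative assumptions:

from statistics import median

def mid_point_select(a, b, c):
    """Return the middle of three redundant channel outputs (tolerates one wild channel)."""
    return median([a, b, c])

def odd_man_out(a, b, c, tol=0.1):
    """Flag the one channel (if any) that disagrees with the other two, within a chosen tolerance."""
    if abs(a - b) <= tol and abs(a - c) > tol and abs(b - c) > tol:
        return "channel C suspected failed"
    if abs(a - c) <= tol and abs(a - b) > tol and abs(b - c) > tol:
        return "channel B suspected failed"
    if abs(b - c) <= tol and abs(a - b) > tol and abs(a - c) > tol:
        return "channel A suspected failed"
    return "no single failed channel can be isolated"

print(mid_point_select(10.01, 10.02, 57.30))   # -> 10.02, despite one wildly wrong channel
print(odd_man_out(10.01, 10.02, 57.30))        # -> channel C suspected failed
print(odd_man_out(10.01, 57.30, 10.02))        # -> channel B suspected failed (works for any bad channel)

With only two channels, a disagreement tells you that something failed but not which unit to trust; with three, the odd man out can be isolated (after a non-zero detection latency and subject to the false-alarm versus missed-detection trade-off noted above).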

The VMware vSphere® representatives showed slides that implied that they could go beyond 0.99999 availability all the way to 100% certainty. This appears to have been an unbridled marketing slide, since for any system to achieve 100% certainty in reliability it usually incurs an infinite cost. Such certainty is not routinely achieved in practical systems; it is only achieved in idealizations and exaggerations. The use of triple redundancy across everything, including power and cooling, is the usual way to guarantee success in a high-value mission (with this high cost incurred), such as encountered within our experience with navigation systems for nuclear submarines and with what we know about the Space Shuttle (recall that Intermetrics, Inc. was responsible for the third (so-called “back-up”) computer while IBM was responsible for the others. There was a timing glitch across data boundaries that initially caused an alarm to be raised on the back-up computer during the extensive testing on the ground before the first launch. Recall that Intermetrics was found blameless regarding this issue. Intermetrics also provided a product called DIT that detected any Space Shuttle system failures and, moreover, Intermetrics, Inc. provided the computer language HAL/S that was used on the Space Shuttle). [Historically, for both navigation for SSBN’s and for the entire Space Shuttle STS, the computer capacity was held hostage to 10-year-old technology at inception, and this sad situation persisted for decades afterwards until new upgrades were embarked upon for each and rebid after another RFQ and RFP were issued. For the current International Space Station, being tied to antiquated technology for the duration of its useful life cycle is simply avoided by enabling scheduled replacements or augmentation with current cutting-edge technology within laptops, as new ruggedized COTS equipment capturing and encapsulating these desirable capabilities becomes available to warrant inclusion within the existing system at pre-planned locations throughout the platform.]
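A quick back-of-the-envelope calculation (our own, with illustrative numbers) shows why redundancy pushes availability toward, but never actually to, 100%:

def parallel_availability(a_single, n):
    """Availability of n independent parallel units when any single surviving unit suffices."""
    return 1.0 - (1.0 - a_single) ** n

for n in (1, 2, 3):
    print(n, "unit(s):", parallel_availability(0.999, n))
# 1 unit(s): 0.999
# 2 unit(s): 0.999999
# 3 unit(s): 0.999999999  <- ever closer to 1.0, never exactly 1.0, and at ever greater cost

The simple independence assumption here is itself optimistic (common-mode failures in power, cooling, or software break it), which is precisely why claims of 100% certainty deserve skepticism.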

The VMware vSphere® representatives implied that a competitive external data storage approach, utilizing scheduled offloading of data to a Google facility, is prone to being a single point of failure, depicting Google as having 2000 servers in one warehouse tended by one person. It is not very likely that this is actually the case. In our experience, Google is well aware of safe practices and abides by them. Google is not foolish. Far from it.

On the plus side, the VMware vSphere® representatives did emphasize the need to geographically distribute the redundancy a reasonable distance away to avoid a single-point vulnerability to weather or natural disasters (or terrorists). The VMware vSphere® representatives warned that the desired redundant equipment should be no further than 200 miles away; otherwise, the latency from the transmission time delay incurred would be more than 5 seconds, and that is a critical design parameter. We appreciate being alerted to this constraint. A similar latency was the still-unsolved problem that plagued satellite-based point-to-point radio communications (of 16 years ago), precluding the requisite channel equalization because the time delay incurred was beyond any for which autonomous channel equalization had been successfully performed.   

          Go to Top   Go To Secondary Table of Contents

A Ray of Hope as Microsoft Improves the Security of its Products (as had previously been sorely lacking):  

In 2002 after being plagued by the computer worms “Blaster” and “Slammer”, Microsoft suspended its program developments for more than two months and sent all its 9000 programmers to remedial security classes [69];

Microsoft now invites security specialists in for critical reviews of its products and pays close attention to what they say [69];

Microsoft now hosts “Blue Hat” meetings to see how it can shore up its ailing security and has acted responsively and responsibly to this end. Claims are that Windows XP with Service Pack 2 (SP2) is much less vulnerable than its past Windows products. Future products will be even more secure [69]. 

See References [75], [76], [79], [80] for more confirming evidence of the turn around in philosophy for the better at Microsoft.

      I sincerely believe that Microsoft (M/S) could potentially be the U.S.’s “ace-in-the-hole” for Commercial-off-the-Shelf (COTS) products if and when M/S gets its act together in the various computer security concerns, as is the current M/S trend. (Even more so since the advent of a Linux Server virus in October 2005. Such is the peril of using OpenSource software where anyone can view the existing vulnerabilities and choose to exploit them whenever they wish.)  

      [However, a slide at The Mathworks’ 31 January 2006 presentation (but absent in the meeting handout), discussed further above, contained a DoD recommendation for SCA that listed approved hardware and Operating Systems and, unfortunately, did not include Microsoft on this short list. Way to go, DoD! Now, unlike what was the case during WWII, when Ford’s and General Motors’ assembly lines were available to back up the U.S., these giants are no longer available to take up any possible war-production slack. The closest thing the U.S. now has to a world-class Super Star corporation capable of world domination is explicitly excluded from participation in SCA, even though Microsoft’s yearly R&D budget rivals that of DoD’s. Recall that the U.S. can’t rely on Bell Labs’ or General Electric’s R&D (post Jack Welch) any more, and DARPA now appears to hang its hopes way too much on the mere activity of FFRDC’s and their usually vacuous hype. (One prime example of underhanded tactics perpetrated on the unsuspecting military officers who yearly oversee FFRDC activities is that certain organizations provide names for newer satellites that are spelled differently but sound the same when orally pronounced (i.e., are homonyms) as earlier pioneering satellites launched by other organizations, so that historical credit is unfairly grabbed too. Another trick used at a well-known FFRDC was to list Dr. Richard Bucy, one of the simultaneous independent discoverers of the discrete-time formulation of the Kalman filter, in their official Organizational Telephone Book more than 5 years after his departure.) Naturally, Microsoft’s development-path support is the epitome of a cogent COTS philosophy, but an alternative Microsoft path for SCA has evidently now been ruled out by official directive from the start. Such clear thinking in the past gave us nice light aluminum ships (with a low melting temperature) that saved fuel but would burn up in combat instead of withstanding routine battle damage.]

Go to Top   Go To Secondary Table of Contents

  MATRIXx: MATRIXx was developed by N. K. Gupta (who used to work for Raman Mehra [manager of Parameter Identification] at Systems Control, Inc. in Palo Alto, CA, when the two worked in the Parameter Identification Group there in the late 1960's and early 1970's, before Raman Mehra returned to the Boston area to teach at Harvard University temporarily and then founded Scientific Systems Inc., originally in Cambridge, MA but now in Cummings Park, Woburn, MA). N. K. Gupta eventually left Systems Control and worked with Thomas Kailath and others at Stanford University while N. K. was president of the company that developed the MATRIXx software.

      In the late 1990's, MATRIXx received an award from the Federal Government for its utility in generating efficient C code and efficient Ada code. Engineers at McDonnell-Douglas in St. Louis swore by MATRIXx in 1997, used it for most of their projects, and gave a glowing independent endorsement. It is similar to MatLab/Simulink in that it can be used for simulation first and then used to convert simulations into efficient C code or Ada flight code automatically.

      Once McDonnell-Douglas was acquired by Boeing, McDonnell-Douglas engineers were required to use Boeing's EASY5 simulation language. There were software programs to automatically convert MATRIXx models to EASY5 (even though engineers lamented that MATRIXx was much better).

      In the late 1990's or early 2000 time frame, The MathWorks purchased the rights to MATRIXx and had been working closely with National Instruments on NI's LabVIEW. NI was even using MatLab as the scripting language within LabVIEW. That suddenly changed, and there were lawsuits and bad blood between The MathWorks and National Instruments. In the settlement, NI got MATRIXx, and apparently neither party is allowed to discuss it. One of the primary customers of NI's MATRIXx is United Technologies, having absorbed Goodrich ISR in ~2013 and Collins Aerospace later, with all now part of Raytheon.

     Unrelated to MatLab: There are several incidents where I am very glad that cooler heads prevailed: 
(1) During the Cuban Missile Crisis, the Russian radar operators were commanded to launch if they observed any threatening activity from the U.S. They observed some threatening activity but decided NOT to launch after all! 
(2) A software upgrade for our U.S. Strategic Early Warning Radar neglected to account for the relative location of Earth's moon, which triggered a threat warning, but we figured out what the actual problem was rather than "launch"! 
(3) Immediately following strategic defense exercises at Cheyenne Mountain, after they switched back to being on ALERT, they noticed a pattern where some aspects of the exact same prior "simulated" threat exercise appeared on their now-activated radar screens; these apparently were still in associated computer buffers or queues and were recognized as such by the operators and NOT responded to as a real threat. 
Phew, I'm glad that we dodged the bullet yet again!

      Noninvasive tapping of fiber optic cables was presented by MIT Prof. Jeffrey H. Shapiro at an IEEE Information Theory meeting to a standing-room-only crowd back in the early-to-middle 1980's. Two things had to happen before it could become a reality: (1) "squeezed states of light" (which have been achieved) and (2) monopole magnets (which haven't happened yet). https://en.wikipedia.org/wiki/Jeffrey_Shapiro 

Unsettling thought for the day: The DoD is supposed to save money by using Commercial-Off-The-Shelf (COTS) equipment instead of relying on specialized turnkey software solutions, which, in the past, wired in a particular company’s software solution for the duration of the entire life cycle of the weapon system. Obviously, analyses have been performed that support significant DoD cost savings by using COTS. A more burning question in my mind is whether any analysis has been performed to determine how much money DoD will lose using COTS when there is likely widespread pilfering. The situation for COTS use is essentially bilateral since the movement of COTS products can go both ways: “more easy come, more easy go”. Idealistically speaking, “surely our service men and military contractors would not steal from our own defense!” But what about the existing precedents over the last 60 years? An unfortunate and embarrassing further substantiation of this fear of likely COTS pilfering occurred in 2007, pertaining to losses at both the Veterans Administration (VA) and at NASA. Somebody was even selling crates of military rations on eBay in mid-February 2007. Other COTS products would be less obvious a standout than military rations are. Valuable COTS products should be tagged with GPSID or RFID labels to ferret out and prosecute the criminal crud that try to exploit the military services in this way. (For you fellow oldsters, remember Phil Silvers as the original Sgt. Ernie Bilko on TV instead of Steve Martin’s later portrayal in the movie?) In the good old days of the 1950’s and 1960’s, in order that the U.S. would be able to handle “wars of attrition”, it was mandated that every component within all U.S. weapon systems have two domestic suppliers within the Continental United States (CONUS) rather than sending the jobs offshore. Calling for blockades, sieges, and embargoes has been standard military practice in warfare over the past three millennia, so how then did the pointy-headed defense analysts get us into the current COTS dependency predicament? Now with COTS, we have to stop and ask, “Excuse me sir, but can you please provide us with all critical replacement parts to certain particular weapons systems, which we must now rely on, for the foreseeable future, before we can retaliate and attack you for egregious offenses or even defend ourselves against your aggressions?”  Go To Secondary Table of Contents   Go to Top 

A second unsettling thought for the day: After the X-Prize was won by Burt Rutan’s Mojave team at Scaled Composites Corp. in 2004 using the reusable SpaceShipOne, where the second suborbital space flight within two weeks was piloted by Brian Binnie, experts predicted that the likely practical application for such reusable spacecraft would be affordable tourist excursions into space, eventually for $30K to $50K a pop (the price per trip speculated in 2004 to be the likely cost). Since we at TeK Associates are also sensitive to homeland security issues, we strongly recommend keeping a close eye on such so-called “tourists”. Potential well-funded terrorists could commandeer such a craft after take-off and redirect the flight to sensitive targets in a suicide mission, a series of surprise malicious events that unravel so fast in space that standard U.S. Reentry Vehicle interception techniques may be stymied due to a lack of time-to-go before impact along with the tremendous speed of this craft upon reentry, since it would ordinarily be ignored as only a tourist vehicle while its true co-opted mission may be more sinister and lethal. The planned upgrade to SpaceShipOne is to have two pilots and a capacity of greater than 600 lbs of cargo/payload, supposedly consisting of additional passengers (or a disastrous surprise). The first spaceport is to be in Ras al-Khaimah, United Arab Emirates, at an estimated cost of $265 Million [74], ostensibly only because of its proximity to Dubai. This planned structure may, perhaps, worry many for the reasons stated above, even though the U.S. space tourism firm, Space Adventures, has its headquarters in Virginia. See the next item for further developments and updates as to the considerably higher price of tickets, a closer CONUS launch site, and a new company name. Private spaceship makes first solo glide flight

Carried aloft by its mothership to an altitude of 45,000 feet and released over the Mojave Desert, Virgin Galactic's space tourism rocket SpaceShipTwo achieved its first solo glide flight on Sunday, 11 Oct. 2010. The entire test flight lasted about 25 minutes, and the separation was performed without difficulty. Read further Comment

SpaceShipTwo, also built by famed aircraft designer Burt Rutan, is based on his prototype that won the $10 million prize in 2004 for being the first manned private rocket to reach space.

Tickets to ride aboard SpaceShipTwo cost about $200,000 per person, with the added inducement of no extra charge for luggage. Some 370 customers have allegedly plunked down deposits totaling $50 million, according to Virgin Galactic.

Commercial flights will fly out of New Mexico where a spaceport is currently under construction. Officials from Virgin Galactic and other dignitaries will gather at the spaceport on 22 Oct. 2010 for an event commemorating the finished runway. The event will also feature a flyover by SpaceShipTwo and WhiteKnightTwo. (A new multi-million dollar prize, announced in 2011, is now for the first private enterprise flight reaching the surface of the moon again and returning. This endeavor could also be similarly co-opted in the manner warned about here.)

                          

Do not let an enemy of the USA catch us sleeping!
Go To Secondary Table of Contents    Go to Top

Yet a third unsettling thought for the day: Existing military surveillance strategy for being aware of possible threats to existing space assets apparently involves monitoring only those space objects in relatively close proximity, or within a relatively restrictive region, under the assumption that a Hohmann transfer is the only efficient optimal maneuver a likely threat would use to change from a lower orbit to a higher orbit (to get within lethal striking distance of its target). This ignores the 20+ year-old confirmed concept of aero-assisted orbit-change maneuvers, which involve first descending and then using both the drag and subsequent lift of skipping off the earth’s atmosphere to achieve a surprise direction change; this is a truly energy-minimizing optimal (but not time-optimal) way that enemy space assets can maneuver to attain the same objective of close proximity, an analogous “castling move” with the obvious military advantage of catching designated targets (and their protectors) off-guard by essentially coming out of “left field” in a way that is totally unexpected and unprepared for. See Frank Zimmermann and Anthony J. Calise, “Numerical Optimization Study of Aeroassisted Orbital Transfer,” AIAA Journal of Guidance, Control, and Dynamics, Vol. 21, No. 1, January-February 1998, pp. 127-133. The U.S.’s eternally vigilant radar surveillance persists within a background of space junk that must be tracked and maintained/updated within a Space Object Catalog: http://astria.tacc.utexas.edu/AstriaGraph/ 
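For context on why surveillance planners treat the Hohmann transfer as the canonical orbit-raising maneuver to watch for, here is a minimal Python sketch (our own back-of-the-envelope illustration, not taken from Delaney's talk or from the Zimmermann-Calise paper) of the classical two-impulse Hohmann delta-v between two circular, coplanar Earth orbits; the 500 km and 1500 km altitudes are illustrative assumptions:

import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6378.137e3        # Earth's equatorial radius, m

def hohmann_delta_v(r1, r2):
    """Total delta-v (m/s) for a two-impulse Hohmann transfer between circular orbits of radii r1 and r2."""
    dv1 = math.sqrt(MU_EARTH / r1) * (math.sqrt(2.0 * r2 / (r1 + r2)) - 1.0)  # burn at the departure orbit
    dv2 = math.sqrt(MU_EARTH / r2) * (1.0 - math.sqrt(2.0 * r1 / (r1 + r2)))  # burn at the arrival orbit
    return abs(dv1) + abs(dv2)

r_low = R_EARTH + 500.0e3     # 500 km altitude circular orbit (illustrative)
r_high = R_EARTH + 1500.0e3   # 1500 km altitude circular orbit (illustrative)
print(f"Hohmann delta-v, 500 km -> 1500 km altitude: {hohmann_delta_v(r_low, r_high):.0f} m/s")

The fuel-budget point is that a threat constrained to such impulsive transfers is predictable, whereas an aero-assisted maneuver of the kind cited above spends part of that budget in the atmosphere and can arrive from an unexpected direction.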

In 2016, NASA and the FAA seek to take over compilation of the Space Object Catalog from the U.S. DoD, as now headed up by Dr. Moriba Jah, Director, Space Object Behavioral Sciences. The amount of “space junk” is now estimated at 22,000 objects to be tracked and cataloged. “Advanced Methods in Resident Space Object Characterization” was presented at Stanford University on 1 February 2019: https://www.linkedin.com/feed/   (I’m not sure that this link works after the first day.)

Space Debris Removal:

https://www.bbc.com/news/science-environment-47252304 

https://www.keranews.org/post/increase-launches-and-satellites-comes-threats-space-flight 

https://www.npr.org/templates/story/story.php?storyId=6923805 

http://www.cnn.com/2009/TECH/02/12/us.russia.satellite.crash/index.html 

https://drive.google.com/file/d/1-XmhTAzuyimkaxrMzz9rM4_aEJ1p-DJN/view 

https://www.space-track.org/auth/login 

https://www.surrey.ac.uk/surrey-space-centre/missions/removedebris 

https://www.space-data.org/sda/ 

https://docs.fcc.gov/public/attachments/DOC-355102A1.pdf 

Space Traffic Management:

https://espi.or.at/news/espi-hosted-an-evening-event-on-space-traffic-management 

India's anti-satellite missile test may have created 6,500 pieces of space junk larger than a pencil eraser, according to a new simulation:

https://www.businessinsider.com/india-anti-satellite-missile-test-space-debris-cloud-2019-3 

India says space debris from anti-satellite test to 'vanish' in 45 days:

https://www.reuters.com/article/us-india-satellite-idUSKCN1R91DM 

The Robert Strauss Space Security and Safety Program at The University of Texas at Austin now has an official website: 

https://lnkd.in/etahW4T 

Informational Video: Space Traffic Management 101:
Dr. Moriba Jah: "I'm pleased to announce our first of what will be many videos from our Space Security and Safety program @strausscenter where our Brumley Fellow Alyssa Goessler gives us a Space Safety, Security, and Sustainability 101. Please share far and wide! https://lnkd.in/dU3Re9q
#spaceenvironmentalists #satyarising #eyesonthesky #moribasvoxpopuli #tek4spacesustainability #leveluponssa #publicvoices #secondrung #jahniverse

Moriba Jah, Associate Professor, Mrs. Pearlie Dashiell Henderson Centennial Fellowship in Engineering (13 October 2020):
I’m pleased to announce that we are delivering CCSDS Orbit Ephemeris Messages (OEMs) on bit.ly/astriagraph for the following Anthropogenic Space Objects (ASOs): STARLINK, IRIDIUM, FLOCK, INTELSAT, GALAXY, TDRS, GALILEO! You should see the following clickable link. Enjoy! #eyesonthesky  #spaceenvironmentalists  #leveluponssa Orbit Determination

The one and only Dr. Moriba Jah returns to the Cold Star Project... this is NOT his typical interview... and it is NOT our first discussion! Join us for the truth about #space #situationalawareness

https://lnkd.in/dTfpe3b 

https://www.ibm.com/ibm/history/witexhibit/wit_intro.htm  

Oden Institute for Computational Engineering and Sciences | University of Texas at Austin:
I'm looking for students hungry to embrace complexity and transdisciplinarity: https://lnkd.in/dnSqKVg  APPLY NOW! #eyesonthesky #moribasvoxpopuli #jahniverse #satyarising #spaceenvironmentalists 

https://www.oden.utexas.edu/graduate-studies/admissions/

Senate bill would assign space traffic management work to Commerce Department - SpaceNews:

The chairman of the Senate Commerce Committee introduced a bill Wednesday to formally give the Commerce Department space traffic management responsibilities, but the funding required to carry out that work remains uncertain.

https://spacenews.com/senate-bill-would-assign-space-traffic-management-work-to-commerce-department/ 

It is with great pleasure and excitement that I (Dr. Moriba Jah) officially introduce the newest permanent committee of the International Academy of Astronautics... Space Traffic Management: https://lnkd.in/dHHq9Bj 

Please submit a paper (extended due date) to our next IAA/UT Austin Space Traffic Management conference: https://lnkd.in/dxkZarU 

#eyesonthesky #moribasvoxpopuli #jahniverse #spaceenvironmentalists #leveluponssa #twoknightsonehorse
Permanent Committees:
https://iaaspace.org/about/permanent-committees/#1608221127992-3f6dda0a-3cd4 

One year ago Dr. Moriba Jah and I (NOT Tom Kerr) had this key pre-covid interview on the Cold Star Project. Together with space lawyer Christopher Johnson's first interview and Dr. Joel Sercel's discussion, you'll be up to speed fast on what's actually happening in #space. The #spaceindustry issues...the law...the capabilities.

Dr. Jah and Mr. Johnson have had further interviews with me (NOT Tom Kerr) on the show, and some issues like the space #situationalawareness one have gotten through 2020 a lot more of the public and official attention that they need. But if you're new to space, this is a great place to start.
https://lnkd.in/gc6tjw5 

(Apr. 2021) German Space Agency to use Lockheed Martin tool to track space debris:
Germany will use #LockheedMartin’s space situational awareness #software to track objects in #space. The system alerts operators to anomalies or potential collisions and suggests mitigating actions. 
https://www.defensenews.com/battlefield-tech/space/2021/04/12/german-space-agency-to-use-lockheed-martin-tool-to-track-space-debris/?_lrsc=1d1d60ae-bcce-4fdb-9de0-d0f4c7188ee0 

(Apr. 2021) Spurbeck, J., Jah, M. K., Kucharski, D., et al., "Near Real Time Satellite Event Detection, Characterization, and Operational Assessment Via the Exploitation of Remote Photoacoustic Signatures," J. Astronaut. Sci. 68, 197–224 (2021).
https://doi.org/10.1007/s40295-021-00252-5 

Accepted: 18 January 2021; Published: 13 April 2021; Issue Date: March 2021. DOI: https://doi.org/10.1007/s40295-021-00252-5 
TeK Associates downloaded the above published report to our website so reader can access it immediately by clicking here.

(Apr. 2021) Real-time space junk map marvelously depicts the scale of the problem:

https://newseu.cgtn.com/news/2021-04-28/Real-time-space-junk-map-shows-scale-of-problem--ZO4Mo18uZO/index.html 

http://astria.tacc.utexas.edu/AstriaGraph/ 

(Apr. 2021) In order to flatten the curve on the “spreading” of orbital debris we need more space actors to comply with the science that’s been captured in space debris mitigation guidelines: 

we need nation states to make these into their national space law and enforce it!

We need to eliminate “super-spreader” events on orbit...by removing the massive derelict rocket bodies and upper stages. 

These objects are taking up orbital carrying capacity that could otherwise be used by working satellites.

95% of all orbital carrying capacity is being taken up by space debris...

https://lnkd.in/gXugz2e 

https://www.bbc.com/news/av/science-environment-56845104 

Want to be a part of the solution? Do it here: https://lnkd.in/g9ADX76 

(May 2021) Needed: Rules of thumb for avoiding collisions in space | Aerospace America:

Given the recent SpaceX and OneWeb stories regarding an alleged near miss...I’ve said this before: Probability of Collision as a measure of collision risk is 
nonsensical because it is TOTALLY subjective and critically dependent on the data used to compute it and the associated dimensions of data quality: accuracy, 
timeliness, completeness, uniqueness, consistency, validity, etc.

All data are not created equal nor have the same amount of information. Data do not equal information!!!! Information is the answer from data once we ask it
a question. If we pose no question of data, there is no information. We feast on data and are starved for information. This is the tradecraft of the competent
data engineer, scientist, and analyst. Most people wouldn’t recognize one if they were slapped by one! Too many people are all too happy to eat from the 
plate of jargon. Others are happy to keep you confused to their benefit. Shell games abound. We might lose LEO as a consequence.

https://aerospaceamerica.aiaa.org/departments/needed-rules-of-thumb-for-avoiding-collisions-in-space/ 

In April 2022, Prof. Jah said that he is very proud to have co-authored a relevant Nature Astronomy article: https://lnkd.in/gNCtnDJj 

Free version can be found here: https://lnkd.in/gQVnqjgt 

Go To Secondary Table of Contents   Go to Top 

Yet a fourth unsettling thought for the day: In late November 2008, the Boston Globe again reported preliminary U.S. plans to build a defense against Armageddon due to an asteroid strike of the Earth. Originally, such a defensive system was called for in 1989 after the end of the Cold War (and before the 16-22 July 1994 spectacle of the Shoemaker-Levy 9 comet being pulled apart and breaking up into the “string of pearls” that sequentially impacted Jupiter, all within view of the Hubble telescope to fan additional fears), when some government physicists needed a new welfare project. As a precedent, in the early 1970’s, the ship-borne Phalanx CIWS (Close-In Weapon System) was undergoing initial testing. A missile was aimed beyond the test ship and the onboard Phalanx was activated to fire upon it continuously with a successive barrage of bullets in order to change its momentum sufficiently and thereby alter the missile’s direction and trajectory. While Phalanx successfully changed the missile’s direction and trajectory, the missile, unfortunately, ended up actually hitting the test ship, unlike what was planned or sought. (Fortunately, no life was lost and the ship was soon to be decommissioned anyway.) The test was declared a “success” because the Phalanx did, in fact, change the missile’s direction, although it did not prevent the missile from hitting the targeted ship, which was the Navy’s primary goal for developing Phalanx in the first place. A similar mishap could occur with an asteroid-to-Earth collision prevention system, but the consequences of such an error would be much more dire, grim, and earth-shattering (literally). (Where is Bruce Willis when you need him?)  

Again, a 2016 update on the use of Phalanx: Raytheon’s JLENS blimp drifted off to Pennsylvania, dragging its failed mooring, when it was supposed to be monitoring threats in and around Washington, D.C.; it also suffered a miss in not seeing “the postman flying his slow-moving lawn chair equipped with balloons,” who arrived from several states away and landed near the White House (ostensibly because JLENS was temporarily turned off). A land-based Phalanx has now been deployed at critical locations there and around the U.S.A., where needed.

Go To Secondary Table of Contents    Go to Top

Yet a fifth unsettling thought for the day: The Navy’s aluminum ships (existing before the 1980’s) were a good idea for peacetime too, but warfare revealed a low melting point for aluminum that was catastrophic for warships. TeK Associates wonders how the aluminum ship idea got so far without objections and adequate challenges.

Current FAA/ARINC and DoD GPS jam-resistance demonstrations, in the opinion of TeK Associates, are against dumb jammers consisting merely of broadband Gaussian White Noise (GWN) sources. A more realistic jamming threat is more sophisticated, and it is well known and documented in a Paladin Press book published in the U.S. more than 30 years ago. A German processing methodology known as Space-Time Adaptive Processing (STAP), originally adopted in the U.S. by MITRE for GPS antenna jamming mitigation (by the [late] Ron Fante at MITRE) and by Lincoln Laboratory of MIT for radar processing and jamming mitigation, is only applicable to thwarting wide-band GWN jammers. A more sophisticated enemy would likely use worse against us. TeK Associates is less enthusiastic about STAP because it confers a false sense of security that is not matched by actual protection. TeK Associates’ bias or humble contrarian view is that while STAP can adequately withstand multiple barrage jammers that only use wide-band stationary Gaussian White Noise (GWN), the tactics of a sophisticated enemy would not be so dumb. TeK Associates suspects that STAP has apparently been oversold by Lincoln Laboratory of MIT, by MITRE, and originally by certain German authors who initiated this particular signal processing approach.

This STAP approach is especially vulnerable since the Paladin Press book published, over thirty years ago, advice that jammers should do otherwise against phased arrays and fixed arrays, and Paladin Press is usually read by soldiers of fortune and terrorists (recall Soldier of Fortune Magazine, which had been sold at every routine U.S. corner magazine store). A copy of this Paladin Press book is in the MIT Lincoln Laboratory library open literature. This Paladin Press book suggests use of statistically nonstationary jammers to destroy “ergodicity” of the variance and prevent STAP from obtaining ensemble averages of covariances from sample averages. From mathematical logic, "Ergodicity" (E) => "Stationarity" (S) in the statistical sense, so, equivalently, ~S => ~E, as the contrapositive. [Time samples and their averages still exist, and subsequent calculations that appear to be beam-forming still proceed for these vulnerable systems, but these statistics are apparently useless and misleading since they no longer correspond to actual ensemble statistics, as they would if the systems were still wide-sense stationary and ergodic but, alas, they are NOT! For insights into the proper supporting theory, please read either one of the first two general but introductory random process textbooks: (1) Athanasios Papoulis, Probability, Random Variables, and Stochastic Processes, McGraw-Hill, NY, 1965 or (2) R. J. Schwarz and B. Friedland, Linear Systems, McGraw-Hill Book Co., New York, 1965, and then, please read (3) J. R. Guerci, Space-Time Adaptive Processing for Radar, 2nd Edition, Artech House, Norwood, MA, 2014. http://ieee-aess.org/contact/joseph-r-guerci] These more sophisticated jammers are fairly easy to implement merely as slight modifications of standard broadband WGN barrage jammers. The Paladin book also says to use “Blinking Synchronized” jammers. 
An open literature publication distributed in Eli Brookner’s IEEE course several years ago (in ~2000) allowed us (and anyone else who can think analytically) to infer how fast a blinking jammer’s blink rate would need to be in order to befuddle null-steering algorithms by keeping them in a continuous state of flux (by sufficiently changing on each iteration), unable then to converge and successfully place nulls on all the offending jammers of this type because of the "peek-a-boo" chaos being caused. Published algorithm operation counts and the known speed of existing state-of-the-art processors give a strong clue. (ELINT and/or SIGINT could be used to confirm STAP’s lack of effectiveness in this situation.) Well, the U.S. knows how to jam too: in 2018, the U.S. Army’s new Jamming Pod is discussed here: https://www.c4isrnet.com/digital-show-dailies/ausa/2018/10/12/what-is-the-armys-integrated-jamming-and-cyber-pod-capable-of/ If we used “persistently exciting” signals (a concept from Parameter Identification) to modulate WGN jammers, the disruptive result would be even more effective, since the jamming would never exhibit any periodicity over which corresponding time strips could be time-averaged to obtain the type of ensemble averages that, as Papoulis discusses, could otherwise be possible for random processes exhibiting “periodic stationarity” [or perhaps even “periodic ergodicity”?].

My reading of Applebaum’s paper doesn’t help unless the classified appendices accompanying the original Applebaum report [S. P. Applebaum (Syracuse University Research Corporation, Syracuse, NY, USA; General Electric Company Limited, Syracuse, NY, USA), "Adaptive Arrays," IEEE Transactions on Antennas and Propagation, Vol. AP-24, No. 5, pp. 585-598, Sept. 1976. Abstract: A method for adaptively optimizing the signal-to-noise ratio of an array antenna is presented. Optimum element weights are derived for a prescribed environment and a given signal direction. The derivation is extended to the optimization of a "generalized" signal-to-noise ratio which permits specification of preferred weights for the normal quiescent environment. The relation of the adaptive array to sidelobe cancellation is shown, and a real-time adaptive implementation is discussed. For illustration, the performance of an adaptive linear array is presented for various jammer configurations.] address some critical aspect that I have overlooked here. I very much wish that this were the case! A ray of hope is conveyed here. [The results in Applebaum's nice report/paper, cited above, are fine and sufficient justification for any application when the corrupting noise jammers are merely broadband stationary Gaussian White Noise (barrage) jammers.] An impressive example of Eli Brookner's genius, insights, and clarity is provided (please click here) to see his excellent overview of every pertinent topic (except, perhaps, considerations of ergodicity of the variance). Another view of radars is conveyed by clicking here.
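
To make the ergodicity argument above concrete, here is a minimal numerical sketch (an illustration only, with made-up element counts, jammer powers, and angles; it is not any fielded STAP/SLC code): adaptive weights are formed from the time-averaged sample covariance of the previous data block and then applied to the current block, the usual adapt-then-apply latency. For a stationary barrage jammer the two blocks share the same ensemble statistics, so the null holds; for a "blinking" jammer whose on/off period matches the block length, the time-averaged statistics no longer describe the current instant and jammer power leaks through.

import numpy as np

rng = np.random.default_rng(1)
n_el, block = 8, 200
a_jam = np.exp(1j * np.pi * np.arange(n_el) * np.sin(np.radians(40.0)))  # jammer steering vector (half-wavelength spacing)
a_sig = np.exp(1j * np.pi * np.arange(n_el) * np.sin(np.radians(0.0)))   # desired look direction (broadside)

def block_data(jammer_on):
    """One block of array snapshots: unit-power receiver noise plus (optionally) a strong barrage jammer."""
    noise = (rng.standard_normal((n_el, block)) + 1j * rng.standard_normal((n_el, block))) / np.sqrt(2)
    if not jammer_on:
        return noise
    jam = (rng.standard_normal(block) + 1j * rng.standard_normal(block)) * 30.0
    return noise + np.outer(a_jam, jam)

def residual_jammer_power(on_pattern):
    """Adapt MVDR-style weights on block k, apply them to block k+1, and average output power over jammer-on blocks."""
    powers = []
    for k in range(len(on_pattern) - 1):
        X_train = block_data(on_pattern[k])
        R_hat = X_train @ X_train.conj().T / block + 1e-3 * np.eye(n_el)  # time-averaged sample covariance (+ diagonal loading)
        w = np.linalg.solve(R_hat, a_sig)
        w = w / (w.conj() @ a_sig)                                        # unit (distortionless) gain toward the look direction
        if on_pattern[k + 1]:
            powers.append(np.mean(np.abs(w.conj() @ block_data(True)) ** 2))
    return float(np.mean(powers))

print("stationary barrage jammer:", residual_jammer_power([True] * 20))
print("blinking jammer          :", residual_jammer_power([True, False] * 10))

In this toy setting the residual jammer power printed for the blinking case comes out far larger than for the stationary case, which is exactly the sample-average-versus-ensemble-average mismatch described above.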

A more recent worry falls within the category of disrupting civilian applications, since 5G is upon us and will be using phased array antennas too. If the same amelioration techniques are used for these applications as are used for DoD GPS applications (https://go.usa.gov/xMZ2q), then the predicament is identical: they are vulnerable for the same reasons. "Self-driving cars" may suffer, as may IoT applications that rely on a 5G wireless approach. This can have grave economic consequences for the national economies of many nations. Evidently complicit were the official DoD and FAA/ARINC testers, who confined their attention to testing only against "STATISTICALLY STATIONARY" enemy barrage jammers (which were successfully thwarted), while not even bothering to test against "STATISTICALLY NON-STATIONARY" enemy jammers. Perhaps a false sense of security ensued, with our "heads buried in the sand like ostriches"? While "wide-sense stationary" Gaussians, due to their Gaussianness, are also "strict-sense stationary" Gaussians, and ordinary Gaussians like this exhibit 1st and 2nd moments that are ergodic, with their measurable and available time averages being theoretically equal to their ensemble averages (as needed in computations for placing antenna nulls on offending jammers), such nice analytic results are, apparently, not available with "statistically non-stationary" Gaussians. See also "Persistently Exciting Signals," Munther A. Dahleh, MIT lecture notes, Lecture 4. Please see [57], [58] in the reference list below.
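
For readers unfamiliar with the term, one common textbook statement of "persistent excitation" from system identification (our paraphrase of the standard notion; see the Dahleh lecture cited above and [57], [58]) is that an input u is persistently exciting of order n when the limiting covariance matrix of its lagged-sample vector exists and is positive definite:

$$\varphi(t) = \begin{bmatrix} u(t-1) & u(t-2) & \cdots & u(t-n) \end{bmatrix}^{\mathsf T}, \qquad R_n = \lim_{N\to\infty} \frac{1}{N}\sum_{t=1}^{N} \varphi(t)\,\varphi(t)^{\mathsf T} \succ 0,$$

i.e., no nontrivial linear combination of the last n lags has zero average power, so the input keeps exciting every direction the adaptive processor would need to identify.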


My allegiance is to the USA and to its defense, and NOT to any major aerospace contractor nor to two local FFRDC's, who may have put us into this situation and yet don't acknowledge their pivotal role in it.

Yet a sixth unsettling thought for the day: While Tom Kerr doesn't have the full technical depth/breadth nor sufficient practical experience in this radar arena to make definitive clarifying pronouncements himself, he did notice some unsettling trends, as he discusses further below.

Regarding recent expert complaints about MIMO radar, my comment, echoing the sentiments of the inimitable Bugs Bunny (in parodying Hamlet), is: "methinks he doth protest too much!" Rather than merely matching MIMO radar performance claims using prior radar techniques, Dr. Eli Brookner instead goes beyond that by implementing additional "heroic measures," which involve additional processing and some back-pedaling, in order to match what MIMO radar routinely achieves and offers as "new." Instead, in one fell swoop, if Dr. Eli Brookner were to actually embrace the MIMO radar methodology, he could achieve all these "new" capabilities (rather than discourage or scoff with disdain at the MIMO radar results as not being entirely "new").

From personal professional experience, Tom Kerr DOES KNOW that MIMO communications (a.k.a. diversity multi-channel transmitters and receivers [107]) are beneficial in their own right. MIMO control has been in use for 50+ years! Multi-port network synthesis has been available since the mid-1960's. Wiener filters, using MIMO Matrix Spectral Factorization (MSF), obtain "realizable filters" (i.e., with all poles in the Left Half Plane [LHP] and of minimum phase [i.e., all zeroes in the LHP as well]) [as demonstrated in TK-MIP here, and here, and here], and have been in use since the late 1960's; and, for the scalar case, since the 1940's. https://en.wikipedia.org/wiki/MIMO  

However, Tom Kerr knows that these mere historical precedents alone utilizing MIMO are NOT sufficient for anyone to justify endorsing MIMO radar. Instead, Tom Kerr will rely on what others have already found out on this topic.

Tom Kerr knows that "technology rolls on" and evolves no matter who stands at the door trying to block it! Tom feels that it is better to "go with the flow". Dr. Eli Brookner used the state-of-the-art that was available at that time. Old friend, as George and Ira Gershwin said so well: "Who could ask for anything more?"

To see more, please click on Eli's most recent IEEE Distinguished AES Speaker lecture at Tufts University in 2018.  (Tom Kerr is 2nd from far right in the photograph. Of course, Eli is front and center.) 

A favorable consensus usually rules in technology as well as elsewhere!

Lincoln Laboratory of MIT evidently endorsed the MIMO radar approach some years ago. Others also embraced MIMO radar, as can be seen by clicking here.

Moreover, these new results, routinely obtained using MIMO radar, appear to have motivated some technologists to return to earlier radar processing approaches and then work harder to obtain similar or comparable results with those older approaches (apparently one at a time), now inspired by MIMO radar claims. The older radar approaches are not "new"; rather, their ability to achieve such comparable results "by doing more" is what is now "new" (following up on what MIMO radar can do [simultaneously] and, therefore, has actually done first).

Here are some of Dr. Eli Brookner's somewhat negative views on MIMO radar:
http://ieee-aess.org/sites/ieee-aess.org/files/2.BROOKNER%2CELI%2CMIMO%2CPUBBIOISRAELSHORTSUM169-114%2CIEEEAESS.pdf 
http://www.radarindia.com/Proceedings%20Archive/IRSI-17/081.pdf 
https://in.bgu.ac.il/en/engn/ece/radar/Site%20Assets/Pages/Keynote-Speakers/MIMO-RADARS.pdf 
Eli Brookner, "MIMO radar demystified and where it makes sense to use,"
Conference: ICASSP 2014 - 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI:10.1109/ICASSP.2014.6854613

Please also peruse both of Dr. Eli Brookner's very nice 1997 articles: 
https://www.microwavejournal.com/articles/2080-major-advances-in-phased-arrays-part-1  

https://www.microwavejournal.com/articles/2094-major-advances-in-phased-arrays-part-ii 

and also:

https://www.info.com/serp?q=phased%20array%20radar%20design

Eli Brookner's MIMO Radar for Automobiles:

https://ieeexplore.ieee.org/document/9020848 

Dr. Eli Brookner's apparent stance on MIMO radar: "Contrary to claims made, Multiple Input and Multiple Output (MIMO) radars do not provide an
order of magnitude or better angle resolution, accuracy and identifiability (i.e., the ability to resolve and identify targets) over conventional radars. 
Their claim is based on using a MIMO array radar system consisting of a full transmit array and thinned receive array (or vice versa; denoted 
here as a full/thin array). This claim for MIMO results from making the wrong comparison to a full conventional array rather than to a conventional 
full/thin array. It is shown here that a conventional full/thin array radar can have the same angle accuracy, resolution, and identifiability as a 
MIMO full/thin array." "Where does the MIMO radar provide a better angle accuracy than a conventional radar? A monostatic MIMO array radar 
does provide a better angle accuracy than its conventional monostatic equivalent, but is only about a factor of 1/√2 (29 percent) better and its  
resolution is the same."
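
To make the quoted numbers concrete (our arithmetic, merely restating the figures in the quotation above): a factor of 1/√2 applied to the angle-accuracy standard deviation means

$$\sigma_{\theta,\mathrm{MIMO}} = \frac{\sigma_{\theta,\mathrm{conv}}}{\sqrt{2}} \approx 0.707\,\sigma_{\theta,\mathrm{conv}}, \qquad 1 - \frac{1}{\sqrt{2}} \approx 0.29,$$

i.e., roughly the 29 percent improvement cited, while the angular resolution (beamwidth) is claimed to remain unchanged.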


However, the advocates of MIMO radar make the following claim:
What are some advantages of using a MIMO radar?
-Compared to conventional phased array radars that need successive scans to cover the entire FOV, a MIMO radar can observe its whole field of view at once; this is another advantage of MIMO radars for applications that require fast reaction times. Time Division Multiplexing (TDM) is one way to achieve orthogonality among the transmit channels, as sketched just below.
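
Below is a tiny sketch (our illustration only; the slot count and pulse shape are arbitrary) of why Time Division Multiplexing yields orthogonal transmit channels: each transmitter radiates in its own time slot, so the inner product between any two transmit waveforms taken over the full dwell is exactly zero, which is all that "orthogonality" requires here.

import numpy as np

n_tx, slot = 3, 64
waveforms = np.zeros((n_tx, n_tx * slot), dtype=complex)
for m in range(n_tx):
    pulse = np.exp(1j * np.pi * np.linspace(0.0, 1.0, slot) ** 2)  # any pulse shape works; a toy chirp here
    waveforms[m, m * slot:(m + 1) * slot] = pulse                  # transmitter m radiates only during slot m

gram = waveforms @ waveforms.conj().T  # pairwise inner products over the whole dwell
print(np.round(np.abs(gram), 3))       # diagonal = pulse energy; off-diagonal = 0, i.e., orthogonal channels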

https://www.mathworks.com/help/phased/ug/increasing-angular-resolution-with-mimo-radars.html 

Jian Li and the highly regarded Petre Stoica list the main advantages of MIMO radar to be: ''Significantly improved parameter identifiability.'' Simply put, MIMO radar improves the maximum number of identifiable targets.

https://nutaq.com/blog/mimo-radar-and-phased-array-radar#:~:text=%2C%20Jian%20Li%20and%20Petre%20Stoica%20list%20the,radar%20improves%20the%20maximum%20number%20of%20identifiable%20targets .
Multiple-input multiple-output radar is an advanced type of phased array radar employing digital receivers and waveform generators distributed across the 
radar aperture. MIMO radar signals propagate in a fashion similar to multistatic radar. However, instead of distributing the radar elements throughout the 
surveillance area, antennas are closely located to obtain better spatial resolution, Doppler resolution, and dynamic range. MIMO radar may also be used 
to obtain low-probability-of-intercept radar properties.
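
Here is a minimal sketch (our illustration only; the element counts and spacings are made up) of the "virtual array" idea behind the co-located MIMO description above: with Nt transmitters radiating orthogonal waveforms and Nr receivers, matched filtering yields Nt x Nr virtual phase centers located at the pairwise sums of the transmit and receive element positions, so a handful of physical elements can emulate a much larger filled receive aperture.

import numpy as np

wavelength = 1.0
nt, nr = 3, 4
tx = np.arange(nt) * (nr * wavelength / 2.0)  # sparse transmit elements, spacing Nr * lambda/2
rx = np.arange(nr) * (wavelength / 2.0)       # dense receive elements, spacing lambda/2

virtual = np.sort((tx[:, None] + rx[None, :]).ravel())  # Nt*Nr virtual phase-center positions
print("physical elements:", nt + nr, "  virtual elements:", virtual.size)
print("virtual positions (wavelengths):", virtual)
print("receive-only aperture:", rx.max() - rx.min(), "  virtual aperture:", virtual.max() - virtual.min())

In this toy example, 3 + 4 = 7 physical elements produce a uniform 12-element virtual array whose aperture is several times that of the physical receive array alone, which is the angular-resolution and identifiability benefit being claimed for MIMO radar in the excerpts above.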

https://wikimili.com/en/Multistatic_radar 

https://en.wikipedia.org/wiki/MIMO_radar 

https://www.radartutorial.eu/02.basics/MIMO-radar.en.html 

https://www.mathworks.com/campaigns/offers/hybrid-beamforming-white-paper.html 


Effects of Mobility on mmWave Massive MIMO Beamforming: 
This paper explores the effectiveness of massive MIMO beamforming in mitigating mmWave propagation challenges using Wireless InSite. 
READ MORE: https://resources.remcom.com/publications-presentations/publications-effects-of-mobility-on-mmwave-massive-mimo-beamforming-in-dynamic-urban-environments?utm_source=ieee-spectrum&utm_medium=display&utm_campaign=newsletter-ads 

Effects of Mobility on mmWave Massive MIMO Beamforming in Dynamic, Urban Environments

Wireless service providers have begun taking advantage of the expanse of spectrum available in the mmWave band thanks to a growing demand for remarkably 
higher data rates and an endlessly increasing number of connected users. However, mmWave propagation does not come without its challenges. For example, 
increased free-space path loss is a concern, as is a greater attenuation for diffracted beams in comparison to sub-7 GHz bands.

mmWave massive MIMO beamforming is a technology that can help solve some of these problems. But to do so effectively, it must be able to adapt to dynamic 
channels as devices move and signals interact with vehicles and people moving throughout a space.

This paper explores the effectiveness of massive MIMO beamforming at mitigating the challenges expected from using the expanse of spectrum available in the 
mmWave band. Specifically, it explores a use case involving Remcom's Wireless InSite EM propagation software and its mmWave hybrid beamformer capabilities to 
transmit data streams to a sedan driving in central Manhattan.

The discussion focuses on how the wide bandwidths afforded by mmWaves compel the use of adaptive beam steering methods with deteriorated performance in a 
mobile environment due to latency. The impact of latency alone can be significant, while the other effects of mobility have not been included in this 
discussion and would degrade performance even further.

Despite such minor technical quibbles, Dr. Eli Brookner indeed remains one of my technology heroes! He is an inspiration to all of us who really care.

Please view Dr. Eli Brookner's 2021 Obituary by clicking here. He will indeed be missed!

Another giant in the radar area, Merrill Skolnik, died two months later, at age 94.

Personally, Tom Kerr tries to keep abreast of nice new discussions or revelations in radar such as:

http://www.tekassociates.biz/8569_0_art_file_31669_mcnph8_convrtIEEETAESTrackSmallLEOObjRadarFenceRadar.pdf 

https://hal.archives-ouvertes.fr/hal-01070959/document 

https://www.facebook.com/IEEEAESS/videos/passive-radar/748838592565039/ 

https://www.facebook.com/IEEEAESS/videos/radar-adaptivity-antenna-based-signal-processing-techniques/337163170947463/ 

https://us.artechhouse.com/Radar-for-Fully-Autonomous-Driving-P2262.aspx (This new book is highly recommended for what it contains regarding new perspectives and insights!)

Space Fence:

https://www.lockheedmartin.com/en-us/products/space-fence.html 

https://directory.eoportal.org/web/eoportal/satellite-missions/content/-/article/space-fence 

https://spacenews.com/leolabs-western-australia-radar/ 

https://battle-updates.com/update/satellite-systems-satcom-and-space-systems-update-93/  

Tom Kerr has always enjoyed the writings, wisdom, and creativity of others including or especially that of Robert J. Fitzgerald (Raytheon-retired).

Go To Secondary Table of Contents    Go to Top

References (a partial list):

  1. Kerr, T. H., “A Two Ellipsoid Overlap Test for Real-Time Failure Detection and Isolation by Confidence Regions Proceedings of IEEE Conference on Decision and Control, Phoenix, AZ, December 1974.
  2. Kerr, T. H., “Poseidon Improvement Studies: Real-Time Failure Detection in the SINS\ESGM (U) TASC Report TR-418-20, Reading, MA, June 1974 (Confidential).
  3. Kerr, T. H., “Failure Detection in the SINS\ESGM System (U) TASC Report TR-528-3-1, Reading, MA, July 1975 (Confidential).
  4. Kerr, T. H., “Improving ESGM Failure Detection in the SINS\ESGM System (U) TASC Report TR-678-3-1, Reading, MA, October 1976 (Confidential).
  5. Kerr, T. H., “Failure Detection Aids for Human Operator Decisions in a Precision Inertial Navigation System Complex Proceedings of Symposium on Applications of Decision Theory to Problems of Diagnosis and Repair, Keith Womer (editor), Wright-Patterson AFB, OH: AFIT TR 76-15, AFIT\EN, Oct. 1976, sponsored by the local Dayton Chapter of the American Statistical Association, Fairborn, Ohio, pp. 98-127, June 1976.
  6. Kerr, T. H., “Real-Time Failure Detection: A Static Nonlinear Optimization Problem that Yields a Two Ellipsoid Overlap Test Journal of Optimization Theory and Applications, Vol. 22, No. 4, August 1977.
  7. Kerr, T. H., “Preliminary Quantitative Evaluation of Accuracy\Observables Trade-off in Selecting Loran\NAVSAT Fix Strategies (U) TASC Technical Information Memorandum TIM-889-3-1, Reading, MA, December 1977 (Confidential).
  8. Kerr, T. H., “Improving C-3 SSBN Navaid Utilization (U) TASC Technical Information Memorandum TIM-1390-3-1, Reading, MA, August 1979 (Secret).
  9. Kerr, T. H., “Stability Conditions for the RelNav Community as a Decentralized Estimator-Final Report Intermetrics, Inc. Report No. IR-480, Cambridge, MA, 10 August 1980, for NADC (Warminster, PA).
  10. Kerr, T. H., and Chin, L., “A Stable Decentralized Filtering Implementation for JTIDS RelNav Proceedings of IEEE Position, Location, and Navigation Symposium (PLANS), Atlantic City, NJ, 8-11 December 1980.
  11. Kerr, T. H., and Chin, L., “The Theory and Techniques of Discrete-Time Decentralized Filters,” in Advances in the Techniques and Technology in the Application of Nonlinear Filters and Kalman Filters, edited by C. T. Leondes, NATO Advisory Group for Aerospace Research and Development, AGARDograph No. 256, Noordhoff International Publishing, Leiden, 1981.
  12. Kerr, T. H., “Modeling and Evaluating an Empirical INS Difference Monitoring Procedure Used to Sequence SSBN Navaid Fixes Proceedings of the Annual Meeting of the Institute of Navigation, U.S. Naval Academy, Annapolis, Md., 9-11 June 1981. (Selected for reprinting in Navigation: Journal of the Institute of Navigation, Vol. 28, No. 4, pp. 263-285, Winter 1981- 1982).
  13. Kerr, T. H., “Statistical Analysis of a Two Ellipsoid Overlap Test for Real-Time Failure Detection IEEE Transactions on Automatic Control, Vol. 25, No. 4, August 1980.
  14. Kerr, T. H., “False Alarm and Correct Detection Probabilities Over a Time Interval for Restricted Classes of Failure Detection Algorithms IEEE Transactions on Information Theory, Vol. 28, No. 4, pp. 619-631, July 1982.
  15. Kerr, T. H., “Examining the Controversy Over the Acceptability of SPRT and GLR Techniques and Other Loose Ends in Failure Detection Proceedings of the American Control Conference, San Francisco, CA, 22-24 June 1983.
  16. Carlson, N. A., Kerr, T. H., Sacks, J. E., “Integrated Navigation Concept Study Intermetrics Report No. IR-MA-321, 15 June 1984, for ITT (Nutley, NJ) for ICNIA (Wright Patterson AFB).
  17. Kerr, T. H., “Decentralized Filtering and Redundancy Management Failure Detection for Multi-Sensor Integrated Navigation Systems Proceedings of the National Technical Meeting of the Institute of Navigation (ION), San Diego, CA, 15-17 January 1985.
  18. Kerr, T. H., “Decentralized Filtering and Redundancy Management for Multisensor Navigation IEEE Trans. on Aerospace and Electronic Systems, Vol. 23, No. 1, pp. 83-119, Jan. 1987 (correction on p. 412 of May and on p. 599 of July 1987 issues).
  19. Kerr, T. H., “Comments on ‘A Chi-Square Test for Fault Detection in Kalman Filters’,” IEEE Transactions on Automatic Control, Vol. 35, No. 11, pp. 1277-1278, November 1990.
  20. Kerr, T. H., “A Critique of Several Failure Detection Approaches for Navigation Systems IEEE Transactions on Automatic Control, Vol. 34, No. 7, pp. 791-792, July 1989.
  21. Kerr, T. H., “On Duality Between Failure Detection and Radar\Optical Maneuver Detection IEEE Transactions on Aerospace and Electronic Systems, Vol. 25, No. 4, pp. 581-583, July 1989.
  22. Kerr, T. H., “The Principal Minor Test for Semidefinite Matrices-Author’s Reply,” AIAA Journal of Guidance, Control, and Dynamics, Vol. 13, No. 3, p. 767, Sep.-Oct. 1989.
  23. Kerr, T. H., “An Analytic Example of a Schweppe Likelihood Ratio Detector,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 25, No. 4, pp. 545-558, Jul. 1989.
  24. Kerr, T. H., “Fallacies in Computational Testing of Matrix Positive Definiteness/Semidefiniteness,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 26, No. 2, pp. 415-421, Mar. 1990.
  25. Kerr, T. H., “On Misstatements of the Test for Positive Semidefinite Matrices,” AIAA Journal of Guidance, Control, and Dynamics, Vol. 13, No. 3, pp. 571-572, May-Jun. 1990.
  26. Kerr, T. H., “Comments on ‘An Algorithm for Real-Time Failure Detection in Kalman Filters’,” IEEE Trans. on Automatic Control, Vol. 43, No. 5, pp. 682-683, May 1998.
  27. Kerr, T. H., “Rationale for Monte-Carlo Simulator Design to Support Multichannel Spectral Estimation and/or Kalman Filter Performance Testing and Software Validation/Verification Using Closed-Form Test Cases,” MIT Lincoln Laboratory Report No. PA-512, Lexington, MA, 22 Dec. 1989 (BSD).
  28. Kerr, T. H., “A Constructive Use of Idempotent Matrices to Validate Linear Systems Analysis Software,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 26, No. 6, pp. 935-952, Nov. 1990 (minor correction in Nov. 1991 issue).
  29. Kerr, T. H., “Numerical Approximations and Other Structural Issues in Practical Implementations of Kalman Filtering a chapter in Approximate Kalman Filtering, edited by Guanrong Chen, 1993.
  30. Kerr, T. H., and Satz, H., S., “Applications of Some Explicit Formulas for the Matrix Exponential in Linear Systems Software Validation,” Proceedings of 16th Digital Avionics System Conference, Vol. I, pp. 1.4-9 to 1.4-20, Irvine, CA, 26-30 Oct. 1997.
  31. Kerr, T. H., “Verification of Linear System Software Sub-Modules using Analytic Closed-Form Results Proceedings of The Workshop on Estimation, Tracking, and Fusion: A Tribute to Yaakov Bar-Shalom (on the occasion of his 60th Birthday) following the Fourth ONR/GTRI Workshop on Target Tracking and Sensor Fusion, Naval Postgraduate School, Monterey, CA, 17 May 2001.
  32. Kerr, T. H., “Exact Methodology for Testing Linear System Software Using Idempotent Matrices and Other Closed-Form Analytic Results Proceedings of SPIE, Session 4473: Tracking Small Targets, pp. 142-168, San Diego, CA, 29 July-3 Aug. 2001.
  33. Kerr, T. H., “The Proper Computation of the Matrix Pseudo-Inverse and its Impact in MVRO Filtering,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 21, No. 5, pp. 711-724, Sep. 1985.
  34. Kerr, T. H., “Computational Techniques for the Matrix Pseudoinverse in Minimum Variance Reduced-Order Filtering and Control,” in Control and Dynamic Systems-Advances in Theory and Applications, Vol. XXVIII: Advances in Algorithms and computational Techniques for Dynamic Control Systems, Part 1 of 3, C. T. Leondes (Ed.), Academic Press, N.Y., 1988.
  35. Kerr, T. H., “Streamlining Measurement Iteration for EKF Target Tracking,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 27, No. 2, Mar. 1991 (minor correction appears in Nov. 1991 issue).
  36. Kerr, T. H., “Assessing and Improving the Status of Existing Angle-Only Tracking (AOT) Results Proceedings of the International Conference on Signal Processing Applications & Technology (ICSPAT), Boston, MA, pp. 1574-1587, 24-26 Oct. 1995.
  37. Kerr, T. H., “Status of CR-Like Lower bounds for Nonlinear Filtering IEEE Transactions on Aerospace and Electronic Systems, Vol. 25, No. 5, pp. 590-601, Sep. 1989 (Author’s reply in Vol. 26, No. 5, pp. 896-898, Sep. 1990).
  38. Kerr, T. H., “Extending Decentralized Kalman Filtering (KF) to 2-D for Real-Time Multisensor Image Fusion and\or Restoration Signal Processing, Sensor Fusion, and Target Recognition V, Proceedings of SPIE Conference, Vol. 2755, Orlando, FL, pp. 548-564, 8-10 Apr. 1996.
  39. Kerr, T. H., “Extending Decentralized Kalman Filtering (KF) to 2D for Real-Time Multisensor Image Fusion and\or Restoration: Optimality of Some Decentralized KF Architectures Proceedings of the International Conference on Signal Processing Applications & Technology (ICSPAT96), Boston, MA, 7-10 Oct. 1996.
  40. Kerr, T. H., “Comments on ‘Federated Square Root Filter for Decentralized Parallel Processes’ IEEE Transactions on Aerospace and Electronic Systems, Vol. 27, No. 6, Nov. 1991.
  41. Kerr, T. H., “Cramer-Rao Lower Bound Implementation and Analysis: CRLB Target Tracking Evaluation Methodology for NMD Radars MITRE Technical Report, Contract No. F19628-94-C-0001, Project No. 03984000-N0, Bedford, MA, February 1998.
  42. Kerr, T. H., “Developing Cramer-Rao Lower Bounds to Gauge the Effectiveness of UEWR Target Tracking Filters Proceedings of AIAA\BMDO Technology Readiness Conference and Exhibit, Colorado Springs, CO, 3-7 August 1998.
  43. Kerr, T. H., “UEWR Design Notebook-Section 2.3: Track Analysis TeK Associates, Lexington, MA, (for XonTech, Hartwell Rd, Lexington, MA), XonTech Report No. D744-10300, 29 March 1999.
  44. Kerr, T. H., and Satz, H. S., “Evaluation of Batch Filter Behavior in comparison to EKF, TeK Associates, Lexington, MA, (for Raytheon, Sudbury, MA), 22 Nov. 1999.
  45. Satz, H. S., Kerr, T.  H., “Comparison of Batch and Kalman Filtering for Radar Tracking, Proceedings of 10th Annual AIAA/BMDO Conference, Williamsburg, VA, 25 Jul. 2001 (Unclassified, but Conference Proceedings are SECRET).
  46. Kerr, T. H., “TeK Associates’ view in comparing use of a recursive Extended Kalman Filter (EKF) versus use of Batch Least Squares (BLS) algorithm for UEWR, TeK Associates, Lexington, MA, (for Raytheon, Sudbury, MA), 12 Sep. 2000.
  47. Kerr, T. H., “Use of GPS/INS in the Design of Airborne Multisensor Data Collection Missions (for Tuning NN-based ATR algorithms),” the Institute of Navigation Proceedings of GPS-94, Salt Lake City, UT, pp. 1173-1188, 20-23 Sep. 1994.
  48. Kerr, T. H., “Comments on ‘Determining if Two Solid Ellipsoids Intersect’,” AIAA Journal of Guidance, Control, and Dynamics, Vol. 28, No. 1, pp. 189-190, Jan.-Feb. 2005.
  49. Kerr, T. H., “Integral Evaluation Enabling Performance Trade-offs for Two Confidence Region-Based Failure Detection  AIAA Journal of Guidance, Control, and Dynamics, Vol. 29, No. 3, pp. 757-762, May-Jun. 2006.
  50. Kerr, T. H., “Further Comments on ‘Optimal Sensor Selection Strategy for Discrete-Time Estimators’,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 31, No. 3, pp. 1159-1166, June 1995.
  51. Kerr, T. H., “Sensor Scheduling in Kalman Filters: Evaluating a Procedure for Varying Submarine Navaids Proceedings of 57th Annual Meeting of the Institute of Navigation, pp. 310-324, Albuquerque, NM, 9-13 June 2001.
  52. Kerr, T. H., “The Principal Minor Test for Semidefinite Matrices-Author’s Reply AIAA Journal of Guidance, Control, and Dynamics, Vol. 13, No. 3, p. 767, Sep.-Oct. 1989.
  53. Hsu, David Y., Spatial Error Analysis, IEEE Press, NY, 1999.
  54. Roberts, P. F., “MIT research and grid hacks reveal SSH holes,” eWeek, Vol. 22, No. 20, pp. 7, 8, 16 May 2005.
  55. Golub, G. H., Van Loan, C. F., Matrix Computations, 3rd Edition, The Johns Hopkins University Press, Baltimore, MD, 1996.
  56. Rader, C. M., Steinhardt, A. O., “Hyperbolic Householder Transformations IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. 34, No. 6, pp. 1589-1602, December 1986.
  57. Kerr, T. H., “Vulnerability of Recent GPS Adaptive Antenna Processing (and all STAP/SLC) to Statistically Non-Stationary Jammer Threats Proceedings of SPIE, Session 4473: Tracking Small Targets, pp. 62-73, San Diego, CA, 29 July-3 Aug. 2001.
  58. Guerci, J. R., Space-Time Adaptive Processing for Radar, Artech House, Norwood, MA, 2003. Also see 2nd Edition.
  59. Heideman, M. T., Johnson, D. H., Burrus, C. S., “Gauss and the History of the Fast Fourier Transform IEEE ASSP Magazine, pp. 14-21, October 1984.
  60. Kerr, T. H., “Emulating Random Process Target Statistics (using MSF) IEEE Transactions on Aerospace and Electronic Systems, Vol. AES-30, No. 2, pp. 556-577, April 1994.
  61. Gelb, A. (Ed.), Applied Optimal Estimation, MIT Press, Cambridge, MA, 1974.
  62. Safonov, M. G., Athans, M., “Gain and Phase Margins for Multiloop LQG Regulators IEEE Transactions on Automatic Control, Vol. 22, No. 2, pp. 173-179, Apr. 1977.
  63. Doyle, J. C., “Guaranteed Margins for LQG Regulators IEEE Transactions on Automatic Control, Vol. 23, No. 4, pp. 756-757, Aug. 1978.
  64. Grimble, M. J., “Implicit and Explicit LQG Self-Tuning Controllers,” Automatica, Vol. 20, No. 5, pp. 661-669, 1984.
  65. Astrom, K. J., Hagglund, T., “Automatic Tuning of Simple Regulators with Specifications on Phase and Amplitude Margins,” Automatica, Vol. 20, No. 5, pp. 645-651, 1984.
  66. Lewis, F. L., Applied Optimal Control and Estimation, Prentice-Hall and Texas Instruments Digital Signal Processing Series, 1992.
  67. Chong, C. Y., Mori, S., “Convex Combination and Covariance Intersection Algorithms in Distributed Fusion,” Proc. of 4th Intern. Conf. on Information Fusion, Montreal, Canada, Aug. 2001.
  68. Chen, L., Arambel, P. O., Mehra, R. K.,  “Estimation Under Unknown Correlation: Covariance Intersection Revisited IEEE Trans. on Automatic Control, Vol. 47, No. 11, pp. 1879-1882, Nov. 2002.
  69. Markoff, J., “At Microsoft, Interlopers Sound Off on Security The New York Times, pages C-1, C-7, Monday, 17 October 2005.
  70. Wayne, R., “That Parallel Beat Software Development, Vol. 14, No. 1, pp. 24-28, Jan. 2006.
  71. Oney, W., Programming with Microsoft Windows Driver Model, 2nd Edition, Microsoft Press, Redmond, WA, 2003.
  72. Sayed, A. H., and Kailath, T., “A State-Space Approach to Adaptive RLS Filtering IEEE Signal Processing Magazine, Vol. 11, No. 3, pp. 18-60, Jul. 1994. [Also see all related sequels by these two authors and as coauthors.]
  73. Markoff, J., “I.B.M. Researchers Find a Way to Keep Moore’s Law on Pace The New York Times, p. C4, Monday, 20 February 2006.
  74. Reuters, “Space Adventures gets Approval for Spaceport The Boston Globe, p. A11, Monday, 20 February 2006.
  75. Corio, C., “First Look: New Security Features in Windows Vista TechNet Magazine-Special Report: Security, Vol. 2, No. 3, pp. 34-39, May-June 2006.
  76. Hensing, R., “Behind the Scenes: How Microsoft Built a Unified Approach to Windows Security TechNet Magazine Special Report: Security, Vol. 2, No. 3, pp. 40-45, May-June 2006.
  77. Baher, H., Synthesis of Electrical Networks, John Wiley & Sons, Inc., NY, 1984.
  78. Smith, S. T., “Covariance, Subspace, and Intrinsic Cramer-Rao Bounds,” IEEE Trans. on Signal Processing, Vol. 53, No. 5, pp. 1610-1630, May 2005.
  79. “Security Watch: Mozilla, Microsoft Mend Merchandise,” PC Magazine online: <PCM_SecurityWatch@enews.pcmag.com>, full article at http://www.pcmag.com/article2/0,1895,1949924,00.asp , 18 April 2006.
  80. Howard, M., LeBlanc, D., Writing Secure Code: practical strategies and techniques for secure application coding in a networked world, 2nd Edition, Microsoft Press, Redmond, WA, 2003. [“Required reading at Microsoft.”-Bill Gates]
  81. Cho, A., “A New Way to Beat the Limits on Shrinking Transistors,” Science, Vol. 313, Issue 5774, p. 672, 5 May 2006.
  82. Reed, I. S., Mallett, J. D., Brennan, L. E., “Rapid Convergence Rate in Adaptive Arrays,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 10, No. 6, pp. 853-863, Nov. 1974.
  83. Gabelli, J., Feve, G., Berroir, J.-M., Placais, B., Cavanna, A., Etienne, B., Jin, Y., Glattli, D. C., “Violation of Kirchhoff’s Laws for a Coherent RC Circuit,” Science, Vol. 313, Issue 5786, pp. 499-502, 28 July 2006. [Same issue of journal on page 405 subtitles this article in its summarization as “Kicking Out Kirchhoff’s Laws.” For a fully coherent circuit consisting of a quantum resistor (point contact) and a quantum capacitor in series, Kirchhoff’s Laws no longer describe the resistance of the system. In addition to highlighting the differences in electronic transport behavior between quantum and classical, these results should prove useful for future implementation of quantum computers. For pertinent details, see www.sciencemag.org/cgi/content/full/1126940/DC1.]
  84. Holsapple, R., Venkataraman, R., Doman, D., “New, Fast Numerical Method for Solving Two-Point Boundary-Value Problems,” AIAA Journal of Guidance, Control, and Dynamics, Vol. 27, No. 2, Engineering Notes, pp. 301-304, March-April 2004.
  85. Woit, P., Not Even Wrong: The Failure of String Theory and the Search for Unity in Physical Law, Basic Books, NY, 2006.

  86. Smolin, L., The Trouble With Physics: The Rise of String Theory, The Fall of a Science, and What Comes Next, Houghton-Mifflin, NY, 2006.

  87. Eklund, C., Marks, R. B., Ponnuswamy, S., Stanwood, K. L., van Waes, N. J. M., WirelessMAN: Inside the IEEE 802.16 Standard for Wireless Metropolitan Area Networks, IEEE Standards Wireless Series, Standards Information Network, IEEE Press, NY, 2006.

  88. Wescott, T., Applied Control Theory for Embedded Systems, Embedded Technology Series, Newnes Elsevier, Inc., Boston, MA, 2006.

  89. Fette, B. (Ed.), Cognitive Radio Technology, Communication Engineering Series, Newnes Elsevier, Inc., Boston, MA, 2006.

  90. Shepard, S., WiMAX Crash Course, McGraw-Hill, NY, 2006. [A nice collection of Common Technical Acronyms in Appendix A, pp. 237-323.]

  91. Willert-Porada, M. (Ed.), Advances in Microwave and Radio Frequency Processing: 8th International Conference on Microwave and High-Frequency Heating, Springer-Verlag, NY, 2006.

  92. Travostino, F., Mambretti, J., Karmous-Edwards, G., Grid Networks: Enabling Grids with Advanced Communication Technology, John Wiley & Sons, Ltd, Chichester, West Sussex, UK, 2006.

  93. Mortensen, R. E., Optimal Control of Continuous-Time Stochastic Systems, Ph.D. Thesis (engineering), Univ. of California, Berkeley, CA, 1966.

  94. Jazwinski, A. H., Stochastic Processes and Filtering Theory, Academic Press, NY, 1970 [a book that we do NOT view as “yet another addition to an already heavy shelf...” (where the phrase within quotation marks is an exact quote used to characterize this book by a certain Stanford Univ. Ph.D. alumnus, Kenneth Senne, at Lincoln Laboratory in his 1972 review of this book [95]); rather, we view this book as an easily accessible blueprint for future developments, as well as an admirable summary of the past contributions of others in terms that are easily read and understood. While A. H. Jazwinski could have easily made his book more obscure, abstract, and technically challenging (which would have meant a less arduous writing task for Jazwinski) for a markedly narrower readership as a consequence, he did not do so (and limited arguments in his book to merely mean-square convergence only, as he clearly acknowledged was the necessary trade-off), which is good business sense for a wider appreciation and distribution of his publication because of its more general appeal and accessibility to others who had not studied mathematical Measure Theory or, equivalently, Advanced Probability Theory or Advanced Stochastic Processes in the Mathematics Departments].

  95. Senne, K. D., “Review of ‘Stochastic Processes and Filtering Theory (Andrew H. Jazwinski, 1970)’,” IEEE Trans. on Automatic Control, Vol. 17, No. 5, pp. 752-753, Oct. 1972.

  96. Nehorai, A., “Adaptive Parameter Estimation for a Constrained Low-Pass Butterworth System,” IEEE Trans. on Automatic Control, Vol. 33, No. 1, pp. 109-112, Jan. 1988.

  97. Challa, S., Bar-Shalom, Y., “Nonlinear Filter Design Using Fokker-Planck-Kolmogorov Probability Density Evolution,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 36, No. 1, pp. 309-315, Jan. 2000.

  98. Shackelford, A. K., Gerlach, K.,  Blunt, S. D., “Partially Adaptive STAP using the FRACTA Algorithm,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 45, No. 1, pp. 58-69, Jan. 2009.

  99. DiPietro, R. C., “Extended factored space-time processing for airborne radar,” in Proceedings of the 26th Asilomar Conference, pp. 425-430, Pacific Grove, CA, Oct. 1992.

  100. Farrell, W. J., “Interacting Multiple Model Filter for Tactical Ballistic Missile Tracking,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 44, No. 2, pp. 418-426, April 2008.

  101. Buzzi, S., Lops, M., Venturino, L., Ferri, M., “Track-before-Detect Procedures in a Multi-Target Environment,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 44, No. 3, pp. 1135-1150, July 2008.

  102. Ries, P., Lapierre, F. D., Verly, J. G., “Fundamentals of Spatial and Doppler Frequencies in Radar STAP,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 44, No. 3, pp. 1118-1134, July 2008.

  103. Click here to view our recent short comment in the Institute of Navigation Journal. Kerr, T. H., “Comment on ‘Low-Noise Linear Combination of Triple-Frequency Carrier Phase Measurements’,” Navigation: Journal of the Institute of Navigation, Vol.57, No. 2, pp. 161,162, Summer 2010.

  104. Click here to view our abstract for the GNC Challenges for Miniature Autonomous Systems Workshop, held 26-28 October 2009 at Fort Walton Beach, FL. Click here to obtain the corresponding 1.40 MByte PowerPoint presentation. 

  105. Kerr, T. H., “Comment on ‘Precision Free-Inertial Navigation with Gravity Compensation by an Onboard Gradiometer’,”  AIAA Journal of Guidance, Control, and Dynamics, July-Aug. 2007.   

  106. Sergey Stepanov, Relativistic World, Vol. 1: Mechanics, De Gruyter GmbH, Boston, MA, 2018. [Also see: How to Calculate an Ensemble of Neural Network Model Weights in Keras (Polyak Averaging), https://lnkd.in/gmVUr7W]

  107. John M. Wozencraft and Irwin Mark Jacobs, Principles of Communication Engineering, Waveland Press, Prospect Park, IL, Jun 1, 1990.

  108. Yuan, J. S.-C. and W. M. Wonham. "Probing signals for model reference identification," IEEE Trans. Aut. Control, Vol. 22, pp. 530-538, 1977. (persistently exciting)

  109. https://www.unoosa.org/documents/pdf/psa/activities/2004/vienna/presentations/monday/pm/walter.pdf  (Just getting started in Brazil. circa 2005 considering GPS/GLONASS and eventually GNSS/Galileo)

  110. https://www.jpier.org/PIERM/pier.php?paper=17041403 similarly elsewhere.

  111. de Morais, T. N., Oliveira, A. B. V., Walter, F., "Global Behavior of Equatorial Anomaly Since 1999 and Effects on GPS," IEEE AES Systems Magazine, Vol. 20, No. 3, Mar. 2005, pp. 15-23. (Adverse Atmospheric Effects)

Go to Top     Go To Secondary Table of Contents

Please click on the above or click here.

Click here to see evidence of our continuing education regarding Neural Networks

Click here to see the numerous Continuing Education Courses Tom has taken to keep up with evolving technology.

Athans, M., and Schweppe, F. C., “Matrix Gradients and Matrix Calculations,” MIT Lincoln Laboratory, Lexington, MA, Technical Note No. TN 1965-63, 1965.

Within a summer two-week short course on “Kalman Filtering and LQG Control” at MIT in 1974, the above document was distributed with a false cover sheet attached, which read:

Athans, M., et al., “Matrix Gradients and Matrix Calculations,” MIT Lincoln Laboratory, Lexington, MA, Technical Note No. TN 1965-63, 1965.

The grammatical rule is: “for three or more authors of a publication, et al. may be used.” Should the single coauthor, Fred C. Schweppe, be reduced to merely an “et al.”?

Go to Top     Go To Secondary Table of Contents

Dodecahedron, as arises in certain approaches to ideal INS configurations.   

For more about Dodecahedron for Redundant INS: see Chien, T. T., "An Adaptive Technique for a Redundant-Sensor Navigation System," Rept. T-560, C. S. Draper Laboratory, Cambridge, MA, 1972.

As Gabby Hayes, depicted above on the left, used to say: “Dag nab it!” and “yer d-a-r-n tootin’!” (in an easily recognizable and distinctive voice).

(If you wish to print information from Websites with black backgrounds, we recommend that you should first invert colors.)

Go to Top     Go To Secondary Table of Contents

TeK Associates Motto: "We work hard to make your job easier!"