The assessment of complex algorithms like sensor fusion requires an aggregate analysis across a large, heterogeneous data set that represents the possible operating conditions. This talk will discuss the process and tools we use to analyze this data, where we want to take things, the hurdles we have yet to overcome, the lessons we have learned along the way, and the best practices we can recommend.
In the military aviation development and test process, it is common to record all of the digital communication that occurs between the myriad on-aircraft systems (computers, data link radios, sensors, etc.). One of our main research areas is sensor fusion, in which real-time, on-aircraft software compares and fuses tracks from all of the onboard sensors as well as those reported by other aircraft in the area. Track association is one component of sensor fusion. By association we mean the decision made by the fusion engine as to whether two or more track reports represent the same physical entity (“track matching”). Track reports may be provided by the same sensor, by different sensors on a single aircraft, or by multiple sensors across multiple aircraft. Assessing the performance of the sensor fusion algorithms requires an aggregate analysis across a large set of flight test data, both from a mission perspective (a single mission, but looking at data from multiple aircraft) and across multiple missions (evaluating performance over time and across a wide range of test conditions). This talk will focus on the post-flight assessment process related to track association decisions.
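To make the association decision concrete, here is a deliberately naive sketch of the kind of check involved, not our actual fusion engine's logic: given two track reports that have already been aligned in time and expressed in a common local frame, flag a candidate match when their separation stays small. The column names and threshold are illustrative assumptions.

```python
import numpy as np
import pandas as pd

# Two hypothetical track reports from different sensors, already time-aligned
# and expressed in a common local Cartesian frame (meters). Column names are
# illustrative, not our actual schema.
track_a = pd.DataFrame({"e": [100.0, 110.0, 120.0],
                        "n": [200.0, 205.0, 210.0],
                        "u": [3000.0, 3001.0, 3002.0]})
track_b = pd.DataFrame({"e": [102.0, 111.0, 119.0],
                        "n": [199.0, 206.0, 211.0],
                        "u": [3000.5, 3000.8, 3002.4]})

# Point-wise 3D separation between the two reports.
sep = np.sqrt(((track_a - track_b) ** 2).sum(axis=1))

# A naive association rule: call it a match if the mean separation is small.
# A real fusion engine weighs far more evidence (kinematics, covariance, etc.).
MATCH_THRESHOLD_M = 50.0  # illustrative value
is_match = sep.mean() < MATCH_THRESHOLD_M
print(f"mean separation {sep.mean():.1f} m -> match: {is_match}")
```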
In order to analyze and compare tracks (time series reports on the position and movement of a physical entity), it is necessary to address any time alignment issues, perform coordinate transformations to get all tracks into a common coordinate frame, and store the data in a format that facilitates aggregate analysis. The time alignment challenges stem from the fact that the data is recorded on multiple asynchronous digital interfaces. Also, time synchronization between aircraft is not robust and can require time shifting some of the recordings. As recorded, the track reports vary widely in the coordinate frames used. Some sensors report in only a single dimension (e.g. azimuth only), others in two (e.g. azimuth and elevation), and others in three (e.g. azimuth, elevation, and range, or latitude, longitude, and altitude). Some sensors report a track's kinematic information in addition to its position, and that information can also vary by coordinate frame. We have found pandas' powerful data manipulation capabilities and its extensive time series support to be very effective, both in preparing the data for analysis and in the analysis itself.
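As a rough illustration of both preparation steps, the sketch below uses pandas.merge_asof to align two asynchronous recordings onto a common time base (after applying an assumed clock offset) and a small helper to convert azimuth/elevation/range reports into a local East-North-Up frame. The clock offset, tolerance, and column names are assumptions for the example; a full pipeline would also transform the latitude/longitude/altitude reports into the same frame.

```python
import numpy as np
import pandas as pd

def aer_to_enu(az_deg, el_deg, rng_m):
    """Convert azimuth/elevation/range (degrees, meters) into a local
    East-North-Up frame centered on the reporting sensor."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    east = rng_m * np.cos(el) * np.sin(az)
    north = rng_m * np.cos(el) * np.cos(az)
    up = rng_m * np.sin(el)
    return east, north, up

# Hypothetical recordings from two asynchronous interfaces.
own = pd.DataFrame({
    "time": pd.to_datetime(["2024-01-01 12:00:00.00",
                            "2024-01-01 12:00:00.50",
                            "2024-01-01 12:00:01.00"]),
    "az": [45.0, 45.2, 45.4], "el": [10.0, 10.1, 10.2],
    "rng": [10_000.0, 9_990.0, 9_980.0]})
other = pd.DataFrame({
    "time": pd.to_datetime(["2024-01-01 12:00:00.20",
                            "2024-01-01 12:00:00.70"]),
    "lat": [34.01, 34.02], "lon": [-117.01, -117.02],
    "alt": [3000.0, 3010.0]})

# Time shift one recording to correct an assumed inter-aircraft clock offset.
other["time"] = other["time"] - pd.Timedelta("150ms")

# Align the asynchronous streams: match each report to the nearest
# counterpart within a tolerance, rather than requiring identical timestamps.
aligned = pd.merge_asof(own.sort_values("time"), other.sort_values("time"),
                        on="time", direction="nearest",
                        tolerance=pd.Timedelta("250ms"))

# Transform the az/el/range reports into the common ENU frame.
aligned["e"], aligned["n"], aligned["u"] = aer_to_enu(
    aligned["az"], aligned["el"], aligned["rng"])
```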
Once the tracks are on the same time scale and in the same coordinate frame, we can compare tracks to each other (either from the same aircraft or different aircraft) and compare them to “truth” (the best state estimates available for the relevant physical entities in the airspace). In addition to the track matching analysis, we analyze sensor error over time and as software and hardware configurations change. This talk will discuss this analysis process, its challenges, and how we leverage Python and pandas to get the job done.
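A hedged sketch of the aggregate side of this analysis: once tracks and truth share a time base and coordinate frame, per-report error reduces to column arithmetic, and pandas' groupby makes it straightforward to summarize that error across missions and configurations. The frame assumed below (one row per aligned track/truth report pair) and its column names are illustrative.

```python
import numpy as np
import pandas as pd

# One row per time-aligned pair of (track report, truth state), already in a
# common ENU frame. The schema here is illustrative.
df = pd.DataFrame({
    "mission": ["M1", "M1", "M2", "M2"],
    "sensor": ["radar", "radar", "radar", "eo"],
    "sw_version": ["1.0", "1.0", "1.1", "1.1"],
    "track_e": [100.0, 110.0, 95.0, 102.0],
    "track_n": [200.0, 205.0, 198.0, 201.0],
    "track_u": [3000.0, 3001.0, 2999.0, 3000.5],
    "truth_e": [101.0, 109.0, 97.0, 101.5],
    "truth_n": [201.0, 206.0, 197.0, 200.0],
    "truth_u": [3000.2, 3000.5, 2999.5, 3000.0]})

# Per-report 3D position error against truth.
df["pos_err_m"] = np.sqrt((df["track_e"] - df["truth_e"]) ** 2 +
                          (df["track_n"] - df["truth_n"]) ** 2 +
                          (df["track_u"] - df["truth_u"]) ** 2)

# Aggregate sensor error across missions and software configurations.
summary = (df.groupby(["sensor", "sw_version", "mission"])["pos_err_m"]
             .agg(["count", "mean", "median", "std"]))
print(summary)
```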
A key component of our analysis is visualization. We need to explore the data visually, and we need to communicate the results effectively. To that end, we use a mixture of matplotlib, PyQtGraph, and Bokeh. We will discuss how and why we use these tools, along with new methodologies and frameworks we are investigating.
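To give a flavor of the static end of that spectrum, here is a minimal matplotlib example plotting per-sensor position error over time; the small inline frame stands in for the kind of output the aggregation sketch above produces, and interactive exploration of the same data is where tools like PyQtGraph and Bokeh come in.

```python
import matplotlib.pyplot as plt
import pandas as pd

# A tiny illustrative frame: per-report position error against truth,
# by sensor, over time.
df = pd.DataFrame({
    "time": pd.to_datetime(["2024-01-01 12:00:00", "2024-01-01 12:00:01",
                            "2024-01-01 12:00:00", "2024-01-01 12:00:01"]),
    "sensor": ["radar", "radar", "eo", "eo"],
    "pos_err_m": [1.7, 1.9, 2.4, 2.1]})

fig, ax = plt.subplots()
for sensor, grp in df.groupby("sensor"):
    ax.plot(grp["time"], grp["pos_err_m"], marker="o", label=sensor)
ax.set_xlabel("report time")
ax.set_ylabel("position error vs. truth (m)")
ax.legend()
plt.show()
```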