Sending a satellite to space is easier than ever before. CubeSats are relatively inexpensive, yet capable enough to track ships, watch for earthquakes, or even observe exoplanets. Ease of launch does not mean ease of operation, though: satellites send a wealth of telemetry back to Earth, and turning that flood of information into action is difficult. The difficulty only grows as fleets scale from one-off research projects to hundreds of satellites providing commercial services.
Since the beginning of 2018, the open-source Polaris project has used Python's rich ecosystem to build a machine-learning pipeline applicable to any mission. Since mid-2020, it has done so under the umbrella of the Libre Space Foundation.
Polaris analyzes the telemetry of each satellite, automatically extracts dependencies among its components, and displays them in an interactive, browser-based 3D graph. Spacecraft operators can navigate this graph to understand the relationships among their telemetry channels. This gives operators not only another tool to monitor performance, diagnose problems, and predict satellite behaviour, but also a succinct, intuitive representation of all the information being evaluated.
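To make the dependency extraction concrete, here is a minimal sketch of how a graph like this could be assembled. It assumes that some upstream model has already produced a pairwise importance score for each ordered pair of telemetry channels (how strongly one channel helps predict another); the channel names, scores, and threshold below are all hypothetical illustration, not Polaris's actual output.

```python
def build_dependency_graph(scores, threshold=0.2):
    """Keep only pairwise dependencies whose importance score clears the threshold.

    `scores` maps (source, target) channel-name pairs to a score in [0, 1].
    Returns an adjacency mapping: {source: {target: score, ...}, ...}.
    """
    graph = {}
    for (src, dst), score in scores.items():
        if score >= threshold:
            graph.setdefault(src, {})[dst] = score
    return graph

# Hypothetical scores for three telemetry channels.
scores = {
    ("battery_temp", "battery_voltage"): 0.8,
    ("solar_current", "battery_voltage"): 0.6,
    ("battery_temp", "solar_current"): 0.05,  # below threshold, dropped
}
graph = build_dependency_graph(scores)
# graph → {"battery_temp": {"battery_voltage": 0.8},
#          "solar_current": {"battery_voltage": 0.6}}
```

An adjacency structure like this maps directly onto the nodes and weighted edges rendered in the browser-based 3D view.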
We have already presented Polaris in previous editions of OSCW, but this year we want to give an update to the Community about the most recent developments. In particular, we’d like to show how Polaris can detect anomalies in telemetry and how we use external parameters, such as space weather, to get better insight.
Over the last year, mainly thanks to Google Summer of Code 2020, we implemented an autoencoder-based (deep learning) algorithm that creates concise representations of the telemetry data at each timestamp. By computing the vector distance between these "neural kernels" (the learned latent representations) at consecutive timestamps, we obtain a number that tells us how far an ensemble of telemetry has deviated over time. By setting a threshold on these numbers, alerts can be triggered whenever a high value, i.e. an anomaly, is detected. This works well because, as the model learns the best and smallest representation of the telemetry data, it also learns how a change in one data point is likely to affect the rest, allowing it to automatically amplify important events and subdue routine ones.
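The thresholding step can be sketched in a few lines. This is an illustration only, assuming a trained encoder has already mapped each timestamp's telemetry to a latent vector; the toy vectors and the threshold value below are made up.

```python
import math

def latent_drift(latents):
    """Distance between the latent vectors of consecutive timestamps."""
    return [math.dist(a, b) for a, b in zip(latents, latents[1:])]

def detect_anomalies(latents, threshold):
    """Return timestamp indices whose drift from the previous step exceeds the threshold."""
    return [i + 1 for i, d in enumerate(latent_drift(latents)) if d > threshold]

# Hypothetical 2-D latent vectors; the large jump into the last one is the anomaly.
latents = [(0.0, 0.0), (0.1, 0.0), (0.1, 0.1), (2.0, 2.0)]
anomalies = detect_anomalies(latents, threshold=1.0)
# anomalies → [3]
```

In practice the latent space has many more dimensions and the threshold would be tuned per mission, but the alerting logic reduces to exactly this comparison.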
On the space-weather side, we added support for proton, electron, magnetic, gamma, and solar events, with data sourced from swpc.noaa.gov. This addition is especially useful because the effect of space weather on CubeSat hardware, reflected in the telemetry, can now be observed directly.
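Because space-weather feeds and satellite telemetry arrive on different cadences, the two series have to be aligned in time before they can be analyzed together. The snippet below is a minimal, hypothetical sketch of one way to do that: for each telemetry timestamp, pick the space-weather reading whose timestamp is nearest. The sample timestamps and flux values are invented for illustration and are not real SWPC data.

```python
import bisect

def nearest_reading(sw_times, sw_values, t):
    """Return the space-weather value whose timestamp is closest to t.

    `sw_times` must be sorted in ascending order.
    """
    i = bisect.bisect_left(sw_times, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(sw_times)]
    best = min(candidates, key=lambda j: abs(sw_times[j] - t))
    return sw_values[best]

# Hypothetical proton-flux readings (timestamps in seconds).
sw_times = [0, 600, 1200]
sw_flux = [1.2, 3.4, 9.9]

# Align one flux value to each telemetry timestamp.
telemetry_times = [100, 650, 1300]
aligned = [nearest_reading(sw_times, sw_flux, t) for t in telemetry_times]
# aligned → [1.2, 3.4, 9.9]
```

Once aligned this way, the space-weather values can be treated as extra telemetry channels and fed into the same dependency and anomaly analysis as the on-board data.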
We want to begin with an update on the project as described above, then open the floor for the audience to give feedback, request features, and brainstorm useful ways to integrate machine-learning analysis into spacecraft operations.