Approach to Making Your First Satellite: #1 MVP

What happens when software engineers start figuring out how to build a satellite system? They begin by building a Minimum Viable Product (MVP), gaining valuable hands-on experience in the process. Here is our first developer-penned blog post for the more tech-savvy readers, explaining how Reaktor approached the challenge of building a functional end-to-end MVP, from the mission control room to the CPUs inside the Hello World satellite.

This is the first blog post in the series of more technically oriented articles that aim to cover what we’ve been working on so far and what we intend to do in the near future. We try to use terminology related to this domain, so, depending on your background, some things may sound more foreign than they really are.

This article gives a quick background on the motivation for the project and its technical state so far. Later in the series, we aim to cover different aspects of the project in more detail and also release other material along the way for the more technically oriented audience. In the meantime, you can follow #ReaktorSpace on Twitter for semi-daily updates.


Background: Why Space?

The motivation for this project is to better understand the specific problems of the domain in order to be able to enter the market and do business in this very interesting and technically challenging environment.

As our background is in software, a significant part of our work is built on top of the learnings from the Aalto-1 and Aalto-2 satellites that will be launched later this year. We’re working in close collaboration with the people involved with those projects. They have the best understanding of the radio and hardware aspects related to this project.

Reaktor MVP Approach

In order to validate our expectations that we could develop something practical, we wanted to take the same mental approach that we apply in our more common projects: start with a Minimum Viable Product (MVP). To maximize the learning opportunities, we want to work from end to end, without logical or practical black boxes. This means anything from the mission control user interface (UI) on your terminal of choice to the hardware (HW) in the satellite.

This approach allows us to minimize the possibility of bumping into major obstacles late in the project that might invalidate the whole effort. As an MVP is not a one-shot throwaway proof of concept (PoC), you still need to keep the long-term vision in mind in order to evolve the implementation according to emerging and planned future needs. Since it is also a fully functional system, you could take it – or any version built on top of it – into use without considerable lead time.

Even though this project might in general sound like we are just solving technical problems, it also poses non-technical challenges that need to be answered. Because this is a new kind of project for us, we also need to understand new regulations and find a balance with public relations (PR) and training.

By sharing our learnings, we hope to attract new interest from talent and businesses that we might’ve missed before. Knowledge sharing is also very important for building up the required technical knowhow within the company for potential future projects. With this series of technical articles, we hope to facilitate at least some of these goals.

Now that the background is covered, let’s dive in.

Parts of the MVP

So what do MVP and end-to-end mean in the context of this project? This early in the project, it’s something you can fit on your desk. This includes the complete pipeline over the Internet and radio, from the UI to the sensors on board the satellite hardware and back. On the table, only the large production antennas need to be replaced with smaller ones.

The parts that form the end-to-end functionality are as follows:

  • End-user mission control interface, be it a mobile UI or a customer-facing API for integrating customers’ backend systems
  • Core infrastructure, which manages all the resources and delegates end-user requests further toward the satellites and compiles responses
  • Ground stations, providing the physical link to the satellites over various radio technologies
  • A satellite platform hosting the end-user applications that collect and process data from the available sensors according to the specific mission requirements. Sensors can be basic cameras, telemetry (TM) sensors, or more scientific instruments.

Mission control – from the business point of view – is the logical part of the whole system. It determines at a high level what operations should be done with the resources available in the system. Ground stations provide the means of communication with the satellites, and satellites provide the means to host an array of resources that mission control can use.

Next, let’s go through what each of these parts means in practice at the moment.

Mission control

The minimal requirement for mission control is to provide a programming interface with a restricted set of capabilities. This can be extended in the future for more specific customer and operational needs. At the moment, this is a low-level telecommand (TC) API that can be invoked over a network by piping your usual UNIX command line tools together. The server side is a simple TCP server that functions as a bridge for the GS software. Later, this will most likely evolve into an HTTP REST API that provides higher-level functionality. Clients can then be implemented on top of this higher-level API as needed.
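To make that concrete, here is a minimal sketch of what invoking such a low-level TC API could look like from the client side. The host, port, and the PING command are made-up placeholders rather than our actual interface; on the command line, piping echo into nc achieves the same thing.

    # Minimal sketch of invoking a low-level TC API over TCP.
    # The host, port, and "PING" command are hypothetical placeholders.
    import socket

    def send_telecommand(command: bytes, host: str = "localhost", port: int = 4000) -> bytes:
        """Open a TCP connection to the mission control bridge, send one
        telecommand, and return whatever response the bridge relays back."""
        with socket.create_connection((host, port), timeout=10) as sock:
            sock.sendall(command + b"\n")
            return sock.recv(4096)

    if __name__ == "__main__":
        print(send_telecommand(b"PING"))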

Core infrastructure

As our MVP setup currently has only a single ground station and a satellite, the mission control backend is not yet a priority and is more of a logical pass-through layer at this point, facilitating only real-time communications with the satellite. However, the design must be flexible enough to support multiple ground stations, satellites, and mission-control clients operating simultaneously.

The backend is responsible for tracking all of the configured satellites and commanding the ground stations to facilitate communication between mission control and the satellites. It monitors the transmitted telemetry data and collects the application data. It also queues and delivers data both ways. For example, in the case of lossless application data, it needs to store and compose file fragments over several days, as the files might be large compared to the available bandwidth. The connection quality might also be very weak, causing corrupted or dropped packets.
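As a sketch of the bookkeeping this implies, consider a minimal fragment reassembler like the one below. The fragment indexing scheme is invented for illustration; the real backend also has to deal with timeouts, multiple concurrent files, and persistence across passes.

    # Sketch of reassembling a downlinked file from out-of-order fragments
    # collected over several passes. The numbering scheme is hypothetical.
    class FileReassembler:
        def __init__(self, total_fragments: int):
            self.total = total_fragments
            self.fragments: dict[int, bytes] = {}  # fragment index -> payload

        def add(self, index: int, payload: bytes) -> None:
            # Duplicates from retransmissions simply overwrite the old copy.
            self.fragments[index] = payload

        def missing(self) -> list[int]:
            # Fragment indices to re-request on the next pass.
            return [i for i in range(self.total) if i not in self.fragments]

        def assemble(self) -> bytes:
            if self.missing():
                raise ValueError("file incomplete, fragments still missing")
            return b"".join(self.fragments[i] for i in range(self.total))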

At the moment, we use GPredict locally for ground station management. It can control the antenna motors for tracking and feed Doppler frequency corrections to the radio software.
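GPredict’s radio control speaks the Hamlib rigctld protocol over TCP, so on the radio side it is enough to run a small server that accepts frequency updates. The sketch below handles just the two commands GPredict needs: F to set the Doppler-corrected frequency and f to read it back. The 437.5 MHz initial frequency is a placeholder, and a real handler would retune the SDR pipeline instead of printing.

    # Minimal server speaking the subset of the Hamlib rigctld TCP protocol
    # that GPredict's radio control uses. Port 4532 is rigctld's default.
    import socketserver

    current_freq = 437_500_000  # placeholder downlink frequency in Hz

    class RigctlHandler(socketserver.StreamRequestHandler):
        def handle(self):
            global current_freq
            for line in self.rfile:
                cmd = line.decode().strip().split()
                if not cmd:
                    continue
                if cmd[0] == "F":    # set Doppler-corrected frequency
                    current_freq = int(float(cmd[1]))
                    print(f"retune to {current_freq} Hz")
                    self.wfile.write(b"RPRT 0\n")
                elif cmd[0] == "f":  # report current frequency
                    self.wfile.write(f"{current_freq}\n".encode())
                else:                # acknowledge anything else
                    self.wfile.write(b"RPRT 0\n")

    if __name__ == "__main__":
        with socketserver.TCPServer(("0.0.0.0", 4532), RigctlHandler) as server:
            server.serve_forever()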

Ground stations

Each ground station in our design functions as a radio bridge between mission control and the application end-points in the satellite. The ground station knows only a minimal amount of information regarding the data that passes through it. The core infrastructure keeps track of all of the satellites and commands each ground station, passing required tracking and radio configurations for targeted satellites.

Our radio setup is based on software-defined radio (SDR), wherein the radio pipeline is configured and run on commercial off-the-shelf (COTS) CPUs instead of specialized hardware. We currently use the GNU Radio open-source software, which has excellent prototyping and extension capabilities. We’ve also used Gqrx (also based on GNU Radio) for manual operations and learning. SDR provides a very flexible way of working because it shifts development into software that can be modified easily as needed. The hardware only translates between the radio signal and the signal generated by the SDR software pipeline.
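To give a taste of what this looks like in practice, here is a minimal GNU Radio flowgraph written in Python that streams raw I/Q samples from a HackRF to a file through the gr-osmosdr wrapper. The sample rate and the 437.5 MHz center frequency are arbitrary placeholders, not our actual link parameters.

    # Minimal GNU Radio flowgraph: HackRF -> raw I/Q samples -> file.
    # Assumes GNU Radio and gr-osmosdr are installed and a HackRF is attached.
    from gnuradio import gr, blocks
    import osmosdr

    class IQRecorder(gr.top_block):
        def __init__(self):
            gr.top_block.__init__(self, "IQ recorder")
            src = osmosdr.source(args="hackrf=0")
            src.set_sample_rate(2e6)          # placeholder sample rate
            src.set_center_freq(437.5e6)      # placeholder UHF frequency
            src.set_gain(20)
            sink = blocks.file_sink(gr.sizeof_gr_complex, "capture.iq")
            self.connect(src, sink)

    if __name__ == "__main__":
        tb = IQRecorder()
        tb.start()
        input("Recording; press enter to stop.")
        tb.stop()
        tb.wait()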


We are currently using two different SDRs for development: the HackRF One and the USRP B210. The HackRF can be configured as a half-duplex transceiver for frequencies ranging from 1 MHz to 6 GHz. The USRP can run full duplex in the range from 70 MHz to 6 GHz. So far, we’ve used basic telescopic monopole antennas for two-way desktop communications. Outdoors, we’ve tested handheld UHF Yagis and tripod-mounted VHF cross dipole antennas for receiving weather satellites. The next step is to acquire and integrate software-controlled motorized masts that support automatic tracking of the satellites and fit the proper antennas.


Satellite platform

The Hello World satellite platform is mostly based on the design and learnings from the Aalto-1 and Aalto-2 satellites that will be launched this year. It is a two-unit CubeSat form factor satellite. The satellite subsystems consist of 10×10 cm PCBs, each built from COTS components and connected to each other via a CAN bus. In the flight configuration, the subsystems are connected to each other via a stack connector; currently, a bunch of jumper wires do that job.

The subsystems include:

  • An On-Board Computer (OBC), an industrial safety-hardened CPU working as the flight computer
  • A power subsystem (EPS) responsible for charging the batteries with solar panels
  • A magnetorquer-based Attitude Determination and Control System (ADCS) that functions as the means of rotating the satellite (we don’t have propulsion, so we can only aim the satellite)
  • A communications subsystem providing us with the UHF radio link
  • An application processing unit (APU)

The subsystems other than the OBC and APU each run their own microcontrollers that abstract details of those systems behind the CAN interface. Additional capabilities include a 13-megapixel camera with very basic optics, an S-Band high-frequency downlink, GPS, and possibly something else that we can easily fit within the size, power, bandwidth, and monetary budget constraints.
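To give a flavor of what hiding a subsystem behind the CAN interface means, here is a hedged sketch using the python-can library over a Linux SocketCAN interface. The arbitration ID and the one-byte opcode are invented for illustration; our actual bus protocol differs.

    # Sketch of requesting telemetry from a subsystem over the CAN bus.
    # Assumes python-can and a Linux SocketCAN interface named "can0".
    import can

    def request_telemetry(bus: can.BusABC, subsystem_id: int) -> can.Message:
        """Send a (hypothetical) telemetry request frame and wait for a reply."""
        request = can.Message(arbitration_id=subsystem_id,
                              data=[0x01],  # pretend opcode: "send telemetry"
                              is_extended_id=False)
        bus.send(request)
        return bus.recv(timeout=1.0)  # first frame back, or None on timeout

    if __name__ == "__main__":
        with can.interface.Bus(channel="can0", bustype="socketcan") as bus:
            reply = request_telemetry(bus, subsystem_id=0x10)  # e.g. the EPS
            print(reply)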

Portions of the subsystems are duplicated with cold spares in case something goes horribly wrong. The probability of a failure increases over time due to, e.g., radiation conditions in space. The current end-to-end table setup consists of OBC, EPS, UHF link, APU, and camera.

Going forward

With this setup, we can currently send commands from mission control to the GNU Radio SDR stack, which transmits the commands over UHF to the APU. The APU can then take photos and send an acknowledgement of the operation’s success back to mission control.

We’re currently working on a higher-level protocol stack to allow reliable commands and better addressing capabilities between mission control and the satellites, including streaming the payload-produced data back to the ground systems intact.
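At its core, such a reliable link layer boils down to framing: each frame carries an address, a sequence number for ordering and acknowledgements, and a checksum so that corruption can be detected and a retransmission requested. The header layout below is invented for illustration and is not our actual protocol.

    # Sketch of a framed link-layer packet: destination, sequence number,
    # payload length, payload, and a trailing CRC-32. Layout is hypothetical.
    import struct
    import zlib

    HEADER = struct.Struct(">BHH")  # destination, sequence number, payload length

    def encode_frame(dest: int, seq: int, payload: bytes) -> bytes:
        header = HEADER.pack(dest, seq, len(payload))
        crc = zlib.crc32(header + payload)
        return header + payload + struct.pack(">I", crc)

    def decode_frame(frame: bytes) -> tuple[int, int, bytes]:
        header, rest = frame[:HEADER.size], frame[HEADER.size:]
        payload, crc = rest[:-4], struct.unpack(">I", rest[-4:])[0]
        if zlib.crc32(header + payload) != crc:
            raise ValueError("CRC mismatch; drop frame and request retransmit")
        dest, seq, length = HEADER.unpack(header)
        return dest, seq, payload[:length]

    # Round-trip check:
    assert decode_frame(encode_frame(1, 42, b"hello"))[2] == b"hello"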

We hope you liked our first technical post from the project. We will cover different aspects of the project in greater detail in upcoming articles. If you have feedback or requests regarding what we cover and how, let us know so we can better serve the interests of our audience.

More information about Reaktor Space here.
