Savva Pistolas

Reflections on the radiowaves - TETRA:BURST and secure software in CNI.

[Image: TETRA:BURST comic]

There’s an article doing the rounds relating to ‘TETRA:BURST’, wherein five vulnerabilities have been discovered in the radio technology that supports multiple government agencies, law enforcement, and emergency services both in the UK and abroad. There are a few different angles to look at here - including the fact that one of the major issues with the standard appears to come from ‘intentional weakening’ done to facilitate the sale of the systems abroad to actors considered hostile by the United States. I wanted to reflect on a few bits and pieces around it, and will start with some context.

The Context

A research team has discovered the aforementioned issues, which allow for the eavesdropping, interception, and injection of signals through the system. This affects even the stronger cryptographic algorithms in use by TETRA in the UK and abroad. It is a profound change to the risk surface of any critical or sensitive system that uses this technology, whether for communication or for the control of systems and processes. I would suggest reading either the Wired article on the vulnerabilities or the Register’s piece for context.

The TETRA standard was developed by the European Telecommunications Standards Institute (ETSI) beginning in the late 1980s; the first version of TETRA was published in 1995, providing the foundation for the standard's subsequent evolution and widespread adoption across Europe. The standard has been refined over the years across multiple releases. While it includes provisions for encryption to ensure secure communications, the encryption algorithms and technologies used in TETRA systems are not publicly disclosed. When ETSI deals with third parties such as governments or CNI operators, those parties are subject to non-disclosure agreements (NDAs) and are expected to treat their knowledge of the encryption process and algorithms as proprietary information.

The researchers were unhindered by this and were able to make their discoveries using an off-the-shelf Motorola MTM5400 radio. It took them four months to extract the algorithms from the device.

Looking back on the publicly available information on the system, it’s surprising that this wasn’t called out sooner - a WikiLeaks document sharing a 2006 US State Department communication describes the concerns that an Italian radio producer had in exporting the TETRA radio system to Iran. Those concerns were eased with a nudge-and-wink reassurance that the TETRA-based system was ‘less than 40 bits’, very clearly suggesting that this wasn’t as strong a standard as was expected of TETRA in friendly markets.
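To put that figure in perspective, here’s a rough back-of-the-envelope sketch in Python. The guess rate is an assumed, illustrative number rather than a benchmark, but it shows why a keyspace of ‘less than 40 bits’ offers almost no protection against a motivated attacker:

```python
# Back-of-the-envelope only: the guess rate below is an assumption, not a benchmark.
KEYSPACE = 2 ** 40                  # upper bound implied by 'less than 40 bits'
GUESSES_PER_SECOND = 1_000_000_000  # assumed rate for modest modern hardware

seconds = KEYSPACE / GUESSES_PER_SECOND
print(f"{KEYSPACE:,} candidate keys at {GUESSES_PER_SECOND:,} guesses/s "
      f"-> ~{seconds / 60:.0f} minutes to exhaust the keyspace")
# Roughly 18 minutes worst case, and about half that on average.
```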

It’s worth mentioning that the company has since issued a statement sharing that “The TETRA security standards have been specified together with national security agencies and are designed for and subject to export control regulations which determine the strength of the encryption”. This is not at all reassuring, and suggests that security products destined for export are weakened at the point of sale for foreign actors of interest to the intelligence agencies of the USA.

The Reflections

I find events like these incredibly interesting, because they almost always involve a private sector actor refusing to collaborate on public-facing security research, and then having to admit to the presence of huge vulnerabilities when researchers find what they were looking for anyway. The outcome is that various and diverse components of the Critical National Infrastructure of multiple countries are compromised, and we’re all left wondering whether bad actors are already utilising these exploits (almost certainly, in this case). It invites reflection on whether ‘security through obscurity’ is ever a viable recommendation, even in cases where national security is involved.

The general position is that hiding the underlying mechanisms by which a secure communication technology works is itself a security measure, because it hinders or frustrates efforts to reverse-engineer or tamper with the technology. This position is concretised in the requirement to sign NDAs in order to use the technology - and, in some cases, in punishing those who conduct research on private and secret systems. It is not a tenable position.

Firstly - as the evidence historically and continuously suggests, the secrecy of a codebase or technology does not interfere with reverse engineering efforts over time. If an actor has access to even a single example of the binary or commercial device that runs the software, they are positioned to run security tests and discover vulnerabilities. Secondly, NDAs are not adhered to by bad actors, meaning that the only utility of this legal framework is damage control in the courts after a security incident - not preventative security research or continuous improvement.

Finally, keeping controls and processes hidden provides avenues for avoiding accountability; the perception of privacy that comes from something being hidden invites a false sense of security. This bad habit of assuming secrets are safe has been discussed as a failure against the principles of secure design by the Cybersecurity and Infrastructure Security Agency (CISA), amongst others.

Security through obscurity (or ‘security by obscurity’) does have some commonly accepted variants that are considered useful additions to the security of a given application - provided they are not treated as its sole protective measures. Examples include hiding the version numbers of the software that comprises your website or application, expressly to delay or annoy attackers during their reconnaissance. Being realistic, this frustration can only ever materialise as time or money, and in the case of threats to Critical National Infrastructure neither is ever going to be a deciding factor. I can’t foresee the round table of bad actors looking to disrupt the essential industries of a developed state being effectively delayed or dismayed by the possibility of having to spend an extra few weeks of manpower on guessing how something works.
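As a hypothetical illustration of the reconnaissance step that version-hiding is meant to slow down, the Python sketch below (using the third-party requests library and a placeholder URL) simply reads whatever version string a web server chooses to advertise. Suppressing that header costs an attacker a little time; it does not change whether the software behind it is actually patched.

```python
# Hypothetical illustration: example.com is a placeholder target.
import requests  # third-party HTTP client: pip install requests


def advertised_server(url: str):
    """Return whatever the server volunteers about itself in its headers."""
    response = requests.head(url, timeout=5, allow_redirects=True)
    return response.headers.get("Server")  # e.g. "nginx/1.18.0", or None if hidden


if __name__ == "__main__":
    print(advertised_server("https://example.com"))
```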

This incident reminded me of an article I read about the licence agreement that the farming hardware and vehicle manufacturer John Deere placed on farmers who purchased its equipment. The agreement expressly forbade any meaningful repair or modification work on that equipment. It effectively banned farmers out in the middle of nowhere from working on their tractors, and required John Deere to send out technicians when things went wrong. Given that the same agreement barred farmers from suing for losses caused by non-functioning embedded software, farmers would be left high and dry if the software failed them during harvesting season. The natural response was to develop their own system of resilience, which in practice meant cracking the software to run an alternative operating system written by an anonymous Ukrainian developer.

The quite baffling outcome of this situation is that this commitment to security by obscurity (and, clearly, the attempt to lock folk into an ecosystem and protect proprietary information) on the part of John Deere may actually have undermined the food security of the United States. It was (and continues to be) an immeasurable phenomenon, as farmers will, for obvious reasons, not confess to breaching the licence agreement.

Will there be, for example, a flash point in the near future where 35% of the tractors that keep America fed seize up or fall foul of a ransomware attack, with John Deere able to deny any involvement despite creating the conditions by which the vulnerability was introduced? It certainly is possible.

That is, unless the attack targets their standard, unaltered system - discovered in 2021 to be running an insecure version of Linux with a pathway to root access.

The clear takeaway for me is that, at the grandest and most important scales, we are still treating secrecy as synonymous with security. This is a dated view: information security is a demonstrable state, secure or insecure, and the only way to continuously assess that state is with systems of accountability in testing and auditing. We would not suggest that hiding a premises enhances its physical security enough to negate any need for perimeter protections. When it comes to components of Critical National Infrastructure, these mechanisms of obfuscation seem to make insecure systems inevitable. They seem to be in operation only to protect the companies or bodies that produce and benefit from the use of the systems - whether those systems are secure or not.

On the other side of this same coin is the ongoing use by bad actors of open and transparent systems, which, by their open nature, have seen more attention from researchers and improvement over many iterations. It feels nonsensical to me that I can configure a laptop using open source encryption technology that denies access to even the most funded and motivated government actors or agencies, yet our own police communications are susceptible to interception by virtue of the nature of the company that produces their hardware.
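As a minimal sketch of that asymmetry: the snippet below uses Python’s open source cryptography library, which implements openly published and heavily reviewed primitives. Nothing about the algorithm is secret, and yet without the key the ciphertext is unreadable - the secrecy lives in the key, not in obscurity about how the mechanism works.

```python
# Illustrative only: an openly specified algorithm, secret only in its key.
from cryptography.fernet import Fernet  # open source: pip install cryptography

key = Fernet.generate_key()             # 32 bytes of secret key material
box = Fernet(key)
token = box.encrypt(b"a message worth protecting")

print(box.decrypt(token))               # recoverable only by a holder of the key
```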

We should take the very same logic we apply to the industries that are so essential to the fabric of our nations and economies that we designate them ‘Critical National Infrastructure’, and use it to identify and bring under protection the software that enables their secure operation. This software should be open and available where possible, so that it can benefit from the security gains that the community-focused, peer-to-peer internet brings to other such examples. It would be a monumental, but most useful, task. To do anything less makes inevitable the next TETRA:BURST, wherein we discover that the information security we rely on to save lives is built on foundations of sand.
