What is a TEE and what are the challenges of validating one?


This post is Part 1 of a 3-part series derived from TrustInSoft’s latest white paper, “How Exhaustive Static Analysis Can Prove the Security of TEEs.” To obtain a FREE copy, CLICK HERE.

Modern consumer technology is increasingly complex, connected, and personalized. Our devices are carrying more and more sensitive data that must be protected.

We need to be able to trust our devices to protect both themselves and our data from malevolent hackers. In essence, we need our devices to guarantee us a high level of trust.

A trusted execution environment (TEE) is a secure area within a processor designed to provide the level of trust we require. It is an environment in which the code executed and the data accessed are both isolated and protected. It ensures both confidentiality (no unauthorized parties can access the data) and integrity (nothing can change the code and its behavior).1 Figure 1 illustrates the partitioning of a trusted execution environment (“Secure World”) within a processor.
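As a loose illustration of those two guarantees, the toy Python sketch below models a “secure world” that holds a device key. All names here are hypothetical and illustrative only; real TEEs enforce this isolation in hardware, not in application code. The “normal world” can ask the secure world to attest data but can never read the key itself (confidentiality), and any tampering with protected data is detected via a message authentication code (integrity).

```python
import hashlib
import hmac


class SecureWorld:
    """Toy model of a TEE's secure world (hypothetical, not a real TEE API)."""

    def __init__(self, device_key: bytes):
        # Confidentiality: the key is held privately and never exposed;
        # the class deliberately offers no way to read it back.
        self.__key = device_key

    def seal(self, data: bytes) -> bytes:
        # Integrity: produce a MAC that only the secure world can compute.
        return hmac.new(self.__key, data, hashlib.sha256).digest()

    def verify(self, data: bytes, tag: bytes) -> bool:
        # Any modification of the protected data changes the MAC and is detected.
        return hmac.compare_digest(self.seal(data), tag)


# "Normal world" usage: it can request attestation, but never sees the key.
tee = SecureWorld(b"device-unique-key")
tag = tee.seal(b"boot image v1")
print(tee.verify(b"boot image v1", tag))  # True: data is untampered
print(tee.verify(b"boot image v2", tag))  # False: tampering detected
```

This is only a software metaphor for the partitioning shown in Figure 1; in an actual TEE such as one built on Arm TrustZone, the boundary between the two worlds is enforced by the processor itself.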

Figure 1: Partitioning a system using a trusted execution environment (Source: #embeddedbits) 2

TEEs are essential components of many of the devices we own and use, including smartphones, tablets, game consoles, set-top boxes, and smart TVs. They are well-suited to providing security for applications like storage and management of device encryption keys, biometric authentication, mobile e-commerce applications, and digital copyright protection.

For a TEE to fulfill its function, however, the behavior of its code must be perfectly deterministic, reliable, and impervious to attack. Therefore, it must be free of software errors that could be either sources of anomalous behavior or vulnerabilities for hackers to exploit.

The challenges of validating a trusted execution environment

But TEEs are made up mostly of code. Lots and lots of code. While not as extensive as rich operating systems, they’re still complex beasts.

When we write code, we introduce bugs. That’s simply inevitable, due to the complexity of today’s applications and the propensity for human error in complex processes like software development. The larger and more complex the piece of code, the greater the number of bugs introduced, and the more difficult it is to detect and eliminate those bugs. According to data pipeline management and analytics firm Coralogix:3

  • Developers, on average, create 70 bugs per 1,000 lines of code
  • 15 bugs per 1,000 lines of code find their way to customers
  • Fixing a bug takes 30 times longer than writing a line of code
  • 75% of a developer’s time (1,500 hours/year) is spent on debugging

For many applications—most consumer apps, for example—removing bugs is not a great concern. As long as a bug isn’t a showstopper, it can be cleaned up in a future release.

For applications where safety, reliability, or security is critical, however, failure to find and remove bugs can lead to disaster. A TEE is one such application. The problem, then, is finding an efficient way to eliminate those bugs, especially the undefined behaviors that attackers can exploit.

Why traditional software testing will fail to validate your TEE

Traditional software testing starts from the software requirements, which are developed by decomposing and refining the top-level system requirements. From those requirements, you brainstorm a test plan and test cases that cover them as thoroughly as possible, then test the software against them. In the end, you hope the tests you’ve performed have adequately verified that the software meets its requirements and that the code exhibits no anomalous behavior under real-world conditions.

The process is an iterative one based on finding bugs and fixing them. It might also be called a “best-effort” process; you do the best you can to find all the bugs you can in the time you have budgeted for software testing.

This traditional process, however, offers no guarantees that every last bug has been eliminated.

The common tools on the market supporting traditional software testing are typically designed to uncover obvious errors. Most search for patterns that are recognized as bad coding techniques. Unfortunately, such tools are not designed to find the more uncommon, subtle, and insidious errors. Thus, they provide no guarantee that, whatever happens in the environment, your software will resist attack and will not crash.

Some software bugs are very subtle. They may be triggered by use cases not envisioned by test planners. Such bugs often remain latent until triggered by an unexpected event… or until exploited by an industrious and ill-intentioned hacker.
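A classic illustration of such a latent bug (a textbook example, not drawn from the incidents below) is a naive leap-year check. It passes every test case a planner is likely to write, because the inputs that expose it lie outside the envisioned use cases:

```python
def is_leap_buggy(year: int) -> bool:
    # Looks correct, and passes tests against any year a tester
    # is likely to pick (2016, 2019, 2020, 2023, ...).
    return year % 4 == 0


def is_leap(year: int) -> bool:
    # Correct Gregorian rule: century years are leap years
    # only when divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)


# Typical "best-effort" test cases: the buggy version looks flawless.
for y in (2016, 2019, 2020, 2023):
    assert is_leap_buggy(y) == is_leap(y)

# The unforeseen input: the bug stays latent until a century year.
print(is_leap_buggy(2100), is_leap(2100))  # True False
```

Requirements-based testing only exercises the inputs someone thought to write down; the defect sits quietly in the untested portion of the input space until an unexpected event, or a motivated attacker, finds it.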

Traditional software testing has failed many companies when it comes to security from cyberattacks. Here are just two recent examples:

  • In January 2017, the US FDA and Dept. of Homeland Security issued warnings concerning at least 465,000 St. Jude Medical RF-enabled cardiac devices. Software vulnerabilities in the devices could have allowed hackers to remotely access a patient’s implanted device and disable therapeutic care, drain the battery, or even administer painful electric shocks. These flaws had been revealed previously by short-selling firm Muddy Waters and the security firm MedSec, which alleged negligence in St. Jude Medical’s software development practices.4
  • The WannaCry ransomware attack of May 2017 encrypted the data of more than 200,000 computers across 150 countries. WannaCry was based on EternalBlue, a sophisticated cyberattack exploit stolen from the U.S. National Security Agency (NSA). EternalBlue exploits a vulnerability in Microsoft’s implementation of the Server Message Block (SMB) protocol in Windows and Windows Server. One month later, the NotPetya malware attack used EternalBlue to destroy data on computers across Europe, the U.S., and elsewhere. According to Kaspersky Labs, damage estimates from WannaCry range from $4 billion to $8 billion. Losses from NotPetya may have surpassed $10 billion.5

Besides lacking a guarantee, traditional software testing has also proven to be a very expensive method for validating critical software: software that needs to be flawless, like the code in a TEE.

In our next post, we’ll examine why that is true. We’ll also look at why industries that place a high premium on reliability or safety began moving away from traditional software testing over twenty years ago, and why developers who need to assure device security must now travel a similar path.

If you don’t want to wait for the rest of the series, download our new white paper, “How Exhaustive Static Analysis Can Prove the Security of TEEs.” Besides the content of this blog series, the white paper also looks briefly at a pair of companies that have adopted this technology to validate trusted execution environments. Plus, we provide some tips on what to look for when choosing the solution that’s right for your organization. You can download your free copy here.



1  Prado, S.; “Introduction to Trusted Execution Environment and ARM’s TrustZone”; #embeddedbits, March 2020.

2  Ibid.

3  Assaraf, A.; “This is what your developers are doing 75% of the time, and this is the cost you pay”; Coralogix, February 2015.

4  “465,000 Abbott pacemakers vulnerable to hacking, need a firmware fix”; CSO, September 2017.

5  Snow, J.; “Top 5 most notorious cyberattacks”; Kaspersky, December 2018.
