29.09.2022

Why safety-critical software is hitting the “affordability wall” and how system developers can cut its cost dramatically

This post is Part 1 of a 3-part series derived from TrustInSoft’s latest white paper, “Delivering Safety-Critical Software Faster and Cheaper: How Exhaustive Static Analysis Guarantees Correctness and Security while Reducing Cost and Schedule.” To obtain a FREE copy, CLICK HERE.


Introduction

In this series of three blog articles, we begin with an example from the aviation industry to highlight factors that apply across safety- and cybersecurity-critical industries, from aviation, defense, and energy to transportation and medical devices.

Ever since General Dynamics rolled out the first fully authoritative fly-by-wire digital flight control system in the F-16 in 1978,[i] critical systems have grown increasingly reliant on software. Over the last several decades, the growing size and complexity of that critical software have caused its development and verification costs to grow exponentially.

Software is now the leading cost in critical systems. According to some experts, one-third of new airplane costs are attributable to software and software development, while in the automotive sector, electronics and software account for 25% of the capital cost of a new vehicle.[ii]

Critical software is hitting the “affordability wall”

According to some researchers, software costs in the aerospace industry have hit an “affordability wall.” This is illustrated in Figure 1, below, where onboard software growth is plotted on a logarithmic scale over time.[iii]

Graph: Estimated onboard SLOC growth in aircraft from 1960 to 2020.
Figure 1: Estimated onboard SLOC growth in aircraft from 1960 to 2020
Source: Redman et al., “Virtual Integration for Improved System Design”, SEI, 2010.

Figure 1, created in 2010, shows that onboard software hit the affordability limit sometime around 2006, at 27 million SLOC. Since then, onboard software has continued to grow. Naturally, its complexity and cost have done likewise.


One of the primary drivers of cost in critical software is the need to assure safety.

Safety must become more cost-effective

Safety has long been a paramount concern in industries such as aerospace, defense, automotive, medical devices, and nuclear energy, where engineering errors can have catastrophic consequences for human life.

Naturally, that concern has extended to software development. The list of standards providing guidance on software development for safety-critical systems is a long one. In the aerospace industry alone, one can count the US DoD MIL-STD-882D, the UK MoD DEF-STAN-00-56, the FAA System Safety Handbook, NASA-STD 8719.13B, and RTCA DO-178C.

This last document, DO-178C, has become the de facto standard for safety in the civil aviation industry. Through Advisory Circular 20-115D, the FAA recognizes DO-178C as an acceptable means of demonstrating that airborne software in civil aircraft complies with US airworthiness regulations.[iv]

While all of these critical industries have codified best practices for achieving safety, there is still no consensus on how to assure safety cost-effectively.


To achieve cost-effective safety, new methods are needed.

Earlier error correction would cut costs dramatically

Infographic: Where faults are introduced, where they are found, and the estimated cost of removal. Late discovery drives qualification costs; early discovery and correction avoids major rework.
Figure 2: Late discovery of software errors increases cost of rework
Source: Feiler et al., “Four Pillars for Improving the Quality of Safety-Critical Software-Reliant Systems”, SEI, 2013.

As shown in Figure 2, 80% of all errors are not discovered until system integration testing or later, yet the rework needed to correct a problem in these later phases can cost 300 to 1,000 times as much as an in-phase correction.[v] In concrete terms: a defect that costs $100 to fix in the phase where it is introduced could cost $30,000 to $100,000 to fix after system integration. Discovering such problems earlier in the life cycle therefore yields dramatic savings, while also increasing the chance that tests pass the first time around.

An additional complication: cybersecurity

Since the turn of the millennium, critical software developers have faced an additional complication: cybersecurity.

Beyond the ransomware and data breaches that afflict every sector, some companies in critical software industries have already suffered costly setbacks and negative publicity due to cybersecurity problems in their products.

In April 2015, a noted cybersecurity researcher who tweeted mid-flight about his airplane’s technical vulnerabilities was detained by the FBI upon landing, and soon found himself at the center of a renewed debate over security risks in commercial airliners. Documents later released by the FBI showed that federal agents believed Chris Roberts of the firm One World Labs didn’t just joke about in-flight computer flaws; he may actually have hacked into a plane’s navigation system and instructed it to change course. If true, this claim suggests that a passenger could gain access to critical flight control systems from the cabin.[vi]

In January 2017, the US FDA and the Department of Homeland Security issued warnings covering some 465,000 St. Jude Medical RF-enabled cardiac devices. Software vulnerabilities in the devices could allow hackers to remotely access a patient’s implant, disable therapeutic care, drain the battery, or even administer painful electric shocks. The short-selling firm Muddy Waters had revealed these flaws in August 2016, based on a report by the security firm MedSec that alleged negligence in St. Jude Medical’s software development practices.[vii]

Industries are working furiously on the problem, but standards addressing software cybersecurity are relatively new, the industry processes that assure compliance with them are not yet mature, and best practices for software cybersecurity assurance are still being established. Often, that means software developers must engage in lengthy discussions with customers and regulatory authorities just to determine what constitutes compliance.

Clearly, in domains that rely on safety-critical and reliability-critical software, any code related to connectivity must be free of software errors and other vulnerabilities that could give hackers access to critical functionality, exposing companies to costly recalls, liability, and reputational damage. Critical software developers will need not only to show compliance with the relevant standards and regulations, but also to adopt technologies and methodologies that can guarantee their software is impervious to cyberattack.
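What might such a vulnerability look like in code? The sketch below is purely illustrative, written for this post rather than drawn from any real avionics or medical codebase: a hypothetical C message handler (the message format, names, and sizes are all invented) that trusts a length field supplied by a remote sender. An unchecked, attacker-controlled length is exactly the class of memory-safety defect, undefined behavior under the C standard, that exhaustive static analysis is designed to detect.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAYLOAD_MAX 64

/* Hypothetical wire format: a one-byte length field followed by payload. */
typedef struct {
    uint8_t len;               /* payload length, as claimed by the sender */
    uint8_t data[PAYLOAD_MAX]; /* payload bytes actually received */
} message_t;

/* Vulnerable handler: trusts the sender-controlled length field.
 * len can be as large as 255 while both buffers hold only 64 bytes,
 * so a malicious message makes memcpy read past msg->data and overflow
 * buf: undefined behavior, and a classic remote-attack primitive. */
void handle_message_vulnerable(const message_t *msg) {
    uint8_t buf[PAYLOAD_MAX];
    memcpy(buf, msg->data, msg->len); /* no bounds check on msg->len */
    if (msg->len > 0)
        printf("first payload byte: %u\n", (unsigned)buf[0]);
}

/* Hardened handler: rejects any message whose claimed length exceeds
 * the space actually available before copying anything. */
int handle_message_checked(const message_t *msg) {
    uint8_t buf[PAYLOAD_MAX];
    if (msg->len > PAYLOAD_MAX)
        return -1; /* malformed or malicious message: refuse it */
    memcpy(buf, msg->data, msg->len);
    if (msg->len > 0)
        printf("first payload byte: %u\n", (unsigned)buf[0]);
    return 0;
}

A conventional test suite can easily miss the defect in the first handler, because it misbehaves only when len exceeds 64; an analysis that considers every possible input value flags the unchecked memcpy immediately.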

This post is Part 1 of a 3-part series derived from TrustInSoft’s latest white paper, “Delivering Safety-Critical Software Faster and Cheaper: How Exhaustive Static Analysis Guarantees Correctness and Security while Reducing Cost and Schedule.” To obtain a FREE copy, CLICK HERE.


In Part 2 of this series, we’ll examine how exhaustive static analysis can slash software development costs and verification schedules for developers of safety-critical systems. We’ll see how some of the biggest names in critical system development have used exhaustive static analysis to reduce verification costs by 80% and more, improve code quality by a factor of ten over industry norms, and improve productivity by as much as 1500%.

References

[i] “Aviation in the Digital Age,” Wikipedia.

[ii] “Making Safety-Critical Software Development Affordable with Static Analysis,” GrammaTech, April 2020.

[iii] Redman et al., “Virtual Integration for Improved System Design,” Proceedings of the 1st Analytic Virtual Integration of Cyber-Physical Systems Workshop, November 2010.

[iv] Advisory Circular 20-115D, “Airborne Software Development Assurance Using EUROCAE ED-12( ) and RTCA DO-178( ),” U.S. Department of Transportation, Federal Aviation Administration, July 2017.

[v] Feiler, P., Goodenough, J., Gurfinkel, A., Weinstock, C., and Wrage, L., “Four Pillars for Improving the Quality of Safety-Critical Software-Reliant Systems,” SEI, April 2013.

[vi] Roberts, P., “Did a hacker really make a plane go sideways?,” Christian Science Monitor, May 2015.

[vii] “465,000 Abbott pacemakers vulnerable to hacking, need a firmware fix,” CSO, September 2017.
