An Introduction to Fuzzing

What it is, who uses it and why, its benefits, and the tools involved

The rapid growth in the connectivity of software-driven products over the last several years is expected to continue for the foreseeable future.[1], [2] That growth, however, has also expanded those products’ attack surfaces and heightened the interest of unscrupulous hackers and hostile governments in breaching them.

The need to counter this growing cyber threat has caused many software development organizations to incorporate fuzzing into their software verification regimen.

But what is fuzzing? Who uses it and why? What tools are involved? And what are its benefits? In this post, we’ll try to answer those questions.

This post is Part 1 of a 3-part series derived from TrustInSoft’s new guide to fuzzing for cybersecurity entitled “Fuzzing and Beyond.” To obtain a FREE copy, CLICK HERE.

What is fuzzing?

In general practice, fuzzing is an automated software testing technique that rapidly applies large numbers of valid, nearly-valid, or invalid inputs to a program, one after the other, in a search for undesired behaviors (vulnerabilities).

By “nearly-valid” inputs, we mean inputs that match the expected form of the input space but contain malformed or unexpected values. The idea is to automatically generate inputs for the program under test (PUT), searching for parameters and input data that cause the program to misbehave in some way. Such misbehavior may reveal a safety or security flaw, such as a crash, a memory leak, or arbitrary code execution.

Ultimately, the goal of fuzzing is to automate the process of finding vulnerabilities by generating a large number of test inputs that exercise a program or system in ways that are unexpected or that stress its functionality.

Motivations and principles of fuzzing

Programmers often make assumptions concerning the structure and contents of the data their programs handle internally.

For example, an application may store in memory an array of a certain size and use a variable to indicate this size. If at some point the actual size of the array does not match what is stored in the size variable, then the assumption is broken and the internal state of the application is invalid. This may, of course, cause severe problems. In our example, an out-of-bounds write—which could result in a crash or an arbitrary code execution if exploited by an attacker—is quite possible.
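To make this concrete, here is a minimal C sketch of the pattern described above (all names are hypothetical, not taken from any real codebase): a record whose first byte claims the payload length. The explicit check is exactly the kind of validation whose absence fuzzing tends to expose.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define BUF_SIZE 16

/* Hypothetical record layout: [1-byte claimed length][payload bytes].
   If the code trusted `claimed_len` blindly, a claimed length larger than
   BUF_SIZE, or larger than the bytes actually present, would cause an
   out-of-bounds write. */
int copy_payload(const uint8_t *record, size_t record_size, uint8_t out[BUF_SIZE]) {
    if (record_size < 1) return -1;
    size_t claimed_len = record[0];
    /* Reject inputs whose claimed size disagrees with reality. */
    if (claimed_len > BUF_SIZE || claimed_len > record_size - 1) return -1;
    memcpy(out, record + 1, claimed_len);
    return (int)claimed_len;
}
```

A fuzzer probing this interface would quickly generate records whose length byte disagrees with the actual payload, which is precisely the invalid state described above.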

Furthermore, even if the program’s internal data manipulation logic is flawless, its internal state may still become corrupted when it reads data from the outside. All external information entering the application, whether it be a command line parameter or a collection of bytes received through a network socket, must be properly verified and either accepted as valid or rejected as invalid.

If the program is completely correct, safe, and secure, it should recognize and gracefully reject any invalid input. However, due to programming oversights or other anomalies, invalid inputs are not always caught at the frontier. Some may be accepted inside the program, corrupting its internal state. This is especially likely to happen when validating external data is not a trivial task.

Applications that handle complex structured inputs—communications using specific protocols or employing specific file formats for information storage, for example—are especially vulnerable.

This is where fuzz testing can be deployed to great effect. Fuzzers generate invalid, unexpected, or completely random data to feed to a given program in the hope of discovering holes in its input verification. Their aim, in essence, is to detect situations in which the program accepts an invalid input as valid.

Although there are different approaches to generating such inputs, many fuzzers skim along the valid/invalid input border. They attempt to generate inputs that are almost valid but contain some subtle invalidity or expose an obscure corner case.

Typical uses of fuzzing

In a nutshell, fuzzing is used to expose flaws in a software program. Historically, it has proven extremely effective at detecting safety and security issues, both in applications and in operating systems.

While fuzzing can be used as a part of any general-purpose software testing program, it is most useful (and most used) in a cybersecurity context. It helps improve robustness against malicious penetration via unanticipated inputs.

In short, fuzzing can be used to detect all kinds of bugs, but it is most often used to uncover security vulnerabilities.

Typical users

Who uses fuzzing? In general, potential users include anyone interested in detecting security vulnerabilities.

More specifically, typical users fall into three general categories.

Hackers (black hats) use fuzzers and fuzzing with malicious intent. Their aim is to detect security vulnerabilities they can exploit to take control of the software, whether for financial gain or for espionage.

The second group, software security researchers (white hats), frequently employ the same methods as black hats. They typically use fuzzing to discover security flaws in new software. They then report their findings to the software vendor so that the defects can be corrected before black hats can do any damage.

The third group includes software developers, penetration testers, and other software testers. This group generally uses somewhat different tools and methods than black hats and white hats, because they have access to the source code. They need to do a far more thorough job than hackers, who only need to find one vulnerability they can exploit.

Next, we’ll look at the differences between the various fuzzing tools and methods these groups use.

Fuzzing engines

The principal tool used in fuzzing is the fuzzing engine. Commonly referred to as “fuzzers,” fuzzing engines are not all created equal. They can be characterized along a number of lines, as the space is highly multi-dimensional. It is important to choose a fuzzer that is well-suited to your application. We will look at a number of ways in which the fuzzer space is segmented.

Need for source code

Fuzzing engines can be either compiler-based or binary-only.

Compiler-based fuzzers require access to the source code. They include a special compiler for the target programming language that adds lightweight instrumentation to the PUT when compiling it. That instrumentation typically collects coverage data during the fuzz campaign or provides data to the oracle function.
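As a concrete illustration of the compiler-based style, a libFuzzer harness is one well-known setup: the engine repeatedly calls the entry point below with generated inputs, and the instrumentation added by clang’s `-fsanitize=fuzzer` flag feeds coverage data back to the engine. The `parse_u32` target here is a hypothetical stand-in for real parsing code.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical function under test: reads a 4-byte big-endian integer. */
static uint32_t parse_u32(const uint8_t *p) {
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

/* libFuzzer's entry point: the engine calls this once per generated input.
   Typically built with: clang -g -fsanitize=fuzzer,address harness.c */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    if (size < 4) return 0;   /* too short to parse; skip this input */
    (void)parse_u32(data);    /* sanitizers flag any memory error here */
    return 0;
}
```

Note that the harness contains no `main`: when linked with `-fsanitize=fuzzer`, the fuzzing engine supplies its own driver loop.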

State-of-the-art fuzzers also use compilers to apply fuzzing-enhancing transformations that improve the execution speed of the PUT, enable easier penetration, and track interesting behaviors.[3]

Binary-only fuzzers are designed for situations where source code is unavailable. In practice, many fuzzing use cases are binary only, especially for security researchers working on closed-source, proprietary, or commercial software. Such fuzzers are restricted to binary instrumentation.[4]

Until very recently, the available options for binary-only fuzzing have been unable to match the speed and transformations of their compiler-based counterparts, limiting their effectiveness.[5], [6]

Awareness of program structure

Black-box fuzzers are unaware of the internal structure of the PUT. They observe only the target program’s input/output behavior, treating it as a “black box” they can’t see inside. Most early fuzzers were of this type. Some modern black-box fuzzers like Funfuzz[7] and Peach[8] take the structure of the PUT’s inputs into account to generate more meaningful test cases without inspecting the source code.[9]

Black-box fuzzers are commonly used by hackers due to their ease of use and versatility. They are also used by white-hat security professionals who do not have access to the source code or who are assessing the likelihood of exploitation by hackers under such conditions.

White-box fuzzers generate test cases by analyzing the code structure of the PUT and the information they gather during execution. With this information, they are able to explore the target program’s execution paths systematically.

The term “white-box fuzzing” was introduced by Patrice Godefroid to refer to fuzz testing that employs dynamic symbolic execution (DSE), a variant of symbolic execution.[10] The term is also used to describe fuzzers that employ taint analysis.[11] White-box fuzzing typically incurs much higher overhead than black-box fuzzing, partly because DSE implementations tend to employ SMT (Satisfiability Modulo Theories) solving and dynamic instrumentation. They require more work to set up and their processing is much slower.[12]

Grey-box fuzzers occupy a middle ground between the two extremes. Unlike black-box fuzzers, they can gather some information from inside the PUT to assess its structure and/or its executions. Unlike the white-box variety, grey-box fuzzers do not reason about the full semantics of the PUT. Instead, they tend to limit their investigation to performing some lightweight static analysis and/or gathering some dynamic execution data, like code coverage. Grey-box fuzzers aim to strike an effective balance between execution speed, ease of use, and ensuring broad test coverage.[13]

Coverage-guided grey-box fuzzing is probably the most successful fuzzing approach. This method adds a feedback loop to keep and mutate only the few test cases reaching new code coverage. The rationale behind it is that exhaustively exploring the target code will likely reveal more vulnerabilities. Coverage is collected via instrumentation inserted into the target program at compilation.[14] Widely successful coverage-guided grey-box fuzzers include AFL,[15] AFL++,[16] libFuzzer,[17] and honggfuzz.[18]
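The feedback loop just described can be sketched in a few dozen lines of C. Everything below is a deliberately toy stand-in: the `COV` macro imitates the coverage probes a compiler would insert, the target is a three-byte comparison ladder, and the mutator is a single random byte change. Real engines like AFL are vastly more sophisticated, but the keep-what-reaches-new-coverage loop is the same.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define NUM_EDGES 4
static uint8_t coverage[NUM_EDGES];  /* stands in for compiler-inserted probes */
#define COV(i) (coverage[i] = 1)

/* Toy target: reports a "bug" only for one specific 3-byte input. */
static int target(const uint8_t *in, size_t n) {
    if (n < 3) return 0;
    COV(0);
    if (in[0] == 'F') { COV(1);
        if (in[1] == 'U') { COV(2);
            if (in[2] == 'Z') { COV(3); return 1; }
        }
    }
    return 0;
}

static size_t cov_count(void) {
    size_t c = 0;
    for (size_t i = 0; i < NUM_EDGES; i++) c += coverage[i];
    return c;
}

/* The feedback loop: mutate the seed, and keep a mutated input as the
   new seed only if it lights up coverage not seen before. */
int fuzz(uint8_t seed[3], unsigned iters) {
    size_t best = 0;
    for (unsigned i = 0; i < iters; i++) {
        uint8_t cand[3];
        memcpy(cand, seed, 3);
        cand[rand() % 3] = (uint8_t)rand();  /* random single-byte mutation */
        if (target(cand, 3)) return 1;       /* bug found */
        size_t c = cov_count();
        if (c > best) { best = c; memcpy(seed, cand, 3); }
    }
    return 0;
}
```

Without the coverage feedback, finding the exact input by chance would take on the order of 2^24 tries; with it, each byte is discovered and locked in independently, which is exactly why this approach scales.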

How inputs are generated

Mutation-based fuzzers take the seeds (valid inputs) in their seed pool and generate collections of fuzz inputs by altering (mutating) them, mostly by bit manipulation, into forms that may be valid or invalid.
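A single bit flip, the simplest such mutation, can be sketched as follows. This is a toy illustration of the operator, not any particular fuzzer’s implementation; real mutation-based fuzzers chain many operators (bit and byte flips, arithmetic increments, block splices, and more).

```c
#include <stddef.h>
#include <stdint.h>

/* Copy the seed into `out`, then flip exactly one bit of the copy.
   `bit` indexes bits across the whole buffer (0 .. len*8 - 1). */
void mutate_bitflip(const uint8_t *seed, size_t len, size_t bit, uint8_t *out) {
    for (size_t i = 0; i < len; i++) out[i] = seed[i];
    out[bit / 8] ^= (uint8_t)(1u << (bit % 8));  /* flip one bit */
}
```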

Generation-based fuzzers take the valid input structure provided to them, analyze it, and generate entirely new inputs that match the valid input structure.

Awareness of input structure

Dumb (unstructured) fuzzers produce completely random inputs that do not necessarily match the prescribed format of the expected input. Most early fuzzers were of this type. Due to their simplicity, dumb fuzzers can produce results with little work, but their coverage will be extremely limited. Such primitive fuzzers are unlikely to produce sufficient results to help ensure cybersecurity.

Through awareness of input structure, smart (structured) fuzzers can generate randomized inputs that are valid enough to pass the program’s parser checks and penetrate deep into its logic. They require more setup than dumb fuzzers, since the user must describe the target program’s input format to the fuzzer, but their greater code coverage makes them far more likely to trigger edge cases and find vulnerabilities.
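As a sketch of the idea, assume a hypothetical wire format of a one-byte message type, a one-byte length, and a payload. A structure-aware generator always emits a coherent header, so its inputs survive the parser’s front-line checks and the random bytes land in the payload, where the deeper logic reads them.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical wire format: [1-byte type][1-byte length][payload...].
   The header is always well-formed: the type is one of the known values
   and the length field matches the payload actually emitted. Only the
   payload bytes are fully random. */
size_t gen_message(uint8_t *out, size_t cap) {
    if (cap < 2) return 0;
    size_t payload_len = (size_t)(rand() % 16);
    if (payload_len > cap - 2) payload_len = cap - 2;
    out[0] = (uint8_t)(rand() % 4);   /* one of 4 known message types */
    out[1] = (uint8_t)payload_len;    /* length field matches payload */
    for (size_t i = 0; i < payload_len; i++)
        out[2 + i] = (uint8_t)rand(); /* fully random payload bytes   */
    return 2 + payload_len;
}
```

A dumb fuzzer emitting fully random bytes would see most of its inputs rejected at the header check; this generator spends the whole budget past that point.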

Types of inputs generated

Many fuzzers are optimized for fuzzing specific types of input formats, including:


  • File
  • Network
  • Kernel I/O
  • UI
  • Web
  • Thread (concurrency)


These specializations cut across the other categories listed earlier.

Benefits of fuzzing with state-of-the-art fuzzing tools

Fuzz testing with state-of-the-art fuzzing tools offers software development organizations a number of significant benefits.

First, most fuzzing tools are relatively easy to use. This is especially true of black-box and grey-box fuzzers, which cover the vast majority of use cases.

Second, fuzzing rapidly expands your testing campaigns. It allows you to quickly and easily extend the scope of your unit tests, and it can be used in both unit testing and integration testing.

Next, fuzzing rapidly expands the code coverage of your testing. White-box and grey-box fuzzers typically include compilers that instrument the code to collect coverage data. In addition, sophisticated fuzzers like AFL contain logic for directing coverage while limiting redundant cases, economizing the campaign. These facilities can quickly increase code coverage at the beginning of a test campaign by 60% to 80% compared to normal unit testing.

Finally, fuzzing can be easily scaled, parallelized, and combined with other techniques like static analysis and dynamic analysis. In fact, as we’ll see in the next installment of this series, fuzzing can be optimized by enhancing it with our own tool, TrustInSoft Analyzer.

In our next post…

In Part 2 of this series, we’ll examine the limitations of fuzzing and how many of those limitations can be overcome by pairing a good fuzzer with a formal-methods-based code analysis tool like TrustInSoft Analyzer.

Then, in the finale of the series, we’ll look at how to go beyond fuzzing to overcome the last of those limitations and guarantee airtight security in your most critical applications.

For additional information, see our white paper

If you find this post useful, our new white paper, Fuzzing and Beyond, contains still more information on fuzzing and fuzzing tools. To download your FREE copy, CLICK HERE.


[1]  Hyper Connectivity Market Forecast 2022-2030, Precedence Research, September 2022.

[2]  Internet of Things Connectivity Market, Emergen Research, June 2022.

[3]  Nagy, S., et al, Breaking Through Binaries: Compiler-quality Instrumentation for Better Binary-only Fuzzing, 30th USENIX Security Symposium, August 2021.

[4]  Ibid.

[5]  Ibid.

[6]  Pauley, E., et al, Performant Binary Fuzzing without Source Code using Static Instrumentation, IEEE, October 2022.

[7]  Mozilla Security, Funfuzz, https://github.com/MozillaSecurity/funfuzz.

[8]  GitLab, Peach Fuzzer, https://peachtech.gitlab.io/peach-fuzzer-community/.

[9]  Manès, V., et al, The Art, Science, and Engineering of Fuzzing: A Survey, IEEE, October 2019.

[10]  Godefroid, P., Random testing for security: Blackbox vs. whitebox fuzzing, Proceedings of the International Workshop on Random Testing, 2007.

[11]  Ganesh, V., Leek, T., Rinard, M., Taint-based directed whitebox fuzzing, IEEE, May 2009.

[12]  Manès, V., et al, The Art, Science, and Engineering of Fuzzing: A Survey, IEEE, October 2019.

[13] Ibid.

[14]  Nagy, S., et al, Breaking Through Binaries: Compiler-quality Instrumentation for Better Binary-only Fuzzing, 30th USENIX Security Symposium, August 2021.

[15]  Zalewski, M., American fuzzy lop.

[16]  Advanced Fuzzing League ++, AFLPlusPlus.

[17]  Serebryany, K., Continuous Fuzzing with libFuzzer and AddressSanitizer, IEEE, November 2016.

[18]  Swiecki, R., honggfuzz.

