Keynote SSTIC 2009 – Fuzzing, past present and future
Presentation: Ari Takanen
Software has a "vulnerability window" just after release.
The point of this presentation is not to discuss software defects (zero days).
Fuzzing has some known problems: defining metrics, choosing the right tool, and focusing the right resources to find more bugs.
And of course, motivating vendors to use fuzzing on their own applications.
A good fuzzer has to check 3 main categories:
- Features
- Performance
- Robustness
The original fuzzing system was completely random (no link to the software's protocol). But it did use the interface model (command-line parameters)! The results were not really interesting.
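That original, model-free approach can be sketched in a few lines; the `random_input` helper below is purely illustrative, not from the talk:

```python
import random

def random_input(length, seed=None):
    """Completely random bytes: no knowledge of the target's protocol."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(length))

# In the original style of fuzzing, bytes like these were fed to a
# program's command-line arguments or stdin; here we only build the input.
sample = random_input(32, seed=42)
```

With no protocol model, most such inputs are rejected immediately by the target's first parsing step, which is why the results were unimpressive.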
In summary:
Always have a model to fuzz (protocols, features)
Watch for instrumentation (mem leaks, mem corruptions, business logic, etc.)
And of course, always use automation (generating, executing, analyzing).
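The three points above (a model, instrumentation, automation) fit together as one loop. A minimal sketch, with all names and the toy target invented for illustration:

```python
def fuzz_loop(generate, execute, analyze, iterations):
    """Automation: generate a test case, run it, analyze the outcome."""
    findings = []
    for i in range(iterations):
        case = generate(i)                # model-based test generation
        outcome = execute(case)           # run the target with the case
        verdict = analyze(case, outcome)  # instrumentation: detect failures
        if verdict is not None:
            findings.append(verdict)
    return findings

# Toy target: "crashes" (raises) on any input containing a zero byte.
def toy_target(data):
    if 0 in data:
        raise ValueError("simulated crash")

def execute(case):
    try:
        toy_target(case)
        return "ok"
    except ValueError:
        return "crash"

results = fuzz_loop(
    generate=lambda i: bytes([i % 3]),  # trivial stand-in for a model
    execute=execute,
    analyze=lambda case, out: case if out == "crash" else None,
    iterations=6,
)
```

Real instrumentation would watch for memory leaks and corruptions rather than a raised exception, but the generate/execute/analyze structure is the same.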
Note that: Fuzzing != robustness testing.
Most fuzzers have no random component.
There are 2 main techniques:
Mutation (non-intelligent, semi-random modifications) and generation (intelligent, model-based).
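The two techniques can be contrasted in a small sketch; the toy protocol model (a 1-byte length prefix plus payload) is an assumption made for the example:

```python
import random

def mutation_case(valid_sample, seed):
    """Mutation fuzzing: semi-random modification of a known-good input."""
    rng = random.Random(seed)
    data = bytearray(valid_sample)
    data[rng.randrange(len(data))] ^= 0xFF  # corrupt exactly one byte
    return bytes(data)

def generation_case(payload_len, seed):
    """Generation fuzzing: build the case from a model of the protocol.

    Toy model (assumed for illustration): 1-byte length prefix + payload.
    """
    rng = random.Random(seed)
    payload = bytes(rng.randrange(256) for _ in range(payload_len))
    return bytes([payload_len]) + payload
```

Mutation needs only valid samples to start from; generation needs a model of the protocol, but can then place anomalies in precisely chosen fields.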
PROTOS (the famous project Ari worked on) was created to avoid any random component and focus on truly intelligent tests, with a precise specification of the software (protocols and features).
What about the future?
More complex behavior: mutation tests, more complex block-based tests, automated building of protocol models, and syntax and semantic anomalization.
Some interesting information:
The best fuzzers detect around 70% of bugs.
Combining 2 fuzzers: 70-90%.
Fuzzing is now more present in industry. All major software security groups use fuzzing. It should be used more often in a product's Quality Assurance.
