One weekend I was inspired by this old article on Slashdot, so I decided to write my own fuzz program. You can download it from here; the project also has a SourceForge page describing it.
Another project along the same lines is the Bulletproof Penguin project. Scott Maxwell, who runs that project, takes a slightly different tack: he downloaded the tools used to do the original study and then ran his tests on applications using them, focusing on the same tools the original study covered. I, on the other hand, decided to write my own fuzz testing tool, because I wanted to test more kinds of software and I wanted the tool to be more automatic.
The overall goal is to improve the security of Linux by fixing bugs. To paraphrase Theo de Raadt, the head of the OpenBSD project: if you go about fixing bugs, then security is one of the benefits. This only goes so far, because you could conceivably have a perfectly implemented piece of code that provides a back door, but I am personally not interested in dealing with that. I will let the other folks working on Bastille Linux and other security-related software take charge of making sure there are no back doors or conceptual errors in Linux. I will take up the mantle of trying to ensure that each and every utility is as robust as it can possibly be.
My version of fuzz is meant to go beyond the original fuzz program used to prepare the original fuzz paper, and I hope it improves on the original in several ways:
The fuzz generator is designed to attack certain kinds of software and expose one particular kind of bug common in software: the situation where the programmer implicitly makes assumptions about the data stream that the program will be parsing. If the data stream is substantially different, the program may not be able to deal with it. This approach has several limitations. First of all, since the data stream is truly random, it is likely to thoroughly exercise only a very small percentage of the total program. I have several ideas on how to improve this. The most ambitious is to compile the target programs with profiling support, then use the code-coverage information gathered that way as a fitness measure for a genetic algorithm that mates inputs to achieve the greatest possible code coverage.
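The basic technique described above, feeding a seeded stream of random bytes to a program's standard input and watching whether it survives, can be sketched roughly as follows. This is only my illustration of the idea, not code from the actual tool; the function names (`fill_random`, `fuzz_stdin`) are invented for this sketch, and seeding the generator is one simple way to make a crashing input reproducible:

```c
#define _POSIX_C_SOURCE 2  /* for popen/pclose */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Fill buf with n pseudo-random bytes from a seeded generator, so that
   any input that provokes a crash can be regenerated from its seed. */
static void fill_random(unsigned char *buf, size_t n, unsigned int seed)
{
    srand(seed);
    for (size_t i = 0; i < n; i++)
        buf[i] = (unsigned char)(rand() & 0xff);
}

/* Pipe n random bytes into the target command's standard input and
   return its exit status from pclose(); an abnormal status suggests
   the target could not cope with the stream. */
static int fuzz_stdin(const char *cmd, size_t n, unsigned int seed)
{
    unsigned char *buf = malloc(n);
    if (buf == NULL)
        return -1;
    fill_random(buf, n, seed);

    FILE *p = popen(cmd, "w");
    if (p == NULL) {
        free(buf);
        return -1;
    }
    fwrite(buf, 1, n, p);
    free(buf);
    return pclose(p);
}
```

A driver would call `fuzz_stdin` in a loop over many seeds and buffer sizes, logging any seed whose run ends abnormally; the coverage-guided variant mentioned above would instead score each seed by the profiling data it produces.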
Ben Woodard Last modified: Mon Nov 29 05:35:20 PST 1999