-
Notifications
You must be signed in to change notification settings - Fork 6
/
Copy pathen.search-data.min.09ab2740bbb0887cd3487618751649b23d8db817f28a27ec16b20e5372a2f884.js
1 lines (1 loc) · 421 KB
/
en.search-data.min.09ab2740bbb0887cd3487618751649b23d8db817f28a27ec16b20e5372a2f884.js
1
'use strict';(function(){const b={cache:!0};b.doc={id:'id',field:['title','content'],store:['title','href']};const a=FlexSearch.create('balance',b);window.bookSearchIndex=a,a.add({id:0,href:'/building/',title:"Building",content:"Build and install AFL++ Download the lastest devel version with:\n$ git clone https://github.com/AFLplusplus/AFLplusplus $ cd AFLplusplus AFL++ has many build options. The easiest is to build and install everything:\n$ make distrib $ sudo make install Note that \u0026ldquo;make distrib\u0026rdquo; also builds llvm_mode, qemu_mode, unicorn_mode and more. If you just want plain afl then do \u0026ldquo;make all\u0026rdquo;, however compiling and using at least llvm_mode is highly recommended for much better results - hence in this case\n$ make source-only is what you should choose.\nThese build options exist:\n all: just the main AFL++ binaries binary-only: everything for binary-only fuzzing: qemu_mode, unicorn_mode, libdislocator, libtokencap, radamsa source-only: everything for source code fuzzing: llvm_mode, libdislocator, libtokencap, radamsa distrib: everything (for both binary-only and source code fuzzing) install: installs everything you have compiled with the build options above clean: cleans everything. for qemu_mode and unicorn_mode it means it deletes all downloads as well code-format: format the code, do this before you commit and send a PR please! tests: runs test cases to ensure that all features are still working as they should help: shows these build options Unless you are on Mac OS X you can also build statically linked versions of the AFL++ binaries by passing the STATIC=1 argument to make:\n$ make all STATIC=1 Note that AFL++ is faster and better the newer the compilers used are. Hence gcc-9 and especially llvm-9 should be the compilers of choice. If your distribution does not have them, you can use the Dockerfile:\n$ docker build -t aflplusplus "}),a.add({id:1,href:'/docs/',title:"Docs",content:"AFL++ Documentation You can browse a part of the AFL++ doc here.\n Binary-only fuzzing Environment variables Status screen Parallel fuzzing Performance tips Note for ASan Power schedules Custom mutators Technical Details Historical Notes "}),a.add({id:2,href:'/docs/tutorials/',title:"Tutorials",content:"AFL++ Tutorials Fuzzing libxml2 with AFL++ Third party Fuzzing software: common challenges and potential solutions (Github Security Lab) Fuzzing sockets, part 1: FTP servers Fuzzing a Gameboy Emulator with AFL++ (bananamafia) AFL++ Docker suite images (Pentagrid) "}),a.add({id:3,href:'/features/',title:"Features",content:"AFL++ Features Many improvements were made over the official afl release - which did not get any feature improvements since November 2017.\nAmong other changes afl++ has a more performant llvm_mode, supports llvm up to version 11, QEMU 5.1, more speed and crashfixes for QEMU, better *BSD and Android support and much, much more.\nAdditionally the following features and patches have been integrated:\n AFLfast\u0026rsquo;s power schedules by Marcel Böhme: https://github.com/mboehme/aflfast\n The new excellent MOpt mutator: https://github.com/puppet-meteor/MOpt-AFL\n InsTrim, a very effective CFG llvm_mode instrumentation implementation for large targets: https://github.com/csienslab/instrim\n C. Holler\u0026rsquo;s afl-fuzz Python mutator module and llvm_mode whitelist support: https://github.com/choller/afl\n Custom mutator by a library (instead of Python) by kyakdan\n Unicorn mode which allows fuzzing of binaries from completely different platforms (integration provided by domenukk)\n LAF-Intel or CompCov support for llvm_mode, qemu_mode and unicorn_mode\n NeverZero patch for afl-gcc, llvm_mode, qemu_mode and unicorn_mode which prevents a wrapping map value to zero, increases coverage\n Persistent mode and deferred forkserver for qemu_mode\n Win32 PE binary-only fuzzing with QEMU and Wine\n Radamsa mutator (enable with -R to add or -RR to run it exclusively).\n QBDI mode to fuzz android native libraries via QBDI framework\n The new CmpLog instrumentation for LLVM and QEMU inspired by Redqueen\n LLVM mode Ngram coverage by Adrian Herrera https://github.com/adrianherrera/afl-ngram-pass\n A more thorough list is available in the PATCHES file.\n Feature/Instrumentation afl-gcc llvm_mode gcc_plugin qemu_mode unicorn_mode NeverZero x x(1) (2) x x Persistent mode x x x86[_64]/arm[64] x LAF-Intel / CompCov x x86[_64]/arm[64] x86[_64]/arm CmpLog x x86[_64]/arm[64] Whitelist x x (x)(3) Non-colliding coverage x(4) (x)(5) InsTrim x Ngram prev_loc coverage x(6) Context coverage x Snapshot LKM support x (x)(5) neverZero:\n(1) default for LLVM \u0026gt;= 9.0, env var for older version due an efficiency bug in llvm \u0026lt;= 8\n(2) GCC creates non-performant code, hence it is disabled in gcc_plugin\n(3) partially via AFL_CODE_START/AFL_CODE_END\n(4) Only for LLVM \u0026gt;= 9 and not all targets compile\n(5) upcoming, development in the branch\n(6) not compatible with LTO and InsTrim and needs at least LLVM \u0026gt;= 4.1\nSo all in all this is the best-of afl that is currently out there :-)\n"}),a.add({id:4,href:'/papers/',title:"Papers",content:"Papers Works based on AFL++ Bibtex\n2020 Andrea Fioraldi, Dominik Maier, Heiko Eißfeldt, and Marc Heuse. \u0026ldquo;AFL++: Combining incremental steps of fuzzing research\u0026rdquo;. In 14th USENIX Workshop on Offensive Technologies (WOOT 20). USENIX Association, Aug. 2020.\n Andrea Fioraldi, Daniele Cono D’Elia, and Leonardo Querzoni. \u0026ldquo;Fuzzing binaries for memory safety errors with QASan\u0026rdquo;. In 2020 IEEE Secure Development Conference (SecDev), 2020.\n Dominik Maier, Lukas Seidel, and Shinjo Park. \u0026ldquo;BaseSAFE: BasebandSAnitized Fuzzing through Emulation\u0026rdquo;. In 13th ACM Conference on Security and Privacy in Wireless and Mobile Networks (WiSec 20), Linz (Virtual Event), Austria, July 2020.\n 2021 Jinghan Wang, Chengyu Song, and Heng Yin. \u0026ldquo;Reinforcement Learning-based Hierarchical Seed Scheduling for Greybox Fuzzing\u0026rdquo;. In Proceedings of the 2021 Network and Distributed System Security Symposium (NDSS'21), February 2021.\n Luca Borzacchiello, Emilio Coppa and Camil Demetrescu. \u0026ldquo;Fuzzing Symbolic Expressions\u0026rdquo;. In 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE), 2021.\n Sihang Liu, Suyash Mahar, Baishakhi Ray, and Samira Khan. \u0026ldquo;PMFuzz: Test Case Generation for Persistent Memory Programs\u0026rdquo;. The International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2021\n Andrea Fioraldi, Daniele Cono D\u0026rsquo;Elia, Davide Balzarotti. \u0026ldquo;The Use of Likely Invariants as Feedback for Fuzzers\u0026rdquo;. In 30th USENIX Security Symposium (USENIX Security 21), USENIX Association, August 2021.\n Prashast Srivastava and Mathias Payer. \u0026ldquo;Gramatron: Effective Grammar-Aware Fuzzing\u0026rdquo;. InProceedings of the 30th ACM SIGSOFT International Sympo-sium on Software Testing and Analysis (ISSTA ’21), July 11–17, 2021, Virtual, Denmark.\n Works citing AFL++ Bibtex\n2020 Andrea Fioraldi, Daniele Cono D’Elia, and Emilio Coppa. \u0026ldquo;WEIZZ: Automatic grey-box fuzzing for structured binary formats\u0026rdquo;. In Proceedings of the 29th ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2020, New York, NY, USA, 2020. Association for Computing Machinery.\n Marcel Böhme, Valentin Manès, and Sang Kil Cha. \u0026ldquo;Boosting fuzzer efficiency: An information theoretic perspective\u0026rdquo;. In Proceedings of the 14th Joint meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering, ESEC/FSE, pages 1–11, 2020.\n Güler, Emre and Görz, Philipp and Geretto, Elia and Jemmett, Andrea and Österlund, Sebastian and Bos, Herbert and Giuffrida, Cristiano and Holz, Thorsten. \u0026ldquo;Cupid: Automatic fuzzer selection for collaborative fuzzing\u0026rdquo;. In Annual Computer Security Applications Conference (ACSAC), ACM, 2020 (Austin, USA, December 2020), ACM\n Ahmad Hazimeh, Adrian Herrera, and Mathias Payer. \u0026ldquo;Magma: A Ground-Truth Fuzzing Benchmark\u0026rdquo;. Proc. ACM Meas. Anal. Comput. Syst. 4, 3, Article 49 (December 2020), 29 pages.\n Vishnyakov A., Fedotov A., Kuts D., Novikov A., Parygina D., Kobrin E., Logunova V., Belecky P., Kurmangaleev Sh. \u0026ldquo;Sydr: Cutting Edge Dynamic Symbolic Execution\u0026rdquo;. 2020 Ivannikov ISPRAS Open Conference (ISPRAS), IEEE, 2020, pp. 46-54.\n 2021 Stefan Nagy, Anh Nguyen-Tuong, Jason Hiser, Jack Davidson, and Matthew Hicks. \u0026ldquo;Breaking Through Binaries: Compiler-quality Instrumentation for Better Binary-only Fuzzing\u0026rdquo;. In 30th USENIX Security Symposium (USENIX Security 21), USENIX Association, August 2021.\n Dominik Maier and Lukas Seidel. \u0026ldquo;JMPscare: Introspection for Binary-Only Fuzzing\u0026rdquo;. Workshop on Binary Analysis Research (BAR). Vol. 2021. 2021.\n Yu-Chuan Liang and Hsu-Chun Hsiao. \u0026ldquo;icLibFuzzer: Isolated-context libFuzzer for Improving Fuzzer Comparability\u0026rdquo;. Workshop on Binary Analysis Research (BAR). Vol. 2021. 2021.\n Adrian Herrera, Hendra Gunadi, Shane Magrath, Michael Norrish, MathiasPayer, and Antony L. Hosking. \u0026ldquo;Seed Selection for Successful Fuzzing\u0026rdquo;. InProceedings of the 30th ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA ’21), July 11–17, 2021, Virtual, Denmark.\n "}),a.add({id:5,href:'/aflpp_fuzzing_framework_proposal/',title:"Aflpp Fuzzing Framework Proposal",content:"AFL++ as a Fuzzing Framework Proposal by Andrea.\nBig changes were done in AFL++ to improve usability but the tool remains an extension of the legendary AFL that inherits also its limitations.\nThe future of AFL++, in my opinion, is not to improve the performance of AFL of a percentile.\nWe don\u0026rsquo;t aim to build the \u0026ldquo;best\u0026rdquo; fuzzer, the best fuzzer is the fuzzer that you write for your target. We just want to give you all the pieces to do so easily and effectively.\nA framework to build fuzzers We want to create a fuzzing framework with all the pieces to build fuzzer, a sort of \u0026ldquo;LLVM of fuzzers\u0026rdquo;.\nafl-fuzz will be just one of the frontends to this library.\nWe will code it entirely in C starting from the existing AFL++ codebase for the maximum compatibility. One of our goals is to allow a dynamic binary instrumentation (DBI) or a debugger to inject the entire library in a target process (like in frida-fuzzer, but better and NOT in Javascript).\nImagine injecting the library in a Windows application with a DLL injection with a harness that fuzzes an API with a structured mutator and without coverage. Or maybe with hardware feedback as coverage, or using a DBI, there is a landscape of possibilities.\nMultiple fuzzers in one Imagine that you built 2 fuzzers but want to share their results, we don\u0026rsquo;t want to synchronize testcases anymore (we will maintain the possibility to do that ofc for backward compatibility with AFL but seriously we want to deprecate it).\nYou can define these 2 fuzzers, run the first one in a thread and run, e.g., 3 instances of the second running on 3 threads. All in the same process sharing results immediately.\nThere are several multithreaded fuzzers, most notably honggfuzz, but our idea is to go further and have different configurations running in different threads, not simply a multithreaded fuzzer.\nBasic building blocks To be an effective framework, we define the basic set of building blocks and how they interact.\nA fuzzer can have multiple executors (a forkserver is an executor), for instance one for CmpLog, one for coverage feedback, one for ASan that executes the binary each time that a new interesting input is found.\nA fuzzer can have multiple feedback mechanisms (one for executor or multiple for executor e.g. edge coverage + cmp). When a new testcase triggers new feedback a callback decides if the new input has to be inserted in the normal queue or in the per-feedback queue or both.\nSo the seed selection (yielding the favored testcases set) can work on both normal queue or per-feedback queue. A mechanism for seed scheduling can be designed to stress a single type of feedback if the others are stuck (e.g. fuzzing does not produce anymore edge coverage but we still produce feedback regarding memory allocation size, then the fuzzer will use with more probability testcases from this per-feedback queue).\nThere will be also the possibility to use a custom algorithm for calculating the energy of a testcase that can be different for each feedback mechanism.\nMutators are independent sets of mutations. A scheduling policy can be set for such mutations (by default randomly taken like in havoc).\nEntities Virtual Input/Input State (hold input buffer and associated metadata (e.g. structure) for a testcase item) Seed Queue per input channel Executor (Forkserver, Fauxserver, Network Connector) Input Channel (A way to send a new testcase to the target (multiple can be stacked, i.e. change command line parameters sparingly, then fuzz for each option over file/stdin)) Observation Channel (shared mem or whatever, also a mmaped file, define a generic interface) Feedback Feedback Reducer (VFuzz \u0026ldquo;Sensor\u0026rdquo; -\u0026gt; Reduce Observation Channel output to feedback value) Feedback specific queue Feedback specific seed scheduler Feedback specific seed energy Generic queue Generic seed scheduler Generic seed scheduler \u0026gt; andrea duplicate? Stage Mutator (simple) StackedMutator Mutation run callback (schedule Mutations) Interfaces Executor { observers // more than one current_inputs // more than one place_inputs() // e.g. write to file or in the target memory init() destroy() run_target() } Request { executor // request from the fuzzer (e.g. gimme me more input) } ObservationChannel { init() destroy() pre_run() // memset 0 in the edge coverage case in AFL post_run() } Feedback { executor specific_queue init() destroy() reduce_feedback() // not bool but e.g. float 0.0 - 1.0 } FeedbackSpecificQueue { feedback scheduler energy_calc } GenericQueue { feedbacks // all feedbacks scheduler energy_calc is_interesting() // not bool but e.g. float 0.0 - 1.0 } Stage { executor scheduler mutators init() destroy() run() } Mutator() { mutate() } StackedMutator() { mutations[] mutate() } Implementation struct afl_virtual_input { u8 (*init_cb)(struct afl_virtual_input*); // can be NULL u8 (*destroy_cb)(struct afl_virtual_input*); // can be NULL u8* buffer; u32 len; }; struct afl_executor { u8 (*init_cb)(struct afl_executor*); // can be NULL u8 (*destroy_cb)(struct afl_executor*); // can be NULL u8 (*run_target_cb)(struct afl_executor*); u8 (*place_input_cb)(struct afl_executor*); // assume current_input is valid struct afl_virtual_input* current_input; struct afl_observation_channel* observers; u32 observers_num; }; struct afl_request_handler { u8 (*init_cb)(struct afl_request_handler*); // can be NULL u8 (*destroy_cb)(struct afl_request_handler*); // can be NULL u8 (*handle_cb)(struct afl_executor* executor, void* data); s32 request_id; // the dispatcher check this }; struct afl_observation_channel { u8 (*init_cb)(struct afl_observation_channel*); // can be NULL u8 (*destroy_cb)(struct afl_observation_channel*); // can be NULL u8 (*flush_cb)(struct afl_observation_channel*); // can be NULL u8 (*reset_cb)(struct afl_observation_channel*); // can be NULL // extend here adding e.g. a shared memory }; struct afl_feedback { u8 (*init_cb)(struct afl_feedback*); // can be NULL u8 (*destroy_cb)(struct afl_feedback*); // can be NULL u64 (*reducer_function)(u64, u64); // new_value = reducer(old_value, proposed_value) s32 (*is_interesting_cb)(struct afl_executor* executor); // returns rate struct afl_queue* specific_queue; }; struct afl_queue_entry { u8 (*init_cb)(struct afl_queue_entry*); // can be NULL u8 (*destroy_cb)(struct afl_queue_entry*); // can be NULL // typical queue entry fields, omit for lazyness }; struct afl_queue { u8 (*init_cb)(struct afl_queue*); // can be NULL u8 (*destroy_cb)(struct afl_queue*); // can be NULL u8 (*add_cb)(struct afl_executor* executor, s32 rate); struct afl_queue_entry* start; struct afl_queue_entry* current; u32 size; }; struct afl_stage { u8 (*init_cb)(struct afl_stage*); // can be NULL u8 (*destroy_cb)(struct afl_stage*); // can be NULL // run is not virtual u8 (*shceduler_func)(struct afl_stage*, struct afl_mutator*); struct afl_mutator* mutators; u32 mutators_num; struct afl_executor* executor; }; struct afl_mutator { u8 (*init_cb)(struct afl_mutator*); // can be NULL u8 (*destroy_cb)(struct afl_mutator*); // can be NULL u8 (*mutate_cb)(struct afl_virtual_input* input); }; typedef u8 (*mutation_func_t)(struct afl_virtual_input* input); struct afl_stacked_mutator { struct afl_mutator super; mutation_func_t* mutations; u32 mutations_num; // mutate_cb here is a scheduler of mutations }; Inheritance Example of extension of afl_virtual_input.\nstruct afl_structured_input { struct afl_virtual_input super; struct virtual_structure* structure; }; u8 destroy_structure(struct afl_virtual_input* me) { struct afl_structured_input* i = baseof(struct afl_structured_input, super, me); ck_free(i-\u0026gt;structure); return R_OK; // all good } struct afl_virtual_input* new_structured_input(void) { struct afl_structured_input* i = ck_alloc(sizeof(struct afl_structured_input)); i-\u0026gt;super.destroy_cb = \u0026amp;destroy_structure; i-\u0026gt;structure = new_virtual_structure(); return \u0026amp;i-\u0026gt;super; } Obsolete this part is obsolete, don\u0026rsquo;t read (maintained as a reference).\nExample functions From a current source code perspective, afl-fuzz.c would be the main.c and all other files be part of libaflpp.so\nSeeds ssize_t afpp_seedselection_load_seeds(struct_aflpp *aflpp, char *directory_or_file)\n ssize_t := number of seeds loaded void aflpp_seedselection_configure(struct_aflpp *aflpp, uint32_t weight_time, uint32_t weight_len, uint32_t rare, bool now)\nSetup a specific seed selection strategy. might need more options.\n weight_\u0026hellip; : apply weighting to this characeristic (0 = none, 1 = x1, 2 = x2, etc.) now:= false: starting next cycle, true: immedeatly int32_t aflpp_seedselection_custom_register(struct_aflpp *aflpp, void *custom_seed_calculation_callback)\nRegister your own seed selection algorithm\n int32_t := -1 : failed, \u0026gt;= 0 : custom_seedselection_id bool aflpp_seedselection_custom_enable|disable(struct_aflpp *aflpp, int32_t custom_mutator_id)\n bool := true : success, false : failure (not defined) void aflpp_seedselection_method(struct_aflpp *aflpp, uint32_t method, bool now)\nAlternativly select a pre-coded stategy\n method := enum { EXPLORE, FAST, COE, LIN, QUAD, EXPLOIT, MMOPT, RARE } bool aflpp_seedselection_next(struct_aflpp *aflpp)\nGo to the next seed. Normally this would not be used as aflpp_mutations_mutate() would do that.\n bool := true: this starts next cycle Mutation void aflpp_mutations_configure(struct_aflpp *aflpp, uint64_t mutations, bool now)\nConfigure the mutator\n mutations := enum { BITFLIP, ARITH, DICT, HAVOC, MORE_HAVOC, \u0026hellip; }, combined with OR int32_t aflpp_mutations_custom_register(struct_aflpp *aflpp, void *custom_init, void *custom_new_seed, void *custom_mutate, ... \n int32_t := -1 : failed, \u0026gt;= 0 : custom_mutator_id void* parameters can be NULL bool aflpp_mutations_custom_enable|disable(struct_aflpp *aflpp, int32_t custom_mutator_id)\n bool := true : success, false : failure (not defined) ssize_t aflpp_mutations_mutate(struct_aflpp *aflpp, uint32_t end, uint32_t count, uint32_t min_len, uint32_t max_len, void *sender_callback)\n end := enum { NONE, DONE_WITH_SEED, DONE_MUTATION_TYPE), combined with OR count := number of maximum of mutations to perform, 0 = no limit (and basically what end says but can then not be NONE) sender_callback := the function that sends the data to the target (e.g. to stdin, file, tcp/ip, ipc, ioctl, \u0026hellip;) ssize_t := number of mutations performed ssize_t aflpp_mutations_mutate_specfic(struct_aflpp *aflpp, uint64_t mutator_type, uint32_t count, uint32_t min_len, uint32_t max_len, void *sender_callback)\nsame as aflpp_mutations_mutate() but only use this specific mutator (of enum mutations)\nssize_t aflpp_mutations_mutate_custom(struct_aflpp *aflpp, int32_t custom_mutator_id, uint32_t count, uint32_t min_len, uint32_t max_len, void *sender_callback)\nsame as aflpp_mutations_mutate() but only use this specific mutator (of enum mutations)\nAlso: load dictionary + enable/disable dictionary, etc.\nInput Sender ssize_t send_input(struct_aflpp *aflpp, u8 *buf, uint32_t len)\nwe should also have default senders, e.g. aflpp_send_stdin, aflpp_send_file, aflpp_send_argv, aflpp_send_network, \u0026hellip; for which some need a _configure, e.g. for file, network, argv\nstruct_aflpp has pointers to struct_seed, struct_mutation, \u0026hellip;\n"}),a.add({id:6,href:'/docs/afl-fuzz_approach/',title:"Afl Fuzz Approach",content:"The afl-fuzz approach AFL++ is a brute-force fuzzer coupled with an exceedingly simple but rock-solid instrumentation-guided genetic algorithm. It uses a modified form of edge coverage to effortlessly pick up subtle, local-scale changes to program control flow.\nSimplifying a bit, the overall algorithm can be summed up as:\n Load user-supplied initial test cases into the queue.\n Take the next input file from the queue.\n Attempt to trim the test case to the smallest size that doesn\u0026rsquo;t alter the measured behavior of the program.\n Repeatedly mutate the file using a balanced and well-researched variety of traditional fuzzing strategies.\n If any of the generated mutations resulted in a new state transition recorded by the instrumentation, add mutated output as a new entry in the queue.\n Go to 2.\n The discovered test cases are also periodically culled to eliminate ones that have been obsoleted by newer, higher-coverage finds; and undergo several other instrumentation-driven effort minimization steps.\nAs a side result of the fuzzing process, the tool creates a small, self-contained corpus of interesting test cases. These are extremely useful for seeding other, labor- or resource-intensive testing regimes - for example, for stress-testing browsers, office applications, graphics suites, or closed-source tools.\nThe fuzzer is thoroughly tested to deliver out-of-the-box performance far superior to blind fuzzing or coverage-only tools.\nUnderstanding the status screen This section provides an overview of the status screen - plus tips for troubleshooting any warnings and red text shown in the UI.\nFor the general instruction manual, see README.md.\nA note about colors The status screen and error messages use colors to keep things readable and attract your attention to the most important details. For example, red almost always means \u0026ldquo;consult this doc\u0026rdquo; :-)\nUnfortunately, the UI will only render correctly if your terminal is using traditional un*x palette (white text on black background) or something close to that.\nIf you are using inverse video, you may want to change your settings, say:\n For GNOME Terminal, go to Edit \u0026gt; Profile preferences, select the \u0026ldquo;colors\u0026rdquo; tab, and from the list of built-in schemes, choose \u0026ldquo;white on black\u0026rdquo;. For the MacOS X Terminal app, open a new window using the \u0026ldquo;Pro\u0026rdquo; scheme via the Shell \u0026gt; New Window menu (or make \u0026ldquo;Pro\u0026rdquo; your default). Alternatively, if you really like your current colors, you can edit config.h to comment out USE_COLORS, then do make clean all.\nWe are not aware of any other simple way to make this work without causing other side effects - sorry about that.\nWith that out of the way, let\u0026rsquo;s talk about what\u0026rsquo;s actually on the screen\u0026hellip;\nThe status bar american fuzzy lop ++3.01a (default) [fast] {0} The top line shows you which mode afl-fuzz is running in (normal: \u0026ldquo;american fuzzy lop\u0026rdquo;, crash exploration mode: \u0026ldquo;peruvian rabbit mode\u0026rdquo;) and the version of AFL++. Next to the version is the banner, which, if not set with -T by hand, will either show the binary name being fuzzed, or the -M/-S main/secondary name for parallel fuzzing. Second to last is the power schedule mode being run (default: fast). Finally, the last item is the CPU id.\nProcess timing +----------------------------------------------------+ | run time : 0 days, 8 hrs, 32 min, 43 sec | | last new find : 0 days, 0 hrs, 6 min, 40 sec | | last uniq crash : none seen yet | | last uniq hang : 0 days, 1 hrs, 24 min, 32 sec | +----------------------------------------------------+ This section is fairly self-explanatory: it tells you how long the fuzzer has been running and how much time has elapsed since its most recent finds. This is broken down into \u0026ldquo;paths\u0026rdquo; (a shorthand for test cases that trigger new execution patterns), crashes, and hangs.\nWhen it comes to timing: there is no hard rule, but most fuzzing jobs should be expected to run for days or weeks; in fact, for a moderately complex project, the first pass will probably take a day or so. Every now and then, some jobs will be allowed to run for months.\nThere\u0026rsquo;s one important thing to watch out for: if the tool is not finding new paths within several minutes of starting, you\u0026rsquo;re probably not invoking the target binary correctly and it never gets to parse the input files that are thrown at it; other possible explanations are that the default memory limit (-m) is too restrictive and the program exits after failing to allocate a buffer very early on; or that the input files are patently invalid and always fail a basic header check.\nIf there are no new paths showing up for a while, you will eventually see a big red warning in this section, too :-)\nOverall results +-----------------------+ | cycles done : 0 | | total paths : 2095 | | uniq crashes : 0 | | uniq hangs : 19 | +-----------------------+ The first field in this section gives you the count of queue passes done so far\n that is, the number of times the fuzzer went over all the interesting test cases discovered so far, fuzzed them, and looped back to the very beginning. Every fuzzing session should be allowed to complete at least one cycle; and ideally, should run much longer than that. As noted earlier, the first pass can take a day or longer, so sit back and relax.\nTo help make the call on when to hit Ctrl-C, the cycle counter is color-coded. It is shown in magenta during the first pass, progresses to yellow if new finds are still being made in subsequent rounds, then blue when that ends - and finally, turns green after the fuzzer hasn\u0026rsquo;t been seeing any action for a longer while.\nThe remaining fields in this part of the screen should be pretty obvious: there\u0026rsquo;s the number of test cases (\u0026ldquo;paths\u0026rdquo;) discovered so far, and the number of unique faults. The test cases, crashes, and hangs can be explored in real-time by browsing the output directory, see #interpreting-output.\nCycle progress +-------------------------------------+ | now processing : 1296 (61.86%) | | paths timed out : 0 (0.00%) | +-------------------------------------+ This box tells you how far along the fuzzer is with the current queue cycle: it shows the ID of the test case it is currently working on, plus the number of inputs it decided to ditch because they were persistently timing out.\nThe \u0026ldquo;*\u0026rdquo; suffix sometimes shown in the first line means that the currently processed path is not \u0026ldquo;favored\u0026rdquo; (a property discussed later on).\nMap coverage +--------------------------------------+ | map density : 10.15% / 29.07% | | count coverage : 4.03 bits/tuple | +--------------------------------------+ The section provides some trivia about the coverage observed by the instrumentation embedded in the target binary.\nThe first line in the box tells you how many branch tuples already were hit, in proportion to how much the bitmap can hold. The number on the left describes the current input; the one on the right is the value for the entire input corpus.\nBe wary of extremes:\n Absolute numbers below 200 or so suggest one of three things: that the program is extremely simple; that it is not instrumented properly (e.g., due to being linked against a non-instrumented copy of the target library); or that it is bailing out prematurely on your input test cases. The fuzzer will try to mark this in pink, just to make you aware. Percentages over 70% may very rarely happen with very complex programs that make heavy use of template-generated code. Because high bitmap density makes it harder for the fuzzer to reliably discern new program states, we recommend recompiling the binary with AFL_INST_RATIO=10 or so and trying again (see env_variables.md). The fuzzer will flag high percentages in red. Chances are, you will never see that unless you\u0026rsquo;re fuzzing extremely hairy software (say, v8, perl, ffmpeg). The other line deals with the variability in tuple hit counts seen in the binary. In essence, if every taken branch is always taken a fixed number of times for all the inputs that were tried, this will read 1.00. As we manage to trigger other hit counts for every branch, the needle will start to move toward 8.00 (every bit in the 8-bit map hit), but will probably never reach that extreme.\nTogether, the values can be useful for comparing the coverage of several different fuzzing jobs that rely on the same instrumented binary.\nStage progress +-------------------------------------+ | now trying : interest 32/8 | | stage execs : 3996/34.4k (11.62%) | | total execs : 27.4M | | exec speed : 891.7/sec | +-------------------------------------+ This part gives you an in-depth peek at what the fuzzer is actually doing right now. It tells you about the current stage, which can be any of:\n calibration - a pre-fuzzing stage where the execution path is examined to detect anomalies, establish baseline execution speed, and so on. Executed very briefly whenever a new find is being made. trim L/S - another pre-fuzzing stage where the test case is trimmed to the shortest form that still produces the same execution path. The length (L) and stepover (S) are chosen in general relationship to file size. bitflip L/S - deterministic bit flips. There are L bits toggled at any given time, walking the input file with S-bit increments. The current L/S variants are: 1/1, 2/1, 4/1, 8/8, 16/8, 32/8. arith L/8 - deterministic arithmetics. The fuzzer tries to subtract or add small integers to 8-, 16-, and 32-bit values. The stepover is always 8 bits. interest L/8 - deterministic value overwrite. The fuzzer has a list of known \u0026ldquo;interesting\u0026rdquo; 8-, 16-, and 32-bit values to try. The stepover is 8 bits. extras - deterministic injection of dictionary terms. This can be shown as \u0026ldquo;user\u0026rdquo; or \u0026ldquo;auto\u0026rdquo;, depending on whether the fuzzer is using a user-supplied dictionary (-x) or an auto-created one. You will also see \u0026ldquo;over\u0026rdquo; or \u0026ldquo;insert\u0026rdquo;, depending on whether the dictionary words overwrite existing data or are inserted by offsetting the remaining data to accommodate their length. havoc - a sort-of-fixed-length cycle with stacked random tweaks. The operations attempted during this stage include bit flips, overwrites with random and \u0026ldquo;interesting\u0026rdquo; integers, block deletion, block duplication, plus assorted dictionary-related operations (if a dictionary is supplied in the first place). splice - a last-resort strategy that kicks in after the first full queue cycle with no new paths. It is equivalent to \u0026lsquo;havoc\u0026rsquo;, except that it first splices together two random inputs from the queue at some arbitrarily selected midpoint. sync - a stage used only when -M or -S is set (see fuzzing_in_depth.md:3c) Using multiple cores). No real fuzzing is involved, but the tool scans the output from other fuzzers and imports test cases as necessary. The first time this is done, it may take several minutes or so. The remaining fields should be fairly self-evident: there\u0026rsquo;s the exec count progress indicator for the current stage, a global exec counter, and a benchmark for the current program execution speed. This may fluctuate from one test case to another, but the benchmark should be ideally over 500 execs/sec most of the time - and if it stays below 100, the job will probably take very long.\nThe fuzzer will explicitly warn you about slow targets, too. If this happens, see the best_practices.md#improving-speed for ideas on how to speed things up.\nFindings in depth +--------------------------------------+ | favored paths : 879 (41.96%) | | new edges on : 423 (20.19%) | | total crashes : 0 (0 unique) | | total tmouts : 24 (19 unique) | +--------------------------------------+ This gives you several metrics that are of interest mostly to complete nerds. The section includes the number of paths that the fuzzer likes the most based on a minimization algorithm baked into the code (these will get considerably more air time), and the number of test cases that actually resulted in better edge coverage (versus just pushing the branch hit counters up). There are also additional, more detailed counters for crashes and timeouts.\nNote that the timeout counter is somewhat different from the hang counter; this one includes all test cases that exceeded the timeout, even if they did not exceed it by a margin sufficient to be classified as hangs.\nFuzzing strategy yields +-----------------------------------------------------+ | bit flips : 57/289k, 18/289k, 18/288k | | byte flips : 0/36.2k, 4/35.7k, 7/34.6k | | arithmetics : 53/2.54M, 0/537k, 0/55.2k | | known ints : 8/322k, 12/1.32M, 10/1.70M | | dictionary : 9/52k, 1/53k, 1/24k | |havoc/splice : 1903/20.0M, 0/0 | |py/custom/rq : unused, 53/2.54M, unused | | trim/eff : 20.31%/9201, 17.05% | +-----------------------------------------------------+ This is just another nerd-targeted section keeping track of how many paths were netted, in proportion to the number of execs attempted, for each of the fuzzing strategies discussed earlier on. This serves to convincingly validate assumptions about the usefulness of the various approaches taken by afl-fuzz.\nThe trim strategy stats in this section are a bit different than the rest. The first number in this line shows the ratio of bytes removed from the input files; the second one corresponds to the number of execs needed to achieve this goal. Finally, the third number shows the proportion of bytes that, although not possible to remove, were deemed to have no effect and were excluded from some of the more expensive deterministic fuzzing steps.\nNote that when deterministic mutation mode is off (which is the default because it is not very efficient) the first five lines display \u0026ldquo;disabled (default, enable with -D)\u0026rdquo;.\nOnly what is activated will have counter shown.\nPath geometry +---------------------+ | levels : 5 | | pending : 1570 | | pend fav : 583 | | own finds : 0 | | imported : 0 | | stability : 100.00% | +---------------------+ The first field in this section tracks the path depth reached through the guided fuzzing process. In essence: the initial test cases supplied by the user are considered \u0026ldquo;level 1\u0026rdquo;. The test cases that can be derived from that through traditional fuzzing are considered \u0026ldquo;level 2\u0026rdquo;; the ones derived by using these as inputs to subsequent fuzzing rounds are \u0026ldquo;level 3\u0026rdquo;; and so forth. The maximum depth is therefore a rough proxy for how much value you\u0026rsquo;re getting out of the instrumentation-guided approach taken by afl-fuzz.\nThe next field shows you the number of inputs that have not gone through any fuzzing yet. The same stat is also given for \u0026ldquo;favored\u0026rdquo; entries that the fuzzer really wants to get to in this queue cycle (the non-favored entries may have to wait a couple of cycles to get their chance).\nNext is the number of new paths found during this fuzzing section and imported from other fuzzer instances when doing parallelized fuzzing; and the extent to which identical inputs appear to sometimes produce variable behavior in the tested binary.\nThat last bit is actually fairly interesting: it measures the consistency of observed traces. If a program always behaves the same for the same input data, it will earn a score of 100%. When the value is lower but still shown in purple, the fuzzing process is unlikely to be negatively affected. If it goes into red, you may be in trouble, since AFL++ will have difficulty discerning between meaningful and \u0026ldquo;phantom\u0026rdquo; effects of tweaking the input file.\nNow, most targets will just get a 100% score, but when you see lower figures, there are several things to look at:\n The use of uninitialized memory in conjunction with some intrinsic sources of entropy in the tested binary. Harmless to AFL, but could be indicative of a security bug. Attempts to manipulate persistent resources, such as left over temporary files or shared memory objects. This is usually harmless, but you may want to double-check to make sure the program isn\u0026rsquo;t bailing out prematurely. Running out of disk space, SHM handles, or other global resources can trigger this, too. Hitting some functionality that is actually designed to behave randomly. Generally harmless. For example, when fuzzing sqlite, an input like select random(); will trigger a variable execution path. Multiple threads executing at once in semi-random order. This is harmless when the \u0026lsquo;stability\u0026rsquo; metric stays over 90% or so, but can become an issue if not. Here\u0026rsquo;s what to try: Use afl-clang-fast from instrumentation - it uses a thread-local tracking model that is less prone to concurrency issues, See if the target can be compiled or run without threads. Common ./configure options include --without-threads, --disable-pthreads, or --disable-openmp. Replace pthreads with GNU Pth (https://www.gnu.org/software/pth/), which allows you to use a deterministic scheduler. In persistent mode, minor drops in the \u0026ldquo;stability\u0026rdquo; metric can be normal, because not all the code behaves identically when re-entered; but major dips may signify that the code within __AFL_LOOP() is not behaving correctly on subsequent iterations (e.g., due to incomplete clean-up or reinitialization of the state) and that most of the fuzzing effort goes to waste. The paths where variable behavior is detected are marked with a matching entry in the \u0026lt;out_dir\u0026gt;/queue/.state/variable_behavior/ directory, so you can look them up easily.\nCPU load [cpu: 25%] This tiny widget shows the apparent CPU utilization on the local system. It is calculated by taking the number of processes in the \u0026ldquo;runnable\u0026rdquo; state, and then comparing it to the number of logical cores on the system.\nIf the value is shown in green, you are using fewer CPU cores than available on your system and can probably parallelize to improve performance; for tips on how to do that, see fuzzing_in_depth.md:3c) Using multiple cores.\nIf the value is shown in red, your CPU is possibly oversubscribed, and running additional fuzzers may not give you any benefits.\nOf course, this benchmark is very simplistic; it tells you how many processes are ready to run, but not how resource-hungry they may be. It also doesn\u0026rsquo;t distinguish between physical cores, logical cores, and virtualized CPUs; the performance characteristics of each of these will differ quite a bit.\nIf you want a more accurate measurement, you can run the afl-gotcpu utility from the command line.\nInterpreting output See #understanding-the-status-screen for information on how to interpret the displayed stats and monitor the health of the process. Be sure to consult this file especially if any UI elements are highlighted in red.\nThe fuzzing process will continue until you press Ctrl-C. At a minimum, you want to allow the fuzzer to complete one queue cycle, which may take anywhere from a couple of hours to a week or so.\nThere are three subdirectories created within the output directory and updated in real-time:\n queue/ - test cases for every distinctive execution path, plus all the starting files given by the user. This is the synthesized corpus.\n Before using this corpus for any other purposes, you can shrink it to a smaller size using the afl-cmin tool. The tool will find a smaller subset of files offering equivalent edge coverage. crashes/ - unique test cases that cause the tested program to receive a fatal signal (e.g., SIGSEGV, SIGILL, SIGABRT). The entries are grouped by the received signal.\n hangs/ - unique test cases that cause the tested program to time out. The default time limit before something is classified as a hang is the larger of 1 second and the value of the -t parameter. The value can be fine-tuned by setting AFL_HANG_TMOUT, but this is rarely necessary.\n Crashes and hangs are considered \u0026ldquo;unique\u0026rdquo; if the associated execution paths involve any state transitions not seen in previously-recorded faults. If a single bug can be reached in multiple ways, there will be some count inflation early in the process, but this should quickly taper off.\nThe file names for crashes and hangs are correlated with the parent, non-faulting queue entries. This should help with debugging.\nVisualizing If you have gnuplot installed, you can also generate some pretty graphs for any active fuzzing task using afl-plot. For an example of how this looks like, see https://lcamtuf.coredump.cx/afl/plot/.\nYou can also manually build and install afl-plot-ui, which is a helper utility for showing the graphs generated by afl-plot in a graphical window using GTK. You can build and install it as follows:\nsudo apt install libgtk-3-0 libgtk-3-dev pkg-config cd utils/plot_ui make cd ../../ sudo make install To learn more about remote monitoring and metrics visualization with StatsD, see rpc_statsd.md.\nAddendum: status and plot files For unattended operation, some of the key status screen information can be also found in a machine-readable format in the fuzzer_stats file in the output directory. This includes:\n start_time - unix time indicating the start time of afl-fuzz last_update - unix time corresponding to the last update of this file run_time - run time in seconds to the last update of this file fuzzer_pid - PID of the fuzzer process cycles_done - queue cycles completed so far cycles_wo_finds - number of cycles without any new paths found execs_done - number of execve() calls attempted execs_per_sec - overall number of execs per second corpus_count - total number of entries in the queue corpus_favored - number of queue entries that are favored corpus_found - number of entries discovered through local fuzzing corpus_imported - number of entries imported from other instances max_depth - number of levels in the generated data set cur_item - currently processed entry number pending_favs - number of favored entries still waiting to be fuzzed pending_total - number of all entries waiting to be fuzzed corpus_variable - number of test cases showing variable behavior stability - percentage of bitmap bytes that behave consistently bitmap_cvg - percentage of edge coverage found in the map so far saved_crashes - number of unique crashes recorded saved_hangs - number of unique hangs encountered last_find - seconds since the last find was found last_crash - seconds since the last crash was found last_hang - seconds since the last hang was found execs_since_crash - execs since the last crash was found exec_timeout - the -t command line value slowest_exec_ms - real time of the slowest execution in ms peak_rss_mb - max rss usage reached during fuzzing in MB edges_found - how many edges have been found var_byte_count - how many edges are non-deterministic afl_banner - banner text (e.g., the target name) afl_version - the version of AFL++ used target_mode - default, persistent, qemu, unicorn, non-instrumented command_line - full command line used for the fuzzing session Most of these map directly to the UI elements discussed earlier on.\nOn top of that, you can also find an entry called plot_data, containing a plottable history for most of these fields. If you have gnuplot installed, you can turn this into a nice progress report with the included afl-plot tool.\nAddendum: automatically sending metrics with StatsD In a CI environment or when running multiple fuzzers, it can be tedious to log into each of them or deploy scripts to read the fuzzer statistics. Using AFL_STATSD (and the other related environment variables AFL_STATSD_HOST, AFL_STATSD_PORT, AFL_STATSD_TAGS_FLAVOR) you can automatically send metrics to your favorite StatsD server. Depending on your StatsD server, you will be able to monitor, trigger alerts, or perform actions based on these metrics (e.g.: alert on slow exec/s for a new build, threshold of crashes, time since last crash \u0026gt; X, etc.).\nThe selected metrics are a subset of all the metrics found in the status and in the plot file. The list is the following: cycle_done, cycles_wo_finds, execs_done,execs_per_sec, corpus_count, corpus_favored, corpus_found, corpus_imported, max_depth, cur_item, pending_favs, pending_total, corpus_variable, saved_crashes, saved_hangs, total_crashes, slowest_exec_ms, edges_found, var_byte_count, havoc_expansion. Their definitions can be found in the addendum above.\nWhen using multiple fuzzer instances with StatsD, it is strongly recommended to setup the flavor (AFL_STATSD_TAGS_FLAVOR) to match your StatsD server. This will allow you to see individual fuzzer performance, detect bad ones, see the progress of each strategy\u0026hellip;\n"}),a.add({id:7,href:'/docs/best_practices/',title:"Best Practices",content:"Best practices Contents Targets Fuzzing a target with source code available Fuzzing a target with dlopen() instrumented libraries Fuzzing a binary-only target Fuzzing a GUI program Fuzzing a network service Improvements Improving speed Improving stability Targets Fuzzing a target with source code available To learn how to fuzz a target if source code is available, see fuzzing_in_depth.md.\nFuzzing a target with dlopen instrumented libraries If a source code based fuzzing target loads instrumented libraries with dlopen() after the forkserver has been activated and non-colliding coverage instrumentation is used (PCGUARD (which is the default), or LTO), then this an issue, because this would enlarge the coverage map, but afl-fuzz doesn\u0026rsquo;t know about it.\nThe solution is to use AFL_PRELOAD for all dlopen()\u0026lsquo;ed libraries to ensure that all coverage targets are present on startup in the target, even if accessed only later with dlopen().\nFor PCGUARD instrumentation abort() is called if this is detected, for LTO there will either be no coverage for the instrumented dlopen()\u0026lsquo;ed libraries or you will see lots of crashes in the UI.\nNote that this is not an issue if you use the inferiour afl-gcc-fast, afl-gcc orAFL_LLVM_INSTRUMENT=CLASSIC/NGRAM/CTX afl-clang-fast instrumentation.\nFuzzing a binary-only target For a comprehensive guide, see fuzzing_binary-only_targets.md.\nFuzzing a GUI program If the GUI program can read the fuzz data from a file (via the command line, a fixed location or via an environment variable) without needing any user interaction, then it would be suitable for fuzzing.\nOtherwise, it is not possible without modifying the source code - which is a very good idea anyway as the GUI functionality is a huge CPU/time overhead for the fuzzing.\nSo create a new main() that just reads the test case and calls the functionality for processing the input that the GUI program is using.\nFuzzing a network service Fuzzing a network service does not work \u0026ldquo;out of the box\u0026rdquo;.\nUsing a network channel is inadequate for several reasons:\n it has a slow-down of x10-20 on the fuzzing speed it does not scale to fuzzing multiple instances easily, instead of one initial data packet often a back-and-forth interplay of packets is needed for stateful protocols (which is totally unsupported by most coverage aware fuzzers). The established method to fuzz network services is to modify the source code to read from a file or stdin (fd 0) (or even faster via shared memory, combine this with persistent mode instrumentation/README.persistent_mode.md and you have a performance gain of x10 instead of a performance loss of over x10\n that is a x100 difference!). If modifying the source is not an option (e.g., because you only have a binary and perform binary fuzzing) you can also use a shared library with AFL_PRELOAD to emulate the network. This is also much faster than the real network would be. See utils/socket_fuzzing/.\nThere is an outdated AFL++ branch that implements networking if you are desperate though: https://github.com/AFLplusplus/AFLplusplus/tree/networking\n however, a better option is AFLnet (https://github.com/aflnet/aflnet) which allows you to define network state with different type of data packets. Improvements Improving speed Use llvm_mode: afl-clang-lto (llvm \u0026gt;= 11) or afl-clang-fast (llvm \u0026gt;= 9 recommended). Use persistent mode (x2-x20 speed increase). Instrument just what you are interested in, see instrumentation/README.instrument_list.md. If you do not use shmem persistent mode, use AFL_TMPDIR to put the input file directory on a tempfs location, see env_variables.md. Improve Linux kernel performance: modify /etc/default/grub, set GRUB_CMDLINE_LINUX_DEFAULT=\u0026quot;ibpb=off ibrs=off kpti=off l1tf=off mds=off mitigations=off no_stf_barrier noibpb noibrs nopcid nopti nospec_store_bypass_disable nospectre_v1 nospectre_v2 pcid=off pti=off spec_store_bypass_disable=off spectre_v2=off stf_barrier=off\u0026quot;; then update-grub and reboot (warning: makes the system less secure). Running on an ext2 filesystem with noatime mount option will be a bit faster than on any other journaling filesystem. Use your cores (fuzzing_in_depth.md:3c) Using multiple cores)! Improving stability For fuzzing, a 100% stable target that covers all edges is the best case. A 90% stable target that covers all edges is, however, better than a 100% stable target that ignores 10% of the edges.\nWith instability, you basically have a partial coverage loss on an edge, with ignored functions you have a full loss on that edges.\nThere are functions that are unstable, but also provide value to coverage, e.g., init functions that use fuzz data as input. If, however, a function that has nothing to do with the input data is the source of instability, e.g., checking jitter, or is a hash map function etc., then it should not be instrumented.\nTo be able to exclude these functions (based on AFL++\u0026rsquo;s measured stability), the following process will allow to identify functions with variable edges.\nFour steps are required to do this and it also requires quite some knowledge of coding and/or disassembly and is effectively possible only with afl-clang-fast PCGUARD and afl-clang-lto LTO instrumentation.\n Instrument to be able to find the responsible function(s):\na) For LTO instrumented binaries, this can be documented during compile time, just set export AFL_LLVM_DOCUMENT_IDS=/path/to/a/file. This file will have one assigned edge ID and the corresponding function per line.\nb) For PCGUARD instrumented binaries, it is much more difficult. Here you can either modify the __sanitizer_cov_trace_pc_guard function in instrumentation/afl-llvm-rt.o.c to write a backtrace to a file if the ID in __afl_area_ptr[*guard] is one of the unstable edge IDs. (Example code is already there). Then recompile and reinstall llvm_mode and rebuild your target. Run the recompiled target with afl-fuzz for a while and then check the file that you wrote with the backtrace information. Alternatively, you can use gdb to hook __sanitizer_cov_trace_pc_guard_init on start, check to which memory address the edge ID value is written, and set a write breakpoint to that address (watch 0x.....).\nc) In other instrumentation types, this is not possible. So just recompile with the two mentioned above. This is just for identifying the functions that have unstable edges.\n Identify which edge ID numbers are unstable.\nRun the target with export AFL_DEBUG=1 for a few minutes then terminate. The out/fuzzer_stats file will then show the edge IDs that were identified as unstable in the var_bytes entry. You can match these numbers directly to the data you created in the first step. Now you know which functions are responsible for the instability\n Create a text file with the filenames/functions\nIdentify which source code files contain the functions that you need to remove from instrumentation, or just specify the functions you want to skip for instrumentation. Note that optimization might inline functions!\nFollow this document on how to do this: instrumentation/README.instrument_list.md.\nIf PCGUARD is used, then you need to follow this guide (needs llvm 12+!): https://clang.llvm.org/docs/SanitizerCoverage.html#partially-disabling-instrumentation\nOnly exclude those functions from instrumentation that provide no value for coverage - that is if it does not process any fuzz data directly or indirectly (e.g., hash maps, thread management etc.). If, however, a function directly or indirectly handles fuzz data, then you should not put the function in a deny instrumentation list and rather live with the instability it comes with.\n Recompile the target\nRecompile, fuzz it, be happy :)\nThis link explains this process for Fuzzbench.\n "}),a.add({id:8,href:'/docs/binaryonly_fuzzing/',title:"Binaryonly Fuzzing",content:"Fuzzing binary-only programs with AFL++ AFL++, libfuzzer and others are great if you have the source code, and it allows for very fast and coverage guided fuzzing.\nHowever, if there is only the binary program and no source code available, then standard afl-fuzz -n (non-instrumented mode) is not effective.\nThe following is a description of how these binaries can be fuzzed with AFL++.\nTL;DR: qemu_mode in persistent mode is the fastest - if the stability is high enough. Otherwise try retrowrite, afl-dyninst and if these fail too then try standard qemu_mode with AFL_ENTRYPOINT to where you need it.\nIf your target is a library use utils/afl_frida/.\nIf your target is non-linux then use unicorn_mode/.\nQEMU Qemu is the \u0026ldquo;native\u0026rdquo; solution to the program. It is available in the ./qemu_mode/ directory and once compiled it can be accessed by the afl-fuzz -Q command line option. It is the easiest to use alternative and even works for cross-platform binaries.\nThe speed decrease is at about 50%. However various options exist to increase the speed:\n using AFL_ENTRYPOINT to move the forkserver entry to a later basic block in the binary (+5-10% speed) using persistent mode qemu_mode/README.persistent.md this will result in 150-300% overall speed increase - so 3-8x the original qemu_mode speed! using AFL_CODE_START/AFL_CODE_END to only instrument specific parts Note that there is also honggfuzz: https://github.com/google/honggfuzz which now has a qemu_mode, but its performance is just 1.5% \u0026hellip;\nAs it is included in AFL++ this needs no URL.\nIf you like to code a customized fuzzer without much work, we highly recommend to check out our sister project libafl which will support QEMU too: https://github.com/AFLplusplus/LibAFL\nAFL FRIDA In frida_mode you can fuzz binary-only targets easily like with QEMU, with the advantage that frida_mode also works on MacOS (both intel and M1).\nIf you want to fuzz a binary-only library then you can fuzz it with frida-gum via utils/afl_frida/, you will have to write a harness to call the target function in the library, use afl-frida.c as a template.\nBoth come with AFL++ so this needs no URL.\nYou can also perform remote fuzzing with frida, e.g. if you want to fuzz on iPhone or Android devices, for this you can use https://github.com/ttdennis/fpicker/ as an intermediate that uses AFL++ for fuzzing.\nIf you like to code a customized fuzzer without much work, we highly recommend to check out our sister project libafl which supports Frida too: https://github.com/AFLplusplus/LibAFL Working examples already exist :-)\nWINE+QEMU Wine mode can run Win32 PE binaries with the QEMU instrumentation. It needs Wine, python3 and the pefile python package installed.\nAs it is included in AFL++ this needs no URL.\nUNICORN Unicorn is a fork of QEMU. The instrumentation is, therefore, very similar. In contrast to QEMU, Unicorn does not offer a full system or even userland emulation. Runtime environment and/or loaders have to be written from scratch, if needed. On top, block chaining has been removed. This means the speed boost introduced in the patched QEMU Mode of AFL++ cannot simply be ported over to Unicorn. For further information, check out unicorn_mode/README.md.\nAs it is included in AFL++ this needs no URL.\nAFL UNTRACER If you want to fuzz a binary-only shared library then you can fuzz it with utils/afl_untracer/, use afl-untracer.c as a template. It is slower than AFL FRIDA (see above).\nDYNINST Dyninst is a binary instrumentation framework similar to Pintool and Dynamorio (see far below). However whereas Pintool and Dynamorio work at runtime, dyninst instruments the target at load time, and then let it run - or save the binary with the changes. This is great for some things, e.g. fuzzing, and not so effective for others, e.g. malware analysis.\nSo what we can do with dyninst is taking every basic block, and put afl\u0026rsquo;s instrumention code in there - and then save the binary. Afterwards we can just fuzz the newly saved target binary with afl-fuzz. Sounds great? It is. The issue though - it is a non-trivial problem to insert instructions, which change addresses in the process space, so that everything is still working afterwards. Hence more often than not binaries crash when they are run.\nThe speed decrease is about 15-35%, depending on the optimization options used with afl-dyninst.\nSo if Dyninst works, it is the best option available. Otherwise it just doesn\u0026rsquo;t work well.\nhttps://github.com/vanhauser-thc/afl-dyninst\nRETROWRITE, ZAFL, \u0026hellip; other binary rewriter If you have an x86/x86_64 binary that still has its symbols, is compiled with position independant code (PIC/PIE) and does not use most of the C++ features then the retrowrite solution might be for you. It decompiles to ASM files which can then be instrumented with afl-gcc.\nIt is at about 80-85% performance.\nhttps://git.zephyr-software.com/opensrc/zafl https://github.com/HexHive/retrowrite\nMCSEMA Theoretically you can also decompile to llvm IR with mcsema, and then use llvm_mode to instrument the binary. Good luck with that.\nhttps://github.com/lifting-bits/mcsema\nINTEL-PT If you have a newer Intel CPU, you can make use of Intels processor trace. The big issue with Intel\u0026rsquo;s PT is the small buffer size and the complex encoding of the debug information collected through PT. This makes the decoding very CPU intensive and hence slow. As a result, the overall speed decrease is about 70-90% (depending on the implementation and other factors).\nThere are two AFL intel-pt implementations:\n https://github.com/junxzm1990/afl-pt =\u0026gt; this needs Ubuntu 14.04.05 without any updates and the 4.4 kernel.\n https://github.com/hunter-ht-2018/ptfuzzer =\u0026gt; this needs a 4.14 or 4.15 kernel. the \u0026ldquo;nopti\u0026rdquo; kernel boot option must be used. This one is faster than the other.\n Note that there is also honggfuzz: https://github.com/google/honggfuzz But its IPT performance is just 6%!\nCORESIGHT Coresight is ARM\u0026rsquo;s answer to Intel\u0026rsquo;s PT. There is no implementation so far which handles coresight and getting it working on an ARM Linux is very difficult due to custom kernel building on embedded systems is difficult. And finding one that has coresight in the ARM chip is difficult too. My guess is that it is slower than Qemu, but faster than Intel PT.\nIf anyone finds any coresight implementation for AFL please ping me: [email protected]\nPIN \u0026amp; DYNAMORIO Pintool and Dynamorio are dynamic instrumentation engines, and they can be used for getting basic block information at runtime. Pintool is only available for Intel x32/x64 on Linux, Mac OS and Windows, whereas Dynamorio is additionally available for ARM and AARCH64. Dynamorio is also 10x faster than Pintool.\nThe big issue with Dynamorio (and therefore Pintool too) is speed. Dynamorio has a speed decrease of 98-99% Pintool has a speed decrease of 99.5%\nHence Dynamorio is the option to go for if everything else fails, and Pintool only if Dynamorio fails too.\nDynamorio solutions:\n https://github.com/vanhauser-thc/afl-dynamorio https://github.com/mxmssh/drAFL https://github.com/googleprojectzero/winafl/ \u0026lt;= very good but windows only Pintool solutions:\n https://github.com/vanhauser-thc/afl-pin https://github.com/mothran/aflpin https://github.com/spinpx/afl_pin_mode \u0026lt;= only old Pintool version supported Non-AFL solutions There are many binary-only fuzzing frameworks. Some are great for CTFs but don\u0026rsquo;t work with large binaries, others are very slow but have good path discovery, some are very hard to set-up \u0026hellip;\n QSYM: https://github.com/sslab-gatech/qsym Manticore: https://github.com/trailofbits/manticore S2E: https://github.com/S2E Tinyinst: https://github.com/googleprojectzero/TinyInst (Mac/Windows only) Jackalope: https://github.com/googleprojectzero/Jackalope \u0026hellip; please send me any missing that are good Closing words That\u0026rsquo;s it! News, corrections, updates? Send an email to [email protected]\n"}),a.add({id:9,href:'/docs/changelog/',title:"Changelog",content:"Changelog This is the list of all noteworthy changes made in every public release of the tool. See README.md for the general instruction manual.\nStaying informed Want to stay in the loop on major new features? Join our mailing list by sending a mail to [email protected].\nVersion ++4.00c (release) complete documentation restructuring, made possible by Google Season of Docs :) thank you Jana! we renamed several UI and fuzzer_stat entries to be more precise, e.g. \u0026ldquo;unique crashes\u0026rdquo; -\u0026gt; \u0026ldquo;saved crashes\u0026rdquo;, \u0026ldquo;total paths\u0026rdquo; -\u0026gt; \u0026ldquo;corpus count\u0026rdquo;, \u0026ldquo;current path\u0026rdquo; -\u0026gt; \u0026ldquo;current item\u0026rdquo;. This might need changing custom scripting! Nyx mode (full system emulation with snapshot capability) has been added - thanks to @schumilo and @eqv! unicorn_mode: Moved to unicorn2! by Ziqiao Kong (@lazymio) Faster, more accurate emulation (newer QEMU base), risc-v support removed indirections in rust callbacks new binary-only fuzzing mode: coresight_mode for aarch64 CPUs :) thanks to RICSecLab submitting! if instrumented libaries are dlopen()\u0026lsquo;ed after the forkserver you will now see a crash. Before you would have colliding coverage. We changed this to force fixing a broken setup rather then allowing ineffective fuzzing. See docs/best_practices.md how to fix such setups. afl-fuzz: cmplog binaries will need to be recompiled for this version (it is better!) fix a regression introduced in 3.10 that resulted in less coverage being detected. thanks to Collin May for reporting! ensure all spawned targets are killed on exit added AFL_IGNORE_PROBLEMS, plus checks to identify and abort on incorrect LTO usage setups and enhanced the READMEs for better information on how to deal with instrumenting libraries fix -n dumb mode (nobody should use this mode though) fix stability issue with LTO and cmplog better banner more effective cmplog mode more often update the UI when in input2stage mode qemu_mode/unicorn_mode: fixed OOB write when using libcompcov, thanks to kotee4ko for reporting! frida_mode: better performance, bug fixes David Carlier added Android support :) afl-showmap, afl-tmin and afl-analyze: honor persistent mode for more speed. thanks to dloffre-snl for reporting! fix bug where targets are not killed on timeouts moved hidden afl-showmap -A option to -H to be used for coresight_mode Prevent accidentaly killing non-afl/fuzz services when aborting afl-showmap and other tools. afl-cc: detect overflow reads on initial input buffer for asan new cmplog mode (incompatible with older afl++ versions) support llvm IR select instrumentation for default PCGUARD and LTO fix for shared linking on MacOS better selective instrumentation AFL_LLVM_{ALLOW|DENY}LIST on filename matching (requires llvm 11 or newer) fixed a potential crash in targets for LAF string handling fixed a bad assert in LAF split switches added AFL_USE_TSAN thread sanitizer support llvm and LTO mode modified to work with new llvm 14-dev (again.) fix for AFL_REAL_LD more -z defs filtering make -v without options work added the very good grammar mutator \u0026ldquo;GramaTron\u0026rdquo; to the custom_mutators added optimin, a faster and better corpus minimizer by Adrian Herrera. Thank you! added afl-persistent-config script to set perform permanent system configuration settings for fuzzing, for Linux and Macos. thanks to jhertz! added xml, curl \u0026amp; exotic string functions to llvm dictionary feature fix AFL_PRELOAD issues on MacOS removed utils/afl_frida because frida_mode/ is now so much better added uninstall target to makefile (todo: update new readme!) Version ++3.14c (release) afl-fuzz: fix -F when a \u0026lsquo;/\u0026rsquo; was part of the parameter fixed a crash for cmplog for very slow inputs fix for AFLfast schedule counting removed implied -D determinstic from -M main if the target becomes unavailable check out out/default/error.txt for an indicator why AFL_CAL_FAST was a dead env, now does the same as AFL_FAST_CAL reverse read the queue on resumes (more effective) fix custom mutator trimming afl-cc: Update to COMPCOV/laf-intel that speeds up the instrumentation process a lot - thanks to Michael Rodler/f0rki for the PR! Fix for failures for some sized string instrumentations Fix to instrument global namespace functions in c++ Fix for llvm 13 support partial linking do honor AFL_LLVM_{ALLOW/DENY}LIST for LTO autodictionary andDICT2FILE We do support llvm versions from 3.8 to 5.0 again frida_mode: several fixes for cmplog remove need for AFL_FRIDA_PERSISTENT_RETADDR_OFFSET less coverage collision feature parity of aarch64 with intel now (persistent, cmplog, in-memory testcases, asan) afl-cmin and afl-showmap -i do now descend into subdirectories (like afl-fuzz does) - note that afl-cmin.bash does not! afl_analyze: fix timeout handling add forkserver support for better performance ensure afl-compiler-rt is built for gcc_module always build aflpp_driver for libfuzzer harnesses added AFL_NO_FORKSRV env variable support to afl-cmin, afl-tmin, and afl-showmap, by @jhertz removed outdated documents, improved existing documentation Version ++3.13c (release) Note: plot_data switched to relative time from unix time in 3.10 frida_mode - new mode that uses frida to fuzz binary-only targets, it currently supports persistent mode and cmplog. thanks to @WorksButNotTested! create a fuzzing dictionary with the help of CodeQL thanks to @microsvuln! see utils/autodict_ql afl-fuzz: added patch by @realmadsci to support @@ as part of command line options, e.g. afl-fuzz ... -- ./target --infile=@@ add recording of previous fuzz attempts for persistent mode to allow replay of non-reproducable crashes, see AFL_PERSISTENT_RECORD in config.h and docs/envs.h fixed a bug when trimming for stdin targets cmplog -l: default cmplog level is now 2, better efficiency. level 3 now performs redqueen on everything. use with care. better fuzzing strategy yield display for enabled options ensure one fuzzer sync per cycle fix afl_custom_queue_new_entry original file name when syncing from fuzzers fixed a crash when more than one custom mutator was used together with afl_custom_post_process on a crashing seed potentially the wrong input was disabled added AFL_EXIT_ON_SEED_ISSUES env that will exit if a seed in -i dir crashes the target or results in a timeout. By default AFL++ ignores these and uses them for splicing instead. added AFL_EXIT_ON_TIME env that will make afl-fuzz exit fuzzing after no new paths have been found for n seconds when AFL_FAST_CAL is set a variable path will now be calibrated 8 times instead of originally 40. Long calibration is now 20. added AFL_TRY_AFFINITY to try to bind to CPUs but don\u0026rsquo;t error if it fails afl-cc: We do not support llvm versions prior 6.0 anymore added thread safe counters to all modes (AFL_LLVM_THREADSAFE_INST), note that this disables NeverZero counters. Fix for -pie compiled binaries with default afl-clang-fast PCGUARD Leak Sanitizer (AFL_USE_LSAN) added by Joshua Rogers, thanks! Removed InsTrim instrumentation as it is not as good as PCGUARD Removed automatic linking with -lc++ for LTO mode Fixed a crash in llvm dict2file when a strncmp length was -1 added \u0026ndash;afl-noopt support utils/aflpp_driver: aflpp_qemu_driver_hook fixed to work with qemu_mode aflpp_driver now compiled with -fPIC unicornafl: fix MIPS delay slot caching, thanks @JackGrence fixed aarch64 exit address execution no longer stops at address 0x0 updated afl-system-config to support Arch Linux weirdness and increase MacOS shared memory updated the grammar custom mutator to the newest version add -d (add dead fuzzer stats) to afl-whatsup added AFL_PRINT_FILENAMES to afl-showmap/cmin to print the current filename afl-showmap/cmin will now process queue items in alphabetical order Version ++3.12c (release) afl-fuzz: added AFL_TARGET_ENV variable to pass extra env vars to the target (for things like LD_LIBRARY_PATH) fix map detection, AFL_MAP_SIZE not needed anymore for most cases fix counting favorites (just a display thing) afl-cc: fix cmplog rtn (rare crash and not being able to gather ptr data) fix our own PCGUARD implementation to compile with llvm 10.0.1 link runtime not to shared libs ensure shared libraries are properly built and instrumented AFL_LLVM_INSTRUMENT_ALLOW/DENY were not implemented for LTO, added show correct LLVM PCGUARD NATIVE mode when auto switching to it and keep fsanitize-coverage-*list=\u0026hellip; Short mnemnonic NATIVE is now also accepted. qemu_mode (thanks @realmadsci): move AFL_PRELOAD and AFL_USE_QASAN logic inside afl-qemu-trace add AFL_QEMU_CUSTOM_BIN unicorn_mode accidently removed the subfolder from github, re-added added DEFAULT_PERMISSION to config.h for all files created, default to 0600 Version ++3.11c (release) afl-fuzz: better auto detection of map size fix sanitizer settings (bug since 3.10c) fix an off-by-one overwrite in cmplog add non-unicode variants from unicode-looking dictionary entries Rust custom mutator API improvements Imported crash stats painted yellow on resume (only new ones are red) afl-cc: added AFL_NOOPT that will just pass everything to the normal gcc/clang compiler without any changes - to pass weird configure scripts fixed a crash that can occur with ASAN + CMPLOG together plus better support for unicode (thanks to @stbergmann for reporting!) fixed a crash in LAF transform for empty strings handle erroneous setups in which multiple afl-compiler-rt are compiled into the target. This now also supports dlopen() instrumented libs loaded before the forkserver and even after the forkserver is started (then with collisions though) the compiler rt was added also in object building (-c) which should have been fixed years ago but somewhere got lost :( Renamed CTX to CALLER, added correct/real CTX implementation to CLASSIC qemu_mode: added AFL_QEMU_EXCLUDE_RANGES env by @realmadsci, thanks! if no new/updated checkout is wanted, build with: NO_CHECKOUT=1 ./build_qemu_support.sh we no longer perform a \u0026ldquo;git drop\u0026rdquo; afl-cmin: support filenames with spaces Version ++3.10c (release) Mac OS ARM64 support Android support fixed and updated by Joey Jiaojg - thanks! New selective instrumentation option with _AFL_COVERAGE* commands to be placed in the source code. Check out instrumentation/README.instrument_list.md afl-fuzz Making AFL_MAP_SIZE (mostly) obsolete - afl-fuzz now learns on start the target map size upgraded cmplog/redqueen: solving for floating point, solving transformations (e.g. toupper, tolower, to/from hex, xor, arithmetics, etc.). This is costly hence new command line option -l that sets the intensity (values 1 to 3). Recommended is 2. added AFL_CMPLOG_ONLY_NEW to not use cmplog on initial seeds from -i or resumes (these have most likely already been done) fix crash for very, very fast targets+systems (thanks to mhlakhani for reporting) on restarts (-i)/autoresume (AFL_AUTORESUME) the stats are now reloaded and used, thanks to Vimal Joseph for this patch! changed the meaning of \u0026lsquo;+\u0026rsquo; of the \u0026lsquo;-t\u0026rsquo; option, it now means to auto-calculate the timeout with the value given being the max timeout. The original meaning of skipping timeouts instead of abort is now inherent to the -t option. if deterministic mode is active (-D, or -M without -d) then we sync after every queue entry as this can take very long time otherwise added minimum SYNC_TIME to include/config.h (30 minutes default) better detection if a target needs a large shared map fix for -Z fixed a few crashes switched to an even faster RNG added hghwng\u0026rsquo;s patch for faster trace map analysis printing suggestions for mistyped AFL_ env variables added Rust bindings for custom mutators (thanks @julihoh) afl-cc allow instrumenting LLVMFuzzerTestOneInput fixed endless loop for allow/blocklist lines starting with a comment (thanks to Zherya for reporting) cmplog/redqueen now also tracks floating point, _ExtInt() + 128bit cmplog/redqueen can now process basic libc++ and libstdc++ std::string comparisons (no position or length type variants) added support for __afl_coverage_interesting() for LTO and our own PCGUARD (llvm 10.0.1+), read more about this function and selective coverage in instrumentation/README.instrument_list.md added AFL_LLVM_INSTRUMENT option NATIVE for native clang pc-guard support (less performant than our own), GCC for old afl-gcc and CLANG for old afl-clang fixed a potential crash in the LAF feature workaround for llvm bitcast lto bug workaround for llvm 13 qemuafl QASan (address sanitizer for Qemu) ported to qemuafl! See qemu_mode/libqasan/README.md solved some persistent mode bugs (thanks Dil4rd) solved an issue when dumping the memory maps (thanks wizche) Android support for QASan unicornafl Substantial speed gains in python bindings for certain use cases Improved rust bindings Added a new example harness to compare python, c and rust bindings afl-cmin and afl-showmap now support the -f option afl_plot now also generates a graph on the discovered edges changed default: no memory limit for afl-cmin and afl-cmin.bash warn on any _AFL and __AFL env vars. set AFL_IGNORE_UNKNOWN_ENVS to not warn on unknown AFL_\u0026hellip; env vars added dummy Makefile to instrumentation/ Updated utils/afl_frida to be 5% faster, 7% on x86_x64 Added AFL_KILL_SIGNAL env variable (thanks @v-p-b) @Edznux added a nice documentation on how to use rpc.statsd with AFL++ in docs/rpc_statsd.md, thanks! Version ++3.00c (release) llvm_mode/ and gcc_plugin/ moved to instrumentation/ examples/ renamed to utils/ moved libdislocator, libtokencap and qdbi_mode to utils/ all compilers combined to afl-cc which emulates the previous ones afl-llvm/gcc-rt.o merged into afl-compiler-rt.o afl-fuzz not specifying -M or -S will now auto-set \u0026ldquo;-S default\u0026rdquo; deterministic fuzzing is now disabled by default and can be enabled with -D. It is still enabled by default for -M. a new seed selection was implemented that uses weighted randoms based on a schedule performance score, which is much better that the previous walk the whole queue approach. Select the old mode with -Z (auto enabled with -M) Marcel Boehme submitted a patch that improves all AFFast schedules :) the default schedule is now FAST memory limits are now disabled by default, set them with -m if required rpc.statsd support, for stats and charts, by Edznux, thanks a lot! reading testcases from -i now descends into subdirectories allow the -x command line option up to 4 times loaded extras now have a duplication protection If test cases are too large we do a partial read on the maximum supported size longer seeds with the same trace information will now be ignored for fuzzing but still be used for splicing crashing seeds are now not prohibiting a run anymore but are skipped - they are used for splicing, though update MOpt for expanded havoc modes setting the env var AFL_NO_AUTODICT will not load an LTO autodictionary added NO_SPLICING compile option and makefile define added INTROSPECTION make target that writes all mutations to out/NAME/introspection.txt print special compile time options used in help output when using -c cmplog, one of the childs was not killed, fixed somewhere we broke -n dumb fuzzing, fixed added afl_custom_describe to the custom mutator API to allow for easy mutation reproduction on crashing inputs new env. var. AFL_NO_COLOR (or AFL_NO_COLOUR) to suppress colored console output (when configured with USE_COLOR and not ALWAYS_COLORED) instrumentation We received an enhanced gcc_plugin module from AdaCore, thank you very much!! not overriding -Ox or -fno-unroll-loops anymore we now have our own trace-pc-guard implementation. It is the same as -fsanitize-coverage=trace-pc-guard from llvm 12, but: it is a) inline and b) works from llvm 10.0.1 + onwards :) new llvm pass: dict2file via AFL_LLVM_DICT2FILE, create afl-fuzz -x dictionary of string comparisons found during compilation LTO autodict now also collects interesting cmp comparisons, std::string compare + find + ==, bcmp fix crash in dict2file for integers \u0026gt; 64 bit custom mutators added a new custom mutator: symcc -\u0026gt; https://github.com/eurecom-s3/symcc/ added a new custom mutator: libfuzzer that integrates libfuzzer mutations Our AFL++ Grammar-Mutator is now better integrated into custom_mutators/ added INTROSPECTION support for custom modules python fuzz function was not optional, fixed some python mutator speed improvements afl-cmin/afl-cmin.bash now search first in PATH and last in AFL_PATH unicornafl synced with upstream version 1.02 (fixes, better rust bindings) renamed AFL_DEBUG_CHILD_OUTPUT to AFL_DEBUG_CHILD added AFL_CRASH_EXITCODE env variable to treat a child exitcode as crash Version ++2.68c (release) added the GSoC excellent AFL++ grammar mutator by Shengtuo to our custom_mutators/ (see custom_mutators/README.md) - or get it here: https://github.com/AFLplusplus/Grammar-Mutator a few QOL changes for Apple and its outdated gmake afl-fuzz: fix for auto dictionary entries found during fuzzing to not throw out a -x dictionary added total execs done to plot file AFL_MAX_DET_EXTRAS env variable added to control the amount of deterministic dict entries without recompiling. AFL_FORKSRV_INIT_TMOUT env variable added to control the time to wait for the forkserver to come up without the need to increase the overall timeout. bugfix for cmplog that results in a heap overflow based on target data (thanks to the magma team for reporting!) write fuzzing setup into out/fuzzer_setup (environment variables and command line) custom mutators: added afl_custom_fuzz_count/fuzz_count function to allow specifying the number of fuzz attempts for custom_fuzz llvm_mode: ported SanCov to LTO, and made it the default for LTO. better instrumentation locations Further llvm 12 support (fast moving target like AFL++ :-) ) deprecated LLVM SKIPSINGLEBLOCK env environment Version ++2.67c (release) Support for improved AFL++ snapshot module: https://github.com/AFLplusplus/AFL-Snapshot-LKM Due to the instrumentation needing more memory, the initial memory sizes for -m have been increased afl-fuzz: added -F option to allow -M main fuzzers to sync to foreign fuzzers, e.g. honggfuzz or libfuzzer added -b option to bind to a specific CPU eliminated CPU affinity race condition for -S/-M runs expanded havoc mode added, on no cycle finds add extra splicing and MOpt into the mix fixed a bug in redqueen for strings and made deterministic with -s Compiletime autodictionary fixes llvm_mode: now supports llvm 12 support for AFL_LLVM_ALLOWLIST/AFL_LLVM_DENYLIST (previous AFL_LLVM_WHITELIST and AFL_LLVM_INSTRUMENT_FILE are deprecated and are matched to AFL_LLVM_ALLOWLIST). The format is compatible to llvm sancov, and also supports function matching :) added neverzero counting to trace-pc/pcgard fixes for laf-intel float splitting (thanks to mark-griffin for reporting) fixes for llvm 4.0 skipping ctors and ifuncs for instrumentation LTO: switch default to the dynamic memory map, set AFL_LLVM_MAP_ADDR for a fixed map address (eg. 0x10000) LTO: improved stability for persistent mode, no other instrumentation has that advantage LTO: fixed autodict for long strings LTO: laf-intel and redqueen/cmplog are now applied at link time to prevent llvm optimizing away the splits LTO: autodictionary mode is a fixed default now LTO: instrim instrumentation disabled, only classic support used as it is always better LTO: env var AFL_LLVM_DOCUMENT_IDS=file will document which edge ID was given to which function during compilation LTO: single block functions were not implemented by default, fixed LTO: AFL_LLVM_SKIP_NEVERZERO behaviour was inversed, fixed setting AFL_LLVM_LAF_SPLIT_FLOATS now activates AFL_LLVM_LAF_SPLIT_COMPARES support for -E and -shared compilation runs added honggfuzz mangle as a custom mutator in custom_mutators/honggfuzz added afl-frida gum solution to examples/afl_frida (mostly imported from https://github.com/meme/hotwax/) small fixes to afl-plot, afl-whatsup and man page creation new README, added FAQ Version ++2.66c (release) renamed the main branch on Github to \u0026ldquo;stable\u0026rdquo; renamed master/slave to main/secondary renamed blacklist/whitelist to ignorelist/instrumentlist -\u0026gt; AFL_LLVM_INSTRUMENT_FILE and AFL_GCC_INSTRUMENT_FILE warn on deprecated environment variables afl-fuzz: -S secondary nodes now only sync from the main node to increase performance, the -M main node still syncs from everyone. Added checks that ensure exactly one main node is present and warn otherwise Add -D after -S to force a secondary to perform deterministic fuzzing If no main node is present at a sync one secondary node automatically becomes a temporary main node until a real main nodes shows up Fixed a mayor performance issue we inherited from AFLfast switched murmur2 hashing and random() for xxh3 and xoshiro256**, resulting in an up to 5.5% speed increase Resizing the window does not crash afl-fuzz anymore Ensure that the targets are killed on exit fix/update to MOpt (thanks to arnow117) added MOpt dictionary support from repo added experimental SEEK power schedule. It is EXPLORE with ignoring the runtime and less focus on the length of the test case llvm_mode: the default instrumentation is now PCGUARD if the llvm version is \u0026gt;= 7, as it is faster and provides better coverage. The original afl instrumentation can be set via AFL_LLVM_INSTRUMENT=AFL. This is automatically done when the instrument_file list feature is used. PCGUARD mode is now even better because we made it collision free - plus it has a fixed map size, so it is also faster! :) some targets want a ld variant for LD that is not gcc/clang but ld, added afl-ld-lto to solve this lowered minimum required llvm version to 3.4 (except LLVMInsTrim, which needs 3.8.0) instrument_file list feature now supports wildcards (thanks to sirmc) small change to cmplog to make it work with current llvm 11-dev added AFL_LLVM_LAF_ALL, sets all laf-intel settings LTO instrument_files functionality rewritten, now main, _init etc functions need not to be listed anymore fixed crash in compare-transform-pass when strcasecmp/strncasecmp was tried to be instrumented with LTO fixed crash in cmplog with LTO enable snapshot lkm also for persistent mode Unicornafl Added powerPC support from unicorn/next rust bindings! CMPLOG/Redqueen now also works for MMAP sharedmem ensure shmem is released on errors we moved radamsa to be a custom mutator in ./custom_mutators/. It is not compiled by default anymore. allow running in /tmp (only unsafe with umask 0) persistent mode shared memory testcase handover (instead of via files/stdin) - 10-100% performance increase General support for 64 bit PowerPC, RiscV, Sparc etc. fix afl-cmin.bash slightly better performance compilation options for AFL++ and targets fixed afl-gcc/afl-as that could break on fast systems reusing pids in the same second added lots of dictionaries from oss-fuzz, go-fuzz and Jakub Wilk added former post_library examples to examples/custom_mutators/ Dockerfile upgraded to Ubuntu 20.04 Focal and installing llvm 11 and gcc 10 so afl-clang-lto can be build Version ++2.65c (release): afl-fuzz: AFL_MAP_SIZE was not working correctly better python detection an old, old bug in AFL that would show negative stability in rare circumstances is now hopefully fixed AFL_POST_LIBRARY was deprecated, use AFL_CUSTOM_MUTATOR_LIBRARY instead (see docs/custom_mutators.md) llvm_mode: afl-clang-fast/lto now do not skip single block functions. This behaviour can be reactivated with AFL_LLVM_SKIPSINGLEBLOCK if LLVM 11 is installed the posix shm_open+mmap is used and a fixed address for the shared memory map is used as this increases the fuzzing speed InsTrim now has an LTO version! :-) That is the best and fastest mode! fixes to LTO mode if instrumented edges \u0026gt; MAP_SIZE CTX and NGRAM can now be used together CTX and NGRAM are now also supported in CFG/INSTRIM mode AFL_LLVM_LAF_TRANSFORM_COMPARES could crash, fixed added AFL_LLVM_SKIP_NEVERZERO to skip the never zero coverage counter implementation. For targets with few or no loops or heavily called functions. Gives a small performance boost. qemu_mode: add information on PIE/PIC load addresses for 32 bit better dependency checks gcc_plugin: better dependency checks unicorn_mode: validate_crash_callback can now count non-crashing inputs as crash as well better submodule handling afl-showmap: fix for -Q mode added examples/afl_network_proxy which allows to fuzz a target over the network (not fuzzing tcp/ip services but running afl-fuzz on one system and the target being on an embedded device) added examples/afl_untracer which does a binary-only fuzzing with the modifications done in memory (intel32/64 and aarch64 support) added examples/afl_proxy which can be easily used to fuzz and instrument non-standard things all: forkserver communication now also used for error reporting fix 32 bit build options make clean now leaves qemu-3.1.1.tar.xz and the unicornafl directory intact if in a git/svn checkout - unless \u0026ldquo;deepclean\u0026rdquo; is used Version ++2.64c (release): llvm_mode LTO mode: now requires llvm11 - but compiles all targets! :) autodictionary feature added, enable with AFL_LLVM_LTO_AUTODICTIONARY variable map size usage afl-fuzz: variable map size support added (only LTO mode can use this) snapshot feature usage now visible in UI Now setting -L -1 will enable MOpt in parallel to normal mutation. Additionally, this allows to run dictionaries, radamsa and cmplog. fix for cmplog/redqueen mode if stdin was used fix for writing a better plot_data file qemu_mode: fix for persistent mode (which would not terminate or get stuck) compare-transform/AFL_LLVM_LAF_TRANSFORM_COMPARES now transforms also static global and local variable comparisons (cannot find all though) extended forkserver: map_size and more information is communicated to afl-fuzz (and afl-fuzz acts accordingly) new environment variable: AFL_MAP_SIZE to specify the size of the shared map if AFL_CC/AFL_CXX is set but empty AFL compilers did fail, fixed (this bug is in vanilla AFL too) added NO_PYTHON flag to disable python support when building afl-fuzz more refactoring Version ++2.63c (release): ! the repository was moved from vanhauser-thc to AFLplusplus. It is now an own organisation :) ! development and acceptance of PRs now happen only in the dev branch and only occasionally when everything is fine we PR to master\n all: big code changes to make afl-fuzz thread-safe so afl-fuzz can spawn multiple fuzzing threads in the future or even become a library AFL basic tools now report on the environment variables picked up more tools get environment variable usage info in the help output force all output to stdout (some OK/SAY/WARN messages were sent to stdout, some to stderr) uninstrumented mode uses an internal forkserver (\u0026ldquo;fauxserver\u0026rdquo;) now builds with -D_FORTIFY_SOURCE=2 drastically reduced number of (de)allocations during fuzzing afl-fuzz: python mutator modules and custom mutator modules now use the same interface and hence the API changed AFL_AUTORESUME will resume execution without the need to specify -i - added experimental power schedules (-p): mmopt: ignores runtime of queue entries, gives higher weighting to the last 5 queue entries rare: puts focus on queue entries that hits rare branches, also ignores runtime llvm_mode: added SNAPSHOT feature (using https://github.com/AFLplusplus/AFL-Snapshot-LKM) added Control Flow Integrity sanitizer (AFL_USE_CFISAN) added AFL_LLVM_INSTRUMENT option to control the instrumentation type easier: DEFAULT, CFG (INSTRIM), LTO, CTX, NGRAM-x (x=2-16) made USE_TRACE_PC compile obsolete LTO collision free instrumented added in llvm_mode with afl-clang-lto - this mode is amazing but requires you to build llvm 11 yourself Added llvm_mode NGRAM prev_loc coverage by Adrean Herrera (https://github.com/adrianherrera/afl-ngram-pass/), activate by setting AFL_LLVM_INSTRUMENT=NGRAM-or AFL_LLVM_NGRAM_SIZE= Added llvm_mode context sensitive branch coverage, activated by setting AFL_LLVM_INSTRUMENT=CTX or AFL_LLVM_CTX=1 llvm_mode InsTrim mode: removed workaround for bug where paths were not instrumented and imported fix by author made skipping 1 block functions an option and is disabled by default, set AFL_LLVM_INSTRIM_SKIPSINGLEBLOCK=1 to re-enable this qemu_mode: qemu_mode now uses solely the internal capstone version to fix builds on modern Linux distributions QEMU now logs routine arguments for CmpLog when the target is x86 afl-tmin: now supports hang mode -H to minimize hangs fixed potential afl-tmin missbehavior for targets with multiple hangs Pressing Control-c in afl-cmin did not terminate it for some OS the custom API was rewritten and is now the same for Python and shared libraries. Version ++2.62c (release): Important fix for memory allocation functions that result in afl-fuzz not identifying crashes - UPDATE! Small fix for -E/-V to release the CPU CmpLog does not need sancov anymore Version ++2.61c (release): use -march=native if available most tools now check for mistyped environment variables gcc 10 is now supported the memory safety checks are now disabled for a little more speed during fuzzing (only affects creating queue entries), can be toggled in config.h afl-fuzz: MOpt out of bounds writing crash fixed now prints the real python version support compiled in set stronger performance compile options and little tweaks Android: prefer bigcores when selecting a CPU CmpLog forkserver Redqueen input-2-state mutator (cmp instructions only ATM) all Python 2+3 versions supported now changed execs_per_sec in fuzzer_stats from \u0026ldquo;current\u0026rdquo; execs per second (which is pointless) to total execs per second bugfix for dictionary insert stage count (fix via Google repo PR) added warning if -M is used together with custom mutators with _ONLY option AFL_TMPDIR checks are now later and better explained if they fail llvm_mode InsTrim: three bug fixes: (minor) no pointless instrumentation of 1 block functions (medium) path bug that leads a few blocks not instrumented that should be (major) incorrect prev_loc was written, fixed! afl-clang-fast: show in the help output for which llvm version it was compiled for now does not need to be recompiled between trace-pc and pass instrumentation. compile normally and set AFL_LLVM_USE_TRACE_PC :) LLVM 11 is supported CmpLog instrumentation using SanCov (see llvm_mode/README.cmplog.md) afl-gcc, afl-clang-fast, afl-gcc-fast: experimental support for undefined behaviour sanitizer UBSAN (set AFL_USE_UBSAN=1) the instrumentation summary output now also lists activated sanitizers afl-as: added isatty(2) check back in added AFL_DEBUG (for upcoming merge) qemu_mode: persistent mode is now also available for arm and aarch64 CmpLog instrumentation for QEMU (-c afl-fuzz command line option) for x86, x86_64, arm and aarch64 AFL_PERSISTENT_HOOK callback module for persistent QEMU (see examples/qemu_persistent_hook) added qemu_mode/README.persistent.md documentation AFL_ENTRYPOINT now has instruction granularity afl-cmin is now a sh script (invoking awk) instead of bash for portability the original script is still present as afl-cmin.bash afl-showmap: -i dir option now allows processing multiple inputs using the forkserver. This is for enhanced speed in afl-cmin. added blacklist and instrument_filesing function check in all modules of llvm_mode added fix from Debian project to compile libdislocator and libtokencap libdislocator: AFL_ALIGNED_ALLOC to force size alignment to max_align_t Version ++2.60c (release): fixed a critical bug in afl-tmin that was introduced during ++2.53d added test cases for afl-cmin and afl-tmin to test/test.sh added ./examples/argv_fuzzing ld_preload library by Kjell Braden added preeny\u0026rsquo;s desock_dup ld_preload library as ./examples/socket_fuzzing for network fuzzing added AFL_AS_FORCE_INSTRUMENT environment variable for afl-as - this is for the retrorewrite project we now set QEMU_SET_ENV from AFL_PRELOAD when qemu_mode is used Version ++2.59c (release): qbdi_mode: fuzz android native libraries via QBDI framework unicorn_mode: switched to the new unicornafl, thanks domenukk (see https://github.com/vanhauser-thc/unicorn) afl-fuzz: added radamsa as (an optional) mutator stage (-R[R]) added -u command line option to not unlink the fuzz input file Python3 support (autodetect) AFL_DISABLE_TRIM env var to disable the trim stage CPU affinity support for DragonFly llvm_mode: float splitting is now configured via AFL_LLVM_LAF_SPLIT_FLOATS support for llvm 10 included now (thanks to devnexen) libtokencap: support for *BSD/OSX/Dragonfly added hook common *cmp functions from widely used libraries compcov: hook common *cmp functions from widely used libraries floating point splitting support for QEMU on x86 targets qemu_mode: AFL_QEMU_DISABLE_CACHE env to disable QEMU TranslationBlocks caching afl-analyze: added AFL_SKIP_BIN_CHECK support better random numbers for gcc_plugin and llvm_mode (thanks to devnexen) Dockerfile by courtesy of devnexen added regex.dictionary qemu and unicorn download scripts now try to download until the full download succeeded. f*ckin travis fails downloading 40% of the time! more support for Android (please test!) added the few Android stuff we didnt have already from Google AFL repository removed unnecessary warnings Version ++2.58c (release): reverted patch to not unlink and recreate the input file, it resulted in performance loss of ~10% added test/test-performance.sh script (re)added gcc_plugin, fast inline instrumentation is not yet finished, however it includes the instrument_filesing and persistance feature! by hexcoder- gcc_plugin tests added to testing framework Version ++2.54d-2.57c (release): we jump to 2.57 instead of 2.55 to catch up with Google\u0026rsquo;s versioning persistent mode for QEMU (see qemu_mode/README.md) custom mutator library is now an additional mutator, to exclusivly use it add AFL_CUSTOM_MUTATOR_ONLY (that will trigger the previous behaviour) new library qemu_mode/unsigaction which filters sigaction events afl-fuzz: new command line option -I to execute a command on a new crash no more unlinking the input file, this way the input file can also be a FIFO or disk partition setting LLVM_CONFIG for llvm_mode will now again switch to the selected llvm version. If your setup is correct. fuzzing strategy yields for custom mutator were missing from the UI, added them :) added \u0026ldquo;make tests\u0026rdquo; which will perform checks to see that all functionality is working as expected. this is currently the starting point, its not complete :) added mutation documentation feature (\u0026ldquo;make document\u0026rdquo;), creates afl-fuzz-document and saves all mutations of the first run on the first file into out/queue/mutations libtokencap and libdislocator now compile to the afl_root directory and are installed to the \u0026hellip;/lib/afl directory when present during make install more BSD support, e.g. free CPU binding code for FreeBSD (thanks to devnexen) reducing duplicate code in afl-fuzz added \u0026ldquo;make help\u0026rdquo; removed compile warnings from python internal stuff added man page for afl-clang-fast[++] updated documentation Wine mode to run Win32 binaries with the QEMU instrumentation (-W) CompareCoverage for ARM target in QEMU/Unicorn laf-intel in llvm_mode now also handles floating point comparisons Version ++2.54c (release): big code refactoring: all includes are now in include/ all AFL sources are now in src/ - see src/README.md afl-fuzz was split up in various individual files for including functionality in other programs (e.g. forkserver, memory map, etc.) for better readability. new code indention everywhere auto-generating man pages for all (main) tools added AFL_FORCE_UI to show the UI even if the terminal is not detected llvm 9 is now supported (still needs testing) Android is now supported (thank to JoeyJiao!) - still need to modify the Makefile though fix building qemu on some Ubuntus (thanks to floyd!) custom mutator by a loaded library is now supported (thanks to kyakdan!) added PR that includes peak_rss_mb and slowest_exec_ms in the fuzzer_stats report more support for *BSD (thanks to devnexen!) fix building on *BSD (thanks to tobias.kortkamp for the patch) fix for a few features to support different map sized than 2^16 afl-showmap: new option -r now shows the real values in the buckets (stock AFL never did), plus shows tuple content summary information now small docu updates NeverZero counters for QEMU NeverZero counters for Unicorn CompareCoverage Unicorn immediates-only instrumentation for CompareCoverage Version ++2.53c (release): README is now README.md imported the few minor changes from the 2.53b release unicorn_mode got added - thanks to domenukk for the patch! fix llvm_mode AFL_TRACE_PC with modern llvm fix a crash in qemu_mode which also exists in stock afl added libcompcov, a laf-intel implementation for qemu! :) see qemu_mode/libcompcov/README.libcompcov.md afl-fuzz now displays the selected core in the status screen (blue {#}) updated afl-fuzz and afl-system-config for new scaling governor location in modern kernels using the old ineffective afl-gcc will now show a deprecation warning all queue, hang and crash files now have their discovery time in their name if llvm_mode was compiled, afl-clang/afl-clang++ will point to these instead of afl-gcc added instrim, a much faster llvm_mode instrumentation at the cost of path discovery. See llvm_mode/README.instrim.md (https://github.com/csienslab/instrim) added MOpt (github.com/puppet-meteor/MOpt-AFL) mode, see docs/README.MOpt.md added code to make it more portable to other platforms than Intel Linux added never zero counters for afl-gcc and optionally (because of an optimization issue in llvm \u0026lt; 9) for llvm_mode (AFL_LLVM_NEVER_ZERO=1) added a new doc about binary only fuzzing: docs/binaryonly_fuzzing.txt more cpu power for afl-system-config added forkserver patch to afl-tmin, makes it much faster (originally from github.com/nccgroup/TriforceAFL) added instrument_files support for llvm_mode via AFL_LLVM_WHITELIST to allow only to instrument what is actually interesting. Gives more speed and less map pollution (originally by choller@mozilla) added Python Module mutator support, python2.7-dev is autodetected. see docs/python_mutators.txt (originally by choller@mozilla) added AFL_CAL_FAST for slow applications and AFL_DEBUG_CHILD_OUTPUT for debugging added -V time and -E execs option to better comparison runs, runs afl-fuzz for a specific time/executions. added a -s seed switch to allow AFL run with a fixed initial seed that is not updated. This is good for performance and path discovery tests as the random numbers are deterministic then llvm_mode LAF_\u0026hellip; env variables can now be specified as AFL_LLVM_LAF_\u0026hellip; that is longer but in line with other llvm specific env vars Version ++2.52c (2019-06-05): Applied community patches. See docs/PATCHES for the full list. LLVM and Qemu modes are now faster. Important changes: afl-fuzz: -e EXTENSION commandline option llvm_mode: LAF-intel performance (needs activation, see llvm/README.laf-intel.md) a few new environment variables for afl-fuzz, llvm and qemu, see docs/env_variables.md Added the power schedules of AFLfast by Marcel Boehme, but set the default to the AFL schedule, not to the FAST schedule. So nothing changes unless you use the new -p option :-) - see docs/power_schedules.md added afl-system-config script to set all system performance options for fuzzing llvm_mode works with llvm 3.9 up to including 8 ! qemu_mode got upgraded from 2.1 to 3.1 - incorporated from https://github.com/andreafioraldi/afl and with community patches added Version 2.52b (2017-11-04): Upgraded QEMU patches from 2.3.0 to 2.10.0. Required troubleshooting several weird issues. All the legwork done by Andrew Griffiths.\n Added setsid to afl-showmap. See the notes for 2.51b.\n Added target mode (deferred, persistent, qemu, etc) to fuzzer_stats. Requested by Jakub Wilk.\n afl-tmin should now save a partially minimized file when Ctrl-C is pressed. Suggested by Jakub Wilk.\n Added an option for afl-analyze to dump offsets in hex. Suggested by Jakub Wilk.\n Added support for parameters in triage_crashes.sh. Patch by Adam of DC949.\n Version 2.51b (2017-08-30): Made afl-tmin call setsid to prevent glibc traceback junk from showing up on the terminal in some distros. Suggested by Jakub Wilk. Version 2.50b (2017-08-19): Fixed an interesting timing corner case spotted by Jakub Wilk.\n Addressed a libtokencap / pthreads incompatibility issue. Likewise, spotted by Jakub Wilk.\n Added a mention of afl-kit and Pythia.\n Added AFL_FAST_CAL.\n In-place resume now preserves .synced. Suggested by Jakub Wilk.\n Version 2.49b (2017-07-18): Added AFL_TMIN_EXACT to allow path constraint for crash minimization.\n Added dates for releases (retroactively for all of 2017).\n Version 2.48b (2017-07-17): Added AFL_ALLOW_TMP to permit some scripts to run in /tmp.\n Fixed cwd handling in afl-analyze (similar to the quirk in afl-tmin).\n Made it possible to point -o and -f to the same file in afl-tmin.\n Version 2.47b (2017-07-14): Fixed cwd handling in afl-tmin. Spotted by Jakub Wilk. Version 2.46b (2017-07-10): libdislocator now supports AFL_LD_NO_CALLOC_OVER for folks who do not want to abort on calloc() overflows.\n Made a minor fix to libtokencap. Reported by Daniel Stender.\n Added a small JSON dictionary, inspired on a dictionary done by Jakub Wilk.\n Version 2.45b (2017-07-04): Added strstr, strcasestr support to libtokencap. Contributed by Daniel Hodson.\n Fixed a resumption offset glitch spotted by Jakub Wilk.\n There are definitely no bugs in afl-showmap -c now.\n Version 2.44b (2017-06-28): Added a visual indicator of ASAN / MSAN mode when compiling. Requested by Jakub Wilk.\n Added support for afl-showmap coredumps (-c). Suggested by Jakub Wilk.\n Added LD_BIND_NOW=1 for afl-showmap by default. Although not really useful, it reportedly helps reproduce some crashes. Suggested by Jakub Wilk.\n Added a note about allocator_may_return_null=1 not always working with ASAN. Spotted by Jakub Wilk.\n Version 2.43b (2017-06-16): Added AFL_NO_ARITH to aid in the fuzzing of text-based formats. Requested by Jakub Wilk. Version 2.42b (2017-06-02): Renamed the R() macro to avoid a problem with llvm_mode in the latest versions of LLVM. Fix suggested by Christian Holler. Version 2.41b (2017-04-12): Addressed a major user complaint related to timeout detection. Timing out inputs are now binned as \u0026ldquo;hangs\u0026rdquo; only if they exceed a far more generous time limit than the one used to reject slow paths. Version 2.40b (2017-04-02): Fixed a minor oversight in the insertion strategy for dictionary words. Spotted by Andrzej Jackowski.\n Made a small improvement to the havoc block insertion strategy.\n Adjusted color rules for \u0026ldquo;is it done yet?\u0026rdquo; indicators.\n Version 2.39b (2017-02-02): Improved error reporting in afl-cmin. Suggested by floyd.\n Made a minor tweak to trace-pc-guard support. Suggested by kcc.\n Added a mention of afl-monitor.\n Version 2.38b (2017-01-22): Added -mllvm -sanitizer-coverage-block-threshold=0 to trace-pc-guard mode, as suggested by Kostya Serebryany. Version 2.37b (2017-01-22): Fixed a typo. Spotted by Jakub Wilk.\n Fixed support for make install when using trace-pc. Spotted by Kurt Roeckx.\n Switched trace-pc to trace-pc-guard, which should be considerably faster and is less quirky. Kudos to Konstantin Serebryany (and sorry for dragging my feet).\nNote that for some reason, this mode doesn\u0026rsquo;t perform as well as \u0026ldquo;vanilla\u0026rdquo; afl-clang-fast / afl-clang.\n Version 2.36b (2017-01-14): Fixed a cosmetic bad free() bug when aborting -S sessions. Spotted by Johannes S.\n Made a small change to afl-whatsup to sort fuzzers by name.\n Fixed a minor issue with malloc(0) in libdislocator. Spotted by Rene Freingruber.\n Changed the clobber pattern in libdislocator to a slightly more reliable one. Suggested by Rene Freingruber.\n Added a note about THP performance. Suggested by Sergey Davidoff.\n Added a somewhat unofficial support for running afl-tmin with a baseline \u0026ldquo;mask\u0026rdquo; that causes it to minimize only for edges that are unique to the input file, but not to the \u0026ldquo;boring\u0026rdquo; baseline. Suggested by Sami Liedes.\n \u0026ldquo;Fixed\u0026rdquo; a getPassName() problem with newer versions of clang. Reported by Craig Young and several other folks.\n Yep, I know I have a backlog on several other feature requests. Stay tuned!\nVersion 2.35b: Fixed a minor cmdline reporting glitch, spotted by Leo Barnes.\n Fixed a silly bug in libdislocator. Spotted by Johannes Schultz.\n Version 2.34b: Added a note about afl-tmin to technical_details.txt.\n Added support for AFL_NO_UI, as suggested by Leo Barnes.\n Version 2.33b: Added code to strip -Wl,-z,defs and -Wl,\u0026ndash;no-undefined for afl-clang-fast, since they interfere with -shared. Spotted and diagnosed by Toby Hutton.\n Added some fuzzing tips for Android.\n Version 2.32b: Added a check for AFL_HARDEN combined with AFL_USE_*SAN. Suggested by Hanno Boeck.\n Made several other cosmetic adjustments to cycle timing in the wake of the big tweak made in 2.31b.\n Version 2.31b: Changed havoc cycle counts for a marked performance boost, especially with -S / -d. See the discussion of FidgetyAFL in:\nhttps://groups.google.com/forum/#!topic/afl-users/fOPeb62FZUg\nWhile this does not implement the approach proposed by the authors of the CCS paper, the solution is a result of digging into that research; more improvements may follow as I do more experiments and get more definitive data.\n Version 2.30b: Made minor improvements to persistent mode to avoid the remote possibility of \u0026ldquo;no instrumentation detected\u0026rdquo; issues with very low instrumentation densities.\n Fixed a minor glitch with a leftover process in persistent mode. Reported by Jakub Wilk and Daniel Stender.\n Made persistent mode bitmaps a bit more consistent and adjusted the way this is shown in the UI, especially in persistent mode.\n Version 2.29b: Made a minor #include fix to llvm_mode. Suggested by Jonathan Metzman.\n Made cosmetic updates to the docs.\n Version 2.28b: Added \u0026ldquo;life pro tips\u0026rdquo; to docs/.\n Moved testcases/_extras/ to dictionaries/ for visibility.\n Made minor improvements to install scripts.\n Added an important safety tip.\n Version 2.27b: Added libtokencap, a simple feature to intercept strcmp / memcmp and generate dictionary entries that can help extend coverage.\n Moved libdislocator to its own dir, added README.md.\n The demo in examples/instrumented_cmp is no more.\n Version 2.26b: Made a fix for libdislocator.so to compile on MacOS X.\n Added support for DYLD_INSERT_LIBRARIES.\n Renamed AFL_LD_PRELOAD to AFL_PRELOAD.\n Version 2.25b: Made some cosmetic updates to libdislocator.so, renamed one env variable. Version 2.24b: Added libdislocator.so, an experimental, abusive allocator. Try it out with AFL_LD_PRELOAD=/path/to/libdislocator.so when running afl-fuzz. Version 2.23b: Improved the stability metric for persistent mode binaries. Problem spotted by Kurt Roeckx.\n Made a related improvement that may bring the metric to 100% for those targets.\n Version 2.22b: Mentioned the potential conflicts between MSAN / ASAN and FORTIFY_SOURCE. There is no automated check for this, since some distros may implicitly set FORTIFY_SOURCE outside of the compiler\u0026rsquo;s argv[].\n Populated the support for AFL_LD_PRELOAD to all companion tools.\n Made a change to the handling of ./afl-clang-fast -v. Spotted by Jan Kneschke.\n Version 2.21b: Added some crash reporting notes for Solaris in docs/INSTALL, as investigated by Martin Carpenter.\n Fixed a minor UI mix-up with havoc strategy stats.\n Version 2.20b: Revamped the handling of variable paths, replacing path count with a \u0026ldquo;stability\u0026rdquo; score to give users a much better signal. Based on the feedback from Vegard Nossum.\n Made a stability improvement to the syncing behavior with resuming fuzzers. Based on the feedback from Vegard.\n Changed the UI to include current input bitmap density along with total density. Ditto.\n Added experimental support for parallelizing -M.\n Version 2.19b: Made a fix to make sure that auto CPU binding happens at non-overlapping times. Version 2.18b: Made several performance improvements to has_new_bits() and classify_counts(). This should offer a robust performance bump with fast targets. Version 2.17b: Killed the error-prone and manual -Z option. On Linux, AFL will now automatically bind to the first free core (or complain if there are no free cores left).\n Made some doc updates along these lines.\n Version 2.16b: Improved support for older versions of clang (hopefully without breaking anything).\n Moved version data from Makefile to config.h. Suggested by Jonathan Metzman.\n Version 2.15b: Added a README section on looking for non-crashing bugs.\n Added license data to several boring files. Contributed by Jonathan Metzman.\n Version 2.14b: Added FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION as a macro defined when compiling with afl-gcc and friends. Suggested by Kostya Serebryany.\n Refreshed some of the non-x86 docs.\n Version 2.13b: Fixed a spurious build test error with trace-pc and llvm_mode/Makefile. Spotted by Markus Teufelberger.\n Fixed a cosmetic issue with afl-whatsup. Spotted by Brandon Perry.\n Version 2.12b: Fixed a minor issue in afl-tmin that can make alphabet minimization less efficient during passes \u0026gt; 1. Spotted by Daniel Binderman. Version 2.11b: Fixed a minor typo in instrumented_cmp, spotted by Hanno Eissfeldt.\n Added a missing size check for deterministic insertion steps.\n Made an improvement to afl-gotcpu when -Z not used.\n Fixed a typo in post_library_png.so.c in examples/. Spotted by Kostya Serebryany.\n Version 2.10b: Fixed a minor core counting glitch, reported by Tyler Nighswander. Version 2.09b: Made several documentation updates.\n Added some visual indicators to promote and simplify the use of -Z.\n Version 2.08b: Added explicit support for -m32 and -m64 for llvm_mode. Inspired by a request from Christian Holler.\n Added a new benchmarking option, as requested by Kostya Serebryany.\n Version 2.07b: Added CPU affinity option (-Z) on Linux. With some caution, this can offer a significant (10%+) performance bump and reduce jitter. Proposed by Austin Seipp.\n Updated afl-gotcpu to use CPU affinity where supported.\n Fixed confusing CPU_TARGET error messages with QEMU build. Spotted by Daniel Komaromy and others.\n Version 2.06b: Worked around LLVM persistent mode hiccups with -shared code. Contributed by Christian Holler.\n Added __AFL_COMPILER as a convenient way to detect that something is built under afl-gcc / afl-clang / afl-clang-fast and enable custom optimizations in your code. Suggested by Pedro Corte-Real.\n Upstreamed several minor changes developed by Franjo Ivancic to allow AFL to be built as a library. This is fairly use-specific and may have relatively little appeal to general audiences.\n Version 2.05b: Put __sanitizer_cov_module_init \u0026amp; co behind #ifdef to avoid problems with ASAN. Spotted by Christian Holler. Version 2.04b: Removed indirect-calls coverage from -fsanitize-coverage (since it\u0026rsquo;s redundant). Spotted by Kostya Serebryany. Version 2.03b: Added experimental -fsanitize-coverage=trace-pc support that goes with some recent additions to LLVM, as implemented by Kostya Serebryany. Right now, this is cumbersome to use with common build systems, so the mode remains undocumented.\n Made several substantial improvements to better support non-standard map sizes in LLVM mode.\n Switched LLVM mode to thread-local execution tracing, which may offer better results in some multithreaded apps.\n Fixed a minor typo, reported by Heiko Eissfeldt.\n Force-disabled symbolization for ASAN, as suggested by Christian Holler.\n AFL_NOX86 renamed to AFL_NO_X86 for consistency.\n Added AFL_LD_PRELOAD to allow LD_PRELOAD to be set for targets without affecting AFL itself. Suggested by Daniel Godas-Lopez.\n Version 2.02b: Fixed a \u0026ldquo;lcamtuf can\u0026rsquo;t count to 16\u0026rdquo; bug in the havoc stage. Reported by Guillaume Endignoux. Version 2.01b: Made an improvement to cycle counter color coding, based on feedback from Shai Sarfaty.\n Added a mention of aflize to sister_projects.txt.\n Fixed an installation issue with afl-as, as spotted by ilovezfs.\n Version 2.00b: Cleaned up color handling after a minor snafu in 1.99b (affecting some terminals).\n Made minor updates to the documentation.\n Version 1.99b: Substantially revamped the output and the internal logic of afl-analyze.\n Cleaned up some of the color handling code and added support for background colors.\n Removed some stray files (oops).\n Updated docs to better explain afl-analyze.\n Version 1.98b: Improved to \u0026ldquo;boring string\u0026rdquo; detection in afl-analyze.\n Added technical_details.txt for afl-analyze.\n Version 1.97b: Added afl-analyze, a nifty tool to analyze the structure of a file based on the feedback from AFL instrumentation. This is kinda experimental, so field reports welcome.\n Added a mention of afl-cygwin.\n Fixed a couple of typos, as reported by Jakub Wilk and others.\n Version 1.96b: Added -fpic to CFLAGS for the clang plugin, as suggested by Hanno Boeck.\n Made another clang change (IRBuilder) suggested by Jeff Trull.\n Fixed several typos, spotted by Jakub Wilk.\n Added support for AFL_SHUFFLE_QUEUE, based on discussions with Christian Holler.\n Version 1.95b: Fixed a harmless bug when handling -B. Spotted by Jacek Wielemborek.\n Made the exit message a bit more accurate when AFL_EXIT_WHEN_DONE is set.\n Added some error-checking for old-style forkserver syntax. Suggested by Ben Nagy.\n Switched from exit() to _exit() in injected code to avoid snafus with destructors in C++ code. Spotted by sunblate.\n Made a change to avoid spuriously setting __AFL_SHM_ID when AFL_DUMB_FORKSRV is set in conjunction with -n. Spotted by Jakub Wilk.\n Version 1.94b: Changed allocator alignment to improve support for non-x86 systems (now that llvm_mode makes this more feasible).\n Fixed a minor typo in afl-cmin. Spotted by Jonathan Neuschafer.\n Fixed an obscure bug that would affect people trying to use afl-gcc with $TMP set but $TMPDIR absent. Spotted by Jeremy Barnes.\n Version 1.93b: Hopefully fixed a problem with MacOS X and persistent mode, spotted by Leo Barnes. Version 1.92b: Made yet another C++ fix (namespaces). Reported by Daniel Lockyer. Version 1.91b: Made another fix to make 1.90b actually work properly with C++ (d\u0026rsquo;oh). Problem spotted by Daniel Lockyer. Version 1.90b: Fixed a minor typo spotted by Kai Zhao; and made several other minor updates to docs.\n Updated the project URL for python-afl. Requested by Jakub Wilk.\n Fixed a potential problem with deferred mode signatures getting optimized out by the linker (with \u0026ndash;gc-sections).\n Version 1.89b: Revamped the support for persistent and deferred forkserver modes. Both now feature simpler syntax and do not require companion env variables. Suggested by Jakub Wilk.\n Added a bit more info about afl-showmap. Suggested by Jacek Wielemborek.\n Version 1.88b: Made AFL_EXIT_WHEN_DONE work in non-tty mode. Issue spotted by Jacek Wielemborek. Version 1.87b: Added QuickStartGuide.txt, a one-page quick start doc.\n Fixed several typos spotted by Dominique Pelle.\n Revamped several parts of README.\n Version 1.86b: Added support for AFL_SKIP_CRASHES, which is a very hackish solution to the problem of resuming sessions with intermittently crashing inputs.\n Removed the hard-fail terminal size check, replaced with a dynamic warning shown in place of the UI. Based on feedback from Christian Holler.\n Fixed a minor typo in show_stats. Spotted by Dingbao Xie.\n Version 1.85b: Fixed a garbled sentence in notes on parallel fuzzing. Thanks to Jakub Wilk.\n Fixed a minor glitch in afl-cmin. Spotted by Jonathan Foote.\n Version 1.84b: Made SIMPLE_FILES behave as expected when naming backup directories for crashes and hangs.\n Added the total number of favored paths to fuzzer_stats. Requested by Ben Nagy.\n Made afl-tmin, afl-fuzz, and afl-cmin reject negative values passed to -t and -m, since they generally won\u0026rsquo;t work as expected.\n Made a fix for no lahf / sahf support on older versions of FreeBSD. Patch contributed by Alex Moneger.\n Version 1.83b: Fixed a problem with xargs -d on non-Linux systems in afl-cmin. Spotted by teor2345 and Ben Nagy.\n Fixed an implicit declaration in LLVM mode on MacOS X. Reported by Kai Zhao.\n Version 1.82b: Fixed a harmless but annoying race condition in persistent mode - signal delivery is a bit more finicky than I thought.\n Updated the documentation to explain persistent mode a bit better.\n Tweaked AFL_PERSISTENT to force AFL_NO_VAR_CHECK.\n Version 1.81b: Added persistent mode for in-process fuzzing. See llvm_mode/README.llvm. Inspired by Kostya Serebryany and Christian Holler.\n Changed the in-place resume code to preserve crashes/README.txt. Suggested by Ben Nagy.\n Included a potential fix for LLVM mode issues on MacOS X, based on the investigation done by teor2345.\n Version 1.80b: Made afl-cmin tolerant of whitespaces in filenames. Suggested by Jonathan Neuschafer and Ketil Froyn.\n Added support for AFL_EXIT_WHEN_DONE, as suggested by Michael Rash.\n Version 1.79b: Added support for dictionary levels, see testcases/README.testcases.\n Reworked the SQL dictionary to use levels.\n Added a note about Preeny.\n Version 1.78b: Added a dictionary for PDF, contributed by Ben Nagy.\n Added several references to afl-cov, a new tool by Michael Rash.\n Fixed a problem with crash reporter detection on MacOS X, as reported by Louis Dassy.\n Version 1.77b: Extended the -x option to support single-file dictionaries.\n Replaced factory-packaged dictionaries with file-based variants.\n Removed newlines from HTML keywords in testcases/_extras/html/.\n Version 1.76b: Very significantly reduced the number of duplicate execs during deterministic checks, chiefly in int16 and int32 stages. Confirmed identical path yields. This should improve early-stage efficiency by around 5-10%.\n Reduced the likelihood of duplicate non-deterministic execs by bumping up lowest stacking factor from 1 to 2. Quickly confirmed that this doesn\u0026rsquo;t seem to have significant impact on coverage with libpng.\n Added a note about integrating afl-fuzz with third-party tools.\n Version 1.75b: Improved argv_fuzzing to allow it to emit empty args. Spotted by Jakub Wilk.\n afl-clang-fast now defines __AFL_HAVE_MANUAL_INIT. Suggested by Jakub Wilk.\n Fixed a libtool-related bug with afl-clang-fast that would make some ./configure invocations generate incorrect output. Spotted by Jakub Wilk.\n Removed flock() on Solaris. This means no locking on this platform, but so be it. Problem reported by Martin Carpenter.\n Fixed a typo. Reported by Jakub Wilk.\n Version 1.74b: Added an example argv[] fuzzing wrapper in examples/argv_fuzzing. Reworked the bash example to be faster, too.\n Clarified llvm_mode prerequisites for FreeBSD.\n Improved afl-tmin to use /tmp if cwd is not writeable.\n Removed redundant includes for sys/fcntl.h, which caused warnings with some nitpicky versions of libc.\n Added a corpus of basic HTML tags that parsers are likely to pay attention to (no attributes).\n Added EP_EnabledOnOptLevel0 to llvm_mode, so that the instrumentation is inserted even when AFL_DONT_OPTIMIZE=1 is set.\n Switched qemu_mode to use the newly-released QEMU 2.3.0, which contains a couple of minor bugfixes.\n Version 1.73b: Fixed a pretty stupid bug in effector maps that could sometimes cause AFL to fuzz slightly more than necessary; and in very rare circumstances, could lead to SEGV if eff_map is aligned with page boundary and followed by an unmapped page. Spotted by Jonathan Gray. Version 1.72b: Fixed a glitch in non-x86 install, spotted by Tobias Ospelt.\n Added a minor safeguard to llvm_mode Makefile following a report from Kai Zhao.\n Version 1.71b: Fixed a bug with installed copies of AFL trying to use QEMU mode. Spotted by G.M. Lime.\n Added last find / crash / hang times to fuzzer_stats, suggested by Richard Hipp.\n Fixed a typo, thanks to Jakub Wilk.\n Version 1.70b: Modified resumption code to reuse the original timeout value when resuming a session if -t is not given. This prevents timeout creep in continuous fuzzing.\n Added improved error messages for failed handshake when AFL_DEFER_FORKSRV is set.\n Made a slight improvement to llvm_mode/Makefile based on feedback from Jakub Wilk.\n Refreshed several bits of documentation.\n Added a more prominent note about the MacOS X trade-offs to Makefile.\n Version 1.69b: Added support for deferred initialization in LLVM mode. Suggested by Richard Godbee. Version 1.68b: Fixed a minor PRNG glitch that would make the first seconds of a fuzzing job deterministic. Thanks to Andreas Stieger.\n Made tmp[] static in the LLVM runtime to keep Valgrind happy (this had no impact on anything else). Spotted by Richard Godbee.\n Clarified the footnote in README.\n Version 1.67b: Made one more correction to llvm_mode Makefile, spotted by Jakub Wilk. Version 1.66b: Added CC / CXX support to llvm_mode Makefile. Requested by Charlie Eriksen.\n Fixed \u0026lsquo;make clean\u0026rsquo; with gmake. Suggested by Oliver Schneider.\n Fixed \u0026lsquo;make -j n clean all\u0026rsquo;. Suggested by Oliver Schneider.\n Removed build date and time from banners to give people deterministic builds. Requested by Jakub Wilk.\n Version 1.65b: Fixed a snafu with some leftover code in afl-clang-fast.\n Corrected even moar typos.\n Version 1.64b: Further simplified afl-clang-fast runtime by reverting .init_array to attribute((constructor(0)). This should improve compatibility with non-ELF platforms.\n Fixed a problem with afl-clang-fast and -shared libraries. Simplified the code by getting rid of .preinit_array and replacing it with a .comm object. Problem reported by Charlie Eriksen.\n Removed unnecessary instrumentation density adjustment for the LLVM mode. Reported by Jonathan Neuschafer.\n Version 1.63b: Updated cgroups_asan/ with a new version from Sam, made a couple changes to streamline it and keep parallel AFL instances in separate groups.\n Fixed typos, thanks to Jakub Wilk.\n Version 1.62b: Improved the handling of -x in afl-clang-fast,\n Improved the handling of low AFL_INST_RATIO settings for QEMU and LLVM modes.\n Fixed the llvm-config bug for good (thanks to Tobias Ospelt).\n Version 1.61b: Fixed an obscure bug compiling OpenSSL with afl-clang-fast. Patch by Laszlo Szekeres.\n Fixed a \u0026lsquo;make install\u0026rsquo; bug on non-x86 systems, thanks to Tobias Ospelt.\n Fixed a problem with half-broken llvm-config on Odroid, thanks to Tobias Ospelt. (There is another odd bug there that hasn\u0026rsquo;t been fully fixed - TBD).\n Version 1.60b: Allowed examples/llvm_instrumentation/ to graduate to llvm_mode/.\n Removed examples/arm_support/, since it\u0026rsquo;s completely broken and likely unnecessary with LLVM support in place.\n Added ASAN cgroups script to examples/asan_cgroups/, updated existing docs. Courtesy Sam Hakim and David A. Wheeler.\n Refactored afl-tmin to reduce the number of execs in common use cases. Ideas from Jonathan Neuschafer and Turo Lamminen.\n Added a note about CLAs at the bottom of README.\n Renamed testcases_readme.txt to README.testcases for some semblance of consistency.\n Made assorted updates to docs.\n Added MEM_BARRIER() to afl-showmap and afl-tmin, just to be safe.\n Version 1.59b: Imported Laszlo Szekeres\u0026rsquo; experimental LLVM instrumentation into examples/llvm_instrumentation. I\u0026rsquo;ll work on including it in the \u0026ldquo;mainstream\u0026rdquo; version soon.\n Fixed another typo, thanks to Jakub Wilk.\n Version 1.58b: Added a workaround for abort() behavior in -lpthread programs in QEMU mode. Spotted by Aidan Thornton.\n Made several documentation updates, including links to the static instrumentation tool (sister_projects.txt).\n Version 1.57b: Fixed a problem with exception handling on some versions of MacOS X. Spotted by Samir Aguiar and Anders Wang Kristensen.\n Tweaked afl-gcc to use BIN_PATH instead of a fixed string in help messages.\n Version 1.56b: Renamed related_work.txt to historical_notes.txt.\n Made minor edits to the ASAN doc.\n Added docs/sister_projects.txt with a list of inspired or closely related utilities.\n Version 1.55b: Fixed a glitch with afl-showmap opening /dev/null with O_RDONLY when running in quiet mode. Spotted by Tyler Nighswander. Version 1.54b: Added another postprocessor example for PNG.\n Made a cosmetic fix to realloc() handling in examples/post_library/, suggested by Jakub Wilk.\n Improved -ldl handling. Suggested by Jakub Wilk.\n Version 1.53b: Fixed an -l ordering issue that is apparently still a problem on Ubuntu. Spotted by William Robinet. Version 1.52b: Added support for file format postprocessors. Requested by Ben Nagy. This feature is intentionally buried, since it\u0026rsquo;s fairly easy to misuse and useful only in some scenarios. See examples/post_library/. Version 1.51b: Made it possible to properly override LD_BIND_NOW after one very unusual report of trouble.\n Cleaned up typos, thanks to Jakub Wilk.\n Fixed a bug in AFL_DUMB_FORKSRV.\n Version 1.50b: Fixed a flock() bug that would prevent dir reuse errors from kicking in every now and then.\n Renamed references to ppvm (the project is now called recidivm).\n Made improvements to file descriptor handling to avoid leaving some fds unnecessarily open in the child process.\n Fixed a typo or two.\n Version 1.49b: Added code to save original command line in fuzzer_stats and crashes/README.txt. Also saves fuzzer version in fuzzer_stats. Requested by Ben Nagy. Version 1.48b: Fixed a bug with QEMU fork server crashes when translation is attempted after a jump to an invalid pointer in the child process (i.e., after bumping into a particularly nasty security bug in the tested binary). Reported by Tyler Nighswander. Version 1.47b: Fixed a bug with afl-cmin in -Q mode complaining about binary being not instrumented. Thanks to Jonathan Neuschafer for the bug report.\n Fixed another bug with argv handling for afl-fuzz in -Q mode. Reported by Jonathan Neuschafer.\n Improved the use of colors when showing crash counts in -C mode.\n Version 1.46b: Improved instrumentation performance on 32-bit systems by getting rid of xor-swap (oddly enough, xor-swap is still faster on 64-bit) and tweaking alignment.\n Made path depth numbers more accurate with imported test cases.\n Version 1.45b: Added support for SIMPLE_FILES in config.h for folks who don\u0026rsquo;t like descriptive file names. Generates very simple names without colons, commas, plus signs, dashes, etc.\n Replaced zero-sized files with symlinks in the variable behavior state dir to simplify examining the relevant test cases.\n Changed the period of limited-range block ops from 5 to 10 minutes based on a couple of experiments. The basic goal of this delay timer behavior is to better support jobs that are seeded with completely invalid files, in which case, the first few queue cycles may be completed very quickly without discovering new paths. Should have no effect on well-seeded jobs.\n Made several minor updates to docs.\n Version 1.44b: Corrected two bungled attempts to get the -C mode work properly with afl-cmin (accounting for the short-lived releases tagged 1.42 and 1.43b) - sorry.\n Removed AFL_ALLOW_CRASHES in favor of the -C mode in said tool.\n Said goodbye to Hello Kitty, as requested by Padraig Brady.\n Version 1.41b: Added AFL_ALLOW_CRASHES=1 to afl-cmin. Allows crashing inputs in the output corpus. Changed the default behavior to disallow it.\n Made the afl-cmin output dir default to 0700, not 0755, to be consistent with afl-fuzz; documented the rationale for 0755 in afl-plot.\n Lowered the output dir reuse time limit to 25 minutes as a dice-roll compromise after a discussion on afl-users@.\n Made afl-showmap accept -o /dev/null without borking out.\n Added support for crash / hang info in exit codes of afl-showmap.\n Tweaked block operation scaling to also factor in ballpark run time in cases where queue passes take very little time.\n Fixed typos and made improvements to several docs.\n Version 1.40b: Switched to smaller block op sizes during the first passes over the queue. Helps keep test cases small.\n Added memory barrier for run_target(), just in case compilers get smarter than they are today.\n Updated a bunch of docs.\n Version 1.39b: Added the ability to skip inputs by sending SIGUSR1 to the fuzzer.\n Reworked several portions of the documentation.\n Changed the code to reset splicing perf scores between runs to keep them closer to intended length.\n Reduced the minimum value of -t to 5 for afl-fuzz (~200 exec/sec) and to 10 for auxiliary tools (due to the absence of a fork server).\n Switched to more aggressive default timeouts (rounded up to 25 ms versus 50 ms - ~40 execs/sec) and made several other cosmetic changes to the timeout code.\n Version 1.38b: Fixed a bug in the QEMU build script, spotted by William Robinet.\n Improved the reporting of skipped bitflips to keep the UI counters a bit more accurate.\n Cleaned up related_work.txt and added some non-goals.\n Fixed typos, thanks to Jakub Wilk.\n Version 1.37b: Added effector maps, which detect regions that do not seem to respond to bitflips and subsequently exclude them from more expensive steps (arithmetics, known ints, etc). This should offer significant performance improvements with quite a few types of text-based formats, reducing the number of deterministic execs by a factor of 2 or so.\n Cleaned up mem limit handling in afl-cmin.\n Switched from uname -i to uname -m to work around Gentoo-specific issues with coreutils when building QEMU. Reported by William Robinet.\n Switched from PID checking to flock() to detect running sessions. Problem, against all odds, bumped into by Jakub Wilk.\n Added SKIP_COUNTS and changed the behavior of COVERAGE_ONLY in config.h. Useful only for internal benchmarking.\n Made improvements to UI refresh rates and exec/sec stats to make them more stable.\n Made assorted improvements to the documentation and to the QEMU build script.\n Switched from perror() to strerror() in error macros, thanks to Jakub Wilk for the nag.\n Moved afl-cmin back to bash, wasn\u0026rsquo;t thinking straight. It has to stay on bash because other shells may have restrictive limits on array sizes.\n Version 1.36b: Switched afl-cmin over to /bin/sh. Thanks to Jonathan Gray.\n Fixed an off-by-one bug in queue limit check when resuming sessions (could cause NULL ptr deref if you are really unlucky).\n Fixed the QEMU script to tolerate i686 if returned by uname -i. Based on a problem report from Sebastien Duquette.\n Added multiple references to Jakub\u0026rsquo;s ppvm tool.\n Made several minor improvements to the Makefile.\n Believe it or not, fixed some typos. Thanks to Jakub Wilk.\n Version 1.35b: Cleaned up regular expressions in some of the scripts to avoid errors on *BSD systems. Spotted by Jonathan Gray. Version 1.34b: Performed a substantial documentation and program output cleanup to better explain the QEMU feature. Version 1.33b: Added support for AFL_INST_RATIO and AFL_INST_LIBS in the QEMU mode.\n Fixed a stack allocation crash in QEMU mode (bug in QEMU, fixed with an extra patch applied to the downloaded release).\n Added code to test the QEMU instrumentation once the afl-qemu-trace binary is built.\n Modified afl-tmin and afl-showmap to search $PATH for binaries and to better handle QEMU support.\n Added a check for instrumented binaries when passing -Q to afl-fuzz.\n Version 1.32b: Fixed \u0026lsquo;make install\u0026rsquo; following the QEMU changes. Spotted by Hanno Boeck.\n Fixed EXTRA_PAR handling in afl-cmin.\n Version 1.31b: Hallelujah! Thanks to Andrew Griffiths, we now support very fast, black-box instrumentation of binary-only code. See qemu_mode/README.qemu.\nTo use this feature, you need to follow the instructions in that directory and then run afl-fuzz with -Q.\n Version 1.30b: Added -s (summary) option to afl-whatsup. Suggested by Jodie Cunningham.\n Added a sanity check in afl-tmin to detect minimization to zero len or excess hangs.\n Fixed alphabet size counter in afl-tmin.\n Slightly improved the handling of -B in afl-fuzz.\n Fixed process crash messages with -m none.\n Version 1.29b: Improved the naming of test cases when orig: is already present in the file name.\n Made substantial improvements to technical_details.txt.\n Version 1.28b: Made a minor tweak to the instrumentation to preserve the directionality of tuples (i.e., A -\u0026gt; B != B -\u0026gt; A) and to maintain the identity of tight loops (A -\u0026gt; A). You need to recompile targeted binaries to leverage this.\n Cleaned up some of the afl-whatsup stats.\n Added several sanity checks to afl-cmin.\n Version 1.27b: Made afl-tmin recursive. Thanks to Hanno Boeck for the tip.\n Added docs/technical_details.txt.\n Changed afl-showmap search strategy in afl-cmap to just look into the same place that afl-cmin is executed from. Thanks to Jakub Wilk.\n Removed current_todo.txt and cleaned up the remaining docs.\n Version 1.26b: Added total execs/sec stat for afl-whatsup.\n afl-cmin now auto-selects between cp or ln. Based on feedback from Even Huus.\n Fixed a typo. Thanks to Jakub Wilk.\n Made afl-gotcpu a bit more accurate by using getrusage instead of times. Thanks to Jakub Wilk.\n Fixed a memory limit issue during the build process on NetBSD-current. Reported by Thomas Klausner.\n Version 1.25b: Introduced afl-whatsup, a simple tool for querying the status of local synced instances of afl-fuzz.\n Added -x compiler to clang options on Darwin. Suggested by Filipe Cabecinhas.\n Improved exit codes for afl-gotcpu.\n Improved the checks for -m and -t values in afl-cmin. Bug report from Evan Huus.\n Version 1.24b: Introduced afl-getcpu, an experimental tool to empirically measure CPU preemption rates. Thanks to Jakub Wilk for the idea. Version 1.23b: Reverted one change to afl-cmin that actually made it slower. Version 1.22b: Reworked afl-showmap.c to support normal options, including -o, -q, -e. Also added support for timeouts and memory limits.\n Made changes to afl-cmin and other scripts to accommodate the new semantics.\n Officially retired AFL_EDGES_ONLY.\n Fixed another typo in afl-tmin, courtesy of Jakub Wilk.\n Version 1.21b: Graduated minimize_corpus.sh to afl-cmin. It is now a first-class utility bundled with the fuzzer.\n Made significant improvements to afl-cmin to make it faster, more robust, and more versatile.\n Refactored some of afl-tmin code to make it a bit more readable.\n Made assorted changes to the doc to document afl-cmin and other stuff.\n Version 1.20b: Added AFL_DUMB_FORKSRV, as requested by Jakub Wilk. This works only in -n mode and allows afl-fuzz to run with \u0026ldquo;dummy\u0026rdquo; fork servers that don\u0026rsquo;t output any instrumentation, but follow the same protocol.\n Renamed AFL_SKIP_CHECKS to AFL_SKIP_BIN_CHECK to make it at least somewhat descriptive.\n Switched to using clang as the default assembler on MacOS X to work around Xcode issues with newer builds of clang. Testing and patch by Nico Weber.\n Fixed a typo (via Jakub Wilk).\n Version 1.19b: Improved exec failure detection in afl-fuzz and afl-showmap.\n Improved Ctrl-C handling in afl-showmap.\n Added afl-tmin, a handy instrumentation-enabled minimizer.\n Version 1.18b: Fixed a serious but short-lived bug in the resumption behavior introduced in version 1.16b.\n Added -t nn+ mode for soft-skipping timing-out paths.\n Version 1.17b: Fixed a compiler warning introduced in 1.16b for newer versions of GCC. Thanks to Jakub Wilk and Ilfak Guilfanov.\n Improved the consistency of saving fuzzer_stats, bitmap info, and auto-dictionaries when aborting fuzzing sessions.\n Made several noticeable performance improvements to deterministic arith and known int steps.\n Version 1.16b: Added a bit of code to make resumption pick up from the last known offset in the queue, rather than always rewinding to the start. Suggested by Jakub Wilk.\n Switched to tighter timeout control for slow programs (3x rather than 5x average exec speed at init).\n Version 1.15b: Added support for AFL_NO_VAR_CHECK to speed up resumption and inhibit variable path warnings for some programs.\n Made the trimmer run even for variable paths, since there is no special harm in doing so and it can be very beneficial if the trimming still pans out.\n Made the UI a bit more descriptive by adding \u0026ldquo;n/a\u0026rdquo; instead of \u0026ldquo;0\u0026rdquo; in a couple of corner cases.\n Version 1.14b: Added a (partial) dictionary for JavaScript.\n Added AFL_NO_CPU_RED, as suggested by Jakub Wilk.\n Tweaked the havoc scaling logic added in 1.12b.\n Version 1.13b: Improved the performance of minimize_corpus.sh by switching to a sort-based approach.\n Made several minor revisions to the docs.\n Version 1.12b: Made an improvement to dictionary generation to avoid runs of identical bytes.\n Added havoc cycle scaling to help with slow binaries in -d mode. Based on a thread with Sami Liedes.\n Added AFL_SYNC_FIRST for afl-fuzz. This is useful for those who obsess over stats, no special purpose otherwise.\n Switched to more robust box drawing codes, suggested by Jakub Wilk.\n Created faster 64-bit variants of several critical-path bitmap functions (sorry, no difference on 32 bits).\n Fixed moar typos, as reported by Jakub Wilk.\n Version 1.11b: Added a bit more info about dictionary strategies to the status screen. Version 1.10b: Revised the dictionary behavior to use insertion and overwrite in deterministic steps, rather than just the latter. This improves coverage with SQL and the like.\n Added a mention of \u0026ldquo;*\u0026rdquo; in status_screen.txt, as suggested by Jakub Wilk.\n Version 1.09b: Corrected a cosmetic problem with \u0026lsquo;extras\u0026rsquo; stage count not always being accurate in the stage yields view.\n Fixed a typo reported by Jakub Wilk and made some minor documentation improvements.\n Version 1.08b: Fixed a div-by-zero bug in the newly-added code when using a dictionary. Version 1.07b: Added code that automatically finds and extracts syntax tokens from the input corpus.\n Fixed a problem with ld dead-code removal option on MacOS X, reported by Filipe Cabecinhas.\n Corrected minor typos spotted by Jakub Wilk.\n Added a couple of more exotic archive format samples.\n Version 1.06b: Switched to slightly more accurate (if still not very helpful) reporting of short read and short write errors. These theoretically shouldn\u0026rsquo;t happen unless you kill the forkserver or run out of disk space. Suggested by Jakub Wilk.\n Revamped some of the allocator and debug code, adding comments and cleaning up other mess.\n Tweaked the odds of fuzzing non-favored test cases to make sure that baseline coverage of all inputs is reached sooner.\n Version 1.05b: Added a dictionary for WebP.\n Made some additional performance improvements to minimize_corpus.sh, getting deeper into the bash woods.\n Version 1.04b: Made substantial performance improvements to minimize_corpus.sh with large datasets, albeit at the expense of having to switch back to bash (other shells may have limits on array sizes, etc).\n Tweaked afl-showmap to support the format used by the new script.\n Version 1.03b: Added code to skip README.txt in the input directory to make the crash exploration mode work better. Suggested by Jakub Wilk.\n Added a dictionary for SQLite.\n Version 1.02b: Reverted the ./ search path in minimize_corpus.sh because people did not like it.\n Added very explicit warnings not to run various shell scripts that read or write to /tmp/ (since this is generally a pretty bad idea on multi-user systems).\n Added a check for /tmp binaries and -f locations in afl-fuzz.\n Version 1.01b: Added dictionaries for XML and GIF. Version 1.00b: Slightly improved the performance of minimize_corpus.sh, especially on Linux.\n Made a couple of improvements to calibration timeouts for resumed scans.\n Version 0.99b: Fixed minimize_corpus.sh to work with dash, as suggested by Jakub Wilk.\n Modified minimize_corpus.sh to try locate afl-showmap in $PATH and ./. The first part requested by Jakub Wilk.\n Added support for afl-as \u0026ndash;version, as required by one funky build script. Reported by William Robinet.\n Version 0.98b: Added a dictionary for TIFF.\n Fixed another cosmetic snafu with stage exec counts for -x.\n Switched afl-plot to /bin/sh, since it seems bashism-free. Also tried to remove any obvious bashisms from other examples/ scripts, most notably including minimize_corpus.sh and triage_crashes.sh. Requested by Jonathan Gray.\n Version 0.97b: Fixed cosmetic issues around the naming of -x strategy files.\n Added a dictionary for JPEG.\n Fixed a very rare glitch when running instrumenting 64-bit code that makes heavy use of xmm registers that are also touched by glibc.\n Version 0.96b: Added support for extra dictionaries, provided testcases/_extras/png/ as a demo.\n Fixed a minor bug in number formatting routines used by the UI.\n Added several additional PNG test cases that are relatively unlikely to be hit by chance.\n Fixed afl-plot syntax for gnuplot 5.x. Reported by David Necas.\n Version 0.95b: Cleaned up the OSX ReportCrash code. Thanks to Tobias Ospelt for help.\n Added some extra tips for AFL_NO_FORKSERVER on OSX.\n Refreshed the INSTALL file.\n Version 0.94b: Added in-place resume (-i-) to address a common user complaint.\n Added an awful workaround for ReportCrash on MacOS X. Problem spotted by Joseph Gentle.\n Version 0.93b: Fixed the link() workaround, as reported by Jakub Wilk. Version 0.92b: Added support for reading test cases from another filesystem. Requested by Jakub Wilk.\n Added pointers to the mailing list.\n Added a sample PDF document.\n Version 0.91b: Refactored minimize_corpus.sh to make it a bit more user-friendly and to select for smallest files, not largest bitmaps. Offers a modest corpus size improvement in most cases.\n Slightly improved the performance of splicing code.\n Version 0.90b: Moved to an algorithm where paths are marked as preferred primarily based on size and speed, rather than bitmap coverage. This should offer noticeable performance gains in many use cases.\n Refactored path calibration code; calibration now takes place as soon as a test case is discovered, to facilitate better prioritization decisions later on.\n Changed the way of marking variable paths to avoid .state metadata inconsistencies.\n Made sure that calibration routines always create a new test case to avoid hypothetical problems with utilities that modify the input file.\n Added bitmap saturation to fuzzer stats and plot data.\n Added a testcase for JPEG XR.\n Added a tty check for the colors warning in Makefile, to keep distro build logs tidy. Suggested by Jakub Wilk.\n Version 0.89b: Renamed afl-plot.sh to afl-plot, as requested by Padraig Brady.\n Improved the compatibility of afl-plot with older versions of gnuplot.\n Added banner information to fuzzer_stats, populated it to afl-plot.\n Version 0.88b: Added support for plotting, with design and implementation based on a prototype design proposed by Michael Rash. Huge thanks!\n Added afl-plot.sh, which allows you to, well, generate a nice plot using this data.\n Refactored the code slightly to make more frequent updates to fuzzer_stats and to provide more detail about synchronization.\n Added an fflush(stdout) call for non-tty operation, as requested by Joonas Kuorilehto.\n Added some detail to fuzzer_stats for parity with plot_file.\n Version 0.87b: Added support for MSAN, via AFL_USE_MSAN, same gotchas as for ASAN. Version 0.86b: Added AFL_NO_FORKSRV, allowing the forkserver to be bypassed. Suggested by Ryan Govostes.\n Simplified afl-showmap.c to make use of the no-forkserver mode.\n Made minor improvements to crash_triage.sh, as suggested by Jakub Wilk.\n Version 0.85b: Fixed the CPU counting code - no sysctlbyname() on OpenBSD, d\u0026rsquo;oh. Bug reported by Daniel Dickman.\n Made a slight correction to error messages - the advice on testing with ulimit was a tiny bit off by a factor of 1024.\n Version 0.84b: Added support for the CPU widget on some non-Linux platforms (I hope). Based on feedback from Ryan Govostes.\n Cleaned up the changelog (very meta).\n Version 0.83b: Added examples/clang_asm_normalize/ and related notes in env_variables.txt and afl-as.c. Thanks to Ryan Govostes for the idea.\n Added advice on hardware utilization in README.\n Version 0.82b: Made additional fixes for Xcode support, juggling -Q and -q flags. Thanks to Ryan Govostes.\n Added a check for asm blocks and switches to .intel_syntax in assembly. Based on feedback from Ryan Govostes.\n Version 0.81b: A workaround for Xcode 6 as -Q flag glitch. Spotted by Ryan Govostes.\n Improved Solaris build instructions, as suggested by Martin Carpenter.\n Fix for a slightly busted path scoring conditional. Minor practical impact.\n Version 0.80b: Added a check for $PATH-induced loops. Problem noticed by Kartik Agaram.\n Added AFL_KEEP_ASSEMBLY for easier troubleshooting.\n Added an override for AFL_USE_ASAN if set at AFL compile time. Requested by Hanno Boeck.\n Version 0.79b: Made minor adjustments to path skipping logic.\n Made several documentation updates to reflect the path selection changes made in 0.78b.\n Version 0.78b: Added a CPU governor check. Bug report from Joe Zbiciak.\n Favored paths are now selected strictly based on new edges, not hit counts. This speeds up the first pass by a factor of 3-6x without significantly impacting ultimate coverage (tested with libgif, libpng, libjpeg).\nIt also allows some performance \u0026amp; memory usage improvements by making some of the in-memory bitmaps much smaller.\n Made multiple significant performance improvements to bitmap checking functions, plus switched to a faster hash.\n Owing largely to these optimizations, bumped the size of the bitmap to 64k and added a warning to detect older binaries that rely on smaller bitmaps.\n Version 0.77b: Added AFL_SKIP_CHECKS to bypass binary checks when really warranted. Feature requested by Jakub Wilk.\n Fixed a couple of typos.\n Added a warning for runs that are aborted early on.\n Version 0.76b: Incorporated another signal handling fix for Solaris. Suggestion submitted by Martin Carpenter. Version 0.75b: Implemented a slightly more \u0026ldquo;elegant\u0026rdquo; kludge for the %llu glitch (see types.h).\n Relaxed CPU load warnings to stay in sync with reality.\n Version 0.74b: Switched to more responsive exec speed averages and better UI speed scaling.\n Fixed a bug with interrupted reads on Solaris. Issue spotted by Martin Carpenter.\n Version 0.73b: Fixed a stray memcpy() instead of memmove() on overlapping buffers. Mostly harmless but still dumb. Mistake spotted thanks to David Higgs. Version 0.72b: Bumped map size up to 32k. You may want to recompile instrumented binaries (but nothing horrible will happen if you don\u0026rsquo;t).\n Made huge performance improvements for bit-counting functions.\n Default optimizations now include -funroll-loops. This should have interesting effects on the instrumentation. Frankly, I\u0026rsquo;m just going to ship it and see what happens next. I have a good feeling about this.\n Made a fix for stack alignment crash on MacOS X 10.10; looks like the rhetorical question in the comments in afl-as.h has been answered. Tracked down by Mudge Zatko.\n Version 0.71b: Added a fix for the nonsensical MacOS ELF check. Spotted by Mudge Zatko.\n Made some improvements to ASAN checks.\n Version 0.70b: Added explicit detection of ASANified binaries.\n Fixed compilation issues on Solaris. Reported by Martin Carpenter.\n Version 0.69b: Improved the detection of non-instrumented binaries.\n Made the crash counter in -C mode accurate.\n Fixed an obscure install bug that made afl-as non-functional with the tool installed to /usr/bin instead of /usr/local/bin. Found by Florian Kiersch.\n Fixed for a cosmetic SIGFPE when Ctrl-C is pressed while the fork server is spinning up.\n Version 0.68b: Added crash exploration mode! Woot! Version 0.67b: Fixed several more typos, the project is now cartified 100% typo-free. Thanks to Thomas Jarosch and Jakub Wilk.\n Made a change to write fuzzer_stats early on.\n Fixed a glitch when (not!) running on MacOS X as root. Spotted by Tobias Ospelt.\n Made it possible to override -O3 in Makefile. Suggested by Jakub Wilk.\n Version 0.66b: Fixed a very obscure issue with build systems that use gcc as an assembler for hand-written .s files; this would confuse afl-as. Affected nss, reported by Hanno Boeck.\n Fixed a bug when cleaning up synchronized fuzzer output dirs. Issue reported by Thomas Jarosch.\n Version 0.65b: Cleaned up shell printf escape codes in Makefile. Reported by Jakub Wilk.\n Added more color to fuzzer_stats, provided short documentation of the file format, and made several other stats-related improvements.\n Version 0.64b: Enabled GCC support on MacOS X. Version 0.63b: Provided a new, simplified way to pass data in files (@@). See README.\n Made additional fixes for 64-bit MacOS X, working around a crashing bug in their linker (umpf) and several other things. It\u0026rsquo;s alive!\n Added a minor workaround for a bug in 64-bit FreeBSD (clang -m32 -g doesn\u0026rsquo;t work on that platform, but clang -m32 does, so we no longer insert -g).\n Added a build-time warning for inverse video terminals and better instructions in status_screen.txt.\n Version 0.62b: Made minor improvements to the allocator, as suggested by Tobias Ospelt.\n Added example instrumented memcmp() in examples/instrumented_cmp.\n Added a speculative fix for MacOS X (clang detection, again).\n Fixed typos in parallel_fuzzing.txt. Problems spotted by Thomas Jarosch.\n Version 0.61b: Fixed a minor issue with clang detection on systems with a clang cc wrapper, so that afl-gcc doesn\u0026rsquo;t confuse it with GCC.\n Made cosmetic improvements to docs and to the CPU load indicator.\n Fixed a glitch with crash removal (README.txt left behind, d\u0026rsquo;oh).\n Version 0.60b: Fixed problems with jump tables generated by exotic versions of GCC. This solves an outstanding problem on OpenBSD when using afl-gcc + PIE (not present with afl-clang).\n Fixed permissions on one of the sample archives.\n Added a lahf / sahf workaround for OpenBSD (their assembler doesn\u0026rsquo;t know about these opcodes).\n Added docs/INSTALL.\n Version 0.59b: Modified \u0026lsquo;make install\u0026rsquo; to also install test cases.\n Provided better pointers to installed README in afl-fuzz.\n More work on RLIMIT_AS for OpenBSD.\n Version 0.58b: Added a core count check on Linux.\n Refined the code for the lack-of-RLIMIT_AS case on OpenBSD.\n Added a rudimentary CPU utilization meter to help with optimal loading.\n Version 0.57b: Made fixes to support FreeBSD and OpenBSD: use_64bit is now inferred if not explicitly specified when calling afl-as, and RLIMIT_AS is behind an #ifdef. Thanks to Fabian Keil and Jonathan Gray for helping troubleshoot this.\n Modified \u0026lsquo;make install\u0026rsquo; to also install docs (in /usr/local/share/doc/afl).\n Fixed a typo in status_screen.txt.\n Made a couple of Makefile improvements as proposed by Jakub Wilk.\n Version 0.56b: Added probabilistic instrumentation density reduction in ASAN mode. This compensates for ASAN-specific branches in a crude but workable way.\n Updated notes_for_asan.txt.\n Version 0.55b: Implemented smarter out_dir behavior, automatically deleting directories that don\u0026rsquo;t contain anything of special value. Requested by several folks, including Hanno Boeck.\n Added more detail in fuzzer_stats (start time, run time, fuzzer PID).\n Implemented support for configurable install prefixes in Makefile ($PREFIX), as requested by Luca Barbato.\n Made it possible to resume by doing -i \u0026lt;out_dir\u0026gt;, without having to specify -i \u0026lt;out_dir\u0026gt;/queue/.\n Version 0.54b: Added a fix for -Wformat warning messages (oops, I thought this had been in place for a while). Version 0.53b: Redesigned the crash \u0026amp; hang duplicate detection code to better deal with fault conditions that can be reached in a multitude of ways.\nThe old approach could be compared to hashing stack traces to de-dupe crashes, a method prone to crash count inflation. The alternative I wanted to avoid would be equivalent to just looking at crash %eip, which can have false negatives in common functions such as memcpy().\nThe middle ground currently used in afl-fuzz can be compared to looking at every line item in the stack trace and tagging crashes as unique if we see any function name that we haven\u0026rsquo;t seen before (or if something that we have always seen there suddenly disappears). We do the comparison without paying any attention to ordering or hit counts. This can still cause some crash inflation early on, but the problem will quickly taper off. So, you may get 20 dupes instead of 5,000.\n Added a fix for harmless but absurd trim ratios shown if the first exec in the trimmer timed out. Spotted by @EspenGx.\n Version 0.52b: Added a quick summary of the contents in examples/.\n Made a fix to the process of writing fuzzer_stats.\n Slightly reorganized the .state/ directory, now recording redundant paths, too. Note that this breaks the ability to properly resume older sessions\n sorry about that. (To fix this, simply move \u0026lt;out_dir\u0026gt;/.state/* from an older run to \u0026lt;out_dir\u0026gt;/.state/deterministic_done/*.)\n Version 0.51b: Changed the search order for afl-as to avoid the problem with older copies installed system-wide; this also means that I can remove the Makefile check for that.\n Made it possible to set instrumentation ratio of 0%.\n Introduced some typos, fixed others.\n Fixed the test_prev target in Makefile, as reported by Ozzy Johnson.\n Version 0.50b: Improved the \u0026lsquo;make install\u0026rsquo; logic, as suggested by Padraig Brady.\n Revamped various bits of the documentation, especially around perf_tips.txt; based on the feedback from Alexander Cherepanov.\n Added AFL_INST_RATIO to afl-as. The only case where this comes handy is ffmpeg, at least as far as I can tell. (Trivia: the current version of ffmpeg ./configure also ignores CC and \u0026ndash;cc, probably unintentionally).\n Added documentation for all environmental variables (env_variables.txt).\n Implemented a visual warning for excessive or insufficient bitmap density.\n Changed afl-gcc to add -O3 by default; use AFL_DONT_OPTIMIZE if you don\u0026rsquo;t like that. Big speed gain for ffmpeg, so seems like a good idea.\n Made a regression fix to afl-as to ignore .LBB labels in gcc mode.\n Version 0.49b: Fixed more typos, as found by Jakub Wilk.\n Added support for clang!\n Changed AFL_HARDEN to not include ASAN by default. Use AFL_USE_ASAN if needed. The reasons for this are in notes_for_asan.txt.\n Switched from configure auto-detection to isatty() to keep afl-as and afl-gcc quiet.\n Improved installation process to properly create symlinks, rather than copies of binaries.\n Version 0.48b: Improved afl-fuzz to force-set ASAN_OPTIONS=abort_on_error=1. Otherwise, ASAN crashes wouldn\u0026rsquo;t be caught at all. Reported by Hanno Boeck.\n Improved Makefile mkdir logic, as suggested by Hanno Boeck.\n Improved the 64-bit instrumentation to properly save r8-r11 registers in the x86 setup code. The old behavior could cause rare problems running without instrumentation when the first function called in a particular .o file has 5+ parameters. No impact on code running under afl-fuzz or afl-showmap. Issue spotted by Padraig Brady.\n Version 0.47b: Fixed another Makefile bug for parallel builds of afl. Problem identified by Richard W. M. Jones.\n Added support for suffixes for -m.\n Updated the documentation and added notes_for_asan.txt. Based on feedback from Hanno Boeck, Ben Laurie, and others.\n Moved the project to https://lcamtuf.coredump.cx/afl/.\n Version 0.46b: Cleaned up Makefile dependencies for parallel builds. Requested by Richard W. M. Jones.\n Added support for DESTDIR in Makefile. Once again suggested by Richard W. M. Jones :-)\n Removed all the USE_64BIT stuff; we now just auto-detect compilation mode. As requested by many callers to the show.\n Fixed rare problems with programs that use snippets of assembly and switch between .code32 and .code64. Addresses a glitch spotted by Hanno Boeck with compiling ToT gdb.\n Version 0.45b: Implemented a test case trimmer. Results in 20-30% size reduction for many types of work loads, with very pronounced improvements in path discovery speeds.\n Added better warnings for various problems with input directories.\n Added a Makefile warning for older copies, based on counterintuitive behavior observed by Hovik Manucharyan.\n Added fuzzer_stats file for status monitoring. Suggested by @dronesec.\n Fixed moar typos, thanks to Alexander Cherepanov.\n Implemented better warnings for ASAN memory requirements, based on calls from several angry listeners.\n Switched to saner behavior with non-tty stdout (less output generated, no ANSI art).\n Version 0.44b: Added support for AFL_CC and AFL_CXX, based on a patch from Ben Laurie.\n Replaced afl-fuzz -S -D with -M for simplicity.\n Added a check for .section .text; lack of this prevented main() from getting instrumented for some users. Reported by Tom Ritter.\n Reorganized the testcases/ directory.\n Added an extra check to confirm that the build is operational.\n Made more consistent use of color reset codes, as suggested by Oliver Kunz.\n Version 0.43b: Fixed a bug with 64-bit gcc -shared relocs.\n Removed echo -e from Makefile for compatibility with dash. Suggested by Jakub Wilk.\n Added status_screen.txt.\n Added examples/canvas_harness.\n Made a minor change to the Makefile GCC check. Suggested by Hanno Boeck.\n Version 0.42b: Fixed a bug with red zone handling for 64-bit (oops!). Problem reported by Felix Groebert.\n Implemented horribly experimental ARM support in examples/arm_support.\n Made several improvements to error messages.\n Added AFL_QUIET to silence afl-gcc and afl-as when using wonky build systems. Reported by Hanno Boeck.\n Improved check for 64-bit compilation, plus several sanity checks in Makefile.\n Version 0.41b: Fixed a fork served bug for processes that call execve().\n Made minor compatibility fixes to Makefile, afl-gcc; suggested by Jakub Wilk.\n Fixed triage_crashes.sh to work with the new layout of output directories. Suggested by Jakub Wilk.\n Made multiple performance-related improvements to the injected instrumentation.\n Added visual indication of the number of imported paths.\n Fixed afl-showmap to make it work well with new instrumentation.\n Added much better error messages for crashes when importing test cases or otherwise calibrating the binary.\n Version 0.40b: Added support for parallelized fuzzing. Inspired by earlier patch from Sebastian Roschke.\n Added an example in examples/distributed_fuzzing/.\n Version 0.39b: Redesigned status screen, now 90% more spiffy.\n Added more verbose and user-friendly messages for some common problems.\n Modified the resumption code to reconstruct path depth.\n Changed the code to inhibit core dumps and improve the ability to detect SEGVs.\n Added a check for redirection of core dumps to programs.\n Made a minor improvement to the handling of variable paths.\n Made additional performance tweaks to afl-fuzz, chiefly around mem limits.\n Added performance_tips.txt.\n Version 0.38b: Fixed an fd leak and +cov tracking bug resulting from changes in 0.37b.\n Implemented auto-scaling for screen update speed.\n Added a visual indication when running in non-instrumented mode.\n Version 0.37b: Added fuzz state tracking for more seamless resumption of aborted fuzzing sessions.\n Removed the -D option, as it\u0026rsquo;s no longer necessary.\n Refactored calibration code and improved startup reporting.\n Implemented dynamically scaled timeouts, so that you don\u0026rsquo;t need to play with -t except in some very rare cases.\n Added visual notification for slow binaries.\n Improved instrumentation to explicitly cover the other leg of every branch.\n Version 0.36b: Implemented fork server support to avoid the overhead of execve(). A nearly-verbatim design from Jann Horn; still pending part 2 that would also skip initial setup steps (thinking about reliable heuristics now).\n Added a check for shell scripts used as fuzz targets.\n Added a check for fuzz jobs that don\u0026rsquo;t seem to be finding anything.\n Fixed the way IGNORE_FINDS works (was a bit broken after adding splicing and path skip heuristics).\n Version 0.35b: Properly integrated 64-bit instrumentation into afl-as. Version 0.34b: Added a new exec count classifier (the working theory is that it gets meaningful coverage with fewer test cases spewed out). Version 0.33b: Switched to new, somewhat experimental instrumentation that tries to target only arcs, rather than every line. May be fragile, but is a lot faster (2x+).\n Made several other cosmetic fixes and typo corrections, thanks to Jakub Wilk.\n Version 0.32b: Another take at fixing the C++ exception thing. Reported by Jakub Wilk. Version 0.31b: Made another fix to afl-as to address a potential problem with newer versions of GCC (introduced in 0.28b). Thanks to Jann Horn. Version 0.30b: Added more detail about the underlying operations in file names. Version 0.29b: Made some general improvements to chunk operations. Version 0.28b: Fixed C++ exception handling in newer versions of GCC. Problem diagnosed by Eberhard Mattes.\n Fixed the handling of the overflow flag. Once again, thanks to Eberhard Mattes.\n Version 0.27b: Added prioritization of new paths over the already-fuzzed ones.\n Included spliced test case ID in the output file name.\n Fixed a rare, cosmetic null ptr deref after Ctrl-C.\n Refactored the code to make copies of test cases in the output directory.\n Switched to better output file names, keeping track of stage and splicing sources.\n Version 0.26b: Revamped storage of testcases, -u option removed,\n Added a built-in effort minimizer to get rid of potentially redundant inputs,\n Provided a testcase count minimization script in examples/,\n Made miscellaneous improvements to directory and file handling.\n Fixed a bug in timeout detection.\n Version 0.25b: Improved count-based instrumentation.\n Improved the hang deduplication logic.\n Added -cov prefixes for test cases.\n Switched from readdir() to scandir() + alphasort() to preserve ordering of test cases.\n Added a splicing strategy.\n Made various minor UI improvements and several other bugfixes.\n Version 0.24b: Added program name to the status screen, plus the -T parameter to go with it. Version 0.23b: Improved the detection of variable behaviors.\n Added path depth tracking,\n Improved the UI a bit,\n Switched to simplified (XOR-based) tuple instrumentation.\n Version 0.22b: Refactored the handling of long bitflips and some swaps.\n Fixed the handling of gcc -pipe, thanks to anonymous reporter.\n Version 0.21b (2013-11-12): Initial public release.\n Added support for use of multiple custom mutators which can be specified using the environment variable AFL_CUSTOM_MUTATOR_LIBRARY.\n "}),a.add({id:10,href:'/docs/custom_mutator/',title:"Custom Mutator",content:"Adding custom mutators to AFL This file describes how you can implement custom mutations to be used in AFL.\nImplemented by Khaled Yakdan from Code Intelligence [email protected]\n1) Description Custom mutator libraries can be passed to afl-fuzz to perform custom mutations on test cases beyond those available in AFL - for example, to enable structure-aware fuzzing by using libraries that perform mutations according to a given grammar.\nThe custom mutator library is passed to afl-fuzz via the AFL_CUSTOM_MUTATOR_LIBRARY environment variable. The library must export the afl_custom_mutator() function and must be compiled as a shared object. For example: $CC -shared -Wall -O3 .c -o .so\nNote: unless AFL_CUSTOM_MUTATOR_ONLY is set, its state mutator like any others, so it will be used for some test cases, and other mutators for others.\nOnly if AFL_CUSTOM_MUTATOR_ONLY is set the afl_custom_mutator() function will be called every time it needs to mutate test case!\nFor some cases, the format of the mutated data returned from the custom mutator is not suitable to directly execute the target with this input. For example, when using libprotobuf-mutator, the data returned is in a protobuf format which corresponds to a given grammar. In order to execute the target, the protobuf data must be converted to the plain-text format expected by the target. In such scenarios, the user can define the afl_pre_save_handler() function. This function is then transforms the data into the format expected by the API before executing the target. afl_pre_save_handler is optional and does not have to be implemented if its functionality is not needed.\n2) Example A simple example is provided in ../examples/custom_mutators/\n"}),a.add({id:11,href:'/docs/custom_mutators/',title:"Custom Mutators",content:"Custom Mutators in AFL++ This file describes how you can implement custom mutations to be used in AFL. For now, we support C/C++ library and Python module, collectively named as the custom mutator.\nThere is also experimental support for Rust in custom_mutators/rust. For documentation, refer to that directory. Run cargo doc -p custom_mutator --open in that directory to view the documentation in your web browser.\nImplemented by\n C/C++ library (*.so): Khaled Yakdan from Code Intelligence ([email protected]) Python module: Christian Holler from Mozilla ([email protected]) 1) Introduction Custom mutators can be passed to afl-fuzz to perform custom mutations on test cases beyond those available in AFL. For example, to enable structure-aware fuzzing by using libraries that perform mutations according to a given grammar.\nThe custom mutator is passed to afl-fuzz via the AFL_CUSTOM_MUTATOR_LIBRARY or AFL_PYTHON_MODULE environment variable, and must export a fuzz function. Now AFL++ also supports multiple custom mutators which can be specified in the same AFL_CUSTOM_MUTATOR_LIBRARY environment variable like this.\nexport AFL_CUSTOM_MUTATOR_LIBRARY=\u0026#34;full/path/to/mutator_first.so;full/path/to/mutator_second.so\u0026#34; For details, see APIs and Usage.\nThe custom mutation stage is set to be the first non-deterministic stage (right before the havoc stage).\nNote: If AFL_CUSTOM_MUTATOR_ONLY is set, all mutations will solely be performed with the custom mutator.\n2) APIs C/C++:\nvoid *afl_custom_init(afl_state_t *afl, unsigned int seed); unsigned int afl_custom_fuzz_count(void *data, const unsigned char *buf, size_t buf_size); size_t afl_custom_fuzz(void *data, unsigned char *buf, size_t buf_size, unsigned char **out_buf, unsigned char *add_buf, size_t add_buf_size, size_t max_size); const char *afl_custom_describe(void *data, size_t max_description_len); size_t afl_custom_post_process(void *data, unsigned char *buf, size_t buf_size, unsigned char **out_buf); int afl_custom_init_trim(void *data, unsigned char *buf, size_t buf_size); size_t afl_custom_trim(void *data, unsigned char **out_buf); int afl_custom_post_trim(void *data, unsigned char success); size_t afl_custom_havoc_mutation(void *data, unsigned char *buf, size_t buf_size, unsigned char **out_buf, size_t max_size); unsigned char afl_custom_havoc_mutation_probability(void *data); unsigned char afl_custom_queue_get(void *data, const unsigned char *filename); u8 afl_custom_queue_new_entry(void *data, const unsigned char *filename_new_queue, const unsigned int *filename_orig_queue); const char* afl_custom_introspection(my_mutator_t *data); void afl_custom_deinit(void *data); Python:\ndef init(seed): pass def fuzz_count(buf, add_buf, max_size): return cnt def fuzz(buf, add_buf, max_size): return mutated_out def describe(max_description_length): return \u0026#34;description_of_current_mutation\u0026#34; def post_process(buf): return out_buf def init_trim(buf): return cnt def trim(): return out_buf def post_trim(success): return next_index def havoc_mutation(buf, max_size): return mutated_out def havoc_mutation_probability(): return probability # int in [0, 100] def queue_get(filename): return True def queue_new_entry(filename_new_queue, filename_orig_queue): return False def introspection(): return string def deinit(): # optional for Python pass Custom Mutation init:\nThis method is called when AFL++ starts up and is used to seed RNG and set up buffers and state.\n queue_get (optional):\nThis method determines whether the custom fuzzer should fuzz the current queue entry or not\n fuzz_count (optional):\nWhen a queue entry is selected to be fuzzed, afl-fuzz selects the number of fuzzing attempts with this input based on a few factors. If, however, the custom mutator wants to set this number instead on how often it is called for a specific queue entry, use this function. This function is most useful if AFL_CUSTOM_MUTATOR_ONLY is not used.\n fuzz (optional):\nThis method performs custom mutations on a given input. It also accepts an additional test case. Note that this function is optional - but it makes sense to use it. You would only skip this if post_process is used to fix checksums etc. so if you are using it, e.g., as a post processing library. Note that a length \u0026gt; 0 must be returned!\n describe (optional):\nWhen this function is called, it shall describe the current test case, generated by the last mutation. This will be called, for example, to name the written test case file after a crash occurred. Using it can help to reproduce crashing mutations.\n havoc_mutation and havoc_mutation_probability (optional):\nhavoc_mutation performs a single custom mutation on a given input. This mutation is stacked with other mutations in havoc. The other method, havoc_mutation_probability, returns the probability that havoc_mutation is called in havoc. By default, it is 6%.\n post_process (optional):\nFor some cases, the format of the mutated data returned from the custom mutator is not suitable to directly execute the target with this input. For example, when using libprotobuf-mutator, the data returned is in a protobuf format which corresponds to a given grammar. In order to execute the target, the protobuf data must be converted to the plain-text format expected by the target. In such scenarios, the user can define the post_process function. This function is then transforming the data into the format expected by the API before executing the target.\nThis can return any python object that implements the buffer protocol and supports PyBUF_SIMPLE. These include bytes, bytearray, etc.\n queue_new_entry (optional):\nThis methods is called after adding a new test case to the queue. If the contents of the file was changed, return True, False otherwise.\n introspection (optional):\nThis method is called after a new queue entry, crash or timeout is discovered if compiled with INTROSPECTION. The custom mutator can then return a string (const char *) that reports the exact mutations used.\n deinit:\nThe last method to be called, deinitializing the state.\n Note that there are also three functions for trimming as described in the next section.\nTrimming Support The generic trimming routines implemented in AFL++ can easily destroy the structure of complex formats, possibly leading to a point where you have a lot of test cases in the queue that your Python module cannot process anymore but your target application still accepts. This is especially the case when your target can process a part of the input (causing coverage) and then errors out on the remaining input.\nIn such cases, it makes sense to implement a custom trimming routine. The API consists of multiple methods because after each trimming step, we have to go back into the C code to check if the coverage bitmap is still the same for the trimmed input. Here\u0026rsquo;s a quick API description:\n init_trim (optional):\nThis method is called at the start of each trimming operation and receives the initial buffer. It should return the amount of iteration steps possible on this input (e.g., if your input has n elements and you want to remove them one by one, return n, if you do a binary search, return log(n), and so on).\nIf your trimming algorithm doesn\u0026rsquo;t allow to determine the amount of (remaining) steps easily (esp. while running), then you can alternatively return 1 here and always return 0 in post_trim until you are finished and no steps remain. In that case, returning 1 in post_trim will end the trimming routine. The whole current index/max iterations stuff is only used to show progress.\n trim (optional)\nThis method is called for each trimming operation. It doesn\u0026rsquo;t have any arguments because there is already the initial buffer from init_trim and we can memorize the current state in the data variables. This can also save reparsing steps for each iteration. It should return the trimmed input buffer.\n post_trim (optional)\nThis method is called after each trim operation to inform you if your trimming step was successful or not (in terms of coverage). If you receive a failure here, you should reset your input to the last known good state. In any case, this method must return the next trim iteration index (from 0 to the maximum amount of steps you returned in init_trim).\n Omitting any of three trimming methods will cause the trimming to be disabled and trigger a fallback to the built-in default trimming routine.\nEnvironment Variables Optionally, the following environment variables are supported:\n AFL_CUSTOM_MUTATOR_ONLY\nDisable all other mutation stages. This can prevent broken test cases (those that your Python module can\u0026rsquo;t work with anymore) to fill up your queue. Best combined with a custom trimming routine (see below) because trimming can cause the same test breakage like havoc and splice.\n AFL_PYTHON_ONLY\nDeprecated and removed, use AFL_CUSTOM_MUTATOR_ONLY instead.\n AFL_DEBUG\nWhen combined with AFL_NO_UI, this causes the C trimming code to emit additional messages about the performance and actions of your custom trimmer. Use this to see if it works :)\n 3) Usage Prerequisite For Python mutators, the python 3 or 2 development package is required. On Debian/Ubuntu/Kali it can be installed like this:\nsudo apt install python3-dev # or sudo apt install python-dev Then, AFL++ can be compiled with Python support. The AFL++ Makefile detects Python 2 and 3 through python-config if it is in the PATH and compiles afl-fuzz with the feature if available.\nNote: for some distributions, you might also need the package python[23]-apt. In case your setup is different, set the necessary variables like this: PYTHON_INCLUDE=/path/to/python/include LDFLAGS=-L/path/to/python/lib make.\nCustom Mutator Preparation For C/C++ mutators, the source code must be compiled as a shared object:\ngcc -shared -Wall -O3 example.c -o example.so Note that if you specify multiple custom mutators, the corresponding functions will be called in the order in which they are specified. E.g., the first post_process function of example_first.so will be called and then that of example_second.so.\nRun C/C++\nexport AFL_CUSTOM_MUTATOR_LIBRARY=\u0026#34;/full/path/to/example_first.so;/full/path/to/example_second.so\u0026#34; afl-fuzz /path/to/program Python\nexport PYTHONPATH=`dirname /full/path/to/example.py` export AFL_PYTHON_MODULE=example afl-fuzz /path/to/program 4) Example See example.c and example.py.\n5) Other Resources AFL libprotobuf mutator bruce30262/libprotobuf-mutator_fuzzing_learning thebabush/afl-libprotobuf-mutator XML Fuzzing@NullCon 2017 A bug detected by AFL + XML-aware mutators "}),a.add({id:12,href:'/docs/docs/',title:"Docs",content:"Restructure AFL++\u0026rsquo;s documentation About us We are dedicated to everything around fuzzing, our main and most well known contribution is the fuzzer AFL++ which is part of all major Unix distributions (e.g. Debian, Arch, FreeBSD, etc.) and is deployed on Google\u0026rsquo;s oss-fuzz and clusterfuzz. It is rated the top fuzzer on Google\u0026rsquo;s fuzzbench.\nWe are four individuals from Europe supported by a large community.\nAll our tools are open source.\nAbout the AFL++ fuzzer project AFL++ inherited it\u0026rsquo;s documentation from the original Google AFL project. Since then it has been massively improved - feature and performance wise - and although the documenation has likewise been continued it has grown out of proportion. The documentation is done by non-natives to the English language, plus none of us has a writer background.\nWe see questions on AFL++ usage on mailing lists (e.g. afl-users), discord channels, web forums and as issues in our repository.\nThis only increases as AFL++ has been on the top of Google\u0026rsquo;s fuzzbench statistics (which measures the performance of fuzzers) and is now being integrated in Google\u0026rsquo;s oss-fuzz and clusterfuzz - and is in many Unix packaging repositories, e.g. Debian, FreeBSD, etc.\nAFL++ now has 44 (!) documentation files with 13k total lines of content. This is way too much.\nHence AFL++ needs a complete overhaul of it\u0026rsquo;s documentation, both on a organisation/structural level as well as the content.\nOverall the following actions have to be performed:\n Create a better structure of documentation so it is easier to find the information that is being looked for, combining and/or splitting up the existing documents as needed. Rewrite some documentation to remove duplication. Several information is present several times in the documentation. These should be removed to where needed so that we have as little bloat as possible. The documents have been written and modified by a lot of different people, most of them non-native English speaker. Hence an overall review where parts should be rewritten has to be performed and then the rewrite done. Create a cheat-sheet for a very short best-setup build and run of AFL++ Pictures explain more than 1000 words. We need at least 4 images that explain the workflow with AFL++: the build workflow the fuzzing workflow the fuzzing campaign management workflow the overall workflow that is an overview of the above maybe more? where the technical writes seems it necessary for understanding. Requirements:\n Documentation has to be in Markdown format Images have to be either in SVG or PNG format. All documentation should be (moved) in(to) docs/ The project does not require writing new documentation or tutorials beside the cheat sheet. The technical information for the cheat sheet will be provided by us.\nMetrics AFL++ is a the highest performant fuzzer publicly available - but is also the most feature rich and complex. With the publicity of AFL++' success and deployment in Google projects internally and externally and availability as a package on most Linux distributions we see more and more issues being created and help requests on our Discord channel that would not be necessary if people would have read through all our documentation - which is unrealistic.\nWe expect the the new documenation after this project to be cleaner, easier accessible and lighter to digest by our users, resulting in much less help requests. On the other hand the amount of users using AFL++ should increase as well as it will be more accessible which would also increase questions again - but overall resulting in a reduction of help requests.\nIn numbers: we currently have per week on average 5 issues on Github, 10 questions on discord and 1 on mailing lists that would not be necessary with perfect documentation and perfect people.\nWe would consider this project a success if afterwards we only have 2 issues on Github and 3 questions on discord anymore that would be answered by reading the documentation. The mailing list is usually used by the most novice users and we don\u0026rsquo;t expect any less questions there.\nProject Budget We have zero experience with technical writers, so this is very hard for us to calculate. We expect it to be a lot of work though because of the amount of documentation we have that needs to be restructured and partially rewritten (44 documents with 13k total lines of content).\nWe assume the daily rate of a very good and experienced technical writer in times of a pandemic to be ~500$ (according to web research), and calculate the overall amout of work to be around 20 days for everything incl. the graphics (but again - this is basically just guessing).\nTechnical Writer 10000$ Volunteer stipends 0$ (waved) T-Shirts for the top 10 contributors and helpers to this documentation project: 10 AFL++ logo t-shirts 20$ each 200$ 10 shipping cost of t-shirts 10$ each 100$\nTotal: 10.300$ (in the submission form 10.280$ was entered)\nAdditional Information We have participated in Google Summer of Code in 2020 and hope to be selected again in 2021.\nWe have no experience with a technical writer, but we will support that person with video calls, chats, emails and messaging, provide all necessary information and write technical contents that is required for the success of this project. It is clear to us that a technical writer knows how to write, but cannot know the technical details in a complex tooling like in AFL++. This guidance, input, etc. has to come from us.\n"}),a.add({id:13,href:'/docs/env_variables/',title:"Env Variables",content:"Environment variables This document discusses the environment variables used by AFL++ to expose various exotic functions that may be (rarely) useful for power users or for some types of custom fuzzing setups. For general information about AFL++, see README.md.\nNote: Most tools will warn on any unknown AFL++ environment variables; for example, because of typos. If you want to disable this check, then set the AFL_IGNORE_UNKNOWN_ENVS environment variable.\n1) Settings for all compilers Starting with AFL++ 3.0, there is only one compiler: afl-cc.\nTo select the different instrumentation modes, use one of the following options:\n Pass the \u0026ndash;afl-MODE command-line option to the compiler. Only this option accepts further AFL-specific command-line options.\n Use a symlink to afl-cc: afl-clang, afl-clang++, afl-clang-fast, afl-clang-fast++, afl-clang-lto, afl-clang-lto++, afl-g++, afl-g++-fast, afl-gcc, afl-gcc-fast. This option does not accept AFL-specific command-line options. Instead, use environment variables.\n Use the AFL_CC_COMPILER environment variable with MODE. To select MODE, use one of the following values:\n GCC (afl-gcc/afl-g++) GCC_PLUGIN (afl-g*-fast) LLVM (afl-clang-fast*) LTO (afl-clang-lto*). The compile-time tools do not accept AFL-specific command-line options. The \u0026ndash;afl-MODE command line option is the only exception. The other options make fairly broad use of environment variables instead:\n Some build/configure scripts break with AFL++ compilers. To be able to pass them, do:\n export CC=afl-cc export CXX=afl-c++ export AFL_NOOPT=1 ./configure --disable-shared --disabler-werror unset AFL_NOOPT make Setting AFL_AS, AFL_CC, and AFL_CXX lets you use alternate downstream compilation tools, rather than the default \u0026lsquo;as\u0026rsquo;, \u0026lsquo;clang\u0026rsquo;, or \u0026lsquo;gcc\u0026rsquo; binaries in your $PATH.\n If you are a weird person that wants to compile and instrument asm text files, then use the AFL_AS_FORCE_INSTRUMENT variable: AFL_AS_FORCE_INSTRUMENT=1 afl-gcc foo.s -o foo\n Most AFL tools do not print any output if stdout/stderr are redirected. If you want to get the output into a file, then set the AFL_DEBUG environment variable. This is sadly necessary for various build processes which fail otherwise.\n By default, the wrapper appends -O3 to optimize builds. Very rarely, this will cause problems in programs built with -Werror, because -O3 enables more thorough code analysis and can spew out additional warnings. To disable optimizations, set AFL_DONT_OPTIMIZE. However, if -O... and/or -fno-unroll-loops are set, these are not overridden.\n Setting AFL_HARDEN automatically adds code hardening options when invoking the downstream compiler. This currently includes -D_FORTIFY_SOURCE=2 and -fstack-protector-all. The setting is useful for catching non-crashing memory bugs at the expense of a very slight (sub-5%) performance loss.\n Setting AFL_INST_RATIO to a percentage between 0 and 100 controls the probability of instrumenting every branch. This is (very rarely) useful when dealing with exceptionally complex programs that saturate the output bitmap. Examples include ffmpeg, perl, and v8.\n(If this ever happens, afl-fuzz will warn you ahead of the time by displaying the \u0026ldquo;bitmap density\u0026rdquo; field in fiery red.)\nSetting AFL_INST_RATIO to 0 is a valid choice. This will instrument only the transitions between function entry points, but not individual branches.\nNote that this is an outdated variable. A few instances (e.g., afl-gcc) still support these, but state-of-the-art (e.g., LLVM LTO and LLVM PCGUARD) do not need this.\n AFL_NO_BUILTIN causes the compiler to generate code suitable for use with libtokencap.so (but perhaps running a bit slower than without the flag).\n AFL_PATH can be used to point afl-gcc to an alternate location of afl-as. One possible use of this is utils/clang_asm_normalize/, which lets you instrument hand-written assembly when compiling clang code by plugging a normalizer into the chain. (There is no equivalent feature for GCC.)\n Setting AFL_QUIET will prevent afl-as and afl-cc banners from being displayed during compilation, in case you find them distracting.\n Setting AFL_USE_... automatically enables supported sanitizers - provided that your compiler supports it. Available are:\n AFL_USE_ASAN=1 - activates the address sanitizer (memory corruption detection) AFL_USE_CFISAN=1 - activates the Control Flow Integrity sanitizer (e.g. type confusion vulnerabilities) AFL_USE_LSAN - activates the leak sanitizer. To perform a leak check within your program at a certain point (such as at the end of an __AFL_LOOP()), you can run the macro __AFL_LEAK_CHECK(); which will cause an abort if any memory is leaked (you can combine this with the __AFL_LSAN_OFF(); and __AFL_LSAN_ON(); macros to avoid checking for memory leaks from memory allocated between these two calls. AFL_USE_MSAN=1 - activates the memory sanitizer (uninitialized memory) AFL_USE_TSAN=1 - activates the thread sanitizer to find thread race conditions AFL_USE_UBSAN=1 - activates the undefined behavior sanitizer TMPDIR is used by afl-as for temporary files; if this variable is not set, the tool defaults to /tmp.\n 2) Settings for LLVM and LTO: afl-clang-fast / afl-clang-fast++ / afl-clang-lto / afl-clang-lto++ The native instrumentation helpers (instrumentation and gcc_plugin) accept a subset of the settings discussed in section 1, with the exception of:\n AFL_AS, since this toolchain does not directly invoke GNU as.\n AFL_INST_RATIO, as we use collision free instrumentation by default. Not all passes support this option though as it is an outdated feature.\n LLVM modes support AFL_LLVM_DICT2FILE=/absolute/path/file.txt which will write all constant string comparisons to this file to be used later with afl-fuzz' -x option.\n TMPDIR and AFL_KEEP_ASSEMBLY, since no temporary assembly files are created.\n Then there are a few specific features that are only available in instrumentation mode:\nSelect the instrumentation mode AFL_LLVM_INSTRUMENT - this configures the instrumentation mode.\nAvailable options:\n CLANG - outdated clang instrumentation\n CLASSIC - classic AFL (map[cur_loc ^ prev_loc \u0026raquo; 1]++) (default)\nYou can also specify CTX and/or NGRAM, separate the options with a comma \u0026ldquo;,\u0026rdquo; then, e.g.: AFL_LLVM_INSTRUMENT=CLASSIC,CTX,NGRAM-4\nNote: It is actually not a good idea to use both CTX and NGRAM. :)\n CTX - context sensitive instrumentation\n GCC - outdated gcc instrumentation\n LTO - LTO instrumentation\n NATIVE - clang\u0026rsquo;s original pcguard based instrumentation\n NGRAM-x - deeper previous location coverage (from NGRAM-2 up to NGRAM-16)\n PCGUARD - our own pcgard based instrumentation (default)\n CMPLOG Setting AFL_LLVM_CMPLOG=1 during compilation will tell afl-clang-fast to produce a CmpLog binary.\nFor more information, see instrumentation/README.cmplog.md.\nCTX Setting AFL_LLVM_CTX or AFL_LLVM_INSTRUMENT=CTX activates context sensitive branch coverage - meaning that each edge is additionally combined with its caller. It is highly recommended to increase the MAP_SIZE_POW2 definition in config.h to at least 18 and maybe up to 20 for this as otherwise too many map collisions occur.\nFor more information, see instrumentation/README.llvm.md#6) AFL++ Context Sensitive Branch Coverage.\nINSTRUMENT LIST (selectively instrument files and functions) This feature allows selective instrumentation of the source.\nSetting AFL_LLVM_ALLOWLIST or AFL_LLVM_DENYLIST with a file name and/or function will only instrument (or skip) those files that match the names listed in the specified file.\nFor more information, see instrumentation/README.instrument_list.md.\nLAF-INTEL This great feature will split compares into series of single byte comparisons to allow afl-fuzz to find otherwise rather impossible paths. It is not restricted to Intel CPUs. ;-)\n Setting AFL_LLVM_LAF_TRANSFORM_COMPARES will split string compare functions.\n Setting AFL_LLVM_LAF_SPLIT_COMPARES will split all floating point and 64, 32 and 16 bit integer CMP instructions.\n Setting AFL_LLVM_LAF_SPLIT_FLOATS will split floating points, needs AFL_LLVM_LAF_SPLIT_COMPARES to be set.\n Setting AFL_LLVM_LAF_SPLIT_SWITCHES will split all switch constructs.\n Setting AFL_LLVM_LAF_ALL sets all of the above.\n For more information, see instrumentation/README.laf-intel.md.\nLTO This is a different way of instrumentation: first it compiles all code in LTO (link time optimization) and then performs an edge inserting instrumentation which is 100% collision free (collisions are a big issue in AFL and AFL-like instrumentations). This is performed by using afl-clang-lto/afl-clang-lto++ instead of afl-clang-fast, but is only built if LLVM 11 or newer is used.\nAFL_LLVM_INSTRUMENT=CFG will use Control Flow Graph instrumentation. (Not recommended for afl-clang-fast, default for afl-clang-lto as there it is a different and better kind of instrumentation.)\nNone of the following options are necessary to be used and are rather for manual use (which only ever the author of this LTO implementation will use). These are used if several separated instrumentations are performed which are then later combined.\n AFL_LLVM_DOCUMENT_IDS=file will document to a file which edge ID was given to which function. This helps to identify functions with variable bytes or which functions were touched by an input. AFL_LLVM_LTO_DONTWRITEID prevents that the highest location ID written into the instrumentation is set in a global variable. AFL_LLVM_LTO_STARTID sets the starting location ID for the instrumentation. This defaults to 1. AFL_LLVM_MAP_ADDR sets the fixed map address to a different address than the default 0x10000. A value of 0 or empty sets the map address to be dynamic (the original AFL way, which is slower). AFL_LLVM_MAP_DYNAMIC sets the shared memory address to be dynamic. For more information, see instrumentation/README.lto.md.\nNGRAM Setting AFL_LLVM_INSTRUMENT=NGRAM-{value} or AFL_LLVM_NGRAM_SIZE activates ngram prev_loc coverage. Good values are 2, 4, or 8 (any value between 2 and 16 is valid). It is highly recommended to increase the MAP_SIZE_POW2 definition in config.h to at least 18 and maybe up to 20 for this as otherwise too many map collisions occur.\nFor more information, see instrumentation/README.llvm.md#7) AFL++ N-Gram Branch Coverage.\nNOT_ZERO Setting AFL_LLVM_NOT_ZERO=1 during compilation will use counters that skip zero on overflow. This is the default for llvm \u0026gt;= 9, however, for llvm versions below that this will increase an unnecessary slowdown due a performance issue that is only fixed in llvm 9+. This feature increases path discovery by a little bit.\n Setting AFL_LLVM_SKIP_NEVERZERO=1 will not implement the skip zero test. If the target performs only a few loops, then this will give a small performance boost.\n Thread safe instrumentation counters (in all modes) Setting AFL_LLVM_THREADSAFE_INST will inject code that implements thread safe counters. The overhead is a little bit higher compared to the older non-thread safe case. Note that this disables neverzero (see NOT_ZERO).\n3) Settings for GCC / GCC_PLUGIN modes There are a few specific features that are only available in GCC and GCC_PLUGIN mode.\n GCC mode only: Setting AFL_KEEP_ASSEMBLY prevents afl-as from deleting instrumented assembly files. Useful for troubleshooting problems or understanding how the tool works.\nTo get them in a predictable place, try something like:\nmkdir assembly_here TMPDIR=$PWD/assembly_here AFL_KEEP_ASSEMBLY=1 make clean all GCC_PLUGIN mode only: Setting AFL_GCC_INSTRUMENT_FILE or AFL_GCC_ALLOWLIST with a filename will only instrument those files that match the names listed in this file (one filename per line).\nSetting AFL_GCC_DENYLIST or AFL_GCC_BLOCKLIST with a file name and/or function will only skip those files that match the names listed in the specified file. See instrumentation/README.instrument_list.md for more information.\nSetting AFL_GCC_OUT_OF_LINE=1 will instruct afl-gcc-fast to instrument the code with calls to an injected subroutine instead of the much more efficient inline instrumentation.\nSetting AFL_GCC_SKIP_NEVERZERO=1 will not implement the skip zero test. If the target performs only a few loops, then this will give a small performance boost.\n 4) Settings for afl-fuzz The main fuzzer binary accepts several options that disable a couple of sanity checks or alter some of the more exotic semantics of the tool:\n Setting AFL_AUTORESUME will resume a fuzz run (same as providing -i -) for an existing out folder, even if a different -i was provided. Without this setting, afl-fuzz will refuse execution for a long-fuzzed out dir.\n Benchmarking only: AFL_BENCH_JUST_ONE causes the fuzzer to exit after processing the first queue entry; and AFL_BENCH_UNTIL_CRASH causes it to exit soon after the first crash is found.\n AFL_CMPLOG_ONLY_NEW will only perform the expensive cmplog feature for newly found test cases and not for test cases that are loaded on startup (-i in). This is an important feature to set when resuming a fuzzing session.\n Setting AFL_CRASH_EXITCODE sets the exit code AFL++ treats as crash. For example, if AFL_CRASH_EXITCODE='-1' is set, each input resulting in a -1 return code (i.e. exit(-1) got called), will be treated as if a crash had occurred. This may be beneficial if you look for higher-level faulty conditions in which your target still exits gracefully.\n Setting AFL_CUSTOM_MUTATOR_LIBRARY to a shared library with afl_custom_fuzz() creates additional mutations through this library. If afl-fuzz is compiled with Python (which is autodetected during building afl-fuzz), setting AFL_PYTHON_MODULE to a Python module can also provide additional mutations. If AFL_CUSTOM_MUTATOR_ONLY is also set, all mutations will solely be performed with the custom mutator. This feature allows to configure custom mutators which can be very helpful, e.g., fuzzing XML or other highly flexible structured input. For details, see custom_mutators.md.\n Setting AFL_CYCLE_SCHEDULES will switch to a different schedule every time a cycle is finished.\n Setting AFL_DEBUG_CHILD will not suppress the child output. This lets you see all output of the child, making setup issues obvious. For example, in an unicornafl harness, you might see python stacktraces. You may also see other logs that way, indicating why the forkserver won\u0026rsquo;t start. Not pretty but good for debugging purposes. Note that AFL_DEBUG_CHILD_OUTPUT is deprecated.\n Setting AFL_DISABLE_TRIM tells afl-fuzz not to trim test cases. This is usually a bad idea!\n AFL_EXIT_ON_SEED_ISSUES will restore the vanilla afl-fuzz behavior which does not allow crashes or timeout seeds in the initial -i corpus.\n AFL_EXIT_ON_TIME causes afl-fuzz to terminate if no new paths were found within a specified period of time (in seconds). May be convenient for some types of automated jobs.\n AFL_EXIT_WHEN_DONE causes afl-fuzz to terminate when all existing paths have been fuzzed and there were no new finds for a while. This would be normally indicated by the cycle counter in the UI turning green. May be convenient for some types of automated jobs.\n Setting AFL_EXPAND_HAVOC_NOW will start in the extended havoc mode that includes costly mutations. afl-fuzz automatically enables this mode when deemed useful otherwise.\n AFL_FAST_CAL keeps the calibration stage about 2.5x faster (albeit less precise), which can help when starting a session against a slow target. AFL_CAL_FAST works too.\n Setting AFL_FORCE_UI will force painting the UI on the screen even if no valid terminal was detected (for virtual consoles).\n Setting AFL_FORKSRV_INIT_TMOUT allows you to specify a different timeout to wait for the forkserver to spin up. The default is the -t value times FORK_WAIT_MULT from config.h (usually 10), so for a -t 100, the default would wait for 1000 milliseconds. Setting a different time here is useful if the target has a very slow startup time, for example, when doing full-system fuzzing or emulation, but you don\u0026rsquo;t want the actual runs to wait too long for timeouts.\n Setting AFL_HANG_TMOUT allows you to specify a different timeout for deciding if a particular test case is a \u0026ldquo;hang\u0026rdquo;. The default is 1 second or the value of the -t parameter, whichever is larger. Dialing the value down can be useful if you are very concerned about slow inputs, or if you don\u0026rsquo;t want AFL++ to spend too much time classifying that stuff and just rapidly put all timeouts in that bin.\n If you are Jakub, you may need AFL_I_DONT_CARE_ABOUT_MISSING_CRASHES. Others need not apply, unless they also want to disable the /proc/sys/kernel/core_pattern check.\n If afl-fuzz encounters an incorrect fuzzing setup during a fuzzing session (not at startup), it will terminate. If you do not want this, then you can set AFL_IGNORE_PROBLEMS.\n When running in the -M or -S mode, setting AFL_IMPORT_FIRST causes the fuzzer to import test cases from other instances before doing anything else. This makes the \u0026ldquo;own finds\u0026rdquo; counter in the UI more accurate. Beyond counter aesthetics, not much else should change.\n AFL_KILL_SIGNAL: Set the signal ID to be delivered to child processes on timeout. Unless you implement your own targets or instrumentation, you likely don\u0026rsquo;t have to set it. By default, on timeout and on exit, SIGKILL (AFL_KILL_SIGNAL=9) will be delivered to the child.\n AFL_MAP_SIZE sets the size of the shared map that afl-analyze, afl-fuzz, afl-showmap, and afl-tmin create to gather instrumentation data from the target. This must be equal or larger than the size the target was compiled with.\n Setting AFL_MAX_DET_EXTRAS will change the threshold at what number of elements in the -x dictionary and LTO autodict (combined) the probabilistic mode will kick off. In probabilistic mode, not all dictionary entries will be used all of the time for fuzzing mutations to not slow down fuzzing. The default count is 200 elements. So for the 200 + 1st element, there is a 1 in 201 chance, that one of the dictionary entries will not be used directly.\n Setting AFL_NO_AFFINITY disables attempts to bind to a specific CPU core on Linux systems. This slows things down, but lets you run more instances of afl-fuzz than would be prudent (if you really want to).\n AFL_NO_ARITH causes AFL++ to skip most of the deterministic arithmetics. This can be useful to speed up the fuzzing of text-based file formats.\n Setting AFL_NO_AUTODICT will not load an LTO generated auto dictionary that is compiled into the target.\n Setting AFL_NO_COLOR or AFL_NO_COLOUR will omit control sequences for coloring console output when configured with USE_COLOR and not ALWAYS_COLORED.\n The CPU widget shown at the bottom of the screen is fairly simplistic and may complain of high load prematurely, especially on systems with low core counts. To avoid the alarming red color for very high CPU usages, you can set AFL_NO_CPU_RED.\n Setting AFL_NO_FORKSRV disables the forkserver optimization, reverting to fork + execve() call for every tested input. This is useful mostly when working with unruly libraries that create threads or do other crazy things when initializing (before the instrumentation has a chance to run).\nNote that this setting inhibits some of the user-friendly diagnostics normally done when starting up the forkserver and causes a pretty significant performance drop.\n AFL_NO_SNAPSHOT will advice afl-fuzz not to use the snapshot feature if the snapshot lkm is loaded.\n Setting AFL_NO_UI inhibits the UI altogether and just periodically prints some basic stats. This behavior is also automatically triggered when the output from afl-fuzz is redirected to a file or to a pipe.\n In QEMU mode (-Q) and Frida mode (-O), AFL_PATH will be searched for afl-qemu-trace and afl-frida-trace.so.\n If you are using persistent mode (you should, see instrumentation/README.persistent_mode.md), some targets keep inherent state due which a detected crash test case does not crash the target again when the test case is given. To be able to still re-trigger these crashes, you can use the AFL_PERSISTENT_RECORD variable with a value of how many previous fuzz cases to keep prio a crash. If set to e.g., 10, then the 9 previous inputs are written to out/default/crashes as RECORD:000000,cnt:000000 to RECORD:000000,cnt:000008 and RECORD:000000,cnt:000009 being the crash case. NOTE: This option needs to be enabled in config.h first!\n Note that AFL_POST_LIBRARY is deprecated, use AFL_CUSTOM_MUTATOR_LIBRARY instead.\n Setting AFL_PRELOAD causes AFL++ to set LD_PRELOAD for the target binary without disrupting the afl-fuzz process itself. This is useful, among other things, for bootstrapping libdislocator.so.\n In QEMU mode (-Q), setting AFL_QEMU_CUSTOM_BIN will cause afl-fuzz to skip prepending afl-qemu-trace to your command line. Use this if you wish to use a custom afl-qemu-trace or if you need to modify the afl-qemu-trace arguments.\n AFL_SHUFFLE_QUEUE randomly reorders the input queue on startup. Requested by some users for unorthodox parallelized fuzzing setups, but not advisable otherwise.\n When developing custom instrumentation on top of afl-fuzz, you can use AFL_SKIP_BIN_CHECK to inhibit the checks for non-instrumented binaries and shell scripts; and AFL_DUMB_FORKSRV in conjunction with the -n setting to instruct afl-fuzz to still follow the fork server protocol without expecting any instrumentation data in return. Note that this also turns off auto map size detection.\n Setting AFL_SKIP_CPUFREQ skips the check for CPU scaling policy. This is useful if you can\u0026rsquo;t change the defaults (e.g., no root access to the system) and are OK with some performance loss.\n Setting AFL_STATSD enables StatsD metrics collection. By default, AFL++ will send these metrics over UDP to 127.0.0.1:8125. The host and port are configurable with AFL_STATSD_HOST and AFL_STATSD_PORT respectively. To enable tags (banner and afl_version), you should provide AFL_STATSD_TAGS_FLAVOR that matches your StatsD server (see AFL_STATSD_TAGS_FLAVOR).\n Setting AFL_STATSD_TAGS_FLAVOR to one of dogstatsd, influxdb, librato, or signalfx allows you to add tags to your fuzzing instances. This is especially useful when running multiple instances (-M/-S for example). Applied tags are banner and afl_version. banner corresponds to the name of the fuzzer provided through -M/-S. afl_version corresponds to the currently running AFL++ version (e.g., ++3.0c). Default (empty/non present) will add no tags to the metrics. For more information, see rpc_statsd.md.\n Setting AFL_TARGET_ENV causes AFL++ to set extra environment variables for the target binary. Example: AFL_TARGET_ENV=\u0026quot;VAR1=1 VAR2='a b c'\u0026quot; afl-fuzz ... . This exists mostly for things like LD_LIBRARY_PATH but it would theoretically allow fuzzing of AFL++ itself (with \u0026lsquo;target\u0026rsquo; AFL++ using some AFL_ vars that would disrupt work of \u0026lsquo;fuzzer\u0026rsquo; AFL++).\n AFL_TESTCACHE_SIZE allows you to override the size of #define TESTCASE_CACHE in config.h. Recommended values are 50-250MB - or more if your fuzzing finds a huge amount of paths for large inputs.\n AFL_TMPDIR is used to write the .cur_input file to if it exists, and in the normal output directory otherwise. You would use this to point to a ramdisk/tmpfs. This increases the speed by a small value but also reduces the stress on SSDs.\n Setting AFL_TRY_AFFINITY tries to attempt binding to a specific CPU core on Linux systems, but will not terminate if that fails.\n Outdated environment variables that are not supported anymore:\n AFL_DEFER_FORKSRV AFL_PERSISTENT 5) Settings for afl-qemu-trace The QEMU wrapper used to instrument binary-only code supports several settings:\n Setting AFL_COMPCOV_LEVEL enables the CompareCoverage tracing of all cmp and sub in x86 and x86_64 and memory comparison functions (e.g., strcmp, memcmp, \u0026hellip;) when libcompcov is preloaded using AFL_PRELOAD. More info at qemu_mode/libcompcov/README.md.\nThere are two levels at the moment, AFL_COMPCOV_LEVEL=1 that instruments only comparisons with immediate values / read-only memory and AFL_COMPCOV_LEVEL=2 that instruments all the comparisons. Level 2 is more accurate but may need a larger shared memory.\n AFL_DEBUG will print the found entry point for the binary to stderr. Use this if you are unsure if the entry point might be wrong - but use it directly, e.g., afl-qemu-trace ./program.\n AFL_ENTRYPOINT allows you to specify a specific entry point into the binary (this can be very good for the performance!). The entry point is specified as hex address, e.g., 0x4004110. Note that the address must be the address of a basic block.\n Setting AFL_INST_LIBS causes the translator to also instrument the code inside any dynamically linked libraries (notably including glibc).\n It is possible to set AFL_INST_RATIO to skip the instrumentation on some of the basic blocks, which can be useful when dealing with very complex binaries.\n Setting AFL_QEMU_COMPCOV enables the CompareCoverage tracing of all cmp and sub in x86 and x86_64. This is an alias of AFL_COMPCOV_LEVEL=1 when AFL_COMPCOV_LEVEL is not specified.\n With AFL_QEMU_FORCE_DFL, you force QEMU to ignore the registered signal handlers of the target.\n When the target is i386/x86_64, you can specify the address of the function that has to be the body of the persistent loop using AFL_QEMU_PERSISTENT_ADDR=start addr.\n With AFL_QEMU_PERSISTENT_GPR=1, QEMU will save the original value of general purpose registers and restore them in each persistent cycle.\n Another modality to execute the persistent loop is to specify also the AFL_QEMU_PERSISTENT_RET=end addr environment variable. With this variable assigned, instead of patching the return address, the specified instruction is transformed to a jump towards start addr.\n With AFL_QEMU_PERSISTENT_RETADDR_OFFSET, you can specify the offset from the stack pointer in which QEMU can find the return address when start addr is hit.\n With AFL_USE_QASAN, you can enable QEMU AddressSanitizer for dynamically linked binaries.\n The underlying QEMU binary will recognize any standard \u0026ldquo;user space emulation\u0026rdquo; variables (e.g., QEMU_STACK_SIZE), but there should be no reason to touch them.\n 7) Settings for afl-frida-trace The FRIDA wrapper used to instrument binary-only code supports many of the same options as afl-qemu-trace, but also has a number of additional advanced options. These are listed in brief below (see frida_mode/README.md for more details). These settings are provided for compatibility with QEMU mode, the preferred way to configure FRIDA mode is through its scripting support.\n AFL_FRIDA_DEBUG_MAPS - See AFL_QEMU_DEBUG_MAPS AFL_FRIDA_DRIVER_NO_HOOK - See AFL_QEMU_DRIVER_NO_HOOK. When using the QEMU driver to provide a main loop for a user provided LLVMFuzzerTestOneInput, this option configures the driver to read input from stdin rather than using in-memory test cases. AFL_FRIDA_EXCLUDE_RANGES - See AFL_QEMU_EXCLUDE_RANGES AFL_FRIDA_INST_COVERAGE_FILE - File to write DynamoRio format coverage information (e.g., to be loaded within IDA lighthouse). AFL_FRIDA_INST_DEBUG_FILE - File to write raw assembly of original blocks and their instrumented counterparts during block compilation. AFL_FRIDA_INST_JIT - Enable the instrumentation of Just-In-Time compiled code. Code is considered to be JIT if the executable segment is not backed by a file. AFL_FRIDA_INST_NO_OPTIMIZE - Don\u0026rsquo;t use optimized inline assembly coverage instrumentation (the default where available). Required to use AFL_FRIDA_INST_TRACE. AFL_FRIDA_INST_NO_BACKPATCH - Disable backpatching. At the end of executing each block, control will return to FRIDA to identify the next block to execute. AFL_FRIDA_INST_NO_PREFETCH - Disable prefetching. By default, the child will report instrumented blocks back to the parent so that it can also instrument them and they be inherited by the next child on fork, implies AFL_FRIDA_INST_NO_PREFETCH_BACKPATCH. AFL_FRIDA_INST_NO_PREFETCH_BACKPATCH - Disable prefetching of stalker backpatching information. By default, the child will report applied backpatches to the parent so that they can be applied and then be inherited by the next child on fork. AFL_FRIDA_INST_RANGES - See AFL_QEMU_INST_RANGES AFL_FRIDA_INST_SEED - Sets the initial seed for the hash function used to generate block (and hence edge) IDs. Setting this to a constant value may be useful for debugging purposes, e.g., investigating unstable edges. AFL_FRIDA_INST_TRACE - Log to stdout the address of executed blocks, implies AFL_FRIDA_INST_NO_OPTIMIZE. AFL_FRIDA_INST_TRACE_UNIQUE - As per AFL_FRIDA_INST_TRACE, but each edge is logged only once, requires AFL_FRIDA_INST_NO_OPTIMIZE. AFL_FRIDA_INST_UNSTABLE_COVERAGE_FILE - File to write DynamoRio format coverage information for unstable edges (e.g., to be loaded within IDA lighthouse). AFL_FRIDA_JS_SCRIPT - Set the script to be loaded by the FRIDA scripting engine. See frida_mode/Scripting.md for details. AFL_FRIDA_OUTPUT_STDOUT - Redirect the standard output of the target application to the named file (supersedes the setting of AFL_DEBUG_CHILD) AFL_FRIDA_OUTPUT_STDERR - Redirect the standard error of the target application to the named file (supersedes the setting of AFL_DEBUG_CHILD) AFL_FRIDA_PERSISTENT_ADDR - See AFL_QEMU_PERSISTENT_ADDR AFL_FRIDA_PERSISTENT_CNT - See AFL_QEMU_PERSISTENT_CNT AFL_FRIDA_PERSISTENT_DEBUG - Insert a Breakpoint into the instrumented code at AFL_FRIDA_PERSISTENT_HOOK and AFL_FRIDA_PERSISTENT_RET to allow the user to detect issues in the persistent loop using a debugger. AFL_FRIDA_PERSISTENT_HOOK - See AFL_QEMU_PERSISTENT_HOOK AFL_FRIDA_PERSISTENT_RET - See AFL_QEMU_PERSISTENT_RET AFL_FRIDA_SECCOMP_FILE - Write a log of any syscalls made by the target to the specified file. AFL_FRIDA_STALKER_ADJACENT_BLOCKS - Configure the number of adjacent blocks to fetch when generating instrumented code. By fetching blocks in the same order they appear in the original program, rather than the order of execution should help reduce locallity and adjacency. This includes allowing us to vector between adjancent blocks using a NOP slide rather than an immediate branch. AFL_FRIDA_STALKER_IC_ENTRIES - Configure the number of inline cache entries stored along-side branch instructions which provide a cache to avoid having to call back into FRIDA to find the next block. Default is 32. AFL_FRIDA_STATS_FILE - Write statistics information about the code being instrumented to the given file name. The statistics are written only for the child process when new block is instrumented (when the AFL_FRIDA_STATS_INTERVAL has expired). Note that just because a new path is found does not mean a new block needs to be compiled. It could be that the existing blocks instrumented have been executed in a different order. AFL_FRIDA_STATS_INTERVAL - The maximum frequency to output statistics information. Stats will be written whenever they are updated if the given interval has elapsed since last time they were written. AFL_FRIDA_TRACEABLE - Set the child process to be traceable by any process to aid debugging and overcome the restrictions imposed by YAMA. Supported on Linux only. Permits a non-root user to use gcore or similar to collect a core dump of the instrumented target. Note that in order to capture the core dump you must set a sufficient timeout (using -t) to avoid afl-fuzz killing the process whilst it is being dumped. 8) Settings for afl-cmin The corpus minimization script offers very little customization:\n AFL_ALLOW_TMP permits this and some other scripts to run in /tmp. This is a modest security risk on multi-user systems with rogue users, but should be safe on dedicated fuzzing boxes.\n AFL_KEEP_TRACES makes the tool keep traces and other metadata used for minimization and normally deleted at exit. The files can be found in the \u0026lt;out_dir\u0026gt;/.traces/ directory.\n Setting AFL_PATH offers a way to specify the location of afl-showmap and afl-qemu-trace (the latter only in -Q mode).\n AFL_PRINT_FILENAMES prints each filename to stdout, as it gets processed. This can help when embedding afl-cmin or afl-showmap in other scripts.\n 9) Settings for afl-tmin Virtually nothing to play with. Well, in QEMU mode (-Q), AFL_PATH will be searched for afl-qemu-trace. In addition to this, TMPDIR may be used if a temporary file can\u0026rsquo;t be created in the current working directory.\nYou can specify AFL_TMIN_EXACT if you want afl-tmin to require execution paths to match when minimizing crashes. This will make minimization less useful, but may prevent the tool from \u0026ldquo;jumping\u0026rdquo; from one crashing condition to another in very buggy software. You probably want to combine it with the -e flag.\n10) Settings for afl-analyze You can set AFL_ANALYZE_HEX to get file offsets printed as hexadecimal instead of decimal.\n11) Settings for libdislocator The library honors these environment variables:\n AFL_ALIGNED_ALLOC=1 will force the alignment of the allocation size to max_align_t to be compliant with the C standard.\n AFL_LD_HARD_FAIL alters the behavior by calling abort() on excessive allocations, thus causing what AFL++ would perceive as a crash. Useful for programs that are supposed to maintain a specific memory footprint.\n AFL_LD_LIMIT_MB caps the size of the maximum heap usage permitted by the library, in megabytes. The default value is 1 GB. Once this is exceeded, allocations will return NULL.\n AFL_LD_NO_CALLOC_OVER inhibits abort() on calloc() overflows. Most of the common allocators check for that internally and return NULL, so it\u0026rsquo;s a security risk only in more exotic setups.\n AFL_LD_VERBOSE causes the library to output some diagnostic messages that may be useful for pinpointing the cause of any observed issues.\n 11) Settings for libtokencap This library accepts AFL_TOKEN_FILE to indicate the location to which the discovered tokens should be written.\n12) Third-party variables set by afl-fuzz \u0026amp; other tools Several variables are not directly interpreted by afl-fuzz, but are set to optimal values if not already present in the environment:\n By default, ASAN_OPTIONS are set to (among others):\nabort_on_error=1 detect_leaks=0 malloc_context_size=0 symbolize=0 allocator_may_return_null=1 If you want to set your own options, be sure to include abort_on_error=1 - otherwise, the fuzzer will not be able to detect crashes in the tested app. Similarly, include symbolize=0, since without it, AFL++ may have difficulty telling crashes and hangs apart.\n Similarly, the default LSAN_OPTIONS are set to:\nexit_code=23 fast_unwind_on_malloc=0 symbolize=0 print_suppressions=0 Be sure to include the first ones for LSAN and MSAN when customizing anything, since some MSAN and LSAN versions don\u0026rsquo;t call abort() on error, and we need a way to detect faults.\n In the same vein, by default, MSAN_OPTIONS are set to:\nexit_code=86 (required for legacy reasons) abort_on_error=1 symbolize=0 msan_track_origins=0 allocator_may_return_null=1 By default, LD_BIND_NOW is set to speed up fuzzing by forcing the linker to do all the work before the fork server kicks in. You can override this by setting LD_BIND_LAZY beforehand, but it is almost certainly pointless.\n "}),a.add({id:14,href:'/docs/faq/',title:"Faq",content:"Frequently asked questions (FAQ) If you find an interesting or important question missing, submit it via https://github.com/AFLplusplus/AFLplusplus/discussions.\nGeneral AFL++ is a superior fork to Google\u0026rsquo;s AFL - more speed, more and better mutations, more and better instrumentation, custom module support, etc.\nAmerican Fuzzy Lop (AFL) was developed by Michał \u0026ldquo;lcamtuf\u0026rdquo; Zalewski starting in 2013/2014, and when he left Google end of 2017 he stopped developing it.\nAt the end of 2019, the Google fuzzing team took over maintenance of AFL, however, it is only accepting PRs from the community and is not developing enhancements anymore.\nIn the second quarter of 2019, 1 1/2 years later, when no further development of AFL had happened and it became clear there would none be coming, AFL++ was born, where initially community patches were collected and applied for bug fixes and enhancements. Then from various AFL spin-offs - mostly academic research - features were integrated. This already resulted in a much advanced AFL.\nUntil the end of 2019, the AFL++ team had grown to four active developers which then implemented their own research and features, making it now by far the most flexible and feature rich guided fuzzer available as open source. And in independent fuzzing benchmarks it is one of the best fuzzers available, e.g., Fuzzbench Report.\nThe definition of the terms whitebox, graybox, and blackbox fuzzing varies from one source to another. For example, \u0026ldquo;graybox fuzzing\u0026rdquo; could mean binary-only or source code fuzzing, or something completely different. Therefore, we try to avoid them.\nThe Fuzzing Book describes the original AFL to be a graybox fuzzer. In that sense, AFL++ is also a graybox fuzzer.\nWe compiled a list of tutorials and exercises, see tutorials.md.\nA program contains functions, functions contain the compiled machine code. The compiled machine code in a function can be in a single or many basic blocks. A basic block is the largest possible number of subsequent machine code instructions that has exactly one entry point (which can be be entered by multiple other basic blocks) and runs linearly without branching or jumping to other addresses (except at the end).\nfunction() { A: some code B: if (x) goto C; else goto D; C: some code goto E D: some code goto B E: return } Every code block between two jump locations is a basic block.\nAn edge is then the unique relationship between two directly connected basic blocks (from the code example above):\n Block A | v Block B \u0026lt;------+ / \\ | v v | Block C Block D --+ \\ v Block E Every line between two blocks is an edge. Note that a few basic block loop to itself, this too would be an edge.\nTargets AFL++ is a great fuzzer if you have the source code available.\nHowever, if there is only the binary program and no source code available, then the standard non-instrumented mode is not effective.\nTo learn how these binaries can be fuzzed, read fuzzing_binary-only_targets.md.\nThe short answer is - you cannot, at least not \u0026ldquo;out of the box\u0026rdquo;.\nFor more information on fuzzing network services, see best_practices.md#fuzzing-a-network-service.\nNot all GUI programs are suitable for fuzzing. If the GUI program can read the fuzz data from a file without needing any user interaction, then it would be suitable for fuzzing.\nFor more information on fuzzing GUI programs, see best_practices.md#fuzzing-a-gui-program.\nPerformance Good performance generally means \u0026ldquo;making the fuzzing results better\u0026rdquo;. This can be influenced by various factors, for example, speed (finding lots of paths quickly) or thoroughness (working with decreased speed, but finding better mutations).\nThere are a few things you can do to improve the fuzzing speed, see best_practices.md#improving-speed.\nStability is measured by how many percent of the edges in the target are \u0026ldquo;stable\u0026rdquo;. Sending the same input again and again should take the exact same path through the target every time. If that is the case, the stability is 100%.\nIf, however, randomness happens, e.g., a thread reading other external data, reaction to timing, etc., then in some of the re-executions with the same data the edge coverage result will be different across runs. Those edges that change are then flagged \u0026ldquo;unstable\u0026rdquo;.\nThe more \u0026ldquo;unstable\u0026rdquo; edges there are, the harder it is for AFL++ to identify valid new paths.\nA value above 90% is usually fine and a value above 80% is also still ok, and even a value above 20% can still result in successful finds of bugs. However, it is recommended that for values below 90% or 80% you should take countermeasures to improve stability.\nFor more information on stability and how to improve the stability value, see best_practices.md#improving-stability.\nNot every item in our queue/corpus is the same, some are more interesting, others provide little value. A power schedule measures how \u0026ldquo;interesting\u0026rdquo; a value is, and depending on the calculated value spends more or less time mutating it.\nAFL++ comes with several power schedules, initially ported from AFLFast, however, modified to be more effective and several more modes added.\nThe most effective modes are -p fast (default) and -p explore.\nIf you fuzz with several parallel afl-fuzz instances, then it is beneficial to assign a different schedule to each instance, however the majority should be fast and explore.\nIt does not make sense to explain the details of the calculation and reasoning behind all of the schedules. If you are interested, read the source code and the AFLFast paper.\nTroubleshooting It can happen that you see this error on startup when fuzzing a target:\n[-] FATAL: forkserver is already up, but an instrumented dlopen() library loaded afterwards. You must AFL_PRELOAD such libraries to be able to fuzz them or LD_PRELOAD to run outside of afl-fuzz. To ignore this set AFL_IGNORE_PROBLEMS=1. As the error describes, a dlopen() call is happening in the target that is loading an instrumented library after the forkserver is already in place. This is a problem for afl-fuzz because when the forkserver is started, we must know the map size already and it can\u0026rsquo;t be changed later.\nThe best solution is to simply set AFL_PRELOAD=foo.so to the libraries that are dlopen\u0026rsquo;ed (e.g., use strace to see which), or to set a manual forkserver after the final dlopen().\nIf this is not a viable option, you can set AFL_IGNORE_PROBLEMS=1 but then the existing map will be used also for the newly loaded libraries, which allows it to work, however, the efficiency of the fuzzing will be partially degraded.\nIf you see this kind of error when trying to instrument a target with afl-cc/afl-clang-fast/afl-clang-lto:\n/prg/tmp/llvm-project/build/bin/clang-13: symbol lookup error: /usr/local/bin/../lib/afl//cmplog-instructions-pass.so: undefined symbol: _ZNK4llvm8TypeSizecvmEv clang-13: error: unable to execute command: No such file or directory clang-13: error: clang frontend command failed due to signal (use -v to see invocation) clang version 13.0.0 (https://github.com/llvm/llvm-project 1d7cf550721c51030144f3cd295c5789d51c4aad) Target: x86_64-unknown-linux-gnu Thread model: posix InstalledDir: /prg/tmp/llvm-project/build/bin clang-13: note: diagnostic msg: ******************** Then this means that your OS updated the clang installation from an upgrade package and because of that the AFL++ llvm plugins do not match anymore.\nSolution: git pull ; make clean install of AFL++.\n"}),a.add({id:15,href:'/docs/features/',title:"Features",content:"Important features of AFL++ AFL++ supports llvm from 3.8 up to version 12, very fast binary fuzzing with QEMU 5.1 with laf-intel and Redqueen, FRIDA mode, unicorn mode, gcc plugin, full *BSD, Mac OS, Solaris and Android support and much, much, much more.\nFeatures and instrumentation Feature/Instrumentation afl-gcc llvm gcc_plugin FRIDA mode(9) QEMU mode(10) unicorn_mode(10) nyx_mode(12) coresight_mode(11) Threadsafe counters [A] x(3) x NeverZero [B] x86[_64] x(1) x x x x Persistent Mode [C] x x x86[_64]/arm64 x86[_64]/arm[64] x LAF-Intel / CompCov [D] x x86[_64]/arm[64] x86[_64]/arm[64] x86[_64] CmpLog [E] x x86[_64]/arm64 x86[_64]/arm[64] Selective Instrumentation [F] x x x x Non-Colliding Coverage [G] x(4) (x)(5) Ngram prev_loc Coverage [H] x(6) Context Coverage [I] x(6) Auto Dictionary [J] x(7) Snapshot Support [K] (x)(8) (x)(8) (x)(5) x Shared Memory Test cases [L] x x x86[_64]/arm64 x x x More information about features A. Default is not thread-safe coverage counter updates for better performance, see instrumentation/README.llvm.md\nB. On wrapping coverage counters (255 + 1), skip the 0 value and jump to 1 instead. This has shown to give better coverage data and is the default; see instrumentation/README.llvm.md.\nC. Instead of forking, reiterate the fuzz target function in a loop (like LLVMFuzzerTestOneInput. Great speed increase but only works with target functions that do not keep state, leak memory, or exit; see instrumentation/README.persistent_mode.md\nD. Split any non-8-bit comparison to 8-bit comparison; see instrumentation/README.laf-intel.md\nE. CmpLog is our enhanced Redqueen implementation, see instrumentation/README.cmplog.md\nF. Similar and compatible to clang 13+ sancov sanitize-coverage-allow/deny but for all llvm versions and all our compile modes, only instrument what should be instrumented, for more speed, directed fuzzing and less instability; see instrumentation/README.instrument_list.md\nG. Vanilla AFL uses coverage where edges could collide to the same coverage bytes the larger the target is. Our default instrumentation in LTO and afl-clang-fast (PCGUARD) uses non-colliding coverage that also makes it faster. Vanilla AFL style is available with AFL_LLVM_INSTRUMENT=AFL; see instrumentation/README.llvm.md.\nH.+I. Alternative coverage based on previous edges (NGRAM) or depending on the caller (CTX), based on https://www.usenix.org/system/files/raid2019-wang-jinghan.pdf; see instrumentation/README.llvm.md.\nJ. An LTO feature that creates a fuzzing dictionary based on comparisons found during compilation/instrumentation. Automatic feature :) See instrumentation/README.lto.md\nK. The snapshot feature requires a kernel module that was a lot of work to get right and maintained so it is no longer supported. We have nyx_mode instead.\nL. Faster fuzzing and less kernel syscall overhead by in-memory fuzz testcase delivery, see instrumentation/README.persistent_mode.md\nMore information about instrumentation Default for LLVM \u0026gt;= 9.0, environment variable for older version due an efficiency bug in previous llvm versions GCC creates non-performant code, hence it is disabled in gcc_plugin With AFL_LLVM_THREADSAFE_INST, disables NeverZero With pcguard mode and LTO mode for LLVM 11 and newer Upcoming, development in the branch Not compatible with LTO instrumentation and needs at least LLVM v4.1 Automatic in LTO mode with LLVM 11 and newer, an extra pass for all LLVM versions that write to a file to use with afl-fuzz' -x The snapshot LKM is currently unmaintained due to too many kernel changes coming too fast :-( FRIDA mode is supported on Linux and MacOS for Intel and ARM QEMU/Unicorn is only supported on Linux Coresight mode is only available on AARCH64 Linux with a CPU with Coresight extension Nyx mode is only supported on Linux and currently restricted to x86_x64 Integrated features and patches Among others, the following features and patches have been integrated:\n NeverZero patch for afl-gcc, instrumentation, QEMU mode and unicorn_mode which prevents a wrapping map value to zero, increases coverage Persistent mode, deferred forkserver and in-memory fuzzing for QEMU mode Unicorn mode which allows fuzzing of binaries from completely different platforms (integration provided by domenukk) The new CmpLog instrumentation for LLVM and QEMU inspired by Redqueen Win32 PE binary-only fuzzing with QEMU and Wine AFLfast\u0026rsquo;s power schedules by Marcel Böhme: https://github.com/mboehme/aflfast The MOpt mutator: https://github.com/puppet-meteor/MOpt-AFL LLVM mode Ngram coverage by Adrian Herrera https://github.com/adrianherrera/afl-ngram-pass LAF-Intel/CompCov support for instrumentation, QEMU mode and unicorn_mode (with enhanced capabilities) Radamsa and honggfuzz mutators (as custom mutators). QBDI mode to fuzz android native libraries via Quarkslab\u0026rsquo;s QBDI framework Frida and ptrace mode to fuzz binary-only libraries, etc. So all in all this is the best-of AFL that is out there :-)\n"}),a.add({id:16,href:'/docs/fuzzing_binary-only_targets/',title:"Fuzzing Binary Only Targets",content:"Fuzzing binary-only targets AFL++, libfuzzer, and other fuzzers are great if you have the source code of the target. This allows for very fast and coverage guided fuzzing.\nHowever, if there is only the binary program and no source code available, then standard afl-fuzz -n (non-instrumented mode) is not effective.\nFor fast, on-the-fly instrumentation of black-box binaries, AFL++ still offers various support. The following is a description of how these binaries can be fuzzed with AFL++.\nTL;DR: FRIDA mode and QEMU mode in persistent mode are the fastest - if persistent mode is possible and the stability is high enough.\nOtherwise, try Zafl, RetroWrite, Dyninst, and if these fail, too, then try standard FRIDA/QEMU mode with AFL_ENTRYPOINT to where you need it.\nIf your target is non-linux, then use unicorn_mode.\nFuzzing binary-only targets with AFL++ QEMU mode QEMU mode is the \u0026ldquo;native\u0026rdquo; solution to the program. It is available in the ./qemu_mode/ directory and, once compiled, it can be accessed by the afl-fuzz -Q command line option. It is the easiest to use alternative and even works for cross-platform binaries.\nFor linux programs and its libraries, this is accomplished with a version of QEMU running in the lesser-known \u0026ldquo;user space emulation\u0026rdquo; mode. QEMU is a project separate from AFL++, but you can conveniently build the feature by doing:\ncd qemu_mode ./build_qemu_support.sh The following setup to use QEMU mode is recommended:\n run 1 afl-fuzz -Q instance with CMPLOG (-c 0 + AFL_COMPCOV_LEVEL=2) run 1 afl-fuzz -Q instance with QASAN (AFL_USE_QASAN=1) run 1 afl-fuzz -Q instance with LAF (AFL_PRELOAD=libcmpcov.so + AFL_COMPCOV_LEVEL=2), alternatively you can use FRIDA mode, just switch -Q with -O and remove the LAF instance Then run as many instances as you have cores left with either -Q mode or - even better - use a binary rewriter like Dyninst, RetroWrite, ZAFL, etc.\nIf afl-dyninst works for your binary, then you can use afl-fuzz normally and it will have twice the speed compared to QEMU mode (but slower than QEMU persistent mode). Note that several other binary rewriters exist, all with their advantages and caveats.\nThe speed decrease of QEMU mode is at about 50%. However, various options exist to increase the speed:\n using AFL_ENTRYPOINT to move the forkserver entry to a later basic block in the binary (+5-10% speed) using persistent mode qemu_mode/README.persistent.md this will result in a 150-300% overall speed increase - so 3-8x the original QEMU mode speed! using AFL_CODE_START/AFL_CODE_END to only instrument specific parts For additional instructions and caveats, see qemu_mode/README.md. If possible, you should use the persistent mode, see qemu_mode/README.persistent.md. The mode is approximately 2-5x slower than compile-time instrumentation, and is less conducive to parallelization.\nNote that there is also honggfuzz: https://github.com/google/honggfuzz which now has a QEMU mode, but its performance is just 1.5% \u0026hellip;\nIf you like to code a customized fuzzer without much work, we highly recommend to check out our sister project libafl which supports QEMU, too: https://github.com/AFLplusplus/LibAFL\nWINE+QEMU Wine mode can run Win32 PE binaries with the QEMU instrumentation. It needs Wine, python3, and the pefile python package installed.\nIt is included in AFL++.\nFor more information, see qemu_mode/README.wine.md.\nFRIDA mode In FRIDA mode, you can fuzz binary-only targets as easily as with QEMU mode. FRIDA mode is most of the times slightly faster than QEMU mode. It is also newer, lacks COMPCOV, and has the advantage that it works on MacOS (both intel and M1).\nTo build FRIDA mode:\ncd frida_mode gmake For additional instructions and caveats, see frida_mode/README.md.\nIf possible, you should use the persistent mode, see instrumentation/README.persistent_mode.md. The mode is approximately 2-5x slower than compile-time instrumentation, and is less conducive to parallelization. But for binary-only fuzzing, it gives a huge speed improvement if it is possible to use.\nIf you want to fuzz a binary-only library, then you can fuzz it with frida-gum via frida_mode/. You will have to write a harness to call the target function in the library, use afl-frida.c as a template.\nYou can also perform remote fuzzing with frida, e.g., if you want to fuzz on iPhone or Android devices, for this you can use https://github.com/ttdennis/fpicker/ as an intermediate that uses AFL++ for fuzzing.\nIf you like to code a customized fuzzer without much work, we highly recommend to check out our sister project libafl which supports Frida, too: https://github.com/AFLplusplus/LibAFL. Working examples already exist :-)\nNyx mode Nyx is a full system emulation fuzzing environment with snapshot support that is built upon KVM and QEMU. It is only available on Linux and currently restricted to x86_x64.\nFor binary-only fuzzing a special 5.10 kernel is required.\nSee nyx_mode/README.md.\nUnicorn Unicorn is a fork of QEMU. The instrumentation is, therefore, very similar. In contrast to QEMU, Unicorn does not offer a full system or even userland emulation. Runtime environment and/or loaders have to be written from scratch, if needed. On top, block chaining has been removed. This means the speed boost introduced in the patched QEMU Mode of AFL++ cannot be ported over to Unicorn.\nFor non-Linux binaries, you can use AFL++\u0026rsquo;s unicorn_mode which can emulate anything you want - for the price of speed and user written scripts.\nTo build unicorn_mode:\ncd unicorn_mode ./build_unicorn_support.sh For further information, check out unicorn_mode/README.md.\nShared libraries If the goal is to fuzz a dynamic library, then there are two options available. For both, you need to write a small harness that loads and calls the library. Then you fuzz this with either FRIDA mode or QEMU mode and either use AFL_INST_LIBS=1 or AFL_QEMU/FRIDA_INST_RANGES.\nAnother, less precise and slower option is to fuzz it with utils/afl_untracer/ and use afl-untracer.c as a template. It is slower than FRIDA mode.\nFor more information, see utils/afl_untracer/README.md.\nCoresight Coresight is ARM\u0026rsquo;s answer to Intel\u0026rsquo;s PT. With AFL++ v3.15, there is a coresight tracer implementation available in coresight_mode/ which is faster than QEMU, however, cannot run in parallel. Currently, only one process can be traced, it is WIP.\nFore more information, see coresight_mode/README.md.\nBinary rewriters An alternative solution are binary rewriters. They are faster than the solutions native to AFL++ but don\u0026rsquo;t always work.\nZAFL ZAFL is a static rewriting platform supporting x86-64 C/C++, stripped/unstripped, and PIE/non-PIE binaries. Beyond conventional instrumentation, ZAFL\u0026rsquo;s API enables transformation passes (e.g., laf-Intel, context sensitivity, InsTrim, etc.).\nIts baseline instrumentation speed typically averages 90-95% of afl-clang-fast\u0026rsquo;s.\nhttps://git.zephyr-software.com/opensrc/zafl\nRetroWrite RetroWrite is a static binary rewriter that can be combined with AFL++. If you have an x86_64 binary that still has its symbols (i.e., not stripped binary), is compiled with position independent code (PIC/PIE), and does not contain C++ exceptions, then the RetroWrite solution might be for you. It decompiles to ASM files which can then be instrumented with afl-gcc.\nBinaries that are statically instrumented for fuzzing using RetroWrite are close in performance to compiler-instrumented binaries and outperform the QEMU-based instrumentation.\nhttps://github.com/HexHive/retrowrite\nDyninst Dyninst is a binary instrumentation framework similar to Pintool and DynamoRIO. However, whereas Pintool and DynamoRIO work at runtime, Dyninst instruments the target at load time and then let it run - or save the binary with the changes. This is great for some things, e.g., fuzzing, and not so effective for others, e.g., malware analysis.\nSo, what you can do with Dyninst is taking every basic block and putting AFL++\u0026rsquo;s instrumentation code in there - and then save the binary. Afterwards, just fuzz the newly saved target binary with afl-fuzz. Sounds great? It is. The issue though - it is a non-trivial problem to insert instructions, which change addresses in the process space, so that everything is still working afterwards. Hence, more often than not binaries crash when they are run.\nThe speed decrease is about 15-35%, depending on the optimization options used with afl-dyninst.\nhttps://github.com/vanhauser-thc/afl-dyninst\nMcsema Theoretically, you can also decompile to llvm IR with mcsema, and then use llvm_mode to instrument the binary. Good luck with that.\nhttps://github.com/lifting-bits/mcsema\nBinary tracers Pintool \u0026amp; DynamoRIO Pintool and DynamoRIO are dynamic instrumentation engines. They can be used for getting basic block information at runtime. Pintool is only available for Intel x32/x64 on Linux, Mac OS, and Windows, whereas DynamoRIO is additionally available for ARM and AARCH64. DynamoRIO is also 10x faster than Pintool.\nThe big issue with DynamoRIO (and therefore Pintool, too) is speed. DynamoRIO has a speed decrease of 98-99%, Pintool has a speed decrease of 99.5%.\nHence, DynamoRIO is the option to go for if everything else fails and Pintool only if DynamoRIO fails, too.\nDynamoRIO solutions:\n https://github.com/vanhauser-thc/afl-dynamorio https://github.com/mxmssh/drAFL https://github.com/googleprojectzero/winafl/ \u0026lt;= very good but windows only Pintool solutions:\n https://github.com/vanhauser-thc/afl-pin https://github.com/mothran/aflpin https://github.com/spinpx/afl_pin_mode \u0026lt;= only old Pintool version supported Intel PT If you have a newer Intel CPU, you can make use of Intel\u0026rsquo;s processor trace. The big issue with Intel\u0026rsquo;s PT is the small buffer size and the complex encoding of the debug information collected through PT. This makes the decoding very CPU intensive and hence slow. As a result, the overall speed decrease is about 70-90% (depending on the implementation and other factors).\nThere are two AFL intel-pt implementations:\n https://github.com/junxzm1990/afl-pt =\u0026gt; This needs Ubuntu 14.04.05 without any updates and the 4.4 kernel.\n https://github.com/hunter-ht-2018/ptfuzzer =\u0026gt; This needs a 4.14 or 4.15 kernel. The \u0026ldquo;nopti\u0026rdquo; kernel boot option must be used. This one is faster than the other.\n Note that there is also honggfuzz: https://github.com/google/honggfuzz. But its IPT performance is just 6%!\nNon-AFL++ solutions There are many binary-only fuzzing frameworks. Some are great for CTFs but don\u0026rsquo;t work with large binaries, others are very slow but have good path discovery, some are very hard to set-up\u0026hellip;\n Jackalope: https://github.com/googleprojectzero/Jackalope Manticore: https://github.com/trailofbits/manticore QSYM: https://github.com/sslab-gatech/qsym S2E: https://github.com/S2E TinyInst: https://github.com/googleprojectzero/TinyInst (Mac/Windows only) \u0026hellip; please send me any missing that are good Closing words That\u0026rsquo;s it! News, corrections, updates? Send an email to [email protected].\n"}),a.add({id:17,href:'/docs/fuzzing_in_depth/',title:"Fuzzing in Depth",content:"Fuzzing with AFL++ The following describes how to fuzz with a target if source code is available. If you have a binary-only target, go to fuzzing_binary-only_targets.md.\nFuzzing source code is a three-step process:\n Compile the target with a special compiler that prepares the target to be fuzzed efficiently. This step is called \u0026ldquo;instrumenting a target\u0026rdquo;. Prepare the fuzzing by selecting and optimizing the input corpus for the target. Perform the fuzzing of the target by randomly mutating input and assessing if that input was processed on a new path in the target binary. 0. Common sense risks Please keep in mind that, similarly to many other computationally-intensive tasks, fuzzing may put a strain on your hardware and on the OS. In particular:\n Your CPU will run hot and will need adequate cooling. In most cases, if cooling is insufficient or stops working properly, CPU speeds will be automatically throttled. That said, especially when fuzzing on less suitable hardware (laptops, smartphones, etc.), it\u0026rsquo;s not entirely impossible for something to blow up.\n Targeted programs may end up erratically grabbing gigabytes of memory or filling up disk space with junk files. AFL++ tries to enforce basic memory limits, but can\u0026rsquo;t prevent each and every possible mishap. The bottom line is that you shouldn\u0026rsquo;t be fuzzing on systems where the prospect of data loss is not an acceptable risk.\n Fuzzing involves billions of reads and writes to the filesystem. On modern systems, this will be usually heavily cached, resulting in fairly modest \u0026ldquo;physical\u0026rdquo; I/O - but there are many factors that may alter this equation. It is your responsibility to monitor for potential trouble; with very heavy I/O, the lifespan of many HDDs and SSDs may be reduced.\nA good way to monitor disk I/O on Linux is the iostat command:\n$ iostat -d 3 -x -k [...optional disk ID...] Using the AFL_TMPDIR environment variable and a RAM-disk, you can have the heavy writing done in RAM to prevent the aforementioned wear and tear. For example, the following line will run a Docker container with all this preset:\n# docker run -ti --mount type=tmpfs,destination=/ramdisk -e AFL_TMPDIR=/ramdisk aflplusplus/aflplusplus 1. Instrumenting the target a) Selecting the best AFL++ compiler for instrumenting the target AFL++ comes with a central compiler afl-cc that incorporates various different kinds of compiler targets and instrumentation options. The following evaluation flow will help you to select the best possible.\nIt is highly recommended to have the newest llvm version possible installed, anything below 9 is not recommended.\n+--------------------------------+ | clang/clang++ 11+ is available | --\u0026gt; use LTO mode (afl-clang-lto/afl-clang-lto++) +--------------------------------+ see [instrumentation/README.lto.md](instrumentation/README.lto.md) | | if not, or if the target fails with LTO afl-clang-lto/++ | v +---------------------------------+ | clang/clang++ 3.8+ is available | --\u0026gt; use LLVM mode (afl-clang-fast/afl-clang-fast++) +---------------------------------+ see [instrumentation/README.llvm.md](instrumentation/README.llvm.md) | | if not, or if the target fails with LLVM afl-clang-fast/++ | v +--------------------------------+ | gcc 5+ is available | -\u0026gt; use GCC_PLUGIN mode (afl-gcc-fast/afl-g++-fast) +--------------------------------+ see [instrumentation/README.gcc_plugin.md](instrumentation/README.gcc_plugin.md) and [instrumentation/README.instrument_list.md](instrumentation/README.instrument_list.md) | | if not, or if you do not have a gcc with plugin support | v use GCC mode (afl-gcc/afl-g++) (or afl-clang/afl-clang++ for clang) Clickable README links for the chosen compiler:\n LTO mode - afl-clang-lto LLVM mode - afl-clang-fast GCC_PLUGIN mode - afl-gcc-fast GCC/CLANG modes (afl-gcc/afl-clang) have no README as they have no own features You can select the mode for the afl-cc compiler by one of the following methods:\n Using a symlink to afl-cc: afl-gcc, afl-g++, afl-clang, afl-clang++, afl-clang-fast, afl-clang-fast++, afl-clang-lto, afl-clang-lto++, afl-gcc-fast, afl-g++-fast (recommended!). Using the environment variable AFL_CC_COMPILER with MODE. Passing \u0026ndash;afl-MODE command line options to the compiler via CFLAGS/CXXFLAGS/CPPFLAGS. MODE can be one of the following:\n LTO (afl-clang-lto*) LLVM (afl-clang-fast*) GCC_PLUGIN (afl-g*-fast) or GCC (afl-gcc/afl-g++) CLANG(afl-clang/afl-clang++) Because no AFL++ specific command-line options are accepted (beside the \u0026ndash;afl-MODE command), the compile-time tools make fairly broad use of environment variables, which can be listed with afl-cc -hh or looked up in env_variables.md.\nb) Selecting instrumentation options If you instrument with LTO mode (afl-clang-fast/afl-clang-lto), the following options are available:\n Splitting integer, string, float, and switch comparisons so AFL++ can easier solve these. This is an important option if you do not have a very good and large input corpus. This technique is called laf-intel or COMPCOV. To use this, set the following environment variable before compiling the target: export AFL_LLVM_LAF_ALL=1. You can read more about this in instrumentation/README.laf-intel.md. A different technique (and usually a better one than laf-intel) is to instrument the target so that any compare values in the target are sent to AFL++ which then tries to put these values into the fuzzing data at different locations. This technique is very fast and good - if the target does not transform input data before comparison. Therefore, this technique is called input to state or redqueen. If you want to use this technique, then you have to compile the target twice, once specifically with/for this mode by setting AFL_LLVM_CMPLOG=1, and pass this binary to afl-fuzz via the -c parameter. Note that you can compile also just a cmplog binary and use that for both, however, there will be a performance penalty. You can read more about this in instrumentation/README.cmplog.md. If you use LTO, LLVM, or GCC_PLUGIN mode (afl-clang-fast/afl-clang-lto/afl-gcc-fast), you have the option to selectively instrument parts of the target that you are interested in. For afl-clang-fast, you have to use an llvm version newer than 10.0.0 or a mode other than DEFAULT/PCGUARD.\nThis step can be done either by explicitly including parts to be instrumented or by explicitly excluding parts from instrumentation.\n To instrument only specified parts, create a file (e.g., allowlist.txt) with all the filenames and/or functions of the source code that should be instrumented and then:\n Just put one filename or function (prefixing with fun: ) per line (no directory information necessary for filenames) in the file allowlist.txt.\nExample:\nfoo.cpp # will match foo/foo.cpp, bar/foo.cpp, barfoo.cpp etc. fun: foo_func # will match the function foo_func Set export AFL_LLVM_ALLOWLIST=allowlist.txt to enable selective positive instrumentation.\n Similarly to exclude specified parts from instrumentation, create a file (e.g., denylist.txt) with all the filenames of the source code that should be skipped during instrumentation and then:\n Same as above. Just put one filename or function per line in the file denylist.txt.\n Set export AFL_LLVM_DENYLIST=denylist.txt to enable selective negative instrumentation.\n NOTE: During optimization functions might be inlined and then would not match the list! See instrumentation/README.instrument_list.md.\nThere are many more options and modes available, however, these are most of the time less effective. See:\n instrumentation/README.llvm.md#6) AFL++ Context Sensitive Branch Coverage instrumentation/README.llvm.md#7) AFL++ N-Gram Branch Coverage AFL++ performs \u0026ldquo;never zero\u0026rdquo; counting in its bitmap. You can read more about this here:\n instrumentation/README.llvm.md#8-neverzero-counters c) Selecting sanitizers It is possible to use sanitizers when instrumenting targets for fuzzing, which allows you to find bugs that would not necessarily result in a crash.\nNote that sanitizers have a huge impact on CPU (= less executions per second) and RAM usage. Also, you should only run one afl-fuzz instance per sanitizer type. This is enough because e.g. a use-after-free bug will be picked up by ASAN (address sanitizer) anyway after syncing test cases from other fuzzing instances, so running more than one address sanitized target would be a waste.\nThe following sanitizers have built-in support in AFL++:\n ASAN = Address SANitizer, finds memory corruption vulnerabilities like use-after-free, NULL pointer dereference, buffer overruns, etc. Enabled with export AFL_USE_ASAN=1 before compiling. MSAN = Memory SANitizer, finds read accesses to uninitialized memory, e.g., a local variable that is defined and read before it is even set. Enabled with export AFL_USE_MSAN=1 before compiling. UBSAN = Undefined Behavior SANitizer, finds instances where - by the C and C++ standards - undefined behavior happens, e.g., adding two signed integers where the result is larger than what a signed integer can hold. Enabled with export AFL_USE_UBSAN=1 before compiling. CFISAN = Control Flow Integrity SANitizer, finds instances where the control flow is found to be illegal. Originally this was rather to prevent return oriented programming (ROP) exploit chains from functioning. In fuzzing, this is mostly reduced to detecting type confusion vulnerabilities - which is, however, one of the most important and dangerous C++ memory corruption classes! Enabled with export AFL_USE_CFISAN=1 before compiling. TSAN = Thread SANitizer, finds thread race conditions. Enabled with export AFL_USE_TSAN=1 before compiling. LSAN = Leak SANitizer, finds memory leaks in a program. This is not really a security issue, but for developers this can be very valuable. Note that unlike the other sanitizers above this needs __AFL_LEAK_CHECK(); added to all areas of the target source code where you find a leak check necessary! Enabled with export AFL_USE_LSAN=1 before compiling. To ignore the memory-leaking check for certain allocations, __AFL_LSAN_OFF(); can be used before memory is allocated, and __AFL_LSAN_ON(); afterwards. Memory allocated between these two macros will not be checked for memory leaks. It is possible to further modify the behavior of the sanitizers at run-time by setting ASAN_OPTIONS=..., LSAN_OPTIONS etc. - the available parameters can be looked up in the sanitizer documentation of llvm/clang. afl-fuzz, however, requires some specific parameters important for fuzzing to be set. If you want to set your own, it might bail and report what it is missing.\nNote that some sanitizers cannot be used together, e.g., ASAN and MSAN, and others often cannot work together because of target weirdness, e.g., ASAN and CFISAN. You might need to experiment which sanitizers you can combine in a target (which means more instances can be run without a sanitized target, which is more effective).\nd) Modifying the target If the target has features that make fuzzing more difficult, e.g., checksums, HMAC, etc., then modify the source code so that checks for these values are removed. This can even be done safely for source code used in operational products by eliminating these checks within these AFL++ specific blocks:\n#ifdef FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION // say that the checksum or HMAC was fine - or whatever is required // to eliminate the need for the fuzzer to guess the right checksum return 0; #endif All AFL++ compilers will set this preprocessor definition automatically.\ne) Instrumenting the target In this step, the target source code is compiled so that it can be fuzzed.\nBasically, you have to tell the target build system that the selected AFL++ compiler is used. Also - if possible - you should always configure the build system in such way that the target is compiled statically and not dynamically. How to do this is described below.\nThe #1 rule when instrumenting a target is: avoid instrumenting shared libraries at all cost. You would need to set LD_LIBRARY_PATH to point to these, you could accidentally type \u0026ldquo;make install\u0026rdquo; and install them system wide - so don\u0026rsquo;t. Really don\u0026rsquo;t. Always compile libraries you want to have instrumented as static and link these to the target program!\nThen build the target. (Usually with make.)\nNOTES\n Sometimes configure and build systems are fickle and do not like stderr output (and think this means a test failure) - which is something AFL++ likes to do to show statistics. It is recommended to disable AFL++ instrumentation reporting via export AFL_QUIET=1.\n Sometimes configure and build systems error on warnings - these should be disabled (e.g., --disable-werror for some configure scripts).\n In case the configure/build system complains about AFL++\u0026rsquo;s compiler and aborts, then set export AFL_NOOPT=1 which will then just behave like the real compiler and run the configure step separately. For building the target afterwards this option has to be unset again!\n configure For configure build systems, this is usually done by:\nCC=afl-clang-fast CXX=afl-clang-fast++ ./configure --disable-shared Note that if you are using the (better) afl-clang-lto compiler, you also have to set AR to llvm-ar[-VERSION] and RANLIB to llvm-ranlib[-VERSION] - as is described in instrumentation/README.lto.md.\nCMake For CMake build systems, this is usually done by:\nmkdir build; cd build; cmake -DCMAKE_C_COMPILER=afl-cc -DCMAKE_CXX_COMPILER=afl-c++ .. Note that if you are using the (better) afl-clang-lto compiler you also have to set AR to llvm-ar[-VERSION] and RANLIB to llvm-ranlib[-VERSION] - as is described in instrumentation/README.lto.md.\nMeson Build System For the Meson Build System, you have to set the AFL++ compiler with the very first command!\nCC=afl-cc CXX=afl-c++ meson Other build systems or if configure/cmake didn\u0026rsquo;t work Sometimes cmake and configure do not pick up the AFL++ compiler or the RANLIB/AR that is needed - because this was just not foreseen by the developer of the target. Or they have non-standard options. Figure out if there is a non-standard way to set this, otherwise set up the build normally and edit the generated build environment afterwards manually to point it to the right compiler (and/or RANLIB and AR).\nf) Better instrumentation If you just fuzz a target program as-is, you are wasting a great opportunity for much more fuzzing speed.\nThis variant requires the usage of afl-clang-lto, afl-clang-fast or afl-gcc-fast.\nIt is the so-called persistent mode, which is much, much faster but requires that you code a source file that is specifically calling the target functions that you want to fuzz, plus a few specific AFL++ functions around it. See instrumentation/README.persistent_mode.md for details.\nBasically, if you do not fuzz a target in persistent mode, then you are just doing it for a hobby and not professionally :-).\ng) libfuzzer fuzzer harnesses with LLVMFuzzerTestOneInput() libfuzzer LLVMFuzzerTestOneInput() harnesses are the defacto standard for fuzzing, and they can be used with AFL++ (and honggfuzz) as well!\nCompiling them is as simple as:\nafl-clang-fast++ -fsanitize=fuzzer -o harness harness.cpp targetlib.a You can even use advanced libfuzzer features like FuzzedDataProvider, LLVMFuzzerInitialize() etc. and they will work!\nThe generated binary is fuzzed with afl-fuzz like any other fuzz target.\nBonus: the target is already optimized for fuzzing due to persistent mode and shared-memory test cases and hence gives you the fastest speed possible.\nFor more information, see utils/aflpp_driver/README.md.\n2. Preparing the fuzzing campaign As you fuzz the target with mutated input, having as diverse inputs for the target as possible improves the efficiency a lot.\na) Collecting inputs To operate correctly, the fuzzer requires one or more starting files that contain a good example of the input data normally expected by the targeted application.\nTry to gather valid inputs for the target from wherever you can. E.g., if it is the PNG picture format, try to find as many PNG files as possible, e.g., from reported bugs, test suites, random downloads from the internet, unit test case data - from all kind of PNG software.\nIf the input format is not known, you can also modify a target program to write normal data it receives and processes to a file and use these.\nYou can find many good examples of starting files in the testcases/ subdirectory that comes with this tool.\nb) Making the input corpus unique Use the AFL++ tool afl-cmin to remove inputs from the corpus that do not produce a new path/coverage in the target:\n Put all files from step a into one directory, e.g., INPUTS. Run afl-cmin: If the target program is to be called by fuzzing as bin/target INPUTFILE, replace the INPUTFILE argument that the target program would read from with @@:\nafl-cmin -i INPUTS -o INPUTS_UNIQUE -- bin/target -someopt @@ If the target reads from stdin (standard input) instead, just omit the @@ as this is the default:\nafl-cmin -i INPUTS -o INPUTS_UNIQUE -- bin/target -someopt This step is highly recommended, because afterwards the testcase corpus is not bloated with duplicates anymore, which would slow down the fuzzing progress!\nc) Minimizing all corpus files The shorter the input files that still traverse the same path within the target, the better the fuzzing will be. This minimization is done with afl-tmin, however, it is a long process as this has to be done for every file:\nmkdir input cd INPUTS_UNIQUE for i in *; do afl-tmin -i \u0026quot;$i\u0026quot; -o \u0026quot;../input/$i\u0026quot; -- bin/target -someopt @@ done This step can also be parallelized, e.g., with parallel.\nNote that this step is rather optional though.\nDone! The INPUTS_UNIQUE/ directory from step b - or even better the directory input/ if you minimized the corpus in step c - is the resulting input corpus directory to be used in fuzzing! :-)\n3. Fuzzing the target In this final step, fuzz the target. There are not that many important options to run the target - unless you want to use many CPU cores/threads for the fuzzing, which will make the fuzzing much more useful.\nIf you just use one instance for fuzzing, then you are fuzzing just for fun and not seriously :-)\na) Running afl-fuzz Before you do even a test run of afl-fuzz, execute sudo afl-system-config (on the host if you execute afl-fuzz in a Docker container). This reconfigures the system for optimal speed - which afl-fuzz checks and bails otherwise. Set export AFL_SKIP_CPUFREQ=1 for afl-fuzz to skip this check if you cannot run afl-system-config with root privileges on the host for whatever reason.\nNote:\n There is also sudo afl-persistent-config which sets additional permanent boot options for a much better fuzzing performance. Both scripts improve your fuzzing performance but also decrease your system protection against attacks! So set strong firewall rules and only expose SSH as a network service if you use these (which is highly recommended). If you have an input corpus from step 2, then specify this directory with the -i option. Otherwise, create a new directory and create a file with any content as test data in there.\nIf you do not want anything special, the defaults are already usually best, hence all you need is to specify the seed input directory with the result of step 2a) Collecting inputs:\nafl-fuzz -i input -o output -- bin/target -someopt @@ Note that the directory specified with -o will be created if it does not exist.\nIt can be valuable to run afl-fuzz in a screen or tmux shell so you can log off, or afl-fuzz is not aborted if you are running it in a remote ssh session where the connection fails in between. Only do that though once you have verified that your fuzzing setup works! Run it like screen -dmS afl-main -- afl-fuzz -M main-$HOSTNAME -i ... and it will start away in a screen session. To enter this session, type screen -r afl-main. You see - it makes sense to name the screen session same as the afl-fuzz -M/-S naming :-) For more information on screen or tmux, check their documentation.\nIf you need to stop and re-start the fuzzing, use the same command line options (or even change them by selecting a different power schedule or another mutation mode!) and switch the input directory with a dash (-):\nafl-fuzz -i - -o output -- bin/target -someopt @@ Adding a dictionary is helpful. You have to following options:\n See the directory dictionaries/, if something is already included for your data format, and tell afl-fuzz to load that dictionary by adding -x dictionaries/FORMAT.dict. With afl-clang-lto, you have an autodictionary generation for which you need to do nothing except to use afl-clang-lto as the compiler. With afl-clang-fast, you can set AFL_LLVM_DICT2FILE=/full/path/to/new/file.dic to automatically generate a dictionary during target compilation. You also have the option to generate a dictionary yourself during an independent run of the target, see utils/libtokencap/README.md. Finally, you can also write a dictionary file manually, of course. afl-fuzz has a variety of options that help to workaround target quirks like very specific locations for the input file (-f), performing deterministic fuzzing (-D) and many more. Check out afl-fuzz -h.\nWe highly recommend that you set a memory limit for running the target with -m which defines the maximum memory in MB. This prevents a potential out-of-memory problem for your system plus helps you detect missing malloc() failure handling in the target. Play around with various -m values until you find one that safely works for all your input seeds (if you have good ones and then double or quadruple that).\nBy default, afl-fuzz never stops fuzzing. To terminate AFL++, press Control-C or send a signal SIGINT. You can limit the number of executions or approximate runtime in seconds with options also.\nWhen you start afl-fuzz, you will see a user interface that shows what the status is:\nAll labels are explained in afl-fuzz_approach.md#understanding-the-status-screen.\nb) Keeping memory use and timeouts in check Memory limits are not enforced by afl-fuzz by default and the system may run out of memory. You can decrease the memory with the -m option, the value is in MB. If this is too small for the target, you can usually see this by afl-fuzz bailing with the message that it could not connect to the forkserver.\nConsider setting low values for -m and -t.\nFor programs that are nominally very fast, but get sluggish for some inputs, you can also try setting -t values that are more punishing than what afl-fuzz dares to use on its own. On fast and idle machines, going down to -t 5 may be a viable plan.\nThe -m parameter is worth looking at, too. Some programs can end up spending a fair amount of time allocating and initializing megabytes of memory when presented with pathological inputs. Low -m values can make them give up sooner and not waste CPU time.\nc) Using multiple cores If you want to seriously fuzz, then use as many cores/threads as possible to fuzz your target.\nOn the same machine - due to the design of how AFL++ works - there is a maximum number of CPU cores/threads that are useful, use more and the overall performance degrades instead. This value depends on the target, and the limit is between 32 and 64 cores per machine.\nIf you have the RAM, it is highly recommended run the instances with a caching of the test cases. Depending on the average test case size (and those found during fuzzing) and their number, a value between 50-500MB is recommended. You can set the cache size (in MB) by setting the environment variable AFL_TESTCACHE_SIZE.\nThere should be one main fuzzer (-M main-$HOSTNAME option) and as many secondary fuzzers (e.g., -S variant1) as you have cores that you use. Every -M/-S entry needs a unique name (that can be whatever), however, the same -o output directory location has to be used for all instances.\nFor every secondary fuzzer there should be a variation, e.g.:\n one should fuzz the target that was compiled differently: with sanitizers activated (export AFL_USE_ASAN=1 ; export AFL_USE_UBSAN=1 ; export AFL_USE_CFISAN=1) one or two should fuzz the target with CMPLOG/redqueen (see above), at least one cmplog instance should follow transformations (-l AT) one to three fuzzers should fuzz a target compiled with laf-intel/COMPCOV (see above). Important note: If you run more than one laf-intel/COMPCOV fuzzer and you want them to share their intermediate results, the main fuzzer (-M) must be one of them! (Although this is not really recommended.) All other secondaries should be used like this:\n a quarter to a third with the MOpt mutator enabled: -L 0 run with a different power schedule, recommended are: fast (default), explore, coe, lin, quad, exploit, and rare which you can set with the -p option, e.g., -p explore. See the FAQ for details. a few instances should use the old queue cycling with -Z Also, it is recommended to set export AFL_IMPORT_FIRST=1 to load test cases from other fuzzers in the campaign first.\nIf you have a large corpus, a corpus from a previous run or are fuzzing in a CI, then also set export AFL_CMPLOG_ONLY_NEW=1 and export AFL_FAST_CAL=1.\nYou can also use different fuzzers. If you are using AFL spinoffs or AFL conforming fuzzers, then just use the same -o directory and give it a unique -S name. Examples are:\n Fuzzolic symcc Eclipser AFLsmart FairFuzz Neuzz Angora A long list can be found at https://github.com/Microsvuln/Awesome-AFL.\nHowever, you can also sync AFL++ with honggfuzz, libfuzzer with -entropic=1, etc. Just show the main fuzzer (-M) with the -F option where the queue/work directory of a different fuzzer is, e.g., -F /src/target/honggfuzz. Using honggfuzz (with -n 1 or -n 2) and libfuzzer in parallel is highly recommended!\nd) Using multiple machines for fuzzing Maybe you have more than one machine you want to fuzz the same target on. Start the afl-fuzz (and perhaps libfuzzer, honggfuzz, \u0026hellip;) orchestra as you like, just ensure that your have one and only one -M instance per server, and that its name is unique, hence the recommendation for -M main-$HOSTNAME.\nNow there are three strategies on how you can sync between the servers:\n never: sounds weird, but this makes every server an island and has the chance that each follow different paths into the target. You can make this even more interesting by even giving different seeds to each server. regularly (~4h): this ensures that all fuzzing campaigns on the servers \u0026ldquo;see\u0026rdquo; the same thing. It is like fuzzing on a huge server. in intervals of 1/10th of the overall expected runtime of the fuzzing you sync. This tries a bit to combine both. Have some individuality of the paths each campaign on a server explores, on the other hand if one gets stuck where another found progress this is handed over making it unstuck. The syncing process itself is very simple. As the -M main-$HOSTNAME instance syncs to all -S secondaries as well as to other fuzzers, you have to copy only this directory to the other machines.\nLets say all servers have the -o out directory in /target/foo/out, and you created a file servers.txt which contains the hostnames of all participating servers, plus you have an ssh key deployed to all of them, then run:\nfor FROM in `cat servers.txt`; do for TO in `cat servers.txt`; do rsync -rlpogtz --rsh=ssh $FROM:/target/foo/out/main-$FROM $TO:target/foo/out/ done done You can run this manually, per cron job - as you need it. There is a more complex and configurable script in utils/distributed_fuzzing.\ne) The status of the fuzz campaign AFL++ comes with the afl-whatsup script to show the status of the fuzzing campaign.\nJust supply the directory that afl-fuzz is given with the -o option and you will see a detailed status of every fuzzer in that campaign plus a summary.\nTo have only the summary, use the -s switch, e.g., afl-whatsup -s out/.\nIf you have multiple servers, then use the command after a sync or you have to execute this script per server.\nAnother tool to inspect the current state and history of a specific instance is afl-plot, which generates an index.html file and graphs that show how the fuzzing instance is performing. The syntax is afl-plot instance_dir web_dir, e.g., afl-plot out/default /srv/www/htdocs/plot.\nf) Stopping fuzzing, restarting fuzzing, adding new seeds To stop an afl-fuzz run, press Control-C.\nTo restart an afl-fuzz run, just reuse the same command line but replace the -i directory with -i - or set AFL_AUTORESUME=1.\nIf you want to add new seeds to a fuzzing campaign, you can run a temporary fuzzing instance, e.g., when your main fuzzer is using -o out and the new seeds are in newseeds/ directory:\nAFL_BENCH_JUST_ONE=1 AFL_FAST_CAL=1 afl-fuzz -i newseeds -o out -S newseeds -- ./target g) Checking the coverage of the fuzzing The corpus count value is a bad indicator for checking how good the coverage is.\nA better indicator - if you use default llvm instrumentation with at least version 9 - is to use afl-showmap with the collect coverage option -C on the output directory:\n$ afl-showmap -C -i out -o /dev/null -- ./target -params @@ ... [*] Using SHARED MEMORY FUZZING feature. [*] Target map size: 9960 [+] Processed 7849 input files. [+] Captured 4331 tuples (highest value 255, total values 67130596) in '/dev/nul l'. [+] A coverage of 4331 edges were achieved out of 9960 existing (43.48%) with 7849 input files. It is even better to check out the exact lines of code that have been reached - and which have not been found so far.\nAn \u0026ldquo;easy\u0026rdquo; helper script for this is https://github.com/vanhauser-thc/afl-cov, just follow the README of that separate project.\nIf you see that an important area or a feature has not been covered so far, then try to find an input that is able to reach that and start a new secondary in that fuzzing campaign with that seed as input, let it run for a few minutes, then terminate it. The main node will pick it up and make it available to the other secondary nodes over time. Set export AFL_NO_AFFINITY=1 or export AFL_TRY_AFFINITY=1 if you have no free core.\nNote that in nearly all cases you can never reach full coverage. A lot of functionality is usually dependent on exclusive options that would need individual fuzzing campaigns each with one of these options set. E.g., if you fuzz a library to convert image formats and your target is the png to tiff API, then you will not touch any of the other library APIs and features.\nh) How long to fuzz a target? This is a difficult question. Basically, if no new path is found for a long time (e.g., for a day or a week), then you can expect that your fuzzing won\u0026rsquo;t be fruitful anymore. However, often this just means that you should switch out secondaries for others, e.g., custom mutator modules, sync to very different fuzzers, etc.\nKeep the queue/ directory (for future fuzzings of the same or similar targets) and use them to seed other good fuzzers like libfuzzer with the -entropic switch or honggfuzz.\ni) Improve the speed! Use persistent mode (x2-x20 speed increase). If you do not use shmem persistent mode, use AFL_TMPDIR to point the input file on a tempfs location, see env_variables.md. Linux: Improve kernel performance: modify /etc/default/grub, set GRUB_CMDLINE_LINUX_DEFAULT=\u0026quot;ibpb=off ibrs=off kpti=off l1tf=off mds=off mitigations=off no_stf_barrier noibpb noibrs nopcid nopti nospec_store_bypass_disable nospectre_v1 nospectre_v2 pcid=off pti=off spec_store_bypass_disable=off spectre_v2=off stf_barrier=off\u0026quot;; then update-grub and reboot (warning: makes the system more insecure) - you can also just run sudo afl-persistent-config. Linux: Running on an ext2 filesystem with noatime mount option will be a bit faster than on any other journaling filesystem. Use your cores! See 3c) Using multiple cores. Run sudo afl-system-config before starting the first afl-fuzz instance after a reboot. j) Going beyond crashes Fuzzing is a wonderful and underutilized technique for discovering non-crashing design and implementation errors, too. Quite a few interesting bugs have been found by modifying the target programs to call abort() when say:\n Two bignum libraries produce different outputs when given the same fuzzer-generated input.\n An image library produces different outputs when asked to decode the same input image several times in a row.\n A serialization/deserialization library fails to produce stable outputs when iteratively serializing and deserializing fuzzer-supplied data.\n A compression library produces an output inconsistent with the input file when asked to compress and then decompress a particular blob.\n Implementing these or similar sanity checks usually takes very little time; if you are the maintainer of a particular package, you can make this code conditional with #ifdef FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION (a flag also shared with libfuzzer and honggfuzz) or #ifdef __AFL_COMPILER (this one is just for AFL++).\nk) Known limitations \u0026amp; areas for improvement Here are some of the most important caveats for AFL++:\n AFL++ detects faults by checking for the first spawned process dying due to a signal (SIGSEGV, SIGABRT, etc.). Programs that install custom handlers for these signals may need to have the relevant code commented out. In the same vein, faults in child processes spawned by the fuzzed target may evade detection unless you manually add some code to catch that.\n As with any other brute-force tool, the fuzzer offers limited coverage if encryption, checksums, cryptographic signatures, or compression are used to wholly wrap the actual data format to be tested.\nTo work around this, you can comment out the relevant checks (see utils/libpng_no_checksum/ for inspiration); if this is not possible, you can also write a postprocessor, one of the hooks of custom mutators. See custom_mutators.md on how to use AFL_CUSTOM_MUTATOR_LIBRARY.\n There are some unfortunate trade-offs with ASAN and 64-bit binaries. This isn\u0026rsquo;t due to any specific fault of afl-fuzz.\n There is no direct support for fuzzing network services, background daemons, or interactive apps that require UI interaction to work. You may need to make simple code changes to make them behave in a more traditional way. Preeny may offer a relatively simple option, too - see: https://github.com/zardus/preeny\nSome useful tips for modifying network-based services can be also found at: https://www.fastly.com/blog/how-to-fuzz-server-american-fuzzy-lop\n Occasionally, sentient machines rise against their creators. If this happens to you, please consult https://lcamtuf.coredump.cx/prep/.\n Beyond this, see INSTALL.md for platform-specific tips.\n4. Triaging crashes The coverage-based grouping of crashes usually produces a small data set that can be quickly triaged manually or with a very simple GDB or Valgrind script. Every crash is also traceable to its parent non-crashing test case in the queue, making it easier to diagnose faults.\nHaving said that, it\u0026rsquo;s important to acknowledge that some fuzzing crashes can be difficult to quickly evaluate for exploitability without a lot of debugging and code analysis work. To assist with this task, afl-fuzz supports a very unique \u0026ldquo;crash exploration\u0026rdquo; mode enabled with the -C flag.\nIn this mode, the fuzzer takes one or more crashing test cases as the input and uses its feedback-driven fuzzing strategies to very quickly enumerate all code paths that can be reached in the program while keeping it in the crashing state.\nMutations that do not result in a crash are rejected; so are any changes that do not affect the execution path.\nThe output is a small corpus of files that can be very rapidly examined to see what degree of control the attacker has over the faulting address, or whether it is possible to get past an initial out-of-bounds read - and see what lies beneath.\nOh, one more thing: for test case minimization, give afl-tmin a try. The tool can be operated in a very simple way:\n./afl-tmin -i test_case -o minimized_result -- /path/to/program [...] The tool works with crashing and non-crashing test cases alike. In the crash mode, it will happily accept instrumented and non-instrumented binaries. In the non-crashing mode, the minimizer relies on standard AFL++ instrumentation to make the file simpler without altering the execution path.\nThe minimizer accepts the -m, -t, -f, and @@ syntax in a manner compatible with afl-fuzz.\nAnother tool in AFL++ is the afl-analyze tool. It takes an input file, attempts to sequentially flip bytes and observes the behavior of the tested program. It then color-codes the input based on which sections appear to be critical and which are not; while not bulletproof, it can often offer quick insights into complex file formats.\n5. CI fuzzing Some notes on continuous integration (CI) fuzzing - this fuzzing is different to normal fuzzing campaigns as these are much shorter runnings.\n Always:\n LTO has a much longer compile time which is diametrical to short fuzzing - hence use afl-clang-fast instead. If you compile with CMPLOG, then you can save compilation time and reuse that compiled target with the -c option and as the main fuzz target. This will impact the speed by ~15% though. AFL_FAST_CAL - enables fast calibration, this halves the time the saturated corpus needs to be loaded. AFL_CMPLOG_ONLY_NEW - only perform cmplog on new finds, not the initial corpus as this very likely has been done for them already. Keep the generated corpus, use afl-cmin and reuse it every time! Additionally randomize the AFL++ compilation options, e.g.:\n 40% for AFL_LLVM_CMPLOG 10% for AFL_LLVM_LAF_ALL Also randomize the afl-fuzz runtime options, e.g.:\n 65% for AFL_DISABLE_TRIM 50% use a dictionary generated by AFL_LLVM_DICT2FILE 40% use MOpt (-L 0) 40% for AFL_EXPAND_HAVOC_NOW 20% for old queue processing (-Z) for CMPLOG targets, 60% for -l 2, 40% for -l 3 Do not run any -M modes, just running -S modes is better for CI fuzzing. -M enables old queue handling etc. which is good for a fuzzing campaign but not good for short CI runs.\n How this can look like can, e.g., be seen at AFL++\u0026rsquo;s setup in Google\u0026rsquo;s oss-fuzz and clusterfuzz.\nThe End Check out the FAQ. Maybe it answers your question (that you might not even have known you had ;-) ).\nThis is basically all you need to know to professionally run fuzzing campaigns. If you want to know more, the tons of texts in docs/ will have you covered.\nNote that there are also a lot of tools out there that help fuzzing with AFL++ (some might be deprecated or unsupported), see third_party_tools.md.\n"}),a.add({id:18,href:'/docs/historical_notes/',title:"Historical Notes",content:"Historical notes This doc talks about the rationale of some of the high-level design decisions for American Fuzzy Lop. It\u0026rsquo;s adopted from a discussion with Rob Graham. See README.md for the general instruction manual, and technical_details.md for additional implementation-level insights.\n1) Influences In short, afl-fuzz is inspired chiefly by the work done by Tavis Ormandy back in 2007. Tavis did some very persuasive experiments using gcov block coverage to select optimal test cases out of a large corpus of data, and then using them as a starting point for traditional fuzzing workflows.\n(By \u0026ldquo;persuasive\u0026rdquo;, I mean: netting a significant number of interesting vulnerabilities.)\nIn parallel to this, both Tavis and I were interested in evolutionary fuzzing. Tavis had his experiments, and I was working on a tool called bunny-the-fuzzer, released somewhere in 2007.\nBunny used a generational algorithm not much different from afl-fuzz, but also tried to reason about the relationship between various input bits and the internal state of the program, with hopes of deriving some additional value from that. The reasoning / correlation part was probably in part inspired by other projects done around the same time by Will Drewry and Chris Evans.\nThe state correlation approach sounded very sexy on paper, but ultimately, made the fuzzer complicated, brittle, and cumbersome to use; every other target program would require a tweak or two. Because Bunny didn\u0026rsquo;t fare a whole lot better than less sophisticated brute-force tools, I eventually decided to write it off. You can still find its original documentation at:\nhttps://code.google.com/p/bunny-the-fuzzer/wiki/BunnyDoc\nThere has been a fair amount of independent work, too. Most notably, a few weeks earlier that year, Jared DeMott had a Defcon presentation about a coverage-driven fuzzer that relied on coverage as a fitness function.\nJared\u0026rsquo;s approach was by no means identical to what afl-fuzz does, but it was in the same ballpark. His fuzzer tried to explicitly solve for the maximum coverage with a single input file; in comparison, afl simply selects for cases that do something new (which yields better results - see technical_details.md).\nA few years later, Gabriel Campana released fuzzgrind, a tool that relied purely on Valgrind and a constraint solver to maximize coverage without any brute-force bits; and Microsoft Research folks talked extensively about their still non-public, solver-based SAGE framework.\nIn the past six years or so, I\u0026rsquo;ve also seen a fair number of academic papers that dealt with smart fuzzing (focusing chiefly on symbolic execution) and a couple papers that discussed proof-of-concept applications of genetic algorithms with the same goals in mind. I\u0026rsquo;m unconvinced how practical most of these experiments were; I suspect that many of them suffer from the bunny-the-fuzzer\u0026rsquo;s curse of being cool on paper and in carefully designed experiments, but failing the ultimate test of being able to find new, worthwhile security bugs in otherwise well-fuzzed, real-world software.\nIn some ways, the baseline that the \u0026ldquo;cool\u0026rdquo; solutions have to compete against is a lot more impressive than it may seem, making it difficult for competitors to stand out. For a singular example, check out the work by Gynvael and Mateusz Jurczyk, applying \u0026ldquo;dumb\u0026rdquo; fuzzing to ffmpeg, a prominent and security-critical component of modern browsers and media players:\nhttp://googleonlinesecurity.blogspot.com/2014/01/ffmpeg-and-thousand-fixes.html\nEffortlessly getting comparable results with state-of-the-art symbolic execution in equally complex software still seems fairly unlikely, and hasn\u0026rsquo;t been demonstrated in practice so far.\nBut I digress; ultimately, attribution is hard, and glorying the fundamental concepts behind AFL is probably a waste of time. The devil is very much in the often-overlooked details, which brings us to\u0026hellip;\n2. Design goals for afl-fuzz In short, I believe that the current implementation of afl-fuzz takes care of several itches that seemed impossible to scratch with other tools:\n Speed. It\u0026rsquo;s genuinely hard to compete with brute force when your \u0026ldquo;smart\u0026rdquo; approach is resource-intensive. If your instrumentation makes it 10x more likely to find a bug, but runs 100x slower, your users are getting a bad deal.\nTo avoid starting with a handicap, afl-fuzz is meant to let you fuzz most of the intended targets at roughly their native speed - so even if it doesn\u0026rsquo;t add value, you do not lose much.\nOn top of this, the tool leverages instrumentation to actually reduce the amount of work in a couple of ways: for example, by carefully trimming the corpus or skipping non-functional but non-trimmable regions in the input files.\n Rock-solid reliability. It\u0026rsquo;s hard to compete with brute force if your approach is brittle and fails unexpectedly. Automated testing is attractive because it\u0026rsquo;s simple to use and scalable; anything that goes against these principles is an unwelcome trade-off and means that your tool will be used less often and with less consistent results.\nMost of the approaches based on symbolic execution, taint tracking, or complex syntax-aware instrumentation are currently fairly unreliable with real-world targets. Perhaps more importantly, their failure modes can render them strictly worse than \u0026ldquo;dumb\u0026rdquo; tools, and such degradation can be difficult for less experienced users to notice and correct.\nIn contrast, afl-fuzz is designed to be rock solid, chiefly by keeping it simple. In fact, at its core, it\u0026rsquo;s designed to be just a very good traditional fuzzer with a wide range of interesting, well-researched strategies to go by. The fancy parts just help it focus the effort in places where it matters the most.\n Simplicity. The author of a testing framework is probably the only person who truly understands the impact of all the settings offered by the tool - and who can dial them in just right. Yet, even the most rudimentary fuzzer frameworks often come with countless knobs and fuzzing ratios that need to be guessed by the operator ahead of the time. This can do more harm than good.\nAFL is designed to avoid this as much as possible. The three knobs you can play with are the output file, the memory limit, and the ability to override the default, auto-calibrated timeout. The rest is just supposed to work. When it doesn\u0026rsquo;t, user-friendly error messages outline the probable causes and workarounds, and get you back on track right away.\n Chainability. Most general-purpose fuzzers can\u0026rsquo;t be easily employed against resource-hungry or interaction-heavy tools, necessitating the creation of custom in-process fuzzers or the investment of massive CPU power (most of which is wasted on tasks not directly related to the code we actually want to test).\nAFL tries to scratch this itch by allowing users to use more lightweight targets (e.g., standalone image parsing libraries) to create small corpora of interesting test cases that can be fed into a manual testing process or a UI harness later on.\n As mentioned in technical_details.md, AFL does all this not by systematically applying a single overarching CS concept, but by experimenting with a variety of small, complementary methods that were shown to reliably yields results better than chance. The use of instrumentation is a part of that toolkit, but is far from being the most important one.\nUltimately, what matters is that afl-fuzz is designed to find cool bugs - and has a pretty robust track record of doing just that.\n"}),a.add({id:19,href:'/docs/ideas/',title:"Ideas",content:"Ideas for AFL++ In the following, we describe a variety of ideas that could be implemented for future AFL++ versions.\nAnalysis software Currently analysis is done by using afl-plot, which is rather outdated. A GTK or browser tool to create run-time analysis based on fuzzer_stats, queue/id* information and plot_data that allows for zooming in and out, changing min/max display values etc. and doing that for a single run, different runs and campaigns vs. campaigns. Interesting values are execs, and execs/s, edges discovered (total, when each edge was discovered and which other fuzzer share finding that edge), test cases executed. It should be clickable which value is X and Y axis, zoom factor, log scaling on-off, etc.\nMentor: vanhauser-thc\nWASM Instrumentation Currently, AFL++ can be used for source code fuzzing and traditional binaries. With the rise of WASM as a compile target, however, a novel way of instrumentation needs to be implemented for binaries compiled to Webassembly. This can either be done by inserting instrumentation directly into the WASM AST, or by patching feedback into a WASM VM of choice, similar to the current Unicorn instrumentation.\nMentor: any\nSupport other programming languages Other programming languages also use llvm hence they could be (easily?) supported for fuzzing, e.g., mono, swift, go, kotlin native, fortran, \u0026hellip;\nGCC also supports: Objective-C, Fortran, Ada, Go, and D (according to Gcc homepage)\nLLVM is also used by: Rust, LLGo (Go), kaleidoscope (Haskell), flang (Fortran), emscripten (JavaScript, WASM), ilwasm (CIL (C#)) (according to LLVM frontends)\nMentor: vanhauser-thc\nMachine Learning Something with machine learning, better than NEUZZ :-) Either improve a single mutator through learning of many different bugs (a bug class) or gather deep insights about a single target beforehand (CFG, DFG, VFG, \u0026hellip;?) and improve performance for a single target.\nMentor: domenukk\nYour idea! Finally, we are open to proposals! Create an issue at https://github.com/AFLplusplus/AFLplusplus/issues and let\u0026rsquo;s discuss :-)\n"}),a.add({id:20,href:'/docs/important_changes/',title:"Important Changes",content:"Important changes in AFL++ This document lists important changes in AFL++, for example, major behavior changes.\nFrom version 3.00 onwards With AFL++ 4.00, we introduced the following changes from previous behaviors:\n the complete documentation was overhauled and restructured thanks to @llzmb! a new CMPLOG target format requires recompiling CMPLOG targets for use with AFL++ 4.0 onwards better naming for several fields in the UI With AFL++ 3.15, we introduced the following changes from previous behaviors:\n afl-cmin and afl-showmap -Ci now descend into subdirectories like afl-fuzz -i does (but note that afl-cmin.bash does not) With AFL++ 3.14, we introduced the following changes from previous behaviors:\n afl-fuzz: deterministic fuzzing is not a default for -M main anymore afl-cmin/afl-showmap -i now descends into subdirectories (afl-cmin.bash, however, does not) With AFL++ 3.10, we introduced the following changes from previous behaviors:\n The \u0026lsquo;+\u0026rsquo; feature of the -t option now means to auto-calculate the timeout with the value given being the maximum timeout. The original meaning of \u0026ldquo;skipping timeouts instead of abort\u0026rdquo; is now inherent to the -t option. With AFL++ 3.00, we introduced changes that break some previous AFL and AFL++ behaviors and defaults:\n There are no llvm_mode and gcc_plugin subdirectories anymore and there is only one compiler: afl-cc. All previous compilers now symlink to this one. All instrumentation source code is now in the instrumentation/ folder. The gcc_plugin was replaced with a new version submitted by AdaCore that supports more features. Thank you! QEMU mode got upgraded to QEMU 5.1, but to be able to build this a current ninja build tool version and python3 setuptools are required. QEMU mode also got new options like snapshotting, instrumenting specific shared libraries, etc. Additionally QEMU 5.1 supports more CPU targets so this is really worth it. When instrumenting targets, afl-cc will not supersede optimizations anymore if any were given. This allows to fuzz targets build regularly like those for debug or release versions. afl-fuzz: if neither -M or -S is specified, -S default is assumed, so more fuzzers can easily be added later -i input directory option now descends into subdirectories. It also does not fail on crashes and too large files, instead it skips them and uses them for splicing mutations -m none is now the default, set memory limits (in MB) with, e.g., -m 250 deterministic fuzzing is now disabled by default (unless using -M) and can be enabled with -D a caching of test cases can now be performed and can be modified by editing config.h for TESTCASE_CACHE or by specifying the environment variable AFL_TESTCACHE_SIZE (in MB). Good values are between 50-500 (default: 50). -M mains do not perform trimming examples/ got renamed to utils/ libtokencap/, libdislocator/, and qdbi_mode/ were moved to utils/ afl-cmin/afl-cmin.bash now search first in PATH and last in AFL_PATH "}),a.add({id:21,href:'/docs/install/',title:"Install",content:"Building and installing AFL++ Linux on x86 An easy way to install AFL++ with everything compiled is available via docker: You can use the Dockerfile (which has gcc-10 and clang-11 - hence afl-clang-lto is available!) or just pull directly from the Docker Hub:\ndocker pull aflplusplus/aflplusplus docker run -ti -v /location/of/your/target:/src aflplusplus/aflplusplus This image is automatically generated when a push to the stable repo happens. You will find your target source code in /src in the container.\nIf you want to build AFL++ yourself, you have many options. The easiest choice is to build and install everything:\nsudo apt-get update sudo apt-get install -y build-essential python3-dev automake git flex bison libglib2.0-dev libpixman-1-dev python3-setuptools # try to install llvm 11 and install the distro default if that fails sudo apt-get install -y lld-11 llvm-11 llvm-11-dev clang-11 || sudo apt-get install -y lld llvm llvm-dev clang sudo apt-get install -y gcc-$(gcc --version|head -n1|sed \u0026#39;s/.* //\u0026#39;|sed \u0026#39;s/\\..*//\u0026#39;)-plugin-dev libstdc++-$(gcc --version|head -n1|sed \u0026#39;s/.* //\u0026#39;|sed \u0026#39;s/\\..*//\u0026#39;)-dev sudo apt-get install -y ninja-build # for QEMU mode git clone https://github.com/AFLplusplus/AFLplusplus cd AFLplusplus make distrib sudo make install It is recommended to install the newest available gcc, clang and llvm-dev possible in your distribution!\nNote that make distrib also builds FRIDA mode, QEMU mode, unicorn_mode, and more. If you just want plain AFL++, then do make all. If you want some assisting tooling compiled but are not interested in binary-only targets, then instead choose:\nmake source-only These build targets exist:\n all: the main afl++ binaries and llvm/gcc instrumentation binary-only: everything for binary-only fuzzing: frida_mode, nyx_mode, qemu_mode, frida_mode, unicorn_mode, coresight_mode, libdislocator, libtokencap source-only: everything for source code fuzzing: nyx_mode, libdislocator, libtokencap distrib: everything (for both binary-only and source code fuzzing) man: creates simple man pages from the help option of the programs install: installs everything you have compiled with the build options above clean: cleans everything compiled, not downloads (unless not on a checkout) deepclean: cleans everything including downloads code-format: format the code, do this before you commit and send a PR please! tests: runs test cases to ensure that all features are still working as they should unit: perform unit tests (based on cmocka) help: shows these build options Unless you are on Mac OS X, you can also build statically linked versions of the AFL++ binaries by passing the STATIC=1 argument to make:\nmake STATIC=1 These build options exist:\n STATIC - compile AFL++ static ASAN_BUILD - compiles with memory sanitizer for debug purposes DEBUG - no optimization, -ggdb3, all warnings and -Werror PROFILING - compile with profiling information (gprof) INTROSPECTION - compile afl-fuzz with mutation introspection NO_PYTHON - disable python support NO_SPLICING - disables splicing mutation in afl-fuzz, not recommended for normal fuzzing AFL_NO_X86 - if compiling on non-intel/amd platforms LLVM_CONFIG - if your distro doesn\u0026rsquo;t use the standard name for llvm-config (e.g., Debian) e.g.: make ASAN_BUILD=1\nMacOS X on x86 and arm64 (M1) MacOS has some gotchas due to the idiosyncrasies of the platform.\nTo build AFL, install llvm (and perhaps gcc) from brew and follow the general instructions for Linux. If possible, avoid Xcode at all cost.\nbrew install wget git make cmake llvm gdb coreutils Be sure to setup PATH to point to the correct clang binaries and use the freshly installed clang, clang++, llvm-config, gmake and coreutils, e.g.:\n# Depending on your MacOS system + brew version it is either export PATH=\u0026#34;/opt/homebrew/opt/llvm/bin:$PATH\u0026#34; # or export PATH=\u0026#34;/usr/local/opt/llvm/bin:$PATH\u0026#34; # you can check with \u0026#34;brew info llvm\u0026#34; export PATH=\u0026#34;/usr/local/opt/coreutils/libexec/gnubin:/usr/local/bin:$PATH\u0026#34; export CC=clang export CXX=clang++ gmake cd frida_mode gmake cd .. sudo gmake install afl-gcc will fail unless you have GCC installed, but that is using outdated instrumentation anyway. afl-clang might fail too depending on your PATH setup. But you don\u0026rsquo;t want neither, you want afl-clang-fast anyway :) Note that afl-clang-lto, afl-gcc-fast and qemu_mode are not working on MacOS.\nThe crash reporting daemon that comes by default with MacOS X will cause problems with fuzzing. You need to turn it off:\nlaunchctl unload -w /System/Library/LaunchAgents/com.apple.ReportCrash.plist sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.ReportCrash.Root.plist The fork() semantics on OS X are a bit unusual compared to other unix systems and definitely don\u0026rsquo;t look POSIX-compliant. This means two things:\n Fuzzing will be probably slower than on Linux. In fact, some folks report considerable performance gains by running the jobs inside a Linux VM on MacOS X. Some non-portable, platform-specific code may be incompatible with the AFL++ forkserver. If you run into any problems, set AFL_NO_FORKSRV=1 in the environment before starting afl-fuzz. User emulation mode of QEMU does not appear to be supported on MacOS X, so black-box instrumentation mode (-Q) will not work. However, Frida mode (-O) works on both x86 and arm64 MacOS boxes.\nMacOS X supports SYSV shared memory used by AFL\u0026rsquo;s instrumentation, but the default settings aren\u0026rsquo;t usable with AFL++. The default settings on 10.14 seem to be:\n$ ipcs -M IPC status from \u0026lt;running system\u0026gt; as of XXX shminfo: shmmax: 4194304 (max shared memory segment size) shmmin: 1 (min shared memory segment size) shmmni: 32 (max number of shared memory identifiers) shmseg: 8 (max shared memory segments per process) shmall: 1024 (max amount of shared memory in pages) To temporarily change your settings to something minimally usable with AFL++, run these commands as root:\nsysctl kern.sysv.shmmax=8388608 sysctl kern.sysv.shmall=4096 If you\u0026rsquo;re running more than one instance of AFL, you likely want to make shmall bigger and increase shmseg as well:\nsysctl kern.sysv.shmmax=8388608 sysctl kern.sysv.shmseg=48 sysctl kern.sysv.shmall=98304 See http://www.spy-hill.com/help/apple/SharedMemory.html for documentation for these settings and how to make them permanent.\n"}),a.add({id:22,href:'/docs/life_pro_tips/',title:"Life Pro Tips",content:"AFL \u0026ldquo;Life Pro Tips\u0026rdquo; Bite-sized advice for those who understand the basics, but can\u0026rsquo;t be bothered to read or memorize every other piece of documentation for AFL.\nGet more bang for your buck by using fuzzing dictionaries. See dictionaries/README.md to learn how.\nYou can get the most out of your hardware by parallelizing AFL jobs. See parallel_fuzzing.md for step-by-step tips.\nImprove the odds of spotting memory corruption bugs with libdislocator.so! It\u0026rsquo;s easy. Consult utils/libdislocator/README.md for usage tips.\nWant to understand how your target parses a particular input file? Try the bundled afl-analyze tool; it\u0026rsquo;s got colors and all!\nYou can visually monitor the progress of your fuzzing jobs. Run the bundled afl-plot utility to generate browser-friendly graphs.\nNeed to monitor AFL jobs programmatically? Check out the fuzzer_stats file in the AFL output dir or try afl-whatsup.\nPuzzled by something showing up in red or purple in the AFL UI? It could be important - consult docs/status_screen.md right away!\nKnow your target? Convert it to persistent mode for a huge performance gain! Consult section #5 in README.llvm.md for tips.\nUsing clang? Check out instrumentation/ for a faster alternative to afl-gcc!\nDid you know that AFL can fuzz closed-source or cross-platform binaries? Check out qemu_mode/README.md and unicorn_mode/README.md for more.\nDid you know that afl-fuzz can minimize any test case for you? Try the bundled afl-tmin tool - and get small repro files fast!\nNot sure if a crash is exploitable? AFL can help you figure it out. Specify -C to enable the peruvian were-rabbit mode.\nTrouble dealing with a machine uprising? Relax, we\u0026rsquo;ve all been there. Find essential survival tips at http://lcamtuf.coredump.cx/prep/.\nWant to automatically spot non-crashing memory handling bugs? Try running an AFL-generated corpus through ASAN, MSAN, or Valgrind.\nGood selection of input files is critical to a successful fuzzing job. See docs/perf_tips.md for pro tips.\nYou can improve the odds of automatically spotting stack corruption issues. Specify AFL_HARDEN=1 in the environment to enable hardening flags.\nBumping into problems with non-reproducible crashes? It happens, but usually isn\u0026rsquo;t hard to diagnose. See section #7 in README.md for tips.\nFuzzing is not just about memory corruption issues in the codebase. Add some sanity-checking assert() / abort() statements to effortlessly catch logic bugs.\nHey kid\u0026hellip; pssst\u0026hellip; want to figure out how AFL really works? Check out docs/technical_details.md for all the gory details in one place!\nThere\u0026rsquo;s a ton of third-party helper tools designed to work with AFL! Be sure to check out docs/sister_projects.md before writing your own.\nNeed to fuzz the command-line arguments of a particular program? You can find a simple solution in utils/argv_fuzzing.\nAttacking a format that uses checksums? Remove the checksum-checking code or use a postprocessor! See afl_custom_post_process in custom_mutators/examples/example.c for more.\n"}),a.add({id:23,href:'/docs/notes_for_asan/',title:"Notes for Asan",content:"Notes for using ASAN with afl-fuzz This file discusses some of the caveats for fuzzing under ASAN, and suggests a handful of alternatives. See README.md for the general instruction manual.\n1) Short version ASAN on 64-bit systems requests a lot of memory in a way that can\u0026rsquo;t be easily distinguished from a misbehaving program bent on crashing your system.\nBecause of this, fuzzing with ASAN is recommended only in four scenarios:\n On 32-bit systems, where we can always enforce a reasonable memory limit (-m 800 or so is a good starting point),\n On 64-bit systems only if you can do one of the following:\n Compile the binary in 32-bit mode (gcc -m32),\n Precisely gauge memory needs using http://jwilk.net/software/recidivm .\n Limit the memory available to process using cgroups on Linux (see utils/asan_cgroups).\n To compile with ASAN, set AFL_USE_ASAN=1 before calling \u0026lsquo;make clean all\u0026rsquo;. The afl-gcc / afl-clang wrappers will pick that up and add the appropriate flags. Note that ASAN is incompatible with -static, so be mindful of that.\n(You can also use AFL_USE_MSAN=1 to enable MSAN instead.)\nWhen compiling with AFL_USE_LSAN, the leak sanitizer will normally run when the program exits. In order to utilize this check at different times, such as at the end of a loop, you may use the macro __AFL_LEAK_CHECK();. This macro will report a crash in afl-fuzz if any memory is left leaking at this stage. You can also use LSAN_OPTIONS and a supressions file for more fine-tuned checking, however make sure you keep exitcode=23.\nNOTE: if you run several secondary instances, only one should run the target compiled with ASAN (and UBSAN, CFISAN), the others should run the target with no sanitizers compiled in.\nThere is also the option of generating a corpus using a non-ASAN binary, and then feeding it to an ASAN-instrumented one to check for bugs. This is faster, and can give you somewhat comparable results. You can also try using libdislocator (see utils/libdislocator/README.dislocator.md in the parent directory) as a lightweight and hassle-free (but less thorough) alternative.\n2) Long version ASAN allocates a huge region of virtual address space for bookkeeping purposes. Most of this is never actually accessed, so the OS never has to allocate any real pages of memory for the process, and the VM grabbed by ASAN is essentially \u0026ldquo;free\u0026rdquo; - but the mapping counts against the standard OS-enforced limit (RLIMIT_AS, aka ulimit -v).\nOn our end, afl-fuzz tries to protect you from processes that go off-rails and start consuming all the available memory in a vain attempt to parse a malformed input file. This happens surprisingly often, so enforcing such a limit is important for almost any fuzzer: the alternative is for the kernel OOM handler to step in and start killing random processes to free up resources. Needless to say, that\u0026rsquo;s not a very nice prospect to live with.\nUnfortunately, un*x systems offer no portable way to limit the amount of pages actually given to a process in a way that distinguishes between that and the harmless \u0026ldquo;land grab\u0026rdquo; done by ASAN. In principle, there are three standard ways to limit the size of the heap:\n The RLIMIT_AS mechanism (ulimit -v) caps the size of the virtual space - but as noted, this pays no attention to the number of pages actually in use by the process, and doesn\u0026rsquo;t help us here.\n The RLIMIT_DATA mechanism (ulimit -d) seems like a good fit, but it applies only to the traditional sbrk() / brk() methods of requesting heap space; modern allocators, including the one in glibc, routinely rely on mmap() instead, and circumvent this limit completely.\n Finally, the RLIMIT_RSS limit (ulimit -m) sounds like what we need, but doesn\u0026rsquo;t work on Linux - mostly because nobody felt like implementing it.\n There are also cgroups, but they are Linux-specific, not universally available even on Linux systems, and they require root permissions to set up; I\u0026rsquo;m a bit hesitant to make afl-fuzz require root permissions just for that. That said, if you are on Linux and want to use cgroups, check out the contributed script that ships in utils/asan_cgroups/.\nIn settings where cgroups aren\u0026rsquo;t available, we have no nice, portable way to avoid counting the ASAN allocation toward the limit. On 32-bit systems, or for binaries compiled in 32-bit mode (-m32), this is not a big deal: ASAN needs around 600-800 MB or so, depending on the compiler - so all you need to do is to specify -m that is a bit higher than that.\nOn 64-bit systems, the situation is more murky, because the ASAN allocation is completely outlandish - around 17.5 TB in older versions, and closer to 20 TB with newest ones. The actual amount of memory on your system is (probably!) just a tiny fraction of that - so unless you dial the limit with surgical precision, you will get no protection from OOM bugs.\nOn my system, the amount of memory grabbed by ASAN with a slightly older version of gcc is around 17,825,850 MB; for newest clang, it\u0026rsquo;s 20,971,600. But there is no guarantee that these numbers are stable, and if you get them wrong by \u0026ldquo;just\u0026rdquo; a couple gigs or so, you will be at risk.\nTo get the precise number, you can use the recidivm tool developed by Jakub Wilk (http://jwilk.net/software/recidivm). In absence of this, ASAN is not recommended when fuzzing 64-bit binaries, unless you are confident that they are robust and enforce reasonable memory limits (in which case, you can specify \u0026lsquo;-m none\u0026rsquo; when calling afl-fuzz).\nUsing recidivm or running with no limits aside, there are two other decent alternatives: build a corpus of test cases using a non-ASAN binary, and then examine them with ASAN, Valgrind, or other heavy-duty tools in a more controlled setting; or compile the target program with -m32 (32-bit mode) if your system supports that.\n3) Interactions with the QEMU mode ASAN, MSAN, and other sanitizers appear to be incompatible with QEMU user emulation, so please do not try to use them with the -Q option; QEMU doesn\u0026rsquo;t seem to appreciate the shadow VM trick used by these tools, and will likely just allocate all your physical memory, then crash.\nYou can, however, use QASan to run binaries that are not instrumented with ASan under QEMU with the AFL++ instrumentation.\nhttps://github.com/andreafioraldi/qasan\n4) ASAN and OOM crashes By default, ASAN treats memory allocation failures as fatal errors, immediately causing the program to crash. Since this is a departure from normal POSIX semantics (and creates the appearance of security issues in otherwise properly-behaving programs), we try to disable this by specifying allocator_may_return_null=1 in ASAN_OPTIONS.\nUnfortunately, it\u0026rsquo;s been reported that this setting still causes ASAN to trigger phantom crashes in situations where the standard allocator would simply return NULL. If this is interfering with your fuzzing jobs, you may want to cc: yourself on this bug:\nhttps://bugs.llvm.org/show_bug.cgi?id=22026\n5) What about UBSAN? New versions of UndefinedBehaviorSanitizer offers the -fsanitize=undefined-trap-on-error compiler flag that tells UBSan to insert an istruction that will cause SIGILL (ud2 on x86) when an undefined behaviour is detected. This is the option that you want to use when combining AFL++ and UBSan.\nAFL_USE_UBSAN=1 env var will add this compiler flag to afl-clang-fast, afl-gcc-fast and afl-gcc for you.\nOld versions of UBSAN don\u0026rsquo;t offer a consistent way to abort() on fault conditions or to terminate with a distinctive exit code but there are some versions of the library can be binary-patched to address this issue. You can also preload a shared library that substitute all the UBSan routines used to report errors with abort().\n"}),a.add({id:24,href:'/docs/parallel_fuzzing/',title:"Parallel Fuzzing",content:"Tips for parallel fuzzing This document talks about synchronizing afl-fuzz jobs on a single machine or across a fleet of systems. See README.md for the general instruction manual.\nNote that this document is rather outdated. please refer to the main document section on multiple core usage ../README.md#Using multiple cores for up to date strategies!\n1) Introduction Every copy of afl-fuzz will take up one CPU core. This means that on an n-core system, you can almost always run around n concurrent fuzzing jobs with virtually no performance hit (you can use the afl-gotcpu tool to make sure).\nIn fact, if you rely on just a single job on a multi-core system, you will be underutilizing the hardware. So, parallelization is always the right way to go.\nWhen targeting multiple unrelated binaries or using the tool in \u0026ldquo;non-instrumented\u0026rdquo; (-n) mode, it is perfectly fine to just start up several fully separate instances of afl-fuzz. The picture gets more complicated when you want to have multiple fuzzers hammering a common target: if a hard-to-hit but interesting test case is synthesized by one fuzzer, the remaining instances will not be able to use that input to guide their work.\nTo help with this problem, afl-fuzz offers a simple way to synchronize test cases on the fly.\nNote that AFL++ has AFLfast\u0026rsquo;s power schedules implemented. It is therefore a good idea to use different power schedules if you run several instances in parallel. See power_schedules.md\nAlternatively running other AFL spinoffs in parallel can be of value, e.g. Angora (https://github.com/AngoraFuzzer/Angora/)\n2) Single-system parallelization If you wish to parallelize a single job across multiple cores on a local system, simply create a new, empty output directory (\u0026ldquo;sync dir\u0026rdquo;) that will be shared by all the instances of afl-fuzz; and then come up with a naming scheme for every instance - say, \u0026ldquo;fuzzer01\u0026rdquo;, \u0026ldquo;fuzzer02\u0026rdquo;, etc.\nRun the first one (\u0026ldquo;main node\u0026rdquo;, -M) like this:\n./afl-fuzz -i testcase_dir -o sync_dir -M fuzzer01 [...other stuff...] \u0026hellip;and then, start up secondary (-S) instances like this:\n./afl-fuzz -i testcase_dir -o sync_dir -S fuzzer02 [...other stuff...] ./afl-fuzz -i testcase_dir -o sync_dir -S fuzzer03 [...other stuff...] Each fuzzer will keep its state in a separate subdirectory, like so:\n/path/to/sync_dir/fuzzer01/\nEach instance will also periodically rescan the top-level sync directory for any test cases found by other fuzzers - and will incorporate them into its own fuzzing when they are deemed interesting enough. For performance reasons only -M main node syncs the queue with everyone, the -S secondary nodes will only sync from the main node.\nThe difference between the -M and -S modes is that the main instance will still perform deterministic checks; while the secondary instances will proceed straight to random tweaks.\nNote that you must always have one -M main instance! Running multiple -M instances is wasteful!\nYou can also monitor the progress of your jobs from the command line with the provided afl-whatsup tool. When the instances are no longer finding new paths, it\u0026rsquo;s probably time to stop.\nWARNING: Exercise caution when explicitly specifying the -f option. Each fuzzer must use a separate temporary file; otherwise, things will go south. One safe example may be:\n./afl-fuzz [...] -S fuzzer10 -f file10.txt ./fuzzed/binary @@ ./afl-fuzz [...] -S fuzzer11 -f file11.txt ./fuzzed/binary @@ ./afl-fuzz [...] -S fuzzer12 -f file12.txt ./fuzzed/binary @@ This is not a concern if you use @@ without -f and let afl-fuzz come up with the file name.\n3) Multiple -M mains There is support for parallelizing the deterministic checks. This is only needed where\n many new paths are found fast over a long time and it looks unlikely that main node will ever catch up, and deterministic fuzzing is actively helping path discovery (you can see this in the main node for the first for lines in the \u0026ldquo;fuzzing strategy yields\u0026rdquo; section. If the ration found/attemps is high, then it is effective. It most commonly isn\u0026rsquo;t.) Only if both are true it is beneficial to have more than one main. You can leverage this by creating -M instances like so:\n./afl-fuzz -i testcase_dir -o sync_dir -M mainA:1/3 [...] ./afl-fuzz -i testcase_dir -o sync_dir -M mainB:2/3 [...] ./afl-fuzz -i testcase_dir -o sync_dir -M mainC:3/3 [...] \u0026hellip; where the first value after \u0026lsquo;:\u0026rsquo; is the sequential ID of a particular main instance (starting at 1), and the second value is the total number of fuzzers to distribute the deterministic fuzzing across. Note that if you boot up fewer fuzzers than indicated by the second number passed to -M, you may end up with poor coverage.\n4) Syncing with non-AFL fuzzers or independant instances A -M main node can be told with the -F other_fuzzer_queue_directory option to sync results from other fuzzers, e.g. libfuzzer or honggfuzz.\nOnly the specified directory will by synced into afl, not subdirectories. The specified directory does not need to exist yet at the start of afl.\nThe -F option can be passed to the main node several times.\n5) Multi-system parallelization The basic operating principle for multi-system parallelization is similar to the mechanism explained in section 2. The key difference is that you need to write a simple script that performs two actions:\n Uses SSH with authorized_keys to connect to every machine and retrieve a tar archive of the /path/to/sync_dir/\u0026lt;main_node(s)\u0026gt; directory local to the machine. It is best to use a naming scheme that includes host name and it\u0026rsquo;s being a main node (e.g. main1, main2) in the fuzzer ID, so that you can do something like:\nfor host in `cat HOSTLIST`; do ssh user@$host \u0026#34;tar -czf - sync/$host_main*/\u0026#34; \u0026gt; $host.tgz done Distributes and unpacks these files on all the remaining machines, e.g.:\nfor srchost in `cat HOSTLIST`; do for dsthost in `cat HOSTLIST`; do test \u0026#34;$srchost\u0026#34; = \u0026#34;$dsthost\u0026#34; \u0026amp;\u0026amp; continue ssh user@$srchost \u0026#39;tar -kxzf -\u0026#39; \u0026lt; $dsthost.tgz done done There is an example of such a script in utils/distributed_fuzzing/.\nThere are other (older) more featured, experimental tools:\n https://github.com/richo/roving https://github.com/MartijnB/disfuzz-afl However these do not support syncing just main nodes (yet).\nWhen developing custom test case sync code, there are several optimizations to keep in mind:\n The synchronization does not have to happen very often; running the task every 60 minutes or even less often at later fuzzing stages is fine\n There is no need to synchronize crashes/ or hangs/; you only need to copy over queue/* (and ideally, also fuzzer_stats).\n It is not necessary (and not advisable!) to overwrite existing files; the -k option in tar is a good way to avoid that.\n There is no need to fetch directories for fuzzers that are not running locally on a particular machine, and were simply copied over onto that system during earlier runs.\n For large fleets, you will want to consolidate tarballs for each host, as this will let you use n SSH connections for sync, rather than n*(n-1).\nYou may also want to implement staged synchronization. For example, you could have 10 groups of systems, with group 1 pushing test cases only to group 2; group 2 pushing them only to group 3; and so on, with group eventually 10 feeding back to group 1.\nThis arrangement would allow test interesting cases to propagate across the fleet without having to copy every fuzzer queue to every single host.\n You do not want a \u0026ldquo;main\u0026rdquo; instance of afl-fuzz on every system; you should run them all with -S, and just designate a single process somewhere within the fleet to run with -M.\n Syncing is only necessary for the main nodes on a system. It is possible to run main-less with only secondaries. However then you need to find out which secondary took over the temporary role to be the main node. Look for the is_main_node file in the fuzzer directories, eg. sync-dir/hostname-*/is_main_node\n It is not advisable to skip the synchronization script and run the fuzzers directly on a network filesystem; unexpected latency and unkillable processes in I/O wait state can mess things up.\n6) Remote monitoring and data collection You can use screen, nohup, tmux, or something equivalent to run remote instances of afl-fuzz. If you redirect the program\u0026rsquo;s output to a file, it will automatically switch from a fancy UI to more limited status reports. There is also basic machine-readable information which is always written to the fuzzer_stats file in the output directory. Locally, that information can be interpreted with afl-whatsup.\nIn principle, you can use the status screen of the main (-M) instance to monitor the overall fuzzing progress and decide when to stop. In this mode, the most important signal is just that no new paths are being found for a longer while. If you do not have a main instance, just pick any single secondary instance to watch and go by that.\nYou can also rely on that instance\u0026rsquo;s output directory to collect the synthesized corpus that covers all the noteworthy paths discovered anywhere within the fleet. Secondary (-S) instances do not require any special monitoring, other than just making sure that they are up.\nKeep in mind that crashing inputs are not automatically propagated to the main instance, so you may still want to monitor for crashes fleet-wide from within your synchronization or health checking scripts (see afl-whatsup).\n7) Asymmetric setups It is perhaps worth noting that all of the following is permitted:\n Running afl-fuzz with conjunction with other guided tools that can extend coverage (e.g., via concolic execution). Third-party tools simply need to follow the protocol described above for pulling new test cases from out_dir/\u0026lt;fuzzer_id\u0026gt;/queue/* and writing their own finds to sequentially numbered id:nnnnnn files in out_dir/\u0026lt;ext_tool_id\u0026gt;/queue/*.\n Running some of the synchronized fuzzers with different (but related) target binaries. For example, simultaneously stress-testing several different JPEG parsers (say, IJG jpeg and libjpeg-turbo) while sharing the discovered test cases can have synergistic effects and improve the overall coverage.\n(In this case, running one -M instance per target is necessary.)\n Having some of the fuzzers invoke the binary in different ways. For example, \u0026lsquo;djpeg\u0026rsquo; supports several DCT modes, configurable with a command-line flag, while \u0026lsquo;dwebp\u0026rsquo; supports incremental and one-shot decoding. In some scenarios, going after multiple distinct modes and then pooling test cases will improve coverage.\n Much less convincingly, running the synchronized fuzzers with different starting test cases (e.g., progressive and standard JPEG) or dictionaries. The synchronization mechanism ensures that the test sets will get fairly homogeneous over time, but it introduces some initial variability.\n "}),a.add({id:25,href:'/docs/patches/',title:"PAT Ches",content:"Applied Patches The following patches from https://github.com/vanhauser-thc/afl-patches have been installed or not installed:\nINSTALLED afl-llvm-fix.diff by kcwu(at)csie(dot)org afl-sort-all_uniq-fix.diff by legarrec(dot)vincent(at)gmail(dot)com laf-intel.diff by heiko(dot)eissfeldt(at)hexco(dot)de afl-llvm-optimize.diff by mh(at)mh-sec(dot)de afl-fuzz-tmpdir.diff by mh(at)mh-sec(dot)de afl-fuzz-79x24.diff by heiko(dot)eissfeldt(at)hexco(dot)de afl-fuzz-fileextensionopt.diff tbd afl-as-AFL_INST_RATIO.diff by legarrec(dot)vincent(at)gmail(dot)com afl-qemu-ppc64.diff by william(dot)barsse(at)airbus(dot)com afl-qemu-optimize-entrypoint.diff by mh(at)mh-sec(dot)de afl-qemu-speed.diff by abiondo on github afl-qemu-optimize-map.diff by mh(at)mh-sec(dot)de llvm_mode ngram prev_loc coverage (github.com/adrianherrera/afl-ngram-pass) Custom mutator (native library) (by kyakdan) unicorn_mode (modernized and updated by domenukk) instrim (https://github.com/csienslab/instrim) was integrated MOpt (github.com/puppet-meteor/MOpt-AFL) was imported AFLfast additions (github.com/mboehme/aflfast) were incorporated. Qemu 3.1 upgrade with enhancement patches (github.com/andreafioraldi/afl) Python mutator modules support (github.com/choller/afl) Instrument file list in LLVM mode (github.com/choller/afl) forkserver patch for afl-tmin (github.com/nccgroup/TriforceAFL) NOT INSTALLED afl-fuzz-context_sensitive.diff - changes too much of the behaviour afl-tmpfs.diff - same as afl-fuzz-tmpdir.diff but more complex afl-cmin-reduce-dataset.diff - unsure of the impact afl-llvm-fix2.diff - not needed with the other patches "}),a.add({id:26,href:'/docs/perf_tips/',title:"Perf Tips",content:"Tips for performance optimization This file provides tips for troubleshooting slow or wasteful fuzzing jobs. See README.md for the general instruction manual.\n1. Keep your test cases small This is probably the single most important step to take! Large test cases do not merely take more time and memory to be parsed by the tested binary, but also make the fuzzing process dramatically less efficient in several other ways.\nTo illustrate, let\u0026rsquo;s say that you\u0026rsquo;re randomly flipping bits in a file, one bit at a time. Let\u0026rsquo;s assume that if you flip bit #47, you will hit a security bug; flipping any other bit just results in an invalid document.\nNow, if your starting test case is 100 bytes long, you will have a 71% chance of triggering the bug within the first 1,000 execs - not bad! But if the test case is 1 kB long, the probability that we will randomly hit the right pattern in the same timeframe goes down to 11%. And if it has 10 kB of non-essential cruft, the odds plunge to 1%.\nOn top of that, with larger inputs, the binary may be now running 5-10x times slower than before - so the overall drop in fuzzing efficiency may be easily as high as 500x or so.\nIn practice, this means that you shouldn\u0026rsquo;t fuzz image parsers with your vacation photos. Generate a tiny 16x16 picture instead, and run it through jpegtran or pngcrunch for good measure. The same goes for most other types of documents.\nThere\u0026rsquo;s plenty of small starting test cases in ../testcases/ - try them out or submit new ones!\nIf you want to start with a larger, third-party corpus, run afl-cmin with an aggressive timeout on that data set first.\n2. Use a simpler target Consider using a simpler target binary in your fuzzing work. For example, for image formats, bundled utilities such as djpeg, readpng, or gifhisto are considerably (10-20x) faster than the convert tool from ImageMagick - all while exercising roughly the same library-level image parsing code.\nEven if you don\u0026rsquo;t have a lightweight harness for a particular target, remember that you can always use another, related library to generate a corpus that will be then manually fed to a more resource-hungry program later on.\nAlso note that reading the fuzzing input via stdin is faster than reading from a file.\n3. Use LLVM persistent instrumentation The LLVM mode offers a \u0026ldquo;persistent\u0026rdquo;, in-process fuzzing mode that can work well for certain types of self-contained libraries, and for fast targets, can offer performance gains up to 5-10x; and a \u0026ldquo;deferred fork server\u0026rdquo; mode that can offer huge benefits for programs with high startup overhead. Both modes require you to edit the source code of the fuzzed program, but the changes often amount to just strategically placing a single line or two.\nIf there are important data comparisons performed (e.g. strcmp(ptr, MAGIC_HDR)) then using laf-intel (see instrumentation/README.laf-intel.md) will help afl-fuzz a lot to get to the important parts in the code.\nIf you are only interested in specific parts of the code being fuzzed, you can instrument_files the files that are actually relevant. This improves the speed and accuracy of afl. See instrumentation/README.instrument_list.md\n4. Profile and optimize the binary Check for any parameters or settings that obviously improve performance. For example, the djpeg utility that comes with IJG jpeg and libjpeg-turbo can be called with:\n-dct fast -nosmooth -onepass -dither none -scale 1/4 \u0026hellip;and that will speed things up. There is a corresponding drop in the quality of decoded images, but it\u0026rsquo;s probably not something you care about.\nIn some programs, it is possible to disable output altogether, or at least use an output format that is computationally inexpensive. For example, with image transcoding tools, converting to a BMP file will be a lot faster than to PNG.\nWith some laid-back parsers, enabling \u0026ldquo;strict\u0026rdquo; mode (i.e., bailing out after first error) may result in smaller files and improved run time without sacrificing coverage; for example, for sqlite, you may want to specify -bail.\nIf the program is still too slow, you can use strace -tt or an equivalent profiling tool to see if the targeted binary is doing anything silly. Sometimes, you can speed things up simply by specifying /dev/null as the config file, or disabling some compile-time features that aren\u0026rsquo;t really needed for the job (try ./configure --help). One of the notoriously resource-consuming things would be calling other utilities via exec*(), popen(), system(), or equivalent calls; for example, tar can invoke external decompression tools when it decides that the input file is a compressed archive.\nSome programs may also intentionally call sleep(), usleep(), or nanosleep(); vim is a good example of that. Other programs may attempt fsync() and so on. There are third-party libraries that make it easy to get rid of such code, e.g.:\nhttps://launchpad.net/libeatmydata\nIn programs that are slow due to unavoidable initialization overhead, you may want to try the LLVM deferred forkserver mode (see README.llvm.md), which can give you speed gains up to 10x, as mentioned above.\nLast but not least, if you are using ASAN and the performance is unacceptable, consider turning it off for now, and manually examining the generated corpus with an ASAN-enabled binary later on.\n5. Instrument just what you need Instrument just the libraries you actually want to stress-test right now, one at a time. Let the program use system-wide, non-instrumented libraries for any functionality you don\u0026rsquo;t actually want to fuzz. For example, in most cases, it doesn\u0026rsquo;t make to instrument libgmp just because you\u0026rsquo;re testing a crypto app that relies on it for bignum math.\nBeware of programs that come with oddball third-party libraries bundled with their source code (Spidermonkey is a good example of this). Check ./configure options to use non-instrumented system-wide copies instead.\n6. Parallelize your fuzzers The fuzzer is designed to need ~1 core per job. This means that on a, say, 4-core system, you can easily run four parallel fuzzing jobs with relatively little performance hit. For tips on how to do that, see parallel_fuzzing.md.\nThe afl-gotcpu utility can help you understand if you still have idle CPU capacity on your system. (It won\u0026rsquo;t tell you about memory bandwidth, cache misses, or similar factors, but they are less likely to be a concern.)\n7. Keep memory use and timeouts in check Consider setting low values for -m and -t.\nFor programs that are nominally very fast, but get sluggish for some inputs, you can also try setting -t values that are more punishing than what afl-fuzz dares to use on its own. On fast and idle machines, going down to -t 5 may be a viable plan.\nThe -m parameter is worth looking at, too. Some programs can end up spending a fair amount of time allocating and initializing megabytes of memory when presented with pathological inputs. Low -m values can make them give up sooner and not waste CPU time.\n8. Check OS configuration There are several OS-level factors that may affect fuzzing speed:\n If you have no risk of power loss then run your fuzzing on a tmpfs partition. This increases the performance noticably. Alternatively you can use AFL_TMPDIR to point to a tmpfs location to just write the input file to a tmpfs. High system load. Use idle machines where possible. Kill any non-essential CPU hogs (idle browser windows, media players, complex screensavers, etc). Network filesystems, either used for fuzzer input / output, or accessed by the fuzzed binary to read configuration files (pay special attention to the home directory - many programs search it for dot-files). Disable all the spectre, meltdown etc. security countermeasures in the kernel if your machine is properly separated: ibpb=off ibrs=off kpti=off l1tf=off mds=off mitigations=off no_stf_barrier noibpb noibrs nopcid nopti nospec_store_bypass_disable nospectre_v1 nospectre_v2 pcid=off pti=off spec_store_bypass_disable=off spectre_v2=off stf_barrier=off In most Linux distributions you can put this into a `/etc/default/grub` variable. You can use `sudo afl-persistent-config` to set these options for you. The following list of changes are made when executing afl-system-config:\n On-demand CPU scaling. The Linux ondemand governor performs its analysis on a particular schedule and is known to underestimate the needs of short-lived processes spawned by afl-fuzz (or any other fuzzer). On Linux, this can be fixed with: cd /sys/devices/system/cpu echo performance | tee cpu*/cpufreq/scaling_governor On other systems, the impact of CPU scaling will be different; when fuzzing, use OS-specific tools to find out if all cores are running at full speed. Transparent huge pages. Some allocators, such as jemalloc, can incur a heavy fuzzing penalty when transparent huge pages (THP) are enabled in the kernel. You can disable this via: echo never \u0026gt; /sys/kernel/mm/transparent_hugepage/enabled Suboptimal scheduling strategies. The significance of this will vary from one target to another, but on Linux, you may want to make sure that the following options are set: echo 1 \u0026gt;/proc/sys/kernel/sched_child_runs_first echo 1 \u0026gt;/proc/sys/kernel/sched_autogroup_enabled Setting a different scheduling policy for the fuzzer process - say `SCHED_RR` - can usually speed things up, too, but needs to be done with care. "}),a.add({id:27,href:'/docs/power_schedules/',title:"Power Schedules",content:"afl++\u0026rsquo;s power schedules based on AFLfast Power schedules implemented by Marcel Böhme \u0026lt;[email protected]\u0026gt;. AFLFast is an extension of AFL which is written and maintained by Michal Zalewski \u0026lt;[email protected]\u0026gt;.\nAFLfast has helped in the success of Team Codejitsu at the finals of the DARPA Cyber Grand Challenge where their bot Galactica took 2nd place in terms of #POVs proven (see red bar at https://www.cybergrandchallenge.com/event#results). AFLFast exposed several previously unreported CVEs that could not be exposed by AFL in 24 hours and otherwise exposed vulnerabilities significantly faster than AFL while generating orders of magnitude more unique crashes.\nEssentially, we observed that most generated inputs exercise the same few \u0026ldquo;high-frequency\u0026rdquo; paths and developed strategies to gravitate towards low-frequency paths, to stress significantly more program behavior in the same amount of time. We devised several search strategies that decide in which order the seeds should be fuzzed and power schedules that smartly regulate the number of inputs generated from a seed (i.e., the time spent fuzzing a seed). We call the number of inputs generated from a seed, the seed\u0026rsquo;s energy.\nWe find that AFL\u0026rsquo;s exploitation-based constant schedule assigns too much energy to seeds exercising high-frequency paths (e.g., paths that reject invalid inputs) and not enough energy to seeds exercising low-frequency paths (e.g., paths that stress interesting behaviors). Technically, we modified the computation of a seed\u0026rsquo;s performance score (calculate_score), which seed is marked as favourite (update_bitmap_score), and which seed is chosen next from the circular queue (main). We implemented the following schedules (in the order of their effectiveness, best first):\n AFL flag Power Schedule -p explore -p fast (default) -p coe -p quad -p lin -p exploit (AFL) -p mmopt Experimental: explore with no weighting to runtime and increased weighting on the last 5 queue entries -p rare Experimental: rare puts focus on queue entries that hit rare edges -p seek Experimental: seek is EXPLORE but ignoring the runtime of the queue input and less focus on the size where α(i) is the performance score that AFL uses to compute for the seed input i, β(i)\u0026gt;1 is a constant, s(i) is the number of times that seed i has been chosen from the queue, f(i) is the number of generated inputs that exercise the same path as seed i, and μ is the average number of generated inputs exercising a path. More details can be found in the paper that was accepted at the 23rd ACM Conference on Computer and Communications Security (CCS'16).\nPS: In parallel mode (several instances with shared queue), we suggest to run the main node using the exploit schedule (-p exploit) and the secondary nodes with a combination of cut-off-exponential (-p coe), exponential (-p fast; default), and explore (-p explore) schedules. In single mode, the default settings will do. EDIT: In parallel mode, AFLFast seems to perform poorly because the path probability estimates are incorrect for the imported seeds. Pull requests to fix this issue by syncing the estimates across instances are appreciated :)\nCopyright 2013, 2014, 2015, 2016 Google Inc. All rights reserved. Released under terms and conditions of Apache License, Version 2.0.\n"}),a.add({id:28,href:'/docs/python_mutators/',title:"Python Mutators",content:"Adding custom mutators to AFL using Python modules This file describes how you can utilize the external Python API to write your own custom mutation routines.\nNote: This feature is highly experimental. Use at your own risk.\nImplemented by Christian Holler (:decoder) [email protected].\nNOTE: Only cPython 2.7, 3.7 and above are supported, although others may work. Depending on with which version afl-fuzz was compiled against, you must use python2 or python3 syntax in your scripts! After a major version upgrade (e.g. 3.7 -\u0026gt; 3.8), a recompilation of afl-fuzz may be needed.\nFor an example and a template see ../examples/python_mutators/\n1) Description and purpose While AFLFuzz comes with a good selection of generic deterministic and non-deterministic mutation operations, it sometimes might make sense to extend these to implement strategies more specific to the target you are fuzzing.\nFor simplicity and in order to allow people without C knowledge to extend AFLFuzz, I implemented a \u0026ldquo;Python\u0026rdquo; stage that can make use of an external module (written in Python) that implements a custom mutation stage.\nThe main motivation behind this is to lower the barrier for people experimenting with this tool. Hopefully, someone will be able to do useful things with this extension.\nIf you find it useful, have questions or need additional features added to the interface, feel free to send a mail to [email protected].\nSee the following information to get a better pictures: https://www.agarri.fr/docs/XML_Fuzzing-NullCon2017-PUBLIC.pdf https://bugs.chromium.org/p/chromium/issues/detail?id=930663\n2) How the Python module looks like You can find a simple example in pymodules/example.py including documentation explaining each function. In the same directory, you can find another simple module that performs simple mutations.\nRight now, \u0026ldquo;init\u0026rdquo; is called at program startup and can be used to perform any kinds of one-time initializations while \u0026ldquo;fuzz\u0026rdquo; is called each time a mutation is requested.\nThere is also optional support for a trimming API, see the section below for further information about this feature.\n3) How to compile AFLFuzz with Python support You must install the python 3 or 2 development package of your Linux distribution before this will work. On Debian/Ubuntu/Kali this can be done with either: apt install python3-dev or apt install python-dev Note that for some distributions you might also need the package python[23]-apt\nA prerequisite for using this mode is to compile AFLFuzz with Python support.\nThe AFL++ Makefile detects Python 3 and 2 through python-config if is is in the PATH and compiles afl-fuzz with the feature if available.\nIn case your setup is different set the necessary variables like this: PYTHON_INCLUDE=/path/to/python/include LDFLAGS=-L/path/to/python/lib make\n4) How to run AFLFuzz with your custom module You must pass the module name inside the env variable AFL_PYTHON_MODULE.\nIn addition, if you are trying to load the module from the local directory, you must adjust your PYTHONPATH to reflect this circumstance. The following command should work if you are inside the aflfuzz directory:\n$ AFL_PYTHON_MODULE=\u0026ldquo;pymodules.test\u0026rdquo; PYTHONPATH=. ./afl-fuzz\nOptionally, the following environment variables are supported:\nAFL_PYTHON_ONLY - Disable all other mutation stages. This can prevent broken testcases (those that your Python module can\u0026rsquo;t work with anymore) to fill up your queue. Best combined with a custom trimming routine (see below) because trimming can cause the same test breakage like havoc and splice.\nAFL_DEBUG - When combined with AFL_NO_UI, this causes the C trimming code to emit additional messages about the performance and actions of your custom Python trimmer. Use this to see if it works :)\n5) Order and statistics The Python stage is set to be the first non-deterministic stage (right before the havoc stage). In the statistics however, it shows up as the third number under \u0026ldquo;havoc\u0026rdquo;. That\u0026rsquo;s because I\u0026rsquo;m lazy and I didn\u0026rsquo;t want to mess with the UI too much ;)\n6) Trimming support The generic trimming routines implemented in AFLFuzz can easily destroy the structure of complex formats, possibly leading to a point where you have a lot of testcases in the queue that your Python module cannot process anymore but your target application still accepts. This is especially the case when your target can process a part of the input (causing coverage) and then errors out on the remaining input.\nIn such cases, it makes sense to implement a custom trimming routine in Python. The API consists of multiple methods because after each trimming step, we have to go back into the C code to check if the coverage bitmap is still the same for the trimmed input. Here\u0026rsquo;s a quick API description:\ninit_trim: This method is called at the start of each trimming operation and receives the initial buffer. It should return the amount of iteration steps possible on this input (e.g. if your input has n elements and you want to remove them one by one, return n, if you do a binary search, return log(n), and so on\u0026hellip;).\n If your trimming algorithm doesn't allow you to determine the amount of (remaining) steps easily (esp. while running), then you can alternatively return 1 here and always return 0 in post_trim until you are finished and no steps remain. In that case, returning 1 in post_trim will end the trimming routine. The whole current index/max iterations stuff is only used to show progress. trim: This method is called for each trimming operation. It doesn\u0026rsquo;t have any arguments because we already have the initial buffer from init_trim and we can memorize the current state in global variables. This can also save reparsing steps for each iteration. It should return the trimmed input buffer, where the returned data must not exceed the initial input data in length. Returning anything that is larger than the original data (passed to init_trim) will result in a fatal abort of AFLFuzz.\npost_trim: This method is called after each trim operation to inform you if your trimming step was successful or not (in terms of coverage). If you receive a failure here, you should reset your input to the last known good state. In any case, this method must return the next trim iteration index (from 0 to the maximum amount of steps you returned in init_trim).\nOmitting any of the methods will cause Python trimming to be disabled and trigger a fallback to the builtin default trimming routine.\n"}),a.add({id:29,href:'/docs/quickstartguide/',title:"Quick Start Guide",content:"AFL quick start guide You should read README.md - it\u0026rsquo;s pretty short. If you really can\u0026rsquo;t, here\u0026rsquo;s how to hit the ground running:\n Compile AFL with \u0026lsquo;make\u0026rsquo;. If build fails, see INSTALL.md for tips.\n Find or write a reasonably fast and simple program that takes data from a file or stdin, processes it in a test-worthy way, then exits cleanly. If testing a network service, modify it to run in the foreground and read from stdin. When fuzzing a format that uses checksums, comment out the checksum verification code, too.\nIf this is not possible (e.g. in -Q(emu) mode) then use AFL_CUSTOM_MUTATOR_LIBRARY to calculate the values with your own library.\nThe program must crash properly when a fault is encountered. Watch out for custom SIGSEGV or SIGABRT handlers and background processes. For tips on detecting non-crashing flaws, see section 11 in README.md .\n Compile the program / library to be fuzzed using afl-cc. A common way to do this would be:\nCC=/path/to/afl-cc CXX=/path/to/afl-c++ ./configure \u0026ndash;disable-shared make clean all\n Get a small but valid input file that makes sense to the program. When fuzzing verbose syntax (SQL, HTTP, etc), create a dictionary as described in dictionaries/README.md, too.\n If the program reads from stdin, run \u0026lsquo;afl-fuzz\u0026rsquo; like so:\n./afl-fuzz -i testcase_dir -o findings_dir \u0026ndash; /path/to/tested/program [\u0026hellip;program\u0026rsquo;s cmdline\u0026hellip;]\nIf the program takes input from a file, you can put @@ in the program\u0026rsquo;s command line; AFL will put an auto-generated file name in there for you.\n Investigate anything shown in red in the fuzzer UI by promptly consulting status_screen.md.\n There is a basic docker build with \u0026lsquo;docker build -t aflplusplus .\u0026rsquo;\n That\u0026rsquo;s it. Sit back, relax, and - time permitting - try to skim through the following files:\n README.md - A general introduction to AFL, docs/perf_tips.md - Simple tips on how to fuzz more quickly, docs/status_screen.md - An explanation of the tidbits shown in the UI, docs/parallel_fuzzing.md - Advice on running AFL on multiple cores. "}),a.add({id:30,href:'/docs/readme/',title:"Readme",content:"AFL++ documentation This is the overview of the AFL++ docs content.\nFor general information on AFL++, see the README.md of the repository.\nAlso take a look at our FAQ.md and best_practices.md.\nFuzzing targets with the source code available You can find a quickstart for fuzzing targets with the source code available in the README.md of the repository.\nFor in-depth information on the steps of the fuzzing process, see fuzzing_in_depth.md or click on the following image and select a step.\nFor further information on instrumentation, see the READMEs in the instrumentation/ folder.\nInstrumenting the target For more information, click on the following image and select a step.\nPreparing the fuzzing campaign For more information, click on the following image and select a step.\nFuzzing the target For more information, click on the following image and select a step.\nManaging the fuzzing campaign For more information, click on the following image and select a step.\nFuzzing other targets To learn about fuzzing other targets, see:\n Binary-only: fuzzing_binary-only_targets.md GUI programs: best_practices.md#fuzzing-a-gui-program Libraries: frida_mode/README.md Network services: best_practices.md#fuzzing-a-network-service Non-linux: unicorn_mode/README.md Additional information Tools that help fuzzing with AFL++: third_party_tools.md Tutorials: tutorials.md "}),a.add({id:31,href:'/docs/readme.mopt/',title:"Readme. Mopt",content:"MOpt(imized) AFL by [email protected] 1. Description MOpt-AFL is a AFL-based fuzzer that utilizes a customized Particle Swarm Optimization (PSO) algorithm to find the optimal selection probability distribution of operators with respect to fuzzing effectiveness. More details can be found in the technical report.\n2. Cite Information Chenyang Lyu, Shouling Ji, Chao Zhang, Yuwei Li, Wei-Han Lee, Yu Song and Raheem Beyah, MOPT: Optimized Mutation Scheduling for Fuzzers, USENIX Security 2019.\n3. Seed Sets We open source all the seed sets used in the paper \u0026ldquo;MOPT: Optimized Mutation Scheduling for Fuzzers\u0026rdquo;.\n4. Experiment Results The experiment results can be found in https://drive.google.com/drive/folders/184GOzkZGls1H2NuLuUfSp9gfqp1E2-lL?usp=sharing. We only open source the crash files since the space is limited.\n5. Technical Report MOpt_TechReport.pdf is the technical report of the paper \u0026ldquo;MOPT: Optimized Mutation Scheduling for Fuzzers\u0026rdquo;, which contains more deatails.\n6. Parameter Introduction Most important, you must add the parameter -L (e.g., -L 0) to launch the MOpt scheme.\nOption \u0026lsquo;-L\u0026rsquo; controls the time to move on to the pacemaker fuzzing mode. \u0026lsquo;-L t\u0026rsquo;: when MOpt-AFL finishes the mutation of one input, if it has not discovered any new unique crash or path for more than t minutes, MOpt-AFL will enter the pacemaker fuzzing mode.\nSetting 0 will enter the pacemaker fuzzing mode at first, which is recommended in a short time-scale evaluation.\nSetting -1 will enable both pacemaker mode and normal aflmutation fuzzing in parallel.\nOther important parameters can be found in afl-fuzz.c, for instance,\n\u0026lsquo;swarm_num\u0026rsquo;: the number of the PSO swarms used in the fuzzing process. \u0026lsquo;period_pilot\u0026rsquo;: how many times MOpt-AFL will execute the target program in the pilot fuzzing module, then it will enter the core fuzzing module. \u0026lsquo;period_core\u0026rsquo;: how many times MOpt-AFL will execute the target program in the core fuzzing module, then it will enter the PSO updating module. \u0026lsquo;limit_time_bound\u0026rsquo;: control how many interesting test cases need to be found before MOpt-AFL quits the pacemaker fuzzing mode and reuses the deterministic stage. 0 \u0026lt; \u0026lsquo;limit_time_bound\u0026rsquo; \u0026lt; 1, MOpt-AFL-tmp. \u0026lsquo;limit_time_bound\u0026rsquo; \u0026gt;= 1, MOpt-AFL-ever.\nHave fun with MOpt in AFL!\n"}),a.add({id:32,href:'/docs/readme.radamsa/',title:"Readme.Radamsa",content:"libradamsa Pretranslated radamsa library. This code belongs to the radamsa author.\n Original repository: https://gitlab.com/akihe/radamsa\n Source commit: 7b2cc2d0\n The code here is adapted for AFL++ with minor changes respect the original version\n "}),a.add({id:33,href:'/docs/rpc_statsd/',title:"Rpc Statsd",content:"Remote monitoring and metrics visualization AFL++ can send out metrics as StatsD messages. For remote monitoring and visualization of the metrics, you can set up a tool chain. For example, with Prometheus and Grafana. All tools are free and open source.\nThis enables you to create nice and readable dashboards containing all the information you need on your fuzzer instances. There is no need to write your own statistics parsing system, deploy and maintain it to all your instances, and sync with your graph rendering system.\nCompared to the default integrated UI of AFL++, this can help you to visualize trends and the fuzzing state over time. You might be able to see when the fuzzing process has reached a state of no progress and visualize what are the \u0026ldquo;best strategies\u0026rdquo; for your targets (according to your own criteria). You can do so without logging into each instance individually.\nThis is an example visualization with Grafana. The dashboard can be imported with this JSON template.\nAFL++ metrics and StatsD StatsD allows you to receive and aggregate metrics from a wide range of applications and retransmit them to a backend of your choice.\nFrom AFL++, StatsD can receive the following metrics:\n cur_item cycle_done cycles_wo_finds edges_found execs_done execs_per_sec havoc_expansion max_depth corpus_favored corpus_found corpus_imported corpus_count pending_favs pending_total slowest_exec_ms total_crashes saved_crashes saved_hangs var_byte_count corpus_variable Depending on your StatsD server, you will be able to monitor, trigger alerts, or perform actions based on these metrics (for example: alert on slow exec/s for a new build, threshold of crashes, time since last crash \u0026gt; X, and so on).\nSetting environment variables in AFL++ To enable the StatsD metrics collection on your fuzzer instances, set the environment variable AFL_STATSD=1. By default, AFL++ will send the metrics over UDP to 127.0.0.1:8125.\n To enable tags for each metric based on their format (banner and afl_version), set the environment variable AFL_STATSD_TAGS_FLAVOR. By default, no tags will be added to the metrics.\nThe available values are the following:\n dogstatsd influxdb librato signalfx For more information on environment variables, see env_variables.md.\nNote: When using multiple fuzzer instances with StatsD it is strongly recommended to set up AFL_STATSD_TAGS_FLAVOR to match your StatsD server. This will allow you to see individual fuzzer performance, detect bad ones, and see the progress of each strategy.\n Optional: To set the host and port of your StatsD daemon, set AFL_STATSD_HOST and AFL_STATSD_PORT. The default values are localhost and 8125.\n Installing and setting up StatsD, Prometheus, and Grafana The easiest way to install and set up the infrastructure is with Docker and Docker Compose.\nDepending on your fuzzing setup and infrastructure, you may not want to run these applications on your fuzzer instances. This setup may be modified before use in a production environment; for example, adding passwords, creating volumes for storage, tweaking the metrics gathering to get host metrics (CPU, RAM, and so on).\nFor all your fuzzing instances, only one instance of Prometheus and Grafana is required. The statsd exporter converts the StatsD metrics to Prometheus. If you are using a provider that supports StatsD directly, you can skip this part of the setup.\u0026quot;\nYou can create and move the infrastructure files into a directory of your choice. The directory will store all the required configuration files.\nTo install and set up Prometheus and Grafana:\n Install Docker and Docker Compose:\ncurl -fsSL https://get.docker.com -o get-docker.sh sh get-docker.sh Create a docker-compose.yml containing the following:\nversion: \u0026#39;3\u0026#39; networks: statsd-net: driver: bridge services: prometheus: image: prom/prometheus container_name: prometheus volumes: - ./prometheus.yml:/prometheus.yml command: - \u0026#39;--config.file=/prometheus.yml\u0026#39; restart: unless-stopped ports: - \u0026#34;9090:9090\u0026#34; networks: - statsd-net statsd_exporter: image: prom/statsd-exporter container_name: statsd_exporter volumes: - ./statsd_mapping.yml:/statsd_mapping.yml command: - \u0026#34;--statsd.mapping-config=/statsd_mapping.yml\u0026#34; ports: - \u0026#34;9102:9102/tcp\u0026#34; - \u0026#34;8125:9125/udp\u0026#34; networks: - statsd-net grafana: image: grafana/grafana container_name: grafana restart: unless-stopped ports: - \u0026#34;3000:3000\u0026#34; networks: - statsd-net Create a prometheus.yml containing the following:\nglobal: scrape_interval: 15s evaluation_interval: 15s scrape_configs: - job_name: \u0026#39;fuzzing_metrics\u0026#39; static_configs: - targets: [\u0026#39;statsd_exporter:9102\u0026#39;] Create a statsd_mapping.yml containing the following:\nmappings: - match: \u0026#34;fuzzing.*\u0026#34; name: \u0026#34;fuzzing\u0026#34; labels: type: \u0026#34;$1\u0026#34; Run docker-compose up -d.\n Running AFL++ with StatsD To run your fuzzing instances:\nAFL_STATSD_TAGS_FLAVOR=dogstatsd AFL_STATSD=1 afl-fuzz -M test-fuzzer-1 -i i -o o [./bin/my-application] @@ AFL_STATSD_TAGS_FLAVOR=dogstatsd AFL_STATSD=1 afl-fuzz -S test-fuzzer-2 -i i -o o [./bin/my-application] @@ ... "}),a.add({id:34,href:'/docs/sister_projects/',title:"Sister Projects",content:"Sister projects This doc lists some of the projects that are inspired by, derived from, designed for, or meant to integrate with AFL. See README.md for the general instruction manual.\n!!! !!! This list is outdated and needs an update, missing: e.g. Angora, FairFuzz !!!\nSupport for other languages / environments: Python AFL (Jakub Wilk) Allows fuzz-testing of Python programs. Uses custom instrumentation and its own forkserver.\nhttp://jwilk.net/software/python-afl\nGo-fuzz (Dmitry Vyukov) AFL-inspired guided fuzzing approach for Go targets:\nhttps://github.com/dvyukov/go-fuzz\nafl.rs (Keegan McAllister) Allows Rust features to be easily fuzzed with AFL (using the LLVM mode).\nhttps://github.com/kmcallister/afl.rs\nOCaml support (KC Sivaramakrishnan) Adds AFL-compatible instrumentation to OCaml programs.\nhttps://github.com/ocamllabs/opam-repo-dev/pull/23 http://canopy.mirage.io/Posts/Fuzzing\nAFL for GCJ Java and other GCC frontends (-) GCC Java programs are actually supported out of the box - simply rename afl-gcc to afl-gcj. Unfortunately, by default, unhandled exceptions in GCJ do not result in abort() being called, so you will need to manually add a top-level exception handler that exits with SIGABRT or something equivalent.\nOther GCC-supported languages should be fairly easy to get working, but may face similar problems. See https://gcc.gnu.org/frontends.html for a list of options.\nAFL-style in-process fuzzer for LLVM (Kostya Serebryany) Provides an evolutionary instrumentation-guided fuzzing harness that allows some programs to be fuzzed without the fork / execve overhead. (Similar functionality is now available as the \u0026ldquo;persistent\u0026rdquo; feature described in the llvm_mode readme)\nhttp://llvm.org/docs/LibFuzzer.html\nTriforceAFL (Tim Newsham and Jesse Hertz) Leverages QEMU full system emulation mode to allow AFL to target operating systems and other alien worlds:\nhttps://www.nccgroup.trust/us/about-us/newsroom-and-events/blog/2016/june/project-triforce-run-afl-on-everything/\nWinAFL (Ivan Fratric) As the name implies, allows you to fuzz Windows binaries (using DynamoRio).\nhttps://github.com/ivanfratric/winafl\nAnother Windows alternative may be:\nhttps://github.com/carlosgprado/BrundleFuzz/\nNetwork fuzzing Preeny (Yan Shoshitaishvili) Provides a fairly simple way to convince dynamically linked network-centric programs to read from a file or not fork. Not AFL-specific, but described as useful by many users. Some assembly required.\nhttps://github.com/zardus/preeny\nDistributed fuzzing and related automation roving (Richo Healey) A client-server architecture for effortlessly orchestrating AFL runs across a fleet of machines. You don\u0026rsquo;t want to use this on systems that face the Internet or live in other untrusted environments.\nhttps://github.com/richo/roving\nDistfuzz-AFL (Martijn Bogaard) Simplifies the management of afl-fuzz instances on remote machines. The author notes that the current implementation isn\u0026rsquo;t secure and should not be exposed on the Internet.\nhttps://github.com/MartijnB/disfuzz-afl\nAFLDFF (quantumvm) A nice GUI for managing AFL jobs.\nhttps://github.com/quantumvm/AFLDFF\nafl-launch (Ben Nagy) Batch AFL launcher utility with a simple CLI.\nhttps://github.com/bnagy/afl-launch\nAFL Utils (rc0r) Simplifies the triage of discovered crashes, start parallel instances, etc.\nhttps://github.com/rc0r/afl-utils\nAFL crash analyzer (floyd) Another crash triage tool:\nhttps://github.com/floyd-fuh/afl-crash-analyzer\nafl-extras (fekir) Collect data, parallel afl-tmin, startup scripts.\nhttps://github.com/fekir/afl-extras\nafl-fuzzing-scripts (Tobias Ospelt) Simplifies starting up multiple parallel AFL jobs.\nhttps://github.com/floyd-fuh/afl-fuzzing-scripts/\nafl-sid (Jacek Wielemborek) Allows users to more conveniently build and deploy AFL via Docker.\nhttps://github.com/d33tah/afl-sid\nAnother Docker-related project:\nhttps://github.com/ozzyjohnson/docker-afl\nafl-monitor (Paul S. Ziegler) Provides more detailed and versatile statistics about your running AFL jobs.\nhttps://github.com/reflare/afl-monitor\nFEXM (Security in Telecommunications) Fully automated fuzzing framework, based on AFL\nhttps://github.com/fgsect/fexm\nCrash triage, coverage analysis, and other companion tools: afl-crash-analyzer (Tobias Ospelt) Makes it easier to navigate and annotate crashing test cases.\nhttps://github.com/floyd-fuh/afl-crash-analyzer/\nCrashwalk (Ben Nagy) AFL-aware tool to annotate and sort through crashing test cases.\nhttps://github.com/bnagy/crashwalk\nafl-cov (Michael Rash) Produces human-readable coverage data based on the output queue of afl-fuzz.\nhttps://github.com/mrash/afl-cov\nafl-sancov (Bhargava Shastry) Similar to afl-cov, but uses clang sanitizer instrumentation.\nhttps://github.com/bshastry/afl-sancov\nRecidiVM (Jakub Wilk) Makes it easy to estimate memory usage limits when fuzzing with ASAN or MSAN.\nhttp://jwilk.net/software/recidivm\naflize (Jacek Wielemborek) Automatically build AFL-enabled versions of Debian packages.\nhttps://github.com/d33tah/aflize\nafl-ddmin-mod (Markus Teufelberger) A variant of afl-tmin that uses a more sophisticated (but slower) minimization algorithm.\nhttps://github.com/MarkusTeufelberger/afl-ddmin-mod\nafl-kit (Kuang-che Wu) Replacements for afl-cmin and afl-tmin with additional features, such as the ability to filter crashes based on stderr patterns.\nhttps://github.com/kcwu/afl-kit\nNarrow-purpose or experimental: Cygwin support (Ali Rizvi-Santiago) Pretty self-explanatory. As per the author, this \u0026ldquo;mostly\u0026rdquo; ports AFL to Windows. Field reports welcome!\nhttps://github.com/arizvisa/afl-cygwin\nPause and resume scripts (Ben Nagy) Simple automation to suspend and resume groups of fuzzing jobs.\nhttps://github.com/bnagy/afl-trivia\nStatic binary-only instrumentation (Aleksandar Nikolich) Allows black-box binaries to be instrumented statically (i.e., by modifying the binary ahead of the time, rather than translating it on the run). Author reports better performance compared to QEMU, but occasional translation errors with stripped binaries.\nhttps://github.com/vanhauser-thc/afl-dyninst\nAFL PIN (Parker Thompson) Early-stage Intel PIN instrumentation support (from before we settled on faster-running QEMU).\nhttps://github.com/mothran/aflpin\nAFL-style instrumentation in llvm (Kostya Serebryany) Allows AFL-equivalent instrumentation to be injected at compiler level. This is currently not supported by AFL as-is, but may be useful in other projects.\nhttps://code.google.com/p/address-sanitizer/wiki/AsanCoverage#Coverage_counters\nAFL JS (Han Choongwoo) One-off optimizations to speed up the fuzzing of JavaScriptCore (now likely superseded by LLVM deferred forkserver init - see README.llvm.md).\nhttps://github.com/tunz/afl-fuzz-js\nAFL harness for fwknop (Michael Rash) An example of a fairly involved integration with AFL.\nhttps://github.com/mrash/fwknop/tree/master/test/afl\nBuilding harnesses for DNS servers (Jonathan Foote, Ron Bowes) Two articles outlining the general principles and showing some example code.\nhttps://www.fastly.com/blog/how-to-fuzz-server-american-fuzzy-lop https://goo.gl/j9EgFf\nFuzzer shell for SQLite (Richard Hipp) A simple SQL shell designed specifically for fuzzing the underlying library.\nhttp://www.sqlite.org/src/artifact/9e7e273da2030371\nSupport for Python mutation modules (Christian Holler) now integrated in AFL++, originally from here https://github.com/choller/afl/blob/master/docs/mozilla/python_modules.txt\nSupport for selective instrumentation (Christian Holler) now integrated in AFL++, originally from here https://github.com/choller/afl/blob/master/docs/mozilla/partial_instrumentation.txt\nSyzkaller (Dmitry Vyukov) A similar guided approach as applied to fuzzing syscalls:\nhttps://github.com/google/syzkaller/wiki/Found-Bugs https://github.com/dvyukov/linux/commit/33787098ffaaa83b8a7ccf519913ac5fd6125931 http://events.linuxfoundation.org/sites/events/files/slides/AFL%20filesystem%20fuzzing%2C%20Vault%202016_0.pdf\nKernel Snapshot Fuzzing using Unicornafl (Security in Telecommunications) https://github.com/fgsect/unicorefuzz\nAndroid support (ele7enxxh) Based on a somewhat dated version of AFL:\nhttps://github.com/ele7enxxh/android-afl\nCGI wrapper (floyd) Facilitates the testing of CGI scripts.\nhttps://github.com/floyd-fuh/afl-cgi-wrapper\nFuzzing difficulty estimation (Marcel Boehme) A fork of AFL that tries to quantify the likelihood of finding additional paths or crashes at any point in a fuzzing job.\nhttps://github.com/mboehme/pythia\n"}),a.add({id:35,href:'/docs/status_screen/',title:"Status Screen",content:"Understanding the status screen This document provides an overview of the status screen - plus tips for troubleshooting any warnings and red text shown in the UI. See README.md for the general instruction manual.\nA note about colors The status screen and error messages use colors to keep things readable and attract your attention to the most important details. For example, red almost always means \u0026ldquo;consult this doc\u0026rdquo; :-)\nUnfortunately, the UI will render correctly only if your terminal is using traditional un*x palette (white text on black background) or something close to that.\nIf you are using inverse video, you may want to change your settings, say:\n For GNOME Terminal, go to Edit \u0026gt; Profile preferences, select the \u0026ldquo;colors\u0026rdquo; tab, and from the list of built-in schemes, choose \u0026ldquo;white on black\u0026rdquo;. For the MacOS X Terminal app, open a new window using the \u0026ldquo;Pro\u0026rdquo; scheme via the Shell \u0026gt; New Window menu (or make \u0026ldquo;Pro\u0026rdquo; your default). Alternatively, if you really like your current colors, you can edit config.h to comment out USE_COLORS, then do make clean all.\nI\u0026rsquo;m not aware of any other simple way to make this work without causing other side effects - sorry about that.\nWith that out of the way, let\u0026rsquo;s talk about what\u0026rsquo;s actually on the screen\u0026hellip;\nThe status bar american fuzzy lop ++3.01a (default) [fast] {0} The top line shows you which mode afl-fuzz is running in (normal: \u0026ldquo;american fuzy lop\u0026rdquo;, crash exploration mode: \u0026ldquo;peruvian rabbit mode\u0026rdquo;) and the version of AFL++. Next to the version is the banner, which, if not set with -T by hand, will either show the binary name being fuzzed, or the -M/-S main/secondary name for parallel fuzzing. Second to last is the power schedule mode being run (default: fast). Finally, the last item is the CPU id.\nProcess timing +----------------------------------------------------+ | run time : 0 days, 8 hrs, 32 min, 43 sec | | last new path : 0 days, 0 hrs, 6 min, 40 sec | | last uniq crash : none seen yet | | last uniq hang : 0 days, 1 hrs, 24 min, 32 sec | +----------------------------------------------------+ This section is fairly self-explanatory: it tells you how long the fuzzer has been running and how much time has elapsed since its most recent finds. This is broken down into \u0026ldquo;paths\u0026rdquo; (a shorthand for test cases that trigger new execution patterns), crashes, and hangs.\nWhen it comes to timing: there is no hard rule, but most fuzzing jobs should be expected to run for days or weeks; in fact, for a moderately complex project, the first pass will probably take a day or so. Every now and then, some jobs will be allowed to run for months.\nThere\u0026rsquo;s one important thing to watch out for: if the tool is not finding new paths within several minutes of starting, you\u0026rsquo;re probably not invoking the target binary correctly and it never gets to parse the input files we\u0026rsquo;re throwing at it; another possible explanations are that the default memory limit (-m) is too restrictive, and the program exits after failing to allocate a buffer very early on; or that the input files are patently invalid and always fail a basic header check.\nIf there are no new paths showing up for a while, you will eventually see a big red warning in this section, too :-)\nOverall results +-----------------------+ | cycles done : 0 | | total paths : 2095 | | uniq crashes : 0 | | uniq hangs : 19 | +-----------------------+ The first field in this section gives you the count of queue passes done so far - that is, the number of times the fuzzer went over all the interesting test cases discovered so far, fuzzed them, and looped back to the very beginning. Every fuzzing session should be allowed to complete at least one cycle; and ideally, should run much longer than that.\nAs noted earlier, the first pass can take a day or longer, so sit back and relax.\nTo help make the call on when to hit Ctrl-C, the cycle counter is color-coded. It is shown in magenta during the first pass, progresses to yellow if new finds are still being made in subsequent rounds, then blue when that ends - and finally, turns green after the fuzzer hasn\u0026rsquo;t been seeing any action for a longer while.\nThe remaining fields in this part of the screen should be pretty obvious: there\u0026rsquo;s the number of test cases (\u0026ldquo;paths\u0026rdquo;) discovered so far, and the number of unique faults. The test cases, crashes, and hangs can be explored in real-time by browsing the output directory, as discussed in README.md.\nCycle progress +-------------------------------------+ | now processing : 1296 (61.86%) | | paths timed out : 0 (0.00%) | +-------------------------------------+ This box tells you how far along the fuzzer is with the current queue cycle: it shows the ID of the test case it is currently working on, plus the number of inputs it decided to ditch because they were persistently timing out.\nThe \u0026ldquo;*\u0026rdquo; suffix sometimes shown in the first line means that the currently processed path is not \u0026ldquo;favored\u0026rdquo; (a property discussed later on).\nMap coverage +--------------------------------------+ | map density : 10.15% / 29.07% | | count coverage : 4.03 bits/tuple | +--------------------------------------+ The section provides some trivia about the coverage observed by the instrumentation embedded in the target binary.\nThe first line in the box tells you how many branch tuples we have already hit, in proportion to how much the bitmap can hold. The number on the left describes the current input; the one on the right is the value for the entire input corpus.\nBe wary of extremes:\n Absolute numbers below 200 or so suggest one of three things: that the program is extremely simple; that it is not instrumented properly (e.g., due to being linked against a non-instrumented copy of the target library); or that it is bailing out prematurely on your input test cases. The fuzzer will try to mark this in pink, just to make you aware. Percentages over 70% may very rarely happen with very complex programs that make heavy use of template-generated code. Because high bitmap density makes it harder for the fuzzer to reliably discern new program states, I recommend recompiling the binary with AFL_INST_RATIO=10 or so and trying again (see env_variables.md). The fuzzer will flag high percentages in red. Chances are, you will never see that unless you\u0026rsquo;re fuzzing extremely hairy software (say, v8, perl, ffmpeg). The other line deals with the variability in tuple hit counts seen in the binary. In essence, if every taken branch is always taken a fixed number of times for all the inputs we have tried, this will read 1.00. As we manage to trigger other hit counts for every branch, the needle will start to move toward 8.00 (every bit in the 8-bit map hit), but will probably never reach that extreme.\nTogether, the values can be useful for comparing the coverage of several different fuzzing jobs that rely on the same instrumented binary.\nStage progress +-------------------------------------+ | now trying : interest 32/8 | | stage execs : 3996/34.4k (11.62%) | | total execs : 27.4M | | exec speed : 891.7/sec | +-------------------------------------+ This part gives you an in-depth peek at what the fuzzer is actually doing right now. It tells you about the current stage, which can be any of:\n calibration - a pre-fuzzing stage where the execution path is examined to detect anomalies, establish baseline execution speed, and so on. Executed very briefly whenever a new find is being made. trim L/S - another pre-fuzzing stage where the test case is trimmed to the shortest form that still produces the same execution path. The length (L) and stepover (S) are chosen in general relationship to file size. bitflip L/S - deterministic bit flips. There are L bits toggled at any given time, walking the input file with S-bit increments. The current L/S variants are: 1/1, 2/1, 4/1, 8/8, 16/8, 32/8. arith L/8 - deterministic arithmetics. The fuzzer tries to subtract or add small integers to 8-, 16-, and 32-bit values. The stepover is always 8 bits. interest L/8 - deterministic value overwrite. The fuzzer has a list of known \u0026ldquo;interesting\u0026rdquo; 8-, 16-, and 32-bit values to try. The stepover is 8 bits. extras - deterministic injection of dictionary terms. This can be shown as \u0026ldquo;user\u0026rdquo; or \u0026ldquo;auto\u0026rdquo;, depending on whether the fuzzer is using a user-supplied dictionary (-x) or an auto-created one. You will also see \u0026ldquo;over\u0026rdquo; or \u0026ldquo;insert\u0026rdquo;, depending on whether the dictionary words overwrite existing data or are inserted by offsetting the remaining data to accommodate their length. havoc - a sort-of-fixed-length cycle with stacked random tweaks. The operations attempted during this stage include bit flips, overwrites with random and \u0026ldquo;interesting\u0026rdquo; integers, block deletion, block duplication, plus assorted dictionary-related operations (if a dictionary is supplied in the first place). splice - a last-resort strategy that kicks in after the first full queue cycle with no new paths. It is equivalent to \u0026lsquo;havoc\u0026rsquo;, except that it first splices together two random inputs from the queue at some arbitrarily selected midpoint. sync - a stage used only when -M or -S is set (see parallel_fuzzing.md). No real fuzzing is involved, but the tool scans the output from other fuzzers and imports test cases as necessary. The first time this is done, it may take several minutes or so. The remaining fields should be fairly self-evident: there\u0026rsquo;s the exec count progress indicator for the current stage, a global exec counter, and a benchmark for the current program execution speed. This may fluctuate from one test case to another, but the benchmark should be ideally over 500 execs/sec most of the time - and if it stays below 100, the job will probably take very long.\nThe fuzzer will explicitly warn you about slow targets, too. If this happens, see the perf_tips.md file included with the fuzzer for ideas on how to speed things up.\nFindings in depth +--------------------------------------+ | favored paths : 879 (41.96%) | | new edges on : 423 (20.19%) | | total crashes : 0 (0 unique) | | total tmouts : 24 (19 unique) | +--------------------------------------+ This gives you several metrics that are of interest mostly to complete nerds. The section includes the number of paths that the fuzzer likes the most based on a minimization algorithm baked into the code (these will get considerably more air time), and the number of test cases that actually resulted in better edge coverage (versus just pushing the branch hit counters up). There are also additional, more detailed counters for crashes and timeouts.\nNote that the timeout counter is somewhat different from the hang counter; this one includes all test cases that exceeded the timeout, even if they did not exceed it by a margin sufficient to be classified as hangs.\nFuzzing strategy yields +-----------------------------------------------------+ | bit flips : 57/289k, 18/289k, 18/288k | | byte flips : 0/36.2k, 4/35.7k, 7/34.6k | | arithmetics : 53/2.54M, 0/537k, 0/55.2k | | known ints : 8/322k, 12/1.32M, 10/1.70M | | dictionary : 9/52k, 1/53k, 1/24k | |havoc/splice : 1903/20.0M, 0/0 | |py/custom/rq : unused, 53/2.54M, unused | | trim/eff : 20.31%/9201, 17.05% | +-----------------------------------------------------+ This is just another nerd-targeted section keeping track of how many paths we have netted, in proportion to the number of execs attempted, for each of the fuzzing strategies discussed earlier on. This serves to convincingly validate assumptions about the usefulness of the various approaches taken by afl-fuzz.\nThe trim strategy stats in this section are a bit different than the rest. The first number in this line shows the ratio of bytes removed from the input files; the second one corresponds to the number of execs needed to achieve this goal. Finally, the third number shows the proportion of bytes that, although not possible to remove, were deemed to have no effect and were excluded from some of the more expensive deterministic fuzzing steps.\nNote that when deterministic mutation mode is off (which is the default because it is not very efficient) the first five lines display \u0026ldquo;disabled (default, enable with -D)\u0026rdquo;.\nOnly what is activated will have counter shown.\nPath geometry +---------------------+ | levels : 5 | | pending : 1570 | | pend fav : 583 | | own finds : 0 | | imported : 0 | | stability : 100.00% | +---------------------+ The first field in this section tracks the path depth reached through the guided fuzzing process. In essence: the initial test cases supplied by the user are considered \u0026ldquo;level 1\u0026rdquo;. The test cases that can be derived from that through traditional fuzzing are considered \u0026ldquo;level 2\u0026rdquo;; the ones derived by using these as inputs to subsequent fuzzing rounds are \u0026ldquo;level 3\u0026rdquo;; and so forth. The maximum depth is therefore a rough proxy for how much value you\u0026rsquo;re getting out of the instrumentation-guided approach taken by afl-fuzz.\nThe next field shows you the number of inputs that have not gone through any fuzzing yet. The same stat is also given for \u0026ldquo;favored\u0026rdquo; entries that the fuzzer really wants to get to in this queue cycle (the non-favored entries may have to wait a couple of cycles to get their chance).\nNext, we have the number of new paths found during this fuzzing section and imported from other fuzzer instances when doing parallelized fuzzing; and the extent to which identical inputs appear to sometimes produce variable behavior in the tested binary.\nThat last bit is actually fairly interesting: it measures the consistency of observed traces. If a program always behaves the same for the same input data, it will earn a score of 100%. When the value is lower but still shown in purple, the fuzzing process is unlikely to be negatively affected. If it goes into red, you may be in trouble, since AFL will have difficulty discerning between meaningful and \u0026ldquo;phantom\u0026rdquo; effects of tweaking the input file.\nNow, most targets will just get a 100% score, but when you see lower figures, there are several things to look at:\n The use of uninitialized memory in conjunction with some intrinsic sources of entropy in the tested binary. Harmless to AFL, but could be indicative of a security bug. Attempts to manipulate persistent resources, such as left over temporary files or shared memory objects. This is usually harmless, but you may want to double-check to make sure the program isn\u0026rsquo;t bailing out prematurely. Running out of disk space, SHM handles, or other global resources can trigger this, too. Hitting some functionality that is actually designed to behave randomly. Generally harmless. For example, when fuzzing sqlite, an input like select random(); will trigger a variable execution path. Multiple threads executing at once in semi-random order. This is harmless when the \u0026lsquo;stability\u0026rsquo; metric stays over 90% or so, but can become an issue if not. Here\u0026rsquo;s what to try: Use afl-clang-fast from instrumentation - it uses a thread-local tracking model that is less prone to concurrency issues, See if the target can be compiled or run without threads. Common ./configure options include --without-threads, --disable-pthreads, or --disable-openmp. Replace pthreads with GNU Pth (https://www.gnu.org/software/pth/), which allows you to use a deterministic scheduler. In persistent mode, minor drops in the \u0026ldquo;stability\u0026rdquo; metric can be normal, because not all the code behaves identically when re-entered; but major dips may signify that the code within __AFL_LOOP() is not behaving correctly on subsequent iterations (e.g., due to incomplete clean-up or reinitialization of the state) and that most of the fuzzing effort goes to waste. The paths where variable behavior is detected are marked with a matching entry in the \u0026lt;out_dir\u0026gt;/queue/.state/variable_behavior/ directory, so you can look them up easily.\nCPU load [cpu: 25%] This tiny widget shows the apparent CPU utilization on the local system. It is calculated by taking the number of processes in the \u0026ldquo;runnable\u0026rdquo; state, and then comparing it to the number of logical cores on the system.\nIf the value is shown in green, you are using fewer CPU cores than available on your system and can probably parallelize to improve performance; for tips on how to do that, see parallel_fuzzing.md.\nIf the value is shown in red, your CPU is possibly oversubscribed, and running additional fuzzers may not give you any benefits.\nOf course, this benchmark is very simplistic; it tells you how many processes are ready to run, but not how resource-hungry they may be. It also doesn\u0026rsquo;t distinguish between physical cores, logical cores, and virtualized CPUs; the performance characteristics of each of these will differ quite a bit.\nIf you want a more accurate measurement, you can run the afl-gotcpu utility from the command line.\nAddendum: status and plot files For unattended operation, some of the key status screen information can be also found in a machine-readable format in the fuzzer_stats file in the output directory. This includes:\n start_time - unix time indicating the start time of afl-fuzz last_update - unix time corresponding to the last update of this file run_time - run time in seconds to the last update of this file fuzzer_pid - PID of the fuzzer process cycles_done - queue cycles completed so far cycles_wo_finds - number of cycles without any new paths found execs_done - number of execve() calls attempted execs_per_sec - overall number of execs per second paths_total - total number of entries in the queue paths_favored - number of queue entries that are favored paths_found - number of entries discovered through local fuzzing paths_imported - number of entries imported from other instances max_depth - number of levels in the generated data set cur_path - currently processed entry number pending_favs - number of favored entries still waiting to be fuzzed pending_total - number of all entries waiting to be fuzzed variable_paths - number of test cases showing variable behavior stability - percentage of bitmap bytes that behave consistently bitmap_cvg - percentage of edge coverage found in the map so far unique_crashes - number of unique crashes recorded unique_hangs - number of unique hangs encountered last_path - seconds since the last path was found last_crash - seconds since the last crash was found last_hang - seconds since the last hang was found execs_since_crash - execs since the last crash was found exec_timeout - the -t command line value slowest_exec_ms - real time of the slowest execution in ms peak_rss_mb - max rss usage reached during fuzzing in MB edges_found - how many edges have been found var_byte_count - how many edges are non-deterministic afl_banner - banner text (e.g. the target name) afl_version - the version of AFL used target_mode - default, persistent, qemu, unicorn, non-instrumented command_line - full command line used for the fuzzing session Most of these map directly to the UI elements discussed earlier on.\nOn top of that, you can also find an entry called plot_data, containing a plottable history for most of these fields. If you have gnuplot installed, you can turn this into a nice progress report with the included afl-plot tool.\nAddendum: Automatically send metrics with StatsD In a CI environment or when running multiple fuzzers, it can be tedious to log into each of them or deploy scripts to read the fuzzer statistics. Using AFL_STATSD (and the other related environment variables AFL_STATSD_HOST, AFL_STATSD_PORT, AFL_STATSD_TAGS_FLAVOR) you can automatically send metrics to your favorite StatsD server. Depending on your StatsD server you will be able to monitor, trigger alerts or perform actions based on these metrics (e.g: alert on slow exec/s for a new build, threshold of crashes, time since last crash \u0026gt; X, etc).\nThe selected metrics are a subset of all the metrics found in the status and in the plot file. The list is the following: cycle_done, cycles_wo_finds, execs_done,execs_per_sec, paths_total, paths_favored, paths_found, paths_imported, max_depth, cur_path, pending_favs, pending_total, variable_paths, unique_crashes, unique_hangs, total_crashes, slowest_exec_ms, edges_found, var_byte_count, havoc_expansion. Their definitions can be found in the addendum above.\nWhen using multiple fuzzer instances with StatsD it is strongly recommended to setup the flavor (AFL_STATSD_TAGS_FLAVOR) to match your StatsD server. This will allow you to see individual fuzzer performance, detect bad ones, see the progress of each strategy\u0026hellip;\n"}),a.add({id:36,href:'/docs/technical_details/',title:"Technical Details",content:"Technical \u0026ldquo;whitepaper\u0026rdquo; for afl-fuzz NOTE: this document is rather outdated!\nThis document provides a quick overview of the guts of American Fuzzy Lop. See README.md for the general instruction manual; and for a discussion of motivations and design goals behind AFL, see historical_notes.md.\n0. Design statement American Fuzzy Lop does its best not to focus on any singular principle of operation and not be a proof-of-concept for any specific theory. The tool can be thought of as a collection of hacks that have been tested in practice, found to be surprisingly effective, and have been implemented in the simplest, most robust way I could think of at the time.\nMany of the resulting features are made possible thanks to the availability of lightweight instrumentation that served as a foundation for the tool, but this mechanism should be thought of merely as a means to an end. The only true governing principles are speed, reliability, and ease of use.\n1. Coverage measurements The instrumentation injected into compiled programs captures branch (edge) coverage, along with coarse branch-taken hit counts. The code injected at branch points is essentially equivalent to:\ncur_location = \u0026lt;COMPILE_TIME_RANDOM\u0026gt;; shared_mem[cur_location ^ prev_location]++; prev_location = cur_location \u0026gt;\u0026gt; 1; The cur_location value is generated randomly to simplify the process of linking complex projects and keep the XOR output distributed uniformly.\nThe shared_mem[] array is a 64 kB SHM region passed to the instrumented binary by the caller. Every byte set in the output map can be thought of as a hit for a particular (branch_src, branch_dst) tuple in the instrumented code.\nThe size of the map is chosen so that collisions are sporadic with almost all of the intended targets, which usually sport between 2k and 10k discoverable branch points:\n Branch cnt | Colliding tuples | Example targets ------------+------------------+----------------- 1,000 | 0.75% | giflib, lzo 2,000 | 1.5% | zlib, tar, xz 5,000 | 3.5% | libpng, libwebp 10,000 | 7% | libxml 20,000 | 14% | sqlite 50,000 | 30% | - At the same time, its size is small enough to allow the map to be analyzed in a matter of microseconds on the receiving end, and to effortlessly fit within L2 cache.\nThis form of coverage provides considerably more insight into the execution path of the program than simple block coverage. In particular, it trivially distinguishes between the following execution traces:\n A -\u0026gt; B -\u0026gt; C -\u0026gt; D -\u0026gt; E (tuples: AB, BC, CD, DE) A -\u0026gt; B -\u0026gt; D -\u0026gt; C -\u0026gt; E (tuples: AB, BD, DC, CE) This aids the discovery of subtle fault conditions in the underlying code, because security vulnerabilities are more often associated with unexpected or incorrect state transitions than with merely reaching a new basic block.\nThe reason for the shift operation in the last line of the pseudocode shown earlier in this section is to preserve the directionality of tuples (without this, A ^ B would be indistinguishable from B ^ A) and to retain the identity of tight loops (otherwise, A ^ A would be obviously equal to B ^ B).\nThe absence of simple saturating arithmetic opcodes on Intel CPUs means that the hit counters can sometimes wrap around to zero. Since this is a fairly unlikely and localized event, it\u0026rsquo;s seen as an acceptable performance trade-off.\n2. Detecting new behaviors The fuzzer maintains a global map of tuples seen in previous executions; this data can be rapidly compared with individual traces and updated in just a couple of dword- or qword-wide instructions and a simple loop.\nWhen a mutated input produces an execution trace containing new tuples, the corresponding input file is preserved and routed for additional processing later on (see section #3). Inputs that do not trigger new local-scale state transitions in the execution trace (i.e., produce no new tuples) are discarded, even if their overall control flow sequence is unique.\nThis approach allows for a very fine-grained and long-term exploration of program state while not having to perform any computationally intensive and fragile global comparisons of complex execution traces, and while avoiding the scourge of path explosion.\nTo illustrate the properties of the algorithm, consider that the second trace shown below would be considered substantially new because of the presence of new tuples (CA, AE):\n #1: A -\u0026gt; B -\u0026gt; C -\u0026gt; D -\u0026gt; E #2: A -\u0026gt; B -\u0026gt; C -\u0026gt; A -\u0026gt; E At the same time, with #2 processed, the following pattern will not be seen as unique, despite having a markedly different overall execution path:\n #3: A -\u0026gt; B -\u0026gt; C -\u0026gt; A -\u0026gt; B -\u0026gt; C -\u0026gt; A -\u0026gt; B -\u0026gt; C -\u0026gt; D -\u0026gt; E In addition to detecting new tuples, the fuzzer also considers coarse tuple hit counts. These are divided into several buckets:\n 1, 2, 3, 4-7, 8-15, 16-31, 32-127, 128+ To some extent, the number of buckets is an implementation artifact: it allows an in-place mapping of an 8-bit counter generated by the instrumentation to an 8-position bitmap relied on by the fuzzer executable to keep track of the already-seen execution counts for each tuple.\nChanges within the range of a single bucket are ignored; transition from one bucket to another is flagged as an interesting change in program control flow, and is routed to the evolutionary process outlined in the section below.\nThe hit count behavior provides a way to distinguish between potentially interesting control flow changes, such as a block of code being executed twice when it was normally hit only once. At the same time, it is fairly insensitive to empirically less notable changes, such as a loop going from 47 cycles to 48. The counters also provide some degree of \u0026ldquo;accidental\u0026rdquo; immunity against tuple collisions in dense trace maps.\nThe execution is policed fairly heavily through memory and execution time limits; by default, the timeout is set at 5x the initially-calibrated execution speed, rounded up to 20 ms. The aggressive timeouts are meant to prevent dramatic fuzzer performance degradation by descending into tarpits that, say, improve coverage by 1% while being 100x slower; we pragmatically reject them and hope that the fuzzer will find a less expensive way to reach the same code. Empirical testing strongly suggests that more generous time limits are not worth the cost.\n3. Evolving the input queue Mutated test cases that produced new state transitions within the program are added to the input queue and used as a starting point for future rounds of fuzzing. They supplement, but do not automatically replace, existing finds.\nIn contrast to more greedy genetic algorithms, this approach allows the tool to progressively explore various disjoint and possibly mutually incompatible features of the underlying data format, as shown in this image:\nSeveral practical examples of the results of this algorithm are discussed here:\nhttp://lcamtuf.blogspot.com/2014/11/pulling-jpegs-out-of-thin-air.html http://lcamtuf.blogspot.com/2014/11/afl-fuzz-nobody-expects-cdata-sections.html\nThe synthetic corpus produced by this process is essentially a compact collection of \u0026ldquo;hmm, this does something new!\u0026rdquo; input files, and can be used to seed any other testing processes down the line (for example, to manually stress-test resource-intensive desktop apps).\nWith this approach, the queue for most targets grows to somewhere between 1k and 10k entries; approximately 10-30% of this is attributable to the discovery of new tuples, and the remainder is associated with changes in hit counts.\nThe following table compares the relative ability to discover file syntax and explore program states when using several different approaches to guided fuzzing. The instrumented target was GNU patch 2.7k.3 compiled with -O3 and seeded with a dummy text file; the session consisted of a single pass over the input queue with afl-fuzz:\n Fuzzer guidance | Blocks | Edges | Edge hit | Highest-coverage strategy used | reached | reached | cnt var | test case generated ------------------+---------+---------+----------+--------------------------- (Initial file) | 156 | 163 | 1.00 | (none) | | | | Blind fuzzing S | 182 | 205 | 2.23 | First 2 B of RCS diff Blind fuzzing L | 228 | 265 | 2.23 | First 4 B of -c mode diff Block coverage | 855 | 1,130 | 1.57 | Almost-valid RCS diff Edge coverage | 1,452 | 2,070 | 2.18 | One-chunk -c mode diff AFL model | 1,765 | 2,597 | 4.99 | Four-chunk -c mode diff The first entry for blind fuzzing (\u0026ldquo;S\u0026rdquo;) corresponds to executing just a single round of testing; the second set of figures (\u0026ldquo;L\u0026rdquo;) shows the fuzzer running in a loop for a number of execution cycles comparable with that of the instrumented runs, which required more time to fully process the growing queue.\nRoughly similar results have been obtained in a separate experiment where the fuzzer was modified to compile out all the random fuzzing stages and leave just a series of rudimentary, sequential operations such as walking bit flips. Because this mode would be incapable of altering the size of the input file, the sessions were seeded with a valid unified diff:\n Queue extension | Blocks | Edges | Edge hit | Number of unique strategy used | reached | reached | cnt var | crashes found ------------------+---------+---------+----------+------------------ (Initial file) | 624 | 717 | 1.00 | - | | | | Blind fuzzing | 1,101 | 1,409 | 1.60 | 0 Block coverage | 1,255 | 1,649 | 1.48 | 0 Edge coverage | 1,259 | 1,734 | 1.72 | 0 AFL model | 1,452 | 2,040 | 3.16 | 1 At noted earlier on, some of the prior work on genetic fuzzing relied on maintaining a single test case and evolving it to maximize coverage. At least in the tests described above, this \u0026ldquo;greedy\u0026rdquo; approach appears to confer no substantial benefits over blind fuzzing strategies.\n4. Culling the corpus The progressive state exploration approach outlined above means that some of the test cases synthesized later on in the game may have edge coverage that is a strict superset of the coverage provided by their ancestors.\nTo optimize the fuzzing effort, AFL periodically re-evaluates the queue using a fast algorithm that selects a smaller subset of test cases that still cover every tuple seen so far, and whose characteristics make them particularly favorable to the tool.\nThe algorithm works by assigning every queue entry a score proportional to its execution latency and file size; and then selecting lowest-scoring candidates for each tuple.\nThe tuples are then processed sequentially using a simple workflow:\n Find next tuple not yet in the temporary working set, Locate the winning queue entry for this tuple, Register all tuples present in that entry\u0026rsquo;s trace in the working set, Go to #1 if there are any missing tuples in the set. The generated corpus of \u0026ldquo;favored\u0026rdquo; entries is usually 5-10x smaller than the starting data set. Non-favored entries are not discarded, but they are skipped with varying probabilities when encountered in the queue:\n If there are new, yet-to-be-fuzzed favorites present in the queue, 99% of non-favored entries will be skipped to get to the favored ones. If there are no new favorites: If the current non-favored entry was fuzzed before, it will be skipped 95% of the time. If it hasn\u0026rsquo;t gone through any fuzzing rounds yet, the odds of skipping drop down to 75%. Based on empirical testing, this provides a reasonable balance between queue cycling speed and test case diversity.\nSlightly more sophisticated but much slower culling can be performed on input or output corpora with afl-cmin. This tool permanently discards the redundant entries and produces a smaller corpus suitable for use with afl-fuzz or external tools.\n5. Trimming input files File size has a dramatic impact on fuzzing performance, both because large files make the target binary slower, and because they reduce the likelihood that a mutation would touch important format control structures, rather than redundant data blocks. This is discussed in more detail in perf_tips.md.\nThe possibility that the user will provide a low-quality starting corpus aside, some types of mutations can have the effect of iteratively increasing the size of the generated files, so it is important to counter this trend.\nLuckily, the instrumentation feedback provides a simple way to automatically trim down input files while ensuring that the changes made to the files have no impact on the execution path.\nThe built-in trimmer in afl-fuzz attempts to sequentially remove blocks of data with variable length and stepover; any deletion that doesn\u0026rsquo;t affect the checksum of the trace map is committed to disk. The trimmer is not designed to be particularly thorough; instead, it tries to strike a balance between precision and the number of execve() calls spent on the process, selecting the block size and stepover to match. The average per-file gains are around 5-20%.\nThe standalone afl-tmin tool uses a more exhaustive, iterative algorithm, and also attempts to perform alphabet normalization on the trimmed files. The operation of afl-tmin is as follows.\nFirst, the tool automatically selects the operating mode. If the initial input crashes the target binary, afl-tmin will run in non-instrumented mode, simply keeping any tweaks that produce a simpler file but still crash the target. The same mode is used for hangs, if -H (hang mode) is specified. If the target is non-crashing, the tool uses an instrumented mode and keeps only the tweaks that produce exactly the same execution path.\nThe actual minimization algorithm is:\n Attempt to zero large blocks of data with large stepovers. Empirically, this is shown to reduce the number of execs by preempting finer-grained efforts later on. Perform a block deletion pass with decreasing block sizes and stepovers, binary-search-style. Perform alphabet normalization by counting unique characters and trying to bulk-replace each with a zero value. As a last result, perform byte-by-byte normalization on non-zero bytes. Instead of zeroing with a 0x00 byte, afl-tmin uses the ASCII digit \u0026lsquo;0\u0026rsquo;. This is done because such a modification is much less likely to interfere with text parsing, so it is more likely to result in successful minimization of text files.\nThe algorithm used here is less involved than some other test case minimization approaches proposed in academic work, but requires far fewer executions and tends to produce comparable results in most real-world applications.\n6. Fuzzing strategies The feedback provided by the instrumentation makes it easy to understand the value of various fuzzing strategies and optimize their parameters so that they work equally well across a wide range of file types. The strategies used by afl-fuzz are generally format-agnostic and are discussed in more detail here:\nhttp://lcamtuf.blogspot.com/2014/08/binary-fuzzing-strategies-what-works.html\nIt is somewhat notable that especially early on, most of the work done by afl-fuzz is actually highly deterministic, and progresses to random stacked modifications and test case splicing only at a later stage. The deterministic strategies include:\n Sequential bit flips with varying lengths and stepovers, Sequential addition and subtraction of small integers, Sequential insertion of known interesting integers (0, 1, INT_MAX, etc), The purpose of opening with deterministic steps is related to their tendency to produce compact test cases and small diffs between the non-crashing and crashing inputs.\nWith deterministic fuzzing out of the way, the non-deterministic steps include stacked bit flips, insertions, deletions, arithmetics, and splicing of different test cases.\nThe relative yields and execve() costs of all these strategies have been investigated and are discussed in the aforementioned blog post.\nFor the reasons discussed in historical_notes.md (chiefly, performance, simplicity, and reliability), AFL generally does not try to reason about the relationship between specific mutations and program states; the fuzzing steps are nominally blind, and are guided only by the evolutionary design of the input queue.\nThat said, there is one (trivial) exception to this rule: when a new queue entry goes through the initial set of deterministic fuzzing steps, and tweaks to some regions in the file are observed to have no effect on the checksum of the execution path, they may be excluded from the remaining phases of deterministic fuzzing - and the fuzzer may proceed straight to random tweaks. Especially for verbose, human-readable data formats, this can reduce the number of execs by 10-40% or so without an appreciable drop in coverage. In extreme cases, such as normally block-aligned tar archives, the gains can be as high as 90%.\nBecause the underlying \u0026ldquo;effector maps\u0026rdquo; are local every queue entry and remain in force only during deterministic stages that do not alter the size or the general layout of the underlying file, this mechanism appears to work very reliably and proved to be simple to implement.\n7. Dictionaries The feedback provided by the instrumentation makes it easy to automatically identify syntax tokens in some types of input files, and to detect that certain combinations of predefined or auto-detected dictionary terms constitute a valid grammar for the tested parser.\nA discussion of how these features are implemented within afl-fuzz can be found here:\nhttp://lcamtuf.blogspot.com/2015/01/afl-fuzz-making-up-grammar-with.html\nIn essence, when basic, typically easily-obtained syntax tokens are combined together in a purely random manner, the instrumentation and the evolutionary design of the queue together provide a feedback mechanism to differentiate between meaningless mutations and ones that trigger new behaviors in the instrumented code - and to incrementally build more complex syntax on top of this discovery.\nThe dictionaries have been shown to enable the fuzzer to rapidly reconstruct the grammar of highly verbose and complex languages such as JavaScript, SQL, or XML; several examples of generated SQL statements are given in the blog post mentioned above.\nInterestingly, the AFL instrumentation also allows the fuzzer to automatically isolate syntax tokens already present in an input file. It can do so by looking for run of bytes that, when flipped, produce a consistent change to the program\u0026rsquo;s execution path; this is suggestive of an underlying atomic comparison to a predefined value baked into the code. The fuzzer relies on this signal to build compact \u0026ldquo;auto dictionaries\u0026rdquo; that are then used in conjunction with other fuzzing strategies.\n8. De-duping crashes De-duplication of crashes is one of the more important problems for any competent fuzzing tool. Many of the naive approaches run into problems; in particular, looking just at the faulting address may lead to completely unrelated issues being clustered together if the fault happens in a common library function (say, strcmp, strcpy); while checksumming call stack backtraces can lead to extreme crash count inflation if the fault can be reached through a number of different, possibly recursive code paths.\nThe solution implemented in afl-fuzz considers a crash unique if any of two conditions are met:\n The crash trace includes a tuple not seen in any of the previous crashes, The crash trace is missing a tuple that was always present in earlier faults. The approach is vulnerable to some path count inflation early on, but exhibits a very strong self-limiting effect, similar to the execution path analysis logic that is the cornerstone of afl-fuzz.\n9. Investigating crashes The exploitability of many types of crashes can be ambiguous; afl-fuzz tries to address this by providing a crash exploration mode where a known-faulting test case is fuzzed in a manner very similar to the normal operation of the fuzzer, but with a constraint that causes any non-crashing mutations to be thrown away.\nA detailed discussion of the value of this approach can be found here:\nhttp://lcamtuf.blogspot.com/2014/11/afl-fuzz-crash-exploration-mode.html\nThe method uses instrumentation feedback to explore the state of the crashing program to get past the ambiguous faulting condition and then isolate the newly-found inputs for human review.\nOn the subject of crashes, it is worth noting that in contrast to normal queue entries, crashing inputs are not trimmed; they are kept exactly as discovered to make it easier to compare them to the parent, non-crashing entry in the queue. That said, afl-tmin can be used to shrink them at will.\n10 The fork server To improve performance, afl-fuzz uses a \u0026ldquo;fork server\u0026rdquo;, where the fuzzed process goes through execve(), linking, and libc initialization only once, and is then cloned from a stopped process image by leveraging copy-on-write. The implementation is described in more detail here:\nhttp://lcamtuf.blogspot.com/2014/10/fuzzing-binaries-without-execve.html\nThe fork server is an integral aspect of the injected instrumentation and simply stops at the first instrumented function to await commands from afl-fuzz.\nWith fast targets, the fork server can offer considerable performance gains, usually between 1.5x and 2x. It is also possible to:\n Use the fork server in manual (\u0026ldquo;deferred\u0026rdquo;) mode, skipping over larger, user-selected chunks of initialization code. It requires very modest code changes to the targeted program, and With some targets, can produce 10x+ performance gains. Enable \u0026ldquo;persistent\u0026rdquo; mode, where a single process is used to try out multiple inputs, greatly limiting the overhead of repetitive fork() calls. This generally requires some code changes to the targeted program, but can improve the performance of fast targets by a factor of 5 or more - approximating the benefits of in-process fuzzing jobs while still maintaining very robust isolation between the fuzzer process and the targeted binary. 11. Parallelization The parallelization mechanism relies on periodically examining the queues produced by independently-running instances on other CPU cores or on remote machines, and then selectively pulling in the test cases that, when tried out locally, produce behaviors not yet seen by the fuzzer at hand.\nThis allows for extreme flexibility in fuzzer setup, including running synced instances against different parsers of a common data format, often with synergistic effects.\nFor more information about this design, see parallel_fuzzing.md.\n12. Binary-only instrumentation Instrumentation of black-box, binary-only targets is accomplished with the help of a separately-built version of QEMU in \u0026ldquo;user emulation\u0026rdquo; mode. This also allows the execution of cross-architecture code - say, ARM binaries on x86.\nQEMU uses basic blocks as translation units; the instrumentation is implemented on top of this and uses a model roughly analogous to the compile-time hooks:\nif (block_address \u0026gt; elf_text_start \u0026amp;\u0026amp; block_address \u0026lt; elf_text_end) { cur_location = (block_address \u0026gt;\u0026gt; 4) ^ (block_address \u0026lt;\u0026lt; 8); shared_mem[cur_location ^ prev_location]++; prev_location = cur_location \u0026gt;\u0026gt; 1; } The shift-and-XOR-based scrambling in the second line is used to mask the effects of instruction alignment.\nThe start-up of binary translators such as QEMU, DynamoRIO, and PIN is fairly slow; to counter this, the QEMU mode leverages a fork server similar to that used for compiler-instrumented code, effectively spawning copies of an already-initialized process paused at _start.\nFirst-time translation of a new basic block also incurs substantial latency. To eliminate this problem, the AFL fork server is extended by providing a channel between the running emulator and the parent process. The channel is used to notify the parent about the addresses of any newly-encountered blocks and to add them to the translation cache that will be replicated for future child processes.\nAs a result of these two optimizations, the overhead of the QEMU mode is roughly 2-5x, compared to 100x+ for PIN.\n13. The afl-analyze tool The file format analyzer is a simple extension of the minimization algorithm discussed earlier on; instead of attempting to remove no-op blocks, the tool performs a series of walking byte flips and then annotates runs of bytes in the input file.\nIt uses the following classification scheme:\n \u0026ldquo;No-op blocks\u0026rdquo; - segments where bit flips cause no apparent changes to control flow. Common examples may be comment sections, pixel data within a bitmap file, etc. \u0026ldquo;Superficial content\u0026rdquo; - segments where some, but not all, bitflips produce some control flow changes. Examples may include strings in rich documents (e.g., XML, RTF). \u0026ldquo;Critical stream\u0026rdquo; - a sequence of bytes where all bit flips alter control flow in different but correlated ways. This may be compressed data, non-atomically compared keywords or magic values, etc. \u0026ldquo;Suspected length field\u0026rdquo; - small, atomic integer that, when touched in any way, causes a consistent change to program control flow, suggestive of a failed length check. \u0026ldquo;Suspected cksum or magic int\u0026rdquo; - an integer that behaves similarly to a length field, but has a numerical value that makes the length explanation unlikely. This is suggestive of a checksum or other \u0026ldquo;magic\u0026rdquo; integer. \u0026ldquo;Suspected checksummed block\u0026rdquo; - a long block of data where any change always triggers the same new execution path. Likely caused by failing a checksum or a similar integrity check before any subsequent parsing takes place. \u0026ldquo;Magic value section\u0026rdquo; - a generic token where changes cause the type of binary behavior outlined earlier, but that doesn\u0026rsquo;t meet any of the other criteria. May be an atomically compared keyword or so. "}),a.add({id:37,href:'/docs/third_party_tools/',title:"Third Party Tools",content:"Tools that help fuzzing with AFL++ Speeding up fuzzing:\n libfiowrapper - if the function you want to fuzz requires loading a file, this allows using the shared memory test case feature :-) - recommended. Minimization of test cases:\n afl-pytmin - a wrapper for afl-tmin that tries to speed up the process of minimization of a single test case by using many CPU cores. afl-ddmin-mod - a variation of afl-tmin based on the ddmin algorithm. halfempty - is a fast utility for minimizing test cases by Tavis Ormandy based on parallelization. Distributed execution:\n disfuzz-afl - distributed fuzzing for AFL. AFLDFF - AFL distributed fuzzing framework. afl-launch - a tool for the execution of many AFL instances. afl-mothership - management and execution of many synchronized AFL fuzzers on AWS cloud. afl-in-the-cloud - another script for running AFL in AWS. Deployment, management, monitoring, reporting\n afl-utils - a set of utilities for automatic processing/analysis of crashes and reducing the number of test cases. afl-other-arch - is a set of patches and scripts for easily adding support for various non-x86 architectures for AFL. afl-trivia - a few small scripts to simplify the management of AFL. afl-monitor - a script for monitoring AFL. afl-manager - a web server on Python for managing multi-afl. afl-remote - a web server for the remote management of AFL instances. afl-extras - shell scripts to parallelize afl-tmin, startup, and data collection. Crash processing\n AFLTriage - triage crashing input files using gdb. afl-crash-analyzer - another crash analyzer for AFL. fuzzer-utils - a set of scripts for the analysis of results. atriage - a simple triage tool. afl-kit - afl-cmin on Python. AFLize - a tool that automatically generates builds of debian packages suitable for AFL. afl-fid - a set of tools for working with input data. "}),a.add({id:38,href:'/docs/tutorials/',title:"Tutorials",content:"Tutorials Here are some good write-ups to show how to effectively use AFL++:\n https://aflplus.plus/docs/tutorials/libxml2_tutorial/ https://bananamafia.dev/post/gb-fuzz/ https://securitylab.github.com/research/fuzzing-challenges-solutions-1 https://securitylab.github.com/research/fuzzing-software-2 https://securitylab.github.com/research/fuzzing-sockets-FTP https://securitylab.github.com/research/fuzzing-sockets-FreeRDP https://securitylab.github.com/research/fuzzing-apache-1 https://mmmds.pl/fuzzing-map-parser-part-1-teeworlds/ If you do not want to follow a tutorial but rather try an exercise type of training, then we can highly recommend the following:\n https://github.com/antonio-morales/Fuzzing101 If you are interested in fuzzing structured data (where you define what the structure is), these links have you covered:\n libprotobuf for AFL++: https://github.com/P1umer/AFLplusplus-protobuf-mutator libprotobuf raw: https://github.com/bruce30262/libprotobuf-mutator_fuzzing_learning/tree/master/4_libprotobuf_aflpp_custom_mutator libprotobuf for old AFL++ API: https://github.com/thebabush/afl-libprotobuf-mutator Superion for AFL++: https://github.com/adrian-rt/superion-mutator Video Tutorials Install AFL++ Ubuntu [Fuzzing with AFLplusplus] Installing AFLPlusplus and fuzzing a simple C program [Fuzzing with AFLplusplus] How to fuzz a binary with no source code on Linux in persistent mode Blackbox Fuzzing #1: Start Binary-Only Fuzzing using AFL++ QEMU mode HOPE 2020 (2020): Hunting Bugs in Your Sleep - How to Fuzz (Almost) Anything With AFL/AFL++ How Fuzzing with AFL works! WOOT \u0026lsquo;20 - AFL++ : Combining Incremental Steps of Fuzzing Research If you find other good ones, please send them to us :-)\n"}),a.add({id:39,href:'/docs/tutorials/libxml2_tutorial/',title:"Libxml2 Tutorial",content:"Fuzzing libxml2 with AFL++ Before starting, build AFL++ LLVM mode and QEMU mode.\nI assume that the path to AFL++ is ~/AFLplusplus, change it in the commands if your installation path is different.\nDownload the source of libxml2 with\n$ git clone https://gitlab.gnome.org/GNOME/libxml2.git $ cd libxml2 Now configure it disabling the shared libraries\n$ ./autogen.sh $ ./configure --enable-shared=no If you want to enable the sanitizers, use the proper env var.\nIn this tutorial, we will enable ASan and UBSan.\n$ export AFL_USE_UBSAN=1 $ export AFL_USE_ASAN=1 Build the library using the clang wrappers\n$ make CC=~/AFLplusplus/afl-clang-fast CXX=~/AFLplusplus/afl-clang-fast++ LD=~/AFLplusplus/afl-clang-fast When the job is completed, we start to fuzz libxml2 using the tool xmllint as harness and take some testcases from the test folder as initial seeds.\n$ mkdir fuzz $ cp xmllint fuzz/xmllint_cov $ mkdir fuzz/in $ cp test/*.xml fuzz/in/ $ cd fuzz Make sure to configure your system with our script before start afl-fuzz\n$ sudo ~/AFLplusplus/afl-system-config Here we are!\n$ ~/AFLplusplus/afl-fuzz -i in/ -o out -m none -d -- ./xmllint_cov @@ Beware of the -m none. We built it using AddressSanitizer that maps a lot of pages for the shadow memory so we have to remove the memory limit to have it up and running.\nXML is a highly structured input so -d is a good choice. It enables FidgetyAFL, a modality that skips the deterministic stages (that are well suited for binary formats) in favor of the random stages.\nNow, knowing that libxml2 is a library and so the code is reentrant, we can speedup our fuzzing process using persistent mode.\nPersistent mode avoids the overhead of forking and gives a lot of speedup.\nTo enable it, we have to choose a reentrant routine and set up a persistent loop patching the code.\ndiff --git a/xmllint.c b/xmllint.c index 735d951d..64725e9c 100644 --- a/xmllint.c +++ b/xmllint.c @@ -3102,8 +3102,19 @@ static void deregisterNode(xmlNodePtr node) nbregister--; } +int main(int argc, char** argv) { + + if (argc \u0026lt; 2) return 1; + + while (__AFL_LOOP(10000)) + parseAndPrintFile(argv[1], NULL); + + return 0; + +} + int -main(int argc, char **argv) { +old_main(int argc, char **argv) { int i, acount; int files = 0; int version = 0; In this case, I choose parseAndPrintFile, the main parsing routine called from the xmllint main. As you can see, I created a new main function that loops around that function.\n__AFL_LOOP is the way that we have to tell AFL++ that we want persistent mode. Each fuzzing iteration, instead of to fork and re-execute the target with a different input, is just an execution of this loop.\nThe number 10000 tells that after 10000 runs with fuzzed inputs generated by AFL++ the harness has to fork and reset the state of the target. This is useful when the fuzzed routine is reentrant but, for example, has memory leaks and so we want to restore the target after a fixed number of executions to avoid filling the heap with useless allocated memory.\nTo build it, just remove the previously compiled xmllint and recompile it.\n$ cd .. $ rm xmllint $ make CC=~/AFLplusplus/afl-clang-fast CXX=~/AFLplusplus/afl-clang-fast++ LD=~/AFLplusplus/afl-clang-fast $ cp xmllint fuzz/xmllint_persistent Now restart the fuzzer\n$ cd fuzz $ ~/AFLplusplus/afl-fuzz -i in/ -o out -m none -d -- ./xmllint_persistent @@ As you can see, the speedup is impressive.\nNow we\u0026rsquo;ll fuzz xmllint using the binary-only instrumentation with QEMU.\nWe will act as if we don\u0026rsquo;t have the source code and therefore we will not patch anything in the source.\nFirstly, build an uninstrumented binary. Remind to revert the applied patch for LLVM persistent before proceed.\n$ cd ... $ make clean $ make $ cp xmllint fuzz/ To fuzz it in the simple fork-based fashion under QEMU, just add the -Q flag to afl-fuzz.\n$ cd fuzz $ ~/AFLplusplus/afl-fuzz -i in/ -o out -m none -d -Q -- ./xmllint @@ You\u0026rsquo;ve probably noticed that the speed is faster than the LLVM fork-based fuzzing. This is because we used ASan+UBSan in the previous steps based on LLVM (so a 2x slowdown in average).\nNote that so the slowdown of QEMU is circa 2x in this specific case, quite good.\nBut what if we want the speed of persistent mode for a closed source binary?\nNo pain, there is QEMU persistent mode, a new feature introduced in AFL++.\nThere are two possibilities in persistent QEMU, loop around a function (like WinAFL) or loop around a specific portion of code.\nIn this tutorial, we will go for the easy path, we will loop around parseAndPrintFile.\nFirstly, locate the address of the function:\n$ nm xmllint | grep parseAndPrintFile 0000000000019be0 t parseAndPrintFile The binary is position independent and QEMU persistent needs the real addresses, not the offsets. Fortunately, QEMU loads PIE executables at a fixed address, 0x4000000000 for x86_64.\nWe can check it using AFL_QEMU_DEBUG_MAPS. You don\u0026rsquo;t need this step if your binary is not PIE.\n$ AFL_QEMU_DEBUG_MAPS=1 ~/AFLplusplus/afl-qemu-trace ./xmllint - 4000000000-400013e000 r-xp 00000000 103:06 18676576 /home/andrea/libxml2/fuzz/xmllint 400013e000-400033e000 ---p 00000000 00:00 0 400033e000-4000346000 r--p 0013e000 103:06 18676576 /home/andrea/libxml2/fuzz/xmllint 4000346000-4000347000 rw-p 00146000 103:06 18676576 /home/andrea/libxml2/fuzz/xmllint 4000347000-4000355000 rw-p 00000000 00:00 0 ... Now, we set the address of the function that has to loop\n$ export AFL_QEMU_PERSISTENT_ADDR=0x4000019be0 We are on x86_64 and the parameters are passed in the registers. When, at the end of the function, we return to the starting address, the registers are clobbered so we don\u0026rsquo;t have anymore the pointer to the filename in rdi.\nTo avoid that, we can save and restore the state of the general-purpose registers at each iteration setting AFL_QEMU_PERSISTENT_GPR.\n$ export AFL_QEMU_PERSISTENT_GPR=1 Here we go, rerun the previous afl-fuzz command:\n$ ~/AFLplusplus/afl-fuzz -i in/ -o out -m none -d -Q -- ./xmllint @@ As for persistent LLVM, the speedup is incredible.\nEnjoy AFL++, stay tuned for other beginners tutorial of this kind in the future.\nAndrea.\n"}),a.add({id:40,href:'/categories/',title:"Categories",content:""}),a.add({id:41,href:'/tags/',title:"Tags",content:""}),a.add({id:42,href:'/',title:"The AFL++ fuzzing framework",content:"AFL++ Overview AFLplusplus is the daughter of the American Fuzzy Lop fuzzer by Michał \u0026ldquo;lcamtuf\u0026rdquo; Zalewski and was created initially to incorporate all the best features developed in the years for the fuzzers in the AFL family and not merged in AFL cause it is not updated since November 2017.\nThe AFL++ fuzzing framework includes the following:\n A fuzzer with many mutators and configurations: afl-fuzz. Different source code instrumentation modules: LLVM mode, afl-as, GCC plugin. Different binary code instrumentation modules: QEMU mode, Unicorn mode, QBDI mode. Utilities for testcase/corpus minimization: afl-tmin, afl-cmin. Helper libraries: libtokencap, libdislocator, libcompcov. It includes a lot of changes, optimizations and new features respect to AFL like the AFLfast power schedules, QEMU 5.1 upgrade with CompareCoverage, MOpt mutators, InsTrim instrumentation and a lot more.\nSee the Features page.\nIf you are a student or enthusiast developer and want to contribute, we have an idea list what would be cool to have! :-)\nIf you want to acknoledge our work and the derived works by the academic community in your paper, see the Papers page.\nIt is maintained by Marc \u0026ldquo;van Hauser\u0026rdquo; Heuse [email protected], Heiko \u0026ldquo;hexcoder-\u0026rdquo; Eißfeldt [email protected], Andrea Fioraldi [email protected] and Dominik Maier [email protected].\nCheck out the GitHub repository here.\nTrophies VLC CVE-2019-14437 CVE-2019-14438 CVE-2019-14498 CVE-2019-14533 CVE-2019-14534 CVE-2019-14535 CVE-2019-14776 CVE-2019-14777 CVE-2019-14778 CVE-2019-14779 CVE-2019-14970 by Antonio Morales (GitHub Security Lab) Sqlite CVE-2019-16168 by Xingwei Lin (Ant-Financial Light-Year Security Lab) Vim CVE-2019-20079 by Dhiraj (blog) Pure-FTPd CVE-2019-20176 CVE-2020-9274 CVE-2020-9365 by Antonio Morales (GitHub Security Lab) Bftpd CVE-2020-6162 CVE-2020-6835 by Antonio Morales (GitHub Security Lab) Tcpdump CVE-2020-8036 by Reza Mirzazade ProFTPd CVE-2020-9272 CVE-2020-9273 by Antonio Morales (GitHub Security Lab) Gifsicle Issue 130 by Ashish Kunwar FFmpeg Ticket 8592 Ticket 8593 Ticket 8594 Ticket 8596 by Andrea Fioraldi Ticket 9099 by Qiuhao Li Glibc Bug 25933 by David Mendenhall FreeRDP CVE-2020-11095 CVE-2020-11096 CVE-2020-11097 CVE-2020-11098 CVE-2020-11099 CVE-2020-13397 CVE-2020-13398 CVE-2020-4030 CVE-2020-4031 CVE-2020-4032 CVE-2020-4033 by Antonio Morales (GitHub Security Lab) GNOME Libxps issue 3 by Qiuhao Li QEMU CVE-2020-29129 CVE-2020-29130 by Qiuhao Li GNU coreutils Bug 1919775 by Qiuhao Li Sponsoring We always need servers with many cores for testing various changes for the efficiency. If you want to sponsor a server with more than 20 cores - contact us! :-)\nCurrent sponsors:\n Fuzzing IO is sponsoring a 24 core server for one year, thank you! "})})()