AFLnwe seeds timestamps are non-sequential #6
Comments
Hi @acidghost, thank you for reporting these issues! As you pointed out, the stale timestamp issue is caused by the link_or_copy function: if a seed is hard-linked into the queue folder, its timestamp is not updated. I have modified the function so that it always copies seeds, and the issue seems to be fixed.
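The link-versus-copy difference can be reproduced outside AFL. A minimal Python sketch (file names invented for illustration) showing that a hard link shares the original file's modification time, while a copy gets a fresh one:

```python
import os
import shutil
import tempfile
import time

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "seed.raw")
with open(src, "w") as f:
    f.write("initial seed")

# Backdate the seed to simulate an input file created days ago.
old = time.time() - 3 * 86400
os.utime(src, (old, old))

linked = os.path.join(tmp, "linked_seed")
copied = os.path.join(tmp, "copied_seed")
os.link(src, linked)      # hard link: shares the inode, keeps the stale mtime
shutil.copy(src, copied)  # copy: a new file with a current mtime

print(os.path.getmtime(linked) - old)  # ~0: timestamp not updated
print(os.path.getmtime(copied) - old)  # ~3 days, in seconds
```

This is why always copying fixes the stale timestamps: the queue entry becomes a fresh file whose mtime reflects when the fuzzer saved it.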
Can you please rebuild the Docker image and run the experiments again? Please let me know if you still observe the issues. Thanks.
@thuanpv It seems the issue is still there. The seeds no longer have extremely old timestamps, but the timestamps are still non-sequential (and some seeds even have the wrong timestamp):
Thanks @acidghost, interesting! In AFLnwe we did not change AFL's code for saving test cases. Let us look into it more deeply.
I did some investigation on this issue. When a file is first created in the "queue" folder, it has the correct ctime. When AFL saves a new interesting input, the ctime of the "src" input file also gets updated. Note that the "src" input file is mmap-ed (even though the fd is read-only), and that it is munmap-ed by abandon_entry() when the new interesting input is saved. It seems that timestamps with mmap are not entirely reliable; see also https://yarchive.net/comp/linux/mtime_mmap.html and https://apenwarr.ca/log/20181113 . I have not yet checked why this issue surfaced with AFLnwe. As for the solution, we may want to save a second copy of the interesting inputs in another folder, or we may include the timestamp in the file name (but this may break some scripts).
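One general point worth keeping in mind when reading ctime values: ctime is the inode *change* time, and POSIX bumps it on any metadata update, including creating a hard link to the file, with no write or mmap involved. A small standalone Python sketch (paths invented):

```python
import os
import tempfile
import time

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "src")
with open(src, "w") as f:
    f.write("interesting input")

before = os.stat(src)
time.sleep(1.1)  # ensure the timestamps can differ on coarse filesystems

# A hard link does not touch the file's contents, but it increments
# the link count, which is inode metadata, so ctime is updated.
os.link(src, os.path.join(tmp, "queue_entry"))
after = os.stat(src)

print(after.st_mtime == before.st_mtime)  # contents unchanged
print(after.st_ctime > before.st_ctime)   # metadata changed
```

So ctime-based ordering of queue entries can shift even when nothing writes to the file.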
After more debugging, it turns out that there is no weird problem with mmap() and timestamps (fortunately!). I wrote a patch for AFLnwe that adds a new option to control this. If you pass -z '-trimmed', it saves the trimmed test case in a separate file, with the same name as the original file suffixed with '-trimmed'. If you pass -z '', the trimmed test case is not saved. We may want to pass -z '' in run.sh when calling afl-fuzz. @thuanpv, can you kindly check the attached patch and commit it to AFLnwe? I don't have permissions on the repo to open a pull request.
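As described, the -z option boils down to a naming rule. A sketch of the assumed semantics (the helper below is hypothetical, not code from the patch):

```python
def trimmed_output_path(queue_path, suffix):
    """Hypothetical helper mirroring the described -z semantics:
    an empty suffix disables saving the trimmed test case; otherwise
    the trimmed case is saved next to the original, with the suffix
    appended to its file name."""
    if suffix == "":
        return None  # -z '' : do not save the trimmed test case
    return queue_path + suffix  # -z '-trimmed' : save alongside the original

print(trimmed_output_path("queue/id_000001", "-trimmed"))
print(trimmed_output_path("queue/id_000001", ""))
```

With -z '', queue entries keep the timestamps of the moment the fuzzer saved them, with no later rewrite from trimming.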
Uh, this made me spend hours on investigation today; sadly, I only found this thread now. Here is the patch for better visibility: tlspuffin/aflnwe@69cd1eb. As far as I can see, setting -z to an empty string means no more trimmed test cases are written. That should work! I'll test this!
While doing some testing I noticed that the timestamps of the seeds stored in the queue folder of AFLnwe runs are non-sequential. The timestamps of the initial seeds (i.e. those ending in .raw) are also not updated while pivoting them from the input seeds folder (see above: the dates are 3 and 5 Mar): https://github.com/aflnet/aflnwe/blob/113102a3ba552028e6fb0193cc2039503def7ef4/afl-fuzz.c#L3303
When generating plots, the cut-off time may turn out wrong because it is based on the timestamp of the first initial (linked) seed:
profuzzbench/scripts/analysis/profuzzbench_plot.py, lines 37 to 40 in 9962025
profuzzbench/subjects/FTP/BFTPD/cov_script.sh, lines 33 to 34 in 9962025
profuzzbench/subjects/RTSP/Live555/cov_script.sh, lines 33 to 34 in 9962025
profuzzbench/subjects/FTP/LightFTP/cov_script.sh, lines 40 to 41 in 9962025
(this is the same for all targets/subjects)
This bug can make the plots for AFLnwe completely wrong, as they are cut too short. In the example above, the difference in timestamps between the first seed and the others is measured in days (the run was 1 hour long), causing the plot for AFLnwe to pick up only some of the initial seeds.
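The effect on the cut-off computation can be sketched in Python (directory layout and file names invented; the heuristic of taking the earliest queue timestamp as the run start is an assumption about the analysis scripts): one stale linked seed drags the inferred start time days into the past.

```python
import glob
import os
import tempfile
import time

queue = tempfile.mkdtemp()
now = time.time()

# Three queue entries written during a 1-hour run that ended just now...
for i, offset in enumerate([0, 1800, 3600]):
    path = os.path.join(queue, "id_%06d" % i)
    open(path, "w").close()
    os.utime(path, (now - 3600 + offset,) * 2)

# ...plus one linked initial seed carrying a stale, days-old timestamp.
stale = os.path.join(queue, "seed_1.raw")
open(stale, "w").close()
os.utime(stale, (now - 3 * 86400,) * 2)

# A cut-off derived from the earliest timestamp lands days before the run,
# so most of the 1-hour window falls outside the plotted range.
start = min(os.path.getmtime(p) for p in glob.glob(os.path.join(queue, "*")))
print((now - start) / 86400)  # measured in days, not the expected ~1/24
```

With corrected (copied) seed timestamps, the minimum would fall at the actual start of the run and the plot window would be right.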
This does not seem to be the case for AFLnet, as it uses the replayable-queue folder, where files are created from scratch.