File merging #5882
Caf3sp0re created an issue (sqlmapproject/sqlmap#5882):

This feature request came to mind because sqlmap can lose its connection to the target many times, for many reasons, forcing the user to wait and re-run the tool. However, there is no way to resume where we last left off, so the entire database gets re-dumped by sqlmap, producing excessive duplicate files in the sqlmap output directory.

My idea was to include a flag like "--merge". This option would look for any duplicate files and merge them into a single one, e.g. "file.csv" + "file1.csv" --> "file.csv" (see the sketch below for what such merging could look like).

An alternative that would resolve this issue would be an option to resume where sqlmap last stopped, saving time and bandwidth as well as imposing less stress on the target.

Screenshot 1_001.png (output directory with duplicate dump files):
https://github.com/user-attachments/assets/0c1fa136-3e45-4cfd-b067-b0b5617b9962
(I had a lot of duplicate files before taking this screenshot. I had to wipe the entire folder to start off clean again.)
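For illustration only: the "--merge" option requested here does not exist in sqlmap. The following is a minimal standalone sketch of what such deduplication could look like, assuming duplicate dumps follow the "file.csv", "file1.csv", ... naming shown in the request and share the same header row; the script and the function name merge_duplicate_dumps are hypothetical, not part of sqlmap.

```python
#!/usr/bin/env python3
# Hypothetical sketch of the requested "--merge" behaviour; NOT a sqlmap feature.
# Assumes duplicates are named "users.csv", "users1.csv", ... and share a header row.
import csv
import re
import sys
from collections import defaultdict
from pathlib import Path

def merge_duplicate_dumps(dump_dir):
    """Merge numbered duplicates (e.g. users.csv + users1.csv) into one file,
    keeping each unique row only once."""
    groups = defaultdict(list)
    for path in sorted(Path(dump_dir).glob("*.csv")):
        base = re.sub(r"\d+$", "", path.stem)   # "users1" -> "users"
        groups[base].append(path)

    for base, paths in groups.items():
        if len(paths) < 2:
            continue                            # nothing to merge for this table
        header, seen, rows = None, set(), []
        for path in paths:
            with path.open(newline="") as fp:
                reader = csv.reader(fp)
                file_header = next(reader, None)
                header = header or file_header
                for row in reader:
                    key = tuple(row)
                    if key not in seen:         # skip rows already collected
                        seen.add(key)
                        rows.append(row)
        target = Path(dump_dir) / f"{base}.csv"
        with target.open("w", newline="") as fp:
            writer = csv.writer(fp)
            if header:
                writer.writerow(header)
            writer.writerows(rows)
        for path in paths:                      # drop the now-redundant copies
            if path != target:
                path.unlink()
        print(f"merged {len(paths)} files into {target} ({len(rows)} unique rows)")

if __name__ == "__main__":
    merge_duplicate_dumps(sys.argv[1] if len(sys.argv) > 1 else ".")
```

Such a script would be run against a per-target dump directory after the fact; it deliberately deduplicates whole rows, so re-dumped tables collapse back into one file.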
Comment:

sqlmap is already using a session file, reducing "stress" on the target when rerunning. If there is a non-empty query result in the session, sqlmap will reuse it. Why do you say that it re-dumps everything?
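To see that results really are persisted between runs, one can peek at the per-target session database. The sketch below is read-only and rests on two assumptions: that the session file is the SQLite database "session.sqlite" kept in the per-target output directory (typically ~/.local/share/sqlmap/output/&lt;target&gt;/), and the example hostname, which is a placeholder.

```python
#!/usr/bin/env python3
# Read-only peek at a sqlmap per-target session database, to confirm that
# previously retrieved results are stored and can be reused on a rerun.
# Assumption: the session file is "session.sqlite" in the per-target output
# directory (default output dir is typically ~/.local/share/sqlmap/output/).
import sqlite3
import sys
from pathlib import Path

def summarize_session(session_path):
    path = Path(session_path).expanduser()
    if not path.is_file():
        sys.exit(f"no session file at {path}")
    # Open read-only so we never interfere with sqlmap's own bookkeeping.
    con = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
    try:
        tables = [row[0] for row in
                  con.execute("SELECT name FROM sqlite_master WHERE type='table'")]
        for table in tables:
            (count,) = con.execute(f'SELECT COUNT(*) FROM "{table}"').fetchone()
            print(f"{table}: {count} stored entr{'y' if count == 1 else 'ies'}")
    finally:
        con.close()

if __name__ == "__main__":
    # Placeholder target directory; replace with your own target's folder.
    default = "~/.local/share/sqlmap/output/www.example.com/session.sqlite"
    summarize_session(sys.argv[1] if len(sys.argv) > 1 else default)
```

For the opposite case, when stored results should be thrown away and retrieved again, sqlmap already provides the --flush-session and --fresh-queries options.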