LeechCore integration #188

Open
Wenzel opened this issue Mar 15, 2021 · 34 comments
Labels
integration Integration with other VMI components / libraries

Comments

@Wenzel
Owner

Wenzel commented Mar 15, 2021

LeechCore is a physical memory acquisition library providing various acquisition methods, from software to hardware.

On the software side, it can acquire live memory through 4 methods:

| Device         | Type        | Volatile | Write | Linux Support | Plugin |
|----------------|-------------|----------|-------|---------------|--------|
| DumpIt /LIVEKD | Live Memory | Yes      | No    | No            | No     |
| WinPMEM        | Live Memory | Yes      | No    | No            | No     |
| LiveKd         | Live Memory | Yes      | No    | No            | No     |
| LiveCloudKd    | Live Memory | Yes      | No    | No            | Yes    |

Through a libmicrovmi integration, support can be added for accessing live memory on:

  • Xen
  • KVM
  • VirtualBox

libmicrovmi - LeechCore

⬇️ ⬇️ ⬇️

| Device         | Type        | Volatile | Write | Linux Support | Plugin |
|----------------|-------------|----------|-------|---------------|--------|
| DumpIt /LIVEKD | Live Memory | Yes      | No    | No            | No     |
| WinPMEM        | Live Memory | Yes      | No    | No            | No     |
| LiveKd         | Live Memory | Yes      | No    | No            | No     |
| LiveCloudKd    | Live Memory | Yes      | No    | No            | Yes    |
| libmicrovmi    | Live Memory | Yes      | Yes   | Yes           | Yes    |
  • write access would be supported
  • Linux support (assuming yes, but I'm not sure what it actually means)

Also, in the future, we could refactor and add LiveKd and LiveCloudKd as part of libmicrovmi, since they give access to a VM's physical memory and are fully within the scope of a libmicrovmi driver.

⬇️ ⬇️ ⬇️

| Device         | Type        | Volatile | Write | Linux Support | Plugin |
|----------------|-------------|----------|-------|---------------|--------|
| DumpIt /LIVEKD | Live Memory | Yes      | No    | No            | No     |
| WinPMEM        | Live Memory | Yes      | No    | No            | No     |
| libmicrovmi    | Live Memory | Yes      | Yes   | Yes           | Yes    |

cc @ufrisk for his opinion on the matter, and on the next steps to accomplish this.

@Wenzel Wenzel added the integration Integration with other VMI components / libraries label Mar 15, 2021
@ufrisk

ufrisk commented Mar 15, 2021

Hi, yes, this would be very nice to add support for indeed :)

The existing plugin-based architecture for LeechCore should already allow for this. The best example is probably LiveCloudKd, which is already implemented as a separate plugin. It's found here: https://github.com/gerhart01/LiveCloudKd/tree/master/leechcore_device_hvmm
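
In rough sketch form, a device plugin boils down to a single exported entry point. The LcPluginCreate signature below is what LeechCore resolves when loading the module; everything inside the body is an illustrative assumption, not the actual microvmi plugin:

    // Hedged sketch of a LeechCore device plugin skeleton. LeechCore
    // dlopen()s/LoadLibrary()s the module and resolves the exported
    // LcPluginCreate entry point; the body below only illustrates the shape.
    #include <leechcore.h>
    #include <leechcore_device.h>

    BOOL LcPluginCreate(PLC_CONTEXT ctxLC, PPLC_CONFIG_ERRORINFO ppLcCreateErrorInfo)
    {
        // 1. parse the device string, e.g. "microvmi://vm_name=win10"
        // 2. connect to the hypervisor / acquisition backend
        // 3. register read (and optionally write) callbacks on ctxLC
        // 4. describe the target's physical memory layout
        return TRUE;
    }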

I'd be happy to offer whatever support you need to get this going, including changes in LeechCore itself if you really do need them. I would think that most or all of the extra needs have already been surfaced by the LiveCloudKd integration.

I would assume such an integration would work on both Windows and Linux (at least for VirtualBox)? "Linux Support" just means it's supported on Linux. It's just not possible to support Hyper-V on Linux since there is no such product (yet anyway).

Please let me know how you wish to proceed around this.


Also, on Windows LeechCore has a remote agent allowing for remote connections to it (from other Windows-based LeechCore libraries). If this becomes a reality I should really look into making an agent for Linux as well and allowing them to securely interact. The main issue there is authentication though; I'm guessing it will have to be some certificate-based scheme, which quickly becomes somewhat complicated. At least more complicated than the Active Directory Kerberos-based one I use today. But even if complicated, this would be a priority if I get the libmicrovmi integration.


I'm however not interested in removing functionality from LeechCore. I see no reason why I should remove LiveKd/LiveCloudKd support even if it should be added to libmicrovmi.


LeechCore also has an active volatility3 integration, as a layer supported by the Volatility devs. In theory, if you add LeechCore support, libmicrovmi could be supported this way too.

For memory analysis of live Windows systems, though, MemProcFS is the best :) Unfortunately it's Windows only, which is one of the main reasons I'll be looking into "remote" LeechCore support for Linux hosts.

@Wenzel
Owner Author

Wenzel commented Aug 31, 2021

@ufrisk Sorry for the lack of feedback, I had some small tasks to finish before looking at the plugin.

I started to have a look today, and I was able to compile a first version of the microvmi LeechCore plugin (leechcore_device_microvmi.so).

I'm now wondering how to actually load the plugin?

I tried from the Python interface:
[screenshot: Python error when initializing LeechCore with the plugin]

Should I use an API to register it somewhere?
How do I associate a URL scheme with the plugin? (e.g. microvmi://)

➡️ if you want to have a look at the code:
https://github.com/Wenzel/LeechCore-plugins/blob/microvmi_plugin/leechcore_device_microvmi/leechcore_device_microvmi.c

Thanks! 😉

@ufrisk

ufrisk commented Aug 31, 2021

This is most awesome; and no need to apologize for things taking time. I've been super busy as well.

The easiest way to test your plugin is probably to download PCILeech or MemProcFS and place your plugin alongside it.

The Python bindings (if you've installed LeechCore via pip) load the .so plugins from the site-packages / pip install location, not from the current working directory. I think this is your error.

Otherwise the plugin looks OK, except that you'll need to set the memory map and/or the max physical address upon initialization. LiveCloudKd is a nice example of an external plugin (even though it's for Windows); its memory map initialization is found here: https://github.com/gerhart01/LiveCloudKd/blob/67ecd35506d33119704ec63e96500f5ab029e1ab/leechcore_device_hvmm/leechcore_device_hvmm.c#L421
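
In sketch form, assuming the same names the hvmm plugin uses (verify against the LeechCore headers), that initialization looks roughly like:

    #include <leechcore.h>
    #include <leechcore_device.h>

    // Rough sketch of the initialization step described above, done inside
    // the plugin's create routine. paMax and LcMemMap_AddRange are the names
    // the linked hvmm plugin uses; treat them as assumptions to verify.
    static BOOL DeviceMicrovmi_InitMemLayout(PLC_CONTEXT ctxLC, QWORD cbGuestMem)
    {
        ctxLC->Config.paMax = cbGuestMem - 1;              // max physical address
        // and/or register explicit ranges: (address, size, remap address)
        return LcMemMap_AddRange(ctxLC, 0, cbGuestMem, 0);
    }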

Also, when you get it to work, write support may be interesting as well :) As for Python, I could probably co-bundle this in my Python package to make it super easy for users. PCILeech, MemProcFS and also Volatility support LeechCore in the background.

Please let me know if you'll have any more questions and I'll try to do my best.

And huge thanks for looking into this awesome addition ❤️

@Wenzel
Owner Author

Wenzel commented Sep 2, 2021

I think I'm a bit confused by how the LeechCore plugin loader works, especially about where I'm supposed to put the plugins.
[screenshot: strace output of the LeechCore plugin load attempt]

In the screenshot above, I placed my plugin leechcore_device_microvmi.so in the site-packages/leechcorepyc/ directory, and initialized LeechCore while monitoring it with strace.

When I look at the output, I can see LeechCore attempted to open my plugin but failed because the path is wrong: it has appended an absolute path to another absolute path.

Is there something wrong with my environment? 🤔

@Wenzel
Owner Author

Wenzel commented Sep 2, 2021

@ufrisk I believe you need to remove this line:
https://github.com/ufrisk/LeechCore/blob/master/leechcore/leechcore.c#L259

since the LoadLibraryA wrapper is already responsible for calling Util_GetPathLib()

@ufrisk

ufrisk commented Sep 2, 2021

Thanks for this excellent bug report. Unfortunately the device plugin path is not that well tested on Linux, as you noticed. Apologies for this.

Anyway, it should be fixed now if you update the pip package or download the new sources/binaries from GitHub (only in the LeechCore project atm).

I went with a slightly different approach, which more closely emulates LoadLibrary on Windows: https://github.com/ufrisk/LeechCore/blob/d067ee2aa7d9f7a87963c8e907d9f8bdf221dbff/leechcore/oscompatibility.c#L212 This avoids strange behavior on Windows. The result should be the same though.

Can you try to see if it's working and let me know?

@Wenzel
Owner Author

Wenzel commented Sep 3, 2021

Thanks for the bug fix!

I implemented init argument parsing, and debugged why I couldn't load my plugin (missing symbol).
But I'm facing an issue now: reading physical memory from LeechCore always fails, and I don't know why.
I have a callback configured for pfnReadContigious, isn't that enough?

[screenshot: physical memory reads failing]

@Wenzel
Owner Author

Wenzel commented Sep 3, 2021

@ufrisk btw, I would suggest logging the LoadLibrary error somewhere when loading fails; without it, it was impossible to know what went wrong in my plugin:

        if((ctx->hDeviceModule = LoadLibraryA(szModule))) {
            // resolve the mandatory plugin entry point
            if((ctx->pfnCreate = (BOOL(*)(PLC_CONTEXT, PPLC_CONFIG_ERRORINFO))GetProcAddress(ctx->hDeviceModule, "LcPluginCreate"))) {
                strncpy_s(ctx->Config.szDeviceName, sizeof(ctx->Config.szDeviceName), ctx->Config.szDevice, cszDevice);
                return;
            } else {
                // entry point missing: unload the module again
                FreeLibrary(ctx->hDeviceModule);
                ctx->hDeviceModule = NULL;
            }
        } else {
            // suggested addition: surface the dlopen()/LoadLibrary failure reason
            fprintf(stderr, "LoadLibrary failed: %s\n", dlerror());
        }

@Wenzel
Owner Author

Wenzel commented Sep 3, 2021

Okay, it seems the error is on my side after all:
my API call doesn't read any bytes. I'm looking into it :)

Edit: found my issue, fix is on its way:
#210

@Wenzel
Owner Author

Wenzel commented Sep 3, 2021

@ufrisk do you know if it's possible to run MemProcFS and mount the FUSE filesystem as root?

I'm getting an "Invalid argument" error here when mounting the filesystem.
I didn't have this kind of error when working with KVM (which doesn't require running as root).
[screenshot: FUSE mount failing with "Invalid argument"]

@ufrisk

ufrisk commented Sep 3, 2021

Thanks; I don't know how I could have missed this. It must be some security setting in FUSE I'd need to disable. I'll look into it.

About the LoadLibrary error: I'll add an extra output there in extra verbose (-vv) mode, but in the next release of LeechCore. I don't think it's anything that affects the end user much, but I agree it would be very nice to have it there.

Anyway, I'll let you know when I've fixed MemProcFS.

@ufrisk

ufrisk commented Sep 3, 2021

The mount as root bug should now be fixed. Thanks for reporting this.

@Wenzel
Owner Author

Wenzel commented Sep 6, 2021

@ufrisk I confirm the fix is working 🎉
Here is a showcase of MemProcFS running via libmicrovmi
✔️ KVM
[screenshot: MemProcFS running on KVM]

✔️ Xen
[screenshot: MemProcFS running on Xen]

I'm working on VirtualBox for Linux next, and then I'll look into making the libmicrovmi VirtualBox driver compatible with Windows.

@Wenzel Wenzel added this to In progress in LeechCore microvmi plugin Sep 6, 2021
@Wenzel
Owner Author

Wenzel commented Sep 6, 2021

While working on MemProcFS for VirtualBox via libmicrovmi through FDP, we are investigating a segfault:
thalium/icebox#38

@Wenzel
Owner Author

Wenzel commented Sep 6, 2021

✔️ Aaaand we have VirtualBox support now 😉
[screenshot: MemProcFS running on VirtualBox]

@ufrisk

ufrisk commented Sep 6, 2021

Wow, this is totally awesome news and progress! I'm very much looking forward to the finished plugin :)

If you do need to chunk the memory reads to 4kB, the ReadScatter() function may be a better fit than ReadContigious().
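
Roughly, a scatter read callback would look like the sketch below (MEM_SCATTER field names per the LeechCore headers; microvmi_read_physical is just a stand-in for whatever libmicrovmi call you use):

    #include <leechcore.h>
    #include <leechcore_device.h>

    // stand-in prototype for the actual libmicrovmi read call (assumed)
    BOOL microvmi_read_physical(QWORD pa, PBYTE pb, DWORD cb);

    // Hedged sketch of a scatter read callback: satisfy each MEM_SCATTER
    // entry (at most one page each) independently.
    VOID DeviceMicrovmi_ReadScatter(PLC_CONTEXT ctxLC, DWORD cpMEMs, PPMEM_SCATTER ppMEMs)
    {
        for(DWORD i = 0; i < cpMEMs; i++) {
            PMEM_SCATTER pMEM = ppMEMs[i];
            if(pMEM->f) { continue; }    // entry already satisfied
            // read pMEM->cb bytes at physical address pMEM->qwA into pMEM->pb
            pMEM->f = microvmi_read_physical(pMEM->qwA, pMEM->pb, pMEM->cb);
        }
    }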

Also, if possible, it would be super nice to have write capabilities.

Please let me know if you would need anything from my side and I'll try to look into it right away :)

@Wenzel
Owner Author

Wenzel commented Sep 7, 2021

I believe I spotted an issue with the verbosity command-line handling:
Enabling -vv works:
[screenshot: -v and -vv messages shown with -vv]

Enabling -vvv, however, hides the -v and -vv messages:
[screenshot: -v and -vv messages missing with -vvv]

Is this by design, or is it an issue?
I'm working with your latest release here: https://github.com/ufrisk/MemProcFS/releases/tag/v4.2

@Wenzel
Owner Author

Wenzel commented Sep 7, 2021

✔️ This adds memflow support as well, passing the connector name. (cc @ko1N)
Inspecting an unmodified QEMU instance via memflow-qemu-procfs
[screenshot: MemProcFS inspecting QEMU via memflow-qemu-procfs]

This should help to solve ufrisk/MemProcFS#62
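
For reference, opening LeechCore this way boils down to something like the sketch below (LcCreate and LC_CONFIG per the LeechCore C API; the microvmi:// device string with the memflow_connector_name key is what I used here, so verify it against the plugin docs):

    #include <string.h>
    #include <leechcore.h>

    // Sketch: opening LeechCore through the microvmi plugin with a memflow
    // connector. The device-string syntax is assumed from this setup.
    HANDLE open_microvmi_memflow(void)
    {
        LC_CONFIG cfg = { 0 };
        cfg.dwVersion = LC_CONFIG_VERSION;
        strncpy(cfg.szDevice, "microvmi://memflow_connector_name=qemu_procfs",
                sizeof(cfg.szDevice) - 1);
        return LcCreate(&cfg);    // NULL handle on failure
    }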

@ufrisk

ufrisk commented Sep 7, 2021

This is super nice; after this, LeechCore will be able to integrate with pretty much any virtualization software on the market 👍

About the verbosity: you have to pass both -vv and -vvv if you want both of them; I know it's a bit of a mess. The long-term plan is to implement a proper logging system so that you can enable more fine-grained logging of separate components if you wish. I just haven't gotten around to doing that yet. There have always been other, more important things...

@Wenzel
Owner Author

Wenzel commented Sep 21, 2021

Update:

Do you maybe wish to test the plugin?

@ufrisk

ufrisk commented Sep 21, 2021

Wow, this looks super nice; I'll be super happy to test it. Unfortunately I'm a bit too busy during the weekdays; I'll do it on the weekend.

I'd be happy to include the package in my default binary packaging. I'm guessing it's for amd64 Linux only, right? Not aarch64 or Windows?

@Wenzel
Owner Author

Wenzel commented Sep 21, 2021

> Wow, this looks super nice; I'll be super happy to test it. Unfortunately I'm a bit too busy during the weekdays; I'll do it on the weekend.

There is no rush, and weekends are precious :)

> I'd be happy to include the package in my default binary packaging. I'm guessing it's for amd64 Linux only, right? Not aarch64 or Windows?

amd64 Linux only for now.
Windows support is ongoing: compiling and distributing the VirtualBox driver, and then packaging libmicrovmi.
I'm not knowledgeable about cross-compilation with Cargo/Rust here, but it's an issue I can track.

@ufrisk

ufrisk commented Sep 27, 2021

Apologies for taking some time to look into this.

The module compiles fine on Ubuntu 18.04.

It seems like my default PCILeech/MemProcFS binaries won't work though since I build on Ubuntu 20.04 (more recent GLIBC required). I'm still a bit undecided whether I should start building on 18.04 for better backwards compatibility or if I should keep things as-is.

The module builds to a very reasonable size, so I'd be happy to include it in my default binaries. The GLIBC issue remains if one wants to run it on 18.04 though.

I've also installed icebox, microvmi (the .deb package) and rust.

I however get some kind of Rust error no matter what I do. Any ideas what this may be due to?

[screenshot: Rust panic when initializing libmicrovmi]

@Wenzel
Owner Author

Wenzel commented Sep 28, 2021

hey @ufrisk

I guess it's my turn to fix bugs in my code :)
Indeed, the KVM driver was panicking when the library it depends on couldn't be located and loaded.
This was early implementation behavior that stayed around for too long; the other drivers shouldn't panic anymore.

It has been fixed:
#224
Wenzel/kvmi#49

Along with a new release of libmicrovmi:
https://github.com/Wenzel/libmicrovmi/releases/tag/v0.3.7

Also, you don't need to install the Rust compiler to run the library.
On a side note, you can use export RUST_LOG=debug to display as much information as possible while libmicrovmi is initializing.

I hope this helps you get going with running MemProcFS on icebox!

EDIT: congrats on going through the whole icebox setup and running a VM behind it; I'm sure the icebox team would be happy to hear about that integration 😉 (cc @bamiaux)

@ufrisk

ufrisk commented Sep 28, 2021

Many thanks for the update; it's good to see the issue was fairly well known and is now resolved. There however seems to be more of the same issue in other places. I got a little further before hitting another similar crash. After installing libxen-dev it went away though, so no worries.

[screenshot: another similar Rust panic]

After installing the missing libxen-dev, it however fails to connect to icebox and I have no clue why. I tried as both user and root, for both MemProcFS and icebox. Any ideas?

[screenshot: MemProcFS failing to connect to icebox]

About icebox: I first tried it on Ubuntu 20.04 but it was not possible. Some dependencies have changed names, and VirtualBox had a bug, not backported to icebox, that made compilation fail on gcc versions with two-digit version numbers. On Ubuntu 18.04 everything went super smoothly though, thanks to the excellent install guide :)

@Wenzel
Owner Author

Wenzel commented Sep 28, 2021

On my way to fix the panic on https://github.com/Wenzel/xenstore/blob/master/src/libxenstore.rs#L41 :)

> After installing the missing libxen-dev, it however fails to connect to icebox and I have no clue why. I tried as both user and root, for both MemProcFS and icebox. Any ideas?

You shouldn't have to run icebox as root.
I suggest running RUST_LOG=debug ./memprocfs ... to get more debug info.

@ufrisk

ufrisk commented Sep 28, 2021

Thanks :)

  1. The error indicates liblibFDP.so is missing.
  2. I symlinked it; I'm unsure which name was required so I tried a few combinations.
  3. Success! It seems to be working perfectly :)

How do you wish to proceed with this? Do you feel it's release-ready as-is, or do you prefer to look into Windows support first as well? Also, do you wish to keep it as a stand-alone plugin here and I'll link to it, similar to what I do with LiveCloudKd; or would you like me to co-bundle the .so in my x64 Linux release? (In that case I think I'll have to start building on Ubuntu 18.04 for better backwards compatibility; but as long as it doesn't cause any issues on more recent Linuxes, that's probably just a good thing.)

[screenshots of steps 1-3]

@Wenzel
Owner Author

Wenzel commented Sep 29, 2021

Hi @ufrisk ,

You uncovered new bugs, and I'm glad that I could fix them before an official release :)
Regarding liblibFDP.so, this bug was introduced when I modified the crate to be compatible with Windows:
Wenzel/fdp#18
I used a function to determine the library name based on the OS, and on Unix it also adds the lib prefix (hence the doubled lib).

As for the Xen driver, it shouldn't panic anymore:
#228

> How do you wish to proceed with this? Do you feel it's release-ready as-is, or do you prefer to look into Windows support first as well?

Give me a few days to see if I can add Windows support as well, and then we can look at an official release.

> Also, do you wish to keep it as a stand-alone plugin here and I'll link to it, similar to what I do with LiveCloudKd; or would you like me to co-bundle the .so in my x64 Linux release?

That's totally up to you. If you feel your users could benefit from having the microvmi plugin bundled directly with the LeechCore / MemProcFS releases, I'm more than happy to see it used and integrated! 💯

I've just triggered the release for v0.3.8 with the fixes I mentioned above:
https://github.com/Wenzel/libmicrovmi/actions/runs/1284801306

You can test it to confirm :)

@ufrisk

ufrisk commented Oct 3, 2021

Thanks. It seems to be working alright 👍

I also saw you released a Windows version; I was unable to easily install IceBox on Windows though. I think I may have to uninstall VMware and also enable driver testing mode, so I'll trust that it's working.

If you feel the plugin is ready enough I'll be super happy to bundle it in the Linux version. It's tiny enough and it may be useful for some users.

As far as the Windows version goes it's probably better to co-bundle it with libmicrovmi and have it as a separate download. I'm not too keen on co-bundling it with my Windows releases since it's quite large.

Also, I've started to compile on Ubuntu 18.04 so it should now be more backwards compatible.

Please let me know when you feel it's ready enough and I'll co-bundle it and tweet something about it :) And thank you for this awesome work!

@Wenzel
Owner Author

Wenzel commented Oct 6, 2021

> I also saw you released a Windows version; I was unable to easily install IceBox on Windows though. I think I may have to uninstall VMware and also enable driver testing mode, so I'll trust that it's working.

I couldn't test it either; icebox is a complicated setup and I don't have much time on my hands for Windows here.

> If you feel the plugin is ready enough I'll be super happy to bundle it in the Linux version. It's tiny enough and it may be useful for some users.

Awesome!

> As far as the Windows version goes it's probably better to co-bundle it with libmicrovmi and have it as a separate download. I'm not too keen on co-bundling it with my Windows releases since it's quite large.

👍

> Please let me know when you feel it's ready enough and I'll co-bundle it and tweet something about it :) And thank you for this awesome work!

I feel like we are close to ready now.
I suppose the next step would be for me to make a PR from mtarral/LeechCore-plugins to ufrisk/LeechCore-plugins?

I added a short tutorial to my documentation on using MemProcFS with QEMU:
https://wenzel.github.io/libmicrovmi/tutorial/memprocfs_qemu.html

@ufrisk

ufrisk commented Oct 11, 2021

Awesome; please feel free to do a pull request at any time and I'll add the plugin to my Linux binary release 👍 and also update some documentation.

@JuniorJPDJ

JuniorJPDJ commented Oct 17, 2021

Wow guys! Congratulations on a great cooperation. It's super cool to see LeechCore support VMs through the libmicrovmi plugin. I'm the creator of ufrisk/MemProcFS#62; I suppose I can now close that issue?

@ufrisk

ufrisk commented Oct 19, 2021

Once again thanks for this integration.

I have a bug report as well. It only seems to affect PCILeech sometimes, and not MemProcFS. When I try to do a write, microvmi segfaults. It would be much nicer if the write just failed gracefully, if it has to fail at all. No biggie by any means, but I guess bug reports are welcome.

[screenshot: PCILeech write causing a microvmi segfault]

@Wenzel
Owner Author

Wenzel commented Oct 19, 2021

Ah yes, thanks for the bug report.

Indeed, the write_physical trait API hasn't been implemented in https://github.com/Wenzel/libmicrovmi/blob/master/src/driver/virtualbox.rs#L33

I should also open an issue so that we fail gracefully instead of panicking when an API is not implemented.
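
On the LeechCore side, the goal is simply for the plugin's write path to report failure instead of crashing, roughly as in the sketch below (the callback signature and microvmi_write_physical are assumptions to check against the headers):

    #include <leechcore.h>
    #include <leechcore_device.h>

    // stand-in prototype for the actual libmicrovmi write call (assumed)
    BOOL microvmi_write_physical(QWORD pa, PBYTE pb, DWORD cb);

    // Hedged sketch: a write callback that fails gracefully by returning
    // FALSE rather than panicking/segfaulting on an unimplemented API.
    BOOL DeviceMicrovmi_WriteContigious(PLC_CONTEXT ctxLC, QWORD pa, DWORD cb, PBYTE pb)
    {
        return microvmi_write_physical(pa, pb, cb);
    }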
