Continuing the discussion started in the pull request to folder2ram #1
Committed an update. I made the config file changes first since they were the easiest. 0.2 (1.24.2016):
Todo List:
Here's how I envision changing the configuration file to accommodate disk-to-disk caching and the zRam additions. I'll use "destination" as an umbrella term covering tmpfs, ramfs, zram, or another disk. Example of the different permutations of Destination:
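The example itself didn't survive in this thread, so here is a hedged sketch of what such permutations could look like. The column layout is an assumption; the field names (Destination, LogHist, Services) follow the terms discussed in this thread, and all paths, sizes, and service names are illustrative:

```
# Hypothetical dir2ram config layout -- every value here is illustrative.
# <destination>   <mount point>       <size>  <loghist>  <services>
tmpfs             /var/log            64M     5          rsyslog
zram              /var/cache/samba    128M    -1         smbd,nmbd
/mnt/ssd/cache    /var/lib/plex       2G      3          plexmediaserver
```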
Looks good. A couple of questions: what is LogHist for? And why Dependent Services? With systemd the script gets called just after/before the fs target, that is, just after/before filesystem mounting/unmounting anyway.
LogHist (Log History) allows you to specify, for each directory, how many logs to keep (or to disable the feature with a negative value). It became important for Plex, which had accumulated hundreds of MB of old logs after only a few days. Dependent services are one of the primary reasons I wrote this and the most difficult part of the coding. If a service using the directory is active, then you can't guarantee it won't access the directory while the mount is being set up or disabled. This race condition potential was mentioned in the article by Matteo Cortese. Some of the time you can get away without disabling and re-enabling the service during setup; however, in the case of postfix it would always cause errors, and the only way to fix the broken spool directories was to completely remove the spool directory and reinstall postfix. By identifying the services that could access the directory, disabling them, doing the work to mount/unmount, and then re-enabling them, everything can be performed without a reboot. I don't like requiring a reboot if it can be avoided.
Some feedback, don't take it as an order or anything. I'm kind of wondering how you plan to make the LogHist logic work in a general way, so that it works for non-Plex logs too.

Dependent services: you can probably automate it, or at least check at runtime and throw an error instead of screwing things up. lsof /path/to/folder will tell you what processes are accessing /path/to/folder right now, and there are options to filter its results. Powering down the services brutally (with a "stop") can cause data loss if some processes can't handle being stopped like this in the middle of an operation (say, stuff may quit without saving or uncleanly), but theoretically it shouldn't FUBAR the service like what happened with postfix. It is probably better to code into the config the SAFE way to stop each and every OMV plugin, in a modular config, and warn power users of the issue, maybe printing an error like "sorry, can't enable dir2ram because process XXX is using the folder, please stop it safely first or write a custom configuration for it" or whatever.

Also, if you are seriously into feature explosion, I warmly suggest keeping a separate config file for special/advanced options in another folder, and just writing its name in the main config line. Like this: Then you have a folder like "SpecialOptionsConfigFiles" (placeholder name), and in it you have And inside each you have the pile of advanced options for that specific mount point: what should be kept (some folder structure? some files?), what should be removed and when (what folders? what files?), all services that must be handled, what commands must be sent to them to avoid data loss (as I doubt that calling a "stop" would be enough for many of them), and so on.
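The lsof check suggested above can be sketched as a small guard function. The function name is made up; it only relies on lsof's documented behavior of exiting 0 when it lists something and non-zero when it finds nothing:

```shell
#!/bin/sh
# Hedged sketch: refuse to touch a directory while processes hold files
# open under it. "lsof +D dir" scans the directory tree; it exits 0 only
# when it finds open files. The function name is illustrative.
dir_is_free() {
    if lsof +D "$1" >/dev/null 2>&1; then
        return 1    # something has files open under $1 -- not safe
    else
        return 0    # nothing found (or lsof unavailable) -- treated as free
    fi
}

# Usage: dir_is_free /var/spool/postfix || echo "stop postfix first"
```

Note the failure mode in the else branch: if lsof itself is missing, the check silently passes, so a real implementation would probably want to verify lsof exists first.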
LogHist: Services: I would like to add a 'reboot' option for services, where the script adds the entry with insserv but doesn't attempt to stop/restart the services in order to mount the drive immediately, instead letting it become active on a reboot. This should handle the cases where it is too dangerous (for whatever reason) to stop and then restart the service. Conf file
Uploaded Version:
I was intending to add both drive-to-drive caching and zRam, but when I got into the details only the drive-to-drive caching was possible, given that the kernel I have backported doesn't support hot-mounting new zRam drives. zRam would only be possible if just one zRam device were needed/used; if multiple are needed (as would be the typical case), there's no way to do this from multiple files without the hot-mounting feature. I'll add in the zRam capability when I have a kernel that supports hot mounting.
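For reference, kernels from 4.2 onward expose a zram-control sysfs interface for exactly this hot-add case; older kernels have to fix the device count at module load time, which matches the limitation described above. A hedged sketch (sizes, filesystem choice, and function names are illustrative; the sysfs paths are the kernel's):

```shell
#!/bin/sh
# Hedged sketch of zram device hot-add.
# Kernels >= 4.2: reading /sys/class/zram-control/hot_add allocates a new
# /dev/zramN and prints N. Older kernels: the device count is fixed by
# "modprobe zram num_devices=N" and cannot grow afterwards.

zram_hot_add_supported() {
    [ -e /sys/class/zram-control/hot_add ]
}

add_zram_device() {
    # Prints the index of the newly created /dev/zramN (requires root).
    cat /sys/class/zram-control/hot_add
}

# Usage (as root, illustrative only):
#   if zram_hot_add_supported; then
#       n=$(add_zram_device)
#       echo 256M > "/sys/block/zram$n/disksize"
#       mkfs.ext4 "/dev/zram$n"
#   fi
```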
The difference here is that a reboot is performed by the user when he thinks all his stuff is OK for a reboot, while restarting services automatically is done, well... automatically. So, by adding the user to the equation, I think the risk of data loss can be reduced. I know it will look like Windows, but the only other way would be to add stuff to the plugin UI to let the script talk to the user when updates or installations are being performed, and I'm unable to do that.
From what I managed to read in the source, your method reads stuff from the config only.
I have some decades of experience in IT too... Assuming that others did their job right is a big fat mistake.
I'm just telling you what I know about it, because I thought about implementing something like that too, so what I found might save you some time. Any program that is working on an open file and does not automatically save its data is simply shut down when "stopped", and the data isn't saved. An obvious example would be VirtualBox: VM data isn't saved, you must call its own CLI VM control binary, tell it to shut down the VMs safely first, and wait for them to shut down. Most text editors, browsers, downloaders, and whatever else tend to act like that too.
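The per-service "safe stop" idea could be sketched as a dispatch table. The mapping itself and the second argument (an instance name, e.g. a VM) are assumptions; `VBoxManage controlvm ... savestate` and `postfix stop` are the real safe-shutdown entry points of those two programs:

```shell
#!/bin/sh
# Hedged sketch: map each service to the command that stops it without
# data loss, falling back to a plain "service ... stop". The service
# names chosen here are illustrative.
safe_stop() {
    svc=$1 instance=$2
    case $svc in
        virtualbox) VBoxManage controlvm "$instance" savestate ;;
        postfix)    postfix stop ;;
        *)          service "$svc" stop ;;
    esac
}
```

In a modular config this table would live in per-plugin files rather than a hardcoded case statement, as suggested earlier in the thread.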
I'll study the services problem more. My high-level goal is to not require a reboot if a decent way exists to avoid one. I'll need to study what 'service xxx stop' actually executes; I'll take some time to look at what is present in those files, and I'll do some more searching on the internet for cases people have run into. I did briefly play around with lsof and it didn't list any open files for any of the directories I was using. However, I know from practical experience that if I mess with the /var/spool directory without shutting down postfix it will damage the directory, so I'll do a bit more learning and experimentation to see what should be done. On another note, I had been thinking about the fact that we are using 'cp' to copy the changed data back and forth between the cache and the real directory. This is a fine method if the files in the directory are static or increasing; copy handles both those cases fine. However, if the directory is in the cache (memory or another disk) and a file is deleted, the deletion won't be reflected back in the original directory, because copy isn't going to delete files. I was thinking rsync might be a better solution to handle this. Thoughts?
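rsync does cover that case: with --delete it removes files from the target that no longer exist in the source. A minimal sketch of the write-back step (paths and the function name are illustrative):

```shell
#!/bin/sh
# Hedged sketch: write the cache back to the real directory.
# "cp -a" would never remove files that were deleted in the cache;
# "rsync -a --delete" propagates deletions too. The trailing slashes
# make rsync copy the *contents* of the cache dir, not the dir itself.
sync_back() {
    cache=$1 real=$2
    rsync -a --delete "$cache/" "$real/"
}

# Usage: sync_back /mnt/ramcache/var-log /var/log
```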
0.4 (1.27.2016):
Todo: Do a little more checking before using the service command to stop/restart.
Since you opened your own repo as I suggested, I'm moving the answer here and rejecting/closing the pull request at bobafetthotmail/folder2ram#1 for the reasons already discussed.
I'm not on the defensive, I'm just forcing you in the open to admit that you don't know Bash hehehehe. :)
I kinda figured that out myself by looking at the script you posted, you just re-wrote everything. In another language.
Not that scripting per se is terribly complex; the hardest part is getting the damn system components to do what you want them to, not writing the script.
I lost a month (= a lot of my free time during a month, as I also have a life and a job) getting the damn thing to work with systemd, which I knew very little about before. "systemctl enable my_service.service" does not work with my unit no matter what I tried, so I had to symlink stuff manually (which is also another official way of installing services, not a hack). Now I'm troubleshooting other retarded %&$%£ that happens only on Openmediavault 3.0 and not on a vanilla Debian 8.0, which simply erases my systemd service files for unknown reasons.
[/end rant.]
Don't worry, my ego isn't hurt by that. :) My project started just because the only option to do these tricks was fs2ram (you can find it in the debian unstable repos, I think), which is inane garbage, has no way of safe uninstall, and encrypts stuff for no reason.
If you can do better, I'll just use your own project and maybe learn some Perl along the way.
I'll still keep my own sh/bash version though, maybe backporting some of your features.
afaik, the "standard" in such kinds of systems on linux is that newer duplicates override older ones.
Also, the "standard" seems to be having a main config but also making a folder called "something.d" and parsing ALL files inside that folder, with files overriding each other based on the number before the name (= 01-file.conf gets overridden by duplicate configs in 02-file.conf, and so on).
The reason for this is allowing easy change of configuration by the debian package scripts (those run on install and on removal of packages), as well as by admins and suchlike.
With this you can simply run
echo "tmpfs /path/to/folder some-options-here" > /etc/folder2ram/folder2ram.d/99-my-program-folders.conf
to add a custom configuration, or a simple
rm -f /etc/folder2ram/folder2ram.d/99-my-program-folders.conf
to remove your custom configuration, and a simple reboot for the changes to go live safely.
This saves the aggravation/danger of parsing a single config file in a safe way from the CLI or third-party scripts each time you need to change the config.
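That conf.d convention (read every file in sorted order, later numbers overriding earlier ones) can be sketched in a few lines of shell. The directory name matches the example above; the field layout (field 2 = mount point) and the function name are assumptions:

```shell
#!/bin/sh
# Hedged sketch of "conf.d" parsing: concatenate all *.conf files in
# lexical order (01-... before 99-...), skip comments and blank lines,
# and let the last definition for each mount point win.
CONF_D=${CONF_D:-/etc/folder2ram/folder2ram.d}

load_config() {
    for f in "$CONF_D"/*.conf; do
        [ -r "$f" ] && cat "$f"
    done | awk '!/^#/ && NF { cfg[$2] = $0 }
                END { for (m in cfg) print cfg[m] }'
}
```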
Yes, that's the goal. I installed Debian and then OMV on a Zyxel NSA325v2, which is a commercial NAS (with a Kirkwood SoC, so it is supported by Debian; there is a guy who makes custom bootloaders for it and supports a large number of Kirkwood-based commercial NASes, but anyway). That box has 512MB of RAM. There are Raspi/BananaPi/whatever-embedded-dev-board images of OMV too (made/maintained by ryecoaaron, with dedicated threads in the forums) and they don't have dozens of GB of RAM either.
Besides, it wouldn't make any sense to waste several GB of RAM to hold a database of whatever (maybe Kodi's metadata stuff? don't know) when you could simply nibble a few GB from your multi-TB hard drive/RAID array that is in use anyway.
The train of thought was: embedded devices usually prefer squashfs in RAM or on flash because reading it is faster than their own flash storage and squashfs has a rather good compression ratio.
OMV is a "kinda embedded" system, as it isn't expected to change a lot over time (apart from updates), so most binaries and stuff can be kept in a squashfs, in RAM or even on its system drive (a usb flash drive/sdcard).
Also the flashmemory plugin will be used by people that place OMV in a usb flash drive (or card reader), and by people running it in embedded boards (or hacked NASes) and any speed boost will be nice. On my own NAS box when I changed the system drive from a crappy usb 2.0 pendrive to a midrange class-10 microSD card, everything was snappier.
Of course it won't do jack to people using SSDs already, but that's not the main use case of the plugin/folder2ram script anyway.
I think I checked, and apt-get or dpkg can be configured (with a "multiple config files" system like the one I discussed above, which is very common for many programs on linux) to run a command before/after installing packages, which can be used to trigger the "unmount squashfs" or "make squashfs and mount it" steps.
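apt's DPkg::Pre-Invoke and DPkg::Post-Invoke hooks do exist and run arbitrary commands before and after dpkg operations; a hedged sketch of such a drop-in file (the hook script paths are made up):

```
# /etc/apt/apt.conf.d/99folder2ram -- hypothetical hook scripts
DPkg::Pre-Invoke  { "/usr/sbin/folder2ram-unmount-squashfs || true"; };
DPkg::Post-Invoke { "/usr/sbin/folder2ram-rebuild-squashfs || true"; };
```

The `|| true` keeps a failed hook from aborting the whole package operation, which may or may not be the desired behavior here.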
Doing a weird setup with squashfs would also have the secondary effect of hardening the system's security a bit, because many things become read-only, and unless someone is targeting the users of this OMV plugin specifically, it's wildly unlikely that malware will be able to handle the situation.
But yeah, that's a pipe dream that was supposed to stay on the back burner until everything else was OK.
Oooh! Cool, never saw that.
Zram just got useful then. I thought it was only for "swap into a compressed area of RAM", which every time I tried it netted worse performance than disk swap.
If it can keep compressed folders too it's a nice feature that might make squashfs unnecessary, if it does compress things at the same level as squashfs anyway.
That's probably a non-issue (just add a safety check to ignore/fallback), OMV 3.0 (based off Debian Jessie) is in beta since the first day of January 2016, and will probably go live within months.
Jessie starts with kernel 3.16 and has backports for kernel 4.3 atm.
Possible yes, but I think it was discussed already in the forums and ryecoaaron/others were against this because it would allow noob users to screw with settings they should NOT be allowed to touch. OMV is aimed at noob linux users.
So if you want to do that, discuss it with him too.
I'm personally in the "better to not do that" camp though.
Well, consider that with that specific earlier todo-list item you can write super-easy pre-configured settings for all current OMV plugins inside their own debian package install/remove scripts, so that when they are installed/removed and they detect the flashmemory plugin, they also install/remove the relevant settings for dir2ram and ask for a reboot.
Which is nifty and imho much better than letting noobs handle config files with or without a GUI, since the entire point of the OMV plugin system was to pre-configure all the pre-configurable for them anyway.
Those that can install non-plugin stuff from CLI will probably prefer to add the settings by CLI too.
I will.
I only need to get folder2ram ready to be shipped for the new OMV 3.0 first (currently on beta).