
/usr/local/lib/python3.9 space usage #7473

Closed
cookiemonsteruk opened this issue May 20, 2024 · 12 comments
Labels
support Community support

Comments

@cookiemonsteruk

Important notices

Our forum is located at https://forum.opnsense.org , please consider joining discussions there instead of using GitHub for these matters.

Before you ask a new question, we ask you kindly to acknowledge the following:

Hello. I seem to recall that this or a previous version refactored Unbound to use a devfs device, but I don't remember the details. Anyway, that may have nothing to do with today's question.
I've noticed that regularly, as in a number of times a day, the filesystem usage appears to balloon before returning to more "normal" levels by itself.
When "abnormal":

$ df -h
Filesystem                  Size    Used   Avail Capacity  Mounted on
zroot/ROOT/default           16G     10G    6.0G    64%    /
devfs                       1.0K    1.0K      0B   100%    /dev
/dev/ada0p1                 260M    1.8M    258M     1%    /boot/efi
zroot/var/mail              6.0G    112K    6.0G     0%    /var/mail
zroot/var/audit             6.0G     96K    6.0G     0%    /var/audit
zroot/usr/src               6.0G     96K    6.0G     0%    /usr/src
zroot/tmp                   6.0G    3.8M    6.0G     0%    /tmp
zroot                       6.0G     96K    6.0G     0%    /zroot
zroot/var/log               6.4G    433M    6.0G     7%    /var/log
zroot/var/crash             6.0G     96K    6.0G     0%    /var/crash
zroot/var/tmp                12G    5.9G    6.0G    50%    /var/tmp
zroot/usr/ports             6.0G     96K    6.0G     0%    /usr/ports
zroot/usr/home              6.0G     96K    6.0G     0%    /usr/home
devfs                       1.0K    1.0K      0B   100%    /var/dhcpd/dev
/dev/md43                    48M    8.0K     44M     0%    /usr/local/zenarmor/output/active/temp
tmpfs                       100M    576K     99M     1%    /usr/local/zenarmor/run/tracefs
devfs                       1.0K    1.0K      0B   100%    /var/unbound/dev
/usr/local/lib/python3.9     16G     10G    6.0G    64%    /var/unbound/usr/local/lib/python3.9

That makes it look as if Unbound is using 10 GB of storage, at 64% capacity.

A few minutes later, more "normal":

$ df -h
Filesystem                  Size    Used   Avail Capacity  Mounted on
zroot/ROOT/default           22G    8.5G     14G    38%    /
devfs                       1.0K    1.0K      0B   100%    /dev
/dev/ada0p1                 260M    1.8M    258M     1%    /boot/efi
zroot/var/mail               14G    112K     14G     0%    /var/mail
zroot/var/audit              14G     96K     14G     0%    /var/audit
zroot/usr/src                14G     96K     14G     0%    /usr/src
zroot/tmp                    14G    3.8M     14G     0%    /tmp
zroot                        14G     96K     14G     0%    /zroot
zroot/var/log                14G    433M     14G     3%    /var/log
zroot/var/crash              14G     96K     14G     0%    /var/crash
zroot/var/tmp                14G    132K     14G     0%    /var/tmp
zroot/usr/ports              14G     96K     14G     0%    /usr/ports
zroot/usr/home               14G     96K     14G     0%    /usr/home
devfs                       1.0K    1.0K      0B   100%    /var/dhcpd/dev
/dev/md43                    48M    8.0K     44M     0%    /usr/local/zenarmor/output/active/temp
tmpfs                       100M    576K     99M     1%    /usr/local/zenarmor/run/tracefs
devfs                       1.0K    1.0K      0B   100%    /var/unbound/dev
/usr/local/lib/python3.9     22G    8.5G     14G    38%    /var/unbound/usr/local/lib/python3.9

Now it shows 8.5 GB used, at 38% capacity.
More often, that usage grows to as much as 12 GB, or 85% of the total.

In System: Settings: Logging I have nothing entered and nothing ticked, so all defaults.
In Unbound: Advanced I have:
Log Queries = enabled
Log Replies = enabled
Tag Queries and Replies = enabled
Log local actions = disabled
Log SERVFAIL = enabled
Log Level Verbosity = Level 1 (default)
Log validation level = Level 0 (default)
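
For reference, a sketch of how those GUI options map to unbound.conf directives; the exact configuration OPNsense generates may differ, so treat the mapping as an assumption:

server:
    log-queries: yes          # Log Queries
    log-replies: yes          # Log Replies
    log-tag-queryreply: yes   # Tag Queries and Replies
    log-local-actions: no     # Log local actions
    log-servfail: yes         # Log SERVFAIL
    verbosity: 1              # Log Level Verbosity
    val-log-level: 0          # Log validation level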

Can someone tell me what causes this loop mount to grow in normal OPN operation?

@cookiemonsteruk cookiemonsteruk added the support Community support label May 20, 2024
@fichtner
Member

hi @cookiemonsteruk,

Sorry it was a long weekend.

This might be related to compiled objects in /usr/local/lib/python3.9 -- the nullfs mount just exposes the same file system under a different directory.

You might get better diagnostics running

# du -hd0 /usr/local/lib/python3.9
158K	/usr/local/lib/python3.9

But as you can see, the directory itself is not very big...
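
To see which of these paths are nullfs mounts in the first place, mount can filter by filesystem type:

# List only the nullfs mounts; on an OPNsense box the chrooted
# Unbound paths under /var/unbound should show up here.
mount -t nullfs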

One last thing to note is that

zroot/ROOT/default           22G    8.5G     14G    38%    /
[...]
/usr/local/lib/python3.9     22G    8.5G     14G    38%    /var/unbound/usr/local/lib/python3.9

If you look at it closely you can see df reporting the same numbers as the parent file system, which is not very helpful.

Looking at my system I see the same thing, also for the new /lib mount:

zroot/ROOT/default           217G    3.2G    214G     1%    /
/usr/local/lib/python3.11    217G    3.2G    214G     1%    /var/unbound/usr/local/lib/python3.11
/lib                         217G    3.2G    214G     1%    /var/unbound/lib

So whatever fluctuation you see is just the root file system size and that could be anything.
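
If you want to watch the actual dataset rather than its nullfs reflection, querying ZFS directly should work (assuming the default zroot layout shown above):

# Report the root dataset's own usage; the nullfs mounts
# merely mirror these numbers in df.
zfs list -o name,used,avail,refer zroot/ROOT/default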

It may be a bug in FreeBSD (ZFS even), but it could also be "working as intended". I don't know.

Cheers,
Franco

@pmhausen
Contributor

It may be a bug in FreeBSD (ZFS even), but it could also be "working as intended". I don't know.

It is working as intended if /usr/local/lib/python3.11 is not a separate file system.

@fichtner
Member

@cookiemonsteruk @pmhausen ok that means case closed? Fluctuation observed is on the root filesystem then.

@pmhausen
Contributor

pmhausen commented May 22, 2024

I'd guess so. To confirm, go to any local directory that is not a filesystem root, e.g.

cd /usr/local/libexec
df .
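
If /usr/local/libexec is not a separate mount, the output should point back at the root dataset, along these lines (numbers illustrative, taken from the earlier df run):

Filesystem            Size    Used   Avail Capacity  Mounted on
zroot/ROOT/default     22G    8.5G     14G    38%    /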

df was invented when there were neither loopback mounts nor logical volume managers. Every mount was a fixed disk partition.

https://www.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/df.c

@cookiemonsteruk
Author

Hi, thanks for looking at this one @fichtner and @pmhausen. Please allow me to do some of these diagnostics before closing. I've not had the chance yet.

@cookiemonsteruk
Author

Right. I took a quick glance and checked as you suggested above, and that seems to be the case: /var/unbound/usr/local/lib/python3.9 is not a separate filesystem, and its reported size corresponds exactly to the root filesystem size.
In other words, it comes down to an artifact of how the current tools report these paths.
In light of this, I'm happy to say this path is not the one with the "problem"; it is somewhere else. Where that is can move to a separate discussion if we feel that way.
My next step, then, is to track down and, if needed, report this storage flexing. It still seems unidentified and, for me, problematic.

@cookiemonsteruk
Author

Closing and moving to a separate diagnostic.

@cookiemonsteruk
Author

Hi. So far I've tracked the high storage usage to /usr/local/datastore/sqlite/conn_all.sqlite as the biggest consumer.
I have default system logging as per the initial post, and logging enabled on most firewall rules, of which I have only a handful.
Could I please ask for a pointer to what populates this database?
Hopefully I can dial something down to relieve this situation.

penguin@OPNsense:/ $ df -h
Filesystem                  Size    Used   Avail Capacity  Mounted on
zroot/ROOT/default           16G     14G    2.2G    87%    /
devfs                       1.0K    1.0K      0B   100%    /dev
/dev/ada0p1                 260M    1.8M    258M     1%    /boot/efi
zroot/var/mail              2.2G    112K    2.2G     0%    /var/mail
zroot/tmp                   2.2G    4.2M    2.2G     0%    /tmp
zroot/usr/ports             2.2G     96K    2.2G     0%    /usr/ports
zroot/var/log               2.5G    374M    2.2G    15%    /var/log
zroot/usr/src               2.2G     96K    2.2G     0%    /usr/src
zroot/var/audit             2.2G     96K    2.2G     0%    /var/audit
zroot/var/tmp               8.2G    6.1G    2.2G    74%    /var/tmp
zroot/var/crash             2.2G     96K    2.2G     0%    /var/crash
zroot/usr/home              2.2G     96K    2.2G     0%    /usr/home
zroot                       2.2G     96K    2.2G     0%    /zroot
devfs                       1.0K    1.0K      0B   100%    /var/dhcpd/dev
/dev/md43                    48M    1.4M     43M     3%    /usr/local/zenarmor/output/active/temp
tmpfs                       100M    492K    100M     0%    /usr/local/zenarmor/run/tracefs
devfs                       1.0K    1.0K      0B   100%    /var/unbound/dev
/usr/local/lib/python3.9     16G     14G    2.2G    87%    /var/unbound/usr/local/lib/python3.9
penguin@OPNsense:/usr/local/datastore/sqlite $ ls -alh
total 6384832
drwxr-x---  2 root  wheel     8B May 29 21:35 .
drwxr-x---  3 root  wheel     3B Apr  4  2023 ..
-rw-r-----  1 root  wheel   7.8M May 29 21:35 alert_all.sqlite
-rw-r-----  1 root  wheel    17G May 29 21:35 conn_all.sqlite
-rw-r-----  1 root  wheel    71M May 29 21:35 dns_all.sqlite
-rw-r-----  1 root  wheel    20M May 29 21:35 http_all.sqlite
-rw-r-----  1 root  wheel   8.0K May 29 21:00 sip_all.sqlite
-rw-r-----  1 root  wheel    28M May 29 21:35 tls_all.sqlite
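
For anyone repeating this hunt, a minimal sweep that surfaces the largest directories on the root filesystem; du's -x flag stays on a single filesystem, so the nullfs and tmpfs mounts are not descended into:

# Largest directories up to three levels deep, on / only,
# sorted by human-readable size.
du -xhd3 / 2>/dev/null | sort -h | tail -n 20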

@ureyni

ureyni commented May 29, 2024

The /usr/local/datastore/sqlite folder is used by Zenarmor to store the conn_all.sqlite database for the network connection log, which by default holds data for two days. This value is customizable. You can check the current setting using the following command:
grep retireAfter /usr/local/zenarmor/etc/eastpect.cfg
To reduce the file size, you can use the following command:
echo -n "vacuum;" | sqlite3 /usr/local/datastore/sqlite/conn_all.sqlite

@cookiemonsteruk
Author

cookiemonsteruk commented May 30, 2024

@ureyni this is very useful and might just pinpoint my problem, thank you.
Indeed it is on the default of 2 days, which might explain why I very often (actually most times) get a "database is locked" error when navigating the Zenarmor screens. Perhaps my connection volume is outgrowing the SQLite database option. I'll look into it.
Edit (this highlights the situation, I think :))

penguin@OPNsense:/usr/local/datastore/sqlite $ grep retireAfter /usr/local/zenarmor/etc/eastpect.cfg
retireAfter = 2
penguin@OPNsense:/usr/local/datastore/sqlite $ echo -n "vacuum;" | sqlite3 /usr/local/datastore/sqlite/conn_all.sqlite
Runtime error near line 1: database is locked (5)

I'll stop Zenarmor before attempting again.
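
A sketch of that sequence, assuming the engine's rc service is named eastpect (inferred from the config path above, so treat the name as an assumption):

# Stop the engine so it releases its lock on the database,
# vacuum, then start it again. "eastpect" as the service
# name is an assumption.
service eastpect stop
echo -n "vacuum;" | sqlite3 /usr/local/datastore/sqlite/conn_all.sqlite
service eastpect start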

@cookiemonsteruk
Author

@ureyni do you have any more tips to reduce the size?
conn_all.sqlite is still 18G after multiple vacuums. For now I've increased the amount of memory in the system, which seems to have alleviated the sluggishness, and I've reduced the retention in Zenarmor to 1 day from the default 2.
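
One way to tell whether that 18G is live data or reclaimable free pages is via standard SQLite pragmas; page_count times page_size gives the file size, and freelist_count is what VACUUM could reclaim:

# If freelist_count is near zero, the space is live rows and
# only a shorter retention will shrink the file.
sqlite3 /usr/local/datastore/sqlite/conn_all.sqlite \
  "PRAGMA page_size; PRAGMA page_count; PRAGMA freelist_count;"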

@cookiemonsteruk
Author

Closing as it seems related to Zenarmor. Thanks!
