
Beta status #328

Closed
mikeruss1 opened this issue Jul 14, 2022 · 127 comments


@mikeruss1

Common Classes failing when I did the main update. This is a change; Common Classes wasn't being updated with the main program before.

Tried an independent update of Common Classes and signatures after restarting; same checksum failure.

Will do a reinstall when I have more time.

@mikeruss1
Author

mikeruss1 commented Jul 14, 2022

Now you've reached beta... did you notice this idea from #303?

I thought of a variation on this which might be easier:
After a block you don't just exit; you write the browser block report (or whatever), then return to the include with an array having been set which confirms the block and reports the reason for the block, the time, the IP address, the number of infractions, and the banned or tracking status. I process this, send you an API call instructing you to change the status to banned (or whatever), and then exit.
This facility would be turned on in the front-end, together with the parameters required.
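The suggestion above could be sketched roughly as follows. All names here are hypothetical (none of them exist in CIDRAM itself); the point is just the shape of the idea: a report array set after a block, which the site-side code inspects before exiting.

```php
<?php
// Hypothetical sketch of the block-report idea; these names are NOT part of
// CIDRAM's actual API. After a block, instead of exiting immediately,
// control returns to the include with a report array set.
$blockReport = [
    'Blocked' => true,
    'Why' => 'Cloud service',
    'Time' => time(),
    'IPAddress' => '127.0.0.1',
    'Infractions' => 3,
    'Status' => 'Tracking', // or 'Banned'
];

// Site-side processing: decide whether to escalate based on the report
// (e.g., before instructing CIDRAM via an API call to ban the IP).
function decideAction(array $report): string
{
    return ($report['Blocked'] && $report['Infractions'] >= 3)
        ? 'Banned'
        : $report['Status'];
}

echo decideAction($blockReport), "\n";
```

The site would process the report, make its API call, and only then exit, so the block still terminates the request as before.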

@Maikuolan
Member

I reckon that could work. :-)

@mikeruss1
Author

To reinstall, do I just secure the config and auxiliary YMLs, run the install, then copy them back?

@Maikuolan
Member

Yep, that should do it. Or, even just replacing all the existing PHP files in the vault with fresh copies from the repository should do it too, seeing as they're the only files likely to be causing the problems in this case.

@mikeruss1
Author

Done; will recheck Common Classes and signatures after the next update.

@mikeruss1
Author

Blocking from an auxiliary rule works, but no log is created in the vault. I have an aux rule which turns off logging, which shouldn't have been tripped. If it was using "either conditions" instead of "all conditions", that would explain it.
Signatures Count = 1
Why Blocked = Cloud service
Use Windows-style wildcards to test the conditions.
In order to trigger the rule, all conditions must be met.

@mikeruss1
Author

Forget it; looks like I had turned off logging in the config.

@mikeruss1
Author

mikeruss1 commented Jul 17, 2022

Tracking page: the expiry date/time of a tracked IP changes to 7 days from the point a valid transaction goes through for that IP. It only changes once each day. Note it says 6 days; the last infraction was 2 days ago.

IP Address – Status – Infractions – Expiry – Options
127.0.0.1 – Tracking – 3 – Sun, 24 Jul 2022 06:54:21 +0000 (6 days from now)

@Maikuolan
Member

Behaviour replicated and confirmed at my end. It shouldn't be doing that, so I guess this is a bug. Investigating the cause now.

Maikuolan added a commit that referenced this issue Jul 17, 2022
Ensure that IP tracking expiries aren't extended when encountering
non-blocked, non-nontrackable requests.
@Maikuolan
Member

Okay; that particular bug should be fixed now. Thanks for spotting this. :-)

@mikeruss1
Author

Updated; will verify tomorrow.
If you could arrange a signatures/Common Classes update, I could check that too.

@mikeruss1
Author

Expiry date bug confirmed fixed.

@mikeruss1
Author

Log view from tracking is not present with logfile{yyyy}{mm}.log.

@mikeruss1
Author

mikeruss1 commented Jul 21, 2022

Sorry, signature update still failing; reinstalled a few days ago.
IPv4 – Successfully deactivated.
IPv4 – signatures/ipv4.dat – Checksum error! File rejected!
IPv4 – Failed to update! +0 bytes | -0 bytes | 0.069
IPv4 – Successfully activated.

The Security Extras module updated OK.

@Maikuolan
Member

Hm.. Very strange. :-/

I'll keep looking into it.

@Maikuolan
Member

I haven't yet identified the reason why it isn't updating properly, but I've made a change to the way it presents those failures, at least: When there's a checksum failure, it'll now display the expected and the actual checksum, to allow us to compare the difference for the failed update (in case this might reveal something about the nature of the failure in the future).

Anyway.. Will reply again when I'm able to get a bit further with it.

@mikeruss1
Author

hope it helps ...
IPv4 – Successfully deactivated.
IPv4 – signatures/ipv4.dat – Checksum error! File rejected!
Actual – 0e4c4d4241fb9297c0f39b99ec8fc9468c01233541ceed87e0a80b3beb00d9be:1066519
Expected – d5558cd419c8d46bdc958064cb97f963d1ea793866414c025906ec15033512ed:14
IPv4 – Failed to update! +0 bytes | -0 bytes | 0.235
IPv4 – Successfully activated.
IPv4-ISPs – Successfully deactivated.
IPv4-ISPs – signatures/ipv4_isps.dat – Checksum error! File rejected!
Actual – f3aa81a6b0dc27205da2eed75e370594194e7aef015eb4cc3d18ea475ab5027e:565405
Expected – d5558cd419c8d46bdc958064cb97f963d1ea793866414c025906ec15033512ed:14
IPv4-ISPs – Failed to update! +0 bytes | -0 bytes | 0.233
IPv4-ISPs – Successfully activated.
IPv6 – Successfully deactivated.
IPv6 – signatures/ipv6.dat – Checksum error! File rejected!
Actual – e0c39177d2371bef78ee13afa6ce3ec84c3b0a17e9007f2d69f3199039be2c04:305310
Expected – d5558cd419c8d46bdc958064cb97f963d1ea793866414c025906ec15033512ed:14
IPv6 – Failed to update! +0 bytes | -0 bytes | 0.208
IPv6 – Successfully activated.
IPv6-ISPs – Successfully deactivated.
IPv6-ISPs – signatures/ipv6_isps.dat – Checksum error! File rejected!
Actual – da9d1de07d558362340431689838e582119813f7179fdb40915596b7bd523889:50043
Expected – d5558cd419c8d46bdc958064cb97f963d1ea793866414c025906ec15033512ed:14
IPv6-ISPs – Failed to update! +0 bytes | -0 bytes | 0.232
IPv6-ISPs – Successfully activated.

@Maikuolan
Member

hope it helps ...

It does. :-)

Found the problem. Committing in a patch now.

Maikuolan added a commit that referenced this issue Jul 24, 2022
@Maikuolan
Member

Done. :-)

Let me know how it goes. It should work properly now. '^^

@mikeruss1
Author

Yeah, fixed!
There wasn't anything to update, so I just repeated the signature update; presumably the problem was in the signature files?

@Maikuolan
Member

The problem was the exact paths cited in the metadata for the signature files, for where exactly the upstream exists. At some point between updating the updater and where we are now, I'd moved the signature files into their own directory and renamed the directory for the Common Classes Package, but hadn't updated those paths as cited in the metadata accordingly. So, when the updater tried to update those components, it requested the old path instead of the new one, and GitHub returned some 404 errors instead of the expected data, thus resulting in the failure.

The checksums quoted in the above reply clued me in to the problem, as they looked like checksums I'd seen before for one of GitHub's 404 messages. Not sure how I managed to forget to update those paths, and I'm pretty sure I could actually remember updating them already, but after suspecting something like that from seeing the checksums, and then checking the metadata, I noticed that it was still using the old paths. The fix just updates the paths to point to where they're supposed to be pointing now (the changes can be checked via the commit which references this issue above). Maybe what I remember is updating the wrong copy of the upstream metadata (seeing as I have several copies installed at my machine for various stages of testing, implementing new features, QA, etc)? I'm not sure. In any case, I've definitely updated the correct copy now and committed/pushed those changes, and if it's also now updating properly accordingly... all good, I guess..? '^^

On the bright side, it further demonstrates the usefulness of checksums, considering we wouldn't want to be overwriting good files with a bunch of 404 messages.
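The "hash:length" pattern visible in the update log above can be sketched as follows. This format is an assumption inferred from the log output, not CIDRAM's actual implementation; notably, every "Expected" value in the failed updates ends in ":14", which matches the 14-byte length of GitHub's plain-text 404 body, consistent with the wrong-path explanation.

```php
<?php
// Sketch of the "sha256hash:length" checksum format seen in the log above
// (assumed format, inferred from the output; not CIDRAM's actual code).
function buildChecksum(string $data): string
{
    return hash('sha256', $data) . ':' . strlen($data);
}

// GitHub's raw-content 404 body is a short plain-text message. Its 14-byte
// length matches the ":14" suffix shared by every "Expected" value above,
// which is plausibly how the shared checksum gave the wrong-path bug away.
$notFound = '404: Not Found';

echo buildChecksum($notFound), "\n";
```

Because every wrong path produced the same 404 body, every component reported the same "Expected" checksum, while the "Actual" checksums (of the good local files) all differed.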

Maikuolan added a commit to CIDRAM/CIDRAM-Extras that referenced this issue Nov 30, 2022
Changelog excerpt:
- Adjusted minimum value for some port directives from 1 to 0.
Maikuolan added a commit that referenced this issue Nov 30, 2022
Changelog excerpt:
- Adjusted minimum value for some port directives from 1 to 0.
Maikuolan added a commit that referenced this issue Nov 30, 2022
Changelog excerpt:
- Adjusted minimum value for some port directives from 1 to 0.
Maikuolan added a commit that referenced this issue Nov 30, 2022
Changelog excerpt:
- Adjusted minimum value for some port directives from 1 to 0.
@Maikuolan
Member

Memcached port problem should be fixed now. :-)

@Maikuolan
Member

the statistics in v3 look wrong, I have IPv4 = 1, everything else 0. Not sure what's included, but since testing started early Sept there have been Cloud Service, OOD browser, Invalid UA, Attacks etc etc

As long as the statistic in question has been configured to be tracked (since v3, specific individual statistics can be turned on/off, unlike in v1/v2, whereby statistics could only be turned on/off as a whole rather than individually), it should continually increment each time the event in question occurs (e.g., two IPv4 block events should show as 2, then 3 on the next event, 4 on the next, and so on). If it's just showing as 1 all the time, then that is definitely wrong behaviour.

Testing at my end locally, it appears to be working correctly (i.e., wrong behaviour not replicated at my end). Is it possible that the cache might've been reset, and you're seeing it reach 1 anew, after having been reset?

I think you suggested earlier (at least, pretty sure it was you; might've been someone else; it was a while back now) the idea of backing up the cache to a file periodically, in order to restore the data in case it's reset like that. Had some issues with that before, but I might need to reexplore the idea again soon, I think.

@Maikuolan
Member

now we have bobuam working I have switched over about 50 pages of the site to v3.

👍

how about having an option in the frontend to select the basis of the infraction limit?

That could work.

I'll look into that tonight or tomorrow.

@mikeruss1
Author

mikeruss1 commented Nov 30, 2022

I have statistics set on for IPv4 blocks, and off for passed requests. It seems to be stuck at 1. I don't think it's the cache clearing, because when that happens the start date is reset, which it hasn't been. Volumes have been very low too. It could be that the stats were all on originally, so that's where the 1 came from; I turned off passed requests, and that's stopped everything?

The memcached setup worked, and it accepts and blocks a GET. However, tracking is empty, and the memcached tab on the Cache menu page is empty. I reset it to no cache, repeated the same process, and tracking is set. So I suspect the memcached data is not being updated?

@mikeruss1
Author

Have dumped the memcached keys, and it looks like the entries are there...
CIDRAM_Tracking-(my ip address)
CIDRAM_Tracking-(my ip address)-MinimumTime
CIDRAM_Statistics-Since
CIDRAM_DnsReverses-(my ip address)
CIDRAM_(followed by what looks like a binary encrypted key)

Worth noting the v2 entries are also there, without the leading CIDRAM.

So it seems it must be a retrieval problem?
I can extract the data as well if you need it.

@mikeruss1
Author

mikeruss1 commented Dec 5, 2022

Re the stats problem: I turned on reporting valid requests and it accumulated OK to show 1. It's been like that for 24 hours; that also is not showing anything beyond 1!
Weird!!

I looked up the memcached contents of the cache that are not being shown in the frontend:
CIDRAM_Tracking-(my ip address) - contains 2 - presumably infractions
CIDRAM_Tracking-(my ip address)-MinimumTime - contains 604800 - presumably the original tracktime of 7 days when created; note it's not reduced

@Maikuolan
Member

Maikuolan commented Dec 5, 2022

presumably the original tracktime of 7 days when created; note it's not reduced

So.. Due to changes to the way that caching works in v3, unlike with v1/v2, expiry times for existing cache entries can't be manipulated quite as easily as before anymore. We needed to be able to manipulate expiry times for situations such as extending the time for which an IP address is banned, increasing/decreasing tracking times for existing entries, etc. To resolve that problem, as well as the tracking information for an IP, v3 now also has a "MinimumTime" cache entry which corresponds to the main IP tracking cache entry. The purpose of the "MinimumTime" cache entry isn't to tell CIDRAM the actual time when an IP tracking entry expires, but rather, to tell CIDRAM the minimum amount of time required before it should expire, since that can no longer be calculated from the actual existing information. Generally, it should either correspond to the default tracktime as defined by the configuration, or be something greater, in the event that the time was extended for whatever reason. Anyway, in terms of implications for actual users, it shouldn't matter at all. It just means we'll be seeing 2 things at the cache page now instead of just 1 per each IP address being tracked.
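The "MinimumTime" idea described above might be sketched as follows. This is illustrative only (not CIDRAM's actual cache handler); the key names mirror those seen in the memcached dump earlier in the thread, and the 604800 value matches the default 7-day tracktime mentioned above.

```php
<?php
// Illustrative sketch of the "MinimumTime" companion entry; NOT CIDRAM's
// actual code. Alongside each tracking entry, store the minimum lifetime it
// must have, so extensions can be applied even when the cache backend can't
// report a remaining TTL.
function track(array &$cache, string $ip, int $minimumTime): void
{
    $key = "CIDRAM_Tracking-{$ip}-MinimumTime";
    // Only ever raise the minimum; a later, shorter request must never
    // shrink an existing tracking/ban window.
    $cache[$key] = max($cache[$key] ?? 0, $minimumTime);
}

$cache = [];
track($cache, '127.0.0.1', 604800); // default 7-day tracktime
track($cache, '127.0.0.1', 86400);  // shorter value later: no reduction

echo $cache['CIDRAM_Tracking-127.0.0.1-MinimumTime'], "\n";
```

This is consistent with the observation above that the stored value stays at 604800 rather than counting down.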

It's been like that for 24 hours; that also is not showing anything beyond 1! Weird!!

Definitely weird. :-/

Looking over the existing code, there aren't any glaringly obvious problems AFAICT. Always possible I might just not be seeing it yet though.

I'd hoped to set my CIDRAM dev installation at my new dev machine to using Memcached tonight in order to actually test this problem for myself directly, but.. uh oh.. looks like Memcached isn't working at all at my new dev machine at the moment, unfortunately. I'll need to figure out what's going on there first, I guess. (Previous dev machine had APCu, Memcached, SQL Server, SQLite, and a few other things all running together without any problem. New dev machine, currently, has APCu, SQL Server, and SQLite running, but no working Memcached, Redis, or other stuff yet). Hopefully shouldn't take too long to sort out.

@Maikuolan
Member

so it seems it must be a retrieval problem?

Definitely seems that way. After all.. if the cache entries are there, as you can see them listed, but just not being reflected correctly at the actual statistics page, that does seem like a retrieval problem. But, having not found anything which should be able to cause that.. I'm not 100% sure. Anyway, I'll reply back as soon as/if I can figure something out.

@Maikuolan
Member

how about having an option in the frontend to select the basis of the infraction limit?

That could work.

I'll look into that tonight or tomorrow.

No progress on that front yet (been unexpectedly busy with offline stuff the past few days). Still on the current to-do list though.

@mikeruss1
Author

Bobuam problem causing PHP errors.
This UA:
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.1.1 Safari/605.1.15 (Applebot/0.1; +http://www.apple.com/go/applebot)"
caused this (folder info removed):
[08-Dec-2022 12:16:32 Europe/London] PHP Fatal error: Uncaught TypeError: Argument 1 passed to CIDRAM\CIDRAM\Core::bypass() must be of the type bool, null given, called in /,,,,,,vault/modules/bobuam.php on line 175 and defined in /........vault/CIDRAM/CIDRAM/Core.php:1458
Stack trace:
#0 /,,,,,,,/vault/modules/bobuam.php(175): CIDRAM\CIDRAM\Core->bypass(NULL, 'Applebot Bypass...')
#1 /........vault/modules/bobuam.php(184): CIDRAM\CIDRAM\Core->{closure}()
#2 /.....vault/CIDRAM/CIDRAM/Protect.php(237): require('/home/worldwa2/...')
#3 [internal function]: CIDRAM\CIDRAM\Core->CIDRAM\CIDRAM{closure}('bobuam.php', 2)
#4 /.......vault/CIDRAM/CIDRAM/Protect.php(242): array_walk(Array, Object(Closure))

#8 {main}
thrown in /.....vault/CIDRAM/CIDRAM/Core.php on line 1458
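The failure mode in the stack trace above reduces to passing null where a non-nullable bool is declared. The sketch below is a minimal reproduction with a simplified signature, plus the kind of guard that avoids the TypeError; it is illustrative only, not the actual committed patch.

```php
<?php
// Minimal reproduction of the TypeError in the stack trace above.
// Signature simplified from Core::bypass(); illustrative, not actual code.
function bypass(bool $condition, string $reason): string
{
    return $condition ? "Bypass: {$reason}" : 'No bypass';
}

$maybeNull = null; // e.g., a lookup that found nothing, as with the UA above

// bypass($maybeNull, 'Applebot Bypass'); // TypeError: null given, bool expected

// Null coalescing (or an explicit cast) keeps the argument a genuine bool:
echo bypass($maybeNull ?? false, 'Applebot Bypass'), "\n"; // prints "No bypass"
```

PHP's scalar type declarations reject null for a non-nullable `bool` parameter in strict mode, which is why the module call at bobuam.php line 175 threw rather than silently coercing.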

Maikuolan added a commit that referenced this issue Dec 8, 2022
@Maikuolan
Member

Committed a patch just now. Let me know how it goes. :-)

@mikeruss1
Author

Very quick, thank you!
Looks OK; detected I wasn't Applebot.
Signatures reference: bobuam.php:L64, bobuam.php:L70, bobuam.php:L80, Core.php:L1319
Why blocked: Ambiguous browser ID (MM), Malformed UA (WS), Suspected bot or scraper probe, Fake Applebot!

@mikeruss1
Author

That block I just tested produced 2 "Access Denied" messages: a large font at the top, then another smaller font, then my message.
The template custom header is:

<style>[id=detected]::after{content:"We get this right ...

@mikeruss1
Author

Concerned that with multiple overlapping posts the issues may be unclear?
The major problems are:
memcached variables are being created, but the results, such as tracking, are empty in the frontend
using normal cache, none of the stats increment beyond 1

@Maikuolan
Member

Although there are still chores/tasks listed here which haven't been finished yet, v3 is technically stable now (no more backwards-incompatible changes anticipated), and the chores/tasks remaining aren't too numerous, so I'm going to go ahead and create a stable "v3.0.0" tag now.

v3 was first branched (i.e., first split from v2) on February 14th 2022, and it's now already January 24th 2023. I'm not particularly fond of the idea of the beta period exceeding an entire year (because it can affect deployment for some users, e.g., those that install/update via Composer, or those that can't update between versions using the front-end updater due to relying instead on platforms which handle updates themselves, but which are only able to install/update when new stable version tags are available, etc), and I'm a little concerned that if I don't create that tag/release now, we'll see that entire-year threshold come to pass.

What does creating that tag now actually mean for those remaining chores/tasks?

Nothing. They're still on the to-do list, and I still hope to get them done. We'll keep these issues open, and I'll work through it all as time permits, same as was already the case, tag or no tag.

All it really means is that some changes may appear alongside slightly higher version numbers (e.g., v3.0.1, v3.1.0, etc). ;-)

Duplicating this comment at #327, too, as it's relevant to both issues.

@Maikuolan
Member

I think I might've figured out what was causing the problem with the statistics before, where it would always just show as 1. Working on a fix now.

Maikuolan added a commit that referenced this issue Feb 24, 2023
Changelog excerpt:
- The cache handler's incEntry and decEntry methods weren't handling
  non-expiring values correctly when using flatfile caching; Fixed.
Maikuolan added a commit that referenced this issue Feb 24, 2023
Changelog excerpt:
- The cache handler's incEntry and decEntry methods weren't handling
  non-expiring values correctly when using flatfile caching; Fixed.
Maikuolan added a commit that referenced this issue Feb 24, 2023
Changelog excerpt:
- The cache handler's incEntry and decEntry methods weren't handling
  non-expiring values correctly when using flatfile caching; Fixed.
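The bug class named in the changelog above can be sketched as follows. This is illustrative only, not the actual cache handler code: if the increment path only matches entries that carry an expiry field, a non-expiring counter gets recreated at 1 on every event instead of incrementing, which matches the "stuck at 1" symptom reported earlier.

```php
<?php
// Illustrative sketch of the incEntry bug class; NOT the actual handler.
// Buggy version: incrementing requires an expiry ('Time') field, so
// non-expiring counters are recreated at 1 each time.
function incEntryBuggy(array &$cache, string $key): void
{
    if (isset($cache[$key]['Data'], $cache[$key]['Time'])) {
        $cache[$key]['Data']++; // never reached for non-expiring entries
    } else {
        $cache[$key] = ['Data' => 1]; // counter resets to 1 each event
    }
}

// Fixed version: an expiry is no longer required to increment.
function incEntryFixed(array &$cache, string $key): void
{
    if (isset($cache[$key]['Data'])) {
        $cache[$key]['Data']++;
    } else {
        $cache[$key] = ['Data' => 1];
    }
}

$cache = ['ipv4-blocks' => ['Data' => 1]]; // non-expiring entry: no 'Time'
incEntryBuggy($cache, 'ipv4-blocks');
echo $cache['ipv4-blocks']['Data'], "\n"; // still 1: the reported symptom

incEntryFixed($cache, 'ipv4-blocks');
echo $cache['ipv4-blocks']['Data'], "\n"; // now 2: increments correctly
```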
@Maikuolan
Member

Done.

@mikeruss1
Author

Yeah, fixed!

@Maikuolan
Member

Closing in deference to #327.
