{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":26384672,"defaultBranch":"master","name":"uap-python","ownerLogin":"ua-parser","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2014-11-09T04:09:44.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/1764972?v=4","public":true,"private":false,"isOrgOwned":true},"refInfo":{"name":"","listCacheKey":"v0:1694943828.0","currentOid":""},"activityList":{"items":[{"before":"bb74478af92ec817d5be57c2337ba6f990160736","after":"8c0f5c3e1527cdb7a209415470e34b9f7e772883","ref":"refs/heads/master","pushedAt":"2024-04-21T19:27:21.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"masklinn","name":null,"path":"/masklinn","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6993?s=80&v=4"},"commit":{"message":"Update uap-core to 0.18\n\napparently I never updated it...","shortMessageHtmlLink":"Update uap-core to 0.18"}},{"before":"854c12f28a94ccca3eddf00ae180fb3f5533bb2a","after":"bb74478af92ec817d5be57c2337ba6f990160736","ref":"refs/heads/master","pushedAt":"2024-03-27T18:34:24.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"masklinn","name":null,"path":"/masklinn","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6993?s=80&v=4"},"commit":{"message":"Add migration documentation for 0.x -> 1.0\n\nFixes #181","shortMessageHtmlLink":"Add migration documentation for 0.x -> 1.0"}},{"before":"a270fe28805731a56d631ac7bbf38a52cc71ebe6","after":"854c12f28a94ccca3eddf00ae180fb3f5533bb2a","ref":"refs/heads/master","pushedAt":"2024-03-26T19:59:58.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"masklinn","name":null,"path":"/masklinn","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6993?s=80&v=4"},"commit":{"message":"Avoid eviction on entry replacement\n\n`*Result` objects are immutable, thus if a `PartialResult` gets filled\nfurther it has to be re-set into the cache.\n\nThis does not change the cache size, but because the current S3 and\nSIEVE implementations unconditionally check the cache size on\n`__setitem__` they may evict an entry unnecessarily.\n\nFix that: if there is already a valid cache entry for the key, just\nupdate it in place instead of trying to evict then creating a brand\nnew entry.\n\nAlso update the LRU to pre-check for size (and presence as well), this\nmay make setting a bit more expensive than post-check but it avoids\n\"wronging\" the user by bypassing the limit they set.\n\nFixes #201","shortMessageHtmlLink":"Avoid eviction on entry replacement"}},{"before":"63eda176915e85c73fb84fc16d41e5a78f0ff26d","after":"a270fe28805731a56d631ac7bbf38a52cc71ebe6","ref":"refs/heads/master","pushedAt":"2024-03-26T19:43:28.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"masklinn","name":null,"path":"/masklinn","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6993?s=80&v=4"},"commit":{"message":"run mypy even if ruff fails\n\nBecause all the checks are performed in the same job and github\nactions will stop at the first step failure within a job, if one of\nthe ruff checks (formatting or checking) fails then mypy does not run\nat all, which is undesirable.\n\nTurns out the steps have an implicit `if success()` if no [status\ncheck function][check] (`success`, `failure`, `always`, `cancelled`)\nis used to guard the step.\n\nThus by gating on `always` and possibly explicitly checking the\nconclusion of specific checks it becomes possible to run `mypy` even\nthough `ruff check` failed, but 
## run mypy even if ruff fails (2024-03-26)

Because all the checks are performed in the same job, and GitHub Actions stops at the first step failure within a job, if one of the ruff checks (formatting or linting) fails then mypy does not run at all, which is undesirable.

It turns out steps have an implicit `if: success()` when no [status check function][check] (`success`, `failure`, `always`, `cancelled`) is used to guard the step.

Thus by gating on `always` and explicitly checking the conclusion of specific steps, it becomes possible to run `mypy` even though `ruff check` failed, but not run it if *installing* mypy failed.

[check]: https://docs.github.com/en/actions/learn-github-actions/expressions#status-check-functions

## Add advanced cache documentation and belady approximator to hitrates (2024-03-26)

- belady is useful to get *some* sort of semi-realistic expectation for a cache, as the maximum hit rate is only somewhat realistic once cache sizes get close to the number of unique entries
- caches have been busting my balls and I'd assume the average user doesn't have the time or inclination to bother, so some guidance is useful
- as caching is generally a CPU/memory tradeoff, and while `hitrates` provides a cache overhead estimation, giving users a better grasp of the implementation details and where the overhead comes from is useful
- plus I regularly re-wonder, re-research, and re-discover the size complexity of various collections, so this gives me the opportunity to actually write it down for once

## Suppress "Compile called before Add" in re2.Filter (2024-03-16)

When compiling an empty set, `FilteredRE2::Compile` logs a warning to stderr which cannot be suppressed (google/re2#485).

Replace `re2.Filter` with a null object if the corresponding matchers list is empty: not only do we need to skip `Filter.Compile` to suppress the warning message, we need to skip `Filter.Match` or the program will segfault (google/re2#484). Using a null object seems safer and more reliable than adding conditionals, even if it requires more code and reindenting half the file.

Doing this also seems safer than my first instinct of using low-level fd redirection: fd redirection suffers from race conditions[^thread] and could suffer from other cross-platform compatibility issues (e.g. does every Python-supported OS have stderr on fd 2 and correctly support dup, dup2, and close?).

[^thread]: AFAIK CPython does not provide a Python-level GIL-pin feature (even less so with the GILectomy plans), so we have no way to prevent context switching, and any message sent to stderr by sibling threads would be lost.
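A minimal sketch of the null-object idea; the `EmptyFilter` and `build_filter` names are illustrative (not the package's actual code), and the exact `Match` signature of the google-re2 binding is assumed here.

```python
class EmptyFilter:
    """Null object standing in for re2.Filter when there are no patterns.

    Skipping Compile() avoids the unsuppressable warning on an empty set,
    and skipping the real Match() avoids the segfault (google/re2#484).
    """

    def Match(self, text, potential=False):
        return []  # no pattern can ever match an empty set


def build_filter(patterns):
    if not patterns:
        return EmptyFilter()
    import re2  # requires the google-re2 binding

    f = re2.Filter()
    for pattern in patterns:
        f.Add(pattern)
    f.Compile()
    return f
```

Callers can then invoke `Match` unconditionally, which is what makes the null object preferable to sprinkling `if` checks around every call site.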
## Add S3 and SIEVE, make S3 the default, remove clearing and locking (2024-03-12)

Closes #143

Memory Shavings
===============

It was the plan all along, but since I worried about the overhead of caches it made no sense to keep the result objects (which compose the cache entries) as dict-instances, so they've been converted to `__slots__` (manually, since `dataclasses` only supports slots from 3.10).

Sadly this requires adding an explicit `__init__` to every dataclass involved, as default values are not compatible with `__slots__`.

Cache Policies
==============

S3Fifo as default
-----------------

Testing on the sample file taught me what cache people have clearly known for a while: LRU is *awful*. You can do worse, but it takes surprisingly little to be competitive with it.

S3Fifo turns out to have pretty good performance while being relatively simple. S3 is not perfect; notably, like most CLOCK-type algorithms its eviction is O(n), which might be a bit of an issue in some cases. But until someone complains...

As a result, S3 is now the cache policy for the basic cache (if `re2` is not available), replacing LRU, and it's also exported as `Cache` from the package root.

From an implementation perspective, the original exploratory version (of this and most FIFOs tested) used an ordered dict as an indexed FIFO, but the memory consumption is not great. The final version uses a single index dict and separate deques for the FIFOs, an idea found in @cmcaine's s3fifo which significantly compacts memory requirements (though it's still a good 50% higher than a SIEVE or OD-based LRU of the same size).

LFU
---

Matani et al's O(1) LFU had a great showing on hit rates and perfs (though still slightly worse than S3); however the implementation still required the addition of some form of aging, which was not worth it. Theoretically a straight LFU could work for offline use, but that's a pretty pointless use, as in that case you can just parse each unique value once and splat by the entry count.

W-TinyLFU is the big modern cheese in the field, but I opted to avoid it for now: it's a lot more complicated than the existing caches (requiring a bloom filter, a frequency sketch or counting bloom filter, an SLRU, and an LRU), plus a good implementation clearly requires a lot of bit twiddling (for the bloom filters / frequency sketch), which Python is not great at from a performance point of view (I tried implementing CLOCK using a bytearray for the bitmap and it was crap).

SIEVE
-----

SIEVE is consistently a few percentage points below S3, and it's lacking a few properties (e.g. scan resistance); however it does have one interesting property which S3 lacks: at small cache sizes it has less memory overhead than LRU, despite its Python-level linked list and nodes, where LRU gets to use the native-coded OrderedDict, with a C-level linked list and a bespoke secondary hashmap. And it does that with the hit rates of an LRU double its size, until we get to caches a significant fraction of the number of uniques (5000). It also features a truly thread-safe unsynchronized cache hit.

Note: while the reference paper uses a doubly linked list, this implementation uses a singly linked list for the sieve hand. This means the hand is a pair of pointers, but it saves 11% memory on the nodes (72 -> 64 bytes), which gets significant as the size of the cache increases.
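A compact sketch of the SIEVE idea with the singly-linked-list variant described above (illustrative names, not the package's actual implementation): hits only flip a `visited` bit, and the eviction hand sweeps from the oldest entry toward the newest, clearing bits until it finds an unvisited victim.

```python
class _Node:
    __slots__ = ("key", "value", "visited", "next")

    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.visited = False
        self.next = None  # neighbour one step closer to the head (newer)


class SieveCache:
    def __init__(self, maxsize: int):
        self.maxsize = maxsize
        self.index: dict = {}
        self.head = None  # newest entry
        self.tail = None  # oldest entry
        # The hand is a pair of pointers (previous, current) so the
        # current node can be unlinked from a singly linked list.
        self.hand_prev = None
        self.hand = None

    def __getitem__(self, key):
        node = self.index[key]  # KeyError = cache miss
        node.visited = True  # the only mutation on a hit: no lock needed
        return node.value

    def __setitem__(self, key, value) -> None:
        node = self.index.get(key)
        if node is not None:
            node.value = value  # in-place update, never evicts
            return
        if len(self.index) >= self.maxsize:
            self._evict()
        node = _Node(key, value)
        if self.head is not None:
            self.head.next = node
        self.head = node
        if self.tail is None:
            self.tail = node
        self.index[key] = node

    def _evict(self) -> None:
        prev, cur = self.hand_prev, self.hand
        if cur is None:  # first eviction, or the hand fell off the head
            prev, cur = None, self.tail
        while cur.visited:
            cur.visited = False  # second chance
            prev, cur = cur, cur.next
            if cur is None:  # wrapped: restart the sweep at the tail
                prev, cur = None, self.tail
        # Unlink the victim and leave the hand just past it.
        if prev is None:
            self.tail = cur.next
        else:
            prev.next = cur.next
        if cur is self.head:
            self.head = prev
        del self.index[cur.key]
        self.hand_prev, self.hand = prev, cur.next
```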
Other Caches
------------

A number of simple cache implementations were temporarily ~~embarrassed~~ implemented for testing:

- random
- FIFO
- LP-FIFO / FIFO-reinsertion
- CLOCK (0 to 2), which is a different implementation of the same algorithm; tried a bitmap, it was horrible, while an array of counters was competitive with LP-FIFO using an OrderedDict (perf-wise; I had yet to start looking at memory use)
- QD-LP-FIFO, which is not *really* an algorithm but was an intermediate station to S3 (the addition of a fixed-size probationary FIFO and a ghost cache to an LP-FIFO; S3 is basically a more advanced and flexible version)

The trivial caches (RR, FIFO) were worse than LRU but very simple; the others were better than LRU but at the end of the day didn't really pull their weight compared to alternatives (even if they were easy to implement).

An interesting note here is that the quick-demotion scheme of S3 can be put in front of LRU to some success (it does improve hit rates significantly, as the sample trace has a large number of one-hit wonders), but without excellent reasons to use an LRU on the back end it doesn't seem super useful.

Thread Safety
=============

The `Locking` wrapper has been removed, probably forever: testing showed that the perf hit of a lock in GIL-Python was basically nil (at least for the amount of work ua-python has to do, on uncontended locks). Since none of the caches are intrinsically safe anymore (and the clearing cache's lack of performance was a lot worse than any synchronisation could be), it's better to just have synchronised caches.

Thread-local cache support has however been added, and will be documented, in case it turns out to be of use to the !gil mode (it basically trades memory and/or hit rate for lower contention).
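A minimal sketch of the thread-local idea (hypothetical wrapper, not the package's actual API): each thread lazily gets its own cache instance, so there is no cross-thread contention on the cache at the price of duplicated entries.

```python
import threading


class ThreadLocalCache:
    def __init__(self, factory):
        self._factory = factory  # e.g. lambda: SieveCache(2000)
        self._local = threading.local()

    def _cache(self):
        cache = getattr(self._local, "cache", None)
        if cache is None:
            # First access from this thread: build a private cache.
            cache = self._local.cache = self._factory()
        return cache

    def __getitem__(self, key):
        return self._cache()[key]

    def __setitem__(self, key, value) -> None:
        self._cache()[key] = value
```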
s3fifo implementation notes
===========================

The initial implementation of S3Fifo was done using ordered dicts as indexed FIFOs; this was easy, but after adding some memory tracking it turned out to have a lot of overhead, at around 250% the overhead of the LRU (which makes sense: it needs two ordered dicts of about the same size, plus a smaller ordered dict, plus entry objects to track frequency).

An implementation based on deques is a lot more reasonable: it only needs a single dict, and CPython's deques are implemented as unrolled linked lists of order 64 (so each link of the list stores 64 elements). It still needs about 150% of the LRU's space, but that's a lot more reasonable. At n=5000, after a full run on the sample file, tracemalloc reports 785576 bytes, with `sys.getsizeof` measurements of the individual elements indicating:

- 415152 bytes for the index dict
- 4984 bytes for the small cache deque
- 37720 bytes for the main cache deque
- 38248 bytes for the ghost cache deque
- 280000 bytes for the CacheEntry objects

For LRU this is 500488 bytes, of which 498752 are attributed to the `OrderedDict`.

It seems difficult to go below that: while in theory the ~9500 entries should fit in a dict of size class 14, as the dicts have a lot of traffic (keys being added and removed) — and possibly because they're never iterated so this is not a concern (have not checked if this is a consideration) — cpython uses a dict one size larger to compact less often[^dict]. However the issue also occurs in the LRU, so it's "fair" (while the OrderedDict has a Python implementation which uses two maps, it also has a native implementation which uses an internal ad-hoc hashmap rather than a full-blown dict, so it doesn't quite have double-hashmap overhead).

Note that this only measures *cache overhead*, so the cache keys are not counted, and all parses result in a global singleton:

- user agent strings are around 195 bytes on average
- parse results, user agent, and os objects are 72 bytes
- device objects are 56 bytes
- the extracted strings total about 200 bytes on average[^interning]

That's some 600 bytes per cache entry, or 3000000 bytes for a 5000-entry cache. In view of that, the cache overhead hardly seems consequential, but still.

[^dict]: Roughly, Python's dicts have power-of-two size classes; a size class `n` gives a total capacity of `1<<n` […]
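A rough sketch of the deque-based S3-FIFO layout the notes above describe: one index dict, a small probationary FIFO, a main FIFO, and a ghost FIFO of evicted keys. Names, the saturating-counter bound, and the simplified ghost bookkeeping are assumptions for illustration, not the package's actual code.

```python
from collections import deque


class S3Fifo:
    def __init__(self, maxsize: int):
        self.maxsize = maxsize
        self.small_target = max(1, maxsize // 10)  # ~10% probationary FIFO
        self.index: dict = {}        # key -> [value, freq]
        self.small: deque = deque()  # probationary FIFO (keys)
        self.main: deque = deque()   # main FIFO (keys)
        self.ghost: deque = deque()  # keys of recently evicted entries
        self.ghost_set: set = set()  # membership test for the ghost FIFO

    def __getitem__(self, key):
        entry = self.index[key]  # KeyError = cache miss
        entry[1] = min(entry[1] + 1, 3)  # saturating frequency counter
        return entry[0]

    def __setitem__(self, key, value) -> None:
        entry = self.index.get(key)
        if entry is not None:
            entry[0] = value  # in-place update, never evicts
            return
        while len(self.index) >= self.maxsize:
            self._evict()
        self.index[key] = [value, 0]
        if key in self.ghost_set:
            self.main.append(key)  # seen recently: skip probation
        else:
            self.small.append(key)

    def _evict(self) -> None:
        # Evict from the small FIFO when it is over target (or main is empty).
        if self.small and (len(self.small) >= self.small_target or not self.main):
            key = self.small.popleft()
            entry = self.index[key]
            if entry[1] > 0:
                entry[1] = 0
                self.main.append(key)  # accessed during probation: promote
            else:
                del self.index[key]
                self._remember(key)  # one-hit wonder: remember as a ghost
            return
        # Otherwise sweep the main FIFO, CLOCK-style (hence O(n) eviction).
        while self.main:
            key = self.main.popleft()
            entry = self.index[key]
            if entry[1] > 0:
                entry[1] -= 1
                self.main.append(key)  # still warm: reinsert
            else:
                del self.index[key]
                return

    def _remember(self, key) -> None:
        # Simplified ghost bookkeeping, bounded by the cache size.
        self.ghost.append(key)
        self.ghost_set.add(key)
        while len(self.ghost) > self.maxsize:
            self.ghost_set.discard(self.ghost.popleft())
```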
## Split Parser and reorganise package

[…]

```python
def parse(ua, resolver=None) -> ParseResult:
    if resolver is None:
        from . import parser as resolver

    return resolver(ua, Domain.ALL).complete()
```

but that feels like it would be pretty error-prone, in the sense that it would be too easy to forget to pass in the resolver, compared to consistently resolving via a bespoke parser, or just installing a parser globally.

Also move things around a bit:

- move matcher utility functions out of the core, and un-prefix them since we're using `__all__` for visibility anyway
- move eager matchers out of the core, similar to the lazy matchers

Fixes #189

## Configure ruff beyond the basics (2024-02-20)

Especially isort; shame it's not part of format, but...
## Add support for lazy matchers (2024-02-18)

Add lazy builtin matchers (with a separately compiled file), as well as loading json or yaml files using lazy matchers.

Lazy matchers are very much a tradeoff: they improve import speed (and memory consumption until triggered), but slow down run speed, possibly dramatically:

- importing the package itself takes ~36ms
- importing the lazy matchers takes ~36ms (including the package, so ~0) and ~70kB RSS
- importing the eager matchers takes ~97ms and ~780kB RSS
- triggering the instantiation of the lazy matchers adds ~800kB RSS
- running bench on the sample file using the lazy matchers has 700~800ms overhead compared to the eager matchers

While the lazy matchers are less costly across the board until they're used, benching the sample file causes the loading of *every* regex -- likely due to matching failures -- which adds a 700~800ms overhead over the eager matchers and increases the RSS by ~800kB (on top of the original 70).

Thus lazy matchers are not a great default for the basic parser, though they might be a good opt-in if the user only ever uses one of the domains (especially if it's not the devices one, as that's by far the largest).

With the re2 parser however, only 156 of the 1162 regexes get evaluated, leading to a minor CPU overhead of 20~30ms (1% of bench time) and a more reasonable memory overhead. Thus use the lazy matchers for the re2 parser.

On the more net-negative but relatively minor side of things, the pregenerated lazy matchers file adds 120k to the on-disk requirements of the library, and ~25k to the wheel archive. This is also what the _regexes and _matchers precompiled files do; pyc files seem to be even bigger (~130k), so the tradeoff is dubious even if they are slightly faster.

Fixes #171, fixes #173
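A minimal sketch of the lazy-matcher tradeoff (illustrative class, not the package's actual matchers): the pattern is stored as a plain string at import time, and compilation is paid once on first use instead of at import.

```python
import re
from functools import cached_property


class LazyMatcher:
    def __init__(self, pattern: str):
        self.pattern = pattern  # cheap to import: just a string

    @cached_property
    def regex(self) -> "re.Pattern[str]":
        # Compiled on first access, then cached on the instance.
        return re.compile(self.pattern)

    def match(self, ua: str):
        return self.regex.search(ua)
```

This is why a full bench run erases the benefit for the basic parser (every regex ends up compiled anyway) while the re2 prefilter, which only evaluates a small fraction of the regexes, keeps most of the savings.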
## Add README note that this is unreleased (2024-02-14)

While switching the default branch back to master is convenient for development, it's already confusing visitors (cf. #186).

At least add a note and a link to the proper page.

## Default to re2 parser if available (2024-02-11)

After benchmarking, the results are out, at least on the current sample file:

First, re2 is ridiculously faster than the basic parser, even with tons of caching. re2 does benefit from caching, but it's so fast that it needs very high hit rates (so a very large cache) for the caching to have a real impact; it's fast enough that at low hit rates (small sizes) the cache visibly slows down parsing, which is not the case for the basic parser.

Second, LRU is confirmed to be a better cache replacement policy than clearing (which... duh). It's not super sensible at very low sizes, but at 100 entries it starts really pulling ahead, so it's definitely the better default at 200 (where even with the overhead of the more layered approach it's ahead of the legacy parser and its immutable 20-entry clearing cache).

The locking doesn't seem to have much impact without contention, and even contended the LRU seems to behave way better than the clearing cache still. So fall back onto a locked LRU if re2 is not available.

## Add benchmarking scripts (2024-02-11)

useragents.txt sample file kindly provided by @DailyMats out of DailyMotion's data (2023-04-26).

The provided scripts allow:

- Testing the cache hit rate of various cache configurations (algorithm and size) on sample files; this script uses a dummy parser and is thus extremely fast (see the sketch after this entry).
- Benchmarking the average entry processing time of various parser configurations (base parser + cache algorithm + cache size) on sample files; this is a much slower script but provides a realistic evaluation, and allows using custom rules (`regexes.yaml` files) to check their impact on the performance of a given base parser.

Also added a script for testing threaded parsing; as expected this gets zero gain over the normal stuff because of the GIL (and re2 seemingly doesn't release the GIL either, though I don't know how beneficial it would be at ~30µs per call).

May be more useful with 3.13, or possibly with a regex-based extension releasing the GIL; at least the basis for testing things out will be here.
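A sketch of how such a hit-rate measurement can work (hypothetical harness, not the repository's actual script): a dummy parse result stands in for real parsing, so only the cache behaviour costs anything.

```python
def hitrate(cache_factory, lines) -> float:
    cache = cache_factory()
    hits = misses = 0
    for ua in lines:
        try:
            cache[ua]
            hits += 1
        except KeyError:
            misses += 1
            cache[ua] = object()  # dummy parse result
    return hits / (hits + misses)


# e.g. hitrate(lambda: SieveCache(2000), open("useragents.txt"))
# (SieveCache here refers to the sketch earlier in this log)
```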
## Add an re2-based parser (2024-02-06)

Requires splitting out some of the testenvs, as re2 is not available for pypy at all, and not yet for 3.12.

Uses `re2.Filter`, which unlike the C++ `FilteredRE2` bundles prefiltering, using an `re2.Set`, so likely less efficient than providing one's own (e.g. aho-corasick), but avoids having to do that.

At first glance, according to pytest's `--durations 0`, this is quite successful (unlike using `re2.Set`, which was more of a mixed bag):

```
2.54s call tests/test_core.py::test_devices[test_device.yaml-basic]
2.51s call tests/test_core.py::test_ua[pgts_browser_list.yaml-basic]
2.48s call tests/test_legacy.py::TestParse::testPGTSStrings
2.43s call tests/test_legacy.py::TestParse::testStringsDevice
0.95s call tests/test_core.py::test_devices[test_device.yaml-re2]
0.55s call tests/test_core.py::test_ua[pgts_browser_list.yaml-re2]
0.18s call tests/test_core.py::test_ua[test_ua.yaml-basic]
0.16s call tests/test_legacy.py::TestParse::testBrowserscopeStrings
0.10s call tests/test_core.py::test_ua[test_ua.yaml-re2]
```

While the "basic" parser for the new API is slightly slower than the legacy API (browserscope does use test_ua.yaml so that matches), the re2 parser is significantly faster than both:

- 60% faster on test_device.yaml (~2.5s -> 1s)
- 80% faster on pgts (2.5s -> 0.5s)
- 40% faster on browserscope (0.16 -> 0.1)

This is very encouraging, although the memory consumption has not been checked (yet).

Fixes #149, kind-of

## Disable pypy311 (2024-02-03)

Despite the suite succeeding as noted in e9483d8fbb1e288c8842bff2766ec0a08e1a73eb, github sucks: it still marks a PR / commit as failing if non-required tests fail (red cross on the commit and "Some checks were not successful" on the PR).

Not to mention pypy311 does not exist yet, let alone being provided by setup-python, so it can never succeed.

Therefore remove it.

## fix black formatting changes now failing CI (2024-02-03)
## Use continue on error for -next jobs (2023-10-23)

~~Apparently github's automerge is based on the job's properties, not the branch protection rules.~~ From what I gather, this seems to make the overall job succeed even if a specific run fails, which is exactly what we want.

After checking again: the selection was such a pain in the ass that I got one of the status checks wrong in the ruleset, and selected one of the pypy-3.11 jobs as required. Even with `continue-on-error` and the overall check now passing, the branch still doesn't merge. `continue-on-error` seems useful regardless, for better reporting, so leaving it.

## Slightly late update of uap-core to 0.18.0 (2023-07-08, branch 0.x)

See https://github.com/ua-parser/uap-core/compare/v0.16.0...v0.18.0 for the upstream changelog.

Also remove 2.7 and 3.6 from CI as they're not accessible from setup-python anymore; we'd need to e.g. install them by hand using pyenv if we want to keep them (which might be a good idea, but it's not like anyone is touching the 0.x code, so chances of breakage are low anyway).

- actions/setup-python#672
- actions/setup-python#544
## Add tox labels to more easily run just a subset of the thing (2023-05-04)

The label can be selected with `-m` on most tox commands; this is equivalent to selecting the corresponding envs using `-e`.

- `test` runs all the tests, in all python versions
- `check` runs the non-test checks
- `pypy` and `cpy` run the tests for their respective Python implementation

This is way more convenient when leveraging `posargs`, as most of the tools are not posargs-compatible. Also easier than typing the envs in full. The only drawback is that `tox list` does not display the labels.

Also use brace expansions for cleaner definitions (and easier updates), in both envlist and labels.

## forgot to remove format in a24779b (2023-05-03)

This is broken, but obviously not tested.

## FIX: tests not working (2023-05-02)

Since the switch to src layout and pytest in 827347722bfb2fd8088783fd9705308dd8b0d4b6, the main tests had not been running at all, as they don't match the pytest naming conventions (thankfully I'd not broken anything).

- rename the two problematic test classes to be picked up by pytest
- also handle the warning generated by `GetFilters` since a24779bea3c3f0fe477530f694ead307d8edcee5
- and remove the warnings configuration in `TestDeprecationWarnings`, as it has not been necessary since a24779bea3c3f0fe477530f694ead307d8edcee5 (P2 being dropped), and possibly even 827347722bfb2fd8088783fd9705308dd8b0d4b6 (as I wouldn't be surprised if pytest did the right thing on P2 either)

## Switch to tox 4 (2023-05-02)

Tox 4 is (finally) compatible with wheel installs & stuff, which is especially nice as the manual wheel installation was not really compatible with `tox -p` (jobs could conflict with one another and corrupt another job's wheel).

The issue doesn't seem to happen with `tox p`, which is nice; and while it's technically not *shorter* than the old tox conf, it's definitely clearer, and feels more resilient (we'll see if it is in the long run).

Also remove `requirements_dev`:

- it contained tox, which is kind-of a dev requirement but kind-of not
- as a result it caused the installation of tox in the tox envs, and in the github test images, both completely unnecessary
- the dev dependencies are pytest and pyyaml, which is shorter than spelling out `-rrequirements.dev` in full
## Modernize user_agent_parser (2023-05-02)

- remove P2 compatibility (e.g. str/bytes)
- remove the never-used `MatchSpans` methods (not sure what they were intended for)
- remove usage of `jsParseBits`: remove it from caches and inner parsers, deprecate it at the outer parser directly
- add a few types
- modernise code a bit (e.g. match group indexing, f-strings)

## Update CI actions for new packaging (2023-04-30)

- remove old compile step
- update python-action to v4 and "primary" python to 3.11
- produce sdist and wheel just once
- ensure we're testing after installing from wheel, sdist, and source

## Switch to src layout and move tests to their own directory (2023-04-06)

- src/ layout seems to be the modern standard, and avoids e.g. false-positive success issues (where tests get run against the source instead of the packaged library, and thus hide packaging issues): https://hynek.me/articles/testing-packaging/
- remove use of unittest entirely (switch everything to pytest, not just the runner); also move doctesting to pytest
- moving tests out avoids packaging them, and mucking up the source dir with more test files in the future
- replace the old test command by an invocation of tox

The only thing that's lost is `setup.py check`, but it turns out:

- it only checks that `long_description` is valid, and we don't use that
- it wasn't being run on CI
- invoking `setup.py` directly is getting deprecated