- Entirely different search model: there are pages of data, and the search shows all pages that contain matching search results, scrolling to the first matched result.
- CSS is not ready for XSLT
Seems to work when viewing locally, i.e. on closing and re-opening the window the only request that hits the server is for the cache manifest file. TODO: generate the cache manifest file programmatically; right now every time we add a file we have to remember to add it to the manifest, which kinda sucks.
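The manifest TODO could be a small build step. A sketch (`buildManifest` and the version stamp are assumptions, not existing code; the file list would come from walking `static/` with `fs.readdirSync` at build time, but it's passed in here so the function stays testable):

```javascript
// Sketch: build an HTML5 AppCache manifest from a list of file paths,
// so newly added files can never be forgotten.
function buildManifest(files, version) {
  const lines = ['CACHE MANIFEST'];
  lines.push('# version ' + version); // bump this to force clients to refetch
  for (const f of files) lines.push(f);
  lines.push('NETWORK:', '*'); // everything not listed goes to the network
  return lines.join('\n') + '\n';
}
```

Regenerating this on every build sidesteps the "be careful to add it" problem entirely.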
Included some rendering optimizations. Each language now has its own debounce time, slightly higher for PHP (feels just about right).
- collections now use static .json files in static/data
  TODO: cache these in local storage via a collection that somehow intercepts the call to fetch()
  TODO: once the cache is set up, label it with some sort of version, or just expect people to clear their cache every once in a while...?
- all collections use the same model (there is only one model now)
- all collections are very similar now...
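The localStorage TODO above could be sketched like this (`cachedFetch`, `DATA_VERSION`, and the injected `storage`/`fetchImpl` parameters are all assumptions; injecting them keeps the sketch runnable outside a browser, where `window.localStorage` and `window.fetch` would be passed in):

```javascript
// Sketch: check a versioned local-storage cache before hitting the network.
const DATA_VERSION = 'v1'; // bump when the static .json files change

function cachedFetch(url, storage, fetchImpl) {
  const key = DATA_VERSION + ':' + url;
  const hit = storage.getItem(key);
  if (hit !== null) return Promise.resolve(JSON.parse(hit)); // cache hit
  return fetchImpl(url).then((data) => {
    storage.setItem(key, JSON.stringify(data)); // populate cache on miss
    return data;
  });
}
```

Prefixing keys with the version means a version bump simply strands old entries rather than requiring users to clear their cache by hand.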
Route strings weren't getting converted to regexes, so the ORs in the js route string were never picked up.
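A minimal sketch of the conversion the fix restores (`routeToRegExp` is a hypothetical name; a real router handles more cases, like optional segments and splats):

```javascript
// Sketch: convert a route string like 'docs/(css|js)/:name' into a RegExp.
// Without this step, '(css|js)' is matched literally and the OR never fires,
// which is the bug described above.
function routeToRegExp(route) {
  const pattern = route
    .replace(/[.*+?^${}\\]/g, '\\$&') // escape metacharacters, keep () and |
    .replace(/:(\w+)/g, '([^/]+)');   // named params match one path segment
  return new RegExp('^' + pattern + '$');
}
```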
Forgot to push onto the titles list, so duplicate-title detection was failing. In other news, all of these scrapers are really similar, and propagating bugfixes like this across them is a PITA.
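The fixed dedupe loop shared by the scrapers might look like this sketch (`dedupeTitles` is a hypothetical name; it also folds in the `trim()` change on titles):

```javascript
// Sketch: trim each title and skip duplicates. The forgotten seen.push()
// was the bug: without it, every title looked new.
function dedupeTitles(rawTitles) {
  const seen = [];
  const out = [];
  for (const raw of rawTitles) {
    const title = raw.trim();
    if (seen.indexOf(title) !== -1) continue; // duplicate, skip
    seen.push(title); // the line that was missing
    out.push(title);
  }
  return out;
}
```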
- had to make the .json files a list of JSON objects
- added trim() to titles
Almost identical code to the css scraper...
- writes to data/css-mdn.json in the static directory instead of the db
- fills in a ton of gaps in the current scrape (509 items vs ~300)