ToDo list #78
Comments
How about a "backup" mode? Users could provide backends, such as git or S3, where their bookmark database gets pushed.
For personal usage, one could, however, store the database file on a mapped drive which is synced, and create a softlink to it in the regular location. I think I should document this, though. Thanks for reminding!
Hi,
@polo2ro
Yes, I think that would do the trick. When using -u on the whole database I run into other problems: many of my bookmarks were on the same site, and they probably have some policy to limit a high number of pages per second; that is why I used the sleep command. On my first try I almost deleted all the links from that site because of this.
I get it now. I noticed that you check only for status 200. Does curl handle HTTP redirections transparently? There are also some status codes which indicate temporary failures. I think it would be great to make the script more verbose: if the status is not 200, print the code along with a description. Instead of an additional shell script, I think it would be great if you could add an API in buku itself. Of course it will need more checks. For example, it would need a better check for malformed URLs (refer to jarun/googler@b53b638 for a hint). Then the additional delay, verbose status codes and so on... I can add it as a task item if you want to pick it up.
@polo2ro We'll soon move over to urllib3, which has retry built in. I believe the above issue will be resolved with it.
Here are some example URLs from my use case: http://www.seloger.com/annonces/achat/maison/estang-32/107048765.htm

The first URL gives me a 200 OK, all good. The problem is that the site gives me a 301 instead of a 404 for a page that does not exist anymore; that is why I check for 200 OK only. I am all for a pull request for that, but for now I am not sure what to do. Maybe a simple output of the HTTP code would be sufficient to detect 404s or other unwanted codes. For example, the capability to filter by HTTP code would make it possible to remove 404 links, or maybe -u could be used to set a tag.
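The "more verbose status codes" idea from the discussion could be sketched like this. This is a hypothetical helper (`classify_status` is not part of buku); it only illustrates treating non-200 codes with more nuance than a bare pass/fail, so that a 301 or a temporary 503 is not handled the same way as a genuine 404:

```python
def classify_status(code):
    """Map an HTTP status code to a rough verdict for a link checker.

    Hypothetical helper for illustration: instead of treating anything
    other than 200 as dead, distinguish redirects and temporary failures
    from codes that really suggest a dead link.
    """
    if code == 200:
        return "ok"
    if code in (301, 302, 307, 308):
        return "redirect"   # follow it, or flag for manual review
    if code in (408, 429, 500, 502, 503, 504):
        return "temporary"  # retry later instead of deleting
    if code in (404, 410):
        return "dead"       # safe candidate for removal or tagging
    return "unknown"
```

A checker built on this could, for example, only tag or delete bookmarks whose verdict is "dead", and re-queue the "temporary" ones.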
I'll test the scenario with urllib3 and update. |
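For context, urllib3's built-in retry roughly amounts to retrying on temporary status codes with exponential backoff. A stdlib-only sketch of that idea, assuming a hypothetical `fetch` callable that returns a status code (this is an illustration of the mechanism, not buku's actual code):

```python
import time

def fetch_with_retry(fetch, retries=3, backoff_factor=0.1):
    """Retry a fetch on temporary failures, with exponential backoff.

    Sketch of what urllib3's Retry does internally; `fetch` is a
    hypothetical callable returning an HTTP status code.
    """
    for attempt in range(retries + 1):
        status = fetch()
        if status not in (429, 500, 502, 503, 504):
            return status  # success or a permanent failure: stop retrying
        if attempt < retries:
            # back off 0.1s, 0.2s, 0.4s, ... before the next attempt
            time.sleep(backoff_factor * (2 ** attempt))
    return status
```

The backoff also helps with the rate-limiting problem mentioned earlier: spacing out retries keeps a site from seeing a burst of requests for its pages.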
I think this works fine with urllib3. Results with latest master (at the time of writing):
In addition, now you can view only failed and skipped (due to mime) using:
This is great! I will check this out with an update on my database and get back to you. Thank you!
The retry functionality is not working in my case because seloger.com gives me a 200 OK with an error page; I don't know if this is common behavior. I am lucky that the error page does not have a title. I uploaded my database: https://1fichier.com/?9urj0c0p14

I get the error message after 45 updates. I think nothing can be done about it, because HTTP codes are not used properly here. I am pretty sure this will work with Apache mod_ratelimit.
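The observation that the error page has no title suggests a heuristic for catching this kind of "soft 404" (a 200 response that is really an error page). A stdlib sketch, purely illustrative and not something buku implements:

```python
from html.parser import HTMLParser

class TitleGrabber(HTMLParser):
    """Collect the text inside the first <title> element, if any."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

def looks_like_soft_404(html_text):
    """Heuristic: a 200 response whose page lacks a <title> is suspect.

    This matches the seloger.com case above, but it is site-specific;
    other sites may serve titled error pages, which this would miss.
    """
    parser = TitleGrabber()
    parser.feed(html_text)
    return not parser.title.strip()
```

As noted, this only works because that particular site's error page happens to be title-less; it is a workaround, not a general fix for servers that misuse status codes.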
This doesn't seem standard behavior. Here's an example of correct behaviour:
Don't think we can do much about it.
Suggestion:
Thank you for the suggestion. Yes,
Hi Jarun
Hi @DamianSiniakowicz, thank you for your interest in the project. Please let me know any questions you have. I am available on Gitter.
Hi! Are there any tests in particular you would like to see? I am new to open source and think that would be a good way to get started. If you don't have anything specific in mind, I can jump in and try to find something not yet covered. Thanks!
Herroo!
and maybe, since peco supports sticky selections,
I am thinking pucu? beco? I dunno! I bet those are naughty words in one language or another. It would take the form of a little script mashing buku output into a line that looks acceptably pretty in peco and contains all or most of the searchable content (which might fail if preferred search targets contain more than one or two of the longer fields: URL/description/tags, but we'll see), and parsing those same lines afterwards for everything from opening the URL(s) to batch tagging/deleting (these might require the index). Cheerio!

UPDATE: try
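The round trip described above, flattening a bookmark record into one searchable line for peco and then recovering the index from the selected line, could look roughly like this. Field layout and function names are made up for illustration, not the actual script's format:

```python
def to_peco_line(index, url, title, tags, width=40):
    """Flatten one bookmark record into a single line for a peco filter.

    Hypothetical formatter: the index leads the line so it can be
    recovered after selection; the title is truncated to keep lines tidy.
    """
    if len(title) > width:
        title = title[: width - 1] + "…"
    return f"{index}\t{title}\t{url}\t{','.join(tags)}"

def parse_peco_line(line):
    """Recover the bookmark index from a selected line (for open/tag/delete)."""
    return int(line.split("\t", 1)[0])
```

Keeping the index as the first field is what makes the later batch operations (opening, tagging, deleting) possible, since peco returns the selected lines verbatim.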
@naaaargle thanks for your offer! There is lots of stuff not covered in tests yet. I can come up with a list for you.
@AndreiUlmeyda, I am open to the options as long as they remain simple. While the terminal is awesome, we'll have to keep the marriage less complicated for the benefit of healthy mortal children.
@naaaargle here are some test cases which popped up:
Also, please let me know if you'd like to maintain the PyPI branch. You may need to spend some time on PyPI; the project structure is ready for it, but I'm quite blank about PyPI.
Alrightythen, I've thrown the <buku|peco> idea into a little script, and I am pretty sure it classifies as simple, thanks to the inherent neatness of buku. I will try to get the suboptimal line structuring sorted out over the next few days and would greatly appreciate a bit of input from people who can imagine themselves using it. I will open an issue there later laying out where the problems lie and what information from users is needed to get it straight. If that isn't there yet, just throw whatever thoughts you have at it in as many issues as you like. Cheers
@AndreiUlmeyda 👍 Thank you! I'll check it out!
Any thoughts on a REST API, so that webapps could have a ready bookmark backend?
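A minimal sketch of what such a backend could look like, using only the standard library. Everything here is hypothetical: the in-memory dict stands in for buku's SQLite database, and the single read-only endpoint is just to show the shape of the idea:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Hypothetical in-memory store standing in for buku's SQLite database.
BOOKMARKS = {
    1: {"url": "https://example.com", "title": "Example", "tags": ["demo"]},
}

class BookmarkAPI(BaseHTTPRequestHandler):
    """Read-only sketch: GET /bookmarks returns all records as JSON."""

    def do_GET(self):
        if self.path == "/bookmarks":
            body = json.dumps(list(BOOKMARKS.values())).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # silence per-request logging for the sketch

def serve(port=0):
    """Start the API on a background thread; return the bound port."""
    server = ThreadingHTTPServer(("127.0.0.1", port), BookmarkAPI)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server.server_address[1]
```

A real implementation would add POST/PUT/DELETE routes mapping onto buku's add/update/delete operations, plus authentication, but the point is that the database layer already exists, so the API is mostly routing.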
👍 I am open to PRs. :) Added it to the task list.
@AndreiUlmeyda I was trying it. Works well. I have one request though... can you please change the name to something else?
Aw man, that was the best part. But OK, I will do that. Thanks for checking it out.
Thanks!
@jarun Okay, it is done. And I know climate change is an issue, but please don't make me change it again. A little question, now that the thing is sufficient for my own needs: would you rather have it exist as a separate project, have it incorporated into buku, or none of the above? Either is fine with me. I was happy to be able to improve my bash a bit, but if some people find use for it and request more than one or two additional features, I would just rewrite it in Python anyway and have it require half the amount of code (or a third, if incorporated into buku). Cheers
@AndreiUlmeyda I would like it to be a separate project that would be linked to from buku. What do you say?
@jarun Sparkles with me! Let's do that.
Here you go!
Neato. A pleasure doing business with you.
Continued from #39.
Notes
The list below is a growing one. While suggesting new features, please consider contributing to Buku. The code is intentionally kept simple and easy to understand, with comments. We'll be happy to help out any new contributor. Some of the just-completed features may not have been released yet; grab the master branch for those.
Identified tasks
- Option to add folder names as tags while importing HTML (see Import folders as tags while importing bookmarks HTML. #80)
- Implement self-upgrade (see Support self-upgrade #83)