Passing site=whatever does not say which page a mention was for #12
Comments
I assume you are using this in your code now? I'm thinking about how to introduce updates and changes like this to the API – non-breaking changes could be introduced right away, but breaking ones should be opt-in for clients. I should probably start using some kind of version parameter that's incremented for every breaking change. This issue could of course be solved without breaking anything, so such a decision doesn't block it – I just want to take care that I don't break anything in the future.
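The opt-in versioning idea above could look something like this on the client side – a minimal sketch, assuming a hypothetical `version` query parameter that the server would default away from when absent (the parameter name and default behaviour are illustrative, not part of the current API):

```python
from urllib.parse import urlencode

API_BASE = "https://webmention.herokuapp.com/api/mentions"

def mentions_url(site, version=None):
    """Build a mentions query URL.

    `version` is a hypothetical opt-in parameter: omitting it would keep
    the server's original (pre-breaking-change) response format, while
    passing a higher number would opt the client into newer behaviour.
    """
    params = {"site": site}
    if version is not None:
        params["version"] = version  # explicit opt-in to breaking changes
    return API_BASE + "?" + urlencode(params)

# Old clients keep working unchanged:
legacy_url = mentions_url("voxpelli.com")
# New clients opt in explicitly:
opted_in_url = mentions_url("voxpelli.com", version=2)
```

This keeps backwards compatibility by default: existing consumers that never send the parameter are unaffected by later breaking changes.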
Also – I can't say when I'll be able to get around to doing this, even if it's just a simple fix. I have so many simple fixes queued up for my projects and haven't had time for many of them these last couple of weeks. Please ping me again if I don't get to it in a while – or make a pull request if you like and I can release that :)
I'll certainly try to do a pull request, but it'll have to wait until I'm back at a computer. Agreed that some sort of versioning is a good idea, but I personally wouldn't consider adding a new member to a returned dict to be a breaking change – which I think is how you view this one too :)
Proposed to fix voxpelli#12.
Pull request sent. :)
stuartlangridge commented Nov 22, 2014
Examine the output of https://webmention.herokuapp.com/api/mentions?site=voxpelli.com :
Note that this describes a mention and where the mention came from, but not the page that the mention targeted! This means it's not possible to fetch all the mentions for one's static site in one go; instead, one has to pass url=fullurl for each individual post to get the details, which turns one site=whatever request into (for my site) 1,700 url=whatever/full/url requests. Which is not good for anyone's bandwidth. :) Adding a target_url member to the returned data would solve this problem.
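To illustrate the request for a target_url member: with that field present, a single site=whatever response could be grouped by page client-side, instead of issuing one url=fullurl request per post. A minimal sketch, assuming the proposed (not yet existing) target_url key and an illustrative response shape:

```python
from collections import defaultdict

def group_mentions_by_target(mentions):
    """Group mention dicts by the page they targeted.

    Assumes each mention carries the proposed `target_url` member;
    this field is the issue's suggestion, not the current API output.
    """
    by_page = defaultdict(list)
    for mention in mentions:
        by_page[mention["target_url"]].append(mention)
    return dict(by_page)

# Illustrative mention data with the proposed field added:
mentions = [
    {"url": "http://example.com/reply-a",
     "target_url": "http://voxpelli.com/post-1"},
    {"url": "http://example.com/reply-b",
     "target_url": "http://voxpelli.com/post-1"},
    {"url": "http://example.com/reply-c",
     "target_url": "http://voxpelli.com/post-2"},
]

grouped = group_mentions_by_target(mentions)
# One site-wide request now yields per-page mention lists,
# instead of one url=... request per post.
```

The design point is that the grouping work moves to the client, so the server still answers a single query per site.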