Create a curation tool #12
A tool is required to extract links from a specific tracking issue for a campaign.
An example of such a tracking issue can be seen in #6.
The tool should be able to extract the links, de-dupe them, and save the result as an RSS (XML) file. At a minimum it should extract each blog post's page title, link, and publication date.
We expect the tool to be run repeatedly; if the RSS file already exists, it should append new items to the end of the file.
Bonus if written in Rust, but any language that can be run on a continuous integration service like Travis CI would be considered.
Mentoring can be provided, just ask.
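A minimal sketch of the extraction output described above, using only the standard library. The `BlogPost` struct name, the choice to de-dupe by link, and the hand-rolled XML rendering are my assumptions; the issue does not prescribe an implementation.

```rust
use std::collections::BTreeSet;

// One entry per the issue's minimum fields: post title, post link,
// publication date. (Struct name is hypothetical.)
#[derive(Debug, Clone)]
struct BlogPost {
    title: String,
    link: String,
    pub_date: String, // RFC 822 date string, as RSS 2.0 expects
}

// De-dupe the extracted posts, keeping first-seen order.
// Keying on the link is an assumption; the issue only says "de-dupe".
fn dedupe(posts: Vec<BlogPost>) -> Vec<BlogPost> {
    let mut seen = BTreeSet::new();
    posts
        .into_iter()
        .filter(|p| seen.insert(p.link.clone()))
        .collect()
}

// Render the items as a minimal RSS 2.0 document.
fn to_rss(posts: &[BlogPost]) -> String {
    let items: String = posts
        .iter()
        .map(|p| {
            format!(
                "  <item><title>{}</title><link>{}</link><pubDate>{}</pubDate></item>\n",
                p.title, p.link, p.pub_date
            )
        })
        .collect();
    format!(
        "<?xml version=\"1.0\"?>\n<rss version=\"2.0\"><channel>\n{}</channel></rss>\n",
        items
    )
}
```

A real tool would also escape XML entities in titles and links before writing.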
This sounds like a lot of fun; could I give it a try?
I'd probably pick an HTTP client such as Actix's or Hyper's to (asynchronously) crawl the page and extract links into a `BlogPost` struct, then use serde to write/append the XML.
I'll have a look at the RSS spec. Could I try to work on this? :)
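For the append-if-exists requirement from the issue, one standard-library-only sketch is to splice new `<item>` elements in just before the closing `</channel>` tag. The splice approach and the `append_items` helper are my assumptions, not a prescribed design.

```rust
use std::fs;
use std::io;
use std::path::Path;

// Append pre-rendered <item> entries to an existing RSS file, or create a
// fresh minimal document if the file does not exist yet.
fn append_items(path: &Path, new_items: &str) -> io::Result<()> {
    let doc = if path.exists() {
        let existing = fs::read_to_string(path)?;
        match existing.rfind("</channel>") {
            // Splice the new items in just before the closing channel tag.
            Some(pos) => format!("{}{}{}", &existing[..pos], new_items, &existing[pos..]),
            // Not a recognizable feed; leave the file content untouched.
            None => existing,
        }
    } else {
        format!(
            "<?xml version=\"1.0\"?>\n<rss version=\"2.0\"><channel>\n{}</channel></rss>\n",
            new_items
        )
    };
    fs::write(path, doc)
}
```

String splicing keeps the sketch dependency-free; a serde-based or dedicated RSS crate approach, as proposed above, would be more robust against real-world feed files.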
Here is a status update of the
If any point is missing, please let me know.