Flag deprecated website pages in blog entries
shiruken committed Jul 3, 2022
1 parent 239e850 commit 20389f2
Showing 5 changed files with 11 additions and 12 deletions.
2 changes: 1 addition & 1 deletion content/posts/dogecoin-halving-countdown/index.md
@@ -5,7 +5,7 @@ draft: false
tags: ['Coding', 'Web', 'Cryptocurrency', 'Dogecoin']
---

The first block reward halving for [Dogecoin](http://dogecoin.com/) resulted in many [Reddit users](http://www.reddit.com/r/dogecoin/) asking when the event was occurring and how much the new block reward would be. I haven't been able to find a good countdown website so I decided to throw one together myself. It uses the [DogeChain](http://dogechain.info/chain/Dogecoin) API to grab the current block number and estimates the time until the next change in the block reward. Since the Dogecoin [protocol](https://bitcointalk.org/index.php?PHPSESSID=7fsbe1l362dulhpb5an0j4imq0&topic=361813.msg3872945#msg3872945) establishes the specific block rewards, it's relatively simple to calculate it with some accuracy. Check it out by [clicking here](/dogecoin/halving.php)!
The first block reward halving for [Dogecoin](http://dogecoin.com/) resulted in many [Reddit users](http://www.reddit.com/r/dogecoin/) asking when the event was occurring and how much the new block reward would be. I haven't been able to find a good countdown website so I decided to throw one together myself. It uses the [DogeChain](http://dogechain.info/chain/Dogecoin) API to grab the current block number and estimates the time until the next change in the block reward. Since the Dogecoin [protocol](https://bitcointalk.org/index.php?PHPSESSID=7fsbe1l362dulhpb5an0j4imq0&topic=361813.msg3872945#msg3872945) establishes the specific block rewards, it's relatively simple to calculate it with some accuracy. Check it out by ~~clicking here~~ _(Deprecated July 2022)_.

![Dogecoin Halving Countdown Screenshot](Screenshot.jpg)
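
The countdown math itself is only a few lines. Here's a rough Python sketch of the idea (the original page was PHP); the DogeChain plain-text endpoint below is an assumption, while the 100,000-block reward schedule and ~1 minute block target come from the Dogecoin protocol:

```python
# Rough sketch of the countdown math (not the original PHP page). The
# dogechain.info endpoint below is an assumption; the 100,000-block reward
# schedule and ~1 minute block target come from the Dogecoin protocol.
import urllib.request

BLOCKS_PER_ERA = 100_000   # block reward changes every 100,000 blocks
TARGET_BLOCK_TIME = 60     # seconds, Dogecoin's ~1 minute target

def seconds_until_next_reward_change(current_block: int) -> int:
    """Estimate the time remaining until the next reward-change block."""
    next_boundary = (current_block // BLOCKS_PER_ERA + 1) * BLOCKS_PER_ERA
    return (next_boundary - current_block) * TARGET_BLOCK_TIME

# Hypothetical plain-text endpoint returning the current block height
with urllib.request.urlopen("https://dogechain.info/chain/Dogecoin/q/getblockcount") as resp:
    current = int(resp.read().decode().strip())

hours = seconds_until_next_reward_change(current) / 3600
print(f"Roughly {hours:.1f} hours until the next block reward change")
```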

4 changes: 2 additions & 2 deletions content/posts/google-scholar-visualization/index.md
@@ -5,7 +5,7 @@ draft: false
tags: ['Coding', 'Web', 'Science', 'Data Visualization']
---

One of the most common images I see during science presentations is the frequency of publications within a particular field over time. It's a great way to show the growth of the field while attempting to validate the worthiness of the research that follows. As far as I can tell, most people manually assemble this data with sequential searches on [Google Scholar](https://scholar.google.com/) or [Web of Science](https://webofknowledge.com/). This seemed like a straightforward opportunity for automation, so [I made a little website](http://www.csullender.com/scholar/) that does just that. It takes a Google Scholar search query and a range of years and plots the number of results over time.
One of the most common images I see during science presentations is the frequency of publications within a particular field over time. It's a great way to show the growth of the field while attempting to validate the worthiness of the research that follows. As far as I can tell, most people manually assemble this data with sequential searches on [Google Scholar](https://scholar.google.com/) or [Web of Science](https://webofknowledge.com/). This seemed like a straightforward opportunity for automation, so ~~I made a little website~~ (_Deprecated July 2022_) that does just that. It takes a Google Scholar search query and a range of years and plots the number of results over time.

![Scholar Plotr Results for CRISPR](results.png)

@@ -15,4 +15,4 @@ Obviously since we're plotting the number of search results this doesn't necessa

For those who are interested, the entire page is powered by Javascript and utilizes a jQuery plugin called [Ajax Cross Origin](http://www.ajax-cross-origin.com) to overcome the [Same Origin Policy](http://en.wikipedia.org/wiki/Same-origin_policy). This is definitely a rough solution but until Google releases an API for Scholar there's not much of an alternative (please don't get mad at me). As usual the charts are generated using Google Charts and I opted to try out the [new Material Design versions](https://developers.google.com/chart/interactive/docs/gallery/barchart#Material) currently in testing. Because they're kinda sorta prettier.
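
For the curious, the year-by-year counting boils down to something like the following Python sketch. Scholar has no official API, so the `as_ylo`/`as_yhi` URL parameters and the "About N results" text are assumptions based on the public search page, and anything that hammers it will get blocked; the real page does this client-side with jQuery instead.

```python
# Illustrative only: Google Scholar has no official API, and automated
# requests are rate-limited or blocked. The as_ylo/as_yhi parameters and the
# "About N results" text are based on the public search page and may change.
import re
import time
import urllib.parse
import urllib.request

def results_for_year(query: str, year: int) -> int:
    params = urllib.parse.urlencode({"q": query, "as_ylo": year, "as_yhi": year})
    req = urllib.request.Request(
        f"https://scholar.google.com/scholar?{params}",
        headers={"User-Agent": "Mozilla/5.0"},  # Scholar rejects the default urllib agent
    )
    html = urllib.request.urlopen(req).read().decode("utf-8", errors="ignore")
    match = re.search(r"About ([\d,]+) results", html)
    return int(match.group(1).replace(",", "")) if match else 0

counts = {}
for year in range(2005, 2015):
    counts[year] = results_for_year("CRISPR", year)
    time.sleep(2)  # be gentle; rapid-fire requests get blocked
print(counts)
```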

**Check it out:** [**Scholar Plotr**](/scholar)
~~**Check it out:** **Scholar Plotr**~~ (_Deprecated July 2022_)
4 changes: 2 additions & 2 deletions content/posts/lastfm-first-listen/index.md
@@ -5,10 +5,10 @@ draft: false
tags: ['Coding', 'Web', 'Music', 'LastFM']
---

When was the first time you listened to M83? What about LIGHTS? If you're a [LastFM](http://www.last.fm/) user, then I've made a [little web-app](/first) you can use to look it up! All you need is an active LastFM account and to have been tracking your music listening with their scrobbling service.
When was the first time you listened to M83? What about LIGHTS? If you're a [LastFM](http://www.last.fm/) user, then I've made a ~~little web-app~~ (_Deprecated July 2022_) you can use to look it up! All you need is an active LastFM account and to have been tracking your music listening with their scrobbling service.

![First Time I Listened to M83](M83First.jpg)

All you need to do is enter any LastFM username and artist and then hit submit. The website uses the LastFM `user.getArtistTracks` API method to fetch every single track the user has scrobbled from that particular artist. The returned XML file is segmented into 50 plays per page but fortunately reports the total number of pages in the first child element. This number is used to jump to the last page of the data, which contains the very first scrobbled track by the user for the particular artist. From there it's just a matter of parsing the XML structure and grabbing the necessary information for display on the webpage with PHP.
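
A rough Python equivalent of that lookup (the original page is PHP); the exact Last.fm element and attribute names here are assumptions rather than gospel, so check the `user.getArtistTracks` documentation for the real schema:

```python
# Python sketch of the same approach (the original page is PHP). The exact
# Last.fm XML element and attribute names here are assumptions; check the
# user.getArtistTracks documentation for the real schema.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

API_KEY = "YOUR_LASTFM_API_KEY"  # placeholder

def fetch_page(user: str, artist: str, page: int) -> ET.Element:
    params = urllib.parse.urlencode({
        "method": "user.getArtistTracks", "user": user,
        "artist": artist, "api_key": API_KEY, "page": page,
    })
    with urllib.request.urlopen(f"https://ws.audioscrobbler.com/2.0/?{params}") as resp:
        return ET.fromstring(resp.read())

def first_listen(user: str, artist: str) -> str:
    first = fetch_page(user, artist, 1)
    total_pages = int(first.find("artisttracks").get("totalPages", "1"))
    last = fetch_page(user, artist, total_pages) if total_pages > 1 else first
    oldest = last.findall(".//track")[-1]  # last track on the last page = oldest scrobble
    return f'{oldest.findtext("name")} on {oldest.findtext("date")}'

print(first_listen("shiruken", "M83"))
```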

**Check It Out**: [**First LastFM Listen**](/first)
~~**Check It Out**: **First LastFM Listen**~~ (_Deprecated July 2022_)
5 changes: 2 additions & 3 deletions content/posts/taylor-swifting/index.md
@@ -5,7 +5,7 @@ draft: false
tags: ['Coding', 'Web', 'Music', 'LastFM']
---

I had a [really random idea](https://twitter.com/shiruken/status/536957959503241217) the other day for a simple coding project using the LastFM API: When was the last time you listened to Taylor Swift? This is obviously an extremely important statistic to know for the Taylor Swift obsessed. I already made a tool to [lookup the first time you listened to an artist](/first) using your LastFM profile, so this was a relatively straightforward adaptation. I also wanted to take this opportunity to leverage the power of [jQuery](http://jquery.com/) to asynchronously load the information rather than simply waiting for a static page. Check out [The Last Swifting](/swifting) page or continue reading for more information.
I had a [really random idea](https://twitter.com/shiruken/status/536957959503241217) the other day for a simple coding project using the LastFM API: When was the last time you listened to Taylor Swift? This is obviously an extremely important statistic to know for the Taylor Swift obsessed. I already made a tool to [~~lookup the first time you listened to an artist~~](/first) using your LastFM profile, so this was a relatively straightforward adaptation. I also wanted to take this opportunity to leverage the power of [jQuery](http://jquery.com/) to asynchronously load the information rather than simply waiting for a static page. Check out ~~The Last Swifting page~~ (_Deprecated July 2022_) or continue reading for more information.

![Screenshot of The Last Swifting result page](screenshot.jpg)

@@ -15,5 +15,4 @@ During development of the webpage, I wanted to make the username input more inte

![Text Entry Demonstration](text.gif)

**Check It Out:** [**The Last Swifting**](/swifting)

~~**Check It Out:** **The Last Swifting**~~ (_Deprecated July 2022_)
8 changes: 4 additions & 4 deletions content/posts/twitter-analytics-2-0/index.md
@@ -7,18 +7,18 @@ tags: ['Coding', 'Web', 'Design', 'Twitter']

A couple years ago I made a simple Twitter Stats page to depict my tweeting activity. It was originally powered by some datasets pulled from [TweetStats](http://www.tweetstats.com/ "Visit Tweetstats") but I eventually upgraded it to run entirely from my own server. It was extremely barebones and grabbed my [Twitter](https://twitter.com/shiruken "View My Twitter") feed every hour and downloaded all the tweets that had been added since the previous update. Unfortunately, because Twitter does not offer the entire tweeting history via the website or this XML feed, I was missing well over a year of data. Combined with problems accessing this feed, I would regularly lose my entire (local) cache of my Twitter feed and have to spend a lot of time fixing everything. I eventually just decided to kill off the page since I was losing more and more of the older tweets every time I had to fix the cache and Twitter was changing the way the feed was presented.

Fast forward to December 2012, [Twitter announces the archive](http://blog.twitter.com/2012/12/your-twitter-archive.html) feature on the website. This allows you to download your _entire_ tweeting history in a .ZIP file for display as a HTML website ([check out my archive](https://googledrive.com/host/0Bx3p6yyQUcUIbXlGa0VDaGd0WG8/ "View my Twitter Archive")). The actual tweet data is stored in easily readable JSON files contained within the archive. The files are separated by month and contain every single tweet and retweet since the creation of your account. This makes it extremely easy to parse through everything and generate whatever statistics you want. Upon hearing this announcement, I decided to revive my Twitter Stats page and use the Twitter archive to power it. Unfortunately, the Twitter API does not offer a means to regularly generate this archive. The user has to go to their account settings page and manually request the download URL, which makes an automated analytics page essentially impossible.
Fast forward to December 2012, [Twitter announces the archive](http://blog.twitter.com/2012/12/your-twitter-archive.html) feature on the website. This allows you to download your _entire_ tweeting history in a .ZIP file for display as a HTML website. The actual tweet data is stored in easily readable JSON files contained within the archive. The files are separated by month and contain every single tweet and retweet since the creation of your account. This makes it extremely easy to parse through everything and generate whatever statistics you want. Upon hearing this announcement, I decided to revive my Twitter Stats page and use the Twitter archive to power it. Unfortunately, the Twitter API does not offer a means to regularly generate this archive. The user has to go to their account settings page and manually request the download URL, which makes an automated analytics page essentially impossible.

![Twitter Archive JSON](TweetJSON.jpg)
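
For reference, parsing the 2013-era archive is about as simple as it sounds. A minimal Python sketch, assuming the `data/js/tweets/YYYY_MM.js` layout where each monthly file is a JSON array hiding behind a one-line JavaScript assignment (field names may differ in newer archives):

```python
# Minimal sketch of parsing the 2013-era Twitter archive, assuming the
# data/js/tweets/YYYY_MM.js layout where each file is a JSON array behind a
# one-line "Grailbird.data..." assignment. Field names may differ in newer archives.
import glob
import json
from collections import Counter

def load_month(path: str) -> list[dict]:
    with open(path, encoding="utf-8") as f:
        raw = f.read()
    return json.loads(raw[raw.index("["):])  # strip the leading JS assignment

tweets_per_month = Counter()
mentions = Counter()
for path in sorted(glob.glob("data/js/tweets/*.js")):
    tweets = load_month(path)
    month = path.split("/")[-1].removesuffix(".js")
    tweets_per_month[month] = len(tweets)
    for tweet in tweets:
        for user in tweet.get("entities", {}).get("user_mentions", []):
            mentions[user["screen_name"]] += 1

print(tweets_per_month)
print(mentions.most_common(10))
```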

A few days before I finally gained access to the Twitter Archive feature, [a Google Apps Script was released](http://mashe.hawksey.info/2013/01/sync-twitter-archive-with-google-drive/) to automatically update the Twitter archive with new tweets. The latest update to Google Drive actually allows for site publishing, which means that the entire archive can be hosted by Google. This eliminates the annoying initial hurdle of regularly updating the Twitter archive for generating the statistics page. Rather than having to do all of this on my own server, all I needed to do was pull the updated data files from my Google Drive. This meant I needed to figure out the [Google Drive SDK](https://developers.google.com/drive/) in order to access my Google hosted Twitter archive and grab the data.

{{< youtube ce8G3sEOjAY >}}

I had previously used Python for my backend collection of tweets, so I decided to re-use some of my existing code for the new website. After fighting with the Google authentication system, I finally managed to load the data files from Google Drive and cache them to my server. Each file is then parsed to extract the metrics I track: monthly tweeting, daily tweeting, hourly tweeting, recent tweeting, most mentioned users, most retweeted users, most commonly used words, and most commonly used hashtags. All of the relevant information is then cached in .XML files for easy access when generating the website with PHP. Unlike the previous iteration of the stats website, I opted to use [Google Charts Tools](https://developers.google.com/chart/) for generating all the graphs for display. The API has been massively upgraded and simplified since the last time I attempted to implement them, making it much more versatile than my old inefficient bar-plotting code. The front-end [Twitter Analytics webpage](http://csullender.com/tweets/) is generated with PHP and Javascript (for Google Charts) with several graphs and lists relaying the information pulled from the Twitter archive.
I had previously used Python for my backend collection of tweets, so I decided to re-use some of my existing code for the new website. After fighting with the Google authentication system, I finally managed to load the data files from Google Drive and cache them to my server. Each file is then parsed to extract the metrics I track: monthly tweeting, daily tweeting, hourly tweeting, recent tweeting, most mentioned users, most retweeted users, most commonly used words, and most commonly used hashtags. All of the relevant information is then cached in .XML files for easy access when generating the website with PHP. Unlike the previous iteration of the stats website, I opted to use [Google Charts Tools](https://developers.google.com/chart/) for generating all the graphs for display. The API has been massively upgraded and simplified since the last time I attempted to implement them, making it much more versatile than my old inefficient bar-plotting code. The front-end ~~Twitter Analytics webpage~~ (_Deprecated November 2015_) is generated with PHP and Javascript (for Google Charts) with several graphs and lists relaying the information pulled from the Twitter archive.
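
The XML caching step might look something like this in Python; the element and file names are illustrative, not the schema the site actually uses:

```python
# Illustrative only: write per-month tweet counts to an XML cache that a PHP
# front end could read. Element and file names are assumptions.
import xml.etree.ElementTree as ET

tweets_per_month = {"2013_01": 182, "2013_02": 240}  # e.g. output of the parsing step above

root = ET.Element("monthly_tweets")
for month, count in sorted(tweets_per_month.items()):
    ET.SubElement(root, "month", name=month).text = str(count)
ET.ElementTree(root).write("monthly.xml", encoding="utf-8", xml_declaration=True)
```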

![Twitter archiving flowchart](flowchart.jpg)

Unlike the original version, [Twitter Analytics](http://csullender.com/twitter/) represents the entirety of my tweeting activity since joining the social network back in May 2007 (I didn't really start using it until a year later). I now know every single word I've said in my 31k tweets and when I said them. I can see distributions of character counts and when I'm most active throughout the day. Even with all of the stuff I've already done, I'm sure there's tons of other cool metrics that (could|need to) be made. When have I said certain words over time? What words are commonly used in the same tweet? What is the "emotional status" depicted by my tweets over time? If I had location information enabled, I'm sure a map of likely location could even be produced (CREEPER STATUS OVER 9000). I haven't decided whether I'm going to post the code I wrote to generate all this since it's kinda ugly. It's nothing too complicated and really just the Google authentication is the hardest thing to get properly set up. If you want to have an always up-to-date version of your Twitter Archive, definitely check out [Martin Hawksey's Google App Script](http://mashe.hawksey.info/2013/01/sync-twitter-archive-with-google-drive/). It's really the core functionality of everything you've seen above and without it, I'd probably have had to write a ton more code to get everything running.
Unlike the original version, ~~Twitter Analytics~~ (_Deprecated November 2015_) represents the entirety of my tweeting activity since joining the social network back in May 2007 (I didn't really start using it until a year later). I now know every single word I've said in my 31k tweets and when I said them. I can see distributions of character counts and when I'm most active throughout the day. Even with all of the stuff I've already done, I'm sure there's tons of other cool metrics that (could|need to) be made. When have I said certain words over time? What words are commonly used in the same tweet? What is the "emotional status" depicted by my tweets over time? If I had location information enabled, I'm sure a map of likely location could even be produced (CREEPER STATUS OVER 9000). I haven't decided whether I'm going to post the code I wrote to generate all this since it's kinda ugly. It's nothing too complicated and really just the Google authentication is the hardest thing to get properly set up. If you want to have an always up-to-date version of your Twitter Archive, definitely check out [Martin Hawksey's Google App Script](http://mashe.hawksey.info/2013/01/sync-twitter-archive-with-google-drive/). It's really the core functionality of everything you've seen above and without it, I'd probably have had to write a ton more code to get everything running.

**Check it out:** [**Twitter Analytics 2.0**](http://www.csullender.com/twitter)
~~**Check it out:** **Twitter Analytics 2.0**~~ (_Deprecated November 2015_)
