Create and share dynamic coverage reports!
README

One task every public relations professional has had to deal with is compiling coverage reports. The process typically involves searching for articles and then pasting information piece by piece into an Excel grid. Coverage Reportr simplifies that process by automating most of the data entry and allowing multiple users to collaborate on a single report. You can even download your reports as .csv files and send them as attachments, or use the data in other applications.

Once you have an account you can create a coverage report:


On the next page click "Add Hit" and you'll see this screen where you can paste an article URL:


Submit the URL, and the article's publication, title, author, tone, and today's date will automatically populate the fields on the edit page. Verify that the information is correct, hit submit, and your hit will be added to your report!


Ta-da!


One of the coolest features of Coverage Reportr is its ability to pull information about an article from just the URL, and to save each instance as a row in a .csv file. These capabilities are provided by the methods available on the `Hit` model:

```ruby
require 'open-uri'
require 'uri'
require 'csv'
require 'nokogiri'

class Hit < ActiveRecord::Base
  belongs_to :coverage_report

  AlchemyAPI.key = ENV.fetch('ALCHEMY_API_KEY')

  # Parses the URL's host to extract just the name of the publication,
  # dropping a leading "www" subdomain when present.
  def self.get_publication(url)
    parts = URI.parse(url).host.split(".")
    parts.first == "www" ? parts[1..-2].first : parts.first
  end

  # Nokogiri finds the article title by seeking the <title> element;
  # this works most of the time.
  def self.get_title(url)
    Nokogiri::HTML(open(url)).css('title').first.content
  end

  # AlchemyAPI is a logic-based text reader that intelligently
  # identifies the author of an article.
  def self.get_author(url)
    AlchemyAPI::AuthorExtraction.new.search(url: url)['author']
  end

  # It can even tell you whether the article is positive or negative
  # in tone, which is an important signal for PR people.
  def self.get_sentiment(url)
    AlchemyAPI::SentimentAnalysis.new.search(url: url)['type']
  end

  # Extracts the full body of an article; not yet used by the
  # current feature set.
  def self.get_text(url)
    AlchemyAPI::TextExtraction.new.search(url: url)
  end

  # Enables coverage reports to be downloaded in .csv format.
  def self.to_csv(hits)
    column_names = [:hit_publication, :hit_title, :hit_url, :hit_author, :hit_date, :hit_sentiment]
    CSV.generate do |csv|
      csv << column_names
      hits.each do |hit|
        csv << column_names.map { |key| hit[key] }
      end
    end
  end
end
```
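To try the extraction and export logic outside of Rails, the publication parsing and CSV generation can be exercised with plain Ruby. This is a minimal sketch: the free function and hash rows below stand in for the model methods and ActiveRecord records.

```ruby
require 'uri'
require 'csv'

# Same publication-parsing idea as Hit.get_publication, shown as a
# free function so it can be tried in irb.
def publication_from(url)
  parts = URI.parse(url).host.split(".")
  parts.first == "www" ? parts[1..-2].first : parts.first
end

puts publication_from("http://www.nytimes.com/2015/01/01/example.html") # => nytimes
puts publication_from("http://espn.go.com/some-story")                  # => espn

# CSV generation in the same shape as Hit.to_csv, with plain hashes
# playing the role of Hit rows.
columns = [:hit_publication, :hit_title, :hit_url]
hits = [
  { hit_publication: "nytimes", hit_title: "Example story", hit_url: "http://www.nytimes.com/x" }
]
csv = CSV.generate do |out|
  out << columns
  hits.each { |hit| out << columns.map { |key| hit[key] } }
end
puts csv
```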
  

Coverage Reportr takes advantage of the following technologies:

And it has a few dependencies:

Deployment Instructions:

Coverage Reportr Backlog:

  • There is no effective date-extraction function, so the date field automatically populates with today's date.
  • The New York Times requires a log-in, which blocks Nokogiri from following the redirect. Need to integrate the NYT article API and add methods specifically for that site. Other sites may behave the same way, but I have not come across them yet.
  • When the URL is analyzed, some sites redirect the user directly to the original page:
    • ESPN.com
    • Cnet.com
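One possible route for the missing date extraction is reading a publish-date meta tag, which many sites expose. The helper below is a hypothetical stdlib-only sketch, not part of the current codebase; a real implementation would parse the page with Nokogiri like the other extractors, and the `article:published_time` tag is an assumption that only some publishers follow.

```ruby
require 'date'

# Hypothetical helper: pull an ISO timestamp out of the common
# "article:published_time" meta tag with a naive regex, and
# normalize it to a plain date string.
def published_date_from(html)
  stamp = html[/property="article:published_time"\s+content="([^"]+)"/, 1]
  stamp && Date.parse(stamp).to_s
end

html = '<meta property="article:published_time" content="2015-06-01T09:30:00Z">'
puts published_date_from(html)                     # => 2015-06-01
puts published_date_from("<p>no tag</p>").inspect  # => nil
```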