Commits on May 8, 2015
Commits on Dec 22, 2014
  1. Fix docs typo

Commits on May 27, 2014
  1. @ncanceill
Commits on Dec 21, 2013
  1. @jleclanche
Commits on Sep 23, 2013
  1. @thcipriani
Commits on Aug 7, 2013
  1. @Niggler

    --help also invokes help

    Niggler authored
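A hypothetical sketch of how `--help` could be routed to the same path as `-h` (the `spark_help` name and the exact option handling are assumptions, not the script's confirmed code):

```shell
#!/usr/bin/env bash
# Hypothetical option handling: --help behaves the same as -h.
spark_help() {
  echo "usage: spark [-h|--help] VALUE,..."
}

case "${1:-}" in
  -h|--help)
    spark_help
    ;;
esac
```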
Commits on Dec 19, 2012
  1. @mohnish

    Use an exit status of 0

    mohnish authored
    Use an exit status `0` showing that the script quit successfully.
Commits on Dec 12, 2012
  1. @zsprackett

    Fix it so that sourcing works properly

    zsprackett authored
    This commit makes sourcing spark in bash work properly and also removes the
    help function when sourced so as not to pollute the namespace.
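A common pattern for this behavior, sketched (an assumption based on the commit message, not necessarily the commit's exact code; `spark_help` is a stand-in name):

```shell
#!/usr/bin/env bash
# Sketch: behave differently when sourced vs. executed directly.
spark_help() {
  echo "usage: spark VALUE,..."
}

if [ "${BASH_SOURCE[0]}" = "$0" ]; then
  # Executed directly: keep the help function available.
  :
else
  # Sourced: remove the help function so it does not pollute the
  # caller's namespace, and return instead of exiting the caller.
  unset -f spark_help
  return 0 2>/dev/null
fi
```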
Commits on Sep 13, 2012
  1. Remove comment

Commits on Sep 11, 2012
  1. Characters ( and ) create a subshell and then execute the list of commands; { is smarter and faster

    Arturo Borrero Gonzalez authored
    When using -z you need "" or you will be evaluating the ']' character if $1 is unset.
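Both points can be illustrated with a small sketch (generic shell behavior, not lines from the script itself):

```shell
#!/usr/bin/env bash
# ( ) forks a subshell, so variable changes inside it are lost;
# { } groups commands in the current shell (and avoids the fork).
x=1
( x=2 )      # assignment happens in a child process
echo "$x"    # prints 1
{ x=2; }     # assignment happens in the current shell
echo "$x"    # prints 2

# Quoting with -z: if $1 is unset and unquoted, [ -z $1 ] collapses
# to [ -z ], which is not the test you meant to write.
set --       # make sure $1 is unset
if [ -z "${1:-}" ]; then
  echo "first argument is empty or unset"
fi
```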
Commits on Jul 30, 2012
  1. @akatrevorjay
Commits on Nov 18, 2011
  1. @markusfisch
  2. @heyc

    Example updates, documentation changes

    heyc authored
Commits on Nov 17, 2011
  1. retab

  2. Update docs

Commits on Nov 16, 2011
  1. @markusfisch
  2. @markusfisch

    streamlined processing

    markusfisch authored
    the fractional part of floats is now cut off in favor of speed and portability
    scaling is now done with fixed-point math
    numbers that are all lower than the total number of ticks now display correctly
    the sparkline always displays the relative difference between numbers
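A sketch of what fixed-point scaling can look like in bash (the variable names and the 1000x scale factor are illustrative assumptions, not the script's actual code):

```shell
#!/usr/bin/env bash
# Map each value into an index 0..7 using integer math only:
# multiply by 1000 before dividing to keep precision without floats.
ticks=(▁ ▂ ▃ ▄ ▅ ▆ ▇ █)
data=(1 5 22 13 53)

min=${data[0]}; max=${data[0]}
for n in "${data[@]}"; do
  if (( n < min )); then min=$n; fi
  if (( n > max )); then max=$n; fi
done
range=$(( max - min ))
if (( range == 0 )); then range=1; fi

for n in "${data[@]}"; do
  i=$(( ( (n - min) * 1000 / range ) * (${#ticks[@]} - 1) / 1000 ))
  printf '%s' "${ticks[$i]}"
done
printf '\n'   # prints ▁▁▃▂█
```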
  3. Handle floating-point numbers (closes #28)

    Mostly I just want to make this last test green. Shelling out to awk slows things
    down, unfortunately, but at least from this point forward we can just improve the
    performance. I still suspect we can simplify the whole thing with a smarter
    rewrite and speed things up to boot.
  4. Merge pull request #38 from gwern/patch-3

    rm 'set -e' & quote per Riviera's advice; 'set -e' breaks on no trailing newline
  5. UTF-8 has one more bin.

    Chad Metcalf authored
    I got a fever, and the only prescription is more sparklines.
  6. @gwern

    eat stdin with 'cat', not 'read' which doesn't seem to work with stdin (this should fix issue #9)

    gwern authored
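The difference can be seen in a small sketch (illustrative generic shell behavior, not the script's code):

```shell
#!/usr/bin/env bash
# 'read' consumes only one line and returns nonzero when the input
# has no trailing newline; 'cat' copies all of stdin either way.
printf '1,2,3' | { read -r line; echo "read exit status: $?"; }    # 1
printf '1,2,3\n' | { read -r line; echo "read exit status: $?"; }  # 0
data=$(printf '1,2,3' | cat)
echo "cat captured: $data"
```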
  7. @gwern

    rm 'set -e' & quote per Riviera's advice; 'set -e' breaks on no trailing newline (see issue #37 for detailed discussion)

    gwern authored
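Why `set -e` interacts badly with a missing trailing newline, sketched (an assumption based on the commit message, not the script's exact code):

```shell
#!/usr/bin/env bash
# Under 'set -e' the shell exits as soon as a command returns nonzero.
# 'read' returns 1 at EOF when input lacks a trailing newline, so a
# script like this dies before it can print anything:
#
#   set -e
#   read -r data          # input "1,2,3" with no trailing newline
#   echo "$data"          # never reached
#
# Without 'set -e' (or with an explicit '|| true'), the data that
# 'read' did capture is still usable:
data=$(printf '1,2,3' | { read -r line || true; echo "$line"; })
echo "$data"   # prints 1,2,3
```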
Commits on Nov 15, 2011
  1. Merge pull request #30 from gwern/patch-1

    generalize to translating all whitespace, not one character (the space)
  2. Remove debug

  3. @joshmoore

    Fix tier=0 for 1-5

    joshmoore authored
  4. @joshmoore

    Fix odd results

    joshmoore authored
    The loop was only going up to the size of the numbers
    as opposed to the number of ticks.
  5. @joshmoore

    Add debugging

    joshmoore authored
  6. @gwern
  7. @patricklucas

    Keep individual numbers separate

    patricklucas authored
    Instead of interpreting input "1 3 2 6 8" as "13268", treat spaces
    as commas then condense.
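One way to get that behavior (a sketch; the actual commit may use different commands):

```shell
#!/usr/bin/env bash
# Translate every run of whitespace into a single comma, so
# "1 3  2 6 8" is treated as "1,3,2,6,8" rather than "13268".
input="1 3  2 6 8"
normalized=$(printf '%s' "$input" | tr -s '[:space:]' ',')
echo "$normalized"   # prints 1,3,2,6,8
```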
  8. @patricklucas

    Change back to using 'test' for string equality

    patricklucas authored
    Minimizes this branch's overall change.
  9. @patricklucas

    Allow spaced input

    patricklucas authored
    This allows input like "1, 2, 4, 7, 9" which is sometimes useful
    with longer lists.
  10. @peff

    print sparks incrementally instead of building string

    peff authored
    This shaves a few lines from the print_ticks function. We
    use "printf" instead of "echo -n" as the former is more
    portable (although we are hopelessly tied to bash due to the
    use of arrays, anyway, so either would be fine).
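The portability point can be shown in isolation (a generic sketch, not the function's exact body):

```shell
#!/usr/bin/env bash
# POSIX leaves 'echo -n' unspecified (some shells print a literal
# "-n"), so printf is the portable way to suppress the newline.
for tick in ▁ ▄ █; do
  printf '%s' "$tick"   # emit each spark as soon as it is computed
done
printf '\n'   # prints ▁▄█
```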
  11. @peff

    use shell arithmetic expansion

    peff authored
    This is way faster than invoking bc repeatedly.
      $ DATA=1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20
      $ time spark.old $DATA >/dev/null
      real    0m1.018s
      user    0m0.020s
      sys     0m0.060s
      $ time spark $DATA >/dev/null
      real    0m0.089s
      user    0m0.000s
      sys     0m0.008s
    Or to make it more clear:
      $ elapsed_ms() {
          /usr/bin/time -f %e "$@" 2>&1 >/dev/null |
            perl -lpe '$_ *= 1000'
        }
      $ spark "$(elapsed_ms spark.old $DATA),$(elapsed_ms spark $DATA)"
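The speed difference comes from process creation: every `bc` call forks a new process, while `$(( ))` is evaluated by the shell itself. A minimal comparison (illustrative, not the commit's code):

```shell
#!/usr/bin/env bash
a=7; b=3
slow=$(echo "$a * $b" | bc)   # forks a pipeline and the bc binary
fast=$(( a * b ))             # no fork: evaluated inside the shell
echo "$slow $fast"            # both values are 21
```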
  12. @peff

    drop pointless loop

    peff authored
    We just reassign the data to itself in the loop, and then
    break after reading one line.