Tails a file. It should work great. It will continue to work even if a file is unlinked, rotated, or truncated. It is also ok if the path doesn't exist before watching it.


var tail = require('tailfd').tail,
watcher = tail('/some.log',function(line,tailInfo){
  //default line listener. optional.
  console.log('line of data> ',line);
});

when you are done, pause or shut down the watcher (see the watcher methods below)



npm install tailfd

argument structure

tailfd.tail(filename, [options], listener)

  • filename

this should be a regular file or non-existent. The behavior is undefined in the case of a directory.
  • options. supported custom options are

    "start":undefined,
    //optional. a hard start position in the file for tail to start emitting data events. defaults to the first reported stat.size
    "offset":0,
    //optional. the offset is negatively applied to the start position
    "delimiter":"\n",
    //optional. defaults to newline but can be anything
    "maxBufferPerRead":10240,
    //optional. this is how much data will be read off of a file descriptor in one call to read. defaults to 10k.
    //  the maximum data buffer size for each tail is
    //  maxBufferPerRead + the length of the last incomplete line from the previous read
    "readAttempts":3,
    //optional. if tail cannot read the offset from a file it will try this many times before it gives up with a range-unreadable event. defaults to 3 attempts
    "maxLineLength":1024*1024,
    //optional. defaults to 1 MB.
    //  if a line exceeds this length its data will be emitted as a line-part event.
    //  this is a failsafe so that a single line can't eat all of the memory.
    //  all gathered line-parts are completed with the value of the next line event for that file descriptor.
    //the options object is passed to watchfd as well. with watchfd you may configure
    "timeout": 60*60*1000, //defaults to one hour
    //how long a file descriptor can remain inactive before being cleared
    "timeoutInterval": 60*5*1000, //defaults to every five minutes
    //how often to check for inactive file descriptors
    //the options object is also passed directly to fs.watchFile so you may configure
    "persistent":true, //defaults to true
    //persistent indicates whether the process should continue to run as long as files are being watched
    "interval":0, //defaults to 0
    //interval indicates how often the target should be polled, in milliseconds. (On Linux systems with inotify, interval is ignored.)
  • callback

    • this is bound to the line event of the watcher. it's optional.


    cur and prev, where they appear in the events below, are instances of fs.Stats

  • @returns TailFD Watcher
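Putting the argument structure together, here is a sketch of a call with explicit options. The values shown are just the documented defaults; the `tail()` call itself is commented out so the sketch stands alone without tailfd installed:

```javascript
// Sketch of an options object using the defaults documented above.
var options = {
  start: undefined,               // begin at the first reported stat.size
  maxBufferPerRead: 10240,        // bytes read per call, defaults to 10k
  readAttempts: 3,                // retries before a range-unreadable event
  maxLineLength: 1024 * 1024,     // 1 MB failsafe for runaway lines
  timeout: 60 * 60 * 1000,        // clear descriptors inactive for an hour
  timeoutInterval: 60 * 5 * 1000, // sweep for inactive descriptors every 5 min
  persistent: true,               // keep the process alive while watching
  interval: 0                     // fs.watchFile poll interval (ignored with inotify)
};

// var watcher = require('tailfd').tail('/some.log', options, function (line, tailInfo) {
//   console.log('line of data> ', line);
// });
```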


watcher.pause()

  • pause data and line events on all underlying descriptors


watcher.resume()

  • get it goin again! =)


events

  • line
    • String line, Object tailInfo
  • data
    • Buffer buffer, Object tailInfo
  • line-part
    • String linePart, Object tailInfo
      • if the line length exceeds options.maxLineLength the linePart is emitted. This is to prevent cases where unexpected values in logs can eat all of the memory.
  • range-unreadable
    • Array errors, Number fromPos,Number toPos,Object tailInfo
      • After the configured readAttempts the offset still could not be read. This range will be skipped.

events inherited from watchfd

  • change
    • fs.Stats cur, fs.Stats prev
  • open
    • fs.Stats cur, {fd: file descriptor, stat: fs.Stats}
  • unlink
    • fs.Stats cur, {fd: file descriptor, stat: fs.Stats}
  • timeout
    • fs.Stats cur, {fd: file descriptor, stat: fs.Stats}

tailInfo properties

  • stat
    • instanceof fs.Stats
  • pos
    • current seek position in the file
  • fd
    • file descriptor being tailed
  • buf
    • string containing the last data fragment from delimiter parsing
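The buf property holds the partial-line state for delimiter parsing. A minimal sketch of that bookkeeping (an illustration of the idea, not tailfd's actual implementation):

```javascript
// Split a freshly read chunk on the delimiter; complete pieces become
// lines and the trailing fragment is kept in tailInfo.buf for next time.
function parseChunk(tailInfo, chunk, delimiter) {
  var pieces = (tailInfo.buf + chunk).split(delimiter);
  tailInfo.buf = pieces.pop(); // possibly incomplete last line
  return pieces;               // complete lines
}

var tailInfo = {buf: ''};
var lines = parseChunk(tailInfo, 'first\nsecond\npart', '\n');

console.log(lines);        // ["first", "second"]
console.log(tailInfo.buf); // "part"
```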

fs.watchFile and fs.watch may behave differently on different systems; see the Node.js fs documentation for details.
