Commits on Oct 2, 2010
  1. fix trailing whitespace

    Dave Hansen committed Oct 2, 2010
  2. fix old 'aa' variable name

    Dave Hansen committed Oct 2, 2010
Commits on Aug 9, 2010
  1. @Buggaboo

    gmailfs: see patch; fixes most of the errors from the bug report on the Google Code repo

    Here's the first patch; it leaves all the indentation stuff alone.  I
    even left the string concatenation stuff alone.
    
    --
    
    Fixes exception syntax and "IMAPFS_TRASH_ALL" bug.
    
    Signed-off-by: Dave Hansen <dave@sr71.net>
    Buggaboo committed with Dave Hansen Aug 9, 2010
Commits on Jun 30, 2010
  1. Move the writeout code around a bit

    Dave Hansen committed Jun 30, 2010
Commits on Jun 28, 2010
  1. Add some reliability

    This wraps quite a few of the IMAP commands.  It then notices
    when exceptions (such as network errors) or transient conditions
    (like append errors when the account is not full) occur.  When
    that happens, it tears down the IMAP connection and opens a new
    one in its place, hopefully without the rest of the software
    noticing.
    
    This should let these filesystems stay connected for much longer
    and also better tolerate some of the errors that GMail is starting
    to throw these days.
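    
    A rough sketch of the wrapper idea, in the spirit of this change
    (open_connection and robust_call are hypothetical names, not the
    actual gmailfs code, and the settings shown are placeholders):
    
        import imaplib
        import time
        
        def open_connection():
            # Placeholder settings; the real code reads them from the
            # gmailfs configuration.
            conn = imaplib.IMAP4_SSL("imap.gmail.com")
            conn.login("user@example.com", "password")
            conn.select("gmailfs")
            return conn
        
        def robust_call(conn, method_name, *args):
            # Run one IMAP command; on an exception or network error,
            # tear the connection down and quietly open a new one.
            retries = 3
            for attempt in range(retries):
                try:
                    return conn, getattr(conn, method_name)(*args)
                except (imaplib.IMAP4.abort, imaplib.IMAP4.error, OSError):
                    try:
                        conn.logout()
                    except Exception:
                        pass
                    time.sleep(2 ** attempt)
                    conn = open_connection()
            raise IOError("IMAP %s kept failing after reconnects" % method_name)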
    Dave Hansen committed Jun 27, 2010
Commits on Jun 27, 2010
  1. Fix IMAP response ) parsing

    I was a bit confounded that the UID command returns more responses
    than the number of UIDs I gave it.
    
    It turns out that there are either N*2 or N*2+1 responses from those
    commands.  Each UID gets a response in the form of a tuple, but each
    of _those_ is followed by a plain 'str' object filled with ')'.
    
    But, sometimes, the whole sequence is followed by _another_ string.
    That string seems to be for the flags that all the messages share,
    like: "FLAGS (\Seen))"
    
    But, I screwed it up, and wasn't looking for these responses, and
    went out of range on the uids array.  I should _probably_ be parsing
    these and looking for all the matching '(' ')'  pairs in the
    responses.  But, for now, just ignore all the string responses, and
    try to parse the tuples.
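    
    A sketch of the "skip the strings, parse the tuples" approach
    (parse_uid_entries is a hypothetical helper, not the real gmailfs
    function):
    
        # A FETCH-by-UID response for N messages comes back as roughly
        # N*2 or N*2+1 items: one tuple per UID, each followed by a
        # plain str of ')', sometimes with one extra trailing string of
        # shared flags such as " FLAGS (\Seen))".
        def parse_uid_entries(response_data):
            entries = []
            for item in response_data:
                if not isinstance(item, tuple):
                    # ')' or shared-flags strings: ignore them for now
                    # instead of walking off the end of the uid array.
                    continue
                envelope, payload = item
                entries.append((envelope, payload))
            return entries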
    Dave Hansen committed Jun 27, 2010
Commits on Apr 19, 2010
  1. Add a block cache

    This finally allows multiple blocks from the same file to be
    active at one time.  This means that we can much more efficiently
    handle doing writeouts to large files.
    
    But, I think GMail is getting slower.  Although I can burst at
    almost 10 Mbit/s, IMAP operations really bog down.  Even simple
    writes take a long time to complete.  I think either GMail
    is getting overloaded or I'm getting throttled for beating it up
    too much.
    
    I shrunk the default block size down.  If it is too big, we do
    bad things with big files.  Say the block size is 10MB and we
    are writing a 100MB file:
    
    1. start writing to file, get perhaps 5MB in
    2. writeout thread wakes up, sees the new 5MB and starts an
       IMAP transaction to get it up to the server.  Block marked
       clean.
    3. Block continues to be written, up to 10MB, block marked
       dirty because of new writes.
    4. Writing to block stops, because writes have moved on to the
       next block.
    5. writeout of first 5MB finishes.
    6. The full 10MB block is now dirty and needs to get written out.
       We write the full 10MB back up to the server.
    
    So, to write 10MB of data, we ended up sending 15MB.  With
    smaller block sizes, that writeout thread has a much smaller
    chance of seeing a partial block.  So, I shrunk the default
    block size a bit.
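    
    A minimal sketch of the block-cache shape this adds (the class,
    field names, and sizes here are illustrative, not the actual
    gmailfs ones):
    
        class BlockCache:
            def __init__(self, block_size=512 * 1024):
                self.block_size = block_size
                self.blocks = {}       # (path, block_nr) -> bytearray
                self.dirty = set()     # keys that still need a writeout
        
            def write(self, path, offset, data):
                # Several blocks of the same file can now be live at
                # once, so a large sequential write spreads across many
                # cache entries instead of thrashing a single buffer.
                while data:
                    block_nr, off = divmod(offset, self.block_size)
                    key = (path, block_nr)
                    block = self.blocks.setdefault(
                        key, bytearray(self.block_size))
                    chunk = data[:self.block_size - off]
                    block[off:off + len(chunk)] = chunk
                    self.dirty.add(key)
                    offset += len(chunk)
                    data = data[len(chunk):]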
    Dave Hansen committed Apr 19, 2010
Commits on Apr 18, 2010
  1. Make more tolerant of small writes

    It used to be that we had a 50-operation limit in the queue.
    That meant that _any_ 50 outstanding writes would fill it and
    block further writes until the queue got drained.
    
    Now, we at least ensure that those 50 writes are to 50 unique
    objects.  That means that 1000 small writes to an object will
    only end up counting for one of the 50 spots in that queue.
    Well, actually 2 spots (you need one for the dirty inode).
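    
    Roughly the idea, sketched with made-up names (DirtyQueue is not
    the real gmailfs writeout queue):
    
        import threading
        
        class DirtyQueue:
            def __init__(self, limit=50):
                self.limit = limit
                self.members = []    # unique dirty objects, oldest first
                self.cond = threading.Condition()
        
            def add(self, obj):
                # Repeated writes to the same object take only one slot;
                # only a genuinely new object can block when we are full.
                with self.cond:
                    if obj in self.members:
                        return
                    while len(self.members) >= self.limit:
                        self.cond.wait()
                    self.members.append(obj)
        
            def pop(self):
                with self.cond:
                    if not self.members:
                        return None
                    obj = self.members.pop(0)
                    self.cond.notify()
                    return obj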
    Dave Hansen committed Apr 18, 2010
Commits on Feb 22, 2010
  1. Make sure that we are not racing to clear the dirty status

    Use a thread-safe queue to store things.
    Dave Hansen committed Feb 21, 2010
Commits on Feb 21, 2010
  1. Fix initial inode creation

    The initial inode creation was going fine, but the inode
    never got written out again.  That means that all writes
    produced zero-length files, even though the data blocks
    *were* written.
    
    The issue was that the inode wasn't getting marked dirty.
    I think this had to do with the writeout code assuming
    that all inodes for which it could not get a lock would
    eventually get properly written out, and it would still
    remove those from the queue.
    
    But, the lock is taken for other things, and there are
    probably still races there anyway.
    
    So, writeout threads will *always* either write out the
    object or place it back in the queue.  Extra writeouts
    should be avoided by the writeout code checking __dirty
    before actually doing writeouts.
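    
    A sketch of that invariant, with hypothetical names (trylock,
    write_to_imap, and a plain dirty attribute standing in for __dirty):
    
        def writeout_one(queue, node):
            # The object is either written out here or put straight
            # back on the queue -- never silently dropped.
            if not node.trylock():
                queue.add(node)
                return
            try:
                # Checking dirty under the lock avoids redundant
                # writeouts when another thread already flushed it.
                if node.dirty:
                    node.write_to_imap()
                    node.dirty = False
            finally:
                node.unlock()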
    Dave Hansen committed Feb 21, 2010
  2. bump rev in git

    Dave Hansen committed Feb 21, 2010
  3. Get the conf file in here too.

    Dave Hansen committed Feb 21, 2010
  4. Might as well start storing this stuff in git.

    Dave Hansen committed Feb 21, 2010