
TODO

  • redo logging to use the debug module (see the sketch after this list)
  • more caching beyond what sshfs does?
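
A minimal sketch of what switching to the debug module could look like -- the "websocketfs:sftp" namespace is just a hypothetical example, and output only appears when the DEBUG environment variable matches it (e.g. DEBUG=websocketfs:*):

```ts
import debug from "debug";

// One namespaced logger per subsystem; silent unless DEBUG matches the name.
const log = debug("websocketfs:sftp");

log("open %s with flags %d", "/tmp/example.txt", 0);
log("read %d bytes", 4096);
```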

Lower Priority for now

  • support node v20
    • See #1
    • May be entirely an issue for unit testing.

DONE

  • caching -- take 2 -- implement at least the api of sshfs.
The default attribute cache timeout for SSHFS is 20 seconds. You can change it with the -o cache_timeout=N option, where N is the desired cache timeout in seconds. You can also control cache timeouts for directory listings and other attributes with options such as -o cache_stat_timeout=N, -o cache_dir_timeout=N, and -o cache_link_timeout=N. To disable the cache, use the -o cache=no option. (A rough sketch of such a timed attribute cache appears after this list.)
  • there are a bunch of TODO's in the code still -- read them; delete the ones that aren't relevant, address them, or leave the ones that may matter later. Result: most of the worrisome ones are in copy functions that FUSE doesn't use.
  • Remove all auth (was: support auth, i.e., an optional symmetric key that clients must present to be allowed to mount the filesystem; useful for "defense in depth"). It's better to do the auth at a different level.
  • delete the "WEB" comments in code...
  • rewrite require's to use static import syntax instead. Once this is done, we have the option of using ESM modules ("module": "es2020" in tsconfig.json). A small before/after example follows this list.
  • eliminate use of var
  • make it work over the network
    • it technically does already work over a network, but I've only been using localhost for testing/demos. Need to try a real network situation.
  • benchmark and make it a bit faster, e.g., maybe support some intense levels of caching...?
  • implement statfs so we can do df -h ...
    • with luck, I just need to implement SftpVfsStats in sftp-misc.ts?!
  • set filesystem name
  • stat doesn't return blocks so "du" doesn't work.
  • tar gets confused -- "file changed as we read it", I think because our timestamps are a mess for stat (1 second resolution and kind of random?)
  • LARGE files (above 32*1024 characters) are always corrupted when written (or read?). This probably causes many of the remaining problems. I don't know why this is yet, but stress.test.ts illustrates it. Basically, exactly the first 32*1024 bytes get written and nothing more. I thought I wrote
  • "git log" on nontrivial content doesn't work, probably due to mmap?
  • "git clone" doesn't work
  • get rid of all the #if macro preprocessor comments (maybe grunt used them). We can solve these problems for the web later, e.g., using polyfills or better code.
  • writing a LARGE file -- do we need to chunk it? Same question about reading. It seems like we do. What about changing the params? (A rough chunking sketch follows this list.)
  • promote the node-fuse stuff to be part of the main library instead of an example
  • finish implementing all fuse functions
  • GitHub Actions workflow that runs pnpm test-all...
  • implement reading contents from a file
  • upgrade to newest ws module.
  • fix all typescript errors
  • enable noUnusedLocals
  • enable noUnusedParameters
  • enable strictNullChecks
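
The sshfs-style caching item above refers to this sketch: a rough, hypothetical TypeScript attribute cache with a 20-second default timeout, in the spirit of sshfs's cache_timeout option. The names (AttrCache, invalidate) are illustrative, not the project's actual API:

```ts
// Hypothetical TTL cache for stat results, mirroring sshfs's 20-second default.
interface CacheEntry<T> {
  value: T;
  expires: number; // ms since epoch
}

class AttrCache<T> {
  private entries = new Map<string, CacheEntry<T>>();

  constructor(private timeoutMs: number = 20_000) {}

  get(path: string): T | undefined {
    const entry = this.entries.get(path);
    if (entry == null) return undefined;
    if (Date.now() > entry.expires) {
      // Expired, just as if cache_timeout elapsed in sshfs.
      this.entries.delete(path);
      return undefined;
    }
    return entry.value;
  }

  set(path: string, value: T): void {
    this.entries.set(path, { value, expires: Date.now() + this.timeoutMs });
  }

  invalidate(path: string): void {
    this.entries.delete(path);
  }
}
```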
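The require-to-import item refers to this small before/after example. It uses the ws module only because it is mentioned above, and assumes esModuleInterop (or equivalent) for the default import:

```ts
// Before: CommonJS-style require, which keeps the build on CJS output.
// const WebSocket = require("ws");

// After: static import syntax; once every require is converted,
// tsconfig.json can set "module": "es2020" and emit ESM.
import WebSocket from "ws";

const ws = new WebSocket("ws://localhost:8080");
ws.on("open", () => ws.send("hello"));
```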
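The large-file chunking item refers to this sketch: splitting a write into 32*1024-byte blocks so that no single request exceeds the size the transport handles reliably. writeBlock and the block size here are assumptions for illustration, not the project's real API:

```ts
const BLOCK_SIZE = 32 * 1024; // the boundary where corruption was observed

// Hypothetical low-level write; in the real code this would be an SFTP
// write request sent over the websocket.
declare function writeBlock(
  handle: number,
  offset: number,
  data: Uint8Array,
): Promise<void>;

// Write a large buffer as a sequence of BLOCK_SIZE chunks.
async function writeChunked(handle: number, data: Uint8Array): Promise<void> {
  for (let offset = 0; offset < data.length; offset += BLOCK_SIZE) {
    const chunk = data.subarray(offset, offset + BLOCK_SIZE);
    await writeBlock(handle, offset, chunk);
  }
}
```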