This is a tool for quickly uploading files from the command line (handy when you're working on a server over SSH) and downloading them on another server or on a desktop/mobile device. For example, when you've generated a report directly from MySQL as CSV and want to send it to your analyst, or when you've created a virtual test node and want to copy some files to it.

Basic usage

curl --data-binary @file.txt

Upload multiple files

A single upload can contain up to 8 files:

curl -F 'file1=@file1.txt' -F 'file2=@file2.txt' ...
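With more than a couple of files, a small shell loop can assemble the -F arguments for you. This is a sketch, not part of the service: build_upload_args is a hypothetical helper, and the upload URL is whatever endpoint you normally use.

```shell
# Hypothetical helper: assemble the -F arguments for a multipart upload
# from the files given as arguments, stopping at the 8-file limit.
build_upload_args() {
    i=1
    for f in "$@"; do
        if [ "$i" -gt 8 ]; then
            echo "warning: skipping $f (max 8 files per upload)" >&2
            break
        fi
        printf '%s ' "-F" "file${i}=@${f}"
        i=$((i + 1))
    done
}

build_upload_args file1.txt file2.txt
# prints: -F file1=@file1.txt -F file2=@file2.txt
# then run, e.g.: curl $(build_upload_args file1.txt file2.txt) <your-upload-url>
```

Since the helper only prints arguments, you can inspect them before passing the result to curl.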

Upload with progress

For large files you can enable upload progress:

curl -o /tmp/_bu.tmp -# --data-binary @file.txt && cat /tmp/_bu.tmp && rm /tmp/_bu.tmp

Image of curl upload progress
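The one-liner above can be wrapped in a small function so the temp file is always cleaned up. A sketch under assumptions: upload_with_progress is a hypothetical name, and UPLOAD_URL stands in for your actual endpoint.

```shell
# Hypothetical wrapper around the one-liner above: shows the progress
# bar while uploading, then prints the server response that curl wrote
# to a temp file, and always removes that temp file afterwards.
upload_with_progress() {
    tmp=$(mktemp) || return 1
    # -# draws a progress bar; the response body goes to the temp file
    curl -o "$tmp" -# --data-binary @"$1" "$UPLOAD_URL"
    status=$?
    cat "$tmp"
    rm -f "$tmp"
    return $status
}
```

Using mktemp instead of a fixed /tmp/_bu.tmp path also avoids collisions when two uploads run at once.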

Encrypt file on upload and decrypt on download (using gpg)

For more security you can encrypt files before upload:

gpg -ac -o- test.txt | curl --data-binary @-

And then decrypt that on download:

curl | gpg -d -o decrypted.txt
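The two pipelines compose into an encrypt-before-upload / decrypt-after-download pair, and the gpg half can be checked locally without a server. This sketch wraps it in a hypothetical gpg_roundtrip function; --batch and --pinentry-mode loopback are only there so gpg takes the passphrase non-interactively — the interactive pipelines above don't need them.

```shell
# Round-trip check: encrypt a file the same way as the upload pipeline
# (symmetric, ASCII-armored), then decrypt it as you would after
# download, and compare the result with the original.
gpg_roundtrip() {
    pass=$1 src=$2
    # same as: gpg -ac -o- "$src" | curl --data-binary @- <url>
    gpg --batch --yes --pinentry-mode loopback --passphrase "$pass" \
        -ac -o "$src.asc" "$src" || return 1
    # same as: curl <url> | gpg -d -o decrypted.txt
    gpg --batch --yes --pinentry-mode loopback --passphrase "$pass" \
        -d -o "$src.dec" "$src.asc" 2>/dev/null || return 1
    cmp -s "$src" "$src.dec"
}
```

Symmetric (-c) encryption means the recipient only needs the passphrase, not your public key — convenient for ad-hoc transfers, but the passphrase has to travel over a separate channel.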

Append (feed) text files to single uploaded file

Instead of uploading as a new file, you can append your files to the same file on the server. This is handy when you offload statistics or logs from multiple nodes and need to consolidate them easily.

First, you have to initialize the feed: upload the first file of the batch with an additional "feed: 1" header:

curl "" -H "feed: 1" --data-binary @access.log

You'll see the standard output plus an additional feed key for continuing to feed the same file:

Fed 39945 bytes to access.log on "xDRD" key
Total feed size 39945 bytes


To continue feeding more data to this file use: 
curl "" -H "feed-key: xDRD" --data-binary @access.log

On subsequent uploads, just use the feed-key header from the initial upload's instructions:

curl "" -H "feed-key: xDRD" --data-binary @access.log.1

After that, you can download the file with the consolidated contents of both uploads, or keep feeding more data:

Fed 127416 bytes to access.log on "xDRD" key
Total feed size 167361 bytes


To continue feeding more data to this file use: 
curl "" -H "feed-key: xDRD" --data-binary @access.log

Rewriting files

Sometimes you might need to rewrite a file on the server (e.g. offloading statistics every X weeks/days/hours). In this case, just pass the "rewrite: 1" header on the first upload:

curl --data-binary @data.csv -H "rewrite: 1"

Once it's uploaded, you'll see instructions for rewriting the file:

Uploaded 1 file, 1119 bytes


  To rewrite this file, use the following command:
  curl "" -H "rewrite: 1" --data-binary @data.csv

So to rewrite the uploaded file, just use its full path (with the key in it):

curl "" -H "rewrite: 1" --data-binary @data.csv

You can now fetch data from this file into other tools. For example, use the importdata() function to load it into a spreadsheet:

Image of spreadsheet importdata() function
