Documentation for the bashupload.com command-line file uploader.

bashupload.com is a tool for quickly uploading files from the command line (handy when you're working with a server over SSH) and downloading them on another server or a desktop/mobile device. For example, when you've exported a report from MySQL to CSV and want to send it to your analyst, or when you've created a virtual test node and want to copy some files to it.

Basic usage

curl https://bashupload.com/name.txt --data-binary @file.txt
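
Once the upload completes, the response contains a download URL with a short key in it. A hypothetical example (the aB3d key below is made up; use the one printed by your own upload):

wget https://bashupload.com/aB3d/name.txt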

Upload multiple files

A single upload can contain up to 8 files:

curl https://bashupload.com/ -F 'file1=@file1.txt' -F 'file2=@file2.txt' ...
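
If you have a handful of files in one directory, a short shell sketch can build the -F arguments for you (an illustration only: it assumes bash, filenames without spaces, and at most 8 matching files; the file1, file2, ... field names are arbitrary):

args=(); i=1
for f in *.log; do
  args+=(-F "file$i=@$f")   # one -F flag per file
  i=$((i+1))
done
curl https://bashupload.com/ "${args[@]}"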

Upload with progress

For large files you can enable upload progress:

curl -o /tmp/_bu.tmp -# https://bashupload.com/name.txt --data-binary @file.txt && cat /tmp/_bu.tmp && rm /tmp/_bu.tmp
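
The -o /tmp/_bu.tmp indirection is there because with -# the progress bar is drawn on the terminal while the server's response is saved to the temp file, which is then printed and removed. A small wrapper makes this reusable (a sketch; the bu_up name and the mktemp-based temp file are just illustrative):

bu_up() {
  # usage: bu_up file.txt — upload with a progress bar, then print the server response
  local tmp; tmp=$(mktemp)
  curl -o "$tmp" -# "https://bashupload.com/$(basename "$1")" --data-binary @"$1" && cat "$tmp"
  rm -f "$tmp"
}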

(Screenshot: curl upload progress bar.)

Encrypt file on upload and decrypt on download (using gpg)

For more security you can encrypt files before upload:

gpg -ac -o- test.txt | curl https://bashupload.com/encrypted.txt --data-binary @-

And then decrypt that on download:

curl https://bashupload.com/ca8H/encrypted.txt | gpg -d -o decrypted.txt
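
A pair of small helper functions can wrap the two commands above (a sketch; the bu_enc/bu_dec names are illustrative, and gpg prompts for the passphrase interactively):

bu_enc() {
  # usage: bu_enc secret.txt — symmetric-encrypt with gpg, then upload as encrypted.txt
  gpg -ac -o- "$1" | curl https://bashupload.com/encrypted.txt --data-binary @-
}

bu_dec() {
  # usage: bu_dec https://bashupload.com/ca8H/encrypted.txt decrypted.txt
  curl "$1" | gpg -d -o "$2"
}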

Append (feed) text files to a single uploaded file

Instead of each upload creating a new file, you can append your uploads to the same file on the server. This is handy when you offload statistics or logs from multiple nodes and need to consolidate them easily.

First, you have to initialize the feed. Upload the first file of the batch with the additional "feed: 1" header:

curl "https://bashupload.com/access.log" -H "feed: 1" --data-binary @access.log

You'll see the standard output plus a feed key for continuing to feed the same file:

Fed 39945 bytes to access.log on "xDRD" key
Total feed size 39945 bytes

wget https://bashupload.com/xDRD/access.log

To continue feeding more data to this file use: 
curl "https://bashupload.com/access.log" -H "feed-key: xDRD" --data-binary @access.log

On subsequent uploads, just use the feed-key header from the initial upload instructions:

curl "https://bashupload.com/name.txt" -H "feed-key: xDRD" --data-binary @access.log.1

After that, the output shows the consolidated size of both uploads, and you can either download the combined file or feed more data:

Fed 127416 bytes to access.log on "xDRD" key
Total feed size 167361 bytes

wget https://bashupload.com/xDRD/access.log

To continue feeding more data to this file use: 
curl "https://bashupload.com/access.log" -H "feed-key: xDRD" --data-binary @access.log

Rewriting files

Sometimes you might need to rewrite a file on the server (e.g. statistics offloading every x weeks/days/hours/...). In this case, just pass the "rewrite: 1" header on the first upload:

curl https://bashupload.com/data.csv --data-binary @data.csv -H "rewrite: 1"

Once uploaded, you'll see instructions for rewriting this file:

Uploaded 1 file, 1119 bytes

wget https://bashupload.com/CpmjE/data.csv

  To rewrite this file, use the following command:
  curl "https://bashupload.com/CpmjE/data.csv" -H "rewrite: 1" --data-binary @data.csv

So to rewrite the uploaded file, just use its full path (with the key in it):

curl "https://bashupload.com/CpmjE/data.csv" -H "rewrite: 1" --data-binary @data.csv

You can now fetch data from this file into other tools. For example, a spreadsheet's IMPORTDATA() function (e.g. in Google Sheets) can pull it straight from the URL:

=IMPORTDATA("https://bashupload.com/CpmjE/data.csv")
