A CLI and library for super fast, multipart downloads.
Pluto is a multipart file downloader. It comes in the form of a Go package and a CLI. It works by dividing the file into a given number of parts; each part is assigned a range of bytes to download, and as the parts are downloaded they are written to the file in the correct order.
There are many tools similar to and better than Pluto, but most of them have an upper limit of 16 or 32 parts, whereas Pluto has no upper limit.
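
To make the mechanism concrete, here is a minimal sketch of range-based multipart downloading using only the Go standard library (`io.NewOffsetWriter` needs Go 1.20+). The URL, part count, and output filename are placeholders; this illustrates the technique, not Pluto's actual implementation.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	url := "https://example.com/file.bin" // placeholder URL
	parts := int64(4)                     // number of concurrent parts

	// Ask for the total size; the server must support range requests.
	head, err := http.Head(url)
	if err != nil || head.ContentLength <= 0 {
		panic("could not determine file size")
	}
	size := head.ContentLength

	out, err := os.Create("file.bin") // placeholder output file
	if err != nil {
		panic(err)
	}
	defer out.Close()

	chunk := size / parts
	done := make(chan error, parts)
	for i := int64(0); i < parts; i++ {
		start, end := i*chunk, (i+1)*chunk-1
		if i == parts-1 {
			end = size - 1 // the last part takes the remainder
		}
		go func(start, end int64) {
			req, err := http.NewRequest("GET", url, nil)
			if err != nil {
				done <- err
				return
			}
			// Request only this part's byte range.
			req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", start, end))
			resp, err := http.DefaultClient.Do(req)
			if err != nil {
				done <- err
				return
			}
			defer resp.Body.Close()
			// Write at the part's own offset, so parts may finish in
			// any order and the file still comes out correct.
			_, err = io.Copy(io.NewOffsetWriter(out, start), resp.Body)
			done <- err
		}(start, end)
	}
	for i := int64(0); i < parts; i++ {
		if err := <-done; err != nil {
			panic(err)
		}
	}
}
```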
- Fast download speeds.
- Multipart downloading.
- High tolerance for low quality internet connections.
- A stats API that makes it easy to read parameters like the current download speed and the number of bytes downloaded.
- Reliable file downloads, with automatic retries for recoverable errors.
- Can download files from servers that require authorization in the form of a value in a request header.
- Can load URLs from a file.
If you have a working Go environment:

```
go get github.com/ishanjain28/pluto
```
If you don't have a working Go environment:

- See the Releases section for precompiled binaries.
- Download a binary for your platform.
- Put the binary in `/usr/bin` or `/usr/local/bin` on Unix-like systems, or add the binary's path to the `PATH` variable on Windows.
- Done. Now run `pluto -v` in a terminal to check that it is installed correctly. :)
```
Usage:
  pluto [OPTIONS] [urls...]

Application Options:
      --verbose          Enable Verbose Mode
  -n, --connections=     Number of concurrent connections
      --name=            Path or Name of save file
  -f, --load-from-file=  Load URLs from a file
  -H, --Headers=         Headers to send with each request. Useful if a server requires some information in headers
  -v, --version          Print Pluto Version and exit

Help Options:
  -h, --help             Show this help message
```
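
For example, to download a file in 64 parts from a server that expects an authorization header (the URL, token, and filename below are placeholders):

```
pluto -n 64 -H "Authorization: Bearer <token>" --name file.bin https://example.com/file.bin
```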
See cli.go for an example of how to use this package.
1. When an error occurs in the downloading stage of a part, it is retried automatically, unless the error is one that retrying won't fix. For example, if the server sends a 404, 400, or 500 HTTP response, Pluto stops and returns an error.
2. Pluto now uses 256KB buffers instead of 64KB buffers to reduce CPU usage.
3. When a part download fails for a recoverable reason (see note 1), only the bytes that have not yet been downloaded are requested from the server, as in the sketch below.
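
As a rough sketch of what notes 2 and 3 describe, the retry can ask the server to skip the bytes it already holds. The function name and parameters here are illustrative, not Pluto's internals (again assuming Go 1.20+ for `io.NewOffsetWriter`):

```go
package download

import (
	"fmt"
	"io"
	"net/http"
)

// resumePart retries a part whose range in the whole file is
// [start, end], given that `downloaded` bytes of it were already
// written, by requesting only the missing remainder.
func resumePart(url string, w io.WriterAt, start, end, downloaded int64) error {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return err
	}
	// Skip the bytes we already have; fetch only the rest.
	req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", start+downloaded, end))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	// A 256KB copy buffer (note 2): larger buffers mean fewer
	// read/write calls and therefore less CPU time.
	buf := make([]byte, 256*1024)
	_, err = io.CopyBuffer(io.NewOffsetWriter(w, start+downloaded), resp.Body, buf)
	return err
}
```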
Almost all download managers have an upper limit on the number of parts. This is usually done for the following reasons:

- To prevent DDoS detection systems on servers from falsely marking the client's IP address as a hostile machine.
- To prevent degradation of the internet experience on other machines on the same local network and in other applications on the same PC.
- This is just a guess, but maybe people saw that beyond a certain limit, increasing the number of parts doesn't increase speed any more. That is true, but the 16/32 part limit is very low, and much better speed can be achieved by raising the part limit to around 100 on a 50Mbit connection.
But when I am downloading a file from my private servers, I need the absolute maximum speed, and I could not find a good tool for that, so I built one myself. A benchmark between Pluto, axel, and aria2c will be added shortly.
Planned features:

- Pause and resume support.
- Intelligent redistribution of remaining bytes when one of the connections finishes downloading the data assigned to it. This would result in much better speed utilisation as the download approaches its end; see the sketch below for one possible approach.
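
This feature does not exist yet; the following is only one possible approach, with all names illustrative. The idea is to keep the byte ranges still being downloaded in a shared pool, and when a connection runs out of work, let it steal the upper half of the largest remaining range.

```go
package download

import "sync"

// byteRange is a slice of the file still to be fetched, inclusive at
// both ends.
type byteRange struct{ start, end int64 }

// pool tracks the ranges that are still being downloaded.
type pool struct {
	mu     sync.Mutex
	ranges []byteRange
}

// steal splits the largest remaining range and returns its upper half
// for an idle connection to work on. It reports false when no range is
// at least minSize bytes, so the download just finishes normally.
func (p *pool) steal(minSize int64) (byteRange, bool) {
	p.mu.Lock()
	defer p.mu.Unlock()

	largest := -1
	for i, r := range p.ranges {
		if largest == -1 || r.end-r.start > p.ranges[largest].end-p.ranges[largest].start {
			largest = i
		}
	}
	if largest == -1 || p.ranges[largest].end-p.ranges[largest].start < minSize {
		return byteRange{}, false
	}

	// The owner of the largest range keeps the lower half; the idle
	// connection takes over the upper half.
	mid := (p.ranges[largest].start + p.ranges[largest].end) / 2
	stolen := byteRange{start: mid + 1, end: p.ranges[largest].end}
	p.ranges[largest].end = mid
	return stolen, true
}
```

Splitting the largest range keeps every connection busy until the very end without creating many tiny requests; a real implementation would also have to tell the range's current owner that its end point moved.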
Please use this package responsibly, because it can cause all the problems mentioned above. If you encounter any issues, feel free to create an issue.
GPLv2