ass is a self-hosted ShareX upload server written in Node.js. I initially started this project purely out of spite.
ass aims to be as unopinionated as possible. It allows nearly endless choice for users & hosts alike: Users can configure their upload settings directly from the ShareX interface (including embeds, webhooks, & more), while hosts are free to pick their preferred storage & data management methods.
By default, ass comes with a resource viewing page, which includes metadata about the resource as well as a download button & inline viewers for images, videos, & audio. It does not have a user dashboard or registration system: this is intentional! Developers are free to create their own frontends using the languages & tools they are most comfortable with. Writing & using these frontends is fully documented below, in the wiki, & in the source code.
ass was designed with developers in mind. If you are a developer & want something changed to better suit you, let me know & we'll see what we can do!
- Upload images, gifs, videos, audio, & files
- Token-based authentication
- Download & delete resources
- Fully customizable Discord embeds
- Built-in web viewer with video & audio player
- Embed images, gifs, & videos directly in Discord
- Personal upload log using customizable Discord Webhooks
- macOS/Linux support with alternative clients such as Flameshot (script for ass) & MagicCap
- Multiple URL styles
  - ZWS
  - Mixed-case alphanumeric
  - Gfycat
  - Original
- Usage metrics
- Thumbnail support
- Mimetype blocking
- Basic multi-user support
- Configurable global upload limit (per-user coming soon!)
- Custom pluggable frontends using Git Submodules
- Run locally or in a Docker container
- Multiple file storage methods
  - Local file system
  - Amazon S3 (including DigitalOcean Spaces)
- Multiple data storage methods using ass StorageEngines (JSON by default)
  - File
    - JSON (default, ass-storage-engine)
    - YAML (soon!)
  - Database
    - PostgreSQL (ass-psql)
    - Mongo (soon!)
    - MySQL (soon!)
Type | What is it? |
---|---|
Zero-width spaces | When pasted elsewhere, the URL appears to be just your domain name. Some browsers or sites may not recognize these URLs (Discord does support them) |
Mixed-case alphanumeric | The "safe" mode. URLs are browser-safe, as the character set is just letters & numbers. |
Gfycat | Gfycat-style IDs (for example: https://example.com/unsung-discrete-grub). Thanks to Gfycat for the wordlists |
Original | The "basic" mode. The URL uses the same filename the file had when it was uploaded. This may be prone to conflicts between files of the same name. |
These commands install the prerequisites (curl, git, Node.js 14, & npm) on a Debian-based system:

```bash
sudo apt update
sudo apt-get install curl
sudo apt-get install git
curl -sL https://deb.nodesource.com/setup_14.x | sudo bash -
cat /etc/apt/sources.list.d/nodesource.list
sudo apt -y install nodejs
sudo npm install -g npm@latest
```
You should have Node.js 14 or later & npm 7 or later installed.
1. Clone this repo using `git clone https://github.com/1x6/ass.git && cd ass/`
2. Run `npm i` to install the required dependencies
3. Run `npm run setup` to start the easy configuration
4. Run `npm start` to start the server. The first time you run it, you will be shown your first authorization token; save this, as you will need it to configure ShareX.
5. If it works, press `Ctrl + C` to stop the server.
Now we need to install Nginx Proxy Manager.
https://nginxproxymanager.com/guide/#quick-setup
Once it's installed, add a proxy host for each domain you want to use and request a new SSL certificate for it.
To update an existing local install:

1. Move your `uploads/` folder, `auth.json`, `config.json`, & `data.json` to another directory.
2. Delete the old install: `sudo rm -r ass`
3. Re-clone it: `git clone https://github.com/1x6/ass.git`
4. Move the config, uploads, `auth.json`, & `data.json` back into the `ass/` folder.
5. Run `npm i`
6. Run `pm2 start ass` (a sample pm2 config is sketched just below)

Alternatively, you can simply `git pull` inside the `ass/` folder instead of deleting & re-cloning.
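The `pm2 start ass` step assumes pm2 already knows about a process named `ass`. If it doesn't, one way to register it is with a pm2 ecosystem file. This is a minimal sketch (not shipped with ass); the `cwd` path is an assumption, so adjust it to wherever you cloned the repo:

```js
// ecosystem.config.js - hypothetical pm2 config, adjust the path & name to your setup
module.exports = {
    apps: [{
        name: 'ass',          // process name, so `pm2 start ass` / `pm2 restart ass` work by name
        cwd: '/home/you/ass', // assumed install location; change this
        script: 'npm',        // launch ass through npm...
        args: 'start'         // ...using its `npm start` script
    }]
};
```

Register it once with `pm2 start ecosystem.config.js`; after that, `pm2 start ass` & `pm2 restart ass` will work by name.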
For HTTPS support, you must configure a reverse proxy. I recommend Caddy but any reverse proxy should work (such as Apache or Nginx). I also have a tutorial on easily setting up Caddy as a reverse proxy server.
Steps to install with Docker & docker-compose:
You may also install ass using docker-compose. These steps assume you are already familiar with Docker, so if you're not, please read the docs. It also assumes that you have a working Docker installation with docker-compose installed.
1. Clone the ass repo using `git clone https://github.com/1x6/ass.git && cd ass/`
2. Run the install script that corresponds to your OS:
   - Linux: `./install/docker-linux.sh` (uses `#!/bin/bash`)
   - Windows: `install/docker-windows.bat` (run from Command Prompt)
   - The two scripts are identical, using the equivalent commands in each OS.
3. Work through the setup process when prompted.
The upload token will be printed at the end of the setup script prompts. This is the token that you'll need to use to upload resources to ass. It may go by too quickly to copy, so just scroll back up in your terminal after setup, or run `cat auth.json`.
You should now be able to access the ass server at `http://localhost:40115/` (ass-docker binds to host `0.0.0.0` to allow external access). You can configure a reverse proxy (for example, Caddy; also check out my tutorial) to make it accessible from the internet with automatic SSL.
The install script creates the directories & files required for docker-compose to work, then calls docker-compose to build the image & run ass. On first run, ass will detect an empty config file, so it will run the setup script in a headless terminal with no possible input. Luckily, you can use `docker-compose exec` to start your own terminal in which to run the setup script (the install scripts call this for you). After setup, the container is restarted & you are prompted to open the logs so you can confirm that the setup was successful. Each install script also has comments for every step, so you can see what's going on.
Since all 3 primary data files are bound to the container with Volumes, you can run the scripts in two ways:
```bash
# Check the usage metrics
docker-compose exec ass npm run metrics

# Use docker-compose exec to run the setup script
docker-compose exec ass npm run setup && docker-compose restart

# Run npm on the host to run the setup script (also works for metrics)
# (You will have to meet the Node.js & npm requirements on your host)
npm run setup && docker-compose restart
```
Easy! Just pull the changes & run this one-liner:
```bash
# Pull the latest version of ass
git pull

# Rebuild the container with the new changes (uncomment the 2nd part if the update requires refreshing the config)
docker-compose up --force-recreate --build -d && docker image prune -f # docker-compose exec ass npm run setup && docker-compose restart
```
- `--force-recreate` will force the container to rebuild
- `--build` will build the image from the latest changes in the directory
- `-d` will run the container in the background
- `docker image prune -f` will remove old images that are no longer used by any containers
- These descriptions were suggested by CoPilot, feel free to revise if necessary.
docker-compose exposes five volumes. These volumes let you edit the config, view the auth or data files, or view the `uploads/` folder from your host:

- `uploads/`
- `share/` (for future use)
- `config.json`
- `auth.json`
- `data.json`
- I have personally tested running with these commands (migrating from an existing local deployment!) with DigitalOcean Spaces (S3 object storage), a PostgreSQL database, & a custom frontend, all in the same container. It should also work for you, but feel free to let me know if you have any issues.
If you need to generate a new token at any time, run `npm run new-token <username>`. This will automatically load the new token, so there is no need to restart ass. The username field is optional; if left blank, a random username will be created.
In your Cloudflare DNS dashboard, make sure your domain/subdomain is set to DNS Only.
1. Add a new Custom Uploader in ShareX by going to `Destinations > Custom uploader settings...`
2. Under Uploaders, click New & name it whatever you like.
3. Set Destination type to `Image`, `Text`, & `File`
4. Request tab:
   - Method: `POST`
   - URL: `https://your.domain.name.here/`
   - Body: `Form data (multipart/form-data)`
   - File form name: `file` (literally put "`file`" in the field)
   - Headers:
     - Name: `Authorization`
     - Value: (the value provided by `npm start` on first run)
5. Response tab:
   - URL: `$json:.resource$`
   - Thumbnail: `$json:.thumbnail$`
   - Deletion URL: `$json:.delete$`
   - Error message: `$response$`
   - MagicCap users: do not include the `.` in the above (i.e. `$json:resource$`)
6. The file `sample_config.sxcu` can also be modified & imported to suit your needs.
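If you want to test your server without ShareX (or build your own upload tooling), the same request can be made from Node.js. This is a minimal sketch, not part of ass itself: it assumes the `axios` & `form-data` npm packages are installed, and `test.png`, the URL, & the token are placeholders for your own values.

```js
// upload.js - hypothetical test script, not part of the ass repo
const fs = require('fs');
const axios = require('axios');
const FormData = require('form-data');

const form = new FormData();
form.append('file', fs.createReadStream('test.png')); // the field name must be "file"

axios.post('https://your.domain.name.here/', form, {
    headers: {
        ...form.getHeaders(),              // sets the multipart/form-data boundary
        Authorization: 'your-upload-token' // the token printed on first `npm start`
    }
}).then((res) => {
    // The same fields referenced by the ShareX config above
    console.log('Resource: ', res.data.resource);
    console.log('Thumbnail:', res.data.thumbnail);
    console.log('Delete:   ', res.data.delete);
}).catch((err) => console.error(err.message));
```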
If you need to override a specific part of the config to be different from the global config, you may do so via "X" HTTP headers:
Header | Purpose |
---|---|
Domain | Override the domain returned for the clipboard (useful for multi-domain hosts) |
Access | Override the generator used for the resource URI. Must be one of: original, zws, gfycat, or random (see above) |
Gfycat | Override the length of Gfycat IDs. Defaults to 2 |
If you primarily share media on Discord, you can add these additional (optional) headers to build embeds:
Header | Purpose |
---|---|
OG-Title | Large text shown above your media |
OG-Description | Small text shown below the title but above the media (does not show up on videos yet) |
OG-Author | Small text shown above the title |
OG-Author-Url | URL to open when the Author is clicked |
OG-Provider | Smaller text shown above the author |
OG-Provider-Url | URL to open when the Provider is clicked |
OG-Color | Colour shown on the left side of the embed. Must be one of &random, &vibrant, or a hex colour value (for example: #fe3c29). Random is a randomly generated hex value & Vibrant is sourced from the image itself |
You can insert certain metadata into your embeds with these placeholders:
Placeholder | Result |
---|---|
&size | The file's size with proper notation, rounded to two decimals (example: 7.06 KB) |
&filename | The original filename of the uploaded file |
&timestamp | The timestamp of when the file was uploaded (example: Oct 14, 1983, 1:30 PM) |
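To illustrate how the embed headers & placeholders combine, here is an example set of header values, written as a plain JavaScript object for readability. The header names are taken from the tables above; the values are made-up examples, so adjust them to taste:

```js
// Hypothetical embed header values for your ShareX config (names taken from the tables above)
const embedHeaders = {
    'OG-Title': '&filename (&size)',         // e.g. "photo.png (7.06 KB)"
    'OG-Description': 'Uploaded &timestamp', // e.g. "Uploaded Oct 14, 1983, 1:30 PM"
    'OG-Author': 'your-name',
    'OG-Color': '&vibrant'                   // or '&random', or a hex value like '#fe3c29'
};
```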
You may use Discord webhooks as an easy way to keep track of your uploads. The first step is to create a new Webhook. You only need to follow the first section, Making a Webhook. Once you have done that, click Copy Webhook URL. Next, paste your URL into a text editor and extract these two values from the URL:
```
https://discord.com/api/webhooks/12345678910/T0kEn0fw3Bh00K
                                 ^^^^^^^^^^^ ^^^^^^^^^^^^^^
                                 Webhook ID  Webhook Token
```
Once you have these, add the following HTTP headers to your ShareX config:
Header | Purpose |
---|---|
Webhook-Client | The Webhook ID |
Webhook-Token | The Webhook Token |
Webhook-Username | (Optional) The "username" of the Webhook; can be set to whatever you want |
Webhook-Avatar | (Optional) URL to an image to use as the Webhook avatar. Use the full URL, including https:// |
Webhooks will show the filename, mimetype, size, upload timestamp, thumbnail, & a link to delete the file. To disable webhooks, simply remove the headers from your config.
ass is intended to provide a strong backend for developers to build their own frontends around. The easiest way to do this is with a Git Submodule. Your submodule should be a separate git repo; make sure you adjust `FRONTEND_NAME` to match your frontend. To make updates easier, it is recommended to make a new branch. Since submodules are their own dedicated projects, you are free to build the router however you wish, as long as it exports the required items detailed below.
Sample submodule entry file:
```js
const { name, version } = require('./package.json');
const express = require('express');
const router = express.Router();

router.all('/', (_req, res) => res.send('My awesome dashboard!'));

// These exports are REQUIRED by ass, so don't forget to set them!
module.exports = {
    router,                       // The dashboard router itself
    enabled: true,                // Required to activate the frontend in ass; DO NOT change unless you want to disable your frontend
    brand: `${name} v${version}`, // Printed in ass logs & reported to the client. Can be changed to your liking
    endpoint: '/dashboard'        // URL to use for your dashboard router. ass will automatically set up Express to use this value. Can be changed to your liking
};
```
Now you should see `My awesome dashboard!` when you navigate to `http://your-ass-url/dashboard`.
If you want to access resource & user data within your frontend router, just add these two lines near the top of your router:
```js
const users = require('../auth');
const data = require('../data');
```
These values are recognized globally throughout ass, so they will stay up-to-date as users upload.
By default, ass directs the app index to this README. To change it, just add an `index` function to your router exports:
```js
function index(req, res, next) {
    // Redirect the user to the dashboard
    res.redirect('/dashboard/user');
    // You can also use req & next as you normally
    // would in an Express route handler
}

module.exports = {
    router,
    index,
    enabled: true,
    brand: `${name} v${version}`,
    endpoint: '/dashboard',
};
```
For a detailed walkthrough on developing your first frontend, consult the wiki.
StorageEngines are responsible for managing your data. "Data" has two parts: an identifier & the actual data itself. With ass, the data is a JSON object representing the uploaded resource. The identifier is the unique ID in the URL returned to the user on upload.
Supported StorageEngines:
Name | Description | Links |
---|---|---|
JSON | JSON-based data storage. On disk, data is stored in a JSON file. In memory, data is stored in a Map. This is the default StorageEngine. | GitHub, npm |
PostgreSQL | Data storage using a PostgreSQL database. node-postgres is used for communicating with the database. | GitHub, npm |
An ass StorageEngine implements support for one type of database (or file, such as JSON or YAML). This lets ass server hosts pick their database of choice: all they have to do is plug in the connection/authentication details, & ass handles the rest, using the resource ID as the key.
The only StorageEngine ass comes with by default is JSON. If you find (or create!) a StorageEngine you like, you can use it by installing it with `npm i <package-name>`, then changing the contents of `data.js`. The StorageEngine's own README should also explain how to use it. At this time, a modified `data.js` might look like this:
```js
/**
 * Used for global data management
 */

//const { JsonStorageEngine } = require('@tycrek/ass-storage-engine');
const { CustomStorageEngine } = require('my-custom-ass-storage-engine');

//const data = new JsonStorageEngine();

// StorageEngines may take no parameters...
const data1 = new CustomStorageEngine();

// multiple parameters...
const data2 = new CustomStorageEngine('Parameters!!', 420);

// or object-based parameters, depending on what the StorageEngine dev decides on.
const data3 = new CustomStorageEngine({ key1: 'value1', key2: { key3: 44 } });

module.exports = data1;
```
As long as the StorageEngine properly implements the `GET`/`PUT`/`DEL`/`HAS` StorageFunctions, replacing the file/database system is just that easy.
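To make the `GET`/`PUT`/`DEL`/`HAS` contract concrete, here is a rough, Map-backed sketch of the shape such an engine takes. Note that this does not use the real ass-storage-engine base classes (see its README for the actual API); the names & signatures here are illustrative only.

```js
// my-custom-ass-storage-engine/index.js - illustrative only, NOT the real ass-storage-engine API
class CustomStorageEngine {
    constructor() {
        // In a real engine this would be a database client or file handle;
        // a Map keeps the sketch self-contained.
        this.store = new Map();
    }

    // GET: look up the resource data for an ID (the unique ID from the upload URL)
    get(resourceId) {
        return Promise.resolve(this.store.get(resourceId));
    }

    // PUT: save the JSON object describing an uploaded resource under its ID
    put(resourceId, resourceData) {
        this.store.set(resourceId, resourceData);
        return Promise.resolve();
    }

    // DEL: remove a resource (e.g. when its deletion URL is used)
    del(resourceId) {
        this.store.delete(resourceId);
        return Promise.resolve();
    }

    // HAS: check whether an ID already exists
    has(resourceId) {
        return Promise.resolve(this.store.has(resourceId));
    }
}

module.exports = { CustomStorageEngine };
```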
For a detailed walkthrough on developing StorageEngines, consult the wiki.
Because I was dumb & didn't know what to call it, totally forgetting that "storage engine" would also imply a way to store files, not just data.
ass has a number of pre-made npm scripts for you to use. All of these scripts should be run using `npm run <script-name>`.
Script | Description |
---|---|
start | Starts the ass server. This is the default script & is run with `npm start`. |
setup | Starts the easy setup process. Should be run once after installing ass, & also after any updates that introduce new configuration options. |
metrics | Runs the metrics script. This is a simple script that outputs basic resource statistics. |
new-token | Generates a new API token. Accepts one parameter for specifying a username, like `npm run new-token <username>`. ass automatically detects the new token & reloads it, so there's no need to restart the server. |
update | Runs update tasks. These will update ass to the latest version by first pulling changes with `git pull`, then running `npm i` to install any new dependencies. This is the recommended way to update ass. After updating, you will need to restart ass. |
update-full | Runs the previous update script, followed by `npm run setup` to ensure that all the latest configuration options are set. The setup script uses your existing config for setting defaults, to make updates much quicker. If any ass Release Notes say to use update-full instead of update, then use update-full. |
restart | Restarts the ass server using `systemctl`. More info soon (should work fine if you have an existing ass.service file) |
engine-check | Ensures your environment meets the minimum Node & npm version requirements. |
logs | Uses the tlog Socket plugin to stream logs from the ass server to your terminal, with full colour support (remember to set `FORCE_COLOR` if you're using systemd) |
docker-logs | Alias for `docker-compose logs -f --tail=50 --no-log-prefix ass` |
Use this script, kindly provided by @ToxicAven. For the `KEY`, put your token.
No strict contributing rules at this time. I appreciate any Issues or Pull Requests.
- GitHub CoPilot... seriously, this thing is good.
- Special thanks to hlsl#1359 for the awesome logo!
- @ToxicAven for the Flameshot script
- Gfycat for their wordlists