A Docker container that provides an HTTP upload interface with support for large files (up to 10GB), real-time progress tracking, upload speed monitoring, and resume functionality.
- ✅ Large File Support: Upload files up to 10GB
- ✅ Resume Capability: Resume interrupted uploads
- ✅ Real-time Progress: Live progress bar with percentage
- ✅ Speed Monitoring: Real-time upload speed display
- ✅ Drag & Drop: Modern drag-and-drop interface
- ✅ Long Timeouts: Extended HTTP timeouts for large files
- ✅ Chunked Upload: Efficient chunked upload mechanism
- ✅ File Management: View and download uploaded files
- ✅ Responsive Design: Works on desktop and mobile
Quick start with Docker Compose:

- Clone or download the project files
- Run the container:
  ```bash
  docker-compose up -d
  ```

- Open your browser and navigate to http://localhost:3000
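The project is expected to ship its own compose file; if you need to recreate it, a minimal `docker-compose.yml` along these lines should work. The service name and restart policy are assumptions; the port and volume mappings mirror the `docker run` command below.

```yaml
# Minimal sketch of a docker-compose.yml; the project's bundled file may differ.
services:
  upload-server:
    build: .                    # build from the project's Dockerfile
    ports:
      - "3000:3000"             # HTTP upload interface
    volumes:
      - ./uploads:/app/uploads  # persistent storage for completed files
      - ./temp:/app/temp        # temporary chunk storage during uploads
    restart: unless-stopped
```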
Alternatively, build and run the image with Docker directly:

```bash
# Build the image
docker build -t upload-server .

# Run the container
docker run -d \
  -p 3000:3000 \
  -v $(pwd)/uploads:/app/uploads \
  -v $(pwd)/temp:/app/temp \
  --name upload-server \
  upload-server
```

To run the server locally without Docker:

```bash
# Install dependencies
npm install

# Start the server
npm start
```

To upload a file through the web interface:

- Select File: Click the upload area or drag and drop a file
- Start Upload: Click "Start Upload" to begin uploading
- Monitor Progress: Watch real-time progress, speed, and time remaining
- Pause/Resume: Use pause and resume buttons as needed
- Download: Access uploaded files from the files list
The server exposes an HTTP API for chunked uploads.

```
POST /api/upload/init
Content-Type: application/json

{
  "filename": "example.zip",
  "fileSize": 1073741824,
  "chunkSize": 1048576
}
```

```
POST /api/upload/chunk
Content-Type: multipart/form-data
```

Form fields:

- chunk: (file data)
- uploadId: (session ID)
- chunkIndex: (chunk number)
- chunkSize: (chunk size in bytes)

Additional endpoints:

- `GET /api/upload/status/{uploadId}`: upload progress for a session
- `GET /api/upload/resume/{uploadId}`: resume information for an interrupted upload
- `GET /api/files`: list uploaded files
- `GET /api/download/{filename}`: download a completed file

Environment variables:

- PORT: Server port (default: 3000)
- NODE_ENV: Environment mode (production/development)
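The init and chunk endpoints above can be driven from the browser roughly as follows. This is a minimal sketch: the `uploadId` response field is an assumption about the server's reply, and error handling and retries are omitted.

```javascript
// Hypothetical client-side sketch of the init + chunk upload flow.
// Field names follow the request shapes above.
async function uploadFile(file) {
  const chunkSize = 1048576; // 1 MB, the documented default

  // 1. Initialize an upload session with the file metadata
  const initRes = await fetch('/api/upload/init', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ filename: file.name, fileSize: file.size, chunkSize }),
  });
  const { uploadId } = await initRes.json(); // assumed response field

  // 2. Upload each chunk sequentially as multipart/form-data
  const totalChunks = Math.ceil(file.size / chunkSize);
  for (let chunkIndex = 0; chunkIndex < totalChunks; chunkIndex++) {
    const chunk = file.slice(chunkIndex * chunkSize, (chunkIndex + 1) * chunkSize);
    const form = new FormData();
    form.append('chunk', chunk, file.name);
    form.append('uploadId', uploadId);
    form.append('chunkIndex', String(chunkIndex));
    form.append('chunkSize', String(chunk.size));
    await fetch('/api/upload/chunk', { method: 'POST', body: form });
  }
}
```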
Default limits and timeouts:

- Maximum file size: 10GB
- Chunk size: 1MB (configurable)
- Session timeout: 1 hour of inactivity
- HTTP request timeout: 1 hour
- HTTP response timeout: 1 hour
The server uses a chunked upload mechanism:
- Initialization: Client requests upload session with file metadata
- Chunking: File is split into 1MB chunks on the client side
- Upload: Each chunk is uploaded sequentially with progress tracking
- Resume: If interrupted, client can resume from the last uploaded chunk
- Completion: Once all chunks are uploaded, file is moved to final location
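A resume-aware client can combine the resume endpoint with the chunk upload loop. The sketch below assumes the resume response includes a `lastChunkIndex` field; the actual field names may differ.

```javascript
// Hypothetical sketch of resuming an interrupted upload from the last
// chunk the server reports having received.
async function resumeUpload(file, uploadId, chunkSize = 1048576) {
  // Ask the server which chunk was stored last for this session
  const res = await fetch(`/api/upload/resume/${uploadId}`);
  const { lastChunkIndex } = await res.json(); // assumed field name

  const totalChunks = Math.ceil(file.size / chunkSize);
  // Continue with the first chunk the server does not have yet
  for (let chunkIndex = lastChunkIndex + 1; chunkIndex < totalChunks; chunkIndex++) {
    const chunk = file.slice(chunkIndex * chunkSize, (chunkIndex + 1) * chunkSize);
    const form = new FormData();
    form.append('chunk', chunk, file.name);
    form.append('uploadId', uploadId);
    form.append('chunkIndex', String(chunkIndex));
    form.append('chunkSize', String(chunk.size));
    await fetch('/api/upload/chunk', { method: 'POST', body: form });
  }
}
```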
The container uses two volume mounts:

- Uploads: /app/uploads (persistent storage for completed files)
- Temporary: /app/temp (temporary storage during the upload process)
Supported browsers:

- Chrome 60+
- Firefox 55+
- Safari 12+
- Edge 79+
Troubleshooting:

- If an upload fails:
  - Check the file size (must be ≤ 10GB)
  - Ensure a stable internet connection
  - Try resuming the upload
- If uploads are slow:
  - Check the network connection
  - Consider reducing the chunk size for unstable connections
  - Monitor server resources
- If an upload cannot be resumed:
  - Ensure the upload session hasn't expired (1 hour limit)
  - Check that the temporary files still exist on the server
Security considerations:

- Files are stored in a designated upload directory
- No file execution permissions
- Session-based upload tracking
- Automatic cleanup of expired sessions
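On the server side, the automatic session cleanup could be implemented with a periodic sweep along these lines. This is only a sketch: the in-memory `sessions` map, its `lastActivity` field, and the per-session temp directory layout are assumptions, not the project's actual code.

```javascript
// Hypothetical cleanup job: drop upload sessions idle for more than 1 hour
// and delete their partial chunks from the temp directory.
const fs = require('fs/promises');
const path = require('path');

const SESSION_TIMEOUT_MS = 60 * 60 * 1000; // 1 hour, per the documented limit
const TEMP_DIR = '/app/temp';
const sessions = new Map(); // uploadId -> { lastActivity, ... } (assumed structure)

setInterval(async () => {
  const now = Date.now();
  for (const [uploadId, session] of sessions) {
    if (now - session.lastActivity > SESSION_TIMEOUT_MS) {
      sessions.delete(uploadId);
      // Remove any partially uploaded chunks for the expired session
      await fs.rm(path.join(TEMP_DIR, uploadId), { recursive: true, force: true });
    }
  }
}, 10 * 60 * 1000); // sweep every 10 minutes
```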
Performance tips:

- Chunk Size: Default 1MB works well for most connections
- Concurrent Uploads: One upload per session to avoid conflicts
- Storage: Use fast storage (SSD) for better performance
- Network: Stable connection recommended for large files
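For flaky connections, a smaller chunk size can be requested when the upload session is initialized, assuming the server honors the `chunkSize` field from the init request shown earlier:

```javascript
// Hypothetical: request 256 KB chunks instead of the default 1 MB
await fetch('/api/upload/init', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    filename: 'example.zip',
    fileSize: 1073741824,
    chunkSize: 262144, // 256 KB for unstable connections
  }),
});
```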
For development:

```bash
# Install dependencies
npm install

# Start in development mode
npm run dev

# Build Docker image
docker build -t upload-server .
```

MIT License - feel free to use and modify as needed.