Vidmap

Another attempt at videos on Minecraft maps

Full disclosure: I am also the author of the original.

What is Vidmap?

A proof-of-concept implementation of streaming video to Minecraft maps, with the ability to play audio! Streaming video to maps is not practical, due to the massive bandwidth and throughput required. The audio half of this project might be a good starting point for building a "radio" in-game, though.

Videos!? On maps!?

Yes. We utilize FFmpeg's libraries to decode video frame by frame, mainly because FFmpeg offers high performance, support for many codecs, and relative ease of use ("relative" in that FFmpeg's documentation is sparse and lacking in several areas). Frames are automagically dithered by FFmpeg.

We can then exploit the fact that our video frames' color space will be 16-bit (ARGB4444) thanks to swscale, so the pixel data can be used directly to index into a small 4 KB lookup table. This lookup table is generated outside of Vidmap (see: lut-main.zig and color.zig). Each entry was generated by comparing every supported Minecraft map color against every possible 16-bit pixel color, by way of converting the RGB values to XYZ and then XYZ to CIELab. The CIELab values are then compared using CIE's Delta E (specifically, the 2000 variant) for as much accuracy as possible (though, as mentioned in the comments, the conversion process may ironically cost some accuracy).
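As a rough illustration of the playback-side lookup (not Vidmap's actual code, which lives on the Zig side), here is a minimal Java sketch. It assumes the table is indexed by the 12 RGB bits of each ARGB4444 pixel, which is what a 4 KB, one-byte-per-entry table implies; the names MapColorizer, MAP_LUT, and mapColorsFor are hypothetical.

```java
// Hypothetical sketch: convert one decoded ARGB4444 frame into
// Minecraft map color bytes via a 4096-entry (4 KB) lookup table.
public final class MapColorizer {
    // One byte per possible 12-bit RGB value, filled from the table
    // generated by lut-main.zig (contents omitted here).
    private static final byte[] MAP_LUT = new byte[4096];

    /** Maps each ARGB4444 pixel to its nearest Minecraft map color. */
    public static byte[] mapColorsFor(short[] argb4444Pixels) {
        byte[] out = new byte[argb4444Pixels.length];
        for (int i = 0; i < argb4444Pixels.length; i++) {
            // Drop the 4 alpha bits; the remaining 12 RGB bits are the index.
            int index = argb4444Pixels[i] & 0x0FFF;
            out[i] = MAP_LUT[index];
        }
        return out;
    }
}
```

The payoff of this design is that playback does no color math at all - every pixel becomes a single masked array read, with the expensive XYZ/CIELab/Delta E work paid for once, offline.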

Audios!? Custom audio!?

Yup, we got audio too. Extremely large disclaimer: the audio and video make no attempt to synchronize, because we cannot arbitrarily send audio data to the client. Audio playback is achieved in two steps:

  • Extract the audio from the container using FFmpeg and re-encode the audio into Ogg Vorbis.
  • Generate a resource pack at runtime and serve the resource pack to the player.

By doing this, we can play back audio that corresponds to the video being played, with some caveats (such as the inability to synchronize).
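As a sketch of the second step (assuming a pack zip has already been generated), the JDK's built-in com.sun.net.httpserver can serve the file and Bukkit's Player.setResourcePack can prompt the client to install it. The port, path, and class name here are made up for illustration:

```java
import com.sun.net.httpserver.HttpServer;
import org.bukkit.entity.Player;

import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.file.Files;
import java.nio.file.Path;

public final class PackServer {
    /** Serves the generated pack and prompts the player to install it. */
    public static void serve(Path packZip, Player player) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8192), 0);
        server.createContext("/pack.zip", exchange -> {
            byte[] bytes = Files.readAllBytes(packZip);
            exchange.sendResponseHeaders(200, bytes.length);
            try (OutputStream body = exchange.getResponseBody()) {
                body.write(bytes);
            }
        });
        server.start();
        // The client downloads the pack and asks the player to accept it.
        player.setResourcePack("http://<server-address>:8192/pack.zip");
    }
}
```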

What's different between this and the original?

  • No longer needs to generate a GIF of the entire video and then read each GIF frame one at a time - we just read and decode straight from the video file.
  • Since we read and decode directly from the video with FFmpeg, we don't need to use Sun's internal GIF decoder.
  • Scaling past 1 map is now possible, so we can achieve larger displays without too much of a performance penalty (the old version could scale past 1 map, but had terrible performance).
  • No longer bound by a hardcoded, fixed framerate - Vidmap matches the source video, so 60FPS videos will attempt to play at 60FPS and 24FPS videos at 24FPS.
  • A downside is that we can no longer hook things like Java-based GameBoy emulators directly up to this version, as we now rely on FFmpeg to do the video-decoding legwork.

Tools and Libraries

This is a dual-language project. Since Bukkit is Java-based, I stuck with implementing the bridge and plugin in Java. The other half is written in Zig, a super cool language that does Great Justice(tm) to the problems of C. I highly recommend you check it out - this project would have been much buggier and more annoying to write if the library had been written in C/C++. To link the two worlds together, we use the bog-standard JNI interface for loading and executing Zig code from Java.
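For the curious, the Java half of a JNI bridge like this looks roughly as follows; the class and method names are hypothetical stand-ins for the real bindings:

```java
// Hypothetical sketch of the Java side of the JNI bridge; the real
// bindings in the plugin will differ in names and signatures.
public final class NativeMap {
    static {
        // Loads nativemap.dll (or libnativemap.so) from java.library.path.
        System.loadLibrary("nativemap");
    }

    // Declarations only - the Zig library exports the implementations
    // under JNI-mangled names such as Java_NativeMap_openVideo.
    public static native long openVideo(String path);
    public static native byte[] nextFrame(long handle);
    public static native void closeVideo(long handle);
}
```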

As for libraries, I currently use the aforementioned FFmpeg libraries (avcodec, avformat, avutil, swresample, swscale) and Zig's standard library, along with some self-written mini-libraries (while they're part of the source tree, most of the files in the Zig section can be used elsewhere - they're in no way tightly coupled). We also utilize the internal Sun HTTP server for serving resource packs - however, I'd imagine it'd be relatively easy to drop in a replacement if needed.

Optionally, SDL2 is used to "debug" videos by way of emulating the Minecraft map colorspace (read frame -> match color -> translate the Minecraft color back to an ARGB4444 value -> update the SDL2 texture).

Build instructions

Quick, dirty build instructions:

  • Throw a shared-library build of FFmpeg into nativemap/ffmpeg so we can import its headers and link against its libraries.
  • Throw JNI headers into nativemap/jni.
  • Throw SDL2 into nativemap/sdl2.
  • Use zig build to build the project. This will produce the library under zig-out/lib and two binaries under zig-out/bin - one is our LUT generator and the other is the SDL2-based debug player.
  • If you want to generate your own lookup table, you'll need to invoke the generator manually. It's named "lut-gen.exe" and resides in zig-out/bin.
  • If you want to use the SDL2 debug player, invoke it like so: sdl-debug-player <your video file here> (or just drag an appropriate video file onto it). Note: it will not play back audio, but it will output the extracted and re-encoded audio as test.ogg. The SDL2 texture is internally 720p - resizing the window does not change this.
  • Head into zig-out/lib and copy nativemap.dll + FFmpeg's DLLs into the root of your server (where your server .jar resides).
  • Do the usual song and dance to package the Java plugin component.
  • Set online-mode to false and network-compression-threshold to -1 (see the snippet after this list, and the limitations section for why we do this).
  • Start up Minecraft and log into your server.
  • The command to prepare a video is /setup-video <width> <height> <file>, where width and height describe how big your map display is in maps (each map is 128x128 pixels - if you want a 128x128 display, then width=1, height=1).
  • You will get a prompt to install a resource pack - this resource pack contains the music extracted from the video.
  • You can then start the video (and audio) with /start-video.
  • You can stop a running video with /stop-video. It will not stop the audio, though.
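For reference, the two settings from the list above live in server.properties; a minimal sketch of the relevant lines:

```properties
# Required by Vidmap - see the limitations section for why.
online-mode=false
network-compression-threshold=-1
```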

These instructions are deliberately a little vague to discourage use on servers that are not suited to running this plugin. Please do not use this on a live server.

Limitations

  • In order to run anything above a 2 x 2 "display", you must set online-mode to false. This is primarily due to how Minecraft handles online mode - with it enabled, not only is the client validated on connect, but Minecraft also begins encrypting and decrypting packets. This can become an extreme bottleneck, as some processors lack AES-NI instructions.
  • In order for the client to achieve a stable framerate, you must turn off compression on the server; otherwise the client stutters frequently while decompressing our map data multiple times per second.
  • Autodithering is done by FFmpeg; however, I cannot find a way to turn off dithering when using ARGB4444. swscale's source code references the "sws_dither" AVOption, but setting it seems to have no effect. Known bug (or "wish", apparently): https://trac.ffmpeg.org/ticket/4614
  • Streaming video is incredibly taxing on the network - as a back-of-the-envelope figure, each map is 16 KiB of color data per frame, so a 30 x 17 display at 30FPS needs roughly 510 x 16 KiB x 30 ≈ 250 MB/s uncompressed. I would strongly recommend against running this for anything more than seeing it for yourself. It is also incredibly taxing on the client to update multiple maps multiple times per second. If I had to set a "limit" on whom you can show this to, I would recommend using it only within the confines of your own network.
  • As mentioned before, audio and video cannot be synchronized. One possibility is to divide the audio into multiple one-second chunks and then play one clip back every second; with this, you could synchronize to within a second at least (see the sketch after this list).
  • Very large in-game map "displays" will cause performance issues for the client (basically: you can unintentionally DoS a client). How large "very large" is depends on your computer. I tested a very large display (30 x 17 maps, or 4K equivalent) on three different configurations, using the Bad Apple music video (30FPS) as footage.
    • AMD Ryzen 3600X + NVIDIA GTX 980: the client chokes completely and cuts out the audio within a couple of seconds, then enters a cycle of responding and suddenly not responding.
    • AMD Ryzen 5950X + NVIDIA RTX 3090: the client is actually "usable" in the sense that you can move around. Framerate fluctuates anywhere from 7FPS to 19FPS. Note that the server was running on this same machine, so some resources were being shared.
    • Intel Xeon Silver 4110 server + 5950X/3090 client: the client is still "usable" - better framerate than running the server on the same machine, but this quickly turns into a network bottleneck. Yes, 4K-equivalent displays will fully saturate a gigabit connection. Turning on compression reduces the bandwidth usage from gigabit down to around 12 Mbps, but the client becomes a stuttery mess (it reports 72FPS but constantly dips down to 6FPS).
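To make the chunked-audio idea from the synchronization bullet concrete, here is a minimal Bukkit sketch. It assumes the audio has already been split into one-second Ogg clips registered in the resource pack under hypothetical sound keys (vidmap.chunk0, vidmap.chunk1, and so on):

```java
import org.bukkit.Bukkit;
import org.bukkit.entity.Player;
import org.bukkit.plugin.Plugin;

public final class ChunkedAudio {
    /** Schedules one-second clips back to back, 20 ticks apart. */
    public static void play(Plugin plugin, Player player, int totalChunks) {
        for (int i = 0; i < totalChunks; i++) {
            final int chunk = i;
            // 20 server ticks ~= 1 second at a healthy TPS.
            Bukkit.getScheduler().runTaskLater(plugin, () ->
                    player.playSound(player.getLocation(),
                            "vidmap.chunk" + chunk, 1.0f, 1.0f),
                    20L * chunk);
        }
    }
}
```

Since each clip starts on a server tick, drift can accumulate for at most one second before the next clip re-anchors playback.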

Partial Solutions

Check out my Fabric-based mod: UnsafeMaps. This mod fixes some inefficiencies in how Minecraft handles incoming map data and allows some computers to run 4K-equivalent displays with absolutely no problems. Unfortunately, it will not let you run videos on 8K-equivalent displays :( I've been trying to optimize enough for 8K displays to run at at least 24FPS (even 10FPS would be nice), but there's far too much data to handle. Perhaps with CUDA/OpenCL it would be possible.
