
Smaller Map Size? #312

Open
Vereor opened this issue Apr 2, 2011 · 14 comments

Comments

Vereor commented Apr 2, 2011

Hi, I have been using Overviewer for a while now for our little server and it has been great.
But I have always thought that there was a disproportionate relationship between the world size and the resulting map.

When our world was about 50 MB, the resulting map would be around 300 MB. Now that our world has grown to just over 200 MB, the resulting map is around 1.2 GB. I know the maps are made out of many sections and layers; is there any chance that some of this is not needed, or could we even skip every second zoom level to try to reduce the map data needed?

agrif (Member) commented Apr 3, 2011

Have you tried using --imgformat=jpg? If you haven't, this should reduce the size of your map considerably, at the cost of a little image quality. You'd need to rerender your map from scratch, though.

I'm no expert on the Google Maps API, but I'm pretty sure all of those layers are required. In the development branch, the chunk-level cache was eliminated, which saves some hard drive space. The size of the map directory doesn't change, though.

Being able to decrease (or increase!) the size of each block on the map has been a personal long-term goal, and would let you render a less-detailed and smaller map, but unfortunately it's a little complicated to implement.
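The savings from --imgformat=jpg can be sanity-checked with a few lines of Pillow; the noisy image below is just a stand-in for a detailed map tile, and quality=85 is an arbitrary choice, not Overviewer's default:

```python
import random
from io import BytesIO
from PIL import Image

rng = random.Random(0)
# noisy stand-in for a detailed, high-entropy map tile
tile = Image.new("RGB", (256, 256))
tile.putdata([(rng.randrange(256), rng.randrange(256), rng.randrange(256))
              for _ in range(256 * 256)])

png_buf, jpg_buf = BytesIO(), BytesIO()
tile.save(png_buf, "PNG")
tile.save(jpg_buf, "JPEG", quality=85)
png_size, jpg_size = png_buf.tell(), jpg_buf.tell()
```

For busy terrain tiles the JPEG is usually a fraction of the PNG size; for flat, uniform tiles PNG can still win, so the savings depend on the map.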

Vereor (Author) commented Apr 3, 2011

Thanks for the tip, agrif. I render from scratch every time, hoping it will help keep the file size down a little... lol. I did think about going the jpg route, but I really wasn't keen on compression artifacts. Still, I'll give it a go. At the size I'm at now, pngout isn't really making a big enough difference; it's time to make some sacrifices ;o)

An option to output 8-bit PNGs instead of 32-bit ones would be nice, though, and would save a lot of overhead. It would still look pretty good (better than jpg) for most textures.
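The 8-bit idea maps directly onto Pillow's quantize; a minimal sketch (note that real Overviewer tiles are RGBA, and palette PNGs handle alpha awkwardly, which is part of why this isn't a free win):

```python
from io import BytesIO
from PIL import Image

# fake truecolor tile with a gradient, standing in for a rendered tile
tile = Image.new("RGB", (128, 128))
tile.putdata([((x * 2) % 256, (y * 2) % 256, 128)
              for y in range(128) for x in range(128)])

paletted = tile.quantize(colors=256)   # convert to an 8-bit palette image

full, small = BytesIO(), BytesIO()
tile.save(full, "PNG")       # truecolor PNG
paletted.save(small, "PNG")  # PNG-8, one byte per pixel plus a palette
```

Whether the paletted file ends up smaller depends on the tile's content; quantization also introduces banding on smooth gradients.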

pironic (Member) commented Apr 28, 2011

@agrif Although the payoff for space might be minimal, we could potentially render every zoom level as we do now but skip outputting every second one. If we rendered 8 levels, we would only output 4 of them and tell the API to load only those 4. It could save disk space, but logically we would still need to render each level even if we don't output it.

agrif (Member) commented Apr 28, 2011

We could just make the number of tiles used for each zoom level configurable. Right now it's 2x2 (4 tiles per zoomed-out tile), but we could change it to NxN. The only problem I can think of: does the Google Maps API support non-quadtree-based tiles? Can you tell it that each layer corresponds to two zoom levels, not one?

At least in Chrome, Google Maps zoom levels look to be fixed at powers of two (zooming out means seeing 4 times the area), so this could be a problem.
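For reference, the 2x2 step under discussion is simple: each zoomed-out tile is four child tiles shrunk by half and pasted together. A sketch with Pillow (384x384 is Overviewer's tile size; an NxN variant would just change the grid bounds and the shrink factor):

```python
from PIL import Image

TILE = 384  # Overviewer's tile size in pixels

def compose_parent(children):
    """children: dict mapping (col, row) in a 2x2 grid to child tile Images."""
    parent = Image.new("RGBA", (TILE, TILE))
    for (col, row), child in children.items():
        quadrant = child.resize((TILE // 2, TILE // 2))   # shrink by half
        parent.paste(quadrant, (col * TILE // 2, row * TILE // 2))
    return parent

# four solid-color children, one per quadrant
kids = {(c, r): Image.new("RGBA", (TILE, TILE), (r * 80, c * 80, 0, 255))
        for c in range(2) for r in range(2)}
parent = compose_parent(kids)
```

This is why every extra zoom-out level exists: the API expects a full pyramid of these parents all the way up to one tile covering the whole map.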

pironic (Member) commented Apr 29, 2011

What if we didn't tell the API we skipped a level? What if it thinks all the levels are there? We ultimately determine what zoom percentage each level corresponds to; if we wanted, we could make the first 3 zoom levels really, really slow and the next ones fast. The API wouldn't know the difference.

agrif (Member) commented Apr 29, 2011

It does know the difference, unfortunately. I should have been clearer about that bit regarding Chrome: it actually animates the transitions between zoom levels, and the zoom in/out animation uses a factor of two. You can still render the tiles differently, but at least on Chrome it'll look really funky.

There might be an API call to change this though.

@dliverman

Many open source web-mapping systems generate tiles on-the-fly and use a cache system. This way there is a small storage requirement and a higher cpu/ram requirement.

Could this be implemented here? I haven't fully examined the render pipeline.
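For what it's worth, the shape of such a cache is simple; in this hypothetical sketch, render_tile is a stand-in for the actual renderer, and a web server would just call get_tile on each request:

```python
import os
import tempfile

def render_tile(zoom, x, y):
    # hypothetical: a real implementation would invoke the renderer here
    return b"tile-%d-%d-%d" % (zoom, x, y)

def get_tile(cache_dir, zoom, x, y):
    path = os.path.join(cache_dir, "%d_%d_%d.png" % (zoom, x, y))
    if not os.path.exists(path):          # cache miss: render once, store
        with open(path, "wb") as f:
            f.write(render_tile(zoom, x, y))
    with open(path, "rb") as f:           # cache hit: serve from disk
        return f.read()

cache = tempfile.mkdtemp()
first = get_tile(cache, 3, 1, 2)
second = get_tile(cache, 3, 1, 2)   # second call reads the cached file
```

The trade-off dliverman describes is exactly this: the cache directory only holds tiles someone has actually viewed, at the cost of CPU on first view.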

pironic (Member) commented May 11, 2011

@dliverman Right now, Overviewer only generates the tiles. For Overviewer to generate them on demand, it would also need to be the web host, or be tied directly into the host, so that when a client requests a tile, Overviewer generates it. That would be a lot more work and, frankly, given our current cross-platform situation, infeasible. It's a nice idea, but it wouldn't work with Overviewer.

Fenixin (Member) commented May 13, 2011

An idea on this: why not change the render pipeline to be independent of the block texture size? At the moment we use 24x24-pixel textures. Once they're generated, it's really easy to scale all the textures down, and rendering with smaller textures will generate smaller maps. We could also generate bigger textures (48x48) to render more detailed maps, and then scale them down for less detailed ones.

For this we could add an option called --scale and use it to resize the block texture images as texture_size / 2**scale, so scale = 0 means the maximum size.

What do you think? I haven't looked at the render pipeline in a long time; maybe this is a hard job, I have no idea.
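The proposed texture_size / 2**scale rule is a one-liner with Pillow; note that --scale is the option being proposed here, not something Overviewer already has:

```python
from PIL import Image

BASE = 24  # current block texture size in pixels

def scaled_texture(texture, scale):
    """Apply the proposed rule: size = texture_size / 2**scale."""
    size = BASE // 2 ** scale   # scale = 0 keeps the full 24x24 texture
    return texture.resize((size, size))

block = Image.new("RGBA", (BASE, BASE), (120, 80, 40, 255))
half = scaled_texture(block, 1)   # 12x12 texture for a half-size map
```

Restricting scale to values where BASE stays an even integer (as suggested below) keeps the isometric math clean.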

agrif (Member) commented Jun 4, 2011

@Fenixin, something like this has been an idea for a long time (see issue #45), but this is the first reasonable method I've heard of getting it done. It'd be a huge change, but not an impossible one.

Fenixin (Member) commented Jun 5, 2011

Mmmh... maybe it's better to just ask for a --texture-size, and limit it to an even integer (so the maths are beautiful in the render process).

I'm thinking of generating the textures from real 3D; this change sounds like a good moment to do that, so you could render the textures at a big size (and maybe read the 3D models from Minecraft). I realize this should be done without any graphics hardware, so servers don't need to run X or whatever. I've never done anything like this; can anybody point me in the right direction in the Python world? (A beautiful module?)

@acertain (Contributor)

imgopt (and tools like it that run your images through popular image optimizers) does wonders; maybe build in any good Python image optimizers?

@TimoDinnesen

This request needs to be revisited. As it stands, Overviewer is nearly useless for anything other than generating a map, looking at it for a few minutes, and then throwing everything away.

I have a rather small Minecraft world that takes 600 MB to store, but the map made by Overviewer is 6 GB! It consists of more than 34 thousand individual files, making it take more than half an hour just to move the folder to another location on my PC.

Did someone forget that maps are supposed to be smaller than the world they represent?

This thread mentions various ways of compressing the images, but the real problem is the crazy structure of the images that make up the webpage.

Ideally I would just want Overviewer to create a single huge image, and then let the actual zooming etc. be left up to some other tool.
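The file counts above follow directly from the quadtree layout: a fully populated pyramid with zoom levels 0 through d holds (4^(d+1) - 1) / 3 tiles, so every extra zoom level roughly quadruples the file count. (Real maps are sparser, since unexplored regions produce no tiles.) A quick back-of-envelope check:

```python
def pyramid_tiles(depth):
    """Tiles in a fully populated quadtree with zoom levels 0..depth."""
    return sum(4 ** z for z in range(depth + 1))

# seven or eight zoom levels already put a map
# into the tens of thousands of files
counts = {d: pyramid_tiles(d) for d in range(5, 9)}
```

This is why larger tiles (or fewer levels) shrink the file count so dramatically: the bottom level alone holds roughly three quarters of all files.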

@Narnianknight
Copy link

It wouldn't have to be one image, but the images should at least be larger than 384x384. The real problem is the ridiculous number of nested folders. I don't know how the API works, but surely a solution with fewer folders is possible.


8 participants