Planet generation: The more time passes, the more the estimated generation time rises #654
Comments
This is the expected (optimistic) generation time for the whole planet. We offer generation of tiles for the whole planet on our cluster as a service (with our hardware setup it is done in four days): https://openmaptiles.com/cluster-rendering/ |
I really like the job you made with this library, but honestly, I'm done with your copied/pasted answers...
I totally understand that you made tile generation a business because you spent much time working on it. But when people need help on the open-source part, please answer in order to help, or don't answer at all. Your business is valuable to people who don't want to wait for tile generation and can afford your services. Concerning my issue: this way, I can set the number of jobs to execute in an environment variable. |
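The job-splitting idea mentioned here can be sketched roughly as follows. This is a hypothetical illustration only: the `JOBS` variable and the split by tile column are my assumptions, not the actual script from this thread.

```shell
# Hypothetical sketch: split the z12 tile columns into $JOBS independent
# chunks, with the job count taken from an environment variable.
# None of these names come from the actual script discussed above.
JOBS="${JOBS:-50}"                     # number of jobs, from the environment
ZOOM=12
TILES=$((1 << ZOOM))                   # 4096 tile columns at z12
STEP=$(( (TILES + JOBS - 1) / JOBS ))  # ceil(TILES / JOBS) columns per job
for ((i = 0; i < JOBS; i++)); do
  MIN_X=$((i * STEP))
  MAX_X=$((MIN_X + STEP - 1))
  if ((MAX_X >= TILES)); then MAX_X=$((TILES - 1)); fi
  echo "job $i: columns $MIN_X-$MAX_X"
done
```

Each chunk can then be rendered independently, on one machine sequentially or on several machines in parallel.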
@qlerebours Are you getting much faster renders with your jobs script? |
No, they are not much faster, but I can't use multiple computers. For the moment, I split z12 into 50 jobs; each job takes between 1 hour and 12 hours depending on the quantity of data, on a small computer: 16 GB RAM, 4 vCPUs, 1 TB SSD. |
How did it end up going @qlerebours? I'm dealing with the generation of the planet in a different way, I think. I'm using the openmaptiles quickstart.sh script to generate the MBTiles of each country, and then I merge them all into the same file with tippecanoe (using the tile-join tool). So far it's working quite nicely. However, it still takes time to generate all countries and merge them. But at least I don't need a 1 TB SSD, and I can use it progressively as I have a bigger file. In my case, I don't need the whole world at the moment, only a part of Europe. But I guess you can set up a bunch of cloud servers, along with your computer, to generate all the MBTiles in a cheaper way than getting a computer with LOTS of RAM and SSD. Cheers! |
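For reference, a merge of per-country extracts with tippecanoe's tile-join looks roughly like this (the file names are placeholders, not from the thread):

```shell
# Merge several per-country MBTiles files into one; -o names the output,
# -pk keeps tiles even if they exceed tile-join's default size limit.
# File names here are hypothetical examples.
tile-join -o europe.mbtiles -pk spain.mbtiles france.mbtiles portugal.mbtiles
```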
@carlos-mg89 I thought about doing it the way you are, but I think there may be a problem: Even with EC2 c5x.large it take many time to generate tiles. I ran the first 21 jobs on 100 with a first instance, in less than 2 weeks. The problem is that the first 10 were very fast because it's just ocean, not cities. |
Why would you need to render past zoom level 14? It's vector data and can be overzoomed. Past z14 you're just splitting tiles into smaller chunks, not adding any extra detail. |
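The overzooming mentioned here is plain integer arithmetic: a request above the data's maximum zoom is served from its z14 ancestor tile, with x and y shifted right by the zoom difference. A small sketch (the tile coordinates are made-up numbers):

```shell
# Find the z14 ancestor of a hypothetical z16 tile request.
Z=16; X=37000; Y=24000            # requested tile (made-up coordinates)
DZ=$((Z - 14))                    # zoom levels above the data's max zoom
PX=$((X >> DZ)); PY=$((Y >> DZ))  # ancestor coordinates at z14
echo "serve 14/$PX/$PY for $Z/$X/$Y"
```

The renderer or client simply scales and clips the z14 tile, so rendering past z14 buys no extra detail.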
Yeah, someone told me that a few days after I posted. Thanks for sharing here |
I finally managed to process a full planet (with quickstart.sh :-) so I could give a rough figure on what to expect. I processed the data in 4 parts. For better comparison: $ docker-compose run openmaptiles-tools test-perf openmaptiles.yaml --no-color
Connecting to PostgreSQL at postgres:5432, db=openmaptiles, user=openmaptiles...
* version() = PostgreSQL 9.6.18 on x86_64-pc-linux-gnu (Debian 9.6.18-1.pgdg90+1), compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit
* postgis_full_version() = POSTGIS="3.0.1 ec2a9aa" [EXTENSION] PGSQL="96" GEOS="3.7.1-CAPI-1.11.1 27a5e771" PROJ="Rel. 4.9.3, 15 August 2016" LIBXML="2.9.4" LIBJSON="0.12.1" LIBPROTOBUF="1.2.1" WAGYU="0.4.3 (Internal)"
* jit = unrecognized configuration parameter "jit"
* shared_buffers = 128MB
* work_mem = 4MB
* maintenance_work_mem = 64MB
* effective_cache_size = 4GB
* effective_io_concurrency = 1
* max_connections = 100
* max_worker_processes = 8
* max_parallel_workers = unrecognized configuration parameter "max_parallel_workers"
* max_parallel_workers_per_gather = 0
* wal_buffers = 4MB
* min_wal_size = 80MB
* max_wal_size = 1GB
* random_page_cost = 4
* default_statistics_target = 100
* checkpoint_completion_target = 0.5
Validating SQL fields in all layers of the tileset
Running all layers test 'us-across' at zoom 14 (2,361 tiles) - A line from Pacific ocean across US via New York and some Atlantic ocean...
Tile sizes for 2,361 tiles (~236/line) done in 0:08:40.3 (4.5 tiles/s)
#######################################################################################################################################
52.0 avg size, 0B (14/2759/6158) — 233B (14/2839/6158)
█ 832.1 avg size, 236B (14/2774/6158) — 1,117B (14/3595/6158)
█ 1.2K avg size, 1,117B (14/4526/6158) — 1,323B (14/3752/6158)
██ 1.4K avg size, 1,323B (14/4188/6158) — 1,624B (14/4411/6158)
██ 1.8K avg size, 1,632B (14/3994/6158) — 2,110B (14/2620/6158)
███ 2.3K avg size, 2,110B (14/3013/6158) — 2,560B (14/4079/6158)
████ 2.8K avg size, 2,561B (14/3406/6158) — 3,294B (14/3877/6158)
█████ 3.8K avg size, 3,296B (14/3349/6158) — 4,549B (14/4489/6158)
████████ 5.5K avg size, 4,555B (14/4600/6158) — 7,404B (14/3964/6158)
███████████████████████████████████████████████████████████████████████ 49.0K avg size, 7,418B (14/4122/6158) — 585,849B (14/4824/6158)
================ SUMMARY ================
Generated 2,361 tiles in 0:08:40.3, 4.5 tiles/s, 7,029.6 bytes/tile
4.5 tiles/s seems low to me - I was seeing 40-100 tiles/s most of the time. But 4.5 tiles/s is what this test says ;-) |
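The reported rate can be sanity-checked from the summary line itself: 2,361 tiles in 0:08:40.3 is 520.3 seconds.

```shell
# Recompute the tiles/s figure from the summary line above.
RATE=$(awk 'BEGIN { printf "%.1f", 2361 / 520.3 }')
echo "$RATE tiles/s"   # matches the reported 4.5 tiles/s
```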
Not really a solution but some experience to share.
|
How do you combine the mbtiles? I used tile-join from tippecanoe, but I wonder if there is a better solution. On 15.12.2020 at 22:53, ache051 <notifications@github.com> wrote:
Not really a solution but some experience to share.
We do cuts of entire planet by using AWS spot instances this way:
1) Use quickstart.sh to load all data to Postgres on one spot instance (r4.16xlarge), then stop. (~2 days)
2) Create an image from the EBS of said instance.
3) Split the planet into 8 "geographical areas" depending on concentration of data: East Europe, West Europe, North America, South America, Oceania, East Asia, West Asia, Middle East and Indian subcontinent.
4) Start 8 spot instances (r4.16xlarge) from the image created in 2).
5) Run generate-vectortiles with 64 processes each for the areas devised in 3), from z0 to z14, on the 8 instances. (~5-7 days in total depending on availability of spot instances)
6) Combine the eight mbtiles files into one to get world coverage. (1 day)
Altogether it takes about 10 days to do a cut of the world. Each complete run costs about US $1k.
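The per-area render step of that workflow might look roughly like this. The `BBOX`/`MIN_ZOOM`/`MAX_ZOOM` variables follow the openmaptiles `.env` convention, but treat the exact values and the service invocation as assumptions rather than the poster's actual commands:

```shell
# Hypothetical per-area render: override the bounding box and zoom range,
# then run the generate-vectortiles service from the openmaptiles repo.
export BBOX="-10.0,35.0,30.0,60.0"   # e.g. a made-up West Europe cut
export MIN_ZOOM=0
export MAX_ZOOM=14
docker-compose run --rm generate-vectortiles
```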
|
I'm currently trying to generate tiles for the planet file on z12 and z13.
I know that it requires a strong configuration and that it should take at least a week or two to generate (no need to warn me that this is the optimistic generation time).
My problem is that it took 3 days to generate 25%. The estimated time was 12 days during the first days, then it started to slow down, announcing 30 days to generate, then 70, then 100, then 130, and now the estimated time is 170 days... It's been 10 days since the generation started and "only" 28% has been generated.
Can someone help me understand why it keeps slowing down every day?
My configuration: 16 GB RAM, 4 threads, 1 TB SSD
Here is a picture of the htop command on the computer. It can be viewed here too:
https://ibb.co/mRBFh5H
https://ibb.co/MNqQFLc
Thanks
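The growing estimate in the report above is what you'd expect when a progress bar extrapolates a uniform rate over non-uniform tiles: the ocean tiles at the start render fast, so early fractions overstate the true rate. Using the figures reported in this issue, and assuming the ETA is simply elapsed time divided by fraction done:

```shell
# ETA = elapsed / fraction_done, with the figures reported above
# (assumed formula; the actual progress estimator is not shown in the thread).
ETA_EARLY=$(awk 'BEGIN { printf "%.0f", 3 / 0.25 }')   # 3 days elapsed, 25% done
ETA_LATER=$(awk 'BEGIN { printf "%.0f", 10 / 0.28 }')  # 10 days elapsed, 28% done
echo "early estimate: $ETA_EARLY days; later estimate: $ETA_LATER days"
```

As the slow, dense city tiles dominate, the measured rate keeps falling and the extrapolated total keeps climbing, which matches the 12 → 170 day progression described above.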