Increase US rendering capacity #637
Do you have rough guesses for what 1 and 2 would cost?
1 would have no costs for the OSMF. 2 and 3 cost about 7k USD in the past when we got new servers in Europe, but supply shortages will have increased costs since then.
FYI: Our (ACC, nidhogg) network provider tells us that for them North American capacity is cheap and plentiful, so the only downside of shifting the load to the renderer we host is the increased latency (and the risk of running out of nidhogg rendering capacity).
AWS would be interested in helping provide capacity. Feel free to email djnalley@amazon.com and cashsame@amazon.com and we'll start a conversation.
@ke4qqq Thank you. I will reach out to you via email shortly.
Current plans are to set up a rendering server on AWS (#682) but also to replace pyrene, which has a 9-year-old CPU, spinning hard drives, and only 2.8TB in RAID5. We can use this issue to track replacing pyrene, which is owned by OSM US.
We are in conversations about locating a server in Arizona and getting University of Arizona support, and are pricing out what we need as far as funding. Will connect via email.
Current DB size is 1.1TiB; estimated size in 5 years is 2TB. The tile store is tougher to size, because it will consume as much space as it is given, and more space is always good for cache hit ratios. We run a daily cleanup job that, if more than 88% of disk space is used, removes files not accessed in the last 2 days until 80% of disk space is used. pyrene, with about 1.3TiB of tile store, is having to run these automated sweeps of old tiles multiple times per day, which is too often. Nidhogg and Culebre, splitting the metatiles between them, have about 2TiB of tile store each and take 2-4 days to get from 80% to 88%. Given the above, I would want >2TB of tile storage for a US server in 5 years, so a total of >4TB (database plus tiles), which means a 7.68TB disk.
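For reference, a minimal sketch of the cleanup policy described above, assuming a plain filesystem walk keyed on access time; the mount point, the thresholds as constants, and the exact mechanics are illustrative assumptions, not the actual job we run.

```python
#!/usr/bin/env python3
"""Sketch of the tile-store cleanup policy described above: if the tile
filesystem is more than 88% full, delete tiles not accessed in the last
2 days, least recently accessed first, until usage drops below 80%.
Paths and mechanics are assumptions for illustration only."""

import os
import shutil
import time

TILE_DIR = "/srv/tiles"        # hypothetical tile store mount point
HIGH_WATER = 0.88              # start cleaning above 88% usage
LOW_WATER = 0.80               # stop cleaning once below 80% usage
MIN_IDLE_SECONDS = 2 * 86400   # only remove files idle for at least 2 days


def usage_fraction(path: str) -> float:
    """Fraction of the filesystem holding `path` that is in use."""
    total, used, _free = shutil.disk_usage(path)
    return used / total


def stale_files(root: str, min_idle: float):
    """Yield (atime, path) for files not accessed in the last `min_idle` seconds."""
    cutoff = time.time() - min_idle
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            try:
                atime = os.stat(full).st_atime
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if atime < cutoff:
                yield atime, full


def cleanup(root: str = TILE_DIR) -> None:
    if usage_fraction(root) <= HIGH_WATER:
        return  # below the high-water mark, nothing to do
    # Remove the least recently accessed tiles first until the low-water
    # mark is reached (or we run out of eligible files, as happens on
    # pyrene, which is why it ends up sweeping multiple times per day).
    for _atime, path in sorted(stale_files(root, MIN_IDLE_SECONDS)):
        try:
            os.remove(path)
        except OSError:
            continue
        if usage_fraction(root) <= LOW_WATER:
            break


if __name__ == "__main__":
    cleanup()
```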
piasa is now running, so we have enough capacity, and it should remain enough once pyrene is shut off.
pyrene, the one US rendering server, no longer has the capacity to keep up with demand (#625 (comment), and other reports). I was able to relieve some of the pressure by sending significant east coast US traffic to Europe, but sending IAD, EWR, LGA, YYZ, and MIA across the Atlantic is not ideal.
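For context, that rerouting amounts to changing which rendering cluster each point of presence is served from; the real change lives in DNS/GeoDNS configuration, but a hypothetical sketch of the resulting mapping looks like this:

```python
# Hypothetical sketch of the site-to-renderer mapping behind the rerouting
# described above. The real change is made in DNS/GeoDNS configuration;
# this function is for illustration only.
EAST_COAST_SITES = {"IAD", "EWR", "LGA", "YYZ", "MIA"}


def render_cluster_for(site: str) -> str:
    """Pick the rendering cluster that serves a given point of presence."""
    # Temporary measure: send east coast North American sites to the
    # European renderers to take load off pyrene.
    if site in EAST_COAST_SITES:
        return "europe"
    return "pyrene"
```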
I see three options
My preferred order is 1, 2, 3. I would rather not add another hardware location for just one server.