Take database isolation levels, for example (http://highscalability.com/blog/2011/2/10/database-isolation-levels-and-their-effects-on-performance-a.html). In a perfect world we would perform all operations on a database serially, that is, in the order the engine receives them. In practice, however, production databases rarely execute operations in a fully serializable way by default; we make compromises and find ways to perform operations a bit faster. In this way we trade off performance (the expected side effects of each individual operation) for scalability. See https://www.youtube.com/watch?v=5ZjhNTM8XU8 to learn more about attack vectors that exploit weak database isolation levels.
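To make that concrete, here is a small self-contained Python sketch (an analogy with threads and a lock, not a real database): two "transactions" each read a balance, pause, and write it back. Letting them interleave, as a weak isolation level would, finishes faster but loses one of the updates; forcing them to run one at a time, as serializable execution would, gives the expected result but takes roughly twice as long. The numbers and names are purely illustrative.

```python
import threading
import time

balance = 100
lock = threading.Lock()

def deposit(amount, serialize):
    """Read the balance, simulate some work, then write it back."""
    global balance
    if serialize:
        with lock:                  # "serializable": one transaction at a time
            current = balance
            time.sleep(0.01)        # work between read and write
            balance = current + amount
    else:                           # "weak isolation": reads and writes interleave
        current = balance
        time.sleep(0.01)
        balance = current + amount

for serialize in (False, True):
    balance = 100
    threads = [threading.Thread(target=deposit, args=(50, serialize)) for _ in range(2)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    print(f"serialize={serialize}: balance={balance} (expected 200), took {elapsed:.3f}s")
```

With `serialize=False` both threads read 100 and both write 150, so one deposit is lost; with `serialize=True` the result is correct but the run takes about twice as long. That is the compromise databases make when they relax isolation.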
Another example is scaling an API. Say you have a stock exchange engine running behind an API. At first you run it in one location (one part of the world) and performance is fast. Then you start getting customers on the other side of the world, and their latency numbers are bad (due to the speed of light and the distances involved). If you choose to spin up an API instance that is geographically closer to those customers (scale out), their latency shrinks, but now your API instances must synchronize with each other. That synchronization costs you time and extra computation steps, lowering your performance.
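A rough back-of-the-envelope sketch of that effect, with made-up latency numbers chosen only for illustration: adding a second region cuts the round trip for distant users, but a write that must be acknowledged by the other region pays a cross-region synchronization cost.

```python
# Assumed, illustrative latencies (not measurements).
ONE_WAY_NEAR_MS = 5         # client -> API instance in the same region
ONE_WAY_FAR_MS = 150        # client -> API instance on the other side of the world
CROSS_REGION_SYNC_MS = 150  # replicating a write to the other region

def read_latency(one_way_ms: float) -> float:
    """Request plus response over the given one-way network delay."""
    return 2 * one_way_ms

def replicated_write_latency() -> float:
    """Nearby round trip, plus waiting for the other region to acknowledge."""
    return read_latency(ONE_WAY_NEAR_MS) + CROSS_REGION_SYNC_MS

print(f"one region,  far user read:      {read_latency(ONE_WAY_FAR_MS):.0f} ms")
print(f"two regions, any user read:      {read_latency(ONE_WAY_NEAR_MS):.0f} ms")
print(f"two regions, synchronized write: {replicated_write_latency():.0f} ms")
```

Reads get much faster for everyone (10 ms instead of 300 ms for far users), but any operation that has to stay in sync across regions now pays an extra 150 ms. You gained scalability and paid for it with per-operation performance.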
We tend to see this trade-off in various forms at various levels.
https://github.com/donnemartin/system-design-primer#performance-vs-scalability
Does a system have to sacrifice performance for scalability?
Even after reading the sources, I still don't get it.
Why is there a trade-off between them?
Thanks in advance