We often hear about Distributed Databases, Data Replication, and Data Partitioning. But have you ever wondered when you should adopt a distributed architecture, and what benefits you get when multiple machines are involved in storing and retrieving data?
Below are the various reasons why you might want to distribute a database across multiple machines:
Scalability:
If your data volume, read load, or write load grows beyond what a single machine can handle, you can spread the load across multiple machines.
This is called scaling out or horizontal scaling (as opposed to scaling up or vertical scaling, where a single machine is made more and more powerful by adding CPU power, RAM, and/or disk storage). Scaling out the number of machines that can serve read queries effectively increases read throughput. And if your dataset is so big that no single machine can hold a copy of the entire dataset, you need data partitioning, also known as sharding.
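To make sharding concrete, here is a minimal sketch of hash-based partitioning. The function name `shard_for` and the shard count are hypothetical; real databases layer rebalancing and replication on top of this idea.

```python
# Minimal sketch of hash-based partitioning (sharding).
# shard_for and num_shards are illustrative, not a real database API.
import hashlib

def shard_for(key: str, num_shards: int) -> int:
    """Map a key to one of num_shards partitions using a stable hash."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

# Every node applies the same function, so any node can compute which
# shard owns the data for "user:42" and route the request there.
print(shard_for("user:42", 4))
```

Because the hash is deterministic, all machines agree on the owner of each key without any coordination. The trade-off is that naive `hash % N` forces most keys to move when `N` changes, which is why production systems typically use consistent hashing or range-based partitioning instead.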
Fault Tolerance, Resilience, High Availability:
If your application needs to keep working even when one (or several) machines, the network, or an entire datacenter goes down, you can use multiple machines to provide redundancy. When one fails, another takes over. This is where data replication comes into play, and it effectively increases the availability of the system.
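The "when one fails, another takes over" behavior can be sketched as a simple read-failover loop. The replica callables below are hypothetical stand-ins for real database clients, assuming failures surface as `ConnectionError`.

```python
# Minimal sketch of failover across replicas; primary/secondary are
# hypothetical stand-ins for real database client connections.
def read_with_failover(key, replicas):
    """Try each replica in turn; return the first successful read."""
    last_error = None
    for replica in replicas:
        try:
            return replica(key)
        except ConnectionError as e:
            last_error = e  # this replica is unreachable; try the next one
    raise last_error  # every replica failed

def primary(key):
    raise ConnectionError("primary is down")

def secondary(key):
    return {"key": key, "value": "cached-profile"}

# The read succeeds from the secondary even though the primary is down.
print(read_with_failover("user:42", [primary, secondary]))
```

Real systems add health checks, timeouts, and leader election on top of this, but the core availability win is the same: replicated data means a single machine failure does not make the data unreachable.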
Reduced Latency:
If you have users around the world, you might want to have servers at various locations worldwide so that each user can be served from a datacenter that is geographically close to them. That avoids users having to wait for network packets to travel halfway around the world. Keeping data and datacenters close to the users effectively reduces latency. In this context, it is important to know that latency and response time are not the same thing, though many people use them interchangeably.
latency = response time - processing time, or equivalently, latency + processing time = response time
Latency is the delay incurred in communicating a message (the time the message spends “on the wire”). The word latent means inactive or dormant, so the processing of a user action is latent while the request travels across the network. Latency cannot be improved by changing (or optimizing) your code; it is a resource issue, affected by hardware adequacy and utilization.
Processing time is the amount of time a system takes to process a given request, not including the time it takes the message to get from the user to the system or the time it takes to get from the system back to the user. Processing time can be affected by changes to your code, changes to systems that your code depends on (e.g. databases), or improvements in hardware.
Response time is the total time it takes from when a user makes a request until they receive a response. Response time can be affected by changes to the processing time of your system and by changes in latency, which occur due to changes in hardware resources or utilization.
In many cases, you can assert that your latency is nominal, making your response time and your processing time pretty much the same.
Example: The latency in a phone call is the amount of time it takes from when you ask a question until the other party hears your question. If you have ever talked to somebody on a cell phone while standing in the same room, you have probably experienced latency firsthand: you can see their lips moving, but what you hear in the phone is delayed because of the latency.
The processing time in a phone conversation is the amount of time the person you ask a question takes to ponder the question and speak the answer (after they hear the question, of course).
The response time in a phone conversation is the amount of time it takes for you to ask a question and get a response back from the person you are talking to.
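The relationship between the three quantities can be checked with a small worked example. The numbers below are illustrative, not measurements.

```python
# Worked example of: latency + processing time = response time.
# All values are illustrative milliseconds, not real measurements.
network_latency_ms = 120.0   # time the request and reply spend "on the wire"
processing_time_ms = 30.0    # time the server spends handling the request

response_time_ms = network_latency_ms + processing_time_ms
print(response_time_ms)  # 150.0
```

Moving the datacenter closer to the user shrinks `network_latency_ms`, while code optimization only shrinks `processing_time_ms`; the user experiences the sum of the two.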