In this chapter, we discuss a simplified view of how to build a scalable distributed system from scratch, one that can handle traffic from a huge user base, potentially tens of millions of users with a large number of them online concurrently.
If you are preparing for a System Design interview, feel free to use this as a starting template. We will cover more advanced scale-out strategies in much greater depth in later chapters.
STEP 1: Single Server Setup:
Initially we have only one server which hosts both the web app and the database.
When a client tries to reach the site by its URL, DNS resolves the URL to the server's public IP address, and the client's request then reaches the web server via that IP address.
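The DNS lookup step can be observed directly. A minimal sketch in Python, using `localhost` as a stand-in for a real site's domain name:

```python
import socket

# Resolve a hostname to its IPv4 address, exactly as the client's
# resolver does before the HTTP request is sent.
# "localhost" stands in for a real domain such as example.com.
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1
```

In production the returned address would be the web server's public IP.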
STEP 2: Separate the Database from the Web Server
Now we have reached a few hundred users and need to scale our database. To scale the database independently, we separate it from the server where the web app resides. We now have a separate data tier.
The web server sends read/write/update requests to the data tier to persist data; the data tier returns data to the web server, which renders it for the clients.
STEP 3: Add Load Balancer
The user base has gradually grown to a few thousand users. A single server can no longer handle so many requests.
In this step we would add more servers and add a Load Balancer to distribute the incoming traffic among the web servers.
In this setup the load balancer has a public IP and each of the multiple web servers has a private IP. The DNS returns the public IP of the site which is used to reach the load balancer and then the load balancer uses the private IPs of the web servers to distribute the requests among the web servers.
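One common distribution strategy is round-robin. A minimal sketch, with hypothetical private IPs standing in for the real web servers:

```python
from itertools import cycle

# Private IPs of the web servers behind the load balancer (hypothetical).
web_servers = cycle(["10.0.0.1", "10.0.0.2", "10.0.0.3"])

def next_server():
    """Round-robin: pick the next web server for an incoming request."""
    return next(web_servers)

# Six incoming requests are spread evenly across the three servers.
print([next_server() for _ in range(6)])
```

Real load balancers also track server health and may use weighted or least-connections strategies; round-robin is just the simplest starting point.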
STEP 4: Data Replication : Add Secondary/Slave Databases
Our user base has grown even more, and one database server can no longer handle the load. We need to segregate read operations from write/update operations to reduce the pressure on the database server, serve users better, and achieve low latency. We need multiple Slave Database Servers to which data is replicated from the Master Database Server.
Slave databases would be used for reads and the Master Databases would be used for write and update operations.
Some of the complexities that arise in this setup:
- Data Synchronization between Master and Slave Servers: There should be a set rule for how often data is replicated from the master server to the slave servers. There is also the possibility of reading stale data from the slave servers.
- When a slave database goes offline: If there is only one slave database and it goes down, read operations are redirected to the master database and a new slave database is set up as soon as possible. If there is more than one slave database, there is no need to redirect reads to the master; the remaining slave databases handle all read operations while a new slave server is set up.
- If the Master Database goes down: If there is only one master database and it goes down, a slave database is promoted to master. This can cause data loss, which can be handled by one of the following:
- Running a Data Recovery Script
- Having a Multi-Master Setup
- Having a Circular Replication Setup
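The read/write split described above is often implemented in a small routing layer in front of the databases. A sketch, assuming hypothetical endpoint names and a naive SQL-verb check:

```python
import itertools

# Hypothetical connection endpoints for the database servers.
MASTER = "master-db:5432"
SLAVES = itertools.cycle(["slave-db-1:5432", "slave-db-2:5432"])

def pick_database(query):
    """Send writes/updates to the master, reads to a slave (round-robin)."""
    verb = query.strip().split()[0].upper()
    if verb in ("INSERT", "UPDATE", "DELETE"):
        return MASTER
    return next(SLAVES)

print(pick_database("SELECT * FROM users"))    # goes to a slave
print(pick_database("INSERT INTO users ..."))  # goes to the master
```

A production router would parse queries properly and also handle failover when a slave goes down, as discussed above.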
STEP 5: Add Cache Tier
Database calls are expensive and often slow. To reduce read latency, we implement caching and add a Cache Tier. This efficiently lightens the database load.
Things to keep in mind while implementing Caching:
- An expiration policy should be implemented for the cache so that data is removed from the cache if it is not accessed for a specified period of time, often called the TTL or Time To Live. The expiration period should not be too short, otherwise the application will continually fetch data from the database. It should not be too long either, otherwise the data will often become stale.
- Maintaining consistency between the cache and the database is very important. An item in the database may be changed or updated at any time, and that change might not be reflected in the cache until the next time the item is loaded into the cache. When scaling across multiple regions, maintaining consistency between the cache and the persistent storage becomes a big technical challenge.
- Data Eviction: The process of purging data from the cache when the cache reaches its maximum size and new data needs to be inserted in the cache is called Data Eviction. Some of the popular cache data eviction policies are Least Recently Used (LRU), Least Frequently Used (LFU), First In First Out (FIFO).
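The TTL and LRU-eviction ideas above can be combined in a few lines. A toy sketch (a dict stands in for a real cache tier such as Redis or Memcached; the tiny `max_size` is just for illustration):

```python
import time
from collections import OrderedDict

class TTLCache:
    """Tiny cache sketch with TTL expiration and LRU eviction."""

    def __init__(self, max_size=2, ttl_seconds=60.0):
        self.max_size = max_size
        self.ttl = ttl_seconds
        self._data = OrderedDict()  # key -> (value, expiry_time)

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None                      # miss: caller falls back to the DB
        value, expires_at = item
        if time.monotonic() > expires_at:    # TTL expired: treat as a miss
            del self._data[key]
            return None
        self._data.move_to_end(key)          # mark as most recently used
        return value

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        elif len(self._data) >= self.max_size:
            self._data.popitem(last=False)   # evict the least recently used key
        self._data[key] = (value, time.monotonic() + self.ttl)

cache = TTLCache(max_size=2, ttl_seconds=60.0)
cache.put("user:1", {"name": "Alice"})
cache.put("user:2", {"name": "Bob"})
cache.get("user:1")                          # touch user:1 so it is most recent
cache.put("user:3", {"name": "Carol"})       # evicts user:2 (LRU)
print(cache.get("user:2"))                   # None -> fall back to the database
```

Real cache systems expose the same knobs (TTL per key, a configurable eviction policy) with far more engineering behind them.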
STEP 6: Use Content Delivery Network (CDN)
A CDN is a network of geographically dispersed servers that caches static content (images, CSS, JavaScript files, videos) and serves it from the location closest to the user, reducing load times and offloading the web servers.
Things to keep in mind:
- File Invalidation: Resources in the CDN should be updated using the CDN API when a resource changes or gets updated. Alternatively, Object versioning can also be used to serve a different version of the resource.
- CDN Fallback / Failure: Care should be taken to detect CDN failure; when the CDN is unavailable, the requested resource should be served from the origin.
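The fallback logic is simple to express with injected fetchers standing in for real HTTP clients (the function names here are hypothetical):

```python
def fetch_with_fallback(path, fetch_from_cdn, fetch_from_origin):
    """Try the CDN first; if it fails, serve the asset from the origin."""
    try:
        return fetch_from_cdn(path)
    except Exception:
        return fetch_from_origin(path)

def broken_cdn(path):
    # Simulates an unreachable CDN edge.
    raise ConnectionError("CDN unavailable")

def origin(path):
    return f"origin copy of {path}"

print(fetch_with_fallback("/static/logo.png", broken_cdn, origin))
```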
Step 5 efficiently lightens the database load and Step 6 reduces the load of the web servers.
STEP 7: Make The Web Tier Stateless (If possible)
As we scale the web tier horizontally we need to move the state such as session data out of the web tier.
If we don’t make the web tier stateless then we won’t be able to send the requests of a user to different servers when state needs to be maintained.
A very naïve solution to this problem is the Sticky Session, which binds a user's session to a specific server so that all subsequent requests from that user for that session are sent to that specific server. This potentially means we are not taking full advantage of having multiple web servers, so it is not a very elegant solution.
Making the web tier stateless helps us auto-scale the web tier, i.e., scale the web tier independently.
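The idea can be sketched as follows: session state lives in a shared store outside the web tier (a dict stands in here for an external store such as Redis), so any server can handle any request and no sticky session is needed. The server and session names are hypothetical:

```python
# Shared session store, external to every web server.
shared_session_store = {}

def handle_request(server_name, session_id, action, data=None):
    """Any server can serve the request because state lives in the shared store."""
    if action == "login":
        shared_session_store[session_id] = {"user": data}
        return f"{server_name}: logged in {data}"
    if action == "whoami":
        session = shared_session_store.get(session_id, {})
        return f"{server_name}: you are {session.get('user', 'anonymous')}"

# The login lands on server-1, the follow-up on server-2: no sticky session needed.
print(handle_request("server-1", "sess-42", "login", "alice"))
print(handle_request("server-2", "sess-42", "whoami"))
```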
STEP 8: Implement Message Queue : LOOSE COUPLING
We have come a long way from where we began. Now we want to further decouple our web tier into smaller services. The key is loose coupling: components should depend on each other as little as possible, so that if one component fails, the others can continue working as if no failure occurred.
Messaging Queue is a key strategy that is employed in many Distributed Systems to achieve Loose Coupling.
Some of the technologies used to implement a message queue are the Java Message Service (JMS), Amazon SQS, and Azure Queue Storage, among many others.
Message queues are typically held in memory (often with optional persistence) and support high availability.
A message queue follows the publish-subscribe model. Input services, called producers or publishers, create messages and deliver them to the message queue. Other services or servers, called consumers or subscribers, connect to the queue and subscribe to the messages to be processed.
Key Benefits of Message Queue:
- Message queue allows web server to respond to requests quickly instead of being forced to perform resource-heavy procedures on the spot.
- Message queues are also useful for distributing a message to multiple recipients for consumption or for balancing loads between workers (consumers).
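The producer/consumer pattern can be sketched with Python's standard library, where `queue.Queue` stands in for a real broker such as Amazon SQS:

```python
import queue
import threading

message_queue = queue.Queue()
results = []

def producer():
    """The web server enqueues heavy work and returns to the user immediately."""
    for job_id in range(3):
        message_queue.put({"job": job_id, "task": "resize-photo"})
    message_queue.put(None)  # sentinel: no more messages

def consumer():
    """A worker consumes messages and performs the slow processing offline."""
    while True:
        message = message_queue.get()
        if message is None:
            break
        results.append(f"processed job {message['job']}")

worker = threading.Thread(target=consumer)
worker.start()
producer()
worker.join()
print(results)
```

Because the producer only enqueues work, it stays responsive even if the workers are slow; adding more consumer threads (or worker machines, with a real broker) scales the processing independently.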
STEP 9: Database Scaling : Sharding
Now the time has come to scale out our database through sharding: splitting the data across multiple database servers (shards), typically by applying a hash function to a shard key such as the user ID.
Things to keep in mind:
- Resharding: We may need to reshard an already sharded database if a certain shard experiences faster exhaustion due to uneven data distribution, or if a shard can no longer hold data due to data growth.
- Joins and Denormalization: Once a database has been sharded across multiple servers, it is hard to perform joins across shards due to performance and complexity constraints. A common workaround is to denormalize the database so that queries that previously required joins can be performed on a single table.
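Hash-based shard routing can be sketched in a few lines (the shard count here is an assumption; note that changing it remaps most keys, which is exactly why resharding is painful):

```python
import hashlib

NUM_SHARDS = 4  # assumed shard count

def shard_for(user_id):
    """Map a shard key (here, a user ID) to a shard with a stable hash."""
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# The same key always lands on the same shard, so lookups are deterministic.
assert shard_for(12345) == shard_for(12345)
for uid in (1, 2, 3, 4):
    print(uid, "-> shard", shard_for(uid))
```

Techniques such as consistent hashing reduce how many keys move when shards are added or removed.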
STEP 10: Add Automation, Logging, Monitoring and Metrics
Now that the business has grown substantially, collecting different types of metrics is very important as they can give useful business insights about the site. Some important metrics are:
- Host Level Metrics: CPU, Memory, Disk I/O
- Aggregated Level Metrics: Performance of entire database tier
- Key Business Metrics: Daily Active Users, Revenue etc.
We also need proper Logging to track down errors. Monitoring error logs is very important.
STEP 11: Multiple Data Centers
The site has grown rapidly and has attracted millions of users internationally. To improve availability and provide a better user experience across wider geographical areas, deploying the site to more than one data center is crucial.
User requests are routed by GeoDNS to the nearest data center. GeoDNS is a DNS service that resolves a domain name to an IP address based on the location of the user.
Proper care should be taken so that the data across all the data centers are synchronized properly.
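GeoDNS-style routing, plus a simple failover when a data center goes down, can be sketched as follows (region names and endpoints are hypothetical):

```python
# Map each user region to its nearest data center.
DATA_CENTERS = {
    "us-east": "us-east.example.com",
    "eu-west": "eu-west.example.com",
}
healthy = {"us-east": True, "eu-west": True}

def resolve(region):
    """Return the nearest healthy data center for the user's region."""
    if healthy.get(region):
        return DATA_CENTERS[region]
    # Failover: route to any other healthy data center.
    for name, up in healthy.items():
        if up:
            return DATA_CENTERS[name]
    raise RuntimeError("no healthy data center")

print(resolve("eu-west"))   # nearest data center while it is healthy
healthy["eu-west"] = False
print(resolve("eu-west"))   # fails over to the remaining data center
```

Real GeoDNS providers combine this location-based resolution with health checks and latency measurements.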
Summary: The above steps, from step 1 to step 11, would help you build a system from scratch and scale it from a single user to 10 million users.
So, while building distributed scalable systems things to keep in mind are:
- Keep Web Tier stateless
- Have data replication or redundancy at every tier
- Cache data as much as possible
- Host static resources on CDN
- Scale data tier by sharding
- Split tiers into individual services, e.g., by using message queues
- Monitor your system and use automation tools
- Support multiple data centers