Compare commits
136 Commits
25724090f7 · 2e04efd63c · 66be664f9a · 4062c9252c · 0544287e71 · d9c0737744 · 0c744e7554 · 7810731268
40a7b481e5 · b424d7d31f · 7684b61e5c · e00dcec9fb · 26cb0fba80 · 690ea6331d · 6eea432b69 · ca2bfbcf31
f8c1526093 · 49a4cd78aa · 01d4d207bc · d46cc2780b · 9d3d8b6583 · 6c125f57f8 · c619cb8ded · 3b3579069b
b769882e49 · 7b91b4aa34 · df34b018b4 · f923fac46e · bdd2de91fc · d087b6559a · ad8d400f90 · 1cf528fa9a
34e86933e3 · fb103087b0 · bff822018c · 4d6fcf140c · 6bdb1778e4 · 99a6cf36a0 · 8ad6f7c4a5 · 364cab24f6
02a7adb2a7 · ae0496601f · 801ba632c7 · fca6ebe13c · f667d0098a · c81f684f76 · cc4a522628 · d65dd87403
0423ed51fc · 82d7609fed · d34c1bd2f3 · 82b921d7b9 · 897d9a5350 · c25423be69 · 7625b25950 · 2a0e42d29f
e76741c1d0 · 0fdfb7db3a · 878b5f74e2 · 7860eb58c5 · 3c182a1015 · fe060248a7 · 46c8f3eb92 · 21901b3598
d24b12f0d6 · a2ee0e9822 · cde273f06c · 5171098095 · 006041b06b · dd48af5e00 · 21a6a020fe · f89ec2bba3
827bac099c · 5e0555c709 · 8e82391311 · de92679df3 · eb6ec68b77 · 5a9bf86980 · 494b04b1b4 · df696cb4d9
e654d2c65a · 0b595a301a · a50aa2bcbb · e24fd66e9c · 83cdba07e0 · 3a063266e7 · 2c8e0a334d · 69dd162923
c9eaf8a180 · cf7cda8b70 · 6a5299c3e4 · e8518631cf · 851eb4bf39 · 180cacd49e · 4043fe83f0 · e274b977fb
8c64ba6316 · a762dcce87 · fe09911c0d · f485f13aff · 6755c9b8dc · 6af42a027a · 83cd94e980 · 7e138b2c5f
2709b209e9 · f4a0a4f8ed · c987855fdf · 6961f923ce · e5a5d2a269 · bc11501bf0 · 1edb681064 · 008f4a2581
c23dc5f68c · 3efaa43762 · aee69e7ba1 · 9ef4709f55 · 4cfc0424cb · f9e8c72e28 · 20818b72f5 · 7a509d06d8
1755848d1f · a31a29a37f · d420638405 · e4a5d4f302 · e8d5e0b5c2 · a8c85ae1d9 · a662d0eb10 · d7789ed08a
09776a9c43 · 4105a330af · 443e284e21 · dbd0527ba9 · 66ad8b81e6 · 174b2856cc · 589ed53c74 · 345d5ddc1d
README.md — 205 changed lines

```diff
@@ -1,70 +1,121 @@
 # Awesome System Design Resources
-This repository contains resources to learn System Design concepts and prepare for interviews all using free resources.
+<p align="center">
+  <img src="diagrams/system-design-github.png" width="400" height="250">
+</p>
 
-## System Design Fundamentals
-- [Horizontal vs Vertical Scaling](https://www.spiceworks.com/tech/cloud/articles/horizontal-vs-vertical-cloud-scaling/)
-- [Content Delivery Network (CDN)](https://www.cloudflare.com/learning/cdn/what-is-a-cdn/)
-- [Caching](https://medium.com/must-know-computer-science/system-design-caching-acbd1b02ca01)
-- [Distributed Caching](https://redis.com/glossary/distributed-caching/)
-- [Latency vs Throughput](https://aws.amazon.com/compare/the-difference-between-throughput-and-latency/)
-- [CAP Theorem](https://www.bmc.com/blogs/cap-theorem/)
-- [Load Balancing](https://aws.amazon.com/what-is/load-balancing/)
-- [ACID Transactions](https://redis.com/glossary/acid-transactions/)
-- [SQL vs NoSQL](https://www.integrate.io/blog/the-sql-vs-nosql-difference/)
-- [Consistent Hashing](https://arpitbhayani.me/blogs/consistent-hashing/)
-- [Database Index](https://www.progress.com/tutorials/odbc/using-indexes)
-- [Rate Limiting](https://www.imperva.com/learn/application-security/rate-limiting/)
-- [Microservices Architecture](https://medium.com/hashmapinc/the-what-why-and-how-of-a-microservices-architecture-4179579423a9)
-- [Microservices Guidelines](https://newsletter.systemdesign.one/p/netflix-microservices)
-- [API Design](https://abdulrwahab.medium.com/api-architecture-best-practices-for-designing-rest-apis-bf907025f5f)
-- [Strong vs Eventual Consistency](https://hackernoon.com/eventual-vs-strong-consistency-in-distributed-databases-282fdad37cf7)
-- [Consistency Patterns](https://systemdesign.one/consistency-patterns/)
-- [Synchronous vs. asynchronous communications](https://www.techtarget.com/searchapparchitecture/tip/Synchronous-vs-asynchronous-communication-The-differences)
-- [REST vs RPC](https://aws.amazon.com/compare/the-difference-between-rpc-and-rest/)
-- [Batch Processing vs Stream Processing](https://atlan.com/batch-processing-vs-stream-processing/)
-- [HeartBeat](https://martinfowler.com/articles/patterns-of-distributed-systems/heartbeat.html)
-- [Circuit Breaker](https://medium.com/geekculture/design-patterns-for-microservices-circuit-breaker-pattern-276249ffab33)
-- [Idempotency](https://blog.dreamfactory.com/what-is-idempotency/)
-- [Database Scaling](https://thenewstack.io/techniques-for-scaling-applications-with-a-database/)
-- [Data Replication](https://redis.com/blog/what-is-data-replication/)
-- [Data Redundancy](https://www.egnyte.com/guides/governance/data-redundancy)
-- [Database Sharding](https://www.mongodb.com/features/database-sharding-explained#)
+This repository contains free resources to learn System Design concepts and prepare for interviews.
+
+👉 Subscribe to my [AlgoMaster Newsletter](https://bit.ly/amghsd) and get a **FREE System Design Interview Handbook** in your inbox.
+
+✅ If you are new to System Design, start here: [System Design was HARD until I Learned these 30 Concepts](https://blog.algomaster.io/p/30-system-design-concepts)
+
+## ⚙️ Core Concepts
+- [Scalability](https://algomaster.io/learn/system-design/scalability)
+- [Availability](https://algomaster.io/learn/system-design/availability)
+- [Reliability](https://algomaster.io/learn/system-design/reliability)
+- [SPOF](https://algomaster.io/learn/system-design/single-point-of-failure-spof)
+- [Latency vs Throughput vs Bandwidth](https://algomaster.io/learn/system-design/latency-vs-throughput)
+- [Consistent Hashing](https://algomaster.io/learn/system-design/consistent-hashing)
+- [CAP Theorem](https://algomaster.io/learn/system-design/cap-theorem)
+- [Failover](https://www.druva.com/glossary/what-is-a-failover-definition-and-related-faqs)
 - [Fault Tolerance](https://www.cockroachlabs.com/blog/what-is-fault-tolerance/)
-- [Failover](https://avinetworks.com/glossary/failover/)
-- [Proxy Server](https://www.fortinet.com/resources/cyberglossary/proxy-server)
-- [Domain Name System (DNS)](https://www.cloudflare.com/learning/dns/what-is-dns/)
-- [Message Queues](https://medium.com/must-know-computer-science/system-design-message-queues-245612428a22)
-- [WebSockets](https://www.pubnub.com/guides/websockets/)
-- [Bloom Filters](https://www.enjoyalgorithms.com/blog/bloom-filter)
+
+## 🌐 Networking Fundamentals
+- [OSI Model](https://algomaster.io/learn/system-design/osi)
+- [IP Addresses](https://algomaster.io/learn/system-design/ip-address)
+- [Domain Name System (DNS)](https://blog.algomaster.io/p/how-dns-actually-works)
+- [Proxy vs Reverse Proxy](https://blog.algomaster.io/p/proxy-vs-reverse-proxy-explained)
+- [HTTP/HTTPS](https://algomaster.io/learn/system-design/http-https)
+- [TCP vs UDP](https://algomaster.io/learn/system-design/tcp-vs-udp)
+- [Load Balancing](https://blog.algomaster.io/p/load-balancing-algorithms-explained-with-code)
+- [Checksums](https://algomaster.io/learn/system-design/checksums)
+
+## 🔌 API Fundamentals
+- [APIs](https://algomaster.io/learn/system-design/what-is-an-api)
+- [API Gateway](https://blog.algomaster.io/p/what-is-an-api-gateway)
+- [REST vs GraphQL](https://blog.algomaster.io/p/rest-vs-graphql)
+- [WebSockets](https://blog.algomaster.io/p/websockets)
+- [Webhooks](https://algomaster.io/learn/system-design/webhooks)
+- [Idempotency](https://algomaster.io/learn/system-design/idempotency)
+- [Rate limiting](https://blog.algomaster.io/p/rate-limiting-algorithms-explained-with-code)
+- [API Design](https://abdulrwahab.medium.com/api-architecture-best-practices-for-designing-rest-apis-bf907025f5f)
+
+## 🗄️ Database Fundamentals
+- [ACID Transactions](https://algomaster.io/learn/system-design/acid-transactions)
+- [SQL vs NoSQL](https://algomaster.io/learn/system-design/sql-vs-nosql)
+- [Database Indexes](https://algomaster.io/learn/system-design/indexing)
+- [Database Sharding](https://algomaster.io/learn/system-design/sharding)
+- [Data Replication](https://redis.com/blog/what-is-data-replication/)
+- [Database Scaling](https://blog.algomaster.io/p/system-design-how-to-scale-a-database)
+- [Databases Types](https://blog.algomaster.io/p/15-types-of-databases)
+- [Bloom Filters](https://algomaster.io/learn/system-design/bloom-filters)
+- [Database Architectures](https://www.mongodb.com/developer/products/mongodb/active-active-application-architectures/)
+
+## ⚡ Caching Fundamentals
+- [Caching 101](https://algomaster.io/learn/system-design/what-is-caching)
+- [Caching Strategies](https://algomaster.io/learn/system-design/caching-strategies)
+- [Cache Eviction Policies](https://blog.algomaster.io/p/7-cache-eviction-strategies)
+- [Distributed Caching](https://blog.algomaster.io/p/distributed-caching)
+- [Content Delivery Network (CDN)](https://algomaster.io/learn/system-design/content-delivery-network-cdn)
+
+## 🔄 Asynchronous Communication
+- [Pub/Sub](https://algomaster.io/learn/system-design/pub-sub)
+- [Message Queues](https://algomaster.io/learn/system-design/message-queues)
+- [Change Data Capture (CDC)](https://algomaster.io/learn/system-design/change-data-capture-cdc)
+
+## 🧩 Distributed System and Microservices
+- [HeartBeats](https://blog.algomaster.io/p/heartbeats-in-distributed-systems)
+- [Service Discovery](https://blog.algomaster.io/p/service-discovery-in-distributed-systems)
 - [Consensus Algorithms](https://medium.com/@sourabhatta1819/consensus-in-distributed-system-ac79f8ba2b8c)
-- [Gossip Protocol](http://highscalability.com/blog/2023/7/16/gossip-protocol-explained.html)
-- [API Gateway](https://www.nginx.com/learn/api-gateway/)
-- [Serverless Architecture](https://www.datadoghq.com/knowledge-center/serverless-architecture/)
-- [Service Discovery](https://www.nginx.com/blog/service-discovery-in-a-microservices-architecture/)
-- [Disaster Recovery](https://cloud.google.com/learn/what-is-disaster-recovery)
 - [Distributed Locking](https://martin.kleppmann.com/2016/02/08/how-to-do-distributed-locking.html)
+- [Gossip Protocol](http://highscalability.com/blog/2023/7/16/gossip-protocol-explained.html)
+- [Circuit Breaker](https://medium.com/geekculture/design-patterns-for-microservices-circuit-breaker-pattern-276249ffab33)
+- [Disaster Recovery](https://cloud.google.com/learn/what-is-disaster-recovery)
 - [Distributed Tracing](https://www.dynatrace.com/news/blog/what-is-distributed-tracing/)
-- [Checksum](https://www.lifewire.com/what-does-checksum-mean-2625825)
 
-### [System Design Interview Template](interview-template.md)
-## System Design Interview Problems
+## 🖇️ Architectural Patterns
+- [Client-Server Architecture](https://algomaster.io/learn/system-design/client-server-architecture)
+- [Microservices Architecture](https://medium.com/hashmapinc/the-what-why-and-how-of-a-microservices-architecture-4179579423a9)
+- [Serverless Architecture](https://blog.algomaster.io/p/2edeb23b-cfa5-4b24-845e-3f6f7a39d162)
+- [Event-Driven Architecture](https://www.confluent.io/learn/event-driven-architecture/)
+- [Peer-to-Peer (P2P) Architecture](https://www.spiceworks.com/tech/networking/articles/what-is-peer-to-peer/)
+
+## ⚖️ System Design Tradeoffs
+- [Top 15 Tradeoffs](https://blog.algomaster.io/p/system-design-top-15-trade-offs)
+- [Vertical vs Horizontal Scaling](https://algomaster.io/learn/system-design/vertical-vs-horizontal-scaling)
+- [Concurrency vs Parallelism](https://blog.algomaster.io/p/concurrency-vs-parallelism)
+- [Long Polling vs WebSockets](https://blog.algomaster.io/p/long-polling-vs-websockets)
+- [Batch vs Stream Processing](https://blog.algomaster.io/p/batch-processing-vs-stream-processing)
+- [Stateful vs Stateless Design](https://blog.algomaster.io/p/stateful-vs-stateless-architecture)
+- [Strong vs Eventual Consistency](https://blog.algomaster.io/p/strong-vs-eventual-consistency)
+- [Read-Through vs Write-Through Cache](https://blog.algomaster.io/p/59cae60d-9717-4e20-a59e-759e370db4e5)
+- [Push vs Pull Architecture](https://blog.algomaster.io/p/af5fe2fe-9a4f-4708-af43-184945a243af)
+- [REST vs RPC](https://blog.algomaster.io/p/106604fb-b746-41de-88fb-60e932b2ff68)
+- [Synchronous vs. asynchronous communications](https://blog.algomaster.io/p/aec1cebf-6060-45a7-8e00-47364ca70761)
+- [Latency vs Throughput](https://aws.amazon.com/compare/the-difference-between-throughput-and-latency/)
+
+## ✅ [How to Answer a System Design Interview Problem](https://algomaster.io/learn/system-design-interviews/answering-framework)
+
+## 💻 System Design Interview Problems
 ### Easy
-- [Design Leaderboard](https://systemdesign.one/leaderboard-system-design/)
-- [Design URL Shortener like TinyURL](https://www.youtube.com/watch?v=fMZMm_0ZhK4)
-- [Design Text Storage Service like Pastebin](https://www.youtube.com/watch?v=josjRSBqEBI)
+- [Design URL Shortener like TinyURL](https://algomaster.io/learn/system-design-interviews/design-url-shortener)
+- [Design Autocomplete for Search Engines](https://algomaster.io/learn/system-design-interviews/design-instagram)
+- [Design Load Balancer](https://algomaster.io/learn/system-design-interviews/design-load-balancer)
 - [Design Content Delivery Network (CDN)](https://www.youtube.com/watch?v=8zX0rue2Hic)
 - [Design Parking Garage](https://www.youtube.com/watch?v=NtMvNh0WFVM)
 - [Design Vending Machine](https://www.youtube.com/watch?v=D0kDMUgo27c)
 - [Design Distributed Key-Value Store](https://www.youtube.com/watch?v=rnZmdmlR-2M)
 - [Design Distributed Cache](https://www.youtube.com/watch?v=iuqZvajTOyA)
-- [Design Distributed Job Scheduler](https://towardsdatascience.com/ace-the-system-design-interview-job-scheduling-system-b25693817950)
 - [Design Authentication System](https://www.youtube.com/watch?v=uj_4vxm9u90)
 - [Design Unified Payments Interface (UPI)](https://www.youtube.com/watch?v=QpLy0_c_RXk)
 ### Medium
-- [Design Instagram](https://www.youtube.com/watch?v=VJpfO6KdyWE)
+- [Design WhatsApp](https://algomaster.io/learn/system-design-interviews/design-whatsapp)
+- [Design Spotify](https://algomaster.io/learn/system-design-interviews/design-spotify)
+- [Design Instagram](https://algomaster.io/learn/system-design-interviews/design-instagram)
+- [Design Notification Service](https://algomaster.io/learn/system-design-interviews/design-notification-service)
+- [Design Distributed Job Scheduler](https://blog.algomaster.io/p/design-a-distributed-job-scheduler)
 - [Design Tinder](https://www.youtube.com/watch?v=tndzLznxq40)
-- [Design WhatsApp](https://www.youtube.com/watch?v=vvhC64hQZMk)
 - [Design Facebook](https://www.youtube.com/watch?v=9-hjBGxuiEs)
 - [Design Twitter](https://www.youtube.com/watch?v=wYk0xPP_P_8)
 - [Design Reddit](https://www.youtube.com/watch?v=KYExYE_9nIY)
@@ -72,24 +123,17 @@
 - [Design Youtube](https://www.youtube.com/watch?v=jPKTo1iGQiE)
 - [Design Google Search](https://www.youtube.com/watch?v=CeGtqouT8eA)
 - [Design E-commerce Store like Amazon](https://www.youtube.com/watch?v=EpASu_1dUdE)
-- [Design Spotify](https://www.youtube.com/watch?v=_K-eupuDVEc)
 - [Design TikTok](https://www.youtube.com/watch?v=Z-0g_aJL5Fw)
 - [Design Shopify](https://www.youtube.com/watch?v=lEL4F_0J3l8)
 - [Design Airbnb](https://www.youtube.com/watch?v=YyOXt2MEkv4)
-- [Design Autocomplete for Search Engines](https://www.youtube.com/watch?v=us0qySiUsGU)
 - [Design Rate Limiter](https://www.youtube.com/watch?v=mhUQe4BKZXs)
 - [Design Distributed Message Queue like Kafka](https://www.youtube.com/watch?v=iJLL-KPqBpM)
 - [Design Flight Booking System](https://www.youtube.com/watch?v=qsGcfVGvFSs)
 - [Design Online Code Editor](https://www.youtube.com/watch?v=07jkn4jUtso)
-- [Design Stock Exchange System](https://www.youtube.com/watch?v=dUMWMZmMsVE)
 - [Design an Analytics Platform (Metrics & Logging)](https://www.youtube.com/watch?v=kIcq1_pBQSY)
-- [Design Notification Service](https://www.youtube.com/watch?v=CUwt9_l0DOg)
 - [Design Payment System](https://www.youtube.com/watch?v=olfaBgJrUBI)
-- [Design a Digital Wallet](https://www.youtube.com/watch?v=MCKdixWBnco)
+- [Design a Digital Wallet](https://www.youtube.com/watch?v=4ijjIUeq6hE)
 ### Hard
-- [Design Slack](https://systemdesign.one/slack-architecture/)
-- [Design Live Comments](https://systemdesign.one/live-comment-system-design/)
-- [Design Distributed Counter](https://systemdesign.one/distributed-counter-system-design/)
 - [Design Location Based Service like Yelp](https://www.youtube.com/watch?v=M4lR_Va97cQ)
 - [Design Uber](https://www.youtube.com/watch?v=umWABit-wbk)
 - [Design Food Delivery App like Doordash](https://www.youtube.com/watch?v=iRhSAR3ldTw)
@@ -103,22 +147,47 @@
 - [Design Distributed Cloud Storage like S3](https://www.youtube.com/watch?v=UmWtcgC96X8)
 - [Design Distributed Locking Service](https://www.youtube.com/watch?v=v7x75aN9liM)
 
-## Must-Read Engineering Articles
-- [How Discord stores trillions of messages](https://discord.com/blog/how-discord-stores-trillions-of-messages)
-- [Building In-Video Search](https://netflixtechblog.com/building-in-video-search-936766f0017c)
-- [How Canva scaled Media uploads from Zero to 50 Million per Day](https://www.canva.dev/blog/engineering/from-zero-to-50-million-uploads-per-day-scaling-media-at-canva/)
-- [How Airbnb avoids double payments in a Distributed Payments System](https://medium.com/airbnb-engineering/avoiding-double-payments-in-a-distributed-payments-system-2981f6b070bb)
-- [Stripe’s payments APIs - The first 10 years](https://stripe.com/blog/payment-api-design)
-- [Real time messaging at Slack](https://slack.engineering/real-time-messaging/)
+## 📇 Courses
+- [System Design Fundamentals](https://algomaster.io/learn/system-design/course-introduction)
+- [System Design Interviews](https://algomaster.io/learn/system-design-interviews/introduction)
 
-## Books
-- [Designing Data-Intensive Applications](https://www.amazon.com/Designing-Data-Intensive-Applications-Reliable-Maintainable/dp/B08VL1BLHB/)
-- [System Design Interview – An insider's guide](https://www.amazon.com/System-Design-Interview-insiders-Second/dp/B08CMF2CQF/)
+## 📩 Newsletters
+- [AlgoMaster Newsletter](https://blog.algomaster.io/)
 
-## YouTube Channels
+## 📚 Books
+- [Designing Data-Intensive Applications](https://www.amazon.in/dp/9352135245)
+
+## 📺 YouTube Channels
 - [Tech Dummies Narendra L](https://www.youtube.com/@TechDummiesNarendraL)
 - [Gaurav Sen](https://www.youtube.com/@gkcs)
 - [codeKarle](https://www.youtube.com/@codeKarle)
 - [ByteByteGo](https://www.youtube.com/@ByteByteGo)
 - [System Design Interview](https://www.youtube.com/@SystemDesignInterview)
+- [sudoCODE](https://www.youtube.com/@sudocode)
 - [Success in Tech](https://www.youtube.com/@SuccessinTech/videos)
+
+## 📜 Must-Read Engineering Articles
+- [How Discord stores trillions of messages](https://discord.com/blog/how-discord-stores-trillions-of-messages)
+- [Building In-Video Search at Netflix](https://netflixtechblog.com/building-in-video-search-936766f0017c)
+- [How Canva scaled Media uploads from Zero to 50 Million per Day](https://www.canva.dev/blog/engineering/from-zero-to-50-million-uploads-per-day-scaling-media-at-canva/)
+- [How Airbnb avoids double payments in a Distributed Payments System](https://medium.com/airbnb-engineering/avoiding-double-payments-in-a-distributed-payments-system-2981f6b070bb)
+- [Stripe’s payments APIs - The first 10 years](https://stripe.com/blog/payment-api-design)
+- [Real time messaging at Slack](https://slack.engineering/real-time-messaging/)
+
+## 🗞️ Must-Read Distributed Systems Papers
+- [Paxos: The Part-Time Parliament](https://lamport.azurewebsites.net/pubs/lamport-paxos.pdf)
+- [MapReduce: Simplified Data Processing on Large Clusters](https://research.google.com/archive/mapreduce-osdi04.pdf)
+- [The Google File System](https://static.googleusercontent.com/media/research.google.com/en//archive/gfs-sosp2003.pdf)
+- [Dynamo: Amazon’s Highly Available Key-value Store](https://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf)
+- [Kafka: a Distributed Messaging System for Log Processing](https://notes.stephenholiday.com/Kafka.pdf)
+- [Spanner: Google’s Globally-Distributed Database](https://static.googleusercontent.com/media/research.google.com/en//archive/spanner-osdi2012.pdf)
+- [Bigtable: A Distributed Storage System for Structured Data](https://static.googleusercontent.com/media/research.google.com/en//archive/bigtable-osdi06.pdf)
+- [ZooKeeper: Wait-free coordination for Internet-scale systems](https://www.usenix.org/legacy/event/usenix10/tech/full_papers/Hunt.pdf)
+- [The Log-Structured Merge-Tree (LSM-Tree)](https://www.cs.umb.edu/~poneil/lsmtree.pdf)
+- [The Chubby lock service for loosely-coupled distributed systems](https://static.googleusercontent.com/media/research.google.com/en//archive/chubby-osdi06.pdf)
+
+---
+
+<p align="center">
+  <i>If you find this resource helpful, please give it a star ⭐️ and share it with others!</i>
+</p>
```
BIN  diagrams/interview-template.png (new file, 1.5 MiB) — binary file not shown
BIN  diagrams/system-design-github.png (new file, 95 KiB) — binary file not shown
ConsistentHashing.java (new file, +86)

```java
package implementations.java.consistent_hashing;

import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.*;

public class ConsistentHashing {
    private final int numReplicas;            // Number of virtual nodes per server
    private final TreeMap<Long, String> ring; // Hash ring storing virtual nodes
    private final Set<String> servers;        // Set of physical servers

    public ConsistentHashing(List<String> servers, int numReplicas) {
        this.numReplicas = numReplicas;
        this.ring = new TreeMap<>();
        this.servers = new HashSet<>();

        // Add each server to the hash ring
        for (String server : servers) {
            addServer(server);
        }
    }

    private long hash(String key) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            md.update(key.getBytes());
            byte[] digest = md.digest();
            // Use the first 4 bytes of the MD5 digest as an unsigned 32-bit position
            return ((long) (digest[0] & 0xFF) << 24) |
                   ((long) (digest[1] & 0xFF) << 16) |
                   ((long) (digest[2] & 0xFF) << 8) |
                   ((long) (digest[3] & 0xFF));
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException("MD5 algorithm not found", e);
        }
    }

    public void addServer(String server) {
        servers.add(server);
        for (int i = 0; i < numReplicas; i++) {
            long hash = hash(server + "-" + i); // Unique hash for each virtual node
            ring.put(hash, server);
        }
    }

    public void removeServer(String server) {
        if (servers.remove(server)) {
            for (int i = 0; i < numReplicas; i++) {
                long hash = hash(server + "-" + i);
                ring.remove(hash);
            }
        }
    }

    public String getServer(String key) {
        if (ring.isEmpty()) {
            return null; // No servers available
        }

        long hash = hash(key);
        // Find the closest server in a clockwise direction
        Map.Entry<Long, String> entry = ring.ceilingEntry(hash);
        if (entry == null) {
            // If we exceed the highest node, wrap around to the first node
            entry = ring.firstEntry();
        }
        return entry.getValue();
    }

    public static void main(String[] args) {
        List<String> servers = Arrays.asList("S0", "S1", "S2", "S3", "S4", "S5");
        ConsistentHashing ch = new ConsistentHashing(servers, 3);

        // Assign requests (keys) to servers
        System.out.println("UserA is assigned to: " + ch.getServer("UserA"));
        System.out.println("UserB is assigned to: " + ch.getServer("UserB"));

        // Add a new server dynamically
        ch.addServer("S6");
        System.out.println("UserA is now assigned to: " + ch.getServer("UserA"));

        // Remove a server dynamically
        ch.removeServer("S2");
        System.out.println("UserB is now assigned to: " + ch.getServer("UserB"));
    }
}
```
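The heart of `getServer` above is a ceiling search on a sorted map with clockwise wrap-around. A minimal, self-contained sketch of just that lookup — the node positions (100/200/300) are made-up hash values chosen for readability, not outputs of the MD5 hash used above:

```java
import java.util.TreeMap;

public class RingLookupDemo {
    static String lookup(TreeMap<Integer, String> ring, int keyHash) {
        var entry = ring.ceilingEntry(keyHash); // first node at or after the key, i.e. clockwise
        if (entry == null) {
            entry = ring.firstEntry();          // past the highest node: wrap around the ring
        }
        return entry.getValue();
    }

    public static void main(String[] args) {
        TreeMap<Integer, String> ring = new TreeMap<>();
        ring.put(100, "S0");
        ring.put(200, "S1");
        ring.put(300, "S2");
        System.out.println(lookup(ring, 150)); // next node clockwise from 150 is 200 -> S1
        System.out.println(lookup(ring, 350)); // nothing above 350, wraps to 100 -> S0
    }
}
```

Because only keys between a removed node and its predecessor change owner, adding or removing one node remaps roughly 1/N of the keys rather than nearly all of them.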
implementations/java/load_balancing_algorithms/IPHash.java (new file, +25)

```java
import java.util.List;

public class IPHash {
    private List<String> servers;

    public IPHash(List<String> servers) {
        this.servers = servers;
    }

    public String getNextServer(String clientIp) {
        int hash = clientIp.hashCode();
        // floorMod keeps the index non-negative even when hashCode() is negative;
        // Math.abs(hash) % size breaks for hash == Integer.MIN_VALUE.
        int serverIndex = Math.floorMod(hash, servers.size());
        return servers.get(serverIndex);
    }

    public static void main(String[] args) {
        List<String> servers = List.of("Server1", "Server2", "Server3");
        IPHash ipHash = new IPHash(servers);

        List<String> clientIps = List.of("192.168.0.1", "192.168.0.2", "192.168.0.3");
        for (String ip : clientIps) {
            System.out.println(ip + " is mapped to " + ipHash.getNextServer(ip));
        }
    }
}
```
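The property IP hashing buys you is determinism: the same client IP always lands on the same server (session stickiness). The cost is that the mapping depends on the pool size, so resizing the pool can remap most clients — the problem consistent hashing addresses. A small sketch of the sticky behavior, using the same `floorMod` indexing as the class above (the IP string is an arbitrary example):

```java
import java.util.List;

public class IPHashDemo {
    // Same indexing scheme as IPHash.getNextServer above
    static String pick(List<String> servers, String clientIp) {
        return servers.get(Math.floorMod(clientIp.hashCode(), servers.size()));
    }

    public static void main(String[] args) {
        List<String> pool = List.of("Server1", "Server2", "Server3");
        // Repeated requests from one IP always hit the same server
        boolean sticky = pick(pool, "10.0.0.7").equals(pick(pool, "10.0.0.7"));
        System.out.println("sticky = " + sticky); // sticky = true
    }
}
```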
LeastConnections.java (new file, +36)

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LeastConnections {
    private Map<String, Integer> serverConnections;

    public LeastConnections(List<String> servers) {
        serverConnections = new HashMap<>();
        for (String server : servers) {
            serverConnections.put(server, 0);
        }
    }

    public String getNextServer() {
        String server = serverConnections.entrySet().stream()
                .min(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse(null);
        if (server != null) {
            // Count the new active connection; without this increment the
            // counts never change and every call returns the same server.
            serverConnections.merge(server, 1, Integer::sum);
        }
        return server;
    }

    public void releaseConnection(String server) {
        serverConnections.computeIfPresent(server, (k, v) -> v > 0 ? v - 1 : 0);
    }

    public static void main(String[] args) {
        List<String> servers = List.of("Server1", "Server2", "Server3");
        LeastConnections leastConnectionsLB = new LeastConnections(servers);

        for (int i = 0; i < 6; i++) {
            String server = leastConnectionsLB.getNextServer();
            System.out.println(server);
            leastConnectionsLB.releaseConnection(server);
        }
    }
}
```
@@ -0,0 +1,58 @@
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class LeastResponseTime {
    private List<String> servers;
    private List<Double> responseTimes;

    public LeastResponseTime(List<String> servers) {
        this.servers = servers;
        this.responseTimes = new ArrayList<>(servers.size());
        for (int i = 0; i < servers.size(); i++)
            responseTimes.add(0.0);
    }

    public String getNextServer() {
        double minResponseTime = responseTimes.get(0);
        int minIndex = 0;
        for (int i = 1; i < responseTimes.size(); i++) {
            if (responseTimes.get(i) < minResponseTime) {
                minResponseTime = responseTimes.get(i);
                minIndex = i;
            }
        }
        return servers.get(minIndex);
    }

    public void updateResponseTime(String server, double responseTime) {
        int index = servers.indexOf(server);
        responseTimes.set(index, responseTime);
    }

    public static double simulateResponseTime(String server) {
        // Simulating response time with a random delay between 0.1s and 1.0s
        Random random = new Random();
        double delay = 0.1 + (1.0 - 0.1) * random.nextDouble();
        try {
            Thread.sleep((long) (delay * 1000));
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // Restore the interrupt flag instead of swallowing it
        }
        return delay;
    }

    public static void main(String[] args) {
        List<String> servers = List.of("Server1", "Server2", "Server3");
        LeastResponseTime leastResponseTimeLB = new LeastResponseTime(servers);

        for (int i = 0; i < 6; i++) {
            String server = leastResponseTimeLB.getNextServer();
            System.out.println("Request " + (i + 1) + " -> " + server);
            double responseTime = simulateResponseTime(server);
            leastResponseTimeLB.updateResponseTime(server, responseTime);
            System.out.println("Response Time: " + String.format("%.2f", responseTime) + "s");
        }
    }
}
@@ -0,0 +1,26 @@
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobin {
    private List<String> servers;
    private AtomicInteger index;

    public RoundRobin(List<String> servers) {
        this.servers = servers;
        this.index = new AtomicInteger(-1);
    }

    public String getNextServer() {
        // floorMod keeps the index non-negative even after the counter overflows
        int currentIndex = Math.floorMod(index.incrementAndGet(), servers.size());
        return servers.get(currentIndex);
    }

    public static void main(String[] args) {
        List<String> servers = List.of("Server1", "Server2", "Server3");
        RoundRobin roundRobinLB = new RoundRobin(servers);

        for (int i = 0; i < 6; i++) {
            System.out.println(roundRobinLB.getNextServer());
        }
    }
}
@@ -0,0 +1,44 @@
import java.util.List;

public class WeightedRoundRobin {
    private List<String> servers;
    private List<Integer> weights;
    private int currentIndex;
    private int currentWeight;

    public WeightedRoundRobin(List<String> servers, List<Integer> weights) {
        this.servers = servers;
        this.weights = weights;
        this.currentIndex = -1;
        this.currentWeight = 0;
    }

    public String getNextServer() {
        while (true) {
            currentIndex = (currentIndex + 1) % servers.size();
            if (currentIndex == 0) {
                currentWeight--;
                if (currentWeight <= 0) {
                    currentWeight = getMaxWeight();
                }
            }
            if (weights.get(currentIndex) >= currentWeight) {
                return servers.get(currentIndex);
            }
        }
    }

    private int getMaxWeight() {
        return weights.stream().max(Integer::compare).orElse(0);
    }

    public static void main(String[] args) {
        List<String> servers = List.of("Server1", "Server2", "Server3");
        List<Integer> weights = List.of(5, 1, 1);
        WeightedRoundRobin weightedRoundRobinLB = new WeightedRoundRobin(servers, weights);

        for (int i = 0; i < 7; i++) {
            System.out.println(weightedRoundRobinLB.getNextServer());
        }
    }
}
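For the weights (5, 1, 1) used in main above, this loop hands the first five requests to Server1 before Server2 and Server3 each receive one. A Python transliteration of the same selection loop makes the schedule easy to verify (a sketch for illustration, not part of the diff):

```python
class WeightedRoundRobin:
    """Same selection loop as the Java class, transliterated for a quick check."""

    def __init__(self, servers, weights):
        self.servers = servers
        self.weights = weights
        self.current_index = -1
        self.current_weight = 0

    def get_next_server(self):
        while True:
            self.current_index = (self.current_index + 1) % len(self.servers)
            if self.current_index == 0:
                # Each full pass over the servers lowers the weight threshold
                self.current_weight -= 1
                if self.current_weight <= 0:
                    self.current_weight = max(self.weights)
            if self.weights[self.current_index] >= self.current_weight:
                return self.servers[self.current_index]

lb = WeightedRoundRobin(["Server1", "Server2", "Server3"], [5, 1, 1])
schedule = [lb.get_next_server() for _ in range(7)]
print(schedule)  # ['Server1', 'Server1', 'Server1', 'Server1', 'Server1', 'Server2', 'Server3']
```

Note that with these weights the schedule is front-loaded rather than interleaved; a smoother spread would require the "smooth weighted round-robin" variant.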
implementations/java/rate_limiting/FixedWindowCounter.java (new file, 33 lines)

package implementations.java.rate_limiting;

import java.time.Instant;

public class FixedWindowCounter {
    private final long windowSizeInSeconds; // Size of each window in seconds
    private final long maxRequestsPerWindow; // Maximum number of requests allowed per window
    private long currentWindowStart; // Start time of the current window
    private long requestCount; // Number of requests in the current window

    public FixedWindowCounter(long windowSizeInSeconds, long maxRequestsPerWindow) {
        this.windowSizeInSeconds = windowSizeInSeconds;
        this.maxRequestsPerWindow = maxRequestsPerWindow;
        this.currentWindowStart = Instant.now().getEpochSecond();
        this.requestCount = 0;
    }

    public synchronized boolean allowRequest() {
        long now = Instant.now().getEpochSecond();

        // Check if we've moved to a new window
        if (now - currentWindowStart >= windowSizeInSeconds) {
            currentWindowStart = now; // Start a new window
            requestCount = 0; // Reset the count for the new window
        }

        if (requestCount < maxRequestsPerWindow) {
            requestCount++; // Increment the count for this window
            return true; // Allow the request
        }
        return false; // We've exceeded the limit for this window, deny the request
    }
}
implementations/java/rate_limiting/LeakyBucket.java (new file, 42 lines)

package implementations.java.rate_limiting;

import java.time.Instant;
import java.util.LinkedList;
import java.util.Queue;

public class LeakyBucket {
    private final long capacity; // Maximum number of requests the bucket can hold
    private final double leakRate; // Rate at which requests leak out of the bucket (requests per second)
    private final Queue<Instant> bucket; // Queue to hold timestamps of requests
    private Instant lastLeakTimestamp; // Last time we leaked from the bucket

    public LeakyBucket(long capacity, double leakRate) {
        this.capacity = capacity;
        this.leakRate = leakRate;
        this.bucket = new LinkedList<>();
        this.lastLeakTimestamp = Instant.now();
    }

    public synchronized boolean allowRequest() {
        leak(); // First, leak out any requests based on elapsed time

        if (bucket.size() < capacity) {
            bucket.offer(Instant.now()); // Add the new request to the bucket
            return true; // Allow the request
        }
        return false; // Bucket is full, deny the request
    }

    private void leak() {
        Instant now = Instant.now();
        long elapsedMillis = now.toEpochMilli() - lastLeakTimestamp.toEpochMilli();
        int leakedItems = (int) (elapsedMillis * leakRate / 1000.0); // Calculate how many items should have leaked

        // Remove the leaked items from the bucket
        for (int i = 0; i < leakedItems && !bucket.isEmpty(); i++) {
            bucket.poll();
        }

        lastLeakTimestamp = now;
    }
}
implementations/java/rate_limiting/SlidingWindowCounter.java (new file, 42 lines)

package implementations.java.rate_limiting;

import java.time.Instant;

public class SlidingWindowCounter {
    private final long windowSizeInSeconds; // Size of the sliding window in seconds
    private final long maxRequestsPerWindow; // Maximum number of requests allowed in the window
    private long currentWindowStart; // Start time of the current window
    private long previousWindowCount; // Number of requests in the previous window
    private long currentWindowCount; // Number of requests in the current window

    public SlidingWindowCounter(long windowSizeInSeconds, long maxRequestsPerWindow) {
        this.windowSizeInSeconds = windowSizeInSeconds;
        this.maxRequestsPerWindow = maxRequestsPerWindow;
        this.currentWindowStart = Instant.now().getEpochSecond();
        this.previousWindowCount = 0;
        this.currentWindowCount = 0;
    }

    public synchronized boolean allowRequest() {
        long now = Instant.now().getEpochSecond();
        long timePassedInWindow = now - currentWindowStart;

        // Check if we've moved to a new window
        if (timePassedInWindow >= windowSizeInSeconds) {
            previousWindowCount = currentWindowCount;
            currentWindowCount = 0;
            currentWindowStart = now;
            timePassedInWindow = 0;
        }

        // Calculate the weighted count of requests
        double weightedCount = previousWindowCount * ((windowSizeInSeconds - timePassedInWindow) / (double) windowSizeInSeconds)
                + currentWindowCount;

        if (weightedCount < maxRequestsPerWindow) {
            currentWindowCount++; // Increment the count for this window
            return true; // Allow the request
        }
        return false; // We've exceeded the limit, deny the request
    }
}
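The weighted count above interpolates between the two windows: the previous window's count contributes in proportion to how much of it still overlaps the sliding window. A standalone check of that arithmetic (Python sketch; the numbers are illustrative):

```python
def weighted_count(prev_count, curr_count, window_size, elapsed):
    # The previous window is weighted by the fraction of it still
    # covered by the sliding window; the current window counts in full.
    return prev_count * ((window_size - elapsed) / window_size) + curr_count

# 10 requests in the previous 60s window, 2 so far, 15s into the current window:
# 45s of the previous window still overlaps -> 10 * 45/60 + 2 = 9.5
print(weighted_count(10, 2, 60, 15))  # 9.5
```

So with a limit of 10 per window, this request would still be allowed (9.5 < 10), even though 12 raw requests were logged across the two windows.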
implementations/java/rate_limiting/SlidingWindowLog.java (new file, 33 lines)

package implementations.java.rate_limiting;

import java.time.Instant;
import java.util.LinkedList;
import java.util.Queue;

public class SlidingWindowLog {
    private final long windowSizeInSeconds; // Size of the sliding window in seconds
    private final long maxRequestsPerWindow; // Maximum number of requests allowed in the window
    private final Queue<Long> requestLog; // Log of request timestamps

    public SlidingWindowLog(long windowSizeInSeconds, long maxRequestsPerWindow) {
        this.windowSizeInSeconds = windowSizeInSeconds;
        this.maxRequestsPerWindow = maxRequestsPerWindow;
        this.requestLog = new LinkedList<>();
    }

    public synchronized boolean allowRequest() {
        long now = Instant.now().getEpochSecond();
        long windowStart = now - windowSizeInSeconds;

        // Remove timestamps that are outside of the current window
        while (!requestLog.isEmpty() && requestLog.peek() <= windowStart) {
            requestLog.poll();
        }

        if (requestLog.size() < maxRequestsPerWindow) {
            requestLog.offer(now); // Log this request
            return true; // Allow the request
        }
        return false; // We've exceeded the limit for this window, deny the request
    }
}
implementations/java/rate_limiting/TokenBucket.java (new file, 36 lines)

package implementations.java.rate_limiting;

import java.time.Instant;

public class TokenBucket {
    private final long capacity; // Maximum number of tokens the bucket can hold
    private final double fillRate; // Rate at which tokens are added to the bucket (tokens per second)
    private double tokens; // Current number of tokens in the bucket
    private Instant lastRefillTimestamp; // Last time we refilled the bucket

    public TokenBucket(long capacity, double fillRate) {
        this.capacity = capacity;
        this.fillRate = fillRate;
        this.tokens = capacity; // Start with a full bucket
        this.lastRefillTimestamp = Instant.now();
    }

    public synchronized boolean allowRequest(int tokens) {
        refill(); // First, add any new tokens based on elapsed time

        if (this.tokens < tokens) {
            return false; // Not enough tokens, deny the request
        }

        this.tokens -= tokens; // Consume the tokens
        return true; // Allow the request
    }

    private void refill() {
        Instant now = Instant.now();
        // Calculate how many tokens to add based on the time elapsed
        double tokensToAdd = (now.toEpochMilli() - lastRefillTimestamp.toEpochMilli()) * fillRate / 1000.0;
        this.tokens = Math.min(capacity, this.tokens + tokensToAdd); // Add tokens, but don't exceed capacity
        this.lastRefillTimestamp = now;
    }
}
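TokenBucket refills lazily: each request credits elapsedMillis * fillRate / 1000 tokens, capped at capacity, instead of running a background timer. The same arithmetic as a deterministic Python check (illustrative numbers, not from the diff):

```python
def refill(tokens, capacity, fill_rate, elapsed_ms):
    # Credit elapsed_ms * fill_rate / 1000 tokens, never exceeding capacity
    return min(capacity, tokens + elapsed_ms * fill_rate / 1000.0)

print(refill(2.0, 10, 5.0, 1000))  # 7.0 -> one second at 5 tokens/s adds 5 tokens
print(refill(8.0, 10, 5.0, 1000))  # 10  -> 8 + 5 would overflow, so capped at capacity
```

The cap is what bounds burst size: a client that stays idle can accumulate at most `capacity` tokens, no matter how long it waits.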
@@ -0,0 +1,80 @@
import hashlib
import bisect

class ConsistentHashing:
    def __init__(self, servers, num_replicas=3):
        """
        Initializes the consistent hashing ring.

        - servers: List of initial server names (e.g., ["S0", "S1", "S2"])
        - num_replicas: Number of virtual nodes per server for better load balancing
        """
        self.num_replicas = num_replicas  # Number of virtual nodes per server
        self.ring = {}  # Hash ring storing virtual node mappings
        self.sorted_keys = []  # Sorted list of hash values (positions) on the ring
        self.servers = set()  # Set of physical servers (used for tracking)

        # Add each server to the hash ring
        for server in servers:
            self.add_server(server)

    def _hash(self, key):
        """Computes a hash value for a given key using MD5."""
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_server(self, server):
        """
        Adds a server to the hash ring along with its virtual nodes.

        - Each virtual node is a different hash of the server ID to distribute load.
        - The server is hashed multiple times and placed at different positions.
        """
        self.servers.add(server)
        for i in range(self.num_replicas):  # Create multiple virtual nodes
            hash_val = self._hash(f"{server}-{i}")  # Unique hash for each virtual node
            self.ring[hash_val] = server  # Map hash to the server
            bisect.insort(self.sorted_keys, hash_val)  # Maintain a sorted list for efficient lookup

    def remove_server(self, server):
        """
        Removes a server and all its virtual nodes from the hash ring.
        """
        if server in self.servers:
            self.servers.remove(server)
            for i in range(self.num_replicas):
                hash_val = self._hash(f"{server}-{i}")  # Remove each virtual node's hash
                self.ring.pop(hash_val, None)  # Delete from hash ring
                self.sorted_keys.remove(hash_val)  # Remove from sorted key list

    def get_server(self, key):
        """
        Finds the closest server for a given key.

        - Hash the key to get its position on the ring.
        - Move clockwise to find the nearest server.
        - If it exceeds the last node, wrap around to the first node.
        """
        if not self.ring:
            return None  # No servers available

        hash_val = self._hash(key)  # Hash the key
        index = bisect.bisect(self.sorted_keys, hash_val) % len(self.sorted_keys)  # Locate nearest server
        return self.ring[self.sorted_keys[index]]  # Return the assigned server

# ----------------- Usage Example -------------------

# Step 1: Initialize Consistent Hashing with servers
servers = ["S0", "S1", "S2", "S3", "S4", "S5"]
ch = ConsistentHashing(servers)

# Step 2: Assign requests (keys) to servers
print(ch.get_server("UserA"))  # Maps UserA to a server
print(ch.get_server("UserB"))  # Maps UserB to a server

# Step 3: Add a new server dynamically
ch.add_server("S6")
print(ch.get_server("UserA"))  # Might be reassigned if affected

# Step 4: Remove a server dynamically
ch.remove_server("S2")
print(ch.get_server("UserB"))  # Might be reassigned if affected
implementations/python/load_balancing_algorithms/ip_hash.py (new file, 19 lines)

import hashlib

class IPHash:
    def __init__(self, servers):
        self.servers = servers

    def get_next_server(self, client_ip):
        hash_value = hashlib.md5(client_ip.encode()).hexdigest()
        index = int(hash_value, 16) % len(self.servers)
        return self.servers[index]

# Example usage
servers = ["Server1", "Server2", "Server3"]
load_balancer = IPHash(servers)

client_ips = ["192.168.0.1", "192.168.0.2", "192.168.0.3", "192.168.0.4"]
for ip in client_ips:
    server = load_balancer.get_next_server(ip)
    print(f"Client {ip} -> {server}")
@@ -0,0 +1,28 @@
import random

class LeastConnections:
    def __init__(self, servers):
        self.servers = {server: 0 for server in servers}

    def get_next_server(self):
        # Find the minimum number of connections
        min_connections = min(self.servers.values())
        # Get all servers with the minimum number of connections
        least_loaded_servers = [server for server, connections in self.servers.items() if connections == min_connections]
        # Select a random server from the least loaded servers
        selected_server = random.choice(least_loaded_servers)
        self.servers[selected_server] += 1
        return selected_server

    def release_connection(self, server):
        if self.servers[server] > 0:
            self.servers[server] -= 1

# Example usage
servers = ["Server1", "Server2", "Server3"]
load_balancer = LeastConnections(servers)

for i in range(6):
    server = load_balancer.get_next_server()
    print(f"Request {i + 1} -> {server}")
    load_balancer.release_connection(server)
@@ -0,0 +1,34 @@
import time
import random

class LeastResponseTime:
    def __init__(self, servers):
        self.servers = servers
        self.response_times = [0] * len(servers)

    def get_next_server(self):
        min_response_time = min(self.response_times)
        min_index = self.response_times.index(min_response_time)
        return self.servers[min_index]

    def update_response_time(self, server, response_time):
        index = self.servers.index(server)
        self.response_times[index] = response_time

# Simulated server response time function
def simulate_response_time():
    # Simulating response time with a random delay
    delay = random.uniform(0.1, 1.0)
    time.sleep(delay)
    return delay

# Example usage
servers = ["Server1", "Server2", "Server3"]
load_balancer = LeastResponseTime(servers)

for i in range(6):
    server = load_balancer.get_next_server()
    print(f"Request {i + 1} -> {server}")
    response_time = simulate_response_time()
    load_balancer.update_response_time(server, response_time)
    print(f"Response Time: {response_time:.2f}s")
@@ -0,0 +1,16 @@
class RoundRobin:
    def __init__(self, servers):
        self.servers = servers
        self.current_index = -1

    def get_next_server(self):
        self.current_index = (self.current_index + 1) % len(self.servers)
        return self.servers[self.current_index]

# Example usage
servers = ["Server1", "Server2", "Server3"]
load_balancer = RoundRobin(servers)

for i in range(6):
    server = load_balancer.get_next_server()
    print(f"Request {i + 1} -> {server}")
@@ -0,0 +1,25 @@
class WeightedRoundRobin:
    def __init__(self, servers, weights):
        self.servers = servers
        self.weights = weights
        self.current_index = -1
        self.current_weight = 0

    def get_next_server(self):
        while True:
            self.current_index = (self.current_index + 1) % len(self.servers)
            if self.current_index == 0:
                self.current_weight -= 1
                if self.current_weight <= 0:
                    self.current_weight = max(self.weights)
            if self.weights[self.current_index] >= self.current_weight:
                return self.servers[self.current_index]

# Example usage
servers = ["Server1", "Server2", "Server3"]
weights = [5, 1, 1]
load_balancer = WeightedRoundRobin(servers, weights)

for i in range(7):
    server = load_balancer.get_next_server()
    print(f"Request {i + 1} -> {server}")
implementations/python/rate_limiting/fixed_window_counter.py (new file, 33 lines)

import time

class FixedWindowCounter:
    def __init__(self, window_size, max_requests):
        self.window_size = window_size  # Size of the window in seconds
        self.max_requests = max_requests  # Maximum number of requests per window
        self.current_window = time.time() // window_size
        self.request_count = 0

    def allow_request(self):
        current_time = time.time()
        window = current_time // self.window_size

        # If we've moved to a new window, reset the counter
        if window != self.current_window:
            self.current_window = window
            self.request_count = 0

        # Check if we're still within the limit for this window
        if self.request_count < self.max_requests:
            self.request_count += 1
            return True
        return False

# Usage example
limiter = FixedWindowCounter(window_size=60, max_requests=5)  # 5 requests per minute

for _ in range(10):
    print(limiter.allow_request())  # Will print True for the first 5 requests, then False
    time.sleep(0.1)  # Wait a bit between requests

time.sleep(60)  # Wait for the window to reset
print(limiter.allow_request())  # True
implementations/python/rate_limiting/leaky_bucket.py (new file, 36 lines)

from collections import deque
import time

class LeakyBucket:
    def __init__(self, capacity, leak_rate):
        self.capacity = capacity  # Maximum number of requests in the bucket
        self.leak_rate = leak_rate  # Rate at which requests leak (requests/second)
        self.bucket = deque()  # Queue to hold request timestamps
        self.last_leak = time.time()  # Last time we leaked from the bucket

    def allow_request(self):
        now = time.time()
        # Simulate leaking from the bucket
        leak_time = now - self.last_leak
        leaked = int(leak_time * self.leak_rate)
        if leaked > 0:
            # Remove the leaked requests from the bucket
            for _ in range(min(leaked, len(self.bucket))):
                self.bucket.popleft()
            self.last_leak = now

        # Check if there's capacity and add the new request
        if len(self.bucket) < self.capacity:
            self.bucket.append(now)
            return True
        return False

# Usage example
limiter = LeakyBucket(capacity=5, leak_rate=1)  # 5 requests, leak 1 per second

for _ in range(10):
    print(limiter.allow_request())  # Will print True for the first 5 requests, then False
    time.sleep(0.1)  # Wait a bit between requests

time.sleep(1)  # Wait for the bucket to leak
print(limiter.allow_request())  # True
@@ -0,0 +1,39 @@
import time

class SlidingWindowCounter:
    def __init__(self, window_size, max_requests):
        self.window_size = window_size  # Size of the sliding window in seconds
        self.max_requests = max_requests  # Maximum number of requests per window
        self.current_window = time.time() // window_size
        self.request_count = 0
        self.previous_count = 0

    def allow_request(self):
        now = time.time()
        window = now // self.window_size

        # If we've moved to a new window, update the counts
        if window != self.current_window:
            self.previous_count = self.request_count
            self.request_count = 0
            self.current_window = window

        # Calculate the weighted request count
        window_elapsed = (now % self.window_size) / self.window_size
        threshold = self.previous_count * (1 - window_elapsed) + self.request_count

        # Check if we're within the limit
        if threshold < self.max_requests:
            self.request_count += 1
            return True
        return False

# Usage example
limiter = SlidingWindowCounter(window_size=60, max_requests=5)  # 5 requests per minute

for _ in range(10):
    print(limiter.allow_request())  # Will print True for the first 5 requests, then gradually become False
    time.sleep(0.1)  # Wait a bit between requests

time.sleep(30)  # Wait for half the window to pass
print(limiter.allow_request())  # Might be True or False depending on the exact timing
implementations/python/rate_limiting/sliding_window_log.py (new file, 31 lines)

import time
from collections import deque

class SlidingWindowLog:
    def __init__(self, window_size, max_requests):
        self.window_size = window_size  # Size of the sliding window in seconds
        self.max_requests = max_requests  # Maximum number of requests per window
        self.request_log = deque()  # Log to keep track of request timestamps

    def allow_request(self):
        now = time.time()

        # Remove timestamps that are outside the current window
        while self.request_log and now - self.request_log[0] >= self.window_size:
            self.request_log.popleft()

        # Check if we're still within the limit
        if len(self.request_log) < self.max_requests:
            self.request_log.append(now)
            return True
        return False

# Usage example
limiter = SlidingWindowLog(window_size=60, max_requests=5)  # 5 requests per minute

for _ in range(10):
    print(limiter.allow_request())  # Will print True for the first 5 requests, then False
    time.sleep(0.1)  # Wait a bit between requests

time.sleep(60)  # Wait for the window to slide
print(limiter.allow_request())  # True
implementations/python/rate_limiting/token_bucket.py (new file, 31 lines)

import time

class TokenBucket:
    def __init__(self, capacity, fill_rate):
        self.capacity = capacity  # Maximum number of tokens the bucket can hold
        self.fill_rate = fill_rate  # Rate at which tokens are added (tokens/second)
        self.tokens = capacity  # Current token count, start with a full bucket
        self.last_time = time.time()  # Last time we checked the token count

    def allow_request(self, tokens=1):
        now = time.time()
        # Calculate how many tokens have been added since the last check
        time_passed = now - self.last_time
        self.tokens = min(self.capacity, self.tokens + time_passed * self.fill_rate)
        self.last_time = now

        # Check if we have enough tokens for this request
        if self.tokens >= tokens:
            self.tokens -= tokens
            return True
        return False

# Usage example
limiter = TokenBucket(capacity=10, fill_rate=1)  # 10 tokens, refill 1 token per second

for _ in range(15):
    print(limiter.allow_request())  # True for roughly the first 10 requests (a few refill mid-loop), then False
    time.sleep(0.1)  # Wait a bit between requests

time.sleep(5)  # Wait for the bucket to refill
print(limiter.allow_request())  # True
@@ -1,90 +0,0 @@
A 7-Step Framework to answer most System Design Interview Problems:

Step 1: Clarify Requirements

Functional Requirements:
- What is the scope of the system?
- What use cases / key features do we need to support?

Non-functional Requirements:
- Consistency vs Availability?
- How big is the user base?
- What is the read/write ratio?
- What is the expected latency and throughput?
Step 2: Capacity Estimation
- Estimate the number of read and write requests.
- Estimate the amount of database and cache storage required.
- Estimate the network bandwidth required.
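As a back-of-envelope illustration of Step 2 (all numbers below are invented for the example, assuming a URL-shortener-style service):

```python
# Hypothetical inputs -- not from the text, chosen only to show the arithmetic
daily_active_users = 10_000_000
writes_per_user_per_day = 0.1   # new records created per user per day
read_write_ratio = 100          # reads per write

seconds_per_day = 86_400
write_qps = daily_active_users * writes_per_user_per_day / seconds_per_day
read_qps = write_qps * read_write_ratio

bytes_per_record = 500          # id, URL, metadata
storage_per_year = daily_active_users * writes_per_user_per_day * 365 * bytes_per_record

print(f"writes/s ~ {write_qps:.0f}, reads/s ~ {read_qps:.0f}")
print(f"storage/year ~ {storage_per_year / 1e9:.1f} GB")
```

Interviewers generally care about the order of magnitude, not the exact figure, so rounding aggressively is fine.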
Step 3: API Design
- List the System APIs expected from the system based on the use cases.
- Define the API endpoints and request/response format.
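For instance, a hypothetical URL shortener (endpoint names and fields are invented for illustration) might define its request/response format like this:

```python
# Hypothetical API sketch -- endpoints and fields are illustrative only
api_spec = {
    "POST /links": {
        "request":  {"long_url": "https://example.com/some/long/path"},
        "response": {"short_code": "abc123", "created_at": "2024-01-01T00:00:00Z"},
    },
    "GET /links/{short_code}": {
        "request":  {},
        "response": {"long_url": "https://example.com/some/long/path"},
        "notes": "In practice usually answered with a 301/302 redirect, not a JSON body.",
    },
}

for endpoint, spec in api_spec.items():
    print(endpoint, "->", sorted(spec["response"].keys()))
```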
Step 4: Database Design
- Choose the database type based on the needs. For example: SQL or NoSQL.
- Define the database schema.
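As a sketch of what "define the schema" can look like, assuming the same hypothetical URL shortener and a relational choice (the table and columns are illustrative, not from the text):

```python
import sqlite3

# In-memory SQLite database; schema is a made-up example for illustration
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE links (
        id         INTEGER PRIMARY KEY AUTOINCREMENT,
        short_code TEXT NOT NULL UNIQUE,
        long_url   TEXT NOT NULL,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.execute("INSERT INTO links (short_code, long_url) VALUES (?, ?)",
             ("abc123", "https://example.com/some/long/path"))
row = conn.execute("SELECT long_url FROM links WHERE short_code = ?",
                   ("abc123",)).fetchone()
print(row[0])  # https://example.com/some/long/path
```

The UNIQUE constraint on short_code mirrors the key lookup the read path needs.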
Step 5: High-Level Design
- Sketch out the block diagram of the system.
- Identify major components like Databases, Servers, Clients, Load Balancers, CDN, Cache, Message Queues, etc.
Step 6: Dive Into Key Components
- Go into the specifics of each component. Discuss how each part will work and interact with others.
- Address how each component will scale and perform under load.
- What data structures and algorithms do we need to use?
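For example, if a cache component comes up in Step 6, an LRU eviction policy is a common data-structure answer; a minimal sketch using `OrderedDict` (names chosen for the example):

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # keys ordered from least to most recently used

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used entry

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" is now most recently used
cache.put("c", 3)      # evicts "b"
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```

Both operations are O(1), which is usually the property the interviewer wants stated.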
Step 7: Address Scalability, Fault Tolerance & Reliability
- Discuss how the system can scale using concepts like sharding, replication, and partitioning.
- Talk about caching strategies and where caching could be implemented.
- Discuss strategies for handling component failures, like using replicas, fallbacks, and retries.
- Discuss possible performance bottlenecks and how to address them.
- Do we need to throttle requests?
- Discuss authentication, authorization, data encryption, and other security best practices.
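To make the retry point under Step 7 concrete, here is a minimal retry-with-exponential-backoff sketch; the flaky dependency is simulated, and the function names are invented for the example:

```python
import time
import random

def call_with_retries(fn, max_attempts=4, base_delay=0.1):
    # Retry fn with exponential backoff plus jitter; re-raise after the last attempt
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)

# Simulated flaky dependency: fails twice, then succeeds
attempts = {"count": 0}
def flaky():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = call_with_retries(flaky)
print(result)  # ok
```

The jitter term keeps many clients from retrying in lockstep after a shared failure.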
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
This approach works well for most problems but may not be ideal for every type of problem, so feel free to adapt it according to the specific nuances of the interview question.