API gateway
Overall flowchart
Gateway architecture
Evolution history
Initial architecture
Only needs to support web browsers
BFF (Backend for Frontend) layer
BFF layer exists to perform the following:
Security logic: if internal services are directly exposed on the web, there are security risks. The BFF layer hides these internal services from the outside.
Aggregation/filter logic: wireless/mobile clients typically need responses to be filtered (e.g. cropping images to fit the device screen) or adapted to client-specific requirements. The BFF layer performs these operations (see the sketch below).
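For illustration only (the internal service names, URLs, and response shaping below are made up), a minimal BFF endpoint might aggregate two internal services and trim the payload for a mobile client:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/** Hypothetical BFF handler: aggregates two internal services for a mobile client. */
public class MobileProductBff {
    private final HttpClient http = HttpClient.newHttpClient();

    // Internal service addresses stay hidden behind the BFF; clients never see them.
    private static final String PRODUCT_SVC = "http://product-service.internal/products/";
    private static final String REVIEW_SVC  = "http://review-service.internal/reviews/";

    public String productPage(String productId) throws Exception {
        String product = get(PRODUCT_SVC + productId);
        String reviews = get(REVIEW_SVC + productId);
        // Aggregation + filtering: combine both responses and keep only what the
        // mobile client needs (real code would also resize image URLs, drop fields, etc.).
        return "{\"product\":" + product + ",\"topReviews\":" + reviews + "}";
    }

    private String get(String url) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return http.send(req, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```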
However, over time the BFF layer comes to contain both business logic and cross-cutting logic.
Gateway layer and Cluster BFF Layer
The BFF layer contains too much cross-cutting logic, such as
Rate limiting
Auth
Monitoring
A gateway layer is introduced to deal with these cross-cutting concerns, as sketched below.
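As an illustration only (this is a minimal sketch, not any particular gateway's API; the class name and limits are assumptions), rate limiting is a typical piece of cross-cutting logic the gateway can apply to every request before it reaches the BFF or backend services:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative token-bucket rate limiter: the kind of cross-cutting logic a gateway centralizes. */
public class RateLimiter {
    private static final int CAPACITY = 100;          // max burst per client (assumed)
    private static final double REFILL_PER_SEC = 50;  // sustained rate per client (assumed)

    private static class Bucket { double tokens = CAPACITY; long lastRefillNanos = System.nanoTime(); }
    private final Map<String, Bucket> buckets = new ConcurrentHashMap<>();

    /** Returns true if the request identified by clientId may proceed. */
    public boolean allow(String clientId) {
        Bucket b = buckets.computeIfAbsent(clientId, id -> new Bucket());
        synchronized (b) {
            long now = System.nanoTime();
            double refill = (now - b.lastRefillNanos) / 1_000_000_000.0 * REFILL_PER_SEC;
            b.tokens = Math.min(CAPACITY, b.tokens + refill);
            b.lastRefillNanos = now;
            if (b.tokens >= 1) { b.tokens -= 1; return true; }
            return false; // caller would reject with e.g. HTTP 429
        }
    }
}
```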
Clustered BFF and Gateway layer
A clustered deployment is introduced to remove the single point of failure.
Gateway vs reverse proxy
Web age: reverse proxies (e.g. HAProxy/Nginx) have existed since the web age.
However, in the microservice age, quick iteration requires dynamic configuration.
Microservice age: gateways are introduced to support dynamic configuration.
However, in the cloud-native age, the gateway also needs to support dynamic programmability, such as blue-green deployment.
Cloud-native age: service mesh and Envoy are proposed to address this.
Reverse Proxy (Nginx)
Use cases
Use a distributed cache while skipping application servers: run Lua scripts on top of Nginx so that Redis can be served directly from Nginx instead of from the web application (Java services whose optimization, e.g. JVM tuning and multithreading, is more complicated).
Provides high availability for backend services
Failover config: proxy_next_upstream. The failure types that trigger passing a request to the next upstream can be customized, e.g. HTTP 5XX/4XX status codes, errors, and timeouts.
Avoid failover avalanche config: proxy_next_upstream_tries limits the number of failover attempts, so a single failing request does not cascade across every upstream.
Gateway internals
API Gateway has become a pattern: https://freecontent.manning.com/the-api-gateway-pattern/
Please see this gateway comparison (in Chinese).
Service discovery
Approach - Hardcode service provider addresses
Pros:
Updates take effect much faster (only the load balancer configuration needs to change)
Cons:
The load balancer easily becomes a single point of failure
The load balancing strategy is inflexible in microservice scenarios. TODO: details to be added.
All traffic needs to pass through the load balancer, which results in some performance cost
Approach - Service registration center
Pros:
No single point of failure.
No additional hop for load balancing
For details on service registration implementation, please refer to [Service registration center](https://github.com/DreamOfTheRedChamber/system-design/blob/master/serviceRegistry.md)
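As a minimal sketch of why there is no extra hop (the RegistryClient interface and address format are assumptions, not any specific registry's API), the client fetches the provider list from the registration center and picks a node itself:

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

/** Illustrative client-side discovery: the caller picks a provider from the registry's list. */
public class ClientSideDiscovery {
    interface RegistryClient {                      // hypothetical registry interface
        List<String> lookup(String serviceName);    // e.g. ["10.0.0.1:8080", "10.0.0.2:8080"]
    }

    private final RegistryClient registry;

    public ClientSideDiscovery(RegistryClient registry) { this.registry = registry; }

    /** No central load balancer hop: the client load-balances locally (random pick for simplicity). */
    public String pickNode(String serviceName) {
        List<String> nodes = registry.lookup(serviceName);
        return nodes.get(ThreadLocalRandom.current().nextInt(nodes.size()));
    }
}
```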
How to detect failure
Heartbeat messages: TCP connect, HTTP, HTTPS
Failure detection should not rely only on heartbeat messages but should also consider the application's health. A node may keep sending heartbeats while the application itself is no longer responding for some reason (pseudo-dead).
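A minimal sketch of combining the two signals (the /health endpoint path and the timeouts are assumptions): a node counts as alive only if both the TCP-level heartbeat probe and an application-level check succeed.

```java
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

/** Illustrative health checker: combines a TCP "heartbeat" probe with an application-level check. */
public class NodeHealthChecker {
    private final HttpClient http = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(2)).build();

    public boolean isAlive(String host, int port) {
        return tcpReachable(host, port) && applicationHealthy(host, port);
    }

    /** Heartbeat-style check: can we open a TCP connection at all? */
    private boolean tcpReachable(String host, int port) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), 2000);
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    /** Application-level check: does the service actually answer its (assumed) /health endpoint? */
    private boolean applicationHealthy(String host, int port) {
        try {
            HttpRequest req = HttpRequest.newBuilder(URI.create("http://" + host + ":" + port + "/health"))
                    .timeout(Duration.ofSeconds(2)).GET().build();
            return http.send(req, HttpResponse.BodyHandlers.discarding()).statusCode() == 200;
        } catch (Exception e) {
            return false; // node may be "pseudo-dead": reachable but not responding
        }
    }
}
```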
Detect failure
Centralized and decentralized failure detection: https://time.geekbang.org/column/article/165314
Heartbeat mechanism: https://time.geekbang.org/column/article/175545
How to gracefully shut down
Problem: two RPC calls are involved in the process
The service provider notifies the registration center about the offline plan for certain nodes
The registration center notifies clients to remove those nodes from the clients' local copy of the service registration list
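A sketch of that sequence under assumed interfaces (RegistrationCenterClient and the wait interval are hypothetical): announce the offline plan, give clients time to refresh their local lists, then drain in-flight requests before exiting.

```java
import java.util.List;
import java.util.concurrent.TimeUnit;

/** Illustrative graceful-shutdown sequence; RegistrationCenterClient is a hypothetical interface. */
public class GracefulShutdown {

    interface RegistrationCenterClient {
        // RPC #1: tell the registration center these nodes are going offline.
        // (RPC #2 happens on the registry side: it pushes the updated list to clients.)
        void markOffline(List<String> nodeAddresses);
    }

    public static void shutdown(RegistrationCenterClient registry, List<String> myNodes)
            throws InterruptedException {
        // 1. Announce the offline plan so the registry can notify clients (the two RPCs above).
        registry.markOffline(myNodes);

        // 2. Give clients time to refresh their local copy of the registration list
        //    so no new requests are routed here (the interval is an assumption).
        TimeUnit.SECONDS.sleep(10);

        // 3. Drain: stop accepting new requests, finish in-flight ones, then exit.
        //    (Real code would hook this into the RPC framework's shutdown API.)
        System.out.println("Draining in-flight requests, then exiting.");
    }
}
```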
How to gracefully start
Problem: if a newly started service provider node receives a large volume of traffic without prewarming (e.g. cold caches and a JVM that has not warmed up yet), it is likely to fail. How to make sure a newly started node does not receive a large volume of traffic right away?
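One common remedy, sketched here with assumed numbers (the warm-up window and weight range are illustrative): clients or the load balancer give a newly registered node a small routing weight and ramp it up linearly over a warm-up period.

```java
/** Illustrative warm-up weighting: a new node's routing weight ramps up linearly over a warm-up window. */
public class WarmupWeight {
    private static final long WARMUP_MILLIS = 2 * 60 * 1000; // assumed 2-minute warm-up window
    private static final int  FULL_WEIGHT   = 100;           // weight of a fully warmed-up node

    /** Weight to use for a node that registered at registeredAtMillis. */
    public static int weight(long registeredAtMillis, long nowMillis) {
        long uptime = nowMillis - registeredAtMillis;
        if (uptime >= WARMUP_MILLIS) {
            return FULL_WEIGHT;                               // warmed up: normal share of traffic
        }
        // Linear ramp from ~1 to FULL_WEIGHT so a cold node only sees a trickle at first.
        return Math.max(1, (int) (FULL_WEIGHT * uptime / WARMUP_MILLIS));
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        System.out.println(weight(start, start + 30_000));   // ~25 shortly after start
        System.out.println(weight(start, start + 180_000));  // 100 after the warm-up window
    }
}
```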
Further readings