Event-Driven, Serverless Microservices Architecture in AWS
Per the Amazon Web Services definition, an event-driven architecture uses events to trigger and communicate between decoupled services, and is common in modern applications built with microservices. An event is a change in state, or an update, such as an item being placed in a shopping cart on an e-commerce website. Events can either carry the state (the item purchased, its price, and a delivery address) or serve as identifiers (a notification that an order was shipped).
Event-driven architectures have three key components: event producers, event routers, and event consumers. A producer publishes an event to the router, which filters and pushes the events to consumers. Producer services and consumer services are decoupled, which allows them to be scaled, updated, and deployed independently.
- Phase 1 architecture: Event-driven, serverless data pipeline
3. Setting up the data pipeline
We are going to set up Amazon Kinesis, a cloud-native real-time data streaming service comparable to Kafka. Kinesis offers better TCO, out-of-the-box integration with other AWS services, automation, and cross-region replication without much configuration, whereas Kafka data streaming requires custom configuration and a significant amount of engineering time to scale, harden security, etc. Refer to this article for a deep dive into Kafka vs Kinesis.
3a. Start by defining parameters in the CloudFormation (IaC) template
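A minimal sketch of what the template's Parameters section might look like (the parameter names and defaults here are illustrative, not taken from the original template):

```yaml
Parameters:
  Environment:
    Type: String
    Default: dev
    AllowedValues: [dev, stage, prod]
    Description: Deployment environment, used to suffix resource names
  StreamShardCount:
    Type: Number
    Default: 1
    Description: Number of shards for the Kinesis Data Stream
  S3BucketName:
    Type: String
    Description: Target bucket for Firehose delivery
```

Parameterizing the environment and shard count lets the same template be deployed per stage and scaled without edits.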
3b. Provision the Kinesis Data Stream & Firehose delivery stream
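A sketch of the two streaming resources, assuming the parameters above and the `FirehoseRole` that step 3c adds (resource names are illustrative):

```yaml
Resources:
  EventDataStream:
    Type: AWS::Kinesis::Stream
    Properties:
      Name: !Sub "event-stream-${Environment}"
      ShardCount: !Ref StreamShardCount
      StreamEncryption:
        EncryptionType: KMS
        KeyId: alias/aws/kinesis

  EventDeliveryStream:
    Type: AWS::KinesisFirehose::DeliveryStream
    Properties:
      DeliveryStreamType: KinesisStreamAsSource
      KinesisStreamSourceConfiguration:
        KinesisStreamARN: !GetAtt EventDataStream.Arn
        RoleARN: !GetAtt FirehoseRole.Arn
      ExtendedS3DestinationConfiguration:
        BucketARN: !Sub "arn:aws:s3:::${S3BucketName}"
        RoleARN: !GetAtt FirehoseRole.Arn
        BufferingHints:
          IntervalInSeconds: 60
          SizeInMBs: 5
```

Firehose reads from the stream as its source and batches records into S3 once either buffering threshold is hit.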
3c. Add IAM policies
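Firehose needs a role it can assume with read access to the stream and write access to the bucket. One possible shape for that role (policy names are illustrative):

```yaml
  FirehoseRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: firehose.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: firehose-access
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - kinesis:DescribeStream
                  - kinesis:GetShardIterator
                  - kinesis:GetRecords
                  - kinesis:ListShards
                Resource: !GetAtt EventDataStream.Arn
              - Effect: Allow
                Action:
                  - s3:PutObject
                  - s3:GetBucketLocation
                  - s3:AbortMultipartUpload
                Resource:
                  - !Sub "arn:aws:s3:::${S3BucketName}"
                  - !Sub "arn:aws:s3:::${S3BucketName}/*"
```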
3d. Create the S3 bucket with encryption, lifecycle rules, and retention
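A sketch of the destination bucket with default encryption, public access blocked, and a retention rule (the 365-day expiration is an assumed value, not from the original):

```yaml
  EventDataBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref S3BucketName
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      LifecycleConfiguration:
        Rules:
          - Id: expire-raw-events
            Status: Enabled
            ExpirationInDays: 365
```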
3e. Define the EventBus role and policy
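For EventBridge to forward matched events into the stream, it needs a role scoped to `kinesis:PutRecord` on that stream. A possible sketch:

```yaml
  EventBusRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: events.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: eventbus-to-kinesis
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action: kinesis:PutRecord
                Resource: !GetAtt EventDataStream.Arn
```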
3f. Firehose inline transformation of the JSON payload using a Lambda serverless function
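The transformation Lambda follows the standard Firehose record-transformation contract: it receives a batch of base64-encoded records and returns each one with a `recordId`, a `result` status, and re-encoded `data`. A minimal sketch; the `pipeline` enrichment field is purely illustrative:

```python
import base64
import json

def lambda_handler(event, context):
    """Firehose invokes this with a batch of base64-encoded records.
    Each JSON payload is decoded, enriched, and re-emitted with a
    trailing newline so records land in S3 as newline-delimited JSON."""
    output = []
    for record in event["records"]:
        try:
            payload = json.loads(base64.b64decode(record["data"]))
            # Hypothetical enrichment: tag each record with its source.
            payload["pipeline"] = "kinesis-firehose"
            data = (json.dumps(payload) + "\n").encode("utf-8")
            output.append({
                "recordId": record["recordId"],
                "result": "Ok",
                "data": base64.b64encode(data).decode("utf-8"),
            })
        except (ValueError, KeyError):
            # Mark records that are not valid JSON as failed;
            # Firehose can route these to an error prefix in S3.
            output.append({
                "recordId": record["recordId"],
                "result": "ProcessingFailed",
                "data": record["data"],
            })
    return {"records": output}
```

Appending the newline matters: without it, Firehose concatenates JSON objects back-to-back in each S3 object, which Athena cannot parse row by row.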
4. Phase 2 architecture: Hook up consumers (downstream microservices applications)
4a. Create Kinesis consumers (fan-out)
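Each downstream service can register as an enhanced fan-out consumer, giving it dedicated read throughput instead of sharing the stream's polling limit. A sketch with illustrative consumer names:

```yaml
  AnalyticsConsumer:
    Type: AWS::Kinesis::StreamConsumer
    Properties:
      ConsumerName: analytics-service
      StreamARN: !GetAtt EventDataStream.Arn

  NotificationsConsumer:
    Type: AWS::Kinesis::StreamConsumer
    Properties:
      ConsumerName: notifications-service
      StreamARN: !GetAtt EventDataStream.Arn
```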
4b. Create a Glue database to store the JSON data from S3 in a table and run Athena queries
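A sketch of the Glue catalog resources over the Firehose output; the database name, table name, and columns are assumptions standing in for the shopping-cart event schema:

```yaml
  EventsGlueDatabase:
    Type: AWS::Glue::Database
    Properties:
      CatalogId: !Ref AWS::AccountId
      DatabaseInput:
        Name: events_db

  EventsGlueTable:
    Type: AWS::Glue::Table
    Properties:
      CatalogId: !Ref AWS::AccountId
      DatabaseName: !Ref EventsGlueDatabase
      TableInput:
        Name: raw_events
        TableType: EXTERNAL_TABLE
        StorageDescriptor:
          Location: !Sub "s3://${S3BucketName}/"
          InputFormat: org.apache.hadoop.mapred.TextInputFormat
          OutputFormat: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
          SerdeInfo:
            SerializationLibrary: org.openx.data.jsonserde.JsonSerDe
          Columns:
            - Name: item
              Type: string
            - Name: price
              Type: double
```

With the table in the catalog, the data is queryable from Athena with ordinary SQL, e.g. `SELECT item, price FROM events_db.raw_events LIMIT 10;` (column names per the assumed schema above).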