In trading, speed is the name of the game. Sports betting trading is no different: when a player senses that his favorite striker is about to score, he must act quickly to place his bet. At the very moment the goal is scored, the bookmaker absolutely cannot accept any more bets, or it would result in a direct, massive financial loss.
How can you achieve such speed in a highly distributed system of more than 70 microservices, built by multiple teams, while still delivering best-in-class scalability, resiliency, and availability?
How do you manage hundreds of millions of odds changes a day, especially the massive spikes during weekend leagues, when more than 500 matches are live at the same time?
In this session, we will outline our journey to transform Betclic's legacy Offer System. We started with a gigantic monolithic database shared by every service in the Betclic ecosystem, resulting in a chaotic distributed monolith that was hard to evolve and scale.
Our goal was to transform it into a fully microservices-based architecture, with multiple databases that communicate through an event-driven architecture.
We transitioned from an on-premise infrastructure, which relied on RabbitMQ and SNS/SQS for messaging, to a cloud-native ecosystem that leverages Kafka to achieve a real-time event-driven architecture (EDA).
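The core of that shift can be sketched in miniature: instead of services reading each other's tables in a shared database, each service subscribes to topics and reacts to published events. Below is a minimal in-process sketch of the pattern; the topic name and payload are hypothetical illustrations, not Betclic's actual schema, and a real deployment would use a Kafka client rather than this toy bus.

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process stand-in for a Kafka-style publish/subscribe broker."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Each subscribing service reacts independently; none of them
        # shares a database with the producer.
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
received = []

# Hypothetical downstream service: keeps its own local copy of the odds.
bus.subscribe("odds-changed", lambda event: received.append(event))

# The producing service only emits an event; it knows nothing about consumers.
bus.publish("odds-changed", {"match_id": 42, "market": "1X2", "odds": 1.85})
print(received)  # [{'match_id': 42, 'market': '1X2', 'odds': 1.85}]
```

The key property is the decoupling: adding a new consumer is a new `subscribe` call (a new consumer group, in Kafka terms), with no change to the producer and no shared database.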
We went from a single monolithic database for all of Betclic to 33 databases for the Sports Offering alone, opening up incredible growth and scalability opportunities.
We will cover why we chose Kafka in the first place as the solution to our needs.
How we tuned it even further to unleash the true low-latency potential of Kafka in the cloud.
How we leveraged multiple features of Kafka to ensure speed and resiliency with minimal concessions.
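To give a flavor of the latency-versus-resiliency tuning discussed above, here is a hedged sketch of Kafka producer and topic settings that push for immediate sends while keeping strong delivery guarantees. The setting names are standard Kafka configuration keys, but the values are illustrative examples, not the configuration actually used in production.

```python
# Illustrative Kafka settings, shown as plain dicts; in real code they would be
# passed to a Kafka producer client. Values are examples, not Betclic's config.
producer_config = {
    # Latency: send each record immediately instead of waiting to fill a batch.
    "linger.ms": 0,
    "batch.size": 16384,          # batching still applies if records arrive together
    "compression.type": "lz4",    # cheap compression keeps payloads small on the wire
    # Resiliency: require acknowledgement from all in-sync replicas,
    # and deduplicate retried sends on the broker side.
    "acks": "all",
    "enable.idempotence": True,
    "retries": 2147483647,        # retry transient broker errors indefinitely
}

# Topic-level setting that pairs with acks=all: a write only succeeds once at
# least this many replicas have it, so a single broker failure loses nothing.
topic_config = {"min.insync.replicas": 2}

print(producer_config["acks"], topic_config["min.insync.replicas"])
```

The trade-off is the "minimal concession" in question: `acks=all` with `min.insync.replicas=2` costs one extra replication round-trip per write, while `linger.ms=0` claws latency back by never holding records to build larger batches.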