At RevenueCat, we run an ever-changing number of web servers to meet the demands of all of the developers using the RevenueCat SDK in their applications. As we've grown, we've had to make our code more efficient and scale out horizontally by adding more web servers. Each of these web servers must maintain a pool of connections to our database cluster. As the number of web servers grew, so did the limitations we ran into by having each server directly manage its connections to our database cluster. This was particularly important for us, since we were approaching high connection counts from our web servers, which could eventually result in significant performance degradation. To mitigate this, we decided to introduce a connection pooler, which would allow us to safely add web servers without drastically increasing the number of database connections, as well as introduce more sophisticated pooling mechanisms that are more efficient for our workload.

Our main database cluster is an AWS RDS cluster using AWS Aurora. Since Aurora is based on PostgreSQL, which does not have a built-in connection pooler, we decided to introduce PgBouncer into our system to centralize management of connections to our database cluster.

This post gives a brief overview of how we rolled out PgBouncer on AWS. Keep in mind that running a connection pooler heavily depends on your application's workload, and the settings that make sense for one application might not make sense for others. However, we hope this overview gives you an idea of how PgBouncer can be leveraged.

We've already had a pretty good experience using the AWS Elastic Container Service to run other containerized applications, so we decided to use it for PgBouncer as well. Our AWS configuration for this lives in a Terraform module so we can easily and safely make configuration changes and deploy new PgBouncer instances when needed.

Our PgBouncer ECS task definition uses three containers. For the image, we used Edoburu's minimal PgBouncer image. While it's unofficial, the Edoburu image does a good job of exposing PgBouncer settings via environment variables. Additionally, the code for the image itself is relatively straightforward and easy to understand. The other two containers are used for monitoring and sending metrics.

We're currently running our tasks on AWS Fargate. As a side note, PgBouncer is incredibly efficient. The entire thing is written in C and leverages libevent for asynchronous I/O. We currently run tasks on 1 vCPU with 2 gigabytes of memory, and CPU usage rarely goes above 30% (this includes the additional Docker containers for collecting metrics). It's important to note that PgBouncer is not multithreaded. So if you need more than one core, you'll need to run multiple processes and account for this in your configuration. Another potential "gotcha" is the possibility of hitting ulimits. Because PgBouncer holds a file descriptor open for every client and server connection, you are likely to hit these if they aren't tweaked in your ECS task definition.
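To give a feel for the environment-variable style of configuration the Edoburu image uses, here is a sketch of what the `environment` section of a PgBouncer container definition might look like. The variable names (`DB_HOST`, `POOL_MODE`, etc.) and all values are illustrative, taken from our reading of the image's documentation rather than from our production setup; check the image's README for the exact names it supports.

```json
"environment": [
  { "name": "DB_HOST",           "value": "example-cluster.cluster-abc123.us-east-1.rds.amazonaws.com" },
  { "name": "DB_USER",           "value": "app_user" },
  { "name": "POOL_MODE",         "value": "transaction" },
  { "name": "MAX_CLIENT_CONN",   "value": "1000" },
  { "name": "DEFAULT_POOL_SIZE", "value": "20" }
]
```

The appeal of this approach is that tuning a pool setting becomes a one-line change in the task definition (or the Terraform module that generates it) rather than a rebuild of the image.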
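Because PgBouncer is single-threaded, using more than one core means running more than one process. One way to do this on Linux, for recent PgBouncer versions, is the `so_reuseport` setting, which lets several PgBouncer processes bind the same port while the kernel spreads incoming clients across them. This is a sketch, not our configuration:

```ini
[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432

; With so_reuseport enabled, multiple pgbouncer processes can bind the
; same port (e.g. one process per vCPU) and the kernel load-balances
; new client connections across them.
so_reuseport = 1
```

The alternative is simply running more single-process tasks behind a load balancer, which is often the more natural fit on ECS anyway.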
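The ulimit tweak mentioned above can be done directly in the ECS task definition, which accepts a `ulimits` field per container. A minimal sketch, raising the open-file-descriptor limit (the limit that matters most for a connection pooler); the numbers are illustrative, not our production values:

```json
{
  "name": "pgbouncer",
  "image": "edoburu/pgbouncer",
  "ulimits": [
    {
      "name": "nofile",
      "softLimit": 65536,
      "hardLimit": 65536
    }
  ]
}
```

A useful sanity check is that `nofile` should comfortably exceed `max_client_conn` plus the total server-side pool size, since each of those connections consumes a descriptor.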