We need to investigate how to better scale CouchDB while meeting reliability and backup requirements.
A proposed solution for CouchDB 1.x is to front it with a load balancer (HAProxy, nginx, etc.) that splits read/write traffic, so that a single instance accepts writes while many read-only instances serve reads. Compared to CouchDB 1.x's per-database replication, this is a more general approach that covers all traffic.
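A minimal sketch of what such a read/write split could look like in HAProxy, assuming writes go to a single primary and reads are balanced across replicas. All hostnames, ports, and the choice to treat only GET/HEAD as reads are illustrative assumptions (note that CouchDB also uses POST for some read-style endpoints, so the ACL would need tuning in practice):

```
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend couchdb_front
    bind *:5984
    # Assumption: GET/HEAD are reads; everything else (PUT, POST, DELETE, COPY) is a write
    acl is_read method GET HEAD
    use_backend couchdb_read if is_read
    default_backend couchdb_write

# Single write target (hypothetical hostname)
backend couchdb_write
    server primary couchdb-primary:5984 check

# Round-robin across read-only replicas (hypothetical hostnames)
backend couchdb_read
    balance roundrobin
    server replica1 couchdb-replica1:5984 check
    server replica2 couchdb-replica2:5984 check
```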
That being said, CouchDB 2.x comes with new clustering capabilities, and we are starting to investigate what it would take to migrate to this version. See GPII-1987 for more details.
The work in this ticket should involve both researching the best approach and doing the automation work to have CouchDB properly deployed and scaled as needed.
Deploying CouchDB as a Kubernetes application is the desired long-term goal. However, if concerns around persistent storage would make this too complicated within the Pilot timeline, it could be deprioritized.
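As a starting point for the Kubernetes option, a minimal sketch of a StatefulSet that gives each CouchDB pod its own PersistentVolumeClaim, which is the usual way to address the persistent-storage concern. The names, image tag, replica count, and storage size are all assumptions for illustration:

```yaml
# Hypothetical sketch: CouchDB as a StatefulSet with per-pod persistent storage
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: couchdb
spec:
  serviceName: couchdb
  replicas: 3
  selector:
    matchLabels:
      app: couchdb
  template:
    metadata:
      labels:
        app: couchdb
    spec:
      containers:
      - name: couchdb
        image: couchdb:2.3.1   # assumed 2.x image tag
        ports:
        - containerPort: 5984
        volumeMounts:
        - name: data
          mountPath: /opt/couchdb/data
  # One PVC per pod; survives pod rescheduling
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```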