Centralized Configuration Management – Part 1 – Storing and Controlling the Data
Dave Bechberger

View Part 2 of this Article

This two-part blog series explores a recent client project to centralize and simplify our ever-growing application configuration data.  The objective of the project was relatively simple: bring our scattered configuration data into one centrally managed place.

Our product suite (all .NET based) consists of a small but growing number of related applications, with a lot of configuration data stored in app/web.config files with multiple environmental transforms.  Much of the configuration (e.g. connection strings, email servers) is repeated across multiple projects in each environment, so centralizing this data would be of great help. Currently, making even a simple change to any of these keys requires sifting through a large number of config files and transforms, which is tedious, error-prone, and difficult to test effectively.
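To illustrate the duplication, an environment transform for one project might look like the sketch below. The names and values are hypothetical; the point is that the same connection string and SMTP host are repeated in a transform like this for every project and every environment:

```xml
<!-- Web.Production.config – a hypothetical XDT transform.
     Every project in the suite carries its own copy of these values. -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="MainDb"
         connectionString="Server=prod-sql;Database=App1;Integrated Security=true"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
  <appSettings>
    <add key="SmtpHost" value="smtp.prod.example.com"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>
```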

The first step in my process was to look at what third-party systems were available off the shelf that might meet our needs.  The requirements for any third-party system were relatively straightforward.

While a wide variety of solutions were available (Zookeeper, consul.io, etcd, doozer, etc.), I narrowed the final contenders down to the two I will go into detail on here: Apache Zookeeper and Consul.io.

Apache Zookeeper

Zookeeper is probably the most widely used and mature of the tools I evaluated.  It was originally developed by the team at Yahoo to track changing application configurations across their distributed environment.  If you are interested, you can read the whole story here.  In addition to providing a centralized key/value store, Zookeeper can also be built out to provide service discovery.  While this wasn’t a current requirement, it was an added feature we might make use of.  At a high level, Zookeeper provides a simple set of primitives that you can use to build and share synchronized data in distributed applications, and it was designed for high-performance, coordinated reads and writes.

Consul.io

Consul.io is a relative newcomer to the game but brings some rather interesting new features.  Unlike Zookeeper, doozer, etcd, and many others, Consul is built with native support for multiple data centers with little to no configuration.  This is a huge advantage for any globally distributed application.  Consul’s other killer feature is its built-in support for service discovery; this is a huge advantage over Zookeeper, where you must build that out yourself.
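Consul exposes its key/value store over a simple HTTP API, with values returned base64-encoded inside a JSON body. A minimal sketch of decoding such a response is below; the JSON is a hand-made sample of what a `GET /v1/kv/...` call returns, not output from a live agent:

```python
import base64
import json

# Hand-made sample of a Consul KV API response body (trimmed to the
# fields that matter here); a real response has more metadata fields.
sample_response = json.dumps([{
    "Key": "app/smtp-host",
    "Value": base64.b64encode(b"smtp.example.com").decode("ascii"),
}])

def read_kv(body):
    """Extract and base64-decode the value from a Consul KV response body."""
    entry = json.loads(body)[0]
    return base64.b64decode(entry["Value"]).decode("utf-8")

print(read_kv(sample_response))  # smtp.example.com
```

In a real client the `body` would come from an HTTP GET against the local Consul agent; the decoding step is the same.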

After evaluating the systems out there and discussing them within the team, we determined that while both of these systems showed a lot of promise and provided a nice feature set, they seemed to be overkill for our current needs.  The overhead of configuring and maintaining them seemed to outweigh the benefits we would currently receive from using these technologies.

We decided on an approach that consists of a custom database fronted by a set of REST-based services built on WebAPI.  This approach provides us a large degree of flexibility in accessibility for our client applications, as well as a layer of abstraction between the client and the persistence mechanism.  During our evaluation it became very clear that there was a real need for structure and hierarchical mapping within the key/value store, both to simplify the construction and management of keys and to allow keys to be reused between applications and environments.  We are still designing and implementing the new system, so I will save the exact details for a future post.
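As a rough illustration of the hierarchical mapping described above, a key could be resolved by walking from the most specific scope (application plus environment) back to a shared default. All scope names, keys, and values here are hypothetical sketches, not the real schema:

```python
# Hypothetical hierarchical key/value store: an application/environment
# specific value wins; otherwise lookup falls back to shared defaults.
store = {
    "global":       {"smtp-host": "smtp.example.com"},
    "global/prod":  {"db-conn": "Server=prod-sql;Database=Shared"},
    "billing/prod": {"db-conn": "Server=prod-sql;Database=Billing"},
}

def resolve(app, env, key):
    """Check app/env, then global/env, then global, in that order."""
    for scope in (f"{app}/{env}", f"global/{env}", "global"):
        if key in store.get(scope, {}):
            return store[scope][key]
    raise KeyError(f"{key} not found for {app}/{env}")

print(resolve("billing", "prod", "db-conn"))    # app-specific override
print(resolve("billing", "prod", "smtp-host"))  # falls back to global
```

This fallback ordering is what lets a shared value (such as an SMTP host) be defined once and still be overridden per application or per environment when needed.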

What surprised me was that supporting these technologies ended up being the most difficult part.  The technical aspects of maintaining and deploying a database are already in place within the team, but the question of who maintains the data, and when and how, has become quite a point of discussion.  Solving this problem is equivalent to answering the question “How do we manage change, and what risks are we willing to accept?”  Since this question has no clear answer, and each team member has different prior experiences, risk tolerances, priorities, and views on the relative risks, we had multiple discussions of different strategies.  Some are willing to accept more risk for a simpler overall system; others advocate for a more robust and redundant system at the cost of added complexity and maintenance.  I guess this is just one more example of the adage “Technology is easy, it’s people who are tough.”