Centralized Configuration Management – Part 1 – Storing and Controlling the Data

Our product suite (all .NET based) consists of a small but growing number of related applications with a lot of configuration data stored in app/web.config files with multiple environmental transforms. Much of the configuration is repeated across multiple projects in each environment (e.g. connection strings, email servers, etc.), so centralizing this data would be a great help. Currently, making even simple changes to any of these keys requires sifting through a large number of config files and transforms, which is tedious, error-prone, and difficult to test effectively.
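
To illustrate the problem, a typical per-environment transform file looks something like the fragment below (the names and connection string are hypothetical examples, but the xdt syntax is the standard web.config transformation syntax):

```xml
<?xml version="1.0"?>
<!-- Web.Release.config: one of many per-environment transform files.
     Every application in the suite repeats blocks like these. -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- "MainDb" and its connection string are hypothetical examples -->
    <add name="MainDb"
         connectionString="Server=prod-sql;Database=Main;Integrated Security=true"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
  <appSettings>
    <add key="SmtpServer" value="smtp.prod.example.com"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>
```

Multiply fragments like this across every application and every environment and the maintenance burden becomes clear.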

View Part 2 of this Article

This two-part blog series will explore a recent client project to centralize and simplify our ever-growing application configuration data. The objectives of this project were relatively simple:

  • Provide a centralized storage location for configuration data for applications
  • Provide a method for developers to easily access the centralized configuration data within an application
  • Provide an infrastructure for managing and deploying the first two goals


The first step in my process was to look at what third-party systems were available off the shelf that might meet our needs. The requirements for any third-party system were relatively straightforward:

  • Must be open-source, with a license compatible with our customer’s requirements
  • Must provide the ability to configure a centralized key-value storage system with a configurable hierarchy
  • Must provide a robust, production-ready deployment solution

While a wide variety of solutions was available, I narrowed the contenders (Zookeeper, Consul.io, etcd, doozer, etc.) down to two finalists, which I will cover in detail here: Apache Zookeeper and Consul.io.

Apache Zookeeper

Zookeeper is probably the most widely used and mature of the tools I evaluated. Zookeeper was originally developed by the team at Yahoo to handle their need to track changing application configurations across their distributed environment. If you are interested you can read the whole story here. In addition to providing a centralized key/value store, Zookeeper can also be built out to provide service discovery. While this wasn’t a current requirement, it was an added feature we might make use of. At a high level, Zookeeper provides a simple set of primitives that you can use to build and share synchronized data in distributed applications, and it was designed for high-performance, coordinated reads and writes.
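
To make the hierarchy idea concrete, here is a minimal sketch (Python is used only for illustration, and the naming convention is a hypothetical one of our own) of how environment- and application-scoped keys might map onto Zookeeper’s filesystem-like tree of “znode” paths:

```python
# Sketch of a hypothetical znode naming convention for configuration data.
# Zookeeper stores data in a tree of "znodes" addressed by slash-separated
# paths, much like a file system; the layout below is our own invention.

def znode_path(environment: str, application: str, key: str) -> str:
    """Build the znode path for a key scoped to one app in one environment."""
    return f"/config/{environment}/{application}/{key}"

def shared_znode_path(environment: str, key: str) -> str:
    """Build the path for a key shared by all applications in an environment."""
    return f"/config/{environment}/_shared/{key}"

# The same logical key lives at a different path per environment:
print(znode_path("prod", "billing", "smtp-server"))
# -> /config/prod/billing/smtp-server

# A connection string reused across the whole suite:
print(shared_znode_path("prod", "main-db-connection"))
# -> /config/prod/_shared/main-db-connection
```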

Pros:

  • Flexible – since Zookeeper stores data in a hierarchy similar to a file system, it was easy and intuitive for us to build the data hierarchy needed to support our applications in different environments
  • Mature – as a well-used technology, resources and information to aid in installation, configuration, and maintenance are easy to find
  • Security – Zookeeper has a rather sophisticated security model with ACLs and even Kerberos support
  • Client libraries – libraries are already available for many common languages

Cons:

  • Complex – The system is very powerful but requires a rather large installation footprint
  • Lack of good built-in UI management tools – there is a full-featured REST-based API as well as several external tools, but the lack of a built-in UI was a downside


Consul.io

Consul.io is a relative newcomer to the game but has brought some rather interesting new features.  Unlike Zookeeper, doozer, etcd and many others, Consul is built with native support for multiple data centers with little to no configuration.  This is a huge advantage for any global distributed application.  Consul’s other killer feature is its built-in support for service discovery; this provides a huge advantage over Zookeeper where you must build that out for yourself.

Pros:

  • Simple – much easier to configure and use than Zookeeper; includes a friendly built-in UI for configuration and administration as well as a full REST-based API
  • Security – while not quite as sophisticated as Zookeeper’s, the security model was still robust enough for our needs
  • Feature rich – for a young product it provides a rich set of compelling features for the end user

Cons:

  • Young – being a relative newcomer it was harder to find resources and information than it was for Zookeeper
  • Lack of available client libraries – for now you need to write these yourself; the good news is that Consul is built on simple REST calls
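
Because Consul’s KV store is exposed over plain HTTP, a client library is little more than a thin wrapper. As a sketch (Python used for illustration): Consul’s KV endpoint (`GET /v1/kv/<key>`) returns a JSON array of entries whose `Value` fields are base64-encoded, so a minimal client mostly just decodes that shape. The sample payload below is fabricated but follows the documented response format:

```python
import base64
import json

def decode_kv_response(body: str) -> dict:
    """Decode a response body from Consul's KV HTTP API (GET /v1/kv/<key>).

    Consul returns a JSON array of entries whose "Value" fields are
    base64-encoded; this maps each key to its decoded string value.
    """
    return {
        entry["Key"]: base64.b64decode(entry["Value"]).decode("utf-8")
        for entry in json.loads(body)
    }

# A sample response body in the shape Consul returns (values fabricated):
sample = json.dumps([
    {"Key": "config/prod/smtp-server",
     "Value": base64.b64encode(b"smtp.prod.example.com").decode("ascii")}
])

print(decode_kv_response(sample))
# {'config/prod/smtp-server': 'smtp.prod.example.com'}
```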

After evaluating these systems and discussing them within the team, we determined that while both showed a lot of promise and provided a nice feature set, they seemed to be overkill for our current needs. The overhead of configuring and maintaining them seemed to outweigh the benefits we would currently receive from using these technologies.

We decided on an approach that consists of a custom database with a set of REST-based services built on WebAPI. We decided that this approach would provide us a large degree of flexibility, both in accessibility for our client applications and as a layer of abstraction between the client and the persistence mechanism. During our evaluation it became very clear that there was a real need to provide a level of structure and hierarchical mapping within the key-value store, in order to simplify construction and management of keys as well as provide for a level of key reuse between applications and environments. We are still in the process of designing and implementing the new system, so I will save the exact details for a future post.
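
We have not settled on a final design, but the core resolution idea can be sketched as follows (Python used only for illustration; the scope names and values are hypothetical): a key is looked up from the most specific scope (application + environment) to the least specific (global default), which is what enables key reuse across applications.

```python
# Minimal sketch of hierarchical key resolution with environment overrides.
# The real system will sit behind a WebAPI service and a database; this
# only illustrates the lookup order that makes key reuse possible.

store = {
    # Global defaults shared by every application and environment.
    ("*", "*"): {"smtp-server": "smtp.example.com"},
    # Environment-wide values shared by all applications.
    ("prod", "*"): {"main-db": "Server=prod-sql;Database=Main"},
    # The most specific scope: one application in one environment.
    ("prod", "billing"): {"smtp-server": "smtp-bulk.example.com"},
}

def resolve(environment: str, application: str, key: str) -> str:
    """Return the value for `key`, checking the most specific scope first."""
    for scope in ((environment, application), (environment, "*"), ("*", "*")):
        if key in store.get(scope, {}):
            return store[scope][key]
    raise KeyError(key)

print(resolve("prod", "billing", "smtp-server"))  # app-specific override
print(resolve("prod", "billing", "main-db"))      # inherited from environment
print(resolve("dev", "billing", "smtp-server"))   # falls back to the default
```

A flat key-value store forces every application to duplicate shared values; the fallback chain above lets a key be defined once and overridden only where it actually differs.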

What surprised me was that supporting these technologies ended up being the most difficult part. The technological aspects of maintaining and deploying a database are already in place within the team, but the question of who maintains the data, and when and how, has become quite a point of discussion. Solving this problem is equivalent to answering the question “How do we manage change, and what risks are we willing to accept?” Since this question has no clear answer, and each team member has different prior experiences, risk tolerances, priorities, and views on relative risk, we had multiple discussions of different strategies. Some are willing to accept more risk for a simpler overall system; others advocate for a more robust and redundant system at the cost of added complexity and maintenance. I guess this is just one more example of the adage “Technology is easy, it’s people who are tough.”
