

Cycling credentials without cycling containers


David Thor Jul 30

In my prior posts, we’ve talked about how to instrument credential cycling and why it’s important to enable application portability. In this post, we’ll take the notion of credential cycling even further and show how secrets can be injected into volumes mounted to your applications. Injecting secrets into mounted volumes is a great way to securely provide credentials to your applications without forcing all your containers to cycle on every update.

In a cloud-native world, we’re taught to invest heavily in our DevOps and continuous delivery processes to ensure that applications can be deployed automatically and frequently. This guarantees that new features can be released to customers safely and quickly. Still, deployments remain one of the riskiest engineering processes and are all too often the cause of production downtime or outages. Even with a reliable release process, it’s best to avoid needless deployments when possible. And since changes to application credentials don’t touch any application code, updating credentials on the fly is low-hanging fruit for reducing the overall number of deployments.

There’s another reason to avoid container cycling: deployment speed. Shutting down and starting up any single container takes time; multiply that by the number of replicas you run, and suddenly you find yourself monitoring a lengthy and risky process. Having applications read credentials from the filesystem lets them receive updates quickly without needing to be cycled.

The cornerstone of our approach to dynamic credentialing will be reading secrets from volume mounts, so the first thing we’ll need to do is tell our apps to read secrets from the filesystem instead of from environment variables.

In this example, we’re going to load credentials for a Postgres instance; we’ll assume the secret is a JSON-encoded string containing user, password, and database fields:

const fs = require('fs');
const { Client: PostgresClient } = require('pg');

// Configure how much time can pass before the credentials will be
// considered stale and need to be refreshed
const DB_CREDENTIALS_TTL = Number(process.env.DB_CREDENTIALS_TTL) || 5 * 60 * 1000;

let postgres_client;
let client_expires_on;

const getPostgresClient = () => {
  if (!postgres_client || client_expires_on < Date.now()) {
    // Close out the stale client (if any) so credential rotations
    // don't leak connections
    if (postgres_client) {
      postgres_client.end();
    }

    const raw_secret = fs.readFileSync(process.env.DB_CREDENTIALS_FILE, 'utf8');
    const { user, password, database } = JSON.parse(raw_secret);

    const postgres_config = {
      host: process.env.DB_HOST || 'localhost',
      port: Number(process.env.DB_PORT) || 5432,
      user,
      password,
      database,
    };

    postgres_client = new PostgresClient(postgres_config);
    // pg queues queries issued while the connection is still being
    // established, so we can kick off the connection without awaiting it
    postgres_client.connect();
    client_expires_on = Date.now() + DB_CREDENTIALS_TTL;
  }

  return postgres_client;
};

In the above snippet, we’ve created a method called getPostgresClient() that returns a properly configured and connected client for accessing our Postgres database. We’ve also set it up to cache the client and only create a new one once the previous credentials are considered stale, ending the old client so connections don’t leak. This lets us return the client in a reasonably performant way without reading from the filesystem on every request.

Next up we’ll want to test out the integration! Let’s go ahead and create a simple method that will create a table, write to the database, read from the database, and then tear it all down again.

const testDBConnection = async () => {
  // getPostgresClient() kicks off the connection for us, so the
  // client is ready to query right away
  const db = getPostgresClient();

  // Create and seed table
  await db.query(
    'CREATE TABLE users (id int, last_name varchar(255), first_name varchar(255))',
  );
  await db.query(
    "INSERT INTO users (id, last_name, first_name) VALUES (1, 'User', 'Test')",
  );

  // Query seeded content
  const result = await db.query('SELECT * FROM users');
  console.log(result.rows);

  // Drop table and kill connection
  await db.query('DROP TABLE users');
  await db.end();
};

testDBConnection().catch(console.error);

Great! Let’s go ahead and write the two methods above into an index.js file and create a Dockerfile to help us run it in a container. Don’t forget to create a package.json file for the project and install the pg library too.
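If you’re starting from scratch, a minimal package.json along these lines should do; the name and version are placeholders, and pg is the only runtime dependency (the version range here is an assumption, any recent release should work):

{
  "name": "credentials-test",
  "version": "1.0.0",
  "main": "index.js",
  "dependencies": {
    "pg": "^8.0.0"
  }
}

With that in place, here’s the Dockerfile: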

FROM node:13
WORKDIR /usr/src/app

COPY package*.json ./
RUN npm install

COPY index.js .

CMD ["node", "index.js"]

Before we can properly run our image, we need to boot up a Postgres instance:

$ docker run -d -p 5432:5432 \
    -e POSTGRES_USER=postgres \
    -e POSTGRES_PASSWORD=password \
    -e POSTGRES_DB=architect \
    postgres:11

Time to boot up our application and see what happens! Let’s build that Docker image and run it locally!

$ docker build -t credentials-test:latest .
$ docker run credentials-test:latest

Whoops! It seems we haven’t mounted the secret onto the container yet, so we got a nasty error. We’ll need to write the credentials to a local file that we can then mount to our application when we run it. Go ahead and copy the JSON credentials into a file named database.json:

{
  "user": "postgres",
  "password": "password",
  "database": "architect"
}

Finally, we’re ready to run our application! Let’s run it with the flags needed to mount our local database credentials to the container:

$ docker run \
    -v $(pwd):/usr/src/secrets:ro \
    -e 'DB_CREDENTIALS_FILE=/usr/src/secrets/database.json' \
    -e 'DB_HOST=host.docker.internal' \
    credentials-test:latest

Great job! You should have seen the container output the test user before exiting. You can use this flow with any API server: just call getPostgresClient() whenever you need to access the database, and you’ll be able to update the contents of your mounted volume whenever you need.
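For instance, here’s a minimal sketch of how this might look in an Express server; Express and the /users route are assumptions for illustration, not part of the example above:

const express = require('express');

const app = express();

app.get('/users', async (req, res) => {
  try {
    // Reuses the cached client until the TTL lapses, at which point
    // getPostgresClient() reads fresh credentials from the mounted file
    const db = getPostgresClient();
    const result = await db.query('SELECT * FROM users');
    res.json(result.rows);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.listen(3000, () => console.log('Listening on port 3000'));

To rotate credentials without restarting anything, you’d change the password in Postgres and update the mounted file to match; once the TTL lapses, the app picks up the new credentials on its own. Something like the following, where the container ID and new password are placeholders:

$ docker exec <postgres-container-id> \
    psql -U postgres -d architect \
    -c "ALTER USER postgres WITH PASSWORD 'new-password';"

Then edit database.json so its password field matches, and the next client the app creates will authenticate with the new credentials.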


Want to see the whole source? Check it out on GitHub.