Little server patterns: Failing quickly

Little server patterns 2
May 27, 2022


This is the second post in a series about little server patterns that I find useful. The first post discusses “dependency parameters” and is found here. In this post, I discuss failing quickly.

I’m surprised at how few servers I see fail quickly, because this seems like an easy and valuable pattern. Servers commonly receive unknown data from at least two sources: server configuration on start-up and downstream/upstream requests. In both cases it often makes sense to validate the data and fail quickly.

Late failures are confusing failures

Server configuration parameters—external API credentials, database credentials, server ports, logger details, etc.—are commonly passed to web services using environment variables. The server code receives environment variables as strings, but what we want is more specific: a non-empty string, a port number, a valid URL, and so forth. If any of these are invalid, it usually doesn’t make sense to even run the server. If you’re lucky, you find the error right away: the server fails with a 401 Unauthorized from upstream or a null pointer error on startup. If you’re unlucky, the server fails in subtle ways that you realize only weeks later; maybe because the logger configuration was borked as well, so no logs made it back to you.

Similarly, if your server endpoint depends on receiving certain data in an HTTP request or response and the data is bad or missing, there’s usually not much sense in carrying on: just fail and, if necessary, log the error immediately.

Fail quickly on start-up

To avoid late failures from server configuration issues, validate the server configuration immediately on start-up and fail hard with an error message if any validation fails. In most environments this will either fail the deployment and roll back to the previous deployment – or put the service into an endless failure loop. In either case, you find out immediately and directly what the error is and can correct it straightaway.

The validation doesn’t have to be complicated. You could just load all of your environment variables in one place and throw an error if an environment variable is missing:

const throwError = (msg) => {
  throw new Error(msg)
}

const config = {
  productApiAppId: process.env.PRODUCT_API_APP_ID || throwError("Missing PRODUCT_API_APP_ID"),
  productApiAppKey: process.env.PRODUCT_API_APP_KEY || throwError("Missing PRODUCT_API_APP_KEY"),
  appPort: process.env.APP_PORT || throwError("Missing APP_PORT")
}

But it can be convenient to use a library, and often you’ll want validation beyond “does this environment variable exist or not”, like whether the string actually contains text or whether the port is an integer. This example checks exactly that and throws an error if any validation fails (in TypeScript using io-ts):

import * as IoTsReporter from "io-ts-reporters"
import * as t from "io-ts"
import * as E from "fp-ts/Either"
import {IntFromString, NonEmptyString} from "./validation"
import {pipe} from "fp-ts/function"

const AppConfigCodec = t.exact(t.type({
  productApiAppId: NonEmptyString,
  productApiAppKey: NonEmptyString,
  appPort: IntFromString
}))
type AppConfig = t.TypeOf<typeof AppConfigCodec>

const rawConfig = {
  productApiAppId: process.env.PRODUCT_API_APP_ID,
  productApiAppKey: process.env.PRODUCT_API_APP_KEY,
  appPort: process.env.APP_PORT
}

const validateOrThrow = <I, A>(decoder: t.Decoder<I, A>, val: I): A => {
  return pipe(
    decoder.decode(val),
    E.getOrElse<t.Errors, A>(errors => {
      const errorsMsg = IoTsReporter.formatValidationErrors(errors).join("\n")
      throw new Error(`Validation errors:\n${errorsMsg}`)
    })
  )
}
const startApp = (rawCfg: unknown) => {
  const cfg: AppConfig = validateOrThrow(AppConfigCodec, rawCfg)
  // ... start the server using the validated config
}

You could also do more involved checks beyond validating the config at a type level. Or use another TypeScript validation library like zod or joi or something JSON Schema-based. Other languages also have similar libraries. Or you just write a few little functions for validation and avoid the extra dependencies.
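If you go the dependency-free route, a couple of small helpers are usually enough. A minimal sketch (the helper names here are my own, not from any library): each validator either returns the parsed value or throws with the variable name in the message.

```typescript
// Minimal hand-rolled validators: return the parsed value or throw.
const requireString = (name: string, value: string | undefined): string => {
  if (value === undefined || value.trim() === "") {
    throw new Error(`Missing or empty ${name}`)
  }
  return value
}

const requirePort = (name: string, value: string | undefined): number => {
  const port = Number(requireString(name, value))
  if (!Number.isInteger(port) || port < 1 || port > 65535) {
    throw new Error(`${name} must be a port number, got "${value}"`)
  }
  return port
}

// Validate everything in one place on start-up
// (a sample env record stands in for process.env here).
const rawEnv: Record<string, string | undefined> = {
  PRODUCT_API_APP_ID: "abc123",
  APP_PORT: "8080"
}

const appConfig = {
  productApiAppId: requireString("PRODUCT_API_APP_ID", rawEnv.PRODUCT_API_APP_ID),
  appPort: requirePort("APP_PORT", rawEnv.APP_PORT)
}
```

The point is the same as with a library: everything is read and checked in one place, and the first bad value stops the server with a message naming the offending variable.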

These kinds of configuration errors usually aren’t hard to track down. On the other hand, in the time it takes to track down such an error even once, you could probably have rewritten the server to validate its configuration on startup and avoid the problem entirely.

Fail quickly on bad data

Another place where it is useful to fail quickly is if you get bad data, either fetching from an upstream data source or from an incoming request to your server. If your service depends on having data in a certain format, you should validate that format and fail the request immediately. For example, fetching upstream data using TypeScript and `io-ts` again:

import * as t from "io-ts"
// From the other example
import {validateOrThrow} from "./validation"

const ProductCodec = t.type({
  id: t.number,
  name: t.string
})
type Product = t.TypeOf<typeof ProductCodec>

const ProductResponseCodec = t.type({
  products: t.array(ProductCodec)
})
type ProductResponse = t.TypeOf<typeof ProductResponseCodec>

const fetchProductList = (apiCfg: { productUrl: string }): Promise<Array<Product>> => {
  return fetch(`${apiCfg.productUrl}/products`)
    .then(resp => resp.json())
    .then(json => validateOrThrow(ProductResponseCodec, json))
    .then(response => response.products)
}
Decoding HTTP queries, URL parameters, and request bodies into valid data should happen right when the request is received so you can have clear errors, trust the data after that point, and keep the parsing logic out of the rest of the code. HTTP server libraries often handle this in part, but the vast majority of server libraries will only decode parameters as strings or basic primitives. Further validation is often needed, and that is left to the user. I often see this parsing/validation strewn about unnecessarily in the rest of the code.
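As a sketch of what boundary parsing can look like without committing to any particular framework (the function and field names are illustrative): one function turns the raw string-valued query parameters into a typed value, throwing immediately on bad input, and everything past that point only ever sees the typed value.

```typescript
// Raw query parameters arrive as strings (or undefined); parse them once, at the edge.
type ProductQuery = { category: string; limit: number }

const parseProductQuery = (raw: Record<string, string | undefined>): ProductQuery => {
  const category = raw.category
  if (!category) {
    throw new Error("Missing query parameter: category")
  }
  const limit = Number(raw.limit ?? "20")
  if (!Number.isInteger(limit) || limit < 1) {
    throw new Error(`Invalid query parameter limit: "${raw.limit}"`)
  }
  return { category, limit }
}

// In a handler, fail the request immediately on bad input, e.g.:
// app.get("/products", (req, res) => {
//   try {
//     const query = parseProductQuery(req.query)
//     // everything past this point can trust `query`
//   } catch (err) {
//     res.status(400).send(String(err))
//   }
// })
```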

Additionally, HTTP server libraries only handle requests that your server receives. Servers often make requests to upstream sources as well, or receive data from queues, and libraries for these very rarely validate the data beyond perhaps parsing valid JSON.

In larger systems – where data may pass through many services – it’s also worth spending time thinking about open or extensible data formats versus closed data formats. If the data passes through many services and each service validates strictly, then any field change might require each and every service to be updated and deployed in the correct order to use data with the new field. What does each service actually depend on? 

If your service literally doesn’t care what the data is but just passes it on, then it can leave validation to the services downstream or upstream. If your service depends specifically on having a user field with a valid user ID, you should probably check for a user ID at the earliest point. There’s also a good amount of gray area in between, like whether extra unknown fields should be allowed in maps/records, but this depends on your specific context. The Clojure community seems to default to open data formats, while typed-language communities tend to default to closed formats. In the end, the choice is yours.
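A sketch of the open-format middle ground (names are illustrative): a pass-through service that depends only on a valid user ID validates exactly that one field and forwards everything else untouched, so fields added by other teams flow through without a redeploy.

```typescript
// Validate only the field this service depends on; pass unknown fields through.
type Message = { user: { id: string }; [extra: string]: unknown }

const requireUserId = (data: unknown): Message => {
  const msg = data as { user?: { id?: unknown } }
  if (typeof msg?.user?.id !== "string" || msg.user.id === "") {
    throw new Error("Message is missing a valid user id")
  }
  return data as Message
}

const incoming = { user: { id: "u-42" }, newField: "added by another team" }
const checked = requireUserId(incoming)
// `newField` survives untouched; only the field this service depends on was validated.
```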

Don’t types solve this problem already?

Well. Sort of. Optional type systems like TypeScript don’t require any validation. By default, you just assert that the data received is in the format you expect, and carry on under that assumption. Statically typed languages require valid types, but this often includes null values or strings that still need more checks. And even more strict languages like PureScript or Haskell don’t require you to validate or parse data immediately—you can always defer it to the point where you actually use it and suffer the late failure issues.

One way typed languages help here is that there are usually solid libraries and conventions for decoding and encoding typed values. This means you don’t have to reinvent all the wheels and can validate at a type level pretty easily.

However, regardless of language, you can still choose to validate at many different levels of strictness, from parsing JSON alone to type-level validation to more strict ways of making illegal states unrepresentable. Even using a strongly-typed, pure functional language doesn’t magically save you from stringly-typed APIs or primitive obsession. It just gives you better tools for choosing and enforcing the right level of validation.
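As one example of a stricter level than plain type-level validation, a “branded” type (a common TypeScript idiom; the names here are illustrative) makes an unvalidated value unrepresentable: the only way to obtain a `Port` is through the validating constructor, so any function that takes a `Port` can trust it without re-checking.

```typescript
// A branded type: plain numbers are not assignable to `Port`,
// so validation is guaranteed to have happened at the boundary.
type Port = number & { readonly __brand: "Port" }

const toPort = (value: number): Port => {
  if (!Number.isInteger(value) || value < 1 || value > 65535) {
    throw new Error(`Not a valid port: ${value}`)
  }
  return value as Port
}

const listen = (port: Port): void => {
  // No re-validation needed here: the type says it already happened.
  console.log(`Listening on ${port}`)
}

listen(toPort(8080))
// listen(8080)          // compile error: a plain number is not a `Port`
// listen(toPort(99999)) // throws at the boundary instead of failing later
```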

Ultimately my idea here is this. You should default to type-level validation of all external data at the boundaries of your application and fail immediately with obvious errors if the data is invalid. This is easy to do, removes obscure errors and, most importantly, improves your trust in the rest of your code. And trusting your code is a beautiful thing.

In the next server patterns post, I will talk about test independence.
