Good Code is inspired by the television program Good Eats, in which the host explores a particular food subject in depth--the history, usage, advice, and personal opinions surrounding an ingredient, cuisine, or dish.

Humans value synchronicity--in dance, music, cooking, business, transportation, and more. Being in sync with others is valuable, and when we build systems for computers, they need to be in sync as well. Devices interacting with the CPU, the kernel with programs, and programs with various APIs--computers are always operating as orchestras. Imagine, though, if no more than one section of an orchestra could play at once. The violins would play, then stop, and then the horns would play. The violins would not be able to play again until the horns stopped. That wouldn't make for the best use of the orchestral performers. That'd be similar to blocking code--commonly called synchronous code, which is a bit of a misnomer in my opinion. If the sections were able to play together, though, it'd be non-blocking or asynchronous code.

There are a variety of methods for creating asynchronous code. At the lower level, there are interrupts, processes, and context switching. At an application level we can use threads, events, callbacks, and promises. Promises are objects that _promise_ to eventually provide some value that may not be immediately available. They've been around for a while in software engineering, but they're resurfacing in large part due to JavaScript's growing popularity. As JavaScript has become more popular, and as teams and software projects using it have grown, the need for clean and understandable asynchronous JavaScript code has grown as well. If you understand where JavaScript has been and where it's going, then it'll be easy to see why Promises can help write some very...

History

The term "Promise" in programming is 40 years old this year, and it was proposed by Daniel P. Friedman and David Wise in a 1976 paper titled "The Impact of Applicative Programming on Multiprocessing." Several terms had been proposed around the same time period, but "Promise" stuck. Promises gave a pattern to distributed system programmers looking for a way to represent values or data that had not yet been calculated, but would eventually be available. It's a layer of abstraction around the nitty gritty signals, interrupts, threads, locks, and other lower level constructs that make distributed and parallel computation work behind the curtain. Promises provided a common interface--a pattern--for programmers to use for that entire class of problems.

JavaScript, on the other hand, comes to promises from a different angle. It traditionally relies on events and on callbacks passed as function arguments to perform asynchronous actions. Events were built into JavaScript in order to interact with the DOM, so they've been around for quite some time. Callbacks as function arguments are an easy solution for a few reasons (see the short sketch after this list):

  • Making anonymous functions is easy in JavaScript.
  • All functions are objects, and objects can be put into any variable, including function arguments.
  • Executing JavaScript variables as functions is easy.
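
For a quick illustration of those three points, here's a minimal sketch using the built-in setTimeout API; the greet variable is purely hypothetical.

var greet = function(name) {       // functions are objects and fit in any variable
  console.log('hello, ' + name);
};
setTimeout(function() {            // anonymous functions are easy to create inline
  greet('world');                  // and a variable holding a function is easy to call
}, 1000);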

That being said, callbacks as function arguments can also lead to the triangular "callback hell." When JS code is simply calling out to the network to download a JSON file we might have a couple of callbacks at most. However, more recent JS applications might call out to five or more asynchronous APIs. That pyramid of callbacks can get messy and pretty unreadable. As JavaScript has found its way into more and more of our software, engineers have looked for ways to improve the way we interface with asynchronous devices and services. Deeply nested code is a pain point that requires some targeted relief.

A Bird's Nest

Think about the way we write user stories: "This happens, then that, then this, then that." We don't nest our sentences within each other. We don't make triangles out of our sentences, so why should we settle for deeply nested logic? Even a simple example of nested anonymous callbacks can be difficult to read for a beginner, and unnatural for a non-JavaScript developer. Let's look at some code that uses imaginary asynchronous APIs to see why.

network.request('resource.json', function(error, data) {
  if (error) {
    // handle error
    return;
  }
  storage.write('resource.json', data, function(error, file) {
    if (error) {
      // handle error
      return;
    }
    bluetooth.transmit(file.status, function(error, btResponse) {
      if (error) {
        // handle error
        return;
      }
      ui.render(btResponse.status);
    });
  });
});

This code will download a JSON file, save it to disk, send the status of that file to a bluetooth device, and render the bluetooth device status to some UI. We'll wave our hands at the use case here, but the point is that we're utilizing three different asynchronous APIs. Notice how each new function call nests us deeper, and how each nesting creates a more complicated scope. For people who aren't experienced writing or reading code in this style, it can be difficult to know where to insert code or make changes. Worse, what if an error is thrown? Notice that we have to check for errors in each callback, which is similar to how Node.js traditionally handles asynchronous functions. We can't just wrap this in a big try-catch and sort it out--by the time a callback runs and throws, the surrounding try block has long since finished.
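
To see why, here's a rough sketch using the same imaginary network API--the try block finishes long before the callback runs, so anything thrown inside the callback never reaches the surrounding catch.

try {
  network.request('resource.json', function(error, data) {
    // this callback runs later, after the try block has already finished,
    // so this throw is never caught below
    throw new Error('oops');
  });
} catch (e) {
  // never reached for errors thrown inside the asynchronous callback
  console.log('caught: ' + e.message);
}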

Hatching a Promise

What if network.request(), storage.write(), and bluetooth.transmit() each returned a Promise? Recall that a Promise is an object that promises to eventually provide a value in the future. To get that value, we pass a function to the Promise using .then(), and the value arrives as that function's argument.

network.request('resource.json')
.then(function(data) {
  return storage.write('resource.json', data);
})
.then(function(file) {
  return bluetooth.transmit(file.status);
})
.then(function(btResponse) {
  ui.render(btResponse.status);
})
.catch(function(error) {
  // handle error here
});

We can follow the execution of this straight down. First it requests from the network, then it writes the data to storage, then it transmits to the bluetooth device, then it renders the status to the UI. The way we write the code for Promises follows from the language we use to talk about what needs to occur. Additionally, if there are any errors, they'll be fed into .catch() with the argument error representing the reason it failed.

Define "Promise"

Now's a good time to cover some specifics about Promises. Promises have three states:

  • Pending -- The promise has not yet been fulfilled or rejected.
  • Fulfilled -- The promise has been resolved with a value.
  • Rejected -- The promise threw an error or was otherwise rejected with a reason.

Promises also have at least one function available as a property, and normally that function's signature is: Promise.then(fulfilledCallback, [rejectedCallback]). Some libraries also provide Promise.catch(rejectedCallback), which allows us to catch errors from any earlier, unhandled step in the chain, similar to how try-catch allows us to catch exceptions across many lines of code.
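
As a rough sketch of the two forms, again using the imaginary network API from earlier:

// rejection handled with the optional second argument to .then()
network.request('resource.json')
.then(function(data) {
  // use data here
}, function(error) {
  // handles a rejection from network.request() only
});

// rejection handled with .catch(), which also covers an error thrown in the fulfilled callback
network.request('resource.json')
.then(function(data) {
  // use data here
})
.catch(function(error) {
  // handles a rejection from network.request() or an error thrown above
});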

Once a promise has changed from pending to fulfilled or rejected it doesn't change back, and that's when the next .then() or .catch() function in the chain is called. Additionally, what's shown in this example is that Promises are made to be chained together. Promise.then() and Promise.catch() always return a new Promise, so we can chain together as many Promises as we like.

If we return nothing from inside .then() or .catch(), then nothing is passed along to the next callback in the chain--it receives undefined, as though we'd returned an empty fulfilled Promise. If we return a defined non-Promise value from inside the callback, it will be passed along to the next callback in the chain as a fulfilled Promise. If we return a Promise from inside .then() or .catch(), the next callback in the chain will receive the result of that Promise once it's fulfilled. And best of all, if an exception is thrown, it's handled as a rejected Promise.

network.request('resource.json')
.then(function(data) {
  console.log("i got the data!");
  return data;
})
.then(function(data) {
  console.log("i also have the data!");
  // nothing is returned here, so the next callback receives undefined
})
.then(function() {
  // `data` is undefined here
  console.log("my code is still executing! no data though!");
})
.catch(function(error) {
  // handle error here
});

Notice that the scope is well defined from step to step because we use no nesting. Additionally, we'll still pass an error, no matter where it occurs, down to the final .catch() call. Promises in JavaScript are designed to emulate the way that we write code synchronously. They wrap around the asynchronous messes, invisibly push return values forward, and effectively wrap our callbacks in try-catch statements.
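
Here's a hedged sketch of that behavior with the same imaginary APIs: a throw anywhere in the chain behaves like a rejected Promise and lands in the final .catch().

network.request('resource.json')
.then(function(data) {
  if (!data) {
    throw new Error('no data'); // treated as a rejected Promise
  }
  return data;
})
.then(function(data) {
  console.log(data); // skipped entirely if the throw above happened
})
.catch(function(error) {
  // receives the thrown Error (or any earlier rejection) as `error`
});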

Decisions and Promises

What does it look like when we have to make decisions while using Promises? Let's look at making the decision inside of a chain. If the data from the network says to store it, we'll do that. Otherwise we'll transmit the data to the bluetooth device immediately. And in the end we'll always render something to the UI.

network.request('resource.json')
.then(function(data) {
  if (data.shouldWriteToStorage) {
    return storage.write('resource.json', data)
    .then(function(file) {
      return bluetooth.transmit(file.status);
    });
  }
  else {
    return bluetooth.transmit(data);
  }
})
.then(function(btResponse) {
  ui.render(btResponse.status);
});

This works, but I prefer to organize it differently. Notice how the last callback in the chain is always getting a btResponse? Because of this, we can abstract the decision making into its own function, which makes for better overall software organization. We'll write a function that returns a Promise that fulfills with a btResponse.

function transmitToBluetooth(data) {
  if (data.shouldWriteToStorage) {
    return storage.write('resource.json', data)
      .then(function(file) {
        return bluetooth.transmit(file.status);
      });
  }
  else {
    return bluetooth.transmit(data);
  }
}
network.request('resource.json')
.then(transmitToBluetooth)
.then(function(btResponse) {
  ui.render(btResponse.status);
});

Now let's look at what it means to make a decision _outside_ of the chain. In this case, we'll only store the data if, at the beginning of the request, we know that it's time to do so. We could imagine using this to prevent I/O thrashing or utilize caching of an often-changing piece of data. We'll pretend there's a function out there called shouldWriteToStorage() to tell us whether we should or should not write to storage.

var promiseChain = network.request('resource.json');
if (shouldWriteToStorage()) {
  promiseChain = promiseChain.then(function(data) {
    return storage.write('resource.json', data);
  })
  .then(function(file) {
    return bluetooth.transmit(file.status);
  });
}
else {
  promiseChain = promiseChain.then(function(data) {
    return bluetooth.transmit(data);
  });
}
promiseChain.then(function(btResponse) {
  ui.render(btResponse.status);
});

This code works and the use case is valid, but I prefer making decisions within chains. A function that returns a Promise is easier to use, test, and modify than a function that attaches callbacks to a chain. If we were to try to reorganize this code similarly to how I did in the previous example, we'd get something that looks like this.

function transmitToBluetooth(promiseChain) {
  if (shouldWriteToStorage()) {
    promiseChain = promiseChain.then(function(data) {
      return storage.write('resource.json', data);
    })
    .then(function(file) {
      return bluetooth.transmit(file.status);
    });
  }
  else {
    promiseChain = promiseChain.then(function(data) {
      return bluetooth.transmit(data);
    });
  }
  return promiseChain;
}
var promiseChain = network.request('resource.json');
promiseChain = transmitToBluetooth(promiseChain);
promiseChain.then(function(btResponse) {
  ui.render(btResponse.status);
});

We could test the first transmitToBluetooth(data) function outside the scope of an existing Promise. However, to use the second transmitToBluetooth(promiseChain) function, we must provide a Promise. This is why I prefer organizing Promise-returning functions so that they receive actual data as their arguments as often as possible. Otherwise we're not doing much better than callbacks as function arguments.
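
As a rough illustration of that difference--fakeData here is a hypothetical stand-in for real network data--the first version can be exercised directly, while the second needs a Promise built for it first:

var fakeData = { shouldWriteToStorage: false };

// the data-taking transmitToBluetooth(data) can be called with plain data
transmitToBluetooth(fakeData)
.then(function(btResponse) {
  console.log(btResponse.status);
});

// the chain-taking transmitToBluetooth(promiseChain) needs a Promise first
// (Promise.resolve() is the ES6 way to wrap a plain value in a fulfilled Promise)
transmitToBluetooth(Promise.resolve(fakeData))
.then(function(btResponse) {
  console.log(btResponse.status);
});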

Making and Keeping Promises

We've waved our hands at how Promise objects are constructed and what libraries out in the world of JavaScript support Promises. Folks who have used Angular have probably seen $q, which is actually based on a library named q. Both of them are Promise libraries that work in very similar ways. The code below demonstrates how to construct a Promise in $q.

function promiseReturningFunction() {
  // create a deferral, which links a Promise to resolve/reject controls
  var deferred = $q.defer();
  setTimeout(function() {
    deferred.resolve(3); // fulfill the Promise with the value 3 after one second
  }, 1000);
  return deferred.promise; // the actual Promise object, so callers can attach .then()
}

Let's break this down line by line. $q.defer() returns an object, a deferral, which provides us with some functions and properties. deferred.resolve(value) resolves the Promise linked to the deferral--it's basically like a return statement for the Promise. deferred.reject(reason) rejects the Promise with a reason, similar to throwing an error. Most importantly, we always want to return deferred.promise, because that's the actual Promise object that we can attach .then() calls to.
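
For completeness, a minimal sketch of the rejecting side using the same deferral pattern (the function name and the one-second failure are just for illustration):

function rejectingPromiseReturningFunction() {
  var deferred = $q.defer();
  setTimeout(function() {
    // reject the Promise with a reason, similar to throwing an error
    deferred.reject(new Error('something went wrong'));
  }, 1000);
  return deferred.promise;
}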

Recently, JavaScript gained native support for Promises in ES6. The interface for using these Promises is not unlike $q's, but the creation of those Promises is a bit different.

// ES6
function promiseReturningFunction() {
  // the executor function receives resolve and reject from the Promise constructor
  return new Promise(function(resolve, reject) {
    setTimeout(function() {
      resolve(3); // fulfill the Promise with the value 3 after one second
    }, 1000);
  });
}

resolve() and reject() work just as they did before, but instead of being attached to a deferral object, they are passed into an anonymous function that executes the code for the Promise.

In either case, the value returned from promiseReturningFunction is a promise that we can call .then() on, and provide a function to receive the value. After 1 second, the provided function will receive the number 3 as the first argument.
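
For example, a short usage sketch that works with either implementation above:

promiseReturningFunction()
.then(function(value) {
  console.log(value); // logs 3 after about one second
});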

A Promise Fulfilled

Engineers work best when code is clean and organized, and Promises help us do that. There's a quote that has been shaping the way I look at software engineering for quite some time: "Programs must be written for people to read, and only incidentally for machines to execute," written by Harold Abelson for the book "Structure and Interpretation of Computer Programs." It's an important conclusion that I try to keep at the front of my mind. The fact that the code runs quickly on a machine may seem like the highest order bit, but the fastest code that no one can read or edit is only marginally more useful than gibberish. When we consider that the patterns we use are the foundation of our code, we should ensure that those patterns are the best fit. As you work Promises into your code and introduce them to your collaborators, you'll soon find that they are very good code.

Cecelia Wren is a Senior Consultant specializing in mobile development at 6D global. She creates Twitter robots, web and mobile apps, development environments and workflows, games, and npm packages. Her work seeks to improve software, communities, and experiences through feminist activism and code.