Ross Esmond

Code, Prose, and Mathematics.


One Request Rule

If a data request across a network may prompt a follow-up request, there should be some mechanism to include the follow-up in the initial request to reduce latency. Follow-ups come in two variants. The first is a conditional follow-up, where the result of the first request is examined to determine what additional requests should be made. The second is a relational follow-up, where the contents of the first request are used as arguments in another request. In both cases the service should allow the follow-up to be encoded in the initial request, such that the service may return the additional data in one response. With a conditional follow-up, this requires that the request allow for control statements; with a relational follow-up, it requires that the request be able to fetch multiple assets, using the result of one query as input to another. The most well-known example of a solution to the One Request Rule is SQL, though there are many more.
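The cost of a relational follow-up can be made concrete with a small sketch in Python. Everything here is hypothetical: the store, the records, and the pretend round-trip counter stand in for a real networked service.

```python
# Hypothetical remote store; each call to `request` stands in for one
# network round trip to the service.
STORE = {"Bob": {"friend": "Alice"}, "Alice": {"friend": "Bob"}}

ROUND_TRIPS = 0

def request(name):
    """One network round trip to the (pretend) remote service."""
    global ROUND_TRIPS
    ROUND_TRIPS += 1
    return STORE[name]

def friend_of_naive(name):
    """Relational follow-up resolved client-side: two round trips."""
    record = request(name)              # initial request
    return request(record["friend"])    # follow-up request

def friend_of_combined(name):
    """The follow-up is encoded in the initial request, so the service
    resolves the relationship and answers in a single round trip."""
    global ROUND_TRIPS
    ROUND_TRIPS += 1
    record = STORE[name]                # server-side lookup, no extra trip
    return STORE[record["friend"]]

naive = friend_of_naive("Bob")        # costs two round trips
combined = friend_of_combined("Bob")  # costs one round trip
```

Both functions return the same record; the difference is entirely in how many times the client has to cross the network to get it.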

Examples

There are endless examples of solutions to the One Request Rule, be it with database requests, network requests, or even file reads. This list is not exhaustive, but it illustrates the reach of the follow-up request problem and the diversity of solutions to it.

SQL

A major function of the Structured Query Language is to facilitate complex requests to a database such that they will not require a follow-up. To achieve this, most variants of SQL come with branching statements, to allow for conditional requests, and nested queries, to allow for relational requests. The IF and CASE statements allow for some branching, such that a small deviation in the required data, based on information stored in the database, doesn’t require multiple round trips.

SELECT * FROM table WHERE name = IIF(0 < (SELECT prob FROM odds WHERE name = 'outcome'), 'damage', 'miss');

The result of requests may be included in other, larger requests, so that relationships may be followed without additional input from the client.

SELECT name FROM thangs WHERE name = (SELECT friend FROM thangs WHERE name = 'Bob');
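The nested query above can be run as written against an in-memory SQLite database. The schema and rows below are invented purely for this sketch, but they show the database resolving the Bob-to-friend relationship in a single request.

```python
import sqlite3

# An in-memory database with the illustrative `thangs` table from above;
# the schema and rows are invented for this sketch.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE thangs (name TEXT, friend TEXT)")
db.executemany(
    "INSERT INTO thangs VALUES (?, ?)",
    [("Bob", "Alice"), ("Alice", "Carol"), ("Carol", "Bob")],
)

# The relational follow-up is nested inside the initial query, so the
# database follows Bob -> friend itself, in one round trip.
row = db.execute(
    "SELECT name FROM thangs "
    "WHERE name = (SELECT friend FROM thangs WHERE name = 'Bob')"
).fetchone()
```

Without the nested query, the client would have to fetch Bob's row, read the friend column, and then issue a second query for that name.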

GraphQL

Of course, GraphQL’s purpose is to eliminate relational follow-ups when requesting assets on the web, as its site clearly shows.

GraphQL queries access not just the properties of one resource but also smoothly follow references between them.

GraphQL represents its data as a large graph of relations, and allows queries to follow as many edges of that graph as the client finds necessary in order to avoid follow-ups.

{
  thangs(name: "Bob") {
    friend
  }
}
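The mechanics behind a query like this can be sketched with a toy resolver. This is not real GraphQL; the graph, the selection format, and the function names are all invented to show how a server can follow edges on the client's behalf within one request.

```python
# A toy resolver, not real GraphQL: it walks a dictionary "graph" and
# follows as many edges as the query's selection requests, all within
# one server-side call.
GRAPH = {
    "Bob": {"name": "Bob", "friend": "Alice"},
    "Alice": {"name": "Alice", "friend": "Bob"},
}

def resolve(name, selection):
    """Return the requested fields of one node, recursing into nested
    selections so edges are followed without a follow-up request."""
    node = GRAPH[name]
    result = {}
    for field, subselection in selection.items():
        if subselection is None:
            result[field] = node[field]
        else:  # follow the edge named by `field`
            result[field] = resolve(node[field], subselection)
    return result

# Rough equivalent of: { thangs(name: "Bob") { friend } }
one_hop = resolve("Bob", {"friend": None})
# Following two edges still costs a single request:
two_hops = resolve("Bob", {"friend": {"friend": None}})
```

The depth of the selection decides how many edges get followed, but the number of requests stays at one.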

Cap’n Proto

Cap’n Proto is a data interchange format that lays out data in the same form in which it will be accessed in memory, eliminating a parse step and reducing read times. It comes with a bonus feature, however, which it calls time-traveling RPC. The feature combines nested calls into one round trip so as to avoid waiting on intermediate results. A call written in the form bar(foo()) will not wait for the result of foo() before asking for the result of bar(result), but will instead preemptively send the entire expression, so that evaluation of bar may begin the moment foo resolves.
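The idea can be sketched with a tiny promise-pipelining mock-up. This is in the spirit of Cap'n Proto's time-traveling RPC, not its actual API: the Call class, send, and evaluate functions are all invented for illustration.

```python
# A sketch of promise pipelining, invented for illustration: instead of
# waiting for foo()'s result, the client ships the whole expression
# bar(foo()) and the server resolves it in one round trip.

class Call:
    """A deferred call whose arguments may themselves be deferred calls."""
    def __init__(self, fn, *args):
        self.fn, self.args = fn, args

ROUND_TRIPS = 0

def send(call):
    """One round trip: the server evaluates the whole pipelined expression."""
    global ROUND_TRIPS
    ROUND_TRIPS += 1
    return evaluate(call)

def evaluate(call):
    # Resolve nested calls server-side, so no intermediate result ever
    # travels back to the client.
    args = [evaluate(a) if isinstance(a, Call) else a for a in call.args]
    return call.fn(*args)

def foo():
    return 2

def bar(x):
    return x * 10

# bar(foo()) is expressed up front and costs one round trip, not two.
result = send(Call(bar, Call(foo)))
```

The intermediate value of foo() never crosses the network; only the final result does.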

Server-side Rendering

Beyond the world of databases, server-side rendering satisfies the One Request Rule for web apps. Remix in particular boasts its ability to eliminate most web app loading states by satisfying requests on the server before the client ever displays the page.

Remix loads data in parallel on the server and sends a fully formed HTML document.
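The shape of that idea can be sketched without any framework. This is not Remix's actual API; the loaders and page below are invented to show the server resolving every data dependency, in parallel, before a complete document is sent.

```python
import asyncio

# A sketch of server-side rendering, invented for illustration: the
# server resolves all data dependencies in parallel, then sends a fully
# formed HTML document, so the client never shows a loading state.

async def load_user():
    return "Ross"

async def load_posts():
    return ["One Request Rule"]

async def render_page():
    # Both loaders run in parallel on the server before anything is sent.
    user, posts = await asyncio.gather(load_user(), load_posts())
    items = "".join(f"<li>{p}</li>" for p in posts)
    return f"<html><body><h1>{user}</h1><ul>{items}</ul></body></html>"

html = asyncio.run(render_page())
```

The client's one request returns a document that needs no data follow-ups before it can be displayed.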

HTTP Server Push

HTML, by its very design, creates a problematic domain in which to solve the One Request Rule. When a request is made for a document, the client does not know what assets it must request to construct the document as it will be described by the HTML, be them images, CSS, or javascript. The client must then wait for the core document in order to attain a list of follow-up requests to be made. The HTTP Server Push proposal aimed to preempt follow-ups by pushing follow-up assets to the client before they had been requested. The client would then use the assets once neededed without requiring another round trip over the network. This resulted in increased bandwidth usage, as the server often misjudged which assets the client had already cached, but reduced latency, as follow-ups were satisfied after one request.