ReactRelayNetworkModern (for Relay Modern)

ReactRelayNetworkModern is a Network Layer for Relay Modern with various middlewares which can manipulate requests/responses on the fly (change auth headers, change the request url, or perform a fallback if a request fails), batch several Relay requests into one HTTP request within a timeout window, cache queries, and support server-side rendering.

Network Layer for Relay Classic can be found here.

Migration guide from v1 to v2 can be found here.

ReactRelayNetworkModern can be used in the browser, in React Native, or on a Node server for rendering. Under the hood this module uses the global fetch method, so if your client is too old, explicitly import a proper polyfill into your code (e.g. whatwg-fetch, node-fetch or fetch-everywhere).
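For example, a minimal sketch for an old browser, assuming the whatwg-fetch package mentioned above:

import 'whatwg-fetch'; // polyfills the global fetch() method
import { RelayNetworkLayer } from 'react-relay-network-modern';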

Install

yarn add react-relay-network-modern
OR
npm install react-relay-network-modern --save

What if regeneratorRuntime is not defined?

<img width="493" alt="screen shot 2018-02-20 at 20 07 45" src="https://user-images.githubusercontent.com/1946920/36428334-da402a6e-1679-11e8-9897-7e730ab3123e.png">

I don't want to bundle regenerator-runtime with the library - it's a largish dependency and there's a good chance that the user is already including code which depends on it (eg. via babel-polyfill). If we bundle it they'll get a duplicate copy and that's bad.

So if you get regeneratorRuntime is not defined, do the following:

import 'regenerator-runtime/runtime';
import { RelayNetworkLayer } from 'react-relay-network-modern';

What if Webpack errors with Error: Cannot find module 'core-js/modules/es6.*'?

core-js is not an explicit dependency of this package, because it would add about 30 kB to client bundles.

If this error occurs, you can do one of the following:

- add core-js (v2, which provides the es6.* module paths) to your app's own dependencies so that Webpack can resolve those modules;
- switch to a build of this package that does not rely on them (see Different builds below).

Different builds

This library contains different builds for different purposes:

// Default import for using in any browser
// last 5 versions, ie 9, defaults
import { RelayNetworkLayer } from 'react-relay-network-modern';

// Source code without Flowtype declarations
import { RelayNetworkLayer } from 'react-relay-network-modern/es';

Middlewares

Built-in middlewares

Standalone package middlewares

Example of injecting NetworkLayer with middlewares on the client side

import { Environment, RecordSource, Store } from 'relay-runtime';
import {
  RelayNetworkLayer,
  urlMiddleware,
  batchMiddleware,
  // legacyBatchMiddleware,
  loggerMiddleware,
  errorMiddleware,
  perfMiddleware,
  retryMiddleware,
  authMiddleware,
  cacheMiddleware,
  progressMiddleware,
  uploadMiddleware,
} from 'react-relay-network-modern';
import * as uuid from 'uuid'; // generates the `X-Request-ID` header value in the inline middleware below

const network = new RelayNetworkLayer(
  [
    cacheMiddleware({
      size: 100, // max 100 requests
      ttl: 900000, // 15 minutes
    }),
    urlMiddleware({
      url: (req) => Promise.resolve('/graphql'),
    }),
    // Deprecated batch middleware
    // legacyBatchMiddleware({
    //   batchUrl: (requestMap) => Promise.resolve('/graphql/batch'),
    //   batchTimeout: 10,
    // }),
    batchMiddleware({
      batchUrl: (requestList) => Promise.resolve('/graphql/batch'),
      batchTimeout: 10,
    }),
    __DEV__ ? loggerMiddleware() : null,
    __DEV__ ? errorMiddleware() : null,
    __DEV__ ? perfMiddleware() : null,
    retryMiddleware({
      fetchTimeout: 15000,
      retryDelays: (attempt) => Math.pow(2, attempt + 4) * 100, // or simple array [3200, 6400, 12800, 25600, 51200, 102400, 204800, 409600],
      beforeRetry: ({ forceRetry, abort, delay, attempt, lastError, req }) => {
        if (attempt > 10) abort();
        window.forceRelayRetry = forceRetry;
        console.log('call `forceRelayRetry()` for an immediate retry! Or wait ' + delay + ' ms.');
      },
      statusCodes: [500, 503, 504],
    }),
    authMiddleware({
      token: () => store.get('jwt'), // your own token storage (e.g. a localStorage wrapper), not the Relay store below
      tokenRefreshPromise: (req) => {
        console.log('[client.js] resolve token refresh', req);
        return fetch('/jwt/refresh')
          .then((res) => res.json())
          .then((json) => {
            const token = json.token;
            store.set('jwt', token);
            return token;
          })
          .catch((err) => console.log('[client.js] ERROR can not refresh token', err));
      },
    }),
    progressMiddleware({
      onProgress: (current, total) => {
        console.log('Downloaded: ' + current + ' B, total: ' + total + ' B');
      },
    }),
    uploadMiddleware(),

    // example of the custom inline middleware
    (next) => async (req) => {
      req.fetchOpts.method = 'GET'; // change the default POST request method to GET
      req.fetchOpts.headers['X-Request-ID'] = uuid.v4(); // add an `X-Request-ID` header to the request
      req.fetchOpts.credentials = 'same-origin'; // send cookies (credentials) to the same origin only
      // req.fetchOpts.credentials = 'include'; // send cookies for CORS requests (credentials to other origins)

      console.log('RelayRequest', req);

      const res = await next(req);
      console.log('RelayResponse', res);

      return res;
    },
  ],
  opts
); // as second arg you may pass advanced options for RRNL

const source = new RecordSource();
const store = new Store(source);
const environment = new Environment({ network, store });

Advanced options (2nd argument after middlewares)

RelayNetworkLayer may accept additional options:

const middlewares = []; // array of middlewares
const options = {}; // optional advanced options
const network = new RelayNetworkLayer(middlewares, options);

Available options:

Server-side rendering (SSR)

See react-relay-network-modern-ssr for SSR middleware.

How middlewares work internally

Middlewares on the bottom layer use the fetch method, so `req` must be compliant with fetch() options, and `res` can be obtained via `resPromise.then(res => ...)`, as returned by fetch().

Middleware that needs access to the raw response body from fetch (before it has been consumed) can set isRawMiddleware = true, see progressMiddleware for example. It is important to note that response.body can only be consumed once, so make sure to clone() the response first.
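For illustration only, a minimal sketch of such a raw middleware (the rawLoggerMiddleware name is hypothetical, and it assumes the raw response behaves like a fetch Response; see progressMiddleware for a real implementation):

function rawLoggerMiddleware() {
  const middleware = (next) => async (req) => {
    const res = await next(req); // raw fetch response, body not consumed yet
    // response.body can be consumed only once, so read a clone
    // and return the original untouched for the rest of the chain
    res
      .clone()
      .text()
      .then((text) => console.log('raw response body:', text));
    return res;
  };
  middleware.isRawMiddleware = true; // request the raw response instead of the parsed one
  return middleware;
}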

Middlewares have 3 phases:

- setup phase, which runs once when the middleware is created (a good place to process opts);
- capturing phase, which runs before the request is passed to the next middleware;
- bubbling phase, which runs after the response comes back from the underlying middlewares.

Basic skeleton of middleware:

export default function skeletonMiddleware(opts = {}) {
  // [SETUP PHASE]: here you can process `opts`, when you create Middleware

  return (next) => async (req) => {
    // [CAPTURING PHASE]: here you can change the `req` object before it is passed to the following middlewares.
    // ...some code which modify `req`

    const res = await next(req); // pass request to following middleware and get response promise from it

    // [BUBBLING PHASE]: here you may change the response from the underlying middlewares
    // ...some code which processes `res`

    return res; // return response to upper middleware
  };
}

Middlewares use a LIFO (last in, first out) stack. Or simply put, they are combined with a compose function. So if you pass middlewares [M1(opts), M2(opts)] to the NetworkLayer, a request will be processed this way (see the sketch below):

- capture phase of M1
- capture phase of M2
- the fetch request itself
- bubbling phase of M2
- bubbling phase of M1
- the response is returned to Relay
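A rough sketch with two logging middlewares (illustrative only, the names M1/M2 are mine):

const M1 = (next) => async (req) => {
  console.log('M1 capture'); // 1st
  const res = await next(req);
  console.log('M1 bubbling'); // 4th
  return res;
};

const M2 = (next) => async (req) => {
  console.log('M2 capture'); // 2nd
  const res = await next(req); // the actual fetch happens after the last middleware
  console.log('M2 bubbling'); // 3rd
  return res;
};

const network = new RelayNetworkLayer([M1, M2]);
// logs per request: M1 capture, M2 capture, (fetch), M2 bubbling, M1 bubbling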

Batching several requests into one

Joseph Savona wrote: For legacy reasons, Relay splits "plural" root queries into individual queries. In general we want to diff each root value separately, since different fields may be missing for different root values.

Also, if you use react-relay-router and have multiple root queries in one route pass, you may notice that the default network layer produces several HTTP requests.

To avoid these multiple HTTP requests, ReactRelayNetworkModern can combine them into a single HTTP request.

Example how to enable batching

...on server

First, prepare the server to process batch requests:

import express from 'express';
import graphqlHTTP from 'express-graphql';
import { graphqlBatchHTTPWrapper } from 'react-relay-network-modern';
import bodyParser from 'body-parser';
import myGraphqlSchema from './graphqlSchema';

const port = 3000;
const server = express();

// set up the standard `graphqlHTTP` express middleware
const graphqlServer = graphqlHTTP({
  schema: myGraphqlSchema,
  formatError: (error) => ({
    // better errors for development. `stack` used in `gqErrors` middleware
    message: error.message,
    stack: process.env.NODE_ENV === 'development' ? error.stack.split('\n') : null,
  }),
});

// declare route for batch query
server.use('/graphql/batch', bodyParser.json(), graphqlBatchHTTPWrapper(graphqlServer));

// declare standard graphql route
server.use('/graphql', graphqlServer);

server.listen(port, () => {
  console.log(`The server is running at http://localhost:${port}/`);
});

A more complex example shows how to use a single DataLoader for all (batched) queries within one HTTP request.

If you are on Koa@2, koa-graphql-batch provides the same functionality as graphqlBatchHTTPWrapper (see its docs for usage example).

...on client

Once the server side is ready to accept batch queries, you may enable batching on the client:

const network = new RelayNetworkLayer([
  // deprecated "legacy" batch middleware
  // legacyBatchMiddleware({
  //   batchUrl: '/graphql/batch', // <--- route for batch queries
  // }),
  batchMiddleware({
    batchUrl: '/graphql/batch', // <--- route for batch queries
  }),
]);

How batching works internally

Internally, batching in the NetworkLayer prepares a list of queries [ {query, variables}, ...] and sends it to the server. The server returns a list of results [ {data}, ...] and is expected to return the results in the same order as the requests.
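For illustration, the wire format described above looks roughly like this (the query strings are placeholders, not real queries):

// body of the POST request sent to the batch endpoint
const batchRequestBody = [
  { query: 'query Q1 { viewer { id } }', variables: {} },
  { query: 'query Q2($id: ID!) { node(id: $id) { id } }', variables: { id: '4' } },
];

// body of the response the server is expected to return, in the same order
const batchResponseBody = [
  { data: { viewer: { id: '...' } } },
  { data: { node: { id: '4' } } },
];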

As of v4.0.0, the batch middleware that used request IDs in queries and corresponding results has been renamed legacyBatchMiddleware. The legacy middleware included a request ID with each query in the batch and expected the server to return each result with the corresponding request ID. The new batchMiddleware simply expects results to be returned in the same order as the batched queries.

NOTE: legacyBatchMiddleware does not correctly deduplicate queries when batched because query variables may be ignored in a comparison. This means that two identical queries with different variables will show the same results due to a bug (#31). It is highly encouraged to use the new order-based batchMiddleware, which still deduplicates queries, but includes the variables in the comparison.

Contribute

I actively welcome pull requests with code and doc fixes. Also, if you made a great middleware and want to share it within this module, please feel free to open a PR.

License

MIT