Motivation

User interfaces are reactive systems which can be accurately modeled by state machines. There are a number of state machine libraries in the field with varying design objectives. We have proposed a state machine library with a minimal API, consisting of a single effect-less function.

In this design, the machine is a function which takes inputs (events to be processed by the machine) and outputs commands to be executed. While it is entirely possible to model an entire web application with state machines, a common use case is to model a component and reuse that component.

This library proposes an integration of the Kingly state machine library with React, in the shape of a <Machine /> component which can be used in a React application like any other component. The <Machine /> component listens to user events, computes commands, and executes them, generally leading to an update of the screen.

Installation

react is a peer dependency. The component has been tested with React 16.4 and above. It may, however, work with lower versions of React, as it does not use hooks, context, etc. If you encounter an issue with a given version of React, please log an issue.

npm install react-state-driven

API

<Machine fsm, renderWith, commandHandlers, eventHandler?, preprocessor?, effectHandlers?, options? />

We expose a <Machine /> React component which holds the state machine and implements its behaviour using React's API. The Machine component's behaviour is specified by its props.

There are three mandatory props and four optional props. The three mandatory props are the machine specifying the component's behavior, the component to render the screens with, and the command handlers to execute the commands computed by the machine.
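For orientation, here is a minimal sketch of the wiring with only the mandatory props. The names (fsmDef, settings, App, COMMAND_SEARCH) are illustrative placeholders rather than code from the demos, and we assume the machine is created with Kingly's createStateMachine factory:

import React from "react";
import { createStateMachine } from "kingly";
import { Machine } from "react-state-driven";

// fsmDef and settings stand for your machine definition (elided here)
const fsm = createStateMachine(fsmDef, settings);

// The component receiving the render command's params as props
const App = props => <div>{/* screens computed from props */}</div>;

// One handler per command name output by the machine
const commandHandlers = {
  COMMAND_SEARCH: (next, params, effectHandlers) => {
    // execute the effect, then send the result back to the machine via next
  }
};

// Then, anywhere in the React tree:
// <Machine fsm={fsm} renderWith={App} commandHandlers={commandHandlers} />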

The four optional props exist to address more complex use cases without modifying the <Machine/> component. This is in line with the open-closed principle, which states that "software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification".

We will examine these use cases in a dedicated section. Let's now describe the semantics of the component:

The previous paragraph describes how the mandatory props (fsm, renderWith, commandHandlers) are used. Let's talk about the optional props:

eventHandler
- pass events from outside the component or from outside the DOM, for instance to communicate with other components

preprocessor
- convert one DOM event into one semantic event. For instance, convert a button click (no semantics) into a search intent or submit intent (note the semantics)
- aggregate several DOM events into one event. For instance, throttle button clicks
- pass events from outside the component or from outside the DOM, for instance to communicate with other components

effectHandlers
- large component, or complex behavior where clean code matters
- customizing the render handler (the default is React's setState), for instance running some operations before or after rendering

options
- configure the initial event. This also entirely decouples the fsm from the Machine component

Note that our Machine component expects some props but does not expect any child components.

We provide two examples: one illustrating the mandatory props, and a second illustrating all the optional props.

Example with mandatory props

This example only uses the mandatory props. It also uses JSX, and JSON Patch to update the machine's extended state. We use JSON Patch mainly to provide an example of configuring the updateState property. You may instead use a simple Object.assign-based reducer, Immer, or any other reducing function.
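For illustration, here are two possible shapes for that reducer. This is a sketch only: we assume the reducer receives the current extended state and the updates returned by the machine's action factories, and applyPatch comes from the fast-json-patch package, which may differ from the library used in the actual demo:

import { applyPatch } from "fast-json-patch";

// JSON Patch variant: assumes action factories return arrays of JSON Patch operations
const updateStateWithJsonPatch = (extendedState, operations) =>
  // validate = false, mutateDocument = false, so the extended state is updated immutably
  applyPatch(extendedState, operations, false, false).newDocument;

// Object.assign variant: assumes action factories return plain objects of properties to merge
const updateStateWithAssign = (extendedState, updates) =>
  Object.assign({}, extendedState, updates);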

For illustration, the behavior is modeled as follows:

image search interface

The demo application can be evaluated in a codesandbox playground.

Example with optional props

This example uses the optional props. It uses hyperscript instead of JSX, an RxJS Subject as the event emitter, and JSON Patch to update the machine's extended state.

We will implement an image search application. The application takes an input from the user, looks up images related to that search input, and displays them. The user can then click on a particular image to see it in more detail.

For illustration, the user interface starts like this:

image search interface

Click here for a live demo.

The user interface behaviour can be modeled by the following machine:

machine visualization

The visual notation for the model is explained in the Kingly documentation. We recall here the main points:

Using a preprocessor

In this example, we translate DOM events into machine events with a preprocessor. The preprocessor is a function which receives the event handler as a parameter (named rawEventSource here). As the chosen event handler is an RxJS Subject, we can use pipe, map, and all the RxJS combinators to perform our event conversion. This example, for instance, eliminates all unnecessary information from the DOM events and the React framework (ref), so the machine events are completely decoupled from the DOM.

We thus move from the onSubmit, onCancelClick, onGalleryClick, and onPhotoClick DOM events to the SEARCH, CANCEL_SEARCH, SELECT_PHOTO, and EXIT_PHOTO machine events.

The preprocessor must return an object with a subscribe property, through which the values computed in the preprocessor will flow. In other words, the preprocessor must return an observable. The subscribe method must return an object with an unsubscribe method, which will be called when the <Machine /> component is unmounted. The API is thus designed to be a perfect fit for RxJS Subjects and Observables.

  ...
  preprocessor: rawEventSource =>
    rawEventSource.pipe(
      map(ev => {
        const { rawEventName, rawEventData: e, ref } = destructureEvent(ev);

        if (rawEventName === INIT_EVENT) {
          return { [INIT_EVENT]: void 0 };
        }
        // Form raw events
        else if (rawEventName === "START") {
          return { START: void 0 };
        } else if (rawEventName === "onSubmit") {
          e.preventDefault();
          return { SEARCH: ref.current.value };
        } else if (rawEventName === "onCancelClick") {
          return { CANCEL_SEARCH: void 0 };
        }
        // Gallery
        else if (rawEventName === "onGalleryClick") {
          const item = e;
          return { SELECT_PHOTO: item };
        }
        // Photo detail
        else if (rawEventName === "onPhotoClick") {
          return { EXIT_PHOTO: void 0 };
        }
        // System events
        else if (rawEventName === "SEARCH_SUCCESS") {
          const items = e;
          return { SEARCH_SUCCESS: items };
        } else if (rawEventName === "SEARCH_FAILURE") {
          return { SEARCH_FAILURE: void 0 };
        }

        return NO_INTENT;
      }),
      filter(x => x !== NO_INTENT),
    ),

Using effect handlers

In this example, we use two effect handlers: one to perform an API call, and the other to perform rendering. We discuss the rendering in the next section. Effect handlers can only be called by command handlers, as command handlers are the only functions which receive the effect handlers as parameters. Here, our API call uses fetchJsonp:

import fetchJsonp from "fetch-jsonp";

export function runSearchQuery(query) {
  const encodedQuery = encodeURIComponent(query);

  return fetchJsonp(
    `https://api.flickr.com/services/feeds/photos_public.gne?lang=en-us&format=json&tags=${encodedQuery}`,
    { jsonpCallback: "jsoncallback" }
  ).then(res => res.json());
}
...
  effectHandlers: {
    runSearchQuery: runSearchQuery,
    ...
  }
...

Using effect handlers may seem overkill in simple applications. In most cases, the only orchestration necessary is handling failing and successful effect execution with two different events. This is why effect handlers are optional. In more complex cases however, where the orchestration is intricate, it is useful to separate the orchestration of effects from their execution. This is a tenet of functional programming: separating concerns enables easier testing, better composition, and reuse.
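To make the testing point concrete, here is a sketch of a command handler that only orchestrates, with its effect handler stubbed out in a test. The handler mirrors the one shown in the Command handlers section below; the stubbed data is illustrative:

// The command handler orchestrates; execution is delegated to the injected effect handlers
const searchCommandHandler = (next, query, effectHandlers) =>
  effectHandlers
    .runSearchQuery(query)
    .then(data => next(["SEARCH_SUCCESS", data.items]))
    .catch(() => next(["SEARCH_FAILURE", void 0]));

// In a test, substitute a stub: the orchestration logic is exercised without any network call
const receivedEvents = [];
const stubbedEffectHandlers = {
  runSearchQuery: () => Promise.resolve({ items: [{ title: "a cat picture" }] })
};
searchCommandHandler(event => receivedEvents.push(event), "cat", stubbedEffectHandlers);
// receivedEvents eventually contains ["SEARCH_SUCCESS", [{ title: "a cat picture" }]]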

Customizing rendering

To customize rendering, we override the default render handler. Rendering is in fact handled by an effect handler, like any other effect. The effect handler for rendering can be configured on the key [COMMAND_RENDER]:

import Flipping from "flipping";

const flipping = new Flipping();
...
  effectHandlers: {
  ...
    [COMMAND_RENDER]: (machineComponent, renderWith, params, next) => {
      // Applying flipping animations : read DOM before render, and flip after render
      flipping.read();
      machineComponent.setState(
        { render: React.createElement(renderWith, Object.assign({}, params, { next }), []) },
        () => flipping.flip()
      );
    }
  }
...

The previous code shows how to customize the rendering to run a function just before and just after rendering. The rendering itself is triggered with React's setState (you should not change that part). setState accepts a callback as its second argument, which runs immediately after rendering is performed. We may in the future hide away the details of setState so as not to expose implementation details in the interface.

Customizing options

Here we simply change the initial event that triggers the display of the initial screen:

...
  options: { initialEvent: ["START"] },
...

Command handlers

Command handlers receive three parameters: the event emitter, the params passed by the command they handle, and the effect handlers passed to the <Machine /> component. The orchestration here simply consists of dealing with failing and successful effect execution:

  ...
  commandHandlers: {
    [COMMAND_SEARCH]: (next, query, effectHandlers) => {
      effectHandlers
        .runSearchQuery(query)
        .then(data => {
          next(["SEARCH_SUCCESS",data.items]);
        })
        .catch(error => {
          next(["SEARCH_FAILURE", void 0]);
        });
    }
  },
  ...
};

A typical machine run

Alright, now let's leverage the example to explain what is going on here together with the <Machine /> semantics.

First of all, we use React.createElement, but you could just as well use JSX (<Machine ... />); that really is but an implementation detail. In our implementation we mostly use the core React API and hyperscript rather than JSX. Also keep in mind that when we write 'the machine', we refer to the state machine whose graph was given previously. When we want to refer to the Machine React component, we will always say so explicitly.

Our state machine is basically a function which takes an input and returns outputs. The inputs received by the machine are meant to be mapped to events triggered by the user through the user interface. The outputs from the machine are commands representing the commands/effects to perform on the interfaced system(s). The mapping between user/system events and machine inputs is performed by the preprocessor. The commands output by the machine are mapped to handlers gathered in commandHandlers, so our Machine component knows how to run a command when it receives one.
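Conceptually, that contract looks like the following sketch. The event and command shapes shown here follow the image search example; the exact names depend on your machine definition:

// The machine is an effect-less function: one input event in, zero or more commands out
const inputEvent = { SEARCH: "cats" };
const commands = fsm(inputEvent);
// e.g. commands === [{ command: "COMMAND_SEARCH", params: "cats" }]
// The Machine component then looks up commandHandlers["COMMAND_SEARCH"] and runs it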

A run of the machine would then be like this:

This is it! Whatever machine is passed as a parameter to the Machine component, its behaviour will always be as described.

Note that this example is contrived for educational purposes:

Types

Types can be found in the repository.

Contracts

Semantics

Architecture

Throughout this section, we will refer to an image search application example to illustrate the argument. Cf. the Example section for more details.

In a traditional architecture, a simple scenario would be expressed as follows:

image search basic scenario

What we can derive from this is that the application interfaces with other systems: the user interface and what we call the external systems (local storage, databases, etc.). The application's responsibility is to translate user actions on the user interface into commands on the external systems, execute those commands, and deal with their results.

In our proposed architecture with all options used, the same scenario would become:

image search complete scenario

In our proposed architecture with only the mandatory props used, the same scenario would become:

image search simplified scenario

In that architecture, the application is refactored into a mediator, a preprocessor, a state machine, a command handler, and an effect handler. The application is thus split into smaller parts which address specific concerns:

While the architecture may appear more complex (isolating concerns means more parts), we have reduced the complexity arising from the interconnection between the parts.

Concretely, we have increased the testability of our implementation:

We also have achieved greater modularity: our parts are coupled only through their interfaces. For instance, in our example below we use RxJS for preprocessing events and state-transducer as the state machine library. We could easily switch to most or xstate if need be, or to a bare-bones event emitter (like emitonoff), simply by building interface adapters.

There are more benefits, but this is not the place to go into them. Cf:

Code examples

For the impatient ones, you can directly review the available demos:

- TMDb movie search (code playground; machine graph; TMDb online interface screenshot)
- Flickr image search (code playground; image search interface screenshot)

API design goals

We want an integration which is generic enough to accommodate a large set of use cases, and specific enough to take as much advantage as possible of the React ecosystem and API. Unit testing should ideally be based on the specification of the component's behaviour rather than on its implementation details, and leverage the automatic test generator of the underlying state-transducer library. In particular:

As a result of these design goals:

Tips and gotchas

Prior art and useful references

Footnotes

  1. Command handlers can only perform effects internally (for instance, asynchronous communication with the mediator)

  2. In relation to state machines, saying that an output depends exclusively on past and present inputs is equivalent to saying that an output depends exclusively on the current state and the present input.

  3. Another term used elsewhere is deterministic functions, but we found that term could be confusing.