feathers-distributed
Distribute your Feathers services as microservices
The master branch and >= 2.0.x versions are expected to work with Feathers v5 (a.k.a. Dove).
The buzzard branch and >= 0.3.x versions are expected to work with Feathers v3 (a.k.a. Buzzard) and Feathers v4 (a.k.a. Crow) but will soon be deprecated.
Please note that the underlying architecture has been changed from one requester/publisher and responder/subscriber per service to one requester/publisher and responder/subscriber per application between v0.7 and v1.x. This breaking change was required to improve performance and reliability by simplifying the underlying mesh network (see for instance #48 or #49). As a consequence, applications running under v1.x will not be compatible with applications running prior versions.
The auk branch and 0.2.x versions are expected to work with Feathers v2 (a.k.a. Auk) but are deprecated.
This plugin relies on cote and benefits from its features:
- Zero-configuration: no IP addresses, no ports, no routing to configure
- Decentralized: No fixed parts, no "manager" nodes, no single point of failure
- Auto-discovery: Services discover each other without a central bookkeeper
- Fault-tolerant: Don't lose any requests when a service is down
- Scalable: Horizontally scale to any number of machines
- Performant: Process thousands of messages per second
cote requires your cloud provider to support IP broadcast or multicast. You can still get the same functionality with Weave overlay networks, e.g. on Docker Cloud. In any other case you can use centralized discovery.
cote works out of the box with Docker Swarm and Docker Cloud but we are seeking volunteers to test this module with various cloud providers like AWS, Google Cloud, etc. Please open an issue if you'd like to do so and report your findings.
You might find this presentation really helpful to understand it. You might also be interested in reading this typical use case.
Installation
npm install @kalisio/feathers-distributed --save
To get the latest version please use the following command:
npm install https://github.com/kalisio/feathers-distributed --save
feathers-distributed is as unintrusive as possible, so for most use cases you simply need to configure it along with the applications holding your services:
const distribution = require('@kalisio/feathers-distributed');
...
app.configure(hooks());
app.configure(socketio());
app.configure(distribution());
...
A common problem with distribution is that it can register new remote services to your app after it has been configured and started, which typically causes 404 errors; read the documentation about this issue.
If you are not running a long-lived server and want to use distribution in your test suite for instance, you can clean it up gracefully like this:
const distribution = require('@kalisio/feathers-distributed');
...
server.on('close', () => distribution.finalize(app));
server.close();
...
Architecture
When the plugin initializes, the following is done for your local app:
- creates a local publisher to dispatch its locally registered services to other apps.
- creates a local subscriber to be aware of remotely registered services from other apps.
- creates a local responder to handle incoming requests from other apps to locally registered services.
- creates a local publisher to dispatch locally registered service events to remote apps.
What is done by overriding app.use is the following:
- each local Feathers service of your app is published to remote apps using the local publisher through the service event.
What is done when your app is aware of a new remotely registered app is the following:
- creates a local requester to send requests to the remote responder for remote services operations.
- creates a local subscriber to be aware of service events sent by the remote events publisher for remote services.
What is done when your app is aware of a new remotely registered service is the following:
- creates via app.use a local Feathers service acting as a proxy to the remote one by using the local requester.
What is done by overriding app.unuse is the following:
- each local Feathers service removed from your app is unpublished to remote apps using the local publisher through the service-removed event.
What is done when your app is aware of a remotely unregistered service is the following:
- removes via app.unuse the local Feathers service acting as a proxy to the remote one.
Configuration options
Local services
By default all your services will be exposed; you can use the services option to indicate which services need to be published if you'd like to keep some available only internally:
app.configure(
distribution({
// Can be a static list of service path to be exposed
services: ['api/service1', 'api/service2']
// Can be a function returning true for exposed services
services: (service) => (service.path !== 'api/internal')
})
)
Remote services
By default all remote services will be consumed; you can use the remoteServices option to indicate which services need to be consumed if you don't want to be polluted by unused ones:
app.configure(
distribution({
// Can be a static list of service path to be consumed
remoteServices: ['api/service1', 'api/service2']
// Can be a function returning true for consumed services
remoteServices: (service) => (service.path !== 'api/external')
})
)
By default remote services will be registered locally using the same path as in the remote application; you can use the remoteServicePath option to change the local path of consumed services, for instance to avoid conflicts:
app.configure(
distribution({
// Function returning the local path for consumed services
// In this case we rename the remote service1 to avoid conflict with a local service1
// We keep the original path for other remote services
remoteServicePath: (service) => (service.path === 'service1' ? service.path.replace('service1', 'remote-service1') : service.path)
})
)
By default the options used to create a service will not be associated with the corresponding remote service, as they might contain references to complex objects that are not serializable "as is". However, you can use the remoteServiceOptions option to define a list of options to be serialized and provided to the remote service when it is created. These options will then be available in the remoteService.remoteOptions object:
app.configure(
distribution({
// Function returning the array of distributed options for the service
remoteServiceOptions: (service) => (service.path === 'service1' ? ['option1', 'option2'] : null)
})
)
app.use('service1', new MyService({ option1: 'xxx', option2: 'yyy' }))
// In remote app
if (app.service('service1').remoteOptions.option1 === 'xxx') ...
You can add hooks to each registered remote service by using the hooks option; this is typically useful to enforce authentication in a gateway scenario:
app.configure(
distribution({
hooks: {
before: {
all: [authenticate('jwt')]
}
}
})
);
You can add middlewares to each registered remote service by using the middlewares option; this is typically useful to enforce correct error handling in a gateway scenario:
const express = require('@feathersjs/express')
app.configure(
distribution({
middlewares: {
before: (req, res, next) => next(),
after: express.errorHandler()
}
})
);
Indeed, Feathers does not allow registering new services after the app has been set up, so application-level middlewares like the not-found or error handler would be hit first. However, feathers-distributed dynamically adds new services during the app lifecycle. As a consequence, you should not register these middlewares at the app level but rather register them whenever a new service pops up, using this option.
Last but not least, you can change the default service requester timeout of 20 seconds like this:
app.configure(
distribution({
timeout: 30000 // 30s
})
);
Events
By default all real-time events from local services are distributed to remote ones, but you can customize the events to be dispatched by providing the list in the distributedEvents property of your service, or disable all event publishing with the publishEvents boolean option.
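For instance, a minimal sketch could look like this (the api/messages path and the event list are illustrative and assume such a service is already registered):
const service = app.service('api/messages')
// Only dispatch these real-time events to remote apps for this service
service.distributedEvents = ['created', 'patched']
// Or disable event publishing altogether for the whole app
app.configure(distribution({ publishEvents: false }))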
Methods
By default only standard service methods from local services are distributed to remote ones, but you can customize the method calls to be dispatched by providing the list in the distributedMethods option. Any service declaring a custom method included in this list will have its calls dispatched to remote apps.
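As a minimal sketch (assuming a Feathers v5 style custom method declaration; the service, path and method names are illustrative):
app.configure(distribution({
  // Calls to this custom method will be dispatched between apps
  distributedMethods: ['createBulk']
}))

class MessageService {
  async find (params) { return [] }
  // Custom method listed in distributedMethods, so calls from remote apps are dispatched here
  async createBulk (data, params) { return data }
}

app.use('api/messages', new MessageService(), { methods: ['find', 'createBulk'] })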
Partition keys
By default the same partition key is used for all distributed apps, so that there is no communication segregation. Sometimes it is better for security, maintenance or performance purposes to segregate services by following the principles of domain-driven design. In that case you can always define your own partition key for each application using the key string option (defaults to 'default').
A solid solution, as suggested in issue #70, is to use your package name: duplicated apps will then have the same key while different projects will not, and it will be persistent across restarts:
const package = require('path/to/your/package.json')
app.configure(distribution({
...,
key: package.name
}))
Healthcheck
By default the module adds an Express middleware on the /distribution/healthcheck/:key route. You can check the healthcheck status for each available partition key by issuing a GET request to this route; the following responses are possible:
- HTTP code 200 with the list of registered remote services for this key
- HTTP code 404 if no application has been registered for this key
- HTTP code 503 if some remote services do not respond to the healthcheck signal
If you don't use partition keys you can omit the key request parameter, as it will default to the 'default' value.
You can change the healthcheck endpoint URL using the healthcheckPath option.
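For instance, a minimal status check could look like this (the host, port and key are illustrative, and the sketch assumes the healthcheck response body is JSON):
fetch('http://localhost:8080/distribution/healthcheck/default')
  .then(async response => {
    if (response.ok) {
      // 200: list of registered remote services for this key
      console.log(await response.json())
    } else {
      // 404: no application registered for this key, 503: some remote services are unresponsive
      console.error('Healthcheck failed with status', response.status)
    }
  })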
Hooks
In some cases it can be useful to know in a hook whether the method has been called from a remote service or a local one (e.g. in order to skip authentication). For this you can use the fromRemote flag in the parameters:
services[i].hooks({
before: {
all: hook => {
// Do something specific in this case
if (hook.params.fromRemote) ...
return hook
}
}
})
Example
To launch the example:
npm start
This launches a gateway and two replicas of the microservice. Wait a couple of seconds so that each app is aware of other apps on the network, then open the example/index.html file in your browser. If you refresh it regularly, you should see a TODO coming from a different microservice in a random way (i.e. its ID should be different sometimes).
The same example is available based on a Docker compose file:
# Start
docker-compose up -d
# Stop when you've finished
docker-compose down -v
This launches a gateway (gateway Docker service) and two replicas of the microservice (service1 and service2 Docker services). If you open the example/index.html file in your browser, then refresh it regularly, you should also see a TODO coming from a different microservice in a random way (i.e. its ID should be different sometimes).
You can then try to kill one of the service replicas, e.g. docker-compose stop service1. Now if you refresh the page regularly you should always see the same TODO, as the failed service should not be contacted anymore.
If you kill the last service replica, e.g. docker-compose stop service2, you should see a timeout on refresh. Then if you restart the service, e.g. docker-compose start service2, the TODO should come back on refresh.
Look into the example folder for details.
Authentication
There are two scenarios:
- the API gateway, where you have a single entry point (i.e. node) to authenticate and access your API but services are distributed across different nodes
- the distributed application, where you can distribute and access any service on any node on your network mesh with authentication
API gateway
In this case you have to install the authentication plugin on your gateway and register a hook that will enforce authentication on each registered remote service by using the hooks option:
app.configure(
distribution({
hooks: {
before: {
all: [authenticate('jwt')],
},
},
})
);
You don't need to install the authentication plugin or hook on each service served from your nodes.
You proceed as usual to authenticate your client first on the gateway, with a local or JWT strategy for instance.
Our example folder is a good start for this use case.
Distributed application
In this case you have to install the authentication plugin on each of your nodes and register a hook that will enforce authentication on each service as usual.
You proceed as usual to authenticate your client first on any node, with a local or JWT strategy for instance.
Our tests contain a good example for this use case.
To make it work, all nodes must share the same authentication configuration (i.e. the same secret).
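As an illustration only (this is standard Feathers authentication setup, not part of this module), each node could register authentication like this, the key point being that the configuration, and notably the secret, is identical on every node:
const { AuthenticationService, JWTStrategy } = require('@feathersjs/authentication')

// The same configuration, notably the secret, must be used on every node
app.set('authentication', {
  secret: 'SAME-SECRET-SHARED-BY-ALL-NODES',
  entity: null, // no user entity lookup in this minimal sketch
  authStrategies: ['jwt']
})
const authentication = new AuthenticationService(app)
authentication.register('jwt', new JWTStrategy())
app.use('authentication', authentication)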
Tips
Initialization
- The library overrides app.use() to automatically publish any newly defined service, so that you can usually safely initialize it before registering your services, like other Feathers plugins (transport, configuration, etc.). However, you might also configure some middlewares with options.middlewares, and in this case you probably need to initialize the Express plugin beforehand.
- The library immediately initializes the underlying cote module unless you intentionally add some delay (coteDelay option in ms, defaults to none). This delay can be required because it appears that in some scenarios, e.g. Docker deployments, the network setup takes some time and cote is not able to correctly initialize (e.g. allocate ports or reach Redis) before it is done.
- As the library also relies on cote components to publish/subscribe to events, and these components take some time to initialize, there is also a publication delay (publicationDelay option in ms, defaults to 10s) that is respected before publishing app services once initialized.
- In order to make service discovery more reliable (but with a small overhead) you can publish services on a regular basis with the heartbeat interval (heartbeatInterval option in ms, defaults to none). These options are combined in the sketch below.
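A minimal sketch combining these timing options (the values are illustrative, not recommended defaults):
app.configure(distribution({
  coteDelay: 5000,          // wait 5s before initializing cote, e.g. for Docker networking to settle
  publicationDelay: 10000,  // wait 10s after initialization before publishing local services
  heartbeatInterval: 60000  // republish local services every 60s for more reliable discovery
}))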
Environment variables
Some options can be directly provided as environment variables:
- COTE_LOG to activate logging for all underlying cote components
- BASE_PORT to select the starting port of the port range to be used by cote
- HIGHEST_PORT to select the ending port of the port range to be used by cote
- COTE_DELAY (ms) to define the delay before initializing cote
- PUBLICATION_DELAY (ms) to define the delay before publishing services
- HEARTBEAT_INTERVAL (ms) to define the interval to publish services on a regular basis
Cloud deployment
Cloud providers don't (and probably won't) support broadcast/multicast out of the box, which is required for zero-configuration operation. In this case, the simplest approach is usually to rely on centralized discovery based on a Redis instance. More details can be found in the cote documentation, but you can have a look at our Kargo solution for a working configuration.
More specifically, check the Docker compose files of our Redis instance and of one of our apps running feathers-distributed, like Kano. You will see that you need to open at least some ports to make it work, and take care of the initialization delay if you'd like to add a healthcheck.
License
Copyright (c) 2017-20xx Kalisio
Licensed under the MIT license.