Aries Mediator Service

TL;DR

This repository provides a simple process for a developer to run an Aries mediator agent. You should be able to bring the stack on-line by copying .env.sample to .env and running docker-compose up. For more information, keep reading.
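
The short version, run from the root of the project:

    cp .env.sample .env
    docker-compose up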

Multitenant ACA-Py load testing

Please see Multi-demo Load Test for running mediator load testing against a local ACA-Py instance in multitenant mode.

Build & Run

This is set up to be run as-is with a simple docker-compose up. When run, it will fire up the following containers:

ngrok

The mediator needs to accept inbound connections, but most of us are behind firewalls or have impermanent IP addresses. Ngrok is used as a proxy to work around this. It will provide the URL that will act as a front door for your wallet to access the mediator service.

If you have a paid ngrok account you can provide your access token as one of the parameters (via the .env file). If not, leave it blank and it'll assume you're on the free plan.
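
For example, the token ends up as a single line in .env. The variable name below is illustrative only; use whatever name .env.sample ships with:

    # Illustrative variable name; check .env.sample for the real one.
    NGROK_AUTHTOKEN=<your-ngrok-token>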

Caddy

Your wallet needs to open two connections to the mediator: The first is a standard https connection. This will be embedded in the invitation and will be used by the wallet for standard CRUD operations. Once the invite is accepted, a second WebSocket (wss) connection will be opened. This will be the primary mode of communication between the mediator and your wallet.

Caddy is used to route http and wss traffic to the correct transport on the mediator. Without it, two ngrok tunnels would need to be started making startup and configuration a little more complicated.

In any case, I think it's clearer and easier to consume to have a single URL.
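
To make the routing concrete, here is a rough Caddyfile sketch of the idea, not the repo's actual config; the listen address, port numbers, and the mediator host name are assumptions, so treat the Caddyfile in this repo as the source of truth.

    # Sketch only: send WebSocket upgrade requests to the mediator's ws
    # transport and everything else to its http transport.
    :2015 {
        @websockets {
            header Connection *Upgrade*
            header Upgrade websocket
        }
        reverse_proxy @websockets mediator:3001
        reverse_proxy mediator:3000
    }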

Mediator Demo Controller

The mediator is configured to automatically accept connections. This functionality can be delegated to a controller process if business rules require approval or intervention before accepting a connection. A sample controller that does this is included with the project.

This custom Node.js app, written in TypeScript, uses FeathersJS to provide RESTful endpoints to ACA-Py. To enable the controller and have it determine whether connections should be accepted (a sketch of such an endpoint follows the steps below):

  1. Update the mediator configuration

In the .env file, override the mediator configuration by adding the MEDIATOR_ARG_FILE environment variable as follows:

MEDIATOR_ARG_FILE=./configs/mediator-with-controller.yml
  2. Enable the mediator service in the docker stack

Remove these two lines from the docker-compose.yml file in the mediator-controller service:

    profiles:
      - donotstart
  3. Add the following line to start.sh to allow the mediator to find and use the controller:
    --webhook-url ${MEDIATOR_CONTROLLER_WEBHOOK}
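
To give a feel for what such a controller looks like, here is a minimal sketch of a Feathers endpoint receiving ACA-Py's connections webhook. It is not the bundled controller: Feathers v5, the port, the payload fields, and the admin-API call mentioned in the comments are assumptions.

    // Minimal sketch, assuming Feathers v5. ACA-Py POSTs webhook events to
    // <webhook-url>/topic/<topic>, so a service mounted at /topic/connections
    // sees connection state changes and can apply business rules.
    import { feathers } from '@feathersjs/feathers';
    import express, { rest, json } from '@feathersjs/express';

    class ConnectionsWebhook {
      async create(event: { connection_id: string; state: string }) {
        if (event.state === 'request') {
          // Decide here whether to accept the connection. Approval would be a
          // call to the mediator's admin API (for example
          // POST /connections/{conn_id}/accept-request); omitted in this sketch.
        }
        return event;
      }
    }

    const app = express(feathers());
    app.use(json());
    app.configure(rest());
    app.use('/topic/connections', new ConnectionsWebhook());

    app.listen(3010); // illustrative port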

Mediator

A mediator is just a special type of agent. In this case, the mediator is ACA-Py with a few special config params that make it run as a "mediator" rather than a traditional agent.

About half of the ACA-Py params are provided in start.sh; the others are passed via the configuration file mediator-auto-accept.yml. Move them around as you see fit. Ones that are likely to change are better kept as environment variables.

By default, ACA-Py is using Aries Askar and the related storage components for managing secure data and keys. If you want to use the older Indy SDK, you can edit (or override) the --wallet-type parameter in start.sh to be --wallet-type indy. If you change this after starting with Askar storage, make sure that you delete the database before proceeding (docker volume rm aries-mediator-service_agency-wallet).
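
For orientation, the YAML keys in the config file mirror ACA-Py's command-line option names. The sketch below shows the kind of options such a file carries; configs/mediator-auto-accept.yml is the authoritative version.

    # Illustrative only; see configs/mediator-auto-accept.yml for real values.
    label: Mediator
    wallet-type: askar
    open-mediation: true
    auto-accept-invites: true
    auto-accept-requests: true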

PostgreSQL

PostgreSQL is a well known RDBMS. It is used by the mediator to persist wallet information. Without it, the wallet would be reset every time the stack is restarted. The first time the mediator container runs it will create a database for its wallet and initialize the wallet state.
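
For context, ACA-Py is pointed at PostgreSQL through its wallet-storage options. The lines below only illustrate the shape of those options; the host name, credentials, and environment variable are assumptions, and start.sh plus .env hold the real values.

    # Illustrative ACA-Py wallet storage settings (values are assumptions):
    --wallet-storage-type postgres_storage \
    --wallet-storage-config "{\"url\":\"db:5432\",\"max_connections\":5}" \
    --wallet-storage-creds "{\"account\":\"postgres\",\"password\":\"${POSTGRESQL_PASSWORD}\",\"admin_account\":\"postgres\",\"admin_password\":\"${POSTGRESQL_PASSWORD}\"}"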

Run It!

  1. Put some tunes on; it'll help. Here's one to get you started: Bossa Nova - Take On Me. You should have it up and running before the song is done.

  2. Start by cloning this repo:

git clone git@github.com:fullboar/aries-mediator-service.git
  3. Copy the file .env.sample to .env in the root of the project. The default values are fine; edit as you see fit. This file will be used by docker-compose to add or override any environment variables.
cp .env.sample .env

Pro Tip 🤓

You can generate strong tokens for production with OpenSSL:

openssl rand -hex 32
  4. Bring up the stack. The first time you run this command it will build the mediator container, so it may take a few moments; subsequent restarts will be much faster.
docker-compose up

When the stack is online you'll see a big white QR code scroll up your screen; just above it is your invitation URL. It'll look something like this:

mediator_1             | Invitation URL (Connections protocol):
mediator_1             | https://ed49-70-67-240-52.ngrok.io?c_i=eyJAdHlwZSI6ICJkaWQ6c292OkJ6Q2JzTlloTXJqSGlxWkRUVUFTSGc7c3BlYy9jb25uZWN0aW9ucy8xLjAvaW52aXRhdGlvbiIsICJAaWQiOiAiZmYwMjkzNmYtNzYzZC00N2JjLWE2ZmYtMmZjZmI2NmVjNTVmIiwgImxhYmVsIjogIk1lZGlhdG9yIiwgInJlY2lwaWVudEtleXMiOiBbIkFyVzd1NkgxQjRHTGdyRXpmUExQZERNUXlnaEhXZEJTb0d5amRCY0UzS0pEIl0sICJzZXJ2aWNlRW5kcG9pbnQiOiAiaHR0cHM6Ly9lZDQ5LTcwLTY3LTI0MC01Mi5uZ3Jvay5pbyJ9

The c_i parameter is your reusable invitation encoded as base64. Let's decode it and see what's inside:

{"@type": "did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/connections/1.0/invitation", "@id": "ff02936f-763d-47bc-a6ff-2fcfb66ec55f", "label": "Mediator", "recipientKeys": ["ArW7u6H1B4GLgrEzfPLPdDMQyghHWdBSoGyjdBcE3KJD"], "serviceEndpoint": "https://ed49-70-67-240-52.ngrok.io"}
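
If you want to decode one yourself, the payload is URL-safe base64 (possibly without padding); Node's decoder copes with either, so something like this works (replace <c_i> with the value from your URL):

    node -e 'console.log(Buffer.from(process.argv[1], "base64").toString())' "<c_i>"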

Pro Tip 🤓

The invitation will be regenerated every time you restart the docker stack for two important reasons:

  1. The ngrok URL changes with restarts; and
  2. The database is not persistent. This is where wallet initialization data, like the verkey, is stored. This causes the @id and recipientKeys properties to change in the invitation (the c_i payload above).

The general workaround is to grab the new invitation URL after each restart and update anything that consumes it (for example, the Bifold MEDIATOR_URL or the load-testing .env described below).

Load Testing

See the load-testing folder for a tool to use in testing a mediator. The load tool runs multiple agents, each connecting to the mediator and then periodically sending a ping. See how to use the load tester in the Load Testing README file. Once you start your local mediator, copy the Mediator Invitation URL, paste it into the load-testing .env file, and start the load tester.

Aries Bifold Wallet Integration

You can easily use your newly minted mediator with the Aries Bifold wallet. Take the full invitation URL from above and provide it to Bifold through the MEDIATOR_URL parameter. This can be set as an environment variable or, more reliably, by creating a .env file in the root of the Bifold project with the MEDIATOR_URL parameter in it, like this:

MEDIATOR_URL=https://ed49-70-67-240-52.ngrok.io?c_i=eyJAdHlwZSI6ICJkaWQ6c292OkJ6Q2JzTlloTXJqSGlxWkRUVUFTSGc7c3BlYy9jb25uZWN0aW9ucy8xLjAvaW52aXRhdGlvbiIsICJAaWQiOiAiZmYwMjkzNmYtNzYzZC00N2JjLWE2ZmYtMmZjZmI2NmVjNTVmIiwgImxhYmVsIjogIk1lZGlhdG9yIiwgInJlY2lwaWVudEtleXMiOiBbIkFyVzd1NkgxQjRHTGdyRXpmUExQZERNUXlnaEhXZEJTb0d5amRCY0UzS0pEIl0sICJzZXJ2aWNlRW5kcG9pbnQiOiAiaHR0cHM6Ly9lZDQ5LTcwLTY3LTI0MC01Mi5uZ3Jvay5pbyJ9

FAQ

How does Bifold talk to the Mediator?

I struggled quite a bit with how HTTP/s and WSS are managed internally. The key, for me, was the --endpoint argument in ACA-Py. For a mediator (and maybe other agents), this argument takes two values: the first is the HTTP/s endpoint and the second is the WSS endpoint.

The HTTP/s endpoint, as per the docs on this param, will be used for invitations. It's how your wallet finds and opens a dialogue with the mediator. Once a connection is established, the WSS endpoint is how the mediator and your wallet primarily communicate; they will message over the WebSocket.
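
In start.sh terms that looks roughly like the following, using the example ngrok URL from above (the surrounding arguments are omitted):

    --endpoint https://ed49-70-67-240-52.ngrok.io wss://ed49-70-67-240-52.ngrok.io \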

Can I use two URLs rather than one?

You can use two different URLs and route them to the respective ports on the mediator. It won't care. As per "How does Bifold talk to the Mediator", just make sure the HTTP/s endpoint is the first value in the --endpoint argument and that the second uses wss:// as the protocol, even if the host is otherwise the same.
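
For example, with two hypothetical host names:

    --endpoint https://mediator-http.example.com wss://mediator-ws.example.com \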

I've used one URL and set up Caddy to route traffic to the correct port on the mediator. I think this setup is much clearer, making it easier to consume and maintain.

Are there other ways to manage transports?

Sure. There is an ACA-Py plugin that allows it to take both HTTP/s and WSS traffic over a single port. You can find it in the Plugin Toolbox.

My pro tip is to use Caddy. Reverse proxies are a tried and true technology.

Why Caddy?

I get asked quite a bit: why Caddy? NGINX is great, but I find you need a PhD in NGINX to configure it. Caddy is lightweight, built from the ground up to be more effective in cloud (k8s / OpenShift) deployments, and has a more human-friendly config.