RabbitMQ Multi-Region Setup

In this post we will create an application capable of sending messages from one region to another to keep data consistent.

Use Case

Imagine we are building a large-scale application that needs to run in multiple regions, for example Europe and Africa, and we want to synchronise our user data between these regions: if a user gets blocked in one region, they have to be blocked in the second one as well.

Technologies Used

  • Spring Boot 2
  • Spring Cloud Stream
  • RabbitMQ
  • Docker
  • Docker Compose
  • Pumba

RabbitMQ Clusters Setup

First of all make sure you have Docker and Docker Compose installed on your machine.

codespace:~ lab$ docker --version
Docker version 19.03.2, build 6a30dfc
codespace:~ lab$ docker-compose --version
docker-compose version 1.24.1, build 4667896b

We will create two RabbitMQ clusters, each running on a separate Docker network. The setup of each cluster is similar to the one used in the “RabbitMQ Single Point of Failure” post.

Cluster One Setup

Create a new directory named rabbit with a subdirectory cluster1. Open the cluster1 directory and create a docker-compose.yml file in it.

codespace:~ lab$ mkdir rabbit
codespace:~ lab$ cd rabbit/
codespace:rabbit lab$ mkdir cluster1
codespace:rabbit lab$ cd cluster1/
codespace:cluster1 lab$ vi docker-compose.yml

docker-compose.yml content:

version: '3.6'

networks:
  default:
    external:
      name: rabbitmq-cluster

services:
  rabbitmq-01:
    image: rabbitmq:3.7.17-management
    hostname: rabbitmq-01
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=guest
      - RABBITMQ_ERLANG_COOKIE="MY-SECRET-KEY-123"
    volumes:
      - ./definitions.json:/etc/rabbitmq/definitions.json
    ports:
      - '5672:5672'
      - '15672:15672'

  rabbitmq-02:
    image: rabbitmq:3.7.17-management
    hostname: rabbitmq-02
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=guest
      - RABBITMQ_ERLANG_COOKIE="MY-SECRET-KEY-123"
    volumes:
      - ./definitions.json:/etc/rabbitmq/definitions.json
    ports:
      - '5673:5672'
      - '15673:15672'

  rabbitmq-03:
    image: rabbitmq:3.7.17-management
    hostname: rabbitmq-03
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=guest
      - RABBITMQ_ERLANG_COOKIE="MY-SECRET-KEY-123"
    volumes:
      - ./definitions.json:/etc/rabbitmq/definitions.json
    ports:
      - '5674:5672'
      - '15674:15672'

Also create a definitions.json file in the cluster1 folder.

codespace:cluster1 lab$ vi definitions.json

Content of definitions.json:

{
  "rabbit_version": "3.6.15",
  "users": [
    {
      "name": "admin",
      "password_hash": "fd0GyzAf6C6hmgCJ5VU+TSyzUNlzypPlGb7VDKkqUvJqVxyd",
      "hashing_algorithm": "rabbit_password_hashing_sha256",
      "tags": "administrator"
    }
  ],
  "vhosts": [
    {
      "name": "/"
    }
  ],
  "permissions": [
    {
      "user": "admin",
      "vhost": "/",
      "configure": ".*",
      "write": ".*",
      "read": ".*"
    }
  ],
  "parameters": [],
  "policies": [
    {
      "vhost": "/",
      "name": "ha",
      "pattern": "",
      "definition": {
        "ha-mode": "all",
        "ha-sync-mode": "automatic",
        "ha-sync-batch-size": 5
      }
    }
  ],
  "queues": [
    {
      "name": "q.user.created",
      "vhost": "/",
      "durable": true,
      "auto_delete": true,
      "arguments": {}
    }
  ],
  "exchanges": [
    {
      "name": "e.user.created",
      "vhost": "/",
      "type": "topic",
      "durable": true,
      "auto_delete": false,
      "internal": false,
      "arguments": {}
    }
  ],
  "bindings": [
    {
      "source": "e.user.created",
      "vhost": "/",
      "destination": "q.user.created",
      "destination_type": "queue",
      "routing_key": "user.created",
      "arguments": {}
    }
  ]
}
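Before mounting the file it can be worth sanity-checking that every binding references a declared exchange and queue, since a typo here fails silently at broker startup. A minimal stdlib-only Python sketch (the inline JSON is a trimmed copy of the definitions above; in practice you would read the real file):

```python
import json

# Trimmed copy of the definitions.json above -- in practice:
#   with open("definitions.json") as f: definitions = json.load(f)
definitions = json.loads("""
{
  "queues":    [{"name": "q.user.created", "vhost": "/"}],
  "exchanges": [{"name": "e.user.created", "vhost": "/", "type": "topic"}],
  "bindings":  [{"source": "e.user.created", "destination": "q.user.created",
                 "destination_type": "queue", "routing_key": "user.created"}]
}
""")

queues = {q["name"] for q in definitions["queues"]}
exchanges = {e["name"] for e in definitions["exchanges"]}

# Every binding must reference a declared exchange and queue
for b in definitions["bindings"]:
    assert b["source"] in exchanges, f"unknown exchange {b['source']}"
    assert b["destination"] in queues, f"unknown queue {b['destination']}"
    print(f"{b['source']} --[{b['routing_key']}]--> {b['destination']}")
```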

Start cluster one by executing docker-compose up from the cluster1 folder. Then open 3 terminal windows and start a shell in each RabbitMQ container:

 docker exec -it cluster1_rabbitmq-01_1  bash
 docker exec -it cluster1_rabbitmq-02_1  bash
 docker exec -it cluster1_rabbitmq-03_1  bash

On nodes rabbitmq-02 and rabbitmq-03 run the following commands:

rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster rabbit@rabbitmq-01
rabbitmqctl start_app

Validate that clustering succeeded by running rabbitmqctl cluster_status on all nodes. Expected output:

root@rabbitmq-01:/# rabbitmqctl cluster_status
 Cluster status of node rabbit@rabbitmq-01 …
 [{nodes,[{disc,['rabbit@rabbitmq-01','rabbit@rabbitmq-02',
                 'rabbit@rabbitmq-03']}]},
  {running_nodes,['rabbit@rabbitmq-03','rabbit@rabbitmq-02',
                  'rabbit@rabbitmq-01']},
  {cluster_name,<<"rabbit@rabbitmq-01">>},
  {partitions,[]},
  {alarms,[{'rabbit@rabbitmq-03',[]},
           {'rabbit@rabbitmq-02',[]},
           {'rabbit@rabbitmq-01',[]}]}]
 root@rabbitmq-01:/#

Set up the queue mirroring policy by executing the command below on all nodes:

rabbitmqctl set_policy ha-all "" '{"ha-sync-mode": "automatic", "ha-mode": "all", "ha-sync-batch-size": 5}'
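You can confirm the policy was applied by listing policies on any node (the output format varies slightly between RabbitMQ versions):

```shell
rabbitmqctl list_policies
```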

You should be able to see the cluster status in the RabbitMQ management console at localhost:15672 (username: admin, password: guest).

Cluster Two Setup

The setup of the second cluster is almost identical: create a subdirectory cluster2 with the same definitions.json as for cluster1, and a docker-compose.yml with the following content:

version: '3.6'

networks:
  default:
    external:
      name: rabbitmq-cluster1

services:
  rabbitmq-01:
    image: rabbitmq:3.7.17-management
    hostname: rabbitmq-01
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=guest
      - RABBITMQ_ERLANG_COOKIE="MY-SECRET-KEY-123"
    volumes:
      - ./definitions.json:/etc/rabbitmq/definitions.json
    ports:
      - '5675:5672'
      - '15675:15672'

  rabbitmq-02:
    image: rabbitmq:3.7.17-management
    hostname: rabbitmq-02
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=guest
      - RABBITMQ_ERLANG_COOKIE="MY-SECRET-KEY-123"
    volumes:
      - ./definitions.json:/etc/rabbitmq/definitions.json
    ports:
      - '5676:5672'
      - '15676:15672'

  rabbitmq-03:
    image: rabbitmq:3.7.17-management
    hostname: rabbitmq-03
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=guest
      - RABBITMQ_ERLANG_COOKIE="MY-SECRET-KEY-123"
    volumes:
      - ./definitions.json:/etc/rabbitmq/definitions.json
    ports:
      - '5677:5672'
      - '15677:15672'

Open a shell in each cluster-two container:

 docker exec -it cluster2_rabbitmq-01_1  bash
 docker exec -it cluster2_rabbitmq-02_1  bash
 docker exec -it cluster2_rabbitmq-03_1  bash

Execute the same clustering commands as on cluster one. The management console will be available at localhost:15675.

Spring Boot App

In this example we will use two instances of the same Spring Boot application we used in the post “RabbitMQ and Spring Cloud Stream”.

It is a simple Spring Boot application created using Spring Initializr with a dependency on Spring Cloud Stream for listening and publishing to RabbitMQ. The source can be found on GitHub.

Configuration for DemoApp instance one:

server:
  port: 8556

spring:
  application:
    name: demo
  rabbitmq:
    addresses: localhost:5672,localhost:5673,localhost:5674
    username: admin
    password: guest
    cache:
      channel:
        size: 10
    listener:
      simple:
        concurrency: 10
        max-concurrency: 20
  cloud:
    stream:
      bindings:
        outputChannel:
          destination: demo
        inputChannel:
          destination: demo
          group: demo-group-1

Configuration for DemoApp instance two:

server:
  port: 8557

spring:
  application:
    name: demo
  rabbitmq:
    addresses: localhost:5675,localhost:5676,localhost:5677
    username: admin
    password: guest
    cache:
      channel:
        size: 10
    listener:
      simple:
        concurrency: 10
        max-concurrency: 20
  cloud:
    stream:
      bindings:
        outputChannel:
          destination: demo
        inputChannel:
          destination: demo
          group: demo-group-2

Each instance of DemoApp should be able to send and receive messages from the cluster it is connected to. You can send a selected number of events by calling GET on the /messages/ endpoint, for example /messages/100 to send 100 events. But how can we send messages from one cluster to another, so that if 100 messages are sent from DemoApp instance one, instance two gets them all as well?
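For example, assuming instance one is running locally on the port from its configuration above, publishing 100 events would look like this (a sketch; the endpoint path comes from the demo app described earlier):

```shell
curl http://localhost:8556/messages/100
```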

RabbitMQ Federation Setup

To send messages between regions we will use the RabbitMQ federation plugin. The high-level goal of the federation plugin is to transmit messages between brokers without requiring clustering (see the plugin documentation for details). This setup is also useful when RabbitMQ nodes are started in different regions, to avoid cluster partitioning caused by network delay, and that is exactly our use case. To enable federation, run the commands below on all six nodes:

rabbitmq-plugins enable rabbitmq_federation
rabbitmq-plugins enable rabbitmq_federation_management
rabbitmqctl stop_app
rabbitmqctl start_app

For federation to work in our setup, we need to create an extra Docker network and attach at least one node from each cluster to it; otherwise the clusters won’t be able to communicate with each other.

docker network create testnet

docker ps  # to get the names of the RabbitMQ containers

docker network connect testnet cluster1_rabbitmq-01_1
docker network connect testnet cluster2_rabbitmq-01_1

To check that the containers are really attached to the testnet network, run:

docker network inspect testnet


[
    {
        "Name": "testnet",
        "Id": "758edc8e4b369ca772e4c2b1811a2fff25eac31fb593bb0d39319365bcd5c7f7",
        "Created": "2019-10-31T11:28:43.4492961Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.20.0.0/16",
                    "Gateway": "172.20.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "af26b8e4d4dded415290b72747b4d63436a60d646bd63c6f01a991274f5c57ab": {
                "Name": "cluster2_rabbitmq-01_1",
                "EndpointID": "7dfc1c4f31f3402bd87cd508e596243e2b0338975f8ce50172d6a182cfcfd44f",
                "MacAddress": "02:42:ac:14:00:02",
                "IPv4Address": "172.20.0.2/16",
                "IPv6Address": ""
            },
            "d32775816fa75f85610b82150b491221948edb1df4391cc1c4ce76d3ededdb6f": {
                "Name": "cluster1_rabbitmq-01_1",
                "EndpointID": "8ef09952d70bd6d1de9e9e5836c83c5948debd1513b2be58a4002f221c3c4987",
                "MacAddress": "02:42:ac:14:00:03",
                "IPv4Address": "172.20.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
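The same check can be scripted: a short stdlib-only Python sketch that filters the inspect output down to the attached containers (here fed a trimmed inline copy of the JSON above; in practice you would pipe docker network inspect testnet into it):

```python
import json

# Trimmed copy of the `docker network inspect testnet` output above --
# in practice, read it from stdin or a subprocess call
inspect_output = json.loads("""
[{
  "Name": "testnet",
  "Containers": {
    "af26b8e4": {"Name": "cluster2_rabbitmq-01_1", "IPv4Address": "172.20.0.2/16"},
    "d3277581": {"Name": "cluster1_rabbitmq-01_1", "IPv4Address": "172.20.0.3/16"}
  }
}]
""")

# Map each attached container name to its address on testnet
attached = {c["Name"]: c["IPv4Address"]
            for c in inspect_output[0]["Containers"].values()}
for name, ip in sorted(attached.items()):
    print(f"{name} -> {ip}")

# Both federation endpoints must be attached to testnet
assert "cluster1_rabbitmq-01_1" in attached
assert "cluster2_rabbitmq-01_1" in attached
```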

Restart the containers that have been added to the new network, in our case cluster1_rabbitmq-01_1 and cluster2_rabbitmq-01_1.

Then, to set up federation between the clusters, we need to create a new federation upstream on the downstream cluster. In this case that will be cluster2, so on node rabbitmq-01_1 of cluster2 execute:

rabbitmqctl set_parameter federation-upstream my-upstream \
'{"uri":"amqp://admin:guest@cluster1_rabbitmq-01_1:5672","expires":3600000}'
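To verify the upstream was stored, you can list the runtime parameters on the same node; the my-upstream entry should appear under the federation-upstream component:

```shell
rabbitmqctl list_parameters
```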

The new federation upstream becomes visible in the management console as well:

For federation to work, a new federation upstream policy is needed. It can be added in the Policies section of the Admin tab; ours is set up as below.

Note: It is possible to set up the federation policy using rabbitmqctl as well, but while testing this setup it wasn’t working as expected.

A new federated queue and exchange will appear in the Queues section of the management console after the policy is added, and if the GET /messages/100 endpoint of DemoApp instance one is hit, both DemoApps will get 100 messages.

For DemoApp instance 1 to receive messages from DemoApp instance 2, the same process needs to be followed in the other direction. On rabbitmq-01_1 of cluster1 execute:

rabbitmqctl set_parameter federation-upstream my-upstream \
'{"uri":"amqp://admin:guest@cluster2_rabbitmq-01_1:5672","expires":3600000}'

Add policy:

The queue and exchange have to be federated on cluster1 the same way as on cluster2. To test it, execute GET /messages/100 on DemoApp 2; DemoApp 1 should get the messages as well.

OK, so we have two instances of our app that are able to send messages to an app on another network, but what about network delay?

Mocking Network Delay

To simulate network delay in a Docker environment we are going to use Pumba, the Docker network emulator discussed in the “Network Delay Testing Using Docker and Pumba” post.

In order for Pumba to work, we first need to install iproute2 on our RabbitMQ containers. To do that, run the commands below on all 6 containers:

apt-get update
apt-get install iproute2

After the installation is done, download a compatible Pumba binary and add the required delay to the bridge network testnet, for example a 500 ms delay for 20 minutes on the cluster1 rabbitmq-01_1 node:

cd pumbaDownloadDirectory

./pumba netem --interface eth1 --duration 20m delay --time 500 cluster1_rabbitmq-01_1

Note: the important part here is eth1: it specifies that we want to add the delay specifically on interface eth1, which is attached to testnet. If we used the eth0 interface (the default cluster network), bigger delays would make the RabbitMQ node unresponsive and the cluster might split. You can check the available interfaces on each container by running the ip a command:

ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
3: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default qlen 1
    link/tunnel6 :: brd ::
111: eth0@if112: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:13:00:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.19.0.4/16 brd 172.19.255.255 scope global eth0
       valid_lft forever preferred_lft forever
113: eth1@if114: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:14:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.20.0.3/16 brd 172.20.255.255 scope global eth1
       valid_lft forever preferred_lft forever
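Once Pumba is running, you can confirm the delay is in place from inside the container (iproute2 is already installed); the netem queueing discipline should show up on eth1, with the exact output varying by kernel version:

```shell
tc qdisc show dev eth1
```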
