PhenixID Server Clustering

This document describes how to set up PhenixID Server in a cluster. The cluster configuration is used to share configuration and sessions between two or more nodes. Please note that additional network components might be required to achieve failover and load balancing.

The reader of this document should have some basic knowledge of PhenixID Server.

System Requirements

  • PhenixID Server installed on two or more machines
  • Communication needs to be open between the hosts on the port specified for the cluster. By default, the cluster will try to find a free port between 5701 and 5801: it starts with 5701 and increments by 1 until it finds a free port. The cluster nodes also have to be allowed to communicate with each other on all high ports, as random high ports are used by the cluster service.
  • The "ip" value in RADIUS_CONFIGURATIONS must be set to 0.0.0.0 ("ip" : "0.0.0.0"), since this configuration is replicated between the nodes, so no node-specific IP should be set here. A specific IP will make the RADIUS listener unable to bind on subsequent nodes (see the sketch after this list).
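
As an illustration of the last point, only the wildcard address should appear in the RADIUS listener entry. The sketch below is minimal and partly assumed: the RADIUS_CONFIGURATIONS key and the "ip" value come from this document, while the "id" value is hypothetical and any further keys (port, shared secret, authenticators and so on) are omitted.

"RADIUS_CONFIGURATIONS" : [ {
    "id" : "radius_example",
    "ip" : "0.0.0.0"
  } ]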

Overview

This is a summary of the tasks covered in this document:

1. Install with cluster service

2. Add cluster functionality after installation

3. Configure Node1

4. Configure subsequent nodes

5. Add modules to subsequent nodes

6. Extracted example configuration from cluster.xml

7. Extracted example configuration from phenix-store.json

8. Verify cluster

9. Ports

Install with cluster service

On Linux - There are no special installation steps on Linux. Clustering is enabled at startup with the -cluster parameter, like this:

sudo ./start-PhenixID.sh -cluster -cluster-host <ip_address_of_local_machine>

On Windows - When installing PhenixID Server, add the cluster functionality by checking the box "This node will be a part of a PhenixID server cluster", and then (on the next page) enter the IP address to use for the cluster on the local machine and, in the next box, the IP address of the second node.

Add cluster functionality after installation

For Linux, see the step above.

For Windows, reinstall and select the cluster options mentioned above.

Configure Node1

Start by adding the desired functional configuration on Node1, and then verify that everything works as expected.

When done, please make a backup of the file phenix-store.json.

The file <PhenixID Server installation path>/Server/classes/cluster.xml is used for the cluster configuration. By default it uses multicast. To point directly to the other server in the cluster, change multicast enabled="true" to "false" and tcp-ip enabled="false" to "true". Add the local IP address as the value of <interface>, and add the IP address of the other server in the cluster as a <member> in the member-list. The local IP address also needs to be set a bit further down in the file, in this section:

<interfaces enabled="true">
    <interface>clusterIP</interface>
</interfaces>

Make sure that this section is set to enabled="true" and replace "clusterIP" with the local IP address.

See example below.

Configure subsequent nodes

Before starting the configuration on the subsequent node(s), make sure that you have a backup of the file phenix-store.json on Node1. Also remove "com.phenixidentity~phenix-store-mpl" from the "default_modules" key in the boot.json file on all nodes but one in the cluster, as sketched below.
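
As a minimal sketch, boot.json on the node that keeps the module could contain a "default_modules" key along these lines; the entry "com.phenixidentity~phenix-pipes~1.3.0" is only a stand-in for whatever other modules your installation lists, so the actual content will differ:

"default_modules" : [ "com.phenixidentity~phenix-pipes~1.3.0", "com.phenixidentity~phenix-store-mpl" ]

On all other nodes in the cluster, remove the phenix-store-mpl entry so that the key reads:

"default_modules" : [ "com.phenixidentity~phenix-pipes~1.3.0" ]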


On each subsequent node, start by adding the desired cluster configuration to <PhenixID Server installation path>/Server/classes/cluster.xml, and then start the node. It will begin by retrieving the configuration from Node1, but no modules will be loaded yet.


Add modules to subsequent nodes

Now add the modules that the subsequent nodes should run. This is done in the file <PhenixID Server installation path>/Server/config/phenix-store.json. Find the section called NODES, locate the id of the subsequent node, and add the modules that should run on that node. See the example below.

Extracted example configuration from cluster.xml

<multicast enabled="false">
    <multicast-group>224.2.2.3</multicast-group>
    <multicast-port>54327</multicast-port>
</multicast>
<tcp-ip enabled="true">
    <interface>192.168.0.53</interface>
    <member-list>
        <member>192.168.0.93</member>
    </member-list>
</tcp-ip>

------------

<interfaces enabled="true">
    <interface>192.168.0.53</interface>
</interfaces>

Extracted example configuration from phenix-store.json

"NODES" : [ {
    "id" : "PHIDTEST01",
    "modules" : [ {
      "module" : "com.phenixidentity~phenix-pipes~1.3.0",
      "enabled" : true,
      "config" : {
        "node_id" : "PHIDTEST01"
      }
    }, {
      "module" : "com.phenixidentity~phenix-session-manager~1.3.0",
      "enabled" : true,
      "config" : {
        "node_id" : "PHIDTEST01"
      }
    },{
      "module" : "com.phenixidentity~phenix-radius~1.3.0",
      "enabled" : true,
      "config" : {
        "node_id" : "PHIDTEST01"
      }
    } ]
  }, {
    "id" : "PHIDTEST02",
    "modules" : [ {
      "module" : "com.phenixidentity~phenix-session-manager~1.3.0",
      "enabled" : true,
      "config" : {
        "node_id" : "PHIDTEST02"
      }
    }, {
      "module" : "com.phenixidentity~phenix-pipes~1.3.0",
      "enabled" : true,
      "config" : {
        "node_id" : "PHIDTEST02",
        "instances" : 4
      }
    }, {
      "module" : "com.phenixidentity~phenix-radius~1.3.0",
      "enabled" : true,
      "config" : {
        "node_id" : "PHIDTEST02"
      }
    } ]
  } ]

Verify cluster

With the extracted example configuration above, verify the cluster using these steps:

1. Download and install a RADIUS client test tool (see the sketch after these steps)

2. Open the RADIUS client test tool and point it to node 1 in the cluster

3. Send a username and password. The server should respond with a challenge to enter an OTP.

4. Shut down node 1.

5. Point the RADIUS client test tool to node 2.

6. Enter the OTP and send it to node 2.

7. Verify authentication success.
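
If no RADIUS client test tool is at hand, steps 2 and 3 can for example be performed with radclient from the FreeRADIUS utilities. This is only a sketch under assumptions: the node address 192.168.0.53, the port 1812, the shared secret "secret" and the credentials are placeholders and must be replaced with the values from your RADIUS configuration.

echo "User-Name = testuser, User-Password = Password1" | radclient -x 192.168.0.53:1812 auth secret

The expected reply is an Access-Challenge asking for an OTP. To complete steps 5-7, point the tool to node 2 and send the OTP.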

Ports

The cluster uses a few different ports for communication between the nodes.

Two of these ports can be set in the file cluster.xml under /classes; the third is set at startup of the service.

If these ports need to be restricted because of firewall or security rules, the relevant settings are described below.

The first port is 5701 by default and is set to auto-increment if 5701 is already taken. The line looks like this by default:

<port auto-increment="true" port-count="100">5701</port>

The next port is the outbound communication port. By default the system picks an ephemeral port during the socket bind operation for this communication. The section looks like this:

<outbound-ports>
    <!--
    Allowed port range when connecting to other nodes.
    0 or * means use system provided port.
    -->
    <ports>0</ports>
</outbound-ports>

The third port is allocated by the service at startup. If it is not explicitly assigned, the system picks a random ephemeral port.

To restrict this behaviour and define the exact ports the system should use, change the configuration as follows.

To restrict the cluster to port 5701 only, change the auto-increment value from true to false:

<port auto-increment="false" port-count="100">5701</port>

For the outbound ports, one port per subsequent node is needed. Add the port(s) of your choice to the configuration:

<outbound-ports>
    <!--
    Allowed port range when connecting to other nodes.
    0 or * means use system provided port.
    -->
    <ports>37000,37001</ports>
</outbound-ports>

The example above is for a three-node cluster, so two ports are needed.

The third port, allocated at startup, can be controlled with the parameter -cluster-port.

For a Linux installation, add this parameter together with the other startup parameters:

sudo ./start-PhenixID.sh -cluster -cluster-host <ip_address_of_local_machine> -cluster-port <port>

For a Windows installation, the parameter needs to be added to the startup service. Use the following command to add it:

sc config "PhenixID service - cluster" binPath= "C:\Program Files\PhenixID\Server\bin\phenixidserviceclu.exe -cluster-port 47000"