Configuration
This document describes how to set up PhenixID Server in a cluster. The cluster configuration will be used to share configuration and sessions between two or more nodes. Please note that additional network components might be required in order to achieve failover and load balancing.
We recommend that the nodes forming the cluster have unobstructed communication paths, with no port restrictions in firewalls or other network equipment.
If clustering is added to an existing configuration, special preparation is needed. Please read the section on database preparation below.
The reader of this document should have some basic knowledge of PhenixID Server.
System Requirements
- PhenixID Server installed on two or more machines
- Communication must be open between the hosts on the ports used by the cluster. By default, the required ports are 5701, 5702, 2424 and 47000.
- Any port binding in the configuration must be set to 0.0.0.0. For example, the ip in RADIUS_CONFIGURATIONS must be set to "ip" : "0.0.0.0", since this setting is replicated between the nodes. Do not set a node-specific IP here; a specific IP would make the RADIUS listener unable to bind on subsequent nodes.
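Before installing, it can help to confirm that the default cluster ports are reachable from each node. The sketch below is a minimal check; NODE_IP is a hypothetical address of the peer node, and the availability of nc (netcat) on the host is an assumption.

```shell
# Check reachability of the default cluster ports on the peer node.
NODE_IP=192.168.0.55   # hypothetical address of the other cluster node
for port in 5701 5702 2424 47000; do
  if nc -z -w 1 "$NODE_IP" "$port"; then
    echo "port $port open"
  else
    echo "port $port not reachable"
  fi
done
```

Run the check in both directions, since firewall rules may be asymmetric.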
Overview
This is a summary of the tasks covered in this document:
1. Install with cluster service
2. Add cluster functionality post installation
3. Configure first node
4. Install and start subsequent nodes
5. Add modules to subsequent nodes
6. Extracted example configuration from phenix-store.json
7. Verify cluster
8. Database preparation when adding a node or reconfiguration of existing environment
9. Clustering in environments with multiple NICs
Install with cluster service
In order to enable clustering, this option must be selected at installation time. Follow the instructions in the installer.
Starting the cluster requires additional arguments when running in a Linux environment:
sudo ./start-PhenixID.sh -cluster -cluster-host <ip_address_of_local_machine>
On Windows, the installed service will handle the arguments automatically.
Add cluster functionality post installation
Reinstall and select the cluster options mentioned above.
Configure first node
Start by adding the desired functional configuration on Node1, then verify that everything works as intended.
When done, back up the file phenix-store.json.
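A minimal backup sketch is shown below. The install path and the location of phenix-store.json under config/ are assumptions; adjust them to match your installation.

```shell
# Back up the configuration database before cloning it to other nodes.
PHENIX_HOME="${PHENIX_HOME:-/opt/PhenixID/Server}"   # assumed install path
SRC="$PHENIX_HOME/config/phenix-store.json"          # assumed file location
if [ -f "$SRC" ]; then
  cp "$SRC" "$SRC.$(date +%Y%m%d-%H%M%S).bak"
  echo "backup created"
else
  echo "phenix-store.json not found at $SRC"
fi
```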
Install and start subsequent nodes
When the configuration has been added to Node1, the subsequent node(s) can be installed and started.
The subsequent node(s) will retrieve the configuration from Node1, but no modules will be loaded yet.
Add modules to subsequent nodes
New nodes do not automatically deploy any features other than those specified in boot.json. Deploy features to a subsequent node by updating the module_refs attribute for that node: simply add the module IDs to the comma-separated list on the node where deployment should be performed.
Extracted example configuration from phenix-store.json
"NODES" : [ {
"name" : "computer1",
"description" : "Default node (created automatically)",
"config" : {
"module_refs" : "8ea5c86b-0f05-4365-8a21-6eaee984ceff,3b3a7f4e-6600-4731-94c0-01684990a9c9,41b14787-4f9a-4983-b974-92960f8266fd,f2fbc6e8-aa90-463a-9dc6-dc13cd01afc9,d6a1123a-42d2-419f-b9cf-9abd37a3a68f,eaptls,a5d7a2a9-cf92-4d57-aa7a-909a3e4df9f7,b79c5c49-d6e1-47ac-b451-4590aa28b518"
},
"created" : "2017-03-08T11:18:44.095Z",
"id" : "58af6d49-2fc1-4341-b86d-e8636ee07754",
"modified" : "2017-03-09T12:24:40.232Z"
},{
"name" : "computer2",
"description" : "Second node",
"config" : {
"module_refs" : "b79c5c49-d6e1-47ac-b451-4590aa28b518"
},
"created" : "2017-03-08T11:18:44.095Z",
"id" : "58af6d49-2fc1-4341-b86d-e8636ee07755",
"modified" : "2017-03-09T12:24:40.232Z"
} ],
The example above shows that the node named computer1 has a number of modules deployed, while the node named computer2 has only one.
Verify cluster
Verify that the configuration is replicated between the nodes by comparing phenix-store.json. The file should be identical on all nodes.
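One way to compare the files is to copy phenix-store.json from each node to one machine (for example with scp) and check that the copies are byte-identical. The helper below is a sketch; the file names are placeholders for the copies fetched from each node.

```shell
# Verify that two copies of phenix-store.json are byte-identical.
verify_identical() {
  if cmp -s "$1" "$2"; then
    echo "configuration replicated"
  else
    echo "files differ - check cluster replication"
  fi
}

# Example (file names are hypothetical):
# verify_identical node1-phenix-store.json node2-phenix-store.json
```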
In addition, it is a good idea to verify that the cluster nodes have their features properly configured. For HA clusters, simply take one node offline and verify functionality.
Database preparation when adding a node or reconfiguration of existing environment
When adding nodes, or when converting a non-clustered installation to a clustered one, you must prepare the database as follows:
- Install the subsequent cluster node, but do not start the service
- Remove the content of /data from the subsequent node
- Stop the first node
- Copy the content from /data from the first node to the same folder on the subsequent node
- Start the first node
- Verify that the first node is up and running
- Start the subsequent node
- Verify, by viewing the logfile, that the database delta is synchronized from the first node
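The steps above can be sketched as the following sequence of commands. The hostname, install path and service name are assumptions for illustration; DRY_RUN=echo prints the commands instead of running them, so clear it only after adapting the values to your environment.

```shell
# Sketch of the /data preparation steps (all names are hypothetical).
DRY_RUN=echo                        # print commands instead of executing them
NEW_NODE=node2.example.com          # hypothetical new cluster node
PHENIX_HOME=/opt/PhenixID/Server    # assumed install path

$DRY_RUN ssh "$NEW_NODE" "rm -rf $PHENIX_HOME/data/*"     # clear /data on new node
$DRY_RUN systemctl stop phenixid                          # stop the first node
$DRY_RUN scp -r "$PHENIX_HOME/data" "$NEW_NODE:$PHENIX_HOME/"
$DRY_RUN systemctl start phenixid                         # start the first node again
$DRY_RUN ssh "$NEW_NODE" systemctl start phenixid         # then start the new node
```

After starting the new node, watch its logfile for the database delta synchronization.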
Clustering in environments with multiple NICs
The default configuration may have problems handling environments with multiple NICs.
If that is the case, the following modifications must be made:
Insert the host's IP address as public-address in config/orientdb/hazelcast.xml:
<network>
  <public-address>192.168.0.54</public-address>
  <port auto-increment="false" port-count="1">5702</port>
  ...
</network>
Replace the listeners element with the following in config/orientdb-server-config.xml:
<listeners>
<listener protocol="binary" socket="default" port-range="2424" ip-address="127.0.0.1"/>
<listener protocol="binary" socket="default" port-range="2424" ip-address="192.168.0.54"/>
</listeners>
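After restarting the services, it can be useful to confirm that the OrientDB binary listener is actually bound on the expected interface and port (2424). The check below assumes the ss utility is available (use netstat -ltn on older systems).

```shell
# Show listening sockets on port 2424; print a note if none is found yet.
ss -ltn | grep ':2424 ' || echo "no listener on port 2424 yet"
```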