Appboard/2.4/admin/clustering and failover

Revision as of 05:32, 22 July 2014

Overview

AppBoard is implemented using a highly scalable web application architecture. As a Java application running inside an Apache Tomcat server, AppBoard is able to make use of multi-core and multi-processor systems with large amounts of RAM on 64-bit operating systems. In addition to scaling vertically on a single system, AppBoard supports horizontal scaling to handle even greater loads and/or to provide for high availability environments through the use of a shared configuration database. AppBoard can be used in the following configurations:

  1. Load Balanced: Two or more nodes are fully operational at all times. The load balancer directs traffic to nodes using standard load balancing techniques such as round-robin, fewest sessions, smallest load, and so on. If a server is detected as down it is removed from the active pool.
  2. Failover: A two-node configuration with both nodes running but all traffic is routed to the primary node unless it is detected to be down. At this point the load balancer re-directs traffic to the secondary node.
  3. Cold Standby: A two-node configuration where the secondary node is offline in normal operation. If the primary node is detected to be down the secondary node is brought online and traffic re-directed.

Where high availability is required, a cluster configuration is recommended regardless of the load. Where load is a concern, refer to the Performance Tuning & Sizing documentation for more information.

Architecture & Licensing

Two Node Cluster Architecture

Whether running a load-balanced, failover, or cold-standby configuration the following components are required:

  • One AppBoard installation per node; this requires a separate license for each node.
  • External (shared) configuration database. This database is not provided by Edge and it is recommended that it reside on a different host from the AppBoard servers. In high availability environments the database itself should also be highly available. See the System Requirements for supported external configuration databases.
  • Load Balancer. This component is not provided by Edge but is required in cluster configurations.

Cluster Configuration

The overall cluster configuration is made up of separate parts that follow the cluster architecture:

  1. Load Balancer configuration
  2. Shared AppBoard configuration: via an external shared configuration database.
  3. Per-node AppBoard configuration and filesystem assets.

Also note that establishing a new cluster and maintaining an existing one may call for different approaches, as outlined below.


Shared Configuration Database

In simple single-server AppBoard configurations it is recommended to use the built-in, in-memory H2 configuration database. In cluster configurations, however, the configuration needs to be shared and kept in sync across two or more nodes, so an external configuration database is required. Setting up an external database for redundancy operation has two main steps:

  1. Configure AppBoard to use an external database (versus the built-in H2). This process is documented in isolation on the Configuration Database page.
  2. Configure AppBoard to operate in redundancy mode.
Note: the above lists the main steps in isolation; refer to Establishing a Cluster below for full details.

Per-Node Configuration & Assets

While the shared configuration database takes care of AppBoard content, enPortal content, and provisioning information, there are other configuration items and filesystem assets that need to be maintained per node:

  • license file
  • configuration database connection details
  • all other filesystem assets such as login pages, images for look and feel, images for visualizations, local data source files, and other miscellaneous pieces that have been built into the solution such as custom JSPs, CGIs, HTML/CSS/JS, etc...

The recommended approach, which also ensures full backups are made of the system, is to configure the Backup export list to include all filesystem assets. When establishing or updating a cluster, the archive can then be used to maintain the filesystem components. Other custom approaches, such as filesystem synchronization tools, may also be suitable.

The license and database configuration will need to be handled when first establishing the cluster but will not need to be changed after that point.

Load Balancer

The Load Balancer can distribute sessions to one or more AppBoard nodes using any standard load balancing algorithm (e.g. round-robin, smallest load, fewest sessions, etc.). The only requirement is that session affinity is maintained so that a single user is always routed to the same AppBoard node for the full duration of the session.

The two session cookies used by AppBoard are JSESSIONID and enPortal_sessionid. When configuring the Load Balancer for session affinity, it is recommended to use enPortal_sessionid to avoid any conflicts with other applications that may also have a JSESSIONID cookie.
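As an illustration only, cookie-based affinity on enPortal_sessionid might look like the following HAProxy fragment. HAProxy itself and all names here (backend name, node addresses, ports) are assumptions, not part of AppBoard; any load balancer with cookie-based session affinity will work:

```
# Hypothetical HAProxy backend: route each user to the same node based on
# the enPortal_sessionid cookie set by AppBoard.
backend appboard_nodes
    balance roundrobin
    # Use AppBoard's own session cookie for affinity; prefix mode keeps
    # the original cookie value intact for the application.
    cookie enPortal_sessionid prefix
    # Health check against AppBoard's availability URL (see below).
    option httpchk GET /enportal/check.jsp
    server node1 192.168.180.11:8080 check cookie node1
    server node2 192.168.180.12:8080 check cookie node2
```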

The following URL can be used by the load balancer as a means of testing AppBoard availability:

http://server:port/enportal/check.jsp

This script returns an HTTP status code of 200 (success) if all components of AppBoard are running properly, or 500 (internal error) if there is an issue. If the AppBoard server isn't running at all, there will be no response.
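As an illustrative sketch only, a monitoring script could wrap this URL with curl and classify the result. The function name and the use of curl are assumptions, not part of AppBoard:

```shell
# Classify an AppBoard node's health using check.jsp. "$1" is the
# host:port of the node; requires curl.
check_node() {
  local node="$1" code
  # -w '%{http_code}' prints the response status code; an unreachable
  # server yields "000".
  code=$(curl -s -o /dev/null -w '%{http_code}' "http://${node}/enportal/check.jsp")
  case "$code" in
    200) echo "healthy" ;;
    500) echo "unhealthy" ;;
    *)   echo "unreachable" ;;
  esac
}
```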

Virtualized Environments

Whether running on the bare metal or within virtualized environments the clustering configuration remains the same.

Some virtualization environments may offer their own layer of fault tolerance, although this is usually targeted at reducing or eliminating the impact of hardware failure - e.g. VMware Fault Tolerance can transparently fail over a guest from a failed physical host to a different physical host so that everything continues uninterrupted. This type of system is useful on its own but may not be aware of application-level failures that can also occur.


Establishing a Cluster

The following process can be used to establish a cluster environment. If you've skipped straight here, please go back and read the previous sections to understand the overall architecture and configuration components.

The following process assumes an existing environment, although a clean install environment can be used too.

Initial Preparation

  1. Set up an external database to serve as the shared configuration database; you will need its access details later. Do not actually configure AppBoard to use the external database at this stage.
  2. Set AppBoard to operate in redundancy mode by enabling the following setting in [INSTALL_HOME]/server/webapps/enportal/WEB-INF/config/custom.properties:
    hosts.redundant=true
  3. Create a full backup archive of your existing system. Ensure this backup has been configured to include all custom filesystem assets; refer to the Backup & Recovery page for more information. It is recommended that this archive also include all required JDBC drivers, including the driver for the external configuration database.
  4. Have two or more systems ready to be configured for clustering. Note that it is also possible to reuse the existing system without re-installing.
  5. Have the AppBoard turnkey distribution and license files for each node.
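The redundancy-mode setting from step 2 can be applied with a small idempotent shell sketch. This is an illustration, not an AppBoard tool; the properties path comes from step 2, and the install root is passed in as an argument:

```shell
# Enable redundancy mode by appending the setting to custom.properties.
# "$1" is the AppBoard install root ([INSTALL_HOME]).
enable_redundancy() {
  local props="$1/server/webapps/enportal/WEB-INF/config/custom.properties"
  # Only append if not already present, so the operation is idempotent.
  grep -q '^hosts.redundant=true$' "$props" 2>/dev/null \
    || echo 'hosts.redundant=true' >> "$props"
}
```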

Setup Process

The following applies to the primary node. Even in a purely load-balanced configuration, pick one of the nodes as the primary for the purposes of establishing the cluster:

  1. If converting the existing deployment into a cluster deployment, first shut down AppBoard.
  2. Otherwise, deploy the AppBoard turnkey and ensure a valid environment. See the Installation documentation for more info.
  3. Load the complete backup archive created in the initial preparation:
    portal Apply -jar archive.jar
  4. Configure AppBoard to use an external configuration database. Follow the instructions to use dbsetup on the Configuration Database page, but do not load any archives or perform a dbreset.
  5. Install the license file if not already installed - remember licenses are node specific.
  6. Start AppBoard. At this point you should have a working cluster of one node!
  7. Verify that the following lines appear in the AppBoard catalina.out log file:
    Establishing a connection to the external configuration database (the exact values will depend on your configuration):
    Connecting to the MySQL database at jdbc:mysql://192.168.180.1:3306/ab_cluster...Connected.
    Redundancy mode is enabled:
    date@timestamp Redundancy support enabled
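The verification in step 7 can be scripted as a quick sanity check. This is a sketch only: the grep patterns come from the example log lines above, and the catalina.out location shown in the usage comment is an assumption for a turnkey install:

```shell
# Verify a node's startup log shows both the external database
# connection and the redundancy-mode confirmation. "$1" is the log file.
verify_cluster_startup() {
  local log="$1"
  # Both lines must be present for the node to be cluster-ready.
  grep -q 'Connecting to the .* database at .*Connected\.' "$log" \
    && grep -q 'Redundancy support enabled' "$log"
}

# Example usage (log path is an assumption; adjust for your install):
# verify_cluster_startup "[INSTALL_HOME]/server/logs/catalina.out"
```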

The following applies to all other nodes in the cluster:


When redundancy is enabled the following message will be logged to system.log on startup:

... - INFO - system - Redundancy support enabled

Maintaining a Cluster