This chapter describes the basic concepts, environment configuration, and startup of Tibero New Cluster Manager.
Tibero Cluster Manager (hereafter CM) increases Cluster availability and manageability. It also manages instance membership for Tibero Active Cluster service. Various components of a Cluster are managed as resources in CM.
The following are the Cluster resources managed by CM:
File
Network
Cluster
Service
DB (Database)
AS (Active Storage)
VIP
CM monitors the states of the registered resources and performs the necessary actions. CMs in the same Cluster exchange heartbeat messages through the specified network and shared disk to keep track of the current Cluster membership.
One of the CMs in the Cluster takes the Master role and automatically adjusts the Cluster membership in case of failure, so that Cluster services continue seamlessly. If the Master node fails, another node becomes the Master node.
First, set CM_HOME and CM_SID as follows. For information about the basic environment variables needed to start the RDBMS, refer to “Chapter 2. Basic Database Administration”.
$CM_HOME
Set CM's installation path.
export CM_HOME=/home/tibero6
$CM_SID
Set a unique Node ID.
export CM_SID=cm0
The following parameters can be configured in the TIP file of each CM. The required items must be specified, and the optional items will be set to their default values if they are not specified.
Initialization Parameter | Required | Description |
---|---|---|
CM_NAME | Required | Node ID. Must be unique within the Cluster. |
CM_UI_PORT | Required | Network port number used to connect to CM when executing cmrctl. This port must be open between nodes. |
CM_RESOURCE_FILE | Required | CM resource file path. The CM resource file is a binary file in which CM saves the registered resource information. |
CM_RESOURCE_FILE_BACKUP | Optional | CM resource file backup path. |
CM_RESOURCE_FILE_BACKUP_INTERVAL | Optional | Time interval at which the CM resource file is backed up to the path set in CM_RESOURCE_FILE_BACKUP. (Unit: minutes) |
LOG_LVL_CM | Optional | CM log level (a value between 1 and 6). A higher level produces more detailed logging. (Default value: 2) |
CM_LOG_DEST | Optional | CM log directory. Must be an absolute path. (Default value: $CM_HOME/instance/$CM_SID/log/) |
CM_GUARD_LOG_DEST | Optional | CM Guard log file path. Must be an absolute local path, and CM must be started with root privileges. (The default value differs by OS.) |
CM_LOG_FILE_SIZE | Optional | Maximum CM log file size. A new log file is created when the current file exceeds this size. Use an integer between 100 KB and 1 GB. (Unit: byte, Default value: 10 MB) |
CM_LOG_TOTAL_SIZE_LIMIT | Optional | Total maximum size of all log files created in the CM_LOG_DEST directory. When this size is exceeded, the oldest file is deleted to prevent the log size from growing indefinitely. Use an integer between 100 KB and 1 GB. (Unit: byte, Default value: 300 MB) |
CM_TIME_UNIT | Optional | Time unit for CM management. (Unit: 0.1 sec, Default value: 10) |
CM_HEARTBEAT_EXPIRE | Optional | Time limit for CM to detect another node's failure. If a heartbeat is not received from another node's CM within this time, it is regarded as a node failure. (Unit: second, Default value: 300) |
CM_NET_EXPIRE_MARGIN | Optional | Network heartbeat expire time for CM. Must be >= 5. (Unit: second, Default value: 5) |
CM_WATCHDOG_EXPIRE | Optional | Watchdog expiration period when the watchdog is activated between the RDBMS and CM. If CM does not operate within this period, the RDBMS is automatically terminated. Must be set to a value less than CM_HEARTBEAT_EXPIRE. (Unit: second, Default value: 290) |
CM_FENCE | Optional | Option to start the CM fence daemon. The CM fence daemon restarts the node of a CM that has exceeded the CM_WATCHDOG_EXPIRE time during an I/O operation, in order to prevent the RDBMS of the problem node from performing I/O. For restart permissions, CM must be started with root privileges. (Default value: N) |
CM_ENABLE_FAST_NET_ERROR_DETECTION | Optional | Option to activate network error detection of other CM nodes for early detection of abnormal states. (Default value: N) |
_CM_BLOCK_SIZE | Optional | I/O unit size of the CM file, in bytes. Use the default value for most OSs; use 1024 for HP-UX. (Default value: 512) |
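For example, a minimal CM TIP file might look like the following. The values are illustrative; only the first three parameters are required.

CM_NAME=cm0
CM_UI_PORT=8635
CM_RESOURCE_FILE=/home/tibero6/cm0_res.crf
LOG_LVL_CM=2               #optional
CM_HEARTBEAT_EXPIRE=300    #optional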
The following is the CM command syntax.
tbcm [option] <argument>
The following are the options:
Option | Description |
---|---|
-b | Start CM daemon. |
-d | Terminate CM. |
-x <file path> | Export the CM resource information to the specified file path. |
-X | Export the CM resource information to the CM_RESOURCE_FILE path. |
-s | Display CM status (initialization parameter values). |
-v | Display CM version info. |
-h | Display tbcm command help. |
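For example, the options can be used as follows. The export file path is only an illustration.

tbcm -b                                    #start the CM daemon
tbcm -s                                    #display CM status (initialization parameter values)
tbcm -x /home/tibero6/cm_res_export.crf    #export CM resource information to the given file
tbcm -d                                    #terminate CM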
Use the cmrctl command to execute the desired tasks after starting the CM daemon.
cmrctl is a set of commands for managing and controlling resources in the New CM.
The following is the basic cmrctl command syntax.
cmrctl <action> <resource_type> [--<attr_key> <attr_val>|...]
The following are the cmrctl actions and resource types.
Item | Available Values |
---|---|
action | add, del, show, start, stop, act, deact, modify |
resource_type | network, cluster, service, db, as, vip, file |
Certain combinations of action and resource type are not allowed (for example, add with file).
The following are possible cmrctl add commands.
Command to add a network resource.
Required attributes are different for public and private network types. A network resource is used to configure interconnect IP/PORT or public network interface for VIP use. Once configured, the network interface can be monitored to automatically update the resource status.
cmrctl add network --name <network_name> --nettype <private|public> --ipaddr <network_ipaddr/netmask_addr> --portno <port_no> --ifname <interface_name>
Key | Value Type | Description |
---|---|---|
name | string | Network resource name. (unique, required) |
nettype | string | Network resource type: either 'private' (interconnect) or 'public' (for VIP use). |
ipaddr | string | Interconnect IP address. (required only for 'private' nettype) |
portno | integer | Interconnect port number for CMs. This port must be open between nodes. (required only for 'private' nettype) |
ifname | string | Public interface name (for VIP use). (required only for 'public' nettype) |
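For example, a private (interconnect) network resource and a public network resource for VIP use could be added as follows; the IP address, port number, and interface name are illustrative.

cmrctl add network --name net1 --nettype private --ipaddr 192.168.1.1 --portno 29000
cmrctl add network --name pub1 --nettype public --ifname eth0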
Command to add a Cluster resource. Cluster resources include interconnect for inter-node connections, storage shared among nodes, and public network interface for VIP use.
cmrctl add cluster --name <cluster_name> --incnet <network_resource_name> --pubnet <public_network_resource_name> --cfile <file_path>
Key | Value Type | Description |
---|---|---|
name | string | Cluster resource name. (unique, required) |
incnet | string | Network resource name for interconnect use. (required) |
pubnet | string | Network resource name for public use (for VIP use) |
cfile | file path | Cluster file path. Multiple paths can be set by using a comma as separator. (required) To specify a TAS path (diskstring), prepend a '+' to the path. For a storage server, specify as follows: |
1. If cfile is set to a raw device path instead of a TAS diskstring, it is recommended to use an odd number of paths (majority rule).
2. A file resource is automatically created from the information in the file specified with --cfile for the Cluster resource. If set to a TAS diskstring, resources are created with names such as +0, +1, and +2.
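For example, a Cluster resource using three cfile paths on a shared disk and a public network resource could be added as follows; the paths and resource names are illustrative.

cmrctl add cluster --name cls1 --incnet net1 --pubnet pub1 --cfile /shared/cls1_cfile1,/shared/cls1_cfile2,/shared/cls1_cfile3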
Command to add a service resource. Conceptually, a service is a set of instances that provides a specific service in the Cluster environment.
cmrctl add service --name <service_name> --type <DB|AS> --mode <AC|HA> --cname <cluster_resource_name>
Key | Value Type | Description |
---|---|---|
name | string | Service resource name. (unique, required) The service resource name must be the same as the value of DB_NAME of the database mapped to the service. DB_NAME for TAS can be omitted from a tip file. However, if it is set in the file, the value must be the same as the service resource name. |
type | string | Service type: either 'DB' (database service) or 'AS' (Active Storage service). |
mode | string | Service instance Clustering mode: either 'AC' (Active Cluster) or 'HA' (High Availability). |
cname | string | Service resource's Cluster resource name. (required) |
A single AS service is allowed per Cluster. A service resource must be added to a node of a specific Cluster, and it is automatically shared by all resources in the Cluster.
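For example, a TAC database service could be added to the cls1 Cluster as follows; the service name ac matches the DB_NAME used in the configuration examples later in this chapter.

cmrctl add service --name ac --type DB --mode AC --cname cls1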
Command to add a Tibero instance. An instance can only be added to a DB type service.
cmrctl add db --name <db_resource_name> --svcname <service_name> --dbhome <directory_path> --envfile <file_path> --retry_cnt <retry_cnt> --retry_interval <retry_interval>
Key | Value Type | Description |
---|---|---|
name | string | DB resource name. The DB resource name must be same as TB_SID of the DB instance. (unique, required) |
svcname | string | DB resource's service resource name. (required) |
dbhome | string (directory path) | DB binary path, similar to TB_HOME. (required) |
envfile | string (file path) | Environment file for executing DB binary. (recommended) |
retry_cnt | integer | Maximum retry count. (Default value: 3) |
retry_interval | integer | Retry interval. (Unit: second, Default value: 0; no retries are attempted by default) |
It is recommended to create an envfile for each DB resource. The envfile contains the export commands of environment variables needed to start and terminate the RDBMS.
When there is no envfile, the following is performed by default. LD_LIBRARY_PATH is used for Linux and Solaris. Instead of LD_LIBRARY_PATH, LIBPATH and SHLIB_PATH are used for AIX and HP-UX, respectively. For the other environment variables, values that the terminal has when CM boots are used. For more information about environment variables, refer to Tibero Installation Guide.
export TB_SID=name          #db resource name
export TB_HOME=dbhome       #db resource's home directory
export PATH=$TB_HOME/bin:$TB_HOME/client/bin:/usr/bin:$PATH
export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH
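For example, an envfile for a DB resource named ac1 with TB_HOME of /home/tibero6 (the values used later in this chapter) might contain the following:

export TB_SID=ac1
export TB_HOME=/home/tibero6
export PATH=$TB_HOME/bin:$TB_HOME/client/bin:/usr/bin:$PATH
export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH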
Command to add an AS (Active Storage) instance to an AS type service.
cmrctl add as --name <as_resource_name> --svcname <service_name> --dbhome <directory_path> --envfile <file_path> --retry_cnt <retry_cnt> --retry_interval <retry_interval>
Key | Value Type | Description |
---|---|---|
name | string | AS resource name. (unique, required) |
svcname | string | AS resource's service resource name. (required) |
dbhome | string (directory path) | AS binary path, similar to TB_HOME. (required) |
envfile | string (file path) | Environment file for executing AS binary. (recommended) |
retry_cnt | integer | Maximum retry count. (Default value: 3) |
retry_interval | integer | Retry interval. (Unit: second, Default value: 0; no retries are attempted by default) |
The AS resource name must be the same as TB_SID of the AS instance. For information about envfile, refer to "cmrctl add db".
To add a VIP, tbcm must be started with ROOT privileges, and the pubnet attribute must be set on the Cluster used by the service specified in svcname. Confirm that the PATH environment variable is set correctly; if it does not include /sbin, a VIP alias error occurs.
cmrctl add vip --name <vip_name> --node <CM_SID> --svcname <service_name> --ipaddr <vip_ipaddr/netmask_addr> --bcast <bcast_addr>
Key | Value Type | Description |
---|---|---|
name | string | VIP resource name. (unique, required) |
node | string | CM_SID of a node that owns VIP. (optional, default value: CM_SID of a node that executed the command) |
svcname | string | Service to use VIP. (required) |
ipaddr | string (IP address/Netmask) | Address in the format of VIP IP address/Netmask. (required) |
bcast | string (broadcast address) | VIP broadcast address. (optional) |
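For example, a VIP for the ac service could be added as follows. The VIP address and netmask are illustrative, and --node is omitted so the VIP is owned by the node that executes the command.

cmrctl add vip --name vip1 --svcname ac --ipaddr 192.168.1.100/255.255.255.0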
Command to delete a resource that is in DOWN or DEACT state.
cmrctl del <resource_type> --name <resource_name>
Command to display information about CM's resources.
The command can be used in the following four ways:
cmrctl show
Display a list of CM daemon's resources.
cmrctl show all
Display a list of resources in all nodes in a CM daemon cluster.
cmrctl show <resource_type>
Display a list of the CM daemon's resources of the specified <resource_type>.
cmrctl show <resource_type> --name <resource_name>
Display details about the specified resource.
Command to start a resource. Starting a service resource starts all instances of the service.
cmrctl start <resource_type> --name <resource_name> [--option <options>]
Command to stop a resource. Stopping a service resource stops all instances of the service. It also deactivates the auto-restart operation, if active. The auto-restart mode is used to detect and restart any service instance that has been stopped.
cmrctl stop <resource_type> --name <resource_name> [--option <options>]
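For example, a DB resource can be started in NOMOUNT mode by passing a boot option with the --option attribute, and a service can be stopped together with all of its instances:

cmrctl start db --name ac1 --option "-t NOMOUNT"
cmrctl stop service --name ac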
Command to activate a resource that has been deactivated due to the following reasons:
When restart attempts exceed the retry_cnt (default value: 3) of the DB or AS resource
When the user explicitly deactivates the resource with the cmrctl deact command
Executing this command on a service resource also activates the auto-restart mode on all instances of the service.
cmrctl act <resource_type> --name <resource_name>
Command to deactivate a resource. A deactivated resource is exempt from auto-restart. Executing this command on a service resource also deactivates the auto-restart mode on all instances of the service.
cmrctl deact <resource_type> --name <resource_name>
Command to modify the retry_cnt and retry_interval of an instance (DB or AS) resource.
cmrctl modify <resource_type> --name <resource_name> [--option <options>]
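As an illustration only, assuming cmrctl modify accepts the same retry attribute keys as cmrctl add, the retry settings of a DB resource might be changed as follows:

cmrctl modify db --name ac1 --retry_cnt 5 --retry_interval 10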
Another CM in the same Cluster can be controlled by using the cmrctl command with the following attribute, which requires the target CM node name and its Cluster name.
--remote <node_name>@<cluster_name>
This attribute cannot be used with the cmrctl add and cmrctl del commands as shown in the following example.
# Display resources of cm2 node in cls1 Cluster
$ cmrctl show --remote cm2@cls1
# Shut down tibero1 db of cm1 node in cls1 Cluster
$ cmrctl stop db --name tibero1 --remote cm1@cls1
# Remotely executing resource add triggers an error
$ cmrctl add db --name tac1 ... --remote cm1@cls1
[ERROR] Cannot add (or delete) resource remotely
An error is displayed if the node or the Cluster is down, or if an incorrect node or Cluster name is specified.
crfconf is a utility that updates the CM resource file while CM is offline, using the TIP file to determine the file's location. The CM resource file is a binary file and cannot be modified directly; use this command to modify the resource information in the file before starting up CM.
The crfconf command usage is the same as cmrctl, except that the available actions are limited to add, del, and show.
crfconf <action> <resource_type> [--<attr_key> <attr_val>|...]
If crfconf is executed while CM is online, the following error occurs.
$ crfconf show
[ERROR] CM is online. use 'cmrctl' command
crfconf failed!
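For example, while CM is offline, a resource can be added to or listed from the CM resource file directly; the attribute values below are illustrative.

crfconf add network --name net1 --ipaddr 192.168.1.1 --portno 29000
crfconf show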
When a Cluster contains global resources such as VIP, all Cluster members may need root privileges. A Cluster resource enters the ROOT mode when all of its CMs are started with ROOT privileges. A Cluster in ROOT mode prevents any CM without ROOT privileges from joining it.
The following are examples of a Cluster entering the ROOT mode.
When the first CM is started with ROOT privileges in the Cluster
This puts the Cluster in the ROOT mode, and any subsequent CM must be started with ROOT privileges to join the Cluster.
When all nodes that were started without ROOT privileges go down in the Cluster
Consider a 5-node Cluster. Nodes 1 and 2 are started without ROOT privileges, and nodes 3, 4, and 5 are started with ROOT privileges. The Cluster is not in the ROOT mode at this time. If nodes 1 and 2 go down, leaving only nodes with ROOT privileges, the Cluster enters the ROOT mode. If nodes 1 and 2 then want to rejoin the Cluster, they must be started with ROOT privileges.
In order to disable the ROOT mode of the Cluster, shut down all nodes in the Cluster and then start the first CM without ROOT privileges. However, a CM with ROOT privileges cannot use global resources including VIP in a Cluster that is not in the ROOT mode.
Use the following command to check which CMs have ROOT privileges and whether the Cluster is in the ROOT mode.
cmrctl show cluster --name 'cluster name'
The following is an example of a 2-node Cluster with both nodes having ROOT privileges. The '(ROOT)' next to the Status value indicates that the Cluster is in the ROOT mode. The Mst column value of 'R' in the NODE LIST indicates that the CM has ROOT privileges.
cmrctl show cluster --name cls1
Cluster Resource Info
===============================================================
Cluster name : cls1
Status : UP (ROOT)
Master node : (1) cm0
Last NID : 2
Local node : (2) cm1
Storage type : Active Storage
As Diskstring : /data/*
No. of cls files : 3
(1) +0
(2) +1
(3) +2
===============================================================
| NODE LIST |
|-------------------------------------------------------------|
| NID Name IP/PORT Status Schd Mst FHB NHB |
| --- -------- -------------------- ------ ---- --- ---- ---- |
| 1 cm0 123.1.1.1/29000 UP Y R M 30 35 |
| 2 cm1 124.1.1.1/29100 UP Y R [ LOCAL ] |
===============================================================
The following is an example of a 2-node Cluster with both nodes without ROOT privileges. In contrast to the previous example, the Status does not show '(ROOT)', and there are no nodes with Mst value of 'R' in the NODE LIST.
cmrctl show cluster --name cls1
Cluster Resource Info
===============================================================
Cluster name : cls1
Status : UP
Master node : (1) cm0
Last NID : 2
Local node : (2) cm1
Storage type : Active Storage
As Diskstring : /data/*
No. of cls files : 3
(1) +0
(2) +1
(3) +2
===============================================================
| NODE LIST |
|-------------------------------------------------------------|
| NID Name IP/PORT Status Schd Mst FHB NHB |
| --- -------- -------------------- ------ ---- --- ---- ---- |
| 1 cm0 123.1.1.1/29000 UP Y M 30 35 |
| 2 cm1 124.1.1.1/29100 UP Y [ LOCAL ] |
===============================================================
The following is an example of a 2-node Cluster with one node with and one node without ROOT privileges. The Mst value shows 'R' only for node 2, and Cluster Resource Info Status does not show '(ROOT)' as the Cluster is not in the ROOT mode.
cmrctl show cluster --name cls1
Cluster Resource Info
===============================================================
Cluster name : cls1
Status : UP
Master node : (1) cm0
Last NID : 2
Local node : (2) cm1
Storage type : Active Storage
As Diskstring : /data/*
No. of cls files : 3
(1) +0
(2) +1
(3) +2
===============================================================
| NODE LIST |
|-------------------------------------------------------------|
| NID Name IP/PORT Status Schd Mst FHB NHB |
| --- -------- -------------------- ------ ---- --- ---- ---- |
| 1 cm0 123.1.1.1/29000 UP Y M 30 35 |
| 2 cm1 124.1.1.1/29100 UP Y R [ LOCAL ] |
===============================================================
In the following example, node 1, which does not have ROOT privileges, goes down and the Cluster enters the ROOT mode. An error then occurs when a node without ROOT privileges tries to rejoin the Cluster.
Node 1
cmrctl stop cluster --name cls1
cmrctl start cluster --name cls1
Failed to start the resource 'cls1'
[ERROR] To join this cluster(cls1), you must be root
Node 2
cmrctl show cluster --name cls1
Cluster Resource Info
===============================================================
Cluster name : cls1
Status : UP (ROOT)
Master node : (1) cm1
Last NID : 2
Local node : (2) cm1
Storage type : Active Storage
As Diskstring : /data/*
No. of cls files : 3
(1) +0
(2) +1
(3) +2
===============================================================
| NODE LIST |
|-------------------------------------------------------------|
| NID Name IP/PORT Status Schd Mst FHB NHB |
| --- -------- -------------------- ------ ---- --- ---- ---- |
| 1 cm0 123.1.1.1/29000 DOWN N 0 0 |
| 2 cm1 124.1.1.1/29100 UP Y R M [ LOCAL ] |
===============================================================
This section describes how to configure TAC in a Linux environment with an example.
On node 1, TB_SID and CM_SID are ac1 and cm1, respectively. On node 2, TB_SID and CM_SID are ac2 and cm2, respectively. First, configure the required items in the CM TIP file; the three required initialization parameters described earlier (CM_NAME, CM_UI_PORT, and CM_RESOURCE_FILE) must be specified.
The CM TIP file of node 1 is saved as cm1.tip under $TB_HOME/config folder, and that of node 2 as cm2.tip under $TB_HOME/config as follows:
<cm1.tip>
CM_NAME=cm1
CM_UI_PORT=8635
CM_RESOURCE_FILE=/home/tibero6/cm1_res.crf
<cm2.tip>
CM_NAME=cm2
CM_UI_PORT=8655
CM_RESOURCE_FILE=/home/tibero6/cm2_res.crf
Next configure the TAC TIP file.
The TAC TIP file of node 1 is saved as ac1.tip under the $TB_HOME/config folder, and that of node 2 as ac2.tip under $TB_HOME/config, as follows (for information about each parameter, refer to “Chapter 10. Tibero Active Cluster”; TB_HOME in this example is /home/tibero6):
<ac1.tip>
DB_NAME=ac #DB_NAME is the same for both ac1 and ac2.
LISTENER_PORT=21000
CONTROL_FILES="/home/tibero6/database/ac/c1.ctl"
MAX_SESSION_COUNT=20
TOTAL_SHM_SIZE=512M
MEMORY_TARGET=1G
THREAD=0
UNDO_TABLESPACE=UNDO0
CLUSTER_DATABASE=Y
LOCAL_CLUSTER_ADDR=123.1.1.1
LOCAL_CLUSTER_PORT=21100
CM_PORT=8635 #CM_UI_PORT of cm1
<ac2.tip>
DB_NAME=ac #DB_NAME is the same for both ac1 and ac2.
LISTENER_PORT=21010
CONTROL_FILES="/home/tibero6/database/ac/c1.ctl"
MAX_SESSION_COUNT=20
TOTAL_SHM_SIZE=512M
MEMORY_TARGET=1G
THREAD=1
UNDO_TABLESPACE=UNDO1
CLUSTER_DATABASE=Y
LOCAL_CLUSTER_ADDR=124.1.1.1
LOCAL_CLUSTER_PORT=21110
CM_PORT=8655 #CM_UI_PORT of cm2
For node 1, CM_SID must be set to the name of the TIP file created above (cm1).
Execute the following commands to configure CM_SID. Do the same for TB_SID, which will be used later to create the database.
export CM_SID=cm1
export TB_SID=ac1
Now start the CM on node 1.
tbcm -b
The following message is displayed when CM starts up successfully.
CM Guard daemon started up.
import resources from '/home/tibero6/cm1_res.crf'...
Tibero 6
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero cluster manager started up.
Local node name is (cm1:8635).
The resource binary file, cm1_res.crf, is created at the path specified by CM_RESOURCE_FILE. Resource information is saved in this file.
Execute the following command to check the CM state.
cmrctl show
The following displays a normal CM state before adding resources.
Resource List of Node cm1
====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- ------------- --------- -----------------------
====================================================================
Execute the following command to add a network resource.
cmrctl add network --name net1 --ipaddr 123.1.1.1 --portno 29000
After successfully adding the resource, the following message is displayed.
Resource add success! (network, net1)
Execute 'cmrctl show' to check the resource state.
Resource List of Node cm1
====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- ------------- --------- -----------------------
COMMON network net1 UP (private) 123.1.1.1/29000
====================================================================
Execute the following command to add a Cluster. The folder for the cfile must be already created on the shared disk.
cmrctl add cluster --name cls1 --incnet net1 --cfile /'shared disk path'/cls1_cfile
After successfully adding the Cluster resource, the following message is displayed.
Resource add success! (cluster, cls1)
Execute 'cmrctl show' to check the resource state.
Resource List of Node cm1
====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- ------------- --------- -----------------------
COMMON network net1 UP (private) 123.1.1.1/29000
COMMON cluster cls1 DOWN inc: net1, pub: N/A
====================================================================
Activate cls1 Cluster to add a TAC service resource to the Cluster.
cmrctl start cluster --name cls1
After a successful Cluster resource startup, the following message is displayed. If this fails, check that the directory for the cfile exists.
SUCCESS!
Execute 'cmrctl show' to check the resource state.
Resource List of Node cm1
====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- ------------- --------- -----------------------
COMMON network net1 UP (private) 123.1.1.1/29000
COMMON cluster cls1 UP inc: net1, pub: N/A
cls1 file cls1:0 UP /'shared disk path'/cls1_cfile
====================================================================
Execute the following command to add a service resource. Use the name of the database that will be created for the service name.
cmrctl add service --name ac --cname cls1
After successfully adding the service resource, the following message is displayed.
Resource add success! (service, ac)
Execute 'cmrctl show' to check the resource state.
Resource List of Node cm1
====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- ------------- --------- -----------------------
COMMON network net1 UP (private) 123.1.1.1/29000
COMMON cluster cls1 UP inc: net1, pub: N/A
cls1 file cls1:0 UP /'shared disk path'/cls1_cfile
cls1 service ac DOWN Database, Active Cluster (auto-restart: OFF)
====================================================================
Lastly, add the DB resource.
The --name option must be set to TB_SID (ac1) of the TAC instance. The envfile that contains the environment variables for ac1 is saved as envfile_ac1 under /home/tibero6/.
cmrctl add db --name ac1 --svcname ac --dbhome /home/tibero6 --envfile /home/tibero6/envfile_ac1
After successfully adding the DB resource, the following message is displayed.
Resource add success! (db, ac1)
Execute 'cmrctl show' to check the resource state.
Resource List of Node cm1
====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- ------------- --------- -----------------------
COMMON network net1 UP (private) 123.1.1.1/29000
COMMON cluster cls1 UP inc: net1, pub: N/A
cls1 file cls1:0 UP /'shared disk path'/cls1_cfile
cls1 service ac DOWN Database, Active Cluster (auto-restart: OFF)
cls1 db ac1 DOWN ac, /home/tibero6
====================================================================
Create the database by following steps 2-6 in “10.5. Creating a Database for TAC”, considering the following.
Add the following in the tbdsn.tbr file for tbsql connection.
ac0=(
    (INSTANCE=(HOST=123.1.1.1)
        (PORT=21000)
        (DB_NAME=ac)
    )
)
After successfully connecting with tbsql, execute CREATE DATABASE "ac", using the previously configured service resource name (refer to “10.5. Creating a Database for TAC”). Once the database is created, execute 'cmrctl show' to check that the STATUS of all resources has changed to 'UP' as in the following.
Resource List of Node cm1
====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- ------------- --------- -----------------------
COMMON network net1 UP (private) 123.1.1.1/29000
COMMON cluster cls1 UP inc: net1, pub: N/A
cls1 file cls1:0 UP /'shared disk path'/cls1_cfile
cls1 service ac UP Database, Active Cluster (auto-restart: OFF)
cls1 db ac1 UP ac, /home/tibero6
====================================================================
This completes the configuration of node 1. Next, configure the tbdsn.tbr file of node 2 (a sketch of a possible entry is shown below), and then execute the commands that follow it in order. When adding the Cluster, the cfile path must be the same as that of node 1.
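The following tbdsn.tbr entry for node 2 is only a sketch: the entry name ac2 is an assumption chosen to match node 2's TB_SID, and the host and port values are taken from LOCAL_CLUSTER_ADDR and LISTENER_PORT in ac2.tip.

ac2=(
    (INSTANCE=(HOST=124.1.1.1)
        (PORT=21010)
        (DB_NAME=ac)
    )
)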
export CM_SID=cm2
export TB_SID=ac2
tbcm -b
cmrctl add network --name net1 --ipaddr 124.1.1.1 --portno 29100
cmrctl add cluster --name cls1 --incnet net1 --cfile /'shared disk path'/cls1_cfile
cmrctl start cluster --name cls1
Execute 'cmrctl show', and notice that the service has already been added since the settings in cfile are from cm1.
Resource List of Node cm2
====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- ------------- --------- -----------------------
COMMON network net1 UP (private) 124.1.1.1/29100
COMMON cluster cls1 UP inc: net1, pub: N/A
cls1 file cls1:0 UP /'shared disk path'/cls1_cfile
cls1 service ac DOWN Database, Active Cluster (auto-restart: OFF)
====================================================================
Save the envfile for ac2, and then execute the following commands to complete the configuration of node 2 in TAC.
cmrctl add db --name ac2 --svcname ac --dbhome /home/tibero6 --envfile /home/tibero6/envfile_ac2
cmrctl start service --name ac
This section describes how to configure TAS.
The following is an example of configuring TAS-TAC (Tibero Active Storage - Tibero Active Cluster) in a Linux environment.
On node 1, TB_SID for TAS is as1, TB_SID for TAC is ac1, and CM_SID for CM is cm1. The example TIP file is saved as $TB_HOME/config/as1.tip on node 1.
On node 2, TB_SID for TAS is as2, TB_SID for TAC is ac2, and CM_SID for CM is cm2. The example TIP file is saved as $TB_HOME/config/as2.tip on node 2. TAS DB_NAME is not specified, and TAC DB_NAME is ac.
TAS is configured and then TAS-TAC is configured as in “9.5. TAC Configuration”.
Execute the following steps.
Create cm1.tip and cm2.tip by referring to “9.5. TAC Configuration”.
Configure various settings, such as memory size, in the TIP file as needed. In this example, two nodes are configured on the same machine by using two 5 GB files, /data/disk01 and /data/disk02, instead of using raw devices.
For information about disk configuration, refer to A.1.2. Disk Preparation in Tibero Active Storage Administrator's Guide.
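As one possible way to prepare these files on Linux (a sketch, not taken from the guide referenced above), the two 5 GB files could be created with dd:

dd if=/dev/zero of=/data/disk01 bs=1M count=5120    #creates a 5 GB file
dd if=/dev/zero of=/data/disk02 bs=1M count=5120    #creates a 5 GB file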
<as1.tip>
LISTENER_PORT=30011
MEMORY_TARGET=2G
MAX_SESSION_COUNT=50
TOTAL_SHM_SIZE=1G
CLUSTER_DATABASE=Y    #Required
THREAD=0
CM_PORT=8635          #CM_UI_PORT of cm1
LOCAL_CLUSTER_ADDR=123.1.1.1
LOCAL_CLUSTER_PORT=30111
INSTANCE_TYPE=AS      #Required
AS_ALLOW_ONLY_RAW_DISKS=N    #Required if not using a raw device.
AS_THR_CNT=10
AS_DISKSTRING="/data/*"      #Must specify /data/* to use /data/disk01 and /data/disk02.
<as2.tip>
LISTENER_PORT=40011
MEMORY_TARGET=2G
MAX_SESSION_COUNT=50
TOTAL_SHM_SIZE=1G
CLUSTER_DATABASE=Y    #Required
THREAD=1
CM_PORT=8655          #CM_UI_PORT of cm2
LOCAL_CLUSTER_ADDR=123.1.1.1
LOCAL_CLUSTER_PORT=40111
INSTANCE_TYPE=AS      #Required
AS_ALLOW_ONLY_RAW_DISKS=N    #Required if not using a raw device.
AS_THR_CNT=10
AS_DISKSTRING="/data/*"      #Must specify /data/* to use /data/disk01 and /data/disk02.
After configuring the TIP file, configure each node in order.
Execute 'export CM_SID=cm1' to configure the environment variable. Then follow the TAC configuration steps, from starting up CM and adding the network resource through executing 'cmrctl show', until the following resource state is displayed.
Resource List of Node cm1
====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- ------------- --------- -----------------------
COMMON network net1 UP (private) 123.1.1.1/29000
====================================================================
Add the Cluster by modifying the cfile option as follows:
cmrctl add cluster --name cls1 --incnet net1 --cfile "+/data/*"
Set TB_SID to as1 (export TB_SID=as1), and then execute the following command to start the TAS instance. (Unlike the TAC configuration in “9.5. TAC Configuration”, do not start the Cluster yet.)
tbboot nomount
Use tbsql to connect to the TAS instance and create the disk space with the following SQL. As with Tibero or TAC, as1 must be registered in tbdsn.tbr to connect to the TAS instance via tbsql.
CREATE DISKSPACE ds0 NORMAL REDUNDANCY
    FAILGROUP fg1 DISK '/data/disk01' NAME disk101
    FAILGROUP fg2 DISK '/data/disk02' NAME disk201
    ATTRIBUTE 'AU_SIZE'='4M';
Exit tbsql, check that the TAS instance has terminated, and then execute the following command to start up the Cluster.
cmrctl start cluster --name cls1
Execute 'cmrctl show' to check the resource state.
Resource List of Node cm1
====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- ------------- --------- -----------------------
COMMON network net1 UP (private) 123.1.1.1/29000
COMMON cluster cls1 UP inc: net1, pub: N/A
cls1 file cls1:0 UP +0
cls1 file cls1:1 UP +1
cls1 file cls1:2 UP +2
====================================================================
Execute the following command to add a service resource for AS.
cmrctl add service --name as --cname cls1 --type as
Add the AS resource. Save the envfile as envfile_for_as1 under the /home/tibero6/ directory, and use as1 from the TIP file as the argument for --name.
cmrctl add as --name as1 --svcname as --dbhome /home/tibero6 --envfile /home/tibero6/envfile_for_as1
Execute the following command to start up TAS instance in normal mode.
cmrctl start service --name as
or
cmrctl start as --name as1
After the TAS instance has started, execute the following tbsql command to create the thread for the TAS instance on node 2.
tbsql sys/tibero@as1
sql> alter diskspace ds0 add thread 1;
Now TAS instance is ready to be started up on node 2.
On node 2, execute 'export CM_SID=cm2' and 'export TB_SID=as2' to set the environment variables. Execute 'tbcm -b' to start up the CM, and then execute the following commands. Since the two nodes are configured on the same machine in this example, use the same network ipaddr as cm1 with a different portno.
cmrctl add network --name net1 --ipaddr 123.1.1.1 --portno 29100
cmrctl add cluster --name cls1 --incnet net1 --cfile "+/data/*"
cmrctl start cluster --name cls1
Execute 'cmrctl show' to check that the service has been added.
Resource List of Node cm2
====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- ------------- --------- -----------------------
COMMON network net1 UP (private) 123.1.1.1/29100
COMMON cluster cls1 UP inc: net1, pub: N/A
cls1 file cls1:0 UP +0
cls1 file cls1:1 UP +1
cls1 file cls1:2 UP +2
cls1 service as DOWN Active Storage, Active Cluster (auto-restart: OFF)
====================================================================
Add the AS resource, and then start up TAS instance on node 2.
cmrctl add as --name as2 --svcname as --dbhome /home/tibero6 --envfile /home/tibero6/envfile_for_as2
cmrctl start as --name as2
Execute cmrctl on each node to check the results.
'cmrctl show' Output on Node 1
Resource List of Node cm1
====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- ------------- --------- -----------------------
COMMON network net1 UP (private) 123.1.1.1/29000
COMMON cluster cls1 UP inc: net1, pub: N/A
cls1 file cls1:0 UP +0
cls1 file cls1:1 UP +1
cls1 file cls1:2 UP +2
cls1 service as UP Active Storage, Active Cluster (auto-restart: OFF)
cls1 as as1 UP as, /home/tibero6
====================================================================
'cmrctl show' Output on Node 2
Resource List of Node cm2
====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- ------------- --------- -----------------------
COMMON network net1 UP (private) 123.1.1.1/29100
COMMON cluster cls1 UP inc: net1, pub: N/A
cls1 file cls1:0 UP +0
cls1 file cls1:1 UP +1
cls1 file cls1:2 UP +2
cls1 service as UP Active Storage, Active Cluster (auto-restart: OFF)
cls1 as as2 UP as, /home/tibero6
====================================================================
TAS configuration is complete on the two nodes. To configure TAC on each node, first create ac1.tip and ac2.tip under $TB_HOME/config folder on each node, respectively, as follows:
<ac1.tip>
DB_NAME=ac
LISTENER_PORT=21000
CONTROL_FILES="+DS0/c1.ctl","+DS0/c2.ctl"
DB_CREATE_FILE_DEST="+DS0"
LOG_ARCHIVE_DEST="/home/ac/data/archive1"
MEMORY_TARGET=1G
MAX_SESSION_COUNT=50
TOTAL_SHM_SIZE=512M
USE_ACTIVE_STORAGE=Y
AS_PORT=30011
CLUSTER_DATABASE=Y
THREAD=0
UNDO_TABLESPACE=UNDO0
LOCAL_CLUSTER_ADDR=123.1.1.1
CM_PORT=8635
LOCAL_CLUSTER_PORT=20015
<ac2.tip>
DB_NAME=ac
LISTENER_PORT=21100
CONTROL_FILES="+DS0/c1.ctl","+DS0/c2.ctl"
DB_CREATE_FILE_DEST="+DS0"
LOG_ARCHIVE_DEST="/home/ac/data/archive2"
MEMORY_TARGET=1G
MAX_SESSION_COUNT=50
TOTAL_SHM_SIZE=512M
USE_ACTIVE_STORAGE=Y
AS_PORT=40011
CLUSTER_DATABASE=Y
THREAD=1
UNDO_TABLESPACE=UNDO1
LOCAL_CLUSTER_ADDR=123.1.1.1
CM_PORT=8655
LOCAL_CLUSTER_PORT=20015
Create the envfile_ac1 and envfile_ac2 files under the TB_HOME directory on node 1 and node 2, respectively, and then execute the following commands on each node. For information about envfile, refer to “9.3.1.1. cmrctl add” and “9.5. TAC Configuration”.
Node 1
cmrctl add service --name ac --cname cls1
cmrctl add db --name ac1 --svcname ac --dbhome /home/tibero6 --envfile /home/tibero6/envfile_ac1
Node 2
cmrctl add db --name ac2 --svcname ac --dbhome /home/tibero6 --envfile /home/tibero6/envfile_ac2
Execute steps 2-6 in “10.5. Creating a Database for TAC” on node 1.
To start up the DB instance in nomount mode, execute
cmrctl start db --name ac1 --option "-t NOMOUNT"
or
tbboot nomount
Complete the TAS-TAC configuration by executing the following command to start up the database on each node. Note that when creating the database, the logfile path must be set to a TAS path such as '+DS0/log001' ('+' indicates a TAS path, and 'DS0' is the name of the disk space created earlier).
cmrctl start service --name ac
This section describes how to configure HA in a Linux environment.
The HA configuration method is similar to that of TAC, except for the TIP file configuration and the use of the --mode ha option when creating the service resource. On node 1, TB_SID and CM_SID are ha1 and cm1, respectively. On node 2, TB_SID and CM_SID are ha2 and cm2, respectively.
Configure cm1.tip and cm2.tip files as in “9.5. TAC Configuration”, and configure ha1.tip and ha2.tip files as follows (simple 2-node configuration on the same machine).
<ha1.tip>
DB_NAME=ha    #DB_NAME is the same for both ha1 and ha2.
LISTENER_PORT=25001
CONTROL_FILES="/home/tibero6/database/ha/c1.ctl"
DB_CREATE_FILE_DEST="/home/tibero6/database/ha"
LOG_ARCHIVE_DEST="/home/tibero6/database/ha/archive1"
MAX_SESSION_COUNT=20
TOTAL_SHM_SIZE=1G
MEMORY_TARGET=2G
CLUSTER_DATABASE=Y
THREAD=0
UNDO_TABLESPACE=UNDO0
LOCAL_CLUSTER_ADDR=123.1.1.1
LOCAL_CLUSTER_PORT=21100
CM_PORT=8635
<ha2.tip>
DB_NAME=ha    #DB_NAME is the same for both ha1 and ha2.
LISTENER_PORT=35001
CONTROL_FILES="/home/tibero6/database/ha/c1.ctl"
DB_CREATE_FILE_DEST="/home/tibero6/database/ha"
LOG_ARCHIVE_DEST="/home/tibero6/database/ha/archive1"
MAX_SESSION_COUNT=20
TOTAL_SHM_SIZE=1G
MEMORY_TARGET=2G
CLUSTER_DATABASE=Y
THREAD=0      #In contrast to TAC, ha1 and ha2 use the same THREAD and UNDO_TABLESPACE values.
UNDO_TABLESPACE=UNDO0
LOCAL_CLUSTER_ADDR=123.1.1.1
LOCAL_CLUSTER_PORT=31100
CM_PORT=8655
Create envfiles for ha1 and ha2, and then execute the following commands to add network, Cluster, service, and HA resources.
Node 1
export CM_SID=cm1
export TB_SID=ha1
tbcm -b
cmrctl add network --name net1 --ipaddr 123.1.1.1 --portno 29000
cmrctl add cluster --name cls1 --incnet net1 --cfile /home/tibero6/cfile/cls1_cfile
cmrctl start cluster --name cls1
cmrctl add service --name ha --cname cls1 --mode ha
cmrctl add db --name ha1 --svcname ha --dbhome /home/tibero6 --envfile /home/tibero6/envfile_ha1
Node 2
export CM_SID=cm2
export TB_SID=ha2
tbcm -b
cmrctl add network --name net1 --ipaddr 123.1.1.1 --portno 29100
cmrctl add cluster --name cls1 --incnet net1 --cfile /home/tibero6/cfile/cls1_cfile
cmrctl start cluster --name cls1
cmrctl add db --name ha2 --svcname ha --dbhome /home/tibero6 --envfile /home/tibero6/envfile_ha2
Now create the database.
Start up Tibero in NOMOUNT mode, create the database as for a single Tibero instance, and then execute the system.sh file. Execute the following command on node 1 so that the DB instance on node 2 can start up automatically if node 1 fails. Then execute 'cmrctl show service --name ha' to check that node 1 is in Active mode and node 2 is in Standby mode.
cmrctl act service --name ha
cmrctl show service --name ha
Service Resource Info
======================================
Service name : ha
Service type : Database
Service mode : HA
Cluster : cls1
Inst. Auto Start: ON
========================
| INSTANCE LIST |
|----------------------|
| NID Status HA MODE |
| --- -------- ------- |
| 1 UP Active |
| 2 DOWN Standby |
========================
If failover to node 2 occurs, node 1 may become deactivated. Activate the deactivated resource to place node 1 in Standby mode, so that failover is possible again if node 2 fails.
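For example, once node 1 has been repaired, its deactivated DB resource could be reactivated with the act command described earlier; the resource name follows this example's configuration.

cmrctl act db --name ha1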