This chapter describes how to configure Tibero to use TAS.
The example is based on a Linux x86_64 system; all commands executed on TAS and Tibero, except SQL statements, must be adapted to the OS of the target system.
The example in this chapter represents best practices, not minimum system requirements. For system requirements, refer to the Tibero Installation Guide. TAS supports only Linux and AIX.
This section describes the items that need to be prepared before configuring the database.
The following table describes the installation environment for the example.
Item | Value |
---|---|
Number of Nodes | 2 |
Internal Node IP | 100.100.100.11, 100.100.100.12 |
TAS PORT | 9620 |
TAS CM PORT | 20005 |
TAS LOCAL CLUSTER PORT | 20000 |
Number of Shared Disks | 4 |
Shared Disk Size | 512GB each |
Shared Disk Path | /dev/sdc, /dev/sdd, /dev/sde, /dev/sdf |
Installation Account | dba |
Tibero Installation Path | /home/dba/tibero |
It is assumed that the required binaries already exist in the installation path.
It is assumed that all nodes see the shared disks under the same path.
The following is the process of preparing shared disks. The process must be performed on all nodes.
Disks can be used as shared disks simply by changing their permissions or ownership. However, device names may change after the OS reboots. To prevent this, it is recommended to configure device nodes using udev.
The following is an example of device nodes configured using udev.
```
$ ls -al /dev/disk*
lrwxrwxrwx. 1 root root 3 Aug 13 19:50 /dev/disk0 -> sdc
lrwxrwxrwx. 1 root root 3 Aug 13 19:50 /dev/disk1 -> sdd
lrwxrwxrwx. 1 root root 3 Aug 13 19:50 /dev/disk2 -> sde
lrwxrwxrwx. 1 root root 3 Aug 13 19:50 /dev/disk3 -> sdf

$ ls -l /dev/sd*
brw-rw----. 1 root disk 8,  0 Jul 13 11:20 /dev/sda
brw-rw----. 1 root disk 8, 16 Jul 13 11:20 /dev/sdb
brw-------. 1 dba  disk 8, 32 Jul 13 11:20 /dev/sdc
brw-------. 1 dba  disk 8, 48 Jul 13 11:20 /dev/sdd
brw-------. 1 dba  disk 8, 64 Jul 13 11:20 /dev/sde
brw-------. 1 dba  disk 8, 80 Jul 13 11:20 /dev/sdf
```
The links displayed as the result of the first ls command in the above example are symbolic links created according to the udev rules.
The following is an example of the udev rules file used to configure device nodes.
```
$ cat /etc/udev/rules.d/as-disk.rules
KERNEL=="sd?", SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id -g -u -d %N", RESULT=="35000c50087de8480", SYMLINK+="disk0", OWNER="dba", MODE="0600"
KERNEL=="sd?", SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id -g -u -d %N", RESULT=="35000c50087de8481", SYMLINK+="disk1", OWNER="dba", MODE="0600"
KERNEL=="sd?", SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id -g -u -d %N", RESULT=="35000c50087de8482", SYMLINK+="disk2", OWNER="dba", MODE="0600"
KERNEL=="sd?", SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id -g -u -d %N", RESULT=="35000c50087de8483", SYMLINK+="disk3", OWNER="dba", MODE="0600"
```
The udev rules file must have the .rules extension and reside under the /etc/udev/rules.d directory (the directory path varies by OS; e.g., Ubuntu 14.04 uses a different path).
The example rules above find, among block devices (SUBSYSTEM=="block") whose kernel name matches sd? (KERNEL=="sd?"), the device with the specified SCSI ID (RESULT=="SCSI_ID"), then set the device's owner and access mode and create the given symbolic link for it.
To check a device's SCSI ID, execute /lib/udev/scsi_id with administrator privileges (the path varies by OS; e.g., Ubuntu 14.04 uses a different path). The following is an example.
```
$ /lib/udev/scsi_id -g -u -d /dev/sdc
35000c50087de8480
```
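After creating or editing the rules file, the rules can be reloaded without rebooting. A minimal sketch using udevadm (standard on modern Linux distributions; the exact invocation may vary by OS):

```
# Reload the udev rules and re-apply them to block devices (run as root).
$ sudo udevadm control --reload-rules
$ sudo udevadm trigger --subsystem-match=block
```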
To configure Tibero with TAS, set an additional kernel parameter beyond the kernel parameters described in the "Tibero Installation Guide." On Linux, the kernel parameter is set in the following configuration file.
```
/etc/sysctl.conf
```
Set the kernel parameter as follows.
```
fs.aio-max-nr = 4194304
```
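The new value can be applied without a reboot and verified with the standard sysctl utility:

```
# Apply settings from /etc/sysctl.conf, then read back the value.
$ sudo sysctl -p
fs.aio-max-nr = 4194304
$ sysctl fs.aio-max-nr
fs.aio-max-nr = 4194304
```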
This section describes how to configure a TAS instance with two nodes.
Configure the environment variables in the user configuration files (.bashrc, etc.) of the OS.
Set the TB_HOME variable to the path where the TAS instance binary is installed on each node. Set the TB_SID variable to a distinct value for each node so that they can be identified uniquely.
The following is an example of configuring the first node. Set TB_SID of the TAS instance and CM_SID of the Cluster Manager on the first node to 'as0' and 'cm0' respectively.
```
export TB_HOME=/home/dba/tibero
export TB_SID=as0
export CM_SID=cm0
export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib
export PATH=$PATH:$TB_HOME/bin:$TB_HOME/client/bin
```
Set TB_SID of the TAS instance and CM_SID of the Cluster Manager on the second node to 'as1' and 'cm1' respectively.
```
[100.100.100.12]$ export TB_SID=as1
[TB_SID=as1@100.100.100.12]$ export CM_SID=cm1
```
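A quick, illustrative way (not part of the original example) to confirm the variables on each node:

```
# List the Tibero-related environment variables on the current node.
[TB_SID=as1@100.100.100.12]$ env | grep -E 'TB_HOME|TB_SID|CM_SID'
TB_HOME=/home/dba/tibero
TB_SID=as1
CM_SID=cm1
```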
By default, the initialization parameters for TAS are the same as those for Tibero Active Cluster (TAC).
The following are the additional initialization parameters required for a TAS instance.
Parameter | Description |
---|---|
INSTANCE_TYPE | Instance type (set to AS to indicate Active Storage). |
AS_DISKSTRING | Path pattern used to search for disks. |
The following are the initialization parameters for node 1.
```
[100.100.100.11]$ cat /home/dba/tibero/config/as0.tip
INSTANCE_TYPE=AS
AS_DISKSTRING="/dev/disk*"
AS_ALLOW_ONLY_RAW_DISKS=N
LISTENER_PORT=9620
TOTAL_SHM_SIZE=3G
MEMORY_TARGET=4G
CLUSTER_DATABASE=Y
LOCAL_CLUSTER_ADDR=100.100.100.11
LOCAL_CLUSTER_PORT=20000
CM_CLUSTER_MODE=ACTIVE_SHARED
CM_PORT=20005
THREAD=0

[100.100.100.11]$ cat /home/dba/tibero/config/cm0.tip
CM_NAME=cm0
CM_UI_PORT=20005
CM_RESOURCE_FILE=/home/dba/tibero/config/cm0_res.crf
```
The following are the initialization parameters for node 2.
```
[100.100.100.12]$ cat /home/dba/tibero/config/as1.tip
INSTANCE_TYPE=AS
AS_DISKSTRING="/dev/disk*"
AS_ALLOW_ONLY_RAW_DISKS=N
LISTENER_PORT=9620
TOTAL_SHM_SIZE=3G
MEMORY_TARGET=4G
CLUSTER_DATABASE=Y
LOCAL_CLUSTER_ADDR=100.100.100.12
LOCAL_CLUSTER_PORT=20000
CM_CLUSTER_MODE=ACTIVE_SHARED
CM_PORT=20005
THREAD=1

[100.100.100.12]$ cat /home/dba/tibero/config/cm1.tip
CM_NAME=cm1
CM_UI_PORT=20005
CM_RESOURCE_FILE=/home/dba/tibero/config/cm1_res.crf
```
1. To use a general file as a device, AS_ALLOW_ONLY_RAW_DISKS must be set to N (see the sketch after this list).
2. A disk that does not match the path pattern set in AS_DISKSTRING cannot be included in the disk space.
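The following sketch shows how a general file could be prepared as a TAS disk in a test environment. The path and size are hypothetical, and AS_DISKSTRING would need to be set to a pattern that matches the file's path.

```
# Hypothetical test setup: create a 1GB plain file to use as a TAS disk.
# Only valid because AS_ALLOW_ONLY_RAW_DISKS=N; the path is illustrative.
$ dd if=/dev/zero of=/home/dba/tibero/tas_disk0 bs=1M count=1024
$ chmod 600 /home/dba/tibero/tas_disk0
```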
The following is an example of setting the connection string configuration file.
```
$ cat /home/dba/tibero/client/config/tbdsn.tbr
as0=(
    (INSTANCE=(HOST=100.100.100.11)
              (PORT=9620)
    )
)
as1=(
    (INSTANCE=(HOST=100.100.100.12)
              (PORT=9620)
    )
)
```
The following describes how to create and start a disk space.
Before creating a disk space, start the TAS instance on node 1 in NOMOUNT mode first.
```
[TB_SID=as0@100.100.100.11]$ tbboot nomount
```
Connect to the running instance, and create a disk space.
To use an external data mirroring function such as RAID for high availability, set the redundancy level to EXTERNAL REDUNDANCY so that internal data mirroring is not used.
```
[TB_SID=as0@100.100.100.11]$ tbsql sys/tibero@as0
SQL> CREATE DISKSPACE ds0 EXTERNAL REDUNDANCY
     DISK '/dev/disk0' NAME disk0 SIZE 512G,
          '/dev/disk1' NAME disk1 SIZE 512G,
          '/dev/disk2' NAME disk2 SIZE 512G,
          '/dev/disk3' NAME disk3 SIZE 512G;
```
Once the disk space is created, the TAS instance stops automatically; it can then be restarted in NORMAL mode.
Create an as resource configuration file, which the Cluster Manager on node 1 requires to execute the AS binary.
```
[CM_SID=cm0@100.100.100.11]$ cat $TB_HOME/as0.profile
export TB_SID=as0
```
To configure a cluster with TAS instances, start up the Cluster Manager and add resources. When adding the as resource, specify the file path to the created configuration file in the envfile attribute.
```
[CM_SID=cm0@100.100.100.11]$ tbcm -b
[CM_SID=cm0@100.100.100.11]$ cmrctl add network --name net0 --ipaddr 100.100.100.11 --portno 20010
[CM_SID=cm0@100.100.100.11]$ cmrctl add cluster --name cls0 --incnet net0 --cfile "+/dev/disk*"
[CM_SID=cm0@100.100.100.11]$ cmrctl start cluster --name cls0
[CM_SID=cm0@100.100.100.11]$ cmrctl add service --type as --name tas --cname cls0
[CM_SID=cm0@100.100.100.11]$ cmrctl add as --name as0 --svcname tas --envfile "$TB_HOME/as0.profile" --dbhome "$TB_HOME"
```
Start up the instance in NORMAL mode.
```
[TB_SID=as0@100.100.100.11]$ tbboot
# or
[CM_SID=cm0@100.100.100.11]$ cmrctl start as --name as0
```
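With the instance running, the new disk space can be inspected from a SQL session. A minimal check, assuming the V$AS_DISKSPACE view described in "Chapter 4. TAS Information Views" (view and column names may differ by version):

```
[TB_SID=as0@100.100.100.11]$ tbsql sys/tibero@as0
SQL> SELECT * FROM V$AS_DISKSPACE;
```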
Connect to the running instance and add a thread so that the TAS instance on node 2 can start up.
```
[TB_SID=as0@100.100.100.11]$ tbsql sys/tibero@as0
SQL> ALTER DISKSPACE ds0 ADD THREAD 1;
```
Create an as resource configuration file, which the Cluster Manager on node 2 requires to execute the AS binary.
```
[CM_SID=cm1@100.100.100.12]$ cat $TB_HOME/as1.profile
export TB_SID=as1
```
Start up the Cluster Manager and add resources on node 2.
```
[CM_SID=cm1@100.100.100.12]$ tbcm -b
[CM_SID=cm1@100.100.100.12]$ cmrctl add network --name net1 --ipaddr 100.100.100.12 --portno 20010
[CM_SID=cm1@100.100.100.12]$ cmrctl add cluster --name cls0 --incnet net1 --cfile "+/dev/disk*"
[CM_SID=cm1@100.100.100.12]$ cmrctl start cluster --name cls0
[CM_SID=cm1@100.100.100.12]$ cmrctl add as --name as1 --svcname tas --envfile "$TB_HOME/as1.profile" --dbhome "$TB_HOME"
```
Start up the TAS instance on node 2.
```
[TB_SID=as1@100.100.100.12]$ tbboot
# or
[CM_SID=cm1@100.100.100.12]$ cmrctl start as --name as1
```
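To confirm that both TAS instances have registered with the cluster, the resource status can be listed on either node. A minimal check using cmrctl (the output format varies by version):

```
[CM_SID=cm1@100.100.100.12]$ cmrctl show
```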
The steps to install and start up a Tibero instance are the same as those for a DB instance without a TAS instance, except for specifying the paths of the files to be created.
Configure the environment variables in the user configuration files (.bashrc, etc.) of the OS.
Set the TB_HOME variable to the path where the Tibero DB instance binary is installed on each node. Set the TB_SID variable to a distinct value for each node. The following example sets TB_SID of the DB instance on the first node to 'tac0'.
```
export TB_HOME=/home/dba/tibero
export TB_SID=tac0
export CM_SID=cm0
export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib
export PATH=$PATH:$TB_HOME/bin:$TB_HOME/client/bin
```
Set the TB_SID of the DB instance on node 2 to 'tac1'. If CM_SID is not set, set it to cm1.
```
[100.100.100.12]$ export TB_SID=tac1
[TB_SID=tac1@100.100.100.12]$ export CM_SID=cm1
```
The SID of each instance needs to be unique only within a server; the same SID may be reused on different servers.
The following parameters must be set to use the TAS instance.
Parameter | Description |
---|---|
USE_ACTIVE_STORAGE | Option to use TAS instance (set to Y). |
AS_PORT | LISTENER_PORT of the TAS instance. |
The following are the initialization parameters for node 1.
```
[100.100.100.11]$ cat /home/dba/tibero/config/tac0.tip
DB_NAME=tibero
LISTENER_PORT=8629
DB_BLOCK_SIZE=32K
MAX_SESSION_COUNT=300
TOTAL_SHM_SIZE=70G
DB_CACHE_SIZE=55G
MEMORY_TARGET=250G
CONTROL_FILES="+DS0/c1.ctl"
USE_ACTIVE_STORAGE=Y
AS_PORT=9620
CLUSTER_DATABASE=Y
LOCAL_CLUSTER_ADDR=100.100.100.11
LOCAL_CLUSTER_PORT=21000
CM_PORT=20005
THREAD=0
UNDO_TABLESPACE=UNDO00
```
The following are the initialization parameters for node 2.
```
[100.100.100.12]$ cat /home/dba/tibero/config/tac1.tip
DB_NAME=tibero
LISTENER_PORT=8629
DB_BLOCK_SIZE=32K
MAX_SESSION_COUNT=300
TOTAL_SHM_SIZE=70G
DB_CACHE_SIZE=55G
MEMORY_TARGET=250G
CONTROL_FILES="+DS0/c1.ctl"
USE_ACTIVE_STORAGE=Y
AS_PORT=9620
CLUSTER_DATABASE=Y
LOCAL_CLUSTER_ADDR=100.100.100.12
LOCAL_CLUSTER_PORT=21000
CM_PORT=20005
THREAD=1
UNDO_TABLESPACE=UNDO01
```
The following is an example of setting the connection string configuration file.
```
$ cat /home/dba/tibero/client/config/tbdsn.tbr
tac0=(
    (INSTANCE=(HOST=100.100.100.11)
              (PORT=8629)
    )
)
tac1=(
    (INSTANCE=(HOST=100.100.100.12)
              (PORT=8629)
    )
)
```
The following describes how to create and start a database.
Create a DB resource configuration file that is required for the Cluster Manager to execute the tibero binary.
```
[CM_SID=cm0@100.100.100.11]$ cat $TB_HOME/tac0.profile
export TB_SID=tac0
```

```
[CM_SID=cm1@100.100.100.12]$ cat $TB_HOME/tac1.profile
export TB_SID=tac1
```
On node 1, add the resources for Tibero clustering to the Cluster Manager that was started when the TAS instance was configured.
```
[CM_SID=cm0@100.100.100.11]$ cmrctl add service --type db --name tibero --cname cls0
[CM_SID=cm0@100.100.100.11]$ cmrctl add db --name tac0 --svcname tibero --envfile "$TB_HOME/tac0.profile" --dbhome "$TB_HOME"
```
Start up the instance in NOMOUNT mode.
```
[TB_SID=tac0@100.100.100.11]$ tbboot -t nomount
```
Connect to the instance and create the database.
```
[TB_SID=tac0@100.100.100.11]$ tbsql sys/tibero@tac0
SQL> CREATE DATABASE "tibero"
       USER sys IDENTIFIED BY tibero
       MAXINSTANCES 32
       MAXDATAFILES 2048
       CHARACTER SET MSWIN949
       LOGFILE GROUP 1 '+DS0/log001' SIZE 2G,
               GROUP 2 '+DS0/log002' SIZE 2G,
               GROUP 3 '+DS0/log003' SIZE 2G
       MAXLOGGROUPS 255
       MAXLOGMEMBERS 8
       NOARCHIVELOG
       DATAFILE '+DS0/system.tdf' SIZE 4G AUTOEXTEND ON NEXT 64M MAXSIZE UNLIMITED
       SYSSUB DATAFILE '+DS0/syssub.tdf' SIZE 4G AUTOEXTEND ON NEXT 64M MAXSIZE UNLIMITED
       DEFAULT TEMPORARY TABLESPACE temp
           TEMPFILE '+DS0/temp00.tdf' SIZE 32G AUTOEXTEND OFF
       UNDO TABLESPACE undo00
           DATAFILE '+DS0/undo00.tdf' SIZE 32G AUTOEXTEND OFF
       DEFAULT TABLESPACE usr
           DATAFILE '+DS0/usr.tdf' SIZE 4G AUTOEXTEND ON NEXT 64M MAXSIZE UNLIMITED;
```
Restart the instance in NORMAL mode, and then create the UNDO tablespace and REDO THREAD for node 2.
```
[TB_SID=tac0@100.100.100.11]$ tbboot
[TB_SID=tac0@100.100.100.11]$ tbsql sys/tibero@tac0
SQL> CREATE UNDO TABLESPACE undo01 DATAFILE '+DS0/undo01.tdf' SIZE 32G AUTOEXTEND OFF;
SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 4 '+DS0/log004' SIZE 2G;
SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 '+DS0/log005' SIZE 2G;
SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 6 '+DS0/log006' SIZE 2G;
SQL> ALTER DATABASE ENABLE PUBLIC THREAD 1;
```
Add a DB resource to the Cluster Manager on node 2.
```
[CM_SID=cm1@100.100.100.12]$ cmrctl add db --name tac1 --svcname tibero --envfile "$TB_HOME/tac1.profile" --dbhome "$TB_HOME"
```
Start up the instance on node 2.
```
[TB_SID=tac1@100.100.100.12]$ tbboot
```
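With both database nodes up, cluster-wide instance status can be checked from a SQL session on either node. A minimal sketch, assuming the GV$INSTANCE view is available as in typical TAC environments:

```
[TB_SID=tac1@100.100.100.12]$ tbsql sys/tibero@tac1
SQL> SELECT * FROM GV$INSTANCE;
```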
The following are the recommended requirements for TAS.
Initialization Parameter Settings
Set the memory for the Active Storage (TAS) instance to a lower value than that of the database instance. In the example above, the TAS instance uses TOTAL_SHM_SIZE=3G and MEMORY_TARGET=4G, while the DB instance uses 70G and 250G.
Disk Space REDUNDANCY Setting
When using an external data replication function such as RAID, REDUNDANCY can be set to EXTERNAL. Otherwise, set REDUNDANCY to NORMAL, or to HIGH if data availability is preferred over performance.
Disk Space Failure Group Setting
Since a switch or cable failure is likely to make all disks behind it inaccessible at once, it is recommended to place disks that reside in the same physical server or under the same switch in the same failure group. Because Active Storage keeps replicated copies of data in a different failure group, data can still be accessed when an entire failure group fails. It is recommended to use three or more failure groups, as in the sketch below.
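The following is an assumption-laden sketch only: the FAILGROUP clause is modeled on the CREATE DISKSPACE statement used earlier, but its exact syntax and the disk layout shown here are illustrative and should be confirmed against the Tibero SQL reference.

```
[TB_SID=as0@100.100.100.11]$ tbsql sys/tibero@as0
-- Hypothetical layout: one failure group per physical server.
-- With NORMAL redundancy, mirrored copies are kept in different groups.
SQL> CREATE DISKSPACE ds1 NORMAL REDUNDANCY
       FAILGROUP fg0 DISK '/dev/disk0' NAME disk0 SIZE 512G
       FAILGROUP fg1 DISK '/dev/disk1' NAME disk1 SIZE 512G
       FAILGROUP fg2 DISK '/dev/disk2' NAME disk2 SIZE 512G;
```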
Disk Space Capacity Setting
Set the disk space capacity based on the database capacity. The maximum disk space capacity is 8 EB. It is recommended to maintain enough free space so that data can be re-replicated when a disk fails.
Management Tips for DBA
Connect to the Active Storage instance as the SYS user and use SQL. The current disk space and disk states can be checked by querying the views described in “Chapter 4. TAS Information Views”. When a disk fails, its state in the views changes to FAIL, as the sketch below illustrates.
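A minimal check, assuming the V$AS_DISK view from that chapter (exact view and column names should be confirmed there):

```
[TB_SID=as0@100.100.100.11]$ tbsql sys/tibero@as0
SQL> SELECT * FROM V$AS_DISK;  -- a failed disk is shown with the FAIL state
```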
Disk Properties
It is recommended to use disks with the same properties (size, speed, etc.) within a disk space. This balances the use of each disk, which makes disk striping effective.