Deploying Templates with Oracle Restart on OVM

I decided to post this after investigating how to use these templates in our demo room setup. You would think it's straightforward, but the deploycluster tool used to deploy these templates is not yet compatible with OVM 3.3.1 and higher. That surprised me: OVM 3.3 has been out for more than three months now, and with Oracle putting so much emphasis on templates, it is strange that they do not work out of the box with the two most recent OVM versions (3.3.1 and 3.3.2).

Preparing the template

Let’s start with preparing everything for deployment

  • Download the template from My Oracle Support (patch 18888811)
  • Unzip the 3 downloaded files, resulting in 3 *.tar.gz files
  • Join files 2A and 2B together
    (root) # cat OVM_OL6U6_X86_64_12102DBRAC_PVM-2of2-partA.tar.gz  OVM_OL6U6_X86_64_12102DBRAC_PVM-2of2-partB.tar.gz > OVM_OL6U6_X86_64_12102DBRAC_PVM-2of2.tar.gz
  • Place file 1 and the newly joined file 2 on a webserver so they are ready for import into OVM (a combined sketch of these preparation steps follows this list).
  • Import the template
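
For reference, here is a minimal sketch of these preparation steps on a staging machine. The zip names, the name of file 1 and the webserver document root are assumptions for this lab; adjust them to your actual download and setup.

    # Assumption: the zips from patch 18888811 were downloaded to /stage
    cd /stage
    for f in p18888811*.zip; do unzip "$f"; done   # yields the three *.tar.gz pieces

    # Join the two halves of file 2 (same command as in the list above)
    cat OVM_OL6U6_X86_64_12102DBRAC_PVM-2of2-partA.tar.gz \
        OVM_OL6U6_X86_64_12102DBRAC_PVM-2of2-partB.tar.gz \
        > OVM_OL6U6_X86_64_12102DBRAC_PVM-2of2.tar.gz

    # Copy file 1 (name assumed) and the joined file 2 to a directory the
    # OVM Manager can reach over HTTP, e.g. an Apache document root
    cp OVM_OL6U6_X86_64_12102DBRAC_PVM-1of2.tar.gz \
       OVM_OL6U6_X86_64_12102DBRAC_PVM-2of2.tar.gz /var/www/html/templates/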

Creating the VM

Follow the readme to create a VM. I decided to use 2 ASM disks in 1 diskgroup instead of the minimum of 5, and I'm deploying Oracle Restart rather than RAC at this time.

  • Create the disks needed for ASM (a CLI sketch of this step follows this list)
  • Create the VM based on the template
  • Edit the VM and remove a network card; the second card is normally used for the interconnect in a RAC deployment.
  • Add the newly created disks to the VM
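
If you prefer the command line over the GUI for the disk creation, the Oracle VM Manager CLI (reachable over ssh on port 10000 of the manager) can create the shareable virtual disks as well. This is only a rough sketch: the repository name "MyRepo", the disk names and the sizes are assumptions for this lab, and the exact attribute names may vary between OVM releases, so check them against the CLI help before running. Attaching the disks to the VM can still be done from the GUI as described above.

    # Connect to the Oracle VM Manager CLI
    ssh -l admin -p 10000 10.100.23.6

    # Inside the CLI: create two shareable virtual disks for ASM (size in GiB)
    OVM> create VirtualDisk name=BNS_asm1 size=10 shareable=Yes sparse=No on Repository name=MyRepo
    OVM> create VirtualDisk name=BNS_asm2 size=10 shareable=Yes sparse=No on Repository name=MyRepo
    OVM> exit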

Deploying it with the deploycluster tool

  • Download the DeployCluster Tool
  • Place it on any Linux machine; I normally put it on the OVM Manager itself, simply because it is convenient.
  • Configure netconfig.ini, start the VM and run the tool
    (root) # ./deploycluster.py -u admin -p password -M BNS -N netconfig.ini
    INFO: Oracle VM Client  (3.2.9.746) protocol (1.9) CONNECTED (tcp) to
          Oracle VM Manager (3.3.1.1065) protocol (1.10) IP (10.100.23.6) UUID (0004fb00000100008948135041eef83e)
    
    ERROR: This deploycluster.py version (v2.1.0) does not support connecting to Oracle VM Manager higher than 3.2; found Oracle Manager version 3.3. See My Oracle Support Note #1185244.1 and OTN for deployment options on this version.

So there is clearly a compatibility problem. If we go to the note mentioned in the error message, we see:

The Deploycluster tool currently only supports Oracle VM version 3.2 and below
Manual or Message based deployment is possible on Oracle VM 3.3.1

But it offers no guidelines on manual deployment.

Deploying it with message-based or manual deployment

  • Download and install ovm_utils on the OVM Manager (patch 13602094)
  • Boot the VM and send the necessary messages (or open the console and do it manually). The keys are:
    • com.oracle.racovm.netconfig.arguments => "-n1"
    • com.oracle.racovm.netconfig.contents.0 => the content of netconfig.ini; this takes care of the initial network setup
    • com.oracle.racovm.params.contents.0 => the content of params.ini; we'll leave this empty for now
    • com.oracle.racovm.racowner-password => password for the oracle user
    • com.oracle.racovm.gridowner-password => password for the grid user
    • com.oracle.linux.root-password => password for the root user
    • com.oracle.racovm.netconfig.interview-on-console => NO (do not boot into the RAC interview screen)

    This results in:

    export CMD="/u01/app/oracle/ovm-manager-3/ovm_utils/ovm_vmmessage -h 10.100.23.6 -u admin -p password -v BNS"
    
    $CMD -k "com.oracle.racovm.netconfig.arguments" -V "-n1"
    $CMD -k "com.oracle.racovm.netconfig.contents.0" -V "
    # Sample Single Instance or Single Instance/HA (Oracle Restart)
    NODE1=BNS
    NODE1IP=10.100.23.161
    PUBADAP=eth0
    PUBMASK=255.255.255.0
    PUBGW=10.100.23.254
    DOMAINNAME=labo.exitas  # May be blank
    DNSIP=10.100.23.20  
    CLONE_SINGLEINSTANCE_HA=yes  # Setup Single Instance/HA (Oracle Restart)"
    $CMD -k "com.oracle.racovm.params.contents.0"  -V ""
    $CMD -k "com.oracle.racovm.racowner-password" -V "oracle"
    $CMD -k "com.oracle.racovm.gridowner-password" -V "oracle"
    $CMD -k "com.oracle.linux.root-password" -V "ovsroot"
    $CMD -k "com.oracle.racovm.netconfig.interview-on-console" -V "NO"
    
  • Run the script and the machine will boot with a fully functional network
  • Log on as root with the previously specified password
  • Open /u01/racovm/params.ini and modify it for your environment; this file is very well commented and clearly explains every parameter, so go wild 😉 (a sample of the edited section follows the build output below)
    • Change GIHOME and DBHOME (do not forget to move the clone files on the VM as well if you change them)
    • Change the ASM settings, because we are working with 2 disks:
      RACASMDISKSTRING="/dev/xvd[c-d]1"
      ALLDISKS="/dev/xvdc /dev/xvdd"
      ASM_MIN_DISKS=2
    • Change DBNAME and SIDNAME
  • When you are done, perform the build:
    # ./buildsingle.sh
    Are you sure you want to install Single Instance/HA? YES
    Do not run if software is already installed and/or running.. [yes|no]? yes
    ...
    INFO (node:BNS): This entire build was logged in logfile: /u01/racovm/buildsingle.log
    2015-03-13 05:20:11:[buildsingle:Done :BNS] Building 12c Single Instance/HA
    2015-03-13 05:20:11:[buildsingle:Time :BNS] Completed successfully in 1052 seconds (0h:17m:32s)
    
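For reference, the relevant part of /u01/racovm/params.ini after the edits above might look something like this; the home paths, database name and SID are example values only for this lab:

    # Grid and database homes (move the clone files on the VM if you change these)
    GIHOME=/u01/app/12.1.0/grid
    DBHOME=/u01/app/oracle/product/12.1.0/dbhome_1
    # ASM: two disks in one diskgroup instead of the default five
    RACASMDISKSTRING="/dev/xvd[c-d]1"
    ALLDISKS="/dev/xvdc /dev/xvdd"
    ASM_MIN_DISKS=2
    # Database and instance name
    DBNAME=BNS
    SIDNAME=BNS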

That's it: we deployed a VM with Oracle Restart and a 12c database with ease.
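
To double-check that everything is really up, the standard clusterware commands are enough; the database name BNS below is just this lab's example, so use whatever DBNAME you configured in params.ini:

    # As the grid user (environment pointing at the grid home):
    # list everything Oracle Restart is managing (ASM, listener, database, ...)
    $ crsctl stat res -t

    # As the oracle user (environment pointing at the database home):
    # status of the database resource
    $ srvctl status database -d BNS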

MySQL – GTID Replication

GTID replication is new in MySQL 5.6 and adds a unique ID to every transaction on the database. That transaction ID is then used to ensure the transaction is applied on the slave, which removes the need to track the master's position in a specific binary log file. GTID ensures consistency and automatically determines which transaction the slave is currently at and which one is next in line.
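
A GTID is simply the UUID of the server that originally committed the transaction plus a sequence number, for example 9d8cc26b-ad2d-11e4-b175-005056b248f8:17. You can check at any time whether GTID mode is active and which transactions a server has already executed:

    # Is GTID mode enabled on this server?
    (root) # mysql -e "SHOW VARIABLES LIKE 'gtid_mode'"

    # Which transactions (GTID sets) has this server executed so far?
    (root) # mysql -e "SELECT @@GLOBAL.gtid_executed\G"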

Setting up GTID Replication

  • Create user for replication
    Mysql> CREATE USER 'repl'@'%.labo.exitas' IDENTIFIED BY 'RandomPassword';
    Mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%.labo.exitas';
  • Adjustments needed in /etc/my.cnf (on top of the standard parameters for replication)
    • The server-id needs to be unique; most of the time I set it to the last octet of the IP address.
    • report-host and report-port determine how this MySQL server is reported in the MySQL utilities, so set them to the current hostname and port.
    • gtid_mode should be enabled so that each transaction gets a unique ID.
    • log-slave-updates should be enabled if you plan to further replicate the changes to other servers.
    • enforce-gtid-consistency should be enabled or gtid_mode cannot be turned on. This makes sure all transactions are GTID-safe; for example, statements that combine changes to MyISAM and InnoDB tables can't be run.
    • don’t forget all the other parameters required for replication
    • RESULT

      # Replication
      server-id=111
      report-host=mysql02.labo.exitas
      report-port=3306 
      # GTID 
      gtid_mode=ON 
      log-slave-updates=ON
      enforce-gtid-consistency=true
  • Start the slave with auto-positioning (thanks to GTID); a quick verification sketch follows this list.
    Mysql> CHANGE MASTER TO MASTER_HOST='mysql01.labo.exitas', MASTER_PORT=3306, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='RandomPassword';
    Mysql> START SLAVE;
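
Before reaching for the utilities below, a quick sanity check on the slave already tells you whether both replication threads are running and whether GTID auto-positioning is in use:

    # Both *_Running fields should say "Yes" and Auto_Position should be 1
    (root) # mysql -e "SHOW SLAVE STATUS\G" | egrep 'Slave_IO_Running|Slave_SQL_Running|Retrieved_Gtid_Set|Executed_Gtid_Set|Auto_Position'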

How to monitor GTID Replication

First of all, the mysql-utilities package is really excellent at giving you a view of the status of the different servers; I tend to use these tools all the time.

  • Install mysql-utilities
    (root) # yum install mysql-utilities.noarch
  • Check Topology
    (root) # mysqlrplshow --master=admin:admin@mysql01.labo.exitas --discover-slaves-login=admin:admin --verbose
    
    # master on mysql01.labo.exitas: ... connected.
    # Finding slaves for master: mysql01.labo.exitas:3306
    
    # Replication Topology Graph
    mysql01.labo.exitas:3306 (MASTER)
       |
       +--- mysql02.labo.exitas:3306 [IO: Yes, SQL: Yes] - (SLAVE)
    
  • Show Health
    # mysqlrpladmin --master admin:admin@mysql01.labo.exitas --discover-slaves-login=admin:admin health
    
    # Discovering slaves for master at mysql01.labo.exitas:3306
    # Discovering slave at mysql02.labo.exitas:3306
    # Found slave: mysql02.labo.exitas:3306
    # Checking privileges.
    #
    # Replication Topology Health:
    +--------------------------+-------+---------+--------+------------+---------+
    | host                     | port  | role    | state  | gtid_mode  | health  |
    +--------------------------+-------+---------+--------+------------+---------+
    | mysql01.labo.exitas      | 3306  | MASTER  | UP     | ON         | OK      |
    | mysql02.labo.exitas      | 3306  | SLAVE   | UP     | ON         | OK      |
    +--------------------------+-------+---------+--------+------------+---------+
    
  • Check GTID Status
    # mysqlrpladmin --master=admin:admin@mysql01.labo.exitas --discover-slaves-login=admin:admin gtid
    
    # Discovering slaves for master at mysql01.labo.exitas:3306
    # Discovering slave at mysql02.labo.exitas:3306
    # Found slave: mysql02.labo.exitas:3306
    # Checking privileges.
    #
    # UUIDS for all servers:
    +--------------------------+-------+---------+---------------------------------------+
    | host                     | port  | role    | uuid                                  |
    +--------------------------+-------+---------+---------------------------------------+
    | mysql01.labo.exitas      | 3306  | MASTER  | 9d8cc26b-ad2d-11e4-b175-005056b248f8  |
    | mysql02.labo.exitas      | 3306  | SLAVE   | 98cfd355-ad36-11e4-b1af-005056b21877  |
    +--------------------------+-------+---------+---------------------------------------+
    #
    # Transactions executed on the server:
    +--------------------------+-------+---------+--------------------------------------------+
    | host                     | port  | role    | gtid                                       |
    +--------------------------+-------+---------+--------------------------------------------+
    | mysql01.labo.exitas      | 3306  | MASTER  | 9d8cc26b-ad2d-11e4-b175-005056b248f8:1-18  |
    | mysql02.labo.exitas      | 3306  | SLAVE   | 98cfd355-ad36-11e4-b1af-005056b21877:1-3   |
    | mysql02.labo.exitas      | 3306  | SLAVE   | 9d8cc26b-ad2d-11e4-b175-005056b248f8:1-18  |
    +--------------------------+-------+---------+--------------------------------------------+