Deploying Templates with Oracle Restart on OVM

I decided to post this after investigating how to use these templates in our demo room setup. You would think it's straightforward, but it turns out the deploycluster tool, used to deploy these templates, is not yet compatible with OVM 3.3.1 and higher. That surprised me: OVM 3.3.1 has been out for more than three months now, and with Oracle putting the emphasis on templates, it is strange that they do not work out of the box with the two most recent OVM versions (3.3.1 and 3.3.2).

Preparing the template

Let’s start with preparing everything for deployment

  • Download the template from Oracle Support: patch nr. 18888811
  • Unzip the 3 files resulting in 3 *.gz files
  • Join file 2A and 2B together
    (root) # cat OVM_OL6U6_X86_64_12102DBRAC_PVM-2of2-partA.tar.gz  OVM_OL6U6_X86_64_12102DBRAC_PVM-2of2-partB.tar.gz > OVM_OL6U6_X86_64_12102DBRAC_PVM-2of2.tar.gz
  • Place file 1 and the newly joined file 2 on a webserver, ready for import into OVM (see the sketch after this list).
  • Import the template
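Getting the files onto that webserver looks roughly like this. A minimal sketch, assuming the patch zips were downloaded to /stage and that httpd serves /var/www/html on the import server; the exact zip and template file names may differ slightly from what is shown here.

(root) # cd /stage
(root) # for f in p18888811*.zip; do unzip $f; done    # unpacks the 3 *.tar.gz pieces
(root) # cp OVM_OL6U6_X86_64_12102DBRAC_PVM-1of2.tar.gz \
            OVM_OL6U6_X86_64_12102DBRAC_PVM-2of2.tar.gz \
            /var/www/html/templates/
# the import URLs then point at these files, e.g.
# http://<webserver>/templates/OVM_OL6U6_X86_64_12102DBRAC_PVM-1of2.tar.gz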

Creating the VM

Follow the readme to create a VM. I decided to use 2 ASM disks in 1 diskgroup instead of the minimum of 5, and I'm deploying Oracle Restart rather than RAC this time.

  • Create the disks needed for ASM
  • Create the VM based on the template
  • Edit the VM and remove a network card; the second card is normally used for the interconnect in a RAC deployment.
  • Add the newly created disks to the VM (a CLI sketch follows this list).
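For reference, the disk part can also be done from the OVM CLI instead of the GUI. This is only a sketch, under the assumption that the CLI is reachable on port 10000 of the manager; the repository, disk names, sizes and slots shown here are placeholders, so double-check the exact syntax against your OVM release.

# connect to the OVM CLI on the manager
ssh -l admin -p 10000 ovm-manager

# create two virtual disks for ASM and attach them to the VM (names/sizes/slots are examples)
OVM> create VirtualDisk name=BNS_asm1 size=20 sparse=Yes shareable=No on Repository name=Repo01
OVM> create VirtualDisk name=BNS_asm2 size=20 sparse=Yes shareable=No on Repository name=Repo01
OVM> create VmDiskMapping slot=2 virtualDisk=BNS_asm1 on Vm name=BNS
OVM> create VmDiskMapping slot=3 virtualDisk=BNS_asm2 on Vm name=BNS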

Deploying it with deploycluster tool

  • Download the DeployCluster Tool
  • Place it on any Linux machine; I normally put it on the OVM manager itself, simply because it is always there.
  • Configure netconfig.ini, start the VM and run the tool
    (root) # ./deploycluster.py -u admin -p password -M BNS -N netconfig.ini
    INFO: Oracle VM Client  (3.2.9.746) protocol (1.9) CONNECTED (tcp) to
          Oracle VM Manager (3.3.1.1065) protocol (1.10) IP (10.100.23.6) UUID (0004fb00000100008948135041eef83e)
    
    ERROR: This deploycluster.py version (v2.1.0) does not support connecting to Oracle VM Manager higher than 3.2; found Oracle Manager version 3.3. See My Oracle Support Note #1185244.1 and OTN for deployment options on this version.

So we see there is a compatibility problem here. If we go to the note mentioned in the error, we see:

The Deploycluster tool currently only supports Oracle VM version 3.2 and below
Manual or Message based deployment is possible on Oracle VM 3.3.1

But no guidelines on manual deployment.

Deploying it with message based or manual deployment

  • Download and install ovm_utils on the OVM manager (patch 13602094)
  • Boot the VM and send the necessary messages (or open the console and do it manually). The keys are:
    • com.oracle.racovm.netconfig.arguments => "-n1"
    • com.oracle.racovm.netconfig.contents.0 => send the content of netconfig.ini, this is for initial network setup
    • com.oracle.racovm.params.contents.0 => send the content of params.ini, we’ll leave this empty for now
    • com.oracle.racovm.racowner-password => password for the oracle user
    • com.oracle.racovm.gridowner-password => password for the grid user
    • com.oracle.linux.root-password => password for root user
    • com.oracle.racovm.netconfig.interview-on-console => NO (do not boot with the RAC interview screen)

    Results in:

    export CMD="/u01/app/oracle/ovm-manager-3/ovm_utils/ovm_vmmessage -h 10.100.23.6 -u admin -p password -v BNS"
    
    $CMD -k "com.oracle.racovm.netconfig.arguments" -V "-n1"
    $CMD -k "com.oracle.racovm.netconfig.contents.0" -V "
    # Sample Single Instance or Single Instance/HA (Oracle Restart)
    NODE1=BNS
    NODE1IP=10.100.23.161
    PUBADAP=eth0
    PUBMASK=255.255.255.0
    PUBGW=10.100.23.254
    DOMAINNAME=labo.exitas  # May be blank
    DNSIP=10.100.23.20  
    CLONE_SINGLEINSTANCE_HA=yes  # Setup Single Instance/HA (Oracle Restart)"
    $CMD -k "com.oracle.racovm.params.contents.0"  -V ""
    $CMD -k "com.oracle.racovm.racowner-password" -V "oracle"
    $CMD -k "com.oracle.racovm.gridowner-password" -V "oracle"
    $CMD -k "com.oracle.linux.root-password" -V "ovsroot"
    $CMD -k "com.oracle.racovm.netconfig.interview-on-console" -V "NO"
    
  • Run the script and the machine will boot with a fully functional network
  • Log on as root with the previously specified password
  • Open /u01/racovm/params.ini and modify it for our environment; this file is very well commented and clearly explains every parameter, so go wild ;) (see the sketch after this list)
    • Change GIHOME and DBHOME (do not forget to move the clone files on the VM as well if you change them)
    • Change the ASM settings, because we work with 2 disks
      RACASMDISKSTRING="/dev/xvd[c-d]1"
      ALLDISKS="/dev/xvdc /dev/xvdd"
      ASM_MIN_DISKS=2
    • Change DBNAME and SIDNAME
  • When you are done, perform the build:
    # ./buildsingle.sh
    Are you sure you want to install Single Instance/HA? YES
    Do not run if software is already installed and/or running.. [yes|no]? yes
    ...
    INFO (node:BNS): This entire build was logged in logfile: /u01/racovm/buildsingle.log
    2015-03-13 05:20:11:[buildsingle:Done :BNS] Building 12c Single Instance/HA
    2015-03-13 05:20:11:[buildsingle:Time :BNS] Completed successfully in 1052 seconds (0h:17m:32s)
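
For reference, the params.ini changes for this setup boil down to a fragment like the one below, and a quick sanity check afterwards confirms the stack is up. The Grid/DB home paths, database name and SID are placeholders for whatever you chose in params.ini.

# relevant fragment of /u01/racovm/params.ini (values are examples)
GIHOME=/u01/app/12.1.0.2/grid
DBHOME=/u01/app/oracle/product/12.1.0.2/dbhome_1
RACASMDISKSTRING="/dev/xvd[c-d]1"
ALLDISKS="/dev/xvdc /dev/xvdd"
ASM_MIN_DISKS=2
DBNAME=BNSDB
SIDNAME=BNSDB

# after the build, a quick check that Oracle Restart and the database are running
(root) # /u01/app/12.1.0.2/grid/bin/crsctl status resource -t
(oracle) $ srvctl status asm
(oracle) $ srvctl status database -d BNSDB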
    

That's it. We deployed a VM with Oracle Restart and a 12c database with ease.

MySQL – GTID replication

GTID replication is new in MySQL 5.6 and adds a unique ID to every transaction on the database. That transaction ID is then used to ensure the transaction is applied on the slave, which removes the need to track in which logfile, and at what position, the master currently is. GTID ensures consistency and automatically determines which transaction the slave has reached and which one is next in line.
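A GTID has the form source_uuid:transaction_id, and the set of transactions a server has executed can be inspected directly. A quick way to do that (credentials are placeholders; the example value is the master's set from the topology shown further down):

(root) # mysql -u admin -p -e "SELECT @@GLOBAL.gtid_executed\G"
# returns something like 9d8cc26b-ad2d-11e4-b175-005056b248f8:1-18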

Setting up GTID Replication

  • Create user for replication
    Mysql> CREATE USER 'repl'@'%.labo.exitas' IDENTIFIED BY 'RandomPassword';
    Mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%.labo.exitas';
  • Adjustments needed to /etc/my.cnf ( on top of standard parameters for replication )
    • the server-id needs to be unique; most of the time I set it to the last octet of the IP.
    • report-host and report-port determine how this MySQL server is reported in the MySQL utilities, so set them to the current hostname and port.
    • gtid_mode should be enabled so that each transaction gets a unique ID.
    • log-slave-updates should be enabled if you plan to further replicate the changes to other servers.
    • enforce-gtid-consistency should be enabled, or gtid_mode cannot be turned on. It makes sure all transactions are GTID-consistent, so transactions that combine actions on MyISAM and InnoDB tables can't be run.
    • don’t forget all the other parameters required for replication
    • RESULT

      # Replication
      server-id=111
      report-host=mysql02.labo.exitas
      report-port=3306 
      # GTID 
      gtid_mode=ON 
      log-slave-updates=ON
      enforce-gtid-consistency=true
  • Start the slave with auto-positioning (thanks to GTID); a quick status check follows after this list.
    Mysql> CHANGE MASTER TO MASTER_HOST='mysql01.labo.exitas', MASTER_PORT=3306, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='RandomPassword';
    Mysql> START SLAVE;
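Once the slave is started, the IO/SQL threads and the GTID sets can be verified from the slave itself; a minimal check, assuming it is run on the slave with suitable credentials:

(root) # mysql -u admin -p -e "SHOW SLAVE STATUS\G" | \
         egrep 'Slave_IO_Running|Slave_SQL_Running|Retrieved_Gtid_Set|Executed_Gtid_Set'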

How to monitor GTID Replication

First of all, the provided mysql-utilities are excellent at giving you a view of the status of the different servers. I tend to use them all the time.

  • Install mysql-utilities
    (root) # yum install mysql-utilities.noarch
  • Check Topology
    (root) # mysqlrplshow --master=admin:admin@mysql01.labo.exitas --discover-slaves-login=admin:admin --verbose
    
    # master on mysql01.labo.exitas: ... connected.
    # Finding slaves for master: mysql01.labo.exitas:3306
    
    # Replication Topology Graph
    mysql01.labo.exitas:3306 (MASTER)
       |
       +--- mysql02.labo.exitas:3306 [IO: Yes, SQL: Yes] - (SLAVE)
    
  • Show Health
    # mysqlrpladmin --master admin:admin@mysql01.labo.exitas --discover-slaves-login=admin:admin health
    
    # Discovering slaves for master at mysql01.labo.exitas:3306
    # Discovering slave at mysql02.labo.exitas:3306
    # Found slave: mysql02.labo.exitas:3306
    # Checking privileges.
    #
    # Replication Topology Health:
    +--------------------------+-------+---------+--------+------------+---------+
    | host                     | port  | role    | state  | gtid_mode  | health  |
    +--------------------------+-------+---------+--------+------------+---------+
    | mysql01.labo.exitas      | 3306  | MASTER  | UP     | ON         | OK      |
    | mysql02.labo.exitas      | 3306  | SLAVE   | UP     | ON         | OK      |
    +--------------------------+-------+---------+--------+------------+---------+
    
  • Check GTID Status
    # mysqlrpladmin --master=admin:admin@mysql01.labo.exitas --discover-slaves-login=admin:admin gtid
    
    # Discovering slaves for master at mysql01.labo.exitas:3306
    # Discovering slave at mysql02.labo.exitas:3306
    # Found slave: mysql02.labo.exitas:3306
    # Checking privileges.
    #
    # UUIDS for all servers:
    +--------------------------+-------+---------+---------------------------------------+
    | host                     | port  | role    | uuid                                  |
    +--------------------------+-------+---------+---------------------------------------+
    | mysql01.labo.exitas      | 3306  | MASTER  | 9d8cc26b-ad2d-11e4-b175-005056b248f8  |
    | mysql02.labo.exitas      | 3306  | SLAVE   | 98cfd355-ad36-11e4-b1af-005056b21877  |
    +--------------------------+-------+---------+---------------------------------------+
    #
    # Transactions executed on the server:
    +--------------------------+-------+---------+--------------------------------------------+
    | host                     | port  | role    | gtid                                       |
    +--------------------------+-------+---------+--------------------------------------------+
    | mysql01.labo.exitas      | 3306  | MASTER  | 9d8cc26b-ad2d-11e4-b175-005056b248f8:1-18  |
    | mysql02.labo.exitas      | 3306  | SLAVE   | 98cfd355-ad36-11e4-b1af-005056b21877:1-3   |
    | mysql02.labo.exitas      | 3306  | SLAVE   | 9d8cc26b-ad2d-11e4-b175-005056b248f8:1-18  |
    +--------------------------+-------+---------+--------------------------------------------+
    

TTS in Combination with RMAN Backups on Data Guard

At a customer's site we recently upgraded a database from 10.2.0.5 to 11.2.0.3 using Transportable Tablespaces (TTS). This worked flawlessly, but we ran into an issue taking backups at the Data Guard site of this database.

We followed the normal procedure for taking backups on the standby and using them for the primary:

      • Use a RMAN catalog
      • Register the Primary database
        RMAN> REGISTER DATABASE;
      • Configure the DB Unique Names
        RMAN> CONFIGURE DB_UNIQUE_NAME DB CONNECT IDENTIFIER 'DB_PRIM';
        RMAN> CONFIGURE DB_UNIQUE_NAME DB_DG CONNECT IDENTIFIER 'DB_DG';
        
        RMAN> LIST DB_UNIQUE_NAME OF DATABASE;
        
        List of Databases
        DB Key  DB Name  DB ID            Database Role    Db_unique_name
        ------- ------- ----------------- ---------------  ------------------
        1       DB       336860753        PRIMARY          DB
        1       DB       336860753        STANDBY          DB_DG

At this moment, we can take backups on DB_DG and make them available to DB by changing the unique name in the catalog:

RMAN> change backup for db_unique_name DB_DG reset db_unique_name to DB;
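For completeness: the backups themselves are simply taken on the standby while connected to both the standby instance and the recovery catalog. A minimal sketch; connect identifiers and credentials are placeholders.

(oracle) $ rman target sys/***@DB_DG catalog rcatowner/***@RCAT
RMAN> BACKUP DATABASE;
RMAN> BACKUP ARCHIVELOG ALL;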

However, for this one database we can't seem to do anything at all inside RMAN:

RMAN> show all; 
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of show command at 04/26/2013 13:58:11
RMAN-03014: implicit resync of recovery catalog failed
RMAN-03009: failure of partial resync command on default channel at 04/26/2013 13:58:11
RMAN-20999: internal error

RMAN> backup database; 
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 04/26/2013 14:05:11
RMAN-03014: implicit resync of recovery catalog failed
RMAN-03009: failure of partial resync command on default channel at 04/26/2013 14:05:11
RMAN-20999: internal error

This turns out to be Bug 13000553 (Metalink note 13000553.8), which occurs when you:

  • take backups on the Data Guard standby
  • use TTS
  • add a datafile to a tablespace you transported

At the moment of writing there is no fix for this. The only workaround is to take RMAN backups on the Primary Database.

Converting a Windows 2003 R2 VM from VMware vCenter to Oracle VM 3.x

With more and more VMware customers choosing Oracle VM as the virtualisation platform for running Oracle software, the need arose at a client to convert some of the Windows VMs on vCenter to OVM. Surprisingly, I didn't find a guide in the OVM 3 manual. I knew OVM 2.2 had a chapter about V2V, but only P2V is covered in the OVM 3 manual.

This is the procedure I successfully followed:

Pre-export

  • Apply the fix for kb31408 if you are using SCSI devices in your VM (this is also documented in Metalink note 754071.1).
  • Uninstall vmware tools
  • Stop the VM ( Downtime starts here )
  • Make sure there are no snapshots on the VM.

Export

  • Select the VM in vCenter and click on the menu bar on File > Export > OVF
  • When asked whether you want a single OVA file, answer yes.
  • Place the exported OVA file on the HTTP server you use to import into OVM Manager (I use httpd on the OVM manager; see the sketch after this list).
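Making the OVA available to OVM Manager is then just a copy. A minimal sketch, assuming httpd on the manager with the default document root; paths and file names are placeholders:

(root) # mkdir -p /var/www/html/exports
(root) # cp win2003r2.ova /var/www/html/exports/
# import as an assembly in OVM Manager using a URL such as
# http://<manager-ip>/exports/win2003r2.ova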

Import

  • Follow all the usual steps you would follow to import an ova template into OVM.
    • Import as assembly
    • Create VM Template
    • Create VM
    • Start VM ( TIP: open the console before you boot, so you can follow the boot sequence ). If you receive a blue screen, chances are that you didn’t apply the fix for kb31408 correctly.

Post-Import

  • You need to reconfigure your network, because the MAC address of the network card in the VM has changed.
  • Reactivate Windows (because your hardware has changed)
  • Install the paravirtual drivers ( version 3.0.1 ) and reboot ( Downtime ends here )

And we had a running VM on OVM, identical to the one we had on vCenter, in about 45 minutes. Most of that time was spent in vCenter's export process.

ASMLIB – Slow running of oracleasm scandisks

Recently I had a client where oracleasm scandisks took over 10 minutes to run. This meant a reboot took 12 minutes instead of 2, and any kind of HA test involving disks came with the same 10-minute wait.

The Problem

This client had an enormous amount of logical volumes per server, and this was the only difference from other systems where I had successfully used ASMLib. So I went to check how these were defined at the OS level.

It turned out the devices were created in /dev/mapper and were visible in /proc/partitions, but no corresponding device nodes existed in /dev.

(root) # cat /proc/partitions|grep dm-
253 0 2097152 dm-0
253 1 4194304 dm-1
253 2 524288 dm-2
253 3 4194304 dm-3
253 4 524288 dm-4
253 5 524288 dm-5
253 6 6291456 dm-6
253 7 4194304 dm-7
253 8 524288 dm-8
253 9 16777216 dm-9
253 10 4194304 dm-10
253 11 2097152 dm-11
253 12 16777216 dm-12
253 13 17846400 dm-13
253 14 267696000 dm-14
253 15 267696000 dm-15
253 16 267696000 dm-16
253 17 267696000 dm-17
253 18 2973120 dm-18
253 19 107078400 dm-19
253 20 107078400 dm-20
253 21 267696000 dm-21
253 22 62462400 dm-22
253 23 2973120 dm-23
253 24 267691031 dm-24
253 25 107073161 dm-25
253 26 267691031 dm-26
253 27 62462336 dm-27
253 28 267691031 dm-28
253 29 107073161 dm-29
253 30 2971993 dm-30
253 31 17840118 dm-31
253 32 2971961 dm-32
253 33 267691031 dm-33
253 34 267691031 dm-34
253 35 524288 dm-35
253 36 10485760 dm-36
253 37 4194304 dm-37
253 38 52428800 dm-38
253 39 15728640 dm-39
253 40 2097152 dm-40
253 41 18874368 dm-41
253 42 524288 dm-42
253 43 1048576000 dm-43
253 44 10485760 dm-44
253 45 4194304 dm-45
253 46 15728640 dm-46
253 47 17846400 dm-47
253 48 267696000 dm-48
253 49 267696000 dm-49
253 50 267696000 dm-50
253 51 267696000 dm-51
253 52 2973120 dm-52
253 53 107078400 dm-53
253 54 107078400 dm-54
253 55 267696000 dm-55
253 56 62462400 dm-56
253 57 2973120 dm-57
253 58 2971961 dm-58
253 59 107073161 dm-59
253 60 107073161 dm-60
253 61 62462336 dm-61
253 62 267691031 dm-62
253 63 267691031 dm-63
253 64 17840118 dm-64
253 65 267691031 dm-65
253 66 2971993 dm-66
253 67 267691031 dm-67
253 68 267691031 dm-68
253 69 2935808 dm-69
253 70 2931831 dm-70

(root) # ls -l /dev/dm-*
brw-rw---- 1 root root 253, 13 Mar 3 14:17 /dev/dm-13
brw-rw---- 1 root root 253, 14 Mar 3 14:17 /dev/dm-14
brw-rw---- 1 root root 253, 15 Mar 3 14:17 /dev/dm-15
brw-rw---- 1 root root 253, 16 Mar 3 14:17 /dev/dm-16
brw-rw---- 1 root root 253, 17 Mar 3 14:17 /dev/dm-17
brw-rw---- 1 root root 253, 18 Mar 3 14:17 /dev/dm-18
brw-rw---- 1 root root 253, 19 Mar 3 14:17 /dev/dm-19
brw-rw---- 1 root root 253, 20 Mar 3 14:17 /dev/dm-20
brw-rw---- 1 root root 253, 21 Mar 3 14:17 /dev/dm-21
brw-rw---- 1 root root 253, 22 Mar 3 14:17 /dev/dm-22
brw-rw---- 1 root root 253, 23 Mar 3 14:17 /dev/dm-23
brw-rw---- 1 root root 253, 24 Mar 3 14:17 /dev/dm-24
brw-rw---- 1 root root 253, 25 Mar 3 14:17 /dev/dm-25
brw-rw---- 1 root root 253, 26 Mar 3 14:17 /dev/dm-26
brw-rw---- 1 root root 253, 27 Mar 3 14:17 /dev/dm-27
brw-rw---- 1 root root 253, 28 Mar 3 14:17 /dev/dm-28
brw-rw---- 1 root root 253, 29 Mar 3 14:17 /dev/dm-29
brw-rw---- 1 root root 253, 30 Mar 3 14:17 /dev/dm-30
brw-rw---- 1 root root 253, 31 Mar 3 14:17 /dev/dm-31
brw-rw---- 1 root root 253, 32 Mar 3 14:17 /dev/dm-32
brw-rw---- 1 root root 253, 33 Mar 3 14:17 /dev/dm-33
brw-rw---- 1 root root 253, 34 Mar 3 14:17 /dev/dm-34
brw-rw---- 1 root root 253, 47 Mar 3 14:50 /dev/dm-47
brw-rw---- 1 root root 253, 48 Mar 3 14:50 /dev/dm-48
brw-rw---- 1 root root 253, 49 Mar 3 14:50 /dev/dm-49
brw-rw---- 1 root root 253, 50 Mar 3 14:50 /dev/dm-50
brw-rw---- 1 root root 253, 51 Mar 3 14:50 /dev/dm-51
brw-rw---- 1 root root 253, 52 Mar 3 14:50 /dev/dm-52
brw-rw---- 1 root root 253, 53 Mar 3 14:50 /dev/dm-53
brw-rw---- 1 root root 253, 54 Mar 3 14:50 /dev/dm-54
brw-rw---- 1 root root 253, 55 Mar 3 14:50 /dev/dm-55
brw-rw---- 1 root root 253, 56 Mar 3 14:50 /dev/dm-56
brw-rw---- 1 root root 253, 57 Mar 3 14:50 /dev/dm-57
brw-rw---- 1 root root 253, 58 Mar 3 14:50 /dev/dm-58
brw-rw---- 1 root root 253, 59 Mar 3 14:50 /dev/dm-59
brw-rw---- 1 root root 253, 60 Mar 3 14:50 /dev/dm-60
brw-rw---- 1 root root 253, 61 Mar 3 14:50 /dev/dm-61
brw-rw---- 1 root root 253, 62 Mar 3 14:50 /dev/dm-62
brw-rw---- 1 root root 253, 63 Mar 3 14:50 /dev/dm-63
brw-rw---- 1 root root 253, 64 Mar 3 14:50 /dev/dm-64
brw-rw---- 1 root root 253, 65 Mar 3 14:50 /dev/dm-65
brw-rw---- 1 root root 253, 66 Mar 3 14:50 /dev/dm-66
brw-rw---- 1 root root 253, 67 Mar 3 14:50 /dev/dm-67
brw-rw---- 1 root root 253, 68 Mar 3 14:50 /dev/dm-68
brw-rw---- 1 root root 253, 69 Mar 3 14:58 /dev/dm-69
brw-rw---- 1 root root 253, 70 Mar 3 14:58 /dev/dm-70

I knew oracleasm uses /proc/partitions as its leading table of devices to check, so I suspected a timeout occurred while trying to access the non-existing devices. It turned out this was correct.
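A quick way to see which block devices are known to the kernel but have no node under /dev (i.e. the situation described above) is a small loop over /proc/partitions; just a diagnostic sketch:

(root) # for d in $(awk '/dm-/ {print $4}' /proc/partitions); do [ -b /dev/$d ] || echo "missing: /dev/$d"; done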

The Solution

Oracle Linux 5 and Red Hat 5 do not create /dev nodes for LVM2 (device-mapper) devices by default. It took me some time to find this, but the udev rules contain a clear ignore rule.

(root) # cat /etc/udev/rules.d/90-dm.rules
KERNEL=="dm-[0-9]*", ACTION=="add", OPTIONS+="ignore_device"

When we disable this by commenting it out and retriggering the udev rules, the devices get created.

(root) # cat /etc/udev/rules.d/90-dm.rules
#KERNEL=="dm-[0-9]*", ACTION=="add", OPTIONS+="ignore_device"

(root) # udevtrigger 

(root) # ls -ltr /dev/dm-*
brw-rw---- 1 root root 253, 15 Mar 3 14:17 /dev/dm-15
brw-rw---- 1 root root 253, 14 Mar 3 14:17 /dev/dm-14
brw-rw---- 1 root root 253, 19 Mar 3 14:17 /dev/dm-19
brw-rw---- 1 root root 253, 20 Mar 3 14:17 /dev/dm-20
brw-rw---- 1 root root 253, 16 Mar 3 14:17 /dev/dm-16
brw-rw---- 1 root root 253, 21 Mar 3 14:17 /dev/dm-21
brw-rw---- 1 root root 253, 17 Mar 3 14:17 /dev/dm-17
brw-rw---- 1 root root 253, 22 Mar 3 14:17 /dev/dm-22
brw-rw---- 1 root root 253, 23 Mar 3 14:17 /dev/dm-23
brw-rw---- 1 root root 253, 25 Mar 3 14:17 /dev/dm-25
brw-rw---- 1 root root 253, 24 Mar 3 14:17 /dev/dm-24
brw-rw---- 1 root root 253, 30 Mar 3 14:17 /dev/dm-30
brw-rw---- 1 root root 253, 27 Mar 3 14:17 /dev/dm-27
brw-rw---- 1 root root 253, 26 Mar 3 14:17 /dev/dm-26
brw-rw---- 1 root root 253, 31 Mar 3 14:17 /dev/dm-31
brw-rw---- 1 root root 253, 33 Mar 3 14:17 /dev/dm-33
brw-rw---- 1 root root 253, 32 Mar 3 14:17 /dev/dm-32
brw-rw---- 1 root root 253, 34 Mar 3 14:17 /dev/dm-34
brw-rw---- 1 root root 253, 18 Mar 3 14:17 /dev/dm-18
brw-rw---- 1 root root 253, 28 Mar 3 14:17 /dev/dm-28
brw-rw---- 1 root root 253, 29 Mar 3 14:17 /dev/dm-29
brw-rw---- 1 root root 253, 13 Mar 3 14:17 /dev/dm-13
brw-rw---- 1 root root 253, 53 Mar 3 14:50 /dev/dm-53
brw-rw---- 1 root root 253, 47 Mar 3 14:50 /dev/dm-47
brw-rw---- 1 root root 253, 55 Mar 3 14:50 /dev/dm-55
brw-rw---- 1 root root 253, 49 Mar 3 14:50 /dev/dm-49
brw-rw---- 1 root root 253, 52 Mar 3 14:50 /dev/dm-52
brw-rw---- 1 root root 253, 54 Mar 3 14:50 /dev/dm-54
brw-rw---- 1 root root 253, 51 Mar 3 14:50 /dev/dm-51
brw-rw---- 1 root root 253, 56 Mar 3 14:50 /dev/dm-56
brw-rw---- 1 root root 253, 48 Mar 3 14:50 /dev/dm-48
brw-rw---- 1 root root 253, 57 Mar 3 14:50 /dev/dm-57
brw-rw---- 1 root root 253, 50 Mar 3 14:50 /dev/dm-50
brw-rw---- 1 root root 253, 64 Mar 3 14:50 /dev/dm-64
brw-rw---- 1 root root 253, 61 Mar 3 14:50 /dev/dm-61
brw-rw---- 1 root root 253, 66 Mar 3 14:50 /dev/dm-66
brw-rw---- 1 root root 253, 60 Mar 3 14:50 /dev/dm-60
brw-rw---- 1 root root 253, 58 Mar 3 14:50 /dev/dm-58
brw-rw---- 1 root root 253, 59 Mar 3 14:50 /dev/dm-59
brw-rw---- 1 root root 253, 68 Mar 3 14:50 /dev/dm-68
brw-rw---- 1 root root 253, 65 Mar 3 14:50 /dev/dm-65
brw-rw---- 1 root root 253, 63 Mar 3 14:50 /dev/dm-63
brw-rw---- 1 root root 253, 67 Mar 3 14:50 /dev/dm-67
brw-rw---- 1 root root 253, 62 Mar 3 14:50 /dev/dm-62
brw-rw---- 1 root root 253, 69 Mar 3 14:58 /dev/dm-69
brw-rw---- 1 root root 253, 70 Mar 3 14:58 /dev/dm-70
brw-r----- 1 root disk 253, 46 Mar 4 11:34 /dev/dm-46
brw-r----- 1 root disk 253, 45 Mar 4 11:34 /dev/dm-45
brw-r----- 1 root disk 253, 42 Mar 4 11:34 /dev/dm-42
brw-r----- 1 root disk 253, 40 Mar 4 11:34 /dev/dm-40
brw-r----- 1 root disk 253, 36 Mar 4 11:34 /dev/dm-36
brw-r----- 1 root disk 253, 44 Mar 4 11:34 /dev/dm-44
brw-r----- 1 root disk 253, 39 Mar 4 11:34 /dev/dm-39
brw-r----- 1 root disk 253, 41 Mar 4 11:34 /dev/dm-41
brw-r----- 1 root disk 253, 43 Mar 4 11:34 /dev/dm-43
brw-r----- 1 root disk 253, 37 Mar 4 11:34 /dev/dm-37
brw-r----- 1 root disk 253, 12 Mar 4 11:34 /dev/dm-12
brw-r----- 1 root disk 253, 38 Mar 4 11:34 /dev/dm-38
brw-r----- 1 root disk 253, 35 Mar 4 11:34 /dev/dm-35
brw-r----- 1 root disk 253, 4 Mar 4 11:34 /dev/dm-4
brw-r----- 1 root disk 253, 10 Mar 4 11:34 /dev/dm-10
brw-r----- 1 root disk 253, 7 Mar 4 11:34 /dev/dm-7
brw-r----- 1 root disk 253, 3 Mar 4 11:34 /dev/dm-3
brw-r----- 1 root disk 253, 6 Mar 4 11:34 /dev/dm-6
brw-r----- 1 root disk 253, 1 Mar 4 11:34 /dev/dm-1
brw-r----- 1 root disk 253, 11 Mar 4 11:34 /dev/dm-11
brw-r----- 1 root disk 253, 0 Mar 4 11:34 /dev/dm-0
brw-r----- 1 root disk 253, 9 Mar 4 11:34 /dev/dm-9
brw-r----- 1 root disk 253, 5 Mar 4 11:34 /dev/dm-5
brw-r----- 1 root disk 253, 8 Mar 4 11:34 /dev/dm-8
brw-r----- 1 root disk 253, 2 Mar 4 11:34 /dev/dm-2

After these actions, oracleasm scandisks now runs in a few seconds.

(root) # time oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...

real 0m0.480s
user 0m0.134s
sys 0m0.259s