ASM: Backup and restore DG metadata

Overview – ASMCMD md_backup and md_restore

  • ASMCMD in 11g is extended to include ASM disk group metadata backup and restore functionality.
  • This provides the ability to recreate a pre-existing ASM disk group with the same disk paths, disk names, failure groups, attributes, templates and alias directory structure.
  • In 10g you have to manually recreate the ASM disk group and any required user directories/templates.
  • In 11g you can simply take a backup of the ASM disk group metadata and restore it when needed.

The md_backup command creates a backup file containing metadata for one or more disk groups.
By default all mounted disk groups are included in the backup file, which is saved in the current working directory.
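
For example, to back up the metadata of all mounted disk groups into a single file (hypothetical file name):
ASMCMD> md_backup /tmp/asm_md_all.backup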

 

Backup and restore DG metadata

Backup ACFS DG metadata
ASMCMD> md_backup ACFS_DG.backup -G ACFS
Disk group metadata to be backed up: ACFS

[root@grac41 Desktop]# srvctl status diskgroup -g ACFS
Disk Group ACFS is running on grac42,grac43

Stop DG ACFS and check DG status
[root@grac41 Desktop]# srvctl stop  diskgroup -g ACFS -f
[grid@grac41 ASM]$ asmcmd lsdg -g  --discovery ACFS
Inst_ID  State       Type  Rebal  Sector  Block  AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
      3  DISMOUNTED        N           0   4096   0         0        0                0               0              0             N  ACFS/
      2  DISMOUNTED        N           0   4096   0         0        0                0               0              0             N  ACFS/
      1  DISMOUNTED        N           0   4096   0         0        0                0               0              0             N  ACFS/

Remove DG, clean up the disk headers with dd, and check status 
[root@grac41 Desktop]# srvctl remove  diskgroup -g ACFS -f
[grid@grac41 ASM]$ dd if=/dev/zero of=/dev/asm_test_1G_disk1 bs=8192 count=1000
[grid@grac41 ASM]$ dd if=/dev/zero of=/dev/asm_test_1G_disk2  bs=8192 count=1000
[root@grac41 Desktop]# srvctl status diskgroup -g ACFS
Disk Group ACFS is not running
[grid@grac41 ASM]$  asmcmd lsdg -g  --discovery ACFS
ASMCMD-8001: diskgroup 'ACFS' does not exist or is not mounted


Before recreating the DG, check disk status and diskgroup status on all nodes 
[grid@grac41 ASM]$  asmcmd lsdg -g  --discovery ACFS
ASMCMD-8001: diskgroup 'ACFS' does not exist or is not mounted
[grid@grac41 ASM]$  asmcmd lsdsk -k -g --candidate
Inst_ID  Total_MB  Free_MB  OS_MB  Name       Failgroup  Failgroup_Type  Library  Label  UDID  Product  Redund   Path
      3         0        0   1019                        REGULAR         System                         UNKNOWN  /dev/asm_test_1G_disk1
      2         0        0   1019                        REGULAR         System                         UNKNOWN  /dev/asm_test_1G_disk1
      1         0        0   1019                        REGULAR         System                         UNKNOWN  /dev/asm_test_1G_disk1
      3         0        0   1019                        REGULAR         System                         UNKNOWN  /dev/asm_test_1G_disk2
      2         0        0   1019                        REGULAR         System                         UNKNOWN  /dev/asm_test_1G_disk2
      1         0        0   1019                        REGULAR         System                         UNKNOWN  /dev/asm_test_1G_disk2

Restore DG in full mode 
  - The full mode restores the disk group exactly as it was at the time of the backup
ASMCMD>  md_restore ACFS_DG.backup  --full  -G ACFS
Current Diskgroup metadata being restored: ACFS
Diskgroup ACFS created!
System template DATAFILE modified!
System template AUTOBACKUP modified!
System template OCRFILE modified!
System template ASMPARAMETERFILE modified!
System template PARAMETERFILE modified!
System template DUMPSET modified!
System template ARCHIVELOG modified!
System template XTRANSPORT modified!
System template DATAGUARDCONFIG modified!
System template BACKUPSET modified!
System template ONLINELOG modified!
System template CONTROLFILE modified!
System template FLASHFILE modified!
System template XTRANSPORT BACKUPSET modified!
System template FLASHBACK modified!
System template TEMPFILE modified!
System template CHANGETRACKING modified!

[grid@grac41 ASM]$ asmcmd lsdg -g  --discovery ACFS 
Inst_ID  State       Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
      3  DISMOUNTED          N           0   4096        0         0        0                0               0              0             N  ACFS/
      2  DISMOUNTED          N           0   4096        0         0        0                0               0              0             N  ACFS/
      1  MOUNTED     NORMAL  N         512   4096  1048576      2032     1926                0             963              0             N  ACFS/
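
Besides --full, md_restore also offers --nodg (restore the metadata only, without re-creating the disk group) and --newdg (re-create the disk group under a new name). A sketch, assuming we wanted the metadata restored as a new DG named ACFS2:
ASMCMD> md_restore ACFS_DG.backup --newdg -o 'ACFS:ACFS2'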

Restart DG on the remaining nodes and display DG status 
[grid@grac41 ASM]$ srvctl status diskgroup -g ACFS 
Disk Group ACFS is running on grac41
[grid@grac41 ASM]$ srvctl start  diskgroup -g ACFS 
[grid@grac41 ASM]$ srvctl status diskgroup -g ACFS 
Disk Group ACFS is running on grac42,grac43,grac41


[grid@grac41 ASM]$ asmcmd lsdg -g  --discovery ACFS
Inst_ID  State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
      3  MOUNTED  NORMAL  N         512   4096  1048576      2032     1758                0             879              0             N  ACFS/
      2  MOUNTED  NORMAL  N         512   4096  1048576      2032     1758                0             879              0             N  ACFS/
      1  MOUNTED  NORMAL  N         512   4096  1048576      2032     1758                0             879              0             N  ACFS/

Reference

  • ASMCMD – New commands in 11gR1 (Doc ID 451900.1)

Create/Drop Diskgroup

Overview: using the force option with mount, dismount, drop and create DG

  Mount force
  The force option becomes a must when a disk group mount reports missing disks. This is one of the cases when 
  it's safe and required to use the force option. Provided we are not missing too many disks, the mount force 
  should succeed. Basically, at least one partner disk - from every disk partnership in the disk group - must 
  be available.
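
  A minimal sketch, assuming a disk group PLAY whose normal mount fails due to a missing disk:
  SQL> alter diskgroup PLAY mount force;
  Diskgroup altered.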

  Create Diskgroup force 
  If the disk to be added to a disk group is not CANDIDATE, PROVISIONED or FORMER, I have to specify force next 
  to the disk name. This will destroy any data on the specified disk(s).
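
  A sketch, using a hypothetical device path:
  SQL> create diskgroup PLAY external redundancy disk '/dev/asm_play_disk1' force;
  Diskgroup created.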

  Forcing disk group drop
  To drop a disk group I have to mount it first. If I cannot mount a disk group, but must drop it, I can use the 
  force option of the DROP DISKGROUP statement, like this:
  SQL> drop diskgroup PLAY force including contents;

  Forcing disk group dismount
  ASM does not allow a disk group to be dismounted if it's still being accessed. But I can force the disk group 
  dismount even if some files in the disk group are open. Here is an example:
  SQL> alter diskgroup PLAY dismount;
  alter diskgroup PLAY dismount
  *
  ERROR at line 1:
  ORA-15032: not all alterations performed
  ORA-15027: active use of diskgroup "PLAY" precludes its dismount
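
  With the force keyword the dismount goes through even though files are open (the same pattern is used for DG TEST further below):
  SQL> alter diskgroup PLAY dismount force;
  Diskgroup altered.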

  Note that the forced disk group dismount will cause all datafiles stored in that disk group to go offline, which means 
  they will need recovery (and restore, if I then drop disk group PLAY).
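
  A rough sketch of the follow-up, assuming a hypothetical datafile 7 stored in PLAY:
  SQL> alter diskgroup PLAY mount;
  RMAN> recover datafile 7;
  SQL> alter database datafile 7 online;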


ASM Operation: Rename Diskgroup

Current status and quick overview

  • Step 1: Dismount the DG TEST
  • Step 2: Validate the rename DG operation by running the renamedg command with check options (non-destructive): verbose=true check=true
  • Step 3: Change the DG name by running renamedg without the check option: verbose=true
  • Step 4: Mount the new DG TEST_NEW
  • Step 5: Clean up the OCR and delete the old DG TEST
Note: renaming a DG that holds CW-related files (voting disks, OCR) needs additional steps
  ( for details see Book: The Essential Guide to Oracle Automatic Storage Management, Chapter 13 )
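
For reference, the renamedg parameters used below (renamedg -help shows the full usage):
renamedg [phase={one|two|both}] dgname=<old_dg> newdgname=<new_dg>
         [config=<config_file>] [asm_diskstring=<discovery_string>]
         [clean={true|false}] [check={true|false}] [confirm={true|false}]
         [verbose={true|false}] [keep_voting_files={true|false}]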

Current status
[grid@grac41 ~]$ crs | egrep 'TARGET|TEST'
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
ora.TEST.dg                    ONLINE     ONLINE          grac41        
ora.TEST.dg                    ONLINE     ONLINE          grac42        
ora.TEST.dg                    ONLINE     ONLINE          grac43      
[grid@grac41 ~]$ asmcmd lsdg test
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512   4096  1048576      2046     1746                0             873              0             N  TEST/
[grid@grac41 ~]$  asmcmd lsdsk -k
Total_MB  Free_MB  OS_MB  Name       Failgroup  Failgroup_Type  Library  Label  UDID  Product  Redund   Path
    1023      873   1023  TEST_0000  TEST_0000  REGULAR         System                         UNKNOWN  /dev/asm_test_1G_disk1
    1023      873   1023  TEST_0001  TEST_0001  REGULAR         System                         UNKNOWN  /dev/asm_test_1G_disk2
--> DG TEST has 2 disks and is mounted on all cluster nodes

Dismount DG test

[grid@grac41 ~]$ sqlplus / as sysasm

SQL> alter diskgroup test dismount;
alter diskgroup test dismount
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15027: active use of diskgroup "TEST" precludes its dismount

SQL>  alter diskgroup test dismount force;
Diskgroup altered.

Run renamedg with the check option to validate the RENAME DG operation

[grid@grac41 ~]$  renamedg dgname=test newdgname=test_new verbose=true check=true asm_disksting='/dev/asm*'
Parsing parameters..
KFNDG-00201: file not found
KFNDG-00201: invalid arguments
    Cause: Invalid key or value was specified for renamedg.
    Action: Try renamedg -help for more information.
--> Wrong parameter: change asm_disksting to asm_diskstring 


[grid@grac41 ~]$  renamedg dgname=test newdgname=test_new verbose=true check=true asm_diskstring='/dev/asm*'
..
KFNDG-00405: specified disk group string appears to be mounted
    Cause: Disk group was mounted.
    Action: Unmount the disk group and retry renamedg.
[grid@grac41 ~]$  asmcmd lsdg test -g
Inst_ID  State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
      3  MOUNTED  NORMAL  N         512   4096  1048576      2046     1746                0             873              0             N  TEST/
      2  MOUNTED  NORMAL  N         512   4096  1048576      2046     1746                0             873              0             N  TEST/
--> DG TEST still mounted on 2 RAC instances 
    Either use asmca to dismount the DG on all nodes, or run alter diskgroup test dismount force on the remaining nodes (as shown below)
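
For example, on each remaining node (grac42, grac43):
[grid@grac42 ~]$ sqlplus / as sysasm
SQL> alter diskgroup test dismount force;
Diskgroup altered.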

[grid@grac41 ~]$  asmcmd lsdg test -g
ASMCMD-8001: diskgroup 'test' does not exist or is not mounted

[grid@grac41 ~]$   renamedg dgname=test newdgname=test_new verbose=true check=true asm_diskstring='/dev/asm*'
Parsing parameters..
Parameters in effect:
     Old DG name       : TEST 
     New DG name       : TEST_NEW 
     Phases            :
          Phase 1
          Phase 2
     Discovery str        : /dev/asm* 
     Check              : TRUE
     Clean              : TRUE
     Raw only           : TRUE
renamedg operation: dgname=test newdgname=test_new verbose=true check=true asm_diskstring=/dev/asm*
Executing phase 1
Discovering the group
Performing discovery with string:/dev/asm*
Identified disk UFS:/dev/asm_test_1G_disk2 with disk number:1 and timestamp (33004974 -2144396288)
Identified disk UFS:/dev/asm_test_1G_disk1 with disk number:0 and timestamp (33004974 -2144396288)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:/dev/asm*
Identified disk UFS:/dev/asm_test_1G_disk2 with disk number:1 and timestamp (33004974 -2144396288)
Identified disk UFS:/dev/asm_test_1G_disk1 with disk number:0 and timestamp (33004974 -2144396288)
Checking if the diskgroup is mounted or used by CSS 
Checking disk number:1
Checking disk number:0
Generating configuration file..
Completed phase 1
Executing phase 2
Looking for /dev/asm_test_1G_disk2
Leaving the header unchanged
Looking for /dev/asm_test_1G_disk1
Leaving the header unchanged
Completed phase 2
Terminating kgfd context 0x7f6f0781a0a0

Run renamedg without check option to RENAME DG from TEST to TEST_NEW

[grid@grac41 ~]$    renamedg dgname=test newdgname=test_new verbose=true   asm_diskstring='/dev/asm*'
Parsing parameters..
Parameters in effect:

     Old DG name          : TEST 
     New DG name          : TEST_NEW 
     Phases               :
          Phase 1
          Phase 2
     Discovery str        : /dev/asm* 
     Clean                : TRUE
     Raw only             : TRUE
renamedg operation: dgname=test newdgname=test_new verbose=true asm_diskstring=/dev/asm*
Executing phase 1
Discovering the group
Performing discovery with string:/dev/asm*
Identified disk UFS:/dev/asm_test_1G_disk2 with disk number:1 and timestamp (33004974 -2144396288)
Identified disk UFS:/dev/asm_test_1G_disk1 with disk number:0 and timestamp (33004974 -2144396288)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:/dev/asm*
Identified disk UFS:/dev/asm_test_1G_disk2 with disk number:1 and timestamp (33004974 -2144396288)
Identified disk UFS:/dev/asm_test_1G_disk1 with disk number:0 and timestamp (33004974 -2144396288)
Checking if the diskgroup is mounted or used by CSS 
Checking disk number:1
Checking disk number:0
Generating configuration file..
Completed phase 1
Executing phase 2
Looking for /dev/asm_test_1G_disk2
Modifying the header
Looking for /dev/asm_test_1G_disk1
Modifying the header
Completed phase 2
Terminating kgfd context 0x7ffae14fe0a0

Verify the new DG name using kfed and mount DG test_new
[grid@grac41 ~]$ kfed  read  /dev/asm_test_1G_disk1 | grep 'name'
kfdhdb.dskname:               TEST_0000 ; 0x028: length=9
kfdhdb.grpname:                TEST_NEW ; 0x048: length=8
kfdhdb.fgname:                TEST_0000 ; 0x068: length=9
kfdhdb.capname:                         ; 0x088: length=0
[grid@grac41 ~]$ kfed  read  /dev/asm_test_1G_disk2  | grep 'name'
kfdhdb.dskname:               TEST_0001 ; 0x028: length=9
kfdhdb.grpname:                TEST_NEW ; 0x048: length=8
kfdhdb.fgname:                TEST_0001 ; 0x068: length=9
kfdhdb.capname:                         ; 0x088: length=0

Mount renamed DG test_new

SQL> alter diskgroup test_new mount;
Diskgroup altered.

[grid@grac41 ~]$ asmcmd lsdg -g test_new
Inst_ID  State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
      1  MOUNTED  NORMAL  N         512   4096  1048576      2046     1746                0             873              0             N  TEST_NEW/
--> DG test_new mounted on grac41 only
    Again use asmca to mount DG test_new on the remaining nodes 
[grid@grac41 ~]$  asmcmd lsdg -g test_new
Inst_ID  State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
      1  MOUNTED  NORMAL  N         512   4096  1048576      2046     1746                0             873              0             N  TEST_NEW/
      2  MOUNTED  NORMAL  N         512   4096  1048576      2046     1746                0             873              0             N  TEST_NEW/
      3  MOUNTED  NORMAL  N         512   4096  1048576      2046     1746                0             873              0             N  TEST_NEW/

[grid@grac41 ~]$ asmcmd lsdsk -k 
Total_MB  Free_MB  OS_MB  Name       Failgroup  Failgroup_Type  Library  Label  UDID  Product  Redund   Path
    1023      873   1023  TEST_0000  TEST_0000  REGULAR         System                         UNKNOWN  /dev/asm_test_1G_disk1
    1023      873   1023  TEST_0001  TEST_0001  REGULAR         System                         UNKNOWN  /dev/asm_test_1G_disk2
--> Note the disks are not renamed: TEST_0000 and TEST_0001 keep their names instead of becoming TEST_NEW_0000 and TEST_NEW_0001
    ( see ASM Operation: Rename an ASM disk below )

Cleanup cluster resources

[grid@grac41 ~]$ crs | egrep 'NAME|TEST'
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
ora.TEST.dg                    OFFLINE    OFFLINE         grac41        
ora.TEST.dg                    OFFLINE    OFFLINE         grac42        
ora.TEST.dg                    OFFLINE    OFFLINE         grac43        
ora.TEST_NEW.dg                ONLINE     ONLINE          grac41        
ora.TEST_NEW.dg                ONLINE     ONLINE          grac42        
ora.TEST_NEW.dg                ONLINE     ONLINE          grac43       
--> old DG resource ora.TEST.dg is OFFLINE but still registered with the clusterware 

[grid@grac41 ~]$ srvctl remove diskgroup -g TEST
PRCA-1002 : Failed to remove CRS resource ora.TEST.dg for ASM Disk Group TEST
PRCR-1028 : Failed to remove resource ora.TEST.dg
PRCR-1072 : Failed to unregister resource ora.TEST.dg
CRS-0222: Resource 'ora.TEST.dg' has dependency error.

[grid@grac41 ~]$  crsctl status resource ora.grac4.db -p | grep -i test
START_DEPENDENCIES=hard(ora.DATA.dg,ora.FRA2.dg,ora.TEST.dg) weak(type:ora.listener.type,global:type:ora.scan_listener.type,uniform:ora.ons,global:ora.gns) 
                   pullup(ora.DATA.dg,ora.FRA2.dg,ora.TEST.dg)
STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DATA.dg,shutdown:ora.FRA2.dg,shutdown:ora.TEST.dg)

Remove start dependency:
[root@grac41 ~]# crsctl  modify resource  ora.grac4.db  -attr "START_DEPENDENCIES='hard(ora.DATA.dg,ora.FRA2.dg) 
                 weak(type:ora.listener.type,global:type:ora.scan_listener.type,uniform:ora.ons,global:ora.gns) pullup(ora.DATA.dg,ora.FRA2.dg)' "
[root@grac41 ~]#   crsctl status resource ora.grac4.db -p | grep -i test
STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DATA.dg,shutdown:ora.FRA2.dg,shutdown:ora.TEST.dg)
--> STOP DEPENDENCIES still there
Remove stop dependency:
[root@grac41 ~]# crsctl  modify resource  ora.grac4.db  -attr "STOP_DEPENDENCIES='hard(intermediate:ora.asm,shutdown:ora.DATA.dg,shutdown:ora.FRA2.dg)' "
[root@grac41 ~]# crsctl status resource ora.grac4.db -p | grep -i test
--> START and STOP DEPENDENCIES to DG TEST removed 

[grid@grac41 ~]$ srvctl remove diskgroup -g TEST
--> succeeds now that the START/STOP dependencies on ora.TEST.dg are gone
[grid@grac41 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512   4096  1048576     40944    18471            10236            4117              0             N  DATA/
MOUNTED  EXTERN  N         512   4096  1048576     40952    31275                0           31275              0             N  FRA2/
MOUNTED  NORMAL  N         512   4096  4194304      6132     4960             2044            1458              0             Y  OCR/
MOUNTED  NORMAL  N         512   4096  1048576      2046     1746                0             873              0             N  TEST_NEW/

References

  •  Book: The Essential Guide to Oracle Automatic Storage Management – Chapter 13 ( from Nitin Vengurlekar / Prasad Bagal – Oracle Press )

ASM Operation: Rename an ASM disk

Renaming disk with 12c

SQL> alter diskgroup data2 dismount;
   Diskgroup altered.
SQL> alter diskgroup data2 mount restricted;
   Diskgroup altered.
Then use the following SQL to rename the disks. 
SQL> alter diskgroup data2 rename disk 'DATA2_0001' to 'DATA2_VMAX_0001', 'DATA2_0000' to 'DATA2_VMAX_0000';
Diskgroup altered.
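
After the rename, dismount the disk group and mount it normally again to leave restricted mode (sketch):
SQL> alter diskgroup data2 dismount;
Diskgroup altered.
SQL> alter diskgroup data2 mount;
Diskgroup altered.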

 

Renaming disk with 11g

[grid@grac41 ~]$ asmcmd lsdg DATA
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512   4096  1048576     40944    18561            10236            4162              0             N  DATA/

SQL> @dg1      ( dg1.sql joins v$asm_disk and v$asm_diskgroup; the full query is shown later in this section )

DG_NAME    DG_STATE   TYPE    DSK_NO MOUNT_S HEADER_STATU MODE_ST STATE    PATH 			  FAILGROUP
---------- ---------- ------ ------- ------- ------------ ------- -------- ------------------------------ ---------------
DATA	   MOUNTED    NORMAL	   0 CACHED  MEMBER	  ONLINE  NORMAL   /dev/asmdisk1_udev_sdb1	  DATA_0000
DATA	   MOUNTED    NORMAL	   1 CACHED  MEMBER	  ONLINE  NORMAL   /dev/asmdisk1_udev_sdc1	  DATA_0001
DATA	   MOUNTED    NORMAL	   2 CACHED  MEMBER	  ONLINE  NORMAL   /dev/asmdisk1_udev_sdd1	  DATA_0002
DATA	   MOUNTED    NORMAL	   3 CACHED  MEMBER	  ONLINE  NORMAL   /dev/asmdisk1_udev_sde1	  DATA_0003

SQL> ALTER DISKGROUP DATA  REBALANCE POWER 11 WAIT;
SQL> ALTER DISKGROUP DATA  DROP DISK DATA_0003;

DG_NAME    DG_STATE   TYPE   NAME	 DSK_NO MOUNT_S HEADER_STATU MODE_ST STATE    PATH			     FAILGROUP
---------- ---------- ------ ---------- ------- ------- ------------ ------- -------- ------------------------------ ---------------
DATA	   MOUNTED    NORMAL DATA_0000	      0 CACHED	MEMBER	     ONLINE  NORMAL   /dev/asmdisk1_udev_sdb1	     DATA_0000
DATA	   MOUNTED    NORMAL DATA_0001	      1 CACHED	MEMBER	     ONLINE  NORMAL   /dev/asmdisk1_udev_sdc1	     DATA_0001
DATA	   MOUNTED    NORMAL DATA_0002	      2 CACHED	MEMBER	     ONLINE  NORMAL   /dev/asmdisk1_udev_sdd1	     DATA_0002
DATA	   MOUNTED    NORMAL DATA_0003	      3 CACHED	MEMBER	     ONLINE  DROPPING /dev/asmdisk1_udev_sde1	     DATA_0003

DISK_GRP		       GROUP_NUMBER OPERA EST_MINUTES
------------------------------ ------------ ----- -----------
DATA					  1 REBAL	   25

After some time
DG_NAME    DG_STATE   TYPE   NAME	 DSK_NO MOUNT_S HEADER_STATU MODE_ST STATE    PATH			     FAILGROUP
---------- ---------- ------ ---------- ------- ------- ------------ ------- -------- ------------------------------ ---------------
DATA	   MOUNTED    NORMAL DATA_0000	      0 CACHED	MEMBER	     ONLINE  NORMAL   /dev/asmdisk1_udev_sdb1	     DATA_0000
DATA	   MOUNTED    NORMAL DATA_0001	      1 CACHED	MEMBER	     ONLINE  NORMAL   /dev/asmdisk1_udev_sdc1	     DATA_0001
DATA	   MOUNTED    NORMAL DATA_0002	      2 CACHED	MEMBER	     ONLINE  NORMAL   /dev/asmdisk1_udev_sdd1	     DATA_0002
					      1 CLOSED	FORMER	     ONLINE  NORMAL   /dev/asmdisk1_udev_sde1

Now clean up the disk so we can add it again to our +DATA DG 
# dd if=/dev/zero  of=/dev/asmdisk1_udev_sde1 bs=1024 count=1024
DG_NAME    DG_STATE   TYPE   NAME	 DSK_NO MOUNT_S HEADER_STATU MODE_ST STATE    PATH			     FAILGROUP
---------- ---------- ------ ---------- ------- ------- ------------ ------- -------- ------------------------------ ---------------
					      1 CLOSED	CANDIDATE    ONLINE  NORMAL   /dev/asmdisk1_udev_sde1

Change udev rules to rename the disk from /dev/asmdisk1_udev_sde1 to /dev/asm_data_10G_disk3
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBd3b6af8f-7dda2f15", 
  NAME="asm_data_10G_disk3", OWNER="grid", GROUP="asmadmin", MODE="0660"

Remove old disk device and restart udev
[root@grac41 ~]# rm  /dev/asmdisk1_udev_sde1
rm: remove block special file `/dev/asmdisk1_udev_sde1'? y
[root@grac41 ~]#  ~/start_udev.sh
Starting udev:                                             [  OK  ]
brw-rw----. 1 grid asmadmin 8, 113 Jul 10 08:14 /dev/asm_data_10G_disk3
brw-rw----. 1 grid asmadmin 8,  65 Jul 10 08:14 /dev/asmdisk1_udev_sdb1
brw-rw----. 1 grid asmadmin 8,  81 Jul 10 08:14 /dev/asmdisk1_udev_sdc1
brw-rw----. 1 grid asmadmin 8,  97 Jul 10 08:14 /dev/asmdisk1_udev_sdd1
brw-rw----. 1 grid asmadmin 8, 129 Jul 10 08:14 /dev/asmdisk_fra1
brw-rw----. 1 grid asmadmin 8, 145 Jul 10 08:14 /dev/asmdisk_fra2
brw-rw----. 1 grid asmadmin 8,  17 Jul 10 08:14 /dev/asm_ocr_11204_2G_disk1
brw-rw----. 1 grid asmadmin 8,  33 Jul 10 08:14 /dev/asm_ocr_11204_2G_disk2
brw-rw----. 1 grid asmadmin 8,  49 Jul 10 08:14 /dev/asm_ocr_11204_2G_disk3

Add the new disk to the diskgroup and monitor the rebalance operation
SQL> ALTER DISKGROUP DATA  ADD DISK '/dev/asm_data_10G_disk3';
SQL> select dg.name dg_name, dg.state dg_state, dg.type, d.name, d.disk_number dsk_no, d.mount_status, d.header_status, d.mode_status,
            d.state, d.path, d.failgroup from v$asm_disk d, v$asm_diskgroup dg
     where dg.group_number(+) = d.group_number and dg.name = 'DATA' order by dg_name, dsk_no;

DG_NAME    DG_STATE   TYPE   NAME	 DSK_NO MOUNT_S HEADER_STATU MODE_ST STATE    PATH			     FAILGROUP
---------- ---------- ------ ---------- ------- ------- ------------ ------- -------- ------------------------------ ---------------
DATA	   MOUNTED    NORMAL DATA_0000	      0 CACHED	MEMBER	     ONLINE  NORMAL   /dev/asmdisk1_udev_sdb1	     DATA_0000
DATA	   MOUNTED    NORMAL DATA_0001	      1 CACHED	MEMBER	     ONLINE  NORMAL   /dev/asmdisk1_udev_sdc1	     DATA_0001
DATA	   MOUNTED    NORMAL DATA_0002	      2 CACHED	MEMBER	     ONLINE  NORMAL   /dev/asmdisk1_udev_sdd1	     DATA_0002
DATA	   MOUNTED    NORMAL DATA_0003	      3 CACHED	MEMBER	     ONLINE  NORMAL   /dev/asm_data_10G_disk3	     DATA_0003

SQL> select g.name disk_grp, o.group_number, operation, est_minutes from v$asm_operation o, v$asm_diskgroup g
     where g.group_number = o.group_number;
DISK_GRP		       GROUP_NUMBER OPERA EST_MINUTES
------------------------------ ------------ ----- -----------
DATA					  1 REBAL	    8
--> In about 8 minutes the rebalance operation will be finished

 

ASM Operation: Cancel a DROP DISK operation

  • The UNDROP DISKS clause of the ALTER DISKGROUP statement enables you to cancel all pending drops of disks within disk groups.
  • If a drop disk operation has already completed, this statement cannot restore the dropped disks.
SQL>  ALTER DISKGROUP DATA  DROP DISK DATA_0002;
Diskgroup altered.

DG_NAME    DG_STATE   TYPE   NAME     DSK_NO MOUNT_S HEADER_STATU MODE_ST STATE    PATH                 FAILGROUP
---------- ---------- ------ ---------- ------- ------- ------------ ------- -------- ------------------------------ ---------------
DATA       MOUNTED    NORMAL DATA_0000          0 CACHED    MEMBER         ONLINE  NORMAL   /dev/asmdisk1_udev_sdb1         DATA_0000
DATA       MOUNTED    NORMAL DATA_0001          1 CACHED    MEMBER         ONLINE  NORMAL   /dev/asmdisk1_udev_sdc1         DATA_0001
DATA       MOUNTED    NORMAL DATA_0002          2 CACHED    MEMBER         ONLINE  DROPPING /dev/asm_data_10G_disk2         DATA_0002
DATA       MOUNTED    NORMAL DATA_0003          3 CACHED    MEMBER         ONLINE  NORMAL   /dev/asm_data_10G_disk3         DATA_0003

DISK_GRP               GROUP_NUMBER OPERA EST_MINUTES
------------------------------ ------------ ----- -----------
DATA                      1 REBAL       14

SQL> ALTER DISKGROUP data UNDROP DISKS;
Diskgroup altered.

DG_NAME    DG_STATE   TYPE   NAME     DSK_NO MOUNT_S HEADER_STATU MODE_ST STATE    PATH                 FAILGROUP
---------- ---------- ------ ---------- ------- ------- ------------ ------- -------- ------------------------------ ---------------
DATA       MOUNTED    NORMAL DATA_0000          0 CACHED    MEMBER         ONLINE  NORMAL   /dev/asmdisk1_udev_sdb1         DATA_0000
DATA       MOUNTED    NORMAL DATA_0001          1 CACHED    MEMBER         ONLINE  NORMAL   /dev/asmdisk1_udev_sdc1         DATA_0001
DATA       MOUNTED    NORMAL DATA_0002          2 CACHED    MEMBER         ONLINE  NORMAL   /dev/asm_data_10G_disk2         DATA_0002
DATA       MOUNTED    NORMAL DATA_0003          3 CACHED    MEMBER         ONLINE  NORMAL   /dev/asm_data_10G_disk3         DATA_0003

DISK_GRP               GROUP_NUMBER OPERA EST_MINUTES
------------------------------ ------------ ----- -----------
DATA                      1 REBAL        4
--> Disk DATA_0002 switched from DROPPING back to NORMAL.
    The rebalance triggered by the UNDROP will finish in about 4 minutes.
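
Once the rebalance finishes, v$asm_operation returns no rows:
SQL> select * from v$asm_operation;
no rows selected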