Migrate a 10g Standard Edition database to ASM using RMAN

First, this document is based on a PDF published by Oracle in 2004 entitled “Oracle Database 10g Migration to Automatic Storage Management”. You should download and reference that document when switching to ASM. It presents two migration methods, “Cold” and “Hot”. The Hot method assumes you already have ASM up and running with its own storage allocated, and that is the method I used.

I am providing my notes here for two reasons: one, you may encounter errors during this process that are not noted in that document; and two, that document assumes Enterprise Edition.

The Hot migration method is very simple; for a 200GB database it took about 2 hours. I performed it on a weekend, so I had the luxury of having no one in the system at the time.

Set up ASM

See this page for some info on setting up ASM using Oracle Grid 11gR2. I recommend creating two disk groups, because you will want some redundancy for your control files and redo log files (we multiplex them across both groups below).
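
For reference, the disk group creation itself is just one statement per group on the ASM instance; asmca will generate the equivalent for you. The DATA1/DATA2 names and the ORCL:* disk strings below are placeholders for whatever your ASMLib or udev setup exposes, and external redundancy assumes your storage array handles the mirroring (use normal redundancy if it does not):

    -- run as SYSASM on the ASM instance
    create diskgroup DATA1 external redundancy disk 'ORCL:DISK1';
    create diskgroup DATA2 external redundancy disk 'ORCL:DISK2';
    -- confirm both groups are mounted and have space
    select name, state, total_mb, free_mb from v$asm_diskgroup;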

Preparing the database for ASM usage

The document from Oracle will have you move the flash recovery area to ASM as well, but in my case we left the flash recovery area on non-ASM disks so that our Linux Backup Exec agent could pick the files up. We will eventually store these files on OCFS when we cluster. ACFS (the new cluster file system that runs on ASM) would let you store such files there, but storing database files in an ACFS file system is not supported by Oracle! So you will not find any notes here regarding the switch of the recovery area.
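
If you take the same approach, it is worth double-checking before you start that the recovery area still points at a regular filesystem and not a disk group (my addition; this just displays the current setting):

    show parameter db_recovery_file_dest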

Also, note that the last step is the only one that requires the database to go down.

Steps:

  • Switch default destination for database files to ASM
    • alter system set db_create_file_dest='+DATA1' scope=both;
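    • A quick sanity check (my addition, not in the whitepaper) to confirm the new default took effect:
      • show parameter db_create_file_dest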
  • Block change tracking
    • Standard Edition does not support block change tracking, so don't worry about these steps.
  • Make a copy of the database using RMAN
    • This step takes the longest.
    • Obviously, your ASM disk group may not be called '+DATA1', so substitute your own disk group name throughout.
    • RMAN> backup device type disk incremental level 0 as copy tag 'ASM_Migration' database format '+DATA1';
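    • If you want to watch progress while the copy runs (my addition, not in the whitepaper), a query along these lines from another session gives a rough percentage complete:
      • select sofar, totalwork, round(sofar/totalwork*100,1) as pct_done from v$session_longops where opname like 'RMAN%' and totalwork > 0 and sofar <> totalwork;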
  • Moving redo log files
    • You will need to know a few things here.
      • How many log groups do you have and how many members do you have in each log group
        • select group#, count(*) from v$logfile group by group#;
      • What is the maximum number of members (MAXLOGMEMBERS) allowed in each log group?
        • ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
        • Go to your user dump destination (user_dump_dest) and find the trace file by date:
          • ls -l --sort=time | grep "Aug 27"
        • In the trace file, look for MAXLOGMEMBERS.
    • Hopefully each group currently has at least two fewer members than MAXLOGMEMBERS, since you will be adding two new members per group. If not, you will have to drop some existing members before you can proceed.
    • Now, add new members to each group:
      • alter database add logfile member '+DATA1', '+DATA2' to group 1;
      • Repeat for each group. Note that you are creating two new members per group, one in each disk group.
    • Now, you will have a bunch of invalid log members. You can make them valid by switching logfiles:
      • alter system switch logfile; -- execute once for each group
    • Now, drop the log files from the old location
      • select member from v$logfile where member like '/dg1%'; -- replace /dg1% with your old location
      • alter database drop logfile member '/dg1/app/oradata/MCHPROD/redo_g1.log'; -- execute once for each old logfile
    • While dropping the old members, you will probably hit this error:
      • ORA-01609: log 9 is the current log for thread 1 - cannot drop members
      • do another logfile switch, then retry the drop.
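    • Two queries I found handy while juggling the log groups (my additions, not in the whitepaper):
      • select group#, member, status from v$logfile order by group#; -- which members are still INVALID or STALE
      • select group#, status from v$log; -- which group is CURRENT, useful when you hit ORA-01609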
  • Moving temp tablespaces
    • Find out which tablespaces are temp and their sizes:
      • select * from dba_tablespaces where contents = 'TEMPORARY';
      • select bytes/1024/1024 as mb, name from v$tempfile;
    • For each temp tablespace, add a new tempfile
      • alter tablespace temps add tempfile size 20480m;
    • If you are doing this while users are online, you may have trouble dropping the old tempfiles. I recommend the process below rather than the one in the whitepaper:
      • alter database tempfile '/dg1/app/oradata/MCHPROD/temps01.tmp' offline; -- take the file offline so no new operations are permitted against it
      • alter database tempfile '/dg1/app/oradata/MCHPROD/temps01.tmp' drop;
        • Here is where you might get ORA-25152: TEMPFILE cannot be dropped at this time. This means a session is using it. Query v$sort_usage and v$temporary_lobs to see who. You can wait for them to finish, or kill the sessions :-). When the sessions are gone, reissue the drop.
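        • For reference, a query along these lines shows the sessions holding temp segments (my sketch, joining v$sort_usage to v$session; fill in the sid and serial# from its output if you choose to kill):
          • select s.sid, s.serial#, s.username, u.tablespace, u.blocks from v$sort_usage u, v$session s where u.session_addr = s.saddr;
          • alter system kill session '<sid>,<serial#>';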
  • Refresh the copy of the database using RMAN
    • This is the second-longest step.
    • RMAN> run {
      backup incremental level 1 for recover of copy with tag 'ASM_Migration' database;
      recover copy of database with tag 'ASM_Migration';
      }
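    • To confirm the image copies landed in ASM and are current (my addition, not in the whitepaper), list them and look for the 'ASM_Migration' tag in the output:
      • RMAN> list copy of database;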
      
  • Move control files
    • Make a backup of the control file into ASM. (Having an extra backup is nice, but this step is really just intended to create the necessary directories in ASM.) Do this for each control file location you plan to use in ASM:
      • alter database backup controlfile to '+DATA1';
      • alter database backup controlfile to '+DATA2';
    • find out your db_unique_name
      • select * from v$parameter where name like 'db%name';
    • update the control_files parameter in the spfile (substitute your db_unique_name):
      • alter system set control_files = '+DATA1/{DB_UNIQUE_NAME}/CONTROLFILE/mycontrol.ctl', '+DATA2/{DB_UNIQUE_NAME}/CONTROLFILE/mycontrol.ctl' scope=spfile;
    • write down the current (non-ASM) control file names; you will restore from one of these copies in the final step
      • select name from v$controlfile;
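    • Optionally, verify what the backup step created in ASM (my addition); asmcmd from the Grid Infrastructure home, with the ASM environment set, will show the new CONTROLFILE directory:
      • asmcmd ls +DATA1/{DB_UNIQUE_NAME}/CONTROLFILE/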
  • Switch the database to ASM disks!
    • RMAN> shutdown immediate;
      RMAN> startup nomount;
      RMAN> restore controlfile from '{an existing copy that you wrote down in the step above}';
      RMAN> alter database mount;
      RMAN> switch database to copy;
      RMAN> recover database;
      RMAN> alter database open;
      

You are now running on ASM! Now you can choose to clean up your disks where the old datafiles and log files were located.
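
Before you clean up, it is worth confirming that nothing is still referenced on the old filesystem. A quick sanity check I ran (my addition; all four queries should come back empty):

    select name from v$datafile where name not like '+%';
    select name from v$tempfile where name not like '+%';
    select member from v$logfile where member not like '+%';
    select name from v$controlfile where name not like '+%';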