Root Disk Mirroring On Debian Linux

Mirroring is a simple way to gain redundancy on a single Unix server. By building mirrored root disks (as a RAID1 group), the system becomes safer if one disk fails : it remains accessible even while a single disk failure is being dealt with.

Mirroring with 2 physical disks

In this scenario we will build a mirrored root disk on a Debian Linux server. We will prepare a new disk drive as the first component of the RAID1 group, then copy the entire content of the root disk onto it. After the data has been copied, we will reboot the server and force it to boot from the new disk. Once the system is up, we will attach the existing root disk as the second component of the RAID1 group. In the end, both disks will be bootable.

Preparation

  1. Let’s start by identifying the existing system :
    • Platform : this test environment uses Debian Linux release 7.1 (codename : Wheezy) :
      root@debian01:~# date
      Tue Sep 24 00:48:07 WIT 2013
      root@debian01:~# uname -a
      Linux debian01 3.2.0-4-686-pae #1 SMP Debian 3.2.46-1+deb7u1 i686 GNU/Linux
      root@debian01:~# lsb_release -a
      No LSB modules are available.
      Distributor ID: Debian
      Description:    Debian GNU/Linux 7.1 (wheezy)
      Release:    7.1
      Codename:   wheezy
      root@debian01:~#
    • Physical disks : we can check the installed disks using the fdisk -l command as shown in the following example :
      root@debian01:~# fdisk -l
      
      Disk /dev/sda: 21.5 GB, 21474836480 bytes
      255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x0004f798
      
         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *        2048      194559       96256   83  Linux
      /dev/sda2          194560    19726335     9765888   83  Linux
      /dev/sda3        19726336    23631871     1952768   82  Linux swap / Solaris
      /dev/sda4        23631872    41940991     9154560   83  Linux
      
      Disk /dev/sdb: 21.5 GB, 21474836480 bytes
      255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000
      
      Disk /dev/sdb doesn't contain a valid partition table
      root@debian01:~#

      In the example above, we can see 2 disk drives installed. The first is the active root disk (/dev/sda) and the second is the new disk (/dev/sdb).

      ASSUMPTION : For the sake of simplicity we will call the existing root disk (/dev/sda) “rootdisk”, and the second disk (/dev/sdb) “rootmirror”.

    • Filesystem : we also need to know which filesystem is used by each partition. For that we can use the blkid command, which also gives us the UUID of each partition :
      root@debian01:~# blkid
      /dev/sda3: UUID="c1b679fb-d59f-4478-a138-fe2e3a6bceb0" TYPE="swap"
      /dev/sr0: LABEL="Debian 7.1.0 i386 1" TYPE="iso9660"
      /dev/sda1: UUID="1398dd06-7ef2-46c3-9ace-a2328b1db783" TYPE="ext3"
      /dev/sda2: UUID="4a066d6d-c8ad-46a4-ae54-118da63dcbd2" TYPE="ext3"
      /dev/sda4: UUID="8d819942-fe5d-4634-aab7-28ceca9b2fe1" TYPE="ext3"
      root@debian01:~#
    • Partition structure : we can check the existing partition structure & its mountpoints by examining the /etc/fstab file :
      root@debian01:~# cat /etc/fstab
      # /etc/fstab: static file system information.
      #
      # Use 'blkid' to print the universally unique identifier for a
      # device; this may be used with UUID= as a more robust way to name devices
      # that works even if disks are added and removed. See fstab(5).
      #
      # <file system> <mount point>   <type>  <options>       <dump>  <pass>
      # / was on /dev/sda2 during installation
      UUID=4a066d6d-c8ad-46a4-ae54-118da63dcbd2 /               ext3    errors=remount-ro 0       1
      # /boot was on /dev/sda1 during installation
      UUID=1398dd06-7ef2-46c3-9ace-a2328b1db783 /boot           ext3    defaults        0       2
      # /home was on /dev/sda4 during installation
      UUID=8d819942-fe5d-4634-aab7-28ceca9b2fe1 /home           ext3    defaults        0       2
      # swap was on /dev/sda3 during installation
      UUID=c1b679fb-d59f-4478-a138-fe2e3a6bceb0 none            swap    sw              0       0
      /dev/sr0        /media/cdrom0   udf,iso9660 user,noauto     0       0
      root@debian01:~#

      Or we can use the simpler but slightly less informative df command :

      root@debian01:~# df -h
      Filesystem                                              Size  Used Avail Use% Mounted on
      rootfs                                                  9.2G  636M  8.1G   8% /
      udev                                                     10M     0   10M   0% /dev
      tmpfs                                                   101M  236K  101M   1% /run
      /dev/disk/by-uuid/4a066d6d-c8ad-46a4-ae54-118da63dcbd2  9.2G  636M  8.1G   8% /
      tmpfs                                                   5.0M     0  5.0M   0% /run/lock
      tmpfs                                                   584M     0  584M   0% /run/shm
      /dev/sda1                                                92M   22M   65M  25% /boot
      /dev/sda4                                               8.6G  148M  8.1G   2% /home
      root@debian01:~#

      From those two outputs, we know that :

      • /dev/sda1 contains the /boot filesystem
      • /dev/sda2 contains the root filesystem
      • /dev/sda3 is used as swap space
      • /dev/sda4 contains the /home filesystem
  2. We will use the mdadm package to manage RAID groups and build the mirror. To install the mdadm package the Debian way, we can use the apt-get install command as shown below :
    root@debian01:~# apt-get install mdadm
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    The following NEW packages will be installed:
      mdadm
    0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
    Need to get 0 B/566 kB of archives.
    After this operation, 1098 kB of additional disk space will be used.
    Preconfiguring packages ...
    Selecting previously unselected package mdadm.
    (Reading database ... 24694 files and directories currently installed.)
    Unpacking mdadm (from .../m/mdadm/mdadm_3.2.5-5_i386.deb) ...
    Processing triggers for man-db ...
    Setting up mdadm (3.2.5-5) ...
    Generating array device nodes... done.
    Generating mdadm.conf... done.
    update-initramfs: deferring update (trigger activated)
    [ ok ] Assembling MD arrays...done (no arrays found in config file or automatically).
    [ ok ] Starting MD monitoring service: mdadm --monitor.
    Processing triggers for initramfs-tools ...
    update-initramfs: Generating /boot/initrd.img-3.2.0-4-686-pae
    W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
    W: mdadm: no arrays defined in configuration file.
    root@debian01:~#
  3. Before moving forward, we need to back up several important files (/etc/fstab, /etc/default/grub, and /etc/mdadm/mdadm.conf) as shown in the following example :
    root@debian01:~# cp /etc/fstab /etc/fstab_orig
    root@debian01:~# cp /etc/default/grub  /etc/default/grub_orig
    root@debian01:~# cp /etc/mdadm/mdadm.conf  /etc/mdadm/mdadm.conf_orig
  4. Next we need to load several RAID modules into the running kernel :
    root@debian01:~# modprobe linear
    root@debian01:~# modprobe multipath
    root@debian01:~# modprobe raid0
    root@debian01:~# modprobe raid1
    root@debian01:~# modprobe raid5
    root@debian01:~# modprobe raid6
    root@debian01:~# modprobe raid10
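    These modprobe calls affect only the running kernel. As an optional precaution we can also register raid1 in /etc/modules so the module is loaded on every boot, although the mdadm package and update-initramfs normally take care of this automatically :

    # Optional: persist the raid1 module across reboots (usually redundant,
    # since the initramfs generated by mdadm already loads it)
    echo raid1 >> /etc/modules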
  5. Information about active & running RAID groups can be found in the /proc/mdstat file :
    root@debian01:~# cat /proc/mdstat 
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    unused devices: <none>
    root@debian01:~#

    We can’t see any information yet since we don’t have an active RAID group. We will come back to /proc/mdstat once the RAID1 group has been created.

RAID1 Group Setup

We will start by creating a new RAID1 group that joins both disks in a mirrored configuration.

  1. First we will work on the new disk (/dev/sdb), or what we call rootmirror. We need to prepare rootmirror before putting it in the RAID1 group. Since rootmirror is a new disk, it doesn’t have a valid partition table yet :
    root@debian01:~# fdisk -l /dev/sdb
    
    Disk /dev/sdb: 21.5 GB, 21474836480 bytes
    255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/sdb doesn't contain a valid partition table
    root@debian01:~#

    What we need to do is copy the partition table from the existing root disk (/dev/sda) using the sfdisk command as shown below :

    root@debian01:~# sfdisk -d /dev/sda | sfdisk --force /dev/sdb
    Checking that no-one is using this disk right now ...
    OK
    
    Disk /dev/sdb: 2610 cylinders, 255 heads, 63 sectors/track
    
    sfdisk: ERROR: sector 0 does not have an msdos signature
     /dev/sdb: unrecognized partition table type
    Old situation:
    No partitions found
    New situation:
    Units = sectors of 512 bytes, counting from 0
    
       Device Boot    Start       End   #sectors  Id  System
    /dev/sdb1   *      2048    194559     192512  83  Linux
    /dev/sdb2        194560  19726335   19531776  83  Linux
    /dev/sdb3      19726336  23631871    3905536  82  Linux swap / Solaris
    /dev/sdb4      23631872  41940991   18309120  83  Linux
    Warning: partition 1 does not end at a cylinder boundary
    Warning: partition 2 does not start at a cylinder boundary
    Warning: partition 2 does not end at a cylinder boundary
    Warning: partition 3 does not start at a cylinder boundary
    Warning: partition 3 does not end at a cylinder boundary
    Warning: partition 4 does not start at a cylinder boundary
    Warning: partition 4 does not end at a cylinder boundary
    Successfully wrote the new partition table
    
    Re-reading the partition table ...
    
    If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
    to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
    (See fdisk(8).)
    root@debian01:~#

    After copying the partition table, we can see that rootmirror now has the same partition layout as the root disk :

    root@debian01:~# fdisk -l /dev/sdb
    
    Disk /dev/sdb: 21.5 GB, 21474836480 bytes
    255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1   *        2048      194559       96256   83  Linux
    /dev/sdb2          194560    19726335     9765888   83  Linux
    /dev/sdb3        19726336    23631871     1952768   82  Linux swap / Solaris
    /dev/sdb4        23631872    41940991     9154560   83  Linux
    root@debian01:~#
  2. Since we will join rootmirror into a RAID1 group, we need to set every partition on it to the “Linux raid auto” type. We use the fdisk command to set the partition type; here is the sample :
    root@debian01:~# fdisk /dev/sdb
    
    Command (m for help): t
    Partition number (1-4): 1
    Hex code (type L to list codes): L
    
     0  Empty           24  NEC DOS         81  Minix / old Lin bf  Solaris        
     1  FAT12           27  Hidden NTFS Win 82  Linux swap / So c1  DRDOS/sec (FAT-
     2  XENIX root      39  Plan 9          83  Linux           c4  DRDOS/sec (FAT-
     3  XENIX usr       3c  PartitionMagic  84  OS/2 hidden C:  c6  DRDOS/sec (FAT-
     4  FAT16 <32M      40  Venix 80286     85  Linux extended  c7  Syrinx         
     5  Extended        41  PPC PReP Boot   86  NTFS volume set da  Non-FS data    
     6  FAT16           42  SFS             87  NTFS volume set db  CP/M / CTOS / .
     7  HPFS/NTFS/exFAT 4d  QNX4.x          88  Linux plaintext de  Dell Utility   
     8  AIX             4e  QNX4.x 2nd part 8e  Linux LVM       df  BootIt         
     9  AIX bootable    4f  QNX4.x 3rd part 93  Amoeba          e1  DOS access     
     a  OS/2 Boot Manag 50  OnTrack DM      94  Amoeba BBT      e3  DOS R/O        
     b  W95 FAT32       51  OnTrack DM6 Aux 9f  BSD/OS          e4  SpeedStor      
     c  W95 FAT32 (LBA) 52  CP/M            a0  IBM Thinkpad hi eb  BeOS fs        
     e  W95 FAT16 (LBA) 53  OnTrack DM6 Aux a5  FreeBSD         ee  GPT            
     f  W95 Ext'd (LBA) 54  OnTrackDM6      a6  OpenBSD         ef  EFI (FAT-12/16/
    10  OPUS            55  EZ-Drive        a7  NeXTSTEP        f0  Linux/PA-RISC b
    11  Hidden FAT12    56  Golden Bow      a8  Darwin UFS      f1  SpeedStor      
    12  Compaq diagnost 5c  Priam Edisk     a9  NetBSD          f4  SpeedStor      
    14  Hidden FAT16 <3 61  SpeedStor       ab  Darwin boot     f2  DOS secondary  
    16  Hidden FAT16    63  GNU HURD or Sys af  HFS / HFS+      fb  VMware VMFS    
    17  Hidden HPFS/NTF 64  Novell Netware  b7  BSDI fs         fc  VMware VMKCORE 
    18  AST SmartSleep  65  Novell Netware  b8  BSDI swap       fd  Linux raid auto
    1b  Hidden W95 FAT3 70  DiskSecure Mult bb  Boot Wizard hid fe  LANstep        
    1c  Hidden W95 FAT3 75  PC/IX           be  Solaris boot    ff  BBT            
    1e  Hidden W95 FAT1 80  Old Minix      
    Hex code (type L to list codes): fd
    Changed system type of partition 1 to fd (Linux raid autodetect)
    
    Command (m for help): t
    Partition number (1-4): 2
    Hex code (type L to list codes): fd
    Changed system type of partition 2 to fd (Linux raid autodetect)
    
    Command (m for help): t
    Partition number (1-4): 3
    Hex code (type L to list codes): fd
    Changed system type of partition 3 to fd (Linux raid autodetect)
    
    Command (m for help): t
    Partition number (1-4): 4
    Hex code (type L to list codes): fd
    Changed system type of partition 4 to fd (Linux raid autodetect)
    
    Command (m for help): w
    The partition table has been altered!
    
    Calling ioctl() to re-read partition table.
    Syncing disks.
    root@debian01:~#

    Here is an explanation of what happens in the example above :

    • Start the fdisk utility by typing “fdisk /dev/sdb” followed by Enter. The fdisk utility is then ready to work on /dev/sdb.
    • Type t to start setting the partition type. Press Enter to continue.
    • Then type the number of the partition we want to set. Press Enter to continue.
    • There are many partition types available; we can list them all by typing L followed by Enter.
    • In this case we want the partition type “Linux raid auto”, so we type fd followed by Enter.
    • After all partitions are set, we must finalize the changes by typing w (as in “Write these changes to the disk”) followed by Enter.
  3. With the partition types set, we can continue by creating the md devices, a.k.a. “virtual partitions” (md is an abbreviation for Multiple Devices). Before creating a new md device, we should clean up each partition using the following commands :
    root@debian01:~# mdadm --zero-superblock /dev/sdb1
    root@debian01:~# mdadm --zero-superblock /dev/sdb2
    root@debian01:~# mdadm --zero-superblock /dev/sdb3
    root@debian01:~# mdadm --zero-superblock /dev/sdb4

    Strictly speaking, this step only applies if a partition has been used as an md device before. In this case it doesn’t really matter since we use a new, empty disk.
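    For disks with more partitions, the same clean-up can be written as a small loop; this sketch is equivalent to the four commands above :

    # Wipe any stale md metadata from each sdb partition (a no-op on a fresh disk)
    for part in /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb4; do
        mdadm --zero-superblock "$part"
    done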

  4. To create an md device, we use the mdadm --create command. Here is the syntax :
    mdadm --create /dev/md<ID> --level=<RAID level> --raid-disks=<number of devices> <device#1> <device#2>

    We will create an md device for each partition. Since we want to create a RAID1 disk group, we set --level=1. And because only 2 physical disks are involved, we set --raid-disks=2.

    So let’s start by preparing the first partition of rootmirror as the first md device :

    root@debian01:~# mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
    mdadm: Note: this array has metadata at the start and
        may not be suitable as a boot device.  If you plan to
        store '/boot' on this device please ensure that
        your boot-loader understands md/v1.x metadata, or use
        --metadata=0.90
    Continue creating array? y
    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md0 started.
    root@debian01:~#

    Because we don’t want to touch the existing root disk (/dev/sda) yet, we mark its slot in the array as “missing”.

    Then we can continue with the second partition of rootmirror :

    root@debian01:~# mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb2
    mdadm: Note: this array has metadata at the start and
        may not be suitable as a boot device.  If you plan to
        store '/boot' on this device please ensure that
        your boot-loader understands md/v1.x metadata, or use
        --metadata=0.90
    Continue creating array? y
    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md1 started.
    root@debian01:~#

    We finish by creating virtual partitions for the third and fourth partitions of rootmirror (a /proc/mdstat check follows the output) :

    root@debian01:~# mdadm --create /dev/md2 --level=1 --raid-disks=2 missing /dev/sdb3
    mdadm: Note: this array has metadata at the start and
        may not be suitable as a boot device.  If you plan to
        store '/boot' on this device please ensure that
        your boot-loader understands md/v1.x metadata, or use
        --metadata=0.90
    Continue creating array? y
    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md2 started.
    root@debian01:~#
    root@debian01:~# mdadm --create /dev/md3 --level=1 --raid-disks=2 missing /dev/sdb4
    mdadm: Note: this array has metadata at the start and
        may not be suitable as a boot device.  If you plan to
        store '/boot' on this device please ensure that
        your boot-loader understands md/v1.x metadata, or use
        --metadata=0.90
    Continue creating array? y
    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md3 started.
    root@debian01:~#
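    As promised in the Preparation section, /proc/mdstat is now worth another look. Each of the four arrays should appear degraded, shown as [2/1] [_U], because the slot reserved for the root disk is still marked missing :

    # md0..md3 should each list one active member (sdbX) and one missing slot
    cat /proc/mdstat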
  5. Now that we have 4 virtual partitions, the next step is to create a new filesystem on each of them. We will use the ext3 filesystem for /dev/md0, /dev/md1, and /dev/md3. To create an ext3 filesystem we use the mkfs.ext3 command as shown in the following example :
    root@debian01:~# mkfs.ext3 /dev/md0 
    mke2fs 1.42.5 (29-Jul-2012)
    Filesystem label=
    OS type: Linux
    Block size=1024 (log=0)
    Fragment size=1024 (log=0)
    Stride=0 blocks, Stripe width=0 blocks
    24096 inodes, 96128 blocks
    4806 blocks (5.00%) reserved for the super user
    First data block=1
    Maximum filesystem blocks=67371008
    12 block groups
    8192 blocks per group, 8192 fragments per group
    2008 inodes per group
    Superblock backups stored on blocks: 
        8193, 24577, 40961, 57345, 73729
    
    Allocating group tables: done                            
    Writing inode tables: done                            
    Creating journal (4096 blocks): done
    Writing superblocks and filesystem accounting information: done 
    
    root@debian01:~# mkfs.ext3 /dev/md1
    mke2fs 1.42.5 (29-Jul-2012)
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=0 blocks, Stripe width=0 blocks
    610800 inodes, 2439392 blocks
    121969 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=2499805184
    75 block groups
    32768 blocks per group, 32768 fragments per group
    8144 inodes per group
    Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
    
    Allocating group tables: done                            
    Writing inode tables: done                            
    Creating journal (32768 blocks): done
    Writing superblocks and filesystem accounting information: done 
    
    root@debian01:~# mkfs.ext3 /dev/md3
    mke2fs 1.42.5 (29-Jul-2012)
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=0 blocks, Stripe width=0 blocks
    572320 inodes, 2286560 blocks
    114328 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=2344615936
    70 block groups
    32768 blocks per group, 32768 fragments per group
    8176 inodes per group
    Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
    
    Allocating group tables: done                            
    Writing inode tables: done                            
    Creating journal (32768 blocks): done
    Writing superblocks and filesystem accounting information: done 
    
    root@debian01:~#
  6. /dev/md2 will be marked as swap space using the mkswap command as shown in the following example :
    root@debian01:~# mkswap -f /dev/md2
    Setting up swapspace version 1, size = 1951676 KiB
    no label, UUID=2fa8b47b-e889-4160-9fd0-c21b88b6d3e3
    root@debian01:~#
  7. Each virtual partition needs to be started during the boot process, so we must register all the virtual partitions we have made in the mdadm.conf file. What we need to register is the output of the following command :
    root@debian01:~# mdadm --examine --scan
    ARRAY /dev/md/0 metadata=1.2 UUID=5d7d8672:c019a7d0:69310f29:3cd65d04 name=debian01:0
    ARRAY /dev/md/1 metadata=1.2 UUID=67631129:b7766c7e:d1200d55:036ad1c6 name=debian01:1
    ARRAY /dev/md/2 metadata=1.2 UUID=19cbe54c:2aa2126c:92eda6ee:ffc62fb9 name=debian01:2
    ARRAY /dev/md/3 metadata=1.2 UUID=d15793a1:348c6380:a94d73b3:a47f3bf7 name=debian01:3
    root@debian01:~#

    We can copy that output into /etc/mdadm/mdadm.conf manually using a text editor, or we can use simple shell redirection as shown below (with a quick verification afterwards) :

    root@debian01:~# mdadm --examine --scan >> /etc/mdadm/mdadm.conf
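    A quick check confirms that the ARRAY definitions really landed in the configuration file (and were not duplicated, in case the redirection was run more than once) :

    # Expect exactly four ARRAY lines, one per md device
    grep ^ARRAY /etc/mdadm/mdadm.conf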
  8. On a Linux system each partition has its own mountpoint (except for the swap partition). Information about each partition and its related mountpoint is stored in the /etc/fstab file. Here is the current /etc/fstab file :
    root@debian01:~# cat /etc/fstab
    # /etc/fstab: static file system information.
    #
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point>   <type>  <options>       <dump>  <pass>
    # / was on /dev/sda2 during installation
    UUID=4a066d6d-c8ad-46a4-ae54-118da63dcbd2 /               ext3    errors=remount-ro 0       1
    # /boot was on /dev/sda1 during installation
    UUID=1398dd06-7ef2-46c3-9ace-a2328b1db783 /boot           ext3    defaults        0       2
    # /home was on /dev/sda4 during installation
    UUID=8d819942-fe5d-4634-aab7-28ceca9b2fe1 /home           ext3    defaults        0       2
    # swap was on /dev/sda3 during installation
    UUID=c1b679fb-d59f-4478-a138-fe2e3a6bceb0 none            swap    sw              0       0
    /dev/sr0        /media/cdrom0   udf,iso9660 user,noauto     0       0
    root@debian01:~#

    Each partition is listed by its UUID (Universally Unique Identifier). Those UUIDs belong to the existing root disk (/dev/sda). We need to replace each device address with the identity of its new virtual partition. Here is the final /etc/fstab after modification :

    root@debian01:~# cat /etc/fstab
    # /etc/fstab: static file system information.
    #
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point>   <type>  <options>       <dump>  <pass>
    
    /dev/md1    /               ext3    errors=remount-ro   0       1
    /dev/md0    /boot           ext3    defaults            0       2
    /dev/md3    /home           ext3    defaults            0       2
    /dev/md2    none            swap    sw                  0       0
    /dev/sr0    /media/cdrom0   udf,iso9660 user,noauto     0       0
    root@debian01:~#
  9. The next step is to modify /etc/default/grub : we need to set the GRUB_DISABLE_LINUX_UUID parameter to true. By default the GRUB_DISABLE_LINUX_UUID=true line is commented out, so all we need to do is remove the leading # to uncomment it, as sketched below.
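    Here is one way to make the change; this sed one-liner assumes the line still has the stock commented-out form #GRUB_DISABLE_LINUX_UUID=true :

    # Uncomment the parameter, then verify the result
    sed -i 's/^#GRUB_DISABLE_LINUX_UUID=true/GRUB_DISABLE_LINUX_UUID=true/' /etc/default/grub
    grep GRUB_DISABLE_LINUX_UUID /etc/default/grub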
  10. Before we start copying the files of the existing root disk, we need to mount each virtual partition. Here is how we mount all the virtual partitions :
    root@debian01:~# mkdir /mnt/md0
    root@debian01:~# mkdir /mnt/md1
    root@debian01:~# mkdir /mnt/md3
    root@debian01:~# mount /dev/md0 /mnt/md0
    root@debian01:~# mount /dev/md1 /mnt/md1
    root@debian01:~# mount /dev/md3 /mnt/md3

    We don’t mount /dev/md2 since it is the swap partition.

  11. With all virtual partitions mounted, we can start copying files from the existing root disk. The cp flags preserve links (-d) and permissions (-p), recurse (-R), and stay on one filesystem (-x); note the /boot/. and /home/. forms, which copy directory contents instead of nesting a boot/ or home/ subdirectory inside the new filesystem. An rsync alternative is sketched after the commands :
    root@debian01:~# cp -dpRx / /mnt/md1
    root@debian01:~# cp -dpRx /boot/. /mnt/md0
    root@debian01:~# cp -dpRx /home/. /mnt/md3
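    As an aside, rsync (assuming the rsync package is installed) can do the same copy and is easier to re-run should it be interrupted; a minimal sketch :

    # -a preserves permissions/owners/links/times, -H keeps hard links,
    # -x stays on one filesystem; trailing slashes copy directory contents
    rsync -aHx / /mnt/md1/
    rsync -aHx /boot/ /mnt/md0/
    rsync -aHx /home/ /mnt/md3/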
  12. Then we need to install the boot loader for the new disk. We will do it from a chroot environment, which takes a few steps to prepare, as shown in the following example :
    root@debian01:~# umount /mnt/md0
    root@debian01:~# mount /dev/md0 /mnt/md1/boot
    root@debian01:~# mount -t proc none /mnt/md1/proc
    root@debian01:~# mount -o bind /dev /mnt/md1/dev
    root@debian01:~# mount -o bind /sys /mnt/md1/sys
  13. We can now enter the chroot environment & execute the update-grub command to regenerate the boot loader configuration, as shown in the following example :
    root@debian01:~# chroot /mnt/md1
    root@debian01:/# update-grub
    Generating grub.cfg ...
    Found linux image: /boot/vmlinuz-3.2.0-4-686-pae
    Found initrd image: /boot/initrd.img-3.2.0-4-686-pae
    done
  14. Still inside the chroot environment of the virtual partitions, we also need to regenerate the initramfs image. The initramfs is used during the boot process to assemble the arrays and mount the root filesystem.
    root@debian01:/# update-initramfs -u
    update-initramfs: Generating /boot/initrd.img-3.2.0-4-686-pae
    mdadm: cannot open /dev/md/0: No such file or directory
    mdadm: cannot open /dev/md/1: No such file or directory
    mdadm: cannot open /dev/md/2: No such file or directory
    mdadm: cannot open /dev/md/3: No such file or directory
    root@debian01:/#

    We can ignore these errors for now.

  15. The last thing that must be done from inside the chroot is reinstalling the boot loader on both disk drives. Debian uses the GRUB boot loader by default. To reinstall it we use the grub-install command as shown below :
    root@debian01:/# grub-install /dev/sda
    Installation finished. No error reported.
    root@debian01:/# grub-install /dev/sdb
    Installation finished. No error reported.
    root@debian01:/# exit
  16. Having already typed exit to leave the chroot environment, we can shut down the system :
    root@debian01:~# shutdown -h now
    
    Broadcast message from root@debian01 (pts/3) (Tue Sep 24 01:04:03 2013):
    
    The system is going down for system halt NOW!
    root@debian01:~#
  17. Then we boot the server from the second disk (/dev/sdb), for example by changing the boot device order in the BIOS or hypervisor. If all the setup above succeeded, the system can boot from the /dev/sdb (rootmirror) disk.

Attach Root Disk To RAID1 Group

After the reboot, the system should already be using the virtual partitions, as we can see from the df command output below :

root@debian01:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs          9.2G  637M  8.1G   8% /
udev             10M     0   10M   0% /dev
tmpfs           101M  276K  101M   1% /run
/dev/md1        9.2G  637M  8.1G   8% /
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           584M     0  584M   0% /run/shm
/dev/md0         91M   22M   65M  26% /boot
/dev/md3        8.6G  148M  8.1G   2% /home
root@debian01:~#

We can check the RAID1 information for each virtual partition using the following command :

mdadm -D /dev/md<ID>

At this stage each RAID1 group has only a single component : the rootmirror disk we prepared in the previous section. Since we defined the RAID1 groups to have 2 components, the mdadm -D command reports a degraded state for every virtual partition.

root@debian01:~# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Sep 24 00:54:31 2013
     Raid Level : raid1
     Array Size : 96128 (93.89 MiB 98.44 MB)
  Used Dev Size : 96128 (93.89 MiB 98.44 MB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Tue Sep 24 01:04:33 2013
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : debian01:0  (local to host debian01)
           UUID : 5d7d8672:c019a7d0:69310f29:3cd65d04
         Events : 34

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       17        1      active sync   /dev/sdb1
root@debian01:~#
root@debian01:~# mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Tue Sep 24 00:54:39 2013
     Raid Level : raid1
     Array Size : 9757568 (9.31 GiB 9.99 GB)
  Used Dev Size : 9757568 (9.31 GiB 9.99 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Tue Sep 24 01:05:01 2013
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : debian01:1  (local to host debian01)
           UUID : 67631129:b7766c7e:d1200d55:036ad1c6
         Events : 118

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       18        1      active sync   /dev/sdb2
root@debian01:~#
root@debian01:~# mdadm -D /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Tue Sep 24 00:54:47 2013
     Raid Level : raid1
     Array Size : 1951680 (1906.26 MiB 1998.52 MB)
  Used Dev Size : 1951680 (1906.26 MiB 1998.52 MB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Tue Sep 24 00:55:18 2013
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : debian01:2  (local to host debian01)
           UUID : 19cbe54c:2aa2126c:92eda6ee:ffc62fb9
         Events : 2

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       19        1      active sync   /dev/sdb3
root@debian01:~#
root@debian01:~# mdadm -D /dev/md3
/dev/md3:
        Version : 1.2
  Creation Time : Tue Sep 24 00:54:55 2013
     Raid Level : raid1
     Array Size : 9146240 (8.72 GiB 9.37 GB)
  Used Dev Size : 9146240 (8.72 GiB 9.37 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Tue Sep 24 01:04:33 2013
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : debian01:3  (local to host debian01)
           UUID : d15793a1:348c6380:a94d73b3:a47f3bf7
         Events : 14

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       20        1      active sync   /dev/sdb4
root@debian01:~# 
root@debian01:~# cat /proc/mdstat 
Personalities : [raid1] 
md3 : active raid1 sdb4[1]
      9146240 blocks super 1.2 [2/1] [_U]

md2 : active (auto-read-only) raid1 sdb3[1]
      1951680 blocks super 1.2 [2/1] [_U]

md1 : active raid1 sdb2[1]
      9757568 blocks super 1.2 [2/1] [_U]

md0 : active raid1 sdb1[1]
      96128 blocks super 1.2 [2/1] [_U]

unused devices: <none>

root@debian01:~#

This section finishes the mirroring by attaching the previous rootdisk (/dev/sda) to the RAID1 group.

  1. We need to set the partition types on rootdisk to “Linux raid auto” :
    root@debian01:~# fdisk -l /dev/sda
    
    Disk /dev/sda: 21.5 GB, 21474836480 bytes
    255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0004f798
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *        2048      194559       96256   83  Linux
    /dev/sda2          194560    19726335     9765888   83  Linux
    /dev/sda3        19726336    23631871     1952768   82  Linux swap / Solaris
    /dev/sda4        23631872    41940991     9154560   83  Linux
    root@debian01:~# fdisk /dev/sda
    
    Command (m for help): t
    Partition number (1-4): 1
    Hex code (type L to list codes): fd
    Changed system type of partition 1 to fd (Linux raid autodetect)
    
    Command (m for help): t
    Partition number (1-4): 2
    Hex code (type L to list codes): fd
    Changed system type of partition 2 to fd (Linux raid autodetect)
    
    Command (m for help): t
    Partition number (1-4): 3
    Hex code (type L to list codes): fd
    Changed system type of partition 3 to fd (Linux raid autodetect)
    
    Command (m for help): t
    Partition number (1-4): 4
    Hex code (type L to list codes): fd
    Changed system type of partition 4 to fd (Linux raid autodetect)
    
    Command (m for help): w
    The partition table has been altered!
    
    Calling ioctl() to re-read partition table.
    Syncing disks.
    root@debian01:~#
  2. We can verify that every partition of /dev/sda (rootdisk) has now been converted to the Linux raid auto type.
    root@debian01:~# fdisk -l /dev/sda
    
    Disk /dev/sda: 21.5 GB, 21474836480 bytes
    255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0004f798
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *        2048      194559       96256   fd  Linux raid autodetect
    /dev/sda2          194560    19726335     9765888   fd  Linux raid autodetect
    /dev/sda3        19726336    23631871     1952768   fd  Linux raid autodetect
    /dev/sda4        23631872    41940991     9154560   fd  Linux raid autodetect
    root@debian01:~#
  3. Now we can add rootdisk to the existing RAID1 group. Each partition of rootdisk is attached to its corresponding virtual partition (md device).
    root@debian01:~# mdadm --add /dev/md0 /dev/sda1
    mdadm: added /dev/sda1
    root@debian01:~# mdadm --add /dev/md1 /dev/sda2
    mdadm: added /dev/sda2
    root@debian01:~# mdadm --add /dev/md2 /dev/sda3
    mdadm: added /dev/sda3
    root@debian01:~# mdadm --add /dev/md3 /dev/sda4
    mdadm: added /dev/sda4
    root@debian01:~#
  4. After all partitions of rootdisk have joined the RAID1 group, the system starts recovery/resynchronization automatically. We can monitor the recovery process by looking at the /proc/mdstat file (a continuous view is sketched after the output) :
    root@debian01:~# cat /proc/mdstat 
    Personalities : [raid1] 
    md3 : active raid1 sda4[2] sdb4[1]
          9146240 blocks super 1.2 [2/1] [_U]
            resync=DELAYED
    
    md2 : active raid1 sda3[2] sdb3[1]
          1951680 blocks super 1.2 [2/1] [_U]
            resync=DELAYED
    
    md1 : active raid1 sda2[2] sdb2[1]
          9757568 blocks super 1.2 [2/1] [_U]
          [==>..................]  recovery = 11.6% (1132800/9757568) finish=1.1min speed=125866K/sec
    
    md0 : active raid1 sda1[2] sdb1[1]
          96128 blocks super 1.2 [2/2] [UU]
    
    unused devices: <none>
    root@debian01:~#
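    If we would rather follow the rebuild continuously than re-run cat, the standard watch utility refreshes the view every 2 seconds :

    # Press Ctrl-C once every array shows [UU]
    watch cat /proc/mdstat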

    Or we can use the mdadm -D command as shown in the following example :

    root@debian01:~# mdadm -D /dev/md0
    /dev/md0:
            Version : 1.2
      Creation Time : Tue Sep 24 00:54:31 2013
         Raid Level : raid1
         Array Size : 96128 (93.89 MiB 98.44 MB)
      Used Dev Size : 96128 (93.89 MiB 98.44 MB)
       Raid Devices : 2
      Total Devices : 2
        Persistence : Superblock is persistent
    
        Update Time : Tue Sep 24 01:06:30 2013
              State : clean 
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0
    
               Name : debian01:0  (local to host debian01)
               UUID : 5d7d8672:c019a7d0:69310f29:3cd65d04
             Events : 55
    
        Number   Major   Minor   RaidDevice State
           2       8        1        0      active sync   /dev/sda1
           1       8       17        1      active sync   /dev/sdb1
    root@debian01:~# mdadm -D /dev/md1
    /dev/md1:
            Version : 1.2
      Creation Time : Tue Sep 24 00:54:39 2013
         Raid Level : raid1
         Array Size : 9757568 (9.31 GiB 9.99 GB)
      Used Dev Size : 9757568 (9.31 GiB 9.99 GB)
       Raid Devices : 2
      Total Devices : 2
        Persistence : Superblock is persistent
    
        Update Time : Tue Sep 24 01:06:42 2013
              State : clean, degraded, recovering 
     Active Devices : 1
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 1
    
     Rebuild Status : 13% complete
    
               Name : debian01:1  (local to host debian01)
               UUID : 67631129:b7766c7e:d1200d55:036ad1c6
             Events : 138
    
        Number   Major   Minor   RaidDevice State
           2       8        2        0      spare rebuilding   /dev/sda2
           1       8       18        1      active sync   /dev/sdb2
    root@debian01:~# mdadm -D /dev/md2
    /dev/md2:
            Version : 1.2
      Creation Time : Tue Sep 24 00:54:47 2013
         Raid Level : raid1
         Array Size : 1951680 (1906.26 MiB 1998.52 MB)
      Used Dev Size : 1951680 (1906.26 MiB 1998.52 MB)
       Raid Devices : 2
      Total Devices : 2
        Persistence : Superblock is persistent
    
        Update Time : Tue Sep 24 01:06:29 2013
              State : clean, degraded, resyncing (DELAYED) 
     Active Devices : 1
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 1
    
               Name : debian01:2  (local to host debian01)
               UUID : 19cbe54c:2aa2126c:92eda6ee:ffc62fb9
             Events : 4
    
        Number   Major   Minor   RaidDevice State
           2       8        3        0      spare rebuilding   /dev/sda3
           1       8       19        1      active sync   /dev/sdb3
    root@debian01:~# mdadm -D /dev/md3
    /dev/md3:
            Version : 1.2
      Creation Time : Tue Sep 24 00:54:55 2013
         Raid Level : raid1
         Array Size : 9146240 (8.72 GiB 9.37 GB)
      Used Dev Size : 9146240 (8.72 GiB 9.37 GB)
       Raid Devices : 2
      Total Devices : 2
        Persistence : Superblock is persistent
    
        Update Time : Tue Sep 24 01:06:32 2013
              State : clean, degraded, resyncing (DELAYED) 
     Active Devices : 1
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 1
    
               Name : debian01:3  (local to host debian01)
               UUID : d15793a1:348c6380:a94d73b3:a47f3bf7
             Events : 18
    
        Number   Major   Minor   RaidDevice State
           2       8        4        0      spare rebuilding   /dev/sda4
           1       8       20        1      active sync   /dev/sdb4
    root@debian01:~#
  5. When the recovery/resync process completes, both disks are in a synchronized state, with no more degraded status in the mdadm -D output :
    root@debian01:~# mdadm -D /dev/md0
    /dev/md0:
            Version : 1.2
      Creation Time : Tue Sep 24 00:54:31 2013
         Raid Level : raid1
         Array Size : 96128 (93.89 MiB 98.44 MB)
      Used Dev Size : 96128 (93.89 MiB 98.44 MB)
       Raid Devices : 2
      Total Devices : 2
        Persistence : Superblock is persistent
    
        Update Time : Tue Sep 24 01:06:43 2013
              State : clean 
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0
    
               Name : debian01:0  (local to host debian01)
               UUID : 5d7d8672:c019a7d0:69310f29:3cd65d04
             Events : 55
    
        Number   Major   Minor   RaidDevice State
           2       8        1        0      active sync   /dev/sda1
           1       8       17        1      active sync   /dev/sdb1
    root@debian01:~# mdadm -D /dev/md1
    /dev/md1:
            Version : 1.2
      Creation Time : Tue Sep 24 00:54:39 2013
         Raid Level : raid1
         Array Size : 9757568 (9.31 GiB 9.99 GB)
      Used Dev Size : 9757568 (9.31 GiB 9.99 GB)
       Raid Devices : 2
      Total Devices : 2
        Persistence : Superblock is persistent
    
        Update Time : Tue Sep 24 01:09:29 2013
              State : clean 
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0
    
               Name : debian01:1  (local to host debian01)
               UUID : 67631129:b7766c7e:d1200d55:036ad1c6
             Events : 157
    
        Number   Major   Minor   RaidDevice State
           2       8        2        0      active sync   /dev/sda2
           1       8       18        1      active sync   /dev/sdb2
    root@debian01:~# mdadm -D /dev/md2
    /dev/md2:
            Version : 1.2
      Creation Time : Tue Sep 24 00:54:47 2013
         Raid Level : raid1
         Array Size : 1951680 (1906.26 MiB 1998.52 MB)
      Used Dev Size : 1951680 (1906.26 MiB 1998.52 MB)
       Raid Devices : 2
      Total Devices : 2
        Persistence : Superblock is persistent
    
        Update Time : Tue Sep 24 01:08:59 2013
              State : clean 
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0
    
               Name : debian01:2  (local to host debian01)
               UUID : 19cbe54c:2aa2126c:92eda6ee:ffc62fb9
             Events : 23
    
        Number   Major   Minor   RaidDevice State
           2       8        3        0      active sync   /dev/sda3
           1       8       19        1      active sync   /dev/sdb3
    root@debian01:~# mdadm -D /dev/md3
    /dev/md3:
            Version : 1.2
      Creation Time : Tue Sep 24 00:54:55 2013
         Raid Level : raid1
         Array Size : 9146240 (8.72 GiB 9.37 GB)
      Used Dev Size : 9146240 (8.72 GiB 9.37 GB)
       Raid Devices : 2
      Total Devices : 2
        Persistence : Superblock is persistent
    
        Update Time : Tue Sep 24 01:08:49 2013
              State : clean 
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0
    
               Name : debian01:3  (local to host debian01)
               UUID : d15793a1:348c6380:a94d73b3:a47f3bf7
             Events : 37
    
        Number   Major   Minor   RaidDevice State
           2       8        4        0      active sync   /dev/sda4
           1       8       20        1      active sync   /dev/sdb4
    root@debian01:~#
  6. Now /dev/sda & /dev/sdb are successfully joined as mirrored disks and hold the same data. We can verify this by rebooting the server from each disk in turn; an optional on-line consistency check is sketched below.
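    Beyond a test reboot, the md layer can also verify a mirror on demand through sysfs; this optional sketch assumes the sync_action interface exposed by this kernel :

    # Ask md to compare both halves of the mirror (repeat for md1..md3)
    echo check > /sys/block/md0/md/sync_action
    # Progress appears in /proc/mdstat; afterwards the mismatch counter :
    cat /sys/block/md0/md/mismatch_cnt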
