Syncing Data Between Two Macs

At home I work on a Mac mini every day. But when I go to the office or to a client site, I bring a MacBook Air along. Working with two computers creates a data synchronization problem. Every day I do two manual syncs: before leaving for work I sync from the Mac mini to the MacBook Air, and after work I sync back from the MacBook Air to the Mac mini. Tedious? That's the risk of working with two different computers.


I chose rsync to do the synchronization. The command I use looks like this :

rsync -tarv --delete  ~/ORACLE-Stuff/ 192.168.10.13:~/ORACLE-Stuff/

This command is run from the MacBook Air to sync to the Mac mini. The ORACLE-Stuff folder is where I keep all my work files, while the IP 192.168.10.13 is the static IP of my Mac mini. The --delete option means that any file present on the Mac mini but absent on the MacBook gets deleted during the sync. The -t option preserves the modification date of every synced file (strictly speaking, -a already implies both -t and -r).
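Because --delete is destructive, it can be worth previewing a sync before running it for real. A small sketch (the preview_sync helper is my own naming here, not part of my .bashrc): adding -n (--dry-run) makes rsync only report what it would transfer and delete.

```shell
# Hypothetical helper: preview what a destructive sync WOULD do.
# -n / --dry-run makes rsync report transfers and deletions without
# actually performing them.
preview_sync(){
    rsync -tarvn --delete "$1" "$2"
}
# Usage: preview_sync ~/ORACLE-Stuff/ 192.168.10.13:~/ORACLE-Stuff/
```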

Granted, cloud services such as Dropbox are now plentiful and easy to use. I could simply put all my work data in the Dropbox folder and never do a manual sync like the one above. But given the state of internet access here, using Dropbox feels inefficient. My work folder is fairly large, around 27GB. I do have tens of gigabytes of free space on Dropbox, but that is not the main obstacle. The main obstacle, in my view, is that moving everything to Dropbox would probably take days of uploading and syncing to complete.

Back to rsync: rather than typing the same command over and over, I saved both commands as functions in the .bashrc file on the MacBook Air.

ttirtawi@macbook-air:~$ cat .bashrc 
syncmacbooktomacmini(){
    echo "Synchronize MacBook data to MacMini :"
    echo "  rsync -tarv --delete --exclude=OracleLinux_6_OCCN.ova ~/ORACLE-Stuff/ 192.168.10.13:~/ORACLE-Stuff/"
    while true; do
        read -p "Are you sure you want to sync MacBook to MacMini? " yn
        case $yn in
            [Yy]* ) echo "Starting to sync..."; rsync -tarv --delete --exclude=OracleLinux_6_OCCN.ova ~/ORACLE-Stuff/ 192.168.10.13:~/ORACLE-Stuff/; break;;
            [Nn]* ) return;;  # return, not exit: exit would close the interactive shell
            * ) echo "Please answer yes or no.";;
        esac
    done
}

syncmacminitomacbook(){
    echo "Synchronize MacMini data to Macbook :"
    echo "  rsync -tarv --delete --exclude=OracleLinux_6_OCCN.ova  192.168.10.13:~/ORACLE-Stuff/ ~/ORACLE-Stuff/"
    while true; do
        read -p "Are you sure you want to sync MacMini to Macbook? " yn
        case $yn in
            [Yy]* ) echo "Starting to sync..."; rsync -tarv --delete --exclude=OracleLinux_6_OCCN.ova  192.168.10.13:~/ORACLE-Stuff/ ~/ORACLE-Stuff/; break;;
            [Nn]* ) return;;  # return, not exit
            * ) echo "Please answer yes or no.";;
        esac
    done
}
ttirtawi@macbook-air:~$

So I just run the syncmacminitomacbook or syncmacbooktomacmini function, which saves a fair amount of typing of that rather long rsync command.
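Since the two functions differ only in direction, they could even be collapsed into one parameterized helper. A sketch (the syncdir name and the MACMINI_IP variable are my own, not from the .bashrc above):

```shell
MACMINI_IP=192.168.10.13   # assumed static IP of the Mac mini

# One generic confirm-then-sync helper instead of two near-identical
# functions; pass source and destination explicitly.
syncdir(){
    local src="$1" dst="$2" yn
    echo "Will run: rsync -tarv --delete $src $dst"
    read -p "Are you sure? [y/N] " yn
    case "$yn" in
        [Yy]*) rsync -tarv --delete "$src" "$dst" ;;
        *)     echo "Aborted." ;;
    esac
}
# Usage:
#   syncdir ~/ORACLE-Stuff/ "$MACMINI_IP":~/ORACLE-Stuff/   # MacBook -> Mac mini
#   syncdir "$MACMINI_IP":~/ORACLE-Stuff/ ~/ORACLE-Stuff/   # Mac mini -> MacBook
```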

I can also use those sync functions when I want to mirror data from the MacBook to my office laptop (a Lenovo X220). I run elementary OS Linux on the X220; as long as rsync is installed on the machine, syncing data is easy.

Another, arguably easier solution would be to move all the work data shared between the computers onto an external hard disk: keep everything on the external disk and just plug it into whichever computer needs it. Simple as that is, I find it less efficient, even a bit of a hassle. What about you? How do you share data between several computers?

Firefox Java Runtime Plugin On Linux

Day to day I need the Java plugin to access the Oracle NCC application. On Linux, the Firefox web browser usually ships without the Java plugin (JRE/Java Runtime Environment). To check whether Firefox already has the JRE, I usually visit this page : http://www.java.com/en/download/installed.jsp. The page automatically detects whether Firefox has the JRE. If no JRE is installed, I get a result like the following :

Installing the JRE is quite simple; here are the steps I usually follow to install it on Linux :

1. First I download the JRE installer file from the Java website; grab the file that matches your operating system. In this example I am using the 64-bit elementary OS Linux distribution (an Ubuntu derivative, I believe).

ttirtawi@x220:~$ ls -tlr Downloads/jre-7u45-linux-x64.tar.gz
-rw-rw-r-- 1 ttirtawi ttirtawi 46842143 Nov 20 00:19 Downloads/jre-7u45-linux-x64.tar.gz
ttirtawi@x220:~$

2. Copy the JRE installer to /usr/local :

ttirtawi@x220:~$ sudo cp Downloads/jre-7u45-linux-x64.tar.gz /usr/local/
[sudo] password for ttirtawi:
ttirtawi@x220:~$

3. Extract the JRE file now sitting in the /usr/local directory :

ttirtawi@x220:~$ cd /usr/local/
ttirtawi@x220:/usr/local$ ls -tlr jre-7u45-linux-x64.tar.gz
-rw-r--r-- 1 root root 46842143 Nov 20 00:31 jre-7u45-linux-x64.tar.gz
ttirtawi@x220:/usr/local$ sudo tar zxf jre-7u45-linux-x64.tar.gz
ttirtawi@x220:/usr/local$ ls -tlr | grep jre
drwxr-xr-x 6 uucp 143 4096 Oct 8 20:00 jre1.7.0_45
-rw-r--r-- 1 root root 46842143 Nov 20 00:31 jre-7u45-linux-x64.tar.gz
ttirtawi@x220:/usr/local$

4. Then I move to the Firefox profile directory under my home directory; in this example I have just one default profile, hylavoon.default:

ttirtawi@x220:/usr/local$ cd ~/.mozilla/firefox/
ttirtawi@x220:~/.mozilla/firefox$ cd hylavoon.default/
ttirtawi@x220:~/.mozilla/firefox/hylavoon.default$

If the operating system is 32-bit Linux, the plugin path used later looks like this instead : /usr/local/jre1.7.0_45/lib/i386/libnpjp2.so

5. Inside that profile directory I create a new directory named “plugins” :

ttirtawi@x220:~/.mozilla/firefox/hylavoon.default$ mkdir plugins
ttirtawi@x220:~/.mozilla/firefox/hylavoon.default$ cd plugins/
ttirtawi@x220:~/.mozilla/firefox/hylavoon.default/plugins$

6. Inside the plugins directory I create a soft link to the libnpjp2.so file :

ttirtawi@x220:~/.mozilla/firefox/hylavoon.default/plugins$ ln -sf /usr/local/jre1.7.0_45/lib/amd64/libnpjp2.so .
ttirtawi@x220:~/.mozilla/firefox/hylavoon.default/plugins$ ls -tlrh
total 0
lrwxrwxrwx 1 ttirtawi ttirtawi 44 Nov 20 00:33 libnpjp2.so -> /usr/local/jre1.7.0_45/lib/amd64/libnpjp2.so
ttirtawi@x220:~/.mozilla/firefox/hylavoon.default/plugins$

7. Restart Firefox and test again; Firefox is now equipped with the JRE plugin :

Now I can access the Oracle NCC application and its Java applet opens successfully :
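The whole procedure above can be condensed into a short script. This is a sketch only: the JRE version, the 64-bit amd64 path, and the hylavoon.default profile name all come from this particular machine and must be adjusted.

```shell
# Sketch of steps 1-6: extract the JRE and link the plugin into the
# Firefox profile. Paths and versions below are from this example.
install_jre_plugin(){
    local tarball=~/Downloads/jre-7u45-linux-x64.tar.gz
    local profile=~/.mozilla/firefox/hylavoon.default
    sudo tar zxf "$tarball" -C /usr/local          # steps 2-3: extract under /usr/local
    mkdir -p "$profile/plugins"                    # step 5: Firefox scans this directory
    # step 6: link the plugin (32-bit systems: lib/i386/libnpjp2.so)
    ln -sf /usr/local/jre1.7.0_45/lib/amd64/libnpjp2.so "$profile/plugins/"
}
```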

Recover Linux Root Password With SystemRescueCD

SystemRescueCD is a Linux distro that provides a Live CD for troubleshooting purposes.

Say I suddenly forget the root password of one of my Linux servers. I can boot the server using SystemRescueCD as the boot media, and then, from the SystemRescueCD console, follow these steps :

  1. First I need to check which partitions exist on the server's hard disk, using the fdisk command like this :
    root@sysresccd /root % fdisk -l
    
    Disk /dev/sda: 16.7 GB, 16689012736 bytes, 32595728 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xbdf13b7d
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *        2048    29362175    14680064   83  Linux
    /dev/sda2        29362176    32595727     1616776   83  Linux
    root@sysresccd /root %

    From that output I know there are 2 partitions, /dev/sda1 & /dev/sda2. I can also see that /dev/sda1 carries the Boot flag, so my guess is that /dev/sda1 is the root partition.

  2. I mount the /dev/sda1 partition temporarily on /mnt :
    root@sysresccd /root % mount /dev/sda1 /mnt
  3. I need to enter that root environment using the chroot command :
    root@sysresccd /root % chroot /mnt
    chroot: failed to run command ‘/bin/zsh’: No such file or directory
    root@sysresccd /root %

    That fails because SystemRescueCD uses zsh as its default SHELL. The workaround is to set the SHELL variable to bash :

    root@sysresccd /root % export SHELL=bash
    root@sysresccd /root % chroot /mnt      
    root@sysresccd:/#
  4. How do I know I am now inside the chroot of partition /dev/sda1? The simplest check is the hostname file :
    root@sysresccd /root % chroot /mnt      
    root@sysresccd:/# cat /etc/hostname 
    debian7
    root@sysresccd:/#

    Sure enough the hostname is “debian7”, which means I am inside the chroot environment of partition /dev/sda1. This of course depends on the Linux distro. Debian, Ubuntu, and their derivatives keep the hostname in /etc/hostname. RedHat and its derivatives such as CentOS are slightly different; the hostname information lives in /etc/sysconfig/network :

    [root@sysresccd /]# cat /etc/sysconfig/network
    NETWORKING=yes
    NETWORKING_IPV6=no
    HOSTNAME=localhost.localdomain
    [root@sysresccd /]#

    Another, simpler way might be to notice the prompt changing from

    root@sysresccd /root %

    to :

    root@sysresccd:/#
  5. From here I can change the root account password directly with the passwd command :
    root@sysresccd:/# passwd root
    Enter new UNIX password: 
    Retype new UNIX password: 
    passwd: password updated successfully
    root@sysresccd:/#
  6. After that I can leave the chroot & restart the server right away with :
    root@sysresccd:/# exit
    exit
    root@sysresccd /root % reboot
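Put together, steps 1-6 boil down to a few commands. A sketch (the reset_root_password name is mine; the argument is whichever root partition fdisk -l pointed to):

```shell
# Condensed version of the recovery steps, to be run from the
# SystemRescueCD console. $1 is the root partition, e.g. /dev/sda1.
reset_root_password(){
    mount "$1" /mnt
    # Running passwd directly as the chroot command also sidesteps
    # the zsh-vs-bash SHELL issue from step 3.
    chroot /mnt passwd root
    umount /mnt
}
```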

A few small notes on the steps above :

  1. The hard disk partitioning may well be more complicated (or there may be more than 2 hard disks in the server), like this :
    root@sysresccd /root % fdisk -l
    
    Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
       Device Boot      Start         End      Blocks   Id  System
    
    Disk /dev/sdb: 8589 MB, 8589934592 bytes, 16777216 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00027a20
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1   *          63      208844      104391   83  Linux
    /dev/sdb2          208845     6506324     3148740   83  Linux
    /dev/sdb3         6506325     8610839     1052257+  82  Linux swap / Solaris
    root@sysresccd /root %

    My rule of thumb is still the partition with the Boot flag (marked with the asterisk symbol *). I just mount the /dev/sdb1 partition to look at its contents :

    root@sysresccd /root % mount /dev/sdb1 /mnt
    root@sysresccd /root % ls /mnt
    config-2.6.39-300.32.1.el5uek  initrd-2.6.39-300.32.1.el5uek.img  symvers-2.6.39-300.32.1.el5uek.gz  vmlinuz-2.6.39-300.32.1.el5uek  xen-4.1.gz  xen.gz
    grub                           lost+found                         System.map-2.6.39-300.32.1.el5uek  xen-4.1.3OVM.gz                 xen-4.gz    xen-syms-4.1.3OVM
    root@sysresccd /root % umount /mnt

    It turns out /dev/sdb1 is not the root partition but the /boot partition. Since /dev/sdb3 is the swap partition, I can guess that the root partition is /dev/sdb2 :

    root@sysresccd /root % mount /dev/sdb2 /mnt
    root@sysresccd /root % ls /mnt
    bin  boot  dev  etc  halt  home  lib  lib64  lost+found  media  mnt  opt  OVS  proc  root  sbin  selinux  srv  sys  tmp  u01  usr  var
    root@sysresccd /root %

    The next steps are the same as above: chroot first, then change the password.

  2. Things are slightly different when the Linux server uses LVM (Logical Volume Manager) :
    root@sysresccd /root % fdisk -l
    
    Disk /dev/sda: 26.8 GB, 26843545600 bytes, 52428800 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000c64ed
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *          63      208844      104391   83  Linux
    /dev/sda2          208845    52420094    26105625   8e  Linux LVM
    
    Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1              63    20964824    10482381   83  Linux
    
    Disk /dev/mapper/VolGroup00-LogVol00: 20.5 GB, 20468203520 bytes, 39976960 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    
    Disk /dev/mapper/VolGroup00-LogVol01: 6241 MB, 6241124352 bytes, 12189696 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    
    root@sysresccd /root %

    In the example above, the root partition lives on /dev/mapper/VolGroup00-LogVol00, so the mount goes like this :

    root@sysresccd /root % mount /dev/mapper/VolGroup00-LogVol00 /mnt
    root@sysresccd /root % ls /mnt
    bin  boot  dev  etc  home  lib  lib64  lost+found  media  misc  mnt  net  opt  proc  root  sbin  selinux  srv  sys  tftpboot  tmp  u01  usr  var
    root@sysresccd /root %
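One caveat with the LVM case: if the rescue environment has not activated the volume group automatically, the /dev/mapper/... device will not exist yet. A sketch of the activation step (VolGroup00 and LogVol00 are the names from the example above):

```shell
# Scan for and activate LVM volume groups before mounting; only
# needed if the rescue environment did not do this automatically.
activate_and_mount_root(){
    vgscan                   # detect volume groups on the disks
    vgchange -ay VolGroup00  # activate every LV in the group
    mount /dev/mapper/VolGroup00-LogVol00 /mnt
}
```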

Root Disk Mirroring On Debian Linux

Mirroring is a simple way to gain redundancy on a single Unix server. By building mirrored root disks (as a RAID1 group), the system is safer if one disk fails: it remains accessible through a single-disk failure.

Mirroring with 2 physical disks

In this scenario we will build a mirrored root disk on a Debian Linux server. We will prepare a new disk drive as the first component of the RAID1 group, then copy the entire content of the root disk into it. After the data has been copied, we will reboot the server and force it to boot from the new disk. Once the system is up, we will attach the existing root disk as the second component of the RAID1 group. In the end both disks are bootable.

Preparation

  1. Let's start by identifying the existing system :
    • Platform : this test environment uses Debian Linux release 7.1 (code name : Wheezy) :
      root@debian01:~# date
      Tue Sep 24 00:48:07 WIT 2013
      root@debian01:~# uname -a
      Linux debian01 3.2.0-4-686-pae #1 SMP Debian 3.2.46-1+deb7u1 i686 GNU/Linux
      root@debian01:~# lsb_release -a
      No LSB modules are available.
      Distributor ID: Debian
      Description:    Debian GNU/Linux 7.1 (wheezy)
      Release:    7.1
      Codename:   wheezy
      root@debian01:~#
    • Physical disks : we can check the installed disks using the fdisk -l command as shown in the following example :
      root@debian01:~# fdisk -l
      
      Disk /dev/sda: 21.5 GB, 21474836480 bytes
      255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x0004f798
      
         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *        2048      194559       96256   83  Linux
      /dev/sda2          194560    19726335     9765888   83  Linux
      /dev/sda3        19726336    23631871     1952768   82  Linux swap / Solaris
      /dev/sda4        23631872    41940991     9154560   83  Linux
      
      Disk /dev/sdb: 21.5 GB, 21474836480 bytes
      255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000
      
      Disk /dev/sdb doesn't contain a valid partition table
      root@debian01:~#

      In the example above we can see 2 disk drives installed. The first is the active root disk (/dev/sda) and the second is the new disk (/dev/sdb).

      ASSUMPTION : For the sake of simplicity we will call the existing root disk (/dev/sda) “rootdisk” and the second disk (/dev/sdb) “rootmirror”.

    • Filesystem : we also need to know which filesystem each partition uses. For that we can use the blkid command, which also gives us the UUID of each partition :
      root@debian01:~# blkid
      /dev/sda3: UUID="c1b679fb-d59f-4478-a138-fe2e3a6bceb0" TYPE="swap"
      /dev/sr0: LABEL="Debian 7.1.0 i386 1" TYPE="iso9660"
      /dev/sda1: UUID="1398dd06-7ef2-46c3-9ace-a2328b1db783" TYPE="ext3"
      /dev/sda2: UUID="4a066d6d-c8ad-46a4-ae54-118da63dcbd2" TYPE="ext3"
      /dev/sda4: UUID="8d819942-fe5d-4634-aab7-28ceca9b2fe1" TYPE="ext3"
      root@debian01:~#
    • Partition structure : we can check the existing partition structure and its mountpoints by examining the /etc/fstab file :
      root@debian01:~# cat /etc/fstab
      # /etc/fstab: static file system information.
      #
      # Use 'blkid' to print the universally unique identifier for a
      # device; this may be used with UUID= as a more robust way to name devices
      # that works even if disks are added and removed. See fstab(5).
      #
      # <file system> <mount point>   <type>  <options>       <dump>  <pass>
      # / was on /dev/sda2 during installation
      UUID=4a066d6d-c8ad-46a4-ae54-118da63dcbd2 /               ext3    errors=remount-ro 0       1
      # /boot was on /dev/sda1 during installation
      UUID=1398dd06-7ef2-46c3-9ace-a2328b1db783 /boot           ext3    defaults        0       2
      # /home was on /dev/sda4 during installation
      UUID=8d819942-fe5d-4634-aab7-28ceca9b2fe1 /home           ext3    defaults        0       2
      # swap was on /dev/sda3 during installation
      UUID=c1b679fb-d59f-4478-a138-fe2e3a6bceb0 none            swap    sw              0       0
      /dev/sr0        /media/cdrom0   udf,iso9660 user,noauto     0       0
      root@debian01:~#

      Or we can use the simpler but slightly less informative df command :

      root@debian01:~# df -h
      Filesystem                                              Size  Used Avail Use% Mounted on
      rootfs                                                  9.2G  636M  8.1G   8% /
      udev                                                     10M     0   10M   0% /dev
      tmpfs                                                   101M  236K  101M   1% /run
      /dev/disk/by-uuid/4a066d6d-c8ad-46a4-ae54-118da63dcbd2  9.2G  636M  8.1G   8% /
      tmpfs                                                   5.0M     0  5.0M   0% /run/lock
      tmpfs                                                   584M     0  584M   0% /run/shm
      /dev/sda1                                                92M   22M   65M  25% /boot
      /dev/sda4                                               8.6G  148M  8.1G   2% /home
      root@debian01:~#

      From those two outputs, we know that :

      • /dev/sda1 contains the /boot filesystem
      • /dev/sda2 contains the root filesystem
      • /dev/sda3 is used as swap space
      • /dev/sda4 contains the /home filesystem
  2. We will use the mdadm package to manage RAID groups and build the mirror. To install the mdadm package the Debian way, we can use the apt-get install command as shown below :
    root@debian01:~# apt-get install mdadm
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    The following NEW packages will be installed:
      mdadm
    0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
    Need to get 0 B/566 kB of archives.
    After this operation, 1098 kB of additional disk space will be used.
    Preconfiguring packages ...
    Selecting previously unselected package mdadm.
    (Reading database ... 24694 files and directories currently installed.)
    Unpacking mdadm (from .../m/mdadm/mdadm_3.2.5-5_i386.deb) ...
    Processing triggers for man-db ...
    Setting up mdadm (3.2.5-5) ...
    Generating array device nodes... done.
    Generating mdadm.conf... done.
    update-initramfs: deferring update (trigger activated)
    [ ok ] Assembling MD arrays...done (no arrays found in config file or automatically).
    [ ok ] Starting MD monitoring service: mdadm --monitor.
    Processing triggers for initramfs-tools ...
    update-initramfs: Generating /boot/initrd.img-3.2.0-4-686-pae
    W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
    W: mdadm: no arrays defined in configuration file.
    root@debian01:~#
  3. Before moving forward, we need to back up several important files (/etc/fstab, /etc/default/grub, and /etc/mdadm/mdadm.conf) as shown in the following example :
    root@debian01:~# cp /etc/fstab /etc/fstab_orig
    root@debian01:~# cp /etc/default/grub  /etc/default/grub_orig
    root@debian01:~# cp /etc/mdadm/mdadm.conf  /etc/mdadm/mdadm.conf_orig
  4. Next we need to load several modules into the running kernel :
    root@debian01:~# modprobe linear
    root@debian01:~# modprobe multipath
    root@debian01:~# modprobe raid0
    root@debian01:~# modprobe raid1
    root@debian01:~# modprobe raid5
    root@debian01:~# modprobe raid6
    root@debian01:~# modprobe raid10
  5. Information about active, running RAID groups can be seen in the /proc/mdstat file :
    root@debian01:~# cat /proc/mdstat 
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    unused devices: <none>
    root@debian01:~#

    We can’t see any information yet since we don’t have an active RAID group. We will come back to /proc/mdstat once the RAID1 group has been created.

RAID1 Group Setup

We will start by creating a new RAID1 group to join both disks in a mirrored configuration.

  1. First we will work on the new disk (/dev/sdb), or what we called rootmirror. We need to prepare rootmirror before putting it in the RAID1 group. Since rootmirror is a new disk, it doesn’t have a valid partition table yet :
    root@debian01:~# fdisk -l /dev/sdb
    
    Disk /dev/sdb: 21.5 GB, 21474836480 bytes
    255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/sdb doesn't contain a valid partition table
    root@debian01:~#

    What we need to do is copy the partition table from the existing root disk (/dev/sda) using the sfdisk command as shown below :

    root@debian01:~# sfdisk -d /dev/sda | sfdisk --force /dev/sdb
    Checking that no-one is using this disk right now ...
    OK
    
    Disk /dev/sdb: 2610 cylinders, 255 heads, 63 sectors/track
    
    sfdisk: ERROR: sector 0 does not have an msdos signature
     /dev/sdb: unrecognized partition table type
    Old situation:
    No partitions found
    New situation:
    Units = sectors of 512 bytes, counting from 0
    
       Device Boot    Start       End   #sectors  Id  System
    /dev/sdb1   *      2048    194559     192512  83  Linux
    /dev/sdb2        194560  19726335   19531776  83  Linux
    /dev/sdb3      19726336  23631871    3905536  82  Linux swap / Solaris
    /dev/sdb4      23631872  41940991   18309120  83  Linux
    Warning: partition 1 does not end at a cylinder boundary
    Warning: partition 2 does not start at a cylinder boundary
    Warning: partition 2 does not end at a cylinder boundary
    Warning: partition 3 does not start at a cylinder boundary
    Warning: partition 3 does not end at a cylinder boundary
    Warning: partition 4 does not start at a cylinder boundary
    Warning: partition 4 does not end at a cylinder boundary
    Successfully wrote the new partition table
    
    Re-reading the partition table ...
    
    If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
    to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
    (See fdisk(8).)
    root@debian01:~#

    After copying the partition table, we can see that rootmirror now has the same partition layout as the root disk :

    root@debian01:~# fdisk -l /dev/sdb
    
    Disk /dev/sdb: 21.5 GB, 21474836480 bytes
    255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1   *        2048      194559       96256   83  Linux
    /dev/sdb2          194560    19726335     9765888   83  Linux
    /dev/sdb3        19726336    23631871     1952768   82  Linux swap / Solaris
    /dev/sdb4        23631872    41940991     9154560   83  Linux
    root@debian01:~#
  2. Since we will join rootmirror into a RAID1 group, we need to set every partition on it to the “Linux raid auto” type. We use the fdisk command to set the partition type; here is a sample :
    root@debian01:~# fdisk /dev/sdb
    
    Command (m for help): t
    Partition number (1-4): 1
    Hex code (type L to list codes): L
    
     0  Empty           24  NEC DOS         81  Minix / old Lin bf  Solaris        
     1  FAT12           27  Hidden NTFS Win 82  Linux swap / So c1  DRDOS/sec (FAT-
     2  XENIX root      39  Plan 9          83  Linux           c4  DRDOS/sec (FAT-
     3  XENIX usr       3c  PartitionMagic  84  OS/2 hidden C:  c6  DRDOS/sec (FAT-
     4  FAT16 <32M      40  Venix 80286     85  Linux extended  c7  Syrinx         
     5  Extended        41  PPC PReP Boot   86  NTFS volume set da  Non-FS data    
     6  FAT16           42  SFS             87  NTFS volume set db  CP/M / CTOS / .
     7  HPFS/NTFS/exFAT 4d  QNX4.x          88  Linux plaintext de  Dell Utility   
     8  AIX             4e  QNX4.x 2nd part 8e  Linux LVM       df  BootIt         
     9  AIX bootable    4f  QNX4.x 3rd part 93  Amoeba          e1  DOS access     
     a  OS/2 Boot Manag 50  OnTrack DM      94  Amoeba BBT      e3  DOS R/O        
     b  W95 FAT32       51  OnTrack DM6 Aux 9f  BSD/OS          e4  SpeedStor      
     c  W95 FAT32 (LBA) 52  CP/M            a0  IBM Thinkpad hi eb  BeOS fs        
     e  W95 FAT16 (LBA) 53  OnTrack DM6 Aux a5  FreeBSD         ee  GPT            
     f  W95 Ext'd (LBA) 54  OnTrackDM6      a6  OpenBSD         ef  EFI (FAT-12/16/
    10  OPUS            55  EZ-Drive        a7  NeXTSTEP        f0  Linux/PA-RISC b
    11  Hidden FAT12    56  Golden Bow      a8  Darwin UFS      f1  SpeedStor      
    12  Compaq diagnost 5c  Priam Edisk     a9  NetBSD          f4  SpeedStor      
    14  Hidden FAT16 <3 61  SpeedStor       ab  Darwin boot     f2  DOS secondary  
    16  Hidden FAT16    63  GNU HURD or Sys af  HFS / HFS+      fb  VMware VMFS    
    17  Hidden HPFS/NTF 64  Novell Netware  b7  BSDI fs         fc  VMware VMKCORE 
    18  AST SmartSleep  65  Novell Netware  b8  BSDI swap       fd  Linux raid auto
    1b  Hidden W95 FAT3 70  DiskSecure Mult bb  Boot Wizard hid fe  LANstep        
    1c  Hidden W95 FAT3 75  PC/IX           be  Solaris boot    ff  BBT            
    1e  Hidden W95 FAT1 80  Old Minix      
    Hex code (type L to list codes): fd
    Changed system type of partition 1 to fd (Linux raid autodetect)
    
    Command (m for help): t
    Partition number (1-4): 2
    Hex code (type L to list codes): fd
    Changed system type of partition 2 to fd (Linux raid autodetect)
    
    Command (m for help): t
    Partition number (1-4): 3
    Hex code (type L to list codes): fd
    Changed system type of partition 3 to fd (Linux raid autodetect)
    
    Command (m for help): t
    Partition number (1-4): 4
    Hex code (type L to list codes): fd
    Changed system type of partition 4 to fd (Linux raid autodetect)
    
    Command (m for help): w
    The partition table has been altered!
    
    Calling ioctl() to re-read partition table.
    Syncing disks.
    root@debian01:~#

    Here is an explanation of what happens in the example above :

    • Start the fdisk utility by typing “fdisk /dev/sdb” followed by Enter. The fdisk utility is now ready to work on /dev/sdb.
    • Type t to start setting the partition type. Press Enter to continue.
    • Then type the number of the partition we want to set. Press Enter to continue.
    • There are many partition types available; we can see all of them by typing L followed by Enter.
    • In this case we want the “Linux raid auto” partition type, so we type fd followed by Enter.
    • After all partitions are set, we must finalize the changes by typing w (as in “Write these changes to the disk”) followed by Enter.
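For the record, the same type change can also be done non-interactively. A sketch using sfdisk (on recent util-linux the option is --part-type; older releases spelled it --id, so check your sfdisk man page):

```shell
# Hypothetical non-interactive equivalent of the fdisk session above:
# set partitions 1-4 of a disk to type fd (Linux raid autodetect).
set_raid_autodetect(){
    local disk="$1" part
    for part in 1 2 3 4; do
        sfdisk --part-type "$disk" "$part" fd
    done
}
# Usage: set_raid_autodetect /dev/sdb
```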
  3. With the partition types set, we can continue by creating the md devices, a.k.a. “virtual partitions”. md is an abbreviation for Multiple Devices. Before creating a new md device, we should clean up each partition using the following commands :
    root@debian01:~# mdadm --zero-superblock /dev/sdb1
    root@debian01:~# mdadm --zero-superblock /dev/sdb2
    root@debian01:~# mdadm --zero-superblock /dev/sdb3
    root@debian01:~# mdadm --zero-superblock /dev/sdb4

    Actually this step is only applicable if the partition was previously used as an md device. In this case it doesn’t really matter since we are using a new, empty disk.

  4. To create an md device, we use the mdadm --create command. Here is the syntax :
    mdadm --create /dev/md<ID> --level=<RAID level> --raid-disks=<number of physical disk> <disk#1> <disk#2>

    We will create an md device for each partition. Since we want to create a RAID1 disk group, we set --level=1. And because we only have 2 physical disks involved, we set --raid-disks=2.

    So let’s start by preparing the first partition of rootmirror as the first md device :

    root@debian01:~# mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
    mdadm: Note: this array has metadata at the start and
        may not be suitable as a boot device.  If you plan to
        store '/boot' on this device please ensure that
        your boot-loader understands md/v1.x metadata, or use
        --metadata=0.90
    Continue creating array? y
    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md0 started.
    root@debian01:~#

    Because we don’t want to touch the existing root disk (/dev/sda) yet, we mark its slot in each array as “missing”.

    Then we can continue with the second partition of rootmirror :

    root@debian01:~# mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb2
    mdadm: Note: this array has metadata at the start and
        may not be suitable as a boot device.  If you plan to
        store '/boot' on this device please ensure that
        your boot-loader understands md/v1.x metadata, or use
        --metadata=0.90
    Continue creating array? y
    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md1 started.
    root@debian01:~#

    Finish by creating the virtual partitions for the third and fourth partitions of rootmirror :

    root@debian01:~# mdadm --create /dev/md2 --level=1 --raid-disks=2 missing /dev/sdb3
    mdadm: Note: this array has metadata at the start and
        may not be suitable as a boot device.  If you plan to
        store '/boot' on this device please ensure that
        your boot-loader understands md/v1.x metadata, or use
        --metadata=0.90
    Continue creating array? y
    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md2 started.
    root@debian01:~#
    root@debian01:~# mdadm --create /dev/md3 --level=1 --raid-disks=2 missing /dev/sdb4
    mdadm: Note: this array has metadata at the start and
        may not be suitable as a boot device.  If you plan to
        store '/boot' on this device please ensure that
        your boot-loader understands md/v1.x metadata, or use
        --metadata=0.90
    Continue creating array? y
    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md3 started.
    root@debian01:~#
  5. Now we have 4 virtual partitions, so the next step is to create a new filesystem on each of them. We will use the ext3 filesystem for /dev/md0, /dev/md1, and /dev/md3. To create an ext3 filesystem we use the mkfs.ext3 command as shown in the following example:
    root@debian01:~# mkfs.ext3 /dev/md0 
    mke2fs 1.42.5 (29-Jul-2012)
    Filesystem label=
    OS type: Linux
    Block size=1024 (log=0)
    Fragment size=1024 (log=0)
    Stride=0 blocks, Stripe width=0 blocks
    24096 inodes, 96128 blocks
    4806 blocks (5.00%) reserved for the super user
    First data block=1
    Maximum filesystem blocks=67371008
    12 block groups
    8192 blocks per group, 8192 fragments per group
    2008 inodes per group
    Superblock backups stored on blocks: 
        8193, 24577, 40961, 57345, 73729
    
    Allocating group tables: done                            
    Writing inode tables: done                            
    Creating journal (4096 blocks): done
    Writing superblocks and filesystem accounting information: done 
    
    root@debian01:~# mkfs.ext3 /dev/md1
    mke2fs 1.42.5 (29-Jul-2012)
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=0 blocks, Stripe width=0 blocks
    610800 inodes, 2439392 blocks
    121969 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=2499805184
    75 block groups
    32768 blocks per group, 32768 fragments per group
    8144 inodes per group
    Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
    
    Allocating group tables: done                            
    Writing inode tables: done                            
    Creating journal (32768 blocks): done
    Writing superblocks and filesystem accounting information: done 
    
    root@debian01:~# mkfs.ext3 /dev/md3
    mke2fs 1.42.5 (29-Jul-2012)
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=0 blocks, Stripe width=0 blocks
    572320 inodes, 2286560 blocks
    114328 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=2344615936
    70 block groups
    32768 blocks per group, 32768 fragments per group
    8176 inodes per group
    Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
    
    Allocating group tables: done                            
    Writing inode tables: done                            
    Creating journal (32768 blocks): done
    Writing superblocks and filesystem accounting information: done 
    
    root@debian01:~#
  6. We will mark /dev/md2 as swap space using the mkswap command as shown in the following example:
    root@debian01:~# mkswap -f /dev/md2
    Setting up swapspace version 1, size = 1951676 KiB
    no label, UUID=2fa8b47b-e889-4160-9fd0-c21b88b6d3e3
    root@debian01:~#
  7. Each virtual partition needs to be started during the boot process, so we have to register all the virtual partitions we made in the mdadm.conf file. What we register is the output of the following command:
    root@debian01:~# mdadm --examine --scan
    ARRAY /dev/md/0 metadata=1.2 UUID=5d7d8672:c019a7d0:69310f29:3cd65d04 name=debian01:0
    ARRAY /dev/md/1 metadata=1.2 UUID=67631129:b7766c7e:d1200d55:036ad1c6 name=debian01:1
    ARRAY /dev/md/2 metadata=1.2 UUID=19cbe54c:2aa2126c:92eda6ee:ffc62fb9 name=debian01:2
    ARRAY /dev/md/3 metadata=1.2 UUID=d15793a1:348c6380:a94d73b3:a47f3bf7 name=debian01:3
    root@debian01:~#

    We can copy that output into /etc/mdadm/mdadm.conf manually using a text editor, or use a simple redirection as shown below:

    root@debian01:~# mdadm --examine --scan >> /etc/mdadm/mdadm.conf
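Note that the `>>` redirection appends unconditionally, so running it a second time would leave duplicate ARRAY lines in the file. A minimal idempotent sketch of the same idea, shown here against a hypothetical stand-in file (`/tmp/mdadm.conf.example`) rather than the real /etc/mdadm/mdadm.conf, and with the scan output hard-coded instead of taken from `mdadm --examine --scan`:

```shell
# Stand-in for /etc/mdadm/mdadm.conf (hypothetical path, for illustration)
conf=/tmp/mdadm.conf.example
: > "$conf"

# In real use this would be the output of: mdadm --examine --scan
scan='ARRAY /dev/md/0 metadata=1.2 UUID=5d7d8672:c019a7d0:69310f29:3cd65d04 name=debian01:0'

# Append only ARRAY lines that are not already present in the file
printf '%s\n' "$scan" | while read -r line; do
    grep -qxF "$line" "$conf" || printf '%s\n' "$line" >> "$conf"
done
```

Running this loop twice leaves the file unchanged the second time, which makes the step safe to repeat.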
  8. On a Linux system each partition has its own mountpoint (except the swap partition). Information about each partition and its related mountpoint is stored in the /etc/fstab file. Here is the current /etc/fstab:
    root@debian01:~# cat /etc/fstab
    # /etc/fstab: static file system information.
    #
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point>   <type>  <options>       <dump>  <pass>
    # / was on /dev/sda2 during installation
    UUID=4a066d6d-c8ad-46a4-ae54-118da63dcbd2 /               ext3    errors=remount-ro 0       1
    # /boot was on /dev/sda1 during installation
    UUID=1398dd06-7ef2-46c3-9ace-a2328b1db783 /boot           ext3    defaults        0       2
    # /home was on /dev/sda4 during installation
    UUID=8d819942-fe5d-4634-aab7-28ceca9b2fe1 /home           ext3    defaults        0       2
    # swap was on /dev/sda3 during installation
    UUID=c1b679fb-d59f-4478-a138-fe2e3a6bceb0 none            swap    sw              0       0
    /dev/sr0        /media/cdrom0   udf,iso9660 user,noauto     0       0
    root@debian01:~#

    Each partition is listed by its UUID (Universally Unique Identifier). Those UUIDs belong to the existing root disk (/dev/sda). We need to replace them with the new virtual partition devices. Here is the final /etc/fstab after modification:

    root@debian01:~# cat /etc/fstab
    # /etc/fstab: static file system information.
    #
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point>   <type>  <options>       <dump>  <pass>
    
    /dev/md1    /               ext3    errors=remount-ro   0       1
    /dev/md0    /boot           ext3    defaults            0       2
    /dev/md3    /home           ext3    defaults            0       2
    /dev/md2    none            swap    sw                  0       0
    /dev/sr0    /media/cdrom0   udf,iso9660 user,noauto     0       0
    root@debian01:~#
  9. The next step is to modify /etc/default/grub. We need to set the GRUB_DISABLE_LINUX_UUID parameter to true. By default the GRUB_DISABLE_LINUX_UUID=true line is commented out, so all we need to do is remove the leading # to uncomment it.
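The edit can be done with any text editor, or scripted with sed. A sketch, shown here against a temporary copy (`/tmp/grub.example`, a hypothetical stand-in) rather than the real /etc/default/grub:

```shell
# Hypothetical stand-in for /etc/default/grub
f=/tmp/grub.example
printf '%s\n' 'GRUB_DEFAULT=0' '#GRUB_DISABLE_LINUX_UUID=true' > "$f"

# Strip the leading '#' from the GRUB_DISABLE_LINUX_UUID line
sed -i 's/^#\(GRUB_DISABLE_LINUX_UUID=true\)/\1/' "$f"

grep '^GRUB_DISABLE_LINUX_UUID' "$f"   # -> GRUB_DISABLE_LINUX_UUID=true
```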
  10. Before we start copying files from the existing root disk, we need to mount each virtual partition. Here is how we mount all the virtual partitions:
    root@debian01:~# mkdir /mnt/md0
    root@debian01:~# mkdir /mnt/md1
    root@debian01:~# mkdir /mnt/md3
    root@debian01:~# mount /dev/md0 /mnt/md0
    root@debian01:~# mount /dev/md1 /mnt/md1
    root@debian01:~# mount /dev/md3 /mnt/md3

    We don't mount /dev/md2 since it is the swap partition.

  11. With all virtual partitions mounted, we can start copying files from the existing root disk. The cp options preserve symlinks (-d) and file attributes (-p), recurse into directories (-R), and stay on one filesystem (-x):
    root@debian01:~# cp -dpRx / /mnt/md1 
    root@debian01:~# cp -dpRx /boot /mnt/md0
    root@debian01:~# cp -dpRx /home /mnt/md3
  12. Then we need to install the boot loader on the virtual partitions. We will do it from a chroot environment. A few mounts are needed to prepare the chroot environment, as shown in the following example:
    root@debian01:~# umount /mnt/md0
    root@debian01:~# mount /dev/md0 /mnt/md1/boot
    root@debian01:~# mount -t proc none /mnt/md1/proc
    root@debian01:~# mount -o bind /dev /mnt/md1/dev
    root@debian01:~# mount -o bind /sys /mnt/md1/sys
  13. Now we can enter the chroot environment and execute the update-grub command to regenerate the boot loader configuration:
    root@debian01:~# chroot /mnt/md1
    root@debian01:/# update-grub
    Generating grub.cfg ...
    Found linux image: /boot/vmlinuz-3.2.0-4-686-pae
    Found initrd image: /boot/initrd.img-3.2.0-4-686-pae
    done
  14. Still inside the chroot environment, we also need to regenerate the initramfs image. The initramfs is used during the boot process to assemble the md arrays and mount the root filesystem.
    root@debian01:/# update-initramfs -u
    update-initramfs: Generating /boot/initrd.img-3.2.0-4-686-pae
    mdadm: cannot open /dev/md/0: No such file or directory
    mdadm: cannot open /dev/md/1: No such file or directory
    mdadm: cannot open /dev/md/2: No such file or directory
    mdadm: cannot open /dev/md/3: No such file or directory
    root@debian01:/#

    We can ignore these errors for now.

  15. The last thing to be done from inside the chroot is reinstalling the boot loader on both disk drives. Debian uses the GRUB boot loader by default. To reinstall it we use the grub-install command as shown below:
    root@debian01:/# grub-install /dev/sda
    Installation finished. No error reported.
    root@debian01:/# grub-install /dev/sdb
    Installation finished. No error reported.
  16. We can exit the chroot environment and shut down the system:
    root@debian01:/# exit
    root@debian01:~# shutdown -h now
    
    Broadcast message from root@debian7 (pts/3) (Tue Sep 24 01:04:03 2013):
    
    The system is going down for system halt NOW!
    root@debian01:~#
  17. Then we boot the server from the second disk (/dev/sdb). If all the steps above succeeded, the system will boot from the /dev/sdb (rootmirror) disk.

Attach Root Disk To RAID1 Group

After the reboot the system should already be running on the virtual partitions, as we can see from the df command output below:

root@debian01:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs          9.2G  637M  8.1G   8% /
udev             10M     0   10M   0% /dev
tmpfs           101M  276K  101M   1% /run
/dev/md1        9.2G  637M  8.1G   8% /
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           584M     0  584M   0% /run/shm
/dev/md0         91M   22M   65M  26% /boot
/dev/md3        8.6G  148M  8.1G   2% /home
root@debian01:~#

We can check the RAID1 information for each virtual partition using the following command:

mdadm -D /dev/md<ID>

At this stage each RAID1 group has only a single component, the rootmirror disk we prepared in the previous section. Since we defined the RAID1 groups to have 2 components, the mdadm -D command reports every virtual partition as degraded.

root@debian01:~# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Sep 24 00:54:31 2013
     Raid Level : raid1
     Array Size : 96128 (93.89 MiB 98.44 MB)
  Used Dev Size : 96128 (93.89 MiB 98.44 MB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Tue Sep 24 01:04:33 2013
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : debian01:0  (local to host debian01)
           UUID : 5d7d8672:c019a7d0:69310f29:3cd65d04
         Events : 34

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       17        1      active sync   /dev/sdb1
root@debian01:~#
root@debian01:~# mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Tue Sep 24 00:54:39 2013
     Raid Level : raid1
     Array Size : 9757568 (9.31 GiB 9.99 GB)
  Used Dev Size : 9757568 (9.31 GiB 9.99 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Tue Sep 24 01:05:01 2013
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : debian01:1  (local to host debian01)
           UUID : 67631129:b7766c7e:d1200d55:036ad1c6
         Events : 118

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       18        1      active sync   /dev/sdb2
root@debian01:~#
root@debian01:~# mdadm -D /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Tue Sep 24 00:54:47 2013
     Raid Level : raid1
     Array Size : 1951680 (1906.26 MiB 1998.52 MB)
  Used Dev Size : 1951680 (1906.26 MiB 1998.52 MB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Tue Sep 24 00:55:18 2013
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : debian01:2  (local to host debian01)
           UUID : 19cbe54c:2aa2126c:92eda6ee:ffc62fb9
         Events : 2

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       19        1      active sync   /dev/sdb3
root@debian01:~#
root@debian01:~# mdadm -D /dev/md3
/dev/md3:
        Version : 1.2
  Creation Time : Tue Sep 24 00:54:55 2013
     Raid Level : raid1
     Array Size : 9146240 (8.72 GiB 9.37 GB)
  Used Dev Size : 9146240 (8.72 GiB 9.37 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Tue Sep 24 01:04:33 2013
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : debian01:3  (local to host debian01)
           UUID : d15793a1:348c6380:a94d73b3:a47f3bf7
         Events : 14

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       20        1      active sync   /dev/sdb4
root@debian01:~# 
root@debian01:~# cat /proc/mdstat 
Personalities : [raid1] 
md3 : active raid1 sdb4[1]
      9146240 blocks super 1.2 [2/1] [_U]

md2 : active (auto-read-only) raid1 sdb3[1]
      1951680 blocks super 1.2 [2/1] [_U]

md1 : active raid1 sdb2[1]
      9757568 blocks super 1.2 [2/1] [_U]

md0 : active raid1 sdb1[1]
      96128 blocks super 1.2 [2/1] [_U]

unused devices: <none>

root@debian01:~#

This section finishes the mirroring by attaching the previous rootdisk (/dev/sda) to the RAID1 groups.

  1. We need to set the partition type of each rootdisk partition to "Linux raid autodetect":
    root@debian01:~# fdisk -l /dev/sda
    
    Disk /dev/sda: 21.5 GB, 21474836480 bytes
    255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0004f798
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *        2048      194559       96256   83  Linux
    /dev/sda2          194560    19726335     9765888   83  Linux
    /dev/sda3        19726336    23631871     1952768   82  Linux swap / Solaris
    /dev/sda4        23631872    41940991     9154560   83  Linux
    root@debian01:~# fdisk /dev/sda
    
    Command (m for help): t
    Partition number (1-4): 1
    Hex code (type L to list codes): fd
    Changed system type of partition 1 to fd (Linux raid autodetect)
    
    Command (m for help): t
    Partition number (1-4): 2
    Hex code (type L to list codes): fd
    Changed system type of partition 2 to fd (Linux raid autodetect)
    
    Command (m for help): t
    Partition number (1-4): 3
    Hex code (type L to list codes): fd
    Changed system type of partition 3 to fd (Linux raid autodetect)
    
    Command (m for help): t
    Partition number (1-4): 4
    Hex code (type L to list codes): fd
    Changed system type of partition 4 to fd (Linux raid autodetect)
    
    Command (m for help): w
    The partition table has been altered!
    
    Calling ioctl() to re-read partition table.
    Syncing disks.
    root@debian01:~#
  2. We can verify that all partitions on /dev/sda (rootdisk) have been converted to the Linux raid autodetect type:
    root@debian01:~# fdisk -l /dev/sda
    
    Disk /dev/sda: 21.5 GB, 21474836480 bytes
    255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0004f798
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *        2048      194559       96256   fd  Linux raid autodetect
    /dev/sda2          194560    19726335     9765888   fd  Linux raid autodetect
    /dev/sda3        19726336    23631871     1952768   fd  Linux raid autodetect
    /dev/sda4        23631872    41940991     9154560   fd  Linux raid autodetect
    root@debian01:~#
  3. Now we can add the rootdisk to the existing RAID1 groups. Each rootdisk partition is attached to its corresponding virtual partition (md device):
    root@debian01:~# mdadm --add /dev/md0 /dev/sda1
    mdadm: added /dev/sda1
    root@debian01:~# mdadm --add /dev/md1 /dev/sda2
    mdadm: added /dev/sda2
    root@debian01:~# mdadm --add /dev/md2 /dev/sda3
    mdadm: added /dev/sda3
    root@debian01:~# mdadm --add /dev/md3 /dev/sda4
    mdadm: added /dev/sda4
    root@debian01:~#
  4. After all rootdisk partitions join the RAID1 groups, the system starts recovery/resynchronization automatically. We can monitor the recovery process by looking at the /proc/mdstat file:
    root@debian01:~# cat /proc/mdstat 
    Personalities : [raid1] 
    md3 : active raid1 sda4[2] sdb4[1]
          9146240 blocks super 1.2 [2/1] [_U]
            resync=DELAYED
    
    md2 : active raid1 sda3[2] sdb3[1]
          1951680 blocks super 1.2 [2/1] [_U]
            resync=DELAYED
    
    md1 : active raid1 sda2[2] sdb2[1]
          9757568 blocks super 1.2 [2/1] [_U]
          [==>..................]  recovery = 11.6% (1132800/9757568) finish=1.1min speed=125866K/sec
    
    md0 : active raid1 sda1[2] sdb1[1]
          96128 blocks super 1.2 [2/2] [UU]
    
    unused devices: <none>
    root@debian01:~#

    Or we can use the mdadm -D command as shown in the following example:

    root@debian01:~# mdadm -D /dev/md0
    /dev/md0:
            Version : 1.2
      Creation Time : Tue Sep 24 00:54:31 2013
         Raid Level : raid1
         Array Size : 96128 (93.89 MiB 98.44 MB)
      Used Dev Size : 96128 (93.89 MiB 98.44 MB)
       Raid Devices : 2
      Total Devices : 2
        Persistence : Superblock is persistent
    
        Update Time : Tue Sep 24 01:06:30 2013
              State : clean 
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0
    
               Name : debian01:0  (local to host debian01)
               UUID : 5d7d8672:c019a7d0:69310f29:3cd65d04
             Events : 55
    
        Number   Major   Minor   RaidDevice State
           2       8        1        0      active sync   /dev/sda1
           1       8       17        1      active sync   /dev/sdb1
    root@debian01:~# mdadm -D /dev/md1
    /dev/md1:
            Version : 1.2
      Creation Time : Tue Sep 24 00:54:39 2013
         Raid Level : raid1
         Array Size : 9757568 (9.31 GiB 9.99 GB)
      Used Dev Size : 9757568 (9.31 GiB 9.99 GB)
       Raid Devices : 2
      Total Devices : 2
        Persistence : Superblock is persistent
    
        Update Time : Tue Sep 24 01:06:42 2013
              State : clean, degraded, recovering 
     Active Devices : 1
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 1
    
     Rebuild Status : 13% complete
    
               Name : debian01:1  (local to host debian01)
               UUID : 67631129:b7766c7e:d1200d55:036ad1c6
             Events : 138
    
        Number   Major   Minor   RaidDevice State
           2       8        2        0      spare rebuilding   /dev/sda2
           1       8       18        1      active sync   /dev/sdb2
    root@debian01:~# mdadm -D /dev/md2
    /dev/md2:
            Version : 1.2
      Creation Time : Tue Sep 24 00:54:47 2013
         Raid Level : raid1
         Array Size : 1951680 (1906.26 MiB 1998.52 MB)
      Used Dev Size : 1951680 (1906.26 MiB 1998.52 MB)
       Raid Devices : 2
      Total Devices : 2
        Persistence : Superblock is persistent
    
        Update Time : Tue Sep 24 01:06:29 2013
              State : clean, degraded, resyncing (DELAYED) 
     Active Devices : 1
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 1
    
               Name : debian01:2  (local to host debian01)
               UUID : 19cbe54c:2aa2126c:92eda6ee:ffc62fb9
             Events : 4
    
        Number   Major   Minor   RaidDevice State
           2       8        3        0      spare rebuilding   /dev/sda3
           1       8       19        1      active sync   /dev/sdb3
    root@debian01:~# mdadm -D /dev/md3
    /dev/md3:
            Version : 1.2
      Creation Time : Tue Sep 24 00:54:55 2013
         Raid Level : raid1
         Array Size : 9146240 (8.72 GiB 9.37 GB)
      Used Dev Size : 9146240 (8.72 GiB 9.37 GB)
       Raid Devices : 2
      Total Devices : 2
        Persistence : Superblock is persistent
    
        Update Time : Tue Sep 24 01:06:32 2013
              State : clean, degraded, resyncing (DELAYED) 
     Active Devices : 1
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 1
    
               Name : debian01:3  (local to host debian01)
               UUID : d15793a1:348c6380:a94d73b3:a47f3bf7
             Events : 18
    
        Number   Major   Minor   RaidDevice State
           2       8        4        0      spare rebuilding   /dev/sda4
           1       8       20        1      active sync   /dev/sdb4
    root@debian01:~#
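Instead of re-running these commands by hand until the rebuild finishes, the check can be scripted by looking for the recovery/resync markers in mdstat output. A minimal sketch; the helper below takes any file in mdstat format, so the sample path `/tmp/mdstat.sample` is a stand-in and on a real system it would be pointed at /proc/mdstat:

```shell
# Succeed (exit 0) while any array in the given mdstat-format file
# is still recovering or resyncing (including resync=DELAYED)
raid_busy() {
    grep -qE 'recovery|resync' "$1"
}

# Sample content mimicking the /proc/mdstat output shown above
cat > /tmp/mdstat.sample <<'EOF'
md1 : active raid1 sda2[2] sdb2[1]
      [==>..................]  recovery = 11.6% (1132800/9757568)
EOF

if raid_busy /tmp/mdstat.sample; then
    echo "rebuild still in progress"
fi

# On the real system one could poll until the rebuild completes:
#   while raid_busy /proc/mdstat; do sleep 10; done
```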
  5. When the recovery/resync process completes, both disks are in sync; there is no more degraded status in the mdadm -D output:
    root@debian01:~# mdadm -D /dev/md0
    /dev/md0:
            Version : 1.2
      Creation Time : Tue Sep 24 00:54:31 2013
         Raid Level : raid1
         Array Size : 96128 (93.89 MiB 98.44 MB)
      Used Dev Size : 96128 (93.89 MiB 98.44 MB)
       Raid Devices : 2
      Total Devices : 2
        Persistence : Superblock is persistent
    
        Update Time : Tue Sep 24 01:06:43 2013
              State : clean 
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0
    
               Name : debian01:0  (local to host debian01)
               UUID : 5d7d8672:c019a7d0:69310f29:3cd65d04
             Events : 55
    
        Number   Major   Minor   RaidDevice State
           2       8        1        0      active sync   /dev/sda1
           1       8       17        1      active sync   /dev/sdb1
    root@debian01:~# mdadm -D /dev/md1
    /dev/md1:
            Version : 1.2
      Creation Time : Tue Sep 24 00:54:39 2013
         Raid Level : raid1
         Array Size : 9757568 (9.31 GiB 9.99 GB)
      Used Dev Size : 9757568 (9.31 GiB 9.99 GB)
       Raid Devices : 2
      Total Devices : 2
        Persistence : Superblock is persistent
    
        Update Time : Tue Sep 24 01:09:29 2013
              State : clean 
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0
    
               Name : debian01:1  (local to host debian01)
               UUID : 67631129:b7766c7e:d1200d55:036ad1c6
             Events : 157
    
        Number   Major   Minor   RaidDevice State
           2       8        2        0      active sync   /dev/sda2
           1       8       18        1      active sync   /dev/sdb2
    root@debian01:~# mdadm -D /dev/md2
    /dev/md2:
            Version : 1.2
      Creation Time : Tue Sep 24 00:54:47 2013
         Raid Level : raid1
         Array Size : 1951680 (1906.26 MiB 1998.52 MB)
      Used Dev Size : 1951680 (1906.26 MiB 1998.52 MB)
       Raid Devices : 2
      Total Devices : 2
        Persistence : Superblock is persistent
    
        Update Time : Tue Sep 24 01:08:59 2013
              State : clean 
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0
    
               Name : debian01:2  (local to host debian01)
               UUID : 19cbe54c:2aa2126c:92eda6ee:ffc62fb9
             Events : 23
    
        Number   Major   Minor   RaidDevice State
           2       8        3        0      active sync   /dev/sda3
           1       8       19        1      active sync   /dev/sdb3
    root@debian01:~# mdadm -D /dev/md3
    /dev/md3:
            Version : 1.2
      Creation Time : Tue Sep 24 00:54:55 2013
         Raid Level : raid1
         Array Size : 9146240 (8.72 GiB 9.37 GB)
      Used Dev Size : 9146240 (8.72 GiB 9.37 GB)
       Raid Devices : 2
      Total Devices : 2
        Persistence : Superblock is persistent
    
        Update Time : Tue Sep 24 01:08:49 2013
              State : clean 
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0
    
               Name : debian01:3  (local to host debian01)
               UUID : d15793a1:348c6380:a94d73b3:a47f3bf7
             Events : 37
    
        Number   Major   Minor   RaidDevice State
           2       8        4        0      active sync   /dev/sda4
           1       8       20        1      active sync   /dev/sdb4
    root@debian01:~#
  6. Now /dev/sda and /dev/sdb are successfully joined as mirror disks and hold the same data. We can verify this by booting the server from each disk in turn.


Locale Error Debian 7

On Unix operating systems, the locale is used (among other things) to configure the language and character encoding. For example, I can set my Linux system to use the "English US" (en_US) language with UTF-8 encoding. Today I ran into a problem with that setting while trying to install a package. The error looked roughly like this:

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
	LANGUAGE = "en_US:en",
	LC_ALL = (unset),
	LC_CTYPE = "UTF-8",
	LANG = "en_US.UTF-8"
    are supported and installed on your system.

Here is the full output from installing the mdadm package on Debian 7:

root@debian01-new:~# apt-get install mdadm initramfs-tools
Reading package lists... Done
Building dependency tree       
Reading state information... Done
initramfs-tools is already the newest version.
The following NEW packages will be installed:
  mdadm
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/566 kB of archives.
After this operation, 1098 kB of additional disk space will be used.
Do you want to continue [Y/n]? Y
Media change: please insert the disc labeled
 'Debian GNU/Linux 7.1.0 _Wheezy_ - Official i386 CD Binary-1 20130615-21:54'
in the drive '/media/cdrom/' and press enter

Can't set locale; make sure $LC_* and $LANG are correct!
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
	LANGUAGE = "en_US:en",
	LC_ALL = (unset),
	LC_CTYPE = "UTF-8",
	LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
Preconfiguring packages ...
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
	LANGUAGE = "en_US:en",
	LC_ALL = (unset),
	LC_CTYPE = "UTF-8",
	LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
	LANGUAGE = "en_US:en",
	LC_ALL = (unset),
	LC_CTYPE = "UTF-8",
	LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
/usr/bin/locale: Cannot set LC_CTYPE to default locale: No such file or directory
/usr/bin/locale: Cannot set LC_ALL to default locale: No such file or directory
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
	LANGUAGE = "en_US:en",
	LC_ALL = (unset),
	LC_CTYPE = "UTF-8",
	LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
	LANGUAGE = "en_US:en",
	LC_ALL = (unset),
	LC_CTYPE = "UTF-8",
	LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
Selecting previously unselected package mdadm.
(Reading database ... 45904 files and directories currently installed.)
Unpacking mdadm (from .../m/mdadm/mdadm_3.2.5-5_i386.deb) ...
Processing triggers for man-db ...
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
Setting up mdadm (3.2.5-5) ...
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
Generating array device nodes... done.
Generating mdadm.conf... done.
update-initramfs: deferring update (trigger activated)
[ ok ] Assembling MD arrays...done (no arrays found in config file or automatically).
[ ok ] Starting MD monitoring service: mdadm --monitor.
Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-3.2.0-4-686-pae
W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
W: mdadm: no arrays defined in configuration file.
root@debian01-new:~#

To get rid of those "locale" errors, I needed to run the following commands:

root@debian01-new:~# export LANGUAGE=en_US.UTF-8
root@debian01-new:~# export LANG=en_US.UTF-8
root@debian01-new:~# export LC_ALL=en_US.UTF-8
root@debian01-new:~# locale-gen en_US.UTF-8
Generating locales (this might take a while)...
  en_US.UTF-8... done
Generation complete.
root@debian01-new:~# dpkg-reconfigure locales
Generating locales (this might take a while)...
  en_US.UTF-8... done
Generation complete.
root@debian01-new:~#

After reconfiguring the language and encoding, Debian now uses the correct parameters.

root@debian01-new:~# locale
LANG=en_US.UTF-8
LANGUAGE=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=en_US.UTF-8
root@debian01-new:~#

To make these settings permanent I need to add the following 3 lines to /etc/profile:

export LANGUAGE=en_US.UTF-8
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
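Appending these lines blindly every time would duplicate them, so an idempotent append is safer. A sketch, shown here against a hypothetical stand-in file (`/tmp/profile.example`) instead of the real /etc/profile:

```shell
# Hypothetical stand-in for /etc/profile
profile=/tmp/profile.example
: > "$profile"

# Append each export line only if it is not already present
for line in 'export LANGUAGE=en_US.UTF-8' \
            'export LANG=en_US.UTF-8' \
            'export LC_ALL=en_US.UTF-8'; do
    grep -qxF "$line" "$profile" || printf '%s\n' "$line" >> "$profile"
done
```

Running the loop again leaves the file unchanged, so the setup can be repeated safely.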