The BOOTFS Problem On Solaris 11

This morning a colleague asked me to discuss a bootfs problem on Solaris 11. He was recovering the faulty boot disk of his Solaris server, and the recovery method was to restore from a ZFS snapshot. After booting the server from the Solaris DVD, he had already partitioned the root disk like this :

partition> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 0
Ready to label disk, continue? y

partition> p
Current partition table (unnamed):
Total disk cylinders available: 14087 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       0 - 14085      136.70GB    (14086/0/0) 286678272
  1 unassigned    wu       0                0         (0/0/0)             0
  2     backup    wu       0 - 14086      136.71GB    (14087/0/0) 286698624
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6 unassigned    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0

partition> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 0
Ready to label disk, continue? y

partition>

A disk allocated as the boot device on Solaris 11 must use an SMI label. But my colleague made a mistake when creating the rpool: he created it without specifying slice 0. The command he used was the following :

root@solaris:/# zpool create -o version=33 -f -o failmode=continue -R /a -m legacy -o cachefile=/etc/zfs/zpool.cache rpool c3t0d0

Because the argument used was only c3t0d0 (the whole disk), the resulting rpool looked like this :

root@solaris:/root# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        rpool     ONLINE       0     0     0
          c3t0d0  ONLINE       0     0     0

errors: No known data errors
root@solaris:/root#

As a result, my colleague ran into a problem when setting the bootfs property; he got the following error :

root@solaris:/root# zpool set bootfs=rpool/ROOT/solaris rpool
cannot set property for 'rpool': property 'bootfs' not supported on EFI labeled devices
root@solaris:/root#

It turned out that even though disk c3t0d0 had been given an SMI label earlier, the label changed again because of the incorrect zpool command. Creating the rpool without specifying a slice caused Solaris to relabel the disk with an EFI label. An EFI-labeled disk is marked by the presence of partition number 8, which can be seen with the prtvtoc command as in the following example :

root@solaris:/root# prtvtoc /dev/rdsk/c3t0d0
* /dev/rdsk/c3t0d0 partition map
*
* Dimensions:
*     512 bytes/sector
* 286739329 sectors
* 286739262 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector     Count    Sector
*          34       222       255
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      4    00        256 286722656 286722911
       8     11    00  286722912     16384 286739295
 root@solaris:/root#

At the end of the discussion I suggested simply destroying the rpool that had already been created. After destroying it, the rpool should be re-created with the disk specified all the way down to its slice (c3t0d0s0).
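
The cleanup itself is only sketched here (it was not part of the session I saw); it amounts to destroying the mislabeled pool and writing an SMI label back with format, just as at the start of this post :

zpool destroy rpool      # remove the pool that was created on the whole disk
format -e c3t0d0         # then write an SMI label again and re-create slice 0 from the label/partition menus

The rpool is then re-created on the slice device, for example :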

root@solaris:/root# zpool create -o version=33 -f -o failmode=continue -R /a -m legacy -o cachefile=/etc/zfs/zpool.cache rpool /dev/dsk/c3t0d0s0
root@solaris:/root#

root@solaris:/root# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c3t0d0s0  ONLINE       0     0     0

errors: No known data errors
root@solaris:/root#

To this day I do not know whether there is a way for a boot disk on a SPARC machine (running Solaris 11) to use an EFI-labeled disk.

Cloning Solaris 10 Server

This document shows the procedure for cloning a system running Solaris 10 that uses ZFS as its root file system. The idea is to take the mirror disks out of the master server (source) & use them to boot up another server (the target server). Later we fix the mirroring on both the source & target server.

Cloning Solaris 10 Using Mirror Disk

Suppose we have a server named vws01 with Solaris 10 SPARC installed. There are 4 disks on vws01, all configured with the ZFS file system.

  • slot#1 : c1t0d0s0 (part of rpool)
  • slot#2 : c1t1d0s0 (part of datapool)
  • slot#3 : c1t2d0s0 (part of rpool)
  • slot#4 : c1t3d0s0 (part of datapool)

Remember that the device names could be different on your system; just pay attention to the concept and adapt it to your device names.

We want to create another server similar to vws01, and we will utilize the ZFS mirror disks to clone it. There are several tasks to do to achieve that goal; I will split them into two big parts :

  1. Procedure on the source machine (the existing server)
  2. Procedure on the target machine (the clone server we want to create)

Procedure On The Source Machine

  1. Make sure all ZFS pools (rpool & datapool) are in a healthy state.
    root@vws01:/# zpool status -x rpool
    pool 'rpool' is healthy
    root@vws01:/# zpool status -x datapool
    pool 'datapool' is healthy
    root@vws01:/# zpool status
      pool: datapool
     state: ONLINE
     scan: resilvered 32.6G in 0h10m with 0 errors on Thu Apr 28 01:14:45 2011
    config:
    
            NAME          STATE     READ WRITE CKSUM
            datapool      ONLINE       0     0     0
              mirror-0    ONLINE       0     0     0
                c1t1d0s0  ONLINE       0     0     0
                c1t3d0s0  ONLINE       0     0     0
    
    errors: No known data errors
    
      pool: rpool
     state: ONLINE
     scan: resilvered 32.6G in 0h10m with 0 errors on Thu Apr 28 01:14:45 2011
    config:
    
            NAME          STATE     READ WRITE CKSUM
            rpool         ONLINE       0     0     0
              mirror-0    ONLINE       0     0     0
                c1t0d0s0  ONLINE       0     0     0
                c1t2d0s0  ONLINE       0     0     0
    
    errors: No known data errors
    root@vws01:/#
  2. Shut down the source machine.
    root@vws01:/# shutdown -g0 -y -i5
  3. Take out one disk from rpool (slot#3/c1t2d0) and one data disk from datapool (slot#4/c1t3d0). Put a label with clear information on each of them, for example :
    SOURCE_rpool_slot#3_c1t2d0
    SOURCE_datapool_slot#4_c1t3d0
  4. Boot the server using only one mirror leg for rpool (c1t0d0) & datapool (c1t1d0). During the boot process it is normal to get error messages like these :
    SUNW-MSG-ID: ZFS-8000-D3, TYPE: Fault, VER: 1, SEVERITY: Major
    EVENT-TIME: Thu Apr 28 01:04:17 GMT 2011
    PLATFORM: SUNW,Netra-T5220, CSN: -, HOSTNAME: vws01
    SOURCE: zfs-diagnosis, REV: 1.0
    EVENT-ID: be9eeb00-66ab-6cf0-e27a-acab436e849d
    DESC: A ZFS device failed.  Refer to http://sun.com/msg/ZFS-8000-D3 for more information.
    AUTO-RESPONSE: No automated response will occur.
    IMPACT: Fault tolerance of the pool may be compromised.
    REC-ACTION: Run 'zpool status -x' and replace the bad device.
    
    SUNW-MSG-ID: ZFS-8000-D3, TYPE: Fault, VER: 1, SEVERITY: Major
    EVENT-TIME: Thu Apr 28 01:04:17 GMT 2011
    PLATFORM: SUNW,Netra-T5220, CSN: -, HOSTNAME: vws01
    SOURCE: zfs-diagnosis, REV: 1.0
    EVENT-ID: fc0231b2-d6a8-608e-de11-afe0ebde7508
    DESC: A ZFS device failed.  Refer to http://sun.com/msg/ZFS-8000-D3 for more information.
    AUTO-RESPONSE: No automated response will occur.
    IMPACT: Fault tolerance of the pool may be compromised.
    REC-ACTION: Run 'zpool status -x' and replace the bad device.

    Those errors come from ZFS, informing us that there are faults on the ZFS pools. We can ignore them for a while, since both pools are still accessible & we intend to fix the broken mirrors soon.
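
    If you want to review those fault records later, the standard Solaris FMA tools can show them; this is only a sketch and not part of the original procedure :
    fmdump -v       # show the fault log entries behind the console messages
    fmadm faulty    # list the faults FMA still considers active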

  5. After the OS is ready, check the disk availability and the status of all pools. Solaris will only see 2 disks installed right now. You will see that both rpool and datapool are in DEGRADED state.
    root@vws01:/# format < /dev/null
    Searching for disks...done
    
    AVAILABLE DISK SELECTIONS:
           0. c1t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
              /pci@0/pci@0/pci@2/scsi@0/sd@0,0
           1. c1t1d0 <SEAGATE-ST930003SSUN300G-0868-279.40GB>
              /pci@0/pci@0/pci@2/scsi@0/sd@1,0
    Specify disk (enter its number):
    root@vws01:/#
    root@vws01:/# zpool status
      pool: datapool
     state: DEGRADED
    status: One or more devices could not be opened.  Sufficient replicas exist for
            the pool to continue functioning in a degraded state.
    action: Attach the missing device and online it using 'zpool online'.
       see: http://www.sun.com/msg/ZFS-8000-2Q
     scrub: none requested
    config:
    
            NAME          STATE     READ WRITE CKSUM
            datapool      DEGRADED     0     0     0
              mirror-0    DEGRADED     0     0     0
                c1t1d0s0  ONLINE       0     0     0
                c1t3d0s0  UNAVAIL      0     0     0  cannot open
    
    errors: No known data errors
    
      pool: rpool
     state: DEGRADED
    status: One or more devices could not be opened.  Sufficient replicas exist for
            the pool to continue functioning in a degraded state.
    action: Attach the missing device and online it using 'zpool online'.
       see: http://www.sun.com/msg/ZFS-8000-2Q
     scrub: none requested
    config:
    
            NAME          STATE     READ WRITE CKSUM
            rpool         DEGRADED     0     0     0
              mirror-0    DEGRADED     0     0     0
                c1t0d0s0  ONLINE       0     0     0
                c1t2d0s0  UNAVAIL      0     0     0  cannot open
    
    errors: No known data errors
    root@vws01:/#
  6. Plug in the replacement disks (they must be the same size as the previous ones); one disk goes into slot#3 (as c1t2d0) and the other into slot#4 (as c1t3d0). You can plug the disks in while the server is running if your server has a hot-plug disk controller; if not, please shut down the server before inserting the replacement disks. A quick way to confirm the system sees the new disks is sketched below.
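    The check below is only a sketch (device and controller names will differ on your system) :
    devfsadm                   # rebuild the /dev device links for the newly inserted disks
    cfgadm -al | grep dsk      # the new attachment points (c1t2d0, c1t3d0) should be listed
    format < /dev/null         # the new disks should now appear in the disk list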
  7. Assuming your system supports hot-plug disks, the next step is to detach the broken mirror legs. First we detach the missing disk from the rpool.
    root@vws01:/# zpool detach rpool c1t2d0s0

    After detaching the missing root mirror, rpool is now in ONLINE status again.

    root@vws01:/# zpool status
      pool: rpool
     state: ONLINE
     scrub: none requested
    config:
    
            NAME        STATE     READ WRITE CKSUM
            rpool       ONLINE       0     0     0
              c1t0d0s0  ONLINE       0     0     0
    
    errors: No known data errors
    root@vws01:/#
  8. Repeat the detach process for datapool.
    root@vws01:/# zpool detach datapool c1t3d0s0

    After detaching the missing data mirror, datapool is now in ONLINE status again.

    root@vws01:/# zpool status datapool
      pool: datapool
     state: ONLINE
     scrub: none requested
    config:
    
            NAME        STATE     READ WRITE CKSUM
            datapool    ONLINE       0     0     0
              c1t1d0s0  ONLINE       0     0     0
    
    errors: No known data errors
    root@vws01:/#
  9. Before attaching the replacement disk to rpool, we need to prepare the replacement disk (c1t2d0). We label the disk first (this step was verified on a SPARC system, but should be applicable on x86 systems too).
    root@vws01# format -e c1t2d0
    selecting c1t2d0
    [disk formatted]
    
    FORMAT MENU:
            disk       - select a disk
            type       - select (define) a disk type
            partition  - select (define) a partition table
            current    - describe the current disk
            format     - format and analyze the disk
            repair     - repair a defective sector
            label      - write label to the disk
            analyze    - surface analysis
            defect     - defect list management
            backup     - search for backup labels
            verify     - read and display labels
            save       - save new disk/partition definitions
            inquiry    - show vendor, product and revision
            scsi       - independent SCSI mode selects
            cache      - enable, disable or query SCSI disk cache
            volname    - set 8-character volume name
            !<cmd>     - execute <cmd>, then return
            quit
    format> label
    [0] SMI Label
    [1] EFI Label
    Specify Label type[0]: 0
    Ready to label disk, continue? yes
    
    format> quit
    root@vws01#
  10. Then we need to copy the partition table from the online root disk (c1t0d0) to the replacement disk (c1t2d0) :
    root@vws01:/# prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t2d0s2
    fmthard:  New volume table of contents now in place.
    root@vws01:/#
  11. Attach the replacement disk (c1t2d0) to the rpool.
    root@vws01:/# zpool attach rpool c1t0d0s0 c1t2d0s0
    Please be sure to invoke installboot(1M) to make 'c1t2d0s0' bootable.
    Make sure to wait until resilver is done before rebooting.
    root@vws01:/#

    Pay attention to the format of the zpool attach command :

    zpool attach <pool_name> <online_disk> <new_disk>
  12. After attaching the replacement disk to rpool, we can see that rpool is in resilver (resynchronization) status.
    root@vws01:/# zpool status rpool
      pool: rpool
     state: ONLINE
    status: One or more devices is currently being resilvered.  The pool will
            continue to function, possibly in a degraded state.
    action: Wait for the resilver to complete.
     scrub: resilver in progress for 0h0m, 1.90% done, 0h47m to go
    config:
    
            NAME          STATE     READ WRITE CKSUM
            rpool         ONLINE       0     0     0
              mirror-0    ONLINE       0     0     0
                c1t0d0s0  ONLINE       0     0     0
                c1t2d0s0  ONLINE       0     0     0  1010M resilvered
    
    errors: No known data errors
    root@vws01:/#
  13. Same as in step #9, we also need to prepare the replacement disk (c1t3d0) before attaching it to the datapool. We label the disk first (this step was verified on a SPARC system, but should be applicable on x86 systems too).
    root@vws01# format -e c1t3d0
    selecting c1t3d0
    [disk formatted]
    
    FORMAT MENU:
            disk       - select a disk
            type       - select (define) a disk type
            partition  - select (define) a partition table
            current    - describe the current disk
            format     - format and analyze the disk
            repair     - repair a defective sector
            label      - write label to the disk
            analyze    - surface analysis
            defect     - defect list management
            backup     - search for backup labels
            verify     - read and display labels
            save       - save new disk/partition definitions
            inquiry    - show vendor, product and revision
            scsi       - independent SCSI mode selects
            cache      - enable, disable or query SCSI disk cache
            volname    - set 8-character volume name
            !<cmd>     - execute <cmd>, then return
            quit
    format> label
    [0] SMI Label
    [1] EFI Label
    Specify Label type[0]: 0
    Ready to label disk, continue? yes
    
    format> quit
    root@vws01#
  14. Then we need to copy the partition table from the ONLINE data disk (c1t1d0) to the replacement disk (c1t3d0) :
    root@vws01:/# prtvtoc /dev/rdsk/c1t1d0s2 | fmthard -s - /dev/rdsk/c1t3d0s2
    fmthard: New volume table of contents now in place.
    root@vws01:/#
  15. Attach the replacement disk (c1t3d0) to the datapool.
    root@vws01:/# zpool attach datapool c1t1d0s0 c1t3d0s0
  16. After attaching the replacement disk to datapool, we can see that datapool is in resilver (resynchronization) status.
    root@vws01:/# zpool status
      pool: datapool
     state: ONLINE
    status: One or more devices is currently being resilvered.  The pool will
            continue to function, possibly in a degraded state.
    action: Wait for the resilver to complete.
     scrub: resilver in progress for 0h0m, 2.52% done, 0h9m to go
    config:
    
            NAME          STATE     READ WRITE CKSUM
            datapool      ONLINE       0     0     0
              mirror-0    ONLINE       0     0     0
                c1t1d0s0  ONLINE       0     0     0
                c1t3d0s0  ONLINE       0     0     0  760M resilvered
    
    errors: No known data errors
    root@vws01:/#
  17. After the resilver process on the rpool has completed, we need to reinstall the boot block on the new disk (c1t2d0). This is a crucial step; otherwise we can't use c1t2d0 as boot media even though it contains the Solaris OS files. On a SPARC system we reinstall the boot block on the new root mirror using the installboot command like this :
    root@vws01:/# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t2d0s0

    On an x86 system we do it using the installgrub command as shown below :

    /# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1d0s0
    stage1 written to partition 0 sector 0 (abs 16065)
    stage2 written to partition 0, 273 sectors starting at 50 (abs 16115)
    /#

Procedure On The Target Machine

This procedure is executed on the new server (the clone target). We will boot this new server using the 2 disks we took from the source machine vws01.

  1. Assume that the target machine is in a powered-off state (if it is not, please shut it down first).
  2. Remove all attached disks from the target machine.
  3. Unplug all network cables attached to the target machine. This step is important if the target machine is located in the same network segment, because we use the disks from vws01 and when it boots up the server will try to use the same IP address as the "original" vws01.
  4. Plug in the disks we took from the source machine (as labeled in step #3 of the previous section).
    SOURCE_rpool_slot#3_c1t2d0
    SOURCE_datapool_slot#4_c1t3d0
  5. With only those 2 disks attached, boot the server. Remember to set the system to boot from the disk in slot #3; otherwise the system will fail to boot, since it looks for the default disk in slot #1. A sketch of how this might look from the OpenBoot prompt is shown below.
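    The following is only a sketch from the OpenBoot ok prompt; check with devalias which alias (if any) maps to the disk in slot #3 (c1t2d0 here), since alias names and device paths differ per platform :
    ok probe-scsi-all
    ok devalias
    ok boot disk2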
  6. When the OS is ready, check disk availability & the status of all ZFS pools.
    root@vws01:/# format
    Searching for disks...done
    
    AVAILABLE DISK SELECTIONS:
           0. c1t2d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
              /pci@0/pci@0/pci@2/scsi@0/sd@2,0
           1. c1t3d0 <SEAGATE-ST930003SSUN300G-0868-279.40GB>
              /pci@0/pci@0/pci@2/scsi@0/sd@3,0
    Specify disk (enter its number): ^D
    root@vws01:/# 
    root@vws01:/# zpool status
      pool: rpool
     state: DEGRADED
    status: One or more devices could not be opened.  Sufficient replicas exist for
            the pool to continue functioning in a degraded state.
    action: Attach the missing device and online it using 'zpool online'.
       see: http://www.sun.com/msg/ZFS-8000-2Q
     scrub: none requested
    config:
    
            NAME          STATE     READ WRITE CKSUM
            rpool         DEGRADED     0     0     0
              mirror-0    DEGRADED     0     0     0
                c1t0d0s0  UNAVAIL      0     0     0  cannot open
                c1t2d0s0  ONLINE       0     0     0
    
    errors: No known data errors
    root@vws01:/#

    In case we only see rpool, we must import the other pool before proceeding to the next steps.

    root@vws01:/# zpool import
      pool: datapool
        id: 1782173212883254850
     state: DEGRADED
    status: The pool was last accessed by another system.
    action: The pool can be imported despite missing or damaged devices.  The
            fault tolerance of the pool may be compromised if imported.
       see: http://www.sun.com/msg/ZFS-8000-EY
    config:
    
            datapool      DEGRADED
              mirror-0    DEGRADED
                c1t1d0s0  UNAVAIL  cannot open
                c1t3d0s0  ONLINE
    root@vws01:/#
    root@vws01:/# zpool import -f datapool
    root@vws01:/#
    root@vws01:/# zpool status datapool
      pool: datapool
     state: DEGRADED
    status: One or more devices could not be opened.  Sufficient replicas exist for
            the pool to continue functioning in a degraded state.
    action: Attach the missing device and online it using 'zpool online'.
       see: http://www.sun.com/msg/ZFS-8000-2Q
     scrub: none requested
    config:
    
            NAME                      STATE     READ WRITE CKSUM
            datapool                  DEGRADED     0     0     0
              mirror-0                DEGRADED     0     0     0
                10987259943749998344  UNAVAIL      0     0     0  was /dev/dsk/c1t1d0s0
                c1t3d0s0              ONLINE       0     0     0
    
    errors: No known data errors
    root@vws01:/#
  7. The next step is to detach the unavailable disk from the rpool.
    root@vws01:/# zpool detach rpool c1t0d0s0

    After detaching the missing root mirror, rpool is now in ONLINE status again.

    root@vws01:/# zpool status -x rpool
    pool 'rpool' is healthy
    root@vws01:/# zpool status rpool
      pool: rpool
     state: ONLINE
     scrub: none requested
    config:
    
            NAME        STATE     READ WRITE CKSUM
            rpool       ONLINE       0     0     0
              c1t2d0s0  ONLINE       0     0     0
    
    errors: No known data errors
    root@vws01:/#
  8. Repeat the detach process for the unavailable disk on datapool.
    root@vws01:/# zpool detach datapool /dev/dsk/c1t1d0s0

    After detaching the missing data mirror, datapool is now in ONLINE status again.

    root@vws01:/# zpool status -x datapool
    pool 'datapool' is healthy
    root@vws01:/# zpool status datapool
      pool: datapool
     state: ONLINE
     scrub: none requested
    config:
    
            NAME        STATE     READ WRITE CKSUM
            datapool    ONLINE       0     0     0
              c1t3d0s0  ONLINE       0     0     0
    
    errors: No known data errors
    root@vws01:/#
  9. At this stage we probably want to change the hostname & modify the IP address of this new machine before continuing to fix the mirroring. In this example the new machine will use the hostname vws02. To change the hostname & IP address we need to edit several files :
    • /etc/hosts
    • /etc/hostname.*
    • /etc/nodename
    • /etc/netmasks
    • /etc/defaultrouter

    We can use the following commands to change the hostname (a sketch for the IP address change follows them) :

    root@vws01:/# find /etc -name "hosts" -exec perl -pi -e 's/vws01/vws02/g' {} \;
    root@vws01:/# find /etc -name "nodename" -exec perl -pi -e 's/vws01/vws02/g' {} \;
    root@vws01:/# find /etc -name "hostname*" -exec perl -pi -e 's/vws01/vws02/g' {} \;
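
    The IP address change is just an in-place edit of the same files; the following is only a sketch with hypothetical addresses (assume vws01 used 192.168.10.31 and vws02 will use 192.168.10.32) :
    perl -pi -e 's/192\.168\.10\.31/192.168.10.32/g' /etc/hosts
    # adjust /etc/netmasks and /etc/defaultrouter as well if the subnet or gateway changes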

    Reboot the server so it can pick up the new hostname.

    root@vws01:/# shutdown -g0 -y -i6
  10. Once the server has rebooted successfully, insert the replacement disks into slot#1 and slot#2. Invoke the devfsadm command to let Solaris detect the new disks.
    root@vws02:/# devfsadm
    root@vws02:/# format < /dev/null
    Searching for disks...done
    
    AVAILABLE DISK SELECTIONS:
           0. c1t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
              /pci@0/pci@0/pci@2/scsi@0/sd@0,0
           1. c1t1d0 <SUN300G cyl 46873 alt 2 hd 20 sec 625>
              /pci@0/pci@0/pci@2/scsi@0/sd@1,0
           2. c1t2d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
              /pci@0/pci@0/pci@2/scsi@0/sd@2,0
           3. c1t3d0 <SEAGATE-ST930003SSUN300G-0868-279.40GB>
              /pci@0/pci@0/pci@2/scsi@0/sd@3,0
    Specify disk (enter its number):
    root@vws02:/#

    Note that the new disks (c1t0d0 & c1t1d0) are already recognized by Solaris.

  11. Before fixing the mirroring on rpool we need to prepare the disk first (this step was verified on a SPARC system, but should be applicable on x86 systems too).
    root@vws02# format -e c1t0d0
    selecting c1t0d0
    [disk formatted]
    
    FORMAT MENU:
            disk       - select a disk
            type       - select (define) a disk type
            partition  - select (define) a partition table
            current    - describe the current disk
            format     - format and analyze the disk
            repair     - repair a defective sector
            label      - write label to the disk
            analyze    - surface analysis
            defect     - defect list management
            backup     - search for backup labels
            verify     - read and display labels
            save       - save new disk/partition definitions
            inquiry    - show vendor, product and revision
            scsi       - independent SCSI mode selects
            cache      - enable, disable or query SCSI disk cache
            volname    - set 8-character volume name
            !<cmd>     - execute <cmd>, then return
            quit
    format> label
    [0] SMI Label
    [1] EFI Label
    Specify Label type[0]: 0
    Ready to label disk, continue? yes
    
    format> quit
    root@vws02#

    Then we need to copy the partition table from the ONLINE root disk (c1t2d0) to the replacement disk (c1t0d0) :

    root@vws02:/# prtvtoc /dev/rdsk/c1t2d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2
    fmthard:  New volume table of contents now in place.
    root@vws02:/#
  12. Now we can attach the replacement disk to rpool.
    root@vws02:/# zpool attach rpool c1t2d0s0 c1t0d0s0
    Please be sure to invoke installboot(1M) to make 'c1t0d0s0' bootable.
    Make sure to wait until resilver is done before rebooting.
    root@vws02:/#
  13. After attaching the replacement disk to rpool, check that rpool is now resilvering.
    root@vws02:/# zpool status rpool
      pool: rpool
     state: ONLINE
    status: One or more devices is currently being resilvered.  The pool will
            continue to function, possibly in a degraded state.
    action: Wait for the resilver to complete.
     scrub: resilver in progress for 0h0m, 6.60% done, 0h12m to go
    config:
    
            NAME          STATE     READ WRITE CKSUM
            rpool         ONLINE       0     0     0
              mirror-0    ONLINE       0     0     0
                c1t2d0s0  ONLINE       0     0     0
                c1t0d0s0  ONLINE       0     0     0  3.43G resilvered
    
    errors: No known data errors
    root@vws02:/#
  14. Then we can continue to fix the mirroring on datapool. But before that we need to prepare the replacement disk (c1t1d0) first.
    root@vws02# format -e c1t1d0
    selecting c1t1d0
    [disk formatted]
    
    FORMAT MENU:
            disk       - select a disk
            type       - select (define) a disk type
            partition  - select (define) a partition table
            current    - describe the current disk
            format     - format and analyze the disk
            repair     - repair a defective sector
            label      - write label to the disk
            analyze    - surface analysis
            defect     - defect list management
            backup     - search for backup labels
            verify     - read and display labels
            save       - save new disk/partition definitions
            inquiry    - show vendor, product and revision
            scsi       - independent SCSI mode selects
            cache      - enable, disable or query SCSI disk cache
            volname    - set 8-character volume name
            !<cmd>     - execute <cmd>, then return
            quit
    format> label
    [0] SMI Label
    [1] EFI Label
    Specify Label type[0]: 0
    Ready to label disk, continue? yes
    
    format> quit
    root@vws02#

    Then we need to copy the partition table from the ONLINE data disk (c1t3d0) to the replacement disk (c1t1d0) :

    root@vws02:/# prtvtoc /dev/rdsk/c1t3d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
    fmthard:  New volume table of contents now in place.
    root@vws02:/#
  15. Now we can attach the replacement disk to datapool.
    root@vws02:/# zpool attach datapool c1t3d0s0 c1t1d0s0
  16. After attaching the replacement disk to datapool, check that datapool is now resilvering.
    root@vws02:/# zpool status datapool
      pool: datapool
     state: ONLINE
    status: One or more devices is currently being resilvered.  The pool will
            continue to function, possibly in a degraded state.
    action: Wait for the resilver to complete.
     scrub: resilver in progress for 0h0m, 5.28% done, 0h6m to go
    config:
    
            NAME          STATE     READ WRITE CKSUM
            datapool      ONLINE       0     0     0
              mirror-0    ONLINE       0     0     0
                c1t3d0s0  ONLINE       0     0     0
                c1t1d0s0  ONLINE       0     0     0  1.56G resilvered
    
    errors: No known data errors
    root@vws02:/#
  17. After the resilver process on the rpool has finished, we need to reinstall the boot block on the new disk (c1t0d0). This is a crucial step; otherwise we can't use c1t0d0 as boot media even though it contains the Solaris OS files. On a SPARC system we reinstall the boot block on the new root mirror using the installboot command like this :
    root@vws02:/# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0

    On an x86 system we do it using the installgrub command as shown below :

    # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
    stage1 written to partition 0 sector 0 (abs 16065)
    stage2 written to partition 0, 273 sectors starting at 50 (abs 16115)
    #
  18. Reboot the target machine once more to wrap everything up.
    root@vws02:/# shutdown -g0 -y -i6
  19. Plug in all network cables and check the network connectivity; a quick check is sketched below.
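    The check below is only a sketch; the gateway address is hypothetical and should be replaced with your default router :
    ifconfig -a              # the interfaces should be up and carry the vws02 address
    netstat -rn              # verify that the default route is in place
    ping 192.168.10.1        # hypothetical gateway address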

Cloning Solaris 10 Server With ZFS Mirror Disk by Tedy Tirtawidjaja

Convert UFS To ZFS

This article will show you how to migrate Solaris 10 from UFS to the ZFS file system. Please back up the existing UFS file system before executing this procedure. Suppose we have a server with Solaris 10 x86 installed on the first disk (c1t0d0) and a free second disk (c1t1d0). We will copy the file system to the second disk, which will be formatted with ZFS.

bash-3.00# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <DEFAULT cyl 2085 alt 2 hd 255 sec 63>
          /pci@0,0/pci1000,8000@14/sd@0,0
       1. c1t1d0 <DEFAULT cyl 2086 alt 2 hd 255 sec 63>
          /pci@0,0/pci1000,8000@14/sd@1,0
Specify disk (enter its number): ^D
bash-3.00#
bash-3.00# df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c1t0d0s0       12G   3.3G   8.4G    28%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   4.9G   548K   4.9G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
                        12G   3.3G   8.4G    28%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   4.9G   748K   4.9G     1%    /tmp
swap                   4.9G    28K   4.9G     1%    /var/run
bash-3.00# 

We need to prepare the second disk before creating the new ZFS file system on top of it. We will use the format command to set up the partitions on the second disk. Here is the step-by-step guide :

  1. Invoke the format command as shown below :

    bash-3.00# format -e c1t1d0
    selecting c1t1d0
    [disk formatted]
    
    
    FORMAT MENU:
            disk       - select a disk
            type       - select (define) a disk type
            partition  - select (define) a partition table
            current    - describe the current disk
            format     - format and analyze the disk
            fdisk      - run the fdisk program
            repair     - repair a defective sector
            label      - write label to the disk
            analyze    - surface analysis
            defect     - defect list management
            backup     - search for backup labels
            verify     - read and display labels
            save       - save new disk/partition definitions
            inquiry    - show vendor, product and revision
            scsi       - independent SCSI mode selects
            cache      - enable, disable or query SCSI disk cache
            volname    - set 8-character volume name
            !<cmd>     - execute <cmd>, then return
            quit
    format>
    
  2. Type p to display the current partition table of the c1t1d0 disk. If c1t1d0 is a brand new disk, you will probably see a warning about the fdisk table as shown below :

    format> p
    WARNING - This disk may be in use by an application that has
              modified the fdisk table. Ensure that this disk is
              not currently in use before proceeding to use fdisk.
    format>
    
  3. To continue creating a new partition table, type the fdisk command and answer y to accept the default partition table.

    format> fdisk
    No fdisk table exists. The default partition for the disk is:
    
      a 100% "SOLARIS System" partition
    
    Type "y" to accept the default partition,  otherwise type "n" to edit the
     partition table.
    y
    format>
    
  4. Type p to display the partition menu :

    format> p
    
    
    PARTITION MENU:
            0      - change `0' partition
            1      - change `1' partition
            2      - change `2' partition
            3      - change `3' partition
            4      - change `4' partition
            5      - change `5' partition
            6      - change `6' partition
            7      - change `7' partition
            9      - change `9' partition
            select - select a predefined table
            modify - modify a predefined partition table
            name   - name the current table
            print  - display the current table
            label  - write partition map and label to the disk
            !<cmd> - execute <cmd>, then return
            quit
    partition>
    
  5. Type p again to display the current partition layout :

    partition> p
    Current partition table (original):
    Total disk cylinders available: 2085 + 2 (reserved cylinders)
    
    Part      Tag    Flag     Cylinders        Size            Blocks
      0 unassigned    wm       0               0         (0/0/0)           0
      1 unassigned    wm       0               0         (0/0/0)           0
      2     backup    wu       0 - 2084       15.97GB    (2085/0/0) 33495525
      3 unassigned    wm       0               0         (0/0/0)           0
      4 unassigned    wm       0               0         (0/0/0)           0
      5 unassigned    wm       0               0         (0/0/0)           0
      6 unassigned    wm       0               0         (0/0/0)           0
      7 unassigned    wm       0               0         (0/0/0)           0
      8       boot    wu       0 -    0        7.84MB    (1/0/0)       16065
      9 unassigned    wm       0               0         (0/0/0)           0
    
    partition>
    
  6. In this example I want to assign the whole disk space to partition #0 :

    partition> 0
    Part      Tag    Flag     Cylinders        Size            Blocks
      0 unassigned    wm       0               0         (0/0/0)           0
    
    Enter partition id tag[unassigned]: 
    Enter partition permission flags[wm]: 
    Enter new starting cyl[0]: 0
    Enter partition size[0b, 0c, 0e, 0.00mb, 0.00gb]: 2084c
    partition>
    partition> p
    Current partition table (unnamed):
    Total disk cylinders available: 2085 + 2 (reserved cylinders)
    
    Part      Tag    Flag     Cylinders        Size            Blocks
      0 unassigned    wm       0 - 2083       15.96GB    (2084/0/0) 33479460
      1 unassigned    wm       0               0         (0/0/0)           0
      2     backup    wu       0 - 2084       15.97GB    (2085/0/0) 33495525
      3 unassigned    wm       0               0         (0/0/0)           0
      4 unassigned    wm       0               0         (0/0/0)           0
      5 unassigned    wm       0               0         (0/0/0)           0
      6 unassigned    wm       0               0         (0/0/0)           0
      7 unassigned    wm       0               0         (0/0/0)           0
      8       boot    wu       0 -    0        7.84MB    (1/0/0)       16065
      9 unassigned    wm       0               0         (0/0/0)           0
    
    partition>
    
  7. The last thing to do is to make the changes permanent by invoking the label command :

    partition> label
    [0] SMI Label
    [1] EFI Label
    Specify Label type[0]: 0
    Ready to label disk, continue? yes
    
    partition> 
    
  8. Finish by typing q twice as shown below :

    partition> q
    
    
    FORMAT MENU:
            disk       - select a disk
            type       - select (define) a disk type
            partition  - select (define) a partition table
            current    - describe the current disk
            format     - format and analyze the disk
            fdisk      - run the fdisk program
            repair     - repair a defective sector
            label      - write label to the disk
            analyze    - surface analysis
            defect     - defect list management
            backup     - search for backup labels
            verify     - read and display labels
            save       - save new disk/partition definitions
            inquiry    - show vendor, product and revision
            scsi       - independent SCSI mode selects
            cache      - enable, disable or query SCSI disk cache
            volname    - set 8-character volume name
            !<cmd>     - execute <cmd>, then return
            quit
    format> q
    bash-3.00# 
    

Since the second disk is now prepared, we can continue to create the new ZFS pool, called rpool. We will use the zpool create command to make the pool.
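
Before creating the pool, you can double-check the label that was just written; this is only a quick sanity check and not part of the original transcript :

prtvtoc /dev/rdsk/c1t1d0s2    # slice 0 should now show the size we just assigned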

bash-3.00# zpool create -f rpool c1t1d0s0

If we get an error about an invalid vdev specification, we can move on by adding the -f option :

bash-3.00# zpool create rpool c1t1d0s0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c1t1d0s0 overlaps with /dev/dsk/c1t1d0s8
bash-3.00# zpool create -f rpool c1t1d0s0
bash-3.00# 

We can use zpool list and zpool status to check the new pool we just created :

bash-3.00# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rpool  15.9G    94K  15.9G     0%  ONLINE  -
bash-3.00# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t1d0s0  ONLINE       0     0     0

errors: No known data errors
bash-3.00# 

To start copying the whole root partition into the new rpool, we will use the Solaris 10 Live Upgrade feature. We will use lucreate to create a ZFS boot environment inside the rpool; the lucreate command will automatically copy the whole root partition into the new pool. It might take some time to finish, depending on the size of the root partition (it may appear to hang on the Copying stage for a while; a way to monitor progress is sketched after the output below).

bash-3.00# lucreate -n zfsBE -p rpool
Checking GRUB menu...
Analyzing system configuration.
No name for current boot environment.
INFORMATION: The current boot environment is not named - assigning name <c1t0d0s0>.
Current boot environment is named <c1t0d0s0>.
Creating initial configuration for primary boot environment <c1t0d0s0>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <c1t0d0s0> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <c1t0d0s0> file systems with the file 
system(s) you specified for the new boot environment. Determining which 
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfsBE>.
Source boot environment is <c1t0d0s0>.
Creating boot environment <zfsBE>.
Creating file systems on boot environment <zfsBE>.
Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/zfsBE>.
Populating file systems on boot environment <zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <zfsBE>.
Creating compare database for file system </rpool/ROOT>.
Creating compare database for file system </>.
Updating compare databases on boot environment <zfsBE>.
Making boot environment <zfsBE> bootable.
Updating bootenv.rc on ABE <zfsBE>.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <zfsBE> in GRUB menu
Population of boot environment <zfsBE> successful.
Creation of boot environment <zfsBE> successful.
bash-3.00# 
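
While the Copying stage runs, you can confirm from a second terminal that the copy is actually progressing by watching space usage grow on the new pool; this is only a sketch, not part of the original transcript :

zpool list rpool     # the USED column should keep growing during the copy
zfs list -r rpool    # per-dataset view of the space being consumed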

To check the status of the new boot environment we can use the lustatus command :

bash-3.00# lustatus 
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
c1t0d0s0                   yes      yes    yes       no     -         
zfsBE                      yes      no     no        yes    -         
bash-3.00# 

To make the new boot environment (zfsBE) active we will use the luactivate command as shown below :

bash-3.00# luactivate zfsBE
Generating boot-sign, partition and slice information for PBE <c1t0d0s0>
A Live Upgrade Sync operation will be performed on startup of boot environment <zfsBE>.

Generating boot-sign for ABE <zfsBE>
NOTE: File </etc/bootsign> not found in top level dataset for BE <zfsBE>
Generating partition and slice information for ABE <zfsBE>
Boot menu exists.
Generating multiboot menu entries for PBE.
Generating multiboot menu entries for ABE.
Disabling splashimage
Re-enabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.

**********************************************************************

The target boot environment has been activated. It will be used when you 
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You 
MUST USE either the init or the shutdown command when you reboot. If you 
do not use either init or shutdown, the system will not boot using the 
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process 
needs to be followed to fallback to the currently working boot environment:

1. Boot from the Solaris failsafe or boot in Single User mode from Solaris 
Install CD or Network.

2. Mount the Parent boot environment root slice to some directory (like 
/mnt). You can use the following command to mount:

     mount -Fufs /dev/dsk/c1t0d0s0 /mnt

3. Run <luactivate> utility with out any arguments from the Parent boot 
environment root slice, as shown below:

     /mnt/sbin/luactivate

4. luactivate, activates the previous working boot environment and 
indicates the result.

5. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File </etc/lu/installgrub.findroot> propagation successful
File </etc/lu/stage1.findroot> propagation successful
File </etc/lu/stage2.findroot> propagation successful
File </etc/lu/GRUB_capability> propagation successful
Deleting stale GRUB loader from all BEs.
File </etc/lu/installgrub.latest> deletion successful
File </etc/lu/stage1.latest> deletion successful
File </etc/lu/stage2.latest> deletion successful
Activation of boot environment <zfsBE> successful.
bash-3.00# 

The last thing to do is to install the boot loader program on the master boot record of c1t1d0s0 :

bash-3.00# installgrub -fm /boot/grub/stage1  /boot/grub/stage2 /dev/rdsk/c1t1d0s0

Then we need to reboot the server using the init command :

bash-3.00# init 6
updating /platform/i86pc/boot_archive
propagating updated GRUB menu
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <zfsBE> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
File </etc/lu/GRUB_backup_menu> propagation successful
File </etc/lu/menu.cksum> propagation successful
File </sbin/bootadm> propagation successful
bash-3.00# 

The system will automatically boot from the second disk (c1t1d0) and use the ZFS boot environment. Using the df command we can easily verify that the system now runs on the ZFS file system.

# df -h
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/zfsBE        16G   3.4G   6.7G    35%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   4.3G   356K   4.3G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
                        10G   3.4G   6.7G    35%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   4.3G    40K   4.3G     1%    /tmp
swap                   4.3G    24K   4.3G     1%    /var/run
rpool                   16G    29K   6.7G     1%    /rpool
rpool/ROOT              16G    18K   6.7G     1%    /rpool/ROOT
# 
# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rpool  15.9G  4.95G  10.9G    31%  ONLINE  -
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             8.96G  6.67G  29.5K  /rpool
rpool/ROOT        3.45G  6.67G    18K  /rpool/ROOT
rpool/ROOT/zfsBE  3.45G  6.67G  3.45G  /
rpool/dump        1.50G  6.67G  1.50G  -
rpool/swap        4.01G  10.7G    16K  -
# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t1d0s0  ONLINE       0     0     0

errors: No known data errors
# 

We can delete the old UFS boot environment using the ludelete command.

bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
c1t0d0s0                   yes      no     no        yes    -         
zfsBE                      yes      yes    yes       no     -         
bash-3.00#
bash-3.00# ludelete -f c1t0d0s0
System has findroot enabled GRUB
Updating GRUB menu default setting
Changing GRUB menu default setting to <0>
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <zfsBE> as <mount-point>//boot/grub/menu.lst.prev.
File </etc/lu/GRUB_backup_menu> propagation successful
Successfully deleted entry from GRUB menu
Determining the devices to be marked free.
Updating boot environment configuration database.
Updating boot environment description database on all BEs.
Updating all boot environment configuration databases.
Boot environment <c1t0d0s0> deleted.
bash-3.00# 

Now that disk #0 is unused, we can use it as a mirror disk by attaching c1t0d0 to the existing rpool.

bash-3.00# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <DEFAULT cyl 2085 alt 2 hd 255 sec 63>
          /pci@0,0/pci1000,8000@14/sd@0,0
       1. c1t1d0 <DEFAULT cyl 2085 alt 2 hd 255 sec 63>
          /pci@0,0/pci1000,8000@14/sd@1,0
Specify disk (enter its number): ^D
bash-3.00# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t1d0s0  ONLINE       0     0     0

errors: No known data errors
bash-3.00# 

But first we need to copy the partition layout of c1t1d0 to c1t0d0 :

bash-3.00# prtvtoc /dev/rdsk/c1t1d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2
fmthard: Partition 0 overlaps partition 8. Overlap is allowed
        only on partition on the full disk partition).
fmthard: Partition 8 overlaps partition 0. Overlap is allowed
        only on partition on the full disk partition).
fmthard:  New volume table of contents now in place.
bash-3.00# 

Then we can attach c1t0d0 to the rpool using the following command :

bash-3.00# zpool attach -f rpool c1t1d0s0 c1t0d0s0

Once it is attached to the rpool, the system will synchronize the disks (in ZFS terms this is called the resilvering process). Don't reboot the system before the resilvering process has completed. We can monitor the resilvering process using the zpool status command :

bash-3.00# zpool status
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h3m, 9.23% done, 0h35m to go
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0

errors: No known data errors
bash-3.00# 

It will probably take a while for the resilvering process to finish :

bash-3.00# zpool status
  pool: rpool
 state: ONLINE
 scrub: resilver completed after 0h13m with 0 errors on Sun Sep  1 11:58:52 2013
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0

errors: No known data errors
bash-3.00# 

The last thing to do is to install the boot loader on c1t0d0 :

bash-3.00# installgrub -fm /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0

Migrate UFS To ZFS by Tedy Tirtawidjaja

Sharing A ZFS Dataset Over NFS

I recently tried sharing a ZFS dataset on Solaris 10 over NFS (Network File System). A few things I needed to prepare were :

  1. Which dataset am I going to share?
  2. Which servers will be allowed to access that dataset? Who is the NFS client? What is the NFS client's IP address?
  3. What permissions will be granted? Read only? Read-write?

As an example, suppose I have a server running Solaris 10 (hostname nfs-server) and I want to share a ZFS dataset with another computer. I will create a new ZFS dataset, rpool/testnfs, and mount it at the /sharenfs directory. Here is the command I used to create the dataset :

root@nfs-server:/# zfs create -o mountpoint=/sharenfs rpool/testnfs

To check the ZFS dataset that was just created I used this command :

root@nfs-server:/# df -h | grep sharenfs
rpool/testnfs          134G    31K    83G     1%    /sharenfs
root@nfs-server:/#

To be able to share that ZFS dataset, I first needed to enable the NFS server service. To check whether the NFS server was already active I used the following command :

root@nfs-server:/# svcs | grep nfs
online         Jun_25   svc:/network/nfs/rquota:default
offline        Jun_25   svc:/network/nfs/status:default
offline        Jun_25   svc:/network/nfs/nlockmgr:default
offline        Jun_25   svc:/network/nfs/cbd:default
offline        Jun_25   svc:/network/nfs/mapid:default
offline        Jun_25   svc:/network/nfs/client:default
offline        16:59:36 svc:/network/nfs/server:default
root@nfs-server:/#

It turned out the NFS service (network/nfs/server) was not active yet, so I needed to enable it with the following command :

root@nfs-server:/# svcadm  -v enable -r network/nfs/server
svc:/network/nfs/server:default enabled.
svc:/milestone/network enabled.
svc:/network/loopback enabled.
svc:/network/physical enabled.
svc:/network/nfs/nlockmgr enabled.
svc:/network/rpc/bind enabled.
svc:/system/filesystem/minimal enabled.
svc:/system/filesystem/usr enabled.
svc:/system/boot-archive enabled.
svc:/system/filesystem/root enabled.
svc:/system/device/local enabled.
svc:/system/identity:node enabled.
svc:/system/sysidtool:net enabled.
svc:/milestone/single-user:default enabled.
svc:/milestone/devices enabled.
svc:/system/device/fc-fabric enabled.
svc:/system/sysevent enabled.
svc:/system/manifest-import enabled.
svc:/system/filesystem/local:default enabled.
svc:/milestone/single-user enabled.
svc:/system/filesystem/minimal:default enabled.
svc:/system/identity:domain enabled.
svc:/network/nfs/status enabled.
svc:/system/filesystem/local enabled.
root@nfs-server:/#

Now the NFS services are online :

root@nfs-server:/# svcs | grep nfs
online         Jun_25   svc:/network/nfs/rquota:default
online         17:05:00 svc:/network/nfs/status:default
online         17:05:01 svc:/network/nfs/cbd:default
online         17:05:01 svc:/network/nfs/mapid:default
online         17:05:01 svc:/network/nfs/nlockmgr:default
online         17:05:01 svc:/network/nfs/client:default
online         17:05:02 svc:/network/nfs/server:default
root@nfs-server:/#

The next step is to share the rpool/testnfs dataset (in other words, to share the /sharenfs directory), which is done with the following command :

root@nfs-server:/# zfs set sharenfs='rw=10.22.237.113,root=10.22.237.113' rpool/testnfs

In the command above I also included options to grant access to the other computer that will act as the NFS client. The computer with IP 10.22.237.113 is allowed to access the ZFS dataset rpool/testnfs with read-write access. In other words, the client can read the data inside /sharenfs and can also store data in that directory.

To make sure the /sharenfs directory is actually being shared, I used the share command as in the following example :

root@nfs-server:/# share
-               /sharenfs   sec=sys,rw=10.22.237.113,root=10.22.237.113   ""  
root@nfs-server:/#

Keep in mind that the "zfs set sharenfs" command is not permanent (it is lost after a restart). To make this share permanent, we need to register the directory to be shared in the /etc/dfs/dfstab file. For example :

root@nfs-server:/# cat /etc/dfs/dfstab 

share -F nfs -o rw=10.22.237.113,root=10.22.237.113 /sharenfs

root@nfs-server:/# 
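
After adding the entry, the shares listed in /etc/dfs/dfstab can be published without a reboot; this is only a quick sketch :

shareall    # (re)share everything listed in /etc/dfs/dfstab
share       # verify that /sharenfs appears in the list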

Now let's switch to the NFS client (the computer with IP 10.22.237.113). This computer will access the ZFS dataset rpool/testnfs from nfs-server. The IP of nfs-server is 10.22.250.181. The command I used is the following :

root@client:/# mount -F nfs 10.22.250.181:/sharenfs /nfs

The /sharenfs directory will be mounted on the /nfs directory.

Here is what it looks like when the NFS access is working properly :

root@client:/# df -h | grep nfs
10.22.250.181:/sharenfs
                        83G    31K    83G     1%    /nfs
root@client:/#

With that, the client can now access the ZFS dataset rpool/testnfs that belongs to nfs-server.
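
If the client should mount this share automatically at boot, an entry can be added to the client's /etc/vfstab; this is only a sketch (fields: device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, options) :

10.22.250.181:/sharenfs  -  /nfs  nfs  -  yes  rw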

Here is an example of an error that may appear when accessing the NFS directory :

root@client:/# mount -F nfs 10.22.250.181:/sharenfs /nfs
nfs mount: 10.22.250.181: : RPC: Program not registered
nfs mount: retrying: /nfs
^C
root@client:/#

If an error like the one above appears, most likely the NFS service on the server has not been enabled yet (see again the example of enabling the NFS service with the svcadm command).

Another example of an error when trying to access the NFS directory is the following :

root@client:/# mount -F nfs 10.22.250.181:/sharenfs /nfs
nfs mount: mount: /nfs: I/O error
root@client:/# 

If an error like the one above appears, you can be sure the sharing process on the NFS server did not succeed. Please check the NFS server again with the share command (as shown in the earlier example).