On Unix-like operating systems, the locale settings control, among other things, the language & character encoding of the environment. For example, I can configure my Linux system to use “English US” (en_US) with the UTF-8 encoding. I recently ran into a problem with these settings while trying to install a package. The error looked roughly like this :
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = "en_US:en",
LC_ALL = (unset),
LC_CTYPE = "UTF-8",
LANG = "en_US.UTF-8"
are supported and installed on your system.
Here is the complete output when I tried to install the mdadm package on Debian 7 :
root@debian01-new:~# apt-get install mdadm initramfs-tools
Reading package lists... Done
Building dependency tree
Reading state information... Done
initramfs-tools is already the newest version.
The following NEW packages will be installed:
mdadm
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/566 kB of archives.
After this operation, 1098 kB of additional disk space will be used.
Do you want to continue [Y/n]? Y
Media change: please insert the disc labeled
'Debian GNU/Linux 7.1.0 _Wheezy_ - Official i386 CD Binary-1 20130615-21:54'
in the drive '/media/cdrom/' and press enter
Can't set locale; make sure $LC_* and $LANG are correct!
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = "en_US:en",
LC_ALL = (unset),
LC_CTYPE = "UTF-8",
LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
Preconfiguring packages ...
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = "en_US:en",
LC_ALL = (unset),
LC_CTYPE = "UTF-8",
LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = "en_US:en",
LC_ALL = (unset),
LC_CTYPE = "UTF-8",
LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
/usr/bin/locale: Cannot set LC_CTYPE to default locale: No such file or directory
/usr/bin/locale: Cannot set LC_ALL to default locale: No such file or directory
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = "en_US:en",
LC_ALL = (unset),
LC_CTYPE = "UTF-8",
LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = "en_US:en",
LC_ALL = (unset),
LC_CTYPE = "UTF-8",
LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
Selecting previously unselected package mdadm.
(Reading database ... 45904 files and directories currently installed.)
Unpacking mdadm (from .../m/mdadm/mdadm_3.2.5-5_i386.deb) ...
Processing triggers for man-db ...
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
Setting up mdadm (3.2.5-5) ...
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
Generating array device nodes... done.
Generating mdadm.conf... done.
update-initramfs: deferring update (trigger activated)
[ ok ] Assembling MD arrays...done (no arrays found in config file or automatically).
[ ok ] Starting MD monitoring service: mdadm --monitor.
Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-3.2.0-4-686-pae
W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
W: mdadm: no arrays defined in configuration file.
root@debian01-new:~#
To get rid of those “locale” errors, I needed to run the following commands :
root@debian01-new:~# export LANGUAGE=en_US.UTF-8
root@debian01-new:~# export LANG=en_US.UTF-8
root@debian01-new:~# export LC_ALL=en_US.UTF-8
root@debian01-new:~# locale-gen en_US.UTF-8
Generating locales (this might take a while)...
  en_US.UTF-8... done
Generation complete.
root@debian01-new:~# dpkg-reconfigure locales
Generating locales (this might take a while)...
  en_US.UTF-8... done
Generation complete.
root@debian01-new:~#
After reconfiguring the language & encoding, Debian now uses the correct parameters.
root@debian01-new:~# locale
LANG=en_US.UTF-8
LANGUAGE=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=en_US.UTF-8
root@debian01-new:~#
To make this setting permanent, I needed to add the following 3 lines to /etc/profile :
export LANGUAGE=en_US.UTF-8
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
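The snippet below sketches an idempotent way to add those exports, demonstrated on a temporary file instead of the real /etc/profile so it is safe to run anywhere (the add_line helper is mine, not part of the original procedure) :

```shell
# Demo: append the three locale exports to a profile script only if they
# are not already present (a temp file stands in for /etc/profile here).
PROFILE=$(mktemp)

add_line() {
    # append $1 unless an identical line already exists
    grep -qxF "$1" "$PROFILE" || echo "$1" >> "$PROFILE"
}

add_line 'export LANGUAGE=en_US.UTF-8'
add_line 'export LANG=en_US.UTF-8'
add_line 'export LC_ALL=en_US.UTF-8'

# running it again must not create duplicates
add_line 'export LANG=en_US.UTF-8'

cat "$PROFILE"
```

Guarding the append this way means the fix can be re-run after every locale change without /etc/profile filling up with duplicate lines.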
This document will show the procedure for cloning a system running Solaris 10 that uses ZFS as the root file system. The idea is to take the mirror disks out of the master server (the source) & use them to boot up another server (the target). Afterwards we fix the mirroring on both the source & the target server.

Suppose we have 1 server named vws01 with Solaris 10 SPARC installed. There are 4 disks on vws01, configured with the ZFS file system :
slot#1 : c1t0d0s0 (part of rpool)
slot#2 : c1t1d0s0 (part of datapool)
slot#3 : c1t2d0s0 (part of rpool)
slot#4 : c1t3d0s0 (part of datapool)
Remember that the device names could be different on your system; just pay attention to the concept and adapt it to your own device names.
We want to create another server which is similar to vws01, utilizing the ZFS mirror disks to clone it. There are a few tasks to do to achieve that goal; I will split the steps into two big parts : preparing the source server, then bringing up the cloned (target) server.
First, make sure both pools (rpool & datapool) are in a healthy state.
root@vws01:/# zpool status -x rpool
pool 'rpool' is healthy
root@vws01:/# zpool status -x datapool
pool 'datapool' is healthy
root@vws01:/# zpool status
pool: datapool
state: ONLINE
scan: resilvered 32.6G in 0h10m with 0 errors on Thu Apr 28 01:14:45 2011
config:
NAME STATE READ WRITE CKSUM
datapool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c1t1d0s0 ONLINE 0 0 0
c1t3d0s0 ONLINE 0 0 0
errors: No known data errors
pool: rpool
state: ONLINE
scan: resilvered 32.6G in 0h10m with 0 errors on Thu Apr 28 01:14:45 2011
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c1t0d0s0 ONLINE 0 0 0
c1t2d0s0 ONLINE 0 0 0
errors: No known data errors
root@vws01:/#
Shut down the server first :
root@vws01:/# shutdown -g0 -y -i5
Then pull out one root disk from rpool (slot#3/c1t2d0) and one data disk from datapool (slot#4/c1t3d0). Give each disk a label with clear information on it, for example :
SOURCE_rpool_slot#3_c1t2d0
SOURCE_datapool_slot#4_c1t3d0
Boot the source server up again with the remaining disks : rpool (c1t0d0) & datapool (c1t1d0). During the boot process it is normal to get error messages like these :
SUNW-MSG-ID: ZFS-8000-D3, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Thu Apr 28 01:04:17 GMT 2011
PLATFORM: SUNW,Netra-T5220, CSN: -, HOSTNAME: vws01
SOURCE: zfs-diagnosis, REV: 1.0
EVENT-ID: be9eeb00-66ab-6cf0-e27a-acab436e849d
DESC: A ZFS device failed. Refer to http://sun.com/msg/ZFS-8000-D3 for more information.
AUTO-RESPONSE: No automated response will occur.
IMPACT: Fault tolerance of the pool may be compromised.
REC-ACTION: Run 'zpool status -x' and replace the bad device.

SUNW-MSG-ID: ZFS-8000-D3, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Thu Apr 28 01:04:17 GMT 2011
PLATFORM: SUNW,Netra-T5220, CSN: -, HOSTNAME: vws01
SOURCE: zfs-diagnosis, REV: 1.0
EVENT-ID: fc0231b2-d6a8-608e-de11-afe0ebde7508
DESC: A ZFS device failed. Refer to http://sun.com/msg/ZFS-8000-D3 for more information.
AUTO-RESPONSE: No automated response will occur.
IMPACT: Fault tolerance of the pool may be compromised.
REC-ACTION: Run 'zpool status -x' and replace the bad device.
Those errors come from ZFS, informing us that there are faults in the ZFS pools. We can ignore them for a while, since both pools are still accessible & we intend to fix the broken mirrors soon.
Check the remaining disks, then verify that rpool and datapool are in DEGRADED state :
root@vws01:/# format < /dev/null
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
/pci@0/pci@0/pci@2/scsi@0/sd@0,0
1. c1t1d0 <SEAGATE-ST930003SSUN300G-0868-279.40GB>
/pci@0/pci@0/pci@2/scsi@0/sd@1,0
Specify disk (enter its number):
root@vws01:/#
root@vws01:/# zpool status
pool: datapool
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
see: http://www.sun.com/msg/ZFS-8000-2Q
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
datapool DEGRADED 0 0 0
mirror-0 DEGRADED 0 0 0
c1t1d0s0 ONLINE 0 0 0
c1t3d0s0 UNAVAIL 0 0 0 cannot open
errors: No known data errors
pool: rpool
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
see: http://www.sun.com/msg/ZFS-8000-2Q
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
rpool DEGRADED 0 0 0
mirror-0 DEGRADED 0 0 0
c1t0d0s0 ONLINE 0 0 0
c1t2d0s0 UNAVAIL 0 0 0 cannot open
errors: No known data errors
root@vws01:/#
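When several pools degrade at once, it helps to list exactly which devices are UNAVAIL before detaching anything. The sketch below (my own helper, not part of the original procedure) filters captured `zpool status` text with awk; the heredoc reuses the status table shown above :

```shell
# Extract the names of UNAVAIL devices from zpool status output:
# in the config table, column 1 is the device name and column 2 its state.
unavail=$(awk '$2 == "UNAVAIL" { print $1 }' <<'EOF'
        NAME        STATE     READ WRITE CKSUM
        datapool    DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            c1t1d0s0  ONLINE     0     0     0
            c1t3d0s0  UNAVAIL    0     0     0  cannot open
        rpool       DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            c1t0d0s0  ONLINE     0     0     0
            c1t2d0s0  UNAVAIL    0     0     0  cannot open
EOF
)
printf '%s\n' "$unavail"
```

On a live system you would pipe `zpool status` straight into the same awk filter.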
Insert the replacement disks : one goes into slot#3 (as c1t2d0) and the other goes into slot#4 (as c1t3d0). You can plug the disks in while the server is running if your controller supports hot-plugging; if it doesn't, shut the server down before inserting the replacement disks. Next, detach the missing mirror from rpool :
root@vws01:/# zpool detach rpool c1t2d0s0
After detaching the missing root mirror, rpool is now in ONLINE state again.
root@vws01:/# zpool status
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
c1t0d0s0 ONLINE 0 0 0
errors: No known data errors
root@vws01:/#
Then detach the missing mirror from datapool :
root@vws01:/# zpool detach datapool c1t3d0s0
After detaching the missing data mirror, datapool is now in ONLINE state again.
root@vws01:/# zpool status datapool
pool: datapool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
datapool ONLINE 0 0 0
c1t1d0s0 ONLINE 0 0 0
errors: No known data errors
root@vws01:/#
Before re-attaching the replacement disk (c1t2d0) to rpool, we need to label the disk first (this step was verified on a SPARC system, but should be applicable on x86 systems too).
root@vws01# format -e c1t2d0
selecting c1t2d0
[disk formatted]
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
scsi - independent SCSI mode selects
cache - enable, disable or query SCSI disk cache
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 0
Ready to label disk, continue? yes
format> quit
root@vws01#
Then we need to copy the partition table from the ONLINE root disk (c1t0d0) to the replacement disk (c1t2d0) :
root@vws01:/# prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t2d0s2
fmthard: New volume table of contents now in place.
root@vws01:/#
Now attach the replacement disk (c1t2d0) to the rpool.
root@vws01:/# zpool attach rpool c1t0d0s0 c1t2d0s0
Please be sure to invoke installboot(1M) to make 'c1t2d0s0' bootable.
Make sure to wait until resilver is done before rebooting.
root@vws01:/#
Pay attention to the format of the zpool attach command :
zpool attach <pool_name> <online_disk> <new_disk>
Right after attaching the new disk to rpool, we can see that rpool is in resilver (resynchronization) status.
root@vws01:/# zpool status rpool
pool: rpool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 0h0m, 1.90% done, 0h47m to go
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c1t0d0s0 ONLINE 0 0 0
c1t2d0s0 ONLINE 0 0 0 1010M resilvered
errors: No known data errors
root@vws01:/#
We do the same for the replacement data disk (c1t3d0) before attaching it to the datapool. We need to label the disk first (this step was verified on a SPARC system, but should be applicable on x86 systems too).
root@vws01# format -e c1t3d0
selecting c1t3d0
[disk formatted]
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
scsi - independent SCSI mode selects
cache - enable, disable or query SCSI disk cache
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 0
Ready to label disk, continue? yes
format> quit
root@vws01#
Then we need to copy the partition table from the ONLINE data disk (c1t1d0) to the replacement disk (c1t3d0) :
root@vws01:/# prtvtoc /dev/rdsk/c1t1d0s2 | fmthard -s - /dev/rdsk/c1t3d0s2
fmthard: New volume table of contents now in place.
root@vws01:/#
Now attach the replacement disk (c1t3d0) to the datapool.
root@vws01:/# zpool attach datapool c1t1d0s0 c1t3d0s0
Right after attaching the new disk to datapool, we can see that datapool is in resilver (resynchronization) status.
root@vws01:/# zpool status
pool: datapool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 0h0m, 2.52% done, 0h9m to go
config:
NAME STATE READ WRITE CKSUM
datapool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c1t1d0s0 ONLINE 0 0 0
c1t3d0s0 ONLINE 0 0 0 760M resilvered
errors: No known data errors
root@vws01:/#
Once the resilver on rpool has completed, we need to reinstall the boot block on the new disk (c1t2d0). This is a crucial step; otherwise we can't use c1t2d0 as boot media even though it contains the Solaris OS installation files. On a SPARC system, we reinstall the boot block on the new root mirror using the installboot command like this :
root@vws01:/# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t2d0s0
On an x86 system we do it using the installgrub command as shown below :
/# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1d0s0
stage1 written to partition 0 sector 0 (abs 16065)
stage2 written to partition 0, 273 sectors starting at 50 (abs 16115)
/#
This procedure is executed on the new server (the clone target). We will boot this new server using the 2 disks we took from the source machine vws01.
Note that the cloned disks still carry the identity of vws01, so when it boots up the target server will try to use the same IP address as the “original” vws01. Insert the two labeled disks into the target server :
SOURCE_rpool_slot#3_c1t2d0
SOURCE_datapool_slot#4_c1t3d0
root@vws01:/# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t2d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
/pci@0/pci@0/pci@2/scsi@0/sd@2,0
1. c1t3d0 <SEAGATE-ST930003SSUN300G-0868-279.40GB>
/pci@0/pci@0/pci@2/scsi@0/sd@3,0
Specify disk (enter its number): ^D
root@vws01:/#
root@vws01:/# zpool status
pool: rpool
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
see: http://www.sun.com/msg/ZFS-8000-2Q
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
rpool DEGRADED 0 0 0
mirror-0 DEGRADED 0 0 0
c1t0d0s0 UNAVAIL 0 0 0 cannot open
c1t2d0s0 ONLINE 0 0 0
errors: No known data errors
root@vws01:/#
If we only see rpool, then we must import the other pool before proceeding to the next steps.
root@vws01:/# zpool import
pool: datapool
id: 1782173212883254850
state: DEGRADED
status: The pool was last accessed by another system.
action: The pool can be imported despite missing or damaged devices. The
fault tolerance of the pool may be compromised if imported.
see: http://www.sun.com/msg/ZFS-8000-EY
config:
datapool DEGRADED
mirror-0 DEGRADED
c1t1d0s0 UNAVAIL cannot open
c1t3d0s0 ONLINE
root@vws01:/#
root@vws01:/# zpool import -f datapool
root@vws01:/#
root@vws01:/# zpool status datapool
pool: datapool
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
see: http://www.sun.com/msg/ZFS-8000-2Q
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
datapool DEGRADED 0 0 0
mirror-0 DEGRADED 0 0 0
10987259943749998344 UNAVAIL 0 0 0 was /dev/dsk/c1t1d0s0
c1t3d0s0 ONLINE 0 0 0
errors: No known data errors
root@vws01:/#
Detach the missing mirror from rpool :
root@vws01:/# zpool detach rpool c1t0d0s0
After detaching the missing root mirror, rpool is now in ONLINE state again.
root@vws01:/# zpool status -x rpool
pool 'rpool' is healthy
root@vws01:/# zpool status rpool
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
c1t2d0s0 ONLINE 0 0 0
errors: No known data errors
root@vws01:/#
Then detach the missing mirror from datapool :
root@vws01:/# zpool detach datapool /dev/dsk/c1t1d0s0
After detaching the missing data mirror, datapool is now in ONLINE state again.
root@vws01:/# zpool status -x datapool
pool 'datapool' is healthy
root@vws01:/# zpool status datapool
pool: datapool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
datapool ONLINE 0 0 0
c1t3d0s0 ONLINE 0 0 0
errors: No known data errors
root@vws01:/#
Now rename the cloned server to vws02. To change the hostname & IP address we need to edit several files :
/etc/hosts
/etc/hostname.*
/etc/nodename
/etc/netmasks
/etc/defaultrouter
We can use the following commands to change the hostname :
root@vws01:/# find /etc -name "hosts" -exec perl -pi -e 's/vws01/vws02/g' {} \;
root@vws01:/# find /etc -name "nodename" -exec perl -pi -e 's/vws01/vws02/g' {} \;
root@vws01:/# find /etc -name "hostname*" -exec perl -pi -e 's/vws01/vws02/g' {} \;
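The commands above use perl -pi because Solaris 10's sed has no in-place editing. On a system with GNU sed the same rewrite looks like the sketch below, demonstrated on a scratch directory so the real /etc is untouched (the file contents and interface name are made up for the demo) :

```shell
# Demo: rename vws01 -> vws02 across the relevant config files,
# using a temporary directory as a stand-in for /etc.
ETC=$(mktemp -d)
echo '192.168.10.61  vws01' > "$ETC/hosts"          # example content
echo 'vws01' > "$ETC/nodename"
echo 'vws01' > "$ETC/hostname.e1000g0"              # example interface name

# rewrite every matching file in place (GNU sed -i)
find "$ETC" -type f \( -name hosts -o -name nodename -o -name 'hostname*' \) \
    -exec sed -i 's/vws01/vws02/g' {} +

grep -r 'vws02' "$ETC"
```

The find pattern matches the same three file groups the article edits; /etc/netmasks and /etc/defaultrouter hold IP data rather than the hostname, so they are edited by hand.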
Reboot the server so it can pick up the new hostname.
root@vws01:/# shutdown -g0 -y -i6
Insert the two replacement disks, then run the devfsadm command to let Solaris detect the new disks.
root@vws02:/# devfsadm
root@vws02:/# format < /dev/null
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
/pci@0/pci@0/pci@2/scsi@0/sd@0,0
1. c1t1d0 <SUN300G cyl 46873 alt 2 hd 20 sec 625>
/pci@0/pci@0/pci@2/scsi@0/sd@1,0
2. c1t2d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
/pci@0/pci@0/pci@2/scsi@0/sd@2,0
3. c1t3d0 <SEAGATE-ST930003SSUN300G-0868-279.40GB>
/pci@0/pci@0/pci@2/scsi@0/sd@3,0
Specify disk (enter its number):
root@vws02:/#
See that the new disks (c1t0d0 & c1t1d0) are already recognized by Solaris.
Before attaching the replacement disk (c1t0d0) to rpool, we need to prepare the disk first (this step was verified on a SPARC system, but should be applicable on x86 systems too).
root@vws02# format -e c1t0d0
selecting c1t0d0
[disk formatted]
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
scsi - independent SCSI mode selects
cache - enable, disable or query SCSI disk cache
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 0
Ready to label disk, continue? yes
format> quit
root@vws02#
Then we need to copy the partition table from the ONLINE root disk (c1t2d0) to the replacement disk (c1t0d0) :
root@vws02:/# prtvtoc /dev/rdsk/c1t2d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2
fmthard: New volume table of contents now in place.
root@vws02:/#
Now attach the replacement disk (c1t0d0) to rpool.
root@vws02:/# zpool attach rpool c1t2d0s0 c1t0d0s0
Please be sure to invoke installboot(1M) to make 'c1t0d0s0' bootable.
Make sure to wait until resilver is done before rebooting.
root@vws02:/#
Right after attaching the new disk to rpool, check that rpool is now resilvering :
root@vws02:/# zpool status rpool
pool: rpool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 0h0m, 6.60% done, 0h12m to go
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c1t2d0s0 ONLINE 0 0 0
c1t0d0s0 ONLINE 0 0 0 3.43G resilvered
errors: No known data errors
root@vws02:/#
We do the same for datapool. But before that we need to prepare the replacement disk (c1t1d0) first.
root@vws02# format -e c1t1d0
selecting c1t1d0
[disk formatted]
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
scsi - independent SCSI mode selects
cache - enable, disable or query SCSI disk cache
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 0
Ready to label disk, continue? yes
format> quit
root@vws02#
Then we need to copy the partition table from the ONLINE data disk (c1t3d0) to the replacement disk (c1t1d0) :
root@vws02:/# prtvtoc /dev/rdsk/c1t3d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
fmthard: New volume table of contents now in place.
root@vws02:/#
Now attach the replacement disk (c1t1d0) to datapool.
root@vws02:/# zpool attach datapool c1t3d0s0 c1t1d0s0
Right after attaching the new disk to datapool, check that datapool is now resilvering :
root@vws02:/# zpool status datapool
pool: datapool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 0h0m, 5.28% done, 0h6m to go
config:
NAME STATE READ WRITE CKSUM
datapool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c1t3d0s0 ONLINE 0 0 0
c1t1d0s0 ONLINE 0 0 0 1.56G resilvered
errors: No known data errors
root@vws02:/#
Once the resilver on rpool has finished, we need to reinstall the boot block on the new disk (c1t0d0s0). This is a crucial step; otherwise we can't use c1t0d0 as boot media even though it contains the Solaris OS installation files. On a SPARC system, we reinstall the boot block on the new root mirror using the installboot command like this :
root@vws02:/# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0
On an x86 system we do it using the installgrub command as shown below :
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
stage1 written to partition 0 sector 0 (abs 16065)
stage2 written to partition 0, 273 sectors starting at 50 (abs 16115)
#
Finally, reboot the server :
root@vws02:/# shutdown -g0 -y -i6
Cloning Solaris 10 Server With ZFS Mirror Disk by Tedy Tirtawidjaja
Sometimes we need to clone a Linux system, for various reasons.
This article will show you how to move a running Debian Linux system to a larger disk. In this how-to document I use the following Debian version :
root@debian01:~# lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 7.1 (wheezy)
Release:        7.1
Codename:       wheezy
root@debian01:~#
Let's start by examining the existing Linux system. First, check the kernel version with the uname -a command as shown below :
root@debian01:~# uname -a
Linux debian01 3.2.0-4-686-pae #1 SMP Debian 3.2.46-1+deb7u1 i686 GNU/Linux
root@debian01:~#
Check the disk usage and mounted filesystems with the df command as shown below :
root@debian01:~# df -h
Filesystem                                              Size  Used Avail Use% Mounted on
rootfs                                                   19G  1.3G   17G   7% /
udev                                                     10M     0   10M   0% /dev
tmpfs                                                   406M  268K  406M   1% /run
/dev/disk/by-uuid/9cd78397-4505-4c80-bddb-703543cdc46f   19G  1.3G   17G   7% /
tmpfs                                                   5.0M     0  5.0M   0% /run/lock
tmpfs                                                   987M     0  987M   0% /run/shm
root@debian01:~#
From the output we can see that all directories live on the root filesystem; there are no separate partitions for directories like /boot, /var, or even /home. This will affect how we clone to the new disk. We will come back to it when we set up the partitions on the new disk.
Check the partition layout with the fdisk command :
root@debian01:~# fdisk -l

Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00029de3

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048    40136703    20067328   83  Linux
/dev/sda2        40138750    41940991      901121    5  Extended
/dev/sda5        40138752    41940991      901120   82  Linux swap / Solaris
root@debian01:~#
Check the filesystem types with the blkid command. In the following output we can see that the existing disk consists of only 2 partitions : root & swap. The root partition uses the ext4 filesystem.
root@debian01:~# blkid
/dev/sda5: UUID="5e43e27b-038a-45e2-bf84-2f31105b63a0" TYPE="swap"
/dev/sda1: UUID="9cd78397-4505-4c80-bddb-703543cdc46f" TYPE="ext4"
root@debian01:~#
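The partitions on the new disk will get brand-new UUIDs, so the UUID values will matter when updating references such as /etc/fstab later. A small sketch (the uuid_of helper is mine, not part of the article) that pulls a UUID for a given device out of blkid-style output :

```shell
# Parse a filesystem UUID out of blkid output; the sample text below
# is the output captured above.
blkid_out='/dev/sda5: UUID="5e43e27b-038a-45e2-bf84-2f31105b63a0" TYPE="swap"
/dev/sda1: UUID="9cd78397-4505-4c80-bddb-703543cdc46f" TYPE="ext4"'

uuid_of() {
    # match the requested device at the start of a line, keep only the UUID
    printf '%s\n' "$blkid_out" | sed -n "s|^$1: UUID=\"\([^\"]*\)\".*|\1|p"
}

uuid_of /dev/sda1
```

On a live system, `blkid_out=$(blkid)` gives the same result for the real disks.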
Check the network configuration with the ifconfig command as shown below :
root@debian01:~# ifconfig -a
eth0 Link encap:Ethernet HWaddr 08:00:27:10:f3:d2
inet addr:192.168.10.62 Bcast:192.168.10.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe10:f3d2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:433 errors:0 dropped:0 overruns:0 frame:0
TX packets:170 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:94304 (92.0 KiB) TX bytes:23369 (22.8 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
root@debian01:~#
So let's move on by attaching the new disk to the system. In this case I use a disk with a bigger size than the existing root disk. Depending on the server, we may be able to attach the new disk on the fly, or a reboot may be required before the system recognizes it. To check whether the new disk is already attached, we can use the fdisk command as shown below :
root@debian01:~# fdisk -l

Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00029de3

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048    40136703    20067328   83  Linux
/dev/sda2        40138750    41940991      901121    5  Extended
/dev/sda5        40138752    41940991      901120   82  Linux swap / Solaris

Disk /dev/sdb: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders, total 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table
root@debian01:~#
Before we can use the new disk (/dev/sdb) as the clone target, we need to partition it. We will use the fdisk command; below is a step-by-step guide :
Run the fdisk command on /dev/sdb as shown below :
root@debian01:~# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x9c5d68fe.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help):
Type p and press Enter to see the existing partition layout :
Command (m for help): p

Disk /dev/sdb: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders, total 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x9c5d68fe

   Device Boot      Start         End      Blocks   Id  System

Command (m for help):
Since the disk has no partitions at all, it shows an empty partition layout. If the disk did contain partitions, we would delete them first by typing d (for delete).
Type n for “new partition”. Then we must decide whether we want to create a primary or an extended partition. In this example I choose to create a primary partition.
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1):
Using default value 1
First sector (2048-83886079, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-83886079, default 83886079): +15G
Command (m for help):
We need to define 3 values for each partition :
- the partition type (primary or extended),
- the partition number, which becomes part of the device name (/dev/sda1, /dev/sdb2, etc),
- the partition size. fdisk lets us define the size using human-readable units like K (for kilobyte), M (for Megabyte), or G (for Gigabyte); just remember to start the value with a + sign. In the example above, I choose to create a 15G partition.
Type p again to check the result :
Command (m for help): p

Disk /dev/sdb: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders, total 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x9c5d68fe

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    31459327    15728640   83  Linux

Command (m for help):
Now create the second partition in the same way :
Command (m for help): n
Partition type:
p primary (1 primary, 0 extended, 3 free)
e extended
Select (default p):
Using default response p
Partition number (1-4, default 2):
Using default value 2
First sector (31459328-83886079, default 31459328):
Using default value 31459328
Last sector, +sectors or +size{K,M,G} (31459328-83886079, default 83886079): +4G
Command (m for help):
See that the first sector now starts from sector number 31459328 : fdisk already knows that the first partition ends at 31459327, so the next available space starts at 31459328.
Command (m for help): p

Disk /dev/sdb: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders, total 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x9c5d68fe

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    31459327    15728640   83  Linux
/dev/sdb2        31459328    39847935     4194304   83  Linux

Command (m for help):
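The numbers fdisk prints can be sanity-checked with a little sector arithmetic (a sketch of my own, not part of the original guide) : a “+15G” partition starting at sector 2048 covers 15·1024³/512 sectors, and the Blocks column is the sector count divided by two, since one block is 1 KiB.

```shell
# Verify fdisk's End/Blocks numbers for the two partitions created above.
start1=2048
len1=$((15 * 1024 * 1024 * 1024 / 512))   # sectors in a 15G partition
end1=$((start1 + len1 - 1))

start2=$((end1 + 1))                      # next free sector
len2=$((4 * 1024 * 1024 * 1024 / 512))    # sectors in a 4G partition
end2=$((start2 + len2 - 1))

echo "sdb1 ends at $end1 ($((len1 / 2)) blocks), sdb2 ends at $end2"
```

The computed end sectors and block count match the /dev/sdb1 and /dev/sdb2 rows fdisk reports.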
I want /home as a separate partition, so I create the 3rd partition and assign the rest of the available space to it.
Command (m for help): n
Partition type:
p primary (2 primary, 0 extended, 2 free)
e extended
Select (default p): p
Partition number (1-4, default 3):
Using default value 3
First sector (39847936-83886079, default 39847936):
Using default value 39847936
Last sector, +sectors or +size{K,M,G} (39847936-83886079, default 83886079):
Using default value 83886079
Command (m for help):
Command (m for help): p

Disk /dev/sdb: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders, total 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x9c5d68fe

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    31459327    15728640   83  Linux
/dev/sdb2        31459328    39847935     4194304   83  Linux
/dev/sdb3        39847936    83886079    22019072   83  Linux

Command (m for help):
Next we mark the first partition as bootable, since it will contain the /boot directory. Type a to assign the bootable flag, then type 1 to choose partition number 1.
Command (m for help): a
Partition number (1-4): 1

Command (m for help): p

Disk /dev/sdb: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders, total 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x9c5d68fe

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *        2048    31459327    15728640   83  Linux
/dev/sdb2        31459328    39847935     4194304   83  Linux
/dev/sdb3        39847936    83886079    22019072   83  Linux

Command (m for help):
Once a partition is marked as bootable, you can see the * sign in its Boot column.
Finally, save the changes by typing w, as in “write into partition table”:
Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
root@debian01:~#
With the partitions set up, we need to create filesystems on top of them. In this example we will use the ext3 filesystem on partitions 1 & 3, and we also need to activate partition number 2 as a swap partition. To create the ext3 filesystem, we will use the mkfs.ext3 command as shown in the following sample:
root@debian01:~# mkfs.ext3 /dev/sdb1
mke2fs 1.42.5 (29-Jul-2012)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
983040 inodes, 3932160 blocks
196608 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4026531840
120 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
root@debian01:~# mkfs.ext3 /dev/sdb3
mke2fs 1.42.5 (29-Jul-2012)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1376256 inodes, 5504768 blocks
275238 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
168 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
root@debian01:~#
To activate the second partition as swap, we will use the mkswap command as shown below:
root@debian01:~# mkswap /dev/sdb2
Setting up swapspace version 1, size = 4194300 KiB
no label, UUID=c891f223-4912-4c4d-adbc-0158c89b8aff
root@debian01:~#
Once the filesystems are created, we can mount the partitions as shown in the following example:
root@debian01:~# mkdir /media/newroot
root@debian01:~# mkdir /media/newhome
root@debian01:~# mount /dev/sdb1 /media/newroot/
root@debian01:~# mount /dev/sdb3 /media/newhome/
We will use the rsync command to synchronize data from the old disk to the new one. If you don’t have rsync installed on your system, install it first. Here is a sample of installing rsync on a Debian-based Linux distribution:
root@debian01:~# apt-get install rsync
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  rsync
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 357 kB of archives.
After this operation, 639 kB of additional disk space will be used.
Get:1 http://kambing.ui.ac.id/debian/ wheezy/main rsync i386 3.0.9-4 [357 kB]
Fetched 357 kB in 0s (397 kB/s)
Selecting previously unselected package rsync.
(Reading database ... 45874 files and directories currently installed.)
Unpacking rsync (from .../rsync_3.0.9-4_i386.deb) ...
Processing triggers for man-db ...
Setting up rsync (3.0.9-4) ...
update-rc.d: using dependency based boot sequencing
root@debian01:~#
Below is a sample of how we use rsync to copy the root partition to the new disk:
root@debian01:~# rsync -ar --exclude "/home" --exclude "/media/newroot" --exclude "/media/newhome" --exclude "/proc" --exclude "/sys" /* /media/newroot
root@debian01:~#
We exclude directories like /proc and /sys because neither is needed for the cloning process. We also exclude the mount points of the new disk itself, so rsync doesn’t copy the disk into itself.
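If you want to preview what rsync would copy before running it for real, rsync’s -n (--dry-run) flag lists the transfer without writing anything. A small sketch using throwaway temporary directories (the paths here are made up for illustration, not the real disks):

```shell
# Sketch: preview an rsync copy with --dry-run before touching the target.
# SRC and DST are throwaway example directories.
SRC=$(mktemp -d)
DST=$(mktemp -d)
mkdir -p "$SRC/keep" "$SRC/skip"
echo data > "$SRC/keep/file.txt"
echo junk > "$SRC/skip/file.txt"

# -n/--dry-run shows what would be transferred without copying anything:
rsync -arn --exclude "/skip" "$SRC"/ "$DST"/

# Run again without -n to actually copy:
rsync -ar --exclude "/skip" "$SRC"/ "$DST"/
ls "$DST"    # only "keep" is there; "skip" was excluded
```

Note that a leading / in an exclude pattern anchors it to the transfer root, which is why the article’s `--exclude "/home"` matches only the top-level /home and not, say, /var/home.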
We copy the /home partition to its dedicated partition as well:
root@debian01:~# rsync -ar /home/* /media/newhome/
root@debian01:~#
Here is the status after the synchronization process has finished:
root@debian01:~# df -h
Filesystem                                              Size  Used Avail Use% Mounted on
rootfs                                                   19G  1.3G   17G   8% /
udev                                                     10M     0   10M   0% /dev
tmpfs                                                   406M  280K  406M   1% /run
/dev/disk/by-uuid/9cd78397-4505-4c80-bddb-703543cdc46f   19G  1.3G   17G   8% /
tmpfs                                                   5.0M     0  5.0M   0% /run/lock
tmpfs                                                   987M     0  987M   0% /run/shm
/dev/sdb1                                                15G  1.3G   13G  10% /media/newroot
/dev/sdb3                                                21G  173M   20G   1% /media/newhome
root@debian01:~#
Then we will chroot into the new root partition (/dev/sdb1, currently mounted at /media/newroot). A few steps must be executed before we can chroot into the new root environment:
root@debian01:~# mkdir /media/newroot/proc
root@debian01:~# mkdir /media/newroot/dev
root@debian01:~# mkdir /media/newroot/sys
root@debian01:~# mount -o bind /dev /media/newroot/dev/
root@debian01:~# mount -t proc none /media/newroot/proc
root@debian01:~# mount -t sysfs none /media/newroot/sys
root@debian01:~# chroot /media/newroot
There are several steps to execute from inside the chroot environment. First, verify that we really are inside the new root; the simplest check is the df -h command, as shown below:
root@debian01:/# df -h
df: `/media/newhome': No such file or directory
Filesystem                                              Size  Used Avail Use% Mounted on
rootfs                                                   15G  1.3G   13G  10% /
udev                                                     10M     0   10M   0% /dev
devpts                                                   10M     0   10M   0% /dev/pts
tmpfs                                                    15G  1.3G   13G  10% /run
/dev/disk/by-uuid/9cd78397-4505-4c80-bddb-703543cdc46f   15G  1.3G   13G  10% /
tmpfs                                                    15G  1.3G   13G  10% /run/lock
tmpfs                                                    15G  1.3G   13G  10% /run/shm
rpc_pipefs                                               15G  1.3G   13G  10% /var/lib/nfs/rpc_pipefs
/dev/sdb1                                                15G  1.3G   13G  10% /
udev                                                     10M     0   10M   0% /dev
root@debian01:/#
root@debian01:/# pwd
/
root@debian01:/#
See that /dev/sdb1 & /dev/sdb3 are no longer mounted under /media, since we are now inside /dev/sdb1 itself.
The next step is to adjust the /etc/fstab file. This file defines the list of partitions that must be mounted during the boot process. Copy the existing fstab file as a backup first.
root@debian01:/# cd /etc/
root@debian01:/etc# cp fstab fstab-orig
Once we have the backup, we can safely edit the fstab file. We need to change the disk identification so that all partition information points to the new disk (/dev/sdb). We can use the blkid command to check the UUIDs of the new disk:
root@debian01:/etc# blkid
/dev/sda5: UUID="5e43e27b-038a-45e2-bf84-2f31105b63a0" TYPE="swap"
/dev/sda1: UUID="9cd78397-4505-4c80-bddb-703543cdc46f" TYPE="ext4"
/dev/sdb1: UUID="d9e6bd2b-8446-4f61-9636-9b078c0d966a" SEC_TYPE="ext2" TYPE="ext3"
/dev/sdb2: UUID="c891f223-4912-4c4d-adbc-0158c89b8aff" TYPE="swap"
/dev/sdb3: UUID="2cbe1cb3-1084-4229-92ff-847cd53e408a" SEC_TYPE="ext2" TYPE="ext3"
root@debian01:/etc#
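When editing fstab by hand it is easy to mistype a UUID, so pulling the value out of the blkid output programmatically is safer. A small sketch that parses a sample line like the ones above (on the real system, `blkid -s UUID -o value /dev/sdb1` also prints just the bare UUID):

```shell
# Extract just the UUID from a blkid-style line, ready to paste into fstab.
# The sample line is copied from the blkid output above.
line='/dev/sdb1: UUID="d9e6bd2b-8446-4f61-9636-9b078c0d966a" SEC_TYPE="ext2" TYPE="ext3"'

# Capture whatever sits between the quotes after UUID= :
uuid=$(printf '%s\n' "$line" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')

# Emit a candidate fstab entry for the new root partition:
echo "UUID=$uuid / ext3 errors=remount-ro 0 1"
```

This prints `UUID=d9e6bd2b-8446-4f61-9636-9b078c0d966a / ext3 errors=remount-ro 0 1`, matching the entry we are about to put into fstab.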
Then we can change the UUID information inside the fstab file:
root@debian01:/etc# vi fstab

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda1 during installation
UUID=d9e6bd2b-8446-4f61-9636-9b078c0d966a /       ext3    errors=remount-ro 0       1
# swap was on /dev/sda5 during installation
UUID=c891f223-4912-4c4d-adbc-0158c89b8aff none    swap    sw      0       0
UUID=2cbe1cb3-1084-4229-92ff-847cd53e408a /home   ext3    defaults        0       1
/dev/sr0        /media/cdrom0   udf,iso9660 user,noauto     0       0
root@debian01:/etc#
We can compare the backup fstab with the one we just modified :
root@debian01:/etc# diff -u fstab-orig fstab
--- /etc/fstab	2013-09-21 11:03:34.000000000 +0700
+++ /etc/fstab-orig	2013-09-21 10:57:55.000000000 +0700
@@ -6,8 +6,7 @@
 #
 # <file system> <mount point>   <type>  <options>       <dump>  <pass>
 # / was on /dev/sda1 during installation
-UUID=d9e6bd2b-8446-4f61-9636-9b078c0d966a / ext3 errors=remount-ro 0 1
+UUID=9cd78397-4505-4c80-bddb-703543cdc46f / ext4 errors=remount-ro 0 1
 # swap was on /dev/sda5 during installation
-UUID=c891f223-4912-4c4d-adbc-0158c89b8aff none swap sw 0 0
-UUID=2cbe1cb3-1084-4229-92ff-847cd53e408a /home ext3 defaults 0 1
+UUID=5e43e27b-038a-45e2-bf84-2f31105b63a0 none swap sw 0 0
 /dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
root@debian01:/etc#
We also change the hostname so the clone can be distinguished from the original machine:

root@debian01:/etc# cat /etc/hostname
debian01
root@debian01:/etc# vi /etc/hostname
debian01-new
root@debian01:/etc#
Next, regenerate the GRUB configuration and install the bootloader onto the new disk:

root@debian01:/etc# update-grub
Generating grub.cfg ...
Found linux image: /boot/vmlinuz-3.2.0-4-686-pae
Found initrd image: /boot/initrd.img-3.2.0-4-686-pae
done
root@debian01:/etc# grub-install /dev/sdb
Installation finished. No error reported.
root@debian01:/etc#
Once done, exit the chroot environment:
root@debian01:/etc# exit
root@debian01:~# df -h
Filesystem                                              Size  Used Avail Use% Mounted on
rootfs                                                   19G  1.3G   17G   8% /
udev                                                     10M     0   10M   0% /dev
tmpfs                                                   406M  280K  406M   1% /run
/dev/disk/by-uuid/9cd78397-4505-4c80-bddb-703543cdc46f   19G  1.3G   17G   8% /
tmpfs                                                   5.0M     0  5.0M   0% /run/lock
tmpfs                                                   987M     0  987M   0% /run/shm
/dev/sdb1                                                15G  1.3G   13G  10% /media/newroot
/dev/sdb3                                                21G  173M   20G   1% /media/newhome
udev                                                     10M     0   10M   0% /media/newroot/dev
root@debian01:~#
At this point we have successfully cloned the complete Linux system to the new disk. We can shut down the server and swap the disks, or we can use the second disk (/dev/sdb) to boot another server.
root@debian01:~# shutdown -h now

Broadcast message from root@debian01 (pts/0) (Sat Sep 21 11:00:57 2013):

The system is going down for system halt NOW!
root@debian01:~#
If everything is right, the system will be able to boot from /dev/sdb:
$ ssh root@192.168.10.62
root@192.168.10.62's password:
Linux debian01-new 3.2.0-4-686-pae #1 SMP Debian 3.2.46-1+deb7u1 i686

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sat Sep 21 11:06:54 2013
root@debian01-new:~# uname -a
Linux debian01-new 3.2.0-4-686-pae #1 SMP Debian 3.2.46-1+deb7u1 i686 GNU/Linux
root@debian01-new:~# df -h
Filesystem                                              Size  Used Avail Use% Mounted on
rootfs                                                   15G  1.3G   13G  10% /
udev                                                     10M     0   10M   0% /dev
tmpfs                                                   406M  268K  406M   1% /run
/dev/disk/by-uuid/d9e6bd2b-8446-4f61-9636-9b078c0d966a   15G  1.3G   13G  10% /
tmpfs                                                   5.0M     0  5.0M   0% /run/lock
tmpfs                                                   1.6G     0  1.6G   0% /run/shm
/dev/sda3                                                21G  173M   20G   1% /home
root@debian01-new:~# su - ttirtawi
ttirtawi@debian01-new:~$ pwd
/home/ttirtawi
ttirtawi@debian01-new:~$
See that the disk with UUID d9e6bd2b-8446-4f61-9636-9b078c0d966a is mounted as root now.
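Instead of eyeballing the df output, you can ask the kernel directly which device is mounted as root by reading /proc/mounts. A minimal sketch (the exact device names shown will of course differ per system):

```shell
# Print the device and filesystem type of everything mounted at /,
# straight from the kernel's mount table.
awk '$2 == "/" { print $1, $3 }' /proc/mounts
```

On the cloned machine this should list the ext3 filesystem on what is now /dev/sda1 (or its /dev/disk/by-uuid alias), confirming the new disk really is the running root.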
Congratulations, your Linux system now lives on the new disk.
Illustration from : http://www.linux.org/resources/debian.11/