Sometimes we need to clone a running Linux system, for example to move it to a bigger disk. This article will show you how to move a running Debian Linux system to a larger disk. In this how-to I use the following Debian version:
root@debian01:~# lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 7.1 (wheezy)
Release:        7.1
Codename:       wheezy
root@debian01:~#
Let's start by examining the existing Linux system. First, check the kernel version with the uname -a command:
root@debian01:~# uname -a
Linux debian01 3.2.0-4-686-pae #1 SMP Debian 3.2.46-1+deb7u1 i686 GNU/Linux
root@debian01:~#
Next, check the mounted filesystems with the df command:
root@debian01:~# df -h
Filesystem                                              Size  Used Avail Use% Mounted on
rootfs                                                   19G  1.3G   17G   7% /
udev                                                     10M     0   10M   0% /dev
tmpfs                                                   406M  268K  406M   1% /run
/dev/disk/by-uuid/9cd78397-4505-4c80-bddb-703543cdc46f   19G  1.3G   17G   7% /
tmpfs                                                   5.0M     0  5.0M   0% /run/lock
tmpfs                                                   987M     0  987M   0% /run/shm
root@debian01:~#
From the output we can see that all directories live on the root filesystem; there are no separate partitions for /boot, /var, or even /home. This affects the way we clone the disk to the new one, and we will come back to it when we set up the partitions on the new disk.
Check the partition table with the fdisk command:
root@debian01:~# fdisk -l

Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00029de3

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048    40136703    20067328   83  Linux
/dev/sda2        40138750    41940991      901121    5  Extended
/dev/sda5        40138752    41940991      901120   82  Linux swap / Solaris
root@debian01:~#
Check the partition UUIDs and filesystem types with the blkid command. From its output we can see that the existing disk consists of only two partitions, root and swap, and that the root partition uses the ext4 filesystem:
root@debian01:~# blkid
/dev/sda5: UUID="5e43e27b-038a-45e2-bf84-2f31105b63a0" TYPE="swap"
/dev/sda1: UUID="9cd78397-4505-4c80-bddb-703543cdc46f" TYPE="ext4"
root@debian01:~#
Finally, note the network configuration with the ifconfig command:
root@debian01:~# ifconfig -a
eth0 Link encap:Ethernet HWaddr 08:00:27:10:f3:d2
inet addr:192.168.10.62 Bcast:192.168.10.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe10:f3d2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:433 errors:0 dropped:0 overruns:0 frame:0
TX packets:170 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:94304 (92.0 KiB) TX bytes:23369 (22.8 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
root@debian01:~#
So let's move on by attaching the new disk to the system. In this case I use a disk larger than the existing root disk. Depending on the server, we may be able to attach the new disk on the fly, or a reboot may be needed before the system recognizes it. To check whether the new disk is attached, use the fdisk command:
root@debian01:~# fdisk -l

Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00029de3

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048    40136703    20067328   83  Linux
/dev/sda2        40138750    41940991      901121    5  Extended
/dev/sda5        40138752    41940991      901120   82  Linux swap / Solaris

Disk /dev/sdb: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders, total 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table
root@debian01:~#
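If the new disk does not show up and a reboot is not convenient, many Linux systems also let you rescan the SCSI bus in place. This is a hedged sketch, not a command from the original article: it requires root, the host numbers vary per machine, and on virtual machines the hypervisor must have attached the disk first.

```shell
# Ask every SCSI host adapter to rescan its bus for new devices.
# "- - -" is a wildcard for channel, target and LUN; the write is
# skipped harmlessly when the scan file is absent or not writable.
for scan in /sys/class/scsi_host/host*/scan; do
    [ -w "$scan" ] && echo "- - -" > "$scan" || true
done
# afterwards, re-run "fdisk -l" and look for the new device
```
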
Before we can use the new disk (/dev/sdb) as the clone target, we need to prepare its partitions. We will use the fdisk command for this; below is the step-by-step guide. First, invoke fdisk on /dev/sdb:
root@debian01:~# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x9c5d68fe.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help):
Type p and press Enter to see the existing partition layout:
Command (m for help): p

Disk /dev/sdb: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders, total 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x9c5d68fe

   Device Boot      Start         End      Blocks   Id  System

Command (m for help):
Since the disk doesn't have any partitions yet, it shows an empty partition layout. If there were any partitions on the disk, it would be best to delete them first by typing d (for delete). Next, type n (for "new partition"). We must then decide whether to create a primary or an extended partition; in this example I create a primary one.
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1):
Using default value 1
First sector (2048-83886079, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-83886079, default 83886079): +15G
Command (m for help):
We need to define three values for each partition:
- the partition number, which becomes the device-name suffix (/dev/sdb1, /dev/sdb2, and so on);
- the first sector, where accepting the default lets the partition start right after the previous one;
- the last sector, or the partition size. fdisk lets us define the size using a human-readable unit like K (kilobytes), M (megabytes), or G (gigabytes); just remember to start the value with a + sign. In the example above I create a 15G partition.

Type p to verify the new partition:

Command (m for help): p

Disk /dev/sdb: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders, total 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x9c5d68fe

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    31459327    15728640   83  Linux

Command (m for help):

Repeat the same steps for the second partition, which will later become swap:
Command (m for help): n
Partition type:
p primary (1 primary, 0 extended, 3 free)
e extended
Select (default p):
Using default response p
Partition number (1-4, default 2):
Using default value 2
First sector (31459328-83886079, default 31459328):
Using default value 31459328
Last sector, +sectors or +size{K,M,G} (31459328-83886079, default 83886079): +4G
Command (m for help):
Notice that the first sector now starts from sector number 31459328: fdisk already knows that the first partition ends at sector 31459327, so the next available space starts at 31459328.
Command (m for help): p

Disk /dev/sdb: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders, total 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x9c5d68fe

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    31459327    15728640   83  Linux
/dev/sdb2        31459328    39847935     4194304   83  Linux

Command (m for help):
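The end sector of the first partition follows directly from the size we asked for. As a quick sanity check with shell arithmetic (fdisk counts in 512-byte sectors here):

```shell
# 15 GiB expressed in 512-byte sectors:
echo $(( 15 * 1024 * 1024 * 1024 / 512 ))              # 31457280
# the first partition starts at sector 2048, so its last sector is:
echo $(( 2048 + 15 * 1024 * 1024 * 1024 / 512 - 1 ))   # 31459327
# and the next partition starts one sector later, at 31459328
```

This matches the Start/End columns fdisk prints.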
I want to have /home on a separate partition, so I create a third partition and assign all the remaining space to it.
Command (m for help): n
Partition type:
p primary (2 primary, 0 extended, 2 free)
e extended
Select (default p): p
Partition number (1-4, default 3):
Using default value 3
First sector (39847936-83886079, default 39847936):
Using default value 39847936
Last sector, +sectors or +size{K,M,G} (39847936-83886079, default 83886079):
Using default value 83886079
Command (m for help):
Command (m for help): p

Disk /dev/sdb: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders, total 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x9c5d68fe

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    31459327    15728640   83  Linux
/dev/sdb2        31459328    39847935     4194304   83  Linux
/dev/sdb3        39847936    83886079    22019072   83  Linux

Command (m for help):
We also need to mark the first partition as bootable, since it will hold the /boot directory. Type a to toggle the bootable flag, then type 1 to choose partition number 1.
Command (m for help): a
Partition number (1-4): 1

Command (m for help): p

Disk /dev/sdb: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders, total 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x9c5d68fe

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *        2048    31459327    15728640   83  Linux
/dev/sdb2        31459328    39847935     4194304   83  Linux
/dev/sdb3        39847936    83886079    22019072   83  Linux

Command (m for help):
If a partition is marked as bootable, you will see a * sign in its Boot column. The final step is saving the changes to the partition table by typing w (for "write"):
Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
root@debian01:~#
With the partitions set up, we need to create filesystems on top of them. In this example we will use the ext3 filesystem on partitions 1 and 3, and we will set up partition 2 as swap. To create the ext3 filesystems we use the mkfs.ext3 command:
root@debian01:~# mkfs.ext3 /dev/sdb1
mke2fs 1.42.5 (29-Jul-2012)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
983040 inodes, 3932160 blocks
196608 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4026531840
120 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
root@debian01:~# mkfs.ext3 /dev/sdb3
mke2fs 1.42.5 (29-Jul-2012)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1376256 inodes, 5504768 blocks
275238 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
168 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
root@debian01:~#
To set up the second partition as swap we use the mkswap command:
root@debian01:~# mkswap /dev/sdb2
Setting up swapspace version 1, size = 4194300 KiB
no label, UUID=c891f223-4912-4c4d-adbc-0158c89b8aff
root@debian01:~#
Once the filesystems are created, we can mount the partitions:
root@debian01:~# mkdir /media/newroot
root@debian01:~# mkdir /media/newhome
root@debian01:~# mount /dev/sdb1 /media/newroot/
root@debian01:~# mount /dev/sdb3 /media/newhome/
We will use the rsync command to synchronize the data from the old disk to the new one. If you don't have rsync installed on your system, install it first. Here is how to install rsync on a Debian-based Linux distribution:
root@debian01:~# apt-get install rsync
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  rsync
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 357 kB of archives.
After this operation, 639 kB of additional disk space will be used.
Get:1 http://kambing.ui.ac.id/debian/ wheezy/main rsync i386 3.0.9-4 [357 kB]
Fetched 357 kB in 0s (397 kB/s)
Selecting previously unselected package rsync.
(Reading database ... 45874 files and directories currently installed.)
Unpacking rsync (from .../rsync_3.0.9-4_i386.deb) ...
Processing triggers for man-db ...
Setting up rsync (3.0.9-4) ...
update-rc.d: using dependency based boot sequencing
root@debian01:~#
Here is how we use rsync to copy the root partition to the new disk:
root@debian01:~# rsync -ar --exclude "/home" --exclude "/media/newroot" --exclude "/media/newhome" --exclude "/proc" --exclude "/sys" /* /media/newroot
root@debian01:~#
We exclude /proc and /sys because both are virtual filesystems that aren't needed for the cloning process, and we exclude the new disk's own mount points so the copy doesn't recurse into its destination. We copy the /home data to its dedicated partition as well:
root@debian01:~# rsync -ar /home/* /media/newhome/
root@debian01:~#
Here is the status after the synchronization finishes:
root@debian01:~# df -h
Filesystem                                              Size  Used Avail Use% Mounted on
rootfs                                                   19G  1.3G   17G   8% /
udev                                                     10M     0   10M   0% /dev
tmpfs                                                   406M  280K  406M   1% /run
/dev/disk/by-uuid/9cd78397-4505-4c80-bddb-703543cdc46f   19G  1.3G   17G   8% /
tmpfs                                                   5.0M     0  5.0M   0% /run/lock
tmpfs                                                   987M     0  987M   0% /run/shm
/dev/sdb1                                                15G  1.3G   13G  10% /media/newroot
/dev/sdb3                                                21G  173M   20G   1% /media/newhome
root@debian01:~#
Next we chroot into the new root partition (/dev/sdb1, currently mounted at /media/newroot). A few steps must be executed before we can chroot into the new root environment:
root@debian01:~# mkdir /media/newroot/proc
root@debian01:~# mkdir /media/newroot/dev
root@debian01:~# mkdir /media/newroot/sys
root@debian01:~# mount -o bind /dev /media/newroot/dev/
root@debian01:~# mount -t proc none /media/newroot/proc
root@debian01:~# mount -t sysfs none /media/newroot/sys
root@debian01:~# chroot /media/newroot
Several steps must be executed from inside the chroot environment. First, verify that we really are inside it; the simplest check is the df -h command:
root@debian01:/# df -h
df: `/media/newhome': No such file or directory
Filesystem                                              Size  Used Avail Use% Mounted on
rootfs                                                   15G  1.3G   13G  10% /
udev                                                     10M     0   10M   0% /dev
devpts                                                   10M     0   10M   0% /dev/pts
tmpfs                                                    15G  1.3G   13G  10% /run
/dev/disk/by-uuid/9cd78397-4505-4c80-bddb-703543cdc46f   15G  1.3G   13G  10% /
tmpfs                                                    15G  1.3G   13G  10% /run/lock
tmpfs                                                    15G  1.3G   13G  10% /run/shm
rpc_pipefs                                               15G  1.3G   13G  10% /var/lib/nfs/rpc_pipefs
/dev/sdb1                                                15G  1.3G   13G  10% /
udev                                                     10M     0   10M   0% /dev
root@debian01:/#
root@debian01:/# pwd
/
root@debian01:/#
Notice that /dev/sdb1 and /dev/sdb3 no longer appear mounted under /media, since we are now inside /dev/sdb1 itself. The next step is updating the /etc/fstab file, which defines the partitions that must be mounted during the boot process. Copy the existing fstab file as a backup first:
root@debian01:/# cd /etc/
root@debian01:/etc# cp fstab fstab-orig
With the backup in place, we can safely edit the fstab file. We need to change the disk identifiers so that all partition entries point to the new disk. We can use the blkid command to check the UUIDs of the new partitions:
root@debian01:/etc# blkid
/dev/sda5: UUID="5e43e27b-038a-45e2-bf84-2f31105b63a0" TYPE="swap"
/dev/sda1: UUID="9cd78397-4505-4c80-bddb-703543cdc46f" TYPE="ext4"
/dev/sdb1: UUID="d9e6bd2b-8446-4f61-9636-9b078c0d966a" SEC_TYPE="ext2" TYPE="ext3"
/dev/sdb2: UUID="c891f223-4912-4c4d-adbc-0158c89b8aff" TYPE="swap"
/dev/sdb3: UUID="2cbe1cb3-1084-4229-92ff-847cd53e408a" SEC_TYPE="ext2" TYPE="ext3"
root@debian01:/etc#
Then we change the UUID information inside the fstab file:
root@debian01:/etc# vi fstab

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda1 during installation
UUID=d9e6bd2b-8446-4f61-9636-9b078c0d966a /               ext3    errors=remount-ro 0       1
# swap was on /dev/sda5 during installation
UUID=c891f223-4912-4c4d-adbc-0158c89b8aff none            swap    sw              0       0
UUID=2cbe1cb3-1084-4229-92ff-847cd53e408a /home           ext3    defaults        0       1
/dev/sr0        /media/cdrom0   udf,iso9660 user,noauto     0       0

root@debian01:/etc#
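The UUID substitution can also be scripted instead of done by hand in vi. The sketch below is hypothetical, using the UUID values blkid reported above; it works on a scratch copy so it is safe to try, and inside the chroot you would point it at /etc/fstab instead. Note it only swaps UUIDs: the ext4→ext3 type change and the new /home line still have to be edited in manually.

```shell
# build a scratch fstab with the old (sda) UUIDs for demonstration
FSTAB=/tmp/fstab.example
cat > "$FSTAB" <<'EOF'
UUID=9cd78397-4505-4c80-bddb-703543cdc46f /    ext4 errors=remount-ro 0 1
UUID=5e43e27b-038a-45e2-bf84-2f31105b63a0 none swap sw                0 0
EOF

# replace old root/swap UUIDs with the new ones blkid reported for /dev/sdb
sed -i \
  -e 's/9cd78397-4505-4c80-bddb-703543cdc46f/d9e6bd2b-8446-4f61-9636-9b078c0d966a/' \
  -e 's/5e43e27b-038a-45e2-bf84-2f31105b63a0/c891f223-4912-4c4d-adbc-0158c89b8aff/' \
  "$FSTAB"

cat "$FSTAB"
```
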
We can compare the backup fstab with the one we just modified:
root@debian01:/etc# diff -u fstab-orig fstab
--- /etc/fstab  2013-09-21 11:03:34.000000000 +0700
+++ /etc/fstab-orig     2013-09-21 10:57:55.000000000 +0700
@@ -6,8 +6,7 @@
 #
 # <file system> <mount point>   <type>  <options>       <dump>  <pass>
 # / was on /dev/sda1 during installation
-UUID=d9e6bd2b-8446-4f61-9636-9b078c0d966a /               ext3    errors=remount-ro 0       1
+UUID=9cd78397-4505-4c80-bddb-703543cdc46f /               ext4    errors=remount-ro 0       1
 # swap was on /dev/sda5 during installation
-UUID=c891f223-4912-4c4d-adbc-0158c89b8aff none            swap    sw              0       0
-UUID=2cbe1cb3-1084-4229-92ff-847cd53e408a /home           ext3    defaults        0       1
+UUID=5e43e27b-038a-45e2-bf84-2f31105b63a0 none            swap    sw              0       0
 /dev/sr0        /media/cdrom0   udf,iso9660 user,noauto     0       0
root@debian01:/etc#
To distinguish the clone from the original system, I also change its hostname by editing /etc/hostname:

root@debian01:/etc# cat /etc/hostname
debian01
root@debian01:/etc# vi /etc/hostname
debian01-new
root@debian01:/etc#
The last step inside the chroot is regenerating the GRUB configuration and installing the boot loader on the new disk:

root@debian01:/etc# update-grub
Generating grub.cfg ...
Found linux image: /boot/vmlinuz-3.2.0-4-686-pae
Found initrd image: /boot/initrd.img-3.2.0-4-686-pae
done
root@debian01:/etc# grub-install /dev/sdb
Installation finished. No error reported.
root@debian01:/etc#
Once everything is done, exit the chroot environment:
root@debian01:/etc# exit
root@debian01:~# df -h
Filesystem                                              Size  Used Avail Use% Mounted on
rootfs                                                   19G  1.3G   17G   8% /
udev                                                     10M     0   10M   0% /dev
tmpfs                                                   406M  280K  406M   1% /run
/dev/disk/by-uuid/9cd78397-4505-4c80-bddb-703543cdc46f   19G  1.3G   17G   8% /
tmpfs                                                   5.0M     0  5.0M   0% /run/lock
tmpfs                                                   987M     0  987M   0% /run/shm
/dev/sdb1                                                15G  1.3G   13G  10% /media/newroot
/dev/sdb3                                                21G  173M   20G   1% /media/newhome
udev                                                     10M     0   10M   0% /media/newroot/dev
root@debian01:~#
At this point we have successfully cloned the complete Linux system to the new disk. We can shut down the server and swap the disks, or use the second disk (/dev/sdb) to boot another server.
root@debian01:~# shutdown -h now

Broadcast message from root@debian01 (pts/0) (Sat Sep 21 11:00:57 2013):
The system is going down for system halt NOW!
root@debian01:~#
If everything went right, the system will be able to boot from /dev/sdb:
$ ssh root@192.168.10.62
root@192.168.10.62's password:
Linux debian01-new 3.2.0-4-686-pae #1 SMP Debian 3.2.46-1+deb7u1 i686

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sat Sep 21 11:06:54 2013
root@debian01-new:~# uname -a
Linux debian01-new 3.2.0-4-686-pae #1 SMP Debian 3.2.46-1+deb7u1 i686 GNU/Linux
root@debian01-new:~# df -h
Filesystem                                              Size  Used Avail Use% Mounted on
rootfs                                                   15G  1.3G   13G  10% /
udev                                                     10M     0   10M   0% /dev
tmpfs                                                   406M  268K  406M   1% /run
/dev/disk/by-uuid/d9e6bd2b-8446-4f61-9636-9b078c0d966a   15G  1.3G   13G  10% /
tmpfs                                                   5.0M     0  5.0M   0% /run/lock
tmpfs                                                   1.6G     0  1.6G   0% /run/shm
/dev/sda3                                                21G  173M   20G   1% /home
root@debian01-new:~# su - ttirtawi
ttirtawi@debian01-new:~$ pwd
/home/ttirtawi
ttirtawi@debian01-new:~$
Notice that the disk with UUID d9e6bd2b-8446-4f61-9636-9b078c0d966a is now mounted as root. Congratulations, your Linux system now lives on the new disk.
Illustration from: http://www.linux.org/resources/debian.11/
This article will show you how to migrate Solaris 10 from UFS to the ZFS filesystem. Please back up the existing UFS filesystem before executing this procedure. Suppose we have a server with Solaris 10 x86 installed on the first disk (c1t0d0) and a free second disk (c1t1d0). We will copy the filesystem to the second disk, which will be formatted with ZFS. First, list the available disks with the format command and check the current layout with df:
bash-3.00# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <DEFAULT cyl 2085 alt 2 hd 255 sec 63>
/pci@0,0/pci1000,8000@14/sd@0,0
1. c1t1d0 <DEFAULT cyl 2086 alt 2 hd 255 sec 63>
/pci@0,0/pci1000,8000@14/sd@1,0
Specify disk (enter its number): ^D
bash-3.00#
bash-3.00# df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c1t0d0s0 12G 3.3G 8.4G 28% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 4.9G 548K 4.9G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
12G 3.3G 8.4G 28% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
swap 4.9G 748K 4.9G 1% /tmp
swap 4.9G 28K 4.9G 1% /var/run
bash-3.00#
We need to prepare the second disk before creating the new ZFS filesystem on top of it. We will use the format command to set up its partitions. Here is the step-by-step guide. First, invoke the format command on the second disk:
bash-3.00# format -e c1t1d0
selecting c1t1d0
[disk formatted]
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
fdisk - run the fdisk program
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
scsi - independent SCSI mode selects
cache - enable, disable or query SCSI disk cache
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format>
Type p to display the current partition table of the c1t1d0 disk. If c1t1d0 is a brand-new disk, you will probably see a warning advising you to run the fdisk command first:
format> p
WARNING - This disk may be in use by an application that has
modified the fdisk table. Ensure that this disk is
not currently in use before proceeding to use fdisk.
format>
To create a new partition table, type the fdisk command and answer y to accept the default layout:

format> fdisk
No fdisk table exists. The default partition for the disk is:

  a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the
partition table.
y
format>
Type p to display the partition menu :
format> p
PARTITION MENU:
0 - change `0' partition
1 - change `1' partition
2 - change `2' partition
3 - change `3' partition
4 - change `4' partition
5 - change `5' partition
6 - change `6' partition
7 - change `7' partition
9 - change `9' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition>
Type p again to display the current partition layout:
partition> p
Current partition table (original):
Total disk cylinders available: 2085 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)           0
  1 unassigned    wm       0               0         (0/0/0)           0
  2     backup    wu       0 - 2084       15.97GB    (2085/0/0) 33495525
  3 unassigned    wm       0               0         (0/0/0)           0
  4 unassigned    wm       0               0         (0/0/0)           0
  5 unassigned    wm       0               0         (0/0/0)           0
  6 unassigned    wm       0               0         (0/0/0)           0
  7 unassigned    wm       0               0         (0/0/0)           0
  8       boot    wu       0 -    0        7.84MB    (1/0/0)       16065
  9 unassigned    wm       0               0         (0/0/0)           0

partition>
In this example I want to assign the whole disk space to partition #0 :
partition> 0
Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)           0

Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 0
Enter partition size[0b, 0c, 0e, 0.00mb, 0.00gb]: 2084c
partition>
partition> p
Current partition table (unnamed):
Total disk cylinders available: 2085 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0 - 2083       15.96GB    (2084/0/0) 33479460
  1 unassigned    wm       0               0         (0/0/0)           0
  2     backup    wu       0 - 2084       15.97GB    (2085/0/0) 33495525
  3 unassigned    wm       0               0         (0/0/0)           0
  4 unassigned    wm       0               0         (0/0/0)           0
  5 unassigned    wm       0               0         (0/0/0)           0
  6 unassigned    wm       0               0         (0/0/0)           0
  7 unassigned    wm       0               0         (0/0/0)           0
  8       boot    wu       0 -    0        7.84MB    (1/0/0)       16065
  9 unassigned    wm       0               0         (0/0/0)           0

partition>
The last thing to do is make the changes permanent by invoking the label command:
partition> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 0
Ready to label disk, continue? yes
partition>
Finish by typing q twice, as shown below:
partition> q
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
fdisk - run the fdisk program
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
scsi - independent SCSI mode selects
cache - enable, disable or query SCSI disk cache
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> q
bash-3.00#
Now that the second disk is prepared, we can continue by creating a new ZFS pool called rpool, using the zpool create command:
bash-3.00# zpool create -f rpool c1t1d0s0
If we get an error about an invalid vdev specification, we can move on by adding the -f option:
bash-3.00# zpool create rpool c1t1d0s0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c1t1d0s0 overlaps with /dev/dsk/c1t1d0s8
bash-3.00# zpool create -f rpool c1t1d0s0
bash-3.00#
We can use zpool list and zpool status to check the new pool we just created:
bash-3.00# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
rpool 15.9G 94K 15.9G 0% ONLINE -
bash-3.00# zpool status
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
c1t1d0s0 ONLINE 0 0 0
errors: No known data errors
bash-3.00#
To copy the whole root partition into the new rpool, we use the Solaris 10 Live Upgrade feature. The lucreate command creates a ZFS boot environment inside rpool and automatically copies the entire root partition into the new pool. It can take some time to finish, depending on the size of the root partition (it may appear to hang at the Copying stage for a while).
bash-3.00# lucreate -n zfsBE -p rpool
Checking GRUB menu...
Analyzing system configuration.
No name for current boot environment.
INFORMATION: The current boot environment is not named - assigning name <c1t0d0s0>.
Current boot environment is named <c1t0d0s0>.
Creating initial configuration for primary boot environment <c1t0d0s0>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <c1t0d0s0> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <c1t0d0s0> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfsBE>.
Source boot environment is <c1t0d0s0>.
Creating boot environment <zfsBE>.
Creating file systems on boot environment <zfsBE>.
Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/zfsBE>.
Populating file systems on boot environment <zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <zfsBE>.
Creating compare database for file system </rpool/ROOT>.
Creating compare database for file system </>.
Updating compare databases on boot environment <zfsBE>.
Making boot environment <zfsBE> bootable.
Updating bootenv.rc on ABE <zfsBE>.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <zfsBE> in GRUB menu
Population of boot environment <zfsBE> successful.
Creation of boot environment <zfsBE> successful.
bash-3.00#
To check the status of the new boot environment we can use the lustatus command:
bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
c1t0d0s0                   yes      yes    yes       no     -
zfsBE                      yes      no     no        yes    -
bash-3.00#
To make the new boot environment (zfsBE) active, we use the luactivate command as shown below:
bash-3.00# luactivate zfsBE
Generating boot-sign, partition and slice information for PBE <c1t0d0s0>
A Live Upgrade Sync operation will be performed on startup of boot environment <zfsBE>.
Generating boot-sign for ABE <zfsBE>
NOTE: File </etc/bootsign> not found in top level dataset for BE <zfsBE>
Generating partition and slice information for ABE <zfsBE>
Boot menu exists.
Generating multiboot menu entries for PBE.
Generating multiboot menu entries for ABE.
Disabling splashimage
Re-enabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:
1. Boot from the Solaris failsafe or boot in Single User mode from Solaris
Install CD or Network.
2. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following command to mount:
mount -Fufs /dev/dsk/c1t0d0s0 /mnt
3. Run <luactivate> utility with out any arguments from the Parent boot
environment root slice, as shown below:
/mnt/sbin/luactivate
4. luactivate, activates the previous working boot environment and
indicates the result.
5. Exit Single User mode and reboot the machine.
**********************************************************************
Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File </etc/lu/installgrub.findroot> propagation successful
File </etc/lu/stage1.findroot> propagation successful
File </etc/lu/stage2.findroot> propagation successful
File </etc/lu/GRUB_capability> propagation successful
Deleting stale GRUB loader from all BEs.
File </etc/lu/installgrub.latest> deletion successful
File </etc/lu/stage1.latest> deletion successful
File </etc/lu/stage2.latest> deletion successful
Activation of boot environment <zfsBE> successful.
bash-3.00#
The last thing to do is install the boot loader on the master boot record of the second disk:
bash-3.00# installgrub -fm /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
Then we need to reboot the server using the init command (as the luactivate output warned, do not use reboot or halt):
bash-3.00# init 6
updating /platform/i86pc/boot_archive
propagating updated GRUB menu
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <zfsBE> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
File </etc/lu/GRUB_backup_menu> propagation successful
File </etc/lu/menu.cksum> propagation successful
File </sbin/bootadm> propagation successful
bash-3.00#
The system will automatically boot from the second disk (c1t1d0) using the ZFS boot environment. With the df command we can easily see that the system now runs on the ZFS filesystem.
# df -h
Filesystem size used avail capacity Mounted on
rpool/ROOT/zfsBE 16G 3.4G 6.7G 35% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 4.3G 356K 4.3G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
10G 3.4G 6.7G 35% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
swap 4.3G 40K 4.3G 1% /tmp
swap 4.3G 24K 4.3G 1% /var/run
rpool 16G 29K 6.7G 1% /rpool
rpool/ROOT 16G 18K 6.7G 1% /rpool/ROOT
#
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
rpool 15.9G 4.95G 10.9G 31% ONLINE -
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 8.96G 6.67G 29.5K /rpool
rpool/ROOT 3.45G 6.67G 18K /rpool/ROOT
rpool/ROOT/zfsBE 3.45G 6.67G 3.45G /
rpool/dump 1.50G 6.67G 1.50G -
rpool/swap 4.01G 10.7G 16K -
# zpool status
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
c1t1d0s0 ONLINE 0 0 0
errors: No known data errors
#
We can delete the old UFS boot environment using the ludelete command.
bash-3.00# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
c1t0d0s0                   yes      no     no        yes    -
zfsBE                      yes      yes    yes       no     -
bash-3.00#
bash-3.00# ludelete -f c1t0d0s0
System has findroot enabled GRUB
Updating GRUB menu default setting
Changing GRUB menu default setting to <0>
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <zfsBE> as <mount-point>//boot/grub/menu.lst.prev.
File </etc/lu/GRUB_backup_menu> propagation successful
Successfully deleted entry from GRUB menu
Determining the devices to be marked free.
Updating boot environment configuration database.
Updating boot environment description database on all BEs.
Updating all boot environment configuration databases.
Boot environment <c1t0d0s0> deleted.
bash-3.00#
Now that disk #0 is unused, we can use it as a mirror disk by attaching c1t0d0 to the existing rpool.
bash-3.00# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <DEFAULT cyl 2085 alt 2 hd 255 sec 63>
/pci@0,0/pci1000,8000@14/sd@0,0
1. c1t1d0 <DEFAULT cyl 2085 alt 2 hd 255 sec 63>
/pci@0,0/pci1000,8000@14/sd@1,0
Specify disk (enter its number): ^D
bash-3.00# zpool status
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
c1t1d0s0 ONLINE 0 0 0
errors: No known data errors
bash-3.00#
But first we need to copy the partition layout of c1t1d0 to c1t0d0:
bash-3.00# prtvtoc /dev/rdsk/c1t1d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2
fmthard: Partition 0 overlaps partition 8. Overlap is allowed
only on partition on the full disk partition).
fmthard: Partition 8 overlaps partition 0. Overlap is allowed
only on partition on the full disk partition).
fmthard: New volume table of contents now in place.
bash-3.00#
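Before attaching the new disk it is worth confirming that the clone really worked. A minimal sketch (assuming the Solaris prtvtoc utility and the device names used in this article): prtvtoc prefixes disk-specific comment lines with `*`, so we filter those out and compare only the slice table lines of the two disks.

```shell
# Compare the slice tables of the source and target disks.
# prtvtoc comment lines (starting with '*') differ per disk, so drop them.
slices() {
    grep -v '^\*' "$1"
}

if command -v prtvtoc >/dev/null 2>&1; then
    prtvtoc /dev/rdsk/c1t1d0s2 > /tmp/vtoc.src
    prtvtoc /dev/rdsk/c1t0d0s2 > /tmp/vtoc.dst
    if [ "`slices /tmp/vtoc.src`" = "`slices /tmp/vtoc.dst`" ]; then
        echo "slice tables match"
    else
        echo "slice tables differ" >&2
    fi
fi
```

If the slice tables differ, rerun the prtvtoc | fmthard step before continuing.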
Then we can attach c1t0d0 to the rpool using the following command:
bash-3.00# zpool attach -f rpool c1t1d0s0 c1t0d0s0
Once attached to the rpool, the system will synchronize the disks (in ZFS terms this is called resilvering). Don't reboot the system before the resilvering process completes. We can monitor the resilvering progress using the zpool status command:
bash-3.00# zpool status
pool: rpool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 0h3m, 9.23% done, 0h35m to go
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror ONLINE 0 0 0
c1t1d0s0 ONLINE 0 0 0
c1t0d0s0 ONLINE 0 0 0
errors: No known data errors
bash-3.00#
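Instead of rerunning zpool status by hand, a small loop like the following can wait until the resilver completes. This is only a sketch: the pool name matches this article, and the 60-second polling interval is an arbitrary choice.

```shell
# True while the zpool status text reports an ongoing resilver.
resilvering() {
    echo "$1" | grep "resilver in progress" > /dev/null
}

if command -v zpool >/dev/null 2>&1; then
    # Poll until the "resilver in progress" line disappears.
    while resilvering "`zpool status rpool`"; do
        sleep 60
    done
    echo "resilver of rpool finished"
fi
```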
It may take a while for the resilvering process to finish:
bash-3.00# zpool status
pool: rpool
state: ONLINE
scrub: resilver completed after 0h13m with 0 errors on Sun Sep 1 11:58:52 2013
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror ONLINE 0 0 0
c1t1d0s0 ONLINE 0 0 0
c1t0d0s0 ONLINE 0 0 0
errors: No known data errors
bash-3.00#
The last thing to do is to install the boot loader on c1t0d0:
bash-3.00# installgrub -fm /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
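As a final sanity check, a short sketch like this (pool name from this article) can confirm the mirrored pool is healthy before you rely on it: for a clean pool, `zpool status -x` prints a single "is healthy" line.

```shell
# True when the "zpool status -x" text reports a healthy pool.
pool_healthy() {
    echo "$1" | grep "is healthy" > /dev/null
}

if command -v zpool >/dev/null 2>&1; then
    if pool_healthy "`zpool status -x rpool`"; then
        echo "rpool is healthy"
    else
        zpool status rpool    # show full details when something is wrong
    fi
fi
```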
I just upgraded to iOS 7 on my iPhone; yes, Apple released iOS 7 yesterday. The iOS 7 interface is completely new, unlike previous iOS releases. The upgrade took quite a long time even though I was using a fairly fast internet connection. Probably a lot of Apple users were upgrading at the same time.
The Lock Screen looks new; the most noticeable change is the more modern choice of font:
Here is a comparison of the Home Screen on my iPhone before and after the upgrade:
The background now looks more alive, with a 3D feel. Beyond the looks, iOS 7 also brings several new features, one of which is the Camera app.
The Camera now has an option to shoot in a 1×1 format like the one used by Instagram. The Camera also comes with built-in filters (again, like Instagram). And one more new thing: the Camera does burst shooting (taking photos continuously) if I keep my finger on the shutter button.
Several important settings are now easy to reach with an upward swipe. Android has had this feature for a long time, although on Android it is usually a swipe from the top of the screen.
Perhaps because this is the very first release, I feel my iPhone 4S has lost some performance; it feels slightly slower than before. Hopefully Apple releases a fix for iOS 7 soon.