by adding 3 new disks ''ada3'', ''da0'' and ''da1'' (Despite the names, the latter are SATA disks, attached to
a 3ware 9650SE-2LP since I'd run out of motherboard ports).

Note that this procedure will not defragment your pool and you should do a send|recv if possible.
  
===== Original Configuration =====
  
The overall process is:
  - Create a 6-way RAIDZ2 across the 3 new disks (ie each disk provides two vdevs).
  - Copy the existing pool onto the new disks.
  - Switch the system to use the new 6-way pool.
  - Destroy the original pool.
  - Replace the second vdev in each disk with one of the original disks.
  - Re-partition the new disks to expand the remaining vdev to occupy the now unused space.
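As a compact overview, the steps above can be sketched as one command sequence.  This is only a dry run under assumed names (GPT labels ''disk3a''…''disk5b'', snapshot ''@copy1''): the ''run'' helper just prints each command rather than executing it, and the real invocations appear later on this page.

<code>
# Dry-run sketch of the migration; "run" only prints each command.
run() { echo "$@"; }

# 1. 6-way RAIDZ2 across two partitions on each of the 3 new disks
#    (GPT labels are assumptions for illustration)
run zpool create tank2 raidz2 \
    gpt/disk3a gpt/disk3b gpt/disk4a gpt/disk4b gpt/disk5a gpt/disk5b

# 2. Copy the existing pool via a recursive snapshot and send|recv
run zfs snapshot -r tank@copy1
run "zfs send -R tank@copy1 | zfs recv -vu -d tank2"

# 3. Switch to the new pool: rename old and new via export/import
run zpool export tank
run zpool import tank tanko
run zpool export tanko
run zpool import tank2 tank

# 4. Destroy the original pool (after re-importing it)
run zpool destroy tanko

# 5. Replace the second vdev on each new disk with an original disk
run zpool replace tank gpt/disk3b ada0

# 6. Re-partition each new disk so the surviving vdev can grow
run gpart delete -i 6 ada3
</code>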
  
causes additional seeking between vdevs.
  
<code>
  zpool scrub tank2
</code>
  
==== Switch to new pool ====
In order to prevent any updates, the system should be brought down to
single-user mode:
<code>
  shutdown now
</code>
  
Once nothing is writing to ZFS, a second snapshot can be taken and
an incremental send used to bring the new pool up to date.  The rollback
is needed in case anything in the new pool has been
altered since the previous 'zfs recv' (this includes atime updates).
  
<code>
  zfs snapshot -r tank@20101105bu
  zfs rollback -R tank2@20101104bu
  zfs send -R -I tank@20101104bu tank@20101105bu | zfs recv -vu -d tank2
</code>
  
The original pool is now renamed by exporting and importing it under a
new name and then exporting it to umount it.
  
<code>
  zpool export tank
  zpool import tank tanko
  zpool export tanko
</code>
  
And the new pool is renamed to the wanted name via export/import.
  
<code>
  zpool export tank2
  zpool import tank2 tank
</code>
  
The system can now be returned to multiuser mode and any required testing
performed.

<code>
  exit
</code>
  
==== Replace vdevs ====
  
In order to expand the pool, the vdevs on the 3 new disks need to be
resized.  It's not possible to expand the gpart partition so this also
requires a (short) outage.
  
The system needs to be placed in single-user mode to allow the partitions
and pool to be manipulated:

<code>
  shutdown now
</code>
  
Once in single-user mode, all 3 partition 6's can be deleted and the
</code>
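The exact ''gpart'' invocations depend on the original partition layout; as a hypothetical sketch for one disk (device name, partition indices and GPT label are assumptions), the reshuffle might look like the following dry run, where ''run'' only prints each command:

<code>
run() { echo "$@"; }   # print instead of execute (dry run)

# Delete the now-unused second ZFS partition, grow the remaining one
# into the freed space, then tell ZFS to use the enlarged vdev.
run gpart delete -i 6 ada3
run gpart resize -i 5 ada3
run zpool online -e tank gpt/disk3a
</code>

''zpool online -e'' asks ZFS to expand onto the larger partition when the pool's ''autoexpand'' property is not set.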
  
The pool has now expanded to 4TB:
<code>
zpool list
  
And the system can be restarted:
<code>
  exit
</code>
  
Remember to add the new disks to (eg) daily_status_smart_devices
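On FreeBSD this is a periodic(8) knob set in ''/etc/periodic.conf''; a sketch might look like the following (the device list is an assumption matching the disks above, and ''da0''/''da1'' sit behind the 3ware controller, which may need smartctl passthrough syntax instead):

<code>
# /etc/periodic.conf -- include the new disks in the daily SMART status
daily_status_smart_devices="/dev/ada0 /dev/ada1 /dev/ada2 /dev/ada3 /dev/da0 /dev/da1"
</code>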
  
zfsraid.txt · Last modified: 2015/04/24 08:17 by peterjeremy
 