Switching to Seagate 8TB IronWolf NAS disks

Time to replace my NAS disks: storage space is running low, my 6TB Western Digital drives are out of warranty, and I know from experience that they die on a regular basis.

This time I decided to go with helium-filled Seagate IronWolf disks; let's see how they hold up. At least Seagate also seems to offer a decent RMA process, just in case.

=== START OF INFORMATION SECTION ===
Model Family:     Seagate IronWolf
Device Model:     ST8000VN004-2M2101
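The information section above is smartctl output; a minimal way to reproduce it, assuming the new drive shows up as /dev/da3 (a hypothetical device name on this FreeBSD-based NAS), would be:

smartctl -i /dev/da3     # identification/information section only
smartctl -a /dev/da3     # full SMART report, including health and error logs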

I then swapped the disks one by one, resilvering the whole ZFS pool after each swap:

root@storage:~ # zpool replace storage /dev/gptid/ca518adc-dae2-11ea-b242-ac1f6bb1f010 /dev/gptid/1116da63-800d-11ec-a9d7-ac1f6bb1f010
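Before the replace, the new drive needs a GPT data partition so that it shows up under /dev/gptid. On TrueNAS the web UI normally handles this; done by hand on FreeBSD it would look roughly like this, assuming the new disk appears as /dev/da3 (a hypothetical device name):

gpart create -s gpt da3                 # create a new GPT partition table on the blank disk
gpart add -t freebsd-zfs -a 1m da3      # add a data partition for ZFS
glabel status | grep da3                # look up the gptid/... name of the new partition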

root@storage:~ # zpool status storage
  pool: storage
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Jan 28 08:38:19 2022
        132G scanned at 2.70G/s, 3.06G issued at 64.0M/s, 17.3T total
        0B resilvered, 0.02% done, 3 days 06:45:13 to go
config:

        NAME                                              STATE     READ WRITE CKSUM
        storage                                           DEGRADED     0     0     0
          raidz1-0                                        DEGRADED     0     0     0
            gptid/34eb0c58-d274-11ea-ab94-ac1f6bb1f010    ONLINE       0     0     0
            gptid/2d6266b8-0266-11e8-9491-001517d34dc1    ONLINE       0     0     0
            gptid/fa76d2f6-7ed4-11ec-b5c0-ac1f6bb1f010    ONLINE       0     0     0
            replacing-3                                   DEGRADED     0     0     0
              1500263774321294681                         OFFLINE      0     0     0  was /dev/gptid/ca518adc-dae2-11ea-b242-ac1f6bb1f010
              gptid/1116da63-800d-11ec-a9d7-ac1f6bb1f010  ONLINE       0     0     0

errors: No known data errors

root@storage:~ # zpool status storage
  pool: storage
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Jan 28 08:38:19 2022
        2.02T scanned at 5.10G/s, 61.8G issued at 156M/s, 17.3T total
        13.9G resilvered, 0.35% done, 1 days 08:09:11 to go
config:

        NAME                                              STATE     READ WRITE CKSUM
        storage                                           DEGRADED     0     0     0
          raidz1-0                                        DEGRADED     0     0     0
            gptid/34eb0c58-d274-11ea-ab94-ac1f6bb1f010    ONLINE       0     0     0
            gptid/2d6266b8-0266-11e8-9491-001517d34dc1    ONLINE       0     0     0
            gptid/fa76d2f6-7ed4-11ec-b5c0-ac1f6bb1f010    ONLINE       0     0     0
            replacing-3                                   DEGRADED     0     0     0
              1500263774321294681                         OFFLINE      0     0     0  was /dev/gptid/ca518adc-dae2-11ea-b242-ac1f6bb1f010
              gptid/1116da63-800d-11ec-a9d7-ac1f6bb1f010  ONLINE       0     0     0  (resilvering)
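Once the resilver completes, ZFS detaches the old device automatically and the pool should return to ONLINE. Two standard zpool commands to confirm, shown here as a sketch rather than output captured from this box:

zpool status -x storage    # short health summary; prints details only if something is wrong
zpool list storage         # size, allocated and free space for the pool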


Once the last disk had been replaced and resilvered, the available space automatically jumped from 20.3 TB to 29.1 TB, thanks to the zpool autoexpand feature being turned on.
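For reference, autoexpand is a per-pool property; these standard zpool commands (not captured from this box) show how to check and enable it, and how to grow a device manually if the property was off during the replacement:

zpool get autoexpand storage                 # check the current setting
zpool set autoexpand=on storage              # grow automatically once every disk in a vdev is bigger
zpool online -e storage gptid/1116da63-800d-11ec-a9d7-ac1f6bb1f010   # expand one device by hand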