
Switching to Seagate 8TB IronWolf NAS disks

Time to replace my NAS disks: storage space is running low, and my 6TB Western Digital drives are now out of warranty. And I know from experience that they die on a regular basis.

This time I decided to go with helium-filled Seagate IronWolf disks; let's see how they hold up. At least Seagate also seems to offer a good RMA process, just in case.

=== START OF INFORMATION SECTION ===
Model Family:     Seagate IronWolf
Device Model:     ST8000VN004-2M2101
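The identification block above comes from smartctl. As a reference, a minimal sketch of how to pull it — the device name ada0 is an assumption, adjust for your controller:

```shell
# Hypothetical example: print identity info (model family, device model,
# serial, capacity) for the first SATA disk; ada0 is an assumed name.
smartctl -i /dev/ada0
# Running a long self-test before trusting a new disk doesn't hurt either:
smartctl -t long /dev/ada0
```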

I swapped each disk one by one, resilvering the whole ZFS pool after every swap:

root@storage:~ # zpool replace storage /dev/gptid/ca518adc-dae2-11ea-b242-ac1f6bb1f010 /dev/gptid/1116da63-800d-11ec-a9d7-ac1f6bb1f010
root@storage:~ # zpool status storage
  pool: storage
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Jan 28 08:38:19 2022
        132G scanned at 2.70G/s, 3.06G issued at 64.0M/s, 17.3T total
        0B resilvered, 0.02% done, 3 days 06:45:13 to go
config:

        NAME                                              STATE     READ WRITE CKSUM
        storage                                           DEGRADED     0     0     0
          raidz1-0                                        DEGRADED     0     0     0
            gptid/34eb0c58-d274-11ea-ab94-ac1f6bb1f010    ONLINE       0     0     0
            gptid/2d6266b8-0266-11e8-9491-001517d34dc1    ONLINE       0     0     0
            gptid/fa76d2f6-7ed4-11ec-b5c0-ac1f6bb1f010    ONLINE       0     0     0
            replacing-3                                   DEGRADED     0     0     0
              1500263774321294681                         OFFLINE      0     0     0  was /dev/gptid/ca518adc-dae2-11ea-b242-ac1f6bb1f010
              gptid/1116da63-800d-11ec-a9d7-ac1f6bb1f010  ONLINE       0     0     0

errors: No known data errors

A little later, once the resilver had ramped up, the estimate became more realistic:

root@storage:~ # zpool status storage
  pool: storage
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Jan 28 08:38:19 2022
        2.02T scanned at 5.10G/s, 61.8G issued at 156M/s, 17.3T total
        13.9G resilvered, 0.35% done, 1 days 08:09:11 to go
config:

        NAME                                              STATE     READ WRITE CKSUM
        storage                                           DEGRADED     0     0     0
          raidz1-0                                        DEGRADED     0     0     0
            gptid/34eb0c58-d274-11ea-ab94-ac1f6bb1f010    ONLINE       0     0     0
            gptid/2d6266b8-0266-11e8-9491-001517d34dc1    ONLINE       0     0     0
            gptid/fa76d2f6-7ed4-11ec-b5c0-ac1f6bb1f010    ONLINE       0     0     0
            replacing-3                                   DEGRADED     0     0     0
              1500263774321294681                         OFFLINE      0     0     0  was /dev/gptid/ca518adc-dae2-11ea-b242-ac1f6bb1f010
              gptid/1116da63-800d-11ec-a9d7-ac1f6bb1f010  ONLINE       0     0     0  (resilvering)


After the last swap, the available space automatically jumped from 20.3 TB to 29.1 TB, thanks to the zpool autoexpand feature being turned on.
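The numbers line up once you keep decimal TB and binary TiB apart: zpool sizes labeled "T" are TiB, drive vendors count in decimal TB, and `zpool list` reports raw pool capacity including raidz parity. A quick back-of-the-envelope check (my own arithmetic, not output from the pool itself):

```shell
# Four 8 TB (decimal) disks; raw pool capacity expressed in TiB:
awk 'BEGIN { printf "%.1f TiB\n", 4 * 8e12 / 2^40 }'
# prints 29.1 TiB -- matching the reported pool size after autoexpand
```

Whether the pool grows by itself can be checked with `zpool get autoexpand storage`; with autoexpand off, a manual `zpool online -e` per device achieves the same expansion.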

Western Digital 6TB NAS defect

One of my 6TB NAS drives just died. Not a big issue: the drive is still under warranty, and Western Digital has superb customer service. Just fill out a form, receive a new disk, then send back the old one.
Of course, your data should be encrypted in the first place 😀 you never know.
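On FreeBSD/FreeNAS the usual tool for that is GELI. A minimal sketch, not my actual setup — the partition name, keyfile path and cipher parameters here are all assumptions:

```shell
# Hypothetical GELI setup on a fresh ZFS partition (all names assumed):
geli init -e AES-XTS -l 256 -s 4096 -K /root/keys/ada3p2.key /dev/ada3p2
geli attach -k /root/keys/ada3p2.key /dev/ada3p2
# ZFS is then built on the encrypted provider, /dev/ada3p2.eli, so a
# disk sent back for RMA only ever contains ciphertext.
```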

root@storage:~ # zpool offline storage /dev/gptid/feb58edc-34bb-11e2-8227-b8975a2ae4ef
root@storage:~ # zpool status storage
  pool: storage
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: resilvered 2.12T in 0 days 15:18:09 with 0 errors on Thu Jan 25 11:44:30 2018
config:

        NAME                                            STATE     READ WRITE CKSUM
        storage                                         DEGRADED     0     0     0
          raidz1-0                                      DEGRADED     0     0     0
            gptid/fdd627a8-34bb-11e2-8227-b8975a2ae4ef  ONLINE       0     0     0
            gptid/fe48d16c-34bb-11e2-8227-b8975a2ae4ef  ONLINE       0     0     0
            1956547944066614565                         OFFLINE      0     0     0  was /dev/gptid/feb58edc-34bb-11e2-8227-b8975a2ae4ef
            gptid/3158b989-013c-11e8-9491-001517d34dc1  ONLINE       0     0     0

The replacement disk showed up as ada3, so it gets partitioned like the others: a small swap partition plus one ZFS partition.

root@storage:~ # gpart create -s gpt /dev/ada3
ada3 created
root@storage:~ # gpart add -b 128 -t freebsd-swap -s 2G /dev/ada3
ada3p1 added
root@storage:~ # gpart add -t freebsd-zfs /dev/ada3
ada3p2 added
root@storage:~ # gpart list ada3
Geom name: ada3
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 11721045127
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: ada3p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   rawuuid: f6f3c35f-01f8-11e8-9491-001517d34dc1
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: ada3p2
   Mediasize: 5999027552256 (5.5T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   rawuuid: fbe6c0a0-01f8-11e8-9491-001517d34dc1
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 5999027552256
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 11721045119
   start: 4194432
Consumers:
1. Name: ada3
   Mediasize: 6001175126016 (5.5T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
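The step in between was not captured in the session above: the new partition is attached to the pool by the rawuuid from the gpart list output. Inferred from the gptids that show up in the status below — so treat the exact device pairing as an assumption — it would have looked roughly like this:

```shell
# Inferred, not from the original capture: replace the offlined device
# with the freshly created ZFS partition (rawuuid from gpart list).
zpool replace storage /dev/gptid/feb58edc-34bb-11e2-8227-b8975a2ae4ef /dev/gptid/fbe6c0a0-01f8-11e8-9491-001517d34dc1
```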

root@storage:~ # zpool status storage
  pool: storage
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Jan 26 07:59:05 2018
        8.80T scanned at 229M/s, 7.06T issued at 184M/s, 8.88T total
        1.68T resilvered, 79.46% done, 0 days 02:53:18 to go
config:

        NAME                                              STATE     READ WRITE CKSUM
        storage                                           DEGRADED     0     0     0
          raidz1-0                                        DEGRADED     0     0     0
            gptid/fdd627a8-34bb-11e2-8227-b8975a2ae4ef    ONLINE       0     0     0
            replacing-1                                   OFFLINE      0     0     0
              16622126325015839303                        OFFLINE      0     0     0  was /dev/gptid/fe48d16c-34bb-11e2-8227-b8975a2ae4ef
              gptid/2d6266b8-0266-11e8-9491-001517d34dc1  ONLINE       0     0     0  (resilvering)
            gptid/fbe6c0a0-01f8-11e8-9491-001517d34dc1    ONLINE       0     0     0
            gptid/3158b989-013c-11e8-9491-001517d34dc1    ONLINE       0     0     0