sdf is broken and keeps throwing errors. sde also looks like it is on its way out, but for now we only replace sdf.
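Before pulling the disk, it is worth confirming which physical drive sdf actually is, so the right one comes out of the bay. A minimal sketch, assuming smartmontools is installed on the Proxmox host (not part of the original log):
Code
# Map the kernel name sdf to its stable by-id name
ls -l /dev/disk/by-id/ | grep -w sdf
# Print the drive's serial number as reported by SMART
smartctl -i /dev/sdf | grep -i serial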
After pulling the disk:
Code
root@pve-iso-01:~# zpool status zfs-store
  pool: zfs-store
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: resilvered 51.4G in 03:41:12 with 2104 errors on Sat Jun 3 23:08:27 2023
config:

        NAME                     STATE     READ WRITE CKSUM
        zfs-store                DEGRADED     0     0     0
          raidz1-0               DEGRADED     0     0     0
            sda                  ONLINE       0     0     0
            sdb                  ONLINE       0     0     0
            sdd                  ONLINE       0     0     0
            sde                  DEGRADED     0     0     0  too many errors
            1373100937986091390  UNAVAIL      0     0     0  was /dev/sdf1

errors: 2090 data errors, use '-v' for a list
So we take it offline:
root@pve-iso-01:~# zpool offline zfs-store 1373100937986091390
Code
root@pve-iso-01:~# zpool status zfs-store
  pool: zfs-store
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: resilvered 51.4G in 03:41:12 with 2104 errors on Sat Jun 3 23:08:27 2023
config:

        NAME                     STATE     READ WRITE CKSUM
        zfs-store                DEGRADED     0     0     0
          raidz1-0               DEGRADED     0     0     0
            sda                  ONLINE       0     0     0
            sdb                  ONLINE       0     0     0
            sdd                  ONLINE       0     0     0
            sde                  DEGRADED     0     0     0  too many errors
            1373100937986091390  OFFLINE      0     0     0  was /dev/sdf1

errors: 2090 data errors, use '-v' for a list
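Since sde is also flagged with "too many errors", a quick SMART check on it is worthwhile while the pool is degraded. A sketch, again assuming smartmontools is installed:
Code
# Overall SMART health verdict for the other suspect disk
smartctl -H /dev/sde
# Full attribute table, e.g. reallocated and pending sector counts
smartctl -A /dev/sde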
Now it is shown as OFFLINE. The new disk is already installed and lsblk shows it again as (the new) sdf. You can see it previously ran under Windows (NTFS):
Code
root@pve-iso-01:~# lsblk -o NAME,UUID,FSTYPE,FSUSE%,TYPE,SIZE,MOUNTPOINT,MODEL,SERIAL
NAME UUID FSTYPE FSUSE% TYPE SIZE MOUNTPOINT MODEL SERIAL
sda disk 3.6T HGST_HDN726040ALE614 K4JJ13BB
├─sda1 12750829833267718484 zfs_member part 3.6T
└─sda9 part 8M
sdb disk 3.6T HGST_HDN726040ALE614 K4JH2ZZB
├─sdb1 12750829833267718484 zfs_member part 3.6T
└─sdb9 part 8M
sdc disk 447.1G KINGSTON_SA400S37480G 50026B778209DEE8
├─sdc1 part 1007K
├─sdc2 8B36-7257 vfat part 512M
└─sdc3 PWRr6o-R7XP-TAoP-NCsB-D15e-EvQ6-NU0Nwq LVM2_member part 446.6G
├─pve-swap 7e089e1c-23e8-4f53-a659-1f7e50c06f6e swap lvm 8G [SWAP]
├─pve-root 2932087e-8626-4e01-bad2-9db03d9670c7 ext4 10% lvm 96G /
├─pve-data_tmeta lvm 3.3G
│ └─pve-data-tpool lvm 320.1G
│ ├─pve-data lvm 320.1G
│ ├─pve-vm--100--disk--0 lvm 119.2G
│ └─pve-vm--100--disk--1 lvm 32G
└─pve-data_tdata lvm 320.1G
└─pve-data-tpool lvm 320.1G
├─pve-data lvm 320.1G
├─pve-vm--100--disk--0 lvm 119.2G
└─pve-vm--100--disk--1 lvm 32G
sdd disk 3.6T HGST_HDN726040ALE614 K4KDAGDB
├─sdd1 12750829833267718484 zfs_member part 3.6T
└─sdd9 part 8M
sde disk 3.6T ST4000DM004-2CV104 ZFN1F46F
├─sde1 12750829833267718484 zfs_member part 3.6T
└─sde9 part 8M
sdf disk 3.6T ST4000DM004-2CV104 ZFN1FD27
├─sdf1 part 16M
└─sdf2 20AA96CFAA96A138 ntfs part 3.6T
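The replace command below uses the stable /dev/disk/by-id path instead of the kernel name sdf. One way to look it up is by the serial number lsblk shows above; a small sketch (the wipefs step is optional and not taken from the original log, since zpool replace -f overwrites the old NTFS labels anyway):
Code
# Find the by-id name of the new disk via its serial number
ls -l /dev/disk/by-id/ | grep ZFN1FD27
# Optional: wipe the old NTFS signatures first
# wipefs -a /dev/sdf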
Now run the replace command:
root@pve-iso-01:~# zpool replace -f zfs-store 1373100937986091390 /dev/disk/by-id/ata-ST4000DM004-2CV104_ZFN1FD27
and now the pool is resilvering:
Code
root@pve-iso-01:~# zpool status zfs-store
  pool: zfs-store
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sun Jun 4 10:58:18 2023
        714M scanned at 59.5M/s, 13.0M issued at 1.08M/s, 7.51T total
        0B resilvered, 0.00% done, no estimated completion time
config:

        NAME                                   STATE     READ WRITE CKSUM
        zfs-store                              DEGRADED     0     0     0
          raidz1-0                             DEGRADED     0     0     0
            sda                                ONLINE       0     0     0
            sdb                                ONLINE       0     0     0
            sdd                                ONLINE       0     0     0
            sde                                DEGRADED     0     0     0  too many errors
            replacing-4                        DEGRADED     0     0     0
              1373100937986091390              OFFLINE      0     0     0  was /dev/sdf1
              ata-ST4000DM004-2CV104_ZFN1FD27  ONLINE       0     0     0

errors: 2090 data errors, use '-v' for a list
This is how it looks in the Proxmox GUI.
Now WAIT (this can take days).
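While waiting, the resilver can also be watched from the shell; a minimal sketch:
Code
# Re-run zpool status every 5 minutes to follow the progress
watch -n 300 zpool status zfs-store
# or just check the scan/progress lines from time to time
zpool status zfs-store | grep -E 'resilver|done'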
lsblk now shows the formerly NTFS disk as a zpool member:
Code
root@pve-iso-01:~# lsblk -o NAME,UUID,FSTYPE,FSUSE%,TYPE,SIZE,MOUNTPOINT,MODEL,SERIAL
NAME UUID FSTYPE FSUSE% TYPE SIZE MOUNTPOINT MODEL SERIAL
sda disk 3.6T HGST_HDN726040ALE614 K4JJ13BB
├─sda1 12750829833267718484 zfs_member part 3.6T
└─sda9 part 8M
sdb disk 3.6T HGST_HDN726040ALE614 K4JH2ZZB
├─sdb1 12750829833267718484 zfs_member part 3.6T
└─sdb9 part 8M
sdc disk 447.1G KINGSTON_SA400S37480G 50026B778209DEE8
├─sdc1 part 1007K
├─sdc2 8B36-7257 vfat part 512M
└─sdc3 PWRr6o-R7XP-TAoP-NCsB-D15e-EvQ6-NU0Nwq LVM2_member part 446.6G
├─pve-swap 7e089e1c-23e8-4f53-a659-1f7e50c06f6e swap lvm 8G [SWAP]
├─pve-root 2932087e-8626-4e01-bad2-9db03d9670c7 ext4 10% lvm 96G /
├─pve-data_tmeta lvm 3.3G
│ └─pve-data-tpool lvm 320.1G
│ ├─pve-data lvm 320.1G
│ ├─pve-vm--100--disk--0 lvm 119.2G
│ └─pve-vm--100--disk--1 lvm 32G
└─pve-data_tdata lvm 320.1G
└─pve-data-tpool lvm 320.1G
├─pve-data lvm 320.1G
├─pve-vm--100--disk--0 lvm 119.2G
└─pve-vm--100--disk--1 lvm 32G
sdd disk 3.6T HGST_HDN726040ALE614 K4KDAGDB
├─sdd1 12750829833267718484 zfs_member part 3.6T
└─sdd9 part 8M
sde disk 3.6T ST4000DM004-2CV104 ZFN1F46F
├─sde1 12750829833267718484 zfs_member part 3.6T
└─sde9 part 8M
sdf disk 3.6T ST4000DM004-2CV104 ZFN1FD27
├─sdf1 12750829833267718484 zfs_member part 3.6T
└─sdf9 part 8M
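Once the resilver has finished, the 2090 recorded data errors still need attention. A sketch of the usual follow-up, not taken from the original log:
Code
# List the files affected by the recorded data errors
zpool status -v zfs-store
# After restoring or deleting the affected files, clear the error counters
zpool clear zfs-store
# and run a scrub to verify the whole pool
zpool scrub zfs-store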