nprofile1qy2hwumn8ghj7un9d3shjtnddaehgu3wwp6kyqpqt7dmr2xue8zaxsxhy2xkdja0nx6f8cu8y8l7hfksw98y773djkgqqrsr9k With that much data, you’re planning on spinning rust, right? I would do at least raidz2 plus snapshot replication to handle whole-server failures. raidz1 is generally fine for SSDs, but 40+ TB of SSDs may be prohibitively expensive. draid is an option, but mostly relevant for much larger deployments.
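A minimal sketch of that replication, assuming a pool named tank and a standby box reachable as backuphost (both names are placeholders):

    # take a recursive snapshot, then send the full stream to the standby box
    zfs snapshot -r tank@weekly-1
    zfs send -R tank@weekly-1 | ssh backuphost zfs recv -Fdu backup

    # later snapshots only need an incremental stream
    zfs snapshot -r tank@weekly-2
    zfs send -R -i tank@weekly-1 tank@weekly-2 | ssh backuphost zfs recv -Fdu backup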
A ZFS pool can be expanded in a few ways: adding a new vdev, or expanding an existing vdev. Adding a new vdev takes at least two disks at a time, and it can result in somewhat asymmetric performance for a while, since ZFS steers most new writes toward the emptier vdev until the pool levels out. Still, it’s a simple option if you can do it. Just make sure your new vdev has at least the same fault tolerance as your existing vdev.
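For illustration, adding a second raidz2 vdev might look like this (pool and disk names are made up):

    # add a new six-disk raidz2 vdev alongside the existing one;
    # zpool refuses mismatched redundancy levels unless you force it with -f
    zpool add tank raidz2 sde sdf sdg sdh sdi sdj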
Since the earliest days of ZFS, you have been able to expand a vdev by replacing its disks one at a time with larger ones, and letting the vdev resilver. Once you have replaced them all, you tell the vdev to expand, and you get some extra space. This is slow, but simple.
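Roughly, again assuming a pool named tank and hypothetical disk names:

    # swap one disk for a larger one; wait for the resilver to finish
    # before starting the next swap, so redundancy is never lost
    zpool replace tank sda sdg
    zpool status tank              # repeat for each remaining disk

    # once every disk is larger, let the vdev grow into the new space
    zpool set autoexpand=on tank
    zpool online -e tank sdg       # run for each replaced disk if needed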
Version 2.3.0 adds raidz stripe expansion. With this, you can take a raidz vdev containing n disks and add a disk to end up with n+1 disks. There are a few limits to this, though. ZFS avoids rewriting data, so existing data stays at the storage efficiency of the old stripe width. Once you have added a disk to a raidz vdev, the pool can only ever be accessed by ZFS 2.3.0 or higher. You can’t use this to increase the fault tolerance of the vdev (no adding a disk to go from raidz1 to raidz2), only to add capacity.
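The expansion itself is a single attach; here raidz1-0 is the usual auto-generated vdev name, and the disk name is a placeholder:

    # grow a raidz vdev from n to n+1 disks (needs OpenZFS 2.3.0+)
    zpool attach tank raidz1-0 sdk
    zpool status tank   # reports expansion progress until it completes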