1) RAID-Z1 is similar to RAID 5 (it tolerates one failed disk), RAID-Z2 is similar to RAID 6 (it tolerates two failed disks), and RAID-Z3 tolerates three failed disks. The need for RAID-Z3 arose because arrays built from large disks (say 6–10 TB) can take a long time to rebuild, in the worst case weeks.
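As a sketch, each RAID-Z level is selected at pool creation time (pool name and device names below are assumptions for illustration):

```shell
# Create pools at each RAID-Z level; "tank" and sdb..sdf are placeholders.
zpool create tank raidz  sdb sdc sdd          # RAID-Z1: tolerates 1 failed disk
zpool create tank raidz2 sdb sdc sdd sde      # RAID-Z2: tolerates 2 failed disks
zpool create tank raidz3 sdb sdc sdd sde sdf  # RAID-Z3: tolerates 3 failed disks
```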
2) ZFS has no equivalent of the fsck repair tool common on Unix filesystems. Instead, ZFS has a repair mechanism called "scrub".
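A minimal sketch of running a scrub (the pool name zpool-1 is taken from the examples later in these notes):

```shell
# Start a scrub: ZFS walks all allocated blocks, verifying checksums and
# repairing bad copies from redundancy where possible.
zpool scrub zpool-1
# Check scrub progress and any errors found or repaired.
zpool status -v zpool-1
```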
3) ZFS – data is compressed first, then deduplicated.
4) Logical Data (original size of data without compression or dedup)
The amount of space logically consumed by a filesystem. This does not factor in compression, and can be viewed as the theoretical upper bound on the amount of space consumed by the filesystem. Copying the filesystem to another appliance using a different compression algorithm will not consume more than this amount. This statistic is not explicitly exported and can generally only be computed by taking the amount of physical space consumed and multiplying it by the current compression ratio.
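The computation above can be sketched in shell. The `used` and `ratio` values here are made-up stand-ins for what `zfs get -Hp used,compressratio` would report on a real filesystem:

```shell
# Estimate logical size = physical space consumed x compression ratio.
used=1073741824   # physical bytes consumed (1 GiB, assumed value)
ratio=2.5         # current compressratio (assumed value)
logical=$(awk -v u="$used" -v r="$ratio" 'BEGIN { printf "%.0f", u * r }')
echo "$logical"   # estimated logical bytes: 2684354560 (~2.5 GiB)
```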
* Installation and other tips
2) modinfo zfs (verify the ZFS kernel module is available and check its version)
3) zpool add zpool-2 raidz /dev/sdc1 /dev/sdc2 /dev/sdc3
4) RAID-Z configurations with a single-digit number of disks per group should perform better.
5) zpool replace will copy all of the data from the old disk to the new one. After this operation completes, the old disk is disconnected from the vdev.
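A sketch of the replace workflow (pool and device names are assumptions for illustration):

```shell
# Resilver all data from the old disk (sdc) onto the new one (sdd).
zpool replace zpool-1 /dev/sdc /dev/sdd
# Watch the resilver; once it completes, sdc is detached from the vdev.
zpool status zpool-1
```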
6) Although additional vdevs can be added to a pool, the layout of existing vdevs cannot be changed.
7) ZFS deduplication is in-band, which means deduplication occurs when you write data to disk and impacts both CPU and memory resources. Deduplication tables (DDTs) consume memory and eventually spill over and consume disk space. At that point, ZFS has to perform extra read and write operations for every block of data on which deduplication is attempted. This causes a reduction in performance.
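The memory cost can be roughed out in shell. The entry count below is a made-up figure standing in for what `zdb -D zpool-1` would report, and the ~320 bytes per in-core DDT entry is a commonly cited rule of thumb, not an exact constant:

```shell
# Rule-of-thumb estimate of RAM consumed by the dedup table (DDT).
entries=1000000                          # assumed number of DDT entries
bytes_per_entry=320                      # assumed in-core size per entry
ddt_ram=$((entries * bytes_per_entry))   # total bytes of RAM for the DDT
echo "$ddt_ram"                          # 320000000 bytes (~305 MiB)
```

If the DDT grows past what RAM (and L2ARC) can hold, the extra reads and writes described above begin to dominate.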
8) zdb -bb zpool-1 | grep -i 'file\|directory\|LSIZE' | grep -v DSL | grep -v object
9) zpool list (raw pool capacity, including parity overhead) or df -k (usable filesystem space)
10) zdb -dd zpool-1 | grep plain
* Gluster with ZFS