Do NOT use ZFS in these cases:
1. you want to use ZFS on a single external USB drive (worst case, data corruption can happen on a non-clean dismount, and you would have to recreate the whole dataset)
2. you want to use ZFS on a single drive and you have no external drive for backups (why? when the zpool is not cleanly dismounted/exported, some data can get corrupted permanently and ZFS will have no other drive from which it can automatically recover valid data, unless you add a second drive of the same type and size for parity/redundancy)
3. you do not have hours of your time to learn the basics of ZFS management (this page covers the most basic things)

The majority of the following commands will work on all Linux distributions, though the first part of the tutorial uses Arch/Manjaro Linux packages and the pacman package manager. In case you are on Ubuntu, find a similar tutorial for that distribution. You need to discover whether your distribution ships a kernel with ZFS support and how to enable it.

Update and upgrade the system and reboot (in case a new kernel was installed since the last reboot).

A)
sudo pacman -S linux-latest-zfs
reboot
sudo /sbin/modprobe zfs
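
To check that the module really loaded (a quick verification, not part of the original steps; lsmod ships with every distribution):
lsmod | grep zfs
# non-empty output means the zfs module is loaded; nothing printed means it is not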

If modprobe does not work, remove the package with "sudo pacman -R linux-latest-zfs" and try method B:

B)
Discover installed kernels:
uname -r
pacman -Q | grep "^linux"

and install the matching zfs packages for them:
pacman -Ss zfs | grep -i linux
sudo pacman -S linux123-zfs
(replace "123" with your kernel version number from the previous commands, e.g. linux58-zfs for kernel 5.8)
pamac install zfs-dkms
reboot

# enable zfs support in the kernel (it was not enabled in 5.8.16-2-MANJARO after reboot, but once enabled by the following command it persists)
sudo /sbin/modprobe zfs
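
If the module should ever fail to auto-load after a reboot, it can be listed for loading at boot via the standard systemd modules-load.d mechanism (generic systemd behaviour, not ZFS-specific):
echo zfs | sudo tee /etc/modules-load.d/zfs.conf
# systemd runs modprobe at boot on every module listed in this directory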

===================================

Open the following two pages and search them for the parameters used in the commands below, to understand what each one does:
https://zfsonlinux.org/manpages/0.8.1/man8/zpool.8.html
https://zfsonlinux.org/manpages/0.8.1/man8/zfs.8.html

sudo smartctl -a /dev/sdb|grep -i "sector size"
Sector Sizes: 512 bytes logical, 4096 bytes physical
(smartctl is in package "smartmontools")
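
If smartmontools is not installed, lsblk from util-linux reports the same values (an alternative check, not from the original tutorial):
lsblk -o NAME,LOG-SEC,PHY-SEC /dev/sdb
# LOG-SEC = logical sector size, PHY-SEC = physical sector size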

It was suggested here https://forum.proxmox.com/threads/how-can-i-set-the-correct-ashift-on-zfs.58242/post-268384 to use the parameter ashift=12 in the following "zpool create" command for drives with a 4096-byte physical sector size, and ashift=13 for 8K physical sectors. If ashift is not defined, ZFS autodetects the sector size; I do not know how reliable that autodetection is.
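
To see which ashift a pool actually ended up with, zdb can print the pool configuration (assuming the pool is imported and present in the default cachefile; otherwise zdb may need extra options):
sudo zdb -C poolname | grep ashift
# "ashift: 12" means 2^12 = 4096-byte allocation blocks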

# attempt to create a pool named "poolname" on a HDD of choice (use a disk that stores no important data):
A) sudo zpool create -o ashift=12 -o feature@async_destroy=enabled -o feature@empty_bpobj=enabled -o feature@lz4_compress=enabled poolname /dev/disk/by-id/ID-HERE
(find the ID with: ls -l /dev/disk/by-id/)
or the same command, only the pool will be created across 2 physical drives (of the same size, otherwise the pool will not use all the space on the bigger drive) where one is used for redundancy (recommended, to reduce the risk of irreversible data corruption and to roughly double read performance):
B) sudo zpool create -o ashift=12 -o feature@async_destroy=enabled -o feature@empty_bpobj=enabled -o feature@lz4_compress=enabled poolname mirror /dev/disk/by-id/DRIVE1-ID-HERE /dev/disk/by-id/DRIVE2-ID-HERE
(for a 4-drive striped mirror it should be: zpool create poolname mirror drive1id drive2id mirror drive3id drive4id)
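
After creation, confirm the layout and health of the pool (zpool status is the standard check; "poolname" is the example name from above):
zpool status poolname
# shows the vdev tree (single disk or mirror), the ONLINE state and any errors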

Regarding the recordsize parameter used below: it was suggested in places like https://blog.programster.org/zfs-record-size , https://jrs-s.net/2019/04/03/on-zfs-recordsize/ and https://www.reddit.com/r/zfs/comments/8l20f5/zfs_record_size_is_smaller_really_better/ that for a drive holding large media files it is better to increase the record size from 128K to 512K, so I did that for my multimedia drive. Note that the zfs manual page linked above says this value is only a suggested block size and ZFS automatically adjusts block sizes to the usage pattern.
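
recordsize can also be checked and changed later on an existing dataset (standard zfs get/set; note that a change only affects newly written files):
zfs get recordsize poolname/data
sudo zfs set recordsize=512K poolname/data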

Creating two datasets, one encrypted and one not:
sudo zfs create -o compression=lz4 -o checksum=skein -o atime=off -o xattr=sa -o encryption=on -o keyformat=passphrase -o mountpoint=/e poolname/enc
sudo zfs create -o compression=lz4 -o checksum=skein -o atime=off -o xattr=sa -o encryption=off -o recordsize=512K -o mountpoint=/d poolname/data
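
To verify the properties took effect on both datasets (zfs get accepts a comma-separated property list and multiple datasets):
zfs get compression,checksum,encryption,recordsize poolname/enc poolname/data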

fix permissions:
sudo chown -R yourusername:yourusername /poolname /e /d

gracefully unmount the pools (I think this is necessary, otherwise the pool will be marked as suspended and a computer restart will be needed):
sudo zpool export -a

mount (import) the pools:
sudo zpool import -a

If some pool is encrypted, then an additional command is needed (the -l parameter prompts for the passphrase, otherwise it complains "encryption key not loaded"):
sudo zfs mount -a -l
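
The same can be done in two explicit steps, loading the keys first and mounting afterwards (equivalent standard zfs subcommands):
sudo zfs load-key -a
sudo zfs mount -a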

pool activity statistics:
zpool iostat -vlq

intent log statistics:
cat /proc/spl/kstat/zfs/zil

attach a new drive (if the existing one is a non-redundant single drive, the result will be a mirror (something like RAID1, with enhanced read performance, 1-drive fault tolerance and data self-healing); if the existing one is part of a mirror, it becomes a three-way mirror):
zpool attach poolname existingdrive newdrive
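
For example (hypothetical device IDs; use the by-id names as in the create commands above):
sudo zpool attach poolname ata-EXISTING-DRIVE-ID ata-NEW-DRIVE-ID
zpool status poolname
# while the new drive is being filled, status shows "resilver in progress" with a completion estimate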

To detach, remove or replace drives, see the manual page (man zpool) or https://zfsonlinux.org/manpages/0.8.1/man8/zpool.8.html

destroy (delete) dataset (no prompt):
sudo zfs destroy poolname/enc

destroy (delete) whole pool (no prompt):
sudo zpool destroy poolname
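
Before running a destructive command, zfs destroy supports a dry run (-n together with -v prints what would be deleted; zpool destroy has no such flag, so double-check the pool name):
sudo zfs destroy -nv poolname/enc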

========
If you are OK with HDD activity increasing at times when regular activity is low or none, then consider enabling automatic scrubbing (a kind of runtime "fsck" that checks files and can even repair them on replicated devices (mirror/raidz)). The following sets up a monthly task:
1. su
2. echo -e "[Unit]\nDescription=Monthly zpool scrub on %i\n\n[Timer]\nOnCalendar=monthly\nAccuracySec=1h\nPersistent=true\n\n[Install]\nWantedBy=timers.target" > /etc/systemd/system/zfs-scrub@.timer
3. echo -e "[Unit]\nDescription=zpool scrub on %i\n\n[Service]\nNice=19\nIOSchedulingClass=idle\nKillSignal=SIGINT\nExecStart=/usr/bin/zpool scrub %i" > /etc/systemd/system/zfs-scrub@.service
4. exit
5. sudo systemctl enable --now zfs-scrub@poolname.timer
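
To confirm the timer is active and to run a scrub by hand (standard systemd and zpool commands; "poolname" as above):
systemctl list-timers | grep scrub
sudo zpool scrub poolname
zpool status poolname
# status shows scrub progress and, when done, a line like "scrub repaired 0B ... with 0 errors"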

========
Another page worth reading: https://wiki.archlinux.org/index.php/ZFS