Solaris ZFS Command Line – Solaris Admin Reference

ZFS Command Line Quick Reference

The ZFS file system is a new kind of file system that fundamentally changes the way file systems are administered. Its key features are described below:

ZFS Pooled Storage

ZFS uses the concept of storage pools to manage physical storage. Historically, file systems were constructed on top of a single physical device. To address multiple devices and provide for data redundancy, the concept of a volume manager was introduced to present the representation of a single device, so that file systems would not need to be modified to take advantage of multiple devices. This design added another layer of complexity and ultimately prevented certain file system advances, because the file system had no control over the physical placement of data on the virtualized volumes.

ZFS eliminates volume management altogether. Instead of forcing you to create virtualized volumes, ZFS aggregates devices into a storage pool.

Transactional Semantics

ZFS is a transactional file system, which means that the file system state is always consistent on disk. In a transactional file system, data is managed using copy-on-write semantics. Data is never overwritten in place, and any sequence of operations is either entirely committed or entirely ignored. Thus, the file system can never be corrupted through accidental loss of power or a system crash. Although the most recently written pieces of data might be lost, the file system itself will always be consistent. In addition, synchronous data (written using the O_DSYNC flag) is always guaranteed to be written before returning, so it is never lost.
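As a minimal illustration (toy Python, not ZFS internals), copy-on-write transactional semantics can be sketched as building new blocks off to the side and publishing them with a single atomic pointer swap, so a crash mid-transaction leaves the old state untouched:

```python
# Toy sketch of copy-on-write, transactional updates: committed data is
# never modified in place; a transaction builds a new tree that shares
# unchanged blocks with the old one, and one pointer swap commits it.

class CowStore:
    def __init__(self):
        self.root = {}          # committed state: block name -> contents

    def transaction(self, updates):
        # Build a brand-new root sharing unchanged blocks with the old one.
        new_root = dict(self.root)
        new_root.update(updates)
        # The "commit" is a single atomic pointer swap; a crash before
        # this line leaves the previous root fully intact.
        self.root = new_root

store = CowStore()
store.transaction({"a": "v1", "b": "v1"})
store.transaction({"a": "v2"})   # only "a" is rewritten; "b" is shared
print(store.root)                # {'a': 'v2', 'b': 'v1'}
```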

Checksums and Self-Healing Data

With ZFS, all data and metadata is verified using a user-selectable checksum algorithm. In addition, ZFS provides for self-healing data. ZFS supports storage pools with varying levels of data redundancy. When a bad data block is detected, ZFS fetches the correct data from another redundant copy and repairs the bad data, replacing it with the correct data.
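The self-healing idea can be sketched in a few lines of toy Python (using CRC32 as a stand-in for ZFS's real checksum algorithms): verify each redundant copy against the stored checksum, return a good copy, and repair any copy that fails verification:

```python
# Toy illustration (not the ZFS on-disk format) of checksummed,
# self-healing reads from redundant copies.

import zlib

def read_self_healing(copies, stored_checksum):
    """Return data from the first copy whose checksum matches,
    repairing any copies that do not match."""
    good = next(c for c in copies if zlib.crc32(c) == stored_checksum)
    for i, c in enumerate(copies):
        if zlib.crc32(c) != stored_checksum:
            copies[i] = good          # self-heal the bad copy
    return good

data = b"important block"
checksum = zlib.crc32(data)
mirror = [b"corrupted!!!!!!", data]   # first copy silently corrupted
assert read_self_healing(mirror, checksum) == data
assert mirror[0] == data              # bad copy was repaired in place
```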

Unparalleled Scalability

ZFS is a 128-bit file system, allowing 256 quadrillion zettabytes of storage. All metadata is allocated dynamically, so there is no need to preallocate inodes or otherwise limit the scalability of the file system when it is first created. All the algorithms have been written with scalability in mind. Directories can have up to 2^48 (256 trillion) entries, and no limit exists on the number of file systems or the number of files that can be contained within a file system.
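The limits quoted above can be checked with straightforward arithmetic:

```python
# The directory-entry limit: 2**48 entries is roughly 256 trillion.
max_dir_entries = 2 ** 48
assert max_dir_entries == 281_474_976_710_656   # ~256 trillion

# Total addressable bytes in a 128-bit file system:
print(2 ** 128)   # 340282366920938463463374607431768211456
```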

ZFS Snapshots

A snapshot is a read-only copy of a file system or volume. Snapshots can be created quickly and easily. Initially, snapshots consume no additional disk space within the pool.

As data within the active dataset changes, the snapshot consumes disk space by continuing to reference the old data. As a result, the snapshot prevents the data from being freed back to the pool.
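A toy Python sketch (not ZFS's actual space accounting) of why a snapshot is free at creation and grows only as the live dataset diverges:

```python
# A snapshot initially shares every block with the live dataset.
live = {"b1": "old", "b2": "old"}
snapshot = dict(live)            # shares all blocks: zero extra space

# Copy-on-write: rewriting b1 creates a new version; the snapshot
# still references the old one.
live["b1"] = "new"

# Blocks the snapshot references that the live dataset no longer does
# cannot be freed back to the pool:
pinned = {k for k in snapshot if snapshot[k] != live.get(k)}
print(pinned)                    # {'b1'}
```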

Below is a quick reference for ZFS command-line operations.


Detach a disk from a mirrored pool –

#zpool detach prod c0t0d0

Delete a pool and all associated filesystems –

#zpool destroy prod

Create a pool named prod

#zpool create prod c0t0d0

Create a pool with a different default mount point

#zpool create -m /app/db prod c0t0d0


Create RAID-Z vdev / pool

#zpool create raid-pool-1 raidz c3t0d0 c3t1d0 c3t2d0

Add RAID-Z vdev to pool raid-pool-1

#zpool add raid-pool-1 raidz c4t0d0 c4t1d0 c4t2d0

Create a RAID-Z1 storage pool

#zpool create raid-pool-1 raidz1 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0


Create a RAID-Z2 storage pool

#zpool create raid-pool-1 raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0

Add a new mirrored vdev to a pool

#zpool add prod mirror c3t0d0 c3t1d0


Force the creation of a mirror and concat

#zpool create -f prod c3t0d0 mirror c4t1d0 c5t2d0


Force the creation of a mirror between two different sized disks

#zpool create -f mypool mirror c2t0d0 c4t0d0

diska is mirrored to diskb

#zpool create mypool mirror diska diskb

diska is mirrored to diskb AND diskc is mirrored to diskd

#zpool create mypool mirror diska diskb mirror diskc diskd


Create a filesystem named db in pool prod

#zfs create prod/db


Create a 5gb block device volume named db in pool prod

#zfs create -V 5gb prod/db

Destroy the filesystem or block device db and associated snapshot(s)

#zfs destroy -fr prod/db

Destroy all datasets in pool prod

#zfs destroy -r prod


Set the FS mount point to /app/db

#zfs set mountpoint=/app/db prod/db

Mount the ZFS filesystem db in pool prod

#zfs mount prod/db

Mount all ZFS filesystems

#zfs mount -a

Unmount all ZFS filesystems

#zfs umount -a

Unmount ZFS filesystem prod/db

#zfs umount prod/db


List all zfs filesystems

#zfs list

List all properties and settings for a filesystem

#zfs list -o all
#zfs get all mypool


Show status of only those pools with errors or other problems (healthy pools are summarized)

# zpool status -x


List detailed status of pool mypool

# zpool status -v mypool


List storage pools (brief output)

# zpool list

List only pool name and size

# zpool list -o name,size


List pool names without column headers

# zpool list -Ho name


Set a quota on the disk space available to the filesystem mypool/home/guest22

#zfs set quota=10G mypool/home/guest22


Reserve a specific amount of space for a filesystem

#zfs set reservation=10G mypool/prod/test

Enable mounting of a filesystem only through /etc/vfstab

# zfs set mountpoint=legacy mypool/db
then add the appropriate entry to /etc/vfstab.
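With a legacy mount point, ZFS no longer manages the mount itself; the filesystem is mounted like any other via /etc/vfstab. A sample entry (assuming the mypool/db dataset above and a hypothetical /app/db mount point, for illustration) might look like:

```
#device         device          mount     FS    fsck  mount   mount
#to mount       to fsck         point     type  pass  at boot options
mypool/db       -               /app/db   zfs   -     yes     -
```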


NFS share /prod/export/share

# zfs set sharenfs=on prod/export/share


Disable execution of files on /prod/export

# zfs set exec=off prod/export


Set the recordsize to 8k

# zfs set recordsize=8k prod/db

Do not update the file access time record

#zfs set atime=off prod/db/datafiles


Enable data compression

#zfs set compression=on prod/db

Enable fletcher4 type checksum

# zfs set checksum=fletcher4 prod/data

Hide the .zfs snapshot directory from the filesystem root

# zfs set snapdir=hidden prod/data


Display ZFS I/O statistics every 2 seconds

#zpool iostat 2

Display detailed (per-device) ZFS I/O statistics every 2 seconds

#zpool iostat -v 2


Scrub all filesystems in pool mypool

# zpool scrub mypool

Temporarily offline a disk (until the next reboot)

#zpool offline -t mypool c0t0d0


Clear error counts by onlining a disk

#zpool online mypool c0t0d0

Clear error counts (without the need to online a disk)

#zpool clear mypool


List pools available for import

#zpool import

Import all pools found in the search directories

#zpool import -a


Search for pools with block devices not located in /dev/dsk

#zpool import -d /zfs


Search for and import pool prod using block devices in /zfs

#zpool import -d /zfs prod


Import a pool originally named mypool under new name temp

#zpool import mypool temp

Import pool using pool ID

#zpool import 6789123456


Export (deport) a ZFS pool named mypool

#zpool export mypool

Force the unmount and export of ZFS pool mypool

#zpool export -f mypool


Create a snapshot named test of the db filesystem

#zfs snapshot mypool/db@test

List snapshots

#zfs list -t snapshot

Roll back to the tuesday snapshot (recursively destroying more recent intermediate snapshots)

#zfs rollback -r prod/prod@tuesday

Roll back, forcing unmount and remount as needed

#zfs rollback -rf prod/prod@tuesday

Destroy snapshot created earlier

#zfs destroy mypool/db@test


Create a snapshot and then clone that snap

#zfs snapshot prod/prod@12-11-06
#zfs clone prod/prod@12-11-06 prod/prod/clone


Destroy clone

#zfs destroy prod/prod/clone



