Description/Explanation of zfs sysctl variables in FreeBSD

I try to add an explanation for every variable. No guarantee of correctness; there is a lot of guessing involved, so please comment below to correct me if I'm wrong.

vfs.zfs.l2c_only_size
Amount of data only cached in the l2, not the arc.
vfs.zfs.mfu_ghost_data_lsize
The amount of data referenced by the mfu ghost list. Since this is a ghost list, the data itself is not part of the arc.
vfs.zfs.mfu_ghost_metadata_lsize
Same as above but for metadata.
vfs.zfs.mfu_ghost_size
vfs.zfs.mfu_ghost_data_lsize+vfs.zfs.mfu_ghost_metadata_lsize
vfs.zfs.mfu_data_lsize
Data used in the cache for mfu data.
vfs.zfs.mfu_metadata_lsize
Data used in the cache for mfu metadata.
vfs.zfs.mfu_size
This is the size in bytes used by the “most frequently used cache” (data and metadata).
vfs.zfs.mru_ghost_data_lsize
The amount of data referenced by the mru ghost list. Since this is a ghost list, the data itself is not part of the arc.
vfs.zfs.mru_ghost_metadata_lsize
Same as above but for metadata.
vfs.zfs.mru_ghost_size
vfs.zfs.mru_ghost_data_lsize+vfs.zfs.mru_ghost_metadata_lsize
vfs.zfs.mru_data_lsize
Data used in the cache for mru data.
vfs.zfs.mru_metadata_lsize
Data used in the cache for mru metadata.
vfs.zfs.mru_size
This is the size in bytes used by the “most recently used cache” (data and metadata).
vfs.zfs.anon_data_lsize
See vfs.zfs.anon_size; this is the data part.
vfs.zfs.anon_metadata_lsize
See vfs.zfs.anon_size; this is the metadata part.
vfs.zfs.anon_size
This is the amount of data in bytes in the cache used anonymously; these are bytes in the write buffer which are not yet synced to disk.
vfs.zfs.l2arc_norw
Don't read data from the l2 cache while writing to it.
vfs.zfs.l2arc_feed_again
Controls whether the l2arc is fed at a fixed interval of vfs.zfs.l2arc_feed_secs (set to 0),
or whether the interval is dynamically adjusted between vfs.zfs.l2arc_feed_min_ms and vfs.zfs.l2arc_feed_secs depending on the amount of data written (set to 1).
vfs.zfs.l2arc_noprefetch
This controls whether the zfs prefetcher (zfetch) reads data from the l2 arc when prefetching. It does not control whether prefetched data is cached in l2; it only controls whether the prefetcher uses the l2 arc to read from.
vfs.zfs.l2arc_feed_min_ms
Min time between l2 feeds (see vfs.zfs.l2arc_feed_again)
vfs.zfs.l2arc_feed_secs
Normal/maximum time between l2 feeds (see vfs.zfs.l2arc_feed_again)
vfs.zfs.l2arc_headroom
This value multiplied by vfs.zfs.l2arc_write_max gives the scanning range for the l2 feeder. The l2 feeder scans the tails of the 4 arc lists in the order mfu_meta, mru_meta, mfu_data and mru_data.
On each list the tail is scanned for data which is not yet in the l2 cache. The scan stops once vfs.zfs.l2arc_write_max worth of data has been found. If vfs.zfs.l2arc_write_max*vfs.zfs.l2arc_headroom bytes have been scanned without the new data exceeding l2arc_write_max, the next list is scanned.
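As a back-of-the-envelope sketch of that scanning range, assuming the classic defaults of 8 MiB for vfs.zfs.l2arc_write_max and 2 for vfs.zfs.l2arc_headroom (both assumptions; check the real values with sysctl on a live system):

```shell
# Assumed default values; read the real ones with
# `sysctl -n vfs.zfs.l2arc_write_max` etc.
write_max=$((8 * 1024 * 1024))   # vfs.zfs.l2arc_write_max: 8 MiB per feed
headroom=2                       # vfs.zfs.l2arc_headroom

# The feeder scans at most write_max * headroom bytes per arc list,
# looking for up to write_max bytes of not-yet-cached data.
scan_range=$((write_max * headroom))
echo "$scan_range"               # bytes scanned per arc list
```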
vfs.zfs.l2arc_write_boost
Write limit for the l2 feeder directly after boot (before the first arc eviction has happened).
vfs.zfs.l2arc_write_max
Write limit for the l2 feeder (see also vfs.zfs.l2arc_feed_again)
vfs.zfs.arc_meta_limit
Limits the amount of the arc which can be used by metadata.
vfs.zfs.arc_meta_used
Size of data in the arc used by metadata (mru and mfu).
vfs.zfs.arc_min
Minimum size the arc can shrink to.
vfs.zfs.arc_max
Maximum size the arc can grow to.
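Both are boot-time tunables, so they usually go into /boot/loader.conf. A sketch with made-up values (illustration only, not a recommendation):

```
# /boot/loader.conf -- example values only
vfs.zfs.arc_max="4G"     # arc may grow to at most 4 GB
vfs.zfs.arc_min="512M"   # arc will not shrink below 512 MB
```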
vfs.zfs.dedup.prefetch
Don’t know what this is for; sysctl says “Enable/disable prefetching of dedup-ed blocks which are going to be freed” ???
vfs.zfs.mdcomp_disable
Disable compression of metadata.
vfs.zfs.write_limit_override
Maximum amount of not-yet-written data (anon, dirty) in the cache. This setting overrides the dynamic size which is calculated by the write_limit options below and sets it to a fixed value instead.
vfs.zfs.write_limit_inflated
If vfs.zfs.write_limit_override is 0, this value is the maximum write limit which can be set dynamically. It is calculated by multiplying vfs.zfs.write_limit_max by 24. (If a lot of redundancy is used in a pool, 1 MB of data could result in 24 redundant MBs being written; 24 is the precalculated worst case.)
vfs.zfs.write_limit_max
This is used to derive vfs.zfs.write_limit_inflated; it is set to RAM / 2^vfs.zfs.write_limit_shift.
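A rough worked example of how these limits relate, using a made-up machine with 16 GiB of RAM and an assumed vfs.zfs.write_limit_shift of 3:

```shell
# Hypothetical machine; substitute real values from sysctl.
ram=$((16 * 1024 * 1024 * 1024))   # 16 GiB of RAM (made up)
shift_val=3                        # assumed vfs.zfs.write_limit_shift

write_limit_max=$((ram >> shift_val))           # RAM / 2^shift
write_limit_inflated=$((write_limit_max * 24))  # worst-case inflation
echo "$write_limit_max $write_limit_inflated"
```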
vfs.zfs.write_limit_min
Minimum write limit.
vfs.zfs.write_limit_shift
see vfs.zfs.write_limit_max
vfs.zfs.no_write_throttle
Disable the write throttle; applications can write at RAM speed until the write limit is reached, then writes stall completely until a new empty txg is available.
vfs.zfs.zfetch.array_rd_sz
This is the maximum amount of bytes the prefetcher will prefetch in advance.
vfs.zfs.zfetch.block_cap
This is the maximum amount of blocks the prefetcher will prefetch in advance.
vfs.zfs.zfetch.min_sec_reap
Not really sure.
vfs.zfs.zfetch.max_streams
The maximum number of streams a zfetch can handle; not sure if there could be multiple zfetches at work.
vfs.zfs.prefetch_disable
Disable the prefetch (zfetch) readahead feature.
vfs.zfs.mg_alloc_failures
Unclear; judging by the name, possibly the number of allowed allocation failures per metaslab group before the allocator switches to another group, rather than write errors taking a vdev offline.
vfs.zfs.check_hostid
Controls whether ‘zpool import’ checks that your hostid matches that of the system which last imported the pool (see the reply at the bottom of this page).
vfs.zfs.recover
Setting this to 1 tries to fix errors that would otherwise be fatal; don’t really know what kinds of errors we are talking about.
vfs.zfs.txg.synctime_ms
Try to keep txg commits shorter than this value by shrinking the amount of data a txg can hold; this works together with the write limit options above.
vfs.zfs.txg.timeout
Seconds between txg syncs (writes) to disk.
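This one is adjustable at runtime, so it can be changed on the fly with sysctl or persisted in /etc/sysctl.conf; the value below is purely illustrative:

```
# /etc/sysctl.conf -- example only; a longer interval batches more
# dirty data per txg at the cost of burstier writes
vfs.zfs.txg.timeout=10
```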
vfs.zfs.vdev.cache.bshift
This is a bit-shift value: read requests smaller than vfs.zfs.vdev.cache.max will read 2^vfs.zfs.vdev.cache.bshift bytes instead. (It doesn't take longer to get this amount, and we might get a benefit later if we have it in the vdev cache.)
vfs.zfs.vdev.cache.size
Size of the cache per vdev on the vdev level.
vfs.zfs.vdev.cache.max
See vfs.zfs.vdev.cache.bshift
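To make the bit shift concrete, here is the arithmetic with assumed defaults of bshift=16 and max=16384 (verify the actual values with sysctl on your system):

```shell
bshift=16        # assumed vfs.zfs.vdev.cache.bshift
cache_max=16384  # assumed vfs.zfs.vdev.cache.max

# Any read smaller than cache_max is rounded up to 2^bshift bytes.
inflated=$((1 << bshift))
echo "reads under $cache_max bytes become $inflated byte reads"
```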
vfs.zfs.vdev.write_gap_limit
Two writes are merged into one if the gap between them is at most this many bytes.
vfs.zfs.vdev.read_gap_limit
Two reads are merged into one if the gap between them is at most this many bytes.
vfs.zfs.vdev.aggregation_limit
Maximum size in bytes of a merged (aggregated) read/write; requests are only combined if the result stays below this limit.
vfs.zfs.vdev.ramp_rate
FreeBSD sysctl says: “Exponential I/O issue ramp-up rate”. You are kidding, right?
vfs.zfs.vdev.time_shift
?
vfs.zfs.vdev.min_pending
Minimum number of requests kept pending in the per-device queue; the I/O scheduler presumably ramps between this and vfs.zfs.vdev.max_pending.
vfs.zfs.vdev.max_pending
Maximum number of requests in the per-device queue.
vfs.zfs.vdev.bio_flush_disable
Probably disables sending BIO_FLUSH commands to the underlying devices (a guess based on the name).
vfs.zfs.cache_flush_disable
This disables the flushing of the disks' write caches (zfs normally flushes them to guarantee on-disk consistency); dangerous if the drives' caches are volatile.
vfs.zfs.zil_replay_disable
You can disable the replay of your zil logs; not sure why someone would want this instead of simply not writing a zil.
vfs.zfs.zio.use_uma
It has something to do with how memory is allocated: whether zfs allocates its I/O buffers through the kernel's UMA (slab) allocator.
vfs.zfs.snapshot_list_prefetch
Prefetch data when listing snapshots (speeds up snapshot listing).
vfs.zfs.version.zpl
Maximum zfs version supported
vfs.zfs.version.spa
Maximum zpool version supported
vfs.zfs.version.acl
?
vfs.zfs.debug
Set zfs debug level.
vfs.zfs.super_owner
The user with this user-id may perform administrative operations on filesystems he owns without being root.
kstat.zfs.misc.xuio_stats.onloan_read_buf
kstat.zfs.misc.xuio_stats.onloan_write_buf
kstat.zfs.misc.xuio_stats.read_buf_copied
kstat.zfs.misc.xuio_stats.read_buf_nocopy
kstat.zfs.misc.xuio_stats.write_buf_copied
kstat.zfs.misc.xuio_stats.write_buf_nocopy
kstat.zfs.misc.zfetchstats.hits
Counts the number of cache hits to items which are in the cache because of the prefetcher.
kstat.zfs.misc.zfetchstats.misses
kstat.zfs.misc.zfetchstats.colinear_hits
Counts the number of cache hits to items which are in the cache because of the prefetcher (prefetched linear reads).
kstat.zfs.misc.zfetchstats.colinear_misses
kstat.zfs.misc.zfetchstats.stride_hits
Counts the number of cache hits to items which are in the cache because of the prefetcher (prefetched stride reads).
http://en.wikipedia.org/wiki/Stride_of_an_array
kstat.zfs.misc.zfetchstats.stride_misses
kstat.zfs.misc.zfetchstats.reclaim_successes
kstat.zfs.misc.zfetchstats.reclaim_failures
kstat.zfs.misc.zfetchstats.streams_resets
kstat.zfs.misc.zfetchstats.streams_noresets
kstat.zfs.misc.zfetchstats.bogus_streams
kstat.zfs.misc.arcstats.hits
Total amount of cache hits in the arc.
kstat.zfs.misc.arcstats.misses
Total amount of cache misses in the arc.
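These two counters are what the arc hit ratio is usually derived from. A minimal sketch with made-up counter values (on a live system, substitute the output of `sysctl -n kstat.zfs.misc.arcstats.hits` and `.misses`):

```shell
# Made-up sample counters; replace with the real kstat values.
hits=900000
misses=100000

ratio=$((100 * hits / (hits + misses)))
echo "arc hit ratio: ${ratio}%"
```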
kstat.zfs.misc.arcstats.demand_data_hits
Amount of cache hits for demand data; this is what matters (is good) for your application/share.
kstat.zfs.misc.arcstats.demand_data_misses
Amount of cache misses for demand data; this is what matters (is bad) for your application/share.
kstat.zfs.misc.arcstats.demand_metadata_hits
Amount of cache hits for demand metadata; this matters (is good) for getting filesystem data (ls, find, …).
kstat.zfs.misc.arcstats.demand_metadata_misses
Amount of cache misses for demand metadata; this matters (is bad) for getting filesystem data (ls, find, …).
kstat.zfs.misc.arcstats.prefetch_data_hits
The zfs prefetcher tried to prefetch something, but it was already cached (boring).
kstat.zfs.misc.arcstats.prefetch_data_misses
The zfs prefetcher prefetched something which was not in the cache (good job; it could become a demand hit in the future).
kstat.zfs.misc.arcstats.prefetch_metadata_hits
Same as above, but for metadata
kstat.zfs.misc.arcstats.prefetch_metadata_misses
Same as above, but for metadata
kstat.zfs.misc.arcstats.mru_hits
Cache hit in the “most recently used cache”; we move this to the mfu cache.
kstat.zfs.misc.arcstats.mru_ghost_hits
Cache hit in the “most recently used ghost list”: we had this item in the cache but evicted it; maybe we should increase the mru cache size.
kstat.zfs.misc.arcstats.mfu_hits
Cache hit in the “most frequently used cache”; we move this to the beginning of the mfu cache.
kstat.zfs.misc.arcstats.mfu_ghost_hits
Cache hit in the “most frequently used ghost list”: we had this item in the cache but evicted it; maybe we should increase the mfu cache size.
kstat.zfs.misc.arcstats.allocated
New data is written to the cache.
kstat.zfs.misc.arcstats.deleted
Old data is evicted (deleted) from the cache.
kstat.zfs.misc.arcstats.stolen
kstat.zfs.misc.arcstats.recycle_miss
kstat.zfs.misc.arcstats.mutex_miss
kstat.zfs.misc.arcstats.evict_skip
kstat.zfs.misc.arcstats.evict_l2_cached
We evicted something from the arc, but it is still cached in the l2 if we need it.
kstat.zfs.misc.arcstats.evict_l2_eligible
We evicted something from the arc, and it’s not in the l2; this is sad. (Maybe we didn’t have enough time to store it there.)
kstat.zfs.misc.arcstats.evict_l2_ineligible
We evicted something which cannot be stored in the l2.
Reasons could be:
We have multiple pools and evicted something from a pool without an l2 device.
The zfs property secondarycache disallows it.
kstat.zfs.misc.arcstats.hash_elements
kstat.zfs.misc.arcstats.hash_elements_max
kstat.zfs.misc.arcstats.hash_collisions
kstat.zfs.misc.arcstats.hash_chains
kstat.zfs.misc.arcstats.hash_chain_max
kstat.zfs.misc.arcstats.p
kstat.zfs.misc.arcstats.c
Arc target size, this is the size the system thinks the arc should have.
kstat.zfs.misc.arcstats.c_min
kstat.zfs.misc.arcstats.c_max
kstat.zfs.misc.arcstats.size
Total size of the arc.
kstat.zfs.misc.arcstats.hdr_size
kstat.zfs.misc.arcstats.data_size
kstat.zfs.misc.arcstats.other_size
kstat.zfs.misc.arcstats.l2_hits
Hits to the L2 cache. (It was not in the arc, but in the l2 cache)
kstat.zfs.misc.arcstats.l2_misses
Miss to the L2 cache. (It was not in the arc, and not in the l2 cache)
kstat.zfs.misc.arcstats.l2_feeds
kstat.zfs.misc.arcstats.l2_rw_clash
kstat.zfs.misc.arcstats.l2_read_bytes
kstat.zfs.misc.arcstats.l2_write_bytes
kstat.zfs.misc.arcstats.l2_writes_sent
kstat.zfs.misc.arcstats.l2_writes_done
kstat.zfs.misc.arcstats.l2_writes_error
kstat.zfs.misc.arcstats.l2_writes_hdr_miss
kstat.zfs.misc.arcstats.l2_evict_lock_retry
kstat.zfs.misc.arcstats.l2_evict_reading
kstat.zfs.misc.arcstats.l2_free_on_write
kstat.zfs.misc.arcstats.l2_abort_lowmem
kstat.zfs.misc.arcstats.l2_cksum_bad
kstat.zfs.misc.arcstats.l2_io_error
kstat.zfs.misc.arcstats.l2_size
Size of the l2 cache.
kstat.zfs.misc.arcstats.l2_hdr_size
Size of the metadata in the arc (RAM) used to manage the l2 cache (to look up whether something is in the l2).
kstat.zfs.misc.arcstats.memory_throttle_count
kstat.zfs.misc.arcstats.l2_write_trylock_fail
kstat.zfs.misc.arcstats.l2_write_passed_headroom
kstat.zfs.misc.arcstats.l2_write_spa_mismatch
kstat.zfs.misc.arcstats.l2_write_in_l2
kstat.zfs.misc.arcstats.l2_write_io_in_progress
kstat.zfs.misc.arcstats.l2_write_not_cacheable
kstat.zfs.misc.arcstats.l2_write_full
kstat.zfs.misc.arcstats.l2_write_buffer_iter
kstat.zfs.misc.arcstats.l2_write_pios
kstat.zfs.misc.arcstats.l2_write_buffer_bytes_scanned
kstat.zfs.misc.arcstats.l2_write_buffer_list_iter
kstat.zfs.misc.arcstats.l2_write_buffer_list_null_iter
kstat.zfs.misc.vdev_cache_stats.delegations
kstat.zfs.misc.vdev_cache_stats.hits
Hits to the vdev (device level) cache.
kstat.zfs.misc.vdev_cache_stats.misses
Misses to the vdev (device level) cache.

5 Replies to “Description/Explanation of zfs sysctl variables in FreeBSD”

  1. I hope you're okay with me copy/pasting this to my site; I give you full credit and tell everyone to come visit your site!
    I just want a copy on my site so I can easily find the information, and also just in case your site is ever down I can look it up there. Or if mine is down, people can look it up here. I tell everyone to go to your site for the most up-to-date info.

  2. Here is the link to my copy: http://ram.kossboss.com/zfs-vars/

  3. vfs.zfs.check_hostid controls if ‘zpool import’ checks that your hostid is the same as that of the system which last imported the pool. If you’ve ever imported your pool off a livecd or something, you’ll notice the message ‘this pool might be in use by another computer, are you sure you want to import it?’; this is the hostid check.

    The reason it does this is that if you were using a SAN or something with multipath, you might actually be trying to import a pool that is in use by another system, and doing so would cause all kinds of bad things to happen.
