Feb 21, 2010

Benchmarks - raidz

Thank you for waiting! (Was no one waiting for me? Please don't say such an unkind thing.)

Today, I show you the results of a raidz configuration with zfs-fuse.

As you know, raidz is the ZFS equivalent of raid 5. zfs-fuse generates parity data when writing to the disks and verifies the parity when reading from them. Generally, this requires extra CPU power.
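For reference, here is a minimal sketch of how a raidz pool can be created with the zpool command (the pool name "tank" and the devices /dev/hdb, /dev/hdc and /dev/hdd are only placeholders, not my actual benchmark setup):

# zpool create tank raidz /dev/hdb /dev/hdc /dev/hdd
# zpool status tank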

1. write

[write benchmark graphs]

In these results, raidz is slower than the raid-0 configuration. That is very reasonable, because raid-0 does not need to generate parity data.
But wait a moment: raidz is a little faster than single-disk zfs.
I think the performance gained from multi-disk access outweighs the performance lost to generating parity data.
If you have three or more disk drives, the raidz configuration is a good choice: you get fast data access and reliability at once.

2. read

[read benchmark graphs]

For reads, the trend is the same as for writes.

If you have a high-performance CPU, you may get even higher data access performance.

Feb 11, 2010

Benchmarks - striping/multiple disk

In the previous benchmarks, I showed you results using one disk with multiple partitions. Today, I show results using multiple disks.

1. sequential write

[sequential write benchmark graphs]

In these results, striping across multiple disks is very effective for writes. If you want faster zfs-fuse performance, I recommend creating a multiple-disk pool.
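If you list two or more devices without the raidz keyword, zpool stripes the data across them. A minimal sketch (the pool name "tank" and the device names are placeholders, not my exact devices):

# zpool create tank /dev/hdb /dev/hdc /dev/hdd
# zpool list tank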

2. sequential read

[sequential read benchmark graphs]

Wow! It's great.

The results for the 2-disk and 3-disk pools are faster than the single-disk result.

Feb 7, 2010

Benchmarks - striping

Today, I show you benchmarks with a striping configuration.

This time, I use one physical disk and create 4 partitions on it. I will try striping across multiple physical disks in the future.
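Listing the four partitions on the zpool command line stripes the pool across them. A sketch under that assumption (the pool name "tank" and the partition names /dev/hda5 to /dev/hda8 are placeholders; this is not my exact partition layout):

# zpool create tank /dev/hda5 /dev/hda6 /dev/hda7 /dev/hda8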

1. sequential write

[sequential write benchmark graphs]

In these results, the striping overhead of zfs is very small.

2. sequential read

[sequential read benchmark graphs]

Oh, that is good. I cannot find any performance drop from striping.

Jan 31, 2010

First benchmark

It's a fine day in Tokyo, Japan!

Here is my first benchmark report.

1. sequential write

I measured sequential write speed with the following script.

for i in 0 1 2 3 4 5 6 7 8 9 ; do dd if=/dev/zero of=$i.dat bs=1M count=1024;sleep 60;done

And here are my results.

[write results: spreadsheet and graphs]

Oops! It only achieves about half the performance in my environment!

2. sequential read

I measured sequential read speed with the following script.

for i in 0 1 2 3 4 5 6 7 8 9 ; do dd of=/dev/null if=$i.dat bs=1M count=1024;sleep 60;done

And here are the results.

[read results: spreadsheet and graphs]

Oh, there is very little decrease in performance.

Jan 30, 2010

create & mount pool

Now you have finished the preparation to use zfs.

Let's create a zfs pool and mount it as a Linux file system!

note: you need a root account to execute the following operations.

1. start zfs-fuse

First of all, you should start the zfs-fuse daemon.

To start the zfs-fuse daemon:

# /usr/sbin/zfs-fuse

That's all.
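If you want to confirm the daemon is running, a simple check like this should be enough (just an illustration, not a required step):

# ps -ef | grep zfs-fuse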

2. create and mount the pool

To create a zfs pool from a device and mount it, you should use the zpool command.

# mkdir /share
# zpool create -m /share archive /dev/hdb

The above zpool command does the following work:
    (1) creates a pool named "archive" using the /dev/hdb device.
    (2) mounts that pool on the /share directory.
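If you want to confirm the result, something like this should work (a sketch using the same names as above):

# zpool status archive
# df -h /share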

Jan 20, 2010

ZFS commands - 2

If you have finished compiling ZFS/FUSE (I described how to compile it here), you have the ZFS and FUSE binaries under the src directory, like this:

src/
+- cmd/
|   +- zdb/
|   |  +- zdb*
|   +- zfs/
|   |  +- zfs*
|   +- zpool/
|   |  +- zpool*
|   +- ztest/
|      +- ztest*
+- zfs-fuse/
    +- zfs-fuse*

To use ZFS/FUSE, you should copy those binaries into a directory that contains system executables.

I copied those 5 binaries into /usr/sbin.
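From the top of the source tree, one way to copy all five at once is something like this (a sketch; adjust the path if you run it from a different directory):

# cp src/cmd/zdb/zdb src/cmd/zfs/zfs src/cmd/zpool/zpool src/cmd/ztest/ztest src/zfs-fuse/zfs-fuse /usr/sbin/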

Jan 16, 2010

ZFS commands

Before I explain how to make a ZFS pool, I show you the commands that manage the ZFS file system.

Usually, you need to type only two commands:
    zpool
    zfs

Today, I show all the options of the above two commands.
If you want to see this option list, just type the name of the command itself.
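For example, the two listings below are what you get simply by running the bare commands:

# zpool
# zfs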

zpool

usage: zpool command args ...
where 'command' is one of the following:

create [-fn] [-o property=value] ...
  [-O file-system-property=value] ...
  [-m mountpoint] [-R root] <pool> <vdev> ...
destroy [-f] <pool>
add [-fn] <pool> <vdev> ...
remove <pool> <device> ...
list [-H] [-o property[,...]] [pool] ...
iostat [-v] [pool] ... [interval [count]]
status [-vx] [pool] ...
online <pool> <device> ...
offline [-t] <pool> <device> ...
clear <pool> [device]
attach [-f] <pool> <device> <new-device>
detach <pool> <device>
replace [-f] <pool> <device> [new-device]
scrub [-s] <pool> ...
import [-d dir] [-D]
import [-o mntopts] [-o property=value] ...
  [-d dir | -c cachefile] [-D] [-f] [-R root] -a [-v]
import [-o mntopts] [-o property=value] ...
  [-d dir | -c cachefile] [-D] [-f] [-R root] <pool | id> [newpool]
export [-f] <pool> ...
upgrade
upgrade -v
upgrade [-V version] <-a | pool ...>
history [-il] [<pool>] ...
get <"all" | property[,...]> <pool> ...
set <property=value> <pool>

 

zfs

usage: zfs command args ...
where 'command' is one of the following:

create [-p] [-o property=value] ... <filesystem>
create [-ps] [-b blocksize] [-o property=value] ... -V <size> <volume>
destroy [-rRf] <filesystem|volume|snapshot>
snapshot [-r] [-o property=value] ... <filesystem@snapname|volume@snapname>
rollback [-rRf] <snapshot>
clone [-p] [-o property=value] ... <snapshot> <filesystem|volume>
promote <clone-filesystem>
rename <filesystem|volume|snapshot> <filesystem|volume|snapshot>
rename -p <filesystem|volume> <filesystem|volume>
rename -r <snapshot> <snapshot>
list [-rH][-d max] [-o property[,...]] [-t type[,...]] [-s property] ...
  [-S property] ... [filesystem|volume|snapshot] ...
set <property=value> <filesystem|volume|snapshot> ...
get [-rHp] [-d max] [-o field[,...]] [-s source[,...]]
  <"all" | property[,...]> [filesystem|volume|snapshot] ...
inherit [-r] <property> <filesystem|volume|snapshot> ...
upgrade [-v]
upgrade [-r] [-V version] <-a | filesystem ...>
userspace [-hniHp] [-o field[,...]] [-sS field] ... [-t type[,...]]
  <filesystem|snapshot>
groupspace [-hniHpU] [-o field[,...]] [-sS field] ... [-t type[,...]]
  <filesystem|snapshot>
mount
mount [-vO] [-o opts] <-a | filesystem>
unmount [-f] <-a | filesystem|mountpoint>
share <-a | filesystem>
unshare [-f] <-a | filesystem|mountpoint>
send [-R] [-[iI] snapshot] <snapshot>
receive [-vnF] <filesystem|volume|snapshot>
receive [-vnF] -d <filesystem>
allow <filesystem|volume>
allow [-ldug] <"everyone"|user|group>[,...] <perm|@setname>[,...] <filesystem|volume>
allow [-ld] -e <perm|@setname>[,...] <filesystem|volume>
allow -c <perm|@setname>[,...] <filesystem|volume>
allow -s @setname <perm|@setname>[,...] <filesystem|volume>
unallow [-rldug] <"everyone"|user|group>[,...]
  [<perm|@setname>[,...]] <filesystem|volume>
unallow [-rld] -e [<perm|@setname>[,...]] <filesystem|volume>
unallow [-r] -c [<perm|@setname>[,...]] <filesystem|volume>
unallow [-r] -s @setname [<perm|@setname>[,...]] <filesystem|volume>

Each dataset is of the form: pool/[dataset/]*dataset[@name]

For the property list, run: zfs set|get

For the delegated permission list, run: zfs allow|unallow