Feb 21, 2010

Benchmarks - raidz

Thank you for waiting! (No one was waiting for me? Please don't say such an unkind thing.)

Today, I show you the results of the raidz configuration on zfs-fuse.

As you know, the concept of raidz is the same as RAID-5, applied to zfs. zfs-fuse generates parity data when writing data to the disks and checks the parity when reading from the disks. Generally, this requires CPU power.

1. write

(images: raidz write benchmark graphs)

In this result, raidz is slower than the raid-0 configuration. That is very reasonable, because raid-0 does not need to generate parity data.
But wait a moment: raidz is a little bit faster than single-disk zfs.
I think the performance gain from multi-disk access is larger than the performance loss from generating parity data.
If you have 3 or more disk drives, the raidz configuration is good for you: you get fast data access and reliability at once.
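Such a raidz pool is created with one zpool command. Here is a minimal sketch, assuming you have three spare disks named /dev/sdb, /dev/sdc and /dev/sdd (hypothetical names; your devices will differ):

```shell
# Create a raidz (RAID-5-like) pool named "tank" from three whole disks.
# WARNING: this destroys any existing data on the listed devices.
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd

# Check the layout and health of the new pool.
zpool status tank
```

If one of the three disks fails, the pool keeps working and the data can be rebuilt from parity.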

2. read

(images: raidz read benchmark graphs)

For reading data, the trend is the same as for writing.

If you have a high-performance CPU, you may get even higher data access capability.

Feb 11, 2010

Benchmarks - striping/multiple disk

In the previous benchmarks, I showed you the results using one disk with multiple partitions. Today, I show the results using multiple disks.

1. sequential write

(images: sequential write benchmark graphs)

In this result, striping is very effective for writing across multiple disks. If you want to use zfs-fuse faster, I recommend creating a multiple-disk pool.
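A minimal sketch of creating such a striped multiple-disk pool, assuming two spare disks /dev/sdb and /dev/sdc (hypothetical names; adjust to your environment):

```shell
# Without a raidz or mirror keyword, zpool stripes data across all listed devices.
# WARNING: this destroys any existing data on the listed devices.
zpool create -m /share archive /dev/sdb /dev/sdc

# Per-device I/O statistics show the stripe in action.
zpool iostat -v archive
```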

2. sequential read

(images: sequential read benchmark graphs)

Wow! It's great.

The results using a pool of 2 or 3 disks are faster than the single-disk result.

Feb 7, 2010

Benchmarks - striping

Today, I show you the benchmarks with striping configuration.

This time, I use one physical disk and make 4 partitions on that disk. I will try striping with multiple physical disks in the future.
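A minimal sketch of this configuration, assuming the four partitions are /dev/hdb1 through /dev/hdb4 (hypothetical names; adjust to your environment):

```shell
# Create a striped pool from four partitions of the same physical disk.
# WARNING: this destroys any existing data on the listed partitions.
zpool create testpool /dev/hdb1 /dev/hdb2 /dev/hdb3 /dev/hdb4

# Confirm that all four partitions joined the pool.
zpool status testpool
```

Note that striping over partitions of one physical disk gives no real parallelism; this setup only measures the software overhead of striping.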

1. sequential write

(images: sequential write benchmark graphs)

In these results, the striping overhead of zfs is very small.

2. sequential read

(images: sequential read benchmark graphs)

Oh, it is good. I cannot find any performance drop from striping.

Jan 31, 2010

First benchmark

It's a fine day in Tokyo, Japan!

Here is my first benchmark report.

1. sequential write

I measure sequential write speed with the following script.

for i in 0 1 2 3 4 5 6 7 8 9 ; do dd if=/dev/zero of=$i.dat bs=1M count=1024;sleep 60;done

And here are my results.

(images: write results sheet and graphs)

Oops! It is only half the performance in my environment!

2. sequential read

I use the following script.

for i in 0 1 2 3 4 5 6 7 8 9 ; do dd of=/dev/null if=$i.dat bs=1M count=1024;sleep 60;done

Here are the results.

(images: read results sheet and graphs)

Oh, the decrease in performance is very small.

Jan 30, 2010

create & mount pool

Now, you have finished the preparation to use zfs.

Let's create and mount the zfs pool as Linux file system!

note: you need the root account to execute the following operations.

1. start zfs-fuse

First of all, you should start the zfs-fuse daemon.

To start the zfs-fuse daemon:

# /usr/sbin/zfs-fuse

That's all.

2. create and mount the pool

To create a pool from a device and mount it, you use the zpool command.

# mkdir /share
# zpool create -m /share archive /dev/hdb

The above zpool command does the following work:
    (1) creates a pool named "archive" using the /dev/hdb device.
    (2) mounts that pool on the /share directory.
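After creating the pool, you can check that it is healthy and mounted. A minimal sketch, using the pool name "archive" and the mountpoint /share from the example above:

```shell
# Show the health and layout of the pool.
zpool status archive

# Show the capacity and usage of the pool.
zpool list archive

# Confirm that the pool is mounted on /share.
df -h /share
```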

Jan 20, 2010

ZFS commands - 2

If you have finished compiling ZFS/FUSE (I described how to compile it here), you have the ZFS and FUSE binaries under the src directory, like this:

src/
+- cmd/
|   +- zdb/
|   |  +- zdb*
|   +- zfs/
|   |  +- zfs*
|   +- zpool/
|   |  +- zpool*
|   +- ztest/
|      +- ztest*
+- zfs-fuse/
    +- zfs-fuse*

To use ZFS/FUSE, you should copy those binaries into a directory which contains system executable modules.

I copied those 5 binaries into /usr/sbin.
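The copy can be done like this. A minimal sketch, run as root from the src directory, assuming /usr/sbin is your destination:

```shell
# Copy the five ZFS/FUSE binaries into /usr/sbin and make them executable.
install -m 755 cmd/zdb/zdb cmd/zfs/zfs cmd/zpool/zpool cmd/ztest/ztest zfs-fuse/zfs-fuse /usr/sbin/
```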

Jan 16, 2010

ZFS commands

Before I explain how to make a ZFS pool, I show you the commands which manage the ZFS file system.

Usually, you need to type only two commands:
    zpool
    zfs

Today, I show all the options of the above two commands.
If you want to show this option list, you just type the name of the command itself.

zpool

usage: zpool command args ...
where 'command' is one of the following:

create [-fn] [-o property=value] ...
  [-O file-system-property=value] ...
  [-m mountpoint] [-R root] <pool> <vdev> ...
destroy [-f] <pool>
add [-fn] <pool> <vdev> ...
remove <pool> <device> ...
list [-H] [-o property[,...]] [pool] ...
iostat [-v] [pool] ... [interval [count]]
status [-vx] [pool] ...
online <pool> <device> ...
offline [-t] <pool> <device> ...
clear <pool> [device]
attach [-f] <pool> <device> <new-device>
detach <pool> <device>
replace [-f] <pool> <device> [new-device]
scrub [-s] <pool> ...
import [-d dir] [-D]
import [-o mntopts] [-o property=value] ...
  [-d dir | -c cachefile] [-D] [-f] [-R root] -a [-v]
import [-o mntopts] [-o property=value] ...
  [-d dir | -c cachefile] [-D] [-f] [-R root] <pool | id> [newpool]
export [-f] <pool> ...
upgrade
upgrade -v
upgrade [-V version] <-a | pool ...>
history [-il] [<pool>] ...
get <"all" | property[,...]> <pool> ...
set <property=value> <pool>

 

zfs

usage: zfs command args ...
where 'command' is one of the following:

create [-p] [-o property=value] ... <filesystem>
create [-ps] [-b blocksize] [-o property=value] ... -V <size> <volume>
destroy [-rRf] <filesystem|volume|snapshot>
snapshot [-r] [-o property=value] ... <filesystem@snapname|volume@snapname>
rollback [-rRf] <snapshot>
clone [-p] [-o property=value] ... <snapshot> <filesystem|volume>
promote <clone-filesystem>
rename <filesystem|volume|snapshot> <filesystem|volume|snapshot>
rename -p <filesystem|volume> <filesystem|volume>
rename -r <snapshot> <snapshot>
list [-rH][-d max] [-o property[,...]] [-t type[,...]] [-s property] ...
  [-S property] ... [filesystem|volume|snapshot] ...
set <property=value> <filesystem|volume|snapshot> ...
get [-rHp] [-d max] [-o field[,...]] [-s source[,...]]
  <"all" | property[,...]> [filesystem|volume|snapshot] ...
inherit [-r] <property> <filesystem|volume|snapshot> ...
upgrade [-v]
upgrade [-r] [-V version] <-a | filesystem ...>
userspace [-hniHp] [-o field[,...]] [-sS field] ... [-t type[,...]]
  <filesystem|snapshot>
groupspace [-hniHpU] [-o field[,...]] [-sS field] ... [-t type[,...]]
  <filesystem|snapshot>
mount
mount [-vO] [-o opts] <-a | filesystem>
unmount [-f] <-a | filesystem|mountpoint>
share <-a | filesystem>
unshare [-f] <-a | filesystem|mountpoint>
send [-R] [-[iI] snapshot] <snapshot>
receive [-vnF] <filesystem|volume|snapshot>
receive [-vnF] -d <filesystem>
allow <filesystem|volume>
allow [-ldug] <"everyone"|user|group>[,...] <perm|@setname>[,...] <filesystem|volume>
allow [-ld] -e <perm|@setname>[,...] <filesystem|volume>
allow -c <perm|@setname>[,...] <filesystem|volume>
allow -s @setname <perm|@setname>[,...] <filesystem|volume>
unallow [-rldug] <"everyone"|user|group>[,...]
  [<perm|@setname>[,...]] <filesystem|volume>
unallow [-rld] -e [<perm|@setname>[,...]] <filesystem|volume>
unallow [-r] -c [<perm|@setname>[,...]] <filesystem|volume>
unallow [-r] -s @setname [<perm|@setname>[,...]] <filesystem|volume>

Each dataset is of the form: pool/[dataset/]*dataset[@name]

For the property list, run: zfs set|get

For the delegated permission list, run: zfs allow|unallow
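As a small taste of the zfs command, here is a minimal sketch of creating a dataset and a snapshot, assuming a pool named "archive" already exists (the pool and dataset names are just examples):

```shell
# Create a child file system (dataset) inside the pool.
zfs create archive/photos

# Take a read-only snapshot of the dataset.
zfs snapshot archive/photos@backup1

# List the file systems and snapshots in the pool.
zfs list -t filesystem,snapshot
```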

Jan 13, 2010

How to install ZFS for Linux

Here you go.

If you can use "apt-get" or "yum", installing ZFS on a Linux system is very easy.

 

1. GET LIBRARIES

First of all, install the libraries and SCons.
SCons is an open-source software build tool. SCons substitutes for the traditional "make".

USE apt-get:

# apt-get install libaio-dev libattr1-dev libacl1-dev libz-dev libfuse-dev libfuse2 scons

USE yum:

# yum install -y fuse-devel libattr-devel libaio-devel libacl-devel zlib-devel scons

Oh, in my environment, Vine Linux, it is a little bit different, like this:

# apt-get install fuse-devel libattr-devel libaio-devel libacl-devel zlib-devel scons

 

2. GET ZFS TARBALL

You download the ZFS tarball from the ZFS homepage; wget is a useful command to get it, like this:

# wget http://zfs-fuse.net/releases/0.6.0/zfs-fuse-0.6.0.tar.bz2

After downloading, uncompress the tarball.

# bzip2 -d zfs-fuse-0.6.0.tar.bz2
# tar xvf zfs-fuse-0.6.0.tar

note:
As you know, the "0.6.0" part of the filename is the version number, and the above filename is the current version as of today. In the future it may change; then you should change it to fit the latest (or your desired) version.

3. COMPILE

Compile is very easy. Just two commands.

# cd zfs-fuse-0.6.0/src
# scons

That's all. If you compile successfully, you get the ZFS binaries.

Next, I will show you how to use ZFS binaries on Linux system.

Jan 10, 2010

My test environment

Today, I introduce my test environment.

 

Motherboard : P4M80-M4 produced by BIOSTAR
CPU : Intel Pentium 4 2.4GHz
Memory : 768MB
OS : Vine Linux 5.0 (Japanese/English)

 

Vine Linux is one of the Japanese Linux distributions. It is based on Red Hat Linux.

Why Vine? Why not CentOS/Debian/etc.?

Umm,,, there is no special reason.

When I tried to learn Linux, Vine Linux (oh, version 2.6!) was a very good distribution. I could install the whole system from 1 CD onto only a 4GB disk with 256MB of memory. And its localization was complete, so I could use Japanese!

Ever since, I have used Vine Linux to test things and learn a lot.

Of course, you may use another Linux distribution to test ZFS.
I try to write my blog for other Linux users as much as I can.

Jan 8, 2010

ZFS Disadvantage

ZFS is very excellent; there is scarcely any disadvantage.

1. CPU POWER EXPENSE

Because it is software RAID, ZFS expends CPU power. But it is very little. I executed ZFS on a 1.7GHz Celeron with 256MB of memory, and I had no stress.

2. NO ENCRYPTION

Today, security is a very high-priority issue. It is a very safe solution when the file system has data encryption in itself.
Current ZFS does not support an encryption function. But don't mind, the implementation is in progress. I believe that ZFS will support the encryption option in the very near future.
If you need encryption now, you can use TrueCrypt.

(Oh my god! TrueCrypt DOES NOT support Solaris! If you know a good and free encryption solution for Solaris, please inform me.)

Jan 7, 2010

ZFS Advantage

I'm not a Sun Microsystems salesperson, so I cannot explain the detailed and concrete advantages of ZFS. Here is my impression.

1. EASY MANAGEMENT

If you have experience with UNIX/Linux management, it is easy to understand how to manage ZFS.
You need very few management commands. Maybe only one command, the "zfs" command, is enough to create/add/destroy/check the status of/etc. the ZFS.

2. EASY SCALE-UP

If you want to increase the disk capacity, you need only the following steps:
   (1) add an HDD to your computer.
   (2) add the HDD device into your ZFS pool.
That's all. How easy! Of course you DON'T lose any of your files that are saved in the current pool.
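Step (2) above is a single command. A minimal sketch, assuming your pool is named "archive" and the new disk appears as /dev/sdd (hypothetical names):

```shell
# Grow the pool by adding a new device; existing data stays intact.
zpool add archive /dev/sdd

# The new capacity is available immediately.
zpool list archive
```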

3. RELIABILITY

ZFS is designed very carefully. I heard it is a very rare case to lose a file even if the system crashes.
Sun Microsystems provides storage products, e.g. the Sun Fire X4540 Server, based on ZFS with Solaris 10/x86-64.

4. CHARGE FREE

If you use ZFS on Solaris 10, you need no money for a ZFS license. It is very helpful for private use. Of course, commercial use is the same. Wow! How wonderful it is!

I believe there are more advantages, but this is enough to select ZFS for your private disk server system.

Jan 4, 2010

What is ZFS ?

ZFS (Zettabyte File System) is one of the file systems provided by Sun Microsystems. It is a very modern, powerful file system design, and you can manage devices easily.

Basically, ZFS is for the Solaris and OpenSolaris OS. Unfortunately, at the present time, it is not ported to the Linux kernel, because the ZFS license is not compatible with the Linux license.

If you want to know "what is ZFS", I suggest you read the Sun Microsystems home page. Sun Microsystems publishes the Solaris OS manuals on its web site. It is a very good text to learn the ZFS architecture. The ZFS Administration Guide (English site) is here. You can change the language if you want to.

Roughly, ZFS is a software RAID manager. So, you can get RAID devices in your computers.

For example, if you have three HDDs, 20GB, 40GB and 80GB, and you need 100GB of disk space, you can create it from those three HDDs with one command!!! It is called a "pool": the physical disk devices are managed in the pool, and the pool is mounted onto a directory.
Of course, ZFS supports redundancy. You can get the equivalent of RAID-5, and even RAID-6! It is called RAID-Z.

That's all? Not yet.

You can combine two devices into one mirrored pool; then you get the equivalent of RAID-1.
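A minimal sketch of both ideas, assuming three spare disks /dev/sdb, /dev/sdc and /dev/sdd (hypothetical names):

```shell
# Pool the capacity of three disks of different sizes into one big device.
zpool create bigpool /dev/sdb /dev/sdc /dev/sdd

# Or create a RAID-1-equivalent mirror from two disks instead.
zpool create safepool mirror /dev/sdb /dev/sdc
```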

Oops! But then you need to install the Solaris OS on your computer!

But there is the FUSE (Filesystem in Userspace) project. Many people try to port many file systems to various OSes that do not support those file systems originally.

Using FUSE, we can get ZFS on Linux.

Jan 3, 2010

Why ZFS?

First of all, I explain why I decided to try ZFS.

I have a few servers. Linux, Solaris and Windows (not the server version) are installed on them. And I have some small HDDs (under 40GB). If I want to get a big drive, I should use Windows NTFS RAID or Linux/Solaris software RAID. But software RAID, except ZFS, cannot grow the capacity of its device after the device is made. At that time, I would need to move all the files to other drives/computers, re-create the new drive with the added disk, and move all the files back from those drives/computers. Oh! A lot of time....

At first, I tried to use mdadm, the Linux software RAID manager. But when I created a RAID-0 with mdadm, I could not add a new disk into the current RAID-0 device with the mdadm command.

Perhaps I made a mistake, but I could not find it.

I have experience with the ZFS file system on Solaris/x86, and it is very easy to manage the disks. So, I want to use the ZFS file system on my Linux servers.

Jan 2, 2010

Hello, everyone

I will try to show my experience of how to use the "ZFS" file system on Linux.
If my blog helps you and your Linux life, it is my pleasure.

Enjoy Linux life!