Posts

Showing posts with the label ZFS

[Level 3] ZFS Evil Tuning on OpenSolaris

I saw a good article about ZFS tuning; the link is as follows: http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide

Wish this helps.
regards,
Stanley Huang

[Level 3] Clone ZFS Permissions

One day, someone asked me how to clone ZFS permissions from one ZFS file system to another. The zfs command does not have anything like getfacl/setfacl for cloning delegated permissions. Therefore I tried to write a script to implement that; please refer to the following code.

#!/usr/bin/bash
showUsage() {
  cat <<EOF
Usage:
  $0 source_zfs target_zfs
Ex.
  $0 rpool/fs1 karajon/fs2
EOF
}

##################################### main
sSZFS=$1
sTZFS=$2
declare -i fPermSet=0
declare -i fLocalPerm=0
zfs allow $sSZFS | while read s
do
  ( echo $s | grep "^---- Permissions on " > /dev/null ) && continue
  ( echo $s | grep "^Permission sets:$" > /dev/null ) && fPermSet=1 && fLocalPerm=0 && continue
  ( echo $s | grep "^Local+Descendent permissions:$" > /dev/null ) && fLocalPerm=1 && fPermSet=0 && continue
  if [ $fPermSet -eq 1 ] ...
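The section-flag logic above can be tried on its own. Below is a minimal sketch (not the full script) that classifies each line of `zfs allow` output as a permission-set entry or a local+descendent entry; the sample input in the here-document is illustrative and stands in for real `zfs allow` output.

```shell
#!/usr/bin/env bash
# Sketch: classify `zfs allow` output lines using the same two flags as
# the script above. The sample input below is made up for illustration.
parse_allow() {
  local fPermSet=0 fLocalPerm=0 s
  while IFS= read -r s; do
    case "$s" in
      "---- Permissions on "*)         continue ;;
      "Permission sets:")              fPermSet=1; fLocalPerm=0; continue ;;
      "Local+Descendent permissions:") fLocalPerm=1; fPermSet=0; continue ;;
    esac
    [ "$fPermSet" -eq 1 ]   && echo "SET:  $s"
    [ "$fLocalPerm" -eq 1 ] && echo "LOCAL:$s"
  done
}

parse_allow <<'EOF'
---- Permissions on rpool/fs1 ----------------------------------------
Permission sets:
    @myset clone,snapshot
Local+Descendent permissions:
    user alice create,mount
EOF
```

From here, a real clone script would replay the collected entries onto the target with `zfs allow`.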

[Level 3] Script for cloning VirtualBox 3.1.0 VMs

Previously, I wrote a script to clone VirtualBox VMs with ZFS technology. After the new VirtualBox release, the architecture and commands changed, so the previous script raises errors when run. Therefore, I modified the script for VirtualBox 3.1.0 and later.

#!/usr/bin/bash
showUsage() {
  cat <<EOF
Usage:
  $0 [c]reate/[d]elete New_Machine_Name Source_Machine_Name interface begin_index end_index
Ex.
  $0 [c]reate OS128a_MySQL OS128a yge0 1 10
  $0 [d]elete OS128a_MySQL OS128a yge0 1 10
  $0 [d]elete OS128a_MySQL OS128a yge0
VMMachine:
`showVMS | sed -e 's/^/  /'`
EOF
}

## Usage:
##   setUUID vdi_file
## Ex.
##   setUUID /Karajon/VBoxes/Guest/guest.vdi
setUUID() {
  sVDI=$1
  VBoxManage -q internalcommands setvdiuuid $sVDI
}

## Usage:
##   createVM vm_name os_type
## Ex.
##   createVM S10u8_MySQL Solaris_64
## PS.
##   OS Type: Solaris Solaris_64 Open...

[Level 3] A script to demo mass zone creation on OpenSolaris

A few days ago, a colleague of mine wanted to demo virtualization techniques on OpenSolaris, so I shared this script with him to demo creating zones quickly. In my tests, my laptop created one zone every 3 seconds. That impressed him, and he decided to use it in his session. I'd like to share the same script with you; please refer to the code below. Any questions or suggestions, please feel free to let me know.

#!/usr/bin/bash
showUsage() {
  cat <<EOF
Usage:
  $0 [c]reate  [original index] [start index] [end index]
  $0 [d]estroy [start index]   [end index]   [always yes]
  $0 [l]ist
Ex.
  $0 c 1 2 10   => create  zone2, zone3, ... ,zone10
  $0 c 1 3 8    => create  zone3, zone4, ... ,zone8
  $0 d 2 10 [F] => destroy zone2, zone3, ... ,zone10
  $0 l
PS.
  for solaris 11 only
  all indexes must be greater than 1 and less than 255
Zones: ...
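The speed comes from the zonepath living on ZFS: `zoneadm clone` can snapshot and clone the filesystem instead of copying files. Here is a hedged dry-run sketch of how such a demo loop can look; the zone names and the export/sed/import idiom are assumptions for illustration, and DRYRUN=echo prints each command instead of running it.

```shell
#!/usr/bin/env bash
# Hypothetical dry-run of mass zone creation: copy the source zone's
# configuration, then ZFS-clone the zonepath via `zoneadm clone`.
DRYRUN=echo

create_zones() {
  local orig=$1 start=$2 end=$3 i
  for i in $(seq "$start" "$end"); do
    # copy the configuration of zone$orig, renaming paths for zone$i
    $DRYRUN "zonecfg -z zone$orig export | sed s/zone$orig/zone$i/g | zonecfg -z zone$i"
    $DRYRUN zoneadm -z zone$i clone zone$orig   # ZFS snapshot/clone: ~seconds
    $DRYRUN zoneadm -z zone$i boot
  done
}

create_zones 1 2 4
```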

[Level 3] Script for creating VirtualBox VMs quickly.

This script quickly creates VirtualBox VMs, based on the ZFS snapshot/clone technique. The steps are as follows:

1. Create a ZFS filesystem first.
   Ex. # zfs create -p rpool/vbox/sourceOS
2. Create a VirtualBox VM for the source and put the vdi on the above ZFS filesystem.
3. Use the script:
   Usage:
     $0 create New_Machine_Name Source_Machine_Name interface begin_index end_index
   Ex.
     ./createVBoxClientByClone.sh create NewVM SourceVM yukonx0 3 5
   The above command will create 3 VMs (NewVM_3, NewVM_4, NewVM_5). The create actions are:
     . snapshot the origin ZFS
     . clone from the origin ZFS snapshot to a new ZFS
     . renew the vdi uuid on the new ZFS
     . create the new vm
     . modify the new vm
     . modify the new vm nic
4. If you want to delete the VMs, use the command below:
   Usage:
     $0 delete New_Machine_Name Source_Machine_Name interface begin_index end_index
   Ex.
     ./createVBoxClientByClone.sh delete NewVM SourceVM yukonx0 3 5
   Above coma...
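The create actions above can be sketched as a dry-run loop. The dataset layout (rpool/vbox/&lt;name&gt;) and the @base snapshot name are assumptions for illustration, not the script's real names; DRYRUN=echo prints each command instead of executing it.

```shell
#!/usr/bin/env bash
# Hedged dry-run sketch of the snapshot/clone/setvdiuuid/createvm loop.
# Dataset names and the @base snapshot are illustrative assumptions.
DRYRUN=echo

clone_vms() {
  local new=$1 src=$2 begin=$3 end=$4 i
  $DRYRUN zfs snapshot rpool/vbox/$src@base                        # snapshot origin ZFS
  for i in $(seq "$begin" "$end"); do
    $DRYRUN zfs clone rpool/vbox/$src@base rpool/vbox/${new}_$i    # clone to new ZFS
    $DRYRUN VBoxManage -q internalcommands setvdiuuid /rpool/vbox/${new}_$i/$src.vdi  # renew vdi uuid
    $DRYRUN VBoxManage createvm --name ${new}_$i --register        # create new vm
  done
}

clone_vms NewVM SourceVM 3 5
```

Because every clone shares unchanged blocks with the snapshot, the per-VM disk cost starts near zero.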

[Level 2] How to find ZFS clone file systems?

Nowadays I use VirtualBox with ZFS clone file systems, so sometimes I have to figure out which file systems are native and which are clones. Therefore, I wrote a script to list the ZFS clone file systems.

#!/usr/bin/bash
showUsage() {
  cat <<EOF
Usage:
  $0 pool_name
Ex.
  $0 rpool
EOF
}

######################################################## main
[ $# -lt 1 ] && echo "Error without parameters, exit program..." && showUsage && exit 1
sPoolname=$1
#sPoolname=${1:-rpool}
echo "get pool($sPoolname)..."
zfs get -r origin $sPoolname | egrep -v -- 'origin[ ]+-[ ]+-$'

Then you can use the command to find the clone file systems:

# getCloneFS.sh rpool
get pool(rpool)...
NAME                                          PROPERTY ...
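The trick is that a native filesystem reports an origin of "-", while a clone reports the snapshot it was cloned from. The sketch below runs the same egrep filter over canned `zfs get -r origin` output (the sample lines are illustrative) so you can see which rows survive.

```shell
#!/usr/bin/env bash
# Sketch: filter canned `zfs get -r origin` output. Lines whose origin
# value is "-" (native filesystems) are dropped; clones remain.
list_clones() {
  grep -Ev -- 'origin[ ]+-[ ]+-$'
}

list_clones <<'EOF'
NAME               PROPERTY  VALUE                SOURCE
rpool/vbox/src     origin    -                    -
rpool/vbox/clone1  origin    rpool/vbox/src@base  -
EOF
```

Only the header and the clone1 line come through, which is exactly what the script prints for a real pool.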

[Level 2] Tips on ZFS.

1. Is it possible to migrate the root filesystem from a smaller hard disk to a larger one without shutting down the operating system? The answer is "Yes" if you use OpenSolaris. You can attach a larger disk to the pool so that it becomes one side of a mirror of the root file system. After it joins the pool, ZFS will resilver the data from the original disk to the new one. After resilvering completes, detach the old disk, and the pool capacity will increase. The simulated output is as follows.

# mkdir /temp
# mkfile 128m /temp/128m
# mkfile 256m /temp/256m
# ls /temp/*m
-rw-------   1 stanley  staff    134217728 Nov 19 09:06 /temp/128m
-rw-------   1 stanley  staff    268435456 Nov 19 09:06 /temp/256m
#
# pfexec zpool create myPool /temp/128m
# zpool status myPool
  pool: myPool
 state: ONLINE
 scrub: none requested
config:

    NAME     ...
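The attach/resilver/detach sequence can be summarized as a dry-run sketch. The device names are illustrative, and the last step is an assumption: on releases that have the autoexpand pool property, it may be needed before the extra capacity shows up.

```shell
#!/usr/bin/env bash
# Hedged dry-run of the live migration steps; DRYRUN=echo only prints.
DRYRUN=echo

migrate_root() {
  local old=$1 new=$2
  $DRYRUN zpool attach rpool "$old" "$new"   # larger disk joins as a mirror
  $DRYRUN zpool status rpool                 # watch until resilvering completes
  $DRYRUN zpool detach rpool "$old"          # drop the smaller disk
  $DRYRUN zpool set autoexpand=on rpool      # on newer releases: grow to the new size
}

migrate_root c1t0d0s0 c2t0d0s0
```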

[Level 3] Clone multiple VirtualBox guests with ZFS clone.

How do you save disk space when you need multiple VirtualBox guests? You can use ZFS clone to clone the ZFS filesystem holding the VDI. But when you import the cloned vdi, you will get an error about a duplicated disk uuid, so you need to change the disk uuid with the VBoxManage command. The complete steps are as follows:

1. Create a zfs pool for VirtualBox VDIs.
   # zpool create vdiPool c1t0d0s0; # default folder is /vdiPool
2. Create a zfs filesystem for the source VDI.
   # zfs create vdiPool/vdiSource; # default folder is /vdiPool/vdiSource
3. Create a VirtualBox guest, and create the vdi at /vdiPool/vdiSource/OpenSolaris.vdi
4. Clone the vdi source.
   # zfs snapshot vdiPool/vdiSource@installed
   # zfs clone vdiPool/vdiSource@installed vdiPool/vdiTarget1
   # VBoxManage internalcommands setvdiuuid /vdiPool/vdiTarget1/OpenSolaris.vdi

Wish this helps.
regards,
Stanley Huang

[Level 3] Solaris 10 Technical Conference ( 2009/10/25,11/13,12/4 ) -- Advanced ZFS hands-on lab

The following is my lab file; please refer to it.

Wish this helps.
regards,
Stanley Huang
****************************************************************************************************
The purpose of this lab is to give you advanced ZFS filesystem administration skills. Afterwards you will have the following capabilities.

Lab 1:
* replace a zpool disk.
Lab 2:
* take a ZFS filesystem snapshot, roll back a ZFS filesystem.
* clone a ZFS filesystem.
Lab 3:
* use ZFS L2ARC.
* use ZFS ZIL.

Lab 1:
1. replace a zpool disk.
# cd /labs/ZFS/files;
# zpool create mypool mirror `pwd`/f1 `pwd`/f2 spare `pwd`/f3;
# zpool replace mypool `pwd`/f2 `pwd`/f3;
# zpool status mypool;
-------------------------------------------------------------------------------
  pool: mypool
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Sun Oct 18 11:13:15 2009
config:

    NAME          ...
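For the Lab 3 topics (L2ARC and ZIL), the usual commands can be sketched as a dry-run; this is my own illustration, not the lab file's actual content, with the lab's file-backed devices standing in for the SSDs you would use in practice.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of adding L2ARC and ZIL devices; DRYRUN=echo prints.
DRYRUN=echo

add_arc_devices() {
  $DRYRUN zpool add mypool cache /labs/ZFS/files/f8   # L2ARC: second-level read cache
  $DRYRUN zpool add mypool log /labs/ZFS/files/f9     # separate ZIL (intent log) device
  $DRYRUN zpool status mypool
}

add_arc_devices
```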

[Level 2] Solaris 10 Technical Conference ( 2009/10/25,11/13,12/4 ) -- Basic ZFS hands-on lab

The following is my lab file; please refer to it.

Wish this helps.
regards,
Stanley Huang
****************************************************************************************************
The purpose of this lab is to give you basic ZFS administration skills. Afterwards you will have the following capabilities.

Lab 1:
* create a ZFS pool.
* check ZFS pool status.
* set ZFS pool properties.
* destroy a ZFS pool.
Lab 2:
* create a ZFS filesystem.
* check ZFS filesystem status.
* set ZFS filesystem properties.
* destroy a ZFS filesystem.

Lab 1:
1. prepare files with the mkfile command.
# mkdir -p /labs/ZFS/files;
# cd /labs/ZFS/files;
# mkfile 128m f1 f2 f3 f4 f5 f6 f7 f8 f9;
2. create ZFS pools.
# zpool create mypool mirror `pwd`/f1 `pwd`/f2 spare `pwd`/f3;
# zpool create myraidz raidz `pwd`/f4 `pwd`/f5 `pwd`/f6 spare `pwd`/f7;
3. list ZFS pools, and check pool status.
# zpool list;
---------------------------------------------------
# zpool list mypool...

[Level 1] How to Set Autoreplace in ZFS Pool?

Someone asked me why a ZFS pool won't use the hot spare disk when one of the raidz (RAID-5-like) disks crashes. That's because by default the pool does not have "autoreplace" turned on. How can we set it to "on"?

1. Check the zpool properties first:
# zpool get autoreplace rpool;
NAME   PROPERTY     VALUE    SOURCE
rpool  autoreplace  off      default
2. Set it on:
# zpool set autoreplace=on rpool;
3. Check the zpool properties again:
# zpool get autoreplace rpool;
NAME   PROPERTY     VALUE    SOURCE
rpool  autoreplace  on       local

Wish this helps.
regards,
Stanley Huang

[Level 3] ZFS ARC stat script.

Sometimes when we use OpenSolaris, we find that memory usage is very high. That's because the ZFS ARC will use as much memory as it can, so how can we check the ZFS ARC status? We can use the "kstat" command, but it seems too hard for the general end user. I found a good script on the web by Neelakanth Nadgir, who wrote a Perl script that makes it easy to check the ZFS ARC. This script is worth reading.

regards,
Stanley Huang
********************************************************************************
#!/bin/perl -w
#
# Print out ZFS ARC Statistics exported via kstat(1)
# For a definition of fields, or usage, use arcstat.pl -v
#
# Author: Neelakanth Nadgir http://blogs.sun.com/realneel
# Comments/Questions/Feedback to neel_sun.com or neel_gnu.org
#
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License, Version 1.0 only
# (the "License"...
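At its core, the Perl script reads the arcstats kstat and formats a few fields. Here is a tiny sketch of that idea; the here-document is made-up sample output standing in for a live `kstat -n arcstats` call, so only the parsing is shown.

```shell
#!/usr/bin/env bash
# Sketch: pull a couple of ARC fields out of kstat-style output.
# The sample input below is illustrative, not real kstat output.
arc_summary() {
  awk '$1 == "size" || $1 == "c_max" { printf "%s=%s\n", $1, $2 }'
}

arc_summary <<'EOF'
module: zfs    instance: 0    name: arcstats
    c_max                           1073741824
    hits                            123456
    size                            536870912
EOF
```

On a real system you would pipe `kstat -n arcstats` into the same filter.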

[Level 2] ZFS and DTrace in Mac.

My friend, who uses Mac OS X, told me that Mac OS X supports ZFS and DTrace. That surprised me! ZFS and DTrace are new features of Solaris 10, and of course OpenSolaris has both of them too. So I tried ZFS and DTrace on the Mac immediately. After testing, I reached the following two conclusions.

1. The Mac supports ZFS, but read-only, so it cannot modify ZFS filesystems. That's a pity: the primary reason I love ZFS is snapshots and clones, and read-only is not the ZFS I know.
2. OpenSolaris has over 60K probes (use "/usr/sbin/dtrace -l | wc -l" to check how many probes your system has), so I can observe system behavior in detail. When I ran the same command on Mac OS, I found only just over 20K probes, about 1/3 of OpenSolaris. I tried a D-script, and it does work. That's a good gift that all Mac fans should cherish, because DTrace can be used for online debugging and performance work, and you do not need to modify...