Friday, October 30, 2009

[Level 1] How to display source code with colors in Vim.

In TWOSUG, someone asked how to display colors in the shell.
1. Set the TERM variable.
# export TERM=xterm-color;
2. Use Vim to open the source.
# vim ./test.c;
3. Turn on syntax highlighting in Vim.
In last-line (command-line) mode, type the following command.
:syntax on
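To avoid typing ":syntax on" in every session, the setting can be made permanent. A minimal sketch, assuming your per-user vimrc is ~/.vimrc:

```shell
# Append "syntax on" to ~/.vimrc once (idempotent; creates the file if absent).
VIMRC="$HOME/.vimrc"
grep -qx 'syntax on' "$VIMRC" 2>/dev/null || echo 'syntax on' >> "$VIMRC"
```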

Wish this helps.

regards,
Stanley Huang

Thursday, October 29, 2009

[Level 3] Clone multiple VirtualBox guests by ZFS clone.

How can you save disk space when you need multiple VirtualBox guests?
You can use ZFS clone to clone the ZFS filesystem that holds the VDI.
But when you import the cloned VDI, you will get an error message about a duplicated disk UUID.
So you need to change the disk UUID with the VBoxManage command.
The complete steps are as follows:

1. create zfs pool for VirtualBox VDI.
# zpool create vdiPool c1t0d0s0; # default folder is /vdiPool

2. create zfs filesystem for Source VDI.
# zfs create vdiPool/vdiSource; # default folder is /vdiPool/vdiSource

3. create a VirtualBox guest with its VDI at /vdiPool/vdiSource/OpenSolaris.vdi

4. clone vdi source
# zfs snapshot vdiPool/vdiSource@installed
# zfs clone vdiPool/vdiSource@installed vdiPool/vdiTarget1

5. reset the duplicated disk UUID on the cloned VDI.
# VBoxManage internalcommands setvdiuuid /vdiPool/vdiTarget1/OpenSolaris.vdi
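With more than one target, the clone and UUID-reset steps can be scripted. A dry-run sketch that only prints the commands for review (pool and path names follow the steps above; `gen_clone_cmds` is a hypothetical helper name):

```shell
# Print the zfs/VBoxManage commands for N cloned guests (dry run).
gen_clone_cmds() {
  n=$1
  i=1
  while [ "$i" -le "$n" ]; do
    echo "zfs clone vdiPool/vdiSource@installed vdiPool/vdiTarget$i"
    echo "VBoxManage internalcommands setvdiuuid /vdiPool/vdiTarget$i/OpenSolaris.vdi"
    i=$((i + 1))
  done
}
gen_clone_cmds 3        # review the commands first
# gen_clone_cmds 3 | sh # then execute them on the VirtualBox host
```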


Wish this helps.

regards,
Stanley Huang

chm reader in OpenSolaris

If you want to install chm reader on OpenSolaris,
you can install xchm from the Blastwave repository.

Wish this helps.

regards,
Stanley Huang

Thursday, October 22, 2009

ScaleDB storage engine for MySQL.

Here comes a new commercial storage engine for MySQL called "ScaleDB".
The architecture of ScaleDB is similar to "Oracle RAC",
but for now ScaleDB only supports Linux and Windows.
All the database instances are active and share the same storage.
I am looking forward to this storage engine maturing.

PS. If you don't have shared storage, and you just use VirtualBox as I do,
you can use "DRBD" to simulate the shared storage,
but for now, DRBD only supports Linux.

The architecture of ScaleDB is shown below. (All picture rights belong to www.scaledb.com.)





Ref:
http://www.scaledb.com

The DRBD architecture is shown below. (All picture rights belong to DRBD.org.)


Ref:
http://www.drbd.org

Wish this helps.

regards,
Stanley Huang

Tuesday, October 20, 2009

[Level 3] Solaris 10 Technical Conference ( 2009/10/25,11/13,12/4 ) -- Advanced ZFS hands-on lab

The following is my lab file; please refer to it.

Wish this helps.

regards,
Stanley Huang

****************************************************************************************************
The purpose of this lab is to give you advanced ZFS filesystem administration skills. After completing it, you will be able to:
Lab 1:
* replace zpool disk.
Lab 2:
* take ZFS filesystem snapshot, rollback ZFS filesystem.
* clone ZFS filesystem.
Lab 3:
* use ZFS L2ARC
* use ZFS ZIL



Lab 1:
1. replace zpool disk.
# cd /labs/ZFS/files;
# zpool create mypool mirror `pwd`/f1 `pwd`/f2 spare `pwd`/f3;
# zpool replace mypool `pwd`/f2 `pwd`/f3;

# zpool status mypool;
-------------------------------------------------------------------------------
  pool: mypool
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Sun Oct 18 11:13:15 2009
config:

    NAME            STATE     READ WRITE CKSUM
    mypool                 ONLINE       0     0     0
      /lab/ZFS/files/f1    ONLINE       0     0     0
      spare                ONLINE       0     0     0
        /lab/ZFS/files/f2  ONLINE       0     0     0
        /lab/ZFS/files/f3  ONLINE       0     0     0  47.5K resilvered
    spares
      /lab/ZFS/files/f3    INUSE     currently in use

errors: No known data errors
-------------------------------------------------------------------------------

# zpool replace mypool `pwd`/f2 `pwd`/f8;

# zpool status mypool;
-------------------------------------------------------------------------------
  pool: mypool
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Sun Oct 18 11:20:55 2009
config:

    NAME          STATE     READ WRITE CKSUM
    mypool               ONLINE       0     0     0
      /lab/ZFS/files/f1  ONLINE       0     0     0
      /lab/ZFS/files/f8  ONLINE       0     0     0  57.5K resilvered
    spares
      /lab/ZFS/files/f3  AVAIL

errors: No known data errors
-------------------------------------------------------------------------------



Lab 2:
1. ZFS filesystem snapshot/rollback.
# zfs create mypool/myfs1;
# cp /etc/hosts /mypool/myfs1/hosts;
# ls -l /mypool/myfs1/hosts;
------------------------------------------------------------
-r--r--r-- 1 root root 4925 Oct 18 11:35 /mypool/myfs1/hosts
------------------------------------------------------------

# zfs snapshot mypool/myfs1@s1;
# cat /dev/null > /mypool/myfs1/hosts;
# ls -l /mypool/myfs1/hosts;
------------------------------------------------------------
-r--r--r-- 1 root root 0 Oct 18 11:36 /mypool/myfs1/hosts
------------------------------------------------------------

# zfs rollback mypool/myfs1@s1;
# ls -l /mypool/myfs1/hosts;
------------------------------------------------------------
-r--r--r-- 1 root root 4925 Oct 18 11:35 /mypool/myfs1/hosts
------------------------------------------------------------

2. clone ZFS filesystem, then promote it.
# zfs clone mypool/myfs1@s1 mypool/clonefs;
# zfs list -t all -r mypool;
-----------------------------------------------------
NAME              USED  AVAIL  REFER  MOUNTPOINT
mypool            218K  90.8M    24K  /mypool
mypool/clonefs     21K  90.8M    25K  /mypool/clonefs
mypool/myfs1       25K  90.8M    25K  /mypool/myfs1
mypool/myfs1@s1      0      -    25K  -
-----------------------------------------------------

# zfs get -r origin mypool;
--------------------------------------------------
NAME             PROPERTY  VALUE            SOURCE
mypool           origin    -                -
mypool/clonefs   origin    mypool/myfs1@s1  -
mypool/myfs1     origin    -                -
mypool/myfs1@s1  origin    -                -
--------------------------------------------------

# cd /mypool/clonefs/;
# ls -al;
----------------------------------------------
total 9
drwxr-xr-x 2 root root    3 Oct 18 11:35 .
drwxr-xr-x 6 root root    6 Oct 18 11:39 ..
-r--r--r-- 1 root root 4925 Oct 18 11:35 hosts
----------------------------------------------

# echo "192.168.100.1 host1" >> ./hosts;
# echo "192.168.100.2 host2" >> ./hosts;
# echo "192.168.100.3 host3" >> ./hosts;
# ls -l ./hosts;
------------------------------------------------
-r--r--r-- 1 root root 4985 Oct 18 11:44 ./hosts
------------------------------------------------

# tail -3 ./hosts;
-------------------
192.168.100.1 host1
192.168.100.2 host2
192.168.100.3 host3
-------------------

# cd /;
# zfs promote mypool/clonefs
# zfs get -r origin mypool;
------------------------------------------------------
NAME               PROPERTY  VALUE              SOURCE
mypool             origin    -                  -
mypool/clonefs     origin    -                  -
mypool/clonefs@s1  origin    -                  -
mypool/myfs1       origin    mypool/clonefs@s1  -
------------------------------------------------------

# zfs destroy -r mypool/clonefs@s1;
cannot destroy 'mypool/clonefs@s1': snapshot is cloned
no snapshots destroyed
# zfs destroy -R mypool/clonefs@s1;
# zfs rename mypool/clonefs mypool/fs1
# zfs get -r origin mypool;
------------------------------------
NAME        PROPERTY  VALUE   SOURCE
mypool      origin    -       -
mypool/fs1  origin    -       -
------------------------------------

Lab 3:
# cd /labs/ZFS/files;
# zpool add mypool log `pwd`/f9;
# zpool status mypool;
-------------------------------------------------------------------------------
  pool: mypool
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Wed Oct 18 11:22:13 2009
config:

    NAME                 STATE     READ WRITE CKSUM
    mypool               ONLINE       0     0     0
      /lab/ZFS/files/f1  ONLINE       0     0     0
      /lab/ZFS/files/f8  ONLINE       0     0     0  57.5K resilvered
    logs                 ONLINE       0     0     0
      /lab/ZFS/files/f9  ONLINE       0     0     0
    spares
      /lab/ZFS/files/f3  AVAIL  

errors: No known data errors
-------------------------------------------------------------------------------

# lofiadm -a `pwd`/f10; # cache devices cannot be plain files, so turn the file into a block device first.
/dev/lofi/1
# zpool add mypool cache /dev/lofi/1;
# zpool status mypool;
-------------------------------------------------------------------------------
  pool: mypool
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Wed Oct 18 11:24:33 2009
config:

    NAME                 STATE     READ WRITE CKSUM
    mypool               ONLINE       0     0     0
      /lab/ZFS/files/f1  ONLINE       0     0     0
      /lab/ZFS/files/f8  ONLINE       0     0     0  57.5K resilvered
    logs                 ONLINE       0     0     0
      /lab/ZFS/files/f9  ONLINE       0     0     0
    cache
      /dev/lofi/1        ONLINE       0     0     0
    spares
      /lab/ZFS/files/f3  AVAIL  

errors: No known data errors
-------------------------------------------------------------------------------


[Level 2] Solaris 10 Technical Conference ( 2009/10/25,11/13,12/4 ) -- Basic ZFS hands-on lab

The following is my lab file; please refer to it.

Wish this helps.

regards,
Stanley Huang


****************************************************************************************************

The purpose of this lab is to give you basic ZFS administration skills. After completing it, you will be able to:
Lab 1:
* create ZFS pool.
* check ZFS pool status.
* set ZFS pool properties.
* destroy ZFS pool.
Lab 2:
* create ZFS filesystem.
* check ZFS filesystem status.
* set ZFS filesystem properties.
* destroy ZFS filesystem.




Lab 1:
1. prepare files with command mkfile.
# mkdir -p /labs/ZFS/files;
# cd /labs/ZFS/files;
# mkfile 128m f1 f2 f3 f4 f5 f6 f7 f8 f9 f10;
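mkfile is Solaris-specific. On systems without it, sparse files of the same size can serve as practice backing files; a sketch using dd (the /tmp path is an assumption for illustration, and note that mkfile pre-allocates blocks while these files are sparse):

```shell
# Create 128 MB sparse files, similar in effect to `mkfile 128m f1 f2 f3`.
mkdir -p /tmp/ZFSfiles
cd /tmp/ZFSfiles
for f in f1 f2 f3; do
  # Write one byte at offset 128MB-1, so the file's reported size is 128 MB.
  dd if=/dev/zero of="$f" bs=1 count=1 seek=$((128 * 1024 * 1024 - 1)) 2>/dev/null
done
ls -l f1 f2 f3
```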

2. create ZFS pool.
# zpool create mypool mirror `pwd`/f1 `pwd`/f2 spare `pwd`/f3;
# zpool create myraidz raidz `pwd`/f4 `pwd`/f5 `pwd`/f6 spare `pwd`/f7;

3. list ZFS pools, and check pool status.
# zpool list;
---------------------------------------------------
NAME      SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
mypool    123M  77.5K   123M     0%  ONLINE  -
myraidz   370M   149K   370M     0%  ONLINE  -
---------------------------------------------------

# zpool status mypool;
---------------------------------------------------
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    mypool      ONLINE       0     0     0
      /temp/f1  ONLINE       0     0     0
      /temp/f2  ONLINE       0     0     0
    spares
      /temp/f3  AVAIL  
---------------------------------------------------

# zpool status myraidz;
------------------------------------------------
  pool: myraidz
 state: ONLINE
 scrub: none requested
config:

    NAME          STATE     READ WRITE CKSUM
    myraidz       ONLINE       0     0     0
      raidz1      ONLINE       0     0     0
        /temp/f4  ONLINE       0     0     0
        /temp/f5  ONLINE       0     0     0
        /temp/f6  ONLINE       0     0     0
    spares
      /temp/f7    AVAIL  

errors: No known data errors
------------------------------------------------

4. list zpool properties.
# zpool get all mypool;
--------------------------------------------------
NAME    PROPERTY       VALUE       SOURCE
mypool  size           246M        -
mypool  used           76K         -
mypool  available      246M        -
mypool  capacity       0%          -
mypool  altroot        -           default
mypool  health         ONLINE      -
mypool  guid           9946812783950608926  default
mypool  version        14          default
mypool  bootfs         -           default
mypool  delegation     on          default
mypool  autoreplace    off         default
mypool  cachefile      -           default
mypool  failmode       wait        default
mypool  listsnapshots  off         default
--------------------------------------------------

5. change mypool pool property 'autoreplace'.
# zpool set autoreplace=on mypool;

6. export pool.
# zpool export mypool;

7. import pool. (-d tells zpool which directory to search for the file-backed devices.)
# zpool import -d ./ mypool;

8. destroy pool.
# zpool destroy mypool;
# zpool destroy myraidz;



Lab 2:
1. create ZFS filesystem.
# zfs create mypool/myfs1;
# zfs create -p mypool/myfs1/myfs2/myfs3;

2. list ZFS filesystems.
# zfs list mypool/myfs1;
# zfs list -r mypool;

3. list ZFS filesystem properties.
# zfs get all mypool/myfs1;
PS. Partial sample properties.
------------------------------------------------------------------
NAME          PROPERTY              VALUE                  SOURCE
mypool/myfs1  compressratio         1.00x                  -
mypool/myfs1  mountpoint            /mypool/myfs1          default
mypool/myfs1  compression           off                    default
mypool/myfs1  copies                1                      default
...
------------------------------------------------------------------

4. change mypool filesystem properties.
# zfs set mountpoint=/mnt/mypool mypool;

5. change myfs1 filesystem properties.
# zfs set compression=on mypool/myfs1;
# zfs set copies=2 mypool/myfs1/myfs2;

6. list properties.
# zfs get -r mountpoint mypool;
--------------------------------------------------------------------------------
NAME                      PROPERTY    VALUE                          SOURCE
mypool                    mountpoint  /mnt/mypool                    local
mypool/myfs1              mountpoint  /mnt/mypool/myfs1              inherited from mypool
mypool/myfs1/myfs2        mountpoint  /mnt/mypool/myfs1/myfs2        inherited from mypool
mypool/myfs1/myfs2/myfs3  mountpoint  /mnt/mypool/myfs1/myfs2/myfs3  inherited from mypool
--------------------------------------------------------------------------------

# zfs get -r compression mypool;
--------------------------------------------------------------------------------
NAME                      PROPERTY     VALUE     SOURCE
mypool                    compression  off       default
mypool/myfs1              compression  on        local
mypool/myfs1/myfs2        compression  on        inherited from mypool/myfs1
mypool/myfs1/myfs2/myfs3  compression  on        inherited from mypool/myfs1
--------------------------------------------------------------------------------

# zfs get -r compressratio mypool;
--------------------------------------------------------------------------------
NAME                      PROPERTY       VALUE  SOURCE
mypool                    compressratio  1.00x  -
mypool/myfs1              compressratio  1.00x  -
mypool/myfs1/myfs2        compressratio  1.00x  -
mypool/myfs1/myfs2/myfs3  compressratio  1.00x  -
--------------------------------------------------------------------------------

# zfs get -r copies mypool;
--------------------------------------------------------------------------------
NAME                      PROPERTY  VALUE   SOURCE
mypool                    copies    1       default
mypool/myfs1              copies    1       default
mypool/myfs1/myfs2        copies    2       local
mypool/myfs1/myfs2/myfs3  copies    2       inherited from mypool/myfs1/myfs2
--------------------------------------------------------------------------------

7. reset properties default values.
# zfs inherit -r mountpoint mypool;
# zfs inherit -r compression mypool;
# zfs inherit -r copies mypool;

8. destroy all ZFS filesystem.
# zfs destroy mypool/myfs1/myfs2/myfs3;
# zfs destroy -r mypool/myfs1;

[Level 2] Solaris 10 Technical Conference ( 2009/10/25,11/13,12/4 ) -- Basic DTrace hands-on lab

The following is my lab file; please refer to it.

Wish this helps.

regards,
Stanley Huang

****************************************************************************************************

The purpose of this lab is to give you basic DTrace skills. After completing it, you will be able to:
Lab 1:
* know DTrace probes.
* use the DTrace toolkit to monitor your system.
* download the DTrace toolkit.
* write your first D-script.

1. list dtrace probes.
# dtrace -l;

2. list demo dtrace tools
# cd /usr/demo/dtrace;
# ls -l *.d;
------------------------------------------------------------------
-rw-r--r--   1 root     bin         1677 May 14 23:52 applicat.d
-rw-r--r--   1 root     bin         1699 May 14 23:52 badopen.d
-rw-r--r--   1 root     bin         1732 May 14 23:52 begin.d
-rw-r--r--   1 root     bin         1668 May 14 23:52 callout.d
-rw-r--r--   1 root     bin         2220 May 14 23:52 clause.d
-rw-r--r--   1 root     bin         1717 May 14 23:52 clear.d
-rw-r--r--   1 root     bin         1640 May 14 23:52 countdown.d
-rw-r--r--   1 root     bin         1664 May 14 23:52 counter.d
-rw-r--r--   1 root     bin         2119 May 14 23:52 dateprof.d
-rw-r--r--   1 root     bin         1694 May 14 23:52 delay.d
-rw-r--r--   1 root     bin         1858 May 14 23:52 denorm.d
------------------------------------------------------------------

3. run the dtrace script.
# /usr/sbin/dtrace -s ./iosnoop.d;
------------------------------------------------------------------------
    DEVICE                                                       FILE RW
       sd0                                                       W
       sd0                                                       W
       sd0                                                       W
       sd0                                                       W
       sd0                                                       W
       sd0                                                       W
       sd0                                                       W
       ...
------------------------------------------------------------------------
^C

# dtrace -s ./whoexec.d;
^C
-----------------------------------------------
WHO                  WHAT                 COUNT
bash                 dtrace               1
bash                 find                 1
bash                 ls                   1
bash                 passwd               1
dtrace               dtrace               1
-----------------------------------------------

4. download the DTrace toolkit.
# firefox http://www.opensolaris.org/os/community/dtrace/dtracetoolkit/;
[After downloading the toolkit file.]
# gunzip -c ./DTraceToolkit-0.99.tar.gz | tar xvf -
# cd ./DTraceToolkit-0.99
# ./iosnoop
-------------------------------------------------
  UID   PID D    BLOCK   SIZE       COMM PATHNAME
  101  1264 R 182480384  65536       find
  101  1264 R 182480128  65536       find
  101  1264 R 229304832  65536       find
  101  1264 R 115049472  65536       find
  101  1264 R 91222144  65536       find
  101  1264 R 160868480  65536       find
  101  1264 R 27676544  65536       find
  101  1264 R 27649664  65536       find
  101  1264 R 193875712  65536       find
  101  1264 R 115076224  65536       find
  101  1264 R 167114112  65536       find
  101  1264 R 167346560  65536       find
-------------------------------------------------
^C

5. write your first D-script.
# cat ./myDscript.d;
-------------------------------------------------
#!/usr/sbin/dtrace -s
#pragma D option quiet
#pragma D option version=1.1
#pragma D option defaultargs
struct  myStruct {
  uint64_t nStartTimestamp;
  uint64_t nEndTimestamp;
  uint64_t nElapsed;
};
struct myStruct myTime;

BEGIN {
  myTime.nStartTimestamp=timestamp;
  trace("hello world\n");
}
END {
  printf("%s\n", "the end...");
  myTime.nEndTimestamp=timestamp;
  myTime.nElapsed=myTime.nEndTimestamp-myTime.nStartTimestamp;
  printf("The process spend time(sec) = %d sec\n",((myTime.nElapsed)/1000000000));
}
ERROR
{
  trace("Some syntax error!\n");
}
-------------------------------------------------
# chmod u+x ./myDscript.d;
# ./myDscript.d;
hello world
^C
the end...
The process spend time(sec) = 2 sec
#


[Level 2] Solaris 10 Technical Conference ( 2009/10/25,11/13,12/4 ) -- Basic SMF hands-on lab

The following is my lab file; please refer to it.

Wish this helps.

regards,
Stanley Huang

****************************************************************************************************
The purpose of this lab is to give you basic SMF administration skills. After completing it, you will be able to:
Lab 1:
* list services.
* check service dependencies.
* manage services.

Lab 1:
1. list services
# svcs -a; ## list all services, including disabled ones
----------------------------
STATE          STIME    FMRI
...
----------------------------

# svcs */milestone/*;
----------------------------------------------------------------------------
STATE          STIME    FMRI
online          8:40:05 svc:/milestone/network:default
online          8:40:08 svc:/milestone/name-services:default
online          8:40:08 svc:/milestone/devices:default
online          8:40:09 svc:/milestone/single-user:default
online          8:40:13 svc:/milestone/sysconfig:default
online          8:40:25 svc:/milestone/multi-user:default
online          8:40:26 svc:/milestone/multi-user-server:default
----------------------------------------------------------------------------

2. list the processes associated with a service.
# svcs -p inetd;
--------------------------------------------------
STATE          STIME    FMRI
online          8:40:13 svc:/network/inetd:default
                8:40:13      434 inetd
--------------------------------------------------

3. list detailed service information.
# svcs -l inetd;
-------------------------------------------------------------------
fmri         svc:/network/inetd:default
name         inetd
enabled      true
state        online
next_state   none
state_time   Sun Oct 18 08:40:13 2009
logfile      /var/svc/log/network-inetd:default.log
restarter    svc:/system/svc/restarter:default
contract_id  70
dependency   require_any/error svc:/network/loopback (online)
dependency   require_all/error svc:/system/filesystem/local (online)
dependency   optional_all/error svc:/milestone/network (online)
dependency   optional_all/error svc:/network/rpc/bind (online)
dependency   optional_all/none svc:/network/inetd-upgrade (disabled)
dependency   require_all/none svc:/milestone/sysconfig (online) svc:/milestone/name-services (online)
-------------------------------------------------------------------

4. list services in an error state, with explanations.
# svcs -vx;
-------------------------------------------------------------------------------
svc:/system/cluster/cl-svc-cluster-milestone:default (Synchronizing the cluster userland services)
 State: disabled since Sun Oct 18 08:39:54 2009
Reason: Disabled by an administrator.
   See: http://sun.com/msg/SMF-8000-05
Impact: 1 dependent service is not running:
        svc:/system/cluster/sckeysync:default
-------------------------------------------------------------------------------

5. check service dependencies. (-d lists what this service depends on; -D lists what depends on it)
# svcs */milestone/*;
online          8:40:05 svc:/milestone/network:default
online          8:40:08 svc:/milestone/name-services:default
online          8:40:08 svc:/milestone/devices:default
online          8:40:09 svc:/milestone/single-user:default
online          8:40:13 svc:/milestone/sysconfig:default
online          8:40:25 svc:/milestone/multi-user:default
online          8:40:26 svc:/milestone/multi-user-server:default

# svcs -d multi-user | grep milestone;
online          8:40:08 svc:/milestone/name-services:default
online          8:40:09 svc:/milestone/single-user:default
online          8:40:13 svc:/milestone/sysconfig:default

# svcs -D multi-user | grep milestone;
online          8:40:26 svc:/milestone/multi-user-server:default

6. manage service.
# svcs *ssh*
STATE          STIME    FMRI
online          8:40:14 svc:/network/ssh:default

** stop a service; it will not start at the next boot.
# svcadm disable svc:/network/ssh:default
# svcadm disable ssh
# svcs -l ssh | grep enabled
-----------------------------
enabled      false
-----------------------------

** start a service; it will start at the next boot.
# svcadm enable svc:/network/ssh:default
# svcadm enable ssh
# svcs -l ssh | grep enabled
-----------------------------
enabled      true
-----------------------------

** restart a service
** The "restart" option will not enable a disabled service.
** If you want to enable the service, use the "enable" option as above.
# svcadm restart svc:/network/ssh:default
# svcadm restart ssh

** temporarily start/stop a service; the change does not persist across a reboot.
# svcadm enable -t ssh
# svcadm disable -t ssh
# svcs -l ssh | grep enabled
-----------------------------
enabled      true (temporary)
-----------------------------

7. manage services with profiles.
** apply the secure network profile: stops insecure services such as telnet, ftp, and rpc.
# svccfg apply /var/svc/profile/generic_limited_net.xml
# cat /var/svc/profile/generic_limited_net.xml
PS. A partial view of the XML file.
-----------------------------------------------------------
  ... (the XML elements were stripped when this post was
  published; run the cat command above to see the entries) ...
-----------------------------------------------------------

** apply the open network profile: starts services such as telnet, ftp, and rpc.
# svccfg apply /var/svc/profile/generic_open.xml
# cat /var/svc/profile/generic_open.xml
PS. A partial view of the XML file.
-----------------------------------------------------------
  ... (the XML elements were stripped when this post was
  published; run the cat command above to see the entries) ...
-----------------------------------------------------------


[Level 3] Solaris 10 Technical Conference ( 2009/10/25,11/13,12/4 ) -- Advanced SMF hands-on lab

The following is my lab file; please refer to it.

Wish this helps.

regards,
Stanley Huang

****************************************************************************************************
The purpose of this lab is to give you advanced SMF administration skills. After completing it, you will be able to:
Ref:
  1. smfmanifest_howto.pdf
  2. http://www.sun.com/software/solaris/howtoguides/servicemgmthowto.jsp

Lab 1:
* create new SMF service.

Lab 1:
1. copy smfSample.xml to /temp/.
# mkdir /temp;
# cp ./smfSample.xml /temp/myService.xml;

2. modify xml file.
# cd /temp;
# vi /temp/myService.xml;
[ find all "REPLACE_ME" to replace ]

3. create myService method.
# vi /temp/myService;
[ service method just like init method, ex. /etc/init.d/apache ]

4. create myService.main.
# vi /temp/myService.main;
[ the service main program ]

5. import service configuration.
# svccfg import /temp/myService.xml;

6. check service status.
# svcs myService;
# svcs -vx myService;

7. enable/disable/restart service.
# svcadm enable myService;
# svcadm disable myService;
# svcadm restart myService;
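The import/check/enable cycle above can be collected into one small script. A dry-run sketch that only prints the commands (`myService` and /temp are this lab's placeholder names; `smf_steps` is a hypothetical helper):

```shell
# Print the SMF commands for a given service name (dry run).
smf_steps() {
  svc=$1
  echo "svccfg import /temp/$svc.xml"
  echo "svcadm enable $svc"
  echo "svcs -l $svc"
}
smf_steps myService         # review the commands
# smf_steps myService | sh  # run them on a real Solaris host
```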




Sunday, October 18, 2009

[Level 3] Solaris 10 Technical Conference ( 2009/10/25,11/13,12/4 ) -- Advanced DTrace hands-on lab

The following is my lab file; please refer to it.

Wish this helps.

regards,
Stanley Huang
 


****************************************************************************************************

The purpose of this lab is to give you advanced DTrace skills. After completing it, you will be able to:
Lab 1:
* understand D-scripts in depth.
Lab 2:
* use DTrace to debug your own program.

Lab 1:
1. D-script arguments.
# cd /temp;
# cat ./testArgs.d;
------------------------------------
#!/usr/sbin/dtrace -qs
/*
 * $ is for a numeric argument,
 * $$ is for a quoted string argument.
 */
BEGIN {
  printf("args1=%d\n",$1);
  printf("args2=%s\n",$$2);
}
------------------------------------

# chmod u+x ./testArgs.d;
# ./testArgs.d 1 "a";
-------
args1=1
args2=a
-------
^C

2. D-script control
# cd /temp;
# cat ./0-60.d;
----------------------------------------------------------------------
#!/usr/sbin/dtrace -qs
dtrace:::BEGIN
{
        msecond = 0;
        speed = 0;
}
profile:::tick-1msec
/speed < 60/
{
        speed = 10 *msecond/1000;
        msecond++;
}
profile:::tick-1msec
/speed>=60/
{
        printf("0 to %d m/s in %d milli seconds\n", speed, msecond-1);
        exit(0);
}
----------------------------------------------------------------------

# chmod u+x ./0-60.d
# ./0-60.d
---------------------------------
0 to 60 m/s in 6000 milli seconds
---------------------------------

3. aggregation functions.
aggregation function format:
@name[keys] = aggfunc(args);
'@'  -- indicates that "name" is an aggregation.
keys -- index

aggregation functions include:
sum(expr)              -- summary
count()                -- count
avg(expr)              -- average
min(expr)/max(expr)    -- minimum/maximum
quantize()/lquantize() -- quantize/linear quantize

# cd /temp
# cat ./aggr1.d
------------------------------
#!/usr/sbin/dtrace -s
sysinfo:::pswitch
{
  @[execname] = count();
}
------------------------------

# chmod u+x ./aggr1.d
# ./aggr1.d
---------------------------------------------------------------------
dtrace: script './aggr1.d' matched 3 probes
---------------------------------------------------------------------

^C
---------------------------------------------------------------------
  devfsadm                                                          1
  dhcpagent                                                         1
  gnome-power-mana                                                  1
  gnome-volume-man                                                  1
  iiimd                                                             1
  inetd                                                             1
  mdnsd                                                             1
  nscd                                                              2
  fsflush                                                           3
  dtrace                                                            4
  httpd                                                             4
  metacity                                                          4
  updatemanagernot                                                  4
  xscreensaver                                                      4
  ...
---------------------------------------------------------------------


# cat ./aggr2.d
---------------------------------------------------------
#!/usr/sbin/dtrace -s
pid$target:libc:malloc:entry
{
  @[execname, "Malloc Distribution"]=quantize(arg0);
}
---------------------------------------------------------

# chmod u+x ./aggr2.d
# ./aggr2.d -c ls
--------------------------------------------------------------------------
dtrace: script './aggr2.d' matched 1 probe
0-60.d                     f8
aggr1.d                    f9
badopen.d                  h.d
Developer001.zip           myProg
f1                         myProg.c
f2                         myProg.d
f3                         myProg.sh
f4                         MySQLDeveloper5.1Exam
f5                         mysqldeveloper5_1exam.zip
f6                         testArgs.d
f7                         test.txt
dtrace: pid 1470 has exited

  ls                                                  Malloc Distribution                              
           value  ------------- Distribution ------------- count   
               4 |                                         0       
               8 |@@@@@@@@@@@@@@@@@@@@                     4       
              16 |@@@@@                                    1       
              32 |@@@@@                                    1       
              64 |                                         0       
             128 |                                         0       
             256 |@@@@@                                    1       
             512 |                                         0       
            1024 |                                         0       
            2048 |                                         0       
            4096 |                                         0       
            8192 |                                         0       
           16384 |                                         0       
           32768 |@@@@@                                    1       
           65536 |                                         0       
--------------------------------------------------------------------------



# cat ./aggr3.d
---------------------------------------------------------
#!/usr/sbin/dtrace -s
syscall::mmap:entry
{
  @a["number of mmaps"] = count();
  @b["average size of mmaps"] = avg(arg1);
  @c["size distribution"] = quantize(arg1);
}
profile:::tick-10sec
{
  printa(@a);
  printa(@b);
  printa(@c);

  clear(@a);
  clear(@b);
  clear(@c);
}
---------------------------------------------------------

# chmod u+x aggr3.d
# ./aggr3.d
---------------------------------------------------------------------
dtrace: script './aggr3.d' matched 2 probes
---------------------------------------------------------------------

^C
---------------------------------------------------------------------
  number of mmaps                                                   1
  average size of mmaps                                        131072
  size distribution                                
           value  ------------- Distribution ------------- count   
           65536 |                                         0       
          131072 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1       
          262144 |                                         0       
---------------------------------------------------------------------

# cat ./timespent.d
-------------------------------------------------------------------------------
#!/usr/sbin/dtrace -qs
syscall::open*:entry,
syscall::close*:entry
{
  self->ts=timestamp;
}
syscall::open*:return,
syscall::close*:return
{
  self->timespent = timestamp - self->ts;
  printf("Process %s  with ThreadID %d spent %d nsecs in %s\n", execname, tid, self->timespent, probefunc);
  self->ts=0; /*allow DTrace to reclaim the storage */
  self->timespent = 0;
}
-------------------------------------------------------------------------------

# chmod u+x ./timespent.d
# ./timespent.d
-------------------------------------------------------------------
Process gnome-netstatus-  with ThreadID 1 spent 57534 nsecs in open
Process multiload-applet  with ThreadID 1 spent 67725 nsecs in open
Process multiload-applet  with ThreadID 1 spent 5744 nsecs in close
Process multiload-applet  with ThreadID 1 spent 8165 nsecs in close
Process multiload-applet  with ThreadID 1 spent 2762 nsecs in close
Process multiload-applet  with ThreadID 1 spent 2441 nsecs in close
Process multiload-applet  with ThreadID 1 spent 2406 nsecs in close
Process multiload-applet  with ThreadID 1 spent 2381 nsecs in close
Process multiload-applet  with ThreadID 1 spent 2366 nsecs in close
...
-------------------------------------------------------------------



Lab 2:
1. create new C program (myProg.c).
# cd /temp
# cat ./myProg.c
----------------------------------------
#include <stdio.h>
int f1(int n) {
  int r=f3(n*2);
  return r;
}
int f2(int n) {
  int r=n+3;
  return r;
}
int f3(int n) {
  int r=f2(n*5);
  return r;
}

int main(int argc, char** argv) {
  int r=f1(1);
  printf("%d\n",r); // f1 call f2, f2 call f3 => (1*2+3)*5=15
}
----------------------------------------
# gcc -o ./myProg ./myProg.c
# ./myProg
----------------------------------------
13 ## not 15
----------------------------------------

2. debug your program
# cat ./myProg.d
----------------------------------------
#!/usr/sbin/dtrace -s
pid$1:myProg:$$2:entry
{
  self->trace=1;
}
pid$1:myProg:$$2:return
{
  self->trace=0;
}
pid$1:myProg::entry,
pid$1:myProg::return
/self->trace/
{
}
----------------------------------------

[terminal 1]# mdb ./myProg
[terminal 1]> _start:b
[terminal 1]> :r
mdb: stop at _start
mdb: target stopped at:
_start:         pushl  $0x0
[terminal 1]> !ps
  PID TTY         TIME CMD
 1959 pts/4       0:00 mdb
 1961 pts/4       0:00 ps
 1662 pts/4       0:00 bash
 1960 pts/4       0:00 myProg
[terminal 2]# dtrace -F -s ./myProg.d 1960 f1
dtrace: script './myProg.d' matched 26 probes

[terminal 1]> :c
13
mdb: target has terminated
[terminal 1]> $q

[terminal 1]#
[terminal 2]
CPU FUNCTION                                
  0  -> f1                                   
  0    -> f3                                 
  0      -> f2                               
  0      <- f2                               
  0    <- f3                                 
[terminal 2]
^C
[terminal 2]#

3. result:
In main, the program (myProg) calls f1, but f1 calls f3, not f2,
so the wrong function is called in f1. Modify the program accordingly...done!
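As a sanity check on the lab result, the arithmetic of the actual (buggy) call chain can be reproduced directly in the shell; f1(1) calls f3(1*2), f3 calls f2(2*5), and f2 adds 3:

```shell
#!/bin/sh
# Reproduce the buggy call chain's arithmetic:
# f1(1) -> f3(1*2) -> f2((1*2)*5) -> (1*2)*5 + 3
n=1
echo $(( (n*2)*5 + 3 ))   # prints 13, matching myProg's output
```

This confirms that the 13 we observed is exactly what the f1 -> f3 -> f2 chain produces.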


Saturday, October 17, 2009

[Level 3] How to use DTrace to trace your program function steps.

Sometimes when we debug a program, we need to verify the real sequence of function calls. We can use DTrace to achieve this.

For example (scenario):
1. Create C file /tmp/t.c with following source.
#include <stdio.h>
int f1(int a) {
  f2(a+2);
}

int f2(int b) {
  printf("%d\n", b+3);
}

int main(int argc, char *argv) {

  char a='1';
  f1(a);
}



2. compile, then run it.
# gcc -o /tmp/t /tmp/t.c
# /tmp/t
54
#
The answer '54' is strange: we expect 1+2+3 to be '6'. Let's use DTrace to trace it.

3. write a D-Language script (/tmp/t.d), then change mode with 700.

# cat /tmp/t.d
#!/usr/sbin/dtrace -s
#pragma D option quiet
pid$1:t:f1:entry {
  printf("f1: %d\n", arg0);
}
pid$1:t:f2:entry {
  printf("f2: %d\n", arg0);
}
# chmod 700 /tmp/t.d

4. use mdb to execute the program. (terminal 1)
PS.
_start:b, set breakpoint in _start function
:r, resume it (and the process will stop at breakpoint)
!ps, execute the ps command, then you will see the process pid (2176)
# mdb /tmp/t
> _start:b
> :r
mdb: stop at _start
mdb: target stopped at:
_start:         pushl  $0x0
> !ps
  PID TTY         TIME CMD
 2176 pts/5       0:00 t
 1980 pts/5       0:00 bash
 2175 pts/5       0:00 mdb
 2177 pts/5       0:00 ps

>


5. Open another terminal (terminal 2), and run DTrace, passing the pid (2176).
# /tmp/t.d 2176

6. In terminal 1, continue the program, then quit mdb.
> :c
54
mdb: target has terminated

> ::quit
#


7. In terminal 2, you will see the result.
# /tmp/t.d 2176
f1: 49
f2: 51

^C


#
Then you will see that f1 was called with argument 49, not 1.
When we review the source code,
we find out that we used the wrong type in the main function,
so we correct it:
int main(int argc, char *argv) {
  //char a='1';
  int a=1;
  f1(a);
}
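The values 49 and 51 that DTrace reported are ASCII character codes. A quick shell check confirms the arithmetic (this sketch assumes a POSIX-compliant printf, where a leading quote in a numeric operand yields the character's code):

```shell
#!/bin/sh
# "'1" asks printf for the character code of '1' (ASCII 49)
printf '%d\n' "'1"        # prints 49
echo $(( 49 + 2 + 3 ))    # prints 54 -- the mysterious output explained
```

So char a='1' passed the character code 49 into f1, and 49+2+3 = 54.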

8. Re-compile the source and run it again.
Then it's correct.
# gcc -o /tmp/t /tmp/t.c
# /tmp/t
6
#

Wish this helps.

regards,
Stanley Huang

Thursday, October 15, 2009

[Level 2] File ACL...


Someone asked me how to copy/back up files with ACLs; there are several ways to do so.
1. use cp -p command:
# cp -p ./a.txt ./b.txt
PS.
Copying a file from UFS to ZFS is OK,
but copying a file from ZFS to UFS will fail.
2. use tar with the p flag:
# tar cpf ./d.tar ./d
PS. The same problem as case 1 applies when you extract the files onto a different filesystem type.
3. copy acl from another file with getfacl/setfacl command on UFS:
# getfacl ./a.txt |  setfacl -f - ./b.txt
PS.
You can use the above command on UFS, but not on ZFS.
4. copy file with zfs send/recv command:
# zfs snapshot poolname/fssource@sname

# zfs send -R poolname/fssource@sname | zfs recv -dF poolname/fstarget

Wish this helps.

regards,
Stanley Huang

Monday, October 12, 2009

[Level 2] How to share screen within 2 sessions.

Someone asked me: if a vendor sshes into the server from the internet, how can we "monitor" the commands the vendor keys in, especially when the vendor needs "root" privileges?
You can use a shared screen to fulfill this request.

[ Session 1 : User with "root" account ]
[ Session 2: Vendor with "stanley" account ]

root# chmod u+s /usr/bin/screen
root# screen

[ press Ctrl+a, then enter ":multiuser on" ]

[ press Ctrl+a, then enter ":addacl stanley" ]

[ press "Enter" ]
stanley$ screen -x root/


Then, your session screen is shared to your vendor.

Wish this helps.

regards,
Stanley Huang

[Level 2] How to clone VirtualBox vdi

The easiest way to clone a VirtualBox vdi file is with the "VBoxManage" command as the root user.
ex.
# pfexec VBoxManage clonehd ./source.vdi ./target.vdi
# sUser=`whoami`
# sGroup=`groups $sUser|cut -d' ' -f1`
# chown $sUser:$sGroup ./target.vdi
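As a side note, parsing `groups` output with cut is fragile; a shorter alternative (a sketch assuming your `id` supports the `-un`/`-gn` options, as Solaris and GNU id do) is:

```shell
#!/bin/sh
# Resolve the current user and primary group without parsing `groups` output
sUser=$(id -un)
sGroup=$(id -gn)
# Print the chown command we would run on the cloned image
echo "chown $sUser:$sGroup ./target.vdi"
```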



Wish this helps.

regards,
Stanley Huang

Thursday, October 8, 2009

[Level 2] How to run commands on a remote server through the telnet client.


Today, someone asked me, "How do you run commands on a remote server through the telnet client?"
So I wrote a sample script for him, and I share it here as well.

Wish this helps.

regards,
Stanley Huang

################################################

#!/usr/bin/bash

showUsage() {
  cat<<EOF
Usage:
  $0 host [user] [pwd] [sleep_sec] [cmd_file]
Ex.
  $0 localhost root root123 3 $0.cmd
  $0 localhost root root123 3
  $0 localhost root root123
  $0 localhost root
  $0 localhost
PS.
  Default values:
  user=root
  pwd=root123
  sleep_sec=3
  cmd_file=$0
EOF
}
################################## main
sGrep="^#[$] " # default grep filter

sHost=$1  && [ -z "$sHost"  ] && showUsage && exit 1 # sHost=b1500-16
sUsr=$2   && [ -z "$sUsr"   ] && sUsr=root           # whoami
sPwd=$3   && [ -z "$sPwd"   ] && sPwd=root123        # passwd
sSleep=$4 && [ -z "$sSleep" ] && sSleep=3
sCmdF=$5  && [ -z "$sCmdF"  ] && sCmdF="$0"

sCmdList="`grep "$sGrep" $sCmdF`"
sCmdList="$sUsr:$sPwd:`echo $sCmdList| sed -e 's/[ ]*#[$][ ]*/:/g'`:sleep $sSleep"
#echo $sCmdList
#exit

# IFS must put here
IFS=:
echo "Command Begin ($0) :: `date '+%Y/%m/%d %H:%M:%S'`"
for sCmd in $sCmdList
do
  sleep $sSleep
  echo $sCmd
done | telnet $sHost
echo "Command End   ($0) :: `date '+%Y/%m/%d %H:%M:%S'`"

exit 0

######################### command line : by pattern : #$ command
######################### you can copy follow lines to a command file.
#$ uname -n
#$ #ls -ltr;
#$ id;
#$ pwd;
#$ exit;
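The heart of the script above is the colon-joined command list that IFS splits back into one command per loop iteration. A minimal standalone sketch of just that mechanism (no telnet involved; the command names here are only placeholders):

```shell
#!/bin/sh
# Join commands with ':' and split them back with IFS, one per iteration,
# exactly as the wrapper does before piping them into telnet.
sCmdList="uname -n:id:pwd:exit"
IFS=:
for sCmd in $sCmdList
do
  echo "send: $sCmd"
done
```

Note that once IFS is set to ':', the space inside "uname -n" no longer splits the word, so each colon-delimited command survives intact.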