02-17-2018, 03:06 PM   #1
abhisheks77 (Member)
Need help patching with Live Upgrade (lu) on SVM + ZFS file systems with zones


Hello,
I need help understanding how Live Upgrade (lu) would work on Solaris 10 on this server. I can detach the mirrored SVM metadevices (see the metadetach sketch after the output below), but the zpool layout is confusing: which mirror should I break?
Code:
server-app01 # : |format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <SUN300G cyl 46873 alt 2 hd 20 sec 625>
          /pci@0/pci@0/pci@2/scsi@0/sd@0,0
       1. c0t1d0 <SUN300G cyl 46873 alt 2 hd 20 sec 625>
          /pci@0/pci@0/pci@2/scsi@0/sd@1,0
       2. c0t2d0 <SEAGATE-ST930003SSUN300G-0868-279.40GB>
          /pci@0/pci@0/pci@2/scsi@0/sd@2,0
       3. c0t3d0 <SEAGATE-ST930003SSUN300G-0868-279.40GB>
          /pci@0/pci@0/pci@2/scsi@0/sd@3,0
Specify disk (enter its number):

server-app01 # zpool status -v
  pool: z
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        z             ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t0d0s7  ONLINE       0     0     0
            c0t1d0s7  ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t2d0    ONLINE       0     0     0
            c0t3d0    ONLINE       0     0     0

errors: No known data errors
server-app01 # metastat -p
d5 -m d8 d9 1
d8 1 1 c0t0d0s5
d9 1 1 c0t1d0s5
d1 -m d4 d6 1
d4 1 1 c0t0d0s1
d6 1 1 c0t1d0s1
d0 -m d2 d3 1
d2 1 1 c0t0d0s0
d3 1 1 c0t1d0s0

server-app01 # zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
z                              452G  68.0G     1K  legacy
z/export                       379M  68.0G   379M  /export
z/shared                      1.76G  3.24G  1.76G  /export/zones/shared
z/swap2                          1G  68.5G   479M  -
z/swap3                          2G  69.8G   152M  -
z/zones                        447G  68.0G    38K  /export/zones
z/zones/pgpi-factory1        6.65G  13.3G  6.37G  /export/zones/pgpi-factory1
z/zones/pgpi-factory1/var     288M  9.72G   288M  legacy
z/zones/pgpi-oradb1           153G  68.0G   153G  /export/zones/pgpi-oradb1
z/zones/pgpi-oradb1/var       242M  9.76G   242M  legacy
z/zones/pgpi-pin1            21.5G  8.51G  19.1G  /export/zones/pgpi-pin1
z/zones/pgpi-pin1/var        2.38G  7.62G  2.38G  legacy
z/zones/pgpi-webserv1        14.4G  5.60G  14.0G  /export/zones/pgpi-webserv1
z/zones/pgpi-webserv1/var     423M  5.60G   423M  legacy
z/zones/pgpj-factory1        7.50G  12.5G  7.50G  /export/zones/pgpj-factory1
z/zones/pgpj-factory1_local   190M  68.0G   190M  legacy
z/zones/pgpj-factory1_var     293M  9.71G   293M  legacy
z/zones/pgpj-oradb1           191G  39.0G   191G  /export/zones/pgpj-oradb1
z/zones/pgpj-oradb1_local     198M  68.0G   198M  legacy
z/zones/pgpj-oradb1_var      8.09G  1.91G  8.09G  legacy
z/zones/pgpj-pin1            22.7G  7.26G  22.7G  /export/zones/pgpj-pin1
z/zones/pgpj-pin1_local       307M  68.0G   307M  legacy
z/zones/pgpj-pin1_var        3.87G  6.13G  3.87G  legacy
z/zones/pgpj-webserv1        14.9G  5.11G  14.9G  /export/zones/pgpj-webserv1
z/zones/pgpj-webserv1_local   198M  68.0G   198M  legacy
z/zones/pgpj-webserv1_var    1.81G  8.19G  1.81G  legacy
server-app01 #
server-app01 # zpool iostat -v
                 capacity     operations    bandwidth
pool           used  avail   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
z              449G  78.6G      5      4   233K  18.1K
  mirror       233G  17.5G      3      1   170K  6.89K
    c0t0d0s7      -      -      1      0   109K  6.93K
    c0t1d0s7      -      -      1      0   112K  6.93K
  mirror       217G  61.1G      2      2  63.6K  11.2K
    c0t2d0        -      -      0      1  51.1K  11.2K
    c0t3d0        -      -      0      1  51.0K  11.2K
------------  -----  -----  -----  -----  -----  -----

server-app01 #
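For reference, this is how I would break the SVM side first (a sketch only; the metadevice-to-slice mapping is taken from the metastat output above, so please correct me if I have it wrong):
Code:
# Detach the c0t1d0 submirrors so those slices can host the new BE
metadetach d0 d3    # d3 = c0t1d0s0 (root submirror)
metadetach d1 d6    # d6 = c0t1d0s1 (swap submirror)
metadetach d5 d9    # d9 = c0t1d0s5 (/var submirror)
# Release the detached submirrors so lucreate can use the raw slices
metaclear d3 d6 d9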
My plan is to run lucreate and then install the patch cluster on the alternate disk, but how should I include the ZFS file systems in this?
Code:
lucreate -c "Solaris-10" \
  -m /:/dev/dsk/c0t1d0s0:ufs \
  -m -:/dev/dsk/c0t1d0s1:swap \
  -m /var:/dev/dsk/c0t1d0s5:ufs \
  -n "Solaris_10_Patch_BE" -l /var/log/lucreate.log
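For the patching step itself, my understanding is that the cluster can be applied to the inactive BE with luupgrade -t (the staging path and patch list below are placeholders, not my real setup; the 10_Recommended README also documents an installcluster -B <BE> mode for exactly this):
Code:
# Apply the cluster to the inactive BE (placeholder path and patch list)
luupgrade -t -n Solaris_10_Patch_BE -s /var/tmp/10_Recommended/patches <patch_ids>
# Activate the patched BE; use init 6 (not reboot) so the LU shutdown
# scripts run and the activation actually takes effect
luactivate Solaris_10_Patch_BE
init 6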
How can I include the ZFS file systems (where the zones reside) in the lucreate command?
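If I read the Live Upgrade docs correctly, file systems not named with -m are shared between boot environments, so pool "z" would not be broken or listed at all, and lucreate should snapshot and clone the ZFS zone roots on its own. This is how I would check afterwards (the BE-suffixed clone naming is my guess):
Code:
# Verify the new BE exists and look for per-BE clones of the zone roots
lustatus
zfs list | grep Solaris_10_Patch_BE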
 
  

