Solaris / OpenSolaris
This forum is for the discussion of Solaris, OpenSolaris, OpenIndiana, and illumos.
General Sun, SunOS and Sparc related questions also go here. Any Solaris fork or distribution is welcome.
Does anyone know how to migrate a ZFS zpool from one big LUN to several smaller ones?
We have a 2 TB database mount. Our new NetApp provisioned 4 LUNs of 512 GB — the same total size, but no single LUN is big enough on its own.
On VXVM I could use vxevac and it would move the data to as many LUNs as needed -
vxevac ... big_lun new_1 new_2 new_3 new_4
Or I could build a plex with the several new LUNs, mirror, sync, then break the mirror.
On the Linux/HP-UX/AIX LVMs that use logical partitions, I could write a loop that live-migrates the logical partitions one at a time until they had all been moved off the big LUN.
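For the Linux LVM case, the loop isn't even necessary — pvmove does the extent-by-extent live migration itself. A minimal sketch (the volume group name datavg and the device paths are assumptions, not from my actual setup):

```shell
#!/bin/sh
# Hypothetical names: datavg is the volume group, big_lun the old PV,
# new_1..new_4 the four replacement LUNs.
BIG=/dev/mapper/big_lun
NEW="/dev/mapper/new_1 /dev/mapper/new_2 /dev/mapper/new_3 /dev/mapper/new_4"

pvcreate $NEW                 # initialize the four small LUNs as PVs
vgextend datavg $NEW          # add them to the volume group
pvmove $BIG                   # live-migrate every extent off the big PV
vgreduce datavg $BIG          # drop the now-empty LUN from the VG
```

The filesystems stay mounted and writable the whole time; pvmove mirrors each extent to a new PV before releasing the old copy.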
On ZFS, "zpool attach database old_dev new_dev" does not allow several new devices — attach mirrors exactly one new device onto one existing device.
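The closest ZFS equivalent I can see is building a new pool striped across the four LUNs and moving the datasets with send/receive — not a live cutover, but the bulk copy can run while the old pool is still in use. Pool, dataset, and device names here are made up for illustration:

```shell
#!/bin/sh
# Hypothetical names: database is the old pool, newdb the replacement,
# c0t1d0..c0t4d0 the four new 512 GB LUNs.
zpool create newdb c0t1d0 c0t2d0 c0t3d0 c0t4d0   # stripe the four LUNs

zfs snapshot -r database@migrate                  # point-in-time copy
zfs send -R database@migrate | zfs receive -Fd newdb

# Later, during the outage window: quiesce the app, take a final
# snapshot, and send only the increment since @migrate.
zfs snapshot -r database@final
zfs send -R -i @migrate database@final | zfs receive -Fd newdb
```

The incremental final send keeps the actual downtime proportional to the churn since the first snapshot rather than to the full 2 TB.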
I would very much like to be able to do this as a live migration. I've already migrated the boot points of my local zones live with zpool attach/detach, but that LUN was provisioned explicitly by size.
A trade-off I see: if I let NetApp allocate the LUNs, it can monitor them all together as a database. I can provision by hand to the exact size, but then it does not know they form a set.
Thanks! According to that document the source needs to be read-only, so it's a "dead migration" rather than a "live migration". Not clear how it's better than shutting down Oracle and doing a move or copy to a new mount point. The copy speed would be similar with a small number of large files.
Did I read it wrong that the source needs to be read-only? If it were the destination that had to be read-only, I could at least sync while live, and the database downtime would be just a restart cycle rather than a full copy.
You didn't read it wrong.
The source does indeed need to be set to read-only mode. However, the destination will be read-write as soon as you start the shadow migration, and all data will be accessible through it.
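In other words, the cutover happens up front: clients switch to the destination immediately, and data is pulled over in the background (and on demand as files are accessed). A sketch of the Solaris 11 shadow migration commands, with example pool and mount-point names:

```shell
#!/bin/sh
# Hypothetical names: oldpool/db mounted at /olddb is the source,
# newdb/db the destination dataset on the new pool.
zfs set readonly=on oldpool/db                 # source must be read-only

# Create the destination with the shadow property pointing at the
# source's mount point; the migration starts immediately.
zfs create -o shadow=file:///olddb newdb/db

shadowstat                                     # watch migration progress
```

Once shadowstat reports the migration complete, the shadow property is cleared and the old dataset can be destroyed.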