Saturday, February 23, 2008

First steps to cloning a Solaris Zone

Today I want to mention a few concepts that I've been deliberately neglecting. Some of the ideas from my 15-minute-to-your-first-zone guide and my post about Automating Zone creation are based on these, and so is what I will be posting about in the next few posts.

Firstly, Sun has cleverly integrated Zone management with the ZFS file system.

1. If the parent directory of a zone's zonepath is on a ZFS file system, then zoneadm will create a new ZFS file system for the zonepath of the zone.

2. Stopping and starting the zone will mount and unmount the zone's root file system, as the example below shows.

3. Cloning a zone by means of the zoneadm utility will automatically use a ZFS snapshot to create a ZFS clone, which will be mounted on the zonepath of the new zone.
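
For example, assuming the zone roots live under a ZFS dataset (the pool and dataset names below are purely illustrative), you can watch this integration in action:

globalzone # zfs list -r mypool/export/zones   # each zone's root shows up as its own dataset
globalzone # zoneadm -z myfirstzone halt       # halting unmounts the zone's root file system
globalzone # zfs mount | grep myfirstzone      # check whether it is currently mounted
globalzone # zoneadm -z myfirstzone boot       # booting mounts it again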

This integration is even more interesting because prior to release 11/06 of Solaris 10, running a zone with its zonepath on a ZFS file system was an unsupported configuration.

The second thing is that resources which are only needed while a zone is running are created and destroyed dynamically when the zone is started or halted. In particular this applies to network interfaces and loopback file systems. When you start up a zone, you will notice that new entries are created for its interfaces and that new file systems are mounted. The file systems which are managed in this way are normally hidden from df in the global zone, but show up when you run df with the new -Z switch.
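
You can see this for yourself from the global zone; the interface name below is just an example, as yours depends on your hardware:

globalzone # zoneadm -z myfirstzone boot
globalzone # ifconfig -a    # a new logical interface, e.g. e1000g0:1, now belongs to the zone
globalzone # df -h -Z       # -Z also lists the file systems mounted for non-global zones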

The next concept is how zlogin works. You can think of zlogin as a kind of "su" command, but instead of running a command under a different userid, it runs a command in a different zone. The default command which it runs is a shell. You can also compare it to using ssh or rexec to run a specified command somewhere else, though there is no networking involved.
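
A minimal illustration, using the zone from my earlier posts:

globalzone # zlogin myfirstzone             # no command given, so you get a shell inside the zone
globalzone # zlogin myfirstzone uname -a    # run a single command in the zone and return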


The concept that every process has a UID and GID which control its access to files and system calls is extended in Solaris by a new field storing the process' zone ID. This, together with process permission flags and a chroot, is essentially what zones are, but more on that later.
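
One quick way to see the zone ID in action - the zone output column is supported by the Solaris 10 ps, and zonename(1) reports the zone a command runs in:

globalzone # ps -eo zone,pid,comm | head    # the zone column shows which zone owns each process
globalzone # zlogin myfirstzone zonename    # prints "myfirstzone"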


Using zlogin to run a command or create a new shell in a zone will create a wtmpx login record having zone:global as the origin of the session.

zlogin with the -C option connects to the zone's console device, and creates a wtmpx entry in the zone with the console recorded as the origin of the login session.
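
You can inspect these records with last(1). Run from inside the zone, something like this should show the recorded origins:

myfirstzone # last -n 5    # zlogin sessions show zone:global, -C sessions show the console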

A few things which I consider to be good Zone management habits:

1. Keep an entry for each zone's IP address in the global zone's hosts file, and maintain the /etc/inet/netmasks file with entries for all the subnets you will be using (see the example after this list).

2. If you put each zone in its own file system, then they cannot all "fill up" at the same time. With ZFS file systems, this requires that you set quotas and/or reservations on the zones' root file systems, as shown below.
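
As an example of both habits - the netmask and the ZFS dataset names below are assumptions, so adjust them to your own network and pool layout. An entry per zone in /etc/hosts (addresses from this series of posts):

192.168.100.131  myfirstzone
192.168.100.132  firstclone

An entry for the subnet in /etc/inet/netmasks (the netmask is an assumption):

192.168.100.0  255.255.255.0

And capping each zone's root file system:

globalzone # zfs set quota=8g mypool/export/zones/myfirstzone
globalzone # zfs set reservation=2g mypool/export/zones/myfirstzone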

Finally, a very simple yet important and eventually very powerful concept: exporting and importing zone configurations.

I want to demonstrate this using the configuration from an existing zone as an example. First we export it and store it in a text file, like this:

globalzone # zonecfg -z myfirstzone export > /tmp/zone_config.txt

Have a look at the file ...

globalzone # cat /tmp/zone_config.txt

create -b

set zonepath=/export/zones/myfirstzone

set autoboot=false

set ip-type=shared

add net

set address=192.168.100.131

set physical=e1000g0

end

Now just make a few small changes to the file - specifically we update the zonepath and the IP address:


globalzone # sed '

/zonepath/ s/myfirstzone/firstclone/;

/address=/ s/100.131/100.132/

' /tmp/zone_config.txt > /tmp/clone_config.txt

Of course you could use your favourite text editor to do that, but using sed is just so sexy.

globalzone # cat /tmp/clone_config.txt

create -b

set zonepath=/export/zones/firstclone

set autoboot=false

set ip-type=shared

add net

set address=192.168.100.132

set physical=e1000g0

end

We will feed this config file into zonecfg. Of course you could just manually set each of those entries, but zone configurations can easily get complex - you may have multiple network interfaces, many file systems, and several other non-default settings like resource controls, something I'll get to in due course. A small taste of that follows below.
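
Just to give a hypothetical taste of what such a configuration might contain, here is a loopback file system plus a resource control in zonecfg syntax - the directory names and share value are purely illustrative and not part of this zone:

add fs
set dir=/data
set special=/export/data/firstclone
set type=lofs
end
add rctl
set name=zone.cpu-shares
add value (priv=privileged,limit=10,action=none)
end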

Right now, this is what I have on my system (the "before" picture):

globalzone # zoneadm list -vc
  ID NAME         STATUS      PATH                        BRAND    IP
   0 global       running     /                           native   shared
   - myfirstzone  installed   /export/zones/myfirstzone   native   shared
   - disposable   installed   /export/zones/disposable    native   shared

Creating the zone config based on this:

globalzone # zonecfg -z firstclone -f /tmp/clone_config.txt

Then the "after" picture showing the new zone configured...

globalzone # zoneadm list -vc
  ID NAME         STATUS      PATH                        BRAND    IP
   0 global       running     /                           native   shared
   - myfirstzone  installed   /export/zones/myfirstzone   native   shared
   - disposable   installed   /export/zones/disposable    native   shared
   - firstclone   configured  /export/zones/firstclone    native   shared

All that remains is to populate this new "cloned" zone with user-land bits...

globalzone # timex zoneadm -z firstclone install

A ZFS file system has been created for this zone.

Preparing to install zone <firstclone>.

Creating list of files to copy from the global zone.

Copying <188162> files to the zone.

Initializing zone product registry.

Determining zone package initialization order.

Preparing to initialize <1307> packages on the zone.

Initialized <1307> packages on zone.

Zone <firstclone> is initialized.

Installation of <1> packages was skipped.

The file </export/zones/firstclone/root/var/sadm/system/logs/install_log> contains a log of the zone installation.

real 28:13.55

user 4:46.13

sys 7:18.22

And we're done!


Over 28 minutes – that is still much too slow. In the next post I will use this concept and build on it to take “cloning” to the next level - it should not take more than a few seconds.
