Zones can be treated as cheap, disposable application containers. Automated Zone creation is not necessarily about rapidly deploying thousands of Zones (though with sufficient planning it could certainly be used for that); rather, it lets you create and use a zone, then delete it and easily re-create it from scratch with a consistent configuration.
You will find that most, if not all, of your zones will use the same naming-services configuration, be in the same time zone, attach to the same network interface (just with different IP addresses), and so on. Many of the system identification and system configuration settings will be identical or very similar between zones.
You might even find that you create the same set of user-ids in each new zone and have them all get their home directories from a central home-directory server. This is, in essence, repeat work, and computers are good at repeating the same task over and over without getting bored.
If all you want to achieve is a clean state to which you can easily restore a zone, then a fine plan is to use file-system snapshots, along the lines of the steps below (a minimal command sketch follows the list):
1. Preparation / setup
1.1. Create a file system in which to store the zone. Since we get ZFS for free with Solaris, there is really no reason not to use it.
1.2. Set up the zone in this file system and complete its configuration up to the point you want to be able to revert back to.
1.3. Shut down the zone and take a snapshot.
2. Using the zone
2.1. Make any instance-specific "custom" configuration changes (add some disk space, add user-ids, tweak some settings).
2.2. Start the zone and let the users loose in it.
3. Reverting to the clean state
3.1. Bring the zone down (purely to make sure that no processes have files open in the file system containing the zone).
3.2. Roll the file system back to the snapshot state.
3.3. Go back to step 2 above.
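In its simplest form, assuming a zone whose zonepath lives on its own ZFS file system, the whole cycle boils down to a handful of commands. This is only a sketch: the pool, zone and snapshot names below ("tank", "myzone", "@clean") are placeholders, not the names used in the worked example later on.
globalzone# zoneadm -z myzone halt
globalzone# zfs snapshot tank/zones/myzone@clean
globalzone# zoneadm -z myzone boot
   (... let the users do their worst ...)
globalzone# zoneadm -z myzone halt
globalzone# zfs rollback tank/zones/myzone@clean
globalzone# zoneadm -z myzone boot
Note that zfs rollback only goes back to the most recent snapshot unless you add -r to discard newer ones; the worked example below uses a clone/promote/rename sequence instead, which gets you to the same place.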
Before I show an example of doing this using ZFS, suffer me to mention the other techniques involved in automating Solaris Zone creation, each of which I will cover in detail in a separate blog post.
First, copying the zone configuration. This involves creating a zone config and exporting it to a file to be used as a template in the future. Each time you want to create a zone based on this template, you just make a few small changes, such as the zone name and IP address, then import the modified copy of the template into a new zone, after which you continue with the normal zone installation.
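As a rough sketch of that flow (the zone names and file paths here are placeholders of my own choosing, not part of the example later in this post):
globalzone# zonecfg -z mytemplate export -f /export/zonecfg/template.cfg
globalzone# cp /export/zonecfg/template.cfg /export/zonecfg/newzone.cfg
   (... edit newzone.cfg: change the zonepath, IP address, and so on ...)
globalzone# zonecfg -z newzone -f /export/zonecfg/newzone.cfg
globalzone# zoneadm -z newzone install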
Second, using a sysidcfg file and a few other tricks to speed up the zone's system identification. This is quite similar to using a sysidcfg file to pre-configure a system from a JumpStart, and can be used to automate settings such as the timezone, locale, terminal type, networking and name services, amongst others.
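As a taste of what is to come, a minimal sysidcfg might look something like the sketch below. The values are only examples (and the encrypted root password is deliberately left as a placeholder); the file gets dropped into <zonepath>/root/etc/sysidcfg before the zone's first boot so that the identification questions are answered automatically.
system_locale=C
terminal=xterm
timezone=Africa/Johannesburg
network_interface=PRIMARY {hostname=disposable}
name_service=NONE
nfs4_domain=dynamic
security_policy=NONE
root_password=<encrypted-password-hash>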
Third, cloning zones to speed up the install process. The zone management framework from Sun gives us the ability to "clone" a master "template" zone. This involves creating one (or more) template zones which you leave fully installed and configured, but never actually boot or use, other than to tweak their configurations. Cloning saves time during the actual install and subsequent configuration steps.
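Assuming the new zone's configuration already exists (created from an exported template, for instance) and the template zone is installed but halted, the clone step itself is a single command; again, the zone names here are placeholders:
globalzone# zoneadm -z newzone clone mytemplate
After that the new zone is installed without copying thousands of packages from scratch, and you carry on with the usual first boot and system identification.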
With that out of the way, on to the example of how to make a simple disposable zone. As always, the fixed-width text represents what you should see on the screen, and I highlight the bits you enter.
globalzone# zpool create SPACE c0d0s4
globalzone# zfs create SPACE/zones
globalzone# zfs set mountpoint=/export/zones SPACE/zones
globalzone# zfs create SPACE/zones/disposable
globalzone# chmod 0700 /export/zones/disposable
globalzone# zfs set atime=off SPACE/zones/disposable
Disabling "atime" above is purely a personal preference. Now we set up a simple zone; yours can be as complicated or as simple as you want it to be.
globalzone# zonecfg -z disposable
zonecfg:disposable> create
zonecfg:disposable> set zonepath=/export/zones/disposable
zonecfg:disposable> add net
zonecfg:disposable:net> set physical=e1000g0
zonecfg:disposable:net> set address=192.168.24.133
zonecfg:disposable:net> end
zonecfg:disposable> verify
zonecfg:disposable> commit
zonecfg:disposable> exit
globalzone# zoneadm -z disposable install
cannot create ZFS dataset SPACE/zones/disposable: dataset already exists
Preparing to install zone <disposable>.
Creating list of files to copy from the global zone.
Copying <9386> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1307> packages on the zone.
Initialized <1307> packages on zone.
Zone <disposable> is initialized.
Installation of <1> packages was skipped.
Installation of these packages generated warnings:
The file contains a log of the zone installation.
For the eagle-eyed amongst you, the WebStackTooling warning is because this is a sparse zone and I'm running beta software (Nevada build 80). In a sparse zone the /usr file system is read-only, and WebStackTooling tries to create or change some files there. I'm ignoring this error for now as it does not bother me.
So far, so good. Let's take a snapshot to save what we've got so far.
globalzone# zfs snapshot SPACE/zones/disposable@freshly_installed
Now we perform the first boot and system identification. Below is an abbreviated copy-paste showing the flow of the process.
globalzone# zoneadm -z disposable boot; zlogin -C disposable
[Connected to zone 'disposable' console]
Configuring Services ... 150/150
Reading ZFS config: done.
>>> Select a Language
>>> Select a Locale
>>> What type of terminal are you using?
Creating new rsa public/private host key pair
Creating new dsa public/private host key pair
Configuring network interface addresses: e1000g0.
>>> Host name for e1000g0:1 disposable
>>> Configure Security Policy:
>>> Name Service
>>> NFSv4 Domain Name:
>>> Region and Time zone: Africa/Johannesburg
>>> Root Password
System identification is completed.
rebooting system due to change(s) in /etc/default/init
[NOTICE: Zone rebooting]
SunOS Release 5.11 Version snv_80 64-bit
Copyright 1983-2007 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Hostname: disposable
Reading ZFS config: done.
disposable console login: root
Password:
Feb 20 21:33:05 disposable login: ROOT LOGIN /dev/console
Sun Microsystems Inc. SunOS 5.11 snv_80 January 2008
You may want to make a few more changes now that the zone is running. Some ideas: set up user IDs, enable or disable some services, and set up some NFS and/or automounter file systems.
# mkdir /export/home
# useradd -c "Joe Blogs" -d /export/home/joeblogs -m joeblogs
# passwd joeblogs
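A couple of further illustrative examples of the sort of thing I mean, just as a sketch: the SMF service names are standard ones, but "homeserver" is a placeholder for your own central home-directory server.
# svcadm disable svc:/network/telnet:default
# svcadm enable svc:/network/nfs/client:default
# echo "*   homeserver:/export/home/&" >> /etc/auto_home
# svcadm restart svc:/system/filesystem/autofs:default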
Assuming you've done all you want, we now have a cleanly built zone up and running, and this is essentially the point we would like to be able to return to after whatever make-and-break or sandbox testing we do. The zone should be halted before we take the snapshot, even if only to make sure all open files are closed.
# halt
Feb 20 21:33:12 disposable halt: initiated by root on /dev/console
Feb 20 21:33:12 disposable syslogd: going down on signal 15
[NOTICE: Zone halted]
~.
[Connection to zone 'disposable' console closed]
Now just take another ZFS snapshot:
globalzone# zfs snapshot SPACE/zones/disposable@system_identified
Now the zone is ready for you to let your users loose in it. Allow them full root access; they can go crazy, run "rm -r /", and so on.
globalzone# zoneadm -z disposable boot; zlogin -C disposable
zoneadm: zone 'disposable': WARNING: e1000g0:1: no matching subnet found in netmasks(4) for 192.168.24.133; using default of 255.255.255.0.
[Connected to zone 'disposable' console]
Hostname: disposable
Reading ZFS config: done.
disposable console login: root
Password:
Feb 20 21:40:11 disposable login: ROOT LOGIN /dev/console
Last login: Wed Feb 20 21:33:05 on console
Sun Microsystems Inc. SunOS 5.11 snv_80 January 2008
Now perform some "work": create a few directories, modify some files, and so on. I chose to run sys-unconfig.
# sys-unconfig
WARNING
This program will unconfigure your system. It will cause it
to revert to a "blank" system - it will not have a name or know
about other systems or networks.
This program will also halt the system.
Do you want to continue (y/n) ? y
sys-unconfig started Wed Feb 20 21:40:30 2008
sys-unconfig completed Wed Feb 20 21:40:30 2008
Halting system...
svc.startd: The system is coming down. Please wait.
svc.startd: 59 system services are now being stopped.
svc.startd: The system is down.
[NOTICE: Zone halted]
Then, back in the global zone, examine the available ZFS snapshots:
globalzone# zfs list
NAME                                        USED  AVAIL  REFER  MOUNTPOINT
SPACE                                       684M  14.1G    18K  /SPACE
SPACE/zones                                 684M  14.1G    19K  /export/zones
SPACE/zones/disposable                      684M  14.1G   624M  /export/zones/disposable
SPACE/zones/disposable@freshly_installed    790K      -   523M  -
SPACE/zones/disposable@system_identified   59.2M      -   611M  -
These four commands can go nicely into a little "revert" script; a sketch of such a script follows the commands below.
globalzone# zfs clone SPACE/zones/disposable@system_identified \
SPACE/zones/reverted_temp
globalzone# zfs promote SPACE/zones/reverted_temp
globalzone# zfs destroy SPACE/zones/disposable
globalzone# zfs rename SPACE/zones/reverted_temp SPACE/zones/disposable
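A minimal sketch of such a script, using the dataset and zone names from this example (the temporary clone name and the script name are my own):
#!/bin/ksh
# revert.sh - roll the 'disposable' zone back to its clean snapshot
ZONE=disposable
DS=SPACE/zones/${ZONE}
SNAP=system_identified

zoneadm -z ${ZONE} halt               # harmless if the zone is already halted
zfs clone ${DS}@${SNAP} ${DS}_temp    # clone the clean snapshot
zfs promote ${DS}_temp                # make the clone independent of the original
zfs destroy ${DS}                     # throw away the modified file system
zfs rename ${DS}_temp ${DS}           # slot the clean copy into its place
zoneadm -z ${ZONE} boot               # and off we go again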
That took just a few seconds, and we are ready to start using the zone again...
globalzone# zoneadm -z disposable boot; zlogin -C disposable
[Connected to zone 'disposable' console]
SunOS Release 5.11 Version snv_80 64-bit
Copyright 1983-2007 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Hostname: disposable
Reading ZFS config: done.
disposable console login:
As expected, you will find that all changes have been reverted. Besides the usual application test environment, another area where I think this would be quite handy is a classroom situation: you can allow the students full root access in the zone, and at the end of the day quickly recover the system to a sane state for the next day's class.
All in all that was Q-E-D. This principle, as well as the information from my previous blog posting will form the basis of the next few posts.