Today I will take this very contentious, highly flame-war-provoking issue head on. Many people are likely to disagree with me, but that is OK - opinions are a big part of what makes the Internet tick.
Naturally you need to start off by considering your specific requirements and the resources you have available: many RISC-based computers support only one specific Unix operating system, while many consumer devices such as web-cams and WiFi adapters are difficult, if not impossible, to get working with anything other than MS Windows. When multiple options exist for a server application, it also makes sense to take stock of your existing skills for supporting each environment before choosing an operating system.
To start off, I would like to group operating systems into three categories as I see them. These categories are intentionally vague, and operating systems can migrate between them.
The categories are:
- The End-User operating system: The question "Does it do the job?" comes to mind. I will include MS Windows and MacOS in this category. For reasons I will get into soon, this includes the Server, Pro, and Home editions of MS Windows.
- The Power-user operating system: "I'll hack it until it works the way I want it to work". This category without any doubt includes Linux.
- The server operating system: This is for specifically supported hardware running specifically supported applications. It includes all the Unices - in particular Solaris, because it is available for free and on non-RISC hardware - and probably FreeBSD as well.
I'll discuss each of these categories a bit more in depth, taking them in the opposite order.
In the enterprise data center we usually find many large, multi-processor servers designed and ratified to run specific operating systems, which in turn are ratified and supported for use with specific applications and databases. Vendors in this environment sell solutions where all the components making up the whole have been preselected, and include support for their solutions under a contract which often limits your freedom, for example by limiting you to specific versions of a specific operating system.
In this environment, going with a supported operating system is the only sensible choice, and the big Unix vendors all have long lists of applications that can be run on their systems. The benefit of these solutions lies in hardware that is intelligent about the operating system: it can halt the operating system without terminating it, allow hardware to be re-configured while the operating system is running, take crash-dumps when the system hangs, and present a uniform device tree of the installed hardware to the operating system. All of this provides a high level of stability and supportability - something that comes at a price, but is worth it when service delivery to your clients depends on the systems running without interruption.
That is not to say that Linux and MS Windows servers do not have a place in the data center - the exact same rule holds true: Select the platform for which your application is supported - whether it is a mainframe or a network vendor's appliance with an embedded firewall and authentication service. With Linux-based servers I often notice that the applications prefer an environment where a high degree of customizability is provided, and with Windows servers I often find that the applications have a keep-it-simple nature, at least as far as the server configuration is concerned.
Linux fits squarely into the Power-user operating system category. Features like the ability to re-compile the operating system to eke out an extra ounce of performance, to modify the source code directly to get a specific result, and to change or replace any single component with one of your own choosing, together with a wonderful following of supporters eager to help one another, truly make this operating system a power-user's heaven.
I find myself more productive when working on Linux. This is without a doubt because I have it set up exactly the way I like it, and because I have a huge collection of software tools installed and configured to help me maintain the system. It also helps that Linux is much more transparent in reporting what is going on "under the hood": you can obtain accurate log files of system events, examine what processes are doing in fine detail, and get accurate reports of the state of every single component that the operating system controls.
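As a small sketch of that transparency (assuming a Linux system, where the kernel exposes per-process state as plain text under the /proc pseudo-filesystem), the function below is an illustrative helper, not part of any particular tool:

```python
# Sketch: on Linux, per-process state lives as plain text files under
# /proc. Assumes a Linux system; /proc/<pid>/status exists nowhere else.
import os

def process_info(pid):
    """Parse the fields of /proc/<pid>/status into a dict."""
    info = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            key, _, value = line.partition(":")
            info[key] = value.strip()
    return info

# Inspect the current process: its name and scheduling state,
# exactly as the kernel reports them.
me = process_info(os.getpid())
print(me["Name"], me["State"])
```

The same directory also exposes open file descriptors, memory maps, and environment, which is what makes fine-grained troubleshooting possible without special tooling.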
This all makes it easier to track down the "root cause" of problems, and in my experience with Linux, using the computer is less of a hit-and-miss affair - if it works a certain way today, it will work the same way tomorrow and the day after. Things like program crashes are much more easily reproducible, and as a result, easier to resolve. All of this requires a certain level of commitment and even enthusiasm to learn more about the workings of the computer and the operating system though, and that is probably not for everyone.
The other options available to power users with PCs are basically FreeBSD, Solaris, OpenSolaris, and Windows. Windows is sometimes not transparent enough, and things do not always work the way Microsoft says they do. Solaris' hardware support is still somewhat limited, making it a difficult choice to justify. OpenSolaris is perhaps still too much of a newcomer to be judged fairly, though good work is being done, albeit slowly, by projects such as Nexenta GNU Solaris, an OpenSolaris distribution providing the GNU tools instead of the Solaris defaults.
The End-user category is generally for people who don't care that their operating system limits their ability to troubleshoot faults, learn more about the computer, or replace sub-components of the operating system. Many of these machines are company PCs used to run a selection of business applications, in whose support and development the companies have invested a lot of time and money.
20 years ago Microsoft was probably deliberately lax in prosecuting pirates of their operating systems and application software. Most of these "pirates" were kids who were learning how computers worked, and by the time they made it into the workforce, Windows was what they expected on their computers and in the data centers. Today this large following of Windows users equates to a workforce who will pretty much need to be retrained if one suddenly wanted to switch to a different operating system.
You could argue that the training is not so hard - many applications are web based these days, sending an email remains pretty much the same, etc. - but this only holds true in theory: the moment many users are faced with a login screen that looks different, they feel lost. Suddenly the user name field is case sensitive, and your lost user turns into a frustrated user who hasn't even had a chance to try the applications yet. Worse, every application's menus are laid out differently - and that is only once they learn which application does what. Space characters in file names suddenly cause havoc, and the slash in directory paths is turned around. To the end user it feels as if nothing works the way they expect, and it can take months before they become productive again.
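The path differences are more than cosmetic. A quick sketch using Python's standard pathlib module (the file names here are made up for illustration) shows that the two families of operating systems parse paths by entirely different rules:

```python
# The "turned around" slash reflects a different path grammar:
# pathlib models the two conventions as separate path types.
from pathlib import PureWindowsPath, PurePosixPath

win = PureWindowsPath(r"C:\Users\alice\My Report.txt")
posix = PurePosixPath("/home/alice/My Report.txt")

print(win.parts)    # ('C:\\', 'Users', 'alice', 'My Report.txt')
print(posix.parts)  # ('/', 'home', 'alice', 'My Report.txt')

# The space in the file name is legal on both systems, but in a Unix
# shell it must be quoted or escaped - the havoc end users run into.
print(posix.name)   # 'My Report.txt'
```

Drive letters versus a single root, and backslash versus forward slash, are exactly the kind of small differences that trip up a user switching platforms.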
I must agree with A. Russel Jones, who concludes that for people who don't want to consider whether their applications will work with the window manager they have installed, learn how to enter commands in a terminal, or understand why security is a good thing when it seems to just make life harder, a Windows-based PC is often the best option.
Linux can be carefully configured to simulate this in a controlled environment where an IT department makes all the design decisions, installs all the software, and makes sure that all of the hardware in use is supported - provided you have people with the necessary skills. Setting up such an ideal configuration takes time and effort, which means there is a price tag attached, and it could be more expensive than just running Windows. In the end, business decisions always come back to the numbers: what are the benefits over the alternative, and how do the long-term costs of the various options compare?
The Pro and Server editions of MS Windows have the same basic limitations that the Home edition has: the Server edition is no more stable, powerful, transparent, manageable, or secure than the Home edition, and this puts it into the "End-user" category. It also fits the "keep-it-simple" tendency of this category, even when installed on a rack-mounted data-center server that never crashes.
The power user is probably best off: they have the most freedom to choose which operating system and applications to run and how to configure their computers, basically because they are able to support themselves when things stop working.
So it basically boils down to the age-old adage: use the right tool for the job. I suspect, though, that nine times out of ten the categories I described here will hold true.
Disclaimer: This article necessarily relies heavily on generalization. Please keep that in mind if you decide to comment.