Terminal Server Environment Basic Concepts

by Wilco van Bragt [Published on 28 Feb. 2006 / Last Updated on 28 Feb. 2006]

Most IT professionals think that Terminal Servers are complex and difficult to administer. This impression is reinforced by stories of users complaining about poor performance, strange behavior and so on. This article describes the basic concepts to keep in mind for Terminal Server environments. Adopting these concepts in your environment will result in an easily manageable terminal server farm.

Introduction

What generally causes administrators to believe that Terminal Server environments are so complex and difficult to manage? Mostly, this belief is based on stories of Terminal Server environments that are unstable because applications and printers unsuitable for Terminal Server environments have been implemented. This can result in crashing applications, a failing spooler service or even the infamous blue screen of death (BSOD).

Also, although we will not discuss this behavior in this article, even in infrastructures where applications and printers are fully under control, administrators often notice strange behavior and non-reproducible user errors.

The number one cause of this behavior is that the terminal servers in the infrastructure are not 100% identical. This is why errors are non-reproducible or very difficult to reproduce: the error occurs only on one or a few servers, because the configuration on those servers differs from the others.

Concept Rule Number 1: Automate the Installation and Configuration

Why should the servers be 100% identical? If I buy a new server, it is almost impossible to buy exactly the same hardware. But when we talk about the concept that the servers should be the same, we are not talking about the hardware; we are talking about the installation and configuration of all software components on these servers.

How can you ensure that all your servers are exactly the same where the software is concerned? There is only one solution: automate the building and configuration of the servers completely. By completely I mean everything: a server should be built and configured without any direct manual intervention on the server itself.

In a nutshell, this means that the installation of Windows 2003, configured specifically for a Terminal Server deployment, should be automated. This can be done via unattended installations, cloning, and third-party products. I am not going to discuss this part in detail, because there are already wonderful documents about unattended Windows installation on several websites.

If you are using Citrix Presentation Server (or another SBC product), this product should also be installed silently. I will discuss the unattended installation of Citrix in another article on this website later on.

Other software that should be available on the server, like monitoring tools and antivirus, should also be deployed unattended. This is comparable to deploying the applications which will be published to the users. Best practices for this kind of deployment will also be described in another article, which will be available soon.

Chronology of the Application Installation

Only one aspect of (management) applications is significant for this article, and it needs to be described to understand the following rule.

Manufacturers release a large number of applications, and these applications often use shared resources or support software. Each installation frequently installs its "own" files into these shared directories, overwriting earlier or other versions of the existing files. Normally a manufacturer does not document every file its software installs or uses. In other words, it is almost impossible to know which files are installed or overwritten when you install a new application on your system.

So if you install a new application on your existing Terminal Server, there is a good chance this installation will overwrite files which are also used by one or more existing applications. This can cause existing applications to stop functioning, or their behavior may change.

Let's focus on these behavior changes. If we install the applications in a different order on several machines, the behavior of the applications can be (and often is) different. This explains why lots of problems are not reproducible: they only exist on one machine, because the installation order causes the different behavior.

To prevent this, all applications should be installed in exactly the same order on all machines. This installation order should be tested so you can ensure that the applications work properly and guarantee that they behave exactly the same way on all the servers. In SBC terms this is called the "chronology of the application installation". As mentioned earlier, the best method to achieve this is to use automated installation packages for these applications.

These packages should be deployed in such a way that the chronology is guaranteed. Making a numbered list is one of the easiest ways, but remember that some deployment tools (like Altiris) use their own internal system to create the deployment order. So if you assign more than one job to the target machine, the jobs could be installed in a different order than you meant them to be.
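The numbered-list approach can be sketched as follows. This is an illustrative model only (not the API of Altiris or any real deployment tool): every package gets an explicit sequence number assigned by the administrator, and the jobs are sorted on that number before dispatch, so every server receives the same order.

```python
# Illustrative sketch: guarantee the chronology of application
# installation with explicit sequence numbers, independent of the
# order in which a deployment tool hands us the jobs.

def ordered_jobs(jobs):
    """Return package names sorted by their explicit sequence number.

    `jobs` is a list of (sequence_number, package_name) tuples. Using
    spaced numbers (10, 20, 30, ...) leaves room to slot a new package
    between existing ones later without renumbering everything.
    """
    return [name for number, name in sorted(jobs)]

# Hypothetical package names; the same list yields the same order on
# every target server.
jobs = [(30, "LineOfBusinessApp"), (10, "Antivirus"), (20, "MonitoringAgent")]
print(ordered_jobs(jobs))
```

The point is that the ordering lives in data you control, not in the deployment tool's internal scheduling.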

Separation of the Installation and Configuration

A normal application installation actually consists of two parts. The first part adds the binary files of the application to the system, plus some settings that make it possible to start the application. The second part configures the application itself so it satisfies your needs: think of connections to database servers, file shares, view settings in the application, the placement of toolbars, and so on.

Although it sounds attractive to combine these two parts, it is actually not a good idea. By separating the installation of the binary files from the configuration for your company's needs, you are much more flexible when those needs or the infrastructure change. For example, if the database is moved to another server, you only need to adjust the configuration part, not the installation package.
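The database example can be made concrete with a small sketch. All names here (server names, section names) are hypothetical: the installation package lays down the binaries once, while a separate configuration step generates the environment-specific settings, so moving the database only touches this step.

```python
# Illustrative sketch: environment-specific configuration generated
# separately from the installation package. If the database moves to
# another server, only the arguments to this step change.
from configparser import ConfigParser
from io import StringIO

def configure_application(database_server, file_share):
    """Build the application's connection settings as an INI fragment."""
    config = ConfigParser()
    config["Connections"] = {
        "DatabaseServer": database_server,
        "FileShare": file_share,
    }
    buffer = StringIO()
    config.write(buffer)
    return buffer.getvalue()

# Hypothetical server names; rerun with new values after a migration.
print(configure_application("SQL01", r"\\FILESRV\AppData"))
```

The installation package never needs to be rebuilt for an infrastructure change; only this configuration step is rerun with new values.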

Separation of the User and the Machine

Even more important is the separation of the user part and the machine part. Every application consists of a machine part and a user part. The machine part is normally installed on one of the local disks in the server and in registry settings under HKEY_LOCAL_MACHINE and/or HKEY_CLASSES_ROOT. The following properties apply to the machine part: the data is static, errors have a high impact, roughly 20% of all changes apply to it, and no backup is necessary. The machine part can be managed via machine policies, unattended installations and configuration scripts (for example a startup and shutdown script).

The user part of an application can be found in the user profile, on the user home drive, and in the registry under HKEY_CURRENT_USER. The user part is dynamic, errors have a low impact on the organization as a whole, about 80% of all changes are allocated to this part, and a backup is necessary. User policies, login scripts and third-party software are the tools used to manage this part.
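The split described above can be summarized in a small sketch. The setting names are hypothetical; the point is that machine-scoped settings are applied once per server (startup script or machine policy), while user-scoped settings are applied at every logon (login script or user policy).

```python
# Illustrative model of the machine/user split: each setting belongs to
# exactly one part, and each part is applied at a different moment.

MACHINE_SETTINGS = {            # static, high impact, ~20% of changes
    "LicenseServer": "LIC01",   # hypothetical machine-wide setting
}
USER_SETTINGS = {               # dynamic, low impact, ~80% of changes
    "DefaultPrinter": "PRN-Sales",  # hypothetical per-user setting
}

def settings_for(phase):
    """Return the settings a script should apply in the given phase:
    'server-startup' for the machine part, anything else per logon."""
    return MACHINE_SETTINGS if phase == "server-startup" else USER_SETTINGS

print(settings_for("server-startup"))
print(settings_for("user-logon"))
```

Keeping the two sets strictly apart is what makes it possible to manage them with different tools (policies versus login scripts) without them interfering.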

Shadow Key

If applications are installed on a Terminal Server, this is normally done via Add/Remove Programs or via the change user /install command, which puts the Terminal Server in installation mode. When an application in installation mode writes registry keys to HKEY_CURRENT_USER, Windows also writes those keys to HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Terminal Server\Install\Software. The values in this key, called the Shadow key, are applied to a user when he logs on to the Terminal Server if no corresponding keys are found in his profile, or if the keys in his profile carry an older timestamp than the Shadow key values.

Because the timestamp is checked, you may experience strange behavior when a new server is added to your infrastructure. The values in the new server's Shadow key carry the timestamp of the moment they were installed. Users can have different settings in their profile, but with an older timestamp. When these users log on to the new server, the Shadow key settings are written into their profile because their date is newer than the settings in the user profile, which is obviously unwanted. Microsoft acknowledges this behavior in knowledge base article 297379, in which three solutions are proposed.
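The timestamp check can be modeled in a few lines. This is a simplified simulation, not the real registry mechanism: each value carries a timestamp, and a Shadow key value is copied into the profile whenever it is missing there or is newer than the profile's copy.

```python
# Simplified model of the Shadow key timestamp check (not the real
# Windows registry API): shadow values overwrite profile values when
# their timestamp is newer.

def apply_shadow(profile, shadow):
    """profile and shadow map a value name -> (timestamp, value)."""
    for name, (shadow_time, shadow_value) in shadow.items():
        profile_entry = profile.get(name)
        if profile_entry is None or shadow_time > profile_entry[0]:
            profile[name] = (shadow_time, shadow_value)
    return profile

# The user customized a setting in 2005 ...
profile = {"Toolbar": (2005, "user preference")}
# ... but a server built in 2006 carries a newer Shadow key timestamp,
# so the vendor default silently overwrites the user's preference.
apply_shadow(profile, {"Toolbar": (2006, "vendor default")})
print(profile["Toolbar"])
```

Running this shows the user's preference being replaced by the newer default, which is exactly the unwanted behavior a newly built server introduces.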

  1. Use Sysprep and "image" new servers. This ensures that new servers inherit the registry timestamps from the original build.
  2. Write to HKEY_CURRENT_USER\Software in Install mode with the system clock set in the past.
  3. Remove shadow keys that could potentially overwrite user preferences.

The first solution is not desirable because changes to your environment are difficult to manage when applications are baked into an image. Solutions two and three work perfectly. When using solution two, your unattended scripts need some logic to change the timestamp of the new values the application creates in the Shadow key. When using solution three, it is preferable to delete all keys in the Shadow key; the values should be monitored during packaging of the application and then added to the user login scripts (or something similar). In my opinion solution three is the best, because it is the easiest way to manage the values.

Use a DTAP Environment

DTAP stands for Develop, Test, Accept and Produce. Because our main goal is to guarantee that all servers remain 100% identical with every new setting, each application installed should be tested rigorously. To ensure this, several environments are needed. Sometimes the Development and Test environments can be combined for the Terminal Servers. Depending on your complete infrastructure, some back-end infrastructure (like file, Exchange and some database functions) can also be combined, but try to separate these functions into at least two parts: one for Development/Test and one for Acceptance/Production.

Profiles and Printer Drivers

The last two rules for Terminal Server basics are to restrict the usage of profiles and printer drivers. Several articles on these two topics have already been published on MSTerminalServices.org: "Terminal Server and the Profile Challenge" and "Can Third Party Software Solve your Terminal Server Printing Problems?", so it is not necessary to go into detail about them in this article.

Conclusion

To keep your Terminal Server environment in good shape and easily manageable, your servers should be 100% identical. This is the only way to ensure that all servers respond the same way to all requests. Basic concepts like chronological installations, separating the user and machine parts, handling the Shadow key, and using a DTAP environment are necessary to achieve the goal of making and keeping your servers identical.

The Author — Wilco van Bragt


After working for a couple of consulting firms as a senior technical consultant and technical project leader, Wilco started his own freelance company, VanBragt.Net Consultancy, in April 2008. Wilco is certified in Citrix (CCIA, CCEE/CCEA, CCA), Microsoft (MCITP, MCTS, MCSE, MCSA) and Prince2 (Foundation). Wilco is also an RSVP (RES Software Valued Professional), a Citrix CTP (Citrix Technology Professional) and a Microsoft MVP (Most Valuable Professional) on RDS.
