Sai Gopal Wordpress Blog

Sunday, July 11, 2010

8. Working with Style Sheet Files

Before you start defining your style sheets, it's important to know how to create and use the files that will contain them. In this chapter, you'll learn how to create a style sheet file, and then apply that style sheet to an individual element, a whole Web page, or an entire Web site.

You'll learn how to create the content of your CSS style sheets in the chapters that follow.

Friday, July 9, 2010

Temporary tables in SQL Server

Temporary Tables

By Bill Graziano on 17 January 2001 | 15 Comments | Tags: Application Design, Table Design

Sophie writes "Can you use a Stored Procedure to open a table and copy data to a sort of virtual table (or a records set) so that you can change the values with and not affect the actual data in the actual table. And then return the results of the virtual table? Thanks!" This article covers temporary tables and table variables and is updated for SQL Server 2005.

I love questions like this. This question is just a perfect lead-in to discuss temporary tables. Here I am struggling to find a topic to write about and I get this wonderful question. Thank you very much Sophie.
Temporary Tables

The simple answer is yes, you can. Let's look at a simple CREATE TABLE statement:

CREATE TABLE #Yaks (
YakID int,
YakName char(30) )

You'll notice I prefixed the table with a pound sign (#). This tells SQL Server that this table is a local temporary table, visible only to the session that created it. When I close this session, the table will be automatically dropped. You can treat this table just like any other table, with a few exceptions. The only major one is that you can't have foreign key constraints on a temporary table; the others are covered in Books Online.

Temporary tables are created in tempdb. If you run this query:

CREATE TABLE #Yaks (
YakID int,
YakName char(30) )

select name
from tempdb..sysobjects
where name like '#yak%'

drop table #yaks

You'll get something like this:

name
------------------------------------------------------------------------------------
#Yaks_________________________ . . . ___________________________________00000000001D

(1 row(s) affected)

except that I took about fifty underscores out to make it readable. SQL Server stores the object with a unique number appended to the end of the name. It does all this for you automatically; you just have to refer to #Yaks in your code.

If two different users both create a #Yaks table each will have their own copy of it. The exact same code will run properly on both connections. Any temporary table created inside a stored procedure is automatically dropped when the stored procedure finishes executing. If stored procedure A creates a temporary table and calls stored procedure B, then B will be able to use the temporary table that A created. It's generally considered good coding practice to explicitly drop every temporary table you create. If you are running scripts through SQL Server Management Studio or Query Analyzer the temporary tables are kept until you explicitly drop them or until you close the session.
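That stored-procedure behavior is worth a quick sketch. The procedure names and sample data below are hypothetical; the pattern is what matters:

```sql
-- Procedure B references a temporary table it does not create.
-- Deferred name resolution lets this compile; #Yaks must exist
-- in the calling session at run time.
CREATE PROCEDURE ListYaks
AS
    SELECT YakID, YakName
    FROM #Yaks
GO

-- Procedure A creates the table, then calls B
CREATE PROCEDURE LoadYaks
AS
    CREATE TABLE #Yaks (
        YakID int,
        YakName char(30) )

    INSERT INTO #Yaks (YakID, YakName) VALUES (1, 'Gertrude')

    EXEC ListYaks        -- B sees the #Yaks that A created

    DROP TABLE #Yaks     -- explicit drop; it would be dropped
                         -- automatically when A finishes anyway
GO
```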

Now let's get back to your question. The best way to use a temporary table is to create it and then fill it with data. This goes something like this:

CREATE TABLE #TibetanYaks(
YakID int,
YakName char(30) )

INSERT INTO #TibetanYaks (YakID, YakName)
SELECT YakID, YakName
FROM dbo.Yaks
WHERE YakType = 'Tibetan'

-- Do some stuff with the table

drop table #TibetanYaks

Obviously, this DBA knows their yaks as they're selecting the famed Tibetan yaks, the Cadillac of yaks. Temporary tables are usually pretty quick. Since you are creating and deleting them on the fly, they are usually only cached in memory.
Table Variables

If you are using SQL Server 2000 or higher, you can take advantage of the TABLE variable type. These are similar to temporary tables but offer more flexibility, and small ones can often be served entirely from memory (despite a common belief, though, they are not guaranteed to stay there; they are backed by tempdb just like temporary tables). The code above using a table variable might look like this:

DECLARE @TibetanYaks TABLE (
YakID int,
YakName char(30) )

INSERT INTO @TibetanYaks (YakID, YakName)
SELECT YakID, YakName
FROM dbo.Yaks
WHERE YakType = 'Tibetan'

-- Do some stuff with the table

Table variables don't need to be dropped when you are done with them; they go out of scope automatically when the batch completes.
Which to Use

* If you have fewer than 100 rows, generally use a table variable. Otherwise use a temporary table. This is because SQL Server won't create statistics on table variables.
* If you need to create indexes on it, then you must use a temporary table.
* When using temporary tables, create them and any indexes first, and then use them. This will help reduce recompilations. The impact of this is reduced starting in SQL Server 2005, but it's still a good idea.
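As a sketch of that last point, the create-then-index-then-use ordering looks like this (the index name is illustrative):

```sql
CREATE TABLE #TibetanYaks (
    YakID int,
    YakName char(30) )

-- Create the index immediately, before the table appears in any query
CREATE CLUSTERED INDEX IX_TibetanYaks_YakID
    ON #TibetanYaks (YakID)

-- Only now start touching the table
INSERT INTO #TibetanYaks (YakID, YakName)
SELECT YakID, YakName
FROM dbo.Yaks
WHERE YakType = 'Tibetan'

drop table #TibetanYaks
```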

Answering the Question

And all this brings us back to your question. The final answer to your question might look something like this:

DECLARE @TibetanYaks TABLE (
YakID int,
YakName char(30) )

INSERT INTO @TibetanYaks (YakID, YakName)
SELECT YakID, YakName
FROM dbo.Yaks
WHERE YakType = 'Tibetan'

UPDATE @TibetanYaks
SET YakName = UPPER(YakName)

SELECT *
FROM @TibetanYaks

Global Temporary Tables

You can also create global temporary tables, which are named with two pound signs; for example, ##YakHerders is a global temporary table. Global temporary tables are visible to all SQL Server connections, not just the one that created them. They are rarely used in SQL Server.
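A quick sketch of the cross-session behavior, using the ##YakHerders example (sample data is hypothetical):

```sql
-- Connection 1 creates the global temporary table
CREATE TABLE ##YakHerders (
    HerderID int,
    HerderName char(30) )

-- Connection 2 (any other session) can read and write it directly
INSERT INTO ##YakHerders (HerderID, HerderName) VALUES (1, 'Tenzin')

-- The table is dropped once the creating session ends and
-- every other session has stopped referencing it
```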
Summary

That shows you an example of creating a temporary table, modifying it, and returning the values to the calling program. I hope this gives you what you were looking for.

Thursday, July 1, 2010

Administrator's Guide to SQL Server 2005--chapter01--Take Away

Take Away
To size your server and install the software, you need to understand the variables involved in the decision and then match up the options to create three possible choices: standard, enhanced, and optimal. To do that, you need to find out what the minimal possible configurations are and what to get if money is no object. You can use the information in this section to develop your baseline, and then expand that with metrics to determine the other two options. If you make your baseline realistic and support it with the information that follows, you should be able to make a strong case for it.

The bad news is that no magic formulas tell you what to buy and how to configure it. The best way to make the hardware decisions is to detail the information in some meaningful way and examine it to come up with a solution. An additional benefit is that after you have written everything down, you can talk with vendors and other professionals intelligently about your needs.

I detail several processes here using simple spreadsheets, tables, and scripts. You can take these processes and create a programmatic system from them; remember, however, that at some level there is always an element of judgment that you will use from your own experiences and corporate environment. Every situation will differ, which is why no computer program can build your landscape for you. Some vendor programs perform this task, but they are for specific application servers, not for a generic use such as a database server. Unless your application maps to one of these calculators directly, you still need to do the work yourself.

Variables
First, build the variables from the information in this chapter. Use a separate table or spreadsheet for each and dedicate a separate grouping of these tables for each function the servers will perform.

Application
Begin with the application. The following spreadsheet is a representation of the kind of information you want to gather:

Application Title:

Application Overall Description:

Application Class (ERP, CRM, etc.):

Application Architecture Type, with description of each layer from data to presentation (remove those architectures not in use):

* Monolithic
* Client/Server
* Three-Tier
* Distributed Computing
* Service-Oriented Architecture


Landscape
Next, describe the type of environment the software will use. Record details regarding the number of sites, physical locations, and the number of users at each site:

* Farm
* Department
* Organization
* Enterprise

User Information
Create a list of the number of users expected for the application. For each location, categorize the users into types. For instance, in an ERP application, you might have 100 users that enter orders requests, 10 buyers, and so forth.

Find out the hours the users will access the system, using Greenwich mean time if you are in multiple time zones. This information is important to know to develop your disaster recovery and maintenance windows. If you discover that you have no discernable maintenance windows, you must add a duplicate system to "fail over" to so that you can apply patches and so forth.

Develop a "load profile" that gives a metric for the amount of resources each type of user will consume. In the case of a user entering purchase orders, for instance, you might evaluate the system to determine that they take 1 percent of the CPU, 14K of bandwidth, and 2K of storage per transaction. You can use those numbers to classify the users into high, medium, and low, or you can use a numeric scheme such as 1 to 10.

Finally, record the functions within the application that this classification of user accesses, especially as they relate to reporting.

Location | Type | Number | Hours | Load Profile | Functions Accessed

Decision Matrix
The process to arrive at a good decision is to record what you know and use that information to answer what you do not know. What you know is explained in the variables section, and the decision matrix helps you develop answers to what you do not know. What you know is captured in the spreadsheets you just created, and what you do not know is which edition(s) of SQL Server to install and what to install them on.

With that information filled out, you can put the variables into a matrix to determine the factors that will help you set up your systems. Assign a numeric set of values to each factor, using whatever scale is appropriate. In this example, I have set the numbers to range from 1 to 10, with 1 being a "lighter" value than 10. To further enhance the reliability of the decision, weight the factors that make sense in your environment. For instance, if speed is more important than growth, weight the user and I/O calculations higher than the projected growth values.

After you put the measurements together in a matrix, you must figure out what meets the need. You can create a single "atomic" value of a transaction and multiply that out over the variables, or you can generalize the load across peak and base values. If you are unsure of creating the atomic transaction number, it is safer to estimate a peak load and put the base at half that value. Those numbers enable you to select a system that runs at a minimal level and one that can handle the peak load. In the middle of those systems is the "average" choice. I use the peak and base method in this example.

In this chart, I take the variables I know and place them on the left. Each variable is given a score from 0 (not applicable/advisable) to 100 (perfectly suited), indicating how well that particular variable works on the choices of hardware and software. The numeric figure I use is not scientific, because I will weight the score for the environment that I have.

The application I have is a Web-based architecture with a separate reporting instance, and the landscape at the organization is a "Business" type. The storage for the application is 100GB for the first year, with an estimated 10 percent growth pattern. My testing scenario against the application involved a Fibre Channel SAN with six LUNs presented. I also used a 32-bit operating system and SQL Server edition.

For this example, I have already filled out the user profile and used that information to get the following breakdown:

Light (fewer than 10 transactions per day): 25 users

Normal (25-100 transactions per day): 350 users

Heavy (150+ transactions or heavy reporting and analysis): 100 users

Mobile (local subset of data from system with periodic sync): 15 users

Remote (clients with limited access to main system): 10 users

Each transaction (averaged) in my testing showed the following loads on these objects, per minute:

CPU: 0.7%

RAM: 100K (released after each call)

I/O: 0.05%

NIC: 25K

In some systems, each transaction generates a different load on these resources. To keep the example succinct, I averaged the transactions from the various user types based on testing. This is a valid method if your application will (over time) show an average load distribution. Your application might differ.

To find out how your real-world application loads the system, measure the various objects such as CPU and RAM over a day of average use (or test to simulate that) and get the averages. Also calculate the standard deviations for those same periods. If the standard deviations are low, you can use the averages.

If the standard deviations are high and you use the average method, you will end up with an underpowered system at critical times and an idle system at others, and a lot of unhappy users. In that situation, it is better to determine just how often the high values occur. If they occur often, optimize the system to handle the high values.
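If you log your measurements into a table, the averages and standard deviations fall straight out of T-SQL's built-in aggregates. The table and column names here are hypothetical:

```sql
-- Assumes samples were collected into a table like:
--   CREATE TABLE PerfSamples (SampleTime datetime, CpuPct decimal(5,2))
SELECT AVG(CpuPct)   AS AvgCpu,
       STDEV(CpuPct) AS StdDevCpu
FROM PerfSamples
WHERE SampleTime >= '2010-07-01' AND SampleTime < '2010-07-02'
-- A low StdDevCpu relative to AvgCpu means the average is safe to size from;
-- a high one means you should size for how often the peaks occur instead
```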

I will consider the reporting system separate from the application server, so the information I gather in this exercise is for the application server only. I will follow the same process to develop the reporting server later. Using the information above, I developed the following matrix for my example:

Loads | Values
CPU load (500 users at 0.7% per transaction) | 350.00 % of capacity
Memory (500 users at 100K per transaction) | 48.83 MB
Network bandwidth (500 users at 25K per transaction) | 12 MB
I/O load (at 0.05% per transaction) | 25 % of capacity

With the raw numbers on this chart, I can see that I need at least three and a half CPUs, and that is just for the application. That brings me to four full processors for the application and the operating system. I have to make a decision here, however, regarding growth.
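The numbers in the matrix fall out of simple multiplication. As a sketch of the arithmetic, using the per-transaction loads from the testing above:

```sql
-- Peak-load arithmetic behind the matrix
DECLARE @Users int
SET @Users = 500

SELECT @Users * 0.7          AS CpuPctOfCapacity,  -- 350.00
       @Users * 100.0 / 1024 AS MemoryMB,          -- 48.83
       @Users * 25.0 / 1024  AS NetworkMB,         -- 12.21, about 12
       @Users * 0.05         AS IoPctOfCapacity    -- 25.00
```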

After checking with the developers, I discover that the application can federate (that is, break functions onto other servers). I decide that four processors in a single machine will be enough. If the system did not federate, I would recheck my numbers, and if they still held true, I would recommend more processors. Each decision affects others, so I might repeat the load testing with 64-bit servers and see whether I experience any gains there. Adding more than four processors kicks the SQL Server edition up a notch.

The memory for the application is not a high value, but I want to keep as much cache as I can in RAM. I decide that 4GB of RAM will provide enough room for caching, locks, the operating system, and other utilities.

Evaluating the network bandwidth, I see that I have plenty of network capability on the network interface card (NIC) that I will dedicate to the clients. I always include a separate NIC in a server for backups and replication of data, so I do not need to factor that traffic on the dedicated client network interface.

My matrix also shows lots of room for growth with what is already in use over on the testing system. It is also fast, so I recommend a duplicate of that storage system for my production landscape.

Based on all this information, I see that I need a SQL Server edition that can support four processors and has the capability for Reporting Services. Because the servers can federate and I do not need more than four processors on any one server, I can get by with Standard Edition. For the mobile users, I query the developers again and find that the Express Edition will work for the laptops they use. I now have enough well-researched information to begin approaching vendors for the best deal.

Although this is a simple representation, it does demonstrate the process. You can extrapolate from the information that you see here to create a system for your production applications. Remember that if the system will house multiple applications, you need to go through these exercises for each.

Monitoring
Using this process, you can now make the decisions for your environment and develop the metrics you will use to monitor the solution. Make sure that you instrument the system so that you can track the values over an appropriate interval, such as weekly or monthly. Each metric will have its own periodicity, so make sure your collection method takes that into account. I explain the mechanics of monitoring SQL Server further along in the book, but make sure you bring up this topic with the vendor or in-house developers. You need to make sure you can monitor their software, too.

In Chapter 5, you will find that SQL Server 2005 provides a wealth of functions to monitor and track your SQL Server. You can use those features to monitor not only for growth, but also for performance.

With your system installed and configured, let's explore a little more about the tools you have to control SQL Server.

Administrator's Guide to SQL Server 2005--chapter01--Installation process

Installation Process
All Microsoft software includes a simple installation program. Gone are the days when the system administrator had to set aside an entire week to install and configure the software. What I cover here is not the entire installation process, but the overall decisions you have to make during the install.


DBA 101
You can install SQL Server multiple times, using a separate designator for each installation. Microsoft calls these multiple installations "named instances" or simply "instances." Actually, the entire software suite is not installed multiple times, just enough of the binaries to make each instance unique. This feature allows you to have multiple security and configuration options. You could keep one instance at Service Pack 1 and another at Service Pack 2, for instance. Instances also allow you to have separate administrators for the same physical hardware.





When you install SQL Server for the first time, you can choose a named instance or a "default" instance. With the default instance, your applications refer to SQL Server by the server name alone; with a named instance, the format is SERVERNAME\INSTANCENAME. So, assuming you have installed SQL Server 2005 on a server named SVR1, you refer to the SQL Server default instance as SVR1. If you install the software again, this time as a named instance called INST2, you refer to it as SVR1\INST2.
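Once connected, you can confirm which instance you actually reached (this assumes the SVR1/INST2 names from the example):

```sql
SELECT @@SERVERNAME                   AS ServerAndInstance,  -- SVR1 or SVR1\INST2
       SERVERPROPERTY('InstanceName') AS InstanceName        -- NULL for the default instance
```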

When you install another instance of SQL Server, make sure you account for the increase in resources necessary to run it. Do not commit the same hardware twice without making sure you have added enough I/O and memory to carry the increased load. Instances are a convenience, not a way to buy fewer servers for the same load.

After you have read this chapter and planned out your system landscape, you will begin the actual installation. When you begin the installation, the primary choices you make are drive layouts and service accounts.

Before you install SQL Server, make sure you have your operating system current with the latest patches and security fixes and have all the utilities you need. In previous versions of SQL Server, it was also useful to have a mail client installed, but that is no longer the case.

First, you need to decide where you want to keep the binaries of the program and where you want to store data files. The optimal solution is to keep at least these five things apart: the SQL Server program files, data files, log files, index files, and the tempdb database files.
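At the database level, that separation looks something like the following sketch. The database name, drive letters, and paths are purely illustrative; the program-file location is chosen during setup, and tempdb files are moved separately:

```sql
-- Data, log, and index files on separate drives (hypothetical paths)
CREATE DATABASE Sales
ON PRIMARY
    ( NAME = Sales_Data,  FILENAME = 'D:\SQLData\Sales_Data.mdf' ),
FILEGROUP Indexes
    ( NAME = Sales_Index, FILENAME = 'E:\SQLIndex\Sales_Index.ndf' )
LOG ON
    ( NAME = Sales_Log,   FILENAME = 'F:\SQLLogs\Sales_Log.ldf' )
-- tempdb files are relocated with ALTER DATABASE tempdb MODIFY FILE
```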

Next, pick a Windows account to run the SQL Server services. This account does not need administrative privileges on the local server, nor does it need domain or enterprise admin rights; the installation handles assigning the account the proper privileges.

At least two services will be installed. One service operates the SQL Server engine; another operates the SQL Server Agent, which automates tasks and provides alerts and notifications. Use a separate, dedicated Windows user account for each service. Set each password to something complex, set it not to expire, and do not use those accounts for anything else.

You will get one set of services for each instance; use a separate set of accounts for each instance. The reason is that certain stored procedures (Transact-SQL code that runs on the server) can be allowed to access the operating system. If these are enabled (not always a wise choice anyway) and the same account is used across all instances, you lose the ability to track which instance made changes to the system.

As you can see in Figure 1-1, when the installation program starts, it first installs the Windows Installer 2.0, the .NET Framework 2.0, and several SQL Server Setup support files. When that completes, it runs a system consistency check: a series of checks that verify there is enough drive space, that the operating system version is correct, and other housekeeping items. You can see the output on my test system in Figure 1-2.


Figure 1-1.

Figure 1-2.

After the check runs, the system presents you with a list of options for the feature set you are interested in.

Do not install everything just because you think you might need it. That takes up space and makes significant changes to the way your installation is automatically configured. You can always come back and add features to the software later using this same installation program. On the other hand, if you are installing a test system (not production!) now, you can select everything. If you do not, we will add it later when we come to that feature in the book. In Figure 1-3, I've selected everything on this test installation.



Figure 1-3.

Next you are asked whether to install the software as a default or a named instance. You are also asked where the programs and data should live and which account to use to start the services. Because you have planned your installation, you can answer these questions intelligently.

Two options at the end of the process might give you pause. The first involves sending error reports to Microsoft when a problem occurs. The second deals with reporting how you use the SQL Server tools. In Figure 1-4, I've selected both of these options.



Figure 1-4.

Think long and hard about these choices. The first reaction most technical professionals have is to clear them, fearing a privacy compromise. If you feel uncomfortable with these options, or your organization's policy prohibits them, then by all means deselect them. If you are willing to tolerate a little Big Brother, however, the information Microsoft receives from these choices can go a long way toward making SQL Server a better, more secure platform. As you will learn in Appendix A, these installation check boxes make everyone's upgrade a little better.

If you take the defaults during the installation, you will not have any of the sample databases and code. This is desirable on a production server, but you will want to install these on development systems. To do that, go through the Control Panel in Windows and then select Add or Remove Programs. Select the Microsoft SQL Server 2005 Tools item and then click the Change button.

I usually add all the sample databases and examples on a development system, which will install the AdventureWorks database and (if you pick everything as I do) the AdventureWorks OLAP database, too.
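To confirm the samples landed, a quick catalog check works; a minimal sketch (the names shown are the standard SQL Server 2005 sample database names, and which ones appear depends on what you selected during setup):

```sql
-- List whichever sample databases were installed on this instance.
SELECT name
FROM sys.databases
WHERE name IN ('AdventureWorks', 'AdventureWorksDW');
```

An empty result simply means the samples were not selected, and you can add them through Add or Remove Programs as described above.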

You might want to make other changes after the system is installed. To do that, use the SQL Server Surface Area Configuration tool, which I cover in several chapters throughout the book.
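Many of the options that tool toggles can also be inspected or set with sp_configure. A hedged sketch; the xp_cmdshell option shown here is only an example of a surface-area feature, and enabling it on a production system is rarely wise:

```sql
-- Surface-area features are 'advanced' options, so expose them first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Example only: view (not change) the current xp_cmdshell setting.
EXEC sp_configure 'xp_cmdshell';
```

Running sp_configure with just the option name reports its current and configured values without changing anything, which makes it a safe way to audit the surface area.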

Operating System Variables
Depending on the version of Windows your server is running, you need to set various switches and operating system variables. Windows Server 2003 and later provide significant performance and security advantages over earlier versions of the Windows operating system.

Regardless of the operating system version, keep in mind a few base settings. For starters, ensure that the system runs only SQL Server. Unless the installation is quite small, such as an Express Edition installation, you do not want to put any applications on the server other than management utilities and backup software. It is tempting to put lots of applications on the new, not-yet-heavily-used SQL Server, but they can eventually conflict with SQL Server, and at the very least they will cause performance issues. Leave the SQL Server box for SQL Server.

Carefully consider the drive layout. The binaries for the operating system and other software should get their own physical drive, preferably inside the server hardware itself. The paging file should go on its own physical drive, too, and both should be kept away from the data, index, and tempdb files used by SQL Server.

If you are running one of the Windows 2000 Server editions, you need to set a few switches and options to take advantage of larger amounts of RAM. If you are using more than 3GB of memory on a 32-bit CPU, you may want to enable the /3GB switch in the operating system's BOOT.INI file. If you are using more than 4GB of RAM, you may want to use the /PAE switch as well.
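For illustration, a BOOT.INI entry with both switches might look like the following. Treat this as a sketch, not a drop-in line: the ARC path (the multi/disk/rdisk/partition portion) and the quoted description vary by machine.

```ini
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Microsoft Windows 2000 Server" /fastdetect /3GB /PAE
```

The switches simply append to the existing boot entry; as the next paragraph notes, check Books Online for the caveats before enabling either one.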

Several caveats apply to these switches. You can find these in Books Online under the topic of "Enabling Memory Support for Over 4 GB of Physical Memory." The various settings and processes to follow depend on the operating system, processor type (32 or 64 bit), and the specific edition you are using. The SQL Server Web site at Microsoft also has several white papers on this topic; because each service pack might change these settings, check those resources for the most up-to-date information.

Post-Installation Configuration
With the planning for the server landscape complete, the final step is to predict the growth of the system. There are lots of fancy formulas to help you with this, and most of them amount to nothing more than an educated guess.

The better option is to monitor the system when it is in place and then use those numbers to do predictive analysis in the future. I explain more about that process in a bit. For the short term, plan for the growth you expect and add 10 to 25 percent, as much as your budget will allow. Most important of all, make sure the system you purchase can accept more CPUs and memory and is generally expandable.
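As a rough illustration of that rule of thumb, you can compute a padded projection from the space currently allocated to all databases. A minimal sketch; the 25 percent buffer is my assumption here, at the top of the suggested range:

```sql
-- Total allocated file size across all databases, in MB
-- (size is stored in 8 KB pages), plus 25 percent headroom.
SELECT SUM(size) * 8 / 1024        AS CurrentSizeMB,
       SUM(size) * 8 / 1024 * 1.25 AS PlannedSizeMB
FROM sys.master_files;
```

This only captures today's allocation; the real value comes from trending these numbers month over month, as described next.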

After installation, gather metrics as you use the system; I show you how to do that in Chapter 5. Evaluate the metrics at least monthly and provide feedback to the site IT managers or whoever makes budget decisions. Most managers do not mind spending a little over time to keep the infrastructure current, as opposed to paying a huge bill to replace the entire system.

There are many other factors in this decision process, but these are the primary ones to keep in mind. The "Take Away" section that follows shows a more complete exercise.

I have explained the versions and their capabilities and detailed the hardware components. Now it is time to put it all together and build your environment. In the following section, I have developed some processes that will help you document your variables, create a decision matrix, and then create a monitoring system for future growth.