12 February 2008
Improving ASP.NET Performance: SQL Server Database access
Your database interactions are among the most expensive operations you’ll perform in an ASP.NET application. There are a number of best practices that you should be following when coding a scalable application for SQL Server – here are some of the rules that I swear by.
1. The golden rule for connections: late in and early out
Make sure that any database connection is opened at the last possible moment before you need it and closed the instant you have finished with it. Keeping connections hanging around is a waste of resources – something else could be using them instead.
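As a minimal sketch of the rule – assuming a hypothetical Orders table and a local connection string – all the preparation happens before Open(), and Dispose() closes the connection the moment the work is done:

```csharp
using System;
using System.Data.SqlClient;

public static class OrderData
{
    // Illustrative connection string – substitute your own.
    private const string ConnStr =
        "Data Source=.;Initial Catalog=Shop;Integrated Security=SSPI";

    public static int CountOrders()
    {
        // Build the command first; no connection is held yet.
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
        {
            conn.Open();                      // late in: open at the last moment
            return (int)cmd.ExecuteScalar();
        }                                     // early out: Dispose() closes it immediately
    }
}
```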
2. Use stored procedures
Stored procedures – when written properly – have their execution plans compiled and cached by SQL Server the first time they run, so subsequent calls execute faster than firing ad-hoc SQL at the server. There are a number of other reasons why stored procedures are important: they allow you to assert a consistent and robust security model more easily; they enable an architecture that separates data manipulation code from business logic; they reduce the amount of text that has to be sent to the server for each database instruction; and, when called with typed parameters, they protect you from SQL injection attacks – a major risk for applications that use ad-hoc SQL.
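Calling a stored procedure from ADO.NET looks like this – dbo.GetCustomerOrders and its parameter are hypothetical, but the pattern is the point: the parameter travels as a typed value, never concatenated into SQL text, which is what closes the injection hole:

```csharp
using System.Data;
using System.Data.SqlClient;

public static void PrintCustomerOrders(string connStr, int customerId)
{
    using (var conn = new SqlConnection(connStr))
    using (var cmd = new SqlCommand("dbo.GetCustomerOrders", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        // Typed parameter – the value is never spliced into the SQL text.
        cmd.Parameters.Add("@CustomerId", SqlDbType.Int).Value = customerId;
        conn.Open();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                // ... consume each row ...
            }
        }
    }
}
```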
3. Only fetch the data that you really need
Too many columns or rows in your result set create unnecessary traffic between the application and the database server. Never use SELECT * – always name the specific columns you need, and use a WHERE clause to limit the number of rows returned.
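In practice that means your command text names columns explicitly and filters server-side – the Customers table and its columns here are illustrative, and conn is assumed to be an open SqlConnection:

```csharp
using System.Data;
using System.Data.SqlClient;

// Bad:  "SELECT * FROM Customers"  – drags every column over the wire.
// Good: name only what you need, and let WHERE trim the rows.
var cmd = new SqlCommand(
    "SELECT CustomerId, CompanyName FROM Customers WHERE Country = @Country",
    conn);
cmd.Parameters.Add("@Country", SqlDbType.NVarChar, 40).Value = "UK";
```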
4. Always Close() and Dispose() your data access objects
Data access objects wrap unmanaged resources that need to be explicitly released. The Close() and Dispose() methods are provided to give the developer the opportunity to tidy up – the rule of thumb is that if these methods exist on an object, you should call them when you are finished with it. At the very least, make sure they are called in a finally block (or via a using statement, which calls Dispose() for you) whenever data access objects are being used.
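A sketch of the finally-block pattern – the query text is a placeholder, and connStr is assumed to be defined elsewhere:

```csharp
using System.Data.SqlClient;

SqlConnection conn = null;
SqlDataReader reader = null;
try
{
    conn = new SqlConnection(connStr);
    conn.Open();
    var cmd = new SqlCommand("SELECT CustomerId, CompanyName FROM Customers", conn);
    reader = cmd.ExecuteReader();
    // ... consume the reader ...
}
finally
{
    // Runs whether or not an exception was thrown above.
    if (reader != null) reader.Close();
    if (conn != null) conn.Close();   // returns the connection to the pool
}
```

The using statement shown in the earlier examples is shorthand for exactly this try/finally, which is why it is usually the tidier choice.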
5. Consider caching static data
Every application has static data that rarely changes alongside data that is far more volatile. Given that database access is so expensive, caching the static data will reduce the number of trips to the database in any given operation.
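In ASP.NET the built-in cache is a natural home for this. A sketch, assuming a hypothetical LoadCountriesFromDatabase() data access call and an arbitrary cache key:

```csharp
using System;
using System.Data;
using System.Web;
using System.Web.Caching;

public static DataTable GetCountries()
{
    Cache cache = HttpRuntime.Cache;
    var countries = (DataTable)cache["Countries"];    // "Countries" is an arbitrary key
    if (countries == null)
    {
        countries = LoadCountriesFromDatabase();      // hypothetical data access call
        cache.Insert("Countries", countries, null,
                     DateTime.UtcNow.AddHours(12),    // static data: refresh twice a day
                     Cache.NoSlidingExpiration);
    }
    return countries;
}
```

The expiry time is a judgement call: the more static the data, the longer it can safely sit in the cache.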
6. Consider your data design and test it for large amounts of data
Data design can be a common cause of system bottlenecks, particularly if it hasn’t been load tested. Insufficient indexing can cause queries to suddenly start slowing down once a table has a few million records in it. Over-normalisation of a data structure can cause similar problems when a system has to construct too many joins across tables. Any data design is a trade-off between a number of different factors, but always ensure that your data design is fit for purpose by road testing it on a “worst case scenario” data sample.
7. Reduce server chatter
Many methods can cause unnecessary to-ing and fro-ing between an application and server, delaying execution. Some operations, such as deriving parameters, trigger implicit round trips to the database server. Setting NOCOUNT to ON at the top of your stored procedures switches off the “rows affected” messages that SQL Server otherwise sends back after every statement – pure overhead in a live application. And if all you want is a single result from a stored procedure, use ExecuteScalar rather than opening a data reader.
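The ExecuteScalar half of that advice looks like this – dbo.GetOrderCount is a hypothetical procedure (with SET NOCOUNT ON as its first statement) that returns a single integer:

```csharp
using System.Data;
using System.Data.SqlClient;

public static int GetOrderCount(string connStr)
{
    using (var conn = new SqlConnection(connStr))
    using (var cmd = new SqlCommand("dbo.GetOrderCount", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        conn.Open();
        // One value back, one round trip – no reader, no result-set plumbing.
        return (int)cmd.ExecuteScalar();
    }
}
```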
8. Use connection pooling
Ensure that all your database connections use the same security context and connection strings so they can take advantage of SQL connection pooling. This ensures that new connections are taken from an existing pool of connections and that old connections are returned to the pool once closed. Pooling reduces the overhead of having to establish a brand new database connection from scratch.
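The key detail is that pooling is keyed on the exact connection string – vary it (or the security context) and you get a second, separate pool. A sketch with an illustrative connection string:

```csharp
using System.Data.SqlClient;

// Every connection built from this exact string shares one pool.
// Pool sizes are tunable in the string itself.
const string ConnStr =
    "Data Source=.;Initial Catalog=Shop;Integrated Security=SSPI;" +
    "Min Pool Size=5;Max Pool Size=100";

using (var conn = new SqlConnection(ConnStr))
{
    conn.Open();    // taken from the pool if one is free
}                   // Close()/Dispose() returns it to the pool, not the server
```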