6 May 2012
Micro ORMs demonstrate that less is usually… less
Micro ORMs such as Dapper, Massive and PetaPoco are an increasingly fashionable solution to the problem of bridging the database and object worlds. They are quick to set up and can produce some impressive benchmark results. However, it’s important to understand the trade-offs involved in using a micro ORM as opposed to something “heavier” with more advanced functionality.
Each Micro ORM offers a slightly different feature set, but they all tend to take a minimalist approach where the focus is on mapping result sets onto typed objects as quickly as possible. Performance is everything with a Micro ORM, but this focus can limit their value to very specific use cases.
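The core idea is small enough to sketch in a few lines. The libraries discussed here are .NET, but the pattern is language-agnostic, so here is a rough Python illustration (the `User` class, table and helper are invented for the example) of what a micro ORM’s typed query method boils down to: run a query and map each row onto a typed object by column name.

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str

def query(conn, cls, sql, params=()):
    # Execute the statement, then map each row onto the target type
    # by matching column names to constructor arguments. This is
    # roughly what a micro ORM's typed query method does for you.
    cur = conn.execute(sql, params)
    columns = [d[0] for d in cur.description]
    return [cls(**dict(zip(columns, row))) for row in cur.fetchall()]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
users = query(conn, User, "SELECT id, name FROM users")
```

That one helper function is, in essence, the whole value proposition: a result set in, typed objects out, with nothing in between.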
We’re not all like Stack Overflow
Stack Overflow’s Dapper.Net library has done much to popularize Micro ORMs, but it emerged in response to a particular set of performance-related problems.
The Stack Overflow site relies on simple queries based on clustered indexes. These queries tend not to put a lot of load on a database server, and the bottlenecks were associated more with processing the results of the query in the .Net stack. At the heart of these performance problems was the fact that LinqToSql was generating a fresh dynamic query for every single database call.
The Stack developers found that a custom data access solution based on query caching and rapid object mapping handled peak loads much better. This is a classic case of custom development being used to solve a specific technical problem where a more generic technology has failed. It happens. It doesn’t necessarily follow that the generic technology is unfit.
How much does performance really matter?
Stack Overflow may have leveraged a Micro ORM to improve page rendering times, but most scenarios don’t have to deal with that kind of load. Everybody wants fast performance, but just how fast is “fast”? If you’re not careful you can waste time chasing performance gains that aren’t adding any value.
It’s wise to consider the wider pattern of data access when choosing a strategy, i.e. do you really need fast data retrieval above all else? Micro ORMs concentrate on getting a database row into an object as fast as possible and provide little support for complex querying and updating scenarios. They are also less able to form the basis of a domain object model that encapsulates business logic and validation without a fair amount of boilerplate code.
Making your DBAs wince
One thing that worries me about Micro ORMs is their apparent reliance on hard-coded SQL for anything beyond the simplest of selects. Maybe I’m showing my age, as this isn’t quite the performance killer it used to be before the large database engines started caching queries more aggressively. There are even those who advocate hard-coded statements on the basis that they are simple to implement and test.
That said, hard-coded SQL still makes seasoned DBAs wince, and for good reason. Carelessly built statements can lead to all sorts of injection problems, and maintenance can become very difficult if you scatter data access statements throughout your code.
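The injection risk is worth being precise about: it comes from how values get into the statement, not from SQL living in your code. A hand-written statement is safe enough if values always travel as bound parameters rather than being concatenated into the SQL text, which is also what the micro ORMs themselves rely on. A Python sketch of the difference (the table and input are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT)")
conn.execute("INSERT INTO accounts VALUES ('bob')")

user_input = "bob' OR '1'='1"

# Dangerous: the input is spliced into the SQL text, so a crafted
# value changes the meaning of the statement and matches every row.
unsafe = conn.execute(
    "SELECT name FROM accounts WHERE name = '%s'" % user_input).fetchall()

# Safe: the value is sent as a bound parameter, so it is only ever
# treated as data and the crafted input matches nothing.
safe = conn.execute(
    "SELECT name FROM accounts WHERE name = ?", (user_input,)).fetchall()
```

Here the concatenated version returns the whole table while the parameterized version correctly returns no rows, with identical SQL in both cases.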
At least hard-coded SQL allows you to understand what’s being fired at a database. The Stack developers freely admit that much of their performance gain was achieved by optimizing the SQL that was being generated by their Linq statements. No matter what data access technology you’re using, it’s important to understand what’s going on under the hood, and that applies to pretty much any ORM technology out there.
Choosing the right tool
Micro ORMs often feel like little more than wrappers around creating an ADO command and reader. Sometimes this is all you really need. If you want really fast performance for predictable data access scenarios, then the overhead and complexity of a “full fat” ORM may seem like overkill. In this case, a Micro ORM can be a useful middle ground between a full ORM and rolling your own solution, providing some basic tooling to bridge the gap between your database tables and application objects.