Integrating MvcMiniProfiler and LLBLGenPro

MvcMiniProfiler is a lean and mean mini profiler for MVC 3 that shows profiling results on each page at runtime. Besides the custom steps you can seed everywhere in your code, it supports database profiling as well. Out of the box it supports ADO.NET, LINQ to SQL and Entity Framework. LLBLGenPro, my favorite ORM, isn’t supported though, and it won’t work just like that.

Luckily, it turns out, it requires just a little effort to integrate MvcMiniProfiler into LLBLGenPro.

How MvcMiniProfiler database profiling works

The way it works is that it wraps DbConnection, DbCommand and the other Db[Stuff] classes and records execution times by tracking their inner workings. Here is an example from the MvcMiniProfiler documentation showing how to start:

public static DbConnection GetOpenConnection()
{
    var cnn = CreateRealConnection(); // A SqlConnection, SqliteConnection ... or whatever

    // wrap the connection with a profiling connection that tracks timings 
    return MvcMiniProfiler.Data.ProfiledDbConnection.Get(cnn, MiniProfiler.Current);
}

If the client calls DbConnection.CreateCommand on a ProfiledDbConnection instance returned from the method above, it gets a wrapped version of whatever command the original connection returns, and so on. There is also a way to create a DbCommand manually through the ProfiledDbCommand constructor.
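For completeness, wrapping a command manually might look something like this sketch (it uses the same ProfiledDbCommand constructor shown later in this post; the connection string is made up for the example):

using System.Data.Common;
using System.Data.SqlClient;
using MvcMiniProfiler;
using MvcMiniProfiler.Data;

public static class ProfiledCommandFactory
{
    // Wraps a hand-created command; the connection string is hypothetical.
    public static DbCommand CreateProfiledCommand(string sql)
    {
        var connection = new SqlConnection("Data Source=.;Initial Catalog=MyDb;Integrated Security=True");

        DbCommand command = connection.CreateCommand();
        command.CommandText = sql;

        // Wrap the real command so its execution time is reported to the current profiler.
        return new ProfiledDbCommand(command, connection, MiniProfiler.Current);
    }
}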

The support for LINQ to SQL and Entity Framework is done in a similar manner.

This brings us to the point: why can’t I just use the same approach with LLBLGenPro?

Integrating MvcMiniProfiler with LLBLGenPro – why the same approach doesn’t work

The major problem with the LLBLGenPro and MvcMiniProfiler integration is that LLBLGenPro doesn’t use the DbConnection.CreateCommand method to create commands from an existing connection. Instead it creates an instance of the proper DbCommand derived class directly and assigns a connection to it afterwards. The wrapping approach therefore fails, because it would end up assigning a ProfiledDbConnection to, say, a SqlCommand.
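To make the mismatch concrete, here is a simplified illustration (not LLBLGenPro’s actual code) of the pattern that breaks:

using System.Data.Common;
using System.Data.SqlClient;

public static class WhyWrappingBreaks
{
    // Simplified: the ORM creates a concrete SqlCommand itself and assigns
    // the connection afterwards instead of calling connection.CreateCommand().
    public static void AssignConnection(DbConnection connection)
    {
        var command = new SqlCommand("SELECT 1");

        // If 'connection' is a ProfiledDbConnection this cast throws an
        // InvalidCastException - it derives from DbConnection, not from SqlConnection.
        command.Connection = (SqlConnection)connection;
    }
}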

So a bit more work is required to match them.

The code for the adapter scenario

1. Create a DynamicQueryEngine derived class. Note: this class is database specific; if you work with e.g. SQL Server you’ll find it in the SD.LLBLGen.Pro.DQE.SqlServer.NET20.dll assembly.

public class ProfilingDynamicQueryEngine : DynamicQueryEngine
{
    protected override DbCommand CreateCommand()
    {
        DbCommand cmd = base.CreateCommand();
        ProfiledDbCommand pCmd = new ProfiledDbCommand(cmd, null, MiniProfiler.Current);
        return pCmd;
    }
}

Here the DbCommand creation is overridden. Note that I wrap the original cmd and pass the current MiniProfiler instance as arguments to the ProfiledDbCommand constructor, while I pass null for the connection instance because it will be assigned later.

2. Derive from the DataAccessAdapter class. Note: this class is generated from a template and you’ll find it in the DBSpecificLayer project generated by LLBLGenPro.

public class DataAccessAdapterEx : DataAccessAdapter
{
    protected override System.Data.Common.DbConnection CreateNewPhysicalConnection(string connectionString)
    {
        DbConnection conn = base.CreateNewPhysicalConnection(connectionString);
        // return ProfiledDbConnection.Get(conn); Pre MvcMiniProfiler 1.9
        return new ProfiledDbConnection(conn, MiniProfiler.Current);
    }

    protected override DynamicQueryEngineBase CreateDynamicQueryEngine()
    {
        return PostProcessNewDynamicQueryEngine(new ProfilingDynamicQueryEngine());
    }
}

Within CreateDynamicQueryEngine I return the class created in step #1, and CreateNewPhysicalConnection returns a wrapped connection.

Instead of using DataAccessAdapter you should now use the class created in step #2, DataAccessAdapterEx. That’s it.
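For illustration, here is how the profiled adapter might be used; CustomerEntity is a hypothetical generated entity, substitute one of your own:

using SD.LLBLGen.Pro.ORMSupportClasses;

public static class ProfiledFetchExample
{
    public static void FetchCustomer(int customerId)
    {
        // DataAccessAdapterEx is used exactly like the generated DataAccessAdapter.
        using (var adapter = new DataAccessAdapterEx())
        {
            var customer = new CustomerEntity(customerId);
            adapter.FetchEntity(customer); // the generated SQL now shows up in MvcMiniProfiler
        }
    }
}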

Conclusion

As it turns out, integrating MvcMiniProfiler with LLBLGenPro is quite easy. The required code could even be added to the LLBLGenPro templates by modifying them, so you wouldn’t have to add the same code manually each time.

Let me know if you have feedback.

Update 19.9.2011: Updated the code because MvcMiniProfiler introduced a breaking change in v1.9 (instead of the ProfiledDbConnection.Get static method a constructor has to be used – thanks to David from the LLBLGenPro support team for the swift response).

A workaround to a problem when upgrading BlogEngine from 2.0 to 2.5

During the BlogEngine upgrade from 2.0 to 2.5 (the instance that hosts this blog) I came across a problem. It might happen during execution of the SQL Server upgrade scripts that come with BlogEngine 2.5.

After running the scripts I got an error saying that the constraint FK_be_PostComment_be_Posts can’t be enforced. Huh? After some experimenting I saw that the 2.0 database isn’t exactly well protected with constraints and I had some comments left that don’t belong to any of the posts (I guess I had deleted the posts but the comments were still there because the database didn’t enforce the constraints and BlogEngine didn’t delete them).

Here is what I did to upgrade.

1. In the upgrade script comment this statement:

       ALTER TABLE dbo.be_PostComment
         ADD CONSTRAINT FK_be_PostComment_be_Posts FOREIGN KEY (BlogID, PostID) REFERENCES dbo.be_Posts (BlogID, PostID)
       GO

2. Run the upgrade script.

3. (Optional) To find if any comment is parentless execute this SELECT statement

   SELECT * FROM dbo.be_PostComment WHERE postid NOT IN (SELECT postid FROM be_posts)

4. Delete the orphaned comments

   DELETE FROM dbo.be_PostComment WHERE postid NOT IN (SELECT postid FROM be_posts)

5. Run the statement from upgrade script that you’ve commented in #1.

That’s it. I guess you could delete the orphaned comments even before running the upgrade script and thus avoid the first and last steps.

Observation: it looks like the database in version 2.0 at least wasn’t very well protected; hopefully 2.5 rectifies this problem (it adds constraints here and there). Don’t forget, the database is the last line of defense against bad data and should be protected as much as it can be.

Windows Home Server 2011 restore is just horribly flawed

I am a long time user of Windows Home Server (the first version) and it saved me quite a few times. Its restore had only one annoying flaw – it didn’t properly recognize my Realtek integrated network card (instead it tried to use drivers for another version and didn’t let me select the proper ones). So in order to restore I had to put in a recognizable network card and only then it worked. I guess a bit of hardware work here and there isn’t that bad for my strength after all.

Here comes Windows Home Server 2011 and my first restore experience. Initially it started very well: the network card was properly recognized and I was happy. But then, oh well. I successfully connected to the server and all went well until I had to select the partitions to restore. For a start, the partition letters were mixed up. Luckily I recognized the partitions by their names and their sizes. I picked only the system partition to restore. When it should have started restoring, it worked for a minute and then yielded “an unknown error has occurred”. Gulp. The end of its log file said that it can’t lock the volume for reason 5. So how come it can’t lock an unused partition? After googling I discovered that I am not the only one with this locking issue (https://connect.microsoft.com/WindowsHomeServer/feedback/details/665345/unable-to-restore-windows-7-client-with-raid-0). The workaround: delete the partition, create the partition and don’t format it. So I did. However, previously the partition was exactly 150,000 MB and now it shrank to 149,999 MB. One MB shouldn’t make a difference, should it – the partition wasn’t full to the last byte? It turns out that WHS restore is so dumb that it will refuse to restore to this new partition due to the missing MB, even though the partition was only about 60% used. And there is no way to make it bigger, because next to this partition is another one and I don’t want to delete it as well. Very stupid and worse than WHS v1 was. Much worse.

So here is my plan now: attach an external disk, restore there, boot from external disk, manually copy the system partition files to original shrunk partition, make it bootable and run as usual. Will I succeed? I certainly hope so.

I am very disappointed in WHS 2011. Half of the key features (backup, restore) are seriously flawed.

Update: Even a non-formatted partition can't be locked for some reason. I guess it has something to do with Intel Rapid Storage since the partition in question is a RAID1 one. Don't forget, it worked just fine with the previous WHS.

Update2:

Here is the final workaround after a day of trial and error:

  1. Restore system partition using WHS recovery CD to other (external) disk (I've mounted a spare disk in a Sharkoon dock).
  2. (optional) Again restore the system partition to the disk above or some other disk, just to have an untouched copy.
  3. Boot from the other disk - the system is now bootable, but not from the original disk.
  4. Delete the original boot partition (the one that WHS can't restore).
  5. Install Acronis Disk Director 11 Home (it isn't free but I bought it just for this purpose and it is well worth it) and copy the partition from step 2 (or the one from step 1) to the original location (to replace the one you deleted in step 4). This software is needed to copy partitions - I guess any partition copy software would do.
  6. Reset and boot the original system.

In case either the other disk or the original disk isn't bootable anymore, you might do one of these:

  1. Boot from a Windows 7 CD and try "Startup Repair".
  2. Boot from a Windows 7 CD and do a "Custom Installation" on that partition. That will make the partition bootable. Once it is bootable, repeat the process described above.

I also think I know why WHS can't lock my original partition. It is most probably because it is located on an Intel Rapid Storage RAID1 array. Note that WHS v1 didn't have any problems whatsoever with the same configuration. Bad, bad WHS 2011.

I lost a working day and 24€ but at least I restored my workstation. Next time it will be much faster.

Let me know if it works for you.

Update: Here is the issue on connect (thanks to Henk Panneman for re-finding it).

Update v2: The issue disappeared again. Oh well, I'm tired of fixing the link again and again....

The long path of installing Windows Home Server 2011 under Hyper-V R2

Here is my experience of installing Windows Home Server 2011 aka Vail under a Hyper-V R2 server (Core2 Duo E7600, 8GB RAM, 500GB RAID 1 and 3TB RAID5). Whatever could have gone wrong actually did, but let’s go through it step by step.

  1. During the file copy at the very beginning I was getting “couldn’t copy file XY” errors. Problem: a corrupt ISO file. Solution: re-download the file.
  2. The setup made it further, but I started experiencing random reboots and BSODs during install; this time it happened soon after the initial file copy finished. On the bright side I caught a few of them and they were mostly mentioning memory corruption. Time for memtest86+ and a RAM test. It turned out that one of four Patriot DDR2 2GB CAS6 memory sticks was bad. Solution: throw out the problematic stick and run the server with three sticks. Also donated to memtest86+, well deserved.
  3. Memory issues were still present during setup. Argh. Solution: throw out the third memory stick (they like to work in pairs it seems, and a pair plus a third wheel obviously isn’t something one should use) and replace it with two older 1GB sticks I had collecting dust.
  4. I made it to the step where setup says “Waiting for installation to continue” and shows a marquee progress bar. Except it didn’t finish. Ever. Now what? After peeking into the log files located at C:\Users\All Users\Microsoft\Windows Server\Logs I more or less quickly understood that it had something to do with waiting for a web page to show. After more digging I found out that there were problems connecting to the internet and the internal web page wasn’t showing. Problem: the network card didn’t get an address from my DHCP server for some reason. Solution: I set a fixed IP and DNS records.
    Hint: Type Ctrl+Alt+End to simulate Ctrl+Alt+Del from Hyper-V client. Pick Start Task Manager and File/Run. Run explorer.exe and you can browse around the file system.

It took me a couple of days to make it through, but at least I did it. Hurray. It took that long because I was installing on a 1TB VHD disk on a RAID5 array and each setup retry takes time.

So, that’s it. Now it looks fine.

Automating the Righthand Dataset Visualizer build process and a refresh build

Since the last version was released I’ve become aware of an issue in the visualizer. If you were working on a project that referenced newer DevExpress assemblies, the visualizer might have reported an exception due to binary incompatibility with its references to the DevExpress assemblies - an assembly binding issue.

(If you want just the binaries you can skip to the end of the article and download them.)

The solution is to use a custom build of DevExpress assemblies. If you have their sources you can build your custom DevExpress assemblies using these useful scripts.

Then I have to use those custom built assemblies with my visualizer, but only for the release build, not for debug. So I created a folder named CustomAssemblies under the visualizer solution and added a reference path to this folder to all of the visualizer projects. This means that MSBuild will use the assemblies in this folder if they are present, or the ones from the GAC (the original ones) if the folder is empty. Unfortunately the reference paths are global to the project and you can't have two different sets for two different configurations.

So building the release version looks like this: populate the CustomAssemblies folder with the custom DevExpress assemblies, run MSBuild on the Release configuration and at the end clear the CustomAssemblies folder so the debug version works with the original DevExpress assemblies again. But there is one more obstacle. The license.licx file lists the public key of the DevExpress assemblies and it doesn't match the one found in the custom version. So I have to replace all occurrences of the original public key with the custom version's public key before the build and restore the originals after the build. Problems solved.
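The public key swap itself is plain string replacement; here is a minimal sketch of that pre/post build step (the token values are placeholders - use the PublicKeyToken of the official and of the custom-built DevExpress assemblies):

using System.IO;

public static class LicenseKeySwapper
{
    // Placeholder tokens - put the real PublicKeyToken values here.
    const string OriginalToken = "0000000000000000";
    const string CustomToken = "1111111111111111";

    // Called before the release build.
    public static void UseCustomToken(string licenseFilePath)
    {
        string content = File.ReadAllText(licenseFilePath);
        File.WriteAllText(licenseFilePath, content.Replace(OriginalToken, CustomToken));
    }

    // Called after the release build to restore the original license.licx.
    public static void RestoreOriginalToken(string licenseFilePath)
    {
        string content = File.ReadAllText(licenseFilePath);
        File.WriteAllText(licenseFilePath, content.Replace(CustomToken, OriginalToken));
    }
}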

The actual release process also involves {SmartAssembly}, which merges all assemblies into a single file, signing it with a certificate plus timestamping, and finally zipping the rather large result. Because I am not a masochist I decided to create a FinalBuilder project that does all of this automatically for me (except for building the custom DevExpress assemblies).

Let me know if there are still problems!

Righthand.DebuggerVisualizer.Dataset.2008_v1.0.1.zip (12.67 mb)

Righthand.DebuggerVisualizer.Dataset.2010_v1.0.1.zip (12.67 mb)

Read more about Righthand DataSet Visualizer here.

About strongly typed datasets, DBNulls, WPF and binding

The mix of strongly typed datasets, DBNull values, WPF and bindings will likely yield invalid cast exceptions when binding nullable columns.

Imagine you have a table with a single, nullable column Description. You would bind it against a TextBox (or any other control) with this syntax: {Binding Description}. When the binding is performed against a row that has a DBNull value (remember, ADO.NET uses DBNull.Value for null values) in that column, you get an "Unable to cast object of type 'System.DBNull' to type 'System.String'" exception.

The reason for that exception is quite simple. Let’s take a look at how ADO.NET implements the strongly typed Description column:

[global::System.Diagnostics.DebuggerNonUserCodeAttribute()]
[global::System.CodeDom.Compiler.GeneratedCodeAttribute("System.Data.Design.TypedDataSetGenerator", "4.0.0.0")]
public bool IsDescriptionNull() {
    return this.IsNull(this.tableXXX.DescriptionColumn);
}

[global::System.Diagnostics.DebuggerNonUserCodeAttribute()]
[global::System.CodeDom.Compiler.GeneratedCodeAttribute("System.Data.Design.TypedDataSetGenerator", "4.0.0.0")]
public string Description {
    get {
        try {
            return ((string)(this[this.tableXXX.DescriptionColumn]));
        }
        catch (global::System.InvalidCastException e) {
            throw new global::System.Data.StrongTypingException("The value for column \'Description\' in table \'XXX\' is DBNull.", e);
        }
    }
    set {
        this[this.tableXXX.DescriptionColumn] = value;
    }
}

The obvious problem is that WPF binding goes straight to the Description property without checking the IsDescriptionNull() method (as your code should) and the property getter then fails to convert DBNull.Value to a string. And no, at this point even a converter won’t help you, because the value is read before the converter is even invoked.

But fear not, the solution is a very simple one. Just add square brackets around Description in the binding syntax, like this: {Binding [Description]}. It looks similar but it is different. The former syntax uses the strongly typed property to access the data, while the latter uses the DataRow’s this[string] indexer instead, which returns an object, so no exception is raised due to the DBNull.Value to string conversion. Furthermore, a DBNullConverter value converter can come in handy as well:

public class DBNullConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        if (value == DBNull.Value)
            return DependencyProperty.UnsetValue;
        else
            // return base.Convert(value, targetType, parameter, culture);
            return value;
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        if (value == null)
            return DBNull.Value;
        else
            // return base.ConvertBack(value, targetType, parameter, culture);
            return value;
    }
}
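For completeness, the same binding can also be set up from code-behind; the sketch below is the code equivalent of {Binding [Description]} with the converter attached (the TextBox and the column name are just an example):

using System.Windows.Controls;
using System.Windows.Data;

public static class DescriptionBindingExample
{
    public static void BindDescription(TextBox textBox)
    {
        // Bind through the row's indexer instead of the strongly typed property
        // and let DBNullConverter translate DBNull.Value in both directions.
        var binding = new Binding("[Description]")
        {
            Converter = new DBNullConverter(),
            Mode = BindingMode.TwoWay
        };
        textBox.SetBinding(TextBox.TextProperty, binding);
    }
}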