ASUS Support? Who cares.

Not long ago I purchased an ASUS Transformer (Eee Pad) Honeycomb tablet. Good specs, great price. I'd have bought it even sooner if it weren't for ASUS' blunder of not supplying enough units to the market (for some reason they released this great tablet in ultra-low quantities, and it took almost a quarter of a year to provide enough units to satisfy demand – the first ASUS fail – what were they thinking?).

The Transformer really is a great tablet, nothing to complain about, and ASUS takes care of updating the OS in a timely fashion. In fact it is the best combination out there right now (for Honeycomb tablets, AFAIK) – others should follow their example. Anyway, I was a happy user for a month or so until I came across Kendo UI – an optimized JavaScript/HTML5 library of UI components. Curiosity took over and I tried a few demos, only to realize that they run abnormally slowly on a tablet that is supposed to be very fast. My initial thought was that Kendo UI is crap, but I later found that I was totally wrong in that assumption. Just to be sure, I tried Kendo UI on my Samsung Galaxy S phone and, wonder of wonders, it runs much faster on my phone (supposedly the much slower device) than on my (supposedly) faster tablet. Makes sense? Not really.

So I started investigating by comparing the two devices. The most objective way to compare is of course benchmark tests. I started with SunSpider (a JavaScript benchmark – Kendo UI is all about JavaScript). I got a result twice as slow as what others are getting on the same tablet. Even my phone scores better. I also ran Antutu and Quadrant. The results are below (the expected results come from a fellow Transformer owner and from various web sites).




[Chart: SunSpider results – difference (the factor of slowness); lower is better]





Note that running a different browser doesn’t change the results significantly.





[Chart: Antutu results by category (CPU integer, CPU floating-point, 2D graphics, 3D graphics, database IO, SD card write, SD card read) – difference (the factor of slowness); lower is better]

[Chart: Quadrant results – difference (the factor of slowness); lower is better]

What I can gather from the results is that there is a problem with the CPU but not with the GPU (the factor is about 2 or more for CPU-related tests, which means the CPU is running at roughly half the speed it should).

I even performed a factory reset and still got the same results. This was the first time I had seen a device underperform like this, and I had no idea why. I contacted ASUS UK (I bought the tablet in the UK because it is cheaper there and the UK was the only EU country actually selling them) and they suggested an RMA (sending it in for repair). ASUS' response was pretty quick, in less than a day. I was supposed to contact a local Slovene company, which I did, and they dispatched an express courier to pick my tablet up (a pleasant surprise, something I am not used to). The Slovene guys also warned me that they are not a repair shop; they would just forward it to the designated repair service (supposedly in the Czech Republic) and it might take two or even three weeks until I got it back. At least I'll get a properly functioning tablet back, I thought at the time, even though I had been getting used to the tablet.

The failure of ASUS service logistics

So the tablet was gone for service, and after three weeks there was no sign of it – even though I waited eagerly outside the house for the courier every day (just kidding). So I called the local Slovene company to ask how things were going with my tablet and when I might expect it back. The answer was by far the one I didn't expect: "Hey, in a day or two we will finally send it to the service." "Errr, what? I don't think I understood that sentence, can you repeat it for me?" And the repeated answer was horribly the same. "So, you are telling me that you've spent three weeks or so just preparing to send my Transformer to the service?" "Yes, but that doesn't depend on us, you know. The Czechs (the service) are supposed to organize the physical transfer; they are working on it. We are just the messenger, it doesn't depend on us. We just (magically) open a case in our application and that's it as far as we are concerned." WTF? ASUS could replace the device immediately, without even sending it to the repair service, if they cared about their customers. But no, everything has to go by the internal rules, which involve stupid internal logistics problems or who knows what.

ASUS, is this the way to treat your customers? Is it really? I mean, I had plenty of confidence that ASUS would make it right with their excellent tablet. I understand that a tablet might malfunction for one reason or another. But not dealing with failures in a timely manner is their second and by far worst failure (the first being the failure to provide enough units at launch). And one wonders why the iPad still reigns over the tablet market? It is because Taiwanese companies just don't get it (nor does Motorola). They don't get the whole picture, nor do they take care to provide customer-friendly service in every respect. At this point what is "pushing" the repair along is Slovene law, which says that a warranty repair has to be done within 45 days (otherwise they have to replace the device with a new one). Shame on ASUS for even needing this time limit.

Had I known all this, I'd have just planned a family vacation somewhere in the Czech Republic near the repair service.

16.9.2011 Breaking update: A month after I sent my tablet for repair, a week after it was actually forwarded to the service (and a week after I wrote this rant), I got a replacement back – or at least the attached document says so. It is working as expected now. I am again a happy Honeycomb user.

DevExpress’ FlowLayoutControl and MVVM

FlowLayoutControl unfortunately doesn't support items binding. You can't just provide a source and hope FlowLayoutControl will populate its content. But fear not, there is nothing attached properties can't solve.

I’ve created an attached property ItemsSource that does all that for you. Here is its declaration:

public static readonly DependencyProperty ItemsSourceProperty = DependencyProperty.RegisterAttached("ItemsSource", typeof(IEnumerable), typeof(FlowLayoutExtensions), 
            new UIPropertyMetadata(null, new PropertyChangedCallback(OnItemsSourceChanged)));

It accepts an IEnumerable as an input.
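XAML can only use an attached property if the owning class also exposes the conventional static accessors. A minimal sketch for the FlowLayoutExtensions class from the declaration above:

```csharp
// conventional accessors; XAML resolves loc:FlowLayoutExtensions.ItemsSource through these
public static IEnumerable GetItemsSource(DependencyObject obj)
{
    return (IEnumerable)obj.GetValue(ItemsSourceProperty);
}

public static void SetItemsSource(DependencyObject obj, IEnumerable value)
{
    obj.SetValue(ItemsSourceProperty, value);
}
```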

And here is the relevant code when ItemsSource changes:

private static void OnItemsSourceChanged(DependencyObject o, DependencyPropertyChangedEventArgs e)
{
    FlowLayoutControl layout = o as FlowLayoutControl;
    if (layout == null)
        return;

    NotifyCollectionChangedEventHandler collectionChanged = delegate(object s, NotifyCollectionChangedEventArgs args)
    {
        switch (args.Action)
        {
            case NotifyCollectionChangedAction.Add:
                AddItems(layout, args.NewItems);
                break;
            case NotifyCollectionChangedAction.Remove:
                RemoveItems(layout, args.OldItems);
                break;
        }
    };

    // unsubscribe from the previous source, if it was observable
    INotifyCollectionChanged oldIncc = e.OldValue as INotifyCollectionChanged;
    if (oldIncc != null)
        oldIncc.CollectionChanged -= collectionChanged;

    IEnumerable newValue = e.NewValue as IEnumerable;
    if (newValue != null)
    {
        AddItems(layout, newValue);
        INotifyCollectionChanged incc = newValue as INotifyCollectionChanged;
        if (incc != null)
            incc.CollectionChanged += collectionChanged;
    }
}

First it defines a delegate that gets called upon collection changes (when the source implements INotifyCollectionChanged), then it unsubscribes from CollectionChanged if it has previously subscribed. Finally it populates the FlowLayoutControl with items and optionally subscribes to the CollectionChanged event (when the source supports it). Note that ObservableCollection&lt;T&gt; implements INotifyCollectionChanged.

Here is the code that adds or removes items:

private static void AddItems(FlowLayoutControl layout, IEnumerable source)
{
    foreach (object item in source)
    {
        GroupBox box = new GroupBox { DataContext = item };
        layout.Children.Add(box);
    }
}

private static void RemoveItems(FlowLayoutControl layout, IEnumerable source)
{
    foreach (object item in source)
    {
        GroupBox match = (from gb in layout.Children.OfType<GroupBox>()
                          where gb.DataContext == item
                          select gb).FirstOrDefault();
        if (match != null)
            layout.Children.Remove(match);
    }
}
AddItems adds a GroupBox instance for each new item and sets its DataContext to the item, while RemoveItems searches for the matching GroupBox instance (based on a DataContext match) and removes it from the FlowLayoutControl's Children collection.

A bit of XAML is required as well. I control the GroupBox appearance through a Style, like this:

<Style TargetType="dxlc:GroupBox">
    <Setter Property="MaximizeElementVisibility" Value="Visible"/>
    <Setter Property="MinimizeElementVisibility" Value="Visible"/>
    <Setter Property="Width" Value="150"/>
    <Setter Property="Header" Value="{Binding Caption}" />
    <Setter Property="Content" Value="{Binding}" />
</Style>

Note the binding of the Content property (remember, I am assigning the current item as DataContext). And here is the FlowLayoutControl instance declaration:

<dxlc:FlowLayoutControl loc:FlowLayoutExtensions.ItemsSource="{Binding}" />

There you go, an MVVM-friendly approach.
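On the view model side any IEnumerable will do, but an observable collection keeps the layout in sync. A minimal sketch (the item type and its Caption property are my assumptions, chosen to match the Header binding in the Style above; since the XAML binds ItemsSource="{Binding}", the view's DataContext can simply be the collection itself):

```csharp
using System.Collections.ObjectModel;

// hypothetical item type; Caption matches the {Binding Caption} in the Style
public class CardViewModel
{
    public string Caption { get; set; }
}

// the collection used as the view's DataContext; adding/removing items
// raises CollectionChanged, which the attached property handles
public class CardCollection : ObservableCollection<CardViewModel>
{
}
```

Adding an item to a CardCollection bound this way creates a new GroupBox; removing one takes it away.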

Note that this is not fully featured code, but it is a good starting point.

31.8.2011 - corrected demo files

20.9.2011 - ufff, uploaded the really proper demo this time (9.99 kb)

Fixing combination of NuGet and Team Foundation in workgroup configuration: 401 Unauthorized

The problem

Many users of Visual Studio 2010 (SP1), Team Foundation Server in a workgroup configuration and NuGet face a very annoying problem – we'd often get a 401 Unauthorized error when installing/uninstalling/updating a NuGet package. Apparently it happens only in this combination (I'm not sure whether my host OS – Windows 7 – plays any role in it) and not consistently. But once it starts, the only way to get rid of the errors is to restart Visual Studio.

The only workaround so far was to:

  1. Go Offline with TFS
  2. Manually make files writable (csproj, packages configuration, etc.) or uncheck them before #1
  3. Close Visual Studio
  4. Open Visual Studio
  5. Do NuGet
  6. Close Visual Studio
  7. Open Visual Studio
  8. Go Online with TFS

The steps above were mandatory for every batch of NuGet operations. Which is a huge pain and an absurdly great annoyance with the otherwise excellent NuGet. Needless to say, I was among the people facing this issue. And I got so annoyed that I decided to make a choice: either ditch NuGet or fix it myself (NuGet is an open source project).

Being a developer, I opted for the second choice of course. Was there really a choice? Anyway, here is how my 24 hours of debugging and 15 seconds of fixing went. If you just want to see the solution, feel free to skip to the Solution section below.


1. I downloaded NuGet sources.

2. When opening the NuGet solution I quickly found out that I was missing the Visual Studio 2010 SDK (because NuGet is a Visual Studio extension), so I downloaded the one I found on the Internet. It didn't install, saying something about prerequisites not being installed. Ah, one needs the Visual Studio 2010 SP1 SDK. Get it here. Somebody, please let the Visual Studio Extensibility Developer Center know that they are listing the outdated SDK.

3. I set NuGet.VsExtension as the startup project and fired up the debugger, which opens another instance of Visual Studio 2010, where I crafted a sample solution for reproducing the dreadful 401. I was able to reproduce it often, but not always.

4. It took me some time to get familiar with the NuGet sources. After that I invested some time in speeding up problem detection (the sooner it is detected the better) by modifying pieces of the NuGet sources, and after many trials and errors I found I had to dig deeper, into the bowels of the Team Foundation Server client code.

5. I fired up my preferred tool for debugging assemblies I don't have sources for – .NET Reflector. It works better than using Microsoft's source symbols, and it works for every assembly. It doesn't work perfectly, due to assembly optimizations and other black-magic issues, but it works well enough. Armed with decompiled TFS client assemblies, I dug deeper and deeper. But I couldn't find an obvious fault.

6. I brought in a new weapon: Microsoft Network Monitor, to analyse the network traffic. After all, TFS communication goes over HTTP/SOAP. There I found the first clue to the root of the problem. Normally the TFS client sends a request that is refused by the server with a response saying that NTLM authentication is required. The client then re-sends the request with NTLM authentication and everything works. But when the problem occurs, the client just doesn't respond to the NTLM challenge – instead it throws a 401 Unauthorized exception without even trying to authenticate against the server. I had no idea why it sometimes works and sometimes doesn't.

Successful communication

Unsuccessful communication

7. At this point I was thinking of enabling System.Net tracing to get more useful info if possible. I immediately faced a problem: the only way to enable System.Net tracing is through an app.config file, not in code. See, I couldn't use an app.config file because I was debugging a library, and a library's app.config file is simply ignored. I looked for a way to enable tracing programmatically, which is possible for user tracing scenarios, but not for System.Net. Bad luck, but there is nothing that can't be fixed with a bit of reflection, like this:

private static void InitLogging()
{
    TextWriterTraceListener listener = new TextWriterTraceListener(@"D:\temp\ts.log");
    Type type = typeof(HttpWebRequest).Assembly.GetType("System.Net.Logging");
    MethodInfo initl = type.GetMethod("InitializeLogging", BindingFlags.Static | BindingFlags.NonPublic);
    initl.Invoke(null, null);

    foreach (string s in new string[] { "s_WebTraceSource", "s_HttpListenerTraceSource", "s_SocketsTraceSource", "s_CacheTraceSource" })
    {
        FieldInfo webTsFi = type.GetField(s, BindingFlags.Static | BindingFlags.NonPublic);
        TraceSource webTs = (TraceSource)webTsFi.GetValue(null);
        webTs.Switch.Level = SourceLevels.Verbose;
        // attach the listener, otherwise nothing ends up in the file
        webTs.Listeners.Add(listener);
    }

    FieldInfo le = type.GetField("s_LoggingEnabled", BindingFlags.Static | BindingFlags.NonPublic);
    le.SetValue(null, true);
}

And voilà, the thing started to spit a ton of information into the file D:\temp\ts.log. But again, it only showed the symptom, not the cause (trace excerpts after the first request; note that the unsuccessful one doesn't even try to authenticate via NTLM):

System.Net Information: 0 : [10488] Associating HttpWebRequest#51488348 with ConnectStream#13361802
System.Net Information: 0 : [10488] Associating HttpWebRequest#51488348 with HttpWebResponse#7364733
System.Net Information: 0 : [10488] AcquireDefaultCredential(package = NTLM, intent  = Outbound)
System.Net Information: 0 : [10488] InitializeSecurityContext(credential = System.Net.SafeFreeCredential_SECURITY, context = (null), targetName = HTTP/TFS, inFlags = Delegate, MutualAuth, Connection)
System.Net Information: 0 : [10488] InitializeSecurityContext(In-Buffers count=0, Out-Buffer length=40, returned code=ContinueNeeded).
System.Net Warning: 0 : [10488] HttpWebRequest#51488348::() - Resubmitting request.

Successful communication

System.Net Information: 0 : [10488] Associating HttpWebRequest#51488348 with ConnectStream#13361802
System.Net Information: 0 : [10488] Associating HttpWebRequest#51488348 with HttpWebResponse#7364733

Unsuccessful communication

8. At this point I concentrated on debugging the System.Net.HttpWebRequest class, as re-submitting is not done at the TFS client level. After even more trial and error I was finally able to pinpoint the root of the evil.

The root of the problem

The decision whether or not to try NTLM authentication is based on which internet zone the OS thinks the request target is in. In other words, if the OS says your TFS server is outside the intranet, then HttpWebRequest won't bother with NTLM authentication at all. It is that simple. The decision lies within PresentationCore's (!) internal CustomCredentialPolicy.InternetSecurityManager class, which delegates the internet-zone question to the OS and returns the result to HttpWebRequest. For some reason, at some point it starts returning Internet instead of Intranet. I am not sure exactly why, but I have a remedy. A dramatically simple one that doesn't even involve modifications to NuGet (no need to wait for a NuGet fix!).
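You can ask the same zone-mapping machinery yourself to see which zone the OS assigns to your server. A quick diagnostic sketch for the full .NET Framework (the URL is a placeholder; replace it with your TFS server's address):

```csharp
using System;
using System.Security.Policy;

class ZoneCheck
{
    static void Main()
    {
        // queries the OS zone mapping, the same source IE's security tab reflects
        Zone zone = Zone.CreateFromUrl("http://tfs:8080/");

        // prints e.g. Intranet or Internet; NTLM is only attempted for intranet targets
        Console.WriteLine(zone.SecurityZone);
    }
}
```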

The solution

Open Internet Explorer, go to Internet Options/Security, select the Local intranet icon and click the Sites button.


In the Local intranet dialog click Advanced


and add your TFS server to the Websites list, like I did with mine (replace TFS with the name of your server)


Restart Visual Studio and enjoy NuGet from a new perspective!

This solution apparently solves all of the issues I had with the dreaded 401. Let me know if it works for you as well.
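If you need to roll the fix out to several machines, the same mapping can be written directly to the per-user zone map in the registry instead of clicking through Internet Options. A hedged sketch ("tfs" stands in for your server name; the value 1 denotes the Local intranet zone):

```csharp
using Microsoft.Win32;

class AddToIntranetZone
{
    static void Main()
    {
        // per-user zone mapping read by the Internet Options/Security dialog
        using (RegistryKey key = Registry.CurrentUser.CreateSubKey(
            @"Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\tfs"))
        {
            // map the http scheme of this host to zone 1 (Local intranet)
            key.SetValue("http", 1, RegistryValueKind.DWord);
        }
    }
}
```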


The problem might not be related to NuGet at all, but rather to PresentationCore (NuGet is a WPF application), which gets confusing results from the OS through some interop. NuGet/Visual Studio is just the combination that triggers the otherwise dormant problem.

Two Windows 8 feature requests

Microsoft has started blogging about Windows 8 (there is a twitter account, @BuildWindows8, as well) and I started thinking about what I'd like to see in Windows 8. I can think of two features right now, off the top of my head:

  1. Make .NET a first-class development tool. You might say it already is, but in reality it isn't. Not all APIs are accessible through .NET. Just look at the (abandoned) Windows® API Code Pack for Microsoft® .NET Framework. Furthermore, it is riddled with bugs. There are other APIs that require black magic to use from .NET. Why are we, .NET developers, supposed to mess with that?
  2. Give us a chance to store non-OS-essential files on a separate drive. Starting with the hibernation file, which is roughly as large as your RAM. If that is 12 GB, it means you have to give up 12 GB of OS disk space. You might say who cares, 12 GB is nothing at current disk prices. Sure, if you don't look at the obscene prices of non-mainstream disks (i.e. SSDs), where 12 GB matters. A lot. Then there are temporary files, user documents, etc. A lot of stuff I'd be happy to offload to a cheaper and larger disk. Some of these can be redirected already, but mostly in obscure ways.

That's all for now. What do you think?

SAZAS, the state and computing, part one, v1.1 (translated from Slovene)

A few new facts regarding "SAZAS, the state and computing, part one", mostly thanks to @Bekstejdz.

1. SAZAS does not receive the entire amount of the levies, but "only" 32-40%. The rest is split between Zavod IPF and reserves (whatever that means). See Zavod IPF's annual report for 2009.

2. It appears that this levy is not collected by the state, as I originally thought, but by an authorized agent (the authorization is granted by the Slovenian Intellectual Property Office). Until the end of 2009 that agent was Zavod IPF, but then its temporary license expired. After the expiry the Office issued no further temporary or permanent collection permit. Both SAZAS and Zavod IPF wanted to obtain the permit, however:

According to the court (judgments no. I U 1080/2010 and I U 1111/2010), to be granted a permanent permit the applicant must demonstrate that it unites all the various beneficiaries of the levy. For a permanent permit it is likewise essential that the rules for distributing the levy (which must be contained in the statute of the collective organization) define precise criteria for dividing the levy among individual beneficiaries.

In short, the problem is not that the state has had second thoughts or anything like that; the problem is merely the distribution key and the fact that no single organization represents all beneficiaries. Apparently there is only enough money "for one", and as long as they quarrel among themselves, that is good for us. Interestingly, these two facts did not bother the Office when it issued the first temporary permit to Zavod IPF. Did Zavod IPF represent everyone back then and have a clear distribution key?

3. The consequence of point 2 is that since the beginning of 2010 nobody has been enforcing the decree and collecting the levies. At least so it appears.

4. The essence of the original post of course remains: the fact that they temporarily cannot agree on who gets to fleece us changes practically nothing, nor does the fact that SAZAS does not receive the entire amount (one would have to ask the importers to find out what actually happens with the levies. Does anyone know one?).

SAZAS, the state and computing, part one (translated from Slovene)

We all complain about the level of every possible tax, don't we. But few people know that there is yet another evil charge, one far better hidden, technically called a levy. It is the levy for private and other internal reproduction, paid to SAZAS on (computer) hardware and media.

Let's look at an extreme, practical example right away. For a 500 GB hard disk, bought either retail or pre-installed, the importer pays SAZAS 50% (fifty percent, or in other words HALF). It sounds unbelievable, but it is sadly true: a 500 GB disk costs around €40; subtract VAT and we get €33.3. Of those €33.3 the importer sets aside €16.7 (half of €33.3) for the levy, and a part of that goes to SAZAS. And to be clear, this item is in no way listed on the invoice the buyer receives.



Not possible? Oh, but it is. And it has been since 6 October 2006, when the government of the "lighthouse of Europe", or rather its ruler, signed the decree "on the amounts of levies for private and other internal reproduction". Point 2 b) of article 2 states:

The levy for audio or visual recording of protected works, payable on the first sale or import of new blank audio or image carriers, amounts, for each individual blank audio and/or image carrier which, according to the manufacturer's declaration, enables:

2. digital recording of audio and/or visual and written works, namely:

b) a carrier not exclusively intended for reproducing audio and/or visual works:

– data CD,
– data DVD,
– computer hard disk,
– memory card (for example: CF, SD, SDHC),
– a carrier with an integrated memory unit and player that is not exclusively
intended for reproducing digital audio and/or visual works (for example:
mobile phone, PDA), and
– other similar carriers
to 8 SIT for each commenced 1 GB of capacity, but no more than 4000 SIT.

4,000 former SIT is exactly €16.691704223001168419295610081789, or €16.7 for short. Since the amount is (fortunately) capped, it is most visible when buying the 500 GB disk mentioned above. But the decree is not limited to hard disks; far from it, everything under the sun is charged, including the memory cards in your cameras. As far as I understand, part of this money flows to SAZAS (see the table on revenue distribution below).
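The figures are easy to verify (assuming the 20% Slovenian VAT rate and the fixed conversion rate of 239.64 SIT per euro):

```csharp
using System;

class LevyMath
{
    static void Main()
    {
        double priceWithVat = 40.0;                      // typical 500 GB disk price in €
        double priceWithoutVat = priceWithVat / 1.20;    // strip the 20% VAT -> ~33.33 €
        double capInEuros = 4000.0 / 239.64;             // 4000 SIT at 239.64 SIT/€ -> ~16.69 €

        Console.WriteLine(priceWithoutVat);              // ~33.33
        Console.WriteLine(capInEuros);                   // ~16.69
        Console.WriteLine(capInEuros / priceWithoutVat); // ~0.50, i.e. half of the net price
    }
}
```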

Update: It appears the decree is currently not being enforced, due to squabbles over who gets to collect the money. More in update 1.1.




We are all guilty

This decree was surely written under the influence of the proverbial donkey trial of Višnja Gora. It assumes that we are all guilty of copying SAZAS-protected and other content; every single one of us who buys any device or medium listed in the decree will supposedly copy illegal content onto every medium bought, using every device bought. Without exception. When you buy, you are guilty. And since you are guilty, you pay a flat fee. You are not supposed to know that you are guilty and that you pay, which is why the item is not listed anywhere; the payment is made by the devil (the importer) who sells such things. With the buyers' money, of course. One naturally wonders: if I am automatically guilty and even pay for it (albeit unknowingly), may I then actually copy this content legally? No. If you copy protected content and a policeman catches you at it, you pay a fine. You pay for something that was already paid for when the criminal media and devices were bought. And you get a criminal record too. Only this time you knowingly pay for a committed crime (rather than unknowingly for a possible one). So according to the decree we are all guilty in advance and pay for it, and if we really are guilty, we pay once more.

But the circle of payments does not end here. The European association of SAZAS-like organizations (our beloved one among them) is now demanding that a flat fee also be paid by internet providers (SIOL, T-2, etc.), again a flat fee, simply because illegal content flows through them. Like it or not, all providers would have to pay, because illegal content can flow through them.


Bad for development

Remember the megalomaniac sentences about the lighthouse of Europe (from the same government and ruler that signed the decree), about how wonderfully advanced we were going to become? Of course, there is no better way than to hang leeches on computer devices and media (don't forget: 50% on a 500 GB disk) and divert the money away from technology, and technological progress is guaranteed.

The money flow

Where exactly does the money collected in the way described above go? Apparently a third goes to SAZAS (see the table on revenue distribution below), which is a "non-profit" organization. And this "non-profit" organization receives tons of money from the state (that is, from buyers of hardware and media), if all of this holds. How much money? Who knows. To whom exactly does it go, and by what key? Who knows; SAZAS does not publish such data, and the government apparently does not care. And this benefits Slovenia's technological breakthrough how, exactly?

Conclusion of part one

To summarize briefly: our state fleeces buyers of hardware and media for crimes they might commit, apparently conceals this (the buyer is unaware of it), and channels the collected money to some non-profit organization, where it vanishes. The film Minority Report was at least based on the specific premonitions of clairvoyants; here the government is clairvoyant across the board.

I allow for the possibility that I am wrong, but this is what an amateur reading of the decree suggests. Corrections are welcome and appreciated.

The story continues.

Update: The tables of revenue distribution under the decree for 2009. SAZAS gets "only" 32%; the rest is split among others. It is no clearer who gets how much money. I have adjusted the text above to reflect this fact. So the money does not go only to SAZAS, but to others as well; the essence of the story remains the same.

Update - v1.1

Integrating MvcMiniProfiler and LLBLGenPro

MvcMiniProfiler is a lean and mean mini profiler for MVC 3 that shows profiling results on each page at runtime. Besides the custom steps you can seed everywhere in your code, it supports database profiling as well. LINQ to SQL and Entity Framework are supported out of the box. LLBLGenPro, my favorite ORM, isn't supported though, and it won't work just like that.

Luckily, it turns out, it takes just a little effort to integrate MvcMiniProfiler with LLBLGenPro.

How MvcMiniProfiler database profiling works

It works by wrapping DbConnection, DbCommand and the other Db[Stuff] classes, recording execution times by tracking their inner workings. Here is an example from the MvcMiniProfiler documentation on how to start:

public static DbConnection GetOpenConnection()
{
    var cnn = CreateRealConnection(); // a SqlConnection, SQLiteConnection ... or whatever

    // wrap the connection with a profiling connection that tracks timings
    return MvcMiniProfiler.Data.ProfiledDbConnection.Get(cnn, MiniProfiler.Current);
}

If a client calls DbConnection.CreateCommand on a ProfiledDbConnection instance returned from the previous method, it gets a wrapped version of whatever command the original connection returns, and so on. There is also a way to create a DbCommand manually through the ProfiledDbCommand constructor.

The support for LINQ to SQL and Entity Framework is done in a similar manner.

This brings us to the point: why can't I just use the same approach with LLBLGenPro?

Integrating MvcMiniProfiler with LLBLGenPro – why the same approach doesn't work

The major problem with the LLBLGenPro and MvcMiniProfiler integration is that LLBLGenPro doesn't use the DbConnection.CreateCommand method to create commands from an existing connection. Instead it creates an instance of the proper DbCommand-derived class and assigns a connection to it. Thus the wrapping approach won't work, because LLBLGenPro would try to assign a ProfiledDbConnection to, for example, a SqlCommand instance.
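To make the failure concrete, here is roughly what happens (a hand-waving illustration, not LLBLGenPro's actual source, with SqlCommand as the example provider class):

```csharp
// simplified illustration of LLBLGenPro's command creation (not its real code)
DbCommand cmd = new SqlCommand("SELECT ...");

// LLBLGenPro then assigns the connection it manages; if that connection is a
// ProfiledDbConnection wrapper rather than a SqlConnection, SqlCommand rejects
// the assignment with an invalid cast, so wrapping only the connection is not enough
cmd.Connection = connection;
```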

So a bit more work is required to match them.

The code for the adapter scenario

1. Create a DynamicQueryEngine-derived class. Note: this class is database specific; if you work with e.g. SQL Server you'll find it in the SD.LLBLGen.Pro.DQE.SqlServer.NET20.dll assembly.

public class ProfilingDynamicQueryEngine : DynamicQueryEngine
{
    protected override DbCommand CreateCommand()
    {
        DbCommand cmd = base.CreateCommand();
        ProfiledDbCommand pCmd = new ProfiledDbCommand(cmd, null, MiniProfiler.Current);
        return pCmd;
    }
}
Here the DbCommand creation is overridden. Note that I wrap the original cmd and pass the current MiniProfiler instance as arguments to the ProfiledDbCommand constructor, while I pass null for the connection instance because it will be assigned later.

2. Derive from the DataAccessAdapter class. Note: this class is generated from a template and you'll find it in the DBSpecificLayer project generated by LLBLGenPro.

public class DataAccessAdapterEx : DataAccessAdapter
{
    protected override System.Data.Common.DbConnection CreateNewPhysicalConnection(string connectionString)
    {
        DbConnection conn = base.CreateNewPhysicalConnection(connectionString);
        // return ProfiledDbConnection.Get(conn); // pre MvcMiniProfiler 1.9
        return new ProfiledDbConnection(conn, MiniProfiler.Current);
    }

    protected override DynamicQueryEngineBase CreateDynamicQueryEngine()
    {
        return PostProcessNewDynamicQueryEngine(new ProfilingDynamicQueryEngine());
    }
}

Within CreateDynamicQueryEngine I pass the class created in step #1. CreateNewPhysicalConnection returns a wrapped connection.

Instead of using DataAccessAdapter you should now use the class created in step #2, DataAccessAdapterEx. That's it.
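Usage stays the same as with the stock adapter, just with the derived class. A minimal sketch (CustomerEntity stands in for one of your generated entities):

```csharp
// timings of the fetch below now show up in MvcMiniProfiler's page overlay
using (DataAccessAdapterEx adapter = new DataAccessAdapterEx())
{
    EntityCollection<CustomerEntity> customers = new EntityCollection<CustomerEntity>();
    adapter.FetchEntityCollection(customers, null); // null bucket: no filter, fetch all
}
```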


As it turns out, integrating MvcMiniProfiler with LLBLGenPro is quite easy. And the required code can be added to the LLBLGenPro templates by modifying them, so you won't have to add the same code manually each time.

Let me know if you have feedback.

Update 19.9.2011: Updated the code because MvcMiniProfiler introduced a breaking change in v1.9 (instead of the ProfiledDbConnection.Get static method, a constructor has to be used - thanks for the swift response from David from the LLBLGenPro support team).