My “compiler as a service” talk at Bleeding Edge 2011

Microsoft is working on a compiler-as-a-service project codenamed Roslyn for Visual Studio 11, which is supposed to ship sometime next year, I assume towards the end of 2012. Not much is known yet, and Roslyn might be less feature-rich than one might expect. Microsoft announced at the Build conference that they'll release some Roslyn CTP bits in a few weeks' time.

The good news is that you can already use compiler as a service today through tools such as CodeRush (commercial)/CodeRush Xpress (free), PostSharp (commercial) and custom coding. Even though some of these tools are commercial, they provide tremendous value.

Anyway, after a sabbatical year, I'll be speaking again at Bleeding Edge. In fact, I'll be talking about how to leverage these tools and use compiler as a service to improve both design-time coding and your applications in general.

But most importantly, I'll be giving a lot of swag away :-)

See you at Bleeding Edge - do stop by and say hi.

Added some features to Righthand Dataset Debugger Visualizer

Here is the list of what's been added in 1.0.5:

  • Tables tree width is persisted
  • Added a "show original values" feature (through a toolbar button). When enabled, it shows the cell's original value in parentheses. Disabled by default.
  • Error and Changed columns are narrower and of fixed width to conserve space

Go grab v1.0.5 from the dedicated page and visit the forums as well.

ASUS Support? Who cares.

Not long ago I purchased an ASUS Transformer (Eee Pad) Honeycomb tablet. Good specs, great price. I'd have bought it even sooner if it weren't for ASUS' blunder of not supplying enough units to the market (for some reason they released this great tablet in ultra-low quantities and it took almost a quarter of a year to satisfy market demand – ASUS fail number one – what were they thinking?).

The Transformer is really a great tablet, nothing to complain about, and ASUS is really taking care of updating the OS in a timely fashion. In fact it is the best combination out there right now (for Honeycomb tablets, AFAIK) – others should follow their example. Anyway, I was a happy user for a month or so until I came across Kendo UI – an optimized JavaScript/HTML5 library of UI components. Curiosity took over and I tried a few demos, only to realize that they ran abnormally slowly on a tablet that is supposed to be very fast. My initial thought was that Kendo UI was at fault, but I later found I was totally wrong on that assumption. Just to be sure, I tried Kendo UI on my Samsung Galaxy S phone and, wonder of wonders, it ran much faster on the phone (supposedly a much slower device) than on my (supposedly) faster tablet. Makes sense? Not really.

So I started investigating by comparing the two devices. The most objective way to compare is of course a set of benchmark tests. I started with SunSpider (a JavaScript benchmark – Kendo UI is all about JavaScript). I got a result twice as slow as what others were getting on the same tablet. Even my phone scores better. I also ran Antutu and Quadrant. The results are below (expected results come from a fellow Transformer owner and from various web sites).

SunSpider

              Expected    Actual    Slowness factor (lower is better)
  Total       2291        4550      1.99

Note that running a different browser doesn’t change the results significantly.

 

Antutu

                      Expected    Actual    Slowness factor (lower is better)
  RAM                 806         363       2.22
  CPU integer         1152        519       2.22
  CPU floating-point  1014        453       2.24
  2D graphics         298         302       0.99
  3D graphics         859         725       1.18
  Database IO         270         165       1.64
  SD card write       189         174       1.09
  SD card read        126         119       1.06
  Overall             4714        2820      1.67

 

Quadrant

              Expected    Actual    Slowness factor (lower is better)
  Total       2399        1005      2.39

What I can gather from the results is that there is a problem with the CPU but not with the GPU (the factor is about 2 or more for CPU-related tests, meaning the tablet is twice as slow as it should be).

I even performed a factory reset and still got the same results. This was the first time I'd seen a device underperform like this, and I had no idea why. I contacted ASUS UK (I bought the tablet in the UK because it is cheaper there and it was the only EU country actually selling them) and they suggested an RMA (sending it in for repair). ASUS' response was pretty quick, in less than a day. I was supposed to contact a local Slovene company, which I did, and they dispatched an express courier to pick up my tablet (a pleasant surprise, something I am not used to). The Slovene guys also warned me that they are not a repair shop; they would just forward the tablet to the designated repair service (supposedly in the Czech Republic), and it might take two or even three weeks until I got it back. At least I'll get a properly functioning tablet back, I thought at the time, even though I was getting used to it.

The failure of ASUS' service logistics

So the tablet went off for service, and after three weeks there was no sign of it – even though I eagerly waited outside the house for the courier every day (just kidding). So I called the local Slovene company to ask how things were going with my tablet and when I might expect it back. The answer was by far the one I didn't expect: "Hey, in a day or two we will finally send it to the service center." "Err, what? I don't think I understood that sentence, can you repeat it for me?" And the repeated answer was horribly the same. "So you are telling me that you've spent three weeks or so just preparing to send my Transformer to the service center?" "Yes, but that doesn't depend on us, you know. The Czechs (the service center) are supposed to organize the physical transfer; they are working on it; we are just the messenger, it doesn't depend on us. We just (magically) open a case in our application and that's it as far as we are concerned." WTF? ASUS could replace the device immediately, without even sending it to the repair service, if they cared about their customers. But no, everything has to go by internal rules, which involve stupid internal logistics problems or who knows what.

ASUS, is this the way to treat your customers? Is it really? I mean, I had plenty of confidence that ASUS would make things right with their excellent tablet. I understand that a tablet might malfunction for one reason or another. But not dealing with failures in a timely manner is their second and by far worst failure (the first being the failure to provide enough units at launch). And one wonders why the iPad still reigns over the tablet market? It's because Taiwanese companies just don't get it (nor does Motorola). They don't get the whole picture, nor do they take care to provide customer-friendly service in every aspect. At this point the only thing "pushing" the repair along is Slovene law, which says a warranty repair has to be completed within 45 days (otherwise they have to replace the device with a new one). Shame on ASUS for even coming near this time limit.

Had I known all this, I'd have just planned a family vacation somewhere in the Czech Republic near the repair service.

16.9.2011 Breaking update: A month after I sent my tablet in for repair, a week after it was actually forwarded to the service center (and a week after I wrote this rant), I got a replacement back – or at least the attached document says so. It is working as expected now. I am again a happy Honeycomb user.

DevExpress’ FlowLayoutControl and MVVM

FlowLayoutControl unfortunately doesn't support item binding. You can't just provide a source and hope FlowLayoutControl will populate its content. But fear not, there is nothing attached properties can't solve.

I've created an attached property, ItemsSource, that does all that for you. Here is its declaration:

public static readonly DependencyProperty ItemsSourceProperty = DependencyProperty.RegisterAttached("ItemsSource", typeof(IEnumerable), typeof(FlowLayoutExtensions), 
            new UIPropertyMetadata(null, new PropertyChangedCallback(OnItemsSourceChanged)));

It accepts an IEnumerable as an input.
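For completeness, XAML's attached-property syntax also needs the static Get/Set accessor pair on the owner class. Here is a minimal sketch of what that pair looks like, assuming the FlowLayoutExtensions class from the declaration above:

```csharp
// Standard attached-property accessors; XAML resolves
// loc:FlowLayoutExtensions.ItemsSource="..." through this pair.
public static IEnumerable GetItemsSource(DependencyObject obj)
{
    return (IEnumerable)obj.GetValue(ItemsSourceProperty);
}

public static void SetItemsSource(DependencyObject obj, IEnumerable value)
{
    obj.SetValue(ItemsSourceProperty, value);
}
```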

And here is the relevant code that runs when ItemsSource changes:

// Private storage for the subscribed handler; a delegate instance created on
// a later call would be a different object, so the one actually subscribed
// must be kept around for unsubscribing.
private static readonly DependencyProperty CollectionChangedHandlerProperty =
    DependencyProperty.RegisterAttached("CollectionChangedHandler", typeof(NotifyCollectionChangedEventHandler),
        typeof(FlowLayoutExtensions), new UIPropertyMetadata(null));

private static void OnItemsSourceChanged(DependencyObject o, DependencyPropertyChangedEventArgs e)
{
    FlowLayoutControl layout = o as FlowLayoutControl;
    if (layout == null)
        return;

    // unsubscribe from the previous source, if we subscribed to it
    INotifyCollectionChanged oldIncc = e.OldValue as INotifyCollectionChanged;
    NotifyCollectionChangedEventHandler oldHandler =
        (NotifyCollectionChangedEventHandler)layout.GetValue(CollectionChangedHandlerProperty);
    if (oldIncc != null && oldHandler != null)
        oldIncc.CollectionChanged -= oldHandler;
    layout.SetValue(CollectionChangedHandlerProperty, null);

    layout.Children.Clear();

    IEnumerable newValue = e.NewValue as IEnumerable;
    if (newValue != null)
    {
        AddItems(layout, newValue);
        INotifyCollectionChanged incc = newValue as INotifyCollectionChanged;
        if (incc != null)
        {
            NotifyCollectionChangedEventHandler collectionChanged = delegate(object s, NotifyCollectionChangedEventArgs args)
            {
                switch (args.Action)
                {
                    case NotifyCollectionChangedAction.Add:
                        AddItems(layout, args.NewItems);
                        break;
                    case NotifyCollectionChangedAction.Remove:
                        RemoveItems(layout, args.OldItems);
                        break;
                }
            };
            layout.SetValue(CollectionChangedHandlerProperty, collectionChanged);
            incc.CollectionChanged += collectionChanged;
        }
    }
}

It first unsubscribes from the previous source's CollectionChanged event (when it had subscribed to it), then clears and repopulates the FlowLayoutControl with items, and finally subscribes to the CollectionChanged event of the new source (when the source supports it), so that later additions and removals are reflected in the control. Note that ObservableCollection&lt;T&gt; implements INotifyCollectionChanged.

Here is the code that adds or removes items:

private static void AddItems(FlowLayoutControl layout, IEnumerable source)
{
    foreach (object item in source)
    {
        GroupBox box = new GroupBox { DataContext = item };
        layout.Children.Add(box);
    }
}

private static void RemoveItems(FlowLayoutControl layout, IEnumerable source)
{
    foreach (object item in source)
    {
        GroupBox match = (from gb in layout.Children.OfType<GroupBox>()
                          where gb.DataContext == item
                          select gb).FirstOrDefault();
        if (match != null)
            layout.Children.Remove(match);
    }
}

AddItems adds a GroupBox instance for each new item and sets its DataContext to the item, while RemoveItems searches for the matching GroupBox instance (based on a DataContext match) and removes it from the FlowLayoutControl's Children collection.

A bit of XAML is required as well. I control the GroupBox appearance through a Style, like this:

<Style TargetType="dxlc:GroupBox">
    <Setter Property="MaximizeElementVisibility" Value="Visible"/>
    <Setter Property="MinimizeElementVisibility" Value="Visible"/>
    <Setter Property="Width" Value="150"/>
    <Setter Property="Header" Value="{Binding Caption}" />
    <Setter Property="Content" Value="{Binding}" />
</Style>

Note the binding of the Content property (remember, I am assigning the current item as DataContext). And here is the FlowLayoutControl instance declaration:

<dxlc:FlowLayoutControl loc:FlowLayoutExtensions.ItemsSource="{Binding}" />

There you go, a MVVM friendly approach.
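For illustration, here is a minimal view-model sketch that could drive the control. The Item class and its Caption property are my own hypothetical names, chosen to match the {Binding Caption} in the Style above; any ObservableCollection&lt;T&gt; set as the DataContext will do:

```csharp
using System.Collections.ObjectModel;

// Hypothetical item type; Caption matches the {Binding Caption} in the Style.
public class Item
{
    public string Caption { get; set; }
}

public class MainViewModel
{
    // ObservableCollection implements INotifyCollectionChanged, so adds and
    // removes are picked up by the attached property's event handler.
    public ObservableCollection<Item> Items { get; private set; }

    public MainViewModel()
    {
        Items = new ObservableCollection<Item>
        {
            new Item { Caption = "First" },
            new Item { Caption = "Second" }
        };
    }
}
```

Setting the FlowLayoutControl's DataContext to the Items collection is then enough: adding an Item creates a new GroupBox on the fly and removing it deletes the matching one.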

Note that this is not fully featured code, but it is a good starting point.

31.8.2011 - corrected demo files

20.9.2011 - uff, uploaded the really proper demo this time

FlowLayoutExtensionsDemo.zip (9.99 kb)

Fixing the combination of NuGet and Team Foundation Server in workgroup configuration: 401 Unauthorized

The problem

A lot of users of Visual Studio 2010 (SP1), Team Foundation Server in workgroup configuration, and NuGet have faced a very annoying problem – we'd often get a 401 Unauthorized error when installing/uninstalling/updating a NuGet package. Apparently it happens only in this combination (I'm not sure whether my host OS – Windows 7 – plays any role in it), and not consistently. But once it starts, the only way to get rid of the errors is to restart Visual Studio.

The only workaround so far was to:

  1. Go Offline with TFS
  2. Manually make the files writable (csproj, packages configuration, etc.) or check them out before step 1
  3. Close Visual Studio
  4. Open Visual Studio
  5. Do NuGet
  6. Close Visual Studio
  7. Open Visual Studio
  8. Go Online with TFS

The steps above were required for every batch of NuGet operations – a huge pain and an absurdly great annoyance with the otherwise excellent NuGet. Needless to say, I was among the people facing this issue. And I got so annoyed that I decided to make a choice at that point: either ditch NuGet or fix it myself (NuGet is an open source project).

Being a developer, I opted for the second choice of course. Was there really a choice? Anyway, here is how my 24 hours of debugging and 15 seconds of fixing went. If you just want to see the solution, feel free to skip to the Solution section below.

Debugging

1. I downloaded NuGet sources.

2. When opening the NuGet solution I quickly found out that I was missing the Visual Studio 2010 SDK (because NuGet is a Visual Studio extension), so I downloaded the one I found on the Internet. It didn't install, saying something about prerequisites not being installed. Ah, one needs the Visual Studio 2010 SP1 SDK. Get it here. Somebody please let the Visual Studio Extensibility Developer Center know that they are listing the outdated SDK.

3. I set NuGet.VsExtension as the startup project and fired up the debugger, which opens another instance of Visual Studio 2010 where I crafted a sample solution for reproducing the dreadful 401. I was able to reproduce it often, but not always.

4. It took me some time to get familiar with the NuGet sources. After that I invested some time in speeding up problem detection (the sooner it is detected, the better) by modifying pieces of the NuGet sources, and after many trials and errors I found that I had to dig deeper, into the bowels of the Team Foundation Server client code.

5. I fired up my preferred tool for debugging assemblies I don't have sources for – .NET Reflector. It works better than using Microsoft's source symbols, and it works for every assembly. It doesn't work perfectly, due to assembly optimizations and other black-magic issues, but it works well enough. Armed with the decompiled TFS client assemblies, I dug deeper and deeper, but couldn't find an obvious fault.

6. I brought up a new weapon, Microsoft Network Monitor, to analyse the network traffic. After all, TFS communication goes over HTTP/SOAP. There I found the first clue to the root of the problem. Normally the TFS client sends a request that is refused by the server with a response saying that NTLM authentication is required. The client then re-sends the request with NTLM authentication and everything works. But when the problem occurs, the client simply doesn't respond to the NTLM challenge – instead it throws a 401 Unauthorized exception without even trying to authenticate against the server. I had no idea why it sometimes worked and sometimes didn't.

Successful communication

Unsuccessful communication

7. At this point I was thinking of enabling System.Net tracing to get more useful information, if possible. I immediately faced a problem: the only way to enable System.Net tracing is through an app.config file, not in code. And I couldn't use an app.config file, because I was debugging a library and a library's app.config file is simply ignored. I looked for a way to enable tracing programmatically, which is possible for user tracing scenarios, but not for System.Net. Bad luck – but there is nothing that can't be fixed with a bit of reflection, like this:

// requires: using System.Diagnostics; using System.Net; using System.Reflection;
private static void InitLogging()
{
    TextWriterTraceListener listener = new TextWriterTraceListener(@"D:\temp\ts.log");
    // System.Net.Logging is an internal class, hence the reflection
    Type type = typeof(HttpWebRequest).Assembly.GetType("System.Net.Logging");
    MethodInfo initl = type.GetMethod("InitializeLogging", BindingFlags.Static | BindingFlags.NonPublic);
    initl.Invoke(null, null);

    // attach the listener to the internal trace sources and raise them to Verbose
    foreach (string s in new string[] { "s_WebTraceSource", "s_HttpListenerTraceSource", "s_SocketsTraceSource", "s_CacheTraceSource" })
    {
        FieldInfo webTsFi = type.GetField(s, BindingFlags.Static | BindingFlags.NonPublic);
        TraceSource webTs = (TraceSource)webTsFi.GetValue(null);
        webTs.Switch.Level = SourceLevels.Verbose;
        webTs.Listeners.Add(listener);
    }
    FieldInfo le = type.GetField("s_LoggingEnabled", BindingFlags.Static | BindingFlags.NonPublic);
    le.SetValue(null, true);
}

And voilà, the thing started to spit a ton of information into the file D:\temp\ts.log. But again, it only showed the symptom and not the cause (trace excerpts after the first request; note that the unsuccessful one doesn't even try to authenticate via NTLM):

System.Net Information: 0 : [10488] Associating HttpWebRequest#51488348 with ConnectStream#13361802
System.Net Information: 0 : [10488] Associating HttpWebRequest#51488348 with HttpWebResponse#7364733
System.Net Information: 0 : [10488] AcquireDefaultCredential(package = NTLM, intent  = Outbound)
System.Net Information: 0 : [10488] InitializeSecurityContext(credential = System.Net.SafeFreeCredential_SECURITY, context = (null), targetName = HTTP/TFS, inFlags = Delegate, MutualAuth, Connection)
System.Net Information: 0 : [10488] InitializeSecurityContext(In-Buffers count=0, Out-Buffer length=40, returned code=ContinueNeeded).
System.Net Warning: 0 : [10488] HttpWebRequest#51488348::() - Resubmitting request.

Successful communication

System.Net Information: 0 : [10488] Associating HttpWebRequest#51488348 with ConnectStream#13361802
System.Net Information: 0 : [10488] Associating HttpWebRequest#51488348 with HttpWebResponse#7364733

Unsuccessful communication

8. At this point I concentrated on debugging the System.Net.HttpWebRequest class, since the re-submitting is not done at the TFS client level. After even more trial and error I was finally able to pinpoint the root of the evil.

The root of the problem

The decision whether or not to try NTLM authentication is based on which Internet zone the OS thinks the request target is in. In other words, if the OS says your TFS server is outside the intranet, then HttpWebRequest won't bother with NTLM authentication at all. It is that simple. The decision lies within PresentationCore's (!) internal CustomCredentialPolicy.InternetSecurityManager class, which delegates the question about the Internet zone to the OS and returns the result to HttpWebRequest. For some reason, at some point it starts returning Internet instead of Intranet. I am not sure exactly why, but I have a remedy. A dramatically simple one that doesn't even involve modifying NuGet (no need to wait for a NuGet fix!).
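If you want to check which zone Windows assigns to your server before touching any settings, a quick sketch like the one below can help. It uses .NET's Zone.CreateFromUrl, which wraps the same OS zone-mapping machinery; the TFS URL is a placeholder for your own server, and I'm assuming the result matches what InternetSecurityManager reports internally:

```csharp
using System;
using System.Security.Policy;

class ZoneCheck
{
    static void Main()
    {
        // Prints the zone the OS maps the URL to, e.g. "Intranet" or
        // "Internet"; the 401 problem corresponds to the server being
        // treated as Internet.
        Zone zone = Zone.CreateFromUrl("http://TFS:8080/tfs");
        Console.WriteLine(zone.SecurityZone);
    }
}
```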

The solution

Open Internet Explorer, go to Internet Options/Security, select the Local intranet icon and click the Sites button

image

In the Local intranet dialog, click Advanced

image

and add your TFS server to the Websites list, like I did with mine (replace TFS with the name of your server)

image

Restart Visual Studio and enjoy NuGet from a new perspective!

This solution apparently solves all of the issues I had with the dreaded 401. Let me know if it works for you as well.

Considerations

The problem might not be related to NuGet at all, but rather to PresentationCore (NuGet is a WPF application), which gets confusing results from the OS through some interop. NuGet/Visual Studio is just a combination that triggers the otherwise dormant problem.

Two Windows 8 feature requests

Microsoft has started blogging about Windows 8 (there is a Twitter account, @BuildWindows8, as well), and I started thinking about what I'd like to see in Windows 8. Off the top of my head, I can think of two features right now:

  1. Make .NET a first-class development tool. You might say that it is, but in reality it isn't. Not all APIs are accessible through .NET. Just look at the (abandoned) Windows® API Code Pack for Microsoft® .NET Framework. Furthermore, it is riddled with bugs. There are other APIs requiring black magic to use them from .NET. Why are we, .NET developers, supposed to mess with that?
  2. Give us a chance to store non-OS-essential files on a separate drive. Starting with the hibernation file, which is roughly as large as your RAM. If that is 12 GB, it means you have to give up 12 GB of OS disk space. You might say who cares, 12 GB is nothing at current disk prices. Sure, unless you look at the obscene prices of non-mainstream disks (i.e. SSDs), where 12 GB matters. A lot. Then there are temporary files, user documents, etc. – a lot of stuff I'd be happy to offload to a cheaper and larger disk. Some of these can be redirected already, but mostly in obscure ways.

That's all for now. What do you think?

SAZAS, the state and computing, part one, v1.1 (translated from Slovene)

A few new facts regarding SAZAS, the state and computing, part one, mostly thanks to @Bekstejdz.

1. SAZAS does not receive the entire levy amount, but "only" 32-40%. The rest is split between Zavod IPF and reserves (whatever that means). See Zavod IPF's annual report for 2009.

2. It appears this levy is not collected by the state, as I originally thought, but by an authorized agent (the authorization is granted by the Slovenian Intellectual Property Office) – until the end of 2009 that was Zavod IPF, whose temporary license then expired. After the expiry, the Office issued no further temporary or permanent collection permit. Both SAZAS and Zavod IPF wanted to obtain the permit, however:

According to the court (judgments no. I U 1080/2010 and I U 1111/2010), to be granted a permanent permit the applicant must demonstrate that it represents all the various parties entitled to the levy. A permanent permit also requires that the levy distribution rules (which must be part of the collective organization's statute) define precise criteria for dividing the levy among the individual beneficiaries.

In short, the problem is not that the state has had second thoughts or anything like that; the problem is merely the distribution key and the fact that no single organization represents all the beneficiaries. Apparently there is only enough money "for one", and it is good for us as long as they quarrel among themselves. Interestingly, neither of these two facts bothered the Office when it issued the first temporary permit to Zavod IPF – did Zavod IPF represent everyone and have a clear distribution key back then?

3. The consequence of point 2 is that since the beginning of 2010 nobody has been enforcing the decree and collecting the levy. At least that is how it looks.

4. The gist of the original post of course still stands: the fact that they temporarily cannot agree on who will fleece us changes practically nothing, and neither does the fact that SAZAS does not receive the entire amount (one would have to ask the importers to find out what actually happens with the levies – does anyone know one?).