Running DevExtreme as MVC project in simulator

DevExtreme has a nice (web) simulator that lets you preview your application on a target device (tablet, phone, iOS, Android).

 

That’s really nice. However, out of the box, it works only with the special type of Visual Studio project that comes with DevExtreme (created through one of its templates). That can be a problem if you have another project type (e.g. MVC) instead – a DevExtreme project is meant for distribution as a packaged “native” application. In that case there is no simulator for you, at least not out of the box.

Luckily, there is a simple way to enable the simulator for any web project. The steps are:

  • find the WebServer folder that is part of the DevExtreme extension for Visual Studio. Mine is located in C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\Extensions\mblaqgom.mw5.
  • from the WebServer folder found in the step above, copy simulator.html along with the Images and Simulator folders to the root of your MVC project.
  • do not copy web.config – it will wreck your application.

That’s it. You can run your application by going to URL/simulator.html?appPage=index.html (assuming your starting page is index.html). Note that you can pass other parameters to the simulator as well, such as device=iPhone and orientation=p.
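For example, assuming the project runs on localhost (the port is a placeholder), the full URL could look like this:

http://localhost:12345/simulator.html?appPage=index.html&device=iPhone&orientation=p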

If you want to omit the appPage=index.html parameter, you can rename index.html to app.html and the simulator will pick up app.html by default.

Incoming TypeScript definitions for PhoneJS and ChartJS (DevExtreme)

PhoneJS and ChartJS are DevExpress’ efforts at providing single-page web applications for mobile platforms (think tablets and phones). Together they form the DevExtreme combo. In practice they are pure JavaScript libraries without ties to any server-side platform – which is good – and they are built with KnockoutJS, jQuery and friends in mind (really plenty of nice features, check the hyperlinks). They can also be packaged into “native” applications for various platforms using PhoneGap. An added bonus is a simulator (again in JavaScript) that lets you preview applications. All you need to run a project based on DevExtreme is a web server – any web server on any OS, since it is all client-side.

I recently poked around these technologies and soon felt that one rather important feature (to me at least) was missing: TypeScript definition files. Until now, that is. They are coming with v13.2 and you can already preview them in the beta. Plus, there is a template that lets you start your project with TypeScript code instead of JavaScript. While this might not seem like a big deal, it is. TypeScript is a huge boost for JavaScript development, even more so when it comes to bigger projects. Hence it is a big deal to be able to use all of the TypeScript goodies together with the DevExtreme goodies. And now I can.

Other improvements are coming as well (improved theming, improved and new widgets, localization…). While I can’t call myself experienced with DevExtreme, it will definitely be my first choice for new projects.

Solving Blend 2013 interactivity error

I am playing with Blend 2013 Silverlight SketchFlow for creating web application mockups. Recently I stumbled upon a curious error when adding interactivity to a button.

Here is the repro:

Create two screens and a button on the first screen. So far so good. Right-click the button and select Navigate To/Screen 2. Blend will add a navigation behavior, like:

<i:Interaction.Triggers>
    <i:EventTrigger EventName="Click">
        <pi:NavigateToScreenAction TargetScreen="SilverlightPrototype2Screens.Screen_1"/>
    </i:EventTrigger>
</i:Interaction.Triggers>

It will also underline pi:NavigateToScreenAction with a red squiggle, complaining that the type NavigateToScreenAction from the assembly Microsoft.Expression.Prototyping.Interactivity is built with an older version of the Blend SDK and is not supported in Silverlight 5 projects.

A puzzling error. After playing around I discovered that the problem was with the referenced System.Windows.Interactivity assembly: somehow Blend had added a reference to version 4.0.5.0 instead of 5.0.5.0.

I am not sure what caused the wrong version to be referenced, but the solution is quite simple: just remove the reference (in both projects) and reference the proper version. The Blend application then springs back to life.

Investigating why an instance is kept alive in a .net application

Sometimes, when I want to verify how .NET memory management behaves, I fire up ANTS Memory Profiler and run it on a test application just to see what happens.

So today I went and created this application

class Program
{
    static void Main(string[] args)
    {
        A a = new A();
        B b = new B { Pointer = a };
        Console.WriteLine("One");
        Console.ReadLine();
        b.Pointer = null;
        b = null;
        Console.WriteLine("Two");
        Console.ReadLine();
        a = null;
        Console.WriteLine("Three");
        Console.ReadLine();
        a = new A();
        Console.WriteLine("Four");
        Console.ReadLine();
    }
}

class A
{

}

class B
{
    public A Pointer;
}

The ReadLine calls allow me to hit “Take Memory Snapshot” in the profiler. Later I examined these snapshots, especially the one between One and Two. How many instances would you expect to be alive at that point? I’d say one: the instance of class A (referenced by a).

However, the profiler was, surprisingly, showing two. According to it, both an instance of A and an instance of B were still very much alive. Even more surprisingly, after Four there was still an instance of B around. How is this possible? There are no references to the B instance and b itself is set to null. A bug in the Memory Profiler? Hardly likely.

Right before posting a question on RedGate’s forum I decided to check the IL code – with .NET Reflector, of course. I immediately saw the reason for the odd behavior. Can you spot it?

.method private hidebysig static void Main(string[] args) cil managed
{
    .entrypoint
    .maxstack 2
    .locals init (
        [0] class ConsoleApplication213.A a,
        [1] class ConsoleApplication213.B b,
        [2] class ConsoleApplication213.B b2)
    L_0000: nop 
    L_0001: newobj instance void ConsoleApplication213.A::.ctor()
    L_0006: stloc.0 
    L_0007: newobj instance void ConsoleApplication213.B::.ctor()
    L_000c: stloc.2 
    L_000d: ldloc.2 
    L_000e: ldloc.0 
    L_000f: stfld class ConsoleApplication213.A ConsoleApplication213.B::Pointer
    L_0014: ldloc.2 
    L_0015: stloc.1 
    L_0016: ldstr "One"
    L_001b: call void [mscorlib]System.Console::WriteLine(string)
    L_0020: nop 
    L_0021: call string [mscorlib]System.Console::ReadLine()
    L_0026: pop 
    L_0027: ldloc.1 
    L_0028: ldnull 
    L_0029: stfld class ConsoleApplication213.A ConsoleApplication213.B::Pointer
    L_002e: ldnull 
    L_002f: stloc.1 
    L_0030: ldstr "Two"
    L_0035: call void [mscorlib]System.Console::WriteLine(string)
    L_003a: nop 
    L_003b: call string [mscorlib]System.Console::ReadLine()
    L_0040: pop 
    …
}

For some reason the compiler decided to allocate two local variables for class B: b and b2. When the instance of class B is created, it is stored in b2. After the Pointer field is assigned, the same reference is copied to b, while b2 is never set to null. So when b is later set to null, b2 still references the object and keeps the instance count for class B at 1 – even though b is null and there are no other C# variables referencing instances of B.
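Roughly speaking, the object initializer is expanded into something like the following C# – a sketch of what the IL above shows (the exact shape depends on the compiler version and optimization settings):

A a = new A();
B temp = new B();    // the hidden local – b2 in the IL above
temp.Pointer = a;    // the initializer assignment happens on the temporary
B b = temp;          // only now is the reference copied to b; temp is never cleared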

Mystery solved. The lessons learned are that the compiler might generate code one wouldn’t expect and that memory profiling isn’t always black and white – those 50 shades of gray happen as well. In other words, experience and a good understanding of .NET are a necessity.

When .net assemblies aren’t properly versioned odd errors happen

Today I upgraded the SignalR library from 1.x to 2.0 in the MVC 5 (server) and WPF (client) projects I am developing. The upgrade itself went smoothly, but afterwards the client started throwing odd errors and refusing to connect. It threw an exception saying something like “transport connection timed out” (I can’t remember the exact message). Since it was a rather simple upgrade, this error made no sense. Google didn’t help much either.

So I went debugging.

Investigation

The first step is to create a simple repro sample – a solution with a basic server and a console client (see the attached zip).

The second step is to turn on SignalR tracing on the client:

hubConnection.TraceLevel = TraceLevels.All;
hubConnection.TraceWriter = Console.Out;

Armed with these two steps I was able to get an error that made a bit more sense, although it was even more bizarre.

18:43:53.9073548 - null - ChangeState(Disconnected, Connecting)
18:43:55.4913444 - b8fd1874-483d-4d89-8a69-c2cfe33cf946 - WS Connecting to: ws://localhost:53705/signalr/connect?transport=webSockets&connectionToken=0dQ3scw2aJ7RbXuUsTfa4wlY%2FJlMzW%2FVL1sVb%2FyswewEO8n4qAth7Erpt0ga0laLV%2B8BCD833lZZy1MblOe0HuQs0O1mYi%2FfiiLSHc5%2F%2B1wk6tUsbFxMKVQFwRO9BvSW&connectionData=[{"Name":"formsHub"}]
18:43:55.5800651 - b8fd1874-483d-4d89-8a69-c2cfe33cf946 - WS: OnMessage({"C":"d-5AB67EA2-B,0|C,0|D,1|E,0","S":1,"M":[]})
18:43:55.5940751 - b8fd1874-483d-4d89-8a69-c2cfe33cf946 - OnError(System.MissingMethodException: Method not found: 'System.Collections.Generic.IEnumerator`1 Newtonsoft.Json.Linq.JArray.GetEnumerator()'. at Microsoft.AspNet.SignalR.Client.Transports.TransportHelper.ProcessResponse(IConnection connection, String response, Boolean& shouldReconnect, Boolean& disconnected, Action onInitialized) at Microsoft.AspNet.SignalR.Client.Transports.WebSocketTransport.OnMessage(String message) at Microsoft.AspNet.SignalR.WebSockets.WebSocketHandler.d__e.MoveNext())
18:43:55.6110864 - b8fd1874-483d-4d89-8a69-c2cfe33cf946 - WS: OnClose()

It says something like: I can’t find a method in the Newtonsoft.Json assembly. That’s totally weird, as this kind of error should be caught at compile time.

I went looking into the Newtonsoft.Json assembly referenced by SignalR 2.0. It correctly references version 5.0.8, and the assembly in the packages folder does have the method reported as missing. A mystery. One thing that caught my eye while reflecting over the assembly was its version: while the file version is 5.0.8, the assembly version is, oddly, 4.5.0.0. How come?

The fourth step was looking into the modules loaded by my application; they are listed in Visual Studio under Debug/Windows/Modules. Amazingly, Visual Studio was listing Newtonsoft.Json v4.05.8.15203 loaded from … the GAC. Not from my package. And judging by its date it was clearly an older version (4.8.2012). Heck, I didn’t even know it was in the GAC. I didn’t put it there, so some other application must have.
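As an aside, the same information can be obtained from code. Here is a hypothetical diagnostic snippet (not part of the original investigation) that prints which Newtonsoft.Json build actually got loaded:

var jsonAssembly = typeof(Newtonsoft.Json.Linq.JArray).Assembly;
Console.WriteLine(jsonAssembly.FullName);    // assembly version (4.5.0.0 in this case)
Console.WriteLine(jsonAssembly.Location);    // reveals whether it was loaded from the GAC
Console.WriteLine(System.Diagnostics.FileVersionInfo.GetVersionInfo(jsonAssembly.Location).FileVersion);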

 

From my previous googling I remembered an explanation somewhere that the missing GetEnumerator method was introduced in version 5.0.5. So this was probably it: .NET was loading an older version from the GAC, one that really was missing that method. At this point I knew what was wrong.

Just to make sure, I checked whether the assembly really was in the GAC:

C:\Windows\system32>gacutil /l newtonsoft.json
Microsoft (R) .NET Global Assembly Cache Utility.  Version 4.0.30319.18020
Copyright (c) Microsoft Corporation.  All rights reserved.

The Global Assembly Cache contains the following assemblies:
  newtonsoft.json, Version=4.5.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed, processorArchi
tecture=MSIL

Number of items = 1

It was.

Remedy

The proper way would be to have a properly versioned Newtonsoft.Json assembly (they are strong-name signed as well). That’s what versioning is there for, and .NET wouldn’t get confused. I really wonder why James uses the same assembly version for assemblies that should really have different identities. Hopefully he will fix this problem in future versions. But until then there is …

… the next best way: force .NET not to load it from the GAC. If the assembly is removed from the GAC, .NET won’t find the wrong version anymore. See, .NET always tries the GAC first, and only if it doesn’t find a suitable assembly there does it go looking elsewhere, such as the folder where your application is. Perhaps I could configure it to load the proper assembly through the .config file, but this solution is better – it will solve the problem unless an older Newtonsoft.Json gets reinstalled into the GAC again. Hm.

Removing an assembly from the GAC should be a rather simple process. Just run

GACUTIL /U Newtonsoft.Json

and it should be removed. But not this time. Instead I got this warning:

C:\Windows\system32>gacutil /u newtonsoft.json
Microsoft (R) .NET Global Assembly Cache Utility.  Version 4.0.30319.18020
Copyright (c) Microsoft Corporation.  All rights reserved.


Assembly: newtonsoft.json, Version=4.5.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed, proces
sorArchitecture=MSIL
Unable to uninstall: assembly is required by one or more applications
Pending references:
              SCHEME: <WINDOWS_INSTALLER>  ID: <MSI>  DESCRIPTION : <Windows Installer>
Number of assemblies uninstalled = 0
Number of failures = 0

Basically it is saying that some installed application put the assembly into the GAC and is now preventing me from removing it. Ugh, nasty. To make it worse, it doesn’t say which application. This article describes how to “unreference” the assembly so it can be removed from the GAC: you have to remove its entry from the registry and then execute the gacutil removal command again.

 

After the registry modification it succeeded:

C:\Windows\system32>gacutil /u newtonsoft.json
Microsoft (R) .NET Global Assembly Cache Utility.  Version 4.0.30319.18020
Copyright (c) Microsoft Corporation.  All rights reserved.


Assembly: newtonsoft.json, Version=4.5.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed, proces
sorArchitecture=MSIL
Uninstalled: newtonsoft.json, Version=4.5.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed, pro
cessorArchitecture=MSIL
Number of assemblies uninstalled = 1
Number of failures = 0

The aftermath

Removing that old Newtonsoft.Json (its registry reference first, then the assembly from the GAC) did the trick. SignalR works like a charm again. However, I had to manually remove a component of an unidentified application, and this removal might cause problems for that application in the future.

I really hope that Newtonsoft.Json gets proper versioning to avoid situations like this. This should be a lesson to every library developer out there – do use proper version numbering!

SignalRTest.zip (4.69 mb)

UPDATES:

12.11.2013: James, the author, is looking into a possible solution. See https://json.codeplex.com/discussions/465332#post1122165.

Android activity life cycle and IoC

I was recently working on an Android (Xamarin, .NET) application based on the MvvmCross framework. Actually, not just an Android app, since it could be ported quite easily to other platforms such as Windows Phone 8 or iPhone. Anyway, I was using the inversion of control principle in a slightly incorrect way and thus made a mistake that revealed itself only a while after deployment.

Here are the symptoms: the application would cold start fine and would resume fine when the resume happened within a reasonable time (how long exactly depends on the device; in my case it could be hours and it would still work perfectly), but it wouldn’t resume well if the timespan was too long (e.g. a day or several hours). Instead of displaying the activity content, all I got was an action bar with a title, without any content or menu items.

By looking at Android’s log on the device itself, it was clear that there were problems with the IoC Resolve method. It yielded an “object reference is null” type of exception. I was resolving the reference in the activity’s constructor. An odd error, since MvvmCross is supposed to trigger IoC registration before the first activity is run. But somehow it wasn’t. I mentioned this to Stuart (@slodge, the man behind MvvmCross) and he instantly pointed out the mistake I had made: I had neglected the Android activity lifecycle. I immediately understood the problem. I am used to passing IoC references into constructors as arguments, which is mostly fine – except when it comes to Android activities. The thing is that the entry point of a suspended application (with the activity destroyed) is the activity itself, more precisely its constructor. MvvmCross does trigger the IoC registration correctly, but, of course, only after the activity is created – hence the errors in this particular situation.

The solution is fairly simple: move the IoC resolution into the OnCreate method, and no sooner – that’s the point where you can be certain that MvvmCross initialization is done.

Before:

public abstract class SomeClass<T> : Activity
{
    protected readonly IFragmentPresenter fragmentPresenter;

    public SomeClass(): this(Mvx.Resolve<IFragmentPresenter>())
    {}

    public SomeClass(IFragmentPresenter fragmentPresenter)
    {
        this.fragmentPresenter = fragmentPresenter;
    }
    ...
}

After:

public abstract class SomeClass<T> : Activity
{
    protected IFragmentPresenter fragmentPresenter;

    protected override void OnCreate (Bundle bundle)
    {
        base.OnCreate (bundle);
        fragmentPresenter = Mvx.Resolve<IFragmentPresenter>();
    }
    ...
}

This change did the trick. It takes a bit more work to inject a mock reference when testing, but that’s not a big deal (see the sketch below).
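For example, a test could pre-register a fake presenter before the activity is created – a minimal sketch, where MockFragmentPresenter is a hypothetical test double:

// in the test setup, before the activity's OnCreate runs
Mvx.RegisterSingleton<IFragmentPresenter>(new MockFragmentPresenter());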

Now, if you followed the post you might be asking: why the heck does the application cold start just fine, when it should hit the same error? The explanation is rather simple – I am using a splash-screen activity on cold start. It doesn’t involve any IoC resolution at all, but it does initialize MvvmCross. So by the time my problematic activity is reached, the IoC is already in place.

There you have it. Respect the Android activity lifecycle or it will bite you.

Visual Studio 2013 brings IntelliSense for Data Binding to XAML editor

One of the better improvements in Visual Studio 2013 is IntelliSense for data binding in the XAML editor. The improvement is described in this blog post. Very nice. But what the article fails to mention is how one gets that mystical d namespace. In fact, it is not exactly easy to find the proper declaration.

After some googling around I found this declaration:

xmlns:d="http://schemas.microsoft.com/expression/blend/2008"

Then I tried applying a d:DataContext to the first element in the default window template – the Grid – like this:

<Grid d:DataContext="{d:DesignInstance Type=local:Tubo}">
    <TextBlock Text="{Binding Xul}" />
</Grid>

That is supposed to work, but it doesn’t (I have a simple class Tubo with a single property Xul in the local namespace). Instead of compiling, it threw this error at me:

The property 'DataContext' must be in the default namespace or in the element namespace 'http://schemas.microsoft.com/winfx/2006/xaml/presentation'.

Yeah, right. After some more googling I found that I had to add yet another namespace and an Ignorable attribute to the mix:

xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
mc:Ignorable="d"

These two lines did the trick: the error was gone, the project compiled and IntelliSense came alive.

Here is my whole XAML:

<Window x:Class="WpfApplication73.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:local="clr-namespace:WpfApplication73"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        mc:Ignorable="d"
        Title="MainWindow" Height="350" Width="525">
    <Grid d:DataContext="{d:DesignInstance Type=local:Tubo}">
        <TextBlock Text="{Binding Xul}" />
    </Grid>
</Window>
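For completeness, the Tubo class is as simple as described above (the string type of the Xul property is my assumption):

namespace WpfApplication73
{
    public class Tubo
    {
        public string Xul { get; set; }
    }
}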

Hopefully this post will save some time for others trying to get IntelliSense working…

Righthand’s DataSet Debugger Visualizer supports VS2013

Highlights of the new version, 1.0.11:

  • added VS2013 version
  • added “separate assemblies” versions. Until now I was using RedGate’s SmartAssembly to pack all referenced assemblies into a single DLL file for easier management and distribution. However, this black magic might cause problems in certain situations (Visual Studio add-ins practically scream for problems). Thus I’ve added another set that ships the assemblies as separate files. The bottom line: if you have problems, or you just want to be on the safe side, use the latter set.

Go nuts!

Poor man’s performance profiling

If you have been developing with Xamarin on Android you have probably noticed the total absence of any profiler; there isn’t even a decent way to get high-resolution time. Sure, there is Environment.TickCount, but its resolution is at best in milliseconds – pretty useless for performance profiling.

People rarely use performance profilers on the desktop, mostly because their applications aren’t doing anything complicated and modern CPUs hide most of the badly performing code. However, mobile devices are far more delicate: less CPU power, less RAM, fewer resources in general. Slow code is therefore much more noticeable and visible to end users. There is no better way to profile your code than with a performance profiler – just take a look at RedGate’s ANTS Performance Profiler. Unfortunately there are no such tools for the Xamarin Android platform, even though mobile code should really be optimized.

Here I present my solution for getting some usable metrics from your code (the code you have the sources for). The code I’ve written was used for MvvmCross performance profiling, hence it relies slightly on it (logging, IoC); however, it can easily be made standalone if necessary.

Getting high resolution time

Java has the java.lang.System.nanoTime() method, which returns the highest resolution time available on Android. Xamarin doesn’t expose it through .NET, so you have to write a wrapper yourself using Java reflection (JNIEnv):

public static class JavaLangSystem
{
    static IntPtr class_ref;
    static IntPtr id_nanoTime;
    public readonly static long Avg;

    static JavaLangSystem()
    {
        class_ref = JNIEnv.FindClass("java/lang/System");
        id_nanoTime = JNIEnv.GetStaticMethodID(class_ref, "nanoTime", "()J");

        for (int i = 0; i < 10; i++)
        {
            NanoTime();
        }

        long start = NanoTime();
        for (int i = 0; i < 1000; i++)
        {
            NanoTime();
        }
        Avg = (NanoTime() - start) / 1000;
        MvxTrace.TaggedTrace("metrics", "Avg NanoTime reflection is {0}", Avg);
    }

    [Register("nanoTime", "()J", "")]
    public static long NanoTime()
    {
        return JNIEnv.CallStaticLongMethod(class_ref, id_nanoTime);
    }
}

Note that I do some calibration in the static constructor because, due to reflection, the call to nanoTime() itself consumes some time. I store the average call time in the static field Avg, which I'll use later.

Accessing high resolution timer through IoC

The first part is to create an interface

public interface IHighResolution
{
    long Ticks { get; }
    long Avg { get; }
}

then implement it

public class HighResolution: IHighResolution
{

    #region IHighResolution Members

    public long Ticks
    {
        get { return JavaLangSystem.NanoTime(); }
    }

    public long Avg
    {
        get { return JavaLangSystem.Avg; }
    }

    #endregion
}

and finally register it with the IoC provider.

protected override IMvxApplication CreateApp()
{
    Mvx.RegisterType<IHighResolution, HighResolution>();
    return new App();
}

I register it in my MvxAndroidSetup-derived class (MvvmCross initialization), but if you are not using MvvmCross, feel free to register it with the IoC provider of your choice at some other initialization point.

This separation comes in especially handy if you need to measure code in PCL assemblies, where nanoTime() isn’t directly accessible (see the sketch below).
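A minimal usage sketch from platform-independent code, assuming the registration above has already run (DoSomeWork is a hypothetical stand-in for the code being measured):

IHighResolution clock = Mvx.Resolve<IHighResolution>();
long start = clock.Ticks;
DoSomeWork();
// subtract the average nanoTime() call cost to compensate for the measurement overhead
long elapsed = clock.Ticks - start - clock.Avg;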

 

Define a class that will store a single measurement

[DebuggerDisplay("{Name} in {Ticks}")]
public class PerfMeasurement
{
    public string Name { get; set; }
    public List<PerfMeasurement> Children;
    public long Ticks { get; set; }

    public int AllChildren
    {
        get
        {
            if (Children == null || Children.Count == 0)
                return 0;
            else
            {
                int sum =0;
                foreach (PerfMeasurement m in Children)
                {
                    sum += 1 + m.AllChildren;
                }
                return sum;
            }
        }
    }
}

There are two interesting features here. The first is the Children field, which holds nested measurements. The other is the AllChildren property, which recursively counts all descendants.

The main measurement class

public static class Metrics
{
    private static readonly IHighResolution highresolution;
    private static readonly long twoAvg;
    
    public static List<PerfMeasurement> RootData { get; private set; }
    private static Stack<PerfMeasurement> stack;
    //         private static int counter;
    
    static Metrics()
    {
        RootData = new List<PerfMeasurement>(100000);
        stack = new Stack<PerfMeasurement>();
        highresolution = Mvx.Resolve<IHighResolution>();
        twoAvg = highresolution.Avg * 2;
    }
    
    public static PerfMeasurement Push(string name)
    {
        PerfMeasurement head;
        if (stack.Count == 0)
        {
            head = new PerfMeasurement { Name = name };
            RootData.Add(head);
        }
        else
        {
            PerfMeasurement previous = stack.Peek();
            head = new PerfMeasurement { Name = name };
            if (previous.Children == null)
                previous.Children = new List<PerfMeasurement>();
            previous.Children.Add(head);
        }
        stack.Push(head);
        return head;
    }
    
    public static void Pull()
    {
        stack.Pop();
        if (stack.Count == 0)
        {
            PerfMeasurement m = RootData[RootData.Count - 1];
            OffsetParents(m);
            Print(m);
        }
    }
    
    private static void OffsetParents(PerfMeasurement m)
    {
        m.Ticks -= m.AllChildren * twoAvg;
        if (m.Children != null)
        {
            foreach (PerfMeasurement child in m.Children)
                OffsetParents(child);
        }
    }
    
    public static void Print(PerfMeasurement metric)
    {
        StringBuilder sb = new StringBuilder();
        Print(metric, sb, 0);
        MvxTrace.TaggedTrace("metrics", sb.ToString());
    }
    
    public static void Print(PerfMeasurement metric, StringBuilder sb, int depth)
    {
        string prepend = new string(' ', depth * 4);
        sb.AppendLine(string.Format("{0}{1} took {2:#,##0}", prepend, metric.Name, metric.Ticks));
        if (metric.Children != null && metric.Children.Count > 0)
        {
            foreach (PerfMeasurement child in metric.Children)
            Print(child, sb, depth + 1);
        }
    }
}

This class acts like a stack for the measurement items defined earlier. When an item at the root level is pulled, it outputs the results of the whole hierarchy linked to it to the (MvvmCross) logging mechanism (the Print methods). But before doing that, it recursively subtracts the nanoTime() invocation cost of all children (OffsetParents – for each child it assumes nanoTime() took two average invocations, hence the static field twoAvg).

That’s all we need. However, there is one more step that will make measuring a bit easier and cleaner.

 

Making it easier to use Metrics with a helper class

public class MeasurePerf : IDisposable
{
    private static readonly IHighResolution highresolution;

    private long start;
    private PerfMeasurement m;

    static MeasurePerf()
    {
        highresolution = Mvx.Resolve<IHighResolution>();
    }

    public MeasurePerf(string name)
    {
        m = Metrics.Push(name);
        this.start = highresolution.Ticks;
        
    }

    public void Dispose()
    {
        m.Ticks += highresolution.Ticks - start - highresolution.Avg;
        Metrics.Pull();
    }
}

This class is intended as a wrapper around the measured code, using the using keyword. A measurement looks like this:

using (new MeasurePerf("Your comment here"))
{
    // your code here
    // nesting measuring is perfectly fine
}

Here is a sample output I’ve been looking at:

[Screenshot: profiler results]

I highlighted the root item. All items show their total time (children included). Note the negative numbers: they happen because this kind of measurement isn’t accurate at all (remember, I am subtracting twoAvg per child). However, it gives you at least some insight into which code is slow and which child is the culprit.

So, that’s it. Until Xamarin gives us profilers there is no better way, AFAIK. Unfortunately. There is one thing you can do, though: tell Xamarin that we need profilers, perhaps through my uservoice suggestion.