An alpha version of a memory profiler for Xamarin Android

Memory profiling is one of the pillars of a good application. It is even more important on mobile devices, since memory is usually scarce there. But sadly, there is no memory profiler for Xamarin Android development. Actually, there is some raw support, but it lacks a UI and friendliness. Hence I’ve built my own memory profiler.

This is an alpha quality version.

How it works

Since version 4.14 Xamarin has had (not very visible) support for memory profiling. But even with this support it isn’t easy to profile. With each garbage collection a log file containing all the relevant data in a text format is written on the device. You turn profiling on with

adb shell setprop debug.mono.profile log:heapshot

Note that this setting is global and affects all Xamarin apps on the device. Remember to turn it off afterwards if you don’t need it anymore, to avoid, I guess, a huge performance hit.
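Presumably, clearing the property turns profiling off again (this is an assumption on my part; I haven’t explored other ways of disabling it):

adb shell setprop debug.mono.profile ""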

There are two problems with the existing support here:

  • hard to trigger a garbage collection manually
  • hard to analyze data

My solution tackles both problems. The solution is divided into two parts.

Server side

The library available through NuGet does two things:

  • broadcasts its presence through UDP
  • allows triggering garbage collection from client

The library supports API Level 4 (Xamarin v1.6) and above.

Here is the NuGet package URL. You can also install it using this Package Manager Console command:

Install-Package RhMemProfiler.Server -Pre

The package is marked as prerelease, so it won’t appear in the NuGet Package Manager unless Include Prerelease is selected.

Once the package has been added, go ahead and create a global variable of type RhMemProfiler.Server.Beacon and initialize it with a friendly name and the package name, like:

RhMemProfiler.Server.Beacon beacon;
….
beacon.Start("your friendly name", PackageName);

PackageName is the name of your package; it is a property of ContextWrapper.
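For context, here is a minimal sketch of how this could look inside an activity (the parameterless Beacon constructor and the exact call site are my assumptions; adjust to the actual API):

using Android.App;
using Android.OS;

[Activity(Label = "MemProfilerDemo", MainLauncher = true)]
public class MainActivity : Activity
{
    RhMemProfiler.Server.Beacon beacon;

    protected override void OnCreate(Bundle bundle)
    {
        base.OnCreate(bundle);
        SetContentView(Resource.Layout.Main);

        // Start broadcasting the profiler beacon; PackageName is inherited from ContextWrapper.
        // NOTE: a parameterless constructor is assumed here.
        beacon = new RhMemProfiler.Server.Beacon();
        beacon.Start("my friendly name", PackageName);
    }
}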

Client side

This is a Windows application distributed through ClickOnce. After installation you’ll be prompted to open the firewall for the application. This is required since it, well, uses UDP and TCP to communicate with the server side.

Private networks access is enough unless your mobile device is running on a public network.

Once that is done you’ll be presented with a main window:

[screenshot: the profiler’s main window]

If your application is running on the mobile device/emulator, you’ll see at least one option (one per IP on the device) in the combo box to the left of the Connect button. The list contains the friendly name (used when initializing the server side) and the IP address. Pick any of them (the IP doesn’t matter at this point) and hit the Connect button.

If the connection is successful you’ll see the text Connected in the bottom left corner. If this is the first time you are connecting and/or memory profiling isn’t enabled yet on the device, you’ll be presented with the Enable Profiling button, like:

[screenshot: the Enable Profiling button]

Enable profiling, restart the mobile application, disconnect, connect again, and you are good to start memory profiling:

[screenshot: the profiler connected and ready for profiling]

Once there have been at least two garbage collections, each followed by a snapshot, you can compare them by selecting the start and end versions.

  • Collect Garbage … forces garbage collection on the mobile device
  • Snapshot … collects memory profiling log file from the device
  • Disable profiling … disables profiling on the device (profiling is per device, not per application – it affects all Xamarin apps on that device)
  • Only with source … shows only types from your package
  • Group panel … lets you group columns
  • Start and End … memory profile versions to compare

Test

Imagine I have a class

public class MemLeak
{
    public string Value;
}

and a list of these defined somewhere. If I force a garbage collection and take a snapshot before, when the list is empty, and again after one instance of MemLeak has been added, I’ll see something like this when comparing the two:

[screenshot: comparison showing the MemLeak instance count increase]

It is pretty obvious that the count of MemLeak instances increased by 1 and that there are a total of two instances around (I actually created one earlier, which is why there are two).
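For illustration, the scenario above boils down to something like this (a hypothetical reconstruction of my test code, not the exact source):

List<MemLeak> leaks = new List<MemLeak> { new MemLeak() }; // the instance created earlier

// force a garbage collection and take a snapshot (the start version)

leaks.Add(new MemLeak()); // the instance added between the two snapshots

// force another garbage collection, take a second snapshot (the end version) and compare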

Memory profiling strategy

Memory profiling works by comparing state before and after: usually a garbage collection is triggered before some action and again after it. If the number of instances of some type increases without a valid reason, the application is leaking memory. Professional memory profilers even help you pinpoint the source of the leaks; in our case, however, Xamarin doesn’t make that data available through this strategy.

Known limitations

  • ADB has to be in the path.
  • Only one device or emulator can be connected to the computer at the same time (the combo lets you pick the application but not the IP)

Final words

This is an alpha release, so be gentle with it and don’t complain if your computer explodes because of it. It is something I wanted to release even at this stage so you can play with it, and it is actually pretty functional. Feedback and questions are appreciated, but I don’t guarantee anything; development is done in my spare time (like there is any :)) for now. Hope you’ll find it useful.

A memory profiler for Xamarin.Android

Here is a first ever snapshot of my home-grown memory profiler for Xamarin.Android.

While very spartan, it does the core job: comparing two snapshots for objects with growing instance counts (a.k.a. memory leaks). An Autofac.Core.IRegistrationSource[] array is showing its references (one root, one to a List<>).

There you go. Interested?

PS: The sample features DevExpress WPF components, chiefly XpfGrid.

Investigating why an instance is kept alive in a .net application

Sometimes, when I want to verify .net memory management behavior, I fire up ANTS Memory Profiler and run it on a test application just to see how memory management behaves.

So today I went and created this application

class Program
{
    static void Main(string[] args)
    {
        A a = new A();
        B b = new B { Pointer = a };
        Console.WriteLine("One");
        Console.ReadLine();
        b.Pointer = null;
        b = null;
        Console.WriteLine("Two");
        Console.ReadLine();
        a = null;
        Console.WriteLine("Three");
        Console.ReadLine();
        a = new A();
        Console.WriteLine("Four");
        Console.ReadLine();
    }
}

class A
{

}

class B
{
    public A Pointer;
}

The ReadLine calls allow me to hit “Take Memory Snapshot” in the profiler. Later I examined these snapshots, especially the one taken after “Two” (i.e. after both b.Pointer and b have been set to null). What would you expect – how many instances are alive at that point? I’d say one, the instance of class A (referenced by a).

However, the profiler was surprisingly showing two. According to it, both an instance of A and an instance of B were still very much alive. This result surprised me. Even more surprisingly, after Four there was still an instance of B around. How is this possible? b is null and nothing else references the B instance. A bug in Memory Profiler? Hardly.

Right before I was going to post a question on Red Gate’s forum I decided to check the IL code – with .NET Reflector, of course. Immediately I saw the reason for the odd behavior. Can you spot it?

.method private hidebysig static void Main(string[] args) cil managed
{
    .entrypoint
    .maxstack 2
    .locals init (
        [0] class ConsoleApplication213.A a,
        [1] class ConsoleApplication213.B b,
        [2] class ConsoleApplication213.B b2)
    L_0000: nop 
    L_0001: newobj instance void ConsoleApplication213.A::.ctor()
    L_0006: stloc.0 
    L_0007: newobj instance void ConsoleApplication213.B::.ctor()
    L_000c: stloc.2 
    L_000d: ldloc.2 
    L_000e: ldloc.0 
    L_000f: stfld class ConsoleApplication213.A ConsoleApplication213.B::Pointer
    L_0014: ldloc.2 
    L_0015: stloc.1 
    L_0016: ldstr "One"
    L_001b: call void [mscorlib]System.Console::WriteLine(string)
    L_0020: nop 
    L_0021: call string [mscorlib]System.Console::ReadLine()
    L_0026: pop 
    L_0027: ldloc.1 
    L_0028: ldnull 
    L_0029: stfld class ConsoleApplication213.A ConsoleApplication213.B::Pointer
    L_002e: ldnull 
    L_002f: stloc.1 
    L_0030: ldstr "Two"
    L_0035: call void [mscorlib]System.Console::WriteLine(string)
    L_003a: nop 
    L_003b: call string [mscorlib]System.Console::ReadLine()
    L_0040: pop 
    …
}

For some reason, the compiler decided to allocate two local variables for class B: b and b2. When the instance of class B is created it is stored in b2. After the Pointer field is assigned, the same reference is copied to b, while b2 is never set to null. So when b is later set to null, b2 still references the instance, keeping the instance count for class B at 1 – even though b is null and there are no other C# variables referencing instances of B. This is how the compiler lowers the object initializer syntax (new B { Pointer = a }): the object is constructed and initialized through a hidden temporary and only then assigned to b, and in a Debug build that temporary stays alive as a local for the rest of the method.
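In rough C# terms, the generated code behaves as if it had been written like this (a paraphrase of the IL above, not actual source):

A a = new A();
B b2 = new B();     // hidden temporary introduced for the object initializer
b2.Pointer = a;
B b = b2;           // b2 keeps referencing the same instance and is never cleared

// ... WriteLine("One") / ReadLine ...

b.Pointer = null;
b = null;           // the B instance stays reachable through b2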

Mystery solved. The lessons learned are that the compiler might generate code one wouldn’t expect and that memory profiling isn’t always black and white – those 50 shades of gray happen as well. In other words, experience and a good understanding of .net are a necessity.

ANTS Memory Profiler 5 and its expiration message

I installed the ANTS Memory Profiler 5 beta a couple of weeks ago to try it out. Unfortunately I haven’t had time to really use it, but it sure looks like a great memory profiler – easy to use and very powerful. So I started it again today, only to get these two funny messages:

[screenshot: first expiration message]

[screenshot: second expiration message]

The first one is really funny. Note that a newer build with bug fixes is available and upgrading is easy.

Here is a funny splash screen as well:

[screenshot: ANTS Memory Profiler splash screen]

It is really refreshing to see un-boring messages and splash screens. That said, you should give it a try – you can get the public beta from Red Gate’s forum.

Make XtraVerticalGrid fast as a bullet

Recently I’ve discovered that XtraVerticalGrid, a nice vertical grid from [DevEx], has some serious speed problems when doing batch updates. Usually you should enclose batch updates within BeginUpdate/EndUpdate method calls, which prevents processing/redrawing inside the control while you make many updates to the underlying data source at once. The net effect of BeginUpdate/EndUpdate is a dramatic speed improvement.

However, XtraVerticalGrid doesn’t implement BeginUpdate/EndUpdate very well and still massively processes the changes even though the developer doesn’t want it to. Because of this, an operation that should take a fraction of a second took more than 3 s, which is an annoyance to the user.

Here is the workaround

Derive a class, e.g. RhVerticalGrid, from VGridControl and add this piece of code:

using System.ComponentModel;
using System.Diagnostics;
using DevExpress.XtraVerticalGrid;

public class RhVerticalGrid : VGridControl
{
    // Greater than 0 while a batch update is in progress; updates are propagated only when it is 0.
    private int lockUpdateCount;

    #region BeginUpdate
    public override void BeginUpdate()
    {
        lockUpdateCount++;
        base.BeginUpdate();
    }
    #endregion

    #region EndUpdate
    public override void EndUpdate()
    {
        lockUpdateCount--;
        Debug.Assert(lockUpdateCount >= 0);
        base.EndUpdate();
    }
    #endregion

    #region CancelUpdate
    public override void CancelUpdate()
    {
        lockUpdateCount--;
        Debug.Assert(lockUpdateCount >= 0);
        base.CancelUpdate();
    }
    #endregion

    protected override void OnDataManager_ListChanged(object sender, ListChangedEventArgs e)
    {
        // Propagate data source change notifications only when no batch update is in progress.
        if (lockUpdateCount == 0)
            base.OnDataManager_ListChanged(sender, e);
    }

    public override void InvalidateRecord(int recordIndex)
    {
        // Skip per-record invalidation while a batch update is in progress.
        if (lockUpdateCount == 0)
            base.InvalidateRecord(recordIndex);
    }
}

That’s it. My execution time dropped from more than 3 s to almost instantaneous. The Begin/End/CancelUpdate methods just increment/decrement a lock counter (when lockUpdateCount == 0 updates are allowed, otherwise not). The main improvements are in the OnDataManager_ListChanged and InvalidateRecord methods, where I propagate the processing only if updates are allowed. That’s it – use the derived grid control instead of the original. Simple as that. And make sure your batch updates are enclosed in BeginUpdate/EndUpdate calls!
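For completeness, a typical batch update would then look something like this (grid, dataSource and newItems are hypothetical names, not from the original sample):

grid.BeginUpdate();
try
{
    // many changes to the underlying data source at once
    foreach (var item in newItems)
        dataSource.Add(item);
}
finally
{
    // processing/redrawing happens only once, here
    grid.EndUpdate();
}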

How does one find such culprits?

If you are scratching your head over the question “how does one find the culprit of such slowdowns?”, the answer lies in performance profiling. To find this one I used my favorite profiler, AQTime (much more than just a performance profiler), and quickly found the culprit. I should also mention that this isn’t the first time AQTime has helped me find both memory and performance problems. Yep, a performance and memory profiler is a must-have tool for a serious developer, even better if it is AQTime. Anyway, here is a clear picture (from AQTime’s graph view) of the culprit in action:

[screenshot: AQTime graph view showing the culprit]

BTW, here is the link to the bug report on the [DevEx] support center I made today.