Generic Thread.VolatileRead and Thread.VolatileWrite

I am thinking about using a generic version of Thread.VolatileRead and Thread.VolatileWrite. The code is similar to the existing overloads (there are actually plenty of them), with the difference of a generic type parameter:

[MethodImpl(MethodImplOptions.NoInlining)]
public static T VolatileRead<T>(ref T address)
{
    T result = address;
    // Full fence: the read above cannot be reordered past this point.
    Thread.MemoryBarrier();
    return result;
}

[MethodImpl(MethodImplOptions.NoInlining)]
public static void VolatileWrite<T>(ref T address, T value)
{
    // Full fence: the write below cannot be reordered before this point.
    Thread.MemoryBarrier();
    address = value;
}

Why would I use generics when there are already plenty of overloads? Because for reference types you have to cast to and from object, since the only overload that accepts an arbitrary reference type is the one taking ref object. Here is an example:

Tubo tubo = new Tubo();
object param = tubo;
Tubo read = (Tubo)Thread.VolatileRead(ref param);

Isn’t the following code much better?

Tubo tubo = new Tubo();
Tubo read = MyThread.VolatileRead(ref tubo);

The question is whether my code is correct or not. Sure, it looks correct, but one never knows for sure when dealing with threading. Feedback appreciated.
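To make the question concrete, here is a minimal sketch of how I imagine the generic helpers being used (the Message and Demo names are just for illustration): one thread publishes an instance through VolatileWrite<T>, another spins on VolatileRead<T> until it sees it, with no object casts involved.

```csharp
using System;
using System.Runtime.CompilerServices;
using System.Threading;

static class MyThread
{
    [MethodImpl(MethodImplOptions.NoInlining)]
    public static T VolatileRead<T>(ref T address)
    {
        T result = address;
        Thread.MemoryBarrier();
        return result;
    }

    [MethodImpl(MethodImplOptions.NoInlining)]
    public static void VolatileWrite<T>(ref T address, T value)
    {
        Thread.MemoryBarrier();
        address = value;
    }
}

class Message { public string Text = "done"; }

static class Demo
{
    static Message shared; // written by the producer, read by the consumer

    public static string Run()
    {
        var producer = new Thread(() =>
            MyThread.VolatileWrite(ref shared, new Message()));
        producer.Start();

        Message m;
        while ((m = MyThread.VolatileRead(ref shared)) == null)
            Thread.Yield(); // spin until the published write becomes visible

        producer.Join();
        return m.Text;
    }
}
```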

Meet “Go To Implementator” DXCore plugin for Visual Studio

The problem

One of the biggest annoyances when building unit-test-friendly projects is that you have to deal with interfaces much more than you usually would. This isn’t bad by itself, but it effectively kills F12, aka “Go To Definition”. In other words, F12 no longer shows you the code but rather the interface definition of the method or property. Which is not what I like, and I guess not what you like either.

Let’s see an example:

image

When you F12 (or right click, Go To Definition) on DoTubo() you’ll get catapulted to the interface declaration of the method. Which might be what you want, but mostly it isn’t. I’d like to see the Tubo.DoTubo() method declaration, where the code I am interested in lives. Especially because an interface is often implemented by just one class, at least at design time.

image

This is what I’d like to see. And this is what you’d get if you weren’t using the IAnnoy interface but rather the Tubo type directly.

The solution

The good news is that there is now a way to go straight to the method implementation. The even better news is that it is free. As a free lunch.

I’ve created a DXCore plugin named “Go To Implementator” that does exactly that. When the caret is over an interface’s method or property reference, it gives you an option to go directly to (one of) the implementations. Sounds fine?

Installation

1. If you don’t have CodeRush installed already, install its free Express version or, even better, go with the full version (which is not free but well worth it).

2. Download attached zip file and unpack its content into either [USER]\Documents\DevExpress\IDE Tools\Community\PlugIns or [Program files [x86]]\DevExpress [DX version, i.e. 2009.3]\IDETools\System\CodeRush\BIN\PLUGINS.

3. Run Visual Studio. Go to the DevExpress/Options… menu and select the IDE/Shortcuts node on the left.

4. Create a new shortcut: click the first icon under the Shortcuts title. Click the Key 1 text box on the left and press F12 (you are telling it which key to respond to). Pick the “Go to interface implementation” command from the Commands combo box. The last step is to click the Focus/Documents/Source/Code Editor checkbox in the usage tree list on the right side – a green checkmark should appear. Note that you can bind this action (or any other) to a different shortcut, or even a mouse shortcut, or any other way supported by CodeRush.

image

Also take note that there is a “Parameters” text box. I’ll use it to pass parameters to my action later in the article.

Test & use

Create a console application and paste in this code:

class Program
{
    static void Main(string[] args)
    {
        IAnnoy annoy = new Tubo();
        annoy.DoTubo();
    }
}

interface IAnnoy
{
    void DoTubo();
}

class Tubo : IAnnoy
{
    public void DoTubo()
    {
    }
}

Position the caret on the DoTubo() method reference. There are two ways to invoke my plugin.

Context menu

Right click; there should be a “Go To Implementator” submenu in the context menu:

image

Keyboard shortcut (F12)

Just press F12. But what if you are not on an interface method/property reference? The default “Go To Definition” will be invoked, just as it would be without this plugin.

Dealing with more than one interface implementation

So far there was only one interface implementation. But what happens if there are two or more classes that implement the interface?

Let’s add another implementation to the crowd:

class AnotherTubo : IAnnoy
{
    public void DoTubo()
    {
    }
}

Now both Tubo and AnotherTubo implement the IAnnoy interface. The right-click context menu should show both options, in no particular order, like this:

image 

Let’s pick the AnotherTubo class. The plugin will remember this choice and next time it will be placed at the top:

image

But what about F12?

If there is no default class assigned, it should present you a smart tag with options:

image


However, if a default is available, it will go straight to the implementation rather than presenting this smart tag.

Customizing the action

The popup menu behavior is fixed and can’t be customized in the current version. The action, the one you can bind to a keyboard shortcut or any other input CodeRush supports, can be customized. There are two parameters you might use (parameters are a comma-delimited string that you pass to the Parameters text box when creating/editing the shortcut in DevExpress/Options… – see step 4 of the installation).

NoPassGoToDefinition

You can disable invoking Go To Definition when the action doesn’t do anything. The default behavior is to pass through when the action does nothing. So why would you need this option? In case you are using the action from a non-F12 shortcut, or if you want the action to do its job and nothing else.

ShowPopupAlways

When there is a default class available for the action, no smart tag is shown by default. You can override this behavior by passing the ShowPopupAlways parameter. Then the smart tag menu will always be shown when more than one class is suitable, regardless of whether a default is present.

Here is an example of passing both parameters:

image

The conclusion

I hope you’ll find this plugin useful. I am starting to use it and it already saves me a lot of clicking. And thanks to DXCore it was really easy to create.

Let me know if you have ideas how to enhance it or anything else, all feedback is welcome.

1.0.0 (18.1.2010)

  Initial release

1.0.1 (19.1.2010)

  Bug fix: it worked only within project scope.

1.0.2 (19.1.2010)

  Bug fix: parsing was too strict, which might have resulted in no go-to-implementor option at all.

1.0.3 (21.1.2010)

  Didn't work on partial classes.

1.0.4 (26.1.2010)

  Fixed navigational bug (thanks to Quan Truong Anh for finding the bug and creating a repro case)

1.0.5 (16.7.2010)

  Added some logging - make sure DevExpress/Options.../Diagnostics/Message Logs is turned on (if you are turning it on, you have to restart the IDE).
  No new functionality besides logging - you might consider this version if the plugin doesn't work well and you want to understand why.

GotoImplementator_1_0_5.zip (10.20 kb)

17.1.2011

Important: this is the last version supporting DXCore/CodeRush 10.1.x and lower.

Even more important: I've created a dedicated page and all further improvements will be published through that page. This post won't be updated anymore.

About DevExpress skinning and custom skins

Here is the thing. DevExpress WinForms components support custom skinning. Out of the box there are plenty of skins you can use just by assigning a simple property with the name of the skin. Every DevExpress WinForms control will follow the skin settings and will look fancy, and so will your application. That’s great. But if you want more advanced skinning you are in for trouble.

Let’s see my case. An application I am building for a customer of mine supports skinning. However, I had to slightly modify the out-of-the-box skins with some adjustments, and I added a few new glyphs which I use in my custom controls (they follow the skinning UI as well, since the entire application does). Here is how I started. I opened SkinEditor, a tool provided by DevExpress, and created new skins based on their skins, i.e. MyCaramel from Caramel, etc. Once I had “my” skins I adjusted some properties, still using SkinEditor. Finally I created a “skin resource” assembly. That’s all easily done via SkinEditor. So far so good. But there are problems ahead.

First problem – adding custom glyphs to skin

Since I have custom controls with custom glyphs, I had to add those glyphs to the skins. After all, they belong in the skin assembly, since they also change when the skin changes. I could add them somewhere else, but that would be asking for trouble – better to have resources “grouped” in one place. But there is no way to add custom glyphs to my skin via SkinEditor. By design. Obviously nobody at DevExpress ever supposed that custom skins might be used for custom controls.

Second problem – updating the custom skin

The next, much more annoying problem is updating a custom skin to a newer DevExpress version. Even when a minor DevExpress version is released, the out-of-the-box skin definitions might change a bit here and there. So the template you built your custom skin from has changed, but your custom skin still “sits” on top of the old template version. It might even result in a runtime exception if you don’t upgrade the skin while the application uses a newer version of DevExpress components. And go wonder, SkinEditor doesn’t have an “upgrade custom skin” option. You have to recreate the original project (what fun when you are dealing with 20+ skins – each one has to be added separately) and reapply all the changes you made to the out-of-the-box skins. Eeek.

Third problem – skin size

If you use a lot of custom skins they will use a lot of space (each skin is about 500KB), even though you might not be using all of the DevExpress controls and thus don’t require full skins. The relatively big size might be a problem if you distribute your application via the internet, and even if you don’t, your application uses more memory without any apparent benefit. SkinEditor doesn’t support removing skin elements, and even if you modify the skin.xml definition (by removing unnecessary nodes), SkinEditor will add them back the next time you open the project.

Fourth problem – neither the skin nor its assembly can be unloaded

Once the skin assembly is loaded into your application (the main AppDomain) it can’t be unloaded. And once a skin is registered, it creates a hash table of all resources (a ton of Images – I am not 100% sure about this but it pretty much looks like it) and you can’t unload any of them. So when you register a skin assembly it will remain loaded until the application is closed, and all resources will be loaded into a hash table in the form of images (sounds like a sort of duplication to me). There is no way to load a skin from a custom AppDomain.

Solutions to problems

The first problem can be solved “manually”. I say manually, but you can pretty easily create some XML-manipulation and file-copy code. While SkinEditor doesn’t support adding custom glyphs, you can still add them manually in two steps – save the graphics to the proper folder of the SkinEditor project and modify the skin.xml file by adding proper XML nodes pointing to the newly added glyphs. After some trial and error I was able to accomplish this task.
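Since I can’t share the customer’s project, here is only a sketch of the mechanics with made-up element and attribute names (“Glyph”, “Path” are placeholders) – you have to inspect the real skin.xml schema in your own SkinEditor project:

```csharp
using System.IO;
using System.Xml.Linq;

static class SkinGlyphAdder
{
    public static void AddGlyph(string skinXmlPath, string glyphFile)
    {
        // 1. Copy the graphic into the project folder next to skin.xml.
        string targetDir = Path.GetDirectoryName(skinXmlPath);
        File.Copy(glyphFile,
                  Path.Combine(targetDir, Path.GetFileName(glyphFile)),
                  overwrite: true);

        // 2. Add a node pointing to it ("Glyph"/"Path" are placeholder
        //    names, not the actual skin.xml schema).
        XDocument doc = XDocument.Load(skinXmlPath);
        doc.Root.Add(new XElement("Glyph",
            new XAttribute("Path", Path.GetFileName(glyphFile))));
        doc.Save(skinXmlPath);
    }
}
```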

I’ll write about the solution to the second problem in a later post. I’ll also provide a utility that does part of the job.

I have an idea how to solve the third problem, but I haven’t solved it yet, nor am I sure whether it will work.

The fourth problem is the hardest to solve due to the current implementation. I am not sure whether it is even possible, or whether it makes sense to invest much energy into it.

Conclusion

While skinning works pretty nicely in DevExpress controls, its implementation is not the best. Especially the support for custom skinning isn’t very well thought out, and SkinEditor could be enhanced with these problems in mind.

The good news is that with a little effort I’ve managed to overcome the most important issue – how to create slightly modified out-of-the-box skins and how to update them to new versions (automatically). I’ll talk more about this solution in a later post.

What do you think? Do you use DevExpress skinning feature? Did you create your own skins?

Bridging IPTV over wireless using two Linksys WRT54GL routers and DD-WRT firmware

The motivation

I had a need to bridge IPTV over a wireless connection. I immediately faced several problems, which I finally managed to overcome, and since not much information is available on the internet I decided to write my recipe down.

Note that there is a simple albeit expensive solution: use a Ruckus wireless bridge (it is supposed to work pretty much out of the box). Since spending something like 200€ is a bit too much for this stunt, I decided to go with two Linksys WRT54GL routers instead. They are much cheaper, and there is a challenge – who doesn’t like a challenge? And sending a bunch of network packets back and forth shouldn’t be that hard, should it? Ehm.

Current state

My ISP/IPTV provider is SIOL, whose major stockholder is the government. I have their modem/router that broadcasts the IPTV UDP multicast stream on one of its LAN ports. And then I have a set-top box (a lousy Sagem IAD81; it doesn’t have any wireless capability whatsoever) connected by network cable to the router on one end and to the (not yet LCD) TV set on the other. This combination works, although the picture quality isn’t good – but that’s due to the too-slow stream from the IPTV provider (it is a 4Mbps MPEG2 stream, not enough when there is a lot of motion on the screen).

image

The goal

Instead of the LAN cable, switch to wireless using two Linksys WRT54GL routers, like this:

image

The hardware

As stated, two Linksys WRT54GL v1.1 (evergreen) routers.

The firmware

I finally settled on DD-WRT BrainSlayer version 13525. I tried many other versions, and Tomato 1.27 as well, but more about that later.

The trial and error process

Knowing correct IPs

The first thing one needs to know is which IPs can and should be assigned to the two routers – each one requires its own IP. Since IPs are assigned by the provider based on the MAC address of the STB (the provider doesn’t care about anything else besides the STB), it is not exactly known which additional IPs one can/should assign. So far every (unofficial) post about watching TV on a computer has suggested using the STB’s IP increased by 1, i.e. if your STB has an IP of 10.150.80.4 you’ll want to use 10.150.80.5. This isn’t documented officially anywhere and, worse, I needed two additional IPs, not one. Hence I decided to ask about it on the provider’s support forum. I got no official answer, just the usual +1 answer from a peer.

Then I tried mail support, only to get a person who doesn’t know what an IP is or how they are assigned. First he sent me links to the unofficial posts about +1 (how to watch IPTV on a computer by assigning the IP manually) and to the expensive Ruckus alternative they sell. I wasn’t happy about it and insisted on knowing exactly which IPs I can use if I need two additional IPs, not just one. Since the provider assigns them, they should know, shouldn’t they? In the end his final answer was: it won’t work if you assign IPs manually. Never. (He clearly contradicted himself, showing a total lack of any clue.) After that the provider’s mail support didn’t respond to me anymore. So, thanks SIOL, my dear provider, for being dumb. Somehow that attitude didn’t surprise me.

Lacking any official response, I decided to assign IP+1 to the host-side WRT54GL (the one close to the provider’s router, on the left in my picture) and IP+2 to the client-side WRT54GL.

Oh, and how do I know the STB’s IP? If you have a Sagem IAD81 STB you can press the Menu, Blue, Yellow buttons. Or is it Menu, Yellow, Blue? Who knows – try it, and sooner or later a diagnostics screen appears, including the assigned IP.

Installing and configuring correct firmware

Warning

Before flashing your router, do read a how-to, such as this one from the DD-WRT website. If you don’t do it correctly you might “brick” your router for good.

DD-WRT recommended build 13064

The WRT54GL out of the box comes with Linksys firmware, which isn’t exactly rocket science and doesn’t support what I was looking for. This doesn’t matter, since the GL model is meant to be flashed with firmware developed by the community. Hence I first tried the currently recommended DD-WRT build, 13064 (go to the DD-WRT router database and type in your router model to find the recommended build). I flashed both routers and configured them in Client Bridge mode, only to find that it works for the electronic program guide (HTML pages) but not for multicast, resulting in no picture or sound. When asking in the (very friendly and useful) DD-WRT forum for Broadcom devices (the WRT54GL uses Broadcom hardware) I was redirected to use WDS mode instead (here is a comparison of the various bridge modes). Apparently WDS is much better in terms of compatibility, but you need administrative access to both routers, which was not a problem for me.

I configured both routers in WDS mode without any security, DHCP or WLAN and tried it. This time I got picture and sound, but they were somewhat distorted. Soon I found out that the stream the client-side router provided was just ~700Kbps instead of 4Mbps. The stream quality on the host-side router was fine. Eeek. After trying a ton of different settings (I didn’t even know what to look for) I gave up. The stream was slow no matter what I did.

Now, looking back, I am not sure whether I manually assigned the fixed transfer rate. There are posts on the internet saying that one has to limit the WDS transmission rate to some value (I saw different reports saying 6, 9, 12, 18, … Mbps).

Tomato 1.27

The next firmware I tried was Tomato 1.27, the same “small, lean and simple” firmware I am using for my internet router. Again I set both routers to WDS mode (after re-flashing). At first it didn’t work, but then I manually assigned the transmission rate to something like 18Mbps and the picture and sound appeared in all their glory (read: as good as it gets – which is not good – due to my provider’s settings). But there was a serious problem, which I discovered after one minute. The provider’s router stopped providing the IPTV stream for some obscure reason. Its IPTV section froze; other parts were still working. Only a hard reset of the provider’s router fixed that, and only for another minute. Again, no setting I changed helped. And resetting the main router every minute didn’t look like a viable solution to me. Call me strange, if you wish.

I asked both my provider (they administer their router placed in my house, the one with the problems) and the guy behind Tomato about it. The first couldn’t help (no big surprise) while the other didn’t respond.

So I was stuck with two solutions that didn’t work, for one reason or another. And I was so close.

DD-WRT experimental BrainSlayer build 12874

With no “official” DD-WRT or Tomato firmware working, I went with experimental builds. I tried the “DD-WRT BrainSlayer build 12874” build, with the same results. I like the name, though. You’ll find it here. Before using DD-WRT experimental builds make sure you read this page.

The solution that works: DD-WRT BrainSlayer build 13525

Finally, this is the build that worked for me. Find it here. I flashed both routers with this build and reset both to WDS and nothing else (for the wireless settings I used Network Mode=G Only, Channel=2 and SSID=tv on both). In addition I set the wireless transmission fixed rate to 36Mbps on both and enabled the Frame Burst check box (below the Transmission Fixed Rate setting on Wireless/Advanced Settings). And a miracle happened – the stream began to flow to the STB at full speed.

Beware that while this solution works for me it might not work for you or you might even brick your router(s). Consider yourself warned. I merely described my solution.

The drawbacks of twin routers/WDS solution

The only drawback is that the stream sometimes, like a hiccup, loses some quality and artifacts in the form of squares appear for a brief moment. I am not sure whether this is a feature of the wireless connection or of the not exactly stream-oriented router firmware. This is the only drawback I’ve found so far.

Credits

  • The guys in the DD-WRT forums for responding to my noob questions.
  • Luka Manojlovič, a fellow MVP for providing support

Anti credits

  • To my IPTV provider SIOL for being really clueless and not helpful at all.

26.10.2010 Update: According to feedback from Jože, my solution requires twice the necessary bandwidth because packets travel back and forth due to my settings. He proposes switching the WRT54GL next to the router to AP and the WRT54GL next to the STB to Repeater Bridge, removing WDS, and adding the script "ifconfig eth1 allmulti" (without quotes, Administration/Commands menu) to the latter. This way it should work the same but without the double traffic. I didn't try it yet, but if you do, let me know. <- won't work.

Enabling and understanding code contracts in Visual Studio 2010 beta 2

Code Contracts are a new way to do both runtime and static checking of your code, coming with .NET 4.0/Visual Studio 2010.

Prerequisites

Visual Studio 2010 beta 2. To enable the Code Contracts project tab you need to update Code Contracts by downloading the latest version from the Code Contracts web page. You’ll find very good introductory documentation on the same site via the Documentation link (PDF).

Here is a really simple example of code that uses Code Contracts:

static void Main(string[] args)
{
    decimal c = Divide(4, 0); // fails validation because b == 0
    List<int> list = null;
    Add(list, 5); // fails validation because list == null
}

static decimal Divide(decimal a, decimal b)
{
    Contract.Requires(b != 0);
    return a / b;
}

static void Add(IList<int> list, int element)
{
    Contract.Requires(list != null);
    Contract.Ensures(list.Count == Contract.OldValue(list.Count) + 1);
    list.Add(element);
}

Pay attention to the Contract static class and its Requires (pre-condition) and Ensures (post-condition) methods. There is a lot more to these two methods, e.g. you can even access original argument values through the OldValue method – make sure you read the documentation.

Just the code by itself won’t do much. To enable both runtime and static checking you have to open project properties and check Perform Runtime Contract Checking and Perform Static Contract Checking checkboxes. If the static analysis has been checked then you’ll see warnings in the Error List (and Output) window regarding contracts failures. If you enable runtime checking you’ll get assertions (by default, but you can easily switch to exceptions) when the program execution hits contract checking failure.

The interesting aspect of Code Contracts is how they implement post-conditions – the rewriter injects the proper code in place of the post-condition methods. Something like PostSharp has been doing for years. Looks like AOP is slowly getting into .NET after all.
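To illustrate the idea (this is my own conceptual sketch, not the rewriter’s actual output, and the exception types are placeholders), the Add method from the example above ends up roughly like this after rewriting: OldValue is captured at method entry and the post-condition check is moved to the exit.

```csharp
using System;
using System.Collections.Generic;

static class Rewritten
{
    // Roughly what Add looks like after the contracts rewriter has run
    // (conceptual sketch only -- the real rewriter uses its own types):
    public static void Add(IList<int> list, int element)
    {
        if (list == null)               // Contract.Requires(list != null)
            throw new ArgumentException("Precondition failed: list != null");

        int oldCount = list.Count;      // Contract.OldValue(list.Count) captured up front

        list.Add(element);              // the original method body

        if (list.Count != oldCount + 1) // Contract.Ensures(...) moved to the exit
            throw new Exception("Postcondition failed: list.Count == old + 1");
    }
}
```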

Try it for yourself.

Developer Express published roadmap for 2010

The first new “feature” is stepping down from three releases per year to two. Regardless of how it sounds, it makes sense. A lot of sense. Developers and other staff spend a lot of time preparing releases, and I don’t mean implementing new features but just preparing and testing the releases. From now on they will have more time for coding new features. For DevEx clients this is good news as well – having to do major updates fewer times per year brings similar benefits. The bottom line is that we get more features, less work and possibly fewer bugs.

From the technologies perspective it looks like WPF and Silverlight are going mainstream while WinForms is dropping from first place (that said, it isn’t a dead product, far from it; it is just not the most important anymore – I am very pleased to see this shift happening). ASP.NET MVC will get some beta versions (“we shall be testing the waters with beta versions of the data grid, menu, navigation bar, and tab control”) while ASP.NET is still the more important of the two.

Visual Studio 2010 is going to be supported big time including CodeRush for VS2010.

That’s the news in brief, from what I can read. But don’t take my word for it; read it yourself at http://www.devexpress.com/Home/Announces/roadmap-2010.xml.

Easy way to edit Visual Studio’s project file using XML editor

There are times one needs to edit a project file (csproj, vbproj or others) in an XML editor or just a text editor, because not all settings are available through the UI, or perhaps it is simply easier. I usually right-clicked the project node in Solution Explorer to open the folder (in Explorer) where the project file resides and then edited the file using Notepad2.

Watching Hanselman’s podcast on MVC 2, I found that there is an easier way to do it. Right click on the project and pick Unload Project, then right-click the now unloaded project node and pick the Edit yourproject.XXproj context item. There you go. After you’re done editing, save it and pick Reload Project to continue working.
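As an example of why you’d want to edit the file directly, here is a tweak that, as far as I know, isn’t available through the UI – including source files with a wildcard (the Generated path is made up for illustration):

```xml
<!-- Inside the .csproj: compile everything under a folder instead of
     listing each file by hand (folder name is just an example). -->
<ItemGroup>
  <Compile Include="Generated\**\*.cs" />
</ItemGroup>
```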

A managed path to DirectCompute

About DirectCompute

After NVidia CUDA and OpenCL we got DirectX’s version of GPGPU parallelism for numerical calculations, named DirectCompute. All three technologies are very similar and have one goal: to provide some sort of API and shader language for the numerical calculations a GPGPU is capable of. And if you use it wisely, the performance gains over a normal CPU are huge, really huge. Not only do you offload the CPU; thanks to the GPGPU’s massive multithreading (and multicore), the calculations are much faster compared to even the most advanced x64 CPU.

It works like this: you prepare enough data that can be worked on in parallel, send it to the GPGPU, wait until the GPGPU finishes processing the data, and fetch the results back to the CPU where your application can use them. You can achieve up to a TFLOP with a graphics card sold today.

I guess this information is enough to see what it is all about. For more information on the topic, check out CUDA and OpenCL. I’d suggest checking out the DirectCompute web site, but the reality is that I couldn’t find any official (or unofficial) page that deals with it (it is the youngest of the three, but still, there could be more online information, don’t you think?). However, you might check the DirectCompute PDC 2009 presentation here and a simple tutorial.

Of the three I am most interested in DirectCompute because:

  1. CUDA is tied to NVidia GPGPUs while OpenCL and DirectCompute aren’t
  2. I prefer using DirectX over OpenGL for a simple reason – I did work for a client using DirectX so I am more familiar with it

I have a silent NVidia GeForce 9600 series graphics card (cards from GeForce 8 up are supported), and the good news is that NVidia has been supporting CUDA for a while and recently introduced support for DirectCompute as well. OpenCL is also supported. I don’t know the current state of support for ATI cards.

There is a potential drawback if you go with DirectCompute, though. It is part of DirectX 11 and will run on Windows 7, Windows 2008 and Vista SP2 (somebody correct me). The problem is if you have an older Windows OS like Windows XP, or you want to support older Windows versions. In that case you might consider the alternatives.

Goal

The not-so-good aspect of a young technology such as DirectCompute is the lack of samples; documentation and tutorials are almost nonexistent. What’s even worse for a .NET developer like me is the apparent lack of managed support. Which turns out to be there, to some extent, through the Windows API Code Pack (WACP) managed code wrappers. Note that WACP isn’t as .netish as Managed DirectX was, but it is good enough.

So I tried the managed approach to the BasicCompute11 C++ sample (which comes with the DirectX SDK) through WACP, and here are the steps to make it run. It should give you a good starting point if you are DirectComputing through managed code, and even if you are using native code. Hopefully somebody will benefit from my experience.

BasicCompute11 is really a simple sample. It declares a type holding an int and a float, creates two arrays of those with incremental values ({0, 0.0}, {1, 1.1}, {2, 2.2}, … {8191, 8191.0}) and adds them together into a third array.
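To make that concrete, here is the CPU-side equivalent of what the compute shader does, as a minimal C# sketch of my own (BufType mirrors the sample’s int+float type; the names are mine, not the sample’s):

```csharp
// Mirrors the sample's element type: one int and one float per element.
struct BufType
{
    public int I;
    public float F;
}

static class CpuReference
{
    // Element-wise addition of two arrays; on the GPU, each index i
    // would be handled by its own compute-shader thread.
    public static BufType[] Add(BufType[] a, BufType[] b)
    {
        var result = new BufType[a.Length];
        for (int i = 0; i < a.Length; i++)
        {
            result[i].I = a[i].I + b[i].I;
            result[i].F = a[i].F + b[i].F;
        }
        return result;
    }
}
```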

I’ll be working on Windows 7 x64 with NVidia GeForce 9600 graphics card but will create a Win32 executable.

Steps

1. Install the latest NVidia graphics drivers. Current version 195.62 works well enough.

2. Install Windows 7 SDK. It is required because WACP uses its include files and libraries.

3. Make sure that the Windows 7 SDK version is currently selected. You’ll have to run [Program Files]\Microsoft SDKs\Windows\v7.0\Setup\SDKSetup.exe. Note that I had to run it from the command prompt because it didn’t work from the UI for some reason.

4. Install DirectX SDK August 2009 SDK.

  • If it doesn't exist yet then declare an environment variable DXSDK_DIR that points to [Program Files (x86)]\Microsoft DirectX SDK (August 2009)\ path.
    image

5. Install Windows API Code Pack. Or better, copy the source to a folder on your computer. Open those sources and do the following:

  • Add $(DXSDK_DIR)Include path to Include folder
    image
  • And $(DXSDK_DIR)Lib\x86 to libraries
    image

After these settings the project should compile. However, there are additional steps, because one of the methods in there is flawed. Let’s correct it.

  • Open file Header Files\D3D11\Core\D3D11DeviceContext.h and find method declaration 
    Map(D3DResource^ resource, UInt32 subresource, D3D11::Map mapType, MapFlag mapFlags, MappedSubresource mappedResource);
    (there are no overloads of the Map method). Convert it to
    MappedSubresource^ Map(D3DResource^ resource, UInt32 subresource, D3D11::Map mapType, MapFlag mapFlags);
  • Open file Source Files\D3D11\Core\D3D11DeviceContext.cpp and find the (same) method definition
    DeviceContext::Map(D3DResource^ resource, UInt32 subresource, D3D11::Map mapType, D3D11::MapFlag mapFlags, MappedSubresource mappedResource)
    Replace the entire method with this code:
    MappedSubresource^ DeviceContext::Map(D3DResource^ resource, UInt32 subresource,
        D3D11::Map mapType, D3D11::MapFlag mapFlags)
    {
        D3D11_MAPPED_SUBRESOURCE mappedRes;
        CommonUtils::VerifyResult(GetInterface<ID3D11DeviceContext>()->Map(
            resource->GetInterface<ID3D11Resource>(),
            subresource,
            static_cast<D3D11_MAP>(mapType),
            static_cast<UINT>(mapFlags),
            &mappedRes));
        return gcnew MappedSubresource(mappedRes);
    }

The DirectX wrappers we need are now functional. Compile the project and close it.

6. Open the DirectX Sample Browser (you’ll find it in Start\All Programs\Microsoft DirectX SDK (August 2009)) and install the BasicCompute11 sample somewhere on disk.

image

7. Open the sample in Visual Studio 2008 and run it. The generated console output should look like this:

image

Pay attention to the output, because if DirectX can’t find adequate hardware/driver support for DirectCompute it will run the code with a reference shader – using the CPU, not the GPGPU (“No hardware Compute Shader capable device found, trying to create ref device.”) – or it won’t run at all.

If the sample did run successfully it means that your environment supports DirectCompute. Good for you.

8. Now try running my test project (attached to this article), which is pure managed C# code. It more or less replicates the original BasicCompute11 sample, which is a C++ project. The main differences from the original are two:

  • I removed the majority of device capability checking code (you’ll have to have compute shader functional or else…)
  • I compile the compute shader code (HLSL –> FX) at design time rather than at run time. This is required because WACP contains no managed wrappers for runtime compilation: those are part of D3DX, which isn’t part of the OS but is redistributed separately. So you have to rely on the FXC compiler (part of the DirectX SDK). A note here: I lost quite a lot of time (again!) figuring out why FXC couldn’t find a suitable entry point in my compute shader code when compiling (and errored out). I finally remembered that I had stumbled upon this problem quite a long time ago: FXC compiles only ASCII(!!!!!) files. Guys, is this 2009 or 1979? As Ben 10 would say: “Oh man.”
    Anyway, the FX code is included with the project, so you won’t have to compile it again. This time the output should look like this:
    [screenshot: console output of the managed test project]
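Since FXC silently chokes on non-ASCII input, it can pay to sanity-check the HLSL source before compiling. Below is a minimal sketch of such a pre-build check (the helper name and the idea of re-encoding are mine, not part of the SDK): it strips a UTF-8/UTF-16 BOM if present and verifies the remaining text is pure ASCII.

```python
import codecs

def ensure_ascii_hlsl(path):
    """Return the HLSL source as plain ASCII bytes, stripping any BOM.

    FXC (at least in the August 2009 DirectX SDK) only accepts ASCII
    input; Unicode-saved files make it fail to find the entry point.
    Raises UnicodeEncodeError if the file contains non-ASCII characters.
    """
    raw = open(path, "rb").read()
    # Detect and strip the common BOMs, decoding with the matching codec.
    for bom, enc in ((codecs.BOM_UTF8, "utf-8"),
                     (codecs.BOM_UTF16_LE, "utf-16-le"),
                     (codecs.BOM_UTF16_BE, "utf-16-be")):
        if raw.startswith(bom):
            text = raw[len(bom):].decode(enc)
            break
    else:
        text = raw.decode("utf-8")  # no BOM: assume UTF-8/ASCII
    # HLSL keywords and identifiers are ASCII; anything else won't compile.
    return text.encode("ascii")
```

You would write the returned bytes back to the file and then invoke FXC on it, e.g. something along the lines of `fxc /T cs_4_0 /E CSMain /Fo shader.fxo shader.hlsl` (flags from memory; check `fxc /?` for your SDK version).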

I didn’t document my rewritten BasicCompute11 project; for explanations, search for the comments in the original project.

Conclusion

Compute shaders are incredibly powerful when it comes to numerical calculations, and being able to work with them through managed code is just great. Not many developers will need that power, but if and when they do, the support is there.

One final word: I miss documentation, tutorials and samples. Hopefully they’ll come eventually; for now there is a good book on CUDA (remember, the technologies are similar) – it comes in PDF format along with the NVIDIA CUDA SDK.

Have fun DirectComputing!

DirectComputeManaged.zip (2.05 mb)

My blacklisted companies

Here is my current short list of two companies selling hardware and/or software that I won’t buy from again, for various reasons.

First is HTC, yes, the one that produces all sorts of phones. In fact it churns out new models monthly. Does this suggest that they are smart and productive? I’d rather say that they a) don’t know what to produce and thus throw a lot of different models on the market, and b) count on customers buying newer and newer hardware instead of keeping their existing devices. The latter is closely linked to the fact that HTC is very reluctant to fix existing bugs, and you pretty much can’t count on upgrading to a newer OS even though your device could easily run it. Did I mention that they sometimes forget to include drivers, like graphics acceleration? Of course, they kindly suggest you buy a new device if you want hardware-accelerated graphics, because for some reason they just won’t provide the driver for your model. I am specifically talking about the HTC TyTN II aka Kaiser, not a cheap device by any means – a device whose hardware, graphics included, is within 75% of the iPhone v1’s performance, if only there were a proper driver. Yet due to the missing graphics hardware acceleration and the very much outdated WM6.x, this device is nowhere near the iPhone. People might think the situation would improve with HTC’s Android devices, but it certainly doesn’t look so.

True, HTC is creating attractive models from a hardware perspective, but the little things – bugs, hardware issues and mostly the company’s arrogant and greedy attitude – put them on my blacklist of companies I won’t buy from again. Warning: I am talking about my own experience with HTC, and I am not implying that other companies are better or worse.

So, what will be my next phone? I’ve decided to try Android. I like the fact that it is open source and modern. What about the device? Currently I am considering the Motorola Milestone because it a) has good hardware (a bit faster CPU wouldn’t hurt though) and b) is a Google reference device, meaning no custom Motorola UI on top of it. Why is this good? Because upgrading to newer Android versions shouldn’t take long, and those upgrades won’t depend on anybody but Google (hopefully; the future will tell). I certainly don’t want to depend on companies such as HTC anymore. It’s a pity, though, that the Milestone isn’t yet sold nearby in Europe (I am not that keen to pay €50 – more than 10% of the device’s price – for postage from Germany to Slovenia).

The other one is Kettler, a German fitness equipment manufacturer. What does it have to do with hardware or software? Oh, it does. Among other fitness machines they make a rowing machine featuring an onboard “computer” and a USB port. The older version of this rower had a companion PC application (for a healthy price) that linked to the machine through the USB port and could control it and/or record various data – it got total control of the rowing machine. Imagine the possibilities. The slightly newer version of the same rowing machine isn’t compatible with this PC software anymore (which I found out only after I’d bought the machine), but they assured me a newer version was on the way. This was years ago. I’ve asked them several times whether it is possible to get at least the communication protocol specs so I could create the software I wanted myself. All answers were along the lines of “our policy is not to disclose anything, we won’t give you anything” – you get the idea. But will you ever release an updated PC application? “We might be working on it.” Years later there is still nothing to see. I can only deduce that they don’t care about their customers. They never disclosed this version incompatibility either. Again, their hardware is fine; it’s their software and mostly their attitude that put them on my blacklist. BTW, is anybody out there willing to reverse engineer the 256 KB ROM (Freescale HCS12 CPU) to understand the protocol? I’d try, but having no experience it would take me a long time (I imagine a skilled person might understand it quickly) – time I currently don’t have, unfortunately.

If I were to buy a rowing machine again I’d buy a Concept2 rower, which comes with all sorts of free PC applications and even a free SDK – heck, the company even encourages you to write applications. In fact I might switch to one after I sell my Kettler on eBay.

Why am I writing all this? Well, sharing bad experiences helps others avoid falling into the same hole. After all, we mostly judge companies by bad experiences, don’t we? Furthermore, exposing such practices might make the companies think twice. So take my writing as you wish, but consider yourself warned :-). Also, I am not saying that I won’t ever buy from those companies again: I might, but not before their policies change considerably.

That said, I am interested in your blacklists, as I am sure everybody keeps one. Perhaps we can create a blacklist page.

Want to try Parallel Extensions on .net 3.5?

Check out Reactive Extensions for .NET (Rx). It looks like it includes “a back ported (and unsupported) release of Parallel Extensions for the .NET Framework 3.5 in the form of System.Threading.dll”. So, if you don’t have the Visual Studio 2010 beta handy, you might check it out and let us know how it goes. While you are there, make sure you check out Rx itself as well; it looks like an interesting and useful library once you grasp its concepts.

See the related blog post.