Creating movies on Vista

So I have a movie taped on my digital camera and I want to make a relatively simple DVD out of it. Luckily for me, I have Vista Ultimate with the two applications I need: Windows Movie Maker and Windows DVD Maker. One is for producing the movie (you know, cutting, adding titles, effects, transitions and such) while the other is for creating “pure” DVDs. I say pure DVDs because the former has no option to burn a real DVD – it does have a “Publish to DVD” option, but that really means it will burn a WMV file onto the disc, which won’t be playable on a regular DVD player.

So, first I need Windows Movie Maker (WMM) to produce something one can watch. WMM is quite good for the most common and simple tasks, and I had no big problems here. When I am done producing, I publish my movie to disk (i.e. create a WMV file). Once production is finished I need to burn a real DVD – one that will play on any DVD player. I fire up Windows DVD Maker (WDM), import the previously crafted WMV file, and I am quite happy with its ease of use and the great-looking DVD menus I can add to my movie. But [MS] wouldn’t be [MS] if there wasn’t a showstopper inside. WDM is so intelligent that it creates chapters (called scenes in WDM) for you. The bad side is that it doesn’t have a clue where to cut the movie into chapters – it looks like WDM uses some sort of random algorithm. OK, so I try importing the WMM project instead of the raw WMV file. I figure it should understand where to cut, because all the data is there – in the WMM project. But no, the outcome is even worse this time: the DVD menu blinks oddly and the chapters are still random. This might even be tolerable if there were a “manually create chapters” option. But there isn’t one. So I am stuck with a great-looking movie with random chapters here and there. Still, I won’t give up at this point.

Then I remember that my Panasonic video camera came with some bundled software. The problem when I bought it was that the bundled software didn’t support Vista. Now, half a year later, Google helps and finds updates for me – MotionDV Studio 6.0 LE for DV and SweetMovieLife 1.1E now work on Vista. Great, I think, I’ll certainly be able to produce something with the software bundled with a not-cheap video camera. After five minutes of testing both applications I slightly corrected my opinion. It would be better if they didn’t run on Vista at all, so I wouldn’t have lost those five minutes. SML is like WDM, just ten times worse, while MDS is like WMM, just ten times harder to use and probably ten times worse.

Does anybody have a good recommendation for video authoring software – one that is relatively easy to use and can create DVDs?


Sometimes boot and load times really matter

It is annoying when your computer takes time to boot or to load some application. Mostly it is just an annoyance. But sometimes it might be more than that:

“My finger slipped on the steering wheel and I accidentally pressed the button used for the starting sequence,”

“The car went into neutral and I had to reinitialize the system, that is, reload the gearbox management program,” he explained. The onboard camera recorded images of Hamilton pressing several buttons on his steering wheel while other drivers sped by. (read more)

The quotes relate to the latest Formula One Grand Prix, held in Brazil. Hamilton pressed a wrong button during the race, which cost him the race and the title. One has to ask why the software let him press that button in the first place. I mean, it doesn’t make sense to initialize the start sequence in the middle of a high-speed race, does it? Why didn’t the software prevent such a nonsensical operation? The next question is why the heck the reinitialization took a minute. Is it really necessary to “reload the gearbox management program” to reinitialize? From my perspective it looks like software with stupid problems which can, obviously, cause huge ones. Not foreseeing that the driver, who is still a human, might press a wrong button during the race is not really a sign of well-thought-out software.

The bottom line is that software should prevent nonsensical actions as much as possible. At the same time there has to be an “override all” button, but it has to be turned off by default. That’s it.
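The principle above fits in a few lines. Here is a hypothetical Python sketch – the class name, threshold and messages are all made up for illustration, not the actual car software:

```python
class GearboxController:
    """Hypothetical sketch of guarding a dangerous command: the command is
    rejected when the vehicle state makes it nonsense, unless an explicit
    override (off by default) is engaged."""

    def __init__(self):
        self.speed_kmh = 0
        self.override_all = False  # the "override all" switch, off by default

    def start_sequence(self):
        # Initializing the start sequence makes no sense at racing speed,
        # so refuse it instead of dropping into neutral and reloading software.
        if self.speed_kmh > 10 and not self.override_all:
            return "rejected: start sequence not allowed while moving"
        return "start sequence initialized"

car = GearboxController()
car.speed_kmh = 280           # mid-race
print(car.start_sequence())   # the slipped finger is simply ignored
```

One guard clause versus a lost title – that’s the trade-off.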

.net DevExpress

Ribbon mania

I often wonder why everybody is jumping on the (Office-like) ribbon bandwagon. A ribbon UI is clumsy if it isn’t done properly and furthermore, if you ask me, it just isn’t suited to all applications. So I came across this example of an application:

The image is taken from this "Case Study: Itagent chooses DXperience" blog post

Honestly, I didn’t read the entire post, nor do I know what exactly the application does. What caught my eye is the ribbon taking up almost a third of the window area – ~30% of the screen for displaying a bunch of buttons is nonsense, bad design and a sign of ribbon misuse. It is good only if you are paid by the screen area your application uses. Again, perhaps there is more to it – a broader view – but judging from the picture I can’t think of anything else. And this isn’t the only application that does it – there are plenty of others.

So, if you feel you have to use the ribbon, use it wisely – not just because everybody else does.

.net DevExpress

What’s cooking in Developer Express house

[DevEx] has improved public insight into upcoming DXperience versions – they created a forum where their employees post news about forthcoming releases. So, if you are curious what’s next from [DevEx], or you want to ask something (current license holders only), make sure you take a peek at the DXperience v2007 vol 3 Release Candidate forum. And no, there isn’t a publicly available version out yet.

ASP.Net Hardware SQL Windows

Peculiar problem involving Windows 2003, VMWare Server, SQL Server 2005 and networking

Over the weekend I built my new server – yes, the content you are reading right now is served from it. Perhaps more about the new server in another post. Back to the point. The host OS on the server is Windows 2003 R2 32-bit, and there is also a SQL Server 2005 running there. I had the x64 edition before, but it is just too much trouble to run given the driver and support situation. So, if you don’t need more than 4GB of RAM, I don’t see a compelling reason to go with 64 bits. Why do I say “host OS”? Because I am running VMWare Server on top of it, with two guest OSes running inside virtual machines: another Windows 2003 that serves web content (it uses the SQL Server located on the host) and Windows Home Server that takes care of backups.

So, after I had installed everything I fired up my web virtual machine and took a look at my blog – it was a no-go. The virtual machine was working fine; it was just Community Server that wasn’t running. After turning off custom exception handling I got to the exception reporting page I was looking for. However, the error was an odd one. It stated that the connection to the SQL Server (running on the host) had timed out. Hm. I investigated further by creating a test application that reads a table from the database. Running on my workstation, it read the data just fine. But when run from within the web server it read just the first n rows (e.g. 20) and then timed out, always at the same row – which was really puzzling. The same symptom appeared with any SQL Server client running within a guest OS, so it was obviously a problem related to VMWare Server. Yet, if I turned off Windows Firewall on the host, my application worked even on the guest OS – this fact deceived me into thinking the problem was firewall related (perhaps it was, in a way) – and after half an hour of testing every possible firewall configuration I gave up. Since I knew it had something to do with VMWare Server, I started searching their forums, and soon enough I found a solution (at the bottom of the thread):

Disable TCP Offload on the host

While the solution talks about disabling the TCP Offload Engine on Windows, I disabled TCP Large Send Offload instead (it sounded similar enough to me and a good candidate for my problem) and it worked like a charm. This is how it looks on my computer:


Perhaps networking now consumes 0.00000000001% more of my CPU, but at least it works fine. I am not sure whether this is a bug or not; I’ll contact VMWare anyway. Funny: building and installing the server took less time than troubleshooting this problem.
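For the record, the registry-based variant of the fix looks roughly like this. Treat it as a sketch: DisableTaskOffload is the documented global switch for Windows TCP task offloading (which includes large send), while what I actually changed was the per-adapter “Large Send Offload” setting in the NIC driver’s Advanced properties:

```shell
:: Sketch: disable TCP task offloading (including large send) globally
:: on the Windows 2003 host. Run in an elevated prompt, then reboot.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" ^
    /v DisableTaskOffload /t REG_DWORD /d 1 /f
```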

.net 3.5 LLBLGenPro ORM

LINQ to SQL showstoppers

I have to say that I like LINQ to SQL – heck, I even do presentations about it. So, it is a cute ORM, and what’s really good about it is that it features LINQ. However, there are some serious showstoppers in there, some more serious than others. I won’t deal with what’s good about LINQ to SQL; rather, I’ll dive into the showstoppers as I see them. So, let’s take a look:

  1. Support for SQL Server flavors only, not even Katmai (SQL Server 2008).
    This might or might not be a showstopper for you. One of the great advantages of ORM products is that they abstract away the database, meaning the same code works with different supported databases. You probably won’t write an application that is totally abstracted and you’ll have to write some database-dependent code, but abstraction still holds for the majority of the code. So, why would I bother learning an ORM that deals with SQL Server only when I have plenty of other ORMs that support a bunch of different databases? Why would I tie my application to SQL Server only when I could have an application that works with all the major databases out there?
  2. Poor n-tier support
    A good ORM lets you send all the modified entities to the server tier with a few method calls and store the changes to the database with a few more. But not LINQ to SQL. Oh, no. Doing n-tier with LINQ to SQL requires a fair amount of coding and a lot of manual work that should be done by LINQ to SQL itself. IOW, LINQ to SQL is very client-server oriented. It looks like n-tier was added as an afterthought.
  3. Designer that doesn’t sync with the database
    Imagine you have a Context with a myriad of entity types on it. That’s messy by itself, since you can’t isolate the set of entity types you are interested in but always have to look at all of them. But that’s not the real issue here. The real problem arises when the database metadata changes (right, you always define your database structure in stone before you start coding and never modify it) – perhaps that is bad project management, but the structure does change through the life of a project. And when it does, LINQ to SQL won’t help you modify the Context. Nope, you are on your own. You can recreate the Context from scratch and reapply all the modifications done to it (you are certainly looking forward to that), which is not only boring but error prone, too. Or you can manually sync the Context with the database, which is less boring but still error prone and tedious. So, this one is a huge showstopper if you ask me.
    Luckily there is [CodeSmith] with its PLINQ templates, which support database synchronization. While not a perfect solution, it solves many problems. Furthermore, it allows you to modify the templates to your requirements – IMO the [CodeSmith] path is the only path if you are serious about LINQ to SQL.
  4. No source code
    [MS] will probably release the LINQ to SQL source code, but even if they do, you won’t be able to fix problems, as you can’t modify it in any way. Not a big problem.
  5. Slow development cycle (tied to .net 3.5)
    If you need a fix quickly you will have problems, as LINQ to SQL is part of the .net 3.5 framework. And knowing the speed at which [MS] applies bugfixes, it might take years before an official update is available.
  6. No direct execution against the database
    Imagine you have a task that has to raise all item prices by 10% for whatever reason (it actually happened last month in Slovenia for bread and milk). All the items have to be fetched from the database into the application, each one has to be properly updated in the application, and all of them have to be persisted back to the database. Right – you can’t execute the operation directly against the database from code. OK, you can issue a SQL statement or use a stored procedure, but the former breaks strong typing (the SQL statement is a string, so the compiler won’t help you catch errors) while the latter is an unnecessary burden.
  7. There are plenty of better competitors out there
    Apart from the (obvious) LINQ advantage, there are plenty of better and proven ORM products out there. Even LINQ, LINQ to SQL’s major advantage, will soon be matched by the competition – some already feature LINQ (to Entities) while others soon will.

Again, don’t get me wrong. LINQ to SQL has plenty of good features. It is just that it has too many showstoppers for me, and knowing the problems of a technology before using it is valuable knowledge. So, what can you do? Pick an alternative, of course – either a commercial or free third-party product – or wait for Entity Framework, which will see the light of day next year, sometime after the .net 3.5 RTM.

What will I do? Actually nothing. I am very happy with [LGP], which is much superior to LINQ to SQL except for LINQ support. And [LGP] is supposedly getting LINQ support this year, so it shouldn’t be long before I am able to happily LINQ away with my ORM.

And let me know if there are other problems you foresee with LINQ to SQL.

.net 3.5

The unknowns of .net 3.5 framework library source code licensing

Frans has an interesting post (and an update here) regarding .net 3.5 framework library source code licensing, and he raises an interesting question: “Should we look at this source code or not?” AFAIK he is the only blogger so far concerned about the hot topic of [MS] releasing the .net 3.5 framework source code. All the others, including me, are very happy to see this happen. So, should we have concerns, too?

Well, yes and no. First, Frans is the developer behind the excellent [LGP] ORM, which will compete with [MS] once .net 3.5 is released. Actually, [LGP] will (future tense, because none of the [MS] products are out yet) be superior, except for LINQ integration, to both [MS] products: LINQ to SQL (which is very limited anyway) and Entity Framework (whose release date is unknown, but certainly after the .net 3.5 RTM). So, he is not a typical developer like you or me; he is a component developer and he competes with [MS] – or rather, he will compete. He has to take into account any possible drawback of looking at .net source code, and his concerns are legitimate – who says [MS] won’t attempt legal action against the competition based on the .net source code license agreement?

Next, I doubt that anybody actually understands the licensing. That’s because licenses are written in a legal dialect which is unreadable to humans and understood by few lawyers at most. And even those lawyers understand it differently. A good example is the saga. So, forget about common sense and hire a bunch of good lawyers if you want your back partially covered – AFAIK there is no other way to understand the license. You can’t ask [MS] whether you can do this or that – they will redirect you to your lawyer(s). And if you are dealing with [MS] lawyers you would probably need many lawyers of your own – the more the better. As for us “normal, non-competition” developers, I don’t think this is a problem in the real world, nor will [MS] bother.

Frans also says that looking at reverse engineered code (i.e. Reflector output) isn’t the same as looking at the source code. Actually, looking at reverse engineered code might be worse: I don’t think anybody was allowed to reverse engineer the code in the first place (big brother doesn’t yet have cameras at our workplaces, so it is tolerated for now). Anyway, looking at or using reverse engineered code sounds the same to me as looking at or using source code. If there are patent issues, it won’t matter where you got the code from.

True, you can only look at the source code, not touch or use it. I am going to say: at least I can look at it. I never intended to modify the .net library anyway. But then I can’t fix the bugs I am having problems with (assuming I would know how to fix them). True, but knowing the enemy is half the victory – IOW, if I understand why a bug happens and what’s going on behind the scenes, I might be able to create a workaround. Next, there is no better documentation than source code. Period. And comments are even better – a cherry on the cake (who knows what they’ll look like). The ability to step into the methods while debugging is very good, too. So it does solve real problems, as Reflector did. If we were able to temporarily use recompiled assemblies for our own applications (in the case of the [MS] bugfix cycle this means at least 3 years ;-)), it would be even better. But heck, even releasing the source code took many years. Less restrictive licensing might take another n years, but it might happen as well. If the demand is there, of course.

The bottom line is that Frans is looking through the eyes of a competing developer and he has to be prudent in what he does. For the rest of us, the source code is still a great benefit, and I don’t think there are real problems as long as we don’t do anything really stupid. Also note that I am not a lawyer, nor did I read the [MS] Reference License carefully, so my opinions are the opinions of a developer.

.net 3.5

Microsoft is going to release the source code for the .NET Framework 3.5 libraries

This is simply amazing. The entire .NET Framework libraries’ source code, including comments. Priceless. Forget Reflector – now we’ll be able to step into the source while debugging and see how it works. A fantastic and bold move from [MS]; it’s even hard to believe this is really going to happen! Check out the details here.

But hey, what took them so long?

Hardware Windows

Playing video or audio in Vista is a booby trap

Since I switched to a gigabit network some months ago I have often seen lousy network performance against other computers on my local network. It wasn’t annoying enough to actually investigate. However, one day I said: enough is enough. Time for an investigation, even if I had to lose a day (which I did).

So I equipped myself with the pcattcp network performance tool (found through the Coding Horror blog), which is a great and simple utility – perfect for me. I started testing, and the tests confirmed that local network performance was below 100Mb/s even though I am on a 1Gb/s link. Why would that be? My first instinct was that there had to be a hardware/driver issue. After praying a bit (on my knees next to the computer, experimenting for a couple of hours with different network cards, cables and configurations) it was obvious that this wasn’t it. It had to be a software problem, then. So I restarted Vista in safe mode with networking (which I should have done from the beginning) and the network performed as it was supposed to (~60MB/s). Huh – it looked like a service or some other application that runs at startup was interfering with my network. My instinct kicked in again and suggested the NOD antivirus I am using (mostly with its real-time features disabled). It is a fine antivirus but, still, it is an antivirus, which might create collateral damage.

So I restarted with only NOD disabled and guess what: the network ran at full speed, as in the safe start. I said, OK, that was it, cursing the antivirus software – unjustly, as you’ll see soon. And I got back to my usual work. After a while I rechecked the network performance, only to see, to my disappointment, that it was stuck again. Oh well. Sorry, NOD, I wrongly accused you. But why did the network problem manifest only after a while? Then it struck me – perhaps not that obvious – that I was currently listening to Winamp’s Shoutcast radio. Could Winamp have the power to cause Vista network problems? I closed Winamp and the network regained full speed. Even after my odd experience with iTunes, I didn’t believe Winamp was actually the culprit. I was right: the same happens with Windows Media Player (or any other audio/video player) – as soon as you start listening to music or watching a video, bam, there go two thirds of your precious bandwidth.

It had to do with QoS or something like that, then. Now that I had pinpointed the problem, I was able to make a phone call to local [MS] support. As soon as I told the engineer the words network, audio and playing, he said: “Right, it is a known ‘feature’ in Vista with no currently available official fix.” He kindly sent me two links: one referencing Mark Russinovich’s blog post describing the background (the Multimedia Class Scheduler Service – MMCSS – being the culprit) and one to a possible workaround. Luckily the workaround works – kudos to Courtney Malone and to local [MS] support.
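For reference, the workaround that circulated at the time boils down to a registry tweak. I am sketching it from memory, so treat it as an assumption and prefer the linked post: the NetworkThrottlingIndex value controls how aggressively MMCSS throttles network traffic during multimedia playback, and 0xffffffff disables the throttling.

```shell
:: Assumed sketch of the MMCSS workaround: turn off network throttling
:: during multimedia playback. Run in an elevated prompt, then reboot.
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Multimedia\SystemProfile" ^
    /v NetworkThrottlingIndex /t REG_DWORD /d 0xffffffff /f
```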

However, a bitter taste remains. Who the heck developed such a lame solution? The collateral damage is just incredible. Imagine driving a car on the highway: you turn on the radio and bam – there goes your speed, from 130km/h to 40km/h. Though the radio might play better.

Was MMCSS programmed by a high school intern? By a person who doesn’t know the real world? Who knows. We can only hope this issue is addressed in SP1.

.net 3.5 SQL

Post CodeCamp impressions

On Saturday I gave a talk and co-hosted a “round table” at CodeCamp in Zagreb, in front of a crowd of ~60. My presentation was well received and understood (at least that’s what I was told) even though my Croatian is not perfect – English and hand gestures helped. The entire event was high level, with world-class speakers (not counting myself in that category), and well worth attending.

And I have to say that the community in Croatia is amazing. They are passionate, communicative and skilled; some of them came all the way from Dubrovnik (500km) or the Slavonija region; and, lastly, they were willing to sacrifice an entire warm, sunny Saturday for an indoor event. OK, they got plenty of very useful swag but, still, they were there for the content. It is a pleasure to speak to such a crowd and among such great speakers.