Book Review: Introducing HTML5

by jmorris 4. March 2011 22:54

As usual, I have been reading a ton of books lately on various subjects, from HTML5 to NHibernate to unit testing. The plan is to write a series of reviews, one for each book, over the next couple of weeks.


The first book I will write about is Introducing HTML5 by Bruce Lawson and Remy Sharp. If you're not familiar with HTML5, it and CSS3 are the latest and greatest in web standards and technology from the W3C. To be honest, you have to use the term HTML lightly with respect to HTML5…while it does add to the markup commonly associated with HTML and other SGML derivatives, it also offers a whole slew of technologies that are decidedly outside the scope of markup: graphics, client side messaging, geo-location and others.

I'll keep this book review short and sweet, like the book at barely 200 pages…as the title states, it is an introduction to HTML5 and gives a summary of the various technologies it comprises. It's not a reference manual nor a deep explanation of the subject; it's a brief description of each technology with simple, real world examples of their usage. In my opinion its worth is that in a few hundred pages you get enough understanding of each technology: not enough to walk the walk, but definitely enough to hold a conversation on one of the topics. If you are already past this stage, it probably would not appeal to you; it simply doesn't offer enough detail.

Additionally, one thing it does do is give small nuggets of information regarding workarounds for various browsers that only partially support HTML5 or have bugs in their implementation. In fact, the authors are not shy about discussing some of the limitations of the support for HTML5 across the common, modern browsers. Of course, this is somewhat at odds with some of the HTML5/CSS3 "fanboys" out there that try to pass off HTML5 as the next coming…the truth is that only portions of HTML5 are completely supported by all browsers, and knowing the workarounds (html5shiv, selectivizr, modernizr, etc.) is the key to using it now.

Who is the book for? Pretty much anybody with a cursory understanding of HTML, browsers, how the web works and maybe some experience with JavaScript or another programming language. Much of it is suitable for managers as well.




ActiveMQ: Topics vs. Queues

by jmorris 9. February 2011 13:47

The JMS specification defines two different ways of defining message destinations: Queues and Topics. Understanding the behavior of each is vital to implementing messaging within a system and fulfilling any expectations of message availability.

First a quick aside: the purpose of this post is mostly to keep myself from repeating the mistakes I made today based upon invalid assumptions regarding the availability of messages sent to a queue within an environment with multiple listeners. In a nutshell, I was writing a test to queue up a single message and have a listener within the test receive the message. For some reason the message was never being received…baffling given the simplicity of the test and the maturity of the code.

Here is the test (ignore that the test is not well written, I just want to quickly test that the code works while I am developing):
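The original listing was a screenshot; here is a sketch of roughly what the test looked like, assuming Apache.NMS with NUnit (MessagePublisher and the _receiver field are stand-ins for the post's own types):

```csharp
// Hypothetical reconstruction -- the original post showed an image.
[Test]
public void Publish_Message_To_Queue()
{
    var publisher = new MessagePublisher("activemq:tcp://localhost:61616", "test.queue");
    publisher.Publish("hello, queue");

    // The listener that should (but didn't) receive the message
    _receiver.MessageReceived += message => Console.WriteLine(message);
    Thread.Sleep(TimeSpan.FromSeconds(5)); // crude wait for delivery
}
```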


I edited this code a bit for simplicity, but in essence a message should be pumped to the console and displayed (the real test validates that the message was received after a delay of x seconds). The only problem was that I was getting nada…note that I am not testing the queuing API itself, I am testing that the strategy will push a message onto the queue without throwing an exception. It's basically a throwaway test to prove that it works as expected in an integrated environment.

The problem was that the message was never being received by the listener (QueueReceiver is the listener); however, other similar tests that are messaging API specific were working as expected. Running through the usual checks turned up nothing. The code simply didn't work…but only for this test! Swapping out for another queue worked perfectly :(

Fool me once, shame on you. Fool me twice…

So, what was causing the problem? Well it all became very clear once I went back and reflected on the basics of JMS messaging and the differences in how messages are delivered to queues and topics.

The JMS specification supports two models for sending messages: point-to-point for queues and publish/subscribe for topics. In a nutshell, only one consumer of a queue will receive a given message regardless of the number of consumers; message dispersal is round-robin. A topic, on the other hand, broadcasts a copy of the message to every registered consumer.

It turns out that I had a Windows service running that was a consumer of the same queue, and it was the first listener, so it always got the first message. The unit test never received a copy because only one message was sent and the consumer on the Windows service always pulled it off. This was proved by simply calling publish twice:
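In sketch form (publisher and _receiver being the test's hypothetical messaging objects):

```csharp
publisher.Publish("message #1"); // pulled off by the Windows service consumer
publisher.Publish("message #2"); // round-robins to the unit test's _receiver
```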


This consumer (_receiver) got the second message, because the first one was consumed by the Windows service and the next round-robin listener was this unit test. ARGGGHHHH, what a waste of time and brain power!

Lessons learnt:

  1. Don’t write unit tests with external dependencies and assume they will behave as expected
  2. Don’t use shared queues for testing
  3. Don’t forget the difference between queues and topics



Apache.NMS, JMS | rant

Adding Implementation to Interfaces with Extension Methods

by jmorris 21. January 2011 22:58

One of the quirks of extension methods, added in the .NET 3.5 release, is that you can add them to interfaces, seemingly adding implementation to interfaces. By definition, in the CLR (and most other languages or platforms) interfaces contain only the signatures of methods, delegates, properties or indexers. By implementing an interface in a class, the body of the method, delegate, property or indexer is added…the implementation is added.

For example:
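The original example was an image; a minimal sketch of the idea, assuming Apache.NMS's IMessage and IPrimitiveMap (the method names are illustrative, not necessarily the ones from the original post):

```csharp
using Apache.NMS;

// Extension methods "added" to the IMessage interface; they simply wrap
// access to the key/value pairs in the message's IPrimitiveMap properties.
public static class MessageExtensions
{
    public static string GetValue(this IMessage message, string key)
    {
        return message.Properties.GetString(key);
    }

    public static void SetValue(this IMessage message, string key, string value)
    {
        message.Properties.SetString(key, value);
    }
}
```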


Here I am extending the IMessage interface with a couple of methods for simplifying access to value pairs contained within an IPrimitiveMap (a hash map implementation). A couple of unit tests illustrate the usage:
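Along these lines (a sketch assuming NUnit and an open NMS session held in _session):

```csharp
[Test]
public void Can_Set_And_Get_Value_Via_Extension_Methods()
{
    IMessage message = _session.CreateMessage(); // assumes an open NMS session

    message.SetValue("key", "value");            // reads like interface implementation...
    Assert.AreEqual("value", message.GetValue("key")); // ...but is compiler sugar
}
```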


Truthfully, you are adding nothing to the interface! No implementation is actually added…basically a static class is created with the extension methods, and some compile-time kung fu enables you to invoke the method off of the interface definition. Extension methods are not the domain of the CLR; they are the domain of the compiler, via the System.Runtime.CompilerServices.ExtensionAttribute.

I would imagine this is old news for most seasoned .NET developers, but I just kind of stumbled upon it myself today and thought it was interesting with respect to interfaces and implementation :)



The Dilemmas of a Developer

by jmorris 12. January 2011 01:43

I ran across this today while refactoring/reviewing some code:


What in the heck am I supposed to do with this? Delete it? Refactor it? Quickly close the file and pretend I never saw it? Mind you that it's in code that's in production and gets run several times a day…and the code works as expected.

I chose to refactor it:


Unit tests pass:


Hold your breath!



.NET Framework 4 Client Profile: The Devil Itself!

by jmorris 7. January 2011 02:16

I am convinced that Microsoft's decision to set the build profile of projects created with the Console Application template to ".NET 4 Client Profile" is the work of the devil itself! Why, you might ask? Because it is set to this profile by default, and because it will cause projects that should rightfully compile to fail, without an adequate explanation of why!

The reason this happens is that VS 2010 will allow you to reference projects and/or DLLs compiled against the ".NET Framework 4" target, which may contain references to resources excluded from the ".NET 4 Client Profile", without complaining or warning the user. What a horrible, frustrating "feature"…

Here is a formal description from MSDN:

"The .NET Framework 4 Client Profile is a subset of the .NET Framework 4 that is optimized for client applications. It provides functionality for most client applications, including Windows Presentation Foundation (WPF), Windows Forms, Windows Communication Foundation (WCF), and ClickOnce features. This enables faster deployment and a smaller install package for applications that target the .NET Framework 4 Client Profile."

Here is an example of a compile time error caused by this:


The weird, confusing part is that while typing in the using statement for the namespace, IntelliSense will show you the namespaces and the Object Browser will confirm that they exist. However, when you go to compile, it will fail! Very frustrating!!!

The fix is very easy: simply right click on the project, select Properties, and then in the "Target Framework" dropdown select ".NET Framework 4" and you're good to go. I am hoping that this will be fixed or changed in VS2010 SP1, which should be released soon.


rant | Visual Studio

OOP 101 – Understanding SOLID

by jmorris 5. January 2011 22:14


SOLID is an acronym for several specific traits that are consistently found in well written, reusable, and extendable software. Specifically, there are five principles to follow when constructing object-oriented systems: SRP, OCP, LSP, ISP, and DIP.

SRP– Single Responsibility Principle

SRP relates to class cohesion. In essence a class should have no more than one reason to change. A class violating this principle is likely doing too much and thus lacking cohesion. Classes that lack cohesion are brittle and difficult to maintain.

For example, here is a class that does more than one thing, or in other words has more than one purpose or responsibility:
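The original listing was an image; a sketch of the kind of class it showed (names assumed):

```csharp
// Responsibility #1: model the real-world entity.
// Responsibility #2: persist itself to the database. Two reasons to change!
public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }

    public void Save()
    {
        using (var connection = new SqlConnection(_connectionString))
        {
            connection.Open();
            // ... INSERT/UPDATE logic mixed into the entity ...
        }
    }
}
```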


In this case we have a class that mixes both database activities (responsibility #1) and the abstraction of the real world entity that the class represents (responsibility #2). Classes like this can be refactored into more cohesive pieces by using a well defined pattern such as Data Transfer Objects and Data Access Objects (DTO/DAO):
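A sketch of the DTO/DAO split (again, illustrative names):

```csharp
// The DTO: a pure abstraction of the entity, with one reason to change
public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The DAO: owns all database activity for Person
public class PersonDAO
{
    public void Save(Person person) { /* INSERT/UPDATE logic */ }
    public Person GetById(int id) { /* SELECT logic */ return null; }
}
```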


By separating the responsibilities into separate classes, the intended purpose of each class becomes clearer, maintainability increases, and code reuse is promoted.

OCP – Open/Closed Principle

The OCP states that objects (and other software artifacts) should be open for extension, but closed for modification. What this means is that you should be able to extend a class's behavior without modifying it internally and potentially breaking any other usages or users of the class or artifact.

The OCP is typically enforced via inheritance and abstract or virtual classes and methods. Reusing the example we introduced in SRP, let's make up the following scenario: assume that we not only want to persist our objects to the database, but we also now want to persist the XML representation of the object to disk. We could choose to modify the existing PersonDAO class by adding new methods for writing the XML to disk, reading the XML off disk and deleting the file from disk:
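Something along these lines (a sketch of the modification, not the post's exact code):

```csharp
// Bolting the new behavior onto the existing class -- an OCP violation
public class PersonDAO
{
    public void Save(Person person) { /* existing database logic */ }

    // New methods risk breaking existing clients of PersonDAO:
    public void SaveXml(Person person, string path) { /* write XML to disk */ }
    public Person ReadXml(string path) { /* read XML from disk */ return null; }
    public void DeleteXml(string path) { /* delete the file from disk */ }
}
```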


This of course could potentially break any clients using the DAO. A better way to accomplish this is to create a base class or interface that provides the signatures of the methods and/or a base implementation, and then have derived classes provide specific implementations:
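A minimal sketch of that shape (names assumed):

```csharp
public interface IPersonDAO
{
    void Save(Person person);
    Person GetById(int id);
}

// The original, untouched behavior lives in one implementation...
public class DatabasePersonDAO : IPersonDAO
{
    public void Save(Person person) { /* SQL INSERT/UPDATE */ }
    public Person GetById(int id) { /* SQL SELECT */ return null; }
}

// ...and the new XML-on-disk behavior extends the abstraction in another.
public class XmlFilePersonDAO : IPersonDAO
{
    public void Save(Person person) { /* serialize to disk */ }
    public Person GetById(int id) { /* deserialize from disk */ return null; }
}
```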


In a nutshell, OCP deals with increasing the maintainability and reusability of code, which is achieved by extending existing code with new subtypes as opposed to modifying older, already working code when new behavior or features are required.

LSP – Liskov Substitution Principle

LSP is another principle related to class structure; it states that derived classes must be substitutable for their base classes. What this means is that code using references to a base class must be able to use objects of derived classes without knowing it. Like OCP, LSP is closely related to inheritance and polymorphism; violations often stem from mutability of an object's state leading to a violation of one of the class's invariants.

The classic example is as follows: a square that derives from a rectangle, with getter and setter methods or properties for height and width. Because the height and width can be changed independently, it's possible to violate the invariant that a square has sides of equal length.
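The classic setup looks something like this (a sketch standing in for the original screenshot):

```csharp
public class Rectangle
{
    public virtual int Height { get; set; }
    public virtual int Width { get; set; }
}

// A square "is a" rectangle, but it inherits independently settable sides...
public class Square : Rectangle
{
}
```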


This design can violate the LSP easily:
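For instance:

```csharp
Rectangle rectangle = new Square();
rectangle.Height = 5;
rectangle.Width = 10; // the "square" now has sides 5 and 10 -- invariant broken
```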


However, by refactoring the class to ensure that the properties are not modified independently, we can preserve the class invariant (a square is a rectangle with equal sides):
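One way to sketch that refactoring:

```csharp
public class Square : Rectangle
{
    public override int Height
    {
        get { return base.Height; }
        set { base.Height = value; base.Width = value; } // keep the sides equal
    }

    public override int Width
    {
        get { return base.Width; }
        set { base.Width = value; base.Height = value; }
    }
}
```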



LSP defines several behavioral conditions that subtypes must adhere to, namely:

  • Invariants of the base types must be preserved in subtypes
  • Post conditions cannot be weakened in a subtype
  • Pre conditions cannot be strengthened by a subtype

ISP – Interface Segregation Principle

ISP largely relates to class cohesiveness. Classes with high cohesion tend to do less, but do it much better, and are easier to reuse and maintain. Classes with low cohesion tend to do lots of things, resulting in unwarranted dependencies, which reduce maintainability, reliability, testability, and understandability.

Historically, ISP has dealt with the problem of "fat interfaces" leading to "fat classes" that are not very cohesive. "Fat interfaces" is a term for interfaces with many methods that can and should be broken into smaller, more fine-grained interfaces. In situations where "fat interfaces" are required, abstract base classes that implement the more cohesive, fine-grained interfaces should be used. The major theme here is that clients should not be required to depend upon interfaces that they do not need [Uncle Bob].

For an example of an ISP violation, we do not have to look far in the .NET world (note that this is my opinion and my opinion only) to find one: System.Xml.Serialization.IXmlSerializable. For those not familiar with this interface, it provides a means of implementing custom XML serialization. It's a very simple interface in that it only provides three method signatures to implement: ReadXml, WriteXml, and GetSchema.
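For reference, the interface looks like this:

```csharp
using System.Xml;
using System.Xml.Schema;

namespace System.Xml.Serialization
{
    public interface IXmlSerializable
    {
        XmlSchema GetSchema();
        void ReadXml(XmlReader reader);
        void WriteXml(XmlWriter writer);
    }
}
```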


It's also very useful, in that the .NET Framework provides a corresponding class for serializing and de-serializing objects implementing the IXmlSerializable interface: XmlSerializer. However, what happens if you only need serialization or de-serialization and not both? What if schema validation is overkill, and GetSchema is in fact a reserved method that should not be used? You end up with GetSchema and either ReadXml or WriteXml throwing a NotImplementedException!
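A sketch of the situation (the Person class is illustrative):

```csharp
public class Person : System.Xml.Serialization.IXmlSerializable
{
    public XmlSchema GetSchema()
    {
        throw new NotImplementedException(); // reserved -- should just return null
    }

    public void ReadXml(XmlReader reader)
    {
        throw new NotImplementedException(); // this class only ever serializes
    }

    public void WriteXml(XmlWriter writer)
    {
        // the only member Person actually needs
    }
}
```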


The ISP violation in the Person class can be refactored by providing an abstract class that implements IXmlSerializable and then overriding only the methods you wish to implement.
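Something along these lines (a sketch; the base class name is assumed):

```csharp
// An abstract base absorbs the "fat" interface; subclasses override only
// the members they care about.
public abstract class XmlSerializableBase : System.Xml.Serialization.IXmlSerializable
{
    public virtual XmlSchema GetSchema() { return null; }
    public virtual void ReadXml(XmlReader reader) { }
    public virtual void WriteXml(XmlWriter writer) { }
}

public class Person : XmlSerializableBase
{
    public override void WriteXml(XmlWriter writer)
    {
        // serialization only -- no NotImplementedException in sight
    }
}
```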


Now, you are no longer forcing clients to implement parts of the interface that are not of concern to them.

DIP – Dependency Inversion Principle

DIP is concerned with the structural relationship between classes and the effect that dependencies have on the quality and maintainability of software. Formally, it is the concept that you should depend upon abstractions and not upon concretions [wiki]. DIP is associated with Dependency Injection (DI) and the Inversion of Control (IoC) containers that are commonly used as a means of abstracting the construction of objects from their use. It sounds confusing, but it's really not.

The two major tenets of DIP are as follows:

  1. High level modules should not depend upon low level modules; both should depend upon abstractions
  2. Abstractions should not depend upon details; details should depend upon abstractions

What this means is that systems should be composed of a series of layers, from the abstract and generic to the concrete and specific. Changes in lower level modules should not affect or cause higher level modules to change; the opposite should be true. References should be made using abstract classes or interfaces, not concrete representations.

Here is an example of a DIP violation:
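A sketch of the shape of the violation (the original code was an image):

```csharp
public class PersonDAO
{
    public Person GetById(int id)
    {
        // Hard-wired to the SQL Server specific provider
        using (var connection = new SqlConnection("..."))
        {
            connection.Open();
            // ... run the query, map the result to a Person ...
            return null;
        }
    }
}
```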


Note that the PersonDAO class depends upon the concrete SqlConnection implementation. Now this is fine and dandy if you are working in an environment where you are always using a SQL Server specific provider; however, what happens if you want to reuse this code in an environment that is using a MySql provider? In that case, you really can't without adding some horrible dependencies.

Here is a better example of the same code refactored so that it uses the abstraction (DbConnection) instead of a concrete representation (SqlConnection):
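Sketched out, the refactored shape looks something like this:

```csharp
using System.Data.Common;

public class PersonDAO
{
    private readonly DbConnection _connection;

    // Constructor injection: any provider's connection will do
    public PersonDAO(DbConnection connection)
    {
        _connection = connection;
    }

    // Method injection: the command abstraction is supplied per call
    public Person GetById(int id, DbCommand command)
    {
        command.Connection = _connection;
        // ... execute the command, map the result to a Person ...
        return null;
    }
}
```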


Notice that the refactored object model is using two forms of Dependency Injection: constructor injection and method injection.


Dependency Injection provides a means of injecting dependencies into an object as opposed to hard coding the dependencies within the object itself. Classes designed in this manner lend themselves to both unit testing via mocking or stubbing and construction by way of IoC containers.




T4 Templates Part Deux

by jmorris 15. December 2010 23:04

I previously wrote about some experiences I have had with T4 templates and generating objects from database tables using the MSSQL metadata tables. Well, another code generation opportunity came up, and a comment by a reader pointed me in the direction of Damien Guard's template, so I decided to take a look. Nothing wrong with using something better than what you've already developed.

First up, a couple of observations on T4 and VS2010 (my previous experience with T4 in that post was with VS2008, and it would be interesting to see how far T4 and the IDE have come along).

“Save” Means “Execute”…Still!

The first thing I noticed, which I was hoping would have changed, is the generation/compilation model: whenever you hit Ctrl+S, the template will save and immediately be run by the T4 engine. The problem with this is that if you're doing a bunch of refactoring and your workflow is modify/save/build (as mine is, I blame TDD), then you'll quickly slow down, because generated code that doesn't compile will muck up VS. For example, if you are generating a lot of files, things grind to a stop when this happens after every save. I think it would be so much nicer if generation was tied to its own command (such as in the Build dropdown) instead of piggy-backing on "Save".

Always Set debug=true!

This is super important -- always set debug to "true" in your template headers:
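The directive in question looks something like this (the other attributes shown are typical but not required):

```
<#@ template debug="true" hostspecific="true" language="C#" #>
```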


Doing this is the difference between pulling your hair out trying to figure out what is failing and quickly fixing the issue and moving along.

Assembly Locking is a Major PITA: Use the VolatileAssembly Directive from the T4 Toolbox!

Note: Apparently this is resolved in VS2010 SP1 (Thanks Will!)

One thing that you'll quickly discover, if you're calling into code that exists outside of the template (such as the assembly that you are creating your templates in), is that the T4 engine will hold on to the assemblies used by the templates, so they remain locked while the engine is running. What this means is that you will have to close VS and reopen it if you want to modify the code in those assemblies (such as when you discover an issue in the code the template is using to generate your output).

Closing and reopening Visual Studio is a major PITA and seriously cuts into your productivity. Fortunately, there is a workaround: download and install the T4 Toolbox and use the VolatileAssembly directive on one and only one template.
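The directive looks roughly like this (the assembly name here is a placeholder):

```
<#@ VolatileAssembly processor="T4Toolbox.VolatileAssemblyProcessor"
    name="MyCompany.CodeGeneration.dll" #>
```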


What this does is create a temporary copy of the referenced assembly for each template generation, which is cleaned up after the generation completes.

Damien Guard's Template Rocks!

Now for the grand finale: Damien Guard's template is awesome and really makes multi-file code generation easy. For details, check out his blog posting, but here is a quick overview:
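Sketched from memory of his post (the file and class names are illustrative):

```
<#@ template language="C#" hostspecific="true" debug="true" #>
<#@ include file="Manager.ttinclude" #><#  /* 1. include Damien's template */
var manager = Manager.Create(Host, GenerationEnvironment); /* 2. reference the environment */
manager.StartNewFile("Person.generated.cs");               /* 3. begin a new file */
#>
public partial class Person { }
<#
manager.EndBlock();    /* 4. end the file's body */
manager.StartHeader(); /* 5. header emitted into every generated file */
#>
// <auto-generated />
<#
manager.EndBlock();    /* 6. end the header */
manager.Process(true); /* 7. write the files to disk */
#>
```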


Here is a breakdown of what is going on:

  1. Include Damien’s T4 template
  2. Create a reference to the T4 generation environment
  3. Start a new file using StartNewFile(string fileName). The file’s body is created between this method call and the next EndBlock()
  4. EndBlock() stops the generation of the file’s body
  5. StartHeader() starts a new header section
  6. EndBlock() ends the header section
  7. Call Process() to generate the files

It’s really pretty straightforward and easy to use.




Parsing FTP LIST Command Results

by jmorris 3. December 2010 15:05


As part of an automated process, I have a service that recursively iterates through an FTP directory and pulls down any new or changed files, saving them to disk before they are uploaded into a CDN (Content Delivery Network) where they are used on the Web. Another automated process digitizes published magazines and writes the images and text as XML documents into this FTP directory (so they can be consumed).

This had been running for some time when I noticed that there were a lot of images that were not being copied down from the FTP server. After some investigation I noticed that only images with spaces in the name were failing to download, and upon looking into my code it was pretty clear what was going wrong.

Processing the LIST Command Result

I was using the WebRequestMethods.Ftp.ListDirectoryDetails flag, which makes the FtpWebRequest object use the FTP LIST command to retrieve the file names in each directory. The FTP server is a Unix based computer, so when I execute the directory LIST command it returns a CRLF delimited blob of text containing a record describing each file in the directory, with each record looking like the following:

-rw-r--r--    1 1089    1091       505482 Nov 19 22:53 paper texture 2jpg1290206009609589.JPG

This breaks down into the following structure, delimited by spaces:

  mode        links  owner  group  size    datetime      name
  -rw-r--r--  1      1089   1091   505482  Nov 19 22:53  paper texture 2jpg1290206009609589.JPG

Note that there is another FTP command, NLST, which returns just the file names and works just fine, but it doesn't give you enough information about what kind of entry each file is. For example, if I encounter a folder, I want to step into it and read its contents; if it's a file, I want to download it. The mode portion of the LIST result gives you the information required to make this decision: if the first character is a "d", it's a directory so keep traversing…otherwise assume it's a file and download it.

The Problem with Spaces (and my code)

When I process a LIST record string, I split it into an array and select the name element by ordinal position (the last element):
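A sketch of the buggy version (the original listing was an image):

```csharp
// Split on whitespace and take the last element as the file name
string[] parts = record.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
string name = parts[parts.Length - 1]; // only the last chunk of a name with spaces!
```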


The problem is that when the name element has spaces, the last element is only the last whole part of the file name; any preceding parts are split into separate elements and ignored. So for most files there were no problems; it wasn't until files with spaces in the name began appearing that the problem did.

Regular Expressions To The Rescue

A quick Google search revealed that people who have encountered this problem used regexes to correctly parse the LIST result record. In particular, this post on Stack Overflow hit the nail on the head:
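A pattern along the lines of that answer (not the answer's exact text): match the fixed fields positionally and let the final group capture the name, spaces and all.

```csharp
using System.Text.RegularExpressions;

var regex = new Regex(
    @"^([\-dl])([\-rwxs]{9})\s+(\d+)\s+(\w+)\s+(\w+)\s+(\d+)\s+" +
    @"(\w{3}\s+\d{1,2}\s+[\d:]{4,5})\s+(.+)$");

Match match = regex.Match(record);
bool isDirectory = match.Groups[1].Value == "d"; // the mode's first character
string name = match.Groups[8].Value;             // full name, spaces included
```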


A quick unit test confirms the simplest case:
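Roughly like so (NUnit assumed; the regex mirrors the pattern discussed above):

```csharp
[Test]
public void Parses_Name_Containing_Spaces()
{
    const string record =
        "-rw-r--r--    1 1089    1091       505482 Nov 19 22:53 paper texture 2jpg1290206009609589.JPG";

    var regex = new Regex(
        @"^([\-dl])([\-rwxs]{9})\s+(\d+)\s+(\w+)\s+(\w+)\s+(\d+)\s+" +
        @"(\w{3}\s+\d{1,2}\s+[\d:]{4,5})\s+(.+)$");

    Match match = regex.Match(record);
    Assert.IsTrue(match.Success);
    Assert.AreEqual("paper texture 2jpg1290206009609589.JPG", match.Groups[8].Value);
}
```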




Lazy Initialization in the CLR 4

by jmorris 18. November 2010 22:38

Lazy initialization, or lazy loading, is the idea that an object will not be constructed, created, initialized or loaded until it is absolutely needed. A common scenario for lazy initialization is a list stored in a cache that is empty until it is accessed; then it is loaded, perhaps from a record-set in a database or a file, placed into the cache, and finally returned to the caller. If the list is never requested, the cache is never filled and the application's working set is smaller. Typically, smaller working sets (of memory) mean better perceived performance and happier customers.

System.Lazy<T> provides a (potentially) thread safe wrapper around an object and a means of deferring initialization of the core object, via a Func<T> delegate, until it's requested. Depending upon the constructor called, you can emulate one or more different locking techniques, such as double-checked locking, using a nested class to defer initialization, or doing initialization via a locking mechanism when the object is first accessed. In memory challenged situations, the System.Threading.LazyInitializer class provides static methods for doing (potentially) thread-safe lazy initialization as well.

I qualified thread safety with "potentially" above because, depending upon the constructor or method overload used on Lazy<T>, thread safety may or may not be ensured. The reason for this is to improve performance when thread safety is not required or desired, because no synchronization locking is done.

Lazy Initialized Singletons

One of the most common examples of lazy initialization is implementations of the GOF Singleton pattern. There are several ways to implement the singleton pattern in C# using lazy initialization, from extremely simple to somewhat complex. Each offers differences in the thread synchronization semantics used, the relative thread safety, and the "laziness" of the implementation. Besides the traditional ways of implementing singletons, you can also use one of the new .NET 4 classes: System.Lazy<T> or System.Threading.LazyInitializer.

There are at least four thread-safe variants of the singleton available to languages targeting the CLR, three of which directly support lazy initialization. In a nutshell, they are: the double-checked locking technique, which is broken in Java but works correctly in C# [1]; full lazy instantiation using a nested inner class with a private static constructor and a reference to an internal static readonly instance of the parent class; and a third version that is thread safe and lazily initialized, with the caveat of reduced performance, since a lock is acquired on all reads. For an in-depth discussion, check out Jon Skeet's excellent article on the topic.

Using the System.Lazy<T> class in .NET 4.0 you can easily implement the singleton pattern in a thread safe, lazy initialized manner that is optimized for performance:
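The original listing was an image; a sketch of the Lazy<T> singleton it described:

```csharp
public sealed class Singleton
{
    // The second argument (isThreadSafe: true) makes first access safe
    // under concurrency.
    private static readonly Lazy<Singleton> _instance =
        new Lazy<Singleton>(() => new Singleton(), true);

    private Singleton() { }

    public static Singleton Instance
    {
        get { return _instance.Value; } // created on first access only
    }
}
```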


Note that passing "true" to the constructor makes the initialization thread safe by using the double-checked locking semantics described above. If "false" is passed in for isThreadSafe, then no synchronization takes place. You can also use one of the following LazyThreadSafetyMode enumeration values in another overloaded constructor call:

  • LazyThreadSafetyMode.None – no synchronization occurs (not thread safe)
  • LazyThreadSafetyMode.Publication – uses Interlocked.CompareExchange
  • LazyThreadSafetyMode.ExecutionAndPublication – uses the C# lock keyword, which the CLR implements with a Monitor (note: in Richter's book he states that it uses double-checked locking, but Reflector shows only one lock…)

Here is another example using System.Threading.LazyInitializer:
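Again a sketch standing in for the original screenshot; note the Func<T> overload is needed here because the constructor is private:

```csharp
public sealed class Singleton
{
    private static Singleton _instance;

    private Singleton() { }

    public static Singleton Instance
    {
        get
        {
            // Initializes _instance exactly once, in a thread safe manner
            return LazyInitializer.EnsureInitialized(ref _instance, () => new Singleton());
        }
    }
}
```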


Lazy Initialization and Micro Optimizations

Now, one might argue that lazy initialization is a premature, if not unnecessary, micro-optimization. In some respects that is probably true. Singletons, for instance, are only created once for the entire lifetime of an application, typically at start up. However, if you truly need thread-safe lazy initialization, then System.Lazy<T> or System.Threading.LazyInitializer is the way to go with .NET 4.


  1. CLR via C#, Jeffrey Richter, 2010, Microsoft Press



Refactoring 101: Method Groups

by jmorris 4. November 2010 14:10

C# allows method group conversions, which simplify the invocation of delegates within your code. This feature was added in C# 2.0, and when combined with the LINQ extensions provided with .NET 3.5, it can drastically shorten and simplify code.

"Similar to the implicit anonymous method conversions described in §13.5, an implicit conversion exists from a method group (§14.1) to a compatible delegate type. If D is a delegate type, and E is an expression that is classified as a method group, then D is compatible with E if and only if E contains at least one method that is applicable in its normal form (§14.4.2.1) to any argument list (§14.4.1) having types and modifiers matching the parameter types and modifiers of D."

Basically, what the above means is that the compiler is "smart" enough to infer the correct overload to call, given that there is an adequate candidate method available. For example, given the following two methods:
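The original example was an image; two hypothetical methods along the same lines, one doing the work and one looping over the input:

```csharp
public static void Print(string value)
{
    Console.WriteLine(value);
}

public static void PrintAll(IList<string> values)
{
    for (int i = 0; i < values.Count; i++)
    {
        Print(values[i]);
    }
}
```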


First we can refactor the for loop using a lambda expression and the LINQ extensions:
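Continuing the hypothetical example from above:

```csharp
public static void PrintAll(IEnumerable<string> values)
{
    values.ToList().ForEach(value => Print(value));
}
```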


Then we can simplify the lambda expression even further by substituting the implicit method group conversion:
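Which, for the same hypothetical example, collapses to:

```csharp
public static void PrintAll(IEnumerable<string> values)
{
    values.ToList().ForEach(Print); // Print is the method group
}
```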


Note that the green squiggly lines are hints from ReSharper that the line of code can be refactored. If you're not aware of ReSharper, it's a Visual Studio add-on that turns VS from a Pinto into a Ferrari! If you don't believe me, try the free trial here. OK, enough Kool-Aid and free marketing for ReSharper…

So, you're probably thinking one of three things about now (assuming you made it this far):

  1. “Big fricken deal, he saved five lines of code”
  2. “Eh, old news. Moving on.”
  3. "Wow, that's fricken awesome dude!"

Personally, I tend towards #3. I am (obviously) a huge fan of method group conversions because they reduce complexity. They simply make the code easier to read and digest. Code that is easier to read and digest is more easily maintained, and code that is easier to maintain tends to be of higher quality and less error prone.



Refactoring | Resharper | C#

Jeff Morris
