Sunday, May 18, 2014


Whenever I need to load assemblies dynamically at runtime, I have up to this date been using the System.Reflection namespace, somewhat like this:
// Load the assembly by name into the current application domain.
Assembly a = Assembly.Load("example");
// Get the type to use.
Type myType = a.GetType("Example");
// Get the method to call.
MethodInfo myMethod = myType.GetMethod("MethodA");
// Create an instance of the type.
object obj = Activator.CreateInstance(myType);
// Execute the method on the instance.
myMethod.Invoke(obj, null);
With this approach, if the assembly name is not known in advance, I need to iterate over all my assemblies in order to find and load the objects implementing a certain interface.
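That iteration can be sketched roughly like this. Note that IPlayer and DemoPlayer are hypothetical names for illustration, and a real implementation should also handle assemblies whose types fail to load:

```csharp
using System;
using System.Linq;

public interface IPlayer { string Name { get; } }

// A hypothetical implementation that the scan below should discover.
public class DemoPlayer : IPlayer { public string Name { get { return "Demo"; } } }

public static class Program
{
    public static void Main()
    {
        // Iterate over every loaded assembly and pick out the concrete
        // types implementing IPlayer, then instantiate each one.
        var players = AppDomain.CurrentDomain.GetAssemblies()
            .SelectMany(a => a.GetTypes())
            .Where(t => typeof(IPlayer).IsAssignableFrom(t)
                        && !t.IsAbstract && !t.IsInterface)
            .Select(t => (IPlayer)Activator.CreateInstance(t))
            .ToList();

        foreach (var player in players)
            Console.WriteLine(player.Name);
    }
}
```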
However, I discussed this with a colleague and found out there is another way of loading assemblies at runtime. The alternative is called the Managed Extensibility Framework (MEF) and has shipped with the .NET Framework since version 4.0. The magic happens inside the System.ComponentModel.Composition namespace.
Basically, types that should be loaded at runtime are attributed with Export:
[Export(typeof(IPlayer))]
public class TobiasPlayer : IPlayer
Put the assembly containing TobiasPlayer in a directory accessible to your application. Then, in your application code:

public List<IPlayer> GetPlayers()
{
    // Set up MEF to scan a directory for exported types.
    var myPath = @"C:\temp";
    var catalog = new DirectoryCatalog(myPath);
    var container = new CompositionContainer(catalog);
    // Import (and instantiate) the exported IPlayer implementations.
    var players = container.GetExportedValues<IPlayer>().ToList();
    return players;
}
The GetPlayers method returns initialized instances of all the types that implement IPlayer and are attributed with Export, found in the C:\temp directory. The example above uses the GetExportedValues method, which instantiates the exported classes. By using GetExports you instead receive Lazy objects, meaning the instances are not created until you call Value on the Lazy object.
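A minimal sketch of the lazy variant, using a TypeCatalog instead of a DirectoryCatalog so the example is self-contained (the type names mirror the ones above, and the code requires a reference to the System.ComponentModel.Composition assembly):

```csharp
using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface IPlayer { string Name { get; } }

[Export(typeof(IPlayer))]
public class TobiasPlayer : IPlayer { public string Name { get { return "Tobias"; } } }

public static class Program
{
    public static void Main()
    {
        // TypeCatalog stands in for DirectoryCatalog(@"C:\temp") here.
        var catalog = new TypeCatalog(typeof(TobiasPlayer));
        using (var container = new CompositionContainer(catalog))
        {
            // GetExports returns Lazy<IPlayer>; nothing is instantiated yet.
            foreach (var lazyPlayer in container.GetExports<IPlayer>())
            {
                // The TobiasPlayer instance is created on this Value access.
                Console.WriteLine(lazyPlayer.Value.Name);
            }
        }
    }
}
```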
This way I don't have to write code that iterates over all assemblies, and I can keep my assemblies in a separate directory rather than putting them in the web application's bin folder.


This is the scenario: you have a CRM system where the editors can change customer details. The CRM user interface is a web application which will be used by several editors. There is a chance that multiple editors will edit the same customer simultaneously.
Since the HTTP protocol is stateless, there is a chance that an editor overwrites changes made after that editor loaded the “edit customer” web page.
To solve this you can make use of an ETag containing a value representing the customer data, preferably a changed date. You submit that value when initially sending the page to the web client, and the client posts the value back along with the new customer details so the two values can be compared. The comparison results in either accepting or rejecting the changed customer information.
The HTTP specification states that if the If-Match HTTP header value does not match the current representation of the entity, the server should return status code 412 (Precondition Failed) and not persist the data. Otherwise, it should return 200 (OK).
When loading the page you submit the ETag either in the header or in the body. When the customer details are sent back to the server using a PUT request you pass the ETag value in the If-Match HTTP header.
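Conceptually, the exchange looks like this (the ETag value shown is a made-up ticks string):

```http
GET /customer/42 HTTP/1.1

HTTP/1.1 200 OK
ETag: "635358344460000000"        (also embedded in the page body)

PUT /customer/42 HTTP/1.1
If-Match: "635358344460000000"

HTTP/1.1 200 OK                   (value still matches: persist the changes)
HTTP/1.1 412 Precondition Failed  (someone else saved first: reject)
```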
If you have an ASP.NET MVC solution with AngularJS (without it being a full SPA) and ASP.NET Web API, you can solve this by doing the following.
GET request – when loading the page with the customer information
Pass a representation of the ETag through the MVC model from the MVC controller and make it accessible from your Angular controller. I use a sort of initial data collection which will populate an AngularJS scope variable when the page is loaded.
PUT request – when passing the changed data back to the server
The data is passed from the UI through an AngularJS $http request using the PUT method:
var config = {
    method: 'PUT',
    url: '/customer',
    data: { /* the changed customer details */ },
    // $scope.etag is initialized when the page is loaded
    headers: { 'If-Match': $scope.etag }
};

$http(config)
    .success(function (response) {
        // notify the editor that the update succeeded
    })
    .error(function (data, status) {
        if (status === 412) {
            // notify the editor that the customer has already been updated by
            // someone else and that the page must be reloaded to get the new data
        }
    });
The receiving end is the Web API controller:
public HttpResponseMessage Put(CustomerData customerData)
{
    var customer = GetCustomerFromDatabase(customerData.Id);
    var isAlreadyModified = IsAlreadyModified(customer);

    if (isAlreadyModified)
    {
        // Return status code 412 if the customer was changed during the editing.
        return Request.CreateErrorResponse(
            HttpStatusCode.PreconditionFailed,
            "Customer has already been modified. Please reload the page and redo your changes.");
    }

    // Persist the changes here, then confirm.
    return Request.CreateResponse(HttpStatusCode.OK);
}

private bool IsAlreadyModified(Customer customer)
{
    // Use the ticks of the last changed date as the ETag value.
    var ourEtag = customer != null
        ? customer.ChangedDate.Ticks.ToString(CultureInfo.InvariantCulture)
        : string.Empty;
    var theirEtag = Request.Headers.IfMatch.ToString();

    return !ourEtag.Equals(theirEtag, StringComparison.InvariantCultureIgnoreCase);
}


Tuesday, May 13, 2014

CLR 4.5: Managed Profile Guided Optimization (MPGO)

Again, this performance enhancement (or technology, I would say) targets application startup time, though it is more focused on the large desktop mammoths. The technology Microsoft introduces with .NET 4.5 will not be brand new to you if you are a C++ developer.
The PGO build process in C++
In the C++ toolchain shipped with .NET 1.1, a multi-step compilation known as Profile Guided Optimization (PGO) can be used by the C++ developer as an optimization technique to improve application startup time: you run the application through some common scenarios (exercising your app) with data collection running in the background, and the collected results are then used for an optimized compilation of the code. You can read more about PGO here.
As a matter of fact, Microsoft has been using this technology since .NET 2.0 to generate optimized ngen-ed assemblies (eating their own dog food), and now they are releasing it for you to take advantage of in your own managed applications.

Crash Course to .Net Compilation

In .NET, when you compile your managed code it is really translated into an intermediate language (IL) and not compiled to the native binaries that will run on the production machine; that happens at run time and is referred to as Just-In-Time (JIT) compilation. The alternative is to pre-compile your code prior to execution using the NGen tool (shipped with .NET).
Generating these native images for the production processor usually speeds up the launch time of the application, especially for large desktop apps that require a lot of JITting; with native images, no JIT compilation is required at all.

Why would you want to use NGen?

If it’s not obvious yet here are some good reasons:
  • NGen typically improves the warm startup time of applications, and sometimes the cold startup time as well
  • NGen also improves the overall memory usage of the system by allowing different processes that use the same assembly to share the corresponding NGen image among them

Exercise your Assemblies!

The Managed Profile Guided Optimization (MPGO) technology can improve the startup time and working set (memory usage) of managed applications by optimizing the layout of precompiled native images. It organizes the layout of your native image so that the most frequently used data is located together in a minimal number of disk pages; each request for a disk page of image data then contains a higher density of data that is useful to the running program, which in turn reduces the number of page requests from disk. If you think about it, this is mostly useful on machines with mechanical disks; if you have already moved to Solid State Drive nirvana, the expected performance improvement will probably be unnoticeable.

How Does it Work?

As I mentioned before, it is a very similar process to the PGO of the C++ compiler, where you exercise your assemblies. Here are the simplified steps of the process:
  1. Compile and build your large desktop application to generate the normal IL assemblies.
  2. Run the MPGO tool on the IL assemblies built in the previous step.
  3. Exercise a variety of representative user scenarios within your application (not too few, not too many; you don't want the tool to be confused about how to organize the layout of your image).
  4. MPGO stores the profile created from your “training” with each trained IL assembly.
  5. When you then run the NGen tool on the trained assemblies, it generates an optimized native image.

How to use MPGO

You can find MPGO in the following directory:
C:\Program Files (x86)\Microsoft Visual Studio 11.0\Team Tools\Performance Tools\mpgo.exe
  • Run the Visual Studio command prompt as administrator and execute MPGO with the correct parameters:
MPGO -scenario "C:\profiles\WinApp.exe" -OutDir "C:\profiles\optimized" -AssemblyList "C:\profiles\WinApp.exe"
  • Exercise the most common scenarios in your application, then close the application.
  • The IL assemblies with their profiles are generated in the output directory.
  • Run NGen on the IL+profile assemblies:
NGEN install "C:\profiles\optimized\WinApp.exe"
  • Run the optimized application.

Final Words

MPGO is a very cool, innovative technology, but its results depend highly on the human factor: which scenarios you exercise, and how you run them while the profile gets created. You might end up running it multiple times with different scenarios before you get the launch time you are looking for.

Saturday, May 10, 2014



I have put together a study guide for the Microsoft exam 70-487 (Developing Windows Azure and Web Services), since there are no books available from Microsoft Press yet. This is the material I am using right now to study for the exam. Hopefully it covers most of the content on the exam.
The exam covers the following sections according to the exam site:
  • Accessing Data (24%)
  • Querying and Manipulating Data by Using the Entity Framework (20%)
  • Designing and Implementing WCF Services (19%)
  • Creating and Consuming Web API-based services (18%)
  • Deploying Web Applications and Services (19%)
The objectives are narrowed down to keywords in the following sections in this post. The keywords are paired with links, preferably a link to a webcast. Some of the links refer to older .NET technologies but will hopefully be applicable even in .NET 4.5. Most of the webcast links require a Pluralsight account, so I suggest you visit and get yourself a subscription. They have a free trial of up to 200 minutes for new customers.
Happy studying!






Thursday, May 1, 2014

ScriptCS: Turning C# into a Scripting Language

ScriptCS enables developers to write C# applications using a simple text editor. Compilation is performed by Roslyn and package management by NuGet.

Glenn Block, Project Manager of the Windows Azure SDK group, started ScriptCS, a side project attempting to turn C# into a scripting language. While a developer can use his C# knowledge to write programs in a simple text editor, compilation is performed by Roslyn, Microsoft's compiler-as-a-service. ScriptCS uses NuGet to find the packages it depends on and then loads their binaries. Roslyn's #r syntax is used to include GAC or other DLL references.

If a file hello.csx contains the following line of C# code
Console.WriteLine("Hello World!");
then running the command scriptcs hello.csx prints the Hello World! string to the console.
There is no need for namespaces or class definitions in this example, and there is no project; no .obj or .exe file is generated. Roslyn does the compilation and ScriptCS executes the result.
Another more elaborate example is creating a Web API host:
using System;
using System.Web.Http;
using System.Web.Http.SelfHost;

var address = "http://localhost:8080";
var conf = new HttpSelfHostConfiguration(new Uri(address));
conf.Routes.MapHttpRoute(name: "DefaultApi",
   routeTemplate: "api/{controller}/{id}",
   defaults: new { id = RouteParameter.Optional }
);

// Open the self-hosted server and keep it running.
var server = new HttpSelfHostServer(conf);
server.OpenAsync().Wait();
Console.WriteLine("Listening on " + address);
Console.ReadLine();
ScriptCS has a plug-in mechanism using so-called script packs, as Block explains:
A script pack can offer up namespace imports, references, as well as objects which will be available to the script via the Require API.
The main goal of a script pack is to make authoring scripts using frameworks easier.
As script packs can be installed via NuGet packages, they can easily be discovered and consumed.
Work is in progress to make ScriptCS work on Mono, and adding debugging capabilities to Roslyn is being investigated. Sublime Text has a plug-in for ScriptCS enabling syntax highlighting in a simple editor. Alternatively, Roslyn can be used to provide syntax highlighting for .csx files in Visual Studio.
Block lists the advantages of scripting C# based on his experience with Node.js:
  • No projects, just script: one of the things I love about node.js is you don’t need a project. You can just jump into a folder, start creating js files and go to town.
  • No IDE requirement: you can just use a text editor.
  • Packages over assemblies: in node, when you want to get something you use npm to download the packages. It’s super simple. You just have your app and your local node_modules folder and you are good to go.
  • No compilation: this is a big one. With node, I just run node.exe and my app and it works. I don’t have to first create an executable to run, I just run.
All that is possible with Roslyn and NuGet. ScriptCS still deals with assemblies, “but I don’t have to manage them individually, I just install packages.”
ScriptCS carries an Apache 2 license and is currently not endorsed by Microsoft.