Friday, 7 December 2012

Left and Right Outer Join in LINQ Query

Say we have Staff and Department entities as below:
List<Staff> staffs = new List<Staff>() { 
 new Staff { FullName = "Person 1", DepartmentId =1 },
 new Staff { FullName = "Person 2", DepartmentId =1 },
 new Staff { FullName = "Person 3", DepartmentId =1 },
 new Staff { FullName = "Person 4", DepartmentId =2 },
 new Staff { FullName = "Person 5", DepartmentId =2 },
 new Staff { FullName = "Person 6", DepartmentId =3 },
 new Staff { FullName = "Person 7", DepartmentId =3 }
};

List<Department> departments = new List<Department>() {
 new Department { DepartmentId = 1, Name = "Dept One"},
 new Department { DepartmentId = 3, Name = "Dept Three"},
 new Department { DepartmentId = 4, Name = "Dept Four"},
 new Department { DepartmentId = 5, Name = "Dept Five"}
};
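The Staff and Department classes themselves are not shown; their shapes (assumed from the initialisers above) could be sketched as:

```csharp
public class Staff
{
    public string FullName { get; set; }
    public int DepartmentId { get; set; }
}

public class Department
{
    public int DepartmentId { get; set; }
    public string Name { get; set; }
}
```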

To do a Left Outer Join of Staff with Department:
var leftJoinQuery = 
  from staff in staffs
  join dept in departments on staff.DepartmentId equals dept.DepartmentId into joinedStaffDept
  from r in joinedStaffDept.DefaultIfEmpty()
  //select r;  // this returns 'dept' list
  select new { 
    staff.FullName, 
    DeptName = r != null ? r.Name : null
    //DepartmentName = dept != null ?dept.Name : null  // using 'dept' here does not work
  };
Note that after the 'into' clause, the original 'dept' range variable goes out of scope; 'joinedStaffDept' holds the group of matching 'Department' records for each 'Staff'. So in the select we must use 'r' (a 'Department', or null when there is no match) rather than 'dept'.
Below is the result:


To do a Right Outer Join, we need to swap the order of the joined entities:
var rightJoinQuery = 
  from dept in departments
  join staff in staffs on dept.DepartmentId equals staff.DepartmentId into joinedDeptStaff
  from r in joinedDeptStaff.DefaultIfEmpty()
  //select r; // this returns 'staff' list
  select new {
    FullName = r != null ? r.FullName : null, // using 'staff' here does not work
    dept.Name
 };
The result:


Thursday, 15 November 2012

Some Notes about Entity Framework Code First Fluent API on Properties

- By convention, a property named 'Id' or '[Class]Id' will become the generated table's primary key.

- A string property will become an nvarchar(max) column.

- Key properties and value-type properties (any numeric type, DateTime, bool and char) will become non-nullable columns. Reference types (e.g. string and arrays) and nullable value types (e.g. Int16?, int?, decimal?) will yield nullable columns.

- A byte[] property will become a varbinary(max) column.

- Configuring primary key
modelBuilder.Entity<[ClassName]>().HasKey(p => p.[PropertyName]);

- Non-nullable column
modelBuilder.Entity<[ClassName]>().Property(p => p.[PropertyName]).IsRequired();

- Nullable column
modelBuilder.Entity<[ClassName]>().Property(p => p.[PropertyName]).IsOptional();

- Set the maximum length for a property and the generated column
modelBuilder.Entity<[ClassName]>().Property(p => p.[PropertyName]).HasMaxLength([NumberLength]);

- Largest possible length of column's data type
modelBuilder.Entity<[ClassName]>().Property(p => p.[PropertyName]).IsMaxLength();

- Use a fixed-length rather than variable-length data type, e.g. nchar instead of nvarchar
modelBuilder.Entity<[ClassName]>().Property(p => p.[PropertyName]).IsFixedLength();
To extend the fixed data type column use
.IsFixedLength().HasMaxLength([NumberLength])
To have largest possible length of the fixed data type column use
.IsFixedLength().IsMaxLength()
For a string property, we can change the default generated data type (nvarchar) to varchar by using
.IsUnicode(false)

- Use variable length data type
modelBuilder.Entity<[ClassName]>().Property(p => p.[PropertyName]).IsVariableLength();

- Specify the generated column's data type
modelBuilder.Entity<[ClassName]>().Property(p => p.[PropertyName]).HasColumnType("[DataTypeName]");

- Set the property to be used for concurrency checking
modelBuilder.Entity<[ClassName]>().Property(p => p.[PropertyName]).IsConcurrencyToken();

- Set a row version column in the generated table to be used as the concurrency token
modelBuilder.Entity<[ClassName]>().Property(p => p.[PropertyName]).IsRowVersion();
The property must be of type Byte[]. Only one IsRowVersion() property is allowed per class.
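As an illustration, several of the configurations above could be combined in one OnModelCreating override. The Product class and its properties below are made up for this example:

```csharp
using System.Data.Entity;

public class Product
{
    public int ProductId { get; set; }
    public string Code { get; set; }
    public string Notes { get; set; }
    public byte[] RowVersion { get; set; }
}

public class MyContext : DbContext
{
    public DbSet<Product> Products { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // primary key, plus a required fixed-length non-unicode column -> char(10)
        modelBuilder.Entity<Product>().HasKey(p => p.ProductId);
        modelBuilder.Entity<Product>().Property(p => p.Code)
            .IsRequired()
            .IsFixedLength()
            .IsUnicode(false)
            .HasMaxLength(10);

        // nullable variable-length column capped at 100 characters
        modelBuilder.Entity<Product>().Property(p => p.Notes)
            .IsOptional()
            .HasMaxLength(100);

        // row version column used as the concurrency token
        modelBuilder.Entity<Product>().Property(p => p.RowVersion)
            .IsRowVersion();
    }
}
```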


Further reading:
Configuring Properties and Types with the Fluent API

Friday, 2 November 2012

Get Started with Entity Framework Code First

Entity Framework version 5.0.0 is used when writing this post.

The first thing we need to do is add the EntityFramework library to the project. If you are using NuGet, you can run:
PM> Install-Package EntityFramework

Then prepare your POCO classes. An example of POCO classes:
public class Stock
{
    public int StockId { get; set; }
    public int ItemId { get; set; }
    public Int16 Quantity { get; set; }
    public DateTime DateUpdated { get; set; }

    public virtual Item Item { get; set; }
}
and
public class Invoice
{
    public int InvoiceId { get; set; }
    public string Name { get; set; }    
    public string Description { get; set; }
    public decimal TotalPrice { get; set; }
    public DateTime DateSold { get; set; }
    public DateTime DateCreated { get; set; }
    public DateTime DateUpdated { get; set; }

    public virtual ICollection<ItemSelling> Items { get; set; }
}
To allow lazy loading, declare each navigation property as public virtual. For change tracking, also declare the property as public virtual and use ICollection<T> for navigation properties that hold collections. For complete details, please see the MSDN article 'Requirements for Creating POCO Proxies'.

Next, create a context class that inherits from DbContext. Then specify one DbSet<T> property for each of the POCO classes that we have. If you would like to use the Fluent API to configure the POCO classes' properties, override the OnModelCreating() method. See the example below:
public class MyContext : DbContext
{
    . . .

    public DbSet<Stock> Stocks { get; set; }
    public DbSet<Invoice> Invoices { get; set; }

    . . .

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Stock>().HasKey(p => p.StockId);
        modelBuilder.Entity<Stock>().Property(p => p.ItemId).IsRequired();
        modelBuilder.Entity<Stock>().Property(p => p.Quantity).IsRequired();

        modelBuilder.Entity<Invoice>().HasKey(p => p.InvoiceId);
        modelBuilder.Entity<Invoice>().Property(p => p.Name).HasMaxLength(50);
    }
}

If a database connection string has not been specified, EF will look for a local SQL Server Express instance to host the database. If you would like EF to create the database on a specific server instead, you need to specify a connection string and make sure to name it the same as the database context class.
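For example, assuming the context class is named 'MyContext' (the server and database names below are placeholders), the connection string in Web.config or App.config would look like:

```xml
<connectionStrings>
  <add name="MyContext"
       connectionString="Data Source=MYSERVER;Initial Catalog=MyDatabase;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```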

Then we might want to call the Database.SetInitializer() method to determine the behaviour of EF Code First when initialising our database. By default, it will create the database if it does not exist yet, but will not change it afterwards even if the model has changed. In an ASP.NET application, we put this inside the Global.asax.cs file. For instance:
Database.SetInitializer(new DropCreateDatabaseIfModelChanges<MyContext>());
There are three built-in database initialisers available:
- CreateDatabaseIfNotExists (default)
- DropCreateDatabaseIfModelChanges
- DropCreateDatabaseAlways
We can also pass null to the Database.SetInitializer() method to skip the database initialisation process.
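For instance, turning initialisation off completely looks like this (using the MyContext class from the earlier example; the generic argument is needed so the compiler can resolve the overload for a null initialiser):

```csharp
// passing null skips the database initialisation process entirely
Database.SetInitializer<MyContext>(null);
```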

By default, the database will be initialised when the context is used for the first time, for example when the code first tries to retrieve items from an entity set. To perform the database initialisation explicitly without waiting for the context to be used, call:
db.Database.Initialize(false);
For example, you can put this code below in Global.asax.cs
// do the database initialisation explicitly without waiting for the context to be used 
using (var db = new MyContext())
{
    db.Database.Initialize(false);
}

Tuesday, 2 October 2012

Baseless Merge with TFS 2010

When you would like to use the Source Control Merge Wizard in Visual Studio (e.g. by right clicking a branch folder then going to 'Branching and Merging' -> 'Merge') and you find that the intended target is not in the 'Target branch' selection, it means the source does not have any relationship with the target. You can verify this from the 'Branching and Merging' -> 'View Hierarchy' option. This is when we need to do a baseless merge operation.

To perform this operation, we can use the merge command of the TFS command line tool. The syntax is
tf merge /baseless /recursive /version:[versionspec] [source path] [target path] 
- each path can be a physical folder or a source control location.
- [versionspec] can be a changeset, a range of (inclusive) changesets separated by the '~' character, a label, a date or a version. If we don't specify /version then all changes will be merged.

Below are some examples:
- Merge all changes from branch to trunk
D:\My_Project>tf merge /baseless /recursive "$/My Project/branch/one" "$/My Project/trunk" 

- Merge all changesets up to changeset 1000
D:\My_Project>tf merge /baseless /recursive /version:1000 "$/My Project/branch/one" "$/My Project/trunk" 

- Merge only changeset 1000
D:\My_Project>tf merge /baseless /recursive /version:1000~1000 "$/My Project/branch/one" "$/My Project/trunk" 

- Merge changesets 1000 to 1010 inclusively
D:\My_Project>tf merge /baseless /recursive /version:1000~1010 "$/My Project/branch/one" "$/My Project/trunk" 


If it happens that the merge causes a lot of check-outs of unmodified files, we can use the TFS Power Tools to undo those check-outs.
D:\My Project\trunk>tfpt uu /r /noget *
'uu' - undo unchanged
'/r' - recursive
'/noget' - do the operation without getting latest


If this still does not work and leaves a lot of unmodified files checked out, we have to do Undo Changes manually. This can be done through the Pending Changes view: right click on any file in the list then select 'Undo'. The Undo Pending Changes dialog will pop up. Check all the files then click the 'Undo Changes' button. A confirmation dialog with the message '... has changed. Undo check-out and discard changes?' will be shown. Click the 'No to All' button. All unmodified files will then be reverted, while the ones modified by the merge operation stay checked out.


References:
Team Foundation Merge Command on MSDN
Command-Line Syntax (Version Control)

Monday, 1 October 2012

Deferred Execution in LINQ Query Expression or Standard Query Operators Extension Methods

An IQueryable or IEnumerable variable can be used to define a query in either query syntax or method syntax (query syntax is also known as a query expression, while method syntax uses the standard query operator extension methods; see this MSDN article for definitions). However, execution only takes place when the items are actually required.

Methods such as Count(), ToList(), ToArray(), ToDictionary() or ToLookup() iterate through the items and thus execute the query definition. Other methods such as Average(), Sum(), Max() and Min() also execute the query definition.

If any of the methods mentioned above is called twice, the query definition will be executed again to get the result. This execution can be very expensive as the result may be retrieved from a database, across a network, etc. It is done this way so that a fresh result is obtained from the source every time.

To avoid re-executing the query definition of a query expression or standard query operator extension methods, store the result in a collection. When obtaining results from the collection, the query definition won't be run again; the cached/stored result will be used instead.

Below is an example to clarify:
static void Main(string[] args)
{
    List<int> numbers = new List<int> { 1, 2, 3, 4, 5 };

    int invocationsCount = 0;
    Func<int, int> func = number =>
    {
        invocationsCount++;
        return number;
    };

    // 1. define the query expression or extension methods (method invocations)
    IEnumerable<int> selection = from n in numbers
                                    select func(n);
    //IEnumerable<int> selection = numbers.Select(n => func(n));
    Console.WriteLine("1. invocationsCount = {0}", invocationsCount);

    // 2. do a loop
    foreach (var item in selection)
    {
    }
    Console.WriteLine("2. invocationsCount = {0}", invocationsCount);

    // 3. do a count
    selection.Count();
    Console.WriteLine("3. invocationsCount = {0}", invocationsCount);

    // 4. do an average
    selection.Average();                
    Console.WriteLine("4. invocationsCount = {0}", invocationsCount);

    // 5. do another loop
    foreach (var item in selection)
    {
    }
    Console.WriteLine("5. invocationsCount = {0}", invocationsCount);

    // 6. do ToList() and cache it to a collection
    List<int> collection = selection.ToList();
    Console.WriteLine("6. invocationsCount = {0}", invocationsCount);

    // 7. do the loop on the cache collection
    foreach (var item in collection)
    {
    }
    Console.WriteLine("7. invocationsCount = {0}", invocationsCount);


    Console.ReadLine();
}
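Running the snippet above with its five-element list, the expected output is as follows: the query definition runs once per enumeration (adding 5 invocations each time) until the result is cached at step 6, after which the count stays the same.

```
1. invocationsCount = 0
2. invocationsCount = 5
3. invocationsCount = 10
4. invocationsCount = 15
5. invocationsCount = 20
6. invocationsCount = 25
7. invocationsCount = 25
```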

Wednesday, 19 September 2012

Using Windows Server AppFabric Caching Service

AppFabric Caching Service is a distributed, in-memory cache platform spread across multiple systems, developed by Microsoft. It is one of Microsoft's AppFabric services. In its early stages of development, it was referred to by the code name 'Velocity'.

Some useful PowerShell commands:
Get-Cache [-HostName] [-PortNumber]
Without any parameters, this lists information about all caches and regions on the cluster; with the parameters, on the specified cache host.
Get-CacheHost [-HostName] [-PortNumber]
Shows the status of all cache services in the cluster; with the parameters, of the specified host only.
New-Cache
Creates a new named cache. It has a few other optional parameters.
Get-CacheAllowedClientAccounts
Lists all accounts that have permission.
Grant-CacheAllowedClientAccount -Account "DOMAINNAME\username"
Grants permission to an account.
Stop-CacheHost
Stops the specified cache service. The exception message that you will see when it is down: ErrorCode:SubStatus:There is a temporary failure. Please retry later.
Start-CacheCluster
Starts all cache services in the cluster.
Start-CacheHost
Starts the specified cache service.
Restart-CacheCluster
Restarts all cache services in the cluster.


GUI admin tool
There is also a GUI admin tool for managing the cache service: http://mdcadmintool.codeplex.com


Coding guide
To use the cache service in code, we need to add Microsoft.ApplicationServer.Caching.Client and Microsoft.ApplicationServer.Caching.Core to the project's References.

Examples of how to add an object to cache using Add method, Put method or Item property:
http://msdn.microsoft.com/en-us/library/ee790846%28v=azure.10%29.aspx

A simple code example of how to use the cache service:
DataCacheFactory factory = new DataCacheFactory();
DataCache cache = factory.GetCache("test");

string key = "key1";
string value = "value1";
cache.Put(key, value);

var cachedValue = cache.Get(key);
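The 'temporary failure' message mentioned in the PowerShell notes above surfaces in code as a DataCacheException. One defensive approach is to catch it and retry; below is a minimal sketch, assuming DataCacheErrorCode.RetryLater is the relevant error code (the retry count and delay are arbitrary choices):

```csharp
using System.Threading;
using Microsoft.ApplicationServer.Caching;

DataCache cache = new DataCacheFactory().GetCache("test");

const int maxAttempts = 3;
for (int attempt = 1; attempt <= maxAttempts; attempt++)
{
    try
    {
        cache.Put("key1", "value1");
        break;  // success, stop retrying
    }
    catch (DataCacheException ex)
    {
        // only retry on the temporary failure error; rethrow anything else
        if (ex.ErrorCode != DataCacheErrorCode.RetryLater || attempt == maxAttempts)
            throw;
        Thread.Sleep(1000);  // back off before the next attempt
    }
}
```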

For a thorough example that uses various features of the AppFabric caching service, look at the CacheAPISample project in these example projects from Microsoft. Alternatively, download the code from here and run it as a console application.


References:
http://en.wikipedia.org/wiki/AppFabric_Caching_Service

Cache Administration with Windows PowerShell (Windows Server AppFabric Caching)
http://msdn.microsoft.com/en-us/library/ff718177%28v=azure.10%29.aspx

Managing Security (Windows Server AppFabric Caching)
http://msdn.microsoft.com/en-us/library/ff921012%28v=azure.10%29.aspx

Friday, 7 September 2012

Some Notes about Parallel Programming in .NET 4 (and 4.5)

- Try to avoid writing to shared memory objects such as static variables or class properties. Using locks in parallelised code will hurt performance.

- Only use thread-safe methods. Calling non-thread-safe methods in parallel code can cause exceptions and undetected data loss.

- Most of the time, any regular loop that fulfills both of the above requirements can be converted into a parallel loop.

- Keep it simple and avoid over-parallelisation (i.e. unnecessary nested parallelisation). When using parallelisation there is an overhead cost to partition the work and merge the results back.

- If the parallel work is used to populate data into a collection, use one of the thread-safe collection types.
http://msdn.microsoft.com/en-us/library/dd997305.aspx
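For example, a sketch of collecting results from a parallel loop into a ConcurrentBag (the squaring work is just a placeholder):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        var results = new ConcurrentBag<int>();

        // each iteration can add to the bag safely without explicit locking
        Parallel.For(0, 1000, i =>
        {
            results.Add(i * i);
        });

        Console.WriteLine(results.Count); // 1000
    }
}
```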

- Avoid any ordering operation if possible. By default PLINQ does not preserve the ordering of the source sequence.
http://msdn.microsoft.com/en-us/library/dd460677.aspx

- When further operations are needed on a PLINQ result, prefer the ForAll() method over Parallel.ForEach().
For example; use this
source.AsParallel()
      .Where( i => i.SomePredicate() )
      .ForAll( i => i.DoSomething() );
instead of
var filteredItems = source.AsParallel().Where( i => i.SomePredicate() );

Parallel.ForEach(filteredItems, item =>
{
    item.DoSomething();
});

References:
Potential Pitfalls with PLINQ
Parallelism in .NET – Part 2, Simple Imperative Data Parallelism
Parallelism in .NET – Part 6, Declarative Data Parallelism
Parallelism in .NET – Part 8, PLINQ’s ForAll Method

Friday, 17 August 2012

Stored Procedure in NHibernate Part 3 - Populate Two Related Entities Objects

Now we will see how to use a stored procedure in NHibernate to return two entities at the same time. We will continue our work from the previous post (the second post of this series). You can also use the work from the first post of this series; however, you will need to ignore the Component part in the examples below. I chose to continue from the latter post to show that in NHibernate, we can use stored procedures to populate objects of many different types at one time.

First we modify our existing stored procedure to return data to populate objects of the new entity as well:
ALTER PROCEDURE [sec].[spGetRelatedUsers]
 @UserName nvarchar(256),
 @NumberOfRecords int
AS
BEGIN
   -- some processing before the final select
   -- this can include complex processing using temporary table(s), CTE(s), etc...

   SELECT DISTINCT TOP (@NumberOfRecords) U.*
   , R.*
   , A.*
   FROM Users U
   INNER JOIN Roles R ON U.RoleId=R.RoleId
   LEFT JOIN Attributes A ON A.UserId=U.UserId
   -- other joins and conditions
END

Then we create the new entity class:
public class UserAttribute
{
    public int AttributeId { get; set; }
    public User User { get; set; }  // link to the main entity
    public string DisplayName { get; set; }
    public string Code { get; set; }           
}

And its .hbm.xml mapping file:
<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping assembly="MyAssemblyName" namespace="MyNamespace" xmlns="urn:nhibernate-mapping-MyNHibernateVersionNumber">
  <class name="UserAttribute" lazy="false">
    <id name="AttributeId">
      <generator class="identity" />
    </id>
    <many-to-one name="User" column="UserId" not-null="true"/>  <!-- specify relationship to the main entity -->
    <property name="DisplayName" />
    <property name="Code" />
  </class>
</hibernate-mapping>

We also link the new entity from our main entity:
public class User
{
    public long UserId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime? DOB { get; set; }
    public UserRole Role { get; set; }
    public IList<UserAttribute> Attributes { get; set; } // link to the new entity
}

Also modify the main entity's mapping file to specify the relationship with the new entity. In this case we use bag collection mapping:
<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping assembly="MyAssemblyName" namespace="MyNamespace" xmlns="urn:nhibernate-mapping-2.2">
  <class name="User" lazy="false">
    <id name="UserId">
      <generator class="identity" />
    </id>
    <property name="FirstName" />
    <property name="LastName" />
    <property name="DOB" column="DateOfBirth" />
    <component name="Role" class="UserRole">
      <property name="RoleId">
        <column name="RoleId" />
      </property>
      <property name="Name" />
      <property name="Description" />
      <property name="Active" />
    </component>
    <bag name="Attributes">   <!-- specify relationship to the new entity -->
      <key column="UserId" />
      <one-to-many class="UserAttribute" />
    </bag>
  </class>
</hibernate-mapping>

Finally we modify the named query's mapping; add an alias attribute to the existing <return> element then add a new <return-join> element with its alias and property attributes in order to populate the new objects:
<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping assembly="MyAssemblyName" namespace="MyNamespace" xmlns="urn:nhibernate-mapping-MyNHibernateVersionNumber">
  <sql-query name="GetRelatedUsers"> 
    <return alias="U" class="User"></return>  <!-- alias is from the one used in the select query -->
    <return-join alias="A" property="U.Attributes"></return-join>  <!-- alias is from the one used in the select query -->
    exec spGetRelatedUsers :UserName, :NumberOfRecords  
  </sql-query>
</hibernate-mapping>

The code to call the named query remains the same.

Friday, 10 August 2012

Stored Procedure in NHibernate Part 2 - Using Component Mapping

This time we will learn how to use a stored procedure in NHibernate with Component mapping, to extend an entity to contain another type in one of its properties. A Component is an object that is persisted as a value type. Usually it is used to represent objects that are part of an entity.

We will continue the work that we have done on the previous post. We will modify our main entity to have its related one-to-one type as a property. Then we will modify our stored procedure to populate these two objects together.

Firstly, we prepare a new class for the related one-to-one type that will be linked from our main entity:
public class UserRole
{
    public long RoleId { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public bool Active { get; set; }
}

Then we modify our main entity (i.e. User class) to contain the new type as a property:
public class User
{
    public long UserId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime? DOB { get; set; }
    public UserRole Role { get; set; }
}

We also need to change the stored procedure to return the new type in addition to the main type:
CREATE PROCEDURE [sec].[spGetRelatedUsers]
 @UserName nvarchar(256),
 @NumberOfRecords int
AS
BEGIN
   -- some processing before the final select
   -- this can include complex processing using temporary table(s), CTE(s), etc...

   SELECT DISTINCT TOP (@NumberOfRecords) U.*
   , R.*
   FROM Users U
   INNER JOIN Roles R ON U.RoleId=R.RoleId
   -- other joins and conditions
END

Finally we change the main entity mapping file to include the new type by using Component:
<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping assembly="MyAssemblyName" namespace="MyNamespace" xmlns="urn:nhibernate-mapping-2.2">
  <class name="User" lazy="false">
    <id name="UserId">
      <generator class="identity" />
    </id>
    <property name="FirstName" />
    <property name="LastName" />
    <property name="DOB" column="DateOfBirth" />
    <component name="Role" class="UserRole">
      <property name="RoleId">
        <column name="RoleId" />
      </property>
      <property name="Name" />
      <property name="Description" />
      <property name="Active" />
    </component>
  </class>
</hibernate-mapping>

On the next post we will see how to use stored procedure to populate two entities' objects (an entity with its related one-to-many entity).

Friday, 3 August 2012

Stored Procedure in NHibernate Part 1 - Map to Simple Class

In this post, we will see a simple example of how to use a stored procedure in NHibernate. In the coming posts we will learn how to use a stored procedure with Component mapping to populate an entity with its related type, and then how to populate objects of two related entities.

First, prepare the mapping file for the stored procedure. Save this as a .hbm.xml file.
<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping assembly="MyAssemblyName" namespace="MyNamespace" xmlns="urn:nhibernate-mapping-MyNHibernateVersionNumber">
  <sql-query name="GetRelatedUsers">  <!-- the name of named query that will be called by the codes later -->
    <return class="User"></return>  <!-- the type that will be mapped onto -->
    exec spGetRelatedUsers :UserName, :NumberOfRecords  <!-- stored proc name and its parameters -->
  </sql-query>
</hibernate-mapping>

In the <return> element of the named query's mapping file above, we set the return type to a class (i.e. User). We could use <return-property> elements to specify the mappings for the class' properties; however, I prefer to put the mappings inside the class' own mapping file to support future extension and accommodate more complex properties, as we will see in the coming posts. So we create a mapping file for the class:
<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping assembly="MyAssemblyName" namespace="MyNamespace" xmlns="urn:nhibernate-mapping-2.2">
  <class name="User" lazy="false">
    <id name="UserId">
      <generator class="identity" />
    </id>
    <property name="FirstName" />
    <property name="LastName" />
    <property name="DOB" column="DateOfBirth" />
  </class>
</hibernate-mapping>

Then here is the class:
public class User
{
    public long UserId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime? DOB { get; set; }
}

The stored procedure:
CREATE PROCEDURE [sec].[spGetRelatedUsers]
 @UserName nvarchar(256),
 @NumberOfRecords int
AS
BEGIN
   -- some processing before the final select
   -- this can include complex processing using temporary table(s), CTE(s), etc...

   SELECT DISTINCT TOP (@NumberOfRecords) U.*
   FROM Users U
   -- other joins and conditions
END

Lastly, we call the named query from code:
IQuery query = Session.GetNamedQuery("GetRelatedUsers");

//add the parameter(s)
query.SetString("UserName", approverUserName);
query.SetInt32("NumberOfRecords", numberOfRecords);

return query.List<User>();

Friday, 27 July 2012

WCF Binding Configuration for Maximum Data Size

Below is a WCF binding configuration that allows the maximum data size. Please bear in mind not to use these settings on public services, as they will make your application vulnerable to security attacks.
<bindings>
  <wsHttpBinding>
    <binding name="myBindingName" maxReceivedMessageSize="2147483647" 
        closeTimeout="00:10:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00" >
      <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" 
          maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" />
      . . .
    </binding>
  </wsHttpBinding>
</bindings>
The maxReceivedMessageSize, maxDepth, maxStringContentLength, maxArrayLength, maxBytesPerRead and maxNameTableCharCount attributes accept integer values, so we can set them to the maximum integer value, which is 2147483647.

According to the MSDN documentation, increasing this value alone is not enough in ASP.NET compatibility mode. The maxRequestLength attribute of httpRuntime needs to be increased as well. Its default value is 4096 KB, and the maximum value for .NET Framework 2.0 or above is 2,097,151 KB (almost 2 GB).
<system.web>
  <httpRuntime maxRequestLength="2097151" />
  . . .
</system.web>

Note that I also increase the timeout attributes to 10 minutes.


For more information:
webHttpBinding on MSDN
readerQuotas on MSDN

Tuesday, 24 July 2012

How to Use Assert.IsAssignableFrom in xUnit

The Assert.IsAssignableFrom<{type}>({object}) method asserts that {object} can be assigned to {type}; in other words, that {object}'s type is the same as, or derived from, {type}. While the Assert.IsType method tests an object against one exact type, Assert.IsAssignableFrom offers more flexibility.

Let us look at some examples of this. Say that we have these classes:
public class BaseClass{}
public class TestClass : BaseClass{}
public class DerivedClass : TestClass{}

Then we write the unit tests below. The result of each test is noted in a comment.
public class Test
{
    private TestClass testClass;

    public Test()
    {
        testClass = new TestClass();
    }

    [Fact]
    public void test1() //Pass
    {
        Assert.IsType<TestClass>(testClass);
    }

    [Fact]
    public void test2() //Pass
    {
        Assert.IsAssignableFrom<TestClass>(testClass);
    }

    [Fact]
    public void test3() //Pass
    {
        var result = Assert.IsAssignableFrom<BaseClass>(testClass);
        //Assert.IsType<BaseClass>(result); //failed
        Assert.IsType<TestClass>(result);
    }

    [Fact]
    public void test4() //Pass
    {
        var result = Assert.IsAssignableFrom<Object>(testClass);
        //Assert.IsType<Object>(result); //failed
        //Assert.IsType<BaseClass>(result); //failed
        Assert.IsType<TestClass>(result);
    }

    [Fact]
    public void test5() //Failed
    {
        Assert.IsAssignableFrom<DerivedClass>(testClass);
    }
}
Note that when using the implicit type 'var' to hold the result of the Assert.IsAssignableFrom method, the variable's runtime type is still the same as the tested object's type (see test3 and test4).

Saturday, 30 June 2012

xUnit Examples of Testing Controller Actions

We will see some examples of unit tests that use the xUnit testing framework to test controller actions. Let's say we have these controller actions:
public ViewResult Add()
{            
    return View("Add", new StuffViewModel());
}


[HttpPost]
public ActionResult Add(StuffViewModel stuffViewModel)
{
    if (ModelState.IsValid)
    {
        Stuff stuff = mappingService.Map<StuffViewModel, Stuff>(stuffViewModel);
        stuffRepository.InsertOrUpdate(stuff);
        return RedirectToAction("List");
    }
    return View("Add", stuffViewModel);
}
As you can see, both actions use the same view. The first action displays the view (HTTP GET) and the other handles submission from the view (HTTP POST).

Below are the unit tests that cover both actions' functionality:
public class Add
{
    private Mock<IStuffRepository> stuffRepository;
    private Mock<IMappingService> mappingService;
    private StuffsController controller;

    public Add()
    {
        stuffRepository = new Mock<IStuffRepository>();
        mappingService = new Mock<IMappingService>();
        controller = new StuffsController(stuffRepository.Object, mappingService.Object);
    }


    [Fact]
    public void GET_should_return_add_view()
    {
        // Arrange

        // Act
        var result = controller.Add();

        // Assert
        var viewResult = Assert.IsType<ViewResult>(result);
        Assert.Equal("Add", viewResult.ViewName);
    }

    [Fact]
    public void GET_should_have_StuffViewModels()
    {
        // Arrange

        // Act
        var result = controller.Add();

        // Assert
        //Assert.IsAssignableFrom<StuffViewModel>(result.ViewData.Model); 
        Assert.IsType<StuffViewModel>(result.ViewData.Model);
    }

    [Fact]
    public void POST_should_save_to_database_if_model_is_valid()
    {
        // Arrange
        StuffViewModel stuffViewModel = new StuffViewModel { StuffID = 1 };
        Stuff stuff = new Stuff { StuffID = 1};
        mappingService.Setup(m => m.Map<StuffViewModel, Stuff>(It.IsAny<StuffViewModel>()))
                        .Returns(stuff);
                
        // Act
        controller.Add(stuffViewModel);

        //Assert
        stuffRepository.Verify(o => o.InsertOrUpdate(stuff), Times.Once());
    }

    [Fact]
    public void POST_should_redirect_to_list_view_after_saving()
    {
        // Arrange
        StuffViewModel stuffViewModel = new StuffViewModel { StuffID = 1 };
        Stuff stuff = new Stuff { StuffID = 1 };
        mappingService.Setup(m => m.Map<StuffViewModel, Stuff>(It.IsAny<StuffViewModel>()))
                        .Returns(stuff);

        // Act
        var result = controller.Add(stuffViewModel);

        // Assert
        var redirectToRouteResult = Assert.IsAssignableFrom<RedirectToRouteResult>(result);
        Assert.Equal("List", redirectToRouteResult.RouteValues["action"]);
    }

    [Fact]
    public void POST_if_not_valid_should_not_save_into_database()
    {
        // Arrange
        StuffViewModel stuffViewModel = new StuffViewModel { StuffID = 1 };
        Stuff stuff = new Stuff { StuffID = 1 };
        mappingService.Setup(m => m.Map<StuffViewModel, Stuff>(It.IsAny<StuffViewModel>()))
                        .Returns(stuff);
        controller.ModelState.AddModelError("key", "error");

        // Act
        var result = controller.Add(stuffViewModel);

        // Assert
        stuffRepository.Verify(o => o.InsertOrUpdate(stuff), Times.Never());
    }

    [Fact]
    public void POST_if_not_valid_should_return_to_add_view()
    {
        // Arrange
        StuffViewModel stuffViewModel = new StuffViewModel { StuffID = 1 };
        Stuff stuff = new Stuff { StuffID = 1 };
        mappingService.Setup(m => m.Map<StuffViewModel, Stuff>(It.IsAny<StuffViewModel>()))
                        .Returns(stuff);
        controller.ModelState.AddModelError("key", "error");

        // Act
        var result = controller.Add(stuffViewModel);

        // Assert
        var viewResult = Assert.IsType<ViewResult>(result);
        Assert.Equal("Add", viewResult.ViewName);
    }

    [Fact]
    public void POST_if_not_valid_should_return_view_with_StuffViewModel()
    {
        // Arrange
        StuffViewModel stuffViewModel = new StuffViewModel { StuffID = 1 };
        Stuff stuff = new Stuff { StuffID = 1 };
        mappingService.Setup(m => m.Map<StuffViewModel, Stuff>(It.IsAny<StuffViewModel>()))
                        .Returns(stuff);
        controller.ModelState.AddModelError("key", "error");

        // Act
        var result = controller.Add(stuffViewModel);

        // Assert
        var viewResult = Assert.IsType<ViewResult>(result);
        Assert.IsType<StuffViewModel>(viewResult.ViewData.Model);
    }
}

Thursday, 21 June 2012

Mocking AutoMapper in Unit Testing

This post will show how to mock AutoMapper with Moq in unit testing. A unit test that depends on AutoMapper directly would require all of the mapping configurations to be specified and run before the actual mapping (Mapper.Map(...)) takes place. Setting up these configurations is burdensome and should not be part of a unit test. A unit test for a feature should only test that particular feature; it should not exercise other services or functionality. This is where mocking comes in.

To be able to mock AutoMapper, we can use Dependency Injection to inject a mapper interface into the constructor of the calling class rather than using AutoMapper directly. Below is an example of such an interface:
public interface IMappingService
{
    TDest Map<TSrc, TDest>(TSrc source) where TDest : class;
}

Then create an implementation class for the interface. Note that this class uses AutoMapper directly.
public class MappingService : IMappingService
{
    public TDest Map<TSrc, TDest>(TSrc source) where TDest : class
    {
        return AutoMapper.Mapper.Map<TSrc, TDest>(source);
    }
}

Next, bind the interface with the concrete class. Below is an example of how to do it with Ninject:
kernel.Bind<IMappingService>().To<MappingService>();

Then whenever we want to do a mapping, we call the interface's Map method instead of AutoMapper's Mapper.Map() method.
public ViewResult List()
{
    IEnumerable<Stuff> stuffs = stuffRepository.All;

    List<StuffViewModel> model = mappingService.Map<IEnumerable<Stuff>, List<StuffViewModel>>(stuffs);

    return View("List", model);
}
Please remember that mapping configurations need to be specified before we can do any mapping. See this post for how to set up the configurations.
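As a reminder of what that setup looks like, here is a minimal configuration sketch using AutoMapper's static API of that era (registering the maps in Application_Start is an assumption about where your start-up code lives):

```csharp
// Register the maps once at application start-up, before any Map() call.
protected void Application_Start()
{
    AutoMapper.Mapper.CreateMap<Stuff, StuffViewModel>();

    // Optional: throws if any destination member is left unmapped.
    AutoMapper.Mapper.AssertConfigurationIsValid();
}
```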

Now we can mock the mapper in our unit test. The example below uses the xUnit testing framework. First we create a mock instance of the interface, set up what its Map method will return, and then pass the mock object to the constructor of the class under test.
[Fact]
public void ListPageReturnsStuffViewModels()
{
    // Arrange
    Mock<IStuffRepository> stuffRepository = new Mock<IStuffRepository>();
    Mock<IMappingService> mappingService = new Mock<IMappingService>();

    List<Stuff> stuffs = new List<Stuff>();
    stuffRepository.Setup(r => r.All).Returns(stuffs.AsQueryable());

    var viewModelStuffs = new List<StuffViewModel> {
        new StuffViewModel { StuffID = 1/*,
                                Name= "Bip",
                                Description= "Colourful baby bip",
                                DateAdded = DateTime.Now,
                                UserID = 1 */
        },
        new StuffViewModel { StuffID = 2/*,
                                Name= "Socks",
                                Description= "Winter socks with animal figures",
                                DateAdded = DateTime.Now,
                                UserID = 1 */
        }
    };
    mappingService.Setup(m => m.Map<IEnumerable<Stuff>, List<StuffViewModel>>(It.IsAny<IEnumerable<Stuff>>()))
                    .Returns(viewModelStuffs);

    var controller = new StuffsController(stuffRepository.Object, mappingService.Object);


    // Act
    var result = controller.List() as ViewResult;
    //var model = result.ViewData.Model as List<StuffViewModel>;


    // Assert
    var model = Assert.IsType<List<StuffViewModel>>(result.ViewData.Model);
    Assert.Equal(2, model.Count);                
}

Tuesday, 29 May 2012

Some Notes about Finaliser and Dispose() Method in .NET


- A finaliser cannot be called explicitly, so we cannot determine exactly when it will run. The finalisation process (which uses a queue called the f-reachable queue) manages when objects' finalisers are called. Finalisation happens right before garbage collection.

- A finaliser does not accept any parameters and cannot be overloaded. Finalisers also do not support access modifiers.

- Finalisers should only be implemented on objects that use expensive resources. They are used to free up resources that the garbage collector does not know about. Note that finalisers delay garbage collection.

- By implementing IDisposable and its one and only method, Dispose(), we are able to free up resources whenever we want by putting the clean-up code in Dispose(). Combined with a using statement, this gives us an implicit try/finally block that calls Dispose() in the finally block.

- An object with a finaliser should also implement IDisposable.

- The finaliser and Dispose() should run the same clean-up code.

- Avoid letting any unhandled exception occur inside a finaliser; it will be very hard to diagnose because finalisers run on their own thread.

- Dispose() should call System.GC.SuppressFinalize(), which removes the object from the finalisation queue so that it can go to garbage collection immediately. Remember that finalisation must happen before garbage collection, so calling System.GC.SuppressFinalize() ensures the resource clean-up is not run twice.

- The code inside Dispose() or a finaliser should be simple and should not refer to other objects; it should only free up resources.

- If an object's class has a base class that implements a Dispose() method, the derived implementation should call the base class's Dispose().

- The Dispose() method should be safe to call multiple times.

- An object should be marked as unusable after its Dispose() method is called; an ObjectDisposedException should be thrown if the object is used again.
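The notes above can be pulled together into the standard dispose pattern. Below is a minimal sketch (the class and member names are illustrative, not from any particular codebase):

```csharp
using System;

public class ResourceHolder : IDisposable
{
    private bool disposed;   // guards against running the clean-up twice

    // Finaliser: only runs if Dispose() was never called.
    ~ResourceHolder()
    {
        Dispose(false);
    }

    public void Dispose()
    {
        Dispose(true);
        // Remove this object from the finalisation queue so it can be
        // garbage collected immediately and the clean-up is not repeated.
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;          // safe to call multiple times
        if (disposing)
        {
            // free managed resources here (only when called from Dispose())
        }
        // free unmanaged resources here (runs on both paths)
        disposed = true;
        // a derived class overriding this method should call base.Dispose(disposing)
    }

    public void DoWork()
    {
        // the object is unusable after disposal
        if (disposed) throw new ObjectDisposedException("ResourceHolder");
    }
}
```

A using statement (`using (var r = new ResourceHolder()) { r.DoWork(); }`) compiles down to the implicit try/finally that calls Dispose() mentioned above.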


Reference:
Essential C# 4.0 - Mark Michaelis , p 393-400

Thursday, 17 May 2012

Using Remote Attribute Validator in ASP.NET MVC 3

In the past I have written a few posts about validation in ASP.NET MVC 3, including one about creating custom client side validation. There is another way to do client validation in MVC 3 that is easier to implement for standard/common validation checks: the new RemoteAttribute. This attribute allows us to have client side validation that calls a controller action on the back end.

To use the attribute, first we need to prepare an action method:
[HttpPost]
public ActionResult CheckFirstName(string firstname)
{
    return Json(firstname.Equals("firstname"));
}

Then specify the attribute on a field of a model:
[Remote("CheckFirstName", "Account", HttpMethod = "Post", ErrorMessage = "First name is not valid")]
public string FirstName { get; set; }
In this case we pass the action method name, controller name, HTTP method used by the action, and a default error message. Notice also that the action method must accept an input parameter, which receives the field's value. When specifying the Remote attribute we can also use a route name, or action, controller and area names.
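For example, RemoteAttribute has an overload that takes a route name instead of action/controller names; 'CheckFirstNameRoute' here is a hypothetical route registered in Global.asax:

```csharp
// Hypothetical named route pointing at the CheckFirstName action
[Remote("CheckFirstNameRoute", HttpMethod = "Post", ErrorMessage = "First name is not valid")]
public string FirstName { get; set; }
```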

Let us look at another example. Here we have another action method:
[HttpPost]
public ActionResult CheckFullName(string fullName, string firstname, string lastname)
{
    return Json(fullName.Equals(string.Format("{0} {1}", firstname, lastname)));
}

Then on the model class:
[Remote("CheckFullName", "Account", HttpMethod="Post", AdditionalFields="FirstName,LastName", ErrorMessage = "Full name is not valid")]
public string FullName { get; set; }

This time we pass additional fields to the validation method. Notice the AdditionalFields parameter, and the extra parameters on the action method.

One important thing to remember: to avoid the validation result being cached by the browser, we need to add this attribute to the controller action:
[OutputCache(Location = OutputCacheLocation.None, NoStore = true)]

Reference:
The Complete Guide To Validation In ASP.NET MVC 3 - Part 1

Monday, 30 April 2012

Slides - a Slideshow Plugin for jQuery

When I was looking for a simple jQuery slideshow plugin, I came across Slides (http://slidesjs.com). The plugin is simple to implement and seems to be highly customisable. It supports both text and image content elements that can easily be set with HTML and CSS. In addition, it includes pagination as well.

To get started, we need to include jQuery and the javascript library:
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.js" type="text/javascript"></script>
<script src="js/slides.min.jquery.js" type="text/javascript" charset="utf-8"></script>
The file can be obtained from the plugin website http://slidesjs.com

Then prepare some basic html:
<div id='slides'>
    <div class="slides_container">
        <div>
            Content One
        </div>
        <div>
            Content Two
        </div>
        <div>
            Content Three
        </div>
    </div>
</div>
Note that the outer container div's id can be anything, but the inner 'slides_container' class should not be renamed. Then for each slide of the slideshow, we just need to create a separate div. Inside each one, we can put any nested or more complicated HTML we like.

Next, call the slides() function when the document is ready:
$(function () {
    $('#slides').slides();
});
After that, we can put the desired HTML inside each div and style them. We will also need to style the pagination later. An example can be found at http://slidesjs.com/examples/standard/

This is an example that I have:
<div id='slides'>
    <div class="slides_container">
        <div class="slide one">
            <div class='captionone'>
                <h1>Heading one</h1>
                <p>Content one 1 1 1 <a href='/page-one.aspx' class='more'>more</a></p>
            </div>
            <div class="bottomImages">
                <a class='imageone'></a>
                <a class='imagetwo'></a>
                <a class='imagethree'></a>
            </div>
        </div>
        <div class="slide two">
            <div class='captiontwo'>
                <h1>Heading two</h1>
                <p>Content two 2 2 2 <a href='/page-two.aspx' class='more'>more</a></p>
            </div>
            <div class="bottomImages">
                <a class='imageone'></a>
                <a class='imagetwo'></a>
                <a class='imagethree'></a>
            </div>
        </div>
        <div class="slide three">
            <div class='captionthree'>
                <h1>Heading three</h1>
                <p>Content three 3 3 3 <a href='/page-three.aspx' class='more'>more</a></p>
            </div>
            <div class="bottomImages">
                <a class='imageone'></a>
                <a class='imagetwo'></a>
                <a class='imagethree'></a>
            </div>
        </div>
    </div>
 </div>
In my example, I have three slides with background images. Each slide has a text element and three small images at the bottom that have hover styles as well.

Now for the pagination: the slideshow script generates pagination underneath the 'slides_container' div. Below is the HTML added by the script when we have three slides:
<ul class="pagination">
    <li class="current"><a href="#0">1</a> </li>
    <li class=""><a href="#1">2</a> </li>
    <li class=""><a href="#2">3</a> </li>
</ul>

We can style this as we like but we cannot change the HTML structure. Tip: if we would like custom pagination inside or outside the slideshow div, we can use jQuery click events to make our custom pagination elements do the same thing as the built-in pagination when they are clicked. In the following example, I made the bottom images act as my custom pagination:
$('#slides .imageone').click(function () {
    $("ul.pagination li:first-child a").click();
});
$('#slides .imagetwo').click(function () {
    $("ul.pagination li:nth-child(2) a").click();
});
$('#slides .imagethree').click(function () {
    $("ul.pagination li:last-child a").click();
});

There are also some parameters that can be set for the slides() function. For example:
$('#slides').slides({
    preload: true,
    preloadImage: '/images/loading.gif',
    play: 5000,
    pause: 2500,
    hoverPause: true,
    animationStart: function (current) {
        /* do something here */
    },
    animationComplete: function (current) {
        /* do something here*/
    },
    slidesLoaded: function () {
        /* do something here*/
    }
});
Please refer to the website for a list of parameters and their description.

Friday, 20 April 2012

Example of Using JQuery Validation Engine Plugin with ASP.NET Controls

jQuery Validation Engine is a jQuery plugin that provides easy-to-implement validation functionality for your HTML form. It also comes with nice styling and plenty of built-in validators.

To start using this plugin, we need to include these scripts and css:
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.js" type="text/javascript"></script>
<script src="js/jquery.validationEngine-en.js" type="text/javascript" charset="utf-8"></script>
<script src="js/jquery.validationEngine.js" type="text/javascript" charset="utf-8"></script>
<link rel="stylesheet" href="css/validationEngine.jquery.css" type="text/css"/>
These files can be obtained from https://github.com/posabsolute/jQuery-Validation-Engine

There are many built-in validation functions that can be applied directly to an input control via its CSS classes, for example: required, custom, equals, min, max, etc. After the validations are placed, all we need to do is instantiate the validation engine when the document is ready, e.g.:
$("#formId").validationEngine();

We can also work with this plugin more manually by calling showPrompt or hide to show or hide the validation message when an input is invalid or corrected.

Below are some examples of validating ASP.NET controls:

- Single Checkbox

<asp:CheckBox ID="cbSingle1" runat="server" />
var cb1= $('#<%=cbSingle1.ClientID %>:checked').val();
if (!cb1) {
    result = false;
    $('#<%=cbSingle1.ClientID %>').validationEngine('showPrompt', '* This field is required', null, null, true);
} else {
    $('#<%=cbSingle1.ClientID %>').validationEngine('hide');
}


- CheckBoxList

<asp:CheckBoxList ID="cblControl1" runat="server" CssClass="cblControl1Class">
    <asp:ListItem>one</asp:ListItem>
    <asp:ListItem>two</asp:ListItem>
    <asp:ListItem>three</asp:ListItem>
    <asp:ListItem>four</asp:ListItem>
</asp:CheckBoxList>

var cbl1Value = 0;
$('#<%=cblControl1.ClientID %> input[type=checkbox]:checked').each(function () {
    cbl1Value ++;
});
if (cbl1Value == 0) {
    // if nothing is selected
    result = false;
    $('.cblControl1Class').validationEngine('showPrompt', '* This field is required', null, 'topLeft', true);
} else {
    $('.cblControl1Class').validationEngine('hide');
}
Note that here we specify a CSS class in the control's CssClass attribute and use it to show the validation error message. We also pass 'topLeft' for the position; other possible values are 'topRight', 'bottomLeft', 'centerRight' and 'bottomRight'. We can also add X (horizontal) and Y (vertical) offsets to a position value in the format 'position_value:x,y', e.g. 'topRight:30,-10'.


- RadioButtonList

<asp:RadioButtonList ID="rblControl1" runat="server" CssClass="rblControl1Class">
    <asp:ListItem>one</asp:ListItem>
    <asp:ListItem>two</asp:ListItem>
    <asp:ListItem>three</asp:ListItem>
    <asp:ListItem>four</asp:ListItem>
</asp:RadioButtonList>

var rbl1 = $('input[name=<%= rblControl1.ClientID %>]:checked').val();
if (!rbl1) {
    // if nothing is selected
    result = false;
    $('.rblControl1Class').validationEngine('showPrompt', '* This field is required', null, null, true);
} else {
    $('.rblControl1Class').validationEngine('hide');
}
Here we also use a css class to show the validation message.

For further information about this plugin, see http://posabsolute.github.com/jQuery-Validation-Engine
For more examples, see http://www.position-relative.net/creation/formValidator/

Friday, 30 March 2012

Client Side Custom Annotation Validation in ASP.NET MVC 3

In this post, we will see how to implement client side custom data annotation validation in ASP.NET MVC 3. I wrote about server side custom validation in my previous post.

There are a few steps that need to be done to implement client side custom validation:

1. Make our custom validation class (see my previous post for the code example) inherit from IClientValidatable and implement its GetClientValidationRules method.
public class SumIntegers : ValidationAttribute, IClientValidatable
{
    . . .

    public IEnumerable<ModelClientValidationRule> GetClientValidationRules(ModelMetadata metadata, 
          ControllerContext context)
    {
        ModelClientValidationRule rule = new ModelClientValidationRule();

        //specify a name for the custom validation
        rule.ValidationType = "sumintegers";

        //pass an error message to be used
        rule.ErrorMessage = FormatErrorMessage(metadata.GetDisplayName());

        //pass parameter(s) that need to be used when validating
        rule.ValidationParameters.Add("sum", _sum);

        yield return rule;
    }
}
To do this we need to add a reference to the System.Web.Mvc namespace in the class file.

Below is the html generated from the ModelClientValidationRule properties set above.
<input type="text" value="" name="CSVInput" id="CSVInput" 
data-val-sumintegers-sum="20" 
data-val-sumintegers="custom error message for CSVInput" 
data-val="true" class="text-box single-line valid">
As we can see, the properties are put into data-val attributes. The attribute data-val-[validation_name] contains the error message, where [validation_name] is the ValidationType property's value we set above. Each passed parameter is put into data-val-[validation_name]-[parameter_name] attribute.


2. Write a jQuery validation adapter.
The adapter retrieves the data-val attributes with their values and translates them into a format that jQuery validation can understand. This adapter helps us easily implement our unobtrusive client side validation.

The adapter has several methods that we can use:
- addBool - creates an adapter for a validation rule that is 'on' or 'off', it requires no additional parameters
- addSingleVal - creates an adapter for a validation rule that needs to retrieve a single parameter value
- addMinMax - creates an adapter that maps to a set of validation rules, one that checks for a minimum value and the other checks for a maximum value
- add - used to create a custom adapter if we cannot use one of the methods above. We can use this if the adapter requires additional parameters or extra setup code.

In our case, addSingleVal is the best one to use.
// first parameter is the adapter name which should match with the value of ValidationType 
//    property of ModelClientValidationRule set on the server side
// second parameter is the parameter name added to ValidationParameters property of 
//    ModelClientValidationRule on the server side
$.validator.unobtrusive.adapters.addSingleVal("sumintegers", "sum");


3. Write the jQuery validator.
We do this through a method called addMethod that belongs to jQuery validator object.
// first parameter is the validator name which should match with the adapter name 
//    (which is also the same as the value of ValidationType)
// second parameter is the validation function to be invoked
$.validator.addMethod("sumintegers", 

  // the validation function's first parameter is the input value, second is the input element 
  //    and the third one is the validation parameter or an array of validation parameters passed
  function (inputValue, inputElement, sum) {

    var returnValue = true;
    if (inputValue) {
        var total = 0;

        try {
            $.each(inputValue.split(','), function () {
                total += parseInt(this);
            });
        }
        catch (err) {
            returnValue = false;
        }

        if (total != sum) {
            returnValue = false;
        }
    }
    return returnValue;

});

Say we put the scripts from step two and three in a single file called CustomScripts.js. Below is all the scripts that we have written:
/// <reference path="jquery-1.4.4.js" />
/// <reference path="jquery.validate.js" />
/// <reference path="jquery.validate.unobtrusive.js" />

if ($.validator && $.validator.unobtrusive) {

    $.validator.unobtrusive.adapters.addSingleVal("sumintegers", "sum");

    $.validator.addMethod("sumintegers", function (inputValue, inputElement, sum) {
        var returnValue = true;
        if (inputValue) {
            var total = 0;

            try {
                $.each(inputValue.split(','), function () {
                    total += parseInt(this);
                });
            }
            catch (err) {
                returnValue = false;
            }

            if (total != sum) {
                returnValue = false;
            }
        }
        return returnValue;
    });

}
The first three lines are references included so that IntelliSense works in our code. Make sure the paths are correct.


4. Finally, include jquery.validate, jquery.validate.unobtrusive and our custom scripts files on the page to render.
<script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/CustomScripts.js")"></script>


Tip: the first time, this validation fires after the input loses focus, but after that it fires on each key press. This is the default behaviour of the other built-in validators. If you would rather the validation always fire only when the input loses focus, add one of these scripts:
// if only for the specific input field
$(document).ready(function () {
    $("input#CSVInput").keyup(function () {
        return false;
    });
});

// or for all input fields
//    never put this inside document ready function
$.validator.setDefaults({
    onkeyup: false
});

Reference:
Professional ASP.NET MVC 3 - Jon Galloway, Phil Haack, Brad Wilson, K. Scott Allen

Friday, 16 March 2012

Server Side Custom Annotation Validation in ASP.NET MVC 3

In this post we will see an example of server side custom data annotation validation in ASP.NET MVC 3. I will write about the client side one in my next post.

Let's say we would like to create a custom validation using data annotation to check the sum of a list of comma separated integers. If the sum is equal to the sum parameter supplied then the value is valid otherwise an error message will be displayed.

To begin, we need to create a class that derives from ValidationAttribute, the same base class used by the built-in validation annotations. We also need to add a reference to the System.ComponentModel.DataAnnotations namespace in our class file.

When implementing the ValidationAttribute class, we need to override its IsValid method. This method has two parameters: the first is the value to be validated and the second provides more information about the field (context) the value comes from. This method is where we put our validation logic.

Because we want to pass the sum amount to check against as a parameter, in other words to use the validation attribute as something like [SumIntegers(20)], we need to create a constructor accepting that parameter.

In the catch block, we return a hard coded error message when an exception has occurred. We can also have a custom error message when using the validation attribute, through the ErrorMessage property of ValidationAttribute. Say we would like to use our validation attribute as [SumIntegers(20, ErrorMessage="custom error message for {0}")]; to support this we call the FormatErrorMessage method, pass the context's display name to it, then return the message as a ValidationResult.

We also provide a default error message by passing it to the base constructor. If we do not specify a default error message and forget to provide one on the validation attribute, the error message returned will be 'The field [fieldname] is invalid.'

Notice also that there are some attributes above the validation class declaration. The values shown are the defaults used when nothing is specified, so in this case we do not strictly need them. To learn more about System.AttributeUsage, see http://msdn.microsoft.com/en-us/library/tw5zxet9%28v=VS.100%29.aspx

[System.AttributeUsage(System.AttributeTargets.All, AllowMultiple = false, Inherited = true)]
public class SumIntegers : ValidationAttribute
{
    private readonly int _sum;

    public SumIntegers(int sum)
        :base("The sum of integers of {0} are not equal to the one specified.")
    {
        _sum = sum;
    }

    protected override ValidationResult IsValid(
        object value, ValidationContext validationContext)
    {
        if (value != null)
        {
            try
            {
                if (value.ToString().Split(',').Select(n => int.Parse(n.Trim())).Sum() != _sum)
                {
                    var errorMessage = FormatErrorMessage(validationContext.DisplayName);
                    return new ValidationResult(errorMessage);
                }
            }
            catch (Exception)
            {
                return new ValidationResult("Input is not valid");
            }
        }

        return ValidationResult.Success;
    }
}

Finally, the usages:
[SumIntegers(50)]
public string CSVInput { get; set; }

[SumIntegers(20, ErrorMessage="custom error message for {0}")]
public string Numbers { get; set; }

Monday, 5 March 2012

Grouping Data with LINQ

To group data in LINQ, we can use the group ... by ... clause in query syntax or GroupBy() in method syntax. We will go through some examples and explanations throughout this post.


SIMPLE GROUPING
Let's start with simple grouping, below is an example:
// query syntax
var groupedData = from c in context.Customers
                  group c by c.Country;
// method syntax
var groupedData = context.Customers.GroupBy(c => c.Country);
Grouping in LINQ results in an object of type IEnumerable<IGrouping<TKey,TSource>>, which in this case is IEnumerable<IGrouping<String,Customer>>. IGrouping is a special interface that exposes a Key property and is itself an IEnumerable<TSource> holding all the items corresponding to that key.

If we debug the 'groupedData' object, we can see that each group has a 'Key' property and contains the items corresponding to that 'Key' value.

To print out these items on screen:
foreach(var groupedItems in groupedData)
{
    Console.WriteLine(string.Format("Key: {0}", groupedItems.Key));
    foreach (var item in groupedItems)
    {
        Console.WriteLine(string.Format("{0} - {1}", item.CompanyName, item.Country));
    }
    Console.WriteLine("----------------------------------");
}
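The same shape can be observed without a database context, using a plain in-memory collection (the sample data below is made up for illustration):

```csharp
using System;
using System.Linq;

class GroupingDemo
{
    static void Main()
    {
        var customers = new[]
        {
            new { CompanyName = "Alpha", Country = "UK" },
            new { CompanyName = "Beta",  Country = "UK" },
            new { CompanyName = "Gamma", Country = "France" }
        };

        // groups are produced in the order in which keys first appear
        var groupedData = customers.GroupBy(c => c.Country);

        foreach (var groupedItems in groupedData)
        {
            Console.WriteLine("Key: {0} ({1} item(s))",
                groupedItems.Key, groupedItems.Count());
        }
        // Key: UK (2 item(s))
        // Key: France (1 item(s))
    }
}
```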


GROUPING WITH MORE THAN ONE KEY
If we want to have a grouping using two keys, we could use group x by new { x.Key1, x.Key2 } in query syntax or GroupBy( x => new { x.Key1, x.Key2 } ) in method syntax. Below is an example:
// query syntax
var groupedData2 = from c in context.Customers
                   group c by new { c.Country, c.City };
// method syntax
var groupedData2 = context.Customers.GroupBy(c => new {c.Country, c.City});

foreach (var groupedItems in groupedData2)
{
    //note that the Keys' names now become part of Key properties; ie. Key.Country and Key.City
    Console.WriteLine(string.Format("Key: {0} - {1}", groupedItems.Key.Country, groupedItems.Key.City));
    foreach (var item in groupedItems)
    {
        Console.WriteLine(string.Format("{0} - {1} - {2}", item.CompanyName, item.City, item.Country));
    }
    Console.WriteLine("----------------------------------");
}


PROJECTION
Here is an example of projecting the result into anonymous type objects:
// query syntax
var groupedData3 = from c in context.Customers
                   group c by c.Country into grp
                   select new
                   {
                       Key = grp.Key,
                       Items = grp.Select(g => new { g.CompanyName, g.Country })
                   };
// method syntax
var groupedData3 = context.Customers.GroupBy(c => c.Country).
                   Select(grp => new {
                                       Key = grp.Key, 
                                       Items = grp.Select(g => new {g.CompanyName, g.Country})
                                     }
                   );

foreach (var groupedItems in groupedData3)
{
    Console.WriteLine(string.Format("Key: {0}", groupedItems.Key));
    foreach (var item in groupedItems.Items)
    {
        Console.WriteLine(string.Format("{0} - {1}", item.CompanyName, item.Country));
    }
    Console.WriteLine("----------------------------------");
}

Below is another example that projects the result into strong typed objects.
The classes (made simple for demonstration purposes):
public class CompanyViewModel
{
    public string Name { get; set; }
    public string Country { get; set; }
}

public class GroupedCompanies
{
    public string CountryKey { get; set; }
    public IEnumerable<CompanyViewModel> Companies { get; set; }
}
Then the query:
var groupedData4 = from c in context.Customers
                   group c by c.Country into grp
                   select new GroupedCompanies
                   {
                       CountryKey = grp.Key,
                       Companies = grp.Select(g => new CompanyViewModel { Name = g.CompanyName, Country = g.Country })
                   };
foreach (GroupedCompanies groupedItems in groupedData4)
{
    Console.WriteLine(string.Format("Key: {0}", groupedItems.CountryKey));
    foreach (CompanyViewModel item in groupedItems.Companies)
    {
        Console.WriteLine(string.Format("{0} - {1}", item.Name, item.Country));
    }
    Console.WriteLine("----------------------------------");
}


GROUPING WITH MORE THAN ONE KEY + PROJECTION
Finally this example shows a combination of grouping with two keys and projection:
// query syntax
var groupedData5 = from c in context.Customers
                   group c by new { c.Country, c.City } into grp
                   select new
                   {
                       Key = grp.Key,
                       Items = grp.Select(g => new { g.CompanyName, g.City, g.Country })
                   };
// method syntax
var groupedData5 = context.Customers.GroupBy( c => new {c.Country, c.City} ).
                   Select( grp => new {
                                       Key = grp.Key, 
                                       Items = grp.Select(g => new {g.CompanyName, g.City, g.Country})
                                      }
                   );
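The composite-key grouping above can also be tried without a database, against in-memory objects (LINQ to Objects). Below is a minimal sketch; the sample customers are made up for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var customers = new List<Customer>
{
    new Customer { CompanyName = "Alpha", Country = "UK", City = "London" },
    new Customer { CompanyName = "Beta",  Country = "UK", City = "London" },
    new Customer { CompanyName = "Gamma", Country = "UK", City = "Leeds"  }
};

// Group by the anonymous composite key { Country, City }
var grouped = customers
    .GroupBy(c => new { c.Country, c.City })
    .Select(grp => new
    {
        grp.Key,
        Items = grp.Select(g => new { g.CompanyName, g.City, g.Country })
    })
    .ToList();

// Two groups: (UK, London) with two customers and (UK, Leeds) with one
Console.WriteLine(grouped.Count);            // 2
Console.WriteLine(grouped[0].Items.Count()); // 2

public class Customer
{
    public string CompanyName { get; set; }
    public string Country { get; set; }
    public string City { get; set; }
}
```

Note that two anonymous keys compare equal when all their members are equal, which is what makes the composite grouping work.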

Thursday, 16 February 2012

Adding Abstract Entity in Entity Framework

This post will show how to create an Abstract entity and its derived entities in Entity Framework 4.1.

Suppose we have an 'Employee' table, shown below, that stores two different types of employees: staff and managers. For simplicity: staff work under managers; both staff and managers have 'FirstName' and 'LastName'; only staff have 'DeskNumber' while managers have 'OfficeRoomNumber'. They are differentiated by a 'Type' flag. We can see that the 'Table per Hierarchy' (TPH) style is used here.

We would like to map staff and managers to the 'Employee' table in the Entity Framework designer. To implement this 'Table per Hierarchy' style, we will create an abstract Employee entity with Staff and Manager entities derived from it.

Assume we create Employee entity from scratch:
1. Right click an empty space on designer
2. Select Add > Entity
3. Type 'Employee' as Entity name
4. Leave Base type as '(None)'
5. On the 'Key Property' section, type 'EmployeeID' as the Property name. This is the same name as the primary key column name in the database table
6. Click 'OK' and a new entity called 'Employee' is added
7. Right click the entity then select Properties
8. Change the 'Abstract' value to 'True'

Then we create the child entities:
1. Right click an empty space on designer
2. Select Add > Entity
3. Type 'Staff' as Entity name
4. On the Base type dropdown select 'Employee'. This will make the 'Key Property' section disabled.
5. Click 'OK' then the new entity is added
6. Repeat the same process to add 'Manager' entity

Next we need to add the properties that are common to both derived entities on the abstract entity. In this case we need to add 'FirstName' and 'LastName' as Scalar Properties on the Employee entity. Make sure to modify the 'Type' and 'MaxLength' values of the properties according to their data types in the database.

After we add the common properties, we need to add properties that are specific to the derived entities. Add 'DeskNumber' Scalar Property to Staff entity and 'OfficeRoomNumber' Scalar Property to Manager entity. Right click each of the newly added Scalar Properties then select Properties. Modify the 'Type' of the properties and make sure the 'Nullable' value is set to 'True'.

After doing all of those, we will have this:

Finally we need to map the entities to the table in database. First we map the abstract entity.
1. Right click 'Employee' entity then select Table Mapping
2. Click '<Add a Table or View>' then select 'Employee' from dropdown
3. Under 'Column Mappings' you will see all the columns from the 'Employee' table in the database displayed on the left side, while the entity properties are displayed on the right side. All matched properties are automatically mapped.

The derived entities need to be mapped as well. They will be mapped to the same table as the abstract entity.
1. Right click 'Staff' entity then select Table Mapping
2. Click '<Add a Table or View>' then select 'Employee' from dropdown
3. Click '<Add a Condition>' then select 'Type' from dropdown
4. Type 'S' as the 'When Type' value
5. Do the same process with the 'Manager' entity, but with 'M' as the 'When Type' value
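For comparison, the same TPH mapping can be expressed in Code First with the fluent API. This is only a sketch; the 'CompanyContext' name is an assumption, and it relies on the EF 4.1 `Requires`/`HasValue` discriminator configuration:

```csharp
using System.Data.Entity;

public abstract class Employee
{
    public int EmployeeID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class Staff : Employee
{
    public int? DeskNumber { get; set; }
}

public class Manager : Employee
{
    public int? OfficeRoomNumber { get; set; }
}

// Hypothetical context name for this sketch
public class CompanyContext : DbContext
{
    public DbSet<Employee> Employees { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Map both derived types to the shared 'Employee' table,
        // discriminated by the 'Type' column ('S' for Staff, 'M' for Manager)
        modelBuilder.Entity<Staff>()
            .Map(m => { m.Requires("Type").HasValue("S"); m.ToTable("Employee"); });
        modelBuilder.Entity<Manager>()
            .Map(m => { m.Requires("Type").HasValue("M"); m.ToTable("Employee"); });
    }
}
```

Note that the discriminator column configured via `Requires` must not also be mapped as a property on the entities.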

Thursday, 2 February 2012

Adding Complex Type in Entity Framework

This post will show how to create and add Complex Type to entity class in Entity Framework 4.1.

Let's say that we have a Customer table that looks like the following:
We can see that the table stores billing and shipping address information. The two addresses have the same structure and data types, and other tables in the same database might have similar address columns as well. Therefore, instead of handling each address part individually for every address, we can use a common type for these addresses. In Entity Framework this is called a Complex Type.

So now we are going to add 'AddressInfo' Complex Type. Here are the steps to do that:
1. Right click an empty space on designer
2. Select Add > Complex Type
3. A new complex type is added, rename it to 'AddressInfo'
4. To add its properties, right click the complex type then select Add > Scalar Property > String
5. A new property is added, rename it to 'Address'
6. Then right click 'Address' and select Properties
7. Change the 'Max Length' value to 150
8. Repeat steps 4-7 above to add 'Suburb', 'Country' and 'PostCode'

After adding the Complex Type, we need to add that to the 'Customer' entity. Say that we want to create 'Customer' entity manually then add the Complex Type to it:
1. Right click an empty space on designer
2. Select Add > Entity
3. Type 'Customer' as Entity name
4. On the 'Key Property' section, type 'CustomerID' as the Property name. This is the same name as the primary key column name in the database table
5. Click 'OK' then a new entity is added
6. To add 'FirstName' property, right click the entity then select Add > Scalar Property
7. A new property is added, rename it to 'FirstName'
8. Right click 'FirstName' then select Properties
9. Change the 'Max Length' value to 50
10. Repeat the steps 6-9 to add 'LastName' property
11. To add 'BillingAddress', right click the entity then select Add > Complex Property
12. A new property is added, rename it to 'BillingAddress'
13. Right click 'BillingAddress' then select Properties
14. Make sure the 'Type' is 'AddressInfo'
15. Repeat steps 11-14 to add 'ShippingAddress'

Finally we need to map the entity and its Complex Type and Scalar properties to the actual columns in the database.
1. Right click 'Customer' entity then select Table Mapping
2. Click '<Add a Table or View>' then select 'Customer' from dropdown
3. Under 'Column Mappings' you will see all the columns from the 'Customer' table in the database displayed on the left side, while the entity properties are displayed on the right side. All matched properties are automatically mapped; however, our Complex Type properties are not recognized.
4. Click on the 'Value / Property' column of 'BillingAddress' then select 'BillingAddress.Address'
5. Repeat the process for other columns that will be mapped to Complex Type properties. When we have done all the mappings we will have this:
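The same complex type can be defined in Code First as well. Below is a hedged sketch; the 'ShopContext' name and the 'BillingAddress' column name are assumptions for illustration only:

```csharp
using System.Data.Entity;

// The reusable address type; it has no key property, so EF treats it as a complex type
public class AddressInfo
{
    public string Address { get; set; }
    public string Suburb { get; set; }
    public string Country { get; set; }
    public string PostCode { get; set; }
}

public class Customer
{
    public int CustomerID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public AddressInfo BillingAddress { get; set; }
    public AddressInfo ShippingAddress { get; set; }
}

// Hypothetical context name for this sketch
public class ShopContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Register the complex type explicitly (it is usually also picked up by convention)
        modelBuilder.ComplexType<AddressInfo>();

        // By default the columns would be named e.g. 'BillingAddress_Address';
        // map a complex property member to a specific column name like this:
        modelBuilder.Entity<Customer>()
            .Property(c => c.BillingAddress.Address)
            .HasColumnName("BillingAddress");
    }
}
```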

Monday, 30 January 2012

How to Get Enum Elements' Values and Descriptions

Below is an example of how to populate a collection from an enumeration's elements' values and Description attributes (or names, if Description attributes do not exist).

Say that we have the following Enum:
public enum EnumStatus
{
    [Description("Not Submitted")]
    NotSubmitted = 0,

    Requested = 1,

    [Description("Pending Approval")]
    PendingApproval = 2,

    Approved = 3,

    Rejected = 4
}

Then the code to get each element's value and Description attribute (or name) is:
var type = typeof(EnumStatus);
var myCollections = new List<object>();
foreach (var field in type.GetFields().Where(f => f.FieldType == type))
{
    var attribute = Attribute.GetCustomAttribute(field, typeof(System.ComponentModel.DescriptionAttribute)) 
        as System.ComponentModel.DescriptionAttribute;
    var value = field.GetValue(null); // enum fields are static, so no instance is needed
    myCollections.Add(
        new
        {
            Id = (int)value,
            Name = attribute != null ? attribute.Description : value.ToString() // or Enum.GetName(typeof(EnumStatus), value)
        }
    );
}
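An alternative sketch that iterates Enum.GetValues instead of reflecting over the fields directly (same EnumStatus enum as above; the GetDescription helper name is mine):

```csharp
using System;
using System.ComponentModel;
using System.Linq;

var items = Enum.GetValues(typeof(EnumStatus))
    .Cast<EnumStatus>()
    .Select(v => new { Id = (int)v, Name = GetDescription(v) })
    .ToList();

foreach (var item in items)
    Console.WriteLine("{0} - {1}", item.Id, item.Name);

// Returns the Description attribute text if present, otherwise the element name
static string GetDescription(Enum value)
{
    var field = value.GetType().GetField(value.ToString());
    var attribute = (DescriptionAttribute)Attribute.GetCustomAttribute(
        field, typeof(DescriptionAttribute));
    return attribute != null ? attribute.Description : value.ToString();
}

public enum EnumStatus
{
    [Description("Not Submitted")]
    NotSubmitted = 0,

    Requested = 1,

    [Description("Pending Approval")]
    PendingApproval = 2,

    Approved = 3,

    Rejected = 4
}
```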

Friday, 20 January 2012

Windows Installer Issue when Installing over Previous Version Application

Recently I worked on a Windows Setup project to deploy a new version of an existing application. The application and installer were developed using Visual Studio 2008; I used Visual Studio 2010 to work on the project. All of the projects, including the Windows Setup project, still use the same .NET framework and prerequisites.

When I tried to install the application (it has a setup.exe, an msi and some other files that had been packaged together using IExpress) over the older version on a machine, a few issues occurred even though the installer said that the installation was successful.

The issues were that the application's shortcuts disappeared and all unmodified files in the installation directory were gone; only the modified ones were still there. By unmodified files I mean the files that existed in both versions and had not been updated/changed in the new version.

According to these articles: http://connect.microsoft.com/VisualStudio/feedback/details/559575/problem-with-installing-and-removing-previous-versions-after-upgrading-my-setup-project-to-vs2010 and http://social.msdn.microsoft.com/Forums/en-US/winformssetup/thread/b87f1aea-d15a-484b-8cdc-0d212784f941/, the problem occurs because all of the files' component GUIDs change when the setup project is migrated from Visual Studio 2008 to Visual Studio 2010. A workaround is to re-sequence 'RemoveExistingProducts' right after 'InstallInitialize' in the installation sequence table of the application's msi file.

Here is the detail of the process:
1. Use 'Orca' to open the new version application's msi file. 'Orca' is a tool for creating and editing Windows Installer packages and merge modules. 'Orca' can be downloaded from http://msdn.microsoft.com/en-us/library/windows/desktop/aa370557%28v=vs.85%29.aspx.
2. Right click the msi file then select 'Edit with Orca'.
3. Select 'InstallExecuteSequence' table on the left pane window.
4. Then on the right panel, find 'RemoveExistingProducts' in the 'Action' column, see the blue colour highlighted row.
5. Double click the 'Sequence' value (the yellow colour highlighted cell) then change the value to 1525.
6. Save the changes.

Then try the installation again.

Friday, 13 January 2012

Template of a Stored Procedure with Savepoint

A savepoint is used for selective rollback: a transaction can roll back to a location that has been marked, instead of rolling back entirely. After rolling back to a savepoint, the transaction must still be completed with 'COMMIT TRANSACTION' or rolled back altogether.

Savepoint names should be unique, even though duplicates are allowed. If a rollback occurs where there is a duplicate name, the transaction is rolled back to the most recent savepoint with that name.

CREATE PROCEDURE [Procedure_Name]
AS
BEGIN

-- generate a unique savepoint name by appending procedure name (OBJECT_NAME(@@procid)) and nested level (@@nestlevel)
-- we could also use only @@nestlevel as it will always be unique in an active connection
-- savepoint name's maximum length is limited to 32 characters only
DECLARE @savepoint NVARCHAR(32) = CAST (OBJECT_NAME(@@procid) AS NVARCHAR(29)) +
           CAST (@@nestlevel AS NVARCHAR(3))

-- this is to check whether nested transactions exist when entering this procedure,
--  the value will be used later for checking condition
DECLARE @entryTrancount INT = @@trancount

BEGIN TRY
 BEGIN TRANSACTION
 SAVE TRANSACTION @savepoint
 
 --do something here
 
 COMMIT TRANSACTION
END TRY
BEGIN CATCH
 -- transaction is uncommittable (XACT_STATE() = -1) and no nested transactions exist (@entryTrancount = 0)
 IF XACT_STATE() = -1 AND @entryTrancount = 0
  ROLLBACK TRANSACTION
 -- otherwise if transaction is committable
 ELSE IF XACT_STATE() = 1    
  BEGIN
   ROLLBACK TRANSACTION @savepoint
   COMMIT TRANSACTION
  END
   
 DECLARE @ERROR_MESSAGE NVARCHAR(4000)
 SET @ERROR_MESSAGE = 'Error occurred in procedure ''' + OBJECT_NAME(@@procid)
       + ''', Original Message: ''' + ERROR_MESSAGE() + ''''
 RAISERROR (@ERROR_MESSAGE, 16, 1)
 RETURN -100
END CATCH
END

According to MSDN, XACT_STATE function returns three values:
1 - The current request has an active user transaction. The request can perform any actions, including writing data and committing the transaction.
0 - There is no active user transaction for the current request.
-1 - The current request has an active user transaction, but an error has occurred that has caused the transaction to be classified as an uncommittable transaction. The request cannot commit the transaction or roll back to a savepoint; it can only request a full rollback of the transaction. The request cannot perform any write operations until it rolls back the transaction. The request can only perform read operations until it rolls back the transaction. After the transaction has been rolled back, the request can perform both read and write operations and can begin a new transaction.

Both the XACT_STATE and @@TRANCOUNT functions can be used to detect whether the current request has an active user transaction. @@TRANCOUNT cannot be used to determine whether that transaction has been classified as an uncommittable transaction. XACT_STATE cannot be used to determine whether there are nested transactions.


References and further reading:
http://msdn.microsoft.com/en-us/library/ms188378%28v=SQL.105%29.aspx
Pro SQL Server 2008 Relational Database Design and Implementation - Louis Davidson
http://msdn.microsoft.com/en-us/library/ms189797.aspx
http://dosql.com/cms/index.php?option=com_content&view=article&id=101:trancount-and-xactstate&catid=40:microsoft-sql-server&Itemid=41