
Category Archives: .NET

Implementing a Concurrent Data Flow in C#

Parallel processing is very common in modern applications that do a significant amount of work. Often that work is best expressed as a set of chained tasks, linked to each other so that processing happens in stages or phases. You do not want to wait for all of the raw input data to finish its initial transformation before the subsequent steps can start; instead, each item should flow to the next stage as soon as it is ready. Pipelines organized this way can also be faster overall, because different stages tend to use different types of resources available in the execution environment.

[Image: parallel workflow model]

To implement a parallel pipeline workflow we can use the Microsoft TPL Dataflow library, which is available as a NuGet package.
[Image: TPL Dataflow NuGet package]

This library provides the ability to define blocks of code that can be linked together and run concurrently as a workflow. The best way to understand it is to look at the sample implementation below, which simulates an image download workflow.

Problem:
Sample Workflow
Imagine we have an application that routinely downloads images from an external provider using the following steps. To make the whole process faster we need to run the steps concurrently.
1. Read the download URL of an image from a data source
2. Download the image from the remote source
3. After each image is downloaded, save it to disk and update the database
Solution:
With the TPL Dataflow library we can abstract these three steps into separate dataflow blocks and link them to each other to create a parallel processing mesh, as shown below. (The actual image download code inside each block is omitted for clarity.)


var cts = new CancellationTokenSource();

// Pressing 'c' cancels the whole workflow
Task.Run(() =>
{
    if (Console.ReadKey().KeyChar == 'c')
        cts.Cancel();
});

var inputBlock = new BufferBlock<DownloadInput>(
    new DataflowBlockOptions
    {
        BoundedCapacity = 5,
        CancellationToken = cts.Token
    });

var downloadBlock = new TransformBlock<DownloadInput, DownloadResult>(
    n =>
    {
        var result = new DownloadResult { DownloadUrl = n.DownloadUrl };
        Console.WriteLine("Downloading {0} image on thread id {1}", n.DownloadUrl, Thread.CurrentThread.ManagedThreadId);
        // Do the actual download here
        Thread.Sleep(TimeSpan.FromMilliseconds(2000)); // image download simulation
        return result;
    },
    new ExecutionDataflowBlockOptions
    {
        MaxDegreeOfParallelism = 4,
        CancellationToken = cts.Token
    });

var outputBlock = new ActionBlock<DownloadResult>(
    s =>
    {
        // Do other work such as updating flags in the database to record that the image has been downloaded
        Thread.Sleep(TimeSpan.FromMilliseconds(200)); // simulation of other work
        Console.WriteLine("Saving image to database {0} on thread id {1}", s.DownloadUrl, Thread.CurrentThread.ManagedThreadId);
    },
    new ExecutionDataflowBlockOptions
    {
        MaxDegreeOfParallelism = 4,
        CancellationToken = cts.Token
    });

inputBlock.LinkTo(downloadBlock, new DataflowLinkOptions
{
    PropagateCompletion = true
});

downloadBlock.LinkTo(outputBlock, new DataflowLinkOptions
{
    PropagateCompletion = true
});
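The DownloadInput and DownloadResult types are not shown in the original snippet; the following is a minimal sketch of what they could look like (the DownloadUrl property follows the usage above, anything else is an assumption).

// Hypothetical DTOs assumed by the dataflow blocks above
public class DownloadInput
{
    // URL of the image to download
    public string DownloadUrl { get; set; }
}

public class DownloadResult
{
    // Carried through so the output block can log and update the database
    public string DownloadUrl { get; set; }

    // Downloaded content (assumption; the simulation never populates it)
    public byte[] ImageBytes { get; set; }
}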

Finally, I used the following code to test-run the workflow.

try
{
    // The producer: posts 20 download requests into the input block.
    // (This runs inside an async method, so 'await' is available below.)
    Parallel.For(0, 20, new ParallelOptions
    {
        MaxDegreeOfParallelism = 4,
        CancellationToken = cts.Token
    },
    i =>
    {
        var downloadInput = new DownloadInput();
        downloadInput.DownloadUrl = string.Format("http://myimagesite.com/{0}", i);
        Console.WriteLine("added {0} to source data on thread id {1}", downloadInput.DownloadUrl, Thread.CurrentThread.ManagedThreadId);
        // SendAsync respects BoundedCapacity; blocking here applies back-pressure to the producer
        inputBlock.SendAsync(downloadInput).GetAwaiter().GetResult();
    });
    inputBlock.Complete();
    await outputBlock.Completion;
    Console.WriteLine("Press ENTER to exit.");
}
catch (OperationCanceledException)
{
    Console.WriteLine("Operation has been canceled! Press ENTER to exit.");
}
Console.ReadLine();

I tested this code in a console project, so the output with the simulated work looked like below.
[Image: console output]

TPL Dataflow blocks also accept cancellation tokens. This is extremely useful when you need to cancel the entire workflow; without it, managing cancellation across all the stages can become difficult.


Posted by on April 24, 2017 in .NET, C#

 


Internationalization Aspects for Web Applications

Web applications that support multiple languages can be challenging to build and maintain. Before building one you should consider several aspects of internationalization.

In summary, it is very often not only about translating texts into other languages, but also about supporting specific cultures/locales.

First of all, there are certain terms that come up whenever support for multiple languages is discussed.

Internationalization (I18n)

In software applications, internationalization (often shortened to "I18n": "I", then eighteen letters, then "n") is the process of planning and implementing applications and services so that they can easily be adapted to specific languages and cultures, a process called localization (described below). It also means making your applications and services "localizable".

In other words, it is the process of changing your software so that it is not hardwired to one language/locale/culture.

 

Localization (l10n)

The process of adding the appropriate resources to your software so that a particular language/locale is supported. This often includes adding language translation files without re-implementing/rebuilding your application.

 

Approaches to Localizing Web Applications

For web applications you can do the translations either on the server side or on the client side. The choice almost always depends on how the web application is built; e.g. if it is a SPA written with AngularJS, then doing the translations on the client side is preferable.

 

Server Side Translations for ASP.NET Applications

The obvious choice is to use the .NET Framework's support for resource files (.resx) in the application and let the framework set CurrentCulture and CurrentUICulture for the current request thread. You need to suffix the resource files with the standard culture acronym, and the framework will select the appropriate resource values. Visual Studio has built-in support for managing resource files.

[Image: resource files in Visual Studio]

In summary, "CurrentCulture" is used for date formatting, number formatting, etc., and CurrentUICulture is used for resource translations. You can configure the settings in the Web.config file as follows; a short C# sketch after the configuration illustrates the difference.

<configuration>
  <system.web>
    <globalization culture="auto:en-US" uiCulture="auto:fr" />
  </system.web>
</configuration>
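As a rough illustration of that difference, here is a small sketch (not from the original post; the Labels resource class referenced in the comment is hypothetical):

using System;
using System.Globalization;
using System.Threading;

// CurrentCulture drives formatting of dates and numbers
Thread.CurrentThread.CurrentCulture = CultureInfo.GetCultureInfo("fr-FR");
Console.WriteLine(DateTime.Now.ToString("d"));   // e.g. 22/04/2016
Console.WriteLine((1234.5).ToString("N1"));      // e.g. 1 234,5

// CurrentUICulture drives which .resx translation set is picked
Thread.CurrentThread.CurrentUICulture = CultureInfo.GetCultureInfo("fr");
// Console.WriteLine(Labels.WelcomeMessage);     // hypothetical generated resource class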

 

Use of "auto" in the globalization setting

We can set the current culture and UI culture with an "auto"-prefixed value as shown above. When "auto" is used, the client browser's settings are used to detect the cultures; if they are not present, the values default to "en-US" and "fr" in the example above. Detection happens through the "accept-language" request header, as shown below. These settings typically come from the operating system or can be changed in the browser settings.

[Image: accept-language request header]

 

Set Current Culture and Current UI Culture Programmatically

For ASP.NET applications the culture settings can be set in Application_BeginRequest or Application_AcquireRequestState. This affects all pages unless it is overridden at page level, as described below.

void Application_BeginRequest(object sender, EventArgs e)
{
  Thread.CurrentThread.CurrentCulture = System.Globalization.CultureInfo.CreateSpecificCulture("en-US");

  Thread.CurrentThread.CurrentUICulture = new System.Globalization.CultureInfo("en-US");
}

Overriding Culture at Individual Page Level (ASP.NET Web Forms)

ASP.NET Web Forms pages have a virtual method called "InitializeCulture" which can be overridden to change the culture settings at page level, as shown below.

protected override void InitializeCulture()
{
    Thread.CurrentThread.CurrentCulture =
        CultureInfo.CreateSpecificCulture("en-US");
    Thread.CurrentThread.CurrentUICulture =
        new CultureInfo("en-US");

    base.InitializeCulture();
}

Client Side Translations

For web applications, client-side translations are often done with the help of JSON files that act as resource files.

Localization for Angular JS

AngularJS has built-in support for culture-specific date/time, currency symbol and number formatting via https://docs.angularjs.org/guide/i18n. To make language translations easier in AngularJS, we can use the https://github.com/angular-translate/angular-translate library.

In brief, in your Angular application's config phase you need to configure the translate provider service that comes with the library with some basic settings, e.g. the path to the language translation resources folder, as shown below.

function configure($logProvider, routerHelperProvider, exceptionHandlerProvider, $translateProvider) {
  $translateProvider
    .addInterpolation('$translateMessageFormatInterpolation')
    .preferredLanguage('en')
    .fallbackLanguage('en')
    .useStaticFilesLoader({
      prefix: '/app/i18n/',
      suffix: '.json'
    });
}

You can maintain the translation JSON files in your project as indicated below.

[Image: translation files folder]

[Image: translation JSON content]

Translation JSON Example (Spanish-es)

Within the UI, a language translation key can be used with the directive provided by the translation library, as indicated below.

[Image: translation key used in UI markup]

Above, "translate" is a directive provided by the angular-translate library, and "Splash_Msg" is a key in the translation file whose value is substituted at run time.

Internationalization Concerns

  • Date Time – Weekday and month name translations should be done to support localization. It is often easier to find and use controls that already support these concerns, or to build your own that does. When date/time period calculations are done, it is often easier to do them against a neutral base culture. An example of a French/English date picker is the Bootstrap UI DateTime Picker (http://plnkr.co/edit/nLw9DMgOLDK1dTIOJq4Q?p=preview). Date/time concerns are probably the most complex and challenging part of localization; time zone conversions in particular are extremely hard to handle accurately in web applications serving different locales.

  • Unit of Measure – This is probably less important, but in some parts of the world specific units or notations are used when displaying data. E.g., in Russia it is normal to use Russian abbreviations (in Cyrillic letters) instead of standard symbols, e.g. кг rather than kg.
  • Monetary conversions – It is often useful to display monetary values with locale-specific currency units or currency symbols. Frameworks like .NET and Angular i18n support currency formatting and symbols, but the biggest concern is exchange rates.
  • Numeric Formats – Some cultures use the same symbols for different purposes, e.g. comma and period are swapped for decimals: 33.3 (English) vs 33,3 (German). Client-side and server-side validation must handle this carefully, and additional coding may be needed (see the short C# sketch after this list).
  • String translations – Translating static string labels is relatively easy compared to culture-specific formatting. Resource files for the different languages are typically used.
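To make the formatting and parsing concerns above concrete, here is a minimal C# sketch (not from the original post):

using System;
using System.Globalization;

class CultureFormattingDemo
{
    static void Main()
    {
        var value = 33.3m;
        var date = new DateTime(2016, 4, 22);

        var en = CultureInfo.GetCultureInfo("en-US");
        var de = CultureInfo.GetCultureInfo("de-DE");

        Console.WriteLine(value.ToString(en));      // 33.3
        Console.WriteLine(value.ToString(de));      // 33,3
        Console.WriteLine(date.ToString("d", en));  // 4/22/2016
        Console.WriteLine(date.ToString("d", de));  // 22.04.2016
        Console.WriteLine(value.ToString("C", de)); // 33,30 €

        // Parsing must be culture-aware too; under en-US the string "33,3" would be read as 333
        decimal parsed = decimal.Parse("33,3", de);
        Console.WriteLine(parsed.ToString(en));     // 33.3
    }
}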

Language/Culture Selections

Culture and language selection can be done in a variety of ways in a web application. It can be detected automatically from request information or set by the user, as briefly described below.

  • Browser settings
    • accept-language HTTP header present in the request headers
    • Location of the client – The remote IP address of the request can be geo-located and the culture set automatically.
  • User selections
    • User selects language – Within the UI we can let the user pick a preferred language and switch the interface language.
    • Store it in the profile – The language and culture can be captured during user registration or on the user profile page.

What to Translate

  • Static Texts – Static text labels are relatively easily factored out to resource files.
  • Application Data – Application data is probably the hardest to translate unless you use a dynamic translation service (e.g. Google Translate). It is difficult and inefficient to store these translations, and rapidly changing data is much harder to maintain in multiple languages than static labels or other fixed text in a web site.
  • Error Messages – Error messages should also be translated to the specific languages; in particular, client-side validation scripts should be written to support multiple languages.
 

Posted by on April 22, 2016 in .NET, AngularJs, ASP.NET, Javascript

 


Limiting the Number of Concurrent Threads With SemaphoreSlim

Coordinating concurrent actions for a predictable outcome, in one word synchronization, is sometimes a challenging task in concurrent programming. There are three main ways of achieving it.

  1. Exclusive locking
  2. Non-exclusive locking
  3. Signaling

Problem
I was working on an external image downloader service that heavily used C# async processing to download thousands of images over the internet. The downloader was so fast, thanks to its async nature, that there were occasional runtime failures due to excessive network use and exceeding the resource capacity of the server.

Using Non-exclusive locking for limiting concurrency
Non-exclusive locking is useful for limiting concurrency, i.e. preventing too many threads from executing a particular function or section of your code at once. I used SemaphoreSlim, which lives in the System.Threading namespace. There are two similar versions of this construct: Semaphore and SemaphoreSlim. The latter was introduced in .NET 4 and is optimized for performance, so it is usually the better choice.

So how does SemaphoreSlim limit concurrency?
The way SemaphoreSlim works is extremely simple to understand, as shown in the diagram below.
[Image: SemaphoreSlim diagram]

The way it works is analogous to a hall that has "N" seats and two doors, one to enter and one to exit. When the hall is full, people have to wait near the entrance door until someone leaves through the exit door. The maximum number of threads that can be active inside the semaphore is limited, and that limit is configurable through the SemaphoreSlim constructor. I have also made it a configurable value in my code.

private SemaphoreSlim _mysemaphoreSlim = new SemaphoreSlim(Configuration.MaxConcurrency);

Following is a very simple method shell of the kind you might use to limit concurrency.

private async Task<bool> AsyncMethod()
{
    // Only N threads can be between Wait and Release at any given time
    this._mysemaphoreSlim.Wait();
    try
    {
        /* Do the other cool things here */
        return true;
    }
    finally
    {
        // Releasing lets one waiting thread (if any) enter
        this._mysemaphoreSlim.Release();
    }
}

 

You could also use _mysemaphoreSlim.WaitAsync() if you need waiting callers to suspend asynchronously rather than blocking a thread (non-blocking synchronization), as in the sketch below.
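A minimal sketch of the async variant, assuming the same _mysemaphoreSlim field and a hypothetical DownloadImageAsync helper:

private async Task<bool> AsyncMethodNonBlocking(string url)
{
    // Suspends asynchronously instead of blocking a thread pool thread
    await this._mysemaphoreSlim.WaitAsync();
    try
    {
        // Only N concurrent downloads are in flight at any time
        await DownloadImageAsync(url); // hypothetical download helper
        return true;
    }
    finally
    {
        this._mysemaphoreSlim.Release();
    }
}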

 

Posted by on February 2, 2016 in .NET, C#

 


Configuring Redis Session State Provider With Azure

Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker. Even though it was originally developed for POSIX systems, it can easily be configured to store session data in .NET with the aid of a couple of NuGet packages and the Windows port that can be downloaded from https://github.com/MSOpenTech/redis.

One scenario where you need to store session state outside the web server itself is when you have a web farm/multiple web servers and subsequent requests from an authenticated user can be routed to different servers, for example via a load balancer. Traditionally this is solved with session state store providers, and using Redis for it is fast and easy if your system is hosted in Azure, which was my case.

To test the implementation locally for development you could do the following:

  1. Download the Redis port for Windows from https://github.com/MSOpenTech/redis. Note: it is not available on the http://redis.io site.
  2. Starting the local Redis server is very easy, as indicated below.

[Image: starting the Redis server]

It will start the Redis server; note the connection details. You can change these by editing the redis.conf file, which is documented on the site.

e.g. https://github.com/MSOpenTech/redis/blob/2.8/redis.conf

[Image: Redis server started]

3. You can use the various commands specified in the project site's documentation, e.g. to view session keys use the command below. (No need to worry about hacking my servers; these are my DEV environment values.)

[Image: Redis KEYS command output]

4. Then you need to install the Redis session state provider NuGet package from https://www.nuget.org/packages/Microsoft.Web.RedisSessionStateProvider/ into your project.

5. One caveat: to support "sliding expiration" for session state you need to make sure you install package version >= 1.5.0. Sliding expiration is typically a must in session state management. The current version at the time of writing is 1.6.5, so make sure you install the latest.

6. To install RedisSessionStateProvider, run the following command in the NuGet Package Manager Console of Visual Studio.

PM> Install-Package Microsoft.Web.RedisSessionStateProvider -Version {latest version number here}

7. Then all you have to do is add the following configuration section to your web.config and change the values accordingly.

[Image: session state configuration in web.config]
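Since the screenshot above did not survive in text form, the provider's configuration section looks roughly like the following (the host, port and accessKey values are placeholders for your own environment):

<sessionState mode="Custom" customProvider="MySessionStateStore">
  <providers>
    <add name="MySessionStateStore"
         type="Microsoft.Web.Redis.RedisSessionStateProvider"
         host="localhost"
         port="6379"
         accessKey=""
         ssl="false" />
  </providers>
</sessionState>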

8. After you have successfully tested it in your development environment you can configure Redis in Windows Azure. Creating a Redis Cache in Azure is super easy, as shown below. At the time of writing it is only supported in the new Azure portal at http://portal.azure.com/; the old portal does not support Azure Redis Cache.

[Images: creating an Azure Redis Cache in the portal]

9. Choose an appropriate pricing tier depending on your data load.

[Image: Azure Redis pricing tiers]

10. It takes a while to create the Redis cache; sometimes more than 10 minutes to get up and running.

[Image: Redis cache being created]

11. After you create the Redis cache you need to configure the access key and connection details for the NuGet package. Selecting the created Redis cache will show the access keys as below.

[Image: Redis cache access keys in the Azure portal]

12. Then all you need to do is enter the access key and host name of the Azure Redis server into web.config. Notice how the connection details are entered; it is quite different from what we used earlier for the local Redis server. Also note that the host name given by Microsoft always ends with redis.cache.windows.net, so your full host name should look like someuniquename.redis.cache.windows.net. This essentially points your session store to the Azure Redis cache.

[Image: Redis connection values in web.config]

Note: Redis Desktop Manager can be used to manage the Redis DB if you prefer a GUI; it can be downloaded at http://redisdesktop.com/.

[Image: Redis Desktop Manager]

 

Posted by on October 20, 2015 in .NET, ASP.NET, ASP.NET MVC, Azure, C#

 


Working with IANA TimeZone Database in .NET

Implementing date/time related functionality can become very tedious if you have an internationalized web application with users in different time zones. The plain "DateTime" type is not sufficient for representing dates and times unless you follow the convention of storing everything in UTC, and even then it introduces a lot of nuances. Although DateTimeOffset represents a point in time unambiguously, it does not say anything about the user's time zone. If the time zone has daylight saving (DST) rules, things get really complicated. This becomes critical for online booking applications, online order processing applications, etc., where points in time must be expressed relative to different time zones in the world.

.NET framework provides two classes for TimeZones.

  1. TimeZone (Obsolete)
  2. TimeZoneInfo

The MSDN documentation at https://msdn.microsoft.com/en-us/library/system.timezone%28v=vs.110%29.aspx has the following warning.

[Image: MSDN warning on the TimeZone type]

The reason System.TimeZone is not useful is that it is limited to the local and UTC time zones, which rules out many scenarios. For this the BCL has a newer class named TimeZoneInfo, introduced in later versions of the .NET Framework. Although it lets you use time zones from around the world, its time zone database comes from Microsoft; it is not based on the IANA TZDB. The IANA TZDB is known to have the following advantages:

  • Historical accuracy since 1970
  • Referenced by RFC and many other standards.
  • TZDB is being frequently updated
  • Widely implemented, either natively or via libraries (which is what I am going to talk about for .NET), in programming languages and DBMSs

At least from an accuracy point of view it is better to use IANA time zones over the native time zones found in .NET. There are many compelling reasons to prefer IANA time zones over Microsoft time zones, even if the latter offer a few cosmetic conveniences. To do so, the library I used is "Noda Time" (http://nodatime.org). You can easily include it in your project via NuGet. Beyond the time zone issue, and perhaps more importantly, it has many other benefits over the native .NET types when doing date/time math.

Noda Time introduces new structs (which effectively act as data types) to represent points in time.

E.g.

public struct ZonedDateTime : IEquatable<ZonedDateTime>, IComparable<ZonedDateTime>, IComparable, IFormattable, IXmlSerializable

These structs fill gaps in the .NET date/time types. For example, .NET does not have a "date only" data type.

Eg.

DateTime.Now.Date

would return a DateTime instance whose time portion is midnight. This causes a number of issues when TZDB rules come into play. Noda Time has time-only and date-only structs, i.e. real data types for these concepts. ZonedDateTime in Noda Time is an all-in-one value that carries the time zone, the offset, the date and the time; nothing equivalent is found natively in the .NET Framework, not even DateTimeOffset. This means it is much safer to do date/time math on these types than on the equivalent .NET-based implementations, which is another benefit you get with Noda Time.
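For example, here is a minimal sketch of Noda Time's date-only and time-only types (the values are arbitrary):

using NodaTime;

// A calendar date with no time-of-day and no time zone attached
LocalDate date = new LocalDate(2015, 6, 2);

// A time-of-day with no date and no time zone attached
LocalTime time = new LocalTime(3, 3);

// They can be combined into a LocalDateTime when needed
LocalDateTime combined = date.At(time);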

Following is how you would associate a particular date and time with an IANA time zone in Noda Time.

LocalDateTime lt = new LocalDateTime(2015, 06, 02, 3, 3);
DateTimeZone tz = DateTimeZoneProviders.Tzdb.GetZoneOrNull("Asia/Colombo");
ZonedDateTime slTime = lt.InZoneLeniently(tz);
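Once a value is zoned, it can be converted unambiguously to other representations; a small follow-up sketch continuing from slTime above (the target zone is arbitrary):

Instant instant = slTime.ToInstant();            // an unambiguous point on the global time line
ZonedDateTime utcTime = instant.InUtc();         // the same instant viewed in UTC
ZonedDateTime parisTime = instant.InZone(DateTimeZoneProviders.Tzdb["Europe/Paris"]);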

Here, LocalDateTime in Noda Time does not refer to the local computer's time; it simply represents the given date and time, equivalent to DateTimeKind.Unspecified in .NET. In the last line I associate it with Sri Lanka's IANA time zone. Note that there are a number of other ways to construct a point in date and time; please refer to the Noda Time documentation. The DateTimeZoneProviders type used above also supports BCL time zones, which means you can adopt Noda Time in a legacy application that uses Microsoft time zones and still get its many other benefits. You will find numerous other reasons to use Noda Time over the native .NET date/time types, especially if you do serious date/time-critical calculations.

 

Posted by on June 3, 2015 in .NET, C#

 


How Extension Methods Are Implemented in .NET

In one of my previous posts I wrote about "Adding Extension Methods to Every Object in C#". So how are extension methods implemented internally?
In essence, extension methods are a compiler trick and the generated code is pretty much an ordinary static method call! To understand this better we need to look at the IL generated for the code. My favorite tools for this task are ILSpy, Telerik's free JustDecompile, or simply ILDasm; I use them interchangeably.

Consider following extension method,

namespace System
{
   using Newtonsoft.Json;

   public static class ObjectExtensions
   {
      public static string ToJson(this Object obj)
      {
         return JsonConvert.SerializeObject(obj);
      }
   }
}

And consider a simple usage of it here:

namespace Extensions.Tests
{
   [TestFixture]
   public class ObjectExtentionTests
   {
      [Test]
      public void Object_Extention_ConvertToJSON_Test()
      {
         var p = new Person() { FirstName = "Dimuthu", LastName = "Perera"};
         Debug.Write(p.ToJson());
      }
   }

   public class Person
   {
      public string FirstName { get; set; }
      public string LastName { get; set; }
   }
}

If you load the built output assembly containing this test code into ILSpy, you will see something like the following. I have highlighted the IL code that calls the "ToJson()" method.
[Images: IL disassembly in ILSpy]
The code is this,

IL_0022: call string [Extensions]System.ObjectExtensions::ToJson(object)

As you can see, this is actually a static method call to the "ToJson" method inside the static System.ObjectExtensions class. In fact, exactly the same IL is generated when you simply write a static method in the ObjectExtensions class and call it with any object as the argument. Extension methods are therefore programmer-friendly "syntactic sugar": the compiler generates exactly the same code for the call as if you had written it as follows. Note that in this version we do not use the "this" keyword, so the method is not an extension method.

namespace System
{
	public static class ObjectExtensions
	{
		public static string ToJson(Object obj)
		{
			return JsonConvert.SerializeObject(obj);
		}
	}
}

So my test method becomes the following:


		[Test]
		public void Object_Extention_ConvertToJSON_Test()
		{
			var p = new Person() { FirstName = "Dimuthu", LastName = "Perera"};
			Debug.Write(ObjectExtensions.ToJson(p));
		}

Note that now we are passing the object as an argument to the static method. If you disassemble this and inspect the IL, you will find that the IL for this call is the same as when we used the extension method.

So how does the compiler know whether a method is an extension method or not? The C# compiler detects it by looking for the "this" keyword on the first method parameter, and in the generated IL it adds metadata to the extension method, as shown below ([System.Core]System.Runtime.CompilerServices.ExtensionAttribute).

.class public auto ansi abstract sealed beforefieldinit System.ObjectExtensions
    extends [mscorlib]System.Object
{
    .custom instance void [System.Core]System.Runtime.CompilerServices.ExtensionAttribute::.ctor() = (
        01 00 00 00
    )
    // Methods
    .method public hidebysig static 
        string ToJson (
            object obj
        ) cil managed 
    {
        .custom instance void [System.Core]System.Runtime.CompilerServices.ExtensionAttribute::.ctor() = (
            01 00 00 00
        )
        // Method begins at RVA 0x208c
        // Code size 12 (0xc)
        .maxstack 1
        .locals init (
            [0] string CS$1$0000
        )

        IL_0000: nop
        IL_0001: ldarg.0
        IL_0002: call string [Newtonsoft.Json]Newtonsoft.Json.JsonConvert::SerializeObject(object)
        IL_0007: stloc.0
        IL_0008: br.s IL_000a

        IL_000a: ldloc.0
        IL_000b: ret
    } // end of method ObjectExtensions::ToJson

} // end of class System.ObjectExtensions

This also enables external code to detect, via reflection, that a method was declared as an extension method; otherwise there would be no way of telling, since the IL call site is just a static method call. This is exactly what reflection-based tools like ILSpy do when decompiling IL: the current version correctly decompiles the IL back to extension method syntax, which is easy thanks to this compiler-generated attribute. That was not the case with an early version of Telerik's decompiler I used previously, which showed extension methods as plain static method calls when decompiling to source code.
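As a small illustration of that reflection check, here is a sketch (not from the original post):

using System;
using System.Reflection;
using System.Runtime.CompilerServices;

class ExtensionMethodCheck
{
    static void Main()
    {
        // Assembly-qualified name taken from the IL above ([Extensions]System.ObjectExtensions)
        MethodInfo method = Type.GetType("System.ObjectExtensions, Extensions")
                                ?.GetMethod("ToJson");

        // The compiler-emitted ExtensionAttribute is what marks it as an extension method
        bool isExtension = method != null &&
                           method.IsDefined(typeof(ExtensionAttribute), inherit: false);

        Console.WriteLine("ToJson is an extension method: {0}", isExtension);
    }
}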

 

Posted by on October 18, 2014 in .NET, C#

 


Adding Extension Methods To Every Object in C#

Extension methods in C# are great tools; they are especially helpful when extending types that are not owned by the caller.
Extension methods provide three major advantages:
1. Centralizing common code by providing a domain-specific language.
2. Extending types without using inheritance or composition.
3. Extending external codebases.
If you need to extend every object, how would you do it? You can extend the "Object" and "Type" classes, since every object in C# inherits from "Object", and "Object" exposes the "GetType()" method which returns a "Type". This means extensions on "Object" and "Type" propagate to all objects, provided you import the correct namespace.
In the following implementation I am writing an extension method to serialize any object to JSON format. This sort of implementation is very useful for cross-cutting concerns like logging.

namespace System
{
	using Newtonsoft.Json;

	public static class ObjectExtensions
	{
		public static string ToJson(this Object obj)
		{
			return JsonConvert.SerializeObject(obj);
		}
	}
}

In the code above I am extending the Object type so that every object can use the method, provided the assembly is referenced; and since I define it in the "System" namespace, that is the only requirement. If you define it in any other namespace, you have to import that namespace to use the extension.
Usage of the JSON conversion is as simple as shown below.

namespace Extensions.Tests
{
	[TestFixture]
	public class ObjectExtentionTests
	{
		[Test]
		public void Object_Extention_ConvertToJSON_Test()
		{
			var p = new Person() { FirstName = "Dimuthu", LastName = "Perera"};
			Debug.Write(p.ToJson());
		}
	}

	public class Person
	{
		public string FirstName { get; set; }
		public string LastName { get; set; }
	}
}

When you access IntelliSense for an object, Visual Studio shows the extension method as an option in the list, as shown below.
[Image: IntelliSense showing the extension method on every object]

The output of the JSON serialization in the code above is the following:

{"FirstName":"Dimuthu","LastName":"Perera"}
 

Posted by on February 28, 2014 in .NET, C#, Visual Studio

 
