
Limiting the Number of Concurrent Threads With SemaphoreSlim

Coordinating concurrent actions for a predictable outcome (in one word, synchronization) is sometimes a challenging task in concurrent programming. There are three main ways of achieving it:

  1. Exclusive locking
  2. Non-exclusive locking
  3. Signaling

Problem
I was working on an external image downloader service that relied heavily on C# async processing to download thousands of images over the internet. Because of its async nature the downloader issued requests faster than the infrastructure could handle, so there were occasional runtime failures caused by excessive network use and exhausted server resources.

Using Non-exclusive locking for limiting concurrency
Non-exclusive locking is useful for limiting concurrency, that is, preventing too many threads from executing a particular function or section of your code at once. I used SemaphoreSlim, which lives in the System.Threading namespace. There are two similar versions of this construct, Semaphore and SemaphoreSlim; the latter was introduced in .NET 4 and is optimized for performance, so it is the better choice when you do not need cross-process synchronization.

So how does SemaphoreSlim limit concurrency?
The way SemaphoreSlim works is very simple to understand, as shown in the diagram below.

[Image: SemaphoreSlim diagram]

The way it works is analogous to a hall that has "N" seats and two doors, one to enter and one to exit. When the hall is full, people have to wait near the entrance door until someone leaves through the exit door. In the same way, the maximum number of threads that can be active inside the semaphore is limited, and that limit is set in the SemaphoreSlim constructor. In my case I moved it out to a configuration value.

private SemaphoreSlim _mysemaphoreSlim = new SemaphoreSlim(Configuration.MaxConcurrency);

Following is a very simple method shell showing how you might limit concurrency.

private async Task<bool> AsyncMethod()
{
    /* Only N threads can be between Wait and Release at any given time */
    this._mysemaphoreSlim.Wait();

    try
    {
        /* Do the other cool things here */
        return true;
    }
    finally
    {
        /* After finishing your work, release the semaphore; this allows
           another waiting thread (if any) to enter */
        this._mysemaphoreSlim.Release();
    }
}

 

You could also await

 _mysemaphoreSlim.WaitAsync()

if you need callers to wait asynchronously, without blocking a thread, until a slot becomes free.
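
As a minimal sketch of the full throttling pattern (the DownloadAsync helper and the urls parameter are illustrative, not from the original service):

private async Task DownloadAllAsync(IEnumerable<string> urls)
{
    var tasks = urls.Select(async url =>
    {
        // Asynchronously wait for a free slot; no thread is blocked while waiting
        await this._mysemaphoreSlim.WaitAsync();
        try
        {
            await DownloadAsync(url); // illustrative download helper
        }
        finally
        {
            // Always release so another queued download can start
            this._mysemaphoreSlim.Release();
        }
    });

    await Task.WhenAll(tasks);
}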

 
Posted on February 2, 2016 in .NET, C#

Automatically Restart Node For Application Changes With Grunt

When you are building applications with Node, it can be very annoying to have to restart the Node.js server every time you change a JS file. Each time you change a server-side (non-public) JS file you need to restart the server for those changes to be visible on the client side (browser). This is not necessary for static resources.

One quick way to avoid this workflow is to use Grunt to restart your server whenever one of your JS files changes. If you are new to it, Grunt is a Node package, essentially a build tool, that can be used to run various build-time tasks in your applications.

The first step is installing the latest version of Grunt in your project. Make sure you save it as a dev dependency rather than a regular dependency.

npm install grunt --save-dev

Secondly, make sure the Grunt command-line client is installed globally.

npm install grunt-cli -g

To monitor server-side JS changes you need to install the grunt-nodemon package.

npm install grunt-nodemon --save-dev

In your Node.js project root you need to create gruntfile.js, which acts as the configuration file for Grunt.

[Image: node project structure]

You can enter the grunt file content as follows.

module.exports = function(grunt) {

  grunt.initConfig({
    nodemon: {
      all: {
        script: 'app.js',
        options: {
          watchedExtensions: ['js']
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-nodemon');
  grunt.registerTask('default', 'nodemon');
};

Your application's entry-point JS file should be set as nodemon's "script" config property value, which in my case was "app.js". Once you have done this, all you have to do is type "grunt" in your terminal.

[Image: node autorestart]

You can enter "rs" to restart Node manually at any time. Whenever you change a server-side (non-public) JS file you will see in the terminal window that nodemon restarts the Node server for you, sparing you the annoyance of doing it manually.

[Image: node autorestart 2]

 
Posted on December 25, 2015 in ASP.NET, NodeJs

Fixing Canvas Image Rendering Issues With Cross Origin Resource Sharing (CORS)

I was working on integrating the Photo Sphere Viewer library, which renders 3D images, into a 3D image gallery. Internally this library relies heavily on Three.js, a 3D-capable JavaScript library. Initially everything was working fine, but when we accessed images from a CORS-enabled domain, canvas images were not rendered and no errors were logged in the console. One issue was that the library internally fetches the image source via an Ajax request and offers no option to set "withCredentials"; I simply extended the library to enable that option. More importantly, the other issue was that by default the library sets the "crossorigin" attribute to "anonymous" in its canvas image renderer.
Following is the relevant code from https://github.com/JeremyHeleine/Photo-Sphere-Viewer/blob/master/photo-sphere-viewer.js#L362:

// CORS when the panorama is not given as a base64 string
if (!this.config.panorama.match(/^data:image\/[a-z]+;base64/)) {
    loader.setCrossOrigin('anonymous');
}

Again I extended the library a little to expose this as an option in the initialization configuration object, and setting it to the appropriate value fixed my issue. I will share the forked library with my fixes. This issue is worth noting because it is likely to be encountered in other canvas image-processing libraries as well. There is nothing wrong with these libraries themselves; it is the CORS handling of images that causes issues (depending on your requirements), and finding the cause can be tedious, hence this blog post.
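
As a rough sketch of that change (the cross_origin option name is hypothetical; the actual fork may expose it differently), the hard-coded value becomes configurable:

// CORS when the panorama is not given as a base64 string
if (!this.config.panorama.match(/^data:image\/[a-z]+;base64/)) {
    // Fall back to 'anonymous' when no cross-origin option is supplied
    loader.setCrossOrigin(this.config.cross_origin || 'anonymous');
}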

The HTML specification now has a crossorigin attribute for resources, which allows images loaded from foreign origins to be used in a canvas as if they had been loaded from the current origin (W3C source: http://dev.w3.org/html5/spec-preview/urls.html#cors-enabled-fetch).

You can still draw cross-origin images that are served without CORS headers, but doing so "taints" the canvas: you can no longer read data back out of it (for example via toDataURL or getImageData). This is a security measure that stops sites from using images to pull private information from remote web sites without permission.
So in summary, CORS-enabled images requested with the appropriate crossorigin setting keep the canvas untainted, so its contents can be read and reused as described above.

var img = new Image(),
    canvas = document.createElement("canvas"),
    ctx = canvas.getContext("2d"),
    src = "http://mydomain.com/image"; // image url goes here

img.crossOrigin = "use-credentials"; // or "anonymous" (which is what this post is focused on)

img.onload = function() {
    canvas.width = img.width;
    canvas.height = img.height;
    ctx.drawImage(img, 0, 0);
};
img.src = src;

 
Posted on November 30, 2015 in ASP.NET

Configuring Redis Session State Provider With Azure

Redis is an open-source (BSD-licensed), in-memory data structure store, used as a database, cache and message broker. Even though it was originally developed for POSIX systems, it can easily be configured to store session data in .NET with the aid of a couple of NuGet packages and the Windows port that can be downloaded from https://github.com/MSOpenTech/redis.

One particular reason you might need to store session state outside the web server itself is when you have a web farm or multiple web servers, and subsequent requests from an authenticated user may be routed to different servers, for example via a load balancer. Traditionally this is solved with external session store providers, but using Redis can be faster and easier if your system is hosted in Azure, which was my case.

To test the implementation locally for development you could do the following:

  1. Download the Redis port for Windows from https://github.com/MSOpenTech/redis. Note: it is not available on the http://redis.io site.

[Image: redis windows port extract]

  2. Starting the local Redis server is very easy, as indicated below.

[Image: redis server]

This will start the Redis server; note the connection details. You can change them by editing the redis.conf file, which is documented on the site.

e.g. https://github.com/MSOpenTech/redis/blob/2.8/redis.conf

[Image: redis server start]

3. You can use the various commands specified in the project site's documentation, e.g. to view session keys use the command below. (No need to worry about hacking my servers; these are my DEV environment values.)
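
For example, using the redis-cli client that ships with the Windows port, you can list the stored keys like this (the exact key names depend on the session state provider):

redis-cli
127.0.0.1:6379> KEYS *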

[Image: redis keys]

4. Then you have to install the Redis session state provider NuGet package from https://www.nuget.org/packages/Microsoft.Web.RedisSessionStateProvider/ into your project.

5. One caveat here: to support "sliding expiration" for session state you need to make sure you install package version >= 1.5.0. Sliding expiration is typically a must in session state management. The current version at the time of writing is 1.6.5, so make sure you install the latest.

6. To install RedisSessionStateProvider, run the following command in the NuGet Package Manager Console of Visual Studio.

PM> Install-Package Microsoft.Web.RedisSessionStateProvider -Version {latest version number here}

7. Then all you have to do is add the following configuration section to your web.config and change the values accordingly.

[Image: session state config]
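
For reference, the provider's configuration section looks roughly like the following (the host, port, accessKey and ssl values are placeholders you replace with your own settings):

<sessionState mode="Custom" customProvider="MySessionStateStore">
  <providers>
    <add name="MySessionStateStore"
         type="Microsoft.Web.Redis.RedisSessionStateProvider"
         host="127.0.0.1"
         port="6379"
         accessKey=""
         ssl="false" />
  </providers>
</sessionState>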

8. After you have successfully tested it in your development environment, you can configure Redis in Windows Azure. Creating a Redis Cache in Azure is straightforward, as shown below. At the time of writing it is only supported in the new Azure portal found at http://portal.azure.com/; the old portal does not support Azure Redis Cache.

[Image: azure redis create]

[Image: azure redis create 2]

9. Choose an appropriate pricing tier depending on your data load.

[Image: azure redis pricing]

10. It will take a while to create the Redis cache; it sometimes takes more than 10 minutes to get up and running.

[Image: redis cache creating]

11. After you create the Redis cache you need to configure the access key and connection details for the NuGet package. Selecting the created Redis cache will show the access keys as below.

[Image: redis session config access keys]

12. Then all you need to do is enter the access key and host name of the Azure Redis server into the web.config. Notice how the connection details are entered; it is quite different from what we used earlier for the local Redis server. Also note that the host name given by Microsoft always ends with the suffix cache.windows.net, so your full host name should look like someuniquename.redis.cache.windows.net. This will essentially point your session store to the Azure Redis cache.

[Image: redis session config values]

Note: Redis Desktop Manager, which can be downloaded at http://redisdesktop.com/, can be used to manage the Redis DB if you prefer a GUI.

[Image: redis desktop manager]

 
Posted on October 20, 2015 in .NET, ASP.NET, ASP.NET MVC, Azure, C#

PowerShell Script to Add Your Current IP Address to Azure Firewall Rules

When you connect to Azure databases it is necessary to add your current public IP address to the Azure firewall rules, i.e. to whitelist your public IP. This can be done via the Azure management portal, but it becomes annoying, especially when your IP changes with router restarts because you do not have a static public IP. The other drawback is that the firewall rules generated automatically via the portal clutter up the firewall rules table, and you have to remember to clean them up, not only to remove unnecessary IPs but also to avoid security risks with dynamic IPs handed out from the ISP's pool.

You can automate this relatively easily with a PowerShell script as described below. For it to run you should import your Azure publish settings profile; the script uses certificate-based authentication, which is set up simply by importing that profile. Note: this should be done on your Azure management computer, so that after importing the publish settings into the machine's certificate store you can issue Azure PowerShell cmdlets without explicitly entering Azure credentials.

1. Run Windows PowerShell as an administrator: choose Start, type Windows PowerShell in the Search box, right-click the Windows PowerShell link, and then choose Run as administrator.

2. At the Windows PowerShell command prompt, type Get-AzurePublishSettingsFile and then press Enter.
3. A web browser opens with the Azure management portal; log in and follow the instructions to download the publish profile.
4. Once you have downloaded the publish profile, import it by entering the following command at the PowerShell prompt:
Import-AzurePublishSettingsFile '{pathtopublishsettingsfile}'

5. If no error is thrown, entering Get-AzureSubscription in PowerShell should show your current subscription(s).

[Image: get-current-subscriptions]

6. Once you have done that you can run the following PowerShell script, after changing the variables to the appropriate values as shown below. You can save it to a ".ps1" file (a PowerShell script file) and execute it through PowerShell when necessary, or even better use the "PS2EXE" method described further down.

  • $subscriptionName – the subscription name from the "Subscription Name" field obtained in step 5 above
  • $firewallRule – a firewall rule name that is descriptive to you, e.g. Dimuthu-Home
  • $serverName – your Azure server name

Note: the full script can be downloaded here...


$subscriptionName = 'Your Subscription Name'
$ipGetCommand = 'http://www.iplocation.net/find-ip-address'
$firewallRule = 'Dimuthu-Home'
$serverName = "Your Azure Server Instance Name";

# Scrape the current public IP address from the lookup page
$webclient = New-Object System.Net.WebClient
$queryResult = $webclient.DownloadString($ipGetCommand)
$queryResult -match '\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b'
$currentPublicIp = $($matches[0])

# Work against the chosen subscription
Select-AzureSubscription -SubscriptionName $subscriptionName

# Create the firewall rule if it does not exist, otherwise update it with the current IP
If ((Get-AzureSqlDatabaseServerFirewallRule -ServerName $serverName -RuleName $firewallRule) -eq $null) {
    New-AzureSqlDatabaseServerFirewallRule -ServerName $serverName -RuleName $firewallRule -StartIpAddress $currentPublicIp -EndIpAddress $currentPublicIp
}
else {
    Set-AzureSqlDatabaseServerFirewallRule -ServerName $serverName -RuleName $firewallRule -StartIpAddress $currentPublicIp -EndIpAddress $currentPublicIp
}

7. I saved this script as "azure-ipenable.ps1" for demo purposes and executed it through PowerShell as shown below. It will create a firewall rule with the given name, i.e. "Dimuthu-Home", if it does not exist, or update the IP address if the rule already exists. This is especially useful when the router resets or your dynamic public IP address changes.

[Image: after-create-rule-output]

Running the PS1 Script More Easily by Converting It to an EXE


To make life easier, after you change the subscription details to the appropriate values you can convert the script to an exe.

First download the PS2EXE utility and follow its instructions:
https://gallery.technet.microsoft.com/PS2EXE-Convert-PowerShell-9e4e07f1
Following is the command I used to convert the PS script to an exe.
[Image: ps2exe]
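
A typical invocation looks something like this (paths are illustrative; check the PS2EXE documentation for the exact parameters of the version you download):

.\ps2exe.ps1 -inputFile .\azure-ipenable.ps1 -outputFile .\azure-ipenable.exe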

This will generate a .NET executable that does the job more conveniently. :-)

[Image: ps2exe output]

 
Posted on September 16, 2015 in Automation, Azure, Powershell

Cross-Site Request Forgery (CSRF) Protection Considerations in ASP.NET WebForms and ASP.NET MVC

Cross-site request forgery (CSRF) attacks, or one-click attacks (the terms are used interchangeably), are among the most common vulnerabilities in websites, dashboards, etc. ASP.NET MVC and ASP.NET WebForms have some built-in security switches you can turn on to defend against CSRF risks, but there are still lesser-known facts and caveats in this area that those built-in mechanisms will not rescue you from. ASP.NET MVC developers often assume they can fully mitigate the risk by using the [ValidateAntiForgeryToken] action filter on MVC actions. To generate the anti-XSRF tokens, call the @Html.AntiForgeryToken() method from an MVC view or @AntiForgery.GetHtml() from a Razor page. This is mostly true, but there are still CSRF risks around HTTP "GET" requests that are really important to understand, which I discuss below; if needed you can jump straight to that.
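
For completeness, a minimal sketch of that standard MVC token pair (the controller, action and model names here are purely illustrative):

// Razor view: emit the anti-forgery token inside the form
@using (Html.BeginForm("Transfer", "Account"))
{
    @Html.AntiForgeryToken()
    <input type="submit" value="Transfer" />
}

// Controller: validate the token on the POST action
[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult Transfer(TransferViewModel model)
{
    // Runs only when the token in the form matches the token in the cookie
    return RedirectToAction("Index");
}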

What about ASP.NET WebForms? Many developers think that the built-in view state mitigates CSRF risk, which is absolutely not the case. Even though view state makes a CSRF attack slightly harder, it does not make it impossible. The MAC encoding present in ASP.NET WebForms exists to prevent view state tampering, not to defend against CSRF. If an attacker can log in with his own account but wants to attack some other user's account, he can easily grab the view state for the particular page and use it in a CSRF attack against the target user's page action. In theory this clearly indicates that there must be some random token, not guessable in advance, to defend against CSRF risk. The general recommendation is the synchronizer token pattern, which is used to mitigate this risk.

Does ViewStateUserKey in ASP.NET WebForms really mitigate CSRF risk?

What you can do is set the ViewStateUserKey property of the page instance in the Page_Init event handler to associate the page's view state with a specific authenticated user.

void Page_Init(object sender, EventArgs e)
{
  if (User.Identity.IsAuthenticated)        
  {
    ViewStateUserKey = Session.SessionID;
  }
}

When you do this, a malicious user cannot guess the value to include in the payload of a "one click attack", since it is unique to the specific user. Requests are validated against this key and an exception is thrown when it does not match, so the CSRF attack is mitigated. However, there are important gaps you must remember with this approach.

  1. The ViewStateUserKey property is an extra piece of data added to the view state MAC calculation. If that value changes between post-backs, the view state message authentication code (MAC) calculation fails and an exception is thrown. The important bit, however, is that view state MAC validation is only checked on post-backs. This means GET requests are still vulnerable to "one click attacks"; consider "https://www.foo.com/deleteproducts.aspx?productid=23", which will happily bypass view state MAC validation.
  2. Sometimes developers disable view state altogether, in which case this protection does not apply at all.

 

Mitigating HTTP GET CSRF Vulnerabilities

As stated above, even if you apply the CSRF protection measures, you are still vulnerable to CSRF attacks carried over GET requests. Surprisingly, this is not mentioned in much CSRF-related material. The better approach is to employ a combination of, or at least one of, the approaches below to mitigate GET CSRF risk.

  1. One easy approach, which I believe in, is to allow actions like "https://www.foo.com/deleteproducts.aspx?productid=23" only through POST requests. But do remember to include the preventive measures stated above; allowing only POST requests on its own will NOT mitigate CSRF risk.
  2. Another option is to include a random secret key regenerated per session, so that in the legitimate website the links generated in the UI carry it. The key can also be stored in a cookie and verified against the token in the request URL (this approach is known as double-submit cookies; a sketch follows this list). Malicious users cannot lure legitimate users into clicking crafted links, since the random key is tied to the session and must match the cookie value, e.g. https://www.foo.com/deleteproducts.aspx?productid=23&randomkey={xxx}
  3. Only allowing requests with a Referer header from the same site is another approach. This has issues, since the Referer header can legitimately be stripped or altered on the way from the user's browser to the server by firewalls and proxies, which can lead to the server rejecting legitimate requests.
  4. Challenge-Response – when an important business transaction such as the product deletion above arrives as a GET request, the server can respond with an additional challenge such as a CAPTCHA or password re-authentication. This is a really strong mechanism, although it is impractical in most scenarios.
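
A minimal sketch of the double-submit check from point 2 above, in WebForms style (the cookie name "XSRF-TOKEN" and query parameter "randomkey" are illustrative):

// When rendering the page: issue a random token, store it in a cookie and
// append the same value to generated links as the "randomkey" parameter
var bytes = new byte[32];
using (var rng = new RNGCryptoServiceProvider())
{
    rng.GetBytes(bytes);
}
var token = HttpServerUtility.UrlTokenEncode(bytes);
Response.Cookies.Add(new HttpCookie("XSRF-TOKEN", token));

// When handling the state-changing request: the token in the URL must match the cookie
var cookie = Request.Cookies["XSRF-TOKEN"];
var queryToken = Request.QueryString["randomkey"];
if (cookie == null || string.IsNullOrEmpty(queryToken) || cookie.Value != queryToken)
{
    throw new HttpException(403, "Potential CSRF request rejected");
}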

 

 
Posted on September 14, 2015 in ASP.NET, Security

Optimizing AngularJS Directives By Using Compile Phase

Many AngularJS developers do not use the compile phase in directives at all; instead they almost always use the link function. I have seen this many times. Refactoring such directives later can be painful and can introduce glitches. Learning to use the compile function when appropriate is a really important skill to have, since it will also improve the performance of your application. Consider the following AngularJS view fragment:

 <div ng-app="app">
    <div ng-controller="myController">
        <div ng-repeat="fruit in fruits">
            <my-directive fruit="{{fruit}}"></my-directive>
        </div>
    </div>
 </div>

And consider a possible naive JS implementation of the directive.

angular.module('app', [])
    .controller('myController', function ($scope) {
    $scope.fruits = ["Bananas", "Apples", "Oranges", "Grapes"];
})
    .directive('myDirective', function ($log) {
    return {
        template: "<div>{{fruit}}</div>",

        link: function (scope, element, attrs) {

                $log.log('link phase:', attrs.fruit);
            
        }
    };
});

As you can see in the above code, the common pitfall is putting both the code that is common to all directive instances and the instance-specific code in the same good old link function. Think how this impacts the performance of the application: if the directive lives inside an "ng-repeat" directive, and especially if the repeated collection is relatively large, the performance impact is amplified. In the above implementation the common code wastes time by executing again in each of the "ng-repeat" iterations.

The solution is to use the compile function, as shown in the implementation below.

angular.module('app', [])
    .controller('myController', function ($scope) {
    $scope.fruits = ["Bananas", "Apples", "Oranges", "Grapes"];
})
    .directive('myDirective', function ($log) {
    return {
        template: "<div>{{fruit}}</div>",

        compile: function (element, attrs) {
            // Code in this block will be called once regardless of instance count
            $log.log('compile phase:', attrs.fruit);
            return function link(scope, element, attrs) {
                // Code in this block will be called for each directive instance
                $log.log('link phase:', attrs.fruit);
            };
        }
    };
});

One common use of the compile phase is to customize the template dynamically based on runtime conditions. If you run the above sample code in your browser you will see the following console output; notice that the compile-phase log appears only once, but each ng-repeat iteration is logged through the directive's link function.

[Image: directive output]

What you should understand is that in the compile phase the scope is not yet linked to the directive content, so there are serious limitations on what you can do within it; in particular you should avoid using it for registering DOM listeners and similar instance-specific work. The compile phase is, however, the right place for dynamic template modifications, because the modified template is then cloned and propagated to each and every directive instance.
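
As a small sketch of that idea, here is a variation of the directive definition above (the "uppercase" attribute is hypothetical, added only to illustrate a template-level decision); a tweak made once in compile is applied to every cloned instance:

angular.module('app')
    .directive('myDirective', function () {
    return {
        template: "<div>{{fruit}}</div>",
        compile: function (tElement, tAttrs) {
            // Template-level change, performed once before the template is cloned
            if (tAttrs.hasOwnProperty('uppercase')) {
                tElement.find('div').addClass('uppercase');
            }
            return function link(scope, element, attrs) {
                // Instance-specific work goes here
            };
        }
    };
});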

The complete sample is available in the following JSFiddle:
http://jsfiddle.net/idimuthu/263oo2nw/

 
Posted on August 5, 2015 in AngularJs