
Implementing a Concurrent Data Flow in C#

Parallel processing is very common in modern applications that do an intensive amount of work. In such applications you often need a set of chained tasks, interlinked so that processing happens in a series of stages: an item cannot enter a later transformation step until it has finished the earlier ones, but different items can flow through different stages at the same time. Such processing pipelines can also be faster, since different stages are likely to use different types of resources available in the execution environment.

work flow model

To implement a parallel pipeline workflow we can use the Microsoft TPL Dataflow library, which is available as a NuGet package (System.Threading.Tasks.Dataflow).
nuget TPL data flow package

This library provides the ability to define blocks of code that can be interlinked to run concurrently as a workflow. The best way to understand it is to look at the sample implementation below, which simulates an image download workflow.

Problem:
Sample Workflow
Imagine we have an application where we need to routinely download images from an external provider with the following steps. To make the whole process faster we need to run these steps concurrently.
1. Read the download URL of an image from a data source
2. Download the image from the remote source
3. After each image is downloaded, save it to disk and update the database
Solution:
With the TPL Dataflow library we can abstract these three steps into separate dataflow blocks and link them together to create a parallel processing mesh as shown below (the actual image download code inside each block is omitted for clarity).
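The sample code references two simple DTO types that are not part of the library. A minimal sketch of them might look like this (the ImageData property is an assumption about what a real download result would carry):

public class DownloadInput
{
    public string DownloadUrl { get; set; }
}

public class DownloadResult
{
    public string DownloadUrl { get; set; }
    public byte[] ImageData { get; set; } // assumed: the downloaded image bytes
}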


var cts = new CancellationTokenSource();
Task.Run(() =>
{
    if (Console.ReadKey().KeyChar == 'c')
        cts.Cancel();
});

var inputBlock = new BufferBlock<DownloadInput>(
    new DataflowBlockOptions
    {
        BoundedCapacity = 5,
        CancellationToken = cts.Token
    });

var downloadBlock = new TransformBlock<DownloadInput, DownloadResult>(
    n =>
    {
        Console.WriteLine("Downloading {0} image on thread id {1}", n.DownloadUrl, Thread.CurrentThread.ManagedThreadId);
        //do the actual download here
        Thread.Sleep(TimeSpan.FromMilliseconds(2000)); //image download simulation
        return new DownloadResult { DownloadUrl = n.DownloadUrl };
    },
    new ExecutionDataflowBlockOptions
    {
        MaxDegreeOfParallelism = 4,
        CancellationToken = cts.Token
    });

var outputBlock = new ActionBlock<DownloadResult>(
    s =>
    {
        //do other work such as updating flags in the database to mark the image as downloaded
        Thread.Sleep(TimeSpan.FromMilliseconds(200)); //simulation of other work
        Console.WriteLine("Saving image to database {0} on thread id {1}", s.DownloadUrl, Thread.CurrentThread.ManagedThreadId);
    },
    new ExecutionDataflowBlockOptions
    {
        MaxDegreeOfParallelism = 4,
        CancellationToken = cts.Token
    });

inputBlock.LinkTo(downloadBlock, new DataflowLinkOptions
{
    PropagateCompletion = true
});

downloadBlock.LinkTo(outputBlock, new DataflowLinkOptions
{
    PropagateCompletion = true
});

Finally, I used the following code to test-run the workflow.

try
{
    Parallel.For(0, 20, new ParallelOptions
    {
        MaxDegreeOfParallelism = 4,
        CancellationToken = cts.Token
    },
    i =>
    {
        var downloadInput = new DownloadInput();
        downloadInput.DownloadUrl = string.Format("http://myimagesite.com/{0}", i);
        Console.WriteLine("added {0} to source data on thread id {1}", downloadInput.DownloadUrl, Thread.CurrentThread.ManagedThreadId);
        inputBlock.SendAsync(downloadInput).GetAwaiter().GetResult();
    });
    inputBlock.Complete();
    await outputBlock.Completion;
    Console.WriteLine("Press ENTER to exit.");
}
catch (OperationCanceledException)
{
    Console.WriteLine("Operation has been canceled! Press ENTER to exit.");
}
Console.ReadLine();

I tested this code in a console project, and the output with the simulated work looked like below.
output

TPL Dataflow blocks also accept cancellation tokens. This is extremely useful when you need to cancel the entire workflow, which would otherwise be difficult to manage.

 

Posted on April 24, 2017 in .NET, C#

 


Configuring Bamboo to Show Jasmine Test Results

If you already have a unit test setup configured to run with Karma in your local environment, it might be a good idea to integrate the test suite with your continuous integration (CI) server.
You can follow these steps to configure the popular Bamboo build server to run a unit test suite written with Jasmine.
1. The first step in integrating with Bamboo is to generate a test results file that the Bamboo server can understand. To do that, go to the configure build plan page in the Bamboo server as shown below.

bamboo-configure-plan

2. In the configuration page, locate the build job and go to the job's task configuration by clicking on the job.

job-configuration-in-bamboo

task-configuration

3. Click on Add task to add a task.

4. In the add task configuration dialog for the selected job, we have a few options for parsing Jasmine test results, such as the JUnit test parser and the Mocha test parser. In this case I will show how to use the JUnit test parser.

task-types

5. After you add the JUnit test parser, your task configuration might look similar to the one shown below. It is very important to add the JUnit test parser as a “Final Task” in Bamboo, i.e. after the “Final Tasks” gray bar as shown below.

junit-parser-task

In the JUnit parser task configuration, you should specify the test reports XML path, which I will describe later.

What are Final Tasks in Bamboo?

Test result parser tasks like these should always run, regardless of whether the previous build tasks succeeded. The reason is that when a unit test fails, the preceding build task fails too, but the test parser task still needs to run so that you can see the failed tests.

With the above steps we have mostly completed the Bamboo configuration. Next we need to generate the JUnit test result XML from the Jasmine tests through the npm command line. This post assumes you already have Jasmine unit tests configured to run with Karma in your local environment; covering that is beyond the scope of this post, and you can find more information on it here if needed. If you have the Karma test runner already set up, you can add the Karma JUnit reporter as shown below.

6. First install the “karma-junit-reporter” package in your unit test project with the command below.

npm install karma-junit-reporter --save-dev

7. Add the following to the Karma configuration, making sure you define the options for the JUnit reporter as expected.

junitReporter: {
    outputDir: 'test-reports', // results will be saved as $outputDir/$browserName.xml
    outputFile: 'junit-report.xml', // if included, results will be saved as $outputDir/$browserName/$outputFile
    suite: '', // suite will become the package name attribute in xml testsuite element
    useBrowserName: true, // add browser name to report and classes names
    nameFormatter: undefined, // function (browser, result) to customize the name attribute in xml testcase element
    classNameFormatter: undefined, // function (browser, result) to customize the classname attribute in xml testcase element
    properties: {} // key value pair of properties to add to the <properties> section of the report
}

The important thing to note in this reporter configuration is that the output path you give here must match the custom results directories path configured in the “JUnit Parser” task above, as shown below.

tasks.png
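For example, with the outputDir above set to test-reports, a custom results directory pattern along the lines of the following (a hypothetical value; the exact path depends on your build working directory) would pick up the generated report files:

test-reports/**/*.xml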

8. A sample karma.conf can look like the one below. Make sure “singleRun” is set to true so that the test run does not hang the build.

module.exports = function(config) {
    config.set({
        basePath: '',
        frameworks: ['jasmine'],
        files: [
            'your files here...'
        ],
        preprocessors: {},
        reporters: ['progress', 'spec', 'kjhtml', 'junit'],
        // web server port
        port: 9876,
        browsers: ['Chrome'],
        singleRun: true,
        concurrency: Infinity,
        // the default configuration
        junitReporter: {
            outputDir: 'test-reports', // results will be saved as $outputDir/$browserName.xml
            outputFile: 'junit-report.xml', // if included, results will be saved as $outputDir/$browserName/$outputFile
            suite: '', // suite will become the package name attribute in xml testsuite element
            useBrowserName: true, // add browser name to report and classes names
            nameFormatter: undefined, // function (browser, result) to customize the name attribute in xml testcase element
            classNameFormatter: undefined, // function (browser, result) to customize the classname attribute in xml testcase element
            properties: {} // key value pair of properties to add to the <properties> section of the report
        }
    })
}

9. In your build step you can run the following command to run the unit tests. This will generate the test report.

karma start
As you can see, after the build task completes in the Bamboo job, the JUnit test results XML has already been saved to disk. The JUnit test parser task, which runs after the build task, parses these results and shows them on the Bamboo build results page.
In the build dashboard you will see the test summary in the “Tests” tab as shown below.

build-results-1

If you need to see more test run details, you can click on the job link.

build results 2.png

build-results-3

If you have a failing test, the build fails and the failed test results are shown in the build results page.

failed-build-results

If you do not see these test results when unit tests fail, make sure you have added the “JUnit Parser” as a final task in the Bamboo configuration as described above. If you have Mocha as the test runner instead of Karma, you do not need a converter; you can directly use the Mocha test parser in Bamboo, and the results look much the same.

 

Posted on February 23, 2017 in ASP.NET, Bamboo, Jasmine, NodeJs, Unit Testing

 


Using Microsoft Machine Learning Studio in Azure

Machine learning is a field that has been growing in popularity over the past decades, and it has probably been around for a long time under a different term: “data mining”. Microsoft ML Studio in Azure is quite easy to use and relatively cheap, even in big projects.

What is Machine Learning?

According to Wikipedia, machine learning is the subfield of computer science that “gives computers the ability to learn without being explicitly programmed” (Arthur Samuel, 1959).

However, I like to simply describe it as finding patterns in data and using those patterns to predict the future! For example, assume you had a sample data set containing the income of particular individuals along with various other attributes about them, such as social status, occupation, geographical area, and age. Based on that sample data we can predict the income of an individual given all the other attributes about that individual except income.

Using Microsoft Azure Machine Learning Studio with zero programming!

One cool thing about Azure ML Studio is that you can run a lot of machine learning projects with zero programming (rather than using R or Python scripts). It supports Python and R scripts where necessary, but the most common decision tree algorithms can run with zero programming!

A Sample Case Study in Microsoft Azure Machine Learning Studio for Heart Disease Prediction

There is a sample machine learning project that you can run in ML Studio: based on previous clinical diagnostic data of heart disease patients, you can predict the presence of heart disease in a person. The Cortana gallery contains the sample here. To run this project in your Azure environment, do the following.

Log into Microsoft Machine Learning Studio.

ml-studio-initial

Navigate to https://gallery.cortanaintelligence.com/Experiment/Heart-Disease-Prediction-2 and click “Open in Studio” as shown below.

hear-disease-prediction

The project will be opened in Microsoft ML Studio, where you can inspect how the workflow has been built and run it.

ml-studio-heat-disease-model.png

You can visualize the scored model for positive and negative predictions (heart disease presence or absence in this case) with their underlying scored probabilities.

ml-studio-results-visualize-menu

evaluate-results

 

Posted on January 8, 2017 in Azure, Machine Learning

 


Connecting to SQL Server from NodeJs

With NodeJs it is more common to use MySQL, MongoDB, or PostgreSQL than Microsoft SQL Server as the database server, at least in what I have encountered. However, if we happen to prefer using Microsoft SQL Server with NodeJs, there are a few options we could use. An advantage of using a NodeJs server for SQL Server communication is that it can be hosted on any platform with minimal issues.

Using an ORM – Sequelize?

Sequelize is a great ORM utility to use with NodeJs, which has the advantage of providing an abstraction layer over separate database drivers for PostgreSQL, MySQL, MariaDB, SQLite and Microsoft SQL Server. This is by far the best option we have had so far. The other major benefit is its friendly promise-based API, which many NodeJs and AngularJS developers are following these days. Internally, Sequelize uses the appropriate NodeJs package for each database server.

For Microsoft SQL Server, tedious is the node package used with Sequelize. The following two install steps get you started with connecting to SQL Server from NodeJs.

npm install --save sequelize //sequelize package

npm install --save tedious // SQL server TDS driver

Connecting to SQL Server using Tedious without an ORM?

Tedious is the npm package used internally by Sequelize to connect to SQL Server. It uses the Tabular Data Stream (TDS) protocol to connect to SQL Server natively. Another compelling reason to use Tedious is that Microsoft has been actively contributing to it in the recent past.

If you prefer not to use an ORM for connecting to SQL Server from NodeJs, you can use Tedious directly without Sequelize, as follows. Tedious also supports SQL Azure, since it supports encryption.

var Connection = require('tedious').Connection;
var Request = require('tedious').Request;

var config = {
  userName: 'test',
  password: 'test',
  server: '192.168.1.210',

  // If you're on Windows Azure, you will need this:
  options: { encrypt: true }
};

var connection = new Connection(config);

connection.on('connect', function(err) {
  // If no error, then good to go...
  if (!err) {
    executeStatement();
  }
});

// a minimal example statement; the SQL text is illustrative
function executeStatement() {
  var request = new Request('SELECT 42 AS answer', function(err, rowCount) {
    if (err) {
      console.log(err);
    } else {
      console.log(rowCount + ' row(s) returned');
    }
  });
  connection.execSql(request);
}

Connecting to SQL Server from Sequelize

Connecting to SQL Server from NodeJs with Sequelize is simple, and you can abstract it in a factory class so that consumer code is completely unaware of the backing database server.

var sequelize = new Sequelize('database name', 'username', 'password',
	{ host: 'localhost', dialect: 'mssql',
	  port: 1433,
	  pool: { max: 5, min: 0, idle: 10000 },
	  dialectOptions: { instanceName: 'instancename'}
	});

Any database-driver-specific options, such as the instance name above, can be passed via dialectOptions.
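To illustrate the promise-based API mentioned earlier, here is a minimal sketch of defining a model and querying it (the user model and its username column are assumptions for illustration):

var User = sequelize.define('user', {
  username: Sequelize.STRING // hypothetical column
});

// every call returns a promise, so the steps chain naturally
sequelize.sync()
  .then(function () {
    return User.findAll();
  })
  .then(function (users) {
    console.log('found ' + users.length + ' users');
  })
  .catch(function (err) {
    console.log(err);
  });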

 
Leave a comment

Posted by on December 14, 2016 in ASP.NET, MS SQL Server, NodeJs

 

Tags: , , , ,

Moving Null/Empty Values to End of Results Collection in Elastic Search Results

Elasticsearch is a great enterprise-level full text search engine with multi-tenant capability and a distributed search server setup, and it provides an HTTP-based web API to communicate with the server. When communicating with an Elasticsearch server from .NET, it is worth using the NEST library as the client.

Problem

I was helping a colleague who needed to sort Elasticsearch results by the selling price of items in ascending order (the ES documents had a selling price field for products). This is easily achievable with NEST.

var searchDescriptor = new SearchDescriptor<MyEsDocument>();
searchDescriptor.SortAscending(x => x.SellingPrice);

However, one problem he was facing is that all the products that do not have a price are returned in the first set of results when sorted, as depicted below for a very simple data set.

sorting elastic search

This can be solved in multiple ways, such as query score boosting or introducing an additional “flag field” (which is undesirable since the index size grows, although queries perform better).

The solution I used is a “script” sort in Elasticsearch. This is easily achievable with NEST using code similar to the following.

var sortScriplet = "doc['sellingPrice'].value ? 1 : 0";
searchDescriptor.SortScript(x => x.Type("number")
        .Order(SortOrder.Descending).Script(sortScriplet))
        .SortAscending(x => x.SellingPrice);

First, documents that have an acceptable selling price value are flagged with 1 and the rest with 0, and the results are sorted by this flag in descending order. This brings all documents that have a selling price to the top. Then the actual selling price field is used for sorting within the initially partitioned result set, effectively sending the blank values to the bottom of the result set!
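For reference, the script sort above corresponds roughly to the following fragment of the raw Elasticsearch query DSL (a sketch of what NEST sends, based on the 1.x-era _script sort syntax, not copied from an actual request):

"sort": [
  { "_script": { "script": "doc['sellingPrice'].value ? 1 : 0", "type": "number", "order": "desc" } },
  { "sellingPrice": { "order": "asc" } }
]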

 

Posted on May 17, 2016 in ASP.NET, Dev tools, Elastic Search

 


Internationalization Aspects for Web Applications

Web applications that support multiple languages can be challenging to build and maintain, so before building one you should consider several aspects of internationalization.

In summary, very often it is not only about translating texts to other “languages” but also about supporting specific cultures/locales as well!

First of all, there are certain terms that come up often when it comes to supporting multiple languages.

Internationalization (I18n)

In software applications, internationalization (often shortened to “I18N”, meaning “I, eighteen letters, then N”) is the process of planning and implementing applications and services so that they can easily be adapted to specific local languages and cultures, a process called localization (described below). It also means making your applications and services “localizable”.

In other words, it is the process of building your software so that it isn't hardwired to one language/locale/culture.

 

Localization (l10n)

Localization is the process of adding the appropriate resources to your software so that a particular language/locale is supported. This often means adding language translation files without re-implementing or rebuilding your application.

 

Approaches to Localizing Web Applications

For web applications you can do the translations either on the server side or on the client side. This almost always depends on how your web application has been developed; i.e. if it is a SPA written with AngularJS, then doing the translations on the client side is preferable.

 

Server Side Translations for ASP.NET Applications

The obvious choice is to use the .NET framework's support for resource files (.resx) in the application and let the framework set CurrentCulture and CurrentUICulture for the current request thread. You need to suffix the resource files with the standard culture code, and the appropriate resource values will be selected by the framework. Visual Studio has built-in support for managing resource files.

resources

In summary, “CurrentCulture” is used for date formatting, number formatting etc., and “CurrentUICulture” is used for resource translations. You can configure the settings as follows in the Web.config file.

<configuration>
<system.web>
<globalization culture="auto:en-US" uiCulture="auto:fr" />
</system.web>
</configuration>

 

Use of “auto” in globalization setting.

We can set the current culture and UI culture with the “auto”-prefixed setting shown above. When we use “auto”, the client browser's settings are used to detect the cultures, and if they are not present the values default to “en-US” and “fr” in the above example. Detection works through the “accept-language” request header as shown below. These settings typically come from the OS or can be changed in the browser settings.

accept-language

 

Set Current Culture and Current UI Culture Programmatically

For ASP.NET applications, culture settings can be set in Application_BeginRequest or Application_AcquireRequestState. This affects all web pages unless it is overridden at page level as described below.

void Application_BeginRequest(object sender, EventArgs e)
{
    Thread.CurrentThread.CurrentCulture = System.Globalization.CultureInfo.CreateSpecificCulture("en-US");

    Thread.CurrentThread.CurrentUICulture = new System.Globalization.CultureInfo("en-US");
}

Overriding Culture at Individual Page Level (ASP.NET Web Forms)

ASP.NET Web Forms has a virtual method called “InitializeCulture” which can be overridden to change culture settings at page level as shown below.

protected override void InitializeCulture()
{
        Thread.CurrentThread.CurrentCulture = 
            CultureInfo.CreateSpecificCulture("en-US");
        Thread.CurrentThread.CurrentUICulture = new 
            CultureInfo("en-US");

        base.InitializeCulture();
}

Client Side Translations

For web applications, client side translations are often done with the help of JSON files which act as resource files.

Localization for Angular JS

AngularJS has built-in culture support for date/time, currency symbols and number formats via https://docs.angularjs.org/guide/i18n. To make language translations easier in AngularJS, we can use the https://github.com/angular-translate/angular-translate library.

In brief, in your Angular application's config phase you need to configure the translate provider service that comes with the library with some basic settings, i.e. the path to the language translation resources folder etc., as shown below.

function configure($logProvider, routerHelperProvider, exceptionHandlerProvider, $translateProvider) {
    $translateProvider
        .addInterpolation('$translateMessageFormatInterpolation')
        .preferredLanguage('en')
        .fallbackLanguage('en')
        .useStaticFilesLoader({
            prefix: '/app/i18n/',
            suffix: '.json'
        });
}

You can maintain the JSON translation files in your project as indicated below.

translation folder

translation json content

Translation JSON Example (Spanish-es)
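The screenshot of the file contents is not reproduced here, but a hypothetical es.json along these lines would work with the static files loader configured above (Splash_Msg matches the key used in the UI below; the other entry is purely illustrative):

{
    "Splash_Msg": "Bienvenido a la aplicación",
    "Home_Title": "Inicio"
}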

Within the UI, a translation key can be used with the directive given by the translation library as indicated below.

translation in UI

Above, “translate” is a directive given by the angular-translate library, and “Splash_Msg” is a key in the translation file which is resolved at run time.

Internationalization Concerns

  • Date Time – Week and month name translations should be done to support localization. It is often easier to find and use controls that support these concerns, or to build your own that does. When date/time period calculations are done, it is often easier to do them in a neutral base culture.
  • calendar

    French and English Language Supported Date Picker

    Bootstrap UI DateTime Picker (http://plnkr.co/edit/nLw9DMgOLDK1dTIOJq4Q?p=preview)

    Date time related concerns are probably the most complex and challenging in localization. Time zone conversions are extremely hard to handle accurately in web applications with different locales.

  • Unit of Measure – This might not be very important, but in some parts of the world specific units are used when displaying data; e.g. in Russia it is normal to use Russian abbreviations (in Cyrillic letters) instead of standard symbols, e.g. кг and not kg.
  • Monetary conversions – It is often useful to display monetary values with locale-specific currency units or money symbols. Frameworks like .NET and Angular i18n support currency formatting and currency symbols, but the biggest concern is exchange rates.
  • Numeric Formats – In some cultures symbols are used for different purposes; e.g. comma and period are swapped for decimals: 33.3 (English) vs. 33,3 (German). Client side and server side validations should handle these concerns with care, and additional coding may be needed.
  • String translations – Translations for static string labels are relatively easy compared to culture specific formatting. Resource files for the different languages are often used.

Language/Culture Selections

Culture and language selection can be done in a variety of ways in web applications. It can be detected automatically based on request information, or it can be set by the user, as briefly described below.

  • Browser settings
    • accept-language http header present in request headers
    • Location of the client – The remote IP location can be detected from the client request, and the culture can be set automatically.
  • User Selections
    • User selects language – Within the UI we can let the user select a preferred language and change the language in the user interface.
    • Store it in the profile – During user registration or in the user profile page we can let the user set the language and culture.

What to Translate

  • Static Texts – Static text labels are relatively easily factored out into resource files.
  • Application Data – Application data is probably the most difficult to translate unless you use a dynamic translation service (e.g. Google Translate). It is difficult and inefficient to “store” this translation data, and frequently changing data is hard to maintain in different languages, unlike the static text labels present in a web site.
  • Error Messages – Error messages should also be translated into the specific languages. In particular, client side validation scripts should be written to support multiple languages.
 

Posted on April 22, 2016 in .NET, AngularJs, ASP.NET, Javascript

 


Using JSX With Bable to Write React JS Components

I have been working on a number of AngularJS projects and lately on ReactJS development as well. If you write ReactJS components using just pure JavaScript, it can be a tedious task to hand-write the nested structure the framework needs to build its virtual DOM.

E.g.

var Menu = React.createClass({
    render: function () {
        return React.createElement("nav", { className: "nav navbar-default" },
            React.createElement("div", { className: "container-fluid" },
                React.createElement("ul", { className: "nav navbar-nav" },
                    React.createElement("li", { className: "active" },
                        React.createElement("a", { className: "active", href: "#" }, "Home")),
                    React.createElement("li", { className: "" },
                        React.createElement("a", { className: "", href: "" }, "Contact Us")))));
    }
});

ReactDOM.render(React.createElement(Menu, null), document.getElementById("menu"));

As shown below, all of the above code renders just the following simple HTML menu. :-)

menu

Actually, this is an insanely unmaintainable structure, yet it is exactly what ReactJS needs in order to build the virtual DOM structure it uses internally. The obvious way out, given by the ReactJS authors, is to use a transpiled language.

What is Transpilation?

Transpilation is converting one programming language's grammar into another programming language, typically through compilation.

One such transpiled language is React JSX (React JavaScript extensions). React JSX transforms XML-like syntactic sugar into actual JavaScript: XML elements, attributes and children are transpiled into arguments passed to React.createElement with the appropriate nested structure.
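For instance, a single JSX element such as <a className="active" href="#">Home</a> transpiles to React.createElement("a", { className: "active", href: "#" }, "Home").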

How to Use JSX?

One popular JSX transpiler is Babel.

To set up Babel, go to the root of your application and execute the following in your node command prompt.

npm install babel-cli --save-dev

Note: You need to install NodeJs and npm for this to work

Next we need to set up the React preset for Babel with the following.

npm install babel-preset-react --save-dev

The above installs React transpiler plugin support for Babel.
Next we need to set up a basic package.json script to easily transpile our JSX files. For that we need to specify which folder contains the JSX files and which folder the transpiled files should be copied into, as shown below in the package.json file.

  "name": "reacttest",
  "version": "1.0.0",
  "description": "",
  "main": "app.js",
  "devDependencies": {
    "babel-cli": "^6.6.5",
    "babel-preset-react": "^6.5.0"
  },
  "scripts": {
    "build": "babel js -d built --presets react"
  },
  "author": "",
  "license": "ISC"
}

Above we include “babel js -d built --presets react”, which says that the “js” folder is where we keep the JSX files and the “built” folder is where the transpiled files should be copied, as indicated in my project explorer below.

project explorer
Babel Transpilation Source And Target Folders

So we can rewrite the initial menu example with JSX as shown below in the React component.

var Menu = React.createClass({
    render: function() {
        return (
            <nav className="nav navbar-default">
                <div className="container-fluid">
                    <ul className="nav navbar-nav">
                        <li className="active"><a className="active" href="#">Home</a></li>
                        <li className=""><a className="" href="">Contact Us</a></li>
                    </ul>
                </div>
            </nav>
        );
    }
});

ReactDOM.render(React.createElement(Menu, null), document.getElementById("menu"));

Notice the return statement, where you return something much closer to an HTML-like version of XML (note the use of “className” instead of the standard “class”). If we execute the command below in the Node command prompt, this will be transpiled into native JavaScript syntactically identical to our earlier hand-written code that builds the React component.

npm run build

This will run the “build” npm script to compile with Babel as shown below.

babel transpilation
Babel Transpilation Step

Furthermore, you can integrate this command into Gulp or Grunt task runners, perhaps as part of a development build workflow, to make this a bit easier.
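Even without a task runner, a simple npm watch script using babel-cli's --watch flag (the “watch” script name here is my own choice) keeps the transpiled output up to date while you develop:

  "scripts": {
    "build": "babel js -d built --presets react",
    "watch": "babel js -d built --presets react --watch"
  }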

 

Posted on March 12, 2016 in ASP.NET, Babel, JSX, React

 
