Wednesday, July 20, 2016

Never Transpile When UnitTesting

I've been using Karma, Mocha, and Chai for unit testing our client-side environment at HPE Yehud, Israel, where we develop the Mobile Center product.


Karma is an excellent unit-test runner. At the beginning we used the PhantomJS plugin as our headless browser, but some time ago we started writing ES6 (ECMAScript 6) code.
With the new standard came the inability of some browsers to support it, and PhantomJS was no different.

We had two options: use Babel as a plugin in our Karma runner to transpile the code under test, or use another headless browser that supports the ES6 standard. We opted for the second option, despite all the objections and resistance from colleagues.


This is when I came across Electron, which is an excellent framework for building cross-platform desktop applications. In our case, we used it only as the headless browser in our Karma configuration file.

So, here is what our Karma config file looks like:



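A minimal sketch of such a configuration, assuming an Electron launcher plugin for Karma; the plugin name and the exact shape of electronOpts are my assumptions, mirroring the properties described below (show: false matches Electron's BrowserWindow option for hiding the window):

```javascript
// karma.conf.js -- minimal sketch; launcher plugin and option names are assumptions
module.exports = function (config) {
  config.set({
    frameworks: ['mocha', 'chai'],
    files: ['src/**/*.js', 'test/**/*.spec.js'],

    // Run the tests in Electron (ES6-capable) instead of PhantomJS
    browsers: ['Electron'],

    // Keep the Electron window hidden so CI builds are not disturbed
    electronOpts: {
      show: false
    },

    singleRun: true
  });
};
```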
Now, the important parts here are the browsers property and the electronOpts property, where we tell Karma to run our unit tests with Electron and to keep the window hidden, so it won't bother our Jenkins when building our drops.

Conclusion: we decided to use a browser that supports ES6. Testing transpiled code would be like saying that when writing unit tests for your server-side C# code, you should test not your own code but the IL (intermediate language) that the .NET compiler emits.

You are invited to leave your comments or questions.

Thursday, June 2, 2016

FileReader compatibility use and performance

The FileReader object lets web applications asynchronously read the contents of files (or raw data buffers) stored on the user's computer.

There are different approaches to reading files in order to upload them.

I came across a problem when trying to read large files. In my case these were APK and IPA files (Android and iOS mobile applications).

Specifically, reading very large files (over 160 MB) was not possible in Firefox.

I found that the problem was triggered by the following lines:

var fr = new FileReader();
fr.readAsDataURL(application); // hangs silently in Firefox for very large files

readAsDataURL never finishes its job, no error is raised, and the onload/onloadend handlers are never invoked.

The solution was to replace readAsDataURL with readAsArrayBuffer.

I also found that reading is performed much faster with readAsArrayBuffer, especially for large files.
Both methods are supported by all modern browsers: http://caniuse.com/#feat=filereader
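As a sketch of the change (the function and variable names here are illustrative, and the code must run in a browser, since FileReader is not available in Node.js):

```javascript
// Read the file as an ArrayBuffer instead of a base64 data URL.
// This avoids the silent failure on very large files and is noticeably faster.
function readApplicationFile(application, onDone) {
  var fr = new FileReader();

  fr.onload = function () {
    // fr.result is an ArrayBuffer; wrap it in a typed array or Blob
    // as needed for the upload request.
    onDone(null, fr.result);
  };
  fr.onerror = function () {
    onDone(fr.error);
  };

  fr.readAsArrayBuffer(application);
}
```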

Files:
Small file: ipa, 5.5 MB
Large file: ipa, 158 MB

Read times per browser (lower is better):

Browser    readAsDataURL            readAsArrayBuffer
           Small file   Large file  Small file   Large file
Edge       1021         12332       116          3034
IE 10      114          3051        16           256
Chrome     18           995         33           982
Firefox    345          5890        13           119


Saturday, March 12, 2016

Best practice when using $http in AngularJS


In almost every single-page application, you will need to retrieve some sort of data from the server or another HTTP URL, usually as JSON. After reading a lot of material and working on different projects, I have arrived at a conclusion about what may be the best practice for retrieving data. In this article I will also explain the reasons.

Before reading: this article is intended for those with moderate knowledge of AngularJS.

The power of AngularJS resides in its ability to rapidly build dynamic web applications.


If you have built some AngularJS applications, you have surely written a lot of controllers. If you injected $http into a controller, it was probably wrong. Why?

The controller's job is to bind the data (model) to the HTML page (template); it does not need to know the mechanism used to obtain that data.
So the mechanism for obtaining the JSON data should be encapsulated in services.
This way, we achieve two main design principles: decoupling and separation of concerns.

You will always start your AngularJS file with something like this:

var App = angular.module('ProfileApp', []);
The module is the container for the different parts of your app – controllers, services, filters, directives, etc.

Now, this is the controller that, in this case, retrieves a bunch of Facebook profiles:



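A minimal sketch of such a controller, assuming the ProfileService shown below; the controller name and the scope property are my assumptions:

```javascript
App.controller('ProfileCtrl', ['$scope', 'ProfileService', function ($scope, ProfileService) {
    // The controller only asks the service for data and binds it to the view;
    // it knows nothing about $http or where the data comes from.
    ProfileService.getProfileList().then(function (profiles) {
        $scope.profiles = profiles;
    });
}]);
```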
In this controller we inject "ProfileService" and call its getProfileList() function, which returns a "promise" (explained below).
A promise is a mechanism that lets you defer the result of an asynchronous action.
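To illustrate with plain native promises (fake data stands in for $http here): then() returns a new promise, so a service can unwrap the response before handing it to the controller.

```javascript
// Stand-in for $http.get(...): resolves with a fake response object.
function fakeHttpGet() {
  return Promise.resolve({
    status: 200,
    data: [{ name: 'profile1' }, { name: 'profile2' }]
  });
}

function getProfileList() {
  // then() returns a new promise; callers receive just the payload.
  return fakeHttpGet().then(function (response) {
    return response.data;
  });
}

getProfileList().then(function (profiles) {
  console.log(profiles.length); // → 2
});
```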

The service will look like this:

App.service('ProfileService', ['$http', function($http) {

    // Get the facebook profile list.
    this.getProfileList = function() {
        return $http.get('http://duda-api-test.herokuapp.com/profiles').then(function(response) {
            return response.data;
        });
    };

}]);

Some interesting things about this code:


  1. Although services and factories serve the same purpose, I'm using "Services" instead of "Factories". There are plenty of good articles explaining which one you should choose. I chose services because ES6 (ES stands for ECMAScript) classes map more naturally to services than to factories, which will make migration much easier when the time comes.
  2. I'm using the shortcut version of $http for simplicity and, more importantly, I'm using it with "then", since the legacy $http promise methods success and error have been deprecated.
  3. I'm NOT using $q, since there is no reason to use $q.defer to create a promise here. Why? Because "then" already returns a promise, which is simpler and much more readable.



You can see a full working version of my "angular service JSFiddle" containing this approach.