Monday, May 11, 2015

AngularJS custom directive with two-way binding using NgModelController


It took me a while, but I finally got it right!
I recently tried to create a custom directive with two-way binding, using the 'ng-model' attribute. This was a little tricky at first - I found some articles on the subject, but their examples didn't work for me as-is, and I needed to make some tweaks to get everything right.

I don't want to go over everything I read - I just want to publish the changes and gotchas you should know about.

The best article I read on the subject is this one : http://www.chroder.com/2014/02/01/using-ngmodelcontroller-with-custom-directives/
I recommend reading it. It has the best explanation I found of how '$formatters' and '$parsers' work, and how they relate to the ngModelController.


After reading that article, there were still 2 problems I ran into.

1. ngModelController.$parsers and ngModelController.$formatters are arrays, but 'pushing' my custom function to the end of the array didn't work for me - when the model changed, it never got invoked. To make this work, I needed to add it at the beginning of the array, using the Array.prototype.unshift method.

2. The second problem I had was that I needed to pass ng-model an object - passing it a primitive value won't work. You might think that's obvious, since a primitive can't be updated through a reference, but it wasn't obvious to me: passing ng-model a primitive when using an 'input' element, for example, works and still updates both ways.
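Putting both gotchas together, a directive wired to ngModelController looks roughly like this (a hypothetical sketch with made-up names like 'myWidget', not the actual dropdown from the repo below):

```javascript
// A minimal sketch of a custom directive with two-way binding via ngModelController.
function myWidgetDirective() {
    return {
        restrict: 'E',
        require: 'ngModel', // gives us the ngModelController as the 4th link argument
        link: function (scope, element, attrs, ngModelCtrl) {
            // Gotcha #1: unshift, not push - pushed functions may never
            // get invoked when the model changes.
            ngModelCtrl.$formatters.unshift(function (modelValue) {
                // model -> view: render the (object) model value
                return modelValue;
            });
            ngModelCtrl.$parsers.unshift(function (viewValue) {
                // view -> model
                return viewValue;
            });
        }
    };
}
```

Gotcha #2 is implicit here: the value bound through ng-model should be an object, so changes propagate by reference.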

For a full working example of a two-way binding directive using ngModelController (the ng-model attribute), you can take a look at this:
https://github.com/gillyb/angularjs-helpers/tree/master/directives/dropdown

Monday, April 27, 2015

Reviewing Kibana 4's client side code


I haven't written anything technical for a while, mainly because over the past year I changed jobs a few times. After working at Sears Israel for almost 3 years, I thought it was time to find the next adventure. I think I finally found a good match for me, and I'll probably write a whole post about that soon.

For now, I'll just say that at the new startup I work at, we're doing a lot of work on the ELK stack, and I got to spend a lot of time inside Kibana. Even with years of experience on various client side applications, I still learned a lot from reading Kibana's code. Many things here are written really elegantly, so I wanted to point them out in one concentrated post. There are also a few negative points, mostly minor ones (in my opinion), which I'll mention as well.


At First Glance

Kibana 4 is a large AngularJS application. The first thing I noticed when looking at the code is that it has a great structure. Many AngularJS tutorials (or tutorials for other MVC frameworks) and code bases I've worked on have the messy structure of a 'models' directory, a 'controllers' directory, and a 'views' (or 'templates') directory.
Kibana did the right thing by organising the code by features/components, and not by framework concepts. This makes it much easier to navigate the code base, and to add new features.
Organising a code base by controllers, models, views, etc, doesn't do much for it - each directory just becomes a pile of unrelated features, violating the Separation of Concerns principle.


(Each component is grouped in its own directory, which includes its templates, its code and its styles all together.)

In addition, most AngularJS applications I've seen have all their routes defined in one file (usually app.js or index.js), alongside many global definitions, and sometimes even logic related to specific pages or models - all in a single file with no relation to any feature.
Kibana's code is nicely organised, and each 'plugin' or 'component' (discover/visualize/dashboard/settings/etc) defines its own routes in its own controller.
They manage this by creating their own 'RouteManager' (https://github.com/elastic/kibana/blob/master/src/kibana/utils/routes/index.js). It exposes the same api as angular's $routeProvider, but it collects the routes you define, and in the end calls angular's routing to actually register them (by calling routes.config here: https://github.com/elastic/kibana/blob/master/src/kibana/index.js#L41).
This custom route manager also adds the ability to resolve certain things before the route is invoked, which is really useful in many situations.
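The deferred-registration idea is easy to sketch. Here's a minimal, hypothetical version of such a route collector (my own illustration, not Kibana's actual implementation):

```javascript
// A tiny route collector: modules register routes with the same api as
// $routeProvider, and config() replays them into angular at bootstrap time.
function RouteManager() {
    this._routes = [];
}

// Same shape as $routeProvider.when(), but only records the route.
RouteManager.prototype.when = function (path, routeConfig) {
    this._routes.push({ path: path, config: routeConfig });
    return this; // chainable, like $routeProvider
};

// Called once from the app's config block, with the real $routeProvider.
RouteManager.prototype.config = function ($routeProvider) {
    this._routes.forEach(function (route) {
        $routeProvider.when(route.path, route.config);
    });
};
```

Each component can then call `when()` in its own file, and the app's single config block replays everything at once.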


Javascript Libraries

The creators of kibana did a great job (with a few minor exceptions that I'll explain at the end) choosing open source javascript libraries to lean on while building kibana. It's usually a good idea not to reinvent the wheel, especially when someone has already done a good job before you.

RequireJS
RequireJS is a javascript module loader. It helps you create modular javascript code, and makes dealing with dependencies between modules really easy. Kibana's code does a great job utilizing RequireJS, defining most javascript modules in the AMD format.

A really nice trick they did here, definitely worth mentioning, is the 'Private' service they created. It's a wrapper that lets you define a RequireJS module with angular dependencies, so you can use angular's dependency injection side-by-side with RequireJS' module loading.

Loading RequireJS modules normally looks like this:
define(function(require) {
    var myService = require('my_service');
    // now do something with myService
});

Using the 'Private' service, you load modules like this:
define(function(require) {
    var myAngularService = Private(require('my_angular_service'));
    // now you can use myAngularService
});

And most importantly, my_angular_service itself looks like this:
define(function(require) {
    return function($q, $location, $routeParams) {
        // all angular providers in the function parameters are available here!
    };
});

The Private service uses angular's injector ($injector) to inject the dependencies we need.
(Take a look at the 'Private' service code here : https://github.com/elastic/kibana/blob/master/src/kibana/services/private.js)
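Conceptually, the service can be tiny. This is a simplified, hypothetical sketch of the idea only - the real implementation in the link above does quite a bit more:

```javascript
// A simplified 'Private' sketch: takes a module that exports an
// angular-style factory function and invokes it through $injector,
// so the factory's parameters get dependency-injected. Results are
// cached so each provider is instantiated only once.
function makePrivate($injector) {
    var cache = new Map();

    return function Private(provider) {
        if (!cache.has(provider)) {
            cache.set(provider, $injector.invoke(provider));
        }
        return cache.get(provider);
    };
}
```

In a real app, `$injector.invoke` resolves each parameter ($q, $location, ...) by name from angular's registry.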


lodash!
If you're not familiar with lodash, you should be. It's the missing javascript utility library that will definitely help you DRY up your javascript code. It has many "LINQ"-like methods (for those familiar with C#), and many other basic helpers you would otherwise write yourself for iterating over objects and arrays. One really nice feature is that most methods can be chained, which makes your code more readable, and lodash uses lazy evaluation in chains, so performance is great!

I don't want to start writing about the features of lodash, but I strongly suggest reading their docs, and getting familiar with it.
Almost every service, component or controller in the kibana code starts with this line:
var _ = require('lodash');


They also did a really good job extending lodash with some utility methods of their own. Take a look at these files to see for yourself :
https://github.com/elastic/kibana/blob/master/src/kibana/utils/_mixins_chainable.js
https://github.com/elastic/kibana/blob/master/src/kibana/utils/_mixins_notchainable.js

(There's one thing I don't like here: the 'get' and 'setValue' methods. They do a 'deepGet' and 'deepSet', which is like saying "hey, I know I have something in this object somewhere, but I have no idea where". This just doesn't feel right... :/ )


Some HTML5

Throughout the code there is some good use of HTML5 features.
The first one I noticed and really liked is the 'Notifier' service (https://github.com/elastic/kibana/blob/master/src/kibana/components/notify/_notifier.js). I really like the abstraction here over notifying the user of different message types, and over the browser's 'console' methods. The 'lifecycle' method (https://github.com/elastic/kibana/blob/master/src/kibana/components/notify/_notifier.js#L139) is really neat: it uses console.group() to group messages in the browser's console, and 'window.performance.now', which is nicer than the older 'Date.now()' (it's more precise, and it's relative to the page's navigationStart).

Kibana also makes use of the less-common <wbr/> tag. This is new in html5 and is intended to give you a little more control over where the line breaks when text overflows its container.

They also use 'localStorage' and 'sessionStorage' for saving many local view settings on the different kibana pages. In general, they did a great job persisting the user's state on the client side: when navigating between tabs, you return to the last view you were in.
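As a hedged sketch of the pattern (hypothetical key names and API, not Kibana's actual storage code), with an in-memory fallback so it works outside a browser too:

```javascript
// Persist per-view UI state under a namespaced key, falling back to an
// in-memory store when web storage isn't available (e.g. in tests).
function createViewStateStore(storage) {
    var memory = {};
    var backend = storage || {
        getItem: function (k) { return (k in memory) ? memory[k] : null; },
        setItem: function (k, v) { memory[k] = String(v); }
    };

    return {
        save: function (viewName, state) {
            backend.setItem('viewState:' + viewName, JSON.stringify(state));
        },
        load: function (viewName) {
            var raw = backend.getItem('viewState:' + viewName);
            return raw ? JSON.parse(raw) : null;
        }
    };
}
```

In the browser you'd call `createViewStateStore(window.localStorage)` (or sessionStorage for per-tab state).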

Another nice thing is the widespread use of aria-* attributes, and recently I see more and more of this in newer commits. It's nice to see a big open source project dedicating time to these kinds of details.


Object Oriented Programming

There is a great deal of attention to object design in the code.
First, I like the way inheritance is implemented. A simple lodash 'mixin' enables object inheritance:

inherits: function (Sub, Super) {
    Sub.prototype = Object.create(Super.prototype, {
        constructor: {
            value: Sub
        },
        superConstructor: Sub.Super = Super
    });
    return Sub;
}
(https://github.com/elastic/kibana/blob/master/src/kibana/utils/_mixins_chainable.js#L23)

Many objects in the code use this to inherit all the properties of some base object. Here's an example from the 'SearchSource' object:
return function SearchSourceFactory(Promise, Private) {
    var _ = require('lodash');
    var SourceAbstract = Private(require('components/courier/data_source/_abstract'));
    var SearchRequest = Private(require('components/courier/fetch/request/search'));
    var SegmentedRequest = Private(require('components/courier/fetch/request/segmented'));

    _(SearchSource).inherits(SourceAbstract);
    function SearchSource(initialState) {
      SearchSource.Super.call(this, initialState);
    }

    // more SearchSource object methods

}

(https://github.com/elastic/kibana/blob/master/src/kibana/components/courier/data_source/search_source.js#L9)

You can see the SearchSource object inherits all the base properties from the SourceAbstract object.

In addition, the objects' methods are defined on the prototype rather than on each instance. This is great mainly for memory usage: a method on the prototype exists only once in memory, shared by all instances, instead of each instance carrying its own copy.
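A quick self-contained illustration of why this matters (my own example, just reusing the SearchSource name):

```javascript
// Methods on the prototype are shared: every instance references the
// same function object, instead of each instance carrying its own copy.
function SearchSource(state) {
    this.state = state;
    // BAD (one closure per instance): this.getState = function () { ... };
}

// GOOD: one function in memory, shared by all instances.
SearchSource.prototype.getState = function () {
    return this.state;
};

var a = new SearchSource('a');
var b = new SearchSource('b');
// a.getState === b.getState -> the exact same function object
```

With thousands of objects alive in a long-running single-page app, that difference adds up.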


Memory Usage

Since kibana is a big single-page application, it needs to be careful with memory usage. Apps like kibana can be left open in a browser for a long time without a refresh, so it's important to make sure there are no memory leaks. AngularJS makes this easy to handle, but many programmers don't bother going the extra mile.
In the kibana code, many directives subscribe to the '$destroy' event and unbind their event handlers, so they don't hold references to unused objects.

An example from a piece of kibana code (the css_truncate directive):
$scope.$on('$destroy', function () {
    $elem.unbind('click');
    $elem.unbind('mouseenter');
});
(https://github.com/elastic/kibana/blob/master/src/kibana/directives/css_truncate.js#L41)


Code Conventions

Kibana's code is mostly very organized, and more importantly, readable. A small negative point goes to some naming inconsistencies: some classes prefix certain methods with '_' and others don't, with no clear convention.

For an example of this, look at the DocSource object. The file even has 'Public API' and 'Private API' comments, but the naming differences between the two sections aren't clear.
(https://github.com/elastic/kibana/blob/master/src/kibana/components/courier/data_source/doc_source.js)


Code Comments

It's hard for me to say whether the code has 'enough' comments, because most of it reads well without any - which is an amazing thing in itself. Where comments are needed, they're there, and they're good.

Just a funny anecdote: I was surprised to see comments that actually draw, in ASCII art, the curve of the function they describe! Kudos!
/**
 * Create an exponential sequence of numbers.
 *
 * Creates a curve resembling:
 *
 *                                                         ;
 *                                                         /
 *                                                        /
 *                                                     .-'
 *                                                 _.-"
 *                                            _.-'"
 *                                      _,.-'"
 *                               _,..-'"
 *                       _,..-'""
 *              _,..-'""
 *  ____,..--'""
 *
 * @param {number} min - the min value to produce
 * @param {number} max - the max value to produce
 * @param {number} length - the number of values to produce
 * @return {number[]} - an array containing the sequence
 */
createEaseIn: _.partialRight(create, function (i, length) {
    // generates numbers from 1 to +Infinity
    return i * Math.pow(i, 1.1111);
})

(https://github.com/elastic/kibana/blob/master/src/kibana/utils/sequencer.js#L29)
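The underlying idea is easy to reproduce in a self-contained way. This is my own sketch, not Kibana's code (the real sequencer builds the curve through a generic create() helper):

```javascript
// Produce `length` values easing in from min to max, using the same
// i * i^1.1111 growth curve the ASCII-art comment describes.
function createEaseIn(min, max, length) {
    var raw = [];
    for (var i = 1; i <= length; i++) {
        raw.push(i * Math.pow(i, 1.1111)); // grows super-linearly
    }
    var top = raw[raw.length - 1];
    // scale the curve into the [min, max] range
    return raw.map(function (v) {
        return min + (v / top) * (max - min);
    });
}
```

The gaps between consecutive values grow toward the end, which is exactly the "ease in" shape drawn above.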


CSS Styling

Another great decision here was using 'less' for the stylesheets. This allows for small and concise less files, and easy reuse of css components (via 'mixins'). They did an especially good job with colors - all the colors are defined in a single file (https://github.com/elastic/kibana/blob/master/src/kibana/styles/theme/_variables.less). By editing this file, you can easily create your own color scheme.

(There are a few exceptions - mainly a few colors defined in js files or css files - but it's 99% covered in _variables.less.)


Build Process

Kibana has a grunt build process set up. It compiles the less files, combines them and the js files (without minifying, using r.js), adds cache-busting parameters to the resource urls, and handles some more small tasks.
I would be happy to see this upgraded to gulp, which is stream based and has a much nicer api (in my opinion), but grunt still does the job.


Performance

After writing so many good points about kibana's source code, this is where my feedback is less flattering.
Maybe it's because kibana was built as an internal tool, not meant to be served over the internet, and maybe I'm just overly sensitive after working for quite a while on the performance team at Sears Israel (on ShopYourWay.com). Either way, if it were a public website, its performance would be considered under-par.

JS files aren't minified. They are combined, but not minified. Unfortunately, the code isn't even prepared for minification: angular's injector matches dependencies by parameter name, and minifiers rename parameters, so the dependency names need to be declared as strings alongside the function. Otherwise angularjs's dependency injection mechanism won't work.
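For reference, this is what minification-safe annotation looks like in general AngularJS code (a generic example, not taken from Kibana):

```javascript
// A minifier will rename $scope and $http to something like a and b,
// so angular needs the dependency names preserved as strings.
function SearchController($scope, $http) {
    // controller logic would use $scope and $http here...
}

// Option 1: the $inject property (strings survive minification)
SearchController.$inject = ['$scope', '$http'];

// Option 2: inline array annotation at registration time:
// app.controller('SearchController', ['$scope', '$http', SearchController]);
```

Either form lets the injector resolve dependencies after the parameter names are mangled.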

CSS files aren't minified either, just combined.

JS files weigh ~5MB !!! Yes, almost 5MB!! That's huge, and it's all downloaded on kibana's initial load. This could have been split into a few separate files, downloading only the ones needed for the initial view first - that alone would be a great improvement. There are advantages to not minifying the js, though, and I think that's what the creators had in mind: it's easier to debug with DevTools (no need for mapping files), and although the initial load takes a long time, there's no waiting on any other page after that. If the resources are cached on your machine, even returning to kibana a second time should be really fast.

There are also some libraries in the source code that I think are redundant and could've been removed with a little extra work. One example is jquery, which is generally frowned upon alongside angularjs - AngularJS comes with jqLite, a smaller subset of jquery, which should suffice.

I hope it doesn't sound like I think they did a bad job - I'm only pointing out some areas that maybe could've been done differently. All in all the app is amazing, and works great! :)


In conclusion

I had a great time learning and working (and still working) on kibana's code. I tried to show a lot of good things I like about the code, and point out a few minor bad things in the code. I hope you enjoyed reading this, and Kudos to you if you got to this point! :)

I also hope to write another post about how kibana communicates with elasticsearch, and maybe another on how it renders the visualizations with the help of D3.js.

Sunday, February 22, 2015

When 7 Billion users just aren't enough...


It seems like the only thing threatening facebook's user growth is earth's mortality rate.
It also seems like they're not going to let that stop them from growing anytime soon!

Facebook just released a new feature that allows you to assign another facebook account to control your account after you pass away.

http://money.cnn.com/2015/02/12/technology/facebook-legacy-contact/

I personally give them a HUGE amount of credit for the creativity. Let this be a lesson to all of us about making the most out of a situation where resources are limited.

Tuesday, February 17, 2015

Simple nodejs desktop time tracking utility


I recently wrote about Creating desktop applications with nodejs...
Well, I was playing around a little with node-webkit (again!) - a nodejs framework for building cross-platform desktop applications. Within a few hours I had built a super simple time tracking utility that I'd needed for quite some time!

I know there are a ton of utilities like this out there already, but all of them have many more features than I want or need, and annoy me too much while I'm using them. This utility does *nothing* but track time: you just add a task and it starts timing it. You can stop and start tasks, and remove them when you're done.

I'm not fanatical enough about time productivity (yet!) to need history graphs showing how productive I've been lately. It's really more for me to see whether the tasks I'm working on take as long as I think they should.

So here it is: https://github.com/gillyb/tt-trakr

All the code is there.
There's also a compiled executable for windows ready inside the 'Installation' folder.
Now that I have a mac, I want to compile it for mac soon, too.
(I'll also probably be making some UI improvements and maybe adding some more small features in the future, so follow the repository if you're interested.)


:),
Gilly.

Saturday, November 8, 2014

Online multiple javascript compression tool


Minifying/compressing javascript files has been standard practice in website development for a while now. It's very important to save as much space as you can and have your users download as little as possible, to improve the performance of your site.
I don't think you'll find an article out there that denies it.

An important question that arises from this is: which compressor/minifier should you use?
Different famous open source projects use different compressors, and I'm guessing (or at least hoping) they chose them wisely, relying on benchmarks they ran on their own code.
You see, each compressor works differently, so different code bases won't be affected in the same way by different compressors.

In the past I used to manually test my code against different compressors to see which one was best for me. I finally got sick of doing it manually, so I decided to look for a tool that would do the job for me. Surprisingly, I didn't find one that did exactly that, so I quickly wrote a script that does it. Then I decided to design a UI for it and put it online for others to enjoy as well.

I present to you : http://compress-js.com
You can paste in text, or drag some js files onto it, and choose which compressor you want. Or - and this is the default - choose 'Check them all', which compresses your code with the most popular compressors and shows you the results and compressed size from each of them. You can download the compressed files directly from the site.



Currently the site can compress your javascript code with YUI Compressor, UglifyJS, JSMin and Google's Closure Compiler.
If you have any thoughts or suggestions on how to improve, feel free to drop a comment below. :)

Tuesday, November 4, 2014

Lazy loading directives in AngularJS the easy way


The past few months I've been doing a lot of work with AngularJS, and currently I'm working on a single page application which is supposed to be quite big in the end. Since I have the privilege of building it from scratch, I'm taking many client-side performance considerations in mind now, which I think will save me a lot of hard work optimizing in the future.

One of the main problems is the HUGE amount of js files being downloaded to the user's computer. A great way to avoid this is to download only the minimum the user needs, and dynamically load more resources in the background, or as the user reaches pages that require a specific feature.

AngularJS is a great framework, but it has nothing built in that deals with this, so I did some research myself...
I ran into some great articles on the subject, which really helped me (and I took some ideas from them), but they weren't perfect.
A great article on the subject is this one: http://www.bennadel.com/blog/2554-loading-angularjs-components-with-requirejs-after-application-bootstrap.htm
The important part is that it explains how to dynamically load angularjs directives (or other components) after bootstrapping your angularjs app.
What I didn't like about this article is that the writer's example requires RequireJS and jQuery along with all the AngularJS files you already have. That alone makes your app really heavy, and I don't think it needs to be like this.

Let me show you how I wrote a simple AngularJS service that can dynamically load directives.

The first crucial step is to save a reference to $compileProvider. This provider is available to us while bootstrapping, but not later, and it's what will compile our directive for us:
var app = angular.module('MyApp', ['ngRoute', 'ngCookies']);

app.config(['$routeProvider', '$compileProvider', function($routeProvider, $compileProvider) {
    $routeProvider.when('/', {
        templateUrl: 'views/Home.html',
        controller: 'HomeController'
    });

    app.compileProvider = $compileProvider;
}]);


Now, we can write a service that will load our javascript file on demand, and compile the directive for us, to be ready to use.
This is a simplified version of what it should look like:
app.service('LazyDirectiveLoader', ['$rootScope', '$q', '$compile', function($rootScope, $q, $compile) {
    
    // This is a dictionary that holds which directives are stored in which files,
    // so we know which file we need to download for the user
    var _directivesFileMap = {
        'SexyDirective': 'scripts/directives/sexy-directive.js'
    };

    var _load = function(directiveName) {
        // make sure the directive exists in the dictionary
        if (!_directivesFileMap.hasOwnProperty(directiveName)) {
            console.log('Error: unrecognized directive: ' + directiveName);
            return $q.reject('unrecognized directive: ' + directiveName);
        }

        var deferred = $q.defer();
        var directiveFile = _directivesFileMap[directiveName];

        // download the javascript file
        var script = document.createElement('script');
        script.src = directiveFile;
        script.onload = function() {
            $rootScope.$apply(deferred.resolve);
        };
        document.getElementsByTagName('head')[0].appendChild(script);

        return deferred.promise;
    };

    return {
        load: _load
    };

}]);


Now we are ready to load a directive, compile it and add it to our app so it is ready for use.
To use this service we simply call it from a controller, or any other service/directive, like this:
app.controller('CoolController', ['LazyDirectiveLoader', function(LazyDirectiveLoader) {
    
    // let's say we want to load our 'SexyDirective' - all we need to do is this:
    LazyDirectiveLoader.load('SexyDirective').then(function() {
        // now the directive is ready...
        // we can redirect the user to a page that uses it!
        // or dynamically add the directive to the current page!
    });

}]);


One last thing to notice: your directives now need to be registered through '$compileProvider', and not the way we would normally register them. This is why we exposed $compileProvider on our 'app' object, for later use. So our directive js file should look like this:
app.compileProvider.directive('SexyDirective', function() {
    return {
        restrict: 'E',
        template: '<div class=\"sexy\"></div>',
        link: function(scope, element, attrs) {
            // ...
        }
    };
});


I wrote earlier that this is a simplified version of what it should look like, since there are some changes I would make before using it as is.
First, I would add better error handling to cover edge cases.
Second, we wouldn't want the same page to attempt downloading the same files several times, so I would add a cache mechanism for loaded directives.
Also, I wouldn't keep the list of directive files (the _directivesFileMap variable) directly in the LazyDirectiveLoader service - I would move it into a separate service and inject it in. That service would be generated by my build system (in my case I created a gulp task for this), so I never need to remember to update the file map manually.
Finally, I would extract the part that loads the javascript file into a separate service, so I can easily mock it in the tests I write. I don't like touching the DOM in my services, and if I have to, I'd rather isolate it in a separate service I can easily mock.
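For example, the DOM-touching part could be isolated in a tiny, mockable loader. This is a hypothetical sketch (the names are mine, and it uses a native Promise instead of $q to stay framework-free):

```javascript
// A mockable script loader, written as a plain factory so the DOM access
// can be swapped out in tests ('doc' is `document` in the browser).
function createScriptLoader(doc) {
    return {
        load: function (src) {
            return new Promise(function (resolve) {
                var script = doc.createElement('script');
                script.src = src;
                script.onload = function () { resolve(src); };
                doc.getElementsByTagName('head')[0].appendChild(script);
            });
        }
    };
}
```

In tests, you pass a fake `doc` object and the service never touches a real DOM; in the app, you pass `document`.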

I uploaded a slightly better (and a little less simplified) version of this over here : https://github.com/gillyb/angularjs-helpers/tree/master/directives/lazy-load

Thursday, October 9, 2014

Desktop applications with nodejs! ...as if winforms and wpf aren't dead already!


I used to favor other languages over javascript because javascript wasn't type-safe, was hard to refactor, hard to write tests for, hard to find usages in... and the list goes on.
Over the past few years, though, some amazing things have happened that now make javascript an amazing language!

IDEs got much better! My personal favorite is WebStorm, which has great auto-completion in javascript and supports many frameworks like nodejs and angular.

Web frameworks got much better! Newer, more advanced frameworks like AngularJS and Ember allow you to write really organized and well structured javascript on the client side.

V8 was created and open sourced, which brought a whole variety of new tools to the table. Some of them being headless browsers like phantomJS which are great for automation testing, and creating quick web crawling scripts.

And my personal favorite - NodeJS! This tool is amazing! It can do so many things from being a fully functional and scalable backend server to a framework for writing desktop applications.


While looking into the code of PopcornTime I realized it was written in nodejs, with a framework called node-webkit. This was an amazing concept to me. It's basically a wrapper, that displays a frame with a website in it. The 'website' displayed is your typical client side code - html, javascript and css, so obviously you can use any framework you like, like angular or ember. This 'website' which is displayed in the frame can use all nodejs modules (directly in the js code) which gives you access to the operating system - you can access the file system, databases, networks and everything else you might need. Since nodejs runs on all major operating systems, you can also 'compile' your desktop app to run on any platform.
You can wrap all this as an executable file ('.exe' in windows) and easily tweak it not to show the toolbar which means the user has no way of knowing it's actually a website 'beneath' the desktop application they're using.


The steps taken to create a simple desktop application with node-webkit are super-simple!
(and easier than building a desktop application with any other language i've tried!)

First, I'm assuming you have nodejs and npm installed.
Now, download node-webkit : https://github.com/rogerwang/node-webkit#downloads

Start building your application just like you would a website. You can use the browser, like you're used to, to see your work.
When you want to start accessing node modules, you'll need to run the app with node-webkit.
To do this, just run the node-webkit executable from the command line with your main html file as a parameter:

C:\Utilities\node-webkit\nw.exe index.html


This will open your website as a desktop application.

You can now access all nodejs modules directly from your page's javascript!
Some of the operating system's apis are wrapped as node modules as well, so you can create a tray icon, native window menus, and much more.

Debugging the app is also super simple, and can easily be done with the Developer Tools, just like in Chrome! (You just need to configure your app to open with the toolbar visible, which you can set in your package.json file while developing.)
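In node-webkit, that toolbar setting lives in the manifest's 'window' section. A minimal package.json might look like this (the values here are just an example):

```json
{
  "name": "my-app",
  "main": "index.html",
  "window": {
    "toolbar": true,
    "width": 800,
    "height": 600
  }
}
```

Set "toolbar" to false before packaging, so users see a plain native-looking window.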


I see so many benefits to creating desktop applications like this, so I'm expecting to see many more apps running on this framework (or other nodejs-based frameworks) in the near future. (Except for heavy algorithmic work, which is probably better off written in C/C++ - so I'm not expecting the next Photoshop to be written in nodejs, but there are a ton of good examples out there that should be!)


Some good references:
- Node-Webkit Github page
- Introduction to HTML5 Desktop apps with node-webkit (a great tutorial to get started)