Monday, May 11, 2015
AngularJS custom directive with two-way binding using NgModelController
It took me a while, but I finally got it right!
I recently tried to create a custom directive with two-way binding, using the 'ng-model' attribute. This was a little tricky at first - I found some articles on the subject, but their examples didn't work for me as-is, and I needed to make some tweaks to get everything right.
I don't want to go over everything I read, but just want to publish the changes and gotchas you should know about.
The best article I read on the subject is this one : http://www.chroder.com/2014/02/01/using-ngmodelcontroller-with-custom-directives/
I recommend reading it. It has the best explanation of how '$formatters' and '$parsers' work, and what their relation to the ngModelController is.
After reading that article, there were still 2 problems I ran into:
1. ngModelController.$parsers and ngModelController.$formatters are arrays, but 'pushing' my custom function to the end of the array didn't work for me - when changing the model, it never got invoked. To make this work, I needed to push it to the beginning of the array instead, using the Array.prototype.unshift method.
2. The second problem I had was that I needed to pass ng-model an object - passing it a primitive value won't work. You might be thinking that's obvious, since a primitive won't suffice as a reference, but it wasn't obvious to me: passing ng-model a primitive when using an 'input' element, for example, works and still updates both ways.
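To make this concrete, here's a minimal sketch of how the pieces fit together (the directive name, template and 'label' property are made-up examples, and 'app' is assumed to be your angular module - for the real thing, see the repository linked below):

app.directive('myDropdown', function() {
  return {
    restrict: 'E',
    require: 'ngModel',
    template: '<div class="my-dropdown"></div>',
    link: function(scope, element, attrs, ngModelCtrl) {
      // gotcha #1 : use unshift, not push - otherwise the function may never get invoked
      ngModelCtrl.$formatters.unshift(function(modelValue) {
        return modelValue;  // model -> view
      });
      ngModelCtrl.$parsers.unshift(function(viewValue) {
        return viewValue;   // view -> model
      });

      // $render is invoked when the model changes from the outside
      ngModelCtrl.$render = function() {
        element.text(ngModelCtrl.$viewValue.label);
      };

      // when the user interacts, push the change back to the model.
      // gotcha #2 : the value is an object, not a primitive
      element.on('click', function() {
        scope.$apply(function() {
          ngModelCtrl.$setViewValue({ label: 'clicked!' });
        });
      });
    }
  };
});

It's then used like <my-dropdown ng-model="someObject"></my-dropdown>, where 'someObject' is an object on the scope, not a primitive.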
For a full working example of a two-way binding directive using ngModelController (the ng-model attribute), you can take a look at this:
https://github.com/gillyb/angularjs-helpers/tree/master/directives/dropdown
Monday, April 27, 2015
Reviewing Kibana 4's client side code
I haven't written anything technical for a while, and that's mainly because over the past year I changed jobs a few times. After working at Sears Israel for almost 3 years, I thought it was time to find the next adventure. I think I finally found a good match for me, and I'll probably write a whole post about that soon.
For now, I'll just say that at the new startup I work at, we're doing a lot of work on the ELK stack, and I got to do a lot of work on Kibana. Even with years of experience on various client side applications, I still learned a lot from looking at kibana's code. Many things here are written really elegantly, so I wanted to point them out in one concentrated post on the subject. There are also some negative notes, mainly minor things (in my opinion), that I will mention as well.
At First Glance
Kibana 4 is a large AngularJS application. The first thing I noticed when looking at the code is that it has a great structure. Many AngularJS tutorials (or any other tutorials for MVC frameworks) and code-bases I've worked on have the messy structure of a 'models' directory, a 'controllers' directory, and a 'views' (or 'templates') directory.
Kibana did the right thing by organising its code by features/components, and not by framework-role definitions. This makes it much easier to navigate through the code base, and to add more features easily.
Organising a code base by controllers, models, views, etc, doesn't do much for it except turn each directory into a pile of unrelated features, violating the Separation of Concerns principle.

(In the image you can see each component grouped in its own directory, which includes its templates, its code and its styles all together)
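The layout looks roughly like this (an approximation for illustration, not an exact listing of the repository):

src/kibana/plugins/
    discover/      <-- the component's code, route definitions, templates and styles
    visualize/
    dashboard/
    settings/
    ...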
In addition, most AngularJS applications I've seen have all their routes defined in one file (usually app.js or index.js), which goes along with many global definitions, and sometimes logic related to specific pages or models all in a single file with no relation to any feature.
Kibana's code is nicely organised, and each 'plugin' or 'component' (discover/visualize/dashboard/settings/etc) defines its own routes in its own controller.
They manage to do this by creating their own 'RouteManager' (https://github.com/elastic/kibana/blob/master/src/kibana/utils/routes/index.js). It exposes basically the same api as angular's own route provider, but it collects the routes you define, and only at the end calls angular's route manager to actually add them (by calling routes.config here : https://github.com/elastic/kibana/blob/master/src/kibana/index.js#L41).
This custom route manager also adds the ability to resolve certain things before the route is invoked, which is really useful in many situations.
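The idea, very roughly, is something like this (my own simplified sketch, not Kibana's actual implementation):

// collects route definitions from each component...
var routes = (function() {
  var pending = [];
  return {
    // same api shape as angular's $routeProvider
    when: function(path, routeConfig) {
      pending.push({ path: path, config: routeConfig });
      return this;  // chainable, like $routeProvider.when()
    },
    // ...and registers them all at once, inside the app's single config block
    config: function($routeProvider) {
      pending.forEach(function(route) {
        $routeProvider.when(route.path, route.config);
      });
    }
  };
})();

Each component just calls routes.when(...) in its own file, and the app's config block calls routes.config($routeProvider) once.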
Javascript Libraries
The creators of kibana did a great job (with a few minor exceptions that I will explain at the end) in choosing open source javascript libraries to lean on while building kibana. It's usually a good idea not to reinvent the wheel, especially when someone already did a good job before you.
RequireJS
RequireJS is a javascript module loader. It helps you create modular javascript code, and makes it really easy to deal with dependencies between modules. Kibana's code does a great job utilizing RequireJS by defining most javascript modules in the AMD standard.
A really nice trick they did here that is definitely worth mentioning is the 'Private' service they created. This is a wrapper that allows you to define a RequireJS module with angularJS dependencies, letting you use angular's dependency injection abilities side-by-side with RequireJS' module loading.
Regularly loading RequireJS modules in the code looks like this :
define(function(require) {
  var myService = require('my_service');

  // now do something with myService
});
Using the 'Private' service you load modules like this :
define(function(require) {
  var myAngularService = Private(require('my_angular_service'));

  // now you can use myAngularService
});
And most important is that my_angular_service looks like this :
define(function(require) {
  return function($q, $location, $routeParams) {
    // all angular providers in the function parameters are available here!
  };
});
The Private service uses angular's get() method to retrieve the $injector provider, and uses it to inject the dependencies we need.
(Take a look at the 'Private' service code here : https://github.com/elastic/kibana/blob/master/src/kibana/services/private.js)
lodash!
If you're not familiar with lodash, you should be. It's the missing javascript utility library that will definitely help you DRY up your javascript code. It has many "LINQ"-like methods (for those familiar with C#), and many other basic methods you would usually write yourself for iterating over objects and arrays in javascript. One of the really nice features is that you can chain most methods to make your code more readable, and since lodash uses lazy evaluation, performance is great!
I don't want to start writing about the features of lodash, but I strongly suggest reading their docs, and getting familiar with it.
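Just to give a taste of the chaining (a made-up example, using lodash 3's api, over a hypothetical 'users' array):

var _ = require('lodash');

var topScorers = _(users)
  .filter(function(user) { return user.active; })
  .sortBy(function(user) { return user.score; })
  .takeRight(5)
  .map(function(user) { return user.name; })
  .value();  // thanks to lazy evaluation, nothing actually runs until value() is called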
Almost every service, component or controller in the kibana code starts with this line :
var _ = require('lodash');
They also did a really good job extending lodash with some utility methods of their own. Take a look at these files to see for yourself :
https://github.com/elastic/kibana/blob/master/src/kibana/utils/_mixins_chainable.js
https://github.com/elastic/kibana/blob/master/src/kibana/utils/_mixins_notchainable.js
(There's one thing I don't like here, which is the 'get' and 'setValue' methods - they do a 'deepGet' and 'deepSet', which is like saying "hey, I know I have something here in this object, but have no idea where it is". This just doesn't feel right... :/ )
Some HTML5
Throughout the code there has been some good use of html5 features.
The first one I noticed and really liked is the 'Notifier' service (https://github.com/elastic/kibana/blob/master/src/kibana/components/notify/_notifier.js). I really like the abstraction here over notifying the user of different message types, and the abstraction over the browser's 'console' methods. The 'lifecycle' method (https://github.com/elastic/kibana/blob/master/src/kibana/components/notify/_notifier.js#L139) is really neat, and uses the console.group() method to group messages in the browser's console. It also uses 'window.performance.now', which is much better than the older 'Date.now()' (it's more precise, and it's relative to the navigationStart metric).
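The flavor of it is roughly this (my own sketch of the pattern, not the actual Notifier code):

var start = window.performance.now();
console.group('courier fetch');  // everything logged until groupEnd() is nested under this group

// ... the lifecycle being measured runs here ...

var elapsed = window.performance.now() - start;
console.log('complete in ' + elapsed.toFixed(2) + 'ms');
console.groupEnd();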
Kibana also makes use of the less-common <wbr/> tag. This is new in html5 and is intended to give you a little more control over where the line breaks when text overflows its container.
There's also use of 'localStorage' and 'sessionStorage' for saving many local view settings in the different kibana pages. In general, they did a great job in persisting the user's state on the client side. When navigating between tabs, it keeps you on the last view you were in when returning to the tab.
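The pattern is the familiar one (a hypothetical example - the actual keys kibana uses differ):

// persist a view setting when it changes...
sessionStorage.setItem('discover:columns', JSON.stringify(columns));

// ...and restore it the next time the view loads
var savedColumns = JSON.parse(sessionStorage.getItem('discover:columns') || '[]');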
Another nice thing is that there is a lot of use of aria-* attributes, and I see more and more of this in the newer commits. It's nice to see a big open source project dedicating time to these kinds of details.
Object Oriented Programming
There is a great deal of attention to the design of objects in the code.
First, I like the way inheritance is implemented here. A simple lodash 'mixin' allows for object inheritance.
inherits: function (Sub, Super) {
  Sub.prototype = Object.create(Super.prototype, {
    constructor: { value: Sub },
    superConstructor: Sub.Super = Super
  });
  return Sub;
}

(https://github.com/elastic/kibana/blob/master/src/kibana/utils/_mixins_chainable.js#L23)
Many objects in the code use this to inherit all the properties of some base object. Here's an example from the 'SearchSource' object :
return function SearchSourceFactory(Promise, Private) {
  var _ = require('lodash');
  var SourceAbstract = Private(require('components/courier/data_source/_abstract'));
  var SearchRequest = Private(require('components/courier/fetch/request/search'));
  var SegmentedRequest = Private(require('components/courier/fetch/request/segmented'));

  _(SearchSource).inherits(SourceAbstract);
  function SearchSource(initialState) {
    SearchSource.Super.call(this, initialState);
  }

  // more SearchSource object methods
}
(https://github.com/elastic/kibana/blob/master/src/kibana/components/courier/data_source/search_source.js#L9)
You can see the SearchSource object inherits all the base properties from the SourceAbstract object.
In addition, methods are defined on the object's prototype rather than on each instance. This is great mainly for memory usage - putting a method on the prototype means there's only one copy of it in memory, shared by all instances.
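In other words (a generic illustration, not actual kibana code):

function Source(state) {
  this.state = state;
  // BAD: this would allocate a new function object for every single instance
  // this.fetch = function() { ... };
}

// GOOD: one function, shared by every instance through the prototype
Source.prototype.fetch = function() {
  return this.state;
};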
Memory Usage
Since kibana is a big single-page application, there is a need to be careful with memory usage. Many apps like kibana can be left on in a browser for a long time without any refresh, so it's important to make sure there are no memory leaks. AngularJS makes this easy to implement, but many programmers don't bother going the extra mile for this.
In the kibana code, many directives subscribe to the '$destroy' event and unbind their event handlers, so they don't hold references to unused objects.
An example from a piece of kibana code (the css_truncate directive) :
$scope.$on('$destroy', function () {
  $elem.unbind('click');
  $elem.unbind('mouseenter');
});

(https://github.com/elastic/kibana/blob/master/src/kibana/directives/css_truncate.js#L41)
Code Conventions
Kibana's code is mostly very organized and, more importantly, readable. A small negative point goes here for some naming inconsistencies: some classes have methods that start with '_' and some don't, with no clear rule behind it.
For an example of this, look at the DocSource object. The file even has 'Public API' and 'Private API' comments, but the naming differences between the two sections aren't clear.
(https://github.com/elastic/kibana/blob/master/src/kibana/components/courier/data_source/doc_source.js)
Code Comments
I can say the code has enough comments, though that's hard to quantify, since most of the code is readable without any comments, which is an amazing thing in itself. There are great comments in most places that should have them.
A funny anecdote: I was surprised to see comments that actually draw, in ascii art, the function they describe! Kudos!
/**
 * Create an exponential sequence of numbers.
 *
 * Creates a curve resembling:
 *
 *                 ;
 *                /
 *               /
 *            .-'
 *         _.-"
 *      _.-'"
 *    _,.-'"
 *   _,..-'"
 *  _,..-'""
 * _,..-'""
 * ____,..--'""
 *
 * @param {number} min - the min value to produce
 * @param {number} max - the max value to produce
 * @param {number} length - the number of values to produce
 * @return {number[]} - an array containing the sequence
 */
createEaseIn: _.partialRight(create, function (i, length) {
  // generates numbers from 1 to +Infinity
  return i * Math.pow(i, 1.1111);
})
(https://github.com/elastic/kibana/blob/master/src/kibana/utils/sequencer.js#L29)
CSS Styling
Another great decision here was using less for the css files. This allows for small and concise 'less' files, and easy reuse of css components (via 'mixins'). An especially good job was done with colors - all colors are defined in a single file (https://github.com/elastic/kibana/blob/master/src/kibana/styles/theme/_variables.less). By editing this file, you can easily create your own color scheme.
(There are a few exceptions - mainly a few colors defined in js files or css files, but it's 99% covered in _variables.less.)
Build Process
Kibana has a grunt build process set up. It compiles the less files, combines the css and js files (using r.js, without minifying), adds cache-busting parameters to the resource urls, and runs some more small tasks.
I would be happy to see this upgraded to using gulp, which is stream based and has a much nicer api (in my opinion), but grunt still does the job.
Performance
After writing so many good points about kibana's source code, this is the one area where I don't have much good to say.
Maybe it's because kibana was built as an internal tool, never meant to be served over the internet, and maybe I'm just overly sensitive after working for quite a while on the performance team at Sears Israel (on ShopYourWay.com). Either way, if this were a public website, its performance would be considered under-par.
JS files aren't minified. They are combined, but not minified. Unfortunately, the code isn't even prepared for minification: for that, angularjs dependencies need to be declared as strings alongside the function, otherwise angularjs's dependency injection mechanism breaks once the parameter names are mangled.
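For those unfamiliar with the issue - minifiers rename function parameters, and angularjs infers dependencies from parameter names. The standard inline array annotation avoids this:

// breaks after minification - angular infers the dependencies from the parameter names :
app.controller('HomeController', function($scope, $http) { /* ... */ });

// survives minification - the dependencies are declared as strings, which minifiers don't touch :
app.controller('HomeController', ['$scope', '$http', function($scope, $http) { /* ... */ }]);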
CSS files aren't minified either, just combined.
JS files are ~5MB !!! Yes, almost 5MB!! That's huge, and it's all downloaded on kibana's initial load. This could have been split into a few separate files, downloading only the ones needed for the initial view first - that alone would be a great improvement. That said, there are advantages to not minifying the js, and I think that's what the creators had in mind: it's easier to debug with DevTools (no need for source maps), and although the initial load takes a long time, after that there is no wait on any other page. If the resources are cached on your machine, even coming back to kibana a second time should be really fast.
There are also some libraries in the source code which I think are redundant and maybe could've been removed with a little extra work. One example is jquery, which is generally frowned upon alongside angularjs - AngularJS comes with jqlite, a smaller version of jquery, which should suffice.
I hope it doesn't sound like I think they did a bad job - I'm just pointing out some areas that maybe could've been done differently. All in all, the app is amazing and works great! :)
In conclusion
I had a great time learning and working (and still working) on kibana's code. I tried to show a lot of the things I like about the code, and to point out a few minor things I don't. I hope you enjoyed reading this, and kudos to you if you got this far! :)
I also hope to write another post about how kibana communicates with elasticsearch, and maybe another one on how it renders the visualizations with the help of D3.js.
Tuesday, February 17, 2015
Simple nodejs desktop time tracking utility
I recently wrote about Creating desktop applications with nodejs...
Well, I was playing around a little with node-webkit (again!) - a nodejs framework for building cross-platform desktop applications. Within a few hours I had built a super simple time tracking utility that I'd needed for quite some time!
I know there are a ton of utilities like this out there already, but all of them have many more features than I want or need, and they annoy me too much while using them. This utility does *nothing* but track time. You just add a task and it starts timing it. You can stop and start tasks, and just remove them when you're done.
I'm not yet such a productivity fanatic that I need history graphs showing how productive I've been lately. It's really more for me to see whether the tasks I'm working on take as long as I think they should.
So here it is: https://github.com/gillyb/tt-trakr
All the code is there.
There's also a compiled executable for windows ready inside the 'Installation' folder.
Now that I have a mac, I want to compile it for mac soon too.
(I'll also probably be making some UI improvements and maybe adding some more small features in the future, so follow the repository if you're interested.)
And here's a picture of what it looks like :
:),
Gilly.
Saturday, November 8, 2014
Online multiple javascript compression tool
Minifying/compressing javascript files has been standard practice in website development for a while now. It is very important to save as much space as you can, and have your users download as little as possible, to improve the performance of your site.
I don't think you'll find an article out there that denies it.
An important question that arises from this is which compressor/minifier should you use ?
Different famous open source projects use different compressors, and I'm guessing (or at least hoping) they chose them wisely, relying on benchmarks they ran on their own code.
You see, each compressor works differently, so different code bases won't be affected in the same way by different compressors.
In the past I used to manually test my code against different compressors to see which one was best for me. I finally got sick of doing it manually, so I decided to look for a tool that would do the job for me. Surprisingly, I didn't find one that did exactly that, so I quickly wrote a script that does it. Then I decided to design a UI for it and put it online for others to enjoy as well.
I present to you : http://compress-js.com
You can paste in code, or drag some js files onto it, and choose which compressor you want. Or - and this is the default - choose 'Check them all', which will compress your code using the most popular compressors and show you the results and the compressed size from each of them. You can download the files directly from the site.
Here's a screenshot :
Currently the site can compress your javascript code with YUI Compressor, UglifyJS, JSMin and Google's Closure Compiler.
If you have any thoughts or suggestions on how to improve, feel free to drop a comment below. :)
Tuesday, November 4, 2014
Lazy loading directives in AngularJS the easy way
The past few months I've been doing a lot of work with AngularJS, and currently I'm working on a single page application which is supposed to be quite big in the end. Since I have the privilege of building it from scratch, I'm taking many client-side performance considerations in mind now, which I think will save me a lot of hard work optimizing in the future.
One of the main problems is HUGE amounts of js files being downloaded to the user's computer. A great way to avoid this is to download only the minimum the user needs, and to dynamically load more resources in the background, or as the user reaches pages that require a specific feature.
AngularJS is a great framework, but doesn't have anything built in that deals with this, so I did some research myself...
I ran into some great articles on the subject, which really helped me a lot (and I took some ideas from), but weren't perfect.
A great article on the subject is this one : http://www.bennadel.com/blog/2554-loading-angularjs-components-with-requirejs-after-application-bootstrap.htm
The important part is that it explains how to dynamically load angularjs directives (or other components) after bootstrapping your angularjs app.
What I didn't like about that article is that the writer's example requires RequireJS and jQuery along with all the AngularJS files you already have. That alone will make your app really heavy, and I don't think it needs to be that way.
Let me show you how I wrote a simple AngularJS service that can dynamically load directives.
The first crucial step is to save a reference to $compileProvider. This provider is available to us while bootstrapping, but not later, and it is what will compile our directives for us.
var app = angular.module('MyApp', ['ngRoute', 'ngCookies']);

app.config(['$routeProvider', '$compileProvider', function($routeProvider, $compileProvider) {
  $routeProvider.when('/', {
    templateUrl: 'views/Home.html',
    controller: 'HomeController'
  });

  app.compileProvider = $compileProvider;
}]);
Now, we can write a service that will load our javascript file on demand, and compile the directive for us, to be ready to use.
This is a simplified version of what it should look like :
app.service('LazyDirectiveLoader', ['$rootScope', '$q', '$compile', function($rootScope, $q, $compile) {

  // This is a dictionary that holds which directives are stored in which files,
  // so we know which file we need to download for the user
  var _directivesFileMap = {
    'SexyDirective': 'scripts/directives/sexy-directive.js'
  };

  var _load = function(directiveName) {
    // make sure the directive exists in the dictionary
    if (!_directivesFileMap.hasOwnProperty(directiveName)) {
      console.log('Error: unrecognized directive : ' + directiveName);
      return;
    }

    var deferred = $q.defer();
    var directiveFile = _directivesFileMap[directiveName];

    // download the javascript file
    var script = document.createElement('script');
    script.src = directiveFile;
    script.onload = function() {
      $rootScope.$apply(deferred.resolve);
    };
    document.getElementsByTagName('head')[0].appendChild(script);

    return deferred.promise;
  };

  return {
    load: _load
  };

}]);
Now we are ready to load a directive, compile it and add it to our app so it is ready for use.
To use this service we will simply call it from a controller, or any other service/directive like this:
app.controller('CoolController', ['LazyDirectiveLoader', function(LazyDirectiveLoader) {
  // lets say we want to load our 'SexyDirective', all we need to do is this :
  LazyDirectiveLoader.load('SexyDirective').then(function() {
    // now the directive is ready...
    // we can redirect the user to a page that uses it!
    // or dynamically add the directive to the current page!
  });
}]);
One last thing to notice is that your directives now need to be defined using '$compileProvider', and not the way we would define them regularly. This is why we exposed $compileProvider on our 'app' object for later use. So our directive js file should look like this:
app.compileProvider.directive('SexyDirective', function() {
  return {
    restrict: 'E',
    template: '<div class="sexy"></div>',
    link: function(scope, element, attrs) {
      // ...
    }
  };
});
I wrote earlier that this is a simplified version of what it should look like, since there are some changes that I would make before using it as is.
First I would probably add some better error handling to look out for edge cases.
Second, we wouldn't want the same page to attempt to download the same file several times, so I would probably add a cache mechanism for loaded directives (see the sketch right after this list).
Also, I wouldn't want the list of directive files (the variable _directivesFileMap) directly in my LazyDirectiveLoader service, so I would probably create a separate service that holds this list and inject it into the LazyDirectiveLoader. The service holding the list would be generated by my build system (in my case I created a gulp task to do this), so I never need to remember to keep the file map updated.
Finally, I would extract the part that loads the javascript file into a separate service, so I can easily mock it in the tests I write. I don't like touching the DOM in my services, and if I have to, I'd rather isolate it in a small separate service that's easy to mock.
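For the caching point, here's a minimal sketch of what I mean, built on top of the _load method from above (the '_loadedDirectives' name is just something I made up for illustration):

var _loadedDirectives = {};

var _cachedLoad = function(directiveName) {
  // memoize the promise per directive, so repeated calls
  // don't append the same <script> tag to the page again
  if (!_loadedDirectives.hasOwnProperty(directiveName)) {
    _loadedDirectives[directiveName] = _load(directiveName);
  }
  return _loadedDirectives[directiveName];
};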
I uploaded a slightly better (and a little less simplified) version of this over here : https://github.com/gillyb/angularjs-helpers/tree/master/directives/lazy-load
Saturday, August 9, 2014
AngularJS hack/tip :: Invoking JS code after DOM is ready
When working with AngularJS, you frequently update the DOM after the DOM was already 'ready'.
What I mean by that is that the browser will load the DOM, and the template will load completely. BUT, your template might have an 'ng-if' or 'ng-repeat' directive whose content will only be attached to the DOM slightly later, since you might be setting its data from an ajax response inside the controller.
This will happen when your code is similar to this pattern :
app.controller('MyAngularController', function($scope, $http) {
  $http.get('www.someURL.com/api').success(function(response) {
    // Add some data to the scope
    $scope.Data = response;

    // This caused the DOM to change
    // so invoke some js that will take care of the new DOM changes
    DoSomeJS();
  });
});
The main problem with this code is that most of the time when the method DoSomeJS() is invoked, the DOM changes caused by the changes to $scope won't be 'ready'.
This is because of the way angularJS is built -
Each property on the scope has a 'watcher' attached to it, checking it for changes. Once the property changes, it triggers a '$digest' loop, which is responsible for updating the model and the view. This runs asynchronously (for performance reasons, I guess), and it actually gives you the great ability of invoking js code immediately after updating the scope without waiting for the DOM to be updated - something you'll probably want as well from time to time. (The nitty-gritty details of how this works behind the scenes are interesting, but would take me too long to go through in this post. For the brave ones among us, I encourage you to look into the code yourself --> https://github.com/angular/angular.js/blob/master/src/ng/rootScope.js#L667)
So, how can we invoke some JS code, and make sure it runs only after the DOM was updated ?
Well, one quick and hacky way to do this is to let a js timer invoke your code with a '0' delay. Since JS is single-threaded, running a timer with a 0ms delay doesn't mean the JS runs immediately. What it does is push the code to 'the end of the line', invoking it once the JS thread is free.
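A tiny example of the ordering this produces:

console.log('a');
setTimeout(function() { console.log('c'); }, 0);  // queued, not run immediately
console.log('b');
// prints: a, b, c - the callback only runs once the current code has finished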
The updated code looks like this :
app.controller('MyAngularController', function($scope, $http, $timeout) {
  $http.get('www.someURL.com/api').success(function(response) {
    // Add some data to the scope
    $scope.Data = response;

    // This caused the DOM to change
    // so invoke some js that will take care of the new DOM changes
    $timeout(DoSomeJS);
  });
});

Note: invoking '$timeout()' like we did is just like invoking 'setTimeout(fn, 0);' - $timeout is an angularJS service that wraps setTimeout.
A great read on how JS timers are invoked : Understanding Javascript timers
But wait, this whole solution is a hack, isn't it ?!...
Yes, and truth be told, when I first ran into this problem this was the first solution I came up with. Only afterwards did I realize that I don't want any js code in my controller touching the DOM.
I still decided to write this post though, to explain a little about JS timers and angular $digest.
The solution I would favor in this case is to put a custom directive on the DOM that is being inserted dynamically, and to put the DOM-modifying code in the directive's 'link' method.
And the code should look more like this :
app.directive('myDirective', function() {
  return {
    restrict: 'A',
    link: function(scope, elem, attrs) {
      // DO WHATEVER WE WANT HERE...
    }
  };
});
In angular, directives describe the various elements of the templates, and therefore I feel they are the 'right' place for most of the code that modifies the DOM. I like to keep my controllers clean of DOM manipulation, and have them just construct the models they pass on to the template.
Saturday, April 12, 2014
Debugging and solving the 'Forced Synchronous Layout' problem
If you're using Google Developer tools to profile your website's performance, you might have realized that Chrome warns you about doing 'forced layouts'.
This looks something like this :
In this screenshot, I marked all the warning signs chrome tries to give you so you can recognize this problem.
So, what does this mean ?
When the browser constructs a model of the page in memory, it builds 2 trees that represent the page. One is the DOM structure itself, and the other is a tree that represents the way the elements should be rendered on the screen (the render tree).
These trees need to always stay updated, so when you change an element's css properties, for example, the browser might need to update them in memory to make sure that the next time you request a css property, the browser has up-to-date information.
Why should you care about this ?
Updating both these trees in memory can take some time. Although they are in memory, most pages these days have quite a big DOM, so the trees will be pretty big. It also depends on which element you change, since updating different elements might mean updating only part of a tree or the whole tree.
Can we avoid this ?
The browser can realize that you're trying to update many elements at once, and will optimize itself so that a full tree update doesn't happen after every single change, but only when the browser knows it needs up-to-date data. For this to work correctly, we need to help it out a little.
A very simple example of this scenario might be setting and getting 2 different properties, one after the other, as so :
var a = document.getElementById('element-a');
var b = document.getElementById('element-b');

a.style.width = '100px';
var aWidth = a.clientWidth;

b.style.width = '200px';
var bWidth = b.clientWidth;
In this simple example, the browser will update the whole layout twice. This is because after setting the first element's width, we immediately read an element's width back. When retrieving a layout property, the browser knows it needs updated data, so it goes and updates the whole DOM tree in memory. Only then will it continue to the next lines, which soon after cause another update for the same reason.
This can simply be fixed by changing around the order of the code, as so :
var a = document.getElementById('element-a');
var b = document.getElementById('element-b');

a.style.width = '100px';
b.style.width = '200px';

var aWidth = a.clientWidth;
var bWidth = b.clientWidth;
Now, the browser will apply both style changes one after the other without updating the tree. Only when the width is read on line 7 will it update the DOM tree in memory, and it will keep it updated for line 8 as well. We easily saved one update.
Is this a 'real' problem ?
There are a few blogs out there talking about this problem, and they all seem like textbook examples of it. When I first read about this, I too thought it was a little far-fetched and not really practical.
Recently though I actually ran into this on a site I'm working on...
Looking at the profiling timeline, I realized the same pattern (which was a bunch of rows alternating between 'Layout' and 'Recalculate Style').
Clicking on the marker showed that this was actually taking around ~300ms.
I can see that the evaluation of the script was taking ~70ms which I could handle, but over 200ms was being wasted on what?!...
Luckily, when clicking on the script in that dialog, it displays a JS stacktrace of the problematic call. This was really helpful, and directed me exactly to the spot.
It turned out I had a piece of code that looped over a list of elements, checked each element's height, and set the container's height according to the aggregated height. The container's height was being read and written in every loop iteration, causing a performance hit.
The problematic code looked something like this :
var appendItemToContainer = function(item) {
    // read the container's current height, then write the new one -
    // a layout-forcing read followed by a write, on every iteration
    container.style.height = (container.clientHeight + item.clientHeight) + 'px';
};

for (var i = 0; i < containerItems.length; i++) {
    appendItemToContainer(containerItems[i]);
}
You can see that the 'for' loop calls the method 'appendItemToContainer', which sets the container's height according to its previous height - a read and a write of layout data in the same statement.
I fixed this by looping over all the items in the container and summing their heights, then setting the container's height once at the end. This removed many DOM tree updates and left only the single one that is necessary.
The fixed code looked something like this :
// collect the heights of all the elements
var totalHeight = 0;
for (var i = 0; i < containerItems.length; i++) {
    totalHeight += containerItems[i].clientHeight;
}

// set the container's height once
container.style.height = totalHeight + 'px';
After fixing the code, I saw that the time spent was actually much less now -
As you can see, I managed to save a little over 150ms, which is great for such a simple fix!
Friday, February 21, 2014
Chrome developer tools profiling flame charts
CodeProject
I just recently, and totally coincidentally, found out that Chrome developer tools can generate flame charts while profiling js code!
Recently it seems like generating flame charts from profiling data has become popular in languages like Ruby, Python and PHP, so I'm excited to see that Chrome has this option for js code as well.
The default view for profiling data in the dev tools is the 'tree view', but you can easily change it to 'flame chart' by selecting it on the drop down in the bottom part of the window.
Like here :

Then you will be able to see the profiling results in a way that is sometimes easier to read.
You can use the mouse scroll button to zoom in on a specific area of the flame chart, and see what's going on there.
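By the way, if you'd rather trigger profiling from code instead of pressing the record button, the console profiling API captures the same data (a minimal sketch - 'renderAll' is a hypothetical function you want a flame chart of) :

// start a named CPU profile (works while the dev tools are open)
console.profile('render-profile');

renderAll();

// stop it - the profile will then show up in the Profiles panel
console.profileEnd('render-profile');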
In case you're not familiar with reading flame charts, here's a simple explanation -
- Each colored line is a method call
- The method calls above one another represent the call stack
- The width of the lines represents how long each call was
And here you can see an example of a flame chart, where I marked a few sections the flame chart points out for us - non-optimized TryCatchBlocks. In this case the flame chart view is convenient because you can see nicely how many method calls each try/catch block surrounds.

Wednesday, February 19, 2014
Preloading resources - the right way (for me)
CodeProject
Looking through my 'client side performance glasses' when browsing the web, I see that many sites spend too much time downloading resources, mostly on the homepage, but sometimes the main bulk is on subsequent pages as well.
Starting to optimize
When trying to optimize your page, you might think it's most important that your landing page is the fastest, since it defines your users' first impression. So what do you do ? You probably cut down on all the js and css resources you can, and leave only what's definitely required for your landing page. You minify those, and then you're left with one file each. You might even put the js at the end of the body so it doesn't block the browser from rendering the page, and you're set!
But there's still a problem
Now your users go on to the next page, probably an inner page of your site, and this one is filled with much more content. On this page you use some jquery plugins and other frameworks you found useful, which probably saved you hours of javascript coding, but your users are paying the price...
My suggestion
I ran into this exact problem a few times in the past, and the best way I found of solving it was to preload the resources on the homepage. I do this after 'page load' so it doesn't block the homepage from rendering, and while the user is looking at the homepage, a little extra time is spent in the background downloading resources they'll probably need on the next pages they browse.
How do we do this ?
Well, there are several techniques, but before choosing the right one, let's take a look at the requirements/constraints we have -
- We want to download js/css files in a non-blocking way
- Trigger the download ourselves so we can defer it to after 'page load'
- Download the resources in a way that won't execute them (css and js) (This is really important and the reason we can't just dynamically create a '<script/>' tag and append it to the '<head/>' tag!)
- Make sure they stay in the browser's cache (this is the whole point!)
- Work with resources that are stored on secure servers (https). This is important since I would like it to preload resources from my secured registration/login page too if I can.
- Work with resources on a different domain. This is very important since all of my resources are hosted on an external CDN server with a different subdomain.
The different techniques are (I have tested all of these, and these are my notes)
1. Creating an iframe and appending the script/stylesheet file inside it
var iframe = document.createElement('iframe');
iframe.setAttribute("width", "0");
iframe.setAttribute("height", "0");
iframe.setAttribute("frameborder", "0");
iframe.setAttribute("name", "preload");
iframe.id = "preload";
iframe.src = "about:blank";
document.body.appendChild(iframe);

// gymnastics to get a reference to the iframe document
iframe = document.all ? document.all.preload.contentWindow : window.frames.preload;
var doc = iframe.document;
doc.open();
doc.writeln("");
doc.close();

var iFrameAddFile = function(filename) {
    var css = doc.createElement('link');
    css.type = 'text/css';
    css.rel = 'stylesheet';
    css.href = filename;
    doc.body.appendChild(css);
}

iFrameAddFile('http://ourFileName.js');

This works on Chrome and FF, but on some versions of IE it wouldn't cache the secure resources (https).
So, close, but no cigar (at least, not fully).
2. Creating a javascript Image object
new Image().src = 'http://myResourceFile.js';

This only works properly on Chrome. On FireFox and IE it would either not download the secure resources, or download them but without caching.
3. Building an <object/> tag with file in data attribute
var createObjectTag = function(filename) {
    var o = document.createElement('object');
    o.data = filename;

    // IE stuff, otherwise 0x0 is OK
    // (isIE is assumed to be a browser-detection boolean defined elsewhere)
    if (isIE) {
        o.width = 1;
        o.height = 1;
        o.style.visibility = "hidden";
        o.type = "text/plain";
    }
    else {
        o.width = 0;
        o.height = 0;
    }
    document.body.appendChild(o);
}

createObjectTag('http://myResourceFile.js');

This worked nicely on Chrome and FF, but not on some versions of IE.
4. XMLHttpRequest a.k.a. ajax
var ajaxRequest = function(filename) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', filename);
    xhr.send('');
}

ajaxRequest('http://myResourceFile.js');

This technique won't work with files on a different domain, so I immediately dropped it.
5. Creating a 'prefetch' tag
var prefetchTag = function(filename) {
    var link = document.createElement('link');
    link.href = filename;
    link.rel = "prefetch";
    document.getElementsByTagName('head')[0].appendChild(link);
}

prefetchTag('http://myResourceFile.js');
6. 'script' tag with invalid 'type' attribute
// creates a script tag with an invalid type, like 'script/cache'
// I realized this technique is used by LabJS for some browsers
var invalidScript = function(filename) {
    var s = document.createElement('script');
    s.src = filename;
    s.type = 'script/cache';
    document.getElementsByTagName('head')[0].appendChild(s);
}

invalidScript('http://myJsResource.js');

This barely worked properly in any browser. It would download the resources, but wouldn't cache them for the next request.
Conclusion
So, first I must say that, given all the constraints I have, this turned out to be more complicated than I initially thought it would be.
Some of the techniques worked well on all of the browsers for non-secured resources (non SSL), but only on some browsers for secured resources. In my specific case I decided to go with one of those, figuring that some users won't have the SSL-page resources cached (these are a minority in my case).
But, I guess that given your circumstances, you might choose a different technique. I had quite a few constraints that I'm sure not everyone has.
Another thing worth mentioning is that I didn't test Safari on any technique. Again, this was less interesting for me in my case.
I also haven't thought about solving this problem on mobile devices yet. Since mobile bandwidth is usually much slower, I might tackle it differently there...
Monday, November 11, 2013
Some jQuery getters are setters as well
CodeProject
A couple of days ago I ran into an interesting characteristic of jQuery -
Some methods which are 'getters' are also 'setters' behind the scenes.
I know this sounds weird, and you might even be wondering why the hell this matters... Just keep reading and I hope you'll understand... :)
If you call the element dimension methods in jquery (which are height(), innerHeight(), outerHeight(), width(), innerWidth() & outerWidth()), you'd probably expect them to just check the javascript object's properties and return the result.
The reality is that sometimes jquery needs to do more complicated work in the background...
The problem :
If you have an element which is defined as 'display:none', calling 'element.clientHeight' in javascript, which should return the element's height, will return '0'. This is because a 'hidden' element using 'display:none' isn't rendered on the screen, so the browser never knows how much space it would visually take, leading it to think its dimensions are 0x0 (which is right in some sense).
How jquery solves the problem for you :
When you ask jquery for the height of a 'display:none' element (by calling $(element).height()), it's cleverer than that.
It can identify that the element is defined as 'display:none', and takes some steps to get the actual height of the element :
- It copies all the element's styles to a temporary object
- Defines the object as position:absolute
- Defines the object as visibility:hidden
- Removes 'display:none' from the element. After this, the browser is forced to 'render' the element, although it doesn't actually display it on the screen, because it is still defined as 'visibility:hidden'.
- Now jquery knows what the actual height of your element is
- Swaps back the original styles and returns the value.
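To make the steps concrete, here is a rough sketch of the same trick in plain javascript (an approximation for illustration, not jQuery's actual source) :

function measureHiddenHeight(el) {
    // remember the original inline styles
    var old = {
        display: el.style.display,
        position: el.style.position,
        visibility: el.style.visibility
    };

    // let the browser render it, invisibly and out of the document flow
    el.style.position = 'absolute';
    el.style.visibility = 'hidden';
    el.style.display = 'block';

    var height = el.clientHeight; // the browser can now compute this

    // swap the original styles back
    el.style.display = old.display;
    el.style.position = old.position;
    el.style.visibility = old.visibility;

    return height;
}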
Okay, so now that you know this, why should you even care ?
The step where jquery changes the styles of your element without you knowing, forcing the browser to 'render' the element in the background, can take time. Not a lot of time, but some - probably a few milliseconds. Doing this once wouldn't matter to anyone, but doing it many times, let's say in a loop, might cause performance issues.
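Here is a minimal sketch of the pattern and the obvious fix ('#hidden-panel', 'items' and 'positionItem' are hypothetical) :

// wasteful - calling outerHeight() inside the loop forces jQuery's
// style-swapping dance (and a layout) on every iteration
var $panel = $('#hidden-panel');
for (var i = 0; i < items.length; i++) {
    positionItem(items[i], $panel.outerHeight());
}

// cheaper - measure once outside the loop and reuse the value
var panelHeight = $panel.outerHeight();
for (var j = 0; j < items.length; j++) {
    positionItem(items[j], panelHeight);
}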
Real life!
I recently found a performance issue on our site that was caused by this exact behavior. The 'outerHeight()' method was being called in a loop many times, and fixing this gave an improvement of ~200ms. (Why saving 200ms can save millions of dollars!)
I will soon write a fully detailed post about how I discovered this performance issue, how I tracked it down, and how I fixed it.
Always a good tip!
Learn how your libraries work under the hood. This will give you great power, and a good understanding of how to use them efficiently.
Wednesday, August 14, 2013
My talk on Latency & Client Side Performance
As an engineer on the company's 'Core team', I'm part of the group responsible for making the site as available as we can, while keeping great performance and standing up to heavy load. We set high goals, and we're working hard to achieve them.
Up until a while ago, we were focusing mainly on server side performance - Looking at graphs under various load and stress tests, and seeing how the servers perform, each time making more and more improvements in the code.
A few weeks ago we started putting a lot of focus on latency and client side performance. I have taken ownership of this area, and I'm following the results and creating tasks that will improve performance every day.
Since I've been reading a lot about it lately, and working on it a lot, I decided to create a presentation on the subject to teach others some lessons learned from the short time I've been at it...
Here are the slides : http://slid.es/gillyb/latency
There are many details you'll be missing by just looking at the slides, but if this interests you then you should take a look anyway. The last slide also has many of the references I took the information from - I strongly recommend reading them, they are all interesting! :)
I might add some future posts about specific client side performance tips and go into much more detail.
I'm also thinking about presenting this at some meetup that will be open to the public... :)

Saturday, July 27, 2013
Improving website latency by converting images to WebP format
CodeProject
A couple of years ago Google published a new image format called WebP (*.webp). This format is supposed to be much smaller in size without a noticeable loss in quality. You can convert jpeg images to webp without noticing the difference, get a smaller image file, and even keep transparency support.
According to Ilya Grigorik (performance engineer at google) - you can save 25%-35% on jpeg and png formats, and 60%+ on png files with transparency! (http://www.igvita.com/2013/05/01/deploying-webp-via-accept-content-negotiation/)
Why should we care about this ?
Your web site latency is super important! If you don't measure it by now, then you really need to start. In commerce sites it's already been proven that better latency directly equals more revenue (Amazon makes 1% more in revenue by saving 100ms).
How is this new image format related to latency ?
If your site has many images, then your average user is probably spending a fair amount of time downloading them. Think of a site like pinterest, which is mostly composed of user-uploaded images - the user downloads many new images with each page view.
On a PC at home with a DSL connection this might not seem like a lot, but we all know that a big percentage of our users are on mobile devices with a 3G connection, which is much slower, so they suffer from much longer download times.
What are our options ?
Just converting all our images to WebP is clearly not an option. Why ? Well, some people in the world have special needs. In this case I'm referring to people with outdated browsers (we all know who they are!).
BUT, we can still let some of our users enjoy the benefit of a faster site, and this includes many mobile users as well!
We will need to make some changes to our site in order to support this, so let's see what we can do -
(Technical details on implementation at the end)
Option #1 - Server side detection :
When our server gets the request, we can detect if the user's browser supports webp, and if so reply with an html source that has '*.webp' image files in it.
This option comes with a major downside - you will no longer be able to cache the page's html output (via OutputCaching or a CDN like Akamai), since different users can get different source code for the same exact page.
Option #2 - Server side detection of image request :
This means we always request the same file name, like 'myImage.png', and add server code that detects whether the client supports webp - if so, we send back the same image, but in webp format.
This option has a similar downside - now we can cache the html output, but when sending the image files to the user we must mark them as 'non-cacheable', since the contents can vary depending on the user's browser.
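For what it's worth, here is a minimal sketch of what option #2 could look like as an expressjs middleware (expressjs comes up later in this blog; the '/public' directory layout is an assumption, and you'd still need to generate the *.webp files next to the originals) :

var fs = require('fs');
var express = require('express');
var app = express();

// if the browser advertises webp support in its Accept header,
// rewrite image requests to the *.webp version when one exists on disk
app.use(function(req, res, next) {
    var accept = req.headers.accept || '';
    if (/\.(png|jpe?g)$/.test(req.url) && accept.indexOf('image/webp') !== -1) {
        var webpUrl = req.url.replace(/\.(png|jpe?g)$/, '.webp');
        if (fs.existsSync(__dirname + '/public' + webpUrl)) {
            req.url = webpUrl;
            res.set('Vary', 'Accept'); // tell caches that the response varies
        }
    }
    next();
});

app.use(express.static(__dirname + '/public'));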
Option #3 - Client side detection :
Many big sites defer the downloading of images on the client until the document is ready. This is also a trick to improve latency - the client downloads all the resources it needs, the browser renders everything, and only then do the images start downloading. For image-intensive sites this is crucial, since it allows the user to start interacting with the site without waiting for many images that might not even be relevant at the moment.
This is done by inserting a client side script that will detect if the browser supports webp format. If so, you can change the image requests to request the *.webp version of the image.
The downside to this option is that you can only use it if the browser supports the webp format.
(btw - you could decide to go extreme with this and always download the webp version; if the client doesn't support it, there are js decoders that can convert the image on the client. This seems a little extreme to me, and you'd probably spend a lot of time decoding in js anyway.)
The gritty details -
How can we detect if our browser supports webp ?
Don't worry, there's no need to look up which browsers support webp and test against a list. Browsers that support the webp format should declare it when requesting images. We can see this done by Chrome (in the newer versions) :
You can see in the request headers 'Accept: image/webp'
How do we do this on the client ?
In javascript we don't have access to the request headers, so we need to get creative.
There is a trick that can be done by actually rendering an image on the client, using base64 to store the image in the code, and then detecting whether the browser loaded the image successfully.
This will do the trick :
$("") .attr('src', 'data:image/webp;base64,UklGRh4AAABXRUJQVlA4TBEAAAAvAQAAAAfQ//73v/+BiOh/AAA=') .on("load", function() { // the images should have these dimensions if (this.width === 2 || this.height === 1) { alert('webp format supported'); } else { alert('webp format not supported'); } }).on("error", function() { alert('webp format not supported'); });
How do we convert our images to webp format ?
We can do it manually using Google's converter - https://developers.google.com/speed/webp/docs/cwebp (a typical invocation is something like 'cwebp -q 80 image.png -o image.webp').
Doing it programmatically depends on what language you're using.
There's a wrapper for C# - http://webp.codeplex.com/
(and there are more for other languages, but not all - I'm actually looking for a java wrapper, and couldn't find one yet)
So, should I run ahead and do this ?
All this good does come with a price, as all good things do... :)
There might be side effects you haven't thought of yet. One of them is that if a user sends a link to an image that ends with .webp, and the person receiving it is using a browser that doesn't support the format, they won't be able to open the image.
What's more, even if the user does use a new browser (e.g. a new version of Chrome) and saves a webp file to disk, they probably won't be able to open it on their own computer.
These are problems that facebook ran into, and eventually retreated from the idea of using webp. You can read all about that here.
Which browsers did you say support this ?
According to www.caniuse.com - Chrome has obviously been supporting it for a while, Opera supports it too, and FireFox is supposed to start supporting it really soon as well. The most important news is that Android browsers, Chrome for Android and Opera Mobile all support it, which means many of your mobile users can gain from this change.
If you're still reading and want more information -
- Ilya Grigorik explains how to implement this using your CDN and NginX
- An excellent presentation on web image optimization by Guy Podjarny
Sunday, February 17, 2013
Getting started with nodejs - building an MVC site
CodeProject
A couple of weeks ago I started getting into nodejs. At first I was quite skeptical, I don't even recall why, but after playing with it for just a couple of hours I started loving it. Seriously, it's so simple to use, and it seems like the nodejs eco-system is growing really fast. I'm not going to go into what nodejs is or how it works, so if you don't know, you should start by reading this.
What I am going to show here is a really quick and simple tutorial on how to get started building a website on nodejs using the MVC design pattern. I'll go over the quick installation of nodejs and walk through getting a very basic mvc wireframe website up and running.
(Since I've been a .net developer for quite a while, I might be comparing some of the terms used to the terminology .net programmers are familiar with)
Installing nodejs
First, download and install nodejs.
On ubuntu, this would be :
sudo apt-get install nodejs

(If your ubuntu version is lower than 12.10 then you need to add the official PPA first. Read this)
Now, you need to install the npm (nodejs package manager) :
sudo apt-get install npm

This will help us install packages built for nodejs (exactly like 'NuGet' for Visual Studio users).
Starting our website
I've looked up quite a few mvc frameworks for nodejs, and I would say that the best one, by far, is expressjs. It's really easy to use and it's being actively updated.
Create a directory for your website, navigate there in the terminal, and type
sudo npm install express
Now we need to tell nodejs how to configure our application, where are the controllers/models/views, and what port to listen to...
Create a file called index.js in the website directory you created -
First things first :
var express = require('express');
var app = express();
This defines 'app' as our expressjs web application, and gives us all the cool functionality that comes with the expressjs framework.
After that we need to configure our application :
app.configure(function() {
    app.set('view engine', 'jade');
    app.set('views', __dirname + '/views');

    app.use(express.logger());
    app.use(express.bodyParser());
    app.use(express.cookieParser());

    app.use(express.static(__dirname + '/scripts'));
    app.use(express.static(__dirname + '/css'));
    app.use(express.static(__dirname + '/img'));

    app.use(app.router);
});
The first two lines tell express we're going to use the 'jade' view engine to render our views (this is like 'razor', but a little different, for people coming from .net mvc). You can read about how the view engine works over here. The next 3 lines tell express to use certain middleware ('middleware' is like 'filters' in the asp.net mvc world). Middleware intercepts each request and can do whatever it wants with it, including manipulating the request. Basically, each middleware is a function that is called with the request object, the response object, and a 'next' function, respectively.
The 'next' object is a function that calls the next middleware in line.
All the middleware are called in the same order they are defined. The middleware I use here are basic ones that come with the expressjs framework, and they just make our life much easier (by parsing the request body and cookies onto our request/response objects, and logging each request for us).
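Writing your own middleware is just as easy - it's any function that receives the request, the response and 'next'. A minimal sketch (the logging here is just an example) :

// a tiny custom middleware - logs each request, then passes control on
app.use(function(request, response, next) {
    console.log(request.method + ' ' + request.url);
    next(); // call the next middleware in line
});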
The final 3 lines of code tell expressjs which directories contain static files. This means that each request for a filename that exists in one of these directories will be served as static content.
Note : if we put a file called 'main.css' in the '/css' folder, we request it by going to http://ourdomain.com/main.css and NOT by going to http://ourdomain.com/css/main.css. (This got me confused a little at first...)
After all that, we need to add our models and controllers...
require('./models');
require('./controllers');

The nodejs default when requiring a directory is to look for the file 'index.js' in that directory, so what I did is create an index.js file in each of those directories, and inside it just add a couple of 'require()' calls to specific files in that directory (sketched below).
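So each directory's index.js is nothing more than a list of requires, something like this (the file names are hypothetical) :

// controllers/index.js
require('./home');
require('./users');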
For models you can create javascript object however you like. On the projects I'm working on, I started using mongoose - which is like an ORM for mongodb. It's really simple to use, but I won't go into it for now...
Finally, in our index.js file, we need to tell our app to listen on a certain port -
app.listen(8888);
Controllers
Defining controllers is really easy with express - Each 'action' is a method, defined by GET or POST, the url (which can include dynamic parameters in it), and the function to call. A typical controller looks like this :
app.get('/about', function(request, response) {
    // just render the view called 'about'
    // this requires us to have a file called 'about.jade' in the 'views' folder we defined
    response.render('about');
});

app.get('/user/:userId', function(request, response) {
    // userId is a parameter in the url request
    response.writeHead(200); // return 200 HTTP OK status
    response.end('You are looking for user ' + request.route.params['userId']);
});

app.post('/user/delete/:userId', function(request, response) {
    // just a POST url sample
    // going to this url in the browser won't return anything..
    // do some work...
    response.render('user-deleted'); // again, render our jade view file
});
So, that's the end of this. It's really basic I know, but I hope it will help you get started... :)
The main idea of this post was to show just how easy it is to get started with nodejs.
I think I will be posting a lot more about nodejs in the near future! :)
Have fun! :)
Sunday, March 4, 2012
jQuery relative position plugin - nextTo
CodeProject
The Problem :
I've already created quite a few jQuery plugins in the past, at work, and for personal use, and in many of them there are certain parts of code that always tend to repeat themselves.
One of these parts of code has to do with element positioning calculations relative to another element.
For example, when creating a plugin for a drop-down menu or a tooltip, you can't avoid having a nasty piece of code in there whose only job is to calculate the element's position relative to the element we clicked on or hovered over.
I don't think there's any need for me to post an example of this - you probably know what I mean. This is usually the least maintainable part of the code in the plugin, and the hardest to understand.
The Solution :
I finally decided to extract this ugly piece of code into a nice jQuery plugin that will hold all the dirty work calculations, and will leave you with a nice clean and understandable piece of code inside your plugin.
<scrip type="text/javascript"> $(function() { $('.PutThisDiv').nextTo('.ThisOtherDiv', {position:'right', shareBorder:'top'}); }); </script>
This plugin is hosted on google code : https://code.google.com/p/next-to/ (project name: 'next-to')
At the project page you will find more sample usages, usage explanations, the source code and a minified version.
Sunday, March 20, 2011
Generating dynamic forms :: part 1
CodeProject
Let me start out by saying, I know there are already a gazillion posts on the subject of generating dynamic forms using asp.net. I'm not posting this to teach the world something new, just to weigh in on the subject a little. This task is so common that it's worth having a lot of examples out there, letting people choose the one that suits them best.
Generating dynamic forms can be done in so many different ways, and there is no 'right' way or 'wrong' way. Each way has its specific pros and cons.
I was given the task of creating dynamic forms from data in the db, again, this time for a web application at work. I searched the internet for articles on the subject, to get ideas and a direction. I saw so many different ways to do this, and got so lost in all the information, that I just decided to think it through on my own and give it my best shot.
I will post about the solution I came up with in parts (since I'm guessing it will be long...).
In the first part, I will talk about making some basic decisions before starting to program.
Technologies
I'm creating the application with asp.net, C# on the server side. On the client I'll be using javascript, obviously, leaning heavily on the jQuery framework since I know it best.
In my DAL (Data Access Layer), I'll be using Fluent NHibernate to work with my DB (also a big fan of FNH).
Describing the forms
My forms are built like most forms - each control is made up of a label and an input of some sort. The basic inputs are a textbox, a checkbox and a combo. Combos, for example, can have multiple values, so they'll need to receive a list of values from somewhere. Each control will also need to implement validations.
I will also want to be able to implement special inputs, like a datepicker which has a calendar you can choose from, or an address control where the street combo cascades from the city combo.
So, so far, I'm guessing each control in the form will be described by the properties : Label, Name, Type, ListOfValues (this will be a list), Validations (also a list) and CurrentValue.
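To make that concrete, here's a rough sketch of how one control could be described in json (the property names are from the list above; the values are hypothetical) :

{
    "Label": "City",
    "Name": "city",
    "Type": "combo",
    "ListOfValues": ["Tel Aviv", "Haifa", "Jerusalem"],
    "Validations": ["required"],
    "CurrentValue": ""
}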
Creating the forms
In my application, I want the forms to be opened in client-side dialog windows, because the user is supposed to view an available list of forms to fill out, open some of them, and fill their values. In the end, the user can choose to save all the information in the forms.
So, I will be implementing the creation of the forms on the client, via javascript. I will render all of the form objects to the html source in json format, and when the user chooses to open a specific form, I will build it in a dialog window.
I will then save the values into the json, and using ajax, send the information back to the server.
Form Interactions
Each control will have a 'Validate' method, so I can choose to call it whenever I want. Obviously, the best time to do this is when the user submits.
Submitting the form, and a button for submission, can be implemented on my own, with no connection to the form builder. I want my generic form builder to help me in every future situation, and since I can't predict what submission scenarios I'll need, I'll just implement that part on my own every time.
Submitting the form
As stated earlier, the form values will be saved directly back into the json object they came from. I will then send it to the server via ajax, and do the db updating on the server side.
Coming up...
In the next post I will start getting into the nitty gritty.
I will show how I built my db representation of the forms - What my entities look like, what are their properties and how they are built.
Thursday, February 17, 2011
jQuery Templates
CodeProject
I am a big fan of jquery, as it is my favorite javascript framework. It has many great features already, and even more are being developed everyday. It simply makes developing responsive web applications much, much faster.
I am currently working on a large application that displays a lot of grids to the user, practically on every screen. The data inside the grids, as well as their basic structure, can change according to the selections the user makes. It is very important to me that this application stays responsive and fast, so it was pretty obvious from the start that I'd be using a lot of ajax and depending heavily on the client to do the rendering.
I had a standard ajax call returning a JSON object to work with. I then found myself dynamically building a string representing the HTML markup of a <tr> inside a table, and appending it to the table as I iterated over the JSON properties.
This isn't very difficult to do at first, but it always ends up looking very messy, and it's much uglier to maintain - something like this:
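Just to illustrate (a sketch only - 'myData' and '#myTable' are placeholder names, not code from the actual application):

// 'myData' stands in for the JSON returned by the ajax call
var myData = [
    { FirstName: 'Bob', LastName: 'Jannovitz', Email: 'bobby@gmail.com' }
];

// building the row markup by hand - easy to get wrong, ugly to maintain
var rows = '';
$.each(myData, function (index, person) {
    rows += '<tr>' +
        '<td>' + person.FirstName + '</td>' +
        '<td>' + person.LastName + '</td>' +
        '<td>' + person.Email + '</td>' +
        '</tr>';
});
$('#myTable').append(rows);

Multiply that by a few grids per screen, and it gets out of hand quickly.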
Luckily, just in time, I came across the jQuery documentation for their new templates feature.
This lets you design a small HTML template, mark where the parameters go, and bind a JSON object to that template.
Then you can do practically whatever you want with it - append it to a table, or just use it as a list of data. The best part is that the UI is separated from the 'code', which makes it very easy to maintain: you can change your template around freely, without ever touching the ajax calls or the JavaScript.
I'll show some simple examples so you can see what I mean, and to help you get started on your own.
First, you obviously need to include the templates plugin in your html file (it can be found here):
<script language="javascript" type="text/javascript" src="Scripts/jquery-1.4.1.js"></script>
<script language="javascript" type="text/javascript" src="Scripts/jquery.tmpl.js"></script>
The template you want to use can be inserted into a script tag like this:
<script id="templateStructure" type="text/x-jquery-tmpl"> <tr> <td>${FirstName}</td> <td>${LastName}</td> <td>${Email}</td> </tr> </script>Notice that I gave the script an id, which we'll need soon, and the type is marked 'text/x-jquery-tmpl'.
The ${} brackets tell jQuery where to place the data from the JSON object we will bind. The name inside the brackets must match a property of the bound JSON object.
This means our JSON object will be an array of objects (or a single object, if that's all we have) that all have the properties 'FirstName', 'LastName' and 'Email'.
The template I created represents a row in the table I will bind to, and the table I'm going to bind it to looks like this:
<table border="1" id="templateTable">
    <tr>
        <td><b>First Name</b></td>
        <td><b>Last Name</b></td>
        <td><b>Email</b></td>
    </tr>
</table>

So when I add the rows, the table will be filled with data.
In order to load the template, I use the template() method:
$('#templateStructure').template('myTemplate');

This will load the template we defined in the script tag and name it 'myTemplate'.
Now all we need to do is give the template a data source and place the result wherever we want. In our case we'll append it to the table, so it fills the table with data.
$(document).ready(function() {
    // our data object we will bind
    var myData = [
        { FirstName: 'Bob', LastName: 'Jannovitz', Email: 'bobby@gmail.com' },
        { FirstName: 'Howard', LastName: 'Shennaniganz', Email: 'howard@yahoo.com' },
        { FirstName: 'Joe', LastName: 'Stoozi', Email: 'joeii@hotmail.com' }
    ];

    // load the template and name it 'myTemplate'
    $('#templateStructure').template('myTemplate');

    // bind the data to the template and append the result to the table
    $.tmpl('myTemplate', myData).appendTo('#templateTable');
});
The final result will be the table with the data rendered into it.
So in conclusion...
The jQuery templates plugin can be extremely useful for binding data to an html template on the client, keeping applications fast and responsive. It also has many more great features, like template tags ('instructions') that can make your template act differently in different data situations - for example:
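Here's a small taste (just a sketch based on the plugin's documented {{if}}/{{else}} tags - see the documentation link below for the full list). This variation of the row template renders a different cell when the 'Email' property is missing:

<script id="templateStructure" type="text/x-jquery-tmpl">
    <tr>
        <td>${FirstName}</td>
        <td>${LastName}</td>
        {{if Email}}
            <td>${Email}</td>
        {{else}}
            <td>no email given</td>
        {{/if}}
    </tr>
</script>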
It is important to note, though, that all of this is still in beta and is subject to change. I, however, have already started using it, and so far so good... :)
I might be posting something more advanced about this soon, but until then -
You can read more about it here : http://api.jquery.com/category/plugins/templates/