Saturday, November 8, 2014

Online multiple javascript compression tool


Minifying/compressing javascript files has been standard practice in web development for a while now. Saving as much space as you can, so your users download as little as possible, is very important for your site's performance.
I don't think you'll find an article out there that denies it.

An important question that arises from this is: which compressor/minifier should you use?
Different famous open source projects use different compressors, and I'm guessing (or at least hoping) they chose them wisely, relying on benchmarks they ran on their own code.
You see, each compressor works differently, so different code bases won't be affected in the same way by different compressors.

In the past I used to manually test my code against different compressors to see which one was best for me. I finally got sick of doing it manually, so I decided to look for a tool that would do the job for me. Surprisingly, I didn't find one that did exactly that, so I quickly wrote a script that does it. Then I decided to design a UI for it and put it online for others to enjoy as well.
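To give an idea, a stripped-down sketch of such a script, using only UglifyJS (the real thing runs each compressor and compares all the output sizes):

var fs = require('fs');
var UglifyJS = require('uglify-js');

var source = fs.readFileSync(process.argv[2], 'utf8');
var minified = UglifyJS.minify(source, { fromString: true }).code;

console.log('original size : ' + source.length + ' bytes');
console.log('minified size : ' + minified.length + ' bytes');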

I present to you: http://compress-js.com
You can paste text, or drag some js files to it, and choose which compressor you want. Or, and this is the default method, choose 'Check them all', which will compress your code using the most popular compressors and show you the results, including the compressed size from each of them. You can download the files directly from the site.

Here's a screenshot:


Currently the site can compress your javascript code with YUI Compressor, UglifyJS, JSMin and Google's Closure Compiler.
If you have any thoughts or suggestions on how to improve it, feel free to drop a comment below. :)

Tuesday, November 4, 2014

Lazy loading directives in AngularJS the easy way


The past few months I've been doing a lot of work with AngularJS, and currently I'm working on a single page application which is supposed to end up quite big. Since I have the privilege of building it from scratch, I'm taking many client-side performance considerations into account now, which I think will save me a lot of hard optimization work in the future.

One of the main problems is the HUGE amount of js files being downloaded to the user's computer. A great way to avoid this is to download only the minimum the user needs, and dynamically load more resources in the background, or as the user reaches pages that require a specific feature.

AngularJS is a great framework, but doesn't have anything built in that deals with this, so I did some research myself...
I ran into some great articles on the subject, which really helped me a lot (and I took some ideas from), but weren't perfect.
A great article on the subject is this one: http://www.bennadel.com/blog/2554-loading-angularjs-components-with-requirejs-after-application-bootstrap.htm
The important part is that it explains how to dynamically load angularjs directives (or other components) after bootstrapping your angularjs app.
What I didn't like about this article is that the writer's example requires RequireJS and jQuery on top of all the AngularJS files you already have. That alone will make your app really heavy, and I don't think it needs to be like this.

Let me show you how I wrote a simple AngularJS service that can dynamically load directives.

The first crucial step is that you need to save a reference to $compileProvider. This is a provider that is available to us when bootstrapping, but not later, and this provider will compile our directive for us.
var app = angular.module('MyApp', ['ngRoute', 'ngCookies']);

app.config(['$routeProvider', '$compileProvider', function($routeProvider, $compileProvider) {
    $routeProvider.when('/', {
        templateUrl: 'views/Home.html',
        controller: 'HomeController'
    });

    // keep a reference to $compileProvider - it's only available
    // during the config phase, but we'll need it after bootstrap
    app.compileProvider = $compileProvider;
}]);


Now we can write a service that will load our javascript file on demand, so the directive inside it gets registered and is ready to use.
This is a simplified version of what it should look like:
app.service('LazyDirectiveLoader', ['$rootScope', '$q', function($rootScope, $q) {

    // This is a dictionary that holds which directives are stored in which files,
    // so we know which file we need to download for the user
    var _directivesFileMap = {
        'sexyDirective': 'scripts/directives/sexy-directive.js'
    };

    var _load = function(directiveName) {
        // make sure the directive exists in the dictionary
        if (!_directivesFileMap.hasOwnProperty(directiveName)) {
            console.log('Error: unrecognized directive: ' + directiveName);
            return $q.reject('unrecognized directive: ' + directiveName);
        }

        var deferred = $q.defer();
        var directiveFile = _directivesFileMap[directiveName];

        // download the javascript file
        var script = document.createElement('script');
        script.src = directiveFile;
        script.onload = function() {
            $rootScope.$apply(deferred.resolve);
        };
        document.getElementsByTagName('head')[0].appendChild(script);

        return deferred.promise;
    };

    return {
        load: _load
    };

}]);


Now we are ready to load a directive, compile it and add it to our app so it's ready for use.
To use this service we simply call it from a controller, or any other service/directive, like this:
app.controller('CoolController', ['LazyDirectiveLoader', function(LazyDirectiveLoader) {
    
    // let's say we want to load our 'sexyDirective' - all we need to do is this:
    LazyDirectiveLoader.load('sexyDirective').then(function() {
        // now the directive is ready...
        // we can redirect the user to a page that uses it!
        // or dynamically add the directive to the current page!
    });

}]);


One last thing to notice: your directives now need to be defined through '$compileProvider', and not the way we would do it regularly. This is why we exposed $compileProvider on our 'app' object for later use. So our directive js file should look like this:
app.compileProvider.directive('sexyDirective', function() {
    return {
        restrict: 'E',
        template: '<div class="sexy"></div>',
        link: function(scope, element, attrs) {
            // ...
        }
    };
});


I wrote earlier that this is a simplified version of what it should look like, since there are some changes I would make before using it as is.
First, I would probably add some better error handling to look out for edge cases.
Second, we wouldn't want the same page to attempt to download the same files several times, so I would probably add a cache mechanism for loaded directives.
Also, I wouldn't want the list of directive files (the variable _directivesFileMap) sitting directly in my LazyDirectiveLoader service, so I would probably create a service that holds this list and inject it into the LazyDirectiveLoader service. The service that holds the list would be generated by my build system (in my case I created a gulp task to do this - a sketch of what that could look like follows below). This way I don't need to make sure the file map is always up to date.
Finally, I think I would take the part that loads the javascript file out into a separate service, so I could easily mock it in the tests I write. I don't like touching the DOM in my services, and if I have to, I'd rather isolate it in a small service that is easy to mock.
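A minimal sketch of such a gulp task (assuming directives live in 'scripts/directives/' and each file is named after its directive - both assumptions just for illustration):

var gulp = require('gulp');
var fs = require('fs');
var path = require('path');

gulp.task('build-directives-map', function() {
    var map = {};
    fs.readdirSync('scripts/directives').forEach(function(file) {
        // 'sexy-directive.js' -> 'sexyDirective'
        var name = path.basename(file, '.js').replace(/-([a-z])/g, function(m, c) {
            return c.toUpperCase();
        });
        map[name] = 'scripts/directives/' + file;
    });

    // generate an angular value that simply exposes the map
    var code = 'app.value(\'DirectivesFileMap\', ' + JSON.stringify(map, null, 4) + ');';
    fs.writeFileSync('scripts/directives-file-map.js', code);
});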

I uploaded a slightly better (and a little less simplified) version of this over here : https://github.com/gillyb/angularjs-helpers/tree/master/directives/lazy-load

Thursday, October 9, 2014

Desktop applications with nodejs! ...as if winforms and wpf aren't dead already!


I used to favor other languages over javascript, because javascript wasn't type-safe, and was hard to refactor, hard to write tests for, hard to find usages in... and the list goes on...
The past few years though, some amazing things have happened that now make javascript an amazing language!

IDEs got much better! My personal favorite is WebStorm, which has great auto-completion in javascript and supports many frameworks like nodejs and angular.

Web frameworks got much better! Newer and more advanced frameworks like angularJS and Ember allow you to write really organized and well structured javascript on the client side.

V8 was created and open sourced, which brought a whole variety of new tools to the table - some of them headless browsers like phantomJS, which are great for automation testing and for quick web-crawling scripts.

And my personal favorite - NodeJS! This tool is amazing! It can do so many things from being a fully functional and scalable backend server to a framework for writing desktop applications.


While looking into the code of PopcornTime I realized it was written in nodejs, with a framework called node-webkit. This was an amazing concept to me. It's basically a wrapper that displays a frame with a website in it. The 'website' displayed is your typical client side code - html, javascript and css - so obviously you can use any framework you like, like angular or ember. This 'website' can also use all nodejs modules (directly in the js code), which gives you access to the operating system - the file system, databases, networks and everything else you might need. Since nodejs runs on all major operating systems, you can also 'compile' your desktop app to run on any platform.
You can wrap all this up as an executable file ('.exe' on windows) and easily tweak it not to show the toolbar, which means the user has no way of knowing it's actually a website 'beneath' the desktop application they're using.


The steps for creating a simple desktop application with node-webkit are super simple!
(and easier than building a desktop application with any other language I've tried!)

First, I'm assuming you have nodejs and npm installed.
Now, download node-webkit : https://github.com/rogerwang/node-webkit#downloads

Start building your application just like you would a website. You can use the browser just like you're used to, to see your work.
When you want to start accessing node modules, you'll need to start running it with node-webkit.
In order to do this, just run the node-webkit executable from the command line with your main html file as a parameter.

C:\Utilities\node-webkit\nw.exe index.html


This will open your website as a desktop application.

You can now access all nodejs modules directly from the DOM!
Some of the operating system's apis are wrapped as node modules as well, so you can create a tray icon, native window menus, and much much more..
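For example, something like this inside your app's html should just work (a minimal sketch - 'nw.gui' is node-webkit's built-in gui module, and the tray icon file is made up):

<script>
    // a plain nodejs module, called straight from page code
    var fs = require('fs');
    document.title = 'Files in here: ' + fs.readdirSync('.').length;

    // node-webkit's 'nw.gui' module wraps native UI elements like tray icons
    var gui = require('nw.gui');
    var tray = new gui.Tray({ title: 'My App', icon: 'icon.png' });
</script>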

Debugging the app is also super simple, and can easily be done with the Developer Tools, just like you would in Chrome! (you just need to configure your app to open with the toolbar visible, which you define in your package.json file while developing)
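A minimal package.json for node-webkit could look something like this (the 'window' section controls the native frame - set 'toolbar' to true while developing to get the Developer Tools):

{
    "name": "my-app",
    "main": "index.html",
    "window": {
        "title": "My App",
        "toolbar": true,
        "width": 800,
        "height": 600
    }
}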


I see so many benefits to creating desktop applications like this, so I'm expecting to see many more apps running on this framework (or other nodejs-based frameworks) in the near future. (Except for heavy algorithmic work, which is probably better off written in C/C++. So I'm not expecting the next version of Photoshop to be written in nodejs, but there are a ton of good examples out there that should be!)


Some good references :
- Node-Webkit Github page
- Introduction to HTML5 Desktop apps with node-webkit (a great tutorial to get started)

Saturday, August 9, 2014

AngularJS hack/tip :: Invoking JS code after DOM is ready


When working with AngularJS, you frequently update the DOM after the DOM is already 'ready'.
What I mean by that is that the browser loads the DOM, and the template loads completely, BUT your template might have an 'ng-if' or 'ng-repeat' directive that will only be attached to the DOM slightly later, since you might be setting it with an ajax response inside the controller.

This will happen when your code is similar to this pattern :
app.controller('MyAngularController', function($scope, $http) {
    $http.get('www.someURL.com/api').success(function(response) {
        // Add some data to the scope
        $scope.Data = response;

        // This caused the DOM to change
        // so invoke some js that will take care of the new DOM changes
        DoSomeJS();
    });
});

The main problem with this code is that most of the time when the method DoSomeJS() is invoked, the DOM changes caused by the changes to $scope won't be 'ready'.

This is because of the way angularJS is built -
Each property on the scope has a 'watcher' attached to it, checking it for changes. Once the property changes, it invokes a '$digest' loop, which is responsible for updating the model and the view. This is invoked asynchronously (for performance reasons, I guess), and it actually gives you the great ability to invoke js code immediately after updating the scope, without waiting for the DOM to be updated - something you'll probably want as well from time to time. (The nitty gritty details of how this works behind the scenes are interesting, but would take me too long to go through in this post. For the brave ones among us, I encourage you to look a bit into the code yourself --> https://github.com/angular/angular.js/blob/master/src/ng/rootScope.js#L667)


So, how can we invoke some JS code and make sure it runs only after the DOM has been updated?
Well, one quick and hacky way to do this is to let a js timer invoke your code with a '0' delay. Since JS is single-threaded, running a timer with a 0ms delay doesn't mean the code runs immediately. It pushes the code to 'the end of the line', and it is invoked once the JS thread is free.

The updated code looks like this:
app.controller('MyAngularController', function($scope, $http, $timeout) {
    $http.get('www.someURL.com/api').success(function(response) {
        // Add some data to the scope
        $scope.Data = response;

        // This caused the DOM to change
        // so invoke some js that will take care of the new DOM changes
        $timeout(DoSomeJS);
    });
});
Note: invoking '$timeout()' like we did is just like invoking 'setTimeout(fn, 0);' - $timeout is an angularJS service that wraps setTimeout (and triggers a digest after the callback runs).
A great read on how JS timers are invoked: Understanding Javascript timers

But wait, this whole solution is a hack, isn't it?!...
Yes, and truth be told, when I first ran into this problem this was the first solution I came up with. Only afterwards did I realize that I don't want any js code in my controller touching my DOM.
I still decided to write this post though, to explain a little about JS timers and angular $digest.

The solution I would favor in this case is to put a custom directive on the DOM that is being inserted dynamically, and to add the code that modifies the DOM in the directive's 'link' method.

And the code should look more like this:
app.directive('myDirective', function() {
    return {
        restrict: 'A',
        link: function(scope, elem, attrs) {
            // DO WHATEVER WE WANT HERE...
        }
    };
});

In angular, directives describe various elements of the template, and therefore I feel they are the 'right' place for most of the code that modifies our DOM. I like to keep my controllers clean of DOM manipulation, and just have them construct the models they need to pass on to the template.

Thursday, July 24, 2014

Escaping '&' (ampersand) in razor view engine


Recently I ran into a really annoying problem with the asp.net razor view engine -
I was generating some urls on the server side, and trying to print them inside html tag attributes like 'href' or 'src'.

The problem was that all the ampersands ('&') were being encoded to '&amp;'.
The first thing I tried was printing the url with the Html 'Raw' helper method, like this:
<a href="@Html.Raw(Model.Url)">Some Link</a>


This didn't work... :/
The weird thing was that when I searched the internet and found questions about this on stackoverflow, some people wrote that Html.Raw() worked for them and some said it didn't.

After a little more research (mostly trial & error), I realized that razor will always encode strings inserted into attribute values. This is done for security reasons. The proper workaround is to simply put the whole tag inside the 'Raw()' method, like this:
@Html.Raw("Some Link)


This basically tells razor - "I know what I'm doing, just let me do it my way!" :)

Sunday, July 13, 2014

Saving prices as decimal in mongodb


When working with prices in C#, you should always work with the 'decimal' type.
Working with the 'double' type can lead to a variety of rounding errors when doing calculations, since it is intended more for scientific calculations.

(I don't want to go into details about exactly what problems this can cause, but you can read more about it here:
http://stackoverflow.com/questions/2129804/rounding-double-values-in-c-sharp
http://stackoverflow.com/questions/15330988/double-vs-decimal-rounding-in-c-sharp
http://stackoverflow.com/questions/693372/what-is-the-best-data-type-to-use-for-money-in-c
http://pagehalffull.wordpress.com/2012/10/30/rounding-doubles-in-c/ )

I am currently working on a project that involves commerce and prices, so naturally I used 'decimal' for all price types.
Then I headed to my db, which in my case is mongodb, and a problem arose.
MongoDB doesn't support 'decimal'!! It only supports the double type.

Since I'd rather avoid saving it as a double, for the reasons stated above, I had to think of a better solution.
I decided to save all the prices in the db as Int32 values, holding the prices in 'cents'.

This means I just need to multiply the values by 100 when inserting into the db, and divide by 100 when retrieving - so a price of $12.34, for example, is stored as 1234. This should never cause any rounding problems, and is pretty straightforward. I don't even need to worry about sorting, or any other query for that matter.

But... I don't want ugly code doing all these conversions from cents to dollars all over the place...

I'm using the standard C# mongodb driver (https://github.com/mongodb/mongo-csharp-driver), which gives me the ability to write a custom serializer for a specific field.
This is a great solution, since the serializer is the lowest level part of the code that deals with the db, which means all my entities will be using 'decimal' everywhere.

This is the code for the serializer:
public class MongoDbMoneyFieldSerializer : IBsonSerializer
{
    public object Deserialize(BsonReader bsonReader, Type nominalType, IBsonSerializationOptions options)
    {
        var dbData = bsonReader.ReadInt32();
        return (decimal)dbData / (decimal)100;
    }

    public object Deserialize(BsonReader bsonReader, Type nominalType, Type actualType, IBsonSerializationOptions options)
    {
        var dbData = bsonReader.ReadInt32();
        return (decimal)dbData / (decimal)100;
    }

    public IBsonSerializationOptions GetDefaultSerializationOptions()
    {
        return new DocumentSerializationOptions();
    }

    public void Serialize(BsonWriter bsonWriter, Type nominalType, object value, IBsonSerializationOptions options)
    {
        var realValue = (decimal) value;
        bsonWriter.WriteInt32(Convert.ToInt32(realValue * 100));
    }
}


And then all you need to do is add the custom serializer attribute to the fields which hold prices, like this:
public class Product
{
    public string Title{ get; set; }
    public string Description { get; set; }

    [BsonSerializer(typeof(MongoDbMoneyFieldSerializer))]
    public decimal Price { get; set; }

    [BsonSerializer(typeof(MongoDbMoneyFieldSerializer))]
    public decimal MemberPrice { get; set; }

    public int Quantity { get; set; }
}

That's all there is to it.

Monday, June 23, 2014

Drastically improving 'First Byte' and 'Page Load' (for SEO)


Improving your 'first byte' speed, and your 'page load' in general, can be crucial for SEO. Google likes pages that render faster for the user and, in some cases, will rank them higher than other pages in search results.

If you're not familiar with this, then here are some articles on the subject :
http://googlewebmastercentral.blogspot.co.il/2010/04/using-site-speed-in-web-search-ranking.html
http://blog.kissmetrics.com/speed-is-a-killer/
http://www.quicksprout.com/2012/12/10/how-load-time-affects-google-rankings/

Improving your site's performance can be a daunting task. There are probably many easy wins that will each improve the speed a little, but you will quickly realize that bigger results take much longer. Some improvements can take days, weeks or even months of infrastructure changes.

But why should your SEO suffer from this? Why not be a step ahead of google?
Your site doesn't really need to be fast for you to get good SEO scores - you just need google to think your site is fast!

But how do you do that?
Google will scan your site once every few days/weeks and cache the results for indexing. So let's beat google at its own game.
Why don't we crawl our site first, cache the results (even to plain text files), and when google comes around, serve it the static pages we cached, without any server calculations.

You can easily build a crawler using Selenium, phantomjs, zombiejs or pure nodejs. You don't even need to implement all the logic of a regular crawler, since you're familiar with your site's structure.

For a real world example:
If your site is a big commerce site, then you know the structure of all your product pages. They're probably something like this:
http://www.YourCommerceSite.com/product/Product-Name/:Product-ID:

You can invoke this endpoint for each of the product ids in your db.
Then you can save them all to text files named like this:
Product_<Product-ID>.txt
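A minimal sketch of such a snapshot script with phantomjs might look like this (the url structure, paths and the 1 second rendering delay are assumptions for illustration):

// save as snapshot.js and run with: phantomjs snapshot.js <product-id>
var page = require('webpage').create();
var fs = require('fs');
var productId = phantom.args[0];

page.open('http://www.YourCommerceSite.com/product/Product-Name/' + productId, function(status) {
    if (status !== 'success') {
        phantom.exit(1);
    }
    // give the client side js a moment to finish rendering before taking the snapshot
    setTimeout(function() {
        fs.write('cache/Product_' + productId + '.txt', page.content, 'w');
        phantom.exit();
    }, 1000);
});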

When the google bot comes around (which you can easily detect by its 'User-Agent' header) and requests a product page, quickly give it the cached product page you stored on disk.
This might be stale by a few hours/days (depending on how frequently you decide to scan), but it will still be good enough for indexing in google (since google's indexing isn't realtime anyway), and it should be super fast!
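Serving the cached copy can then be a small piece of middleware, something like this sketch (assuming an Express server and the file naming from above - adapt it to whatever stack you're on):

var express = require('express');
var fs = require('fs');
var app = express();

app.get('/product/:name/:id', function(req, res, next) {
    var userAgent = req.headers['user-agent'] || '';
    if (userAgent.indexOf('Googlebot') !== -1) {
        // serve the pre-rendered snapshot straight from disk
        fs.readFile('cache/Product_' + req.params.id + '.txt', 'utf8', function(err, html) {
            if (err) return next(); // cache miss - fall back to normal rendering
            res.send(html);
        });
    } else {
        next(); // regular users get the normal dynamic page
    }
});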

Saturday, April 12, 2014

Debugging and solving the 'Forced Synchronous Layout' problem


If you're using the Google Developer tools to profile your website's performance, you might have noticed that Chrome warns you about 'forced synchronous layouts'.
It looks something like this:
In this screenshot, I marked all the warning signs chrome gives you so you can spot the problem.

So, what does this mean?
When the browser constructs a model of the page in memory, it builds 2 trees. One represents the DOM structure itself, and the other represents the way the elements should be rendered on the screen.
These trees need to always stay updated, so when you change an element's css properties, for example, the browser might need to update them in memory to make sure that the next time you request a css property, it can hand back up-to-date information.

Why should you care about this?
Updating both of these trees in memory can take some time. Although they are in memory, most pages these days have quite a big DOM, so the trees will be pretty big. It also depends on which element you change, since updating different elements might mean updating only part of a tree, or the whole tree, in different cases.

Can we avoid this?
The browser can realize that you're updating many elements at once, and will optimize itself so that a full tree update doesn't happen after each change, but only when the browser knows it needs up-to-date data. In order for this to work correctly, we need to help it out a little.
A very simple example of this scenario is setting and then getting 2 different properties, one after the other, like so:
var a = document.getElementById('element-a');
var b = document.getElementById('element-b');

a.style.width = '100px';
var aWidth = a.clientWidth;

b.style.width = '200px';
var bWidth = b.clientWidth;

In this simple example, the browser will update the whole layout twice. After setting the first element's width, we ask for an element's width back. When retrieving a css property, the browser knows it needs updated data, so it goes and updates the whole DOM tree in memory. Only then does it continue to the next lines, which soon cause another update for the same reason.

This can simply be fixed by reordering the code, like so:
var a = document.getElementById('element-a');
var b = document.getElementById('element-b');

a.style.width = '100px';
b.style.width = '200px';

var aWidth = a.clientWidth;
var bWidth = b.clientWidth;

Now the browser will apply both changes, one after the other, without updating the tree. Only when asking for the width on the 7th line will it update the DOM tree in memory, and it will keep it updated for line number 8 as well. We easily saved one update.


Is this a 'real' problem?
There are a few blogs out there talking about this problem, and they all seem to use textbook examples of it. When I first read about this, I too thought it was a little far-fetched and not really practical.
Recently, though, I actually ran into it on a site I'm working on...

Looking at the profiling timeline, I noticed the same pattern (a bunch of rows alternating between 'Layout' and 'Recalculate Style').
Clicking on the marker showed that this was actually taking around ~300ms.

I could see that the evaluation of the script was taking ~70ms, which I could handle, but over 200ms was being wasted on what?!...

Luckily, clicking on the script in that dialog displays a JS stacktrace of the problematic call. This was really helpful, and directed me exactly to the spot.

It turned out I had a piece of code that looped over a list of elements, checked each element's height, and set the container's height according to the aggregated height. The height was being set and read in every loop iteration, causing a performance hit.

The problematic code looked something like this:
var appendItemToContainer = function(item) {
   // reads the container's current height and sets the new height - in the same statement
   container.style.height = (container.clientHeight + item.clientHeight) + 'px';
}

for (var i=0; i<containerItems.length; i++) {
   var item = containerItems[i];
   appendItemToContainer(item);
}

You can see that the 'for' loop calls the 'appendItemToContainer' method, which sets the container's height according to its previous height - which means setting and getting in the same statement.

I fixed this by looping over all the items in the container and summing up their heights, and only then setting the container's height, once. This saves many DOM tree updates, and leaves only the one that is actually necessary.

The fixed code looked something like this:
// collect the height of all elements
var totalHeight = 0;
for (var i=0; i<containerItems.length; i++) {
   totalHeight += containerItems[i].clientHeight;
}

// set the container's height once
container.style.height = totalHeight + 'px';

After fixing the code, I saw that the time spent was actually much less now -

As you can see, I managed to save a little over 150ms, which is great for such a simple fix!!


Friday, February 21, 2014

Chrome developer tools profiling flame charts

I just recently, and totally coincidentally, found out that the Chrome developer tools can generate flame charts while profiling js code!
Lately it seems like generating flame charts from profiling data has become popular in languages like Ruby, python and php, so I'm excited to see that Chrome has this option for js code as well.

The default view for profiling data in the dev tools is the 'tree view', but you can easily change it to 'flame chart' by selecting it in the drop down at the bottom of the window.

Like here:


Then you will be able to see the profiling results in a way that is sometimes easier to read.
You can use the mouse scroll wheel to zoom in on a specific area of the flame chart and see what's going on there.

In case you're not familiar with reading flame charts, here's a simple explanation -
  • Each colored line is a method call
  • The method calls above one another represent the call stack
  • The width of the lines represents how long each call was

And here you can see an example of a flame chart, in which I marked a few sections that the flame chart points out for us - non-optimized TryCatchBlocks. In this case the flame chart view is convenient because you can see nicely how many method calls each try/catch block surrounds.


Wednesday, February 19, 2014

Preloading resources - the right way (for me)


Looking through my 'client side performance glasses' when browsing the web, I see that many sites spend too much time downloading resources - mostly on the homepage, but sometimes the main bulk is on subsequent pages as well.

Starting to optimize
When trying to optimize your page, you might think it's most important that your landing page is the fastest, since it defines your users' first impression. So what do you do? You probably cut down on all the js and css resources you can, and leave only what's definitely required for your landing page. You minify those, and then you're left with one file each. You might even put the js at the end of the body so it doesn't block the browser from rendering the page, and you're set!

But there's still a problem
Now your users go on to the next page, probably an inner page of your site, and this one is filled with much more content. On this page you use some jquery plugins and other frameworks you found useful, which probably saved you hours of javascript coding, but your users are paying the price...

My suggestion
I ran into this exact problem a few times in the past, and the best way I found of solving it was to preload the resources on the homepage. I do this after 'page load' so it doesn't block the homepage from rendering, and while the user is looking at the homepage, a little extra time is spent in the background downloading resources they'll probably need on the next pages they browse (a small sketch of the trigger follows below).
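Triggering it could look something like this (a minimal sketch - 'preloadResources' stands for whichever technique below you end up choosing, and the file urls are made up):

window.addEventListener('load', function() {
    // let the browser finish with the homepage first, then warm up the cache
    setTimeout(function() {
        preloadResources([
            'http://cdn.mysite.com/scripts/inner-pages.js',
            'http://cdn.mysite.com/styles/inner-pages.css'
        ]);
    }, 0);
});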

How do we do this?
Well, there are several techniques, but before choosing the right one, let's take a look at the requirements/constraints we have -
  • We want to download js/css files in a non-blocking way
  • We want to trigger the download ourselves, so we can defer it to after 'page load'
  • We need to download the resources in a way that won't execute them (css and js). (This is really important, and the reason we can't just dynamically create a '<script/>' tag and append it to the '<head/>' tag!)
  • We need to make sure they stay in the browser's cache (this is the whole point!)
  • It has to work with resources that are stored on secure servers (https). This is important since I would like to preload resources from my secured registration/login page too, if I can.
  • It has to work with resources on a different domain. This is very important since all of my resources are hosted on an external CDN server with a different subdomain.

The different techniques are (I have tested all of these, and these are my notes)
1. Creating an iframe and appending the script/stylesheet file inside it
var iframe = document.createElement('iframe');
iframe.setAttribute("width", "0");
iframe.setAttribute("height", "0");
iframe.setAttribute("frameborder", "0");
iframe.setAttribute("name", "preload");
iframe.id = "preload";
iframe.src = "about:blank";
document.body.appendChild(iframe);

// gymnastics to get reference to the iframe document
iframe = document.all ? document.all.preload.contentWindow : window.frames.preload;
var doc = iframe.document;
doc.open();
doc.writeln("");
doc.close();

// a stylesheet link makes the browser download the file without executing it,
// even when the file is actually js
var iFrameAddFile = function(filename) {
    var css = doc.createElement('link');
    css.type = 'text/css';
    css.rel = 'stylesheet';
    css.href = filename;
    doc.body.appendChild(css);
}

iFrameAddFile('http://ourFileName.js');
This works on Chrome and FF, but on some versions of IE it wouldn't cache the secure resources (https).
So close, but no cigar here (at least, not fully).

2. Creating a javascript Image object
new Image().src = 'http://myResourceFile.js';
This only works properly on Chrome. On FireFox and IE it would either not download the secure resources, or download them without caching.

3. Building an <object/> tag with file in data attribute
var createObjectTag = function(filename) {
    var o = document.createElement('object');
    o.data = filename;

    // IE stuff, otherwise 0x0 is OK
    // (isIE stands for whatever IE detection you already use)
    if (isIE) {
        o.width = 1;
        o.height = 1;
        o.style.visibility = "hidden";
        o.type = "text/plain";
    }
    else {
        o.width  = 0;
        o.height = 0;
    }

    document.body.appendChild(o);
}
   
createObjectTag('http://myResourceFile.js');
This worked nicely on Chrome and FF, but not on some versions of IE.

4. XMLHttpRequest a.k.a. ajax
var ajaxRequest = function(filename) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', filename);
    xhr.send('');
}

ajaxRequest('http://myResourceFile.js');
This technique won't work with files on a different domain, so I dropped it immediately.

5. Creating a 'prefetch' tag
var prefetchTag = function(filename) {
    var link = document.createElement('link');
    link.href = filename;
    link.rel = "prefetch";
    document.getElementsByTagName('head')[0].appendChild(link);
}

prefetchTag('http://myResourceFile.js');


6. 'script' tag with invalid 'type' attribute
// creates a script tag with an invalid type, like 'script/cache'
// I realized this technique is used by LabJS for some browsers
var invalidScript = function(filename) {
    var s = document.createElement('script');
    s.src = filename;
    s.type = 'script/cache';
    document.getElementsByTagName('head')[0].appendChild(s);
}

invalidScript('http://myJsResource.js');
This barely worked properly in any browser. It would download the resources, but wouldn't cache them for the next request.


Conclusion
So, first I must say that, given all the constraints I have, this turned out to be more complicated than I initially thought it would be.
Some of the techniques worked well on all of the browsers for non-secure resources (non SSL), but only on some browsers for secure resources. In my specific case I decided to go with one of those, accepting that some users won't have the resources from SSL pages cached (these are a minority in my case).
But I guess that, given your circumstances, you might choose a different technique. I had quite a few constraints that I'm sure not everyone has.
Another thing worth mentioning is that I didn't test Safari with any technique. Again, this was less interesting in my case.
I also haven't thought about solving this problem for mobile devices yet. Since mobile bandwidth is usually much slower, I might tackle the problem differently there...