<span style="font-size:18px; font-weight:bold;">Domain purchase scam</span><br/>
I hold a few domains under my name, mostly because they're all related to some idea I once had and wanted to build. I didn't buy them with the intention of selling them, but recently I ran into the site Flippa.com, which gives you a platform to sell domains. Since I never ended up doing anything with the domains, I figured I'd put them up for sale, and if by some chance someone is really interested in one of the names, I'll make a few bucks.<br/>
<br/>
A few days ago, someone tried to scam me. So I'm sharing this in the hope of informing or educating others.<br/>
<br/>
<b>How did they reach me ?</b><br/>
I believe the scammer saw I was trying to sell a site on flippa and decided to contact me. I'm 100% sure this isn't related to flippa itself, so I'm not blaming them in any way. The scammer could've just seen my domain for sale, and found my email in the whois database.<br/>
I always put my real details in the whois database, which exposes you to spam and scams like this, but that doesn't bother me; you just need to be careful. There are sites that offer privacy in the whois database for a fee - I think this is useless, and wouldn't spend money on it.<br/>
<br/>
<b>First Email</b><br/>
I received an email from someone named "Shlomo Greenberg" saying he's representing an investor from Europe that's interested in a specific domain of mine. He asked if I'm willing to sell, and if so, we'll negotiate the details in the next mail.<br/>
<br/>
<b>Hints this might be a scam</b><br/>
The first hint was that the email came from a 'gmail' account, but gmail was smart enough to notify me that the 'from' address might be forged.
I had never seen this message before, but it looks like this :<br/>
<br/>
<div class="separator" style="clear: both; display:block; text-align: left;"><a href="http://2.bp.blogspot.com/-wpz9K5JSGow/VZaDFWYNFCI/AAAAAAAABC4/rUVTX8g5gz0/s1600/Screen%2BShot%2B2015-07-03%2Bat%2B1.03.46%2BPM.png" imageanchor="1" style="margin-bottom: 1em; margin-right: 1em;"><img border="0" width="650" src="http://2.bp.blogspot.com/-wpz9K5JSGow/VZaDFWYNFCI/AAAAAAAABC4/rUVTX8g5gz0/s1600/Screen%2BShot%2B2015-07-03%2Bat%2B1.03.46%2BPM.png" /></a></div>
<br/>
Why is this a hint of a scam ? Because if someone is automating the emails, then they're sending them from some other server, and just forging the 'gmail' address to make it look like someone personally contacted me.<br/>
<br/>
That being said, this alone doesn't mean it's 100% a scam! I had never seen this warning before, but I'm sure it can happen in cases where you're not being scammed. It shouldn't stop you immediately, but it is a clear sign to proceed with caution.<br/>
<br/>
<b>Some basic investigation</b><br/>
Because of the gmail notice that seemed fishy, and because I'm a curious person by nature, I decided to do some very basic research about the person contacting me. You don't have to be a private investigator to do this, the simplest common sense will take you a long way.<br/>
The mail was signed by a "Shlomo Greenberg", with "Lawyer" next to the name. It also had an Israeli address. I searched the internet for his name, with and without a lawyer prefix, in English and in Hebrew, but couldn't find anything. I also searched google maps for the address he gave, but it seemed to lead to some coffee shop.<br/>
(I'm not saying this Shlomo Greenberg guy isn't a real lawyer, maybe he is, but I couldn't find anything about him online, and as someone who claims to represent a European investor I would imagine that something would come up).<br/>
<br/>
<b>My reply</b><br/>
At this point, I still wasn't sure whether this was a scam or not, but figured I had nothing to lose. I replied saying that I'm willing to sell for $2000, and asked them to let me know if the investor can pay that amount.<br/>
<br/>
<b>The scam!</b><br/>
I received an email back saying that the price isn't a problem for the investor, but they want a "domain certificate" so they know it's legit.<br/>
What is this "domain certificate" ?<br/>
He added a link to a page on "Google Answers" where someone asked how to get a domain certificate, and another user answered with a link to a site that sells certificates. He said I should go to that link, and get a certificate.<br/>
He also explained that the certificate gives an evaluation of the domain's price, validates ownership, and includes some basic due diligence on the trademark.<br/>
<br/>
<br/>
<b>At this point I knew it was a scam for quite a few reasons.</b><br/>
<br/>
First, the "Google Answers" link he gave me wasn't a real google answers page. It linked to "www.google-answers.org", which isn't a domain owned by google (according to whois), but it was perfectly crafted to look like a google answers page.<br/>
Google answers is a product that was shut down a long time ago. I've also never once searched google and run into quality search results from google answers, so even if it were a real google answers page, I wouldn't give it any credit.<br/>
<br/>
Second, the site linked to for getting the domain certificate looked bad. I'm a web designer, so I have an eye for basic web design. It seemed very unprofessional, and didn't seem related to any official organization connected to world wide web standards or anything like that.<br/>
<br/>
<br/>Third, the certificate cost ~$150.<br/>
<br/>
Fourth, if you're selling a domain, you shouldn't need to purchase any certificate of any sort!<br/>
Validation of ownership is done via whois, and if your details aren't there, you can simply transfer the domain via an escrow service, which will easily protect both sides in the transfer.<br/>
There is no reason for someone selling a domain to do any due diligence. If someone ever asks you for this, it's bullshit. They can (and should) do all the research they want on their own, before purchasing the domain from you.<br/>
Finally, I believe that the whole domain appraisal business is bullshit! I see people on flippa writing "this site got an appraisal of Xk dollars!". I don't believe there's an actual way to do a domain appraisal, and even if there is, it shouldn't mean anything to the seller. You should sell the site for as much as you can get someone to pay for it. If you can't get someone to pay more for it, then it doesn't matter how much you *think* it's worth; it obviously isn't.<br/>
<br/>
<br/>
<b>Benefit of the doubt</b><br/>
At this point, although I was sure it was a scam, I wanted to see how this would play out. Obviously I wasn't going to buy a domain certificate, but I replied kindly, stating that I'm not going to spend the money on the certificate, but that the buyer can pay for it if they want to.<br/>
I explained that I'm willing to do an escrow transfer so we're both protected.<br/>
Needless to say, I didn't get any response back...<br/>
<br/>
<br/>
<b>Beware of scams!</b><br/>
There will always be people out there thinking of elaborate ways to scam you, and spending time and money crafting their techniques. The only reason they continue to do this is because it works to some extent.<br/>
Let's try to stop it, so the scammers will eventually realize it's not worth it either.<br/>
<br/>
<span style="font-size:18px; font-weight:bold;">AngularJS custom directive with two-way binding using NgModelController</span><br/>
It took me a while, but I finally got it right!<br/>
I recently tried to create a custom directive with 2 way binding, using the 'ng-model' attribute. This was a little tricky at first - I ran into some articles, but they didn't seem to work for me, and I needed to make some tweaks to get it right.<br/>
<br/>
I don't want to go over everything I read, but just want to publish the changes or gotchas you should know about.<br/>
<br/>
The best article I read on the subject is this one : <a href="http://www.chroder.com/2014/02/01/using-ngmodelcontroller-with-custom-directives/">http://www.chroder.com/2014/02/01/using-ngmodelcontroller-with-custom-directives/</a><br/>
I recommend reading it. It has the best explanation of how <b>'$formatters'</b> and <b>'$parsers'</b> work, and what their relation is to the ngModelController.<br/>
<br/>
<br/>
After reading that article, there were 2 problems I ran into.<br/>
<br/>
<b>1.</b> ngModelController.$parsers and ngModelController.$formatters are arrays, but 'pushing' my custom function to the end of the array didn't work for me. When changing the model, it never got invoked. To make this work, I needed to push it to the beginning of the array, using the <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/unshift">Array.prototype.unshift</a> method.
<br/><br/>
<b>2.</b> The second problem I had was that I needed to pass ng-model an object - passing it a primitive value won't work. You might think that's obvious, since a primitive won't give the directive a reference to update, but it wasn't obvious to me, since passing ng-model a primitive when using an 'input' element, for example, works and still updates both ways.
<br/>
<br/>
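Putting both gotchas together, a two-way binding directive ends up looking something like this. (This is a minimal sketch of my own - the 'myDropdown' directive, its template and its 'items' binding are made-up placeholders, not code from the article.)<br/>
<pre class="brush:javascript">
app.directive('myDropdown', function() {
    return {
        restrict: 'E',
        require: 'ngModel',    // hands us the ngModelController as the 4th link() argument
        scope: { items: '=' },
        template: '<ul><li ng-repeat="item in items" ng-click="select(item)">{{item.label}}</li></ul>',
        link: function(scope, elem, attrs, ngModelCtrl) {
            // gotcha #1 : unshift (not push), so our function actually gets invoked
            ngModelCtrl.$formatters.unshift(function(modelValue) {
                return modelValue;
            });

            // model -> view : angular calls $render when the model changes
            ngModelCtrl.$render = function() {
                scope.selected = ngModelCtrl.$viewValue;
            };

            // view -> model : gotcha #2 - we pass a whole object, not a primitive value
            scope.select = function(item) {
                scope.selected = item;
                ngModelCtrl.$setViewValue(item);
            };
        }
    };
});
</pre>
In markup, this would then be used like : <my-dropdown items="options" ng-model="selectedItem"></my-dropdown><br/>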
For a full working example of a two-way binding directive using ngModelController (the ng-model attribute), you can take a look at this:<br/>
<a href="https://github.com/gillyb/angularjs-helpers/tree/master/directives/dropdown">https://github.com/gillyb/angularjs-helpers/tree/master/directives/dropdown</a>
<br/>
<br/>
<span style="font-size:18px; font-weight:bold;">Reviewing Kibana 4's client side code</span><br/>
I haven't written anything technical for a while, and that's mainly because over the past year I changed jobs a few times. After working at Sears Israel for almost 3 years, I thought it was time to find the next adventure. I think I finally found a good match for me, and I'll probably write a whole post about that soon. <br/>
<br/>
For now, I'll just say that at the new startup I work at, we're doing a lot of work on the ELK stack, and I got to do a lot of work on Kibana. Even with years of experience on various client side applications, I still learned a lot from looking at kibana's code. I think there are many things here written really elegantly, so I wanted to point them out in a concentrated post on the subject. There are also a few negative points, mainly minor things (in my opinion), which I will mention as well.<br/>
<br/>
<br/>
<span style="font-size:16px; font-weight:bold;">At First Glance</span><br/><br/>
Kibana 4 is a large AngularJS application. The first thing I noticed when looking at the code is that it has a great structure. Many AngularJS tutorials (or any other tutorials for MVC frameworks) and code-bases I've worked on have the messy structure of a 'models' directory, a 'controllers' directory, and a 'views' (or 'templates') directory.<br/>
Kibana did the right thing by organising the code by features/components, and not by code-framework definitions. This makes it much easier to navigate through the code base, and to easily add more features.<br/>
Having a code base organised by controllers, models, views, etc, doesn't do much for your code base except turn each directory into a pile of unrelated features, violating the <a href="http://en.wikipedia.org/wiki/Separation_of_concerns">Separation of Concerns</a> principle.<br/>
<br/>
<a href="http://2.bp.blogspot.com/-D14KZw-W0dc/VT0_RTjxvJI/AAAAAAAAA2s/OmXuMQwG7Y0/s1600/kibana_structure.png" imageanchor="1" ><img border="0" src="http://2.bp.blogspot.com/-D14KZw-W0dc/VT0_RTjxvJI/AAAAAAAAA2s/OmXuMQwG7Y0/s320/kibana_structure.png" /></a>
<br/>
(In the image you can see each component grouped in its own directory, which includes its templates, its code and its styles all together)
<br/><br/>
In addition, most AngularJS applications I've seen have all their routes defined in one file (usually app.js or index.js), which goes along with many global definitions, and sometimes logic related to specific pages or models, all in a single file with no relation to any feature.<br/>
Kibana's code is nicely organised, and each 'plugin' or 'component' (discover/visualize/dashboard/settings/etc) defines its own routes in its own controller. <br/>
They manage to do this by creating their own 'RouteManager' (<a href="https://github.com/elastic/kibana/blob/master/src/kibana/utils/routes/index.js">https://github.com/elastic/kibana/blob/master/src/kibana/utils/routes/index.js</a>). This basically defines the same api as angular's RouteManager, but it collects the routes you define, and in the end calls angular's route manager to actually add them (by calling routes.config here : <a href="https://github.com/elastic/kibana/blob/master/src/kibana/index.js#L41">https://github.com/elastic/kibana/blob/master/src/kibana/index.js#L41</a>).<br/>
This custom route manager also adds the ability to resolve certain things before the route is called, which is really useful in many situations.<br/>
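Defining a route through it then looks roughly like this (an illustrative sketch based on the description above - the path, templateUrl and resolve entries are invented, and the api mirrors angular's $routeProvider) :<br/>
<pre class="brush:javascript">
define(function (require) {
    var routes = require('routes');

    routes.when('/discover', {
        templateUrl: 'plugins/discover/index.html',
        // resolved before the route is entered - the useful part mentioned above
        resolve: {
            savedSearch: function (savedSearches) {
                return savedSearches.get();
            }
        }
    });
});
</pre>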
<br/>
<br/>
<span style="font-size:16px; font-weight:bold;">Javascript Libraries</span><br/><br/>
The creators of kibana did a great job (with a few minor exceptions that I will explain at the end) in choosing many open source javascript libraries to lean on while building it. It's usually a good idea not to reinvent the wheel, especially when someone already did a good job before you.
<br/>
<br/>
<u>RequireJS</u><br/>
RequireJS is a javascript module loader. It helps you create modular javascript code, and makes dealing with dependencies between modules really easy. Kibana's code does a great job utilizing RequireJS by defining most javascript modules in the AMD standard.<br/>
<br/>
A really nice trick they did here that is definitely worth mentioning is the <b>'Private'</b> service they created. This is a wrapper that allows you to define a RequireJS module, with angularJS dependencies. This allows you to use angular's dependency injection abilities side-by-side with RequireJS' DI abilities.<br/>
<br/>
Regularly loading RequireJS modules in the code looks like this :<br/>
<pre class="brush:javascript">
define(function(require) {
    var myService = require('my_service');

    // now do something with myService
});
</pre>
<br/>
Using the 'Private' service you load modules like this :<br/>
<pre class="brush:javascript">
define(function(require) {
    var myAngularService = Private(require('my_angular_service'));

    // now you can use myAngularService
});
</pre><br/>
And most importantly, my_angular_service looks like this :<br/>
<pre class="brush:javascript">
define(function(require) {
    return function($q, $location, $routeParams) {
        // all angular providers in the function parameters are available here!
    };
});
</pre>
<br/>
The Private service uses angular's get() method to retrieve the $injector provider, and uses it to inject the dependencies we need.<br/>
(Take a look at the 'Private' service code here : <a href="https://github.com/elastic/kibana/blob/master/src/kibana/services/private.js">https://github.com/elastic/kibana/blob/master/src/kibana/services/private.js</a>)<br/>
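Conceptually, it boils down to something like this (a simplified sketch of my own, not the actual kibana implementation) :<br/>
<pre class="brush:javascript">
app.factory('Private', function ($injector) {
    return function Private(provider) {
        // $injector.invoke reads the module function's parameters
        // ($q, $location, $routeParams, ...) and injects the matching angular services
        return $injector.invoke(provider);
    };
});
</pre>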
<br/>
<br/>
<u>lodash!</u><br/>
If you're not familiar with lodash, you should be. It's the missing javascript utility library that will definitely help you DRY up your javascript code. It has many "LINQ"-like methods (for those familiar with C#), and many other basic methods you would usually write yourself to iterate over objects and arrays in javascript. One really nice feature of lodash is that most methods can be chained to make your code more readable, and lodash uses lazy evaluation, so performance is amazing!<br/>
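As a quick taste, here's an illustrative chain of my own (assuming a 'users' array), not from the kibana code :<br/>
<pre class="brush:javascript">
var _ = require('lodash');

// chained, and thanks to lazy evaluation the collection
// is only traversed when .value() is finally called
var names = _(users)
    .filter(function (user) { return user.active; })
    .map(function (user) { return user.name; })
    .take(3)
    .value();
</pre>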
<br/>
I don't want to start writing about the features of lodash, but I strongly suggest reading their <a href="https://lodash.com/docs">docs</a>, and getting familiar with it.<br/>
Almost every service, component or controller in the kibana code starts with this line :<br/>
<pre class="brush:javascript">
var _ = require('lodash');
</pre>
<br/><br/>
They also did a really good job extending lodash with some utility methods of their own. Take a look at these files to see for yourself :<br/>
<a href="https://github.com/elastic/kibana/blob/master/src/kibana/utils/_mixins_chainable.js">https://github.com/elastic/kibana/blob/master/src/kibana/utils/_mixins_chainable.js</a><br/>
<a href="https://github.com/elastic/kibana/blob/master/src/kibana/utils/_mixins_notchainable.js">https://github.com/elastic/kibana/blob/master/src/kibana/utils/_mixins_notchainable.js</a><br/>
<br/>
(There's one thing I don't like here, which is the methods 'get' and 'setValue' - they do a 'deepGet' and a 'deepSet', which is like saying "hey, I know I have something here in this object, but I have no idea where it is". This just doesn't feel right... :/ )<br/>
<br/>
<br/>
<span style="font-size:16px; font-weight:bold;">Some HTML5</span><br/>
<br/>
Throughout the code there is some good use of html5 features. <br/>
The first one I noticed and really liked is the <b>'Notifier'</b> service (<a href="https://github.com/elastic/kibana/blob/master/src/kibana/components/notify/_notifier.js">https://github.com/elastic/kibana/blob/master/src/kibana/components/notify/_notifier.js</a>). I really like the abstraction here over notifying the user of different message types, and the abstraction over the browser's 'console' methods. The <b>'lifecycle'</b> method (<a href="https://github.com/elastic/kibana/blob/master/src/kibana/components/notify/_notifier.js#L139">https://github.com/elastic/kibana/blob/master/src/kibana/components/notify/_notifier.js#L139</a>) is really neat, and uses the <b><a href="https://developer.mozilla.org/en-US/docs/Web/API/Console/group">console.group()</a></b> method to group messages in the browser's console. It also uses <b>'<a href="https://developer.mozilla.org/en-US/docs/Web/API/Performance/now">window.performance.now</a>'</b> which is really nice, and much better than using the older <b>'Date.now()'</b> method (it's more exact, and it's relative to the navigationStart metric).<br/>
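The general pattern behind it looks like this (an illustrative sketch, not the Notifier code itself) :<br/>
<pre class="brush:javascript">
var start = window.performance.now();

console.group('lifecycle: fetch');    // everything logged until groupEnd() is nested
console.log('started');
// ... do the actual work ...
console.log('complete in ' + (window.performance.now() - start).toFixed(2) + 'ms');
console.groupEnd();
</pre>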
<br/>
Kibana also makes use of the less-common <wbr/> tag. It was standardized in html5, and is intended to give you a little more control over where the line breaks when text overflows its container.<br/>
<br/>
There's also use of 'localStorage' and 'sessionStorage' for saving many local view settings in the different kibana pages. In general, they did a great job of persisting the user's state on the client side. When navigating between tabs, it returns you to the last view you were in when you come back to a tab.<br/>
<br/>
Another nice thing is that there is a lot of use of <a href="https://developer.mozilla.org/en-US/docs/Web/Accessibility/An_overview_of_accessible_web_applications_and_widgets">aria-* attributes</a>, and recently I've seen more and more of this in the newer commits. It's nice to see a big open source project dedicating time to these kinds of details.<br/>
<br/>
<br/>
<span style="font-size:16px; font-weight:bold;">Object Oriented Programming</span><br/>
<br/>
There is a great deal of attention to the design of objects in the code.<br/>
First, I like the way inheritance is implemented here. A simple lodash 'mixin' allows for object inheritance.<br/>
<br/>
<pre class="brush:javascript">
inherits: function (Sub, Super) {
    Sub.prototype = Object.create(Super.prototype, {
        constructor: {
            value: Sub
        },
        superConstructor: Sub.Super = Super
    });
    return Sub;
}
</pre>
(<a href="https://github.com/elastic/kibana/blob/master/src/kibana/utils/_mixins_chainable.js#L23">https://github.com/elastic/kibana/blob/master/src/kibana/utils/_mixins_chainable.js#L23</a>)<br/>
<br/>
Many objects in the code use this to inherit all the properties of some base object. Here's an example from the 'SearchSource' object :<br/>
<pre class="brush:javascript">
return function SearchSourceFactory(Promise, Private) {
    var _ = require('lodash');
    var SourceAbstract = Private(require('components/courier/data_source/_abstract'));
    var SearchRequest = Private(require('components/courier/fetch/request/search'));
    var SegmentedRequest = Private(require('components/courier/fetch/request/segmented'));

    _(SearchSource).inherits(SourceAbstract);
    function SearchSource(initialState) {
        SearchSource.Super.call(this, initialState);
    }

    // more SearchSource object methods
}
</pre><br/>
(<a href="https://github.com/elastic/kibana/blob/master/src/kibana/components/courier/data_source/search_source.js#L9">https://github.com/elastic/kibana/blob/master/src/kibana/components/courier/data_source/search_source.js#L9</a>)<br/>
<br/>
You can see the SearchSource object inherits all the base properties from the SourceAbstract object.<br/>
<br/>
In addition, all the methods that would otherwise be duplicated per instance are defined on the object's prototype. This is great mainly for memory usage. Putting a method on the object's prototype makes sure there's only one copy of the method in memory, no matter how many instances are created.<br/>
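In other words (an illustrative example of my own, not from the kibana code) :<br/>
<pre class="brush:javascript">
function DataSource(state) {
    this.state = state;    // per-instance data
}

// one shared copy in memory, no matter how many DataSource instances exist
DataSource.prototype.toJSON = function () {
    return JSON.stringify(this.state);
};
</pre>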
<br/>
<br/>
<span style="font-size:16px; font-weight:bold;">Memory Usage</span><br/>
<br/>
Since kibana is a big single-page application, there is a need to be careful with memory usage. Many apps like kibana can be left open in a browser for a long time without any refresh, so it's important to make sure there are no memory leaks. AngularJS makes this easy to handle, but many programmers don't bother going the extra mile.<br/>
In the kibana code, many directives subscribe to the <b>'$destroy'</b> event and unbind event handlers so they don't hold references to unused objects.<br/>
<br/>
An example from a piece of kibana code (the css_truncate directive) :<br/>
<pre class="brush:javascript">
$scope.$on('$destroy', function () {
    $elem.unbind('click');
    $elem.unbind('mouseenter');
});
</pre>
(<a href="https://github.com/elastic/kibana/blob/master/src/kibana/directives/css_truncate.js#L41">https://github.com/elastic/kibana/blob/master/src/kibana/directives/css_truncate.js#L41</a>)<br/>
<br/>
<br/>
<span style="font-size:16px; font-weight:bold;">Code Conventions</span><br/>
<br/>
Kibana's code is mostly very organized, and more importantly, readable. A small negative point goes here for some inconsistencies in naming. Some classes have public methods that start with '_' and some don't.<br/>
<br/>
For an example of this, look at the DocSource object. The file even has 'Public API' and 'Private API' comments, but the difference in naming conventions between the two isn't clear.<br/>
(<a href="https://github.com/elastic/kibana/blob/master/src/kibana/components/courier/data_source/doc_source.js">https://github.com/elastic/kibana/blob/master/src/kibana/components/courier/data_source/doc_source.js</a>)<br/>
<br/>
<br/>
<span style="font-size:16px; font-weight:bold;">Code Comments</span><br/>
<br/>
I can say the code has enough comments, though it's hard to judge how many that actually is, since most of the code is readable without comments, which is an amazing thing. There are great comments in most places that should have them.<br/>
<br/>
Just as a funny anecdote, I was surprised to see comments that actually draw, in ascii art, the function they describe! Kudos!<br/>
<pre class="brush:javascript">
/**
 * Create an exponential sequence of numbers.
 *
 * Creates a curve resembling:
 *
 *                                                   ;
 *                                                  /
 *                                                 /
 *                                              .-'
 *                                          _.-"
 *                                      _.-'"
 *                                 _,.-'"
 *                           _,..-'"
 *                    _,..-'""
 *             _,..-'""
 * ____,..--'""
 *
 * @param {number} min - the min value to produce
 * @param {number} max - the max value to produce
 * @param {number} length - the number of values to produce
 * @return {number[]} - an array containing the sequence
 */
createEaseIn: _.partialRight(create, function (i, length) {
    // generates numbers from 1 to +Infinity
    return i * Math.pow(i, 1.1111);
})
</pre><br/>
(<a href="https://github.com/elastic/kibana/blob/master/src/kibana/utils/sequencer.js#L29">https://github.com/elastic/kibana/blob/master/src/kibana/utils/sequencer.js#L29</a>)<br/>
<br/>
<br/>
<span style="font-size:16px; font-weight:bold;">CSS Styling</span><br/>
<br/>
Another great success here was using the 'less' format for css files. This allows for small and concise 'less' files, and easy reuse of css components (known as 'mixins'). An especially great job was done with colors - all colors are defined in a single file (<a href="https://github.com/elastic/kibana/blob/master/src/kibana/styles/theme/_variables.less">https://github.com/elastic/kibana/blob/master/src/kibana/styles/theme/_variables.less</a>). By editing this file, you can easily create your own color scheme.<br/>
<br/>
(There are a few exceptions - mainly a few colors defined in js files or css files, but it's 99% covered in _variables.less.)<br/>
<br/>
<br/>
<span style="font-size:16px; font-weight:bold;">Build Process</span><br/>
<br/>
Kibana has a grunt build process set up. It compiles the less files to css, combines them and the js files (without minifying, using r.js), adds parameters to the resource urls for cache-busting, and handles some more small tasks.<br/>
I would be happy to see this upgraded to using gulp, which is stream based and has a much nicer api (in my opinion), but grunt still does the job.<br/>
<br/>
<br/>
<span style="font-size:16px; font-weight:bold;">Performance</span><br/>
<br/>
After writing so many good points about kibana's source code, this is the part where my feedback is less positive.<br/>
Maybe it's because kibana was built as an internal tool that isn't meant to be served over the internet, and maybe it's just because I'm overly sensitive after working for quite a while on the performance team at Sears Israel (working on ShopYourWay.com). Either way, if it were an online website, its performance would be considered under-par.<br/>
<br/>
JS files aren't minified. They are combined, but not minified. Unfortunately, the code isn't even prepared for minification. To support it, angularjs dependencies need to be declared as strings before the function itself; otherwise, once the minifier renames the function parameters, angularjs's dependency injection mechanism won't work.<br/>
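For reference, the minification-safe way of declaring dependencies looks like this (the controller name here is a made-up example) :<br/>
<pre class="brush:javascript">
// the string names survive minification, even after the function
// parameters themselves get renamed to something like 'a' and 'b'
app.controller('DiscoverController', ['$scope', '$route', function ($scope, $route) {
    // ...
}]);
</pre>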
<br/>
CSS files aren't minified either, just combined.<br/>
<br/>
JS files are ~5MB !!! Yes, almost 5MB!! That's huge, and it's all downloaded on kibana's initial load. This could've been split into a few separate files, downloading only the ones needed for the initial view first. That alone would be a great improvement.
That said, there are advantages to not minifying the js, and I think that's what the creators had in mind - it's easier to debug with DevTools (no need for mapping files), and although the initial load takes a long time, after that there is no wait on any other page. If the resources are cached on your machine, then even coming back to kibana a second time should be really fast.<br/>
<br/>
There are also some libraries in the source code which I think are redundant and could've been removed with a little extra work. One example is jquery, which is generally frowned upon alongside angularjs. AngularJS comes with jqlite, a smaller version of jquery, which should suffice.<br/>
<br/>
I hope it doesn't sound like I think they did a bad job - I'm pointing out some areas in the code that maybe could've been done differently. All in all the app is amazing, and works great! :)<br/>
<br/>
<br/>
<span style="font-size:16px; font-weight:bold;">In conclusion</span><br/>
<br/>
I had a great time learning from and working (and still working) on kibana's code. I tried to show a lot of the things I like about the code, and to point out a few minor bad parts. I hope you enjoyed reading this, and kudos to you if you got to this point! :)<br/>
<br/>
I also hope to write another post about how kibana communicates with elasticsearch, and maybe another on how it renders the visualizations with the help of D3.js.<br/>
<br/>
<span style="font-size:18px; font-weight:bold;">When 7 Billion users just aren't enough...</span><br/>
<br/>
It seems like the only thing threatening facebook's user growth is earth's mortality rate.<br/>
It also seems like they're not going to let that stop them from growing even more anytime soon!<br/>
<br/>
Facebook just released a new feature that allows you to assign another facebook account to control your account after you pass away.<br/>
<br/>
<a href="http://money.cnn.com/2015/02/12/technology/facebook-legacy-contact/">http://money.cnn.com/2015/02/12/technology/facebook-legacy-contact/</a><br/>
<br/>
I personally give them a HUGE amount of credit for the creativity. Let this be a lesson to all of us about making the most out of a situation where resources are limited.<br/>
<br/>
<span style="font-size:18px; font-weight:bold;">Simple nodejs desktop time tracking utility</span><br/>
<br/>
I recently wrote about <a href="http://www.debuggerstepthrough.com/2014/10/desktop-applications-with-nodejs-as-if.html">Creating desktop applications with nodejs</a>...<br/>
Well, I was playing around a little with node-webkit (again!) - a nodejs framework for building cross-platform desktop applications. Within a few hours I built a super simple time tracking utility that I had needed for quite some time!<br/>
<br/>
I know there are a ton of utilities like this out there already, but all of them have many more features than I want or need, and they annoy me too much while I'm using them. This utility does *nothing* but track time. You just add a task and it starts timing it. You can stop and start tasks, and just remove them when you're done.<br/>
<br/>
I'm not so fanatical about time productivity (yet!) that I need history graphs to show me how productive I've been lately. It's actually more for me to see whether the tasks I'm working on take as long as I think they should.<br/>
<br/>
So here it is: <a href="https://github.com/gillyb/tt-trakr">https://github.com/gillyb/tt-trakr</a><br/>
<br/>
All the code is there.<br/>
There's also a compiled executable for windows ready inside the 'Installation' folder.<br/>
Now that I have a mac, I want to compile it for mac soon too.<br/>
(I'll also probably be making some UI improvements and maybe adding some more small features in the future, so follow the repository if you're interested.)<br/>
<br/>
And here's a picture of what it looks like :<br/>
<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-14D2UudrIV0/VONSGZCfM6I/AAAAAAAAAtc/eAB--_uQoEs/s1600/screenshot.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-14D2UudrIV0/VONSGZCfM6I/AAAAAAAAAtc/eAB--_uQoEs/s400/screenshot.png" /></a></div>
<br/>
:),<br/>
Gilly.<br/>
<br/>
<span style="font-size:18px; font-weight:bold;">Online multiple javascript compression tool</span><br/>
Minifying/compressing javascript files has been standard practice in website development for a while now. It is very important to save as much space as you can, and have your users download as little as possible, to improve the performance of your site.<br/>
I don't think you'll find an article out there that denies it.<br/>
<br/>
An important question that arises from this is: which compressor/minifier should you use ?<br/>
Different famous open source projects use different compressors, and I'm guessing (or at least hoping) they chose them wisely relying on benchmarks they did on their own.<br/>
You see, each compressor works differently, so different code bases won't be affected in the same way by different compressors.<br/>
<br/>
In the past I used to manually test my code against different compressors to see which one was best for me. I finally got sick of doing it manually, so I decided to look for a tool that would do the job for me. Surprisingly, I didn't find one that did exactly that, so I quickly wrote a script that would. Then I decided to design a UI for it and put it online for others to enjoy as well.<br/>
<br/>
I present to you : <a href="http://compress-js.com">http://compress-js.com</a><br/>
You can paste text, or drag some js files onto it, and choose which compressor you want. Or - and this is the default method - choose 'Check them all', which will compress your code using the most popular compressors and show you the results, including the compressed size from each of them. You can download the files directly from the site.<br/>
<br/>
Here's a screenshot :<br/>
<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-54LUzfzTeK8/VF5ZoJ7Ne1I/AAAAAAAAAgc/kGBLaWZ1trg/s1600/Screen%2BShot%2B2014-11-08%2Bat%2B7.49.27%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-54LUzfzTeK8/VF5ZoJ7Ne1I/AAAAAAAAAgc/kGBLaWZ1trg/s640/Screen%2BShot%2B2014-11-08%2Bat%2B7.49.27%2BPM.png" /></a></div>
<br/>
<br/>
Currently the site can compress your javascript code with YUI Compressor, UglifyJS, JSMin and Google's Closure Compiler.<br/>
If you have any thoughts or suggestions on how to improve it, feel free to drop a comment below. :)<br/>
<br/>
<span style="font-size:18px; font-weight:bold;">Lazy loading directives in AngularJS the easy way</span><br/>
The past few months I've been doing a lot of work with AngularJS, and currently I'm working on a single page application which is supposed to be quite big in the end. Since I have the privilege of building it from scratch, I'm taking many client-side performance considerations in mind now, which I think will save me a lot of hard work optimizing in the future.<br/>
<br/>
One of the main problems is HUGE amounts of js files being downloaded to the user's computer. A great way to avoid this is to only download the minimum the user needs, and dynamically load more resources in the background, or as the user reaches pages that require a specific feature.<br/>
<br/>
AngularJS is a great framework, but it doesn't have anything built in that deals with this, so I did some research myself...<br/>
I ran into some great articles on the subject, which really helped me a lot (and which I took some ideas from), but they weren't perfect.<br/>
A great article on the subject is this one : <a href="http://www.bennadel.com/blog/2554-loading-angularjs-components-with-requirejs-after-application-bootstrap.htm">http://www.bennadel.com/blog/2554-loading-angularjs-components-with-requirejs-after-application-bootstrap.htm</a><br/>
The important part is that it explains how to dynamically load angularjs directives (or other components) after bootstrapping your angularjs app.<br/>
What I didn't like about this article is that the writer's example requires RequireJS and jQuery along with all the AngularJS files you already have. This alone will make your app really heavy, and I don't think it needs to be like this.<br/>
<br/>
Let me show you how I wrote a simple AngularJS service that can dynamically load directives.<br/>
<br/>
The first crucial step is to save a reference to $compileProvider. This provider is available to us while bootstrapping, but not later, and it is the provider that will compile our directive for us.<br/>
<pre class="brush:javascript">
var app = angular.module('MyApp', ['ngRoute', 'ngCookies']);

app.config(['$routeProvider', '$compileProvider', function($routeProvider, $compileProvider) {

    $routeProvider.when('/', {
        templateUrl: 'views/Home.html',
        controller: 'HomeController'
    });

    app.compileProvider = $compileProvider;

}]);
</pre>
<br/>
<br/>
Now, we can write a service that will load our javascript file on demand, and compile the directive for us, to be ready to use.<br/>
This is a simplified version of what it should look like :<br/>
<pre class="brush:javascript">
app.service('LazyDirectiveLoader', ['$rootScope', '$q', '$compile', function($rootScope, $q, $compile) {

    // This is a dictionary that holds which directives are stored in which files,
    // so we know which file we need to download for the user
    var _directivesFileMap = {
        'SexyDirective': 'scripts/directives/sexy-directive.js'
    };

    var _load = function(directiveName) {

        // make sure the directive exists in the dictionary
        if (!_directivesFileMap.hasOwnProperty(directiveName)) {
            console.log('Error: unrecognized directive : ' + directiveName);
            return $q.reject('unrecognized directive');  // still return a promise, so callers can chain .then()
        }

        var deferred = $q.defer();
        var directiveFile = _directivesFileMap[directiveName];

        // download the javascript file
        var script = document.createElement('script');
        script.src = directiveFile;
        script.onload = function() {
            $rootScope.$apply(deferred.resolve);
        };
        document.getElementsByTagName('head')[0].appendChild(script);

        return deferred.promise;
    };

    return {
        load: _load
    };

}]);
</pre>
<br/>
<br/>
Now we are ready to load a directive, compile it and add it to our app so it is ready for use.<br/>
To use this service we will simply call it from a controller, or any other service/directive like this:<br/>
<pre class="brush:javascript">
app.controller('CoolController', ['LazyDirectiveLoader', function(LazyDirectiveLoader) {

    // lets say we want to load our 'SexyDirective' - all we need to do is this :
    LazyDirectiveLoader.load('SexyDirective').then(function() {
        // now the directive is ready...
        // we can redirect the user to a page that uses it!
        // or dynamically add the directive to the current page!
    });

}]);
</pre>
<br/>
<br/>
One last thing to notice is that your directives now need to be defined using '$compileProvider', and not the way we would define them regularly. This is why we exposed $compileProvider on our 'app' object earlier. So our directive js file should look like this:<br/>
<pre class="brush:javascript">
// note: registered in camelCase ('sexyDirective'), so it matches <sexy-directive> elements
app.compileProvider.directive('sexyDirective', function() {
    return {
        restrict: 'E',
        template: '<div class=\"sexy\"></div>',
        link: function(scope, element, attrs) {
            // ...
        }
    };
});
</pre>
<br/>
<br/>
I wrote earlier that this is a simplified version of what it should look like, since there are some changes that I would make before using it as is.<br/>
First, I would probably add some better error handling to look out for edge cases.<br/>
Second, we wouldn't want pages to attempt to download the same files several times, so I would probably add a cache mechanism for loaded directives.<br/>
Also, I wouldn't want the list of directive files (the variable _directivesFileMap) directly in my LazyDirectiveLoader service, so I would probably create a service that holds this list and inject it into LazyDirectiveLoader. The service that holds the list would be generated by my build system (in my case, a gulp task I created for this). This way I don't need to manually keep the file map updated.<br/>
Finally, I think I would take the part that loads the javascript file out into a separate service, so I could easily mock it in the tests I write. I don't like touching the DOM in my services, and if I have to, I'd rather isolate it in a separate service that I can easily mock.<br/>
<br/>
I uploaded a slightly better (and a little less simplified) version of this over here : <a href="https://github.com/gillyb/angularjs-helpers/tree/master/directives/lazy-load">https://github.com/gillyb/angularjs-helpers/tree/master/directives/lazy-load</a>
<br/>
<br/>
<span style="font-size:18px; font-weight:bold;">Desktop applications with nodejs! ...as if winforms and wpf aren't dead already!</span><br/>
I used to disfavor javascript compared to other languages because it wasn't type-safe, it was hard to refactor, hard to write tests for, hard to find usages in the code ...and the list goes on...<br/>
The past few years though, some amazing things have happened in the world that now make javascript an amazing language!<br/>
<br/>
IDEs got much better! My personal favorite is WebStorm, which has great auto-completion in javascript and supports many frameworks like nodejs and angular.<br/>
<br/>
Web frameworks got much better! Newer and more advanced frameworks like angularJS and Ember allow you to write really organized and well structured javascript on the client side.<br/>
<br/>
V8 was created and open sourced, which brought a whole variety of new tools to the table. Some of them being headless browsers like phantomJS which are great for automation testing, and creating quick web crawling scripts.<br/>
<br/>
And my personal favorite - NodeJS! This tool is amazing! It can do so many things from being a fully functional and scalable backend server to a framework for writing desktop applications.<br/>
<br/>
<br/>
While looking into the code of PopcornTime I realized it was written in nodejs, with a framework called node-webkit. This was an amazing concept to me. It's basically a wrapper, that displays a frame with a website in it. The 'website' displayed is your typical client side code - html, javascript and css, so obviously you can use any framework you like, like angular or ember. This 'website' which is displayed in the frame can use all nodejs modules (directly in the js code) which gives you access to the operating system - you can access the file system, databases, networks and everything else you might need. Since nodejs runs on all major operating systems, you can also 'compile' your desktop app to run on any platform.<br/>
You can wrap all this up as an executable file ('.exe' on windows) and easily tweak it not to show the toolbar, which means the user has no way of knowing it's actually a website 'beneath' the desktop application they're using.<br/>
<br/>
<br/>
The steps taken to create a simple desktop application with node-webkit are super-simple!<br/>
(and easier than building a desktop application with any other language i've tried!)<br/>
<br/>
First, I'm assuming you have nodejs and npm installed.<br/>
Now, download node-webkit : <a href="https://github.com/rogerwang/node-webkit#downloads">https://github.com/rogerwang/node-webkit#downloads</a><br/>
<br/>
Start building your application just like you would a website. You can use the browser just like you're used to, to see your work.<br/>
When you want to start accessing node modules, you'll need to start running it with node-webkit.<br/>
In order to do this, just run the node-webkit executable from the command line with your main html file as a parameter.<br/>
<br/>
<pre class="brush:javascript">
C:\Utilities\node-webkit\nw.exe index.html
</pre>
<br/>
<br/>
This will open your website as a desktop application.<br/>
<br/>
You can now access all nodejs modules directly from the DOM!<br/>
Some of the operating system's APIs are wrapped as node modules as well, so you can create a tray icon, native window menus, and much much more..<br/>
<br/>
Debugging the app is also super simple, and can easily be done with the Developer Tools, just like you would in Chrome! (You just need to configure your app to open with the toolbar visible, which you can define in your package.json file while developing.)<br/>
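A minimal node-webkit package.json looks something like this (an illustrative sketch - the name and window sizes are placeholders) :<br/>
<pre class="brush:javascript">
{
    "name": "my-app",
    "main": "index.html",
    "window": {
        "toolbar": true,
        "width": 800,
        "height": 600
    }
}
</pre>
Setting "toolbar" to false when you package the app is what hides the browser chrome from the user.<br/>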
<br/>
<br/>
I see so many benefits to creating desktop applications like this, so I'm expecting to see many more apps running on this framework (or other nodejs-based frameworks) in the near future. (Except for heavy algorithmic software, which would probably be better off written in C/C++. Hence, I'm not expecting the next Photoshop to be written in nodejs, but there are a ton of good examples out there which should be!)<br/>
<br/>
<br/>
Some good references :<br/>
- <a href="https://github.com/rogerwang/node-webkit">Node-Webkit Github page</a><br/>
- <a href="http://code.tutsplus.com/tutorials/introduction-to-html5-desktop-apps-with-node-webkit--net-36296">Introduction to HTML5 Desktop apps with node-webkit</a> (a great tutorial to get started)<br/>Gilly Barrhttp://www.blogger.com/profile/15736348037155591283noreply@blogger.com12tag:blogger.com,1999:blog-3031040842731199760.post-80087126667915230102014-08-09T13:03:00.001-07:002014-08-09T13:05:39.502-07:00AngularJS hack/tip :: Invoking JS code after DOM is ready<a href="http://anyurl.com" rel="tag" style="display:none;">CodeProject</a><br/>
When working with AngularJS, you frequently update the DOM after the DOM was already 'ready'.<br/>
What I mean by that is that the browser will load the DOM, and the template will completely load. BUT, your template might have an 'ng-if' or 'ng-repeat' directive that will only be attached to the DOM slightly later, since you might be setting its data from an ajax response inside the controller.<br/>
<br/>
This will happen when your code is similar to this pattern :<br/>
<pre class="brush:javascript">
app.controller('MyAngularController', function($scope, $http) {

    $http.get('www.someURL.com/api').success(function(response) {
        // Add some data to the scope
        $scope.Data = response;

        // This caused the DOM to change
        // so invoke some js that will take care of the new DOM changes
        DoSomeJS();
    });

});
});
</pre><br/>
The main problem with this code is that most of the time, when DoSomeJS() is invoked, the DOM changes caused by the updates to $scope won't be 'ready' yet.<br/>
<br/>
This is because of the way angularJS is built -<br/>
Each property on the scope has a 'watcher' attached to it, checking it for changes. Once the property is changed, it invokes a '$digest' loop, which is responsible for updating the model and the view as well. This is invoked asynchronously (for performance reasons, I guess), and this actually gives you the great ability of invoking js code immediately after updating the scope, without waiting for the DOM to be updated - something you'll probably want as well from time to time.
(The nitty gritty details of how this works behind the scenes is interesting, but will take me too long to go through in this post. For the brave ones among us, I encourage you to look a bit into the code yourself --> <a href="https://github.com/angular/angular.js/blob/master/src/ng/rootScope.js#L667">https://github.com/angular/angular.js/blob/master/src/ng/rootScope.js#L667</a>)<br/>
<br/>
<br/>
<b>So, how can we invoke some JS code, and make sure it runs only after the DOM was updated ?</b><br/>
Well, one quick and hacky way is to let a js timer invoke your code with a '0' delay. Since JS is single-threaded, running a timer with a 0ms delay doesn't mean the code runs immediately. It pushes the code to 'the end of the line', and invokes it once the JS thread is free.<br/>
<br/>
The updated code looks like this :<br/>
<pre class="brush:javascript">
app.controller('MyAngularController', function($scope, $http, $timeout) {

    $http.get('www.someURL.com/api').success(function(response) {
        // Add some data to the scope
        $scope.Data = response;

        // This caused the DOM to change
        // so invoke some js that will take care of the new DOM changes
        $timeout(DoSomeJS);
    });

});
</pre>
Note: invoking '$timeout()' like we did is just like invoking 'setTimeout(fn, 0);' - $timeout is an angularJS service that wraps setTimeout.<br/>
A great read on how JS timers are invoked : <a href="http://javascriptweblog.wordpress.com/2010/06/28/understanding-javascript-timers/">Understanding Javascript timers</a>
<br/>
<br/>
<b>But wait, This whole solution is a hack, isn't it ?!...</b><br/>
Yes, and truth be told, when I first ran into this problem, this was the first solution I came up with. Only afterwards did I realize that I don't want any js code in my controller touching my DOM. <br/>
I still decided to write this post though, to explain a little about JS timers and angular $digest.<br/>
<br/>
The solution I would favor in this case is to put a custom directive on the DOM being inserted dynamically, and then add the code that modifies the DOM in the 'link' method of the directive.<br/>
<br/>
And the code should look more like this :<br/>
<pre class="brush:javascript">
app.directive('myDirective', function() {
    return {
        restrict: 'A',
        link: function(scope, elem, attrs) {
            // DO WHATEVER WE WANT HERE...
        }
    };
});
</pre><br/>
In angular, directives describe various elements of the templates, and therefore I feel they are the 'right' place for most of the code we need for modifying our DOM. I like to keep my controllers clean of DOM manipulation, and just have them construct the models they need to pass on to the template.<br/>
<br/>
<span style="font-size:18px; font-weight:bold;">Escaping '&' (ampersand) in razor view engine</span><br/>
Recently I ran into a really annoying problem with the asp.net razor view engine -<br/>
I was generating some urls on the server side, and trying to print them inside html tag attributes like 'href' or 'src'.<br/>
<br/>
The problem was that all the ampersands ('&') were being encoded to '&amp;'.<br/>
The first thing I tried was printing it out using the Html 'Raw' helper method, like this :<br/>
<pre class="brush:csharp">
<a href="@Html.Raw("http://www.myUrl.com?some=parameter&andAnother=parameter")">Some Link</a>
</pre><br/>
<br/>
This didn't work... :/<br/>
The weird thing about this was that when I searched the internet and found questions about it on stackoverflow, some people wrote that Html.Raw() worked for them and some said it didn't.<br/>
<br/>
After a little more research (mostly based on some trial & error), I realized that razor will always encode strings inserted into attribute values. This is done for security reasons. The proper workaround is to simply put the whole tag inside the 'Raw()' method, like this:<br/>
<pre class="brush:csharp">
@Html.Raw("<a href=\"http://www.myUrl.com?some=parameter&andAnother=parameter\">Some Link</a>")
</pre><br/>
<br/>
This basically tells razor - "I know what I'm doing, just let me do it my way!" :)<br/>
<br/>
<span style="font-size:18px; font-weight:bold;">Saving prices as decimal in mongodb</span><br/>
When working with prices in C#, you should always work with the 'decimal' type.<br/>
Working with the 'double' type can lead to a variety of rounding errors when doing calculations, and it is more intended for mathematical equations.<br/>
<br/>
(I don't want to go into details about what problems this can cause exactly, but you can read more about it here :<br/>
<a href="http://stackoverflow.com/questions/2129804/rounding-double-values-in-c-sharp">http://stackoverflow.com/questions/2129804/rounding-double-values-in-c-sharp</a><br/>
<a href="http://stackoverflow.com/questions/15330988/double-vs-decimal-rounding-in-c-sharp">http://stackoverflow.com/questions/15330988/double-vs-decimal-rounding-in-c-sharp</a><br/>
<a href="http://stackoverflow.com/questions/693372/what-is-the-best-data-type-to-use-for-money-in-c">http://stackoverflow.com/questions/693372/what-is-the-best-data-type-to-use-for-money-in-c</a><br/>
<a href="http://pagehalffull.wordpress.com/2012/10/30/rounding-doubles-in-c/">http://pagehalffull.wordpress.com/2012/10/30/rounding-doubles-in-c/</a> )<br/>
<br/>
I am currently working on a project that involves commerce and prices, so naturally I used 'decimal' for all price types.<br/>
Then I headed to my db, which in my case is mongodb, and a problem arose. <br/>
MongoDB doesn't support 'decimal'!! It only supports the 'double' type.<br/>
<br/>
Since I'd rather avoid saving it as a double, for the reasons stated above, I had to think of a better solution.<br/>
I decided to save all the prices in the db as Int32 values, storing the prices in 'cents'.<br/>
<br/>
This means I just need to multiply the values by 100 when inserting into the db, and divide by 100 when retrieving. This should never cause any rounding problems, and is pretty much straight-forward. I don't even need to worry about sorting, or any other query for that matter.<br/>
<br/>
But... I don't want ugly code doing all these conversions from cents to dollars in every place...<br/>
<br/>
I'm using the standard C# mongo db driver (<a href="https://github.com/mongodb/mongo-csharp-driver">https://github.com/mongodb/mongo-csharp-driver</a>), which gives me the ability to write a custom serializer for a specific field.<br/>
This is a great solution, since it's the lowest level part of the code that deals with the db, and that means all my entities will be using 'decimal' everywhere.<br/>
<br/>
This is the code for the serializer :<br/>
<pre class="brush:csharp">
public class MongoDbMoneyFieldSerializer : IBsonSerializer
{
    public object Deserialize(BsonReader bsonReader, Type nominalType, IBsonSerializationOptions options)
    {
        var dbData = bsonReader.ReadInt32();
        return (decimal)dbData / (decimal)100;
    }

    public object Deserialize(BsonReader bsonReader, Type nominalType, Type actualType, IBsonSerializationOptions options)
    {
        var dbData = bsonReader.ReadInt32();
        return (decimal)dbData / (decimal)100;
    }

    public IBsonSerializationOptions GetDefaultSerializationOptions()
    {
        return new DocumentSerializationOptions();
    }

    public void Serialize(BsonWriter bsonWriter, Type nominalType, object value, IBsonSerializationOptions options)
    {
        var realValue = (decimal)value;
        bsonWriter.WriteInt32(Convert.ToInt32(realValue * 100));
    }
}
</pre>
<br/><br/>
And then all you need to do is add the custom serializer to the fields which are prices, like this:<br/>
<pre class="brush:csharp">
public class Product
{
    public string Title { get; set; }
    public string Description { get; set; }

    [BsonSerializer(typeof(MongoDbMoneyFieldSerializer))]
    public decimal Price { get; set; }

    [BsonSerializer(typeof(MongoDbMoneyFieldSerializer))]
    public decimal MemberPrice { get; set; }

    public int Quantity { get; set; }
}
</pre>
<br/>
That's all there is to it.<br/>
<br/>
<span style="font-size:18px; font-weight:bold;">Drastically improving 'First Byte' and 'Page Load' (for SEO)</span><br/>
<br/>
Improving your <b>'first byte'</b> speed, and your <b>'page load'</b> in general, can be crucial for SEO. Google likes pages that render faster for the user, and, in some cases, will rank them higher than other pages in search results.<br/>
<br/>
If you're not familiar with this, then here are some articles on the subject :<br/>
<a href="http://googlewebmastercentral.blogspot.co.il/2010/04/using-site-speed-in-web-search-ranking.html">http://googlewebmastercentral.blogspot.co.il/2010/04/using-site-speed-in-web-search-ranking.html</a><br/>
<a href="http://blog.kissmetrics.com/speed-is-a-killer/">http://blog.kissmetrics.com/speed-is-a-killer/</a><br/>
<a href="http://www.quicksprout.com/2012/12/10/how-load-time-affects-google-rankings/">http://www.quicksprout.com/2012/12/10/how-load-time-affects-google-rankings/</a><br/>
<br/>
Improving your site's performance can be a daunting task. There are probably many easy wins that will improve the speed a little, but you will quickly realize that better results take much longer. Some improvements can take days, weeks or even months of infrastructure changes.<br/>
<br/>
<b>But why should your SEO suffer from this ?? Why not be a step ahead of google ??</b><br/>
Your site doesn't really need to be fast for you to get good SEO scores, you just need google to think your site is fast!<br/>
<br/>
<b>But how do you do that ?</b><br/>
Google will scan your site once every few days/weeks and cache the results for indexing. So let's beat Google at its own game.<br/>
Why don't we crawl our site first, cache the results (even to plain text files), and when Google comes around, serve it the static pages we cached, without any server calculations.<br/>
<br/>
You can easily build a crawler using <a href="http://docs.seleniumhq.org/">Selenium</a>, <a href="http://phantomjs.org/">phantomjs</a>, <a href="http://zombie.labnotes.org/">zombiejs </a>or pure <a href="http://nodejs.org/">nodejs</a>. You don't even need to implement all the logic of a regular crawler since you're familiar with your site's domain.<br/>
<br/>
<b>For a real world example : </b><br/>
If your site is a big commerce site, then you know the structure of all your product pages. They're probably something like this :<br/>
http://www.YourCommerceSite.com/product/Product-Name/:Product-ID:<br/>
<br/>
You can invoke this endpoint while scanning all your different product ids from your db.<br/>
Then you can save them all to text files like this :<br/>
Product_<ProductId>.txt<br/>
<br/>
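Here's a rough sketch of that crawling/caching step in pure nodejs (the url pattern and the id list are hypothetical - and if your pages are rendered by client side javascript, you'd drive phantomjs/selenium instead of a plain http fetch) :<br/>
<pre class="brush:javascript">
// a rough sketch - fetch each product page and cache it to disk
// (the url pattern and id list here are hypothetical)
var http = require('http');
var fs = require('fs');

var productIds = [101, 102, 103]; // in reality - read these from your db

productIds.forEach(function(id) {
  http.get('http://www.YourCommerceSite.com/product/Product-Name/' + id, function(res) {
    var html = '';
    res.on('data', function(chunk) { html += chunk; });
    res.on('end', function() {
      fs.writeFile('Product_' + id + '.txt', html, function(err) {
        if (err) console.error('failed caching product ' + id);
      });
    });
  });
});
</pre>
<br/>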
When the Google bot comes around (which you can easily detect by its 'User-Agent' header) and requests a product page, quickly give it the cached product page you stored on disk, as in the sketch below.<br/>
This might be stale by a few hours/days (depending on how frequently you decide to scan) but will still be good enough for indexing in Google (since Google's indexing isn't realtime anyway) and should be super fast!<br/>
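A minimal sketch of that detection and serving, assuming a nodejs/Express server (the route and file names are hypothetical, matching the pattern above) :<br/>
<pre class="brush:javascript">
var express = require('express');
var fs = require('fs');
var path = require('path');

var app = express();

app.get('/product/:name/:id', function(req, res, next) {
  // googlebot identifies itself in the 'User-Agent' header
  var isGoogleBot = /Googlebot/i.test(req.headers['user-agent'] || '');
  if (!isGoogleBot) return next(); // regular users get the dynamic page

  var cached = path.join(__dirname, 'cache', 'Product_' + req.params.id + '.txt');
  fs.readFile(cached, 'utf8', function(err, html) {
    if (err) return next(); // no cached copy yet - render dynamically
    res.type('html').send(html); // serve the pre-crawled page, no server calculations
  });
});

app.listen(80);
</pre>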
<br/>Gilly Barrhttp://www.blogger.com/profile/15736348037155591283noreply@blogger.com6tag:blogger.com,1999:blog-3031040842731199760.post-25989861063803822372014-04-12T13:12:00.001-07:002014-04-12T13:22:52.014-07:00Debugging and solving the 'Forced Synchronous Layout' problem<a href="http://anyurl.com" rel="tag" style="display:none;">CodeProject</a>
<br/>
If you're using Google Developer tools to profile your website's performance, you might have realized that Chrome warns you about doing 'forced layouts'.<br/>
This looks something like this :<br/>
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-go2AZSVxPuE/U0md7V3ODkI/AAAAAAAAAaU/d5yP3I_rQbw/s1600/forced_layout_amazon.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-go2AZSVxPuE/U0md7V3ODkI/AAAAAAAAAaU/d5yP3I_rQbw/s640/forced_layout_amazon.png" /></a></div>
In this screenshot, I marked all the warning signs Chrome gives you so you can spot this problem.
<br/>
<br/>
<b>So, what does this mean ?</b><br/>
When the browser constructs a model of the page in memory, it builds 2 trees that represent the DOM in memory. One is the DOM structure itself, and the other is a tree that represents the way the elements should be rendered on the screen.<br/>
This tree needs to always stay updated, so when you change an element's css properties for example, the browser might need to update these trees in memory to make sure the next time you request a css property, the browser will know it has updated information.<br/>
<br/>
<b>Why should you care about this ?</b><br/>
Updating both these trees in memory may take some time. Although they are in memory, most pages these days have quite a big DOM so the tree will be pretty big. It also depends on which element you change, since updating different elements might mean only updating part of the tree or the whole tree in different cases.<br/>
<br/>
<b>Can we avoid this ?</b><br/>
The browser can realize that you're trying to update many elements at once, and will optimize itself so that a whole tree update won't happen after each update, but only when the browser knows it needs relevant data. In order for this to work correctly, we need to help it out a little.<br/>
A very simple example of this scenario might be setting and getting 2 different properties, one after the other, as so :<br/>
<pre class="brush:javascript">
var a = document.getElementById('element-a');
var b = document.getElementById('element-b');
a.style.width = '100px';
var aWidth = a.clientWidth;  // reading forces a layout
b.style.width = '200px';
var bWidth = b.clientWidth;  // reading forces another layout
</pre><br/>
In this simple example, the browser will update the whole layout twice. This is because after setting the first element's width, we ask for an element's width back. When retrieving the css property, the browser knows it needs updated data, so it goes and updates the whole DOM tree in memory. Only then will it continue to the next line, which soon after causes another update, for the same reason.<br/>
<br/>
This can simply be fixed by changing around the order of the code, as so :<br/>
<pre class="brush:javascript">
var a = document.getElementById('element-a');
var b = document.getElementById('element-b');
a.style.width = '100px';
b.style.width = '200px';
var aWidth = a.clientWidth;  // a single layout happens here
var bWidth = b.clientWidth;  // already up to date - no extra layout
</pre>
<br/>
Now, the browser will apply both changes one after the other without updating the tree. Only when asking for the width on the fifth line will it update the DOM tree in memory, and it will keep it updated for the sixth line as well. We easily saved one update.<br/>
<br/><br/>
<b>Is this a 'real' problem ?</b><br/>
There are a few blogs out there talking about this problem, and they all seem like textbook examples of the problem. When I first read about this, I too thought it was a little far fetched and not really practical.<br/>
Recently though I actually ran into this on a site I'm working on...<br/>
<br/>
Looking at the profiling timeline, I recognized the same pattern (a bunch of rows alternating between 'Layout' and 'Recalculate Style').<br/>
Clicking on the marker showed that this was actually taking around ~300ms.<br/>
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-X7LyfxTjqPc/U0mYlFROhyI/AAAAAAAAAZ8/OjXPp-XgFq8/s1600/long_time.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-X7LyfxTjqPc/U0mYlFROhyI/AAAAAAAAAZ8/OjXPp-XgFq8/s1600/long_time.png" /></a></div><br/>
<br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/>
I can see that the evaluation of the script was taking ~70ms which I could handle, but over 200ms was being wasted on what?!...<br/>
<br/>
Luckily, when clicking on the script in that dialog, it displays a JS stacktrace of the problematic call. This was really helpful, and directed me exactly to the spot.<br/>
<br/>
It turned out I had a piece of code that was looping over elements, checking each element's height, and setting the container's height to the aggregated height. The height was being read and written in every loop iteration, causing a performance hit.<br/>
<br/>
The problematic code looked something like this :<br/>
<pre class="brush:javascript">
var appendItemToContainer = function(item) {
  // reads the container's height and writes the new height - in the same line
  container.style.height = (container.clientHeight + item.clientHeight) + 'px';
};

for (var i = 0; i < containerItems.length; i++) {
  appendItemToContainer(containerItems[i]);
}
</pre><br/>
You can see that the 'for' loop has a call to the method 'appendItemToContainer' which sets the container's height according to the previous height - which means setting and getting in the same line.<br/>
<br/>
I fixed this by looping over all the items in the container and summing their heights. Then I set the container's height once, with the total. This saved many DOM tree updates, leaving only the one that is necessary.<br/>
<br/>
The fixed code looked something like this :<br/>
<pre class="brush:javascript">
// collect the heights of all elements
var totalHeight = 0;
for (var i = 0; i < containerItems.length; i++) {
  totalHeight += containerItems[i].clientHeight;
}
// set the container's height once
container.style.height = totalHeight + 'px';
</pre><br/>
After fixing the code, I saw that the time spent was actually much less now -<br/>
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-vgGjLFplFYI/U0mcEexxoII/AAAAAAAAAaI/pqsyj9TRvlA/s1600/saved_time.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-vgGjLFplFYI/U0mcEexxoII/AAAAAAAAAaI/pqsyj9TRvlA/s1600/saved_time.png" /></a></div><br/>
<br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/>
As you can see, I managed to save a little over 150ms which is great for such a simple fix!!<br/>
<br/>
<br/>Gilly Barrhttp://www.blogger.com/profile/15736348037155591283noreply@blogger.com3tag:blogger.com,1999:blog-3031040842731199760.post-13207170790460322112014-02-21T21:00:00.000-08:002014-04-12T13:17:17.732-07:00Chrome developer tools profiling flame charts
<a href="http://anyurl.com" rel="tag" style="display:none;">CodeProject</a>
I just recently, and totally coincidentally, found out that Chrome developer tools can generate flame charts while profiling js code!<br/>
Recently it seems like generating flame charts from profiling data has become popular in languages like Ruby, Python and PHP, so I'm excited to see that Chrome has this option for js code as well.<br/>
<br/>
The default view for profiling data in the dev tools is the 'tree view', but you can easily change it to 'flame chart' by selecting it on the drop down in the bottom part of the window.<br/>
<br/>
Like here :<br/>
<a href="http://3.bp.blogspot.com/-Zk2NAXdJgPE/UwdiFR5MKdI/AAAAAAAAAY0/2BIx67j3_6M/s1600/open_flamechart.png" imageanchor="1" ><img border="0" src="http://3.bp.blogspot.com/-Zk2NAXdJgPE/UwdiFR5MKdI/AAAAAAAAAY0/2BIx67j3_6M/s1600/open_flamechart.png" /></a><br/>
<br/>
Then you will be able to see the profiling results, in a way that sometimes is easier to look at.<br/>
You can use the mouse scroll button to zoom in on a specific area of the flame chart, and see what's going on there.<br/>
<br/>
In case you're not familiar with reading flame charts, then here's a simple explanation -<br/>
<ul>
<li>Each colored line is a method call</li>
<li>The method calls above one another represent the call stack</li>
<li>The width of the lines represents how long each call was</li>
</ul>
<br/>
And here you can see an example of a flame chart, where I marked a few sections that the flame chart points out for us - non-optimized TryCatchBlocks. In this case the flame chart view is convenient because you can clearly see how many method calls each try/catch block surrounds.<br/>
<br/>
<a href="http://4.bp.blogspot.com/-MYuymXD9th4/UwdiaVYdm0I/AAAAAAAAAZE/j79pz_WwC9E/s1600/flamechart.png" imageanchor="1" ><img border="0" src="http://4.bp.blogspot.com/-MYuymXD9th4/UwdiaVYdm0I/AAAAAAAAAZE/j79pz_WwC9E/s400/flamechart.png" /></a>
<br/>Gilly Barrhttp://www.blogger.com/profile/15736348037155591283noreply@blogger.com4tag:blogger.com,1999:blog-3031040842731199760.post-25828768968245025422014-02-19T10:54:00.001-08:002014-04-12T13:17:07.394-07:00Preloading resources - the right way (for me)<a href="http://anyurl.com" rel="tag" style="display:none;">CodeProject</a>
<br/>
Looking through my 'client side performance glasses' when browsing the web, I see that many sites spend too much time downloading resources, mostly on the homepage, but sometimes the main bulk is on subsequent pages as well.<br/>
<br/>
<b>Starting to optimize</b><br/>
When trying to optimize your page, you might think that it's most important that your landing page is the fastest since it defines your users' first impression. So what do you do ? You probably cut down on all the js and css resources you can and leave only what's definitely required for your landing page. You minimize those and then you're left with one file each. You might even be putting the js at the end of the body so it doesn't block the browser from rendering the page, and you're set!<br/>
<br/>
<b>But there's still a problem</b><br/>
Now, your users go onto the next page, probably an inner page of your site, and this one is filled with much more content. On this page you use some jquery plugins and other frameworks you found useful and probably saved yourself hours of javascript coding, but your users are paying the price...<br/>
<br/>
<b>My suggestion</b><br/>
I ran into this same exact problem a few times in the past, and the best way I found of solving this was to preload the resources on the homepage. I can do this after 'page load' so it doesn't block the homepage from rendering, and while the user is looking at the homepage, a little extra time is spent in the background downloading resources they'll probably need on the next pages they browse.<br/>
<br/>
<b>How do we do this ?</b><br/>
Well, there are several techniques, but before choosing the right one, lets take a look at the requirements/constraints we have -<br/>
<ul>
<li>We want to download js/css files in a non-blocking way</li>
<li>Trigger the download ourselves so we can defer it to after 'page load'</li>
<li>Download the resources in a way that won't execute them (css and js) (This is really important and the reason we can't just dynamically create a '<script/>' tag and append it to the '<head/>' tag!)</li>
<li>Make sure they stay in the browser's cache (this is the whole point!)</li>
<li>Work with resources that are stored on secure servers (https). This is important since I would like it to preload resources from my secured registration/login page too if I can.</li>
<li>Work with resources on a different domain. This is very important since all of my resources are hosted on an external CDN server with a different subdomain.</li>
</ul>
<br/>
<b>The different techniques are (I have tested all of these, and these are my notes)</b><br/>
1. Creating an iframe and appending the script/stylesheet file inside it<br/>
<pre class="brush:javascript">
var iframe = document.createElement('iframe');
iframe.setAttribute("width", "0");
iframe.setAttribute("height", "0");
iframe.setAttribute("frameborder", "0");
iframe.setAttribute("name", "preload");
iframe.id = "preload";
iframe.src = "about:blank";
document.body.appendChild(iframe);

// gymnastics to get a reference to the iframe document
iframe = document.all ? document.all.preload.contentWindow : window.frames.preload;
var doc = iframe.document;
doc.open();
doc.writeln("<html><body></body></html>");
doc.close();

// append the file as a stylesheet link so it gets downloaded but not executed
var iFrameAddFile = function(filename) {
  var css = doc.createElement('link');
  css.type = 'text/css';
  css.rel = 'stylesheet';
  css.href = filename;
  doc.body.appendChild(css);
}

iFrameAddFile('http://ourFileName.js');
</pre>
This works on Chrome and FF but on some versions of IE it wouldn't cache the secure resources (https).<br/>
So, close, but no cigar here (at least, fully).
<br/>
<br/>
2. Creating a javascript Image object<br/>
<pre class="brush:javascript">
new Image().src = 'http://myResourceFile.js';
</pre>
This only works properly on Chrome. On FireFox and IE it would either not download the secure resources, or download them but without caching.
<br/>
<br/>
3. Building an <object/> tag with file in data attribute<br/>
<pre class="brush:javascript">
// crude IE detection, just for this example
var isIE = /MSIE|Trident/.test(navigator.userAgent);

var createObjectTag = function(filename) {
  var o = document.createElement('object');
  o.data = filename;
  // IE needs actual dimensions, otherwise 0x0 is OK
  if (isIE) {
    o.width = 1;
    o.height = 1;
    o.style.visibility = "hidden";
    o.type = "text/plain";
  }
  else {
    o.width = 0;
    o.height = 0;
  }
  document.body.appendChild(o);
}

createObjectTag('http://myResourceFile.js');
</pre>
This worked nicely on Chrome and FF, but not on some versions of IE.
<br/>
<br/>
4. XMLHttpRequest a.k.a. ajax
<pre class="brush:javascript">
var ajaxRequest = function(filename) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', filename);
  xhr.send('');
}

ajaxRequest('http://myResourceFile.js');
</pre>
This technique won't work with files on a different domain, so I immediately dropped this.
<br/>
<br/>
5. Creating a 'prefetch' tag
<pre class="brush:javascript">
var prefetchTag = function(filename) {
  var link = document.createElement('link');
  link.href = filename;
  link.rel = "prefetch";
  document.getElementsByTagName('head')[0].appendChild(link);
}

prefetchTag('http://myResourceFile.js');
</pre>
<br/>
<br/>
6. 'script' tag with invalid 'type' attribute
<pre class="brush:javascript">
// creates a script tag with an invalid type, like 'script/cache'
// I realized this technique is used by LabJS for some browsers
var invalidScript = function(filename) {
  var s = document.createElement('script');
  s.src = filename;
  s.type = 'script/cache';
  document.getElementsByTagName('head')[0].appendChild(s);
}

invalidScript('http://myJsResource.js');
</pre>
This barely worked properly in any browser. It would download the resources, but wouldn't cache them for the next request.<br/>
<br/>
<br/>
<b>Conclusion</b><br/>
So, first I must say that, given all the constraints I have, this is more complicated than I thought it would be at first.<br/>
Some of the techniques worked well in all browsers for non-secured resources (non SSL) but only in some browsers for secured resources. In my specific case I just decided to go with one of those, accepting that some users will not have the SSL-page resources cached (these are a minority in my case).<br/>
But, I guess that given your circumstances, you might choose a different technique. I had quite a few constraints that I'm sure not everyone has.<br/>
Another thing worth mentioning is that I didn't test Safari on any technique. Again, this was less interesting for me in my case.<br/>
I also didn't think about solving this problem on mobile devices yet. Since mobile bandwidth is also usually much slower I might tackle this problem differently for mobile devices...Gilly Barrhttp://www.blogger.com/profile/15736348037155591283noreply@blogger.com0tag:blogger.com,1999:blog-3031040842731199760.post-90442263589901252562013-12-20T10:19:00.001-08:002013-12-20T10:19:15.206-08:00Prebrowsing - Not all that...<a href="http://anyurl.com" rel="tag" style="display:none;">CodeProject</a>
Six weeks ago, <a href="http://www.stevesouders.com">Steve Souders</a>, an amazing performance expert published a post called "<a href="http://www.stevesouders.com/blog/2013/11/07/prebrowsing/">Prebrowsing</a>".<br/>
In the post he talks about some really simple techniques you can use to make your site quicker. These techniques rely on the fact that you know the next page that the user will browse to, so you 'hint' to the browser, and the browser will start downloading needed resources earlier. This will make the next page navigation appear much quicker to the user.<br/>
<br/>
There are three ways presented to do this - They all use the <b>'link'</b> tag, with a different <b>'rel'</b> value.<br/>
<br/>
The first technique is <b>'dns-prefetch'</b>. This is really easy to add, and can improve performance on your site. Don't expect a major improvement though - the dns resolution itself usually doesn't take more than 150ms (from my experience). <br/>
I wrote about this too, in this blog post: <a href="http://www.debuggerstepthrough.com/2013/09/prefetching-dns-lookups.html">Prefetching dns lookups</a><br/>
<br/>
The second two techniques shown are <b>'prefetch'</b> and <b>'prerender'</b>.<br/>
Since these are really easy to add, once I read about this, I immediately added this to my site.<br/>
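A minimal sketch of what adding such a hint can look like - here injected with javascript after 'page load' (the sign-in url is just a placeholder) :<br/>
<pre class="brush:javascript">
window.addEventListener('load', function() {
  var hint = document.createElement('link');
  hint.rel = 'prerender'; // or 'prefetch' to fetch a single resource
  hint.href = 'https://www.yoursite.com/sign-in';
  document.getElementsByTagName('head')[0].appendChild(hint);
});
</pre>
<br/>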
A little information about the site I'm working on : The anonymous homepage doesn't have SSL. From this page, most users sign-in or register. Both of these actions redirect the user to an SSL secured page. Since the protocol on these pages is https, the browser doesn't gain from the cached resources it already has, since it (rightly) treats this as a different domain. This causes the user to wait a long time on these pages just to have their client download the same resources again, this time over a secured connection.<br/>
<br/>
So I thought it would be perfect to have the browser prerender (or prefetch) the sign-in page or the register page. I have a WebPageTest that runs a script measuring the performance of this page, after the user was at the anonymous homepage. <b>This test improved by a LOT. This was great! It was only a day after that I realized that the anonymous homepage itself was much slower...</b> :/<br/>
I guess this is because while the browser takes up some of its resources to prerender the next page, it affects the performance of the current page. Looking at multiple tests of the same page I couldn't detect any single point of failure, except that each resource on the page was taking just a little longer to download. Another annoyance is that you can't even see what's happening with the prerendered page in utilities like WebPageTest, so you just see the effect on the current page.<br/>
<br/>
After reading a little more on the subject I found more cons to this technique. First, it's still not supported in all browsers, not even FF or Opera. Another thing is that Chrome can only prerender one page across all processes. This means I can't do this for 2 pages and I don't know how the browser will react if another site that is opened also requested to prerender some pages. You also won't see the progress the browser makes on prerendering the page, and what happens if the user browses to the next page before the prerendered page finished ? Will some of the resources be cached already ? I don't know, and honestly I don't think it's worth testing yet to see how all browsers act on these scenarios.<br/>
<b>I think we need to wait a little longer with these two techniques for them to mature a bit...</b><br/>
<br/>
<b>What is the best solution ?</b><br/>
Well, like every performance improvement - I don't believe there is a 'best solution' as there are no 'silver bullets'.<br/>
However, the best solution so far for the site I'm working on is to preload the resources ourselves. This means we use javascript to have the browser download resources we know the user will need throughout the site on the first page they land on, so on subsequent pages the user's client will have much less to download.<br/>
<br/>
<b>What are the pros with this technique ?</b><br/>
1. I have much more control over it - This means I can detect which browser the user has, and use the appropriate technique so it will work for all users.<br/>
2. I can trigger it after the 'page load' event. This way I know it won't block or slow down any other work the client is doing for the current page.<br/>
3. I can do this for as many resources I need. css, js, images and even fonts if I want to. Basically anything goes.<br/>
4. Downloading resources doesn't limit me to guessing the one page the user will head to after this one. On most sites there are many common resources used among different pages, so this gives me a bigger win.<br/>
5. I don't care about other tabs the user has open that aren't my site. :)<br/>
<br/>
Of course the drawback is that, as opposed to the 'prerender' technique, the browser will still have to download the html, parse & execute the js/css files, and finally render the page.<br/>
<br/>
<b>Unfortunately, doing this correctly isn't that easy. I will write about how to do this in detail in the next post (I promise!).</b><br/>
<br/>
I want to sum up for now so this post won't be too long -<br/>
In conclusion I would say that there are many techniques out there, and many of them fit different scenarios. Don't implement a technique just because it's easy or because someone else told you it works. Some of them might not be a good fit for your site and some might even cause damage. Steve Souders' blog is great and an amazing fountain of information on performance. I learned the hard way that each performance improvement I make needs to be properly analyzed and tested before implementing.<br/>
<br/>
<br/>
<b>Some great resources on the subject :</b><br/>
- <a href="http://www.stevesouders.com/blog/2013/11/07/prebrowsing/">Prebrowsing</a> by Steve Souders<br/>
- <a href="http://www.igvita.com/posa/high-performance-networking-in-google-chrome/">High performance networking in Google Chrome</a> by Ilya Grigorik<br/>
- <a href="https://developer.mozilla.org/en/docs/Controlling_DNS_prefetching">Controlling DNS prefetching</a> by MDN<br/>
<br/>Gilly Barrhttp://www.blogger.com/profile/15736348037155591283noreply@blogger.com0tag:blogger.com,1999:blog-3031040842731199760.post-59833269979065742832013-11-11T18:00:00.000-08:002013-11-13T01:12:49.974-08:00Some jQuery getters are setters as well<a href="http://anyurl.com" rel="tag" style="display:none;">CodeProject</a><br/>
A couple of days ago I ran into an interesting characteristic of jQuery -<br/>
Some methods which are 'getters' are also 'setters' behind the scenes.<br/>
<br/>
I know this sounds weird, and you might even be wondering why the hell this matters... Just keep reading and I hope you'll understand... :)<br/>
<br/>
If you call the element dimension methods in jquery (which are <b>height()</b>, <b>innerHeight()</b>, <b>outerHeight()</b>, <b>width()</b>, <b>innerWidth()</b> & <b>outerWidth()</b> ) you'll probably be expecting it to just check the javascript object properties using simple javascript and return the result.<br/>
The reality of this is that sometimes it needs to do more complicated work in the background...<br/>
<br/>
<b>The problem :</b><br/>
If you have an element which is defined as <b>'display:none'</b>, calling <b>'element.clientHeight'</b> in javascript, which should return the element's height, will return <b>'0'</b>. This is because a <b>'hidden'</b> element using <b>'display:none'</b> isn't rendered on the screen, and therefore the client never knows how much space it would visually take, leading it to think its dimensions are <b>0x0</b> (which is right in some sense).<br/>
<br/>
<b>How jquery solves the problem for you :</b><br/>
When asking jquery what the height of a <b>'display:none'</b> element is (by calling <b>$(element).height()</b> ), it's more clever than that.<br/>
It can identify that the element is defined as <b>'display:none'</b>, and takes some steps to get the actual height of the element (a rough sketch in code follows the list) :<br/>
- It copies all the element's styles to a temporary object<br/>
- Defines the object as position:absolute<br/>
- Defines the object as visibility:hidden<br/>
- Removes 'display:none' from the element. After this, the browser is forced to 'render' the object, although it doesn't actually display it on the screen because it is still defined as 'visibility:hidden'.<br/>
- Now jquery knows what the actual height of your element is<br/>
- Swaps back the original styles and returns the value.<br/>
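Here's that rough sketch - the same swap trick in plain javascript, just to illustrate the idea (this is not jquery's actual code) :<br/>
<pre class="brush:javascript">
function hiddenElementHeight(el) {
  var style = el.style;
  // remember the original inline styles so we can swap them back
  var old = { display: style.display, position: style.position, visibility: style.visibility };

  // take the element out of the layout flow, hide it visually,
  // and remove 'display:none' - forcing the browser to compute its layout
  style.position = 'absolute';
  style.visibility = 'hidden';
  style.display = 'block';

  var height = el.clientHeight; // now the browser knows the real height

  // swap the original styles back
  style.display = old.display;
  style.position = old.position;
  style.visibility = old.visibility;
  return height;
}
</pre>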
<br/>
<b>Okay, so now that you know this, why should you even care ?</b><br/>
The step where jquery changes the styles of your element without you knowing, forcing the browser to 'render' the element in the background, can take time. Not a lot of time, but still some - probably a few milliseconds. Doing this once wouldn't matter to anyone, but doing it many times, let's say in a loop, might cause performance issues.<br/>
<br/>
<b>Real life!</b><br/>
I recently found a performance issue on our site that was caused by this exact reason. The 'outerHeight()' method was being called in a loop many times, and fixing this brought an improvement of ~200ms. (<a href="http://blog.kissmetrics.com/loading-time/?wide=1">Why saving 200ms can save millions of dollars!</a>)<br/>
<br/>
I will soon write a fully detailed post about how I discovered this performance issue, how I tracked it down, and how I fixed it.<br/>
<br/>
<b>Always a good tip!</b><br/>
Learn how your libraries are working under the hood. This will give you great power and a great understanding of how to efficiently use them.Gilly Barrhttp://www.blogger.com/profile/15736348037155591283noreply@blogger.com0tag:blogger.com,1999:blog-3031040842731199760.post-59983216536631037782013-11-09T12:06:00.001-08:002013-11-09T12:06:18.510-08:00Humanity wasted 14,526 years watching Gangnam Style"Humanity wasted 14,526 years watching Gangnam Style"...<br/>
This was the title of a link I posted on Hacker News about a week ago which linked to a website I created with a friend of mine (Gil Cohen) - <a href="http://wastedhumanity.com/9bZkp7q19f0">http://www.WastedHumanity.com</a><br/>
<br/>
<a href="http://4.bp.blogspot.com/-MsD-cfSDFPY/Un6Ux2PVO_I/AAAAAAAAAXk/pspiOZNdHyg/s1600/wastedhumanity.png" imageanchor="1" ><img border="0" src="http://4.bp.blogspot.com/-MsD-cfSDFPY/Un6Ux2PVO_I/AAAAAAAAAXk/pspiOZNdHyg/s400/wastedhumanity.png" /></a><br/>
<br/>
It seems like I managed to really annoy some people, and some even claim to hate me!<br/>
(The whole discussion can be seen here : <a href="https://news.ycombinator.com/item?id=6663474">https://news.ycombinator.com/item?id=6663474</a>)<br/>
<br/>
Well, I just wanted to say about this whole thing a few words -<br/>
The whole idea of this site was only a joke. Just me and my friend sitting in the living room one boring Friday, watching some silly YouTube videos, when I started thinking about how many times these videos had been watched. It amazed me, so I started calculating it in my head. The programmer in me wouldn't allow me to just calculate this data manually, so I started building a site that would do it for me. When we saw the numbers we were amazed and started joking about the things we could have done instead of this 'wasted time'...<br/>
<br/>
I didn't mean to laugh about how you decide to spend your time or make fun of anyone in anyway. I myself 'waste' a lot of time on YouTube, sometimes on silly videos while doing nothing, and sometimes countless hours listening to music as I work. I added at least a few views myself to each one of the videos seen on the site, and many more not on the site. I don't see that time as 'wasted'.<br/>
<br/>
I also know the calculation isn't completely accurate, and that each (or at least most) of the facts on that site wasn't accomplished by a single person, so in reality it took much longer than written. <br/>
<br/>
So, sorry if I hurt you. I know I made a lot of people laugh in the process, so it was totally worth it! :)<br/>
<br/>Gilly Barrhttp://www.blogger.com/profile/15736348037155591283noreply@blogger.com37tag:blogger.com,1999:blog-3031040842731199760.post-56490070950262646952013-10-21T02:12:00.000-07:002013-10-21T02:12:53.434-07:00A tale of asp.net, IIS 7.5, chunked responses and keep-alive<a href="http://anyurl.com" rel="tag" style="display:none;">CodeProject</a><br/>
A while ago I posted about chunked responses - what they are and the importance of them. It turns out that we (where I work) were getting it all wrong.<br/>
<br/>
We implemented chunked responses (or at least thought so) quite a while ago, and it WAS working, in the beginning, but all of a sudden stopped.<br/>
<br/>
<b>How did I come to realize this ?</b><br/>
While analyzing waterfall charts of our site, which I've been doing regularly for quite a while now, I realized that the response doesn't look <i>chunked</i>.<br/> It's not trivial realizing this from a waterfall chart, but if you look closely and you're familiar with your site's performance you should notice this.
Since the first chunk we send the client is just the html 'head' tag, it requires almost no processing and can be sent to the client immediately, which immediately causes the browser to start downloading the resources requested in the 'head' tag. If a response is chunked, you should see in the waterfall that resources start downloading before the client even finishes downloading the html response from the site.<br/>
<br/>
<b>A proper chunked response should look like this :</b><br/>
<a href="http://1.bp.blogspot.com/-g6M3OW3FvrY/UmJ9bQ7KYzI/AAAAAAAAAXA/WZNkaJalXKk/s1600/amazon.png" imageanchor="1" ><img border="0" src="http://1.bp.blogspot.com/-g6M3OW3FvrY/UmJ9bQ7KYzI/AAAAAAAAAXA/WZNkaJalXKk/s320/amazon.png" /></a>
<br/>
If you look closely you will realize that the response took a long time to download, which doesn't match the internet connection we chose for this test. That means the download itself didn't actually take that long; rather, the server sent part of the response, processed more of it, and then sent the rest.<br/>
<br/>
Here's an image of a response that isn't chunked :<br/>
<a href="http://1.bp.blogspot.com/-nxVNCpYQWRk/UmJ99k4ij9I/AAAAAAAAAXI/OqA8vh5nBRs/s1600/shopyourway.png" imageanchor="1" ><img border="0" src="http://1.bp.blogspot.com/-nxVNCpYQWRk/UmJ99k4ij9I/AAAAAAAAAXI/OqA8vh5nBRs/s320/shopyourway.png" /></a><br/>
<br/>
You can see that the client only starts downloading the resources required in the 'head' after the whole page is downloaded. We could've saved some precious time here, and have our server work parallel to the client that is downloading resources from our CDN.<br/>
<br/>
<b>What happened ?</b><br/>
Like I said, once this used to work and now it doesn't. We looked back at what was done lately and realized that we had switched load balancers recently. Since we weren't sending the chunks properly, the new load balancer didn't know how to deal with them and therefore just passed the response on to the client without chunks.<br/>
In order to investigate this properly, I started working directly with the IIS server...<br/>
<br/>
<b>What was happening ?</b><br/>
I looked at the response with Fiddler and WireShark and realized the response was coming in chunks, but not 'properly'. This means the 'Transfer-Encoding' header wasn't set, and the chunks weren't being received in the correct format. The response was just being streamed, and each part we had we passed on to the client. Before switching load balancer, it was being passed like this to the client, and luckily most clients were dealing with this gracefully. :)<br/>
<br/>
<b>So why weren't our chunks being formatted properly ?</b><br/>
When using asp.net, mvc, and IIS 7.5 you shouldn't have to worry about the format of the chunks. All you need to do is call <b>'HttpContext.Response.Flush()'</b> and the response should be formatted correctly for you. For some reason this wasn't happening...<br/>
Since we're not using the classic Microsoft MVC framework, but something we custom built here, I started digging into our framework. I realized it had nothing to do with the framework, and was more low level in Microsoft's web assemblies, so I started digging deeper into Microsoft's code.<br/>
<br/>
Using dotPeek, I looked into the code of 'Response.Flush()'...<br/>
This is what I saw :<br/>
<a href="http://3.bp.blogspot.com/-PwaeDMnZU44/UmJ_PadgNWI/AAAAAAAAAXQ/0dwLbngQYsg/s1600/responseflush.png" imageanchor="1" ><img border="0" src="http://3.bp.blogspot.com/-PwaeDMnZU44/UmJ_PadgNWI/AAAAAAAAAXQ/0dwLbngQYsg/s320/responseflush.png" /></a><br/>
<br/>
As you can see, the code for the IIS 6 worker is exposed, but when using IIS7 and above it goes to some unmanaged dll, and that's where I stopped going down that path.<br/>
<br/>
I started looking for other headers that might interfere, and started searching the internet for help... Couldn't find anything on the internet that was useful (which is why I'm writing this...), so I just dug into our settings.<br/>
All of a sudden I realized my IIS settings had the 'Enable HTTP keep-alive' setting disabled. This was adding the header 'Connection: close' which was interfering with this.<br/>
<br/>
I read the whole HTTP 1.1 spec about the 'Transfer-Encoding' and 'Connection' headers and there is no reference to any connection between the two. Whether it makes sense or not, it seems like IIS 7.5 (I'm guessing IIS 7 too, although I didn't test it) doesn't format the chunks properly, nor add the 'Transfer-Encoding' header, if you don't have the 'Connection' header set to 'keep-alive'.<br/>
<br/>
<b>Jesus! @Microsoft</b> - Couldn't you state that somewhere, in some documentation, or at least as an error message or a warning to the output when running into those colliding settings?!!<br/>
<br/>
<b>Well, what does this all mean ?</b><br/>
The 'Connection' header indicates to the client what type of connection it's dealing with. If the connection is set to 'close', the connection is not persistent and will be closed immediately when the server is done sending. When specifying 'keep-alive', the connection will stay open, and the client might need to close it.<br/>
In the case of a chunked response, you should indicate the last chunk by sending a chunk with size '0', telling the client it's the end, and they should close the connection. This should be tested properly to make sure you're not leaving connections hanging and just wasting precious resources on your servers.<br/>
(btw - by not specifying a connection type, the default will be 'Keep-Alive').<br/>
<br/>
If you want to take extra precaution, and I suggest you do, you can add the 'Keep-Alive' header which indicates that the connection will be closed after a certain amount of time of inactivity.<br/>
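Such a response would carry headers along these lines ('timeout' is the idle time in seconds before the connection is closed, 'max' caps the number of requests per connection - the values here are just an example) :<br/>
<pre class="brush:plain">
Connection: Keep-Alive
Keep-Alive: timeout=5, max=100
</pre>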
<br/>
Whatever you do, make sure to run proper tests under stress/load to make sure your servers are managing their resources correctly.<br/>
<br/><br/>
<b>Additional helpful resources :</b><br/>
- <a href="http://tools.ietf.org/id/draft-thomson-hybi-http-timeout-01.html">'Keep-Alive' header protocol</a><br/>
- <a href="http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html">HTTP/1.1 headers spec</a><br/>
<br/>Gilly Barrhttp://www.blogger.com/profile/15736348037155591283noreply@blogger.com0tag:blogger.com,1999:blog-3031040842731199760.post-79596538604041673732013-09-19T04:51:00.000-07:002013-09-19T16:02:26.375-07:00Prefetching dns lookups<a href="http://anyurl.com" rel="tag" style="display:none;">CodeProject</a><br/>
Since I've been working hard on latency and client side performance at my company, I've been analyzing several pages a day of our site and other big sites on the web, using mainly WebPageTest, looking for ways to optimize their performance. Viewing hundreds of waterfall charts, your eyes tend to get used to looking at the same kind of patterns and the same kind of requests.<br/>
<br/>
The DNS resolution, or 'DNS lookup' phase in the request was something I always thought should just be ignored. I mean, it pissed the hell out of me that it was there, but I honestly thought that there was nothing I can do about it...<br/>
<br/>
A while ago I thought about simply inserting the IP addresses of our CDN domains and other sub-domains we might have directly in the code to solve this. This is bad for 2 main reasons:<br/>
<b>1.</b> If your IP changes for some reason it forces you to change your code accordingly. (maybe not a scenario that should happen often or even at all, but still might)<br/>
<b>2.</b> (and this is much more important!) When using a CDN service like akamai, the dns lookup will give you different results according to where you are in the world. Since they have servers strategically placed in different geographical locations, a user from the USA will probably get a different IP than a user from Europe or Asia.<br/>
<br/>
Well, recently that all changed - I realized that you can direct the browser to prefetch the dns lookup at the beginning of the request, so that when the browser runs into the new domain it won't have to look up the dns again.<br/>
<br/>
To do this, all you need to add is this tag at the beginning of your page :<br/>
<pre class="brush:html">
<link rel="dns-prefetch" href="http://www.yoursite.com/">
</pre>
<br/>
Doing this on the domain you're currently on has no effect since the browser already did the dns lookup, but it can help when you know that in your page source you have calls to multiple sub-domains (for cdn's), calls to 3rd party libraries or ajax calls you make to other domains. Even if you know of a call that will happen on the next page the user lands on, you should still prefetch the dns lookup since the browser caches the results for a couple of minutes at least, and this should have no effect on the current page performance.<br/>
<br/>
The most common response I get when telling people about this, or reading about this on the internet, is that the DNS lookup alone doesn't take <i>that</i> long. From my tests, I can say that the average DNS lookup time is under 100ms - usually above 20ms, and sometimes passing 100ms. Even though that isn't the common case, you can still make sure time is saved for those 'unlucky' users.<br/>
...and besides, this is one of the easiest performance wins you have - It requires almost no work to implement!<br/>
<br/>
Just while writing this article I happened to test facebook.com, and check out how long the DNS lookup took on those 3 last requests!
<a href="http://1.bp.blogspot.com/-d6t6WfiF_uM/UjrgiRA_h9I/AAAAAAAAAWc/T2JO5z07fqU/s1600/facebook+dns.png" imageanchor="1" ><img border="0" src="http://1.bp.blogspot.com/-d6t6WfiF_uM/UjrgiRA_h9I/AAAAAAAAAWc/T2JO5z07fqU/s400/facebook+dns.png" /></a>
<br/>
(You can view the full results of this test here : <a href="http://www.webpagetest.org/result/130919_17_B60/1/details/">http://www.webpagetest.org/result/130919_17_B60/1/details/</a>)
Yep, you better believe your eyes - The DNS lookup on those last requests seemed to take 2 seconds!!<br/>
Now, I don't know why they took 2 seconds in that case, and I bet this is <b>really</b> rare, but it still happens sometimes, you can't argue with that.<br/>
But hey, if they would've requested to prefetch that last domain, it would still take that long! That's right, but it would've started much earlier, and could've still saved hundreds of valuable milliseconds.<br/>
<br/>
So, my suggestion to you is, let's say you have 4 sub-domains for CDNs and you know you're going to call facebook's api at some point, you should put something like this in the head tag of your source :
<pre class="brush:html">
<link rel="dns-prefetch" href="http://cdn1.yoursite.com/">
<link rel="dns-prefetch" href="http://cdn2.yoursite.com/">
<link rel="dns-prefetch" href="http://cdn3.yoursite.com/">
<link rel="dns-prefetch" href="http://cdn4.yoursite.com/">
<link rel="dns-prefetch" href="http://api.facebook.com/">
</pre>
<br/><br/>
This will tell the browser to immediately start the dns lookups so that when the browser reaches those domains it will have the ip stored in the cache already.<br/>
<br/>
If you want to see what it looks like when you're prefetching the dns lookup properly, take a look at these WebPageTest results from amazon : <a href="http://www.webpagetest.org/result/130919_ZQ_J6V/1/details/">http://www.webpagetest.org/result/130919_ZQ_J6V/1/details/</a>
<br/>
You can clearly see that the dns lookup part of the request on some of the domains happens a lot earlier on the timeline than the browser reaching the actual resource, and when it does reach it, it doesn't need to wait for the dns lookup.<br/>
As usual, great work amazon! :)<br/>
<br/><br/>
<b>Some more resources on the subject :</b><br/>
- <a href="https://developer.mozilla.org/en/docs/Controlling_DNS_prefetching">MDN - Controlling DNS prefetching</a><br/>
- <a href="http://blog.chromium.org/2008/09/dns-prefetching-or-pre-resolving.html">Chromium Blog - DNS prefetching</a><br/>
- <a href="http://calendar.perfplanet.com/2012/speed-up-your-site-using-prefetching/">Performance Calendar - Speed up your site using DNS prefetching</a><br/>Gilly Barrhttp://www.blogger.com/profile/15736348037155591283noreply@blogger.com4tag:blogger.com,1999:blog-3031040842731199760.post-11014928433141671722013-09-04T04:40:00.000-07:002013-09-04T04:40:22.788-07:00All about http chunked responses<a href="http://anyurl.com" rel="tag" style="display:none;">CodeProject</a>
<b>A short background on HTTP and the 'Content-Length' header :</b><br/>
When communicating over HTTP (hence, 'the web'), requests and responses consist of two main parts - the headers and the body. The headers carry various details about the message (e.g.: encoding type, cookies, request method, etc.). One of these details is the '<b>Content-Length</b>' header, specifying the size of the body. If you're building a website and aren't specifying this explicitly, then chances are the framework you're using is doing it for you - once you send the response to the client, the framework measures the size of the response and adds this header.<br/>
<br/>
In a normal request, looking at the headers with FireBug or Chrome developer tools, it should look like this (looking at google.com) :<br/>
<a href="http://2.bp.blogspot.com/-qSGRa3TiFFk/UicVFo2RiYI/AAAAAAAAAVA/smxgbVTzTSg/s1600/content_length.png" imageanchor="1" ><img border="0" src="http://2.bp.blogspot.com/-qSGRa3TiFFk/UicVFo2RiYI/AAAAAAAAAVA/smxgbVTzTSg/s320/content_length.png" /></a><br/>
<br/>
<b>So, what is a 'chunked response' ?</b><br/>
A 'chunked' response means that instead of processing the whole page, generating all of the html and sending it to the client, we can split the html into 'chunks' and send one after the other, without telling the browser how big the response will be ahead of time.<br/>
<br/>
<b>Why would anyone want to do this ?</b><br/>
Well, some pages on a site can take a long time to process. While the server is working hard to generate the output, the browser is pretty much helpless, with nothing to do but display a boring white screen to the user.<br/>
The work the server is doing might be to generate a specific part of the content on the page, and we might already have a lot that we can give the client to work with. If you have scripts & stylesheets in the <b><head/></b> of your page, you can send the first chunk with the 'head' tag html content to the user's machine. The browser will then have something to work with, meaning it will start downloading the scripts and resources it needs, and during this time your servers can continue crunching numbers to generate the content to be displayed.<br/>
You are actually gaining parallelism by sending the client this first chunk without waiting for the rest of the page to be ready!<br/>
<br/>
Taking this further, you can split the page into several chunks. In practice, you can send one chunk with the 'head' of the page. The browser can then start downloading scripts and stylesheets, while your server is processing, let's say, the categories from your db to display in your header menu/navigation. Then you can send this as a chunk to the browser so it will have something to start rendering on the screen, and your server can continue processing the rest of the page.<br/>
<br/>
Even if the user only sees part of the content, and it isn't enough to work with, the user still gets a 'sense' of better performance - something we call 'perceived performance' which has almost the same impact.<br/>
<br/>
Many big sites are doing this, since this will most definitely improve the client side performance of your site. Even if it's only by a few milliseconds, in the ecommerce world we know that <a href="http://mashable.com/2012/03/14/slow-website-stats-infographic/">time is money!</a><br/>
<br/>
<b>How does this work ?</b><br/>
Since the response is chunked, you cannot send the '<b>Content-Length</b>' response header - usually you won't know how big the response will be, and even if you do, the browser doesn't care at this point.<br/>
So, to notify the browser about the chunked response, you need to omit the 'Content-Length' header, and add the header '<b>Transfer-Encoding: chunked</b>'. Giving this information to the browser, the browser will now expect to receive the chunks in a very specific format.<br/>
At the beginning of each chunk you need to add the length of the current chunk in hexadecimal format, followed by '\r\n' and then the chunk itself, followed by another '\r\n'.<br/>
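For example, a chunked response carrying just the text 'Hello' would look like this on the wire (headers trimmed, and the '\r\n' pairs shown explicitly) - '5' is the chunk size in hex, and the '0' sized chunk marks the end of the response :<br/>
<pre class="brush:plain">
HTTP/1.1 200 OK
Content-Type: text/html
Transfer-Encoding: chunked

5\r\n
Hello\r\n
0\r\n
\r\n
</pre>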
<br/>
FireBug and Chrome dev tools both combine the chunks for you, so you won't be able to see them as they are really received by the browser. In order to see this properly you will need to use a more low level tool like <a href="http://fiddler2.com/">Fiddler</a>.<br/>
<br/>
This is how the raw response of amazon.com looks like using fiddler :<br/>
<a href="http://3.bp.blogspot.com/-e9FYMqh_I5s/UicYeEWzTHI/AAAAAAAAAVM/oZLdp6NNlks/s1600/chunked_response.png" imageanchor="1" ><img border="0" src="http://3.bp.blogspot.com/-e9FYMqh_I5s/UicYeEWzTHI/AAAAAAAAAVM/oZLdp6NNlks/s320/chunked_response.png" /></a><br/>
<b>Note : </b>I marked the required 'Transfer-Encoding: chunked' header, and the first line with the size of the chunk. In this case the first chunk is 0xd7c bytes long, which in human-readable format is 3452 bytes.<br/>
Also, it's interesting to note that you cannot really read the first chunk since it's encoded via gzip (browser dev tools decode this automatically). When using fiddler, you can see the message at the top telling you this; you can click it to have it decoded, but then the chunks are removed and you'll see the whole html output.<br/>
<br/>
<b>How can we achieve this with asp.net ?</b><br/>
When you want to flush the content of your site, all you need to do in the middle of a view is call '<b>HttpContext.Current.Response.Flush()</b>'.<br/>
It's that easy! Without you having to worry about it, the .net framework will take care of the details and send the response to the browser in the correct format.<br/>
<br/>
Some things that might interfere with this working properly :<br/>
- You might have to configure '<b>Response.BufferOutput = false;</b>' at the beginning of your request so the output won't be buffered and will be flushed as you call it.<br/>
- If you specifically add the 'Content-Length' header yourself then this won't work.<br/>
<br/><br/>
<b>For more helpful resources on chunked responses :</b><br/>
Wikipedia, and the spec details : <a href="http://en.wikipedia.org/wiki/Chunked_transfer_encoding">http://en.wikipedia.org/wiki/Chunked_transfer_encoding</a><br/>
How to write chunked responses in .net (but not asp.net) - <a href="http://blogs.msdn.com/b/asiatech/archive/2011/04/26/how-to-write-chunked-transfer-encoding-web-response.aspx">http://blogs.msdn.com/b/asiatech/archive/2011/04/26/how-to-write-chunked-transfer-encoding-web-response.aspx</a><br/>
Implementing chunked with IHttpListener - <a href="http://www.differentpla.net/content/2012/07/streaming-http-responses-net">http://www.differentpla.net/content/2012/07/streaming-http-responses-net</a><br/>Gilly Barrhttp://www.blogger.com/profile/15736348037155591283noreply@blogger.com0tag:blogger.com,1999:blog-3031040842731199760.post-34288139802678893082013-08-14T12:25:00.000-07:002013-08-14T12:33:32.995-07:00My talk on Latency & Client Side Performance
As an engineer in the company's 'Core team', I'm part of the group responsible for making the site as available as we can, while keeping great performance and withstanding heavy load. We set high goals, and we're working hard to achieve them.<br/>
<br/>
Up until a while ago, we were focusing mainly on server side performance - Looking at graphs under various load and stress tests, and seeing how the servers perform, each time making more and more improvements in the code.<br/>
<br/>
A few weeks ago we started putting a lot of focus on latency and client side performance. I have taken control in this area and am following the results and creating tasks that will improve the performance every day.<br/>
<br/>
Since I've been reading a lot about it lately, and working on it a lot, I decided to create a presentation on the subject to teach others some lessons learned from the short time I've been at it...<br/>
<br/>
Here are the slides : <a href="http://slid.es/gillyb/latency">http://slid.es/gillyb/latency</a><br/>
<br/>
There are many details you'll be missing by just looking at the slides, but if this interests you then you should take a look anyway. The last slide also has many of the references from which I took the information for the presentation. I strongly recommend reading them. They are all interesting! :)<br/>
<br/>
I might add some future posts about specific client side performance tips and go much more into details.<br/>
I'm also thinking about presenting this at some meetup that will be open to the public... :)<br/>
<br/>
<img width="400" height="553" src="https://pbs.twimg.com/media/BRnfiNyCAAAgdBv.jpg:large">Gilly Barrhttp://www.blogger.com/profile/15736348037155591283noreply@blogger.com0tag:blogger.com,1999:blog-3031040842731199760.post-13501076305448152272013-07-27T08:21:00.000-07:002013-07-27T08:26:08.448-07:00Improving website latency by converting images to WebP format<a href="http://anyurl.com" rel="tag" style="display:none;">CodeProject</a>
A couple of years ago Google published a new image format called WebP (*.webp). This format is supposed to be much smaller in size, without losing quality (or at least no noticeable quality). You can convert jpeg images to webp without noticing the difference, with a smaller image file size, and even keep transparency support.<br/>
According to Ilya Grigorik (performance engineer at google) - you can save 25%-35% on jpeg and png formats, and 60%+ on png files with transparency! (<a href="http://www.igvita.com/2013/05/01/deploying-webp-via-accept-content-negotiation/">http://www.igvita.com/2013/05/01/deploying-webp-via-accept-content-negotiation/</a>)<br/>
<br/>
<b>Why should we care about this ?</b><br/>
Your web site latency is super important! If you don't measure it by now, then you really need to start. In commerce sites it's already been proven that better latency directly equals more revenue (<a href="http://www.strangeloopnetworks.com/resources/infographics/web-performance-and-ecommerce/amazon-100ms-faster-1-revenue-increase/">Amazon makes 1% more in revenue by saving 100ms</a>).<br/>
<br/>
<b>How is this new image format related to latency ?</b><br/>
If your site has many images, then your average user is probably spending a fair amount of time downloading those images. Think of a site like pinterest, which is mostly comprised of user-uploaded images - the user downloads many new images with each page view.<br/>
While on a PC at your home, with a DSL connection this might not seem like a lot, but we all know that a big percentage of our users are using mobile devices, with 3G internet connection, which is much slower and they suffer from much longer download times.<br/>
<br/>
<b>What are our options ?</b><br/>
Just converting all our images to WebP is clearly not an option. Why ? Well, some people in the world have special needs. In this case I'm referring to people with outdated browsers (We all know who they are!).<br/>
BUT, we can still let some of our users enjoy the benefit of a faster site, and this includes many mobile users as well!<br/>
<br/>
We will need to make some changes to our site in order for us to support this, so lets see what we can do -<br/>
(Technical details on implementation at the end)<br/>
<br/>
<b>Option #1 - Server side detection :</b><br/>
When our server gets the request, we can detect if the user's browser supports webp, and if so reply with an html source that has '*.webp' image files in it.<br/>
This option comes with a major downside - You will no longer be able to cache the page output (via OutputCaching or a CDN like Akamai) since different users can get different source code for the same exact page.<br/>
<br/>
<b>Option #2 - Server side detection of image request :<br/></b>
This means the page always requests the same file name, like 'myImage.png'. On the server, we add code that detects whether the client supports webp, and if so sends back the same image but in webp format.<br/>
This option has a similar downside - Now we can cache the html output, but when sending the image files to the user we must mark them as 'non-cacheable' too since the contents can vary depending on the user's browser.<br/>
<br/>
<b>Option #3 - Client side detection :</b><br/>
Many big sites defer the downloading of images on the client until the document is ready. This is also a trick to improve latency - it means the client will download all the resources it needs, the browser will render everything, and only then start downloading the images. Again, for image intensive sites this is crucial, since it allows the user to start interacting with the site without waiting for the download of many images that might not be relevant at the moment.<br/>
This is done by inserting a client side script that will detect if the browser supports webp format. If so, you can change the image requests to request the *.webp version of the image.<br/>
The downside to this option is that only browsers that support the webp format will benefit from it.<br/>
(btw - you can decide to go extreme with this and always download the webp version, and if the client doesn't support it, there are js decoders that will allow you to convert the image on the client. This seems a little extreme to me, and you probably will be spending a lot of time decoding in js anyway).<br/>
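A rough sketch of what that swap can look like, assuming the deferred images keep their real url in a made-up 'data-src' attribute, and that webp support was already detected (see the detection snippet further down) :<br/>
<pre class="brush:javascript">
// swap the deferred images in, once the document is ready
// ('data-src' is a made-up attribute holding the real image url)
function loadDeferredImages(webpSupported) {
  var images = document.querySelectorAll('img[data-src]');
  for (var i = 0; i < images.length; i++) {
    var src = images[i].getAttribute('data-src');
    if (webpSupported) {
      // request the webp version of the same image instead
      src = src.replace(/\.(png|jpe?g)$/i, '.webp');
    }
    images[i].src = src;
  }
}
</pre>
<br/>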
<br/>
<br/>
<b>The gritty details -</b><br/>
<br/>
<b>How can we detect if our browser supports webp ?</b><br/>
Don't worry, there's no need to look up which browsers support webp and test against a list. Browsers that support the webp format should claim they do when requesting images. We can see Chrome doing this (in the newer versions) :<br/>
<div class="separator" style="clear: both;"><a href="http://4.bp.blogspot.com/-tCRK71hjkog/UfPd4CBTpJI/AAAAAAAAAT4/p0vb4PVfOGw/s1600/accept+webp.png" imageanchor="1" style="margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-tCRK71hjkog/UfPd4CBTpJI/AAAAAAAAAT4/p0vb4PVfOGw/s320/accept+webp.png" /></a></div>
You can see in the request headers 'Accept: image/webp'<br/>
<br/>
<b>How do we do this on the client ?</b><br/>
In javascript we don't have access to the request headers, so we need to get creative.<br/>
There is a trick: render a tiny webp image on the client (stored as base64 right in the code), and then detect whether the browser loaded it successfully.<br/>
This will do the trick :<br/>
<pre class="brush:javascript">
$("<img>")
.attr('src', 'data:image/webp;base64,UklGRh4AAABXRUJQVlA4TBEAAAAvAQAAAAfQ//73v/+BiOh/AAA=')
.on("load", function() {
// the images should have these dimensions
if (this.width === 2 || this.height === 1) {
alert('webp format supported');
}
else {
alert('webp format not supported');
}
}).on("error", function() {
alert('webp format not supported');
});
</pre><br/>
<br/>
<b>How do we convert our images to webp format ?</b><br/>
We can do it manually using Google's converter - <a href="https://developers.google.com/speed/webp/docs/cwebp">https://developers.google.com/speed/webp/docs/cwebp</a><br/>
Doing it programmatically depends on what language you're using.<br/>
There's a wrapper for C# - <a href="http://webp.codeplex.com/">http://webp.codeplex.com/</a><br/>
(and there are more for other languages, but not all - I'm actually looking for a java wrapper, and couldn't find one yet)<br/>
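(By the way, if you're on nodejs, one simple workaround is to just shell out to the cwebp binary - a sketch, assuming cwebp is installed and on your path :)<br/>
<pre class="brush:javascript">
var execFile = require('child_process').execFile;

// convert an image to webp by running google's cwebp tool;
// '-q' sets the compression quality (0-100)
function convertToWebp(input, output, callback) {
    execFile('cwebp', ['-q', '80', input, '-o', output], callback);
}

convertToWebp('myImage.png', 'myImage.webp', function(err) {
    if (err) console.log('conversion failed: ' + err);
});
</pre>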
<br/>
<br/>
<b>So, should I run ahead and do this ?</b><br/>
All this good does come with a price, as all good things do... :)<br/>
There might be side effects you didn't think of yet. One of them is that if a user sends a link to an image that ends with webp, and the person receiving it is using a browser that doesn't support it, they won't be able to open the image.<br/>
What's more, even if the user does use a new browser (e.g. a new version of Chrome) and they save a webp file to disk, they probably won't be able to open it on their computer.<br/>
These are problems that facebook ran into, and eventually retreated from the idea of using webp. You can read all about that <a href="http://news.cnet.com/8301-1023_3-57580664-93/facebook-tries-googles-webp-image-format-users-squawk/">here</a>.<br/>
<br/>
<b>Which browsers did you say support this ?</b><br/>
According to www.caniuse.com - Chrome has obviously been supporting it for a while, Opera supports it too, and Firefox is supposed to start supporting it really soon as well. The most important news is that the Android browser, Chrome for Android and Opera Mobile all support it, which means many of your mobile users can gain from this change.<br/>
<br/>
<br/>
<b>If you're still reading and want more information -</b><br/>
- <a href="http://www.igvita.com/2013/05/01/deploying-webp-via-accept-content-negotiation/">Ilya Grigorik explains how to implement this using your CDN and NginX</a><br/>
- <a href="http://www.slideshare.net/guypod/a-picture-costs-a-thousand-words18062013">An excellent presentation on web image optimization by Guy Podjarny</a>Gilly Barrhttp://www.blogger.com/profile/15736348037155591283noreply@blogger.com3tag:blogger.com,1999:blog-3031040842731199760.post-68223781923892641292013-02-17T12:54:00.001-08:002013-07-15T01:09:50.376-07:00Getting started with nodejs - building an MVC site<a href="http://anyurl.com" rel="tag" style="display:none;">CodeProject</a>
A couple of weeks ago I started getting into nodejs. At first I was quite skeptical, I don't even recall why, but after playing with it for just a couple of hours I started loving it. Seriously, it's so simple to use, and it seems like the nodejs eco-system is growing really fast. I'm not going to go into what nodejs is or how it works, so if you don't know, you should start by reading <a href="http://nodejs.org/">this</a>.<br/>
<br/>
What I am going to show here is a really quick and simple tutorial on how to get started building a website on nodejs using the MVC design pattern. I'll go over the quick installation of nodejs and walk through getting a very basic mvc skeleton website up and running.<br/>
(Since I've been a .net developer for quite a while, I might be comparing some of the terms used to the terminology .net programmers are familiar with)<br/>
<br/>
<b>Installing nodejs</b><br/>
First, download and install nodejs.<br/>
On ubuntu, this would be :<br/>
<pre class="brush:javascript">
sudo apt-get install nodejs
</pre>
(If your Ubuntu version is lower than 12.10 then you need to add the official PPA. <a href="http://askubuntu.com/questions/49390/how-do-i-install-the-latest-version-of-node-js">Read this</a>)<br/>
<br/>
Now, you need to install the npm (nodejs package manager) :<br/>
<pre class="brush:javascript">
sudo apt-get install npm
</pre>
This will help us install packages built for nodejs (exactly like 'NuGet' for Visual Studio users).<br/>
<br/>
<b>Starting our website</b><br/>
I've looked up quite a few mvc frameworks for nodejs, and I would say that the best one, by far, is <a href="http://expressjs.com/">expressjs</a>. It's really easy to use and it's being actively updated.<br/>
<br/>
Create a directory for your website, navigate there in the terminal, and type<br/>
<pre class="brush:javascript">
sudo npm install express
</pre><br/>
Now we need to tell nodejs how to configure our application - where the controllers/models/views are, and what port to listen on...<br/>
<br/>
Create a file called index.js in the website directory you created -<br/>
<br/>
First things first :
<pre class="brush:javascript">
var express = require('express');
// note: no 'var' here, so 'app' becomes a global -
// the controller files we require later rely on it
app = express();
</pre><br/>
This defines 'app' as our expressjs web application, and gives us all the cool functionality that comes with the expressjs framework.<br/>
<br/>
After that we need to configure our application :<br/>
<pre class="brush:javascript">
app.configure(function() {
    app.set('view engine', 'jade');
    app.set('views', __dirname + '/views');
    app.use(express.logger());
    app.use(express.bodyParser());
    app.use(express.cookieParser());
    app.use(express.static(__dirname + '/scripts'));
    app.use(express.static(__dirname + '/css'));
    app.use(express.static(__dirname + '/img'));
    app.use(app.router);
});
</pre><br/>
<br/>
The first two lines tell express we're going to use the 'jade' view engine to render our views, and where our view files are located. (This is like 'razor' but a little different, for people coming from the .net mvc.) You can read about how the view engine works <a href="http://jade-lang.com/">over here</a>.
The next 3 lines tell express to use certain middleware. ('middleware' is like 'filters' in the asp.net mvc world.) Middleware intercepts each request and can do whatever it wants, including manipulating the request. Basically, each middleware is a function that gets called with the request object, the response object and a 'next' function, respectively.<br/>
The 'next' argument is a function that calls the next middleware in line.<br/>
All the middleware are called in the same order they are defined. The middlewares I use here are basic ones that come with the expressjs framework, and just make our life much easier (by parsing the request body and the cookies onto our request/response objects, and logging each request for us).<br/>
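Just to make the signature concrete, a tiny custom middleware of our own could look like this (purely an illustration - the site doesn't need it) :<br/>
<pre class="brush:javascript">
// every middleware is just a function(request, response, next)
app.use(function(request, response, next) {
    request.startTime = new Date(); // stamp each request with its start time
    next(); // hand the request over to the next middleware in line
});
</pre>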
<br/>
The final 3 lines of code tell expressjs which directories contain static files. This means that each request for a filename that exists in one of these directories will be served as static content.<br/>
Note : if we put a file called 'main.css' in the '/css' folder, we request it by going to http://ourdomain.com/main.css and NOT by going to http://ourdomain.com/css/main.css. (This got me a little confused at first...)<br/>
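(If you do want the folder name in the url, expressjs also lets you mount the static middleware on a path prefix :)<br/>
<pre class="brush:javascript">
// mounted on '/css', the file is now served at
// http://ourdomain.com/css/main.css
app.use('/css', express.static(__dirname + '/css'));
</pre>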
<br/>
After all that, we need to add our models and controllers...<br/>
<pre class="brush:javascript">
require('./models');
require('./controllers');
</pre>
The nodejs default when requiring a directory is to look for the file 'index.js' in that directory, so what I did is create an index.js file in each of those directories, and inside it just added a couple of 'require()' calls to the specific files in that directory.<br/>
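So a controllers/index.js can be as simple as this (the file names are hypothetical examples) :<br/>
<pre class="brush:javascript">
// controllers/index.js - just pull in each controller file;
// each one registers its own routes on the global 'app'
require('./home');
require('./users');
</pre>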
<br/>
For models you can create javascript objects however you like. On the projects I'm working on, I started using <a href="http://mongoosejs.com">mongoose</a> - which is like an ORM for mongodb. It's really simple to use, but I won't go into it for now... <br/>
<br/>
Finally, in our index.js file, we need to tell our app to listen on a certain port -
<pre class="brush:javascript">
app.listen(8888);
</pre><br/>
<b>Controllers</b><br/>
Defining controllers is really easy with express -
Each 'action' is defined by the HTTP verb (GET or POST), the url (which can include dynamic parameters), and the function to call.
A typical controller looks like this :
<pre class="brush:javascript">
app.get('/about', function(request, response) {
    // just render the view called 'about' -
    // this requires a file called 'about.jade' in the 'views' folder we defined
    response.render('about');
});

app.get('/user/:userId', function(request, response) {
    // 'userId' is a dynamic parameter in the url
    response.writeHead(200); // return 200 HTTP OK status
    response.end('You are looking for user ' + request.params.userId);
});

app.post('/user/delete/:userId', function(request, response) {
    // just a POST url sample -
    // navigating to this url in the browser won't return anything...
    // do some work...
    response.render('user-deleted'); // again, render our jade view file
});
</pre>
<br/><br/>
So, that's the end of this. It's really basic I know, but I hope it will help you get started... :)<br/>
The main idea of this post was to show just how easy it is to get started with nodejs.<br/>
I think I will be posting a lot more about nodejs in the near future! :)<br/>
<br/>
Have fun! :)<br/>
<br/>
<br/>
<b>Problems with Google Analytics</b><br/>
Most of the Google utilities I use are great - they usually have an intuitive design that makes them frictionless, and they have most of the features someone needs. The features they have usually work as expected too, which isn't trivial with some competing utilities.<br/>
<br/>
Lately I've been using Google Analytics and the truth is, I don't like what I see... :(<br/>
<br/>
The most annoying part of using Google Analytics is that there's no way of testing it!<br/>
It would seem like a trivial feature to me, but apparently not to the people at Google. Maybe most people don't have this problem - you usually set up the analytics reports when first building the website, so the testing is done on your production environment, which can be really easy: if you have no stats yet, you obviously have nothing to ruin.<br/>
When I was trying to make some minor changes to the way we report things on the website I work on at my job, the first thing I wondered was how I was going to test those changes.<br/>
<br/>
When you have many users in production, there's no chance you'll notice the change you made when you log in. Even if you would, you could accidentally affect other analytics, and I was obviously afraid of doing that. So, I set up a test account, and tried reporting to the test account from my local machine. This didn't work, since Google makes sure you make the request from the domain you registered in the GA (Google Analytics) account - which is great!<br/>
<br/>
After looking into this a little, I found out that I can tell GA to ignore the domain the request is coming from, so that this will work. From their documentation, this feature was meant for using multiple subdomains, but it works for reporting from any domain. Since this helped my cause, and I'm not afraid of others causing harm to my test account, I won't go into why this is a bad idea, and can be harmful to some other sites using it... :/<br/>
After doing all that, I came to realize that the analytics aren't reported in real time. This is also logical, since an analytics system usually needs to deal with large amounts of data, and it takes time to handle the load. (Not only is it not real-time, it's pretty far from being almost real-time as well.) BUT, this doesn't mean there shouldn't be a way around it for testing - like an option I could turn on so the reports show up in real time, even if it's limited to a really small number of hits, just for testing!<br/>
<br/>
In case someone reading this ran into the same problem - the configuration setting I used looks like this :<br/>
<pre class="brush:javascript">
_gaq.push(['_setDomainName', 'none']);
</pre>
<br/><br/>
By the way - from my experience with the Adobe Omniture utility, they have a great 'debugging' utility that you can use as a bookmarklet. It opens on any site and shows you the live reports going out, which is a GREAT tool for testing, and should've been implemented by Google in the same way.<br/>
<br/>
Another issue I had (and frankly, still have) with GA is that some of their documentation isn't complete...
For example : there are some pages (like 'Page Timings') where you can view the stats of different pages, and the average. You can sort this list by 'page title' or some other parameters. The problem is that when you have many pages that are basically the same but hold dynamic content (meaning all the 'page titles' are different), you might want to group them by a 'user defined variable' that you report on that page.<br/>
Great! You have this option. ...BUT, in the documentation, the way you report a 'User Defined Variable' is by using the '_setVar' method. It goes on to state that the '_setVar' method is soon to be deprecated and they don't recommend using it - instead you should use '_setCustomVar'. The problem here is that 'Custom Var' and 'User Defined Variable' aren't the same thing, and on some pages you can view one and on some the other. There is no documentation anymore for the '_setVar' method, so I searched various blogs where people wrote about it in the past, and found the way to use it, but it works in a different way, and I couldn't find a way to define its lifespan (per session/page/user/etc.) like you can with '_setCustomVar'.<br/>
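For reference, this is how a report with '_setCustomVar' looks (the name and value here are made-up examples; the last parameter is the scope - 1 = visitor, 2 = session, 3 = page) :<br/>
<pre class="brush:javascript">
// slot 1, variable 'pageGroup' with value 'product-page',
// scoped to the current page only
_gaq.push(['_setCustomVar', 1, 'pageGroup', 'product-page', 3]);
</pre>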
<br/>
Long story short... It seems like they have quite some work to do on this before it's perfect, or even close to perfect, and I'm not 100% sure I'll be using this again as a full site solution for web page analytics.<br/>