Friday, July 3, 2015

Domain purchase scam

I hold a few domains under my name, mostly because they're all related to some idea I once had and wanted to build. I didn't buy them with the intention of selling them, but recently I ran into a site that gives you a platform for selling domains. Since I never ended up doing anything with the domains, I figured I'd put them up for sale, and if by some chance someone is really interested in one of the names, I'll make a few bucks.

A few days ago, someone tried to scam me, so I'm sharing the story in the hope of informing and educating others.

How did they reach me?
I believe the scammer saw I was trying to sell a site on flippa and decided to contact me. I'm 100% sure this isn't related to flippa itself, so I'm not blaming them in any way. The scammer could've just seen my domain for sale and found my email in the whois database.
I always put my real details in the whois database, which exposes you to spam and scams like this, but that doesn't bother me; you just need to be careful. There are services that offer to hide your details in the whois database - I think this is useless, and wouldn't spend money on it.

First Email
I received an email from someone named "Shlomo Greenberg" saying he represents an investor from Europe who is interested in a specific domain of mine. He asked if I was willing to sell, and said that if so, we'd negotiate the details in the next email.

Hints this might be a scam
The first hint was that the email came from a 'gmail' account, but gmail was smart enough to notify me that the 'from' address might be forged. I had never seen this message before, but it looks like this:

Why is this a hint of a scam? Because if someone is automating the emails, then they're sending them from some other server and just forging the 'gmail' address to make it look like someone personally contacted me.

That being said, this doesn't necessarily mean it's a scam! I had never seen this warning before, and I'm sure it can appear in cases where you're not being scammed. It shouldn't stop you immediately, but it is a clear sign to proceed with caution.

Some basic investigation
Because of the gmail notice that seemed fishy, and because I'm a curious person by nature, I decided to do some very basic research on the person contacting me. You don't have to be a private investigator to do this; simple common sense will take you a long way.
The mail was signed by a "Shlomo Greenberg" with "Lawyer" next to it, and it included an Israeli address. I searched the internet for his name, with and without the lawyer title, in English and in Hebrew, but couldn't find anything. I also looked up the address he gave on google maps, but it seemed to lead to some coffee shop.
(I'm not saying this Shlomo Greenberg guy isn't a real lawyer; maybe he is. But I couldn't find anything about him online, and for someone who claims to represent a European investor, I would imagine something would come up.)

My reply
At this point I still wasn't sure whether this was a scam, but I figured I had nothing to lose. I replied that I was willing to sell for $2000, and asked him to let me know if the investor could pay that amount.

The scam!
I received an email back saying that the price isn't a problem for the investor, but that they want a "domain certificate" so they know it's legit.
What is this "domain certificate"?
He added a link to a page on "Google Answers" where someone had asked how to get a domain certificate, and some other user had answered with a link to a site that sells certificates. He said I should go to that link and get a certificate.
He also explained that the certificate gives an evaluation of the domain's price, validates ownership, and provides some basic due diligence on the trademark.

At this point I knew it was a scam for quite a few reasons.

First, the "Google Answers" link he gave me wasn't a real google answers page. It was linked to "" which isn't a domain owned by google (according to whois), but it was perfectly crafted to look like a google answers page.
Google Answers is a product that was shut down a long time ago. I've never even searched something on google and landed on quality results from google answers, so even if it had been a real google answers page, I wouldn't have given it any credit.

Second, the site for getting the domain certificate looked bad. I'm a web designer, so I have an eye for basic web design. It seemed very unprofessional, and it didn't appear to be related to any official organization connected to world wide web standards or anything like that.

Third, the certificate cost ~$150.

Lastly, if you're selling a domain, you shouldn't need to purchase a certificate of any sort!
Validation of ownership is done via whois, and if your details aren't there, you can simply transfer the domain via an escrow service, which will easily protect both sides in the transfer.
There is no reason for someone selling a domain to do any due diligence. If someone ever asks you for this, it's bullshit. They can (and should) do all the research they want on their own before purchasing the domain from you.
Finally, I believe the whole domain appraisal business is bullshit! I see people on flippa writing "this site got an appraisal of Xk dollars!". I don't believe there's an actual way to do a domain appraisal, and even if there is, it shouldn't mean anything to the seller. You should sell the site for as much as you can get someone to pay for it. If you can't get someone to pay more for it, then it doesn't matter how much you *think* it's worth; it's obviously not.

Benefit of the doubt
At this point, although I was sure it was a scam, I wanted to see how it would play out. Obviously I wasn't going to buy a domain certificate, but I replied kindly, stating that I wasn't going to spend money on the certificate, though the buyer could pay for it if they wanted to.
I explained that I was willing to do an escrow transfer so we'd both be protected.
Needless to say, I never got a response back...

Beware of scams!
There will always be people out there thinking of elaborate ways to scam you, and investing time and money in crafting their techniques. The only reason they continue to do this is that it works to some extent.
Let's try to stop it, so the scammers will eventually realize it's not worth it either.

Monday, May 11, 2015

AngularJS custom directive with two-way binding using NgModelController

It took me a while, but I finally got it right!
I recently tried to create a custom directive with two-way binding, using the 'ng-model' attribute. This was a little tricky at first - I ran into some articles, but they didn't seem to work for me, and I needed to make some tweaks to get it right.

I don't want to go over everything I read; I just want to publish the changes and gotchas you should know about.

The best article I read on the subject is this one :
I recommend reading it. It has the best explanation of how '$formatters' and '$parsers' work and what their relation to the ngModelController is.

After reading that article, there were two problems I ran into.

1. ngModelController.$parsers and ngModelController.$formatters are arrays, but 'pushing' my custom function to the end of the array didn't work for me. When the model changed, it never got invoked. To make this work, I needed to push it to the beginning of the array, using the Array.prototype.unshift method.

2. The second problem I had was that I needed to pass ng-model an object; passing it a primitive value won't work. You might be thinking that's obvious, since a primitive value can't serve as a reference, but it wasn't obvious to me, because passing ng-model a primitive when using an 'input' element, for example, works and still updates both ways.
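To see why the position in those arrays matters (problem 1), here's a plain-JS model of how such pipelines behave. This is my own simplification for illustration, not Angular's actual source: each pipeline is just an array of functions applied in sequence, so where you insert your function decides whether it sees the raw value or some other function's output.

```javascript
// Each pipeline is just an array of functions applied in order.
function runPipeline(fns, value) {
  return fns.reduce(function (acc, fn) { return fn(acc); }, value);
}

// A stand-in for a formatter that's already registered in the array:
var formatters = [function (v) { return JSON.stringify(v); }];

// push() puts my function last, so it runs AFTER the built-in one
// and only ever sees the stringified value:
var pushed = formatters.slice();
pushed.push(function (v) { return v.selected; });
console.log(runPipeline(pushed, { selected: 'a' })); // undefined - too late

// unshift() puts it first, so it sees the raw model object:
var unshifted = formatters.slice();
unshifted.unshift(function (v) { return v.selected; });
console.log(runPipeline(unshifted, { selected: 'a' })); // '"a"'
```

The same reasoning applies to $parsers: if another function in the array transforms (or drops) the value before yours runs, yours never sees what you expect.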

For a full working example of a two-way binding directive using ngModelController (the ng-model attribute), you can take a look at this:

Monday, April 27, 2015

Reviewing Kibana 4's client side code

I haven't written anything technical for a while, mainly because I changed jobs a few times this past year. After working at Sears Israel for almost 3 years, I thought it was time to find the next adventure. I think I finally found a good match for me, and I'll probably write a whole post about that soon.

For now, I'll just say that at the new startup I work at, we're doing a lot of work on the ELK stack, and I got to do a lot of work on Kibana. With years of experience on various client side applications, I still learned a lot from looking at kibana's code. I think many things here are written really elegantly, so I wanted to point them out in one concentrated post on the subject. There are also some bad points, mainly minor things (in my opinion), that I will mention as well.

At First Glance

Kibana 4 is a large AngularJS application. The first thing I noticed when looking at the code is that it has a great structure. Many AngularJS tutorials (or tutorials for any other MVC framework) and code-bases I've worked on have the messy structure of a 'models' directory, a 'controllers' directory, and a 'views' (or 'templates') directory.
Kibana did the right thing by organising the code by features/components, and not by framework-layer definitions. This makes it much easier to navigate through the code base and to add more features.
Having a code base organised by controllers, models, views, etc, doesn't do much for you except turn each directory into a pile of unrelated features, violating the Separation of Concerns principle.

(In the image you can see each component grouped in its own directory, which includes its templates, its code and its styles all together)

In addition, most AngularJS applications I've seen have all their routes defined in one file (usually app.js or index.js), which goes along with many global definitions, and sometimes logic related to specific pages or models, all in a single file with no relation to any feature.
Kibana's code is nicely organised, and each 'plugin' or 'component' (discover/visualize/dashboard/settings/etc) defines its own routes in its own controller.
They manage to do this by creating their own 'RouteManager'. It exposes basically the same api as angular's routing, but it collects the routes you define, and only at the end calls angular's route provider to actually add them (by calling routes.config).
This custom route manager also adds the ability to resolve certain things before the route is called, which is really useful in many situations.

Javascript Libraries

The creators of kibana did a great job (with a few minor exceptions that I will explain at the end) in choosing many open source javascript libraries to lean on while building kibana. It's usually a good idea not to reinvent the wheel, especially when someone already did a good job before you.

RequireJS is a javascript module loader. It helps you create modular javascript code and makes it really easy to deal with dependencies between modules. Kibana's code does a great job utilizing RequireJS by defining most javascript modules in the AMD standard.

A really nice trick they did here that is definitely worth mentioning is the 'Private' service they created. It is a wrapper that allows you to define a RequireJS module with angularJS dependencies. This lets you use angular's dependency injection abilities side-by-side with RequireJS' module loading.

Regularly loading RequireJS modules in the code looks like this:
define(function (require) {
    var myService = require('my_service');
    // now do something with myService
});

Using the 'Private' service, you load modules like this:
define(function (require) {
    var myAngularService = Private(require('my_angular_service'));
    // now you can use myAngularService
});

And most important is that my_angular_service looks like this:
define(function (require) {
    return function ($q, $location, $routeParams) {
        // all angular providers in the function parameters are available here!
    };
});
The Private service uses angular's $injector to invoke the module function and inject the dependencies it asks for.
(Take a look at the 'Private' service code here :

If you're not familiar with lodash, you should be. It's the missing javascript utility library that will definitely help you DRY up your javascript code. It has many "LINQ"-like methods (for those familiar with C#), and many other basic methods you would usually write yourself to help iterate over objects and arrays in javascript. One really nice feature of lodash is that most methods can be chained to make your code more readable, and lodash uses lazy evaluation, so performance is amazing!

I don't want to start writing about the features of lodash, but I strongly suggest reading their docs, and getting familiar with it.
Almost every service, component or controller in the kibana code starts with this line:
var _ = require('lodash');

They also did a really good job extending lodash with some utility methods of their own. Take a look at these files to see for yourself :

(There's one thing I don't like here, which is the methods 'get' and 'setValue'. They do a 'deepGet' and 'deepSet', which is like saying "hey, I know I have something here in this object, but I have no idea where it is". That just doesn't feel right... :/ )
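For readers unfamiliar with the pattern, a minimal sketch of what a 'deepGet' does (my own illustration, not Kibana's implementation) is just walking a dot-delimited path into a nested object:

```javascript
// Walk a dot-delimited path into a nested object, returning undefined
// (instead of throwing) when any step along the path is missing.
function deepGet(obj, path) {
  return path.split('.').reduce(function (acc, key) {
    return acc == null ? undefined : acc[key];
  }, obj);
}

var state = { panels: { legend: { open: true } } };
console.log(deepGet(state, 'panels.legend.open'));  // true
console.log(deepGet(state, 'panels.tooltip.open')); // undefined, no throw
```

Convenient, yes, but you can see why it feels off: the call site no longer states where in the object the value actually lives.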

Some HTML5

Throughout the code there has been some good use of html5 features.
The first one I noticed and really liked is the 'Notifier' service. I really like the abstraction here over notifying the user of different message types, and the abstraction over the browser's 'console' methods. The 'lifecycle' method is really neat, and uses the console's grouping ability to group messages in the browser's console. It also uses a newer timing API, which is much better than the older one (it's more exact, and it's relative to the navigationStart metric).

Kibana also makes use of the less-common <wbr/> tag. This is new in html5 and is intended to give you a little more control over where the line breaks when text overflows its container.

There's also use of 'localStorage' and 'sessionStorage' for saving many local view settings across the different kibana pages. In general, they did a great job persisting the user's state on the client side. When navigating between tabs, it keeps you on the last view you were in when returning to the tab.

Another nice thing is that there is a lot of use of aria-* attributes, and recently I'm seeing more and more of this in the newer commits. It's nice to see a big open source project dedicating time to these kinds of details.

Object Oriented Programming

There is a great deal of attention to the design of objects in the code.
First, I like the way inheritance is implemented here: a simple lodash 'mixin' allows for object inheritance.

inherits: function (Sub, Super) {
    Sub.prototype = Object.create(Super.prototype, {
        constructor: {
            value: Sub
        },
        superConstructor: {
            value: Sub.Super = Super
        }
    });
    return Sub;
}

Many objects in the code use this to inherit all the properties of some base object. Here's an example from the 'SearchSource' object:
return function SearchSourceFactory(Promise, Private) {
    var _ = require('lodash');
    var SourceAbstract = Private(require('components/courier/data_source/_abstract'));
    var SearchRequest = Private(require('components/courier/fetch/request/search'));
    var SegmentedRequest = Private(require('components/courier/fetch/request/segmented'));

    function SearchSource(initialState) {, initialState);
    }

    // more SearchSource object methods

    return SearchSource;
};



You can see the SearchSource object inherits all the base properties from the SourceAbstract object.

In addition, methods are defined on the object prototype rather than per instance. This is great mainly for memory usage: putting a method on the object's prototype makes sure there's only one instance of the method in memory, shared by all objects created from it.
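A quick generic illustration of the difference (not Kibana code): methods assigned inside the constructor are re-created for every object, while prototype methods are shared.

```javascript
// One closure is allocated per instance here:
function PerInstance() {
  this.getType = function () { return 'per-instance'; };
}

// ...versus a single shared function on the prototype here:
function OnProto() {}
OnProto.prototype.getType = function () { return 'prototype'; };

var a = new PerInstance(), b = new PerInstance();
console.log(a.getType === b.getType); // false - two separate closures

var c = new OnProto(), d = new OnProto();
console.log(c.getType === d.getType); // true - one shared function
```

Multiply that per-instance closure by every method and every object a large app creates, and the memory difference becomes meaningful.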

Memory Usage

Since kibana is a big single-page application, it needs to be careful with memory usage. Apps like kibana can be left open in a browser for a long time without a refresh, so it's important to make sure there are no memory leaks. AngularJS makes this easy to handle, but many programmers don't bother going the extra mile.
In the kibana code, many directives subscribe to the '$destroy' event and unbind their event handlers so they don't hold references to unused objects.

An example from a piece of kibana code (the css_truncate directive):
$scope.$on('$destroy', function () {
    // unbind the directive's event handlers here
});

Code Conventions

Kibana's code is mostly very organized, and more importantly, readable. A small negative point goes here for some inconsistencies in method naming: some classes have public methods that start with '_' and some don't.

For an example of this, look at the DocSource object. The file even has 'Public API' and 'Private API' comments, but the naming convention differences between the two aren't clear.

Code Comments

I can say the code has enough comments, though it's hard to quantify, since most of the code is readable without comments, which is an amazing thing. There are great comments in most places that should have them.

Just as a funny anecdote, I was surprised to see comments that actually draw, in ascii art, the function they describe! Kudos!
/**
 * Create an exponential sequence of numbers.
 * Creates a curve resembling:
 *                                                         ;
 *                                                         /
 *                                                        /
 *                                                     .-'
 *                                                 _.-"
 *                                            _.-'"
 *                                      _,.-'"
 *                               _,..-'"
 *                       _,..-'""
 *              _,..-'""
 *  ____,..--'""
 *
 * @param {number} min - the min value to produce
 * @param {number} max - the max value to produce
 * @param {number} length - the number of values to produce
 * @return {number[]} - an array containing the sequence
 */
createEaseIn: _.partialRight(create, function (i, length) {
    // generates numbers from 1 to +Infinity
    return i * Math.pow(i, 1.1111);
})


CSS Styling

Another great success here is the use of the 'less' format for css files. This allows for small and concise 'less' files, and easy reuse of css components (known as 'mixins'). An especially great job has been done with colors - all the colors are defined in a single file (_variables.less). By editing this file, you can easily create your own color scheme.

(There are a few exceptions - mainly a few colors defined in js files or css files - but it's 99% covered in _variables.less).

Build Process

Kibana has a grunt build process set up. It compiles the less files into css, combines them and the js files (without minifying, using r.js), adds parameters to the resource files for cache-busting, and handles some more small tasks.
I would be happy to see this upgraded to gulp, which is stream based and has a much nicer api (in my opinion), but grunt still does the job.


After writing so many good points about kibana's source code, this is where the compliments end: performance.
Maybe it's because kibana was built as an internal tool that isn't meant to be served over the internet, and maybe it's just because I'm overly sensitive after working for quite a while on the performance team at Sears Israel. Either way, if it were an online website, its performance would be considered under-par.

JS files aren't minified. They are combined, but not minified. Unfortunately, the code isn't even prepared for minification: to minify safely, angularjs dependencies need to be declared as strings alongside the function itself; otherwise angularjs's dependency injection mechanism breaks when the minifier renames the function parameters.
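To see why the string annotations matter, here's a toy injector (my own sketch, not Angular's implementation): dependencies are looked up by the declared string names, so renaming the function's parameters - which is exactly what a minifier does - changes nothing.

```javascript
// A minimal name-based injector: the last element of the array is the
// function, everything before it names the dependencies to pass in.
var providers = {
  $scope: { title: 'discover' },
  $http: function () {}
};

function invoke(annotated) {
  var fn = annotated[annotated.length - 1];
  var deps = annotated.slice(0, -1).map(function (name) {
    return providers[name];
  });
  return fn.apply(null, deps);
}

// Even if a minifier renames the parameter to 'a', the string survives:
var title = invoke(['$scope', function (a) { return a.title; }]);
console.log(title); // 'discover'
```

Without the strings, an injector can only infer dependencies by parsing the parameter names out of the function source, and those names don't survive minification.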

CSS files aren't minified either, just combined.

JS files are ~5MB !!! Yes, almost 5MB!! That's huge, and it's all downloaded on kibana's initial load. This could've been split into a few separate files, downloading only the ones needed for the initial view first; that alone would be a great improvement. That said, there are advantages to not minifying the js, and I think that's what the creators had in mind: it's easier to debug with DevTools (no need for source maps), and although the initial load takes a long time, after that there is no wait on any other page. If the resources are cached on your machine, then even getting back to kibana a second time should be really fast.

There are also some libraries in the source code that I think are redundant and could maybe have been removed with a little extra work. One example is jquery, whose use alongside angularjs is generally frowned upon. AngularJS comes with jqlite, a smaller version of jquery, which should suffice.

I hope it doesn't sound like I think they did a bad job - I'm just pointing out some areas in the code that could maybe have been done differently. All in all the app is amazing, and works great! :)

In conclusion

I had a great time learning and working (and still working) on kibana's code. I tried to show a lot of the things I like about the code, and to point out a few minor bad things. I hope you enjoyed reading this, and kudos to you if you got to this point! :)

I also hope to write another post about how kibana communicates with elasticsearch, and maybe one more on how it renders the visualizations with the help of D3.js.

Sunday, February 22, 2015

When 7 Billion users just aren't enough...

It seems like the only thing threatening facebook's user growth is earth's mortality rate.
It also seems that they're not going to let that stop them from growing even more anytime soon!

Facebook just released a new feature that allows you to assign another facebook account to control your account after you pass away.

I personally give them a HUGE amount of credit for the creativity. Let this be a lesson to all of us about making the most out of a situation where resources are limited.

Tuesday, February 17, 2015

Simple nodejs desktop time tracking utility

I recently wrote about Creating desktop applications with nodejs...
Well, I was playing around a little with node-webkit (again!) - it's a nodejs framework for building cross-platform desktop applications. Within a few hours I built the super simple time tracking utility that I had needed for quite some time!

I know there are already a ton of utilities like this out there, but all of them have many more features than I want or need, and annoy me too much while using them. This utility does *nothing* but track time. You just add a task and it starts timing it. You can stop and start tasks, and just remove them when you're done.

I'm not so fanatic about time productivity (yet!) that I need history graphs to show me how productive I've been lately. It's actually more for me to see if the tasks I'm working on take as long as I think they should.

So here it is:

All the code is there.
There's also a compiled executable for windows ready inside the 'Installation' folder.
Now that I have a mac, I want to compile it for mac soon too.
(I'll also probably be making some UI improvements and maybe adding some more small features in the future, so follow the repository if you're interested.)

And here's a picture of what it looks like :


Saturday, November 8, 2014

Online multiple javascript compression tool

Minifying/compressing javascript files has been standard practice in website development for a while now. It is very important to save as much space as you can and have your users download as little as possible, to improve the performance of your site.
I don't think you'll find an article out there that denies it.

An important question that arises from this is: which compressor/minifier should you use?
Different famous open source projects use different compressors, and I'm guessing (or at least hoping) they chose them wisely, relying on benchmarks they ran on their own.
You see, each compressor works differently, so different code bases won't be affected in the same way by different compressors.

In the past I used to manually test my code against different compressors to see which one was best for me. I finally got sick of doing it manually, so decided to look for a tool that will do the job for me. Surprisingly, I didn't find one that did exactly that, so I quickly wrote a script that will do it for me. Then I decided to design a UI for it and put it online for others to enjoy as well.

I present to you :
You can paste text, or drag some js files to it, and choose which compressor you want. Or - and this is the default method - choose 'Check them all', which will compress your code using the most popular compressors and show you the results and the compressed size from each of them. You can download the files directly from the site.

Here's a screenshot :

Currently the site can compress your javascript code with YUI Compressor, UglifyJS, JSMin and Google's Closure Compiler.
If you have any thoughts or suggestions on how to improve, feel free to drop a comment below. :)

Tuesday, November 4, 2014

Lazy loading directives in AngularJS the easy way

The past few months I've been doing a lot of work with AngularJS, and currently I'm working on a single page application that's supposed to be quite big in the end. Since I have the privilege of building it from scratch, I'm taking many client-side performance considerations into account now, which I think will save me a lot of hard optimization work in the future.

One of the main problems is the HUGE amount of js files being downloaded to the user's computer. A great way to avoid this is to download only the minimum the user needs, and dynamically load more resources in the background or as the user reaches pages that require a specific feature.

AngularJS is a great framework, but it doesn't have anything built in that deals with this, so I did some research myself...
I ran into some great articles on the subject, which really helped me a lot (and I took some ideas from), but they weren't perfect.
A great article on the subject is this one :
The important part is that it explains how to dynamically load angularjs directives (or other components) after bootstrapping your angularjs app.
What I didn't like about this article is that the writer's example requires RequireJS and jQuery along with all the AngularJS files you already have. This alone will make your app really heavy, and I don't think it needs to be like this.

Let me show you how I wrote a simple AngularJS service that can dynamically load directives.

The first crucial step is to save a reference to $compileProvider. This provider is available to us while bootstrapping, but not later, and it is what will compile our directive for us.
var app = angular.module('MyApp', ['ngRoute', 'ngCookies']);

app.config(['$routeProvider', '$compileProvider', function($routeProvider, $compileProvider) {
    $routeProvider.when('/', {
        templateUrl: 'views/Home.html',
        controller: 'HomeController'
    });

    app.compileProvider = $compileProvider;
}]);

Now we can write a service that will load our javascript file on demand and compile the directive for us, ready to use.
This is a simplified version of what it should look like:
app.service('LazyDirectiveLoader', ['$rootScope', '$q', '$compile', function($rootScope, $q, $compile) {
    // This is a dictionary that holds which directives are stored in which files,
    // so we know which file we need to download for the user
    var _directivesFileMap = {
        'SexyDirective': 'scripts/directives/sexy-directive.js'
    };

    var _load = function(directiveName) {
        // make sure the directive exists in the dictionary
        if (!_directivesFileMap.hasOwnProperty(directiveName)) {
            console.log('Error: unrecognized directive: ' + directiveName);
            return $q.reject('unrecognized directive: ' + directiveName);
        }

        var deferred = $q.defer();
        var directiveFile = _directivesFileMap[directiveName];

        // download the javascript file
        var script = document.createElement('script');
        script.src = directiveFile;
        script.onload = function() {
            // the file registered the directive via app.compileProvider,
            // so resolve inside a digest cycle
            $rootScope.$apply(deferred.resolve);
        };
        document.body.appendChild(script);

        return deferred.promise;
    };

    return {
        load: _load
    };
}]);

Now we are ready to load a directive, compile it, and have it ready for use in our app.
To use this service, we simply call it from a controller, or any other service/directive, like this:
app.controller('CoolController', ['LazyDirectiveLoader', function(LazyDirectiveLoader) {
    // lets say we want to load our 'SexyDirective' - all we need to do is this:
    LazyDirectiveLoader.load('SexyDirective').then(function() {
        // now the directive is ready...
        // we can redirect the user to a page that uses it!
        // or dynamically add the directive to the current page!
    });
}]);

One last thing to notice is that your directives now need to be defined using '$compileProvider', and not the way we would regularly define them. This is why we exposed $compileProvider on our 'app' object earlier. So our directive js file should look like this:
app.compileProvider.directive('SexyDirective', function() {
    return {
        restrict: 'E',
        template: '<div class=\"sexy\"></div>',
        link: function(scope, element, attrs) {
            // ...
        }
    };
});
I wrote earlier that this is a simplified version of what it should look like, since there are some changes I would make before using it as is.
First, I would probably add better error handling to cover edge cases.
Second, we wouldn't want the app to attempt to download the same file several times, so I would probably add a cache mechanism for loaded directives.
Also, I wouldn't want the list of directive files (the _directivesFileMap variable) directly in my LazyDirectiveLoader service, so I would probably create a service that holds this list and inject it into the service. The service that holds the list would be generated by my build system (in my case I created a gulp task to do this). This way I don't need to make sure the file map is always up to date.
Finally, I think I would extract the part that loads the javascript file into a separate service, so I could easily mock it in tests. I don't like touching the DOM in my services, and if I have to, I'd rather isolate it in a separate service that's easy to mock.
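The caching idea above can be as simple as memoizing one promise per directive name. Here's a plain-JS sketch with a fake loader (not the actual service; the names are mine):

```javascript
// Wrap any promise-returning loader so each name is only loaded once:
// repeat calls for the same name get the same cached promise back.
function makeCachedLoader(loadFn) {
  var cache = {};
  return function (name) {
    if (!cache[name]) {
      cache[name] = loadFn(name); // first call kicks off the real load
    }
    return cache[name];
  };
}

var loads = 0;
var load = makeCachedLoader(function (name) {
  loads++; // count how many real loads happen
  return Promise.resolve(name + ' loaded');
});

Promise.all([load('SexyDirective'), load('SexyDirective')]).then(function (results) {
  console.log(results[0], loads); // 'SexyDirective loaded' 1
});
```

Caching the promise (rather than the result) also covers the in-flight case: a second request arriving while the script is still downloading waits on the same promise instead of injecting the script twice.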

I uploaded a slightly better (and a little less simplified) version of this over here :