Sunday, June 24, 2012

Running UI tests on local website with development server

These past couple of days I've been writing a utility program that invokes some UI tests against a website of my choice. Obviously, I needed to write tests for the functionality of this utility, so I could be sure that it works.
I wanted the tests to be as real as possible, so I decided to run them against a real site, like Google (it could've been any other site; that doesn't really matter for the sake of the story).

The tests worked great, but then I realized they could easily break for reasons unrelated to my work - no one promises me that Google's page structure (or anything else I want to test) will always stay the same, I don't even know which tests I'll want to add in the future, and I can't be sure the site of my choice will have all the resources I need.

I decided to create a custom MVC site for the sake of my tests.
At first I configured the site on IIS locally, and this was great. But then I thought another step ahead and realized I had another problem - I want someone else to be able to download the source code and run the tests immediately, without having to create a local website and go through all the proper configuration.

I realized that I could use the development server for this.
First, I needed to give the site a static port to run on in the development server:
(In the project properties page, go to the 'Web' tab, select 'Use Visual Studio Development Server' and mark 'Specific Port'.)

Then, I created a class called 'DevelopmentServer', responsible for loading the development server and shutting it down at the end:
public class DevelopmentServer : IDisposable
{
    public static int Port = 1212;

    private readonly Process _devServer;
    private string _devServerExe = @"C:\Program Files (x86)\Common Files\microsoft shared\DevServer\10.0\WebDev.WebServer40.EXE";
    private string _testSitePath = @"C:\Dev\MySitePath\";

    public DevelopmentServer()
    {
        _devServer = new Process {
            StartInfo = {
                FileName = _devServerExe,
                Arguments = string.Format("/port:{0} /path:{1}", Port, _testSitePath)
            }
        };

        if (!_devServer.Start())
            Console.WriteLine("Unable to start development server");
    }

    public void Dispose()
    {
        // kill the development server process when we're done with it
        _devServer.Kill();
        _devServer.Dispose();
    }
}
The port number and the paths are hard-coded just for the sake of the example; it makes more sense to put them in a configuration file of your choice.
(Note: you might have to change the path to WebDev.WebServer40.EXE, which is the binary of the development server, according to the version you have. The earlier version of this file is called WebDev.WebServer20.EXE.)

Finally, I just created an instance of this class upon FixtureSetUp, and disposed of it on FixtureTearDown.
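To make that last step concrete, here's roughly what the fixture looks like (a sketch assuming NUnit 2.x attribute names and a hypothetical test class name - adapt it to your test framework):

```csharp
[TestFixture]
public class MySiteUiTests
{
    private DevelopmentServer _server;

    [TestFixtureSetUp]
    public void FixtureSetUp()
    {
        // starts the development server once, before any test in this fixture runs
        _server = new DevelopmentServer();
    }

    [TestFixtureTearDown]
    public void FixtureTearDown()
    {
        // shuts the development server down after the last test has finished
        _server.Dispose();
    }

    [Test]
    public void SomeUiTest()
    {
        // tests hit the site at http://localhost:1212/ (i.e. DevelopmentServer.Port)
    }
}
```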

Thursday, June 21, 2012

Redirecting external Process output to Console.Writeline (or elsewhere)

As part of some code I was writing recently, my app needed to trigger another command-line utility at some point. I did this easily, using the Process class like this:
var myProcess = new Process {
    StartInfo = {
        FileName = "C:\\path\\to\\my\\cmd\\utility.exe",
        Arguments = " --some --random --args"
    }
};
myProcess.Start();
This was working great, with one exception - sometimes the process threw an error, causing the console window to close immediately, so I didn't have time to view the error.
I knew I could tell there was an error from the process's exit code (myProcess.ExitCode), but while debugging it was important to know what the error was and actually see the output of the process.

Digging a little into the Process class, I easily found that you can redirect the process's output elsewhere. You just need to add:
// This needs to be set to false, in order to actually redirect the standard shell output
myProcess.StartInfo.UseShellExecute = false;
myProcess.StartInfo.RedirectStandardOutput = true;

// This is the event that is triggered when output data is received.
// I used Console.WriteLine() - you can use whatever you want, basically...
myProcess.OutputDataReceived += (sender, args) => Console.WriteLine(args.Data);

// (the properties and the event above must be set before calling myProcess.Start())
myProcess.Start();
myProcess.BeginOutputReadLine(); // without this, the OutputDataReceived event won't ever be triggered
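One more thing worth knowing: many programs write their error messages to standard error rather than standard output, so if you redirect only stdout you might still miss the interesting part. The error stream can be redirected in exactly the same way (a sketch along the same lines):

```csharp
myProcess.StartInfo.RedirectStandardError = true;
myProcess.ErrorDataReceived += (sender, args) => Console.WriteLine(args.Data);

// ...and after myProcess.Start():
myProcess.BeginErrorReadLine(); // same idea as BeginOutputReadLine, for stderr
```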

That's it! Now I was getting all I needed from the running process, and it was much easier to find the problem this way. :)

Enjoy :)

Saturday, June 16, 2012

Having fun web crawling with PhantomJS

A couple of weeks ago, a colleague of mine showed me this cool tool called PhantomJS.
It's a headless browser that can run JavaScript to do almost anything you would want from a regular browser, just without rendering anything to the screen.

This can be really useful for tasks like running UI tests on a project you've created, or crawling a set of web pages looking for something.

...So, this is exactly what I did!
There's a great site I know of that has a ton of great ebooks ready to download, but the problem is that it shows only 2 results on each page, and the search never finds anything!

Realizing that this site has a very simple URL structure (e.g. website/page/#), I created a quick JavaScript file telling PhantomJS to go through the first 50 pages and search for a list of keywords that interest me. If it finds something interesting, it saves the name of the book along with the page link into a text file, so I can download them all later. :)

Here's the script :
var page;
var fs = require('fs');

function scanPage(pageIndex) {
 // dispose of page before moving on
 if (typeof page !== 'undefined')
  page.close();

 // dispose of phantomjs if we're done
 if (pageIndex > 50) {
  phantom.exit();
  return;
 }

 // start crawling...
 page = require('webpage').create();
 var currentPage = 'your-favorite-ebook-site-goes-here/page/' + pageIndex;
 page.open(currentPage, function(status) {
  if (status === 'success') {
   window.setTimeout(function() {
    console.log('crawling page ' + pageIndex);
    var booksNames = page.evaluate(function() {
     // there are 2 book titles on each page, just put these in an array
     return [ $($('h2 a')[0]).attr('title'), $($('h2 a')[1]).attr('title') ];
    });
    checkBookName(booksNames[0], currentPage);
    checkBookName(booksNames[1], currentPage);
    scanPage(pageIndex + 1);
   }, 3000); // wait 3 seconds between pages, as a precaution against loading their site
  } else {
   console.log('error crawling page ' + pageIndex);
   scanPage(pageIndex + 1);
  }
 });
}

// checks for interesting keywords in the book title,
// and saves the link for us if necessary
function checkBookName(bookTitle, bookLink) {
 // keywords are lower-case, since we compare against the lower-cased title
 var interestingKeywords = ['c#', 'java', 'nhibernate', 'windsor', 'ioc', 'dependency injection',
  'inversion of control', 'mysql'];
 for (var i = 0; i < interestingKeywords.length; i++) {
  if (bookTitle.toLowerCase().indexOf(interestingKeywords[i]) !== -1) {
   // save the book title and link
   var line = bookTitle + ' => ' + bookLink + '\n';
   fs.write('books.txt', line, 'a');
   return;
  }
 }
}

scanPage(1);

Just some notes on the script :
  • I added comments to try to make it as clear as possible. Feel free to contact me if it isn't.
  • I hid the real website name from the script for obvious reasons. This technique could be useful for a variety of things, but you should first check for any legal issues.
  • I also added an interval of 3 seconds between each page crawl - another precaution against putting too much load on their site.

In order to use this script, or something like it, just go to the PhantomJS homepage, download it, and run this at the command line:
C:\your-phantomjs-lib\phantomjs your-script.js

Enjoy! :)

Friday, June 8, 2012

My opinion on Git vs. SVN

I finally decided to convert to Git...
Yes, this sounds like a religious statement (just like saying "I'm converting to Christianity, Judaism or Islam") because it is!

Let me give some background -
At my work we have a main SVN repository, and we all used to use Subversion (with TortoiseSVN, and AnkhSVN for Visual Studio integration). This was all great until, one day, some people decided that Git is so much better than SVN and started convincing the others to try it out. So some of us did (or, more accurately, attempted to try it out) by installing Git Extensions. In case you're not familiar with it, Git Extensions is a Windows GUI for Git; it's extensible, and there's a plugin called git-svn that lets you work Git-style locally while actually having an SVN server in the background. I went along and did the same a couple of weeks ago. I also hosted a personal project on GitHub, to get the feeling of working with a real Git repository (and not just through git-svn).

I must say that working with Git is great!
...But working with SVN was great too!
If I look back on it, I don't remember the same people who are now fans of Git complaining about SVN the way they "claim" to have been complaining about it. I think they're more in love with the "coolness" of using Git, since it seems like the cooler trend nowadays.
GitHub has become very popular lately, and not for nothing - their website is really easy and comfortable to use, and they have great tools for socializing and communicating on distributed projects over the internet. With that said, if they were hosting SVN as well, I don't think it would make that big a difference.

All in all, Git is great, but it has its share of problems. SVN has its own, totally different share of problems as well. If there's one good reason for me to convince you to start using Git, it's that I truly believe great developers should be well aware of the advantages and disadvantages each tool set gives them, and that it's always good to learn a new tool every once in a while. That's the only way to decide which tools are best for you or your project.

Git and SVN are completely different in how they work, and it's really interesting to dig into.
Here's a great post I read a while ago that explains it really well: Understanding Distributed Version Control Systems