Hasselt University

25 July, 2011

The last blog post I wrote about my master thesis was on June 1st, so this final post has been long overdue. To the (very few) readers interested in the technical details: I apologize for the long delay in writing about the last part.
That last blog post was about FP-Growth; this one is about FP-Stream. Whereas FP-Growth can analyze static data sets for patterns, FP-Stream can find patterns over data streams. FP-Stream relies on FP-Growth for significant parts, but it is considerably more advanced. In essence, this phase only adds the capability to mine over a stream of data. While that may not sound like much, the added complexity of achieving it turns this into a fairly large undertaking.
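The central bookkeeping structure FP-Stream adds on top of FP-Growth is the logarithmic tilted-time window: each pattern's counts are kept at full resolution for recent batches and at ever coarser granularities for older history, so memory grows logarithmically with stream length. The sketch below is purely illustrative (it is not the thesis code, which is written in Qt/C++, and it omits the approximation and pruning rules of the actual FP-Stream algorithm):

```python
class TiltedTimeWindow:
    """Illustrative logarithmic tilted-time window.

    levels[k] holds batch counts that each span 2**k batches. A level
    holds at most two slots; when a third arrives, the two oldest slots
    are merged and carried into the next (coarser) level.
    """

    def __init__(self):
        self.levels = []

    def add(self, count):
        carry = count
        k = 0
        while carry is not None:
            if k == len(self.levels):
                self.levels.append([])
            self.levels[k].insert(0, carry)  # newest slot first
            if len(self.levels[k]) > 2:
                # Merge the two oldest slots into one coarser slot.
                oldest = self.levels[k].pop()
                older = self.levels[k].pop()
                carry = older + oldest
            else:
                carry = None
            k += 1

    def total(self):
        return sum(sum(level) for level in self.levels)
```

Because this sketch only merges and never drops slots, the total is exact; the real FP-Stream additionally prunes tail windows of patterns that have become infrequent.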

1 June, 2011

The previous blog post covering my master thesis was about the libraries I wrote for detecting browsers and locations: QBrowsCap and QGeoIP.
On the very day that post was published, I reached the first implementation milestone: the application could already find causes of slow page loads, albeit not yet over precisely specified periods of time, but over each chunk of 4,000 lines read from an Episodes log file. To achieve this, I completed an implementation of the FP-Growth algorithm, which I then modified to add support for item constraints.

FP-Growth {#FP-Growth}

Thoroughly explaining the FP-Growth algorithm would lead us too far, so I'll only include a brief explanation below. For details, I refer to the original paper, “Mining frequent patterns without candidate generation” by J. Han, J. Pei, Y. Yin and R. Mao, which is easy to find through Google Scholar.
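In a nutshell, FP-Growth finds all frequent itemsets without Apriori-style candidate generation: it recursively projects the database onto each frequent item (the "conditional pattern base") and mines the projections. The thesis implementation is in Qt/C++; what follows is a minimal Python sketch of that divide-and-conquer idea. For brevity it keeps the projected databases as plain transaction lists rather than the paper's compact FP-tree, so it shows the recursion but not the data structure:

```python
from collections import defaultdict

def frequent_itemsets(transactions, min_support):
    """Return {frozenset(itemset): support} for all frequent itemsets."""
    # Fix a global item order: descending support, ties broken by name.
    # Conditioning only on higher-ranked items guarantees each itemset
    # is generated exactly once.
    global_counts = defaultdict(int)
    for t in transactions:
        for item in set(t):
            global_counts[item] += 1
    rank = {item: r for r, (item, _) in enumerate(
        sorted(global_counts.items(), key=lambda x: (-x[1], x[0])))}

    result = {}

    def mine(db, suffix):
        # db is a list of (item list, multiplicity) pairs.
        counts = defaultdict(int)
        for items, mult in db:
            for item in items:
                counts[item] += mult
        for item, support in counts.items():
            if support < min_support:
                continue
            itemset = suffix | {item}
            result[frozenset(itemset)] = support
            # Conditional database: transactions containing `item`,
            # restricted to items ranked strictly above it.
            cond = []
            for items, mult in db:
                if item in items:
                    kept = [i for i in items if rank[i] < rank[item]]
                    if kept:
                        cond.append((kept, mult))
            if cond:
                mine(cond, itemset)

    mine([(list(set(t)), 1) for t in transactions], frozenset())
    return result
```

The item-constraint support mentioned above would then amount to restricting which items may start or extend a projection; that extension is not shown here.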

22 April, 2011

I’m thrilled to announce that I’ll be joining Facebook’s Site Speed team in Palo Alto, California on September 26, 2011 for a 12-week internship!

After almost two months of being in contact with Facebook, I finally got the liberating call with the verdict yesterday evening: I’ve been accepted!

Backstory {#backstory}

For those of you who want to read it, here’s the full backstory.

Excitement {#excitement}

On February 24, I was contacted via the contact form on my website by Jason Sobel of Facebook. He's a member of the Site Speed team and mentioned their article about BigPipe (the technology they developed to make Facebook load twice as fast). Apparently he had come across my master thesis and my website (i.e. this website) and was interested in my work on making websites faster. Jason asked if I was up for a chat some time, to find out what I'd been working on and to give me a sense of what the Facebook Site Speed team does. There was even a mention of possibly joining Facebook: “maybe our team would be an interesting opportunity for you?”.

1 March, 2011

In December and January, I continued working on my master thesis while simultaneously preparing for my exams in January (which I passed without problems).
In a previous blog post, I had indicated that I ran into problems while parsing dates: Qt uses the system locale for this, but on Mac OS X there turned out to be a severe performance problem with that functionality. I solved that by developing QCachingLocale, which is a class that introduces a caching layer to prevent said performance degradations.

Further parsing {#further-parsing}

Now, parsing the date was of course only one tiny part of the problem: I also had to parse the episodes information embedded in each Episodes log file line (which is trivial), as well as map the IP address to a physical location and an ISP and map the user-agent string to a platform and actual browser.
Finally, we also want to map the episode duration to either duration:slow, duration:acceptable or duration:fast. This is called ‘discretization’: continuous values (in our case: durations) are mapped to discrete values.
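Such a discretization step can be as simple as two threshold comparisons. The sketch below illustrates the idea; the threshold values here are made-up assumptions, not the cutoffs actually used in the thesis:

```python
def discretize_duration(ms, fast_threshold=500, slow_threshold=2000):
    """Map a continuous episode duration (in milliseconds) to one of
    three discrete items. Thresholds are illustrative only."""
    if ms <= fast_threshold:
        return 'duration:fast'
    if ms >= slow_threshold:
        return 'duration:slow'
    return 'duration:acceptable'

# e.g. discretize_duration(3000) yields 'duration:slow'
```

Turning durations into a small set of discrete items is what makes them usable by frequent-itemset mining, which operates on categorical items rather than continuous values.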

21 November, 2010

QCachingLocale speeds up Qt’s slow QSystemLocale::query() calls by caching the answers. This seems to be particularly necessary on Mac OS X 10.6.
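Stripped of the Qt specifics, the idea behind QCachingLocale is plain memoization: wrap the slow locale back end and remember its answers, since locale data does not change between queries. A minimal Python sketch of that idea (the real class is C++ and hooks into Qt's locale system; the names below are illustrative):

```python
class CachingLocale:
    """Wrap a slow locale back end and memoize its answers.

    `backend` is any callable taking (query_type, argument); repeated
    queries with the same arguments hit the cache instead.
    """

    def __init__(self, backend):
        self._backend = backend
        self._cache = {}

    def query(self, query_type, argument=None):
        key = (query_type, argument)
        if key not in self._cache:
            self._cache[key] = self._backend(query_type, argument)
        return self._cache[key]
```

This trades a small, bounded amount of memory for skipping the expensive system call on every lookup, which is exactly what made date parsing tolerable again on Mac OS X.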

The other day I was working on my master thesis, specifically on the parser for Episodes log files. I had finished a rough version that parses all fields on an Episodes log line. Unfortunately, performance turned out to be extremely poor: 4.8 seconds to parse 1,000 lines.

After a bit of research, it became clear that the call to QDateTime::fromString() was the cause of the performance issues. Unable to figure it out on my own (I tried for an hour or so), I hopped onto the #qt IRC channel and posted a simple test case that reproduced the problem.

19 November, 2010

After almost a year since the last master thesis blog post, it’s about time to finally break the silence.

Much has happened since then.

I’ve read a lot for my literature study. It’s quite an adaptation (and a challenge!) to read virtually solely about data mining and statistics. Many of the papers were poorly written (in the typical, extremely awful, overly verbose Academic English). It’s an even larger challenge to actually write about it, in a consistent manner that’s sufficiently formal, yet also understandable.
This is also the reason I haven’t blogged about the progress of my literature study: it is so technical, abstract and complex that it is extremely unlikely that it would have piqued anyone’s interest (although it actually is very cool, sometimes). To be honest, the only thing that kept me going was the anticipation of being able to build something truly useful, possibly game-changing.

Fortunately, on June 24, 2010, at 15:00, I successfully defended the literature study of my master thesis, resulting in a score of 16/20!

17 February, 2010

In this final article in my bachelor thesis series, I explain how I proved that the work I did for my bachelor thesis (which includes the Episodes module, the Episodes Server module, the CDN integration module and File Conveyor) actually had a positive impact on page loading performance. To do that, I converted a fairly high-traffic web site to Drupal, installed File Conveyor to optimize and sync files to both a static file server and an FTP Push CDN, and used the CDN integration module to serve files from either of the two (the choice between them is based on the visitor's location, i.e. their IP address). I then measured the results using Episodes and demonstrated the positive impact using Episodes Server's charts.

Previously in this series:

16 February, 2010

In this article, I explain the rationale behind the CDN integration module for Drupal 6, which was written as part of my bachelor thesis. It supports integration with both Origin Pull CDNs (out-of-the-box) and Push CDNs (by using File Conveyor).
Note that development of version 2 of this module has already begun; it will also be ported to Drupal 7.

Previously in this series:

15 February, 2010

In this extensive article, I explain the architecture of the “File Conveyor” daemon that I wrote to detect files immediately (through each OS's file system event monitor, e.g. inotify on Linux), process them (e.g. recompress images, compress CSS/JS files, transcode videos …) and finally sync them (FTP, Amazon S3, Amazon CloudFront and Rackspace CloudFiles are supported).
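File Conveyor itself happens to be written in Python, so its detect → process → sync flow can be sketched in a few lines. This is a deliberately simplified, single-threaded illustration with made-up function names; the real daemon runs these stages concurrently with worker threads and persistent queues:

```python
def run_pipeline(detected_files, processors, transporters):
    """Toy sketch of File Conveyor's three stages.

    detected_files: file paths reported by the FS event monitor.
    processors:     chain of callables, each transforming a path
                    (recompress, minify, transcode, ...).
    transporters:   callables that sync a processed file to a
                    destination (FTP server, S3 bucket, CDN, ...).
    """
    synced = []
    for path in detected_files:        # stage 1: detection
        for process in processors:     # stage 2: processor chain
            path = process(path)
        for transport in transporters: # stage 3: sync everywhere
            synced.append(transport(path))
    return synced
```

The key architectural point the sketch preserves is the decoupling: detection, processing and transport know nothing about each other, so new processors or transporters can be plugged in independently.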

Previously in this series:


So now that we have the tools to accurately (or at least representatively) measure the effects of using a CDN, we still have to start using a CDN. Next, we will examine how a web site can take advantage of a CDN.

3 February, 2010

This weekend on Sunday, February 7, we’ll have a full day of Drupal talks at the 10th edition of FOSDEM, Europe’s biggest, free-est and open-est software conference.

FOSDEM is a free and non-commercial event organized by the community, for the community. Its goal is to provide Free and Open Source developers a place to meet. The Drupal project was granted a developer room at FOSDEM to do exactly that: to share knowledge about Drupal.

The presentation schedule for the Drupal devroom features interesting speakers such as Robert Douglass, Károly Négyesi, Roel de Meester and Kristof van Tomme, and even more interesting subjects such as mobile device design, AHAH, eID and Views 3. Everyone is invited to attend the presentations.