[2010-01-23] Spawner in the Works

I have started using Firefox 3.6 and it does feel a little faster than its predecessor, though it's definitely not as snappy as Chrome. I should note that Firefox 3.0 and 3.5 also felt this way at the time of their release, only to lose that snappiness as time wore on and successive security and stability updates rolled in. I wonder why.

For Firefox 3.5, one of the reasons could be that my perception was altered by having used Chrome and then having gone back to using Firefox. Another reason could be that the SQLite databases used by Firefox bloat over time and need to be "vacuumed" periodically. I use a simple script to do this for me, though it doesn't seem to have much of an effect on Firefox (beyond shrinking my profile's disk usage).
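For the curious, the idea behind such a script is simple enough to sketch in Python. (This is an illustration, not my actual script; the function name and the profile path shown are made up, and the path varies by platform.)

```python
import sqlite3
from pathlib import Path

def vacuum_profile(profile_dir):
    """Run VACUUM on every SQLite database in a Firefox profile directory.

    Firefox must be closed first, or the databases will be locked.
    """
    for db_path in Path(profile_dir).glob("*.sqlite"):
        # isolation_level=None puts the connection in autocommit mode,
        # which VACUUM needs (it cannot run inside a transaction).
        conn = sqlite3.connect(str(db_path), isolation_level=None)
        try:
            conn.execute("VACUUM")
        finally:
            conn.close()

# The profile path varies by OS; on Linux it is typically something like:
# vacuum_profile(Path.home() / ".mozilla" / "firefox" / "xxxxxxxx.default")
```

VACUUM rewrites each database into a new file, reclaiming the space left behind by deleted history and bookmark entries.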

Firefox also consumes a lot of memory after several hours of browsing, thus slowing it further. This is not that bad a problem for me since I close my browser and log off when I'm done at the end of the day, but some people apparently like to keep their browsers running for days on end while keeping tens of tabs open. For such people, the problem is exacerbated and using Firefox can be frustrating. They seem to have a much better experience with Chrome.

The Firefox developers have been relentlessly plugging memory leaks and otherwise improving memory usage. It is sobering to note that even if they eliminate every memory leak, they will still have to contend with memory fragmentation. They have been using a better memory allocator (jemalloc) since Firefox 3.0 that reduces fragmentation but does not eliminate it entirely. Short of using a compacting garbage collector, the only way out seems to be a multi-process architecture like that in Chrome.

While a multi-process model might seem good for UI responsiveness and for shielding against misbehaving add-ons and plug-ins, its benefit in reining in memory usage seems to be under-appreciated. That benefit was first pointed out to me by my friend Kingshuk, who noted it in the architecture of the Apache HTTP server. The MaxRequestsPerChild directive in the server configuration file controls how many requests a child server process handles before it is killed. This limits the amount of memory such a process can consume due to memory leakage. Once the process dies, the operating system reclaims all the memory that was allocated to it.
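In Apache's httpd.conf, the directive is a single line (the value 10000 here is just an illustrative choice):

```apache
# Each child process exits after serving this many requests, so any
# memory it has leaked is returned to the OS when it dies and a fresh
# child is spawned in its place. A value of 0 means never recycle.
MaxRequestsPerChild 10000
```

Setting the limit too low wastes time forking replacement processes; too high, and a leaky child can grow large before it is recycled.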

It seems that the Firefox developers are already moving in that direction with a project code-named Electrolysis. I hope that this results in a better browser for those of us who have decided to stick with Firefox.

(Originally posted on Blogspot.)
