This is the talk I gave at March's London Web Standards meet-up. It covers how we create Glow and make it a quality library. The talk has notes available for each slide, and a video will be published soon.
I’m Frances, and I’m a developer on Glow, the BBC’s open source JavaScript library. Some of you may be most familiar with me from the microformats world, or you may even have met me at Pub Standards – in which case, I apologise. I’m basically your average web nerd, and I’m based in west London, which is very convenient for work… this is more than I can say for the rest of the team…
We’ve got Jake Archibald. Comedy northerner who’ll be showing you his latest stand-up routine after I’m done. He just moved out of London and now lives somewhere near Brighton or France.
Michael Mathews – author of JsDoc Toolkit, which some of you may have heard of or be using… and he mostly works from his countrified home out of London. And then there’s…
Well, actually… that’s it. There’s only three of us. Well, to be honest, it’s not just us.
We have a product owner who is with us for about 50% of his time. He helps us with various bits of internal politics and defines high-level aspirations like “we want a library, and can we make it do carousels?”
a scrum master who bears an uncanny resemblance to Matt Berry, for a sixth of his time. Mostly, we’re left to our own devices.
So, we have various issues. There’s only three of us (building Glow 2, bug fixing and releasing Glow 1, IRC, mailing lists, learning, events). We lack face-to-face time, so we need to be efficient and run a tight ship.
We need to decide how to make what we’re making. In our case on Glow, most of this boils down to discussions on what should go into the API. It needs to be a way of working that functions over email, on IRC, or in person.
Miscommunication is the enemy here – more upset and disagreement occurs when a specification has been misunderstood, or read with a different tone, than at any other point in the process. There’s nothing worse than getting 90% of the way into building a beautiful new method, then finding out that you’ve misunderstood what the rest of the team meant. Plus, we just don’t get to see each other face to face all that often, so we need to communicate accurately and quickly over email. This is how we do it. Knowledge silos – sharing the decision making – getting a unified feel – you don’t want to be able to spot one person’s style.
Example – someone makes a suggestion for this method. It’s difficult to see what’s important – no context, no examples. Prose tends to be scan-read.
We can pick out a few key words – maybe what we want to name it, but it’s still a bit vague.
This is better – it clearly states all the necessary values and includes an example. We used to send this via email – now we put it in the source files and commit it to Git. This helps us keep a log of the changes we’ve made – someone new can pick it up.
We can highlight some of the interesting parts – and that’s most of it – there’s very little wastage. It also creates a mini-spec for that piece of work – this is what will be referred back to to check the work was done correctly. The person proposing the API isn’t necessarily the person who will write the code. Once this is all committed, the author sends a meeting request so people can have a look at it before the bun-fight.
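To make this concrete, here’s a sketch of what a JSDoc-style API proposal might look like. The method (`escapeHtml`) and its tags are hypothetical, invented for illustration – this is not a real Glow method – but it shows the parts we care about: the signature, the return value, and a copy-pasteable example.

```javascript
/**
 @name escapeHtml
 @function
 @description Escapes HTML special characters so a string is safe
              to insert into a page as text.
 @param {string} str The string to escape.
 @returns {string} The escaped string.
 @example
   escapeHtml('<b>"hi" & bye</b>');
   // returns '&lt;b&gt;&quot;hi&quot; &amp; bye&lt;/b&gt;'
*/
function escapeHtml(str) {
  // Ampersands first, so we don't double-escape our own entities
  return String(str)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

console.log(escapeHtml('<b>"hi" & bye</b>'));
```

The comment block travels with the code, so the proposal, the docs, and the implementation all live in one place under version control.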
The bun-fight is essentially what we call locking ourselves in a room and going through the proposed API. The JSDocs are printed, we take coffee and Jolly Ranchers into an office space with some pens, and we go through it line-by-line. Sometimes, even at this early stage, someone can get rather attached to an idea they’ve had, or more commonly, we just plain don’t agree, so there’s a whole lot of grumbling, maybe a few raised voices, but in theory we emerge from the meeting, in one piece, with an API the whole team not only likes, but understands. Real example – ideal if bits get crossed out – less work! Sometimes an underlying feature is identified – this API gets shelved and a new API written. Menu vs focusable.
And as a nice side effect of this process, our docs are written for us – and since they’re comments that live alongside the code, they’re also subject to our strict code review, which I’ll talk a bit more about later. We’re actually considering adding an extra layer to this where the example you can see will actually become code extracted from the unit tests, so that we know it’s a working example – copy and paste woes – and this leads me on to the next topic…
This is pretty fundamental. Well, other than writing any code at all – making sure it works is obviously something you want to do before you can ship a product. We’re very focused on test-driven development, and after we’ve done the docs, we write the unit tests. In the first instance, examples from the JSDocs are turned into unit tests; then we add additional tests and edge cases, and begin to write the code that makes those tests pass. We used to use our own test harness for unit tests, but for Glow 2 we’ve switched over to QUnit, which I’m sure some of you have seen before, but for those that haven’t, this is how it works…
QUnit is a JavaScript-based test framework from the jQuery team. We used to use our own framework, but for Glow 2 we decided to switch to something a bit more standard – and then the upkeep overhead is someone else’s problem. We did donate a new theme, though, because their old default one was somewhat of an eyesore.
If you did the previous work in getting the documentation right and agreeing what you should be making, the first thing to test is the example. Here you’ll see that we’re setting up a bunch of tests for the unwrap method – we start by letting QUnit know this by adding the module call – here you can also specify extra methods to be run before and after each set of tests.
Now we add in a new set of tests – give it a vaguely meaningful name, and here you can also say how many assertions to expect.
Add some basic tests to check that the API is correct and that unwrap is indeed a function available to us
And then we add some tests to check how the actual method behaves – does it return a NodeList as we expect, are its parents correct after it’s been unwrapped. There would obviously be a lot more tests than this for production, but testing the obvious first is the best thing to do.
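The steps above can be sketched as a self-contained example. To keep it runnable anywhere, this includes tiny stand-ins for QUnit’s `module`/`test`/`ok`/`equal` (in the real thing these come from qunit.js) and a toy `unwrap` over plain objects rather than real DOM nodes – the shape of the tests is the point, not the implementation.

```javascript
// Minimal stand-ins for QUnit's module(), test(), ok() and equal(),
// so this sketch runs standalone; in practice these come from qunit.js.
let currentSetup = null;
const results = [];

function module(name, hooks = {}) {
  currentSetup = hooks.setup || null; // run before each test in the module
}

function test(name, expected, fn) {
  const before = results.length;
  if (currentSetup) currentSetup();
  fn();
  // The expected count catches tests that silently skip assertions
  if (results.length - before !== expected) {
    results.push({ pass: false, msg: name + ': wrong assertion count' });
  }
}

function ok(value, msg) { results.push({ pass: !!value, msg }); }
function equal(actual, exp, msg) { results.push({ pass: actual === exp, msg }); }

// Toy stand-in for the method under test (NOT Glow's real unwrap):
// hoist a node up a level by pointing it at its grandparent.
function unwrap(node) {
  const parent = node.parent;
  node.parent = parent ? parent.parent : null;
  return node;
}

module('dom: unwrap', { setup() { /* build fixture nodes here */ } });

// First, basic API checks: does the function even exist?
test('API', 1, function () {
  ok(typeof unwrap === 'function', 'unwrap is a function');
});

// Then behaviour: return value and resulting parent chain.
test('behaviour', 2, function () {
  const grandparent = { parent: null };
  const parent = { parent: grandparent };
  const child = { parent: parent };
  const returned = unwrap(child);
  equal(returned, child, 'returns the node');
  equal(child.parent, grandparent, 'node now sits under the grandparent');
});

const passed = results.every(r => r.pass);
console.log(passed ? 'all tests passed' : 'failures', results.length, 'assertions');
```

With real QUnit the structure is the same: `module` groups the tests, each `test` declares its expected assertion count, and the cheap API checks run before the behavioural ones.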
We use both the API docs and these unit tests to sanity check our work. At the end of the module we have code reviews, which as a small team we’re able to do all together, but if you were in a larger team it would probably be a good place to get some people involved who may otherwise have not seen the piece of work you’ve been doing. Our code reviews are more extensive and possibly slightly non-traditional. We perform them at a computer, with the reviewee at the helm. The reviewers first read through the documentation, checking it for sense – can they guess how the implementation will work, does it return what they expected, and so on… then the unit tests are given a look over, with suggestions for possible edge-cases that may have been missed, and only then are the real lines of code looked at – and at this point, if the docs and tests are thorough enough, they’re inevitably only suggestions for performance improvements. Which leads me somewhat seamlessly to the final problem I wanted to talk about tonight…
I ran through this stuff with Jake and he suggested conflict resolution. I laughed. You will fall out sometimes, but the best I can say is: as long as you talk to each other civilly and understand that all suggestions are for the greater good, that and a few cookies and a coffee will sort it out.
This is the final thing I wanted to talk about briefly, and that’s performance. Jake gave a fantastic presentation at Full Frontal on performance in JavaScript and I recommend it, but what matters to us on Glow, and probably matters to a lot of users, is how do we *know* we’re fast – and when it comes to a library, being fast really, really matters. So… we benchmark. In Woosh.
Why Woosh? Well, it’s based on SlickSpeed, which will provide you with an overall runtime for your scripts, but for us, although those stats are interesting, it doesn’t actually help us in terms of figuring out where we suck. What we wanted was a lot more granularity, and we wanted to be able to produce tests quickly so it wouldn’t impact our development time.
Creating tests is pretty easy. Woosh provides you with all the major libraries, but you can add your own references to scripts and use those too. After that, it’s just a case of adding tests for each method and deciding what kind of test you want – Woosh offers two types.
This kind measures how long it takes to run 1,000 times – note the first parameter states how many times. We had problems with this when testing events – YUI 3 was very slow and took a long time to complete – timeouts and boredom. So we added a second kind of test…
This kind runs over a fixed amount of time – it’s best for day-to-day development, since you’re capping execution time so you’re not waiting around indefinitely.
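The difference between the two test types can be sketched in plain JavaScript. To be clear, this is not Woosh’s actual API – the function names here are invented for illustration – it just shows the two measurement strategies: fixed iterations report a total time, while a time-capped test counts how many runs fit in a budget, so a slow library can’t stall the whole suite.

```javascript
// Strategy 1: run the function a fixed number of times and report total ms.
// Slow code makes the suite slow (the YUI 3 events problem).
function timeFixedIterations(iterations, fn) {
  const start = Date.now();
  for (let i = 0; i < iterations; i++) fn();
  return Date.now() - start;
}

// Strategy 2: run the function for a fixed time budget and count runs.
// Total wall-clock time is capped regardless of how slow fn is.
function runsWithinBudget(maxMs, fn) {
  const start = Date.now();
  let runs = 0;
  while (Date.now() - start < maxMs) {
    fn();
    runs++;
  }
  return runs; // higher is faster
}

// A small piece of work to benchmark
const work = () => { let s = 0; for (let i = 0; i < 1000; i++) s += i; return s; };

const totalMs = timeFixedIterations(1000, work);
const runs = runsWithinBudget(50, work);
console.log(`1000 iterations took ${totalMs}ms; ${runs} runs fit in 50ms`);
```

The granularity comes from running a test like this per method, per library, rather than one big end-to-end timing.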
So, what do the tests actually look like when they run? I could show you some screenshots, but why not make this only a tiny bit less boring by showing you a live demo – at least there's all the fun of potential disaster. To chrome...
There’s obviously loads of other things that go into making a good JavaScript project, or really any good piece of web development work, but in only half an hour I hope I’ve given you an overview of some of the areas I think are interesting. For example, we also have a manual test framework that we’ve written, and I believe Jake will be using that to show some of the work on keyboard events in the next presentation.
And that’s me done. I’ve been Frances. I twitter as phae, and you can find everything else about me on fberriman.com. All the examples you’ve seen and a ton more are available on the Glow GitHub repo, and again, we are open source, so we’d love for you to get involved. We’re going to hold questions after Jake and the beer break, so on the off chance you’ve got some questions or comments, scribble them down on the back of a beer-mat and we’ll go over them later. Thanks a lot!