Tag Archives: open source

Leveraging Travis CI & Heroku for rapid deployment of Thimble

7 May

As we got closer to a usable mashup of Brackets and Thimble we wanted to allow other people to play with what we’d done so far.

By leveraging Travis CI hooks I was able to automate a deployment of our app to Heroku whenever we updated our main branch. This was easier than I’d anticipated (you can see the plethora of excellent documentation for more information on the process) and it also surfaced some issues that come with heavily interdependent apps:
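
The hook itself boils down to a few lines of .travis.yml configuration. Here’s a minimal sketch of the deploy section; the app name and encrypted key below are placeholders rather than our real values:

deploy:
  provider: heroku
  api_key:
    secure: "ENCRYPTED_HEROKU_API_KEY"  # placeholder; the travis CLI can encrypt a real key for you
  app: our-heroku-app-name              # placeholder app name
  on:
    branch: master                      # deploy only when this branch is updated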

1. Local is a racetrack, deployed is a city block

The reality of local development of Webmaker is that most of the pain has been removed. The excellent webmaker-suite module clones, installs and configures the Webmaker application ecosystem based on which of their core and peripheral servers you need running.

Aside from new environment variables we introduced, we never had to touch the config. All of the features that rely on tight coupling with other parts of the ecosystem, like login and publishing makes, “just worked”. Not so in deployment.

We had to accept that, at least at first, there was only so much we could expose to others for testing, and that our application would look far more incomplete than it actually was.

2. When deployed, the pitstop is miles away

An automated deployment system also meant that we were held hostage by the length of the continuous integration process if something broke on Heroku. Reverting breaking changes had a time delay not found locally, and considering our workflow (imagine Thimble as a tree with a complicated root system of dependencies and submodules) things could get pretty messy as we tracked down a problem.

Add to that the time it took to redeploy and it became clear that we had to be more mindful of what we pushed and where.

3. If local is a… drawing of a donut, then… something something bagels

The main takeaway from this process was that Heroku wasn’t at all ideal for deploying this particular application. It gave us a picture of a donut when what we wanted was the donut. What we really needed was a full deployment of the Webmaker ecosystem! So that became the next goal in our automagical deployment journey.

Demo: Thimble just levelled up

3 Feb

Ever since Mozfest of 2013, CDOT has wanted to power up Webmaker’s Thimble code editor by combining it with Adobe Brackets, a fully featured desktop code editor we’d hacked to work in a browser. Now, we have our first prototype of this combination in action!

[Screenshot: the first prototype of Brackets running inside Thimble]

Here’s the quick tl;dr:

Takeaway 1: Much features, very free

Swapping out Thimble’s analogue of a combustion engine for the analogue of space wizardry in the form of Brackets means we get all of the Brackets functionality we want with near-zero development cost. Want an example? How about inline colour previews of CSS on hover?

[Screenshot: inline CSS colour preview on hover]

You want MORE you say? Then how about an inline colour picker for that CSS?

[Screenshot: the inline colour picker]

Shiny!

Inline preview of img tags? Sure!

[Screenshot: inline preview of an img tag]

Takeaway 2: I hear you like extensions, so have some

Brackets comes with a built-in extension system. Do we need to add a feature we always wanted? Or one we lost in the transition to using Brackets? Build it as an extension. In fact, that’s what we’re doing next – replacing the features we love about Thimble with extension equivalents.
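
To give a flavour of what an extension looks like, here’s a minimal sketch along the lines of the stock Brackets “hello world” example (the command name and ID are made up for illustration):

define(function (require, exports, module) {
  "use strict";

  // Brackets exposes its internals through getModule()
  var CommandManager = brackets.getModule("command/CommandManager"),
      Menus = brackets.getModule("command/Menus");

  // A made-up command for illustration
  function handleHello() {
    window.alert("Hello from a Thimble extension!");
  }

  var HELLO_COMMAND = "thimble.sayHello"; // hypothetical command ID
  CommandManager.register("Say Hello", HELLO_COMMAND, handleHello);

  // Add the command to the File menu
  Menus.getMenu(Menus.AppMenuBar.FILE_MENU).addMenuItem(HELLO_COMMAND);
});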

Takeaway 3: Oh, that old UI element? Totally optional

The end game is a Thimble that can expose new tools to users layer by layer as they progress as developers. Maybe they don’t need a toolbar to start! Disable it. Maybe they need to use a full file tree instead of a single HTML file. Enable it! Linting code, multi-file projects, all of these are possibilities.

What do you think? Around the office, we gave it a good ole’

[Screenshot: our reaction]

How to dig through an app’s view rendering system

22 Jan

Ever come across a web application on github where all you’re concerned with is the client-side portion? Ever realize that you still need to dig through the back-end in order to understand how the hell it generates all of those client-side views? Ever give up, make a cup of tea, cry a little bit and watch hours of Looney Tunes on YouTube? Well I have. Except that last part, which I will neither confirm nor deny.

But I’ve found it does get easier. Here are some tips:

1. Locate the name of the rendering engine

How hard can this be? Pretty hard. Using Node.js apps as an example, their code can be laid out in so many unique-as-a-snowflake-isn’t-it-beautiful ways that finding the engine isn’t as easy as it sounds. But it can be done.

Look through the app’s manifest file for templating/rendering engines you recognize. If you don’t recognize anything, but you know views are generated on the server (here’s looking at you, “/views” directory!), do a global search for any of the individual views’ file names. The goal is to trace the code back to the part where the app configures its rendering engine. Once you know what they’re using, you’ll know what documentation to look up.
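
As an illustration, in an Express-based Node.js app the configuration you’re hunting for often looks something like this (the engine and paths are examples, not from any particular project):

var express = require("express"),
    app = express();

// Tells Express where the view files live and which engine renders them.
// Once you find lines like these, you know whose documentation to read.
app.set("views", __dirname + "/views");
app.set("view engine", "ejs"); // could just as easily be jade, handlebars, etc.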

2. Locate the view or fragment you need, and figure out how it connects to the main view

If the view you’re looking for is a full view, rather than a fragment, skip to the next step.

Otherwise, look through the main views and see which of them pulls in the fragment you’re looking for. Often starting with the index view is a good idea.

3. Find and read the render call, and trace where the data goes

Consider this your entry point into the views proper. Cuz it is.

For view fragments, finding where the parent view is being rendered is key. The most important variables are often passed in this step and are then propagated to view fragments from there.
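
Sticking with the Express example, the render call you’re tracing usually looks something like this (the route, view name and data are illustrative):

app.get("/", function (req, res) {
  // "index" is the parent view; everything in the object below becomes
  // available to it, and is often passed along to the fragments it includes.
  res.render("index", {
    title: "My App",
    user: req.session.user
  });
});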

4. Use this chain to understand the code you wanted to know in the first place

Now you have a direct line between app and view, and you can see what information is being passed in from here. Follow the bouncing ball, take deep breaths, and it’ll all be fine.

If not, there’s always Looney Tunes!

A Github contributor/project guide

20 Jan

I wrote a guide to using github in a workflow as part of a course last semester. Here’s a quick dump of it:

A Github Overview


This document covers my understanding of best practices using Github and git as version control and issue tracking tools. We may not need to implement all of this stuff, but the majority of it will be helpful to use consistently.

First Steps: Setting up a development environment for a new project

To follow the workflow I describe here, a couple of prerequisites have to be satisfied.

First, clone the central repository

This assumes basic git knowledge, so I won't cover the details. If you don't want to put in your Github user/pass every time you push or pull, you should set up an ssh key locally and with Github.

Then, fork the central repository

Because our work will eventually be merged into a central code repository that represents the latest official version of whatever the project is, we need a way to store our own work on Github – without affecting the central repository. The easiest way to do this is to fork a repo:

  • Navigate to the main repository on Github and click "Fork", then your account name.

Forking a repo

Selecting your user

Finally, set up your local git remotes for this project.

Git remotes are references to non-local copies of a git repository. For a useful workflow, at least two are required:

  • origin – Points to your fork of a project's repository
  • upstream – Points to the main repository for a project
  1. Rename the remote named origin to upstream with git remote rename origin upstream. By default, git sets the origin remote to point to the repository you cloned from. In this case, assuming you've followed these instructions, that will be the main repository rather than your fork.
  2. Add your fork of the repo as the origin remote with git remote add origin GIT_URL (the full command sequence is sketched below)
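
Put together, the remote setup is just a few commands (the fork URL is illustrative):

# after cloning the main repository:
git remote rename origin upstream
git remote add origin https://github.com/YOUR_USERNAME/project.git  # your fork; illustrative URL
git remote -v  # verify that origin is your fork and upstream is the main repo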

Working with issues

An issue or bug describes a specific problem that needs to be solved on a project. Some examples are: fix a crash, update documentation, or implement feature A. On occasion, issues will be so big that they could be better described as a series, or collection, of smaller issues. An issue of this type is called a meta issue, and is best avoided unless completely necessary.

Issues serve a number of non-trivial purposes:

  1. They scope a piece of work, allowing someone to take responsibility for it.
  2. They provide a place for discussion of the work, and a record of those conversations.
  3. If well scoped, they provide a high-level view of what needs to be accomplished to hit release goals.

Labels

Github provides a way to mark issues with labels, providing an extra layer of metadata. These are useful in cases that are common on multi-developer projects:

  1. Prioritization of issues, marking them as critical, bug, crash or feature (among others)
  2. Identification of blockers, by marking connected issues as blocked or blocking
  3. Calls to action, such as needs review or needs revision

Applying a label

Creating labels is fairly easy:

Creating a label

Blockers

A blocker, with respect to issues, is an issue whose completion is required before another issue can be completed. With good planning blockers can mostly be avoided, but not always.

If an issue is blocking another issue, label it as blocker and in the issue description, mark which issue it blocks:

A blocker

Likewise, if an issue is blocked, label it as blocked and mark which issue blocks it:

Blocked

Creating an issue

The line between over- and under-documenting work with issues is thin. Ideally, every piece of work should have an issue, but this relies on skillful identification of pieces of work. "Implement a feature" is a good candidate, while "add a forgotten semi-colon" probably isn't.

The key point to remember is that collaboration relies on communication, and issues provide a centralized location for discussion and review of work that is important to a project.

For this reason, as soon as you can identify an important piece of work that logically stands on its own, you should file an issue for it. Issues can always be closed if they are duplicates, or badly scoped.

After identifying a good candidate, follow these guidelines when creating an issue:

  1. Name the issue with a useful summary of the work to be done. If you can't summarize it, it's probably a bad candidate.
  2. Describe the issue properly. If it's a crash or bizarre behaviour, include steps to reproduce (STR)!

Milestones, project planning and triage

Just like issues represent a logical unit of work, milestones represent logical moments where development hits a larger target. They can be useful for prioritizing issues, and can even have a due date attached to them. They aren't always necessary, but can be very helpful when skillfully determined.

On a project you are a key member of, milestones should be discussed as a team. Triage is the act of prioritizing issues and making sure that the most important ones are addressed first; milestones can be useful in this pursuit.

While creating an issue, you can add it to a milestone easily:

Adding to a milestone

Workflow basics

A workflow is all the work, other than writing code, that goes into fixing a bug or solving an issue. The actual writing of code fits into the workflow, but it is useful to separate the ideas at first.

The steps in a workflow will logically flow from the contribution guidelines of a particular project, but a good framework can be established and applied in most cases (a command-line sketch follows the list):

  1. Claim an issue, usually by assigning yourself to it (if you have permissions) or by commenting on the issue saying you want to solve it.
  2. Create a local branch based on master, whose name indicates which issue you've selected, and what the issue covers. E.g. git checkout -b issue3-contributorGuidelines
  3. Develop your patch, and commit as needed
  4. When ready for a review, push the branch to your fork.
  5. Open a pull request against the main repository.
  6. Flag a reviewer so they can get to work reviewing your code.
  7. Follow the review process
  8. When the review is finished, condense your commits into their most logical form (see below) and force push your changes with git push origin -f BRANCH_NAME. NOTE: This will overwrite all the commits on your remote branch, so be sure you won't lose work
  9. Merge your code in if you have permissions, either on GitHub itself or through the command line.
  10. Delete your local and remote branches for the issue. You've done it!
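
In terms of raw commands, steps 2 through 8 look roughly like this (the branch name reuses the example from step 2):

git checkout master
git pull upstream master                         # start from the latest main code
git checkout -b issue3-contributorGuidelines     # step 2: branch for the issue
# ...develop your patch and commit as needed (step 3)...
git push origin issue3-contributorGuidelines     # step 4: push the branch to your fork
# open a pull request on GitHub and work through review (steps 5-8), then:
git push origin -f issue3-contributorGuidelines  # step 8: force push the condensed commits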

Good commits

Like issues, commits must be well scoped. At most you should have one commit per logical unit of work. If issues are well scoped, this means one commit per issue. The purpose of this is to make it easy to undo logically separated pieces of work without affecting other code, so you might end up with more than one commit. Aim for one as you start, and it will keep your work focused.

As a final note, a good format for your commit messages is: "Fixed #XXX – Issue summary", where XXX is the issue number. When done this way, the issue you reference will be automatically closed when the commit is merged into the repository.
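
For example (the issue number here is made up):

git commit -m "Fixed #123 - Add contributor guidelines"  # #123 is an illustrative issue number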

Opening a pull request

A pull request is a summary of the changes that will occur when a patch is merged into a branch (like master) on another repository. Opening them is easy with Github.

After pushing a branch:

Quick file

Manually:

Manual file

As always, make sure to communicate the pull request's purpose well, along with any important details the reviewer should know. This is a good place to flag a reviewer down.

The review process – having your code reviewed

During review, you and a number of reviewers will go over your patch and discuss it. When you need to make changes to the code based on a review, commit them separately from the main commits of your work for the issue. This helps preserve the comments on the pull request.

When your code has reached a point where it is ready for merging, you can combine your commits into their final form with the interactive rebase command. Interactive rebasing is a key git skill, but has serious destructive potential. Make sure to read the link in this paragraph in full before attempting it.
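
The invocation itself is short; the care is all in the editor session it opens (a minimal sketch):

# Replay your branch's commits on top of master, opening an editor where each
# commit can be kept ("pick"), folded into the previous one ("squash"/"fixup"),
# or have its message edited ("reword"):
git rebase -i master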

The review process – reviewing someone's patch

A reviewer has two important jobs, sometimes split amongst two or more reviewers:

  1. Test the code
  2. Walk through the code thoroughly, commenting on changes that should be made.

Be polite, and explain your comments if necessary. If you aren't sure about something, invite discussion. The code's quality is the point.

A major difficulty for reviewers is finding time to review when writing patches of their own. This can be mitigated somewhat by discussing it with contributors ahead of time, so you can both be working on the code at once without interrupting development of your own patches.

Comments can be made directly on code in a pull request:

Adding a comment

Proper communication on Github

Issue tracking's main appeal is providing a place to solve problems through discussion, and have that conversation available to reference from that point on. Pull requests and issues usually require some conversation. Key guidelines are mostly common-sense (respect each other, etc.) but some specific ones are:

  1. Check your github notifications at regular intervals, so people get the feedback they need.
  2. Learn GitHub's flavour of Markdown to help communicate with code examples, links and emphasis.
  3. Control expectations by being explicit about what you can and cannot handle.

MakeDrive: Bi-directional Syncing in Action

19 Jun

Our CDOT team has been hard at work developing MakeDrive’s ability to sync filesystems between browser clients. Previously, we’d demo’d the ability to sync changes in one browser code editor up to a central MakeDrive server, called uni-directional syncing.

Now, we’d like to proudly present a screencast showing MakeDrive performing bi-directional syncs between two browser client sessions, in two different browsers.

A special thanks to:

  • Gideon Thomas for his persistent work on the rsync code, allowing syncing filesystems over the web
  • Yoav Gurevich for a reliable test suite on the HTTP routes involved in the syncing process
  • Ali Al Dallal & David Humphrey for guidance, coaching and code wizardry throughout the process

[JavaScript] Introduction to JavaScript

5 Sep

This is the second in a planned series of introductory articles on JavaScript. It is not an introduction to programming, but rather a quick crash course on what makes the language unique.

Tl;Dr Takeaways:

  • JavaScript is a scripting language for the web, and more recently, the web server
  • JavaScript is loosely typed, with one variable able to store many kinds of data over its lifetime
  • JavaScript is prototypal, not classical, and handles object-oriented goals slightly differently
  • JavaScript functions are first-class citizens, making JavaScript very expressive and unusual to read for programmers new to it

What is it?

JavaScript is a multi-paradigm interpreted language that is an implementation of the ECMA-262 standard for ECMAScript, currently in its fifth iteration. Because it was designed to be used primarily as a scripting language, it contains a high level of abstraction and no direct means of managing memory.

For example, having the JavaScript engine handle all the memory management means that we can do things with variables that would otherwise be more complex:

var a; // This line creates a variable "a"

a = "this variable now contains a string";
a = 12; // This variable now contains a number
a = function thisVariableNowContainsAFunction() {};

JavaScript has some other distinct differences from classical (or class-based) languages like C++ and Java. Because JavaScript is prototype-based, it handles the object-oriented concept of inheritance differently.

Rather than instantiating a class, of which instances may or may not already exist, each object in JavaScript must inherit directly from an existing object – which becomes the prototype it is cloned from. This will be covered in detail in a later post, but here’s a quick visual example:

var q,
    Obj1 = { /* ... */ }; // Object declaration syntax,
                          // contents omitted for brevity

q = Object.create( Obj1 ); // Creates a new object with Obj1 as its prototype

A second major difference is the inclusion of functions as first-class citizens of the language. First-class citizens are types that are able to be assigned into a variable, passed as a parameter to a function, and returned as the return value of a function.

As a result, it is very common to see function calls like this one:

var q = someFunction( "a string", function( booyah ) {
  if ( booyah ) {
    alert( "woo!" );
  }
});

At this point, most JavaScript n00bz have a similar, and quite confused reaction. If you look closely, the second parameter of the call to the function called someFunction is another function, complete with the function syntax you’ve already seen.
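
The remaining piece of first-class status, returning a function from a function, is just as common. Here’s a small illustrative sketch (the names are made up):

function makeGreeter( greeting ) {
  // The returned function "remembers" the greeting it was created with
  return function( name ) {
    return greeting + ", " + name + "!";
  };
}

var greet = makeGreeter( "Hello" ); // a function stored in a variable
alert( greet( "world" ) );          // "Hello, world!"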

Why do we use it?

Originally, JavaScript was used entirely for client-side (read: web-page) programming in Netscape, and these days it basically is programming for the web. More recently though, other technologies have taken JavaScript engines out of the browser and into other settings.

NodeJS is an excellent example, allowing server-side programming in JavaScript. Combined with how expressive the language is, it’s quickly become a favourite in the web development scene and is becoming a valuable skill.

Conclusion

I’m going to stop here. Next up will be a discussion of types, and what an object really is in JavaScript.

Please feel free to leave comments if you have questions or any feedback!

Overview: Unit-testing in Node.js

17 May

This week, I discovered the joys (*cough*) of unit testing while developing my first Node.js module.  In short, a unit-test suite programmatically tests a piece of software’s design specs against its implementation.  Once set up, unit tests are intended to be reused whenever development or maintenance occurs on the software the tests were written for.  The only time the tests themselves need to change is when the design of the code they’re testing changes.

In my case, I wrote unit tests for two separate pieces of software.  This led to two slightly different applications of the idea, though coding the tests was almost exactly the same.

Unit Tests for Internal Logic

UPDATE: Thanks to Rick Eyre for an elegant and concise explanation of unit tests in white-box testing!

From Rick:

One thing though — unit testing doesn’t always have to be black-box testing. You can do white-box testing using unit tests as well. This is where you design the unit tests with an understanding of the internal mechanisms of the code. So the unit tests are designed to hit certain paths through the code, etc.

Unit testing refers to the level of test i.e. at the most atomic of levels, not the integration, or system levels.

Unit-testing is primarily a black-box testing technique – that is to say, whatever happens inside the code being tested is irrelevant.  Unit-tests are only concerned with what goes into the block of code in question, and what comes out afterwards.

As stated above, the results are checked against what is required of the block being tested.  This way, a programmer making changes to an existing program will immediately see if she broke the code, because the unit tests will fail when the black box doesn’t behave as expected.

Unit-tests can be written against the internal mechanisms of a program, but only if those mechanisms are highly reusable and unlikely to change in function – otherwise, the work required to maintain the unit-tests becomes an inefficient use of time.  Unit-tests shine the most when a software architecture is stable, and unlikely to change much.

Unit Tests for an API Service

This is the best example I could find on the value of Unit-Tests.

For the Webmaker initiative, Mozilla created a number of different services that would run on separate servers.  Their interactions were through API calls made over HTTP.  Before they were ever coded, these APIs were conceived and laid out so all the different components of the Webmaker project had the same assumptions.  If the Login server needed to check on metadata from the Metadata server, the developers needed some idea of what that would look like.

After the design spec was created, and an initial implementation was developed, I was tasked with writing tests to confirm that the API implementation of the Login server matched its design.

A basic unit test looks like this:

describe( 'basicauth', function() {
  var api = 'http://wrong:combo@' + hostAuth.split("@")[1] + "/isAdmin?id=";
  before( function( done ) {
    startServer( done );
  });

  after( function() {
    stopServer();
  });

  it( 'should error when auth is incorrect', function( done ) {
    var user = unique();

    // Create a user, then attempt to check it
    apiHelper( 'post',  hostNoAuth + '/user', 200, user, done, function ( err, res, body, done ) {
      apiHelper( 'get', api + user.email, 401, {}, done);
    });
  });
});

Each “unit” will be checking a single code-block. In this case, the functionality being tested is basic authentication (line 1). Inside each unit is at least one test, whose purpose is to confirm our assumptions about the unit being tested.

First, actions are declared to set up the environment for the test (line 3) and clean up afterwards (line 7). For this unit, it’s quite basic: Start and then stop the server.

Unit Test helper libraries are set up to make reading the tests a semantic affair – it should be somewhat straightforward. In this case, the one test in the suite (line 11) is checking that the correct error code is returned when authentication fails. In this suite of unit tests, a helper function was written to reduce the repetition of code (lines 15/16 both call it). The logic hidden by the helper function would look something like this:

  it( 'should error when auth is incorrect', function( done ) {
    // Creates a unique user
    var user = unique();

    // Call to the login server with bad credentials goes here and
    // returns data to be tested.
    var results = someCall( user );

    // Node's assert module compares the results against what we
    // expected, throwing an error if there's a problem
    assert.deepEqual( results, expectedResults );
    done(); // Signals that all tests for this unit
            // are complete.
  });

First, data necessary to test the unit is generated (line 3). This data is then passed in the call to the code block we’re testing (line 7), whose return data will be checked. Line 11 uses a special Node.js method to compare the result of the call and what the design spec says the result should be. When all “assert” statements are executed, the unit-test programmer includes “done()” to signal the end of that specific test in the unit (line 12).

By repeating this process for every possible outcome, for every possible call, we end up with a robust suite of tests that quickly highlight when something has gone wrong in the development process.

Not bad!

Howto: Working with Open Bugs on Bugzilla

9 May

It’s been a few weeks now, and I’m finally getting comfortable with how Bugzilla fits into the Webmaker team’s workflow.  To my surprise (but probably not to yours), it ended up playing a central role.

What is a bug?

First thing to do if you’re working with the Bugzilla system is to understand what a bug isn’t.

Society at large considers a bug to be a “bug-in-the-system”, or a kind of massive technical glitch – the kind that ruins marriages, abuses kittens and generally causes wide-scale havoc for everyone.  Mozilla, and Bugzilla, by comparison, consider bugs to be synonymous with tasks or problems-to-be-solved.  This is quite a distinction!  It was only when I started considering bugs to be tasks needing completion that I understood the system we use.

Why do we use them?

Considering the nature of open-source development at Mozilla, it makes sense to have a centralized way of tracking work.  It’s a key tool for accountability, and for posterity.  Need to find out where that pesky commit came from?  Want to find that obscure technical reference tucked away in a discussion about the work of yore?  Bugs my friend!

Bugs.

Wanna know what to do today?  Check the bugs!  Wanna know what other people are doing today? Check the bugs!  Finished some work and want others to know about it? Bugs bugs bugs bugs bugs.  Get it yet?

But how to use them…

If you are assigned to an existing bug, it makes the workflow quite simple.  Bugs will have (ideally) all of the information you need to tackle the task they represent right in their description, and in the discussion surrounding it.  So…

Step 1: Find it on Bugzilla

In this example, I’m going to be working with bug 869592.  The first section of an open bug (when you have proper permissions) will look like this:

[Screenshot: the first section of an open bug]

Immediately, there are a few pieces of useful information.  The title of the bug gives a quick overview of the work to be completed, while the information along the right hand side tells you who filed it and when.

Things get more interesting on the left side of this section where you can see a bunch of useful things including:

The bug’s current status, search keywords and which product it is focused on

and…

Who it’s assigned to, which bugs need to be finished before work can begin, and which bugs it blocks from completion

Step 2: Gather Information

This is the what phase of the workflow.  You have the bug in front of you, so how does it help?  Here’s what the second section of the page will look like:

Comments and comments and comments an- BUGS!

A few questions to ask yourself:

  1. Does the description (the first comment box on the page) give me the information I need to get started doing the work?
  2. Are there any attachments (supporting documents/links) that I need to check out?

If the answer to number one is no, resist the urge to ask the question on the bug by posting a comment.  This isn’t a forum: you aren’t engaging in a back and forth conversation – at least, not principally.  Instead, if you have a question, visit the #webmaker channel (or whichever one is appropriate) on Mozilla’s IRC server and ask it there.  If it’s clarification you’re needing, this is the best way to get it.

On the other hand, if the question is critical to the bug itself, still ask it on IRC! Then record the answer in the bug afterwards.  This leaves the bug as a record of important moments in the completion of the task, rather than a message board to post on.  Looking back, it will be far more useful this way.

Step 3: Getting work reviewed

So now you’re done.  Or at least, you THINK you’re done.  Unless you’re one of the supervisors of the project, you don’t get to make that claim until you’ve had at least one person look your work over and give it the acclaimed R+ OF EXCELLENCE!! 

The Mozilla review process ensures consistency and quality in our work – which is important for open-source projects to be taken seriously.  First, figure out what you want someone to review.  In the case of this example bug, I wrote code that needed to be merged into the master Mozilla repository for the project.  So, I made a pull request on github to be reviewed by someone with more experience.

To attach it to the bug, you’ll need to click on Add an attachment:

See the link? It’s in purple 🙂

Then, you’ll be presented with a screen and the following form components:

Paste a link here, or click “attach a file” to… well, you know.

Keep it as descriptive as possible

Now you need to decide who should review this attachment for the bug.  When in doubt, ask around.

 For giggles (tee-hee!) I’ll be flagging myself by selecting the “?” and then typing a tag I gave my name in my profile:

Aren’t I helpful for volunteering?

Clicking my name assigns me to be the reviewer of the attachment.

Step 4: Rinse and repeat

Eventually, your reviewer will change that “?” to a “-”, and then, eventually, to a “+” indicating you’re good to finalize your work (merge pull requests, close the bug etc.)

Speaking of which…

Step 5: Closing your bug

All finished? Good.  Go to the bug’s main page, scroll to the bottom and adjust Status to Resolved/Fixed. Like so:

[Screenshot: setting the status to RESOLVED/FIXED]

If “FIXED” doesn’t immediately appear, you don’t have the proper permissions to close a bug.  If that’s intentional, fine.  Otherwise, speak to someone on IRC to get it resolved.

And voila!  Bug closed.  Or squished, as I like to imagine it…

Welcome to Open Source Development! A Student’s Perspective

23 Apr

Hi!  If you’re like I was a couple of weeks ago, you have very little knowledge about, or experience with, how Open-Source development really works.

I mean, really works!

Even if you aren’t completely new to the concepts, you may find this blog helpful.  I’ll be overviewing the ideas behind the movement in this article, and as I post more about the workflow and tools needed to succeed in an open-source environment, you will find them linked in this paragraph and the following lists:

TL;DR

Open-Source development:

  1. …relies on clear, effective (location agnostic; read digital) communication
  2. …is iterative, preferring incremental improvement over initial polish
  3. …directs conversations and meetings into the open, forcing individual accountability
  4. …is decentralized, meaning we work with people all over the world! (see point 1)
  5. …is distributed, meaning different people are working on different tiny parts of the same project all the time.
  6. …is interconnected, meaning other people’s contributions directly affect what we are working on, sometimes on a minute-by-minute basis
  7. …is inherently difficult, requiring a diverse interpersonal and technical skillset
  8. …is inherently rewarding, giving you skills, experience and a portfolio you would never get elsewhere

Open-Source Development requires:

  1. Excellent time management
  2. Flexibility, for when requirements rapidly and successively change (and they will, even as you work)
  3. Courage, to ask the questions you need answered to move forward
  4. Focus, because there’s a lot to learn.
  5. A suite of productivity tools, because we’re only human

Now for the full version:

Free as in freedom.  As in free speech. As in free hugs.  As in…

Open-source is really a philosophy first. And, as with any system built on a philosophy, it’s much easier to understand the system if you can wrap your head around the principles it rests on.  The core open-source principles seem to be expressed in slightly different terms depending on where you look.  I’ll be exploring them [DISCLAIMER: I don’t know it all!] on my own terms, and I think they boil down like this:

  1. The Principle of Purpose (or, Community)
  2. The Principle of Openness (or, Transparency)
  3. The Principle of Contribution (or, Meritocracy)
  4. The Principle of Participation (or, Collaboration)
  5. The Principle of Iteration (or, Rapid Prototyping)

Each of these builds on the idea that came before it, and I’ll explore each of these in this post.  Remember that these are just my thoughts on the subject, so feel free to comment if you think I’ve overlooked (or misunderstood) something.

The Principle of Purpose (or, Community)

At its core, open-source is about achieving a goal together.  But in order for us to do it together, there has to be a goal we all can agree is worthwhile. This, in turn, leads to the initiatives that become open-source projects.  Each project’s goal resolves issues the community suffers from.  So, as common needs arise, the open-source response is: “Let’s fix the problem together, because together we can accomplish more!”

The Principle of Openness (or, Transparency)

Hand-in-hand with community driven action is the need for every member of that community to be able to understand everything about an open-source project if they have the time – from its deadlines, to the technology being used and even who is working on the project.  Without this level of transparency, people are unable to contribute fully because they end up missing information that, if they had possessed it, might have allowed them to contribute even more!

Openness leaves us vulnerable, with all of our mistakes and weaknesses on display for everyone involved to see. It demands a level of integrity and honesty that is difficult in modern life, but ultimately serves a greater purpose in a team.  After all, if everyone knows our weaknesses, they can then play to our strengths.  (And accomplish more!  A pattern emerges…)

The Principle of Contribution (or, Meritocracy)

I mentioned integrity, and honesty, and strengths and weaknesses and all sorts of character traits.  I did this because, at its core, open-source only respects one thing: competency.  If you can do a job that no one else can do, you should do it.  If you can’t do a job, then you should pass it on.  If you can but you need help, you should be honest about that, and see what people with more experience than you will suggest.  These are the common-sense consequences of focusing purely on the final result, and it tells us two things:

  1. Honesty about your level of ability is never punished, and
  2. Those that contribute to what needed to be done are the ones that get the credit and the responsibility.

Why those conclusions? The first one was described in the last principle but appears again because part of a meritocracy is sticking to your strengths (areas of competence).  This says nothing about our ability to learn new skills, but rather that if we can’t do the job, and someone else can, we should let them do it and be honest about it.  The second point is important, because it keeps our egos in check.

If you couldn’t/didn’t/wouldn’t contribute to something, your name simply will not be on it (credit) and you shouldn’t expect to be consulted about changes or congratulated on a job well done (responsibility).  The dark side is, those that do things have to clean up the messes they made if things go wrong.  But that’s a topic that should be saved for the last point.

The Principle of Participation (or, Collaboration)

With all this talk about ability, and a brutal respect for competency, you might think that open-source development is elitist and exclusive – but you couldn’t be further from the truth.  The fact is that when a group of people work towards a goal according to the last 3 principles, a fourth naturally emerges: no one is without value, or without the ability to contribute.  In the end, there’s always something that can be done, no matter what your ability is.  If your motivation is to help and not to get bragging rights, then you’ll always be a valuable (and valued) member of any open-source community.

Let me give you an example:

A man wants to build himself a house.  He is very good at coming up with crafty ways of solving mechanical problems, and so has come up with a blueprint.  However, he is frail and unable to build any of it himself.  At the local church/community center, he asks for help on this project and receives a flood of support.  Before he knows it, he has 10 people ready to help him, all asking what they should be doing.

If it was your house, how would you decide on each person’s role in the process?  Chances are you would find out what everyone’s strengths and weaknesses were, and assign tasks from there.  The man is good with mechanical design, so he writes the plans.  The first volunteer is excellent at managing teams of people, so the man delegates this part of the job to her.  The second volunteer is burly and strong, but not too bright.  The man, being weak and frail, is relieved and assigns him the task of hauling all the lumber they’ll need.  Eventually, the 10th volunteer says,

“I’m not very good at anything sir, I just wanted to help :)”

Would the man turn them away?  Would you?

In open-source, at the end of the day, the job must get done! If the person organizing the volunteers also has to write the blueprints, or the person hauling lumber also has to chop trees, the task will take longer.  But for every person without a distinct strength, there is a job that will allow others to do the job only they can do.  And that’s the point.

The Principle of Iteration (or, Rapid Prototyping)

Finally, we come to the most concrete of the principles.  This one says that a job well done doesn’t need to be well done right now.  To go back to my previous analogy, if it’s the dead of winter, the volunteers won’t be so fussed about the colour of the veneer trimmings in the man’s new kitchen.  Better to get the house up!  And veneer is so last season anyway…

When contributing to an open-source project, goals must be small and time-oriented.  Read: Deadlines! Like it or not, they’re here to stay and, secretly, deadlines are your friends.  ESPECIALLY in open-source!

Why?  Because open-source contributors don’t ever care about making something polished the first time.  Just get it done on time!  If it works properly, and doesn’t sacrifice good code for quicker delivery, it doesn’t need to have all the bells and whistles.   In a sense, the iterative process is the most helpful restriction you’ll ever run across and can be compared to a similar time-management truism: The 80% rule.

The 80% rule says that, if you’ve done roughly 80% of the work, it’s time to move on to the next task.  Clearly this is a guideline, but the point is the same.  Too much time can be wasted chasing perfection.

Get it done!

Conclusion

This should be enough to chew on for now.  Keep in mind that open-source development is not a straight line, and can be very demanding.  To that end, as I post more content, this article will be updated.