My first product team

8 Jan

So, in other news, nothing huge or anything, but I graduated from college (!)

So, in other other news, I’m thrilled to say that I was also able to find a job pretty much immediately (!!!).

I’m extremely thankful, and I’ll be posting a little bit more about the weight of these two facts, along with the people behind them, another day. Instead, I’d like to recount my experience of diving into a proprietary codebase (you don’t mind signing an NDA do you?) and trying to figure out the best way to learn it.

Haven’t you done this before?

My work with the Centre for the Development of Open Technology had me delving into different open-source libraries, sometimes deeply, sometimes not, but always as a new contributor. The focus then was figuring out just enough to be able to fix something urgent, or to implement some new feature. It might be that I should be taking a similar approach with my new work, but this remains to be seen.

The only time I ever intimately knew a codebase was when I either helped build it from scratch or gutted it entirely, effectively rebuilding it from scratch. This time I’m in a position where I need to be an expert across the domains our team is responsible for, and this warrants a rethink of my approach.

So what about that?

On the one hand, not much has changed. There’s still work to be done at the expense of technical debt, deep learning and refactors. On the other hand, I was hired to relieve my full-stack co-worker of the back-end side of things. I won’t be much of a relief if he has to clean up my messes when I didn’t understand the implications of the changes I was making!

It boils down to research versus product development. I’m not solving problems in uncharted lands anymore. Now it’s very much that “we know it’s possible, so get on and do it”.  Since this is a salaried position, I think it’s reasonable to expect me to learn the codebase, even if I have to put in some extra time outside standard business hours. If anything, it’s an expansion of the investment required and I’m okay with that.

My first big task is to implement a metrics collection system, similar to New Relic, for our product. This is to be an internal tool we can use as a basis for optimization. In the words of our product lead, Johnny: “To optimize, first we must quantify.”

The design process starts today. Updates to follow!

Thank you

27 May

It takes a lot to slow my mind down, to just be me.  I can be acerbic, callous and I often worry about a vein of arrogance in my personality that I can’t seem to shake. I have a big mouth too, which doesn’t help matters. I’m also fortunate and privileged beyond belief, and I’d like to take some time to calmly recognize the people and circumstances that became the shoulders I stand on.

A thank you to my mentors, seniors and patient colleagues

All of my success is owed to the people who gave me a chance, sometimes many of them. My father, Bill Sedgwick, is the first and last of them, encouraging and reprimanding me in all of my choices. His shoulders are the broadest, his perspective the clearest. Michael Ginn, a longtime friend and mentor in both life and the martial arts will take the second place. Family will always be patient, but friends have an out. Ten years and counting he never took it.

The next on my list, David Humphrey, is also the most recent. A widely respected software engineer and professor, he allowed me two chances to involve myself with Mozilla, a place where I met the likes of Pomax (Mike Kamermans), Jon Buckley, Kate Hudson and countless others. In the course of my work under Dave I tried absorbing more than just technical knowledge. His character and work ethic, which earned him an award from the Governor General of Canada, were just as humbling.

My colleagues at Seneca College’s Centre for the Development of Open Technology stayed patient with me, which I can’t imagine was easy. I think specifically of Chris DeCairos, Ali Al Dallal, Gideon Thomas and many more. My enjoyment of being outclassed largely started with them.

I have a career I love now, and it is thanks to all of you. The most precious resource is opportunity, and I’m devoting my life to paying those opportunities forward❤

 


Lessons from the battlefield for an intermediate programmer

26 May

My biggest challenge as an intermediate-level programmer is almost poetic: it’s desperately tense, and it becomes more and more obvious the further I develop in the field. I’ve reached the point where walking through code is almost second nature. No more cold sweats at what looks like a magical jump in logic, or raised blood pressure at the sight of yet another third-party API, or reinforcing the fetal-shaped imprint on my bed when a bug defies all attempts to understand it.

At this point it’s not about solving problems. It’s about efficiency. My latest piece of work was relatively meaty, with sizeable problems that needed carefully considered solutions. It also took me a week longer to solve than I would have liked, so I decided to analyze my performance and draw lessons from the experience. Here’s what I observed:

Lesson No. 1: Problems in isolation are easier to solve

Background:

The web app I’m developing this summer needed to be refactored to work with a brand new authentication server, using a completely different authentication protocol. The codebase itself is messy, test-free and highly coupled. A complete refactor (my definite preference) was out of the question since there’s simply too much real work to do to worry about technical debt.

And so I fell into my first trap. My attention was torn between implementing the new authentication protocol, and not breaking the mess of a codebase in the process. Jumping back and forth between these two goals left me confused about the causes of my errors, and mentally exhausted from the task switching.

Solution: Next time, separate the problems, solving the most basic first

  • Identify which parts of the problem are easiest to isolate
  • Solve those problems in isolation, even if it’s contrived
  • Use perfected solutions of simple problems as a basis for the more complicated ones

Benefits:

  • Attention isn’t split between different domains
  • Problems don’t “cross pollinate”, confusing the source of an issue
  • Saves time

Lesson No. 2: If you have to really know a new technology, slow down and build it out as you learn it

Background:

OAuth2 is pretty cool, and I know that now. Learning it could have gone a little faster had I used more haste and less rush. Relying on instinct, I skipped over learning fundamental terms and concepts, which led me down the wrong path. I struggled to find a good example implementation, so I tried to cobble together just enough of the concepts to implement it all at once. Not a good idea!

Solution: Patiently learn the terms and concepts first, then implement something as soon as possible

  • Small confusions snowball as the pieces come together, so be thorough in research
  • Find or create examples that further or test your understanding of the piece you don’t understand
  • Solidify the learning by implementing something that works as soon as possible, even if it’s incomplete or contrived.

Benefits:

  • Cuts down on the back-and-forth referencing of material, since you’re more familiar with the ideas
  • Surfaces misunderstandings through broken implementations, helping to solidify the trouble parts faster
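In that spirit, the “contrived but concrete” example can be tiny. Here’s a sketch of the part of OAuth2 I kept tripping over, the authorization-code exchange, built as a dry-run curl command. Every endpoint and credential here is a made-up placeholder, not our actual auth server:

```shell
set -e
# Hypothetical values standing in for a real authorization server
AUTH_SERVER="https://auth.example.com"
CLIENT_ID="demo-client"
CLIENT_SECRET="demo-secret"
CODE="code-from-the-browser-redirect"
REDIRECT_URI="https://app.example.com/callback"

# Step two of the flow: the server-side exchange of the short-lived
# authorization code for an access token. Built as a string and printed
# as a dry run; remove the echo to actually send it.
TOKEN_REQUEST="curl -s -X POST $AUTH_SERVER/token \
  -d grant_type=authorization_code \
  -d code=$CODE \
  -d client_id=$CLIENT_ID \
  -d client_secret=$CLIENT_SECRET \
  -d redirect_uri=$REDIRECT_URI"

echo "$TOKEN_REQUEST"
```

Being able to poke at each parameter in isolation like this would have surfaced my misunderstandings much earlier than a full implementation did.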

To the future

In his book Talent Is Overrated: What Really Separates World-Class Performers from Everybody Else, Geoff Colvin points out that masters of their craft approach every task in a predictable way.

First, they have specific goals aimed at besting their previous efforts as they go into a task. Second, they check in with those goals as they perform the task, making sure they’re staying focused on improvement. Finally, they reflect on the experience to build the goals for their next attempt. Adopting this was my motivation for this post, and seemed like the best response to my disappointment over my performance.

To success!

Leveraging Travis CI & Heroku for rapid deployment of Thimble

7 May

As we got closer to a usable mashup of Brackets and Thimble we wanted to allow other people to play with what we’d done so far.

By leveraging Travis CI hooks I was able to automate a deployment of our app to Heroku whenever we updated our main branch. This was easier than I’d anticipated (see the plethora of excellent documentation for more information on the process) and also surfaced some issues revolving around heavily interdependent apps:
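For reference, the hook itself is small. This is a minimal sketch of the deploy section of a `.travis.yml` using Travis CI’s Heroku deploy provider; the app name is a made-up placeholder, and the API key would be an encrypted value generated with the `travis` CLI:

```yaml
deploy:
  provider: heroku
  # Encrypted Heroku API key, generated with `travis encrypt $(heroku auth:token)`
  api_key:
    secure: "<encrypted-api-key>"
  # Hypothetical app name, not our real one
  app: thimble-staging
  on:
    branch: master
```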

1. Local is a racetrack, deployed is a city block

The reality of local development of Webmaker is that most of the pain has been removed. The excellent webmaker-suite module clones, installs and configures the Webmaker application ecosystem based on which of their core and peripheral servers you need running.

Aside from new environment variables we introduced, we never had to touch the config. All of the features tied to the tight coupling with other parts of the ecosystem, like login and publishing makes, “just worked”. Not so in deployment.

We had to accept that, at least at first, there was only so much we could expose to others for testing, and that our application would look far more incomplete than it actually was.

2. When deployed, the pitstop is miles away

An automated deployment system also meant that we were held hostage by the length of the continuous integration process if something broke on Heroku. Reverting breaking changes had a time delay not found locally, and considering our workflow (imagine Thimble as a tree with a complicated root system of dependencies and submodules) things could get pretty messy as we tracked down a problem.

Add to that the time it took to redeploy and it became clear that we had to be more mindful of what we pushed and where.

3. If local is a… drawing of a donut, then… something something bagels

The main takeaway from this process was that Heroku wasn’t at all ideal for deploying this particular application: it gave us a picture of a donut when we needed the donut itself. What we really needed was a full deployment of the Webmaker ecosystem! So that became the next goal in our automagical deployment journey.

Demo: Thimble just levelled up

3 Feb

Ever since MozFest 2013, CDOT has wanted to power up Webmaker’s Thimble code editor by combining it with Adobe Brackets, a fully featured desktop code editor we’d hacked to work in a browser. Now, we have our first prototype of this combination in action!

[Screenshot: the Brackets-powered Thimble prototype]

Here’s the quick tl;dr:

Takeaway 1: Much features, very free

Swapping out Thimble’s analogue of a combustion engine for the analogue of space wizardry in the form of Brackets means we get all of the Brackets functionality we want with near-zero development cost. Want an example? How about inline colour previews of CSS on hover?

[Screenshot: inline colour preview of CSS on hover]

You want MORE you say? Then how about an inline colour picker for that CSS?

[Screenshot: the inline colour picker. Shiny!]

Inline preview of img tags? Sure!

[Screenshot: inline preview of an img tag]

Takeaway 2: I hear you like extensions, so have some

Brackets comes with a built-in extension system. Do we need to add a feature we always wanted? Or one we lost in the transition to using Brackets? Build it as an extension. In fact, that’s what we’re doing next – replacing the features we love about Thimble with extension equivalents.

Takeaway 3: Oh, that old UI element? Totally optional

The end game is a Thimble that can expose new tools to users layer by layer as they progress as developers. Maybe they don’t need a toolbar to start? Disable it. Maybe they need a full file tree instead of a single HTML file? Enable it! Linting code, multi-file projects: all of these are possibilities.

What do you think? Around the office, we gave it a good ole’

[Screenshot: our reaction]

How to dig through an app’s view rendering system

22 Jan

Ever come across a web application on GitHub where all you’re concerned with is the client-side portion? Ever realize that you still need to dig through the back-end in order to understand how the hell it generates all of those client-side views? Ever give up, make a cup of tea, cry a little bit and watch hours of Looney Tunes on YouTube? Well I have. Except that last part, which I will neither confirm nor deny.

But I’ve found it does get easier. Here are some tips:

1. Locate the name of the rendering engine

How hard can this be? Pretty hard. Using Node.js apps as an example, the actual code can be laid out in so many unique-as-a-snowflake-isn’t-it-beautiful ways that it isn’t as easy as it appears. But it can be done.

Look through the app’s manifest file for templating/rendering engines you recognize. If you don’t recognize anything, but you know views are generated on the server (here’s looking at you “/views” directory!) do a global search for any of the individual view’s file names. The goal is to trace the code back to the part where the app configures its rendering engine. Once you know what they’re using, you’ll know what documentation to look up.

2. Locate the view or fragment you need, and figure out how it connects to the main view

If the view you’re looking for is a full view, rather than a fragment, skip to the next step.

Otherwise, look through the main views and see which of them pulls in the fragment you’re looking for. Often starting with the index view is a good idea.

3. Find and read the render call, and trace where the data goes

Consider this your entry point into the views proper. Cuz it is.

For view fragments, finding where the parent view is being rendered is key. The most important variables are often passed in this step and are then propagated to view fragments from there.

4. Use this chain to understand the code you wanted to know in the first place

Now you have a direct line between app and view, and you can see what information is being passed in from here. Follow the bouncing ball, take deep breaths, and it’ll all be fine.
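The whole hunt can be compressed into a couple of greps. Here’s a sketch using a tiny fabricated Express app (the file names and engine here are invented for the example) so the commands are runnable as-is; in real life you’d point the greps at the repository you cloned:

```shell
set -e
# Fabricate a miniature app to grep through
APP=$(mktemp -d)
mkdir -p "$APP/views"
touch "$APP/views/index.ejs"
cat > "$APP/app.js" <<'EOF'
var express = require('express');
var app = express();
app.set('view engine', 'ejs');          // the engine configuration we're hunting
app.get('/', function (req, res) {
  res.render('index', { user: 'jon' }); // the entry point into the views
});
EOF

# Step 1: search for an engine setting you recognize...
grep -rn "view engine" "$APP"

# Steps 2-3: ...or trace backwards from a view's file name to its render call:
grep -rn "index" --include="*.js" "$APP"
```

The second grep is the one I reach for most: the render call it turns up tells you exactly which data the view receives.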

If not, there’s always Looney Tunes!

A GitHub contributor/project guide

20 Jan

I wrote a guide to using GitHub in a workflow as part of a course last semester. Here’s a quick dump of it:

A GitHub Overview


This document covers my understanding of best practices using GitHub and git as version control and issue tracking tools. We may not need to implement all of this stuff, but the majority of it will be helpful to use consistently.

First Steps: Setting up a development environment for a new project

To follow the workflow I describe here, a couple of prerequisites have to be satisfied.

First, clone the central repository

This assumes basic git knowledge, so I won't cover the details. If you don't want to enter your GitHub username and password every time you push or pull, you should set up an SSH key locally and with GitHub.

Then, fork the central repository

Because our work will eventually be merged into a central code repository that represents the latest official version of the project, we need a way to store our own work on GitHub without affecting the central repository. The easiest way to do this is to fork a repo:

  • Navigate to the main repository on GitHub and click "Fork", then your account name.

Forking a repo

Selecting your user

Finally, set up your local git remotes for this project.

Git remotes are references to non-local copies of a git repository. For a useful workflow, at least two are required:

  • origin – Points to your fork of a project's repository
  • upstream – Points to the main repository for a project
  1. Rename the remote named origin to upstream with git remote rename origin upstream. By default, git sets the origin remote to point to the repository you cloned from; in this case, assuming you've followed these instructions, that will be the main repository rather than your fork.
  2. Add your fork of the repo as the origin remote with git remote add origin GIT_URL
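Put together, the remote setup looks like this. Local bare repositories stand in for the GitHub URLs so the sketch runs anywhere:

```shell
set -e
WORK=$(mktemp -d)
git init -q --bare "$WORK/central.git"  # stands in for the main repository
git init -q --bare "$WORK/fork.git"     # stands in for your fork

git clone -q "$WORK/central.git" "$WORK/project"
cd "$WORK/project"

# The clone's "origin" points at the main repository; rename it:
git remote rename origin upstream

# Then point "origin" at your fork:
git remote add origin "$WORK/fork.git"

# Both remotes are now in place:
git remote -v
```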

Working with issues

An issue or bug describes a specific problem that needs to be solved on a project. Some examples: fixing a crash, updating documentation, or implementing feature A. On occasion, issues are so big that they are better described as a series, or collection, of smaller issues. An issue of this type is called a meta issue, and is best avoided unless completely necessary.

Issues serve a number of non-trivial purposes:

  1. They scope a piece of work, allowing someone to take responsibility for it.
  2. They provide a place for discussion of the work, and a record of those conversations.
  3. If well scoped, they provide a high-level view of what needs to be accomplished to hit release goals.

Labels

GitHub provides a way to mark issues with labels, adding an extra layer of metadata. These are useful in cases common on multi-developer projects:

  1. Prioritization of issues, marking them as critical, bug, crash or feature (among others)
  2. Identification of blockers, by marking connected issues as blocked or blocking
  3. Calls to action, such as needs review or needs revision

Applying a label

Creating labels is fairly easy:

Creating a label

Blockers

A blocker, with respect to issues, is an issue whose completion is required before another issue can be completed. With good planning blockers can mostly be avoided, but not always.

If an issue is blocking another issue, label it as blocker and in the issue description, mark which issue it blocks:

A blocker

Likewise, if an issue is blocked, label it as blocked and mark which issue blocks it:

Blocked

Creating an issue

The line between over- and under-documenting work with issues is thin. Ideally, every piece of work should have an issue, but this relies on skillful identification of pieces of work. "Implement a feature" is a good candidate, while "add a forgotten semi-colon" probably isn't.

The key point to remember is that collaboration relies on communication, and issues provide a centralized location for discussion and review of work that is important to a project.

For this reason, as soon as you can identify an important piece of work that logically stands on its own, you should file an issue for it. Issues can always be closed if they are duplicates, or badly scoped.

After identifying a good candidate, follow these guidelines when creating an issue:

  1. Name the issue with a useful summary of the work to be done. If you can't summarize it, it's probably a bad candidate.
  2. Describe the issue properly. If it's a crash or bizarre behaviour, include steps to reproduce (STR)!

Milestones, project planning and triage

Just as issues represent a logical unit of work, milestones represent logical moments where development hits a larger target. They can be useful for prioritizing issues, and can even have a due date attached to them. They aren't always necessary, but can be very helpful when skillfully determined.

On a project you are a key member of, milestones should be discussed as a team. Triage is the act of prioritizing issues and making sure that the most important ones are addressed first; milestones can be useful in this pursuit.

While creating an issue, you can add it to a milestone easily:

Adding to a milestone

Workflow basics

A workflow is all the work, other than writing code, that goes into fixing a bug or solving an issue. The actual writing of code fits into the workflow, but it is useful to separate the two ideas at first.

The steps in a workflow will logically flow from the contribution guidelines of a particular project, but a good framework can be established and applied in most cases:

  1. Claim an issue, usually by assigning yourself to it (if you have permissions) or by commenting on the issue saying you want to solve it.
  2. Create a local branch based on master, whose name indicates which issue you've selected, and what the issue covers. E.g. git checkout -b issue3-contributorGuidelines
  3. Develop your patch, and commit as needed
  4. When ready for a review, push the branch to your fork.
  5. Open a pull request against the main repository.
  6. Flag a reviewer so they can get to work reviewing your code.
  7. Follow the review process
  8. When the review is finished, condense your commits into their most logical form (see below) and force push your changes with git push origin -f BRANCH_NAME. NOTE: This will overwrite all the commits on your remote branch, so be sure you won't lose work
  9. Merge your code in if you have permissions, either on GitHub itself or through the command line.
  10. Delete your local and remote branches for the issue. You've done it!
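Steps 2 through 4 condense into a handful of commands. The issue number and branch name below are invented for the example, and a local repository stands in for your fork on GitHub:

```shell
set -e
WORK=$(mktemp -d)

# A stand-in for your fork (which would normally be the "origin" remote):
git init -q "$WORK/fork"
git -C "$WORK/fork" -c user.email=dev@example.com -c user.name=dev \
  commit -q --allow-empty -m "initial"

git clone -q "$WORK/fork" "$WORK/project"
cd "$WORK/project"
git config user.email dev@example.com
git config user.name dev

# 2. Branch off master, named for the (hypothetical) issue:
git checkout -q -b issue3-contributorGuidelines

# 3. Develop the patch, committing as needed:
echo "Be excellent to each other." > CONTRIBUTING.md
git add CONTRIBUTING.md
git commit -q -m "Fixed #3 - Add contributor guidelines"

# 4. Push the branch to your fork, ready for a pull request:
git push -q origin issue3-contributorGuidelines
git log --oneline -1
```

From there, steps 5 through 10 all happen through GitHub's interface and the review process.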

Good commits

Like issues, commits must be well scoped. Ideally you should have one commit per logical unit of work; if issues are well scoped, this means one commit per issue. The purpose of this is to make it easy to undo logically separated pieces of work without affecting other code, so you might end up with more than one commit. Aim for one as you start, and it will keep your work focused.

As a final note, a good format for your commit messages is "Fixed #XXX – Issue summary", where XXX is the issue number. When done this way, the issue you reference will be automatically closed when the commit is merged into the repository.

Opening a pull request

A pull request is a summary of the changes that will occur when a patch is merged into a branch (like master) on another repository. Opening them is easy with GitHub.

After pushing a branch:

Quick file

Manually:

Manual file

As always, make sure to communicate the pull request's purpose well, along with any important details the reviewer should know. This is a good place to flag a reviewer down.

The review process – having your code reviewed

During review, you and a number of reviewers will go over your patch and discuss it. When you need to make changes to the code based on a review, commit them separately from the main commits of your work for the issue. This helps preserve the comments on the pull request.

When your code has reached a point where it is ready for merging, you can combine your commits into their final form with the interactive rebase command. Interactive rebasing is a key git skill, but has serious destructive potential. Make sure to read the link in this paragraph in full before attempting it.
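As a concrete sketch, here's the squash automated end to end in a throwaway repository. GIT_SEQUENCE_EDITOR performs the edit you'd normally make by hand in the rebase todo list, changing the review commit's pick to fixup so it folds into the main commit (fixup keeps the first commit's message):

```shell
set -e
REPO=$(mktemp -d)
cd "$REPO"
git init -q
git config user.email dev@example.com
git config user.name dev

git commit -q --allow-empty -m "initial"

echo "feature" > feature.txt
git add feature.txt
git commit -q -m "Fixed #42 - Implement feature"   # the main work

echo "feature, reviewed" > feature.txt
git add feature.txt
git commit -q -m "address review comments"         # the review follow-up

# Fold the review commit into the main one. The sed swaps line 2 of the
# rebase todo list from "pick" to "fixup" (requires GNU sed for -i):
GIT_SEQUENCE_EDITOR='sed -i "2s/^pick/fixup/"' git rebase -i HEAD~2

git log --oneline
```

In a real review you'd follow this with the force push described in the workflow steps; the history now reads as one logical commit per issue.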

The review process – reviewing someone's patch

A reviewer has two important jobs, sometimes split amongst two or more reviewers:

  1. Test the code
  2. Walk through the code thoroughly, commenting on changes that should be made.

Be polite, and explain your comments if necessary. If you aren't sure about something, invite discussion. The code's quality is the point.

A major difficulty for reviewers is finding time to review when writing patches of their own. This can be mitigated somewhat by discussing it with contributors ahead of time, so you can both be working on the code at once without interrupting development of your own patches.

Comments can be made directly on code in a pull request:

Adding a comment

Proper communication on GitHub

Issue tracking's main appeal is providing a place to solve problems through discussion, and to have that conversation available as a reference from that point on. Pull requests and issues usually require some conversation. Key guidelines are mostly common sense (respect each other, etc.), but some specific ones are:

  1. Check your GitHub notifications at regular intervals, so people get the feedback they need.
  2. Learn GitHub's markup language (a variant of Markdown) to help communicate with code examples, links and emphasis.
  3. Control expectations by being explicit about what you can and cannot handle.