Thursday, September 25, 2014

Agile Is Overripe

Haters Welcome

I wrote a blog post criticizing Scrum, and a bunch of people read it. A lot of people seemed to be talking about it too. I started regularly seeing 50+ notifications when I signed into Twitter, which was a lot for me.

There weren't a lot of people defending Scrum. Most of the tweets looked like this:

The tweets which did defend Scrum mostly looked like this example of the No True Scotsman fallacy:

I've seen this from people who are old enough to know better, including one Agile Manifesto co-author, so it's entirely possible there's a little war afoot in the world of Scrum over how exactly to define the term. Sorry, Scrum hipsters, but if there is indeed such a war, you are either losing it, or (more probably) you already lost it, years ago. I'm going to use the term as it's commonly understood; if you have an issue with the default understanding of the term, I recommend you take it up with Google, Wikipedia, and so on and so forth. I don't care enough to differentiate between Scrum Lite and Scrum Classic, because they both taste like battery acid to me.

However, I did get one person - literally only one person - telling me that Scrum actually works, and that includes planning poker:

(As it happens, it's someone I know personally, and respect. Everyone should watch his 2009 CUSEC presentation, because it's deep and brilliant.)

Another critic ultimately led me to this blog post by Martin Fowler, written in 2006:

Drifting around the web I've heard a few comments about agile methods being imposed on a development team by upper management. Imposing a process on a team is completely opposed to the principles of agile software, and has been since its inception...

a team should choose its own process - one that suits the people and context in which they work. Imposing an agile process from the outside strips the team of the self-determination which is at the heart of agile thinking.

I'm hoping to find out more, later, about what it's like when you're on a Scrum team and it actually works. To be fair, not every Scrum experience I've had has been a nightmare of dysfunction; I just think the successes owe more to the teams involved than to the process. And regarding Fowler's blog post, a lot of the people who endorsed my post seemed to do so angrily. So I would guess that many, many of these "fuck yeah" tweets came from people who had Scrum imposed on them, rather than choosing it. And therefore I think both of these areas of criticism are worth listening to.

However, of all the criticisms of my blog post that I saw, literally every single one overlooked what is, in my opinion, my most important criticism of Scrum: that its worst aspects stem from flaws in the Agile Manifesto itself.

Quoting the original post:

I don't think highly of Scrum, but the problem here goes deeper. The Agile Manifesto is flawed too. Consider this core principle of Agile development: "business people and developers must work together."

Why are we supposed to think developers are not business people?


The Agile Manifesto might also be to blame for the Scrum standup. It states that "the most efficient and effective method of conveying information to and within a development team is face-to-face conversation." In fairness to the manifesto's authors, it was written in 2001, and at that time git log did not yet exist. However, in light of today's toolset for distributed collaboration, it's another completely implausible assertion...

In addition to defying logic and available evidence, both these Agile Manifesto principles encourage a kind of babysitting mentality.

Sorry, Agile, I am in fact both a business person, and a developer, at the same time. Since my business involves computers, being competent to use them only supercharges my business mojo. This is how I achieve the state of MAXIMUM OVERBUSINESS.

More seriously, I recently started a new job at a company called Panda Strike; our CEO convinced me that the real value in the Agile Manifesto was that it facilitated a change in business culture which was actually inevitable due to a technological shift which happened first.

Moore's Law Created Agile

Agile development replaced waterfall development, an era of big design up front. In waterfall development, you gather requirements, write a spec, get approval on the spec, build your software to match that spec, then throw it over a wall to QA, and only show it to your users once you're done. It's important to realize that big design up front powered a ton of incredible success stories, including putting astronauts on the moon, plus nearly everything in software before the late 80s or early 90s, with the possible exception of the Lisp machine.

I don't want to bring back that era, but to be fair, we lost some things in this paradigm shift. And I think it's pretty easy to imagine how rapid prototyping, iterative development, and YAGNI might all be inappropriate for putting astronauts on the moon. That kind of project wouldn't fit a "design as you go" mentality. It would look like something out of The Muppet Show, except people would die.

In the very early days of computing, you'd spend a lot of time working out your algorithm before turning it into a stack of punch cards, because you wouldn't get a lot of chances to run your code; any error was very expensive.

Big design up front made an enormous amount of sense when the machinery of computing was itself enormous. But that machinery isn't enormous any more, and hasn't been for a long time. According to someone who's done the math:

a tweaked Motorola Droid is capable of scoring 52 Mflop/s which is over 15 times faster than the 1979 Cray 1 CPU. Put another way, if you transported that mobile phone back to 1987 then it would be on par with the processors in one of the fastest computers in the world of the time, the ETA 10-E, and [those] had to be cooled by liquid nitrogen.

Like all benchmarks, however, you need to take this one with a pinch of salt... the underlying processors of our mobile phones are probably faster than these Java based tests imply.

In between the day of the Cray supercomputer and the modern landscape of mobile phones which can run synthesizers good enough for high-profile album releases and live performances, there was the dawn of the personal computer. As the technology got smaller, faster, and cheaper, Moore's Law rendered a whole lot of management practices obsolete. Development cycles of two entire years were common at the time, but new teams using new technology could churn out solutions in months rather than years. PowerBuilder developers launched a revolution underneath COBOL devs starting around 1991, in the same way Rails developers later dethroned Java starting around 2005, after it became possible to build simple web apps in minutes, rather than months.

In our lifetimes, it may become possible for software-generating software to churn out new apps in seconds, rather than minutes, and if/when that occurs, the culture of the tech industry (which, by then, may be equal to the set of all industries) will need to change again. It's hard to see that far with accuracy, but as far as I know, there are basically just two ways a business culture can transform: evolution and persuasion. Evolution is where every business which ignores the new reality just fucking dies.

Persuasion is where you come up with a way to sell a new idea to your boss. This is pretty much what the Agile Manifesto was for. In the early days of Agile, the idea that your boss would force it on you was a contradiction in terms. Either you forced it on your boss, or it just didn't happen at all.

Obviously, times have changed. Quoting Dave Thomas, one of the Agile Manifesto's original authors:

The word "agile" has been subverted to the point where it is effectively meaningless, and what passes for an agile community seems to be largely an arena for consultants and vendors to hawk services and products.

So I think it is time to retire the word "Agile."

Epic Tangent: Ontology Is Overrated

One of the best tech talks I've ever heard, "Ontology Is Overrated" by Clay Shirky, covers a related topic. It's ancient in web terms, hailing from all the way back in 2005, when Flickr and del.icio.us were discovering the incredible power of tagging, something we now take for granted. The talk includes an interpretation of why Google crushed Yahoo, during the early days of Web search engines. A sea change in technology brought with it a philosophical sea change, which Yahoo ignored - even going so far as to re-establish obsolete limitations - and which Google exploited.

I'll summarize the talk, since text versions don't appear to be online any more. You can still read a summary, however, or download the original audio, which I definitely recommend. It's a talk which stuck with me for almost ten years, and I've heard and given many other talks during that time.

When you look at the Dewey decimal system, which librarians use for storing books on shelves, it looks like a top-down map of all ideas. But it fails very badly as a map of all ideas. Its late 19th-century roots often become visible.

Consider how the Dewey decimal system categorizes books on religion, in 2014:
  • 200 Religion
  • 210 Natural theology
  • 220 Bible
  • 230 Christian theology
  • 240 Christian moral & devotional theology
  • 250 Christian orders & local church
  • 260 Christian social theology
  • 270 Christian church history
  • 280 Christian denominations & sects
  • 290 Other & comparative religions
Asian religions get the number 299, and they have to share it with every tribal and/or indigenous religion in Australia, Africa, and the Americas, as well as Satanism, Discordianism, Rastafarianism, and Pastafarianism. Buddhism, however, shares the number 294 with every other religion which originated in India. So at best that's a number and a half, out of 100 available, for Asian religion, and all associated topics. Asia contains about 60% of the world's population.

As a map of all the ideas about religion, this is horribly distorted, but it's not actually a map of ideas about religion. It's really just a list of categories of physical books in the collections of American libraries.

Before Google existed, Yahoo first arose as a collection of links, and soon grew large enough to be unwieldy - at which point, Yahoo hired an ontologist and categorized its links into 14 top-level categories, creating in effect a Dewey decimal system for the web. But Yahoo innovated a little, bringing in an element of Unix. If you clicked on the top-level category "Entertainment," you'd get a "Books@" link, where the little "@" suffix served to indicate a symlink. Clicking that would land you in "Books and Literature," a subcategory of "Arts," because according to Yahoo, "Books" were not really a subcategory of "Entertainment."

Librarians use a similar workaround in their systems, namely the fractional decimals which indicate subcategories, so you can say (for example) that a book is about Asia, and about religion. These workarounds are inevitable, because (for example) books can be both literature and entertainment. Or, to be more general, categories are social fictions, and to put a book about Asian religion in the Asia category, rather than the religion category, is to say that its Asian-ness is more important than its religion-ness. The hierarchical nature of ontology means it always imposes the priorities of whichever authority or authorities created the hierarchy in the first place. But with a library, you have an excuse, because a physical book can only be in one place at a time. With web links, there's no excuse.

So, rather than applying this legacy physical-shelf-locating paradigm to a set of web pages, Google allowed you to simply search the entire web. You could never expect librarians to pre-construct a subcategory called "books which contain the words 'Minnesota' and 'obstreperous,'" but Google users in 2005 could work with exactly that subcategory any time they wanted. Flickr and del.icio.us took these ideas much further, creating ad hoc quasi-ontologies by allowing users to tag things however they wanted, and then aggregating these tags, and deriving insight from them.

(Today, unfortunately, you might not get results containing both "Minnesota" and "obstreperous" if you searched Google for those words. Google's lost a tremendous amount of signal through its use of latent semantic indexing to detect synonyms, and through other, similar compromises. This diminishes the Google praise factor in Shirky's talk, but doesn't harm his overall argument in any important way. What does suggest a possible need for revision is the emergence of filter bubbles, where companies try to pre-emptively derive user-generated categories, and then confine you to them, based on what category of user they estimate you to be. Filter bubbles thus impose a new kind of crowd-sourced ontology, which holds serious dangers for democracy.)

Anyway, although this was a fantastic talk, the main point I want to make is that Google defeated Yahoo here by recognizing the whole concept of ontology for the unnecessary, inessential historical relic that it was. Google even briefly used the DMOZ project, an open-source categorization of everything on the web - yes, this actually existed, and it started life with the name Gnuhoo, because of course it did - but dumped DMOZ because nobody even used it when they could just search instead. Ontology is overrated, and Yahoo's failure to recognize that cost them an enormous market.

The Agile Manifesto existed because developers and consultants had begun to recognize that many ideas in tech management were unnecessary, inessential historical relics. Although it opposed these ideas, it didn't even argue that they should be thrown out entirely, just that they were overrated.

Remember, waterfall development reigned supreme. The Agile Manifesto did a great thing in improving working conditions for a lot of programmers, and in achieving new success stories that would have been impossible under the old paradigm. But I can't praise the Agile Manifesto for tearing down the status quo without also acknowledging that over time, it has become the new status quo, and we will probably have to tear it down too.

Synchrony Is The New Ontology

The most obvious flaw in the Agile Manifesto is the claim that face-to-face conversation is the best way for developers to communicate. It's just not true. There's a reason we write code onto screens, rather than dictating it into microphones. Face-to-face communication has a lot of virtues, and there are certainly times when it's necessary, but it's not designed to facilitate extremely detailed changes in extremely large code bases, and tools which are designed for that purpose are often superior for the task.

Likewise, I don't want to valorize a tired and harmful stereotype here, but there's a lot of development work where you can go days without needing to talk to anyone else for more than a few moments.

In many industries, companies just do not need to have synchrony or co-location any longer. This is an incredible development which will change the world forever. Do not expect the world of work to look the same in 20 years. It will not.

It's not just programming. Overpriced gourmet taco restaurants no longer need locations.

In 2001, when the Agile Manifesto was written, Linux was already a massive success story for remote work and asynchronous development. But it was just one such story, and somewhat anomalous. In 2014, nobody on the web is building a business without open source. Since just about every open source project runs on remote work and asynchronous development, very, very few new technology companies today do not already depend on the effectiveness of remote work and async dev: these businesses would fall apart without their open source foundations, and those foundations were built with remote work and async dev.

The bizarre thing about most companies in this category, however, is that although they absolutely, literally could not exist without the effectiveness of remote work and async dev, they nonetheless require their employees to all work in the same place at the same time.

Consider that GitHub's a distributed company where a lot of people work remotely. Consider also that a lot of startups run development entirely through GitHub. This means a lot of CTOs will happily bet their companies on libraries and frameworks developed remotely, and on a product which was itself developed remotely, yet they don't do remote dev when it comes to running their own companies.

Yahoo put ontology onto its web links simply because it never questioned the common assumption that if you want to navigate a collection of information, you do so by organizing that information into a hierarchy.

Why do tech companies have offices?

In this case, the Agile Manifesto just went stale. It's just a question of the passage of time. The apps, utilities, and devices we have for remote collaboration today are straight-up Star Trek shit by 2001 standards.

In 2001, when the Manifesto was written, you could argue against Linux as a model for development in general. Subversion was still new. Java (developed at a corporation, inside office buildings) was arguably superior to Perl, which was probably the best open source alternative at the time. There weren't profitable, successful companies built this way. You could call Linux a fluke. But we have profitable, successful, remote-oriented companies today, and legions of successful open source projects have validated the model as well.

A software development process that doesn't acknowledge this technological reality is just silly.

Two-year development cycles and big design up front were to 1990s programming as ontology was to 1990s web directories. They were ideas that had to die, and Agile was right to clear them away. But that's what synchrony and co-location are today, and the Agile Manifesto advocates in favor of both.

And this synchrony thing isn't the only problem in the Agile Manifesto. I may blog in future about the other, deeper problems in the Manifesto; I already covered the "businesspeople vs. developers" problem in the Scrum post.

Monday, September 22, 2014

A Pair of Quick Animations

Five seconds or less, done in Adobe After Effects.

I made the music for this one. No special effects, just shapes, luma masks, and blending modes.

This one is mostly special effects.

Wednesday, September 17, 2014

Why Scrum Should Basically Just Die In A Fire

Conversations with Panda Strike CEO Dan Yoder inspired this blog post.

Scrum, the Agile methodology allegedly favored by Google and Spotify, is a mess.

Consider story points. If you're not familiar with Scrum, here's how they work: you play a game called "Planning Poker," where somebody calls out a task, and then counts down from three to one. On one, the engineers hold up a card with the number of "story points" which represents the relative cost they estimate for performing the task.

So, for example, a project manager might say "integrating our login system with OpenAuth and Bitcoin," and you might put up the number 20, because it's the maximum allowable value.

Wikipedia describes the goal of this game:

The reason to use Planning Poker is to avoid the influence of the other participants. If a number is spoken, it can sound like a suggestion and influence the other participants' sizing. Planning Poker should force people to think independently and propose their numbers simultaneously. This is accomplished by requiring that all participants show their card at the same time.

I have literally never seen Planning Poker performed in a way which fails to undermine this goal. Literally always, as soon as every engineer has put up a particular number, a different, informal game begins. If it had a name, this informal game would be called something like "the person with the highest status tells everybody else what the number is going to be." If you're lucky, you get a variant called "the person with the highest status on the dev team tells everybody else what the number is going to be," but that's as good as it gets.

Wikipedia gives the alleged structure of this process:
  • Everyone calls their cards simultaneously by turning them over.
  • People with high estimates and low estimates are given a soap box to offer their justification for their estimate and then discussion continues.
  • Repeat the estimation process until a consensus is reached. The developer who was likely to own the deliverable has a large portion of the "consensus vote", although the Moderator can negotiate the consensus.
  • To ensure that discussion is structured; the Moderator or the Project Manager may at any point turn over the egg timer and when it runs out all discussion must cease and another round of poker is played. The structure in the conversation is re-introduced by the soap boxes.
In practice, this "soap box" usually consists of nothing more than questions like "20? Really?". And I've never seen the whole "rinse and repeat" aspect of Planning Poker actually happen; usually, the person with lower status simply agrees to whatever the person with higher status wants the number to be.
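The nominal loop Wikipedia describes is simple enough to sketch. This is a hypothetical simulation, not anything Scrum actually ships; the team members, the deck, and the estimates are all invented for illustration:

```python
# Sketch of Planning Poker as nominally specified: simultaneous reveal,
# then (in theory) discussion by the outliers and another round, repeated
# until consensus.
DECK = [1, 2, 3, 5, 8, 13, 20]  # a common story-point card deck

def planning_poker(rounds):
    """Play successive estimation rounds until everyone shows one number."""
    for round_number, estimates in enumerate(rounds, start=1):
        assert all(points in DECK for points in estimates.values())
        shown = set(estimates.values())  # everyone turns over cards at once
        if len(shown) == 1:
            return shown.pop(), round_number
        # In theory, the high and low estimators now justify themselves and
        # the next round is played; here that's just the next list element.
    raise RuntimeError("no consensus reached")

# Round 1 diverges; round 2, after "discussion," converges on 8.
points, rounds_played = planning_poker([
    {"dev_a": 3, "dev_b": 13, "dev_c": 8},
    {"dev_a": 8, "dev_b": 8, "dev_c": 8},
])
print(points, rounds_played)  # → 8 2
```

In practice, of course, that "discussion" step is exactly where the status game described above takes over, so the loop rarely runs more than once.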

In fairness to everybody who's tried this process and seen it fail, how could it not devolve? A nontechnical participant has, at any point, the option to pull out an egg timer and tell technical participants "thirty seconds or shut the fuck up." This is not a process designed to facilitate technical conversation; it's so clearly designed to limit such conversation that it almost seems to assume that any technical conversation is inherently dysfunctional.

It's ironic to see conversation-limiting devices built into Agile development methodologies, when one of the core principles of the Agile Manifesto is the idea that "the most efficient and effective method of conveying information to and within a development team is face-to-face conversation," but I'll get to that a little later on.

For now, I want to point out that Planning Poker isn't the only aspect of Scrum which, in my experience, seems to consistently devolve into something less useful. Another core piece of Scrum is the standup meeting.

You probably know this, but just in case, the idea is that the team for a particular project gathers daily, for a quick, 15-minute meeting. This includes devs, QA, project manager(s), designers, and anyone else who will be working to make the project succeed, or who needs to stay up-to-date with the project's progress. The standup's designed to counter an even older tradition of long, stultifying, mandatory meetings, where a few people talk, and everybody else loses their whole day to no benefit whatsoever. Certainly, if you've got that going on in your company, anything which gets rid of it is an improvement.

However, as with Planning Poker, the 15-minute standup decays very easily. I've twice seen the "15-minute standup" devolve into half-hour or hour-long meetings where everybody stands, except for management.

At one company, a ponderous, overcomplicated web app formed the centerpiece of the company's Scrum implementation. Somebody had to sit to operate this behemoth, and since that was an informal privilege, it usually went to whoever could command it. In other words: management.

At another company, the Scrum decay took a different route. As with the egg timer in Planning Poker, Scrum standups offer an escape clause. In standups, you can defer discussion of involved topics to a "parking lot," which is where an issue lands if it's too complex to fit within the meeting's normal 15-minute parameters (which also include some constraints on what you can discuss, to prevent talkative or unfocused people from over-lengthening the meeting).

At this second company, virtually everything landed in the parking lot, and it became normal for the 15-minute standup to be a 15-minute prelude to a much longer meeting. We'd just set the agenda during the standup, and the parking lot would be the actual meeting. These standups typically took place in a particular person's office. Since arriving at the parking lot meant the standup was over, that person, whose office we were in, would feel OK about sitting down in their own, personal chair. But the office wasn't big enough to bring any new chairs into, so everyone else had to stand. The person whose office we were always in? A manager.

Scrum's standups are designed to counteract an old tradition of overly long, onerous, dull meetings. However, at both these companies, they replaced that ancient tradition with a new tradition of overly long, onerous, dull meetings where management got to sit down, and everybody else had to stand. Scrum's attempt at creating a more egalitarian process backfired, twice, in each case creating something more authoritarian instead.

To be fair to Scrum, it's not intended to work that way, and there's an entire subgenre of "Agile coaching" consultants whose job is to repair broken Scrum implementations at various companies. This is pure opinion, but my guess is that's a very lucrative market, because as far as I can tell, Scrum implementations often break.

I recommend just skimming the first few seconds of this.

Scrum's ready devolution springs from major conceptual flaws.

Scrum's an Agile development methodology, and one of its major goals is sustainable development. However, it works by time-boxing efforts into iterations of a week or two in length, and refers to these iterations as "sprints." Time-boxed iterations are very useful, but there's a fundamental cognitive dissonance between "sprints" and "sustainable development," because there is no such thing as a sustainable sprint.

This man's pace is probably not optimized for sustainability.

Likewise, your overall list of goals, features, and work to accomplish is referred to as the "backlog." This is true even on a greenfield project. On day 1, you have a backlog.

Another core idea of the Agile Manifesto, the allegedly defining document for Agile development methodologies: "working software is the primary measure of progress." Scrum disregards this idea in favor of a measure of progress called "velocity." Basically, velocity is the number of "story points" successfully accomplished divided by the amount of time it took to accomplish them.
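The arithmetic behind velocity is as trivial as it sounds, which is part of the problem. A minimal sketch, with invented teams and numbers, showing why the resulting figure can't be compared across teams:

```python
def velocity(completed_story_points, weeks):
    """Velocity as Scrum defines it: story points completed per unit time."""
    return completed_story_points / weeks

# Two hypothetical teams doing identical work. Team A calls a login page
# a 2; Team B calls the same page an 8. Both are "right," because story
# points are relative, made-up numbers with no shared unit.
team_a = velocity(completed_story_points=10, weeks=2)
team_b = velocity(completed_story_points=40, weeks=2)

# Management's chart now says Team B is four times as productive.
# It isn't; it just uses a bigger estimation scale.
print(team_a, team_b)  # → 5.0 20.0
```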

As I mentioned at the top of the post, a lot of this thinking comes from conversations with my new boss, Panda Strike CEO Dan Yoder. Dan told me he's literally been in meetings where non-technical management said things like, "well, you got through [some number] story points last week, and you only got through [some smaller number] this week, and coincidentally, I noticed that [some developer's name] left early yesterday, so it looks pretty easy who to blame."

Of course, musing, considering, mulling things over, and coming to realizations all constitute a significant amount of the actual work in programming. It is impossible to track whether these realizations occur in the office or in the shower. Anecdotally, it's usually the shower. Story points, meanwhile, are completely made-up numbers designed to capture off-the-cuff estimates of relative difficulty. Developers are explicitly encouraged to think of story points as non-binding numbers, yet velocity turns those non-binding estimates into a number they can be held accountable for, and which managers often treat as a synonym for productivity. "Agile" software exists to track velocity, as if it were a meaningful metric, and to compare the relative velocity of different teams within the same organization.

This is an actual thing which sober adults do, on purpose, for a living.

"Velocity" is really too stupid to examine in much further detail, because it very obviously disregards this whole notion of "working software as a measure of progress" in favor of completely unreliable numbers based on almost nothing. (I'm not proud to admit that I've been on a team where we spent an entire month to build an only mostly-functional shopping cart, but I suppose it's some consolation that our velocity was acceptable at the time.)

But, just to be clear, one of velocity's many flaws is that different teams are likely to make different off-the-cuff estimates, as are different members of the same team. Because of this, you can only really garner anything approaching meaningful insight from these numbers if you compare the ratio of estimated story points to accomplished story points on a per-team, per-week basis. Or, indeed, a per-individual, per-week one. And even then, you're more likely to learn something about a team's or individual's ability to make ballpark estimates than their actual productivity.
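That estimated-versus-accomplished ratio is also easy to sketch; again, every figure here is invented, and the point is what the number actually measures:

```python
def estimate_accuracy(estimated, accomplished):
    """Per-week ratio of story points accomplished to story points estimated."""
    return [done / est for est, done in zip(estimated, accomplished)]

# One hypothetical team, four weeks: what they committed to vs. shipped.
estimated    = [20, 18, 22, 20]
accomplished = [14, 18, 11, 20]

ratios = estimate_accuracy(estimated, accomplished)
print([round(r, 2) for r in ratios])  # → [0.7, 1.0, 0.5, 1.0]
```

A stable ratio here tells you the team estimates consistently; a noisy one tells you their guesses are guesses. Neither tells you how much working software got built.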

Joel Spolsky has an old but interesting blog post about a per-individual, velocity-like metric based on actually using math like a person who understands it, not a person who regards it as some kind of incomprehensible yet infallible magic. However, if there's anything worth keeping in the Agile Manifesto, it's the idea that working software is the primary measure of progress. Indeed, that's the huge, hilarious irony at the center of this bizarre system of faux accountability: with the exception of a few Heisenbugs, engineering work is already inherently more accountable than almost any other kind of work. If you ask for a feature, your team will either deliver it, or fail to deliver it, and you will know fairly rapidly.

If you're tracking velocity, your best-case scenario will be that management realizes it means nothing, even though they're tracking it anyway, which means spending money and time on it. This useless expense is what Andy Hunt and Dave Thomas termed a broken window in their classic book The Pragmatic Programmer - a sign of careless indifference, which encourages more of the same. That's not what you want to have in your workplace.

Sacrificing "working software as a measure of progress" to meaningless numbers that your MBAs can track for no good reason is a pretty serious flaw in Scrum. It implies that Scrum's loyalty is not to the Agile Manifesto, nor to working software, nor high-quality software, nor even the success of the overall team or organization. Scrum's loyalty, at least as it pertains to this design decision, is to MBAs who want to point at numbers on a chart, whether those numbers mean anything or not.

I've met very nice MBAs, and I hope everyone out there with an MBA gets to have a great life and stay employed. However, building an entire software development methodology around that goal is, in my opinion, a silly mistake.

The only situation I can think of where a methodology like Scrum could have genuine usefulness is on a rescue engagement, where you're called in as a consultant to save a failing project. In a situation like this, you can track velocity on a team basis to show your CEO client that development's speeding up. Meanwhile, you work on the real question, which is who to fire, because that's what nearly every rescue project comes down to.

In other words, in its best-case scenario, Scrum's a dog-and-pony show. But that best-case scenario is rare. In the much more common case, Scrum covers up the inability to recruit (or even recognize) engineering talent, which is currently one of the most valuable things in the world, with a process for managing engineers as if they were cogs in a machine, all of equal value.

And one of the most interesting things about Scrum is that it tries to enhance the accountability of a field of work where both failure and success are obvious to the naked eye - yet I've never encountered any similarly elaborate system of rituals whose major purpose is to enhance the accountability of fields which have actual accountability problems.

Although marketing is becoming a very data-driven field, and although this sea change began long before the Web existed at all - Dan Kennedy's been writing about data-driven marketing since at least the 1980s - it's still a fact that many marketers do totally unaccountable work that depends entirely on public perception, mood, and a variety of other factors that are inherently impossible to measure. The oldest joke in marketing: "only half my advertising works, but I don't know which half."

And you never will.

YouTube ads have tried to sell me a service to erase the criminal record I don't have. They've reminded me to use condoms during the gay sex that I don't have either. They've also tried to get me to buy American trucks and country music, neither of which will ever happen. No disrespect to the gay ex-convicts out there who do like American trucks and country music, assuming for the sake of argument that this demographic even exists; it's just not my style. Similarly, Facebook's "targeted" ads usually come from politicians I dislike, and Google's state-of-the-art, futuristic, probabilistic, "best-of-breed" ads are worse. The only time they try to sell me anything I even remotely want is when I've researched something expensive but decided not to buy it yet. Then the ad follows me around every web site I visit for the next month.

Please buy it. Please. You looked at it once.

Even in 2014, marketing involves an element of randomness, and probably always will, until the end of time.

Anyway, Scrum gives you demeaning rituals to dumb down your work so that people who will never understand it can pretend to understand it. Meanwhile, work which is genuinely difficult to track doesn't have to deal with this shit.


I don't think highly of Scrum, but the problem here goes deeper. The Agile Manifesto is flawed too. Consider this core principle of Agile development: "business people and developers must work together."

Why are we supposed to think developers are not business people?

If you join (or start) a startup, you may have to do marketing before your company can hire a marketing person. The same is true for accounting, for sales, for human resources, and for just about anything that any reasonable person would call business. You're in a similar situation if you freelance or do consulting. You're definitely in a better position for any of these things if you hire someone who knows what they're doing, of course, but there's a large number of developers who are also business people.

Perhaps more importantly, if you join or start a startup, you can knock the engineering out of the park and still end up flat fucking broke if the marketing people don't do a good job. But you're probably not going to demand that your accountants or your marketing people jump through bizarre, condescending hoops every day. You're just going to trust them to do their jobs.

This is a reasonable way to treat engineers as well.

By the way, despite that little Dilbert strip a few paragraphs above, my job title at Panda Strike is Minister of Propaganda. I'm basically the Director of Marketing, except that to call yourself a Director of Marketing is itself very bad marketing when you want to communicate with developers, who traditionally mistrust marketing for various reasons (many of them quite legitimate). This is the same reason the term "growth hacker" exists, but as a job title, that phrase just reeks of dishonesty. So I went with Minister of Propaganda to acknowledge the vested interest I have in saying things which benefit my company.

However, despite having marketing responsibilities, my first act upon joining Panda Strike was to write code which evaluates code. I tweaked my git analysis scripts to produce detailed profiles of the history of many of the company's projects, both open source and internal products, so that I could get a very specific picture of how development works at Panda Strike, and how our projects have been built, and who built them, and when, and with which technologies, and so on.
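I haven't published those scripts, but the core of the technique is just mining git log. Here's a rough sketch of the idea; the field choices and output format are mine, invented for illustration, not Panda Strike's actual tooling:

```shell
# Sketch: profile a repo's history from git log. Run as
# `repo_profile path/to/repo`. A minimal reconstruction of the
# technique, not the real scripts.
repo_profile() {
  cd "$1" || return 1

  echo "== commits per author =="
  git log --format='%an' | sort | uniq -c | sort -rn

  echo "== first and last commit dates =="
  git log --format='%ad' --date=short | sort | sed -n '1p;$p'

  echo "== changed files by extension (rough proxy for technologies) =="
  git log --name-only --format= |
    sed -n 's/.*\.\([A-Za-z0-9]*\)$/\1/p' |
    sort | uniq -c | sort -rn
}
```

A few minutes of this against a codebase tells you who built what, when, and in which languages - which is exactly the picture I wanted of Panda Strike's projects.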

As an aside, I first developed this technique on a Rails rescue project; it was the first thing I did there, and the arrogant, aloof CTO had no idea. On my first day, after I'd done this analysis, he introduced me to the rest of the team, telling me their names but nothing else about them. I recognized the names from my analysis of the git log, though. I noticed that the number one JavaScript committer had a cynical and sarcastic expression, that most of the team had three commits or fewer, and that the number one Ruby committer wasn't anywhere in the building.

This CTO who had told me nothing then said to me, "OK, dazzle me." As you can imagine, I did not dazzle him. I fired him. (Or, more accurately, I and my colleagues persuaded his CEO to fire him.)

Anyway, the whole point of this is simple: there's absolutely no reason to assume that a developer is not a business person. It's a ridiculous assumption, and the world is full of incredibly successful counterexamples.

The Agile Manifesto might also be to blame for the Scrum standup. It states that "the most efficient and effective method of conveying information to and within a development team is face-to-face conversation." In fairness to the manifesto's authors, it was written in 2001, and at that time git log did not yet exist. However, in light of today's toolset for distributed collaboration, it's another completely implausible assertion, and even back in 2001 you had to kind of pretend you'd never heard of Linux if you really wanted it to make sense.

Well-written text very often trumps face-to-face communication. You can refer to well-written text later, instead of relying on your memory. You can't produce well-written text unless you think carefully. Also, technically speaking, you can literally never produce good code in the first place unless you produce well-written text. There are several great presentations from GitHub on the value of asynchronous communication, and they're basically required viewing for anybody who wants to work as a programmer, or with programmers.

In fact, GitHub itself was built without face-to-face communication. Basecamp was built without face-to-face communication as well. I'm not saying these people never met each other, but most of the work was done remotely. Industrial designer Marc Newson works remotely for Apple, so his work on the Apple Watch may also have happened without face-to-face communication. And face-to-face communication plays a minimal role in the majority of open source projects, which usually outperform commercial projects in terms of software quality.

In addition to defying logic and available evidence, both these Agile Manifesto principles encourage a kind of babysitting mentality. I've never seen Scrum-like frameworks for transmuting the work of designers, marketers, or accountants into cartoonish oversimplifications like story points. People are happy to treat these workers as adults and trust them to do their jobs.

I don't know why this same trust does not prevail in the culture of managing programmers. That's a question for another blog post. I suspect that the reasons are historical, and fundamentally irrelevant, because it really doesn't matter. If you're not doing well at hiring engineers, the answer is not a deeply flawed methodology which collapses under the weight of its own contradictions on a regular basis. The answer is to get better at hiring engineers, and ultimately to get great at it.

I may do a future blog post on this, because it's one of the most valuable skills in the world.

Credit where credit's due: the Agile Manifesto helped usher in a vital paradigm shift, in its day.

Sunday, August 31, 2014

iOS 6 CSS Turns Futura Into Futura Condensed Extra Bold

If you're seeing this happen, no, you're not going crazy; I've seen it too, on both Chrome and Safari. (I haven't tested if it happens with other operating systems or browsers, although I may later.)

The threshold for this effect is font-weight: 500. At that weight, font-family: "Futura" will indeed produce Futura; at font-weight: 501 and above, font-family: "Futura" will actually produce Futura Condensed Extra Bold.
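To make that concrete, here's the sort of CSS that trips it (the class names are mine, purely for illustration):

```css
/* Renders as actual Futura on iOS 6. */
.futura-ok {
  font-family: "Futura";
  font-weight: 500;
}

/* One unit heavier, and iOS 6 silently swaps in
   Futura Condensed Extra Bold instead. */
.futura-surprise {
  font-family: "Futura";
  font-weight: 501;
}
```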

By the way, at any weight, font-family: "Futura Condensed Extra Bold" will produce Times New Roman. I'm not sure why; the charitable explanation is ignorance.

Tuesday, August 26, 2014

My Name Is Giles, And I Am A Panda

Reg Braithwaite occasionally chastises himself for writing about writing code, instead of just writing code. This makes an easy opportunity for those of us (like me) who enjoy trolling him.


Let's call Mr. Braithwaite's bluff. I don't think this tweet is true, but it could be. You can do this in bash:
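(The original snippet is lost, so what follows is a reconstruction of roughly how it might have worked; the function body and the sass line are my guesses.)

```bash
# A computer that responds when addressed by its full name.
# Reconstruction, not the original snippet.
MacBook() {
  if [ "$1" = "THELONIUS" ] && [ "$2" = "Air" ]; then
    shift 2
    "$@"    # run whatever command follows the name
  else
    echo "That's not my name." >&2    # the sassback feature
    return 1
  fi
}
```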

Not the most elegant code, of course. And I only tested this with vi /etc/hosts vs. MacBook THELONIUS Air vi /etc/hosts, so it may fail on other commands. I'd expect it to have issues with output redirection, so this might actually be a superior implementation:
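My guess at the superior implementation is the trailing-space alias trick: when an alias value ends in a blank, bash also checks the next word for alias expansion, so the machine's whole name just evaporates before the real command runs. (Again, a reconstruction, not the original.)

```bash
# Each word of the name expands to nothing; the trailing space in each
# alias value tells bash to alias-expand the following word as well.
shopt -s expand_aliases   # needed in scripts; interactive shells have it on

alias MacBook=' '
alias THELONIUS=' '
alias Air=' '

# Now `MacBook THELONIUS Air vi /etc/hosts` reduces to `vi /etc/hosts`,
# and redirections survive, because no function ever wraps the command.
```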

But this approach will give you false positives with no middle name supplied, and costs you the vital sassback feature.

If you want to become a better programmer, you could do worse than to just pick somebody like @raganwald, follow them on Twitter, and then, any time they complain about not having a particular tool or product, implement that tool or product. Of course, some complaints would be harder to resolve than others.

Despite the golden opportunity a tweet like this presents — i.e., I could look like a genius if I went off and implemented HTML5 with type-checking — I wanted to reply to it by telling Reg to go and implement this himself. Less effort, better trolling, so: pure win.

However, I decided not to troll Reg about this, since I kind of troll him too often (for instance, consider this blog post). Instead, I phrased it as general advice and got a bunch of retweets out of it.

The remark about "pop culture" is a reference to an Alan Kay interview that both Reg and I frequently reference:

...computing spread out much, much faster than educating unsophisticated people can happen...

the lack of a real computer science today, and the lack of real software engineering today, is partly due to this pop culture...

If you look at software today, through the lens of the history of engineering, it’s certainly engineering of a sort—but it’s the kind of engineering that people without the concept of the arch did...

A commercial hit record for teenagers doesn’t have to have any particular musical merits. I think a lot of the success of various programming languages is expeditious gap-filling. Perl is another example of filling a tiny, short-term need, and then being a real problem in the longer term. Basically, a lot of the problems that computing has had in the last 25 years comes from systems where the designers were trying to fix some short-term thing and didn’t think about whether the idea would scale if it were adopted. There should be a half-life on software so old software just melts away over 10 or 15 years.

(Just as an aside, the absence of this half-life is one major difference between programming languages and spoken languages. Where human languages naturally morph over time, computer languages can only morph through their formal definitions, making them weirdly immortal.)

It was a different culture in the ’60s and ’70s; the ARPA (Advanced Research Projects Agency) and PARC culture was basically a mathematical/scientific kind of culture and was interested in scaling, and of course, the Internet was an exercise in scaling. There are just two different worlds, and I don’t think it’s even that helpful for people from one world to complain about the other world—like people from a literary culture complaining about the majority of the world that doesn’t read for ideas. It’s futile.

I don’t spend time complaining about this stuff, because what happened in the last 20 years is quite normal, even though it was unfortunate. Once you have something that grows faster than education grows, you’re always going to get a pop culture.

To recap, I was saying that all you need to do is dig through the great old ideas of the earlier, more rigorous culture and you'll find pure gold. Of course, @raganwald had already got there:

This is especially relevant in any discussion of Alan Kay, because one very interesting thing about Alan Kay is that he and Steve Jobs frequently predicted that the same technologies (and the same uses of those technologies) would take over the world. But Alan Kay never seemed hugely interested in when those technologies would take over, while Steve Jobs was obsessed with exactly that question. That, in a nutshell, is why Alan Kay is a name known only to serious programmers, while Steve Jobs is a name known to everyone.

When the Mac first came out, Newsweek asked me what I [thought] of it. I said: Well, it’s the first personal computer worth criticizing. So at the end of the presentation, Steve came up to me and said: Is the iPhone worth criticizing? And I said: Make the screen five inches by eight inches, and you’ll rule the world. — Alan Kay

And this is as good a time as any to demystify another of my tweets:

I'm wearing a panda costume because I've joined a company called Panda Strike.

I've known the CEO, Dan Yoder, for years through the Ruby community in Los Angeles. In fact, I met him through the ruby-lang mailing list back in 2007, when I was trying to put together a Ruby users' group. And in 2008, LA Ruby got off the ground, with help from Dan's company and mine, but let's get back to the main theme here. Dan got to a particular future a little early.

Around the time Sinatra emerged, Dan had just written a web framework called Waves, which had a terse API sort of similar to Sinatra's, but was built on the foundation of a deeper understanding of the web and HTTP. (Panda Strike co-founder Matthew King also worked on Waves.) Here's a very superficial comparison of Sinatra and Waves. Both the Sinatra and Waves code examples are setting up a handler for hitting the /show URL with a GET request.
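The original side-by-side snippets were images and are gone, so here's an approximation of the two shapes. The toy dispatcher is mine, a sketch to make the contrast runnable; it is not actual Sinatra or Waves code, and the Waves method names are from memory.

```ruby
# Toy router standing in for both frameworks. A sketch of the two API
# shapes, not real framework code.
ROUTES = {}

# Sinatra-style: URL-first. The route itself is the organizing unit.
def get(path, &handler)
  ROUTES[[:get, path]] = handler
end

get "/show" do
  "showing"
end

# Waves-style (approximate): resource-first. You declare a resource,
# and the HTTP method + path handlers hang off of it.
class Resource
  def initialize(name)
    @name = name
  end

  def on(http_method, action, &handler)
    ROUTES[[http_method, "/#{action}"]] = handler
  end
end

def resource(name, &block)
  Resource.new(name).instance_eval(&block)
end

resource :thing do
  on :get, :show do
    "showing"
  end
end
```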

You'll notice in this example that while the on :get, :show syntax works in terms of an HTTP method and a path, which is pretty similar to Sinatra's get "/show", the object model starts with a resource, which is a thing which handles URLs. Waves had a models directory, like Rails, but it had a resources directory too, and that directory played a more central role. Waves was a resource-first framework, whereas you could think of Rails as a model-first framework, and consider Sinatra a URL-first microframework.

Waves and Sinatra both had these routing methods where you would define a URL handler by specifying first an HTTP method and next a path. Both frameworks disregarded MVC in favor of a URL-matching approach. That almost qualifies as an example of multiple discovery, the phenomenon where multiple people make the same scientific discoveries and/or inventions at roughly the same time.

Even though these APIs are only a little different, the Sinatra approach had a serious advantage in its aggressive simplicity. But it kind of skipped the whole issue of resources, and that core idea at the root of Waves, that a framework should put resources first, was ahead of its time. In recent years, it's kind of become a thing to break overly model-focused legacy Rails apps into resource-oriented services.

I think Waves was delivering a future just a little too early. But Waves is Ruby history now. Today Panda Strike (which includes several other developers and a third co-founder, devops lead Lance Lakey) runs mostly on CoffeeScript and Node.js.

Any controversy will have to wait for now.

I'll probably get into these tech choices in future blog posts, and Dan already has.

  • CoffeeScript syntax matches JavaScript semantics better than JavaScript syntax does.
  • It's great to share code between client and server.
  • It's great when your network programming APIs are based on sockets and streams.

Anybody who's read my blog in recent years has seen me growing disillusioned with Rails. I think Rails is making the wrong moves with pretty much everything Rails's creator David Heinemeier Hansson has blogged about in recent years, from client-side JavaScript to hypermedia APIs to TDD.

I think Panda Strike's taking an approach to the web which is more modern, more relevant, and more innovative, both in terms of the projects we're using, and the ones we're creating. I think we're going to deliver some futures right on time. I'm planning to blog about that some more, soon.

Thursday, August 21, 2014

Humans Need Not Apply

Thursday, July 31, 2014

The Bizarre Bazaar: Who Owns Express.js?

A GitHub Drama

After abandoning Node.js for Go, TJ Holowaychuk apparently made his separation official by selling off the branding and official GitHub "ownership" of his Express framework to StrongLoop, a Node.js company whose projects include software services, consulting services, support, and free software. (StrongLoop's CEO, by the way, is no stranger to the concept of businesses based on free software, having previously sold his startup Makara to Red Hat and developed Red Hat's OpenShift product - Red Hat being the company which pioneered open source business models.)

As an aside, I'm often disturbed by how many things GitHub is these days.

The latest Node.js drama undermines my tweeted theory, because much of the drama unfolds on GitHub. So maybe GitHub's a Twitter which used to be an emacs?

Anyway, here's the history. If my retelling fails at fairness, apologies to all involved.

First, StrongLoop announced the sponsorship on its blog. A major Express contributor immediately filed an issue on GitHub: "This repo needs to belong in the expressjs org." The discussion that unfolded there is interesting (although currently locked), but here's a summary: Holowaychuk transferred ownership to StrongLoop without either asking or informing the Express community beforehand. StrongLoop's been committed to Node.js for a good while now, and hopes to support Express with documentation and continued development. However, the Express community may have taken over for Holowaychuk some time ago, so there's some contention over whether or not the "ownership" of the project was legitimately his to transfer in the first place.

An angry blog post argues that it was not:

When TJ Holowaychuk lost interest in maintaining Express, he did the right thing (for a change) by letting others take over and keep it going. In open source, that meant the project was no longer his, even if it was located under his GitHub account – a common practice when other people take over a project.

Keeping a project under its original URL is a nice way to maintain continuity and give credit to the original author. It does not give that person the right to take control back without permission, especially not for the sole purpose of selling it to someone else...

What makes this particular move worse, is the fact that ownership was transferred to a company that directly monetizes Express by selling both professional services and products built on top of it. It gives StrongLoop an unfair advantage over other companies providing node services by putting them in control of a key community asset. It creates a potential conflict of interest between promoting Express vs. their commercial framework LoopBack (which is built on top of Express).

This move only benefits StrongLoop and TJ Holowaychuk, and disadvantages the people who matter the most – the project active maintainers and its community.

Holowaychuk responded with a blog post of his own, pointing out that he had communicated with Doug Wilson of the Express community, asking Wilson if he'd like some of the proceeds of the deal:

My intent was to share said compensation with Douglas since he has been the primary maintainer on Express lately. I signalled that intent by emailing him...

I don't want to wade into the drama here, which is why I've made an effort here to be dispassionate and objective. I'm totally happy to let that shake out however it shakes out. But I have to admit that I think there's a really interesting question at the heart of all this: who owned the Express web framework? Was it really Holowaychuk's to sell?

I find this question interesting because it reminds me of a totally wrong theory I cooked up recently: that being free is what ruined the Ruby web framework Rails.

Totally Wrong Theory: Being Free Ruined Rails

I've previously argued that the Rails/Merb merge was a mistake, and that Rails went off the rails. I came up with my new, totally wrong theory when I was trying to figure out how the Rails/Merb merge happened in the first place.

Before I get into it, I want to point out that one of the major flaws in my theory here is that Rails isn't actually ruined. As I said, the theory is a totally wrong theory (and being totally wrong is obviously another one of its flaws). But I want to explore the idea to illustrate some of the flaws in the purist, old-school definitions of open source software. Because I don't think that theory is correct, either.

That theory comes from Eric Raymond's The Cathedral and the Bazaar, which provides a great statement of the classic concept of what open source is, and what open source means. This essay, and the book it later became, first articulated the idea that "with enough eyeballs, all bugs are shallow," and laid out 19 rules of open source development. For example, "the next best thing to having good ideas is recognizing good ideas from your users. Sometimes the latter is better." Or, "release early. Release often. And listen to your customers."

The Cathedral represents a software development model where developers build code in private and release it in public. The Bazaar represents a model where all development occurs in public. Raymond argues for the Bazaar over the Cathedral. I don't know how development worked in Express, or how it will proceed now, but Rails uses a hybrid model, where the majority of development occurs in public, yet certain decisions happen in private.

Many other projects use this model as well. (Obviously, in the case of Express, the decision to sell sponsorship occurred in private.)

The Rails/Merb merge is one example of a major decision which occurred in private. There was no public debate, just a sudden announcement, with a big thank you to the Merb team for all the free help that would get Rails 2 to Rails 3. But free help isn't always free.

37Signals (now Basecamp) have long advocated turning down unnecessary feature requests, and Rails creator David Heinemeier Hansson took the idea to absurd lengths with his description of Rails as an "omakase" framework. But one explanation for the Rails/Merb merge is that EngineYard said "we'll pay for Rails 3 to happen, as long as Rails 3 is also Merb 2," and members of Rails core forgot their own advice about turning down unnecessary feature requests because, for once, the unnecessary feature requests came along with the offer of (unnecessary) free work.

To be clear, the feature requests, and the free work to support them, were unnecessary in my opinion, but not in the opinion of the people who made the merge happen. I'm going to make an attempt to be objective regarding Express, but when it comes to Rails, that train has already sailed. It's my belief that the Rails/Merb merge brought Rails an incomplete but ambitious modularity it didn't actually need, and that there's an inherent irony here, because Mr. Hansson vigorously and scornfully opposed adding a different kind of modularity to Rails apps: stuff like moving business logic out of Rails models and into simple Ruby objects, moving application logic out of Rails entirely and treating it as a library instead of a framework, and wrapping Rails's ActionMailer system in simpler API calls like Email.send.

Good Modularity and Bad Modularity

The general theme: how to unfuck a monorail. Many Rails developers wrestle with this theme, but Mr. Hansson seems (in my opinion) to dismiss it categorically and without any significant consideration. (Indeed, so many Rails developers wrestle with this issue that I think it's fair to call it a crucial moment in the lifecycle of most Rails apps.)

Some of these things are a lot easier to do because of the Rails/Merb merge, yet it's interesting to contrast Mr. Hansson's hostility to these ideas with his embrace of the merge. On the one hand, we saw claims of a powerful modularity that either failed to materialize or which proved useful to only a few people.

On the flip side, Rails's creator seemed pretty contemptuous of people who created simpler, more practical forms of modularization to suit the needs of their individual applications. It's a fascinating contradiction: the developer who once lambasted "architecture astronauts" attacked pragmatic modularization with very immediate causes, while championing an abstract modularity with less obvious usefulness.

I think this was an error in judgement, and I think it happened because the work seemingly came for free. Because why else would a team famous for ignoring feature requests happily embrace an incredibly ambitious set of feature requests?

Managing open source frameworks takes time. Writing code takes time; discussing pull requests takes time; and running a private chat room for your "open source" project takes time.

To unpack that last statement, gaining access to the private Rails core Campfire is a key step in becoming a member of Rails core:

Yehuda gave me access to control the LightHouse tickets and to the Rails CampFire...The fact that I was invited to be a part of the Rails Core Team really surprised me. It was unexpected until I read Yehuda in CampFire saying that the guys with commit access should join the core team after the release of Rails 3 and David was OK with that.

Here's where Rails operates as a hybrid between the Cathedral and the Bazaar. Its core team's private Campfire chat functions as a Cathedral, but its GitHub activity functions as a Bazaar.

The Bizarre Bazaar

The Cathedral and the Bazaar argues that the Bazaar is superior because no one person is smarter than a community of smart people, and because nobody can craft a One True Design™ which is better suited to a problem space than the design which will emerge if you allow lots of people to work on the problem.

This is obviously very different from the "omakase" philosophy of Rails. And there are benefits to that philosophy. When Rails first came on the scene, it seemed like an Apple product - impeccably designed, shaped by the kind of singular focus no community could ever achieve. No community would ever decide in aggregate to shape a project around REST in 2006, or to unilaterally replace JavaScript with CoffeeScript. Communities typically suck at boldness, as well as beautiful user experience, and one of Rails's greatest innovations was treating the developer's user experience as one of the most important aspects of its design.

Yet the "omakase" philosophy also created a community which operates on the foundation of an unspoken shared disregard for the community's alleged leadership. The sign of an experienced Rails developer is a weird duality; a skilled Rails dev knows the recommendations of Rails core, and ignores or contradicts most of them. As Steve Klabnik said, Rails really has two default stacks, the "omakase" stack and the "Prime" stack, which could also be described as the official stack and the stack which is the default for everybody except 37Signals and utter newbies. There is something just deeply, dementedly messed-up about a community where following best practices, or believing that the documentation is correct, are both sure signs of cluelessness.

Rails is not the only open source project to feature this half-Cathedral, half-Bazaar hybrid. (You could call it a bizarre Bazaar.) Ember works in a similar way, and Cognitect's transit-ruby project features the following disclaimer in their README:

This library is open source, developed internally by Cognitect. We welcome discussions of potential problems and enhancement suggestions on the transit-format mailing list. Issues can be filed using GitHub issues for this project. Because transit is incorporated into products and client projects, we prefer to do development internally and are not accepting pull requests or patches.

(This disclaimer, of course, did not prevent people from filing pull requests anyway, one of which was unofficially accepted.)

Sidekiq & Sidekiq Pro

I believe 37Signals and EngineYard both have funded some of Rails's development, and that they're far from alone in this. I know ENTP did the same when I worked for them, and I believe that's also true of Thoughtbot, Plataformatec, several other companies, and of course a staggering number of independent individuals. I'm certain Twitter directly funded some of the work on Apache Mesos, and that Google indirectly funded it as well by contributing to Berkeley's AMP Lab, where Mesos originated. While "open source" was the opposite of corporate development when the idea first swept the world, today most successful open source projects have seen a company, or several companies, pay somebody to work on the project, even though the project then gives the work away for free.

It's an amazing evolution in the economics of software, and something I think everybody should be grateful for.

However, I know of an alternate model, and I have to wonder how Rails might have handled the Merb merge differently, if it had been using this model instead. This is the Sidekiq and Sidekiq Pro model.

In his blog post How to Make $100K in OSS by Working Hard, Mike Perham wrote:

My Sidekiq project isn’t just about building the best background processing framework for Ruby, it’s also a venue for me to experiment with ways to make open source software financially sustainable for the developers who work on it hundreds of hours each year (e.g. me)...

When Sidekiq was first released in Feb 2012, I offered a commercial license for $50. Don’t like Sidekiq’s standard LGPL license? Upgrade to a commercial license. In nine months of selling commercial licenses, I sold 33 for $1,650...

In October last year I announced a big change: I would sell additional functionality in the form of an add-on Rubygem. Sidekiq Pro would cost $500 per company and add several complex but useful features not in the Sidekiq gem...

In the last year selling Sidekiq Pro, I sold about 140 copies for $70,000. Assuming I’ve spent 700 hours on Sidekiq so far, that’s $100/hr. Success! Sales have actually notched up as Sidekiq has become more popular and pervasive: my current sales rate appears to be about $100,000/yr.

If I recall correctly, when he wrote this blog post, Perham was also working full-time as Director of Infrastructure at an ecommerce startup. His blog now lists his job as Founder and CEO of Contributed Systems, whose first product family consists of Sidekiq and Sidekiq Pro. Perham seems to have discovered a really effective model for funding open source software.

What if Rails had used this model? I like to think there's an alternate universe where this happened; where 37Signals gave away Rails for free, and charged a licensing fee for an expanded, more powerful version called Rails Pro.

Rails & Rails Pro

I like to imagine that in this alternate universe, when people wrestled with the paralyzing monorail stage of the Rails app lifecycle, Mr. Hansson and the other members of the Rails core team would have had no choice but to listen to their users, because their business depended on it. I also like to imagine that in this alternate universe, a Merb merge would not have been possible. The financial incentives to think carefully before accepting feature requests, even when they arrive in the form of code, would have been stronger.

But this business model raises a whole bunch of questions, because so many people and companies contributed so much time and effort to make Rails in the first place. Would they have done the same, in this alternate universe? It's one thing when you're contributing to a project "everybody" owns, and another when you're contributing to somebody else's business. (Sidekiq certainly sees a lot of contributions, but Perham does most of the work, and I can't currently peek at the contrib graph for Sidekiq Pro.)

And consider: What happens if Mike Perham wants to sell Sidekiq and Sidekiq Pro? For that matter, what happens if 37Signals wants to sell their interest in Rails? And what if Express had been using this business model? Can you hand off your semi-open-source, semi-commercial project for somebody else to run?

From the "About Us" section on the home page for Mike Perham's company Contributed Systems:

We believe that open source software is the right way to build systems; building products on top of an open source core means the software will be maintained and supported for years to come.

Contributed Systems is a play on the computer science term "distributed systems" and the fact that we allow anyone to contribute to our software.

Sidekiq's popular for a reason: it's really good. And if Sidekiq Pro accepts contributions just like Sidekiq does, then it's neither really open source nor closed source, but more like "gated community source." (Because it ships as a Ruby gem, my guess is that the source is visible to anyone who has it installed, but you have to pay to get that access in the first place.)
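If Sidekiq Pro works the way most commercial gems do, "gated community source" is literally a matter of Bundler configuration: you point Bundler at a private gem server and authenticate with credentials you get when you pay. Here's a hypothetical Gemfile sketch; the private server URL is an assumption for illustration, not Contributed Systems' actual setup:

```ruby
# Gemfile — a sketch of installing a paid, "gated community" gem.
# The private source URL (gems.example.com) is hypothetical; a real
# vendor documents their own server and credential scheme.

source "https://rubygems.org"

gem "sidekiq" # the LGPL core, public on rubygems.org

# The paid add-on lives on the vendor's private gem server. Credentials
# usually come from `bundle config` rather than the Gemfile itself, e.g.:
#   bundle config gems.example.com <your-license-key>
source "https://gems.example.com" do
  gem "sidekiq-pro"
end
```

Note that once installed, the gem's Ruby source sits unencrypted in your bundle, which is exactly the "open source for those who have access" situation: nothing stops a paying customer from reading or patching it, only from getting it without paying.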

There's an enormous mess of contradictions here. Software development is one of the few industries in the entire United States that's reliably making money right now. And yet its foundation is this basically communist idea that everybody will contribute to the greater good. The idea that you can sell sponsorship and/or ownership of a project, as with TJ Holowaychuk and StrongLoop, really exposes these contradictions.

Law Is Hard, Let's Go Hacking

Perham's "gated community" model might be the best approach. Most open source licenses prefer to avoid these issues by disavowing any and all responsibility entirely. That's simpler, but I doubt it's as sustainable. Here's the MIT License:

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


Translation: "no money changes hands, and you can do anything you want as long as you acknowledge authorship, but we take no responsibility at all for anything which happens, so don't ask us for shit."

I am not a lawyer. If you're a lawyer reading this, I have a question for you: would disavowing any and all warranties still even be possible under the law if money had changed hands?

In a similar vein, I don't think any true Bazaar exists, in the sense of Eric Raymond's metaphor, because it's customary in open source projects to yield final decision-making power to whoever started the project, and to refer to that person as the project's Benevolent Dictator for Life. (If Holowaychuk had any real right to sell Express, this might be where it came from.) That one individual person's final decision-making process is inherently closed, and could only be truly open if we all developed the power of telepathy. I think this "BDFL" custom exists because it's much easier to skirt the issue of the contributors' social contract than it is to define anything more specific.

(Pirate Party founder Rick Falkvinge talks about this extensively in his book Swarmwise, which is essentially about how to use the development model of open source software for political purposes instead. He makes the point that adding a formal voting process to a chaotic, ad hoc organization is most likely to alienate the people who would otherwise become the organization's most productive members, because highly productive contributors are not typically fans of overly bureaucratic process.)

Communist Capitalism or Capitalist Communism?

The open source movement dates back as far as the late 1970s, although at that time it was known as the free software movement, and that is actually a different thing. Whereas the free software movement sees software transparency as a requirement for a free society, open source seeks to fit the superior utility of open development practices into a business framework.

The "open source" label was created at a strategy session held on February 3rd, 1998 in Palo Alto, California, shortly after the announcement of the release of the Netscape source code. The strategy session grew from a realization that the attention around the Netscape announcement had created an opportunity to educate and advocate for the superiority of an open development process...

The conferees also believed that it would be useful to have a single label that identified this approach and distinguished it from the philosophically- and politically-focused label "free software."

Open source projects very often use a communist methodology for capitalist purposes. There are times when this duality is tremendously entertaining; for instance, any time a Linux sysadmin tells you "Communism doesn't work," you get a free joke. Likewise, you get a free joke any time somebody tells you that Linux proves all software should be open source, and the joke is the user interface for Blender, an open source 3D graphics and animation package with notoriously incompetent UX. I think it's extremely likely that the only way to produce good software is to balance capitalist interests against a communist methodology, and if I'm correct about that, it would certainly qualify as one of the many reasons software is inherently hard to get right. The inherent tension between these two forces is tremendous. The drama around Express's transfer of ownership springs from that tension.

I'd love to give you a pat answer to the question of who owns Express.js, but I think it's a big question.

Update: Mike Perham wrote me to say that his customers can contribute to Sidekiq Pro, and that 5-10 customers have, although he has to own the copyright, to keep the publishing/licensing issues from being insane.

Saturday, June 28, 2014

Real Talk

Sir Francis Walsingham (c. 1532 – 6 April 1590) was principal secretary to Queen Elizabeth I of England from 20 December 1573 until his death, and is popularly remembered as her "spymaster"...

Walsingham was driven by Protestant zeal to counter Catholicism, and sanctioned the use of torture against Catholic priests and suspected conspirators...Walsingham tracked down Catholic priests in England and supposed conspirators by employing informers, and intercepting correspondence. Walsingham's staff in England included the cryptographer Thomas Phelippes, who was an expert in deciphering letters and forgery, and Arthur Gregory, who was skilled at breaking and repairing seals without detection.

Book burning was common in this Elizabethan police state...

Shakespeare's England: It is a land forced into major cultural upheaval for the second time in ten years. It is a society divided by intolerance, a population cowed beneath the iron fist of a brutal and paranoid Police State. It is an unequal society of great wealth and unimaginable poverty...

And just to be clear, I'd probably vote for Hillary in 2016. I'm just saying, if you think it's bad that America hasn't yet caught up to where England was in 1979, you've underestimated the scope of the problem.