How to define Developer Productivity

I believe in the beauty of simplicity. A couple of years ago, in my old team, we faced the problem of measuring how much work we could fit into a single sprint. Having a tangible estimate was important to us: it let us ensure a constant flow of features, avoid overpromising, keep a sense of accomplishment, and fit in a bit of needed technical work. After some back and forth we settled on a very simple, if not simplistic, model: counting the number of tasks we completed per sprint. And it worked splendidly. Thanks to our consistency in planning activities we were splitting the work into tasks of a more or less similar size. This effectively enabled the ‘Law of large numbers’ for us, and therefore we could predict our capacity quite reliably. This short backstory shows how much you can achieve with simple methods that are cheap and easy to implement. How does this relate to developer productivity? Before answering this question, I invite you to join me on a short detour.

Without a shadow of a doubt we can say that IT is one of the most rapidly developing industries, and it still has great prospects. In the early 2000s IT was competing with Financials in the S&P 500 index for the title of the biggest economic sector. Since claiming that title in 2010, IT has successfully established itself as the leader, increasing its share of the index to more than 25% over the last 10 years (leaving Financials at only 10%). Just when expectations of another dot-com-style bubble burst started to grow, 2020 came with the COVID pandemic. Over the next 18 months we saw unprecedented growth in the IT sector, which by some estimates sped up digital transformation by as much as 5 years.

Yet Developer Productivity doesn’t get a fraction of the attention it deserves.

You can find some papers and research on the topic of Developer Productivity, but it’s a wasteland compared with the studies run on customer behaviour or the cosmos. I’m baffled by how little attention this topic gets. We all know that the products delivered by IT can have a powerful impact on other sectors of the economy, dramatically increasing their productivity. Every single percent of improvement in Developer Productivity has the potential to generate a compound effect on the output of IT, and of the whole economy by extension.

What’s the reason behind researchers steering clear of this topic? I think there are two main problems. The first, most obvious one, is the low appeal of researching IT compared to topics that are easier for the average Joe to understand. Can you imagine having a chat over Sunday dinner with your spouse about the effect containerisation had on the IT industry? The second problem is the sheer amount of time needed to comfortably follow the discussions between Subject Matter Experts, which can be truly daunting. Interestingly, I observed the same trend when hiring Product Owners for internal products, built by developers for developers. The majority of experienced people prefer to work in a domain where it’s easy for them to impersonate or empathise with the client. Only the most ambitious and truly exceptional Product Owners are up to the challenge presented by internal products.

Why does it matter to know Developer Productivity in your company?

The main argument for investing time in understanding Developer Productivity in your company is continuous improvement. Without knowing what productivity currently is and where it was, you cannot reliably determine whether the changes introduced in your organisation are making things better or worse. Without that feedback loop, how can you make better decisions in the future, or decide whether to keep the changes you’ve trialled? An excellent example is provided by the COVID pandemic. Many, if not all, companies in the IT industry were pushed to adopt a remote working model for their developers. We are (hopefully) close to the end of the pandemic now, and you have to decide whether your people should return to the office. How do you want to make this decision if you don’t know what the productivity level was when working in an office, or how it changed after going remote for more than a year? What will happen if you make the wrong decision? Will you lose your workforce, or be outrun by the competition? Managers in companies that invested early in understanding their Developer Productivity are in a much better position now to make this choice.

I’m sold. So, how to define Developer Productivity?

Great, let’s cut to the chase. The main problem we have is the lack of a proper definition of what Developer Productivity is. The sparse research in the area doesn’t help, of course. We know neither how to define it, nor how to measure it. To address this issue we will challenge ourselves to come up with something simple yet impactful.

Let’s start defining Developer Productivity with a general definition of Productivity. Following Investopedia:

“Productivity, in economics, measures output per unit of input, such as labor, capital, or any other resource. […] At the corporate level, productivity is a measure of the efficiency of a company’s production process, it is calculated by measuring the number of units produced relative to employee labor hours or by measuring a company’s net sales relative to employee labor hours.”

This definition provides a solid basis for our discussion, comparing output to effort. We can easily define the effort needed in the development space to produce output, through the perspective of developers’ time or money spent on wages and tools. For the sake of simplicity we will pick time spent. What’s difficult here is the definition of output. Measuring a company’s net sales is great, but useless for Developer Productivity. It can be affected by various factors outside of the developers’ influence (sales, ads, market trends), rendering it far too broad for our case.

This leaves us with the “number of units produced”, following the Investopedia definition. With a few notable exceptions in the IT space, like software houses and body-leasing companies, a “number of units” related to software arguably makes little to no sense without additional context. Taking a look at the Wikipedia definition of Programming Productivity doesn’t help either:

“Productivity traditionally refers to the ratio between the quantity of software produced and the cost spent for it.”

What’s the “quantity of software”? The word “traditionally” yields a hint: back in the day, the notion of productivity counted in Lines of Code (LOC) was a thing. Nowadays, though, you can achieve the same result in many different ways, utilising tons of different technologies that abstract away complexity. The LOC for different implementations of the same feature may be vastly different. Following the old definition, one could argue that a developer solving a problem in 100 lines of code in 10 hours was less productive than a counterpart writing 1000 lines in the same time. This argument can be quickly disproved: the less code you have, the easier the maintenance is (assuming the same quality and readability). Therefore the first solution is better, as it doesn’t harm your future productivity.

Defining the ‘quantity’ in relation to software

Each company has some Software Development Life Cycle (SDLC). It may or may not be documented, but the very fact of having software in production proves that there is an SDLC. It usually consists of discovery, implementation, delivery and analysis phases. I believe that measuring how many cycles you are able to complete in a certain time period is currently the best definition of ‘quantity’ we can use.

There are some caveats you should be aware of before using SDLC cycles as your measure.

  1. The input needs to be homogeneous. If your cycles are very different the data will be all over the place, thus rendering this method useless. You need to be consistent in the standards set for each of the phases in SDLC, producing the cycles of similar “complexity”.
  2. The cycles need to be frequent enough to draw conclusions. From my experience, to have anything close to a meaningful metric you need at least a hundred cycles each quarter, but generally the more the better.
  3. Watch out for gamification of metrics. Depending on the company culture, people may game the results: splitting the work into smaller capabilities to artificially bump the metric, or purposefully coupling pieces together to send a message through the company that they are unhappy. Generally you should track your team’s productivity and improve it, but it shouldn’t be set as a goal for you and your team by a manager.
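To illustrate the first two caveats, here is a minimal sketch (with hypothetical data and an assumed homogeneity threshold) that checks whether a quarter’s worth of cycles is frequent and uniform enough to trust, using cycle duration as a rough proxy for “complexity”:

```python
from statistics import mean, stdev

# Hypothetical input: duration in days of each SDLC cycle completed this quarter.
cycle_durations = [4, 5, 3, 6, 4, 5, 7, 4, 5, 6]

MIN_CYCLES_PER_QUARTER = 100   # caveat 2: enough data points to draw conclusions
MAX_COEFF_OF_VARIATION = 0.5   # caveat 1: homogeneity threshold (an assumption)

def is_metric_trustworthy(durations):
    """Return (ok, reason) for a quarter's worth of cycle durations."""
    if len(durations) < MIN_CYCLES_PER_QUARTER:
        return False, f"only {len(durations)} cycles; need {MIN_CYCLES_PER_QUARTER}+"
    cv = stdev(durations) / mean(durations)  # coefficient of variation
    if cv > MAX_COEFF_OF_VARIATION:
        return False, f"cycles too heterogeneous (CV={cv:.2f})"
    return True, "looks usable"

ok, reason = is_metric_trustworthy(cycle_durations)
print(ok, reason)
```

The thresholds are only a starting point; the important part is refusing to draw conclusions from too few, or too dissimilar, cycles.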

In some cases you may not be able to measure cycles, for example due to low accessibility of the data. The next best thing is rollouts. A rollout starts with a Pull Request, and ends when the change is applied to all relevant environments. It omits the discovery and analysis phases of the SDLC, so to have a complete picture you should also track the number of epics/features/capabilities delivered. If rollouts are not within reach, you can always count the number of Pull Requests and Deployments to get any tracking in place, and improve your metrics over time.
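Counting rollouts can be sketched like this, assuming hypothetical records with a PR-opened timestamp and per-environment deployment times; a rollout counts towards a quarter only once the change has reached every relevant environment within that quarter:

```python
from datetime import datetime

# Hypothetical rollout records: when the PR was opened, and when each
# environment received the change.
rollouts = [
    {"pr_opened": datetime(2021, 4, 2),
     "deployed": {"staging": datetime(2021, 4, 3), "prod": datetime(2021, 4, 5)}},
    {"pr_opened": datetime(2021, 5, 10),
     "deployed": {"staging": datetime(2021, 5, 11)}},  # prod still pending
]

REQUIRED_ENVS = {"staging", "prod"}  # "all relevant environments" (assumed)

def completed_rollouts(records, start, end):
    """Rollouts whose change reached every required environment in [start, end)."""
    done = []
    for r in records:
        if not REQUIRED_ENVS <= r["deployed"].keys():
            continue  # change not yet applied everywhere
        finished = max(r["deployed"][env] for env in REQUIRED_ENVS)
        if start <= finished < end:
            done.append(r)
    return done

q2 = completed_rollouts(rollouts, datetime(2021, 4, 1), datetime(2021, 7, 1))
print(len(q2))
```

In practice the timestamps would come from your Git hosting and deployment tooling rather than hand-written records.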

Putting everything together

By this definition, Developer Productivity is the number of SDLC cycles delivered relative to employee hours worked. Having the metric defined and implemented, you can start collecting the data and consciously improve, selecting the best ideas out there. I also recommend visualising your SDLC from a Systems Thinking perspective, as it helps to locate the wait time in the cycle and focus on the initiatives that matter most for improving Developer Productivity.
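The definition boils down to a simple ratio that you can track per quarter, for example to compare office and remote periods. A minimal sketch, with entirely hypothetical numbers:

```python
def developer_productivity(cycles_delivered, employee_hours):
    """SDLC cycles delivered per employee hour worked."""
    return cycles_delivered / employee_hours

# Hypothetical quarters: (cycles delivered, total employee hours worked).
quarters = {
    "2019-Q4 (office)": (120, 19_200),  # e.g. 40 developers * 480 hours
    "2020-Q4 (remote)": (135, 19_200),
}

for label, (cycles, hours) in quarters.items():
    p = developer_productivity(cycles, hours)
    print(f"{label}: {p * 1000:.2f} cycles per 1000 employee hours")
```

The absolute number matters far less than its trend over time, which is what lets you judge whether a change helped or hurt.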

Interested in productivity in IT? Follow me on Twitter or Medium for future articles on productivity myths and good and bad practices in this space.