Applying Line Impact to the Real World
To fill out the story of how Line Impact works (and why it matters), here are some examples tailored to our various constituencies.
Line Impact is built to measure the degree to which a commit evolves the code base. In practice, it's often easiest for developers to think of Line Impact as corresponding to the "cognitive load" of a commit.
For example, if you copy/paste a long function, you might change 1,000 lines of code, but that effort requires negligible cognitive load, so it scores negligible Line Impact. In contrast, fixing a bug on a single line inside a five-year-old function requires a high degree of cognitive load. Line Impact puts those two commits on equal footing, in spite of the disparity in "lines changed."
Developers who first discover Line Impact often wonder what kind of "magic" is used to refine "lines of code" into cognitive load. This article describes in detail how that happens. tl;dr: no magic is necessary, just systematically ignoring or downweighting the ~95% of changed lines that don't correspond to cognitive load.
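To make the idea concrete, here is a minimal sketch of that filtering-and-downweighting approach. The line categories and weights below are illustrative assumptions for this article, not GitClear's actual classification rules:

```python
# Hypothetical weights: most changed lines are ignored or downweighted
# before anything is counted. Categories and values are illustrative.
WEIGHTS = {
    "moved_or_copied": 0.0,   # copy/pasted or relocated code: negligible load
    "whitespace": 0.0,        # formatting-only churn
    "generated": 0.0,         # lockfiles, build artifacts
    "trivial": 0.2,           # import tweaks, renames
    "substantive": 1.0,       # logic that required real thought
}

def line_impact(changed_line_categories):
    """Sum the weight of each classified changed line in a commit."""
    return sum(WEIGHTS[category] for category in changed_line_categories)

# A 1,000-line copy/paste scores zero, while a few substantive
# changes score highly:
pasted = ["moved_or_copied"] * 1000
bug_fix = ["substantive"] * 3
print(line_impact(pasted))   # 0.0
print(line_impact(bug_fix))  # 3.0
```

The point of the sketch is the shape of the computation, not the numbers: once low-signal lines carry zero or near-zero weight, raw line counts stop dominating the result.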
Unlike code metrics provided by competing products, Line Impact isn't a black box. If you want to dig in and get a visceral understanding of how Line Impact is calculated, try our diff viewer. It lets you probe the specifics of how your work translates to Line Impact. If you find commits whose impact doesn't match your expectations, your manager can refine how Line Impact is derived for your project.
As an engineer, here are three specific ways Line Impact works to benefit you:
- Prove (specifically, with data) how meetings and other disruptions to your flow state result in less work getting done.
- Substantiate why a particular ticket is taking more time to get done than was originally estimated (i.e., because it requires more work than had been projected).
- Discover which types of code (views, models, tests, components, documentation, etc.) you are most effective at relative to your peers. Focus on one as a specialty, or use the information to diversify your range of expertise.
Imagine if you had to judge your Sales team by how well-liked they were among their peers, rather than by the deals they closed. The incentive for an employee would be to invest their spare energy into deepening political alliances. In effect, this is how most engineering teams operate today: absent quantified data on coding performance, salary reviews are driven largely by peer opinion. You would prefer that their incentive instead be to write more, higher-quality code.
This is why a single, reliable metric that corresponds to meaningful code output has been called the "holy grail" of engineering management. We built Line Impact to be that metric. It allows developers to cut out the "middle man," so they can advance their careers by "simply" being great at writing code, even if they weren't born with expert social skills.
Ostensibly, Line Impact works by measuring the "meaningful" lines of code contributed by each developer. But our research indicates that only about 5% of all lines of code are material to the task being pursued. How can we be confident that a metric is reliable when it needs to work with many different technology stacks? And how do we factor in the variability of what matters to a particular engineering team?
The secret sauce that makes Line Impact trustworthy is that it's not a "one size fits all" solution. Its biggest "feature" is its flexibility. Your technical leadership team can empirically tune Line Impact so that it matches their "ground truth" about the most effective developers in your organization. Even if you're working in an "ancient" 20-year-old code base, this flexibility means your measurements can be just as accurate as if you were measuring work done on a brand-new mobile app.
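One way to picture that tuning is as a set of per-category multipliers a team can override until the metric matches their ground truth. The file categories, default values, and function below are hypothetical, a sketch of the idea rather than the actual configuration surface:

```python
# Hypothetical per-file-type multipliers; a team overrides the defaults
# until the metric agrees with their sense of who their most effective
# developers are. All categories and values here are illustrative.
DEFAULT_MULTIPLIERS = {"source": 1.0, "tests": 0.8, "docs": 0.5, "config": 0.3}

def tuned_impact(raw_impact_by_type, overrides=None):
    """Apply per-category multipliers, letting a team override defaults."""
    multipliers = {**DEFAULT_MULTIPLIERS, **(overrides or {})}
    return sum(raw * multipliers[ftype]
               for ftype, raw in raw_impact_by_type.items())

commit = {"source": 40.0, "tests": 20.0, "docs": 10.0}
print(tuned_impact(commit))                  # default weighting: 61.0
print(tuned_impact(commit, {"tests": 1.2}))  # team that values tests more: 69.0
```

Because the multipliers live in plain configuration rather than a black box, the same calibration process works whether the code base is 20 years old or brand new.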
As a CEO, here are three specific ways Line Impact can benefit you:
- Experiment with company policies (e.g., work from home) confidently, knowing that you can directly compare the output of developers in the control case vs test case.
- The average company adopting code metrics has seen a 5-15% increase in measurable engineering throughput. Multiply that by your engineering budget.
- Normalized measurement of code output across different technology stacks gives company-wide transparency into how the pace of technology output is changing over time.
Ready to try it for yourself? Request a demo or start a free 15-day trial now.