Developer and Team Productivity

Being productive is obviously a good thing. Can we measure it, though? Should we measure it? There have been mostly failed attempts in the past, like counting lines of code. Now there are new tools that measure productivity using Git metrics. Nick Hodges joined the show to discuss the good and the bad of measuring developer and team productivity, including how we can improve productivity.

Welcome to Test & Code. Today on Test & Code we have Nick Hodges talking about developer productivity. But before we jump in, Nick, welcome to the show, and tell us a little bit about yourself.

Thank you, Brian. I appreciate being here. This is really fun.

I’m the developer advocate at Rollbar. I’m a longtime developer and software development manager, fairly new in the developer advocacy role, really enjoying it, and really glad to be here. Thank you.

Awesome.

So you came up with the topic idea of developer productivity, but why did you start caring about developer productivity?

Well, I was a developer for a long time, and of course, I wanted to be as productive as I could.

As a developer, you can kind of know how productive you’re being. You can kind of feel it. But then I became a manager, and I realized that one of my jobs was to make sure my team was productive, that the developers on my team were productive. And I found out, I should say, that it’s not as easy as one thinks to figure out how productive your developers are.

There wasn’t, at the time anyway, a real way to measure that.

People tried the classics, lines of code or story point completion and all that stuff, and all of those were very gameable and didn’t work.

So I became very interested in it. Light years ago in Internet time, but only ten years ago in calendar time, I wrote an article called "Can You Measure Developer Productivity?"

And I came to the conclusion that, no, you cannot. One of the reasons I came to that conclusion was that a lot of well-known software development writers had come to the same conclusion: Martin Fowler, Joel Spolsky, Steve McConnell. Those guys basically said that, at this particular juncture, we can’t measure developer productivity, because it’s pretty much impossible to find something that isn’t gameable.

Okay, so that was my conclusion. And yet I came to realize that somehow, some way, I knew who my productive developers were. There were people on the team that were very good, people on the team that were good, and then occasionally you’d run across somebody that was not good.

Strangely enough, it was pretty obvious based upon your intuition, your experience, your knowledge, the things that you could kind of observe subjectively.

But the objective measurement of developer productivity was always a big challenge. So I got interested in it, and still am.

So did something change? You said ten years ago we couldn’t measure it.

I think now we can.

And the question of whether we should is kind of different from whether we can. But I think the big change is that Git won the source control wars. Ten years ago Mercurial was up there, somewhat popular, but somehow Git just rose to the top, and I think 99% of the repositories out there are probably in Git these days. Where I worked, we used to have our repository in Mercurial, and we eventually switched over to Git just because it was hard to find people who knew Mercurial. We were using Mercurial and people would say, oh, you guys are behind the times, whatever.

Anyway, it’s a very advanced tool, but it’s kind of like a Betamax argument.

It is. Yeah.

And places like GitHub, GitLab, Bitbucket, those kinds of places now provide you with APIs that let you pull all kinds of really interesting information out of your Git repositories.

And there’s a lot of information available about individual developers: how often people are checking in, how long things take between check-in and code review, all kinds of really interesting information you can garner from that. And I think you can actually find out how individuals are doing things: how many pull requests they’re creating, how many pull requests they’re closing, how frequently they do them, all those kinds of things you can measure.
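To make that concrete, here is a minimal sketch of one such measurement, the time from pull request creation to first review, pulled from the GitHub REST API. It is not any particular product’s implementation; the repository name, the token variable, and the function name are placeholder assumptions.

```python
# Sketch: hours from a PR's creation to its first submitted review (GitHub REST API).
# OWNER/REPO and the GITHUB_TOKEN environment variable are placeholder assumptions.
import os
from datetime import datetime

import requests

OWNER, REPO = "example-org", "example-repo"
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}
BASE = f"https://api.github.com/repos/{OWNER}/{REPO}"


def hours_to_first_review(pr_number):
    """Return hours between a PR's creation and its first review, or None."""
    pr = requests.get(f"{BASE}/pulls/{pr_number}", headers=HEADERS).json()
    reviews = requests.get(f"{BASE}/pulls/{pr_number}/reviews", headers=HEADERS).json()
    if not reviews:
        return None  # never reviewed
    created = datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00"))
    first = datetime.fromisoformat(reviews[0]["submitted_at"].replace("Z", "+00:00"))
    return (first - created).total_seconds() / 3600
```

Averaged over a team’s recent pull requests, that one number is essentially the pull request wait time discussed a little later.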

Now, again, if you start measuring things, particularly with knowledge workers like developers, you end up with the ability to game that. If you start measuring and say, okay, the best developers in my organization all do X number of pull requests per week, well, you’ll get that number of pull requests per week, whether that’s a good thing or a bad thing. Right.

Whether that’s good or bad, I don’t know. And certainly if you start measuring developers based on certain criteria inside of Git, it can become competitive, it can become bad for morale, it can become Big Brother. Nobody wants that Big Brother stuff. So it’s an interesting question about what you can do and what you should do.

Okay.

Well, what should you do? Should you measure this?

Well, my recommendation, and the conclusion I’ve come to, is that you should measure the team. You can take team and project measurements with the tools that provide this, but individuals should probably not be measured, or at least not compared.

There are tools out there like LinearB and some others that provide this information for you as a service and as a product.

And I’ve often thought that while you measure the team in these systems, there would be a cool feature where the only person who could see an individual’s statistics was that developer themselves, as opposed to the manager. So if you were logged into a Jellyfish or something like that, you could see your own stats, but nobody else could see your stats. I thought that would be kind of a cool way of doing it. But I think if you were to allow individual stats to be broadcast to everybody, you would start getting a competition or a rivalry between developers that you don’t want, because you want teamwork. Development is a team sport, in my view.

You don’t want people saying, oh, he’s got more pull requests, or he’s done more code reviews than me, I need to jump on that and start doing them. Well, you do kind of need to jump in and start doing them, but you don’t want to be doing them just to get the numbers up. You want to be doing them because that’s the right thing to do and it’s part of the process.

Yeah.

I’m trying to figure out which ones I would really care about. I don’t know if you can measure this stuff, I’ve never looked into it, but I’d probably want to know things like: when a pull request is opened, how long does it take before it gets reviewed?

Yeah.

That’s something that can be measured, and in fact it’s an important measurement. I think many teams monitor that exact thing and try to reduce it. One of the main measures people use now is cycle time.

Yeah.

And then one of the segments of cycle time is pull request wait time. And yeah, that’s a very important statistic.

Yes. Because when I’m in the flow, I’ve got a PR on both ends of it. If somebody else has a pull request that they’d like me to review, I’m like, man, I’m doing stuff.

I’ll do it later. But if I’ve got a PR, I want everybody to just drop what they’re doing and go review it right now. Somewhere in between is good. We don’t want people to jump out of the flow if they’re in the middle of something, but at the same time, having those sit around too long just slows us down.

If you’re going to provide feedback to somebody, you don’t want the feedback to be two days later, hopefully you can get it within hours.

Right. Because their mind is still in the problem.

Yeah. Another really interesting measurement that these tools provide is average PR size, because you don’t want a pull request to be too small, but you don’t want it too big either. As a team, you need to decide how big you want your pull requests to be, and then you want them to be in that general vicinity of what a perfect pull request size would be. It depends on the team.

But about 4,000 lines of code changed, right?

Right, right.

And every once in a while you’ll have a one-line change, of course. But for new features in particular, you don’t want to go three weeks and then turn in 60 files and 5,000 lines of code. Like I said, that’s not something anybody wants to review, so it might sit around. There are all kinds of bad things that happen when your pull request size gets too big. So pull request size is a really good one.
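The average PR size measurement can be pulled the same way. Here is a rough sketch against the GitHub REST API, again with the repository and token as placeholder assumptions; each PR is fetched individually because line counts are not included in the list response.

```python
# Sketch: average size (lines added + deleted) of recently closed PRs.
import os

import requests

OWNER, REPO = "example-org", "example-repo"  # placeholder repository
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}
BASE = f"https://api.github.com/repos/{OWNER}/{REPO}"

closed = requests.get(
    f"{BASE}/pulls", headers=HEADERS, params={"state": "closed", "per_page": 30}
).json()

# additions/deletions only appear on the single-PR endpoint, so fetch each one
sizes = []
for item in closed:
    pr = requests.get(f"{BASE}/pulls/{item['number']}", headers=HEADERS).json()
    sizes.append(pr["additions"] + pr["deletions"])

if sizes:
    print(f"average PR size over {len(sizes)} PRs: {sum(sizes) / len(sizes):.0f} lines")
```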

I hadn’t really thought about that too much before.

I remember a recent interview where somebody was talking about breaking down large refactorings into smaller chunks. Part of it was cleaning up the code so that it was compliant with PEP 8 or something; maybe have that be one pull request, and then the actual change in behavior be another one, to break it into different stages so people can review it more easily. And if not multiple PRs, then at the very least reordering commits so the PR can be reviewed commit by commit. It’s an interesting idea.

Commit size is another interesting thing to measure, because you don’t want a single commit to be 500 lines of code, but you don’t necessarily want fifty one-line commits either.

So you want to commit small, atomic things, but you don’t want to be committing too small either. It’s like Goldilocks, right? You don’t want your commits too small, you don’t want them too big, you want them just right.

Yeah. Well, like in the PR one or the code clean up one.

If you’ve got some old code that’s not compliant with the naming standards or something like that, and you’re not making functional changes, just nonfunctional changes, then that’s easy to review, even if it’s tons of lines of code. You can just say, hey, I ran black on it and it changed everything, but that’s all in one commit. There’s nothing else in there.

Pretty easy to review that.

Yeah, fair enough.

Okay, so any other ones that you think are interesting for teams?

One that’s hard to measure sometimes, but that I think is really important too, and has gotten a lot of attention out there, is deployment time: the average time from when you commit to the main branch until it actually ends up in production, however long that time is. And there are a lot of people out there who say that should be 15 minutes or less.

Wow. Yeah.

Charity Majors from Honeycomb is very famous for saying 15 minutes or bust.

You want the time from the moment you check your code in to the time you actually start seeing feedback from it in production to be as short as possible, 15 minutes. Again, with the idea that if you start seeing problems, you still have that issue fresh in your mind. It’s not something that happened four days ago, or four months ago for that matter.

Right.

And so if a problem occurs, you can immediately fix it while the code is fresh in your mind. The house of cards that you’ve built up is still kind of there. Shortening that distance from keyboard to production is really critical.

I think so.

Do these metrics that you’re talking about sort of line up with your gut feel for productive people when you sort of compare the two?

Yeah, that’s a good question. I think so.

Again, most of these measurements we’re talking about are team measurements, on average. Individuals don’t necessarily need to worry about how long it takes their code to get from keyboard to production, but as an average across the team, I think it does line up.

But say you’re a senior developer or a tech lead, or even a non-technical manager or whatever. If you’re doing code reviews, you end up knowing which developers are the ones who turn in pull requests that are well done and a good size, all those good things, and you get that gut feel just from memory. But if you have numbers to back that up, particularly in terms of team averages, that can be very helpful as well.

Go ahead.

If a manager is not actually that involved in the code, do you think they should still be involved with code reviews?

No, probably not. I would think not.

Code reviews probably should be done by fellow coders. As a matter of fact, I’m a believer that even junior people should be doing code reviews of senior people: one, for the learning process, and two, so they can ask questions and say, hey, why did you do that?

Why did you do it this way?

Why did you choose that particular method of doing something?

And a junior developer might find problems in a senior developer’s code. That might ruffle some feathers or be a little issue of pride or whatever, but that could happen.

Oh, yeah. And the learning advantage is huge, even with the questions of like, why did you do that? What does this have to do with this other piece of code?

It’ll find holes in the onboarding documentation and stuff, too.

Yeah, exactly.

One of the questions we had down was what are some drags on this? What causes productivity to go down?

Well, that’s an interesting question, and in my mind it gets to what productivity is. Like, a really hard bug can drag productivity down. Right? If there’s a really super challenging issue that needs to be fixed, and it takes you two weeks to track it down, fix it, and get it out in a deployment, that could be a real productivity buster. But then the question becomes: is that a productivity buster, or is it just a hard problem? And of course, that’s another reason not to measure people against each other, because senior developers often work on the hard problems, junior developers are given the easier problems to start, and maybe that balances out over time. But if somebody’s working on hard problems all the time, they’re not going to seem as productive as other people.

Yeah.

I think it’s like you were saying, measuring the team makes sense, because as a team this has to flow. But if the different personalities are such that somebody really likes to do code reviews and jumps on them right away, great. It’s okay if it’s one or two people who are the ones jumping on them right away.

Right. Now, I’ve been in organizations where all the code reviews were done by the technical managers, and occasionally we did them with the technical leads, not the manager per se, but the technical leads. But I would encourage teams to have everybody do code reviews, have everybody look at the code in process.

Historically, they’ve been sort of weekly things, too. That’s another thing I think has changed a lot. It used to be, say, every Friday we’d review everything that happened during the week, whereas now I think the idea is to review them as close as possible to check-in time, when the pull request is created, so they happen more dynamically rather than in a planned fashion.

Well, if you did it weekly, would you hold off the PR or just go ahead and merge it and go back and review them after the fact?

I think historically, back when code reviews first started to happen, I think it was a weekly thing, but I think it was also at a time when you wouldn’t necessarily deploy immediately.

It would be, say, in a client-server environment where you only deploy a new version every quarter, or even every year, for that matter.

Yeah.

I mean, I used to work at Borland, on the Delphi team, and C++ Builder and Delphi and JBuilder, tools like that, they’d release once a year. New versions would come out once a year.

I guess it wasn’t really that long ago, or maybe, like you said, in Internet time it was a long time ago, but in years, not really, when code reviews were not something easy. Right now, when I refer to a code review, I’m thinking of the code review part of a PR in GitLab or GitHub or something, where you can just see all the changes: what changed, the old and the new, what got deleted, things like that.

But it wasn’t that long ago where code review meant pulling people into a room and throwing the new code up on the projector and somebody talks through it, talks about what’s going on. Yeah.

You had to go to your Git client and find diffs, whereas now it’s all right there. Like you said, I think it has to do with our deployment schedules. So much of the development we do now is SaaS-based and web-based that you can deploy multiple times a day.

I think Amazon deploys a thousand, or multiple thousands, of times a day.

Yeah. Okay. So in order to get there, there’s the obvious need to monitor. Right. And that’s one of the places where Rollbar fits in. But whether it’s Rollbar or somebody else, monitoring is essential. You couldn’t deploy 15 minutes after a commit without monitoring in production, right?

Correct. Yeah.

And it’s a cool thing that you can, because of monitoring.

When you add in the fact that a lot of the code you write is running on the client side, it’s important to know right away. You can know an error is occurring before your customer actually realizes it, or before your customer can get around to reporting it. You could maybe even find it and fix it in a matter of minutes. And like you said, one of the things Rollbar does is report those errors, even from the client side, into a system that lets you see them right away. So you can be monitoring production constantly, and you can associate a particular error with a particular release or even a code check-in. That makes for a pretty easy way to either roll back or immediately fix the problem you see, hopefully before your customer notices it.
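As a rough illustration of tying errors to a release, here is approximately what that looks like with the pyrollbar SDK. The access token variable, the environment variable holding the commit SHA, and the risky_operation function are assumptions for the sketch, not Rollbar’s exact recommended setup.

```python
# Sketch: report production errors tagged with the deployed code version.
import os

import rollbar

rollbar.init(
    os.environ["ROLLBAR_ACCESS_TOKEN"],                  # project access token (assumed env var)
    environment="production",
    code_version=os.environ.get("GIT_SHA", "unknown"),   # ties each error to a commit/release
)

try:
    risky_operation()  # hypothetical application code that may raise
except Exception:
    rollbar.report_exc_info()  # the error shows up with the code_version attached
    raise
```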

Yeah. And even if you’re not deploying to production 15 minutes after a commit, even if it’s a weekly schedule, a lot of people are still doing something like a development server, where to the developer it may as well be deploying in 15 minutes. If you can get that cycle time from commit to getting it onto an integrated server with everybody else’s code, where you can run longer-running tests or whatever metrics you need to run against it, that’s still a good thing, even if you’re not going all the way, right?

Yeah, absolutely.

Anytime you can get feedback on what you’ve recently checked in as quickly as possible, that’s really good. If you can get feedback in 15 minutes, it helps prevent technical debt, it helps reduce bug fixing time. There’s all kinds of goodness that happens when you’re getting feedback very quickly.

I’m suddenly super jealous, because I’ve still got C++ code and it takes 15 minutes just to build the thing.

15 minutes is probably fast.

Yeah. It’s a whole project to get compiles and test suites and so on down to 15 minutes with physical stuff. But web and microservices are different beasts than programming devices.

Very different. Absolutely.

But it’s all similar sorts of things. Even at a different scale, bringing these numbers down is a good thing.

Okay. So we talked about monitoring, but are there other ways? The question is, where are the biggest gains? How can we increase productivity, I guess?

Well, that’s an interesting question as well.

I think that productivity can happen when people are working on things that they want to work on, when they’re feeling empowered to make their own decisions about how to do things.

I think if they can feel like they’re growing, improving their skills, and mastering their trade, and if they feel like they have a reason for doing what they’re doing, that they’re contributing to the company. All those things, general morale issues really, are what make people more productive. And of course, training and learning and mentoring and all those things can help make a junior developer into a senior developer.

Those are the kinds of things that I think a manager can really work on. And then there’s the other, more practical side, like making sure they have a very fast computer.

I’ve never understood the hesitancy to get a developer the fastest computer, because it pays for itself. Right? I mean, you’re talking about 15-minute compile times. Think: if you could get a faster computer and that came down to ten minutes, and you compile five times a day, that’s roughly half an hour a day. Times what, 200 work days a year? That’s on the order of 100 person hours. That’s weeks of productivity right there, just by buying a new computer.
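For what it’s worth, the back-of-the-envelope math behind that argument, using the numbers from the conversation:

```python
# Rough savings from a faster build machine (numbers from the conversation above).
minutes_saved_per_build = 5      # a 15-minute build dropping to 10
builds_per_day = 5
work_days_per_year = 200

hours_per_year = minutes_saved_per_build * builds_per_day * work_days_per_year / 60
print(f"{hours_per_year:.0f} hours saved per year")  # ~83 hours, i.e. weeks of work
```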

Yeah. That’s like the cheapest win a company can get.

I know.

Upgrading software, larger screens, man. Absolutely.

All those things: a good keyboard, a comfortable desk, making people feel like they’re valued just by the hardware you buy.

And those are just fractions of a developer’s salary, too.

Absolutely. Yeah. You look at an hourly wage for a developer if it saves them an hour a day. I mean, that’s a big win.

Yeah, definitely. Especially multiplying it out over five days a week.

200 work days a year. Yeah, absolutely.

One of the things around productivity, if we talk about productivity, is how that incorporates into performance reviews.

Yeah.

Got any thoughts around that?

I do. I’m not a big fan of the performance review.

I’ve seen performance reviews of developers where it all comes down to measuring numbers, measuring certain things like I mentioned earlier: the classic lines of code, or any other specific number that you want to measure.

And then Microsoft used to do stack ranking, and GE is famous for firing their bottom 10% every year, which seems kind of insane to me. Maybe for the first few years that would work, but after a while you’ve only got good people left, and how far are you going to take it? I don’t know. Anyway.

But performance reviews, I think the more general they can be, the better.

One of the best places I ever worked had a performance review where each quarter you would list three things you accomplished, and your manager would list three things they thought you could work on. And then you’d get ranked, and if you landed in roughly the bottom 5 to 10 percent, that would basically put you on a performance improvement program. And if you got ranked in the bottom 5 percent or whatever, they basically said, go grab your stuff.

But the notion that we’re going to give a letter grade or some very specific number to developers in particular, I think, is challenging and dangerous. Challenging in that I think it’s hard to do in an objective manner, and dangerous because you can cause people to leave, you can hurt morale, you can cause attrition. There are all kinds of things that can happen if somebody’s given a 77 and they thought they were an 85 or whatever.

Yeah. Numbers are weird.

But there’s also stuff like, I don’t know how this relates to productivity, but it’s one of those gut-feel sorts of things. I know there are some people that if you give them something, it’s going to get done, and it’s going to get done quickly. Sure. And other people that if they can’t get it done quickly, they’ll come back and search for answers and stuff like that. And there are other people where you know the same thing: if you give it to them, it’s just going to take a while.

And that may or may not be a good thing. If they’re also an attention-to-detail sort of person, that might be exactly what you want for some tasks.

Exactly right.

But that happens. And so, I guess, setting that aside: if we don’t tie these metrics into performance reviews, what are we using these metrics for at all? Is it for looking for problems? Is it for what?

Well, if you measure the team metrics, it’s sort of a continuous search for problems.

If your pull request size starts rising, you can go back to the team and say, hey, we need to start shrinking our pull requests. Not a lot, but it’s starting to get above what we want.

I guess for me, performance reviews, maybe I’m weird.

I’m never motivated by them.

And then the other thing, too, is you don’t want to wait until the performance review to let a developer know how they’re doing.

You want to let them know right away, and then if you’ve let them know throughout the year, hey, you’re doing a great job. I rely on you. I always know you’re going to come through.

Or if you’ve been counseling a developer who maybe isn’t doing that great, a performance review seems redundant to me.

Yeah.

Sometimes I get the feeling that performance reviews are driven by HR. You know, they want to have a number for everybody, they want to have a letter grade for everybody, whatever.

That may be part of it.

But some people want them. Like I said, I’m not somebody who is really into getting a performance review.

As long as you’re not firing me and you think I’m doing a good job, I’m okay. If you let me know that throughout the year, I’m fine. I don’t need the performance review. Some people like them. And maybe the policy could be that if you’d like a performance review, I’ll give you one.

If you don’t want one, I won’t.

I guess for me, as a software manager as well, the only positive side of performance reviews is that because I have to give them, I have to think about them throughout the year. If I didn’t have to do performance reviews, I might not be paying attention to what somebody’s doing well and what somebody’s doing badly.

Keeping track of it.

Yeah, trying to keep track of it, whether or not keeping track of it is a good thing. But I think that is important to say. It wasn’t really the topic of the podcast, but feedback should be more real time, just like cycle time, just like you’re trying to get code out faster.

Good things and bad things should be visible. Feedback should go back to the developer as fast as possible.

Yeah, you never want a performance review if you’re doing them to be a surprise. Never.

Nobody should ever come into their performance review and go, Holy mackerel.

I also know that there’s a legal standpoint: if there were problems, it’s helpful for a company to have written evidence that problems have been around for a while, so that if they have to let somebody go, they’re covered.

Yeah, that’s a nice euphemism, too, isn’t it? Let somebody go as if it’s their decision to leave.

We’ll allow you to go.

We’ll allow you to leave.

You could stay, but we’re not going to pay you anymore.

And actually, you can’t stay because we’re taking your badge. But you can hang out in the parking lot if you want.

You can still connect to the WiFi out in the parking lot. Oh, wait, we cut off your WiFi.

Okay, never mind.

I hadn’t really thought about this before. When you brought up the topic of developer productivity, I kind of cringed, like, I don’t know if I want to talk about that, because it brings up the whole lines-of-code thing. But you could utilize some of these things just to talk with the team: pick a handful of them to measure, or find out what’s easy to measure and ask the team, what should we measure? What should we look at as a team? Or just pick a few that you think are important and bring them back and say, this is interesting.

Most of our code reviews get looked at pretty quickly, like within a day or something, but is a day too long? How do people feel about that?

That’s the kind of thing that can happen when you’re measuring things as a team, and the team can take responsibility for it. No individual is called out, and nobody gets in trouble because they were working on that very challenging, difficult bug, it took too long, and all of a sudden the chart doesn’t look right. And then the VP of engineering, who hasn’t written a line of code in their life, says, why is Sarah taking so long to fix that bug? Well, it’s a hard bug, that kind of thing. You don’t want those individual things called out or pointed out.

Yeah. I remember a group I was on, not the group I’m on now, a different company, where we were trying to get unit test numbers up, which, if you’re measuring something, there’s a way to game it.

Unit test numbers went up.

Yeah.

Unit test numbers go up. And we were measuring it.

We had a way to measure it, and we were testing more, which is a good thing. And they liked to see those graphs go up.

And then we noticed that at one point there had been a refactor of a test file, and instead of doing it with version control, the file was copied to another name and then modified to use a different algorithm. Sometimes that’s a good thing: you’re comparing the two in production to see if they still behave the same. But then the copy got left there. So there was a big jump, and later it got noticed, like, oh, these are duplicate tests, they’re just taking an extra five minutes for no reason, let’s delete them because they’re redundant. Now the graph drops. Suddenly we lost 50 tests.

So managers were upset and said, what is going on? Why are we reducing test coverage? We’re not, we just had... okay, it’s silly that we have to do this explaining. But yes, this happened.

That’s exactly right.

The notion that sometimes the best thing you can do is delete 100 lines of code. Right.

Yeah.

Reduce 100 lines of code down to ten. That might be the greatest thing. That might even be a big win.

Yeah.

But I’m glad that we weren’t measuring test time. I mean, we probably should, because a test suite should be fast also.

But it’s really fast if you put no-ops in all the test functions and don’t assert on anything.

Test count goes up, test time goes down.

Yeah. I mean, there’s a reason for this stuff to be there, but you have to use common sense.

Absolutely.

Any other topics around productivity that we want to cover?

Actually, there was something interesting. I put a poll up on my LinkedIn profile the other day, and I asked the question: do you prefer a developer who is really, really slow but writes almost no bugs, or somebody who works at maybe average speed and writes an average number of bugs? In other words, is there any advantage to somebody who takes twice as long to do something but makes it almost impossible to find bugs in their code?

And interestingly, most people wanted that second one, and I thought, which one is the second one? I’m sorry, the second one is slow but very high quality code. Okay. Interesting. Versus an average developer, or a good developer who does things fairly quickly but has bugs in their code. And this actually stemmed from a very real-life situation I had back at Borland. I had a developer on the JBuilder team who was notorious for this. He was very slow, but his downstream costs were very minimal, because it was like a badge of honor for QA to find a bug in his code.

Holy crap, you found a bug in Bill’s code? Are you kidding me?

What? Show everybody.

But it would take him probably twice as long to finish a project.

Now, in the environment of shipping once a year, that’s probably okay, right? Because then you’ve got time. I wrote a blog post about it, actually, and I used the example of, say, a project that takes a person one week, and they find three bugs.

But those bugs are reported very quickly, because we’ve got production bug reporting, a tool like Rollbar, if I may, and they get those bugs fixed in three days. So they’ve used a total of eight working days to create a new feature. Whereas the slower, higher-quality person writes the feature in two weeks and there are no bugs. Well, who was faster?

That’s kind of an interesting productivity question.

And maybe in this day and age, the person who writes high quality code slowly isn’t as valuable as the person who writes good quality code but can very quickly find the errors in their code and fix them. It’s just something to think about, in terms of the real productivity of a developer.

Yeah, it is interesting. Different styles. I don’t think you can really compare the two.

Yeah. Ultimately, if somebody’s writing good quality code, it’s hard to argue against that.

Except if it takes three or four times as long.

Interesting.

Also, that trade-off used to be more real without quick, systematic testing.

That’s pretty thorough.

And also system level testing and testing in production, things like that.

If you can’t do that, then there’s definitely wins on the quality side.

Yeah.

And even with that, I guess, we still had issues back then of people getting frustrated with slower people: I’m putting this stuff out really quickly, what’s going on with you? Yeah, but if you take a look at their code, it’s got a bunch of bugs in it, and just because we don’t have the test suite to catch them, we’ll catch them later, or hopefully we’ll catch them later, and people just wouldn’t see that. I mean, like you said, it also depends on what you’re measuring.

If you measure it when you have the ability to just throw it over the wall to QA, and the debugging and polishing isn’t counted against the time it took you to develop the feature, then the time to develop the feature is really just how fast you pushed it to the next team.

Then why write quality code? Just push it to the next team as soon as you have an idea down.

I was kind of a brat about this sometimes, and I would say, well, all the features are finished, except none of them work.

And we’re now in the debug phase.

So we better get writing code because we’re going to have a lot of bugs to fix.

Yeah.

It’s just got one defect: it’s not implemented yet.

Good point.

I like that old school stuff like that.

I think things are changing, because the quick turnaround times have changed the way things work. Observability has changed the way code works. Right? I can remember being a beta tester for software packages 20 years ago, when people would write the code and the QA people would test it, and then it would hit the beta testers, and we, the beta testers, would find bugs. And the time from when the code was written until the bug was reported was measured in weeks or months.

And that’s costly.

The farther away a bug is found from the time the code was written, the more costly it gets. And like I said, if you’re turning them around and finding those bugs in 15 to 20 minutes, that reduces the cost quite drastically.

Yeah. One of the benefits and cost savings we don’t really talk about, of finding things quickly and getting into production quickly, is the snowball effect of what happens to code. Even an easy-to-fix bug today might be hard a month or two from now, because you’re going to build architecture on top of that feature.

And if you then want to yank that feature, or refactor it, or change it completely because it was buggy, you have to change all the parts that depend on it, whereas finding it quickly is beneficial.

Yes, absolutely.

It’s also changed how we test. Because even that term, beta tester.

Oh, yeah.

Companies are just like, guess what, we’re all beta testers. They’ll just segment us. Yeah.

You didn’t think you volunteered, but you did. Yes.

With percentage deployments and stuff, people deploy a feature to 1% of the population and monitor to see if it’s getting slower for them or if anybody complains.

Yeah. Feature flags are a great invention. I mean that’s just a really powerful tool.

Oh, yeah. Feature flags, because you can turn a feature on and off in production. Do people use feature flags for percentage deployments as well?

I don’t know.

When I think of feature flags, I generally think of a SaaS web application deployment, or mobile.

But I don’t see any reason why you couldn’t for a more traditional client-server application, or just a desktop application. You could, I suppose. I suppose you’d have to have an Internet connection.

Yeah. But I mean, when people say 10% of the people coming in get one view, it’s like A/B testing, but A/B testing with features.

I think that happens.

I know you can deploy to specific customers. You can just deploy to a certain percentage of people. I think you can do all that. Yeah. I don’t see any reason why you wouldn’t want to.
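One common way percentage rollouts like that are implemented, sketched here without reference to any particular feature-flag product, is to hash a stable user id into a bucket so the same user always gets the same answer:

```python
# Sketch: deterministic percentage rollout by hashing a stable user id.
import hashlib


def in_rollout(user_id, feature, percent):
    """Return True if user_id falls in the first `percent` of 100 stable buckets."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100   # 0..99, stable per (feature, user)
    return bucket < percent


# e.g. roll a hypothetical new checkout flow out to 10% of users
print(in_rollout("user-42", "new-checkout", 10))
```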

Yeah. We’re all beta testers.

We’re all beta testers now. Yeah.

There are probably people who don’t even know what beta testing is.

Do you think so?

I don’t know. I guess some of the newer folks out there, earlier-career folks, might never have even heard that term. I don’t know if it’s even used that much anymore.

So, yeah, we have changed the term.

I guess we still have it, but we call it an Early Access program.

There you go. Yeah. A nice euphemism for beta testers.

Yeah, it sounds better. It’s Early Access.

That sounds like a Microsoft ism if there ever was one.

I’ve seen it from a lot of companies when they started it, though.

I think that was the reasonable one.

I sure appreciated it even with book writing, and that wasn’t possible a decade ago. For both editions of the pytest book I put out, we had beta readers. When we had about half the book written, or completed and edited, we released it to beta readers, meaning people could buy the book early.

How would you do that? Do you have a platform for that?

Yeah. Pragmatic is the platform for it. So we would release, I forget exactly, for the 16-chapter book I think we had eight or nine chapters when it released to everybody, and then people could read it. It’s an ebook at that point, not a physical book.

And then we had basically an errata page where people could log in and say, hey, on page 43 there’s a typo or something, and then we could fix those as we go. Or people could say, yeah, this is totally confusing.

I might have to rewrite that. It’s a powerful thing.

More eyeballs, on a book in particular. I’ve written a few books over the years, and the more eyeballs you can get on it, the better. Yeah, absolutely.

Yeah. And just the ease with ebooks and ebook readers. I guess some really great feedback I got was from sending it out to experts in the field. We’d pick maybe eight or ten people that really know what they’re doing and send it out, and if they’ve got time, they can review parts.

Also, focus helps. Like with code reviews, it’s good to have a checklist of what you’re looking for. Yeah, absolutely. Like, hey, you’re looking for whether it actually functions; you’re not looking for whether it’s implemented exactly how you would have done it, because that’s not fair.

Similarly with books, I picked different people for different reasons. There was somebody who was a teacher, so I said, could you look at the questions I ask at the end of each chapter and see how good those are?

And then for other people who were more expert in the tool itself, I would say, yeah, make sure I didn’t screw something up.

So getting feedback from them and then from the beta readers, it was extra work, of course. But in the end, we got a more solid book. And books are more similar to how software traditionally used to be written.

Sure.

You give it to a few people, then you give it to a few more people, and you have testers, and at some point you have to say it’s done and release it.

You can do a second printing to fix some things, things like that. But anyway, it’s changed a lot. Plus, being able to just use version control, software version control, to write a book was great.

Yeah, that is really neat. Absolutely. Just text, right?

Yeah.

Anyway, it was fun talking with you about productivity.

Thank you very much.

Yes. It gave me a lot to think about, and I’m going to have to go look at some of these features.

All right. Cool.

Very good.

Thanks.

Thank you, Nick.

Interesting discussion and a lot to think about. Thank you, Patreon supporters. You keep me motivated. Seriously, thank you. Become a supporter by visiting testandcode.com/support. Every dollar helps. If you found this episode interesting or useful, please share it: tweet about it, tell a friend, share it with a coworker, help the show grow by one. Thanks for listening. Now go out and test something. Have fun coding.

Creators and Guests

Brian Okken
Host
Software Engineer, also on Python Bytes and Python People podcasts

Nick Hodges
Guest
Developer Advocate at Rollbar