Who Sets the Definition of ‘Done’ in an Agile Product Team?


I sat down with Tyler Hilker, Director of Strategy & Design, and Tucker Sauer-Pivonka, Director of Product Management, to see how they would answer the question, ‘Who sets the definition of done in an agile product team?’ They had no shortage of ideas to share on the topic, and what follows is a transcription of my interview with them. Though this shows just one perspective on how things are done at Crema, you'll need the right culture and leadership to make our process work within your own company. If you already have a collaborative team that understands the importance of trust and transparency, this recipe should work wonders for your agile product team.



Q: So, has Definition of Done (DoD) ever been a problem for us here at Crema? Do we have a consistent definition of what DoD is for us?


Tuck: That's a great question. I can't speak to the entirety of Crema's history, but we've encountered this from time to time. It's a common problem on teams, but I don't think we encounter it that often.


Here's a scenario: We have a brand new client, and we go through this onboarding phase of trying to figure out their norms with the team. We do what we can to make sure that everybody's on the same page about what we're working to achieve. Once that's set, the definition of done comes a little more naturally, and we can get into more specifics about how to set it.


Tyler: It also varies with the kind of product it is. If it's a greenfield product, then the definition of done is going to be different than for a legacy product, because of how much is known about each piece of functionality. It can also depend on how fast you're trying to move or how strong the team is (i.e. how well they understand how the piece of work relates to the business, users, and existing technology).


Part of the reason we haven't had a very difficult time with it is that we review stories together when we write them, rather than the PM going off and deciding everything on their own. That way, the developers and designer get to speak into it. We do some definition-of-done-type exercises as early as the strategy and alignment session. We ask, ‘How do we know when this epic is done?’ and answer with, ‘when it includes these features for the MVP.’ And at the story level, it kind of ladders up to that. We write the stories together, so everybody's on the same page as early and often as possible.

A team on the same page is a happy team


In terms of variations, some clients simply don't know enough to know when a certain piece of functionality is done. They're relying on us as product and technology experts to decide when it's secure enough or stable enough. Other clients are really savvy and know more about their particular industry. Cybersecurity professionals, for instance, know when a feature meets a certain set of criteria that we might not be as familiar with. Then they'll say, ‘Actually, it needs to have these three acceptance criteria in addition to that to meet this definition of done,’ and we work it in.


Tuck: Our product strategists will suggest a set of features (as part of the product vision) to show that these are the things we want the product to do. The product strategist is involved in this early phase, and they might also be providing ideas for functionality.


Tyler: A strategist might say: 'This functionality needs to meet these three criteria, and it’s up to you how you do it. It just needs to do these three things in any number of different ways.'

They might draw out some high-level flows, call out some specific details, provide examples of directions to go, and set other expectations. They're leaving the exact execution up to the designer, developer, and PM.


Tuck: Then, our designer (depending on the project) will take that and build upon those requirements. Maybe you don't have all the details fleshed out yet, but you’re kind of generally going in that direction. And it gets further and further defined as we go along.


Next, the product manager will come in and take the first stab at building out more detailed acceptance criteria—really thinking through that flow and saying, ‘Okay, I'm going to break all this work down in a way that I think makes sense for the team.’ From there, the team will review and break down the stories further.


There are several touchpoints with the client throughout to make sure that, as we're defining things further and further, it still makes sense. If not, we just make adjustments as needed.


After we have the initial stories ready to go, we'll do a dev team review (usually in a sprint kickoff). Or, if it's a big set of stories, we might do an ad hoc meeting to go through them. We'll make some adjustments, but everybody gets aligned on what the acceptance criteria should look like. When I say dev team, I'm including the test engineer in that term; they're the last gatekeeper for considering something done for us.


They begin the dev work based on those acceptance criteria. Then, they pass the story to test engineering. Test engineers look at the acceptance criteria and test the story against them. If they catch something that doesn't pass the acceptance criteria, they send it back.


The test engineer is the final gatekeeper who considers it done based on those acceptance criteria. They might find something entirely new that we didn't account for with an acceptance criterion; that happens frequently. We then create a separate story and/or bug that gets added to the backlog. Then, we define it and put it into the next sprint, or wherever is most appropriate based on priority.
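To make that handoff concrete, here's a minimal sketch of how a story's acceptance criteria could be expressed as automated, Jest-style checks a test engineer might run. Everything here is hypothetical: the ‘password reset’ story, the requestPasswordReset function, and the responses it returns are invented for illustration, not taken from an actual Crema project.

```typescript
// Hypothetical story: "As a user, I can request a password reset."
// Each test mirrors one acceptance criterion the team agreed on, so
// "done" becomes checkable rather than a matter of opinion.
import { requestPasswordReset } from "./auth"; // hypothetical module

describe("Story: request a password reset", () => {
  it("AC1: sends a reset email for a known address", async () => {
    const result = await requestPasswordReset("user@example.com");
    expect(result.emailSent).toBe(true);
  });

  it("AC2: does not reveal whether an address exists", async () => {
    const result = await requestPasswordReset("unknown@example.com");
    // Same user-facing message either way, to avoid leaking account data.
    expect(result.message).toBe("If that account exists, we sent an email.");
  });

  it("AC3: rejects malformed addresses with a validation error", async () => {
    await expect(requestPasswordReset("not-an-email")).rejects.toThrow(/invalid email/i);
  });
});
```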


Tyler: I'm going to take this shape [see above] and then rotate it a bit. If this is a block of work over time, then the strategist will say, ‘these are the three criteria I need you to work with’; the work has boundaries and shape, but there's room to work with it. Then the designer will say, ‘Okay, in order to meet these criteria, I'm going to need it to do these things,’ which adds some more definition & specificity. Then, the PM will take each one of these stories and slice those up further. The dev will do the same. Doing this together (or at least back-and-forth in short iterations) helps ensure each new layer of definition is consistent with the others.


For any designer story, there are probably at least a handful of developer stories. At the end, the test engineer has the job of validating every single one of these, and then some, because the designer might say, ‘It needs to be accessibility compliant with this standard.’ Test engineers will then need to check the work against every piece of those criteria to make sure it fits what the designer specified.


It looks like the test engineers are only tracking one piece of definition of ‘done-ness,’ but it tracks all the way back here. Ideally it all builds on itself, but it's not always this neat. The test engineer is down here saying, ‘Yep, all of this is met.’ And because we do this all together every other week, we've all agreed that this is the definition of done.

A Crema test engineer hard at work


The tricky part is when the developer gets down here [into the highly-defined criteria] and what we worked with at the beginning isn't accurate. They know all kinds of things that the designer doesn't know about the system, about what APIs are available, the cost of certain APIs, etc. That has to be validated within the team, because the developer will sometimes say, ‘This criterion that you had up here actually doesn't work out because of something we just learned.’ They might say, ‘The thing you want here isn't actually possible, because we don't have that data.’ So we come back and say, ‘Okay, well how can we work our way back up one layer at a time and still fulfill this criterion with a set of data that we do have?’


Q: Can you tell me a story about when that happened?


Tyler: We were working on a financial-type product, and the designer had designed it in a way that wasn't consistent with how the payment data was structured or with the business process. The client approved the designs and it looked great, but when the developers got to building it, they realized it wasn't actually lining up with reality. The client didn't know that, and because the issue was so deep in the stack, the developers didn't have a full grip on it until we got there. From there, we worked our way backward to make some tweaks so that it fit.


Tuck: I think the other thing that's important here, in relation to that, is that even if you go through a deep-stack technical discovery process, it's still important to have these check-ins with the entire team at points throughout the engagement. With the whole team present, people will start building on each other's ideas and say, ‘oh, actually there's an easier way to do this that's a less expensive route.’ The client may want the cheaper option, so they can speak into that. Or maybe they want to go the more expensive route because they're getting all these other trade-offs with it.


Tyler: To our developers' credit, they will frequently use their knowledge of what the designer's doing, and the intent behind it, to make really smart design recommendations. Typically, one of three things happens:


  1. They’ll build it as designed because everything’s in line and the client’s on board.
  2. As the designer is designing, the developer will speak into that and say, ‘Hey, actually I have a better idea because I know how the data is structured, and I saw this example online that might improve it.’ Then, they'll work together to make that happen.
  3. If it does get all the way down, the developer can say, ‘What if I do this instead of doing it the way that you want me to, because it will save me X many hours and provide a similar result?’ Because the designer knows what they're trying to achieve, they can say, ‘Yeah, that's actually just a different expression of what I want to go for.’ So the developer (knowing what the definition of done is as well as the overall strategic goals that we want to achieve) can speak into those things really well.



Q: Knowing that our teams are set up so that developers aren't in meetings at the beginning of an engagement, has that ever bitten us in the butt (in terms of definition of done on an agile team), or has there been pretty good communication along the way?


Tyler: There has been good communication. On most of our Design and Prototype engagements, there's a developer on board as a sort of technical advisor, even if they're not in every discussion. We always frame that up with clients to say, ‘We want to make sure this is technically realistic. We're going to do some technical exploration on your side, so that we know roughly what we're working with.’ But by no means would we present this as bomb-proof and ready to go.


Q: How do we avoid the disconnect between Design Prototyping engagements and reality?


Tuck: Here are a few ways we avoid it:


  1. We actually add that technical consultant to the contract. We haven't always done that; it's a recent development we've put in place to try and avoid those scenarios where we're designing something that's going to take a really long time or there are just a lot of technical limitations behind it. We need somebody to be able to speak into that.
  2. A lot of times we use existing frameworks, like Material-UI (see the sketch after this list). Frameworks like these help us move a little bit quicker, and they're easier and safer to use because they've been proven out. There are rules that go along with them, so it makes some decisions a little bit easier. And that can reduce the complexity.
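As a rough illustration of that second point, here's what leaning on an existing framework can look like: Material-UI (now published as MUI) ships pre-built, accessible React components, so a simple form can be assembled from proven pieces instead of custom ones. The form itself is a made-up example, not from a client project.

```tsx
import * as React from "react";
// Off-the-shelf MUI components; their states, spacing, and accessibility
// behavior are decisions the team doesn't have to make (or prove out) itself.
import { Button, Stack, TextField } from "@mui/material";

export function SignUpForm({ onSubmit }: { onSubmit: (email: string) => void }) {
  const handleSubmit = (event: React.FormEvent<HTMLFormElement>) => {
    event.preventDefault();
    const email = new FormData(event.currentTarget).get("email");
    onSubmit(String(email ?? ""));
  };

  return (
    <form onSubmit={handleSubmit}>
      <Stack spacing={2}>
        <TextField name="email" label="Email" type="email" required />
        <Button type="submit" variant="contained">
          Sign up
        </Button>
      </Stack>
    </form>
  );
}
```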


However, sometimes you're just going to have projects where clients want you to use a data set or connect to different systems that you don't yet have access to. They'll give you a rough idea, but within that engagement, you don't have a developer (possibly due to budget) to create a proof of concept. That's a risk that we just have to call out: ‘One risk on this project is that since we can't access xyz, we might not have total confidence in some decisions at this stage. We might have to scale back some of these decisions once you get funding for the project.’


Tyler: One thing that I think we do really well is that we don't set the definition of done any earlier than we have the confidence to do so. Like we said before, as the strategist, I'm not saying exactly when something's done. We refine it more through each stage, so that the people defining something as done are the people closest to the actual work. This is all done in consultation with the client along the way. For example, one of our clients defines their own specific definition of done, because they have the closest interaction with the actual customers (they're getting those feedback loops).


I think if I were to set a principle for who sets the definition of done, it should be the team (the people most closely tied to the work itself) and the end users of that piece of work. So the people who will be using it (through user interviews & usability testing) and the people who are making it should speak most strongly into the definition of done.


The product owner (PO) for one of our accounts can speak into it at the same level that I do from a strategy perspective. He knows it needs to do these things because these are the things he’s hearing feedback about. So he’ll tell his product manager to do these things, but she’ll get very specific because she's really close to that work. He's not going to tell her those specifics, because he'll be off a bit (and he knows that). That’s why he’s a great PO. There are some POs who will go too deep and specific, and they'll get it wrong. So those teams then churn and waste all kinds of time on work that doesn’t matter.


Tuck: Building on that a little bit, it's not so much about who defines the definition of done, but how the definition of done is defined. When the team picks that up in a sprint, they've all, at that point, reviewed it. They've all agreed to it, and they say, ‘okay, in this sprint, here are the sets of stories that we're going to complete, with the definition of done within each one of them.’


One important point is that the definition of done or acceptance criteria within those stories should be detailed enough that the team knows what they need to do. However, they shouldn't be so detailed that I'm telling a developer how to do their job. And that's where some teams make a lot of mistakes; they tell the developer in incredible detail how something needs to work. So when it gets to test engineering and doesn't follow that exact thing, they're going to reject it. That might need to be the case for some compliance-related things. A lot of the time, though, it's more about getting the right level of detail in that definition of done so that the development team has the autonomy to make decisions within that sprint to accomplish the goal of the story.
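To make that distinction concrete, here's a contrived pair of criteria for the same hypothetical ‘empty search results’ story; the story, service, and component names below are invented for illustration, not from any Crema project.

```typescript
// Two phrasings of the same acceptance criterion, illustrating the
// level-of-detail trade-off. Every name below is hypothetical.

// Outcome-level: states the goal and leaves the "how" to the developer.
const outcomeLevel =
  "Given a search query with no matches, the user sees an empty state " +
  "that prompts them to adjust their search.";

// Over-specified: prescribes implementation details nobody needed to fix
// in advance, so test engineering would reject a perfectly good variant.
const overSpecified =
  "Debounce input by 300ms, call searchService.query(), catch " +
  "EmptyResultError, and render <EmptyState variant='search' /> in 14px grey.";
```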


Tyler: I'm thinking about the difference between the Definition of Done and the Definition of Ready. Picture a spectrum from 0% complete to 100% complete, where 100% means ‘this feature could not get any better for this particular user.’ You could theoretically develop 50% of the product's functionality and have it ready for the market, because you need to get feedback on it.


So, if I'm making a note-taking app, 100% is something like Evernote. But right here is Notepad, where all I can do is type characters; it doesn't format, it doesn't do anything unless I work really hard to get it to do something. But then there are gradations in between that allow formatting, exporting, and more functionality.


With this payment product we mentioned before, we--as a team--had to decide what level of ‘ready’ or ‘done’ we were prepared to release a feature at. Is it at 20%, where the client can only process certain types of transactions online on this new system? Or is it 80% where they can process all kinds of transactions for all but one type of user? Where are you ready to release this particular feature, and then what's your plan to do that?


So each of these is a variation on the definition of what is ready.


The reason I make that distinction is that disagreement on a team often comes from a disagreement about what 100% means and what ‘ready to put into the world’ means. Most teams, if pushed, will say, ‘Okay, we agreed that this isn't the full embodiment of this particular feature and this user story request. We're all on board with that.’ But they have a problem agreeing on where it sits on the spectrum.


Q: When people are disagreeing, who’s the ultimate decision-maker?


Tuck: In that scenario for Crema, that would be the client. We can speak into our recommendation of the point at which you'll be ready enough. But ultimately, if there's a disagreement, the client is responsible. It's their product. They're paying for it. They're making the decision. We can push really hard on that, and we do challenge our clients, but they get to say if they want it released or not.


Now, that's different, like you said, with the Definition of Done, because we've agreed all the way up to that point. Maybe we're at the 80% mark—that still falls within what we call ‘done.’ But maybe the definition of ready has changed over time for the product, because they have interviewed more people and they've thought of more features. So the 80% all of a sudden becomes 50% overnight, because they've come up with a whole new slew of features. That’s why we have working agreements.


Tyler: The reason I bring this up (even though it's not a part of the Crema canon of how we work), is because it’s consistent with how we talk about trade-offs with our clients.


For example, let's say our team is building a custom component. We agreed that 100% embodiment was a fully-featured, totally custom component with all kinds of functionality, but that was going to take six times longer than delivering a similar component that just used standard native elements. There's a trade-off in there, and designers, strategists, PMs, developers, and clients all have to discuss what those trade-offs are going to be.


More often than not at Crema, we would tend to go for the lighter-weight version of something common or standard that uses an existing component from the design system, and then go big on something that's actually going to matter, something that's going to significantly differentiate the experience. This gets into how we talk about trade-offs, which are a big aspect of Definition of Done.


Tuck: Coming back to the DoD and who sets it, I think something that's important to remember is that setting the Definition of Done doesn't mean the Definition of Done can't change. It just means that a change either has to happen before the sprint starts (in which that thing gets picked up), or that something new gets added to the backlog if it changes during that sprint.


You never change the Definition of Done on any active story that's being worked on. For the development team, that means that if a sprint includes ‘Forgot Password’ functionality, but the client decides at the last minute that for some security reason they want to add a second verification step to retrieve a forgotten password, that would be a new thing. We've already all agreed on what the definition is, and the team's already moving in that direction. We would rather go ahead and get that done.


Maybe we say that it's ready enough for release, and then we add the second factor in the second release. It's tough when a client comes to us with a request after the development team has already started that work. Pretty much everything can change up until the point the sprint starts. But once we've agreed not to change anything within the sprint, the team can keep moving and not change it. Otherwise, if you do change it, you end up in this cycle of not really being able to get any work done. Then, the client gets upset that they're not seeing results. It's because things keep changing, and it ends up snowballing into this huge thing.


Tyler: There's also the instance when, either for speed or out of a lack of awareness, something was built in a way that's not consistent with the design. It's basically a visual QA: does this look the way the designer designed it? And if not, what are the reasons, and are they worth it or not? If it saved the developer 16 hours to do it one way rather than the way the designer did it, that might be worth it if it's not critical to the experience. We can then log it as a bug and come back to it in the next sprint. The trick is that the whole team has to be on board with that.


So there's a lot of back and forth early on to say, ‘How can we do this differently?’ or ‘I really like how you did that.’ Our designers are really good about asking, ‘Are there better ways that I could do this?’



Q: Thinking about the psychological aspect of ‘definition of done,’ does it help team members feel more accomplished to be able to cross off several things on a list, or is it more linear, where you can only get to certain features after you finish others?


Tyler: Yes. We write stories as small as meaningfully possible. We don't want to make them so small that it feels like busy work removed from solving meaningful problems, but we do want them small enough that team members can cross several of them off in any given sprint. You feel like every day you're knocking off a couple of stories. That psychological progress is important to the developers, the PM, and the client. And it also shows you how well you know a piece of work.


Tuck: Yes, there are sometimes dependencies where you have to do things in a certain order. It doesn't mean that things can't be broken down. Where I want to make a distinction, though, is that oftentimes a PM will break down stories the way they think they should be broken down. They might still end up being big, and that's why we go through an estimation and review session. I might even intentionally write a story that I know probably needs to be broken down further. I do this because I'm not sure how the dev team would want to break that story down, but I want to make sure it's all included. I might have a huge story, but then as a team we ask, ‘how can this be broken down further?’ because the devs gave it a big estimate. And one of the reasons is absolutely the psychological aspect. You need to be able to move things along, and you want to have that sense of accomplishment throughout the sprint.


But the other piece isn't just psychological: having multiple team members able to work on things in parallel is important. If you're working with a bunch of big stories, they're not actually going to move over for testing until the very end of the sprint. So you end up passing everything over to testing at once, and they don't have time to test everything before the sprint ends; they need time to do their job. Breaking stories down also helps the flow of testing, because test engineers can be testing throughout the sprint. There are still stories that get pushed over on the last day that the test engineers scramble to get done, but you want to try to avoid that.


You want to make sure that no story is in progress for more than a couple of days, even a really complex one. If something is hanging out ‘in progress’ forever, the PM should investigate whether there's a blocker or the team is running into something that hasn't been discussed. That can also help make sure the team hits the results they need to in the sprint.
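Since the team tracks its work in Jira (as the retro discussion below mentions), a PM could even automate that check. Here's a small sketch against Jira's REST search API; the site URL, credentials, and exact JQL are placeholders to adapt, not a prescribed setup.

```typescript
// Sketch: list issues that moved to "In Progress" more than two days ago.
// Requires Node 18+ (built-in fetch). All credentials are placeholders.
const JIRA_BASE = "https://your-team.atlassian.net"; // placeholder site
const AUTH = Buffer.from("you@example.com:API_TOKEN").toString("base64");

// JQL: currently in progress, and entered that status before two days ago.
const jql =
  'status = "In Progress" AND status CHANGED TO "In Progress" BEFORE -2d';

async function findStaleStories(): Promise<void> {
  const res = await fetch(
    `${JIRA_BASE}/rest/api/2/search?jql=${encodeURIComponent(jql)}`,
    { headers: { Authorization: `Basic ${AUTH}`, Accept: "application/json" } }
  );
  if (!res.ok) throw new Error(`Jira search failed: ${res.status}`);
  const data = await res.json();
  for (const issue of data.issues ?? []) {
    // Each hit is a candidate for a blocker conversation at standup.
    console.log(`${issue.key}: ${issue.fields.summary}`);
  }
}

findStaleStories().catch(console.error);
```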


Q: So to get back to the original question of who sets the ‘definition of done’ in an agile product team…are you guys saying that it's really the entire product team, that it can change along the way, and that the client is the ultimate decision-maker on what gets prioritized?


Tuck: Close. The only thing I would tweak is that when the sprint starts, the items within that sprint don't change. That ‘definition of done’ is set in stone as soon as you hit the start button. Anything outside of that sprint, in the backlog or getting ready to be done, can change up until the point in time where you're starting the sprint. When you hit the ‘Go’ button, that's when the product team and the client all agree, ‘this is the definition of done for this set of stories.’


At the end of the sprint, I'm going to evaluate whether we hit our goals based on whether everything we agreed on has moved to done. Anything new becomes a new item in the backlog or an adjustment to the acceptance criteria on other stories in the backlog.



Tyler: It's the team's job, because if the team isn't reviewing things at the end before they go into production, then it's not done. The approval process needs to bounce back up through each level, where everybody nods to say, ‘yes, that fits my definition of done and every criterion I had for this feature.’


It's not just the test engineer at the end saying, ‘Yep, it's good’ while I never get to look at it. If a designer sees something different in prod or in stage than what they designed, they absolutely have grounds to ask, ‘Why is it that way?’ They don't have grounds to hold it up and say, ‘you can't do it like that,’ but they do have grounds to ask questions and say, ‘my intent in doing it the way I did was this.’ They might say, ‘As I understand it, this doesn't meet the intent we agreed on earlier. Can we talk about this?’ Sometimes the designer will have to concede to the greater reasoning because it's still consistent with the strategy.


Q: After a test engineer has reviewed, is there a big meeting where everyone has a chance to look at all this together?


Tyler: We do—every two weeks in our sprint kickoffs or sprint retros. Crema does both in the same meeting, though not all teams do that. Some teams wait until the end of the project to have a retro. For every sprint kickoff, the product team will gather in a room, the developers will demo the work and explain how it works, and the designer will ask questions and provide feedback. Before that meeting, the designer should already have some visibility into the work in progress via a stage environment or conversations with the developers.


Those retro meetings are when everybody says, ‘Yeah, that looks good’ or asks questions. They take place every two weeks. It goes back to everybody, everybody checks a box in Jira, and it works its way back up.


Tuck: We're showing these backward steps, but it's inherently built into the flow of the work. Reviewing is part of it, but if everything's going fine, there shouldn't really be much of a disconnect. The places where there might be disconnects probably need to become new stories or new features: things we just didn't think of until we were actually seeing it live. Those would need to be new items. We still satisfied the requirements of what we all agreed on.

Conclusion

We believe the entire product team should set the DoD together, with flexibility for changes prior to the sprint starting. The client is the ultimate decision-maker on what gets prioritized, but the DoD is set in stone once the sprint starts. It's the role of the product manager to evaluate whether goals were hit at the end of the sprint. This process spreads accountability across the team and ensures there's buy-in from every team member before the sprint starts.


*If you have additional questions about setting the DoD or want to learn more about how we do things at Crema, reach out to Tyler at tyler@crema.us or Tuck at tucker@crema.us.

