Fitting Scrum for Software Development — Part II
Many software teams use Scrum, but it comes with challenges. While it originated in software development, its creators made it broad enough to work across industries. The idea? Teams should adapt and improve it while sticking to core principles. But in reality, that rarely happens. Instead, teams get stuck in rigid processes, perfecting rituals instead of shaping them to fit their needs.
Another big consequence is that Scrum emphasises management while overlooking engineering practices. Without strong engineering foundations, it barely works for software development [Fowler09].
In this series, we adjust Scrum to better fit software development, integrating engineering practices where needed.
In the first article, we explored how to make daily stand-ups more efficient — focusing on blockers and tracking only the tickets that matter. We also advised against breaking backlog items into generic tasks like development, testing, or support. We recommended considering a ticket done only after shipping it to production. That likely raised even more questions.
How should we break them down instead? With many small, value-driven tickets, how do we ship them despite dependencies? And how can we name tickets in a way that is clear, avoids confusion, and conveys maximum information?
Let’s tackle those questions.
Break down by acceptance criteria.
Earlier we talked about tracking value instead of activities. Sounds great in theory, but in practice? Not so simple. Scrum teams work in time-boxed iterations, usually two weeks long. To fit within a sprint, the team needs to break down PBIs (product backlog items) into smaller pieces. And this is where teams often struggle.
The easiest — and most tempting — way to split work is by implementation activity. This allows for endless breakdowns, right down to a single line of code. But this approach comes with several problems.
Let’s look at an example. Suppose we have a PBI: “Provide an online payment capability via credit cards through integration with a third-party PSP (payment service provider).” After brainstorming, the team breaks it down into the following tickets:
- The UX of the payment page.
- [Frontend] Payment page.
- [Backend] Create new supporting objects.
- [Backend] Create a new context factory and builder.
- [Backend] Integrate with PSP APIs.
- [Backend] Create an endpoint for starting a payment session.
- [Backend] Create PAYMENT_LOG database table.
- Testing.
At first glance, this might make sense to the developers. But it’s already too cryptic for the product manager (PM), engineering manager (EM), or even fellow engineers. The next day, after starting the implementation, the developer might choose a different approach. This can make some tickets outdated before work even begins.
Even when the expected business outcome is clear (which isn’t always the case), implementation is a learning process. You explore the problem, develop a mental model, start working, learn new things, adapt, and repeat. That means predefined implementation steps soon become outdated, leading to constant adjustments.
Before you know it, “activity” tickets flood your board, and no one — especially the reporter — understands them. Yet, we need to keep track of our progress towards the Sprint Goal. This results in many long stand-ups and frustrating status meetings.
Another issue? A false sense of progress. Teams complete a lot of small activity tickets, giving the illusion of movement. Velocity looks good; engineering managers are happy. Yet velocity alone does not give us reliable data for tracking progress.
The biggest issue with this approach is that it forces us into a waterfall style. Until the team completes all the tickets, nothing is functional. This blocks acceptance testing, hypothesis validation, and experiments. The team cannot deliver early prototypes to see how partners interact with them. How is a PM supposed to do their job if anything meaningful only gets developed after three months?
Try breaking down PBIs by acceptance criteria instead. Here are some guidelines on how to do this:
- Split by CRUD (create, read, update, delete) operations.
- Split by happy path vs. edge cases. Happy paths are often enough for a proof of concept or even production launches. For example, skip handling a payment method’s amount limit by returning a general error. Later, refine it with a more specific message.
- Split by platform or device. Start by supporting the easiest device and expand to others later.
- Split by roles or personas. Focus on the most important roles first, and leave secondary roles for later.
- Split by data size or scope. Begin with smaller, less complex customer data and expand to larger, more complex cases as needed.
- Split by level of functionality. Put in place basic functionality first, such as simple fields or algorithms, and add more sophisticated features later.
- Split by geography. For example, local payment methods vary across countries, so this can be a natural split.
You can also approach splitting using these heuristics:
- Every sentence is a candidate for a separate story.
- If you have an enumeration, each item is a candidate for a separate story.
- If there are conjunctions like “and” or “or,” each clause they connect can become its own story.
In our example, one possible way to split the PBI into stories would be:
- “Users should be able to select a single invoice and pay it online via credit card in the Netherlands.”
- “Users should be able to select a single invoice and pay it online via credit card worldwide.”
- “Users should be able to select many invoices and pay them online via credit cards worldwide.”
- “Users in the Netherlands should be able to pay many invoices via iDEAL.”
- “Users in China should be able to pay many invoices via WeChat.”
- “After a successful payment, a confirmation email should be sent to the partner’s email address.”
- “A user should be able to store her payment details so that she does not need to re-enter them.”
The above suggestion is not the only possible approach; consider what is meaningful in your context. In ours, it made sense to first implement the payment of a single invoice because it already covers 95% of partners’ needs and is, process-wise, very different from paying many invoices. It was also useful to split by country and payment method, as different payment methods behave differently.
Even if we change our implementation midway, the tickets remain the same. The PM can do the acceptance testing or demo features as they are ready to get early feedback.
The biggest advantage is that the team can roll out MVPs (minimum viable products) to verify ideas. In the above example, the first ticket already lets us test the end-to-end process and cover the needs of 95% of our customer base. Even if we had to switch to another critical feature after delivering that first ticket, we would still have produced most of the impact.
There is one caveat, though. If you break a high-level PBI into many small stories, you often cannot release each one to production on its own. And if you are not shipping stories to production, you are losing fast feedback cycles. There is a way to fight this. Let’s discuss it in the next section.
Use feature flags to keep stories shippable.
Imagine you’ve broken down a PBI using acceptance criteria:
- “Partners should be able to select a single invoice and pay it online via credit card in the Netherlands”.
- …
- “After a successful payment, a confirmation email should be sent to the partner’s email address”.
Some stories can ship immediately — great! But often, you can’t deliver each one on its own. Take the confirmation email, for example. It must go out every time a payment succeeds.
You might be tempted to bundle “successful payment” and “email” into one big story. Seems simpler, right? But that’s how you end up with massive tasks that drag on for weeks (or months) while the team wrestles with dependencies. Eventually, things go sideways, and everyone starts merging stories out of fear. At that point, why bother breaking them down at all?
One easy fix is to relax the definition of done. Consider an Increment finished once it gets to the QA or staging environment. But this approach causes more problems than it solves. It clutters your environment, complicates the process, and — worst of all — delays valuable customer feedback.
A good definition of done should assume that an Increment is shipped to production.
This is where feature flags save the day. They’re easy to adopt and make breaking down work actually work.
If a story isn’t immediately shippable — hide it behind a feature flag. A simple rule with a big impact: it lets you break down stories without worrying about dependencies at all.
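To make this concrete, here is a minimal sketch of the idea in Python. The flag names, the in-memory registry, and the payment function are all illustrative; a real team would typically use a feature-flag service or configuration system instead.

```python
# A minimal sketch of gating an unfinished story behind a feature flag.
# Flag names and the in-memory registry are illustrative only.
FLAGS = {
    "single_invoice_payment_nl": True,     # story shipped and enabled
    "payment_confirmation_email": False,   # merged to trunk, still dark
}

def is_enabled(flag: str) -> bool:
    """Unknown flags default to off, so half-built work stays hidden."""
    return FLAGS.get(flag, False)

def complete_payment(invoice_id: str) -> str:
    # The core flow ships to production as soon as it is done...
    result = f"invoice {invoice_id} paid"
    # ...while the dependent story waits behind its flag.
    if is_enabled("payment_confirmation_email"):
        result += "; confirmation email sent"
    return result
```

Once the confirmation-email story is finished, flipping its flag to `True` releases it without a separate deployment, and the dead flag can then be removed from the code.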
Trust me, this small investment will make life easier for everyone. This is one of the engineering practices that makes Scrum actually work.
It’s important to point out that just because it [Scrum] doesn’t include technical activities within its scope should not lead anyone to conclude that it doesn’t think they are important. — Martin Fowler [Flaccid Scrum]
Feature flags do more than untangle dependencies. They help when requirements change mid-development (which they always do). Or when halfway through, you discover something unexpected (which you always will). With feature flags, you can ship the working parts without exposing the half-baked ones.
Feature flags are essential for adopting trunk-based development, continuous delivery, and deployment automation. These practices are strong predictors of high delivery performance.
That said, don’t go overboard. Too many flags, and you’ll end up with a messy, hard-to-maintain codebase. Use them when needed, but also plan work so you might not need them at all, such as building the payment flow before the confirmation email that depends on it. And please, clean up old flags when you’re done. It’s a minor hassle, but the pay-offs — faster releases, smoother collaboration, and better risk management — are worth it.
See the bigger picture with coloured cards.
You should be able to see at a glance what the team is working on at any given moment. Are we drowning in bugs? Are we focusing too much on technical tasks? Do we have a balanced scope, with a healthy mix of features and improvements? To achieve this kind of visibility, try this simple trick.
Start by defining the key work item types that matter most. For most teams, three types are enough: stories for new features, bugs for fixing defects, and technical improvements for maintaining long-term stability. Stories show work that adds value for the customer. This can be a functional feature, or a non-functional improvement, like better security or performance. Bugs focus on addressing known issues or incidents affecting users. Technical improvements help reduce technical debt, clean up outdated features, or support future development, even if there’s no immediate impact on the customer.
Pic. 1 You should be able to see at a glance what the team is working on at any given moment. In the picture, we can see a healthy balance: feature tickets (green), a few defects (red), and technical improvements (blue).
Once you’ve identified these types, map them to your JIRA tickets. To make things even clearer, colour-code your tickets. I like using green for features, red for bugs, and blue for technical improvements. This way, a quick glance at the board instantly shows where the team’s focus is. If there’s too much red, it might be time to rethink priorities. If everything is blue, customers might start wondering when they’ll see something new. Keeping this balance visible helps guide better decisions without adding extra process overhead.
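In JIRA, for instance, board card colours can be driven by JQL queries (Board settings → Card colours → “Queries”). A sketch of such a mapping, assuming the issue type names used here exist in your project (“Technical Improvement” in particular is often a custom type):

```
Green:  issuetype = Story
Red:    issuetype = Bug
Blue:   issuetype = "Technical Improvement"
```

Any ticket matching the first query gets the first colour, and so on down the list.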
Make ticket names customer-recognisable.
We want our team board to be easily digestible by everybody (the team, stakeholders, leadership). We sometimes need to look back at the past. This could be for a retrospective or during end-of-year assessments. The record of completed work should also be clear and simple.
If we followed the approach of tracking only value-added tickets, then we should already have a clear board and the log of completed work. Yet, we can still improve it a bit further.
Here are some recommendations.
Keep the ticket name to 8–10 words at most. There is no point in a long name that will be truncated on the board anyway. The name should be short, yet convey maximum information.
Use well-known abbreviations, such as NL for the Netherlands or COO for change of ownership, to keep names short.
Pic. 2 Ticket names should be short enough to fit on the card. JIRA cuts off names that are too long when displaying them.
Add the most important information at the beginning, since the text at the end is more likely to be hidden when displayed on the board.
The ticket name must show its added value. It should highlight key differentiators and limitations, like functionality or geography. It shouldn’t be too vague or too narrow. Take the story “Partners should be able to select a single invoice and pay it online via credit card in the Netherlands”. The ticket name of “Single invoice payment in the NL” is better than something like “Improve payment experience”. The former captures the most important aspect of the ticket — the user will pay only one invoice at a time and only in the Netherlands.
The ticket name should not dictate the solution or implementation details unless they are part of the acceptance criteria. This is especially important for bugs. We usually create a ticket at an early stage, when we do not yet have full information about the problem. The eventual solution may well not be the one that first came to mind when we filed the bug. So it is better to name bugs after facts or symptoms.
Imagine you get an incident where a user reports that she is getting an internal error while trying to open a page. Even though you are pretty sure that it is a validation problem, you should name it after the observable symptoms/facts. So instead of “Fix loading of the partner list”, name it “Internal error while opening page xxx”.
Technical tasks often suffer from being too generic, with no defined scope. Think of tickets like “Clean up code”, “Improve performance”, or “Refactor the code”. Who knows what gets done under such tickets! This is generally a symptom of a problem we touched on in the previous article: focusing on the process instead of the value. Make sure you understand what you are doing and what the acceptance criteria are, and break the ticket down to be more specific. The name should convey enough information about a technical task’s acceptance criteria.
In most situations (bugs being the exception), tickets should be linked to Epics. JIRA displays the epic name on the ticket card on the Agile board, adding extra context and squeezing more information into the card.
Conclusion
Great teams don’t follow a framework; they shape it to fit their needs. They experiment, adjust, and refine their ways of working. No team is the same as another, and no team is the same as it was a year ago. Continuous improvement isn’t a nice-to-have — it’s essential. A process that worked yesterday might not work tomorrow, and the best teams are the ones that evolve. Methodologies provide guidelines, but making them effective is an ongoing effort. At the end of the day, success isn’t about picking the “right” process — it’s about improving the one you have.