In my recent assignment as an Agile coach, I worked with teams that were a good mix of experienced Agile members and those new to Agile or the domain. According to our transformation road map, all members had taken Agile training to learn about the Agile Manifesto and Agile principles, including hands-on workshops on various ceremonies. However, once they started working on sprints, I found that members were more concerned about application life cycle management (ALM) tools than about being Agile and following Agile methods. They asked such questions as:

  • How should we create releases or iterations?

  • Can we increase the sprint length because there is a lot of work to do?

  • What happens to defects in the ALM tool?

  • Can we log unit testing or internal quality assurance (QA) defects in the ALM tool?

  • How can the QA team test code in the development environment?

  • How do we write user stories using fields available in the ALM tool?

  • What is the relationship between story points and task effort?

  • If a story is not complete, how do we manage the story in the ALM tool?

  • Can we have system development life cycle (SDLC)-like tasks under a user story?

In retrospect, I think I was successful in coaching them on Agile and Scrum basics, but they were still not thinking in the context of Agile principles or relating their questions back to those underlying principles.

Agile talks about delivering business value in each sprint and release. This means that the team needs to look at functionality rather than just time lines. Since the teams and the product owner were not able to think in terms of smaller user stories (my rule of thumb: one user story should be done in two to three days), teams always ask for extra time, or user stories spill over to the next sprint. This eventually leads to waterfalling the sprints: development happens in one sprint, and QA or testing happens in the next.

Increasing the sprint length is not the solution to every problem. Rather, it adds risk and increases work in progress, because teams start multiple user stories at a time instead of swarming on one user story (due to a lack of cross-functionality), which leads to low acceptance and lower velocity.

Sticking to a Definition of Done at the sprint or release level helps teams ensure that stories are really done, and that only those stories are moved to the next phase of review or presented in a demo. Here the ScrumMaster's role is important, but in practice I still see ScrumMasters pushing user stories into review as the team runs out of time. In such cases, we increase technical debt and build the product on a weak foundation. When user stories are written solely by the product owner (PO) or a proxy PO, dependencies are not thought through properly, leaving teams to fall back on spike stories or spike sprints to uncover the challenges. These dependencies then give rise to issues such as incomplete user stories and low team velocity.

To add to the complexity, developers sometimes don't know much about eXtreme Programming (XP) practices such as test-driven development (TDD), the SOLID design principles, simple design, peer code review, pair programming, mocking, refactoring, evolutionary design, and so on. The team looks at dependencies as impediments rather than as opportunities to apply XP practices. Yet for dependent tasks or pieces of software, teams still need all the puzzle pieces in place to deliver the release.

I have experienced projects requiring Oracle package implementation and customization in which development teams asked how to perform TDD. It is fairly simple, and wise, to use simple test procedures to test the actual functional code. But that mindset is often still missing in teams; they look at TDD as overhead rather than as an enabler over the long run. All members should take XP practices seriously to ensure the success of the Agile framework.
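
To make the test-first idea concrete, here is a minimal, hypothetical sketch (the apply_discount function and its discount rules are invented for illustration; in an actual Oracle customization the equivalent would be a simple test procedure wrapped around the PL/SQL package):

    import unittest

    # Hypothetical functional code. The tests below were written first,
    # and the function was then implemented just far enough to pass them.
    def apply_discount(amount, customer_tier):
        """Return the payable amount after applying the tier discount."""
        rates = {"gold": 0.20, "silver": 0.10}
        return round(amount * (1 - rates.get(customer_tier, 0.0)), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_gold_customer_gets_twenty_percent_off(self):
            self.assertEqual(apply_discount(100.0, "gold"), 80.0)

        def test_unknown_tier_pays_full_price(self):
            self.assertEqual(apply_discount(100.0, "basic"), 100.0)

    if __name__ == "__main__":
        unittest.main()

Nothing about this is heavyweight: the two tests double as executable documentation of the business rule.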

The QA role in Agile is proactive rather than reactive. We expect QA to enable the development process by asking the right questions of the development teams and by collaborating more with the PO. In reality, though, QA often still works in silos, declining open invitations from the development team to test the software on developers' machines to ensure quality code. This causes rework and a communication divide.

In an ideal Agile world, cross-functional teams do all the activities, such as development, testing, and deployment, which means everyone knows how to code, whether it's development-related code, automation code, or deployment scripts. However, QA teams continue to look to development teams for simple scripts they can use, and they don't want to go beyond the traditional way of working or learn new coding languages. Given these limitations, what we test isn't really 100 percent tested, even though we might claim that our test cases cover 100 percent of it. The Agile testing quadrants and exploratory testing remain unexplored by QA teams, who still act as quality gatekeepers rather than as software enablers.
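
For perspective, the "simple scripts" in question are often only a few lines long. Here is a hypothetical smoke test (the health-check URL is invented; substitute the service under test) of the kind a QA member could write and own without waiting on the development team:

    import sys
    import urllib.request

    # Hypothetical health-check endpoint; substitute the service under test.
    HEALTH_URL = "http://localhost:8080/health"

    def smoke_test(url):
        """Return True if the service answers with HTTP 200."""
        try:
            with urllib.request.urlopen(url, timeout=5) as response:
                return response.status == 200
        except OSError as err:  # connection refused, timeout, bad hostname
            print(f"Smoke test failed: {err}")
            return False

    if __name__ == "__main__":
        sys.exit(0 if smoke_test(HEALTH_URL) else 1)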

Another pitfall I see is that teams break user stories down into SDLC-phase tasks such as analysis, coding, review, QA test preparation, QA testing, and defect fixing. Perhaps this happens to show that each member is working at full capacity for the sprint.

As a PO or an outsider to the team, I can't make out anything from such tasks. A PO surely won't know whether the team is covering Scenario 1, Scenario 2, or some other scenario entirely. What is important is for the team to assure the PO and stakeholders that it has understood the user stories (the business requirement) and knows exactly what it needs to develop and test (all possible permutations and combinations). In that regard, it's imperative that the team create functionality-level tasks rather than phase-level tasks: a task such as "validate login with an expired password" tells the PO far more than a generic "QA testing" task ever could.

Each sprint, we expect shippable software. Consider the scenario in which few (or no) user stories are left undone: the PO's and stakeholders' decision making becomes easy, because they can review functionality-level tasks rather than SDLC phases.

ScrumMasters who come from a product-manager role often attempt to find a relationship between story points and task-level estimates before the team has even started working in sprints. They tend to forget the basics of estimation and sizing and reach for the easier path of correlating the two, which doesn't work in practice.

Sizing is used to judge only complexity (along with any other parameters the team defines for itself) and to aid predictability about what the team can take on in each sprint. Carrying that product-manager mindset into the ScrumMaster role, such ScrumMasters also tend to hand the team their "expert opinion" on sizing during sprint planning or planning poker exercises.
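
To make "predictability" concrete, here is a small sketch with invented numbers showing that velocity is observed, not derived from task hours, and that it alone is enough for forecasting:

    # All numbers are hypothetical, for illustration only.
    recent_velocities = [18, 22, 20]   # story points accepted in the last three sprints
    backlog_points = 120               # points remaining in the release backlog

    avg_velocity = sum(recent_velocities) / len(recent_velocities)  # 20.0
    sprints_needed = backlog_points / avg_velocity                  # 6.0

    print(f"Average velocity: {avg_velocity:.1f} points per sprint")
    print(f"Forecast: about {sprints_needed:.0f} sprints to finish the backlog")
    # Note that no conversion from story points to task hours was needed.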

All Chickens and Pigs (stakeholders and the Scrum team) need to look at Agile from a completely different perspective, not just from an ALM-tooling perspective. The ALM tool is just a means to a big visible information radiator, and a vehicle for how well you understand and implement the Agile principles.

 

About the Author

Alhad Akole, Zensar Technologies Ltd
