This is more of a general question about agile best practices, so feel free to redirect me if there is a better space to bring it up.
My problem is that I'm having trouble breaking down a user story into smaller user stories, without getting into the jungle of technical details.
To give some background, I'm providing hardware support for a new device in my driver API. The API is generally the same across all supported devices, so I already know what my high level user stories need to be. The problematic user story that I'm having trouble breaking down is, "As a calibration API user, I want to self calibrate the device." (I decided not to define this user story as an epic because there is a lot more functionality related to calibration.)
From the user's perspective, self calibration is simple: call the API function that performs self calibration! There are no options or parameters; the function does exactly what it says it will do. However, implementing self calibration is actually quite involved from a software perspective. It requires implementing a complex procedure with lots of math, which historically has taken several months of research and back-and-forth with analog and digital engineers in order to define the procedure. In terms of work breakdown, I'd like to capture this work somehow.
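To make the mismatch concrete, here's a minimal sketch of what I mean. All names and the internal steps are hypothetical stand-ins (the real procedure is months of math), but the shape is accurate: one parameterless public function fronting several internal stages:

```c
#include <assert.h>

/* Hypothetical public API: names are illustrative, not the real driver's. */
typedef enum { CAL_OK = 0, CAL_ERR = 1 } cal_status;

/* Internal steps the end user never sees (stubs standing in for the
 * complex measurement-and-math procedure). */
static cal_status measure_reference(double *offset) {
    *offset = 0.012;                /* pretend hardware measurement */
    return CAL_OK;
}
static cal_status solve_constants(double offset, double *gain) {
    *gain = 1.0 / (1.0 + offset);   /* pretend "magic constant" math */
    return CAL_OK;
}
static cal_status write_constants(double gain) {
    (void)gain;                     /* pretend write to device EEPROM */
    return CAL_OK;
}

/* The entire user-facing surface: no options, no parameters. */
cal_status device_self_calibrate(void) {
    double offset, gain;
    if (measure_reference(&offset) != CAL_OK) return CAL_ERR;
    if (solve_constants(offset, &gain) != CAL_OK) return CAL_ERR;
    return write_constants(gain);
}
```

Each internal step is itself sprint-sized work, but none of them is visible at the API boundary, which is exactly why the single user story doesn't decompose cleanly along user-visible lines.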
My first thought for capturing the work was to bucket all the tasks under the high level user story, but that seems like the wrong direction, especially since I already know the user story isn't going to fit within a single sprint, probably not even two! My second thought was to break down the user story even further, but I'm concerned about revealing too much technical detail in the child user stories that the user wouldn't (and shouldn't) know about. For example, I could write this child user story, "As a calibration API user, I want to self calibrate the device so the magic constants are accurate". This user story is big enough to count as a child user story, yet small enough to fit within a sprint. However, end users should know absolutely nothing about the *magic* constants!
So, my real question is this: is my concern about revealing too much technical detail irrational? Am I thinking about user stories all wrong?
Any feedback is appreciated!
EricNash, kamwi02, glola01, grama23: what do you think?
I'm curious as to the source of your concern about "revealing too much technical detail". Is there a reason why the end user of the API would ever even see the user stories in Agile Central? Ultimately the user story needs to provide the development team and the product owner with enough information to deliver the right increment of work and to know that it meets the relevant acceptance criteria.
This is just my superficial impression but the ability to self-calibrate the device looks feature-sized to me. I wouldn't expect the end user of that feature to have any interest or concern with the increments of software development work that it takes to deliver that feature. Those increments need to be appropriately sized and described so that the developers can implement the feature, and the teams are the audience for that, not the user. Am I off-base there?
I'm curious as to the source of your concern about "revealing too much technical detail".
After reading through your comments and Michael's, I think my problem was that I assumed the only user was the end user. I was imagining sitting down with an end user and churning out user stories as if the API was being developed from scratch: what features would an end user reasonably ask for in this scenario? However, it's clear to me now that "user" in a user story can be any entity interested in some feature for some reason.
Those increments need to be appropriately sized and described so that the developers can implement the feature, and the teams are the audience for that, not the user. Am I off-base there?
That makes sense. Understanding the audience and consumers of user stories definitely helps.
These are my "go-to" notes when I'm having trouble.
FEEDBACK (large stories - may be an epic)
F - Flow - Process Flow - How Story fits into an Application Workflow
E - Effort - Developer's Level of Effort or the Value of Functionality Items
E - Entry - How the Customer Sees or Enters Data
D - Data Operations - Actions like Read, Update, Delete, Ingest, Export, Notify
B - Business Rules - Breakdown Technical Complexity
A - Alternatives - Additional ways client request/value can be delivered
C - Complexity - Diamond (Greater Value) Mine - Split by unique value added
K - Knowledge (spikes) - Need more information
INVEST (good user stories)
I - Independent - Independent, but not necessarily functionally independent.
N - Negotiable - How to deliver the customer request is not the focus.
V - Valuable - Clear client value, not a blanket statement of a task performed.
E - Estimable - If you're struggling to estimate, the story is too big or not clearly defined.
S - Small - A user story should be deliverable within a two-week sprint.
T - Testable - Does it meet acceptance criteria or deliver customer value?
As a User* = a person, application, down/upstream system that will receive added value
I want = A SMALL piece of a functionality or expected result from an action
So that = A specific Benefit that provides Value (that refines scope)
*The END USER, ROLE, or can even be a SYSTEM -
It can be an up or down-stream system that is expecting some type of input
"As a calibration API user, <-- is this the true receiver of value?
I want to self calibrate the device <--- Is this a small part of a function or expected result?
so the magic constants are accurate". <--- is this a new or added benefit?
This was the key piece of understanding I was missing about user stories. I thought the user stories needed to be written from the end user's point of view.
I'd break this functionality down by having the self-calibration get more and more complex with each story. I find it helps to go back to why we decompose work - to get feedback early and often. That may mean showing customers working software to learn if you're building the right thing as well as removing risks in your product by incrementally delivering value.
For self-calibration, the story breakdown might look like this:
I think you could avoid stories that use a system as the actor by having a story like:
As an End User
I want the self-calibration to align metric X to 95% acceptance
So that the system avoids errors of type Y
Hopefully that helps, and if you'd like more details, please let me know and I'm happy to talk more!