Agile Analytics for the win
In the olden days, a businessperson would decide what should be built, a coder would code it, a user would use it (maybe), and then an analyst would try to figure out what happened. Today, high-performing companies charter interdisciplinary teams to iteratively improve outcomes in a particular area using agile. That’s a bit abstract, so here’s a more tangible, testable view of what it means in practice. You’ll know agile is working if:
- the team is using continuous design to improve the percentage of features that see high engagement
- the team is using agile to increase its velocity in terms of feature output
- the team is using DevOps to create a more continuous pipeline and release to users more frequently
Where do analytics come into play? Not at the end, and not after the fact. A high-functioning team is using agile analytics to bring focus and coherence to their work across design, coding, and delivery.
How do you know if you’re getting to agile analytics? I’ve observed seven focal points with teams that have a high-functioning practice:
1. All Ideas are Testable
When I asked a CTO friend of mine about his company’s practice of agile, he told me: ‘You know, you can’t just take a nine-month idea, slice it into two-week iterations, and get agile.’ We’re bad at making our ideas testable. I’ve variously been a founder, CEO, advisor, and investor for decades, and when I get a new idea I still start with ‘Wouldn’t it be cool if…’.
And that’s OK, but not once you decide to develop that idea. At that point, render your idea testable. I like the formulation ‘If we [do something] for [some specific persona or segment], then they will [respond in some specific, measurable way].’ For example, if we ran a company that fixes air conditioning systems and decided it would be cool (pun intended!) to make an app for our field technicians, we might render something like ‘If we build an app for the field technicians, then they will use it and it will increase their billable hours.’
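To make that concrete, here’s a minimal sketch of a hypothesis rendered as a structured record with an explicit, measurable success criterion. The metric and threshold are assumptions for the air-conditioning example, not figures from a real project:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """If we [do something] for [persona], then they will [respond in a measurable way]."""
    action: str               # what we build or change
    persona: str              # who it's for
    expected_response: str    # the behavior we expect to see
    success_metric: str       # what we'll actually measure
    success_threshold: float  # the pre-agreed bar for calling the idea validated

# The air-conditioning example from above, with an assumed (hypothetical) threshold.
tech_app = Hypothesis(
    action="build a mobile app for logging service calls",
    persona="field technicians",
    expected_response="use the app on most service calls",
    success_metric="relative lift in billable hours per technician per week",
    success_threshold=0.05,  # e.g., at least a 5% lift over the current baseline
)
```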
2. Big Ideas get Tested
This is the essence of Lean Startup: in order to minimize waste, ideas get tested with product proxies (MVPs) before they’re candidates for being built. And a lot of companies have a team off somewhere doing something Lean Startup-ish. But do the big ideas get tested? The ones that the company is investing the big bucks in and hoping will drive its organic growth? That’s an important question.
3. All User Stories are Readily Testable
The user story serves as a centerpiece for iterative development. It has the format ‘As a [user persona], I want to [do something], so that I can [achieve some testable reward].’ That last clause about the testable reward? That’s super important and a cornerstone of agile analytics.
For every user story, it should be clear how you would prototype that story, put it in front of a test subject, prompt them with a goal, and see if they achieve that goal or not.
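One way to make ‘readily testable’ concrete is to pair each story with the prototype test you’d run on it: the prompt you’d give a test subject and the observable outcome that counts as success. Here’s a minimal sketch, with a hypothetical story and criteria for the field-technician app:

```python
from dataclasses import dataclass

@dataclass
class StoryTest:
    story: str      # "As a [persona], I want to [do something], so that I can [testable reward]"
    prototype: str  # what you put in front of the test subject
    prompt: str     # the goal you give them, without coaching
    success: str    # the observable behavior that counts as achieving the goal

# Hypothetical example for the field-technician app.
log_call = StoryTest(
    story=("As a field technician, I want to log a service call from my phone, "
           "so that I can bill the hours without end-of-day paperwork"),
    prototype="clickable mockup of the call-logging flow",
    prompt="You just finished a repair at a customer site. Record it so you get paid for it.",
    success="completes the flow unaided in under two minutes",
)
```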
4. Key User Stories get Tested
And do key user stories get tested? I do a lot of work with teams, and we spend time on writing more testable user stories. I’ve never met anyone who thought writing better user stories was a bad idea. But it’s the teams that make a habit of testing early and often (with interactive prototypes, for instance) that actually stick with the practice of making their stories testable.
Beyond the obvious benefit of using evidence to find the right design early, prototype testing also creates a more focused and coherent transition to analytics in the software once it’s released.
5. Experiments are Instrumented into Code
Instrumenting analytics into code is easy and affordable, and most companies do it. That said, it’s the team that’s carrying strong, user-centric hypotheses through their product pipeline that will pick the right focal points to instrument, so that the observations they collect actually support their experiments.
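As a minimal sketch of what that instrumentation can look like (assuming a plain event log rather than any particular analytics vendor), the key move is tagging each event with the hypothesis it helps test, so every observation has an experiment waiting for it:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
analytics_log = logging.getLogger("analytics")

def track(event: str, user_id: str, hypothesis_id: str, **properties) -> None:
    """Emit one analytics event, tagged with the hypothesis it helps test."""
    analytics_log.info(json.dumps({
        "event": event,
        "user_id": user_id,
        "hypothesis_id": hypothesis_id,  # ties the observation back to an experiment
        "ts": datetime.now(timezone.utc).isoformat(),
        **properties,
    }))

# Hypothetical event from the field-technician app.
track("service_call_logged", user_id="tech-042",
      hypothesis_id="tech-app-billable-hours", duration_minutes=45)
```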
For example, one project our Masters of Science in Business Analytics students are working on is the US FDA’s ‘MedWatch’ site. On it, users submit information about adverse reactions to drugs. Let’s say we’re trying to make it easier for a busy doctor to submit these reactions in order to increase the data we collect. What should we A/B test? There are a lot of ‘interesting’ possibilities, but without validated learning about what that doctor is thinking and wanting when they visit the site, we’re unlikely to invest in A/B tests that really move the needle on performance.
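Once validated learning has told you which change is worth testing, the mechanics are simple. Here’s a minimal sketch of deterministic A/B assignment for a submission flow; the variants and the experiment name are assumptions for illustration, not part of the actual MedWatch site:

```python
import hashlib

VARIANTS = ["current_form", "shortened_form"]  # hypothetical variants to compare

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user so they always see the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

# Which form does this (hypothetical) physician see?
print(assign_variant(user_id="dr-1138", experiment="medwatch-submission-flow"))
```

The completion rate per variant is then the metric that decides which form wins.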
6. Analytics are Part of Retrospectives
Successful teams don’t demo their software; they interpret experiments. Working in short, one- to two-week sprints is a common feature of agile, and at the end of each sprint teams hold a retrospective to talk about how things went, why, and how they want to modify their practice of agile. Thankfully, this is common practice.
What’s less common is for teams to make a habit of reviewing their experiments during those retrospectives. Ultimately, we’re creatures of habit, and so a team that’s not explicitly creating time to review their experiments is probably not going to get to agile analytics. Here’s an interview with Travefy, a company that’s made a habit out of making analytics part of their agile rituals: David Chait on Agile Retrospectives.
7. Decisions are Implied by Analytics
Are decisions implied by the team’s analytics, or is the plan just to ‘review the numbers’? The team that’s practicing agile analytics already knows the implication of their observations because their observations are tied to experiments and the experiments are tied to decisions. For example, are you really ready to kill that feature if it sees low engagement? What if a user complains and says they absolutely have to have it? When the success thresholds were agreed on before the data came in, agile analytics makes the job of deciding easy.
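Here’s a minimal sketch of what that looks like when the decision is agreed before the data arrives; the threshold and the action are hypothetical:

```python
# Decision agreed before the experiment ran: if fewer than 30% (a hypothetical
# threshold) of field technicians log a service call in a given week, the feature
# is a candidate for removal rather than further investment.
ENGAGEMENT_THRESHOLD = 0.30

def decide(weekly_active_technicians: int, total_technicians: int) -> str:
    engagement = weekly_active_technicians / total_technicians
    return "kill or rework the feature" if engagement < ENGAGEMENT_THRESHOLD else "keep investing"

print(decide(weekly_active_technicians=12, total_technicians=60))  # 20% engagement -> kill or rework
```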