With Development Cycle Coupling, you are in a position to support or suggest a wider range of methods. When a design process offers multiple options or potential improvements, proposing and testing product variants is an activity you can suggest and support.
Some product decisions can only be evaluated in context, such as when the experience is shaped largely by the users’ own data or their manipulation of it.
We approach this type of decision when the design process offers multiple options with no clear consensus on which is preferable, or when a new and competing idea is testable at scale but unvalidated.
A good A/B test will point to a clear winner among multiple variants, allowing the team to select one and move forward. Consider what you need to know to decide on one option over the others, and how you will act once a decision is made.
Therefore, establish the metrics for success, and ensure they are instrumented in the product. Set a clear target for A/B test volume and roll out to the smallest possible footprint of users necessary to obtain significant results within major groups of user behavior. Align the team on when and how the decision will be made in advance. Work with an analyst to clearly interpret the results of important or larger-scale tests.
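As a sketch of what "smallest possible footprint" and "significant results" mean in practice, the snippet below sizes an A/B test and evaluates its outcome with a standard two-proportion z-test. All numbers (conversion rates, alpha, power) are illustrative assumptions, not prescriptions; real tests should be interpreted with an analyst.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    # Two-tailed p-value from the normal CDF (via the error function).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

def sample_size_per_arm(p_base, p_target, ):
    """Users needed in each arm to detect a lift from p_base to p_target
    at a two-sided 5% significance level with 80% power."""
    z_alpha = 1.959964  # critical value for alpha = 0.05, two-sided
    z_beta = 0.841621   # critical value for 80% power
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p_target - p_base) ** 2)

# Hypothetical result: 10% control vs. 14% variant conversion, 1,000 users each.
z, p = two_proportion_z_test(100, 1000, 140, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05, so the variant's lift is significant

# Smallest footprint per arm to detect a 2-point lift from a 10% baseline:
print(sample_size_per_arm(0.10, 0.12))
```

Note that the required sample size grows rapidly as the expected lift shrinks, which is why the team should agree on the minimum effect worth detecting before choosing the rollout footprint.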
Depending on the depth of measurement, new questions may arise that can be answered by blending in light qualitative exploration.
Use Product Analytics to monitor and evaluate the test. To solidify learning, conduct a User Interview to understand why certain behavior occurred with a specific variant or within a user segment. Capture and convey any significant insights with Effective Reporting.