3. Refine

Build, release and test - Through feedback and usability testing, we iterate to make products and features more effective for people.

3.1 Release

3.1.1 Objective

You have built a shared understanding of what you are building in each epic. You have UI/UX mockups ready to be implemented. Now it is time to bring your engineering team to full speed. You’ll build features incrementally, test them, refine and when happy with the result, you’ll release it.

At Catalpa, we build our products in an incremental and iterative way. We deploy frequently so we can put a new release in the hands of our users early in the process. We test it again, we learn, and we refine our product.

3.1.2 What you will be doing

3.1.3 Steps and tools you can use

a. Epic selection *🥇

  • Do an epic selection - start by defining which epic/unit of work you’re going to work on next. You will focus on one epic at a time. To do this, the team gets together for an epic selection meeting, during which it agrees on the best sequence to deliver the value defined for the epic.

b. Epic development using Scrum methodology *🥇

  • Epic planning - now that you know what the first epic will be, you will need to make sure the design document for the epic is updated and all information needed to build this piece of functionality is ready for the team.

  • Sprints - sprints aren’t about wildly executing work as fast as possible; they are designed to improve efficiency by increasing shared understanding and reducing uncertainty. The sprint is not an action or a meeting, it is a time-box of events that keeps the team going, making our work clear, organised and estimable. At Catalpa, we mostly use 2-week sprints, although in some cases we opt for one-week sprints.

  • Daily standups - standup meetings have huge benefits and require very little effort. They help surface dependencies, blockers or points of confusion (for example, how a feature should work) early in the process. The basic rules are simply to never skip a standup and to stick to three simple questions:

    • 1) What did you work on yesterday? 2) What will you work on today? 3) What is preventing you from doing your work?

    • Daily standups can be conducted in-person (if you are lucky enough to have the whole team under the same roof), online (over a Meet call, for example) or offline (for example, in a Slack channel - useful when team members are in very different time zones).

  • Epic retrospectives are great opportunities to identify how to improve teamwork by reflecting on what worked, what didn’t, and why. We recommend running a retrospective with your team at the end of every epic or project milestone.

    • Although group sessions are ideal for promoting discussion and sharing ideas, in some cases you may need to run the retrospective asynchronously with your team.

    • A great exercise for the retrospective is the ‘Stop, Start, Continue’ activity - it is easy to implement and works well in a number of different contexts

c. Product quality assurance *🥇

  • Code review - this is essentially a peer review where code is reviewed by another person. To learn more about this step, reach out to your lead engineer.

  • Feature list - make an effort to maintain a list of features: concise sentences describing the features supported by your product. It is useful for driving testing, reviews, comms, documentation and presentations. Keep it light on detail; you can link to more detail if you have it. See an example of the feature list for Bero (you will need to be logged into GitHub to view it).

  • Functional review (manual and automated)

    • Manual tests — it’s useful to manually review newly deployed functionality. The core objective is to get an understanding of whether the conditions of satisfaction are being met. This can be done by following the corresponding user stories and associated prototypes. The reviewer will then be able to make informed decisions about the quality of the implementation and determine whether any follow-up is needed.

    • Automated tests — as a product grows, it usually becomes hard to keep track of how everything is supposed to work, and whether it actually does. Engineers can develop targeted tests that run against certain parts of the code, ensuring that the intended outcome is achieved every time a new version is deployed. Should any new changes interfere with old code, these tests are an efficient way to catch the problem before the new version is made available to users.

    • Accessibility tests – review the implementation against the accessibility principles defined earlier for the project. It is important to confirm that new functionality is accessible, according to the accessibility requirements of the product. Accessibility tests can be conducted manually, for example by using the product with a screen reader. There are also online tools that can run accessibility tests and report which standards are met, along with recommendations for improving product accessibility.

  • Visual review – please refer to Catalpa’s Visual Review Checklist Guide. This document provides a guide on how to conduct a visual review of digital design projects and features at Catalpa.
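The automated tests described above can be sketched in Python. This is an illustrative example only - the `normalise_score` helper is hypothetical, not part of any Catalpa product - but it shows the shape of small, targeted tests that run against every new version:

```python
# Hypothetical example: a small unit under test and its automated checks.
# In a real product, tests like these run in CI on every deployment,
# catching regressions before a release reaches users.

def normalise_score(raw: int, maximum: int) -> float:
    """Convert a raw score into a 0.0-1.0 fraction (hypothetical helper)."""
    if maximum <= 0:
        raise ValueError("maximum must be positive")
    return max(0, min(raw, maximum)) / maximum


def test_basic_fraction():
    assert normalise_score(5, 10) == 0.5


def test_clamps_out_of_range_input():
    assert normalise_score(15, 10) == 1.0
    assert normalise_score(-3, 10) == 0.0


def test_rejects_bad_maximum():
    try:
        normalise_score(1, 0)
    except ValueError:
        pass  # expected
    else:
        raise AssertionError("expected ValueError for maximum of 0")


if __name__ == "__main__":
    test_basic_fraction()
    test_clamps_out_of_range_input()
    test_rejects_bad_maximum()
    print("all tests passed")
```

In practice these tests would live in a test suite run automatically by a test runner (pytest, for example) on every push, rather than as a standalone script.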

d. Deployment *🥇

  • For most products, Catalpa’s engineering team prepares 2 server environments:

    • A staging server - usually only accessible to the Catalpa team, used to test and review features, test content, experiment etc. Sometimes staging servers can be used for training sessions to avoid data entry issues or manipulation that may interfere with production/live databases. Ask your lead engineer to understand which environment is available and when to use each.

    • A production server - is where things are live. This is the final product and the environment that our end-users will access. Usually new release candidates are first deployed to staging for testing, and afterwards deployed to production. Deployments to production should be communicated to the team, and to customers/users if they are already using the product and the updates are relevant to them.

    • In some cases, we may deploy a training server to be used in training sessions and exercises. This is not frequently used. Each product server/environment has a cost associated with it (hosting, but also engineers’ time), so be mindful of this when deciding which environments you will need.

  • Deployments are performed by, and are the responsibility of, your team’s lead engineer. Discuss any questions or specific requirements with them.

  • Sometimes deployments may impact access to a product or product feature. Make sure you coordinate with the lead engineer to determine the best time for deployments to reduce any downtime for your users.

e. Prepare documentation

  • Code documentation - this will be done by the engineering team as they code product features.

  • Product features and functionalities list - this can be prepared in several formats.

  • Design documentation - the importance of this task will depend on a number of factors – for example, whether it is a core or bespoke product, the size of the team, whether more than one designer is working on the product, and the length of product implementation.

    • At a very basic level, a product will have the design principles written in a document, an InVision prototype, and visual assets for the UI in a shared drive.

    • More complex and larger products benefit from implementing a Design System Manager, such as the one available on InVision. Discuss the most appropriate solution for the product with the product lead. A Design System Manager would include some key design elements, such as:

      • A brief summary about the design system and how to use the DSM

      • Design principles / foundations

      • Brand elements and how to use the brand

      • UI/UX components

      • Openly DSM is a great example of how this tool can be used

    • Besides a Design System Manager and other documents, we strongly recommend keeping InVision prototypes updated and linked in relevant user stories and GitHub issues. This allows for future consultation and helps people doing quality assurance and visual reviews compare the prototype (how it is supposed to look/feel/work) against the actual implementation (how it looks/feels/works).

  • User documentation

    • Define what documentation is relevant for your product. This is not limited to but may include:

      • End user documentation

      • System admin documentation

      • Content management documentation

      • Content specification documentation

      • Materials for workshops and facilitated training sessions

      • Other formats such as: video tutorials, exercises for in-job training

    • Define a structure for any type of user documentation you will need to prepare. Keep in mind what your users need to be able to perform using the product, what the key features and functionalities are, and the most important tasks they need to complete through the product.

    • Define the tone and language. Depending on your users and the requirements of your contract, you may need to develop training materials and user documentation in one or more languages. Make sure you are using a tone and level of language that suits your users’ literacy. Any materials you develop should be clear and engaging without being boring, and information should be easy to find and understand.

    • Select format(s) for the user materials that are aligned with the context in which users will access them, and that are easy to adjust and update within an incremental and iterative product development process. Print may not be the best option if you need to keep updating materials, and you may want to consider something like GitBook, which can be updated in real time. However, online-only materials may not be ideal for contexts where internet access is not reliable or available.

    • Write user documentation, prepare additional materials and consider using rich media and visual assets such as screenshots, screen recordings, animated gifs, illustrations, video tutorials etc.

    • Any materials (originals and translations) should go through a proofreading step, if possible, to make sure any inconsistencies or typos are found and corrected.

    • Once materials are ready, publish (or print them) and ensure users have access to them and are aware of how to use them.

    • Here are some examples of user documentation that we prepared in the past:

f. Test and iterate

  • Usability testing (production) *🥇

    • Similar to what is described in step 2.2 Implement > d. Usability testing, but at this point you will test the implementation - the live website - to ensure it is working for end-users as expected.

  • Accessibility testing

    • Similar to 3.1 Release > c. Product quality assurance, conduct accessibility testing on the live product to understand how accessible it is.

    • You can use an online tool like Lighthouse to diagnose and better understand opportunities to improve the product’s accessibility and performance. Lighthouse is primarily a tool for measuring the performance of progressive web apps, but it can also be used to review and test a website for common accessibility issues. By the time your review is complete, you’ll have a clear list of what you need to fix on your website in order to meet accessibility standards. We have prepared a document with guidance on how to get the most out of Lighthouse.

  • Usage data and metrics - usage data can also be an interesting way to learn more about how users are interacting with the product and to find opportunities to improve. If the product offers a way to view and analyse statistics, for example through Metabase or Google Analytics, it is recommended to review and analyse the data from time to time.

    • An example of this method in practice: while preparing reports on usage of Bero’s formative assessment feature, the team found that the numbers reported by teachers were far from expected - the data showed an unexpected average number of students per class and a surprisingly high number of students selecting the wrong answer. Further analysis helped the product team understand that the numbers weren’t lying; rather, a UI/UX issue was leading teachers to use the formative assessment feature in a way the original design hadn’t anticipated. They were submitting the UI’s default values while previewing the questions, or failing to understand how to change the number of students in the UI. This allowed product and design to work on an alternative solution, running usability tests in the next iteration and improving the overall experience and usage of the feature.
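A minimal sketch of this kind of usage-data review, using only the Python standard library. The record format, field names and anomaly rule below are hypothetical assumptions for illustration, not Bero’s actual schema:

```python
# Hypothetical formative-assessment records: (class_id, students_recorded, wrong_answers).
# We compute the average class size and flag records whose numbers look like
# untouched UI defaults rather than real entries - the kind of anomaly
# described in the Bero example above.
from statistics import mean

EXPECTED_CLASS_SIZE = 30  # hypothetical expectation, e.g. from enrolment data

records = [
    ("class-a", 1, 1),   # suspicious: one student, one wrong answer (default values?)
    ("class-b", 28, 4),
    ("class-c", 1, 1),   # suspicious
    ("class-d", 31, 6),
]

avg_size = mean(r[1] for r in records)
suspicious = [r[0] for r in records if r[1] == 1 and r[2] == 1]

print(f"average students per class: {avg_size} (expected ~{EXPECTED_CLASS_SIZE})")
print(f"records to investigate: {suspicious}")
```

A gap this large between the observed and expected averages is the cue to dig into *why* - in the Bero case, the answer turned out to be a UI/UX issue rather than bad data.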

  • Interviews and other direct feedback - if your budget and resources allow you to do so, interviewing your users post-deployment of the product can provide you and the team with informative and meaningful feedback.

    • Identify what you want to learn, select participants and run an interview to learn more about their feelings towards the product, the user experience, struggles using the product, and how well the functionalities are meeting their needs and expectations.

  • Summarise findings and iterate:

    • Review findings against product roadmap

    • Review findings and open ‘Enhancement’ issues on GitHub - focused on improving existing features’ look and feel or their experience - for the design and engineering teams to iterate on
