Mindset AI can now drive better user engagement with my solution

A SaaS platform needed a better way to monitor its content and make it more engaging for app users. Explore how my strategy helped achieve just that.

Adam Wieclawski
10 min read · Mar 5, 2023

TYPE OF PROJECT: User Engagement, B2B, SaaS, Native App

MY RESPONSIBILITIES: Research, Data Analysis, Ideation, Product Design, UX Writing, Design Documentation

TOOLBOX: Figma, Miro, Jira, Confluence

PRODUCT DESIGN FRAMEWORKS: Scrum, Epic stories, 5W2H, Product-Led Growth Flywheel, Design Systems

UX METHODS: Stakeholder interviews, Sales and support interviews, Metrics review, Goal prioritisation, Competitive analysis, Personas, Hierarchical Task Analysis, Userflows, Wireframing, Rapid prototyping, A/B testing, Peer reviews

Mindset AI’s product, a fully configurable app, helps training providers scale their expertise. Different app flows help clients deliver their next leadership workshop or provide a daily feed of mentoring content to end users. Think Productboard and Squarespace combined, built for business coaches and microlearning providers.

Mindset AI’s B2B-B2C hybrid distribution model relies on two surfaces: the host platform and the native app available to end users. The host platform is where the client publishes content in the form of workshops, announcements or learning pathway flows. The app, in turn, is the product where users access the published posts.

Faced with a dwindling rate of active users, my job was to find a solution that could strike a perfect balance between stimulating user engagement and providing ample contextual detail to product admins about what generates the most attention among app users.

UNDERSTANDING THE COMPLEX ECOSYSTEM OF THE MINDSET AI PRODUCT WAS KEY TO SOLVING THE USER ENGAGEMENT ISSUE DOWN THE LINE

The paradox of the active user

With over 1,000 monthly active users, Mindset AI’s clients needed a clearer way of understanding which of the app’s flows attracted the most attention. Data from recent quarters showed that despite a growing user base, the proportion of users who remained active was falling: more people were signing up, but a shrinking share of them kept coming back. After an initial discussion with the design team, I was eager to dig deeper into the active user metric to find out what might be hindering its growth.

THE PROBLEM OF A DWINDLING RATE AMONG ACTIVE USERS

During my research, one of the articles I came across spoke of the paradox of the active user — a situation where an eager bricoleur skips reading a manual in favour of constructing a garden shed as soon as they lay their hands on a toolbox. It can be argued that had they only read the manual first, building the shed would have been a quicker and smoother process.

However, as toolbox providers, the last thing we ought to do is preach to the eager DIYer about the importance of studying the manual first. The paradox lies in the fact that our good intentions could be misread, dampening the determination of even the most enthusiastic shed-builders.

The paradox strikes again

To put the issue in digital terms, users often stay motivated by a specific goal rather than by the curated path the product prompts them to follow to achieve it. I began to suspect that this mismatch might be the core reason behind Mindset AI’s falling active user rate.

To investigate further, I scheduled an initial round of 6 interviews. This discovery stage included 3 talks with end users, 2 with app administrators, and a chat with a sales representative at Mindset AI. The last one, in particular, was a true eye-opener: it confirmed my initial hunch that the regimented channels for submitting content on the app not only confused the app administrators but also made user interactions with the app content stale and poorly adapted to their real needs.

“As we transition from a start-up to a fledgling scale-up, we have developed various forms of content on the app based on client requests. When a client wanted a feature to share videos during training sessions, we delivered it in the following sprint. When they needed a way to check employee moods weekly, we provided that too. However, this eagerness to ship features has resulted in a complex web of communication channels. Although these channels are beneficial, we need to present them to end users in a clear and actionable manner, ensuring their purpose is easily understood.”

— A sales representative at Mindset AI

THE SAME PIECE OF CONTENT, DIFFERENT METHODS OF DELIVERY; THE CLIENTS NEEDED TO UNDERSTAND WHICH FORMS OF ENGAGEMENT WERE THE MOST IMPACTFUL AND IN WHAT CAPACITY THE END USER INTERACTED WITH THEM

As the paradox of choice teaches us, having too many options to compare not only mentally exhausts us but also triggers a fear of missing out on something important once we finally decide.

While the company had the data concerning the number of users engaging with each piece of content, they needed more evidence on the nature of each interaction. What articles were the users bookmarking to read later? For what reason would they abandon the completion of a learning pathway? My solution needed to provide an intuitive way of answering these questions.

The conundrum of too many options

With such a complex multi-channel product, gauging the success of various content delivery methods is no easy feat. To get a holistic view of the problem, I decided to run two concurrent discovery sessions.

My interviews with sales representatives at Mindset AI were product-centric and aimed at defining what channels of delivery the company suggested their clients use. In turn, my time with stakeholders had a more behavioural edge, intended to track their preferences when prompted with user scenarios.

I scrupulously noted the motivations the users mentioned during our discussions. Surprisingly, none of them viewed any of the app flows as redundant. Instead, they recognised the value of using different engagement methods. What was missing, however, was a clear explanation of how a flow could improve the score of the next assessment or how the app could help the user establish a daily check-in routine.

HIERARCHICAL TASK ANALYSES FOR TWO PERSONAS, MATTHEW (AN APP ADMIN) AND MINDY (AN APP END USER), GENERATED THE INITIAL CONCEPTS FOR THE FEATURES DURING OUR IDEATION SESSIONS

Step by step, the hierarchy of user actions

Guided by these insights, I started to map out the hypothetical paths an end user might take to achieve these goals, in the form of a hierarchical task analysis (HTA). I picked this format as a way of translating user needs into a phased sequence of events that would take the user from their current state to the desired one.

Similar to storyboarding, the HTA allows for a quick breakdown of stages in the user journey. For instance, the hypothetical path for a user aiming to improve their score in the next assessment would involve:

1. Checking the previous score
2. Accessing the next assessment
3. Setting a minimum score target
4. Receiving the score
5. Optionally retaking the test if the score is unsatisfactory
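To make this structure concrete, here is a minimal sketch of how such an HTA could be captured as data, assuming a simple task tree; the `Task` type and the `improveAssessmentScore` example are illustrative inventions of mine, not part of Mindset AI’s tooling:

```typescript
// A hierarchical task analysis node: a goal decomposed into ordered subtasks.
interface Task {
  id: string;
  description: string;
  optional?: boolean; // e.g. retaking a test only if the score disappoints
  subtasks?: Task[];  // deeper decomposition, if a step needs one
}

// Hypothetical HTA for "improve my score in the next assessment".
const improveAssessmentScore: Task = {
  id: "0",
  description: "Improve score in the next assessment",
  subtasks: [
    { id: "1", description: "Check the previous score" },
    { id: "2", description: "Access the next assessment" },
    { id: "3", description: "Set a minimum score target" },
    { id: "4", description: "Receive the score" },
    { id: "5", description: "Retake the test if the score is unsatisfactory", optional: true },
  ],
};
```

Keeping the analysis in a structured form like this makes it easy to compare alternative paths or count the steps separating a user from their goal.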

Drawing on these structured narratives, I started to consider how they could translate into tangible flows on the app. I discussed my findings with a team of 3 senior designers at Mindset AI. Their positive feedback on my approach marked the starting point for reframing the declining active user rate as a chance to reevaluate how the app flows could help users achieve their individual goals.

NEARING THE END OF THE IDEATION STAGE, I STARTED SHARING USERFLOWS WITH MY TEAM MEMBERS TO SHOW HOW GOAL-TRACKING TACTICS COULD ADDRESS USERS’ NEEDS. HERE, MY EARLY LOW-FI ITERATIONS OF CUSTOM GOALS

From strangers to champions

Moving into the design stage, the main challenge was to find a remedy that would facilitate user interactions and, simultaneously, assuage the multi-flow featuritis that was affecting the app admins. My final design needed to grant greater autonomy to the user while giving the app admin an unobstructed view of how users engaged with each segment.

Increased flexibility in user interaction, coupled with increased visibility of user actions on the admin’s end: therein lay the sweet spot my efforts aimed to hit. To identify tasks that would fulfil the needs of both users and administrators, I referred to a product-led growth flywheel shared with me by the design team.

The flywheel was a crucial tool in delineating the elements that could help both actors get familiar with the upcoming changes, from the introduction of different CTAs to the addition of tooltips in the interface.

Given the project’s timeframe, the design team and I agreed to focus my initial design work on taking stakeholders from the “strangers” stage to the “beginners” stage. Once tested and refined, I would continue developing the idea towards the “champions” stage.

A PRODUCT-LED GROWTH FLYWHEEL ELABORATED BY OUR DESIGN TEAM HELPED ME NAVIGATE THE PROCESS OF STREAMLINING MY WORK. WITH THE MODEL IN MIND, I CARRIED ON PROTOTYPING ONLY THOSE FEATURES THAT WOULD IMPROVE THE PRODUCT IN THE MOST EFFICIENT WAY WITH THE MAXIMUM IMPACT ON THE USER EXPERIENCE

Five Ws, Two Hs

To deliver the first version of the prototype, I used Mindset AI’s design system. Access to the component library helped me create rapid prototypes of both the app and the desktop admin platform experience. To validate my ideas and make sure I was on track to improve the ease of interaction, I ran guerrilla usability sessions, testing the initial flows with two stakeholders. Their feedback helped me refine the prototype throughout the design process.

When in doubt about the next steps, I would often refer to the 5W2H concept map (who, what, where, when, why, how and how much) that I had compiled with the design team. This well-established technique, commonly used in journalism and police investigations, has been embraced by the UX community as a means of crafting digital experiences that stay mindful of the purpose and target audience behind each proposed solution.

THE 5W2H CONCEPT MAP HELPED ME CALIBRATE MY DESIGN PROCESS IN A WAY THAT WAS MINDFUL OF THE NEEDS OF BOTH THE END USER AND THE APP ADMIN

Greater autonomy, better visibility

After many iterations, talks with developers and a series of rapid prototypes, I designed a way of mapping user interactions through goal tracking. The new feature lets users save content, set completion dates and define metric-related goals for learning experiences, all in response to the needs expressed during the user interviews.

Simultaneously, the tracking lets the app admin see how users interact with their content, with enough contextual detail to know which assessments attract the most attention, how many users stick to their daily check-in routine, or how many have bookmarked a given article.
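As a rough illustration of the data this feature implies, the sketch below models a user-set goal and the kind of per-content rollup an admin view could aggregate. The goal kinds mirror the behaviours described in this case study (bookmarking, completion dates, score targets and check-in routines), but every name here is my own assumption rather than Mindset AI’s actual schema:

```typescript
// Hypothetical goal kinds, inferred from the user needs surfaced in interviews.
type GoalKind = "bookmark" | "completionDate" | "minimumScore" | "dailyCheckIn";

interface Goal {
  userId: string;
  contentId: string;   // the assessment, article or pathway the goal is attached to
  kind: GoalKind;
  targetDate?: string; // ISO date, for completion-date goals
  targetScore?: number; // minimum score, for assessment goals
  completed: boolean;
}

// A per-content summary of the kind an admin dashboard could surface.
function summarise(goals: Goal[], contentId: string) {
  const relevant = goals.filter((g) => g.contentId === contentId);
  return {
    contentId,
    bookmarks: relevant.filter((g) => g.kind === "bookmark").length,
    activeCheckIns: relevant.filter((g) => g.kind === "dailyCheckIn" && !g.completed).length,
    scoreTargets: relevant.filter((g) => g.kind === "minimumScore").length,
  };
}
```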

A TOOLTIP NAVIGATES THE USER FROM THE ASSESSMENT SCREEN TO THE GOAL-TRACKING FEATURE WHERE THE USER IS ABLE TO SET A MINIMUM SCORE TO ACHIEVE

On the host platform side, coaching clients can check not only the sheer volume of engagement but also see how users intend to interact with the app content. A wealth of KPIs can be reviewed from a single place on the platform.

ON THE APP ADMIN PLATFORM, MY AIM WAS TO GATHER THE MOST PERTINENT QUALITATIVE AND QUANTITATIVE FEEDBACK IN ONE PLACE FOR THE ADMIN’S CONVENIENCE. THE INTERFACE SHOWS WHICH PIECE OF CONTENT ON THE APP GAINS TRACTION AND FLESHES OUT THE DETAILS OF USER INTERACTIONS

The new functionality prompts the user to accomplish a goal-related task, such as meeting a self-imposed deadline, via in-app notifications. Once tapped, the notification leads the user to a goal bank where they can access and edit all the goals they have attached to particular pieces of content.

Mindful of the first-time interaction with the new concept, part of my design work consisted of creating walkthrough screens that could convey the perks of the new functionality to a user who had never explored it before.

WITH A VIEW TO CLARIFYING THE CONCEPT OF 4 TYPES OF GOALS, I DESIGNED A SET OF WALKTHROUGH SCREENS THAT COULD ENCOURAGE THE USER TO EXPLORE A NEW FUNCTIONALITY

See how this product works on adamwieclawski.com

Testing, documenting, learning

To test the solution, I set up a Maze survey that would administer a System Usability Scale (SUS) questionnaire to the testing cohort. At the end of the prototype flows, testers were prompted with an alert dialogue to sound off on the proposed solution via the Maze site.
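For readers unfamiliar with how SUS results are tallied: each of the ten statements is rated from 1 to 5; odd-numbered (positively worded) items contribute their rating minus one, even-numbered (negatively worded) items contribute five minus their rating, and the raw sum is multiplied by 2.5 to produce a 0 to 100 score. A minimal sketch of that standard calculation:

```typescript
// Standard System Usability Scale scoring: ten 1-5 ratings in questionnaire order.
function susScore(ratings: number[]): number {
  if (ratings.length !== 10) throw new Error("SUS requires exactly 10 ratings");
  const sum = ratings.reduce(
    (acc, rating, i) =>
      acc + (i % 2 === 0 ? rating - 1 : 5 - rating), // odd items at index 0,2,... even items at 1,3,...
    0
  );
  return sum * 2.5; // scales the 0-40 raw sum to 0-100
}

// Example: a fairly positive response pattern lands well above the commonly cited ~68 average.
console.log(susScore([5, 2, 4, 1, 4, 2, 5, 1, 4, 2])); // 85
```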

Five days into beta testing, we had collected both quantitative and qualitative feedback. Both end users and app admins responded positively to the goal-tracking feature, with 90% stating it helped them effectively monitor and prioritise app content based on their needs.

As the project was coming to a close, my final task was to compile a comprehensive package of UX documentation: a ready-to-go resource for the tech team to implement my goal-tracking concept. Throughout the project, I maintained close collaboration with the tech team, joining regular calls with developers and exchanging insights on Slack. Their technical expertise proved invaluable in crafting a solution that not only met user expectations but also ensured smooth development and shipment down the line.

USING CONFLUENCE DOCUMENTATION, I WAS ABLE TO COMPILE A READY-TO-GO HANDOVER THAT THE TECH TEAM COULD PICK UP IN ORDER TO IMPLEMENT MY GOAL-TRACKING CONCEPT

Resources that helped me

At the start of this project, I didn’t expect that the solution for driving user engagement while simultaneously giving clients better visibility of user activity would be found in goal tracking. My time at Mindset AI was therefore an invaluable lesson in confronting assumptions and juggling the many variables that a start-up environment is never short of.

To gain insights during the initial discovery phase, I referred to the FAQs provided by the sales team at Mindset AI. This proved to be a brilliant way to identify potential pain points within the product. I highly recommend this strategy for designers working on multi-dimensional software challenges.

For those who are keen to learn more about the cognitive psychology behind the paradoxes described in this study, I strongly recommend Nielsen Norman Group’s articles (I have attached the links in the corresponding sections) along with Steve Krug’s “Don’t Make Me Think”. Krug’s work is particularly effective in promoting common sense when designing solutions for end users.

If you wish to know more about the 5W2H method, I think this short and sweet blog post could be a good place to start. For a deeper dive, I suggest this article by UX researcher Sarah Doody, who explains how to make the most of the method during your next design challenge.
