Making it easier for marketers to understand their performance

 

Role
Sr. Product Designer

Team
5

Timeline
12 months

 
 

A Brief Project History

 

Most of Emma’s competitors offered better analytics tools for marketers. CM had invested a lot of time and resources to build the back-end needed to support a better marketing analytics experience.

When I joined the team, I benefited from strong institutional momentum: bring an MVP to market quickly, iterate toward an excellent experience for marketers, and scale from there.

 
 

My amazing team

The core team consisted of a Lead Product Manager, a Lead Engineer, and four engineers across back-end and front-end.

Stakeholders included account managers, support team members, technical services, and executive leadership.

 

My responsibilities as design lead

I worked as the Product Designer on the end-to-end design, relentlessly advocating for the user the entire way.

Partnered with product and development leads to define the strategic roadmap for our insights offering, including defining the product vision, scoping sprint and cycle deliverables, and negotiating deliverables based on user needs, technical feasibility, and market positioning.

Worked closely with the product and executive teams to define the vision of insights beyond an MVP offering, advocating for a vision-first approach and creating frameworks and artifacts to evangelize ideas, drive decisions, and gain team alignment.

Designed and built prototypes and socialized design progress to the broader stakeholder and executive team. 

Contributed end-to-end: product strategy, generative and evaluative research, wireframing, prototyping, testing, visual design, systematizing components into our design system, QA, and post-launch analysis.

Worked in concert with the development team to understand system constraints and how they impacted the design at scale, to ensure designs were technically feasible, and to verify the implementation met our users' diverse requirements.

 

My Process

 
 

Setting the stage

I joined the team at a time when reporting and analytics ranked highest in an opportunity assessment conducted in partnership with our customer advisory board.

 
 
 

We had to start with our users as our north star, not market validation.

I shifted the narrative from high-level market and customer validation to problem definition. How do we enable our users to solve the problems they have and the jobs they are trying to accomplish with analytics? Problems that, until then, had not been fully identified.

There are common denominators in the data marketers need to derive insights from (think of the obvious ones: open rate, click rate, etc.). However, the ways marketers prioritized and put this data to work differed drastically, depending on the size of their department, the industry they operate in, their level of sophistication, and so on.

Bringing an MVP analytics offering to market that covers table stakes doesn't mean we've advanced our position in the market, or even caught up, because it doesn't mean we'll meet user expectations, specifically Emma users' expectations.

 

The problems for marketers today

 
 

Home page offered extremely limited view into performance

The home page displayed only aggregate open and click rates, limited to the last 30 days and spanning all campaigns. It didn't offer the detail and flexibility users needed to understand their marketing performance.

 

Campaign metrics made it impossible to understand the big picture

Campaign-level metrics were much easier to view in-app, and this was often users' first choice whenever they had time to analyze metrics.

However, these metrics didn't display performance trends at an aggregate level, and to compare multiple campaigns, users had to resort to other means.

 
Artboard Copy 4.png

Trends graphs gave zero context

The graphs provided so little context that users were left going back and forth between data points and manually exporting data.

Again, it was easier for the user to just export all the data and manipulate it themselves outside of the app.

 

Compare mailings was incredibly arduous to use

The compare mailing feature generated preconfigured reports, but often these reports didn’t meet user expectations, and it was easier for them to just export all the raw data and manipulate it themselves.

 
Artboard Copy 9.png
 
 

I studied the competitive market

The ESP market is highly competitive and saturated with ESPs positioned in the SMB-to-mid-market space. To understand where Emma could carve out a competitive advantage, I had to familiarize myself with other ESP offerings.

 

Understanding the business: positioning Emma required a comprehensive understanding of competitor offerings

 
 

I discovered user perspectives

Emma comprises two separate platforms. One is the ESP, which serves subaccount owners, typically small business owners. The other is HQ, which serves larger mid-market franchises, typically with multiple locations.

After conducting an initial round of interviews to uncover user needs, I synthesized and clustered the data, uncovering two primary personas with very different goals, use cases, and needs.

 

I developed our target user personas

After conducting discovery research, I guided the team toward two primary personas: manager-level marketers of a single subaccount and director-level personas who oversaw multiple subaccounts.

 
 

Prioritizing, inclusively

As a team, we collectively shared ideas, and prioritized.

 

User surveys

Chart: user prioritization of feature offerings (n=220)

Chart: user prioritization of metrics offerings (n=220)

For confidentiality reasons I have omitted the actual values for these metrics.

 
 

Design

 

I tested two approaches: embedded content versus cards.

Once I defined the approach, I tested different layouts.

 

Lo-fi iteration towards the optimal information design

I tested multiple layouts with users to find the right information design and visual hierarchies that aligned with their preferences.

 
 
 

I used a mix of research methods to test prototypes

 

All in all, we ran six UsabilityHub tests and met with around 20 customers face-to-face, using a mix of testing methods: preference tests, workflows, rankings, first-clicks, task completion, and more.

 
 

Our initial release started small and then scaled

After the initial release, overall feedback was positive. One main issue was the call-to-action and ease of access to detailed data. I tested different button positions, but ultimately decided to pivot.

The mockup below was our MVP and phase 1 release

 
Initial release.png
 
 
 

A deeper dive into process, challenges and outcomes

 
 

Iterations in hi-fi

 

The mockups below show various iterations of information design and layout

 
 

Pivoting based on user analytics

 

The mockups below show our very first pivot

 
 
 

Tracking customer satisfaction score

We systematically tracked CSAT scores from launch through multiple releases and post-MVP improvements to understand whether the changes we were shipping were in fact improving customer sentiment.
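For context on the metric itself: CSAT is commonly computed as the share of respondents choosing the top ratings on a satisfaction scale. Emma's exact survey instrument isn't described here, so the sketch below assumes a standard 5-point scale where 4s and 5s count as satisfied.

    # Minimal CSAT sketch; assumes a 5-point scale where ratings of 4 or 5
    # count as "satisfied". Emma's actual instrument may differ.
    def csat(ratings: list[int]) -> float:
        satisfied = sum(1 for r in ratings if r >= 4)
        return 100 * satisfied / len(ratings)

    print(round(csat([5, 4, 3, 5, 2, 4]), 1))  # 66.7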

 
 

Post-launch adoption

We defined adoption as accounts that used insights at least once per week, every week, over a one-month cycle post-launch (a sketch of how such a metric could be computed follows the chart below). A 34% adoption rate for Plus and Premium accounts (our target audience) wasn't bad; our target was 30%.

Chart: post-launch adoption rate by account tier (Pro, Plus, Premium, Legacy, Parent, HQ sub)

For confidentiality reasons I have omitted the actual values for these metrics.
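To make that definition concrete, here is a minimal sketch of how such an adoption metric could be computed. The events table, column names, and launch date are hypothetical, not Emma's actual schema.

    import pandas as pd

    LAUNCH = pd.Timestamp("2019-01-01")  # hypothetical launch date

    def adoption_rate(events: pd.DataFrame, total_accounts: int) -> float:
        """Share of accounts active in insights at least once in each of
        the four consecutive weeks after launch (the definition above)."""
        window = events[(events["timestamp"] >= LAUNCH) &
                        (events["timestamp"] < LAUNCH + pd.Timedelta(weeks=4))].copy()
        # Bucket each insights visit into week 0-3 relative to launch.
        window["week"] = (window["timestamp"] - LAUNCH).dt.days // 7
        # An account counts as adopted only if active in all four weekly buckets.
        weeks_active = window.groupby("account_id")["week"].nunique()
        return int((weeks_active == 4).sum()) / total_accounts

Segmenting the same computation by account tier (Pro, Plus, Premium, and so on) yields the per-tier breakdown shown in the chart.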

 
 

6-mo post-launch drop

After around six months, we noticed that overall use was starting to drop. This didn't alarm us too much, but we dug in to see what was going on.

 
 

For confidentiality reasons I have omitted the actual values for these metrics.

 
 
 

Deepening our understanding

We decided to dive in and conduct research on specific verticals using insights, to see whether there were any trends in gaps, opportunities, and organizational needs.

To sell the team on the value of this approach, I created higher-fidelity organizational personas to articulate the research findings.

 

We learned a plethora of valuable insights. For instance, HQ needs were very different from our standalone account needs. Pro accounts were typically our least sophisticated user base, and HQ subaccounts in certain verticals often cared far less about growth, focusing mainly on engagement.

 
 

Understanding the organization

 

Aligning around a hypothesis

Based on the research, one central theme surfaced. Users were pleased with the high-level overview of growth and engagement, but couldn't act on that data in a way that helped their team adjust their marketing strategy.

 
 

Introducing the jobs to be done framework

I used Clayton Christensen's Jobs to be Done framework. In doing so, we discovered six primary jobs that users performed on a frequent basis.

 
 

Will it help me compare my performance?

Users were comparing performance, engagement, and growth from one timeframe to another. We tested different approaches to enable easy comparison in product.

 
 

Will it help me share data across my team?

We learned that users often shared reports on specific days for weekly or monthly meetings, or more generally across the marketing org. So we helped automate that process by giving users the ability to send weekly reports to members of their team. Working with our branding team, we designed an emailed report that displays the metrics of their choosing.

First, I tested the approach: a modal, a wizard, or a page. The user data suggested a modal.

Then I A/B tested a tabbed navigation against a walkthrough navigation.

Artboard Copy 3.png

The walkthrough won, due to higher completion rates and lower error rates

Artboard Copy 6.png
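A decision like this one is often sanity-checked with a two-proportion z-test on completion rates. The counts below are invented for illustration (the real values are omitted for confidentiality), and statsmodels is one library choice, not necessarily what we used.

    from statsmodels.stats.proportion import proportions_ztest

    completions = [31, 42]  # tabbed vs. walkthrough completions (hypothetical)
    sessions = [50, 50]     # participants per variant (hypothetical)

    z_stat, p_value = proportions_ztest(completions, sessions)
    print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
    # A small p-value supports the walkthrough's higher completion rate;
    # error rates can be compared the same way.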
 
 

Will it help me send at the optimal time to increase engagement?

One of the main jobs users were trying to do was figure out the best time of day and week to send a campaign.

 
 

Will it help me personalize my content based on engagement?

Another primary job users were trying to do was break down their audiences into segments representing their level of engagement, so they could then market to those audiences with customized content.

 
 

Will it help me compare campaign engagement and audience growth?

Users needed to understand their campaign performance at both a high level and a detailed level, and to move quickly between those contexts as necessary.

 
 

Project outcomes

  1. Over 30% healthy customer adoption at launch

  2. Drove an additional 12% adoption, to over 40%, through post-MVP features

  3. 15% reduction in professional service requests

  4. 10% reduction in support cases

  5. 8% increase in segmentation use

  6. Similar increases in other tangential feature sets, such as organization of channels (groups, forms, etc.)

  7. 9% reduction in compare mailings use

  8. 5% reduction in campaign-level metrics use

  9. 10% reduction in exports among target segments

  10. 2nd highest viewed webinar in Emma history

  11. CSAT scores trending above 70%

  12. Positive NPS correlation to insights use, which is a driver for retention

 

Key performance indicators

 

12-mo post-launch adoption increase

Chart: 12-month post-launch adoption by account tier (Pro, Plus, Premium, Legacy, Parent, HQ sub)

For confidentiality reasons I have omitted the actual values for these metrics.

 
 
 

12-mo post-launch growth

We increased the average number of monthly visitors, accounts, and clicks. In other words, scaling the insights experience with the features we prioritized increased use and stickiness.

Customers who adopted insights were over 20% more likely to renew.

For confidentiality reasons I have omitted the actual values for these metrics.

 
 

Upward trending CSAT scores

Over the course of a year, we tracked CSAT scores, which went from below 60 to trending above 70. Increases correlated with post-launch feature releases.

Graph: month-to-month post-launch CSAT scores

For confidentiality reasons I have omitted the actual values for these metrics.

 
 
 

Vision explorations

Throughout the project, we received numerous requests for a centralized dashboard view from both our least sophisticated and our most sophisticated marketers. Both ultimately wanted control over custom views into their marketing performance.

Furthermore, our more sophisticated marketers started requesting more advanced aggregate metrics: social, impressions, shares, and conversion rates. At this stage, we were confident we had enough data, validation, and interest to justify building out a dashboard to accommodate these needs and offering it as part of our premium subscription.

 
 

Learnings & challenges

Looking back, there are a number of challenges that I had to navigate and adapt to.

 

Value versus speed-to-value

We have an HQ counterpart to our ESP. This account type serves a very different user base (HQ administrators) and set of use cases from our regular accounts (called subaccounts). These users wanted an insights experience as well.

Insights was designed for subaccount managers, not HQ administrators. HQ admins were more interested in operational level data, not performance data. Instead of asking, how did these 100 campaigns perform, an HQ admin would ask, how are these 100 subaccounts performing?

We made a collective decision to focus design and engineering efforts on scaling subaccount capabilities rather than designing an HQ-specific experience. Subaccounts are where marketing performance matters most, and we knew we could release an offering much faster for them than for HQ. For HQ, we planned to release automated reports and slowly productize from there.

However, with the release of insights, HQ customers requested their own in-app insights experience and weren't pleased that subaccounts got an offering and they didn't. These are our top-paying customers, they have loud voices, and their needs started to impact product-level decisions and priorities throughout.

If I were to do it all over, I would have worked on both in tandem. Although focusing on one problem set at a time and making sure we nailed the solution was the right move, I think I could have designed for both: I was already considering the HQ persona and the entire product ecosystem from a constraint and architecture perspective, and factoring that into the subaccount experience for scalability.

 

Experience takes a backseat to accuracy

Throughout the project, I engaged engineering early to factor in design at scale, system constraints, and edge cases. Engineers are a designer's best friend when it comes to these aspects of a project.

One edge case, however, slipped past all of us. We didn't account for user processing of unsubscribes and archived subscribers, which caused discrepancies in growth metrics between campaign-level and insights-level displays.

Ultimately, this was an engineering challenge, and there was little design could do to actually fix the problem. However, I figured out a way that design could help.

I supported the engineering team by quickly scaling back the implemented designs to exclude the metrics in question, giving them time to fix the issue.

The fallout was pretty extreme for some customers, though, who had adopted these metrics into their own workflows and depended on them for major campaign sends.

It was difficult to understand how much of the negative feedback from channels such as CSAT, NPS, and support was experience-based versus performance-based, which added complexity when factoring feedback into testing, design, and decision-making moving forward.

If I were to do it again, I would build out a more formalized beta-release program where design is more involved in uncovering these types of issues earlier on.


