SecurityScorecard

A Sales-Ready Scorecard

Summary: Within my first year at the company, I shipped a scorecard redesign that boosted sales enablement by simplifying complex scoring data, directly shortening time-to-value.

This effort also addressed user feedback, boosting perceived product maturity and customer satisfaction. Ultimately, my redesign elevated design as a partner to the company, increasing trust in the design function.

Intro

The CEO was looking for ways to speed up time-to-value and shorten the sales cycle. Prospects were becoming customers, but only once they trusted the scoring.

The latest feedback pointed to an underwhelming scorecard experience that left prospects with scoring questions.

The CEO tasked the head of product to use new benchmarking work from the data science team to better contextualize the scorecards and speed up time-to-value.

My Role

As a new contract product designer, I was tasked to design how the data science benchmarking work would appear in the product.

My Partners

CEO, main stakeholder

Head of product (my manager)

Data science team

1 front-end engineer

Explorations: Benchmarking

I followed my manager's recommendation to start with the Score Breakdown, a list of threat factors used to calculate Scorecards.

I designed basic benchmark graphs and ideated with the data science team and manager.

Reviewing User Feedback

New to the company and its industry, I caught up on the latest user feedback around unintuitive Scorecards and straight-A "wastelands".

The current Scorecard design alone wasn't impactful with prospects who needed external nudges to see value.

Information Architecture (IA) Double Trouble

Separately, I identified nearly 100% information overlap with little visual distinction in the scorecard.

Users were switching between two prominent views to understand their Scorecard.

I fact-checked with my technical partners to confirm the outcome of my audit.

A Vision in 4 Parts

Stepping back to see the bigger picture, I realized design could play a larger role than expected.

I wanted a scorecard redesign that would:

  1. Reduce information overlap

  2. Sharpen visual hierarchy

  3. Show product reach

  4. Motivate the user to action

Part 1:
Reduce information overlap

Both the Score Breakdown and Issues listed factors, factor grades, and detected factor issues.

Separately, Issues also showed the number of findings for each issue.

Additionally, my manager wanted issue severity levels added in.

I used the real estate of a table row component to land a visual direction that merged the two views' overlapping information.

Part 2:
Sharpen visual hierarchy

Merging the two views forced me to disentangle the hierarchy of objects.

  • A company had factors.

  • A factor had issues.

  • An issue had findings.

Additionally, a factor had different severities of issues, affecting the scoring.

I wanted to land a view that intuitively showed how factors and issues affected scoring.
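The object hierarchy above can be sketched as a minimal data model. This is illustrative only: the class and field names, and the severity labels, are my own assumptions, not the product's actual schema.

```python
from dataclasses import dataclass, field


@dataclass
class Finding:
    """A single detected instance of an issue."""
    detail: str


@dataclass
class Issue:
    """A scannable issue within a factor; its severity affects scoring.

    Severity labels are assumed for illustration.
    """
    name: str
    severity: str  # e.g. "high", "medium", "low"
    findings: list[Finding] = field(default_factory=list)


@dataclass
class Factor:
    """A threat factor contributing to a company's Scorecard."""
    name: str
    grade: str  # letter grade, e.g. "A"
    issues: list[Issue] = field(default_factory=list)


@dataclass
class Company:
    """A company has factors; a factor has issues; an issue has findings."""
    name: str
    factors: list[Factor] = field(default_factory=list)


def severity_summary(factor: Factor) -> dict[str, int]:
    """Count findings per severity level, counting only issues with findings.

    This mirrors the severity-level summary idea from Part 4: surfacing
    severity and finding counts earlier in the experience.
    """
    summary: dict[str, int] = {}
    for issue in factor.issues:
        if issue.findings:  # skip issues with no detected findings
            summary[issue.severity] = summary.get(issue.severity, 0) + len(issue.findings)
    return summary
```

A factor with two high-severity findings and one low-severity finding would summarize to `{"high": 2, "low": 1}`, which is the kind of at-a-glance signal the redesign aimed to surface.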

Part 3:
Show product reach

By this time, I had grasped the scoring model enough to suggest feasible ideas that got implemented.

One example was learning that there was a finite number of issues.

This empowered me to propose not only listing issues with findings, but also issues without findings.

This way, prospects could immediately see the breadth of issues scanned.

Part 4:
Motivate the user

Additional research showed that users prioritized items based on issue severity and number of findings.

I proposed a severity level summary that surfaced this information sooner in the user experience.

This update also helped address the straight-A "wasteland" effect mentioned earlier.

Back to Benchmarks:
A twist in the plot

I kept running into an ambiguity with the benchmark percentiles using the same 0-100 scale as the factor grades.

I wanted clarity so I could better convey their differences in my designs.

I raised this with the data science team, who confirmed that the two metrics were in fact directly related.

To my surprise, my final benchmarking proposal was to simply present the percentile as a number beside the factor grade.

Odds & Ends:
List simplification

The 10 factors were divided into primary and secondary threat factors.

In my discussions with sales engineers, I learned the division felt forced and wasn't impactful in sales calls.

I proposed simplifying the groups into one list of 10 factors, with which all partners agreed.

This simplification quickly made its way to our marketing collateral, some of which I also designed in my early days.

Odds & Ends:
Percentile clean-up

When I joined the company, there was inconsistent use of the percentile throughout the scoring.

The 10 factors had letter grades but no number.

The overall company score had a grade and number but would be inconsistently presented as a percentile or percentage.

I saw an opportunity to set a style guideline to clean up the way the product presents percentiles.
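As a hypothetical illustration of such a guideline, a single formatting rule can keep the presentation consistent: always render the value as an ordinal percentile, never as a percentage. The ordinal style here is my assumption, not the actual guideline I set.

```python
def format_percentile(value: int) -> str:
    """Render a 0-100 percentile consistently as an ordinal (e.g. "88th percentile").

    Centralizing this in one helper prevents the same value from appearing
    as a percentile in one view and a percentage in another.
    """
    if not 0 <= value <= 100:
        raise ValueError("percentile must be between 0 and 100")
    # English ordinal suffixes: 11th-13th are special cases
    if 10 <= value % 100 <= 20:
        suffix = "th"
    else:
        suffix = {1: "st", 2: "nd", 3: "rd"}.get(value % 10, "th")
    return f"{value}{suffix} percentile"
```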

Results

The redesign was one of the first revamps of the scorecard at the company.

It set a new tone for what stakeholders and partners could expect from product design: visual and functional problem-solving.

10 Years Later:
A design that lives on

Nearly 10 years later, my scorecard design still lives in the product, referenced in the company's help center.

While the Score Breakdown has had touchups due to an evolving design library, the overall structure remains in place.

Of course, the business has grown. The company's success no longer hinges on the Factors list, nor should it.

Still, one thing I take pride in as a designer is work that ages well and serves well.