Enterprise Design Systems Management

Maintain Adoption Levels

Ben Callahan

Sparkbox

Check out a free preview of the full Enterprise Design Systems Management course

The "Maintain Adoption Levels" Lesson is part of the full, Enterprise Design Systems Management course featured in this preview video. Here's what you'd learn in this lesson:

Ben discusses defining the health metrics that are important for the design system to help articulate an adoption health score. He also discusses teeter-totter maturity: what causes it, and how to avoid continuing to bounce between Stage 2 and Stage 3.

Transcript from the "Maintain Adoption Levels" Lesson

[00:00:00]
>> Now, this actually isn't enough either. We also have to think about the health of specific characteristics of your adoption. I like to think about this with what I just call health metrics. So there needs to be a set of health scores for the specific metrics that you care about, that are important to your organization.

[00:00:19]
So I'll give you a couple of examples. Say the version of the design system that's in use is a really important metric to you. Then you might determine that there's a grade you can assign to the system version. So if somebody is using the current version, they're getting a grade A.

[00:00:36]
If they're using the current version back one, they get a grade B, and anything two or older is a grade C, right? Again, this is just an example, and you may not care as much about the version of the system. Perhaps you do your versioning by component, right? So maybe there's a different metric for you.
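As a rough illustration of that version metric, here is a minimal TypeScript sketch. The function name, the use of major versions, and the exact A/B/C cutoffs are assumptions drawn from the example above, not part of any real tooling:

```ts
// Hypothetical sketch: grading how far behind the current design system
// release a product is. Version numbers and thresholds are illustrative only.

type Grade = "A" | "B" | "C";

/** Major versions behind the latest release mapped to a letter grade. */
function gradeSystemVersion(latestMajor: number, productMajor: number): Grade {
  const behind = latestMajor - productMajor;
  if (behind <= 0) return "A"; // on the current version
  if (behind === 1) return "B"; // one major version back
  return "C";                   // two or more behind
}

// e.g. the latest release is v5 and a product is still on v3
console.log(gradeSystemVersion(5, 3)); // "C"
```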

[00:00:54]
But what you're getting at is a specific metric that we wanna track the health of across those different levels of adoption. Another example might be product coverage, so how much of a product is using the system. So 75% or more of my product is built using the system, that's phenomenal, that's a grade A.

[00:01:16]
50 to 75% maybe is a B, and so on and so forth. And so again, there may be a handful of these health metrics that you define. And these are gonna grow and change as your system does and as your organization does, okay? So don't make these set in stone, but this is how you get at a much more nuanced definition of adoption.
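To show how a handful of these metrics could roll up into a single adoption health score, here is one possible sketch. The coverage thresholds, the grade-to-points mapping, and the simple averaging are all invented for illustration; your own metrics and weighting would differ:

```ts
// Hypothetical sketch: a product coverage grade plus a simple roll-up of
// several health metrics into one adoption health score per subscriber.
// All names, thresholds, and weights are made up for illustration.

function gradeCoverage(percentBuiltWithSystem: number): "A" | "B" | "C" {
  if (percentBuiltWithSystem >= 75) return "A";
  if (percentBuiltWithSystem >= 50) return "B";
  return "C";
}

const gradePoints = { A: 3, B: 2, C: 1 } as const;

interface HealthMetric {
  name: string;                     // e.g. "system version", "product coverage"
  grade: keyof typeof gradePoints;  // "A" | "B" | "C"
}

/** Average the grades into a 0-3 health score for one product. */
function adoptionHealthScore(metrics: HealthMetric[]): number {
  const total = metrics.reduce((sum, m) => sum + gradePoints[m.grade], 0);
  return metrics.length ? total / metrics.length : 0;
}

console.log(
  adoptionHealthScore([
    { name: "system version", grade: "B" },
    { name: "product coverage", grade: gradeCoverage(80) }, // "A"
  ])
); // 2.5
```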

[00:01:38]
That also gives people a vision of how they can come on board as subscribers, and tracks the health of all of those adoptions. I will say, it's really hard to track this stuff. And there are all kinds of articles out there if you just Google, like, how do I track design system adoption, right?

[00:01:58]
You're gonna get a lot of stuff. And probably 80% of that is somebody who's written a little open source code scraper that goes out, looks at repos, and looks for includes of your design system components in production code bases or whatever. That's great, but that's a tiny slice of adoption, okay?
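For reference, those scrapers usually amount to something like the sketch below: walk some cloned repos and count imports of the design system package. The package name "@acme/design-system" and the directory layout are hypothetical placeholders:

```ts
// Hypothetical sketch of the kind of scraper those articles describe:
// recursively scan cloned product repos and count import statements that
// reference the design system package.
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

const DS_PACKAGE = "@acme/design-system"; // hypothetical package name

function countImports(dir: string): number {
  let count = 0;
  for (const entry of readdirSync(dir)) {
    const path = join(dir, entry);
    if (statSync(path).isDirectory()) {
      if (entry === "node_modules" || entry === ".git") continue;
      count += countImports(path);
    } else if (/\.(ts|tsx|js|jsx)$/.test(entry)) {
      const source = readFileSync(path, "utf8");
      count += (source.match(new RegExp(`from ['"]${DS_PACKAGE}`, "g")) ?? []).length;
    }
  }
  return count;
}

console.log(countImports("./cloned-repos/checkout-app"));
```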

[00:02:21]
That's one discipline, that's a developer somewhere, and all that means is they included it in their code. It doesn't actually mean they necessarily use it [LAUGH] that much, right? So what we need to recognize is that adoption can happen across every discipline. So I'm interested in understanding, how are my product owners using the design system?

[00:02:41]
How are my UX writers using the design system? How are my front-end designers and front-end developers using the design system? QA, right? I mean, one place for all of these things to show me exactly how they should look and feel, that's a QA dream. They should be loving this work. So how do we think about adoption from a much broader perspective? I want to think about all those disciplines.

[00:03:06]
Anybody whose job it is to touch and interact with how we get those interfaces out into the open should be considered here. And of course, it's not enough to track it. You also have to maintain it, and this is really hard, okay? As far as I can tell, just based on the interviews and research, the best advice that I can give you in terms of maintaining adoption

[00:03:33]
is actually more about building trust with your subscribers, right? This is trust built with communication, with consistency, and with transparency. The example I think is easy is what we were talking about earlier with synchronization, keeping all these assets in sync, right? So I have a Figma file that's got a button in it.

[00:03:53]
I've got a React component that's a button. I've got an Angular component that's a button. I've got a prototype, maybe something in Axure, that's a button. All of these things are different assets, different tangible pieces and parts that different disciplines need to do their work with a button. So now I've got four different versions of this thing out there, this button.

[00:04:11]
So when I make a change in one, how do I make sure we're synchronizing that across all of these assets that we have? I can guarantee you, your assets are never gonna always be in sync. It's next to impossible to do that, especially as you scale. So what you have to do, and this is a way that you maintain that trust,

[00:04:31]
is you're really good with transparency about the state of those components, right? So what we're talking about is, again, closing the gap between their expectation when they use the thing and the reality of using the thing. I want to be clear and say, actually, this button in React is in the backlog to be updated for a new version, but we'll support that change.

[00:04:55]
If you decide to integrate this, we'll help you with the migration up to the new version. But you've got to let them know that's what they're getting, so that they don't see the Figma thing, decide to use the button, and then find it's not what they saw in Figma, right?
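One lightweight way to be transparent about that state is to publish a per-asset status record alongside each component. This is only a sketch of the idea, not a tool the course recommends; the asset names, statuses, and fields are invented:

```ts
// Hypothetical sketch: a published sync-status record per component so
// subscribers can see which assets (Figma, React, Angular, prototype) are
// current before they adopt. Statuses and fields are invented for illustration.

type AssetStatus = "current" | "update-in-backlog" | "deprecated";

interface ComponentStatus {
  component: string;
  assets: Record<"figma" | "react" | "angular" | "prototype", AssetStatus>;
  notes?: string;
}

const button: ComponentStatus = {
  component: "Button",
  assets: {
    figma: "current",
    react: "update-in-backlog", // matches the example: React lags the Figma file
    angular: "current",
    prototype: "current",
  },
  notes: "React update is in the backlog; migration support available on request.",
};

console.log(JSON.stringify(button, null, 2));
```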

[00:05:08]
Now you're creating tension and you're breaking trust. So that's kind of what we need to do here. I'll also say, in terms of tracking this stuff, there is no tool that does all of this for you, okay? I mean, I wish that I could tell you it existed, that you just have to install it and it works.

[00:05:26]
But we're literally talking about tracking all these different nuances of different usage in different tools. I mean, this is not simple to do. And so, early on, you can't do all of these, right? You need to pick one simple path, and you need to choose a couple of metrics of health.

[00:05:44]
And that might be a very manual process early on, where maybe once a quarter you send a few emails to ask a few folks how things are going, right? It can be that simple early on. But over time, you can start to develop some more tools to do these things.

[00:05:59]
There's obviously an API in Figma that lets you see some of this; there's even analytics in Figma. So you can do some work to pull the data you get from a specific tool, maybe combine the code scraping things with that, along with some pulse surveys or whatever.
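A minimal sketch of what stitching those sources together could look like, assuming a Figma personal access token in a FIGMA_TOKEN environment variable and the standard Figma REST files endpoint; the report fields, the product name, and the survey scale are placeholders, and the exact Figma data you pull will depend on what you decide to measure:

```ts
// Hypothetical sketch: combine a Figma REST API call, the import counts from
// a scraper, and pulse-survey responses into one report per product.

const FIGMA_TOKEN = process.env.FIGMA_TOKEN ?? "";

// Fetch a Figma file; what you derive from the response is up to you.
async function fetchFigmaFile(fileKey: string): Promise<unknown> {
  const res = await fetch(`https://api.figma.com/v1/files/${fileKey}`, {
    headers: { "X-Figma-Token": FIGMA_TOKEN },
  });
  return res.json();
}

interface AdoptionReport {
  product: string;
  figmaLibraryInUse: boolean; // derived from the Figma data
  codeImportCount: number;    // e.g. from the scraper sketch above
  surveyScore: number;        // average of pulse-survey answers, 1-5
}

function buildReport(
  product: string,
  figmaInUse: boolean,
  imports: number,
  survey: number[]
): AdoptionReport {
  const surveyScore = survey.length
    ? survey.reduce((a, b) => a + b, 0) / survey.length
    : 0;
  return { product, figmaLibraryInUse: figmaInUse, codeImportCount: imports, surveyScore };
}

// const file = await fetchFigmaFile("your-file-key"); // hypothetical usage
console.log(buildReport("checkout-app", true, 42, [4, 5, 3]));
```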

[00:06:13]
That will get you to a place where you can report on the health of your adoption across these different tiers. And then I'll just cover one other concept here in Stage 2, which is super common. I have heard so many stories from folks who have been in what I just call a teeter-totter maturity.

[00:06:34]
They've been in this problem where they get to Stage 2, and they do all this work, and they feel like they've got a good adoption level, they have a good understanding of adoption. And then all of a sudden they're in Stage 3, working on other problems, and pretty quickly they see adoption fall off, and they fall back into almost crisis mode, like, how do we fix adoption?

[00:06:55]
And so this is a very normal experience, and it's really frustrating, and a lot of folks end up giving up. In fact, we were talking a little bit about Dan Mall earlier; he's got a great talk on the design system graveyard, which is a real thing. A lot of our clients are on their fifth or sixth attempt now.

[00:07:14]
And that's probably because of something like this, right? Where they get into here and they just can't get people to adopt. Or they do for a while and then they stop using it, or something happens. It's a really frustrating scenario. As best as I can tell, this happens because the same team that was so focused on releasing that first version of the system in Stage 1

[00:07:37]
is now being asked to figure out how to make the system really easy to adopt. They're being asked to learn how to provide really good, trust-building customer support to their existing subscribers. And it's a new system, so it's gonna need support. That is a big job, right?

[00:07:57]
Customer support for design systems is a big job, and almost nobody has that role on their teams, especially in Stage 2. And we're also saying you gotta continue to add new features and fix all the bugs that surface. I mean, we're just asking too much of these folks.

[00:08:10]
So in the face of all of that work, it's really easy to leave off all the stuff that will make it easy to adopt. We just push it aside and think, I've got this fire I've got to put out, right? But that means that you're spending a lot of time supporting new adoptions, right?

[00:08:30]
Instead of evolving the system in a way that sort of codifies that ease of adoption, automating the adoption process in some ways. You're gonna have to take the time; until you take the time to make it easy to adopt, it's really hard for that adoption to actually last.
