Q&A: Sheila Campbell and Jonathan Rubin on federal website usability testing


The effectiveness of federal websites varies about as widely as the agencies found across government, and it's not unusual for the least user-friendly sites to draw the ire of an inspector general or become the focus of a Government Accountability Office report.

Two General Services Administration officials recently sat down with FierceGovernmentIT to discuss their efforts to help agencies improve their digital presence and better serve citizens. Sheila Campbell, director of the Center for Excellence in Digital Government, and Jonathan Rubin, First Fridays usability program manager, explained GSA's web usability testing services as well as ways it's institutionalizing lessons learned through blog posts and training.

FierceGovernmentIT: When did the First Fridays program begin?

Sheila Campbell: The program began a couple years ago. The First Fridays usability testing program is really part of a larger evolution that's happening within our office--the Office of Citizen Services and Innovative Technologies here at GSA.

About five years ago, we were primarily citizen-facing and provided a lot of direct services to citizens through USA.gov and so forth. We pretty much realized that USA.gov was only as good as the other federal websites that we were pointing people to. So, we really set out at that point to work with agencies directly to provide support to them in improving their federal websites and digital services.

Over the past five years, we've done that through a variety of programs. You may be familiar with some of them: we provide training through DigitalGov University, and we support large communities of practice like the Federal Web Managers Council and the Content Managers listserv, which includes about 3,000 federal, state and local web and new media professionals across the country.

Usability was always a big part of that. We were always providing tons of best practices and training, but we didn't have a specific approach to how we could support agencies there. So, Nicole Burton, who used to be with our office, did a phenomenal job of saying, 'Okay, where can we have the biggest impact? Where can we really provide support to agencies?'

She realized that it was through this very simple user testing that we could provide the highest impact. She developed a really specific plan and followed a methodology where we invite an agency to join us, and we recruit participants and we do that once a month. That's why it was called the First Fridays program initially. And it just took off and was incredibly successful because of the simplicity of it and also the practical approach.

Part of the thinking was that it wasn't making sense for agencies to spend a gazillion dollars and end up with 80-page reports that are going to sit on the shelf. More practically, what we were trying to do was underscore the importance of user-centered design and usability testing within agencies, and the best way to do it was with these quick hits. They can spend just a half day learning about what the top problems are on their website and making quick, simple fixes.

It started off really as a demonstration project--we were trying to demonstrate the power of user-centered design--but it evolved into a more formal program. We called it First Fridays because we held the tests on the first Friday of every month, but now they're held more often.

FGIT: How does the assessment process work?

Campbell: More often than not they approach us. They've heard about the program now. They know how effective it is. They come to us and say, 'We'd really like to have our site go through the First Fridays method.' And so we sit down with them and first find out what their main business goals are. What are the main things they want to do to improve the website? What kind of data are they getting from their customers that tells them what needs to improve?

Then what we'll do is we'll work to develop a set of scenarios, a set of tasks that we'll walk people through during the test. And then, of course, we need to recruit people.

Typically, it's three people that we test. There's a lot of research out there that shows you don't really need to test 30 people. With three, or five, or some other small number, you're actually capturing the vast majority of problems on a website. So, again, that goes back to our focus on it being very simple and very practical.
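The research Campbell alludes to is often summarized with a simple discovery model from the usability literature (Nielsen and Landauer): if each participant independently uncovers a given problem with probability p--about 0.31 in their published data--then n participants surface roughly 1 - (1 - p)^n of the problems. Here is a minimal sketch of that arithmetic, with the 0.31 figure assumed from that literature rather than anything GSA cited:

```python
# Expected share of usability problems found with n participants, per the
# Nielsen/Landauer model: found(n) = 1 - (1 - p)^n, where p is the chance
# that a single participant uncovers any given problem.
P = 0.31  # per-participant discovery rate reported in the usability literature

def proportion_found(n: int, p: float = P) -> float:
    """Expected fraction of problems uncovered by n test participants."""
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 15):
    print(f"{n:2d} participants -> ~{proportion_found(n):.0%} of problems found")
# Three participants already surface roughly two-thirds of the problems and
# five about 85 percent, which is why small, frequent tests pay off.
```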

FGIT: How do you recruit?

Campbell: It depends on who the audience for the website is. We try to match it up with who a typical visitor might be for that website. Sometimes, if we work with the Department of Labor for example, they might want to recruit somebody who would be a typical visitor to their website. So, we work with them to recruit those folks, and of course we conduct a test.

The testing is in one room and the observers are in another room. And the real power of the test of course is that we invite the people behind the website. So, we have the agency representative there, and hopefully their folks that work on the website--developers and other contractors or key stakeholders.

They get to directly observe how real people are navigating the website. And the reason it's so powerful--it's like any other product design effort--is that the designers get so close to the design of the product that it's sometimes hard for them to see what its problems might be.

It's usually very, very eye opening. It's very useful for them to see real users navigating the site and encountering things they didn't realize people were encountering. Getting that voice of the customer, that's so critical, and we obviously do it in a very simple, straightforward way.

FGIT: When you're conducting the tests, are there certain things that come up often, some recurring problems?

Campbell: Yeah, there are. Of course, there are some things that are unique to a particular website, but for the most part we have been seeing some patterns, and I'll walk you through some of those. That's another thing we're really trying to emphasize: the program is not just these one-off hits; we're really trying to build a body of knowledge around what we're seeing in usability testing.

Jon's done a terrific job on HowTo.gov, sort of documenting before and after and also some of the key things that we find.

Fundamentally--and it sounds funny because it seems like common sense--one of the biggest problems is a lack of purpose. What we'll find sometimes as we walk people through the tasks is that it's not clear to them what the purpose of the site is. They get to the homepage, and there's so much content there that it's overly cluttered, and they're just trying to get their tasks done. People are very task focused. They're generally not trying to surf around; they have a very specific thing that they want to accomplish.

Sometimes these sites are built with an emphasis on pushing a particular initiative or internal agency program, and that's fine, but it's sometimes to the detriment of people trying to get to their top task. That's definitely one of the top things we see: just a real lack of a clear purpose.

The other thing is just plain confusing navigation. People can't find what they need. They may not be starting on the homepage--they may have found the page through a search engine--and if the navigation isn't clear to them, they can get easily lost.

One of the things we see particularly on government websites is confusing jargon. You know, with the Plain Writing Act and lots of emphasis on using plain language, it's still a bit of a problem on government websites that we use inside-baseball terminology. We overuse acronyms; we think people are going to understand the language of the organization. That is something, I think, where this method is very effective.

It is very clear--very clear--when people get tripped up by that language, and then it's just a showstopper. What they need is right behind a particular label, but they have no idea what that label means. So, we have lots of examples where we've made a very simple change from a term that's too technical to a term that a user is going to use, and that can immediately improve people's ability to find what they need.

And, again, it seems common sense, but a lot of times there are just too many words. People get on a page and it's all words. Sometimes it's content that was just in a .pdf document or something and it's sort of slapped up on a webpage. People are just looking for a quick answer to a question and they have to wade through paragraphs and paragraphs of text. 

Through the testing program, our office has been able to work with agencies and show them what happens when information is buried in really, really dense text and how they can lighten it up. And this is where we have a lot of best practices on HowTo.gov on how to chop up the information--using tables and, frankly, sometimes just pulling out all the extraneous information. It's just too wordy. They can often say it in half the words, or even a third.
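One rough way to put the "too many words" problem in numbers is a quick script that measures average sentence length, a common plain-language proxy. This is an illustrative sketch only; the 20-word threshold is an assumption, not official guidance:

```python
import re

# Rough plain-language check: flag copy whose sentences run long on average.
# The 20-word threshold below is an illustrative assumption.
def avg_sentence_length(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = sum(len(s.split()) for s in sentences)
    return words / max(len(sentences), 1)

copy = ("Applicants who wish to obtain benefits must first complete the "
        "enclosed eligibility determination worksheet in its entirety and "
        "return it to the appropriate regional office within thirty days.")
if avg_sentence_length(copy) > 20:
    print("Dense copy: consider shorter sentences and plainer wording.")
```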

The final recurring issue is search results that aren't relevant. Because with the testing, sometimes people are searching, sometimes they're navigating. And, as you know, people expect to find what they need right away. Oftentimes search results don't give them what they want. Either the search engine isn't delivering great results or the information on the page hasn't been tagged appropriately. So, it's just really hard for people to wade through that information.
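On the tagging point, one practical check is whether pages expose the descriptive titles and meta descriptions that most site search engines weigh when ranking results. The sketch below is hypothetical--not a GSA tool--and the URLs are placeholders:

```python
# Hypothetical audit sketch (not a GSA tool): flag pages that lack the
# descriptive <title> and <meta name="description"> tags that site search
# engines typically use when ranking results. The URLs are placeholders.
from html.parser import HTMLParser
from urllib.request import urlopen

class TagAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.description = ""

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.in_title = True
        elif tag == "meta" and (attrs.get("name") or "").lower() == "description":
            self.description = attrs.get("content") or ""

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

for url in ("https://www.example.gov/forms", "https://www.example.gov/benefits"):
    audit = TagAudit()
    audit.feed(urlopen(url).read().decode("utf-8", errors="replace"))
    if not audit.title.strip() or not audit.description.strip():
        print(f"{url}: missing or empty title/meta description")
```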

So, part of what we do is, after we run the three participants through the test, Jon and his team will do a debrief with the observers, and they collectively, as a group, identify what the top problems were, and they walk through some different solutions.

Again, it's very much oriented toward the practical approach, where we don't leave it at 'Hey, that was a nice test, we'll see you tomorrow.' It's more like, 'Hey, what are the top things we can fix?' And the other part of it is, 'What are the top things we can fix within the next 30 days?'

We want to be very action oriented and don't want to draw it out. Because the longer we draw it out, the less likely the agency is to really make the changes--trying to keep this very simple, focused and practical.

FGIT: As for making these actionable changes, who are you typically working with? Is it the IT department? Is it someone from public affairs that has ownership of the websites? Contractors?

Campbell: Because the program is so popular and effective, we actually do have to triage the requests that come in. So, we do try to put an emphasis on high-impact websites--those that tend to get a lot of traffic and really affect people's daily lives. So, we get sites like weather.gov, the Library of Congress, the Army and the Department of Transportation. And more and more we're testing mobile sites. It's a whole range of sites. We work with dozens of agencies, which has been great because not only has the project been very deep, it's been very broad.

In terms of who we get at the table, absolutely it's the IT department. It's the developers, the tech folks. It's also the content folks, and so within agencies it depends on how their web team is structured. Sometimes it's in the CIO shop; more and more it's in the public affairs shop. Sometimes we've had a higher-level senior representative there. Sometimes we've gotten the CIO there.

Jonathan Rubin: Getting that mix is really important. You want to have those players represented so that you have agreement on all those pieces. So, when you present a solution you can just go ahead and do it.

Campbell: That's the challenge with any program like this: getting that buy-in. It's easier if you know you can realistically make these changes. One of the things Jon's team also does is check in with the partners 30 days later and say, 'Hey, these were the fixes that we all agreed you guys were going to make--how are you doing on that? Do you need some support or any more help?'

Because we really want to work in partnership with them and we want to help them. So, if they have questions we want to be there to really make sure we can effect that change.

FGIT: Can you give me one example of an agency using the program and then making a major improvement to its site?

Campbell: One of the agencies where we saw a real improvement that we were really thrilled with was weather.gov.

We worked with the NOAA team, and again, a lot of it is just preparation--making sure we had the key stakeholders and really good buy-in across the board. It made for a really successful test with the weather.gov folks. What we found was that weather.gov was falling into a trap that some of these more scientifically oriented sites fall into: they have multiple audiences--they have scientists, but they also have a very big general-public audience. And they were using some terms that were more scientific and weren't resonating with the average person.

With the earlier version of the site, you would go on to the site and get this map of the United States and there were various tabs. One of the tabs was 'climate,' but a person looking for weather information didn't know that's where the weather forecast was, because, frankly, they were looking for the words 'weather forecast.'

And after seeing three people go through the test with simple scenarios--find the weather forecast for your region--it was clear they weren't going to that tab, because 'climate' meant something completely different to them. And frankly, it is something different. It's a much broader, sort of global thing.

Again, it seems simple, common sense and straightforward, but until you do the testing a lot of times that kind of solution doesn't present itself. The weather.gov team went back and they made that fix. They immediately saw results--improved usability, improved customer experience--and sometimes these small changes can have a big impact.

The other example is the Army. Jon, do you want to talk about that?

Rubin: Sure. For the Army, the site's main functions were recruiting and general information about the Army. But the number one thing people were looking for was information about benefits for themselves and their families, and it was actually buried on the site.

What would happen was, people looked, they couldn't find it, they'd contact the helpdesk, and this would pile up into more work that costs money. So, following the tests, we recommended they put that important link right on the homepage, with a quick link at the top right-hand corner.

That's exactly what they did, and instantly traffic started moving from the helpdesk to the quick link. They saw helpdesk volume decrease, which everybody loved. This really simple process--figuring out what people are looking for and helping the agency prioritize it--worked.

FGIT: Do you have a simple takeaway or tip for agencies on what they can do to make their website more usable?

Campbell: What we've really found through this program--although we sort of knew it before, we've now seen it play out in real life--is that when you're developing and designing any product, you absolutely have to have your customers at the forefront.

You have to have them at the center of everything you do. At the beginning, the middle and the end, you have to design the product based on your customers' needs. And if you don't do that, you end up developing a product that you've invested a ton of money in and that doesn't always work.

And in government, especially with the financial situation, we literally can't afford to continue to build these products without having them go through user testing, without getting feedback from customers.

And so, I think that is the most important advice we have for folks who are developing any digital product or service: take that user-centered design approach and really embed it into the design lifecycle. Don't wait until you've already built the product, because it will cost you a zillion times more than if you had done the testing early on.

You can do it with even just a prototype. We've even done some hallway testing, where we just show people a piece of paper with just a sketch of a website and get super-valuable feedback way early in the design process. There are multiple ways to do it.

FGIT: It sounds like some of the solutions aren't even technical--they're about management, stakeholder involvement and low-tech visual design.

Campbell: Absolutely. Sometimes it is something very technical with an application on the backend, but a lot of times the top problems are terminology, too much information, too many words on the page; sometimes it could be a design thing or the tone. Technology is a piece of it, but other times it's something like a governance issue.

For more:
- read a HowTo.gov blog post about First Fridays
- click through a before and after gallery on website usability improvements

Related Articles:
GAO: IRS needs long-term web strategy
GAO recommends usability improvements for saferproducts.gov
The top ways federal websites messed up in 2012