
Hiring is hard, so here’s a model to make it easy

4 min read
Nathan Kinch
  •  Jun 16, 2017

For design leaders, the success of their product is what matters most. To make this so, we need people—great people.

So, today’s question of focus: Do we hire full-time employees (FTEs), contractors, or work with an agency to achieve what matters most?

The right answer to this question is contextual. It’s filled with nuance and ambiguity.

“We can’t build great products without great people.”

This has a direct impact on >X, so we decided to conduct research. Our objective was to better understand the role an agency’s website plays in this decision.

This article shares insights from that research. It also highlights a heuristic technique for these situations. The tool is simple, visual, and easy to start using now.

Let’s get started.

What’s a heuristic?

According to Wikipedia: “A heuristic technique, often called simply a heuristic, is any approach to problem solving, learning, or discovery that employs a practical method not guaranteed to be optimal or perfect, but sufficient for the immediate goals.”  

Perfect isn’t a real thing in digital product design. There’s effective, ineffective, and a bunch of “stuff” in between. In these types of environments, a heuristic gives people a frame of reference.

Today, design leaders must effectively synthesize large quantities of disparate information. From this they must make fast, “solid” decisions. Because of this, heuristics have a heap of value.

“In digital product design, there is no ‘perfect.’”

Brandon Chu wrote about 7 heuristics for being a product director. Others have been writing about this for a long time.

The long and the short: this isn’t new, but it is valuable.

Heuristics for hiring

To better understand how certain hiring decisions are made, we engaged 15 product and design leaders from the US, EU, and Australia in contextual inquiry user interviews. In these sessions we focused on the situational context of hiring decisions. This helped us better understand their desired outcome.

Related: The 3 most powerful heuristics designers can use

To summarize, we learned 3 things:

  1. If the work is a long-term strategic priority, hire FTEs and nurture their talent
  2. If the work is important and the timeline is tight, hire contractors and move fast
  3. If the work is “left field” or outside your realm of expertise, hire a specialist agency to execute

This understanding led to the development of a simple model.

The heuristics of hiring: A decision graph

This graph gives decision-makers a simple, visual way to approach the question of who to hire under which circumstances.

Here’s how to use it.

Step 1

Define your hiring purpose.

Here’s an example: You’re a travel agency. You think you want to build a chatbot. You think this because you want to reduce your cost of customer support. You read about it on TechCrunch and think it’s cool.

*For all our designers out there, I can feel the burn right now. You’re thinking, “Why aren’t they searching for the root cause? Maybe bad UX, a broken system, or simply a dysfunctional business process is the cause…”

This is why you have jobs. Onward!

Step 2

Assign a score from 0–10. Do this for Strategic Priority and Urgency.

Our example continued: You assign a score of 4 for strategic priority. This is because the initiative is reactionary rather than strategic. It’s reactionary because your support costs doubled last quarter. You assign a score of 10 for urgency as a result.

Step 3

Plot the scores on the graph.

Bringing our example to a close: After plotting this out, you see “hire agency” seems like the logical path forward. You proceed to search for agencies specializing in the design of conversational interfaces. You also look for agencies working with leading chatbot frameworks.

A few days later you’re sitting in a meeting with the agency. They ask why you want to build a chatbot… You can imagine how the rest of the conversation goes.

If you’re now thinking, “Cool, but what goes in the bottom-left quadrant?”, think about your organization. What happens to stuff that has low strategic priority and no relative urgency? I may have said enough.

This process is no exact science. It’s a heuristic. Call it the simple synthesis of what we learned about how leaders currently make this hiring decision.
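Since the graph itself is just two 0–10 axes split into quadrants, the logic can be sketched in a few lines of Python. The mid-point threshold of 5 and the function name are assumptions; the article doesn’t give exact cut-offs.

```python
def hiring_recommendation(strategic_priority: int, urgency: int, mid: int = 5) -> str:
    """Map 0-10 scores to a quadrant of the hiring decision graph.

    The mid-point threshold of 5 is an assumption; the article's graph
    doesn't specify exact cut-offs.
    """
    if not (0 <= strategic_priority <= 10 and 0 <= urgency <= 10):
        raise ValueError("scores must be between 0 and 10")
    if strategic_priority >= mid and urgency < mid:
        return "hire FTEs"          # long-term strategic priority: nurture talent
    if strategic_priority >= mid and urgency >= mid:
        return "hire contractors"   # important work on a tight timeline
    if strategic_priority < mid and urgency >= mid:
        return "hire agency"        # reactive or outside your expertise, needed fast
    return "deprioritize"           # the bottom-left quadrant

# The travel-agency chatbot example: strategic priority 4, urgency 10
print(hiring_recommendation(4, 10))  # → hire agency
```

Scribbling on paper, as below, works just as well; the point is only that the mapping is mechanical once the two scores are assigned.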

Like the example above, a few minutes spent scribbling on a printed version of this model helps with clarity. It gives decision-makers a clear view of the moving parts. It makes the decision about which category to hire from a bit easier. And, if the decision must be made quickly, it can help with that too.

The surprising thing we’ve found is that the graph helps with team dynamics and context. By this we mean the graph helps existing team members understand why they’re growing their FTEs. It helps them understand why they’re bringing in a bucketload of new contractors. And it even helps them understand why that secretive agency has been brought into the mix.

For us, this was a valuable exercise. But we needed to go further. We wanted to focus effort on the agency decision option. We wanted to understand the role an agency’s website had on the final decision. Would the website experience impact the outcome? Or was it all about who knows who?

From first impression to contact

To find out, we ran basic usability tests with 15 decision-makers. We added contextual inquiry to help extract further insights.

In these sessions, we tested variations of content, CTAs, and visual design using InVision. This kept our costs low and gave us the ability to test hypotheses quickly.

We used 3-Pillar Design to help. This methodology enables us to assess human experiences, and it helps us understand value, meaning, and engagement from a user’s perspective. It gives us a strong proxy measure for whether an experience is good, bad, or somewhere in between.

From this we could determine if “better” digital experiences impacted hiring decisions.

To use this method, we needed to work with each participant on definitions for each key pillar.

What we learned is that value, or the efficacy of the experience, basically meant how quickly a decision-maker got the gist—how quickly they could tell whether we were any good at what we do. Then, how easy it was to get in touch with us. And, of course, how long it took us to get back to them.

We learned that meaning depends. It depends on the strength of the business pain, budget availability, and existing resources. A key insight was that these types of people only tend to spend time looking at companies like ours when the relative priority is high. In short, they’re ready to hire, and they’re fairly set on the agency option.

What we learned about engagement was that it meant more to design leaders than to other decision-makers. Even so, the little moments of “designed delight” were well received. We even had some of the interview attendees referring us on to people they knew.

So how did we actually get these scores?

We added a subset of questions to our user interviews. After we defined each of the 3 pillars with participants, we asked them to score each pillar from 0–10.

These measures are largely subjective. They’re also completely contextual. Yet the sum of the scores served its purpose well. It gave us a clear view of the impact our experience had on sentiment and the propensity people have to contact us after navigating our website.

Our past experience tells us that value, meaning, and engagement are fluid measures. To account for this we asked the question at 3 separate stages of the experience. You can see this plotted above. As expected, different scores were given at different stages of the experience.
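As a rough sketch of the bookkeeping, here’s one way the per-stage scoring could be tallied. The pillar names and the 0–10 scale come from the article; the stage names, scores, and data layout below are hypothetical.

```python
PILLARS = ("value", "meaning", "engagement")

def vme_score(scores: dict) -> int:
    """Sum one participant's 0-10 pillar scores for a single stage."""
    for pillar in PILLARS:
        if not 0 <= scores[pillar] <= 10:
            raise ValueError(f"{pillar} score must be between 0 and 10")
    return sum(scores[pillar] for pillar in PILLARS)

# One participant, scored at three stages of the experience
# (stage names are illustrative; the article doesn't name them)
participant = {
    "first impression": {"value": 6, "meaning": 4, "engagement": 7},
    "mid-session": {"value": 8, "meaning": 6, "engagement": 7},
    "after contact": {"value": 9, "meaning": 7, "engagement": 8},
}
for stage, scores in participant.items():
    print(f"{stage}: {vme_score(scores)}/30")
```

Re-asking the same three questions at each stage is what lets the scores be compared over the course of the experience.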

So what was the result?

Decision weighting

People who gave the highest VME scores had the greatest likelihood to refer us to people they knew. This was not an expected outcome of the qualitative research—it was a bonus.

Related: Getting started with data-driven design

Discovering this was a great start. But to determine if the small scale qualitative research effort had any real validity, we needed more data. So we started looking at the quantitative data we have access to via Google Analytics.

We looked at a number of key metrics, including:

  • Goals
  • Unique visitors
  • Page views
  • Content
  • Bounce rate
  • Traffic sources
  • Time on site, and
  • In-page analysis

Rather than taking a broad view, we focused on:

  1. The correlation between specific pageviews (i.e. viewing both Manifesto and Clients pages) and a contact submission
  2. The correlation between total time on site and a contact submission
  3. The correlation between specific referral sources and a contact submission

We learned that:

  1. If a visitor viewed both the clients and manifesto pages of our website, they were almost twice as likely to contact us
  2. More time on site resulted in higher conversion. This was most prevalent for direct visitors
  3. Popular content*, when compared to other referral sources, increased the likelihood of a contact submission by >35%

*Our internal definition of popular content is content we’ve contributed to that has >250 publicly available mentions (e.g., Twitter), and/or has >1,000 readers/viewers
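As a toy illustration of the first comparison, here’s how the lift from viewing both pages might be computed on a per-visit export. The real analysis used Google Analytics reports; the record layout and every value below are made up.

```python
# Hypothetical per-visit records: which key pages were viewed, and whether
# the visit ended in a contact submission. All values are made up.
visits = [
    {"clients": True, "manifesto": True, "contacted": True},
    {"clients": True, "manifesto": True, "contacted": True},
    {"clients": True, "manifesto": True, "contacted": False},
    {"clients": True, "manifesto": True, "contacted": False},
    {"clients": True, "manifesto": False, "contacted": True},
    {"clients": False, "manifesto": True, "contacted": False},
    {"clients": True, "manifesto": False, "contacted": False},
    {"clients": False, "manifesto": False, "contacted": False},
]

def conversion_rate(visits, predicate):
    """Share of visits matching `predicate` that led to a contact submission."""
    matching = [v for v in visits if predicate(v)]
    return sum(v["contacted"] for v in matching) / len(matching) if matching else 0.0

both = conversion_rate(visits, lambda v: v["clients"] and v["manifesto"])
rest = conversion_rate(visits, lambda v: not (v["clients"] and v["manifesto"]))
print(f"both pages: {both:.0%}, otherwise: {rest:.0%}, lift: {both / rest:.1f}x")
# prints: both pages: 50%, otherwise: 25%, lift: 2.0x
```

With this fabricated sample, the both-pages segment converts at twice the rate of everyone else, mirroring the “almost twice as likely” finding above.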

This led us to develop further hypotheses, many of which are undergoing experiments as you read this.

We then went back to our interview participants to ask a few follow-up questions, the key one being the weight they placed on a referral versus their own discovery process.

You know the answer. In all cases the referral had the greatest weighting attitudinally. But this depended on who the referral came from.

What it depended on was trust. Trusted sources were described as people or publications. So if an agency is referred by trusted sources, it’s more likely to be hired. Seems kind of obvious.

What’s not immediately obvious is how an agency can give themselves this competitive advantage.

Earned trust: The competitive advantage

Trust is a complex variable. It means different things to different people. We’ve come to learn trust is earned through transparency and delivery. It’s the sum of the entire experience.

If you’re an agency, start by communicating exactly what you do and why you do it. Do this in a legible and comprehensible format.

If you do this you can achieve transparency.

A simple example of this is our cookie notice. This tells website visitors how we use cookies and why. It also gives them the opportunity to gain further insights or opt out completely.

“Trust is earned through transparency and delivery.”

More importantly, it makes consent explicit. It gives people choice. If they refuse cookies or don’t act at all, Google Analytics services aren’t enabled during their visit.

We’re left with delivery. The easy part, right? Not quite.

By doing what you say you’re going to do many times over, you can prove delivery.

Delivery is often shown through client logos and testimonials, which are then correlated with real products and services in market.

Because of our specific approach to privacy, we don’t prove delivery this way. This makes “proving” a little trickier.

So, we’ve found a few ways to start progressing past that.

How, you ask?

It comes down to 3 things:

  1. Providing a taste for who we’ve worked with
  2. Anonymously sharing insights (we always seek prior approval to share this way), and
  3. Speaking to decision-makers as regularly as we can

Providing a taste gives people a sense of our breadth and depth. It’s a proxy for our ability to operate effectively in different environments. It’s transparency in a privacy preserving context.

Anonymously sharing insights is a way for us to highlight how our process helped produce customer and business value. It may be less ideal than branded case studies, but it’s proving a good compromise. Again, it’s transparency aligned to our context.

Speaking to decision-makers gives the market an opportunity to size us up face-to-face. They can grill us on the detail, unpack our approach, and ideally get a sense for how we might work together. Through these interactions, we can start earning trust.

Like other small firms, we rely heavily on word of mouth. We don’t expect this to change, nor do we hope it will.

Although this is the case, this exercise taught us our website matters. It’s part of what helps us earn the trust of people who may one day choose to work with us.

At the end of the day, what do we have if we don’t have trust?
