OVERVIEW
The current experience of discovering candidates on the Vettery platform aligns more closely with the search-heavy solutions that recruiters come to us to escape: inputting and repeatedly tweaking filters, only to receive hundreds of job seeker results. Our users, recruiters and hiring managers, spend extra time sourcing and evaluating job seekers who might not be a match, and they have no way to provide explicit feedback in the product to inform the matching algorithm. Through anecdotal client feedback, we learned that users prefer a curated list of job seeker recommendations that meet their requirements over a list of 100+. Ultimately, they want to save time and fill the top of their funnel with qualified, relevant applicants.
OBJECTIVE
How might we align our product to customer expectations and start the journey towards a more curated and matching-first experience?
Through multiple user interview sessions focused on UX, we have received mixed reviews about the platform’s ease of use. Our hypothesis is that these mixed reviews stem from the gap between what client users are told to expect on Vettery and what they actually experience. Specifically, client users are told they will get a vetted, tailored experience, but when they use the platform they are presented with a list of all candidates, similar to LinkedIn and other volume-first experiences. This leads to confusion and misaligned expectations across the whole product.
It’s also important to note that this experience would be built in React, Vettery’s new frontend framework. All the components were designed by me in collaboration with the design team. We negotiated upfront with engineering and leadership that all new features would be designed and built with the Vettery DLS; ‘Top picks’ would be the first client-facing feature to use it.
THE PROCESS
The existing ‘Create a role’ experience allows users to set up a role in Vettery, add filters, view candidates, and send interview requests right from the platform. Once they send a request, the ‘Multi-Candidate Recos’ algorithm recommends 3 candidates similar to the one they just reached out to. The algorithm takes into account the client’s inferred preferences, drawn from actions like clicks and saves; the more the client interacts with the platform, the smarter the algorithm gets. We knew early on that this algorithm would power the curated list for ‘Top picks’ and wanted to start our discovery work with an internal test to validate the recommended results.
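To make the mechanics concrete, here is a minimal sketch of how a similarity-based recommender like this could work. The scoring, weights, and function names are my own illustration under assumed data shapes, not Vettery’s actual implementation.

```python
# Illustrative sketch (not Vettery's actual algorithm): recommend the k
# candidates most similar to one a client just reached out to, using
# Jaccard similarity over skill sets, boosted by implicit signals.

def jaccard(a: set, b: set) -> float:
    """Similarity between two skill sets, from 0.0 to 1.0."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend_similar(target: dict, pool: list, signals: dict, k: int = 3) -> list:
    """Rank candidates by skill similarity to `target`, boosted by implicit
    signals (e.g. counts of clicks/saves the client made on each profile)."""
    def score(c):
        base = jaccard(set(target["skills"]), set(c["skills"]))
        boost = 1.0 + 0.1 * signals.get(c["id"], 0)  # hypothetical weighting
        return base * boost
    others = [c for c in pool if c["id"] != target["id"]]
    return sorted(others, key=score, reverse=True)[:k]
```

In this sketch, every interaction the client makes raises a candidate’s boost, which is one simple way an algorithm “gets smarter” as usage grows.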
We got early feedback from the sales team and clients that the job seeker recommendations were relevant and that they would send those candidates an interview request, a strong signal to the recommendation engine. During the experiment, we noticed that participants had to be reminded to provide feedback. We also learned that they preferred to give feedback only when a candidate wasn’t a match; for the candidates they liked, they would rather simply send the interview request.
Since this would be a brand new and unfamiliar experience for our users, we ran a fake door test to build a list of users we could speak to and conduct usability testing with. It was also a clear indicator of strong interest in an experience like this on Vettery.
Top Picks fake door test
The majority of users who clicked on the ‘Top Picks’ tab expressed interest in speaking with the product team, validating our first assumption: that there is a need for a curated list of candidates for their open roles.
User Interviews Takeaways:
Our goal for the user research was to understand how recruiters and hiring managers evaluate candidate profiles and what information they need in order to send an interview request.
- Hiring managers evaluate candidate profiles based on the role requirements. For example, if they are looking for a full-stack engineer with React and Python experience, they expect to see both of those skills listed on the candidate’s profile and resume. Another example is the candidate’s current location and location preference, as many Vettery clients source local candidates.
- To evaluate a job seeker’s strengths, they look at how many times each skill is mentioned in the resume, as well as the context in which the skill was used.
- They also check that the candidate has both the right years of experience per role and aligned role preferences.
- Some users also look for ‘jumpiness’ in the resume, using the average tenure per company to make this assessment.
Data Science Brainstorm:
What additional user feedback (explicit and implicit) does the algorithm need to deliver more accurate recommendations?
- Sending an interview request is a strong indicator that a recommendation is good.
- Knowing why a hiring manager doesn’t like a candidate profile is just as valuable (if not more so) as them sending an interview request.
- Multiple-choice feedback is the preferred format for the algorithm, versus free-form text.
- The ideal time to ask for explicit feedback is when the user gives a ‘thumbs down’ on a candidate, based on a few key attributes as they relate to the role.
- We also learned that explicit user feedback could improve the quality of algorithmic recommendations and speed up how quickly the algorithm learns.
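As a sketch of how the brainstorm’s preferences could be operationalized, the snippet below records a multiple-choice ‘thumbs down’ as a structured negative signal. The reason codes and data shape are my own assumptions, not the team’s actual schema:

```python
# Sketch (assumed schema): capture structured, multiple-choice rejection
# feedback as an explicit negative signal the algorithm can consume.

REJECTION_REASONS = {"skills_mismatch", "location", "experience_level", "role_preference"}

def record_thumbs_down(feedback_log: list, candidate_id: int, reasons: set) -> None:
    """Validate the multiple-choice reasons and append a negative signal."""
    unknown = reasons - REJECTION_REASONS
    if unknown:
        raise ValueError(f"Unknown reasons: {unknown}")
    feedback_log.append({
        "candidate_id": candidate_id,
        "signal": -1.0,                # explicit negative feedback
        "reasons": sorted(reasons),    # multiple-choice, not free-form text
    })
```

Constraining feedback to a closed set of reasons is what makes it machine-readable, which is exactly why the data science team preferred multiple choice over free-form text.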
It was clear that the success of this experience would depend on the quality of user feedback per candidate recommendation. As the lead designer on this project, I wanted to create a seamless, engaging experience that would make it easy for users to evaluate candidates and provide feedback. We decided to redesign the candidate card to incorporate the insights from users. To help users provide feedback more easily, I introduced the ‘Vet-Streak’, inspired by Duolingo. ‘Vet-Streak’ is designed to encourage users to visit the Vettery ‘Match quiz’ daily, building their ‘Streak’. My hypothesis: the more consistent and direct the feedback a user provides to the ‘Multi-Candidate Recos’ algorithm, the better the recommendations and the more satisfied our clients will be.
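For illustration, Duolingo-style streak logic is simple to state in code. This is a minimal sketch under my own assumptions (a streak extends on consecutive days and resets after a missed day), not the designed ‘Vet-Streak’ behavior:

```python
# Minimal sketch of Duolingo-style streak logic (assumed rules): the streak
# extends when the 'Match quiz' is completed on consecutive days, stays put
# if already counted today, and restarts after a missed day.

from datetime import date, timedelta

def updated_streak(current_streak: int, last_quiz_day, today: date) -> int:
    if last_quiz_day == today:
        return current_streak          # already counted today
    if last_quiz_day == today - timedelta(days=1):
        return current_streak + 1      # consecutive day: extend the streak
    return 1                           # missed a day (or first visit): restart
```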
Part of our process is to collaborate with cross-functional members of the company so that the UX reflects perspectives from various disciplines. Effective collaboration also increases transparency between the Product & Design team and the rest of the company. I teamed up with the Product Manager to organize a cross-functional remote brainstorm, in the middle of a pandemic, using the Miro app. Our goal was to generate solutions that show candidates’ qualifications to recruiters in a way that demonstrates why they are both relevant and desirable, and that tell each candidate’s unique story on Vettery in a way that is relevant to recruiters.
I combined the ideas generated from the brainstorm and presented them to engineering, making sure they were aware of the proposed feature and catching anything our tech might not be able to adequately support. From there, I created a design brief to present during design review 1 (DR1): a written document covering the flow and proposed approach. No designs are shown at this stage; the goal is to make decisions on flow and scope and to answer any open questions without getting hung up on designs. I have to admit it took some adjustment from stakeholders, but in the end we all realized that projects were more successful because of it.
Based on decisions made in DR1, I start working in Sketch, creating wireframes with our Vettery Design System in mind, then wrap everything up in a clickable prototype in InVision. During this process there are ongoing discussions with the PM and the Engineering Lead.
Old Vettery candidate list
New Vettery candidate list
This project was put on hold for a later quarter, so this is where the completed work ends. My next steps to bring this project to fruition are:
- Set up time with users for initial thoughts and reactions > iterate > test > repeat
- Hold informal design sharing sessions with PMs, the broader design team, and engineers
- Design review 2: present designs to all stakeholders. The goal is to ensure the designs match decisions from DR1. Based on this meeting, an engineering design doc can be started for scoping.
- Share finalized designs in InVision with tour points for functionality and interactions
- Create stories in JIRA and review them in grooming with product, design, and engineering
- As engineers start working in the code, give feedback so we can quickly iterate as needed