


Explaining Sponsored Jobs with GenAI

Researching and designing a GenAI chatbot to help employers understand the value and ROI of Sponsored Jobs on Indeed.

Chatbot mock 01.png
Two Pane.png

Project Summary

Discovery research uncovered significant user confusion around the concept of sponsoring jobs, especially among SMB employers. I designed around this pain point to create a GenAI solution that explained the value and underpinnings of job sponsorship in a way personalized to each user.

Role

I led user research and design from our discovery phase through the end of the project. Research consisted of qualitative interviews and an AI-powered analysis of chat transcripts. I designed and iterated through several solutions and coordinated with external teams on our final design strategy.

Hypothesis

A GenAI-powered chatbot can enhance employer decision-making around Sponsored Jobs on Indeed by providing personalized feedback based on the user's data and clarifying the underpinnings of Indeed's recommended budget, resulting in increased revenue for Indeed.

Process Overview

I partnered with a product manager, an engineer, a data scientist, and an engineering manager to define an initial test and an MVP for this project.


User Interviews

GenAI really shines when it comes to explainability and distilling complex topics into digestible information. For this project, my PM and I aimed to leverage this capability of GenAI in a way that improved the job posting experience for employers. To start, we had to identify compelling user problems. 

Identifying User Problems

While we knew we wanted to utilize GenAI in a potential solution, we didn't yet know which problems to address. I led user interviews with Indeed employers, while my PM led interviews with customer service reps at Indeed to identify common pain points.

Research Questions

  1. What pain points do employers encounter posting jobs on Indeed?

  2. Where do employers experience anxiety or angst when posting jobs?

  3. What insights might improve their hiring experience on Indeed?

Methodology

Using Zoom, I led four 45-minute interviews with employers who post jobs on Indeed. Ideally, we wanted to talk to more employers, but due to the timeline and some cancellations we weren't able to. Even with these 4 employers, however, patterns began to emerge.

Key Findings from Employers:

  • Sponsoring jobs on Indeed was the primary source of confusion for all 4 employers.​

  • When employers overspent their budgets, they felt they didn't understand Sponsored Jobs.

  • Employers didn't have a solid understanding of the outcome of sponsoring a job.

  • Employers wanted to understand how a sponsored job would compare to a job with no budget.

  • Employers were interested in having more actionable data but were sensitive to information overload.

Participants: HR Managers from a non-profit, an aviation company, a summer camp, and a tourism company.

Customer Service Rep Interviews

My PM simultaneously led interviews with 4 customer service reps at Indeed to get their take on the most common problems they addressed with employers and how they went about solving them.


Key Findings from CS Reps:

  • Employers lack understanding of market conditions and competition on Indeed.

  • Employers are often intimidated and unaware of available analytics.

  • Proactivity in resolution builds trust with clients.

  • Clients may not see the value in spending if they've had success with posting for free.

  • Clients have tight budgets and need to see clear ROI to justify spending. 

Employer Pain Points

We heard 4 key pain points repeatedly throughout these 8 interviews. These pain points gave us confidence that there were compelling user problems to solve within the Sponsored Jobs product.

  • Employers do not have a clear understanding of Sponsored Jobs on Indeed and its ROI.

  • Employers have tight budgets and are sensitive to overspending.

  • Employers want actionable data to inform decisions but are sensitive to data overload.

  • Employers don't have a deep understanding of competition and market conditions and how that affects Sponsored Jobs on Indeed.

Research Readout

I compiled findings and insights into a slide deck and held a research readout for the team. These interviews helped us identify pain points that we could design an MVP around.


UXR with ChatGPT

The user interviews helped us understand that employers struggled to understand and see the value in job sponsorship. We still needed to understand the magnitude of this problem and how customer service reps helped users work through it. 

Methodology

Our team sourced 4k chat transcripts from customer service interactions that started on the Sponsored Jobs page. I created a research plan outlining our key research questions and partnered with a developer to leverage ChatGPT's API to synthesize high-level findings.

Research Questions

  1. What are the most frequent themes?

  2. What are the most common questions about Sponsored Jobs?

  3. What were the themes in positively resolved conversations?

Pipeline: 4k chat transcripts analyzed via the ChatGPT API.
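As an illustration of that pipeline, here is a minimal sketch assuming one plain-text file per transcript; the model name, prompt wording, and directory layout are placeholders rather than the actual implementation:

```python
# Minimal sketch of the transcript-theming pipeline (illustrative only).
# Assumes one plain-text file per conversation; the model, prompt, and
# directory layout below are hypothetical.
from collections import Counter
from pathlib import Path

from openai import OpenAI  # official OpenAI Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

THEME_PROMPT = (
    "You are analyzing an Indeed customer-support chat about Sponsored Jobs. "
    "Reply with one short theme label (e.g., 'recommended budget', "
    "'pricing model', 'candidate quality') that best describes the employer's question."
)

def theme_for(transcript: str) -> str:
    """Ask the model to label a single transcript with its dominant theme."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": THEME_PROMPT},
            {"role": "user", "content": transcript[:8000]},  # truncate long chats
        ],
    )
    return response.choices[0].message.content.strip().lower()

# Tally themes across all transcripts and surface the most frequent ones.
themes = Counter(theme_for(p.read_text()) for p in Path("transcripts").glob("*.txt"))
for theme, count in themes.most_common(10):
    print(f"{theme}: {count}")
```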

Key Findings

  • Employers were mainly interested in understanding daily budgets, pricing models, budget types, and how to attract quality candidates.

  • About 50% of these conversations were about the recommended budget of Sponsored Jobs.

  • In 97% of positively resolved conversations, the support reps provided examples.

  • When reps provided examples, users were more likely to spend the recommended budget. 

Product Direction

The findings of this analysis, paired with the interview insights, painted a clear picture: Sponsored Jobs was a difficult concept for clients. There was a clear opportunity to leverage GenAI's finesse in explainability to help clients understand how Sponsored Jobs work and how sponsorship can help them hire effectively.


MVP Ideation

Following our discovery research, I started ideating on ways we could leverage GenAI powered explainability on the Sponsored Jobs page.

Embedded Explainability Mocks

My initial ideation explored the concept of embedded explainability. I worked through about 8 different variants and shared them with the team and stakeholders. Below are some of them:

Sponsorship Guide

I designed my first idea around the user desire to understand how a sponsored job compares to a free job. This mock leverages GenAI to explain how Indeed arrived at its estimate of how many clicks a sponsored job can expect.

Alert

My first idea seemed to dominate the sponsorship page with information that should be secondary. My second idea featured a sponsorship guide that only appeared when the user lowered their budget below Indeed's recommendation.

MonAI Initial Mocks 01.png
MonAI Initial Mocks 03.png

Data Visualization I

One limitation of GenAI's explainability finesse is that its output is entirely copy. The first variants relied heavily on text, so for this one I leveraged data visualizations to complement it. Additionally, I made the "AI Sponsorship Guide" expandable so the user could toggle it open if it sparked their interest.

Data Visualization II

The chart in this idea was designed to change as the user adjusted their budget below or above the recommended budget to illustrate how that affects their job's competitiveness. It also had the "AI Guide" that users could open or close as they needed to reduce how much space it took up. 

Competitive Data and Explainability.png
MonAI Initial Mocks 02.png

Socializing Designs

I shared the ideation with our core team and one of our GenAI leads. While the mocks were well received, they ended up prompting a more existential question about whether this type of "sponsorship guide" was the best way to present this data to the user.

Design Concerns

When socializing these designs, the team brought up two main points of concern. The first was whether we even needed GenAI for this type of implementation: the engineers felt the design could be achieved with programmatic rules and didn't necessarily need GenAI.

 

The second was that this was a lot of information to explain to the user on a single page and could feel overwhelming. These concerns pushed the team to think of other ways we could address user confusion around Sponsored Jobs.


Chatbot Ideation

Due to the amount of information needed to effectively explain Sponsored Jobs and the desire to make the answer feel personalized, the team pivoted our approach to a GenAI chatbot instead of continuing with the "embedded explainability" approach.

Chat Entry Point

One of the big concerns with the chatbot approach was that users normally start chats when they feel they have a problem. Our research indicated that users may not fully understand Sponsored Jobs, but they may not necessarily feel that this is a problem. I ideated on some ways we could nudge users to start a chat.

Alert Variant

My first idea was to introduce a banner on the page that a user could start a chat from. If the user lowered their budget below the recommendation, this banner turned into an alert, nudging the user to start a chat to understand why.

Initial entrypoint ideation 04.png
Initial entrypoint ideation 02.png

Pop Up Variant

This iteration popped up a chat window whenever the user lowered their budget below the recommendation. This window also had an alert explaining why the chat appeared. While this would help us achieve our goal of getting more users to start a chat, it felt a bit intrusive and would also be hard to execute from an eng perspective.

Prompt Variant

Past user research indicated that prompts were an effective way to nudge users into a chat. For Indeed's job seeker facing AI chatbot, almost 90% of interactions started with a micro-prompt within the chat experience. I wanted to replicate that known success with prompts in our experience.

 

We ended up aligning on this entry point design for our test. 

Initial entrypoint ideation 03.png


Chatbot Usability Test

After aligning on an entry point, I wanted to conduct a usability test on the design to gauge employers' receptiveness to this feature and ensure there weren't any usability concerns.  

Background

At this point in the project, our team partnered with the monetization team to test on their page. Since the Sponsored Jobs page evolves quickly, our test might land on 1 of 2 different experiences, so I tested our chatbot on each of the potential designs.

Methodology

I conducted an unmoderated user test on usertesting.com and evaluated our chatbot design across two design variants with 5 participants in each group.

Research Questions

  1. How do employers react to the chatbot on the Sponsored Jobs page?

  2. What do employers think of the prompts?

  3. Does the Sponsored Jobs variant impact user perception of the chatbot?

Tested Experiences

Below are the initial screens of the two variants in the user test.

Variant 1

CustomTest.png

Variant 2

TiersTest.png

Key Findings

Overall, participants had a net positive impression of this product, but did have some concerns along the way. Some patterns in user feedback also began to emerge that helped us understand our target persona a bit more.

Transparency

Users appreciated that the chatbot provided rationale behind the budget recommendations.

Preference for Humans

Some users found AI to be a "cop out" and wanted real human interaction, especially since they were being asked to spend money.

Generic Content

Users felt the content of the chatbot was too generic and didn't provide enough added value when compared to the content of the page.

Concerns about Budget

These participants echoed what we had heard in past research: They didn't understand the Sponsored Jobs budget and were skeptical of its ROI.

Ease of Use

Users found the chatbot easy to use and appreciated the ability to ask questions right on the Sponsored Jobs page.

Prompts

The prompts on the page felt relevant to users, especially for users who hadn't seen Sponsored Jobs before.

Customization

Users appreciated the opportunity to customize their budget and receive personalized recommendations from the chatbot.

Variants

Users expressed more negative feedback in Variant 2. They felt that the chatbot repeated content that was present in the budget tiers.


Chatbot Smoke Test

Before we invested in developing and training an AI on Sponsored Jobs, we wanted to determine if there was enough employer interest to begin with. 

Test Goal

Test our prompted design against the current control to determine if it improves the chat CTR on this page.

Methodology

We tested our chatbot design with 1% of employers on the Sponsored Jobs page. Instead of routing to an AI chatbot, our prompts connected the user with customer service agents. This let us evaluate whether employers wanted to interact with a chatbot in this way.
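As a rough sketch of the mechanics, a 1% test like this is typically run by deterministically bucketing each employer; the helper below is hypothetical, not Indeed's actual experimentation framework:

```python
# Hypothetical 1% smoke-test bucketing (not Indeed's real framework).
# Hashing the employer ID makes assignment deterministic: the same
# employer always lands in the same variant across sessions.
import hashlib

EXPERIMENT = "sponsored-jobs-chat-prompts"
TREATMENT_PCT = 1  # percent of employers who see the prompted design

def in_treatment(employer_id: str) -> bool:
    digest = hashlib.sha256(f"{EXPERIMENT}:{employer_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < TREATMENT_PCT

# Treatment users see the headline + prompts; clicking a prompt opens a
# chat routed to a human customer service agent rather than an AI model.
variant = "prompts" if in_treatment("employer-123") else "control"
print(variant)
```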

SmokeTest1.png

Headline + Prompts

Our test design featured a headline above the prompts, giving users a high-level idea of why $30 was the recommended budget. The prompts helped users easily engage in conversation with the chatbot.

Chat Drawer

Ideally, the prompts would have carried over into the chat popup, but due to dev and time constraints we couldn't manage this in time for launch.

SmokeTest2.png

Test Results

Compared to the control experience, our design improved CTR for all employers on this page, but especially employers who were sponsoring a job for the first time. 

All Employers

143% Improvement

1.71% CTR | 0.7% Control

New Employers

163% Improvement

3.80% CTR | 1.44% Control
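For reference, these figures follow the standard relative-lift calculation; because the displayed CTRs are rounded, recomputing from them lands within a point of the reported improvements:

```python
# Relative CTR lift: (variant - control) / control.
# The CTRs shown above are rounded, so these recompute to ~144% and
# ~164%, a rounding-level difference from the reported 143% / 163%.
def lift(variant_ctr: float, control_ctr: float) -> float:
    return (variant_ctr - control_ctr) / control_ctr * 100

print(f"All employers: {lift(1.71, 0.70):.0f}%")  # ~144%
print(f"New employers: {lift(3.80, 1.44):.0f}%")  # ~164%
```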

Outcome

With this test data in hand, the team and stakeholders agreed to proceed with this project and start developing an AI model to address these questions. While engineers worked on that, I worked on iterating on the design based on user test and stakeholder feedback.


Iterating on the Chatbot Design

Our initial design was meant to test the effectiveness of the prompts in engaging employers. With that question answered, design needed to account for the full experience of an AI chatbot for Sponsored Jobs.

Progressively Disclosing Prompts

After this test launched, we received internal feedback about the large size of the prompts on this page. I put together some iterations and aligned with the design manager on the animation below, which progressively discloses the prompts so they don't crowd the page.

Proposed Design

With the prompt treatment squared away, I started designing the chatbot experience based on user testing feedback and some design rules of thumb for AI products based on past user research. Additionally, I collaborated with our visual design team to ensure our AI experience was consistent with Indeed's AI brand guidelines.

ChatDesign2.png

Streaming Response

To avoid overwhelming the user, we leveraged a "streaming response" design similar to ChatGPT's. In this version, we also planned to pass the prompt to the chatbot to improve upon our initial UX, as sketched below.
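A minimal sketch of that pattern, assuming an OpenAI-style streaming API (the model name and system prompt are placeholders): the clicked prompt is passed as the user's first message, and tokens render as they arrive.

```python
# Sketch of the streaming-response pattern (OpenAI-style API assumed;
# model name and prompt text are placeholders, not the shipped system).
from openai import OpenAI

client = OpenAI()

# The prompt the user clicked is seeded as their first message, so the
# chat opens mid-conversation instead of empty.
clicked_prompt = "Why is $30 the recommended budget for my job?"

stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You explain Indeed Sponsored Jobs budgets to employers."},
        {"role": "user", "content": clicked_prompt},
    ],
    stream=True,  # tokens arrive incrementally instead of as one final blob
)

# Render each chunk as it arrives, producing the ChatGPT-style typing effect.
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```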

Prompts

Indeed launched its job seeker AI product, Coworker, early this year. Research from that product indicated that users heavily preferred interacting with prompts over freeform typing to communicate with the chatbot. We wanted to ensure we had prompts in our experience too.

Feedback

Feedback is a crucial component of products leveraging GenAI due to its occasional hallucinations or incorrect output. We included feedback on a per-message basis in this experience.
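A minimal sketch of what per-message feedback capture could look like, with a hypothetical schema rather than the production implementation:

```python
# Hypothetical per-message feedback schema (illustrative only).
# Tying each rating to a message ID lets hallucinated or incorrect
# answers be traced back to the exact response that produced them.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MessageFeedback:
    message_id: str
    rating: str          # "up" or "down"
    comment: str = ""    # optional free-text explanation
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

feedback_log: list[MessageFeedback] = []

def record_feedback(message_id: str, rating: str, comment: str = "") -> None:
    if rating not in ("up", "down"):
        raise ValueError("rating must be 'up' or 'down'")
    feedback_log.append(MessageFeedback(message_id, rating, comment))

record_feedback("msg-42", "down", "Quoted the wrong daily budget.")
```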

ChatDesign3.png
ChatWithRep.png

Eject Button

In user testing, we almost always encountered a user who was not optimistic about AI. To account for this diversity of user preferences, we wanted to let users switch to a human representative whenever they wished.

Unforeseen Reorg

After aligning on these designs as a team, Indeed went through another round of layoffs and a reorg, which resulted in Incubator's dissolution. I ended up moving to a different team than the rest of this project, but kept supporting it through its launch.

AI Experimentation Iterations

The AI Experimentation team inherited this project with a bit of a different charter. Their goal was to identify ways GenAI could enhance value on existing Indeed experiences. Our project fit this charter, but we wanted to bring GenAI more to the forefront of this experience. Below are some iterations I did to fit this charter better.

2PaneIdeation.png

Variant 1: 2 Pane

In all of these variants, the chat experience is more front and center instead of being hidden behind a button. This variant presents the chatbot in a 50/50 split with the budget selection and surfaces market insights right off the bat.

Variant 2: Wizard

In this variant, I explored the page as a wizard experience. We knew at this stage we wanted to focus on new employers, and this variant really held the hand of new users, explaining Sponsored Jobs before they selected their budget tier. The product team preferred this variant.

WizIdeation.png
StackedIdeation.png

Variant 3: Stacked

The chatbot here lives within the budget tiers and is a little less in your face. One limitation is that once expanded, it takes up a very large portion of the page. After presenting this variant to the team, we decided not to move forward with this option due to the length of the chatbot's output.


Testing Design Variants

I wanted to test these design variants against the control experience to gauge our target users' perception of this experience. I wanted to collect more feedback before we proposed a design for the next test to help steer the conversation with external stakeholders.

Test Structure

I tested the 2 variants below against the control variant in an A/B split on usertesting.com. I asked users for their feedback on these experiences and then asked them to rank their preference of the 3. 

Control

2PaneIdeation.png

Wizard Variant

WizIdeation.png

2 Pane Variant

2PaneIdeation.png

Key Findings

Users strongly preferred the 2 pane variant in this user test. Most users appreciated the support from the AI chatbot, but they did not feel chatting was their job-to-be-done (JTBD) on this page. This feeling led to the "Wizard" variant being the least popular, even behind the control.

Control Lacks Context

Users felt the chatbot experiences provided added context that the control variant lacked.

Generic Content

Users felt the chatbot's content repeated what was on the page and wanted more concrete examples.

Wizard Variant

Users felt the chatbot was complementary to their budget decision and did not expect it to be so prominent.

Explainability

Users appreciated the AI's ability to explain hiring insights and data. They sometimes expected it to make a budget recommendation. 

Human Preference

1 user had a very negative impression of AI and did not want it to be part of their experience at all.

2 Pane Variant

Users felt the 2 pane variant provided a visual balance of support and decision making compared to the other options.

Finalizing the Test Design

We met with external stakeholders after this test to socialize these designs and propose the 2 pane variant for our next test. In this meeting, we discovered we could not modify the design of the budget tiers section of the page at all. Luckily, this worked out with our 2 pane variant. We ended up making the split 70/30 instead of 50/50 to better accommodate the budget tiers portion.

50/50 Split

2PaneIdeation.png

70/30 Split

TwoPaneFinal.png


Final Chatbot Design

After aligning on the 70/30 implementation, I conducted one more usertesting.com study, which yielded feedback very similar to the 50/50 variant from the initial variant test. I finalized our 70/30 design and made some visual tweaks, like removing the gray background, to make the chatbot feel less intrusive.

01.png

Loading Screen

We used a new AI animation from the visual design team as a custom loading state. This helped ensure the chatbot was noticed once it fully loaded.

Streaming Response

The streaming response also helped draw user attention to the relevant hiring insights the chatbot was describing in the first message. 


Even if the user didn't want to interact with the chatbot, we hoped these hiring insights still provided some context regarding the recommended budget.

04.png
TwoPaneFinal.png

Prompts

These prompts helped nudge the user to chat with the chatbot by leveraging common questions we heard in research. They could also easily switch to a human support agent if they didn't care to interact with AI. 

Freeform Input

Our chatbot also supported freeform input in case the users had a question that wasn't covered by the prompts.

 

We also leveraged a "typing" animation here so the user wouldn't get frustrated waiting for the AI's output.

02.png
05.png

Feedback

One limitation of GenAI is that it rarely admits when it's wrong and sometimes hallucinates a response. We added feedback mechanisms on a per-message basis so users could report these errors when they happened, or alternatively provide positive feedback.


Results

After running the test for about a month, we finally got results on the effectiveness of this solution. Below are the top findings.

Key Findings

50% of users who used the chat accepted the recommended budget.

AI chat is effective in encouraging users to spend on Sponsored Jobs when they engage with it.

Low 2.9% CTR

Not all users wanted to interact with the chatbot, and this low CTR limited the reach of its impact. This was consistent with test 1's CTR, so it wasn't entirely unexpected.

Higher CTR for new users

New users chatted about 1 percentage point more often than existing users.

Conclusions

A GenAI powered chatbot was an effective way to explain Sponsored Jobs to the small number of employers who engaged with it. If this solution were to move forward, it needed to more effectively engage employers on the Sponsored Jobs page to justify its prominence. 

Next steps

To follow up this experiment, I'd advocate for testing different visual iterations of the chat to see if that impacted engagement. I'd also advocate for testing different explainability solutions on this page to measure conversion. It's very possible an AI chatbot simply isn't the most effective way to explain Sponsored Jobs to users and encourage them to accept Indeed's recommended budget.
