Yes, yes, yes… I know what you’re probably thinking… another blog about ChatGPT. It’s a lot, isn’t it? So much talk, so much hype! There has been a slew of vendors launching ChatGPT-related products, many of which are just marketing ploys or rebrandings of already-developed chatbots. So, what’s the reality?
How can Generative AI (GenAI) and Large Language Models (LLMs) like ChatGPT be used as a force for good in hiring, and how can they be misused? Let’s unpack the good, the bad, and the ugly of ChatGPT, the “biggest” GenAI/LLM tool on the block, in the context of hiring.
Let’s start with the good, and in my opinion, the biggest benefit to hiring teams…
First off: You’re gonna have more time
Yes, that’s right… if ChatGPT is used correctly, it will give you more time in your day to do all the important things that you need to do. You’ll have more time for strategic work and for the important projects that keep slipping; you might even complete a to-do list!
ChatGPT can take on the more repetitive, laborious parts of a recruiter’s job. For example:
- Craft job descriptions
- Draft emails to candidates
- Handle scheduling and common candidate questions
- Generate suggested interview questions
- Derive salary benchmarking data
Now, I wouldn’t currently trust ChatGPT to do these tasks entirely on its own, and I’m not saying it will do them perfectly. However, it will give you a very good starting point to work from.
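For the more technically inclined on a hiring team, drafting tasks like these can also be scripted against the OpenAI API rather than typed into the chat window. Here’s a minimal sketch of that idea — the function name, role details, and skills list below are hypothetical placeholders, and the part worth standardizing is the prompt itself, so that’s what the code builds:

```python
# Sketch: assembling a reusable prompt for drafting a job description.
# All role details below are made-up examples, not real requirements.

def build_job_description_prompt(title, team, must_haves, tone="friendly, inclusive"):
    """Return a prompt asking the model to draft a job description."""
    skills = "\n".join(f"- {s}" for s in must_haves)
    return (
        f"Draft a job description for a {title} on our {team} team.\n"
        f"Required skills:\n{skills}\n"
        f"Tone: {tone}. Keep it under 300 words and avoid biased language."
    )

prompt = build_job_description_prompt(
    "Senior Recruiter",
    "Talent Acquisition",
    ["5+ years full-cycle recruiting", "ATS experience"],
)

# To actually send it to ChatGPT (requires an API key; the exact call
# shape may vary with your SDK version):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o",
#     messages=[{"role": "user", "content": prompt}],
# )
# print(resp.choices[0].message.content)
```

Either way, a human should review the draft before it goes anywhere near a candidate — which brings us back to the point above: a very good start, not a finished product.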
Take a look at this example:
Imagine having more time to focus on the “human” side of recruiting — building relationships, nurturing the best candidates, etc. Contrary to the fears that many have today, ChatGPT is not going to eliminate the role of a recruiter. It can’t replace a very human process, however, there will be a shift in the skills we use.
What about Legal and Privacy concerns?
Samsung’s legal team has just banned employees from using ChatGPT after some staff uploaded sensitive code into the tool. So, don’t go pumping company or customer confidential information into ChatGPT without consideration!
I’m sure your legal team is in some form reviewing GenAI and the use of LLMs at your organization, and if they aren’t, they will be soon. But, what are they worried about? What should you look out for to protect your company, your people, and your candidates? Here are a few things to think about…
Hallucinations: fact vs. fiction
ChatGPT can hallucinate. Wild, right? This means that, based on its inputs, it can confidently state something that isn’t accurate. As an example, we asked ChatGPT to summarize a candidate’s answers to interview questions; it summarized some well, but made up, or “hallucinated,” others. To be fair, when questioned, it does own up to its mistakes. See here:
The point being, we can’t blindly trust ChatGPT with no checks and balances (yet).
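What might a “check” look like in practice? One lightweight idea is to flag any sentence in an AI-generated summary whose key terms never appear in the source transcript, so a human knows where to look first. The sketch below is a deliberately naive word-overlap version of that idea (an illustration, not a production approach), with made-up example data:

```python
# Sketch: a naive "grounding" check for an AI-generated interview summary.
# Flags summary sentences that share few words with the transcript, so a
# human reviewer can inspect likely hallucinations. Example data is made up.

def ungrounded_sentences(summary, transcript, min_overlap=0.5):
    """Return summary sentences whose word overlap with the transcript is below min_overlap."""
    transcript_words = set(transcript.lower().split())
    flagged = []
    for sentence in summary.split("."):
        # Ignore short filler words when measuring overlap.
        words = [w for w in sentence.lower().split() if len(w) > 3]
        if not words:
            continue
        overlap = sum(w in transcript_words for w in words) / len(words)
        if overlap < min_overlap:
            flagged.append(sentence.strip())
    return flagged

transcript = "I led a team of five recruiters and cut time-to-hire by twenty percent."
summary = "Candidate led a team of five recruiters. Candidate holds a PhD in economics."

flagged = ungrounded_sentences(summary, transcript)
# The PhD claim appears nowhere in the transcript, so it gets flagged for review.
```

A real system would need something far more robust than word overlap, but the principle stands: pair the model’s output with an automatic sanity check and a human in the loop.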
Bias in, bias out

All LLMs are trained on the collective writing of humans across the world, past and present. Unfortunately, this means that the same biases that exist in the real world can also appear in the output of ChatGPT. It’s not human, and it doesn’t have human morals and filters. It’s imperative to apply common sense and human decency to anything you ask it to produce, particularly in the context of HR and Talent Acquisition.
Like any technology, ChatGPT can be a double-edged sword, and in my view, its limitations are far outweighed by its benefits. With any major change in the market, there are going to be massive opportunities for those who dive in and understand the effects of that change. Ultimately, it’s not the technology itself, but the way in which it is applied that will make all the difference. And this is certainly true for HR and Talent Acquisition in understanding how to attract, retain, and grow employees.
If you’re interested in diving into this topic a bit more, I was recently interviewed by Forbes to discuss what AI tools mean for people leaders. You can find the article here.
Founder & CEO
P.S. I thought it would be interesting to ask ChatGPT to write its own version of this blog. Drop me a line here and let me know which you prefer!