By Kay Francoeur
We recently observed that we’ve entered a watershed moment for generative AI, most notably natural language processing (NLP) AI like OpenAI’s ChatGPT. Generative AI is rapidly moving from mere Zeitgeist to a “new normal” pervading the mainstream. This technology is on our minds (and everyone else’s) for good reason – it’s changing the way we seek and process information, exerting a major influence on investment trends, and poised to reshape the broader macroeconomic landscape. For us at ECA, an executive search firm that relies heavily on data and constantly tests and improves the way we deploy that information, it’s essential that we learn how to generate, classify, and verify this new type of data and use it most effectively in our work to place star performers with PE and corporate clients.
As we continue to integrate this technology, questions concerning how and when to use it are permeating many of our decisions, large and small. Increasingly, the question is not whether you should use AI in your work but how you should use it – both in the sense of deploying it most effectively and of maintaining certain ethical standards and boundaries while doing so.
At ECA, we’re testing out best practices for using generative AI internally and evaluating how it will influence our recommendations to candidates and clients moving forward. The brilliant X factor of ChatGPT is that while you can use it like Google (accompanied by careful fact-checking) and for simple tasks like revising your CV, it can also serve as a thought partner for complex job search and industry forecasting questions. As Conor Grennan, Dean at NYU’s Stern School of Business, put it in a recent viral LinkedIn post, generative AI like ChatGPT is more Seabiscuit than pack mule – using it for utilitarian functions alone undersells its greater paradigm-shifting potential. If it “wants” anything, it wants to show off.
In upcoming articles, we’ll be exploring the potential of ChatGPT for the future of executive recruiting from multiple angles, ranging from practical suggestions for how it can assist with simpler tasks – e.g., using ChatGPT to shine up your application materials and profile to get noticed by recruiters in the first place – to digging into some of the more nebulous questions inspired by the rapid adoption of this tech. What impact might generative AI have on the interview process, and which aspects of that process will emerge as more or less important than before? What role might NLP models play in forecasting trends in Private Equity, and in the broader market?
At ECA, we’re largely bullish on the potential of this tech to enhance our existing rigorous evidence-based approach, but we still have a lot of questions surrounding best practices and where to draw the lines of acceptable use.
Answering these questions requires keeping them as open-ended and agile as possible, and returning to them from new angles as ChatGPT evolves. This technology is changing at a rate that outpaces our ability to understand exactly how it’s learning. While nothing has yet crossed the tenuous threshold from “narrow AI” (the ability to execute a specific, defined task) to “general AI” (intelligence across a range of cognitive tasks), that moment might not be far off: in pre-release testing, for example, OpenAI’s GPT-4 showed enough “power-seeking” sophistication to trick a human into doing gig work on its behalf.
Returning to more tangible concerns, stay tuned for the next articles in this series breaking down actionable suggestions for using generative AI to support your job search, and our initial thoughts on how the interview process as a whole might shift in response to the ubiquity of ChatGPT as career coach.
Kay Francoeur is a Project Manager at ECA Partners. She can be reached at [email protected]