
Decoding the use of Fine-Tuning, Prompt Engineering, and RAG

Reading time: 3 min | Jun 10, 2024

In the swiftly advancing field of generative AI, decision-makers often face a complex landscape of techniques and tools that can be leveraged to enhance AI applications. Among these, fine-tuning, prompt engineering, and Retrieval-Augmented Generation (RAG) are pivotal, yet their integration and application often lead to confusion among non-technical stakeholders. This short article aims to provide some clarity on the use of such techniques through a practical demonstration, using an HR-focused AI assistant as an illustrative example, to show how they can collaboratively enhance AI performance depending on specific use cases and economic considerations.

Clarifying common misconceptions in AI application

It's essential to understand that the decision to employ fine-tuning, prompt engineering, or RAG is not binary. These techniques are not mutually exclusive and can be combined to achieve superior results depending on the intended application.

The common question of "Should we use this technique or that?" reveals a fundamental misunderstanding prevalent among many in the tech industry, including product people and executives, who might be overwhelmed by the rapid development in AI technologies.

The interplay of AI techniques

So, we thought, why not quickly showcase what happens when employing each technique on an AI assistant? Sometimes, two screenshots can bring more clarity than two research papers. To facilitate this, we will use different solutions like custom GPTs, which allow us to easily apply prompt engineering and RAG. Additionally, we will employ our custom AI assistants to show a more complex scenario where all three techniques are applied together, demonstrating how they function in concert.

The practical experiment

We set up a custom GPT model on the OpenAI platform, using HR-specific documents and structured prompts to tailor the AI's responses to be more precise and aligned with the company's values and metrics. This model utilized RAG and prompt engineering. To further enhance the model's accuracy and relevance, we uploaded several important documents to its knowledge base, including "Company Salary Ranges," "Company Compensation Metrics," and a historical analysis of past salary decisions. These documents provide the AI with the necessary context to generate more informed and specific responses.
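To make the RAG idea concrete for technically inclined readers, here is a minimal Python sketch of the retrieve-then-prompt pattern. The document names mirror the ones above, but their contents are invented placeholders, and the keyword-overlap scoring stands in for the embedding-based search a real system (including custom GPTs) would use; every function name here is illustrative, not OpenAI's actual mechanism.

```python
# Minimal RAG sketch: rank knowledge-base documents against the query,
# then prepend the best match to the prompt sent to the language model.
# Word-overlap scoring is a toy stand-in for real embedding retrieval.

knowledge_base = {
    "Company Salary Ranges": "Senior software engineers earn 90k to 120k EUR",
    "Company Compensation Metrics": "Weight experience at 40 percent, skills at 35 percent",
    "Past Salary Decisions": "Hires in 2023 with ten years experience averaged 110k EUR",
}

def retrieve(query: str, docs: dict, top_k: int = 1) -> list:
    """Rank documents by naive word overlap with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda item: len(query_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

def build_prompt(query: str) -> str:
    """Augment the user's question with retrieved company context."""
    context = "\n".join(retrieve(query, knowledge_base))
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What salary for senior software engineers with this CV?")
```

The augmented prompt now carries company-specific facts the base model could never know, which is exactly why the custom GPT's answers in the screenshots below are less generic.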

Results and Insights

For this quick evaluation, we focused on a single question: “What is a good salary for a candidate with this CV?”

Screenshot 1 of ChatGPT-4o's response detailing an appropriate salary determination for Mary Martin based on her CV, highlighting her decade-long experience in software development, technical skills in Python and project management, management experience, and educational background.

Our initial results demonstrated that the standard ChatGPT-4o model, while effective, often generated responses that were too generic for such specialized applications.

Screenshot 2 of ChatGPT-4o's response detailing an appropriate salary determination for Mary Martin based on her CV, highlighting her decade-long experience in software development, technical skills in Python and project management, management experience, and educational background.

We also applied some prompt engineering to give the answer a clearer structure, and the results improved. Had we continued down this path, a few more iterations would likely have produced even better results.
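The kind of prompt engineering described here amounts to a structured system prompt that pins down the answer format. The sketch below shows the pattern for an OpenAI-style chat API; the exact wording is an illustrative reconstruction, not the prompt we actually used.

```python
# Prompt-engineering sketch: a system prompt that forces the model
# to answer in a fixed, HR-friendly structure.
# The wording is illustrative, not the exact prompt from the article.

SYSTEM_PROMPT = """You are an HR compensation assistant.
When asked about a candidate's salary, always answer in this structure:
1. Summary of relevant experience and skills
2. Suggested salary range with justification
3. Caveats and data you would still need
Keep each section under three sentences."""

def build_messages(cv_text: str, question: str) -> list:
    """Assemble the message list for an OpenAI-style chat completion call."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"{question}\n\nCV:\n{cv_text}"},
    ]

messages = build_messages(
    cv_text="Mary Martin, 10 years in software development, Python, project management",
    question="What is a good salary for a candidate with this CV?",
)
```

Iterating on prompts like this one is cheap: each variant is just a string change, which is why a few more rounds would likely have sharpened the generic model's answers further.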

Screenshot of ChatGPT-4o's response after some prompt engineering detailing an appropriate salary determination for Mary Martin based on her CV, highlighting her decade-long experience in software development, technical skills in Python and project management, management experience, and educational background.

The custom GPT model, by contrast, leveraged tailored prompts and documents to produce more accurate, company-specific responses. This distinction underscores the enhanced performance that can be achieved through strategic use of RAG and prompt engineering.

Screenshot 1 of the custom GPT's response detailing an appropriate salary determination for Mary Martin based on her CV, highlighting her decade-long experience in software development, technical skills in Python and project management, management experience, and educational background.

From the beginning, the custom GPT provided answers that not only addressed the query with higher relevance but also reflected the more nuanced understanding required by HR professionals, mimicking their thought process more effectively than the generic model could. Of course, this is a rather superficial assessment, and more work and iterations are needed to get the best results from such GenAI apps. Nevertheless, the results are already promising and show how RAG influences the outcome by letting us feed real company information to the model, which in turn produces a better answer.

Screenshot 2 of the custom GPT's response detailing an appropriate salary determination for Mary Martin based on her CV, highlighting her decade-long experience in software development, technical skills in Python and project management, management experience, and educational background.

Conclusion

The integration of AI techniques like fine-tuning, prompt engineering, and RAG represents a strategic enhancement that can be tailored to diverse business needs. By understanding and applying these tools in combination, companies can optimize their AI solutions to deliver targeted and effective results. This practical exploration serves as a clear example to C-suite executives and other non-technical stakeholders on how AI can be adapted to meet specific business objectives and improve decision-making processes.

In upcoming discussions, we will further explore the impact of adding a fine-tuned model to our AI toolkit, providing deeper insights into how these technologies can be seamlessly integrated to further refine AI capabilities.
