Data quality essential in training ChatGPT
It is approaching a year since OpenAI launched ChatGPT to the public, with adoption rates skyrocketing at an unprecedented pace.
By February 2023, Reuters reported an estimated 100 million active users. Fast forward to September, and the ChatGPT website had attracted nearly 1.5 billion visitors, showcasing the platform’s immense popularity and integral role in today’s digital landscape. Willem Conradie, CTO of PBT Group, reflects on this journey, noting the significant usage and adoption of ChatGPT across various sectors.
“The rise of ChatGPT has highlighted significant concerns. These range from biased outputs, question misinterpretation, inconsistent answers, and a lack of empathy to security issues. To navigate these, the concept of Responsible AI has gained momentum, emphasising the importance of applying AI with fair, inclusive, secure, transparent, accountable, and ethical intent. Adopting such an approach is vital, especially when dealing with fabricated information, where ChatGPT provides incorrect or outdated answers,” says Conradie.
Of course, the platform’s versatility extends beyond public use. It serves as a powerful tool in corporate environments, enhancing business processes such as customer service enquiries, email drafting, personal assistant tasks, keyword searches, and presentation creation. To perform at its best, ChatGPT must provide accurate responses, which necessitates training on data that is not only relevant to the company but also accurate and timely.
“Consider a scenario where ChatGPT is employed to automatically service customer enquiries with the aim of enhancing customer experience by delivering personalised responses. If the underlying data quality is compromised, ChatGPT may provide inaccurate responses, ranging from minor errors like incorrect customer names to major issues like providing incorrect self-help instructions on the company’s mobile app,” says Conradie. “Such inaccuracies could lead to customer frustration, ultimately damaging the customer experience and negating the intended positive outcomes.”
Addressing such data quality concerns is paramount. Ensuring relevance is the first step: the data used for model training must align with the business context in which ChatGPT operates. Timeliness is another critical factor, as outdated data could lead to inaccurate responses. The data must also be complete; ensuring the dataset is free from missing values, duplicates, and irrelevant entries is important, as these could also result in incorrect responses and actions. The sketch below illustrates what such checks might look like in practice.
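As a purely illustrative example, the following Python sketch shows what basic relevance, timeliness, and completeness checks could look like on a hypothetical customer-enquiry training dataset. The column names, freshness threshold, and file name are assumptions made for the example, not part of any specific ChatGPT or PBT Group workflow.

```python
import pandas as pd

# Hypothetical schema for a customer-enquiry training dataset.
REQUIRED_COLUMNS = ["customer_id", "enquiry_text", "response_text", "updated_at"]
MAX_AGE_DAYS = 180  # assumed freshness threshold; tune to the business context

def check_training_data(df: pd.DataFrame) -> dict:
    """Return simple completeness, timeliness and relevance metrics."""
    report = {}

    # Completeness: missing values and exact duplicate question/answer pairs.
    report["missing_values"] = int(df[REQUIRED_COLUMNS].isna().sum().sum())
    report["duplicate_rows"] = int(
        df.duplicated(subset=["enquiry_text", "response_text"]).sum()
    )

    # Timeliness: records older than the assumed freshness threshold.
    age = pd.Timestamp.now() - pd.to_datetime(df["updated_at"])
    report["stale_rows"] = int((age.dt.days > MAX_AGE_DAYS).sum())

    # Relevance (crude proxy): trivially short entries that add noise, not signal.
    report["irrelevant_rows"] = int((df["enquiry_text"].str.len() < 10).sum())

    return report

if __name__ == "__main__":
    df = pd.read_csv("enquiries.csv")  # hypothetical source file
    print(check_training_data(df))
```

In practice, the thresholds and the definition of an irrelevant entry would be driven by the business context the article describes, rather than the fixed values used here.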
Moreover, continuously improving the model through reinforcement learning, by incorporating user feedback into model retraining cycles, is essential. This helps ChatGPT, and conversational AI models in general, learn from their interactions, adapt, and enhance their response quality over time. A simple starting point is to log user feedback alongside each exchange, as sketched below.
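The snippet below is a minimal, hypothetical sketch of that feedback loop: it logs user ratings of responses and filters positively rated exchanges as candidate examples for the next retraining cycle. The file name, field names, and rating scheme are assumptions for illustration, not part of any ChatGPT API.

```python
import json
from datetime import datetime, timezone

FEEDBACK_LOG = "feedback_log.jsonl"  # hypothetical store for retraining data

def record_feedback(prompt: str, response: str, rating: int, comment: str = "") -> None:
    """Append one user-feedback record for use in the next retraining cycle."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
        "rating": rating,  # e.g. +1 for thumbs up, -1 for thumbs down
        "comment": comment,
    }
    with open(FEEDBACK_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def build_retraining_set(min_rating: int = 1) -> list[dict]:
    """Keep only positively rated exchanges as candidate training examples."""
    with open(FEEDBACK_LOG, encoding="utf-8") as f:
        records = [json.loads(line) for line in f]
    return [r for r in records if r["rating"] >= min_rating]
```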
“The data quality management practices highlighted here, while not exhaustive, serve as a practical starting point. They are applicable not just to ChatGPT, but to conversational AI and other AI applications like generative AI. All this reinforces the importance of data quality across the spectrum of AI technologies,” concludes Conradie.