Baidu is confident that its AI-driven chatbot will not make mistakes on “important and sensitive topics,” citing its experience adapting its search engine to Chinese regulatory constraints, the firm said on Tuesday.
The ChatGPT-like Ernie Bot, which Reuters testing has shown refuses to answer a wide range of questions on politics, particularly those referring to Chinese government leaders, is not yet publicly available, Baidu CEO Robin Li said during a conference call with analysts. The company is still awaiting government approval.
Li invoked the industry term “hallucination,” which describes AI models producing outputs that are false or inconsistent with their inputs. “For important and sensitive topics, we have to make sure artificial intelligence will not hallucinate,” Li said.
Because the LLMs (large language models) underlying chatbots such as ChatGPT and Ernie Bot are essentially probabilistic, he added, “this task is not trivial at all.”
Industry regulation has not yet been finalized, Li said, and the company will continue to revise its plans as necessary.
Baidu has operated a search engine in China for more than 20 years, he noted, giving it deep knowledge of both local culture and the regulatory system. By contrast, companies without a history of working closely with regulators or of delivering compliant internet content may face major difficulties.
China’s cyberspace regulator issued draft rules last month for generative AI-powered services such as Ernie Bot, stating that content produced by the technology must uphold the nation’s socialist principles. Li said Baidu stands to benefit from these measures. “We believe that regulators’ active engagement in generative AI in the early stages will raise the bar to entry, and we are well positioned for that,” he said.