Did China’s Baidu discover scaling laws before OpenAI? Debate rekindles in AI circles

Recent community discussions have reignited the debate over whether Chinese tech giant Baidu may have laid the key theoretical groundwork for large-scale artificial intelligence (AI) models before America’s OpenAI.

Large models, or “foundation models”, are at the forefront of AI advancement, with their rapid iterations driving cutting-edge applications. While the United States is generally viewed as leading advanced AI model innovation, some argue that China may have begun exploring these concepts earlier.

Central to large-model development is the “scaling law” – a principle which holds that the larger the training data and model parameters, the stronger the model’s intelligence capabilities. Widely credited to OpenAI’s 2020 paper, “Scaling Laws for Neural Language Models”, this idea has since become a cornerstone of AI research.

The OpenAI paper showed that increasing model parameters, training data and computing resources improves performance following a power-law relationship. This insight guided the development of subsequent large-scale AI models.
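For readers who want the concrete shape of that claim, the paper’s headline result can be sketched as simple power laws relating test loss L to the number of model parameters N and the dataset size D; the exponents below are the approximate values the paper reports for language models:

L(N) \approx (N_c / N)^{\alpha_N}, \qquad \alpha_N \approx 0.076
L(D) \approx (D_c / D)^{\alpha_D}, \qquad \alpha_D \approx 0.095

where N_c and D_c are fitted constants. At an exponent of about 0.076, each doubling of model size trims the loss by roughly 5 per cent – a small but predictable gain that made “just scale it up” a viable research strategy.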

However, Dario Amodei, a co-author of the OpenAI paper and former vice-president of research at the company, said in a November podcast that he had observed similar phenomena as early as 2014, during his time at Baidu.

“When I was working at Baidu with [former Baidu chief scientist] Andrew Ng in late 2014, the first thing we worked on was speech recognition systems,” Amodei said. “I noticed that models improved as you gave them more data, made them larger and trained them longer.”
