BEIJING, April 8 (Xinhua) — China is soliciting public opinions on drafts of 22 national standards covering data fundamental vocabulary, the National Technical Committee 609 on Data of Standardization Administration of China said on Wednesday.
The move is part of China's accelerated push to clarify terminology and establish unified standards in the data field, an effort aimed at bolstering the high-quality development of the artificial intelligence (AI) industry, according to the committee.
The draft standards are intended to support teaching, research, production, business and technical exchanges within the data sector.
They specify Chinese and English names and definitions for key terms such as token, data value, data asset, trusted data space, and data annotation industry.
With the growing adoption of AI and big data technologies, terms like computing power, token and high-quality dataset have become increasingly common in daily discourse, drawing widespread public attention and discussion.
Experts believe that establishing national standards — including for fundamental data vocabulary, city-wide digital transformation terminology and high-quality dataset classification guidelines — will help create a consistent language system for the data factor market and offer standardized support for the high-quality growth of the digital economy.
“The substantial increase in average daily token calls fully demonstrates that China’s AI development has entered a stage of rapid growth,” Liu Liehong, head of the National Data Administration, said last month.
China's average daily token calls exceeded 140 trillion in March this year, a surge of more than 1,000-fold from 100 billion at the beginning of 2024 and an increase of over 40 percent from 100 trillion at the end of 2024, according to Liu. ■
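The reported growth multiples follow directly from the cited figures; a minimal sketch checking that arithmetic (the figures themselves are from the article):

```python
# Average daily token calls reported in the article.
early_2024 = 100e9    # 100 billion at the beginning of 2024
end_2024 = 100e12     # 100 trillion at the end of 2024
march_2025 = 140e12   # over 140 trillion in March this year

# Growth versus the beginning of 2024: more than 1,000-fold.
fold_increase = march_2025 / early_2024
print(f"{fold_increase:.0f}x since early 2024")  # 1400x

# Growth versus the end of 2024: over 40 percent.
pct_increase = (march_2025 - end_2024) / end_2024 * 100
print(f"{pct_increase:.0f}% since end of 2024")  # 40%
```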
