China Telecom’s AI Research Institute announces the training of a 100-billion-parameter AI model using purely domestic computing power, marking a significant milestone in the country’s AI capabilities despite Western technology sanctions.
China Telecom’s AI Research Institute Claims Landmark Achievement with Domestically Powered AI Model
Beijing, China – In a significant technological development, China Telecom’s AI Research Institute has announced the successful training of a 100-billion-parameter artificial intelligence model using exclusively domestically produced computing power. This achievement, which suggests that Chinese AI initiatives are continuing to advance despite the limitations imposed by Western technology sanctions, represents a notable milestone in the country’s AI capabilities.
The model, named TeleChat2-115B, was revealed in a GitHub update on September 20. According to the update, the model was “trained entirely with domestic computing power and open sourced.” The project’s GitHub page provides additional insights, indicating that the AI model was trained using 10 trillion tokens of a high-quality corpus in both Chinese and English.
A key detail from the GitHub page is compatibility with the “Ascend Atlas 800T A2 training server.” This Huawei-built server is said to support the Kunpeng 920 7265 or Kunpeng 920 5250 processors, which run 64 cores at 3.0GHz and 48 cores at 2.6GHz respectively. Both chips are built on the Armv8.2 architecture and fabricated on a 7nm process, showcasing advanced domestic manufacturing capabilities.
At a parameter count of roughly 100 billion, TeleChat2-115B trails the largest recent AI models: Meta’s Llama 3.1 tops out at 405 billion parameters, and even OpenAI’s older GPT-3 has 175 billion. Parameter count is not the sole determinant of an AI model’s utility or power, but a lower count typically means a model requires less computational power to train.
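To put those figures in perspective, the training compute implied by the announcement can be sketched with the widely used 6·N·D rule of thumb (total training FLOPs ≈ 6 × parameters × training tokens). This is a back-of-the-envelope approximation for dense transformer models, not a number from China Telecom’s announcement; the parameter and token counts below are taken from the article.

```python
def training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer,
    using the common FLOPs ~= 6 * N * D approximation."""
    return 6 * params * tokens

# TeleChat2-115B: ~115 billion parameters, ~10 trillion training tokens
flops = training_flops(115e9, 10e12)
print(f"{flops:.2e}")  # → 6.90e+24
```

By this estimate, training a model of this scale on 10 trillion tokens is on the order of 10²⁴–10²⁵ FLOPs, which gives a sense of why the choice of domestic training hardware is the central claim of the announcement.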
Interestingly, the announcements make no mention of a GPU, likely because the Atlas 800T A2’s only listed graphics hardware is a modest display adapter (1920 × 1080 at 60Hz, 16 million colours) intended for console output rather than computation. This suggests the infrastructure used to train TeleChat2-115B may not be comparable to the high-end GPU setups available to AI researchers outside China. Nevertheless, this does not appear to have hindered China Telecom’s progress in AI research.
China Telecom, a major player in the telecommunications industry with annual revenue exceeding $70 billion and more than half a billion wired and wireless subscriptions, has long been a significant user and proponent of OpenStack. Despite restricted access to the latest international technology, the carrier’s vast scale and resources have enabled it to support ambitious AI projects effectively.
The successful development of TeleChat2-115B exemplifies China’s resilience and growing self-reliance in technology, marking an important step in its quest to become a global leader in artificial intelligence.
For now, the open-sourced TeleChat2-115B model stands as a testament to the capabilities of Chinese technology and its ongoing efforts to overcome external limitations through innovation and resourcefulness. The model’s performance and potential applications in various fields are yet to be fully explored, but its creation underscores China’s ongoing momentum in AI development.
Source: Noah Wire Services


