The new rules dictating its usage may mean China is the first to learn the lessons of AI regulation
Alibaba Group on Tuesday showed off its new generative AI model, said to be its own ‘version of ChatGPT’. The new large language model, named Tongyi Qianwen, will initially be integrated into DingTalk, Alibaba’s workplace messaging app, and Tmall Genie, Alibaba’s voice assistant.
The unveiling of Tongyi Qianwen was swiftly followed by draft rules, published by the Cyberspace Administration of China, outlining how generative AI services should be managed. Under these draft regulations, Chinese tech companies are called upon to register generative AI products with China’s cyberspace agency and submit them to a security assessment before they can be released to the public.
As governments become more involved in the oversight of AI development, arguments for industry self-regulation have become moot. China’s latest rules signal a hard stance on how generative artificial intelligence services could be managed, providing a reference point for other governments developing their own frameworks.
Bureaucratic approaches to AI
Under the draft rules from the Cyberspace Administration of China, tech companies will be responsible for the “legitimacy of the source of pre-training data” to ensure content reflects the “core value of socialism”. Additionally, companies must require users to verify their real identity before using their products.
Concerns raised by the development of AI aren’t new to international governing bodies, and China is not alone in taking bureaucratic action. Only a few weeks ago, UNESCO called on all governments to expedite the implementation of a global ethical framework for the training of the most powerful AI systems. To date, more than 40 countries are already working with UNESCO to develop AI checks and balances at the national level, building on UNESCO’s collective readiness assessment tool and its Recommendation on the Ethics of Artificial Intelligence.
AI regulation trends from around the world
It should be noted that AI in the United States is still largely unregulated. AI regulation has yet to gain much traction in the US Congress, although privacy-related regulations around AI are expected to start rolling out at the state level this year. Meanwhile, the European Union has proposed sweeping legislation known as the AI Act, which would classify which kinds of AI are “unacceptable” and banned, and which are “high risk”.
As recently as last week, however, there have been calls for European officials to pursue even broader regulations. Experts feel that regulation should address how AI is developed, including how data has been collected, who was involved in the collection and training of the technology, and more. Beyond the US and the EU, Brazil is also working towards AI regulation, with a draft law under consideration by the country’s senate.
What will these new rules mean for AI development?
As China leads the pack with guidelines for AI, some speculate that the trade-off for a more orderly and socially responsible deployment of the technology will be slower progress. It is a difficult situation when governing bodies try to regulate something they may not fully understand, such as AI. The urgency is justified; however, it is unclear what this scramble for immediate regulation might mean for those actually working in the space.
Instead of fostering innovation, governments are looking at AI through a lens of harm reduction at all costs. Although holding companies accountable is a better alternative to banning the technologies outright, China might sooner see the loss of smaller companies and startups under the burden of navigating these bureaucratic processes.