Aiming to be first in the world to have the most advanced forms of artificial intelligence while also maintaining control over more than a billion people, elite Chinese scientists and their government have turned to something new, and very old, for inspiration—the human brain.
In one of thousands of efforts underway, they are constructing a “city brain” to enhance the computers at the core of the “smart cities” that already scan the country from Beijing’s broad avenues to small-town streets, collecting and processing terabytes of information from intricate networks of sensors, cameras and other devices that monitor traffic, human faces, voices and gait, and even look for “gathering fights.”
Equipped with surveillance and visual processing capabilities modeled on human vision, the new “brain” will be more effective and less energy-hungry, and will “improve governance,” its developers say. “We call it bionic retina computing,” Gao Wen, a leading artificial intelligence researcher, wrote in the paper “City Brain: Challenges and Solution.”
The work by Gao and his cutting-edge Peng Cheng Laboratory in the southern city of Shenzhen represents far more than just China’s drive to expand its ever more pervasive monitoring of its citizens: it is also an indication of China’s determination to win the race for what is known as artificial general intelligence.
This is the AI that could not only out-think people on a vast number of tasks, giving whoever controls it an enormous strategic advantage, but that has also prompted warnings from experts in the West of a potential threat to the existence of civilization if it outwits its human masters and runs amok.
Gao’s paper is just one of about 1,000 seen by Newsweek that show China is forging ahead in the race for artificial general intelligence, a step change beyond the large language models such as ChatGPT or Bard already taking societies by storm with their ability to generate text and images and find vast amounts of information quickly.
“Artificial general intelligence is the ‘atomic bomb’ of the information field and the ‘game winner’ in the competition between China and the United States,” another leading Chinese AI scientist, Zhu Songchun, said in July in his hometown of Ezhou near Wuhan in Hubei province, according to Jingchu Net, the website of the Hubei Daily, a Communist Party media outlet.
Just as Chinese scientists worked around the clock in the 1950s and ’60s to build the atomic bomb, intercontinental missiles and a satellite, “We need to develop AI like the ‘two bombs and one satellite’ and form an AI ‘ace army’ that represents the national will,” Zhu said.
China aims to lead the world in AI by 2030, and brain-inspired research is part of that drive: the official “China Brain Project” was announced in 2016, and AI and brain science are two of half a dozen “frontier fields” named in the state’s 15-year national science plan running from 2021 to 2035.
There’s AI—and then there’s AGI
There are major differences between the “narrow AI” systems in use now, which cannot “think” for themselves but can perform tasks such as writing a person’s term paper or identifying their face, and the more ambitious artificial general intelligence that could one day outperform humans at many tasks.
U.S. scientists are also working on AGI, though their efforts are mostly scattered. In China, by contrast, research institutes devoted to it have received many hundreds of millions of dollars in state funding, say Western AI scientists who have worked with their Chinese counterparts in cutting-edge fields but asked not to be identified because of political sensitivities in China surrounding the topic.
In a report published in August by The Millennium Project, a Washington, D.C.-based futurist think tank that warns of grave dangers for humanity from AGI, AI doyen Geoffrey Hinton said his expectations for when it might be achieved had dropped recently from 50 years to less than 20—and possibly as little as five.
“We might be close to the computers coming up with their own ideas for improving themselves and it could just go fast. We have to think hard about how to control that,” Hinton said.
A major difference between the West and China is the public debate over the dangers AI, including very advanced AI, could pose.
“AI labs are recklessly rushing to build more and more powerful systems, with no robust solutions to make them safe,” Anthony Aguirre of the U.S.-based Future of Life Institute told Newsweek, referring largely to work in the U.S. More than 33,000 people, including prominent researchers, signed a call by the institute in March for a six-month pause on some AI development—though it went unheeded.
Few such existential concerns are expressed in public in China.
In April, Chinese leader Xi Jinping told the Politburo, “Emphasize the importance of artificial general intelligence, create an innovation system (for it),” according to state news agency Xinhua. Xi has frequently called for Chinese scientists to pursue AI at high speed—at least 13 times in recent years. Xi also told the Politburo that scientists should pay attention to risk, yet so far the key AI-related risk cited in China is political, with a new law introduced in August putting first the rule that AI “must adhere to socialist core values.”
Li Zheng, a researcher at the China Institutes of Contemporary International Relations in Beijing, wrote in July that China was “more concerned about national security and public interest” than the E.U. or the U.S.
“The U.S. and Western countries place more emphasis on anti-bias and anti-discrimination in AI ethics, trying to prevent the interests of ethnic minorities and marginalized groups from being harmed by algorithmic discrimination,” Li wrote in the Chinese-language edition of Global Times.
“Developing countries such as China place more emphasis on the strategic design and regulatory function of the government.”
Li’s institute belongs to the Ministry of State Security.
China’s AGI Research
The extent of China’s research into AGI was highlighted in a study by Georgetown University’s Center for Security and Emerging Technology titled “China’s Cognitive AI Research,” which concluded that China is pursuing a credible path toward AGI and called for greater scrutiny of the Chinese efforts by the U.S.
“China’s cognitive AI research will improve its ability to field robots, make smarter and quicker decisions, accelerate innovation, run influence operations, and perform other high-level functions reliably with greater autonomy and less computational cost, elevating global AI risk and the strategic challenge to other nations,” the authors said in the July study.
Examining thousands of Chinese scientific papers on AI published between 2018 and 2022, the team of authors identified 850 that they say show the country is seriously pursuing AGI, including via brain science, a goal singled out in China’s current Five-Year Plan.
The studies include brain science-inspired investigations of vision such as Gao’s, perception aiming for cognition, pattern recognition research, investigations of how to mimic human brain neural networks in computers, and efforts that could ultimately lead to human-robot hybrids, for example by placing a large-scale brain simulation on a robot body.
Illustrating that, a report in late 2022 by CCTV, China’s state broadcaster, showed a robot manipulating a door handle to open a cupboard as a scientist explained that the “depth camera” affixed to its shoulders can also “recognize a person’s physique and analyze their intentions based on this visual information.”
Significantly, the Georgetown researchers said, “There was an unusually large number of papers on facial, gait, and emotion recognition” in the Chinese papers, as well as sentiment analysis, “errant” behavior prediction and military applications.
In addition to the papers identified in the study, others seen by Newsweek explored human-robot value alignment, “Amygdala Inspired Affective Computing” (referring to a small part of the brain that processes fear), industrial applications such as “A closed-loop brain-computer interface with augmented reality feedback for industrial human-robot collaboration,” and, from July 2023, “BrainCog: A Spiking Neural Network based, Brain-inspired Cognitive Intelligence Engine for Brain-inspired AI and Brain Simulation.”
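Several of those papers, including BrainCog, center on “spiking” neural networks, in which artificial neurons fire discrete pulses over time rather than outputting continuous values, more closely mimicking biological cells. As a rough, generic illustration only (a textbook leaky integrate-and-fire model, not code from BrainCog or any of the papers cited), a single spiking neuron can be simulated in a few lines of Python:

```python
import numpy as np

def simulate_lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                        v_reset=0.0, v_threshold=1.0):
    """Simulate a generic leaky integrate-and-fire (LIF) neuron.

    input_current: array of input values, one per time step.
    Returns the membrane potential trace and the time steps of spikes.
    """
    v = v_rest
    potentials, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest while integrating input.
        v += (-(v - v_rest) + i_in) * (dt / tau)
        if v >= v_threshold:          # Threshold crossed: emit a spike.
            spikes.append(t)
            v = v_reset               # Reset after firing.
        potentials.append(v)
    return np.array(potentials), spikes

# Example: a steady input drives the neuron to fire periodically.
trace, spike_times = simulate_lif_neuron(np.full(200, 1.5))
print(f"{len(spike_times)} spikes in 200 time steps")
```

Brain-inspired engines of the kind described in the research chain large numbers of such units together and train them with biologically motivated learning rules.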
A forthcoming study by the U.S. researchers will focus on China’s use of “brain-computer interface” technology to enhance the cognitive power of healthy humans, meaning, essentially, to mold their intelligence and perhaps even, given the political environment in China, their ideology.
Fact or (Science) Fiction?
Some scientists view AGI as unattainable science fiction. But Max Riesenhuber, co-director of the Center for Neuroengineering at Georgetown University Medical Center and one of the researchers in the U.S. study, believes that China is on a path that is more likely to succeed than the large language models such as ChatGPT, the current preoccupation in the West.
“ChatGPT by design cannot go back and forth in its reasoning, it is limited to linear thinking,” Riesenhuber said in an interview.
In China, “There is a very sensible realization that LLMs are limited as models of real intelligence. A surefire way to get to real intelligence is to reverse engineer the human brain, which is the only intelligent system that we know of,” Riesenhuber said.
He cautioned that it was unclear if anyone, including China, would succeed at this next level AI. “However, getting closer to the real thing will already have payoffs,” he said.
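Riesenhuber’s point about “linear thinking” refers to the way today’s large language models generate text autoregressively, committing to one token at a time from left to right with no built-in mechanism for revising earlier output. A minimal sketch (the predict_next_token function is a hypothetical stand-in for a real model, not any specific system’s API) shows the loop:

```python
def generate(prompt_tokens, predict_next_token, max_new_tokens=50, eos_token=None):
    """Generate output the way current LLMs do: strictly left to right.

    `predict_next_token` is a stand-in for a real model: it maps the
    tokens produced so far to a single next token. Once a token is
    emitted it is never revisited or revised, which is the "linear"
    reasoning limitation referred to above.
    """
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        next_token = predict_next_token(tokens)  # condition only on the past
        tokens.append(next_token)                # commit: no backtracking
        if next_token == eos_token:
            break
    return tokens

# Toy usage: a "model" that just counts upward, stopping at 5.
toy_model = lambda tokens: tokens[-1] + 1
print(generate([0], toy_model, max_new_tokens=10, eos_token=5))
# -> [0, 1, 2, 3, 4, 5]
```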
In speeches and state documents seen by Newsweek, Chinese officials say they are aiming for “first-mover advantage” in AGI. Scientists say the potentially self-replicating nature of AI means that could give them a permanent edge.
“General artificial intelligence is the original and ultimate goal of artificial intelligence research,” Zhu, the founder and director of China’s leading AGI institute, the Beijing Institute of General Artificial Intelligence (BIGAI), said in an online event earlier this year. “I specifically decided to use AGI as part of the institution’s name to distinguish it from dedicated artificial intelligence,” such as “face recognition, object detection or text translation,” Zhu said.
Zhu, like many other leading AI scientists in China, is a former student and professor at U.S. universities.
AGI with Chinese Characteristics
China’s quest for AGI won top-level, public policy support in 2017, when the government published a “New Generation AI Development Plan” laying out a path to leading the field globally by 2030.
Public documents seen by Newsweek reveal wide and deep official support. One from May this year shows the Beijing Municipal Science and Technology Commission signed off on a two-year effort to position the city as a global leader in AGI research, with significant state subsidies.
Beijing will focus on “brain-like intelligence, embodied intelligence…and produce enlightened large models and general intelligence,” it said. A one-million-square-foot “Beijing General Artificial Intelligence Innovation Park” is due to be finished at the end of 2024.
Zhongguancun, in Beijing’s northwestern Haidian university district, is China’s “Silicon Valley” and a hub of the research. In addition to BIGAI, the Beijing Academy of Artificial Intelligence (BAAI) is based in the capital, as are Peking and Tsinghua universities, both highly active in AI research. The centers interconnect.
This concentration of R&D and commercial activity will create “a ‘nuclear explosion point’ for the development of general artificial intelligence,” according to the district government. The district and city governments did not respond to requests for comment.
Other key research locations include Wuhan, Shanghai, Shenzhen, Hefei, Harbin and Tianjin.
Safety First
The prospect of anyone developing superintelligent AGI raises enormous safety concerns, said Nick Bostrom, director of the Future of Humanity Institute at Oxford University, when asked about China’s work in the field.
“We don’t yet have reliable methods for scalable AI alignment. So, if general machine superintelligence were developed today, we might not be able to keep it safe and controlled,” Bostrom told Newsweek.
AI as it is today was already concerning enough, he noted, with its potential to help engineer enhanced chemical or biological weapons, enable cybercrime and automated drone swarms, or be used for totalitarian oppression, propaganda or discrimination.
“Ultimately, we all sit in the same boat with respect to the most serious risks from AGI. So, the ideal would be that rather than squabbling and fighting on the deck while the storm is approaching, we would work together to get the vessel as ready and seaworthy as possible in whatever time we have remaining,” Bostrom said.
Others question whether an agreement on ethical implementation would be respected even if one were reached globally.
“Ethical considerations are undoubtedly a significant aspect of AI regulation in China. However, it remains unclear whether regulators have the capacity or willingness to strictly enforce rules related to ethics,” said Angela Zhang, director of the Philip K. H. Wong Centre for Chinese Law at the University of Hong Kong.
“Given the prevailing political concerns, I don’t believe that ethical considerations will be the dominant factor in China’s future decisions on AI regulation,” Zhang said.
Despite that, she said it was necessary to try to get agreement between China and the U.S. as the world’s leading AI powerhouses, and voiced hopes it might be possible.
“It would be a missed opportunity not to involve China in establishing universal rules and standards for AI,” Zhang said.
A Western AI expert who interacts with Chinese AI scientists and policymakers said that while Chinese officials were interested in international measures on safety or an international body to regulate AI, they must be part of setting the rules, or the effort would certainly fail. Questions to the Institute for AI International Governance of Tsinghua University went unanswered.
In a statement to Newsweek, Liu Pengyu, a spokesman for the Chinese embassy in Washington, D.C., made clear China wanted an “extensive” role in future AI regulation. “As a principle, China believes that the development of AI benefits all countries, and all countries should be able to participate extensively in the global governance of AI,” Liu said.
Worrying About a Revolution
At home, the greatest fear over AI highlighted by Chinese officials is that it could lead to domestic rebellion against Communist Party rule. Commentators have publicly voiced concerns over the perceived political risks of foreign-made, generative AI.
“ChatGPT’s political stance and ideology will subconsciously influence its hundreds of millions of user groups, becoming the most massive propaganda machine for Western values and ideology,” Lu Chuanying, a governance, internet security and AI expert at the Shanghai Institutes for International Studies, wrote in the Shanghai media outlet The Paper.
“This is of great concern to our current ideology and discourse system,” Lu wrote.
The concern prompted a new AI law that took effect in China on August 15. The first of five main provisions states that generative AI, including text, images, audio and video, “must adhere to socialist core values, must not generate incitement to subvert state power, overthrow the socialist system, damage national security or interests” and “must not hurt the national image.”
Hard at work in their laboratories, some Chinese researchers have also been focusing on what AI could mean for ideology. One paper aiming for AGI noted that “unhealthy information can easily harm the psychology of college students.” The title of the paper: “An intelligent software-driven immersive environment for online political guidance based on brain-computer interface and autonomous systems.”
Such an application of the technology, it said, was inevitable.
Didi Kirsten Tatlow can be reached at d.kirstentatlow@newsweek.com. Find her on Twitter @dktatlow