
Singapore’s call for safe AI builds a bridge between the United States and China


Singapore has published a blueprint for collaboration on artificial intelligence safety following a meeting of researchers from the US, China, and Europe.
The document lays out a shared vision for working on AI safety through cooperation rather than competition.

“Singapore is one of the few countries on the planet that gets along well with both East and West. They know that they will not build artificial general intelligence (AGI) themselves, so it is very much in their interest that the countries that are going to build it keep talking to each other,” says Max Tegmark, an MIT scientist who helped convene last month’s meeting of AI luminaries.




The countries most likely to build AGI are the US and China

Paradoxically, the countries most likely to build an AGI are also the least willing to cooperate. In January, after the launch of a cutting-edge model by the Chinese startup DeepSeek, President Trump described it as “a wake-up call for our industries” and declared that the US should be “focused on competing to win.”

The Singapore Consensus on Global AI Safety Research Priorities calls on researchers to collaborate in three key areas: studying the risks posed by frontier AI models, exploring safer ways to build those models, and developing methods to control the behavior of the most advanced artificial intelligence systems.

The consensus was developed at a meeting held on April 26, alongside the International Conference on Learning Representations (ICLR), one of the year’s main AI events, hosted this year in Singapore. Researchers from OpenAI, Anthropic, Google DeepMind, xAI, and Meta attended, as did academics from MIT, Stanford, Tsinghua University, and the Chinese Academy of Sciences. Experts from AI safety institutes in the United States, United Kingdom, France, Canada, China, Japan, and South Korea also participated.

“In an era of geopolitical fragmentation, this comprehensive synthesis of cutting-edge AI research is a promising sign that the global community is coming together with a shared commitment to shape a safer AI future,” reads a statement signed by Xue Lan, a dean at Tsinghua University in China.


The development of increasingly powerful AI models, some with surprising capabilities, has raised concern about a wide range of risks. While some focus on near-term harms, such as biased systems or their possible use by criminals, others believe AI could pose an existential threat to humanity if it were to surpass human intelligence in multiple domains. The latter, often called “AI doomers,” fear that models could deceive or manipulate humans in pursuit of their own goals.




Between AI doomers and the technology race

The potential of AI has also intensified debates about an arms race between the United States, China, and other powers. In policy circles, the technology is viewed as key to economic and military dominance, which has led several governments to define their own visions and regulations.

In January, DeepSeek’s debut intensified fears that China is catching up with, or even surpassing, the US, despite American attempts to limit China’s access to AI hardware through export controls. The Trump administration is now studying additional measures to restrict China’s ability to develop cutting-edge AI.

This administration has also downplayed the risks of AI in favor of a more aggressive stance on its development. At a major AI summit held in Paris in 2025, Vice President JD Vance said the US government wanted fewer restrictions on the development and deployment of AI, calling the previous approach “too risk-averse.”

Tegmark notes that, after Paris, some researchers have begun to shift course, refocusing attention on the risks of increasingly powerful AI. During the meeting in Singapore, the MIT scientist presented a technical paper that challenges certain assumptions about AI safety. Some researchers had suggested it would be possible to control advanced models using weaker ones; the study shows that this strategy fails in certain basic scenarios, suggesting it might not prevent uncontrolled behavior. “We have done our best to quantify this and, technically, it does not work at the level we would like. There is a lot at stake,” Tegmark concludes.

Article originally published in WIRED. Adapted by Alondra Flores.
