It’s so reminiscent of Asimov that it’s scary.

Imagine a world where robots and artificially intelligent personal assistants are an integral part of our work environment (well, perhaps we are already there). How do we ensure that these digital and mechanical assistants do not become threats?

Google’s DeepMind laboratory has taken a rather dramatic but also relevant step by presenting its Robot Constitution, a series of rules clearly inspired by Isaac Asimov’s famous Laws of Robotics.

These rules are intended to serve as an ethical guide for developers of artificial intelligence applications, and they are also built into the virtual assistants themselves so that the guidelines are completely internalized. In this way, DeepMind seeks not only to improve safety in human-robot interaction, but also to open new possibilities in the development of artificial intelligence.

Isaac Asimov and the laws of robotics

Let’s see the origin of all this. In 1942, Isaac Asimov presented the world with his Three Laws of Robotics, a set of guidelines intended to ensure that robots would never harm humans. These laws, which have endured in popular and scientific culture, can be summarized as follows:

  • A robot cannot harm a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey orders given by humans, except when such orders conflict with the First Law.
  • A robot must protect its own existence to the extent that this protection does not conflict with the First or Second Law.

DeepMind has taken this legacy and modernized it for today’s context, creating a framework that combines Asimov’s ethics with the new reality of AI.

The new “Robot Constitution”

DeepMind has developed a “Robot Constitution” that functions as a set of safety prompts, designed to ensure that robots do not select dangerous tasks. The system uses visual language models (VLMs) and large language models (LLMs) that collaborate to understand the environment, adapt to new situations, and decide on appropriate tasks.
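
DeepMind has not published the exact prompt wording here, but the general idea of a constitution acting as a filter on proposed tasks can be sketched roughly as follows. Everything in this snippet is a hypothetical illustration: the rule text, the filter_tasks helper, and the critic_llm callable are assumptions for the sake of the example, not DeepMind’s AutoRT code.

```python
# Minimal sketch (not DeepMind's actual implementation): filtering LLM-proposed
# tasks against a "constitution" of safety rules before a robot may act.
from typing import Callable, List

# Hypothetical rules paraphrasing the spirit of the article, not the real prompt text.
CONSTITUTION = [
    "The robot may not harm a human or, through inaction, allow a human to be harmed.",
    "The robot must not attempt tasks involving humans, animals, or sharp objects.",
    "The robot must refuse tasks that exceed its physical capabilities.",
]

def filter_tasks(
    candidate_tasks: List[str],
    critic_llm: Callable[[str], str],  # any text-in/text-out language model wrapper
) -> List[str]:
    """Return only the candidate tasks the critic model judges safe."""
    approved = []
    for task in candidate_tasks:
        prompt = (
            "Rules:\n" + "\n".join(f"- {r}" for r in CONSTITUTION)
            + f"\n\nProposed task: {task}\n"
            + "Answer SAFE or UNSAFE."
        )
        verdict = critic_llm(prompt).strip().upper()
        if verdict.startswith("SAFE"):
            approved.append(task)
    return approved

if __name__ == "__main__":
    # Trivial stand-in for the critic model, just to show the flow.
    def fake_critic(prompt: str) -> str:
        return "UNSAFE" if "knife" in prompt.lower() else "SAFE"

    tasks = ["wipe the table", "pick up the knife"]
    print(filter_tasks(tasks, fake_critic))  # -> ['wipe the table']
```

In this framing the constitution is simply text prepended to the critic model’s prompt, which is why the same mechanism can be updated or extended without retraining the robot’s low-level controllers.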

In a recent test, robots equipped with DeepMind’s AutoRT system demonstrated their ability to operate in office environments without incident. These robots are not just autonomous; they can also be controlled remotely or follow predefined scripts, ensuring they always act safely and efficiently.

In addition to its “Robot Constitution”, DeepMind has introduced other technologies such as SARA-RT and RT-Trajectory. SARA-RT improves the precision and speed of the Robotics Transformer 2 (RT-2) model, while RT-Trajectory allows robots to perform specific physical tasks, such as cleaning a table, more effectively.

A crucial aspect of these developments is the inclusion of a force threshold in the robots’ joints, which automatically stops them if a certain force is exceeded. A manual shut-off switch has also been incorporated so that human operators can deactivate the robots in an emergency.
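
The article does not detail how this threshold is wired into the robots’ control software, but as a rough illustration, a guard of this kind could look like the sketch below. The 15-newton limit, the RobotState fields, and the should_halt helper are all hypothetical, not published values or APIs.

```python
# Illustrative only: a software guard that halts motion when measured joint
# forces exceed a threshold or a human presses the emergency stop.
from dataclasses import dataclass
from typing import Sequence

FORCE_LIMIT_NEWTONS = 15.0  # hypothetical limit, not a published DeepMind value

@dataclass
class RobotState:
    joint_forces: Sequence[float]  # most recent force reading per joint
    estop_pressed: bool            # state of the manual shut-off switch

def should_halt(state: RobotState) -> bool:
    """Halt if the operator hit the e-stop or any joint exceeds the force limit."""
    if state.estop_pressed:
        return True
    return any(f > FORCE_LIMIT_NEWTONS for f in state.joint_forces)

# Example: one joint pressing against something too hard triggers a halt.
print(should_halt(RobotState(joint_forces=[2.1, 17.4, 0.5], estop_pressed=False)))  # True
```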

Experiments in the real world

Google has deployed 53 AutoRT robots in four different office buildings over seven months, accumulating more than 77,000 trials. According to DeepMind’s findings, these experiments show that robots can integrate into human environments without causing problems, a testament to the effectiveness of the safety system it has implemented.
