Courtesy is an inherent virtue of human interaction. Saying “please” and “thanks” is a good habit, but with artificial intelligence, as in the case of ChatGPT, it may not be the best one.
According to several studies and expert statements, including from OpenAI CEO Sam Altman, using these two expressions carries a considerably high cost in terms of energy consumption. Although in some cases politeness can improve AI responses, there are reasons to reconsider whether it is worth the habit.
ChatGPT, like many other artificial intelligence models, works through a complex process involving thousands of computer operations to generate responses to user queries. Every time a request is made, the AI must process the input in its entirety, dividing it into tokens (smaller units of text), which increases the time and resources required, especially when interactions grow longer due to courtesy.
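The effect described above can be sketched with a toy tokenizer. Real models use subword tokenizers (such as byte-pair encoding), so actual token counts differ; this naive whitespace split merely illustrates how courtesy words inflate the input the model must process. The example prompts are hypothetical.

```python
def naive_token_count(prompt: str) -> int:
    """Count whitespace-separated words as a rough proxy for tokens.

    Real tokenizers (e.g. BPE) split text into subword units, so true
    counts differ; the point is only that extra words mean extra tokens.
    """
    return len(prompt.split())

plain = "Summarize this article in one paragraph."
polite = "Please, could you summarize this article in one paragraph? Thanks!"

print(naive_token_count(plain))   # 6
print(naive_token_count(polite))  # 10
```

Multiplied across millions of daily queries, those few extra tokens per request add up to a measurable processing load.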
Sam Altman, CEO of OpenAI, addressed this issue in an exchange with a user on X. The user asked about the costs of polite interactions with language models, to which Altman responded with humor, estimating that OpenAI has spent “tens of millions of dollars well spent” due to the extra courtesy words users include.

Although the CEO’s response was made in jest, the reality is that interactions containing words such as “please” or “thanks” require greater computational processing, which translates into more energy consumption and, therefore, higher costs for the company.
This phenomenon is a result of how the ChatGPT language model works. If additional words are included in a query, such as those mentioned above, the number of tokens the AI must process increases, which requires more compute time, greater use of graphics processing units (GPUs) and, ultimately, a heavier load on the servers that run the system.
This additional resource consumption may seem insignificant in a single case, but when these interactions are multiplied by millions of daily users, the costs skyrocket.
The energy consumption of AI is not limited to monetary costs. It also has a considerable ecological footprint, because the data centers that house models such as ChatGPT require huge amounts of electricity.
According to estimates from the specialized outlet Tom’s Hardware, each average ChatGPT query consumes approximately 0.3 watt-hours (Wh) of electricity. Although this is a tenth of what was previously estimated, it remains a considerable figure once the scale of the platform’s user base is taken into account.
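A quick back-of-the-envelope calculation shows how the ~0.3 Wh per-query figure scales. The daily query volume used below is a hypothetical round number for illustration, not an OpenAI statistic.

```python
# Scale the per-query estimate cited above to a full day of traffic.
WH_PER_QUERY = 0.3                # watt-hours per average query (Tom's Hardware estimate)
QUERIES_PER_DAY = 1_000_000_000   # hypothetical: one billion queries per day

daily_wh = WH_PER_QUERY * QUERIES_PER_DAY
daily_mwh = daily_wh / 1_000_000  # 1 MWh = 1,000,000 Wh

print(f"{daily_mwh:,.0f} MWh per day")  # 300 MWh per day
```

At that hypothetical volume, the platform would draw roughly 300 megawatt-hours per day, which is why even a few extra courtesy tokens per query matter in aggregate.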
Concern about the environmental impact of AI is not new. The International Energy Agency (IEA) projects that by 2030, data centers will consume about 945 terawatt-hours (TWh) of electricity, a significant increase over the current 415 TWh.
In addition, AI models demand electricity not only to run, but also to cool the servers and prevent overheating, which implies additional water consumption. A report by the University of California found that even a two-to-three-word response from an AI can consume between 40 and 50 milliliters of water in the computational process.
Despite the high energy cost, some research suggests that being polite when interacting with AI could, in certain cases, improve the quality of the responses generated. A study conducted by Cornell University in the United States evaluated how interactions with AI varied depending on whether courteous or impolite language was used.

The results showed that, as in human interactions, polite language tends to generate more effective and satisfactory responses, while rude prompts can degrade the model’s performance.
However, the study also pointed out that not all polite language is equally effective. In certain contexts, excessive courtesy may not be beneficial. In particular, language models seem to respond better to a moderate level of courtesy in languages such as English, while in other languages, such as Japanese, a high degree of formality appears to be more favorable.
This finding highlights that the AI models are not only influenced by human norms, but also reflect the linguistic and cultural peculiarities that they acquire through their training data.
The study concluded that, although polite language can improve interaction with AI, this effect is not always consistent and depends largely on the cultural context and the level of courtesy. What is clear is that polite interactions, such as those including “please” and “thanks”, can influence how the model responds, but they do not guarantee a significant improvement in the quality of the answers.