The artificial intelligence business threatens to turn science into pseudoscience

This article was originally published on The Conversation.

When we talk about artificial intelligence (AI), we must ground ourselves rigorously in the foundations of the sciences that support it. In other words, we should never rely on the artificially generated expectations around the ‘new products’ being sold to us, nor on the tremendous advertising hyperbole (‘hype’) that sweeps through the media. Exaggerated claims about its successes seriously damage the reputation of AI as a science and risk reducing it to something used almost as a pseudoscience, in the manner of astrology, flat-Earthism and homeopathy.

Summers and winters

The scientist Jonathan Grudin published an article in 2009 illustrating the sharp oscillations in interest, funding and real advances in AI throughout its history. He used the metaphor of the seasons of the year, which helps us understand how AI has evolved since its birth.

Now, as the first quarter of the 21st century draws to a close, we find ourselves at a high point, but deep ‘AI winters’ in the future are not unlikely, given how much remains to be researched and understood in this important area of knowledge.

The scientific community in this field knows how far we are from modeling true computational nervous systems or from understanding what intelligence consists of. These researchers also acknowledge that there is no evidence – mathematical, physical or biological – of, nor any known prototype equivalent to, the thinking capabilities of a human brain. For this reason, great effort is still needed in multiple areas if we want to advance scientifically, with serious and firm steps.

Generative AIs are degenerative

The most popular products of the AI industry in recent times are ‘generative AI’ systems, based mainly on computational neural networks trained as large language models (LLMs). The best-known examples are ChatGPT, MidJourney and Sora, capable of generating text, images, sound and video.

These computational tools are trained on enormous amounts of data found on the Internet – material produced by people who hold the legal rights of authorship over it, but which circulates freely online.

To avoid lawsuits or cut costs, many companies use data generated by their own AI systems to continue training their machines. Mathematical analysis shows that this recursive training of a model on data previously generated by the model itself produces a statistical effect called ‘model collapse’.

That collapse leads to misinformation, content degeneration, increasingly incorrect learned models, and even a possible collapse of the Internet as a trusted source.

We know that many AIs generate plausible-looking but false content. If a web search engine were based on a generative AI like ChatGPT, its results could be very convincing yet incorrect. If we accepted them as valid without verifying them, and also released that content as training data for AI, one of our most important sources of information would fill up with misinformation.
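The recursive-training loop described above can be sketched numerically. The following is a minimal, hypothetical toy model (not the experiment from any particular study): repeatedly fit a Gaussian to a small sample, then draw the next ‘generation’ of training data from that fit. Because each fit carries finite-sample error, the estimated spread drifts over the generations and tends to shrink, losing the tails of the original distribution – a toy analogue of model collapse.

```python
import random
import statistics

def fit_and_resample(data, n):
    """Fit a Gaussian (mean, stdev) to the data, then draw n fresh
    samples from that fitted model -- the next 'generation' of data."""
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(0)
# Generation 0: 'human-made' data from a standard normal distribution.
data = [random.gauss(0, 1) for _ in range(20)]

spreads = []
for generation in range(100):
    spreads.append(statistics.stdev(data))
    # Each generation is trained ONLY on the previous model's output.
    data = fit_and_resample(data, 20)

# Finite-sample fitting error compounds across generations; the spread
# typically decays, i.e. the model forgets the original data's diversity.
print(f"generation 0 spread: {spreads[0]:.3f}")
print(f"generation 99 spread: {spreads[-1]:.3f}")
```

In a real system the ‘model’ is a neural network rather than two Gaussian parameters, but the mechanism is the same: each generation can only reproduce what the previous one emitted, so rare content disappears first and errors accumulate.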

Malpractice in the AI ​​industry

The AI business does not stop, even though science shows that there is no such thing as strong or general AI (‘AGI’), and that existing narrow tools (weak AI) need significant improvements to assist correctly in solving certain problems.

Many technology companies continue to amplify the ‘hype’ in order to keep growing their financial results. To give a few examples of prominent businessmen in the sector who know that what they say is not possible, consider recent statements or ‘predictions’ by Elon Musk and Mark Zuckerberg, who announce the imminent arrival of this non-existent general artificial intelligence in their products.

A perhaps even more reprehensible practice is the founding, by Sam Altman (CEO of OpenAI, the creator of ChatGPT), of the company Worldcoin, dedicated to cryptocurrencies. Particularly striking has been its worldwide campaign to capture people’s biometric data (adults and minors alike) through ‘orbs’, devices attractive to young people that scan the iris. Those drawn in by this campaign give up their data in exchange for a token (a digital voucher).

Worldcoin’s stated justification is to offer, in the future, a universal income in cryptocurrencies to compensate for jobs lost to the advance of AI. Fortunately, this crude claim has prompted several countries to ban its operations, but many people have already been drawn in by the company.

The cult of the singularity

The well-known engineer and inventor Ray Kurzweil, author of many books on AI, spiritual machines and transhumanism, published a major commercial success in 2005 titled ‘The Singularity Is Near’.

The “technological singularity” is a hypothetical event in which machines will surpass humanity in intelligence.

Kurzweil’s argument rests on a simplistic idea he called the “law of accelerating returns”, which postulates that scientific and technological growth is exponential – a generalized version of the computer industry’s Moore’s law.

Relevant figures in the industry and in computer science – such as Gordon Moore himself (co-founder of Intel), Paul Allen (co-founder of Microsoft) and Mark Greaves – expressed frontal disagreement with the singularity prediction years ago. One reason is that brains do not work like computers, and we have not even begun to unravel the layers of complexity that prevent us from understanding how our own minds work.

Kurzweil founded Singularity University, based in California, in 2008 to “gather, educate and inspire a group of leaders.” His main book, written in scientific terms but dogmatic and expressing no doubt in its statements or prophecies, reads more like a text of faith in his transhumanism. Singularity University closely resembles other cults that emerged in the last century, such as Scientology.

Kurzweil will release his sequel in the summer of 2024: ‘The Singularity Is Nearer’, which will surely become a bestseller. He has also announced plans to expand his university with new campuses in Tel Aviv and Seville.

Kurzweil’s new text predicts the advent of the singularity by 2029. While the scientific community tries to advance knowledge, the AI business does not hesitate to sow confusion, sell anything and turn that science into a dangerous pseudoscience.

 