The computer science discipline is in crisis

Image: Bing AI image generator for T21/Prensa Ibérica, developed with DALL·E technology.

The mastery of language achieved by AI has plunged the computer science discipline into a deep crisis, one that divides academics from industry professionals at a time when the priority should be research aimed at minimizing the potential risks of intelligent systems.

The computing discipline has entered a deep crisis since the launch of ChatGPT in 2022, says the Israeli computer scientist and mathematician Moshe Y. Vardi in an article published in Communications of the ACM.

Artificial intelligence (AI), which for decades was a subfield of computer science characterized by overpromising and underdelivering, has now reached a tipping point, according to Vardi.

Mastery of language, considered the holy grail of AI, has suddenly materialized, allowing computers to communicate fluently in natural language.

And although these systems sometimes generate meaningless content, their language is always polished. This advance has brought the dream of artificial general intelligence, which always seemed to lie just beyond the horizon, closer to tangible reality, says Vardi.

Concern about AI

And this has not left the field indifferent. In January 2024, nearly 3,000 researchers involved in AI development took part in a large survey about the future of artificial intelligence, and most expressed substantial uncertainty about the long-term value of advancing this technology.

These experts' concerns focus mainly on the uncontrolled spread of false information, the large-scale manipulation of public opinion, the authoritarian control of populations, and the worsening of economic inequality.

The timeline these experts project begins to take shape in 2028: by then, with at least a 50% probability, intelligent systems will be able to build a payment processing site from scratch, create a song indistinguishable from one by a real artist, and autonomously fine-tune a large language model (such as the one behind ChatGPT).

Assuming science continues undisrupted, the probability that unaided machines will outperform humans in every possible task was estimated at 10% by 2027 and 50% by 2047. The probability of all human occupations becoming fully automatable was put at 10% by 2037 and 50% by 2116, according to participants in the large January survey.

Risk estimation

Most respondents expressed varying levels of concern about potential future risks. While 68.3% think good outcomes from superhuman AI are more likely than bad ones, 48% of these net optimists still gave at least a 5% chance of extremely bad outcomes, such as human extinction, while 59% of net pessimists gave a 5% or higher chance of extremely good outcomes.

Between 38% and 51% of respondents gave at least a 10% chance that advanced AI would lead to outcomes as bad as human extinction.

More than half expressed “substantial” or “extreme” concern about six different AI-related scenarios, including disinformation, authoritarian control, and inequality.

There is disagreement about whether faster or slower progress in AI would be better for the future of humanity, but broad agreement that more priority should be given to research aimed at minimizing the potential risks of AI systems. This is the main conclusion drawn from the survey.

Artistic recreation of the impact of AI on the computer science discipline.

AI, in focus

Although this survey drew some skepticism, with critics arguing that the experts exaggerated apocalyptic scenarios, all eyes are now on one of the largest AI conferences, NeurIPS 2024, the thirty-eighth edition of this conference on neural information processing systems, which will take place from December 9 to 15, 2024, at the Vancouver Convention Center.

For this edition, the conference has called for presentations that will address a variety of topics within the field of AI, including deep learning, reinforcement learning, probabilistic methods, optimization, and social and economic aspects of AI, among others.

Additionally, this year NeurIPS has placed particular emphasis on the intersection of AI with human creativity through its “Creative AI” track, which focuses on the theme of “Ambiguity.” The track seeks to highlight the multifaceted and complex challenges that arise when applying AI to promote and challenge human creativity.

The Creative AI track invites research papers and artworks that showcase innovative approaches to AI and machine learning in art, design and creativity.

Submissions that question the use of private and public data, consider new forms of authorship and ownership, and challenge notions of the ‘real’ and the ‘not real’, as well as of human and machine agency, are especially encouraged, as they offer a path toward redefining and nurturing human creativity in this new era of generative computing.

Little strategic consensus

The question behind all this activity is whether NeurIPS will be able to articulate a consensus on how to address concerns about AI.

The problem is that the AI community is deeply divided into two branches: academia on the one hand and industry on the other, explains Vardi.

Academic researchers are comfortable with the ACM Code of Ethics, which requires computing professionals to consistently support the public good. The ACM is the largest professional society in computing, and although it has a special interest group on artificial intelligence (SIGAI), the general feeling is that the ACM gave up on AI many years ago, according to Vardi.

Responsible use of computing

Industrial AI researchers are in another league: they tend to work in for-profit corporations, which often talk about corporate social responsibility, but in practice focus on profit maximization. Additionally, they have access to large-scale data and computing that academic researchers can only dream of.

What Vardi proposes to resolve this dilemma is to pick up the baton of social responsibility that was left in the air when Computer Professionals for Social Responsibility (CPSR) dissolved in 2013. That global organization had promoted the responsible use of computing technology since 1983, particularly in military contexts (recall that in 1983 the Euromissile crisis between NATO and the Warsaw Pact was at its height).

A revived CPSR should be in charge of convening and moderating the entire community’s reflection on the future of AI, according to Vardi. The idea is already in the air, but for the moment nothing indicates that it will be implemented. The computer science discipline is certainly in shock.

Reference

Thousands of AI Authors on the Future of AI. Katja Grace et al. arXiv:2401.02843v2 [cs.CY]. DOI: https://doi.org/10.48550/arXiv.2401.02843

 