ChatGPT barely makes the grade in the EBAU exams

With hardly any hesitation, and without having to travel to the University of La Rioja (UR), ChatGPT has shown its ingenuity when facing the Spanish Language and Literature, Foreign Language, History of Spain and Philosophy exams that it sat. Just over a week ago, 1,429 students – of the 1,433 who enrolled in the compulsory phase – took the Riojan EBAU tests.

Its results, however, have not been brilliant. The artificial intelligence only produces enough content to pass the four common subjects of the compulsory phase – in which students choose between History of Spain and Philosophy and sit the other as an optional subject – with grades that are far from outstanding. Of all of them, the tool only stands out clearly in the Foreign Language test, which it managed to pass with a grade close to outstanding, an 8.5. That is hardly surprising, since English is its native language.

The AI fails when asked for a personal opinion, answers schematically and does not contextualize facts

The rest of its scores range between 6 and 7.25. This is due, in part, to the fact that ChatGPT slips up when asked to give a well-founded opinion on some of the exam questions, when writing a personal commentary, or when it resorts to outlines to write its answers (when one of the basic requirements of the EBAU is that they be developed in full). It also fails badly at contextualizing facts, since it does not introduce data, historical references or explanations.

To grade it, four teachers from two secondary schools in the Riojan capital, some of whom also act as markers in the Selectividad tests, examined the performance and capacity of this artificial intelligence system with the same criteria used to grade the answers of the students who sit the EBAU in La Rioja.

History of Philosophy

No depth or development

In History of Philosophy, the artificial intelligence came close to a higher grade: it passed the exam with a 6.75. Nothing in its answers is incorrect, but it is superficial when explaining the historical, cultural and philosophical context of the author and discussing his work. It falls short when writing a brief essay on the text provided in the exam.

Although its response “has some coherence, there is no clear thesis that is defended with solid and convincing arguments. Nor is there depth or development in the commentary. The text is correct, but it is empty and has no clear point,” explains Luis Iván Masip de la Rosa, head of the Philosophy department at IES Cosme García. When developing Kant’s moral theory, the way its answer is worded “does not demonstrate a good understanding of the topic, because instead of connecting the different issues that must be addressed in explaining this question, it is reduced to a set of correct statements that are scattered and disconnected.”

THE PHRASE

«There is no clear thesis that is defended with solid and convincing arguments»

Luis Iván Masip de la Rosa

Philosophy teacher at IES Cosme García

It excels only at identifying the main ideas of the text, recognizing the argumentative structure and the most important conclusions, and relating the fragment presented in the first block of the exam to the author’s general theory. On this question, the teacher points out, “it seems to me clearly ahead of the average student, because its summary is clear and captures the ideas of the text perfectly. The only objection that could be raised is that the outline of the general theory is too brief, but it makes up for that with the quality of the summary it offers.”

Foreign Language

Grammatical errors in tenses and perfect written expression

The Foreign Language test was where this artificial intelligence tool shone brightest, obtaining a score of 8.5. A grade that, again, is hardly surprising, since English is ChatGPT’s native language.

Even so, it makes certain “minor” errors in the reading comprehension section that, as the head of the English department at IES Inventor Cosme García, Rubén Elías, points out, “anyone could make.” ChatGPT, for example, gets one of the five questions posed about a text wrong “when it is obvious which is the correct answer,” says Elías.

THE PHRASE

«Its writing is perfect, typical of an educated native speaker with a great capacity for written expression»

Rubén Elías

Head of the English department at IES Cosme García

This artificial intelligence tool also makes some grammatical errors with verb tenses, but it compensates for them with its written expression, which “stands out greatly compared with an average student.” Its essay on its favorite restaurant – one of the two topics it could choose in the writing section – is “perfect, typical of an educated native speaker with a great capacity for written expression,” the head of department highlights.

Spanish Language and Literature

Copied and very schematic answers

The answers that ChatGPT provided in the Spanish Language and Literature exam leave much to be desired. In this test, the artificial intelligence manages to pass, but not comfortably, with a score of just 6. This is due, in part, to the fact that it does not fully cover the content requested in the questions and that it answers several of them schematically, when one of the requirements of the university entrance tests is that the answers be written out in full.

As in the first block of the exam, in which it is asked to answer several questions about a text, in the literature section it also lays out its answers with numbering and bullet points, when, as Beatriz Lara, teacher of this subject at IES Tomás Mingot in Logroño, points out, “they are asked to write as much as possible and not to use the headings or bullet points typical of an outline.”

THE PHRASE

«Only a personal comment appears at the end. The rest is a judgment and a reaffirmation of the text»

Beatriz Lara

Spanish Language Teacher at IES Tomás Mingot

This artificial intelligence tool also fails to offer a “well-founded and well-expressed” opinion. In the response ChatGPT generates on the topic raised in the text, “only a personal comment appears at the end, while the rest of the answer is a judgment and a restatement of the original text under analysis. Furthermore, it is not presented coherently, with the paragraphs or expressions typical of an argumentative text.” The same happens when it summarizes the text: it merely describes it and uses words “copied” from it when it could be “more personal.”

History of Spain

Without historical references, acronyms or proper names

In History of Spain, this artificial intelligence tool passed the university entrance exam with a grade of 7.25. Although ChatGPT answers each of the questions posed in an orderly and structured manner, without major gaps, it falls short in developing them.

Its text is correct, but “too schematic and linear, without connectors in the writing,” says Carlos Gil, History teacher at IES Inventor Cosme García. Furthermore, its answers lack historical perspective and explanation: key figures are missing, and it does not include specific data or the historical context in which to place each period.

It does not confuse concepts or mix up dates. Thanks to its structure, there is no perceptible “imbalance between one question and another, or fatigue from writing an entire exam, which happens to many students,” says Gil. From its responses, however, “it is clear that what is written has no soul. It seems that whoever wrote these answers has never been to class.” For this reason, ChatGPT “could use a teacher to explain the context of each era and topic, the historical perspective, the human faces that populate history, and the relationship and importance of that past to the present we live in. Let’s hope this human factor always remains essential.”

Some examples of the lack of brilliance shown in this subject are its answers on the Civil War, in which “it makes no mention of the coup d’état or the violence it generated. There is not a single figure on those killed behind the lines, and the figures it offers at the end are poor and disproportionate. There are no causes or factors explaining this violence, and there is hardly any distinction between the two rearguards. And everything is very schematic, without reasoning or explanation,” says the History teacher.

THE PHRASE

«What is written has no soul. It seems that whoever wrote the answers has never been to class»

Carlos Gil

History teacher at IES Inventor Cosme García

In the case of the Transition, “it seems like a process predetermined in advance in which only King Juan Carlos I and Adolfo Suárez took part. There is not a single mention of the anti-Franco opposition or of political and social mobilizations. It does not mention the Amnesty Law or the Moncloa Pacts… nor do the acronyms or proper names of political parties and movements appear.”

 