Debunking Assumptions about AI Behavior: Evidence from 64,000 Conversations with Large Language Models

Wednesday, 04 September 2024, 15:00-16:15

Room: Zoom

Presenter: Stamatogiannakis Antonios, IE Business School / IE University, Madrid, Spain

The development of Artificial Intelligence (AI) for applications such as assisting or replacing humans rests on three fundamental assumptions: Human-likeness - AI can improve on, or at least emulate, human performance; Monotonic improvement - more advanced AI performs better than less advanced AI; and Unidimensionality - models within the same family differ along a single dimension, namely how advanced they are. Two initial studies show that both lay users and programmers hold these assumptions. Nine pre-registered experiments empirically test and, in some cases, challenge these assumptions using large language models' (LLMs) responses. These experiments examined six well-established human biases and heuristics across 64,000 conversations with LLMs (GPT-3.5-turbo and GPT-4). Although in a few cases the LLMs overcame human biases, in most cases they were systematically biased. Most interestingly, we also find evidence against the three assumptions above, including (a) reverse biases, (b) the less advanced (and now discontinued) model outperforming the more advanced one, and (c) differences between the models that are hard to explain by model advancement alone. Given the prevalence of these assumptions, we discuss the implications for a more fruitful study, use, and regulation of AI.


Zoom Link: https://uoc-gr.zoom.us/j/88659969718?pwd=g6bjYPDCuUQo1bzVxjjbgQL4xFN1f3.1

Department of Economics (Τμήμα Οικονομικών Επιστημών)