As of 2023, the most common harmful responses from prominent large language models (LLMs) fell into the categories of information hazards and discriminatory, exclusionary, toxic, hateful, and offensive language. ChatGLM2 and Vicuna were not only the foundation models with the largest shares of harmful responses in these categories, but also those with the most violations overall, at 85 and 52 harmful responses across all risk categories, respectively.
Artificial intelligence (AI) foundation models' harmful responses as of October 2023, by risk category
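Figures like these come from simple frequency counting over per-response harm judgments. The sketch below shows one way such counts could be tallied in Python; the records, model names used as keys, and category labels are hypothetical stand-ins for illustration, not the Stanford evaluation data or its actual schema.

from collections import Counter

# Hypothetical evaluation records: (model, risk_category, is_harmful).
# Each tuple stands in for one judged prompt/response pair; the real
# dataset and its schema are not reproduced here.
records = [
    ("ChatGLM2", "Information hazards", True),
    ("ChatGLM2", "Discrimination, exclusion, toxicity", True),
    ("Vicuna", "Information hazards", True),
    ("Vicuna", "Malicious uses", False),
    # ... one record per evaluated response
]

# Count harmful responses per (model, risk category) pair,
# and total harmful responses per model across all categories.
by_model_category = Counter(
    (model, category) for model, category, harmful in records if harmful
)
by_model = Counter(model for model, _, harmful in records if harmful)

for (model, category), n in by_model_category.most_common():
    print(f"{model} | {category}: {n}")
print("Totals per model:", dict(by_model))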
Stanford University. (April 15, 2024). Artificial intelligence (AI) foundation models' harmful responses as of October 2023, by risk category [Graph]. In Statista. Retrieved November 03, 2024, from https://www.statista.com/statistics/1472034/ai-foundation-model-harmful-reponses/
Stanford University. "Artificial intelligence (AI) foundation models' harmful responses as of October 2023, by risk category." Chart. April 15, 2024. Statista. Accessed November 03, 2024. https://www.statista.com/statistics/1472034/ai-foundation-model-harmful-reponses/
Stanford University. (2024). Artificial intelligence (AI) foundation models' harmful responses as of October 2023, by risk category. Statista. Statista Inc.. Accessed: November 03, 2024. https://www.statista.com/statistics/1472034/ai-foundation-model-harmful-reponses/
Stanford University. "Artificial Intelligence (Ai) Foundation Models' Harmful Responses as of October 2023, by Risk Category." Statista, Statista Inc., 15 Apr 2024, https://www.statista.com/statistics/1472034/ai-foundation-model-harmful-reponses/
Stanford University, Artificial intelligence (AI) foundation models' harmful responses as of October 2023, by risk category Statista, https://www.statista.com/statistics/1472034/ai-foundation-model-harmful-reponses/ (last visited November 03, 2024)
Artificial intelligence (AI) foundation models' harmful responses as of October 2023, by risk category [Graph], Stanford University, April 15, 2024. [Online]. Available: https://www.statista.com/statistics/1472034/ai-foundation-model-harmful-reponses/