A structural approach to the social representations of AI. The robot: humanizing technology?


Published: Dec 28, 2025
Keywords:
artificial intelligence, social representations, social construction of technology, scientific imaginary, robots
Patricia Gerakopoulou
Nikolas Christakis
Abstract

Strongly tied to a “technological mythology” of Promethean promises and threats, Artificial Intelligence draws increasingly impressive attention from the media and the lay public alike. This paper investigates the content and structure of the social representations of AI held by undergraduate students from all six Media and Communication university departments in Greece (N=249), drawing on original qualitative data collected with a “free association” questionnaire consisting of two open questions. In the first question, participants were asked to write the three to five words that first come to mind when they think of the term “Artificial Intelligence”; in the second, they were asked to describe each of their previous answers further. The data fell under six major thematic categories: Technology, Future, Threats, Uses, Robot, and Human (characteristics). These categories were then analyzed by frequency and rank to produce the “square of the AI social representation”, which consists of the central system (Technology & Future), the peripheral system (Threats & Uses), and a “grey area” (Robot & Human). The interpretation and discussion of the results lead to the main conclusion that the Robot element embodies the blurring of the boundaries between the Human and AI (the latter far superior in “intelligence”, e.g. data processing), attributing more familiar “human” characteristics to an otherwise vague and ambiguously perceived (Threats and Uses) technological object.
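The frequency-and-rank analysis described in the abstract is commonly operationalized as a Vergès-style prototypical analysis: terms that are both frequent and evoked early form the central system, while the remaining quadrants form the periphery. A minimal sketch of that cross-tabulation follows; the thresholds, quadrant names, and function signature are illustrative assumptions, not the authors' exact procedure.

```python
from collections import defaultdict

def prototypical_analysis(responses, freq_threshold, rank_threshold):
    """Classify free-association terms into the four quadrants of a
    Verges-style prototypical analysis, crossing evocation frequency
    with mean evocation rank.

    NOTE: illustrative sketch only; cutoffs and quadrant labels are
    assumptions, not the study's exact method.

    responses: list of lists of terms, each inner list ordered by
    evocation rank (first term evoked = rank 1).
    """
    freq = defaultdict(int)       # how many participants evoked the term
    rank_sum = defaultdict(int)   # sum of evocation ranks, for the mean
    for terms in responses:
        for rank, term in enumerate(terms, start=1):
            freq[term] += 1
            rank_sum[term] += rank

    quadrants = {"central": [], "first_periphery": [],
                 "contrast_zone": [], "second_periphery": []}
    for term, f in freq.items():
        mean_rank = rank_sum[term] / f
        if f >= freq_threshold and mean_rank <= rank_threshold:
            quadrants["central"].append(term)          # frequent, evoked early
        elif f >= freq_threshold:
            quadrants["first_periphery"].append(term)  # frequent, evoked late
        elif mean_rank <= rank_threshold:
            quadrants["contrast_zone"].append(term)    # rare, evoked early
        else:
            quadrants["second_periphery"].append(term) # rare, evoked late
    return quadrants
```

With such a cross-tabulation, hypothetical responses dominated by “technology” and “future” would place those terms in the central quadrant, matching the structure the abstract reports.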
