Giada Pistilli - Profile on Academia.edu
Papers by Giada Pistilli
arXiv (Cornell University), Feb 29, 2024
With the upcoming AI regulations (e.g., the EU AI Act) and rapid advancements in generative AI, new challenges emerge in the area of Human-Centered Responsible Artificial Intelligence (HCR-AI). As AI becomes more ubiquitous, questions around decision-making authority, human oversight, accountability, sustainability, and the ethical and legal responsibilities of AI systems and their creators become paramount. Addressing these questions requires a collaborative approach. By involving stakeholders from various disciplines in the 2nd edition of the HCR-AI Special Interest Group (SIG) at CHI 2024, we aim to discuss the implications of regulations in HCI research and to develop new theories, evaluation frameworks, and methods to navigate the complex nature of AI ethics, steering AI development in a direction that is beneficial and sustainable for all of humanity.
The Moral Landscape of General-Purpose Large Language Models
Chapman and Hall/CRC eBooks, Jan 25, 2024
This PDF is a simplified version of the original article published in Internet Archaeology under the terms of the Creative Commons Attribution 3.0 (CC BY) Unported licence. Enlarged images, models, visualisations, etc. which support this publication can be found in the original version online. All links also go to the online original.
arXiv (Cornell University), May 22, 2024
This paper introduces the "CIVICS: Culturally-Informed & Values-Inclusive Corpus for Societal impacts" dataset, designed to evaluate the social and cultural variation of Large Language Models (LLMs) across multiple languages and value-sensitive topics. We create a hand-crafted, multilingual dataset of value-laden prompts which address specific socially sensitive topics, including LGBTQI rights, social welfare, immigration, disability rights, and surrogacy. CIVICS is designed to generate responses showing LLMs' encoded and implicit values. Through our dynamic annotation processes, tailored prompt design, and experiments, we investigate how open-weight LLMs respond to value-sensitive issues, exploring their behavior across diverse linguistic and cultural contexts. Using two experimental set-ups based on log-probabilities and long-form responses, we show social and cultural variability across different LLMs. Specifically, experiments involving long-form responses demonstrate that refusals are triggered disparately across models, but consistently and more frequently in English or translated statements. Moreover, specific topics and sources lead to more pronounced differences across model answers, particularly on immigration, LGBTQI rights, and social welfare. As shown by our experiments, the CIVICS dataset aims to serve as a tool for future research, promoting reproducibility and transparency across broader linguistic settings, and furthering the development of AI technologies that respect and reflect global cultural diversities and value pluralism. The CIVICS dataset and tools will be made available upon publication under open licenses; an anonymized version is currently available at .
The growing need for accountability of the people behind AI systems can be addressed by leveraging processes in three fields of study: ethics, law, and computer science. While these fields are often considered in isolation, they rely on complementary notions in their interpretation and implementation. In this work, we detail this interdependence and motivate the necessary role of collaborative governance tools in shaping a positive evolution of AI. We first contrast notions of compliance in the ethical, legal, and technical fields; we outline both their differences and where they complement each other, with a particular focus on the roles of ethical charters, licenses, and technical documentation in these interactions. We then focus on the role of values in articulating the synergies between the fields and outline specific mechanisms of interaction between them in practice. We identify how these mechanisms have played out in several open governance fora: an open collaborative workshop, a responsible licensing initiative, and a proposed regulatory framework. By leveraging complementary notions of compliance in these three domains, we can create a more comprehensive framework for governing AI systems that jointly takes into account their technical capabilities, their impact on society, and how technical specifications can inform relevant regulations. Our analysis thus underlines the necessity of joint consideration of the ethical, legal, and technical in AI ethics frameworks to be used on a larger scale to govern AI systems and how the thinking in each of these areas can inform the others.
arXiv (Cornell University), Mar 7, 2023
As language models grow ever larger, the need for large-scale high-quality text datasets has never been more pressing, especially in multilingual settings. The BigScience workshop, a 1-year international and multidisciplinary initiative, was formed with the goal of researching and training large language models as a values-driven undertaking, putting issues of ethics, harm, and governance in the foreground. This paper documents the data creation and curation efforts undertaken by BigScience to assemble the Responsible Open-science Open-collaboration Text Sources (ROOTS) corpus, a 1.6TB dataset spanning 59 languages that was used to train the 176-billion-parameter BigScience Large Open-science Open-access Multilingual (BLOOM) (BigScience Workshop, 2022) language model. We further release a large initial subset of the corpus and analyses thereof, and hope to empower large-scale monolingual and multilingual modeling projects with both the data and the processing tools, as well as stimulate research around this large multilingual corpus. 36th Conference on Neural Information Processing Systems (NeurIPS 2022) Track on Datasets and Benchmarks.
arXiv (Cornell University), Nov 9, 2022
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2024
As mental health care systems worldwide struggle to meet demand, there is increasing focus on using language models (LM) to infer neuropsychiatric conditions or psychopathological traits from language production. Yet, so far, this research has only delivered solutions with limited clinical applicability, due to insufficient consideration of ethical questions crucial to ensuring the synergy between possible applications and model design. To accelerate progress towards clinically applicable models, our paper charts the ethical landscape of research on language-based inference of psychopathology and provides a practical tool for researchers to navigate it. We identify seven core ethical principles that should guide model development and deployment in this domain, translate them into ELLIPS, an ethical toolkit operationalizing these principles into questions that can guide researchers' choices with respect to data selection, architectures, evaluation, and model deployment, and provide a case study exemplifying its use. With this, we aim to facilitate the emergence of model technology with concrete potential for real-world applicability.
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2024
This paper introduces the "CIVICS: Culturally-Informed & Values-Inclusive Corpus for Societal impacts" dataset, designed to evaluate the social and cultural variation of Large Language Models (LLMs) across multiple languages and value-sensitive topics. We create a hand-crafted, multilingual dataset of value-laden prompts which address specific socially sensitive topics, including LGBTQI rights, social welfare, immigration, disability rights, and surrogacy. CIVICS is designed to generate responses showing LLMs' encoded and implicit values. Through our dynamic annotation processes, tailored prompt design, and experiments, we investigate how open-weight LLMs respond to value-sensitive issues, exploring their behavior across diverse linguistic and cultural contexts. Using two experimental set-ups based on log-probabilities and long-form responses, we show social and cultural variability across different LLMs. Specifically, experiments involving long-form responses demonstrate that refusals are triggered disparately across models, but consistently and more frequently in English or translated statements. Moreover, specific topics and sources lead to more pronounced differences across model answers, particularly on immigration, LGBTQI rights, and social welfare. As shown by our experiments, the CIVICS dataset aims to serve as a tool for future research, promoting reproducibility and transparency across broader linguistic settings, and furthering the development of AI technologies that respect and reflect global cultural diversities and value pluralism. The CIVICS dataset and tools will be made available upon publication under open licenses; an anonymized version is currently available at this https URL.
Internet Archaeology, 2024
Artificial Intelligence (AI) is not a recent development. However, with increasing computational capabilities, AI has developed into Natural Language Processing and Machine Learning, technologies particularly good at detecting correlations and patterns, and categorising, predicting, or extracting information. Within archaeology, AI can process big data accumulated over decades of research and deposited in archives. By combining these capabilities, AI offers new insights and exciting opportunities to create knowledge from archaeological archives for contemporary and future research. However, the ethical implications and human costs are not yet fully understood. Therefore, we question whether AI in archaeology is a blessing or a curse.
FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023
The growing need for accountability of the people behind AI systems can be addressed by leveraging processes in three fields of study: ethics, law, and computer science. While these fields are often considered in isolation, they rely on complementary notions in their interpretation and implementation. In this work, we detail this interdependence and motivate the necessary role of collaborative governance tools in shaping a positive evolution of AI. We first contrast notions of compliance in the ethical, legal, and technical fields; we outline both their differences and where they complement each other, with a particular focus on the roles of ethical charters, licenses, and technical documentation in these interactions. We then focus on the role of values in articulating the synergies between the fields and outline specific mechanisms of interaction between them in practice. We identify how these mechanisms have played out in several open governance fora: an open collaborative workshop, a responsible licensing initiative, and a proposed regulatory framework. By leveraging complementary notions of compliance in these three domains, we can create a more comprehensive framework for governing AI systems that jointly takes into account their technical capabilities, their impact on society, and how technical specifications can inform relevant regulations. Our analysis thus underlines the necessity of joint consideration of the ethical, legal, and technical in AI ethics frameworks to be used on a larger scale to govern AI systems and how the thinking in each of these areas can inform the others.
This paper opens the philosophical debate around the notion of Artificial General Intelligence (AGI) and its application in Large Language Models (LLMs). Through the lens of moral philosophy, the paper raises questions about these AI systems' capabilities and goals, the treatment of humans behind them, and the risk of perpetuating a monoculture through language.
The alignment problem in the context of large language models must consider the plurality of human values in our world. Whilst there are many resonant and overlapping values amongst the world's cultures, there are also many conflicting, yet equally valid, values. It is important to observe which cultural values a model exhibits, particularly when there is a value conflict between input prompts and generated outputs. We discuss how the co-creation of language and cultural value impacts large language models (LLMs). We explore the constitution of the training data for GPT-3 and compare that to the world's language and internet access demographics, as well as to reported statistical profiles of dominant values in some nation-states. We stress-tested GPT-3 with a range of value-rich texts representing several languages and nations, including some with values orthogonal to dominant US public opinion as reported by the World Values Survey. We observed when values embedded in the input text were mutated in the generated outputs and noted when these conflicting values were more aligned with reported dominant US values. Our discussion of these results uses a moral value pluralism (MVP) lens to better understand these value mutations. Finally, we provide recommendations for how our work may contribute to other current work in the field.
Ethics must become a practice rather than something fixed: a daily exercise in transforming the new digital era.
In this article, I explain how to shape the transformations generated by the technological society we want to build.
"Humans are morally responsible for their actions, and it is impossible to escape this moral dimension." This is how the German Ethics Commission, which includes the rapporteur of the GDPR (General Data Protection Regulation), introduced its recommendations, released in October 2019.
A detailed analysis of the political thought of the Gramscian Chantal Mouffe, the Belgian philosopher and political scientist and co-author of the book "Egemonia e strategia socialista", on the radical centralism of Italian and European political movements.
For roughly the past forty years, the most widely accepted solution to ecological problems has been sustainable development. It has won broad support from citizens, businesses, and policymakers: it is a way of reconciling development and social progress with the protection of the environment. In recent years, however, sustainable development has often drawn rather virulent criticism, particularly among ecologists who, subscribing to the concept of degrowth, reject the idea of sustainable development.
Since the success of the "artificial intelligence (AI)" phenomenon, many companies have entered the technology race to offer an AI service for sale on the market, in France and internationally. However, some software and technology vendors seek to capitalize on the media hype by exaggerating their artificial intelligence capabilities in order to attract your attention and drive sales.
Working through the philosophical definitions of nature, technique, and technology, we will ask the following questions: does technique truly belong to the human world, and is it natural? Does it constitute a characteristic of human nature?
What the author illustrates is that already in the bourgeois and popular revolutionary wave of 1848, the centrality of labor in class conflicts had clearly emerged: we are faced with the "right to power" for the bourgeoisie and the "right to work" for the proletariat. Conservative liberals and reformist republicans thus find themselves united in the management of power; the new slaves are left with only the possibility of improving their conditions of survival in the name of the right to assembly-line work. The forced labor of the proletariat thus becomes the new religion of capital.
arXiv (Cornell University), Feb 29, 2024
With the upcoming AI regulations (e.g., EU AI Act) and rapid advancements in generative AI, new c... more With the upcoming AI regulations (e.g., EU AI Act) and rapid advancements in generative AI, new challenges emerge in the area of Human-Centered Responsible Artificial Intelligence (HCR-AI). As AI becomes more ubiquitous, questions around decision-making authority, human oversight, accountability, sustainability, and the ethical and legal responsibilities of AI and their creators become paramount. Addressing these questions requires a collaborative approach. By involving stakeholders from various disciplines in the 2 nd edition of the HCR-AI Special Interest Group (SIG) at CHI 2024, we aim to discuss the implications of regulations in HCI research, develop new theories, evaluation frameworks, and methods to navigate the complex nature of AI ethics, steering AI development in a direction that is beneficial and sustainable for all of humanity.
The Moral Landscape of General-Purpose Large Language Models
Chapman and Hall/CRC eBooks, Jan 25, 2024
This PDF is a simplified version of the original article published in Internet Archaeology under ... more This PDF is a simplified version of the original article published in Internet Archaeology under the terms of the Creative Commons Attribution 3.0 (CC BY) Unported licence. Enlarged images, models, visualisations etc which support this publication can be found in the original version online. All links also go to the online original.
arXiv (Cornell University), May 22, 2024
This paper introduces the "CIVICS: Culturally-Informed & Values-Inclusive Corpus for Societal imp... more This paper introduces the "CIVICS: Culturally-Informed & Values-Inclusive Corpus for Societal impacts" dataset, designed to evaluate the social and cultural variation of Large Language Models (LLMs) across multiple languages and value-sensitive topics. We create a hand-crafted, multilingual dataset of value-laden prompts which address specific socially sensitive topics, including LGBTQI rights, social welfare, immigration, disability rights, and surrogacy. CIVICS is designed to generate responses showing LLMs' encoded and implicit values. Through our dynamic annotation processes, tailored prompt design, and experiments, we investigate how open-weight LLMs respond to value-sensitive issues, exploring their behavior across diverse linguistic and cultural contexts. Using two experimental set-ups based on log-probabilities and long-form responses, we show social and cultural variability across different LLMs. Specifically, experiments involving long-form responses demonstrate that refusals are triggered disparately across models, but consistently and more frequently in English or translated statements. Moreover, specific topics and sources lead to more pronounced differences across model answers, particularly on immigration, LGBTQI rights, and social welfare. As shown by our experiments, the CIVICS dataset aims to serve as a tool for future research, promoting reproducibility and transparency across broader linguistic settings, and furthering the development of AI technologies that respect and reflect global cultural diversities and value pluralism. The CIVICS dataset and tools will be made available upon publication under open licenses; an anonymized version is currently available at .
The growing need for accountability of the people behind AI systems can be addressed by leveragin... more The growing need for accountability of the people behind AI systems can be addressed by leveraging processes in three fields of study: ethics, law, and computer science. While these fields are often considered in isolation, they rely on complementary notions in their interpretation and implementation. In this work, we detail this interdependence and motivate the necessary role of collaborative governance tools in shaping a positive evolution of AI. We first contrast notions of compliance in the ethical, legal, and technical fields; we outline both their differences and where they complement each other, with a particular focus on the roles of ethical charters, licenses, and technical documentation in these interactions. We then focus on the role of values in articulating the synergies between the fields and outline specific mechanisms of interaction between them in practice. We identify how these mechanisms have played out in several open governance fora: an open collaborative workshop, a responsible licensing initiative, and a proposed regulatory framework. By leveraging complementary notions of compliance in these three domains, we can create a more comprehensive framework for governing AI systems that jointly takes into account their technical capabilities, their impact on society, and how technical specifications can inform relevant regulations. Our analysis thus underlines the necessity of joint consideration of the ethical, legal, and technical in AI ethics frameworks to be used on a larger scale to govern AI systems and how the thinking in each of these areas can inform the others.
arXiv (Cornell University), Mar 7, 2023
As language models grow ever larger, the need for large-scale high-quality text datasets has neve... more As language models grow ever larger, the need for large-scale high-quality text datasets has never been more pressing, especially in multilingual settings. The BigScience workshop, a 1-year international and multidisciplinary initiative, was formed with the goal of researching and training large language models as a values-driven undertaking, putting issues of ethics, harm, and governance in the foreground. This paper documents the data creation and curation efforts undertaken by BigScience to assemble the Responsible Open-science Open-collaboration Text Sources (ROOTS) corpus, a 1.6TB dataset spanning 59 languages that was used to train the 176-billion-parameter BigScience Large Open-science Open-access Multilingual (BLOOM)(BigScience Workshop, 2022) language model. We further release a large initial subset of the corpus and analyses thereof, and hope to empower large-scale monolingual and multilingual modeling projects with both the data and the processing tools, as well as stimulate research around this large multilingual corpus. 36th Conference on Neural Information Processing Systems (NeurIPS 2022) Track on Datasets and Benchmarks.
arXiv (Cornell University), Nov 9, 2022
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demon... more Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2024
As mental health care systems worldwide struggle to meet demand, there is increasing focus on usi... more As mental health care systems worldwide struggle to meet demand, there is increasing focus on using language models (LM) to infer neuropsychiatric conditions or psychopathological traits from language production. Yet, so far, this research has only delivered solutions with limited clinical applicability, due to insufficient consideration of ethical questions crucial to ensuring the synergy between possible applications and model design. To accelerate progress towards clinically applicable models, our paper charts the ethical landscape of research on language-based inference of psychopathology and provides a practical tool for researchers to navigate it. We identify seven core ethical principles that should guide model development and deployment in this domain, translate them into ELLIPS, an ethical toolkit operationalizing these principles into questions that can guide researchers' choices with respect to data selection, architectures, evaluation, and model deployment, and provide a case study exemplifying its use. With this, we aim to facilitate the emergence of model technology with concrete potential for real-world applicability.
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2024
This paper introduces the "CIVICS: Culturally-Informed & Values-Inclusive Corpus for Societal imp... more This paper introduces the "CIVICS: Culturally-Informed & Values-Inclusive Corpus for Societal impacts" dataset, designed to evaluate the social and cultural variation of Large Language Models (LLMs) across multiple languages and value-sensitive topics. We create a hand-crafted, multilingual dataset of value-laden prompts which address specific socially sensitive topics, including LGBTQI rights, social welfare, immigration, disability rights, and surrogacy. CIVICS is designed to generate responses showing LLMs' encoded and implicit values. Through our dynamic annotation processes, tailored prompt design, and experiments, we investigate how open-weight LLMs respond to value-sensitive issues, exploring their behavior across diverse linguistic and cultural contexts. Using two experimental set-ups based on log-probabilities and long-form responses, we show social and cultural variability across different LLMs. Specifically, experiments involving long-form responses demonstrate that refusals are triggered disparately across models, but consistently and more frequently in English or translated statements. Moreover, specific topics and sources lead to more pronounced differences across model answers, particularly on immigration, LGBTQI rights, and social welfare. As shown by our experiments, the CIVICS dataset aims to serve as a tool for future research, promoting reproducibility and transparency across broader linguistic settings, and furthering the development of AI technologies that respect and reflect global cultural diversities and value pluralism. The CIVICS dataset and tools will be made available upon publication under open licenses; an anonymized version is currently available at this https URL.
Internet Archaeology, 2024
Artificial Intelligence (AI) is not a recent development. However, with increasing computational ... more Artificial Intelligence (AI) is not a recent development. However, with increasing computational capabilities, AI has developed into Natural Language Processing and Machine Learning, technologies particularly good at detecting correlations and patterns, and categorising, predicting, or extracting information. Within archaeology, AI can process big data accumulated over decades of research and deposited in archives. By combining these capabilities, AI offers new insights and exciting opportunities to create knowledge from archaeological archives for contemporary and future research. However, the ethical implications and human costs are not yet fully understood. Therefore, we question whether AI in archaeology is a blessing or a curse.
FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023
The growing need for accountability of the people behind AI systems can be addressed by leveragin... more The growing need for accountability of the people behind AI systems can be addressed by leveraging processes in three fields of study: ethics, law, and computer science. While these fields are often considered in isolation, they rely on complementary notions in their interpretation and implementation. In this work, we detail this interdependence and motivate the necessary role of collaborative governance tools in shaping a positive evolution of AI. We first contrast notions of compliance in the ethical, legal, and technical fields; we outline both their differences and where they complement each other, with a particular focus on the roles of ethical charters, licenses, and technical documentation in these interactions. We then focus on the role of values in articulating the synergies between the fields and outline specific mechanisms of interaction between them in practice. We identify how these mechanisms have played out in several open governance fora: an open collaborative workshop, a responsible licensing initiative, and a proposed regulatory framework. By leveraging complementary notions of compliance in these three domains, we can create a more comprehensive framework for governing AI systems that jointly takes into account their technical capabilities, their impact on society, and how technical specifications can inform relevant regulations. Our analysis thus underlines the necessity of joint consideration of the ethical, legal, and technical in AI ethics frameworks to be used on a larger scale to govern AI systems and how the thinking in each of these areas can inform the others.
This paper opens the philosophical debate around the notion of Artificial General Intelligence (AGI) and its application in Large Language Models (LLMs). Through the lens of moral philosophy, the paper raises questions about these AI systems' capabilities and goals, the treatment of humans behind them, and the risk of perpetuating a monoculture through language.
The alignment problem in the context of large language models must consider the plurality of human values in our world. Whilst there are many resonant and overlapping values amongst the world's cultures, there are also many conflicting, yet equally valid, values. It is important to observe which cultural values a model exhibits, particularly when there is a value conflict between input prompts and generated outputs. We discuss how the co-creation of language and cultural value impacts large language models (LLMs). We explore the constitution of the training data for GPT-3 and compare that to the world's language and internet access demographics, as well as to reported statistical profiles of dominant values in some Nation-states. We stress-tested GPT-3 with a range of value-rich texts representing several languages and nations, including some with values orthogonal to dominant US public opinion as reported by the World Values Survey. We observed when values embedded in the input text were mutated in the generated outputs and noted when these conflicting values were more aligned with reported dominant US values. Our discussion of these results uses a moral value pluralism (MVP) lens to better understand these value mutations. Finally, we provide recommendations for how our work may contribute to other current work in the field.
Ethics must become a practice rather than something fixed: a daily exercise in transforming the new digital era.
In this article, I explain how to shape the transformations brought about by the technological society we want to build.
"Humans are morally responsible for their actions, and it is impossible to escape this moral dimension." This is how the German Ethics Commission, which includes the rapporteur of the GDPR (General Data Protection Regulation), introduced its recommendations, published in October 2019.
A detailed analysis of the political thought of the Gramscian Chantal Mouffe, the Belgian philosopher and political scientist and co-author of "Egemonia e strategia socialista" (Hegemony and Socialist Strategy), on the radical centralism of Italian and European political movements.
For roughly the past forty years, the most widely accepted solution to ecological problems has been sustainable development. It has won broad consensus among citizens, businesses, and policymakers: it is a way of reconciling development and social progress with environmental protection. In recent years, however, sustainable development has often faced rather virulent criticism, particularly from ecologists who embrace the concept of degrowth and reject the very idea of sustainable development.
Since the rise of the "artificial intelligence (AI)" phenomenon, many companies have joined the technological race to offer an AI service for sale on the market, in France and internationally. However, some software and technology vendors seek to capitalize on the media hype by exaggerating their artificial intelligence capabilities in order to attract your attention and drive sales.
Working through the philosophical definitions of nature, technique, and technology, we will ask the following questions: does technique truly belong to the human world, and is it natural? Does it constitute a characteristic of human nature?
What the author shows is that already in the bourgeois and popular revolutionary wave of 1848, the centrality of labor in class conflicts had clearly emerged: the "right to power" for the bourgeoisie faced the "right to work" for the proletariat. Conservative liberals and reformist republicans thus found themselves united in the management of power, while the new slaves were left with only one possibility, that of improving their conditions of survival in the name of the right to assembly-line work. The forced labor of the proletariat thus became the new religion of capital.
Sorbonne Université, 2024
This research aims to probe the ethical intricacies of conversational Artificial Intelligence (AI), specifically focusing on Large Language Models and conversational agents. This manuscript constructs a framework that melds empirical analysis with philosophical discourse. We aim to urgently advocate for a well-founded ethical structure for conversational AI, highlighting the necessity to involve all stakeholders, from developers to end-users. Firstly, we champion the integration of engineering and other scientific disciplines with philosophy, facilitating a more nuanced understanding of the ethical dimensions underpinning AI. This collaborative approach allows for a richer, more informed ethical discourse. Secondly, we advocate for the dynamic use of applied ethical frameworks as foundational guides for setting the initial objectives of an AI system. These frameworks serve as evolving tools that adapt to the ethical complexities encountered during development and deployment. Lastly, grounded in hands-on, interdisciplinary research, we argue for the prioritization of narrow, task-specific AI over Artificial General Intelligence, a stance based on the enhanced feasibility of ethical oversight and technical controllability. With this research, we aim to contribute to the literature on AI ethics, enriching the academic discourse in both philosophy and computer science.