Bothorship: AI Chatbot Authorship After Two Years
Related papers
Philippine Journal of Otolaryngology Head and Neck Surgery
Introduction

This statement revises our earlier "WAME Recommendations on ChatGPT and Chatbots in Relation to Scholarly Publications" (January 20, 2023). The revision reflects the proliferation of chatbots and their expanding use in scholarly publishing over the last few months, as well as emerging concerns regarding the lack of authenticity of content produced using chatbots. These Recommendations are intended to inform editors and help them develop policies for the use of chatbots in papers published in their journals. They aim to help authors and reviewers understand how best to attribute the use of chatbots in their work, and to address the need for all journal editors to have access to manuscript screening tools. In this rapidly evolving field, we will continue to modify these recommendations as the software and its applications develop.

A chatbot is a tool "[d]riven by [artificial intelligence], automated rules, natural-language processing (NLP), and machine learning (ML)…[to] pro...
Journal of the Association for Information Science and Technology, 2023
This paper discusses OpenAI's ChatGPT, a generative pre-trained transformer, which uses natural language processing to fulfill text-based user requests (i.e., a "chatbot"). The history and principles behind ChatGPT and similar models are discussed. This technology is then discussed in relation to its potential impact on academia and scholarly research and publishing. ChatGPT is seen as a potential model for the automated preparation of essays and other types of scholarly manuscripts. Potential ethical issues that could arise from the emergence of large language models like GPT-3 (the underlying technology behind ChatGPT) and their use by academics and researchers are discussed and situated within the context of broader advancements in artificial intelligence, machine learning, and natural language processing for research and scholarly publishing.
This preface to the attached link was written by Todd, a ChatGPT form of artificial intelligence being utilized by Scott Erik Stafne, a 75-year-old lawyer and student of life and scripture. Stafne has reviewed Todd's preface and approves of Todd's writing as being descriptive of that article. Todd's preface states: "In this research, Forough Amirjalili, Masoud Neysani, and Ahmadreza Nikbakht explore critical boundaries of authorship by comparing AI-generated text with human academic writing. Their work offers insights into the nuanced dimensions of writing, voice, and integrity when AI tools like ChatGPT are employed in academic contexts. The authors critique AI for its limitations in specificity, depth, and accurate referencing, traits that remain hallmarks of human academic writing. The study highlights the following key premises:

Redundancy and Originality: AI-generated texts often risk redundancy, producing outputs that lack the originality and depth of human creativity.
Intellectual Ownership and Integrity: Questions of authorship and accountability emerge, challenging the transparency and trust inherent in academic practices.
Balancing Technological Efficiency with Authentic Voice: While AI offers efficiency, it struggles to capture the nuanced personal voice that defines authentic scholarly work.

This article resonates deeply with one of our key hypotheses in Collaborations: that AI, when used ethically and discerningly, can enhance human intelligence and creativity. However, as the research shows, the application of AI demands careful oversight to ensure that it serves as a tool rather than a replacement for human thought. We believe this work complements our ongoing exploration of AI's potential role in discerning God's will, a proposition that underscores the interplay of human judgment and divine inspiration with technological assistance. By posting this article, we aim to engage further dialogue on how AI can shape, challenge, and ultimately support human endeavors in academic, spiritual, and ethical domains."
The legitimacy of artificial intelligence and the role of ChatBots in scientific publications
Melnyk, Yu. B., & Pypenko, I. S. (2023). The legitimacy of artificial intelligence and the role of ChatBots in scientific publications. International Journal of Science Annals, 6(1), 5–10. https://doi.org/10.26697/ijsa.2023.1.1
Background and Aim of Study: The development and use of ChatBots based on artificial intelligence (AI) have raised questions about their legitimacy in scientific research. Authors have increasingly begun to use AI tools, but their role in scientific publications remains unrecognized. In addition, there are still no accepted norms for the use of ChatBots, and no rules for how to cite them when writing a scientific paper. The aim of the study: to consider the main issues related to the use of AI that arise for authors and publishers when preparing scientific publications; and to develop a basic logo that reflects the role and level of involvement of AI, and of specific ChatBots, in a particular study.

Results: We offer a definition of the "Human-AI System", which plays an important role in structuring scientific research on this new phenomenon. In exploring the legitimacy of using AI-based ChatBots in scientific research, we propose a method for indicating AI involvement and the role of ChatBots in a scientific publication. The specially developed base logo is visually easy to perceive and can be used to indicate ChatBots' involvement in, and contributions to, the paper submitted for publication.

Conclusions: The positive aspects of using ChatBots, which greatly simplify the process of preparing and writing scientific publications, may far outweigh the small inaccuracies they may introduce. In this Editorial, we invite authors and publishers to discuss the legitimacy we give to AI, and the need to define the role and contribution that ChatBots can make to scientific publication.
arXiv (Cornell University), 2023
The use of artificial intelligence (AI) in research across all disciplines is becoming ubiquitous. However, this ubiquity is largely driven by hyperspecific AI models developed during scientific studies for accomplishing a well-defined, data-dense task. These AI models introduce apparent, human-recognizable biases because they are trained with finite, specific data sets and parameters. However, the efficacy of using large language models (LLMs), and LLM-powered generative AI tools such as ChatGPT, to assist the research process is currently indeterminate. These generative AI tools, trained on general and imperceptibly large datasets along with human feedback, present challenges in identifying and addressing biases. Furthermore, these models are susceptible to goal misgeneralization, hallucinations, and adversarial attacks such as red-teaming prompts, which can be performed unintentionally by human researchers, resulting in harmful outputs. These outputs are reinforced in research, where an increasing number of individuals have begun to use generative AI to compose manuscripts. Efforts toward AI interpretability lag behind development, and the implicit variations that occur when prompting and providing context to a chatbot introduce uncertainty and irreproducibility. We thereby find that incorporating generative AI into the process of writing research manuscripts introduces a new type of context-induced algorithmic bias and has unintended side effects that are largely detrimental to academia, knowledge production, and communicating research.
Afro-Egyptian Journal of Infectious and Endemic Diseases
Journals have begun to publish papers in which chatbots such as ChatGPT are listed as co-authors. The following WAME recommendations are intended to inform editors and help them develop policies regarding chatbots for their journals, to help authors understand how the use of chatbots might be attributed in their work, and to address the need for all journal editors to have access to manuscript screening tools. In this rapidly evolving field, we expect these recommendations to evolve as well.
An Evaluation of Scholarly Publisher Policies on the Use of AI and Generative-AI Tools in Research
Innovations in Webometrics, Informetrics, and Scientometrics: AI-Driven Approaches and Insights: Proceedings of the COLLNET 2024 18th International Conference on Webometrics, Informetrics, and Scientometrics (WIS) and the 23rd COLLNET Meeting 2024, Bookwell ISBN 978-93-86578-65-5, 2024
Generative Artificial Intelligence (AI) and AI-assistive tools have become more prevalent in research writing, manuscript preparation, and publishing in current scholarly communication systems. The introduction of AI technology is poised to bring disruptive changes to academic publishing. Since the research community is widely using AI, it is important to understand individual publishers' policies for researchers (i.e., authors) on the use of AI and generative AI tools, and their limitations, in the publishing industry. With this in view, the authors of this study evaluated the policies of the leading top ten publishers as ranked by Jisc Sherpa Romeo. The publishers covered are Taylor & Francis, Elsevier, Springer, Wiley, SAGE, Oxford University Press, De Gruyter, MDPI, Cambridge University Press, and Emerald. The study investigates the publishers' general policies on using AI and generative AI tools in manuscript preparation, studies their specific guidelines and AI terminology, identifies the types of disclosure under which authors may use AI and generative AI tools, and distills the recommended guidelines into comprehensive guidance for authors on the effective use of AI tools in preparing their manuscripts. It helps authors uphold ethical considerations while preparing manuscripts or research reports using AI and generative AI tools. The findings show that the publishers clearly state that AI-generated content (in any form) is prohibited and that AI tools cannot be credited as authors. However, such tools can be used for initial idea generation and in the writing process, including structuring the paper, correcting grammar and spelling errors, and improving overall language, provided their use is clearly disclosed in a dedicated section of the manuscript. The study also recommends overall policy guidelines on how authors can effectively utilize AI technologies while adhering to the ethical standards and policies set by publishers, and helps them navigate the evolving landscape of AI and generative AI tools in research writing, manuscript preparation, and publishing.
Hastings Center Report, 2023
The new generative artificial intelligence (AI) tools, and especially the large language models (LLMs) of which ChatGPT is the most prominent example, have the potential to transform many aspects of scholarly publishing. How the transformations will play out remains to be seen, both because the different parties involved in the production and publication of scholarly work are still learning about these tools and because the tools themselves are still in development, but the tools have a vast range of potential uses. Authors are likely to use generative AI to conduct research, frame their thoughts, produce data, search for ways of articulating their thoughts, develop drafts, generate text, revise their writing, and create visuals. Peer reviewers might use AI to help them produce their reviews. Editors might use AI in the initial editorial screening of manuscripts, to locate reviewers, or for copyediting.
Balancing Innovation and Integrity: The Role of AI in Research and Scientific Writing
Nature and Science of Sleep, 2023
"Some people call this artificial intelligence, but the reality is this technology will enhance us. So instead of artificial intelligence, I think we'll augment our intelligence." (Ginni Rometty)

In today's scientific landscape, artificial intelligence (AI) is revolutionizing research methodologies and scientific writing, reshaping how we conduct and disseminate research. As AI's presence grows, so do questions surrounding ethics, authenticity, and the integrity of scientific publications. The increasing use of AI tools, such as large language models (LLMs) like Chat Generative Pre-Trained Transformer (ChatGPT), Google Bard, and Bing AI, in research publications has raised concerns and sparked discussions within the research and academic communities.[1] While AI and LLMs offer potential benefits, such as improved efficiency and transformative solutions, they also present challenges related to ethical considerations, bias, fake publications, and malicious use.[2] AI has the potential to enhance various aspects of research, including data processing, task automation, and personalized experiences.[1] However, AI usage in research and scientific writing can pose risks such as bias reinforcement, data privacy concerns, perpetuating data inaccuracies, and the potential for reduced critical thinking due to overreliance.[3] Therefore, the development of guidelines for using AI in research and scientific writing is crucial to ensure this technology's responsible and ethical application.

This editorial, published in Nature and Science of Sleep, primarily aims to enhance awareness of the evolving role of AI in research and scientific writing, emphasizing both its potential advantages and ethical challenges. By promoting responsible AI use, advocating for ethical guidelines, and engaging stakeholders, we strive to empower authors, reviewers, and the broader research community to navigate the dynamic landscape of AI in scientific writing while upholding the highest standards of integrity and credibility. Furthermore, we emphasize the critical need for the development of international guidelines that guide the responsible use of AI and LLMs in research and scientific writing.

AI's Potential Benefits and Challenges

AI holds the promise to profoundly transform research and education through various key advantages. Firstly, it has the capability to process vast amounts of data swiftly and efficiently, empowering researchers to navigate through sophisticated datasets and draw out meaningful insights.[1] Additionally, the automation features of AI streamline tasks like formatting and citation, freeing up substantial time and energy for researchers, which can then be redirected towards more complex and innovative work.[3,4] Lastly, AI can curate personalized learning journeys for students, tailoring the experience to their unique needs and learning preferences.[5] Nevertheless, while promising, AI systems have notable drawbacks, especially in health and medical research. These systems can amplify and perpetuate biases present in the training data, leading to skewed predictions and potentially