Could AI help you to write your next paper?
Related papers
Journal of Nature and Science of Medicine, 2023
This article examines the advantages and disadvantages of Large Language Models (LLMs) and Artificial Intelligence (AI) in research and education and proposes the urgent need for an international statement to guide their responsible use. LLMs and AI demonstrate remarkable natural language processing, data analysis, and decision-making capabilities, offering potential benefits such as improved efficiency and transformative solutions. However, concerns regarding ethical considerations, bias, fake publications, and malicious use also arise. The objectives of this paper are to critically evaluate the utility of LLMs and AI in research and education, call for discussions between stakeholders, and discuss the need for an international statement. We identify advantages such as data processing, task automation, and personalized experiences, alongside disadvantages like bias reinforcement, interpretability challenges, inaccurate reporting, and plagiarism. Stakeholders from academia, industry, government, and civil society must engage in open discussions to address the ethical, legal, and societal implications. The proposed international statement should emphasize transparency, accountability, ongoing research, and risk mitigation. Monitoring, evaluation, user education, and awareness are essential components. By fostering discussions and establishing guidelines, we can ensure the responsible and ethical development and use of LLMs and AI, maximizing benefits while minimizing risks.
The Tech Magazine, 2024
This is a discussion of whether AI tools can take over the process of writing. It was published in The Tech Magazine, ISSN 2753-9024, 24 October 2024, pp. 28-31.
Large Language Models: Machine Learning Down to Earth
Brainz Magazine, 2023
The business marketplace has been flooded with waves of technology trends that periodically surface, appear in every other sales pitch from technology vendors, and build up as the ultimate necessity in the minds of many CIOs. Examples abound: downsizing, rightsizing, outsourcing, offshoring, the consumerization of IT, and the adoption of Cloud technologies, among others. Like virtual tsunamis, these trends arrive with impact, sometimes disrupting operations and even influencing the performance, or changing the public perception, of a corporation (or government). It is no different today: tailgating Cloud adoption and the spread of a Big Data mindset for solving problems and building solutions come Artificial Intelligence (AI) and Machine Learning (ML). Vendors quickly packaged "AI" offerings, often pre-existing call-center scripts and chatbots; the door to the CIOs' IT teams was opened, and the offerings began to mature beyond call centers and diverse medical and microbiological uses. University and academic AI research was reinforced by highly specialized, well-funded dedicated groups in companies like Google, Facebook, and IBM. Smaller organizations, collectives, and startups are also receiving funding and investment. OpenAI has become an important presence in many respects: anyone can interact online and in real time with its chatbot, obtaining real-life value as a work tool in many areas. When asked "please explain what Large Language Models are?", the OpenAI chatbot responded (OpenAI, 2023): "Large Language Models (LLMs) are a type of artificial intelligence (AI) model that is designed to understand and generate human-like language. These models are trained on massive amounts of text data, such as books, articles, and web pages, and use advanced algorithms to learn the patterns and structures of language.
LLMs can perform a wide range of natural language processing (NLP) tasks, such as language translation, sentiment analysis, text classification, and question-answering. They can also generate human-like text, such as short stories, news articles, and even poetry. Some examples of LLMs include GPT-3 (Generative Pre-trained Transformer 3), BERT (Bidirectional Encoder Representations from Transformers), and T5 (Text-to-Text Transfer Transformer). These models have achieved significant breakthroughs in NLP and are widely used in industry and academia for various applications." (Grammatical errors in the AI-generated text were deliberately left unchanged.)
arXiv (Cornell University), 2023
The use of artificial intelligence (AI) in research across all disciplines is becoming ubiquitous. However, this ubiquity is largely driven by hyperspecific AI models developed during scientific studies to accomplish a well-defined, data-dense task. These AI models introduce apparent, human-recognizable biases because they are trained on finite, specific data sets and parameters. However, the efficacy of using large language models (LLMs), and LLM-powered generative AI tools such as ChatGPT, to assist the research process is currently indeterminate. These generative AI tools, trained on general and imperceptibly large datasets along with human feedback, present challenges in identifying and addressing biases. Furthermore, these models are susceptible to goal misgeneralization, hallucinations, and adversarial attacks such as red-teaming prompts, which can be performed unintentionally by human researchers, resulting in harmful outputs. These outputs are reinforced in research, where an increasing number of individuals have begun to use generative AI to compose manuscripts. Efforts into AI interpretability lag behind development, and the implicit variations that occur when prompting and providing context to a chatbot introduce uncertainty and irreproducibility. We thereby find that incorporating generative AI in the process of writing research manuscripts introduces a new type of context-induced algorithmic bias and has unintended side effects that are largely detrimental to academia, knowledge production, and communicating research.
Journal of the Association of Information Science and Technology, 2023
This paper discusses OpenAI's ChatGPT, a generative pre-trained transformer, which uses natural language processing to fulfill text-based user requests (i.e., a "chatbot"). The history and principles behind ChatGPT and similar models are discussed. This technology is then discussed in relation to its potential impact on academia and scholarly research and publishing. ChatGPT is seen as a potential model for the automated preparation of essays and other types of scholarly manuscripts. Potential ethical issues that could arise with the emergence of large language models like GPT-3, the underlying technology behind ChatGPT, and its usage by academics and researchers, are discussed and situated within the context of broader advancements in artificial intelligence, machine learning, and natural language processing for research and scholarly publishing.
AI-mediated English for research publication purposes
Journal of English for Research Publication Purposes
In our previous editorial we discussed two significant, interrelated exigencies in the field of English for Research Publication Purposes (ERPP): the role of technology in the dynamics and developments of the processes and practices of knowledge construction and dissemination, and the pedagogy of ERPP as an under-researched and under-represented domain. An issue attracting increasing attention in 2023 is the key role that Artificial Intelligence (AI) can play, and is already playing, in changing the landscape and dynamics of scholarly work, including academic publication. The appearance of technologies such as ChatGPT as an open AI technology in late 2022 is a good example in that respect. The emergence of such technologies raises an important question: is AI the new normal in our academic life, and will it revolutionize the way we interact, create, and circulate knowledge? Certainly, we are facing issues regarding the philosophy, integrity, and ethics of knowledge production and dissemination, and new imaginations in ERPP in particular. As growing discussions both online and in person show, academia and the general public alike are marvelling at the affordances and capabilities of emerging AI technologies such as ChatGPT, Google's Bard, and Microsoft's Sydney. What is still controversial and debatable, however, is the capacity of such technologies to produce human-like discourse, thought, and learning, and how, and to what extent, such technologies can impact the dynamics of knowledge production and exchange. Some scholars, such as Noam Chomsky, prefer to be on the cautious side and are hesitant as to whether mechanical minds can be on a par with, or improve on, human brains. Although Chomsky and colleagues consider such technologies a step forward, they warn against their "false promise," claiming that ChatGPT "exhibits something like the banality of evil: plagiarism and apathy and obviation" (Chomsky et al., 2023, para. 17).
AI Technology and Academic Writing
International Journal of Adult Education and Technology
Evidence shows that artificial intelligence (AI) has become an important subject in academia, representing about 2.2% of all scientific publications. One concern for doctoral programs is the future role of AI in doctoral writing due to the increase in AI-generated content, such as text and images. Apprehensions have been expressed that the use of AI may have a negative impact on a doctoral student's ability to think critically and creatively. In contrast, others argue that using AI tools can provide various benefits resulting in rigorous research. This conceptual article first discusses the developing relationship between AI and dissertation writing skills. Second, the article explores the origins of the traditional dissertation and outlines 21st-century dissertation options which reflect contextual needs and utilization of AI. Third, identified writing challenges are highlighted before turning to an in-depth examination of AI-generated tools and writing craft skills required to...
DECODING AI AND HUMAN AUTHORSHIP: NUANCES REVEALED THROUGH NLP AND STATISTICAL ANALYSIS
This research explores the nuanced differences between texts produced by AI and those written by humans, aiming to elucidate how language is expressed differently by each. Through comprehensive statistical data analysis, the study investigates various linguistic traits, patterns of creativity, and potential biases inherent in human-written and AI-generated texts. The significance of this research lies in its contribution to understanding AI's creative capabilities and its impact on literature, communication, and societal frameworks. By examining a meticulously curated dataset of 500K essays spanning diverse topics and genres, generated by LLMs or written by humans, the study uncovers the deeper layers of linguistic expression and provides insights into the cognitive processes underlying both AI- and human-driven textual compositions. The analysis revealed that human-authored essays tend to have a higher total word count on average than AI-generated essays but a shorter average word length, and while both groups exhibit high levels of fluency, the vocabulary diversity of human-authored content is higher than that of AI-generated content. However, AI-generated essays show a slightly higher level of novelty, suggesting the potential for generating more original content through AI systems. The study also identifies a lower prevalence of gender bias in AI-generated texts but a higher presence of biased topics overall. These findings highlight the strengths and limitations of AI in text generation and the importance of considering multiple approaches for comprehensive analysis. The paper addresses challenges in assessing the language generation capabilities of AI models and emphasizes the importance of datasets that reflect the complexities of human-AI collaborative writing.
Through systematic preprocessing and rigorous statistical analysis, this study offers valuable insights into the evolving landscape of AI-generated content and informs future developments in natural language processing (NLP).
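The surface metrics this abstract compares (total word count, average word length, vocabulary diversity) can be sketched in a few lines of Python. This is an illustrative sketch only: the whitespace-free regex tokenizer and the type-token ratio used as the diversity measure are assumptions, since the paper does not specify its exact tokenization or metrics.

```python
import re

def text_metrics(text: str) -> dict:
    """Compute simple surface statistics of the kind compared in the study.

    Assumptions (not from the paper): tokens are alphabetic runs, and
    vocabulary diversity is the type-token ratio (unique words / total words).
    """
    words = re.findall(r"[A-Za-z']+", text.lower())
    total_words = len(words)
    avg_word_length = (
        sum(len(w) for w in words) / total_words if total_words else 0.0
    )
    vocab_diversity = (
        len(set(words)) / total_words if total_words else 0.0
    )
    return {
        "total_words": total_words,
        "avg_word_length": round(avg_word_length, 2),
        "vocab_diversity": round(vocab_diversity, 2),
    }

# Two toy inputs standing in for a human-written and an AI-generated essay.
human_sample = "The quick brown fox jumps over the lazy dog and runs far away."
ai_sample = "The system generates outputs. The system generates more outputs."
print(text_metrics(human_sample))
print(text_metrics(ai_sample))
```

At corpus scale, the study's comparison amounts to averaging such per-essay metrics over each group; the type-token ratio in particular is length-sensitive, which is one reason multiple measures are needed for a fair human-versus-AI comparison.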
AI and Humans: Friends or Foes
SocArXiv, 2023
The advent of large language models (LLMs) has raised a strong debate across different academic fields, as well as in the general media. As a form of general-purpose artificial intelligence tool, it has split public opinion into two rather opposing camps: one that believes the technology will benefit humankind as a whole, by making us more efficient at producing texts and at other creative tasks, and another that sees it as an existential threat to our species once it acquires the ability to self-improve beyond human cognitive abilities. In this article, we look at the short history of humans and information technology, and discuss some of the benefits and risks of recent AI developments and the impact they are already having on how we understand the frontier between human and machine intelligence.