Artificial intelligence can be a helpful tool in the information literacy toolbox for topic brainstorming, writing refinement, and cross-disciplinary perspectives. Generative AI could also give first-generation college students, who are often resourceful and creative in their approach to academics and campus life, a chance to compete on equal footing in culturally biased environments.

However, there are known issues with ChatGPT's accuracy and reliability. It can appear to researchers to be an authority on any given subject while providing citations to sources that do not exist or are not relevant to the topic being discussed. This can make it difficult for researchers to find accurate information and can lead them to incorrect conclusions.

Unfortunately, ChatGPT, like a large share of artificial intelligence systems, also has well-documented problems with bias and misinformation. AI systems are trained on data that is often biased, and that bias can be reflected in the system's output. For example, if an AI system is trained on a dataset of news articles drawn mostly from one political perspective, it may be more likely to generate text reflecting that perspective. AI systems can also be manipulated to generate false or misleading information: an attacker could create a fake news article, use an AI system to generate text supporting it, and then spread that text online, making it difficult to distinguish real news from fake. In some cases, overcorrection to combat bias has left the affected communities out of the conversation entirely. It is important to be aware of the potential for bias and misinformation when using AI systems and to take steps to mitigate these risks.

We look forward to working with any of you who are interested in integrating AI to develop best practices.
Chatbots: Pity the chatbot. The derided “computer says no” tool that seemed to be known more for blocking direct human interaction with shops, banks, airlines, and insurance companies has finally found affection, and an extraordinary amount of free publicity, via OpenAI’s ChatGPT.
IT HAS BEEN said that algorithms are “opinions embedded in code.” Few people understand the implications of that better than Abeba Birhane. Born and raised in Bahir Dar, Ethiopia, Birhane moved to Ireland to study: first psychology, then philosophy, then a PhD in cognitive science at University College Dublin.
The field of Natural Language Processing (NLP) has seen significant advancements in recent years, thanks in large part to the development of powerful language models such as ChatGPT. ChatGPT, short for Chat Generative Pre-trained Transformer, is a large-scale neural language model developed by OpenAI that is capable of generating human-like responses to natural language input. With its impressive performance on a range of language tasks, ChatGPT has quickly become one of the most widely used language models in NLP research and applications.
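The "generative" part of that name refers to autoregressive generation: the model repeatedly predicts a likely next token given what it has produced so far. The toy sketch below illustrates only that looping idea; the tiny bigram probability table is invented for demonstration, and a real transformer instead conditions on the entire context with billions of learned parameters.

```python
# Toy autoregressive generator: at each step, append the most likely
# next token given the previous one. The probabilities here are made up;
# in a real model they come from training on large text corpora.
BIGRAMS = {
    "the": {"library": 0.6, "chatbot": 0.4},
    "library": {"offers": 1.0},
    "offers": {"help": 1.0},
    "chatbot": {"answers": 1.0},
}

def generate(start, max_tokens=4):
    tokens = [start]
    for _ in range(max_tokens):
        options = BIGRAMS.get(tokens[-1])
        if not options:  # no known continuation: stop generating
            break
        # Greedy decoding: take the highest-probability continuation.
        tokens.append(max(options, key=options.get))
    return " ".join(tokens)

print(generate("the"))  # "the library offers help"
```

Because the model only ever optimizes for a plausible next token, fluent output and factual accuracy are separate questions, which is one way to understand the reliability concerns discussed throughout this guide.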
It’s hard to believe that ChatGPT appeared on the scene just three months ago, promising to transform how we write. The chatbot, easy to use and trained on vast amounts of digital text, is now pervasive. Higher education, rarely quick about anything, is still trying to comprehend the scope of its likely impact on teaching — and how it should respond.
ChatGPT is not only fun to chat with; it also searches for information, answers questions, and gives advice. With consistent moral advice, it could improve users' moral judgment and decisions. Unfortunately, ChatGPT's advice is not consistent. Nonetheless, an experiment finds that it does influence users' moral judgment, even when they know they are being advised by a chatbot, and users underestimate how much they are influenced.
Aside from stringing together human-like, fluid English sentences, one of ChatGPT's biggest skill sets seems to be getting things wrong. In the pursuit of generating passable paragraphs, the AI program fabricates information and bungles facts like nobody's business. Unfortunately, tech outlet CNET decided to make AI's mistakes its business.
America is in a crisis of trust and truth. Bad information has become as prevalent, persuasive, and persistent as good information, creating a chain reaction of harm. It makes any health crisis more deadly. It slows down response time on climate change. It undermines democracy.
A very strange conversation with the chatbot built into Microsoft’s search engine led to it declaring its love for me.
Just as the Internet dramatically changed the way we access information and connect with each other, AI technology is now revolutionizing the way we build and interact with software. As the world watches new tools such as ChatGPT, Google's Bard, and Microsoft Bing emerge into everyday use, it's hard not to think of the science fiction novels that not so subtly warn against the dangers of human intelligence mingling with artificial intelligence. Society is in a scramble to understand all the possible benefits and pitfalls that can result from this new technological breakthrough. ChatGPT will arguably revolutionize life as we know it, but what are the potential side effects of this revolution?
Despite the remarkable progress made and the apparent user preference for direct answers, this paradigm shift comes at a price higher than one might expect at first sight, affecting both users and search engine developers in their own ways.
Researchers used ChatGPT to produce clean, convincing text that repeated conspiracy theories and misleading narratives.
Given that a majority of LMs’ training data is scraped from the Web without informing content owners, their reiteration of words, phrases, and even core ideas from training sets into generated texts has ethical implications. Their patterns are likely to exacerbate as both the size of LMs and their training data increase, raising concerns about indiscriminately pursuing larger models with larger training corpora. Plagiarized content can also contain individuals’ personal and sensitive information.
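The verbatim reuse described above is often measured with n-gram overlap: the fraction of short word sequences in a generated text that also appear in a candidate source. A minimal sketch of that idea (the sample sentences below are invented for illustration, not drawn from any real training set):

```python
def ngrams(text, n=3):
    """Return the set of n-word sequences in a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(generated, source, n=3):
    """Fraction of the generated text's n-grams that also occur in the source."""
    gen = ngrams(generated, n)
    if not gen:
        return 0.0
    return len(gen & ngrams(source, n)) / len(gen)

source = "large language models are trained on text scraped from the web"
generated = "these models are trained on text scraped from public sites"
print(overlap_ratio(generated, source))  # 0.625
```

Real plagiarism-detection research uses more robust measures (paraphrase detection, longest-match statistics), but even this simple ratio shows how mechanical the check for reiterated phrases can be.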
The U.S. Department of Education's Office of Educational Technology released a new report, Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations, this week. The report acknowledges that the rapid pace of artificial intelligence (AI) advances is impacting society and summarizes opportunities and risks for AI in teaching, learning, research, and assessment. AI enables new forms of interaction among students, teachers and computers by way of voice, gestures, sketches and other human modes of communication, according to the report. These new forms may be leveraged to, for example, support students with disabilities, provide an additional “partner” for students working on collaborative assignments or help a teacher with complex classroom routines.
Recent breakthroughs in natural language processing (NLP) have permitted the synthesis and comprehension of coherent text in an open-ended way, thereby translating theoretical algorithms into practical applications.
Historically, the majority of digital assistants, including chatbots, have been assigned names, voices, visual representations, and even "personalities" that are stereotypically feminine and reflect patriarchal ideology. This cross-sectional descriptive study of chatbots associated with large academic libraries in the United States found that there are few extant library chatbots and, in a major departure from trends, even fewer that are gendered. This is promising, in that it signals (whether intentionally or not) that the practices of creators and adopters are countering entrenched tendencies to typecast digital assistants as women, which may point to more feminist and gender-inclusive technology design to come.
The publishers of thousands of scientific journals have banned or restricted contributors’ use of an advanced AI-driven chatbot amid concerns that it could pepper academic literature with flawed and even fabricated research.
As AI technologies are rolled out into healthcare, academia, human resources, law, and a multitude of other domains, they become de-facto arbiters of truth. But truth is highly contested, with many different definitions and approaches. This article discusses the struggle for truth in AI systems and the general responses to date. It then investigates the production of truth in InstructGPT, a large language model, highlighting how data harvesting, model architectures, and social feedback mechanisms weave together disparate understandings of veracity. It conceptualizes this performance as an operationalization of truth, where distinct, often conflicting claims are smoothly synthesized and confidently presented into truth-statements.
This article discusses the invisible human workforce that makes many of technology's "efficient" gains possible, such as removing abusive, hateful language from ChatGPT's knowledge base: "Computer-generated text, images, video, and audio will transform the way countless industries do business, the most bullish investors believe, boosting efficiency everywhere from the creative arts, to law, to computer programming. But the working conditions of data labelers reveal a darker part of that picture: that for all its glamor, AI often relies on hidden human labor in the Global South that can often be damaging and exploitative. These invisible workers remain on the margins even as their work contributes to billion-dollar industries."
Zarifhonarvar, Ali. “Economics of ChatGPT: A Labor Market View on the Occupational Impact of Artificial Intelligence.” SSRN, 9 Feb. 2023, papers.ssrn.com/sol3/papers.cfm?abstract_id=4350925.
Brown, August. "AI in Entertainment; Is Pop's Requiem Already Written? Tech Won't Replace Your Favorite Artist, but the Battered Music Industry Braces for More Upheaval." Los Angeles Times, 10 Jul. 2023. ProQuest, https://ccco.idm.oclc.org/login?url=https://www.proquest.com/newspapers/ai-entertainment-is-pops-requiem-already-written/docview/2834856281/se-2
Everywhere you look lately, people are discussing the potential negative uses and consequences of the AI-driven chatbot ChatGPT. Many are concerned about the potential for ChatGPT and other “large language models” (LLMs) to spread a fog of disinformation throughout our discourse, and to absorb the racism and other biases that permeate our culture and reflect them back at us in authoritative-sounding ways that only serve to amplify them. There are privacy concerns around the data that these models ingest from the internet and from users, and even problems with the models “defaming” people.
ChatGPT has taken the world by storm. Within two months of its release it reached 100 million active users, making it the fastest-growing consumer application ever launched. Users are attracted to the tool’s advanced capabilities – and concerned by its potential to cause disruption in various sectors.
Large language models are full of security vulnerabilities, yet they’re being embedded into tech products on a vast scale.
When you think about AI technology, fan works, and copyright, what excites you? And, what keeps you up at night?
A lawyer representing a man who sued an airline relied on artificial intelligence to help prepare a court filing. It did not go well.
A copyright lawyer and fair use expert weighs in on the legal implications of these new AI technologies.
More Copyright ...
The input to generative AI (training data): should it be considered fair use? This is widely debated.
Argument A: No, it's a copyright violation
OpenAI Sued for Using Everybody's Writing to Train AI - "The class action suit, filed in California, alleges that failing to follow proper procurement guidelines, including seeking the consent of those who produced that content in the first place, amounts to straight-up data theft."
This will affect not only OpenAI, but Google, Microsoft, and Meta, since they all use similar methods to train their models.
Argument B: Yes, it's fair use
Thoughts from Creative Commons:
Fair Use: Training Generative AI - Stephen Wolfson
Better Sharing for Generative AI - Catherine Stihler
Thoughts from UC Berkeley Library Copyright Office
UC Berkeley Library to Copyright Office: Protect fair uses in AI training for research and education
Thoughts from EFF: Electronic Frontier Foundation:
AI Art Generators and the Online Image Market - Katharine Trendacosta and Cory Doctorow
How We Think About Copyright and AI Art - Kit Walsh
“Done right, copyright law is supposed to encourage new creativity. Stretching it to outlaw tools like AI image generators—or to effectively put them in the exclusive hands of powerful economic actors who already use that economic muscle to squeeze creators—would have the opposite effect.”
Other countries
The Israel Ministry of Justice has issued an opinion: the use of copyrighted materials in the machine learning context is permitted under existing Israeli copyright law.