The rapid pace of development in generative AI chatbots has raised concerns about intellectual property and data privacy. These AI tools, typically overseen by private companies, are trained on massive datasets that are not always public. This makes it nearly impossible to know exactly what has gone into a model’s answer to a prompt.
Organizations such as OpenAI have asked users to ensure that outputs used in subsequent work do not violate laws, including intellectual-property and copyright regulations, or divulge sensitive information. However, studies have shown that generative AI tools might do both. Timothée Poisot, a computational ecologist at the University of Montreal in Canada, is concerned that artificial intelligence could interfere with the relationship between science and policy in the future.
Chatbots such as Microsoft’s Bing, Google’s Gemini, and ChatGPT were likely trained using data that included Poisot’s work. Because these chatbots often don’t cite original content in their outputs, authors are stripped of the ability to understand how their work is used and to verify the credibility of the AI’s statements. “There’s an expectation that the research and synthesis is being done transparently, but if we start outsourcing those processes to an AI, there’s no way to know who did what, where the information is coming from, and who should be credited,” Poisot says.
AI chatbots raise privacy issues
AI companies are increasingly interested in developing products marketed to academics.
In May, OpenAI announced ChatGPT Edu, a platform with extra analytical capabilities and the ability to build custom versions of ChatGPT. Legal scholars and researchers caution that when academics use chatbots, they expose themselves to risks they might not fully anticipate or understand. “People who are using these models have no idea what they’re really capable of, and I wish they’d take protecting themselves and their data more seriously,” says Ben Zhao, a computer-security researcher at the University of Chicago who develops tools to shield creative work, like art and photography, from being scraped or mimicked by AI.
Academics currently have little recourse to control how their data are used or to have them ‘unlearned’ by existing AI models. Research is often published open access, which makes it harder to litigate the misuse of published papers or books. Zhao notes that most opt-out policies “are at best a hope and a dream,” and many researchers do not even own the rights to their creative output, having signed them over to institutions or publishers that may, in turn, enter partnerships with AI companies.
Representatives from publishers such as Springer Nature, the American Association for the Advancement of Science, PLOS, and Elsevier say they have not entered such licensing agreements. Wiley and Oxford University Press have brokered deals with AI companies, and Taylor & Francis has a $10-million agreement with Microsoft. Cambridge University Press is developing policies that will offer an ‘opt-in’ agreement to authors, who will receive remuneration.