
Generative AI chatbots spark privacy concerns


The rapid pace of development in generative AI chatbots has raised concerns about intellectual property and data privacy. These AI tools, typically overseen by private companies, are trained on massive datasets that are not always public. This makes it nearly impossible to know exactly what has gone into a model’s answer to a prompt.

Organizations such as OpenAI have asked users to ensure that outputs used in subsequent work do not violate laws, including intellectual-property and copyright regulations, or divulge sensitive information. However, studies have shown that generative AI tools might do both. Timothée Poisot, a computational ecologist at the University of Montreal in Canada, is concerned that artificial intelligence could interfere with the relationship between science and policy in the future.

Chatbots such as Microsoft’s Bing, Google’s Gemini, and ChatGPT were likely trained using data that included Poisot’s work. Because these chatbots often don’t cite original content in their outputs, authors are stripped of the ability to understand how their work is used and to verify the credibility of the AI’s statements. “There’s an expectation that the research and synthesis is being done transparently, but if we start outsourcing those processes to an AI, there’s no way to know who did what, where the information is coming from, and who should be credited,” Poisot says.

The approach to AI regulation is likely to differ between the United States and Europe.

AI chatbots raise privacy issues

AI companies are increasingly interested in developing products marketed to academics.

In May, OpenAI announced ChatGPT Edu, a platform with extra analytical capabilities and the ability to build custom versions of ChatGPT. Legal scholars and researchers caution that when academics use chatbots, they expose themselves to risks they might not fully anticipate or understand. “People who are using these models have no idea what they’re really capable of, and I wish they’d take protecting themselves and their data more seriously,” says Ben Zhao, a computer-security researcher at the University of Chicago who develops tools to shield creative work, like art and photography, from being scraped or mimicked by AI.


Academics today have little recourse to control how their data are used or to have them 'unlearned' by existing AI models. Research is often published open access, which makes it harder to litigate the misuse of published papers or books. Zhao notes that most opt-out policies "are at best a hope and a dream," and many researchers do not even own the rights to their creative output, having signed them over to institutions or publishers that may enter partnerships with AI companies.

Representatives from publishers such as Springer Nature, the American Association for the Advancement of Science, PLOS, and Elsevier say they have not entered such licensing agreements. Wiley and Oxford University Press have brokered deals with AI companies, and Taylor & Francis has a $10-million agreement with Microsoft. Cambridge University Press is developing policies that will offer authors an 'opt-in' agreement with remuneration.
