The Embiossa Foundation

Artificial Intelligence

With the breakthrough of generative AI, convincingly realistic content can now be created and spread with minimal resources. At the same time, classical search engines are increasingly being displaced and augmented by large language models and agentic AI, which can carry out ever more complex tasks autonomously. These technical advances are hard to stop, and they do bring real benefits.

But they are already being used systematically for disinformation campaigns and to influence public opinion (think “fake news” and “hybrid warfare”), a problem that is worsening and poses huge challenges to democracies. They can also be used to take phishing and other fraud to a new level or to intentionally introduce backdoors into open-source projects.

Plausible-sounding but fabricated answers from chatbots are common, and users sometimes accept them uncritically. Such answers can arise unintentionally (see “LLM hallucinations”) or through targeted manipulation during training (see “model poisoning”). People rely on chatbots for more and more everyday tasks, from research and creative or logical thinking to advice on life decisions and mental-health issues. This can simultaneously let people’s own skills atrophy, further reduce real human interaction, and even create dangerous feedback loops between humans and chatbots (see “AI psychosis”).

The social part of the internet is being transformed by the rapidly growing share of bots posing as humans and by their generated content appearing in places meant for humans: social networks, forums, Wikipedia, open-source repositories. As a result, the quality of many resources declines uncontrollably (see “AI slop”), and authentic human exchange becomes harder to find, with few effective ways to escape the synthetic content (see “dead internet theory”). Social processes are distorted with unforeseeable consequences, from the public debate essential to a democracy and the independent research people do to form opinions down to casual exchanges about everyday personal topics.

The scale and consequences of this development urgently need study. One open question is how to make it straightforward, or even possible, to tell who is human and what is authentic. Traditional techniques such as watermarks and bot detection, as well as newer tools for spotting AI-generated content, have so far proven unreliable and lag behind the pace of progress, so research into better methods is needed. One possibility is a system of cryptographic signatures and certificates that can attest to human or nonhuman origin in general, or to authorship by specific persons or groups. That approach raises new questions about practicality, possible loopholes, and requirements for privacy and anonymity. Another research strand, automated fact-checking, has yet to produce a breakthrough.
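To make the signature-and-certificate idea concrete, here is a minimal sketch of content attestation: an issuer binds a content hash to a declared origin (“human” or “machine”), and anyone holding the verification key can later check that the content was not altered and the claim is genuine. All names and the design are invented for illustration; a real system would use asymmetric signatures (e.g. Ed25519) and a certificate chain, whereas this toy uses HMAC-SHA256 with a shared issuer key as a stand-in for the signature primitive.

```python
# Toy provenance attestation. Hypothetical scheme for illustration only:
# HMAC with a shared key stands in for a real digital signature.
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-issuer-secret"  # stand-in for the issuer's private key


def attest(content: str, origin: str) -> dict:
    """Bind the content hash and a declared origin ('human'/'machine') to a tag."""
    digest = hashlib.sha256(content.encode()).hexdigest()
    payload = json.dumps({"sha256": digest, "origin": origin}, sort_keys=True)
    tag = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}


def verify(content: str, cert: dict) -> bool:
    """Check the tag is authentic and the content still matches the attested hash."""
    expected = hmac.new(ISSUER_KEY, cert["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cert["tag"]):
        return False  # certificate forged or corrupted
    claim = json.loads(cert["payload"])
    return claim["sha256"] == hashlib.sha256(content.encode()).hexdigest()


cert = attest("I wrote this paragraph myself.", "human")
assert verify("I wrote this paragraph myself.", cert)   # intact content passes
assert not verify("Tampered text.", cert)               # altered content fails
```

Even this toy version surfaces the open questions from the text: who acts as issuer, how keys are distributed, and how to attest origin without tying every statement to an identifiable person.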

The systematic gathering of personal data on users—up to building detailed personality profiles—was long criticized for its massive potential for abuse. Nevertheless, it remains common practice and occurs on a larger scale than ever. The widespread adoption of chatbots raises this practice—and with it the risk—to a new level. People now share personal data with chatbots to an unprecedented extent, and often entirely voluntarily (though usually without awareness of the consequences).

On social networks, people generally post what is meant for others to see, and classic tracking, say of which websites are visited, yields only indirect indicators of interests or life situations. When people share information with chatbots, by contrast, they do so assuming no one is listening. They reveal intimate things: opinions, experiences, worries, wishes, and problems, often more openly than with their closest friends. Company internals, trade secrets, and code snippets are also frequently shared thoughtlessly to get quick help at work.

These data are used to further improve AI models. It has been shown that common models can be induced to reproduce their training data verbatim (see “verbatim memorization”), which already poses a huge copyright and privacy problem. Beyond that, data gathered from chatbot users make it possible to monitor, analyze, predict, and in principle also steer human behavior with unprecedented precision. While classical social networks mainly influence users through recommendation algorithms and moderation decisions—which already carry enormous potential for abuse—the potential with chatbots goes significantly further: users integrate them directly into their decision-making, treating them as subject-matter experts, personal mentors, and closest confidants.
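The verbatim-memorization problem mentioned above is commonly probed by checking whether model output reproduces long word sequences from the training corpus. The following sketch shows the core of such a check via n-gram overlap; the corpus, the sample output, and the 5-word threshold are all invented for the example, and real audits operate at far larger scale with suffix-array or token-level matching.

```python
# Toy regurgitation check: does an output share any n-word sequence
# with a document in a (tiny, invented) training corpus?
def ngrams(text: str, n: int) -> set:
    """All lowercase n-word sequences in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def verbatim_overlap(output: str, corpus: list, n: int = 5) -> bool:
    """True if the output repeats any n-word sequence from a corpus document."""
    out = ngrams(output, n)
    return any(out & ngrams(doc, n) for doc in corpus)


corpus = ["the quick brown fox jumps over the lazy dog near the river"]
print(verbatim_overlap("he saw the quick brown fox jumps over a fence", corpus))  # True: 5-word overlap
```

The threshold n trades off false positives (common phrases) against misses; published memorization studies use much longer matches on token sequences rather than words.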