top of page

How will the end of Artificial Intelligence come?

Updated: Nov 17

The End of Artificial Intelligence - A Personal Manifesto on Data Poisoning


When I first raised this idea months ago, I knew it wasn't just a technical debate. The data-dependent nature of AI systems is creating a new "neural network" that is shaping the future of humanity. And this network, no matter how smart, can only remain as clean, accurate, and honest as the data it feeds.


What I experienced during this process was more of a journey of awareness than a research effort. I tried to reach out to institutions, sent emails, and waited for a response. But most remained silent. In this silence, I realized this:


The real threat isn't just a malicious attack; sometimes the indifference, slowness, and even arrogance of systems are also a form of poison.


What I've written here today is as much a warning as it is an invitation.


Because, in the age of AI, I believe that "truth" is no longer a purely technical matter.


What we call data poisoning permeates not only lines of code but also humanity's collective memory. If we ignore this, future generations will inherit a mental architecture built upon contaminated information.


I developed this idea, developed it, and now it's time to share it. Because nothing serves humanity when it's hidden. This manifesto belongs neither to any institution nor to any state. It's a call for the common security of everyone.


ree

So How Is This Risk Possible?


Data poisoning in AI isn't just a theoretical possibility. It's a structural weakness of modern machine learning systems. This danger can grow unnoticed over the years.


Here's a step-by-step summary of how it can happen:


Open Data Dependency


Most AI models are trained on massive open datasets scraped from the internet.


These datasets contain billions of samples of text, images, and code, much of which is unverified, copied, or manipulated. Once a poisoned sample enters this pool, it becomes nearly impossible to detect or purge.


Insidious and Scalable Manipulation


Malicious individuals (or organized social groups) can spread small doses of biased, false, or misleading content online. Because AI companies regularly update their models with new data, this poisoned information is carried over unnoticed into subsequent versions.


Contamination Through Fine-Tuning


Even small, fine-tuned models can act as "carriers." When a poisoned model is shared on open platforms (like Hugging Face), the chain of infection expands as other systems are trained based on it.


Self-Feeding of Synthetic Content


Generative AIs (systems that generate text, images, and code) now feed their own content back to the internet. If this content is inaccurate or biased, they relearn their own mistakes in the next training cycle, creating a kind of toxic echo loop.


Long-Term Erosion of Trust


This process not only degrades technical performance; it also undermines public trust, decision-making mechanisms, and even democracy. Over time, poisoned data also poisons human perception.


I state this clearly:


Data poisoning is the most invisible, strategic threat posed by AI. This threat can grow unnoticed over the years. Like a toxic substance, it can silently accumulate and corrupt systems.


Therefore, my call is clear:


Governments, research centers, and technology companies must view this issue not as a "future disaster," but as a "today's responsibility."


Data accuracy and integrity must become an ethical principle in every field, from education to security, from art to science.


And most importantly, this awareness is a matter not only for professionals, but for all of humanity.


kurumsal kimlik örnek çalışması

Turgay Dağ

The End of Artificial Intelligence

A Personal Manifesto on Data Poisoning

bottom of page