AI Board

A board for sharing AI news and analysis.

Chapter 2. Artificial Intelligence Is Watching the World

Author: Kim Kyung-jin (김 경진)
Date: 2026-03-01 00:40
Views: 106

"Big Brother is watching you." — George Orwell


1. The Eyes of AI That Never Sleep

At this very moment, as you read these words, thousands of invisible eyes are fixed on you. The camera inside your smartphone. The CCTV on the street corner. The facial recognition system at the subway station. Even the social media post you liked an hour ago. Every bit of it is being collected, analyzed, and stored somewhere you cannot see.

Artificial intelligence was born to make human life richer and more convenient. It was supposed to help doctors diagnose diseases, untangle traffic jams, and tear down language barriers — a kind of technological magic. But look at what has happened. That same brilliant technology is now being used to surveil people, control them, and in some cases, threaten their lives. The technology itself is not evil. The question is what the hand holding it intends to do.

When Israel's Lavender system reduces Palestinian lives to numerical scores, when China's Skynet tracks the movements of 1.4 billion people, when America's Palantir analyzes global intelligence in real time — we need to recognize something. AI is no longer our convenient tool. It is becoming a shackle around our wrists.

The AI surveillance systems spreading across the globe share a set of chilling traits. First, they never ask your permission. Your personal data is harvested in bulk whether you consent or not. Second, nobody knows how the algorithms actually work. They operate like opaque black boxes. Third, they make critical decisions despite being far from perfect. A ten-percent error rate sounds abstract — until you realize you could be part of that ten percent. Fourth, they disproportionately target specific groups. People with different skin colors, different faiths, different political views.

This is not a technical issue. It is a human rights crisis that strikes at the root of every value democratic societies have fought to protect — dignity, privacy, free expression, freedom of religion. Even South Korea's National Human Rights Commission recommended in May 2024 that institutions adopt "AI Human Rights Impact Assessment Tools." This country is not exempt.

Right now, at this very moment, the question is being decided: Will artificial intelligence liberate human beings, or will it enslave them? The ones who get to answer that question are not governments or corporations. It is you, and me, and all of us.


2. Israel's Lavender System

In April 2024, a revelation shook the world. The Israeli-Palestinian independent outlet +972 Magazine and the Hebrew-language site Local Call exposed the inner workings of an AI system called Lavender. What they described turned every nightmare about artificial intelligence into documented reality.

Developed by the Israeli military's intelligence units, this AI gazed down over the Gaza Strip like an omniscient eye, assigning a score between 1 and 100 to each of the 2.3 million people living there. The score was supposed to measure the probability that someone was a Hamas militant. Who you called on the phone, which apps you downloaded, where you went on a regular basis — all of it fed the algorithm.

Lavender first studied data from known Hamas operatives. The brands of phones they used, the apps they frequently downloaded, their calling patterns, their travel routes. Then it hunted for anyone who exhibited similar patterns, spreading suspicion outward like a virus tracing its chain of transmission.
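Nothing about Lavender's internals is public, but the pattern-matching logic described above can be illustrated in the abstract. The sketch below is a generic similarity score over invented behavioral features — the feature names, numbers, and 0-to-100 scale are all hypothetical, chosen only to show how a machine can rank people by their resemblance to a known profile:

```python
# A generic illustration of similarity scoring, NOT the actual system:
# the behavioral features, numbers, and scale here are all invented.
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two feature vectors, in [0, 1] here
    # since all features are non-negative.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical features, each scaled 0..1:
# [app overlap, call-pattern match, travel-route overlap]
known_operative_profile = [0.9, 0.8, 0.7]
random_resident = [0.2, 0.9, 0.1]

score = round(cosine(known_operative_profile, random_resident) * 100)
print(score)  # → 75
```

The danger the sources describe lives in exactly this kind of number: an ordinary resident whose phone habits happen to resemble a profile can score high for reasons no one ever inspects.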

An Algorithm That Calculates the Value of a Life

The Israeli military was satisfied that Lavender achieved 90 percent accuracy. Ninety percent — that sounds impressively high at first. But stop and think about what the remaining ten percent means. If Lavender designated 37,000 people as strike targets, roughly 3,700 of them may have been innocent. Can you call 3,700 lives a margin of error?
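The arithmetic behind that claim is simple to check. Assuming, as the reporting does, that "accuracy" means the share of correct designations, the expected number of wrong designations is just the complement:

```python
# Figures as cited in the reporting; "accuracy" taken at face value.
accuracy = 0.90
targets_flagged = 37_000

wrongly_flagged = round(targets_flagged * (1 - accuracy))
print(wrongly_flagged)  # → 3700
```

And this is the optimistic reading: an accuracy measured against the military's own labels says nothing about how accurate those labels were in the first place.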

What makes it worse is that after the war began, even the thin layer of human oversight grew slack. Before the conflict, an internal rule required "at least two pieces of human-sourced intelligence" to corroborate Lavender's judgment. Once the Gaza war started, that threshold dropped to one — and in practice, even that was routinely ignored.

One intelligence analyst gave chilling testimony: "We just rubber-stamped Lavender's decisions. It took about twenty seconds to approve a target for a strike." Life-and-death decisions were being made by artificial intelligence. The human role had been reduced to a formality.

A single officer's remark captured the entire logic at work: "Saving money, time, and manpower mattered far more than civilian lives."

"Where's Daddy?" — A System That Holds Families Hostage

Operating alongside Lavender was another system with a name that makes your skin crawl: "Where's Daddy?" This tracking system detected the moment a target returned home, then alerted the Israeli military.

The name tells you everything about the method. It timed its strikes to the moment a father walked through his front door. The system learned each target's daily routine, mapped the location of his home, and monitored in real time when he entered.

An Israeli officer's testimony is blunt. "We didn't only attack Hamas operatives when they were inside military buildings or engaged in military activities. It was much easier to strike them at home. The system was designed to identify when Hamas operatives were at home."

The outcome was predictable. To eliminate one target, entire families died alongside him. When a bomb falls on a house, the women and children inside have no escape. This was a frontal violation of the principle of civilian protection enshrined in international law.

A License for Mass Killing Under the Name of Collateral Damage

What six Israeli intelligence operatives testified was even more disturbing. During the first weeks of the war, the military accepted that fifteen to twenty civilian deaths were within "tolerable range" for assassinating each militant identified by Lavender. When the target was a Hamas commander, more than a hundred civilian deaths were permitted.

A State Department expert in international law told The Guardian that "a ratio of one to fifteen, especially for low-ranking combatants, is something I have never heard of." Israel's threshold for acceptable civilian casualties was, by international standards, without precedent.

IDF insiders summarized the logic in plain terms. "We didn't want to spend excessive manpower and time on low-ranking militants. It was worth using AI even at the cost of collateral damage, civilian casualties, and errors that killed innocent people."

A Machine's Verdict That Denies Human Dignity

The deepest problem with the Lavender system is that it treats human life as raw data. A person's history, emotions, family bonds, and identity are all erased. Survival or death is determined by a probability the algorithm produces. This is a fundamental denial of human dignity — a complete inversion of the principle that technology should exist to serve people.

Transparency is nonexistent. No one can say with certainty what criteria the algorithm uses to classify someone as dangerous. People who are wrongly categorized have no way to prove their innocence. Lives are being taken without due process or legal protection.

Facial Recognition Hell in Gaza

The facial recognition technology Israel deployed in Gaza alongside Lavender was a separate nightmare. Introduced after the war began in October 2023, with the stated purpose of identifying Hamas-linked individuals and locating Israeli hostages, the system in practice became a tool for indiscriminate surveillance of Palestinian residents.

The operation, led by Israel's Unit 8200, used facial recognition technology from a private company called Corsight. The technology was deployed through checkpoints, drones, and military cameras, with Corsight employees embedded on-site to provide technical support.

Israeli soldiers stationed in Gaza were given cameras equipped with the technology. As displaced civilians passed through checkpoints, their faces were randomly photographed, scanned, and matched against existing databases.

Corsight boasted that its technology could achieve "accurate recognition even when less than 50 percent of the face is visible, in darkness, or in low-resolution conditions." Reality told a different story. Palestinian poet Mosab Abu Toha is a living witness. Fleeing with his three-year-old child, he was scanned at an Israeli checkpoint. The AI misidentified him as someone on a wanted list. He was detained for two days, beaten, and tortured.

Amnesty International condemned Israel's actions as "automated apartheid." It became a defining example of the grave human rights violations that follow when AI technology is weaponized for war and surveillance.


3. China: They Can Find You by the Way You Walk

Under China's Skynet, there is nowhere to hide. The name comes from Laozi's Tao Te Ching: "Heaven's net is vast and wide; its mesh may seem coarse, yet nothing slips through." But who decides what counts as slipping through? The Chinese Communist Party.

Scanning 1.4 Billion People in One Second

China has blanketed the country with CCTV cameras — by most estimates, hundreds of millions of them, more than half of all the surveillance cameras on earth. Picture it: countless eyes watching you around the clock. The Chinese government boasts that the system can scan its entire population in a single second. Identifying the faces of 1.4 billion people in one second is a technological marvel and a human rights horror in the same breath.

China's facial recognition technology is the most advanced in the world. In the 2018 Face Recognition Vendor Test hosted by the U.S. National Institute of Standards and Technology, Chinese-developed algorithms swept first through fourth place. The reason they lead the field is straightforward: they have 1.4 billion test subjects.

99.8 Percent Accuracy

Skynet's facial recognition reaches 99.8 percent accuracy. In its first two years of operation, more than 2,000 criminal suspects were arrested through the system. But what deserves attention is that the system does not stop at faces. It identifies people by the way they walk. Wearing a hat or a mask will not help you. Your gait alone is enough.

The case files reveal the system's reach. A woman who murdered her boyfriend seventeen years earlier was found by a facial recognition system at a highway toll booth. A man wanted for financial crimes was picked out of a concert audience of 50,000 by a camera. A suspect in a 2000 corpse-mutilation case in Zhejiang Province was located after sixteen years.

The most dramatic case occurred at a temple in Anhui Province. The system flagged the abbot's face as matching a violent crime suspect. The man had changed his name and lived as a monk for over a decade. He was arrested.

These cases demonstrate the precision and scale of China's surveillance technology. They also show how deeply such technology can invade the private lives of ordinary citizens.

Sharp Eyes: Completing the Surveillance Web

In 2016, with the approval of the Communist Party's Central Committee, China launched the Sharp Eyes project — an ambitious plan to build a nationwide CCTV network capable of monitoring and controlling the entire country around the clock.

Sharp Eyes goes far beyond installing more cameras. It links the CCTVs on roads and in public spaces to residents' televisions, mobile phones, and smart door locks, building a real-time situational awareness network. It amounts to building a society in which every citizen watches every other citizen.

Imagine a world in which your television, your phone, and even your front door serve as instruments of government surveillance. That is what it means when the entire territory of China is draped in a surveillance net.

Everyday Surveillance

Facial recognition has seeped into every corner of daily life. Traffic violators have their faces photographed and displayed on electronic billboards. Jaywalkers are automatically ticketed. Subway fares can be paid with a palm scan or a face scan, no phone required.

At Beijing's Temple of Heaven Park, facial recognition machines were installed to prevent toilet paper theft. You must let the machine scan your face before it dispenses a set length of paper. Even in the restroom, your face is on record.

Starting December 1, 2019, facial scans became mandatory for new mobile phone registrations. The government justified it as "protecting citizen rights in cyberspace." The real purpose was to build a government-run biometric database. The collected facial data was integrated with a surveillance network linking more than 700 million CCTVs, enabling real-time identification and location tracking.

The COVID-19 pandemic became an ideal accelerant. Facial recognition combined with temperature scanning became routine at schools, government buildings, and apartment complexes. Technology that identifies masked faces was developed and deployed.

Xinjiang and the Uyghurs

The surveillance system operates at its most extreme in the Xinjiang Uyghur Autonomous Region. The region has become a testing ground for surveillance technology. CCTVs and facial scanning systems are installed at 6.7 million locations across Xinjiang — mosques, Uyghur community centers, airports, train stations, bus stops.

According to Human Rights Watch, 2.5 million people — one-tenth of Xinjiang's population — are monitored daily by Chinese intelligence authorities. Biometric data including DNA, fingerprints, iris scans, and blood types is forcibly collected from residents aged twelve to sixty-five. Every person is treated as a potential criminal.

The Integrated Joint Operations Platform, or IJOP, used to surveil Uyghurs is itself an instrument of racial discrimination. It collects biometric data including DNA to track Uyghurs across Xinjiang. This goes beyond surveillance. It amounts to systematic repression and control of an entire ethnic minority.

A list of 2,000 Uyghur detainees from Aksu Prefecture, obtained by Human Rights Watch, makes for grim reading. The stated reasons for detention included unauthorized Quran study, wearing a long beard, using a VPN, repeatedly turning off a mobile phone, and practicing religion. Everyday actions had become grounds for imprisonment.

Huawei reportedly developed a camera system specifically designed to detect Uyghurs. When the system identifies a member of the Uyghur minority, it automatically triggers a "Uyghur alarm" and notifies police. Alibaba is also known to possess facial recognition software capable of identifying Uyghurs.

Operation Knock: Targeting the Religious

China compels religious believers to submit to the atheist, Marxist ideology of the Communist state. The Party treats religious organizations as potential challengers to its authority.

The memory of the Taiping Rebellion of 1850–1864, in which Hong Xiuquan wielded Christian ideology and nearly toppled the Qing dynasty, remains vivid. Under Xi Jinping, the policy of "Sinicization of religion" has intensified controls, demanding that religious organizations comply with the requirements of the Party and the state.

Under Article 300 of China's criminal law, involvement in a banned religious organization is a crime carrying three to seven years in prison. The government offers bounties of 100,000 yuan for those who report believers in prohibited groups, building a nationwide informant system.

Beginning in early 2017, a vast surveillance operation called "Operation Knock" was carried out across China. Police were dispatched under various pretexts to investigate religious believers, photograph them, and build files. The operation formed part of a nationwide tracking system targeting people of faith.

The collected data is stored on computers dedicated to the Ministry of State Security. Investigators seek evidence against those who spread their religion. Once evidence is secured, further investigation follows, and the targets are placed under continuous surveillance through Sharp Eyes and Skynet.

Social Credit: Scoring Human Worth

China's Social Credit System operates on a mechanism of "reward trust, punish distrust." It observes and scores the behavior of every person living in China. Think of it as the government watching and grading each citizen the way a school grades student conduct.

The system is not a single centralized program established by national law. It is built through local governments, railway authorities, financial institutions, and other agencies running their own parallel systems. But because so many bodies participate, the effect is a de facto nationwide regime operating in lockstep.

People with high scores can check into hotels without paying deposits and receive faster visa processing. Those with low scores face restrictions on hotel stays, train travel, and freedom of movement.

The system monitors behavior through multiple channels. CCTVs observe people around the clock. Facial recognition identifies who did what and where. Internet browsing histories, shopping records, loan repayment records, traffic violations — every aspect of daily life is collected.

To earn good scores, you must pay taxes on time, obey traffic laws, and repay debts promptly. Volunteer work and caring for elderly parents also help. Bad scores come from running red lights, defaulting on loans, or spreading false information. Criticizing or opposing the government drops your score sharply.
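There is no public specification of how any of these systems weight behavior, but the reward-and-punish mechanism described above amounts to simple rule-based accounting. The rules and point values below are entirely invented for illustration:

```python
# Toy illustration of rule-based behavior scoring; the rules and point
# values here are invented, not drawn from any actual system.
RULES = {
    "paid_taxes_on_time": +5,
    "volunteered": +3,
    "cared_for_parents": +3,
    "ran_red_light": -5,
    "defaulted_on_loan": -10,
    "spread_false_information": -15,
}

def score(events, base=100):
    # Unlisted events contribute nothing.
    return base + sum(RULES.get(e, 0) for e in events)

print(score(["paid_taxes_on_time", "volunteered"]))   # → 108
print(score(["ran_red_light", "defaulted_on_loan"]))  # → 85
```

The mechanism is trivially simple; the power lies entirely in who writes the rule table, which no citizen gets to see or contest.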

Global Spread

The most alarming development is that China's surveillance model is expanding worldwide. China exports surveillance technology to other countries, and many developing nations are adopting it. Countries across Africa, South America, and Asia are purchasing Chinese surveillance systems to monitor their own citizens.

Technologies developed by Huawei, Hikvision, Dahua, and other Chinese firms are in use around the world. The Chinese model of surveillance is going global.

China's Skynet system stands as a defining example of how technological progress does not guarantee human freedom and happiness. Built with the stated purpose of improving public safety and order, it is a dangerous system capable of gravely violating individual liberty and human rights. Technology should make human life better. Instead, it is being used to control and suppress.


4. Palantir

Have you heard the name Palantir Technologies? The company takes its name from the palantir, a seeing-stone in Tolkien's The Lord of the Rings — a magical orb that allowed its user to observe distant places. The real-world Palantir is not so different.

Founded in 2003 by PayPal co-founder Peter Thiel, the company operates one of the most powerful data analysis and surveillance systems on earth. Its clients include the CIA, the FBI, the NSA, the Department of Defense, and other core American intelligence agencies. In recent years, private corporations have also begun using Palantir's services.

A Digital Kraken That Swallows All the World's Data

Palantir's core technology integrates and analyzes enormous volumes of data from disparate sources. Using AI and machine learning, it uncovers hidden patterns and connections across billions of data points — satellite imagery, phone records, financial transactions, social media posts, government databases, and more.

What makes it more unsettling is that all of this operates in real time. The instant new information arrives, analysis begins, cross-referencing against existing data to flag meaningful patterns or threats. It is as if a colossal digital brain watches the world without rest.

Gotham and Foundry: The All-Seeing Eye for Government and Industry

Palantir runs two main platforms organized by clientele. Gotham serves government agencies. Foundry serves the private sector.

Gotham, the government platform, is used for military operations, counterterrorism, intelligence collection, and criminal investigations. It is nicknamed "the digital chessboard" because it visualizes complex situations as strategic maps.

Gotham's key capabilities include signals intelligence analysis — real-time processing of phone calls, internet traffic, and satellite communications from around the world. It integrates human intelligence, automatically linking field reports, interview records, and informant briefings to other data streams. If a source's report aligns with a satellite image or an intercepted communication, the system flags it immediately.

It performs geospatial intelligence analysis, processing satellite images, drone footage, and mapping data in real time to detect changes on the ground — new construction, troop movements, weapon deployments. It collects and analyzes open-source intelligence from social media platforms like Facebook, Twitter, and Instagram, gauging political sentiment and social instability in real time.
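Palantir's actual architecture is proprietary, but the corroboration step described above — flagging an observation the moment independent sources agree on it — can be sketched generically. Every source name, entity ID, and record below is invented:

```python
# Generic illustration of cross-source corroboration, NOT Palantir's
# actual design: all identifiers and records here are invented.
from collections import defaultdict

# Each source reports sightings of entities as (entity, place, day).
sources = {
    "field_report": [("entity_7", "siteA", 3)],
    "sat_image":    [("entity_7", "siteA", 3), ("entity_9", "siteB", 5)],
    "intercept":    [("entity_7", "siteA", 3)],
}

# Count how many independent sources back each observation.
corroboration = defaultdict(set)
for src, records in sources.items():
    for sighting in records:
        corroboration[sighting].add(src)

# Flag anything two or more sources agree on.
flags = {k: sorted(v) for k, v in corroboration.items() if len(v) >= 2}
print(flags)
```

Here only entity_7 at siteA is flagged, because three sources independently report it; the lone satellite sighting of entity_9 is not. Scale the same join up to billions of records and you have the real-time cross-referencing the text describes.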

Foundry, the commercial platform, offers similar analytical power but is tailored for business operations rather than intelligence collection. Built on a simulation engine, it integrates corporate financial, personnel, logistics, and inventory data.

Ukraine: Palantir's Live Proving Ground

After Russia's invasion of Ukraine in February 2022, Palantir assumed a significant role in the war. Three months into the conflict, in June 2022, President Zelensky met with Palantir CEO Alex Karp to request military support. Karp was the first Western CEO Zelensky met during the war — a measure of how critical Palantir had become.

Palantir decided to provide its software to Ukraine free of charge. It was a strategic calculation. A live war offered a rare opportunity to test and refine the technology under real conditions — data and experience that cannot be obtained any other way.

The main system Palantir deployed was an integrated analysis platform called Metaconstellation. Its central function was to process satellite imagery, open-source data, drone footage, and intelligence reports through AI, giving Ukrainian commanders the information to assess battlefield conditions and select strike targets in real time.

Palantir CEO Alex Karp told Reuters that "the majority of Ukrainian targeting" was handled by Palantir's AI system. The Economist called Palantir the "sling" that allowed David — Ukraine — to fight Goliath — Russia.

The Dark Shadow of Civilian Surveillance

Technology built to locate and track enemy forces on a battlefield can, with minor adjustments, become a tool for governments to surveil and control their own citizens. Palantir has already been embroiled in civilian surveillance controversies more than once.

JP Morgan initially adopted Palantir to monitor employee compliance violations. When the system was found to be surveilling senior executives' personal lives, the contract was terminated.

Palantir's system can retrieve virtually any information its user desires — emails, phone calls, financial transactions, location data, social media activity. Every digital footprint a person leaves can be tracked and analyzed.

Palantir also holds a contract with U.S. Immigration and Customs Enforcement, assisting in the tracking and arrest of undocumented immigrants. More than 200 Palantir employees signed a petition expressing concern about the ICE contract, and thirteen former employees publicly criticized the company's direction.

The Spread of a Digital Surveillance Alliance

Palantir holds contracts with government agencies in more than forty countries. These governments adopt Palantir's technology under the banners of counterterrorism, criminal investigation, border security, and public safety. But once such systems are in place, the scope of their use tends to expand.

In democratic societies, citizens have historically been able to watch and check their governments — through press freedom, the right to assemble, the right to speak. As powerful surveillance technologies like Palantir fall into government hands, that balance is collapsing. Governments gain the ability to monitor every citizen's behavior in real time, while citizens find it increasingly difficult to know what their governments are doing. The relationship between watcher and watched is being inverted.

At the international level, Palantir's expansion is equally worrying. By providing its technology to allied nations, the United States is effectively building a "digital surveillance alliance." A global standardization of surveillance technology is underway.

A Dangerous Fusion of Technology and Power

Palantir illustrates how dangerous the marriage of technology and power can become. Its founder Peter Thiel maintains close ties to the Republican Party and has been a supporter of Donald Trump. The possibility that Palantir's technology could be leveraged for the interests of a particular political faction cannot be dismissed.

Palantir's market capitalization has surpassed those of traditional defense contractors. As of December 2024, Palantir stood at $174 billion — exceeding Lockheed Martin at $121.6 billion and Northrop Grumman at $69 billion. Software-based surveillance technology is now valued higher than hardware-based legacy defense companies.

Palantir sends an important warning. Technology is not neutral. Depending on who uses it, for what purpose, and in what manner, it can liberate humanity or oppress it. The choices we make now will determine the world future generations inherit.


5. America's NSA

In 2013, classified NSA documents leaked by Edward Snowden shocked the world. The U.S. National Security Agency had been tapping the mobile phones of leaders in thirty-five countries. German Chancellor Angela Merkel's phone had been monitored for roughly ten years, from 2002 to 2013. Germany and other European allies expressed deep distrust of the United States, escalating diplomatic tensions.

According to Snowden's disclosures, the NSA had installed listening devices in the embassies of thirty-eight allied nations in Washington — including South Korea and Japan — and deployed specialized antennas to intercept signals. Even the European Union headquarters in Brussels was targeted. A nation that proclaimed itself the defender of freedom had been spying on its allies as though they were adversaries.

The NSA operated PRISM, a system that accessed the servers of global technology companies — Google, Facebook, Apple, Microsoft — to collect personal emails, photos, call records, and other data in real time. It hacked China's Tsinghua University and Hong Kong telecommunications firms, conducting indiscriminate information warfare against both rival and allied nations.

Germany's justice minister was so enraged that he compared the actions to "what enemy nations did during the Cold War." The CIA was using U.S. consulates in Frankfurt and Hamburg as hacking outposts, conducting cyber-espionage operations in broad daylight on allied soil.

After Snowden: Surveillance Grows More Sophisticated

More than a decade has passed since Snowden's revelations. In that time, AI technology has advanced at breathtaking speed. After President Obama's apology, did American intelligence agencies actually stop monitoring allied leaders' phones and critical communications networks? It seems unlikely.

Where the old NSA focused on amassing vast quantities of communications data, AI now analyzes and predicts from that data in real time. AI algorithms can process in minutes what human analysts would need years to review. Within the ocean of emails, social media posts, and call records, the system can instantly map a person's political leanings, critical opinions, and network of contacts.

Predictive surveillance and behavioral analysis are now feasible. By learning from historical data, these systems can forecast the future actions of specific individuals or groups. A person who attends political rallies or repeatedly voices opinions on certain topics online can be flagged, classified, and placed under intensive monitoring.

Facial and behavioral recognition have evolved in step. The global proliferation of CCTV, drones, and satellite imagery, combined with AI vision, has produced powerful real-time tracking systems. AI can now identify a person wearing a mask or with only a partial face exposed. Gait patterns, behavioral signatures, and other biometric data allow a system to single out one individual in a crowd and trace their movements.

Five Eyes: A Surveillance Alliance That Circumvents Domestic Law

The Five Eyes intelligence alliance — the United States, the United Kingdom, Canada, Australia, and New Zealand — has constructed a global surveillance network powered by advanced technology. The intelligence agencies of each member state share and analyze data, operating an omnidirectional surveillance architecture.

The alliance's most cunning feature is that it circumvents each nation's legal restrictions. U.S. law limits the government's ability to directly surveil its own citizens. But receiving information about Americans collected by the British or another ally faces far fewer constraints. The effect is to neutralize each country's privacy protections and surveillance regulations.

Intelligence agencies harvest personal information on a mass scale through social media platforms — Facebook, Twitter, Instagram. These platforms hold vast troves of data about users' daily habits, relationships, political views, and spending patterns. AI can analyze this information to predict an individual's personality, political orientation, and behavior. Research on Facebook data has shown that a person's political party can be predicted with roughly 85 percent accuracy from nothing more than their pattern of "likes."

The Net of Financial Surveillance and Location Tracking

Financial institutions monitor transaction data. From this data, an individual's economic activity, spending habits, and personal connections become visible. When someone withdraws money at a certain location or makes a purchase at a certain store, their position and activities can be traced. AI has the potential to turn financial monitoring into a tool for surveilling every citizen's economic life.

Location data collected through smartphones, GPS devices, and credit cards provides a detailed map of a person's movement patterns. AI can analyze this data to reveal daily routines, frequent contacts, and preferred destinations. Moving in a pattern that deviates from the norm can be classified as "suspicious behavior," triggering additional surveillance.
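The "deviates from the norm" test is, at its simplest, an outlier check against a person's own history. The travel data and threshold below are invented purely to illustrate the idea:

```python
# Toy "deviation from routine" flag; the data and the threshold are
# invented for illustration, not drawn from any real system.
daily_km = [5.1, 4.8, 5.3, 5.0, 4.9]  # a person's usual daily travel
today_km = 42.0                        # one unusual day

baseline = sum(daily_km) / len(daily_km)
suspicious = abs(today_km - baseline) > 3 * baseline  # crude threshold

print(suspicious)  # → True
```

One long trip to visit a relative, and the flag trips: this is how an ordinary change in routine becomes "suspicious behavior" that invites further monitoring.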

Digital Totalitarianism Devouring Democracy

AI-based surveillance systems pose a grave threat to democracy and human rights. Privacy, freedom of expression, freedom of assembly, and political participation — the foundational rights of a democratic society — are being constrained by surveillance and control. As monitoring of political dissidents and social activists intensifies, healthy political debate and social criticism face suppression. The democratic principle of checks and balances is being eroded.

We stand at a critical crossroads. As technology advances, surveillance becomes more precise and its reach grows wider. The time has come to choose. Will we accept being watched in every aspect of our lives in exchange for convenience and safety? Or will we tolerate some inconvenience to preserve freedom and human dignity?


6. The Threat of Digital Dictatorship

Big Data Seized Without Consent

Every moment of your day — checking your phone in the morning, riding the subway to work, buying a coffee, shopping online — becomes data. And that data is collected, analyzed, and used as grounds for judgment without your consent.

The most basic flaw in AI surveillance systems operating around the world is that they never bother to ask. Israel's Lavender system collected and analyzed the data of most of Gaza's 2.3 million residents. China forcibly harvests facial data from its citizens regardless of their wishes.

The right to informational self-determination is a fundamental right in any democratic society. Every person has the right to know who is using their information, when, and for what purpose — and to decide whether to allow or refuse that use. AI surveillance systems override this right completely.

The scope of what is collected defies imagination. Not just photographs and names, but location data, call records, internet search histories, social media activity, even gait and behavioral patterns. Your every move is recorded and analyzed.

The Tyranny of Black-Box Algorithms

Ordinary people have no way of knowing how these systems work. How Lavender's algorithm was trained and what it learned, what criteria China's facial recognition uses to identify individuals — none of this has been made clear. Systems that make life-and-death decisions operate behind a curtain of secrecy.

Algorithmic transparency is a vital principle in a democratic society. People have a right to understand the basis and process behind automated decisions that affect their rights. But today's AI surveillance systems operate as black boxes. No one outside the system can verify how they function.

This is dangerous. If an algorithm carries a bias or contains an error, there is no mechanism to detect or correct it. When a wrong decision is made, the reasons are unknowable, and so the error cannot be rectified.

These systems are far from perfect. Lavender's 10 percent error rate means innocent civilians are at risk of being wrongly identified. China's 99.8 percent facial recognition accuracy still means two in a thousand are misidentified. Applied to 1.4 billion people, that translates to roughly 2.8 million potential victims.
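The scale of a small error rate is easy to underestimate. A few lines of arithmetic, using only the accuracy figure quoted above and a round approximation of the population, make it concrete:

```python
# Back-of-the-envelope count of misidentifications for the figures
# quoted in the text. The accuracy (99.8%) and population (1.4 billion,
# a round approximation) come from the passage above.

population = 1_400_000_000   # approximate population under surveillance
accuracy = 0.998             # claimed facial recognition accuracy

error_rate = 1 - accuracy            # 0.2% of identifications are wrong
misidentified = population * error_rate

print(f"error rate: {error_rate:.1%}")
print(f"people misidentified: {misidentified:,.0f}")  # about 2.8 million
```

Even a system that is right 998 times out of 1,000 leaves millions of people exposed to a wrong classification when applied at national scale.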

A 24-Hour Electronic Prison

Consider the world that modern AI surveillance creates. From the moment you leave your home to the moment you return — every waking hour — you are recorded and analyzed. Where you go, whom you meet, what you buy, even the expressions on your face. Everything becomes data.

China's surveillance apparatus is linked to the Social Credit System, which evaluates citizens' social behavior and loyalty to the government. It severely restricts people's right to act and think freely.

Pervasive surveillance produces a "chilling effect." When people know they are constantly watched, they hold back from expressing opinions freely or participating in political activity. The result is the suppression of free expression and political engagement — the very foundations of democracy.

Look at the Uyghurs. People were imprisoned for unauthorized Quran study, for wearing a long beard, for using a VPN. Every one of these is a violation of religious freedom, freedom of expression, and privacy. In a society where ordinary actions are treated as crimes, no one is safe.

The Machine Passes Sentence on Life and Death

The testimony of an Israeli intelligence operative about the Lavender system is haunting. "As a human being, I played no role other than stamping the approval." In the most consequential decisions — those involving human life — human judgment was effectively removed.

Entrusting life-and-death decisions to a machine is profoundly dangerous. No matter how advanced, AI cannot fully grasp the context of a complex situation or render an ethical judgment. In wartime and security operations, unexpected variables arise constantly. Mechanical decision-making in such conditions can lead to catastrophic outcomes.

Automated decisions also blur lines of accountability. When an innocent person is harmed by a wrong decision, it is unclear who bears responsibility — the developer, the operator, or the commander who issued the order.

Consider again the case of Mosab Abu Toha. A poet fleeing with his three-year-old child, he was misidentified by an AI facial recognition system, classified as a wanted person, and detained for two days, during which he was beaten and tortured. A machine's mistake left a permanent scar on a man's life.

Digital Hounds Hunting the Vulnerable

Another profound problem with AI surveillance is that it disproportionately targets certain groups. China's system is wielded as an instrument of repression against Uyghurs, Kazakhs, and other ethnic minorities. This violates the basic human rights principle that all people must be treated equally.

AI systems reflect the biases present in their training data. If the data contains prejudice against a particular race or religion, the AI learns and reproduces that prejudice. Technology solidifies and amplifies existing social discrimination.

Ethnic and religious minorities already occupy vulnerable positions in society. AI surveillance that monitors and controls them with disproportionate intensity worsens their human rights conditions further.

Huawei's "Uyghur Alarm" and Alibaba's Uyghur identification software lay bare how technology can become a tool of racial discrimination. A world in which an alarm sounds automatically because of someone's ethnicity — is that the future we want?

Digital Outlaws Trampling International Law

Media outlets have condemned Israel's AI-powered strikes as "acts against humanity in clear violation of international law." International law guarantees every person's right to life, privacy, freedom of expression, and freedom of religion. International humanitarian law requires that even during war, combatants distinguish between civilians and military targets and avoid excessive civilian casualties.

The AI surveillance systems now in operation violate these principles head-on. The "Where's Daddy?" system deliberately targeting homes where families are gathered; China's wholesale surveillance and repression of Uyghurs; America's indiscriminate interception of allied communications — all constitute breaches of international law.

What is most alarming is that these violations are justified in the name of "technological progress." Each time a new technology emerges, the logic that existing legal and ethical standards can be waived gains currency.

The Erasure of Human Dignity

The fundamental problem with AI surveillance is that it reduces human beings to numbers and data. Lavender rates people on a scale of 1 to 100. China's Social Credit System similarly quantifies every aspect of individual behavior.

Every human being is born with equal dignity. It is an absolute value that cannot be scored or calculated by any standard. AI surveillance systems deny this dignity, viewing people through a lens of efficiency and profit alone.

What is graver still is that these scores determine life and death. A high score from Lavender means a kill order. A low score in the Social Credit System means social exile. We have entered an age in which a number assigned by a machine decides a person's fate.

A Warning Bell for the End of Democracy

The spread of AI surveillance threatens the core values of democracy at their root. Freedom of expression, freedom of assembly, freedom of religion, the right to privacy — every right that forms the foundation of a democratic society is in danger.


Search Terms

AI surveillance global, Lavender AI system Israel Gaza, Where's Daddy tracking system IDF, Palantir Technologies government surveillance, China Skynet CCTV facial recognition, Uyghur surveillance Xinjiang, social credit system China, NSA PRISM Snowden, Five Eyes intelligence alliance, facial recognition human rights, algorithmic bias discrimination, automated killing AI warfare, digital authoritarianism, AI human rights impact, Corsight facial recognition Gaza, Sharp Eyes project China, Palantir Gotham Foundry, AI predictive surveillance, chilling effect surveillance democracy, biometric data collection ethics