UPDATE 01. June 2021: For the First Time, Drones Autonomously Attacked Humans. This Is a Turning Point.

ICYMI: Killer drone ‘hunted down a human target’ without being told to + We need legislation against ‘killer robots,’ Human Rights Watch and ECOTERRA Intl. say

NO ONE WOULD BE SAFE - Campaign to Stop Killer Robots

An Autonomous Weaponized Drone "Hunted Down" Humans Without Command For First Time

ECOTERRA Intl. Demands Immediate Global Moratorium on All BigTech Kill-System Developments



By James Felton - 31 MAY 2021

An autonomous drone may have hunted down and attacked humans without input from human commanders, a recent UN report has revealed. It would be the first known attack on humans carried out by artificial intelligence (AI), and it is unclear whether the drone killed anyone during the incident, which took place in Libya in March 2020.

The report to the UN Security Council states that on March 27, 2020, Libyan Prime Minister Fayez al-Sarraj ordered "Operation PEACE STORM", which saw unmanned combat aerial vehicles (UCAVs) used against Haftar Affiliated Forces. Drones have been used in combat for years, but what made this attack different is that the drones operated without human input after the initial attack, with other support, had taken place.

"Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 (see annex 30) and other loitering munitions," according to the report. 

"The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true 'fire, forget and find' capability."

The KARGU is a rotary-wing attack drone designed for asymmetric warfare or anti-terrorist operations, which according to the manufacturers "can be effectively used against static or moving targets through its indigenous and real-time image processing capabilities and machine learning algorithms embedded on the platform." A video showcasing the drone shows it targeting mannequins in a field, before diving at them and detonating an explosive charge.

Against human targets, the drones proved effective.

"Units were neither trained nor motivated to defend against the effective use of this new technology and usually retreated in disarray," the report reads. "Once in retreat, they were subject to continual harassment from the unmanned combat aerial vehicles and lethal autonomous weapons systems, which were proving to be a highly effective combination."

The report did not go into specifics about whether there were casualties or deaths connected with the attack, although it notes that the drones were "highly effective" in helping to inflict "significant casualties" on enemy Pantsir S-1 surface-to-air missile systems. It is perfectly possible that the first human has been attacked or killed by a drone operated by a machine-learning algorithm.

The attack, whether it produced casualties or not, will not be welcomed by campaigners against the use of "killer robots".

"There are serious doubts that fully autonomous weapons would be capable of meeting international humanitarian law standards, including the rules of distinction, proportionality, and military necessity, while they would threaten the fundamental right to life and principle of human dignity," says Human Rights Watch. "Human Rights Watch calls for a preemptive ban on the development, production, and use of fully autonomous weapons."

Among other concerns is that the AI algorithms used by the robots may not be robust enough, or may be trained on flawed datasets. As well as being open to errors (such as a Tesla tricked into swerving off the road), there are countless examples of bias in machine-learning tech: facial recognition that doesn't recognize non-white skin tones, cameras that tell Asian people to stop blinking, soap dispensers that won't give you soap if you're black, and self-driving cars that are more likely to run you over if you are not white.

Now, it appears, we could soon be trusting life and death decisions to tech that may be open to similar problems.




For the First Time, Drones Autonomously Attacked Humans. This Is a Turning Point.


The Kargu-2 quadcopter is armed with an explosive charge and can attack autonomously EMRE CAVDAR/STM

Drone experts have long dreaded this moment. Drones may have attacked humans fully autonomously for the first time.

By KYLE MIZOKAMI - 01. June 2021

  • Libyan forces reportedly used Kargu-2 drones to autonomously seek out and attack human targets.
  • This is the first recorded case of using a self-hunting drone against people.
  • Drone experts say this development could endanger people far beyond the traditional battlefield.

The world’s first recorded case of an autonomous drone attacking humans took place in March 2020, according to a United Nations (UN) security report detailing the ongoing Second Libyan Civil War. Libyan forces used the Turkish-made drones to “hunt down” and jam retreating enemy forces, preventing them from using their own drones.

The field report (via New Scientist) describes how the Haftar Affiliated Forces (HAF), loyal to Libyan Field Marshal Khalifa Haftar, came under attack by drones from the rival Government of National Accord (GNA) forces.

After a successful drive against HAF forces, the GNA launched drone attacks to press its advantage. From the report:

Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 (above) and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true “fire, forget and find” capability.

These military drones may have autonomously attacked humans for the first time ever last year, according to a United Nations report. While the full details of the incident, which took place in Libya, haven’t been released and it is unclear if there were any casualties, the event suggests that international efforts to ban lethal autonomous weapons before they are used may already be too late.

The robot in question is a Kargu-2 quadcopter produced by STM.

The report says Turkey supplied the drones to Libyan forces, which is a violation of a UN arms embargo slapped on combatants in the conflict.

The Kargu-2 (“Hawk”), from Turkish defense contractor STM, is a quadcopter drone designed to carry a weapons payload. STM’s marketing video below explicitly describes Kargu-2 as being capable of autonomous attack.


How does it work? First, the shooter loads coordinates into the Kargu-2 drone's software and launches the drone. The drone travels to those coordinates, identifies likely "targets," and executes a dive maneuver, swooping down on the target and detonating a shotgun-like explosive package.

In military parlance, this process is known as "fire and forget," which means once the shooter launches the drone, they can do something else, like relocate, prepare another attack, or even go eat lunch.

Drone experts have been dreading this moment while advocating for a ban on autonomous attack drones.

“The UN report implying first use of autonomous weapons against soldiers paints an uncertain picture—however, that’s the point,” Zachary Kallenborn, an official U.S. Army “Mad Scientist” and national security consultant, tells Pop Mech. He continues:

“The first use of autonomous weapons in war won't be heralded with a giant fireball in the sky and dark words on how humanity has become Death, Destroyer of Worlds. First use of autonomous weapons may just look like an ordinary drone. The event illustrates a key challenge in any attempt to regulate or ban autonomous weapons: how can we be sure they were even used?”


Kargu-2 drones under production at STM in Ankara, Turkey, June 2020. ANADOLU AGENCY/GETTY IMAGES

The Kargu-2 indeed looks like any other quadcopter drone. The major difference is the software, which might be difficult to obtain from scattered bits of plastic for forensic analysis. This raises the question: Could military forces modify civilian drones into human-hunting counterparts to attack civilians?

There are some events in the history of mankind, like the atomic bomb test at Alamogordo, New Mexico, in 1945, that are so profound they serve as a divider between one social, economic, or military era and another.

The events in Libya may similarly divide the time when humans had full control of weapons from a time when machines make their own decisions to kill.







BIG SHAME: Often under the guise of CONservation organizations or "peacekeeping" missions, the military has for more than 40 years developed and tested the most heinous surveillance and autonomous killing systems in the remotest corners of African countries in particular, which to this day lack any regulations and legislation on such technologies - just as African children still serve as guinea pigs for Big Pharma corporations and their Medical Mafia. AFRICA WAKE UP!!!



Killer drone ‘hunted down a human target’ without being told to

By Paula Froelich - 29. May 2021

Arnold Schwarzenegger could’ve seen this one coming.

After a United Nations commission to block killer robots was shut down in 2018, a new report from the international body now says the Terminator-like drones are now here.

An "autonomous weaponized drone hunted down a human target" last year and attacked them without being specifically ordered to, according to a report from the UN Security Council's Panel of Experts on Libya, published in March 2021 and covered by New Scientist magazine and the Star.

The March 2020 attack was in Libya and perpetrated by a Kargu-2 quadcopter drone produced by Turkish military tech company STM “during a conflict between Libyan government forces and a breakaway military faction led by Khalifa Haftar, commander of the Libyan National Army,” the Star reports, adding: “The Kargu-2 is fitted with an explosive charge and the drone can be directed at a target in a kamikaze attack, detonating on impact.”

The drones were operating in a “highly effective” autonomous mode that required no human controller and the report notes:

“The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability” – suggesting the drones attacked on their own.

Zak Kallenborn, of the National Consortium for the Study of Terrorism and Responses to Terrorism in Maryland, said this could be the first time that drones have autonomously attacked humans, and raised the alarm.

“How brittle is the object recognition system?” Kallenborn asked in the report. “… how often does it misidentify targets?”

Jack Watling, of the UK defense think tank Royal United Services Institute, told New Scientist: "This does not show that autonomous weapons would be impossible to regulate. But it does show that the discussion continues to be urgent and important. The technology isn't going to wait for us."

In August of last year, Human Rights Watch warned of the need for legislation against “killer robots” while NYC mayoral candidate Andrew Yang has called for a global ban on them – something the US and Russia are against.




We need legislation against ‘killer robots,’ Human Rights Watch says

By Hannah Sparks - 10. August 2021


It’s a Hollywood blockbuster premise rooted in our not-so-distant future.

For decades, robot thrillers such as "The Terminator," "Blade Runner" and "Westworld" have warned viewers that our reliance on artificial intelligence is a real threat to civilization. Now, real-life researchers with Human Rights Watch are sounding the alarm on potentially world-ending "killer robots," according to a new report.

The message comes as part of their Campaign to Stop Killer Robots, which calls for a global ban on “fully autonomous weapons.”

In the words of Arnold Schwarzenegger’s Terminator: Hasta la vista, baby.

“Removing human control from the use of force is now widely regarded as a grave threat to humanity that, like climate change, deserves urgent multilateral action,” said Mary Wareham, advocacy director of the arms division at Human Rights Watch, in a press release on HRW.org. The campaign is pushing for “an international ban treaty” on AI-operated weapons.

The review, which analyzed the defense policies of 97 countries that have outlined stances on lethal robotics, revealed that a majority believe human intervention is fundamental to ethical weapons systems.

United Nations Secretary-General António Guterres has called the advanced programs “morally repugnant and politically unacceptable” and urged countries to take action.

The authors explain that autonomous weapons “would decide who lives and dies, without … inherently human characteristics such as compassion that are necessary to make complex ethical choices.” Aside from many other potential pitfalls of programming death machines, HRW suggests their use would also leave unclear who would be held responsible for unlawful acts of war committed by an autonomous weapon: the computer programmers or the military commanders?

So far, at least 30 countries, including Austria, Brazil and Chile, are seeking to put a global ban on the use of these weapons, according to the report.

However, a minority of influential countries, namely Russia and the United States, have hampered talks — while notably “investing heavily in the development of various autonomous weapons systems,” the report claims.






Campaign to Stop Killer Robots

Help us achieve our goal of banning fully autonomous weapons.



Contact your government

Ask if they support the calls to ban fully autonomous weapons that would select and attack targets without human control.


Non-governmental organizations can apply to become a member of the campaign. There are no fees. Learn more and apply.

Government representatives should endorse and work for a ban on fully autonomous weapons.

Technology companies should publicly commit not to engage in the development of fully autonomous weapons and call for a ban.

Individual experts working in artificial intelligence, robotics and related fields are encouraged to join the International Committee for Robot Arms Control, a co-founder of our campaign.

2021 First Session CCW meeting on lethal autonomous weapons systems

June 28 - July 5

Geneva, Switzerland



Robot dog on trial in the city centre

The robot from the US company Boston Dynamics walks across the Rathausmarkt during a PR event.

Photo: Daniel Reinhardt/dpa

Hamburg - Many Hanseatic citizens were amazed!

A robot resembling a dog or a creature from Star Wars caused astonishment in Hamburg's city centre on Thursday.

The robot is to be used by a security company as a robot-supported security solution with artificial intelligence. Photo: Daniel Reinhardt/dpa

Berlin-based security service provider Ciborius unveiled a new technology for security guards that will be deployed across Germany.

The four-legged robot "Spot" from Boston Dynamics can secure buildings without tiring, the company explained.

It has an all-round camera and is programmed in such a way that it can make its rounds agilely on its own. If it detects anomalies, it reports them.