Palantir, the secretive data behemoth linked to the Trump administration, expands into Europe

Corporations work in cahoots with surveillance states and rogue governance.

By Nicolas Kayser-Bril (*) - 11. November 2019

The data analysis company, known in particular for running the deportation machine of the Trump administration, is expanding aggressively into Europe. Who are its clients?

Palantir was founded in 2004, in the wake of the September 11 attacks. Its founders wanted to help intelligence agencies organize the data they collected, so that they would identify threats before they could strike. It is widely rumored that its tools helped find Osama Bin Laden prior to his assassination in 2011 (another theory is that the US simply bribed Pakistani officials).

But Palantir is not good at making money. The company has never been profitable, in large part because it had to customize its products for each client, making economies of scale impossible. A new product launched in 2017, called Foundry, is supposed to solve this problem. Europe became the testing ground for this new commercial strategy, which relies largely on Foundry.

AlgorithmWatch asked close to forty German companies about their links to Palantir and browsed hundreds of open sources to map Palantir’s clients.

Palantir’s software is nothing special. Despite claims that it could turn “data landfills into gold mines,” it simply provides a visual interface that lets clients interact with their own data streams. It is built on top of existing technologies such as Apache Spark, a cluster-computing framework. An employee, who might not be privy to every product of the company, wrote in 2016 that Palantir did “no artificial intelligence”, “no machine learning” and “no magic”.
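To give a sense of what “built on top of Apache Spark” means, here is a minimal, illustrative PySpark sketch of the kind of off-the-shelf aggregation such platforms wrap in a visual interface; the file name and column names are invented for the example, and none of this is Palantir’s own code.

```python
# Illustrative only: a generic Spark aggregation of the sort a
# data-analysis dashboard might render as a chart. The CSV path and
# the "category" column are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("client-data-demo").getOrCreate()

# Load a client's own records (a hypothetical CSV with a header row).
events = spark.read.csv("client_events.csv", header=True, inferSchema=True)

# Group and count records per category, largest groups first.
summary = (
    events.groupBy("category")
          .agg(F.count("*").alias("n_events"))
          .orderBy(F.desc("n_events"))
)
summary.show()

spark.stop()
```

The point of the comparison is that such building blocks are open source and widely available; what Palantir sells is the packaging around them, not proprietary magic.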

These relatively modest capabilities might explain why several clients, including American Express and Coca-Cola, dropped Palantir in the last few years. Giovanni Tummarello, co-founder of the Ireland-based Siren.io, a competitor, claimed in 2017 to have signed some of Palantir’s former clients, mostly due to lower prices.

Rooted in politics

What makes Palantir worth watching are its political ramifications. Peter Thiel, a co-founder of and investor in the company, served on Donald Trump’s transition team in 2016. The CIA, via its In-Q-Tel investment arm, took an early stake in the firm. Today, Palantir provides US immigration authorities with software that helps them implement their policy of separating children from their parents at the border.

In Europe, the company presented itself as a bulwark against terrorism. It signed France’s intelligence services and the Danish police as clients in 2016, after both countries had suffered terrorist attacks. In Denmark, laws had to be changed to allow for the collection of personal data to “prevent” future crimes – and to feed Palantir’s software.

While Palantir claims that clients remain in control of their data, it strives to convince them to share most of it in anonymized form in order to improve its offering to all customers. Skywise, a tool based on Foundry and developed together with Airbus, encourages airlines to share their data so that everyone benefits.

Giving Palantir such control over private or public-sector data could be dangerous. In 2017, the New York Police Department decided to discontinue its relationship with Palantir, claiming the tool brought too little value for money. Palantir refused to release the data its client had entrusted to the platform in an open format, holding it de facto hostage.

Despite such practices and the breadth of the competition, some public-sector organizations still present Palantir as unavoidable. The police in Hesse, a German Land, said as much when they bypassed standard public tender procedures to buy the company’s tool.

Palantir’s CEO, Alex Karp, sits on the boards of German giants BASF and Axel Springer and was seen cosying up to then-defense minister Ursula von der Leyen at the 2018 Munich Security Conference. There is no doubt that he will do everything he can to peddle more of Palantir’s wares to companies and public services in Europe.

But trusting the company requires a leap of faith. Palantir previously misrepresented its involvement in the Cambridge Analytica scandal and lied about its role in the deportation system set up by the Trump administration. Parliaments across Europe should keep a close watch on it.

 

(*) Author: Nicolas Kayser-Bril
Additional research by Boris Kartheuser

 

N.B.: If you are a coder or implementer in an entity that uses malicious A/IS or biased algorithms in decision making, please blow the whistle by sending the info from a protonmail account to

READ ABOUT THE BIGGEST SCANDAL IN THE UN:

World Food Programme embraces CIA-linked Data Miner Palantir

MUST READ:

 

AI Professor Details Real-World Dangers of Algorithm Bias

 

By Sidney Fussell - 12. August 2017

Screengrab: Kate Crawford’s “The Trouble With Bias” at NIPS 2017

However quickly artificial intelligence evolves, and however steadfastly it becomes embedded in our lives, from health to law enforcement to sex, it can’t outpace the biases of its creators: humans. Kate Crawford, a Microsoft researcher and co-founder of AI Now, a research institute studying the social impact of artificial intelligence, delivered an incredible keynote speech, titled “The Trouble with Bias,” at the Neural Information Processing Systems conference on Tuesday. In her keynote, she presented a fascinating breakdown of the different types of harm done by algorithmic biases.

As she explained, the word “bias” has a mathematically specific definition in machine learning, usually referring to errors in estimation or over/under representing populations when sampling. Less discussed is bias in terms of the disparate impact machine learning might have on different populations. There’s a real danger to ignoring the latter type of bias. Crawford details two types of harm: allocative harm and representational harm.
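To make the statistical sense of the word concrete, here is a toy Python sketch, with invented numbers, of sampling bias: when one group is under-represented in a sample, an estimate of a population average drifts away from the true value.

```python
# Toy illustration of sampling bias: group B makes up 10% of the
# population but only 1% of the sample, so the sample mean
# systematically underestimates the population mean. All numbers
# are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

group_a = rng.normal(loc=50.0, scale=5.0, size=9_000)
group_b = rng.normal(loc=70.0, scale=5.0, size=1_000)
population = np.concatenate([group_a, group_b])

# A sample that barely includes group B.
sample = np.concatenate([rng.choice(group_a, 990), rng.choice(group_b, 10)])

print("population mean:", round(population.mean(), 1))   # roughly 52
print("biased sample mean:", round(sample.mean(), 1))    # roughly 50, too low
```

This is the kind of bias machine learning already has tools to measure; the harms Crawford goes on to describe are harder to quantify.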

“An allocative harm is when a system allocates or withholds a certain opportunity or resource,” she began. It’s when AI is used to make a certain kind of decision, say on mortgage applications, but unfairly or erroneously denies it to a certain group. She offered the hypothetical example of a bank’s AI continually denying mortgage applications to women. She then offered a startling real-world example: a risk-assessment AI routinely rated black defendants as a higher risk than white defendants. (Black defendants were referred to pre-trial detention more often because of this decision.)

Representational harms “occur when systems reinforce the subordination of some groups along the lines of identity,” she said. Essentially, they arise when technology reinforces stereotypes or diminishes specific groups. “This sort of harm can take place regardless of whether resources are being withheld.” Examples include Google Photos labeling black people as “gorillas” (a harmful stereotype that has historically been used to claim black people literally aren’t human), or AI that assumes East Asians are blinking when they smile.

Crawford tied together the complex relationship between the two harms by citing a 2013 paper by Latanya Sweeney. Sweeney famously documented the pattern in search results whereby googling a “black-sounding” name surfaces ads for criminal background checks. In her paper, Sweeney argued that this representational harm of associating blackness with criminality can have an allocative consequence: employers who search applicants’ names may discriminate against black applicants because the results tie their names to criminality.

“The perpetuation of stereotypes of black criminality is problematic even if it is outside of a hiring context,” Crawford explained. “It’s producing a harm of how black people are represented and understood socially. So instead of just thinking about machine learning contributing to decision making in, say, hiring or criminal justice, we also need to think about the role of machine learning in harmful representations of identity.”

Search engine results and online ads both represent the world around us and influence it. Online representation doesn’t stay online. It can have real economic consequences, as Sweeney argued. It also didn’t originate online—these stereotypes of criminality/inhumanity are centuries old.

As Crawford’s speech continued, she went on to detail various types of representational harm, their connections to allocative harms and, most interestingly, ways to diminish their impact. A commonly suggested quick fix is to either break problematic word associations or remove problematic data, what’s often called “scrubbing to neutral” (a minimal sketch of the idea follows below). When Google’s image search was shown in 2015 to have a pattern of gender bias, returning almost entirely men for queries like “CEO” or “executive,” Google eventually reworked the search algorithm so that results are more balanced. But this technique has its own ethical concerns.
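As a rough illustration of what such scrubbing can look like, the sketch below projects a “bias direction” out of a word vector, in the spirit of published embedding-debiasing work; the three-dimensional vectors are toy values, not any production system’s, and this is not how Google’s image-search fix worked.

```python
# Toy "scrubbing to neutral": remove the component of a word vector
# that lies along a chosen bias direction. Vectors are invented,
# three-dimensional examples for illustration only.
import numpy as np

def neutralize(word_vec, bias_direction):
    """Project out the component of word_vec along bias_direction."""
    b = bias_direction / np.linalg.norm(bias_direction)
    return word_vec - np.dot(word_vec, b) * b

he = np.array([1.0, 0.2, 0.1])
she = np.array([-1.0, 0.2, 0.1])
engineer = np.array([0.6, 0.9, 0.3])   # toy vector leaning toward "he"

gender_axis = he - she
scrubbed = neutralize(engineer, gender_axis)

print(np.dot(engineer, gender_axis))   # nonzero: association present
print(np.dot(scrubbed, gender_axis))   # ~0.0: association removed
```

Even in this toy form, someone has to decide which axis counts as bias and which words get scrubbed, which is exactly the judgment Crawford questions next.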

“Who gets to decide which terms should be removed and why those ones in particular?” Crawford asked. “And an even bigger question is whose idea of neutrality is at work? Do we assume neutral is what we have in the world today? If so, how do we account for years of discrimination against particular subpopulations?”

Crawford opts for interdisciplinary approaches to issues of bias and neutrality, drawing on the logics and reasoning of ethics, anthropology, gender studies, sociology and other fields, and rethinking the idea that there’s any one, easily quantifiable answer.

“I think this is precisely the moment where computer science is having to ask much bigger questions because it’s being asked to do much bigger things.”

(*) Author: Sidney Fussell

 

READ ALSO:

‘Black Data’ Is the Reason Why Smart Policing Is Still Incredibly Biased

The New Tech That Could Turn Police Body Cams Into Nightmare Surveillance Tools

Illinois Scraps Child Abuse Prediction Software for Not 'Predicting Much'