China is Becoming a Blockchain-Powered Orwellian Dystopia
By Shilpa Lama - 30 October 2019
China seems to be going all-in with the mission to integrate blockchain technology into the state machinery and the world probably shouldn’t ignore the possible ramifications. Especially not now — after President Xi Jinping’s open and substantiated endorsement of the technology a few days ago.
No doubt, the Chinese President’s speech was bullish for the still-nascent-but-promising technology and highlighted its potential in various walks of life. However, beneath the surface, there was enough substance to cause worry among the proponents of a free and democratic Chinese society.
These worries are perhaps justified too considering the communist regime’s questionable reputation with human rights in Tibet, Hong Kong, and even in Mainland China.
A Blockchain-Powered Orwellian Dystopia
For those out of the loop, Xi made it abundantly clear during his speech that the Chinese blockchain community should rule the roost by setting policies and conventions globally, as BeInCrypto had reported previously.
Of course, there’s nothing wrong with wanting to see your country spearheading the development and adoption of potentially revolutionary technology.
However, the Chinese government is notorious for blatantly misusing technology to suppress dissent and infringe on the civil rights of nearly 1.5 billion people. Case in point — the mass surveillance system with highly sophisticated facial recognition, the Great Firewall, and the truly Orwellian social credit scoring system.
Going by these past trends, it would be presumptuous to think that President Xi’s government will refrain from reinforcing these draconian systems with blockchain technology and its offshoots. The likelihood of just the opposite unfolding is growing stronger by the day, with the government ever so close to releasing its own digital currency.
Potential Danger From China’s Digital Currency
Despite all the obvious benefits, a cashless economy also has its fair share of drawbacks, especially when an authoritarian regime controls all facets of the digital monetary system. In China’s case, the soon-to-be-released currency will be a yuan-pegged digital asset built atop a permissioned ledger. That is quite unlike public blockchain-powered assets such as Bitcoin or Ethereum.
Because the underlying ledger itself is permissioned and issued by a centralized authority, the Chinese government will enjoy total control over the network. Furthermore, the digital wallets required to store this digital currency will also be issued by the central bank, giving the government unrestricted access to all transaction data.
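The control structure described above can be sketched in a few lines. The class and names below are hypothetical and purely illustrative, not any actual CBDC design; they simply show why a permissioned ledger run by one authority centralizes both writes and full transaction visibility:

```python
# Illustrative sketch (hypothetical): a permissioned ledger where only
# the central authority can append transactions or audit the history.

class PermissionedLedger:
    def __init__(self, authority):
        self.authority = authority      # e.g. the central bank
        self.transactions = []          # stored in the clear, visible to the operator

    def transfer(self, issuer, sender, receiver, amount):
        # Only the ledger operator may write; ordinary users cannot.
        if issuer != self.authority:
            raise PermissionError("only the ledger operator can write")
        self.transactions.append((sender, receiver, amount))

    def audit(self, requester):
        # The operator can inspect every transaction ever recorded.
        if requester != self.authority:
            raise PermissionError("only the ledger operator can audit")
        return list(self.transactions)

ledger = PermissionedLedger(authority="central-bank")
ledger.transfer("central-bank", "alice", "bob", 100)
history = ledger.audit("central-bank")   # the operator sees everything
```

Contrast this with a public chain, where anyone can submit a transaction and no single party holds a privileged audit role over identified users.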
Once the government has total command over this enforced-cashless economy, the use of blockchain for tightening its grip over the population becomes even easier.
Blockchain for Social Credit and Digital Tracking
The government in China already controls the country’s cyberspace with an iron fist. State-sponsored censorship of content critical of the government is rampant and so is the unapologetic monitoring of online traffic.
A blockchain-based system in the disguise of social welfare schemes can further add to these diabolical practices. For example, any such system can allow the government to store digital identities of citizens on a blockchain and then use the same system to conduct real-time monitoring of their movement, financial transactions, social media accounts, and other digital footprints.
With a whole range of interconnected databases, any such network is likely to be far more comprehensive than even the most intrusive surveillance programs in Western democracies, or for that matter, in most parts of the world.
Even worse, a blockchain network capable of tracking citizens in real time will add more teeth to the Chinese government’s social credit system, which basically ranks citizens based on their ‘social value’ and loyalty to the government.
A lower score on this draconian system can have far-reaching consequences that go well beyond the realm of personal finances. For example, a low social credit score can render citizens unable to find good employment or even send their children to good public schools.
Shilpa Lama is a network engineer and management graduate who is deeply passionate about artificial intelligence and blockchain technology. She has been associated with several leading science & tech publications throughout her career as a journalist and columnist. Full-time foodie, semi-skilled musician, wannabe novelist.
Images courtesy of Shutterstock.
China has become the live testing ground for what immoral Anglo-American developers dreamed up earlier:
Voters have a right to keep their political beliefs private. But according to some researchers, it won’t be long before a computer program can accurately guess whether people are liberal or conservative in an instant. All that will be needed are photos of their faces.
Michal Kosinski – the Stanford University professor who went viral last week for research suggesting that artificial intelligence (AI) can detect whether people are gay or straight based on photos – said sexual orientation was just one of many characteristics that algorithms would be able to predict through facial recognition.
Using photos, AI will be able to identify people’s political views, whether they have high IQs, whether they are predisposed to criminal behavior, whether they have specific personality traits and many other private, personal details that could carry huge social consequences, he said.
Kosinski outlined the extraordinary and sometimes disturbing applications of facial detection technology that he expects to see in the near future, raising complex ethical questions about the erosion of privacy and the possible misuse of AI to target vulnerable people.
“The face is an observable proxy for a wide range of factors, like your life history, your development factors, whether you’re healthy,” he said.
Faces contain a significant amount of information, and using large datasets of photos, sophisticated computer programs can uncover trends and learn how to distinguish key traits with a high rate of accuracy. With Kosinski’s “gaydar” AI, an algorithm used online dating photos to create a program that could correctly identify sexual orientation 91% of the time with men and 83% with women, just by reviewing a handful of photos.
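The basic mechanism behind such classifiers can be illustrated with a toy model. The code below is a hypothetical sketch on synthetic data, not the actual study: it trains a simple logistic regression by gradient descent on a single made-up "facial feature" and measures accuracy the same way such studies report it, as the fraction of correct binary predictions.

```python
# Hypothetical toy illustration: a binary classifier on a synthetic
# 1-D feature, trained with plain gradient descent, scored by accuracy.
import math
import random

random.seed(0)
# Synthetic data: two groups whose feature values overlap partially.
data = [(random.gauss(1.0, 0.8), 1) for _ in range(200)] + \
       [(random.gauss(-1.0, 0.8), 0) for _ in range(200)]

w, b = 0.0, 0.0
for _ in range(500):
    gw = gb = 0.0
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))   # predicted probability
        gw += (p - y) * x                      # gradient of log-loss
        gb += (p - y)
    w -= 0.1 * gw / len(data)
    b -= 0.1 * gb / len(data)

correct = sum((1 / (1 + math.exp(-(w * x + b))) > 0.5) == (y == 1)
              for x, y in data)
accuracy = correct / len(data)
```

Because the two synthetic groups overlap, accuracy stays well below 100% no matter how long the model trains, which is exactly why a fixed error rate applied at population scale misclassifies many real people.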
Kosinski’s research is highly controversial, and faced a huge backlash from LGBT rights groups, which argued that the AI was flawed and that anti-LGBT governments could use this type of software to out gay people and persecute them. Kosinski and other researchers, however, have argued that powerful governments and corporations already possess these technological capabilities and that it is vital to expose possible dangers in an effort to push for privacy protections and regulatory safeguards, which have not kept pace with AI.
Kosinski, an assistant professor of organizational behavior, said he was studying links between facial features and political preferences, with preliminary results showing that AI is effective at guessing people’s ideologies based on their faces.
This is probably because political views appear to be heritable, as research has shown, he said. That means political leanings are possibly linked to genetics or developmental factors, which could result in detectable facial differences.
Kosinski said previous studies have found that conservative politicians tend to be more attractive than liberals, possibly because good-looking people have more advantages and an easier time getting ahead in life.
Kosinski said the AI would perform best for people who are far to the right or left and would be less effective for the large population of voters in the middle. “A high conservative score … would be a very reliable prediction that this guy is conservative.”
Kosinski is also known for his controversial work on psychometric profiling, including using Facebook data to draw inferences about personality. The data firm Cambridge Analytica has used similar tools to target voters in support of Donald Trump’s campaign, sparking debate about the use of personal voter information in campaigns.
Facial recognition may also be used to make inferences about IQ, said Kosinski, suggesting a future in which schools could use the results of facial scans when considering prospective students. This application raises a host of ethical questions, particularly if the AI is purporting to reveal whether certain children are genetically more intelligent, he said: “We should be thinking about what to do to make sure we don’t end up in a world where better genes means a better life.”
Some of Kosinski’s suggestions conjure up the 2002 science-fiction film Minority Report, in which police arrest people before they have committed crimes based on predictions of future murders. The professor argued that certain areas of society already function in a similar way.
He cited school counselors intervening when they observe children who appear to exhibit aggressive behavior. If algorithms could be used to accurately predict which students need help and early support, that could be beneficial, he said. “The technologies sound very dangerous and scary on the surface, but if used properly or ethically, they can really improve our existence.”
There are, however, growing concerns that AI and facial recognition technologies are actually relying on biased data and algorithms and could cause great harm. It is particularly alarming in the context of criminal justice, where machines could make decisions about people’s lives – such as the length of a prison sentence or whether to release someone on bail – based on biased data from a court and policing system that is racially prejudiced at every step.
Kosinski predicted that with a large volume of facial images of an individual, an algorithm could easily detect if that person is a psychopath or has high criminal tendencies. He said this was particularly concerning given that a propensity for crime does not translate to criminal actions: “Even people highly disposed to committing a crime are very unlikely to commit a crime.”
He also cited an example referenced in the Economist – which first reported the sexual orientation study – that nightclubs and sport stadiums could face pressure to scan people’s faces before they enter to detect possible threats of violence.
Kosinski noted that in some ways, this wasn’t much different from human security guards making subjective decisions about people they deem too dangerous-looking to enter.
The law generally considers people’s faces to be “public information”, said Thomas Keenan, professor of environmental design and computer science at the University of Calgary, noting that regulations have not caught up with technology: no law establishes when the use of someone’s face to produce new information rises to the level of privacy invasion.
Keenan said it might take a tragedy to spark reforms, such as a gay youth being beaten to death because bullies used an algorithm to out him: “Now, you’re putting people’s lives at risk.”
Even with AI that makes highly accurate predictions, some percentage of predictions will still be incorrect.
“You’re going down a very slippery slope,” said Keenan, “if one in 20 or one in a hundred times … you’re going to be dead wrong.”