UPDATE 22. May 2020: EU Planning “Vaccination Passport” Since 2018

PROLOGUE: The Australian government and its mainstream society have proven unable to prevent and fight devastating forest fires (without the rains, the fires would still be burning), to respect the Aboriginal owners of the continent, to handle refugees with dignity, to protect the Great Barrier Reef and other natural gems or, not least, to free and bring home their most famous journalist and publisher, Julian Assange. Nobody can therefore expect them to manage the COVID-19 situation benevolently and for the benefit of the people, and so the people must rise up against these machinations. One big problem is that the very body supposed to safeguard our universally guaranteed right to privacy, the United Nations Organization, is itself striving for global domination and stands at the very forefront of the surveillance of people. UNHCR and UNICEF, which roll such systems out on helpless refugee communities and camps, are the spearheads and worst offenders unleashed by the UN, having for years paid app developers millions to create these tools without the public noticing or the taxpayers of the funding governments authorizing them to do so. The corralling systems these agencies have developed over many decades hold even more in stock for you, since the populace is seen only as chattel.

George Orwell didn't know how easy it is to coerce modern man into submission in 2020.

Efficacy, ideology and COVIDSafe

Questions remain over measuring the effectiveness of the Australian government’s tracking app, COVIDSafe, along with the mission creep it is sure to entail

By Dr James Parker, University of Melbourne - 11. May 2020

Two weeks ago, governments in Queensland and Western Australia announced that they would start lifting some restrictions on movement and gatherings. Hours after these announcements, COVIDSafe, Australia’s new ‘contact tracing’ app, was released.

It’s hard to imagine that the timing was a coincidence.

Privacy isn’t the only concern when it comes to the COVIDSafe app. Picture: Scott Morrison

The federal government had already signalled that the app would be one of three key pillars of Australia’s ‘exit strategy’ from shutdown.

Amid all the talk of ‘war bonds’ and ‘national service’, text message campaigns and promises that the app would help us get back to the pub, that exit strategy is now being implemented.

COVIDSafe has so far been downloaded 5.5 million times, or about halfway towards the government’s target of 40 per cent of the population.

When the draft legislation detailing the app’s regulatory framework was finally released on 4 May, it was mostly what we’d been promised. From a privacy perspective, as things currently stand, COVIDSafe seems relatively benign (though it depends whether you’re comparing it to a mask or My Health Record).

But privacy isn’t the only concern when it comes to COVIDSafe. And there are some major assumptions and blind spots in the way the conversation has so far been conducted.

The most important of these is efficacy: the idea that the app will actually work if enough people download it. So far, there has been far too little questioning of this basic premise.

TraceTogether in Singapore was only downloaded by 25 per cent of the population. Picture: Shutterstock

Of course, there has been plenty of scientific modelling (much of which, by the way, suggests that even Australia’s 40 per cent target is far too low).

But TraceTogether, the Singaporean app on which COVIDSafe is directly based, was a spectacular failure.

Since its release on 20 March, and with downloads currently sitting at around 1.4 million or 25 per cent of the population, the number of coronavirus cases in Singapore has risen from 385 to nearly 22,000.

No doubt, the reasons for this 57-fold increase are complicated. But as far as the app is concerned, the issues are about more than download numbers.
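The figure is easy to verify from the case numbers quoted above:

```python
# Singapore's reported cases rose from 385 (20 March) to nearly 22,000.
initial_cases = 385
later_cases = 22_000

fold_increase = later_cases / initial_cases
print(round(fold_increase))  # roughly 57
```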

Bluetooth is not a robust technology.

Even its inventors are concerned about its reliability in the context of contact tracing. And being required to keep your Bluetooth on constantly exposes you to a significantly increased security risk.

Initial problems with the app’s functionality on iPhones have already pushed the government to move towards integrating the Apple-Google collaborative API (the software intermediary that allows two applications to talk to each other).

There are some major assumptions and blind spots when it comes to contact tracing technology. Picture: Getty Images

The developers of TraceTogether are doing the same. The tech giants have so much power here that it is proving near impossible for sovereign nations to go it alone.

And of course, users still need to remember to take their phones with them, keep their Bluetooth on and not buried deep in a bag where functionality will be impaired.
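As a rough illustration of why a phone buried in a bag matters: Bluetooth-based apps typically estimate proximity from received signal strength (RSSI), and attenuation makes a nearby phone look far away. The sketch below is purely illustrative; the thresholds and the log-distance path-loss model are assumptions for the example, not the actual COVIDSafe algorithm.

```python
# Illustrative sketch of BLE-based encounter logging, NOT the real
# COVIDSafe logic. All parameter values here are hypothetical.

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exp=2.0):
    """Estimate distance (metres) from RSSI with a log-distance model.
    tx_power_dbm is the assumed RSSI at 1 metre."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def is_close_contact(rssi_samples, max_distance_m=1.5, min_samples=3):
    """Register an encounter only if enough readings fall within
    max_distance_m. A phone muffled in a bag attenuates the signal,
    so its readings map to larger apparent distances and are dropped."""
    close = [r for r in rssi_samples if rssi_to_distance(r) <= max_distance_m]
    return len(close) >= min_samples

# The same two phones at ~1 metre, in hand versus buried in a bag:
in_hand = [-58, -60, -59, -61]   # strong signal -> short apparent distance
in_bag = [-75, -78, -80, -77]    # attenuated -> looks like several metres
print(is_close_contact(in_hand))  # True
print(is_close_contact(in_bag))   # False
```

In practice RSSI is far noisier than any such model suggests, which is exactly the reliability concern raised above.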

According to Associate Professor Adam Dunn, Head of Biomedical Informatics and Digital Health in the School of Medical Sciences at the University of Sydney in a recent talk, even if 40 per cent of the population downloaded the app and, optimistically, half of them used it properly at all times, “the likelihood of registering a contact between any two people is four per cent”.

Even in this “very optimistic” scenario, as Associate Professor Dunn puts it, fewer than one in twenty potential contacts will be captured.

At present, he says, the chance is “effectively closer to zero”. And that’s without accounting for the vagaries of Bluetooth noted above.
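Associate Professor Dunn's four per cent figure follows directly from the stated assumptions, because both parties to an encounter must be running the app properly for a contact to register:

```python
# Arithmetic behind the "four per cent" figure quoted above, using the
# assumptions stated in the talk: 40% of the population downloads the app
# and, optimistically, half of those use it properly at all times.
download_rate = 0.40
proper_use_rate = 0.50

# Probability that any one person is effectively covered.
per_person = download_rate * proper_use_rate  # 0.20

# A contact between two people is registered only if BOTH are covered.
p_contact_registered = per_person ** 2
print(round(p_contact_registered, 2))  # 0.04 -- fewer than one in twenty
```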

When Scott Morrison says easing restrictions depends on adequate uptake of COVIDSafe, it’s worth keeping these numbers and the Singaporean case in mind.

This is a population-wide experiment, make no mistake. We have very little idea whether digital contact tracing will work in Australia, or what ‘working’ would even look like.

Digital contact tracing will remain part of the global conversation for a long time. Picture: Getty Images

How long will community transmission rates need to remain ‘low’, for instance, in order for COVIDSafe to be considered a success? What precise role will the app need to have played in this respect?

What if, as in Singapore, numbers start to rise again? Will this be presented to us as evidence of the app’s failure, that – far from being a magic technological bullet – it provoked a false sense of security?

Or, as seems more likely, will rising numbers be framed as reason to demand yet more data along with stricter enforcement measures?

When and according to what criteria will Greg Hunt determine that “the use of COVIDSafe is no longer required” for the purposes of the app’s enabling legislation?

Is there any plausible scenario here where COVIDSafe doesn’t help to smooth the way for further dataveillance and control by both the state and corporations in the name of public health?

The only thing we can be certain of when it comes to COVIDSafe is that it will not be a quick fix. As a result, the most important questions politically and legally speaking are still to come.

The fact that, for now, the government has been persuaded not to demand our location data, as in the UK or India, or make the app a condition of access to public or private space, as in South Korea, India and China, should be cold comfort.

As Australian journalist Bernard Keane argues in Crikey, the question isn’t so much whether COVIDSafe is a threat to your privacy, but whether the government is.

Scott Morrison says easing restrictions depends on adequate uptake of the COVIDSafe app. Picture: Getty Images

This is a government that raids journalists’ homes, that not only legislated “one of the most comprehensive and intrusive data collection schemes in the western world” with its notorious metadata retention laws but then immediately handed access to that data to the very agencies it had expressly ruled out in the first instance.

Look at the laundry list of ‘tech fails and data breaches’ published recently in the Guardian.

In this context, concerns about ‘mission creep’ are not paranoid. They are empirically justified. Indeed, Scott Morrison has already shown his hand here. He’d make COVIDSafe mandatory if he could. And he’s not the only one.

Let’s see what happens if the numbers spike again.

Even if they don’t and COVIDSafe goes more quietly than it arrived, digital contact tracing will remain part of the global conversation for a long time to come. And it is only one of many putative technological fixes that will require even more political and legal attention.

Meanwhile, the ‘machine diagnostics’ business is booming, and with scant oversight.

One Chinese startup has already sold 1,000 pairs of ‘coronavirus-fighting smart glasses that can “see” your temperature’ to governments, businesses and schools across the country.

The ‘machine diagnostics’ business is booming, and with scant oversight. Picture: Getty Images

A professor at the University of South Australia and the Department of Defence is currently developing a ‘pandemic drone’ designed to detect coronavirus symptoms from above.

Unsurprisingly, ‘innovative’ new surveillance technologies are coming out of the woodwork. In the US, Andrew Ng’s Landing AI has developed a ‘social distancing detector’ that would monitor employee movements in their workplace, and issue an alert when anyone is less than the desired distance from a colleague.

And Clearview AI, the notorious US company that already provides facial-recognition services to the Australian police, is currently touting facial-recognition based contact tracing to three US States.

For writer, activist, and political thinker Naomi Klein, “a coherent Pandemic Shock Doctrine is beginning to emerge”.

COVIDSafe is not so much the problem, in other words, as the broader technological and political ideology of which it is a part, and will help to entrench.

The tech writer and critic Evgeny Morozov calls it ‘technological solutionism’: the growing belief, both in governments and publics across the world, that, whatever the policy alternatives, no matter the risks or likely efficacy, “there’s an app for that”. ‘To save everything, click here’ as the title of his 2013 book memorably put it.

As public services and structures of accountability are increasingly dismantled (remember, COVIDSafe has yet to be debated in Parliament), the appeal of technological solutionism only grows.

This way of thinking is much more dangerous than COVIDSafe, or indeed any individual app. It is a way of thinking we urgently need to resist.

Banner: Getty Images



University of Melbourne - First published in Legal Affairs


Mobile Big Brother surveillance and governance units are being rapidly deployed in Australia.

The privacy paradox: why we let ourselves be monitored

Digital virtual assistants make life more convenient but we are trading off our privacy. Here are some simple steps to keeping the AI where you want it

By Gabby Bush, Adam Lodders, Associate Professor Tim Miller, and Professor Jeannie Marie Paterson - 12. May 2020

On a normal day for many of us, Alexa, Cortana, Siri or our digital assistant of choice, will helpfully tell us what the weather is while we get ready for work. When we return home they will play us some music.

But increasingly Silicon Valley’s virtual assistants are keeping us company pretty much all the time. They are probably listening to our meetings, noting our routines, and video-calling our friends and family. Meantime, applications like Fitbit and Strava are tracking our exercise patterns.

Digital assistants are now part of daily life for many people. Picture: Getty Images

These technologies learn and understand us by identifying patterns in our behaviour to better accommodate our requests, but there is often a trade-off in relation to our data, privacy and security. We have humanised them with names, and the typically female voice reassures us, playing on our biases to make these compromises seem less sinister.

Despite this, many of us still welcome the convenience and maybe even the apparent connection they offer – perhaps even more so at a time when many are isolated at home in the wake of the COVID-19 pandemic.

Why do we let this happen? What should we be doing differently?

This disconnect between our desire for privacy and actual behaviours has been called the Privacy Paradox.

This phenomenon describes the disconnect between us expressing concerns for our privacy online, yet continuing to use digital devices that have considerable potential for eroding our privacy, and even autonomy.

It seems that while we value privacy, the experience of managing our digital privacy proactively is often too difficult and time consuming. In particular, the contracts setting out what digital service providers will do with our data are long and complex. We may also lack expertise in managing the technical aspects of digital privacy protection.

We value our privacy, but we also value the convenience that our devices offer. Picture: Burst/Unsplash

There are also some behavioural biases that come into play. People are poor at assessing future risks and either exaggerate or downplay them according to current experience. Also, due to ‘present bias’, the desire for instant gratification, people tend to choose present gain over future benefits. This means an abstract privacy risk will be downplayed, particularly in the face of a present reward – like the convenience of a voice-activated assistant, the pull of social media, or a response to a present threat.

Over 5 million Australians have downloaded COVIDSafe – the Federal Government’s tracing app – suggesting that the urgency of a real health crisis may override concerns about privacy and the long-term security of the data collected.

Conversely, if there are concerns and no immediate reward, people may avoid even low-risk applications like participating in the Census.

Even where we do try to be more proactive, navigating the technology itself can be daunting. Moreover, our relationship with digital services is determined by the terms, conditions and privacy policies they present to us – usually our only option is to take it or leave it.

But there are simple practical steps we can take to better manage our privacy:

  • Delete accounts and associated apps that you no longer use, and delete personal data that apps record. Data the digital assistants record is stored on an account and can be deleted. For example, in a Google Home, go to ‘My Activity’ in your settings and delete all data, or for an Amazon Echo, the ‘Manage my device’ settings give the ability to do this.
  • If you don’t want to read the terms and conditions, pay attention to what access an application is requesting – for example to your microphone, contacts, camera, location. You should consider whether you’re comfortable sharing access for the service. For example, a video app on your phone is going to need to access the microphone and camera to record content.
There are simple steps we can take to improve our privacy. Picture: Shutterstock
  • Regularly review the privacy settings on your services as the default settings often change.
  • Turn off input devices like microphones and cameras when they aren’t in use.
  • Turn off features that allow people to remotely access your device, like incoming calls on digital assistants, which can be dialled into without the owner knowing.

There are also some protections in the legal system for individuals concerned about privacy. For example, if a service promises to treat your data in a specific way, then they must do so. Failure to act in accordance with their promises is misleading and in contravention of Australian Consumer Law.

Recently, the ACCC brought an action in the Federal Court against Google for misrepresenting how they collect, and individuals can manage, their location data.

But there are also calls for more wide-ranging reform to both privacy and consumer protection laws to enhance privacy protection in a digital age.

Practical strategies and law reform can help to ensure we are spending quality time, not privacy-eroding time, with our devices.

Melbourne Law School is launching a new expert panel series focusing on the current and predicted effects of COVID-19. On 19 May 2020 a webinar will examine the justifications for the dramatically enhanced digital and physical surveillance and restriction we are experiencing in the wake of COVID-19.

The newly launched Centre for AI and Digital Ethics at the University of Melbourne is dedicated to the cross-disciplinary research of AI with particular focus on ethics, regulation and the law.

Banner Image: Getty Images



REPORT: EU Planning “Vaccination Passport” Since 2018 “Roadmap on Vaccination” outlines 3 year plan for boosting “vaccine confidence” and advancing “electronic tracking”


By Kit Knightly - 22. May 2020


A report published by the European Commission in late 2019 reveals that the EU has been looking to increase the scope and power of vaccination programmes since well before the current “pandemic”.

The endpoint of the Roadmap is, among many other things, to introduce a “common vaccination card/passport” for all EU citizens.

This proposal will be appearing before the commission in 2022, with a “feasibility study” set to run from 2019 through 2021 (meaning, as of now, it’s about halfway through).

To underline the point: the “vaccination roadmap” is not an improvised response to the Covid19 pandemic, but an ongoing plan with roots going back to 2018, when the EU released a survey of the public’s attitude toward vaccines titled “2018 State of Vaccine Confidence”.

On the back of this research, the EU then commissioned a technical report titled “Designing and implementing an immunisation information system”, on – among other things – the plausibility of an EU-wide vaccination monitoring system.

In the third quarter of 2019 these reports were all combined into the latest version of the “Vaccination Roadmap”, a long-term policy plan to spread vaccine “awareness and understanding” whilst counteracting “vaccine myths” and combatting “vaccine hesitancy”.

You can read the entire report here, but below are some of the more concerning highlights [emphasis throughout is ours]:

  • “Examine the feasibility of developing a common vaccination card/passport for EU citizens”
  • “Develop EU guidance for establishing comprehensive electronic immunization information systems for effective monitoring of immunization programmes.”
  • “Overcome the legal and technical barriers impeding the interoperability of national immunisation information systems”

On 12 September 2019, at the joint EU-WHO “Global Vaccination Summit”, the EU announced the “10 Actions Towards Vaccination for All”, which cover much of the same ground.

One month later, in October 2019, Event 201 was held.

For those who don’t know, Event 201 was a simulated pandemic exercise focusing on a zoonotic novel coronavirus originating in bats. It was sponsored by Johns Hopkins Center for Health Security, the World Economic Forum, and the Bill & Melinda Gates Foundation.

The result of the simulation was seven key suggestions.

In November of 2019, these suggestions were published as a “call to action”.

One month later, China reported the first cases of Covid19.

To be clear here (and forestall any below-the-line arguments): this is not about vaccines, their effectiveness, safety or lack thereof.

The point is that proposed COVID countermeasures, which have been presented to the public as emergency measures thought up on the fly by panicking institutions, have in fact existed since before the emergence of the disease.

They already wanted to monitor your vaccination records and tie that to your passport, introduce mandatory vaccinations and clamp down on “misinformation”. They just didn’t have a reason yet.

This was a situation which required a crisis and, fortuitously, it got one.

The exact ratio of contrivance to happenstance will never be known. What we DO know, at this point, is that SARS-CoV-2 is nothing like the threat originally reported; they admit as much themselves.

We also know they keep churning out the fear anyway.

And, thanks to documents like this, maybe now we’re starting to see why.



