UPDATE 08. July 2021: Whopping 36 States and DC Sign Onto Major Antitrust Suit Against Google

UPDATE 07. July 2021: Trump sues Facebook, Twitter, Google over platform bans - Trump wants the courts to force the social media platforms to reinstate his account and those of some of his supporters. CLASS ACTION

UPDATE 10. June 2021: FINALLY! BREAKING NEWS: The Department Of Justice Allows Complaint That Mark Zuckerberg And Facebook Discriminated Against Americans — Case Now Goes To Trial

UPDATE 28. May 2021: FACEBOOK EXPOSED BY PROJECT VERITAS

UPDATE 27. May 2021: Facebook whistleblower to Tucker Carlson: It's 'highly immoral' to censor users with vaccine concerns - Morgan Kahmann was suspended by Facebook after leaking documents to Project Veritas + BREAKING: Facebook Data Technician Morgan Kahmann Goes On Record Following Suspension as a Result of Secret Internal ‘Vaccine Hesitancy’ Documents Leak + Facebook Whistleblower Who Leaked “Vaccine Hesitancy” Docs Morgan Kahmann Goes on Record After Suspension (VIDEO) + Facebook Whistleblower Suspended From Job: “I have two kids. I have my wife” [VIDEO] - Fundraiser, please help!

UPDATE 15. April 2021: Leaked documents show that Google and the FTC have been engaged in a decades-long criminal cover-up

UPDATE 03. March 2021:  Google’s FLoC Is a Terrible Idea

GET THE YouTube THREE -->

and FIX THE SILICON SIX !

ICYMI: Facebook would have let Hitler buy ads for 'final solution' + Congresswoman Won't Let Mark Zuckerberg WEASEL His Way Out Of Her Question About Tracking People! + Social Media is a Threat to Democracy

Facebook, Google business models a 'threat to human rights': Amnesty report

Google and Facebook are evil - it is the users who have to stay away from them, because legislators are too slow and easily bribed. Google is owned by Alphabet, LinkedIn by Microsoft (co-founded by Bill Gates), and Facebook is controlled by Mark Zuckerberg.

By jcg/stb (Reuters, AP) - 21. November 2019

Amnesty International said in a report that Facebook and Google's "surveillance-based business model" is inherently incompatible with the right to privacy. The NGO urged governments to take action.

Amnesty International in a report has said tech giants Facebook and Google should be forced to abandon what it calls a "surveillance-based business model" that is "predicated on human rights abuse."

"Despite the real value of the services they provide, Google and Facebook's platforms come at a systemic cost," the human rights group said in its 60-page report SURVEILLANCE GIANTS: HOW THE BUSINESS MODEL OF GOOGLE AND FACEBOOK THREATENS HUMAN RIGHTS published on Thursday.

Amnesty said that by gathering up personal data to feed advertising businesses, the two firms carry out an unprecedented assault on privacy rights.

'Faustian bargain'

Amnesty said the companies force people to make a "Faustian bargain," where they share their data and private information in exchange for access to Google and Facebook services.

The NGO said this was problematic because both firms have established "near-total dominance over the primary channels through which people connect and engage with the online world," giving them unprecedented power over people's lives.

"Their insidious control of our digital lives undermines the very essence of privacy and is one of the defining human rights challenges of our era," said Kumi Naidoo, Amnesty International's secretary general.

Google and Facebook also present a threat to other human rights, including freedom of expression and the right to equality and non-discrimination, Amnesty said.

The report has called for governments to implement policies that allow people's privacy to be protected, while ensuring access to online services.

"Governments have an obligation to protect people from human rights abuses by corporations," the group said. 

"But for the past two decades, technology companies have been largely left to self-regulate."

Facebook pushes back

Facebook disagreed with the report's conclusions. Steve Satterfield, the company's public policy director, rejected the notion that the business model was "surveillance-based" and noted that users sign up voluntarily for the service.

"A person's choice to use Facebook's services, and the way we collect, receive or use data, all clearly disclosed and acknowledged by users, cannot meaningfully be likened to the involuntary (and often unlawful) government surveillance" described in international human rights law, Facebook said in a 5-page response letter to Amnesty International.

Google also disputed Amnesty's findings but did not provide an on-the-record response to the report. The company provided input and publicly available documents, Amnesty added.

View report in English

DOWNLOAD PDF

MUST WATCH

You Will Wish You Watched This Before You Started Using Social Media | The Twisted Truth

This might be one of the most important videos I've edited in 2018. After everything that has been going on with the privacy crisis and Facebook CEO Mark Zuckerberg going to Washington to speak with members of Congress, I felt that this video was timely. I think social media can be good but we must be careful with how we use it.

===

Susan WokeCicki is directly responsible for destroying millions of YouTube videos and the work of their creators

https://media.gab.com/system/media_attachments/files/076/651/599/small/1ff0732080efc201.jpg

===

UPDATES:

Whopping 36 States and DC Sign Onto Major Antitrust Suit Against Google

By Erin Coates - 08. July 2021

A group of 36 states and Washington, D.C., filed a lawsuit against Google on Wednesday targeting the tech giant’s control of its Android app store.

Eight states — Arizona, Colorado, Iowa, Nebraska, New York, North Carolina, Tennessee and Utah — are leading the antitrust suit, which was filed in California federal court, Politico reported.


The lawsuit challenges Google’s plan to force all app developers who use the Google Play Store to pay a 30 percent commission on sales of digital goods or services.

It is the first to challenge Google’s control in the mobile app store market and will be heard by U.S. District Judge James Donato of the Northern District of California, an appointee of former President Barack Obama.

The states and D.C. — which have a mix of Republican and Democratic leaders — said Google has a monopoly in the market for distributing apps on the Android operating system, controlling 90 percent of the market.

The lawsuit said Google favors its Play Store and gives developers “no reasonable choice” but to distribute through it, The Washington Post reported.

“Google has taken steps to close the ecosystem from competition and insert itself as the middleman between app developers and consumers,” the states’ attorneys general said.

The states said Google’s control has harmed consumers and app developers, especially as it plans to take a commission for in-app purchases.

“Google has served as the gatekeeper of the internet for many years, but, more recently, it has also become the gatekeeper of our digital devices — resulting in all of us paying more for the software we use every day,” New York Attorney General Letitia James said in a statement.

“Once again, we are seeing Google use its dominance to illegally quash competition and profit to the tune of billions,” she said. “Through its illegal conduct, the company has ensured that hundreds of millions of Android users turn to Google, and only Google, for the millions of applications they may choose to download to their phones and tablets.”

Arizona Attorney General Mark Brnovich also issued a statement, saying, “I have always prioritized consumer privacy and holding big tech companies accountable. Google’s conduct not only stifles competition and innovation, but it also costs Android users and app developers more money. Arizona and the coalition are fighting to protect consumers and to make sure everyone plays by the rules.”

Google dismissed the lawsuit as “meritless” in a blog post and said the changes the plaintiffs want risk “raising costs for small developers, impeding their ability to innovate and compete, and making apps across the Android ecosystem less secure for consumers.”

“This lawsuit isn’t about helping the little guy or protecting consumers,” said Wilson White, Google’s senior director of public policy.

“It’s about boosting a handful of major app developers who want the benefits of Google Play without paying for it.”

Google’s Play Store is the default app store on Android phones, but users can download apps from other stores and install them from other sources, according to Politico.

Google’s current policy requires app developers to use its payment system for purchases made through the Play Store, though it has not strictly enforced the rule in the past.

The Big Tech company said it would start enforcing the rules in September, leading to an uproar from companies.

Google said it would drop its commission on sales to 15 percent, but only on the first $1 million generated by the app developer, The Post reported.

The state attorneys general argue that the commission is “extravagant” and will result in a market in which Google has an unfair advantage.
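For illustration, the tiered structure reported above (15 percent on the first $1 million of a developer's revenue, 30 percent beyond it) works out as a simple two-band calculation; the function name and figures in this sketch are illustrative only, not Google's actual billing logic:

```python
def play_store_commission(revenue_usd: float) -> float:
    """Estimate the Play Store service fee under the tiered policy
    reported above: 15% on the first $1 million of a developer's
    revenue, 30% on everything beyond it. Illustrative figures only."""
    tier_cap = 1_000_000
    low_rate, high_rate = 0.15, 0.30
    low = min(revenue_usd, tier_cap) * low_rate          # first $1M
    high = max(revenue_usd - tier_cap, 0) * high_rate    # remainder
    return low + high

# A developer earning $3M would owe $150k on the first $1M
# plus $600k on the remaining $2M:
print(play_store_commission(3_000_000))  # 750000.0
```

Under this structure a small developer pays the lower rate on everything, while the effective rate climbs toward 30 percent as revenue grows, which is why the attorneys general focus on large-market dominance rather than the headline 15 percent figure.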

This article appeared originally on The Western Journal.

===

Trump sues Facebook, Twitter, Google over platform bans

Trump files class action lawsuit against social media companies: 'It's not a fair situation'

By Jonathan Allen and Teaganne Finn - 07. July 2021

WASHINGTON — Former President Donald Trump said Wednesday that he filed class-action lawsuits against tech giants Facebook, Twitter and Google — along with their CEOs, Mark Zuckerberg, Jack Dorsey and Sundar Pichai — because of bans imposed on him and others.


"We're demanding an end to the shadow banning, a stop to the silencing, a stop to the blacklisting, vanishing and canceling," Trump said at a news conference in Bedminster, New Jersey, adding that "we are asking the court to impose punitive damages."

He spoke from behind a lectern bedecked with an insignia designed to look like the presidential seal and in front of a backdrop reminiscent of a White House portico.

Trump argued that the suspension of his social media accounts amounts to an infringement on the First Amendment's guarantee that speech won't be curtailed by the government.

Fundamental to that case is his relatively novel contention that the major tech firms function as arms of the federal government rather than as private companies.

"The Founding Fathers inscribed this right in the very first amendment to our Constitution because they knew that free speech is essential to the prevention of, look ... the prevention of horror," said Trump, who called the case a "pivotal battle" for the right to free speech.

Trump is filing the suits as class actions instead of simply on his own behalf, contending that the social media platforms should not enact limits on other conservative users.

Throughout his statement and a question-and-answer session with reporters that followed it, Trump veered far off the lawsuit to offer his thoughts on combating Covid-19, criminal justice, crime rates, the withdrawal of U.S. troops from Afghanistan and other current events.

But the focus of his commentary, and that of lawyers and policy advisers who accompanied him, was on a lawsuit that represents an escalation of his long-running battle with social media platforms that have suspended his accounts. In January, Trump's Twitter account — with 88 million followers — was permanently banned.

Representatives for Twitter and Facebook declined to comment.

Dorsey said in January that the service faced an "extraordinary and untenable circumstance" given the risk of real-world violence. He said in a series of tweets that banning Trump was the right decision, even as he said it raised questions about how to keep the internet open to all.

Unlike Twitter, which banned Trump, Facebook and YouTube have not deleted his accounts. Trump has 35 million followers on Facebook, 24 million on Instagram and 2.8 million on YouTube.

Authors:

Jonathan Allen is a senior national politics reporter for NBC News, based in Washington.

Teaganne Finn is a political reporter for NBC News.

Dylan Byers contributed.

===

FINALLY! BREAKING NEWS: The Department Of Justice Allows Complaint That Mark Zuckerberg And Facebook Discriminated Against Americans — Case Now Goes To Trial

By WP - 10. June 2021

The Department of Justice has recently given the green light to a lawsuit charging Facebook with a policy of discriminating against thousands of job-seekers because they are American.

“Employers take note: Dept of Justice ALJ allows complaint that Facebook discriminated against US workers …. The case now goes to trial,” lawyer William Stock stated.

“As the Court has previously held, allegations of manipulating the hiring practice to disqualify individuals based on citizenship, meet the legal standard in this forum for stating a claim upon which relief can be granted,” it was noted in the June 2 decision by the department’s Office of the Chief Administrative Hearing Officer (OCAHO).

The decision rejected Facebook’s plea to dismiss the December 2020 discrimination complaint by officials working for President Donald Trump.

A Florida-based tech entrepreneur who has filed several lawsuits against Fortune 500 subcontractors for discriminating against Americans said:

“It’s great that OCAHO is doing this. The EEOC [Equal Employment Opportunity Commission] needs to jump on board, and the Department of Labor needs to jump on board.”

This man has already worked through OCAHO to force two companies to settle discrimination cases. He has another five discrimination cases at OCAHO and is preparing to file many more.

Trump’s complaint said Facebook hid job advertisements from eager American graduates so U.S.-based managers could pretend that the only qualified candidates for the jobs were the company’s growing population of temporary foreign workers who want to get green cards.

The lawsuit said the company discriminated against the many thousands of Americans who applied for roughly 2,600 jobs at Facebook.

As reported by Breitbart News:

There is growing evidence executives at many Fortune 500 companies — and their tiers of subordinate contractors — prefer mid-skilled visa workers to independent and cooperative American professionals.

The roughly 1.5 million foreign workers are cheaper, compliant, and controllable, partly because they are foreign contractors and can be sent home by lower-level managers for any cause.

Most importantly, Fortune 500 executives prefer visa workers because they cannot do what so many American tech experts used to do — quit to develop innovative products elsewhere that threaten the share value held by their executives.

The foreign workers are also cheaper, compliant, and controllable because most want to win the hugely valuable, government-provided, deferred bonus of citizenship from their executives.

The scale of discrimination against American graduates — including engineers, therapists, doctors, designers, software programmers, scientists, architects, statisticians, and managers — helps to drag down salaries for American male and female graduates in a wide variety of white-collar careers.

===

FACEBOOK EXPOSED BY PROJECT VERITAS

First published on BITCHUTE May 28th, 2021.

The Highwire with Del Bigtree

This week Project Veritas revealed evidence of Facebook deliberately filtering and censoring any discussion or debate that questions the safety of vaccines, even if it’s true.

#Facebook #Censorship #ProjectVeritas #JamesOkeefe #WhistleBlower #TheHighwire #DelBigtree

===

Facebook Whistleblower Suspended From Job: “I have two kids. I have my wife” [VIDEO]

By Patty McMurray - 27. May 2021

Project Veritas’ James O’Keefe interviewed Facebook data center technician Morgan Kahmann, the whistleblower who came forward with explosive internal documents that allegedly reveal an “experiment” or algorithm test, that Facebook performed on their social media platform that demoted (or hid) comments by users based on their “Vaccination Hesitancy score.”

The way it works, according to a memo from Facebook employee Hendrick Townley, comments that disparage or criticize COVID shots (also referred to as vaccinations) would be hidden from the newsfeed. “Reducing the visibility of these (vaccination hesitant) comments represents another significant opportunity for us to remove barriers from vaccination that users on the platform may potentially encounter,” Townley wrote.

The Facebook whistleblower explained the risks he took by coming forward with this information, telling O'Keefe, "I have two kids, I have a wife, and it's like, if I lose my job, what do I do?" He added, "but that's less of a concern to me," agreeing with O'Keefe that his conscience would not allow him to remain silent about the censorship on Facebook.

Kahmann explained the events that took place before he was notified by Facebook that his job would be suspended: "I was at work and I got a message from my supervisor—out of the blue actually—who told me, 'Go ahead and clean up your stuff, gather your personal belongings and meet me in the lobby of the building.'" He explained, "They're basically going to have me meet with the investigative team and grill me on this whole situation."

Watch the incredible interview here:

If you’d like to help Morgan Kahmann, who’s been suspended from his job at Facebook, where he worked as a data center technician, please go to his GiveSendGo fundraising account: https://www.givesendgo.com/ExposeFacebook

===

BREAKING: Facebook Data Technician Morgan Kahmann Goes On Record Following Suspension as a Result of Secret Internal ‘Vaccine Hesitancy’ Documents Leak

  • One of two Facebook Insiders featured in Project Veritas’ recent #ExposeFacebook series that revealed the company’s censorship of vaccine concerns on a global scale has officially come out of the shadows.

  • Data Center Technician Morgan Kahmann felt compelled to speak out publicly and shine a light on Facebook’s decision to hide their censorship plans from users.

  • Kahmann informed Project Veritas of a Facebook decision to suspend him and remove him from his office pending an “Investigatory Meeting” with Human Resources. The meeting was called off at the last minute.

  • Kahmann: "What happened was I was at work, and I got a message from my supervisor, out of the blue, basically saying 'go ahead and wrap up your area and clean up your stuff, gather your personal belongings and meet me in a meeting room in the lobby of the building.' They're basically going to have me meet with the investigative team and grill me on this whole situation."

  • Kahmann was able to secure additional internal Facebook documents before his suspension that confirm the “vaccine hesitancy” algorithm is live and being implemented globally across Facebook and Instagram platforms.

  • Kahmann hopes more are inspired to Be Brave, and Do Something about the wrongdoing they witness inside Big Tech, media, and government.

  • Kahmann has set up a fundraiser on the Christian crowdfunding platform GiveSendGo to support his wife and children as he attempts to move forward in his now uncertain future.

JOK and Morgan thumb

[WESTCHESTER, N.Y. – May 27, 2021] Project Veritas released a new video today featuring Morgan Kahmann, a Facebook whistleblower who decided to reveal his identity and go on the record after being targeted by Facebook for exposing their efforts to censor vaccine concerns on a global scale.

Kahmann is one of two Facebook insiders that came to Project Veritas with this information, and he explained in an interview how he found out about his suspension. 

"What happened was I was at work, and I got a message from my supervisor, out of the blue, basically saying 'go ahead and wrap up your area and clean up your stuff, gather your personal belongings and meet me in a meeting room in the lobby of the building,’” Kahmann said. “They're basically going to have me meet with the investigative team and grill me on this whole situation."

Kahmann also explained to Project Veritas what motivated him to blow the whistle on Facebook’s wrongdoing in the first place.

“What would happen if this [Facebook Vaccine Hesitancy Comment Demotion policy] was scaled larger and scaled to Twitter and the internet as a whole is way worse than anything that could happen from me getting fired from my job,” he said. “To me, that, it far outweighs that. Because it’s about more than me. It’s about really everyone in the world.”

Before being suspended, Kahmann secured additional internal Facebook documents showing that the Big Tech giant’s “vaccine hesitancy” policy has now been fully implemented.

He says that by doing what he did, more brave patriots working inside powerful institutions will come forward when they witness corruption. 

Kahmann has officially launched a fundraising campaign on the Christian website GiveSendGo to support his family in these difficult times. He says it is the best way to support him now.

“I think that the main reason why people don’t want to come out [as whistleblowers] -- because what if I, you know, I have two kids, I have my wife, and if I lose my job, it’s like ‘what do I do?’ But that’s less of a concern to me.”

About Project Veritas

James O'Keefe established Project Veritas in 2011 as a non-profit journalism enterprise to continue his undercover reporting work. Today, Project Veritas investigates and exposes corruption, dishonesty, self-dealing, waste, fraud, and other misconduct in both public and private institutions to achieve a more ethical and transparent society. O'Keefe serves as the CEO and Chairman of the Board so that he can continue to lead and teach his fellow journalists, as well as protect and nurture the Project Veritas culture. 

Project Veritas is a registered 501(c)3 organization. Project Veritas does not advocate specific resolutions to the issues raised through its investigations. Donate now to support our mission.

===

Facebook whistleblower to Tucker Carlson: It's 'highly immoral' to censor users with vaccine concerns

Morgan Kahmann was suspended by Facebook after leaking documents to Project Veritas

By Joseph A. Wulfsohn - 27. May 2021


Morgan Kahmann, the Facebook whistleblower who was suspended by the tech giant after leaking internal documents exposing a "vaccine hesitancy" censorship campaign, appeared on "Tucker Carlson Tonight" about the fallout he has faced since coming forward to Project Veritas. 

"Anything that questions the vaccine or the narrative regarding the vaccine, which is, you know, everyone should get the vaccine and the vaccine is good and you're not going to get many bad side effects, anything outside of that realm is basically considered under ‘vaccine hesitancy’ by Facebook's algorithms," Kahmann told Fox News host Tucker Carlson on Thursday night. "They're afraid of what people might conclude if they see that other people are having negative side effects. They think that this is going to drive up vaccine hesitancy among the population and they see that as something that they have to combat."

Kahmann, a data center technician who initially came forward anonymously to Project Veritas with the leaked documents, explained to Carlson that he was following his "moral compass."

FACEBOOK WHISTLEBLOWER REVEALS HIMSELF AFTER BEING SUSPENDED FOR LEAKING ‘VACCINE HESITANCY’ CENSORSHIP DOCS

"My moral compass says that is not the right thing to do because basically, the users at Facebook are not aware that this is going on and if you're using Facebook or a social platform and they're censoring the content of your comments unbeknownst to you, I think that's highly immoral," the ousted Facebook employee said. 

"I believe that any consequences that are bestowed onto me by Facebook as a result of this leak and these documents that I leaked to Project Veritas, I think that all of these consequences don't really weigh much when it comes to having to live with myself," Kahmann said. "I saw these documents and I had the opportunity to, you know, show the public this and what's going on behind the scenes, and if I didn't do it, I wouldn't be able to live with myself after that."

Kahmann told Carlson that he was "suddenly" told to stop working and was escorted to his car after collecting all his equipment and his access badge. He was also told that an "investigatory meeting" would be scheduled with him on a later date, which he said was ultimately "canceled."

Facebook previously did not respond to Fox News' multiple requests for comment. 


On Monday, Project Veritas released internal documents explaining "Vaccine Hesitancy Comment Demotion," which show the "goal" is to "drastically reduce user exposure to vaccine hesitancy."

Another leaked document addressed "Borderline Vaccine (BV) Framework" that delves into how to classify such content with another expressed "goal" to "identify and tier the categories of non-violating content that could discourage vaccination in certain contexts, thereby contributing to vaccine hesitancy or refusal," adding "We have tiered these by potential harm and how much context is required in order to evaluate harm." 

Kahmann previously told Project Veritas founder James O'Keefe during an interview that the consequences of Facebook's actions and the ramifications if such practices are made across tech giants are "way worse than anything that can happen from me getting fired from my job."

"To me… it far outweighs that because it's about more than me. It's about really everyone in the world," Kahmann said. 

When asked if he thought there were "a lot of people" at Facebook who agree with him about such concerns, Kahmann estimated that "at least 25%."

"So you're telling me that 25% of the people there agree that what they're doing with this vaccine hesitancy coding and algorithms is morally wrong?" O'Keefe asked.

"Yes. That would not surprise me at all," Kahmann responded.

Kahmann has since established a fundraising campaign on GiveSendGo to support his family, which includes his wife who is seven months pregnant and their two-year-old son. 

FACEBOOK WHISTLEBLOWERS LEAK DOCUMENTS REVEALING EFFORT TO CENSOR ‘VACCINE HESITANCY’: REPORT

The suspended Facebook employee previously explained what the "vaccine hesitancy" program was meant to do. 

"Facebook uses classifiers in their algorithms to determine certain content… they call it ‘vaccine hesitancy.' And without the user's knowledge, they assign a score to these comments that's called the ‘VH Score,' the ‘Vaccine Hesitancy Score,’" Kahmann told O'Keefe. "And then based on that score will demote or leave the comment alone depending on the content within the comment."

Kahmann revealed that the tech giant was running a "test" on 1.5% of its 3.8 billion users with the focus on the comments sections on "authoritative health pages."  

"They're trying to control this content before it even makes it onto your page before you even see it," Kahmann said.

The ratings are divided into two tiers, one being "Alarmism & Criticism" and the other being "Indirect Vaccine Discouragement" which includes celebrating vaccine refusal and "shocking stories" that may deter others from taking the vaccine. 

The algorithm flags key terms in comments to determine whether or not it can remain in place but allows human "raters" to make a ruling if the algorithm cannot do so itself.  
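Based solely on the description above (a comment gets a score, is demoted or left alone, and ambiguous cases go to human raters), such a keyword-scoring pipeline might look roughly like the following toy sketch. Every name, term list, and threshold here is invented for illustration; none of this is Facebook's actual code:

```python
# Hypothetical sketch of a "score then demote" comment pipeline as
# described in the reporting above. Terms and thresholds are made up.
DISCOURAGING_TERMS = {"side effect", "refused", "never getting"}

def vh_score(comment: str) -> float:
    """Crude keyword score: fraction of flagged terms present."""
    text = comment.lower()
    hits = sum(term in text for term in DISCOURAGING_TERMS)
    return hits / len(DISCOURAGING_TERMS)

def triage(comment: str, demote_at=0.5, review_at=0.3) -> str:
    """Demote high-scoring comments, leave low-scoring ones alone, and
    route borderline ones to human raters, mirroring the two-step
    process the article describes."""
    score = vh_score(comment)
    if score >= demote_at:
        return "demote"
    if score >= review_at:
        return "human_review"
    return "leave"

print(triage("I had a bad side effect and refused the second dose"))  # demote
print(triage("Worried about one side effect"))  # human_review
print(triage("Got my shot today!"))  # leave
```

The point of the sketch is the structure, not the wording: an automated classifier handles the clear-cut cases at each end, and only the middle band is escalated to human reviewers.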


A second insider described as a "Data Center Facility Engineer" described Facebook's actions like being in an "abusive" relationship where they're "not allowing their spouse to speak out about the things that are going on in their marriage… and limiting their voice… It's very incriminating, in my opinion."

In response to the leaked documents, Facebook told Project Veritas, "We proactively announced this policy on our company blog and also updated our help center with this information."


Joseph A. Wulfsohn is a media reporter for Fox News. Follow him on Twitter @JosephWulfsohn.

===

Facebook Whistleblower Who Leaked “Vaccine Hesitancy” Docs Morgan Kahmann Goes on Record After Suspension (VIDEO)

By Cristina Laila - 27. May 2021

Project Veritas on Monday released video of two Facebook insiders blowing the whistle on the social media giant’s effort to secretly censor Covid vaccine concerns on a global scale.

The documents obtained by Project Veritas show Facebook’s efforts to curb “vaccine hesitancy” or “VH” in comments.

One whistleblower told James O'Keefe that Facebook uses classifiers in its algorithms to label certain content as what the company calls "vaccine hesitant," without the user's knowledge.

“They assign a score to these comments that’s called the VH score, the “vaccine hesitancy” score,” the whistleblower said. “And then based on that score will demote or leave the comment alone depending on the content within the comment.”

The whistleblower, Morgan Kahmann, a data center technician for Facebook, revealed himself and went on the record after the social media company suspended him.

“I was at work and I got a message from my supervisor out of the blue basically saying, ‘Go ahead and wrap up your area and clean up your stuff, gather your personal belongings and meet me in a meeting room in the lobby of the building,’” Kahmann told James O’Keefe in a video released on Thursday. “They’re basically going to have me meet with the investigative team and grill me on the whole situation.”

“To me [getting fired] far outweighs that because it’s about more than me. It’s about really everyone in the world,” Kahmann said.


Author:

Cristina Laila began writing for The Gateway Pundit in 2016 and she is currently the Associate Editor.

===

N.B.: Our website www.ecoterra.info is protected against FLoC, but you should change your browser anyway if you use Chrome.

Google’s FLoC Is a Terrible Idea

MARCH 3, 2021

Update, April 9, 2021: We've launched Am I FLoCed, a new site that will tell you whether your Chrome browser has been turned into a guinea pig for Federated Learning of Cohorts, or FLoC, Google’s latest targeted advertising experiment.

No one should mourn the death of the cookie as we know it. For more than two decades, the third-party cookie has been the lynchpin in a shadowy, seedy, multi-billion dollar advertising-surveillance industry on the Web; phasing out tracking cookies and other persistent third-party identifiers is long overdue. However, as the foundations shift beneath the advertising industry, its biggest players are determined to land on their feet. 

Google is leading the charge to replace third-party cookies with a new suite of technologies to target ads on the Web. And some of its proposals show that it hasn’t learned the right lessons from the ongoing backlash to the surveillance business model. This post will focus on one of those proposals, Federated Learning of Cohorts (FLoC), which is perhaps the most ambitious—and potentially the most harmful. 

FLoC is meant to be a new way to make your browser do the profiling that third-party trackers used to do themselves: in this case, boiling down your recent browsing activity into a behavioral label, and then sharing it with websites and advertisers. The technology will avoid the privacy risks of third-party cookies, but it will create new ones in the process. It may also exacerbate many of the worst non-privacy problems with behavioral ads, including discrimination and predatory targeting. 
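The "behavioral label" idea can be illustrated with a toy SimHash-style sketch: hash each visited domain into a pseudo-random ±1 vector, sum the vectors over the history, and keep the sign bits as a short cohort ID, so similar histories collapse to the same label. This is a conceptual illustration only, not Google's actual FLoC algorithm or parameters:

```python
# Toy illustration of FLoC-style cohort assignment via SimHash.
# All parameters here are invented for demonstration.
import hashlib

COHORT_BITS = 8  # a deliberately tiny label, just a few bits

def _domain_vector(domain: str) -> list[int]:
    """Deterministic pseudo-random +/-1 vector derived from the domain."""
    digest = hashlib.sha256(domain.encode()).digest()
    return [1 if (digest[i % len(digest)] >> (i % 8)) & 1 else -1
            for i in range(COHORT_BITS)]

def cohort_id(history: list[str]) -> int:
    """SimHash of a browsing history: sum the per-domain vectors and
    take the sign of each coordinate as one bit of the cohort label."""
    sums = [0] * COHORT_BITS
    for domain in set(history):  # dedupe repeat visits
        for i, v in enumerate(_domain_vector(domain)):
            sums[i] += v
    bits = ''.join('1' if s > 0 else '0' for s in sums)
    return int(bits, 2)

print(cohort_id(["news.example", "shoes.example", "forum.example"]))
```

Because users with similar histories hash to the same small cohort, any site that receives the label can treat it as a behavioral profile, which is exactly the privacy concern raised here.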

Google’s pitch to privacy advocates is that a world with FLoC (and other elements of the “privacy sandbox”) will be better than the world we have today, where data brokers and ad-tech giants track and profile with impunity. But that framing is based on a false premise that we have to choose between “old tracking” and “new tracking.” It’s not either-or. Instead of re-inventing the tracking wheel, we should imagine a better world without the myriad problems of targeted ads. 

We stand at a fork in the road. Behind us is the era of the third-party cookie, perhaps the Web’s biggest mistake. Ahead of us are two possible futures. 

In one, users get to decide what information to share with each site they choose to interact with. No one needs to worry that their past browsing will be held against them—or leveraged to manipulate them—when they next open a tab. 

In the other, each user’s behavior follows them from site to site as a label, inscrutable at a glance but rich with meaning to those in the know. Their recent history, distilled into a few bits, is “democratized” and shared with dozens of nameless actors that take part in the service of each web page. Users begin every interaction with a confession: here’s what I’ve been up to this week, please treat me accordingly.

Users and advocates must reject FLoC and other misguided attempts to reinvent behavioral targeting. We implore Google to abandon FLoC and redirect its effort towards building a truly user-friendly Web.

What is FLoC?

In 2019, Google presented the Privacy Sandbox, its vision for the future of privacy on the Web. At the center of the project is a suite of cookieless protocols designed to satisfy the myriad use cases that third-party cookies currently provide to advertisers. Google took its proposals to the W3C, the standards-making body for the Web, where they have primarily been discussed in the Web Advertising Business Group, a body made up primarily of ad-tech vendors. In the intervening months, Google and other advertisers have proposed dozens of bird-themed technical standards: PIGIN, TURTLEDOVE, SPARROW, SWAN, SPURFOWL, PELICAN, PARROT… the list goes on. Seriously. Each of the “bird” proposals is designed to perform one of the functions in the targeted advertising ecosystem that is currently done by cookies.

FLoC is designed to help advertisers perform behavioral targeting without third-party cookies. A browser with FLoC enabled would collect information about its user’s browsing habits, then use that information to assign its user to a “cohort” or group. Users with similar browsing habits—for some definition of “similar”—would be grouped into the same cohort. Each user’s browser will share a cohort ID, indicating which group they belong to, with websites and advertisers. According to the proposal, at least a few thousand users should belong to each cohort (though that’s not a guarantee).

If that sounds dense, think of it this way: your FLoC ID will be like a succinct summary of your recent activity on the Web.

Google’s proof of concept used the domains of the sites that each user visited as the basis for grouping people together. It then used an algorithm called SimHash to create the groups. SimHash can be computed locally on each user’s machine, so there’s no need for a central server to collect behavioral data. However, a central administrator could have a role in enforcing privacy guarantees. In order to prevent any cohort from being too small (i.e. too identifying), Google proposes that a central actor could count the number of users assigned to each cohort. If any are too small, they can be combined with other, similar cohorts until enough users are represented in each one.
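The local clustering described above can be sketched in a few lines. This is an illustrative simplification, not Google’s actual implementation — it uses SHA-256 as a stand-in hash function and treats all domains equally — but it shows the core SimHash idea: similar browsing histories tend to produce similar labels, and the whole computation happens on the user’s machine.

```python
import hashlib

def simhash(domains, bits=8):
    """Illustrative SimHash over a list of visited domains.

    Each domain is hashed; for every bit position we tally +1 if the
    domain's hash has that bit set, -1 otherwise. The sign of each
    tally becomes one bit of the final cohort ID.
    """
    tallies = [0] * bits
    for domain in domains:
        h = int.from_bytes(hashlib.sha256(domain.encode()).digest()[:8], "big")
        for i in range(bits):
            tallies[i] += 1 if (h >> i) & 1 else -1
    cohort = 0
    for i, t in enumerate(tallies):
        if t > 0:
            cohort |= 1 << i
    return cohort

# Deterministic and local: the same history always yields the same ID,
# and overlapping histories often yield the same or a nearby ID.
cohort_id = simhash(["news.example", "shoes.example", "weather.example"])
```

Because the computation is local and deterministic, two browsers with overlapping histories will often land in the same cohort without either one ever uploading its raw history.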

For FLoC to be useful to advertisers, a user’s cohort will necessarily reveal information about their behavior.

According to the proposal, most of the specifics are still up in the air. The draft specification states that a user’s cohort ID will be available via JavaScript, but it’s unclear whether there will be any restrictions on who can access it, or whether the ID will be shared in any other ways. FLoC could perform clustering based on URLs or page content instead of domains; it could also use a federated learning-based system (as the name FLoC implies) to generate the groups instead of SimHash. It’s also unclear exactly how many possible cohorts there will be. Google’s experiment used 8-bit cohort identifiers, meaning that there were only 256 possible cohorts. In practice that number could be much higher; the documentation suggests a 16-bit cohort ID comprising 4 hexadecimal characters. The more cohorts there are, the more specific they will be; longer cohort IDs will mean that advertisers learn more about each user’s interests and have an easier time fingerprinting them.

One thing that is specified is duration. FLoC cohorts will be re-calculated on a weekly basis, each time using data from the previous week’s browsing. This makes FLoC cohorts less useful as long-term identifiers, but it also makes them more potent measures of how users behave over time.

New privacy problems

FLoC is part of a suite intended to bring targeted ads into a privacy-preserving future. But the core design involves sharing new information with advertisers. Unsurprisingly, this also creates new privacy risks. 

Fingerprinting

The first issue is fingerprinting. Browser fingerprinting is the practice of gathering many discrete pieces of information from a user’s browser to create a unique, stable identifier for that browser. EFF’s Cover Your Tracks project demonstrates how the process works: in a nutshell, the more ways your browser looks or acts different from others’, the easier it is to fingerprint. 

Google has promised that the vast majority of FLoC cohorts will comprise thousands of users each, so a cohort ID alone shouldn’t distinguish you from a few thousand other people like you. However, that still gives fingerprinters a massive head start. If a tracker starts with your FLoC cohort, it only has to distinguish your browser from a few thousand others (rather than a few hundred million). In information theoretic terms, FLoC cohorts will contain several bits of entropy—up to 8 bits, in Google’s proof of concept trial. This information is even more potent given that it is unlikely to be correlated with other information that the browser exposes. This will make it much easier for trackers to put together a unique fingerprint for FLoC users.
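The arithmetic behind that head start is straightforward. The following back-of-the-envelope sketch assumes an even split of users across cohorts and uses an illustrative 300-million-user figure; neither assumption comes from the proposal itself.

```python
import math

# A shared cohort ID leaks identifying information proportional to how
# finely it partitions the user population (even-split simplification).
def leaked_bits(num_cohorts):
    return math.log2(num_cohorts)

def remaining_anonymity_set(population, num_cohorts):
    return population // num_cohorts

# Google's proof of concept used 8-bit IDs: 256 cohorts, up to 8 bits leaked.
print(leaked_bits(256))                           # 8.0
print(remaining_anonymity_set(300_000_000, 256))  # 1171875
```

Eight bits is a modest label on its own, but it hands a fingerprinter a population hundreds of times smaller to sort through before a single other signal has been collected.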

Google has acknowledged this as a challenge, but has pledged to solve it as part of the broader “Privacy Budget” plan it has to deal with fingerprinting long-term. Solving fingerprinting is an admirable goal, and its proposal is a promising avenue to pursue. But according to the FAQ, that plan is “an early stage proposal and does not yet have a browser implementation.” Meanwhile, Google is set to begin testing FLoC as early as this month.

Fingerprinting is notoriously difficult to stop. Browsers like Safari and Tor have engaged in years-long wars of attrition against trackers, sacrificing large swaths of their own feature sets in order to reduce fingerprinting attack surfaces. Fingerprinting mitigation generally involves trimming away or restricting unnecessary sources of entropy—which is what FLoC is. Google should not create new fingerprinting risks until it’s figured out how to deal with existing ones.

Cross-context exposure

The second problem is less easily explained away: the technology will share new personal data with trackers who can already identify users. For FLoC to be useful to advertisers, a user’s cohort will necessarily reveal information about their behavior. 

The project’s Github page addresses this up front:

This API democratizes access to some information about an individual’s general browsing history (and thus, general interests) to any site that opts into it. … Sites that know a person’s PII (e.g., when people sign in using their email address) could record and reveal their cohort. This means that information about an individual's interests may eventually become public.

As described above, FLoC cohorts shouldn’t work as identifiers by themselves. However, any company able to identify a user in other ways—say, by offering “log in with Google” services to sites around the Internet—will be able to tie the information it learns from FLoC to the user’s profile.
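From the tracker’s side, the tie-in is trivial. Everything in this sketch is hypothetical (the names and the `record_visit` helper are invented for illustration); the point is that a site with a login needs no cookie at all to build a week-by-week behavioral timeline once the browser volunteers a cohort ID.

```python
from collections import defaultdict
from datetime import date

# Hypothetical server-side log: identified user -> weekly cohort history.
profile_cohorts = defaultdict(list)

def record_visit(user_email, floc_cohort):
    """Store the cohort ID the browser sent alongside the known identity."""
    profile_cohorts[user_email].append((date.today().isoformat(), floc_cohort))

record_visit("alice@example.com", 0x2F)  # this week's cohort
record_visit("alice@example.com", 0xA1)  # a later week: interests shifted
```

Each weekly recalculation adds another data point, so the very feature that makes cohorts poor long-term identifiers makes them good longitudinal behavior trackers for anyone who can already name the user.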

Two categories of information may be exposed in this way:

  1. Specific information about browsing history. Trackers may be able to reverse-engineer the cohort-assignment algorithm to determine that any user who belongs to a specific cohort probably or definitely visited specific sites. 
  2. General information about demographics or interests. Observers may learn that in general, members of a specific cohort are substantially likely to be a specific type of person. For example, a particular cohort may over-represent users who are young, female, and Black; another cohort, middle-aged Republican voters; a third, LGBTQ+ youth.

This means every site you visit will have a good idea about what kind of person you are on first contact, without having to do the work of tracking you across the web. Moreover, as your FLoC cohort will update over time, sites that can identify you in other ways will also be able to track how your browsing changes. Remember, a FLoC cohort is nothing more, and nothing less, than a summary of your recent browsing activity.

You should have a right to present different aspects of your identity in different contexts. If you visit a site for medical information, you might trust it with information about your health, but there’s no reason it needs to know what your politics are. Likewise, if you visit a retail website, it shouldn’t need to know whether you’ve recently read up on treatment for depression. FLoC erodes this separation of contexts, and instead presents the same behavioral summary to everyone you interact with.

Beyond privacy

FLoC is designed to prevent a very specific threat: the kind of individualized profiling that is enabled by cross-context identifiers today. The goal of FLoC and other proposals is to avoid letting trackers access specific pieces of information that they can tie to specific people. As we’ve shown, FLoC may actually help trackers in many contexts. But even if Google is able to iterate on its design and prevent these risks, the harms of targeted advertising are not limited to violations of privacy. FLoC’s core objective is at odds with other civil liberties.

The power to target is the power to discriminate. By definition, targeted ads allow advertisers to reach some kinds of people while excluding others. A targeting system may be used to decide who gets to see job postings or loan offers just as easily as it is to advertise shoes. 

Over the years, the machinery of targeted advertising has frequently been used for exploitation, discrimination, and harm. The ability to target people based on ethnicity, religion, gender, age, or ability allows discriminatory ads for jobs, housing, and credit. Targeting based on credit history—or characteristics systematically associated with it—enables predatory ads for high-interest loans. Targeting based on demographics, location, and political affiliation helps purveyors of politically motivated disinformation and voter suppression. All kinds of behavioral targeting increase the risk of convincing scams.

Instead of re-inventing the tracking wheel, we should imagine a better world without the myriad problems of targeted ads.

Google, Facebook, and many other ad platforms already try to rein in certain uses of their targeting platforms. Google, for example, limits advertisers’ ability to target people in “sensitive interest categories.” However, these efforts frequently fall short; determined actors can usually find workarounds to platform-wide restrictions on certain kinds of targeting or certain kinds of ads.

Even with absolute power over what information can be used to target whom, platforms are too often unable to prevent abuse of their technology. But FLoC will use an unsupervised algorithm to create its clusters. That means that nobody will have direct control over how people are grouped together. Ideally (for advertisers), FLoC will create groups that have meaningful behaviors and interests in common. But online behavior is linked to all kinds of sensitive characteristics—demographics like gender, ethnicity, age, and income; “big 5” personality traits; even mental health. It is highly likely that FLoC will group users along some of these axes as well. FLoC groupings may also directly reflect visits to websites related to substance abuse, financial hardship, or support for survivors of trauma.

Google has proposed that it can monitor the outputs of the system to check for any correlations with its sensitive categories. If it finds that a particular cohort is too closely related to a particular protected group, the administrative server can choose new parameters for the algorithm and tell users’ browsers to group themselves again. 

This solution sounds both Orwellian and Sisyphean. In order to monitor how FLoC groups correlate with sensitive categories, Google will need to run massive audits using data about users’ race, gender, religion, age, health, and financial status. Whenever it finds a cohort that correlates too strongly along any of those axes, it will have to reconfigure the whole algorithm and try again, hoping that no other “sensitive categories” are implicated in the new version. This is a much more difficult version of the problem it is already trying, and frequently failing, to solve.
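To see why the audit itself is the problem, consider the minimal shape such a check must take. This is a hypothetical sketch — Google has not published an auditing algorithm — and it only works at all if the auditor already holds the sensitive attribute for a large sample of users.

```python
def overrepresented(cohort, sensitive, population, threshold=2.0):
    """True if a cohort contains members of a sensitive group at more
    than `threshold` times that group's base rate in the population.

    `cohort` and `sensitive` are sets of user IDs -- i.e. the auditor
    must know exactly who belongs to the sensitive group."""
    base_rate = len(sensitive) / population
    cohort_rate = len(cohort & sensitive) / len(cohort)
    return cohort_rate > threshold * base_rate

# 100 of 1,000 users belong to the sensitive group (10% base rate).
sensitive = set(range(100))
print(overrepresented(set(range(50)), sensitive, 1000))        # True
print(overrepresented(set(range(500, 550)), sensitive, 1000))  # False
```

And whenever a cohort trips the check, the clustering must be re-run and re-audited from scratch, with no guarantee that the new assignment is any cleaner along some other sensitive axis.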

In a world with FLoC, it may be more difficult to target users directly based on age, gender, or income. But it won’t be impossible. Trackers with access to auxiliary information about users will be able to learn what FLoC groupings “mean”—what kinds of people they contain—through observation and experiment. Those who are determined to do so will still be able to discriminate. Moreover, this kind of behavior will be harder for platforms to police than it already is. Advertisers with bad intentions will have plausible deniability—after all, they aren’t directly targeting protected categories, they’re just reaching people based on behavior. And the whole system will be more opaque to users and regulators.

Google, please don’t do this

We wrote about FLoC and the other initial batch of proposals when they were first introduced, calling FLoC “the opposite of privacy-preserving technology.” We hoped that the standards process would shed light on FLoC’s fundamental flaws, causing Google to reconsider pushing it forward. Indeed, several issues on the official Github page raise the exact same concerns that we highlight here. However, Google has continued developing the system, leaving the fundamentals nearly unchanged. It has started pitching FLoC to advertisers, boasting that FLoC is a “95% effective” replacement for cookie-based targeting. And starting with Chrome 89, released on March 2, it’s deploying the technology for a trial run. A small portion of Chrome users—still likely millions of people—will be (or have been) assigned to test the new technology.

Make no mistake, if Google does follow through on its plan to implement FLoC in Chrome, it will likely give everyone involved “options.” The system will probably be opt-in for the advertisers that will benefit from it, and opt-out for the users who stand to be hurt. Google will surely tout this as a step forward for “transparency and user control,” knowing full well that the vast majority of its users will not understand how FLoC works, and that very few will go out of their way to turn it off. It will pat itself on the back for ushering in a new, private era on the Web, free of the evil third-party cookie—the technology that Google helped extend well past its shelf life, making billions of dollars in the process.

It doesn’t have to be that way. The most important parts of the privacy sandbox, like dropping third-party identifiers and fighting fingerprinting, will genuinely change the Web for the better. Google can choose to dismantle the old scaffolding for surveillance without replacing it with something new and uniquely harmful.

We emphatically reject the future of FLoC. That is not the world we want, nor the one users deserve. Google needs to learn the correct lessons from the era of third-party tracking and design its browser to work for users, not for advertisers.

===

Leaked documents show that Google and the FTC have been engaged in a decades-long criminal cover-up

By Ethan Huff - 15. April 2021

Google’s team of lawyers made a huge mistake with the release of key documents requested by a group of state attorneys general.

Portions of the documents that should have been redacted were left in plain sight, revealing illegal behavior on Google’s part with regards to its massive advertising monopoly.

Google’s online advertising marketplace, we now know, has long been secretly rigged under a scheme known as “Project Bernanke.”

As explained by Matt Stoller, the scheme allowed Google to have “one arm of its ad business front-running trades for ad inventory,” which the company used to award itself “hundreds of millions of dollars a year by giving itself a better position in the auctions.”

We also now know that Facebook was also involved.

“The agreement was signed by, among other individuals, Philipp Schindler, Google’s Senior Vice President and Chief Business Officer, and Sheryl Sandberg, Facebook’s Chief Operating Officer.”

Not only is this little operation illegal, but it is also criminal. Such collusion and rigging of the online advertising ecosystem is about as illicit as it gets when it comes to Big Tech’s monopolistic behavior.

That this all went on in secret for so many years speaks volumes as to the failure of our current regulatory system in keeping companies in check. Stoller explains it even better:

“Today, big business in America is far too secretive, with an endless thicket of confidentiality rules, trade secrets law, and deferential judges and enforcers who think that revealing public information about big business is some sort of scandal,” he writes.

“It’s so bad that when the FDA asked pharmaceutical companies where their manufacturing plants were at the beginning of the pandemic, some firms cited trade secrets rules and refused to divulge the information.”

Google and Facebook both need to be disbanded

The Federal Trade Commission (FTC) could have brought a case against Google back in 2012 when much was already known about the company’s antitrust criminality. The regulatory body failed to do so, however, because it works with Google against We the People.

They say that God works in mysterious ways, however, and the failure of Google’s lawyers to cover for their client by accidentally spilling the beans serves as poetic justice: one’s sins tend to eventually find one out.

In this case, Google’s collective sins as one of the world’s most evil corporations are now on full display for the world to see. The company could not care less about following things like laws – those are for the peasants – and now everybody knows it.

Stoller notes that there is a lot to learn about this hilarious blunder. First, he says, judges redact far too much information under the guise of protecting “business proprietary information” when the reality is that what they redact is often incriminating details about illicit behavior.

“The court system is supposed to be a public accounting,” he notes.

Secondly, lawyers, no matter how much money they charge, are not gods, and we should stop looking at them in such a manner. Google and other large corporations pay them big bucks so they can avoid having to abide by the rules that the rest of us have to follow.

“The Financial Services Industry has long been the clear leader in criminal activity – Google’s ‘Bernanke’ name for its criminal front-running activity was no accident,” wrote one of Stoller’s commenters.

“Second, of course, is Big Pharma, which routinely writes checks for billion-dollar fines, but this time it’s for killing people rather than just robbing them – and thank God the Feds pick up the tab for vaccines.”

More related news about Google can be found at Evil.news.

Sources for this article include:

MattStoller.Substack.com

NaturalNews.com

===

ICYMI:

Sacha Baron Cohen:

Facebook would have let Hitler buy ads for 'final solution'

In wide-ranging speech, actor accuses tech giants of running the ‘greatest propaganda machine in history’

Read Sacha Baron Cohen’s scathing attack on Facebook in full

By @Andrew_Pulver

‘Here’s an idea for them: abide by basic standards’ … Sacha Baron Cohen. Photograph: Mario Anzuoni/Reuters

Sacha Baron Cohen has denounced tech giants Facebook, Twitter, YouTube and Google as “the greatest propaganda machine in history” and culpable for a surge in “murderous attacks on religious and ethnic minorities”.

Baron Cohen was speaking on Thursday at Never Is Now, the Anti-Defamation League’s summit on antisemitism and hate in New York, where he was presented with the organisation’s international leadership award. He said that “hate crimes are surging, as are murderous attacks on religious and ethnic minorities” and that “all this hate and violence is being facilitated by a handful of internet companies that amount to the greatest propaganda machine in history”.

He added: “The algorithms these platforms depend on deliberately amplify the type of content that keeps users engaged – stories that appeal to our baser instincts and that trigger outrage and fear. It’s why YouTube recommended videos by the conspiracist Alex Jones billions of times. It’s why fake news outperforms real news, because studies show that lies spread faster than truth … As one headline put it, just think what Goebbels could have done with Facebook.”

“If you pay them, Facebook will run any ‘political’ ad you want, even if it’s a lie,” he said. “And they’ll even help you micro-target those lies to their users for maximum effect. Under this twisted logic, if Facebook were around in the 1930s, it would have allowed Hitler to post 30-second ads on his ‘solution’ to the ‘Jewish problem’.”

N.B.: The ADL itself has a dubious history as a media and voter influencer and might not have been the best platform for Sacha Baron Cohen to air his righteous proclamation.

Baron Cohen went on to attack Mark Zuckerberg’s defence of Facebook as a bastion of “free expression”. He said: “I think we could all agree that we should not be giving bigots and paedophiles a free platform to amplify their views and target their victims.” Baron Cohen also criticised Facebook’s decision not to remove Holocaust deniers, saying: “We have millions of pieces of evidence for the Holocaust – it is an historical fact. And denying it is not some random opinion. Those who deny the Holocaust aim to encourage another one.”

These Eyes Do Lie!

Baron Cohen also called for internet companies to be held responsible for their content. “It’s time to finally call these companies what they really are – the largest publishers in history. And here’s an idea for them: abide by basic standards and practices just like newspapers, magazines and TV news do every day.”

“Internet companies can now be held responsible for paedophiles who use their sites to target children. I say, let’s also hold these companies responsible for those who use their sites to advocate for the mass murder of children because of their race or religion. And maybe fines are not enough. Maybe it’s time to tell Mark Zuckerberg and the CEOs of these companies: you already allowed one foreign power to interfere in our elections, you already facilitated one genocide in Myanmar, do it again and you go to jail.”

A spokesperson for Facebook said in response: “Sacha Baron Cohen misrepresented Facebook’s policies. Hate speech is actually banned on our platform. We ban people who advocate for violence and we remove anyone who praises or supports it. Nobody – including politicians – can advocate or advertise hate, violence or mass murder on Facebook.”

Read Sacha Baron Cohen's scathing attack on Facebook in full: 'greatest propaganda machine in history'

Sacha Baron Cohen

===

MUST WATCH

Congresswoman Won't Let Mark Zuckerberg WEASEL His Way Out Of Her Question About Tracking People!

Congresswoman is pissed about hearing Facebook tracks people who aren't even members of Facebook, but Mark Zuckerberg doesn't want to acknowledge it.

===

Social Media is a Threat to Democracy:

Carole Cadwalladr speaks at TED2019

Investigative Journalist Carole Cadwalladr explores how social media platforms like Facebook exerted an unprecedented influence on voters in the Brexit referendum and the 2016 US presidential election. She speaks during Session 1 of TED2019: Bigger Than Us, on April 15, 2019 in Vancouver, BC, Canada. (Photo: Bret Hartman / TED)

The day after the Brexit referendum, British journalist (and recently announced Pulitzer Prize finalist) Carole Cadwalladr went to her home region of South Wales to investigate why so many voters had elected to leave the European Union.

She asked residents of the traditionally left-wing town of Ebbw Vale, a place newly rejuvenated by EU investment, why they had voted to leave. They talked about wanting to take back control — a Vote Leave campaign slogan — and being fed up with immigrants and refugees.

Cadwalladr was taken aback. “Walking around, I didn’t meet any immigrants or refugees,” she says. “I met one Polish woman who told me she was practically the only foreigner in town. When I checked the figures, I discovered that Ebbw Vale actually has one of the lowest rates of immigration in the country. So I was just a bit baffled, because I couldn’t really understand where people were getting their information from.”

A reader from the area got in touch with her after her story ran, to explain that she had seen things on Facebook, which she described to Cadwalladr as “quite scary stuff about immigration, and especially about Turkey.” This was misinformation that Cadwalladr was familiar with — the lie that Turkey was going to join the EU, accompanied by the suggestion that its population of 76 million people would promptly emigrate to current member states.

She describes trying to find evidence of this content on Facebook: “There’s no archive of ads that people see, or what had been pushed into their news feeds. No trace of anything … This entire referendum took place in darkness because it took place on Facebook.” And Mark Zuckerberg has refused multiple requests from the British parliament to come and answer questions about these ad campaigns and the data used to create them, she says.

“What I and other journalists have uncovered is that multiple crimes took place during the referendum, and they took place on Facebook,” Cadwalladr says.

The amount of money you can spend on an election is limited by law in Britain, to prevent “buying” votes. The Vote Leave campaign was found to have laundered £750,000 shortly before the referendum, money it spent on these online disinformation campaigns.

“This was the biggest electoral fraud in Britain for a hundred years, in a once-in-a-generation vote that hinged on just 1 percent of the electorate,” Cadwalladr says.

Cadwalladr embarked on a complex and painstaking investigation into the ad campaigns used in the referendum. After spending months tracking down an ex-employee, Christopher Wylie, she found that a company called Cambridge Analytica “had profiled people politically in order to understand their individual fears, to better target them with Facebook ads, and it did this by illicitly harvesting the profiles of 87 million people from Facebook.”

Despite legal threats from both Cambridge Analytica and Facebook, Cadwalladr and her colleagues went public with their findings, publishing them in the Observer.

“Facebook: you were on the wrong side of history in that,” Cadwalladr says. “And you are on the wrong side of history in this. In refusing to give us the answers that we need. And that is why I am here. To address you directly. The gods of Silicon Valley; Mark Zuckerberg and Sheryl Sandberg and Larry Page and Sergey Brin and Jack Dorsey, and your employees and your investors, too … We are what happens to a western democracy when a hundred years of electoral laws are disrupted by technology … What the Brexit vote demonstrates is that liberal democracy is broken, and you broke it.”

Cadwalladr offers a challenge to tech companies: “It is not about left or right, or Leave or Remain, or Trump or not. It’s about whether it’s actually possible to have a free and fair election ever again. As it stands, I don’t think it is. And so my question to you is: Is this what you want? Is this how you want history to remember you? As the handmaidens to authoritarianism that is on the rise all across the world? You set out to connect people and you are refusing to acknowledge that the same technology is now driving us apart.”

And for everyone else, Cadwalladr has a call to action: “Democracy is not guaranteed, and it is not inevitable. And we have to fight. And we have to win. And we cannot let these tech companies have this unchecked power. It’s up to us: you, me and all of us. We are the ones who have to take back control.”

 

 

 
