
An EU coalition of techies is backing a “privacy-preserving” standard for COVID-19 contacts tracing

A European coalition of techies and scientists drawn from at least eight countries, and led by Germany’s Fraunhofer Heinrich Hertz Institute for telecoms (HHI), is working on contacts-tracing proximity technology for COVID-19 that’s designed to comply with the region’s strict privacy rules — officially unveiling the effort today.
China-style individual-level location-tracking of people by states via their smartphones, even for a public health purpose, is hard to imagine in Europe — which has a long history of legal protection for individual privacy. However the coronavirus pandemic is applying pressure to the region’s data protection model, as governments turn to data and mobile technologies for help with tracking the spread of the virus, supporting their public health response and mitigating wider social and economic impacts.
Scores of apps are popping up across Europe aimed at attacking coronavirus from different angles. European privacy not-for-profit, noyb, is keeping an updated list of approaches, both led by governments and private sector projects, to use personal data to combat SARS-CoV-2 — with examples so far including contacts tracing, lockdown or quarantine enforcement and COVID-19 self-assessment.
The efficacy of such apps is unclear — but the demand for tech and data to fuel such efforts is coming from all over the place.
In the UK the government has been quick to call in tech giants, including Google, Microsoft and Palantir, to help the National Health Service determine where resources need to be sent during the pandemic. Meanwhile the European Commission has been leaning on regional telcos to hand over user location data to carry out coronavirus tracking — albeit in aggregated and anonymized form.
The newly unveiled Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT) project is a response to the huge spike in demand for citizens’ data generated by the coronavirus pandemic. It’s intended to offer not just another app — but what’s described as “a fully privacy-preserving approach” to COVID-19 contacts tracing.
The core idea is to leverage smartphone technology to help disrupt the next wave of infections by notifying individuals who have come into close contact with an infected person — via the proxy of their smartphones having been near enough to carry out a Bluetooth handshake. So far so standard. But the coalition behind the effort wants to steer developments in such a way that the EU response to COVID-19 doesn’t drift towards China-style state surveillance of citizens.
While, for the moment, strict quarantine measures remain in place across much of Europe there may be less imperative for governments to rip up the best practice rulebook to intrude on citizens’ privacy, given the majority of people are locked down at home. But the looming question is what happens when restrictions on daily life are lifted?
Contacts tracing — which offers a chance for interventions that can break new infection chains — is being touted by some as a key component of preventing a second wave of coronavirus infections, with examples such as Singapore’s TraceTogether app being eyed up by regional lawmakers.
Singapore does appear to have had some success in keeping a second wave of infections from turning into a major outbreak, via an aggressive testing and contacts-tracing regime. But what a small island city-state with a population of less than 6M can do isn’t necessarily comparable to what’s achievable across a trading bloc of 27 nations whose collective population exceeds 500M.
Europe isn’t going to have a single coronavirus tracing app. It’s already got a patchwork. Hence the people behind PEPP-PT offering a set of “standards, technology, and services” to countries and developers to plug into to get a standardized COVID-19 contacts-tracing approach up and running across the bloc.
The other very European flavored piece here is privacy — and privacy law. “Enforcement of data protection, anonymization, GDPR [the EU’s General Data Protection Regulation] compliance, and security” are baked in, is the top-line claim.
“PEPP-PT was explicitly created to adhere to strong European privacy and data protection laws and principles,” the group writes in an online manifesto. “The idea is to make the technology available to as many countries, managers of infectious disease responses, and developers as quickly and as easily as possible.
“The technical mechanisms and standards provided by PEPP-PT fully protect privacy and leverage the possibilities and features of digital technology to maximize speed and real-time capability of any national pandemic response.”
Hans-Christian Boos, one of the project’s co-initiators — and the founder of an AI company called Arago — discussed the initiative with German newspaper Der Spiegel, telling it: “We collect no location data, no movement profiles, no contact information and no identifiable features of the end devices.”
The newspaper reports PEPP-PT’s approach means apps aligning to this standard would generate only temporary IDs — to avoid individuals being identified. Two or more smartphones running an app that uses the tech and has Bluetooth enabled when they come into proximity would exchange their respective IDs — saving them locally on the device in an encrypted form, according to the report.
Der Spiegel writes that should a user of the app subsequently be diagnosed with coronavirus their doctor would be able to ask them to transfer the contact list to a central server. The doctor would then be able to use the system to warn affected IDs they have had contact with a person who has since been diagnosed with the virus — meaning those at risk individuals could be proactively tested and/or self-isolate.
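To make the mechanics a little more concrete, below is a minimal sketch of the kind of ephemeral-ID exchange and short-lived local contact log described in Der Spiegel’s report. The identifier format, rotation interval and retention window are illustrative assumptions, not PEPP-PT’s published specification, and encryption-at-rest of the log is omitted for brevity.

```python
# Illustrative sketch only — the ID scheme, rotation interval and retention
# period are assumptions, not PEPP-PT's actual specification.
import os
import time
from dataclasses import dataclass

ROTATION_SECONDS = 30 * 60  # assumed interval after which a fresh temporary ID is broadcast

def new_ephemeral_id() -> bytes:
    """Generate a random, temporary identifier with no link to the device or its owner."""
    return os.urandom(16)

@dataclass
class Encounter:
    observed_id: bytes   # the other phone's temporary ID, exchanged over Bluetooth
    timestamp: float     # when the handshake happened

class ProximityLog:
    """Keeps only the recent history that could still matter for virus transmission."""
    def __init__(self, retention_days: int = 14):
        self.retention_seconds = retention_days * 24 * 3600
        self.encounters: list[Encounter] = []

    def record(self, observed_id: bytes) -> None:
        self.encounters.append(Encounter(observed_id, time.time()))

    def prune(self) -> None:
        # Earlier history is continuously deleted, per the project's description.
        cutoff = time.time() - self.retention_seconds
        self.encounters = [e for e in self.encounters if e.timestamp >= cutoff]
```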
On its website PEPP-PT explains the approach thus:

Mode 1
If a user is not tested or has tested negative, the anonymous proximity history remains encrypted on the user’s phone and cannot be viewed or transmitted by anybody. At any point in time, only the proximity history that could be relevant for virus transmission is saved, and earlier history is continuously deleted.
Mode 2
If the user of phone A has been confirmed to be SARS-CoV-2 positive, the health authorities will contact user A and provide a TAN code to the user that ensures potential malware cannot inject incorrect infection information into the PEPP-PT system. The user uses this TAN code to voluntarily provide information to the national trust service that permits the notification of PEPP-PT apps recorded in the proximity history and hence potentially infected. Since this history contains anonymous identifiers, neither person can be aware of the other’s identity.
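Purely as an illustration of the “Mode 2” flow described above — not PEPP-PT’s actual implementation — a national trust service that only accepts a proximity history when it is accompanied by a single-use TAN issued by the health authorities might look something like the following sketch (the TAN format and interfaces are assumptions):

```python
# Illustrative sketch of the TAN-gated upload step; not PEPP-PT's real code.
import secrets

class TrustService:
    def __init__(self):
        self.issued_tans: set[str] = set()      # TANs handed out by health authorities
        self.ids_to_notify: set[bytes] = set()  # anonymous IDs awaiting notification

    def issue_tan(self) -> str:
        """Called by the health authority after a confirmed positive test."""
        tan = secrets.token_urlsafe(8)
        self.issued_tans.add(tan)
        return tan

    def submit_history(self, tan: str, observed_ids: list[bytes]) -> bool:
        """Voluntary upload from the patient's app. Without a valid TAN the upload
        is rejected, so malware cannot inject bogus infection reports."""
        if tan not in self.issued_tans:
            return False
        self.issued_tans.discard(tan)            # single use
        self.ids_to_notify.update(observed_ids)  # anonymous identifiers only
        return True
```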

Providing further detail of what it envisages as “Country-dependent trust service operation”, it writes: “The anonymous IDs contain encrypted mechanisms to identify the country of each app that uses PEPP-PT. Using that information, anonymous IDs are handled in a country-specific manner.”
While on healthcare processing it suggests: “A process for how to inform and manage exposed contacts can be defined on a country by country basis.”
Among the other features of PEPP-PT’s mechanisms the group lists in its manifesto are:
Backend architecture and technology that can be deployed into local IT infrastructure and can handle hundreds of millions of devices and users per country instantly.
Managing the partner network of national initiatives and providing APIs for integration of PEPP-PT features and functionalities into national health processes (test, communication, …) and national system processes (health logistics, economy logistics, …) giving many local initiatives a local backbone architecture that enforces GDPR and ensures scalability.
Certification Service to test and approve local implementations to be using the PEPP-PT mechanisms as advertised and thus inheriting the privacy and security testing and approval PEPP-PT mechanisms offer.
Having a standardized approach that could be plugged into a variety of apps would allow for contacts tracing to work across borders — i.e. even if different apps are popular in different EU countries — an important consideration for the bloc, which has 27 Member States.
However there may be questions about the robustness of the privacy protection designed into the approach — if, for example, pseudonymized data is centralized on a server that doctors can access, there could be a risk of it leaking and being re-identified. And identification of individual device holders would be legally risky.
Europe’s lead data regulator, the EDPS, recently made a point of tweeting to warn an MEP (and former EC digital commissioner) about the legality of applying Singapore-style Bluetooth-powered contacts tracing in the EU — writing: “Please be cautious comparing Singapore examples with European situation. Remember Singapore has a very specific legal regime on identification of device holder.”

Dear Mr. Commissioner, please be cautious comparing Singapoore examples with European situation. Remember Singapore has a very specific legal regime on identification of device holder.
— Wojtek Wiewiorowski (@W_Wiewiorowski) March 27, 2020

A spokesman for the EDPS told us it’s in contact with data protection agencies of the Member States involved in the PEPP-PT project to collect “relevant information”.
“The general principles presented by EDPB on 20 March, and by EDPS on 24 March are still relevant in that context,” the spokesman added — referring to guidance issued by the privacy regulators last month in which they encouraged anonymization and aggregation should Member States want to use mobile location data for monitoring, containing or mitigating the spread of COVID-19. At least in the first instance.
“When it is not possible to only process anonymous data, the ePrivacy Directive enables Member States to introduce legislative measures to safeguard public security (Art. 15),” the EDPB further noted.
“If measures allowing for the processing of non-anonymised location data are introduced, a Member State is obliged to put in place adequate safeguards, such as providing individuals of electronic communication services the right to a judicial remedy.”
We reached out to the HHI with questions about the PEPP-PT project and were referred to Boos — but at the time of writing had been unable to speak to him.
“The PEPP-PT system is being created by a multi-national European team,” the HHI writes in a press release about the effort. “It is an anonymous and privacy-preserving digital contact tracing approach, which is in full compliance with GDPR and can also be used when traveling between countries through an anonymous multi-country exchange mechanism. No personal data, no location, no Mac-Id of any user is stored or transmitted. PEPP-PT is designed to be incorporated in national corona mobile phone apps as a contact tracing functionality and allows for the integration into the processes of national health services. The solution is offered to be shared openly with any country, given the commitment to achieve interoperability so that the anonymous multi-country exchange mechanism remains functional.”
“PEPP-PT’s international team consists of more than 130 members working across more than seven European countries and includes scientists, technologists, and experts from well-known research institutions and companies,” it adds.
“The result of the team’s work will be owned by a non-profit organization so that the technology and standards are available to all. Our priorities are the well being of world citizens today and the development of tools to limit the impact of future pandemics — all while conforming to European norms and standards.”
PEPP-PT says its technology-focused efforts are being financed through donations, and that it has adopted the WHO’s standards for such financing — to “avoid any external influence”, per its website.
Of course for the effort to be useful it relies on EU citizens voluntarily downloading one of the aligned contacts tracing apps — and carrying their smartphone everywhere they go, with Bluetooth enabled.
Without substantial penetration of regional smartphones it’s questionable how much of an impact this initiative, or any contacts tracing technology, could have. Although if such tech were able to break even some infection chains people might argue it’s not wasted effort.
Notably, there are signs Europeans are willing to contribute to a public healthcare cause by doing their bit digitally — such as a self-reporting COVID-19 tracking app which last week racked up 750,000 downloads in the UK in 24 hours.
But, at the same time, contacts tracing apps are facing scepticism over their ability to contribute to the fight against COVID-19. Not everyone carries a smartphone, nor knows how to download an app, for instance. There are plenty of people who would fall outside such a digital net.
Meanwhile, while there’s clearly been a big scramble across the region, at both government and grassroots level, to mobilize digital technology for a public health emergency cause, there’s arguably a greater imperative to direct effort and resources at scaling up coronavirus testing programs — an area where most European countries continue to lag.
Germany — where some of the key backers of the PEPP-PT are from — is the most notable exception.

Google’s new T&Cs include a Brexit ‘easter egg’ for UK users

Google has buried a major change in legal jurisdiction for its UK users as part of a wider update to its terms and conditions that’s been announced today and which it says is intended to make its conditions of use clearer for all users.
The update to its T&Cs is the first major revision since 2012, with Google saying it wanted to ensure the policy reflects its current products and applicable laws.
Google says it undertook a major review of the terms, similar to the revision of its privacy policy in 2018, when the EU’s General Data Protection Regulation started being applied. But while it claims the new T&Cs are easier for users to understand — rewritten using simpler language and a clearer structure — there are no other changes involved, such as to how it handles people’s data.
“We’ve updated our Terms of Service to make them easier for people around the world to read and understand — with clearer language, improved organization, and greater transparency about changes we make to our services and products. We’re not changing the way our products work, or how we collect or process data,” Google spokesperson Shannon Newberry said in a statement.
Users of Google products are being asked to review and accept the new terms before March 31 when they are due to take effect.
Reuters reported on the move late yesterday — citing sources familiar with the update who suggested the change of jurisdiction for UK users will weaken legal protections around their data.
However Google disputes there will be any change in privacy standards for UK users as a result of the shift. It told us there will be no change to how it processes UK users’ data; no change to their privacy settings; and no change to the way it treats their information as a result of the move.
We asked the company for further comment on this — including why it chose not to make a UK subsidiary the legal base for UK users — and a spokesperson told us it is making the change as part of its preparations for the UK to leave the European Union (aka Brexit).
“Like many companies, we have to prepare for Brexit,” Google said. “Nothing about our services or our approach to privacy will change, including how we collect or process data, and how we respond to law enforcement demands for users’ information. The protections of the UK GDPR will still apply to these users.”
Heather Burns, a tech policy specialist based in Glasgow, Scotland — who runs a website dedicated to tracking UK policy shifts around the Brexit process — also believes Google has essentially been forced to make the move because the UK government has recently signalled its intent to diverge from European Union standards in future, including on data protection.
“What has changed since January 31 has been [UK prime minister] Boris Johnson making a unilateral statement that the UK will go its own way on data protection, in direct contrast to everything the UK’s data protection regulator and government has said since the referendum,” she told us. “These bombastic, off-the-cuff statements play to his anti-EU base but businesses act on them. They have to.”
“Google’s transfer of UK accounts from the EU to the US is an indication that they do not believe the UK will either seek or receive a data protection adequacy agreement at the end of the transition period. They are choosing to deal with that headache now rather than later. We shouldn’t underestimate how strong a statement this is from the tech sector regarding its confidence in the Johnson premiership,” she added.
Asked whether she believes there will be a reduction in protections for UK users in future as a result of the shift Burns suggested that will largely depend on Google.
So — in other words — Brexit means, er, trust Google to look after your data.
“The European data protection framework is based around a set of fundamental user rights and controls over the uses of personal data — the everyday data flows to and from all of our accounts. Those fundamental rights have been transposed into UK domestic law through the Data Protection Act 2018, and they will stay, for now. But with the Johnson premiership clearly ready to jettison the European-derived system of user rights for the US-style anything goes model,” Burns suggested.
“Google saying there is no change to the way we process users’ data, no change to their privacy settings and no change to the way we treat their information can be taken as an indication that they stand willing to continue providing UK users with European-style rights over their data — albeit from a different jurisdiction — regardless of any government intention to erode the domestic legal basis for those rights.”
Reuters’ report also raises concerns about the impact of the Cloud Act agreement between the UK and the US — which is due to come into effect this summer — suggesting it will pose a threat to the safety of UK Google users’ data once it’s moved out of an EU jurisdiction (in this case Ireland) to the US where the Act will apply.
The Cloud Act is intended to make it quicker and easier for law enforcement to obtain data stored in the cloud by companies based in the other legal jurisdiction.
So in future, it might be easier for UK authorities to obtain UK Google users’ data using this legal instrument applied to Google US.
It certainly seems clear that as the UK moves away from EU standards as a result of Brexit it is opening up the possibility of the country replacing long-standing data protection rights for citizens with a regime of supercharged mass surveillance. (The UK government has already legislated to give its intelligence agencies unprecedented powers to snoop on ordinary citizens’ digital comms — so it has a proven appetite for bulk data.)
Again, Google told us the shift of legal base for its UK users will make no difference to how it handles law enforcement requests — a process it talks about here — and further claimed this will be true even when the Cloud Act applies. Which is a weasely way of saying it will do exactly what the law requires.
Google confirmed that GDPR will continue to apply for UK users during the transition period between the old and new terms. After that it said UK data protection law will continue to apply — emphasizing that this is modelled after the GDPR. But of course in the post-Brexit future the UK government might choose to model it after something very different.
Asked to confirm whether it’s committing to maintain current data standards for UK users in perpetuity, the company told us it cannot speculate as to what privacy laws the UK will adopt in the future…
We also asked why it hasn’t chosen to elect a UK subsidiary as the legal base for UK users. To which it gave a nonsensical response — saying this is because the UK is no longer in the EU. Which begs the question: when did the UK suddenly become the 51st American state?
Returning to the wider T&Cs revision, Google said it’s making the changes in response to litigation in the European Union targeted at its terms.
This includes a case in Germany where consumer rights groups successfully sued the tech giant over its use of overly broad terms which the court agreed last year were largely illegal.
In another case a year ago in France a court ordered Google to pay €30,000 for unfair terms — and ordered it to obtain valid consent from users for tracking their location and online activity.
Since at least 2016 the European Commission has also been pressuring tech giants, including Google, to fix consumer rights issues buried in their T&Cs — including unfair terms. A variety of EU laws apply in this area.
In another change being bundled with the new T&Cs Google has added a description about how its business works to the About Google page — where it explains its business model and how it makes money.
Here, among the usual ‘dead cat’ claims about not ‘selling your information’ (tl;dr adtech giants rent attention; they don’t need to sell actual surveillance dossiers), Google writes that it doesn’t use “your emails, documents, photos or confidential information (such as race, religion or sexual orientation) to personalize the ads we show you”.
Though it could be using all that personal stuff to help it build new products it can serve ads alongside.
Even further towards the end of its business model screed it includes the claim that “if you don’t want to see personalized ads of any kind, you can deactivate them at any time”. So, yes, buried somewhere in Google’s labyrinthine setting exists an opt out.
The change in how Google articulates its business model comes in response to growing political and regulatory scrutiny of adtech business models such as Google’s — including on data protection and antitrust grounds.

A new senate bill would create a US data protection agency

Europe’s data protection laws are some of the strictest in the world, and have long been a thorn in the side of the data-guzzling Silicon Valley tech giants since they colonized vast swathes of the internet.
Two decades later, one Democratic senator wants to bring many of those concepts to the United States.
Sen. Kirsten Gillibrand (D-NY) has published a bill which, if passed, would create a U.S. federal data protection agency designed to protect the privacy of Americans and with the authority to enforce data practices across the country. The bill, which Gillibrand calls the Data Protection Act, will address a “growing data privacy crisis” in the U.S., the senator said.
The U.S. is one of only a few countries without a data protection law, putting it in the same company as Venezuela, Libya, Sudan and Syria. Gillibrand said the U.S. is “vastly behind” other countries on data protection.
Gillibrand said a new data protection agency would “create and meaningfully enforce” data protection and privacy rights federally.
“The data privacy space remains a complete and total Wild West, and that is a huge problem,” the senator said.
The bill comes at a time when tech companies are facing increased attention from state and federal regulators over data and privacy practices. Last year saw Facebook settle a $5 billion privacy case with the Federal Trade Commission, which critics decried for failing to bring civil charges or levy any meaningful consequences. Months later, Google settled a child privacy case for $170 million — about a day’s worth of the search giant’s revenue.
Gillibrand pointedly called out Google and Facebook in a Medium post for “making a whole lot of money” from their empires of data. Americans “deserve to be in control of your own data,” she wrote.
At its heart, the bill would — if signed into law — allow the newly created agency to hear and adjudicate complaints from consumers and declare certain privacy invading tactics as unfair and deceptive. As the government’s “referee,” the agency would let it take point on federal data protection and privacy matters, such as launching investigations against companies accused of wrongdoing. Gillibrand’s bill specifically takes issue with “take-it-or-leave-it” provisions, notably websites that compel a user to “agree” to allowing cookies with no way to opt-out. (TechCrunch’s parent company Verizon Media enforces a ‘consent required’ policy for European users under GDPR, though most Americans never see the prompt.)
Through its enforcement arm, the would-be federal agency would also have the power to bring civil action against companies, and to fine companies for egregious breaches of the law up to $1 million a day, subject to a court’s approval.
The bill would transfer some authorities from the Federal Trade Commission to the new data protection agency.
Gillibrand’s bill lands just a month after California’s consumer privacy law took effect, more than a year after it was signed into law. The law extended much of Europe’s revised privacy laws, known as GDPR, to the state. But Gillibrand’s bill would not affect state laws like California’s, her office confirmed in an email.
Privacy groups and experts have already offered positive reviews.
Caitriona Fitzgerald, policy director at the Electronic Privacy Information Center, said the bill is a “bold, ambitious proposal.” Other groups, including Color of Change and Consumer Action, praised the effort to establish a federal data protection watchdog.
Michelle Richardson, director of the Privacy and Data Project at the Center for Democracy and Technology, reviewed a summary of the bill.
“The summary seems to leave a lot of discretion to executive branch regulators,” said Richardson. “Many of these policy decisions should be made by Congress and written clearly into statute.” She warned it could take years to know if the new regime has any meaningful impact on corporate behaviors.
Gillibrand’s bill stands alone — the senator is the only sponsor on the bill. But given the appetite of some lawmakers on both sides of the aisle to crash the Silicon Valley data party, it’s likely to pick up bipartisan support in no time.
Whether it makes it to the president’s desk without a fight from the tech giants remains to be seen.


ACLU says it’ll fight DHS efforts to use app locations for deportations

The American Civil Liberties Union plans to fight newly revealed practices by the Department of Homeland Security which used commercially available cell phone location data to track suspected illegal immigrants.
“DHS should not be accessing our location information without a warrant, regardless whether they obtain it by paying or for free. The failure to get a warrant undermines Supreme Court precedent establishing that the government must demonstrate probable cause to a judge before getting some of our most sensitive information, especially our cell phone location history,” said Nathan Freed Wessler, a staff attorney with the ACLU’s Speech, Privacy, and Technology Project.
Earlier today, The Wall Street Journal reported that the Department of Homeland Security, through its Immigration and Customs Enforcement (ICE) and Customs & Border Protection (CBP) agencies, has been buying geolocation data from commercial entities to investigate suspects of alleged immigration violations.
The location data, which aggregators acquire from cellphone apps including games, weather, shopping, and search services, is being used by Homeland Security to detect undocumented immigrants and others entering the U.S. unlawfully, the Journal reported.
According to privacy experts interviewed by the Journal, since the data is publicly available for purchase, the government practices don’t appear to violate the law — despite being what may be the largest dragnet ever conducted by the U.S. government using the aggregated data of its citizens.
It’s also an example of how the commercial surveillance apparatus put in place by private corporations in democratic societies can be legally accessed by state agencies to create the same kind of surveillance networks used in more authoritarian countries like China, India, and Russia.

“This is a classic situation where creeping commercial surveillance in the private sector is now bleeding directly over into government,” Alan Butler, general counsel of the Electronic Privacy Information Center, a think tank that pushes for stronger privacy laws, told the newspaper.
Behind the government’s use of commercial data is a company called Venntel. Based in Herndon, Va., the company acts as a government contractor and shares a number of its executive staff with Gravy Analytics, a mobile-advertising marketing analytics company. In all, ICE and the CBP have spent nearly $1.3 million on licenses for software that can provide location data for cell phones. Homeland Security says the data from these commercially available records is used to generate leads about border crossings and to detect human traffickers.
The ACLU’s Wessler has won these kinds of cases in the past. He successfully argued before the Supreme Court in the case of Carpenter v. United States that geographic location data from cellphones was a protected class of information and couldn’t be obtained by law enforcement without a warrant.

CBP explicitly excludes cell tower data from the information it collects from Venntel, a spokesperson for the agency told the Journal — in part because it has to under the law. The agency also said that it only accesses limited location data, and that the data is anonymized.

However, anonymized data can be linked to specific individuals by correlating the anonymous cell phone information with the real-world movements of specific people, which can be easily deduced or tracked through other types of public records and publicly available social media.
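A toy example illustrates the point: if an investigator already knows a handful of places a particular person frequents, the “anonymous” device whose pings keep turning up at those places is a strong candidate for being that person’s phone. The coordinates, matching radius and scoring below are invented purely for illustration.

```python
# Toy re-identification sketch: the data, radius and scoring are made up,
# but the correlation logic is why "anonymized" location data is so weak.
from collections import defaultdict

def near(ping, place, radius=0.001):
    """Crude proximity check on (lat, lon) pairs; roughly a city block."""
    return abs(ping[0] - place[0]) < radius and abs(ping[1] - place[1]) < radius

def candidate_devices(pings_by_device, known_places):
    """pings_by_device: {anonymous_device_id: [(lat, lon), ...]}
       known_places: [(lat, lon), ...] the person of interest is known to visit."""
    scores = defaultdict(int)
    for device_id, pings in pings_by_device.items():
        for place in known_places:
            if any(near(p, place) for p in pings):
                scores[device_id] += 1
    # Devices seen at every known place are strong re-identification candidates.
    return [d for d, s in scores.items() if s == len(known_places)]
```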
ICE is already being sued by the ACLU for another potential privacy violation. Late last year the ACLU said that it was taking the government to court over the DHS service’s use of so-called “stingray” technology that spoofs a cell phone tower to determine someone’s location.


At the time, the ACLU cited a government oversight report in 2016 which indicated that both CBP and ICE collectively spent $13 million on buying dozens of stingrays, which the agencies used to “locate people for arrest and prosecution.”

Blackbox welfare fraud detection system breaches human rights, Dutch court rules

An algorithmic risk scoring system deployed by the Dutch state to try to predict the likelihood that social security claimants will commit benefits or tax fraud breaches human rights law, a court in the Netherlands has ruled.
The Dutch government’s System Risk Indication (SyRI) legislation uses a non-disclosed algorithmic risk model to profile citizens and has been exclusively targeted at neighborhoods with mostly low-income and minority residents. Human rights campaigners have dubbed it a ‘welfare surveillance state’.
A number of civil society organizations in the Netherlands and two citizens instigated the legal action against SyRI — seeking to block its use. The court has today ordered an immediate halt to the use of the system.
The ruling is being hailed as a landmark judgement by human rights campaigners, with the court basing its reasoning on European human rights law — specifically the right to a private life that’s set out by Article 8 of the European Convention on Human Rights (ECHR) — rather than a dedicated provision in the EU’s data protection framework (GDPR) which relates to automated processing.
GDPR’s Article 22 includes the right for individuals not to be subject to solely automated individual decision-making where such decisions produce legal or similarly significant effects on them. But there can be some fuzziness around whether this applies if there’s a human somewhere in the loop, such as to review a decision on objection.
In this instance the court has sidestepped such questions by finding SyRI directly interferes with rights set out in the ECHR.
Specifically, the court found that the SyRI legislation fails the balancing test in Article 8 of the ECHR, which requires that any social interest be weighed against the violation of individuals’ private life — with a fair and reasonable balance being required.
In its current form the automated risk assessment system failed this test, in the court’s view.
Legal experts suggest the decision sets some clear limits on how the public sector in the UK can make use of AI tools — with the court objecting in particular to the lack of transparency about how the algorithmic risk scoring system functioned.
In a press release about the judgement (translated to English using Google Translate) the court writes that the use of SyRI is “insufficiently clear and controllable”. While, per Human Rights Watch, the Dutch government refused during the hearing to disclose “meaningful information” about how SyRI uses personal data to draw inferences about possible fraud.
The court clearly took a dim view of the state trying to circumvent scrutiny of human rights risk by pointing to an algorithmic ‘blackbox’ and shrugging.

The Court’s reasoning doesn’t imply there should be full disclosure, but it clearly expects much more robust information on the way (objective criteria) that the model and scores were developed and the way in which particular risks for individuals were addressed.
— Joris van Hoboken (@jorisvanhoboken) February 6, 2020

The UN special rapporteur on extreme poverty and human rights, Philip Alston — who intervened in the case by providing the court with a human rights analysis — welcomed the judgement, describing it as “a clear victory for all those who are justifiably concerned about the serious threats digital welfare systems pose for human rights”.
“This decision sets a strong legal precedent for other courts to follow. This is one of the first times a court anywhere has stopped the use of digital technologies and abundant digital information by welfare authorities on human rights grounds,” he added in a press statement.
Back in 2018 Alston warned that the UK government’s rush to apply digital technologies and data tools to socially re-engineer the delivery of public services at scale risked having an immense impact on the human rights of the most vulnerable.
So the decision by the Dutch court could have some near-term implications for UK policy in this area.
The judgement does not shut the door on states’ use of automated profiling systems entirely — but does make it clear that in Europe human rights law must be central to the design and implementation of rights-risking tools.
It also comes at a key time when EU policymakers are working on a framework to regulate artificial intelligence — with the Commission pledging to devise rules that ensure AI technologies are applied ethically and in a human-centric way.
It remains to be seen whether the Commission will push for pan-EU limits on specific public sector uses of AI — such as for social security assessments. A recent leaked draft of a white paper on AI regulation suggests it’s leaning towards risk-assessments and a patchwork of risk-based rules. 

Tech companies, we see through your flimsy privacy promises

There’s a reason why Data Privacy Day pisses me off.
January 28 was the annual “Hallmark holiday” for cybersecurity, ostensibly a day devoted to promoting data privacy awareness and staying safe online. This year, as in recent years, it has become a launching pad for marketing fluff and promoting privacy practices that don’t hold up.
Privacy has become a major component of our wider views on security, and it’s in sharper focus than ever as we see multiple examples of companies that harvest too much of our data, share it with others, sell it to advertisers and third parties and use it to track our every move so they can squeeze out a few more dollars.
But as we become more aware of these issues, companies large and small clamor for attention about how their privacy practices are good for users. All too often, companies make hollow promises and empty claims that look fancy and meaningful.

London’s Met Police switches on live facial recognition, flying in face of human rights concerns

While EU lawmakers are mulling a temporary ban on the use of facial recognition to safeguard individuals’ rights, as part of a risk-focused plan to regulate AI, London’s Met Police has today forged ahead with deploying the privacy-hostile technology — flipping the switch on operational use of live facial recognition in the UK capital.
The deployment comes after a multi-year period of trials by the Met and police in South Wales.
The Met says its use of the controversial technology will be targeted to “specific locations… where intelligence suggests we are most likely to locate serious offenders”.
“Each deployment will have a bespoke ‘watch list’, made up of images of wanted individuals, predominantly those wanted for serious and violent offences,” it adds.
It also claims cameras will be “clearly signposted”, adding that officers deployed to the operation will hand out leaflets about the activity.
“At a deployment, cameras will be focused on a small, targeted area to scan passers-by,” it writes. “The technology, which is a standalone system, is not linked to any other imaging system, such as CCTV, body worn video or ANPR.”
The biometric system is being provided to the Met by Japanese IT and electronics giant, NEC.
In a press statement, assistant commissioner Nick Ephgrave claimed the force is taking a balanced approach to using the controversial tech.
“We all want to live and work in a city which is safe: the public rightly expect us to use widely available technology to stop criminals. Equally I have to be sure that we have the right safeguards and transparency in place to ensure that we protect people’s privacy and human rights. I believe our careful and considered deployment of live facial recognition strikes that balance,” he said.
London has seen a rise in violent crime in recent years, with murder rates hitting a ten-year peak last year.
The surge in violent crime has been linked to cuts to policing services — although the new Conservative government has pledged to reverse cuts enacted by earlier Tory administrations.
The Met says its hope for the AI-powered tech is that it will help it tackle serious crime, including serious violence, gun and knife crime, child sexual exploitation, and “help protect the vulnerable”.
However its phrasing is not a little ironic, given that facial recognition systems can be prone to racial bias, for example, owing to factors such as bias in data-sets used to train AI algorithms.
So in fact there’s a risk that police-use of facial recognition could further harm vulnerable groups who already face a disproportionate risk of inequality and discrimination.
Yet the Met’s PR doesn’t mention the risk of the AI tech automating bias.
Instead it takes pains to couch the technology as an “additional tool” to assist its officers.
“This is not a case of technology taking over from traditional policing; this is a system which simply gives police officers a ‘prompt’, suggesting “that person over there may be the person you’re looking for”, it is always the decision of an officer whether or not to engage with someone,” it adds.
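NEC has not published how its system works, but the human-in-the-loop flow the Met describes boils down to something like the sketch below: a face is compared against the watch list, anything scoring above a threshold merely raises a prompt, and the decision to engage stays with an officer. The embedding model, similarity measure and threshold here are assumptions for illustration.

```python
# Illustrative sketch of a watch-list "prompt" flow; the matching model,
# similarity measure and threshold are assumptions, not NEC's system.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def prompts_for_face(face_embedding, watchlist, threshold=0.8):
    """watchlist: list of (person_ref, embedding) pairs for wanted individuals.
    Returns possible matches, best first — a suggestion only, never a decision."""
    hits = []
    for person_ref, wanted_embedding in watchlist:
        score = cosine_similarity(face_embedding, wanted_embedding)
        if score >= threshold:
            hits.append((person_ref, score))
    # Whether or not to engage with the person remains an officer's call.
    return sorted(hits, key=lambda h: h[1], reverse=True)
```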
While the use of a new tech tool may start with small deployments, as is being touted here, the history of software development underlines how the potential to scale is readily baked in.
A ‘targeted’ small-scale launch also prepares the ground for London’s police force to push for wider public acceptance of a highly controversial and rights-hostile technology via a gradual building out process. Aka surveillance creep.
On the flip side, the text of the draft of an EU proposal for regulating AI which leaked last week — floating the idea of a temporary ban on facial recognition in public places — noted that a ban would “safeguard the rights of individuals”. Although it’s not yet clear whether the Commission will favor such a blanket measure, even temporarily.
UK rights groups have reacted with alarm to the Met’s decision to ignore concerns about facial recognition.
Liberty accused the force of ignoring the conclusion of a report it commissioned during an earlier trial of the tech — which it says concluded the Met had failed to consider human rights impacts.
It also suggested such use would not meet key legal requirements.
“Human rights law requires that any interference with individuals’ rights be in accordance with the law, pursue a legitimate aim, and be ‘necessary in a democratic society’,” the report notes, suggesting the Met’s earlier trials of facial recognition tech “would be held unlawful if challenged before the courts”.

When the Met trialled #FacialRecognition tech, it commissioned an independent review of its use.
Its conclusions:
The Met failed to consider the human rights impact of the tech.
Its use was unlikely to pass the key legal test of being “necessary in a democratic society”.
— Liberty (@libertyhq) January 24, 2020

A petition set up by Liberty to demand a stop to facial recognition in public places has passed 21,000 signatures.
Discussing the legal framework around facial recognition and law enforcement last week, Dr Michael Veale, a lecturer in digital rights and regulation at UCL, told us that in his view the EU’s data protection framework, GDPR, forbids facial recognition by private companies “in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate”.
A UK man who challenged a Welsh police force’s trial of facial recognition has a pending appeal after losing the first round of a human rights challenge. Although in that case the challenge pertains to police use of the tech — rather than, as in the Met’s case, a private company (NEC) providing the service to the police.

UK watchdog sets out “age appropriate” design code for online services to keep kids’ privacy safe

The UK’s data protection watchdog has today published a set of design standards for Internet services which are intended to help protect the privacy of children online.
The Information Commissioner’s Office (ICO) has been working on the Age Appropriate Design Code since the 2018 update of domestic data protection law — as part of a government push to create ‘world-leading’ standards for children when they’re online.
UK lawmakers have grown increasingly concerned about the ‘datafication’ of children when they go online and may be too young to legally consent to being tracked and profiled under existing European data protection law.
The ICO’s code is comprised of 15 standards of what it calls “age appropriate design” — which the regulator says reflects a “risk-based approach” — including stipulating that settings should be set to ‘high privacy’ by default; that only the minimum amount of data needed to provide the service should be collected and retained; and that children’s data should not be shared unless there’s a reason to do so that’s in their best interests.
Profiling should also be off by default. While the code also takes aim at dark pattern UI designs that seek to manipulate user actions against their own interests, saying “nudge techniques” should not be used to “lead or encourage children to provide unnecessary personal data or weaken or turn off their privacy protections”.
“The focus is on providing default settings which ensures that children have the best possible access to online services whilst minimising data collection and use, by default,” the regulator writes in an executive summary.
While the age appropriate design code is focused on protecting children, it applies to a very broad range of online services — with the regulator noting that “the majority of online services that children use are covered” and also stipulating “this code applies if children are likely to use your service” [emphasis ours].
This means it could be applied to anything from games, to social media platforms to fitness apps to educational websites and on-demand streaming services — if they’re available to UK users.
“We consider that for a service to be ‘likely’ to be accessed [by children], the possibility of this happening needs to be more probable than not. This recognises the intention of Parliament to cover services that children use in reality, but does not extend the definition to cover all services that children could possibly access,” the ICO adds.
Here are the 15 standards in full as the regulator describes them (a rough sketch of what the default-settings standards could look like in code follows the list):
Best interests of the child: The best interests of the child should be a primary consideration when you design and develop online services likely to be accessed by a child.
Data protection impact assessments: Undertake a DPIA to assess and mitigate risks to the rights and freedoms of children who are likely to access your service, which arise from your data processing. Take into account differing ages, capacities and development needs and ensure that your DPIA builds in compliance with this code.
Age appropriate application: Take a risk-based approach to recognising the age of individual users and ensure you effectively apply the standards in this code to child users. Either establish age with a level of certainty that is appropriate to the risks to the rights and freedoms of children that arise from your data processing, or apply the standards in this code to all your users instead.
Transparency: The privacy information you provide to users, and other published terms, policies and community standards, must be concise, prominent and in clear language suited to the age of the child. Provide additional specific ‘bite-sized’ explanations about how you use personal data at the point that use is activated.
Detrimental use of data: Do not use children’s personal data in ways that have been shown to be detrimental to their wellbeing, or that go against industry codes of practice, other regulatory provisions or Government advice.
Policies and community standards: Uphold your own published terms, policies and community standards (including but not limited to privacy policies, age restriction, behaviour rules and content policies).
Default settings: Settings must be ‘high privacy’ by default (unless you can demonstrate a compelling reason for a different default setting, taking account of the best interests of the child).
Data minimisation: Collect and retain only the minimum amount of personal data you need to provide the elements of your service in which a child is actively and knowingly engaged. Give children separate choices over which elements they wish to activate.
Data sharing: Do not disclose children’s data unless you can demonstrate a compelling reason to do so, taking account of the best interests of the child.
Geolocation: Switch geolocation options off by default (unless you can demonstrate a compelling reason for geolocation to be switched on by default, taking account of the best interests of the child). Provide an obvious sign for children when location tracking is active. Options which make a child’s location visible to others must default back to ‘off’ at the end of each session.
Parental controls: If you provide parental controls, give the child age appropriate information about this. If your online service allows a parent or carer to monitor their child’s online activity or track their location, provide an obvious sign to the child when they are being monitored.
Profiling: Switch options which use profiling ‘off’ by default (unless you can demonstrate a compelling reason for profiling to be on by default, taking account of the best interests of the child). Only allow profiling if you have appropriate measures in place to protect the child from any harmful effects (in particular, being fed content that is detrimental to their health or wellbeing).
Nudge techniques: Do not use nudge techniques to lead or encourage children to provide unnecessary personal data or weaken or turn off their privacy protections.
Connected toys and devices: If you provide a connected toy or device ensure you include effective tools to enable conformance to this code.
Online tools: Provide prominent and accessible tools to help children exercise their data protection rights and report concerns.
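As flagged above, here is a rough sketch of how the ‘default settings’, ‘geolocation’ and ‘profiling’ standards might translate into a service’s defaults. The field names are invented for illustration — the ICO’s code prescribes outcomes, not an API.

```python
# Illustrative sketch only: invented field names showing 'high privacy by
# default' settings in the spirit of the ICO's standards, not an official schema.
from dataclasses import dataclass

@dataclass
class ChildAccountDefaults:
    profiling_enabled: bool = False           # profiling off by default
    geolocation_enabled: bool = False         # geolocation off by default
    location_visible_to_others: bool = False  # and never sticky across sessions
    data_shared_with_third_parties: bool = False  # no sharing without a compelling reason
    collected_fields: tuple = ("account_id",)     # data minimisation: only what's needed

    def end_session(self) -> None:
        # Options making a child's location visible to others must default
        # back to 'off' at the end of each session, per the geolocation standard.
        self.location_visible_to_others = False
```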
The Age Appropriate Design Code also defines children as under the age of 18 — which offers a higher bar than current UK data protection law which, for example, sets only a 13-year age limit for children to be legally able to give their consent to being tracked online.
So — assuming (very wildly) that Internet services were suddenly to decide to follow the code to the letter, setting trackers off by default and not nudging users to weaken privacy-protecting defaults by manipulating them to give up more data — the code could, in theory, raise the level of privacy both children and adults typically get online.
However it’s not legally binding — so there’s a pretty fat chance of that.
Although the regulator does make a point of noting that the standards in the code are backed by existing data protection laws, which it does regulate and can legally enforce — pointing out that it has powers to take action against law breakers, including “tough sanctions” such as orders to stop processing data and fines of up to 4% of a company’s global turnover.
So, in a way, the regulator appears to be saying: ‘Are you feeling lucky data punk?’
Last April the UK government published a white paper setting out its proposals for regulating a range of online harms — including seeking to address concern about inappropriate material that’s available on the Internet being accessed by children.
The ICO’s Age Appropriate Design Code is intended to support that effort. So there’s also a chance that some of the same sorts of stipulations could be baked into the planned online harms bill.
“This is not, and will not be, ‘law’. It is just a code of practice,” said Neil Brown, an Internet, telecoms and tech lawyer at Decoded Legal, discussing the likely impact of the suggested standards. “It shows the direction of the ICO’s thinking, and its expectations, and the ICO has to have regard to it when it takes enforcement action but it’s not something with which an organisation needs to comply as such. They need to comply with the law, which is the GDPR [General Data Protection Regulation] and the DPA [Data Protection Act] 2018.
“The code of practice sits under the DPA 2018, so companies which are within the scope of that are likely to want to understand what it says. The DPA 2018 and the UK GDPR (the version of the GDPR which will be in place after Brexit) covers controllers established in the UK, as well as overseas controllers which target services to people in the UK or monitor the behaviour of people in the UK. Merely making a service available to people in the UK should not be sufficient.”
“Overall, this is consistent with the general direction of travel for online services, and the perception that more needs to be done to protect children online,” Brown also told us.
“Right now, online services should be working out how to comply with the GDPR, the ePrivacy rules, and any other applicable laws. The obligation to comply with those laws does not change because of today’s code of practice. Rather, the code of practice shows the ICO’s thinking on what compliance might look like (and, possibly, goldplates some of the requirements of the law too).”
Organizations that choose to take note of the code — and are in a position to be able to demonstrate they’ve followed its standards — stand a better chance of persuading the regulator they’ve complied with relevant privacy laws, per Brown.
“Conversely, if they want to say that they comply with the law but not with the code, that is (legally) possible, but might be more of a struggle in terms of engagement with the ICO,” he added.
Zooming back out, the government said last fall that it’s committed to publishing draft online harms legislation for pre-legislative scrutiny “at pace”.
But at the same time it dropped a controversial plan included in a 2017 piece of digital legislation which would have made age checks for accessing online pornography mandatory — saying it wanted to focus on developing “the most comprehensive approach possible to protecting children”, i.e. via the online harms bill.


How comprehensive the touted ‘child protections’ will end up being remains to be seen.
Brown suggested age verification could come through as a “general requirement”, given the age verification component of the Digital Economy Act 2017 was dropped — and “the government has said that these will be swept up in the broader online harms piece”.
It has also been consulting with tech companies on possible ways to implement age verification online.
The difficulties of regulating perpetually iterating Internet services — many of which are also operated by companies based outside the UK — have been writ large for years. (And are mired in geopolitics.)
While the enforcement of existing European digital privacy laws remains, to put it politely, a work in progress…


Mass surveillance for national security does conflict with EU privacy rights, court advisor suggests

Mass surveillance regimes in the UK, Belgium and France which require bulk collection of digital data for a national security purpose may be at least partially in breach of fundamental privacy rights of European Union citizens, per the opinion of an influential advisor to Europe’s top court issued today.
Advocate general Campos Sánchez-Bordona’s (non-legally binding) opinion, which pertains to four references to the Court of Justice of the European Union (CJEU), takes the view that EU law covering the privacy of electronic communications applies in principle when providers of digital services are required by national laws to retain subscriber data for national security purposes.
A number of cases related to EU states’ surveillance powers and citizens’ privacy rights are dealt with in the opinion, including legal challenges brought by rights advocacy group Privacy International to bulk collection powers enshrined in the UK’s Investigatory Powers Act; and a La Quadrature du Net (and others’) challenge to a 2015 French decree related to specialized intelligence services.
At stake is a now familiar argument: Privacy groups contend that states’ bulk data collection and retention regimes have overreached the law, becoming so indiscriminately intrusive as to breach fundamental EU privacy rights — while states counter-claim they must collect and retain citizens’ data in bulk in order to fight national security threats such as terrorism.
Hence, in recent years, we’ve seen attempts by certain EU Member States to create national frameworks which effectively rubberstamp swingeing surveillance powers — that then, in turn, invite legal challenge under EU law.
The AG opinion holds with previous case law from the CJEU — specifically the Tele2 Sverige and Watson judgments — that “general and indiscriminate retention of all traffic and location data of all subscribers and registered users is disproportionate”, as the press release puts it.
Instead the recommendation is for “limited and discriminate retention” — with also “limited access to that data”.
“The Advocate General maintains that the fight against terrorism must not be considered solely in terms of practical effectiveness, but in terms of legal effectiveness, so that its means and methods should be compatible with the requirements of the rule of law, under which power and strength are subject to the limits of the law and, in particular, to a legal order that finds in the defence of fundamental rights the reason and purpose of its existence,” runs the PR in a particularly elegant passage summarizing the opinion.
The French legislation is deemed to fail on a number of fronts, including for imposing “general and indiscriminate” data retention obligations, and for failing to include provisions to notify data subjects that their information is being processed by a state authority where such notifications are possible without jeopardizing its action.
Belgian legislation also falls foul of EU law, per the opinion, for imposing a “general and indiscriminate” obligation on digital service providers to retain data — with the AG also flagging that its objectives are problematically broad (“not only the fight against terrorism and serious crime, but also defence of the territory, public security, the investigation, detection and prosecution of less serious offences”).
The UK’s bulk surveillance regime is similarly seen by the AG to fail the core “general and indiscriminate collection” test.
There’s a slight carve-out: national legislation that’s incompatible with EU law may, in Sánchez-Bordona’s view, be permitted to maintain its effects “on an exceptional and temporary basis”. But only if such a situation is justified by what is described as “overriding considerations relating to threats to public security or national security that cannot be addressed by other means or other alternatives, but only for as long as is strictly necessary to correct the incompatibility with EU law”.
If the court follows the opinion it’s possible states might seek to interpret such an exceptional provision as a degree of wiggle room to keep unlawful regimes running further past their legal sell-by-date.
Similarly, there could be questions over what exactly constitutes “limited” and “discriminate” data collection and retention — which could encourage states to push a ‘maximal’ interpretation of where the legal line lies.
Nonetheless, privacy advocates are viewing the opinion as a positive sign for the defence of fundamental rights.
In a statement welcoming the opinion, Privacy International dubbed it “a win for privacy”. “We all benefit when robust rights schemes, like the EU Charter of Fundamental Rights, are applied and followed,” said legal director, Caroline Wilson Palow. “If the Court agrees with the AG’s opinion, then unlawful bulk surveillance schemes, including one operated by the UK, will be reined in.”
The CJEU will issue its ruling at a later date — typically three to six months after an AG opinion.
The opinion comes at a key time given European Commission lawmakers are set to rethink a plan to update the ePrivacy Directive, which deals with the privacy of electronic communications, after Member States failed to reach agreement last year over an earlier proposal for an ePrivacy Regulation — so the AG’s view will likely feed into that process.

This makes the revised e-Privacy Regulation a *huge* national security battleground for the MSes (they will miss the UK fighting for more surveillance) and is v relevant also to the ongoing debates on “bulk”/mass surveillance, and MI5’s latest requests… #ePR
— Ian Brown (@1Br0wn) January 15, 2020

The opinion may also have an impact on other legislative processes — such as the talks on the EU e-evidence package and negotiations on various international agreements on cross-border access to e-evidence — according to Luca Tosoni, a research fellow at the Norwegian Research Center for Computers and Law at the University of Oslo.
“It is worth noting that, under Article 4(2) of the Treaty on the European Union, ‘national security remains the sole responsibility of each Member State’. Yet, the advocate general’s opinion suggests that this provision does not exclude that EU data protection rules may have direct implications for national security,” Tosoni also pointed out.
“Should the Court decide to follow the opinion… ‘metadata’ such as traffic and location data will remain subject to a high level of protection in the European Union, even when they are accessed for national security purposes.  This would require several Member States — including Belgium, France, the UK and others — to amend their domestic legislation.”

Ex-Google policy chief dumps on the tech giant for dodging human rights

Google’s ex-head of international relations, Ross LaJeunesse — who clocked up more than a decade working in government and policy-related roles for the tech giant before departing last year — has become the latest (former) Googler to lay into the company for falling short of its erstwhile “don’t be evil” corporate motto.
Worth noting right off the bat: LaJeunesse is making his own pitch to be elected as a U.S. senator for the Democrats in Maine, where he’s pitting himself against sitting Republican, Susan Collins. So this lengthy blog post, in which he sets out his reasons for joining (“making the world better and more equal”) and — at long last — exiting Google, does look like an exercise in New Year reputation ‘exfoliation’, shall we say.
One that’s intended to anticipate and deflect any critical questions he may face on the campaign trail, given his many years of service to Mountain View. Hence the inclusion of overt political messaging, such as lines like: “No longer can massive tech companies like Google be permitted to operate relatively free from government oversight.”
Still, the post makes more awkward reading for Google. (Albeit, less awkward than the active employee activism the company continues to face over a range of issues — from its corporate culture and attitude towards diversity to product dev ethics.)
LaJeunesse claims that (unnamed) senior management actively evaded his attempts to push the company to adopt a company-wide Human Rights program that would, as he tells it, “publicly commit Google to adhere to human rights principles found in the UN Declaration of Human Rights, provide a mechanism for product and engineering teams to seek internal review of product design elements, and formalize the use of Human Rights Impact Assessments for all major product launches and market entries”.
“[E]ach time I recommended a Human Rights Program, senior executives came up with an excuse to say no,” LaJeunesse alleges, going on to claim that he was subsequently side-lined in policy discussions related to a censored search project Google had been working on to enable it to return to the Chinese market.
The controversial project, code-named Dragonfly, was later shut down, per LaJeunesse’s telling, after Congress raised questions — backing up the blog’s overarching theme that only political scrutiny can put meaningful limits on powerful technologists. (Check that already steady drumbeat for the 2020 US elections.)
He writes:
At first, [Google senior executives] said human rights issues were better handled within the product teams, rather than starting a separate program. But the product teams weren’t trained to address human rights as part of their work. When I went back to senior executives to again argue for a program, they then claimed to be worried about increasing the company’s legal liability. We provided the opinion of outside experts who re-confirmed that these fears were unfounded. At this point, a colleague was suddenly re-assigned to lead the policy team discussions for Dragonfly. As someone who had consistently advocated for a human rights-based approach, I was being sidelined from the on-going conversations on whether to launch Dragonfly. I then realized that the company had never intended to incorporate human rights principles into its business and product decisions. Just when Google needed to double down on a commitment to human rights, it decided to instead chase bigger profits and an even higher stock price.
Reached for comment, a Google spokesman sent us this statement, attributed to a Google spokeswoman: “We have an unwavering commitment to supporting human rights organisations and efforts. That commitment is unrelated to and unaffected by the reorganisation of our policy team, which was widely reported and which impacted many members of the team. As part of this reorganisation, Ross was offered a new position at the exact same level and compensation, which he declined to accept. We wish Ross all the best with his political ambitions.”
LaJeunesse’s blog post also lays into Google’s workplace culture — making allegations that bullying and racist stereotyping were commonplace.
Even, apparently, during attempts by management to actively engage with the issue of diversity…
It was no different in the workplace culture. Senior colleagues bullied and screamed at young women, causing them to cry at their desks. At an all-hands meeting, my boss said, “Now you Asians come to the microphone too. I know you don’t like to ask questions.” At a different all-hands meeting, the entire policy team was separated into various rooms and told to participate in a “diversity exercise” that placed me in a group labeled “homos” while participants shouted out stereotypes such as “effeminate” and “promiscuous.” Colleagues of color were forced to join groups called “Asians” and “Brown people” in other rooms nearby.
We’ve asked Google for comment on these allegations and will update this post with any response.
It’s clearly a sign of the ‘techlash’ times that an ex-Googler, who’s now a senator-in-the-running, believes there’s political capital to be made by publicly unloading on his former employer. 
“The role of these companies in our daily lives, from how we run our elections to how we entertain and educate our children, is just too great to leave in the hands of executives who are accountable only to their controlling shareholders who — in the case of Google, Amazon, Facebook and Snap — happen to be fellow company insiders and founders,” LaJeunesse goes on to write, widening his attack to incorporate other FAANG giants.
Expect plenty more such tech giant piñata-bashing in the run-up to November’s ballot.

Just because it’s legal, it doesn’t mean it’s right

Polina Arsentyeva
Contributor

Polina Arsentyeva, a former commercial litigator, is a data privacy attorney who counsels fintech and startup clients on how to innovate using data in a transparent and privacy-forward way.

Companies often tout their compliance with industry standards — I’m sure you’ve seen the logos, stamps and “Privacy Shield Compliant” declarations. As we, and the FTC, were reminded a few months ago, that label does not mean the criteria were met initially, much less years later when finally subjected to government review.
Alastair Mactaggart — an activist who helped promote the California Consumer Privacy Act (CCPA) — has threatened a ballot initiative that would allow companies to voluntarily certify compliance with CCPA 2.0 to a state privacy agency that has yet to be formed. While that kind of advertising seems like a no-brainer for companies looking to stay competitive in a market that values privacy and security, is it actually? Business considerations aside, is there a moral obligation to comply with all existing privacy laws, and is a company unethical for relying on exemptions from such laws?
I reject the notion that compliance with the law and morality are the same thing — or that one denotes the other. In reality, it’s a nuanced decision based on cost, client base, risk tolerance and other factors. Moreover, giving voluntary compliance the appearance of additional trust or altruism is actually harmful to consumers because our current system does not permit effective or timely oversight and the type of remedies available after the fact do not address the actual harms suffered.
It’s not unethical to rely on an exemption
Compliance is not tied to morality.
At its heart is a cost analysis, and a nuanced analysis at that. Privacy laws — as much as legislators want to believe otherwise — are not black and white in their implementation. Not all unregulated data collection is nefarious and not all companies that comply (voluntarily or otherwise) are purely altruistic. While penalties have a financial cost, data collection is a revenue source for many because of the knowledge and insights gained from large stores of varied data — and other companies’ need to access that data.
Companies weigh the cost of building compliant systems and processes, and of amending existing agreements with what are often thousands of service providers, against the business lost by not being able to provide those services to consumers covered by those laws.
There is also the matter of which laws apply. Complying with one law may interfere with, or lessen, the protections offered by the laws that make you exempt in the first place: for instance, where one law prohibits you from sharing certain information for security reasons and another would require you to disclose it, making both the data and the person less secure.
Strict compliance also allows companies to rest on their laurels while taking advantage of a privacy-first reputation. The law is the minimum standard, while ethics are meant to prescribe the maximum. Complying, even with an inapplicable law, is quite literally the least the company can do. It also puts companies in a position of not making additional choices or innovating, because they have already done more than what is expected. This is particularly true with technology-based laws, where legislation often lags behind the industry and its capabilities.
Moreover, who decides what is ethical varies by time, culture and power dynamics. Complying with the strict letter of a law meant to cover everyone does not take into account that companies in different industries use data differently. Companies are trying to fit into a framework without even answering the question of which framework they should voluntarily comply with. I can hear you now: “That’s easy! The one with the highest/strongest/strictest standard for collection.” These are all adjectives that get thrown around when talking about a federal privacy law. However, “highest,” “most” and “strongest” are all subjective and do not live in a vacuum, especially if states start coming out with their own patchwork of privacy laws.
I’m sure there are people who say that Massachusetts — which prohibits a company from providing any details to an impacted consumer — offers the “most” consumer protection, while there is a camp that believes providing as much detailed information as possible — like California and its sample template — provides the “most” protection. Who is right? This does not even take into account that data collection can happen across multiple states. In those instances, which law would cover that individual?
Government agencies can’t currently provide sufficient oversight
Slapping a certification you know you don’t meet onto your website has been treated as an unfair and deceptive practice by the FTC. However, the FTC generally does not have fining authority for a first-time violation. And while it can force companies to compensate consumers, damages can be very difficult to calculate.
Unfortunately, damages for privacy violations are even harder to prove in court; funds that are obtained go disproportionately to counsel, with each individual receiving a de minimis payout, if they even make it to court. The Supreme Court has indicated through its holdings in Clapper v. Amnesty International USA, 133 S. Ct. 1138 (2013), and Spokeo, Inc. v. Robins, 136 S. Ct. 1540 (2016), that damages like the potential of fraud or the ramifications from data loss or misuse are too speculative to confer standing to maintain a lawsuit.
This puts the FTC in a weaker negotiating position to get results with as few resources expended as possible, particularly as the FTC can only do so much — it has limited jurisdiction and no control over banks or nonprofits. To echo Commissioner Noah Phillips, this won’t change without a federal privacy law that sets clear limits on data use and damages and gives the FTC greater power to enforce these limits in litigation.
Finally, in addition to these legal constraints, the FTC is understaffed in privacy, with approximately 40 full-time staff members dedicated to protecting the privacy of more than 320 million Americans. To adequately police privacy, the FTC needs more lawyers, more investigators, more technologists and state-of-the-art tech tools. Otherwise, it will continue to fund certain investigations at the cost of understaffing others.
Outsourcing oversight to a private company may not fare any better — for the simple fact that such certification will come at a high price (especially in the beginning), leaving medium and small-sized businesses at a competitive disadvantage. Further, unlike a company’s privacy professionals and legal team, a certification firm is more likely to look to compliance with the letter of the law — putting form over substance — instead of addressing the nuances of any particular business’ data use models.
Existing remedies don’t address consumer harms
Say an agency does come down with an enforcement action: the penalty powers those agencies currently have do not adequately address the consumer harm. That is largely because compliance with privacy legislation is not an on-off switch and the current regime is focused more on financial restitution.
Even where there are prescribed actions to come into compliance with the law, that compliance takes years and does not address the ramifications of historic non-compliant data use.
Take CNIL’s formal notice against Vectaury for failing to collect informed, affirmative consent. Vectaury collected geolocation data from mobile app users to provide marketing services to retailers, using a consent management platform it developed to implement the IAB’s (a self-regulating trade association) Transparency and Consent Framework. This notice warrants particular attention because Vectaury was following an established trade association guideline, and yet its consent was deemed invalid.
As a result, CNIL put Vectaury on notice to cease processing data this way and to delete the data collected during that period. And while this can be counted as a victory because the decision forced the company to rebuild its systems — how many companies would have the budget to do this if they didn’t have the resources to comply in the first place? Further, this will take time, so what happens to their business model in the meantime? Can they continue to be non-compliant, in theory, until the agency-set deadline for compliance is met? And even if the underlying data is deleted, none of the parties the data was shared with, or the inferences built on it, are affected.
The water is even murkier when you’re examining remedies for false Privacy Shield self-certification. A Privacy Shield logo on a company’s site essentially says that the company believes its cross-border data transfers are adequately secured and that those transfers are limited to parties the company believes have responsible data practices. So if a company is found to have falsely made those underlying representations (or failed to comply with another requirement), it would have to stop conducting those transfers. And if those transfers are part of how its services are provided, does it just have to stop providing those services to its customers immediately?
In practice, choosing not to comply with an otherwise inapplicable law is not a matter of not caring about your customers, or a moral failing; it is quite literally just “not how anything works.” Nor is there any added consumer benefit in trying to do so — and isn’t that, in the end, what counts: consumers?
Opinions expressed in this article are those of the author and not of her firm, investors, clients or others.

More legal uncertainty for Privacy Shield ahead of crux ruling by Europe’s top court

Facebook tried to block the referral but today an influential advisor to Europe’s top court has issued a legal opinion that could have major implications for the future of the EU-US Privacy Shield personal data transfer mechanism.
It’s a complex opinion, dealing with a fundamental clash of legal priorities around personal data in the EU and US, which does not resolve question marks hanging over the legality of Privacy Shield.
The headline take-away is that a different data transfer mechanism which is also widely used by businesses to transfer personal data out of the EU — so-called Standard Contractual Clauses (SCCs) — has been deemed legally valid by the court advisor.
However the advocate general to the Court of Justice of the European Union (CJEU) is also at pains to emphasize the “obligation” of data protection authorities to step in and suspend such data transfers if they are being used to send EU citizens’ data to a place where their information cannot be adequately protected.
So while SCCs look safe — as a data transfer mechanism — per this opinion, it’s a reminder that EU data protection agencies have a duty to be on top of regulating how such tools are used.
The case was referred to the CJEU because Ireland’s Data Protection Commission did not act on a complaint asking it to suspend Facebook’s use of SCCs. So one view that flows from the opinion is that the DPC should have done so — instead of spending years on an expensive legal fight.
The backstory to the legal referral is long and convoluted, involving a reformulated data protection complaint filed with the Irish DPC by privacy campaigner and lawyer Max Schrems challenging Facebook’s use of SCCs. His earlier legal action, in the wake of the 2013 disclosures of US government mass surveillance programs by NSA whistleblower Edward Snowden, led to Privacy Shield’s predecessor, Safe Harbor, being struck down by the CJEU in 2015.  
On the SCCs complaint Schrems prevailed in the Irish courts but instead of acting on his request to order Facebook to suspend its SCC data flows, Ireland’s data protection watchdog took the unusual step of filing a lawsuit pertaining to the validity of the entire mechanism.
Irish courts then referred a number of legal questions to the CJEU — including looping in the wider issue of the legality of Privacy Shield. It’s on those questions that the AG has now opined.
It’s worth noting that the advocate general’s opinion is not binding on the CJEU — which will issue a ruling on the case next year. However, the court does tend to follow such opinions, so it’s a strong indicator of the likely direction of travel.
The opinion, by advocate general Henrik Saugmandsgaard Øe, takes the view that the use of SCCs for the transfer of personal data to a third country — i.e. a country outside the EU — is valid.
However, as noted above, the AG puts the onus on data authorities to act in instances where obligations to protect EU citizens’ data under the mechanism come into conflict with privacy-hostile laws outside the EU, such as government mass surveillance programs.
“[T]here is an obligation — placed on the data controllers and, where the latter fail to act, on the supervisory authorities — to suspend or prohibit a transfer when, because of a conflict between the obligations arising under the standard clauses and those imposed by the law of the third country of destination, those clauses cannot be complied with,” the CJEU writes in a press release on the opinion.
In a first reaction, Schrems highlights this point — writing: “The advocate general is now telling the Irish Data Protection Authority again to just do its job… After all the Irish taxpayer may have to pay up to €10M in legal costs, for the DPC delaying this case in the interest of Facebook.
“The opinion makes clear that DPC has the solution to this case in her own hands: She [Helen Dixon] can order Facebook to stop transfers tomorrow. Instead, she turned to the CJEU to invalidate the whole system. It’s like screaming for the European fire brigade, because you don’t know how to blow out a candle yourself.”
We’ve reached out to the Irish DPC and to Facebook for comment on the AG’s opinion.
“At the moment, many data protection authorities simply look the other way when they receive reports of infringements or simply do not deal with complaints. This is a huge step for the enforcement of the GDPR [the General Data Protection Regulation],” Schrems also argues.
Luca Tosoni, a research fellow at the Norwegian Research Center for Computers and Law at the University of Oslo, suggests that the likelihood of EU DPAs suspending SCC personal data transfers to the US will “depend on the Court’s ultimate take on the safeguards surrounding the access to the transferred data by the United States intelligence authorities and the judicial protection available to the persons whose data are transferred”.
“The disruptive effect of a suspension of SCCs, even if partial and just for the U.S., is likely to be substantial,” he argues. “SCCs are widely used for the transfer of personal data outside the EU. They are probably the most used data transfer mechanism, including for transfers to the U.S.  Thus, even a partial suspension of the SCCs would force a significant number of organizations to explore alternative mechanisms for their transfers to the U.S. 
“However, the alternatives are limited and often difficult to apply to large-scale transfers, the main ones being the derogations allowing transfers with the consent of the data subject or necessary for the performance of a contract. These are unlikely to be suitable for all transfers currently taking place in accordance with SCCs.”
“In practice, the degree of disruption is likely to depend on the timing and duration of the suspension,” he adds. “Any suspension or other finding that data transfers to the U.S. are problematic is likely to speed up the modernization of SCCs that the European Commission is already working on but it is unclear how long it would take for the Commission to issue new SCCs.
“When the Court invalidated the Safe Harbor, it took several months for the Commission to adopt the Privacy Shield and amend the existing SCCs to take into account the Court’s judgment.”
On Privacy Shield — a newer data transfer mechanism which the European Commission claims fixes the legal issues with its predecessor — Saugmandsgaard Øe’s opinion includes some lengthy reasoning that suggests otherwise. It certainly does not clear up the questions around the mechanism’s legality that arise from US laws allowing the state to harvest personal data for national security purposes, in conflict with EU privacy rights.
Per the CJEU press release, the AG’s opinion sets out a number of reasons which it says “lead him to question the validity of the ‘privacy shield’ decision in the light of the right to respect for private life and the right to an effective remedy”.
The flagship mechanism is now used by more than 5,000 entities to authorize EU-US personal data transfers.
Should it be judged invalid by the court there would be a massive scramble for businesses to find alternatives.
It remains to be seen how the court will handle these questions. But Privacy Shield remains subject to direct legal challenge — so there are other opportunities for the court to weigh in, even if the CJEU judges avoid doing so in this case.
Schrems clearly hopes they will weigh in soon, skewering Privacy Shield in his statement — where he writes: “After the ‘Safe Harbor’ judgment the European Commission deliberately passed an invalid decision again — knowing that it will take two or three years until the Court will have a chance to invalidate it a second time. It will be very interesting to see if the Court will take this issue on board in the final decision or wait for another case to reach the court.”
“I am also extremely happy that the AG has taken a clear view on the Privacy Shield Ombudsperson. A mere ‘postbox’ at the foreign ministry of the US cannot possibly replace a court, as required under the first judgement by the Court,” he adds.
He does take issue with the AG’s opinion in one respect — specifically its reference to what he dubs “surveillance friendly case law” under the European Convention on Human Rights — instead of what he couches as “the clear case law of the Court of Justice”.
“This is against any logic… I am doubtful that the [CJEU] judges will join that view,” he suggests.
The court typically hands down a judgement between three and six months after an AG opinion — so privacy watchers will be readying their popcorn in 2020.
Meanwhile, for thousands of businesses, the legal uncertainty and risk of future disruption should Privacy Shield come unstuck goes on.

TikTok apologizes for removing viral video about abuses against Uighurs, blames a “human moderation error”

TikTok has issued a public apology to a teenager who had her account suspended shortly after posting a video that asked viewers to research the persecution of Uighur people and other Muslim groups in Xinjiang. TikTok included a “clarification on the timeline of events,” and said that the viral video was removed four days after it was posted on November 23 “due to a human moderation error” and did not violate the platform’s community guidelines (the account @getmefamouspartthree and video have since been reinstated).
But the user, Feroza Aziz, who describes herself in her Twitter profile as “just a Muslim trying to spread awareness,” rejected TikTok’s claims, tweeting “Do I believe they took it away because of an unrelated satirical video that was deleted on a previous deleted account of mine? Right after I finished posting a 3 part video about the Uyghurs? No.”
In the video removed by TikTok, Aziz begins by telling viewers to use an eyelash curler, before telling them to put it down and “use your phone, that you’re using right now, to search up what’s happening in China, how they’re getting concentration camps, throwing innocent Muslims in there, separating families from each other, kidnapping them, murdering them, raping them, forcing them to eat pork, forcing them to drink, forcing them to convert. This is another Holocaust, yet no one is talking about it. Please be aware, please spread awareness in Xinjiang right now.”
TikTok is owned by ByteDance and the video’s removal led to claims that the Beijing-based company capitulated to pressure from the Chinese Communist Party (Douyin, ByteDance’s version of TikTok for China, is subject to the same censorship laws as other online platforms in China).
Though the government-directed persecution of Muslim minority groups in China began several years ago and about a million people are believed to be detained in internment camps, awareness of the crisis was heightened this month after two significant leaks of classified Chinese government documents were published by the New York Times and the International Consortium of Investigative Journalists, confirming reports by former inmates, eyewitnesses and researchers.
Aziz told BuzzFeed News she has been talking about the persecution of minority groups in China since 2018 because “as a Muslim girl, I’ve always been oppressed and seen my people be oppressed, and I’ve always been into human rights.”
In the BuzzFeed News article, published before TikTok’s apology post, the company claimed Aziz’s account suspension was related to another video she made that contained an image of Osama Bin Laden. The video was created as a satirical response to a meme about celebrity crushes and Aziz told BuzzFeed News that “it was a dark humor joke that he was at the end, because obviously no one in their right mind would think or say that.” A TikTok spokesperson said it nonetheless “violated its policies on terrorism-related content.”
“While we recognize that this video may have been intended as satire, our policies on this front are currently strict. Any such content, when identified, is deemed a violation of our Community Guidelines and Terms of Service, resulting in a permanent ban of the account and associated devices,” a TikTok spokesperson told BuzzFeed, adding that the suspension of Aziz’s second account, which the makeup tutorial video was posted on, was part of the platform’s blocking of 2,406 devices linked to previously suspended accounts.
In TikTok’s apology post today, TikTok US head of safety Eric Han wrote that the platform relies on technology to uphold community guidelines, with human moderators as a “second line of defense.”
“We acknowledge that at times, this process will not be perfect. Humans will sometimes make mistakes, such as the one made today in the case of @getmefamouspartthree’s video,” he added. “When those mistakes happen, however, our commitment is to quickly address and fix them, undertake trainings or make changes to reduce the risk of the same mistakes being repeated, and fully own the responsibility for our errors.”
Aziz told the Washington Post, however, that “TikTok is trying to cover up this whole mess. I won’t let them get away with this.”
The controversy comes as TikTok faces an inquiry by the U.S. government into how it secures the personal data of users. Reuters reported yesterday that TikTok plans to separate its product and business development, and marketing and legal teams from Douyin in the third quarter of this year.
 

Leaked Chinese government documents detail how tech is used to escalate the persecution of Uighurs

In less than two weeks, two major reports have been published that contain leaked Chinese government documents about the persecution of Uighurs and other Muslim minorities in China. Details include the extent to which technology enables mass surveillance, making it possible to track the daily lives of people at unprecedented scale.
The first was a New York Times article that examined more than 400 pages of leaked documents detailing how government leaders, including President Xi Jinping, developed and enforced policies against Uighurs. The latest comes from the International Consortium of Investigative Journalists, an independent non-profit, and reports on more than 24 pages of documents that show how the government is using new technologies to engage in mass surveillance and identify groups for arrest and detainment in Xinjiang region camps that may now hold as many as a million Uighurs, Kazakhs and other minorities, including people who hold foreign citizenship.
These reports are significant because leaks of this magnitude from within the Communist Party of China are rare and they validate reports from former prisoners and the work of researchers and journalists who have been monitoring the persecution of the Uighurs, an ethnic group with more than 10 million people in China.
As ICIJ reporter Bethany Allen-Ebrahimian writes, the classified material, verified by independent experts and linguists, “demonstrates the power of technology to help drive industrial-scale human rights abuses.” The systems it describes also force members of targeted groups in the Xinjiang region to live in “a perpetual state of terror.”
The documents obtained by the ICIJ detail how the Integrated Joint Operations Platform (IJOP), an AI-based policing platform, is used by the police and other authorities to collect personal data, along with data from facial-recognition cameras and other surveillance tools, which is then fed into an algorithm to identify entire categories of Xinjiang residents for detention. Human Rights Watch began reporting on the IJOP’s police app in early 2018, and the ICIJ report shows how powerful the platform has become.
Human Rights Watch reverse-engineered the IJOP app used by police and found that it prompts them to enter a wide range of personal information about people they interrogate, including height, blood type, license plate numbers, education level, profession, recent travel and even household electric-meter readings — data which is then used by an algorithm that determines which groups of people should be viewed as “suspect.”
The documents also say that the Chinese government ordered security officials in Xinjiang to monitor users of Zapya, which has about 1.8 million users, for ties to terrorist organizations. Launched in 2012, the app was created by DewMobile, a Beijing-based startup that has received funding from InnoSpring Silicon Valley, Silicon Valley Bank and Tsinghua University and is meant to give people a way to download the Quran and send messages and files to other users without being connected to the Web.
According to the ICIJ, the documents show that since at least July 2016, Chinese authorities have been monitoring the app on some Uighurs’ phones in order to flag users for investigation. DewMobile did not respond to ICIJ’s repeated requests for comment. Uighurs who hold foreign citizenship or live abroad are not free from surveillance either, with directives in the leaked documents ordering them to be monitored as well.

Allen-Ebrahimian describes the “grinding psychological effects of living under such a system,” which Samantha Hoffman, an analyst at the Australian Strategic Policy Institute, says is deliberate: “That’s how state terror works. Part of the fear that this instills is that you don’t know when you’re not OK.”
The reports by the New York Times and the ICIJ are important because they counter the Xi administration’s insistence that the detention camps are “vocational educational and training centers” meant to prevent extremist violence and help minority groups integrate into mainstream Chinese society, even though many experts now describe the persecution and imprisonment of Uighurs as cultural genocide. Former inmates have also reported torture, beatings and sexual violence including rape and forced abortions.
But the Chinese government continues to push its narrative, even as evidence against it grows. The Chinese embassy in the United Kingdom told the Guardian, an ICIJ partner organization, that the leaked documents were “pure fabrication and fake news” and insisted that “the preventative measures have nothing to do with the eradication of religious groups.” (The Guardian published the embassy’s response here.)
In October, the United States placed eight companies, including SenseTime and Megvii, on a trade blacklist for the role the Commerce Department says their technology has played in China’s campaign against Uighurs, Kazakhs and other Muslim minority groups. But the documents published by the New York Times and ICIJ show how deeply entrenched the Chinese government’s surveillance technology has become in the daily life of Xinjiang residents, and underscore how imperative it is for the world to pay attention to the atrocities being carried out against minority groups there.

Amnesty International latest to slam surveillance giants Facebook and Google as “incompatible” with human rights

Human rights charity Amnesty International is the latest to call for reform of surveillance capitalism — blasting the business models of “surveillance giants” Facebook and Google in a new report which warns the pair’s market dominating platforms are “enabling human rights harm at a population scale”.
“[D]espite the real value of the services they provide, Google and Facebook’s platforms come at a systemic cost,” Amnesty warns. “The companies’ surveillance-based business model forces people to make a Faustian bargain, whereby they are only able to enjoy their human rights online by submitting to a system predicated on human rights abuse. Firstly, an assault on the right to privacy on an unprecedented scale, and then a series of knock-on effects that pose a serious risk to a range of other rights, from freedom of expression and opinion, to freedom of thought and the right to non-discrimination.”
“This isn’t the internet people signed up for,” it adds.
What’s most striking about the report is the familiarity of the arguments. There is now a huge weight of consensus criticism around surveillance-based decision-making — from Apple’s own Tim Cook, through scholars such as Shoshana Zuboff and Zeynep Tufekci, to the United Nations — that’s itself been fed by a steady stream of reportage of the individual and societal harms flowing from platforms’ pervasive and consentless capturing and hijacking of people’s information for ad-based manipulation and profit.
This core power asymmetry is maintained and topped off by self-serving policy positions which at best fiddle around the edges of an inherently anti-humanitarian system. While platforms have become practiced in dark arts PR — offering, at best, a pantomime ear to the latest data-enabled outrage that’s making headlines, without ever actually changing the underlying system. That surveillance capitalism’s abusive modus operandi is now inspiring governments to follow suit — aping the approach by developing their own data-driven control systems to straitjacket citizens — is exceptionally chilling.
But while the arguments against digital surveillance are now very familiar what’s still sorely lacking is an effective regulatory response to force reform of what is at base a moral failure — and one that’s been allowed to scale so big it’s attacking the democratic underpinnings of Western society.
“Google and Facebook have established policies and processes to address their impacts on privacy and freedom of expression – but evidently, given that their surveillance-based business model undermines the very essence of the right to privacy and poses a serious risk to a range of other rights, the companies are not taking a holistic approach, nor are they questioning whether their current business models themselves can be compliant with their responsibility to respect human rights,” Amnesty writes.
“The abuse of privacy that is core to Facebook and Google’s surveillance-based business model is starkly demonstrated by the companies’ long history of privacy scandals. Despite the companies’ assurances over their commitment to privacy, it is difficult not to see these numerous privacy infringements as part of the normal functioning of their business, rather than aberrations.”
Needless to say Facebook and Google do not agree with Amnesty’s assessment. But, well, they would say that wouldn’t they?
Amnesty’s report notes there is now a whole surveillance industry feeding this beast — from adtech players to data brokers — while pointing out that the dominance of Facebook and Google, aka the adtech duopoly, over “the primary channels that most of the world relies on to engage with the internet” is itself another harm, as it lends the pair of surveillance giants “unparalleled power over people’s lives online”.
“The power of Google and Facebook over the core platforms of the internet poses unique risks for human rights,” it warns. “For most people it is simply not feasible to use the internet while avoiding all Google and Facebook services. The dominant internet platforms are no longer ‘optional’ in many societies, and using them is a necessary part of participating in modern life.”
Amnesty concludes that it is “now evident that the era of self-regulation in the tech sector is coming to an end” — saying further state-based regulation will be necessary. Its call there is for legislators to follow a human rights-based approach to rein in surveillance giants.
You can read the report in full here (PDF).

A 10-point plan to reboot the data industrial complex for the common good

A posthumous manifesto by Giovanni Buttarelli, who until his death this summer was Europe’s chief data protection regulator, seeks to join the dots of surveillance capitalism’s rapacious colonization of human spaces, via increasingly pervasive and intrusive mapping and modelling of our data, with the existential threat posed to life on earth by manmade climate change.
In a dense document rich with insights and ideas around the notion that “data means power” — and therefore that the unequally distributed data-capture capabilities currently enjoyed by a handful of tech platforms sums to power asymmetries and drastic social inequalities — Buttarelli argues there is potential for AI and machine learning to “help monitor degradation and pollution, reduce waste and develop new low-carbon materials”. But only with the right regulatory steerage in place.
“Big data, AI and the internet of things should focus on enabling sustainable development, not on an endless quest to decode and recode the human mind,” he warns. “These technologies should — in a way that can be verified — pursue goals that have a democratic mandate. European champions can be supported to help the EU achieve digital strategic autonomy.”
“The EU’s core values are solidarity, democracy and freedom,” he goes on. “Its conception of data protection has always been the promotion of responsible technological development for the common good. With the growing realisation of the environmental and climatic emergency facing humanity, it is time to focus data processing on pressing social needs. Europe must be at the forefront of this endeavour, just as it has been with regard to individual rights.”
One of his key calls is for regulators to enforce transparency of dominant tech companies — so that “production processes and data flows are traceable and visible for independent scrutiny”.
“Use enforcement powers to prohibit harmful practices, including profiling and behavioural targeting of children and young people and for political purposes,” he also suggests.
Another point in the manifesto urges a moratorium on “dangerous technologies”, citing facial recognition and killer drones as examples, and calling generally for a pivot away from technologies designed for “human manipulation” and toward “European digital champions for sustainable development and the promotion of human rights”.
In an afterword penned by Shoshana Zuboff, the US author and scholar writes in support of the manifesto’s central tenet, warning pithily that: “Global warming is to the planet what surveillance capitalism is to society.”
There’s plenty of overlap between Buttarelli’s ideas and Zuboff’s — who has literally written the book on surveillance capitalism. Data concentration by powerful technology platforms is also resulting in algorithmic control structures that give rise to “a digital underclass… comprising low-wage workers, the unemployed, children, the sick, migrants and refugees who are required to follow the instructions of the machines”, he warns.
“This new instrumentarian power deprives us not only of the right to consent, but also of the right to combat, building a world of no exit in which ignorance is our only alternative to resigned helplessness, rebellion or madness,” she agrees.
There are no fewer than six afterwords attached to the manifesto — a testament to the esteem in which Buttarelli’s ideas are held among privacy, digital and human rights campaigners.
The manifesto “goes far beyond data protection”, says writer Maria Farrell in another contribution. “It connects the dots to show how data maximisation exploits power asymmetries to drive global inequality. It spells out how relentless data-processing actually drives climate change. Giovanni’s manifesto calls for us to connect the dots in how we respond, to start from the understanding that sociopathic data-extraction and mindless computation are the acts of a machine that needs to be radically reprogrammed.”
At the core of the document is a 10-point plan for what’s described as “sustainable privacy”, which includes the call for a dovetailing of the EU’s digital priorities with a Green New Deal — to “support a programme for green digital transformation, with explicit common objectives of reducing inequality and safeguarding human rights for all, especially displaced persons in an era of climate emergency”.
Buttarelli also suggests creating a forum for civil liberties advocates, environmental scientists and machine learning experts who can advise on EU funding for R&D to put the focus on technology that “empowers individuals and safeguards the environment”.
Another call is to build a “European digital commons” to support “open-source tools and interoperability between platforms, a right to one’s own identity or identities, unlimited use of digital infrastructure in the EU, encrypted communications, and prohibition of behaviour tracking and censorship by dominant platforms”.
“Digital technology and privacy regulation must become part of a coherent solution for both combating and adapting to climate change,” he suggests in a section dedicated to a digital Green New Deal — even while warning that current applications of powerful AI technologies appear to be contributing to the problem.
“AI’s carbon footprint is growing,” he points out, underlining the environmental wastage of surveillance capitalism. “Industry is investing based on the (flawed) assumption that AI models must be based on mass computation.
“Carbon released into the atmosphere by the accelerating increase in data processing and fossil fuel burning makes climatic events more likely. This will lead to further displacement of peoples and intensification of calls for ‘technological solutions’ of surveillance and border controls, through biometrics and AI systems, thus generating yet more data. Instead, we need to ‘greenjacket’ digital technologies and integrate them into the circular economy.”
Another key call — and one Buttarelli had been making presciently in recent years — is for more joint working between EU regulators towards common sustainable goals.
“All regulators will need to converge in their policy goals — for instance, collusion in safeguarding the environment should be viewed more as an ethical necessity than as a technical breach of cartel rules. In a crisis, we need to double down on our values, not compromise on them,” he argues, going on to voice support for antitrust and privacy regulators to co-operate to effectively tackle data-based power asymmetries.
“Antitrust, democracies’ tool for restraining excessive market power, therefore is becoming again critical. Competition and data protection authorities are realising the need to share information about their investigations and even cooperate in anticipating harmful behaviour and addressing ‘imbalances of power rather than efficiency and consent’.”
On the General Data Protection Regulation (GDPR) specifically — Europe’s current framework for data protection — Buttarelli gives a measured assessment, saying “first impressions indicate big investments in legal compliance but little visible change to data practices”.
He says Europe’s data protection authorities will need to use all the tools at their disposal — and find the necessary courage — to take on the dominant tracking and targeting digital business models fuelling so much exploitation and inequality.
He also warns that GDPR alone “will not change the structure of concentrated markets or in itself provide market incentives that will disrupt or overhaul the standard business model”.
“True privacy by design will not happen spontaneously without incentives in the market,” he adds. “The EU still has the chance to entrench the right to confidentiality of communications in the ePrivacy Regulation under negotiation, but more action will be necessary to prevent further concentration of control of the infrastructure of manipulation.”
Looking ahead, the manifesto paints a bleak picture of where market forces could be headed without regulatory intervention focused on defending human rights. “The next frontier is biometric data, DNA and brainwaves — our thoughts,” he suggests. “Data is routinely gathered in excess of what is needed to provide the service; standard tropes, like ‘improving our service’ and ‘enhancing your user  experience’ serve as decoys for the extraction of monopoly rents.”
There is optimism too, though — that technology in service of society can be part of the solution to existential crises like climate change; and that data, lawfully collected, can support public good and individual self-realization.
“Interference with the right to privacy and personal data can be lawful if it serves ‘pressing social needs’,” he suggests. “These objectives should have a clear basis in law, not in the marketing literature of large companies. There is no more pressing social need than combating environmental degradation” — adding that: “The EU should promote existing and future trusted institutions, professional bodies and ethical codes to govern this exercise.”
In instances where platforms are found to have systematically gathered personal data unlawfully, Buttarelli trails the interesting idea of an amnesty for those responsible “to hand over their optimisation assets” — as a means not only of resetting power asymmetries and rebalancing the competitive playing field, but of enabling societies to reclaim these stolen assets and reapply them for the common good.
His hope for Europe’s Data Protection Board — the body which offers guidance and coordinates interactions between EU Member States’ data watchdogs — is for it to be “the driving force supporting the Global Privacy Assembly in developing a common vision and agenda for sustainable privacy”.
The manifesto also calls for European regulators to better reflect the diversity of people whose rights they’re being tasked with safeguarding.
The document, which is entitled Privacy 2030: A vision for Europe, has been published on the website of the International Association of Privacy Professionals ahead of its annual conference this week.
Buttarelli had intended — but was ultimately unable — to publish his thoughts on the future of privacy this year, hoping to inspire discussion in Europe and beyond. In the event, the manifesto has been compiled posthumously by Christian D’Cunha, head of his private office, who writes that he has drawn on discussions with the data protection supervisor in his final months — with the aim of plotting “a plausible trajectory of his most passionate convictions”.

EU-US Privacy Shield passes third Commission ‘health check’ — but litigation looms

The third annual review of the EU-US Privacy Shield data transfer mechanism has once again been nodded through by Europe’s executive.
This despite the EU parliament calling last year for the mechanism to be suspended.
The European Commission also issued US counterparts with a compliance deadline last December — saying the US must appoint a permanent ombudsperson to handle EU citizens’ complaints, as required by the arrangement, and do so by February.
This summer the US senate finally confirmed Keith Krach — under secretary of state for economic growth, energy, and the environment — in the ombudsperson role.
The Privacy Shield arrangement was struck between EU and US negotiators back in 2016 — as a rushed replacement for the prior Safe Harbor data transfer pact which in fall 2015 was struck down by Europe’s top court following a legal challenge after NSA whistleblower Edward Snowden revealed US government agencies were liberally helping themselves to digital data from Internet companies.
At heart is a fundamental legal clash between EU privacy rights and US national security priorities.
The intent for the Privacy Shield framework is to paper over those cracks by devising enough checks and balances that the Commission can claim it offers adequate protection for EU citizens’ personal data when taken to the US for processing, despite the lack of a commensurate, comprehensive data protection regime in the US. But critics have argued from the start that the mechanism is flawed.
Even so around 5,000 companies are now signed up to use Privacy Shield to certify transfers of personal data. So there would be major disruption to businesses were it to go the way of its predecessor — as has looked likely in recent years, since Donald Trump took office as US president.
The Commission remains a staunch defender of Privacy Shield, warts and all, preferring to support data-sharing business as usual rather than offer a pro-active defence of EU citizens’ privacy rights.
To date it has offered little in the way of objection about how the US has implemented Privacy Shield in these annual reviews, despite some glaring flaws and failures (for example the disgraced political data firm, Cambridge Analytica, was a signatory of the framework, even after the data misuse scandal blew up).
The Commission did lay down one deadline late last year, regarding the ongoing lack of a permanent ombudsperson. So it can now check that box.
It also notes approvingly today that the final two vacancies on the US’ Privacy and Civil Liberties Oversight Board have been filled, meaning it’s fully-staffed for the first time since 2016.
Commenting in a statement, commissioner for justice, consumers and gender equality, Věra Jourová, added: “With around 5,000 participating companies, the Privacy Shield has become a success story. The annual review is an important health check for its functioning. We will continue the digital diplomacy dialogue with our U.S. counterparts to make the Shield stronger, including when it comes to oversight, enforcement and, in a longer-term, to increase convergence of our systems.”
Its press release characterizes US enforcement action related to the Privacy Shield as having “improved” — citing the Federal Trade Commission taking enforcement action in a grand total of seven cases.
It also says vaguely that “an increasing number” of EU individuals are making use of their rights under the Privacy Shield, claiming the relevant redress mechanisms are “functioning well”. (Critics have long suggested the opposite.)
The Commission is recommending further improvements too, though, including that the US expand compliance checks, such as those concerning false claims of participation in the framework.
So presumably there’s a bunch of entirely fake compliance claims going unchecked, as well as actual compliance going under-checked…
“The Commission also expects the Federal Trade Commission to further step up its investigations into compliance with substantive requirements of the Privacy Shield and provide the Commission and the EU data protection authorities with information on ongoing investigations,” the EC adds.
All these annual Commission reviews are just fiddling around the edges, though. The real substantive test for Privacy Shield, which will determine its long-term survival, is looming on the horizon — in the form of a judgement expected from Europe’s top court next year.
In July a hearing took place on a key case that’s been dubbed Schrems II. This is a legal challenge which initially targeted Facebook’s use of another EU data transfer mechanism but has been broadened to include a series of legal questions over Privacy Shield — now with the Court of Justice of the European Union.
There is also a separate litigation directly targeting Privacy Shield that was brought by a French digital rights group which argues it’s incompatible with EU law on account of US government mass surveillance practices.
The Commission’s PR notes the pending litigation — writing that this “may also have an impact on the Privacy Shield”. “A hearing took place in July 2019 in case C-311/18 (Schrems II) and, once the Court’s judgement is issued, the Commission will assess its consequences for the Privacy Shield,” it adds.
So, tl;dr, today’s third annual review doesn’t mean Privacy Shield is out of the legal woods.

MIT is reviewing its relationship with AI startup SenseTime, one of the Chinese tech firms blacklisted by the U.S.

The Massachusetts Institute of Technology said it is reviewing the university’s relationship with SenseTime, one of eight Chinese tech companies placed on the U.S. Entity List yesterday for their alleged role in human rights abuses against Muslim minority groups in China.
An MIT spokesperson told Bloomberg that “MIT has long had a robust export controls function that pays careful attention to export control regulations and compliance. MIT will review all existing relationships with organizations added to the U.S. Department of Commerce’s Entity List, and modify any interactions, as necessary.”
A SenseTime representative told Bloomberg “We are deeply disappointed with this decision by the U.S. Department of Commerce. We will work closely with all relevant authorities to fully understand and resolve the situation.”
The companies placed on the blacklist included several of China’s top AI startups and companies that have supplied software to mass surveillance systems that may have been used by the Chinese government to persecute Uyghurs and other Muslim minority groups.
Over one million Uyghurs are believed to currently be held in detention camps, where human rights observers report they have been subjected to forced labor and torture.
SenseTime, the world’s most highly valued AI startup, provided software to the Chinese government for its national surveillance system, including CCTV cameras. It was the first company to join an MIT Intelligence Quest initiative launched last year with the goal of “driv[ing] technological breakthroughs in AI that have the potential to confront some of the world’s greatest challenges.” Since then, it has provided funding for 27 projects by MIT researchers.
Earlier this year, MIT ended its working relationships with Huawei and ZTE over alleged sanction violations.

Eight Chinese tech firms placed on U.S. Entity List for their role in human rights violations against Muslim minority groups

Eight Chinese tech firms, including SenseTime and Megvii, have been added to the U.S. government Entity List for their role in enabling human rights violations against Muslim minority groups in China, including the Uighurs. The firms were among 28 total organizations, mostly Chinese government agencies, that were implicated “in the implementation of China’s campaign of repression, mass arbitrary detention, and high-technology surveillance against Uighurs, Kazakhs and other members of Muslim minority groups” in the Xinjiang Uighur Autonomous Region, according to an announcement by the U.S. Commerce Department.
According to the United Nations, up to one in 12 Muslim residents of the Xinjiang region, or about a million people, are being held in detention camps, where they are subjected to forced labor and torture.
Being placed on the Entity List means that these organizations must apply for additional licenses in order to purchase products from U.S. suppliers. But approval is difficult to obtain, which essentially means they are blocked from doing business with American companies. After Huawei was placed on the Entity List earlier this year, founder and CEO Ren Zhengfei said that he expected the company to lose $30 billion in revenue, among other financial repercussions.
The government organizations placed on the Entity List today include the Xinjiang Uighur Autonomous Region People’s Government Public Security Bureau and several associated government agencies. The tech companies include video surveillance manufacturers Dahua Technology and Hikvision; AI firms Yitu, Megvii, SenseTime and iFlyTek; digital forensics company Meiya Pico; and Yixin Technology Company.
SenseTime, the world’s most highly valued AI startup, has supplied software to the Chinese government for its national surveillance system, including CCTV cameras and smart glasses worn by police officers.
Both Megvii, the maker of Face++, and Yitu Technology focus on facial recognition technology and have worked with the Chinese government on software used in mass surveillance systems. According to the New York Times, Hikvision made a recognition system designed to identify ethnic minorities, but began phasing it out last year.
In a 2017 report, Human Rights Watch said voice recognition company iFlyTek supplied voiceprint technology to police bureaus in Xinjiang, which was used to build biometric databases for mass surveillance.
The impact of the blacklisting will depend on how deeply entrenched each company is with U.S. business partners, but many Chinese firms have begun reducing their reliance on American technology in light of the trade war. For example, Meiya Pico told the Chinese Securities Journal, a state-run publication, that overseas sales revenue makes up less than 1% of the company’s total revenue and most of its suppliers are domestic companies.
TechCrunch has contacted the eight companies for comment. In a statement, a Hikvision spokesperson said “Hikvision strongly opposes today’s decision by the U.S. Government and it will hamper efforts by global companies to improve human rights around the world. Hikvision, as the security industry’s global leader, respects human rights and takes our responsibility to protect people in the U.S. and the world seriously. Hikvision has been engaging with Administration officials over the past 12 months to clarify misunderstandings about the company and address their concerns.”

Osano makes business risk and compliance (somewhat) sexy again

A new startup is clearing the way for other companies to better monitor and manage their risk and compliance with privacy laws.
Osano, an Austin, Texas-based startup, bills itself as a privacy platform company, using a software-as-a-service product to give businesses real-time visibility into their current privacy and compliance posture. That gives startups and enterprises large and small insight into whether or not they’re complying with global or state privacy laws, and helps them manage risk factors associated with their business, such as when a partner or vendor’s privacy policy changes.
The company launched its privacy platform at Disrupt SF on the Startup Battlefield stage.
Risk and compliance is typically a fusty, boring and frankly unsexy topic. But with ever-changing legal landscapes and constantly moving requirements, it’s hard to keep up. Although Europe’s GDPR has been around for a year, it’s still causing headaches. And stateside, the California Consumer Privacy Act is about to kick in and it is terrifying large companies that fear they can’t comply with it.
Osano mixes tech with legal chops to offer a one-stop shop where businesses — particularly smaller startups without their own legal support — can get insight, advice and guidance.
“We believe that any time a company does a better job with transparency and data protection, we think that’s a really good thing for the internet,” the company’s founder Arlo Gilbert told TechCrunch.
Gilbert, along with his co-founder and chief technology officer Scott Hertel, has built the company’s software-as-a-service product with several components in mind, including maintaining a scorecard of 6,000 vendors and their privacy practices to objectively grade how a company fares, as well as monitoring vendor privacy policies to spot changes as soon as they are made.
One of its standout features is allowing its corporate customers to comply with dozens of privacy laws across the world with a single line of code.
You’ve seen them before: the “consent” popups that ask (or demand) that you allow cookies before you can come in. Osano’s consent management lets companies install a dynamic consent manager in just five minutes, which delivers the right consent message to the right people in the right language. Using the blockchain, the company says it can record and provide searchable and cryptographically verifiable proof-of-consent in the event of a person’s data access request.
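Osano hasn’t published the internals of that proof-of-consent system, but the general idea of a tamper-evident consent log is easy to sketch. Below is a minimal, purely illustrative Python sketch — not Osano’s API, and the field names are invented — in which each consent event is hashed together with the previous record’s hash, so any later edit or deletion breaks the chain and can be detected when a data access request arrives.

import hashlib
import json
import time

def record_consent(log, user_id, purposes, locale):
    # Append a consent event to a hash-chained log (illustrative only).
    prev_hash = log[-1]["hash"] if log else "0" * 64
    event = {
        "user_id": user_id,      # pseudonymous identifier
        "purposes": purposes,    # e.g. ["analytics", "marketing"]
        "locale": locale,        # language/jurisdiction the banner was shown in
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    # Hash the event together with the previous hash so the log is tamper-evident.
    event["hash"] = hashlib.sha256(json.dumps(event, sort_keys=True).encode()).hexdigest()
    log.append(event)
    return event

def verify_chain(log):
    # Recompute every hash; any edited or deleted record breaks the chain.
    prev_hash = "0" * 64
    for event in log:
        body = {k: v for k, v in event.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if body["prev_hash"] != prev_hash or expected != event["hash"]:
            return False
        prev_hash = event["hash"]
    return True

log = []
record_consent(log, "user-123", ["analytics"], "en-GB")
record_consent(log, "user-123", ["analytics", "marketing"], "en-GB")
print(verify_chain(log))  # True until any record is altered after the fact
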
“There are 40 countries with cookie and data privacy laws that require consent,” said Gilbert. “Each of them has nuances about what they consider to be consent: what you have to tell them; what you have to offer them; when you have to do it.”
Osano also has an office in Dublin, Ireland, allowing its corporate customers to say it has a physical representative in the European Union — a requirement for companies that have to comply with GDPR.
And, for corporate customers with questions, they can dial-an-expert from Osano’s outsourced and freelance team of attorneys and privacy experts to help break down complex questions into bitesize answers.
Or as Gilbert calls it, “Uber, but for lawyers.”
The concept seems novel but it’s not restricted to GDPR or California’s upcoming law. The company says it monitors international, federal and state legislatures for new laws and changes to existing privacy legislation to alert customers of upcoming changes and requirements that might affect their business.
In other words, plug in a new law or two and Osano’s customers are as good as covered.
Osano is still in its pre-seed stage. But while the company is focusing on its product, it’s not thinking too much about money.
“We’re planning to kind of go the binary outcome — go big or go home,” said Gilbert, with his eye on the small- to medium-sized enterprise. “It’s greenfield right now. There’s really nobody doing what we’re doing.”
The plan is to take on enough funding to own the market, and then focus on turning a profit. So much so, Gilbert said, that the company is registered as a B Corporation, a more socially conscious and less profit-driven approach to corporate structure, allowing it to generate profits while maintaining its social vision.
The company’s idea is strong; its corporate structure seems mindful. But is it enough of an enticement for fellow startups and small businesses? It’s either dominate the market or bust, and only time will tell.

UK privacy ‘class action’ complaint against Google gets unblocked

The UK Court of Appeal has unanimously overturned a block on a class-action style lawsuit brought on behalf of four million iPhone users against Google — meaning the case can now proceed to be heard.
The High Court tossed the suit a year ago on legal grounds. However the claimants sought permission to appeal — and today that’s been granted.
The case pertains to allegations Google used tracking cookies to override iPhone users’ privacy settings in Apple’s Safari browser between 2011 and 2012. Specifically that Google developed a workaround for browser settings that allowed it to set its DoubleClick Ad cookie without iPhone users’ knowledge or consent.
In 2012 the tech giant settled with the FTC over the same issue — agreeing to pay $22.5M to resolve the charge that it bypassed Safari’s privacy settings to serve targeted ads to consumers. Although Google’s settlement with the FTC did not include an admission of any legal wrongdoing.
Several class action lawsuits were also filed in the US and later consolidated. And in 2016 Google agreed to settle those by paying $5.5M to educational institutions or non-profits that campaign to raise public awareness of online security and privacy. Though terms of the settlement remain under legal challenge.
UK law does not have a direct equivalent to a US-style class action. But in 2017 a veteran consumer rights campaigner, Richard Lloyd, filed a collective lawsuit over the Safari workaround, seeking to represent millions of UK iPhone users whose browser settings, his complaint alleges, were ignored by Google’s tracking technologies.
The decision by a High Court judge last year to block the action boiled down to the judge not being convinced the claimants could demonstrate a basis for bringing a compensation claim. Historically there’s been a high legal bar for that, as UK law has required claimants to be able to demonstrate they suffered damage as a result of a data protection violation.
The High Court judge was also not persuaded the complaint met the requirements for a representative action.
However the Appeals Court has taken a different view.
The three legal questions it considered were whether a claimant could recover damages for loss of control of their data under section 13 of the UK’s Data Protection Act 1998 “without proving pecuniary loss or distress”; whether the members of the class had the same interest as one another and were identifiable; and whether the judge ought to have exercised discretion to allow the case to proceed.
The court rejected Google’s main argument that UK and EU law require “proof of causation and consequential damage”.
It also took the view that the claim can stand as a representative procedure.
In concluding the judgment, the chancellor of the High Court writes:
… the judge ought to have held: (a) that a claimant can recover damages for loss of control of their data under section 13 of DPA, without proving pecuniary loss or distress, and (b) that the members of the class that Mr Lloyd seeks to represent did have the same interest under CPR Part 19.6(1) and were identifiable.
The judge exercised his discretion as to whether the action should proceed as a representative action on the wrong basis and this court can exercise it afresh. If the other members of the court agree, I would exercise our discretion so as to allow the action to proceed.
I would, therefore, allow the appeal, and make an order granting Mr Lloyd permission to serve the proceedings on Google outside the jurisdiction of the court.
Mishcon de Reya, the law firm representing Lloyd, has described the decision as “groundbreaking” — saying it could establish “a new procedural framework for the conduct of mass data breach claims” under UK civil procedure rules governing group litigations.
In a statement, partner and case lead, James Oldnall, said: “This decision is significant not only for the millions of consumers affected by Google’s activity but also for the collective action landscape more broadly. The Court of Appeal has confirmed our view that representative actions are essential for holding corporate giants to account. In doing so it has established an avenue to redress for consumers.”
Mishcon de Reya argues that the decision has confirmed a number of key legal principles around UK data protection law and representative actions, including that:
An individual’s personal data has an economic value and loss of control of that data is a violation of their right to privacy which can, in principle, constitute damage under s.13 of the DPA, without the need to demonstrate pecuniary loss or distress. The Court, can therefore, award a uniform per capita sum to members of the class in representative actions for the loss of control of their personal data
That individuals who have lost control of their personal data have suffered the same loss and therefore share the “same interest” under CPR 19.6
That representative actions are, in practice, the only way that claims such as this can be pursued
Responding to the judgement, a Google spokesperson told us: “Protecting the privacy and security of our users has always been our number one priority. This case relates to events that took place nearly a decade ago and that we addressed at the time. We believe it has no merit and should be dismissed.”

Elizabeth Warren bites back at Zuckerberg’s leaked threat to K.O. the government

Presidential candidate Senator Elizabeth Warren has responded publicly to a leaked attack on her by Facebook CEO Mark Zuckerberg, saying she won’t be bullied out of taking big tech to task for anticompetitive practices.

I’m not afraid to hold Big Tech companies like Facebook, Google, and Amazon accountable. It’s time to #BreakUpBigTech: https://t.co/o9X9v4noOm
— Elizabeth Warren (@ewarren) October 1, 2019

Warren’s subtweeting of the Facebook founder follows a leak in which the Verge obtained two hours of audio from an internal Q&A session with Zuckerberg — publishing a series of snippets today.
In one snippet the Facebook leader can be heard opining on how Warren’s plan to break up big tech would “suck”.
“You have someone like Elizabeth Warren who thinks that the right answer is to break up the companies … if she gets elected president, then I would bet that we will have a legal challenge, and I would bet that we will win the legal challenge,” he can be heard saying. “Does that still suck for us? Yeah. I mean, I don’t want to have a major lawsuit against our own government. … But look, at the end of the day, if someone’s going to try to threaten something that existential, you go to the mat and you fight.”
Warren responded soon after publication with a pithy zinger, writing on Twitter: “What would really ‘suck’ is if we don’t fix a corrupt system that lets giant companies like Facebook engage in illegal anticompetitive practices, stomp on consumer privacy rights, and repeatedly fumble their responsibility to protect our democracy.”

What would really “suck” is if we don’t fix a corrupt system that lets giant companies like Facebook engage in illegal anticompetitive practices, stomp on consumer privacy rights, and repeatedly fumble their responsibility to protect our democracy. https://t.co/rI0v55KKAi
— Elizabeth Warren (@ewarren) October 1, 2019

In a follow up tweet she added that she would not be afraid to “hold Big Tech companies like Facebook, Google and Amazon accountable”.
The Verge claims it did not obtain the leaked audio from Facebook’s PR machine. But in a public Facebook post following its publication of the audio snippets Zuckerberg links to their article — and doesn’t exactly sound mad to have what he calls his “unfiltered” views put right out there…

Here are Zuckerberg’s thoughts on the leak. To answer some of the conspiracy tweets I’ve gotten: no, Facebook PR did not give me this audio. I wish! https://t.co/Z3oFgQwKu2 pic.twitter.com/p6Ej8Mb6zF
— Casey Newton (@CaseyNewton) October 1, 2019

Whether the audio was leaked intentionally or not, as many commentators have been quick to point out — Warren principal among them — the fact that a company has gotten so vastly powerful it feels able to threaten to fight and defeat its own government should give pause for civilized thought.
Someone high up in Facebook’s PR department might want to pull Zuckerberg aside and make a major wincing gesture right in his face.

Fortunately Facebook has no power to control what information flows to people https://t.co/cNwSUhB8JS
— David Dayen (@ddayen) October 1, 2019

In another of the audio snippets Zuckerberg extends the threat — arguing that breaking up tech giants would threaten the integrity of elections.
“It’s just that breaking up these companies, whether it’s Facebook or Google or Amazon, is not actually going to solve the issues,” he is heard saying. “And, you know, it doesn’t make election interference less likely. It makes it more likely because now the companies can’t coordinate and work together.”
Elections such as the one Warren hopes to be running in as a US presidential candidate… so er… again this argument is a very strange one to be making when the critics you’re railing against are calling you an overbearing, oversized democracy-denting beast.
Zuckerberg’s remarks also contain the implied threat that a failure to properly police elections, by Facebook, could result in someone like Warren not actually getting elected in the first place.
Given, y’know, the vast power Facebook wields with its content-shaping algorithms which amplify narratives and shape public opinion at cheap, factory farm scale.
Reading between the lines, then, presidential hopefuls should be really careful what they say about important technology companies — or, er, else!

Zuckerberg claims that one particular candidate poses an existential threat to his company. Minutes later, he claims the size of that company lets it effectively police the integrity of the election in which she is running.
He thinks this is an argument *against* breaking it up.
— Silpa Kovvali (@SilpaKov) October 1, 2019

How times change.
Just a few short years ago Zuckerberg was the guy telling everyone that election interference via algorithmically amplified social media fakes was “a pretty crazy idea”.
Now he’s saying only tech behemoths like Facebook can save democracy from, uh, tech behemoths like Facebook…

Zuckerberg now: Breaking up tech firms makes election interference more likely https://t.co/jlBuMScmoj Zuckerberg then: Election interference? Nah man. That’s just a “crazy idea” https://t.co/q9TVEbarAC
— Natasha (@riptari) October 1, 2019

For more on where Zuckerberg’s self-servingly circular logic leads, let’s refer to another of his public talking points: That only Facebook’s continued use of powerful, privacy-hostile AI technologies such as facial recognition can save Western society from a Chinese-style state dystopia in which the presence of your face broadcasts a social credit score for others to determine what you get to access.
This equally uncompelling piece of ‘Zuckerlogic’ sums to: ‘Don’t regulate our privacy hostile shit — or China will get to do worse shit before we can!’
So um… yeah but no.

Silicon Valley is terrified of California’s privacy law. Good.

Silicon Valley is terrified.
In a little over three months, California will see the most far-reaching statewide changes to its privacy law in years. The California Consumer Privacy Act (CCPA) kicks in on January 1 and rolls out sweeping new privacy benefits to the state’s 40 million residents — and every tech company in Silicon Valley.
California’s law is similar to Europe’s GDPR. It grants state consumers a right to know what information companies have on them, a right to have that information deleted and the right to opt-out of the sale of that information.
For California residents, these are extremely powerful provisions that allow consumers access to their own information from companies that collect an increasingly alarming amount of data on their users. Look no further than Cambridge Analytica, which saw Facebook profile page data weaponized and used against millions to try to sway an election. And given some of the heavy fines levied in recent months under GDPR, tech companies will have to brace for more fines when the enforcement provision kicks in six months later.
No wonder the law has Silicon Valley shaking in its boots. It absolutely should.
It’s no surprise that some of the largest tech companies in the U.S. — most of which are located in California — lobbied to weaken the CCPA’s provisions. These companies don’t want to be on the hook for having to deal with what they see as burdensome requests enshrined in the state’s new law any more than they currently are for Europeans with GDPR.
Despite the extensive lobbying, California’s legislature passed the bill with minor amendments, much to the chagrin of tech companies in the state.
“Don’t let this post-Cambridge Analytica ‘mea culpa’ fool you into believing these companies have consumers’ best interests in mind,” wrote the ACLU’s Neema Singh Guliani last year, shortly after the bill was signed into law. “This seeming willingness to subject themselves to federal regulation is, in fact, an effort to enlist the Trump administration and Congress in companies’ efforts to weaken state-level consumer privacy protections,” she wrote.
Since the law passed, tech giants have pulled out their last card: pushing for an overarching federal bill.
In doing so, the companies would be able to control their messaging through their extensive lobbying efforts, allowing them to push for a weaker statute that would nullify some of the provisions in California’s new privacy law. It would also spare companies from having to spend a ton of resources ensuring compliance with a variety of statutes in multiple states.
Just this month, a group of 51 chief executives — including Amazon’s Jeff Bezos, IBM’s Ginni Rometty and SAP’s Bill McDermott — signed an open letter to senior lawmakers asking for a federal privacy bill, arguing that consumers aren’t clever enough to “understand rules that may change depending upon the state in which they reside.”
Then, the Internet Association, which counts Dropbox, Facebook, Reddit, Snap, Uber (and just today ZipRecruiter) as members, also pushed for a federal privacy law. “The time to act is now,” said the industry group. If the group gets its wish before the end of the year, the California privacy law could be sunk before it kicks in.
And TechNet, a “national, bipartisan network of technology CEOs and senior executives,” also demanded a federal privacy law, claiming — and without providing evidence — that any privacy law should ensure “businesses can comply with the law while continuing to innovate.” Its members include major venture capital firms, including Kleiner Perkins and JC2 Ventures, as well as other big tech giants like Apple, Google, Microsoft, Oracle and Verizon (which owns TechCrunch).
You know there’s something fishy going on when tech giants and telcos team up. But it’s not fooling anyone.
“It’s no accident that the tech industry launched this campaign right after the California legislature rejected their attempts to undermine the California Consumer Privacy Act,” Jacob Snow, a technology and civil liberties attorney at the ACLU of Northern California, told TechCrunch.
“Instead of pushing for federal legislation that wipes away state privacy law, technology companies should ensure that Californians can fully exercise their privacy rights under the CCPA on January 1, 2020, as the law requires,” he said.
There’s little lawmakers in Congress can do in three months before the CCPA deadline, but it won’t stop tech giants from trying.
Californians might not have the CCPA for long if Silicon Valley tech giants and their lobbyists get their way, but rest easy knowing the consumer won — for once.

America’s largest companies push for federal online privacy laws to circumvent state regulatory efforts

As California moves ahead with what would be the most restrictive online privacy laws in the nation, the chief executives of some of the nation’s largest companies are taking their case to the nation’s capital to plead for federal regulation.
Chief executives at Amazon, AT&T, Dell, Ford, IBM, Qualcomm, Walmart, and other leading financial services, manufacturing, and technology companies have issued an open letter to Congressional leadership, through the pro-industry organization The Business Roundtable, pleading with them to take action on online privacy.
“Now is the time for Congress to act and ensure that consumers are not faced with confusion about their rights and protections based on a patchwork of inconsistent state laws. Further, as the regulatory landscape becomes increasingly fragmented and more complex, U.S. innovation and global competitiveness in the digital economy are threatened,” the letter says.
The subtext to this call to action is the California privacy regulations that are set to take effect by the end of this year.

As we noted when the bill was passed last year, there are a few key components of the California legislation, including the following requirements:

Businesses must disclose what information they collect, what business purpose they do so for and any third parties they share that data with.

Businesses would be required to comply with official consumer requests to delete that data.

Consumers can opt out of their data being sold, and businesses can’t retaliate by changing the price or level of service.

Businesses can, however, offer “financial incentives” for being allowed to collect data.

California authorities are empowered to fine companies for violations.

There’s a reason why companies would push for federal regulation to supersede any initiatives from the states: it is more of a challenge for companies to adhere to a patchwork of different regulatory regimes at the state level. But it’s also true that companies, following the lead of automakers in California, could simply adhere to the most stringent requirements, which would resolve any confusion.
Indeed many of these companies are already complying with strict privacy regulations thanks to the passage of the GDPR in Europe.

Apple still has work to do on privacy

There’s no doubt that Apple’s self-polished reputation for privacy and security has taken a bit of a battering recently.
On the security front, Google researchers just disclosed a major flaw in the iPhone, finding a number of malicious websites that could hack into a victim’s device by exploiting a set of previously undisclosed software bugs. When visited, the sites infected iPhones with an implant designed to harvest personal data — such as location, contacts and messages.
As flaws go, it looks like a very bad one. And when security fails so spectacularly, all those shiny privacy promises naturally go straight out the window.

The implant was used to steal location data and files like databases of WhatsApp, Telegram, iMessage. So all the user messages, or emails. Copies of contacts, photos, https://t.co/AmWRpbcIHw pic.twitter.com/vUNQDo9noJ
— Lukasz Olejnik (@lukOlejnik) August 30, 2019

And while that particular cold-sweat-inducing iPhone security snafu has now been patched, it does raise questions about what else might be lurking out there. More broadly, it also tests the generally held assumption that iPhones are superior to Android devices when it comes to security.
Are we really so sure that thesis holds?
But imagine for a second you could unlink security considerations and purely focus on privacy. Wouldn’t Apple have a robust claim there?
On the surface, the notion of Apple having a stronger claim to privacy versus Google — an adtech giant that makes its money by pervasively profiling internet users, whereas Apple sells premium hardware and services (including essentially now ‘privacy as a service‘) — seems a safe (or, well, safer) assumption. Or at least, until iOS security fails spectacularly and leaks users’ privacy anyway. Then of course affected iOS users can just kiss their privacy goodbye. That’s why this is a thought experiment.
But even directly on privacy, Apple is running into problems, too.
To wit: Siri, its nearly decade-old voice assistant technology, now sits under a penetrating spotlight — having been revealed to contain a not-so-private ‘mechanical turk’ layer of actual humans paid to listen to the stuff people tell it. (Or indeed the personal stuff Siri accidentally records.)

WebKit’s new anti-tracking policy puts privacy on a par with security

WebKit, the open source engine that underpins Internet browsers including Apple’s Safari browser, has announced a new tracking prevention policy that takes the strictest line yet on the background and cross-site tracking practices and technologies which are used to creep on Internet users as they go about their business online.
Trackers are technologies that are invisible to the average web user, yet which are designed to keep tabs on where they go and what they look at online — typically for ad targeting but web user profiling can have much broader implications than just creepy ads, potentially impacting the services people can access or the prices they see, and so on. Trackers can also be a conduit for hackers to inject actual malware, not just adtech.
This translates to stuff like tracking pixels, browser and device fingerprinting, and navigational tracking, to name just a few of the myriad methods that have sprouted like weeds from an unregulated digital adtech industry that’s poured vast resources into ‘innovations’ intended to strip web users of their privacy.
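To make the fingerprinting item concrete: a naive browser fingerprint simply hashes a handful of attributes any script can read into a stable identifier that follows the user across sites, no cookie required. The Python sketch below is a simplified illustration of the general technique, with made-up attribute values — not any particular vendor’s code. Real fingerprinting scripts combine many more signals, such as canvas rendering, installed fonts and audio stack quirks.

import hashlib

def naive_fingerprint(attributes):
    # Hash a few browser-reported attributes into a stable identifier.
    # A real tracker gathers far more signals; this only shows why no cookie is needed.
    canonical = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Attribute values of this kind are exposed to any script a page loads (values invented).
visitor = {
    "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) ...",
    "screen": "2560x1440x24",
    "timezone": "Europe/London",
    "language": "en-GB",
    "platform": "MacIntel",
}
print(naive_fingerprint(visitor))  # same browser -> same ID on every site that runs the script
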
WebKit’s new policy is essentially saying enough: Stop the creeping.
But — and here’s the shift — it’s also saying it’s going to treat attempts to circumvent its policy as akin to malicious hack attacks to be responded to in kind; i.e. with privacy patches and fresh technical measures to prevent tracking.
“WebKit will do its best to prevent all covert tracking, and all cross-site tracking (even when it’s not covert),” the organization writes (emphasis its), adding that these goals will apply to all types of tracking listed in the policy — as well as “tracking techniques currently unknown to us”.
“If we discover additional tracking techniques, we may expand this policy to include the new techniques and we may implement technical measures to prevent those techniques,” it adds.
“We will review WebKit patches in accordance with this policy. We will review new and existing web standards in light of this policy. And we will create new web technologies to re-enable specific non-harmful practices without reintroducing tracking capabilities.”
Spelling out its approach to circumvention, it states in no uncertain terms: “We treat circumvention of shipping anti-tracking measures with the same seriousness as exploitation of security vulnerabilities,” adding: “If a party attempts to circumvent our tracking prevention methods, we may add additional restrictions without prior notice. These restrictions may apply universally; to algorithmically classified targets; or to specific parties engaging in circumvention.”
It also says that if a certain tracking technique cannot be completely prevented without causing knock-on effects with webpage functions the user does intend to interact with, it will “limit the capability” of using the technique — giving examples such as “limiting the time window for tracking” and “reducing the available bits of entropy” (i.e. limiting how many unique data points are available to be used to identify a user or their behavior).
If even that’s not possible “without undue user harm” it says it will “ask for the user’s informed consent to potential tracking”.
“We consider certain user actions, such as logging in to multiple first party websites or apps using the same account, to be implied consent to identifying the user as having the same identity in these multiple places. However, such logins should require a user action and be noticeable by the user, not be invisible or hidden,” it further warns.
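The “bits of entropy” framing mentioned above is easy to make concrete: each attribute a script can observe contributes roughly log2 of the number of values it can take, and once the combined total exceeds log2 of the population size, the combination is effectively unique. A rough back-of-the-envelope calculation in Python, with invented attribute cardinalities purely for illustration:

import math

# Invented cardinalities for a few observable attributes (for illustration only).
attribute_cardinalities = {
    "timezone": 40,
    "screen_resolution": 1000,
    "browser_language": 200,
    "font_list": 10000,
}

combined_bits = sum(math.log2(n) for n in attribute_cardinalities.values())
population_bits = math.log2(4_000_000_000)  # roughly 4 billion internet users

print(f"combined entropy ~= {combined_bits:.1f} bits")       # ~36.2 bits
print(f"uniqueness threshold ~= {population_bits:.1f} bits")  # ~31.9 bits
# Even this small, made-up set of attributes exceeds the threshold, which is why
# WebKit talks about reducing the bits of entropy exposed to scripts.
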
WebKit credits Mozilla’s anti-tracking policy as inspiring and underpinning its new approach.
Commenting on the new policy, Dr Lukasz Olejnik, an independent cybersecurity advisor and research associate at the Center for Technology and Global Affairs at Oxford University, says it marks a milestone in the evolution of how user privacy is treated in the browser — setting it on the same footing as security.

Equating circumvention of anti-tracking with security exploitation is unprecedented. This is exactly what we need to treat privacy as first class citizen. Enough with hand-waving. It’s making technology catch up with regulations (not the other way, for once!) #ePrivacy #GDPR https://t.co/G1Dx7F2MXu
— Lukasz Olejnik (@lukOlejnik) August 15, 2019

“Treating privacy protection circumventions on par with security exploitation is a first of its kind and unprecedented move,” he tells TechCrunch. “This sends a clear warning to the potential abusers but also to the users… This is much more valuable than the still typical approach of ‘we treat the privacy of our users very seriously’ that some still think is enough when it comes to user expectation.”
Asked how he sees the policy impacting pervasive tracking, Olejnik does not predict an instant, overnight purge of unethical tracking of users of WebKit-based browsers but argues there will be less room for consent-less data-grabbers to manoeuvre.
“Some level of tracking, including with unethical technologies, will probably remain in use for the time being. But covert tracking is less and less tolerated,” he says. “It’s also interesting if any decisions will follow, such as for example the expansion of bug bounties to reported privacy vulnerabilities.”
“How this policy will be enforced in practice will be carefully observed,” he adds.
As you’d expect, he credits not just regulation but the role played by active privacy researchers in helping to draw attention and change attitudes towards privacy protection — and thus to drive change in the industry.
There’s certainly no doubt that privacy research is a vital ingredient for regulation to function in such a complex area — feeding complaints that trigger scrutiny that can in turn unlock enforcement and force a change of practice.
Although that’s also a process that takes time.
“The quality of cybersecurity and privacy technology policy, including its communication still leave much to desire, at least at most organisations. This will not change fast,” says Olejnik. “Even if privacy is treated at the ‘C-level’, this then still tends to be about the purely risk of compliance. Fortunately, some important industry players with good understanding of both technology policy and the actual technology, even the emerging ones still under active research, treat it increasingly seriously.
“We owe it to the natural flow of the privacy research output, the talent inflows, and the slowly moving strategic shifts as well to a minor degree to the regulatory pressure and public heat. This process is naturally slow and we are far from the end.”
For its part, WebKit has been taking aim at trackers for several years now, adding features intended to reduce pervasive tracking — such as, back in 2017, Intelligent Tracking Prevention (ITP), which uses machine learning to squeeze cross-site tracking by putting more limits on cookies and other website data.
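Apple hasn’t published ITP’s model in detail, but its public descriptions boil down to classifying a domain as having cross-site tracking ability from on-device statistics — roughly, how many distinct sites it shows up under as a third party, whether as a subresource, an iframe or a redirect. The Python sketch below is a heavily simplified, hypothetical illustration of that kind of classification; the counters and threshold are invented, not Apple’s.

from collections import defaultdict

# Per-domain counters of the kind ITP is publicly described as collecting on-device.
# The structure and the threshold below are invented for illustration.
stats = defaultdict(lambda: {"subresource": set(), "iframe": set(), "redirect": set()})

def observe(third_party, first_party, context):
    # Record that third_party was loaded under first_party in the given context.
    stats[third_party][context].add(first_party)

def looks_like_cross_site_tracker(domain, threshold=3):
    s = stats[domain]
    # The basic signal: a domain seen as a third party under many unrelated sites.
    distinct_sites = s["subresource"] | s["iframe"] | s["redirect"]
    return len(distinct_sites) >= threshold

observe("tracker.example", "news.site", "subresource")
observe("tracker.example", "shop.site", "subresource")
observe("tracker.example", "blog.site", "iframe")
print(looks_like_cross_site_tracker("tracker.example"))  # True: its cookies get restricted
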
Apple immediately applied ITP to its desktop Safari browser — drawing predictable fast-fire from the Internet Advertising Bureau, whose membership comprises every type of tracker-deploying entity on the Internet.
But it’s the creepy trackers that are looking increasingly out of step with public opinion. And, indeed, with the direction of travel of the industry.
In Europe, regulation can be credited with actively steering developments too — following last year’s application of a major update to the region’s comprehensive privacy framework (which finally brought the threat of enforcement that actually bites). The General Data Protection Regulation (GDPR) has also increased transparency around security breaches and data practices. And, as always, sunlight disinfects.
Although there remains the issue of abuse of consent for EU regulators to tackle — with research suggesting many regional cookie consent pop-ups currently offer users no meaningful privacy choices despite GDPR requiring consent to be specific, informed and freely given.
It also remains to be seen how the adtech industry will respond to background tracking being squeezed at the browser level. Continued aggressive lobbying to try to water down privacy protections seems inevitable — if ultimately futile. And perhaps, in Europe in the short term, there will be attempts by the adtech industry to funnel more tracking via cookie ‘consent’ notices that nudge or force users to accept.
As the security space underlines, humans are always the weakest link. So privacy-hostile social engineering might be the easiest way for adtech interests to keep overriding user agency and grabbing their data anyway. Stopping that will likely need regulators to step in and intervene.
Another question thrown up by WebKit’s new policy is which way Chromium will jump, aka the browser engine that underpins Google’s hugely popular Chrome browser.
Of course Google is an ad giant, and parent company Alphabet still makes the vast majority of its revenue from digital advertising — so it maintains a massive interest in tracking Internet users to serve targeted ads.
Yet Chromium developers did pay early attention to the problem of unethical tracking. Here, for example, are two of them discussing potential future work to combat tracking techniques designed to override privacy settings in a blog post from nearly five years ago.
There have also been much more recent signs of Google paying attention to Chrome users’ privacy, such as changes to how it handles cookies, which it announced earlier this year.

But with WebKit now raising the stakes — by treating privacy as seriously as security — that puts pressure on Google to respond in kind. Or risk being seen as using its grip on browser marketshare to foot-drag on baked in privacy standards, rather than proactively working to prevent Internet users from being creeped on.

Libra, Facebook’s global digital currency plan, is fuzzy on privacy, watchdogs warn

Privacy commissioners from the Americas, Europe, Africa and Australasia have put their names to a joint statement raising concerns about a lack of clarity from Facebook over how data protection safeguards will be baked into its planned cryptocurrency project, Libra.
Facebook officially unveiled its big bet to build a global digital currency using blockchain technology in June, steered by a Libra Association with Facebook as a founding member. Other founding members include payment and tech giants such as Mastercard, PayPal, Uber, Lyft, eBay, VC firms including Andreessen Horowitz, Thrive Capital and Union Square Ventures, and not-for-profits such as Kiva and Mercy Corps.
At the same time Facebook announced a new subsidiary of its own business, Calibra, which it said will create financial services for the Libra network, including offering a standalone wallet app that it expects to bake into its messaging apps, Messenger and WhatsApp, next year — raising concerns it could quickly gain a monopolistic hold over what’s being couched as an ‘open’ digital currency network, given the dominance of the associated social platforms where it intends to seed its own wallet.
In its official blog post hyping Calibra, Facebook avoided any talk of how much market power it might wield via its ability to promote the wallet to its existing 2.2BN+ global users, but it did touch on privacy — writing that “we’ll also take steps to protect your privacy” and claiming it would not share “account information or financial data with Facebook or any third party without customer consent”.
Except for when it admitted it would; the same paragraph states there will be “limited cases” when it may share user data. These cases will “reflect our need to keep people safe, comply with the law and provide basic functionality to the people who use Calibra”, the blog adds. (A Calibra Customer Commitment provides little more detail than a few sample instances, such as “preventing fraud and criminal activity”.)
All of that might sound reassuring enough on the surface but Facebook has used the fuzzy notion of needing to keep its users ‘safe’ as an umbrella justification for tracking non-Facebook users across the entire mainstream Internet, for example.
So the devil really is in the granular detail of anything the company claims it will and won’t do.
Hence the lack of comprehensive details about Libra’s approach to privacy and data protection is causing professional watchdogs around the world to worry.
“As representatives of the global community of data protection and privacy enforcement authorities, collectively responsible for promoting the privacy of many millions of people around the world, we are joining together to express our shared concerns about the privacy risks posed by the Libra digital currency and infrastructure,” they write. “Other authorities and democratic lawmakers have expressed concerns about this initiative. These risks are not limited to financial privacy, since the involvement of Facebook Inc., and its expansive categories of data collection on hundreds of millions of users, raises additional concerns. Data protection authorities will also work closely with other regulators.”
Among the commissioners signing the statement is the FTC’s Rohit Chopra: One of two commissioners at the US Federal Trade Commission who dissented from the $5BN settlement order that was passed by a 3:2 vote last month. 
Also raising concerns about Facebook’s transparency about how Libra will comply with privacy laws and expectations in multiple jurisdictions around the world are: Canada’s privacy commissioner Daniel Therrien; the European Union’s data protection supervisor, Giovanni Buttarelli; UK Information commissioner, Elizabeth Denham; Albania’s information and data protection commissioner, Besnik Dervishi; the president of the Commission for Information Technology and Civil Liberties for Burkina Faso, Marguerite Ouedraogo Bonane; and Australia’s information and privacy commissioner, Angelene Falk.
In the joint statement — on what they describe as “global privacy expectations of the Libra network” — they write:
In today’s digital age, it is critical that organisations are transparent and accountable for their personal information handling practices. Good privacy governance and privacy by design are key enablers for innovation and protecting data – they are not mutually exclusive. To date, while Facebook and Calibra have made broad public statements about privacy, they have failed to specifically address the information handling practices that will be in place to secure and protect personal information. Additionally, given the current plans for a rapid implementation of Libra and Calibra, we are surprised and concerned that this further detail is not yet available. The involvement of Facebook Inc. as a founding member of the Libra Association has the potential to drive rapid uptake by consumers around the globe, including in countries which may not yet have data protection laws in place. Once the Libra Network goes live, it may instantly become the custodian of millions of people’s personal information. This combination of vast reserves of personal information with financial information and cryptocurrency amplifies our privacy concerns about the Libra Network’s design and data sharing arrangements.
We’ve pasted the list of questions they’re putting to the Libra Network below — which they specify is “non-exhaustive”, saying individual agencies may follow up with more “as the proposals and service offering develops”.
Among the details they’re seeking answers to is clarity on what users’ personal data will be used for and how users will be able to control what their data is used for.
The risk of dark patterns being used to weaken and undermine users’ privacy is another stated concern.
Where user data is shared, the commissioners are also seeking clarity on the types of data involved and the de-identification techniques that will be used — on the latter point, researchers have demonstrated for years that just a handful of data points can be used to re-identify credit card users from an ‘anonymous’ data-set of transactions, for example.
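That re-identification finding is easy to reproduce in miniature. The toy Python sketch below, on invented data, counts how many people in a small ‘anonymised’ transaction log can be singled out by knowing just a couple of (shop, day) points about them — the same uniqueness problem the researchers demonstrated at scale on real credit card metadata.

from itertools import combinations

# Invented "anonymised" transaction log: (pseudonym, shop, day-of-month).
transactions = [
    ("p1", "cafe", 3), ("p1", "bookshop", 5), ("p1", "grocer", 9),
    ("p2", "cafe", 3), ("p2", "grocer", 9), ("p2", "cinema", 12),
    ("p3", "bookshop", 5), ("p3", "cinema", 12), ("p3", "grocer", 14),
]

def fraction_singled_out(k):
    # Fraction of people uniquely identified by some combination of k known points.
    by_person = {}
    for person, shop, day in transactions:
        by_person.setdefault(person, set()).add((shop, day))
    singled_out = 0
    for person, points in by_person.items():
        others = [pts for who, pts in by_person.items() if who != person]
        # Is there any k-point subset of this person's history nobody else shares?
        if any(all(not combo <= other for other in others)
               for combo in map(set, combinations(points, k))):
            singled_out += 1
    return singled_out / len(by_person)

print(fraction_singled_out(2))  # 1.0: two known (shop, day) points single out everyone here
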
Here’s the full list of questions being put to the Libra Network:

1. How can global data protection and privacy enforcement authorities be confident that the Libra Network has robust measures to protect the personal information of network users? In particular, how will the Libra Network ensure that its participants will:
a. provide clear information about how personal information will be used (including the use of profiling and algorithms, and the sharing of personal information between members of the Libra Network and any third parties) to allow users to provide specific and informed consent where appropriate;
b. create privacy-protective default settings that do not use nudge techniques or “dark patterns” to encourage people to share personal data with third parties or weaken their privacy protections;
c. ensure that privacy control settings are prominent and easy to use;
d. collect and process only the minimum amount of personal information necessary to achieve the identified purpose of the product or service, and ensure the lawfulness of the processing;
e. ensure that all personal data is adequately protected; and
f. give people simple procedures for exercising their privacy rights, including deleting their accounts, and honouring their requests in a timely way.

2. How will the Libra Network incorporate privacy by design principles in the development of its infrastructure?

3. How will the Libra Association ensure that all processors of data within the Libra Network are identified, and are compliant with their respective data protection obligations?

4. How does the Libra Network plan to undertake data protection impact assessments, and how will the Libra Network ensure these assessments are considered on an ongoing basis?

5. How will the Libra Network ensure that its data protection and privacy policies, standards and controls apply consistently across the Libra Network’s operations in all jurisdictions?

6. Where data is shared amongst Libra Network members:

a. what data elements will be involved?

b. to what extent will it be de-identified, and what method will be used to achieve de-identification?
c. how will Libra Network ensure that data is not re-identified, including by use of enforceable contractual commitments with those with whom data is shared?

We’ve reached out to Facebook for comment.

UK High Court rejects human rights challenge to bulk snooping powers

Civil liberties campaign group Liberty has lost its latest challenge to controversial UK surveillance powers that allow state agencies to intercept and retain data in bulk.
The challenge fixed on the presence of so-called ‘bulk’ powers in the 2016 Investigatory Powers Act (IPA): A controversial capability that allows intelligence agencies to legally collect and retain large amounts of data, instead of having to operate via targeted intercepts.
The law even allows for state agents to hack into devices en masse, without per-device grounds for individual suspicion.
Liberty, which was supported in the legal action by the National Union of Journalists, argued that bulk powers are incompatible with European human rights law on the grounds that the IPA contains insufficient safeguards against abuse of these powers.
Two months ago it published examples of what it described as shocking failures by UK state agencies — such as not observing the timely destruction of material; and data being discovered to have been copied and stored in “ungoverned spaces” without the necessary controls — which it said showed MI5 had failed to comply with safeguards requirements since the IPA came into effect.
However the judges disagreed that the examples of serious flaws in spy agency MI5’s “handling procedures” — which the documents also show triggering intervention by the Investigatory Powers Commissioner — sum to a conclusion that the Act itself is incompatible with human rights law.
Rejecting the argument in their July 29 ruling they found that oversight mechanisms the government baked into the legislation (such as the creation of the office of the Investigatory Powers Commissioner to conduct independent oversight of spy agencies’ use of the powers) provide sufficient checks on the risk of abuse, dubbing the regime as “a suite of inter-locking safeguards”.
Liberty expressed disappointment with the ruling — and has said it will appeal.
In a statement the group told the BBC: “This disappointing judgment allows the government to continue to spy on every one of us, violating our rights to privacy and free expression.
“We will challenge this judgment in the courts, and keep fighting for a targeted surveillance regime that respects our rights. These bulk surveillance powers allow the state to Hoover up the messages, calls and web history of hordes of ordinary people who are not suspected of any wrongdoing.”
This is just one of several challenges brought against the IPA.
A separate challenge to bulk collection was lodged by Liberty, Big Brother Watch and others with the European Court of Human Rights (ECHR).
A hearing took place two years ago and the court subsequently found that the UK’s historical regime of bulk interception had violated human rights law. However it did not rule against bulk surveillance powers in principle — which the UK judges note in their judgement, writing that consequently: “There is no requirement for there to be reasonable grounds for suspicion in the case of any individual.”
Earlier this year Liberty et al were granted leave to appeal their case to the ECHR’s highest court. That case is still pending before the Grand Chamber.
