
Call for common EU approach to apps and data to fight COVID-19 and protect citizens’ rights

The European Commission has responded to the regional scramble for apps and data to help tackle the coronavirus crisis by calling for a common EU approach to boost the effectiveness of digital interventions and ensure key rights and freedoms are respected.
The European Union’s executive body wants to ensure Member States’ individual efforts to use data and tech tools to combat COVID-19 are aligned and can interoperate across borders — and therefore be more effective, given the virus does not respect national borders.
Current efforts by governments across the EU to combat the virus are being hampered by the fragmentation of approaches, it warns.
At the same time its recommendation puts a strong focus on the need to ensure that fundamental EU rights do not get overridden in the rush to mitigate the spread of the virus — with the Commission urging public health authorities and research institutions to observe a key EU legal principle of data minimization when processing personal data for a coronavirus purpose.
Specifically it writes that these bodies should apply what it calls “appropriate safeguards” — listing pseudonymization, aggregation, encryption and decentralization as examples of best practice. 
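By way of illustration, here is a minimal Python sketch of two of those safeguards, pseudonymization and aggregation, applied before any data leaves a health authority. It is not drawn from the recommendation itself; the key, record layout and region names are hypothetical.

```python
import hashlib
import hmac
from collections import Counter

# Hypothetical safeguard sketch: pseudonymize direct identifiers with a keyed hash,
# then release only aggregated counts. The key and records below are made up.
SECRET_KEY = b"held-only-by-the-health-authority"

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a keyed pseudonym (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

records = [
    {"patient_id": "alice", "region": "Lombardy"},
    {"patient_id": "bob", "region": "Lombardy"},
    {"patient_id": "carol", "region": "Bavaria"},
]

# Pseudonymize before any further processing or sharing...
pseudonymized = [{"pid": pseudonymize(r["patient_id"]), "region": r["region"]}
                 for r in records]

# ...and publish aggregated counts per region, never individual rows.
print(Counter(r["region"] for r in pseudonymized))  # Counter({'Lombardy': 2, 'Bavaria': 1})
```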
The Commission’s thinking is that getting EU citizens to trust digital efforts — such as the myriad of COVID-19 contacts tracing apps now in development — will be key to their success by helping to drive uptake and usage, which means core rights like privacy take on additional significance at a moment of public health crisis.
Commenting in a statement, commissioner for the EU’s internal market, Thierry Breton said: “Digital technologies, mobile applications and mobility data have enormous potential to help understand how the virus spreads and to respond effectively. With this Recommendation, we put in motion a European coordinated approach for the use of such apps and data, without compromising on our EU privacy and data protection rules, and avoiding the fragmentation of the internal market. Europe is stronger when it acts united.”
“Europe’s data protection rules are the strongest in the world and they are fit also for this crisis, providing for exceptions and flexibility. We work closely with data protection authorities and will come forward with guidance on the privacy implications soon,” added Didier Reynders, the commissioner for justice, in another supporting statement. “We all must work together now to get through this unprecedented crisis. The Commission is supporting the Member States in their efforts to fight the virus and we will continue to do so when it comes to an exit strategy and to recovery. In all this, we will continue to ensure full respect of Europeans’ fundamental rights.”
Since Europe has fast-followed China to become a secondary epicenter for the SARS-CoV-2 virus there has been a rush by governments, institutions and the private sector to grab data and technologies to try to map the spread of the virus and inform policy responses. The Commission itself has leant on telcos to provide anonymized and aggregated user location data for COVID-19 tracking purposes.
Some individual Member States have gone further — calling in tech companies to ask directly for resources and/or data, with little public clarity on what exactly is being provided. Some governments have even rushed out apps that apply individual-level location tracking to enforce quarantine measures.
Multiple EU countries also have contacts tracing apps in the works — taking inspiration from Singapore’s TraceTogether app which uses Bluetooth proximity as a proxy for infection risk.
With so much digital activity going on — and huge economic and social pressure for a ‘coronavirus fix’ — there are clear risks to privacy and civil liberties. Governments, research institutions and the private sector are all mobilizing to capture health-related data and track people’s location like never before, all set against the pressing backdrop of a public health emergency.
The Commission warned today that some of the measures being taken by certain (unnamed) countries — such as location-tracking of individuals; the use of technology to rate an individual’s level of health risk; and the centralization of sensitive data — risk putting pressure on fundamental EU rights and freedoms.
Its recommendation emphasizes that any restrictions on rights must be justified, proportionate and temporary.
Any such restrictions should remain “strictly limited” to what is necessary to combat the crisis and should not continue to exist “without an adequate justification” after the COVID-19 emergency has passed, it adds.
It’s not alone in expressing such concerns.
In recent days bottom-up efforts have emerged out of EU research institutions with the aim of standardizing a ‘privacy-preserving’ approach to coronavirus contacts tracing.
One coalition of EU technologists and scientists led by institutions in Germany, Switzerland and France, is pushing a common approach that they’re hoping will get baked into such apps to limit risks. They’ve called the effort: PEPP-PT (Pan-European Privacy-Preserving Proximity Tracing).
However a different group of privacy experts is simultaneously pushing for a decentralized method for doing the same thing (DP-3T) — arguing it’s a better fit with the EU’s data protection model as it doesn’t require pseudonymized IDs to be centralized on a server. Instead storage of contacts and individual infection risk processing would be decentralized — performed locally, on the user’s device — thereby shrinking the risk of such a system being repurposed to carry out state-level surveillance of citizens.
The backers of this protocol accept it does not erase all risk: tech-savvy hackers could, for instance, intercept the pseudonymized IDs of infected people at the point they’re broadcast to devices for local processing. (And health authorities may be more accustomed to the concept of centralizing data to secure it, rather than radically distributing it.)
Earlier this week, one of the technologists involved in the PEPP-PT project told us it intends to support both approaches — centralized and decentralized — in order to try to maximize international uptake, allowing developers to make their own choice of preferred infrastructure.
Though questions remain over achieving interoperability between different models.
Per its recommendation, the Commission looks to be favoring a decentralized model — as the closest fit with the EU’s rights framework.

The European Commission Recommendation on Bluetooth COVID-19 proximity tracing apps specifically notes that they should apply decentralisation as a key data minimisation safeguard, in line with #DP3T. https://t.co/ksL1Obc8My pic.twitter.com/Vb3jTLbeo9
— Michael Veale (@mikarv) April 8, 2020

In a section of its recommendation paper on privacy and data protection for “COVID-19 mobile warning and prevention applications” it also states a preference for “safeguards ensuring respect for fundamental rights and prevention of stigmatization”; and for “the least intrusive yet effective measures”.

The EU will support privacy-respecting #COVIDー19 application for contact tracing “least intrusive yet effective measures” pic.twitter.com/nNdQEGNatd
— Lukasz Olejnik (@lukOlejnik) April 8, 2020

The Commission’s recommendation also stresses the importance of keeping the public informed.
“Transparency and clear and regular communication, and allowing for the input of persons and communities most affected, will be paramount to ensuring public trust when combating the COVID-19 crisis,” it warns. 
The Commission is proposing a joint toolbox for EU Member States to encourage the development of a rights-respecting, coordinated and common approach to smartphone apps for tracing COVID-19 infections — which will consist of [emphasis its]:
specifications to ensure the effectiveness of mobile information, warning and tracing applications from a medical and technical point of view;
measures to avoid proliferation of incompatible applications, support requirements for interoperability and promotion of common solutions;
governance mechanisms to be applied by public health authorities and in cooperation with the European Centre for Disease Control;
the identification of good practices and mechanisms for exchange of information on the functioning of the applications; and
sharing data with relevant epidemiological public bodies, including aggregated data to ECDC.
It also says it will be providing guidance for Member States that will specifically cover off data protection and privacy implications — another clear signal of concerns.
“The Commission is in close contact with the European Data Protection Board [EDPB] for an overview of the processing of personal data at national level in the context of the coronavirus crisis,” it adds.
Yesterday, following a plenary meeting of the EU data watchdogs body, the EDPB announced that it’s assigned expert subgroups to work on developing guidance on key aspects of data processing in the fight against COVID-19 — including for geolocation and other tracing tools in the context of the COVID-19 outbreak, with its technology expert subgroup leading the work.
While a compliance, e-government and health expert subgroup is also now working on guidance for the processing of health data for research purposes in the coronavirus context.
These are the two areas the EDPB said it’s prioritizing at this time, putting planned guidance for teleworking tools and practices during the current crisis on ice for now.
“I strongly believe data protection and public health go hand in hand,” said EDPB chair, Andrea Jelinek, in a statement: “The EDPB will move swiftly to issue guidance on these topics within the shortest possible notice to help make sure that technology is used in a responsible way to support and hopefully win the battle against the corona pandemic.”
The Commission also wants a common approach for modelling and predicting the spread of COVID-19 — and says the toolbox will focus on developing this via the use of “anonymous and aggregated mobile location data” (such as it has been asking EU operators to provide).
“The aim is to analyse mobility patterns including the impact of confinement measures on the intensity of contacts, and hence the risks of contamination,” it writes. “This will be an important and proportionate input for tools modelling the spread of the virus, and provide insights for the development of strategies for opening up societies again.”
“The Commission already started the discussion with mobile phone operators on 23 March 2020 with the aim to cover all Member States. The data will be fully anonymised and transmitted to the Joint Research Centre for processing and modelling. It will not be shared with third parties and only be stored as long as the crisis is ongoing,” it adds.
The Commission’s push to coordinate coronavirus tech efforts across the EU has been welcomed by privacy and security experts.
Michael Veale, a backer of the decentralized protocol for COVID-19 contacts tracing, told us: “It’s great to see the Commission recommend decentralisation as a core principle for information systems tackling COVID-19. As our DP-3T protocol shows, creating a centralised database is a wholly unnecessary and removable part of bluetooth contact tracing.”
“We hope to be able to place code online for scrutiny and feedback next week — fully open source, of course,” Veale added. “We have already had great public feedback on the protocol which we are revising in light of that to make it even more private and secure. Centralised systems being developed in Europe, such as in Germany, have not published their protocols, let alone code — perhaps they are afraid of what people will find?”
While Lukasz Olejnik, an EU-based cybersecurity advisor and privacy researcher, also welcomed the Commission’s intervention, telling us: “A coordinated approach can certainly be easier to build trust. We should favor privacy-respecting approaches, and make it clear that we are in a crisis situation. Any such crisis system should be dismantled, and it looks like the recommendations recognize it. This is good.”

Cookie consent still a compliance trash-fire in latest watchdog peek

The latest confirmation of the online tracking industry’s continued flouting of EU privacy laws which — at least on paper — are supposed to protect citizens from consent-less digital surveillance comes via Ireland’s Data Protection Commission (DPC).
The watchdog did a sweep survey of around 40 popular websites last year — covering sectors including media and publishing; retail; restaurants and food ordering services; insurance; sport and leisure; and the public sector — and in a new report, published yesterday, it found almost all failing on a number of cookie and tracking compliance issues, with breaches ranging from minor to serious.
Twenty were graded ‘amber’ by the regulator, which signals a good response and approach to compliance but with at least one serious concern identified; twelve were graded ‘red’, based on very poor quality responses and a plethora of bad practices around cookie banners, setting multiple cookies without consent, badly designed cookies policies or privacy policies, and a lack of clarity about whether they understood the purposes of the ePrivacy legislation; while a further three got a borderline ‘amber to red’ grade.
Just two of the 38 controllers got a ‘green’ rating (substantially compliant, with any concerns straightforward and easily remedied); and one more got a borderline ‘green to amber’ grade.
EU law means that if a data controller is relying on consent as the legal basis for tracking a user the consent must be specific, informed and freely given. Additional court rulings last year have further finessed guidance around online tracking — clarifying pre-checked consent boxes aren’t valid, for example.
Yet the DPC still found examples of cookie banners that offer no actual choice at all. Such as those which serve a dummy banner with a cookie notice that users can only meaninglessly click ‘Got it!’. (‘Gotcha data’ more like.. )
In fact the watchdog writes that it found ‘implied’ consent being relied upon by around two-thirds of the controllers, based on the wording of their cookie banners (e.g. notices such as: “by continuing to browse this site you consent to the use of cookies”) — despite this no longer meeting the required legal standard.
“Some appeared to be drawing on older, but no longer extant, guidance published by the DPC that indicated consent could be obtained ‘by implication’, where such informational notices were put in place,” it writes, noting that current guidance on its website “does not make any reference to implied consent, but it also focuses more on user controls for cookies rather than on controller obligations”.
Another finding was that all but one website set cookies immediately on landing — with “many” of these found to have no legal justification for not asking first, as the DPC determined they fall outside available consent exemptions in the relevant regulations.
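For a rough sense of the kind of check that finding implies, the sketch below fetches a page with a fresh session and lists any cookies set before the visitor has touched a consent banner. The URL is a placeholder, and the approach only surfaces server-set HTTP cookies, not trackers dropped later by JavaScript.

```python
import requests  # third-party library: pip install requests

# Fetch a page with a fresh session (no prior consent interaction) and list any
# cookies the server sets immediately on landing. The URL is a placeholder; this
# only captures server-set HTTP cookies, not pixels or JavaScript-set trackers.
resp = requests.get("https://example.com/", timeout=10)
for cookie in resp.cookies:
    print(cookie.name, cookie.domain, cookie.path, cookie.expires)
```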
It also identified widespread abuse of the concept of ‘strictly necessary’ where the use of trackers are concerned. “Many controllers categorised the cookies deployed on their websites as having a ‘necessary’ or ‘strictly necessary’ function, where the stated function of the cookie appeared to meet neither of the two consent exemption criteria set down in the ePrivacy Regulations/ePrivacy Directive,” it writes in the report. “These included cookies used to establish chatbot sessions that were set prior to any request by the user to initiate a chatbot function. In some cases, it was noted that the chatbot function on the websites concerned did not work at all.
“It was clear that some controllers may either misunderstand the ‘strictly necessary’ criteria, or that their definitions of what is strictly necessary are rather more expansive than the definitions provided in Regulation 5(5),” it adds.
Another problem the report highlights is a lack of tools for users to vary or withdraw their consent choices, despite some of the reviewed sites using so called ‘consent management platforms’ (CMPs) sold by third-party vendors.
This chimes with a recent independent study of CMPs — which earlier this year found illegal practices to be widespread, with “dark patterns and implied consent… ubiquitous”, as the researchers put it.
“Badly designed — or potentially even deliberately deceptive — cookie banners and consent-management tools were also a feature on some sites,” the DPC writes in its report, detailing some examples of Quantcast’s CMP which had been implemented in such a way as to make the interface “confusing and potentially deceptive” (such as unlabelled toggles and a ‘reject all’ button that had no effect).
Pre-checked boxes/sliders were also found to be common, with the DPC finding ten of the 38 controllers used them — despite ‘consent’ collected like that not actually being valid consent.
“In the case of most of the controllers, consent was also ‘bundled’ — in other words, it was not possible for users to control consent to the different purposes for which cookies were being used,” the DPC also writes. “This is not permitted, as has been clarified in the Planet49 judgment. Consent does not need to be given for each cookie, but rather for each purpose. Where a cookie has more than one purpose requiring consent, it must be obtained for all of those purposes separately.”
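A minimal sketch of what purpose-level consent could look like in practice is below; the purpose names and record layout are hypothetical rather than taken from the DPC report or the Planet49 ruling.

```python
from datetime import datetime, timezone

# Hypothetical consent record: one entry per purpose, all non-exempt purposes
# defaulting to False (no pre-checked boxes), with a timestamp for audit.
consent_record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "purposes": {
        "strictly_necessary": True,    # exempt from consent under the ePrivacy rules
        "analytics": False,
        "advertising": False,
        "social_media_embeds": False,
    },
}

def may_set_cookie(purposes_served):
    """A cookie may be set only if every purpose it serves has been opted into."""
    return all(consent_record["purposes"].get(p, False) for p in purposes_served)

print(may_set_cookie(["advertising"]))         # False until the user actively opts in
print(may_set_cookie(["strictly_necessary"]))  # True (consent-exempt)
```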
In another finding, the regulator came across instances of websites that had embedded tracking technologies, such as Facebook pixels, yet their operators did not list these in responses to the survey, listing only HTTP browser cookies instead. The DPC suggests this indicates some controllers aren’t even aware of trackers baked into their own sites.
“It was not clear, therefore, whether some controllers were aware of some of the tracking elements deployed on their websites — this was particularly the case where small controllers had outsourced their website management and development to a third party,” it writes.
The worst sector of its targeted sweep — in terms of “poor practices and, in particular, poor understanding of the ePrivacy Regulations and their purpose” — was the restaurants and food-ordering sector, per the report. (Though the finding is clearly based on a small sampling across multiple sectors.)
Despite encountering near blanket failure to actually comply with the law, the DPC, which also happens to be the lead regulator for much of big tech in Europe, has responded by issuing, er, further guidance.
This includes specifics such as: pre-checked consent boxes must be removed; cookie banners can’t be designed to ‘nudge’ users to accept and a reject option must have equal prominence; and no non-necessary cookies may be set on landing. It also stipulates there must always be a way for users to withdraw consent — and doing so should be as easy as consenting.
All stuff that’s been clear and increasingly so at least since the GDPR came into application in May 2018. Nonetheless the regulator is giving the website operators in question a further six months’ grace to get their houses in order — after which it has raised the prospect of actually enforcing the EU’s ePrivacy Directive and the General Data Protection Regulation.
“Where controllers fail to voluntarily make changes to their user interfaces and/or their processing, the DPC has enforcement options available under both the ePrivacy Regulations and the GDPR and will, where necessary, examine the most appropriate enforcement options in order to bring controllers into compliance with the law,” it warns.
The report is just the latest shot across the bows of the online tracking industry in Europe.
The UK’s Information Commissioner’s Office (ICO) has been issuing sternly worded blog posts for months. Its own report last summer found illegal profiling of Internet users by the programmatic ad industry to be rampant — also giving the industry six months to reform.
However the ICO still hasn’t done anything about the adtech industry’s legal black hole — leading privacy experts to denounce the lack of any “substantive action to end the largest data breach ever recorded in the UK”, as one put it at the start of this year.


Ireland’s DPC, meanwhile, has yet to issue decisions in multiple cross-border investigations into the data-mining business practices of tech giants including Facebook and Google, following scores of GDPR complaints — including several targeting their legal basis for processing people’s data.
A two-year review of the pan-EU regulation, set for May 2020, provides one hard deadline that might concentrate minds.

EU privacy experts push a decentralized approach to COVID-19 contacts tracing

A group of European privacy experts has proposed a decentralized system for Bluetooth-based COVID-19 contacts tracing which they argue offers greater protection against abuse and misuse of people’s data than apps which pull data into centralized pots.
The protocol — which they’re calling Decentralized Privacy-Preserving Proximity Tracing (DP-PPT) — has been designed by around 25 academics from at least seven research institutions across Europe, including EPFL and ETH Zurich in Switzerland, and KU Leuven in Belgium.
They’ve published a White Paper detailing their approach here.
The key element is that the design entails local processing of contacts tracing and risk on the user’s device, based on devices generating and sharing ephemeral Bluetooth identifiers (referred to as EphIDs in the paper).
A backend server is used to push data out to devices — i.e. when an infected person is diagnosed with COVID-19, a health authority would sanction the upload, from that person’s device, of a compact representation of the EphIDs broadcast over the infectious period. This would be sent to other devices so they could locally compute whether there is a risk and notify the user accordingly.
Under this design there’s no requirement for pseudonymized IDs to be centralized, where the pooled data would pose a privacy risk. Which in turn should make it easier to persuade EU citizens to trust the system — and voluntarily download a contacts tracing app using this protocol — given it’s architected to resist being repurposed for individual-level state surveillance.
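The following simplified Python sketch illustrates that decentralized flow. It is not the published DP-PPT specification: the key size, number of identifiers per day and derivation function are illustrative assumptions, and the real design layers on further protections detailed in the group’s white paper.

```python
import hashlib
import hmac
import secrets

def ephemeral_ids(day_key: bytes, per_day: int = 96) -> list:
    """Derive short-lived broadcast identifiers (EphIDs) from a single day key."""
    return [hmac.new(day_key, i.to_bytes(2, "big"), hashlib.sha256).digest()[:16]
            for i in range(per_day)]

# Another user's phone generates a fresh secret each day and broadcasts the
# derived EphIDs over Bluetooth; neither identity nor location is involved.
other_day_key = secrets.token_bytes(32)
other_ephids = ephemeral_ids(other_day_key)

# Our phone stores only the EphIDs it has overheard nearby.
observed_nearby = {other_ephids[3], secrets.token_bytes(16)}

# If that user later tests positive, the health authority sanctions upload of their
# day keys: a compact stand-in for every EphID broadcast while infectious.
published_infected_keys = [other_day_key]

# Each device downloads the published keys and checks for matches locally;
# no contact log or pseudonymous ID ever leaves the phone.
at_risk = any(eph in observed_nearby
              for key in published_infected_keys
              for eph in ephemeral_ids(key))
print("Exposure detected:", at_risk)  # True
```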
The group does discuss some other potential threats — such as posed by tech savvy users who could eavesdrop on data exchanged locally, and decompile/recompile the app to modify elements — but the overarching contention is such risks are small and more manageable vs creating centralized pots of data that risk paving the way for ‘surveillance creep’, i.e. if states use a public health crisis as an opportunity to establish and retain citizen-level tracking infrastructure.
The DP-PPT has been designed with its own purpose-limited dismantling in mind, once the public health crisis is over.
“Our protocol is demonstrative of the fact that privacy-preserving approaches to proximity tracing are possible, and that countries or organisations do not need to accept methods that support risk and misuse,” writes professor Carmela Troncoso, of EPFL. “Where the law requires strict necessity and proportionality, and societal support is behind proximity tracing, this decentralized design provides an abuse-resistant way to carry it out.”
In recent weeks governments all over Europe have been leaning on data controllers to hand over user data for a variety of coronavirus tracking purposes. Apps are also being scrambled to market by the private sector — including symptom reporting apps that claim to help researchers fight the disease. While tech giants spy PR opportunities to repackage persistent tracking of Internet users for a claimed public healthcare cause, however vague the actual utility.


The next big coronavirus tech push looks likely to be contacts-tracing apps: Aka apps that use proximity-tracking Bluetooth technology to map contacts between infected individuals and others.
This is because without some form of contacts tracing there’s a risk that hard-won gains to reduce the rate of infections by curtailing people’s movements will be reversed, i.e. once economic and social activity is opened up again. Although whether contacts tracing apps can be as effective at helping to contain COVID-19 as policymakers and technologists hope remains an open question.
What’s crystal clear right now, though, is that without a thoughtfully designed protocol that bakes in privacy by design contacts-tracing apps present a real risk to privacy — and, where they exist, to hard-won human rights. 
The message from the group backing the DP-PPT protocol is that torching rights in the name of combating COVID-19 is neither good nor necessary.
“One of the major concerns around centralisation is that the system can be expanded, that states can reconstruct a social graph of who-has-been-close-to-who, and may then expand profiling and other provisions on that basis. The data can be co-opted and used by law enforcement and intelligence for non-public health purposes,” explains University College London’s Dr Michael Veale, another backer of the decentralized design.
“While some countries may be able to put in place effective legal safeguards against this, by setting up a centralised protocol in Europe, neighbouring countries become forced to interoperate with it, and use centralised rather than decentralised systems too. The inverse is true: A decentralised system puts hard technical limits on surveillance abuses from COVID-19 bluetooth tracking across the world, by ensuring other countries use privacy-protective approaches.”
“It is also simply not necessary,” he adds of centralizing proximity data. “Data protection by design obliges the minimisation of data to that which is necessary for the purpose. Collecting and centralising data is simply not technically necessary for Bluetooth contact tracing.”
Last week we reported on another EU effort — by a different coalition of technologists and scientists, led by Germany’s Fraunhofer Heinrich Hertz Institute for telecoms (HHI) — which has said it’s working on a “privacy preserving” standard for COVID-19 contacts tracing which they’ve dubbed: Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT).
At the time it wasn’t clear whether or not the approach was locked to a centralized model of handling the pseudonymized IDs. Speaking to TechCrunch today, Hans-Christian Boos, one of the PEPP-PT project’s co-initiators, confirmed the standardization effort will support both centralized and decentralized approaches to handling contacts tracing.
The effort had faced criticism from some in the EU privacy community for appearing to favor a centralized rather than decentralized approach — thereby, its critics contend, undermining the core claim to preserve user privacy. But, per Boos, it will in fact support both approaches — in a bid to maximize uptake around the world.
He also said it will be interoperable regardless of whether data is centralized or decentralized. (In the centralized scenario, he said the hope is that the not-for-profit that’s being set up to oversee PEPP-PT will be able to manage the centralized servers itself, pending proper financing — a step intended to further shrink the risk of data centralization in regions that lack a human rights framework, for example.)
“We will have both options — centralized and decentralized,” Boos told TechCrunch. “We will offer both solutions, depending on who wants to use what, and we’ll make them operable. But I’m telling you that both solutions have their merits. I know that in the crypto community there is a lot of people who want decentralization — and I can tell you that in the health community there’s a lot of people who hate decentralization because they’re afraid that too many people have information about infected people.”
“In a decentralized system you have the simple problem that you would broadcast the anonymous IDs of infected people to everybody — so some countries’ health legislation will absolutely forbid that. Even though you have a cryptographic method, you’re broadcasting the IDs to all over the place — that’s the only way your local phone can find out have I been in contact or no,” Boos went on.
“That’s the drawback of a decentralized solution. Other than that it’s a very good thing. On a centralized solution you have the drawback that there is a single operator, whom you can choose to trust or not to trust — has access to anonymized IDs, just the same as if they were broadcast. So the question is you can have one party with access to anonymized IDs or do you have everybody with access to anonymized IDs because in the end you’re broadcasting them over the network [because] it’s spoofable.”
“If your assumption is that someone could hack the centralized service… then you have to also assume that someone could hack a router, which stuff goes through,” he added. “Same problem.
“That’s why we offer both solutions. We’re not religious. Both solutions offer good privacy. Your question is who would you trust more and who would you un-trust more? Would you trust more a lot of users that you broadcast something to or would you trust more someone who operates a server? Or would you trust more that someone can hack a router or that someone can hack the server? Both is possible, right. Both of these options are totally valid options — and it’s a religious discussion between crypto people… but we have to balance it between what crypto wants and what healthcare wants. And because we can’t make that decision we will end up offering both solutions.
“I think there has to be choice because if we are trying to build an international standard we should try and not be part of a religious war.”
Boos also said the project aims to conduct research into the respective protocols (centralized vs decentralized) to compare and conduct risk assessments based on access to the respective data.
“From a data protection point of view that data is completely anonymized because there’s no attachment to location, there’s no attachment to time, there’s no attachment to phone number, MAC address, SIM number, any of those. The only thing you know there is a contact — a relevant contact between two anonymous IDs. That’s the only thing you have,” he said. “The question that we gave the computer scientists and the hackers is if we give you this list — or if we give you this graph, what could you derive from it? In the graph they are just numbers connected to each other, the question is how can you derive anything from it? They are trying — let’s see what’s coming out.”
“There are lots of people trying to be right about this discussion. It’s not about being right; it’s about doing the right thing — and we will supply, from the initiative, whatever good options there are. And if each of them have drawbacks we will make those drawbacks public and we will try to get as much confirmation and research in on these as we can. And we will put this out so people can make their choices which type of the system they want in their geography,” he added.
“If it turns out that one is doable and one is completely not doable then we will drop one — but so far both look doable, in terms of ‘privacy preserving’, so we will offer both. If one turns out to be not doable because it’s hackable or you could derive meta-information at an unacceptable risk then we would drop it completely and stop offering the option.”
On the interoperability point Boos described it as “a challenge” which he said boils down to how the systems calculate their respective IDs — but he emphasized it’s being worked on and is an essential piece.
“Without that the whole thing doesn’t make sense,” he told us. “It’s a challenge why the option isn’t out yet but we’re solving that challenge and it’ll definitely work… There’s multiple ideas how to make that work.”
“If every country does this by itself we won’t have open borders again,” he added. “And if in a country there’s multiple applications that don’t share data then we won’t have a large enough set of people participating who can actually make infection tracing possible — and if there’s not a single place where we can have discussions about what’s the right thing to do about privacy well then probably everybody will do something else and half of them will use phone numbers and location information.”
The PEPP-PT coalition has not yet published its protocol or any code. Which means external experts wanting to chip in with informed feedback on specific design choices related to the proposed standard haven’t been able to get their hands on the necessary data to carry out a review.
Boos said they intend to open source the code this week, under a Mozilla licence. He also said the project is willing to take on “any good suggestions” as contributions.
“Currently only beta members have access to it because those have committed to us that they will update to the newest version,” he said. “We want to make sure that when we publish the first release of code it should have gone through data privacy validation and security validation — so we are as sure as we can be that there’s no major change that someone on an open source system might skip.”
The lack of transparency around the protocol had caused concern among privacy experts — and led to calls for developers to withhold support pending more detail. And even to speculation that European governments may be intervening to push the effort towards a centralized model — and away from core EU principles of data protection by design and default.

I read this as saying that the PEPP-PT enables different configurations, depending on what the ‘user’ (government, platform) prefers. That is not DPbDD. Also I got no answer to the question who are the partners, what NDAs are involved and what downstream data-flows are enabled.
— Mireille Hildebrandt (@mireillemoret) April 6, 2020

As it stands, the EU’s long-standing data protection law bakes in principles such as data minimization. Transparency is another core requirement. And just last week the bloc’s lead privacy regulator, the EDPS, told us it’s monitoring developments around COVID-19 contacts tracing apps.
“The EDPS supports the development of technology and digital applications for the fight against the coronavirus pandemic and is monitoring these developments closely in cooperation with other national Data Protection Supervisory Authorities. It is firmly of the view that the GDPR is not an obstacle for the processing of personal data which is considered necessary by the Health Authorities to fight the pandemic,” a spokesman told us.
“All technology developers currently working on effective measures in the fight against the coronavirus pandemic should ensure data protection from the start, e.g. by applying data protection by design principles. The EDPS and the data protection community stand ready to assist technology developers in this collective endeavour. Guidance from data protection authorities is available here: EDPB Guidelines 4/2019 on Article 25 Data Protection by Design and by Default; and EDPS Preliminary Opinion on Privacy by Design.”
We also understand the European Commission is paying attention to the sudden crop of coronavirus apps and tools — with effectiveness and compliance with European data standards on its radar.
However, at the same time, the Commission has been pushing a big data agenda as part of a reboot of the bloc’s industrial strategy that puts digitization, data and AI at the core. And just today Euractiv reported on leaked documents from the EU Council which say EU Member States and the Commission should “thoroughly analyse the experiences gained from the COVID-19 pandemic” in order to inform future policies across the entire spectrum of the digital domain.
So even in the EU there is a high-level appetite for data that risks intersecting with the coronavirus crisis to drive developments in a direction that might undermine individual privacy rights. Hence the fierce pushback from certain pro-privacy quarters for contacts tracing to be decentralized — to guard against any state data grabs.


For his part Boos argues that what counts as best practice ‘data minimization’ boils down to a point of view on who you trust more. “You could make an argument [for] both [decentralized and centralized approaches] that they’re data minimizing — just because there’s data minimization at one point doesn’t mean you have data minimization overall in a decentralized system,” he suggests.
“It’s a question who do you trust? It’s who would you trust more — that’s the real question. I see the critical point of data as not the list of anonymized contacts — the critical data is the confirmed infected.
“A lot of this is an old, religious discussion between centralization and decentralization,” he added. “Generally IT oscillates between those tools; total distribution, total centralization… Because none of those is a perfect solution. But here in this case I think both offer valid security options, and then they have both different implications on what you’re willing to do or not willing to do with medical data. And then you’ve got to make a decision.
“What we have to do is we’ve got to make sure that the options are available. And we’ve got to make sure there’s sound research, not just conjecture, in heavyweight discussions: How does what work, how do they compare, and what are the risks?”
In terms of who’s involved in PEPP-PT discussions, beyond direct project participants, Boos said governments and health ministries are involved for the practical reason that they “have to include this in their health processes”. “A lot of countries now create their official tracing apps and of course those should be connected to the PEPP-PT,” he said.
“We also talk to the people in the health systems — whatever is the health system in the respective countries — because this needs to in the end interface with the health system, it needs to interface with testing… it should interface with infectious disease laws so people could get in touch with the local CDCs without revealing their privacy to us or their contact information to us, so that’s the conversation we’re also having.”
Developers with early (beta) access are kicking the tyres of the system already. Asked when the first apps making use of PEPP-PT technologies might be in general circulation Boos suggested it could be as soon as a couple of weeks.
“Most of them just have to put this into their tracing layer and we’ve already given them enough information so that they know how they can connect this to their health processes. I don’t think this will take long,” he said, noting the project is also providing a tracing reference app to help countries that haven’t got developer resource on tap.
“For user engagement you’ll have to do more than just tracing — you’ll have to include, for example, the information from the CDC… but we will offer the skeletal implementation of an app to make starting this as a project [easier],” he said.
“If all the people that have emailed us since last week put it in their apps [we’ll get widespread uptake],” Boos added. “Let’s say 50% do I think we get a very good start. I would say that the influx from countries and I would say companies especially who want their workforce back — there’s a high pressure especially to go on a system that allows international exchange and interoperability.”
On the wider point of whether contacts tracing apps are a useful tool to help control the spread of this novel coronavirus — which has shown itself to be highly infectious, more so than flu, for example — Boos said: “I don’t think there’s much argument that isolating infection is important, the problem with this disease is there’s zero symptoms while you’re already contagious. Which means that you can’t just go and measure the temperature of people and be fine. You actually need that look into the past. And I don’t think that can be done accurately without digital help.
“So if the theory that you need to isolate infection chains is true at all, which many diseases have shown that it is — but each disease is different, so there’s no 100% guarantee, but all the data speaks for it — then that is definitely something that we need to do… The argument [boils down to] if we have so many infected as we currently have, does this make sense — do we not end up very quickly, because the world is so interconnected, with the same type of lockdown mechanism?
“This is why it only makes sense to come out with an app like this when you have broken these R0 values [i.e. how many other people one infected person can infect] — once you’ve got it under 1 and got the number of cases in your country down to a good level. And I think that in the language of an infectious disease person this means going back to the approach of containing the disease, rather than mitigating the disease — what we’re doing now.”
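As a back-of-the-envelope illustration of why that threshold matters (the figures are hypothetical, not Boos’s), case numbers grow or shrink geometrically with the reproduction number:

```python
# Hypothetical illustration: each infected person causes R new cases per generation,
# so starting from 100 cases the epidemic grows or shrinks geometrically.
def cases_after(generations: int, r: float, initial_cases: int = 100) -> float:
    return initial_cases * (r ** generations)

for r in (2.5, 1.0, 0.8):
    print(f"R={r}: about {cases_after(10, r):,.0f} cases after 10 generations")
# R=2.5 explodes towards a million cases, R=1.0 stays flat, R=0.8 decays to ~11.
```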
“The approach of contact chain evaluation allows you to put better priorities on testing — but currently people don’t have the real priority question, they have a resource question on testing,” he added. “Testing and tracing are independent of each other. You need both; because if you’re tracing contacts and you can’t get tested what’s that good for? So yes you definitely [also] need the testing infrastructure for sure.”

New York City bans Zoom in schools citing security concerns

As schools lie empty, students still have to learn. But officials in New York City say schools are not permitted to use Zoom for remote teaching, citing security concerns with the video conferencing service.
“Providing a safe and secure remote learning experience for our students is essential, and upon further review of security concerns, schools should move away from using Zoom as soon as possible,” said Danielle Filson, a spokesperson for the New York City Dept. of Education. “There are many new components to remote learning, and we are making real-time decisions in the best interest of our staff and students.”
Instead, the city’s Dept. of Education is transitioning schools to Microsoft Teams, which the spokesperson said has the “same capabilities with appropriate security measures in place.”
The ban will cover some 1.1 million students in more than 1,800 schools across the city’s five boroughs. The decision to ban Zoom from schools was made in part by New York City’s Cyber Command, which launched in 2018 to help keep the city’s residents safe.
Zoom did not immediately comment.
News of the ban comes after a barrage of criticism over the company’s security policies and privacy practices, as hundreds of millions of users forced to work during the pandemic from home turn to the video calling platform. On Friday, Zoom’s chief executive apologized for “mistakenly” routing some calls through China, after researchers said the setup would put ostensibly encrypted calls at risk of interception by Chinese authorities. Zoom also apologized for claiming its service was end-to-end encrypted when it was not.
Zoom also changed its default settings to enable passwords on video calls by default after a wave of “Zoombombing” attacks, which saw unprotected calls invaded by trolls and used to broadcast abusive content.
Not all schools are said to be finding the transition easy. As first reported by Chalkbeat, Zoom quickly became the popular video calling service of choice after city schools closed on March 16. But one school principal in Brooklyn warned the publication that the shift away from Zoom would make it harder to remotely teach their classes, citing a “clunkiness” of Microsoft’s service.
The city spokesperson said it had been training schools on Microsoft Teams for “several weeks.”
But the spokesperson did not rule out an eventual return to Zoom, saying that the department “continues to review and monitor developments with Zoom,” and will update schools with any changes.


Before suing NSO Group, Facebook allegedly sought their software to better spy on users

Facebook’s WhatsApp is in the midst of a lawsuit against Israeli mobile surveillance outfit NSO Group. But before complaining about the company’s methods, Facebook seems to have wanted to use them for its own purposes, according to testimony from NSO founder Shalev Hulio.
Last year brought news of an exploit that could be used to install one of NSO’s spyware packages, Pegasus, on devices using WhatsApp. The latter sued the former over it, saying that over a hundred human rights activists, journalists and others were targeted using the method.


Last year also saw Facebook finally shut down Onavo, the VPN app it purchased in 2013 and developed into a backdoor method of collecting all manner of data about its users — but not as much as they’d have liked, according to Hulio. In a document filed with the court yesterday he states that Facebook in 2017 asked NSO Group for help collecting data on iOS devices resistant to the usual tricks:
In October 2017, NSO was approached by two Facebook representatives who asked to purchase the right to use certain capabilities of Pegasus, the same NSO software discussed in Plaintiffs’ Complaint.
The Facebook representatives stated that Facebook was concerned that its method for gathering user data through Onavo Protect was less effective on Apple devices than on Android devices. The Facebook representatives also stated that Facebook wanted to use purported capabilities of Pegasus to monitor users on Apple devices and were willing to pay for the ability to monitor Onavo Protect users. Facebook proposed to pay NSO a monthly fee for each Onavo Protect user.
NSO declined, as it claims to only provide its software to governments for law enforcement purposes. But there is a certain irony to Facebook wanting to employ against its users the very software it would later decry being employed against its users. (WhatsApp maintains some independence from its parent company but these events come well after the purchase by and organizational integration into Facebook.)
A Facebook representative did not dispute that representatives from the company approached NSO Group at the time, but said the testimony was an attempt to “distract from the facts” and contained “inaccurate representations about both their spyware and a discussion with people who work at Facebook.” We can presumably expect a fuller rebuttal in the company’s own filings soon.
Facebook and WhatsApp are, quite correctly, concerned that effective, secret intrusion methods like those developed and sold by NSO Group are dangerous in the wrong hands — as demonstrated by the targeting of activists and journalists, and potentially even Jeff Bezos. But however reasonable Facebook’s concerns are, the company’s status as the world’s most notorious collector and peddler of private information makes its righteous stance hard to take seriously.

Google rolls back SameSite cookie changes to keep essential online services from breaking

Google today announced that it will temporarily roll back the changes it recently made to how its Chrome browser handles cookies in order to ensure that sites that perform essential services like banking, online grocery, government services and healthcare won’t become inaccessible to Chrome users during the current COVID-19 pandemic.
The new SameSite rules, which the company started rolling out to a growing number of Chrome users in recent months, are meant to make it harder for sites to access cookies from third-party sites and hence track a user’s online activity. These new rules are also meant to prevent cross-site request forgery attacks.
Under Google’s new guidance, developers have to explicitly allow their cookies to be read by third-party sites; otherwise, the browser will prevent these third-party sites from accessing them.
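In practice that means explicitly labelling any cookie intended for cross-site use with the SameSite=None and Secure attributes. A minimal sketch using Python’s standard library follows; the cookie name and value are placeholders, and samesite support in http.cookies requires Python 3.8 or later.

```python
from http.cookies import SimpleCookie  # 'samesite' attribute needs Python 3.8+

# A cookie that should still be sent in third-party (cross-site) contexts must now
# say so explicitly; browsers only honour SameSite=None when Secure is also set.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"            # placeholder name and value
cookie["session_id"]["samesite"] = "None"
cookie["session_id"]["secure"] = True
cookie["session_id"]["httponly"] = True

# The Set-Cookie header a server would emit:
print(cookie["session_id"].OutputString())
# -> something like: session_id=abc123; Secure; HttpOnly; SameSite=None
```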
Since this is a pretty major change, Google gave developers quite a bit of time to adapt their applications to it. Still, not every site is ready yet and so the Chrome team decided to halt the gradual rollout and stop enforcing these new rules for the time being.
“While most of the web ecosystem was prepared for this change, we want to ensure stability for websites providing essential services including banking, online groceries, government services and healthcare that facilitate our daily life during this time,” writes Google Chrome engineering director Justin Schuh. “As we roll back enforcement, organizations, users and sites should see no disruption.”
A Google spokesperson also told us that the team saw some breakage in sites “that would not normally be considered essential, but with COVID-19 having become more important, we made this decision in an effort to ensure stability during this time.”
The company says it plans to resume its SameSite enforcement over the summer, though the exact timing isn’t yet clear.


Google is now publishing coronavirus mobility reports, feeding off users’ location history

Google is giving the world a clearer glimpse of exactly how much it knows about people everywhere — using the coronavirus crisis as an opportunity to repackage its persistent tracking of where users go and what they do as a public good in the midst of a pandemic.
In a blog post today the tech giant announced the publication of what it’s branding ‘COVID-19 Community Mobility Reports’. Aka an in-house analysis, showcasing aggregated changes in population movements around the world, of the much more granular location data it maps and tracks to fuel its ad-targeting, product development and wider commercial strategy.
The coronavirus pandemic has generated a worldwide scramble for tools and data to inform government responses. In the EU, for example, the European Commission has been leaning on telcos to hand over anonymized and aggregated location data to model the spread of COVID-19.
Google’s data dump looks intended to dangle a similar idea of public policy utility while providing an eyeball-grabbing public snapshot of mobility shifts via data pulled off of its global user-base.
In terms of actual utility for policymakers, Google’s suggestions are pretty vague. The reports could help government and public health officials “understand changes in essential trips that can shape recommendations on business hours or inform delivery service offerings”, it writes.
“Similarly, persistent visits to transportation hubs might indicate the need to add additional buses or trains in order to allow people who need to travel room to spread out for social distancing,” it goes on. “Ultimately, understanding not only whether people are traveling, but also trends in destinations, can help officials design guidance to protect public health and essential needs of communities.”
The location data Google is making public is similarly fuzzy — to avoid inviting a privacy storm — with the company saying it’s using “the same world-class anonymization technology that we use in our products every day”.
“For these reports, we use differential privacy, which adds artificial noise to our datasets enabling high quality results without identifying any individual person,” Google writes. “The insights are created with aggregated, anonymized sets of data from users who have turned on the Location History setting, which is off by default.”
“In Google Maps, we use aggregated, anonymized data showing how busy certain types of places are—helping identify when a local business tends to be the most crowded. We have heard from public health officials that this same type of aggregated, anonymized data could be helpful as they make critical decisions to combat COVID-19,” it adds, tacitly linking an existing offering in Google Maps to a coronavirus-busting cause.
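Google has not published the exact parameters behind these reports, but the textbook way of ‘adding artificial noise’ for differential privacy is the Laplace mechanism. The sketch below shows the idea on a single aggregated count; the epsilon, sensitivity and visit figures are illustrative assumptions, not Google’s.

```python
import numpy as np  # third-party library: pip install numpy

rng = np.random.default_rng()

def dp_count(true_count: float, epsilon: float = 0.5, sensitivity: float = 1.0) -> float:
    """Release a count with Laplace noise scaled to sensitivity / epsilon."""
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Hypothetical visits to one category of place in one region, before and during lockdown.
visits_baseline, visits_now = 12_000, 6_500
noisy_baseline, noisy_now = dp_count(visits_baseline), dp_count(visits_now)
print(f"Reported change vs baseline: {100 * (noisy_now - noisy_baseline) / noisy_baseline:+.1f}%")
```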
The reports consist of per country, or per state, downloads (with 131 countries covered initially), further broken down into regions/counties — with Google offering an analysis of how community mobility has changed vs a baseline average before COVID-19 arrived to change everything.
So, for example, a March 29 report for the whole of the US shows a 47% drop in retail and recreation activity vs the pre-COVID-19 period; a 22% drop in grocery & pharmacy; and a 19% drop in visits to parks and beaches, per Google’s data.
While the same date report for California shows a considerably greater drop in the latter (down 38% compared to the regional baseline); and slightly bigger decreases in both retail and recreation activity (down 50%) and grocery & pharmacy (-24%).
Google says it’s using “aggregated, anonymized data to chart movement trends over time by geography, across different high-level categories of places such as retail and recreation, groceries and pharmacies, parks, transit stations, workplaces, and residential”. The trends are displayed over several weeks, with the most recent information representing 48-to-72 hours prior, it adds.
The company says it’s not publishing the “absolute number of visits” as a privacy step, adding: “To protect people’s privacy, no personally identifiable information, like an individual’s location, contacts or movement, is made available at any point.”
Google’s location mobility report for Italy, which remains the European country hardest hit by the virus, illustrates the extent of the change from lockdown measures applied to the population — with retail & recreation dropping 94% vs Google’s baseline; grocery & pharmacy down 85%; and a 90% drop in trips to parks and beaches.
The same report shows an 87% drop in activity at transit stations; a 63% drop in activity at workplaces; and an increase of almost a quarter (24%) of activity in residential locations — as many Italians stay at home, instead of commuting to work.
It’s a similar story in Spain — another country hard-hit by COVID-19. Though Google’s data for France suggests instructions to stay-at-home may not be being quite as keenly observed by its users there, with only an 18% increase in activity at residential locations and a 56% drop in activity at workplaces. Perhaps because the pandemic has so far had a less severe impact on France, although numbers of confirmed cases and deaths continue to rise across the region.
While policymakers have been scrambling for data and tools to inform their responses to COVID-19, privacy experts and civil liberties campaigners have rushed to voice concerns about the impacts of such data-fuelled efforts on individual rights, while also querying the wider utility of some of this tracking.

And yes, the disclaimer is very broad. I’d say, this is largely a PR move.
Apart from this, Google must be held accountable for its many other secondary data uses. And Google/Alphabet is far too powerful, which must be addressed at several levels, soon. https://t.co/oksJgQAPAY
— Wolfie Christl (@WolfieChristl) April 3, 2020

Contacts tracing is another area where apps are fast being touted as a potential solution to get the West out of economically crushing population lockdowns — opening up the possibility of people’s mobile devices becoming a tool to enforce lockdowns, as has happened in China.
“Large-scale collection of personal data can quickly lead to mass surveillance,” is the succinct warning of a trio of academics from London’s Imperial College’s Computational Privacy Group, who have compiled their privacy concerns vis-a-vis COVID-19 contacts tracing apps into a set of eight questions app developers should be asking.
Discussing Google’s release of mobile location data for a COVID-19 cause, the head of the group, Yves-Alexandre de Montjoye, gave a general thumbs up to the steps it’s taken to shrink privacy risks.
Although he also called for Google to provide more detail about the technical processes it’s using in order that external researchers can better assess the robustness of the claimed privacy protections. Such scrutiny is of pressing importance with so much coronavirus-related data grabbing going on right now, he argues.
“It is all aggregated, they normalize to a specific set of dates, they threshold when there are too few people and on top of this they add noise to make — according to them — the data differentially private. So from a pure anonymization perspective it’s good work,” de Montjoye told TechCrunch, discussing the technical side of Google’s release of location data. “Those are three of the big ‘levers’ that you can use to limit risk. And I think it’s well done.”
“But — especially in times like this when there’s a lot of people using data — I think what we would have liked is more details. There’s a lot of assumptions on thresholding, on how do you apply differential privacy, right?… What kind of assumptions are you making?” he added, querying how much noise Google is adding to the data, for example. “It would be good to have a bit more detail on how they applied [differential privacy]… Especially in times like this it is good to be… overly transparent.”
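To make those levers concrete, here is a minimal Python sketch of how aggregation, thresholding and Laplace noise can be combined on per-category visit counts. It is emphatically not Google’s pipeline: the data layout, the threshold of 10 distinct users and the epsilon value of 1.0 are invented placeholders, and de Montjoye’s point is precisely that the real parameters have not been published.

```python
# Minimal sketch of the three "levers": aggregation, thresholding, noise.
# Not Google's implementation -- the threshold and epsilon are hypothetical.
from collections import defaultdict
import numpy as np

def release_mobility_counts(visits, threshold=10, epsilon=1.0):
    """visits: iterable of (place_category, user_id) pairs for one region and day."""
    # Lever 1: aggregation -- count distinct users per category, discard the IDs.
    users_per_category = defaultdict(set)
    for category, user_id in visits:
        users_per_category[category].add(user_id)

    released = {}
    for category, users in users_per_category.items():
        count = len(users)
        # Lever 2: thresholding -- suppress categories visited by too few people,
        # so nothing is published about a handful of individuals.
        if count < threshold:
            continue
        # Lever 3: Laplace noise with scale 1/epsilon (sensitivity 1), the standard
        # mechanism for making a simple count differentially private.
        noisy = count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        released[category] = max(0, round(noisy))
    return released
```

Without knowing the actual threshold and noise scale, outside researchers cannot quantify how strong the anonymization guarantee really is, which is the gap de Montjoye is flagging.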
While Google’s mobility data release might appear to overlap in purpose with the Commission’s call for EU telco metadata for COVID-19 tracking, de Montjoye points out there are likely to be key differences based on the different data sources.
“It’s always a trade off between the two,” he says. “It’s basically telco data would probably be less fine-grained, because GPS is much more precise spatially and you might have more data points per person per day with GPS than what you get with mobile phone but on the other hand the carrier/telco data is much more representative — it’s not only smartphone, and it’s not only people who have latitude on, it’s everyone in the country, including non smartphone.”
There may be country specific questions that could be better addressed by working with a local carrier, he also suggested. (The Commission has said it’s intending to have one carrier per EU Member State providing anonymized and aggregated metadata.)
On the topical question of whether location data can ever be truly anonymized, de Montjoye — an expert in data reidentification — gave a “yes and no” response, arguing that original location data is “probably really, really hard to anonymize”.
“Can you process this data and make the aggregate results anonymous? Probably, probably, probably yes — it always depends. But then it also means that the original data exists… Then it’s mostly a question of the controls you have in place to ensure the process that leads to generating those aggregates does not contain privacy risks,” he added.
Perhaps a bigger question related to Google’s location data dump is around the issue of legal consent to be tracking people in the first place.
While the tech giant claims the data is based on opt-ins to location tracking, the company was fined $57M by France’s data watchdog last year for a lack of transparency over how it uses people’s data.
Then, earlier this year, the Irish Data Protection Commission (DPC) — now the lead privacy regulator for Google in Europe — confirmed a formal probe of the company’s location tracking activity, following a 2018 complaint by EU consumer groups which accuses Google of using manipulative tactics in order to keep tracking web users’ locations for ad-targeting purposes.
“The issues raised within the concerns relate to the legality of Google’s processing of location data and the transparency surrounding that processing,” said the DPC in a statement in February, announcing the investigation.
The legal questions hanging over Google’s consent to track likely explains the repeat references in its blog post to people choosing to opt in and having the ability to clear their Location History via settings. (“Users who have Location History turned on can choose to turn the setting off at any time from their Google Account, and can always delete Location History data directly from their Timeline,” it writes in one example.)
In addition to offering up these coronavirus mobility reports — which Google specifies it will continue to do throughout the crisis — the company says it’s collaborating with “select epidemiologists working on COVID-19 with updates to an existing aggregate, anonymized dataset that can be used to better understand and forecast the pandemic”.
“Data of this type has helped researchers look into predicting epidemics, plan urban and transit infrastructure, and understand people’s mobility and responses to conflict and natural disasters,” it adds.

Using AI responsibly to fight the coronavirus pandemic

Mark Minevich
Contributor

Mark Minevich is president of Going Global Ventures, an advisor at Boston Consulting Group, a digital fellow at IPsoft, and a leading global AI expert, digital cognitive strategist and venture capitalist.

Irakli Beridze
Contributor

Irakli Beridze is head of the Centre for AI and Robotics at the United Nations Interregional Crime and Justice Research Institute (UNICRI).

The emergence of the novel coronavirus has left the world in turmoil. COVID-19, the disease caused by the virus, has reached virtually every corner of the world, with the number of cases exceeding a million and the number of deaths more than 50,000 worldwide. It is a situation that will affect us all in one way or another.
With the imposition of lockdowns, limitations of movement, the closure of borders and other measures to contain the virus, the operating environment of law enforcement agencies and those security services tasked with protecting the public from harm has suddenly become ever more complex. They find themselves thrust into the middle of an unparalleled situation, playing a critical role in halting the spread of the virus and preserving public safety and social order in the process. In response to this growing crisis, many of these agencies and entities are turning to AI and related technologies for support in unique and innovative ways. Enhancing surveillance, monitoring and detection capabilities is high on the priority list.
For instance, early in the outbreak, Reuters reported a case in China wherein the authorities relied on facial recognition cameras to track a man from Hangzhou who had traveled in an affected area. Upon his return home, the local police were there to instruct him to self-quarantine or face repercussions. Police in China and Spain have also started to use technology to enforce quarantine, with drones being used to patrol and broadcast audio messages to the public, encouraging them to stay at home. People flying to Hong Kong airport receive monitoring bracelets that alert the authorities if they breach the quarantine by leaving their home.
In the United States, a surveillance company announced that its AI-enhanced thermal cameras can detect fevers, while in Thailand, border officers at airports are already piloting a biometric screening system using fever-detecting cameras.
Isolated cases or the new norm?
With the number of cases, deaths and countries on lockdown increasing at an alarming rate, we can assume that these will not be isolated examples of technological innovation in response to this global crisis. In the coming days, weeks and months of this outbreak, we will most likely see more and more AI use cases come to the fore.
While the application of AI can play an important role in seizing the reins in this crisis, and even safeguard officers and officials from infection, we must not forget that its use can raise very real and serious human rights concerns that can be damaging and undermine the trust placed in government by communities. Human rights, civil liberties and the fundamental principles of law may be exposed or damaged if we do not tread this path with great caution. There may be no turning back if Pandora’s box is opened.
In a public statement on March 19, the monitors for freedom of expression and freedom of the media for the United Nations, the Inter-American Commission for Human Rights and the Representative on Freedom of the Media of the Organization for Security and Co-operation in Europe issued a joint statement on promoting and protecting access to and free flow of information during the pandemic, and specifically took note of the growing use of surveillance technology to track the spread of the coronavirus. They acknowledged that there is a need for active efforts to confront the pandemic, but stressed that “it is also crucial that such tools be limited in use, both in terms of purpose and time, and that individual rights to privacy, non-discrimination, the protection of journalistic sources and other freedoms be rigorously protected.”
This is not an easy task, but a necessary one. So what can we do?
Ways to responsibly use AI to fight the coronavirus pandemic
Data anonymization: While some countries are tracking individual suspected patients and their contacts, Austria, Belgium, Italy and the U.K. are collecting anonymized data to study the movement of people in a more general manner. This option still provides governments with the ability to track the movement of large groups, but minimizes the risk of infringing data privacy rights.
Purpose limitation: Personal data that is collected and processed to track the spread of the coronavirus should not be reused for another purpose. National authorities should seek to ensure that the large amounts of personal and medical data are exclusively used for public health reasons. This is a concept already in force in Europe, within the context of the European Union’s General Data Protection Regulation (GDPR), but it’s time for this to become a global principle for AI.
Knowledge-sharing and open access data: António Guterres, the United Nations Secretary-General, has insisted that “global action and solidarity are crucial,” and that we will not win this fight alone. This is applicable on many levels, even for the use of AI by law enforcement and security services in the fight against COVID-19. These agencies and entities must collaborate with one another and with other key stakeholders in the community, including the public and civil society organizations. AI use cases and data should be shared and transparency promoted.
Time limitation: Although the end of this pandemic seems rather far away at this point in time, it will come to an end. When it does, national authorities will need to scale back their newly acquired monitoring capabilities. As Yuval Noah Harari observed in his recent article, “temporary measures have a nasty habit of outlasting emergencies, especially as there is always a new emergency lurking on the horizon.” We must ensure that these exceptional capabilities are indeed scaled back and do not become the new norm.
Within the United Nations system, the United Nations Interregional Crime and Justice Research Institute (UNICRI) is working to advance approaches to AI such as these. It has established a specialized Centre for AI and Robotics in The Hague and is one of the few international actors dedicated to specifically looking at AI vis-à-vis crime prevention and control, criminal justice, rule of law and security. It assists national authorities, in particular law enforcement agencies, to understand the opportunities presented by these technologies and, at the same time, to navigate the potential pitfalls associated with these technologies.
Working closely with the International Criminal Police Organization (INTERPOL), UNICRI has set up a global platform for law enforcement, fostering discussion on AI, identifying practical use cases and defining principles for responsible use. Much work has been done through this forum, but it is still early days, and the path ahead is long.
While the COVID-19 pandemic has illustrated several innovative use cases, as well as the urgency for governments to do their utmost to stop the spread of the virus, it is important not to let consideration of fundamental principles, rights and respect for the rule of law be set aside. The positive power and potential of AI is real. It can help those embroiled in fighting this battle to slow the spread of this debilitating disease. It can help save lives. But we must stay vigilant and commit to the safe, ethical and responsible use of AI.
It is essential that, even in times of great crisis, we remain conscious of the duality of AI and strive to advance AI for good.

An EU coalition of techies is backing a “privacy-preserving” standard for COVID-19 contacts tracing

A European coalition of techies and scientists drawn from at least eight countries, and led by Germany’s Fraunhofer Heinrich Hertz Institute for telecoms (HHI), is working on contacts-tracing proximity technology for COVID-19 that’s designed to comply with the region’s strict privacy rules — officially unveiling the effort today.
China-style individual-level location-tracking of people by states via their smartphones even for a public health purpose is hard to imagine in Europe — which has a long history of legal protection for individual privacy. However the coronavirus pandemic is applying pressure to the region’s data protection model, as governments turn to data and mobile technologies to seek help with tracking the spread of the virus, supporting their public health response and mitigating wider social and economic impacts.
Scores of apps are popping up across Europe aimed at attacking coronavirus from different angles. European privacy not-for-profit, noyb, is keeping an updated list of approaches, both led by governments and private sector projects, to use personal data to combat SARS-CoV-2 — with examples so far including contacts tracing, lockdown or quarantine enforcement and COVID-19 self-assessment.
The efficacy of such apps is unclear — but the demand for tech and data to fuel such efforts is coming from all over the place.
In the UK the government has been quick to call in tech giants, including Google, Microsoft and Palantir, to help the National Health Service determine where resources need to be sent during the pandemic. While the European Commission has been leaning on regional telcos to hand over user location data to carry out coronavirus tracking — albeit in aggregated and anonymized form.
The newly unveiled Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT) project is a response to the coronavirus pandemic generating a huge spike in demand for citizens’ data. It’s intended to offer not just another app, but what’s described as “a fully privacy-preserving approach” to COVID-19 contacts tracing.
The core idea is to leverage smartphone technology to help disrupt the next wave of infections by notifying individuals who have come into close contact with an infected person — via the proxy of their smartphones having been near enough to carry out a Bluetooth handshake. So far so standard. But the coalition behind the effort wants to steer developments in such a way that the EU response to COVID-19 doesn’t drift towards China-style state surveillance of citizens.
While, for the moment, strict quarantine measures remain in place across much of Europe there may be less imperative for governments to rip up the best practice rulebook to intrude on citizens’ privacy, given the majority of people are locked down at home. But the looming question is what happens when restrictions on daily life are lifted?
Contacts tracing — as a way to offer a chance for interventions that can break any new infection chains — is being touted as a key component of preventing a second wave of coronavirus infections by some, with examples such as Singapore’s TraceTogether app being eyed up by regional lawmakers.
Singapore does appear to have had some success in keeping a second wave of infections from turning into a major outbreak, via an aggressive testing and contacts-tracing regime. But what a small island city-state with a population of less than 6M can do isn’t necessarily comparable to what’s feasible for a trading bloc of 27 different nations whose collective population exceeds 500M.
Europe isn’t going to have a single coronavirus tracing app. It’s already got a patchwork. Hence the people behind PEPP-PT offering a set of “standards, technology, and services” to countries and developers to plug into to get a standardized COVID-19 contacts-tracing approach up and running across the bloc.
The other very European flavored piece here is privacy — and privacy law. “Enforcement of data protection, anonymization, GDPR [the EU’s General Data Protection Regulation] compliance, and security” are baked in, is the top-line claim.
“PEPP-PT was explicitly created to adhere to strong European privacy and data protection laws and principles,” the group writes in an online manifesto. “The idea is to make the technology available to as many countries, managers of infectious disease responses, and developers as quickly and as easily as possible.
“The technical mechanisms and standards provided by PEPP-PT fully protect privacy and leverage the possibilities and features of digital technology to maximize speed and real-time capability of any national pandemic response.”
Hans-Christian Boos, one of the project’s co-initiators and the founder of an AI company called Arago, discussed the initiative with German newspaper Der Spiegel, telling it: “We collect no location data, no movement profiles, no contact information and no identifiable features of the end devices.”
The newspaper reports PEPP-PT’s approach means apps aligning to this standard would generate only temporary IDs — to avoid individuals being identified. Two or more smartphones running an app that uses the tech and has Bluetooth enabled when they come into proximity would exchange their respective IDs — saving them locally on the device in an encrypted form, according to the report.
Der Spiegel writes that should a user of the app subsequently be diagnosed with coronavirus their doctor would be able to ask them to transfer the contact list to a central server. The doctor would then be able to use the system to warn the affected IDs that they have had contact with a person who has since been diagnosed with the virus — meaning those at-risk individuals could be proactively tested and/or self-isolate.
On its website PEPP-PT explains the approach thus:

Mode 1
If a user is not tested or has tested negative, the anonymous proximity history remains encrypted on the user’s phone and cannot be viewed or transmitted by anybody. At any point in time, only the proximity history that could be relevant for virus transmission is saved, and earlier history is continuously deleted.
Mode 2
If the user of phone A has been confirmed to be SARS-CoV-2 positive, the health authorities will contact user A and provide a TAN code to the user that ensures potential malware cannot inject incorrect infection information into the PEPP-PT system. The user uses this TAN code to voluntarily provide information to the national trust service that permits the notification of PEPP-PT apps recorded in the proximity history and hence potentially infected. Since this history contains anonymous identifiers, neither person can be aware of the other’s identity.
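Piecing together the Der Spiegel report and the two modes above, the flow can be sketched roughly as follows. This is a reading of the public description only: PEPP-PT had not published a full protocol specification at the time of writing, so the HMAC-based ID derivation, the rotation interval, the retention window and the trust-service interface in this Python sketch are illustrative assumptions rather than the project’s actual design.

```python
# Rough sketch of the ephemeral-ID flow described above. All concrete choices
# (HMAC derivation, 30-minute rotation, 21-day retention, the trust-service
# methods) are assumptions for illustration only.
import hashlib
import hmac

ROTATION_SECONDS = 30 * 60            # assumed ID rotation interval
RETENTION_SECONDS = 21 * 24 * 3600    # assumed proximity-history window

def ephemeral_id(device_secret: bytes, now: int) -> bytes:
    """Derive a short-lived ID that outsiders cannot link back to the device."""
    time_slot = (now // ROTATION_SECONDS).to_bytes(8, "big")
    return hmac.new(device_secret, time_slot, hashlib.sha256).digest()[:16]

class ProximityHistory:
    """Mode 1: observed IDs stay on the phone and old entries are purged."""

    def __init__(self):
        self.entries = []  # (timestamp, observed_ephemeral_id)

    def record(self, observed_id: bytes, now: int) -> None:
        self.entries.append((now, observed_id))
        # Continuously delete history that is no longer relevant for transmission.
        self.entries = [(t, i) for (t, i) in self.entries if now - t <= RETENTION_SECONDS]

    def upload_if_positive(self, tan_code: str, trust_service) -> None:
        # Mode 2: only a health-authority-issued TAN unlocks the voluntary upload,
        # and only anonymous identifiers -- never names or locations -- leave the device.
        if trust_service.verify_tan(tan_code):
            trust_service.notify_contacts(self.entries)
```

The key property is that the server only ever sees random-looking identifiers; matching those identifiers to a warning on another phone is done by the apps themselves, not via any personal data.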

Providing further detail of what it envisages as “Country-dependent trust service operation”, it writes: “The anonymous IDs contain encrypted mechanisms to identify the country of each app that uses PEPP-PT. Using that information, anonymous IDs are handled in a country-specific manner.”
While on healthcare processing it suggests: “A process for how to inform and manage exposed contacts can be defined on a country by country basis.”
Among the other features of PEPP-PT’s mechanisms the group lists in its manifesto are:
Backend architecture and technology that can be deployed into local IT infrastructure and can handle hundreds of millions of devices and users per country instantly.
Managing the partner network of national initiatives and providing APIs for integration of PEPP-PT features and functionalities into national health processes (test, communication, …) and national system processes (health logistics, economy logistics, …) giving many local initiatives a local backbone architecture that enforces GDPR and ensures scalability.
Certification Service to test and approve local implementations to be using the PEPP-PT mechanisms as advertised and thus inheriting the privacy and security testing and approval PEPP-PT mechanisms offer.
Having a standardized approach that could be plugged into a variety of apps would allow for contacts tracing to work across borders — i.e. even if different apps are popular in different EU countries — an important consideration for the bloc, which has 27 Member States.
However there may be questions about the robustness of the privacy protection designed into the approach — if, for example, pseudonymized data is centralized on a server that doctors can access, there could be a risk of it leaking and being re-identified. And identification of individual device holders would be legally risky.
Europe’s lead data regulator, the EDPS, recently made a point of tweeting to warn an MEP (and former EC digital commissioner) against the legality of applying Singapore-style Bluetooth-powered contacts tracing in the EU — writing: “Please be cautious comparing Singapore examples with European situation. Remember Singapore has a very specific legal regime on identification of device holder.”

Dear Mr. Commissioner, please be cautious comparing Singapoore examples with European situation. Remember Singapore has a very specific legal regime on identification of device holder.
— Wojtek Wiewiorowski (@W_Wiewiorowski) March 27, 2020

A spokesman for the EDPS told us it’s in contact with data protection agencies of the Member States involved in the PEPP-PT project to collect “relevant information”.
“The general principles presented by EDPB on 20 March, and by EDPS on 24 March are still relevant in that context,” the spokesman added — referring to guidance issued by the privacy regulators last month in which they encouraged anonymization and aggregation should Member States want to use mobile location data for monitoring, containing or mitigating the spread of COVID-19. At least in the first instance.
“When it is not possible to only process anonymous data, the ePrivacy Directive enables Member States to introduce legislative measures to safeguard public security (Art. 15),” the EDPB further noted.
“If measures allowing for the processing of non-anonymised location data are introduced, a Member State is obliged to put in place adequate safeguards, such as providing individuals of electronic communication services the right to a judicial remedy.”
We reached out to the HHI with questions about the PEPP-PT project and were referred to Boos — but at the time of writing had been unable to speak to him.
“The PEPP-PT system is being created by a multi-national European team,” the HHI writes in a press release about the effort. “It is an anonymous and privacy-preserving digital contact tracing approach, which is in full compliance with GDPR and can also be used when traveling between countries through an anonymous multi-country exchange mechanism. No personal data, no location, no Mac-Id of any user is stored or transmitted. PEPP-PT is designed to be incorporated in national corona mobile phone apps as a contact tracing functionality and allows for the integration into the processes of national health services. The solution is offered to be shared openly with any country, given the commitment to achieve interoperability so that the anonymous multi-country exchange mechanism remains functional.”
“PEPP-PT’s international team consists of more than 130 members working across more than seven European countries and includes scientists, technologists, and experts from well-known research institutions and companies,” it adds.
“The result of the team’s work will be owned by a non-profit organization so that the technology and standards are available to all. Our priorities are the well being of world citizens today and the development of tools to limit the impact of future pandemics — all while conforming to European norms and standards.”
The PEPP-PT says its technology-focused efforts are being financed through donations. Per its website, it says it’s adopted the WHO standards for such financing — to “avoid any external influence”.
Of course for the effort to be useful it relies on EU citizens voluntarily downloading one of the aligned contacts tracing apps — and carrying their smartphone everywhere they go, with Bluetooth enabled.
Without substantial penetration among the region’s smartphone users it’s questionable how much of an impact this initiative, or any contacts tracing technology, could have. Although if such tech were able to break even some infection chains people might argue it’s not wasted effort.
Notably, there are signs Europeans are willing to contribute to a public healthcare cause by doing their bit digitally — such as a self-reporting COVID-19 tracking app which last week racked up 750,000 downloads in the UK in 24 hours.
But, at the same time, contacts tracing apps are facing scepticism over their ability to contribute to the fight against COVID-19. Not everyone carries a smartphone, nor knows how to download an app, for instance. There’s plenty of people who would fall outside such a digital net.
Meanwhile, although there’s clearly been a big scramble across the region, at both government and grassroots level, to mobilize digital technology for a public health emergency cause, there’s arguably greater imperative to direct effort and resources at scaling up coronavirus testing programs — an area where most European countries continue to lag.
Germany — where some of the key backers of the PEPP-PT are from — being the most notable exception.

What does a pandemic say about the tech we’ve built?

There’s a joke* being reshared on chat apps that takes the form of a multiple choice question — asking who’s the leading force in workplace digital transformation? The red-lined punchline is not the CEO or CTO but: C) COVID-19.
There’s likely more than a grain of truth underpinning the quip. The novel coronavirus is pushing a lot of metaphorical buttons right now. ‘Pause’ buttons for people and industries, as large swathes of the world’s population face quarantine conditions that can resemble house arrest. The majority of offline social and economic activities are suddenly off limits.
Such major pauses in our modern lifestyle may even turn into a full reset, over time. The world as it was, where mobility of people has been all but taken for granted — regardless of the environmental costs of so much commuting and indulged wanderlust — may never return to ‘business as usual’.
If global leadership rises to the occasion then the coronavirus crisis offers an opportunity to rethink how we structure our societies and economies — to make a shift towards lower carbon alternatives. After all, how many physical meetings do you really need when digital connectivity is accessible and reliable? As millions more office workers log onto the day job from home that number suddenly seems vanishingly small.
COVID-19 is clearly strengthening the case for broadband to be a utility — as so much more activity is pushed online. Even social media seems to have a genuine community purpose during a moment of national crisis when many people can only connect remotely, even with their nearest neighbours.
Hence the reports of people stuck at home flocking back to Facebook to sound off in the digital town square. Now the actual high street is off limits the vintage social network is experiencing a late second wind.
Facebook understands this sort of higher societal purpose already, of course. Which is why it’s been so proactive about building features that nudge users to ‘mark yourself safe’ during extraordinary events like natural disasters, major accidents and terrorist attacks. (Or indeed why it encouraged politicians to get into bed with its data platform in the first place — no matter the cost to democracy.)
In less fraught times, Facebook’s ‘purpose’ can be loosely summed to ‘killing time’. But with ever more sinkholes being drilled by the attention economy that’s a function under ferocious and sustained attack.
Over the years the tech giant has responded by engineering ways to rise back to the top of the social heap — including spying on and buying up competition, or directly cloning rival products. It’s been pulling off this trick, by hook or by crook, for over a decade. Albeit, this time Facebook can’t take any credit for the traffic uptick; a pandemic is nature’s dark pattern design.
What’s most interesting about this virally disrupted moment is how much of the digital technology that’s been built out online over the past two decades could very well have been designed for living through just such a dystopia.
Seen through this lens, VR should be having a major moment. A face computer that swaps out the stuff your eyes can actually see with a choose-your-own-digital-adventure of virtual worlds to explore, all from the comfort of your living room? What problem are you fixing, VR? Well, the conceptual limits of human lockdown in the face of a pandemic quarantine right now, actually…
Virtual reality has never been a compelling proposition vs the rich and textured opportunity of real life, except within very narrow and niche bounds. Yet all of a sudden here we all are — with our horizons drastically narrowed and real-life news that’s ceaselessly harrowing. So it might yet end up as the wry punchline to another multiple choice joke: ‘My next vacation will be: A) Staycation, B) The spare room, C) VR escapism.’
It’s videoconferencing that’s actually having the big moment, though. Turns out even a pandemic can’t make VR go viral. Instead, long lapsed friendships are being rekindled over Zoom group chats or Google Hangouts. And Houseparty — a video chat app — has seen surging downloads as barflies seek out alternative night life with their usual watering-holes shuttered.
Bored celebs are TikToking. Impromptu concerts are being livestreamed from living rooms via Instagram and Facebook Live. All sorts of folks are managing social distancing and the stress of being stuck at home alone (or with family) by distant socializing — signing up to remote book clubs and discos; joining virtual dance parties and exercise sessions from bedrooms. Taking a few classes together. The quiet pub night with friends has morphed seamlessly into a bring-your-own-bottle group video chat.
This is not normal — but nor is it surprising. We’re living in the most extraordinary time. And it seems a very human response to mass disruption and physical separation (not to mention the trauma of an ongoing public health emergency that’s killing thousands of people a day) to reach for even a moving pixel of human comfort. Contactless human contact is better than none at all.
Yet the fact all these tools are already out there, ready and waiting for us to log on and start streaming, should send a dehumanizing chill down society’s backbone.
It underlines quite how much consumer technology is being designed to reprogram how we connect with each other, individually and in groups, in order that uninvited third parties can cut a profit.
Back in the pre-COVID-19 era, a key concern being attached to social media was its ability to hook users and encourage passive feed consumption — replacing genuine human contact with voyeuristic screening of friends’ lives. Studies have linked the tech to loneliness and depression. Now that we’re literally unable to go out and meet friends, the loss of human contact is real and stark. So being popular online in a pandemic really isn’t any kind of success metric.
Houseparty, for example, self-describes as a “face to face social network” — yet it’s quite the literal opposite; you’re foregoing face-to-face contact if you’re getting virtually together in app-wrapped form.
While the implication of Facebook’s COVID-19 traffic bump is that the company’s business model thrives on societal disruption and mainstream misery. Which, frankly, we knew already. Data-driven adtech is another way of saying it’s been engineered to spray you with ad-flavored dissatisfaction by spying on what you get up to. The coronavirus just hammers the point home.
The fact we have so many high-tech tools on tap for forging digital connections might feel like amazing serendipity in this crisis — a freemium bonanza for coping with terrible global trauma. But such bounty points to a horrible flip side: It’s the attention economy that’s infectious and insidious. Before ‘normal life’ plunged off a cliff all this sticky tech was labelled ‘everyday use’; not ‘break out in a global emergency’.
It’s never been clearer how these attention-hogging apps and services are designed to disrupt and monetize us; to embed themselves in our friendships and relationships in a way that’s subtly dehumanizing; re-routing emotion and connections; nudging us to swap in-person socializing for virtualized fuzz that’s designed to be data-mined and monetized by the same middlemen who’ve inserted themselves unasked into our private and social lives.
Captured and recompiled in this way, human connection is reduced to a series of dilute and/or meaningless transactions, with the platforms deploying armies of engineers to knob-twiddle and pull strings to maximize ad opportunities, no matter the personal cost.
It’s also no accident that we’re seeing more of the vast and intrusive underpinnings of surveillance capitalism emerge, as the COVID-19 emergency rolls back some of the obfuscation that’s used to shield these business models from mainstream view in more normal times. The trackers are rushing to seize and colonize an opportunistic purpose.
Tech and ad giants are falling over themselves to get involved with offering data or apps for COVID-19 tracking. They’re already in the mass surveillance business so there’s likely never felt like a better moment than the present pandemic for the big data lobby to press the lie that individuals don’t care about privacy, as governments cry out for tools and resources to help save lives.
First the people-tracking platforms dressed up attacks on human agency as ‘relevant ads’. Now the data industrial complex is spinning police-state levels of mass surveillance as pandemic-busting corporate social responsibility. How quick the wheel turns.
But platforms should be careful what they wish for. Populations that find themselves under house arrest with their phones playing snitch might be just as quick to round on high tech gaolers as they’ve been to sign up for a friendly video chat in these strange and unprecedented times.
Oh and Zoom (and others) — more people might actually read your ‘privacy policy‘ now they’ve got so much time to mess about online. And that really is a risk.

Every day there’s a fresh Zoom privacy/security horror story. Why now, all at once?
It’s simple: the problems aren’t new but suddenly everyone is forced to use Zoom. That means more people discovering problems and also more frustration because opting out isn’t an option. https://t.co/O9h8SHerok
— Arvind Narayanan (@random_walker) March 31, 2020

*Source is a private Twitter account called @MBA_ish

Maybe we shouldn’t use Zoom after all

Now that we’re all stuck at home thanks to the coronavirus pandemic, video calls have gone from a novelty to a necessity. Zoom, the popular videoconferencing service, seems to be doing better than most and has quickly become one of the most popular options going, if not the most popular.
But should it be?
Zoom’s recent popularity has also shone a spotlight on the company’s security protections and privacy promises. Just today, The Intercept reported that Zoom video calls are not end-to-end encrypted, despite the company’s claims that they are.
And Motherboard reports that Zoom is leaking the email addresses of “at least a few thousand” people because personal addresses are treated as if they belong to the same company.
These are just the latest examples of the company having to mop up after a barrage of headlines over the past year examining its practices and misleading marketing. To wit:
Apple was forced to step in to secure millions of Macs after a security researcher found Zoom failed to disclose that it installed a secret web server on users’ Macs, which Zoom failed to remove when the client was uninstalled. The researcher, Jonathan Leitschuh, said the web server meant any malicious website could activate the webcam of a Mac with Zoom installed without the user’s permission. The researcher declined a bug bounty payout because Zoom wanted Leitschuh to sign a non-disclosure agreement, which would have prevented him from disclosing details of the bug.
Zoom was quietly sending data to Facebook about a user’s Zoom habits — even when the user does not have a Facebook account. Motherboard reported that the iOS app was notifying Facebook when the user opened the app, along with the device model, the phone carrier being used, and more. Zoom removed the code in response.
Zoom came under fire again for its “attendee tracking” feature, which, when enabled, lets a host check if participants are clicking away from the main Zoom window during a call.
A security researcher found that Zoom uses a “shady” technique to install its Mac app without user interaction. “The same tricks that are being used by macOS malware,” the researcher said.
On the bright side and to some users’ relief, we reported that it is in fact possible to join a Zoom video call without having to download or use the app. But Zoom’s “dark patterns” don’t make it easy to start a video call using just your browser.
Zoom has faced questions over its lack of transparency on law enforcement requests it receives. Access Now, a privacy and rights group, called on Zoom to release the number of requests it receives, just as Amazon, Google, Microsoft and many more tech giants report on a semi-annual basis.
Then there’s Zoombombing, where trolls take advantage of open or unprotected meetings and poor default settings to take over screen-sharing and broadcast porn or other explicit material. The FBI this week warned users to adjust their settings to avoid trolls hijacking video calls.
And Zoom tightened its privacy policy this week after it was criticized for allowing Zoom to collect information about users’ meetings — like videos, transcripts and shared notes — for advertising.
There are many more privacy-focused alternatives to Zoom. Motherboard noted several options, but they all have their pitfalls. FaceTime and WhatsApp are end-to-end encrypted, but FaceTime works only on Apple devices and WhatsApp is limited to just four video callers at a time. A lesser known video calling platform, Jitsi, is not end-to-end encrypted but it’s open source — so you can look at the code to make sure there are no backdoors — and it works across all devices and browsers. You can run Jitsi on a server you control for greater privacy.
In fairness, Zoom is not inherently bad and there are many reasons why Zoom is so popular. It’s easy to use, reliable and for the vast majority it’s incredibly convenient.
But Zoom’s misleading claims give users a false sense of security and privacy. Whether it’s hosting a virtual happy hour or a yoga class, or using Zoom for therapy or government cabinet meetings, everyone deserves privacy.
Now more than ever Zoom has a responsibility to its users. For now, Zoom at your own risk.

No proof of a Houseparty breach, but its privacy policy is still gatecrashing your data

Houseparty has been a smashing success with people staying home during the coronavirus pandemic who still want to connect with friends.
The group video chat app’s games and other bells and whistles raise it above the more mundane Zooms and Hangouts (fun only in their names, otherwise pretty serious tools used by companies, schools and others who just need to work) when it comes to creating engaged leisure time, amid a climate where all of them are seeing a huge surge in growth.
All that looked like it could possibly fall apart for Houseparty and its new owner Epic Games when a series of reports appeared Monday claiming Houseparty was breached, and that malicious hackers were using users’ data to access their accounts on other apps such as Spotify and Netflix.
Houseparty was swift to deny the reports and even went so far as to claim — without evidence — that it was investigating indications that the “breach” was a “paid commercial smear to harm Houseparty,” offering a $1 million reward to whoever could prove its theory.
For now, there is no proof that there was a breach, nor proof that there was a paid smear campaign, and when we reached out to ask Houseparty and Epic about this investigation, a spokesperson said: “We don’t have anything to add here at the moment.”
But that doesn’t mean that Houseparty doesn’t have privacy issues.
As the old saying goes, “if the product is free, you are the product.” In the case of the free app Houseparty, the publishers detail a 12,000+ word privacy policy that covers any and all uses of data that it might collect by way of you logging on to or using its service, laying out the many ways that it might use data for promotional or commercial purposes.
There are some clear lines in the policy about what it won’t use. For example, while phone numbers might get shared for tech support, with partnerships that you opt into, to link up contacts to talk with and to authenticate you, “we will never share your phone number or the phone numbers of third parties in your contacts with anyone else.”
But beyond that, there are provisions in there that could see Houseparty selling anonymized and other data, leading Ray Walsh of research firm ProPrivacy to describe it as a “privacy nightmare.”
“Anybody who decides to use the Houseparty application to stay in contact during quarantine needs to be aware that the app collects a worrying amount of personal information,” he said. “This includes geolocation data, which could, in theory, be used to map the location of each user. A closer look at Houseparty’s privacy policy reveals that the firm promises to anonymize and aggregate data before it is shared with the third-party affiliates and partners it works with. However, time and time again, researchers have proven that previously anonymized data can be re-identified.”
There are ways around this for the proactive. Walsh notes that users can go into the settings to select “private mode” to “lock” rooms they use to stop people from joining unannounced or uninvited; switch locations off; use fake names and birthdates; disconnect all other social apps; and launch the app on iOS with a long press to “sneak into the house” without notifying all your contacts.
But with a consumer app, it’s a longshot to assume that most people, and the younger users who are especially interested in Houseparty, will go through all of these extra steps to secure their information.

Telco metadata grab is for modelling COVID-19 spread, not tracking citizens, says EC

As part of its response to the public health emergency triggered by the COVID-19 pandemic, the European Commission has been leaning on Europe’s telcos to share aggregate location data on their users.
“The Commission kick-started a discussion with mobile phone operators about the provision of aggregated and anonymised mobile phone location data,” it said today.
“The idea is to analyse mobility patterns including the impact of confinement measures on the intensity of contacts, and hence the risks of contamination. This would be an important — and proportionate — input for tools that are modelling the spread of the virus, and would also allow to assess the current measures adopted to contain the pandemic.”
“We want to work with one operator per Member State to have a representative sample,” it added. “Having one operator per Member State also means the aggregated and anonymised data could not be used to track individual citizens, that is also not at all the intention. Simply because not all have the same operator.
“The data will only be kept as long as the crisis is ongoing. We will of course ensure the respect of the ePrivacy Directive and the GDPR.”
Earlier this week Politico reported that commissioner Thierry Breton held a conference with carriers, including Deutsche Telekom and Orange, asking for them to share data to help predict the spread of the novel coronavirus.
Europe has become a secondary hub for the disease, with high rates of infection in countries including Italy and Spain — where there have been thousands of deaths apiece.
The European Union’s executive is understandably keen to bolster national efforts to combat the virus. Although it’s less clear exactly how aggregated mobile location data can help — especially as more EU citizens are confined to their homes under national quarantine orders. (While police patrols and CCTV offer an existing means of confirming whether or not people are generally moving around.)
Nonetheless, EU telcos have already been sharing aggregate data with national governments.
Such as Orange in France which is sharing “aggregated and anonymized” mobile phone geolocation data with Inserm, a local health-focused research institute — to enable them to “better anticipate and better manage the spread of the epidemic”, as a spokeswoman put it.
“The idea is simply to identify where the populations are concentrated and how they move before and after the confinement in order to be able to verify that the emergency services and the health system are as well armed as possible, where necessary,” she added. “For instance, at the time of confinement, more than 1 million people left the Paris region and at the same time the population of Ile de Ré increased by 30%.
“Other uses of this data are possible and we are currently in discussions with the State on all of these points. But, it must be clear, we are extremely vigilant with regards to concerns and respect for privacy. Moreover, we are in contact with the CNIL [France’s data protection watchdog]… to verify that all of these points are addressed.”
Germany’s Deutsche Telekom is also providing what a spokesperson dubbed “anonymized swarm data” to national health authorities to combat the coronavirus.
“European mobile operators are also to make such anonymized mass data available to the EU Commission at its request,” the spokesperson told us. “In fact, we will first provide the EU Commission with a description of data we have sent to German health authorities.”
It’s not entirely clear whether the Commission’s intention is to pool data from such existing local efforts — or whether it’s asking EU carriers for a different, universal data-set to be shared with it during the COVID-19 emergency.
When we asked about this it did not provide an answer. Although we understand discussions are ongoing with operators — and that it’s the Commission’s aim to work with one operator per Member State.
The Commission has said the metadata will be used for modelling the spread of the virus and for looking at mobility patterns to analyze and assess the impact of quarantine measures.
A spokesman emphasized that individual-level tracking of EU citizens is not on the cards.
“The Commission is in discussions with mobile operators’ associations about the provision of aggregated and anonymised mobile phone location data,” the spokesman for Breton told us.
“These data permit to analyse mobility patterns including the impact of confinement measures on the intensity of contacts and hence the risks of contamination. They are therefore an important and proportionate tool to feed modelling tools for the spread of the virus and also assess the current measures adopted to contain the Coronavirus pandemic are effective.”
“These data do not enable tracking of individual users,” he added. “The Commission is in close contact with the European Data Protection Supervisor (EDPS) to ensure the respect of the ePrivacy Directive and the GDPR.”
At this point there’s no set date for the system to be up and running — although we understand the aim is to get data flowing asap. The intention is also to use datasets that go back to the start of the epidemic, with data-sharing ongoing until the pandemic is over — at which point we’re told the data will be deleted.
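The Commission has not detailed which models the JRC will run. But to illustrate why coarse, aggregated flows are what epidemic modellers need, rather than individual traces, here is a toy two-region, mobility-coupled SIR sketch in Python; the populations, the mobility matrix and the epidemic parameters are all invented.

```python
# Toy illustration of aggregated mobility data feeding a spread model.
# The regions, flows and parameters below are invented; the JRC's actual
# models have not been published.
import numpy as np

population = np.array([5_000_000.0, 3_000_000.0])   # residents of regions A and B
mobility = np.array([[0.95, 0.05],                   # row i: share of region i's
                     [0.02, 0.98]])                  # contacts happening in each region
beta, gamma = 0.3, 0.1                               # daily transmission / recovery rates

infected = np.array([100.0, 10.0])
susceptible = population - infected
recovered = np.zeros(2)

for day in range(60):
    # Prevalence among the people currently present in each region (mixed via mobility).
    prevalence = (mobility.T @ infected) / (mobility.T @ population)
    new_infections = beta * susceptible * (mobility @ prevalence)
    new_recoveries = gamma * infected
    susceptible -= new_infections
    infected += new_infections - new_recoveries
    recovered += new_recoveries

print("Currently infected after 60 days:", infected.round())
```

Nothing in such a model needs to know where any individual went; it only needs region-level counts and flows.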
Breton hasn’t had to lean very hard on EU telcos to share data for a crisis cause.
Earlier this week Mats Granryd, director general of operator association the GSMA, tweeted that its members are “committed to working with the European Commission, national authorities and international groups to use data in the fight against COVID-19 crisis”.
Although he added an important qualifier: “while complying with European privacy standards”.

The @GSMA and our members are committed to working with the @EU_Commission, national authorities and international groups to use data in the fight against COVID-19 crisis, while complying with European privacy standards. https://t.co/f1hBYT5Lqx
— Mats Granryd (@MatsGranryd) March 24, 2020

Europe’s data protection framework means there are limits on how people’s personal data can be used — even during a public health emergency. And while the legal frameworks do quite rightly bake in flexibility for a pressing public purpose, like the COVID-19 pandemic, that does not mean individuals’ privacy rights automatically go out the window.
Individual tracking of mobile users for contact tracing — such as Israel’s government is doing — is unimaginable at the pan-EU level, at least unless the regional situation deteriorates drastically.
One privacy lawyer we spoke to last week suggested such a level of tracking and monitoring across Europe would be akin to a “last resort”. Though individual EU countries are choosing to respond differently to the crisis — such as, for example, Poland giving quarantined people a choice between regular police check-ups or uploading geotagged selfies to prove they’re not breaking lockdown.
While former EU Member, the UK, has reportedly chosen to invite US surveillance-as-a-service tech firm Palantir to carry out resource tracking for its National Health Service during the coronavirus crisis.
Under pan-EU law (which the UK remains subject to, until the end of the Brexit transition period), the rule of thumb is that extraordinary data-sharing — such as the Commission asking telcos to share user location data during a pandemic — must be “temporary, necessary and proportionate”, as digital rights group Privacy International recently noted.
This explains why Breton’s request is for “anonymous and aggregated” location data. And why, in background comments to reporters, the claim is that any shared data sets will be deleted at the end of the pandemic.

Not every EU lawmaker appears entirely aware of all the legal limits, however.
Today the bloc’s lead privacy regulator, the European Data Protection Supervisor (EDPS), Wojciech Wiewiórowski, could be seen tweeting cautionary advice at one former commissioner, Andrus Ansip (now an MEP) — after the latter publicly eyed up a Bluetooth-powered contacts tracing app deployed in Singapore.
“Please be cautious comparing Singapore examples with European situation. Remember Singapore has a very specific legal regime on identification of device holder,” wrote Wiewiórowski.
So it remains to be seen whether pressure will mount for more privacy-intrusive surveillance of EU citizens if regional rates of infection continue to grow.

Dear Mr. Commissioner, please be cautious comparing Singapoore examples with European situation. Remember Singapore has a very specific legal regime on identification of device holder.
— Wojtek Wiewiorowski (@W_Wiewiorowski) March 27, 2020

As we reported earlier this week, governments or EU institutions seeking to make use of mobile phone data to help with the response to the coronavirus must comply with the EU’s ePrivacy Directive — which covers the processing of mobile location data.
The ePrivacy Directive allows for Member States to restrict the scope of the rights and obligations related to location metadata privacy, and retain such data for a limited time — when such restriction constitutes “a necessary, appropriate and proportionate measure within a democratic society to safeguard national security (i.e. State security), defence, public security, and the prevention, investigation, detection and prosecution of criminal offences or of unauthorised use of the electronic communication system” — and a pandemic seems a clear example of a public security issue.
Thing is, the ePrivacy Directive is an old framework. The previous college of commissioners had intended to replace it alongside an update to the EU’s broader personal data protection framework — the General Data Protection Regulation (GDPR) — but failed to reach agreement.
This means there’s some potential mismatch. For example the ePrivacy Directive does not include the same level of transparency requirements as the GDPR.
Perhaps understandably, then, since news of the Commission’s call for carrier metadata emerged concerns have been raised about the scope and limits of the data sharing. Earlier this week, for example, MEP Sophie in’t Veld wrote to Breton asking for more information on the data grab — including querying exactly how the data will be anonymized.

Fighting the #coronavirus with technology: sure! But always with protection of our privacy. Read my letter to @ThierryBreton about @EU_Commission’s plans to call on telecoms to hand over data from people’s mobile phones in order to track&trace how the virus is spreading. pic.twitter.com/55kZo9bMhN
— Sophie in ‘t Veld (@SophieintVeld) March 25, 2020

The EDPS confirmed to us that the Commission consulted it on the proposed use of telco metadata.
A spokesman for the regulator pointed to a letter sent by Wiewiórowski to the Commission, following the latter’s request for guidance on monitoring the “spread” of COVID-19.
In the letter the EDPS impresses on the Commission the importance of “effective” data anonymization — which means it’s in effect saying a technique that does genuinely block re-identification of the data must be used. (There are plenty of examples of ‘anonymized’ location data being shown by researchers to be trivially easy to reidentify, given how many individual tells such data typically contains, like home address and workplace address.)
“Effective anonymisation requires more than simply removing obvious identifiers such as phone numbers and IMEI numbers,” warns the EDPS, adding too that aggregated data “can provide an additional safeguard”.
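Researchers have repeatedly demonstrated what that warning means in practice: once phone numbers and IMEIs are stripped out, the remaining location points, such as home and work areas, still act as quasi-identifiers. The toy Python sketch below uses invented records to show why, and why aggregation is the additional safeguard the EDPS points to.

```python
# Toy illustration: removing direct identifiers is not anonymisation, because
# the remaining (home area, work area) pair often acts as a fingerprint.
# All records below are invented.
from collections import Counter

pseudonymised = [
    # (random pseudonym, home cell, work cell) -- direct identifiers already removed
    ("a91f", "cell_1042", "cell_2210"),
    ("7c3e", "cell_1042", "cell_2388"),
    ("d402", "cell_0917", "cell_2210"),
    ("551b", "cell_1042", "cell_2210"),
    ("09ce", "cell_0917", "cell_2388"),
]

pair_counts = Counter((home, work) for _, home, work in pseudonymised)
unique = [rec for rec in pseudonymised if pair_counts[(rec[1], rec[2])] == 1]
print(f"{len(unique)} of {len(pseudonymised)} records are unique on (home, work) alone")

# Anyone who knows where a target lives and works can single those records out.
# Aggregation addresses this by publishing only per-area, per-time-window counts,
# and only when they exceed a minimum group size.
```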
We also asked the Commission for more details on how the data will be anonymized and the level of aggregation that would be used — but it told us it could not provide further information at this stage. 
So far we understand that the anonymization and aggregation process will be undertaken before data is transferred by operators to a Commission science and research advisory body, called the Joint Research Centre (JRC) — which will perform the data analytics and modelling.
The results — in the form of predictions of propagation and so on — will then be shared by the Commission with EU Member States authorities. The datasets feeding the models will be stored on secure JRC servers.
The EDPS is equally clear on the Commission’s commitments vis-a-vis securing the data.
“Information security obligations under Commission Decision 2017/464 still apply [to anonymized data], as do confidentiality obligations under the Staff Regulations for any Commission staff processing the information. Should the Commission rely on third parties to process the information, these third parties have to apply equivalent security measures and be bound by strict confidentiality obligations and prohibitions on further use as well,” writes Wiewiórowski.
“I would also like to stress the importance of applying adequate measures to ensure the secure transmission of data from the telecom providers. It would also be preferable to limit access to the data to authorised experts in spatial epidemiology, data protection and data science.”
Data retention — or rather the need for prompt destruction of data sets after the emergency is over — is another key piece of the guidance.
“I also welcome that the data obtained from mobile operators would be deleted as soon as the current emergency comes to an end,” writes Wiewiórowski. “It should be also clear that these special services are deployed because of this specific crisis and are of temporary character. The EDPS often stresses that such developments usually do not contain the possibility to step back when the emergency is gone. I would like to stress that such solution should be still recognised as extraordinary.”
It’s interesting to note the EDPS is very clear on “full transparency” also being a requirement, both of purpose and “procedure”. So we should expect more details to be released about how the data is being effectively rendered unidentifiable.
“Allow me to recall the importance of full transparency to the public on the purpose and procedure of the measures to be enacted,” writes Wiewiórowski. “I would also encourage you to keep your Data Protection Officer involved throughout the entire process to provide assurance that the data processed had indeed been effectively anonymised.”
The EDPS has also requested to see a copy of the data model. At the time of writing the spokesman told us it’s still waiting to receive that.
“The Commission should clearly define the dataset it wants to obtain and ensure transparency towards the public, to avoid any possible misunderstandings,” Wiewiórowski added in the letter.

A Norwegian school quit Zoom after a naked man ‘guessed’ the meeting link

A school in Norway has stopped using Zoom, the popular video conferencing service, after a naked man apparently “guessed” the link to a video lesson.
According to Norwegian state broadcaster NRK, the man exposed himself in front of several young children over the video call. The theory, according to the report, is that the man guessed the meeting ID and joined the video call.
One expert quoted in the story said some people actively go “looking” for open meeting links.
Last year security researchers told TechCrunch that malicious users could access and listen in to Zoom video meetings by cycling through different permutations of meeting IDs in bulk. The researchers said the flaw, tested on both Zoom and Webex calls, worked because many meetings were not protected by a passcode.
Zoom later changed the settings so that private rooms are password protected by default.
Schools and workplaces across the world are embracing remote teaching and working as the number of people infected by the coronavirus, which causes the disease known as COVID-19, continues to climb. There are some 523,000 confirmed cases of COVID-19 across the world as of Thursday, according to data provided by Johns Hopkins University. Norway currently has over 3,300 confirmed cases.
More than 80 percent of the world’s population is said to be on some kind of lockdown to help limit the spread of coronavirus in an effort to prevent the overrunning of health systems.
The ongoing global lockdown has forced companies to embrace their staff working from home, pushing Zoom to become the go-to video conferencing platform for not only remote workers but also for recreation, like book clubs and happy hours.
Some found out the hard way that not setting up Zoom correctly can lead to “Zoombombing,” where trolls jump into public calls, hijack screens, and display obscene imagery to unsuspecting guests.

DataGuard, which provides GDPR and privacy compliance-as-a-service, raises $20M

Watchdogs have started to raise the issue that new working practices and online activity necessitated by the spread of the coronavirus pandemic are creating new sets of privacy, security and data protection challenges. Today a startup is announcing a growth round of funding to help online businesses navigate those issues better.
DataGuard, a Munich-based startup that provides “GDPR-as-a-service” — essentially a cloud-based platform that helps online businesses comply with regional privacy regulations and best practices by analysing their data processing activities, suggesting improvements to their privacy compliance, and giving them the means to modify their IT infrastructure and internal processes accordingly — has raised $20 million, money that it will use to continue expanding its business across Europe and the Americas and to continue investing in its technology.
The funding is coming from a single investor, London’s One Peak, and is the first outside funding for the company. We asked, but it looks like DataGuard is not disclosing its valuation with this round.
The news comes at a critical time in the world of venture funding. We are seeing a mix of deals: some that were closed, or close to closing, before the worst of the pandemic reared its ugly head (meaning some deals are just going to be put on ice, Zoom meeting or not); some being done specifically to help with business continuity in the wake of all the disruption to normal life (that is, the business is too interesting not to prop up); and some closing precisely because the startup has built something that is going to demonstrate just how useful it is in the months to come.
As with the strongest of funding rounds, DataGuard falls into a couple of those categories.
On one hand, it had demonstrated demand for its services before any of this hit. Today, the startup provides privacy compliance services to both small and medium-sized businesses and larger enterprises, and it has picked up 1,000 customers since launching in 2017.

“Millions of companies are striving to comply with privacy regulation such as GDPR or CCPA,” said Thomas Regier (pictured, left), who co-founded the company with Kivanc Semen (right), in a statement. “We are excited to partner with One Peak to help many more organizations across the globe become and remain privacy compliant. Our Privacy-as-a-Service solution provides customers with access to a proprietary full-stack platform and services from a dedicated team of data privacy experts. This enables customers to gain insights into their data processing activities and to operationalize privacy and compliance across their entire organization.” Regier tells us that the company was bootstrapped to 100 employees, which underscores its capital efficiency, something that is especially attractive at the moment.
On the other, the wholesale shift to more online and remote working, combined with a giant surge in online traffic caused by more people staying at home to reduce the number of new COVID-19 cases, is driving a lot more traffic to websites, apps and other online services, and stress testing them in the process.
All that creates precisely the kind of environment where, for a period, some of the trickier and more exacting aspects of privacy compliance might get overlooked. They are nonetheless important to keep intact, lest malicious hackers take advantage of vulnerable situations, regulators refocus and come back with heavy fines once we return to “normal”, or consumers respond with bad PR and more.
“We have a truly horizontal product that has the potential to become an integral part of the tech stack in enterprises and SMBs alike,” said Semen in a statement. “We will use the funding to deliver on our product roadmap. We will achieve this in two ways: By increasing automation levels through improvements of the machine learning capabilities in our privacy software suite and by speeding up our development of new product categories.”
DataGuard is one of a number of startups that have emerged to help businesses navigate the waters of privacy regulations, which are usually not the core competencies of the companies but have become an essential part of how they can (and should) do business online.
Others include OneTrust, which also helps companies provide and run better data protection policies; and InCountry, which is specifically focused on providing services to help companies understand and comply with data protection policies that vary across different regions. OneTrust last year passed a $1 billion valuation, speaking to the huge opportunity and demand in this space.
One Peak believes that DataGuard’s take on the proposition is one of the more effective and efficient, one reason it’s backed the team. “We are incredibly excited to back DataGuard’s world-class founding team,” says David Klein, Co-Founder and Managing Partner at One Peak, in a statement. “We are convinced that DataGuard’s cutting-edge software suite combined with its comprehensive service offering provides both enterprises and SMBs with an end-to-end solution that fulfils their data privacy needs across the board.”

Instagram launches Co-Watching of posts during video chat

Now you can scroll Instagram together with friends, turning a typically isolating, passive experience into something more social and active. Today Instagram launched Co-Watching, which lets friends on a video chat or group video chat browse through feed posts one user has Liked or Saved, or that Instagram recommends.
Co-Watching could let people ooh, ahh, joke, and talk about Instagram’s content instead of just consuming it solo and maybe posting it to a chat thread so friends can do the same. That could lead to long usage sessions, incentivize users to collect a great depository of Saved posts to share, and spur more video calls that drag people into the app. TechCrunch first reported Instagram was testing Co-Watching a year ago, so we’ll see if it managed to work out the technical and privacy questions of operating the feature.
The launch comes alongside other COVID-19 responses from Instagram that include:
-Showing a shared Instagram Story featuring all the posts from your network that include the “Stay Home” sticker
-Adding Story stickers that remind people to wash their hands or keep their distance from others
-Adding coronavirus educational info to the top of results for related searches
-Removing unofficial COVID-19 accounts from recommendations, as well as virus related content from Explore if it doesn’t come from a credible health organization
-Expanding the donation sticker to more countries so people can search for and ask friends for contributions to relevant non-profits
These updates build on Instagram’s efforts from two weeks ago which included putting COVID-19 prevention tips atop the feed, listing official health organizations atop search results, and demoting the reach of coronavirus-related content rated false by fact checkers.
But Co-Watching will remain a powerful feature long after the quarantines and social distancing end. The ability to co-view content while browsing social networks has already made screensharing app Squad popular. When Squad launched in January 2019, I suggested that “With Facebook and Snap already sniffing around Squad, it’s quite possible they’ll try to copy it.” Facebook tested a Watch Together feature for viewing Facebook Watch videos inside Messenger back in April. And now here we are with Instagram.

The question is whether Squad’s first-mover advantage and option to screenshare from any app will let it hold its own, or if Instagram Co-Watching will just popularize the concept and send users searching for more flexible options like Squad. “Everyone knows that the content flooding our feeds is a filtered version of reality” Squad CEO Esther Crawford told me. “The real and interesting stuff goes down in DMs because people are more authentic when they’re 1:1 or in small group conversations.”
With Co-Watching Instagram users can spill the tea and gossip about posts live and unfiltered over video chat. When people launch a video chat from the Direct inbox or a chat thread, they’ll see a “Posts” button that launches Co-Watching. They’ll be able to pick from their Liked, Saved, or Explore feeds and then reveal it to the video chat, with everyone’s windows lined up beneath the post.
Up to six people can Co-Watch at once on Instagram, consuming feed photos and videos but not IGTV posts. You can share public posts, or private ones that everyone in the chat is allowed to see. If one participant is blocked from viewing a post, it’s ineligible for Co-Watching.
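As a rough illustration of the eligibility rules described above, here is a hypothetical sketch; the types, field names and checks are assumptions for clarity, not Instagram’s actual implementation.

```typescript
// Hypothetical sketch of the Co-Watching eligibility rules described above.
interface Post {
  type: "photo" | "video" | "igtv";
  isPrivate: boolean;
  blockedUserIds: Set<string>;   // users the post owner has blocked
  allowedViewerIds: Set<string>; // followers, for private accounts
}

function canCoWatch(post: Post, participantIds: string[]): boolean {
  if (participantIds.length > 6) return false; // up to six people per call
  if (post.type === "igtv") return false;      // feed photos and videos only
  return participantIds.every(id =>
    !post.blockedUserIds.has(id) &&
    (!post.isPrivate || post.allowedViewerIds.has(id))
  );
}
```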
Co-Watching could finally provide an answer to Instagram’s Time Well Spent problem. Research shows how the real danger in social network overuse is passive content consumption like endless solo feed scrolling. It can inspire envy, poor self-esteem, and leave users deflated, especially if the highlights of everyone else’s lives look more interesting than their own day-to-day reality. But active sharing, commenting, and messaging can have a positive effect on well-being, making people feel like they have a stronger support network.
With Co-Watching, Instagram has found a way to turn the one-player experience into a multi-player game. Especially now with everyone stuck at home and unable to crowd around one person’s phone to gab about what they see, there’s a great need for this new feature. One concern is that it could be used for bullying, with people all making fun of someone’s posts.
But in general, the idea of sifting through cute animal photos, dance tutorials, or epic art could take the focus off of the individuals in a video chat. Not having one’s face as the center of attention could make video chat less performative and exhausting. Instead, Co-Watching could let us do apart what we love to do together: just hang out.

One neat plug-in to join a Zoom call from your browser

Want to join a Zoom meeting in the browser without having to download its app to do so? Check out this browser plug-in — which short-cuts the needless friction the videoconferencing company has baked into the process of availing yourself of its web client.
As we noted last week Zoom does have a zero download option — it just hides it really well, preferring to push people to download its app. It’s pretty annoying to say the least. Some have even called it irresponsible, during the coronavirus pandemic, given how many people are suddenly forced to work from home — where they may be using locked down corporate laptops that don’t allow them to download apps.
Software engineer Arkadiy Tetelman — currently the head of appsec/infrasec for US mobile bank Chime — was one of those who got annoyed by Zoom hiding the join via browser option. So he put together this nice little Zoom Redirector browser extension — which “transparently redirects any meeting links to use Zoom’s browser based web client”, as he puts it on Github.
“When joining a Zoom meeting, the ‘join from your browser’ link is intentionally hidden,” he warns. “This browser extension solves this problem by transparently redirecting any meeting links to use Zoom’s browser based web client.”

It kills me that Zoom intentionally hides the “join from your browser” link, so here’s a small (20 line) browser extension that transparently redirects Zoom links to use their web client: https://t.co/ZeYmmS2R2A https://t.co/50f6ak4i9x
— Arkadiy Tetelman (@arkadiyt) March 22, 2020

So far the extension is available for Chrome and Firefox. At the time of writing submissions are listed as pending for Opera and Edge.
As others have noted, it does remain possible to perform a redirect manually, by adding your meeting ID to a Zoom web client link — zoom.us/wc/join/{your-meeting-id} — though if you’re being asked to join a bunch of Zoom meetings it’s clearly a lot more convenient to have a browser plug-in take the strain for you versus saddling yourself with copy-pasting meeting IDs.
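For illustration, the manual rewrite described above can be expressed as a small function. This is a hedged sketch of the same idea the Zoom Redirector extension automates, not the extension’s actual code, and it assumes the standard zoom.us/j/{meeting-id} join-link format.

```typescript
// Sketch only: rewrite a standard Zoom join link to its web-client form,
// mirroring what the Zoom Redirector extension automates.
function toWebClientUrl(link: string): string | null {
  const url = new URL(link);
  // Standard join links look like https://zoom.us/j/{meeting-id}
  const match = url.pathname.match(/^\/j\/(\d+)/);
  if (!url.hostname.endsWith("zoom.us") || !match) return null;
  const webClient = new URL(url.origin);
  webClient.pathname = `/wc/join/${match[1]}`;
  webClient.search = url.search; // keep any query string that was present
  return webClient.toString();
}

// Example:
// toWebClientUrl("https://zoom.us/j/123456789")
//   -> "https://zoom.us/wc/join/123456789"
```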
While the COVID-19 pandemic has generally fuelled the use of videoconferencing, Zoom appears to be an early beneficiary — with the app enjoying a viral boom (in the digital sense of the term) in recent weeks that’s been great for earnings growth (if not immediately for its share price when it reported its Q4 bounty). And unsurprisingly it’s forecasting a bumper year.
But it’s not all positive vibes for Zoom right now. Another area where the company has faced critical attention in recent days relates to user privacy.
Over the weekend another Twitter user, going by the handle @ouren, posted a critical thread that garnered thousands of likes and retweets — detailing how Zoom can track activity on the user’s computer, including harvesting data on what other programs are running and which window the user has in the foreground.

Everyone working remotely:
ZOOM monitors the activity on your computer and collects data on the programs running and captures which window you have focus on.
If you manage the calls, you can monitor what programs users on the call are running as well. It’s fucked up.
— Wolfgang ʬ (@Ouren) March 21, 2020

The thread included a link to an EFF article about the privacy risks of remote working tools, including Zoom.
“The host of a Zoom call has the capacity to monitor the activities of attendees while screen-sharing,” the digital rights group warned. “This functionality is available in Zoom version 4.0 and higher. If attendees of a meeting do not have the Zoom video window in focus during a call where the host is screen-sharing, after 30 seconds the host can see indicators next to each participant’s name indicating that the Zoom window is not active.”
Given the sudden spike in attention around privacy, Zoom chipped into the discussion with an official response, writing that the “attention tracking feature is off by default”.
“Once enabled, hosts can tell if participants have the App open and active when the screen-sharing feature is in use,” it added. “It does not track any aspects of your audio/video or other applications on your window.”

Hi, attention tracking feature is off by default – once enabled, hosts can tell if participants have the App open and active when the screen-sharing feature is in use. It does not track any aspects of your audio/video or other applications on your window. https://t.co/sWWfrsXe42
— Zoom (@zoom_us) March 22, 2020

However the company did not explain why it offers such a privacy hostile feature as “attention tracking” in the first place.

What are the rules wrapping privacy during COVID-19?

In a public health emergency that relies on people keeping an anti-social distance from each other, to avoid spreading a highly contagious virus for which humans have no pre-existing immunity, governments around the world have been quick to look to technology companies for help.
Background tracking is, after all, what many Internet giants’ ad-targeting business models rely on. In the US, meanwhile, telcos were recently exposed sharing highly granular location data for commercial ends.
Some of these privacy-hostile practices face ongoing challenges under existing data protection laws in Europe — and/or have at least attracted regulator attention in the US, which lacks a comprehensive digital privacy framework — but a pandemic is clearly an exceptional circumstance. So we’re seeing governments turn to the tech sector for help.
US president Donald Trump was reported last week to have summoned a number of tech companies to the White House to discuss how mobile location data could be used for tracking citizens.
In another development this month he announced Google was working on a nationwide coronavirus screening site — in fact it’s Verily, a different division of Alphabet. But concerns were quickly raised that the site requires users to sign in with a Google account, suggesting users’ health-related queries could be linked to other online activity the tech giant monetizes via ads. (Verily has said the data is stored separately and not linked to other Google products, although the privacy policy does allow data to be shared with third parties including Salesforce for customer service purposes.)
In the UK the government has also been reported to be in discussions with telcos about mapping mobile users’ movements during the crisis — though not at an individual level. It was reported to have held an early meeting with tech companies to ask what resources they could contribute to the fight against COVID-19.
Elsewhere in Europe, Italy — which remains the European nation worst hit by the virus — has reportedly sought anonymized data from Facebook and local telcos that aggregates users’ movement to help with contact tracing or other forms of monitoring.
While there are clear public health imperatives to ensure populations are following instructions to reduce social contact, the prospect of Western democracies making like China and actively monitoring citizens’ movements raises uneasy questions about the long term impact of such measures on civil liberties.
Plus, if governments seek to expand state surveillance powers by directly leaning on the private sector to keep tabs on citizens it risks cementing a commercial exploitation of privacy — at a time when there’s been substantial push-back over the background profiling of web users for behavioral ads.
“Unprecedented levels of surveillance, data exploitation, and misinformation are being tested across the world,” warns civil rights campaign group Privacy International, which is tracking what it dubs the “extraordinary measures” being taken during the pandemic.
A couple of examples include telcos in Israel sharing location data with state agencies for COVID-19 contact tracing and the UK government tabling emergency legislation that relaxes the rules around intercept warrants.
“Many of those measures are based on extraordinary powers, only to be used temporarily in emergencies. Others use exemptions in data protection laws to share data. Some may be effective and based on advice from epidemiologists, others will not be. But all of them must be temporary, necessary, and proportionate,” it adds. “It is essential to keep track of them. When the pandemic is over, such extraordinary measures must be put to an end and held to account.”

As the pandemic gets worse, we’re going to see increased pressure by all governments to access #private consumer data to identify:
1) who is infected 2) who they’ve been in contact with (“contact tracing”)
This is going to be an incredibly slippery slope if we’re not careful.
— ashkan soltani (@ashk4n) March 18, 2020

At the same time employers may feel under pressure to be monitoring their own staff to try to reduce COVID-19 risks right now — which raises questions about how they can contribute to a vital public health cause without overstepping any legal bounds.
We talked to two lawyers from Linklaters to get their perspective on the rules that wrap extraordinary steps such as tracking citizens’ movements and their health data, and how European and US data regulators are responding so far to the coronavirus crisis.
Bear in mind it’s a fast-moving situation — with some governments (including the UK and Israel) legislating to extend state surveillance powers during the pandemic.
The interviews below have been lightly edited for length and clarity.
Europe and the UK
Dr Daniel Pauly, technology, media & telecommunications partner at Linklaters in Frankfurt 
Data protection law has not been suspended, at least when it comes to Europe. So data protection law still applies — without any restrictions. This is the baseline on which we need to work and from which we need to start. Then we need to differentiate between what the government can do and what employers can do in particular.
It’s very important to understand that when we look at governments they do have the means to allow themselves a certain use of data. Because there are opening clauses, flexibility clauses, in particular in the GDPR, when it comes to public health concerns, cross-border threats.
By using the legislation process they may introduce further powers. To give you one example of what the German government did to respond: they created a special law — the coronavirus notification regulation. We already have in place a law governing the use of personal data in respect of certain serious infections, and what they did is simply add the coronavirus infection to that list, which now means that hospitals and doctors must notify the competent authority of any COVID-19 infection.
This is pretty far reaching. They need to transmit names, contact details, sex, date of birth and many other details to allow the competent authority to gather that data and to analyze that data.
Another important topic in that field is the use of telecommunications data — in particular mobile phone data. Efficient use of that data might be one of the reasons why they obviously were quite successful in China with reducing the threat from the virus.
In Europe governments may not simply use mobile phone data and movement data — they have to anonymize it first. This is what has happened in Germany and other European jurisdictions, including the UK: anonymized mobile phone data has been handed over to organizations who analyze that data to get a better view of how people behave, how they move and what needs to be done in order to restrict further movement, or to restrict public life. This is the position for governments, at least in Europe and the UK.
Transparency obligations [related to government use of personal data] stem from the GDPR [General Data Protection Regulation]. When they would like to make use of mobile phone data, it is the ePrivacy directive that applies. This is not as transparent as the GDPR, and the legislators did not succeed in replacing that piece of legislation with a new regulation. So the ePrivacy directive again gives the various Member States, including the UK, the possibility to introduce further and more restrictive laws [for public health reasons].
[If Internet companies such as Google were to be asked by European governments to pass data on users for a coronavirus tracking purpose] it has to be taken into consideration that they have not included this in their records of processing activities — in their data protection notifications and information.
So, at least from a pure legal perspective, it would be a huge step — and I’m wondering whether it would be feasible without the governments introducing special laws for that.
If [EU] governments were to make use of private companies to provide them with data which has not been collected for such purposes, that would be a huge step from the perspective of the GDPR at least. I’m not aware of anything like this. I’ve certainly read there are discussions ongoing with Netflix about reducing net traffic, but I haven’t heard anything about making use of the data Google has.
I wouldn’t expect it in Europe — and particularly not in Germany. Tracking people, and monitoring what they are doing, is almost a last resort — so I wouldn’t expect that in the next couple of weeks. And I hope it’s over by then.
[So far], from my perspective, the European regulators have responded [to the coronavirus crisis] in a pretty reasonable manner by saying that, in particular, any response to the virus must be proportionate.
We still have that law in place and we need to consider that the data we’re talking about is health data — it’s the most protected data of all. Having said that there are some ways at least the GDPR is allowing the government and allowing employers to make use of that data. In particular when it comes to processing for substantial public interest. Or if it’s required for the purposes of preventive medicine or necessary for reasons of public interest.
So the legislator was wise enough to include clauses allowing the use of such data under certain circumstances, and a number of supervisory authorities have already published guidelines on how to make use of these statutory permissions. What they basically said is that it always needs to be figured out on a case by case basis whether the data is really required in the specific case.
To give you an example, it was made clear that an employer may not ask an employee where he has been during his vacation — but he may ask: have you been in any of the risk areas? And then a yes or no is a sufficient answer. They do not need any further data. So it’s always [about approaching this in] a smart way — by being smart you get the information you need; it’s not the floodgates suddenly opening.
You really need to look at the specific case and see how to get the data you need. Usually it’s a yes or no which is sufficient in the particular case.
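As a small illustration of that data-minimisation point, a compliant system would record only the yes/no answer the employer actually needs. The sketch below is hypothetical, with assumed type and field names.

```typescript
// Hypothetical sketch: record only the yes/no fact the employer needs,
// with no field for destinations or travel history.
interface TravelDeclaration {
  employeeId: string;
  visitedRiskArea: boolean; // the only health-relevant fact retained
  declaredAt: Date;
}

function recordDeclaration(employeeId: string, visitedRiskArea: boolean): TravelDeclaration {
  return { employeeId, visitedRiskArea, declaredAt: new Date() };
}
```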
The US
Caitlin Potratz Metcalf, senior U.S. associate at Linklaters and a Certified Information Privacy Professional (CIPP/US)
Even though you don’t have a structured privacy framework in the US — or one specific regulator that covers privacy — you’ve got some of the same issues. The FCC [Federal Communications Commission] will go after companies that take any action that is inconsistent with their privacy policies. And that would be misleading to consumers. Their initial focus is on consumer protection, not privacy, but in the last couple of years they’ve been wearing two hats. So there is a focus on privacy even though we don’t have a national privacy law [equivalent in scope to the EU’s GDPR] but it’s coming from a consumer protection point of view.
So, for example, the FCC back in February actually announced potential sanctions against four major telecoms companies in the US with respect to sharing data related to cell phone tracking — it wasn’t geolocation in an app but actually pinging off cell towers — and sharing that data with third parties without proper safeguards, because that wasn’t disclosed in their privacy policies.
They haven’t actually issued those fines but it was announced that they may pursue a $208M fine total against these four companies: AT&T, Verizon*, T-Mobile, Sprint… So they do take very seriously how that data is safeguarded and how it’s being shared. And the fact that we have a state of emergency doesn’t change that emphasis on consumer protection.
You’ll see the same is true for the Department of Health and Human Services (HHS) — that’s responsible for any medical or health data.
That is really limited to entities that are covered entities under HIPAA [Health Insurance Portability and Accountability Act] or their business associates. So it doesn’t apply to everybody across the board. But if you are a hospital, a health plan provider — whether you’re an employer with a group health plan or an insurer — or a business associate supporting one of those covered entities, then you have to comply with HIPAA to the extent you’re handling protected health information. And that’s a bit narrower than the definition of personal data that you’d have under GDPR.
So you’re really looking at identifying information for that patient: their medical status, their birth date, address, things like that that might be very identifiable and related to the person. But you could share things that are more general. For example, you have a middle-aged man from this county who’s tested positive for COVID and is at XYZ facility being treated and his condition is stable, or his condition is critical. So you could share that kind of level of detail — but not further.
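A rough sketch of that kind of generalisation might look like the following; the record shape and age bands are assumptions for illustration, and this is not a substitute for formal HIPAA de-identification.

```typescript
// Hypothetical sketch: drop direct identifiers and coarsen the rest
// before sharing. Not a substitute for formal HIPAA de-identification.
interface PatientRecord {
  name: string;
  birthDate: Date;
  address: string;
  county: string;
  sex: string;
  condition: "stable" | "critical";
}

interface PublicSummary {
  ageBand: string; // e.g. "40-59"
  county: string;
  sex: string;
  condition: string;
}

function toPublicSummary(p: PatientRecord, now = new Date()): PublicSummary {
  const age = now.getFullYear() - p.birthDate.getFullYear(); // approximate age is enough here
  const ageBand = age < 18 ? "0-17" : age < 40 ? "18-39" : age < 60 ? "40-59" : "60+";
  // Name, exact birth date and street address are dropped entirely.
  return { ageBand, county: p.county, sex: p.sex, condition: p.condition };
}
```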
And so HHS in February had issued a bulletin stressing that you can’t set aside the privacy and security safeguards under HIPAA during an emergency. They stressed to all covered entities that you have to still comply with the law — sanctions are still in place. And to the extent that you do have to disclose some of the protected health information, it has to be to the minimum extent necessary. And that can be disclosed either to other hospitals, or to a regulator, in order to help stem the spread of COVID and also in order to provide treatment to a patient. So they listed a couple of different exceptions for how you can share that information, but really stressed the minimum necessary.
The same would be true for an employer — like of a group health plan — if they’re trying to share information about employees, but it’s going to be very narrow in what they can actually share. And they can’t just cite as an exception that it’s for the public health interest. You don’t necessarily have to disclose what country they’ve been to; it’s just: have they been to a region that’s on a restricted list for travel? So it’s about finding creative ways to relay the necessary information you need, and if there’s anything less intrusive you’re required to go that route.
That said, just last week HHS also issued another bulletin saying that they would waive HIPAA sanctions and penalties during the nationwide public health emergency. But it was only directed at hospitals — so it doesn’t apply to all covered entities.
They also issued another bulletin saying that they would relax restrictions on sharing data using electronic means. There are very heightened restrictions on how you can share data electronically when it relates to medical and health information. And so this was allowing doctors to communicate with patients by FaceTime or video chat and other methods that may not be encrypted or secure. So they’re giving a waiver, or just softening some of the restrictions, related to transferring health data electronically.
So you can see it’s an evolving situation but they’ve still taken a very reserved and kind of conservative approach — really emphasizing that you do need to comply with your obligation to protect health data. So that’s where you see the strongest implementations. And then the FCC coming at it from a consumer protection point of view.
Going back to the point you made earlier about Google sharing data [with governments] — you could get there, it just depends on how their privacy policies are structured.
In terms of tracking individuals, we don’t have a national statute like GDPR that would prevent that, but it would also be very difficult to anonymize that data because it’s so tied to individuals — it’s like your DNA; you can map a person leaving home, going to work or school, going to a doctor’s office, coming back home. It really does contain very sensitive information, and because of all the specific data points it’s very difficult to anonymize it and provide it in a format that wouldn’t violate someone’s privacy without their consent. And so while you may not need full consent in the US, you would still need to have notice and transparency about the policies.
Then it would be slightly different if you’re a California resident — under the new California law [CCPA] you need to provide disclosures and give individuals the opportunity to opt out if you were to share their information. So in that case, where the telecoms companies are potentially going to be sued by the FCC for sharing data with third parties, that in particular would also violate the new California law if consumers weren’t given the opportunity to opt out of having their information sold.
So there’s a lot of different puzzle pieces that fit together since we have a patchwork quilt of data protection — depending on the different state and federal laws.
The government, I guess, could issue other mandates or regulations [to requisition telco tracking data for a COVID-related public health purpose] — I don’t know that they will. I would envisage more of a call to arms requesting support and assistance from the private sector. Not a mandate that you must share your data, given the way our government is structured. Unless things get incredibly dire I don’t really see a mandate to companies that they have to share certain data in order to be able to track patients.
[If Google makes use of health-related searches/queries to enrich user profiles it uses for commercial purposes] that in and of itself wouldn’t be protected health information.
Google is not a [HIPAA] covered entity. Depending on what type of support it’s providing for covered entities it may, in limited circumstances, be considered a business associate that could be subject to HIPAA, but in the context of just collecting data on consumers it wouldn’t be governed by that.
So as long as it’s not doing anything outside the scope of what’s already in its privacy policies then it’s fine — the fact that it’s collecting data based on searches that you run on Google should be in the privacy policy anyway. It doesn’t need to be specific to the type of search that you’re running. So the fact that you’re looking up how to get COVID testing or treatment, or what the symptoms of COVID are, things like that, that can all be tied to the data [it holds on users] and enriched. And that can also be shared and sold to third parties — unless you’re a California resident. They have a separate privacy policy for California residents… They just have to be consistent with their privacy policy.
The interesting thing to me is maybe the approach that Asia has taken — where they have a lot more influence over the commercial sector and data tracking — and so you actually have the regulator stepping in and doing more tracking, not just private companies. But private companies are able to provide tracking information.
You see it actually with Uber. They’ve issued additional privacy notices to consumers — saying that to the extent we become aware of a passenger that has had COVID or a driver, we will notify people who have come into contact with that Uber over a given time period. They’re trying to take the initiative to do their own tracking to protect workers and consumers.
And they can do that — they just have to be careful about how much detail they share about personal information. Not naming names of who was impacted [but rather saying something like] ‘in the last 24 hours you may have ridden in an Uber that was impacted or known to have an infected individual in the Uber’.
[When it comes to telehealth platforms and privacy protections] it depends if they’re considered a business associate of a covered entity. They may not be a covered entity themselves, but if they are a business associate supporting a covered entity — for example a hospital, a clinic, or an insurer sharing that data and relying on a telehealth platform — then in that context they would be governed by some of the same privacy and security regulations under HIPAA.
Some of them are slightly different for a business associate compared to a covered entity but generally you step in the shoes of the covered entity if you’re handling the covered entity’s data and have the same restrictions apply to you.
Aggregate data wouldn’t be considered protected health information — so they could [for example] share a symptom heat map that doesn’t identify specific individuals or patients and their health data.
[But] standalone telehealth apps that are collecting data directly from the consumer are not covered by HIPAA.
That’s actually a big loophole in terms of consumer protection, privacy protections related to health data. You have the same issue for all the health fitness apps — whether it’s your fitbit or other health apps or if you’re pregnant and you have an app that tracks your maternity or your period or things like that. Any of that data that’s collected is not protected.
The only protections you have are whatever disclosures are in the privacy policies, and the requirement that companies be transparent and act within those policies. If they don’t, they can face an enforcement action by the FCC, but that is not regulated by the Department of Health and Human Services under HIPAA.
So it’s a very different approach than under GDPR which is much more comprehensive.
That’s not to say we won’t see a tightening of restrictions in the future, but individuals are freely giving that information — and in theory should read the privacy policy that’s provided when they log into the app. But most users probably don’t read it, and then that data can be shared with other third parties.
They could share it with a regulator, or sell it to other third parties, so long as they have the proper disclosure that they may sell your personal information or share it with third parties. It depends on how their privacy policy is crafted — so long as it covers those specific actions. And for California residents it’s a more specific test — there are more disclosures that are required.
For example the type of data that you’re collecting, the purpose that you’re collecting it for, how you intend to process that data, who you intend to share it with and why. So it’s tightened for California residents but for the rest of the US you just have to be consistent with your privacy policy and you aren’t required to have the same level of disclosures.
More sophisticated, larger companies, though, definitely are already complying with GDPR — or endeavouring to comply with the California law — and so they have more sophisticated, detailed privacy notices than are maybe required by law in the US. But they’re kind of operating on a global platform and trying to have a global privacy policy.
*Disclosure: Verizon is TechCrunch’s parent company

Israel passes emergency law to use mobile data for COVID-19 contact tracing

Israel has passed an emergency law to use mobile phone data for tracking people infected with COVID-19 including to identify and quarantine others they have come into contact with and may have infected.
The BBC reports that the emergency law was passed during an overnight sitting of the cabinet, bypassing parliamentary approval.
Israel also said it will step up testing substantially as part of its response to the pandemic crisis.
In a statement posted to Facebook, prime minister Benjamin Netanyahu wrote: “We will dramatically increase the ability to locate and quarantine those who have been infected. Today, we started using digital technology to locate people who have been in contact with those stricken by the Corona. We will inform these people that they must go into quarantine for 14 days. These are expected to be large – even very large – numbers and we will announce this in the coming days. Going into quarantine will not be a recommendation but a requirement and we will enforce it without compromise. This is a critical step in slowing the spread of the epidemic.”
“I have instructed the Health Ministry to significantly increase the number of tests to 3,000 a day at least,” he added. “It is very likely that we will reach a higher figure, even up to 5,000 a day. To the best of my knowledge, relative to population, this is the highest number of tests in the world, even higher than South Korea. In South Korea, there are around 15,000 tests a day for a population five or six times larger than ours.”
On Monday an Israeli parliamentary subcommittee on intelligence and secret services discussed a government request to authorize Israel’s Shin Bet security service to assist in a national campaign to stop the spread of the novel coronavirus — but declined to vote on the request, arguing more time is needed to assess it.
Civil liberties campaigners have warned the move to monitor citizens’ movements sets a dangerous precedent.

Netanyahu’s announcement that he intends to bypass parliamentary oversight and implement emergency regulations that authorize the Shin Bet to locate Corona patients actualizes this danger.
— ACRI (@acri_online) March 16, 2020

According to WHO data, Israel had 200 confirmed cases of the coronavirus as of yesterday morning. Today the country’s health ministry reported cases had risen to 427.
Details of exactly how the tracking will work have not been released — but, per the BBC, the location data of people’s mobile devices will be collected from telcos by Israel’s domestic security agency and shared with health officials.
It also reports the health ministry will be involved in monitoring the location of infected people to ensure they are complying with quarantine rules — saying it can also send text messages to people who have come into contact with someone with COVID-19 to instruct them to self isolate.
In recent days Netanyahu has expressed frustration that Israel citizens have not been paying enough mind to calls to combat the spread of the virus via voluntary social distancing.
“This is not child’s play. This is not a vacation. This is a matter of life and death,” he wrote on Facebook. “There are many among you who still do not understand the magnitude of the danger. I see the crowds on the beaches, people having fun. They think this is a vacation.”
“According to the instructions that we issued yesterday, I ask you not leave your homes and stay inside as much as possible. At the moment, I say this as a recommendation. It is still not a directive but that can change,” he added.
Since the Israeli government’s intent behind the emergency mobile tracking powers is to combat the spread of COVID-19 by enabling state agencies to identify people whose movements need to be restricted to avoid them passing the virus to others, it seems likely law enforcement agencies will also be involved in enacting the measures.
That will mean citizens’ smartphones being not just a tool of mass surveillance but also a conduit for targeted containment — raising questions about the impact such intrusive measures might have on people’s willingness to carry mobile devices everywhere they go, even during a pandemic.
Yesterday the Wall Street Journal reported that the US government is considering similar location-tracking technology measures in a bid to check the spread of COVID-19 — with discussions ongoing between tech giants, startups and White House officials on measures that could be taken to monitor the disease.
Last week the UK government also held a meeting with tech companies to ask for their help in combating the coronavirus. Per Wired some tech firms offered to share data with the state to help with contact tracing — although, at the time, the government was not pursuing a strategy of mass restrictions on public movement. It has since shifted position.

Extra Crunch members get 60% off data privacy platform Osano

Extra Crunch is excited to announce a new community perk from Startup Battlefield alum Osano. Starting today, annual and two-year members of Extra Crunch can receive 60% off their data privacy management software for six months. You must be new to Osano to claim this offer. This coupon is only applicable to Osano’s self-service plans. Osano is an easy-to-use data privacy platform that instantly helps your website become compliant with laws such as GDPR and CCPA. Osano works to keep you out of trouble and monitors all of the vendors you share data with — so you don’t have to. Connect the data dots to see what’s hiding with Osano here.  
You can sign up for Extra Crunch and claim this deal here.
Extra Crunch is a membership program from TechCrunch that features weekly investor surveys, how-tos and interviews with experts on company building, analysis of IPOs and late-stage companies, an experience on TechCrunch.com that’s free of banner ads, discounts on TechCrunch events and several Partner Perks like the one mentioned in this article. We’re democratizing information for startups, and we’d love to have you join our community.
Sign up for Extra Crunch here.
New annual and two-year Extra Crunch members will receive details on how to claim the perk in the welcome email. The welcome email is sent after signing up for Extra Crunch.
If you are already an annual or two-year Extra Crunch member, you will receive an email with the offer at some point over the next 24 hours. If you are currently a monthly Extra Crunch subscriber and want to upgrade to annual in order to claim this deal, head over to the “account” section on TechCrunch.com and click the “upgrade” button.  
This is one of more than a dozen Partner Perks we’ve launched for annual Extra Crunch members. Other community perks include a 20% discount on TechCrunch events, 90% off an annual DocSend plan and an opportunity to claim $1,000 in AWS credits. For a full list of perks from partners, head here.
If there are other community perks you want to see us add, please let us know by emailing [email protected]
Sign up for an annual Extra Crunch membership today to claim this community perk. You can purchase an annual Extra Crunch membership here.
Disclosure: This offer is provided as a partnership between TechCrunch and Osano, but it is not an endorsement from the TechCrunch editorial team. TechCrunch’s business operations remain separate to ensure editorial integrity. 

To make locks touchless, Proxy bluetooth ID raises $42M

We need to go hands-off in the age of coronavirus. That means touching fewer doors, elevators, and sign-in iPads. But once a building is using phone-based identity for security, there are opportunities to speed up access to WiFi networks and printers, or personalize conference rooms and video call set-ups. Keyless office entry startup Proxy wants to deliver all of this while keeping your phone in your pocket.
“The door is just a starting point” Proxy co-founder and CEO Denis Mars tells me. “We’re . . . empowering a movement to take back control of our privacy, our sense of self, our humanity, our individuality.”

With the contagion concerns and security risks of people rubbing dirty, cloneable, stealable key cards against their office doors, investors see big potential in Proxy. Today it’s announcing a $42 million Series B led by Scale Venture Partners with participation from former funders Kleiner Perkins and Y Combinator, plus new additions Silicon Valley Bank and West Ventures.
The raise brings Proxy to $58.8 million in funding so it can staff up at offices across the world and speed up deployments of its door sensor hardware and access control software. “We’re spread thin” says Mars. “Part of this funding is to try to grow up as quickly as possible and not grow for growth sake. We’re making sure we’re secure, meeting all the privacy requirements.”
How does Proxy work? Employers get their staff to install an app that knows their identity within the company, including when and where they’re allowed entry. Buildings install Proxy’s signal readers, which can either integrate with existing access control software or the startup’s own management dashboard.
Employees can then open doors, elevators, turnstiles, and garages with a Bluetooth low-energy signal without having to even take their phone out. Bosses can also opt to require a facial scan or fingerprint or a wave of the phone near the sensor. Existing keycards and fobs still work with Proxy’s Pro readers. Proxy costs about $300 to $350 per reader, plus installation and a $30 per month per reader subscription to its management software.

Now the company is expanding access to devices once you’re already in the building thanks to its SDK and APIs. Wifi router-makers are starting to pre-provision their hardware to automatically connect the phones of employees or temporarily allow registered guests with Proxy installed — no need for passwords written on whiteboards. Its new Nano sensors can also be hooked up to printers and vending machines to verify access or charge expense accounts. And food delivery companies can add the Proxy SDK so couriers can be granted the momentary ability to open doors when they arrive with lunch.
Rather than just indiscriminately beaming your identity out into the world, Proxy uses tokenized credentials so only its sensors know who you are. Users have to approve of new networks’ ability to read their tokens, Proxy has SOC-2 security audit certification, and complies with GDPR. “We feel very strongly about where the biometrics are stored . . . they should stay on your phone” says Mars.
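To illustrate what a “tokenized credential” can mean in general, here is a hypothetical sketch of a short-lived, HMAC-derived token that a reader could verify without ever seeing a stable identifier. It is not Proxy’s actual protocol, just a sketch of the broad idea.

```typescript
import { createHmac, timingSafeEqual } from "crypto";

// Hypothetical sketch of a short-lived "tokenized credential": the phone
// derives a token from a shared secret and the current time window, so a
// reader can verify it without ever seeing a stable identifier.
function deriveToken(secret: Buffer, windowSeconds = 30, now = Date.now()): Buffer {
  const window = Math.floor(now / 1000 / windowSeconds);
  return createHmac("sha256", secret).update(String(window)).digest();
}

function readerAccepts(secret: Buffer, presented: Buffer): boolean {
  // Accept the current or previous window to tolerate a little clock skew.
  for (const offset of [0, -1]) {
    const expected = deriveToken(secret, 30, Date.now() + offset * 30_000);
    if (expected.length === presented.length && timingSafeEqual(expected, presented)) {
      return true;
    }
  }
  return false;
}
```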
Yet despite integrating with the technology for two-factor entry unlocks, Mars says “We’re not big fans of facial recognition. You don’t want every random company having your face in their database. The face becomes the password you were supposed to change every 30 days.”

Keeping your data and identity safe as we see an explosion of Internet of Things devices was actually the impetus for starting Proxy. Mars had sold his teleconferencing startup Bitplay to Jive Software, where he met his eventual co-founder Simon Ratner, who’d joined after his video annotation startup Omnisio was acquired by YouTube. Mars was frustrated that every IoT lightbulb and appliance wanted him to download an app, set up a profile, and give it his data.
The duo founded Proxy in 2013 as a universal identity signal. Today it has over 60 customers. While other apps want you to constantly open them, Proxy’s purpose is to work silently in the background and make people more productive. “We believe the most important technologies in the world don’t seek your attention. They work for you, they empower you, and they get out of the way so you can focus your attention on what matters most — living your life.”
Now Proxy could actually help save lives. “The nature of our product is contactless interactions in commercial buildings and workplaces so there’s a bit of an unintended benefit that helps prevent the spread of the virus” Mars explains. “We have seen an uptick in customers starting to set doors and other experiences in longer-range hands-free mode so that users can walk up to an automated door and not have to touch the handles or badge/reader every time.”

The big challenge facing Proxy is maintaining security and dependability since it’s a mission-critical business. A bug or outage could potentially lock employees out of their workplace (when they eventually return from quarantine). It will have to keep hackers out of employee files. Proxy needs to stay ahead of access control incumbents like ADT and Honeywell as well as smaller direct competitors like $10 million-funded Nexkey and $28 million-funded Openpath.
Luckily, Proxy has found a powerful growth flywheel. First an office in a big building gets set up, then they convince the real estate manager to equip the lobby’s turnstiles and elevators with Proxy. Other tenants in the building start to use it, so they buy Proxy for their office. Then they get their offices in other cities on board…starting the flywheel again. That’s why Proxy is doubling down on sales to commercial real estate owners.
The question is when Proxy will start knocking on consumers’ doors. While leveling up into the enterprise access control software business might be tough for home smartlock companies like August, Proxy could go down market if it built more physical lock hardware. Perhaps we’ll start to get smart homes that know who’s home, and stop having to carry pointy metal sticks in our pockets.

India used facial recognition tech to identify 1,100 individuals at a recent riot

Law enforcement agencies in India used facial recognition to identify more than 1,100 individuals who took part in communal violence in the national capital last month, a top minister said in the lower house of the parliament on Wednesday.
In what is the first admission of its kind in the country, Amit Shah, India’s home minister, said the law enforcement agencies deployed a facial recognition system and fed it with images from government-issued identity documents, “among other databases,” including driving licenses and the 12-digit Aadhaar ID that has been issued to more than a billion Indians, to identify alleged culprits in the communal violence in northeast Delhi on February 25 and 26.
“This is a software. It does not see faith. It does not see clothes. It only sees the face and through the face the person is caught,” said Shah, responding to an individual who had urged New Delhi to not drag innocent people into the facial surveillance.
The admission further demonstrates how the Indian government has rushed to deploy facial recognition technology in the absence of regulation overseeing its usage. Critics have urged the government to hold consultations and formulate a law before deploying the technology.
“The use of Aadhaar for this purpose without any judicial authorisation violates the judgement of the Supreme Court in KS Puttaswamy v. UoI (2019),” said New Delhi-based digital rights advocacy group Internet Freedom Foundation, which also questioned the sophistication of the facial recognition system.
The facial recognition system that the government used in Delhi was first acquired by the Delhi Police to identify missing children. In 2019, the system had an accuracy rate of 1% and it even failed to distinguish between boys and girls, the group said.
“All of this is being done without any clear underlying legal authority and is in clear violation of the Right to Privacy judgment (that the Indian apex court upheld in 2017),” said Apar Gupta, executive director at IFF. “Facial recognition technology is still evolving and the risks of such evolutionary tech being used in policing are significant,” said Gupta.
Several law enforcement agencies have been using facial recognition for years now. In January and early February, police in New Delhi and the northern state of Uttar Pradesh used the technology during protests against a new citizenship law that critics say marginalises Muslims.

Your VPN or ad-blocker app could be collecting your data

The underpinnings of how app store analytics platforms operate were exposed this week by BuzzFeed, which uncovered the network of mobile apps used by a popular analytics firm Sensor Tower to amass app data. The company had operated at least 20 apps, including VPNs and ad blockers, whose main purpose was to collect app usage data from end users in order to make estimations about app trends and revenues. Unfortunately, these sorts of data collection apps are not new — nor unique to Sensor Tower’s operation.
Sensor Tower was found to operate apps such as Luna VPN, for example, as well as Free and Unlimited VPN, Mobile Data, and Adblock Focus, among others. After BuzzFeed reached out, Apple removed Adblock Focus and Google removed Mobile Data. Others are still being investigated, the report said.

Apps’ collection of usage data has been an ongoing issue across the app stores.
Facebook and Google have both operated such apps, not always transparently, and Sensor Tower’s key rival App Annie continues to do the same today.
Facebook
For Facebook, its 2013 acquisition of VPN app maker Onavo for years served as a competitive advantage. The traffic through the app gave Facebook insight into which other social applications were growing in popularity — so Facebook could either clone their features or acquire them outright. When Apple finally booted Onavo from the App Store half a decade later, Facebook simply brought back the same code in a new wrapper, then called the Facebook Research app. This time, it was a bit more transparent about its data collection, as the Research app actually paid users for their data.
But Apple kicked that app out, too. So Facebook last year launched Study and Viewpoints to further its market research and data collection efforts. These apps are still live today.
Google
Google was also caught doing something similar by way of its Screenwise Meter app, which invited users 18 and up (or 13 if part of a family group) to download the app and participate in the panel. The app’s users allowed Google to collect their app and web usage in exchange for gift cards. But like Facebook, Google’s app used Apple’s Enterprise Certificate program to work — a violation of Apple policy that saw the app removed, again following media coverage. Screenwise Meter returned to the App Store last year and continues to track app usage, among other things, with panelists’ consent.
App Annie
App Annie, a firm that directly competes with Sensor Tower, has acquired mobile data companies and now operates its own set of apps to track app usage under those brands.
In 2014, App Annie bought Distimo, and as of 2016 has run Phone Guardian, a “secure Wi-Fi and VPN” app, under the Distimo brand.

The app discloses its relationship with App Annie in its App Store description, but remains vague about its true purpose:
“Trusted by more than 1 million users, App Annie is the leading global provider of mobile performance estimates. In short, we help app developers build better apps. We build our mobile performance estimates by learning how people use their devices. We do this with the help of this app.”
In 2015, App Annie acquired Mobidia. Since 2017, it has operated a real-time data usage monitor My Data Manager under that brand, as well. The App Store description only offers the same vague disclosure, which means users aren’t likely aware of what they’re agreeing to.

Disclosure?
The problem with apps like App Annie’s and Sensor Tower’s is that they’re marketed as offering one particular function, when their real reason for existing is another entirely.
The app companies’ defense is that they do disclose and require consent during onboarding. For example, Sensor Tower apps explicitly tell users what is collected and what is not:

 
App Annie’s app offers a similar disclosure, and takes the extra step of identifying the parent company by name:

Despite these opt-ins, end users may still not understand that their VPN app is actually tied to a much larger data collection operation. After all, App Annie and Sensor Tower aren’t household names (unless you’re an app publisher or marketer.)
Apple and Google’s responsibility 
Apple and Google, let’s be fair, are also culpable here.
Of course, Google is more pro-data collection because of the nature of its own business as an advertising-powered company. (It even tracks users in the real world via the Google Maps app.)
Apple, meanwhile, markets itself as a privacy-focused company, so is deserving of increased scrutiny.
It seems unfathomable that, following the Onavo scandal, Apple wouldn’t have taken a closer look into the VPN app category to ensure its apps were compliant with its rules and transparent about the nature of their businesses. In particular, one would expect Apple to have paid close attention to apps operated by companies in the app store intelligence business, like App Annie and its subsidiaries.
Apple is surely aware of how these companies acquire data — it’s common industry knowledge. Plus, App Annie’s acquisitions were publicly disclosed.

oh wait! pic.twitter.com/ktVc6E9t1f
— Will Strafach (@chronic) March 10, 2020

But Apple is conflicted. It wants to protect app usage and user data (and be known for protecting such data) by not providing any broader app store metrics of its own. However, it also knows that app publishers need such data to operate competitively on the App Store. So instead of being proactive about sweeping the App Store for data collection utilities, it remains reactive, pulling select apps when the media puts them on blast, as BuzzFeed’s report has now done. That allows Apple to maintain a veil of innocence.
But covertly pulling user data is only one way to operate. As Facebook and Google have since realized, it’s easier to run these sorts of operations on the App Store if the apps just say, basically, “this is a data collection app,” and/or offer payment for participation — as many marketing research panels do. This is a more transparent relationship from a consumer’s perspective, too, as they know they’re agreeing to sell their data.
Meanwhile, Sensor Tower and App Annie competitor Apptopia says it tested then scrapped its own ad blocker app around six years ago, but claims it never collected data with it. It now favors getting its data directly from its app developer customers.
“We can confidently state that 100% of the proprietary data we collect is from shared App Analytics Accounts where app developers proactively and explicitly share their data with us, and give us the right to use it for modeling,” stated Apptopia Co-founder and COO, Jonathan Kay. “We do not collect any data from mobile panels, third-party apps, or even at the user/device level.”
This isn’t necessarily better for end users, as it further obscures the data collection and sharing process. Consumers don’t know which app developers are sharing this data, what data is being shared, or how it’s being utilized. (Fortunately for those who do care, Apple allows users to disable the sharing of diagnostic and usage data from within iOS Settings.)
Data collection done by app analytics firms is only one of many, many ways that apps leak data, however.
In fact, many apps collect personal data — including data that’s far more sensitive than anonymized app usage trends — by way of their included SDKs (software development kits). These tools allow apps to share data with numerous technology companies including ad networks, data brokers, and aggregators, both large and small. It’s not illegal and mainstream users probably don’t know about this either.
Instead, user awareness seems to crop up through conspiracy theories, like “Facebook is listening through the microphone,” without realizing that Facebook collects so much data it doesn’t really need to do so. (Well, except when it does).
In the wake of BuzzFeed’s reporting, Sensor Tower says it’s “taking immediate steps to make Sensor Tower’s connection to our apps perfectly clear, and adding even more visibility around the data their users share with us.”
Apple, Google, and App Annie have been asked for comment. Google isn’t providing an official comment. Apple didn’t respond. App Annie did not have a comment ready by deadline.
Sensor Tower’s full statement is below:
Our business model is predicated on high-level, macro app trends. As such, we do not collect or store any personally identifiable information (PII) about users on our servers or elsewhere. In fact, based on the way our apps are designed, such data is separated before we could possibly view or interact with it, and all we see are ad creatives being served to users. What we do store is extremely high level, aggregated advertising data that may demonstrate trends that we share with customers.
Our privacy policy follows best practices and makes our data use clear. We want to reiterate that our apps do not collect any PII, and therefore it cannot be shared with any other entity, Sensor Tower or otherwise. We’ve made this very clear in our privacy policy, which users actively opt into during the apps’ onboarding processes after being shown an unambiguous disclaimer detailing what data is shared with us. As a routine matter, and as our business evolves, we’ll always take a privacy-centric approach to new features to help ensure that any PII remains uncollected and is fully safeguarded.
Based on the feedback we’ve received, we’re taking immediate steps to make Sensor Tower’s connection to our apps perfectly clear, and adding even more visibility around the data their users share with us.
 
 

Daily Crunch: French data watchdog investigates Criteo

Criteo faces a privacy investigation, an e-discovery startup raises $62 million and hackers hack other hackers. Here’s your Daily Crunch for March 10, 2020.
1. Adtech giant Criteo is being investigated by France’s data watchdog
Criteo is under investigation by the French data protection watchdog, the CNIL, following a complaint filed by privacy rights campaign group Privacy International.
Back in November 2018, a few months after GDPR (Europe’s updated data protection framework) came into force, Privacy International filed complaints against a number of companies operating in the space — including Criteo. A subsequent investigation by the rights group found adtech trackers on mental health websites sharing sensitive user data for ad targeting purposes.
2. Everlaw announces $62M Series C to continue modernizing legal discovery
Everlaw is bringing modern data management, visualization and machine learning to e-discovery, the process in which legal entities review large amounts of evidence to build a case. CapitalG (Alphabet’s growth equity investment fund) and Menlo Ventures led the round.
3. Hackers are targeting other hackers by infecting their tools with malware
Cybereason’s Amit Serper found that the attackers in this years-long campaign are taking existing hacking tools and injecting a powerful remote-access trojan. When the tools are opened, the hackers gain full access to the target’s computer.
4. Amazon creates $5M relief fund to aid small businesses in Seattle impacted by coronavirus outbreak
The fund will provide cash grants to local small businesses in need during the novel coronavirus outbreak. The money will be directed toward small businesses with fewer than 50 employees or less than $7 million in annual revenue, and with a physical presence within a few blocks of Amazon’s Regrade and South Lake Union office buildings.
5. Stitch Fix’s sharp decline signals high growth hurdles for tech-enabled startups
Shares of Stitch Fix, a digitally-enabled “styling service,” are off sharply this morning after its earnings failed to excite public market investors. The firm, worth over $29 per share as recently as February, opened today worth just $14.75 per share. (Extra Crunch membership required.)
6. Facebook Stories tests cross-posting to its pet, Instagram
Facebook’s latest colonization of Instagram has begun — the social network is testing the option to cross-post Stories to Instagram, instead of just vice-versa.
7. Sequoia is giving away $21M to a payments startup it recently funded as it walks away from deal
Sequoia Capital has, for the first time in its history, parted ways with a newly funded company (Finix) over a purported conflict of interest and, almost more shockingly, handed back its board seat, its information rights, its shares and its full investment.
The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

Facebook’s photo transfer tool opens to more users in Europe, LatAm and Africa

Facebook is continuing to open up access to a data porting tool it launched in Ireland in December. The tool lets users of its network transfer photos and videos they have stored on its servers directly to another photo storage service, such as Google Photos, via encrypted transfer.
A Facebook spokesman confirmed to TechCrunch that access to the transfer tool is being rolled out today to the UK, the rest of the European Union and additional countries in Latin America and Africa.
Late last month Facebook also opened up access to multiple markets in APAC and LatAm, per the spokesman. The tech giant has previously said the tool will be available worldwide in the first half of 2020.
The setting to “transfer a copy of your photos and videos” is accessed via the Your Facebook Information settings menu.

People can access this new tool in their Facebook settings within Your Facebook Information, the same place where you can download your information: https://t.co/H9xYqK7N5h pic.twitter.com/rbBrXjwUzz
— Alexandru Voica (@alexvoica) March 10, 2020

The tool is based on code developed via Facebook’s participation in the Data Transfer Project (DTP) — a collaborative effort started in 2018 and backed by the likes of Apple, Facebook, Google, Microsoft and Twitter, which committed to building a common framework using open source code for connecting any two online service providers in order to support “seamless, direct, user initiated portability of data between the two platforms”.
In recent years the dominance of tech giants has led to an increase in competition complaints — garnering the attention of policymakers and regulators.
In the EU, for instance, competition regulators are now eyeing the data practices of tech giants including Amazon, Facebook and Google, while in the US tech giants including Google, Facebook, Amazon, Apple and Microsoft are also facing antitrust scrutiny. As more questions are asked about antitrust, big tech has come under pressure to respond — hence the collective push on portability.
Last September Facebook also released a white paper laying out its thinking on data portability which seeks to frame it as a challenge to privacy — in what looks like an attempt to lobby for a regulatory moat to limit portability of the personal data mountain it’s amassed on users.
At the same time, the release of a portability tool gives Facebook something to point regulators to when they come calling — even as the tool only allows users to port a very small portion of the personal data the service holds on them. Such tools are also only likely to be sought out by a minority of more tech-savvy users.
Facebook’s transfer tool also currently only supports direct transfer to Google’s cloud storage — greasing a pipe for users to pass a copy of their facial biometrics from one tech giant to another.
We checked, and from our location in the EU, Google Photos is the only direct destination offered via Facebook’s drop-down menu thus far:

However the spokesman implied wider utility could be coming — saying the DTP project updated adapters for photos APIs from Smugmug (which owns Flickr); and added new integrations for music streaming service Deezer; decentralized social network Mastodon; and Tim Berners-Lee’s decentralization project Solid.
Though it’s not clear why there’s no option offered as yet within Facebook to port direct to any of these other services. Presumably additional development work is still required by the third party to implement the direct data transfer.  (We’ve asked Facebook for more on this and will update if we get a response.)
The aim of the DTP is to develop a standardized framework that makes it easier for others to join without having to “recreate the wheel every time they want to build portability tools”, as the spokesman put it, adding: “We built this tool with the support of current DTP partners, and hope that even more companies and partners will join us in the future.”
He also emphasized that the code is open source and claimed it’s “fairly straightforward” for a company that wishes to plug its service into the framework, especially if it already has a public API.
“They just need to write a DTP adapter against that public API,” he suggested.
“Now that the tool has launched, we look forward to working with even more experts and companies – especially startups and new platforms looking to provide an on-ramp for this type of service,” the spokesman added.
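To make the “adapter” idea concrete, here is a purely illustrative sketch of how this kind of portability framework can be structured. It is not the Data Transfer Project’s actual code or interfaces (the real project is written in Java and defines its own exporter and importer contracts); every name below is hypothetical.

```python
# Hypothetical illustration of the "adapter" idea behind portability frameworks
# like the DTP. None of these names come from the real project; a real adapter
# would implement the DTP's own interfaces against the service's public API.
from dataclasses import dataclass
from typing import Iterable, Protocol


@dataclass
class Photo:
    title: str
    url: str
    album: str


class PhotoAdapter(Protocol):
    """What each service plugs in: export its own photos, import someone else's."""

    def export_photos(self, auth_token: str) -> Iterable[Photo]: ...

    def import_photos(self, auth_token: str, photos: Iterable[Photo]) -> None: ...


def transfer(source: PhotoAdapter, source_token: str,
             destination: PhotoAdapter, destination_token: str) -> None:
    # The framework only ever sees the shared Photo model, so any two services
    # with adapters can exchange data directly, no pairwise integrations needed.
    destination.import_photos(destination_token, source.export_photos(source_token))
```

The design point is that each service writes a single adapter against its own public API, and the shared data model then lets any two adapters be paired.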

Adtech giant Criteo is being investigated by France’s data watchdog

Adtech giant Criteo is under investigation by the French data protection watchdog, the CNIL, following a complaint filed by privacy rights campaign group Privacy International.
“I can confirm that the CNIL has opened up an investigation into Criteo. We are in the trial phase, so we can’t communicate at this stage,” a CNIL spokesperson told us.
Privacy International has been campaigning for more than a year for European data protection agencies to investigate several adtech players and data brokers involved in programmatic advertising.
Yesterday it said the French regulator has finally opened a probe of Criteo.
“CNIL’s confirmation that they are investigating Criteo is important and we warmly welcome it,” it said in a statement. “The AdTech ecosystem is based on vast privacy infringements, exploiting people’s data on a daily basis. Whether it’s through deceptive consent banners or by infesting mental health websites, these companies enable a surveillance environment where all your moves online are tracked to profile and target you, with little space to contest.”
We’ve reached out to Criteo for comment.
Back in November 2018, a few months after Europe’s updated data protection framework (GDPR) came into force, Privacy International filed complaints against a number of companies operating in the space — including Criteo.
A subsequent investigation by the rights group last year also found adtech trackers on mental health websites sharing sensitive user data for ad targeting purposes.
Last May Ireland’s Data Protection Commission also opened a formal investigation into Quantcast, following Privacy International’s complaint and a swathe of separate GDPR complaints targeting the real-time bidding (RTB) process involved in programmatic advertising.
The crux of the RTB complaints is that the process is inherently insecure since it entails the leaky broadcasting of people’s personal data with no way for it to be controlled once it’s out there vs GDPR’s requirement for personal data to be processed securely.
In June the UK’s Information Commissioner’s Office also fired a warning shot at the behavioral ad industry — saying it had “systemic concerns” about the compliance of RTB. The regulator has so far failed to take any enforcement action, despite issuing another blog post last December in which it discussed the “industry problem” with lawfulness — preferring instead to encourage adtech to reform itself. (Relevant: Google announcing it will phase out support for third party cookies.)
In its 2018 adtech complaint, Privacy International called for France’s CNIL, the UK’s ICO and Ireland’s DPC to investigate Criteo, Quantcast and a third company called Tapad — arguing their processing of Internet users’ data (including special category personal data) has no lawful basis, neither fulfilling GDPR’s requirements for consent nor legitimate interest.
Privacy International’s complaint argued that additional GDPR principles — including transparency, fairness, purpose limitation, data minimisation, accuracy, and integrity and confidentiality — were also not being fulfilled; and called for further investigation to ascertain compliance with other legal rights and safeguards GDPR gives Europeans over their personal data, including the right to information; access; rights related to automated decision making and profiling; data protection by design and default; and data protection impact assessments.
In specific complaints against Criteo, Privacy International raised concerns about its Shopper Graph tool, which is used to predict real-time product interest and which Criteo has touted as having data on nearly three-quarters of the world’s shoppers — fed by cross-device online tracking of people’s digital activity that is not limited to cookies and gets supplemented by offline data. It also flagged Criteo’s Dynamic Retargeting tool, which enables the retargeting of tracked shoppers with behaviorally targeted ads via Criteo sharing data with scores of ‘partners’, including publishers and ad exchanges involved in the RTB process to auction online ad slots.
At the time of the original complaint Privacy International said Criteo told it it was relying on consent to track individuals, obtained via its advertising (and publisher) partners — who, per GDPR, would need to obtain informed, specific and freely given consent up-front before dropping any tracking cookies (or other tracer technologies) — as well as claiming a legal basis known as legitimate interest, saying it believed this was a valid ground so that it could comply with its contractual obligations toward its clients and partners.
However legitimate interest requires a balancing test to be carried out to consider impacts on the individual’s interests, as part of a wider assessment process to determine whether it can be applied.
It’s Privacy International’s contention that legitimate interest is not a valid legal basis in this case.
Now the CNIL will look in detail at Criteo’s data processing to determine whether or not there are GDPR violations. If it finds breaches of the law, the regulation allows for monetary penalties to be issued that can scale as high as 4% of a company’s global turnover. EU data protection agencies can also order changes to how data is processed.
Commenting on the CNIL’s investigation of Criteo’s business, Dr Lukasz Olejnik, an independent privacy researcher and consultant whose research on the privacy implications of RTB predates all the aforementioned complaints, told us: “I am not surprised with the investigation as in Real-Time Bidding transparency and consent were always very problematic and at best non-obvious. I don’t know how retrospective consent could be reconciled.”
“It is rather beyond doubt that a thorough privacy impact assessment (data protection impact assessment) had to be conducted for many aspects of such systems or its uses, so this particular angle of the complaint should not be controversial,” Olejnik added.
“My long views on Real-Time Bidding is that it was not a technology created with particular focus on security and privacy. As a transformative technology in the long-term it also contributed to broader issues like the dissemination of harmful content like political disinformation.”
The CNIL probe certainly adds to Criteo’s business woes, with the company reporting declining revenue last year and predicting more to come in 2020. More aggressive moves by browser makers to bake in tracker blocking is clearly having an impact on its core business.
In a recent interview with Digiday, Criteo CEO Megan Clarken talked about wanting to broaden the range of services the company offers to advertisers and reduce its reliance on its traditional retargeting business.
Criteo has also been investing heavily in artificial intelligence in recent years — ploughing in $23M in 2018 to open an AI lab in Paris.

Australia sues Facebook over Cambridge Analytica, fine could scale to $529BN

Australia’s privacy watchdog is suing Facebook over the Cambridge Analytica data breach — which, back in 2018, became a global scandal that wiped billions off the tech giant’s share price yet only led to Facebook picking up a $5BN FTC fine.
Should Australia prevail in its suit against the tech giant the monetary penalty could be exponentially larger.
Australia’s Privacy Act sets out a provision for a civil penalty of up to $1,700,000 to be levied per contravention — and the national watchdog believes there were 311,074 local Facebook users in the cache of ~86M profiles lifted by Cambridge Analytica. So the potential fine here is circa $529BN. (A very far cry from the £500k Facebook paid in the UK over the same data misuse scandal.)
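For what it’s worth, the headline figure appears to follow from straightforward multiplication. A minimal sketch, assuming (as the watchdog’s framing implies) the maximum penalty applied once per affected local user:

```python
# Rough arithmetic behind the headline figure (illustrative only; assumes the
# maximum AU$1.7M civil penalty is applied once per affected Australian user).
max_penalty_per_contravention = 1_700_000
affected_australian_users = 311_074

theoretical_maximum = max_penalty_per_contravention * affected_australian_users
print(f"AU${theoretical_maximum:,}")  # AU$528,825,800,000 (circa $529BN)
```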
In a statement published on its website today the Office of the Australian Information Commissioner (OAIC) says it has lodged proceedings against Facebook in a federal court alleging the company committed serious and/or repeated interferences with privacy.
The suit alleges the personal data of Australian Facebook users was disclosed to the This is Your Digital Life app for a purpose other than that for which it was collected — thereby breaching Australia’s Privacy Act 1988. It further claims the data was exposed to the risk of being disclosed to Cambridge Analytica and used for political profiling purposes, and passed to other third parties.
This is Your Digital Life was an app built by an app developer called GSR that was hired by Cambridge Analytica to obtain and process Facebook users’ data for political ad targeting purposes.
The events from which the suit stems took place on Facebook’s platform between March 2014 and May 2015, when user data was being siphoned off by GSR under contract with Cambridge Analytica — which worked with US political campaigns, including Ted Cruz’s presidential campaign and, later, that of Donald Trump, now president.
GSR was co-founded by two psychology researchers, Aleksandr Kogan and Joseph Chancellor. And in a still unexplained twist in the saga, Facebook hired Chancellor, in about November 2015, which was soon after some of its own staffers had warned internally about the “sketchy” business Cambridge Analytica was conducting on its ad platform. Chancellor has never spoken to the press and subsequently departed Facebook as quietly and serendipitously as he arrived.
In a concise statement summing up its legal action against Facebook the OAIC writes:
Facebook disclosed personal information of the Affected Australian Individuals. Most of those individuals did not install the “This is Your Digital Life” App; their Facebook friends did. Unless those individuals undertook a complex process of modifying their settings on Facebook, their personal information was disclosed by Facebook to the “This is Your Digital Life” App by default. Facebook did not adequately inform the Affected Australian Individuals of the manner in which their personal information would be disclosed, or that it could be disclosed to an app installed by a friend, but not installed by that individual.
Facebook failed to take reasonable steps to protect those individuals’ personal information from unauthorised disclosure. Facebook did not know the precise nature or extent of the personal information it disclosed to the “This is Your Digital Life” App. Nor did it prevent the app from disclosing to third parties the personal information obtained. The full extent of the information disclosed, and to whom it was disclosed, accordingly cannot be known. What is known, is that Facebook disclosed the Affected Australian Individuals’ personal information to the “This is Your Digital Life” App, whose developers sold personal information obtained using the app to the political consulting firm Cambridge Analytica, in breach of Facebook’s policies.
As a result, the Affected Australian Individuals’ personal information was exposed to the risk of disclosure, monetisation and use for political profiling purposes.
Commenting in a statement, Australia’s information commissioner and privacy commissioner, Angelene Falk, added: “All entities operating in Australia must be transparent and accountable in the way they handle personal information, in accordance with their obligations under Australian privacy law. We consider the design of the Facebook platform meant that users were unable to exercise reasonable choice and control about how their personal information was disclosed.
“Facebook’s default settings facilitated the disclosure of personal information, including sensitive information, at the expense of privacy. We claim these actions left the personal data of around 311,127 Australian Facebook users exposed to be sold and used for purposes including political profiling, well outside users’ expectations.”
Reached for comment, a Facebook spokesperson sent this statement:
We’ve actively engaged with the OAIC over the past two years as part of their investigation. We’ve made major changes to our platforms, in consultation with international regulators, to restrict the information available to app developers, implement new governance protocols and build industry-leading controls to help people protect and manage their data. We’re unable to comment further as this is now before the Federal Court.

Grindr sold by Chinese owner after US raised national security concerns

Chinese gaming giant Beijing Kunlun has agreed to sell popular gay dating app Grindr for about $608 million, ending a tumultuous four years under Chinese ownership.
Reuters reports that the Chinese company sold its 98% stake in Grindr to a U.S.-based company, San Vicente Acquisition Partners.
The app, originally developed in Los Angeles, raised national security concerns after it was acquired by Beijing Kunlun in 2016 for $93 million. That ownership was later scrutinized by a U.S. government national security panel, the Committee on Foreign Investment in the United States (CFIUS), which reportedly told the Beijing-based parent company that its ownership of Grindr constituted a national security threat.
CFIUS expressed concern that data from the app’s some 27 million users could be used by the Chinese government. Last year, it was reported that while under Chinese ownership, Grindr allowed engineers in Beijing access to the personal data of millions of U.S. users, including their private messages and HIV status.
Little is known about San Vicente Acquisition, but a person with knowledge of the deal said that the company is made up of a group of investors that’s fully owned and controlled by Americans. Reuters said that one of those investors is James Lu, a former executive at Chinese search giant Baidu.
The deal is subject to shareholder approval and a review by CFIUS.
A spokesperson for Grindr declined to comment on the record.


Cathay Pacific fined £500k by UK’s ICO over data breach disclosed in 2018

Cathay Pacific has been issued with a £500,000 penalty by the UK’s data watchdog for security lapses which exposed the personal details of some 9.4 million customers globally — 111,578 of whom were from the UK.
The penalty, which is the maximum fine possible under relevant UK law, was announced today by the Information Commissioner’s Office (ICO), following a multi-month investigation. It pertains to a breach disclosed by the airline in fall 2018.
At the time Cathay Pacific said it had first identified unauthorized access to its systems in March, though it did not explain why it took more than six months to make a public disclosure of the breach.
The failure to secure its systems resulted in unauthorised access to passengers’ personal details, including names, passport and identity details, dates of birth, postal and email addresses, phone numbers and historical travel information.
Today the ICO said the earliest date of unauthorised access to Cathay Pacific’s systems was October 14, 2014. While the earliest known date of unauthorised access to personal data was February 7, 2015.
“The ICO found Cathay Pacific’s systems were entered via a server connected to the internet and malware was installed to harvest data,” the regulator writes in a press release, adding that it found “a catalogue of errors” during the investigation, including back-up files that were not password protected; unpatched Internet-facing servers; use of operating systems that were no longer supported by the developer; and inadequate antivirus protection.
Since Cathay’s systems were compromised in this breach, the UK has transposed an update to the European Union’s data protection framework into its national law, which bakes in strict disclosure requirements for breaches involving personal data — requiring data controllers to inform national regulators within 72 hours of becoming aware of a breach.
The General Data Protection Regulation (GDPR) also includes a much more substantial penalties regime — with fines that can scale as high as 4% of global annual turnover.
However owing to the timing of the unauthorized access the ICO has treated this breach as falling under previous UK data protection legislation.
Under GDPR the airline would likely have faced a substantially larger fine.
Commenting on Cathay Pacific’s penalty in a statement, Steve Eckersley, the ICO’s director of investigations, said:
People rightly expect when they provide their personal details to a company, that those details will be kept secure to ensure they are protected from any potential harm or fraud. That simply was not the case here.
This breach was particularly concerning given the number of basic security inadequacies across Cathay Pacific’s system, which gave easy access to the hackers. The multiple serious deficiencies we found fell well below the standard expected. At its most basic, the airline failed to satisfy four out of five of the National Cyber Security Centre’s basic Cyber Essentials guidance.
Under data protection law organisations must have appropriate security measures and robust procedures in place to ensure that any attempt to infiltrate computer systems is made as difficult as possible.
Reached for comment the airline reiterated its regret over the data breach and said it has taken steps to enhance its security “in the areas of data governance, network security and access control, education and employee awareness, and incident response agility”.
“Substantial amounts have been spent on IT infrastructure and security over the past three years and investment in these areas will continue,” Cathay Pacific said in the statement. “We have co-operated closely with the ICO and other relevant authorities in their investigations. Our investigation reveals that there is no evidence of any personal data being misused to date. However, we are aware that in today’s world, as the sophistication of cyber attackers continues to increase, we need to and will continue to invest in and evolve our IT security systems.”
“We will continue to co-operate with relevant authorities to demonstrate our compliance and our ongoing commitment to protecting personal data,” it added.
Last summer the ICO slapped another airline, British Airways, with a far more substantial fine for a breach that leaked data on 500,000 customers, also as a result of security lapses.
In that case the airline faced a record £183.39M penalty — totalling 1.5% of its total revenues for 2018 — as the timing of the breach occurred when the GDPR applied.

FCC proposes $200M in fines for wireless carriers that sold your location for years

The FCC has officially and finally determined that the major wireless carriers in the U.S. broke the law by secretly selling subscribers’ location data for years with almost no constraints or disclosure. But its Commissioners decry the $200 million penalty proposed to be paid by these enormously rich corporations, calling it disproportionately small given the harm caused to consumers.
Under the proposed fines, T-Mobile would pay $91M; AT&T, $57M; Verizon, $48M; and Sprint, $12M. (Disclosure: TechCrunch is owned by Verizon Media. This does not affect our coverage in the slightest.)
The case has stretched on for more than a year and a half after initial reports that private companies were accessing and selling real-time subscriber location data to anyone willing to pay. Such a blatant abuse of consumers’ privacy caused an immediate outcry, and carriers responded with apparent chagrin — but failed to terminate or even evaluate these programs in a timely fashion. It turns out they were run with almost no oversight at all, with responsibility delegated to the third party companies to ensure compliance.
Meanwhile the FCC was called on to investigate the nature of these offenses, and spent more than a year doing so in near-total silence, with even its own Commissioners calling out the agency’s lack of communication on such a serious issue.
Finally, in January, FCC Chairman Ajit Pai — who, it really must be noted here, formerly worked for one of the main companies implicated, Securus — announced that the investigation had found the carriers had indeed violated federal law and would soon be punished.
Today brings the official documentation of the fines, as well as commentary from the Commission. The general feeling seems to be that while it’s commendable to recognize this violation and propose what could be considered substantial fines, the whole thing is, as Commissioner Rosenworcel put it, “a day late and a dollar short.”
The scale of the fines, they say, has little to do with the scale of the offenses — and that’s because the investigation did not adequately investigate or attempt to investigate the scale of those offenses. Essentially, the FCC didn’t even look at the number or nature of actual instances of harm — it just asked the carriers to provide the number of contracts entered into.
And why not go after the third-party companies that actually ran these location-selling programs? They’re not being fined at all. Even if the FCC lacked the authority to do so, it could have handed off the case to Justice or local authorities that could determine whether these companies violated other laws.
As Rosenworcel notes in her own statement, the fines are also extraordinarily generous even beyond this minimal method of calculating harm:
The agency proposes a $40,000 fine for the violation of our rules—but only on the first day. For every day after that, it reduces to $2,500 per violation. The FCC heavily discounts the fines the carriers potentially owe under the law and disregards the scope of the problem. On top of that, the agency gives each carrier a thirty-day pass from this calculation. This thirty day “get-out-of-jail-free” card is plucked from thin air.
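To see why the Commissioner considers that discounting generous, here is a minimal sketch of one possible reading of the schedule she describes. The function and its assumptions (that the 30-day pass simply exempts the first 30 days, and that a violation is treated as one continuous offense) are ours for illustration, not the FCC’s actual methodology.

```python
def proposed_fine(days_of_violation: int, grace_days: int = 30,
                  first_day_rate: int = 40_000, daily_rate: int = 2_500) -> int:
    """One possible reading of the schedule described above (illustrative only):
    the first grace_days days are exempt, the first chargeable day costs
    first_day_rate, and every chargeable day after that costs daily_rate."""
    chargeable_days = max(0, days_of_violation - grace_days)
    if chargeable_days == 0:
        return 0
    return first_day_rate + daily_rate * (chargeable_days - 1)

# A single violation running for a full year, under this reading:
print(proposed_fine(365))  # 40_000 + 2_500 * 334 = 875_000
```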
Given that this investigation took place over such a long period, it’s strange that it did not seek to hear from the public or subpoena further details from the companies facilitating the violations. Meanwhile the carriers sought to declare a huge proportion of their responses to the FCC’s questions confidential, including publicly available information, and the agency didn’t question these assertions until Starks and Rosenworcel intervened.
$200M sounds like a lot, but divided among several billion-dollar communications organizations it’s peanuts, especially when you consider that these location-selling agreements may have netted far more than that in the years they were active. Only the carriers know exactly how many times their subscribers’ privacy was violated, and how much money they made from that abuse. And because the investigation has ended without the authority over these matters asking about it, we likely never will know.
The proposed fines, called a Notice of Apparent Liability, are only a tentative finding, and the carriers have 30 days to respond or ask for an extension — the latter of which is the more likely. Once they respond (perhaps challenging the amount or something else) the FCC can take as long as it wants to come up with a final fine amount. And once that is issued, there is no requirement that the fine actually be collected — the FCC has in fact declined to collect fines before, once the heat died down, though not with a penalty of this scale.
The only thing that led to this case being investigated at all was public attention, and apparently public attention is necessary to ensure the federal government follows through on its duties.

Clearview said its facial recognition app was only for law enforcement as it courted private companies

Clearview AI claimed it would only sell its controversial facial recognition software to law enforcement agencies, but a new report suggests the company is less than discerning about its client base. According to BuzzFeed News, the small, secretive company looks to have shopped its technology far and wide. While Clearview counts ICE, the U.S. Attorney’s Office for the Southern District of New York and the retail giant Macy’s among its paying customers, many more private companies are testing the technology through 30-day free trials. Non-law enforcement entities that appeared on Clearview’s client list include Walmart, Eventbrite, the NBA, Coinbase, Equinox and many others.
According to the report, even if a company or organization has no formal relationship with Clearview, its individual employees might be testing the software. “In some cases… officials at a number of those places initially had no idea their employees were using the software or denied ever trying the facial recognition tool,” BuzzFeed News reports.
In one example, the NYPD denied a relationship with Clearview even as internal logs showed that as many as 30 officers within the department had conducted 11,000 searches through the software.
A week ago, Clearview’s CEO Hoan Ton-That was quoted on Fox Business stating that his company’s technology is “strictly for law enforcement”—a claim the company’s budding client list appears to contradict.
“This list, if confirmed, is a privacy, security, and civil liberties nightmare,” ACLU Staff Attorney Nathan Freed Wessler said of the revelations. “Government agents should not be running our faces against a shadily assembled database of billions of our photos in secret and with no safeguards against abuse.”
On top of its reputation as an invasive technology, critics argue that facial recognition tech isn’t accurate enough to be used in the high-consequence settings it’s often touted for. Facial recognition software has notoriously struggled to accurately identify non-white, non-male faces, a phenomenon that undergirds arguments that biased data has the potential to create devastating real-world consequences.
Little is known about the technology that powers Clearview’s own algorithms and accuracy beyond that the company scrapes public images from many online sources, aggregates that data, and allows users to search it for matches. In light of Clearview’s reliance on photos from social networks, Facebook, YouTube, and Twitter have all issued the company cease-and-desist letters for violating their terms of use.
Clearview’s small pool of early investors includes the private equity firm Kirenaga Partners and famed investor and influential tech conservative Peter Thiel. Thiel, who sits on the board of Facebook, also co-founded Palantir, a data analytics company that’s become a favorite of law enforcement.

Amazon Transcribe can now automatically redact personally identifiable information

Amazon Transcribe, the AWS-based speech-to-text service, launched a small but important new feature this morning that, if implemented correctly, can automatically hide your personally identifiable information from call transcripts.
One of the most popular use cases for Transcribe is to create a record of customer calls. Almost by default, that involves exchanging information like your name, address or a credit card number. In my experience, some call centers stop the recording when you’re about to exchange credit card numbers, for example, but that’s not always the case.
With this new feature, Transcribe can automatically identify information like a social security number, credit card number, bank account number, name, email address, phone number or mailing address and redact it. The tool automatically replaces this information with ‘[PII]’ in the transcript.
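For developers, the redaction option is exposed through Transcribe’s existing transcription-job API. Below is a minimal sketch of what enabling it via boto3 might look like; the bucket, file and job names are placeholders, and the exact ContentRedaction options should be checked against AWS’s current documentation.

```python
# Minimal sketch (not production code): starting an Amazon Transcribe job with
# PII redaction enabled via boto3. Bucket, file and job names are placeholders.
import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")

transcribe.start_transcription_job(
    TranscriptionJobName="support-call-1234-redacted",            # placeholder
    Media={"MediaFileUri": "s3://example-bucket/calls/1234.wav"},  # placeholder
    LanguageCode="en-US",
    ContentRedaction={
        "RedactionType": "PII",        # redact personally identifiable information
        "RedactionOutput": "redacted"  # keep only the redacted transcript
    },
)
# In the resulting transcript, identified PII is replaced with "[PII]".
```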
There are, of course, other tools that can remove PII from existing documents. Often, though, these are data loss prevention tools that aim to keep data from leaking out of the company when you share documents with outsiders. With this new Transcribe feature, at least some of this data will never be available for sharing in the first place (unless, of course, you keep a copy of the audio).
In total, Transcribe currently supports 31 languages. Of those, it can transcribe 6 in real-time for captioning and other use cases.

Facebook has paused election reminders in Europe after data watchdog raises transparency concerns

Big tech’s lead privacy regulator in Europe has intervened to flag transparency concerns about a Facebook election reminder feature — asking the tech giant to provide it with information about what data it collects from users who interact with the notification and how their personal data is used, including whether it’s used for targeting them with ads.
Facebook confirmed to TechCrunch it has paused use of the election reminder feature in the European Union while it works on addressing the Irish Data Protection Commission (DPC)’s concerns.
Facebook’s Election Day Reminder (EDR) feature is a notification the platform can display to users on the day of an election — ostensibly to encourage voter participation. However, as ever with the data-driven ad business, there’s a whole wrapper of associated questions about what information Facebook’s platform might be harvesting when it chooses to deploy the nudge (and how the ad business is making use of the data).
On an FAQ on its website about the election reminder Facebook writes vaguely that users “may see reminders and posts about elections and voting”.
Facebook does not explain what criteria it uses to determine whether to target (or not to target) a particular user with an election reminder.
Yet a study carried out by Facebook in 2012, working with academics from the University of California at San Diego, found that an election day reminder sent via its platform on the day of the 2010 US congressional elections boosted voter turnout by about 340,000 people — which has led to concern that selective deployment of election reminders by Facebook could have the potential to influence poll outcomes.
Facebook could, for example, choose to target an election reminder at certain types of users who its profiling suggests are likely to lean towards voting a particular way, or at key regions where a poll result could be swung by a small shift in turnout. The lack of transparency around how the tool is deployed by Facebook is therefore also concerning.
Under EU law, entities processing personal data that reveals political opinions must also meet a higher standard of regulatory compliance for this so-called “special category data” — including around transparency and consent. (If relying on user consent to collect this type of data it would need to be explicit — requiring a clear, purpose-specific statement that the user affirms, for instance.)
In a statement today the DPC writes that it notified Facebook of a number of “data protection concerns” related to the EDR ahead of the recent Irish General Election — which took place February 8 — raising particular concerns about “transparency to users about how personal data is collected when interacting with the feature and subsequently used by Facebook”.
The DPC said it asked Facebook to make some changes to the feature but because these “remedial actions” could not be implemented in advance of the Irish election it says Facebook decided not to activate the EDR during that poll.
We understand the main issue for the regulator centers on the provision of in-context transparency for users on how their personal data would be collected and used when they engaged with the feature — such as the types of data being collected and the purposes the data is used for, including whether it’s used for advertising purposes.
In its statement, the DPC says that following its intervention Facebook has paused use of the EDR across the EU, writing: “Facebook has confirmed that the Election Day Reminder feature will not be activated during any EU elections pending a response to the DPC addressing the concerns raised.”
It’s not clear how long this intervention-triggered pause will last — neither the DPC nor Facebook have given a timeframe for when the transparency problems might be resolved.
We reached out to Facebook with questions on the DPC’s intervention.
The company sent this statement, attributed to a spokesperson:
We are committed to processing people’s information lawfully, fairly, and in a transparent manner. However, following concerns raised by the Irish Data Protection Commission around whether we give users enough information about how the feature works, we have paused this feature in the EU for the time being. We will continue working with the DPC to address their concerns.
“We believe that the Election Day reminder is a positive feature which reminds people to vote and helps them find their polling place,” Facebook added.
Forthcoming elections in Europe include Slovak parliamentary elections this month; North Macedonian and Serbian parliamentary elections, which are due to take place in April; and UK local elections in early May.
The intervention by the Irish DPC against Facebook is the second such public event in around a fortnight — after the regulator also published a statement revealing it had raised concerns about Facebook’s planned launch of a dating feature in the EU.
That launch was also put on ice following its intervention, although Facebook claimed it chose to postpone the rollout to get the launch “right”; while the DPC said it’s waiting for adequate responses and expects the feature won’t be launched before it gets them.


It looks like public statements of concern could be a new tactic by the regulator to try to address the sticky challenge of reining in big tech.
The DPC is certainly under huge pressure to deliver key decisions to prove that the EU’s flagship General Data Protection Regulation (GDPR) is functioning as intended. Critics say it’s taking too long, even as its case load continues to pile up.
No GDPR decisions on major cases involving tech giants including Facebook and Google have yet been handed down in Dublin — despite the GDPR fast approaching its second birthday.
At the same time it’s clear tech giants have no shortage of money, resources and lawyers to inject friction into the regulatory process — with the aim of slowing down any enforcement.
So it’s likely the DPC is looking for avenues to bag some quick wins — making more of its interventions public and thereby putting pressure on a major player like Facebook to respond to the publicity generated when the regulator airs its “concerns”.

Facebook’s latest ‘transparency’ tool doesn’t offer much — so we went digging

Just under a month ago Facebook switched on global availability of a tool which affords users a glimpse into the murky world of tracking that its business relies upon to profile users of the wider web for ad targeting purposes.
Facebook is not going boldly into transparent daylight — but rather offering what privacy rights advocacy group Privacy International has dubbed “a tiny sticking plaster on a much wider problem”.
The problem it’s referring to is the lack of active and informed consent for mass surveillance of Internet users via background tracking technologies embedded into apps and websites, including as people browse outside Facebook’s own content garden.
The dominant social platform is also only offering this feature in the wake of the 2018 Cambridge Analytica data misuse scandal, when Mark Zuckerberg faced awkward questions in Congress about the extent of Facebook’s general web tracking. Since then policymakers around the world have dialled up scrutiny of how its business operates — and realized there’s a troubling lack of transparency in and around adtech generally and Facebook specifically. 
Facebook’s tracking pixels and social plugins — aka the share/like buttons that pepper the mainstream web — have created a vast tracking infrastructure which silently informs the tech giant of Internet users’ activity, even when a person hasn’t interacted with any Facebook-branded buttons.
Facebook claims this is just ‘how the web works’. And other tech giants are similarly engaged in tracking Internet users (notably Google). But as a platform with 2.2BN+ users Facebook has stolen a march on the lion’s share of rivals when it comes to harvesting people’s data and building out a global database of person profiles.
It’s also positioned as a dominant player in an adtech ecosystem which means it’s the one being fed with intel by data brokers and publishers who deploy tracking tech to try to survive in such a skewed system.
Meanwhile the opacity of online tracking means the average Internet user is none the wiser that Facebook can be following what they’re browsing all over the Internet. Questions of consent loom very large indeed.
Facebook is also able to track people’s usage of third party apps if a person chooses the Facebook login option that the company encourages developers to implement in their apps — the carrot being a lower friction sign-in versus requiring users to create yet another login credential.
The price for this ‘convenience’ is data and user privacy, as the Facebook login gives the tech giant a window into third party app usage.
The company has also used a VPN app it bought and badged as a security tool to glean data on third party app usage — though it’s recently stepped back from the Onavo app after a public backlash (though that did not stop it running a similar tracking program targeted at teens).
Background tracking is how Facebook’s creepy ads function (it prefers to call such behaviorally targeted ads ‘relevant’), and it is how they have functioned for years.
Yet it’s only in recent months that it’s offered users a glimpse into this network of online informers — by providing limited information about the entities that are passing tracking data to Facebook, as well as some limited controls.
From ‘Clear History’ to “Off-Facebook Activity”
Originally briefed in May 2018, at the height of the Cambridge Analytica scandal, as a ‘Clear History’ option, the feature has since been renamed ‘Off-Facebook Activity’ — a label so bloodless and devoid of ‘call to action’ that the average Facebook user, should they stumble upon it buried deep in unlovely settings menus, would more likely move along than feel moved to carry out a privacy purge.
(For the record you can access the setting here — but you do need to be logged into Facebook to do so.)
The other problem is that Facebook’s tool doesn’t actually let you purge your browsing history; it just delinks it from being associated with your Facebook ID. There is no option to actually clear your browsing history via its button, which is another reason for the name switch. So, no, Facebook hasn’t built a clear history ‘button’.
“While we welcome the effort to offer more transparency to users by showing the companies from which Facebook is receiving personal data, the tool offers little way for users to take any action,” said Privacy International this week, criticizing Facebook for “not telling you everything”.
As the saying goes, a little knowledge can be a dangerous thing. So a little transparency implies — well — anything but clarity. And Privacy International sums up the Off-Facebook Activity tool with an apt oxymoron — describing it as “a new window to the opacity”.
“This tool illustrates just how impossible it is for users to prevent external data from being shared with Facebook,” it writes, warning with emphasis: “Without meaningful information about what data is collected and shared, and what are the ways for the user to opt-out from such collection, Off-Facebook activity is just another incomplete glimpse into Facebook’s opaque practices when it comes to tracking users and consolidating their profiles.”
It points out, for instance, that the information provided here is limited to a “simple name” — thereby preventing the user from “exercising their right to seek more information about how this data was collected”, which EU users at least are entitled to.
“As users we are entitled to know the name/contact details of companies that claim to have interacted with us. If the only thing we see, for example, is the random name of an artist we’ve never heard before (true story), how are we supposed to know whether it is their record label, agent, marketing company or even them personally targeting us with ads?” it adds.
Another criticism is that Facebook is only providing limited information about each data transfer — with Privacy International noting some events are marked under a cryptic “CUSTOM” label; and that Facebook provides “no information regarding how the data was collected by the advertiser (Facebook SDK, tracking pixel, like button…) and on what device, leaving users in the dark regarding the circumstances under which this data collection took place”.
“Does Facebook really display everything they process/store about those events in the log/export?” queries privacy researcher Wolfie Christl, who tracks the adtech industry’s tracking techniques. “They have to, because otherwise they don’t fulfil their SAR [Subject Access Request] obligations [under EU law].”
Christl notes Facebook makes users jump through an additional “download” hoop in order to view data on tracked events — and even then, as Privacy International points out, it gives up only a limited view of what has actually been tracked…

And it’s just ridiculous.
FB doesn’t show me the list of visits they recorded from a certain website in their web interface, no! I have to ‘download my information’, which takes a long time.
And then, I’m sure this is not all data they record when tracking a VIEW_CONTENT event: pic.twitter.com/qBO87Zp5YH
— Wolfie Christl (@WolfieChristl) January 29, 2020

“For example, why doesn’t Facebook list the specific sites/URLs visited? Do they infer data from the domains e.g. categories? If yes, why is this not in the logs?” Christl asks.
We reached out to Facebook with a number of questions, including why it doesn’t provide more detail by default. It responded with this statement attributed to a spokesperson:
We offer a variety of tools to help people access their Facebook information, and we’ve designed these tools to comply with relevant laws, including GDPR. We disagree with this [Privacy International] article’s claims and would welcome the chance to discuss them with Privacy International.
Facebook also said it is continuing to develop which information it surfaces through the Off-Facebook Activity tool — and that it welcomes feedback on this.
We also asked it about the legal bases it uses to process people’s information that’s been obtained via its tracking pixels and social plug-ins. It did not provide a response to those questions.
Six names, many questions…
When the company launched the Off-Facebook Activity tool a snap poll of available TechCrunch colleagues showed very diverse tallies (which also may not show the most recent activity, per other Facebook caveats) — ranging from one colleague with an eye-watering 1,117 entities (likely down to doing a lot of app testing); to several with a few hundred apiece; to a couple in the mid-tens.
In my case I had just six. But from my point of view — as an EU citizen with a suite of rights related to privacy and data protection; and as someone who aims to practice good online privacy hygiene, including having a very locked down approach to using Facebook (never using its mobile app for instance) — it was still six too many. I wanted to find out how these entities had circumvented my attempts not to be tracked.
And in the case of the first one in the list who on earth it was…

Turns out the listed domain is a subdomain of cloudfront.net, which belongs to Amazon Web Services’ CloudFront content delivery network. But I had to go searching online myself to figure out that the owner of that particular subdomain is (now) a company called Nativo.
Facebook’s list provided only very bare bones information. I also clicked to delink the first entity, since it immediately looked so weird, and found that by doing that Facebook wiped all the entries — which meant I was unable to retain access to what little additional info it had provided about the respective data transfers.
Undeterred I set out to contact each of the six companies directly with questions — asking what data of mine they had transferred to Facebook and what legal basis they thought they had for processing my information.
(On a practical level six names looked like a sample size I could at least try to follow up manually — but remember I was the TechCrunch exception; imagine trying to request data from 1,117 companies, or 450 or even 57, which were the lengths of lists of some of my colleagues.)
This process took about a month and a lot of back and forth/chasing up. It likely only yielded as much info as it did because I was asking as a journalist; an average Internet user may have had a tougher time getting attention on their questions — though, under EU law, citizens have a right to request a copy of personal data held on them.
Eventually, I was able to obtain confirmation that tracking pixels and Facebook share buttons had been involved in my data being passed to Facebook in certain instances. Even so I remain in the dark on many things. Such as exactly what personal data Facebook received.
In one case I was told by a listed company that it doesn’t know itself what data was shared — only Facebook knows because it’s implemented the company’s “proprietary code”. (Insert your own ‘WTAF’ there.)
The legal side of these transfers also remains highly opaque. From my point of view I would not intentionally consent to any of this tracking — but in some instances the entities involved claim that (my) consent was (somehow) obtained (or implied).
In other cases they said they are relying on a legal basis in EU law that’s referred to as ‘legitimate interests’. However this requires a balancing test to be carried out to ensure a business use does not have a disproportionate impact on individual rights.
I wasn’t able to ascertain whether such tests had ever been carried out.
Meanwhile, since Facebook is also making use of the tracking information from its pixels and social plug-ins (and seemingly making more granular use of it, since some entities claimed they only get aggregate, not individual, data), Christl suggests it’s unlikely such a balancing test would be easy to pass, for that tiny little ‘platform giant’ reason.
Notably he points out Facebook’s Business Tool terms state that it makes use of so called “event data” to “personalize features and content and to improve and secure the Facebook products” — including for “ads and recommendations”; for R&D purposes; and “to maintain the integrity of and to improve the Facebook Company Products”.
In a section of its legal terms covering the use of its pixels and SDKs Facebook also puts the onus on the entities implementing its tracking technologies to gain consent from users prior to doing so in relevant jurisdictions that “require informed consent” for tracking cookies and similar — giving the example of the EU.
“You must ensure, in a verifiable manner, that an end user provides the necessary consent before you use Facebook Business Tools to enable us to store and access cookies or other information on the end user’s device,” Facebook writes, pointing users of its tools to its Cookie Consent Guide for Sites and Apps for “suggestions on implementing consent mechanisms”.
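For context, here is a rough TypeScript sketch of what that kind of consent gating looks like in practice. The consent helper and pixel ID are placeholders rather than any particular site’s code; the point is simply that the pixel script is only injected, and so no data flows to Facebook, after the visitor actively opts in.

```typescript
// A minimal, hypothetical sketch of consent-gating, not Facebook's official loader:
// the pixel script is only injected after the visitor opts in, so no request to
// connect.facebook.net is made for users who decline. `waitForMarketingConsent`
// stands in for whatever consent management platform the site actually uses.

function waitForMarketingConsent(): Promise<boolean> {
  // Placeholder: resolve true only once the user accepts marketing cookies.
  return Promise.resolve(false);
}

function loadFacebookPixel(pixelId: string): void {
  // Simplified queueing stub standing in for Facebook's own bootstrap snippet:
  // calls made before fbevents.js loads are buffered on fbq.queue.
  const fbq = (...args: unknown[]) => {
    (fbq as any).queue = (fbq as any).queue || [];
    (fbq as any).queue.push(args);
  };
  (window as any).fbq = fbq;

  // Injecting the script is the moment data starts flowing to Facebook.
  const script = document.createElement("script");
  script.async = true;
  script.src = "https://connect.facebook.net/en_US/fbevents.js";
  document.head.appendChild(script);

  (window as any).fbq("init", pixelId);
  (window as any).fbq("track", "PageView");
}

waitForMarketingConsent().then((granted) => {
  if (granted) {
    loadFacebookPixel("000000000000000"); // placeholder pixel ID
  }
  // If consent is refused or never given, nothing loads and nothing is sent.
});
```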
Christl flags the contradiction between Facebook’s stipulation that users of its tracking tech must gain prior consent and the claims made to me by some of these entities that they don’t need to because they’re relying on ‘legitimate interests’.
“Using LI as a legal basis is even controversial if you use a data analytics company that reliably processes personal data strictly on behalf of you,” he argues. “I guess, industry lawyers try to argue for a broader applicability of LI, but in the case of FB business tools I don’t believe that the balancing test (a businesses legitimate interests vs. the impact on the rights and freedoms of data subjects) will work in favor of LI.”
Those entities relying on legitimate interests as a legal base for tracking would still need to offer a mechanism where users can object to the processing — and I couldn’t immediately see such a mechanism in the cases in question.
One thing is crystal clear: Facebook itself does not provide a mechanism for users to object to its processing of tracking data nor opt out of targeted ads. That remains a long-standing complaint against its business in the EU which data protection regulators are still investigating.
One more thing: Non-Facebook users continue to have no way of learning what data of theirs is being tracked and transferred to Facebook. Only Facebook users have access to the Off-Facebook Activity tool, for example. Non-users can’t even access a list.
Facebook has defended its practice of tracking non-users around the Internet as necessary for unspecified ‘security purposes’. It’s an inherently disproportionate argument of course. The practice also remains under legal challenge in the EU.
Tracking the trackers
SimpleReach (aka d8rk54i4mohrb.cloudfront.net)
What is it? A California-based analytics platform (now owned by Nativo) used by publishers and content marketers to measure how well their content/native ads perform on social media. The product began life in the early noughties as a simple tool for publishers to recommend similar content at the bottom of articles before the startup pivoted — aiming to become ‘the PageRank of social’ — offering analytics tools for publishers to track engagement around content in real-time across the social web (plugging into platform APIs). It also built statistical models to predict which pieces of content will be the most social and where, generating a proprietary per-article score. SimpleReach was acquired by Nativo last year to complement analytics tools the latter already offered for tracking content on the publisher/brand’s own site.
Why did it appear in your Off-Facebook Activity list? Given it’s a b2b product it does not have a visible consumer brand of its own. And, to my knowledge, I have never visited its own website prior to investigating why it appeared in my Off-Facebook Activity list. Clearly, though, I must have visited a site (or sites) that are using its tracking/analytics tools. Of course an Internet user has no obvious way to know this — unless they’re actively using tools to monitor which trackers are tracking them.
In a further quirk, neither the SimpleReach (nor Nativo) brand names appeared in my Off-Facebook Activity list. Rather a domain name was listed — d8rk54i4mohrb.cloudfront.net — which looked at first glance weird/alarming.
I found out the domain is owned by SimpleReach by using a tracker analytics service.
Once I knew the name I was able to connect the entry to Nativo — via news reports of the acquisition — which led me to an entity I could direct questions to.  
What happened when you asked them about this? There was a bit of back and forth and then they sent a detailed response to my questions in which they claim they do not share any data with Facebook — “or perform ‘off site activity’ as described on Facebook’s activity tool”.
They also suggested that their domain had appeared as a result of their tracking code being implemented on a website I had visited which had also implemented Facebook’s own trackers.
“Our technology allows our Data Controllers to insert other tracking pixels or tags, using us as a tag manager that delivers code to the page. It is possible that one of our customers added a Facebook pixel to an article you visited using our technology. This could lead Facebook to attribute this pixel to our domain, though our domain was merely a ‘carrier’ of the code,” they told me.
In terms of the data they collect, they said this: “The only Personal Data that is collected by the SimpleReach Analytics tag is your IP Address and a randomly generated id.  Both of these values are processed, anonymized, and aggregated in the SimpleReach platform and not made available to anyone other than our sub-processors that are bound to process such data only on our behalf. Such values are permanently deleted from our system after 3 months. These values are used to give our customers a general idea of the number of users that visited the articles tracked.”
So, again, they suggested the reason why their domain appeared in my Off-Facebook Activity list is a combination of Nativo/SimpleReach’s tracking technologies being implemented on a site where Facebook’s retargeting pixel is also embedded — which then resulted in data about my online activity being shared with Facebook (which Facebook then attributes as coming from SimpleReach’s domain).
Commenting on this, Christl agreed it sounds as if publishers “somehow attach Facebook pixel events to SimpleReach’s cloudfront domain”.
“SimpleReach probably doesn’t get data from this. But the question is 1) is SimpleReach perhaps actually responsible (if it happens in the context of their domain); 2) The Off-Facebook activity is a mess (if it contains events related to domains whose owners are not web or app publishers).”
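To make the ‘carrier’ idea concrete, here’s a hypothetical TypeScript sketch of how a single analytics tag served from a vendor’s domain can also inject whatever extra tags a publisher has configured, including a Facebook pixel. None of the names below are SimpleReach’s actual code; it’s just an illustration of the pattern the company describes.

```typescript
// Hypothetical sketch of the "carrier" pattern: a single analytics tag fetched
// from the vendor's domain (e.g. the cloudfront.net subdomain in my list) also
// injects whatever extra tags the publisher has configured. All names here are
// illustrative, not SimpleReach's actual code.

interface ThirdPartyTag {
  name: string;
  src: string;
}

// Tags the publisher has configured in the vendor's dashboard (assumed shape).
const publisherConfiguredTags: ThirdPartyTag[] = [
  { name: "facebook-pixel", src: "https://connect.facebook.net/en_US/fbevents.js" },
];

function injectConfiguredTags(tags: ThirdPartyTag[]): void {
  for (const tag of tags) {
    const el = document.createElement("script");
    el.async = true;
    el.src = tag.src;
    document.head.appendChild(el);
    // Per the article's working theory, this may be why Facebook attributed the
    // resulting pixel events to the carrier's domain rather than the publisher.
  }
}

injectConfiguredTags(publisherConfiguredTags);
```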
Nativo offered to determine whether they hold any personal information associated with the unique identifier they have assigned to my browser if I could send them this ID. However I was unable to locate such an ID (see below).
In terms of legal base to process my information the company told me: “We have the right to process data in accordance with provisions set forth in the various Data Processor agreements we have in place with Data Controllers.”
Nativo also suggested that the Offsite Activity in question might have predated its purchase of the SimpleReach technology — which occurred on March 20, 2019 — saying any activity prior to this would mean my query would need to be addressed directly with SimpleReach, Inc. which Nativo did not acquire. (However in this case the activity registered on the list was dated later than that.)
Here’s what they said on all that in full:
Thank you for submitting your data access request.  We understand that you are a resident of the European Union and are submitting this request pursuant to Article 15(1) of the GDPR.  Article 15(1) requires “data controllers” to respond to individuals’ requests for information about the processing of their personal data.  Although Article 15(1) does not apply to Nativo because we are not a data controller with respect to your data, we have provided information below that will help us in determining the appropriate Data Controllers, which you can contact directly.
First, for details about our role in processing personal data in connection with our SimpleReach product, please see the SimpleReach Privacy Policy.  As the policy explains in more detail, we provide marketing analytics services to other businesses – our customers.  To take advantage of our services, our customers install our technology on their websites, which enables us to collect certain information regarding individuals’ visits to our customers’ websites. We analyze the personal information that we obtain only at the direction of our customer, and only on that customer’s behalf.
SimpleReach is an analytics tracker tool (Similar to Google Analytics) implemented by our customers to inform them of the performance of their content published around the web.  “d8rk54i4mohrb.cloudfront.net” is the domain name of the servers that collect these metrics.  We do not share data with Facebook or perform “off site activity” as described on Facebook’s activity tool.  Our technology allows our Data Controllers to insert other tracking pixels or tags, using us as a tag manager that delivers code to the page.  It is possible that one of our customers added a Facebook pixel to an article you visited using our technology.  This could lead Facebook to attribute this pixel to our domain, though our domain was merely a “carrier” of the code.
The SimpleReach tool is implemented on articles posted by our customers and partners of our customers.  It is possible you visited a URL that has contained our tracking code.  It is also possible that the Offsite Activity you are referencing is activity by SimpleReach, Inc. before Nativo purchased the SimpleReach technology. Nativo, Inc. purchased certain technology from SimpleReach, Inc. on March 20, 2019, but we did not purchase the SimpleReach, Inc. entity itself, which remains a separate entity unaffiliated with Nativo, Inc. Accordingly, any activity that occurred before March 20, 2019 pre-dates Nativo’s use of the SimpleReach technology and should be addressed directly with SimpleReach, Inc. If, for example, TechCrunch was a publisher partner of SimpleReach, Inc. and had SimpleReach tracking code implemented on TechCrunch articles or across the TechCrunch website prior to March 20, 2019, any resulting data collection would have been conducted by SimpleReach, Inc., not by Nativo, Inc.
As mentioned above, our tracking script collects and sends information to our servers based on the articles it is implemented on. The only Personal Data that is collected by the SimpleReach Analytics tag is your IP Address and a randomly generated id.  Both of these values are processed, anonymized, and aggregated in the SimpleReach platform and not made available to anyone other than our sub-processors that are bound to process such data only on our behalf. Such values are permanently deleted from our system after 3 months.  These values are used to give our customers a general idea of the number of users that visited the articles tracked.
We do not, nor have we ever, shared ANY information with Facebook with regards to the information we collect from the SimpleReach Analytics tag, be it Personal Data or otherwise. However, as mentioned above, it is possible that one of our customers added a Facebook retargeting pixel to an article you visited using our technology. If that is the case, we would not have received any information collected from such pixel or have knowledge of whether, and to what extent, the customer shared information with Facebook. Without more information, we are unable to determine the specific customer (if any) on behalf of which we may have processed your personal information. However, if you send us the unique identifier we have assigned to your browser… we can determine whether we have any personal information associated with such browser on behalf of a customer controller, and, if we have, we can forward your request on to the controller to respond directly to your request.
As a Data Processor we have the right to process data in accordance with provisions set forth in the various Data Processor agreements we have in place with Data Controllers.  This type of agreement is designed to protect Data Subjects and ensure that Data Processors are held to the same standards that both the GDPR and the Data Controller have put forth.  This is the same type of agreement used by all other analytics tracking tools (as well as many other types of tools) such as Google Analytics, Adobe Analytics, Chartbeat, and many others.
I also asked Nativo to confirm whether Insider.com (see below) is a customer of Nativo/SimpleReach.
The company told me it could not disclose this “due to confidentiality restrictions” and would only reveal the identity of customers if “required by applicable law”.
Again, it said that if I provided the “unique identifier” assigned to my browser it would be “happy to pull a list of personal information the SimpleReach/Nativo systems currently have stored for your unique identifier (if any), including the appropriate Data Controllers”. (“If we have any personal data collected from you on behalf of Insider.com, it would come up in the list of DataControllers,” it suggested.)
I checked multiple browsers that I use on multiple devices but was unable to locate an ID attached to a SimpleReach cookie. So I also asked whether this might appear attached to any other cookie.
Their response:
Because our data is either pseudonymized or anonymized, and we do not record of any other pieces of Personal Data about you, it will not be possible for us to locate this data without the cookie value.  The SimpleReach user cookie is, and has always been, in the “__srui” cookie under the “.simplereach.com” domain or any of its sub-domains. If you are unable to locate a SimpleReach user cookie by this name on your browser, it may be because you are using a different device or because you have cleared your cookies (in which case we would no longer have the ability to map any personal data we have previously collected from you to your browser or device). We do have other cookies (under the domains postrelease.com, admin.nativo.com, and cloud.nativo.com) but those cookies would not be related to the appearance of SimpleReach in the list of Off Site Activity on your Facebook account, per your original inquiry.
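For anyone trying the same hunt, here’s a minimal TypeScript snippet for checking whether a named cookie such as “__srui” is present in the current page’s context. Note the caveat: document.cookie only exposes cookies scoped to the site you’re on, so a third-party cookie set under .simplereach.com won’t show up when this runs elsewhere; you’d have to dig through the browser’s own cookie manager or devtools instead. This is purely an illustrative sketch.

```typescript
// Minimal helper for checking whether a named cookie (e.g. "__srui") exists.
// Caveat: document.cookie only exposes cookies scoped to the current page, so a
// cookie set under .simplereach.com is invisible to this code on other sites.

function getCookie(name: string): string | null {
  const match = document.cookie
    .split("; ")
    .find((pair) => pair.startsWith(`${name}=`));
  return match ? decodeURIComponent(match.slice(name.length + 1)) : null;
}

const srui = getCookie("__srui");
console.log(srui ? `SimpleReach ID: ${srui}` : "No __srui cookie in this context");
```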
What did you learn from their inclusion in the Off-Facebook Activity list? There appeared to be a correlation between this domain and a publisher, Insider.com, which also appeared in my Off-Facebook Activity list — as both logged events bear the same date; plus Insider.com is a publisher so would fall into the right customer category for using Nativo’s tool.
Given those correlations I was able to guess that Insider.com is a customer of Nativo (I confirmed this when I spoke to Insider.com) — so Facebook’s tool is able to leak relational inferences about the tracking industry by surfacing/mapping business connections that might not otherwise have been evident.
Insider.com
What is it? A New York-based business media company which owns brands such as Business Insider and Markets Insider
Why did it appear in your Off-Facebook Activity list? I imagine I clicked on a technology article that appeared in my Facebook News Feed, or elsewhere while I was logged into Facebook
What happened when you asked them about this? After about a week of radio silence an employee in Insider.com’s legal department got in touch to say they could discuss the issue on background.
This person told me the information in the Off-Facebook Activity tool came from the Facebook share button which is embedded on all articles it runs on its media websites. They confirmed that the share button can share data with Facebook regardless of whether the site visitor interacts with the button or not.
In my case I certainly would not have interacted with the Facebook share button. Nonetheless data was passed, simply by virtue of loading the article page itself.
Insider.com said the Facebook share button widget is integrated into its sites using a standard set-up that Facebook intends publishers to use. If the share button is clicked information related to that action would be shared with Facebook and would also be received by Insider.com (though, in this scenario, it said it doesn’t get any personalized information — but rather gets aggregate data).
Facebook can also automatically collect other information when a user visits a webpage which incorporates its social plug-ins.
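To illustrate why no click is needed, here’s a simplified TypeScript sketch of a typical share button setup, in the spirit of what Insider.com describes rather than its actual integration code: the widget works by loading Facebook’s SDK script on the page, and that request to Facebook’s servers happens for every visitor, carrying the page URL and any Facebook cookies already in the browser, before anyone touches the button.

```typescript
// Illustrative sketch, not Insider.com's real code: merely embedding the share
// widget means pulling in Facebook's SDK on page load, and that request (plus
// the plug-in it renders) already reaches Facebook's servers with no click.

function embedFacebookShareButton(container: HTMLElement, pageUrl: string): void {
  // The placeholder element Facebook's SDK turns into a share button.
  const button = document.createElement("div");
  button.className = "fb-share-button";
  button.setAttribute("data-href", pageUrl);
  container.appendChild(button);

  // Loading the SDK is itself a request to Facebook, made for every visitor,
  // whether or not they ever press "Share".
  const sdk = document.createElement("script");
  sdk.async = true;
  sdk.src = "https://connect.facebook.net/en_US/sdk.js#xfbml=1";
  document.body.appendChild(sdk);
}

embedFacebookShareButton(document.body, window.location.href);
```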
Asked whether Insider.com knows what information Facebook receives via this passive route the company told me it does not — noting the plug-in runs proprietary Facebook code. 
Asked how it’s collecting consent from users for their data to be shared passively with Facebook, Insider.com said its Privacy Policy stipulates users consent to sharing their information with Facebook and other social media sites. It also said it uses the legal ground known as legitimate interests to provide functionality and derive analytics on articles.
In the active case (of a user clicking to share an article) Insider.com said it interprets the user’s action as consent.
Insider.com confirmed it uses SimpleReach/Nativo analytics tools, meaning site visitor data is also being passed to Nativo when a user lands on an article. It said consent for this data-sharing is included within its consent management platform (it uses a CMP made by Forcepoint) which asks site visitors to specify their cookie choices.
Here site visitors can choose for their data not to be shared for analytics purposes (which Insider.com said would prevent data being passed).
I usually apply all cookie consent opt outs, where available, so I’m a little surprised Nativo/SimpleReach was passed my data from an Insider.com webpage. Either I failed to click the opt out one time or failed to respond to the cookie notice and data was passed by default.
It’s also possible I did opt out but data was passed anyway — as there has been research which has found a proportion of cookie notifications ignore choices and pass data anyway (unintentionally or otherwise).
Follow up questions I sent to Insider.com after we talked:
1) Can you confirm whether Insider has performed a legitimate interests assessment?
2) Does Insider have a site mechanism where users can object to the passive data transfer to Facebook from the share buttons?
Insider.com did not respond to my additional questions.
What did you learn from their inclusion in the Off-Facebook Activity list? That Insider.com is a customer of Nativo/SimpleReach.
Rei.com
What is it? A Washington-based ecommerce website selling outdoor gear
Why did it appear in your Off-Facebook Activity list? I don’t recall ever visiting their site prior to looking into why it appeared in the list so I’m really not sure
What happened when you asked them about this? After saying it would investigate it followed up with a statement, rather than detailed responses to my questions, in which it claims it does not hold any personal data associated with — presumably — my TechCrunch email, since it did not ask me what data to check against.
It also appeared to be claiming that it uses Facebook tracking pixels/tags on its website, without explicitly saying as much, writing that: “Facebook may collect information about your interactions with our websites and mobile apps and reflect that information to you through their Off-Facebook Activity tool.”
It claims it has no access to this information — which it says is “pseudonymous to us” but suggested that if I have a Facebook account Facebook could link any browsing on Rei’s site to my Facebook identity and therefore track my activity.
The company also pointed me to a Facebook Help Center post where the company names some of the activities that might have resulted in Rei’s website sending activity data on me to Facebook (which it could then link to my Facebook ID) — although Facebook’s list is not exhaustive (included are: “viewing content”, “searching for an item”, “adding an item to a shopping cart” and “making a donation” among other activities the company tracks by having its code embedded on third parties’ sites).
Here’s Rei’s statement in full:
Thank you for your patience as we looked into your questions.  We have checked our systems and determined that REI does not maintain any personal data associated with you based on the information you provided.  Note, however, that Facebook may collect information about your interactions with our websites and mobile apps and reflect that information to you through their Off-Facebook Activity tool. The information that Facebook collects in this manner is pseudonymous to us — meaning we cannot identify you using the information and we do not maintain the information in a manner that is linked to your name or other identifying information. However, if you have a Facebook account, Facebook may be able to match this activity to your Facebook account via a unique identifier unavailable to REI. (Funnily enough, while researching this I found TechCrunch in MY list of Off-Facebook activity!)
For a complete list of activities that could have resulted in REI sharing pseudonymous information about you with Facebook, this Facebook Help Center article may be useful.  For a detailed description of the ways in which we may collect and share customer information, the purposes for which we may process your data, and rights available to EEA residents, please refer to our Privacy Policy.  For information about how REI uses cookies, please refer to our Cookie Policy.
As a follow up question I asked Rei to tell me which Facebook tools it uses, pointing out that: “Given that, just because you aren’t (as I understand it) directly using my data yourself that does not mean you are not responsible for my data being transferred to Facebook.”
The company did not respond to that point.
I also previously asked Rei.com to confirm whether it has any data sharing arrangements with the publisher of Rock & Ice magazine (see below). And, if so, to confirm the processes involved in data being shared. Again, I got no response to that.
What did you learn from their inclusion in the Off-Facebook Activity list? Given that Rei.com appeared alongside Rock & Ice on the list — both displaying the same date and just one activity apiece — I surmised they have some kind of data-sharing arrangement. They are also both outdoors brands so there would be obvious commercial ‘synergies’ to underpin such an arrangement.
That said, neither would confirm a business relationship to me. But Facebook’s list heavily implies there is some background data-sharing going on.
Rock & Ice magazine 
What is it? A climbing magazine produced by a California-based publisher, Big Stone Publishing
Why did it appear in your Off-Facebook Activity list? I imagine I clicked on a link to a climbing-related article in my Facebook feed or else visited Rock & Ice’s website while I was logged into Facebook in the same browser session
What happened when you asked them about this? After ignoring my initial email query I subsequently received a brief response from the publisher after I followed up — which read:
The Rock and Ice website is opt in, where you have to agree to terms of use to access the website. I don’t know what private data you are saying Rock and Ice shared, so I can’t speak to that. The site terms are here. As stated in the terms you can opt out.
Following up, I asked about the provision in the Rock & Ice website’s cookie notice which states: “By continuing to use our site, you agree to our cookies” — asking whether it’s passing data without waiting for the user to signal their consent.
(Relevant: In October Europe’s top court issued a ruling that active consent is necessary for tracking cookies, so you can’t drop cookies prior to a user giving consent for you to do so.)
The publisher responded:
You have to opt in and agree to the terms to use the website. You may opt out of cookies, which is covered in the terms. If you do not want the benefits of these advertising cookies, you may be able to opt-out by visiting: http://www.networkadvertising.org/optout_nonppii.asp.
If you don’t want any cookies, you can find extensions such as Ghostery or the browser itself to stop and refuse cookies. By doing so though some websites might not work properly.
I followed up again to point out that I’m not asking about the options to opt in or opt out but, rather, the behavior of the website if the visitor does not provide a consent response yet continues browsing — asking for confirmation Rock & Ice’s site interprets this state as consent and therefore sends data.
The publisher stopped responding at that point.
Earlier I had asked it to confirm whether its website shares visitor data with Rei.com. (As noted above, the two appeared with the same date on the list, which suggests data may be being passed between them.) I did not get a response to that question either.
What did you learn from their inclusion in the Off-Facebook Activity list? That the magazine appears to have a data-sharing arrangement with outdoor retailer Rei.com, given how the pair appeared at the same point in my list. However neither would confirm this when I asked.
MatterHackers
What is it? A California-based retailer focused on 3D printing and digital manufacturing
Why did it appear in your Off-Facebook Activity list? I honestly have no idea. I have never to my knowledge visited their site prior to investigating why they should appear on my Off-Facebook Activity list.
I remain pretty interested to know how/why they managed to track me. I can only surmise I clicked on some technology-related content in my Facebook feed, either intentionally or by accident.
What happened when you asked them about this? They first asked me for confirmation that they were on my list. After I had sent a screenshot, they followed up to say they would investigate. I pushed again after hearing nothing for several weeks. At this point they asked for additional information from the Off-Facebook Activity tool — namely more granular metrics, such as a time and date per event and some label information — to help with tracking down this particular data-exchange.
I had previously provided them with the date (as it appears in the screenshot) but it’s possible to download an additional level of information about data transfers which includes per-event time/date-stamps and labels/tags, such as “VIEW_CONTENT”.
However, as noted above, I had previously selected and deleted one item off of my Off-Facebook Activity list, after which Facebook’s platform had immediately erased all entries and associated metrics. There was no obvious way I could recover access to that information.
“Without this information I would speculate that you viewed an article or product on our site — we publish a lot of ‘How To’ content related to 3D printing and other digital manufacturing technologies — this information could have then been captured by Facebook via Adroll for ad retargeting purposes,” a MatterHackers spokesman told me. “Operationally, we have no other data sharing mechanism with Facebook.”
Subsequently, the company confirmed it implements Facebook’s tracking pixel on every page of its website.
Of the pixel, Facebook writes that it enables website owners to track “conversions” (i.e. website actions); to create custom audiences which segment site visitors by criteria that Facebook can identify and match across its user-base, allowing the site owner to target ads via Facebook’s platform at non-customers with a similar profile/criteria to existing customers browsing its site; and to create dynamic ads where a template ad gets populated with product content based on tracking data for that particular visitor.
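For a sense of what that looks like in a page’s code, below is a hedged TypeScript sketch of the sort of standard pixel events a retail site typically fires once the base pixel code is installed. The product IDs and values are placeholders, the snippet assumes the base code has already defined fbq, and it is not MatterHackers’ actual implementation.

```typescript
// Hedged sketch of common "standard events" fired via Facebook's pixel once the
// base snippet is installed. IDs/values below are placeholders, not real data.

declare const fbq: (command: string, eventName: string, params?: object) => void;

// Fired when a visitor lands on a product page.
fbq("track", "ViewContent", {
  content_ids: ["SKU-PLACEHOLDER"],
  content_type: "product",
});

// Fired when a visitor runs an on-site search.
fbq("track", "Search", { search_string: "3d printer filament" });

// Fired when a visitor adds an item to their cart.
fbq("track", "AddToCart", {
  content_ids: ["SKU-PLACEHOLDER"],
  value: 24.99,
  currency: "USD",
});
```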
Regarding the legal base for the data sharing, MatterHackers had this to say: “MatterHackers is not an EU entity, nor do we conduct business in the EU and so have not undertaken GDPR compliance measures. CCPA [California’s Consumer Privacy Act] will likely apply to our business as of 2021 and we have begun the process of ensuring that our website will be in compliance with those regulations as of January 1st.”
I pointed out that GDPR is extraterritorial in scope — and can apply to non-EU based entities, such as if they’re monitoring individuals in the EU (as in this case).
Also likely relevant: A ruling last year by Europe’s top court found sites that embed third party plug-ins such as Facebook’s like button are jointly responsible for the initial data processing — and must either obtain informed consent from site visitors prior to data being transferred to Facebook, or be able to demonstrate a legitimate interest legal basis for processing this data.
Nonetheless it’s still not clear what legal base the company is relying on for implementing the tracking pixel and passing data on EU Facebook users.
When asked about this MatterHackers COO, Kevin Pope, told me:

While we appreciate the sentiment of GDPR, in this case the EU lacks the legal standing to pursue an enforcement action. I’m sure you can appreciate the potential negative consequences if any arbitrary country (or jurisdiction) were able to enforce legal penalties against any website simply for having visitors from that country. Techcrunch would have been fined to oblivion many times over by China or even Thailand (for covering the King in a negative light). In this way, the attempted overreach of the GDPR’s language sets a dangerous precedent.

To provide a little more detail – MatterHackers, at the time of your visit, wouldn’t have known that you were from the EU until we cross-referenced your session with  Facebook, who does know. At that point you would have been filtered from any advertising by us. MatterHackers makes money when our (U.S.) customers buy 3D printers or materials and then succeed at using them (hence the how-to articles), we don’t make any money selling advertising or data.

Given that Facebook does legally exist in the EU and does have direct revenues from EU advertisers, it’s entirely appropriate that Facebook should comply with EU regulations. As a global solution, I believe more privacy settings options should be available to its users. However, given Facebook’s business model, I wouldn’t expect anything other than continued deflection (note the careful wording on their tool) and avoidance from them on this issue.

What did you learn from their inclusion in the Off-Facebook Activity List? I found out that an ecommerce company I had never heard of had been tracking me
Wallapop
What is it? A Barcelona-based peer-to-peer marketplace app that lets people list secondhand stuff for sale and/or to search for things to buy in their proximity. Users can meet in person to carry out a transaction paying in cash or there can be an option to pay via the platform and have an item posted
Why did it appear in your Off-Facebook Activity list? This was the only digital activity in the list I could explain — I figured I must have used a Facebook sign-in option when using the Wallapop app to buy/sell. I wouldn’t normally use Facebook sign-in but for trust-based marketplaces there may be user benefits to leveraging network effects.
What happened when you asked them about this? After my query was booted around a bit a PR company that works with Wallapop responded asking to talk through what information I was trying to ascertain.
After we chatted they sent this response — attributed to sources from Wallapop:
Same as it happens with other apps, wallapop can appear on our users’ Facebook Off Site Activity page if they have interacted in any way with the platform while they were logged in their Facebook accounts. Some interaction examples include logging in via Facebook, visiting our website or having both apps opened and logged.
As other apps do, wallapop only shares activity events with Facebook to optimize users’ ad experience. This includes if a user is registered in wallapop, if they have uploaded an item or if they have started a conversation. Under no circumstance wallapop shares with Facebook our users’ personal data (including sex, name, email address or telephone number).
At wallapop, we are thoroughly committed with the security of our community and we do a safe treatment of the data they choose to share with us, in compliance with EU’s General Data Protection Regulation. Under no circumstance these data are shared with third parties without explicit authorization.
I followed up to ask for further details about these “activity events” — asking whether, for instance, Wallapop shares messaging content with Facebook as well as letting the social network know which items a user is chatting about.
“Under no circumstance the content of our users’ messages is shared with Facebook,” the spokesperson told me. “What is shared is limited to the fact that a conversation has been initiated with another user in relation to a specific item, this is, activity events. Under no circumstance we would share our users’ personal information either.”
Of course the point is Facebook is able to link all app activity with the user ID it already has — so every piece of activity data being shared is personal data.
I also asked what legal base Wallapop relies on to share activity data with Facebook. They said the legal basis is “explicit consent given by users” at the point of signing up to use the app.
“Wallapop collects explicit consent from our users and at any time they can exercise their rights to their data, which include the modification of consent given in the first place,” they said.
“Users give their explicit consent by clicking in the corresponding box when they register in the app, where they also get the chance to opt out and not do it. If later on they want to change the consent they gave in first instance, they also have that option through the app. All the information is clearly available on our Privacy Policy, which is GDPR compliant.”
“At wallapop we take our community’s privacy and security very seriously and we follow recommendations from the Spanish Data Protection Agency,” it added.
What did you learn from their inclusion in the Off-Facebook Activity list? Not much more than I would have already guessed — i.e. that using a Facebook sign-in option in a third party app grants the social media giant a high degree of visibility into your activity within another service.
In this case the Wallapop app registered the most activity events of all six of the listed apps, displaying 13 vs only one apiece for the others — so it gave a bit of a suggestive glimpse into the volume of third party app data that can be passed if you opt to open a Facebook login wormhole into a separate service.

Forensic Architecture redeploys surveillance state tech to combat state-sponsored violence

The specter of constant surveillance hangs over all of us in ways we don’t even fully understand, but it is also possible to turn the tools of the watchers against them. Forensic Architecture is exhibiting several long-term projects at the Museum of Art and Design in Miami that use the omnipresence of technology as a way to expose crimes and violence by oppressive states.
Over seven years Eyal Weizman and his team have performed dozens of investigations into instances of state-sponsored violence, from drone strikes to police brutality. Often these events are minimized at all levels by the state actors involved, denied or no-commented until the media cycle moves on. But sometimes technology provides ways to prove a crime was committed and occasionally even cause the perpetrator to admit it — hoisted by their own electronic petard.
Sometimes the evidence comes from actual state-deployed kit, like body cameras or public records, but the team also uses private information co-opted by state authorities to track individuals, like digital metadata from messages and location services.
For instance, when Chicago police shot and killed Harith Augustus in 2018, the department released some footage of the incident, saying that it “speaks for itself.” But Forensic Architecture’s close inspection of the body cam footage, cross-referenced with other materials, makes it obvious that the police violated numerous rules (including in the operation of the body cams) in their interaction with him, escalating the situation and ultimately killing a man who by all indications — except the official account — was attempting to comply. It also helped bring to light additional footage that was either mistakenly or deliberately left out of a FOIA release.
In another situation, a trio of Turkish migrants seeking asylum in Greece were shown, by analysis of their WhatsApp messages, images, and location and time stamps, to have entered Greece and been detained by Greek authorities before being “pushed back” by unidentified masked escorts, having been afforded no legal recourse to asylum processes or the like. This is one of several recent examples that appear to show private actors working in concert with the state to deprive people of their rights.

Situated testimony for survivors
I spoke with Weizman before the opening of this exhibition in Miami, where some of the latest investigations are being shown off. (Shortly after our interview he would be denied entry to the U.S. to attend the opening, with a border agent explaining that this denial was algorithmically determined; we’ll come back to this.)
The original motive for creating Forensic Architecture, he explained, was to elicit testimony from those who had experienced state violence.
“We started using this technique when in 2013 we met a drone survivor, a German woman who had survived a drone strike in Pakistan that killed several relatives of hers,” Weizman explained. “She has wanted to deliver testimony in a trial regarding the drone strike, but like many survivors her memory was affected by the trauma she has experienced. The memory of the event was scattered, it had lacunae and repetitions, as you often have with trauma. And her condition is like many who have to speak out in human rights work: The closer you get to the core of the testimony, the description of the event itself, the more it escapes you.”
The approach they took to help this woman, and later many others, jog her memory was something called “situated testimony.” Essentially it amounts to exposing the person to media from the experience, allowing them to “situate” themselves in that moment. This is not without its own risks.
“Of course you must have the appropriate trauma professionals present,” Weizman said. “We only bring people who are willing to participate and perform the experience of being again at the scene as it happened. Sometimes details that would not occur to someone to be important come out.”
A digital reconstruction of a drone strike’s explosion was recreated physically for another exhibition.
But it’s surprising how effective it can be, he explained. One case exposed American involvement hitherto undisclosed.
“We were researching a Cameroon special forces detention center, torture and death in custody occurred, for Amnesty International,” he explained. “We asked detainees to describe to us simply what was outside the window. How many trees, or what else they could see.” Such testimony could help place their exact location and orientation in the building and lead to more evidence, such as cameras across the street facing that room.
“And sitting in a room based on a satellite image of the area, one told us: ‘yes, there were two trees, and one was over by the fence where the American soldiers were jogging.’ We said, ‘wait, what, can you repeat that?’ They had been interviewed many times and never mentioned American soldiers,” Weizman recalled. “When we heard there were American personnel, we found Facebook posts from service personnel who were there, and were able to force the transfer of prisoners there to another prison.”
Weizman noted that the organization only goes where help is requested, and does not pursue what might be called private injustices, as opposed to public.
“We require an invitation, to be invited into this by communities that invite state violence. We’re not a forensic agency, we’re a counter-forensic agency. We only investigate crimes by state authorities.”
Using virtual reality: “Unparalleled. It’s almost tactile.”
In the latest of these investigations, being exhibited for the first time at MOAD, the team used virtual reality for the first time in their situated testimony work. While VR has proven to be somewhat less compelling than most would like on the entertainment front, it turns out to work quite well in this context.
“We worked with an Israeli whistleblower soldier regarding testimony of violence he committed against Palestinians,” Weizman said. “It has been denied by the Israeli prime minister and others, but we have been able to find Palestinian witnesses to that case, and put them in VR so we could cross reference them. We had victim and perpetrator testifying to the same crime in the same space, and their testimonies can be overlaid on each other.”
Dean Issacharoff – the soldier accused by Israel of giving false testimony – describes the moment he illegally beat a Palestinian civilian. (Caption and image courtesy of Forensic Architecture)
One thing about VR is that the sense of space is very real; if the environment is built accurately, things like sight-lines and positional audio can be extremely true to life. If someone says they saw the event occur here, but the state says it was here, and a camera this far away saw it at this angle… these incomplete accounts can be added together to form something more factual, and assembled into a virtual environment.
“That project is the first use of VR interviews we have done — it’s still in a very experimental stage. But it didn’t involve fatalities, so the level of trauma was a bit more controlled,” Weizman explained. “We have learned that the level and precision we can arrive at in reconstructing an incident is unparalleled. It’s almost tactile; you can walk through the space, you can see every object: guns, cars, civilians. And you can populate it until the witness is satisfied that this is what they experienced. I think this is a first, definitely in forensic terms, as far as uses of VR.”
A photogrammetry-based reconstruction of the area of Hebron where the incident took place.
In video of the situated testimony, you can see witnesses describing locations more exactly than they likely or even possibly could have without the virtual reconstruction. “I stood with the men at exactly that point,” says one, gesturing towards an object he recognized, then pointing upwards: “There were soldiers on the roof of this building, where the writing is.”
Of course it is not the digital recreation itself that forces the hand of those involved, but the incontrovertible facts it exposes. No one would ever have known that the U.S. had a presence at that detainment facility, and the country had no reason to say it did. The testimony wouldn’t even have been enough, except that it put the investigators onto a line of inquiry that produced data. And in the case of the Israeli whistleblower, the situated testimony defies official accounts that the organization he represented had lied about the incident.
Avoiding “product placement” and tech incursion
Sophie Landres, MOAD’s Curator of Public Programs and Education, was eager to add that the museum is not hosting this exhibit as a way to highlight how wonderful technology is. It’s important to put the technology and its uses in context rather than try to dazzle people with its capabilities. You may find yourself playing into someone else’s agenda that way.
“For museum audiences, this might be one of their first encounters with VR deployed in this way. The companies that manufacture these technologies know that people will have their first experiences with this tech in a cultural or entertainment context, and they’re looking for us to put a friendly face on these technologies that have been created to enable war and surveillance capitalism,” she told me. “But we’re not interested in having our museum be a showcase for product placement without having a serious conversation about it. It’s a place where artists embrace new technologies, but also where they can turn it towards existing power structures.”
Boots on backs mean this is not an advertisement for VR headsets or 3D modeling tools.
She cited a tongue-in-cheek definition of “mixed reality” referring to both digital crossover into the real world and the deliberate obfuscation of the truth at a greater scale.
“On the one hand you have mixing the digital world and the real, and on the other you have the mixed reality of the media environment, where there’s no agreement on reality and all these misinformation campaigns. What’s important about Forensic Architecture is they’re not just presenting evidence of the facts, but also the process used to arrive at these truth claims, and that’s extremely important.”
In openly presenting the means as well as the ends, Weizman and his team avoid succumbing to what he calls the “dark epistemology” of the present post-truth era.
“The arbitrary logic of the border”
As mentioned earlier, Weizman was denied entry to the U.S. for reasons unknown, but possibly related to the network of politically active people with whom he has associated for the sake of his work. Disturbingly, his wife and children were also stopped while entering the U.S. a day before him and separated at the airport for questioning.
In a statement issued publicly afterwards, Weizman dissected the event.
In my interview the officer informed me that my authorization to travel had been revoked because the “algorithm” had identified a security threat. He said he did not know what had triggered the algorithm but suggested that it could be something I was involved in, people I am or was in contact with, places to which I had traveled… I was asked to supply the Embassy with additional information, including fifteen years of travel history, in particular where I had gone and who had paid for it. The officer said that Homeland Security’s investigators could assess my case more promptly if I supplied the names of anyone in my network whom I believed might have triggered the algorithm. I declined to provide this information.
This much we know: we are being electronically monitored for a set of connections – the network of associations, people, places, calls, and transactions – that make up our lives. Such network analysis poses many problems, some of which are well known. Working in human rights means being in contact with vulnerable communities, activists and experts, and being entrusted with sensitive information. These networks are the lifeline of any investigative work. I am alarmed that relations among our colleagues, stakeholders, and staff are being targeted by the US government as security threats.
This incident exemplifies – albeit in a far less intense manner and at a much less drastic scale – critical aspects of the “arbitrary logic of the border” that our exhibition seeks to expose. The racialized violations of the rights of migrants at the US southern border are of course much more serious and brutal than the procedural difficulties a UK national may experience, and these migrants have very limited avenues for accountability when contesting the violence of the US border.
The works being exhibited, he said, “seek to demonstrate that we can invert the forensic gaze and turn it against the actors—police, militaries, secret services, border agencies—that usually seek to monopolize information. But in employing the counter-forensic gaze one is also exposed to higher level monitoring by the very state agencies investigated.”
Forensic Architecture’s investigations are ongoing; you can keep up with them at the organization’s website. And if you’re in Miami, drop by MOAD to see some of the work firsthand.

Google’s new T&Cs include a Brexit ‘easter egg’ for UK users

Google has buried a major change in legal jurisdiction for its UK users as part of a wider update to its terms and conditions that’s been announced today and which it says is intended to make its conditions of use clearer for all users.
It says the update to its T&Cs is the first major revision since 2012 — with Google saying it wanted to ensure the policy reflects its current products and applicable laws.
Google says it undertook a major review of the terms, similar to the revision of its privacy policy in 2018, when the EU’s General Data Protection Regulation started being applied. But while it claims the new T&Cs are easier for users to understand — rewritten using simpler language and a clearer structure — there are no other changes involved, such as to how it handles people’s data.
“We’ve updated our Terms of Service to make them easier for people around the world to read and understand — with clearer language, improved organization, and greater transparency about changes we make to our services and products. We’re not changing the way our products work, or how we collect or process data,” Google spokesperson Shannon Newberry said in a statement.
Users of Google products are being asked to review and accept the new terms before March 31 when they are due to take effect.
Reuters reported on the move late yesterday — citing sources familiar with the update who suggested the change of jurisdiction for UK users will weaken legal protections around their data.
However Google disputes there will be any change in privacy standards for UK users as a result of the shift. It told us there will be no change to how it processes UK users’ data; no change to their privacy settings; and no change to the way it treats their information as a result of the move.
We asked the company for further comment on this — including why it chose not to make a UK subsidiary the legal base for UK users — and a spokesperson told us it is making the change as part of its preparations for the UK to leave the European Union (aka Brexit).
“Like many companies, we have to prepare for Brexit,” Google said. “Nothing about our services or our approach to privacy will change, including how we collect or process data, and how we respond to law enforcement demands for users’ information. The protections of the UK GDPR will still apply to these users.”
Heather Burns, a tech policy specialist based in Glasgow, Scotland — who runs a website dedicated to tracking UK policy shifts around the Brexit process — also believes Google has essentially been forced to make the move because the UK government has recently signalled its intent to diverge from European Union standards in future, including on data protection.
“What has changed since January 31 has been [UK prime minister] Boris Johnson making a unilateral statement that the UK will go its own way on data protection, in direct contrast to everything the UK’s data protection regulator and government has said since the referendum,” she told us. “These bombastic, off-the-cuff statements play to his anti-EU base but businesses act on them. They have to.”
“Google’s transfer of UK accounts from the EU to the US is an indication that they do not believe the UK will either seek or receive a data protection adequacy agreement at the end of the transition period. They are choosing to deal with that headache now rather than later. We shouldn’t underestimate how strong a statement this is from the tech sector regarding its confidence in the Johnson premiership,” she added.
Asked whether she believes there will be a reduction in protections for UK users in future as a result of the shift Burns suggested that will largely depend on Google.
So — in other words — Brexit means, er, trust Google to look after your data.
“The European data protection framework is based around a set of fundamental user rights and controls over the uses of personal data — the everyday data flows to and from all of our accounts. Those fundamental rights have been transposed into UK domestic law through the Data Protection Act 2018, and they will stay, for now. But with the Johnson premiership clearly ready to jettison the European-derived system of user rights for the US-style anything goes model,” Burns suggested.
“Google saying there is no change to the way we process users’ data, no change to their privacy settings and no change to the way we treat their information can be taken as an indication that they stand willing to continue providing UK users with European-style rights over their data — albeit from a different jurisdiction — regardless of any government intention to erode the domestic legal basis for those rights.”
Reuters’ report also raises concerns about the impact of the Cloud Act agreement between the UK and the US — which is due to come into effect this summer — suggesting it will pose a threat to the safety of UK Google users’ data once it’s moved out of an EU jurisdiction (in this case Ireland) to the US where the Act will apply.
The Cloud Act is intended to make it quicker and easier for law enforcement to obtain data stored in the cloud by companies based in the other legal jurisdiction.
So in future, it might be easier for UK authorities to obtain UK Google users’ data using this legal instrument applied to Google US.
It certainly seems clear that as the UK moves away from EU standards as a result of Brexit it is opening up the possibility of the country replacing long-standing data protection rights for citizens with a regime of supercharged mass surveillance. (The UK government has already legislated to give its intelligence agencies unprecedented powers to snoop on ordinary citizens’ digital comms — so it has a proven appetite for bulk data.)
Again, Google told us the shift of legal base for its UK users will make no difference to how it handles law enforcement requests — a process it talks about here — and further claimed this will be true even when the Cloud Act applies. Which is a weasely way of saying it will do exactly what the law requires.
Google confirmed that GDPR will continue to apply for UK users during the transition period between the old and new terms. After that it said UK data protection law will continue to apply — emphasizing that this is modelled after the GDPR. But of course in the post-Brexit future the UK government might choose to model it after something very different.
Asked to confirm whether it’s committing to maintain current data standards for UK users in perpetuity, the company told us it cannot speculate as to what privacy laws the UK will adopt in the future…
We also asked why it hasn’t chosen to elect a UK subsidiary as the legal base for UK users. To which it gave a nonsensical response — saying this is because the UK is no longer in the EU. Which begs the question when did the UK suddenly become the 51st American State?
Returning to the wider T&Cs revision, Google said it’s making the changes in response to litigation in the European Union targeting its terms.
This includes a case in Germany where consumer rights groups successfully sued the tech giant over its use of overly broad terms which the court agreed last year were largely illegal.
In another case a year ago in France a court ordered Google to pay €30,000 for unfair terms — and ordered it to obtain valid consent from users for tracking their location and online activity.
Since at least 2016 the European Commission has also been pressuring tech giants, including Google, to fix consumer rights issues buried in their T&Cs — including unfair terms. A variety of EU laws apply in this area.
In another change being bundled with the new T&Cs Google has added a description about how its business works to the About Google page — where it explains its business model and how it makes money.
Here, among the usual ‘dead cat’ claims about not ‘selling your information’ (tl;dr adtech giants rent attention; they don’t need to sell actual surveillance dossiers), Google writes that it doesn’t use “your emails, documents, photos or confidential information (such as race, religion or sexual orientation) to personalize the ads we show you”.
Though it could be using all that personal stuff to help it build new products it can serve ads alongside.
Even further towards the end of its business model screed it includes the claim that “if you don’t want to see personalized ads of any kind, you can deactivate them at any time”. So, yes, buried somewhere in Google’s labyrinthine settings there is an opt-out.
The change in how Google articulates its business model comes in response to growing political and regulatory scrutiny of adtech business models such as Google’s — including on data protection and antitrust grounds.

Google gobbling Fitbit is a major privacy risk, warns EU data protection advisor

The European Data Protection Board (EDPB) has intervened to raise concerns about Google’s plan to scoop up the health and activity data of millions of Fitbit users — at a time when the company is under intense scrutiny over how extensively it tracks people online and for antitrust concerns.
Google confirmed its plan to acquire Fitbit last November, saying it would pay $7.35 per share for the wearable maker in an all-cash deal that valued Fitbit, and therefore the activity, health, sleep and location data it can hold on its more than 28M active users, at ~$2.1 billion.
Regulators are in the process of considering whether to allow the tech giant to gobble up all this data.
Google, meanwhile, is in the process of dialling up its designs on the health space.
In a statement issued after a plenary meeting this week the body that advises the European Commission on the application of EU data protection law highlights the privacy implications of the planned merger, writing: “There are concerns that the possible further combination and accumulation of sensitive personal data regarding people in Europe by a major tech company could entail a high level of risk to the fundamental rights to privacy and to the protection of personal data.”
Just this month the Irish Data Protection Commission (DPC) opened a formal investigation into Google’s processing of people’s location data — finally acting on GDPR complaints filed by consumer rights groups as early as November 2018, which argue the tech giant uses deceptive tactics to manipulate users in order to keep tracking them for ad-targeting purposes.
We’ve reached out to the Irish DPC — which is the lead privacy regulator for Google in the EU — to ask if it shares the EDPB’s concerns.
The latter’s statement goes on to reiterate the importance for EU regulators to assess what it describes as the “longer-term implications for the protection of economic, data protection and consumer rights whenever a significant merger is proposed”.
It also says it intends to remain “vigilant in this and similar cases in the future”.
The EDPB includes a reminder that Google and Fitbit have obligations under Europe’s General Data Protection Regulation to conduct a “full assessment of the data protection requirements and privacy implications of the merger” — and do so in a transparent way, under the regulation’s principle of accountability.
“The EDPB urges the parties to mitigate the possible risks of the merger to the rights to privacy and data protection before notifying the merger to the European Commission,” it also writes.
We reached out to Google for comment but at the time of writing it had not provided a response nor responded to a question asking what commitments it will be making to Fitbit users regarding the privacy of their data.
Fitbit has previously claimed that users’ “health and wellness data will not be used for Google ads”.
However big tech has a history of subsequently steamrollering founder claims that ‘nothing will change’. (See, for e.g.: Facebook’s WhatsApp U-turn on data-linking.)
“The EDPB will consider the implications that this merger may have for the protection of personal data in the European Economic Area and stands ready to contribute its advice on the proposed merger to the Commission if so requested,” the advisory body adds.
We’ve also reached out to the European Commission’s competition unit for a response to the EDPB’s statement.

UCLA backtracks on plan for campus facial recognition tech

After expressing interest in processing campus security camera footage with facial recognition software, UCLA is backing down.
In a letter to Evan Greer of Fight for the Future, a digital privacy advocacy group, UCLA Administrative Vice Chancellor Michael Beck announced the institution would abandon its plans in the face of a backlash from its student body.
“We have determined that the potential benefits are limited and are vastly outweighed by the concerns of the campus community,” Beck wrote.
The decision, deemed a “major victory” for privacy advocates, came as students partnered with Fight for the Future to plan a national day of protest on March 2. UCLA’s interest in facial recognition was a controversial departure from many elite universities that confirmed they have no intention to implement the surveillance technology, including MIT, Brown, and New York University.
UCLA student newspaper the Daily Bruin reported on the school’s interest in facial recognition tech last month, as the university proposed the addition of facial recognition software in a revision of its security camera policy. According to the Daily Bruin, the technology would have been used to screen individuals from restricted campus areas and to identify anyone flagged with a “stay-away order” prohibiting them from being on university grounds. The proposal faced criticism in a January town hall meeting on campus with 200 attendees and momentum against the surveillance technology built from there.
“We hope other universities see that they will not get away with these policies,” Matthew William Richard, UCLA student and vice chair of UCLA’s Campus Safety Alliance, said of the decision. “… Together we can demilitarize and democratize our campuses.”

Lack of big tech GDPR decisions looms large in EU watchdog’s annual report

The lead European Union privacy regulator for most of big tech has put out its annual report which shows another major bump in complaints filed under the bloc’s updated data protection framework, underlining the ongoing appetite EU citizens have for applying their rights.
But what the report doesn’t show is any firm enforcement of EU data protection rules vis-a-vis big tech.
The report leans heavily on stats to illustrate the volume of work piling up on desks in Dublin. But it’s light on decisions on highly anticipated cross-border cases involving tech giants including Apple, Facebook, Google, LinkedIn and Twitter.
The General Data Protection Regulation (GDPR) began being applied across the EU in May 2018 — so is fast approaching its second birthday. Yet its file of enforcements where tech giants are concerned remains very light — even for companies with a global reputation for ripping away people’s privacy.
This despite Ireland having a large number of open cross-border investigations into the data practices of platform and adtech giants — some of which originated from complaints filed right at the moment GDPR came into force.
In the report the Irish Data Protection Commission (DPC) notes it opened a further six statutory inquiries in relation to “multinational technology companies’ compliance with the GDPR” — bringing the total number of major probes to 21. So its ‘big case’ file continues to stack up. (It’s added at least two more since then, with a probe of Tinder and another into Google’s location tracking opened just this month.)
The report is a lot less keen to trumpet the fact that the number of decisions on cross-border cases to date remains a big fat zero.
Though, just last week, the DPC made a point of publicly raising “concerns” about Facebook’s approach to assessing the data protection impacts of a forthcoming product in light of GDPR requirements to do so — an intervention that resulted in a delay to the regional launch of Facebook’s Dating product.
This discrepancy (cross-border cases: 21 – Irish DPC decisions: 0), plus rising anger from civil rights groups, privacy experts, consumer protection organizations and ordinary EU citizens over the paucity of flagship enforcement around key privacy complaints is clearly piling pressure on the regulator. (Other examples of big tech GDPR enforcement do exist. Well, France’s CNIL is one.)
In its defence the DPC does have a horrifying case load. As illustrated by other stats it’s keen to spotlight — such as saying it received a total of 7,215 complaints in 2019, a 75% increase on the total number (4,113) received in 2018. A full 6,904 of those were dealt with under the GDPR (while 311 complaints were filed under the Data Protection Acts 1988 and 2003).
There were also 6,069 data security breaches notified to it, per the report — representing a 71% increase on the total number (3,542) recorded in 2018.
While a full 457 cross-border processing complaints were received in Dublin via the GDPR’s One-Stop-Shop mechanism. (This is the device the Commission came up with for the ‘lead regulator’ approach that’s baked into GDPR and which has landed Ireland in the regulatory hot seat. tl;dr other data protection agencies are passing Dublin A LOT of paperwork.)
The DPC necessarily has to do back and forth on cross border cases, as it liaises with other interested regulators. All of which, you can imagine, creates a rich opportunity for lawyered up tech giants to inject extra friction into the oversight process — by asking to review and query everything. [Insert the sound of a can being hoofed down the road]
Meanwhile the agency that’s supposed to regulate most of big tech (and plenty else) — which writes in the annual report that it increased its full time staff from 110 to 140 last year — did not get all the funding it asked for from the Irish government.
So it also has the hard cap of its own budget to reckon with (just €15.3M in 2019) vs — for example — Google’s parent Alphabet’s $46.1BN in full year 2019 revenue. So, er, do the math.
Nonetheless the pressure is firmly now on Ireland for major GDPR enforcements to flow.
One year of major enforcement inaction could be filed under ‘bedding in’; but two years in without any major decisions would not be a good look. (It has previously said the first decisions will come early this year — so seems to be hoping to have something to show for GDPR’s 2nd birthday.)
Some of the high profile complaints crying out for regulatory action include behavioral ads served via real-time bidding programmatic advertising (which the UK data watchdog conceded half a year ago is rampantly unlawful); cookie consent banners (which remain a Swiss Cheese of non-compliance); and adtech platforms cynically forcing consent from users by requiring they agree to being microtargeted with ads to access the (‘free’) service. (Thing is GDPR stipulates that consent as a legal basis must be freely given and can’t be bundled with other stuff, so… )
Full disclosure: TechCrunch’s parent company, Verizon Media (née Oath), is also under ongoing investigation by the DPC — which is looking at whether it meets GDPR’s transparency requirements under Articles 12-14 of the regulation.
Seeking to put a positive spin on 2019’s total lack of a big tech privacy reckoning, commissioner Helen Dixon writes in the report: “2020 is going to be an important year. We await the judgment of the CJEU in the SCCs data transfer case; the first draft decisions on big tech investigations will be brought by the DPC through the consultation process with other EU data protection authorities, and academics and the media will continue the outstanding work they are doing in shining a spotlight on poor personal data practices.”
In further remarks to the media Dixon said: “At the Data Protection Commission, we have been busy during 2019 issuing guidance to organisations, resolving individuals’ complaints, progressing larger-scale investigations, reviewing data breaches, exercising our corrective powers, cooperating with our EU and global counterparts and engaging in litigation to ensure a definitive approach to the application of the law in certain areas.
“Much more remains to be done in terms of both guiding on proportionate and correct application of this principles-based law and enforcing the law as appropriate. But a good start is half the battle and the DPC is pleased at the foundations that have been laid in 2019. We are already expanding our team of 140 to meet the demands of 2020 and beyond.”
One notable date this year also falls when GDPR turns two — because a Commission review of how the regulation is functioning is looming in May.
That’s one deadline that may help to concentrate minds on issuing decisions.
Per the DPC report, the largest category of complaints it received last year fell under ‘access request’ issues — whereby data controllers are failing to give up (all) people’s data when asked — which amounted to 29% of the total; followed by disclosure (19%); fair processing (16%); e-marketing complaints (8%); and right to erasure (5%).

On the security front, the vast bulk of notifications received by the DPC related to unauthorised disclosure of data (aka breaches) — with a total across the private and public sector of 5,188 vs just 108 for hacking (though the second largest category was actually lost or stolen paper, with 345).
There were also 161 notifications of phishing; 131 notifications of unauthorized access; 24 notifications of malware; and 17 of ransomware.

Europe sets out plan to boost data reuse and regulate “high risk” AIs

European Union lawmakers have set out a first bundle of proposals for a new digital strategy for the bloc, one that’s intended to drive digitalization across all industries and sectors — and enable what Commission president Ursula von der Leyen has described as ‘A Europe fit for the Digital Age‘.
It could also be summed up as a ‘scramble for AI’, with the Commission keen to rub out barriers to the pooling of massive European data sets in order to power a new generation of data-driven services as a strategy to boost regional competitiveness vs China and the U.S.
Pushing for the EU to achieve technological sovereignty is a key plank of von der Leyen’s digital policy plan for the 27-Member State bloc.
Presenting the latest on her digital strategy to press in Brussels today, she said: “We want the digital transformation to power our economy and we want to find European solutions in the digital age.”
The top-line proposals are:
AI
Rules for “high risk” AI systems such as in health, policing, or transport requiring such systems are “transparent, traceable and guarantee human oversight”
A requirement that unbiased data is used to train high-risk systems so that they “perform properly, and to ensure respect of fundamental rights, in particular non-discrimination”
Consumer protection rules so authorities can “test and certify” data used by algorithms in a similar way to existing rules that allow for checks to be made on products such as cosmetics, cars or toys
A “broad debate” on the circumstances in which the use of remote biometric identification could be justified
A voluntary labelling scheme for lower risk AI applications
Proposing the creation of an EU governance structure to ensure a framework for compliance with the rules and avoid fragmentation across the bloc
Data
A regulatory framework covering data governance, access and reuse between businesses, between businesses and government, and within administrations to create incentives for data sharing, which the Commission says will establish “practical, fair and clear rules on data access and use, which comply with European values and rights such as personal data protection, consumer protection and competition rules” 
A push to make public sector data more widely available by opening up “high-value datasets” to enable their reuse to foster innovation
Support for cloud infrastructure platforms and systems to support the data reuse goals. The Commission says it will contribute to investments in European High Impact projects on European data spaces and trustworthy and energy efficient cloud infrastructures
Sectoral specific actions to build European data spaces that focus on specific areas such as industrial manufacturing, the green deal, mobility or health
The full data strategy proposal can be found here.
While the Commission’s white paper on AI “excellence and trust” is here.
Next steps will see the Commission taking feedback on the plan — as it kicks off public consultation on both proposals.
A final draft is slated by the end of the year after which the various EU institutions will have their chance to chip into (or chip away at) the plan. So how much policy survives for the long haul remains to be seen.
Tech for good
At a press conference following von der Leyen’s statement Margrethe Vestager, the Commission EVP who heads up digital policy, and Thierry Breton, commissioner for the internal market, went into some of the detail around the Commission’s grand plan for “shaping Europe’s digital future”.
The digital policy package is meant to define how we shape Europe’s digital future “in a way that serves us all”, said Vestager.
The strategy aims to unlock access to “more data and good quality data” to fuel innovation and underpin better public services, she added.
The Commission’s digital EVP Margrethe Vestager discussing the AI whitepaper
Collectively, the package is about embracing the possibilities AI creates while managing the risks, she also said, adding that: “The point obviously is to create trust, rather than fear.”
She noted that the two policy pieces being unveiled by the Commission today, on AI and data, form part of a more wide-ranging digital and industrial strategy whole with additional proposals still to be set out.
“The picture that will come when we have assembled the puzzle should illustrate three objectives,” she said. “First that technology should work for people and not the other way round; it is first and foremost about purpose. The development, the deployment, the uptake of technology must work in the same direction to make a real positive difference in our daily lives.
“Second that we want a fair and competitive economy — a full Single Market where companies of all sizes can compete on equal terms, where the road from garage to scale up is as short as possible. But it also means an economy where the market power held by a few incumbents cannot be used to block competition. It also means an economy where consumers can take it for granted that their rights are being respected and profits are being taxed where they are made.”
Thirdly, she said the Commission plan would support “an open, democratic and sustainable society”.
“This means a society where citizens can control the data that they provide, where digital platforms are accountable for the contents that they feature… This is a fundamental thing — that while we use new digital tools, use AI as a tool, that we build a society based on our fundamental rights,” she added, trailing a forthcoming democracy action plan.
Digital technologies must also actively enable the green transition, said Vestager — pointing to the Commission’s pledge to achieve carbon neutrality by 2050. Digital, satellite, GPS and sensor data would be crucial to this goal, she suggested.
“More than ever a green transition and digital transition go hand in hand.”
On the data package Breton said the Commission will launch a European and industrial cloud platform alliance to drive interest in building the next gen platforms he said would be needed to enable massive big data sharing across the EU — tapping into 5G and edge computing.
“We want to mobilize up to €2BN in order to create and mobilize this alliance,” he said. “In order to run this data you need to have specific platforms… Most of this data will be created locally and processed locally — thanks to 5G critical network deployments but also locally to edge devices. By 2030 we expect on the planet to have 500BN connected devices… and of course all the devices will exchange information extremely quickly. And here of course we need to have specific mini cloud or edge devices to store this data and to interact locally with the AI applications embedded on top of this.
“And believe me the requirement for these platforms are not at all the requirements that you see on the personal b2c platform… And then we need of course security and cyber security everywhere. You need of course latencies. You need to react in terms of millisecond — not tenths of a second. And that’s a totally different infrastructure.”
“We have everything in Europe to win this battle,” he added. “Because no one has expertise of this battle and the foundation — industrial base — than us. And that’s why we say that maybe the winner of tomorrow will not be the winner of today or yesterday.”
Trustworthy artificial intelligence
On AI Vestager said the major point of the plan is “to build trust” — by using a dual push to create what she called “an ecosystem of excellence” and another focused on trust.
The first piece includes a push by the Commission to stimulate funding, including in R&D and support for research such as by bolstering skills. “We need a lot of people to be able to work with AI,” she noted, saying it would be essential for small and medium sized businesses to be “invited in”.
On trust the plan aims to use risk to determine how much regulation is involved, with the most stringent rules being placed on what it dubs “high risk” AI systems. “That could be when AI tackles fundamental values, it could be life or death situation, any situation that could cause material or immaterial harm or expose us to discrimination,” said Vestager.
To scope this the Commission approach will focus on sectors where such risks might apply — such as energy and recruitment.
If an AI product or service is identified as posing a risk then the proposal is for an enforcement mechanism to test that the product is safe before it is put into use. These proposed “conformity assessments” for high risk AI systems include a number of obligations Vestager said are based on suggestions by the EU’s High Level Expert Group on AI — which put out a slate of AI policy recommendations last year.
The four requirements attached to this bit of the proposals are: 1) that AI systems should be trained using data that “respects European values and rules” and that a record of such data is kept; 2) that an AI system should provide “clear information to users about its purpose, its capabilities but also its limits” and that it be clear to users when they are interacting with an AI rather than a human; 3) AI systems must be “technically robust and accurate in order to be trustworthy”; and 4) they should always ensure “an appropriate level of human involvement and oversight”.
Obviously there are big questions about how such broad-brush requirements will be measured and stood up (as well as actively enforced) in practice.
If an AI product or service is not identified as high risk Vestager noted there would still be regulatory requirements in play — such as the need for developers to comply with existing EU data protection rules.
In her press statement, Commission president von der Leyen highlighted a number of examples of how AI might power a range of benefits for society — from “better and earlier” diagnosis of diseases like cancer to helping with her parallel push for the bloc to be carbon neutral by 2050, such as by enabling precision farming and smart heating — emphasizing that such applications rely on access to big data.
“Artificial intelligence is about big data,” she said. “Data, data and again data. And we all know that the more data we have the smarter our algorithms. This is a very simple equation. Therefore it is so important to have access to data that are out there. This is why we want to give our businesses but also the researchers and the public services better access to data.”
“The majority of data we collect today are never ever used even once. And this is not at all sustainable,” she added. “In these data we collect that are out there lies an enormous amount of precious ideas, potential innovation, untapped potential we have to unleash — and therefore we follow the principle that in Europe we have to offer data spaces where you can not only store your data but also share with others. And therefore we want to create European data spaces where businesses, governments and researchers can not only store their data but also have access to other data they need for their innovation.”
She too stressed the need for AI regulation, including to guard against the risk of biased algorithms — saying “we want citizens to trust the new technology”. “We want the application of these new technologies to deserve the trust of our citizens. This is why we are promoting a responsible, human centric approach to artificial intelligence,” she added.
She said the planned restrictions on high risk AI would apply in fields such as healthcare, recruitment, transportation, policing and law enforcement — and potentially others.
“We will be particularly careful with sectors where essential human interests and rights are at stake,” she said. “Artificial intelligence must serve people. And therefore artificial intelligence must always comply with people’s rights. This is why a person must always be in control of critical decisions and so called ‘high risk AI’ — this is AI that potentially interferes with people’s rights — have to be tested and certified before they reach our single market.”
“Today’s message is that artificial intelligence is a huge opportunity in Europe, for Europe. We do have a lot but we have to unleash this potential that is out there. We want this innovation in Europe,” von der Leyen added. “We want to encourage our businesses, our researchers, the innovators, the entrepreneurs, to develop artificial intelligence and we want to encourage our citizens to feel confident to use it in Europe.”
Towards a rights-respecting common data space
The European Commission has been working on building what it dubs a “data economy” for several years at this point, plugging into its existing Digital Single Market strategy for boosting regional competitiveness.
Its aim is to remove barriers to the sharing of non-personal data within the single market. The Commission has previously worked on regulation to ban most data localization, as well as setting out measures to encourage the reuse of public sector data and open up access to scientific data.
Healthcare data sharing has also been in its sights, with policies to foster interoperability around electronic health records, and it’s been pushing for more private sector data sharing — both b2b and business-to-government.
“Every organisation should be able to store and process data anywhere in the European Union,” it wrote in 2018. It has also called the plan a “common European data space“. Aka “a seamless digital area with the scale that will enable the development of new products and services based on data”.
The focus on freeing up the flow of non-personal data is intended to complement the bloc’s long-standing rules on protecting personal data. The General Data Protection Regulation (GDPR), which came into force in 2018, has reinforced EU citizens’ rights around the processing of their personal information — updating and bolstering prior data protection rules.
The Commission views GDPR as a major success story by merit of how it’s exported conversations about EU digital standards to a global audience.
But it’s fair to say that back home enforcement of the GDPR remains a work in progress, some 21 months in — with many major cross-border complaints attached to how tech and adtech giants are processing people’s data still sitting on the desk of the Irish Data Protection Commission where multinationals tend to locate their EU HQ as a result of favorable corporate tax arrangements.
The Commission’s simultaneous push to encourage the development of AI arguably risks heaping further pressure on the GDPR — as both private and public sectors have been quick to see model-making value locked up in citizens’ data.
Already across Europe there are multiple examples of companies and/or state authorities working on building personal data-fuelled diagnostic AIs for healthcare; using machine learning for risk scoring of benefits claimants; and applying facial recognition as a security aid for law enforcement, to give three examples.
There has also been controversy fast following such developments. Including around issues such as proportionality and the question of consent to legally process people’s data — both under GDPR and in light of EU fundamental privacy rights as well as those set out in the European Convention of Human Rights.
Only this month a Dutch court ordered the state to cease use of a blackbox algorithm for assessing the fraud risk of benefits claimants on human rights grounds — objecting to a lack of transparency around how the system functions and therefore also “insufficient” controllability.
The von der Leyen Commission, which took up its five-year mandate in December, is alive to rights concerns about how AI is being applied, even as it has made it clear it intends to supercharge the bloc’s ability to leverage data and machine learning technologies — eyeing economic gains.
Commission president, Ursula von der Leyen, visiting the AI Intelligence Center in Brussels (via the EC’s EbS Live AudioVisual Service)
The Commission president committed to publishing proposals to regulate AI within the first 100 days — saying she wants a European framework to steer application to ensure powerful learning technologies are used ethically and for the public good.
But a leaked draft of the plan to regulate AI last month suggested it would step back from imposing even a temporary ban on the use of facial recognition technology — leaning instead towards tweaks to existing rules and sector/app specific risk-assessments and requirements.
It’s clear there are competing views at the top of the Commission on how much policy intervention is needed on the tech sector.
Breton has previously voiced opposition to regulating AI — telling the EU parliament just before he was confirmed in post that he “won’t be the voice of regulating AI“.
While Vestager has been steady in her public backing for a framework to govern how AI is applied, talking at her hearing before the EU parliament of the importance of people’s trust and Europe having its own flavor of AI that must “serve humans” and have “a purpose”.
“I don’t think that we can be world leaders without ethical guidelines,” she said then. “I think we will lose it if we just say no let’s do as they do in the rest of the world — let’s pool all the data from everyone, no matter where it comes from, and let’s just invest all our money.”
At the same time Vestager signalled a willingness to be pragmatic in the scope of the rules and how they would be devised — emphasizing the need for speed and agreeing the Commission would need to be “very careful not to over-regulate”, suggesting she’d accept a core minimum to get rules up and running.
Today’s proposal steers away from more stringent AI rules — such as a ban on facial recognition in public places. On biometric AI technologies Vestager described some existing uses as “harmless” during today’s press conference — such as unlocking a phone or for automatic border gates — whereas she stressed the difference in terms of rights risks related to the use of remote biometric identification tech such as facial recognition.
“With this white paper the Commission is launching a debate on the specific circumstance — if any — which might justify the use of such technologies in public space,” she said, putting some emphasis on the word ‘any’.
The Commission is encouraging EU citizens to put questions about the digital strategy for Vestager to answer tomorrow, in a live Q&A at 17.45 CET on Facebook, Twitter and LinkedIn — using the hashtag #DigitalEU

Do you want to know more on the EU’s digital strategy? Use #DigitalEU to share your questions and we will ask them to Margrethe Vestager this Thursday. pic.twitter.com/I90hCR6Gcz
— European Commission (@EU_Commission) February 18, 2020

Platform liability
There is more to come from the Commission on the digital policy front — with a Digital Services Act in the works to update pan-EU liability rules around Internet platforms.
That proposal is slated to be presented later this year and both commissioners said today that details remain to be worked out. The possibility that the Commission will propose rules to more tightly regulate online content platforms already has content farming adtech giants like Facebook cranking up their spin cycles.
During today’s press conference Breton said he would always push for what he dubbed “shared governance” but he warned several times that if platforms don’t agree an acceptable way forward “we will have to regulate” — saying it’s not up to European society to adapt to the platforms but for them to adapt to the EU.
“We will do this within the next eight months. It’s for sure. And everybody knows the rules,” he said. “Of course we’re entering here into dialogues with these platforms and like with any dialogue we don’t know exactly yet what will be the outcome. We may find at the end of the day a good coherent joint strategy which will fulfil our requirements… regarding the responsibilities of the platform. And by the way this is why personally when I meet with them I will always prefer a shared governance. But we have been extremely clear if it doesn’t work then we will have to regulate.”
Internal market commissioner, Thierry Breton

Ring slightly overhauls security and privacy, but it’s still not enough

Security camera maker Ring is updating its service to improve account security and give more control when it comes to privacy. Once again, this is yet another update that makes the overall experience slightly better but the Amazon-owned company is still not doing enough to protect its users.
First, Ring is reversing its stance when it comes to two-factor authentication. Two-factor authentication is now mandatory — you can’t even opt out. So the next time you log in to your Ring account, you’ll receive a six-digit code via email or text message to confirm your login request.
This is very different from what Ring founder Jamie Siminoff told me at CES in early January:

“So now, we’re going one step further, which is for two-factor authentication. We really want to make it an opt-out, not an opt-in. You still want to let people opt out of it because there are people that just don’t want it. You don’t want to force it, but you want to make it as forceful as you can be without hurting the customer experience.”

Security experts all say that sending you a code by text message isn’t perfect. It’s better than no form of two-factor authentication, but text messages are not secure. They’re also tied to your phone number. That’s why SIM-swapping attacks are on the rise.
As for sending you a code via email, it really depends on your email account. If you haven’t enabled two-factor authentication on your email account, then Ring’s implementation of two-factor authentication is basically worthless. Ring should let you use app-based two-factor with the ability to turn off other methods in your account.
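For context, authenticator apps generate login codes on the device itself using the time-based one-time password algorithm (TOTP), so nothing needs to travel over SMS or email at sign-in. Below is a minimal sketch in Python of how such codes are derived; the shared secret is hypothetical and this is not Ring’s implementation, just the standard RFC 6238 approach.

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    # Decode the base32 secret that the authenticator app and the service both hold
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of 30-second steps since the Unix epoch
    counter = int(time.time()) // period
    # HMAC-SHA1 over the counter (RFC 6238 builds on RFC 4226's HOTP)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset taken from the last nibble
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical shared secret, for illustration only
print(totp("JBSWY3DPEHPK3PXP"))

The point is that intercepting a text message or breaking into a weakly protected email inbox no longer yields a usable code; an attacker would need the secret stored in the authenticator app itself.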
And that doesn’t solve Ring’s password issues. As Motherboard originally found out, Ring doesn’t prevent you from using a weak password and reusing passwords that have been compromised in security breaches from third-party services.
A couple of weeks ago, TechCrunch’s Zack Whittaker was able to create a Ring account with “12345678” and “password” as the password. He created another account with “password” a few minutes ago.

When it comes to privacy, the EFF called out Ring’s app as it shares a ton of information with third-party services, such as branch.io, mixpanel.com, appsflyer.com and facebook.com. Worse, Ring doesn’t require meaningful consent from the user.
You can now opt out of third-party services that help Ring serve personalized advertising. As for analytics, Ring is temporarily removing most third-party analytics services from its apps (but not all). The company plans on adding a menu to opt out of third-party analytics services in a future update.
Enabling third-party trackers and letting you opt out later isn’t GDPR compliant. So I hope the onboarding experience is going to change as well; the company shouldn’t enable these features without proper consent in the first place.
Ring could have used this opportunity to adopt a far stronger stance when it comes to privacy. The company sells devices that you set up in your garden, your living room and sometimes even your bedroom. Users certainly don’t want third-party companies to learn more about their interactions with Ring’s services. But it seems like Ring’s motto is still: “If we can do it, why shouldn’t we do it.”

Class action suit against Clearview AI cites Illinois law that cost Facebook $550M

Just two weeks ago Facebook settled a lawsuit alleging violations of privacy laws in Illinois for the considerable sum of $550 million. Now controversial startup Clearview AI, which has gleefully admitted to scraping and analyzing the data of millions, is the target of a new lawsuit citing similar violations.
Clearview made waves earlier this year with a business model seemingly predicated on wholesale abuse of public-facing data on Twitter, Facebook, Instagram and so on. If your face is visible to a web scraper or public API, Clearview either has it or wants it and will be submitting it for analysis by facial recognition systems.
Just one problem: That’s illegal in Illinois, and you ignore this to your peril, as Facebook found.

The lawsuit, filed yesterday on behalf of several Illinois citizens and first reported by Buzzfeed News, alleges that Clearview “actively collected, stored and used Plaintiffs’ biometrics — and the biometrics of most of the residents of Illinois — without providing notice, obtaining informed written consent or publishing data retention policies.”
Not only that, but this biometric data has been licensed to many law enforcement agencies, including within Illinois itself.
All this is allegedly in violation of the Biometric Information Privacy Act, a 2008 law that has proven to be remarkably long-sighted and resistant to attempts by industry (including, apparently, by Facebook while it fought its own court battle) to water it down.
The lawsuit (filed in New York, where Clearview is based) is at its very earliest stages: it has only been assigned a judge, with summonses sent to Clearview and CDW Government, the intermediary for selling its services to law enforcement. It’s impossible to say how it will play out at this point, but the success of the Facebook suit and the similarity of the two cases (essentially the automatic and undisclosed ingestion of photos by a facial recognition engine) suggest that this one has legs.
The scale is difficult to predict, and likely would depend largely on disclosure by Clearview as to the number and nature of its analysis of photos of those protected by BIPA.
Even if Clearview were to immediately delete all the information it has on citizens of Illinois, it would still likely be liable for its previous acts. A federal judge in Facebook’s case wrote: “the development of face template using facial-recognition technology without consent (as alleged here) invades an individual’s private affairs and concrete interests,” and is therefore actionable. That’s a strong precedent and the similarities are undeniable — not that they won’t be denied.
You can read the text of the complaint here.

Surprise! Audit finds automated license plate reader programs are a privacy nightmare

Automated license plate readers, ALPRs, would be controversial even if they were responsibly employed by the governments that run them. Unfortunately, and to no one’s surprise, the way they actually operate is “deeply disturbing and confirm[s] our worst fears about the misuse of this data,” according to an audit of the programs instigated by a Californian legislator.
“What we’ve learned today is that many law enforcement agencies are violating state law, are retaining personal data for lengthy periods of time, and are disseminating this personal data broadly. This state of affairs is totally unacceptable,” said California State Senator Scott Wiener (D-SF), who called for the audit of these programs. The four agencies audited were the LAPD, Fresno PD, and the Marin and Sacramento County Sheriff’s Departments.

The inquiry revealed that the programs can barely justify their existence and do not seem to have, let alone follow, best practices for security and privacy:
Los Angeles alone stores 320 million license plate images, 99.9 percent of which were not being sought by law enforcement at the time of collection.
Those images were shared with “hundreds” of other agencies but there was no record of how this was justified legally or accomplished properly.
None of the agencies has a privacy policy in line with requirements established in 2016. Three could not adequately explain access and oversight permissions, or how and when data would or could be destroyed, “and the remaining agency has not developed a policy at all.”
There were almost no policies or protections regarding account creation and use, and the agencies have never audited their own systems.
Three of the agencies store their images and data with a cloud vendor, the contract for which had inadequate, if any, protections for that data.
In other words, “there is significant cause for alarm,” the press release stated. As the programs appear to violate state law they may be prosecuted, and as existing law appears to be inadequate to the task of regulating them, new ones must be proposed, Wiener said, and he is working on it.
The full report can be read here.

A new senate bill would create a US data protection agency

Europe’s data protection laws are some of the strictest in the world, and have long been a thorn in the side of the data-guzzling Silicon Valley tech giants since they colonized vast swathes of the internet.
Two decades later, one Democratic senator wants to bring many of those concepts to the United States.
Sen. Kirsten Gillibrand (D-NY) has published a bill which, if passed, would create a U.S. federal data protection agency designed to protect the privacy of Americans and with the authority to enforce data practices across the country. The bill, which Gillibrand calls the Data Protection Act, will address a “growing data privacy crisis” in the U.S., the senator said.
The U.S. is one of only a few countries without a data protection law, finding it in the same company as Venezuela, Libya, Sudan and Syria. Gillibrand said the U.S. is “vastly behind” other countries on data protection.
Gillibrand said a new data protection agency would “create and meaningfully enforce” data protection and privacy rights federally.
“The data privacy space remains a complete and total Wild West, and that is a huge problem,” the senator said.
The bill comes at a time when tech companies are facing increased attention from state and federal regulators over data and privacy practices. Last year saw Facebook settle a $5 billion privacy case with the Federal Trade Commission, which critics decried for failing to bring civil charges or levy any meaningful consequences. Months later, Google settled a child privacy case that cost it $170 million — costing the search giant about a day’s worth of its revenue.
Gillibrand pointedly called out Google and Facebook for “making a whole lot of money” from their empires of data, she wrote in a Medium post. Americans “deserve to be in control of [their] own data,” she wrote.
At its heart, the bill would — if signed into law — allow the newly created agency to hear and adjudicate complaints from consumers and declare certain privacy invading tactics as unfair and deceptive. As the government’s “referee,” the agency would take point on federal data protection and privacy matters, such as launching investigations against companies accused of wrongdoing. Gillibrand’s bill specifically takes issue with “take-it-or-leave-it” provisions, notably websites that compel a user to “agree” to allowing cookies with no way to opt out. (TechCrunch’s parent company Verizon Media enforces a ‘consent required’ policy for European users under GDPR, though most Americans never see the prompt.)
Through its enforcement arm, the would-be federal agency would also have the power to bring civil action against companies, and fine them for egregious breaches of the law up to $1 million a day, subject to a court’s approval.
The bill would transfer some authorities from the Federal Trade Commission to the new data protection agency.
Gillibrand’s bill lands just a month after California’s consumer privacy law took effect, more than a year after it was signed into law. The law extended much of Europe’s revised privacy laws, known as GDPR, to the state. But Gillibrand’s bill would not affect state laws like California’s, her office confirmed in an email.
Privacy groups and experts have already offered positive reviews.
Caitriona Fitzgerald, policy director at the Electronic Privacy Information Center, said the bill is a “bold, ambitious proposal.” Other groups, including Color of Change and Consumer Action, praised the effort to establish a federal data protection watchdog.
Michelle Richardson, director of the Privacy and Data Project at the Center for Democracy and Technology, reviewed a summary of the bill.
“The summary seems to leave a lot of discretion to executive branch regulators,” said Richardson. “Many of these policy decisions should be made by Congress and written clearly into statute.” She warned it could take years to know if the new regime has any meaningful impact on corporate behaviors.
Gillibrand’s bill stands alone — the senator is the only sponsor on the bill. But given the appetite of some lawmakers on both sides of the aisle to crash the Silicon Valley data party, it’s likely to pick up bipartisan support in no time.
Whether it makes it to the president’s desk without a fight from the tech giants remains to be seen.

Facebook Dating launch blocked in Europe after it fails to show privacy workings

Facebook has been left red-faced after being forced to call off the launch date of its dating service in Europe because it failed to give its lead EU data regulator enough advance warning — including failing to demonstrate it had performed a legally required assessment of privacy risks.
Late yesterday Ireland’s Independent.ie newspaper reported that the Irish Data Protection Commission (DPC) had sent agents to Facebook’s Dublin office seeking documentation that Facebook had failed to provide — using inspection and document seizure powers set out in Section 130 of the country’s Data Protection Act.
In a statement on its website the DPC said Facebook first contacted it about the rollout of the dating feature in the EU on February 3.
“We were very concerned that this was the first that we’d heard from Facebook Ireland about this new feature, considering that it was their intention to roll it out tomorrow, 13 February,” the regulator writes. “Our concerns were further compounded by the fact that no information/documentation was provided to us on 3 February in relation to the Data Protection Impact Assessment [DPIA] or the decision-making processes that were undertaken by Facebook Ireland.”
Facebook announced its plan to get into the dating game all the way back in May 2018, trailing its Tinder-encroaching idea to bake a dating feature for non-friends into its social network at its F8 developer conference.
It went on to test launch the product in Colombia a few months later. And since then it’s been gradually adding more countries in South America and Asia. It also launched in the US last fall — soon after it was fined $5BN by the FTC for historical privacy lapses.
At the time of its US launch Facebook said dating would arrive in Europe by early 2020. It just didn’t think to keep its lead EU privacy regulator in the loop — despite the DPC having multiple (ongoing) investigations into other Facebook-owned products at this stage.
Which is either extremely careless or, well, an intentional fuck you to privacy oversight of its data-mining activities. (Among multiple probes being carried out under Europe’s General Data Protection Regulation, the DPC is looking into Facebook’s claimed legal basis for processing people’s data under the Facebook T&Cs, for example.)
The DPC’s statement confirms that its agents visited Facebook’s Dublin office on February 10 to carry out an inspection — in order to “expedite the procurement of the relevant documentation”.
Which is a nice way of the DPC saying Facebook spent a whole week still not sending it the required information.
“Facebook Ireland informed us last night that they have postponed the roll-out of this feature,” the DPC’s statement goes on.
Which is a nice way of saying Facebook fucked up and is being made to put a product rollout it’s been planning for at least half a year on ice.
The DPC’s head of communications, Graham Doyle, confirmed the enforcement action, telling us: “We’re currently reviewing all the documentation that we gathered as part of the inspection on Monday and we have posed further questions to Facebook and are awaiting the reply.”
“Contained in the documentation we gathered on Monday was a DPIA,” he added.
This begs the question why Facebook didn’t send the DPIA to the DPC on February 3 — unless of course this document did not actually exist on that date…
We’ve reached out to Facebook for comment and to ask when it carried out the DPIA.
We’ve also asked the DPC to confirm its next steps. The regulator could ask Facebook to make changes to how the product functions in Europe if it’s not satisfied it complies with EU laws.
Under GDPR there’s a requirement for data controllers to bake privacy by design and default into products which are handling people’s information. And a dating product clearly is.
While a DPIA — which is a process whereby planned processing of personal data is assessed to consider the impact on the rights and freedoms of individuals — is a requirement under the GDPR when, for example, individual profiling is taking place or there’s processing of sensitive data on a large scale.
Again, the launch of a dating product on a platform such as Facebook — which has hundreds of millions of regional users — would be a clear-cut case for such an assessment to be carried out ahead of any launch.

Radar, a location data startup, says its “big bet” is on putting privacy first

Pick any app on your phone, and there’s a greater than average chance that it’s tracking your location right now.
Sometimes they don’t even tell you. Your location can be continually collected and uploaded, then monetized by advertisers and other data tracking firms. These companies also sell the data to the government — no warrants needed. And even if you’re app-less, your phone company knows where you are at any given time, and for the longest time sold that data to anyone who wanted it.
Location data is some of the most personal information we have — yet few think much about it. Our location reveals where we go, when, and often why. It can be used to know our favorite places and our routines, and also who we talk to. And yet it’s spilling out of our phones every second of every day to private companies, subject to little regulation or oversight, building up precise maps of our lives. Headlines sparked anger and pushed lawmakers into taking action. And consumers are becoming increasingly aware of their tracked activity thanks to phone makers, like Apple, alerting users to background location tracking. Foursquare, one of the biggest location data companies, even called on Congress to do more to regulate the sale of location data.
But location data is not going anywhere. It’s a convenience that’s just too convenient, and it’s an industry that’s growing from strength to strength. The location data market was valued at $10 billion last year, with it set to balloon in size by more than two-fold by 2027.
There is appetite for change. Radar, a location data startup based in New York, promised in a recent blog post that it will “not sell any data we collect, and we do not share location data across customers.”
It’s a promise that Radar chief executive Nick Patrick said he’s willing to bet the company on.
“We want to be that location layer that unlocks the next generation of experiences but we also want to do it in a privacy conscious way,” Patrick told TechCrunch. “That’s our big bet.”
Developers integrate Radar into their apps. Those app makers can create location geofences around their businesses, like any Walmart or Burger King. When a user enters that location, the app knows to serve relevant notifications or alerts, making it functionally just like any other location data provider.
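To make that geofencing mechanic concrete, here is a minimal sketch (in Python, with invented coordinates, radius and notification hook; it is not Radar’s SDK) of how an app might decide a device has just crossed into a geofence around a store:

```python
import math

# Illustrative only: a toy circular geofence check, not Radar's SDK.
EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

# Hypothetical geofence around a single store.
STORE = {"name": "Example Store", "lat": 40.7506, "lon": -73.9935, "radius_m": 150}

def on_location_update(lat, lon, was_inside):
    """Fire a notification only on the transition from outside to inside."""
    inside = haversine_m(lat, lon, STORE["lat"], STORE["lon"]) <= STORE["radius_m"]
    if inside and not was_inside:
        print(f"Entered geofence for {STORE['name']}: show the in-app offer")
    return inside
```

A production SDK layers permission handling, battery-efficient region monitoring and server-managed geofences on top of this basic check, which is the part Radar sells to developers.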
But that’s where Patrick says Radar deviates.
“We want to be the most privacy-first player,” Patrick said. Radar bills itself as a location data software-as-a-service company, rather than an ad tech company like its immediate rivals. That may sound like a marketing point — it is — but it’s also an important distinction, Patrick says, because it changes how the company makes its money. Instead of monetizing the collected data, Radar prices its platform based on the number of monthly active users that use the apps with Radar inside.
“We’re not going to package that up into an audience segment and sell it on an ad exchange,” he said. “We’re not going to pull all of the data together from all the different devices that we’re installed on and do foot traffic analytics or attribution.”
But that trust doesn’t come easy, nor should it. Some of the most popular apps have lost the trust of their users through privacy-invasive practices, like collecting locations from users without their knowledge or permission by scanning nearby Bluetooth beacons or Wi-Fi networks to infer where a person is.
We were curious and ran some of the apps that use Radar — Joann, GasBuddy, DraftKings and others — through a network traffic analyzer to see what was going on under the hood. We found that Radar only activated when location permissions were granted on the device — something apps have tried to get around in the past. The apps we checked instantly sent our precise location data back to Radar — which was to be expected — along with the device type, software version, and little else. The data collected by Radar is significantly less than what comparable apps share with their developers, but it still allows integrations with third-party platforms to make use of that location data. Via, a popular ride-sharing app, uses a person’s location, collected by Radar, to deliver notifications and promotions to users at airports and other places of interest.
The company boasts its technology is used in apps on more than 100 million device installs.
“We see a ton of opportunity around enabling folks to build location, but we also see that the space has been mishandled,” said Patrick. “We think the location space is in need of a technical leader but also an ethical leader that can enable the stuff in a privacy conscious way.”
It was a convincing pitch for Radar’s investors, which just injected $20 million into the company in a Series B round led by Accel — a substantial step up from its $8 million Series A. Patrick said the round will help the company build out the platform further. One feature on Radar’s to-do list is to let the platform take advantage of on-device processing, so that “no user event data ever touches Radar’s servers,” said Patrick. The raise will also help the company expand its physical footprint on the west coast by opening an office in San Francisco. Its home base in New York will expand too, he said, increasing the company’s headcount from its current two dozen employees.
“Radar stands apart due to its focus on infrastructure rather than ad tech,” said Vas Natarajan, a partner at Accel, who also took a seat on Radar’s board.
Two Sigma Ventures, Heavybit, Prime Set, and Bedrock Capital participated in the round.
Patrick said his pitch is also working for apps and developers, which recognize that their users are becoming more aware of privacy issues. He’s seen companies, some of which he now calls customers, that are increasingly looking for more privacy-focused partners and vendors, not least to bolster their own respective reputations.
It’s healthy to be skeptical. Given the past year, it’s hard to have any faith in any location data company, let alone embrace one. And yet it’s a compelling pitch for the app community that only through years of misdeeds and a steady stream of critical headlines is being forced to repair its image.
But a company’s words are only as strong as its actions, and only time will tell if they hold up.

Foursquare CEO calls on Congress to regulate the location data industry

UK names its pick for social media ‘harms’ watchdog

The UK government has taken the next step in its grand policymaking challenge to tame the worst excesses of social media by regulating a broad range of online harms — naming the existing communications watchdog, Ofcom, as its preferred pick for enforcing rules around ‘harmful speech’ on platforms such as Facebook, Snapchat and TikTok in future.
Last April the previous Conservative-led government laid out populist but controversial proposals to legislate to lay a duty of care on Internet platforms — responding to growing public concern about the types of content kids are being exposed to online.
Its white paper covers a broad range of online content — from terrorism, violence and hate speech, to child exploitation, self-harm/suicide, cyber bullying, disinformation and age-inappropriate material — with the government setting out a plan to require platforms to take “reasonable” steps to protect their users from a range of harms.
However digital and civil rights campaigners warn the plan will have a huge impact on online speech and privacy, arguing it will put a legal requirement on platforms to closely monitor all users and apply speech-chilling filtering technologies on uploads in order to comply with very broadly defined concepts of harm — dubbing it state censorship. Legal experts are also critical.

Further, it requires social media companies to *prevent* ‘harmful’ (undefined) speech going online in the first place; & prevent ‘inappropriate’ (undefined) content recommendations.
So expect state-sponsored upload filters, recommendation systems & mass surveillance.
— Big Brother Watch (@BigBrotherWatch) February 12, 2020

The (now) Conservative majority government has nonetheless said it remains committed to the legislation.
Today it responded to some of the concerns being raised about the plan’s impact on freedom of expression, publishing a partial response to the public consultation on the Online Harms White Paper, although a draft bill remains pending, with no timeline confirmed.
“Safeguards for freedom of expression have been built in throughout the framework,” the government writes in an executive summary. “Rather than requiring the removal of specific pieces of legal content, regulation will focus on the wider systems and processes that platforms have in place to deal with online harms, while maintaining a proportionate and risk-based approach.”
It says it’s planning to set a different bar for content deemed illegal vs content that has “potential to cause harm”, with the heaviest content removal requirements being planned for terrorist and child sexual exploitation content. Whereas companies will not be forced to remove “specific pieces of legal content”, as the government puts it.
Ofcom, as the online harms regulator, will also not be investigating or adjudicating on “individual complaints”.
“The new regulatory framework will instead require companies, where relevant, to explicitly state what content and behaviour they deem to be acceptable on their sites and enforce this consistently and transparently. All companies in scope will need to ensure a higher level of protection for children, and take reasonable steps to protect them from inappropriate or harmful content,” it writes.
“Companies will be able to decide what type of legal content or behaviour is acceptable on their services, but must take reasonable steps to protect children from harm. They will need to set this out in clear and accessible terms and conditions and enforce these effectively, consistently and transparently. The proposed approach will improve transparency for users about which content is and is not acceptable on different platforms, and will enhance users’ ability to challenge removal of content where this occurs.”
Another requirement will be that companies have “effective and proportionate user redress mechanisms” — enabling users to report harmful content and challenge content takedown “where necessary”.
“This will give users clearer, more effective and more accessible avenues to question content takedown, which is an important safeguard for the right to freedom of expression,” the government suggests, adding that: “These processes will need to be transparent, in line with terms and conditions, and consistently applied.”
Ministers say they have not yet made a decision on what kind of liability senior management of covered businesses may face under the planned law, nor on additional business disruption measures — with the government saying it will set out its final policy position in the Spring.
“We recognise the importance of the regulator having a range of enforcement powers that it uses in a fair, proportionate and transparent way. It is equally essential that company executives are sufficiently incentivised to take online safety seriously and that the regulator can take action when they fail to do so,” it writes.
It’s also not clear how businesses will be assessed as being in (or out of) scope of the regulation.
“Just because a business has a social media page that does not bring it in scope of regulation,” the government response notes. “To be in scope, a business would have to operate its own website with the functionality to enable sharing of user-generated content, or user interactions. We will introduce this legislation proportionately, minimising the regulatory burden on small businesses. Most small businesses where there is a lower risk of harm occurring will not have to make disproportionately burdensome changes to their service to be compliant with the proposed regulation.”
The government is clear in its response that online harms legislation remains “a key legislative priority”.
“We have a comprehensive programme of work planned to ensure that we keep momentum until legislation is introduced as soon as parliamentary time allows,” it writes, describing today’s response report as “an iterative step as we consider how best to approach this complex and important issue” — and adding: “We will continue to engage closely with industry and civil society as we finalise the remaining policy.”
In the meantime, the government says it’s working on a package of measures “to ensure progress now on online safety” — including interim codes of practice, with guidance for companies on tackling terrorist and child sexual abuse and exploitation content online; an annual government transparency report, which it says it will publish “in the next few months”; and a media literacy strategy, to support public awareness of online security and privacy.
It adds that it expects social media platforms to “take action now to tackle harmful content or activity on their services” — ahead of the more formal requirements coming in.
Facebook-owned Instagram has come in for high-level pressure from ministers over how it handles content promoting self-harm and suicide after the media picked up on a campaign by the family of a schoolgirl who killed herself after being exposed to Instagram content encouraging self-harm.
Instagram subsequently announced changes to its policies for handling content that encourages or depicts self harm/suicide — saying it would limit how it could be accessed. This later morphed into a ban on some of this content.
The government said today that companies offering online services that involve user generated content or user interactions are expected to make use of what it dubs “a proportionate range of tools” — including age assurance, and age verification technologies — to prevent kids from accessing age-inappropriate content and “protect them from other harms”.
This is also the piece of the planned legislation intended to pick up the baton of the Digital Economy Act’s porn block proposals — which the government dropped last year, saying it would bake equivalent measures into the forthcoming Online Harms legislation.
The Home Office has been consulting with social media companies on devising robust age verification technologies for many months.
In its own response statement today, Ofcom — which would be responsible for policy detail under the current proposals — said it will work with the government to ensure “any regulation provides effective protection for people online”, and, pending appointment, “consider what we can do before legislation is passed”.

Ofcom responds to the Government’s announcement on online harms regulation: https://t.co/DTfMJkgIVU pic.twitter.com/Qgop0xcIcw
— Ofcom (@Ofcom) February 12, 2020

The Online Harms plan is not the only Internet-related work ongoing in Whitehall, with ministers noting that: “Work on electoral integrity and related online transparency issues is being taken forward as part of the Defending Democracy programme together with the Cabinet Office.”
Back in 2018 a UK parliamentary committee called for a levy on social media platforms to fund digital literacy programs to combat online disinformation and defend democratic processes, during an enquiry into the use of social media for digital campaigning. However the UK government has been slower to act on this front.
The former chair of the DCMS committee, Damian Collins, called today for any future social media regulator to have “real powers in law” — including the ability to “investigate and apply sanctions to companies which fail to meet their obligations”.
In the DCMS committee’s final report parliamentarians called for Facebook’s business to be investigated, raising competition and privacy concerns.

Jam lets you safely share streaming app passwords

Can’t afford Netflix and HBO and Spotify and Disney+…? Now there’s an app specially built for giving pals your passwords while claiming to keep your credentials safe. It’s called Jam, and the questionably legal service launched in private beta this morning. Founder John Backus tells TechCrunch in his first interview about Jam that it will let users save login details with local encryption, add friends they can then authorize to access their password for a chosen service, and broadcast to friends which of their subscriptions have room for people to piggyback on.
Jam is just starting to add users off its rapidly growing waitlist that you can join here, but when users get access, it’s designed to stay free to use. In the future, Jam could build a business by helping friends split the costs of subscriptions. There’s clearly demand. Over 80% of 13-24 year olds have given out or used someone else’s online TV password, according to a study by Hub of over 2,000 US consumers.
“The need for Jam was obvious. I don’t want to find out my ex-girlfriend’s roommate has been using my account again. Everyone shares passwords, but for consumers there isn’t a secure way to do that. Why?” Backus asks. “In the enterprise world, team password managers reflect the reality that multiple people need to access the same account, regularly. Consumers don’t have the same kind of system, and that’s bad for security and coordination.”
Thankfully, Backus isn’t some amateur when it comes to security. The Stanford computer science dropout and Thiel Fellow founded identity verification startup Cognito and decentralized credit scoring app Bloom. “Working in crypto at Bloom and with sensitive data at Cognito, I have a lot of experience building secure products with cryptography at the core.”
He also tells me that since everything saved in Jam is locally encrypted, even he can’t see it, and nothing would be exposed if the company were hacked. It uses protocols similar to 1Password’s: “Plaintext login information is never sent to our server, nor is your master password,” and “we use pretty straightforward public key cryptography.” Remember, your friend could always try to hijack your account and lock you out, though. And while those protocols may be hardened, TechCrunch can’t verify they’re perfectly implemented and fully secure within Jam.
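As a rough illustration of the “straightforward public key cryptography” pattern Backus describes (this is not Jam’s actual code; PyNaCl stands in for whatever library Jam uses), a credential can be encrypted on-device to a friend’s public key so that the server only ever relays ciphertext:

```python
# Illustrative sketch of local encryption for credential sharing using
# PyNaCl (libsodium bindings); not Jam's implementation.
from nacl.public import PrivateKey, SealedBox

# Each user generates a keypair on their own device.
friend_private = PrivateKey.generate()
friend_public = friend_private.public_key

# The sharer encrypts the credential locally, to the friend's public key.
credential = b"streaming-service:alice@example.com:correct horse battery staple"
ciphertext = SealedBox(friend_public).encrypt(credential)

# Only the ciphertext ever needs to leave the device; a server that
# relays it cannot read the plaintext or the master password.
assert SealedBox(friend_private).decrypt(ciphertext) == credential
```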

Whether facilitating password sharing is legal, and whether Netflix and its peers will send an army of lawyers to destroy Jam, remain open questions. We’ve reached out to several streaming companies for comment. When asked on Twitter about Jam helping users run afoul of their terms of service, Backus claims that “plenty of websites give you permission to share your account with others (with varying degrees of constraints) but users often don’t know these rules.”
However, sharing is typically supposed to be amongst a customer’s own devices or within their household, or they’re supposed to pay for a family plan. We asked Netflix, Hulu, CBS, Disney, and Spotify for comment, and did not receive any on-the-record responses. Spotify’s terms of service, for instance, specifically prohibit “providing your password to any other person or using any other person’s username and password”. Netflix’s terms insist that “the Account Owner should maintain control over the Netflix ready devices that are used to access the service and not reveal the password or details of the Payment Method associated to the account to anyone.”
Some might see Jam as ripping off the original content creators, though Backus claims that “Jam isn’t trying to take money out of anyone’s pocket. Spotify offers [family plan sharing for people under the same roof]. Many other companies offer similar bundled plans. I think people just underutilize things like this and it’s totally fair game.”
Netflix’s Chief Product Officer said in October that the company is monitoring password sharing and it’s looking at “consumer-friendly ways to push on the edges of that.” Meanwhile, The Alliance For Creativity and Entertainment that includes Netflix, Disney, Amazon, Comcast, and major film studios announced that its members will collaborate to address “piracy” including “what facilitates unauthorized access, including improper password sharing and inadequate encryption.”
That could lead to expensive legal trouble for Jam. “My past startups have done well, so I’ve had the pleasure of self-funding Jam so far” Backus says. But if lawsuits emerge or the app gets popular, he might need to find outside investors. “I only launched about 5 hours ago, but I’ll just say that I’m already in the process of upgrading my database tier due to signup growth.”

Eventually, the goal is to monetize, but not through a monthly subscription like the ones Backus expects competitors, including password-sharing browser extensions, to charge. Instead, “Jam will make money by helping users save money. We want to make it easy for users to track what they’re sharing and with whom so that they can settle up the difference at the end of each month,” Backus explains. It could charge “either a small fee in exchange for automatically settling debts between users and/or charging a percentage of the money we save users by recommending more efficient sharing setups.” Later, he sees a chance to provide recommendations for optimizing account management across networks of people while building native mobile apps.
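The settling-up idea is simple arithmetic; a toy sketch (invented numbers, not Jam’s logic) of what reconciling shared subscription costs at month’s end might look like:

```python
# Toy end-of-month settlement between an account owner and the friends
# sharing it; purely illustrative, not Jam's actual logic.
def monthly_settlement(subscriptions):
    """subscriptions: list of (owner, monthly_cost, members) tuples."""
    balances = {}
    for owner, cost, members in subscriptions:
        share = cost / len(members)
        for member in members:
            if member != owner:
                balances[member] = balances.get(member, 0) + share  # owes
                balances[owner] = balances.get(owner, 0) - share    # is owed
    return {person: round(amount, 2) for person, amount in balances.items()}

# Alice pays for one plan; Bob and Carol each owe her a third of the cost.
print(monthly_settlement([("alice", 15.99, ["alice", "bob", "carol"])]))
```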
“I think Jam is timed perfectly to line up with multiple different booming trends in how people are using the internet,” particularly among younger people, says Backus. Hub says 42% of all US consumers have used someone else’s online TV service password, while among 13 to 24 year olds, 69% have watched Netflix on someone else’s password. “When popularity and exclusivity are combined with often ambiguous, even sometimes nonexistent, rules about legitimate use, it’s almost an invitation to subscribers to share the enjoyment with friends and family,” says Peter Fondulas, the principal at Hub and co-author of the study. “Wall Street has already made its displeasure clear, but in spite of that, password sharing is still very much alive and well.”
From that perspective, you could liken Jam to sex education. Password sharing abstinence has clearly failed. At least people should learn how to do it safely.

PROTIP: Feeling lonely? Go to your Netflix settings, click “Sign out of all devices,” and wait a few hours.
Voilà! If you check your phone now, you’ll find you have several new texts from friends you haven’t spoken to in years.
— John Backus (@backus) January 15, 2020

California’s new privacy law is off to a rocky start

California’s new privacy law was years in the making.
The law, California’s Consumer Privacy Act — or CCPA — took effect on January 1, allowing state residents to reclaim the right to access and control their personal data. Inspired by Europe’s GDPR, the CCPA is the largest statewide privacy law change in a generation. The new law lets users request a copy of the data that tech companies hold on them, delete that data when they no longer want a company to have it, and demand that their data isn’t sold to third parties. All of this is much to the chagrin of the tech giants, some of which have spent millions to comply with the law and have many more millions set aside to deal with the anticipated influx of consumer data access requests.
But to say things are going well is a stretch.
Many of the tech giants that kicked and screamed in resistance to the new law have acquiesced and accepted their fate — at least until something different comes along. The California tech scene had more than a year to prepare, but some have made it downright difficult and — ironically — more invasive in some cases for users to exercise their rights, largely because every company has a different interpretation of what compliance should look like.
Alex Davis is just one California resident who tried to use his new rights under the law to make a request to delete his data. He vented his annoyance on Twitter, saying companies have responded to CCPA by making requests “as confusing and difficult as possible in new and worse ways.”
“I’ve never seen such deliberate attempts to confuse with design,” he told TechCrunch. He referred to what he described as “dark patterns,” a type of user interface design that tries to trick users into making certain choices, often against their best interests.
“I tried to make a deletion request but it bogged me down with menus that kept redirecting… things to be turned on and off,” he said.
Despite his frustration, Davis got further than others. Just as some companies have made it easy for users to opt-out of having their data sold by adding the legally required “Do not sell my info” links on their websites, many have not. Some have made it near-impossible to find these “data portals,” which companies set up so users can request a copy of their data or delete it altogether. For now, California companies are still in a grace period — but have until July when the CCPA’s enforcement provisions kick in. Until then, users are finding ways around it — by collating and sharing links to data portals to help others access their data.
“We really see a mixed story on the level of CCPA response right now,” said Jay Cline, who heads up consulting giant PwC’s data privacy practice, describing it as a patchwork of compliance.
PwC’s own data found that only 40% of the largest 600 U.S. companies had a data portal. Only a fraction, Cline said, extended their portals to users outside of California, even though other states are gearing up to push similar laws to the CCPA.
But not all data portals are created equally. Given how much data companies store on us — personal or otherwise — the risks of getting things wrong are greater than ever. Tech companies are still struggling to figure out the best way to verify each data request to access or delete a user’s data without inadvertently giving it away to the wrong person.
Last year, security researcher James Pavur impersonated his fiancee and tricked tech companies into turning over vast amounts of data about her, including credit card information, account logins and passwords and, in one case, a criminal background check. Only a few of the companies asked for verification. Two years ago, Akita founder Jean Yang described someone hacking into her Spotify account and requesting her account data as an “unfortunate consequence” of GDPR, which mandated companies operating on the continent allow users access to their data.
(Image: Twitter/@jeanqasaur)
The CCPA says companies should verify a person’s identity to a “reasonable degree of certainty.” For some that’s just an email address to send the data.
Others require sending in even more sensitive information just to prove it’s them.
Indeed, i360, a little-known advertising and data company, until recently asked California residents for a person’s full Social Security number. This recently changed to just the last four digits. Verizon (which owns TechCrunch) wants its customers and users to upload their driver’s license or state ID to verify their identity. Comcast asks for the same, but goes the extra step of asking for a selfie before it will turn over any of a customer’s data.
Comcast asks for the same amount of information to verify a data request as the controversial facial recognition startup, Clearview AI, which recently made headlines for creating a surveillance system made up of billions of images scraped from Facebook, Twitter and YouTube to help law enforcement trace a person’s movements.
As much as CCPA has caused difficulties, it has helped forge an entirely new class of compliance startups ready to help large and small companies alike handle the regulatory burdens to which they are subject. Several startups in the space are taking advantage of the $55 billion expected to be spent on CCPA compliance in the next year — like Segment, which gives customers a consolidated view of the data they store; Osano, which helps companies comply with CCPA; and Securiti, which just raised $50 million to help expand its CCPA offering. With CCPA and GDPR under their belts, their services are designed to scale to accommodate new state or federal laws as they come in.
Another startup, Mine, which lets users “take ownership” of their data by acting as a broker to allow users to easily make requests under CCPA and GDPR, had a somewhat bumpy debut.
The service asks users to grant it access to their inbox, scanning for email subject lines that contain company names and using that data to determine which companies a user can ask to hand over or delete their data. (The service requests access to a user’s Gmail, but the company claims it will “never read” users’ emails.) Last month, during a publicity push, Mine inadvertently copied a couple of emailed data requests to TechCrunch, allowing us to see the names and email addresses of two requesters who wanted Crunch, a popular gym chain with a similar name, to delete their data.
(Screenshot: Zack Whittaker/TechCrunch)
TechCrunch alerted Mine — and the two requesters — to the security lapse.
“This was a mix-up on our part where the engine that finds companies’ data protection offices’ addresses identified the wrong email address,” said Gal Ringel, co-founder and chief executive at Mine. “This issue was not reported during our testing phase and we’ve immediately fixed it.”
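Mechanically, the subject-line approach Mine describes can be sketched in a few lines (the company list, matching rule and contact addresses below are invented for illustration; the naive lookup also shows how a near-namesake like Crunch can end up with the wrong request):

```python
# Rough sketch of inferring companies from email subject lines; the
# company list and addresses are hypothetical, not Mine's engine.
KNOWN_COMPANIES = {
    "crunch fitness": "privacy@crunch.example",
    "crunchbase": "privacy@crunchbase.example",
}

def companies_from_subjects(subjects):
    found = {}
    for subject in subjects:
        lowered = subject.lower()
        for name, contact in KNOWN_COMPANIES.items():
            if name in lowered:  # naive substring match; a sloppier rule
                found[name] = contact  # is exactly how mix-ups happen
    return found

print(companies_from_subjects(["Your Crunch Fitness membership renewal"]))
```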
For now, many startups have caught a break.
The smaller, early-stage startups that don’t yet make $25 million in annual revenue or store the personal data of more than 50,000 users or devices will largely escape having to immediately comply with CCPA. But that doesn’t mean startups can be complacent. As early-stage companies grow, so will their legal responsibilities.
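Whether a young company is in scope largely comes down to the thresholds cited above; a simplified sketch follows (the statute contains further tests, such as the share of revenue derived from selling personal information, that are not modeled here):

```python
# Simplified CCPA applicability check based on the thresholds cited above;
# the statute has additional tests and definitions not modeled here.
def ccpa_in_scope(annual_revenue_usd: float, consumers_or_devices: int) -> bool:
    return annual_revenue_usd > 25_000_000 or consumers_or_devices >= 50_000

print(ccpa_in_scope(3_000_000, 12_000))   # False: below both thresholds, for now
print(ccpa_in_scope(3_000_000, 60_000))   # True: crossing either one changes things
```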
“For those who did launch these portals and offer rights to all Americans, they are in the best position to be ready for these additional states,” said Cline. “Smaller companies in some ways have an advantage for compliance if their products or services are commodities, because they can build in these controls right from the beginning,” he said.
CCPA may have gotten off to a bumpy start, but time will tell if things get easier. Just this week, California’s attorney general Xavier Becerra released newly updated guidance aimed at trying to “fine tune” the rules, per his spokesperson. It goes to show that even California’s lawmakers are still trying to get the balance right.
But with the looming threat of hefty fines just months away, time is running out for the non-compliant.

Here’s where California residents can stop companies selling their data

ACLU says it’ll fight DHS efforts to use app locations for deportations

The American Civil Liberties Union plans to fight newly revealed practices by the Department of Homeland Security which used commercially available cell phone location data to track suspected illegal immigrants.
“DHS should not be accessing our location information without a warrant, regardless of whether they obtain it by paying or for free. The failure to get a warrant undermines Supreme Court precedent establishing that the government must demonstrate probable cause to a judge before getting some of our most sensitive information, especially our cell phone location history,” said Nathan Freed Wessler, a staff attorney with the ACLU’s Speech, Privacy, and Technology Project.
Earlier today, The Wall Street Journal reported that the Department of Homeland Security, through its Immigration and Customs Enforcement (ICE) and Customs & Border Protection (CBP) agencies, was buying geolocation data from commercial entities to investigate suspected immigration violations.
The location data, which aggregators acquire from cellphone apps including games, weather, shopping, and search services, is being used by Homeland Security to detect undocumented immigrants and others entering the U.S. unlawfully, the Journal reported.
According to privacy experts interviewed by the Journal, since the data is publicly available for purchase, the government practices don’t appear to violate the law — despite being what may be the largest dragnet ever conducted by the U.S. government using the aggregated data of its citizens.
It’s also an example of how the commercial surveillance apparatus put in place by private corporations in democratic societies can be legally accessed by state agencies to create the same kind of surveillance networks used in more authoritarian countries like China, India, and Russia.

“This is a classic situation where creeping commercial surveillance in the private sector is now bleeding directly over into government,” Alan Butler, general counsel of the Electronic Privacy Information Center, a think tank that pushes for stronger privacy laws, told the newspaper.
Behind the government’s use of commercial data is a company called Venntel. Based in Herndon, Va., the company acts as a government contractor and shares a number of its executive staff with Gravy Analytics, a mobile-advertising marketing analytics company. In all, ICE and the CBP have spent nearly $1.3 million on licenses for software that can provide location data for cell phones. Homeland Security says that the data from these commercially available records is used to generate leads about border crossing and detecting human traffickers.
The ACLU’s Wessler has won these kinds of cases in the past. He successfully argued before the Supreme Court in the case of Carpenter v. United States that geographic location data from cellphones was a protected class of information and couldn’t be obtained by law enforcement without a warrant.

CBP explicitly excludes cell tower data from the information it collects from Venntel, a spokesperson for the agency told the Journal — in part because it has to under the law. The agency also said that it only accesses limited location data and that the data is anonymized.

However, anonymized data can be linked to specific individuals by correlating that anonymous cell phone information with the real world movements of specific individuals which can be either easily deduced or tracked through other types of public records and publicly available social media.
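A crude sketch shows why that re-linking is so easy: if you already know a target’s home and workplace, you only have to count how often a pseudonymous trace turns up at those two places (coordinates, distances and thresholds below are invented for illustration):

```python
import math

# Illustrative only: matching a pseudonymized location trace to a known
# person by counting visits near their home and workplace.
def meters_apart(p, q):
    """Rough equirectangular distance between two (lat, lon) points."""
    dx = (p[1] - q[1]) * 111_320 * math.cos(math.radians(p[0]))
    dy = (p[0] - q[0]) * 110_540
    return math.hypot(dx, dy)

def likely_same_person(trace, home, work, radius_m=200, min_hits=10):
    hits = sum(
        1 for point in trace
        if meters_apart(point, home) <= radius_m or meters_apart(point, work) <= radius_m
    )
    return hits >= min_hits
```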
ICE is already being sued by the ACLU for another potential privacy violation. Late last year the ACLU said that it was taking the government to court over the DHS service’s use of so-called “stingray” technology that spoofs a cell phone tower to determine someone’s location.

ACLU sues Homeland Security over ‘stingray’ cell phone surveillance

At the time, the ACLU cited a government oversight report in 2016 which indicated that both CBP and ICE collectively spent $13 million on buying dozens of stingrays, which the agencies used to “locate people for arrest and prosecution.”

Blackbox welfare fraud detection system breaches human rights, Dutch court rules

An algorithmic risk scoring system deployed by the Dutch state to try to predict the likelihood that social security claimants will commit benefits or tax fraud breaches human rights law, a court in the Netherlands has ruled.
The Dutch government’s System Risk Indication (SyRI) legislation uses a non-disclosed algorithmic risk model to profile citizens and has been exclusively targeted at neighborhoods with mostly low-income and minority residents. Human rights campaigners have dubbed it a ‘welfare surveillance state’.
A number of civil society organizations in the Netherlands and two citizens instigated the legal action against SyRI — seeking to block its use. The court has today ordered an immediate halt to the use of the system.
The ruling is being hailed as a landmark judgement by human rights campaigners, with the court basing its reasoning on European human rights law — specifically the right to a private life that’s set out by Article 8 of the European Convention on Human Rights (ECHR) — rather than a dedicated provision in the EU’s data protection framework (GDPR) which relates to automated processing.
GDPR’s Article 22 includes the right for individuals not to be subject to solely automated decision-making where such decisions produce legal or similarly significant effects. But there can be some fuzziness around whether this applies if there’s a human somewhere in the loop, such as to review a decision on objection.
In this instance the court has sidestepped such questions by finding SyRI directly interferes with rights set out in the ECHR.
Specifically, the court found that the SyRI legislation fails a balancing test in Article 8 of the ECHR which requires that any social interest be weighed against the violation of individuals’ private life — with a fair and reasonable balance being required.
In its current form the automated risk assessment system failed this test, in the court’s view.
Legal experts suggest the decision sets some clear limits on how the public sector in the UK can make use of AI tools — with the court objecting in particular to the lack of transparency about how the algorithmic risk scoring system functioned.
In a press release about the judgement (translated to English using Google Translate) the court writes that the use of SyRI is “insufficiently clear and controllable”. While, per Human Rights Watch, the Dutch government refused during the hearing to disclose “meaningful information” about how SyRI uses personal data to draw inferences about possible fraud.
The court clearly took a dim view of the state trying to circumvent scrutiny of human rights risk by pointing to an algorithmic ‘blackbox’ and shrugging.

The Court’s reasoning doesn’t imply there should be full disclosure, but it clearly expects much more robust information on the way (objective criteria) that the model and scores were developed and the way in which particular risks for individuals were addressed.
— Joris van Hoboken (@jorisvanhoboken) February 6, 2020

The UN special rapporteur on extreme poverty and human rights, Philip Alston — who intervened in the case by providing the court with a human rights analysis — welcomed the judgement, describing it as “a clear victory for all those who are justifiably concerned about the serious threats digital welfare systems pose for human rights”.
“This decision sets a strong legal precedent for other courts to follow. This is one of the first times a court anywhere has stopped the use of digital technologies and abundant digital information by welfare authorities on human rights grounds,” he added in a press statement.
Back in 2018 Alston warned that the UK government’s rush to apply digital technologies and data tools to socially re-engineer the delivery of public services at scale risked having an immense impact on the human rights of the most vulnerable.
So the decision by the Dutch court could have some near-term implications for UK policy in this area.
The judgement does not shut the door on the use by states of automated profiling systems entirely — but does make it clear that in Europe human rights law must be central to the design and implementation of rights risking tools.
It also comes at a key time when EU policymakers are working on a framework to regulate artificial intelligence — with the Commission pledging to devise rules that ensure AI technologies are applied ethically and in a human-centric way.
It remains to be seen whether the Commission will push for pan-EU limits on specific public sector uses of AI — such as for social security assessments. A recent leaked draft of a white paper on AI regulation suggests it’s leaning towards risk-assessments and a patchwork of risk-based rules. 

Russia’s push back against Big Tech has major consequences for Apple

Josh Nadeau
Contributor

Josh Nadeau is a Canadian journalist based in St. Petersburg who covers the intersection of Russia, technology and culture. He has written for The Economist, Atlas Obscura and The Outline.

Last month, Donald Trump took to Twitter to criticize Apple for not unlocking two iPhones belonging to the Pensacola shooter, another volley in the struggle between big tech and the world’s governing bodies. But even the White House’s censure pales in comparison to the Kremlin’s ongoing plans. Apple, as the timing would have it, also happens to be in Vladimir Putin’s sights.
The company’s long-running policy of not preloading third-party software onto its devices is coming up against a new piece of Russian legislation requiring every smart device to be sold with certain applications already installed, many of which are produced by the government. Inside the country, the policy has even been called the zakon protiv Apple, or the “law against Apple,” for how it disproportionately affects the tech giant. While the law was passed last November, the Russian Federal Antimonopoly Service released the full list of apps only last week.
These regulations form the latest move in what’s turning out to be one of the largest national campaigns for digital control outside of Asia. These laws have been steadily accumulating since 2014 and are described as a way of consolidating sovereignty over the digital space — threatening to push companies out of the country if they fail to comply. Apple, for instance, will have to choose by July 1 whether maintaining access to the Russian market is worth making a revolutionary change in their policy. The same choice is given to any company wishing to do business in the country.

Ancestry.com rejected a police warrant to access user DNA records on a technicality

DNA profiling company Ancestry.com has narrowly avoided complying with a search warrant in Pennsylvania after the warrant was rejected on technical grounds, a move that is likely to help law enforcement refine their efforts to obtain user information despite the company’s efforts to keep the data private.
Little is known about the demands of the search warrant, only that a court in Pennsylvania approved law enforcement to “seek access” to Utah-based Ancestry.com’s database of more than 15 million DNA profiles.
TechCrunch was not able to identify the search warrant or its associated court case, which was first reported by BuzzFeed News on Monday. But it’s not uncommon for criminal cases still in the early stages of gathering evidence to remain under seal and hidden from public records until a suspect is apprehended.
DNA profiling companies like Ancestry.com are increasingly popular with customers hoping to build up family trees by discovering new family members and better understanding their cultural and ethnic backgrounds. But these companies are also ripe for picking by law enforcement, which want access to genetic databases to try to solve crimes from DNA left at crime scenes.
In an email to TechCrunch, the company confirmed that the warrant was “improperly served” on the company and was flatly rejected.
“We did not provide any access or customer data in response,” said spokesperson Gina Spatafore. “Ancestry has not received any follow-up from law enforcement on this matter.”
Ancestry.com, the largest of the DNA profiling companies, would not go into specifics, but the company’s transparency report said it rejected the warrant on “jurisdictional grounds.”
“I would guess it was just an out of state warrant that has no legal effect on Ancestry.com in its home state,” said Orin S. Kerr, law professor at the University of California, Berkeley, in an email to TechCrunch. “Warrants normally are only binding within the state in which they are issued, so a warrant for Ancestry.com issued in a different state has no legal effect,” he added.
But the rejection is likely to only stir tensions between police and the DNA profiling services over access to the data they store.
Ancestry.com’s Spatafore said it would “always advocate for our customers’ privacy and seek to narrow the scope of any compelled disclosure, or even eliminate it entirely.” It’s a sentiment shared by 23andMe, another DNA profiling company, which last year said that it had “successfully challenged” all of its seven legal demands, and as a result has “never turned over any customer data to law enforcement.”
The statements were in response to criticism that rival GEDmatch had controversially allowed law enforcement to search its database of more than a million records. The decision to allow in law enforcement was later revealed as crucial in helping to catch the notorious Golden State Killer, one of the most prolific murderers in U.S. history.
But the move was widely panned by privacy advocates for accepting a warrant to search its database without exhausting its legal options.
It’s not uncommon for companies to receive law enforcement demands for user data. Most tech giants, like Apple, Facebook, Google and Microsoft, publish transparency reports detailing the number of legal demands and orders they receive for user data each year or half-year.
Although both Ancestry.com and 23andMe provide transparency reports, detailing the amount of law enforcement demands for user data they receive, not all are as forthcoming. GEDmatch still does not publish its data demand figures, nor does MyHeritage, which said it “does not cooperate” with law enforcement. FamilyTreeDNA said it was “working” on publishing a transparency report.
But as police continue to demand data from DNA profiling and genealogy companies, they risk turning customers away — a lose-lose for both police and the companies.
Vera Eidelman, staff attorney with the ACLU’s Speech, Privacy, and Technology Project, said it would be “alarming” if law enforcement were able to get access to these databases containing millions of people’s information.
“Ancestry did the right thing in pushing back against the government request, and other companies should follow suit,” said Eidelman.

A popular genealogy website just helped solve a serial killer cold case in Oregon

Tinder’s handling of user data is now under GDPR probe in Europe

Dating app Tinder is the latest tech service to find itself under formal investigation in Europe over how it handles user data.
Ireland’s Data Protection Commission (DPC) has today announced a formal probe of how Tinder processes users’ personal data; the transparency surrounding its ongoing processing; and its compliance with obligations regarding data subject rights requests.
Under Europe’s General Data Protection Regulation (GDPR) EU citizens have a number of rights over their personal data — such as the right to request deletion or a copy of their data.
While those entities processing people’s personal data must have a valid legal basis to do so.
Data security is another key consideration baked into the data protection regulation.
The DPC said complaints about the dating app have been made from individuals in multiple EU countries, not just in Ireland — with the Irish regulator taking the lead under a GDPR mechanism to manage cross-border investigations.
It said the Tinder probe came about as a result of active monitoring of complaints received from individuals “both in Ireland and across the EU” — in order to identify “thematic and possible systemic data protection issues”.
“The Inquiry of the DPC will set out to establish whether the company has a legal basis for the ongoing processing of its users’ personal data and whether it meets its obligations as a data controller with regard to transparency and its compliance with data subject rights requests,” the DPC added.
It’s not clear exactly which GDPR rights have been complained about by Tinder users at this stage.
We’ve reached out to Tinder for a response.
Also today the DPC has finally responded to long-standing complaints by consumer rights groups of Google’s handling of location data — announcing a formal investigation of that too.

Google’s location tracking finally under formal probe in Europe

Google’s lead data regulator in Europe has finally opened a formal investigation into the tech giant’s processing of location data — more than a year after receiving a series of complaints from consumer rights groups across Europe.
The Irish Data Protection Commission (DPC) announced the probe today, writing in a statement that: “The issues raised within the concerns relate to the legality of Google’s processing of location data and the transparency surrounding that processing.”
“As such the DPC has commenced an own-volition Statutory Inquiry, with respect to Google Ireland Limited, pursuant to Section 110 of the Data Protection Act 2018 and in accordance with the co-operation mechanism outlined under Article 60 of the GDPR. The Inquiry will set out to establish whether Google has a valid legal basis for processing the location data of its users and whether it meets its obligations as a data controller with regard to transparency,” its notice added.
We’ve reached out to Google for comment.
BEUC, an umbrella group for European consumer rights groups, said the complaints about ‘deceptive’ location tracking were filed back in November 2018 — several months after the General Data Protection Regulation (GDPR) came into force, in May 2018.

Google faces GDPR complaint over ‘deceptive’ location tracking

It said the rights groups are concerned about how Google gathers information about the places people visit which it says could grant private companies (including Google) the “power to draw conclusions about our personality, religion or sexual orientation, which can be deeply personal traits”.
The complaints argue that consent to “share” users’ location data is not valid under EU law because it is not freely given — an express stipulation of consent as a legal basis for processing personal data under the GDPR — arguing that consumers are rather being tricked into accepting “privacy-intrusive settings”.
It’s not clear why it’s taken the DPC so long to process the complaints and determine it needs to formally investigate. (We’ve asked for comment and will update with any response.)
BEUC certainly sounds unimpressed — saying it’s glad the regulator “eventually” took the step to look into Google’s “massive location data collection”.
“European consumers have been victim of these practices for far too long,” its press release adds. “BEUC expects the DPC to investigate Google’s practices at the time of our complaints, and not just from today. It is also important that the procedural rights of consumers who complained many months ago, and that of our members representing them, are respected.”
Commenting further in a statement, Monique Goyens, BEUC’s director general, also said: “Consumers should not be under commercial surveillance. They need authorities to defend them and to sanction those who break the law. Considering the scale of the problem, which affects millions of European consumers, this investigation should be a priority for the Irish data protection authority. As more than 14 months have passed since consumer groups first filed complaints about Google’s malpractice, it would be unacceptable for consumers who trust authorities if there were further delays. The credibility of the enforcement of the GDPR is at stake here.”
The Irish DPC has also been facing growing criticism over the length of time it’s taking to reach decisions on extant GDPR investigations.
A total of zero decisions on big tech cases have been issued by the regulator — some 20 months after GDPR came into force in May 2018.
As lead European regulator for multiple tech giants — a consequence of a GDPR mechanism which funnels cross-border complaints via a lead regulator, combined with the fact that so many tech firms choose to site their regional HQ in Ireland (which offers the carrot of attractive business rates) — the DPC does have a major backlog of complex cross-border cases.
However there is growing political and public pressure for enforcement action to demonstrate that the GDPR is functioning as intended.
Even as further questions have been raised about how Ireland’s legal system will be able to manage so many cases.

Simplest way to show that Ireland will be unable to enforce #GDPR: They don’t even have enough judges for appeals of 4k* cases/year..
Ireland (4,9 M) has 176 judges (1 per 28k) Austria (8,5 Mio) has 1700 judges (1 per 5k) Germany (83 Mio) has 21339 judges (1 per 3,8k) pic.twitter.com/h9oj5VjOsu
— Max Schrems (@maxschrems) February 2, 2020

Google has felt the sting of GDPR enforcement elsewhere in the region; just over a year ago the French data watchdog, the CNIL, fined the company $57 million — for transparency and consent failures attached to the onboarding process for its Android mobile operating system.
But immediately following that decision Google switched the legal location of its international business to Ireland — meaning that GDPR complaints are now funnelled through the DPC.

UK Council websites are letting citizens be profiled for ads, study shows

On the same day that a data ethics advisor to the UK government has urged action to regulate online targeting, a study conducted by pro-privacy browser Brave has highlighted how Brits are being profiled by the behavioral ad industry when they visit their local Council’s website — perhaps seeking info on local services or guidance about benefits, including potentially sensitive information related to addiction services or disabilities.
Brave found that nearly all UK Councils permit at least one company to learn about the behavior of people visiting their sites, finding that a full 409 Councils exposed some visitor data to private companies.
Many large councils (serving 300,000+ people) were found exposing site visitors to what Brave describes as “extensive tracking and data collection by private companies” — with the worst offenders, London’s Enfield and Sheffield City Councils, exposing visitors to 25 data collectors apiece.
Brave argues the findings represent a conservative illustration of how much commercial tracking and profiling of visitors is going on on public sector websites — a floor, rather than a ceiling — given it was only studying landing pages of Council sites without any user interaction, and could only pick up known trackers (nor could the study look at how data is passed between tracking and data brokering companies).
Nor is it the first such study to warn that public sector websites are infested with for-profit adtech. A report last year by Cookiebot found users of public sector and government websites in the EU being tracked when they performed health-related searches — including queries related to HIV, mental health, pregnancy, alcoholism and cancer.
Brave’s study — which was carried out using the webxray tool — found that almost all (98%) of the Councils used Google systems, with the report noting that the tech giant owns all five of the top embedded elements loaded by Council websites, which it suggests gives the company a god-like view of how UK citizens are interacting with their local authorities online.
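Conceptually, the kind of scan Brave automated is not complicated; here is a rough sketch of flagging known third-party trackers among a landing page’s embedded resources (the tracker list is a tiny illustrative subset and the Council URL is a placeholder; this is not webxray’s methodology):

```python
# Rough sketch of spotting known third-party tracker domains among the
# resources embedded in a page; the tracker list is a tiny illustrative
# subset, not webxray's database.
import re
import urllib.request

KNOWN_TRACKER_DOMAINS = {
    "doubleclick.net", "google-analytics.com",
    "googletagmanager.com", "facebook.net",
}

def trackers_on_page(url):
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
    embedded_srcs = re.findall(r'src=["\'](https?://[^"\']+)', html)
    found = set()
    for src in embedded_srcs:
        host = src.split("/")[2].lower()
        found.update(d for d in KNOWN_TRACKER_DOMAINS if host.endswith(d))
    return found

# Placeholder URL for illustration:
# print(trackers_on_page("https://www.example-council.gov.uk/"))
```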
The analysis also found 198 of the Council websites use the real-time bidding (RTB) form of programmatic online advertising. This is notable because RTB is the subject of a number of data protection complaints across the European Union — including in the UK, where the Information Commissioner’s Office (ICO) itself has been warning the adtech industry for more than half a year that its current processes are in breach of data protection laws.
However the UK watchdog has preferred to bark softly in the industry’s general direction over its RTB problem, instead of taking any enforcement action — a response that’s been dubbed “disastrous” by privacy campaigners.
One of the smaller RTB players the report highlights — which calls itself the Council Advertising Network (CAN) — was found sharing people’s data from 34 Council websites with 22 companies, which could then be insecurely broadcasting it on to hundreds or more entities in the bid chain.
Slides from a CAN media pack refer to “budget conscious” direct marketing opportunities via the ability to target visitors to Council websites accessing pages about benefits, child care and free local activities; “disability” marketing opportunities via the ability to target visitors to Council websites accessing pages such as home care, blue badges and community and social services; and “key life stages” marketing opportunities via the ability to target visitors to Council websites accessing pages related to moving home, having a baby, getting married or losing a loved one.

This is from the Council Advertising Network’s media pack. CAN is a small operation. They are just trying to take a small slide of the Google and IAB “real-time bidding” cake. But this gives an insight in to how insidious this RTB stuff is. pic.twitter.com/b1tiZi1p4P
— Johnny Ryan (@johnnyryan) February 4, 2020

Brave’s report — while a clearly stated promotion for its own anti-tracking browser (given it’s a commercial player too) — should be seen in the context of the ICO’s ongoing failure to take enforcement action against RTB abuses. It’s therefore an attempt to increase pressure on the regulator to act by further illuminating a complex industry which has used a lack of transparency to shield massive rights abuses and continues to benefit from a lack of enforcement of Europe’s General Data Protection Regulation.
And a low level of public understanding of how all the pieces in the adtech chain fit together and sum to a dysfunctional whole, where public services are turned against the citizens whose taxes fund them to track and target people for exploitative ads, likely contributes to discouraging sharper regulatory action.
But, as the saying goes, sunlight disinfects.
Asked what steps he would like the regulator to take, Brave’s chief policy officer, Dr Johnny Ryan, told TechCrunch: “I want the ICO to use its powers of enforcement to end the UK’s largest data breach. That data breach continues, and two years to the day after I first blew the whistle about RTB, Simon McDougall wrote a blog post accepting Google and the IAB’s empty gestures as acts of substance. It is time for the ICO to move this over to its enforcement team, and stop wasting time.”
We’ve reached out to the ICO for a response to the report’s findings.

Customer feedback is a development opportunity

Kyle Lomeli
Contributor

Kyle Lomeli is the CTO and a founding engineer at CarGurus.com.

Online commerce accounted for nearly $518 billion in revenue in the United States alone last year. Online marketplaces like Amazon and eBay, whose ranks keep growing, are expected to command 40% of the global retail market in 2020. As the number of digital offerings — not only marketplaces but also online storefronts and company websites — available to consumers continues to grow, the primary challenge for any online platform lies in setting itself apart.
The central question for how to accomplish this: Where does differentiation matter most?
A customer’s ability to easily (and accurately) find a specific product or service with minimal barriers helps ensure they feel satisfied and confident with their choice of purchase. This ultimately becomes the differentiator that sets an online platform apart. It’s about coupling a stellar product with an exceptional experience. Often, that takes the form of simple, searchable access to a wide variety of products and services. Sometimes, it’s about surfacing a brand that meets an individual consumer’s needs or price point. In both cases, platforms are in a position to help customers avoid having to chase down a product or service through multiple clicks while offering a better way of comparing apples to apples.
To be successful, a company should adopt a consumer-first philosophy that informs its product ideation and development process. Successful consumer-first development depends on a company’s ability to quickly deliver fresh features that customers actually respond to, rather than prioritizing whichever update seems most profitable. The best way to inform both elements is to consistently collect and learn from customer feedback in a timely way — and sometimes, this will mean making decisions that benefit consumers rather than ones that serve the company’s immediate interests.

Carriers ‘violated federal law’ by selling your location data, FCC tells Congress

More than a year and a half after wireless carriers were caught red-handed selling the real-time location data of their customers to anyone willing to pay for it, the FCC has determined that in doing so they violated federal law. Official documentation of exactly how these companies broke the law is forthcoming.
FCC Chairman Ajit Pai shared his finding in a letter to Congressman Frank Pallone (D-NJ), who chairs the Energy and Commerce Committee that oversees the agency. Rep. Pallone has been active on this and prodded the FCC for updates late last year, prompting today’s letter. (I expect a comment from his office shortly and will add it when they respond.)
“I wish to inform you that the FCC’s Enforcement Bureau has completed its extensive investigation and that it has concluded that one or more wireless carriers apparently violated federal law,” Pai wrote.
Extensive it must have been, since we first heard of this egregious breach of privacy in May of 2018, when multiple reports showed that every major carrier (including TechCrunch’s parent company Verizon) was selling precise location data wholesale to resellers who then either resold it or gave it away. It took nearly a year for the carriers to follow through on their promises to stop the practice. And now, 18 months later, we get the first real indication that regulators took notice.

A year after outcry, carriers are finally stopping sale of location data, letters to FCC show

“It’s a shame that it took so long for the FCC to reach a conclusion that was so obvious,” said Commissioner Jessica Rosenworcel in a statement issued alongside the Chairman’s letter. She has repeatedly brought up the issue in the interim, seemingly baffled that such a large-scale and obvious violation was going almost completely unacknowledged by the agency.
Commissioner Geoffrey Starks echoed her sentiment in his own statement: “These pay-to-track schemes violated consumers’ privacy rights and endangered their safety. I’m glad we may finally act on these egregious allegations. My question is: what took so long?”
Chairman Pai’s letter explains that “in the coming days” he will be proposing a “Notice of Apparent Liability for Forfeiture,” or several of them. This complicated-sounding document is basically the official declaration, with evidence and legal standing, that someone has violated FCC rules and may be subject to a “forfeiture,” essentially a fine.
Right now that is all the information anyone has, including the other Commissioners, but the arrival of the notice will no doubt make things much clearer — and may help show exactly how seriously the agency took this problem and when it began to take action.

Cybersecurity 101: Seven simple security guides for protecting your privacy

Disclosure: TechCrunch is owned by Verizon Media, a subsidiary of Verizon, but this has no effect on our coverage.

Tech companies, we see through your flimsy privacy promises

There’s a reason why Data Privacy Day pisses me off.
January 28 was the annual “Hallmark holiday” for cybersecurity, ostensibly a day devoted to promoting data privacy awareness and staying safe online. This year, as in recent years, it has become a launching pad for marketing fluff and promoting privacy practices that don’t hold up.
Privacy has become a major component of our wider views on security, and it’s in sharper focus than ever as we see multiple examples of companies that harvest too much of our data, share it with others, sell it to advertisers and third parties and use it to track our every move so they can squeeze out a few more dollars.
But as we become more aware of these issues, companies large and small clamor for attention about how their privacy practices are good for users. All too often, companies make hollow promises and empty claims that look fancy and meaningful.

Ring’s new security ‘control center’ isn’t nearly enough

On the same day that a Mississippi family is suing Amazon-owned smart camera maker Ring for not doing enough to prevent hackers from spying on their kids, the company has rolled out its previously announced “control center,” which it hopes will make you forget about its verifiably “awful” security practices.
In a blog post out Thursday, Ring said the new “control center” “empowers” customers to manage their security and privacy settings.
Ring users can check to see if they’ve enabled two-factor authentication, add and remove users from the account, see which third-party services can access their Ring cameras, and opt out of allowing police to access their video recordings without the user’s consent.
But dig deeper and Ring’s latest changes still do practically nothing to change some of its most basic, yet highly criticized security practices.
Questions were raised over these practices months ago after hackers were caught breaking into Ring cameras and remotely watching and speaking to small children. The hackers were using previously compromised email addresses and passwords — a technique known as credential stuffing — to break into the accounts. Some of those credentials, many of which were simple and easy to guess, were later published on the dark web.
Yet, Ring still has not done anything to mitigate this most basic security problem.
TechCrunch ran several passwords through Ring’s sign-up page and found we could enter any easy-to-guess password, like “12345678” and “password” — which have consistently ranked among the most common passwords for several years running.
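For a sense of how basic such a safeguard is, here’s a minimal, purely illustrative sketch of a sign-up check against a blocklist of common passwords (the list contents and function name are hypothetical, not Ring’s code):

```python
# Purely illustrative sketch: reject passwords that appear on a blocklist
# of commonly used credentials (names and list contents are hypothetical).
COMMON_PASSWORDS = {"12345678", "password", "qwerty", "111111", "abc123"}

def is_password_acceptable(password: str, min_length: int = 10) -> bool:
    """Return True only if the password is long enough and not on the blocklist."""
    if len(password) < min_length:
        return False
    if password.lower() in COMMON_PASSWORDS:
        return False
    return True

print(is_password_acceptable("12345678"))                      # False: on the blocklist
print(is_password_acceptable("correct horse battery staple"))  # True
```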
To combat the problem, Ring said at the time users should enable two-factor authentication, a security feature that adds an additional check to prevent account breaches like password spraying, where hackers use a list of common passwords in an effort to brute force their way into accounts.
But Ring still uses a weak form of two-factor, sending you a code by text message. Text messages are not secure and can be compromised through interception and SIM swapping attacks. Even NIST, the government’s technology standards body, has deprecated support for text message-based two-factor. Experts say although text-based two-factor is better than not using it at all, it’s far less secure than app-based two-factor, where codes are delivered over an encrypted connection to an app on your phone.
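By way of illustration, app-based two-factor typically relies on a time-based one-time password (TOTP) derived from a secret shared once with the user’s authenticator app, rather than a code sent over SMS. A minimal sketch using the open-source pyotp library (the account and issuer names are hypothetical):

```python
# Minimal TOTP sketch using the pyotp library (pip install pyotp).
import pyotp

# Generate a per-user secret once at enrolment and store it server-side.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The user adds the secret to an authenticator app, e.g. via a provisioning URI.
uri = totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCam")
print(uri)

# At login, the app shows a short-lived code that the server verifies.
code = totp.now()         # stand-in for the code the user types in
print(totp.verify(code))  # True if the code matches the current time window
```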
Ring said it’ll make its two-factor authentication feature mandatory later this year, but has yet to say if it will ever support app-based two-factor authentication.
The smart camera maker has also faced criticism for its cozy relationship with law enforcement, which has lawmakers concerned and demanding answers.
Ring allows police access to users’ videos without a subpoena or a warrant. (Unlike its parent company Amazon, Ring still does not publish the number of times police demand access to customer videos, with or without a legal request.)
Ring now says its control center will allow users to decide if police can access their videos or not.
But don’t be fooled by Ring’s promise that police “cannot see your video recordings unless you explicitly choose to share them by responding to a specific video request.” Police can still get a search warrant or a court order to obtain your videos, which isn’t particularly difficult if they can show there are reasonable grounds to believe the footage may contain evidence of a crime.
There’s nothing stopping Ring, or any other smart home maker, from offering a zero-knowledge approach to customer data, where only the user has the encryption keys to access their data. Ring cutting itself (and everyone else) out of the loop would be the only meaningful thing it could do if it truly cares about its users’ security and privacy. The company would have to decide if the trade-off is worth it — true privacy for its users versus losing out on access to user data, which would effectively kill its ongoing cooperation with police departments.
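A zero-knowledge design essentially means video is encrypted on the user’s device with a key the vendor never holds. Here’s a heavily simplified sketch of that idea using the Python cryptography library; a real system would involve key derivation, key backup and streaming video, none of which is shown:

```python
# Simplified zero-knowledge sketch: the key lives only with the user,
# so the service stores ciphertext it cannot read (pip install cryptography).
from cryptography.fernet import Fernet

user_key = Fernet.generate_key()   # generated and kept on the user's device
cipher = Fernet(user_key)

clip = b"...raw video bytes..."    # stand-in for a recorded clip
ciphertext = cipher.encrypt(clip)  # only ciphertext is uploaded to the cloud

# Without user_key, the service (or police, or a hacker) sees only this:
print(ciphertext[:32])

# Playback happens client-side, where the key is available.
assert cipher.decrypt(ciphertext) == clip
```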
Ring says that security and privacy have “always been our top priority.” But if it’s not willing to work on the basics, its words are little more than empty promises.

Over 1,500 Ring passwords have been found on the dark web

Avast shuts down marketing analytics subsidiary Jumpshot amid controversy over selling user data

Avast has made a huge business out of selling antivirus protection for computers and mobile devices, but more recently it was revealed that the Czech-based cybersecurity specialist was also cultivating another, more controversial, revenue stream: harvesting and selling on user data, some of which it amassed by way of those security tools.
But as of today, the latter of those businesses is no longer. Avast announced that it would be winding down Jumpshot, its $180 million marketing technology subsidiary that had been in the business of collecting data from across the web, including within walled gardens, analysing it, and then — unknown to users — selling it on to third-party customers that included tech giants like Microsoft and Google and big brands like Pepsi and Home Depot.
The significance of the incident extends beyond Avast and Jumpshot’s practices: it highlights the sometimes obscure but very real risk of security technology stepping over the boundary into violations of privacy, and how the fact that big data is a hot commodity can blur that line even further, as it did here:
“We started Jumpshot in 2015 with the idea of extending our data analytics capabilities beyond core security,” writes CEO Ondrej Vlcek in a blog post responding to the Jumpshot news. “This was during a period where it was becoming increasingly apparent that cybersecurity was going to be a big data game. We thought we could leverage our tools and resources to do this more securely than the countless other companies that were collecting data.”
Today’s news comes on the heels of a series of developments and investigations highlighting Jumpshot’s practices, stretching back to December, when Mozilla and Opera removed Avast extensions after reports that they were collecting user data and browsing histories. Avast — which has over 430 million active users — later came clean, only for a follow-up investigation published earlier this week to unveil yet more details about the practice and the specific link to Jumpshot, which was founded in 2015 and uses data from 100 million devices.
In its announcement, Avast said that it “plans to terminate provision of data” to Jumpshot but did not give a timeframe for when Jumpshot would completely cease to operate as part of the closure. There is still no announcement on Jumpshot’s own site.
“Jumpshot intends to continue paying its vendors and suppliers in full as necessary and in the ordinary course for products and services provided to Jumpshot during its wind down process,” the company said. “Jumpshot will be promptly notifying its customers in due course about the termination of its data services.”
Avast had a key partner in Jumpshot: Ascential, the business media company that took a $60.8 million, 35% stake in the subsidiary last July, effectively valuing Jumpshot at around $177 million. An internal memo that we obtained from Ascential notes that the company has already sold its stake back to Avast for the same price, incurring no costs in the process.
Avast’s CEO Ondrej Vlcek, who joined the company seven months ago, apologised in a separate blog post while also somewhat distancing himself from the history of the company and what it did. He noted that he identified the issues during an audit when he joined (although he didn’t act to change any of the practices at the time). Perhaps more importantly, he maintained the legality of the situation:
“Jumpshot has operated as an independent company from the very beginning, with its own management and board of directors, building their products and services via the data feed coming from the Avast antivirus products,” he wrote. “During all those years, both Avast and Jumpshot acted fully within legal bounds – and we very much welcomed the introduction of GDPR in the European Union in May 2018, as it was a rigorous legal framework addressing how companies should treat customer data. Both Avast and Jumpshot committed themselves to 100% GDPR compliance.”
We have reached out to the Czech DPA to ask if it is going to be conducting any investigations around the company in relation to Jumpshot and its practices with data.
In the meantime, with the regulatory implications to one side, the incident has been a blow to Avast, which has in the last couple of days seen its shares tumble nearly 11 percent on the London Stock Exchange, where it is traded. The company is currently valued at around £4 billion (or $5.2 billion at today’s exchange rates).

Facebook will pay $550 million to settle class action lawsuit over privacy violations

Facebook will pay over half a billion dollars to settle a class action lawsuit that alleged systematic violation of an Illinois consumer privacy law. The settlement amount is large indeed, but a small fraction of the $35 billion maximum the company could have faced.
Class members — basically Illinois Facebook users from mid-2011 to mid-2015 — may expect as much as $200 each, but that depends on several factors. If you’re one of them you should receive some notification once the settlement is approved by the court and the formalities are worked out.
The proposed settlement would require Facebook to obtain consent in the future from Illinois users for such purposes as face analysis for automatic tagging.
This is the second major settlement from Facebook in six months; a seemingly enormous $5 billion settlement over FTC violations was announced over the summer, but it’s actually a bit of a joke.

9 reasons the Facebook FTC settlement is a joke

The Illinois suit was filed in 2015, alleging that Facebook collected facial recognition data on images of users in the state without disclosure, in contravention of the state’s 2008 Biometric Information Privacy Act (BIPA). Similar suits were filed against Shutterfly, Snapchat, and Google.
Facebook pushed back in 2016, saying that facial recognition processing didn’t count as biometric data, and that anyway Illinois law didn’t apply to it, a California company. The judge rejected these arguments with flair, saying the definition of biometric was “cramped” and the assertion of Facebook’s immunity would be “a complete negation” of Illinois law in this context.

Facebook stalls in lawsuit alleging its facial recognition tech violates Illinois law

Facebook was also suspected at the time of heavy lobbying efforts towards defanging BIPA. One state senator proposed an amendment after the lawsuit was filed that would exclude digital images from BIPA coverage, which would of course have completely destroyed the case. It’s hard to imagine such a ridiculous proposal was the suggestion of anyone but the industry, which tends to regard the strong protections of the law in Illinois as quite superfluous.
As I noted in 2018, the Illinois Chamber of Commerce proposed the amendment, and a tech council there was chaired by Facebook’s own Manager of State Policy at the time. Facebook told me then that it had not taken any position on the amendment or spoken to any legislators about it.
The case reached the 9th U.S. Circuit Court of Appeals in 2019, where Facebook was again rebuffed; the court concluded that “the development of a face template using facial-recognition technology without consent (as alleged here) invades an individual’s private affairs and concrete interests. Similar conduct is actionable at common law.”
Facebook’s request for a rehearing en banc, which is to say with the full complement of judges there present, was unanimously denied two months later.
At last, after some 5 years of this, Facebook decided to settle, a representative told TechCrunch, “as it was in the best interest of our community and our shareholders to move past this matter.” Obviously it admits to no wrongdoing.
The $550 million amount negotiated is “the largest all-cash privacy class action settlement to date,” according to law firm Edelson PC, one of three that represented the plaintiffs in the suit.
“Biometrics is one of the two primary battlegrounds, along with geolocation, that will define our privacy rights for the next generation,” said Edelson PC founder and CEO Jay Edelson in a press release. “We are proud of the strong team we had in place that had the resolve to fight this critically important case over the last five years. We hope and expect that other companies will follow Facebook’s lead and pay significant attention to the importance of our biometric information.”

A Christian-friendly payments processor spilled 6 million transaction records online

A little-known payments processor, which bills itself as a Christian-friendly company that does “not process credit card transactions for morally objectionable businesses,” left a database containing years’ worth of customer payment transactions online.
The database contained 6.7 million records dating back to 2013, and was being updated daily. But the database was not protected with a password, allowing anyone to look inside.
Security researcher Anurag Sen found the database. TechCrunch identified its owner as Cornerstone Payment Systems, which provides payment processing to ministries, non-profits, and other morally aligned businesses across the U.S., including churches, religious radio personalities, and pro-life groups.
Payment processors handle credit and debit card transactions on behalf of a business.
A review of a portion of the database showed each record contained payee names, email addresses and, in many but not all cases, postal addresses. Each record also had the name of the merchant being paid, the card type, the last four digits of the card number, and its expiry date.
The data also contained specific dates and times of the transaction. Each record also indicated if a payment was successful or if it was declined. Some of the records also contained notes from the customer, often describing what the payment was for — such as a donation or a commemoration.
Although there was some evidence of tokenization — a way of replacing sensitive information with a unique string of letters and numbers — the database itself was not encrypted.
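For context, tokenization generally means swapping a sensitive value for a random surrogate and keeping the mapping in a separate, access-controlled vault. The sketch below is a generic illustration of the concept, not Cornerstone’s implementation:

```python
# Generic tokenization sketch (illustrative only): the payments database stores
# only the token; the token-to-card mapping lives in a separately secured vault.
import secrets

vault = {}  # stand-in for a separately secured token vault

def tokenize(card_number: str) -> str:
    token = secrets.token_urlsafe(16)   # random surrogate, reveals nothing by itself
    vault[token] = card_number          # mapping kept out of the main database
    return token

def detokenize(token: str) -> str:
    return vault[token]                 # only callable with access to the vault

token = tokenize("4111111111111111")
print(token)              # what a leaked transaction record would expose
print(detokenize(token))  # recoverable only via the vault
```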
We used some of the email addresses to contact a number of affected customers. Two people whose names and transactions were found in the database confirmed their information was accurate.
After TechCrunch contacted Cornerstone, the company pulled the database offline.
“Cornerstone Payment Systems has secured all server access,” said spokesperson Tony Adamo.
“It is vital to note that Cornerstone Payment Systems does not store complete credit card data or check data. We have put in place enhanced security measures locking down all URLs. We are currently reviewing all logs for any potential access,” he added.
Cornerstone did not say if it will inform state regulators of the security lapse, which it’s required to do under California’s data breach notification laws.
Read more:
LabCorp security lapse exposed thousands of medical documents
A Sprint contractor left thousands of US cell phone bills on the internet by mistake
Over 750,000 applications for US birth certificate copies exposed online
Tuft & Needle exposed thousands of customer shipping labels
An adult sexting site exposed thousands of models’ passports and driver’s licenses

Securiti.ai scores $50M Series B to modernize data governance

Securiti.ai, a San Jose startup, is working to bring a modern twist to data governance and security. Today the company announced a $50 million Series B led by General Catalyst with participation from Mayfield.
The company, which only launched in 2018, reports it has already raised $81 million. What’s attracting all of this investment in such a short period of time is that the company is going after an increasingly painful problem for companies all over the world: a growing body of data privacy regulation like GDPR and CCPA.
These laws are forcing companies to understand the data they hold, pull that data into a single view and, if needed, respond to customer requests to remove or redact some of it. It’s a hard problem to solve with customer data spread across multiple applications, and often shared with third parties and partners.
Company CEO and founder Rehan Jalil says his startup aims to provide an operations platform for customer data, an area he has coined PrivacyOps, with the goal of helping companies give customers more control over their data, as laws increasingly require.
“In the end it’s all about giving individuals the rights on the data: the right to privacy, the right to deletion, the right to redaction, the right to stop the processing. That’s the charter and the mission of the company,” he told TechCrunch.
You begin by defining your data sources, then a bot goes out and gathers customer data across all of the data sources you have defined. The company has links to over 250 common modern and legacy data sources out of the box. Once the bot grabs the data and creates a central record, then humans come in to review the results, make any adjustments and final decisions on how to handle a data request.
It has a number of security templates for different kinds of privacy regulations such as GDPR and CCPA, and the bot finds data that must meet these requirements and lets the governance team see how many records could be in violation or meet a set of criteria you define.
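To make the workflow concrete, here’s a generic sketch of what gathering a subject’s records from several defined sources into a single review record might look like. It illustrates the pattern described above with entirely hypothetical source names; it is not Securiti.ai’s actual code or API:

```python
# Generic data-subject-request sketch (hypothetical sources, not Securiti.ai's API):
# pull a person's records from each registered source into one central record
# that a human reviewer can then act on.
from typing import Callable, Dict, List

# Each "source" is just a lookup function here; a real system would use connectors.
DATA_SOURCES: Dict[str, Callable[[str], List[dict]]] = {
    "crm":        lambda email: [{"system": "crm", "email": email, "name": "A. Person"}],
    "billing":    lambda email: [{"system": "billing", "email": email, "plan": "pro"}],
    "newsletter": lambda email: [],
}

def build_central_record(email: str) -> dict:
    """Gather everything held about one data subject across all defined sources."""
    found = {name: fetch(email) for name, fetch in DATA_SOURCES.items()}
    return {
        "subject": email,
        "records": found,
        "needs_review": True,   # a human signs off before deletion or redaction
    }

print(build_central_record("a.person@example.com"))
```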
Securiti.ai data view. Screenshot: Securiti.ai (cropped)
There are a number of tools in the package, including ways to look at your privacy readiness, vendor assessments, data maps and data breaches, to view data privacy in a broad way.
The company launched in 2018, and in 15 months has already grown to 185 employees, a number that is expected to increase in the next year with the new funding.

An adult sexting site exposed thousands of models’ passports and driver’s licenses

A popular sexting website has exposed thousands of photo IDs belonging to models and sex workers who earn commissions from the site.
SextPanther, an Arizona-based adult site, stored over 11,000 identity documents on an exposed Amazon Web Services (AWS) storage bucket, including passports, driver’s licenses, and Social Security numbers, without a password. The company says on its website that it uses the documents to verify the ages of models with whom users communicate.
Most of the exposed identity documents contain personal information, such as names, home addresses, dates of birth, biometrics, and photos.
Although most of the data came from models in the U.S., some of the documents were supplied by workers in Canada, India, and the United Kingdom.
The site allows models and sex workers to earn money by exchanging text messages, photos, and videos with paying users, including explicit and nude content. The exposed storage bucket also contained over a hundred thousand photos and videos sent and received by the workers.
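An unprotected bucket of this kind can typically be read by anyone who knows, or guesses, its name, and researchers commonly confirm that with an anonymous, unsigned request. A minimal sketch using boto3 follows; the bucket name is hypothetical:

```python
# Check whether a bucket allows anonymous listing (illustrative; bucket name is made up).
# Requires: pip install boto3
import boto3
from botocore import UNSIGNED
from botocore.config import Config
from botocore.exceptions import ClientError

# An unsigned client sends no credentials at all, the same position as any stranger online.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

try:
    response = s3.list_objects_v2(Bucket="example-exposed-bucket", MaxKeys=5)
    for obj in response.get("Contents", []):
        print(obj["Key"])  # object names visible without any password
except ClientError as err:
    print("Bucket refused anonymous access:", err)
```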
It was not immediately clear who owned the storage bucket. TechCrunch asked U.K.-based penetration testing company Fidus Information Security, which has experience in discovering and identifying exposed data, to help.
Researchers at Fidus quickly found evidence suggesting the exposed data could belong to SextPanther.
An hour after we alerted the site’s owner, Alexander Guizzetti, to the exposed data, the storage bucket was pulled offline.
“We have passed this on to our security and legal teams to investigate further. We take accusations like this very seriously,” Guizzetti said in an email, though he did not explicitly confirm the bucket belonged to his company.
Using information from identity documents matched against public records, we contacted several models whose information was exposed by the security lapse.
“I’m sure I sent it to them,” said one model, referring to her driver’s license, which was exposed. (We agreed to withhold her name given the sensitivity of the data.) We passed along a photo of her license as it was found in the exposed bucket. She confirmed it was her license, but said that the information on it is no longer current.
“I truly feel awful for others whom have signed up with their legit information,” she said.
The security lapse comes a week after researchers found a similar cache of highly sensitive personal information of sex workers on adult webcam streaming site, PussyCash.
More than 850,000 documents were insecurely stored in another unprotected storage bucket.
Read more:
GPS trackers leak real-time locations and can remotely activate its microphone
A Sprint contractor left thousands of US cell phone bills on the internet by mistake
Over 750,000 applications for US birth certificate copies exposed online
Tuft & Needle exposed thousands of customer shipping labels
‘Magic: The Gathering’ game maker exposed 452,000 players’ account data
Got a tip? You can send tips securely over Signal and WhatsApp to +1 646-755–8849.

Facebook’s dodgy defaults face more scrutiny in Europe

Italy’s Competition and Markets Authority has launched proceedings against Facebook for failing to fully inform users about the commercial uses it makes of their data.
At the same time a German court has today upheld a consumer group’s right to challenge the tech giant over data and privacy issues in the national courts.
Lack of transparency
The Italian authority’s action, which could result in a fine of €5 million for Facebook, follows an earlier decision by the regulator, in November 2018 — when it found the company had not been dealing plainly with users about the underlying value exchange involved in signing up to the ‘free’ service, and fined Facebook €5M for failing to properly inform users how their information would be used commercially.
In a press notice about its latest action, the watchdog notes Facebook has removed a claim from its homepage — which had stated that the service ‘is free and always will be’ — but finds users are still not being informed, “with clarity and immediacy”, about how the tech giant monetizes their data.
The Authority had prohibited Facebook from continuing what it dubs “deceptive practice” and ordered it to publish an amending declaration on its homepage in Italy, as well as on the Facebook app and on the personal page of each registered Italian user.
In a statement responding to the watchdog’s latest action, a Facebook spokesperson told us:
We are reviewing the Authority decision. We made changes last year — including to our Terms of Service — to further clarify how Facebook makes money. These changes were part of our ongoing commitment to give people more transparency and control over their information.
Last year Italy’s data protection agency also fined Facebook $1.1M — in that case for privacy violations attached to the Cambridge Analytica data misuse scandal.
Dodgy defaults
In separate but related news, a ruling by a German court today found that Facebook can continue to use the advertising slogan that its service is ‘free and always will be’ — on the grounds that it does not require users to hand over monetary payments in exchange for using the service.
A local consumer rights group, vzbv, had sought to challenge Facebook’s use of the slogan — arguing it’s misleading, given the platform’s harvesting of user data for targeted ads. But the court disagreed.
However that was only one of a number of data protection complaints filed by the group — 26 in all. And the Berlin court found in its favor on a number of other fronts.
Significantly vzbv has won the right to bring data protection related legal challenges within Germany even with the pan-EU General Data Protection Regulation in force — opening the door to strategic litigation by consumer advocacy bodies and privacy rights groups in what is a very pro-privacy market. 
This looks interesting because one of Facebook’s favored legal arguments in a bid to derail privacy challenges at an EU Member State level has been to argue those courts lack jurisdiction — given that its European HQ is sited in Ireland (and GDPR includes provision for a one-stop shop mechanism that pushes cross-border complaints to a lead regulator).
But this ruling looks like it will make it tougher for Facebook to funnel all data and privacy complaints via the heavily backlogged Irish regulator — which has, for example, been sitting on a GDPR complaint over forced consent by adtech giants (including Facebook) since May 2018.
The Berlin court also agreed with vzbv’s argument that Facebook’s privacy settings and T&Cs violate laws around consent — such as a location service already being activated in the Facebook mobile app, and a pre-ticked setting that made users’ profiles indexable by search engines by default.
The court also agreed that certain pre-formulated conditions in Facebook’s T&C do not meet the required legal standard — such as a requirement that users agree to their name and profile picture being used “for commercial, sponsored or related content”, and another stipulation that users agree in advance to all future changes to the policy.
Commenting in a statement, Heiko Dünkel from the law enforcement team at vzbv, said: “It is not the first time that Facebook has been convicted of careless handling of its users’ data. The Chamber of Justice has made it clear that consumer advice centers can take action against violations of the GDPR.”
We’ve reached out to Facebook for a response.

London’s Met Police switches on live facial recognition, flying in face of human rights concerns

While EU lawmakers are mulling a temporary ban on the use of facial recognition to safeguard individuals’ rights, as part of a risk-focused plan to regulate AI, London’s Met Police has today forged ahead with deploying the privacy-hostile technology — flipping the switch on operational use of live facial recognition in the UK capital.
The deployment comes after a multi-year period of trials by the Met and police in South Wales.
The Met says its use of the controversial technology will be targeted to “specific locations… where intelligence suggests we are most likely to locate serious offenders”.
“Each deployment will have a bespoke ‘watch list’, made up of images of wanted individuals, predominantly those wanted for serious and violent offences,” it adds.
It also claims cameras will be “clearly signposted”, adding that officers “deployed to the operation will hand out leaflets about the activity”.
“At a deployment, cameras will be focused on a small, targeted area to scan passers-by,” it writes. “The technology, which is a standalone system, is not linked to any other imaging system, such as CCTV, body worn video or ANPR.”
The biometric system is being provided to the Met by Japanese IT and electronics giant, NEC.
In a press statement, assistant commissioner Nick Ephgrave claimed the force is taking a balanced approach to using the controversial tech.
“We all want to live and work in a city which is safe: the public rightly expect us to use widely available technology to stop criminals. Equally I have to be sure that we have the right safeguards and transparency in place to ensure that we protect people’s privacy and human rights. I believe our careful and considered deployment of live facial recognition strikes that balance,” he said.
London has seen a rise in violent crime in recent years, with murder rates hitting a ten-year peak last year.
The surge in violent crime has been linked to cuts to policing services — although the new Conservative government has pledged to reverse cuts enacted by earlier Tory administrations.
The Met says its hope for the AI-powered tech is that it will help it tackle serious crime, including serious violence, gun and knife crime, child sexual exploitation, and “help protect the vulnerable”.
However its phrasing is not a little ironic, given that facial recognition systems can be prone to racial bias, for example, owing to factors such as bias in data-sets used to train AI algorithms.
So in fact there’s a risk that police-use of facial recognition could further harm vulnerable groups who already face a disproportionate risk of inequality and discrimination.
Yet the Met’s PR doesn’t mention the risk of the AI tech automating bias.
Instead it takes pains to couch the technology as an “additional tool” to assist its officers.
“This is not a case of technology taking over from traditional policing; this is a system which simply gives police officers a ‘prompt’, suggesting “that person over there may be the person you’re looking for”, it is always the decision of an officer whether or not to engage with someone,” it adds.
While the use of a new tech tool may start with small deployments, as is being touted here, the history of software development underlines how the potential to scale is readily baked in.
A ‘targeted’ small-scale launch also prepares the ground for London’s police force to push for wider public acceptance of a highly controversial and rights-hostile technology via a gradual building out process. Aka surveillance creep.
On the flip side, the text of the draft of an EU proposal for regulating AI which leaked last week — floating the idea of a temporary ban on facial recognition in public places — noted that a ban would “safeguard the rights of individuals”. Although it’s not yet clear whether the Commission will favor such a blanket measure, even temporarily.
UK rights groups have reacted with alarm to the Met’s decision to ignore concerns about facial recognition.
Liberty accused the force of ignoring the conclusion of a report it commissioned during an earlier trial of the tech — which it says concluded the Met had failed to consider human rights impacts.
It also suggested such use would not meet key legal requirements.
“Human rights law requires that any interference with individuals’ rights be in accordance with the law, pursue a legitimate aim, and be ‘necessary in a democratic society’,” the report notes, suggesting the Met earlier trials of facial recognition tech “would be held unlawful if challenged before the courts”.

When the Met trialled #FacialRecognition tech, it commissioned an independent review of its use.
Its conclusions:
The Met failed to consider the human rights impact of the tech
Its use was unlikely to pass the key legal test of being “necessary in a democratic society”
— Liberty (@libertyhq) January 24, 2020

A petition set up by Liberty to demand a stop to facial recognition in public places has passed 21,000 signatures.
Discussing the legal framework around facial recognition and law enforcement last week, Dr Michael Veale, a lecturer in digital rights and regulation at UCL, told us that in his view the EU’s data protection framework, GDPR, forbids facial recognition by private companies “in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate”.
A UK man who challenged a Welsh police force’s trial of facial recognition has a pending appeal after losing the first round of a human rights challenge. Although in that case the challenge pertains to police use of the tech — rather than, as in the Met’s case, a private company (NEC) providing the service to the police.

UN report says malware built by NSO Group ‘most likely’ used in Bezos phone hack

A new United Nations report says a mobile hacking tool built by mobile spyware maker NSO Group was “most likely” used to hack into Amazon founder Jeff Bezos’ phone.
The report, published by U.N. human rights experts on Wednesday, said the Israel-based spyware maker likely used its Pegasus mobile spyware to exfiltrate gigabytes of data from Bezos’ phone in May 2018, about six months after the Saudi government first obtained the spyware.
It comes a day after reports emerged, citing a forensics report commissioned by the Amazon founder, that the malware was delivered from a number belonging to Saudi crown prince Mohammed bin Salman. The report said it was “highly probable” that the phone hack was triggered by a malicious video sent over WhatsApp to Bezos’ phone.
Within hours, large amounts of data on Bezos’ phone had been exfiltrated.
NSO Group said in a statement that its technology “was not used in this instance,” saying its technology “cannot be used on U.S. phone numbers.” The company said any suggestion otherwise was “defamatory” and threatened legal action.
But the report left open the possibility that technology developed by another mobile spyware maker may have been used.
U.N. experts Agnes Callamard and David Kaye, who authored the report, said the breach of Bezos’ phone was part of “a pattern of targeted surveillance of perceived opponents and those of broader strategic importance to the Saudi authorities.”
Forensics experts are said to have begun looking at Bezos’ phone after he accused the National Enquirer of blackmail last year. In a tell-all Medium post, Bezos described how he was targeted by the tabloid, which obtained and published private text messages and photos from his phone, prompting an investigation into the leak. The subsequent forensic report, which TechCrunch has not yet seen, claims the initial breach began after Bezos and the Saudi crown prince exchanged phone numbers in April 2018, a month before the hack.
The report said several other prominent figures, including Saudi dissidents and political activists, also had their phones infected with the same mobile malware around the time of the Bezos phone breach. Those whose phones were infected included people close to Jamal Khashoggi, a prominent Saudi critic and columnist for the Washington Post — which Bezos owns — who was murdered five months later. U.S. intelligence concluded that bin Salman ordered Khashoggi’s death.
The U.N. experts said the Saudis purchased the Pegasus malware, and used WhatsApp as a way to deliver the malware to Bezos’ phone.
WhatsApp, which is owned by Facebook, filed a lawsuit against the NSO Group for creating and using the Pegasus malware, which exploits a since-fixed vulnerability in the messaging platform. Once exploited, sometimes silently and without the target knowing, the operators can download data from the user’s device. Facebook said at the time that the malware was delivered to more than 1,400 targeted devices.
The U.N. experts said they will continue to investigate the “growing role of the surveillance industry” used for targeting journalists, human rights defenders, and owners of media outlets.
Amazon did not immediately comment.

WhatsApp blames — and sues — mobile spyware maker NSO Group over its zero-day calling exploit

Where top VCs are investing in adtech and martech

Lately, the venture community’s relationship with advertising tech has been a rocky one.
Advertising is no longer the venture oasis it was in the past, with the flow of VC dollars in the space dropping dramatically in recent years. According to data from Crunchbase, adtech deal flow has fallen at a roughly 10% compounded annual growth rate over the last five years.
While subsectors like privacy or automation still manage to pull in funding, with an estimated 90%-plus of digital ad spend growth going to incumbent behemoths like Facebook and Google, the number of high-growth opportunities in the adtech space seems to narrow by the week.
Despite these pains, funding for marketing technology has remained much more stable and healthy; over the last five years, deal flow in marketing tech has only dropped at a 3.5% compounded annual growth rate according to Crunchbase, with annual invested capital in the space hovering just under $2 billion.
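To put those compounded rates in cumulative terms, here is a quick back-of-the-envelope calculation using the figures above:

```python
# What a compounded annual decline implies over five years (back-of-the-envelope).
def cumulative_change(annual_rate: float, years: int = 5) -> float:
    """annual_rate of -0.10 means a 10% drop per year, compounded."""
    return (1 + annual_rate) ** years - 1

print(f"Adtech deal flow, ~-10% CAGR:   {cumulative_change(-0.10):.0%} over 5 years")   # roughly -41%
print(f"Martech deal flow, ~-3.5% CAGR: {cumulative_change(-0.035):.0%} over 5 years")  # roughly -16%
```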
Given the movement in the adtech and martech sectors, we wanted to try to gauge where opportunity still exists in the verticals and which startups may have the best chance at attracting venture funding today. We asked four leading VCs who work at firms spanning early to growth stages to share what’s exciting them most and where they see opportunity in marketing and advertising:
Christine Tsai, 500 Startups
Scott Friend, Bain Capital Ventures
Eric Franchi, MathCapital
Jon Keidan, Torch Capital
Several of the firms we spoke to (both included and not included in this survey) stated that they are not actively investing in advertising tech at present.

UK watchdog sets out “age appropriate” design code for online services to keep kids’ privacy safe

The UK’s data protection watchdog has today published a set of design standards for Internet services which are intended to help protect the privacy of children online.
The Information Commissioner’s Office (ICO) has been working on the Age Appropriate Design Code since the 2018 update of domestic data protection law — as part of a government push to create ‘world-leading’ standards for children when they’re online.
UK lawmakers have grown increasingly concerned about the ‘datafication’ of children when they go online and may be too young to legally consent to being tracked and profiled under existing European data protection law.
The ICO’s code is comprised of 15 standards of what it calls “age appropriate design” — which the regulator says reflects a “risk-based approach”, including stipulating that settings should be set by default to ‘high privacy’; that only the minimum amount of data needed to provide the service should be collected and retained; and that children’s data should not be shared unless there’s a reason to do so that’s in their best interests.
Profiling should also be off by default. The code additionally takes aim at dark-pattern UI designs that seek to manipulate users into acting against their own interests, saying “nudge techniques” should not be used to “lead or encourage children to provide unnecessary personal data or weaken or turn off their privacy protections”.
“The focus is on providing default settings which ensures that children have the best possible access to online services whilst minimising data collection and use, by default,” the regulator writes in an executive summary.
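As a purely illustrative sketch of what ‘high privacy by default’ might look like in an app’s settings model (the field names are hypothetical and not taken from the ICO’s text):

```python
# Hypothetical illustration of 'high privacy by default' settings for a child user.
# Field names are made up; the ICO code describes outcomes, not an implementation.
from dataclasses import dataclass

@dataclass
class AccountSettings:
    geolocation_enabled: bool = False          # geolocation off by default
    profiling_enabled: bool = False            # profiling off by default
    profile_indexable_by_search: bool = False  # not discoverable by default
    share_data_with_third_parties: bool = False
    personalised_ads: bool = False

# A new account starts from the most protective settings; anything less
# restrictive has to be an explicit, informed choice by the user.
settings = AccountSettings()
print(settings)
```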
While the age appropriate design code is focused on protecting children, it applies to a very broad range of online services — with the regulator noting that “the majority of online services that children use are covered” and also stipulating “this code applies if children are likely to use your service” [emphasis ours].
This means it could be applied to anything from games, to social media platforms to fitness apps to educational websites and on-demand streaming services — if they’re available to UK users.
“We consider that for a service to be ‘likely’ to be accessed [by children], the possibility of this happening needs to be more probable than not. This recognises the intention of Parliament to cover services that children use in reality, but does not extend the definition to cover all services that children could possibly access,” the ICO adds.
Here are the 15 standards in full as the regulator describes them:
Best interests of the child: The best interests of the child should be a primary consideration when you design and develop online services likely to be accessed by a child.
Data protection impact assessments: Undertake a DPIA to assess and mitigate risks to the rights and freedoms of children who are likely to access your service, which arise from your data processing. Take into account differing ages, capacities and development needs and ensure that your DPIA builds in compliance with this code.
Age appropriate application: Take a risk-based approach to recognising the age of individual users and ensure you effectively apply the standards in this code to child users. Either establish age with a level of certainty that is appropriate to the risks to the rights and freedoms of children that arise from your data processing, or apply the standards in this code to all your users instead.
Transparency: The privacy information you provide to users, and other published terms, policies and community standards, must be concise, prominent and in clear language suited to the age of the child. Provide additional specific ‘bite-sized’ explanations about how you use personal data at the point that use is activated.
Detrimental use of data: Do not use children’s personal data in ways that have been shown to be detrimental to their wellbeing, or that go against industry codes of practice, other regulatory provisions or Government advice.
Policies and community standards: Uphold your own published terms, policies and community standards (including but not limited to privacy policies, age restriction, behaviour rules and content policies).
Default settings: Settings must be ‘high privacy’ by default (unless you can demonstrate a compelling reason for a different default setting, taking account of the best interests of the child).
Data minimisation: Collect and retain only the minimum amount of personal data you need to provide the elements of your service in which a child is actively and knowingly engaged. Give children separate choices over which elements they wish to activate.
Data sharing: Do not disclose children’s data unless you can demonstrate a compelling reason to do so, taking account of the best interests of the child.
Geolocation: Switch geolocation options off by default (unless you can demonstrate a compelling reason for geolocation to be switched on by default, taking account of the best interests of the child). Provide an obvious sign for children when location tracking is active. Options which make a child’s location visible to others must default back to ‘off’ at the end of each session.
Parental controls: If you provide parental controls, give the child age appropriate information about this. If your online service allows a parent or carer to monitor their child’s online activity or track their location, provide an obvious sign to the child when they are being monitored.
Profiling: Switch options which use profiling ‘off’ by default (unless you can demonstrate a compelling reason for profiling to be on by default, taking account of the best interests of the child). Only allow profiling if you have appropriate measures in place to protect the child from any harmful effects (in particular, being fed content that is detrimental to their health or wellbeing).
Nudge techniques: Do not use nudge techniques to lead or encourage children to provide unnecessary personal data or weaken or turn off their privacy protections.
Connected toys and devices: If you provide a connected toy or device ensure you include effective tools to enable conformance to this code.
Online tools: Provide prominent and accessible tools to help children exercise their data protection rights and report concerns.
The Age Appropriate Design Code also defines children as under the age of 18 — which sets a higher bar than current UK data protection law, which, for example, sets 13 as the age at which children can legally consent to being tracked online.
So — assuming (very wildly) — that Internet services were to suddenly decide to follow the code to the letter, setting trackers off by default and not nudging users to weaken privacy-protecting defaults by manipulating them to give up more data, the code could — in theory — raise the level of privacy both children and adults typically get online.
However it’s not legally binding — so there’s a pretty fat chance of that.
Although the regulator does make a point of noting that the standards in the code are backed by existing data protection laws, which it does regulate and can legally enforce — pointing out that it has powers to take action against law breakers, including “tough sanctions” such as orders to stop processing data and fines of up to 4% of a company’s global turnover.
So, in a way, the regulator appears to be saying: ‘Are you feeling lucky data punk?’
Last April the UK government published a white paper setting out its proposals for regulating a range of online harms — including seeking to address concern about inappropriate material that’s available on the Internet being accessed by children.
The ICO’s Age Appropriate Design Code is intended to support that effort. So there’s also a chance that some of the same sorts of stipulations could be baked into the planned online harms bill.
“This is not, and will not be, ‘law’. It is just a code of practice,” said Neil Brown, an Internet, telecoms and tech lawyer at Decoded Legal, discussing the likely impact of the suggested standards. “It shows the direction of the ICO’s thinking, and its expectations, and the ICO has to have regard to it when it takes enforcement action but it’s not something with which an organisation needs to comply as such. They need to comply with the law, which is the GDPR [General Data Protection Regulation] and the DPA [Data Protection Act] 2018.
“The code of practice sits under the DPA 2018, so companies which are within the scope of that are likely to want to understand what it says. The DPA 2018 and the UK GDPR (the version of the GDPR which will be in place after Brexit) covers controllers established in the UK, as well as overseas controllers which target services to people in the UK or monitor the behaviour of people in the UK. Merely making a service available to people in the UK should not be sufficient.”
“Overall, this is consistent with the general direction of travel for online services, and the perception that more needs to be done to protect children online,” Brown also told us.
“Right now, online services should be working out how to comply with the GDPR, the ePrivacy rules, and any other applicable laws. The obligation to comply with those laws does not change because of today’s code of practice. Rather, the code of practice shows the ICO’s thinking on what compliance might look like (and, possibly, goldplates some of the requirements of the law too).”
Organizations that choose to take note of the code — and are in a position to be able to demonstrate they’ve followed its standards — stand a better chance of persuading the regulator they’ve complied with relevant privacy laws, per Brown.
“Conversely, if they want to say that they comply with the law but not with the code, that is (legally) possible, but might be more of a struggle in terms of engagement with the ICO,” he added.
Zooming back out, the government said last fall that it’s committed to publishing draft online harms legislation for pre-legislative scrutiny “at pace”.
But at the same time it dropped a controversial plan included in a 2017 piece of digital legislation which would have made age checks for accessing online pornography mandatory — saying it wanted to focus on developing “the most comprehensive approach possible to protecting children”, i.e. via the online harms bill.

UK quietly ditches porn age checks in favor of wider online harms rules

How comprehensive the touted ‘child protections’ will end up being remains to be seen.
Brown suggested age verification could come through as a “general requirement”, given the age verification component of the Digital Economy Act 2017 was dropped — and “the government has said that these will be swept up in the broader online harms piece”.
It has also been consulting with tech companies on possible ways to implement age verification online.
The difficulties of regulating perpetually iterating Internet services — many of which are also operated by companies based outside the UK — have been writ large for years. (And are mired in geopolitics.)
While the enforcement of existing European digital privacy laws remains, to put it politely, a work in progress…

Privacy experts slam UK’s ‘disastrous’ failure to tackle unlawful adtech

Adblock Plus’s Till Faida on the shifting shape of ad blocking

Publishers hate ad blockers, but millions of internet users embrace them — and many browsers even bake it in as a feature, including Google’s own Chrome. At the same time, growing numbers of publishers are walling off free content for visitors who hard-block ads, even asking users directly to be whitelisted.
It’s a fight for attention from two very different sides.
Some form of ad blocking is here to stay, so long as advertisements are irritating and the adtech industry remains deaf to genuine privacy reform. Although the nature of the ad-blocking business is generally closer to filtering than blocking, where is it headed?
We chatted with Till Faida, co-founder and CEO of eyeo, maker of Adblock Plus (ABP), to take the temperature of an evolving space that’s never been a stranger to controversy — including fresh calls for his company to face antitrust scrutiny.

Google’s Sundar Pichai doesn’t want you to be clear-eyed about AI’s dangers

Alphabet and Google CEO, Sundar Pichai, is the latest tech giant kingpin to make a public call for AI to be regulated while simultaneously encouraging lawmakers towards a dilute enabling framework that does not put any hard limits on what can be done with AI technologies.
In an op-ed published in today’s Financial Times, Pichai makes a headline-grabbing call for artificial intelligence to be regulated. But his pitch injects a suggestive undercurrent that puffs up the risk for humanity of not letting technologists get on with business as usual and apply AI at population-scale — with the Google chief claiming: “AI has the potential to improve billions of lives, and the biggest risk may be failing to do so” — thereby seeking to frame ‘no hard limits’ as actually the safest option for humanity.
Simultaneously the pitch downplays any negatives that might cloud the greater good that Pichai implies AI will unlock — presenting “potential negative consequences” as simply the inevitable and necessary price of technological progress.
It’s all about managing the level of risk, is the leading suggestion, rather than questioning outright whether the use of a hugely risk-laden technology such as facial recognition should actually be viable in a democratic society.
“Internal combustion engines allowed people to travel beyond their own areas but also caused more accidents,” Pichai writes, raiding history for a self-serving example while ignoring the vast climate costs of combustion engines (and the resulting threat now posed to the survival of countless species on Earth).
“The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread,” he goes on. “These lessons teach us that we need to be clear-eyed about what could go wrong.”
For “clear-eyed” read: Accepting of the technology-industry’s interpretation of ‘collateral damage’. (Which, in the case of misinformation and Facebook, appears to run to feeding democracy itself into the ad-targeting meat-grinder.)
Meanwhile, not at all mentioned in Pichai’s discussion of AI risks: The concentration of monopoly power that artificial intelligence appears to be very good at supercharging.
Funny that.
Of course it’s hardly surprising a tech giant that, in recent years, rebranded an entire research division to ‘Google AI’ — and has previously been called out by some of its own workforce over a project involving applying AI to military weapons technology — should be lobbying lawmakers to set AI ‘limits’ that are as dilute and abstract as possible.
The only thing that’s better than zero regulation is laws made by useful idiots who’ve fallen hook, line and sinker for industry-expounded false dichotomies — such as those claiming it’s ‘innovation or privacy’.
Pichai’s intervention also comes at a strategic moment, with US lawmakers eyeing AI regulation and the White House seemingly throwing itself into alignment with tech giants’ desires for ‘innovation-friendly’ rules which make their business easier. (To wit: This month White House CTO Michael Kratsios warned in a Bloomberg op-ed against “preemptive, burdensome or duplicative rules that would needlessly hamper AI innovation and growth”.)
The new European Commission, meanwhile, has been sounding a firmer line on both AI and big tech.
It has made tech-driven change a key policy priority, with president Ursula von der Leyen making public noises about reining in tech giants. She has also committed to publish “a coordinated European approach on the human and ethical implications of Artificial Intelligence” within her first 100 days in office. (She took up the post on December 1, 2019 so the clock is ticking.)
Last week a leaked draft of the Commission proposals for pan-EU AI regulation suggests it’s leaning towards a relatively light-touch approach (albeit, the European version of light touch is considerably more involved and interventionist than anything born in a Trump White House, clearly) — although the paper does float the idea of a temporary ban on the use of facial recognition technology in public places.
The paper notes that such a ban would “safeguard the rights of individuals, in particular against any possible abuse of the technology” — before arguing against such a “far-reaching measure that might hamper the development and uptake of this technology”, in favor of relying on provisions in existing EU law (such as the EU data protection framework, GDPR), in addition to relevant tweaks to current product safety and liability laws.
While it’s not yet clear which way the Commission will jump on regulating AI, even the lightish-touch version it’s considering would likely be a lot more onerous than Pichai would like.
In the op-ed he calls for what he couches as “sensible regulation” — aka taking a “proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities”.
For “social opportunities” read: The plentiful ‘business opportunities’ Google is spying — assuming the hoped-for vast additional revenue scale it can get by supercharging expansion of AI-powered services into all sorts of industries and sectors (from health to transportation to everywhere else in between) isn’t derailed by hard legal limits on where AI can actually be applied.
“Regulation can provide broad guidance while allowing for tailored implementation in different sectors,” Pichai urges, setting out a preference for enabling “principles” and post-application “reviews”, to keep the AI spice flowing.
The op-ed only touches very briefly on facial recognition — despite the FT editors choosing to illustrate it with an image of the tech. Here Pichai again seeks to reframe the debate around what is, by nature, an extremely rights-hostile technology — talking only in passing of “nefarious uses” of facial recognition.
Of course this wilfully obfuscates the inherent risks of letting blackbox machines make algorithmic guesses at identity every time a face happens to pass through a public space.
You can’t hope to protect people’s privacy in such a scenario. Many other rights are also at risk, depending on what else the technology is being used for. So, really, any use of facial recognition is laden with individual and societal risk.
But Pichai is seeking to put blinkers on lawmakers. He doesn’t want them to see inherent risks baked into such a potent and powerful technology — pushing them towards only a narrow, ill-intended subset of “nefarious” and “negative” AI uses and “consequences” as being worthy of “real concerns”. 
And so he returns to banging the drum for “a principled and regulated approach to applying AI” [emphasis ours] — putting the emphasis on regulation that, above all, gives the green light for AI to be applied.
What technologists fear most here is rules that tell them when artificial intelligence absolutely cannot apply.
Ethics and principles are, to a degree, mutable concepts — and ones which the tech giants have become very practiced at claiming as their own, for PR purposes, including by attaching self-styled ‘guard-rails’ to their own AI operations. (But of course there are no actual legal binds there.)
At the same time data-mining giants like Google are very smooth operators when it comes to gaming existing EU rules around data protection, such as by infesting their user-interfaces with confusing dark patterns that push people to click or swipe their rights away.
But a ban on applying certain types of AI would change the rules of the game. Because it would put society in the driving seat.
Some far-sighted regulators have called for laws containing at least a moratorium on certain “dangerous” applications of AI — such as facial recognition technology, or autonomous weapons like the drone-based system Google was previously working on.
And a ban would be far harder for platform giants to simply bend to their will.

So for a while I was willing to buy into the whole tech ethics thing but now I’m fully on the side of tech refusal. We need to be teaching refusal.
— Jonathan Senchyne (@jsench) January 16, 2020

A 10-point plan to reboot the data industrial complex for the common good

EU lawmakers are eyeing risk-based rules for AI, per leaked white paper

The European Commission is considering a temporary ban on the use of facial recognition technology, according to a draft proposal for regulating artificial intelligence obtained by Euractiv.
Creating rules to ensure AI is ‘trustworthy and human’ has been an early flagship policy promise of the new Commission, led by president Ursula von der Leyen.
But the leaked proposal suggests the EU’s executive body is in fact leaning towards tweaks of existing rules and sector/app specific risk-assessments and requirements, rather than anything as firm as blanket sectoral requirements or bans.
The leaked Commission white paper floats the idea of a three-to-five-year period in which the use of facial recognition technology could be prohibited in public places — to give EU lawmakers time to devise ways to assess and manage risks around the use of the technology, such as to people’s privacy rights or the risk of discriminatory impacts from biased algorithms.
“This would safeguard the rights of individuals, in particular against any possible abuse of the technology,” the Commission writes, adding that: “It would be necessary to foresee some exceptions, notably for activities in the context of research and development and for security purposes.”
However the text raises immediate concerns about imposing even a time-limited ban — which is described as “a far-reaching measure that might hamper the development and uptake of this technology” — and the Commission goes on to state that its preference “at this stage” is to rely on existing EU data protection rules, aka the General Data Protection Regulation (GDPR).
The white paper contains a number of options the Commission is still considering for regulating the use of artificial intelligence more generally.
These range from voluntary labelling; to imposing sectoral requirements for the public sector (including on the use of facial recognition tech); to mandatory risk-based requirements for “high-risk” applications (such as within risky sectors like healthcare, transport, policing and the judiciary, as well as for applications which can “produce legal effects for the individual or the legal entity or pose risk of injury, death or significant material damage”); to targeted amendments to existing EU product safety and liability legislation.
The proposal also emphasizes the need for an oversight governance regime to ensure rules are followed — though the Commission suggests leaving it open to Member States to choose whether to rely on existing governance bodies for this task or create new ones dedicated to regulating AI.
Per the draft white paper, the Commission says its preference for regulating AI is option 3 combined with options 4 and 5: Aka mandatory risk-based requirements on developers (of whatever sub-set of AI apps are deemed “high-risk”) that could result in some “mandatory criteria”, combined with relevant tweaks to existing product safety and liability legislation, and an overarching governance framework.
Hence it appears to be leaning towards a relatively light-touch approach, focused on “building on existing EU legislation” and creating app-specific rules for a sub-set of “high-risk” AI apps/uses — and which likely won’t stretch to even a temporary ban on facial recognition technology.
Much of the white paper is also taken up with discussion of strategies about “supporting the development and uptake of AI” and “facilitating access to data”.
“This risk-based approach would focus on areas where the public is at risk or an important legal interest is at stake,” the Commission writes. “This strictly targeted approach would not add any new additional administrative burden on applications that are deemed ‘low-risk’.”
EU commissioner Thierry Breton, who oversees the internal market portfolio, expressed resistance to creating rules for artificial intelligence last year — telling the EU parliament then that he “won’t be the voice of regulating AI”.
For “low-risk” AI apps, the white paper notes that provisions in the GDPR which give individuals the right to receive information about automated processing and profiling, and set a requirement to carry out a data protection impact assessment, would apply.
Albeit the regulation only defines limited rights and restrictions over automated processing — in instances where there’s a legal or similarly significant effect on the people involved. So it’s not clear how extensively it would in fact apply to “low-risk” apps.
If it’s the Commission’s intention to also rely on GDPR to regulate higher risk stuff — such as, for example, police forces’ use of facial recognition tech — instead of creating a more explicit sectoral framework to restrict their use of highly privacy-hostile AI technologies — it could exacerbate an already confusing legislative picture where law enforcement is concerned, according to Dr Michael Veale, a lecturer in digital rights and regulation at UCL.
“The situation is extremely unclear in the area of law enforcement, and particularly the use of public private partnerships in law enforcement. I would argue the GDPR in practice forbids facial recognition by private companies in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate. However, the merchants of doubt at facial recognition firms wish to sow heavy uncertainty into that area of law to legitimise their businesses,” he told TechCrunch.
“As a result, extra clarity would be extremely welcome,” Veale added. “The issue isn’t restricted to facial recognition however: Any type of biometric monitoring, such as voice or gait recognition, should be covered by any ban, because in practice they have the same effect on individuals.”
An advisory body set up to advise the Commission on AI policy set out a number of recommendations in a report last year — including suggesting a ban on the use of AI for mass surveillance and social credit scoring systems of citizens.
But its recommendations were criticized by privacy and rights experts for falling short by failing to grasp wider societal power imbalances and structural inequality issues which AI risks exacerbating — including by supercharging existing rights-eroding business models.
In a paper last year Veale dubbed the advisory body’s work a “missed opportunity” — writing that the group “largely ignore infrastructure and power, which should be one of, if not the most, central concern around the regulation and governance of data, optimisation and ‘artificial intelligence’ in Europe going forwards”.

Privacy experts slam UK’s “disastrous” failure to tackle unlawful adtech

The UK’s data protection regulator has been slammed by privacy experts for once again failing to take enforcement action over systematic breaches of the law linked to behaviorally targeted ads — despite warning last summer that the adtech industry is out of control.
The Information Commissioner’s Office (ICO) has also previously admitted it suspects the real-time bidding (RTB) system involved in some programmatic online advertising to be unlawfully processing people’s sensitive information. But rather than take any enforcement against companies it suspects of law breaches it has today issued another mildly worded blog post — in which it frames what it admits is a “systemic problem” as fixable via (yet more) industry-led “reform”.
Yet it’s exactly such industry-led self-regulation that’s created the unlawful adtech mess in the first place, data protection experts warn.
The pervasive profiling of Internet users by the adtech ‘data industrial complex’ has been coming under wider scrutiny by lawmakers and civic society in recent years — with sweeping concerns being raised in parliaments around the world that individually targeted ads provide a conduit for discrimination, exploit the vulnerable, accelerate misinformation and undermine democratic processes as a consequence of platform asymmetries and the lack of transparency around how ads are targeted.
In Europe, which has a comprehensive framework of data protection rights, the core privacy complaint is that these creepy individually targeted ads rely on a systemic violation of people’s privacy from what amounts to industry-wide, Internet-enabled mass surveillance — which also risks the security of people’s data at vast scale.
It’s now almost a year and a half since the ICO was the recipient of a major complaint into RTB — filed by Dr Johnny Ryan of private browser Brave; Jim Killock, director of the Open Rights Group; and Dr Michael Veale, a data and policy lecturer at University College London — laying out what the complainants described then as “wide-scale and systemic” breaches of Europe’s data protection regime.
The complaint — which has also been filed with other EU data protection agencies — argues that the systematic broadcasting of people’s personal data to bidders in the adtech chain is inherently insecure and thereby contravenes Europe’s General Data Protection Regulation (GDPR), which stipulates that personal data be processed “in a manner that ensures appropriate security of the personal data”.
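To make the scale of that broadcast concrete, here is a minimal, illustrative TypeScript sketch of the kind of payload a bid request can carry and how it fans out to bidders. The field names are simplified approximations of an OpenRTB-style request, not the IAB specification itself, and the fan-out function is hypothetical.

```typescript
// Illustrative only: a simplified, OpenRTB-style bid request shape showing the
// kinds of personal data the complainants say get broadcast to bidders. Field
// names are approximations, not the IAB specification itself.
interface IllustrativeBidRequest {
  id: string;                                  // auction ID for this impression
  site?: { domain: string; page: string };     // the page the user is reading
  device?: {
    ua: string;                                // user agent string
    ip: string;                                // IP address (reveals rough location)
    ifa?: string;                              // device advertising identifier
    geo?: { lat: number; lon: number };        // precise location, where available
  };
  user?: {
    id?: string;                               // exchange's pseudonymous user ID
    buyeruid?: string;                         // bidder-matched ID via cookie syncing
  };
}

// Hypothetical fan-out: a single page view can send this payload to hundreds of
// bidder endpoints, each of which receives the data whether or not it wins.
async function broadcastToBidders(req: IllustrativeBidRequest, bidders: string[]): Promise<void> {
  await Promise.all(
    bidders.map((endpoint) =>
      fetch(endpoint, { method: "POST", body: JSON.stringify(req) }).catch(() => undefined)
    )
  );
}
```

It is this one-to-many distribution, with no technical control over what recipients do with the data once received, that the complaint frames as inherently insecure.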
The regulation also requires data processors to have a valid legal basis for processing people’s information in the first place — and RTB fails that test, per privacy experts — either if ‘consent’ is claimed (given the sheer number of entities and volumes of data being passed around, which means it’s not credible to achieve GDPR’s ‘informed, specific and freely given’ threshold for consent to be valid); or ‘legitimate interests’ — which requires data processors carry out a number of balancing assessment tests to demonstrate it does actually apply.
“We have reviewed a number of justifications for the use of legitimate interests as the lawful basis for the processing of personal data in RTB. Our current view is that the justification offered by organisations is insufficient,” writes Simon McDougall, the ICO’s executive director of technology and innovation, in a warning over the industry’s rampant misuse of legitimate interests to try to pass off RTB’s unlawful data processing as legit.
The ICO also isn’t exactly happy about what it’s found adtech doing on the Data Protection Impact Assessment front — saying, in so many words, that it’s come across widespread industry failure to actually, er, assess impacts.
“The Data Protection Impact Assessments we have seen have been generally immature, lack appropriate detail, and do not follow the ICO’s recommended steps to assess the risk to the rights and freedoms of the individual,” writes McDougall.
“We have also seen examples of basic data protection controls around security, data retention and data sharing being insufficient,” he adds.
Yet — again — despite fresh admissions of adtech’s lawfulness problem the regulator is choosing more stale inaction.
In the blog post McDougall does not rule out taking “formal” action at some point — but there’s only a vague suggestion of such activity being possible, and zero timeline for “develop[ing] an appropriate regulatory response”, as he puts it. (His preferred ‘E’ word in the blog is ‘engagement’; you’ll only find the word ‘enforcement’ in the footer link on the ICO’s website.)
“We will continue to investigate RTB. While it is too soon to speculate on the outcome of that investigation, given our understanding of the lack of maturity in some parts of this industry we anticipate it may be necessary to take formal regulatory action and will continue to progress our work on that basis,” he adds.
McDougall also trumpets some incremental industry fiddling — such as trade bodies agreeing to update their guidance — as somehow relevant to turning the tanker in a fundamentally broken system.
(Trade body, the Internet Advertising Bureau’s UK branch, has responded to developments with an upbeat note from its head of policy and regulatory affairs, Christie Dennehy-Neil, who lauds the ICO’s engagement as “a constructive process”, claiming: “We have made good progress” — before going on to urge its members and the wider industry to implement “the actions outlined in our response to the ICO” and “deliver meaningful change”. The statement climaxes with: “We look forward to continuing to engage with the ICO as this process develops.”)
McDougall also points to Google removing content categories from its RTB platform from next month (a move it announced months back, in November) as an important development; and seizes on the tech giant’s recent announcement of a proposal to phase out support for third party cookies within the next two years as ‘encouraging’.
Privacy experts have responded with facepalmed outrage to yet another can-kicking exercise by the UK regulator — warning that cosmetic tweaks to adtech won’t fix a system that’s designed to feast off unlawful and insecure high velocity background trading of Internet users’ personal data.
“When an industry is premised and profiting from clear and entrenched illegality that breach individuals’ fundamental rights, engagement is not a suitable remedy,” said UCL’s Veale. “The ICO cannot continue to look back at its past precedents for enforcement action, because it is exactly that timid approach that has led us to where we are now.”

ICO believes that cosmetic fixes can do the job when it comes to #adtech. But no matter how secure data flows are and how beautiful cookie notices are, can people really understand the consequences of their consent? I’m convinced that this consent will *never* be informed. 1/2 https://t.co/1avYt6lgV3
— Karolina Iwańska (@ka_iwanska) January 17, 2020

The trio behind the RTB complaints (which includes Veale) have also issued a scathing collective response to more “regulatory ambivalence” — denouncing the lack of any “substantive action to end the largest data breach ever recorded in the UK”.
“The ‘Real-Time Bidding’ data breach at the heart of RTB market exposes every person in the UK to mass profiling, and the attendant risks of manipulation and discrimination,” they warn. “Regulatory ambivalence cannot continue. The longer this data breach festers, the deeper the rot sets in and the further our data gets exploited. This must end. We are considering all options to put an end to the systemic breach, including direct challenges to the controllers and judicial oversight of the ICO.”
Wolfie Christl, a privacy researcher who focuses on adtech — including contributing to a recent study looking at how extensively popular apps are sharing user data with advertisers — dubbed the ICO’s response “disastrous”.
“Last summer the ICO stated in their report that millions of people were affected by thousands of companies’ GDPR violations. I was sceptical when they announced they would give the industry six more months without enforcing the law. My impression is they are trying to find a way to impose cosmetic changes and keep the data industry happy rather than acting on their own findings and putting an end to the ubiquitous data misuse in today’s digital marketing, which should have happened years ago. The ICO seems to prioritize appeasing the industry over the rights of data subjects, and this is disastrous,” he told us.
“The way data-driven online marketing currently works is illegal at scale and it needs to be stopped from happening,” Christl added. “Each day EU data protection authorities allow these practices to continue further violates people’s rights and freedoms and perpetuates a toxic digital economy.
“This undermines the GDPR and generally trust in tech, perpetuates legal uncertainty for businesses, and punishes companies who comply and create privacy-respecting services and business models. 20 months after the GDPR came into full force, it is still not enforced in major areas. We still see large-scale misuse of personal information all over the digital world. There is no GDPR enforcement against the tech giants and there is no enforcement against thousands of data companies beyond the large platforms. It seems that data protection authorities across the EU are either not able — or not willing — to stop many kinds of GDPR violations conducted for business purposes. We won’t see any change without massive fines and data processing bans. EU member states and the EU commission must act.”

Mass surveillance for national security does conflict with EU privacy rights, court advisor suggests

Mass surveillance regimes in the UK, Belgium and France which require bulk collection of digital data for a national security purpose may be at least partially in breach of fundamental privacy rights of European Union citizens, per the opinion of an influential advisor to Europe’s top court issued today.
Advocate general Campos Sánchez-Bordona’s (non-legally binding) opinion, which pertains to four references to the Court of Justice of the European Union (CJEU), takes the view that EU law covering the privacy of electronic communications applies in principle when providers of digital services are required by national laws to retain subscriber data for national security purposes.
A number of cases related to EU states’ surveillance powers and citizens’ privacy rights are dealt with in the opinion, including legal challenges brought by rights advocacy group Privacy International to bulk collection powers enshrined in the UK’s Investigatory Powers Act; and a La Quadrature du Net (and others’) challenge to a 2015 French decree related to specialized intelligence services.
At stake is a now familiar argument: Privacy groups contend that states’ bulk data collection and retention regimes have overreached the law, becoming so indiscriminately intrusive as to breach fundamental EU privacy rights — while states counter-claim they must collect and retain citizens’ data in bulk in order to fight national security threats such as terrorism.
Hence, in recent years, we’ve seen attempts by certain EU Member States to create national frameworks which effectively rubberstamp swingeing surveillance powers — that then, in turn, invite legal challenge under EU law.
The AG opinion holds with previous case law from the CJEU — specifically the Tele2 Sverige and Watson judgments — that “general and indiscriminate retention of all traffic and location data of all subscribers and registered users is disproportionate”, as the press release puts it.
Instead the recommendation is for “limited and discriminate retention” — along with “limited access to that data”.
“The Advocate General maintains that the fight against terrorism must not be considered solely in terms of practical effectiveness, but in terms of legal effectiveness, so that its means and methods should be compatible with the requirements of the rule of law, under which power and strength are subject to the limits of the law and, in particular, to a legal order that finds in the defence of fundamental rights the reason and purpose of its existence,” runs the PR in a particularly elegant passage summarizing the opinion.
The French legislation is deemed to fail on a number of fronts, including for imposing “general and indiscriminate” data retention obligations, and for failing to include provisions to notify data subjects that their information is being processed by a state authority where such notifications are possible without jeopardizing its action.
Belgian legislation also falls foul of EU law, per the opinion, for imposing a “general and indiscriminate” obligation on digital service providers to retain data — with the AG also flagging that its objectives are problematically broad (“not only the fight against terrorism and serious crime, but also defence of the territory, public security, the investigation, detection and prosecution of less serious offences”).
The UK’s bulk surveillance regime is similarly seen by the AG to fail the core “general and indiscriminate collection” test.
There’s a slight carve out for national legislation that’s incompatible with EU law being, in Sánchez-Bordona’s view, permitted to maintain its effects “on an exceptional and temporary basis”. But only if such a situation is justified by what is described as “overriding considerations relating to threats to public security or national security that cannot be addressed by other means or other alternatives, but only for as long as is strictly necessary to correct the incompatibility with EU law”.
If the court follows the opinion it’s possible states might seek to interpret such an exceptional provision as a degree of wiggle room to keep unlawful regimes running further past their legal sell-by-date.
Similarly, there could be questions over what exactly constitutes “limited” and “discriminate” data collection and retention — which could encourage states to push a ‘maximal’ interpretation of where the legal line lies.
Nonetheless, privacy advocates are viewing the opinion as a positive sign for the defence of fundamental rights.
In a statement welcoming the opinion, Privacy International dubbed it “a win for privacy”. “We all benefit when robust rights schemes, like the EU Charter of Fundamental Rights, are applied and followed,” said legal director, Caroline Wilson Palow. “If the Court agrees with the AG’s opinion, then unlawful bulk surveillance schemes, including one operated by the UK, will be reined in.”
The CJEU will issue its ruling at a later date — typically three to six months after an AG opinion.
The opinion comes at a key time given European Commission lawmakers are set to rethink a plan to update the ePrivacy Directive, which deals with the privacy of electronic communications, after Member States failed to reach agreement last year over an earlier proposal for an ePrivacy Regulation — so the AG’s view will likely feed into that process.

This makes the revised e-Privacy Regulation a *huge* national security battleground for the MSes (they will miss the UK fighting for more surveillance) and is v relevant also to the ongoing debates on “bulk”/mass surveillance, and MI5’s latest requests… #ePR
— Ian Brown (@1Br0wn) January 15, 2020

The opinion may also have an impact on other legislative processes — such as the talks on the EU e-evidence package and negotiations on various international agreements on cross-border access to e-evidence — according to Luca Tosoni, a research fellow at the Norwegian Research Center for Computers and Law at the University of Oslo.
“It is worth noting that, under Article 4(2) of the Treaty on the European Union, “national security remains the sole responsibility of each Member State”. Yet, the advocate general’s opinion suggests that this provision does not exclude that EU data protection rules may have direct implications for national security,” Tosoni also pointed out. 
“Should the Court decide to follow the opinion… ‘metadata’ such as traffic and location data will remain subject to a high level of protection in the European Union, even when they are accessed for national security purposes.  This would require several Member States — including Belgium, France, the UK and others — to amend their domestic legislation.”

Yes, the U.K. now has a law to log web users’ browsing behavior, hack devices and limit encryption

Dating and fertility apps among those snitching to “out of control” adtech, report finds

The latest report to warn that surveillance capitalism is out of control — and ‘free’ digital services can in fact be very costly to people’s privacy and rights — comes courtesy of the Norwegian Consumer Council which has published an analysis of how popular apps are sharing user data with the behavioral ad industry.
It suggests smartphone users have little hope of escaping adtech’s pervasive profiling machinery — short of not using a smartphone at all.
A majority of the apps that were tested for the report were found to transmit data to “unexpected third parties” — with users not being clearly informed about who was getting their information and what they were doing with it. Most of the apps also did not provide any meaningful options or on-board settings for users to prevent or reduce the sharing of data with third parties.
“The evidence keeps mounting against the commercial surveillance systems at the heart of online advertising,” the Council writes, dubbing the current situation “completely out of control, harming consumers, societies, and businesses”, and calling for curbs to prevalent practices in which app users’ personal data is broadcast and spread “with few restraints”. 
“The multitude of violations of fundamental rights are happening at a rate of billions of times per second, all in the name of profiling and targeting advertising. It is time for a serious debate about whether the surveillance-driven advertising systems that have taken over the internet, and which are economic drivers of misinformation online, is a fair trade-off for the possibility of showing slightly more relevant ads.
“The comprehensive digital surveillance happening across the adtech industry may lead to harm to both individuals, to trust in the digital economy, and to democratic institutions,” it also warns.
In the report app users’ data is documented being shared with tech giants such as Facebook, Google and Twitter — which operate their own mobile ad platforms and/or other key infrastructure related to the collection and sharing of smartphone users’ data for ad targeting purposes — but also with scores of other faceless entities that the average consumer is unlikely to have heard of.
The Council commissioned a data flow analysis of ten popular apps running on Google’s Android smartphone platform — generating a snapshot of the privacy black hole that mobile users inexorably tumble into when they try to go about their digital business, despite the existence (in Europe) of a legal framework that’s supposed to protect people by giving citizens a swathe of rights over their personal data.
Among the findings are a make-up filter app sharing the precise GPS coordinates of its users; ovulation-, period- and mood-tracking apps sharing users’ intimate personal data with Facebook and Google (among others); dating apps exchanging user data with each other, and also sharing with third parties sensitive user info like individuals’ sexual preferences (and real-time device specific tells such as sensor data from the gyroscope… ); and a games app for young children that was found to contain 25 embedded SDKs and which shared the Android Advertising ID of a test device with eight third parties.
The ten apps whose data flows were analyzed for the report are the dating apps Grindr, Happn, OkCupid, and Tinder; fertility/period tracker apps Clue and MyDays; makeup app Perfect365; religious app Muslim: Qibla Finder; children’s app My Talking Tom 2; and the keyboard app Wave Keyboard.
“Altogether, Mnemonic [the company which the Council commissioned to conduct the technical analysis] observed data transmissions from the apps to 216 different domains belonging to a large number of companies. Based on their analysis of the apps and data transmissions, they have identified at least 135 companies related to advertising. One app, Perfect365, was observed communicating with at least 72 different such companies,” the report notes.
“Because of the scope of tests, size of the third parties that were observed receiving data, and popularity of the apps, we regard the findings from these tests to be representative of widespread practices in the adtech industry,” it adds.
Aside from the usual suspect (ad)tech giants, less well-known entities seen receiving user data include location data brokers Fysical, Fluxloop, Placer, Places/Foursquare, Safegraph and Unacast; behavioral ad targeting players like Receptiv/Verve, Neura, Braze and LeanPlum; mobile app marketing analytics firms like AppsFlyer; and ad platforms and exchanges like AdColony, AT&T’s AppNexus, Bucksense, OpenX, PubNative, Smaato and Vungle.
In the report the Forbrukerrådet concludes that the pervasive tracking which underpins the behavioral ad industry is all but impossible for smartphone users to escape — even if they are able to locate an on-device setting to opt out of behavioral ads.
This is because multiple identifiers are being attached to them and their devices, and also because of frequent sharing/syncing of identifiers by adtech players across the industry. (It also points out that on the Android platform a setting where users can opt-out of behavioral ads does not actually obscure the identifier — meaning users have to take it on trust that adtech entities won’t just ignore their request and track them anyway.)
The Council argues its findings suggest widespread breaches of Europe’s General Data Protection Regulation (GDPR), given that key principles of that pan-EU framework — such as data protection by design and default — are in stark conflict with the systematic, pervasive background profiling of app users it found (apps were, for instance, found sharing personal data by default, requiring users to actively seek out an obscure device setting to try to prevent being profiled).
“The extent of tracking and complexity of the adtech industry is incomprehensible to consumers, meaning that individuals cannot make informed choices about how their personal data is collected, shared and used. Consequently, the massive commercial surveillance going on throughout the adtech industry is systematically at odds with our fundamental rights and freedoms,” it also argues.
Where (user) consent is being relied upon as a legal basis to process personal data the standard required by GDPR states it must be informed, freely given and specific.
But the Council’s analysis of the apps found them sorely lacking on that front.
“In the cases described in this report, none of the apps or third parties appear to fulfil the legal conditions for collecting valid consent,” it writes. “Data subjects are not informed of how their personal data is shared and used in a clear and understandable way, and there are no granular choices regarding use of data that is not necessary for the functionality of the consumer-facing services.”
It also dismisses another possible legal base — known as legitimate interests — arguing app users “cannot have a reasonable expectation for the amount of data sharing and the variety of purposes their personal data is used for in these cases”.
The report points out that other forms of digital advertising (such as contextual advertising) which do not rely on third parties processing personal data are available — arguing that further undermines any adtech industry claims of ‘legitimate interests’ as a valid base for helping themselves to smartphone users’ data.
“The large amount of personal data being sent to a variety of third parties, who all have their own purposes and policies for data processing, constitutes a widespread violation of data subjects’ privacy,” the Council argues. “Even if advertising is necessary to provide services free of charge, these violations of privacy are not strictly necessary in order to provide digital ads. Consequently, it seems unlikely that the legitimate interests that these companies may claim to have can be demonstrated to override the fundamental rights and freedoms of the data subject.”
The suggestion, therefore, is that “a large number of third parties that collect consumer data for purposes such as behavioural profiling, targeted advertising and real-time bidding, are in breach of the General Data Protection Regulation”.
The report also discusses the harms attached to such widespread violation of privacy — pointing out risks such as discrimination and manipulation of vulnerable individuals, as well as chilling effects on speech, added fuel for ad fraud and the torching of trust in the digital economy, among other society-afflicting ills being fuelled by adtech’s obsession with profiling everyone…
Some of the harm of this data exploitation stems from significant knowledge and power asymmetries that render consumers powerless. The overarching lack of transparency of the system makes consumers vulnerable to manipulation, particularly when unknown companies know almost everything about the individual consumer. However, even if regular consumers had comprehensive knowledge of the technologies and systems driving the adtech industry, there would still be very limited ways to stop or control the data exploitation.
Since the number and complexity of actors involved in digital marketing is staggering, consumers have no meaningful ways to resist or otherwise protect themselves from the effects of profiling. These effects include different forms of discrimination and exclusion, data being used for new and unknowable purposes, widespread fraud, and the chilling effects of massive commercial surveillance systems. In the long run, these issues are also contributing to the erosion of trust in the digital industry, which may have serious consequences for the digital economy.
To shift what it dubs the “significant power imbalance between consumers and third party companies”, the Council calls for an end to the current practices of “extensive tracking and profiling” — either by companies changing their practices to “respect consumers’ rights”, or — where they won’t — urging national regulators and enforcement authorities to “take active enforcement measures, to establish legal precedent to protect consumers against the illegal exploitation of personal data”.
It’s fair to say that enforcement of GDPR remains a work in progress at this stage, some 20 months after the regulation came into force, back in May 2018, with scores of cross-border complaints yet to culminate in a decision (though there have been a couple of interesting adtech- and consent-related enforcements in France).
We reached out to Ireland’s Data Protection Commission (DPC) and the UK’s Information Commissioner’s Office (ICO) for comment on the Council’s report. The Irish regulator has multiple investigations ongoing into various aspects of adtech and tech giants’ handling of online privacy, including a probe related to security concerns attached to Google’s ad exchange and the real-time bidding process which features in some programmatic advertising. It has previously suggested the first decisions from its hefty backlog of GDPR complaints will be coming early this year. But at the time of writing the DPC had not responded to our request for comment on the report.

Google’s lead EU regulator opens formal privacy probe of its adtech

A spokeswoman for the ICO — which last year put out its own warnings to the behavioral advertising industry, urging it to change its practices — sent us this statement, attributed to Simon McDougall, its executive director for technology and innovation, in which he says the regulator has been prioritizing engaging with the adtech industry over its use of personal data and has called for change itself — but which does not once mention the word ‘enforcement’…
Over the past year we have prioritised engagement with the adtech industry on the use of personal data in programmatic advertising and real-time bidding.
Along the way we have seen increased debate and discussion, including reports like these, which factor into our approach where appropriate. We have also seen a general acknowledgment that things can’t continue as they have been.
Our 2019 update report into adtech highlights our concerns, and our revised guidance on the use of cookies gives greater clarity over what good looks like in this area.
Whilst industry has welcomed our report and recognises change is needed, there remains much more to be done to address the issues. Our engagement has substantiated many of the concerns we raised and, at the same time, we have also made some real progress.
Throughout the last year we have been clear that if change does not happen we would consider taking action. We will be saying more about our next steps soon – but as is the case with all of our powers, any future action will be proportionate and risk-based.

At CES, companies slowly start to realize that privacy matters

Every year, Consumer Electronics Show attendees receive a branded backpack, but this year’s edition was special: made out of transparent plastic, the bag showed off its contents without the wearer needing to unzip it. It wasn’t just a fashion decision. Over the years, security has become more intense and cumbersome, but attendees with transparent backpacks didn’t have to open their bags when entering.
That cheap backpack is a metaphor for an ongoing debate — how many of us are willing to exchange privacy for convenience?

Privacy was on everyone’s mind at this year’s CES in Las Vegas, from CEOs to policymakers, PR agencies and people in charge of programming the panels. For the first time in decades, Apple had a formal presence at the event; Senior Director of Global Privacy Jane Horvath spoke on a privacy-focused panel alongside other privacy leaders.

Cookie consent tools are being used to undermine EU privacy rules, study suggests

Most cookie consent pop-ups served to Internet users in the European Union — ostensibly seeking permission to track people’s web activity — are likely to be flouting regional privacy laws, a new study by researchers at MIT, UCL and Aarhus University suggests.
“The results of our empirical survey of CMPs [consent management platforms] today illustrates the extent to which illegal practices prevail, with vendors of CMPs turning a blind eye to — or worse, incentivising — clearly illegal configurations of their systems,” the researchers argue, adding that: “Enforcement in this area is sorely lacking.”
Their findings, published in a paper entitled Dark Patterns after the GDPR: Scraping Consent Pop-ups and Demonstrating their Influence, chime with another piece of research we covered back in August — which also concluded a majority of the current implementations of cookie notices offer no meaningful choice to Europe’s Internet users — even though EU law requires one.
When consent is being relied upon as the legal basis for processing web users’ personal data, the bar for valid (i.e. legal) consent that’s set by the EU’s General Data Protection Regulation (GDPR) is clear: It must be informed, specific and freely given.
Recent jurisprudence by the Court of Justice of the European Union also further crystalized the law around cookies, making it clear that consent must be actively signalled — meaning a digital service cannot infer consent to tracking by indirect actions (such as the pop-up being closed by the user without a response or ignored in favor of interacting with the service).
Many websites use a so-called CMP to solicit consent to tracking cookies. But if it’s configured to contain pre-ticked boxes that opt users into sharing data by default — requiring an affirmative user action to opt out — any gathered ‘consent’ also isn’t legal.
Consent to tracking must also be obtained prior to a digital service dropping or accessing a cookie; only service-essential cookies can be deployed without asking first.
All of which means — per EU law — it should be equally easy for website visitors to choose not to be tracked as to agree to their personal data being processed.
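Put in code, those requirements are simple to express. The sketch below is a minimal illustration of a consent gate that follows them, assuming two hypothetical optional cookie categories; it is a sketch of the logic, not a drop-in CMP.

```typescript
// Minimal sketch of a consent gate that follows the rules described above:
// nothing optional runs before an explicit choice, no category is pre-ticked,
// and rejecting takes exactly as many clicks as accepting. The category names
// are hypothetical examples.
type OptionalCategory = "analytics" | "advertising";

const consent = {
  decided: false,                         // closing or ignoring the banner is NOT a decision
  granted: new Set<OptionalCategory>(),   // nothing pre-ticked by default
};

// Only strictly necessary cookies may be set before the user has decided.
function canSetCookie(category: OptionalCategory | "essential"): boolean {
  if (category === "essential") return true;
  return consent.decided && consent.granted.has(category);
}

// Both choices are a single, equally prominent click.
function acceptAll(): void {
  consent.decided = true;
  consent.granted = new Set<OptionalCategory>(["analytics", "advertising"]);
}

function rejectAll(): void {
  consent.decided = true;
  consent.granted.clear();
}
```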
However the Dark Patterns after the GDPR study found that’s very far from the case right now.
“We found that dark patterns and implied consent are ubiquitous,” the researchers write in summary, saying that only slightly more than one in ten (11.8%) of the CMPs they looked at “meet the minimal requirements that we set based on European law” — which they define as being “if it has no optional boxes pre-ticked, if rejection is as easy as acceptance, and if consent is explicit”.
For the study, the researchers scraped the top 10,000 UK websites, as ranked by Alexa, to gather data on the most prevalent CMPs in the market — which are made by five companies: QuantCast, OneTrust, TrustArc, Cookiebot, and Crownpeak — and analyzed how the design and configurations of these tools affected Internet users’ choices. (They obtained a data set of 680 CMP instances via their method — a sample they calculate is representative of at least 57% of the total population of the top 10k sites that run a CMP, given prior research found only around a fifth do so.)
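As a rough illustration of that kind of crawl (not the researchers' own tooling), the sketch below fetches a homepage and checks it for vendor-specific markers; the marker strings are hypothetical stand-ins rather than verified signatures, and a real survey would typically drive a headless browser instead, since many CMPs are injected at runtime.

```typescript
// Rough illustration of a CMP-detection crawl. The marker strings are
// hypothetical placeholders, not verified vendor signatures.
const CMP_MARKERS: Record<string, string[]> = {
  QuantCast: ["quantcast"],
  OneTrust: ["onetrust"],
  TrustArc: ["trustarc"],
  Cookiebot: ["cookiebot"],
  Crownpeak: ["crownpeak"],
};

async function detectCmp(url: string): Promise<string | null> {
  try {
    const html = (await (await fetch(url)).text()).toLowerCase();
    for (const [vendor, markers] of Object.entries(CMP_MARKERS)) {
      if (markers.some((marker) => html.includes(marker))) return vendor;
    }
  } catch {
    // unreachable or blocking sites are simply skipped
  }
  return null;
}

async function surveySites(urls: string[]): Promise<Map<string, string>> {
  const found = new Map<string, string>();
  for (const url of urls) {
    const vendor = await detectCmp(url);
    if (vendor) found.set(url, vendor);
  }
  return found;
}
```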
Implicit consent — aka (illegally) inferring consent via non-affirmative user actions (such as the user visiting or scrolling on the website or a failure to respond to a consent pop-up or closing it without a response) — was found to be common (32.5%) among the studied sites.
“Popular CMP implementation wizards still allow their clients to choose implied consent, even when they have already indicated the CMP should check whether the visitor’s IP is within the geographical scope of the EU, which should be mutually exclusive,” they note, arguing that: “This raises significant questions over adherence with the concept of data protection by design in the GDPR.”
They also found that the vast majority of CMPs make rejecting all tracking “substantially more difficult than accepting it” — with a majority (50.1%) of studied sites not having a ‘reject all’ button, and only a tiny minority (12.6%) of sites offering a ‘reject all’ button accessible with the same number of clicks as an ‘accept all’ button, or fewer.
Or, to put it another way, ‘Ohhai dark pattern design’…
“An ‘accept all’ button was never buried in a second layer,” the researchers go on to point out, also finding that “74.3% of reject all buttons were one layer deep, requiring two clicks to press; 0.9% of them were two layers away, requiring at minimum three.”
Pre-ticked boxes were found to be widely deployed in the studied CMPs as well — despite such a setting not being legally valid. (On this they found: “56.2% of sites pre-ticked optional vendors or purposes/categories, with 54.1% of sites pre-ticking optional purposes, 32.3% pre-ticking optional categories, and 30.3% pre-ticking both”.)
They also point out that the high number of third-party trackers routinely being used by sites poses a major problem for the EU consent model — given it requires a “prohibitively long time” for users to become clearly informed enough to be able to legally consent.
The exact number of third party trackers they found being packed like sardines into CMPs varied — with between tens and several hundreds in play depending on the site.
Fifty-eight was the lowest number they encountered. While the highest instance was 542 vendors — on an implementation of QuantCast’s CMP. (And, well, just imagine the ‘friction’ involved in manually unticking all those, assuming that was one of the sites that also lacked a ‘reject all’ button… )
Sites relied on a large number of third party trackers, which would take a prohibitively long time for users to inform themselves about clearly. Out of the 85.4% of sites that did list vendors (e.g. third party trackers) within the CMP, there was a median number of 315 vendors (low. quartile 58, upp. quartile 542). Different CMP vendors have different average numbers of vendors, with the highest being QuantCast at 542… 75% of sites had over 58 vendors. 76.47% of sites provide some descriptions of their vendors. The mean total length of these descriptions per site is 7,985 words: roughly 31.9 minutes of reading for the average 250 words-per-minute reader, not counting interaction time to e.g. unfold collapsed boxes or navigating to and reading specific privacy policies of a vendor.
A second part of the research was a field experiment with 40 participants to investigate how the eight most common CMP designs affect Internet users’ consent choices.
“We found that notification style (banner or barrier) has no effect [on consent choice]; removing the opt-out button from the first page increases consent by 22–23 percentage points; and providing more granular controls on the first page decreases consent by 8–20 percentage points,” they write in summary on that.
They argue this portion of the study supports the notion that two of the most common consent interface designs – “not showing a ‘reject all’ button on the first page; and showing bulk options before showing granular control” – make it more likely for users to provide consent, thereby “violating the [GDPR] principle of “freely given””.
They also make reference to “qualitative reflections” of the participants in the paper — which were obtained via a survey after individuals’ consent choices had been registered during the field study — suggesting these responses “put into question the entire notice-and-consent model not because of specific design decisions but merely because an action is required before the user can accomplish their main task and because they appear too frequently if they are shown on a website-by-website basis”.
So, in other words, just the fact of interrupting a web user to ask them to make a choice may itself apply substantial enough pressure that it might render any resulting ‘consent’ invalid.
The study’s finding of the prevalence of manipulative designs and configurations intended to nudge or even force consent suggests Internet users in Europe are not actually benefiting from a legal framework that’s supposed to protect their digital data from unwanted exploitation — and are rather being subject to a lot of noisy, distracting and disingenuous ‘consent theatre’.
Cookie notices not only generate friction and frustration for the average Internet user, as they try to go about their daily business online, but the current situation is creating a faux veneer of compliance — atop what is actually a massive trampling of rights via what amounts to digital daylight robbery of people’s data at scale.
The problem here is that EU regulators have for years looked the other way where online tracking is concerned, failing entirely to enforce the on-paper standard.
Enforcement is indeed sorely lacking, as the researchers note. (Industry lobbying/political pressure, limited resources, risk aversion and regulatory capture, and a legacy of inaction around digital rights are all likely to blame.)
And while the GDPR only started being applied in May 2018, Europe has had regulations on data-gathering mechanisms like cookies for approaching two decades — with the paper pointing out that an amendment to the ePrivacy Directive all the way back in 2002 made it a requirement that “storing or accessing information on a user’s device not ‘strictly necessary’ for providing an explicitly requested service requires both clear and comprehensive information and opt-in consent”.
Asked about the research findings, lead author, Midas Nouwens, questioned why CMP vendors are selling so-called ‘compliance’ tools that allow for non-compliant configurations in the first place.
“It’s sad, but I don’t think anyone is surprised anymore by how few pop-ups comply with the GDPR,” he told TechCrunch. “What is shocking is how non-compliant interface designs are allowed by the companies that provide consent pop-ups. Why do they let their clients count scrolling as consent or bury the decline button somewhere on the third page?”
“Enforcement is really the next big challenge if we don’t want the GDPR to go down the same path as the ePrivacy directive,” he added. “Since enforcement agencies have limited resources, focusing on the popular consent pop-up providers could be a much more effective strategy than targeting individual websites.
“Unfortunately, while we wait for enforcement, the dark patterns in these pop-ups are still manipulating people into being tracked.”
Another of the researchers behind the paper, Michael Veale, a lecturer in digital rights and regulation at UCL, also expressed shock that CMP vendors are allowing their tools to be configured in ways which are clearly intended to manipulate Internet users — thereby flouting the law.
In the paper the researchers urge regulators to take a smarter approach to tackling such widespread violation, such as by making use of automated tools “to expedite discovery and enforcement” of non-compliant cookie notices, and suggest they work “further upstream” — such as by placing requirements on the vendors of CMPs “to only allow compliant designs to be placed on the market”.
“It’s shocking to see how many of the large providers of consent pop-ups allow their systems to be misconfigured, such as through implicit consent, in ways that clearly infringe data protection law,” Veale told us, adding: “I suspect data protection authorities see this widespread illegality and are not sure exactly where to start. Yet if they do not start enforcing these guidelines, it’s unclear when this widespread illegality will start to stop.”
“This study even overestimates compliance, as we don’t focus on what actually happens to the tracking when you click on these buttons, which other recent studies have emphasised in many cases mislead individuals and do nothing at all,” he also pointed out.
We reached out to the UK’s data protection watchdog, the ICO, for a response to the research — and a spokeswoman pointed us to this cookie advice blog post it published last year, saying the advice it contains “still stands”.
In the blog Ali Shah, the ICO’s head of technology policy, suggests there could be some (albeit limited) action from the regulator this year to clean up cookie consent, with Shah writing that: “Cookie compliance will be an increasing regulatory priority for the ICO in the future. However, as is the case with all our powers, any future action would be proportionate and risk-based.”
While European citizens wait for data protection regulators to take meaningful action over systematic breaches of the GDPR — including those attached to consent-less tracking of web users — there is one step European web users can take to shrink the pain of cookie consent pop-ups: The researchers behind the study have built an open source browser extension that can automatically answer pop-ups based on user-customizable preferences.
It’s called Consent-o-Matic — and there are versions available for Firefox and Chrome.

A holiday gift from us* at @AarhusUni: Consent-o-Matic! A browser extension that automatically answers consent pop-ups for you. Firefox: https://t.co/5PhAEN6eOd Chrome: https://t.co/ob8xrLxhFW Github: https://t.co/0Xe9xNwCEb
* @cklokmose; Janus Bager Kristensen; Rolf Bagge
1/8 pic.twitter.com/3ooV8ZFTH0
— Midas Nouwens (@MidasNouwens) December 24, 2019

At release the tool can automatically respond to cookie banners built by the five big CMP suppliers (QuantCast, OneTrust, TrustArc, Cookiebot, and Crownpeak).
But since it’s open source, the hope is others will build on it to expand the types of pop-ups it’s able to auto-respond to. In the absence of a legally enforced ‘Do Not Track’ browser standard this is about as good as it gets for Internet users desperately seeking easier agency over the online tracking industry.
In a Twitter thread last month announcing the tool, Nouwens described the project as making use of “adversarial interoperability” as a pro-privacy tactic.
“Automating consent and privacy preferences is not new (DNT and P3P), but this project uses adversarial interoperability, rather than rely on industry self-regulation or buy-in from fundamentally opposed stakeholders (browsers, advertisers, publishers),” he observed.
However he added one caveat, reminding users to be on their guard for further non-compliance from the data suckers — pointing to the earlier research paper also flagged by Veale which found a small portion of sites (~7%) entirely ignore responses to cookie pop-ups and track users regardless of response.
So sometimes even a seamlessly automated ‘no’ to tracking might still sum to being tracked…
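As a concrete (and heavily simplified) illustration of that adversarial-interoperability tactic, a browser extension’s content script can watch the page for a consent dialog and answer it according to the user’s stored preferences. The selectors and preference shape below are hypothetical placeholders, not Consent-o-Matic’s actual per-CMP rule sets, which are considerably more detailed.

```typescript
// content-script.ts -- minimal sketch of auto-answering a consent pop-up.
// Selectors are hypothetical; a real rule set targets each CMP's markup.
type ConsentPrefs = { analytics: boolean; advertising: boolean };
const prefs: ConsentPrefs = { analytics: false, advertising: false };

function clickIfPresent(selector: string): boolean {
  const el = document.querySelector<HTMLElement>(selector);
  if (!el) return false;
  el.click();
  return true;
}

// Consent dialogs are usually injected after page load, so watch the DOM.
const observer = new MutationObserver(() => {
  if (!prefs.analytics && !prefs.advertising) {
    // Hypothetical selectors for a CMP's "reject all" control.
    if (clickIfPresent("[data-testid='reject-all'], .cmp-btn-reject-all")) {
      observer.disconnect();
    }
  }
});

observer.observe(document.documentElement, { childList: true, subtree: true });
```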

Adtech told to keep calm and fix its ‘lawfulness’ problem

DuckDuckGo still critical of Google’s EU Android choice screen auction, after winning a universal slot

Google has announced which search engines have won an auction process it has devised for an Android ‘choice screen’ — as its response to an antitrust intervention by the region’s competition regulator.
The prompt is shown to users of Android smartphones in the European Union as they set up a device, asking them to choose a search engine from a list of four which always includes Google’s own search engine.
In mid-2018 the European Commission fined Google $5BN for antitrust violations attached to how it operates the Android platform, including related to how it bundles its own services with the dominant smartphone OS, and ordered it to remedy the infringements — while leaving it up to the tech giant to devise a fix.
Google responded by creating a choice screen for Android users to pick a search engine from a short list — with the initial choices seemingly based on local marketshare. But last summer it announced it would move to auctioning slots on the screen via a fixed sealed bid auction process.
The big winners of the initial auction, for the period March 1, 2020 to June 30, 2020, are pro-privacy search engine DuckDuckGo — which gets one of the three slots in all 31 European markets — and a product called Info.com, which will also be shown as an option in all those markets. (Per Wikipedia, the latter is a veteran metasearch engine that provides results from multiple search engines and directories, including Google.)
French pro-privacy search engine Qwant will be shown as an option to Android users in eight European markets, while Russia’s Yandex will appear as an option in five markets in the east of the region.
Other search engines that will appear as choices in a minority of the European markets are GMX, Seznam, Givero and PrivacyWall.
At a glance the big loser looks to be Microsoft’s Bing search engine — which will only appear as an option on the choice screen shown in the UK.
Tree-planting search engine Ecosia does not appear anywhere on the list at all, despite appearing on some initial Android choice screens — having taken the decision to boycott the auction in objection to Google’s ‘pay-to-play’ approach.
Ecosia CEO Christian Kroll told the BBC: “We believe this auction is at odds with the spirit of the July 2018 EU Commission ruling. Internet users deserve a free choice over which search engine they use and the response of Google with this auction is an affront to our right to a free, open and federated internet. Why is Google able to pick and choose who gets default status on Android?”
It’s not the only search engine critical of Google’s move, with Qwant and DuckDuckGo both raising concerns as soon as the move to a paid auction was announced last year.
Despite participating in the process — and winning a universal slot — DuckDuckGo told us it still does not agree with Google’s pay-to-play auction.
“We believe a search preference menu is an excellent way to meaningfully increase consumer choice if designed properly. Our own research has reinforced this point and we look forward to the day when Android users in Europe will have the opportunity to easily make DuckDuckGo their default search engine while setting up their phones. However, we still believe a pay-to-play auction with only 4 slots isn’t right because it means consumers won’t get all the choices they deserve and Google will profit at the expense of the competition,” a spokesperson said in a statement.

How Ring is rethinking privacy and security

Ring is now a major player when it comes to consumer video doorbells, security cameras — and privacy protection.
Amazon acquired the company and promotes its devices heavily on its e-commerce websites. Ring has even become a cultural phenomenon with viral videos being shared on social networks and the RingTV section on the company’s website.
But that massive success has come with a few growing pains; as Motherboard found out, customers don’t have to use two-factor authentication, which means that anybody could connect to their security camera if they re-use the same password everywhere.
When it comes to privacy, Ring’s Neighbors app has attracted a ton of controversy. Some see it as a libertarian take on neighborhood watch that empowers citizens to monitor their communities using surveillance devices.
Others have questioned partnerships between Ring and local police to help law enforcement authorities request videos from Ring users.
In a wide-ranging interview, Ring founder Jamie Siminoff looked back at the past six months, expressed some regrets and defended his company’s vision. The interview was edited for clarity and brevity.
TechCrunch: Let’s talk about news first. You started mostly focused on security cameras, but you’ve expanded way beyond security cameras. And in particular, I think the light bulb that you introduced is pretty interesting. Do you want to go deeper in this area and go head to head against Phillips Hue for instance?
Jamie Siminoff: We try not to ever look at competition — like the company is going head to head with… we’ve always been a company that has invented around a mission of making neighborhoods safer.
Sometimes, that puts us into a place that would be competing with another company. But we try to look at the problem and then come up with a solution and not look at the market and try to come up with a competitive product.
No one was making — and I still don’t think there’s anyone making — a smart outdoor light bulb. We started doing the floodlight camera and we saw how important light was. We literally saw it through our camera. With motion detection, someone will come over a fence, see the light and jump back over. We literally could see the impact of light.
So you don’t think you would have done it if it wasn’t a light bulb that works outside as well as inside?
For sure. We’ve seen the advantage of linking all the lights around your home. When you walk up on a step light and that goes off, then everything goes off at the same time. It’s helpful for your own security and safety and convenience.
The light bulbs are just an extension of the floodlight. Now again, it can be used indoor because there’s no reason why it can’t be used indoor.
Following Amazon’s acquisition, do you think you have more budget, you can hire more people and you can go faster and release all these products?
It’s not a budget issue. Money was never a constraint. If you had good ideas, you could raise money — I think that’s Silicon Valley. So it’s not money. It’s knowledge and being able to reach a critical mass.
As a consumer electronics company, you need to have specialists in different areas. You can’t just get them with money, you kind of need to have a big enough thing. For example, wireless antennas. We had good wireless antennas. We did the best we thought we could do. But we get into Amazon and they have a group that’s super highly focused on each individual area of that. And we make much better antennas today.

Our reviews are up across the board, our products are more liked by our customers than they were before. To me, that’s a good measure — after Amazon, we have made more products and they’re more beloved by our customers. And I think part of that is that we can tap into resources more efficiently.
And would you say the teams are still very separate?
Amazon is kind of cool. I think it’s why a lot of companies that have been bought by Amazon stay for a long time. Amazon itself is almost an amalgamation of a lot of little startups. Internally, almost everyone is a startup CEO — there’s a lot of autonomy there.

Facebook won’t ban political ads, prefers to keep screwing democracy

It’s 2020 — a key election year in the US — and Facebook is doubling down on its policy of letting people pay it to fuck around with democracy.
Despite trenchant criticism — including from US lawmakers accusing Facebook’s CEO to his face of damaging American democracy — the company is digging in, announcing as much today by reiterating its defence of continuing to accept money to run microtargeted political ads.
Instead of banning political ads Facebook is trumpeting a few tweaks to the information it lets users see about political ads — claiming it’s boosting “transparency” and “controls” while leaving its users vulnerable to default settings that offer neither.  
Political ads running on Facebook are able to be targeted at individuals’ preferences as a result of the company’s pervasive tracking and profiling of Internet users. And ethical concerns about microtargeting led the UK’s data protection watchdog to call in 2018 for a pause on the use of digital ad tools like Facebook by political campaigns — warning of grave risks to democracy.
Facebook isn’t for pausing political microtargeting, though, even as various elements of its data-gathering activities are subject to privacy and consent complaints, regulatory scrutiny and legal challenge in Europe under regional data protection legislation.
Instead, the company made it clear last fall that it won’t fact-check political ads, nor block political messages that violate its speech policies — thereby giving politicians carte blanche to run hateful lies, if they so choose.
Facebook’s algorithms also demonstrably select for maximum eyeball engagement, making it simply the ‘smart choice’ for the modern digitally campaigning politician to run outrageous BS on Facebook — as long time Facebook exec Andrew Bosworth recently pointed out in an internal posting that leaked in full to the NYT.
Facebook founder Mark Zuckerberg’s defence of his social network’s political ads policy boils down to repeatedly claiming ‘it’s all free speech man’ (we paraphrase).
This is an entirely nuance-free argument that comedian Sacha Baron Cohen expertly demolished last year, pointing out that: “Under this twisted logic if Facebook were around in the 1930s it would have allowed Hitler to post 30-second ads on his solution to the ‘Jewish problem.’”
Facebook responded to the take-down with a denial that hate speech exists on its platform since it has a policy against it — per its typical crisis PR playbook. And it’s more of the same selectively self-serving arguments being dispensed by Facebook today.
In a blog post attributed to its director of product management, Rob Leathern, it expends more than 1,000 words on why it’s still not banning political ads (it would be bad for advertisers wanting to reach “key audiences”, is the non-specific claim) — including making a diversionary call for regulators to set ad standards, thereby passing the buck on ‘democratic accountability’ to lawmakers (whose electability might very well depend on how many Facebook ads they run…), while spinning cosmetic, made-for-PR tweaks to its ad settings and what’s displayed in an ad archive that most Facebook users will never have heard of as “expanded transparency” and “more control”.
In fact these tweaks do nothing to reform the fundamental problem of damaging defaults.
The onus remains on Facebook users to do the leg work on understanding what its platform is pushing at their eyeballs and why.
Even as the ‘extra’ info now being drip-fed to the Ad Library is still highly fuzzy (“We are adding ranges for Potential Reach, which is the estimated target audience size for each political, electoral or social issue ad so you can see how many people an advertiser wanted to reach with every ad,” as Facebook writes of one tweak.)
The new controls similarly require users to delve into complex settings menus in order to avail themselves of inherently incremental limits — such as an option that will let people opt into seeing “fewer” political and social issue ads. (Fewer is naturally relative, ergo the scale of the reduction remains entirely within Facebook’s control — so it’s more meaningless ‘control theatre’ from the lord of dark pattern design. Why can’t people switch off political and issue ads entirely?)
Another incremental setting lets users “stop seeing ads based on an advertiser’s Custom Audience from a list”.
But just imagine trying to explain WTF that means to your parents or grandparents — let alone an average Internet user actually being able to track down the ‘control’ and exercise any meaningful agency over the political junk ads they’re being exposed to on Facebook.
It is, to quote Baron Cohen, “bullshit”.
Nor are outsiders the only ones calling out Zuckerberg on his BS and “twisted logic”: A number of Facebook’s own employees warned in an open letter last year that allowing politicians to lie in Facebook ads essentially weaponizes the platform.
They also argued that the platform’s advanced targeting and behavioral tracking tools make it “hard for people in the electorate to participate in the public scrutiny that we’re saying comes along with political speech” — accusing the company’s leadership of making disingenuous arguments in defence of a toxic, anti-democratic policy. 
Nothing in what Facebook has announced today resets the anti-democratic asymmetry inherent in the platform’s relationship to its users.
Facebook users — and democratic societies — remain, by default, preyed upon by self-interested political interests thanks to Facebook’s policies which are dressed up in a self-interested misappropriation of ‘free speech’ as a cloak for its unfettered exploitation of individual attention as fuel for a propaganda-as-service business.
Yet other policy positions are available.
Twitter announced a total ban on political ads last year — and while the move doesn’t resolve wider disinformation issues attached to its platform, the decision to bar political ads has been widely lauded as a positive, standard-setting example.
Google also followed suit by announcing a ban on “demonstrably false claims” in political ads. It also put limits on the targeting terms that can be used for political advertising buys that appear in search, on display ads and on YouTube.
Still Facebook prefers to exploit “the absence of regulation”, as its blog post puts it, to not do the right thing and keep sticking two fingers up at democratic accountability — because not applying limits on behavioral advertising best serves its business interests. Screw democracy.
“We have based [our policies] on the principle that people should be able to hear from those who wish to lead them, warts and all, and that what they say should be scrutinized and debated in public,” Facebook writes, ignoring the fact that some of its own staff already pointed out the sketchy hypocrisy of trying to claim that complex ad targeting tools and techniques are open to public scrutiny.

Pro-privacy search engine Qwant announces more exec changes — to ‘switch focus to monetization’

More changes have been announced in the senior leadership of French pro-privacy search engine, Qwant.
President and co-founder, Eric Leandri (pictured above), will be moving from an operational to a strategic role on January 15, the company said today — while current deputy managing director for sales and marketing, Jean-Claude Ghinozzi, is being promoted to president.
Leandri will leave the president role on January 15, although he is not departing the business entirely but will instead shift to chair a strategic and scientific committee — where he says he will focus on technology and “strategic vision”.
This committee will work with a new governance council, also being announced today, which will be chaired by Antoine Troesch, investment director of Qwant investor, Banque des Territories, per the PR.
At the same time, Mozilla veteran Tristan Nitot — who was only promoted to a new CEO role at Qwant in September — is returning to his prior job as VP of advocacy. Although Leandri told us that Nitot will retain the spokesman component of the CEO job, leaving Ghinozzi to focus on monetization — which he said is Qwant’s top priority now.
“[Nitot] is now executive VP in charge of communications and media,” Leandri told TechCrunch. “He has to take care of company advocacy. Because of my departure he will have now to represent Qwant in [the media]. He will be the voice of Qwant. But that position will give him not enough space and time to be the full-time CEO of the company — doing both is quite impossible. I have done that for years… but it’s very complicated.”
“We will now need to focus a lot on monetization and on our core business… to create a real ad platform,” he added, by way of explaining the latest round of exec restructuring. “This needs to have somebody in charge of doing that monetization process — that execution process of the scale of Qwant.”
Ghinozzi will be responsible for developing a “new phase” for the search engine so it can scale its business in Europe, Leandri also said, adding: “For my part I take on the strategy and the tech, and I’m a member of the board.”
The search engine company is also announcing that it’s closing a new funding round to support infrastructure and scaling — including taking in more financing from existing backers Banque des Territories and publishing giant Axel Springer — saying it expects this to be finalized next month.
Leandri would not provide details on the size of the round today but French news website Liberation is reporting it as €10M, citing a government source. (Per other reports in the French media Qwant has been losing tens of millions of euros per year.)
Qwant’s co-founder did trail some “very good announcements” he said are coming imminently on the user growth front in France, related to new civil companies switching to the search engine. But again he declined to publicly confirm full details at this stage — saying the news would be confirmed in around a week’s time.
Liberation‘s report points to this being confirmation that the French state will go ahead with making Qwant the default search engine across the administration — giving its product a boost of (likely) millions more regular users, and potentially unlocking access to more government funding.
The move by the French administration aligns with a wider push for digital sovereignty in a bid to avoid being too reliant on foreign tech giants. However, in recent months, doubt had been thrown on the government’s plan to switch wholesale from Google’s search engine to the homegrown search alternative — after local media raised questions over the quality of Qwant’s search results.
The government has been conducting its own technical audit of Qwant’s search engine. But, per Liberation — which says it obtained an internal government memo earlier this month — the switch will go ahead, and is slated to be completed by the end of April.
Qwant has faced further uncomfortable press scrutiny on its home turf in recent months, with additional reports in French media suggesting the business has been facing a revenue crunch — after its privacy-respecting search engine generated lower than expected revenues last year.
On this Leandri told us Qwant’s issue boils down to a lack of ad inventory, saying it will be Ghinozzi’s job to tackle that by making sure it can monetize more of the current impressions it’s generating — such as by focusing on serving more ads against shopping-related searches, while continuing to preserve its core privacy/non-tracking promise to users.
The business was focused last year on putting in place search engine infrastructure to prepare for scaling user growth in Europe, he suggested — meaning it was spending less time on monetizing user searches.
“We started to refocus on the monetization in November and December,” he said. “So we have lost some months in terms of monetization… Now we have started to accelerate our monetization phase and we need now to make it even better in shopping, for example.”
Leandri claims Qwant has already seen “a very good ramp up”, after turning its attention back to monetization these past two months — but says beefing up ad inventory including by signing up more ad partners and serving its own ads will now be “the focus of the company”.
“For example today on 100 queries we were sometime during the year at 20 ads, just 20% of coverage,” he told us, noting that some ‘iPhone 11’ searches done via Qwant haven’t resulted in any ads being served to users in recent times. “We need to go to 30%-40%… We need to make it better on the shopping queries, bringing new customers. We need to do all these things.
“Right now we have signed with Havas and Publicis in France for Europe but we need to add more partners and start adding our own ads, our own shopping ads, our own technology for ads. That’s the new focus.”
Additionally, there have also been a number of reports in French media that have alleged HR problems within Qwant. Articles — such as this one by Next Inpact — have reported at length on claims by some employees that Leandri’s management style created a toxic workplace culture in which staff were subject to verbal abuse, threats and bullying.
Qwant disputes these reports but it’s notable that the co-founder is stepping back from an operational role at a time when both he and the business are facing questions over a wave of negative domestic press, and with investors also being asked to plough in fresh financing while a key strategic customer (the French government) is scrutinizing the product and the business.
The health of workplace culture at technology companies and high pressure startups has come in for increasing attention in recent years, as workplace expectations have shifted with the generations and digital technologies have encouraged greater openness and provided outlets for people who feel unfairly treated to make their grievances more widely known.
Major scandals in the tech industry in recent years include Uber being publicly accused of having a sexist and bullying workplace culture by a former engineer — and, more recently, travel startup Away whose CEO stepped down in December after a bombshell report in the press exposing a toxic culture.

Will online privacy make a comeback in 2020?

Last year was a landmark for online privacy in many ways, with something of a consensus emerging that consumers deserve protection from the companies that sell their attention and behavior for profit.
The debate now is largely around how to regulate platforms, not whether it needs to happen.
The consensus among key legislators acknowledges that privacy is not just of benefit to individuals but can be likened to public health; a level of protection afforded to each of us helps inoculate democratic societies from manipulation by vested and vicious interests.
The fact that human rights are being systematically abused at population-scale because of the pervasive profiling of Internet users — a surveillance business that’s dominated in the West by tech giants Facebook and Google, and the adtech and data broker industry which works to feed them — was the subject of an Amnesty International report in November 2019 that urges legislators to take a human rights-based approach to setting rules for Internet companies.
“It is now evident that the era of self-regulation in the tech sector is coming to an end,” the charity predicted.
Democracy disrupted
The dystopian outgrowth of surveillance capitalism was certainly in awful evidence in 2019, with elections around the world attacked at cheap scale by malicious propaganda that relies on adtech platforms’ targeting tools to hijack and skew public debate, while the chaos agents themselves are shielded from democratic view.
Platform algorithms are also still encouraging Internet eyeballs towards polarized and extremist views by feeding a radicalized, data-driven diet that panders to prejudices in the name of maintaining engagement — despite plenty of raised voices calling out the programmed antisocial behavior. So what tweaks there have been still look like fiddling round the edges of an existential problem.
Worse still, vulnerable groups remain at the mercy of online hate speech which platforms not only can’t (or won’t) weed out, but whose algorithms often seem to deliberately choose to amplify — the technology itself being complicit in whipping up violence against minorities. It’s social division as a profit-turning service.
The outrage-loving tilt of these attention-hogging adtech giants has also continued directly influencing political campaigning in the West this year — with cynical attempts to steal votes by shamelessly platforming and amplifying misinformation.
From the Trump tweet-bomb we now see full-blown digital disops underpinning entire election campaigns, such as the UK Conservative Party’s strategy in the 2019 winter General Election, which featured doctored videos seeded to social media and keyword targeted attack ads pointing to outright online fakes in a bid to hack voters’ opinions.
Political microtargeting divides the electorate as a strategy to conquer the poll. The problem is it’s inherently anti-democratic.
No wonder, then, that repeat calls to beef up digital campaigning rules and properly protect voters’ data have so far fallen on deaf ears. The political parties all have their hands in the voter data cookie-jar. Yet it’s elected politicians whom we rely upon to update the law. This remains a grave problem for democracies going into 2020 — and a looming U.S. presidential election.
So it’s been a year when, even with rising awareness of the societal cost of letting platforms suck up everyone’s data and repurpose it to sell population-scale manipulation, not much has actually changed. Certainly not enough.
Yet looking ahead there are signs the writing is on the wall for the ‘data industrial complex’ — or at least that change is coming. Privacy can make a comeback.
Adtech under attack
Developments in late 2019 such as Twitter banning all political ads and Google shrinking how political advertisers can microtarget Internet users are notable steps — even as they don’t go far enough.
But it’s also a relatively short hop from banning microtargeting sometimes to banning profiling for ad targeting entirely.

*Very* big news last night in internet political ads. @Google’s plan to eliminate #microtargeting is a move that – if done right – could help make internet political advertising a force that informs and inspires us, rather than isolating and inflaming us.
1/9
— Ellen L Weintraub (@EllenLWeintraub) November 21, 2019

Alternative online ad models (contextual targeting) are proven and profitable — just ask search engine DuckDuckGo. Meanwhile the ad industry gospel that only behavioral targeting will do now has academic critics who suggest it offers far less uplift than claimed, even as — in Europe — scores of data protection complaints underline the high individual cost of maintaining the status quo.
Startups are also innovating in the pro-privacy adtech space (see, for example, the Brave browser).
Changing the system — turning the adtech tanker — will take huge effort, but there is a growing opportunity for just such systemic change.
This year, it might be too much to hope that regulators get their act together enough to outlaw consent-less profiling of Internet users entirely. But it may be that those who have sought to proclaim ‘privacy is dead’ will find their unchecked data gathering facing death by a thousand regulatory cuts.
Or, tech giants like Facebook and Google may simply outrun the regulators by reengineering their platforms to cloak vast personal data empires with end-to-end encryption, making it harder for outsiders to regulate them, even as they retain enough of a fix on the metadata to stay in the surveillance business. Fixing that would likely require much more radical regulatory intervention.
European regulators are, whether they like it or not, in this race and under major pressure to enforce the bloc’s existing data protection framework. It seems likely to ding some current-gen digital tracking and targeting practices. And depending on how key decisions on a number of strategic GDPR complaints go, 2020 could see an unpicking — great or otherwise — of components of adtech’s dysfunctional ‘norm’.
Among the technologies under investigation in the region is real-time bidding: a system that powers a large chunk of programmatic digital advertising.
The complaint here is it breaches the bloc’s General Data Protection Regulation (GDPR) because it’s inherently insecure to broadcast granular personal data to scores of entities involved in the bidding chain.
A recent event held by the UK’s data watchdog confirmed plenty of troubling findings. Google responded by removing some information from bid requests — though critics say it does not go far enough. Nothing short of removing personal data entirely will do in their view, which sums to ads that are contextually (not micro)targeted.
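For readers unfamiliar with the mechanics, each ad impression triggers a bid request that is fanned out to the many participants in the auction. The sketch below is a simplified OpenRTB-style request with invented values; it is intended only to show why complainants characterise the broadcast as personal data (identifiers, IP address, location, browsing context), not to reproduce any particular exchange’s payload.

```typescript
// Simplified, illustrative OpenRTB-style bid request (invented values).
// The commented fields are the kind of person-linked data the GDPR
// complaints focus on.
const bidRequest = {
  id: "auction-12345",
  imp: [{ id: "1", banner: { w: 300, h: 250 } }],
  site: { page: "https://news-site.example/article-about-health" }, // browsing context
  device: {
    ua: "Mozilla/5.0 (Linux; Android 10)",
    ip: "203.0.113.42",                              // IP address
    ifa: "6D92078A-8246-4BA4-AE5B-76104861E7DC",     // mobile advertising identifier
    geo: { lat: 51.5072, lon: -0.1276, type: 2 },    // derived location
  },
  user: {
    id: "exchange-side-user-id",
    buyeruid: "dsp-cookie-matched-id",               // cookie-synced identifier
  },
};

console.log(JSON.stringify(bidRequest, null, 2));
```

Stripping the contextual fields is trivial; it is removing the identifiers that would turn this into the contextual-only targeting critics are calling for.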
Powers that EU data protection watchdogs have at their disposal to deal with violations include not just big fines but data processing orders — which means corrective relief could be coming to take chunks out of data-dependent business models.
As noted above, the adtech industry has already been put on watch this year over current practices, even as it was given a generous half-year grace period to adapt.
In the event it seems likely that turning the ship will take longer. But the message is clear: change is coming. The UK watchdog is due to publish another report in 2020, based on its review of the sector. Expect that to further dial up the pressure on adtech.
Web browsers have also been doing their bit by baking in more tracker blocking by default. And this summer Marketing Land proclaimed the third party cookie dead — asking what’s next?
Alternatives and workarounds are already springing up (such as stuffing more in via first party cookies), and more will follow. But the notion of tracking by background default is under attack, if not quite yet coming unstuck.
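The first-party cookie workaround mentioned above is straightforward in outline: the identifier is set on the publisher’s own domain, where third-party cookie blocking does not apply, and is then forwarded to a collector from the page itself. A minimal sketch follows, with a hypothetical cookie name and endpoint.

```typescript
// Minimal sketch of a first-party identifier cookie (name and endpoint are hypothetical).
function getOrCreateFirstPartyId(): string {
  const match = document.cookie.match(/(?:^|;\s*)_fp_id=([^;]+)/);
  if (match) return match[1];
  const id = crypto.randomUUID();
  // Set on the site's own domain, so third-party cookie blocking doesn't touch it.
  document.cookie = `_fp_id=${id}; max-age=${60 * 60 * 24 * 365}; path=/; SameSite=Lax`;
  return id;
}

// The ID then travels to a (hypothetical) collection endpoint along with page data.
navigator.sendBeacon(
  "https://collect.analytics.example/event",
  JSON.stringify({ fpId: getOrCreateFirstPartyId(), page: location.pathname })
);
```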
Ireland’s DPC is also progressing on a formal investigation of Google’s online Ad Exchange. Further real-time bidding complaints have been lodged across the EU too. This is an issue that won’t be going away soon, however much the adtech industry might wish it.
Year of the GDPR banhammer?
2020 is the year that privacy advocates are really hoping that Europe will bring down the hammer of regulatory enforcement. Thousands of complaints have been filed since the GDPR came into force but precious few decisions have been handed down. Next year looks set to be decisive — even potentially make or break for the data protection regime.

Facebook data misuse and voter manipulation back in the frame with latest Cambridge Analytica leaks

More details are emerging about the scale and scope of disgraced data company Cambridge Analytica’s activities in elections around the world — via a cache of internal documents that’s being released by former employee and self-styled whistleblower, Brittany Kaiser.
The now shut down data modelling company, which infamously used stolen Facebook data to target voters for President Donald Trump’s campaign in the 2016 U.S. election, was at the center of the data misuse scandal that, in 2018, wiped billions off Facebook’s share price and contributed to a $5BN FTC fine for the tech giant last summer.
However plenty of questions remain, including where, for whom and exactly how Cambridge Analytica and its parent entity SCL Elections operated; as well as how much Facebook’s leadership knew about the dealings of the firm that was using its platform to extract data and target political ads — helped by some of Facebook’s own staff.
Certain Facebook employees were referring to Cambridge Analytica as a “sketchy” company as far back as September 2015 — yet the tech giant only pulled the plug on platform access after the scandal went global in 2018.
Facebook CEO Mark Zuckerberg has also continued to maintain that he only personally learned about CA from a December 2015 Guardian article, which broke the story that Ted Cruz’s presidential campaign was using psychological data based on research covering tens of millions of Facebook users, harvested largely without permission. (It wasn’t until March 2018 that further investigative journalism blew the lid off the story — turning it into a global scandal.)
Former Cambridge Analytica business development director Kaiser, who had a central role in last year’s Netflix documentary about the data misuse scandal (The Great Hack), began her latest data dump late last week — publishing links to scores of previously unreleased internal documents via a Twitter account called @HindsightFiles. (At the time of writing Twitter has placed a temporary limit on viewing the account — citing “unusual activity”, presumably as a result of the volume of downloads it’s attracting.)
Since becoming part of the public CA story Kaiser has been campaigning for Facebook to grant users property rights over their data. She claims she’s releasing new documents from her former employer now because she’s concerned this year’s US election remains at risk of the same type of big-data-enabled voter manipulation that tainted the 2016 result.
“I’m very fearful about what is going to happen in the US election later this year, and I think one of the few ways of protecting ourselves is to get as much information out there as possible,” she told The Guardian.
“Democracies around the world are being auctioned to the highest bidder,” is the tagline claim on the Twitter account Kaiser is using to distribute the previously unpublished documents — more than 100,000 of which are set to be released over the coming months, per the newspaper’s report.
The releases are being grouped into countries — with documents to-date covering Brazil, Kenya and Malaysia. There is also a themed release dealing with issues pertaining to Iran, and another covering CA/SCL’s work for Republican John Bolton’s Political Action Committee in the U.S.
The releases look set to underscore the global scale of CA/SCL’s social media-fuelled operations, with Kaiser writing that the previously unreleased emails, project plans, case studies and negotiations span at least 65 countries.
A spreadsheet of associate officers included in the current cache lists SCL associates in a large number of countries and regions including Australia, Argentina, the Balkans, India, Jordan, Lithuania, the Philippines, Switzerland and Turkey, among others. A second tab listing “potential” associates covers political and commercial contacts in various other places including Ukraine and even China.
A UK parliamentary committee which investigated online political campaigning and voter manipulation in 2018 — taking evidence from Kaiser and CA whistleblower Chris Wylie, among others — urged the government to audit the PR and strategic communications industry, warning in its final report how “easy it is for discredited companies to reinvent themselves and potentially use the same data and the same tactics to undermine governments, including in the UK”.
“Data analytics firms have played a key role in elections around the world. Strategic communications companies frequently run campaigns internationally, which are financed by less than transparent means and employ legally dubious methods,” the DCMS committee also concluded.
The committee’s final report highlighted election and referendum campaigns that SCL Elections (and its myriad “associated companies”) had been involved in across around thirty countries. But per Kaiser’s telling its activities — and/or ambitions — appear to have been considerably broader and even global in scope.
Documents released to date include a case study of work that CA was contracted to carry out in the U.S. for Bolton’s Super PAC — where it undertook what is described as “a personality-targeted digital advertising campaign with three interlocking goals: to persuade voters to elect Republican Senate candidates in Arkansas, North Carolina and New Hampshire; to elevate national security as an issue of importance and to increase public awareness of Ambassador Bolton’s Super PAC”.
Here CA writes that it segmented “persuadable and low-turnout voter populations to identify several key groups that could be influenced by Bolton Super PAC messaging”, targeting them with online and Direct TV ads — designed to “appeal directly to specific groups’ personality traits, priority issues and demographics”. 

Psychographic profiling — derived from CA’s modelling of Facebook user data — was used to segment U.S. voters into targetable groups, including for serving microtargeted online ads. The company badged voters with personality-specific labels such as “highly neurotic” — targeting individuals with customized content designed to prey on their fears and/or hopes, based on its analysis of voters’ personality traits.

#CONSCIENTIOUS? As in do you care about your #children & talk about them on social media? Well, then your data has been categorized to say this is the message @AmbJohnBolton + #CambridgeAnalytica had for you #NorthCarolina #Hindsightis2020 https://t.co/yYYDm9UTNA pic.twitter.com/C0gHa1fdGG
— Hindsight is 2020 (@HindsightFiles) January 3, 2020
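Mechanically, the segmentation the documents describe amounts to bucketing modelled personality scores and mapping each bucket to tailored creative. The sketch below uses entirely hypothetical thresholds and labels; it is meant only to make the reported technique concrete, not to reproduce CA’s models, which were far more elaborate.

```typescript
// Hypothetical trait-based segmentation (thresholds and labels invented).
interface VoterProfile {
  id: string;
  neuroticism: number;        // modelled 0-1 score
  conscientiousness: number;  // modelled 0-1 score
}

function assignSegment(p: VoterProfile): string {
  if (p.neuroticism > 0.7) return "highly-neurotic: fear-framed security messaging";
  if (p.conscientiousness > 0.7) return "conscientious: family/duty-framed messaging";
  return "default messaging";
}

const example: VoterProfile = { id: "voter-001", neuroticism: 0.82, conscientiousness: 0.4 };
console.log(assignSegment(example)); // -> "highly-neurotic: fear-framed security messaging"
```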

The process of segmenting voters by personality and sentiment was made commercially possible by access to identity-linked personal data — which puts Facebook’s population-scale collation of identities and individual-level personal data squarely in the frame.
It was a cache of tens of millions of Facebook profiles, along with responses to a personality quiz app linked to Facebook accounts, which was sold to Cambridge Analytica in 2014, by a company called GSR, and used to underpin its psychographic profiling of U.S. voters.
In evidence to the DCMS committee last year GSR’s co-founder, Aleksandr Kogan, argued that Facebook did not have a “valid” developer policy at the time, since he said the company did nothing to enforce the stated T&Cs — meaning users’ data was wide open to misappropriation and exploitation.
The UK’s data protection watchdog also took a dim view. In 2018 it issued Facebook with the maximum fine possible, under relevant national law, for the CA data breach — and warned in a report that democracy is under threat. The country’s information commissioner also called for an “ethical pause” of the use of online microtargeting ad tools for political campaigning.
No such pause has taken place.
Meanwhile for its part, since the Cambridge Analytica scandal snowballed into global condemnation of its business, Facebook has made loud claims to be ‘locking down’ its platform — including saying it would conduct an app audit and “investigate all apps that had access to large amounts of information”; “conduct a full audit of any app with suspicious activity”; and “ban any developer from our platform that does not agree to a thorough audit”.
However, close to two years later, there’s still no final report from the company on the upshot of this self ‘audit’.
And while Facebook was slapped with a headline-grabbing FTC fine on home soil, there was in fact no proper investigation; no requirement for it to change its privacy-hostile practices; and blanket immunity for top execs — even for any unknown data violations in the 2012 to 2018 period. So, ummm…
In another highly curious detail, GSR’s other co-founder, a data scientist called Joseph Chancellor, was in fact hired by Facebook in late 2015. The tech giant has never satisfactorily explained how it came to recruit one of the two individuals at the center of a voter manipulation data misuse scandal which continues to wreak hefty reputational damage on Zuckerberg and his platform. But being able to ensure Chancellor was kept away from the press during a period of intense scrutiny looks pretty convenient.

Aleksandr Kogan and Joseph Chancellor were deceptively harvesting/matching data with that personality quiz on Facebook as GSR for SCL. John Bolton PAC was the client. @chrisinsilico’s papers to DCMS committee have the emails about delivering the data to Bolton. https://t.co/NLagY2PWEG
— David Carroll (@profcarroll) January 3, 2020

Last fall, the GSR co-founder was reported to have left Facebook — as quietly, and with as little explanation given, as when he arrived on the tech giant’s payroll.
So Kaiser seems quite right to be concerned that the data industrial complex will do anything to keep its secrets — given it’s designed and engineered to sell access to yours. Even as she has her own reasons to want to keep the story in the media spotlight.
Platforms whose profiteering purpose is to track and target people at global scale — which function by leveraging an asymmetrical ‘attention economy’ — have zero incentive to change or have change imposed upon them. Not when the propaganda-as-a-service business remains in such high demand, whether for selling actual things like bars of soap, or for hawking ideas with a far darker purpose.

From @carolecadwalla, new Cambridge Analytica docs reveal ‘global infrastructure…to manipulate voters on an industrial scale’
Key: CA & successors are parasites on host body of #SurveillanceCapitalism. It provides attack surface, weapons, opportunity. https://t.co/qA4RKYYVi6
— Shoshana Zuboff (@shoshanazuboff) January 4, 2020

BigID bags another $50M round as data privacy laws proliferate

Almost exactly 4 months to the day after BigID announced a $50 million Series C, the company was back today with another $50 million round. The Series D came entirely from Tiger Global Management. The company has raised a total of $144 million.
What warrants $100 million in interest from investors in just four months is BigID’s mission to understand the data a company has and manage that in the context of increasing privacy regulation including GDPR in Europe and CCPA in California, which went into effect this month.
BigID CEO and co-founder Dimitri Sirota admits that his company formed at the right moment when it launched in 2016, but says he and his co-founders had an inkling that there would be a shift in how governments view data privacy.
“Fortunately for us, some of the requirements that we said were going to be critical, like being able to understand what data you collect on each individual across your entire data landscape, have come to [pass],” Sirota told TechCrunch. While he understands that there are lots of competing companies going after this market, he believes that being early helped his startup establish a brand identity earlier than most.
Meanwhile, the privacy regulation landscape continues to evolve. Even as California privacy legislation is taking effect, many other states and countries are looking at similar regulations. Canada is looking at overhauling its existing privacy regulations.
Sirota says that he wasn’t actually looking to raise either the C or the D, and in fact still has B money in the bank, but when big investors want to give you money on decent terms, you take it while the money is there. These investors clearly see the data privacy landscape expanding and want to get involved. He recognizes that economic conditions can change quickly, and it can’t hurt to have money in the bank for when that happens.
That said, Sirota says you don’t raise money to keep it in the bank. At some point, you put it to work. The company has big plans to expand beyond its privacy roots and into other areas of security in the coming year. Although he wouldn’t go into too much detail about that, he said to expect some announcements soon.
For a company that is only four years old, it has been amazingly proficient at raising money with a $14 million Series A and a $30 million Series B in 2018, followed by the $50 million Series C last year, and the $50 million round today. And, Sirota said, he didn’t even have to go looking for the latest funding. Investors came to him — no trips to Sand Hill Road, no pitch decks. Sirota wasn’t willing to discuss the company’s valuation, only saying the investment was minimally dilutive.
BigID, which is based in New York City, already has some employees in Europe and Asia, but he expects additional international expansion in 2020. Overall the company has around 165 employees at the moment and he sees that going up to 200 by mid-year as they make a push into some new adjacencies.

BigID announces $50M Series C investment as privacy takes center stage

ByteDance & TikTok have secretly built a Deepfakes maker

TikTok parent company ByteDance has built technology to let you insert your face into videos starring someone else. TechCrunch has learned that ByteDance has developed an unreleased feature using life-like Deepfakes technology that the app’s code refers to as Face Swap. Code in both TikTok and its Chinese sister app Douyin asks users to take a multi-angle biometric scan of their face, then choose from a selection of videos they want to add their face to and share.
Users scan themselves, pick a video, and have their face overlaid on the body of someone in the clip with ByteDance’s new Face Swap feature
The Deepfakes feature, if launched in Douyin and TikTok, could create a more controlled environment where face swapping technology plus a limited selection of source videos  can be used for fun instead of spreading misinformation. It might also raise awareness of the technology so more people are aware that they shouldn’t believe everything they see online. But it’s also likely to heighten fears about what ByteDance could do with such sensitive biometric data — similar to what’s used to set up FaceID on iPhones.
Several other tech companies have recently tried to consumerize watered-down versions of Deepfakes. The app Morphin lets you overlay a computerized rendering of your face on actors in GIFs. Snapchat offered a FaceSwap option for years that would switch the visages of two people in frame, or replace one on camera with one from your camera roll, and there are standalone apps that do that too like Face Swap Live. Then last month, TechCrunch spotted Snapchat’s new Cameos for inserting a real selfie into video clips it provides, though the results aren’t meant to look confusingly realistic.
Most problematic has been Chinese Deepfakes app Zao, which uses artificial intelligence to blend one person’s face into another’s body as they move and synchronize their expressions. Zao went viral in September despite privacy and security concerns about how users’ facial scans might be abused. Zao was previously blocked by China’s WeChat for presenting “security risks”. [Correction: While “Zao” is mentioned in the discovered code, it refers to the general concept rather than a partnership between ByteDance and Zao.]
But ByteDance could bring convincingly life-like Deepfakes to TikTok and Douyin, two of the world’s most popular apps with over 1.5 billion downloads.
Zao in the Chinese iOS App Store
Hidden Inside TikTok and Douyin
TechCrunch received a tip about the news from Israeli in-app market research startup Watchful.ai. The company had discovered code for the Deepfakes feature in the latest version of TikTok’s and Douyin’s Android apps. Watchful.ai was able to activate the code in Douyin to generate screenshots of the feature, though it’s not currently available to the public.
First, users scan their face into TikTok. This also serves as an identity check to make sure you’re only submitting your own face so you can’t make unconsented Deepfakes of anyone else using an existing photo or a single shot of their face. By asking you to blink, nod, and open and close your mouth while in focus and proper lighting, Douyin can ensure you’re a live human and create a manipulable scan of your face that it can stretch and move to express different emotions or fill different scenes.
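The scan flow described above reads like a prompt-driven liveness check. A rough sketch, with entirely hypothetical prompts and a stubbed detector, is shown below simply to illustrate the shape of such a flow; it is not ByteDance’s implementation.

```typescript
// Hypothetical prompt-driven liveness flow (prompts and checks are stand-ins).
type LivenessPrompt = "blink" | "nod" | "open_mouth" | "close_mouth";
const prompts: LivenessPrompt[] = ["blink", "nod", "open_mouth", "close_mouth"];

// Stand-in for a face-tracking model judging whether the prompted action
// happened on camera within a time window (e.g. via facial landmarks).
async function actionObserved(prompt: LivenessPrompt): Promise<boolean> {
  console.log("waiting for user to", prompt);
  return true; // placeholder
}

async function runLivenessCheck(): Promise<boolean> {
  for (const prompt of prompts) {
    if (!(await actionObserved(prompt))) return false; // not a live user
  }
  return true; // live user: proceed to build the manipulable face scan
}

runLivenessCheck().then((ok) => console.log("liveness:", ok));
```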

You’ll then be able to pick from videos ByteDance claims to have the rights to use, and it will replace the face of whoever’s in the clip with your own. You can then share or download the Deepfake video, though it will include an overlaid watermark the company claims will help distinguish the content as not being real.
Watchful also discovered unpublished updates to TikTok and Douyin’s terms of service that cover privacy and usage of the Deepfakes feature. Inside the US version of TikTok’s Android app, English text in the code explains the feature and some of its terms of use:

“Your facial pattern will be used for this feature. Read the Drama Face Terms of Use and Privacy Policy for more details. Make sure you’ve read and agree to the Terms of Use and Privacy Policy before continuing. 1. To make this feature secure for everyone, real identity verification is required to make sure users themselves are using this feature with their own faces. For this reason, uploaded photos can’t be used; 2. Your facial pattern will only be used to generate face-change videos that are only visible to you before you post it. To better protect your personal information, identity verification is required if you use this feature later. 3. This feature complies with Internet Personal Information Protection Regulations for Minors. Underage users won’t be able to access this feature. 4. All video elements related to this feature provided by Douyin have acquired copyright authorization.”

Visitors at the Douyin (TikTok) booth at the 2019 smart expo in Hangzhou, Zhejiang province, China, October 18, 2019. (Photo: Costfoto / Barcroft Media via Getty Images)
A longer terms of use and privacy policy was also found in Chinese within Douyin. Translated into English, some highlights from the text include:

“The ‘face-changing’ effect presented by this function is a fictional image generated by the superimposition of our photos based on your photos. In order to show that the original work has been modified and the video generated using this function is not a real video, we will mark the video generated using this function. Do not erase the mark in any way.”

“The information collected during the aforementioned detection process and using your photos to generate face-changing videos is only used for live detection and matching during face-changing. It will not be used for other purposes . . . And matches are deleted immediately and your facial features are not stored.”

“When you use this function, you can only use the materials provided by us, you cannot upload the materials yourself. The materials we provide have been authorized by the copyright owner”.

“According to the ‘Children’s Internet Personal Information Protection Regulations’ and the relevant provisions of laws and regulations, in order to protect the personal information of children / youths, this function restricts the use of minors”.

We reached out to TikTok and Douyin for comment regarding the Deepfakes feature, when it might launch, how the privacy of biometric scans is protected, and the age limit. However, TikTok declined to answer those questions. Instead a spokesperson insisted that “after checking with the teams I can confirm this is definitely not a function in TikTok, nor do we have any intention of introducing it. I think what you may be looking at is something slated for Douyin – your email includes screenshots that would be from Douyin, and a privacy policy that mentions Douyin. That said, we don’t work on Douyin here at TikTok.” They later told TechCrunch that “The inactive code fragments are being removed to eliminate any confusion”, which implicitly confirms that Face Swap code was found in TikTok.
A Douyin spokesperson tells TechCrunch “Douyin follows the laws and regulations of the jurisdictions in which it operates, which is China”. They denied that the Face Swap terms of service appear in TikTok despite TechCrunch reviewing code from the app showing those terms of service and the feature’s functionality.

This is suspicious, and doesn’t explain why code for the Deepfakes feature, along with special English-language terms of service for it, appears in TikTok and not just Douyin, where the feature can already be activated and where a longer terms of service was spotted. TikTok’s US entity has previously denied complying with censorship requests from the Chinese government, in contradiction to sources who told the Washington Post that TikTok did censor some political and sexual content at China’s behest.
Consumerizing Deepfakes
It’s possible that the Deepfakes Face Swap feature never officially launches in China or the US. But it’s fully functional, even if unreleased, and demonstrates ByteDance’s willingness to embrace the controversial technology despite its reputation for misinformation and non-consensual pornography. At least it’s restricting the use of the feature by minors, only letting you face-swap yourself, and preventing users from uploading their own source videos. That should keep it from being used to create dangerous misinformation Deepfakes like the one that made House Speaker Nancy Pelosi seem drunk.
“It’s very rare to see a major social networking app restrict a new, advanced feature to their users 18 and over only” Watchful.ai co-founder and CEO Itay Kahana tells TechCrunch. “These deepfake apps might seem like fun on the surface, but they should not be allowed to become trojan horses, compromising IP rights and personal data, especially personal data from minors who are overwhelmingly the heaviest users of TikTok to date.”
TikTok has already been banned by the US Navy, and ByteDance’s acquisition of Musical.ly and its merger into TikTok is under investigation by the Committee on Foreign Investment in the United States. Deepfake fears could further heighten scrutiny.
With the proper safeguards, though, face-changing technology could usher in a new era of user generated content where the creator is always at the center of the action. It’s all part of a new trend of personalized media that could be big in 2020. Social media has evolved from selfies to Bitmoji to Animoji to Cameos and now consumerized Deepfakes. When there are infinite apps and videos and notifications to distract us, making us the star could be the best way to hold our attention.

Here’s where California residents can stop companies selling their data

California’s new privacy law is now in effect, allowing state residents to take better control of the data that’s collected on them — from social networks, banks, credit agencies, and more.
There’s just one catch: the companies, many of which lobbied against the law, don’t make it easy.
California’s Consumer Privacy Act (CCPA) gives anyone who resides in the state the right to access and obtain copies of the data that companies store on them, to delete that data, and to opt out of companies selling or monetizing their data. It’s the biggest state-level overhaul of privacy rules in a generation. State regulators can impose fines and other sanctions on companies that violate the law — although the law’s enforcement provisions do not take effect until July. That’s probably a good thing for companies, given most major tech giants operating in the state are not ready to comply with the law.
Just as they did with Europe’s GDPR, many companies have put up new privacy policies and new data portals in preparation, which allow consumers to access their data and to opt out of it being sold on to third parties, such as advertisers. But good luck finding them. Most companies aren’t transparent about where their data portals are; they are often out of sight and buried in privacy policies, near-guaranteeing that nobody will find them.
Just two days into the new law, and some are already fixing it for the average Californian.
Damian Finol created a running directory of company pages that allow California residents to opt-out of their data being sold, and request their information. The directory is updated frequently, and so far includes banks, retail giants, airlines, car rental services, gaming giants, and cell companies — to name a few.
caprivacy.me is a simple directory of links to where California residents can tell companies not to sell their data, and request what data companies store on them. (Screenshot: TechCrunch)
The project is still in its infancy and relies on community contributions (anyone can submit a suggestion), he said. In less than a day, it has already racked up more than 80 links.
“I’m passionate about privacy and allowing people to declare what their personal privacy model is,” Finol told TechCrunch.
“I grew up queer in Latin America in the 1990s, so keeping private the truth about me was vital. Nowadays, I think of my LGBTQ siblings in places like the Middle East where if their privacy is violated, they can face capital punishment,” he said, explaining his motivations behind the directory.
There’s no easy way — yet — to opt-out in one go. Anyone in California who wants to opt-out has to go through each link. But once it’s done, it’s done. Put on a pot of coffee and get started.

California’s Privacy Act: What you need to know now

The California Consumer Privacy Act officially takes effect today

California’s much-debated privacy law officially takes effect today, a year and a half after it was passed and signed — but it’ll be six more months before you see the hammer drop on any scofflaw tech companies that sell your personal data without your permission.
The California Consumer Privacy Act, or CCPA, is a state-level law that requires, among other things, that companies notify users of the intent to monetize their data, and give them a straightforward means of opting out of said monetization.

Here’s a top-level summary of some of its basic tenets:
Businesses must disclose what information they collect, for what business purpose they do so, and any third parties they share that data with.
Businesses will be required to comply with official consumer requests to delete that data.
Consumers can opt out of their data being sold, and businesses can’t retaliate by changing the price or level of service.
Businesses can, however, offer “financial incentives” for being allowed to collect data.
California authorities are empowered to fine companies for violations.
The law is described in considerably more detail here, but the truth is that it will probably take years before its implications for businesses and regulators are completely understood and brought to bear. In the meantime the industries that will be most immediately and obviously affected are panicking.

Silicon Valley is terrified of California’s privacy law. Good.

A who’s-who of internet-reliant businesses has publicly opposed the CCPA. While they have been careful to avoid saying such regulation is unnecessary, they have said that this regulation is unnecessary. What we need, they say, is a federal law.
That’s true as far as it goes — it would protect more people and there would be less paperwork for companies that must now adapt their privacy policies and reporting to the CCPA’s requirements. But the call for federal regulation is transparently a stall tactic: an adequate bill at that level would likely take a year or more of intensive work even at the best of times, let alone during an election year while the president is being impeached.
So California wisely went ahead and established protections for its own residents, though as a consequence it will have aroused the ire of many companies based there.
A six-month grace period follows today’s official activation of the CCPA; this is a normal and necessary part of breaking in such a law, when honest mistakes can go unpunished and the inevitable bugs in the system can be squelched.
But starting in July, offenses will be assessed with fines on the scale of thousands of dollars per violation, something that adds up real quick at the scale companies like Google and Facebook work at.
Adapting to the CCPA will be difficult, but as the establishment of GDPR in Europe has shown, it’s far from impossible, and at any rate the CCPA’s requirements are considerably less stringent than the GDPR’s. Still, if your company isn’t already working on getting into compliance, better get started.

California’s new data privacy law brings U.S. closer to GDPR

A Twitter app bug was used to match 17 million phone numbers to user accounts

A security researcher said he has matched 17 million phone numbers to Twitter user accounts by exploiting a flaw in Twitter’s Android app.
Ibrahim Balic found that it was possible to upload entire lists of generated phone numbers through Twitter’s contacts upload feature. “If you upload your phone number, it fetches user data in return,” he told TechCrunch.
He said Twitter’s contact upload feature doesn’t accept lists of phone numbers in sequential format — likely as a way to prevent this kind of matching. Instead, he generated more than two billion phone numbers, one after the other, then randomized the numbers, and uploaded them to Twitter through the Android app. (Balic said the bug did not exist in the web-based upload feature.)
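Balic hasn’t published his tooling, and Twitter’s internal checks aren’t documented, but the enumerate-then-shuffle approach he describes is easy to picture. The Python sketch below is purely illustrative: the country prefix, number range, batch size and the stubbed upload step are assumptions for the example, not details from his research.

```python
# Illustrative sketch of the enumerate-then-shuffle idea described above.
# Not Balic's actual tooling; the prefix, range, batch size and the upload
# stub are hypothetical placeholders.
import random


def candidate_numbers(country_code, start, count):
    """Generate `count` sequential subscriber numbers, then shuffle them
    so the list no longer looks sequential."""
    numbers = ["+{}{}".format(country_code, start + i) for i in range(count)]
    random.shuffle(numbers)
    return numbers


def batches(items, size):
    """Yield fixed-size chunks, mimicking repeated contact-book uploads."""
    for i in range(0, len(items), size):
        yield items[i:i + size]


if __name__ == "__main__":
    pool = candidate_numbers("90", 5_300_000_000, 10_000)  # hypothetical range
    for chunk in batches(pool, 500):  # hypothetical per-upload batch size
        # In the research described above, each chunk was uploaded through the
        # app's contact-sync feature, and any accounts registered to those
        # numbers came back in the response.
        pass
```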
Over a two-month period, Balic said he matched records from users in Israel, Turkey, Iran, Greece, Armenia, France, and Germany, but stopped after Twitter blocked the effort on December 20.
Balic provided TechCrunch with a sample of the phone numbers he matched. Using the site’s password reset feature, we verified his findings by comparing a random selection of usernames with the phone numbers that were provided.
In one case, TechCrunch was able to identify a senior Israeli politician using their matched phone number.
While he did not alert Twitter to the vulnerability, he took many of the phone numbers of high-profile Twitter users — including politicians and officials — to a WhatsApp group in an effort to warn users directly.
It’s not believed Balic’s efforts are related to a Twitter blog post published this week, which confirmed a bug could have allowed “a bad actor to see nonpublic account information or to control your account,” such as tweets, direct messages, and location information.
A Twitter spokesperson, when reached, did not immediately comment outside of business hours.
It’s the latest security lapse involving Twitter data in the past year. In May, Twitter admitted it gave account location data to one of its partners, even if the user had opted-out of having their data shared. In August, the company said it inadvertently gave its ad partners more data than it should have done. And just last month, Twitter confirmed it used phone numbers provided by users for two-factor authentication for serving targeted ads.
Balic is previously known for identifying a security flaw that affected Apple’s developer center in 2013.

Hackers are spreading Islamic State propaganda by hijacking dormant Twitter accounts

Plenty of Fish app was leaking users’ hidden names and postal codes

Dating app Plenty of Fish has pushed out a fix for its apps after a security researcher found they were leaking information that users had set to “private” on their profiles.
The app’s servers were always silently returning users’ first names and ZIP codes to the app, according to The App Analyst, a mobile expert who writes about his analyses of popular apps on his eponymous blog.
The leaked data was not immediately visible to app users, and it was scrambled to make it difficult to read. But using freely available tools designed to analyze network traffic, the researcher found it was possible to reveal this hidden information about users as their profiles appeared on his phone.
In one case, the App Analyst found enough information to identify where a particular user lived, he told TechCrunch.
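The App Analyst hasn’t detailed his exact setup, but the general technique is to route the phone’s traffic through an intercepting proxy and watch for response fields the app never renders. The mitmproxy script below is a minimal sketch under that assumption; the endpoint hostname and the “hidden” field names are hypothetical stand-ins, not Plenty of Fish’s real API.

```python
# Minimal mitmproxy script sketch: log JSON response fields the UI never shows.
# Hostname and field names are hypothetical; run with: mitmdump -s pof_sketch.py
import json

from mitmproxy import http

WATCHED_FIELDS = {"first_name", "zip_code"}  # hypothetical "hidden" profile fields


def response(flow: http.HTTPFlow) -> None:
    # Only inspect JSON responses from the (hypothetical) profile endpoint.
    if "api.example-dating-app.com" not in flow.request.pretty_host:
        return
    if "json" not in (flow.response.headers.get("content-type") or ""):
        return
    try:
        body = json.loads(flow.response.text or "")
    except ValueError:
        return
    if not isinstance(body, dict):
        return
    leaked = {k: v for k, v in body.items() if k in WATCHED_FIELDS}
    if leaked:
        # These values reached the device even though the profile UI hid them.
        print("[possible leak]", flow.request.path, leaked)
```

A real investigation would also have to undo whatever scrambling the app applies to the data in transit, which the researcher notes was present here.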
Plenty of Fish has more than 150 million registered users, according to its parent company IAC. In recent years, law enforcement agencies have warned about the threats some people face on dating apps like Plenty of Fish. Reports suggest sex attacks involving dating apps have risen in the past five years. And those in the LGBTQ+ community on these apps also face safety threats from both individuals and governments, prompting apps like Tinder to proactively warn their LGBTQ+ users when they visit regions and states with restrictive and oppressive laws against same-sex partners.
A fix is said to have rolled out for the information leakage bug earlier this month. A spokesperson for Plenty of Fish did not immediately comment.
Earlier this year, the App Analyst found a number of third-party tools were allowing app developers to record the device’s screen while users engaged with their apps, prompting a crackdown by Apple.

Many popular iPhone apps secretly record your screen without asking

Just because it’s legal, it doesn’t mean it’s right

Polina Arsentyeva
Contributor

Polina Arsentyeva, a former commercial litigator, is a data privacy attorney who counsels fintech and startup clients on how to innovate using data in a transparent and privacy-forward way.

Companies often tout their compliance with industry standards — I’m sure you’ve seen the logos, stamps and “Privacy Shield Compliant” declarations. As we, and the FTC, were reminded a few months ago, that label does not mean the criteria were met initially, much less years later when finally subjected to government review.
Alastair Mactaggart — an activist who helped promote the California Consumer Privacy Act (CCPA) — has threatened a ballot initiative allowing companies to voluntarily certify compliance with CCPA 2.0 to the still-unformed agency. While that kind of advertising seems like a no-brainer for companies looking to stay competitive in a market that values privacy and security, is it actually? Business considerations aside, is there a moral obligation to comply with all existing privacy laws, and is a company unethical for relying on exemptions from such laws?
I reject the notion that compliance with the law and morality are the same thing — or that one denotes the other. In reality, it’s a nuanced decision based on cost, client base, risk tolerance and other factors. Moreover, giving voluntary compliance the appearance of additional trust or altruism is actually harmful to consumers because our current system does not permit effective or timely oversight and the type of remedies available after the fact do not address the actual harms suffered.
It’s not unethical to rely on an exemption
Compliance is not tied to morality.
At its heart is a cost analysis, and a nuanced analysis at that. Privacy laws — as much as legislators want to believe otherwise — are not black and white in their implementation. Not all unregulated data collection is nefarious and not all companies that comply (voluntarily or otherwise) are purely altruistic. While penalties have a financial cost, data collection is a revenue source for many because of the knowledge and insights gained from large stores of varied data — and other companies’ need to access that data.
Companies balance the cost of building compliant systems and processes, and of amending existing agreements with what are often thousands of service providers, against the lost business of not being able to provide those services to consumers covered by those laws.
There is also the matter of applicable laws. Complying with one law may interfere with or lessen the protections offered by the laws that make you exempt in the first place: for instance, where one law prohibits you from sharing certain information for security purposes and another would require you to disclose it, making both the data and the person less secure.
Strict compliance also allows companies to rest on their laurels while taking advantage of a privacy-first reputation. The law is the minimum standard, while ethics are meant to prescribe the maximum. Complying, even with an inapplicable law, is quite literally the least the company can do. It also then puts them in a position to not make additional choices or innovate because they have already done more than what is expected. This is particularly true with technology-based laws, where legislation often lags behind the industry and its capabilities.
Moreover, who decides what is ethical varies by time, culture and power dynamics. Complying with the strict letter of a law meant to cover everyone does not take into account that companies in different industries use data differently. Companies are trying to fit into a framework without even answering the question of which framework they should voluntarily comply with. I can hear you now: “That’s easy! The one with the highest/strongest/strictest standard for collection.”  These are all adjectives that get thrown around when talking about a federal privacy law. However, “highest,” “most,” and “strongest,” are all subjective and do not live in a vacuum, especially if states start coming out with their own patchwork of privacy laws.
I’m sure there are people that say that Massachusetts — which prohibits a company from providing any details to an impacted consumer — offers the “most” consumer protection, while there is a camp that believes providing as much detailed information as possible — like California and its sample template — provides the “most” protection. Who is right? This does not even take into account that data collection can happen across multiple states. In those instances, which law would cover that individual?
Government agencies can’t currently provide sufficient oversight
Slapping a certification onto your website that you know you don’t meet has been treated as an unfair and deceptive practice by the FTC. However, the FTC generally does not have fining authority on a first-time violation. And while it can force companies to compensate consumers, damages can be very difficult to calculate.
Unfortunately, damages for privacy violations are even harder to prove in court; funds that are obtained go disproportionately to counsel, with each individual receiving a de minimis payout, if they even make it to court. The Supreme Court has indicated through its holdings in Clapper v. Amnesty International USA, 133 S. Ct. 1138 (2013), and Spokeo, Inc. v. Robins, 136 S. Ct. 1540 (2016), that damages like the potential of fraud or the ramifications of data loss or misuse are too speculative to confer standing to maintain a lawsuit.
This puts the FTC in a weaker negotiating position to get results with as few resources expended as possible, particularly as the FTC can only do so much — it has limited jurisdiction and no control over banks or nonprofits. To echo Commissioner Noah Phillips, this won’t change without a federal privacy law that sets clear limits on data use and damages and gives the FTC greater power to enforce these limits in litigation.
Finally, in addition to these legal constraints, the FTC is understaffed in privacy, with approximately 40 full-time staff members dedicated to protecting the privacy of more than 320 million Americans. To adequately police privacy, the FTC needs more lawyers, more investigators, more technologists and state-of-the-art tech tools. Otherwise, it will continue to fund certain investigations at the cost of understaffing others.
Outsourcing oversight to a private company may not fare any better — for the simple fact that such certification will come at a high price (especially in the beginning), leaving medium and small-sized businesses at a competitive disadvantage. Further, unlike a company’s privacy professionals and legal team, a certification firm is more likely to look to compliance with the letter of the law — putting form over substance — instead of addressing the nuances of any particular business’ data use models.
Existing remedies don’t address consumer harms
Even if an agency does come down with an enforcement action, the penalty powers those agencies currently have do not adequately address the consumer harm. That is largely because compliance with privacy legislation is not an on-off switch, and the current regime is focused more on financial restitution.
Even where there are prescribed actions to come into compliance with the law, that compliance takes years and does not address the ramifications of historic non-compliant data use.
Take CNIL’s formal notice against Vectaury for failing to collect informed, affirmative consent. Vectaury collected geolocation data from mobile app users to provide marketing services to retailers, using a consent management platform it developed that implemented the Transparency and Consent Framework of the IAB (a self-regulating industry association). The notice warrants particular attention because Vectaury was following an established trade association guideline, and yet its consent was deemed invalid.
As a result, CNIL put Vectaury on notice to cease processing data this way and to delete the data collected during that period. And while this can be counted as a victory because the decision forced the company to rebuild its systems — how many companies would have the budget to do this if they didn’t have the resources to comply in the first place? Further, rebuilding will take time, so what happens to their business model in the meantime? Can they continue to be non-compliant, in theory, until the agency-set deadline for compliance is met? And even if the underlying data is deleted, none of the parties it was shared with, nor the inferences built on it, are affected.
The water is even murkier when you’re examining remedies for false Privacy Shield self-certification. A Privacy Shield logo on a company’s site essentially says that the company believes its cross-border data transfers are adequately secured and limited to parties it believes have responsible data practices. So if a company is found to have falsely made those underlying representations (or to have failed to comply with another requirement), it would have to stop conducting those transfers. And if those transfers are part of how its services are provided, does it just have to stop providing those services to its customers immediately?
In practice, choosing not to comply with an otherwise inapplicable law is not a matter of not caring about your customers or of moral failing; it is quite literally just “not how anything works.” Nor is there any added benefit to consumers in trying — and aren’t consumers what count in the end?
Opinions expressed in this article are those of the author and not of her firm, investors, clients or others.