
France’s competition watchdog orders Google to pay for news reuse

France’s competition authority has ordered Google to negotiate with publishers to pay for reuse of snippets of their content — such as can be displayed in its News aggregation service or surfaced via Google Search.
The country was the first of the European Union Member States to transpose the neighbouring right for news into national law, following the passing of a pan-EU copyright reform last year.
Among various controversial measures the reform included a provision to extend copyright to cover content such as the ledes of news stories which aggregators such as Google News scrape and display. The copyright reform as a whole was voted through the EU parliament in March 2019, while France’s national law for extended press publishers rights came into force in October 2019.
A handful of individual EU Member States, including Germany and Spain, had previously passed similar laws covering the use of news snippets — without successfully managing to extract payments from Google, as lawmakers had hoped.
In Spain, for example, which made payments to publishers mandatory, Google instead chose to pull the plug on its Google News service entirely. But publishers who lobbied for a pan-EU reform hoped a wider push could turn the screw on the tech giant.
Nonetheless, Google has continued to talk tough over paying for this type of content.
In a September 2019 blog post the tech giant dug in, writing — without apparent irony — that: “We sell ads, not search results, and every ad on Google is clearly marked. That’s also why we don’t pay publishers when people click on their links in a search result.”
It has also since changed how Google News displays content in France, as Euractiv reported last year — switching to showing headlines and URLs only, editing out the text snippets it shows in most other markets.
[Screengrab: how Google News displays content in France]
However France’s competition authority has slapped down the tactic — taking the view that Google’s unilateral withdrawal of snippets to deny payment is likely to constitute an abuse of a dominant market position, which it writes “seriously and immediately damaged the press sector”.
It cites Google’s unilateral withdrawal of “longer display article extracts, photographs, infographics and videos within its various services (Google Search, Google News and Discover), unless the publishers give it free authorization” as unfair behavior.
“In practice, the vast majority of press publishers have granted Google licenses for the use and display of their protected content, and this without possible negotiation and without receiving any remuneration from Google. In addition, as part of Google’s new display policy, the licenses which have been granted to it by publishers and press agencies offer it the possibility of taking up more content than before,” it writes in French (which we’ve translated via Google Translate).
“In these conditions, in addition to their referral on the merits, the complainants requested the ordering of provisional measures aimed at enjoining Google to enter in good faith into negotiations for the remuneration of the reuse of their content.”
Hence issuing an emergency order — which gives Google three months to negotiate “in good faith” with press agencies and publishers to pay for reusing bits of their content.
Abusive practices the agency says it suspects Google of at this stage of its investigation are:
the imposition of unfair trading conditions;
circumvention of the law;
and discrimination (i.e. its unilateral policy of zero remuneration for all publishers).
The order requires Google to display news snippets during the negotiation period, in accordance with publishers’ wishes.
Terms agreed via the negotiation process will apply retrospectively, from the date the law came into force (i.e. last October).
Google is also required to send in monthly reports on how it’s implementing the decision.
“This injunction requires that the negotiations actually result in a proposal for remuneration from Google,” it adds.
We reached out to Google for comment on the Autorité de la Concurrence’s action. In a statement attributed to Richard Gingras, its VP News, the company told us:
Since the European Copyright law came into force in France last year, we have been engaging with publishers to increase our support and investment in news. We will comply with the FCA’s order while we review it and continue those negotiations.
A Google spokeswoman also pointed back to its blog post from last year — to highlight what she described as “the ways we already work with news publishers for context”.
In the blog post the company discusses directing traffic to news sites; providing ad tech used by many publishers; and a funding vehicle via which it says it’s investing $300M “to help news publishers around the world develop new products and business models that fit the different publishing marketplace the Internet has enabled”.
Interim measures are an antitrust tool that Europe’s competition authorities have pulled from the back of the cupboard and started dusting off lately.
Last October EU competition chief Margrethe Vestager used an interim order to require chipmaker Broadcom to stop applying exclusivity clauses in agreements with six of its major customers — while an investigation into its practices continues.
The commission EVP, who also heads up the bloc’s digital strategy, has suggested she will make greater use of interim orders as an enforcement tool to keep pace with the fast-moving digital economy, responding to concern that regulators cannot act quickly enough to curtail market abuse in the modern Internet era.
In the case of the French competition authority’s probe of Google’s treatment of publishers’ content, the authority writes that the interim protective measures it’s ordered will remain in force until it adopts its decision “on the merits”.

Call for common EU approach to apps and data to fight COVID-19 and protect citizens’ rights

The European Commission has responded to the regional scramble for apps and data to help tackle the coronavirus crisis by calling for a common EU approach to boost the effectiveness of digital interventions and ensure key rights and freedoms are respected.
The European Union’s executive body wants to ensure Member States’ individual efforts to use data and tech tools to combat COVID-19 are aligned and can interoperate across borders — and therefore be more effective, given the virus does not respect national borders.
Current efforts by governments across the EU to combat the virus are being hampered by the fragmentation of approaches, it warns.
At the same time its recommendation puts a strong focus on the need to ensure that fundamental EU rights do not get overridden in the rush to mitigate the spread of the virus — with the Commission urging public health authorities and research institutions to observe a key EU legal principle of data minimization when processing personal data for a coronavirus purpose.
Specifically it writes that these bodies should apply what it calls “appropriate safeguards” — listing pseudonymization, aggregation, encryption and decentralization as examples of best practice. 
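To make those safeguards a little more concrete, here is a minimal sketch (our illustration, not anything from the Commission's text) of two of them: pseudonymization via keyed hashing, and aggregation with a small-count suppression threshold. The record shape and the k=10 cut-off are assumptions.

```python
import hashlib
import hmac

# Hypothetical secret held by the processing body; rotating it between
# reporting periods breaks linkability of pseudonyms over time.
PSEUDONYM_KEY = b"rotate-me-every-reporting-period"

def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with a keyed hash.

    Unlike a plain hash, an HMAC can't be reversed by brute-forcing
    likely identifiers without also holding the key.
    """
    return hmac.new(PSEUDONYM_KEY, user_id.encode(), hashlib.sha256).hexdigest()

def aggregate_by_region(records: list[tuple[str, str]], k: int = 10) -> dict[str, int]:
    """Turn (user_id, region) records into per-region headcounts,
    suppressing any region with fewer than k distinct people, so small
    groups can't be singled out in the released aggregate."""
    people: dict[str, set[str]] = {}
    for user_id, region in records:
        people.setdefault(region, set()).add(pseudonymize(user_id))
    return {region: len(ids) for region, ids in people.items() if len(ids) >= k}
```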
The Commission’s thinking is that getting EU citizens to trust digital efforts — such as the myriad of COVID-19 contacts tracing apps now in development — will be key to their success by helping to drive uptake and usage, which means core rights like privacy take on additional significance at a moment of public health crisis.
Commenting in a statement, commissioner for the EU’s internal market, Thierry Breton said: “Digital technologies, mobile applications and mobility data have enormous potential to help understand how the virus spreads and to respond effectively. With this Recommendation, we put in motion a European coordinated approach for the use of such apps and data, without compromising on our EU privacy and data protection rules, and avoiding the fragmentation of the internal market. Europe is stronger when it acts united.”
“Europe’s data protection rules are the strongest in the world and they are fit also for this crisis, providing for exceptions and flexibility. We work closely with data protection authorities and will come forward with guidance on the privacy implications soon,” added Didier Reynders, the commissioner for justice, in another supporting statement. “We all must work together now to get through this unprecedented crisis. The Commission is supporting the Member States in their efforts to fight the virus and we will continue to do so when it comes to an exit strategy and to recovery. In all this, we will continue to ensure full respect of Europeans’ fundamental rights.”
Since Europe has fast-followed China to become a secondary epicenter for the SARS-CoV-2 virus there has been a rush by governments, institutions and the private sector to grab data and technologies to try to map the spread of the virus and inform policy responses. The Commission itself has leant on telcos to provide anonymized and aggregated user location data for COVID-19 tracking purposes.
Some individual Member States have gone further — calling in tech companies to ask directly for resources and/or data, with little public clarity on what exactly is being provided. Some governments have even rushed out apps that apply individual-level location tracking to enforce quarantine measures.
Multiple EU countries also have contacts tracing apps in the works — taking inspiration from Singapore’s TraceTogether app, which uses Bluetooth proximity as a proxy for infection risk.
With so much digital activity going on — and huge economic and social pressure for a ‘coronavirus fix’ — there are clear risks to privacy and civil liberties. Governments, research institutions and the private sector are all mobilizing to capture health-related data and track people’s location like never before, all set against the pressing backdrop of a public health emergency.
The Commission warned today that some of the measures being taken by certain (unnamed) countries — such as location-tracking of individuals; the use of technology to rate an individual’s level of health risk; and the centralization of sensitive data — risk putting pressure on fundamental EU rights and freedoms.
Its recommendation emphasizes that any restrictions on rights must be justified, proportionate and temporary.
Any such restrictions should remain “strictly limited” to what is necessary to combat the crisis and should not continue to exist “without an adequate justification” after the COVID-19 emergency has passed, it adds.
It’s not alone in expressing such concerns.
In recent days bottom-up efforts have emerged out of EU research institutions with the aim of standardizing a ‘privacy-preserving’ approach to coronavirus contacts tracing.
One coalition of EU technologists and scientists, led by institutions in Germany, Switzerland and France, is pushing a common approach that they hope will get baked into such apps to limit risks. They’ve called the effort PEPP-PT (Pan-European Privacy-Preserving Proximity Tracing).
However a different group of privacy experts is simultaneously pushing for a decentralized method for doing the same thing (DP-3T) — arguing it’s a better fit with the EU’s data protection model as it doesn’t require pseudonymized IDs to be centralized on a server. Instead storage of contacts and individual infection risk processing would be decentralized — performed locally, on the user’s device — thereby shrinking the risk of such a system being repurposed to carry out state-level surveillance of citizens.
The backers of this protocol accept it does not erase all risk, however: tech-savvy hackers could, for instance, intercept the pseudonymized IDs of infected people at the point they’re broadcast to devices for local processing. (And health authorities may be more accustomed to the concept of centralizing data to secure it, rather than radically distributing it.)
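As a rough illustration of why the decentralized design shrinks that repurposing risk, here is a sketch of the basic mechanic (modelled loosely on the public DP-3T design documents, not the project's actual code): the per-day secret never leaves the phone unless its owner is diagnosed and consents, and exposure matching happens entirely on the device.

```python
import hashlib
import hmac
import secrets

class DecentralizedTracer:
    """Loose sketch of a DP-3T-style tracer running on one phone."""

    def __init__(self) -> None:
        self.day_secret = secrets.token_bytes(32)  # never uploaded by default
        self.heard_ids: set[bytes] = set()         # opaque IDs seen over Bluetooth

    def ephemeral_id(self, interval: int) -> bytes:
        # Short-lived broadcast ID for one ~15-minute interval; derived
        # locally, so no server ever learns who was near whom.
        return hmac.new(self.day_secret, interval.to_bytes(4, "big"),
                        hashlib.sha256).digest()[:16]

    def record_contact(self, heard_id: bytes) -> None:
        # Store only the opaque identifiers of nearby phones, on-device.
        self.heard_ids.add(heard_id)

    def check_exposure(self, infected_day_secrets: list[bytes],
                       intervals_per_day: int = 96) -> bool:
        # The server publishes just the day secrets of diagnosed users who
        # consented; each phone re-derives their ephemeral IDs and checks
        # its own local contact log. No contact graph is centralized.
        for secret in infected_day_secrets:
            for i in range(intervals_per_day):
                eid = hmac.new(secret, i.to_bytes(4, "big"),
                               hashlib.sha256).digest()[:16]
                if eid in self.heard_ids:
                    return True
        return False
```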
Earlier this week, one of the technologists involved in the PEPP-PT project told us it intends to support both approaches — centralized and decentralized — in order to try to maximize international uptake, allowing developers to make their own choice of preferred infrastructure.
Though questions remain over achieving interoperability between different models.
Per its recommendation, the Commission looks to be favoring a decentralized model — as the closest fit with the EU’s rights framework.

The European Commission Recommendation on Bluetooth COVID-19 proximity tracing apps specifically notes that they should apply decentralisation as a key data minimisation safeguard, in line with #DP3T. https://t.co/ksL1Obc8My pic.twitter.com/Vb3jTLbeo9
— Michael Veale (@mikarv) April 8, 2020

In a section of its recommendation paper on privacy and data protection for “COVID-19 mobile warning and prevention applications” it also states a preference for “safeguards ensuring respect for fundamental rights and prevention of stigmatization”; and for “the least intrusive yet effective measures”.

The EU will support privacy-respecting #COVIDー19 application for contact tracing “least intrusive yet effective measures” pic.twitter.com/nNdQEGNatd
— Lukasz Olejnik (@lukOlejnik) April 8, 2020

The Commission’s recommendation also stresses the importance of keeping the public informed.
“Transparency and clear and regular communication, and allowing for the input of persons and communities most affected, will be paramount to ensuring public trust when combating the COVID-19 crisis,” it warns. 
The Commission is proposing a joint toolbox for EU Member States to encourage the development of a rights-respecting, coordinated and common approach to smartphone apps for tracing COVID-19 infections — which will consist of [emphasis its]:
specifications to ensure the effectiveness of mobile information, warning and tracing applications from a medical and technical point of view;
measures to avoid proliferation of incompatible applications, support requirements for interoperability and promotion of common solutions;
governance mechanisms to be applied by public health authorities and in cooperation with the European Centre for Disease Control;
the identification of good practices and mechanisms for exchange of information on the functioning of the applications; and
sharing data with relevant epidemiological public bodies, including aggregated data to ECDC.
It also says it will be providing guidance for Member States that will specifically cover off data protection and privacy implications — another clear signal of concerns.
“The Commission is in close contact with the European Data Protection Board [EDPB] for an overview of the processing of personal data at national level in the context of the coronavirus crisis,” it adds.
Yesterday, following a plenary meeting of the EU data watchdogs body, the EDPB announced that it’s assigned expert subgroups to work on developing guidance on key aspects of data processing in the fight against COVID-19 — including for geolocation and other tracing tools in the context of the COVID-19 outbreak, with its technology expert subgroup leading the work.
While a compliance, e-government and health expert subgroup is also now working on guidance for the processing of health data for research purposes in the coronavirus context.
These are the two areas the EDPB said it’s prioritizing at this time, putting planned guidance for teleworking tools and practices during the current crisis on ice for now.
“I strongly believe data protection and public health go hand in hand,” said EDPB chair, Andrea Jelinek, in a statement: “The EDPB will move swiftly to issue guidance on these topics within the shortest possible notice to help make sure that technology is used in a responsible way to support and hopefully win the battle against the corona pandemic.”
The Commission also wants a common approach for modelling and predicting the spread of COVID-19 — and says the toolbox will focus on developing this via the use of “anonymous and aggregated mobile location data” (such as it has been asking EU operators to provide).
“The aim is to analyse mobility patterns including the impact of confinement measures on the intensity of contacts, and hence the risks of contamination,” it writes. “This will be an important and proportionate input for tools modelling the spread of the virus, and provide insights for the development of strategies for opening up societies again.”
“The Commission already started the discussion with mobile phone operators on 23 March 2020 with the aim to cover all Member States. The data will be fully anonymised and transmitted to the Joint Research Centre for processing and modelling. It will not be shared with third parties and only be stored as long as the crisis is ongoing,” it adds.
The Commission’s push to coordinate coronavirus tech efforts across the EU has been welcomed by privacy and security experts.
Michael Veale, a backer of the decentralized protocol for COVID-19 contacts tracing, told us: “It’s great to see the Commission recommend decentralisation as a core principle for information systems tackling COVID-19. As our DP-3T protocol shows, creating a centralised database is a wholly unnecessary and removable part of bluetooth contact tracing.”
“We hope to be able to place code online for scrutiny and feedback next week — fully open source, of course,” Veale added. “We have already had great public feedback on the protocol which we are revising in light of that to make it even more private and secure. Centralised systems being developed in Europe, such as in Germany, have not published their protocols, let alone code — perhaps they are afraid of what people will find?”
While Lukasz Olejnik, an EU-based cybersecurity advisor and privacy researcher, also welcomed the Commission’s intervention, telling us: “A coordinated approach can certainly be easier to build trust. We should favor privacy-respecting approaches, and make it clear that we are in a crisis situation. Any such crisis system should be dismantled, and it looks like the recommendations recognize it. This is good.”

Cookie consent still a compliance trash-fire in latest watchdog peek

The latest confirmation of the online tracking industry’s continued flouting of EU privacy laws which — at least on paper — are supposed to protect citizens from consent-less digital surveillance comes via Ireland’s Data Protection Commission (DPC).
The watchdog did a sweep survey of around 40 popular websites last year — covering sectors including media and publishing; retail; restaurants and food ordering services; insurance; sport and leisure; and the public sector — and in a new report, published yesterday, it found almost all failing on a number of cookie and tracking compliance issues, with breaches ranging from minor to serious.
Twenty were graded ‘amber’ by the regulator, which signals a good response and approach to compliance but with at least one serious concern identified; twelve were graded ‘red’, based on very poor quality responses and a plethora of bad practices around cookie banners, setting multiple cookies without consent, badly designed cookies policies or privacy policies, and a lack of clarity about whether they understood the purposes of the ePrivacy legislation; while a further three got a borderline ‘amber to red’ grade.
Just two of the 38 controllers got a ‘green’ rating (substantial compliance, with any concerns straightforward and easily remedied); and one more got a borderline ‘green to amber’ grade.
EU law means that if a data controller is relying on consent as the legal basis for tracking a user the consent must be specific, informed and freely given. Additional court rulings last year have further finessed guidance around online tracking — clarifying pre-checked consent boxes aren’t valid, for example.
Yet the DPC still found examples of cookie banners that offer no actual choice at all. Such as those which serve a dummy banner with a cookie notice that users can only meaninglessly click ‘Got it!’. (‘Gotcha data’ more like.. )
In fact the watchdog writes that it found ‘implied’ consent being relied upon by around two-thirds of the controllers, based on the wording of their cookie banners (e.g. notices such as: “by continuing to browse this site you consent to the use of cookies”) — despite this no longer meeting the required legal standard.
“Some appeared to be drawing on older, but no longer extant, guidance published by the DPC that indicated consent could be obtained ‘by implication’, where such informational notices were put in place,” it writes, noting that current guidance on its website “does not make any reference to implied consent, but it also focuses more on user controls for cookies rather than on controller obligations”.
Another finding was that all but one website set cookies immediately on landing — with “many” of these found to have no legal justification for not asking first, as the DPC determined they fall outside available consent exemptions in the relevant regulations.
It also identified widespread abuse of the concept of ‘strictly necessary’ where the use of trackers are concerned. “Many controllers categorised the cookies deployed on their websites as having a ‘necessary’ or ‘strictly necessary’ function, where the stated function of the cookie appeared to meet neither of the two consent exemption criteria set down in the ePrivacy Regulations/ePrivacy Directive,” it writes in the report. “These included cookies used to establish chatbot sessions that were set prior to any request by the user to initiate a chatbot function. In some cases, it was noted that the chatbot function on the websites concerned did not work at all.
“It was clear that some controllers may either misunderstand the ‘strictly necessary’ criteria, or that their definitions of what is strictly necessary are rather more expansive than the definitions provided in Regulation 5(5),” it adds.
Another problem the report highlights is a lack of tools for users to vary or withdraw their consent choices, despite some of the reviewed sites using so called ‘consent management platforms’ (CMPs) sold by third-party vendors.
This chimes with a recent independent study of CMPs — which earlier this year found illegal practices to be widespread, with “dark patterns and implied consent… ubiquitous”, as the researchers put it.
“Badly designed — or potentially even deliberately deceptive — cookie banners and consent-management tools were also a feature on some sites,” the DPC writes in its report, detailing some examples of Quantcast’s CMP, which had been implemented in such a way as to make the interface “confusing and potentially deceptive” (such as unlabelled toggles and a ‘reject all’ button that had no effect).
Pre-checked boxes/sliders were also found to be common, with the DPC finding ten of the 38 controllers used them — despite ‘consent’ collected like that not actually being valid consent.
“In the case of most of the controllers, consent was also ‘bundled’ — in other words, it was not possible for users to control consent to the different purposes for which cookies were being used,” the DPC also writes. “This is not permitted, as has been clarified in the Planet49 judgment. Consent does not need to be given for each cookie, but rather for each purpose. Where a cookie has more than one purpose requiring consent, it must be obtained for all of those purposes separately.”
In another finding, the regulator came across instances of websites that had embedded tracking technologies, such as Facebook pixels, yet their operators did not list these in responses to the survey, listing only http browser cookies instead. The DPC suggests this indicates some controllers aren’t even aware of trackers baked into their own sites.
“It was not clear, therefore, whether some controllers were aware of some of the tracking elements deployed on their websites — this was particularly the case where small controllers had outsourced their website management and development to a third-party,” it writes.
The worst sector of its targeted sweep — in terms of “poor practices and, in particular, poor understanding of the ePrivacy Regulations and their purpose” — was the restaurants and food-ordering sector, per the report. (Though the finding is clearly based on a small sampling across multiple sectors.)
Despite encountering near blanket failure to actually comply with the law, the DPC, which also happens to be the lead regulator for much of big tech in Europe, has responded by issuing, er, further guidance.
This includes specifics such as: pre-checked consent boxes must be removed; cookie banners can’t be designed to ‘nudge’ users to accept, and a reject option must have equal prominence; and no non-necessary cookies may be set on landing. It also stipulates there must always be a way for users to withdraw consent — and doing so should be as easy as consenting.
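In implementation terms that guidance reduces to a per-purpose consent gate before any non-essential cookie is written. The sketch below is our own illustration (the purpose names and the plain-dict request/response shapes are hypothetical, not DPC-defined):

```python
# Illustrative purpose names; "analytics" and "ads" are placeholders,
# not categories defined by the DPC.
NON_ESSENTIAL_PURPOSES = {"analytics", "ads"}

def consented_purposes(request_cookies: dict[str, str]) -> set[str]:
    # Per the Planet49 ruling, consent is recorded per purpose, and an
    # absent record means no consent (pre-checked defaults are invalid).
    raw = request_cookies.get("consent", "")
    return {p for p in raw.split(",") if p in NON_ESSENTIAL_PURPOSES}

def set_cookie(response_cookies: dict[str, str], name: str, value: str,
               purpose: str, consents: set[str]) -> None:
    """Only strictly necessary cookies bypass the gate; everything else
    waits for an explicit opt-in to its specific purpose."""
    if purpose == "strictly_necessary" or purpose in consents:
        response_cookies[name] = value
    # Otherwise do nothing: no non-necessary cookie is set on landing.

def withdraw_consent(response_cookies: dict[str, str]) -> None:
    # Withdrawal must be as easy as consenting: one call clears all choices.
    response_cookies["consent"] = ""
```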
All stuff that’s been clear and increasingly so at least since the GDPR came into application in May 2018. Nonetheless the regulator is giving the website operators in question a further six months’ grace to get their houses in order — after which it has raised the prospect of actually enforcing the EU’s ePrivacy Directive and the General Data Protection Regulation.
“Where controllers fail to voluntarily make changes to their user interfaces and/or their processing, the DPC has enforcement options available under both the ePrivacy Regulations and the GDPR and will, where necessary, examine the most appropriate enforcement options in order to bring controllers into compliance with the law,” it warns.
The report is just the latest shot across the bows of the online tracking industry in Europe.
The UK’s Information Commissioner’s Office (ICO) has been issuing sternly worded blog posts for months. Its own report last summer found illegal profiling of Internet users by the programmatic ad industry to be rampant — also giving the industry six months to reform.
However the ICO still hasn’t done anything about the adtech industry’s legal black hole — leading privacy experts to denounce the lack of any “substantive action to end the largest data breach ever recorded in the UK”, as one put it at the start of this year.


Ireland’s DPC, meanwhile, has yet to issue decisions in multiple cross-border investigations into the data-mining business practices of tech giants including Facebook and Google, following scores of GDPR complaints — including several targeting their legal basis for processing people’s data.
A two-year review of the pan-EU regulation, set for May 2020, provides one hard deadline that might concentrate minds.

Google is now publishing coronavirus mobility reports, feeding off users’ location history

Google is giving the world a clearer glimpse of exactly how much it knows about people everywhere — using the coronavirus crisis as an opportunity to repackage its persistent tracking of where users go and what they do as a public good in the midst of a pandemic.
In a blog post today the tech giant announced the publication of what it’s branding ‘COVID-19 Community Mobility Reports’: an in-house analysis of the much more granular location data it maps and tracks to fuel its ad-targeting, product development and wider commercial strategy, repackaged to showcase aggregated changes in population movements around the world.
The coronavirus pandemic has generated a worldwide scramble for tools and data to inform government responses. In the EU, for example, the European Commission has been leaning on telcos to hand over anonymized and aggregated location data to model the spread of COVID-19.
Google’s data dump looks intended to dangle a similar idea of public policy utility while providing an eyeball-grabbing public snapshot of mobility shifts via data pulled off of its global user-base.
In terms of actual utility for policymakers, Google’s suggestions are pretty vague. The reports could help government and public health officials “understand changes in essential trips that can shape recommendations on business hours or inform delivery service offerings”, it writes.
“Similarly, persistent visits to transportation hubs might indicate the need to add additional buses or trains in order to allow people who need to travel room to spread out for social distancing,” it goes on. “Ultimately, understanding not only whether people are traveling, but also trends in destinations, can help officials design guidance to protect public health and essential needs of communities.”
The location data Google is making public is similarly fuzzy — to avoid inviting a privacy storm — with the company writing it’s using “the same world-class anonymization technology that we use in our products every day”, as it puts it.
“For these reports, we use differential privacy, which adds artificial noise to our datasets enabling high quality results without identifying any individual person,” Google writes. “The insights are created with aggregated, anonymized sets of data from users who have turned on the Location History setting, which is off by default.”
“In Google Maps, we use aggregated, anonymized data showing how busy certain types of places are—helping identify when a local business tends to be the most crowded. We have heard from public health officials that this same type of aggregated, anonymized data could be helpful as they make critical decisions to combat COVID-19,” it adds, tacitly linking an existing offering in Google Maps to a coronavirus-busting cause.
The reports consist of per country, or per state, downloads (with 131 countries covered initially), further broken down into regions/counties — with Google offering an analysis of how community mobility has changed vs a baseline average before COVID-19 arrived to change everything.
So, for example, a March 29 report for the whole of the US shows a 47% drop in retail and recreation activity vs the pre-COVID period; a 22% drop in grocery & pharmacy; and a 19% drop in visits to parks and beaches, per Google’s data.
While the same date report for California shows a considerably greater drop in the latter (down 38% compared to the regional baseline); and slightly bigger decreases in both retail and recreation activity (down 50%) and grocery & pharmacy (-24%).
Google says it’s using “aggregated, anonymized data to chart movement trends over time by geography, across different high-level categories of places such as retail and recreation, groceries and pharmacies, parks, transit stations, workplaces, and residential”. The trends are displayed over several weeks, with the most recent information representing 48-to-72 hours prior, it adds.
The company says it’s not publishing the “absolute number of visits” as a privacy step, adding: “To protect people’s privacy, no personally identifiable information, like an individual’s location, contacts or movement, is made available at any point.”
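Google hasn't published its exact pipeline, but the mechanics it describes (aggregate, suppress small counts, add calibrated noise, release only the relative change) might look roughly like the following sketch; the epsilon value, the suppression threshold and the per-person sensitivity of 1 are our guesses, not Google's parameters.

```python
import random

def dp_trend(todays_visits: int, baseline_visits: int,
             epsilon: float = 0.5, min_count: int = 100) -> int | None:
    """Sketch of a differentially private trend figure for one place
    category in one region: suppress small counts, add Laplace noise,
    and publish only the percentage change vs. the pre-COVID baseline."""
    if todays_visits < min_count:
        return None  # too few people to report safely
    # Laplace(0, sensitivity/epsilon) noise, drawn as a signed exponential;
    # sensitivity 1 assumes each person adds at most one visit to the count.
    noise = random.choice((-1, 1)) * random.expovariate(epsilon)
    noisy = todays_visits + noise
    return round(100 * (noisy - baseline_visits) / baseline_visits)

# e.g. dp_trend(5300, 10000) -> roughly -47, never the absolute visit count
```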
Google’s location mobility report for Italy, which remains the European country hardest hit by the virus, illustrates the extent of the change from lockdown measures applied to the population — with retail & recreation dropping 94% vs Google’s baseline; grocery & pharmacy down 85%; and a 90% drop in trips to parks and beaches.
The same report shows an 87% drop in activity at transit stations; a 63% drop in activity at workplaces; and an increase of almost a quarter (24%) of activity in residential locations — as many Italians stay at home, instead of commuting to work.
It’s a similar story in Spain — another country hard-hit by COVID-19. Though Google’s data for France suggests stay-at-home instructions may not be quite as keenly observed by its users there, with only an 18% increase in activity at residential locations and a 56% drop in activity at workplaces. Perhaps because the pandemic has so far had a less severe impact on France, although numbers of confirmed cases and deaths continue to rise across the region.
While policymakers have been scrambling for data and tools to inform their responses to COVID-19, privacy experts and civil liberties campaigners have rushed to voice concerns about the impacts of such data-fuelled efforts on individual rights, while also querying the wider utility of some of this tracking.

And yes, the disclaimer is very broad. I’d say, this is largely a PR move.
Apart from this, Google must be held accountable for its many other secondary data uses. And Google/Alphabet is far too powerful, which must be addressed at several levels, soon. https://t.co/oksJgQAPAY
— Wolfie Christl (@WolfieChristl) April 3, 2020

Contacts tracing is another area where apps are fast being touted as a potential solution to get the West out of economically crushing population lockdowns — opening up the possibility of people’s mobile devices becoming a tool to enforce lockdowns, as has happened in China.
“Large-scale collection of personal data can quickly lead to mass surveillance,” is the succinct warning of a trio of academics from London’s Imperial College’s Computational Privacy Group, who have compiled their privacy concerns vis-a-vis COVID-19 contacts tracing apps into a set of eight questions app developers should be asking.
Discussing Google’s release of mobile location data for a COVID-19 cause, the head of the group, Yves-Alexandre de Montjoye, gave a general thumbs up to the steps it’s taken to shrink privacy risks.
Although he also called for Google to provide more detail about the technical processes it’s using in order that external researchers can better assess the robustness of the claimed privacy protections. Such scrutiny is of pressing importance with so much coronavirus-related data grabbing going on right now, he argues.
“It is all aggregated, they normalize to a specific set of dates, they threshold when there are too few people and on top of this they add noise to make — according to them — the data differentially private. So from a pure anonymization perspective it’s good work,” de Montjoye told TechCrunch, discussing the technical side of Google’s release of location data. “Those are three of the big ‘levers’ that you can use to limit risk. And I think it’s well done.”
“But — especially in times like this when there’s a lot of people using data — I think what we would have liked is more details. There’s a lot of assumptions on thresholding, on how do you apply differential privacy, right?… What kind of assumptions are you making?” he added, querying how much noise Google is adding to the data, for example. “It would be good to have a bit more detail on how they applied [differential privacy]… Especially in times like this it is good to be… overly transparent.”
While Google’s mobility data release might appear to overlap in purpose with the Commission’s call for EU telco metadata for COVID-19 tracking, de Montjoye points out there are likely to be key differences based on the different data sources.
“It’s always a trade off between the two,” he says. “It’s basically telco data would probably be less fine-grained, because GPS is much more precise spatially and you might have more data points per person per day with GPS than what you get with mobile phone but on the other hand the carrier/telco data is much more representative — it’s not only smartphone, and it’s not only people who have latitude on, it’s everyone in the country, including non smartphone.”
There may be country specific questions that could be better addressed by working with a local carrier, he also suggested. (The Commission has said it’s intending to have one carrier per EU Member State providing anonymized and aggregated metadata.)
On the topical question of whether location data can ever be truly anonymized, de Montjoye — an expert in data reidentification — gave a “yes and no” response, arguing that original location data is “probably really, really hard to anonymize”.
“Can you process this data and make the aggregate results anonymous? Probably, probably, probably yes — it always depends. But then it also means that the original data exists… Then it’s mostly a question of the controls you have in place to ensure the process that leads to generating those aggregates does not contain privacy risks,” he added.
Perhaps a bigger question related to Google’s location data dump is around the issue of legal consent to be tracking people in the first place.
While the tech giant claims the data is based on opt-ins to location tracking the company was fined $57M by France’s data watchdog last year for a lack of transparency over how it uses people’s data.
Then, earlier this year, the Irish Data Protection Commission (DPC) — now the lead privacy regulator for Google in Europe — confirmed a formal probe of the company’s location tracking activity, following a 2018 complaint by EU consumers groups which accuses Google of using manipulative tactics in order to keep tracking web users’ locations for ad-targeting purposes.
“The issues raised within the concerns relate to the legality of Google’s processing of location data and the transparency surrounding that processing,” said the DPC in a statement in February, announcing the investigation.
The legal questions hanging over Google’s consent to track likely explains the repeat references in its blog post to people choosing to opt in and having the ability to clear their Location History via settings. (“Users who have Location History turned on can choose to turn the setting off at any time from their Google Account, and can always delete Location History data directly from their Timeline,” it writes in one example.)
In addition to offering up coronavirus mobility porn reports — which Google specifies it will continue to publish throughout the crisis — the company says it’s collaborating with “select epidemiologists working on COVID-19 with updates to an existing aggregate, anonymized dataset that can be used to better understand and forecast the pandemic”.
“Data of this type has helped researchers look into predicting epidemics, plan urban and transit infrastructure, and understand people’s mobility and responses to conflict and natural disasters,” it adds.

An EU coalition of techies is backing a “privacy-preserving” standard for COVID-19 contacts tracing

A European coalition of techies and scientists drawn from at least eight countries, and led by Germany’s Fraunhofer Heinrich Hertz Institute for telecoms (HHI), is working on contacts-tracing proximity technology for COVID-19 that’s designed to comply with the region’s strict privacy rules — officially unveiling the effort today.
China-style individual-level location-tracking of people by states via their smartphones even for a public health purpose is hard to imagine in Europe — which has a long history of legal protection for individual privacy. However the coronavirus pandemic is applying pressure to the region’s data protection model, as governments turn to data and mobile technologies to seek help with tracking the spread of the virus, supporting their public health response and mitigating wider social and economic impacts.
Scores of apps are popping up across Europe aimed at attacking coronavirus from different angles. European privacy not-for-profit, noyb, is keeping an updated list of approaches, both led by governments and private sector projects, to use personal data to combat SARS-CoV-2 — with examples so far including contacts tracing, lockdown or quarantine enforcement and COVID-19 self-assessment.
The efficacy of such apps is unclear — but the demand for tech and data to fuel such efforts is coming from all over the place.
In the UK the government has been quick to call in tech giants, including Google, Microsoft and Palantir, to help the National Health Service determine where resources need to be sent during the pandemic. While the European Commission has been leaning on regional telcos to hand over user location data to carry out coronavirus tracking — albeit in aggregated and anonymized form.
The newly unveiled Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT) project is a response to the coronavirus pandemic generating a huge spike in demand for citizens’ data. It is intended to offer not just another app — but what’s described as “a fully privacy-preserving approach” to COVID-19 contacts tracing.
The core idea is to leverage smartphone technology to help disrupt the next wave of infections by notifying individuals who have come into close contact with an infected person — via the proxy of their smartphones having been near enough to carry out a Bluetooth handshake. So far so standard. But the coalition behind the effort wants to steer developments in such a way that the EU response to COVID-19 doesn’t drift towards China-style state surveillance of citizens.
While, for the moment, strict quarantine measures remain in place across much of Europe there may be less imperative for governments to rip up the best practice rulebook to intrude on citizens’ privacy, given the majority of people are locked down at home. But the looming question is what happens when restrictions on daily life are lifted?
Contacts tracing — as a way to offer a chance for interventions that can break any new infection chains — is being touted as a key component of preventing a second wave of coronavirus infections by some, with examples such as Singapore’s TraceTogether app being eyed up by regional lawmakers.
Singapore does appear to have had some success in keeping a second wave of infections from turning into a major outbreak, via an aggressive testing and contacts-tracing regime. But what a small island city-state with a population of less than 6M can do vs a trading bloc of 27 different nations whose collective population exceeds 500M isn’t necessarily comparable.
Europe isn’t going to have a single coronavirus tracing app. It’s already got a patchwork. Hence the people behind PEPP-PT offering a set of “standards, technology, and services” to countries and developers to plug into to get a standardized COVID-19 contacts-tracing approach up and running across the bloc.
The other very European flavored piece here is privacy — and privacy law. “Enforcement of data protection, anonymization, GDPR [the EU’s General Data Protection Regulation] compliance, and security” are baked in, is the top-line claim.
“PEPP-PT was explicitly created to adhere to strong European privacy and data protection laws and principles,” the group writes in an online manifesto. “The idea is to make the technology available to as many countries, managers of infectious disease responses, and developers as quickly and as easily as possible.
“The technical mechanisms and standards provided by PEPP-PT fully protect privacy and leverage the possibilities and features of digital technology to maximize speed and real-time capability of any national pandemic response.”
Hans-Christian Boos, one of the project’s co-initiators — and the founder of an AI company called Arago — discussed the initiative with German newspaper Der Spiegel, telling it: “We collect no location data, no movement profiles, no contact information and no identifiable features of the end devices.”
The newspaper reports PEPP-PT’s approach means apps aligning to this standard would generate only temporary IDs — to avoid individuals being identified. Two or more smartphones running an app that uses the tech and has Bluetooth enabled when they come into proximity would exchange their respective IDs — saving them locally on the device in an encrypted form, according to the report.
Der Spiegel writes that should a user of the app subsequently be diagnosed with coronavirus their doctor would be able to ask them to transfer the contact list to a central server. The doctor would then be able to use the system to warn affected IDs they have had contact with a person who has since been diagnosed with the virus — meaning those at risk individuals could be proactively tested and/or self-isolate.
On its website PEPP-PT explains the approach thus:

Mode 1
If a user is not tested or has tested negative, the anonymous proximity history remains encrypted on the user’s phone and cannot be viewed or transmitted by anybody. At any point in time, only the proximity history that could be relevant for virus transmission is saved, and earlier history is continuously deleted.
Mode 2
If the user of phone A has been confirmed to be SARS-CoV-2 positive, the health authorities will contact user A and provide a TAN code to the user that ensures potential malware cannot inject incorrect infection information into the PEPP-PT system. The user uses this TAN code to voluntarily provide information to the national trust service that permits the notification of PEPP-PT apps recorded in the proximity history and hence potentially infected. Since this history contains anonymous identifiers, neither person can be aware of the other’s identity.
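Read as pseudocode, Mode 2 is essentially an authentication-and-consent gate on the upload path. Here is a rough sketch of that flow; the class shape, TAN format and polling interface are our assumptions based on the description above, not PEPP-PT's published API.

```python
import secrets

class TrustService:
    """Sketch of a national trust service in PEPP-PT's Mode 2."""

    def __init__(self) -> None:
        self.valid_tans: set[str] = set()
        self.reported_ids: set[bytes] = set()  # anonymous IDs to be notified

    def issue_tan(self) -> str:
        # Handed to the patient by the health authority after a positive
        # test; its single-use nature is what keeps malware from injecting
        # fake infection reports into the system.
        tan = secrets.token_urlsafe(16)
        self.valid_tans.add(tan)
        return tan

    def upload_history(self, tan: str, proximity_history: list[bytes]) -> bool:
        # Upload is voluntary and only accepted with a valid, unused TAN.
        if tan not in self.valid_tans:
            return False
        self.valid_tans.discard(tan)
        self.reported_ids.update(proximity_history)
        return True

    def exposure_check(self, my_past_ids: list[bytes]) -> bool:
        # Apps poll with their own past ephemeral IDs; a match means "you
        # were near someone since diagnosed", without either party learning
        # the other's identity, since only anonymous IDs are exchanged.
        return any(eid in self.reported_ids for eid in my_past_ids)
```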

Providing further detail of what it envisages as “Country-dependent trust service operation”, it writes: “The anonymous IDs contain encrypted mechanisms to identify the country of each app that uses PEPP-PT. Using that information, anonymous IDs are handled in a country-specific manner.”
On healthcare processing it suggests: “A process for how to inform and manage exposed contacts can be defined on a country by country basis.”
Among the other features of PEPP-PT’s mechanisms the group lists in its manifesto are:
Backend architecture and technology that can be deployed into local IT infrastructure and can handle hundreds of millions of devices and users per country instantly.
Managing the partner network of national initiatives and providing APIs for integration of PEPP-PT features and functionalities into national health processes (test, communication, …) and national system processes (health logistics, economy logistics, …) giving many local initiatives a local backbone architecture that enforces GDPR and ensures scalability.
Certification Service to test and approve local implementations to be using the PEPP-PT mechanisms as advertised and thus inheriting the privacy and security testing and approval PEPP-PT mechanisms offer.
Having a standardized approach that could be plugged into a variety of apps would allow for contacts tracing to work across borders — i.e. even if different apps are popular in different EU countries — an important consideration for the bloc, which has 27 Member States.
However there may be questions about the robustness of the privacy protection designed into the approach — if, for example, pseudonymized data is centralized on a server that doctors can access there could be a risk of it leaking and being re-identified. And identification of individual device holders would be legally risky.
Europe’s lead data regulator, the EDPS, recently made a point of tweeting to warn an MEP (and former EC digital commissioner) against the legality of applying Singapore-style Bluetooth-powered contacts tracing in the EU — writing: “Please be cautious comparing Singapore examples with European situation. Remember Singapore has a very specific legal regime on identification of device holder.”

Dear Mr. Commissioner, please be cautious comparing Singapoore examples with European situation. Remember Singapore has a very specific legal regime on identification of device holder.
— Wojtek Wiewiorowski (@W_Wiewiorowski) March 27, 2020

A spokesman for the EDPS told us it’s in contact with data protection agencies of the Member States involved in the PEPP-PT project to collect “relevant information”.
“The general principles presented by EDPB on 20 March, and by EDPS on 24 March are still relevant in that context,” the spokesman added — referring to guidance issued by the privacy regulators last month in which they encouraged anonymization and aggregation should Member States want to use mobile location data for monitoring, containing or mitigating the spread of COVID-19. At least in the first instance.
“When it is not possible to only process anonymous data, the ePrivacy Directive enables Member States to introduce legislative measures to safeguard public security (Art. 15),” the EDPB further noted.
“If measures allowing for the processing of non-anonymised location data are introduced, a Member State is obliged to put in place adequate safeguards, such as providing individuals of electronic communication services the right to a judicial remedy.”
We reached out to the HHI with questions about the PEPP-PT project and were referred to Boos — but at the time of writing had been unable to speak to him.
“The PEPP-PT system is being created by a multi-national European team,” the HHI writes in a press release about the effort. “It is an anonymous and privacy-preserving digital contact tracing approach, which is in full compliance with GDPR and can also be used when traveling between countries through an anonymous multi-country exchange mechanism. No personal data, no location, no Mac-Id of any user is stored or transmitted. PEPP-PT is designed to be incorporated in national corona mobile phone apps as a contact tracing functionality and allows for the integration into the processes of national health services. The solution is offered to be shared openly with any country, given the commitment to achieve interoperability so that the anonymous multi-country exchange mechanism remains functional.”
“PEPP-PT’s international team consists of more than 130 members working across more than seven European countries and includes scientists, technologists, and experts from well-known research institutions and companies,” it adds.
“The result of the team’s work will be owned by a non-profit organization so that the technology and standards are available to all. Our priorities are the well being of world citizens today and the development of tools to limit the impact of future pandemics — all while conforming to European norms and standards.”
PEPP-PT says its technology-focused efforts are being financed through donations, and that, per its website, it has adopted the WHO standards for such financing — to “avoid any external influence”.
Of course for the effort to be useful it relies on EU citizens voluntarily downloading one of the aligned contacts tracing apps — and carrying their smartphone everywhere they go, with Bluetooth enabled.
Without substantial penetration among regional smartphone users it’s questionable how much of an impact this initiative, or any contacts-tracing technology, could have. Although if such tech were able to break even some infection chains people might argue it’s not wasted effort.
Notably, there are signs Europeans are willing to contribute to a public healthcare cause by doing their bit digitally — such as a self-reporting COVID-19 tracking app which last week racked up 750,000 downloads in the UK in 24 hours.
But, at the same time, contacts tracing apps are facing scepticism over their ability to contribute to the fight against COVID-19. Not everyone carries a smartphone, nor knows how to download an app, for instance. There are plenty of people who would fall outside such a digital net.
Meanwhile, while there’s clearly been a big scramble across the region, at both government and grassroots level, to mobilize digital technology for a public health emergency cause there’s arguably greater imperative to direct effort and resources at scaling up coronavirus testing programs — an area where most European countries continue to lag.
Germany — where some of the key backers of the PEPP-PT are from — being the most notable exception.

Telco metadata grab is for modelling COVID-19 spread, not tracking citizens, says EC

As part of its response to the public health emergency triggered by the COVID-19 pandemic, the European Commission has been leaning on Europe’s telcos to share aggregate location data on their users.
“The Commission kick-started a discussion with mobile phone operators about the provision of aggregated and anonymised mobile phone location data,” it said today.
“The idea is to analyse mobility patterns including the impact of confinement measures on the intensity of contacts, and hence the risks of contamination. This would be an important — and proportionate — input for tools that are modelling the spread of the virus, and would also allow to assess the current measures adopted to contain the pandemic.”
“We want to work with one operator per Member State to have a representative sample,” it added. “Having one operator per Member State also means the aggregated and anonymised data could not be used to track individual citizens, that is also not at all the intention. Simply because not all have the same operator.
“The data will only be kept as long as the crisis is ongoing. We will of course ensure the respect of the ePrivacy Directive and the GDPR.”
Earlier this week Politico reported that commissioner Thierry Breton held a conference with carriers, including Deutsche Telekom and Orange, asking for them to share data to help predict the spread of the novel coronavirus.
Europe has become a secondary hub for the disease, with high rates of infection in countries including Italy and Spain — where there have been thousands of deaths apiece.
The European Union’s executive is understandably keen to bolster national efforts to combat the virus. Although it’s less clear exactly how aggregated mobile location data can help — especially as more EU citizens are confined to their homes under national quarantine orders. (While police patrols and CCTV offer an existing means of confirming whether or not people are generally moving around.)
Nonetheless, EU telcos have already been sharing aggregate data with national governments.
Such as Orange in France which is sharing “aggregated and anonymized” mobile phone geolocation data with Inserm, a local health-focused research institute — to enable them to “better anticipate and better manage the spread of the epidemic”, as a spokeswoman put it.
“The idea is simply to identify where the populations are concentrated and how they move before and after the confinement in order to be able to verify that the emergency services and the health system are as well armed as possible, where necessary,” she added. “For instance, at the time of confinement, more than 1 million people left the Paris region and at the same time the population of Ile de Ré increased by 30%.
“Other uses of this data are possible and we are currently in discussions with the State on all of these points. But, it must be clear, we are extremely vigilant with regards to concerns and respect for privacy. Moreover, we are in contact with the CNIL [France’s data protection watchdog]… to verify that all of these points are addressed.”
Germany’s Deutsche Telekom is also providing what a spokesperson dubbed “anonymized swarm data” to national health authorities to combat the coronavirus.
“European mobile operators are also to make such anonymized mass data available to the EU Commission at its request,” the spokesperson told us. “In fact, we will first provide the EU Commission with a description of data we have sent to German health authorities.”
It’s not entirely clear whether the Commission’s intention is to pool data from such existing local efforts — or whether it’s asking EU carriers for a different, universal data-set to be shared with it during the COVID-19 emergency.
When we asked about this it did not provide an answer. Although we understand discussions are ongoing with operators — and that it’s the Commission’s aim to work with one operator per Member State.
The Commission has said the metadata will be used for modelling the spread of the virus and for looking at mobility patterns to analyze and assess the impact of quarantine measures.
A spokesman emphasized that individual-level tracking of EU citizens is not on the cards.
“The Commission is in discussions with mobile operators’ associations about the provision of aggregated and anonymised mobile phone location data,” the spokesman for Breton told us.
“These data permit to analyse mobility patterns including the impact of confinement measures on the intensity of contacts and hence the risks of contamination. They are therefore an important and proportionate tool to feed modelling tools for the spread of the virus and also assess the current measures adopted to contain the coronavirus pandemic are effective.”
“These data do not enable tracking of individual users,” he added. “The Commission is in close contact with the European Data Protection Supervisor (EDPS) to ensure the respect of the ePrivacy Directive and the GDPR.”
At this point there’s no set date for the system to be up and running — although we understand the aim is to get data flowing asap. The intention is also to use datasets that go back to the start of the epidemic, with data-sharing ongoing until the pandemic is over — at which point we’re told the data will be deleted.
Breton hasn’t had to lean very hard on EU telcos to share data for a crisis cause.
Earlier this week Mats Granryd, director general of operator association the GSMA, tweeted that its members are “committed to working with the European Commission, national authorities and international groups to use data in the fight against COVID-19 crisis”.
Although he added an important qualifier: “while complying with European privacy standards”.

The @GSMA and our members are committed to working with the @EU_Commission, national authorities and international groups to use data in the fight against COVID-19 crisis, while complying with European privacy standards. https://t.co/f1hBYT5Lqx
— Mats Granryd (@MatsGranryd) March 24, 2020

Europe’s data protection framework means there are limits on how people’s personal data can be used — even during a public health emergency. And while the legal frameworks do quite rightly bake in flexibility for a pressing public purpose, like the COVID-19 pandemic, that flexibility does not mean individuals’ privacy rights automatically go out the window.
Individual tracking of mobile users for contact tracing — such as Israel’s government is doing — is unimaginable at the pan-EU level, at least unless the regional situation deteriorates drastically.
One privacy lawyer we spoke to last week suggested such a level of tracking and monitoring across Europe would be akin to a “last resort”. Though individual EU countries are choosing to respond differently to the crisis — such as, for example, Poland giving quarantined people a choice between regular police check-ups or uploading geotagged selfies to prove they’re not breaking lockdown.
Meanwhile the UK, a former EU Member State, has reportedly chosen to invite US surveillance-as-a-service tech firm Palantir to carry out resource tracking for its National Health Service during the coronavirus crisis.
Under pan-EU law (which the UK remains subject to, until the end of the Brexit transition period), the rule of thumb is that extraordinary data-sharing — such as the Commission asking telcos to share user location data during a pandemic — must be “temporary, necessary and proportionate”, as digital rights group Privacy International recently noted.
This explains why Breton’s request is for “anonymous and aggregated” location data. And why, in background comments to reporters, the claim is that any shared data sets will be deleted at the end of the pandemic.
Not every EU lawmaker appears entirely aware of all the legal limits, however.
Today the bloc’s lead privacy regulator, data protection supervisor (EDPS) Wojciech Wiewiórowski, could be seen tweeting cautionary advice at one former commissioner, Andrus Ansip (now an MEP) — after the latter publicly eyed up a Bluetooth-powered contact-tracing app deployed in Singapore.
“Please be cautious comparing Singapore examples with European situation. Remember Singapore has a very specific legal regime on identification of device holder,” wrote Wiewiórowski.
So it remains to be seen whether pressure will mount for more privacy-intrusive surveillance of EU citizens if regional rates of infection continue to grow.

Dear Mr. Commissioner, please be cautious comparing Singapoore examples with European situation. Remember Singapore has a very specific legal regime on identification of device holder.
— Wojtek Wiewiorowski (@W_Wiewiorowski) March 27, 2020

As we reported earlier this week, governments or EU institutions seeking to make use of mobile phone data to help with the response to the coronavirus must comply with the EU’s ePrivacy Directive — which covers the processing of mobile location data.
The ePrivacy Directive allows for Member States to restrict the scope of the rights and obligations related to location metadata privacy, and retain such data for a limited time — when such restriction constitutes “a necessary, appropriate and proportionate measure within a democratic society to safeguard national security (i.e. State security), defence, public security, and the prevention, investigation, detection and prosecution of criminal offences or of unauthorised use of the electronic communication system” — and a pandemic seems a clear example of a public security issue.
Thing is, the ePrivacy Directive is an old framework. The previous college of commissioners had intended to replace it alongside an update to the EU’s broader personal data protection framework — the General Data Protection Regulation (GDPR) — but failed to reach agreement.
This means there’s some potential mismatch. For example, the ePrivacy Directive does not include the same level of transparency requirements as the GDPR.
Perhaps understandably, then, since news of the Commission’s call for carrier metadata emerged, concerns have been raised about the scope and limits of the data sharing. Earlier this week, for example, MEP Sophie in’t Veld wrote to Breton asking for more information on the data grab — including querying exactly how the data will be anonymized.

Fighting the #coronavirus with technology: sure! But always with protection of our privacy. Read my letter to @ThierryBreton about @EU_Commission’s plans to call on telecoms to hand over data from people’s mobile phones in order to track&trace how the virus is spreading. pic.twitter.com/55kZo9bMhN
— Sophie in ‘t Veld (@SophieintVeld) March 25, 2020

The EDPS confirmed to us that the Commission consulted it on the proposed use of telco metadata.
A spokesman for the regulator pointed to a letter sent by Wiewiórowski to the Commission, following the latter’s request for guidance on monitoring the “spread” of COVID-19.
In the letter the EDPS impresses on the Commission the importance of “effective” data anonymization — in effect a demand that the technique used genuinely blocks re-identification of the data. (There are plenty of examples of ‘anonymized’ location data being shown by researchers to be trivially easy to re-identify, given how many individual tells such data typically contains, like home address and workplace address.)
“Effective anonymisation requires more than simply removing obvious identifiers such as phone numbers and IMEI numbers,” warns the EDPS, adding too that aggregated data “can provide an additional safeguard”.
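For illustration only, here is a minimal sketch in Python of the aggregation-plus-suppression approach the EDPS guidance gestures at. The data, grid size and threshold are entirely hypothetical (nothing here reflects what the Commission or the operators actually run): identifiers are dropped, locations are snapped to a coarse grid, and any bucket counting too few people is withheld, since small counts are exactly what makes “anonymized” location data easy to re-identify.

```python
from collections import Counter

# Purely illustrative records: (user_id, lat, lon, hour-of-day).
pings = [
    ("u1", 48.8566, 2.3522, 9),
    ("u2", 48.8570, 2.3510, 9),
    ("u3", 48.8571, 2.3525, 9),
    ("u4", 45.7640, 4.8357, 9),
]

GRID = 0.05   # coarse grid (~5km at these latitudes): discards precise position
K_MIN = 3     # suppress any bucket counting fewer than K_MIN distinct users

def cell(lat, lon):
    # Snap coordinates to a coarse grid cell.
    return (round(lat / GRID) * GRID, round(lon / GRID) * GRID)

buckets = Counter()
counted = set()
for uid, lat, lon, hour in pings:
    key = (cell(lat, lon), hour)
    if (uid, key) not in counted:   # count each user at most once per bucket
        counted.add((uid, key))
        buckets[key] += 1

# Small-count suppression: buckets below the threshold are withheld,
# because tiny counts are exactly what lets individuals be singled out.
aggregate = {k: n for k, n in buckets.items() if n >= K_MIN}
print(aggregate)   # only the Paris-area bucket (3 users) survives
```

A real pipeline would need to go further (noise addition, limits on repeated queries over time), but the sketch shows why the EDPS treats aggregation as a safeguard in its own right rather than an afterthought.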
We also asked the Commission for more details on how the data will be anonymized and the level of aggregation that would be used — but it told us it could not provide further information at this stage. 
So far we understand that the anonymization and aggregation process will be undertaken before data is transferred by operators to a Commission science and research advisory body, called the Joint Research Centre (JRC) — which will perform the data analytics and modelling.
The results — in the form of predictions of propagation and so on — will then be shared by the Commission with EU Member States authorities. The datasets feeding the models will be stored on secure JRC servers.
The EDPS is equally clear on the Commission’s commitments vis-a-vis securing the data.
“Information security obligations under Commission Decision 2017/464 still apply [to anonymized data], as do confidentiality obligations under the Staff Regulations for any Commission staff processing the information. Should the Commission rely on third parties to process the information, these third parties have to apply equivalent security measures and be bound by strict confidentiality obligations and prohibitions on further use as well,” writes Wiewiórowski.
“I would also like to stress the importance of applying adequate measures to ensure the secure transmission of data from the telecom providers. It would also be preferable to limit access to the data to authorised experts in spatial epidemiology, data protection and data science.”
Data retention — or rather the need for prompt destruction of data sets after the emergency is over — is another key piece of the guidance.
“I also welcome that the data obtained from mobile operators would be deleted as soon as the current emergency comes to an end,” writes Wiewiórowski. “It should be also clear that these special services are deployed because of this specific crisis and are of temporary character. The EDPS often stresses that such developments usually do not contain the possibility to step back when the emergency is gone. I would like to stress that such solution should be still recognised as extraordinary.”
It’s also interesting to note the EDPS is very clear on “full transparency” being a requirement, both of purpose and “procedure”. So we should expect more details to be released about how the data is being effectively rendered unidentifiable.
“Allow me to recall the importance of full transparency to the public on the purpose and procedure of the measures to be enacted,” writes Wiewiórowski. “I would also encourage you to keep your Data Protection Officer involved throughout the entire process to provide assurance that the data processed had indeed been effectively anonymised.”
The EDPS has also requested to see a copy of the data model. At the time of writing the spokesman told us it’s still waiting to receive that.
“The Commission should clearly define the dataset it wants to obtain and ensure transparency towards the public, to avoid any possible misunderstandings,” Wiewiórowski added in the letter.

UK researchers develop new low-cost, rapid COVID-19 test that could even be used at home

A new type of test developed by UK researchers from Brunel University London, Lancaster University and the University of Surrey can provide COVID-19 detection in as little as 30 minutes, using hand-held hardware that costs as little as £100 (around $120) with individual swab sample kits that cost around $5 per person. The test is based on existing technology that has been used in the Philippines for testing viral spread in chickens, but it’s been adapted by researchers for use with COVID-19 in humans, and the team is now working on ramping up mass production.
This test would obviously need approval by local health regulatory bodies like the FDA before it goes into active use in any specific geography, but the researchers behind the project are “confident it will respond well,” and say they could even make it available for use “within a few weeks.” The battery-operated hardware connects to a smartphone application to display diagnostic results, and works with nasal or throat swabs without requiring that samples be round-tripped to a lab.
There are other tests already approved for use that employ similar methods for on-site testing, including kits and machines from Cepheid and Mesa Biotech. These require expensive, dedicated table-top micro-labs, however, which are installed in healthcare facilities such as hospitals. The test from the UK scientists has the advantage of running on inexpensive hardware, with testing capabilities for up to six people at once, which can be deployed in doctor’s offices, hospitals and even potentially workplaces and homes for truly widespread, accessible testing.
Some frontline, rapid-results tests are already in use in the EU and China, but these are generally serological tests that rely on the presence of antibodies, whereas this group’s diagnostics are molecular, so they can detect the presence of the virus’s genetic material even before antibodies are present. This equipment could even potentially be used to detect the virus in asymptomatic individuals who are self-isolating at home, the group notes, which would go a long way to scoping out the portion of the population that’s not currently a priority for other testing methods, but that could provide valuable insight into the true extent of silent, community-based transmission of the coronavirus.

EU parliament moves to email voting during COVID-19

The European Parliament will temporarily allow electronic voting by email as MEPs are forced to work remotely during the coronavirus crisis.
A spokeswoman for the parliament confirmed today that an “alternative electronic voting procedure” has been agreed for the plenary session that will take place on March 26.
“This voting procedure is temporary and valid until 31 July,” she added.
Earlier this month the parliament moved the majority of its staff to teleworking. MEPs have since switched to full remote work as confirmed cases of COVID-19 have continued to climb across Europe. Though how to handle voting remotely has generated some debate in and of itself.

Working in Brussels, without being in Brussels. European Parliament goes digital for #IMCO coordinators, @RenewEurope presidency & #EPbureau meetings. Next week voting remotely. Stay and work safe! pic.twitter.com/0weG9O7vow
— Dita Charanzová (@charanzova) March 20, 2020

“Based on public health grounds, the President decided to have a temporary derogation to enable the vote to take place by an alternative electronic voting procedure, with adequate safeguards to ensure that Members’ votes are individual, personal and free, in line with the provisions of the Electoral act and the Members’ Statute,” the EU parliament spokeswoman said today, when we asked for the latest on its process for voting during the COVID-19 pandemic.
“The current precautionary measures adopted by the European Parliament to contain the spread of COVID-19 don’t affect legislative priorities. Core activities are reduced, but maintained precisely to ensure legislative, budgetary, scrutiny functions,” she added.
The spokeswoman confirmed votes will take place via email — explaining the process as follows: “Members would receive electronically, via email to their official email address, a ballot form, which would be returned, completed, from their email address to the relevant Parliament’s functional mailbox.”
“The results of all votes conducted under this temporary derogation would be recorded in the minutes of the sitting concerned,” she further noted.
Last week, ahead of the parliament confirming the alternative voting process, German Pirate Party MEP Patrick Breyer raised concerns about the security of e-voting — arguing that what was then just a proposal, for MEPs to fill in and sign a voting list, scan it and send it via email to the administration, risked leaving votes vulnerable to manipulation and hacking.
“Such a manipulation-prone procedure risks undermining public trust in the integrity of Parliament votes that can have serious consequences,” he wrote. “The procedure comes with a risk of manipulation by hackers. Usually MEPs can send emails using several devices, and their staff can access their mailbox, too. Also it is easy to come by a MEP’s signature and scan it… This procedure also comes with the risk that personally elected and highly paid MEPs could knowingly allow others to vote on their behalf.”
“eVoting via the public Internet is inherently unsafe and prone to hacking, thus risks to erode public trust in European democracy,” he added. “I am sure powerful groups such as the Russian intelligence agency have a great interest in manipulating tight votes. eVoting makes manipulation at a large scale possible.”
Breyer suggested a number of alternatives — such as parallel postal voting, to provide a paper back-up of MEPs’ e-votes; presence voting in EP offices in Member States (though clearly that would require parliamentarians to risk exposing themselves and others to the virus by traveling to offices in person); and a system such as “Video Ident”, which he noted is already used in Germany, in which the MEP identifies themselves in front of a webcam in a live video stream and then shows their voting sheet to the camera.
He also suggested MEPs might not notice manipulations even if voting results were published — as looks to be the case with the parliament’s agreed procedure.
It’s not clear whether the parliament is applying a further back-up step — such as requiring a paper ballot to be mailed in parallel to an email vote. The parliament spokeswoman declined to comment in any detail when we asked. “All measures have been put in place to ensure the vote runs smoothly,” she said, adding: “We never comment on security measures.”
Reached for his response, Breyer told us: “My concerns definitely stand.”
However security expert J. Alex Halderman, a professor of Computer Science and Engineering at the University of Michigan — who testified at a US Senate hearing into Russian interference in the 2016 US election — said e-voting where the results are public is relatively low risk, provided MEPs check their votes have been recorded properly.
“Voting isn’t such a hard problem when it’s not a secret ballot, and I take it that how each MEP votes is normally public. As long as that’s the case, I don’t think this is a major security issue,” he told TechCrunch. “MEPs should be encouraged to check that their votes are correctly recorded in the minutes and to raise alarms if there’s any discrepancy, but that’s probably enough of a safeguard during these challenging times.”  
“All of this is in stark contrast to election for public office, which are conducted with a secret ballot and in which there’s normally no possibility for voters to verify that their votes are correctly recorded,” he added. 
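To make Halderman’s suggested safeguard concrete, here is a minimal sketch, with hypothetical data and field names (the parliament has not published its ballot format), of the kind of check an MEP’s office could run against the published minutes:

```python
# All names and data here are hypothetical: a sketch of the check,
# not the parliament's actual procedure or data format.

ballot_sent = {                 # what the MEP emailed in
    "amendment_12": "for",
    "amendment_13": "against",
    "final_text": "for",
}

published_minutes = {           # what the plenary minutes record
    "amendment_12": "for",
    "amendment_13": "for",      # mismatch: error or manipulation
    "final_text": "for",
}

def vote_discrepancies(sent, recorded):
    """Return (item, cast, recorded) tuples wherever the minutes disagree."""
    return [
        (item, vote, recorded.get(item))
        for item, vote in sent.items()
        if recorded.get(item) != vote
    ]

for item, cast, rec in vote_discrepancies(ballot_sent, published_minutes):
    print(f"ALERT {item}: cast {cast!r} but minutes record {rec!r}")
```

The point is simply that a public roll-call makes each vote independently checkable by the person who cast it, which is the property Halderman says distinguishes this from secret-ballot elections.
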
NationBuilder probe closed
In further news related to the EU parliament, the European Data Protection Supervisor (EDPS) announced today that it has closed an investigation into the parliament’s use of US-based political campaigning company NationBuilder last year.

EDPS closes investigation into European Parliament’s use of a US-based political campaigning company to process #personaldata as part of its 2019 election activities. Read the press release: https://t.co/y18NzWFzYK pic.twitter.com/rloQNyxSd2
— EDPS (@EU_EDPS) March 23, 2020

Back in November the EU’s lead data regulator revealed it had issued its first ever sanction of an EU institution by taking enforcement action over the parliament’s contract with NationBuilder for a public engagement campaign to promote voting in the spring election.
During the campaign the website collected personal data from more than 329,000 people, which was processed on behalf of the Parliament by NationBuilder. The EDPS found the parliament had contravened regulations governing how EU institutions can use personal data related to the selection and approval of sub-processors used by NationBuilder.
The contract has been described as coming to “a natural end” in July 2019, and the EDPS said today that all data collected has been transferred to the European Parliament’s servers.
No further sanctions have been implemented, though the regulator said it will continue to monitor the parliament’s activities closely.
“Data protection plays a fundamental role in ensuring electoral integrity and must therefore be treated as a priority in the planning of any election campaign,” said the EDPS, Wojciech Wiewiórowski, in a statement today. “With this in mind, the EDPS will continue to monitor the Parliament’s activities closely, in particular those relating to the 2024 EU parliamentary elections. Nevertheless, I am confident that the improved cooperation and understanding that now exists between the EDPS and the Parliament will help the Parliament to learn from its mistakes and make more informed decisions on data protection in the future, ensuring that the interests of all those living in the EU are adequately protected when their personal data is processed.”
At the time of writing the parliament had not responded to a request for comment.

Facebook and Disney to downgrade streaming quality in Europe due to COVID-19

Facebook is temporarily downgrading the quality of video streaming in Europe on its social platforms Facebook and Instagram in response to a call for action from the European Commission, per Reuters.
Disney has also said it will work to shrink bandwidth used by its streaming service, Disney+, which is due to begin launching in Europe from tomorrow.
Last week Netflix, YouTube and Amazon said they would switch to SD streaming by default in the region.
The EU’s executive has expressed concern about the load on Internet infrastructure during the coronavirus crisis as scores of citizens log on from home to work or try to keep themselves entertained during the COVID-19 lockdown.
Telcos in the region have reported significant increases in traffic as EU Member States have called for or instructed citizens to stay at home during the public health emergency.
Collectively, streaming platforms account for a major chunk of global Internet traffic. Online video accounted for more than 60% of total downstream traffic volume, per a 2019 Sandvine report — while in another report last month the firm said YouTube alone accounted for a quarter of all mobile traffic.
“To help alleviate any potential network congestion, we will temporarily reduce bit rates for videos on Facebook and Instagram in Europe,” a Facebook spokesman also told Reuters yesterday.
We’ve reached out to Facebook with questions.
Per Reuters the measure will remain in place for as long as there are concerns about the region’s Internet infrastructure.
In related news Disney is pressing ahead with a planned launch of its new video streaming service, Disney+, in Europe starting from tomorrow but Bloomberg reports it will also take measures to reduce bandwidth utilization by at least 25% in European markets.
“We will be monitoring Internet congestion and working closely with Internet service providers to further reduce bitrates as necessary to ensure they are not overwhelmed by consumer demand,” said Kevin Mayer, chairman of Disney’s direct-to-consumer division, in a statement.
Last week the company said it would postpone the launch of Disney+ in India after the biggest local attraction — the Indian Premier League cricket tournament — was rescheduled due to the coronavirus outbreak.

YouTube goes SD streaming by default in Europe due to COVID-19

YouTube has switched to standard definition streaming by default in Europe.
We asked the company if it planned to do this yesterday — today a spokeswoman confirmed the step. The move was reported earlier by Reuters.
It’s a temporary measure in response to calls by the European Commission for streaming platforms to help ease demand on Internet infrastructure during the coronavirus crisis.
Users can still manually adjust video quality but defaults remain a powerful tool to influence overall outcomes.
A YouTube spokesperson confirmed the switch, sending us this statement:
People are coming to YouTube to find authoritative news, learning content and make connections during these uncertain times. While we have seen only a few usage peaks, we have measures in place to automatically adjust our system to use less network capacity. We are in ongoing conversations with the regulators (including Ofcom), governments and network operators all over Europe, and are making a commitment to temporarily default all traffic in the UK and the EU to Standard Definition. We will continue our work to minimize stress on the system, while also delivering a good user experience.
Yesterday Netflix announced it would default to SD streaming in the region for 30 days for the same reason.
In recent days the EU’s internet market commissioner, Thierry Breton, has held discussions with platform executives to urge them to help reduce the load on Internet infrastructure as scores of Europeans are encouraged or required to stay at home as part of quarantine measures.
The Commission is concerned about the impact on online education and remote work if there’s a major spike in demand for digital entertainment services — and is pushing for collective action from platforms and users to manage increased load on Internet infrastructure.
Breton met with Google CEO Sundar Pichai and YouTube CEO Susan Wojcicki to press the case for lowering the quality of video streams during the coronavirus crisis.
Today he welcomed YouTube’s move. “Millions of Europeans are adapting to social distancing measures thanks to digital platforms, helping them to telework, e-learn and entertain themselves. I warmly welcome the initiative that Google has taken to preserve the smooth functioning of the Internet during the COVID19 crisis by having YouTube switch all EU traffic to Standard Definition by default. I appreciate the strong responsibility that Mr Pichai and Mrs Wojcicki have demonstrated. We will closely follow the evolution of the situation together,” said Breton in a statement. 
Google’s spokeswoman told us it hasn’t seen much change in regional traffic peaks so far but said there have been changes in usage patterns from more people being at home — with usage expanding across additional hours and some lower usage peaks. (The company routinely makes traffic data available in the Google Traffic and Disruptions Transparency Report.)
YouTube, along with other major social platforms, has faced scrutiny over the risks of their tools being used to spread coronavirus-related misinformation.
Although, in the case of Google, the company appears to have taken a proactive stance in suppressing bogus content and surfacing authoritative sources of health information. YouTube’s spokeswoman noted the homepage directs users to the World Health Organization for info on COVID-19 or other locally relevant authoritative organizations, for instance.
She also noted the company is donating ad inventory to governments and NGOs to use for education and information — pointing to a blog post earlier this month in which Pichai discussed some of the measures it’s taking to shield users from misinformation that could be harmful to public health.
YouTube will also be rolling out a campaign across Europe that encourages people to follow health authorities’ guidance and stay home, she added.
Google’s response to the COVID-19 pandemic looks to be far swifter and more aggressive than its approach to other types of content that can also be harmful to people’s health — such as anti-vaccination content, which YouTube only moved to demonetize last year.

Netflix and other streaming platforms urged to switch to SD during COVID-19 crisis

The European Commission is putting pressure on Netflix and other streaming platforms to switch to standard definition during periods of peak demand as the coronavirus crisis puts unprecedented load on Internet infrastructure.
Across the European Union — a region with around 445M citizens —  it’s likely many millions of office workers will switch to teleworking, as countries impose quarantine measures and instruct people to work from home wherever possible. The European Commission itself, which employs around 32,000 people, moved all non-critical staff to remote work at the start of this week.
Yesterday Thierry Breton, the commissioner for the EU’s internal market who is also a former CEO of France Telecom, tweeted that he’d spoken to Netflix CEO Reed Hastings to make the case for standard definition streaming by default during the COVID-19 public health crisis.

Important phone conversation with @ReedHastings, CEO of @Netflix
To beat #COVID19, we #StayAtHome
Teleworking & streaming help a lot but infrastructures might be in strain.
To secure Internet access for all, let’s #SwitchToStandard definition when HD is not necessary.
— Thierry Breton (@ThierryBreton) March 18, 2020

A spokesman for Breton told us the Commission is inviting streaming platforms to follow the lead of telecom providers and consider adapting the throughput of video streaming, such as by temporarily moving to SD rather than HD streaming — at least for the most critical working hours.
In Breton’s call with Netflix a number of potential measures were discussed — per the spokesman — including an automatic switch to standard definition during peaks of Internet activity in impacted geographies, which the Commission says represents a “responsible option” that will help secure telecommunications infrastructures while “keeping offering the best service to users and consumers, with no disruption”.
Many content and application providers are already applying this sort of flexibility measure, it added.
The Commission is also asking telecoms operators that provide Internet services to take steps to prevent and mitigate the impacts of impending network congestion, by inviting them to make use of “possibilities” offered by EU net neutrality rules.
Earlier this week Vodafone reported a 50% surge in Internet traffic in some European countries as scores of people logged on from home. “Covid-19 is already having a significant impact on our services and placing a greater demand on our network,” the company said in a statement, adding that: “We should expect this trend of data growth to continue.”
At the same time the Commission is calling for Internet users in the region to make responsible use of online recreational activities — such as by opting for settings that reduce data consumption, including using Wi-Fi (rather than mobile data) and choosing lower resolution for content whenever possible.
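Some rough, back-of-the-envelope arithmetic, using illustrative ballpark bitrates rather than any platform’s actual figures, shows why the default matters at this scale:

```python
# Assumption-laden arithmetic on why an SD default matters.
# Bitrates are illustrative ballpark figures, not platform specs.
HD_MBPS = 5.0          # rough 1080p stream
SD_MBPS = 1.5          # rough 480p stream
viewers = 10_000_000   # hypothetical concurrent streams in a region

saving_gbps = viewers * (HD_MBPS - SD_MBPS) / 1000
print(f"Defaulting {viewers:,} streams to SD frees ~{saving_gbps:,.0f} Gbps")
# -> ~35,000 Gbps of headroom under these assumptions
```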
It wants joint action from all stakeholders to ease the pressure on infrastructure and facilitate remote working and online education at a time of region-wide public health crisis.
In a statement Breton added: “Europe and the whole world are facing an unprecedented situation. Governments have taken measures to reduce social interactions to contain the spread of Covid-19, and to encourage remote working and online education. Streaming platforms, telecom operators and users, we all have a joint responsibility to take steps to ensure the smooth functioning of the Internet during the battle against the virus propagation.”
It’s not clear exactly what wiggle room the Commission envisages in EU net neutrality rules for prioritizing certain types of traffic over others during the coronavirus crisis.
We asked Breton’s spokesman for clarification on this point but he responded by emphasizing that the Commission is hoping to head off such a scenario, telling us: “By calling for all stakeholders’ responsibility (platforms/telcos/users) we are proactively ensuring smooth functioning of the Internet so that the question of prioritization does not need to be asked.”
We also reached out to Netflix to ask what steps it’s taking to help manage bandwidth demand in the region. At the time of writing the company had not responded.
Breton’s spokesman said the commissioner is due to hold a follow-up call with Hastings in the coming days.
More broadly, the Commission is working on setting up a reporting mechanism to ensure regular monitoring of the Internet traffic situation in each Member State in order to be able to respond swiftly to capacity issues, liaising on this with the Body of European Regulators for Electronic Communications and with the support of national regulatory authorities.
We’ve also contacted YouTube for comment on the Commission’s call for proactive action from streaming platforms to help manage increased demand on Internet infrastructure.

Amazon limiting shipments to certain types of products due to COVID-19 pandemic

Amazon’s ‘Fulfillment by Amazon’ (FBA) program, through which it provides warehousing and shipment services for products from third-party sellers, as well as its larger vendor shipment services, are being partially suspended through April 5 due to the global coronavirus outbreak. This suspension will allow Amazon to prioritize shipment of “household staples, medical supplies and other high-demand products,” the company said in a support document on its website, and confirmed to TechCrunch in an email.
The commerce giant notes in the email that it is “seeing increased online shopping” in the wake of the COVID-19 pandemic, and will focus on prioritizing the reception, restocking and delivery of the essential products that are most in demand from this new uptick in activity from Amazon shoppers. For all other products, Amazon says it has disabled the creation of new inbound shipments for FBA members, as well as for retail vendors (its business-to-business selling platform).
Any existing shipments created prior to today are still going to be processed at Amazon’s fulfillment centers as usual, the company says, but otherwise new orders won’t be processed until such time as Amazon alerts sellers that things are back to normal. The tentative date for the program to resume in full is April 5 as mentioned, but it sounds like Amazon could extend these limitations depending on how the pandemic progresses.
Amazon is prioritizing goods in the baby, health and household, beauty and personal care, grocery, industrial and scientific, and pet supplies categories, the company says in a support document explaining the new limitations. Products outside of these categories that are already in Amazon’s fulfillment centers, or that were on their way to those facilities ahead of March 17, can still be sold through the platform.
This also doesn’t block sellers from selling their products on the platform and fulfilling the shipments themselves, the help document notes. That might be the only option available to sellers and retailers who want to continue offering their non-prioritized goods to Amazon buyers through at least the next few weeks.
An Amazon spokesperson provided TechCrunch the following statement regarding the suspension:
We are seeing increased online shopping and as a result some products such as household staples and medical supplies are out of stock. With this in mind, we are temporarily prioritizing household staples, medical supplies, and other high-demand products coming into our fulfillment centers so we can more quickly receive, restock, and deliver these products to customers. We understand this is a change for our selling partners and appreciate their understanding as we temporarily prioritize these products for customers.
Amazon has taken other steps to address the increased demand it’s seeing on the platform as more and more countries and cities implement isolation and quarantine measures, including shelter-in-place orders. The company announced on Monday that it would be looking to hire as many as 100,000 additional warehouse and delivery employees to address the increase.

Startups developing tech to combat COVID-19 urged to apply for fast-track EU funding

The European Commission put out a call Friday for startups and small businesses which are developing technologies that could help combat the COVID-19 outbreak to apply for fast-track EU funding.
The push is related to a €164M pot of money that’s being made available for R&D via the European Innovation Council (EIC) — a European Union funding vehicle which supports the commercialization of high risk, high impact technologies.

We are calling for startups and small businesses with technologies and innovations in: treating, testing, monitoring or other aspects of #COVID19
Apply for fast-track EU funding – deadline 17:00 CET on 18 March ↓
— European Commission (@EU_Commission) March 15, 2020

Per the Commission, the funding does not have any particular thematic priorities attached to it but it said today it will look to “fast track the awarding of EIC grants and blended finance (combining grant and equity investment) to Coronavirus relevant innovations, as well as to facilitate access to other funding and investment sources”.
The deadline for this call for applications to the EIC Accelerator is 17:00 CET on March 18.
The Commission has a page of tips for applicants here.
It notes EIC funding is already supporting a number of startups and SMEs with “Coronavirus relevant innovations” from funding awarded in previous rounds — pointing to the EpiShuttle project for specialised isolation units; the m-TAP project for filtration technology to remove viral material; and the MBENT project to track human mobility during epidemics.
The EIC is itself funded under the EU’s Horizon 2020 research framework program.
Back in February the Commission said it expected to sign off on a significant increase for the EIC budget as of this month — to support “game-changing, market-creating innovation and deep-tech SMEs to scale-up”, as it works towards launching the next seven-year round of the Horizon Europe program, in 2021.
It also said there would be a one-off EIC Accelerator call for ‘green deal’ startups and SMEs in a May 2020 cut-off round, to align with its push to make the bloc carbon neutral by 2050.

European Commission goes teleworking by default over COVID-19

The European Commission is switching all staff in “non-critical functions” to remote working from next Monday in response to the Covid-19 pandemic.
In an email sent to staff today the Commission writes that president Ursula von der Leyen has activated a business continuity plan that requires all but those in “critical functions” to telework from next Monday.
Previously the EU’s executive body had been implementing limited teleworking for high-risk employees — such as those returning from Italy, for 14 days after their return.
It’s not clear how many Commission staff are defined as carrying out “critical” functions — but it seems likely that thousands will be working from home or remotely next week. In all, the Commission employs around 32,000 people.
Per the email, staff deemed to be carrying out a critical function will already have been informed they are expected to continue to be present at work, with “modalities” and “guidelines” to explain working arrangements slated to follow “soon”.
Earlier this week the European Parliament also told staff to prep for mass remote working Monday. Initially vulnerable staff with pre-existing health conditions had been told to telework to limit their risk of being exposed to the novel coronavirus. 
The Commission had also already been instructing staff to switch to videoconferencing for missions, meetings and committees where possible.
Belgium, where the Commission is mainly based, has been reporting rising numbers of cases of Covid-19. Today its federal health authority reported 85 new cases — bringing the total number of confirmed cases in the country to 399.
The Commission itself reported the first cases (two) of Covid-19 among staff earlier this month. 
In recent weeks a number of politicians in countries across Europe have also been confirmed as having contracted the novel coronavirus.

Europe’s Deliveroo and Glovo switch on contactless delivery during COVID-19 pandemic

European on-demand food delivery startups are starting to add ‘contactless’ deliveries in response to the SARS-CoV-2 pandemic.
Earlier this month U.S. startups including Postmates and Instacart added an option for customers to choose not to have their meal handed to them by the courier — and instead have it dropped off at their door without the need for human contact. In China similar services began adding contactless deliveries last month.
Today UK-based Deliveroo said it will launch a no-contact drop-off option early next week.
“At Deliveroo we are taking action to keep our customers, riders and restaurants safe. To make our delivery service even safer we are introducing a no-contact, drop-off service,” it told us.
Currently, Deliveroo customers not wanting to expose themselves — or, indeed, the courier delivering their food — to unnecessary human contact can add a note to an order to request a no-contact drop off.
According to the latest World Health Organization (WHO) situation report on Covid-19 the UK had 373 confirmed cases and six deaths as of yesterday.
Deliveroo told us it has plans in place to respond should a rider be diagnosed with the virus or be told to isolate themselves by a medical authority. This includes a multi-million pound fund that it said will be used to support affected riders by paying in excess of the equivalent of UK statutory sick pay for 14 days.
Other steps it’s taking include ordering hand sanitizer for riders and setting up a dedicated support team in each market to answer any queries or questions riders have.
“Riders’ safety is a priority and we want to make sure those who are impacted by this unprecedented virus and cannot work are supported. Deliveroo will provide support for riders who are diagnosed with the virus or who are told to isolate themselves by a medical authority,” the company added.
In yesterday’s budget the UK chancellor set out measures intended to support gig workers during the Covid-19 crisis, announcing a £500M boost to the benefits system and steps to make it quicker and easier for self-employed people to access social security — a move unions were quick to characterize as a sticking plaster atop the systemic problem of precarious gig work.
“It is unfortunate that it takes a global health pandemic for this government to recognise that precarious workers need some form of sick pay,” said the Independent Workers Union of Great Britain’s general secretary, Jason Moyer-Lee, in a statement. “Rather than half-baked proposals on benefits, the government should be ensuring that all workers have properly enforced worker rights, including full sick pay from day one. The unaffordability of becoming ill or injured is something precarious workers face on a daily basis, and it needs a permanent solution.”
Over in the European Union, Spain’s Glovo also told us it’s implementing new measures globally from today — including recommending ‘no contact’ deliveries and removing the requirement for couriers to obtain a mobile signature from the customer.
Italy, the European country most severely affected by the novel coronavirus outbreak thus far, is one of Glovo’s biggest markets.
This month the government announced a nationwide lockdown to try to contain the spread of the virus.  Per the WHO, Italy had 10,149 confirmed cases of Covid-19 as of yesterday morning and 631 people had died.
Yesterday the Italian prime minister announced a further tightening of quarantine rules, closing all bars and restaurants to the general public but allowing for home delivery — leaving the door open for meal delivery startups to continue operating. Food stores in Italy have also not been shut.
A report by UBS today looking at the impact of Covid-19 on online food delivery across multiple markets suggests there is a general uptick in meal delivery demand in most markets, including Italy. Though the investment bank cautions this could change — highlighting the risk of supply disruption and the consumer safety concerns related to eating pre-prepared meals during a health crisis, as it says has been the case in China (with grocery delivery growing as meal delivery orders slumped).   
It’s not clear how Glovo’s on-demand business is weathering the coronavirus storm. A spokesman told us it’s unable to share any data regarding the rise/fall of orders in Italy during the quarantine.
It’s worth noting the startup has never been solely focused on meal delivery — with the app supporting requests for anything (practicable) to be delivered by bike courier in the urban centers where it operates.
Groceries have also been a growing area of focus for Glovo, which has been building out a network of dark supermarkets to support fast delivery of convenience shop groceries.
When we asked it about support for riders, Glovo told us it will be covering courier incomes for 2-4 weeks during the Covid-19 outbreak if they report being sick.
“The health and wellbeing of our couriers and customers is our top priority and we think these practices will help give some peace-of-mind to our fleet, while also decreasing the interaction and contact between both parties,” said the spokesman.
We also asked Uber Eats — which operates a meal delivery service in multiple markets across Europe — what measures it’s taking to respond to the Covid-19 pandemic.
A spokeswoman told us it’s currently working to inform customers of an existing ability to communicate with delivery people via the app to give them specific guidance on where and how they’d like deliveries made — such as leaving a note to say ‘leave at door’ or ‘leave in lobby/reception’.
“Safety is essential to Uber and it’s at the heart of everything we do. In response to the ongoing spread of coronavirus, we’ve reminded Uber users that they can request deliveries be left on their doorsteps,” Uber Eats said in a statement.
“We’re simultaneously at work on new product features to make this process even smoother, which we hope will be helpful to everyone on the platform in the coming weeks,” it added.
Uber also confirmed it will compensate drivers and delivery people who have to go into quarantine for up to 14 days — provided they are able to show documentation confirming the diagnosis, or if they have to self-isolate or are removed from the app at the direction of a public health authority.
The company added that it has a dedicated global team, led by SVP Andrew Macdonald and advised by a consulting public health expert and public health organizations, working on its Covid-19 response.

Tree planting search engine Ecosia is getting a visibility boost in Chrome

Ecosia, a not-for-profit search engine that uses ad-generated revenue to fund tree planting, is set to get a visibility boost in Chrome. A change Google is making to its Chromium engine will see it added as a default search engine choice in up to 47 markets with the version 81 release of Google’s web browser.
Ecosia will soon be included on Chrome’s default search engine list in several major markets, including the UK, US, France and Germany — alongside the likes of Google Search, Bing, DuckDuckGo and Yahoo!
It’s the first time the not-for-profit search engine will have appeared in Chrome’s default search engine choice list. And while users of Chrome can always navigate directly to Ecosia to search, or download an extension to search via it directly in the browser’s URL bar, those active steps require prior knowledge of the product. Whereas being listed as a default option in Chrome means Ecosia will be put in front of people who aren’t yet familiar with it.
The Berlin-based search engine said Google Chrome’s selection of default search engines is based on search engine popularity rankings in different markets.
The full list of markets where it will be offered as a choice in the v81 release is: Argentina, Austria, Australia, Belgium, Bahrain, Brunei, Bolivia, Brazil, Canada, Switzerland, Chile, Colombia, Costa Rica, Germany, Denmark, Ecuador, Spain, Faroe Islands, France, Guatemala, Croatia, Hungary, Ireland, Iceland, Italy, Lebanon, Liechtenstein, Luxembourg, Mexico, Nicaragua, New Zealand, Oman, Panama, Peru, Philippines, Puerto Rico, Portugal, Paraguay, Sweden, El Salvador, Taiwan, United States, United Kingdom, Uruguay, Venezuela and Vietnam.
The shift comes after what Ecosia said was a record year of usage growth for its search engine — with monthly active users rising from 8 million to 15 million during 2019.
The company dedicates 80% of advertising profits to funding reforestation projects in biodiversity hotspots around the world, and says it has planted 86 million+ trees since it was founded back in 2009 — a total it’s expecting will grow as a result of Google’s decision to include Ecosia as a default choice.
Commenting in a statement Ecosia CEO Christian Kroll said: “Ecosia’s growth over the last year shows just how invested users are in the fight against the climate crisis. Everywhere, people are weighing up the changes they can make to reduce their carbon footprint, including adopting technologies such as Ecosia. Our addition to Chrome will now make it even easier for users to help reforest delicate, at-risk and often devastated ecosystems, and to fight climate change, just by using the internet.”
“It’s also good news for user choice and fairness,” he added, pointing to recent research which he said indicates that providing a choice of search engines has the potential to increase the collective mobile market share of Google alternatives by 300-800%.
“It’s important that there are independent players in the market that don’t just exist for profit. We put our profits into tree planting and we are also focused on privacy, so users can have a positive impact on the environment while having greater control over their personal information.”
The chromium update will also see rival search engines DuckDuckGo and Yahoo added as a default in more markets when the v81 release of Chrome is pushed out.
These are the latest revisions to Chrome’s search engine defaults. But in a major shift this time last year Google quietly expanded the choice of search product in a way that gave the biggest single boost to the visibility of pro-privacy search engine rival DuckDuckGo.
It said then that the changes derived from “new usage statistics” from “recently collected data.”
But the company’s business had been facing rising attention over privacy and competition concerns.
As, indeed, Google still is…
In Europe, meanwhile, antitrust enforcement around how Google operates its Android smartphone platform has already forced the tech giant to offer a choice screen that surfaces alternative search engines and web browsers alongside its own products.
In 2018 the EU’s competition regulator concluded Google had violated antitrust rules by requiring Android device makers to pre-install its own search and browser apps. It was fined $5BN and ordered to cease the infringement — initially responding with a choice screen prompt that appeared to select products based on market share, before announcing it would move to a ‘pay-to-play’ auction model to assign the non-Google slots on the screen starting early this year.
Rival search engines including Ecosia, DuckDuckGo and French pro-privacy search engine Qwant have been highly critical of this pay-to-play switch — hitting out at the limited slots and sealed bid auction structure Google devised. And DuckDuckGo has remained critical despite winning a universal slot on the screen early this year.
Ecosia chose to boycott the auction entirely — telling the BBC in January that the auction is at odds with the spirit of the Commission ruling.
“Internet users deserve a free choice over which search engine they use and the response of Google with this auction is an affront to our right to a free, open and federated internet. Why is Google able to pick and choose who gets default status on Android?” Kroll said then.
Asked for current Android usage metrics, the company told us Ecosia’s total daily active users on Google’s platform have grown from 489,422 this time last year to 1,245,777 now — a 155% year-over-year rise in DAUs.
Though it remains to be seen whether Google’s shift to a paid auction model which Ecosia is not participating in — given doing so would require the not-for-profit to spend money paying Google to appear as a choice rather than ploughing those revenues into planting more trees — will put a dampener on Ecosia’s Android growth this year.
A spokesman for Ecosia pointed us to statcounter figures that estimate it took a 0.22% market share of mobile search in Europe between February 2019 and February 2020.
On desktop the search engine takes a higher regional share, per statcounter, accounting for 0.5% of desktop searches.
Overall, across mobile and desktop, Google’s share of the European search market over the same period is 93.83% vs 0.33% for Ecosia.

European lawmakers propose a ‘right to repair’ for mobiles and laptops

The European Commission has set out a plan to move towards a ‘right to repair’ for electronic devices, such as mobile phones, tablets and laptops.
More generally it wants to restrict single-use products, tackle “premature obsolescence” and ban the destruction of unsold durable goods — in order to make sustainable products the norm.
The proposals are part of a circular economy action plan that’s intended to deliver on a Commission pledge to transition the bloc to carbon neutrality by 2050.
By extending the lifespan of products, via measures which target design and production to encourage repair, reuse and recycling, the policy push aims to reduce resource use and shrink the environmental impact of buying and selling stuff.
The Commission also wants to arm EU consumers with reliable information about reparability and durability — to empower them to make greener product choices.
“Today, our economy is still mostly linear, with only 12% of secondary materials and resources being brought back into the economy,” said EVP Frans Timmermans in a statement. “Many products break down too easily, cannot be reused, repaired or recycled, or are made for single use only. There is a huge potential to be exploited both for businesses and consumers. With today’s plan we launch action to transform the way products are made and empower consumers to make sustainable choices for their own benefit and that of the environment.”
The Commission said electronics and ICT will be a priority area for implementing a right to repair, via planned expansion of the Ecodesign Directive — which currently sets energy efficiency standards for devices such as washing machines.
Its action plan proposes setting up a ‘Circular Electronics Initiative’ to promote longer product lifetimes through reusability and reparability as well as “upgradeability” of components and software to avoid premature obsolescence.
The Commission is also planning new regulatory measures on chargers for mobile phones and similar devices, while an EU-wide take-back scheme to return or sell back old mobile phones, tablets and chargers is being considered.
Back in January the EU Parliament voted overwhelmingly for tougher action to reduce e-waste, calling for the Commission to come up with beefed up rules by this summer.
In recent years MEPs have also pushed for the Ecodesign Directive to be expanded to include repairability.
The Commission proposals also include a new regulatory framework for batteries and vehicles — including measures to improve the collection and recycling rates of batteries and ensure the recovery of valuable materials. Plus there’s a proposal to revise the rules on end-of-life vehicles to improve recycling efficiency and waste oil treatment. 
It’s also planning measures to set targets to shrink the amount of packaging being produced, with the aim of making all packaging reusable or recyclable in an economically viable way by 2030.
Mandatory requirements on recycled content for plastics used in areas such as packaging, construction materials and vehicles are another proposal.
Other priority areas for promoting circularity and reducing high consumption rates include construction, textiles and food.
The Commission expects the circular economy to have net positive benefits in terms of GDP growth and job creation across the bloc — suggesting measures to boost sustainability will increase the EU’s GDP by an additional 0.5% by 2030 and create around 700,000 new jobs.
The backing of MEPs in the European Parliament and EU Member States will be necessary if the Commission proposals are to make it into pan-EU law.
Should they do so, Dutch social enterprise Fairphone shows a glimpse of what’s coming down the repairable pipe in future…

European Parliament moves to majority teleworking in response to COVID-19

The European Parliament is instructing managers to prepare for all but a minority of staff to work remotely for 70% of the week as of next Monday — dialling up its response to Covid-19, the disease caused by the SARS-CoV-2 virus.
Full-time remote working may follow, it has also said.
In an email sent today European Parliament staff have been instructed that teleworking will be introduced on March 16 — for “all colleagues whose physical presence in Parliament is not absolutely indispensable”.
“At this stage it will be 70% teleworking. That means presence in the office will be limited to 1½ days a week,” the email continues, adding: “Later on teleworking could be increased to 100% of working time dependent on the further developments.”
Earlier this week the parliament instructed “vulnerable” staff with pre-existing health conditions to telework to shrink their risk of exposure to the virus. The move followed the European Commission confirming its first cases of the disease.
The European Parliament is based in three locations in the EU: its administrative offices are in Luxembourg, while plenary sessions take place in Brussels, Belgium, and Strasbourg, France.
We understand the teleworking shift applies across all locations.
The World Health Organization’s most recent Covid-19 situation report, for 10am CET March 10, lists a total of 1,402 confirmed cases in France; 239 in Belgium; and four in Luxembourg.
In recent days members of parliaments in several EU countries have also been reported to have contracted the virus — including politicians in Italy, Spain and the UK.
Another EU institution, the European Commission — which is primarily based in Brussels — is also allowing some staff to work remotely in response to the threat posed by the coronavirus, including staff with a pre-existing health condition and those who have recently traveled to regions it defines as high risk. It has also urged staff to take precautions, such as regular hand washing and social distancing.
It seems likely the Commission will follow the parliament’s lead and expand remote working further as confirmed cases of Covid-19 continue to increase. Local press in Belgium has reported 47 new cases today, including seven in Brussels.
The Belgian Federal Public Health service is also recommending businesses offer employees the option to work from home, postpone meetings and/or make use of video conferencing and avoid gathering large numbers of people in one place.

European startups applaud Commission plan to rethink stock options

Startups have welcomed proposals from the European Commission aimed at cutting red tape and shrinking cross-border barriers for small businesses as part of a new EU industrial strategy plan with a twin focus on digital and green transitions unveiled today.
Among the package of measures being proposed by the European Union’s executive body is a call for Member States to sign up to a “Startup Nations Standard” — which would aim to promote best practices to support startups and scale-ups, such as one-stop shops, favourable employee stock-options arrangements and visa processing to reduce cross-border friction for entrepreneurs starting and growing businesses in the bloc.
In recent years European startups have organized to campaign for reforms to rules around stock options — with 30 CEOs from homegrown startups including TransferWise, GetYourGuide, Revolut, Delivery Hero, TypeForm and Supercell (to name a few) signing an open letter to policymakers two years ago calling for legislators to fix what they dubbed “the patchy, inconsistent and often punitive rules that govern employee ownership”.
The effort appears to have made a dent in the EU policymaking universe. Both regulatory and practical barriers are now in the Commission’s sights, with it proposing a joint task force to work on sanding down business bumps.
It also today reiterated a perennial warning against Member States ‘goldplating’ pan-EU rules by adding their own conditions on top. 
“The Single Market is our proudest achievement — yet 70% of businesses report that they do not find it is sufficiently integrated,” said EVP Margrethe Vestager, laying out an industrial strategy package with a big focus on smaller companies, including those with big ambitions to scale. “Across Europe barriers are still preventing startups from growing into European businesses and our report is identifying those barriers and we also then address them in the Single Market enforcement action plan.”
In a letter responding to the Commission’s plan for an EU Startup Nations Standard, 14 European startup founders (listed below) and a number of European startup associations welcomed the proposal — urging EU Member States to get behind it.
“By making it easier to start a business, expand across borders and attract top talent, this new Standard will help to level the playing field with powerful global tech hubs in the US and China,” the tech CEOs and startup advocacy organizations wrote. “We applaud the EU’s ambition of seeking a pan-European solution to address the needs of startups. We are also encouraged that the Commission has specifically called out the treatment of stock options as one of the key issues.
“As highlighted by more than 500 leading European entrepreneurs who joined the Not Optional campaign, the inability of startups to use stock options effectively to attract and retain talent is a major bottleneck to the growth of startups in Europe.”
“The Commission’s proposals will be a major step towards unleashing the full entrepreneurial firepower of Europe – but only if they are adopted and implemented by all Member States,” they added. “That’s why we are today calling on all Member States to sign up to the EU Startup Nations Standard, including a commitment to increase the attractiveness of employee ownership schemes.”
Here’s the list of startup CEOs signing the letter:
Christian Reber, CEO & Founder, Pitch
Felix Van de Maele, CEO & Founder, Collibra
Jean-Charles Samuelian, CEO & Founder, Alan
Johannes Reck, CEO & Co-Founder of GetYourGuide
Johannes Schildt, CEO & Founder, KRY / LIVI
John Collison, Co-Founder and President, Stripe
Juan de Antonio, CEO & Co-Founder of Cabify
Markus Villig, CEO & Founder, Bolt
Miki Kuusi, CEO & Co-Founder, Wolt
Nicolas Brusson, CEO & Co-Founder, BlaBlaCar
Peter Mühlmann, CEO & Founder, Trustpilot
Sebastian Siemiatkowski, CEO & Founder, Klarna
Taavet Hinrikus, Founder & Chairman, TransferWise
Tamaz Georgadze, CEO & Founder, Raisin
Also welcoming the stock option proposals, Martin Mignot, a partner at Index Ventures — another backer of the Not Optional campaign — said: “The biggest challenge facing startups today is recruiting and retaining top talent. That’s why we are pleased that the EU Startup Nations Standard addresses stock options, making it easier for startups to allow employees to share in their success.”
“We are pleased to see the European Commission recognise the contribution that startups make to Europe and its citizens, and pursue a pan-European policy initiative to support this growing sector,” he added in a statement. “For too long, the focus in Europe has been on taming US tech giants. Today’s announcement confirms Europe’s ambition to create its own champions.”
EU startup advocacy association Allied for Startups is another signatory to the letter. And in an additional response it broadly welcomed the Commission’s SME strategy — while pressing for a strong focus on startups as independent actors in the implementation of the strategy, rather than as a sub-category of SMEs.
“The talent-focus of the proposed Startup Nation Standard has significant potential for startup ecosystems, since access to talent is still a bottleneck for startups in Europe,” said Benedikt Blomeyer, the lobby group’s director of EU policy, in a statement.
“Through the SME strategy, we are pleased to see concrete measures such as better startup visas and improved employee stock options on the table. Allied for Startups has repeatedly called for both measures over the past years.”
“Unlike SMEs, startups can only succeed at scale,” he added. “They are global from day one and aim to grow big and fast. Specific measures that work for SMEs, for instance a regulatory exemption, might not work for startups. On the contrary, it could incentivise a startup to stay small. To account for these differences, the European Commission should consider a startup strategy that focuses on scalability, complementing the SME strategy.”
Allied for Startups also welcomed the Commission’s general goal of reducing the administrative and regulatory burden for startups within the Single Market — saying the consideration of regulatory sandboxes as part of the support toolkit is “potentially valuable for startups, who build innovative products and services”.
The Commission is also looking to support SMEs to go public in Europe — announcing an SME Initial Public Offerings (IPOs) Fund under the InvestEU SME window which will aim to make IPOs more accessible to local small businesses.
Another push aims to reduce late payments for SMEs, with the Commission noting today that one in four regional small businesses go bankrupt as a result of not being paid on time.
It also said it wants to stimulate investment in women-led companies and funds to “empower female entrepreneurship”. (Notably all the signatories on the aforementioned letter are male.)
Industrial to digital transformation
More broadly, the Commission’s new industrial strategy intends to underpin core EU policy priorities for the next five years — which include a focus on driving the digitization of legacy industries and simultaneous retooling to transition to a carbon neutral economy under the pan-EU Green Deal.
“Europe has the strongest industry in the world. Our companies — big and small — provide us with jobs, prosperity and strategic autonomy. Managing the green and digital transitions and avoiding external dependencies in a new geopolitical context requires radical change — and it needs to start now,” said Thierry Breton, commissioner for internal market, in a statement today.
During a press briefing Vestager emphasized the Commission’s view that new and more inclusive working methods are needed to deliver on the planned transformation.
“The twin digital and green transitions are posing both opportunities and challenges for the industry in general and for small and medium sized businesses in particular. Business models are changing. All across Europe companies are confronted with consumers’ decreasing trust and increasing demand for transparency,” she said. “The world around us is also changing… Today global competition, trade disputes, the return of protectionism — I think that creates a shared feeling of uncertainty.
“This is challenging Europe’s industry as it sets out to meet the twin transitions. Fortunately, the European industry is coming to this reality from a strong position. Our new strategy is building on Europe’s strength and on our values.”
On the proposals to shift to “inclusive” working methods, Vestager said the aim is “to work much closer with small and large companies, Member States, researchers, academia, social partners and other EU institutions”.
To that end, the Commission is proposing a new industrial forum to enable closer working with such stakeholders that it aims to have set up by September.
It also wants to work on identifying a number of industrial ecosystems — which Vestager said “may require a bespoke approach”, in terms of policy support.
At the press briefing Breton suggested there could be between 15 and 20 such industrial ecosystems.
“We don’t want to leave anybody out,” he said. “This is an industrial strategy but we all know that underpinning this there are large corporations but many, many, many small ones too and we have to bring these on board. If we don’t have the big and the small we won’t have a dynamic, innovative, living sector.”
“A lot of companies do this among themselves already, locally in fact, but we do hope it is going to be done in an even more horizontal manner across the EU and within the internal market,” he added.
Skills are another focus of the SME strategy — with the Commission saying it will expand Digital Innovation Hubs to every region in Europe to help small businesses plug in cutting-edge tech, with expanded options for volunteering and training on digital technologies.
Helping SMEs find the skills they need to shift to sustainable ways of working is another stated aim.
The Commission has published a Q&A on the industrial strategy here.
Last month the executive body also set out proposals aimed at encouraging industrial data sharing and reuse, along with proposals for regulating high risk uses of artificial intelligence.

A further major piece of EU digital policy due later this year is the forthcoming Digital Services Act — which is slated to address platform liabilities and responsibilities, including towards smaller businesses that rely on them as a marketplace.

XYZ Reality secures £5M to bring a hologram headset to the construction industry

Augmented Reality technology did not, it turned out, light the touch paper on a booming new industry. What we got instead was a few cute applications on smartphones and devices like Microsoft’s HoloLens, which has seen pretty limited success.
Where AR has shown it may have a future is in industry, allowing workers to consult plans whilst they assemble something, for instance.
A new UK startup hopes to nudge that future along with a radical new technology which, although it resembles the HoloLens, is in fact a highly accurate helmet-mounted display that enables construction workers to place beams or bricks in exactly the right locations, saving significant time otherwise lost to mistakes.
To further boost its efforts, XYZ Reality has closed a £5 million Series A funding round, led by Amadeus Capital Partners and Hoxton Ventures, with participation from Adara Ventures and J Coffey Construction. The company will use the funding to build out its AR cloud and software platform, and grow its team to serve the EU market and expand into the US and Asia.
The idea behind it is highly innovative. A dedicated helmet with an attached visor projects a highly accurate hologram — based on laser positioning — in front of the wearer’s face, allowing them to place objects precisely according to plans projected in front of their eyes.
The company claims its HoloSite headset is the “world’s first engineering-grade Augmented Reality device,” allowing construction workers to view Building Information Models on-site to 5-millimeter accuracy.
The problem it’s solving is an age-old one. In today’s construction industry buildings are designed in 3D and then converted into 2D drawings. But tradespeople are asked to interpret those 2D drawings and turn them into 3D buildings within construction “tolerances”. This process creates inefficiencies that leave up to 80% of the construction “out-of-tolerance”. It’s estimated that 7-11% of project costs are wasted this way and, of course, in mega-projects like huge bridges this amounts to an average of over $100 million (at the low end of that range, a $1.5 billion project would waste $105 million).
Founder, CEO and builder David Mitchell, who has spent his career in the construction industry, says: “Works are currently validated after the fact through laser scanning. But 80% of the time the construction fails to meet acceptable tolerances. With HoloSite, we can prevent errors happening in the first place.”
Mitchell came up with the idea of eliminating 2D designs after the 2008 recession devastated the industry.
I tried out the headset for myself and found that, starting from scratch, I could complete a basic assembly of bricks according to the plans projected in front of my eyes with a reasonable degree of accuracy.

XYZ says it was possible to build a bathroom in two hours using the headset, versus a day without it.
The hope is that as this technology improves, any tradesperson will be able to work on a construction site with less need for training in 2D plans, but still with a high degree of accuracy.
The project is not without risk. Daqri, which built enterprise-grade AR headsets for construction, shuttered its HQ last year. Earlier, Osterhout Design Group unloaded its AR glasses patents after acquisition talks with Magic Leap, Facebook and others stalled. Meta, an AR headset startup that raised $73 million from VCs, including Tencent, also sold its assets earlier this year after the company ran out of cash.
But Amadeus is bullish. Nick Kingsbury, partner at Amadeus Capital Partners, said: “Construction is a sector that’s ripe for radical innovation. This technology has the potential to revolutionize how the construction industry sets out and validates its work, reducing costs and the chance of project slippage from mistakes.”

Adtech giant Criteo is being investigated by France’s data watchdog

Adtech giant Criteo is under investigation by the French data protection watchdog, the CNIL, following a complaint filed by privacy rights campaign group Privacy International.
“I can confirm that the CNIL has opened up an investigation into Criteo. We are in the trial phase, so we can’t communicate at this stage,” a CNIL spokesperson told us.
Privacy International has been campaigning for more than a year for European data protection agencies to investigate several adtech players and data brokers involved in programmatic advertising.
Yesterday it said the French regulator has finally opened a probe of Criteo.
“CNIL’s confirmation that they are investigating Criteo is important and we warmly welcome it,” it said in the statement. “The AdTech ecosystem is based on vast privacy infringements, exploiting people’s data on a daily basis. Whether it’s through deceptive consent banners or by infesting mental health websites, these companies enable a surveillance environment where all your moves online are tracked to profile and target you, with little space to contest.”
We’ve reached out to Criteo for comment.
Back in November 2018, a few months after Europe’s updated data protection framework (GDPR) came into force, Privacy International filed complaints against a number of companies operating in the space — including Criteo.
A subsequent investigation by the rights group last year also found adtech trackers on mental health websites sharing sensitive user data for ad targeting purposes.
Last May Ireland’s Data Protection Commission also opened a formal investigation into Quantcast, following Privacy International’s complaint and a swathe of separate GDPR complaints targeting the real-time bidding (RTB) process involved in programmatic advertising.
The crux of the RTB complaints is that the process is inherently insecure, since it entails the leaky broadcasting of people’s personal data with no way to control it once it’s out there, in conflict with the GDPR’s requirement that personal data be processed securely.
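To make the “leaky broadcasting” point concrete, here is a minimal sketch of the kind of payload a single ad slot can fan out to dozens or hundreds of bidders. It is loosely modeled on the OpenRTB bid request format used in programmatic auctions: the field names below exist in that spec, but the structure is heavily simplified and every value is invented for illustration.

```python
import json

# Illustrative, pared-down bid request (loosely following OpenRTB naming).
# In real RTB this is broadcast to many demand-side platforms at once,
# any of which can log it -- the complainants' core security concern.
bid_request = {
    "id": "auction-123",  # one-off auction ID
    "imp": [{"id": "1", "banner": {"w": 300, "h": 250}}],  # the ad slot for sale
    "site": {"page": "https://example.com/mental-health-advice"},  # browsing context, can be sensitive
    "device": {
        "ua": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",  # fingerprinting input
        "ip": "192.0.2.44",  # approximate location
        "geo": {"lat": 48.85, "lon": 2.35},
    },
    "user": {"id": "cookie-synced-user-id"},  # persistent cross-site identifier
}

print(json.dumps(bid_request, indent=2))
```

Once that payload is broadcast, nothing technical stops any recipient from logging it, which is the nub of the insecurity argument.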
In June the UK’s Information Commissioner’s Office also fired a warning shot at the behavioral ad industry — saying it had “systemic concerns” about the compliance of RTB. Yet the regulator has so far failed to take any enforcement action, despite issuing another blog post last December in which it discussed the “industry problem” with lawfulness — preferring instead to encourage adtech to reform itself. (Relevant: Google announcing it will phase out support for third party cookies.)
In its 2018 adtech complaint, Privacy International called for France’s CNIL, the UK’s ICO and Ireland’s DPC to investigate Criteo, Quantcast and a third company called Tapad — arguing their processing of Internet users’ data (including special category personal data) has no lawful basis, neither fulfilling GDPR’s requirements for consent nor legitimate interest.
Privacy International’s complaint argued that additional GDPR principles — including transparency, fairness, purpose limitation, data minimisation, accuracy, and integrity and confidentiality — were also not being fulfilled; and called for further investigation to ascertain compliance with other legal rights and safeguards GDPR gives Europeans over their personal data, including the right to information; access; rights related to automated decision-making and profiling; data protection by design and default; and data protection impact assessments.
In specific complaints against Criteo, Privacy International raised concerns about its Shopper Graph tool, which is used to predict real-time product interest and which Criteo has touted as having data on nearly three-quarters of the world’s shoppers, fed by cross-device online tracking of people’s digital activity that is not limited to cookies and gets supplemented by offline data. It also flagged Criteo’s Dynamic Retargeting tool, which enables tracked shoppers to be retargeted with behaviorally targeted ads via Criteo sharing data with scores of ‘partners’, including publishers and ad exchanges involved in the RTB process of auctioning online ad slots.
At the time of the original complaint Privacy International said Criteo told it it was relying on consent to track individuals, obtained via its advertising (and publisher) partners — who, per GDPR, would need to obtain informed, specific and freely given consent up-front before dropping any tracking cookies (or other tracer technologies). Criteo also claimed a legal base known as legitimate interest, saying it believed this was a valid ground for complying with its contractual obligations toward its clients and partners.
However, legitimate interest requires a balancing test to be carried out to consider impacts on the individual’s interests, as part of a wider assessment process to determine whether the legal base can be applied.
It’s Privacy International’s contention that legitimate interest is not a valid legal basis in this case.
Now the CNIL will look in detail at Criteo’s data processing to determine whether or not there are GDPR violations. If it finds breaches of the law, the regulation allows for monetary penalties to be issued that can scale as high as 4% of a company’s global turnover. EU data protection agencies can also order changes to how data is processed.
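For a sense of the scale of the sanctions at stake: the GDPR’s upper tier of fines (Article 83(5)) is capped at the higher of €20 million or 4% of total worldwide annual turnover for the preceding financial year. A quick sketch of that cap, using a made-up turnover figure rather than Criteo’s actual accounts:

```python
def gdpr_max_fine(worldwide_annual_turnover_eur: float) -> float:
    """Upper-tier GDPR fine cap (Article 83(5)): the higher of
    EUR 20M or 4% of total worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * worldwide_annual_turnover_eur)

# Hypothetical turnover of EUR 2.3B (illustrative only, not Criteo's figure):
print(f"EUR {gdpr_max_fine(2_300_000_000):,.0f}")  # EUR 92,000,000
```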
Commenting on the CNIL’s investigation of Criteo’s business, Dr Lukasz Olejnik, an independent privacy researcher and consultant whose research on the privacy implications of RTB predates all the aforementioned complaints, told us: “I am not surprised with the investigation as in Real-Time Bidding transparency and consent were always very problematic and at best non-obvious. I don’t know how retrospective consent could be reconciled.”
“It is rather beyond doubt that a thorough privacy impact assessment (data protection impact assessment) had to be conducted for many aspects of such systems or its uses, so this particular angle of the complaint should not be controversial,” Olejnik added.
“My long views on Real-Time Bidding is that it was not a technology created with particular focus on security and privacy. As a transformative technology in the long-term it also contributed to broader issues like the dissemination of harmful content like political disinformation.”
The CNIL probe certainly adds to Criteo’s business woes, with the company reporting declining revenue last year and predicting more of the same for 2020. More aggressive moves by browser makers to bake in tracker blocking are clearly having an impact on its core business.
In a recent interview with Digiday, CEO Megan Clarken talked about wanting to broaden the range of services Criteo offers advertisers and reduce its reliance on traditional retargeting.
Criteo has also been investing heavily in artificial intelligence in recent years — ploughing in $23M in 2018 to open an AI lab in Paris.

Airbnb and three other p2p rental platforms agree to share limited pan-EU data

The European Commission announced yesterday it’s reached a data-sharing agreement with vacation rental platforms Airbnb, Booking.com, Expedia Group and Tripadvisor — trumpeting the arrangement as a “landmark agreement” which will allow the EU’s statistical office to publish data on short-stay accommodations offered via these platforms across the bloc.
It said it wants to encourage “balanced” development of peer-to-peer rentals, noting concerns have been raised across the EU that such platforms are putting unsustainable pressure on local communities.
It expects Eurostat will be able to publish the first statistics in the second half of this year.
“Tourism is a key economic activity in Europe. Short-term accommodation rentals offer convenient solutions for tourists and new sources of revenue for people. At the same time, there are concerns about impact on local communities,” said Thierry Breton, the EU commissioner responsible for the internal market, in a statement.
“For the first time we are gaining reliable data that will inform our ongoing discussions with cities across Europe on how to address this new reality in a balanced manner. The Commission will continue to support the great opportunities of the collaborative economy, while helping local communities address the challenges posed by these rapid changes.”
Per the Commission’s press release, data that will be shared with Eurostat on an ongoing basis includes the number of nights booked and the number of guests, which will be aggregated at the level of “municipalities”.
“The data provided by the platforms will undergo statistical validation and be aggregated by Eurostat,” the Commission writes. “Eurostat will publish data for all Member States as well as many individual regions and cities by combining the information obtained from the platforms.”
We asked the Commission if any other data would be shared by the platforms — including aggregated information on the number of properties rented, and whether rentals are whole properties or rooms in a lived-in property — but a Commission spokeswoman could not confirm any other data would be provided under the current arrangement.
She also told us that municipalities can be defined differently across the EU — so it may not always be the case that city-level data will be available to be published by Eurostat.
In recent years multiple cities in the EU — including Amsterdam, Barcelona, Berlin and Paris — have sought to put restrictions on Airbnb and similar platforms in order to limit their impact on residents and local communities, arguing short term rentals remove housing stock and drive up rental prices, hollowing out local communities, as well as creating other sorts of anti-social disruptions.
However a ruling in December by Europe’s top court — related to a legal challenge filed against Airbnb by a French tourism association — offered the opposite of relief for such cities, with judges finding Airbnb to be an online intermediation service, rather than an estate agent.
Under current EU law on Internet business, the CJEU ruling makes it harder for cities to apply tighter restrictions as such services remain regulated under existing ecommerce rules. Although the Commission has said it will introduce a Digital Services Act this year that’s slated to upgrade liability rules for platforms (and at least could rework aspects of the ecommerce directive to allow for tighter controls).
Last year, ahead of the CJEU’s ruling, ten EU cities penned an open letter warning that “a carte blanche for holiday rental platforms is not the solution” — calling for the Commission to introduce “strong legal obligations for platforms to cooperate with us in registration-schemes and in supplying rental-data per house that is advertised on their platforms”.
So it’s notable that the Commission’s initial data-sharing arrangement with the four platforms does not include any information about the number of properties being rented, nor the proportion which are whole-property rentals vs rooms in a lived-in property.
Both of which would be highly relevant metrics for cities concerned about short term rental platforms’ impact on local housing stock and rents.
Asked about this the Commission spokeswoman told us it had to “ensure a fair balance between the transparency that will help the cities to develop their policies better and then to protect personal data — because this is about private houses”.
“The decision was taken on this basis to strike a fair balance between the different interests at stake,” she added.
When we pointed out that it would be possible to receive property data in aggregate, in a way that does not disclose any personal data, the spokeswoman had no immediate response. (We’ll update this report if we receive any additional comment from the Commission on our questions).
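For what it’s worth, that sort of privacy-preserving aggregation is trivial to implement. A minimal sketch, assuming a per-booking feed with hypothetical field names (nothing here reflects the platforms’ actual data schemas), in which the distinct-property count cities want drops out of the same municipality-level group-by as nights and guests, with no personal data leaving the aggregation step:

```python
from collections import defaultdict

# Hypothetical per-booking records: (municipality, property_id, nights, guests).
bookings = [
    ("Amsterdam", "prop-1", 3, 2),
    ("Amsterdam", "prop-1", 2, 4),
    ("Amsterdam", "prop-2", 7, 2),
    ("Barcelona", "prop-9", 1, 3),
]

totals = defaultdict(lambda: {"nights": 0, "guests": 0, "properties": set()})
for municipality, prop_id, nights, guests in bookings:
    t = totals[municipality]
    t["nights"] += nights
    t["guests"] += guests
    t["properties"].add(prop_id)  # distinct properties, no host or guest identity

for municipality, t in totals.items():
    # Only aggregate totals leave this step -- no personal data.
    print(municipality, t["nights"], t["guests"], len(t["properties"]))
```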
Without pushing for more granular data from platforms, the Commission initiative looks likely to achieve only a relatively superficial level of transparency — one which might best suit platforms’ interests, by spotlighting the tourist dollars generated in particular regions rather than offering data that would let cities drill down into the flip-side impacts on local housing and rent affordability.
Gemma Galdon, director of Eticas, a Barcelona-based research consultancy focused on the ethics of applying cutting-edge technologies, agreed the Commission move falls short — though she welcomed the push towards increasing transparency as “a good step”.
“This is indeed disappointing. Cities like Barcelona, NYC, Portland or Amsterdam have agreements with airbnb to access data (even personal and contact data for hosts!),” she told us.
“Mentioning privacy as a reason not to provide more data shows a serious lack of understanding of data protection regulation in Europe. Aggregate data is not personal data. And still, personal data can be shared as long as there is a legal basis or consent,” she added.
“So while this is a good step, it is unclear why it falls so short as the reason provided (privacy) is clearly not relevant in this case.”

Frontline Ventures raises new $80M fund focused on bringing US firms into Europe

Frontline Ventures, based between Dublin and London, has announced a new $80 million fund designed to assist US tech companies expanding into Europe.
The new FrontlineX fund — which means the firm now has $200 million under management — focuses mainly on growth-stage B2B companies and will invest up to $5 million per company, alongside lead investors, in later-stage rounds. FrontlineX will be led by partners Stephen McIntyre and Brennan O’Donnell.
The firm believes that flawed go-to-market strategies and weak local talent networks mean US companies tend to lose too much money in foregone revenue when they expand into Europe — a problem the team is aiming to address.
Ireland has been a crucial landing point for US tech companies expanding into Europe, in part because of its low tax regime. No doubt Irish investors are now realizing that, with the UK leaving the EU, Dublin — and Ireland more broadly — will become an even more attractive proposition.
Frontline has backed a number of successful companies in Seed Funds I and II, including Britebill (acquired by Amdocs), Logentries (acquired by Rapid7) and Orchestrate (acquired by CenturyLink). Most recently, Frontline was an early investor in Pointy, which was acquired by Google last month.
Prior to joining Frontline, McIntyre set up Twitter’s European headquarters and built its EMEA business as Vice President of EMEA. Before that he ran a substantial part of Google’s ads business.
O’Donnell joins FrontlineX as a partner in San Francisco. He previously held multiple go-to-market leadership roles at Google in the US and Europe, and executive roles at Yammer, SurveyMonkey, Euclid and Airtable.
In a statement McIntyre said: “We’ve benchmarked the best of B2B software and seen that, by the time a company goes public, 30% of its revenue should be coming from Europe. But even the biggest names in tech fail to get there because of avoidable mistakes when they land. We’ve learned about international expansion the hard way as operators. The good news is that most of these problems are known and solvable.”
FrontlineX has already invested in the Series B of TripActions, a company that has gone on to raise from Andreessen Horowitz at a $4 billion valuation; People.ai’s $100 million Series C together with Lightspeed, Andreessen Horowitz and ICONIQ; and Clearbanc’s $50 million Series B with Emergence and Highland. The VC has also backed more than 60 companies, with recent investments including TeachCloud, Siren, Cloudsmith and Sweepr.
Ariel Cohen, the CEO of TripActions, commented that Frontline was “a crucial source of go-to-market advice”.

Cathay Pacific fined £500k by UK’s ICO over data breach disclosed in 2018

Cathay Pacific has been issued with a £500,000 penalty by the UK’s data watchdog for security lapses which exposed the personal details of some 9.4 million customers globally — 111,578 of whom were from the UK.
The penalty, which is the maximum fine possible under relevant UK law, was announced today by the Information Commissioner’s Office (ICO), following a multi-month investigation. It pertains to a breach disclosed by the airline in fall 2018.
At the time Cathay Pacific said it had first identified unauthorized access to its systems in March, though it did not explain why it took more than six months to make a public disclosure of the breach.
The failure to secure its systems resulted in unauthorised access to passengers’ personal details, including names, passport and identity details, dates of birth, postal and email addresses, phone numbers and historical travel information.
Today the ICO said the earliest date of unauthorised access to Cathay Pacific’s systems was October 14, 2014. While the earliest known date of unauthorised access to personal data was February 7, 2015.
“The ICO found Cathay Pacific’s systems were entered via a server connected to the internet and malware was installed to harvest data,” the regulator writes in a press release, adding that it found “a catalogue of errors” during the investigation, including back-up files that were not password protected; unpatched Internet-facing servers; use of operating systems that were no longer supported by the developer; and inadequate antivirus protection.
Since Cathay’s systems were compromised the UK has transposed an update to the European Union’s data protection framework into its national law, which bakes in strict disclosure requirements for breaches involving personal data — requiring data controllers to inform national regulators within 72 hours of becoming aware of a breach.
The General Data Protection Regulation (GDPR) also includes a much more substantial penalties regime — with fines that can scale as high as 4% of global annual turnover.
However owing to the timing of the unauthorized access the ICO has treated this breach as falling under previous UK data protection legislation.
Under GDPR the airline would likely have faced a substantially larger fine.
Commenting on Cathay Pacific’s penalty in a statement, Steve Eckersley, the ICO’s director of investigations, said:
People rightly expect when they provide their personal details to a company, that those details will be kept secure to ensure they are protected from any potential harm or fraud. That simply was not the case here.
This breach was particularly concerning given the number of basic security inadequacies across Cathay Pacific’s system, which gave easy access to the hackers. The multiple serious deficiencies we found fell well below the standard expected. At its most basic, the airline failed to satisfy four out of five of the National Cyber Security Centre’s basic Cyber Essentials guidance.
Under data protection law organisations must have appropriate security measures and robust procedures in place to ensure that any attempt to infiltrate computer systems is made as difficult as possible.
Reached for comment the airline reiterated its regret over the data breach and said it has taken steps to enhance its security “in the areas of data governance, network security and access control, education and employee awareness, and incident response agility”.
“Substantial amounts have been spent on IT infrastructure and security over the past three years and investment in these areas will continue,” Cathay Pacific said in the statement. “We have co-operated closely with the ICO and other relevant authorities in their investigations. Our investigation reveals that there is no evidence of any personal data being misused to date. However, we are aware that in today’s world, as the sophistication of cyber attackers continues to increase, we need to and will continue to invest in and evolve our IT security systems.”
“We will continue to co-operate with relevant authorities to demonstrate our compliance and our ongoing commitment to protecting personal data,” it added.
Last summer the ICO slapped another airline, British Airways, with a far more substantial fine for a breach that leaked data on 500,000 customers, also as a result of security lapses.
In that case the airline faced a record £183.39M penalty — totalling 1.5% of its total revenues for 2018 — as the breach occurred after the GDPR had come into application.

Facebook has paused election reminders in Europe after data watchdog raises transparency concerns

Big tech’s lead privacy regulator in Europe has intervened to flag transparency concerns about a Facebook election reminder feature — asking the tech giant to provide it with information about what data it collects from users who interact with the notification and how their personal data is used, including whether it’s used for targeting them with ads.
Facebook confirmed to TechCrunch it has paused use of the election reminder feature in the European Union while it works on addressing the Irish Data Protection Commission (DPC)’s concerns.
Facebook’s Election Day Reminder (EDR) feature is a notification the platform can display to users on the day of an election — ostensibly to encourage voter participation. However, as ever with the data-driven ad business, there’s a whole wrapper of associated questions about what information Facebook’s platform might be harvesting when it chooses to deploy the nudge (and how the ad business is making use of the data).
On an FAQ on its website about the election reminder Facebook writes vaguely that users “may see reminders and posts about elections and voting”.
Facebook does not explain what criteria it uses to determine whether to target (or not to target) a particular user with an election reminder.
Yet a study carried out by Facebook in 2012, working with academics from the University of California at San Diego, found an election day reminder sent via its platform on the day of the 2010 US congressional elections boosted voter turnout by about 340,000 people — which has led to concern that selective deployment of election reminders by Facebook could have the potential to influence poll outcomes.
If, for example, Facebook chose to target an election reminder at certain types of users who it knows, via its profiling of them, are likely to lean towards voting a particular way, or if the reminder was targeted at key regions where a poll result could be swung with a small shift in voter turnout, selective deployment could shade into election influence. So the lack of transparency around how the tool is deployed by Facebook is also concerning.
Under EU law, entities processing personal data that reveals political opinions must also meet a higher standard of regulatory compliance for this so-called “special category data” — including around transparency and consent. (If relying on user consent to collect this type of data it would need to be explicit — requiring a clear, purpose-specific statement that the user affirms, for instance.)
In a statement today the DPC writes that it notified Facebook of a number of “data protection concerns” related to the EDR ahead of the recent Irish General Election — which took place February 8 — raising particular concerns about “transparency to users about how personal data is collected when interacting with the feature and subsequently used by Facebook”.
The DPC said it asked Facebook to make some changes to the feature but because these “remedial actions” could not be implemented in advance of the Irish election it says Facebook decided not to activate the EDR during that poll.
We understand the main issue for the regulator centers on the provision of in-context transparency for users on how their personal data would be collected and used when they engaged with the feature — such as the types of data being collected and the purposes the data is used for, including whether it’s used for advertising purposes.
In its statement, the DPC says that following its intervention Facebook has paused use of the EDR across the EU, writing: “Facebook has confirmed that the Election Day Reminder feature will not be activated during any EU elections pending a response to the DPC addressing the concerns raised.”
It’s not clear how long this intervention-triggered pause will last — neither the DPC nor Facebook have given a timeframe for when the transparency problems might be resolved.
We reached out to Facebook with questions on the DPC’s intervention.
The company sent this statement, attributed to a spokesperson:
We are committed to processing people’s information lawfully, fairly, and in a transparent manner. However, following concerns raised by the Irish Data Protection Commission around whether we give users enough information about how the feature works, we have paused this feature in the EU for the time being. We will continue working with the DPC to address their concerns.
“We believe that the Election Day reminder is a positive feature which reminds people to vote and helps them find their polling place,” Facebook added.
Forthcoming elections in Europe include Slovak parliamentary elections this month; North Macedonian and Serbian parliamentary elections, which are due to take place in April; and UK local elections in early May.
The intervention by the Irish DPC against Facebook is the second such public event in around a fortnight — after the regulator also published a statement revealing it had raised concerns about Facebook’s planned launch of a dating feature in the EU.
That launch was also put on ice following its intervention, although Facebook claimed it chose to postpone the rollout to get the launch “right”; while the DPC said it’s waiting for adequate responses and expects the feature won’t be launched before it gets them.

It looks like public statements of concern could be a new tactic by the regulator to try to address the sticky challenge of reining in big tech.
The DPC is certainly under huge pressure to deliver key decisions to prove that the EU’s flagship General Data Protection Regulation (GDPR) is functioning as intended. Critics say it’s taking too long, even as its case load continues to pile up.
No GDPR decisions on major cases involving tech giants including Facebook and Google have yet been handed down in Dublin — despite the GDPR fast approaching its second birthday.
At the same time it’s clear tech giants have no shortage of money, resources and lawyers to inject friction into the regulatory process — with the aim of slowing down any enforcement.
So it’s likely the DPC is looking for avenues to bag some quick wins, by making more of its interventions public and using the resulting publicity to put pressure on a major player like Facebook to respond.

Google’s new T&Cs include a Brexit ‘easter egg’ for UK users

Google has buried a major change in legal jurisdiction for its UK users in a wider update to its terms and conditions announced today, which it says is intended to make its conditions of use clearer for all users.
The update is the first major revision of the T&Cs since 2012, with Google saying it wanted to ensure the terms reflect its current products and applicable laws.
Google says it undertook a major review of the terms, similar to the revision of its privacy policy in 2018, when the EU’s General Data Protection Regulation started being applied. But while it claims the new T&Cs are easier for users to understand — rewritten using simpler language and a clearer structure — there are no other changes involved, such as to how it handles people’s data.
“We’ve updated our Terms of Service to make them easier for people around the world to read and understand — with clearer language, improved organization, and greater transparency about changes we make to our services and products. We’re not changing the way our products work, or how we collect or process data,” Google spokesperson Shannon Newberry said in a statement.
Users of Google products are being asked to review and accept the new terms before March 31 when they are due to take effect.
Reuters reported on the move late yesterday — citing sources familiar with the update who suggested the change of jurisdiction for UK users will weaken legal protections around their data.
However Google disputes there will be any change in privacy standards for UK users as a result of the shift. It told us there will be no change to how it processes UK users’ data, no change to their privacy settings and no change to the way it treats their information as a result of the move.
We asked the company for further comment on this — including why it chose not to make a UK subsidiary the legal base for UK users — and a spokesperson told us it is making the change as part of its preparations for the UK to leave the European Union (aka Brexit).
“Like many companies, we have to prepare for Brexit,” Google said. “Nothing about our services or our approach to privacy will change, including how we collect or process data, and how we respond to law enforcement demands for users’ information. The protections of the UK GDPR will still apply to these users.”
Heather Burns, a tech policy specialist based in Glasgow, Scotland — who runs a website dedicated to tracking UK policy shifts around the Brexit process — also believes Google has essentially been forced to make the move because the UK government has recently signalled its intent to diverge from European Union standards in future, including on data protection.
“What has changed since January 31 has been [UK prime minister] Boris Johnson making a unilateral statement that the UK will go its own way on data protection, in direct contrast to everything the UK’s data protection regulator and government has said since the referendum,” she told us. “These bombastic, off-the-cuff statements play to his anti-EU base but businesses act on them. They have to.”
“Google’s transfer of UK accounts from the EU to the US is an indication that they do not believe the UK will either seek or receive a data protection adequacy agreement at the end of the transition period. They are choosing to deal with that headache now rather than later. We shouldn’t underestimate how strong a statement this is from the tech sector regarding its confidence in the Johnson premiership,” she added.
Asked whether she believes there will be a reduction in protections for UK users in future as a result of the shift Burns suggested that will largely depend on Google.
So — in other words — Brexit means, er, trust Google to look after your data.
“The European data protection framework is based around a set of fundamental user rights and controls over the uses of personal data — the everyday data flows to and from all of our accounts. Those fundamental rights have been transposed into UK domestic law through the Data Protection Act 2018, and they will stay, for now. But with the Johnson premiership clearly ready to jettison the European-derived system of user rights for the US-style anything goes model,” Burns suggested.
“Google saying there is no change to the way we process users’ data, no change to their privacy settings and no change to the way we treat their information can be taken as an indication that they stand willing to continue providing UK users with European-style rights over their data — albeit from a different jurisdiction — regardless of any government intention to erode the domestic legal basis for those rights.”
Reuters’ report also raises concerns about the impact of the Cloud Act agreement between the UK and the US — which is due to come into effect this summer — suggesting it will pose a threat to the safety of UK Google users’ data once it’s moved out of an EU jurisdiction (in this case Ireland) to the US where the Act will apply.
The Cloud Act is intended to make it quicker and easier for law enforcement to obtain data stored in the cloud by companies based in the other legal jurisdiction.
So in future, it might be easier for UK authorities to obtain UK Google users’ data using this legal instrument applied to Google US.
It certainly seems clear that as the UK moves away from EU standards as a result of Brexit it is opening up the possibility of the country replacing long-standing data protection rights for citizens with a regime of supercharged mass surveillance. (The UK government has already legislated to give its intelligence agencies unprecedented powers to snoop on ordinary citizens’ digital comms — so it has a proven appetite for bulk data.)
Again, Google told us the shift of legal base for its UK users will make no difference to how it handles law enforcement requests — a process it talks about here — and further claimed this will be true even when the Cloud Act applies. Which is a weaselly way of saying it will do exactly what the law requires.
Google confirmed that GDPR will continue to apply for UK users during the transition period between the old and new terms. After that it said UK data protection law will continue to apply — emphasizing that this is modelled after the GDPR. But of course in the post-Brexit future the UK government might choose to model it after something very different.
Asked to confirm whether it’s committing to maintain current data standards for UK users in perpetuity, the company told us it cannot speculate as to what privacy laws the UK will adopt in the future…
We also asked why it hasn’t chosen to elect a UK subsidiary as the legal base for UK users. To which it gave a nonsensical response — saying this is because the UK is no longer in the EU. Which begs the question: when did the UK suddenly become the 51st American state?
Returning to the wider T&Cs revision, Google said it’s making the changes in response to litigation in the European Union targeting its terms.
This includes a case in Germany where consumer rights groups successfully sued the tech giant over its use of overly broad terms which the court agreed last year were largely illegal.
In another case a year ago in France a court ordered Google to pay €30,000 for unfair terms — and ordered it to obtain valid consent from users for tracking their location and online activity.
Since at least 2016 the European Commission has also been pressuring tech giants, including Google, to fix consumer rights issues buried in their T&Cs — including unfair terms. A variety of EU laws apply in this area.
In another change being bundled with the new T&Cs Google has added a description about how its business works to the About Google page — where it explains its business model and how it makes money.
Here, among the usual ‘dead cat’ claims about not ‘selling your information’ (tl;dr adtech giants rent attention; they don’t need to sell actual surveillance dossiers), Google writes that it doesn’t use “your emails, documents, photos or confidential information (such as race, religion or sexual orientation) to personalize the ads we show you”.
Though it could be using all that personal stuff to help it build new products it can serve ads alongside.
Even further towards the end of its business model screed it includes the claim that “if you don’t want to see personalized ads of any kind, you can deactivate them at any time”. So, yes, buried somewhere in Google’s labyrinthine settings an opt-out does exist.
The change in how Google articulates its business model comes in response to growing political and regulatory scrutiny of adtech business models such as Google’s — including on data protection and antitrust grounds.

Google gobbling Fitbit is a major privacy risk, warns EU data protection advisor

The European Data Protection Board (EDPB) has intervened to raise concerns about Google’s plan to scoop up the health and activity data of millions of Fitbit users — at a time when the company is under intense scrutiny over how extensively it tracks people online and for antitrust concerns.
Google confirmed its plan to acquire Fitbit last November, saying it would pay $7.35 per share for the wearable maker in an all-cash deal that valued Fitbit, and therefore the activity, health, sleep and location data it can hold on its more than 28M active users, at ~$2.1 billion.
Regulators are in the process of considering whether to allow the tech giant to gobble up all this data.
Google, meanwhile, is in the process of dialling up its designs on the health space.
In a statement issued after a plenary meeting this week the body that advises the European Commission on the application of EU data protection law highlights the privacy implications of the planned merger, writing: “There are concerns that the possible further combination and accumulation of sensitive personal data regarding people in Europe by a major tech company could entail a high level of risk to the fundamental rights to privacy and to the protection of personal data.”
Just this month the Irish Data Protection Commission (DPC) opened a formal investigation into Google’s processing of people’s location data — finally acting on GDPR complaints filed by consumer rights groups as early as November 2018, which argue the tech giant uses deceptive tactics to manipulate users in order to keep tracking them for ad-targeting purposes.
We’ve reached out to the Irish DPC — which is the lead privacy regulator for Google in the EU — to ask if it shares the EDPB’s concerns.
The EDPB’s statement goes on to reiterate the importance of EU regulators assessing what it describes as the “longer-term implications for the protection of economic, data protection and consumer rights whenever a significant merger is proposed”.
It also says it intends to remain “vigilant in this and similar cases in the future”.
The EDPB includes a reminder that Google and Fitbit have obligations under Europe’s General Data Protection Regulation to conduct a “full assessment of the data protection requirements and privacy implications of the merger” — and do so in a transparent way, under the regulation’s principle of accountability.
“The EDPB urges the parties to mitigate the possible risks of the merger to the rights to privacy and data protection before notifying the merger to the European Commission,” it also writes.
We reached out to Google for comment but at the time of writing it had not responded, nor answered our question asking what commitments it will be making to Fitbit users regarding the privacy of their data.
Fitbit has previously claimed that users’ “health and wellness data will not be used for Google ads”.
However big tech has a history of subsequently steamrollering founder claims that ‘nothing will change’. (See, for e.g.: Facebook’s WhatsApp U-turn on data-linking.)
“The EDPB will consider the implications that this merger may have for the protection of personal data in the European Economic Area and stands ready to contribute its advice on the proposed merger to the Commission if so requested,” the advisory body adds.
We’ve also reached out to the European Commission’s competition unit for a response to the EDPB’s statement.

Lack of big tech GDPR decisions looms large in EU watchdog’s annual report

The lead European Union privacy regulator for most of big tech has put out its annual report which shows another major bump in complaints filed under the bloc’s updated data protection framework, underlining the ongoing appetite EU citizens have for applying their rights.
But what the report doesn’t show is any firm enforcement of EU data protection rules vis-a-vis big tech.
The report leans heavily on stats to illustrate the volume of work piling up on desks in Dublin. But it’s light on decisions on highly anticipated cross-border cases involving tech giants including Apple, Facebook, Google, LinkedIn and Twitter.
The General Data Protection Regulation (GDPR) began being applied across the EU in May 2018 — so is fast approaching its second birthday. Yet its file of enforcements where tech giants are concerned remains very light — even for companies with a global reputation for ripping away people’s privacy.
This despite Ireland having a large number of open cross-border investigations into the data practices of platform and adtech giants — some of which originated from complaints filed right at the moment GDPR came into force.
In the report the Irish Data Protection Commission (DPC) notes it opened a further six statutory inquiries in relation to “multinational technology companies’ compliance with the GDPR” — bringing the total number of major probes to 21. So its ‘big case’ file continues to stack up. (It’s added at least two more since then, with a probe of Tinder and another into Google’s location tracking opened just this month.)
The report is a lot less keen to trumpet the fact that the number of decisions on cross-border cases to date remains a big fat zero.
Though, just last week, the DPC made a point of publicly raising “concerns” about Facebook’s approach to assessing the data protection impacts of a forthcoming product in light of GDPR requirements to do so — an intervention that resulted in a delay to the regional launch of Facebook’s Dating product.
This discrepancy (cross-border cases: 21; Irish DPC decisions: 0), plus rising anger from civil rights groups, privacy experts, consumer protection organizations and ordinary EU citizens over the paucity of flagship enforcement around key privacy complaints, is clearly piling pressure on the regulator. (Other examples of big tech GDPR enforcement do exist; France's CNIL's $57M fine against Google last year is one.)
In its defence the DPC does have a horrifying case load, as illustrated by other stats it's keen to spotlight — such as receiving a total of 7,215 complaints in 2019, a 75% increase on the total (4,113) received in 2018. A full 6,904 of these were dealt with under the GDPR (while 311 complaints were filed under the Data Protection Acts 1988 and 2003).
There were also 6,069 data security breaches notified to it, per the report — representing a 71% increase on the total (3,542) recorded in 2018.
A full 457 cross-border processing complaints were also received in Dublin via the GDPR's One-Stop-Shop mechanism. (This is the device the Commission came up with for the 'lead regulator' approach that's baked into GDPR and which has landed Ireland in the regulatory hot seat. tl;dr: other data protection agencies are passing Dublin A LOT of paperwork.)
The DPC necessarily has to go back and forth on cross-border cases, as it liaises with other interested regulators. All of which, you can imagine, creates a rich opportunity for lawyered-up tech giants to inject extra friction into the oversight process — by asking to review and query everything. [Insert the sound of a can being hoofed down the road]
Meanwhile the agency that's supposed to regulate most of big tech (and plenty else) — which writes in the annual report that it increased its full-time staff from 110 to 140 last year — did not get all the funding it asked for from the Irish government.
So it also has the hard cap of its own budget to reckon with (just €15.3M in 2019) vs — for example — Google's parent Alphabet's $46.1BN in Q4 2019 revenue alone. So, er, do the math.
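Since we've invited you to do the math, here's a minimal back-of-the-envelope sketch (in Python; it uses only the figures cited above and ignores euro-dollar conversion) of how those numbers stack up:

    # Back-of-the-envelope math using the figures cited above.
    complaints_2018, complaints_2019 = 4_113, 7_215
    breaches_2018, breaches_2019 = 3_542, 6_069

    def yoy_increase(old, new):
        """Year-over-year percentage increase."""
        return (new - old) / old * 100

    print(f"Complaints: +{yoy_increase(complaints_2018, complaints_2019):.0f}%")        # ~75%
    print(f"Breach notifications: +{yoy_increase(breaches_2018, breaches_2019):.0f}%")  # ~71%

    # Regulator budget vs regulated giant's revenue (rough scale only).
    dpc_budget_eur = 15.3e6
    alphabet_q4_2019_revenue_usd = 46.1e9
    print(f"Revenue-to-budget ratio: ~{alphabet_q4_2019_revenue_usd / dpc_budget_eur:,.0f}x")

In other words, Alphabet books roughly 3,000 times the DPC's entire annual budget in revenue in a single quarter.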
Nonetheless the pressure is now firmly on Ireland for major GDPR enforcements to flow.
One year of major enforcement inaction could be filed under 'bedding in'; two years without any major decisions would not be a good look. (The DPC has previously said the first decisions will come early this year — so it seems to be hoping to have something to show for GDPR's second birthday.)
Some of the high profile complaints crying out for regulatory action include behavioral ads served via real-time bidding programmatic advertising (which the UK data watchdog admitted half a year ago is rampantly unlawful); cookie consent banners (which remain a Swiss cheese of non-compliance); and adtech platforms cynically forcing consent from users by requiring they agree to being microtargeted with ads in order to access the ('free') service. (Thing is, GDPR stipulates that consent as a legal basis must be freely given and can't be bundled with other stuff, so… )
Full disclosure: TechCrunch’s parent company, Verizon Media (née Oath), is also under ongoing investigation by the DPC — which is looking at whether it meets GDPR’s transparency requirements under Articles 12-14 of the regulation.
Seeking to put a positive spin on 2019’s total lack of a big tech privacy reckoning, commissioner Helen Dixon writes in the report: “2020 is going to be an important year. We await the judgment of the CJEU in the SCCs data transfer case; the first draft decisions on big tech investigations will be brought by the DPC through the consultation process with other EU data protection authorities, and academics and the media will continue the outstanding work they are doing in shining a spotlight on poor personal data practices.”
In further remarks to the media Dixon said: “At the Data Protection Commission, we have been busy during 2019 issuing guidance to organisations, resolving individuals’ complaints, progressing larger-scale investigations, reviewing data breaches, exercising our corrective powers, cooperating with our EU and global counterparts and engaging in litigation to ensure a definitive approach to the application of the law in certain areas.
“Much more remains to be done in terms of both guiding on proportionate and correct application of this principles-based law and enforcing the law as appropriate. But a good start is half the battle and the DPC is pleased at the foundations that have been laid in 2019. We are already expanding our team of 140 to meet the demands of 2020 and beyond.”
One notable date this year also falls when GDPR turns two: a Commission review of how the regulation is functioning is due in May.
That’s one deadline that may help to concentrate minds on issuing decisions.
Per the DPC report, the largest category of complaints it received last year fell under ‘access request’ issues — whereby data controllers are failing to give up (all) people’s data when asked — which amounted to 29% of the total; followed by disclosure (19%); fair processing (16%); e-marketing complaints (8%); and right to erasure (5%).

On the security front, the vast bulk of notifications received by the DPC related to unauthorised disclosure of data (aka breaches) — with a total across the private and public sector of 5,188 vs just 108 for hacking (though the second largest category was actually lost or stolen paper, with 345).
There were also 161 notifications of phishing; 131 notifications of unauthorized access; 24 notifications of malware; and 17 of ransomware.

Europe sets out plan to boost data reuse and regulate “high risk” AIs

European Union lawmakers have set out a first bundle of proposals for a new digital strategy for the bloc, one that's intended to drive digitalization across all industries and sectors — and enable what Commission president Ursula von der Leyen has described as 'A Europe fit for the Digital Age'.
It could also be summed up as a ‘scramble for AI’, with the Commission keen to rub out barriers to the pooling of massive European data sets in order to power a new generation of data-driven services as a strategy to boost regional competitiveness vs China and the U.S.
Pushing for the EU to achieve technological sovereignty is a key plank of von der Leyen's digital policy plan for the 27-Member State bloc.
Presenting the latest on her digital strategy to press in Brussels today, she said: “We want the digital transformation to power our economy and we want to find European solutions in the digital age.”
The top-line proposals are:
AI
Rules for "high risk" AI systems, such as those used in health, policing, or transport, requiring that such systems be "transparent, traceable and guarantee human oversight"
A requirement that unbiased data is used to train high-risk systems so that they “perform properly, and to ensure respect of fundamental rights, in particular non-discrimination”
Consumer protection rules so authorities can “test and certify” data used by algorithms in a similar way to existing rules that allow for checks to be made on products such as cosmetics, cars or toys
A "broad debate" on the circumstances in which the use of remote biometric identification could be justified
A voluntary labelling scheme for lower risk AI applications
The creation of an EU governance structure to ensure a framework for compliance with the rules and avoid fragmentation across the bloc
Data
A regulatory framework covering data governance, access and reuse between businesses, between businesses and government, and within administrations to create incentives for data sharing, which the Commission says will establish “practical, fair and clear rules on data access and use, which comply with European values and rights such as personal data protection, consumer protection and competition rules” 
A push to make public sector data more widely available by opening up “high-value datasets” to enable their reuse to foster innovation
Support for cloud infrastructure platforms and systems to support the data reuse goals. The Commission says it will contribute to investments in European High Impact projects on European data spaces and trustworthy and energy efficient cloud infrastructures
Sectoral specific actions to build European data spaces that focus on specific areas such as industrial manufacturing, the green deal, mobility or health
The full data strategy proposal can be found here.
The Commission's white paper on AI "excellence and trust" is here.
Next steps will see the Commission taking feedback on the plan — as it kicks off public consultation on both proposals.
A final draft is slated by the end of the year after which the various EU institutions will have their chance to chip into (or chip away at) the plan. So how much policy survives for the long haul remains to be seen.
Tech for good
At a press conference following von der Leyen’s statement Margrethe Vestager, the Commission EVP who heads up digital policy, and Thierry Breton, commissioner for the internal market, went into some of the detail around the Commission’s grand plan for “shaping Europe’s digital future”.
The digital policy package is meant to define how we shape Europe’s digital future “in a way that serves us all”, said Vestager.
The strategy aims to unlock access to “more data and good quality data” to fuel innovation and underpin better public services, she added.
The Commission’s digital EVP Margrethe Vestager discussing the AI whitepaper
Collectively, the package is about embracing the possibilities AI creates while managing the risks, she also said, adding: "The point obviously is to create trust, rather than fear."
She noted that the two policy pieces being unveiled by the Commission today, on AI and data, form part of a more wide-ranging digital and industrial strategy whole with additional proposals still to be set out.
"The picture that will come when we have assembled the puzzle should illustrate three objectives," she said. "First that technology should work for people and not the other way round; it is first and foremost about purpose. The development, the deployment, the uptake of technology must work in the same direction to make a real positive difference in our daily lives.
"Second that we want a fair and competitive economy — a full Single Market where companies of all sizes can compete on equal terms, where the road from garage to scale up is as short as possible. But it also means an economy where the market power held by a few incumbents cannot be used to block competition. It also means an economy where consumers can take it for granted that their rights are being respected and profits are being taxed where they are made."
Thirdly, she said the Commission plan would support “an open, democratic and sustainable society”.
"This means a society where citizens can control the data that they provide, where digital platforms are accountable for the contents that they feature… This is a fundamental thing — that while we use new digital tools, use AI as a tool, that we build a society based on our fundamental rights," she added, trailing a forthcoming democracy action plan.
Digital technologies must also actively enable the green transition, said Vestager — pointing to the Commission’s pledge to achieve carbon neutrality by 2050. Digital, satellite, GPS and sensor data would be crucial to this goal, she suggested.
“More than ever a green transition and digital transition goes hand in hand.”
On the data package Breton said the Commission will launch a European and industrial cloud platform alliance to drive interest in building the next gen platforms he said would be needed to enable massive big data sharing across the EU — tapping into 5G and edge computing.
“We want to mobilize up to €2BN in order to create and mobilize this alliance,” he said. “In order to run this data you need to have specific platforms… Most of this data will be created locally and processed locally — thanks to 5G critical network deployments but also locally to edge devices. By 2030 we expect on the planet to have 500BN connected devices… and of course all the devices will exchange information extremely quickly. And here of course we need to have specific mini cloud or edge devices to store this data and to interact locally with the AI applications embedded on top of this.
“And believe me the requirement for these platforms are not at all the requirements that you see on the personal b2c platform… And then we need of course security and cyber security everywhere. You need of course latencies. You need to react in terms of millisecond — not tenths of a second. And that’s a totally different infrastructure.”
“We have everything in Europe to win this battle,” he added. “Because no one has expertise of this battle and the foundation — industrial base — than us. And that’s why we say that maybe the winner of tomorrow will not be the winner of today or yesterday.”
Trustworthy artificial intelligence
On AI Vestager said the major point of the plan is “to build trust” — by using a dual push to create what she called “an ecosystem of excellence” and another focused on trust.
The first piece includes a push by the Commission to stimulate funding, including for R&D, and to support research, such as by bolstering skills. "We need a lot of people to be able to work with AI," she noted, saying it would be essential for small and medium sized businesses to be "invited in".
On trust the plan aims to use risk to determine how much regulation is involved, with the most stringent rules being placed on what it dubs “high risk” AI systems. “That could be when AI tackles fundamental values, it could be life or death situation, any situation that could cause material or immaterial harm or expose us to discrimination,” said Vestager.
To scope this the Commission approach will focus on sectors where such risks might apply — such as energy and recruitment.
If an AI product or service is identified as posing a risk then the proposal is for an enforcement mechanism to test that the product is safe before it is put into use. These proposed “conformity assessments” for high risk AI systems include a number of obligations Vestager said are based on suggestions by the EU’s High Level Expert Group on AI — which put out a slate of AI policy recommendations last year.
The four requirements attached to this bit of the proposals are: 1) that AI systems should be trained using data that “respects European values and rules” and that a record of such data is kept; 2) that an AI system should provide “clear information to users about its purpose, its capabilities but also its limits” and that it be clear to users when they are interacting with an AI rather than a human; 3) AI systems must be “technically robust and accurate in order to be trustworthy”; and 4) they should always ensure “an appropriate level of human involvement and oversight”.
Obviously there are big questions about how such broad-brush requirements will be measured and stood up (as well as actively enforced) in practice.
If an AI product or service is not identified as high risk Vestager noted there would still be regulatory requirements in play — such as the need for developers to comply with existing EU data protection rules.
In her press statement, Commission president von der Leyen highlighted a number of examples of how AI might power a range of benefits for society — from “better and earlier” diagnosis of diseases like cancer to helping with her parallel push for the bloc to be carbon neutral by 2050, such as by enabling precision farming and smart heating — emphasizing that such applications rely on access to big data.
"Artificial intelligence is about big data," she said. "Data, data and again data. And we all know that the more data we have the smarter our algorithms. This is a very simple equation. Therefore it is so important to have access to data that are out there. This is why we want to give our businesses but also the researchers and the public services better access to data."
"The majority of data we collect today are never ever used even once. And this is not at all sustainable," she added. "In these data we collect that are out there lies an enormous amount of precious ideas, potential innovation, untapped potential we have to unleash — and therefore we follow the principle that in Europe we have to offer data spaces where you can not only store your data but also share with others. And therefore we want to create European data spaces where businesses, governments and researchers can not only store their data but also have access to other data they need for their innovation."
She too impressed the need for AI regulation, including to guard against the risk of biased algorithms — saying “we want citizens to trust the new technology”. “We want the application of these new technologies to deserve the trust of our citizens. This is why we are promoting a responsible, human centric approach to artificial intelligence,” she added.
She said the planned restrictions on high risk AI would apply in fields such as healthcare, recruitment, transportation, policing and law enforcement — and potentially others.
“We will be particularly careful with sectors where essential human interests and rights are at stake,” she said. “Artificial intelligence must serve people. And therefore artificial intelligence must always comply with people’s rights. This is why a person must always be in control of critical decisions and so called ‘high risk AI’ — this is AI that potentially interferes with people’s rights — have to be tested and certified before they reach our single market.”
“Today’s message is that artificial intelligence is a huge opportunity in Europe, for Europe. We do have a lot but we have to unleash this potential that is out there. We want this innovation in Europe,” von der Leyen added. “We want to encourage our businesses, our researchers, the innovators, the entrepreneurs, to develop artificial intelligence and we want to encourage our citizens to feel confident to use it in Europe.”
Towards a rights-respecting common data space
The European Commission has been working on building what it dubs a “data economy” for several years at this point, plugging into its existing Digital Single Market strategy for boosting regional competitiveness.
Its aim is to remove barriers to the sharing of non-personal data within the single market. The Commission has previously worked on regulation to ban most data localization, as well as setting out measures to encourage the reuse of public sector data and open up access to scientific data.
Healthcare data sharing has also been in its sights, with policies to foster interoperability around electronic health records, and it’s been pushing for more private sector data sharing — both b2b and business-to-government.
"Every organisation should be able to store and process data anywhere in the European Union," it wrote in 2018. It has also called the plan a "common European data space" — aka "a seamless digital area with the scale that will enable the development of new products and services based on data".
The focus on freeing up the flow of non-personal data is intended to complement the bloc’s long-standing rules on protecting personal data. The General Data Protection Regulation (GDPR), which came into force in 2018, has reinforced EU citizens’ rights around the processing of their personal information — updating and bolstering prior data protection rules.
The Commission views GDPR as a major success story by merit of how it’s exported conversations about EU digital standards to a global audience.
But it's fair to say that, back home, enforcement of the GDPR remains a work in progress some 21 months in — with many major cross-border complaints about how tech and adtech giants process people's data still sitting on the desk of the Irish Data Protection Commission, where multinationals tend to locate their EU HQ as a result of favorable corporate tax arrangements.
The Commission’s simultaneous push to encourage the development of AI arguably risks heaping further pressure on the GDPR — as both private and public sectors have been quick to see model-making value locked up in citizens’ data.
Already across Europe there are multiple examples of companies and/or state authorities building personal data-fuelled diagnostic AIs for healthcare; using machine learning to risk-score benefits claimants; and applying facial recognition as a security aid for law enforcement.
Controversy has fast followed such developments, including around issues such as proportionality and the question of consent to legally process people's data — both under GDPR and in light of EU fundamental privacy rights, as well as those set out in the European Convention on Human Rights.
Only this month a Dutch court ordered the state to cease use of a blackbox algorithm for assessing the fraud risk of benefits claimants on human rights grounds — objecting to a lack of transparency around how the system functions and therefore also “insufficient” controllability.
The von der Leyen Commission, which took up its five-year mandate in December, is alive to rights concerns about how AI is being applied, even as it has made it clear it intends to supercharge the bloc’s ability to leverage data and machine learning technologies — eyeing economic gains.
Commission president, Ursula von der Leyen, visiting the AI Intelligence Center in Brussels (via the EC’s EbS Live AudioVisual Service)
The Commission president committed to publishing proposals to regulate AI within the first 100 days — saying she wants a European framework to steer application to ensure powerful learning technologies are used ethically and for the public good.
But a leaked draft of the plan to regulate AI last month suggested it would step back from imposing even a temporary ban on the use of facial recognition technology — leaning instead towards tweaks to existing rules and sector/app specific risk-assessments and requirements.
It’s clear there are competing views at the top of the Commission on how much policy intervention is needed on the tech sector.
Breton has previously voiced opposition to regulating AI — telling the EU parliament just before he was confirmed in post that he “won’t be the voice of regulating AI“.
Vestager, meanwhile, has been steady in her public backing for a framework to govern how AI is applied, talking at her hearing before the EU parliament of the importance of people's trust and of Europe having its own flavor of AI that must "serve humans" and have "a purpose".
“I don’t think that we can be world leaders without ethical guidelines,” she said then. “I think we will lose it if we just say no let’s do as they do in the rest of the world — let’s pool all the data from everyone, no matter where it comes from, and let’s just invest all our money.”
At the same time Vestager signalled a willingness to be pragmatic in the scope of the rules and how they would be devised — emphasizing the need for speed and agreeing the Commission would need to be “very careful not to over-regulate”, suggesting she’d accept a core minimum to get rules up and running.
Today’s proposal steers away from more stringent AI rules — such as a ban on facial recognition in public places. On biometric AI technologies Vestager described some existing uses as “harmless” during today’s press conference — such as unlocking a phone or for automatic border gates — whereas she stressed the difference in terms of rights risks related to the use of remote biometric identification tech such as facial recognition.
“With this white paper the Commission is launching a debate on the specific circumstance — if any — which might justify the use of such technologies in public space,” she said, putting some emphasis on the word ‘any’.
The Commission is encouraging EU citizens to put questions about the digital strategy for Vestager to answer tomorrow, in a live Q&A at 17.45 CET on Facebook, Twitter and LinkedIn — using the hashtag #DigitalEU

Do you want to know more on the EU's digital strategy? Use #DigitalEU to share your questions and we will ask them to Margrethe Vestager this Thursday. pic.twitter.com/I90hCR6Gcz
— European Commission (@EU_Commission) February 18, 2020

Platform liability
There is more to come from the Commission on the digital policy front — with a Digital Services Act in the works to update pan-EU liability rules around Internet platforms.
That proposal is slated to be presented later this year and both commissioners said today that details remain to be worked out. The possibility that the Commission will propose rules to more tightly regulate online content platforms already has content farming adtech giants like Facebook cranking up their spin cycles.
During today's press conference Breton said he would always push for what he dubbed "shared governance" but he warned several times that if platforms don't agree an acceptable way forward "we will have to regulate" — saying it's not for European society to adapt to the platforms but for the platforms to adapt to the EU.
“We will do this within the next eight months. It’s for sure. And everybody knows the rules,” he said. “Of course we’re entering here into dialogues with these platforms and like with any dialogue we don’t know exactly yet what will be the outcome. We may find at the end of the day a good coherent joint strategy which will fulfil our requirements… regarding the responsibilities of the platform. And by the way this is why personally when I meet with them I will always prefer a shared governance. But we have been extremely clear if it doesn’t work then we will have to regulate.”
Internal market commissioner, Thierry Breton

Facebook asks for a moat of regulations it already meets

It’s suspiciously convenient that Facebook already fulfills most of the regulatory requirements it’s asking governments to lay on the rest of the tech industry. Facebook CEO Mark Zuckerberg is in Brussels lobbying the European Union’s regulators as they form new laws to govern artificial intelligence, content moderation, and more. But if they follow Facebook’s suggestions, they might reinforce the social network’s power rather than keep it in check by hamstringing companies with fewer resources.
We already saw this happen with GDPR. The idea was to strengthen privacy and weaken the exploitative data collection that tech giants like Facebook and Google depend on for their business models. The result was that Facebook and Google actually gained or only slightly lost EU market share while all other adtech vendors got wrecked by the regulation, according to WhoTracksMe.
GDPR went into effect in May 2018, hurting other ad tech vendors’ EU market share much worse than Google and Facebook. Image credit: WhoTracksMe
Tech giants like Facebook have the profits, lawyers, lobbyists, engineers, designers, scale, and steady cash flow to navigate regulatory changes. Unless new laws are squarely targeted at the abuses or dominance of these large companies, their collateral damage can loom large. Rather than spend time and money they don't have in order to comply, some smaller competitors will fold, scale back, or sell out.
But at least in the case of GDPR, everyone had to add new transparency and opt-out features. If Facebook's slate of requests goes through, it will sail forward largely unperturbed while rivals and upstarts scramble to get up to speed. I made this argument in March 2018 in my post "Regulation could protect Facebook, not punish it". Then GDPR did exactly that.
Google gained market share and Facebook only lost a little in the EU following GDPR. Everyone else fared worse. Image via WhoTracksMe
That doesn't mean these safeguards aren't sensible for everyone to follow. But regulators need to consider what Facebook isn't suggesting if they want to address its scope and brazenness, and what timelines or penalties would be feasible for smaller players.
If we take a quick look at what Facebook is proposing, it becomes obvious that it’s self-servingly suggesting what it’s already accomplished:
User-friendly channels for reporting content – Every post and entity on Facebook can already be flagged by users with an explanation of why
External oversight of policies or enforcement – Facebook is finalizing its independent Oversight Board right now
Periodic public reporting of enforcement data – Facebook publishes a twice-yearly report about enforcement of its Community Standards
Publishing their content standards – Facebook publishes its standards and notes updates to them
Consulting with stakeholders when making significant changes – Facebook consults a Safety Advisory Board and will have its new Oversight Board
Creating a channel for users to appeal a company’s content removal decisions – Facebook’s Oversight Board will review content removal appeals
Incentives to meet specific targets such as keeping the prevalence of violating content below some agreed threshold – Facebook already touts how 99% of child nudity content and 80% of hate speech removed was detected proactively, and that it deletes 99% of ISIS and Al Qaeda content
Facebook CEO Mark Zuckerberg arrives at the European Parliament, prior to his hearing on the data privacy scandal, on May 22, 2018 at the European Union headquarters in Brussels. (Photo: JOHN THYS/AFP/Getty Images)
Finally, Facebook asks that the rules for what content should be prohibited on the internet “recognize user preferences and the variation among internet services, can be enforced at scale, and allow for flexibility across language, trends and context”. That’s a lot of leeway. Facebook already allows different content in different geographies to comply with local laws, lets Groups self-police themselves more than the News Feed, and Zuckerberg has voiced support for customizable filters on objectionable content with defaults set by local majorities.
"…Can be enforced at scale" is a last push for laws that wouldn't require tons of human moderators to enforce, a cost that might further drag down Facebook's share price. ('100 billion pieces of content come in per day, so don't make us look at it all.') Investments in safety for elections, content, and cybersecurity already dragged Facebook's profit growth down from 61% year-over-year in 2018 to just 7% in 2019.
To be clear, it’s great that Facebook is doing any of this already. Little is formally required. If the company was as evil as some make it out to be, it wouldn’t be doing any of this.
Then again, Facebook earned $18 billion in profit in 2019 off our data while repeatedly proving it hasn't adequately protected it. And the $5 billion FTC fine and settlement, under which Facebook has pledged to build more around privacy and transparency, shows it's still playing catch-up given its role as a ubiquitous communications utility.
There's plenty more for EU and hopefully US regulators to investigate. Should Facebook pay a tax on the use of AI? How does it treat and pay its human content moderators? Would requiring that users be allowed to export their interoperable friends list promote much-needed competition in social networking that could let the market compel Facebook to act better?
As the EU internal market commissioner Thierry Breton told reporters following Zuckerberg’s meetings with regulators, “It’s not for us to adapt to those companies, but for them to adapt to us.”

Unemployment is the top risk of AI. I think a tax on its use by big companies could help pay for job retraining the world will desperately need
— Josh Constine (@JoshConstine) February 17, 2020

Facebook pushes EU for dilute and fuzzy Internet content rules

Facebook founder Mark Zuckerberg is in Europe this week — attending a security conference in Germany over the weekend where he spoke about the kind of regulation he’d like applied to his platform ahead of a slate of planned meetings with digital heavyweights at the European Commission.
“I do think that there should be regulation on harmful content,” said Zuckerberg during a Q&A session at the Munich Security Conference, per Reuters, making a pitch for bespoke regulation.
He went on to suggest “there’s a question about which framework you use”, telling delegates: “Right now there are two frameworks that I think people have for existing industries — there’s like newspapers and existing media, and then there’s the telco-type model, which is ‘the data just flows through you’, but you’re not going to hold a telco responsible if someone says something harmful on a phone line.”
“I actually think where we should be is somewhere in between,” he added, making his plea for Internet platforms to be a special case.
At the conference he also said Facebook now employs 35,000 people to review content on its platform and implement security measures — including suspending around 1 million fake accounts per day, a stat he professed himself “proud” of.
The Facebook chief is due to meet with key commissioners covering the digital sphere this week, including competition chief and digital EVP Margrethe Vestager, internal market commissioner Thierry Breton and Věra Jourová, who is leading policymaking around online disinformation.
The timing of his trip is clearly linked to digital policymaking in Brussels — with the Commission due to set out its thinking around the regulation of artificial intelligence this week. (A leaked draft last month suggested policymakers are eyeing risk-based rules to wrap around AI.)
More widely, the Commission is wrestling with how to respond to a range of problematic online content — from terrorism to disinformation and election interference — which also puts Facebook’s 2BN+ social media empire squarely in regulators’ sights.
Another policymaking plan — a forthcoming Digital Services Act (DSA) — is slated to upgrade liability rules around Internet platforms.
The detail of the DSA has yet to be publicly laid out but any move to rethink platform liabilities could present a disruptive risk for a content distributing giant such as Facebook.
Going into meetings with key commissioners Zuckerberg made his preference for being considered a 'special' case clear — saying he wants his platform to be regulated not like the media businesses his empire has financially disrupted, nor like a dumb-pipe telco.
On the latter it’s clear — even to Facebook — that the days of Zuckerberg being able to trot out his erstwhile mantra that ‘we’re just a technology platform’, and wash his hands of tricky content stuff, are long gone.
Russia’s 2016 foray into digital campaigning in the US elections and sundry content horrors/scandals before and since have put paid to that — from nation-state backed fake news campaigns to livestreamed suicides and mass murder.
Facebook has been forced to increase its investment in content moderation. Meanwhile it announced a News section launch last year — saying it would hand-pick publishers' content to show in a dedicated tab.
The ‘we’re just a platform’ line hasn’t been working for years. And EU policymakers are preparing to do something about that.
With regulation looming Facebook is now directing its lobbying energies onto trying to shape a policymaking debate — calling for what it dubs “the ‘right’ regulation”.
Here the Facebook chief looks to be applying a similar playbook to Google CEO Sundar Pichai — who recently traveled to Brussels to push for AI rules so dilute they'd act as a tech enabler.
In a blog post published today Facebook pulls its latest policy lever: Putting out a white paper which poses a series of questions intended to frame the debate at a key moment of public discussion around digital policymaking.
Top of this list is a push to foreground focus on free speech, with Facebook questioning “how can content regulation best achieve the goal of reducing harmful speech while preserving free expression?” — before suggesting more of the same: (Free, to its business) user-generated policing of its platform.
Another suggestion it sets out, which aligns with existing Facebook moves to steer regulation in a direction it's comfortable with, is the creation of a channel for users to appeal content removal or non-removal decisions. Which of course entirely aligns with the content decision review body Facebook is in the process of setting up — but which is not in fact independent of Facebook.
Facebook is also lobbying in the white paper to be able to throw platform levers to meet a threshold of ‘acceptable vileness’ — i.e. it wants a proportion of law-violating content to be sanctioned by regulators — with the tech giant suggesting: “Companies could be incentivized to meet specific targets such as keeping the prevalence of violating content below some agreed threshold.”
It’s also pushing for the fuzziest and most dilute definition of “harmful content” possible. On this Facebook argues that existing (national) speech laws — such as, presumably, Germany’s Network Enforcement Act (aka the NetzDG law) which already covers online hate speech in that market — should not apply to Internet content platforms, as it claims moderating this type of content is “fundamentally different”.
“Governments should create rules to address this complexity — that recognize user preferences and the variation among internet services, can be enforced at scale, and allow for flexibility across language, trends and context,” it writes — lobbying for maximum possible leeway to be baked into the coming rules.
“The development of regulatory solutions should involve not just lawmakers, private companies and civil society, but also those who use online platforms,” Facebook’s VP of content policy, Monika Bickert, also writes in the blog.
“If designed well, new frameworks for regulating harmful content can contribute to the internet’s continued success by articulating clear ways for government, companies, and civil society to share responsibilities and work together. Designed poorly, these efforts risk unintended consequences that might make people less safe online, stifle expression and slow innovation,” she adds, ticking off more of the tech giant’s usual talking points at the point policymakers start discussing putting hard limits on its ad business.

Facebook Dating launch blocked in Europe after it fails to show privacy workings

Facebook has been left red-faced after being forced to call off the launch date of its dating service in Europe because it failed to give its lead EU data regulator enough advanced warning — including failing to demonstrate it had performed a legally required assessment of privacy risks.
Late yesterday Ireland’s Independent.ie newspaper reported that the Irish Data Protection Commission (DPC) had sent agents to Facebook’s Dublin office seeking documentation that Facebook had failed to provide — using inspection and document seizure powers set out in Section 130 of the country’s Data Protection Act.
In a statement on its website the DPC said Facebook first contacted it about the rollout of the dating feature in the EU on February 3.
“We were very concerned that this was the first that we’d heard from Facebook Ireland about this new feature, considering that it was their intention to roll it out tomorrow, 13 February,” the regulator writes. “Our concerns were further compounded by the fact that no information/documentation was provided to us on 3 February in relation to the Data Protection Impact Assessment [DPIA] or the decision-making processes that were undertaken by Facebook Ireland.”
Facebook announced its plan to get into the dating game all the way back in May 2018, trailing its Tinder-encroaching idea to bake a dating feature for non-friends into its social network at its F8 developer conference.
It went on to test launch the product in Colombia a few months later. And since then it's been gradually adding more countries in South America and Asia. It also launched in the US last fall — soon after it was fined $5BN by the FTC for historical privacy lapses.
At the time of its US launch Facebook said dating would arrive in Europe by early 2020. It just didn’t think to keep its lead EU privacy regulator in the loop — despite the DPC having multiple (ongoing) investigations into other Facebook-owned products at this stage.
Which is either extremely careless or, well, an intentional fuck you to privacy oversight of its data-mining activities. (Among multiple probes being carried out under Europe’s General Data Protection Regulation, the DPC is looking into Facebook’s claimed legal basis for processing people’s data under the Facebook T&Cs, for example.)
The DPC’s statement confirms that its agents visited Facebook’s Dublin office on February 10 to carry out an inspection — in order to “expedite the procurement of the relevant documentation”.
Which is a nice way of the DPC saying Facebook spent a whole week still not sending it the required information.
“Facebook Ireland informed us last night that they have postponed the roll-out of this feature,” the DPC’s statement goes on.
Which is a nice way of saying Facebook fucked up and is being made to put a product rollout it’s been planning for at least half a year on ice.
The DPC’s head of communications, Graham Doyle, confirmed the enforcement action, telling us: “We’re currently reviewing all the documentation that we gathered as part of the inspection on Monday and we have posed further questions to Facebook and are awaiting the reply.”
“Contained in the documentation we gathered on Monday was a DPIA,” he added.
This raises the question of why Facebook didn't send the DPIA to the DPC on February 3 — unless of course the document did not actually exist on that date…
We’ve reached out to Facebook for comment and to ask when it carried out the DPIA.
We’ve also asked the DPC to confirm its next steps. The regulator could ask Facebook to make changes to how the product functions in Europe if it’s not satisfied it complies with EU laws.
Under GDPR there's a requirement for data controllers to bake privacy by design and default into products which handle people's information — and a dating product clearly does.
A DPIA — a process whereby planned processing of personal data is assessed for its impact on the rights and freedoms of individuals — is meanwhile a requirement under the GDPR when, for example, individual profiling is taking place or there's processing of sensitive data on a large scale.
Again, the launch of a dating product on a platform such as Facebook — which has hundreds of millions of regional users — would be a clear-cut case for such an assessment to be carried out ahead of any launch.

European founders look to new markets, aim for profitability

To get a better sense of what lies ahead for the European startup ecosystem, we spoke to several investors and entrepreneurs in the region about their impressions and lessons learned from 2019, along with their predictions for 2020.
We asked for blunt responses — and we weren’t disappointed.
These responses have been edited for clarity and length.
Kenny Ewan, founder/CEO, Wefarm (London)
I’ve often been faced with questions around how we can generate revenue in markets like Africa. There has historically been a view that you can do something good, or you can generate revenue — and companies that talk about developing markets usually get squarely lumped into the former. While mission-led companies achieving tremendous growth has been talked about for a while, 2019 has been a year I have felt conversations with investors and others really begin to shift to the reality of that and it’s thanks to more and more proof points being delivered by startups across the board.
As more and more businesses begin to realize they don’t need to wait for the internet to descend from the sky for these markets to become hubs of commerce and innovation — and see that it’s already happening — I believe 2020 will continue to witness more and more historic tech companies shifting their focus to markets like Africa and that there will be more coverage and discussion as a result.

What to expect when pitching European VCs

Russ Heddleston
Contributor

Russ is the cofounder and CEO of DocSend. He was previously a product manager at Facebook, where he arrived via the acquisition of his startup Pursuit.com, and has held roles at Dropbox, Greystripe, and Trulia. Follow him here: @rheddleston and @docsend

Fundraising is the single most important thing you can do for your business, but I know very few founders who enjoy the process.
It’s inherently stressful: you’re running out of capital, which is why you’re trying to get more of it. There’s also no clear roadmap to getting funding and almost every company goes through the process differently. I’ve talked a lot about what makes a successful early-stage pitch deck and what you can expect when you’re trying to close a funding round. But do those same best practices still apply when you’re fundraising outside of the United States?
Before we continue, the research project that we’ve completed is opt-in, and we don’t look at anyone’s data without their express permission. We take privacy very seriously, but we also work with an amazing group of founders who are willing to pass on what they’ve learned to the next generation of founders going through the process. If you want to be included in our next round of research, you can find the survey links at the bottom of this blog post.
So what can you expect while sending your pitch deck out to European VCs?
Have a 9-12 month runway
When DocSend conducted this study previously, we found that the average seed or pre-seed round took about 11-15 weeks. In fact, according to our research, if you're in the United States and you're sending your pitch deck to investors, you can expect about 50 percent of your views to come in just the first nine days. You'll also hit 75 percent of your visits in just over a month, which is very much in line with the 11-15 week average window.
However, when we look outside of the U.S., the numbers change dramatically.
Sending out your pitch deck in Europe, you can expect to wait over two weeks (15 days) for the first 50 percent of your visits. And you’ll likely wait nearly two months (53 days) for 75 percent of your visits. There are a lot of reasons for the discrepancies. It could be that your potential investors are more spread out. We also don’t see the same level of urgency in EU funding rounds as we often see in the U.S. No matter the reason, you’re going to want to have enough runway to survive the fundraising gauntlet in your region. While I usually recommend having at least six months in the bank, you may want to look at having 9-12 months of runway so you’re not desperate by the end of your fundraising round.
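To make that metric concrete, here's a minimal sketch in Python of how the days-to-50%-and-75%-of-views figures can be computed from a deck's view log. The view dates are hypothetical, for illustration only; DocSend's actual methodology isn't spelled out here.

    # Minimal sketch: given the dates of individual pitch-deck views, compute
    # how many days elapse before a given share of all views has arrived.
    # The sample dates below are hypothetical.
    import math
    from datetime import date

    view_dates = sorted(date(2020, 1, d) for d in
                        (2, 2, 3, 4, 4, 5, 7, 9, 12, 15, 20, 27))

    def days_to_share(dates, share):
        """Days from the first view until `share` of all views have arrived."""
        cutoff = dates[math.ceil(len(dates) * share) - 1]
        return (cutoff - dates[0]).days

    print(days_to_share(view_dates, 0.50))  # 3 days for this sample
    print(days_to_share(view_dates, 0.75))  # 10 days for this sample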
However, your round speed will most likely vary depending on the type of company you are. There has been a trend in recent years of U.S. investors looking to make deals with European startups. We also know American investors are looking for 100x companies to make solid returns for their funds. There are only so many 100x-type companies in the U.S. you can invest in, but Europe is an emerging market. But American VCs have a different pace and rounds for hot startups can last weeks, not months. So if you think you have a unicorn in the making (and are comfortable with a more aggressive growth plan and the burn rate that goes with it), you can use U.S. investors to help create a sense of urgency. But even if that's your plan, I would still recommend having a healthy runway to get you through in case the round doesn't go as you expect.
VCs are likely to spend more time on your deck — you should too
A clear indicator of VC interest is the amount of time they spend reading your deck before they request a meeting. Knowing how long they spend reading your deck and what pages they stop on (which isn’t necessarily a good thing) can help you gauge VC interest.
We’ve seen an interesting trend in Europe over the last few years. The average amount of time VCs are spending reading a deck has increased and not by a small amount. We’ve seen an increase of more than 20 seconds between 2018 and now, even while the length of the standard fundraising deck has stayed stable. It’s still within the industry average (both in and outside of the U.S.) of 19-20 pages. With page length staying stable, that extra time on a deck means VCs are willing to spend more time assessing an investment.
If you know your slides will be scrutinized, make sure you have content in each of the key sections VCs expect to see in your deck. Be very clear with the goal for each page and don’t include too much information. If your page is describing the problem your company is solving, you don’t need to add in your market size and the traction you’ve already gotten. Remember, the pitch deck is just there to get you the meeting; you don’t need to include every detail about your business. Your goal is to build an understandable narrative that will make a VC want to know more.
You could face more competition for European VCs’ attention
Investments are heating up outside of the U.S.
With fund sizes increasing, especially in the earlier rounds, there’s more money being invested. But with the continual focus on unicorns, that money is being concentrated in fewer companies. In fact, in the U.S., we’ve seen the number of decks with six or more views drop by nearly a full percentage point from 2018 to 2019. But the trend is the opposite in Europe. The number of pitch decks that are being viewed six or more times is actually on the rise.
We’ve also seen the number of pitch decks being viewed only once drop outside of the U.S. by 1.2 percent. This could be due to several factors. The number of VC firms in Europe viewing decks has grown by 56 percent on our platform in the last year. In the U.S., it’s only grown by 35 percent since 2018. Having more active VCs means there are more opportunities to pitch your company. But with a decrease in pitch decks that aren’t getting any action, it could be that the quality of startups is increasing, so VCs are saturated with opportunities. With well over 250 accelerators in Europe, it isn’t hard to imagine that with more and more resources available, startups are further along when looking for that initial investment than they were just a few years ago.
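For what it's worth, the share-of-decks stats in play here reduce to a simple bucketing of decks by view count. A minimal sketch, again in Python and with hypothetical counts rather than DocSend's data:

    # Minimal sketch: share of decks viewed six or more times vs only once.
    # The view counts below are hypothetical, for illustration only.
    deck_view_counts = [1, 1, 2, 6, 3, 9, 1, 4, 7, 2, 5, 12, 1, 6, 3, 8]

    highly_viewed = sum(1 for views in deck_view_counts if views >= 6)
    viewed_once = sum(1 for views in deck_view_counts if views == 1)

    print(f"Decks with 6+ views: {highly_viewed / len(deck_view_counts):.1%}")   # 37.5%
    print(f"Decks viewed only once: {viewed_once / len(deck_view_counts):.1%}")  # 25.0%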
Takeaways
Raising a funding round is completely different in Europe than it is in the U.S.
Investors in Europe aren't in a rush to view your deck, but when they do, they will likely spend more time reading it through and considering it. Combine that with the fact that the number of highly-viewed decks is increasing, and you have the makings of a long and potentially arduous round, pitching to VCs who have multiple good investments on offer.
If your business will support a more aggressive growth plan and investment, it may be worth it to court outside investment. But if you’d like to play it safe, aiming for a U.S. VC may be a waste of time.

Facebook’s use of Onavo spyware faces questions in EU antitrust probe — report

Facebook’s use of the Onavo spyware VPN app it acquired in 2013 — and used to inform its 2014 purchase of the then rival WhatsApp messaging platform — is on the radar of Europe’s antitrust regulator, per a report in the Wall Street Journal.
The newspaper reports that the Commission has requested a large volume of internal documents as part of a preliminary investigation into Facebook’s data practices which was announced in December.
The WSJ cites people familiar with the matter who told it the regulator’s enquiry is focused on allegations Facebook sought to identify and crush potential rivals and thereby stifle competition by leveraging its access to user data.
Facebook announced it was shutting down Onavo a year ago — in the face of rising controversy about its use of the VPN tool as a data-gathering business intelligence dragnet that's both hostile to user privacy and raises major questions about anti-competitive practices.
As recently as 2018 Facebook was still actively pushing Onavo at users of its main social networking app — marketing it under a ‘Protect’ banner intended to convince users that the tool would help them protect their information.
In fact the VPN allowed Facebook to monitor their activity across third party apps — enabling the tech giant to spot emerging trends across the larger mobile ecosystem. (So, as we’ve said before, ‘Protect Facebook’s business’ would have been a more accurate label for the tool.)
By the end of 2018 further details about how Facebook had used Onavo as a key intelligence lever in major acquisitions emerged when a UK parliamentary committee obtained a cache of internal documents related to a US court case brought by a third party developer which filed suit alleging unfair treatment on its app platform.
UK parliamentarians concluded that Facebook used Onavo to conduct global surveys of the usage of mobile apps by customers, apparently without their knowledge — using the intel to assess not just how many people had downloaded apps but how often they used them, which in turn helped the tech giant to decide which companies to acquire and which to treat as a threat.
The parliamentary committee went on to call for competition and data protection authorities to investigate Facebook’s business practices.
So it's not surprising that Europe's competition commission should also be digging into how Facebook used Onavo. The Commission has also been reviewing changes Facebook made to its developer APIs which affected what information it made available, per the WSJ's sources.
Internal documents published by the UK parliament also highlighted developer access issues — such as Facebook’s practice of whitelisting certain favored developers’ access to user data, raising questions about user consent to the sharing of their data — as well as fairness vis-a-vis non-whitelisted developers.
According to the newspaper’s report the regulator has requested a wide array of internal Facebook documents as part of its preliminary investigation, including emails, chat logs and presentations. It says Facebook’s lawyers have pushed back — seeking to narrow the discovery process by arguing that the request for info is so broad it would produce millions of documents and could reveal Facebook employees’ personal data.
Some of the WSJ’s sources also told it the Commission has withdrawn the original order and intends to issue a narrower request.
We’ve reached out to Facebook and the competition regulator for comment.
Back in 2017 the European Commission fined Facebook $122M for providing incorrect or misleading information at the time of the WhatsApp acquisition. Facebook had given the regulator assurances that user accounts could not be linked across the two services — which cleared the way for it to be allowed to acquire WhatsApp — only for the company to U-turn in 2016 by saying it would be linking user data.
In addition to investigating Facebook’s data practices over potential antitrust concerns, the EU’s competition regulator is also looking into Google’s data practices — announcing a preliminary probe in December.

Blackbox welfare fraud detection system breaches human rights, Dutch court rules

An algorithmic risk scoring system deployed by the Dutch state to try to predict the likelihood that social security claimants will commit benefits or tax fraud breaches human rights law, a court in the Netherlands has ruled.
The Dutch government’s System Risk Indication (SyRI) legislation uses a non-disclosed algorithmic risk model to profile citizens and has been exclusively targeted at neighborhoods with mostly low-income and minority residents. Human rights campaigners have dubbed it a ‘welfare surveillance state’.
A number of civil society organizations in the Netherlands and two citizens instigated the legal action against SyRI — seeking to block its use. The court has today ordered an immediate halt to the use of the system.
The ruling is being hailed as a landmark judgement by human rights campaigners, with the court basing its reasoning on European human rights law — specifically the right to a private life that’s set out by Article 8 of the European Convention on Human Rights (ECHR) — rather than a dedicated provision in the EU’s data protection framework (GDPR) which relates to automated processing.
GDPR’s Article 22 gives individuals the right not to be subject to solely automated decision-making where such decisions produce legal or similarly significant effects on them. But there can be some fuzziness around whether this applies if there’s a human somewhere in the loop, such as reviewing a decision after an objection.
In this instance the court has sidestepped such questions by finding SyRI directly interferes with rights set out in the ECHR.
Specifically, the court found that the SyRI legislation fails a balancing test in Article 8 of the ECHR, which requires that any societal interest be weighed against the violation of individuals’ private lives — with a fair and reasonable balance being struck.
In its current form the automated risk assessment system failed this test, in the court’s view.
Legal experts suggest the decision sets some clear limits on how the public sector across Europe can make use of AI tools — with the court objecting in particular to the lack of transparency about how the algorithmic risk scoring system functioned.
In a press release about the judgement (translated to English using Google Translate) the court writes that the use of SyRI is “insufficiently clear and controllable”. While, per Human Rights Watch, the Dutch government refused during the hearing to disclose “meaningful information” about how SyRI uses personal data to draw inferences about possible fraud.
The court clearly took a dim view of the state trying to circumvent scrutiny of human rights risk by pointing to an algorithmic ‘blackbox’ and shrugging.

The Court’s reasoning doesn’t imply there should be full disclosure, but it clearly expects much more robust information on the way (objective criteria) that the model and scores were developed and the way in which particular risks for individuals were addressed.
— Joris van Hoboken (@jorisvanhoboken) February 6, 2020

The UN special rapporteur on extreme poverty and human rights, Philip Alston — who intervened in the case by providing the court with a human rights analysis — welcomed the judgement, describing it as “a clear victory for all those who are justifiably concerned about the serious threats digital welfare systems pose for human rights”.
“This decision sets a strong legal precedent for other courts to follow. This is one of the first times a court anywhere has stopped the use of digital technologies and abundant digital information by welfare authorities on human rights grounds,” he added in a press statement.
Back in 2018 Alston warned that the UK government’s rush to apply digital technologies and data tools to socially re-engineer the delivery of public services at scale risked having an immense impact on the human rights of the most vulnerable.
So the decision by the Dutch court could have some near-term implications for UK policy in this area.
The judgement does not shut the door on the use by states of automated profiling systems entirely — but it does make clear that, in Europe, human rights law must be central to the design and implementation of rights-risking tools.
It also comes at a key time when EU policymakers are working on a framework to regulate artificial intelligence — with the Commission pledging to devise rules that ensure AI technologies are applied ethically and in a human-centric way.
It remains to be seen whether the Commission will push for pan-EU limits on specific public sector uses of AI — such as for social security assessments. A recent leaked draft of a white paper on AI regulation suggests it’s leaning towards risk-assessments and a patchwork of risk-based rules. 

Qualcomm faces fresh competition scrutiny in Europe over RFFE chips for 5G

Qualcomm is facing fresh antitrust scrutiny from the European Commission, with the regulator raising questions about radio frequency front-end (RFFE) chips which can be used in 5G devices.
The chipmaker has been expanding into selling RFFE chips for 5G devices, per Reuters — encouraging buyers of its 5G modems to also purchase its RFFE chips, rather than buying from other vendors and integrating that hardware with Qualcomm’s modems.
A European Commission spokeswoman confirmed the action, telling us: “We can confirm that the Commission has sent out questionnaires, as part of a preliminary investigation into the market for radio frequency front end.”
We’ve reached out to Qualcomm for comment.
The chipmaker disclosed the activity in its 10-Q investor filing, writing that the regulator requested information in early December — “notifying us that it is investigating whether we engaged in anti-competitive behavior in the European Union (EU)/European Economic Area (EEA) by leveraging our market position in 5G baseband processors in the RFFE space”.
Qualcomm says it’s in the process of responding to the request for information.
It’s not yet clear whether the investigation will move to a formal footing in future. “Our preliminary investigation is ongoing. We cannot comment on or predict its timing or outcome,” the EC spokeswoman told us.
“It is difficult to predict the outcome of this matter or what remedies, if any, may be imposed by the EC,” Qualcomm also writes in the investor filing, adding: “We believe that our business practices do not violate the EU competition rules.”
Qualcomm also warns investors that, if a violation is found, the EC has the power to impose a fine of up to 10% of its annual revenues, and could issue injunctive relief prohibiting or restricting certain business practices.
The preliminary probe of Qualcomm’s 5G modem business is by no means the first antitrust action the chip giant has faced in Europe.
Last summer Europe’s competition commission fined Qualcomm close to $270M, concluding a long-running antitrust investigation with a finding that the chipmaker had used predatory pricing when selling UMTS baseband chips in order to force a competitor out of the market.
Two years ago the Commission also fined the chipmaker a full $1.23BN in another antitrust case — that one related to its dominance in LTE chipsets for smartphones, and specifically to its relationship with iPhone maker Apple.
In both cases Qualcomm is appealing the decisions.
It is also battling a major competition case on its home turf: In 2017 the U.S. Federal Trade Commission (FTC) filed charges against Qualcomm — accusing it of using anticompetitive tactics in an attempt to maintain a monopoly in its chip business.
Last year a US court sided with the FTC, agreeing the chip giant had violated antitrust law — and warning that such behavior would likely continue, given Qualcomm’s key role in making modems for next-gen 5G cellular tech. But, again, Qualcomm has appealed — and the legal process is continuing, with a decision on the appeal possible this year.
Its investor filing notes it was granted a motion to expedite the appeal against the FTC in July — with a hearing scheduled for February 13, 2020.
Most recently, in August, the chipmaker won a partial stay against an earlier court decision that had required it to grant patent licenses to rivals and end its practice of requiring that chip customers sign a patent license before purchasing chips.
“We will continue to vigorously defend ourself in the foregoing matters. However, litigation and investigations are inherently uncertain, and we face difficulties in evaluating or estimating likely outcomes or ranges of possible loss in antitrust and trade regulation investigations in particular,” Qualcomm adds.

Tinder’s handling of user data is now under GDPR probe in Europe

Dating app Tinder is the latest tech service to find itself under formal investigation in Europe over how it handles user data.
Ireland’s Data Protection Commission (DPC) has today announced a formal probe of how Tinder processes users’ personal data; the transparency surrounding its ongoing processing; and its compliance with obligations with regard to data subject rights requests.
Under Europe’s General Data Protection Regulation (GDPR) EU citizens have a number of rights over their personal data — such as the right to request deletion or a copy of their data.
Entities processing people’s personal data must also have a valid legal basis for doing so.
Data security is another key consideration baked into the data protection regulation.
The DPC said complaints about the dating app have been made from individuals in multiple EU countries, not just in Ireland — with the Irish regulator taking the lead under a GDPR mechanism to manage cross-border investigations.
It said the Tinder probe came about as a result of active monitoring of complaints received from individuals “both in Ireland and across the EU” — in order to identify “thematic and possible systemic data protection issues”.
“The Inquiry of the DPC will set out to establish whether the company has a legal basis for the ongoing processing of its users’ personal data and whether it meets its obligations as a data controller with regard to transparency and its compliance with data subject right’s requests,” the DPC added.
It’s not clear at this stage exactly which GDPR rights Tinder users have complained about.
We’ve reached out to Tinder for a response.
Also today the DPC has finally responded to long-standing complaints by consumer rights groups about Google’s handling of location data — announcing a formal investigation of that too.

Google’s location tracking finally under formal probe in Europe

Google’s lead data regulator in Europe has finally opened a formal investigation into the tech giant’s processing of location data — more than a year after receiving a series of complaints from consumer rights groups across Europe.
The Irish Data Protection Commission (DPC) announced the probe today, writing in a statement that: “The issues raised within the concerns relate to the legality of Google’s processing of location data and the transparency surrounding that processing.”
“As such the DPC has commenced an own-volition Statutory Inquiry, with respect to Google Ireland Limited, pursuant to Section 110 of the Data Protection Act 2018 and in accordance with the co-operation mechanism outlined under Article 60 of the GDPR. The Inquiry will set out to establish whether Google has a valid legal basis for processing the location data of its users and whether it meets its obligations as a data controller with regard to transparency,” its notice added.
We’ve reached out to Google for comment.
BEUC, an umbrella group for European consumer rights groups, said the complaints about ‘deceptive’ location tracking were filed back in November 2018 — several months after the General Data Protection Regulation (GDPR) came into force, in May 2018.

It said the rights groups are concerned about how Google gathers information about the places people visit, which it says could grant private companies (including Google) the “power to draw conclusions about our personality, religion or sexual orientation, which can be deeply personal traits”.
The complaints argue that consent to “share” users’ location data is not valid under EU law because it is not freely given — an express stipulation of consent as a legal basis for processing personal data under the GDPR — arguing that consumers are rather being tricked into accepting “privacy-intrusive settings”.
It’s not clear why it’s taken the DPC so long to process the complaints and determine it needs to formally investigate. (We’ve asked for comment and will update with any response.)
BEUC certainly sounds unimpressed — saying it’s glad the regulator “eventually” took the step to look into Google’s “massive location data collection”.
“European consumers have been victim of these practices for far too long,” its press release adds. “BEUC expects the DPC to investigate Google’s practices at the time of our complaints, and not just from today. It is also important that the procedural rights of consumers who complained many months ago, and that of our members representing them, are respected.”
Commenting further in a statement, Monique Goyens, BEUC’s director general, also said: “Consumers should not be under commercial surveillance. They need authorities to defend them and to sanction those who break the law. Considering the scale of the problem, which affects millions of European consumers, this investigation should be a priority for the Irish data protection authority. As more than 14 months have passed since consumer groups first filed complaints about Google’s malpractice, it would be unacceptable for consumers who trust authorities if there were further delays. The credibility of the enforcement of the GDPR is at stake here.”
The Irish DPC has also been facing growing criticism over the length of time it’s taking to reach decisions on extant GDPR investigations.
A total of zero decisions on big tech cases have been issued by the regulator — some 20 months after GDPR came into force in May 2018.
As lead European regulator for multiple tech giants — a consequence of a GDPR mechanism that funnels cross-border complaints via a lead regulator, combined with the fact that so many tech firms choose to site their regional HQ in Ireland (which offers the carrot of attractive business rates) — the DPC does have a major backlog of complex cross-border cases.
However there is growing political and public pressure for enforcement action to demonstrate that the GDPR is functioning as intended — even as further questions are raised about how Ireland’s legal system will be able to manage so many cases.

Simplest way to show that Ireland will be unable to enforce #GDPR: They don’t even have enough judges for appeals of 4k* cases/year..
Ireland (4,9 M) has 176 judges (1 per 28k) Austria (8,5 Mio) has 1700 judges (1 per 5k) Germany (83 Mio) has 21339 judges (1 per 3,8k) pic.twitter.com/h9oj5VjOsu
— Max Schrems (@maxschrems) February 2, 2020

Google has felt the sting of GDPR enforcement elsewhere in the region; just over a year ago the French data watchdog, the CNIL, fined the company $57 million — for transparency and consent failures attached to the onboarding process for its Android mobile operating system.
But immediately following that decision Google switched the legal location of its international business to Ireland — meaning that GDPR complaints are now funnelled through the DPC.

UK Council websites are letting citizens be profiled for ads, study shows

On the same day that a data ethics advisor to the UK government urged action to regulate online targeting, a study conducted by pro-privacy browser Brave has highlighted how Brits are being profiled by the behavioral ad industry when they visit their local Council’s website — perhaps seeking info on local services or guidance about benefits, including potentially sensitive information related to addiction services or disabilities.
Brave found that nearly all UK Councils permit at least one company to learn about the behavior of people visiting their sites, with a full 409 Councils exposing some visitor data to private companies.
Many large councils (serving 300,000+ people) were found exposing site visitors to what Brave describes as “extensive tracking and data collection by private companies” — with the worst offenders, London’s Enfield and Sheffield City Councils, exposing visitors to 25 data collectors apiece.
Brave argues the findings represent a conservative illustration of how much commercial tracking and profiling of visitors is going on on public sector websites — a floor, rather than a ceiling — given it was only studying landing pages of Council sites without any user interaction, and could only pick up known trackers (nor could the study look at how data is passed between tracking and data brokering companies).
Nor is it the first such study to warn that public sector websites are infested with for-profit adtech. A report last year by Cookiebot found users of public sector and government websites in the EU being tracked when they performed health-related searches — including queries related to HIV, mental health, pregnancy, alcoholism and cancer.
Brave’s study — which was carried out using the webxray tool — found that almost all (98%) of the Councils used Google systems, with the report noting that the tech giant owns all five of the top embedded elements loaded by Council websites, which it suggests gives the company a god-like view of how UK citizens are interacting with their local authorities online.
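To give a flavor of the kind of check the study describes, here is a minimal sketch of a static third-party tracker audit in Python. It is an illustration only, not webxray itself (which instruments a real browser and watches its network traffic); the tracker list is truncated and the Council URL is a hypothetical placeholder.

```python
# Minimal sketch of a static third-party tracker audit (illustrative only;
# webxray drives a real browser, so it sees far more than embedded tags).
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse

# Truncated, illustrative list of known tracker domains.
KNOWN_TRACKERS = {"doubleclick.net", "google-analytics.com", "facebook.net"}

def third_party_trackers(page_url: str) -> set:
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    first_party = urlparse(page_url).hostname or ""
    found = set()
    # Inspect the hosts of embedded scripts, images and iframes.
    for tag in soup.find_all(["script", "img", "iframe"], src=True):
        host = urlparse(urljoin(page_url, tag["src"])).hostname or ""
        if host and not host.endswith(first_party):
            # Flag third-party hosts that match a known tracker domain.
            if any(host == t or host.endswith("." + t) for t in KNOWN_TRACKERS):
                found.add(host)
    return found

# Hypothetical example: audit a council site's landing page.
print(third_party_trackers("https://www.example-council.gov.uk/"))
```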
The analysis also found 198 of the Council websites use the real-time bidding (RTB) form of programmatic online advertising. This is notable because RTB is the subject of a number of data protection complaints across the European Union — including in the UK, where the Information Commissioner’s Office (ICO) itself has been warning the adtech industry for more than half a year that its current processes are in breach of data protection laws.
However the UK watchdog has preferred to bark softly in the industry’s general direction over its RTB problem, instead of taking any enforcement action — a response that’s been dubbed “disastrous” by privacy campaigners.
One of the smaller RTB players the report highlights — which calls itself the Council Advertising Network (CAN) — was found sharing people’s data from 34 Council websites with 22 companies, which could then be insecurely broadcasting it on to hundreds or more entities in the bid chain.
Slides from a CAN media pack refer to “budget conscious” direct marketing opportunities via the ability to target visitors to Council websites accessing pages about benefits, child care and free local activities; “disability” marketing opportunities via the ability to target visitors to Council websites accessing pages such as home care, blue badges and community and social services; and “key life stages” marketing opportunities via the ability to target visitors to Council websites accessing pages related to moving home, having a baby, getting married or losing a loved one.

This is from the Council Advertising Network’s media pack. CAN is a small operation. They are just trying to take a small slide of the Google and IAB “real-time bidding” cake. But this gives an insight in to how insidious this RTB stuff is. pic.twitter.com/b1tiZi1p4P
— Johnny Ryan (@johnnyryan) February 4, 2020

Brave’s report — while a clearly stated promotion for its own anti-tracking browser (given it’s a commercial player too) — should be seen in the context of the ICO’s ongoing failure to take enforcement action against RTB abuses. It’s therefore an attempt to increase pressure on the regulator to act by further illuminating a complex industry which has used a lack of transparency to shield massive rights abuses and continues to benefit from a lack of enforcement of Europe’s General Data Protection Regulation.
A low level of public understanding of how all the pieces in the adtech chain fit together and sum to a dysfunctional whole, where public services are turned against the citizens whose taxes fund them to track and target people for exploitative ads, likely also contributes to discouraging sharper regulatory action.
But, as the saying goes, sunlight disinfects.
Asked what steps he would like the regulator to take, Brave’s chief policy officer, Dr Johnny Ryan, told TechCrunch: “I want the ICO to use its powers of enforcement to end the UK’s largest data breach. That data breach continues, and two years to the day after I first blew the whistle about RTB, Simon McDougall wrote a blog post accepting Google and the IAB’s empty gestures as acts of substance. It is time for the ICO to move this over to its enforcement team, and stop wasting time.”
We’ve reached out to the ICO for a response to the report’s findings.

EU lawmakers take fresh aim at Apple’s Lightning connector with latest e-waste push

The European parliament has voted overwhelmingly for tougher action to reduce e-waste, calling for the Commission to come up with beefed up rules by July 2020.
Specifically, the parliament wants the Commission to adopt the delegated act foreseen in the 2014 Radio Equipment Directive by that deadline — or else table a legislative measure by the same date, at the latest.
The resolution, which was approved by 582 votes to 40, points out that MEPs have been calling for a single charger for mobile devices for more than a decade now. But the Commission has repeatedly postponed taking steps to force an industry-wide shift. Subtext: We’re tired of the ongoing charging cable nightmare.
The parliament says there is now “an urgent need” for EU regulatory action on the issue — to shrink e-waste, empower consumers to make sustainable choices, and allow EU citizens to “fully participate in an efficient and well-functioning internal market”.
The resolution notes that around 50 million metric tonnes of e-waste is generated globally per year, with an average of more than 6 kg per person.
While, in Europe in 2016, the figure for total e-waste generated was 12.3 million metric tonnes, equivalent to 16.6 kg on average per inhabitant — with the parliament asserting this represents “an unnecessary environmental footprint that can be reduced”.
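Those per-capita figures are easy to sanity-check. Assuming rough populations of 7.7 billion worldwide and about 740 million for geographic Europe (our assumptions, not numbers from the resolution), the totals and averages line up:

```python
# Back-of-envelope check of the resolution's per-capita e-waste figures.
# Population numbers are rough assumptions, not taken from the resolution.
global_ewaste_kg = 50e6 * 1000          # ~50 million metric tonnes a year
world_population = 7.7e9
print(global_ewaste_kg / world_population)    # ~6.5 kg per person

europe_ewaste_kg = 12.3e6 * 1000        # 12.3 million metric tonnes (2016)
europe_population = 740e6               # geographic Europe, approximately
print(europe_ewaste_kg / europe_population)   # ~16.6 kg per inhabitant
```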
To date, the Commission’s approach to the charger e-waste issue has been to lean on industry to take voluntary steps to reduce unnecessary variety. Which has resulted in a reduction of the number of charger types on the market — down from 30+ in 2009 to just three today — but still no universal charger which works across brands and device types (phones, tablets, e-readers etc).
Most notably, Apple continues to use its own Lightning port charger standard — while other device makers have switched to USB-based charging (such as the newest, USB-C standard).
When news emerged earlier this month of the parliament’s intention to vote on tougher measures to standardize mobile chargers Apple attacked the plan — arguing that regulation would ‘stifle innovation’.
But the tech giant has had plenty of years to chew over clever ways to switch from the proprietary charging port only it uses to one of two USB standards used by everyone else. So the ‘innovation’ argument seems a pretty stale one.
Meanwhile Apple has worked around previous EU attempts to push device makers to standardize charging on Micro USB by expanding its revenue-generating dongle collection — selling Europeans a Lightning to Micro USB adaptor, and thereby generating even more e-waste.
Perhaps picking up on Apple’s ‘innovation’ framing (a sidestep intended to duck the e-waste issue), the parliament also writes:
… that the Commission, without hampering innovation, should ensure that the legislative framework for a common charger will be scrutinised regularly in order to take into account technical progress; reiterates the importance of research and innovation in this domain to improve existing technologies and come up with new ones;
It also wants the Commission to grapple with the issue of wireless chargers — and take steps to ensure interoperability there too, so that wireless chargers aren’t locked to only one brand or device type.
Consumers should not be obliged to buy new chargers with each new device, per the resolution, with the parliament calling on the Commission to introduce strategies to decouple the purchase of chargers from a new device alongside a common charger solution — while making sure any decoupling measures do not result in higher prices for consumers.
It also wants the Commission to look at legislative options for increasing the volume of cables and chargers that are collected and recycled in EU member states.
We’ve reached out to the Commission for comment.
Per Reuters, officials in the executive are in agreement that the voluntary approach is not working and have said they plan to introduce legislation for a common charger this year.

No pan-EU Huawei ban as Commission endorses 5G risk mitigation plan

The European Commission has endorsed a risk mitigation approach to managing 5G rollouts across the bloc — meaning there will be no pan-EU ban on Huawei. Rather it’s calling for Member States to coordinate and implement a package of “mitigating measures” in a 5G toolbox it announced last October and has endorsed today.
“Through the toolbox, the Member States are committing to move forward in a joint manner based on an objective assessment of identified risks and proportionate mitigating measures,” it writes in a press release.
It adds that Member States have agreed to “strengthen security requirements, to assess the risk profiles of suppliers, to apply relevant restrictions for suppliers considered to be high risk including necessary exclusions for key assets considered as critical and sensitive (such as the core network functions), and to have strategies in place to ensure the diversification of vendors”.
The move is another blow for the Trump administration — after the UK government announced yesterday that it would not be banning so-called “high risk” providers from supplying 5G networks.
Instead the UK said it will place restrictions on such suppliers — barring their kit from the “sensitive” ‘core’ of 5G networks, as well as from certain strategic sites (such as military locations), and placing a 35% cap on such kit supplying the access network.
However the US has been amping up pressure on the international community to shut the door entirely on the Chinese tech giant, claiming there’s inherent strategic risk in allowing Huawei to be involved in supplying such critical infrastructure — with the Trump administration seeking to demolish trust in Chinese-made technology.
Next-gen 5G is expected to support a new breed of responsive applications — such as self-driving cars and personalized telemedicine — where risks, should there be any network failure, are likely to scale too.
But the Commission takes the view that such risks can be collectively managed.
The approach to 5G security continues to leave decisions on “specific security” measures as the responsibility of Member States. So there’s a possibility of individual countries making their own decisions to shut out Huawei. But in Europe the momentum appears to be against such moves.
“The collective work on the toolbox demonstrates a strong determination to jointly respond to the security challenges of 5G networks,” the EU writes. “This is essential for a successful and credible EU approach to 5G security and to ensure the continued openness of the internal market provided risk-based EU security requirements are respected.”
The next deadline for the 5G toolbox is April 2020, when the Commission expects Member States to have implemented the recommended measures. A joint report on their implementation will follow later this year.
Key actions being endorsed in the toolbox include:
    Strengthen security requirements for mobile network operators (e.g. strict access controls, rules on secure operation and monitoring, limitations on outsourcing of specific functions, etc.);
    Assess the risk profile of suppliers; as a consequence, apply relevant restrictions for suppliers considered to be high risk – including necessary exclusions to effectively mitigate risks – for key assets defined as critical and sensitive in the EU-wide coordinated risk assessment (e.g. core network functions, network management and orchestration functions, and access network functions);
    Ensure that each operator has an appropriate multi-vendor strategy to avoid or limit any major dependency on a single supplier (or suppliers with a similar risk profile), ensure an adequate balance of suppliers at national level and avoid dependency on suppliers considered to be high risk; this also requires avoiding any situations of lock-in with a single supplier, including by promoting greater interoperability of equipment;
The Commission also recommends that Member States should contribute towards increasing diversification and sustainability in the 5G supply chain and co-ordinate on standardization around security objectives and on developing EU-wide certification schemes.

Facebook’s dodgy defaults face more scrutiny in Europe

Italy’s competition authority has launched proceedings against Facebook for failing to fully inform users about the commercial uses it makes of their data.
At the same time a German court has today upheld a consumer group’s right to challenge the tech giant over data and privacy issues in the national courts.
Lack of transparency
The Italian authority’s action, which could result in a fine of €5 million for Facebook, follows an earlier decision by the regulator in November 2018 — when it found the company had not been dealing plainly with users about the underlying value exchange involved in signing up to the ‘free’ service, and fined Facebook €5M for failing to properly inform users how their information would be used commercially.
In a press notice about its latest action, the watchdog notes Facebook has removed a claim from its homepage — which had stated that the service ‘is free and always will be’ — but finds users are still not being informed, “with clarity and immediacy”, about how the tech giant monetizes their data.
The Authority had prohibited Facebook from continuing what it dubs “deceptive practice” and ordered it to publish an amending declaration on its homepage in Italy, as well as on the Facebook app and on the personal page of each registered Italian user.
In a statement responding to the watchdog’s latest action, a Facebook spokesperson told us:
We are reviewing the Authority decision. We made changes last year — including to our Terms of Service — to further clarify how Facebook makes money. These changes were part of our ongoing commitment to give people more transparency and control over their information.
Last year Italy’s data protection agency also fined Facebook $1.1M — in that case for privacy violations attached to the Cambridge Analytica data misuse scandal.
Dodgy defaults
In separate but related news, a ruling by a German court today found that Facebook can continue to use the advertising slogan that its service is ‘free and always will be’ — on the grounds that it does not require users to hand over monetary payments in exchange for using the service.
A local consumer rights group, vzbv, had sought to challenge Facebook’s use of the slogan — arguing it’s misleading, given the platform’s harvesting of user data for targeted ads. But the court disagreed.
However that was only one of a number of data protection complaints filed by the group — 26 in all. And the Berlin court found in its favor on a number of other fronts.
Significantly vzbv has won the right to bring data protection related legal challenges within Germany even with the pan-EU General Data Protection Regulation in force — opening the door to strategic litigation by consumer advocacy bodies and privacy rights groups in what is a very pro-privacy market. 
This looks interesting because one of Facebook’s favored legal arguments in a bid to derail privacy challenges at an EU Member State level has been to argue those courts lack jurisdiction — given that its European HQ is sited in Ireland (and GDPR includes provision for a one-stop shop mechanism that pushes cross-border complaints to a lead regulator).
But this ruling looks like it will make it tougher for Facebook to funnel all data and privacy complaints via the heavily backlogged Irish regulator — which has, for example, been sitting on a GDPR complaint over forced consent by adtech giants (including Facebook) since May 2018.
The Berlin court also agreed with vzbv’s argument that Facebook’s privacy settings and T&Cs violate laws around consent — such as a location service that comes pre-activated in the Facebook mobile app, and a pre-ticked setting that made users’ profiles indexable by search engines by default.
The court also agreed that certain pre-formulated conditions in Facebook’s T&Cs do not meet the required legal standard — such as a requirement that users agree to their name and profile picture being used “for commercial, sponsored or related content”, and another stipulation that users agree in advance to all future changes to the policy.
Commenting in a statement, Heiko Dünkel from the law enforcement team at vzbv, said: “It is not the first time that Facebook has been convicted of careless handling of its users’ data. The Chamber of Justice has made it clear that consumer advice centers can take action against violations of the GDPR.”
We’ve reached out to Facebook for a response.

London’s Met Police switches on live facial recognition, flying in face of human rights concerns

While EU lawmakers are mulling a temporary ban on the use of facial recognition to safeguard individuals’ rights, as part of a risk-focused plan to regulate AI, London’s Met Police has today forged ahead with deploying the privacy-hostile technology — flipping the switch on operational use of live facial recognition in the UK capital.
The deployment comes after a multi-year period of trials by the Met and police in South Wales.
The Met says its use of the controversial technology will be targeted to “specific locations… where intelligence suggests we are most likely to locate serious offenders”.
“Each deployment will have a bespoke ‘watch list’, made up of images of wanted individuals, predominantly those wanted for serious and violent offences,” it adds.
It also claims cameras will be “clearly signposted”, adding that officers “deployed to the operation will hand out leaflets about the activity”.
“At a deployment, cameras will be focused on a small, targeted area to scan passers-by,” it writes. “The technology, which is a standalone system, is not linked to any other imaging system, such as CCTV, body worn video or ANPR.”
The biometric system is being provided to the Met by Japanese IT and electronics giant, NEC.
In a press statement, assistant commissioner Nick Ephgrave claimed the force is taking a balanced approach to using the controversial tech.
“We all want to live and work in a city which is safe: the public rightly expect us to use widely available technology to stop criminals. Equally I have to be sure that we have the right safeguards and transparency in place to ensure that we protect people’s privacy and human rights. I believe our careful and considered deployment of live facial recognition strikes that balance,” he said.
London has seen a rise in violent crime in recent years, with murder rates hitting a ten-year peak last year.
The surge in violent crime has been linked to cuts to policing services — although the new Conservative government has pledged to reverse cuts enacted by earlier Tory administrations.
The Met says its hope for the AI-powered tech is that it will help it tackle serious crime, including serious violence, gun and knife crime and child sexual exploitation, and “help protect the vulnerable”.
However its phrasing is not a little ironic, given that facial recognition systems can be prone to racial bias, for example, owing to factors such as bias in data-sets used to train AI algorithms.
So in fact there’s a risk that police-use of facial recognition could further harm vulnerable groups who already face a disproportionate risk of inequality and discrimination.
Yet the Met’s PR doesn’t mention the risk of the AI tech automating bias.
Instead it takes pains to couch the technology as an “additional tool” to assist its officers.
“This is not a case of technology taking over from traditional policing; this is a system which simply gives police officers a ‘prompt’, suggesting ‘that person over there may be the person you’re looking for’, it is always the decision of an officer whether or not to engage with someone,” it adds.
While the use of a new tech tool may start with small deployments, as is being touted here, the history of software development underlines how the potential to scale is readily baked in.
A ‘targeted’ small-scale launch also prepares the ground for London’s police force to push for wider public acceptance of a highly controversial and rights-hostile technology via a gradual building out process. Aka surveillance creep.
On the flip side, the text of the draft of an EU proposal for regulating AI which leaked last week — floating the idea of a temporary ban on facial recognition in public places — noted that a ban would “safeguard the rights of individuals”. Although it’s not yet clear whether the Commission will favor such a blanket measure, even temporarily.
UK rights groups have reacted with alarm to the Met’s decision to ignore concerns about facial recognition.
Liberty accused the force of ignoring the conclusion of a report it commissioned during an earlier trial of the tech — which it says concluded the Met had failed to consider human rights impacts.
It also suggested such use would not meet key legal requirements.
“Human rights law requires that any interference with individuals’ rights be in accordance with the law, pursue a legitimate aim, and be ‘necessary in a democratic society’,” the report notes, suggesting the Met earlier trials of facial recognition tech “would be held unlawful if challenged before the courts”.

When the Met trialled #FacialRecognition tech, it commissioned an independent review of its use.
Its conclusions:
The Met failed to consider the human rights impact of the tech
Its use was unlikely to pass the key legal test of being “necessary in a democratic society”
— Liberty (@libertyhq) January 24, 2020

A petition set up by Liberty to demand a stop to facial recognition in public places has passed 21,000 signatures.
Discussing the legal framework around facial recognition and law enforcement last week, Dr Michael Veale, a lecturer in digital rights and regulation at UCL, told us that in his view the EU’s data protection framework, GDPR, forbids facial recognition by private companies “in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate”.
A UK man who challenged a Welsh police force’s trial of facial recognition has a pending appeal after losing the first round of a human rights challenge. Although in that case the challenge pertains to police use of the tech — rather than, as in the Met’s case, a private company (NEC) providing the service to the police.

Google’s Sundar Pichai doesn’t want you to be clear-eyed about AI’s dangers

Alphabet and Google CEO, Sundar Pichai, is the latest tech giant kingpin to make a public call for AI to be regulated while simultaneously encouraging lawmakers towards a dilute enabling framework that does not put any hard limits on what can be done with AI technologies.
In an op-ed published in today’s Financial Times, Pichai makes a headline-grabbing call for artificial intelligence to be regulated. But his pitch injects a suggestive undercurrent that puffs up the risk for humanity of not letting technologists get on with business as usual and apply AI at population-scale — with the Google chief claiming: “AI has the potential to improve billions of lives, and the biggest risk may be failing to do so” — thereby seeking to frame ‘no hard limits’ as actually the safest option for humanity.
Simultaneously the pitch downplays any negatives that might cloud the greater good that Pichai implies AI will unlock — presenting “potential negative consequences” as simply the inevitable and necessary price of technological progress.
It’s all about managing the level of risk, is the leading suggestion, rather than questioning outright whether the use of a hugely risk-laden technology such as facial recognition should actually be viable in a democratic society.
“Internal combustion engines allowed people to travel beyond their own areas but also caused more accidents,” Pichai writes, raiding history for a self-serving example while ignoring the vast climate costs of combustion engines (and the resulting threat now posed to the survival of countless species on Earth).
“The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread,” he goes on. “These lessons teach us that we need to be clear-eyed about what could go wrong.”
For “clear-eyed” read: Accepting of the technology-industry’s interpretation of ‘collateral damage’. (Which, in the case of misinformation and Facebook, appears to run to feeding democracy itself into the ad-targeting meat-grinder.)
Meanwhile, not at all mentioned in Pichai’s discussion of AI risks: The concentration of monopoly power that artificial intelligence appears to be very good at supercharging.
Funny that.
Of course it’s hardly surprising a tech giant that, in recent years, rebranded an entire research division to ‘Google AI’ — and has previously been called out by some of its own workforce over a project involving applying AI to military weapons technology — should be lobbying lawmakers to set AI ‘limits’ that are as dilute and abstract as possible.
For a tech giant, the only thing better than zero regulation is laws made by useful idiots who’ve fallen hook, line and sinker for industry-expounded false dichotomies — such as those claiming it’s ‘innovation or privacy’.
Pichai’s intervention also comes at a strategic moment, with US lawmakers eyeing AI regulation and the White House seemingly throwing itself into alignment with tech giants’ desires for ‘innovation-friendly’ rules which make their business easier. (To wit: This month White House CTO Michael Kratsios warned in a Bloomberg op-ed against “preemptive, burdensome or duplicative rules that would needlessly hamper AI innovation and growth”.)
The new European Commission, meanwhile, has been sounding a firmer line on both AI and big tech.
It has made tech-driven change a key policy priority, with president Ursula von der Leyen making public noises about reining in tech giants. She has also committed to publish “a coordinated European approach on the human and ethical implications of Artificial Intelligence” within her first 100 days in office. (She took up the post on December 1, 2019 so the clock is ticking.)
Last week a leaked draft of the Commission proposals for pan-EU AI regulation suggested it’s leaning towards a relatively light-touch approach (albeit, the European version of light touch is considerably more involved and interventionist than anything born in a Trump White House, clearly) — although the paper does float the idea of a temporary ban on the use of facial recognition technology in public places.
The paper notes that such a ban would “safeguard the rights of individuals, in particular against any possible abuse of the technology” — before arguing against such a “far-reaching measure that might hamper the development and uptake of this technology”, in favor of relying on provisions in existing EU law (such as the EU data protection framework, GDPR), in addition to relevant tweaks to current product safety and liability laws.
While it’s not yet clear which way the Commission will jump on regulating AI, even the lightish-touch version it’s considering would likely be a lot more onerous than Pichai would like.
In the op-ed he calls for what he couches as “sensible regulation” — aka taking a “proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities”.
For “social opportunities” read: the plentiful ‘business opportunities’ Google is eyeing — assuming the hoped-for vast additional revenue it can generate by supercharging the expansion of AI-powered services into all sorts of industries and sectors (from health to transportation to everywhere else in between) isn’t derailed by hard legal limits on where AI can actually be applied.
“Regulation can provide broad guidance while allowing for tailored implementation in different sectors,” Pichai urges, setting out a preference for enabling “principles” and post-application “reviews”, to keep the AI spice flowing.
The op-ed only touches very briefly on facial recognition — despite the FT editors choosing to illustrate it with an image of the tech. Here Pichai again seeks to reframe the debate around what is, by nature, an extremely rights-hostile technology — talking only in passing of “nefarious uses” of facial recognition.
Of course this wilfully obfuscates the inherent risks of letting blackbox machines make algorithmic guesses at identity every time a face happens to pass through a public space.
You can’t hope to protect people’s privacy in such a scenario. Many other rights are also at risk, depending on what else the technology is being used for. So, really, any use of facial recognition is laden with individual and societal risk.
But Pichai is seeking to put blinkers on lawmakers. He doesn’t want them to see inherent risks baked into such a potent and powerful technology — pushing them towards only a narrow, ill-intended subset of “nefarious” and “negative” AI uses and “consequences” as being worthy of “real concerns”. 
And so he returns to banging the drum for “a principled and regulated approach to applying AI” [emphasis ours] — putting the emphasis on regulation that, above all, gives the green light for AI to be applied.
What technologists fear most here is rules that tell them when artificial intelligence absolutely cannot be applied.
Ethics and principles are, to a degree, mutable concepts — and ones which the tech giants have become very practiced at claiming as their own, for PR purposes, including by attaching self-styled ‘guard-rails’ to their own AI operations. (But of course there’s no actual legal binds there.)
At the same time data-mining giants like Google are very smooth operators when it comes to gaming existing EU rules around data protection, such as by infesting their user-interfaces with confusing dark patterns that push people to click or swipe their rights away.
But a ban on applying certain types of AI would change the rules of the game. Because it would put society in the driving seat.
Laws that contained at least a moratorium on certain “dangerous” applications of AI — such as facial recognition technology, or autonomous weapons like the drone-based system Google was previously working on — have been called for by some far-sighted regulators.
And a ban would be far harder for platform giants to simply bend to their will.

So for a while I was willing to buy into the whole tech ethics thing but now I’m fully on the side of tech refusal. We need to be teaching refusal.
— Jonathan Senchyne (@jsench) January 16, 2020

EU lawmakers are eyeing risk-based rules for AI, per leaked white paper

The European Commission is considering a temporary ban on the use of facial recognition technology, according to a draft proposal for regulating artificial intelligence obtained by Euractiv.
Creating rules to ensure AI is ‘trustworthy and human’ has been an early flagship policy promise of the new Commission, led by president Ursula von der Leyen.
But the leaked proposal suggests the EU’s executive body is in fact leaning towards tweaks of existing rules and sector/app specific risk-assessments and requirements, rather than anything as firm as blanket sectoral requirements or bans.
The leaked Commission white paper floats the idea of a three-to-five-year period in which the use of facial recognition technology could be prohibited in public places — to give EU lawmakers time to devise ways to assess and manage risks around the use of the technology, such as to people’s privacy rights or the risk of discriminatory impacts from biased algorithms.
“This would safeguard the rights of individuals, in particular against any possible abuse of the technology,” the Commission writes, adding that: “It would be necessary to foresee some exceptions, notably for activities in the context of research and development and for security purposes.”
However the text raises immediate concerns about imposing even a time-limited ban — which is described as “a far-reaching measure that might hamper the development and uptake of this technology” — and the Commission goes on to state that its preference “at this stage” is to rely on existing EU data protection rules, aka the General Data Protection Regulation (GDPR).
The white paper contains a number of options the Commission is still considering for regulating the use of artificial intelligence more generally.
These range from voluntary labelling; to imposing sectorial requirements for the public sector (including on the use of facial recognition tech); to mandatory risk-based requirements for “high-risk” applications (such as within risky sectors like healthcare, transport, policing and the judiciary, as well as for applications which can “produce legal effects for the individual or the legal entity or pose risk of injury, death or significant material damage”); to targeted amendments to existing EU product safety and liability legislation.
The proposal also emphasizes the need for an oversight governance regime to ensure rules are followed — though the Commission suggests leaving it open to Member States to choose whether to rely on existing governance bodies for this task or create new ones dedicated to regulating AI.
Per the draft white paper, the Commission says its preference for regulating AI is option 3 combined with options 4 & 5: aka mandatory risk-based requirements on developers (of whatever sub-set of AI apps are deemed “high-risk”) that could result in some “mandatory criteria”, combined with relevant tweaks to existing product safety and liability legislation, and an overarching governance framework.
Hence it appears to be leaning towards a relatively light-touch approach, focused on “building on existing EU legislation” and creating app-specific rules for a sub-set of “high-risk” AI apps/uses — and which likely won’t stretch to even a temporary ban on facial recognition technology.
Much of the white paper is also taken up with discussion of strategies for “supporting the development and uptake of AI” and “facilitating access to data”.
“This risk-based approach would focus on areas where the public is at risk or an important legal interest is at stake,” the Commission writes. “This strictly targeted approach would not add any new additional administrative burden on applications that are deemed ‘low-risk’.”
EU commissioner Thierry Breton, who oversees the internal market portfolio, expressed resistance to creating rules for artificial intelligence last year — telling the EU parliament then that he “won’t be the voice of regulating AI“.
For “low-risk” AI apps, the white paper notes that provisions in the GDPR which give individuals the right to receive information about automated processing and profiling, and set a requirement to carry out a data protection impact assessment, would apply.
Albeit the regulation only defines limited rights and restrictions over automated processing — in instances where there’s a legal or similarly significant effect on the people involved. So it’s not clear how extensively it would in fact apply to “low-risk” apps.
If it’s the Commission’s intention to also rely on GDPR to regulate higher risk stuff — such as, for example, police forces’ use of facial recognition tech — instead of creating a more explicit sectoral framework to restrict their use of highly privacy-hostile AI technologies, it could exacerbate an already confusing legislative picture where law enforcement is concerned, according to Dr Michael Veale, a lecturer in digital rights and regulation at UCL.
“The situation is extremely unclear in the area of law enforcement, and particularly the use of public private partnerships in law enforcement. I would argue the GDPR in practice forbids facial recognition by private companies in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate. However, the merchants of doubt at facial recognition firms wish to sow heavy uncertainty into that area of law to legitimise their businesses,” he told TechCrunch.
“As a result, extra clarity would be extremely welcome,” Veale added. “The issue isn’t restricted to facial recognition however: Any type of biometric monitoring, such as voice or gait recognition, should be covered by any ban, because in practice they have the same effect on individuals.”
An advisory body set up to advise the Commission on AI policy set out a number of recommendations in a report last year — including suggesting a ban on the use of AI for mass surveillance and social credit scoring systems of citizens.
But its recommendations were criticized by privacy and rights experts for falling short by failing to grasp wider societal power imbalances and structural inequality issues which AI risks exacerbating — including by supercharging existing rights-eroding business models.
In a paper last year Veale dubbed the advisory body’s work a “missed opportunity” — writing that the group “largely ignore infrastructure and power, which should be one of, if not the most, central concern around the regulation and governance of data, optimisation and ‘artificial intelligence’ in Europe going forwards”.

Privacy experts slam UK’s “disastrous” failure to tackle unlawful adtech

The UK’s data protection regulator has been slammed by privacy experts for once again failing to take enforcement action over systematic breaches of the law linked to behaviorally targeted ads — despite warning last summer that the adtech industry is out of control.
The Information Commissioner’s Office (ICO) has also previously admitted it suspects the real-time bidding (RTB) system involved in some programmatic online advertising to be unlawfully processing people’s sensitive information. But rather than take any enforcement action against companies it suspects of law breaches, it has today issued another mildly worded blog post — in which it frames what it admits is a “systemic problem” as fixable via (yet more) industry-led “reform”.
Yet it’s exactly such industry-led self-regulation that’s created the unlawful adtech mess in the first place, data protection experts warn.
The pervasive profiling of Internet users by the adtech ‘data industrial complex’ has been coming under wider scrutiny by lawmakers and civic society in recent years — with sweeping concerns being raised in parliaments around the world that individually targeted ads provide a conduit for discrimination, exploit the vulnerable, accelerate misinformation and undermine democratic processes as a consequence of platform asymmetries and the lack of transparency around how ads are targeted.
In Europe, which has a comprehensive framework of data protection rights, the core privacy complaint is that these creepy individually targeted ads rely on a systemic violation of people’s privacy from what amounts to industry-wide, Internet-enabled mass surveillance — which also risks the security of people’s data at vast scale.
It’s now almost a year and a half since the ICO was the recipient of a major complaint into RTB — filed by Dr Johnny Ryan of private browser Brave; Jim Killock, director of the Open Rights Group; and Dr Michael Veale, a data and policy lecturer at University College London — laying out what the complainants described then as “wide-scale and systemic” breaches of Europe’s data protection regime.
The complaint — which has also been filed with other EU data protection agencies — argues that the systematic broadcasting of people’s personal data to bidders in the adtech chain is inherently insecure and thereby contravenes Europe’s General Data Protection Regulation (GDPR), which stipulates that personal data be processed “in a manner that ensures appropriate security of the personal data”.
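To make concrete what that broadcasting involves, here is a simplified sketch of the kind of bid request an RTB exchange can fire at many bidders at once when a page loads. The field names follow the public OpenRTB convention, but the values, and the segment label, are invented for illustration:

```python
# Simplified, illustrative OpenRTB-style bid request. Field names follow
# the public OpenRTB convention; all values here are invented.
import json

bid_request = {
    "id": "auction-8f3c",
    "site": {
        # The page being read can itself be sensitive information.
        "page": "https://example.com/benefits/addiction-support",
    },
    "device": {
        "ua": "Mozilla/5.0 (Linux; Android 10) ...",
        "ip": "203.0.113.42",                    # full or truncated IP
        "geo": {"lat": 51.5072, "lon": -0.1276},
    },
    "user": {
        "id": "c9a6d1e0-...",                    # pseudonymous ad ID
        # Interest-segment codes previously assigned by trackers.
        "data": [{"segment": [{"id": "seg-benefits-claimant"}]}],
    },
}

# In RTB this payload goes to every bidder in the auction, win or lose;
# that broadcast is the core of the complainants' security objection.
print(json.dumps(bid_request, indent=2))
```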
The regulation also requires data processors to have a valid legal basis for processing people’s information in the first place — and RTB fails that test, per privacy experts, whether ‘consent’ is claimed (given the sheer number of entities and volumes of data being passed around, it’s not credible to achieve GDPR’s ‘informed, specific and freely given’ threshold for consent to be valid) or ‘legitimate interests’ — which requires that data processors carry out a number of balancing tests to demonstrate the basis does actually apply.
“We have reviewed a number of justifications for the use of legitimate interests as the lawful basis for the processing of personal data in RTB. Our current view is that the justification offered by organisations is insufficient,” writes Simon McDougall, the ICO’s executive director of technology and innovation, delivering a warning over the industry’s rampant misuse of legitimate interests to try to pass off RTB’s unlawful data processing as legit.
The ICO also isn’t exactly happy about what it’s found adtech doing on the Data Protection Impact Assessment front — saying, in so many words, that it’s come across widespread industry failure to actually, er, assess impacts.
“The Data Protection Impact Assessments we have seen have been generally immature, lack appropriate detail, and do not follow the ICO’s recommended steps to assess the risk to the rights and freedoms of the individual,” writes McDougall.
“We have also seen examples of basic data protection controls around security, data retention and data sharing being insufficient,” he adds.
Yet — again — despite fresh admissions of adtech’s lawfulness problem, the regulator is opting for more of the same inaction.
In the blog post McDougall does not rule out taking “formal” action at some point — but there’s only a vague suggestion of such activity being possible, and zero timeline for “develop[ing] an appropriate regulatory response”, as he puts it. (His preferred ‘E’ word in the blog is ‘engagement’; you’ll only find the word ‘enforcement’ in the footer link on the ICO’s website.)
“We will continue to investigate RTB. While it is too soon to speculate on the outcome of that investigation, given our understanding of the lack of maturity in some parts of this industry we anticipate it may be necessary to take formal regulatory action and will continue to progress our work on that basis,” he adds.
McDougall also trumpets some incremental industry fiddling — such as trade bodies agreeing to update their guidance — as somehow relevant to turning the tanker in a fundamentally broken system.
(Trade body the Internet Advertising Bureau’s UK branch has responded to developments with an upbeat note from its head of policy and regulatory affairs, Christie Dennehy-Neil, who lauds the ICO’s engagement as “a constructive process”, claiming: “We have made good progress” — before going on to urge its members and the wider industry to implement “the actions outlined in our response to the ICO” and “deliver meaningful change”. The statement climaxes with: “We look forward to continuing to engage with the ICO as this process develops.”)
McDougall also points to Google removing content categories from its RTB platform from next month (a move it announced months back, in November) as an important development; and seizes on the tech giant’s recent announcement of a proposal to phase out support for third party cookies within the next two years as ‘encouraging’.
Privacy experts have responded with facepalmed outrage to yet another can-kicking exercise by the UK regulator — warning that cosmetic tweaks to adtech won’t fix a system that’s designed to feast off unlawful and insecure high velocity background trading of Internet users’ personal data.
“When an industry is premised and profiting from clear and entrenched illegality that breach individuals’ fundamental rights, engagement is not a suitable remedy,” said UCL’s Veale. “The ICO cannot continue to look back at its past precedents for enforcement action, because it is exactly that timid approach that has led us to where we are now.”

ICO believes that cosmetic fixes can do the job when it comes to #adtech. But no matter how secure data flows are and how beautiful cookie notices are, can people really understand the consequences of their consent? I’m convinced that this consent will *never* be informed. 1/2 https://t.co/1avYt6lgV3
— Karolina Iwańska (@ka_iwanska) January 17, 2020

The trio behind the RTB complaints (which includes Veale) have also issued a scathing collective response to more “regulatory ambivalence” — denouncing the lack of any “substantive action to end the largest data breach ever recorded in the UK”.
“The ‘Real-Time Bidding’ data breach at the heart of RTB market exposes every person in the UK to mass profiling, and the attendant risks of manipulation and discrimination,” they warn. “Regulatory ambivalence cannot continue. The longer this data breach festers, the deeper the rot sets in and the further our data gets exploited. This must end. We are considering all options to put an end to the systemic breach, including direct challenges to the controllers and judicial oversight of the ICO.”
Wolfie Christl, a privacy researcher who focuses on adtech — including contributing to a recent study looking at how extensively popular apps are sharing user data with advertisers — dubbed the ICO’s response “disastrous”.
“Last summer the ICO stated in their report that millions of people were affected by thousands of companies’ GDPR violations. I was sceptical when they announced they would give the industry six more months without enforcing the law. My impression is they are trying to find a way to impose cosmetic changes and keep the data industry happy rather than acting on their own findings and putting an end to the ubiquitous data misuse in today’s digital marketing, which should have happened years ago. The ICO seems to prioritize appeasing the industry over the rights of data subjects, and this is disastrous,” he told us.
“The way data-driven online marketing currently works is illegal at scale and it needs to be stopped from happening,” Christl added. “Each day EU data protection authorities allow these practices to continue further violates people’s rights and freedoms and perpetuates a toxic digital economy.
“This undermines the GDPR and generally trust in tech, perpetuates legal uncertainty for businesses, and punishes companies who comply and create privacy-respecting services and business models. 20 months after the GDPR came into full force, it is still not enforced in major areas. We still see large-scale misuse of personal information all over the digital world. There is no GDPR enforcement against the tech giants and there is no enforcement against thousands of data companies beyond the large platforms. It seems that data protection authorities across the EU are either not able — or not willing — to stop many kinds of GDPR violations conducted for business purposes. We won’t see any change without massive fines and data processing bans. EU member states and the EU commission must act.”

Bolt raises €50M in venture debt from the EU to expand its ride-hailing business

Bolt, the billion-dollar startup out of Estonia that’s building a ride-hailing, scooter and food delivery business across Europe and Africa, has picked up a tranche of funding in its bid to take on Uber and the rest in the world of on-demand transportation.
The company has picked up €50 million (about $56 million) from the European Investment Bank to continue developing its technology and safety features, as well as to expand newer areas of its business such as food delivery and personal transport like e-scooters.
With this latest money, Bolt has raised over €250 million in funding since opening for business in 2013. As of its last equity round in July 2019 (when it raised $67 million) it was valued at over $1 billion — a valuation Bolt confirmed to me still stands.
Bolt further said that its service now has over 30 million users in 150 cities and 35 countries and is profitable in two-thirds of its markets.
“Bolt is a good example of European excellence in tech and innovation. As you say, to stand still is to go backwards, and Bolt is never standing still,” said the EIB’s vice president Alexander Stubb in a statement. “The Bank is very happy to support the company in improving its services, as well as allowing it to branch out into new service fields. In other words, we’re fully on board!”
The EIB is the non-profit, long-term lending arm of the European Union, and this financing comes in the form of a quasi-equity facility.
Also known as venture debt, the financing is structured as a loan, where repayment terms are based on a percentage of future revenue streams, and ownership is not diluted. The funding is backed in turn by the European Fund for Strategic Investments, as part of a bigger strategy to boost promising companies, and specifically riskier startups, in the tech industry. It expects to make and spur some €458.8 billion in investments across 1 million startups and SMEs.
Opting for a “quasi-equity” loan instead of a straight equity or debt investment is attractive to Bolt for a couple of reasons. One is that the funding comes without ownership dilution. The other is the endorsement and support of the EU itself, in a market category where tech disruptors have been known to run afoul of regulators and lawmakers, in part because of the ubiquity and nature of the transportation/mobility industry.
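For readers unfamiliar with the structure, here is a toy model of how revenue-linked repayment differs from a fixed loan schedule. All the numbers (revenue share, cap, growth rate) are invented for illustration; the EIB’s actual terms with Bolt have not been disclosed.

```python
# Toy model of a revenue-based ("quasi-equity") loan: the borrower repays a
# fixed share of revenue each period until a repayment cap is hit, so no
# equity changes hands. All numbers are invented for illustration.

principal = 50_000_000           # EUR borrowed
repayment_cap = 1.6 * principal  # hypothetical total owed
revenue_share = 0.04             # hypothetical 4% of revenue per year

repaid = 0.0
year = 0
annual_revenue = 300_000_000     # hypothetical starting revenue
while repaid < repayment_cap:
    year += 1
    payment = min(revenue_share * annual_revenue, repayment_cap - repaid)
    repaid += payment
    print(f"Year {year}: revenue €{annual_revenue:,.0f} -> pay €{payment:,.0f}")
    annual_revenue *= 1.2        # assume 20% annual growth

# Faster growth means faster repayment; slow growth stretches the schedule
# rather than triggering default, which is why the structure suits riskier
# startups.
```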
“Mobility is one of the areas where Europe will really benefit from a local champion who shares the values of European consumers and regulators,” said Martin Villig, the co-founder of Bolt, in a statement. “Therefore, we are thrilled to have the European Investment Bank join the ranks of Bolt’s backers as this enables us to move faster towards serving many more people in Europe.”
(Butting heads with authorities is something that Bolt is no stranger to: it tried to enter the lucrative London taxi market through a backdoor to bypass the waiting time to get a license. It really didn’t work, and the company had to wait another 21 months to come to London doing it by the book. In its first six months of operation in London, the company has picked up 1.5 million customers.)
While private VCs account for the majority of startup funding, backing from government groups is an interesting and strategic route for tech companies that are making waves in large industries that sit adjacent to technology. Before it was acquired by PayPal, iZettle also picked up a round of funding from the EIB specifically to invest in its AI R&D. Navya, the self-driving bus and shuttle startup, has also raised money from the EIB in the past.
One of the big issues dogging on-demand transportation companies has been safety — indeed, this is at the center of Uber’s latest scuffle in Europe, where London’s transport regulator has rejected a license renewal for the company over concerns about Uber’s safety record. (Uber is appealing and, while it does, it’s business as usual.)
So it’s no surprise that with this funding, Bolt says that it will be specifically using the money to develop technology to “improve the safety, reliability and sustainability of its services while maintaining the high efficiency of the company’s operations.”
Bolt is one of a group of companies that have been hatched out of Estonia, which has worked to position itself as a leader in Europe’s tech industry as part of its own economic regeneration in the decades after existing as part of the Soviet Union (it formally regained independence in 1991). The EIB has invested around €830 million in Estonian projects in the last five years.
“Estonia is at the forefront of digital transformation in Europe,” said Paolo Gentiloni, European Commissioner for the Economy, in a statement. “I am proud that Europe, through the Investment Plan, supports Estonian platform Bolt’s research and development strategy to create innovative and safe services that will enhance urban mobility.”

Mass surveillance for national security does conflict with EU privacy rights, court advisor suggests

Mass surveillance regimes in the UK, Belgium and France which require bulk collection of digital data for a national security purpose may be at least partially in breach of fundamental privacy rights of European Union citizens, per the opinion of an influential advisor to Europe’s top court issued today.
Advocate general Campos Sánchez-Bordona’s (non-legally binding) opinion, which pertains to four references to the Court of Justice of the European Union (CJEU), takes the view that EU law covering the privacy of electronic communications applies in principle when providers of digital services are required by national laws to retain subscriber data for national security purposes.
A number of cases related to EU states’ surveillance powers and citizens’ privacy rights are dealt with in the opinion, including legal challenges brought by rights advocacy group Privacy International to bulk collection powers enshrined in the UK’s Investigatory Powers Act; and a La Quadrature du Net (and others’) challenge to a 2015 French decree related to specialized intelligence services.
At stake is a now familiar argument: Privacy groups contend that states’ bulk data collection and retention regimes have overreached the law, becoming so indiscriminately intrusive as to breach fundamental EU privacy rights — while states counter-claim they must collect and retain citizens’ data in bulk in order to fight national security threats such as terrorism.
Hence, in recent years, we’ve seen attempts by certain EU Member States to create national frameworks which effectively rubberstamp swingeing surveillance powers — that then, in turn, invite legal challenge under EU law.
The AG opinion holds with previous case law from the CJEU — specifically the Tele2 Sverige and Watson judgments — that “general and indiscriminate retention of all traffic and location data of all subscribers and registered users is disproportionate”, as the press release puts it.
Instead the recommendation is for “limited and discriminate retention” — along with “limited access to that data”.
“The Advocate General maintains that the fight against terrorism must not be considered solely in terms of practical effectiveness, but in terms of legal effectiveness, so that its means and methods should be compatible with the requirements of the rule of law, under which power and strength are subject to the limits of the law and, in particular, to a legal order that finds in the defence of fundamental rights the reason and purpose of its existence,” runs the PR in a particularly elegant passage summarizing the opinion.
The French legislation is deemed to fail on a number of fronts, including for imposing “general and indiscriminate” data retention obligations, and for failing to include provisions to notify data subjects that their information is being processed by a state authority where such notifications are possible without jeopardizing its action.
Belgian legislation also falls foul of EU law, per the opinion, for imposing a “general and indiscriminate” obligation on digital service providers to retain data — with the AG also flagging that its objectives are problematically broad (“not only the fight against terrorism and serious crime, but also defence of the territory, public security, the investigation, detection and prosecution of less serious offences”).
The UK’s bulk surveillance regime is similarly seen by the AG to fail the core “general and indiscriminate collection” test.
There’s a slight carve out for national legislation that’s incompatible with EU law being, in Sánchez-Bordona’s view, permitted to maintain its effects “on an exceptional and temporary basis”. But only if such a situation is justified by what is described as “overriding considerations relating to threats to public security or national security that cannot be addressed by other means or other alternatives, but only for as long as is strictly necessary to correct the incompatibility with EU law”.
If the court follows the opinion it’s possible states might seek to interpret such an exceptional provision as a degree of wiggle room to keep unlawful regimes running further past their legal sell-by-date.
Similarly, there could be questions over what exactly constitutes “limited” and “discriminate” data collection and retention — which could encourage states to push a ‘maximal’ interpretation of where the legal line lies.
Nonetheless, privacy advocates are viewing the opinion as a positive sign for the defence of fundamental rights.
In a statement welcoming the opinion, Privacy International dubbed it “a win for privacy”. “We all benefit when robust rights schemes, like the EU Charter of Fundamental Rights, are applied and followed,” said legal director, Caroline Wilson Palow. “If the Court agrees with the AG’s opinion, then unlawful bulk surveillance schemes, including one operated by the UK, will be reined in.”
The CJEU will issue its ruling at a later date — typically three to six months after an AG opinion.
The opinion comes at a key time given European Commission lawmakers are set to rethink a plan to update the ePrivacy Directive, which deals with the privacy of electronic communications, after Member States failed to reach agreement last year over an earlier proposal for an ePrivacy Regulation — so the AG’s view will likely feed into that process.

This makes the revised e-Privacy Regulation a *huge* national security battleground for the MSes (they will miss the UK fighting for more surveillance) and is v relevant also to the ongoing debates on “bulk”/mass surveillance, and MI5’s latest requests… #ePR
— Ian Brown (@1Br0wn) January 15, 2020

The opinion may also have an impact on other legislative processes — such as the talks on the EU e-evidence package and negotiations on various international agreements on cross-border access to e-evidence — according to Luca Tosoni, a research fellow at the Norwegian Research Center for Computers and Law at the University of Oslo.
“It is worth noting that, under Article 4(2) of the Treaty on the European Union, “national security remains the sole responsibility of each Member State”. Yet, the advocate general’s opinion suggests that this provision does not exclude that EU data protection rules may have direct implications for national security,” Tosoni also pointed out. 
“Should the Court decide to follow the opinion… ‘metadata’ such as traffic and location data will remain subject to a high level of protection in the European Union, even when they are accessed for national security purposes.  This would require several Member States — including Belgium, France, the UK and others — to amend their domestic legislation.”


Dating and fertility apps among those snitching to “out of control” adtech, report finds

The latest report to warn that surveillance capitalism is out of control — and ‘free’ digital services can in fact be very costly to people’s privacy and rights — comes courtesy of the Norwegian Consumer Council which has published an analysis of how popular apps are sharing user data with the behavioral ad industry.
It suggests smartphone users have little hope of escaping adtech’s pervasive profiling machinery — short of not using a smartphone at all.
A majority of the apps that were tested for the report were found to transmit data to “unexpected third parties” — with users not being clearly informed about who was getting their information and what they were doing with it. Most of the apps also did not provide any meaningful options or on-board settings for users to prevent or reduce the sharing of data with third parties.
“The evidence keeps mounting against the commercial surveillance systems at the heart of online advertising,” the Council writes, dubbing the current situation “completely out of control, harming consumers, societies, and businesses”, and calling for curbs to prevalent practices in which app users’ personal data is broadcast and spread “with few restraints”. 
“The multitude of violations of fundamental rights are happening at a rate of billions of times per second, all in the name of profiling and targeting advertising. It is time for a serious debate about whether the surveillance-driven advertising systems that have taken over the internet, and which are economic drivers of misinformation online, is a fair trade-off for the possibility of showing slightly more relevant ads.
“The comprehensive digital surveillance happening across the adtech industry may lead to harm to both individuals, to trust in the digital economy, and to democratic institutions,” it also warns.
In the report app users’ data is documented being shared with tech giants such as Facebook, Google and Twitter — which operate their own mobile ad platforms and/or other key infrastructure related to the collection and sharing of smartphone users’ data for ad targeting purposes — but also with scores of other faceless entities that the average consumer is unlikely to have heard of.
The Council commissioned a data flow analysis of ten popular apps running on Google’s Android smartphone platform — generating a snapshot of the privacy blackhole that mobile users inexorably tumble into when they try to go about their digital business, despite the existence (in Europe) of a legal framework that’s supposed to protect people by giving citizens a swathe of rights over their personal data.
Among the findings are a make-up filter app sharing the precise GPS coordinates of its users; ovulation-, period- and mood-tracking apps sharing users’ intimate personal data with Facebook and Google (among others); dating apps exchanging user data with each other, and also sharing with third parties sensitive user info like individuals’ sexual preferences (and real-time device specific tells such as sensor data from the gyroscope… ); and a games app for young children that was found to contain 25 embedded SDKs and which shared the Android Advertising ID of a test device with eight third parties.
The ten apps whose data flows were analyzed for the report are the dating apps Grindr, Happn, OkCupid, and Tinder; fertility/period tracker apps Clue and MyDays; makeup app Perfect365; religious app Muslim: Qibla Finder; children’s app My Talking Tom 2; and the keyboard app Wave Keyboard.
“Altogether, Mnemonic [the company which the Council commissioned to conduct the technical analysis] observed data transmissions from the apps to 216 different domains belonging to a large number of companies. Based on their analysis of the apps and data transmissions, they have identified at least 135 companies related to advertising. One app, Perfect365, was observed communicating with at least 72 different such companies,” the report notes.
“Because of the scope of tests, size of the third parties that were observed receiving data, and popularity of the apps, we regard the findings from these tests to be representative of widespread practices in the adtech industry,” it adds.
Aside from the usual suspect (ad)tech giants, less well-known entities seen receiving user data include location data brokers Fysical, Fluxloop, Placer, Places/Foursquare, Safegraph and Unacast; behavioral ad targeting players like Receptiv/Verve, Neura, Braze and LeanPlum; mobile app marketing analytics firms like AppsFlyer; and ad platforms and exchanges like AdColony, AT&T’s AppNexus, Bucksense, OpenX, PubNative, Smaato and Vungle.
In the report the Forbrukerrådet concludes that the pervasive tracking of smartphone users which underpins the behavioral ad industry is all but impossible for smartphone users to escape — even if they are able to locate an on-device setting to opt out of behavioral ads.
This is because multiple identifiers are being attached to them and their devices, and also because of frequent sharing/syncing of identifiers by adtech players across the industry. (It also points out that on the Android platform a setting where users can opt-out of behavioral ads does not actually obscure the identifier — meaning users have to take it on trust that adtech entities won’t just ignore their request and track them anyway.)
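The identifier syncing the report describes can be sketched schematically. In practice syncs typically run via chained pixel redirects carrying each party’s IDs as URL parameters; the class and IDs below are hypothetical, intended only to show why clearing one identifier doesn’t break the join.

```python
# Schematic of identifier syncing between two adtech firms. Real syncs happen
# via chained pixel redirects; here each firm simply records the mapping.
# All names and IDs are hypothetical.

class AdTechFirm:
    def __init__(self, name: str):
        self.name = name
        self.partner_ids: dict[str, dict[str, str]] = {}  # our_id -> {partner: their_id}

    def sync(self, own_id: str, partner: "AdTechFirm", partner_id: str) -> None:
        # After one sync, both firms can join their profiles of the same
        # person -- even if the user later clears one of the identifiers.
        self.partner_ids.setdefault(own_id, {})[partner.name] = partner_id
        partner.partner_ids.setdefault(partner_id, {})[self.name] = own_id

exchange = AdTechFirm("exchange")
broker = AdTechFirm("data-broker")
exchange.sync("user-1138", broker, "profile-77aa")

print(exchange.partner_ids)  # {'user-1138': {'data-broker': 'profile-77aa'}}
print(broker.partner_ids)    # {'profile-77aa': {'exchange': 'user-1138'}}
```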
The Council argues its findings suggest widespread breaches of Europe’s General Data Protection Regulation (GDPR), given that key principles of that pan-EU framework — such as data protection by design and default — are in stark conflict with the systematic, pervasive background profiling of app users it found (apps were, for instance, found sharing personal data by default, requiring users to actively seek out an obscure device setting to try to prevent being profiled).
“The extent of tracking and complexity of the adtech industry is incomprehensible to consumers, meaning that individuals cannot make informed choices about how their personal data is collected, shared and used. Consequently, the massive commercial surveillance going on throughout the adtech industry is systematically at odds with our fundamental rights and freedoms,” it also argues.
Where (user) consent is being relied upon as a legal basis to process personal data the standard required by GDPR states it must be informed, freely given and specific.
But the Council’s analysis of the apps found them sorely lacking on that front.
“In the cases described in this report, none of the apps or third parties appear to fulfil the legal conditions for collecting valid consent,” it writes. “Data subjects are not informed of how their personal data is shared and used in a clear and understandable way, and there are no granular choices regarding use of data that is not necessary for the functionality of the consumer-facing services.”
It also dismisses another possible legal basis — known as legitimate interests — arguing app users “cannot have a reasonable expectation for the amount of data sharing and the variety of purposes their personal data is used for in these cases”.
The report points out that other forms of digital advertising (such as contextual advertising) which do not rely on third parties processing personal data are available — arguing that further undermines any adtech industry claims of ‘legitimate interests’ as a valid base for helping themselves to smartphone users’ data.
“The large amount of personal data being sent to a variety of third parties, who all have their own purposes and policies for data processing, constitutes a widespread violation of data subjects’ privacy,” the Council argues. “Even if advertising is necessary to provide services free of charge, these violations of privacy are not strictly necessary in order to provide digital ads. Consequently, it seems unlikely that the legitimate interests that these companies may claim to have can be demonstrated to override the fundamental rights and freedoms of the data subject.”
The suggestion, therefore, is that “a large number of third parties that collect consumer data for purposes such as behavioural profiling, targeted advertising and real-time bidding, are in breach of the General Data Protection Regulation”.
The report also discusses the harms attached to such widespread violation of privacy — pointing to risks such as discrimination and manipulation of vulnerable individuals, as well as chilling effects on speech, added fuel for ad fraud and the torching of trust in the digital economy, among other society-afflicting ills fuelled by adtech’s obsession with profiling everyone…
Some of the harm of this data exploitation stems from significant knowledge and power asymmetries that render consumers powerless. The overarching lack of transparency of the system makes consumers vulnerable to manipulation, particularly when unknown companies know almost everything about the individual consumer. However, even if regular consumers had comprehensive knowledge of the technologies and systems driving the adtech industry, there would still be very limited ways to stop or control the data exploitation.
Since the number and complexity of actors involved in digital marketing is staggering, consumers have no meaningful ways to resist or otherwise protect themselves from the effects of profiling. These effects include different forms of discrimination and exclusion, data being used for new and unknowable purposes, widespread fraud, and the chilling effects of massive commercial surveillance systems. In the long run, these issues are also contributing to the erosion of trust in the digital industry, which may have serious consequences for the digital economy.
To shift what it dubs the “significant power imbalance between consumers and third party companies”, the Council calls for an end to the current practices of “extensive tracking and profiling” — either by companies changing their practices to “respect consumers’ rights”, or — where they won’t — urging national regulators and enforcement authorities to “take active enforcement measures, to establish legal precedent to protect consumers against the illegal exploitation of personal data”.
It’s fair to say that enforcement of GDPR remains a work in progress at this stage, some 20 months after the regulation came into force back in May 2018 — with scores of cross-border complaints yet to culminate in a decision (though there have been a couple of interesting adtech- and consent-related enforcements in France).
We reached out to Ireland’s Data Protection Commission (DPC) and the UK’s Information Commissioner’s Office (ICO) for comment on the Council’s report. The Irish regulator has multiple investigations ongoing into various aspects of adtech and tech giants’ handling of online privacy, including a probe related to security concerns attached to Google’s ad exchange and the real-time bidding process which features in some programmatic advertising. It has previously suggested the first decisions from its hefty backlog of GDPR complaints will be coming early this year. But at the time of writing the DPC had not responded to our request for comment on the report.


A spokeswoman for the ICO — which last year put out its own warnings to the behavioral advertising industry, urging it to change its practices — sent us this statement, attributed to Simon McDougall, its executive director for technology and innovation, in which he says the regulator has been prioritizing engaging with the adtech industry over its use of personal data and has called for change itself — but which does not once mention the word ‘enforcement’…
Over the past year we have prioritised engagement with the adtech industry on the use of personal data in programmatic advertising and real-time bidding.
Along the way we have seen increased debate and discussion, including reports like these, which factor into our approach where appropriate. We have also seen a general acknowledgment that things can’t continue as they have been.
Our 2019 update report into adtech highlights our concerns, and our revised guidance on the use of cookies gives greater clarity over what good looks like in this area.
Whilst industry has welcomed our report and recognises change is needed, there remains much more to be done to address the issues. Our engagement has substantiated many of the concerns we raised and, at the same time, we have also made some real progress.
Throughout the last year we have been clear that if change does not happen we would consider taking action. We will be saying more about our next steps soon – but as is the case with all of our powers, any future action will be proportionate and risk-based.

Cross-border investments aren’t dead, they’re getting smarter

Yohei Nakajima
Contributor


Yohei Nakajima is an investor at Scrum Ventures, an early-stage venture capital fund based in San Francisco, and Senior Vice President at Scrum Studio, helping global corporations connect and work with innovative startups.

In recent years, the venture capital and startup worlds have seen a significant shift towards globalization. More and more startups are going global and breaking borders — such as payments giant Stripe’s recent expansion to Latin America, e-scooter startup Bird’s massive European expansion, or fashion subscription service Le Tote’s (an investment in our portfolio) entrance into China.
Likewise, more VC funds are spanning geographies, in both their investment focus and the limited partners, or LPs, who fuel those investments. While Silicon Valley is very much seen as an epicenter for tech, it is no longer the sole home of innovation, with new technology hubs rising across the world from Israel to the UK to Latin America and beyond.
Yet, many have commented on a shift or slowdown of globalization, or “slobalization,” in recent months. Whether it be from the current political climate or other factors, it’s been said that there’s been a marked decrease in cross-border investments of late — leading to the question: Is the world still interested in U.S. startups?
To answer this and better understand the hunger from foreign investors in participating in U.S. funding rounds, both from a geographic and stage perspective, I looked at Crunchbase data on U.S. seed and VC rounds between 2009 and 2018. The data shows that cross-border investments are far from dead — but they are getting smarter and perhaps even more global with the rise of investments from Asia.

Cookie consent tools are being used to undermine EU privacy rules, study suggests

Most cookie consent pop-ups served to Internet users in the European Union — ostensibly seeking permission to track people’s web activity — are likely to be flouting regional privacy laws, a new study by researchers at MIT, UCL and Aarhus University suggests.
“The results of our empirical survey of CMPs [consent management platforms] today illustrates the extent to which illegal practices prevail, with vendors of CMPs turning a blind eye to — or worse, incentivising — clearly illegal configurations of their systems,” the researchers argue, adding that: “Enforcement in this area is sorely lacking.”
Their findings, published in a paper entitled Dark Patterns after the GDPR: Scraping Consent Pop-ups and Demonstrating their Influence, chime with another piece of research we covered back in August — which also concluded a majority of the current implementations of cookie notices offer no meaningful choice to Europe’s Internet users — even though EU law requires one.
When consent is being relied upon as the legal basis for processing web users’ personal data, the bar for valid (i.e. legal) consent that’s set by the EU’s General Data Protection Regulation (GDPR) is clear: It must be informed, specific and freely given.
Recent jurisprudence by the Court of Justice of the European Union also further crystallized the law around cookies, making it clear that consent must be actively signalled — meaning a digital service cannot infer consent to tracking by indirect actions (such as the pop-up being closed by the user without a response or ignored in favor of interacting with the service).
Many websites use a so-called CMP to solicit consent to tracking cookies. But if it’s configured to contain pre-ticked boxes that opt users into sharing data by default — requiring an affirmative user action to opt out — any gathered ‘consent’ also isn’t legal.
Consent to tracking must also be obtained prior to a digital service dropping or accessing a cookie; only service-essential cookies can be deployed without asking first.
All of which means — per EU law — it should be equally easy for website visitors to choose not to be tracked as to agree to their personal data being processed.
However the Dark Patterns after the GDPR study found that’s very far from the case right now.
“We found that dark patterns and implied consent are ubiquitous,” the researchers write in summary, saying that only slightly more than one in ten (11.8%) of the CMPs they looked at “meet the minimal requirements that we set based on European law” — which they define as being “if it has no optional boxes pre-ticked, if rejection is as easy as acceptance, and if consent is explicit”.
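Those three minimal requirements translate directly into a simple predicate. A sketch (the CMPConfig structure is a hypothetical stand-in; the criteria themselves are the paper’s):

```python
# The paper's three minimal requirements for a lawful consent pop-up,
# expressed as a predicate. The CMPConfig structure is hypothetical; the
# criteria are taken from the study.

from dataclasses import dataclass

@dataclass
class CMPConfig:
    has_preticked_boxes: bool   # any optional purpose/vendor ticked by default?
    clicks_to_accept_all: int   # e.g. 1 if "accept all" is on the first layer
    clicks_to_reject_all: int   # a large number if no reject option exists
    consent_is_explicit: bool   # affirmative act required (no scroll/close inference)

def meets_minimal_requirements(cmp: CMPConfig) -> bool:
    return (
        not cmp.has_preticked_boxes
        and cmp.clicks_to_reject_all <= cmp.clicks_to_accept_all
        and cmp.consent_is_explicit
    )

# A typical configuration from the study: reject buried one layer deep.
typical = CMPConfig(has_preticked_boxes=True, clicks_to_accept_all=1,
                    clicks_to_reject_all=2, consent_is_explicit=False)
print(meets_minimal_requirements(typical))  # False -- like ~88% of sites studied
```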
For the study, the researchers scraped the top 10,000 UK websites, as ranked by Alexa, to gather data on the most prevalent CMPs in the market — which are made by five companies: QuantCast, OneTrust, TrustArc, Cookiebot, and Crownpeak — and analyzed how the design and configurations of these tools affected Internet users’ choices. (They obtained a data set of 680 CMP instances via their method — a sample they calculate is representative of at least 57% of the total population of the top 10k sites that run a CMP, given prior research found only around a fifth do so.)
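A crude version of the detection step can be done by fetching a site’s front page and looking for vendor script markers in the HTML. A minimal sketch, with the caveat that the marker strings are illustrative guesses rather than the study’s actual detection rules, which would also need to catch dynamically injected scripts:

```python
# Minimal CMP detector: fetch a page and look for vendor markers in the HTML.
# The marker strings are illustrative; robust detection would also inspect
# scripts injected at runtime.

import requests

CMP_MARKERS = {
    "QuantCast": "quantcast",
    "OneTrust": "cookielaw.org",
    "TrustArc": "trustarc",
    "Cookiebot": "cookiebot",
    "Crownpeak": "evidon",
}

def detect_cmp(url: str) -> list[str]:
    html = requests.get(url, timeout=10).text.lower()
    return [vendor for vendor, marker in CMP_MARKERS.items() if marker in html]

if __name__ == "__main__":
    print(detect_cmp("https://example.com"))  # e.g. ['OneTrust']
```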
Implicit consent — aka (illegally) inferring consent via non-affirmative user actions (such as the user visiting or scrolling on the website or a failure to respond to a consent pop-up or closing it without a response) — was found to be common (32.5%) among the studied sites.
“Popular CMP implementation wizards still allow their clients to choose implied consent, even when they have already indicated the CMP should check whether the visitor’s IP is within the geographical scope of the EU, which should be mutually exclusive,” they note, arguing that: “This raises significant questions over adherence with the concept of data protection by design in the GDPR.”
They also found that the vast majority of CMPs make rejecting all tracking “substantially more difficult than accepting it” — with a majority (50.1%) of studied sites not having a ‘reject all’ button. While only a tiny minority (12.6%) of sites had a ‘reject all’ button accessible with the same or fewer number of clicks as an ‘accept all’ button.
Or, to put it another way, ‘Ohhai dark pattern design‘…
“An ‘accept all’ button was never buried in a second layer,” the researchers go on to point out, also finding that “74.3% of reject all buttons were one layer deep, requiring two clicks to press; 0.9% of them were two layers away, requiring at minimum three.”
Pre-ticked boxes were found to be widely deployed in the studied CMPs as well — despite such a setting not being legally valid. (On this they found: “56.2% of sites pre-ticked optional vendors or purposes/categories, with 54.1% of sites pre-ticking optional purposes, 32.3% pre-ticking optional categories, and 30.3% pre-ticking both”.)
They also point out that the high number of third-party trackers routinely being used by sites poses a major problem for the EU consent model — given it requires a “prohibitively long time” for users to become clearly informed enough to be able to legally consent.
The exact number of third party trackers they found being packed like sardines into CMPs varied — with between tens and several hundreds in play depending on the site.
Fifty-eight was the lowest number they encountered. While the highest instance was 542 vendors — on an implementation of QuantCast’s CMP. (And, well, just imagine the ‘friction’ involved in manually unticking all those, assuming that was one of the sites that also lacked a ‘reject all’ button… )
Sites relied on a large number of third party trackers, which would take a prohibitively long time for users to inform themselves about clearly. Out of the 85.4% of sites that did list vendors (e.g. third party trackers) within the CMP, there was a median number of 315 vendors (low. quartile 58, upp. quartile 542). Different CMP vendors have different average numbers of vendors, with the highest being QuantCast at 542… 75% of sites had over 58 vendors. 76.47% of sites provide some descriptions of their vendors. The mean total length of these descriptions per site is 7,985 words: roughly 31.9 minutes of reading for the average 250 words-per-minute reader, not counting interaction time to e.g. unfold collapsed boxes or navigating to and reading specific privacy policies of a vendor.
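The reading-time estimate follows from simple arithmetic over the report’s own figures:

```python
# Reproducing the report's reading-time estimate from its stated numbers.
mean_description_words = 7_985
words_per_minute = 250
print(mean_description_words / words_per_minute)  # ~31.9 minutes, as stated
```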
A second part of the research involved a field experiment involving 40 participants to investigate how the eight most common CMP designs affect Internet users’ consent choices.
“We found that notification style (banner or barrier) has no effect [on consent choice]; removing the opt-out button from the first page increases consent by 22–23 percentage points; and providing more granular controls on the first page decreases consent by 8–20 percentage points,” they write in summary on that.
They argue this portion of the study supports the notion that two of the most common consent interface designs – “not showing a ‘reject all’ button on the first page; and showing bulk options before showing granular control” – make it more likely for users to provide consent, thereby “violating the [GDPR] principle of “freely given””.
They also make reference to “qualitative reflections” of the participants in the paper — which were obtained via a survey after individuals’ consent choices had been registered during the field study — suggesting these responses “put into question the entire notice-and-consent model not because of specific design decisions but merely because an action is required before the user can accomplish their main task and because they appear too frequently if they are shown on a website-by-website basis”.
So, in other words, just the fact of interrupting a web user to ask them to make a choice may itself apply substantial enough pressure that it might render any resulting ‘consent’ invalid.
The study’s finding of the prevalence of manipulative designs and configurations intended to nudge or even force consent suggests Internet users in Europe are not actually benefiting from a legal framework that’s supposed to protect their digital data from unwanted exploitation — and are rather being subject to a lot of noisy, distracting and disingenuous ‘consent theatre’.
Cookie notices not only generate friction and frustration for the average Internet user, as they try to go about their daily business online, but the current situation is creating a faux veneer of compliance — atop what is actually a massive trampling of rights via what amounts to digital daylight robbery of people’s data at scale.
The problem here is that EU regulators have for years looked the other way where online tracking is concerned, failing entirely to enforce the on-paper standard.
Enforcement is indeed sorely lacking, as the researchers note. (Industry lobbying/political pressure, limited resources, risk aversion and regulatory capture, and a legacy of inaction around digital rights are all likely to blame.)
And while the GDPR only started being applied in May 2018, Europe has had regulations on data-gathering mechanisms like cookies for approaching two decades — with the paper pointing out that an amendment to the ePrivacy Directive all the way back in 2002 made it a requirement that “storing or accessing information on a user’s device not ‘strictly necessary’ for providing an explicitly requested service requires both clear and comprehensive information and opt-in consent”.
Asked about the research findings, lead author, Midas Nouwens, questioned why CMP vendors are selling so-called ‘compliance’ tools that allow for non-compliant configurations in the first place.
“It’s sad, but I don’t think anyone is surprised anymore by how few pop-ups comply with the GDPR,” he told TechCrunch. “What is shocking is how non-compliant interface designs are allowed by the companies that provide consent pop-ups. Why do they let their clients count scrolling as consent or bury the decline button somewhere on the third page?”
“Enforcement is really the next big challenge if we don’t want the GDPR to go down the same path as the ePrivacy directive,” he added. “Since enforcement agencies have limited resources, focusing on the popular consent pop-up providers could be a much more effective strategy than targeting individual websites.
“Unfortunately, while we wait for enforcement, the dark patterns in these pop-ups are still manipulating people into being tracked.”
Another of the researchers behind the paper, Michael Veale, a lecturer in digital rights and regulation at UCL, also expressed shock that CMP vendors are allowing their tools to be configured in ways which are clearly intended to manipulate Internet users — thereby flouting the law.
In the paper the researchers urge regulators to take a smarter approach to tackling such widespread violation, such as by making use of automated tools “to expedite discovery and enforcement” of non-compliant cookie notices, and suggest they work “further upstream” — such as by placing requirements on the vendors of CMPs “to only allow compliant designs to be placed on the market”.
“It’s shocking to see how many of the large providers of consent pop-ups allow their systems to be misconfigured, such as through implicit consent, in ways that clearly infringe data protection law,” Veale told us, adding: “I suspect data protection authorities see this widespread illegality and are not sure exactly where to start. Yet if they do not start enforcing these guidelines, it’s unclear when this widespread illegality will start to stop.”
“This study even overestimates compliance, as we don’t focus on what actually happens to the tracking when you click on these buttons, which other recent studies have emphasised in many cases mislead individuals and do nothing at all,” he also pointed out.
We reached out to the UK’s data protection watchdog, the ICO, for a response to the research — and a spokeswoman pointed us to this cookie advice blog post it published last year, saying the advice it contains “still stands”.
In the blog Ali Shah, the ICO’s head of technology policy, suggests there could be some (albeit limited) action from the regulator this year to clean up cookie consent, with Shah writing that: “Cookie compliance will be an increasing regulatory priority for the ICO in the future. However, as is the case with all our powers, any future action would be proportionate and risk-based.”
While European citizens wait for data protection regulators to take meaningful action over systematic breaches of the GDPR — including those attached to consent-less tracking of web users — there is one step European web users can take to shrink the pain of cookie consent pop-ups: The researchers behind the study have built an open source browser extension that can automatically answer pop-ups based on user-customizable preferences.
It’s called Consent-o-Matic — and there are versions available for Firefox and Chrome.

A holiday gift from us* at @AarhusUni: Consent-o-Matic! A browser extension that automatically answers consent pop-ups for you.
Firefox: https://t.co/5PhAEN6eOd
Chrome: https://t.co/ob8xrLxhFW
Github: https://t.co/0Xe9xNwCEb
* @cklokmose; Janus Bager Kristensen; Rolf Bagge
1/8 pic.twitter.com/3ooV8ZFTH0
— Midas Nouwens (@MidasNouwens) December 24, 2019

At release the tool can automatically respond to cookie banners built by the five big CMP suppliers (QuantCast, OneTrust, TrustArc, Cookiebot, and Crownpeak).
But since it’s open source, the hope is others will build on it to expand the types of pop-ups it’s able to auto-respond to. In the absence of a legally enforced ‘Do Not Track’ browser standard, this is about as good as it gets for Internet users desperately seeking easier agency over the online tracking industry.
In a Twitter thread last month announcing the tool, Nouwens described the project as making use of “adversarial interoperability” as a pro-privacy tactic.
“Automating consent and privacy preferences is not new (DNT and P3P), but this project uses adversarial interoperability, rather than rely on industry self-regulation or buy-in from fundamentally opposed stakeholders (browsers, advertisers, publishers),” he observed.
However he added one caveat, reminding users to be on their guard for further non-compliance from the data suckers — pointing to the earlier research paper also flagged by Veale which found a small portion of sites (~7%) entirely ignore responses to cookie pop-ups and track users regardless of response.
So sometimes even a seamlessly automated ‘no’ to tracking might still sum to being tracked…
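One way to check for that failure mode is to answer ‘no’ automatically and then look at what gets set anyway. A rough sketch using Selenium (the CSS selectors are hypothetical stand-ins for per-CMP rules, and a thorough audit would also watch network traffic, as the research cited above did):

```python
# Rough check for sites that ignore a rejection: load a page, auto-click a
# likely "reject" button, then see which cookies were set anyway. The CSS
# selectors are hypothetical stand-ins for per-CMP rules.

from selenium import webdriver
from selenium.webdriver.common.by import By

REJECT_SELECTORS = ["#reject-all", "button[aria-label='Reject all']"]

driver = webdriver.Chrome()
driver.get("https://example.com")

for selector in REJECT_SELECTORS:
    buttons = driver.find_elements(By.CSS_SELECTOR, selector)
    if buttons:
        buttons[0].click()
        break

# Any non-essential cookies still present after rejection are a red flag --
# though a full audit would inspect outgoing network requests too.
for cookie in driver.get_cookies():
    print(cookie["name"], cookie["domain"])
driver.quit()
```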


DuckDuckGo still critical of Google’s EU Android choice screen auction, after winning a universal slot

Google has announced which search engines have won an auction process it has devised for an Android ‘choice screen’ — as its response to an antitrust intervention by the region’s competition regulator.
The prompt is shown to users of Android smartphones in the European Union as they set up a device, asking them to choose a search engine from a list of four which always includes Google’s own search engine.
In mid-2018 the European Commission fined Google $5BN for antitrust violations attached to how it operates the Android platform, including related to how it bundles its own services with the dominant smartphone OS, and ordered it to remedy the infringements — while leaving it up to the tech giant to devise a fix.
Google responded by creating a choice screen for Android users to pick a search engine from a short list — with the initial choices seemingly based on local market share. But last summer it announced it would move to auctioning slots on the screen via a fixed sealed-bid auction process.
The big winners of the initial auction, for the period March 1, 2020 to June 30, 2020, are pro-privacy search engine DuckDuckGo — which gets one of the three slots in all 31 European markets — and a product called Info.com, which will also be shown as an option in all those markets. (Per Wikipedia, the latter is a veteran metasearch engine that provides results from multiple search engines and directories, including Google.)
French pro-privacy search engine Qwant will be shown as an option to Android users in eight European markets, while Russia’s Yandex will appear as an option in five markets in the east of the region.
Other search engines that will appear as choices in a minority of the European markets are GMX, Seznam, Givero and PrivacyWall.
At a glance the big loser looks to be Microsoft’s Bing search engine — which will only appear as an option on the choice screen shown in the UK.
Tree-planting search engine Ecosia does not appear anywhere on the list at all, despite appearing on some initial Android choice screens — having taken the decision to boycott the auction in protest at Google’s ‘pay-to-play’ approach.
Ecosia CEO Christian Kroll told the BBC: “We believe this auction is at odds with the spirit of the July 2018 EU Commission ruling. Internet users deserve a free choice over which search engine they use and the response of Google with this auction is an affront to our right to a free, open and federated internet. Why is Google able to pick and choose who gets default status on Android?”
It’s not the only search engine critical of Google’s move, with Qwant and DuckDuckGo both raising concerns immediately the move to a paid auction was announced last year.
Despite participating in the process — and winning a universal slot — DuckDuckGo told us it still does not agree with Google’s pay-to-play auction.
“We believe a search preference menu is an excellent way to meaningfully increase consumer choice if designed properly. Our own research has reinforced this point and we look forward to the day when Android users in Europe will have the opportunity to easily make DuckDuckGo their default search engine while setting up their phones. However, we still believe a pay-to-play auction with only 4 slots isn’t right because it means consumers won’t get all the choices they deserve and Google will profit at the expense of the competition,” a spokesperson said in a statement.

Will online privacy make a comeback in 2020?

Last year was a landmark for online privacy in many ways, with something of a consensus emerging that consumers deserve protection from the companies that sell their attention and behavior for profit.
The debate now is largely around how to regulate platforms, not whether it needs to happen.
The consensus among key legislators acknowledges that privacy is not just of benefit to individuals but can be likened to public health; a level of protection afforded to each of us helps inoculate democratic societies from manipulation by vested and vicious interests.
The fact that human rights are being systematically abused at population-scale because of the pervasive profiling of Internet users — a surveillance business that’s dominated in the West by tech giants Facebook and Google, and the adtech and data broker industry which works to feed them — was the subject of an Amnesty International report in November 2019 that urges legislators to take a human rights-based approach to setting rules for Internet companies.
“It is now evident that the era of self-regulation in the tech sector is coming to an end,” the charity predicted.
Democracy disrupted
The dystopian outgrowth of surveillance capitalism was certainly in awful evidence in 2019, with elections around the world attacked at cheap scale by malicious propaganda that relies on adtech platforms’ targeting tools to hijack and skew public debate, while the chaos agents themselves are shielded from democratic view.
Platform algorithms are also still encouraging Internet eyeballs towards polarized and extremist views by feeding a radicalized, data-driven diet that panders to prejudices in the name of maintaining engagement — despite plenty of raised voices calling out the programmed antisocial behavior. So what tweaks there have been still look like fiddling round the edges of an existential problem.
Worse still, vulnerable groups remain at the mercy of online hate speech which platforms not only can’t (or won’t) weed out, but whose algorithms often seem to deliberately choose to amplify — the technology itself being complicit in whipping up violence against minorities. It’s social division as a profit-turning service.
The outrage-loving tilt of these attention-hogging adtech giants has also continued directly influencing political campaigning in the West this year — with cynical attempts to steal votes by shamelessly platforming and amplifying misinformation.
From the Trump tweet-bomb we now see full-blown digital disinformation operations underpinning entire election campaigns, such as the UK Conservative Party’s strategy in the 2019 winter General Election, which featured doctored videos seeded to social media and keyword targeted attack ads pointing to outright online fakes in a bid to hack voters’ opinions.
Political microtargeting divides the electorate as a strategy to conquer the poll. The problem is it’s inherently anti-democratic.
No wonder, then, that repeat calls to beef up digital campaigning rules and properly protect voters’ data have so far fallen on deaf ears. The political parties all have their hands in the voter data cookie-jar. Yet it’s elected politicians whom we rely upon to update the law. This remains a grave problem for democracies going into 2020 — and a looming U.S. presidential election.
So it’s been a year when, even with rising awareness of the societal cost of letting platforms suck up everyone’s data and repurpose it to sell population-scale manipulation, not much has actually changed. Certainly not enough.
Yet looking ahead there are signs the writing is on the wall for the ‘data industrial complex’ — or at least that change is coming. Privacy can make a comeback.
Adtech under attack
Developments in late 2019 such as Twitter banning all political ads and Google shrinking how political advertisers can microtarget Internet users are notable steps — even as they don’t go far enough.
But it’s also a relatively short hop from banning microtargeting sometimes to banning profiling for ad targeting entirely.

*Very* big news last night in internet political ads. @Google’s plan to eliminate #microtargeting is a move that – if done right – could help make internet political advertising a force that informs and inspires us, rather than isolating and inflaming us.
1/9
— Ellen L Weintraub (@EllenLWeintraub) November 21, 2019

Alternative online ad models (contextual targeting) are proven and profitable — just ask search engine DuckDuckGo. Meanwhile the ad industry gospel that only behavioral targeting will do now has academic critics who suggest it offers far less uplift than claimed, even as — in Europe — scores of data protection complaints underline the high individual cost of maintaining the status quo.
Startups are also innovating in the pro-privacy adtech space (see, for example, the Brave browser).
Changing the system — turning the adtech tanker — will take huge effort, but there is a growing opportunity for just such systemic change.
This year, it might be too much to hope that regulators get their act together enough to outlaw consent-less profiling of Internet users entirely. But it may be that those who have sought to proclaim ‘privacy is dead’ will find their unchecked data gathering facing death by a thousand regulatory cuts.
Or, tech giants like Facebook and Google may simply outrun the regulators by reengineering their platforms to cloak vast personal data empires with end-to-end encryption, making it harder for outsiders to regulate them, even as they retain enough of a fix on the metadata to stay in the surveillance business. Fixing that would likely require much more radical regulatory intervention.
European regulators are, whether they like it or not, in this race and under major pressure to enforce the bloc’s existing data protection framework. It seems likely to ding some current-gen digital tracking and targeting practices. And depending on how key decisions on a number of strategic GDPR complaints go, 2020 could see an unpicking — great or otherwise — of components of adtech’s dysfunctional ‘norm’.
Among the technologies under investigation in the region is real-time bidding: a system that powers a large chunk of programmatic digital advertising.
The complaint is that RTB breaches the bloc’s General Data Protection Regulation (GDPR) because it is inherently insecure to broadcast granular personal data to the scores of entities involved in the bidding chain.
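To make that ‘broadcast’ objection concrete, here is a minimal, hypothetical sketch in Python of how a single ad impression fans out to bidders. The field names loosely follow the OpenRTB convention, but the values, the DSP class and the auction loop are illustrative assumptions on our part, not any exchange’s actual implementation.

```python
from dataclasses import dataclass
from typing import List

# One ad impression triggers a bid request that is fanned out to every
# demand-side platform (DSP) connected to the exchange. Field names loosely
# follow the OpenRTB convention; all values here are invented.
BID_REQUEST = {
    "id": "req-8f3a",  # unique auction ID
    "site": {"page": "https://example.com/health/depression-support"},
    "device": {
        "geo": {"lat": 51.5072, "lon": -0.1276},  # fine-grained location
        "ifa": "6D92078A-8246-4BA4",              # advertising identifier
    },
    "user": {"id": "exchange-uid-123"},           # persistent user ID
}

@dataclass
class DSP:
    name: str
    price: float

    def bid(self, request: dict) -> float:
        # Every connected DSP receives the full request, location and IDs
        # included, whether or not it goes on to bid or win.
        return self.price

def run_auction(request: dict, dsps: List[DSP]) -> DSP:
    """Broadcast the request to all bidders and return the highest one."""
    return max(dsps, key=lambda dsp: dsp.bid(request))

# In production the fan-out can reach hundreds of companies per impression.
winner = run_auction(BID_REQUEST, [DSP("dsp-a", 1.20), DSP("dsp-b", 0.95)])
print(winner.name)  # -> dsp-a
```

The point critics make is structural: the personal data leaves the publisher’s control at the broadcast step, before any bid is won, which is why removing a field here or there doesn’t satisfy them.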
A recent event held by the UK’s data watchdog confirmed plenty of troubling findings. Google responded by removing some information from bid requests — though critics say it does not go far enough. Nothing short of removing personal data entirely will do in their view, which sums to ads that are contextually (not micro)targeted.
Powers that EU data protection watchdogs have at their disposal to deal with violations include not just big fines but data processing orders — which means corrective relief could be coming to take chunks out of data-dependent business models.
As noted above, the adtech industry has already been put on watch this year over current practices, even as it was given a generous half-year grace period to adapt.
In the event it seems likely that turning the ship will take longer. But the message is clear: change is coming. The UK watchdog is due to publish another report in 2020, based on its review of the sector. Expect that to further dial up the pressure on adtech.
Web browsers have also been doing their bit by baking in more tracker blocking by default. And this summer Marketing Land proclaimed the third-party cookie dead — asking what’s next?
Alternatives and workarounds are already springing up, and more will follow (such as stuffing more into first-party cookies). But the notion of tracking as the background default is under attack, even if it’s not quite yet coming unstuck.
Ireland’s DPC is also progressing on a formal investigation of Google’s online Ad Exchange. Further real-time bidding complaints have been lodged across the EU too. This is an issue that won’t be going away soon, however much the adtech industry might wish it.
Year of the GDPR banhammer?
2020 is the year that privacy advocates are really hoping that Europe will bring down the hammer of regulatory enforcement. Thousands of complaints have been filed since the GDPR came into force but precious few decisions have been handed down. Next year looks set to be decisive — even potentially make or break for the data protection regime.

Adtech told to keep calm and fix its ‘lawfulness’ problem

Six months after warning that the real-time bidding (RTB) component of programmatic online advertising is wildly out of control — i.e. in a breaking the law sense — the UK’s data protection watchdog has marked half a year’s regulatory inaction with a blog post that entreats the adtech industry to come up with a solution to an “industry problem”. 
Casual readers of the ICO’s pre-Christmas message for European law-flouting adtech might be forgiven for thinking it looks a lot like the regulator telling the industry to ‘keep calm and carry on regulating yourselves’.
More informed readers, who understand that RTB is a process which (currently) entails systematic, privacy-eviscerating high velocity trading of people’s personal data for the purpose of targeting them with ads, might feel moved to point out that self-regulation is a core part of why adtech is in the abject mess it’s in.
Ergo, a data protection regulator calling for more of the same systemic failure does look rather, uh, uninspiring.
In the mildly worded blog post, Simon McDougall, the ICO’s executive director for technology and innovation — who does not appear to work anywhere near an enforcement department — includes such grand suggestions for adtech law-breakers as: “keep engaging with your trade associations”.
You’ll have to forgive us for not being overly convinced such a step will lead to any paradigm tilts to privacy — or “solutions that combine innovation and privacy”, as McDougall puts it — given episodes like this.
Another of the big ideas he has for the industry to get with the legal program is to suggest people working in adtech “challenge” senior management to “review their approach”.
Now we know employee activism is rather in vogue right now — at least at certain monopolistic tech giants who’ve scaled so big, and employ such large armies of lawyers, they’re essentially immune to moral and societal operational norms — but we’re not sure it’s the greatest look for the UK’s data watchdog to be encouraging adtech professionals to put their own jobs on the line instead of, y’know, doing its job and enforcing the law.
It’s possible that McDougall, a relatively recent recruit to the regulator, may not yet know it from his perch in the “technology and innovation” unit, but the ICO does have a powerful toolbox at its disposal these days. Including the ability, under the pan-EU General Data Protection Regulation framework, to levy fines of up to 4% of global turnover on entities it finds seriously violating the law.
It can also order a stop to law-violating data processing. And what better way to end the mass-scale privacy violations attached to programmatic advertising than by ordering personal data be stripped out of RTB requests, you might wonder?
It wouldn’t mean an end to being able to target ads online. Contextual targeting doesn’t require personal data — and has been used successfully by the likes of non-tracking search engine DuckDuckGo for years (and profitably so). It would just mean an end to the really creepy, stalkerish stuff. The stuff consumers hate — which also serves up horribly damaging societal effects, given that the mass profiling of Internet users enables push-button discrimination and exploitation of the vulnerable at vast scale.
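For a sense of the mechanics, here is a toy sketch in Python of contextual ad selection; the topics, ads and keyword matching are invented for illustration and are far simpler than a production system:

```python
# Toy contextual targeting: the ad is chosen from what is on the page,
# not from who is reading it. No user profile or tracking ID is consulted.
ADS_BY_TOPIC = {
    "hiking": "Ad: trail boots",
    "baking": "Ad: stand mixers",
}

def contextual_ad(page_text: str) -> str:
    """Pick an ad by matching page keywords; no personal data required."""
    for topic, ad in ADS_BY_TOPIC.items():
        if topic in page_text.lower():
            return ad
    return "Ad: generic brand campaign"  # fallback when nothing matches

print(contextual_ad("Ten great hiking routes in the Lake District"))
# -> Ad: trail boots
```

Swap the page for the user as the targeting signal and the data-broadcast problem RTB’s critics describe never arises in the first place.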
Microtargeted ads are also, as we now know all too well, a pre-greased electronic conduit for attacks on democracy and society — enabling the spread of malicious disinformation.

Since folks with an eye on these topics are retweeting this, here are a few things I’ve written this year about the negative externalities of behavioral targeting. 1/3 https://t.co/n8i7QyCeR0 pic.twitter.com/g3a4X1bbpi
— Josh Braun (@josh_braun) December 20, 2019

The societal stakes couldn’t be higher. Yet the ICO appears content to keep calm and let the adtech industry carry on — no enforcement, just biannual reminders of “concerns” about “lawfulness”.
To wit: “We have significant concerns about the lawfulness of the processing of special category data which we’ve seen in the industry, and the lack of explicit consent for that processing,” as McDougall admits in the post.
“We also have concerns about whether reliance on contractual clauses to justify onward data sharing is sufficient to comply with the law. We have not seen case studies that appear to adequately justify this.”
Set tone to: ‘Oopsy’.
The title of the ICO’s blog post — Adtech and the data protection debate – where next? — also incorporates contradictory framing as if to imply there is “debate” as to whether the industry needs to comply with data protection law. (Given the ICO’s own findings of “concern” that framing is itself concerning.)
So what can the adtech industry expect the ICO to actually do if it continues to fail to embed a “privacy by design approach in its use of RTB” (another of the blog post’s big suggestions) — and therefore keeps on, er, breaking the law?
Well, the ICO plans to make like a sponge over the “coming weeks”, per McDougall, who says it will spend time “absorbing all the information gathered and the rich conversations we’ve had throughout the year” and then shift into first gear — where it will be “evaluating all of the options available to us”.
No rush, eh.
A “further update” will then be put out in “early 2020” which will set out the ICO’s position — third time lucky perhaps?!
This update, we are informed, will also include “any action we’re taking”. So possibly still nothing, then.
“The future of RTB is both in the balance and in the hands of all the organisations involved,” McDougall writes — as if regulatory enforcement requires industry buy in.
UK taxpayers should be forgiven for wondering what exactly their data protection regulator is for at this point. Hopefully they’ll find out in a few months’ time.

Regulator confuses blogging with enforcement https://t.co/0QJxyDT10X Next up perhaps @iconew will hold an adtech roundtable where they don’t serve tea & biscuits
— Natasha (@riptari) December 20, 2019

GDPR adtech complaints keep stacking up in Europe

France slaps Google with $166M antitrust fine for opaque and inconsistent ad rules

France’s competition watchdog has slapped Google with a €150 million (~$166M) fine after finding the tech giant abused its dominant position in the online search advertising market.
In a decision announced today — following a lengthy investigation into the online ad sector — the competition authority sanctions Google for adopting what it describes as “opaque and difficult to understand” operating rules for its ad platform, Google Ads, and for applying them in “an unfair and random manner”.
The watchdog has ordered Google to clarify how it draws up rules for the operation of Google Ads and its procedures for suspending accounts. The tech giant will also have to put in place measures to prevent, detect and deal with violations of Google Ads rules.
A Google spokesman told TechCrunch the company will appeal the decision.
The decision — which comes hard on the heels of a market study report by the UK’s competition watchdog asking for views on whether Google should be broken up — relates to search ads: the ads served alongside organic results when a user searches for something on Google’s search engine.
More specifically it relates to the rules Google applies to its Ads platform which set conditions under which advertisers can broadcast ads — rules the watchdog found to be confusing and inconsistently applied.
It also found Google had changed its position on the interpretation of the rules over time, which it said generated instability for some advertisers who were kept in a situation of legal and economic insecurity.
In France, Google holds a dominant position in the online search market, with its search engine responsible for more than 90% of searches carried out and more than 80% of the online ad market linked to searches, per the watchdog. It notes that this dominance puts requirements on Google to define the operating rules of its ad platform in an objective, transparent and non-discriminatory manner.
However it found Google’s wording of ad rules failed to live up to that standard — saying it is “not based on any precise and stable definition, which gives Google full latitude to interpret them according to situations”.
Explaining its decision in a press release the Autorité de la Concurrence writes [translated by Google Translate]:
[T]he French Competition Authority considers that the Google Ads operating rules imposed by Google on advertisers are established and applied under non-objective, non-transparent and discriminatory conditions. The opacity and lack of objectivity of these rules make it very difficult for advertisers to apply them, while Google has all the discretion to modify its interpretation of the rules in a way that is difficult to predict, and decide accordingly whether the sites comply with them or not. This allows Google to apply them in a discriminatory or inconsistent manner. This leads to damage both for advertisers and for search engine users.
The watchdog’s multi-year investigation of the online ad sector was instigated after a complaint by a company called Gibmedia — which raised an objection more than four years ago after Google closed its Google Ads account without notice.
At the time, Gibmedia requested provisional measures be taken. The watchdog rejected that request in a 2015 decision but elected to continue investigating “the merits of the case”. Today’s decision marks the culmination of the investigation.
In a response statement on the decision, a Google spokesperson said: “People expect to be protected from exploitative and abusive ads and this is what our advertising policies are for.”
Its statement also claims Gibmedia was “running ads for websites that deceived people into paying for services on unclear billing terms”. “We do not want these kinds of ads on our systems, so we suspended Gibmedia and gave up advertising revenue to protect consumers from harm,” the Google spokesperson added.
However the watchdog’s press release anticipates and unpicks this argument — pointing out that while having an objective of consumer protection is “perfectly legitimate” it does not justify Google treating advertisers in “a differentiated and random manner in comparable situations”.
“Google cannot suspend the account of an advertiser on the grounds that it would offer services that it considers contrary to the interests of the consumer, while agreeing to reference and accompany on its advertising platform sites that sell similar services,” it writes. 
While the watchdog does not state that it found evidence Google used ambiguous and inconsistently applied ad rules in a deliberate attempt to block competitors, it asserts the behavior displays “at best negligence, at worst opportunism”.
It also suggests that another element of Google’s ad rules could lead sites to favor a content policy aligned with its own ad-funded services — thereby pushing online publishers to adopt an economic model that feeds and benefits Google’s own business.
The watchdog points out that, throughout its implementation of the now-sanctioned practices, Google received regular warnings around EU competition rules — noting the string of European Commission antitrust decisions against Google products in recent years. (Most recently, in March, Google was fined ~$1.7BN for antitrust violations related to its search ad brokering business, AdSense.)
Since 2010, the watchdog says, it has itself issued a number of decisions related to the drafting and application of rules in the ad market which Google could also have taken note of.
In addition to the fine, and the requirements to clarify its procedures and set up a system of alerts to help advertisers avoid account suspensions, the decision requires Google to organize mandatory annual training for Google Ads support staff.
It must also submit an annual report to the watchdog specifying the number of complaints filed against it by French Internet users; the number of sites and accounts suspended; the nature of the Rules violated and the terms of the suspension.
Within two months of today’s decision Google must also present the watchdog with a report detailing the measures and procedures it will take to comply with the orders. A further report is due within six months detailing all the measures and procedures Google has actually put in place.
At the start of this year Google was also fined $57M by France’s data protection watchdog for violations of Europe’s General Data Protection Regulation.

Uber’s ride-hailing business hit with ban in Germany

Another legal blow for Uber in Europe: A regional court in Frankfurt has banned the company from sending ride-hailing requests to rental car companies via its app — with the court finding multiple competition violations.
The ruling, over Uber’s dispatching process, follows a legal challenge brought by a German taxi association.
In Germany Uber’s ride-hailing business works exclusively with professional and licensed private hire vehicle (PHV) companies — whose drivers and cars have the necessary licenses and permits to transport passengers. So the court ban essentially outlaws Uber’s current model in the country — unless it’s able to make changes to come into compliance.
Uber can appeal the Frankfurt court’s judgement but did not respond when asked whether it intends to do so.
The ban is enforceable immediately. It’s not clear whether Uber will temporarily pause service in the market to come into compliance — it has not said it will do so, suggesting it intends to scramble to make changes while continuing to operate. But if it does that it risks fines if it’s caught breaching the law in the meanwhile.
Per Reuters, the plaintiff in the case, Taxi Deutschland, has said it will seek immediate provisional enforcement — with the threat of fines of €250 per ride, or up to €250,000 per ride for repeated offences if Uber fails to make the necessary changes.
“We will assess the court’s ruling and determine next steps to ensure our services in Germany continue,” an Uber spokesperson said in a statement. “Working with licensed PHV operators and their professional drivers, we are committed to being a true partner to German cities for the long term.”
Among the issues identified by the court as violations of German law are Uber’s lack of a rental licence; rental drivers it uses to supply the driving service accepting jobs via the Uber app without first returning to their company’s headquarters; and rental drivers accepting jobs directly in the app without the jobs being previously received by their company.
Uber’s p2p ride-hailing offering has been effectively outlawed across Europe since a 2017 decision by the region’s top court which judged it a transportation company, not merely a tech platform — which means its business is subject to PHV regulations in each EU Member State. Compliance costs have thus been piled onto its model in the region. 
Uber argues that reform of German transport law is needed to take account of digital business models and app-based dispatch. In the meanwhile its business demonstrably remains vulnerable to legal challenges around PHV regulations.
The Frankfurt court ruling also comes hard on the heels of a decision by London’s transport regulator not to renew Uber’s license to operate in the UK capital.
The city regulator found a “pattern of failures” which it said put “passenger safety and security at risk” — including unauthorised drivers being able to pick up passengers as though they were the booked driver in at least 14,000 trips.
In that case Uber can continue to operate in London during the appeals process. The company submitted an appeal last week.

Airbnb is a platform not an estate agent, says Europe’s top court

Airbnb will be breathing a sigh of relief today: Europe’s top court has judged it to be an online platform which merely connects people looking for short term accommodation, rather than a full-blown estate agent.
The ruling may make it harder for the ‘home sharing’ platform to be forced to comply with local property regulations — at least under current regional rules governing ecommerce platforms.
The judgement by the Court of Justice of the European Union (CJEU) today follows a complaint made by a French tourism association, AHTOP, which had argued Airbnb should hold a professional estate agent licence. And, that by not having one, the platform giant was in breach of a piece of French legislation known as the ‘Hoguet Law’.
However the court disagreed — siding with Airbnb’s argument that its business must be classified as an ‘information society service’ under EU Directive 2000/31 on electronic commerce.
Commenting on the ruling in a statement, Luca Tosoni, a research fellow at the Norwegian Research Center for Computers and Law at the University of Oslo, told us: “The Court’s finding that online platforms that facilitate the provision of short-term accommodation services, such as Airbnb, qualify as providers of ‘information society services’ entails strict limitations on the ability to introduce or enforce restrictive measures on similar services by a Member State other than that in whose territory the relevant service provider is established.”
“The Court’s judgment suggests that the enforcement of restrictive measures against a provider of ‘information society services’ may only occur on a very exceptional basis, subject to strict substantive and procedural conditions, including prior specific notification to the European Commission,” he added.
It’s a ruling that Uber may well look enviously at — given, in the case of its ride-hailing platform, the CJEU reached a very different conclusion a couple of years ago, finding Uber to be a transportation service not merely a tech platform.
In the Airbnb case the court points to differences vs the Uber ruling, noting that an online intermediation service may be classed otherwise if the intermediation service forms an integral part of an overall service whose main component is a service coming under another legal classification.
“In the present case, the Court found that an intermediation service such as that provided by Airbnb Ireland satisfied those conditions, and the nature of the links between the intermediation service and the provision of accommodation did not justify departing from the classification of that intermediation service as an ‘information society service’ and thus the application of Directive 2000/31 to that service,” it writes in a press release on the judgement.
Factors which informed that judgement include that Airbnb’s service is “not aimed only at providing immediate accommodation services, but rather it consists essentially of providing a tool for presenting and finding accommodation for rent, thereby facilitating the conclusion of future rental agreements”; that the platform is “in no way indispensable to the provision of accommodation services, since the guests and hosts have a number of other channels in that respect, some of which are long-standing”; and that it found nothing to indicate Airbnb sets or caps the amount of the rents charged by the hosts using its platform.
“[U]nlike the intermediation services at issue in the judgments in Asociación Profesional Elite Taxi and Uber France, neither that intermediation service nor the ancillary services offered by Airbnb Ireland make it possible to establish the existence of a decisive influence exercised by that company over the accommodation services to which its activity relates, with regard both to determining the rental price charged and selecting the hosts or accommodation for rent on its platform,” the CJEU adds in its press release.
The court also found fault with France for failing to notify the European Commission of the licensing requirement it was placing on Airbnb.
Reached for comment on the CJEU judgement Airbnb suggested the outcome does not mean governments in Europe are unable to apply regulations to its platform — saying that it wants to keep working with the European Commission to ensure there are fair and proportionate rules for how Member States can apply local regulations to online platforms.
“We welcome this judgment and want to move forward and continue working with cities on clear rules that put local families and communities at the heart of sustainable 21st century travel,” the company said in a statement. “We want to be good partners to everyone and already we have worked with more than 500 governments and authorities to help hosts share their homes, follow the rules and pay tax.”
The new European Commission has signalled it intends to upgrade safety and liability rules around online platforms — via a forthcoming Digital Services Act which looks set to amend the current ecommerce rules. So it’s possible tighter regulations could be coming down the pipe for platforms in the next few years. Hence Airbnb being keen to work with the Commission on any resetting of the rules.

More legal uncertainty for Privacy Shield ahead of crux ruling by Europe’s top court

Facebook tried to block the referral but today an influential advisor to Europe’s top court has issued a legal opinion that could have major implications for the future of the EU-US Privacy Shield personal data transfer mechanism.
It’s a complex opinion, dealing with a fundamental clash of legal priorities around personal data in the EU and US, which does not resolve question marks hanging over the legality of Privacy Shield.
The headline take-away is that a different data transfer mechanism which is also widely used by businesses to transfer personal data out of the EU — so called Standard Contractual Clauses (SCCs) — has been deemed legally valid by the court advisor.
However the advocate general to the Court of Justice of the European Union (CJEU) is also at pains to emphasize the “obligation” of data protection authorities to step in and suspend such data transfers if they are being used to send EU citizens’ data to a place where their information cannot be adequately protected.
So while SCCs look safe — as a data transfer mechanism — per this opinion, it’s a reminder that EU data protection agencies have a duty to be on top of regulating how such tools are used.
The case was referred to the CJEU as a result of Ireland’s Data Protection Commission not acting on a complaint asking it to suspend Facebook’s use of SCCs. So one view that flows from the opinion is that the DPC should have done so — instead of spending years on an expensive legal fight.
The backstory to the legal referral is long and convoluted, involving a reformulated data protection complaint filed with the Irish DPC by privacy campaigner and lawyer Max Schrems challenging Facebook’s use of SCCs. His earlier legal action, in the wake of the 2013 disclosures of US government mass surveillance programs by NSA whistleblower Edward Snowden, led to Privacy Shield’s predecessor, Safe Harbor, being struck down by the CJEU in 2015.  
On the SCCs complaint Schrems prevailed in the Irish courts but instead of acting on his request to order Facebook to suspend its SCC data flows, Ireland’s data protection watchdog took the unusual step of filing a lawsuit pertaining to the validity of the entire mechanism.
Irish courts then referred a number of legal questions to the CJEU — including looping in the wider issue of the legality of Privacy Shield. It’s on those questions that the AG has now opined.
It’s worth noting that the advocate general’s opinion is not binding on the CJEU — which will issue a ruling on the case next year. Although the court does tend to follow such opinions so it’s a strong indicator of the likely direction of travel.
The opinion, by advocate general Henrik Saugmandsgaard Øe, takes the view that the use of SCCs for the transfer of personal data to a third country — i.e. a country outside the EU — is valid.
However, as noted above, the AG puts the onus on data authorities to act in instances where obligations to protect EU citizens’ data under the mechanism come into conflict with privacy-hostile laws outside the EU, such as government mass surveillance programs.
“[T]here is an obligation — placed on the data controllers and, where the latter fail to act, on the supervisory authorities — to suspend or prohibit a transfer when, because of a conflict between the obligations arising under the standard clauses and those imposed by the law of the third country of destination, those clauses cannot be complied with,” the CJEU writes in a press release on the opinion.
In a first reaction, Schrems highlights this point — writing: “The advocate general is now telling the Irish Data Protection Authority again to just do its job… After all the Irish taxpayer may have to pay up to €10M in legal costs, for the DPC delaying this case in the interest of Facebook.
“The opinion makes clear that DPC has the solution to this case in her own hands: She [Helen Dixon] can order Facebook to stop transfers tomorrow. Instead, she turned to the CJEU to invalidate the whole system. It’s like screaming for the European fire brigade, because you don’t know how to blow out a candle yourself.”
We’ve reached out to the Irish DPC and to Facebook for comment on the AG’s opinion.
“At the moment, many data protection authorities simply look the other way when they receive reports of infringements or simply do not deal with complaints. This is a huge step for the enforcement of the GDPR [the General Data Protection Regulation],” Schrems also argues.
Luca Tosoni, a research fellow at the Norwegian Research Center for Computers and Law at the University of Oslo, suggests that the likelihood of EU DPAs suspending SCC personal data transfers to the US will “depend on the Court’s ultimate take on the safeguards surrounding the access to the transferred data by the United States intelligence authorities and the judicial protection available to the persons whose data are transferred”.
“The disruptive effect of a suspension of SCCs, even if partial and just for the U.S., is likely to be substantial,” he argues. “SCCs are widely used for the transfer of personal data outside the EU. They are probably the most used data transfer mechanism, including for transfers to the U.S.  Thus, even a partial suspension of the SCCs would force a significant number of organizations to explore alternative mechanisms for their transfers to the U.S. 
“However, the alternatives are limited and often difficult to apply to large-scale transfers, the main ones being the derogations allowing transfers with the consent of the data subject or necessary for the performance of a contract. These are unlikely to be suitable for all transfers currently taking place in accordance with SCCs.”
“In practice, the degree of disruption is likely to depend on the timing and duration of the suspension,” he adds. “Any suspension or other finding that data transfers to the U.S. are problematic is likely to speed up the modernization of SCCs that the European Commission is already working on but it is unclear how long it would take for the Commission to issue new SCCs.
“When the Court invalidated the Safe Harbor, it took several months for the Commission to adopt the Privacy Shield and amend the existing SCCs to take into account the Court’s judgment.”
On Privacy Shield — a newer data transfer mechanism which the European Commission claims fixes the legal issues with its predecessor — Saugmandsgaard Øe’s opinion includes some lengthy reasoning that suggests otherwise. It certainly does not clear up questions around the mechanism’s legality, which arise as a result of US laws that allow the state to harvest personal data for national security purposes, thereby conflicting with EU privacy rights.
Per the CJEU press release, the AG’s opinion sets out a number of reasons which it says “lead him to question the validity of the ‘privacy shield’ decision in the light of the right to respect for private life and the right to an effective remedy”.
The flagship mechanism is now used by more than 5,000 entities to authorize EU-US personal data transfers.
Should it be judged invalid by the court there would be a massive scramble for businesses to find alternatives.
It remains to be seen how the court will handle these questions. But Privacy Shield remains subject to direct legal challenge — so there are other opportunities for it to weigh in, even if the CJEU judges avoid doing so in this case.
Schrems clearly hopes they will weigh in soon, skewering Privacy Shield in his statement — where he writes: “After the ‘Safe Harbor’ judgment the European Commission deliberately passed an invalid decision again — knowing that it will take two or three years until the Court will have a chance to invalidate it a second time. It will be very interesting to see if the Court will take this issue on board in the final decision or wait for another case to reach the court.”
“I am also extremely happy that the AG has taken a clear view on the Privacy Shield Ombudsperson. A mere ‘postbox’ at the foreign ministry of the US cannot possibly replace a court, as required under the first judgement by the Court,” he adds.
He does take issue with the AG’s opinion in one respect — specifically its reference to what he dubs “surveillance friendly case law” under the European Convention on Human Rights — instead of what he couches as “the clear case law of the Court of Justice”.
“This is against any logic… I am doubtful that the [CJEU] judges will join that view,” he suggests.
The court typically hands down a judgement between three and six months after an AG opinion — so privacy watchers will be readying their popcorn in 2020.
Meanwhile, for thousands of businesses, the legal uncertainty and risk of future disruption should Privacy Shield come unstuck goes on.

UK’s competition regulator asks for views on breaking up Google

The UK’s competition regulator has raised concerns about the market power of digital ad platform giants Google and Facebook in an interim report published today, opening up a consultation on a range of potential interventions — from breaking up platform giants, to limiting their ability to set self-serving defaults, and enforcing data sharing and/or feature interoperability to help rivals compete.
Breaking up Google by forcing it to separate its ad server arm from the rest of the business is one of a number of possible interventions it’s eyeing, along with enforcing choice screens for search engines and browsers that use non-monetary criteria to allocate slots — vs Google’s plan for a pay-to-play offering for EU Android users (which rivals argue does not offer relief for the antitrust abuse the European Commission sanctioned last year).
The UK regulator is also considering whether to require Facebook to interoperate specific features of its current network so they can be accessed by competitors — as a fix for what it describes as “strong network effects” which work against “new entrant and challenger social media platforms”.
The Competition and Markets Authority (CMA) launched the market study in July — a couple of weeks after the UK’s data watchdog published its own damning report setting out major privacy and other concerns around programmatic advertising.
It is due to issue a final report next summer — which will set out conclusions and recommendations for interventions — and is now consulting on suggestions in its interim report, inviting contributions before February 12.
Since beginning the study the CMA says it has received several requests to open a full-blown market investigation, which means it has a statutory duty to consult on making such a reference.
Based on initial findings from the study it says there are “reasonable grounds” for suspecting serious impediments to competition in the online platforms and digital advertising market.
The report specifically flags three areas where it suspects harm — namely:
the open display advertising market — with a focus on “the conflicts of interest Google faces at several parts of its vertically integrated chain of intermediaries”;
general search and search advertising — with a focus on “Google’s market power and the barriers to expansion faced by rival search engines”;
social media and display advertising — with a focus on “Facebook’s market power and the lack of interoperability between Facebook and rival services”.
Other concerns raised in the report include problems flowing from a lack of transparency in the digital advertising market; and the difficulty or lack of choice for consumers to opt out of behavioral advertising.
However the regulator is not making a market investigation reference at this stage — a step which would open access to the order-making powers that could be used to enforce the sorts of interventions discussed in the report. Instead, the CMA says it favors making recommendations to government to feed into a planned “comprehensive regulatory framework” to govern the behaviour of online platforms.
Earlier this year the UK government set out a wide-ranging proposal to regulate a range of online harms. Although it remains to be seen how much of that program prime minister Boris Johnson’s newly elected Conservative government will now push ahead with.
“Although it is a finely balanced judgement, we remain of the view that a comprehensive suite of recommendations to government is currently the best way forward and are therefore consulting on not making a market investigation reference at this stage,” the CMA writes, saying it feels it has further investigation work to do and also does not wish to “cut across” the government’s plans around regulating platforms.
“The concerns we have identified regarding online platforms such as Google and Facebook are a truly global antitrust challenge facing governments and regulators. Therefore, in relation to some of the potential interventions we may consider in a market investigation, and in particular any significant structural remedies such as those involving ownership separation, we need to be pragmatic about what changes could efficiently be pursued unilaterally by the UK,” it adds, saying it will “continue to work as closely as we can with our international counterparts to develop a coordinated position on these issues in the second half of the study”.
Antitrust regulators in a number of countries, including Australia and the US, have been turning their attention to platform giants in recent years.
The new European Commission has also talked tough on platform power, suggesting it will further dial up scrutiny of tech giants and seek to accelerate its own interventions where it finds competitive harms.
Responding to the CMA report in a statement, Ronan Harris, VP, Google UK and Ireland, told us:
The digital advertising industry helps British businesses of all sizes find customers in the UK and across the world, and supports the websites that people know and love with revenue and reach. We’ve built easy-to-use controls that enable people to manage their data in Google’s services — such as the ability to turn off personalised advertising and to automatically delete their search history.  We’ll continue to work constructively with the CMA and the government on these important areas so that everyone can make the most of the web.
A Facebook spokesperson also sent us this statement:
We are fully committed to engaging in the consultation process around the CMA’s preliminary report, and continuing to deliver the benefits of technology and relevant advertising to the millions of people and small businesses in the UK who use our services.
We agree with the CMA that people should have control over their data and transparency around how it is used. In fact, for every ad we show, we give people the option to find out why they are seeing that ad and an option to turn off ads from that advertiser entirely.  We also provide industry-leading tools to help people control their data, like “Off Facebook Activity”, and to transfer it to other services through our Data Transfer tools.  We look forward to further engagement with the CMA on these topics.

Volocopter awarded key designation by European aviation safety regulator

Electric vertical takeoff and landing (eVTOL) aircraft maker Volocopter has received a Design Organization Approval (DOA) from the European Union Aviation Safety Agency (EASA). This is basically a recognition by the EU that the processes Volocopter has in place for developing and building its aircraft are of a high enough standard that it can expedite the process of deploying its eVTOLs for commercial use.
That’s a big advantage for Volocopter as it moves forward with its commercialization plans. The German company announced plans this year to produce a cargo version of its vehicle designed for hauling goods, and also revealed it’ll be running a pilot of that vehicle in partnership with John Deere focused on testing its use in agriculture. Meanwhile, it’s also moving ahead with its plans for an ‘air taxi’ version that’s meant to transport people in urban environments.
Volocopter has flown its personal transport with passengers on board in Singapore and Stuttgart so far, in tests designed to help demonstrate its feasibility ahead of a true commercial launch. The company announced a €50 million (around $55 million) funding round earlier this year, and it hopes to launch its service for the public in around two to three years’ time.

Reddit links UK-US trade talk leak to Russian influence campaign

Reddit has linked account activity involving the leak and amplification of sensitive UK-US trade talks on its platform during the ongoing UK election campaign to a suspected Russian political influence operation.
Or, to put it more plainly, the social network suspects that Russian operatives are behind the leak of sensitive trade data — likely with the intention of impacting the UK’s General Election campaign.
The country goes to the polls next week, on December 12.
The UK has been politically deadlocked since mid-2016 over how to implement the result of the referendum to leave the European Union. The minority Conservative government has struggled to negotiate a brexit deal that parliament backs. Another hung parliament or minority government would likely result in continued uncertainty.
In a post discussing the “Suspected campaign from Russia”, Reddit writes:

We were recently made aware of a post on Reddit that included leaked documents from the UK. We investigated this account and the accounts connected to it, and today we believe this was part of a campaign that has been reported as originating from Russia.
Earlier this year Facebook discovered a Russian campaign on its platform, which was further analyzed by the Atlantic Council and dubbed “Secondary Infektion.” Suspect accounts on Reddit were recently reported to us, along with indicators from law enforcement, and we were able to confirm that they did indeed show a pattern of coordination. We were then able to use these accounts to identify additional suspect accounts that were part of the campaign on Reddit. This group provides us with important attribution for the recent posting of the leaked UK documents, as well as insights into how adversaries are adapting their tactics.

Reddit says that an account, called gregoratior, originally posted the leaked trade talks document. Later a second account, ostermaxnn, reposted it. The platform also found a “pocket of accounts” that worked together to manipulate votes on the original post in an attempt to amplify it. Though fairly fruitlessly, as it turned out; the leak gained little attention on Reddit, per the company.
As a result of the investigation Reddit says it has banned 1 subreddit and 61 accounts — under policies against vote manipulation and misuse of its platform.
The story doesn’t end there, though, because whoever was behind the trade talk leak appears to have resorted to additional tactics to draw attention to it — including emailing campaign groups and political activists directly.
This activity did bear fruit this month when the opposition Labour party got hold of the leak and made it into a major campaign issue, claiming the 451-page document shows the Conservative party, led by Boris Johnson, is plotting to sell off the country’s free-at-the-point-of-use National Health Service (NHS) to US private health insurance firms and drug companies.
Labour party leader, Jeremy Corbyn, showed a heavily redacted version of the document during a TV leaders debate earlier this month, later calling a press conference to reveal a fully un-redacted version of the data — arguing the document proves the NHS is in grave danger if the Conservatives are re-elected.
Johnson has denied Labour’s accusation that the NHS will be carved up as the price of a Trump trade deal. But the leaked document itself is genuine.
It details preliminary meetings between UK and US trade negotiators, which took place between July 2017 and July 2019, in which the NHS was discussed alongside other issues such as food standards. The document does not, however, confirm what position the UK might seek to adopt in any future trade talks with the US.
The source of the heavily redacted version of the document appears to be a Freedom of Information (FOI) request by campaigning organisation, Global Justice Now — which told Vice it made an FOI request to the UK’s Department for International Trade around 18 months ago.
The group said it was subsequently emailed a fully unredacted version of the document by an unknown source which also appears to have sent the data directly to the Labour party. So while the influence operation looks to have originated on Reddit, the agents behind it seem to have resorted to more direct means of data dissemination in order for the leak to gain the required attention to become an election-influencing issue.
Experts in online influence operations had already suggested similarities between the trade talks leak and an earlier Russian operation, dubbed Secondary Infektion, which involved the leak of fake documents on multiple online platforms. Facebook identified and took down that operation in May.
In a report analysing the most recent leak, social network mapping and analysis firm Graphika says the key question is how the trade document came to be disseminated online a few weeks before the election.
“The mysterious [Reddit] user seemingly originated the leak of a diplomatic document by posting it around online, just six weeks before the UK elections. This raises the question of how the user got hold of the document in the first place,” it writes. “This is the single most pressing question that arises from this report.”
Graphika’s analysis concludes that the manner of leaking and amplifying the trade talks data “closely resembles” the known Russian information operation, Secondary Infektion.
“The similarities to Secondary Infektion are not enough to provide conclusive attribution but are too close to be simply a coincidence. They could indicate a return of the actors behind Secondary Infektion or a sophisticated attempt by unknown actors to mimic it,” it adds.
Internet-enabled Russian influence operations that feature hacking and strategically timed data dumps of confidential/sensitive information, as well as the seeding and amplification of political disinformation which is intended to polarize, confuse and/or disengage voters, have become a regular feature of Western elections in recent years.
The most high profile example of Russian election interference remains the 2016 hack of documents and emails from Hillary Clinton’s presidential campaign and Democratic National Committee — which went on to be confirmed by US investigators as an operation by the country’s GRU intelligence agency.
In 2017 emails were also leaked from French president Emmanuel Macron’s campaign shortly before the election — although with apparently minimal impact in that case. (Attribution is also less clear-cut.)
Russian activity targeting UK elections and referendums remains a matter of intense interest and investigation — and had been raised publicly as a concern by former prime minister, Theresa May, in 2017.
Her government nonetheless failed to act on recommendations to strengthen UK election and data laws to respond to the risks posed by Internet-enabled interference. Nor did it investigate questions over the extent of foreign interference in the 2016 brexit referendum.
May was finally unseated by the ongoing political turmoil around brexit this summer, when Johnson took over as prime minister. But he has also turned a wilfully blind eye to the risks around foreign election interference — while fully availing himself of data-fuelled digital campaign methods whose ethics have been questioned by multiple UK oversight bodies.
A report into Russian interference in UK politics which was compiled by the UK’s intelligence and security parliamentary committee — and had been due to be published ahead of the general election — was also personally blocked from publication by the prime minister.
Voters won’t now get to see that information until after the election. Or, well, barring another strategic leak…

No Libra style digital currencies without rules, say EU finance ministers

European Union finance ministers have agreed a de facto ban on the launch in the region of so-called global ‘stablecoins’ such as Facebook’s planned Libra digital currency until the bloc has a common approach to regulation that can mitigate the risks posed by the technology.
In a joint statement the European Council and Commission write that “no global ‘stablecoin’ arrangement should begin operation in the European Union until the legal, regulatory and oversight challenges and risks have been adequately identified and addressed”.
The statement includes recognition of potential benefits of the crypto technology, such as cheaper and faster payments across borders, but says they pose “multifaceted challenges and risks related for example to consumer protection, privacy, taxation, cyber security and operational resilience, money laundering, terrorism financing, market integrity, governance and legal certainty”.
“When a ‘stablecoin’ initiative has the potential to reach a global scale, these concerns are likely to be amplified and new potential risks to monetary sovereignty, monetary policy, the safety and efficiency of payment systems, financial stability, and fair competition can arise,” they add.
All options are being left open to ensure effective regulation, per the statement, with ministers and commissioners stating this should include “any measures to prevent the creation of unmanageable risks by certain global ‘stablecoins’”.
The new European Commission is already working on a regulation for global stablecoins, per Reuters.
Speaking at a press conference, Commission VP Valdis Dombrovskis said: “Today the Ecofin endorsed a joint statement with the Commission on stablecoins. These are part of a much broader universe of crypto assets. If we properly address the risks, innovation around crypto assets has the potential to play a positive role for investors, consumers and the efficiency of our financial system.
“A number of Member States like France, Germany or Malta introduced national crypto asset laws, but most people agree with the advice of the European Supervisory Authorities that these markets go beyond borders and so we need a common European framework.
“We will now move to implement this advice. We will launch a public consultation very shortly, before the end of the year.”
The joint statement also hits out at the lack of legal clarity around some major global projects in this area — which looks like a tacit reference to Facebook’s Libra project (though the text does not include any named entities).
“Some recent projects of global dimension have provided insufficient information on how precisely they intend to manage risks and operate their business. This lack of adequate information makes it very difficult to reach definitive conclusions on whether and how the existing EU regulatory framework applies. Entities that intend to issue ‘stablecoins’, or carry out other activities involving ‘stablecoins’ in the EU should provide full and adequate information urgently to allow for a proper assessment against the applicable existing rules,” they warn.
Facebook’s Libra project was only announced this summer — with a slated launch of the first half of 2020 — but was quickly dealt major blows by the speedy departure of key founder members from the vehicle set up to steer the initiative, as giants including Visa, Stripe and eBay apparently took fright at the regulatory backlash. Though you’d never know it from reading the Libra Association PR.
One perhaps unintended effect of Facebook’s grand design to disrupt global financial systems is to amp up pressure on traditional payment providers to innovate and improve their offerings for consumers.
EU ministers write that stablecoin initiatives “highlight the importance of continuous improvements to payment arrangements in order to meet market and consumer expectations for convenient, fast, efficient and inexpensive payments – especially cross-border”.
“While European payment systems have already made significant progress, European payment actors, including payment services providers, also have a key role to play in this respect,” they continue. “We note that the ECB and other central banks and national competent authorities will explore further the ongoing digital transformation of the payment system and, in particular, the consequences of initiatives such as ‘stablecoins’. We welcome that central banks in cooperation with other relevant authorities continue to assess the costs and benefits of central bank digital currencies as well as engage with European payment actors regarding the role of the private sector in meeting expectations for efficient, fast and inexpensive cross-border payments.”

Carbon dioxide emissions are set to hit a record high this year (it’s not fine, but not hopeless)

Carbon dioxide emissions are set to reach another record high in 2019. They are one of the main contributors to the climate change bringing extreme weather, rising oceans and more frequent fires that have killed hundreds of Americans and cost the U.S. billions of dollars.
That’s the word from the Global Carbon Project, an initiative of researchers around the world led by Stanford University scientist Rob Jackson.
The new projections from the Global Carbon Project are set out in a trio of papers published in “Earth System Science Data”, “Environmental Research Letters” and “Nature Climate Change”.
That’s the bad news. The good news (if you want to take a glass half-full view) is that the rate of growth has slowed dramatically from the previous two years. However, researchers are warning that emissions could keep increasing for another decade unless nations around the globe take dramatic action to change their approach to energy, transportation and industry, according to a statement from Jackson.

“When the good news is that emissions growth is slower than last year, we need help,” said Jackson, a professor of Earth system science in Stanford’s School of Earth, Energy & Environmental Sciences (Stanford Earth), in a statement. “When will emissions start to drop?”

Just in: Global carbon emissions hit a new all-time high in 2019, up 0.6% from last year.
This news is shockingly important and heartbreakingly serious. Not only are we entirely failing to reduce emissions, we are making the climate emergency worse at an increasingly fast rate. pic.twitter.com/A2nasPT3lI
— Eric Holthaus (@EricHolthaus) December 4, 2019

Globally, carbon dioxide emissions from fossil fuel sources (which are over 90 percent of all emissions) are expected to grow 0.6 percent over the 2018 emissions. In 2018 that figure was 2.1 percent above the 2017 figure, which was, itself, a 1.5 percent increase over 2016 emissions figures.
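As a quick back-of-the-envelope check on how those annual increments stack up, the growth rates quoted above can be compounded (a sketch in Python; it assumes each rate applies to total fossil-fuel emissions year over year):

```python
# Compound the reported year-over-year growth rates to gauge the cumulative
# rise in fossil-fuel CO2 emissions from 2016 through the 2019 projection.
growth_rates = {2017: 0.015, 2018: 0.021, 2019: 0.006}  # +1.5%, +2.1%, +0.6%

level = 1.0  # normalize 2016 emissions to 1.0
for year, rate in growth_rates.items():
    level *= 1 + rate
    print(f"{year}: {level:.3f}x the 2016 level")

# Prints 1.015x, 1.036x and 1.043x: even with growth slowing, 2019 emissions
# still land roughly 4% above 2016, setting a new record each year.
```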
Even as the use of coal is in drastic decline around the world, natural gas and oil use is climbing, according to researchers, and stubbornly high per capita emissions in affluent countries mean that reductions won’t be enough to offset the emissions from developing countries as they turn to natural gas and gasoline for their energy and transportation needs.
“Emissions cuts in wealthier nations must outpace increases in poorer countries where access to energy is still needed,” said Pierre Friedlingstein, a mathematics professor at the University of Exeter and lead author of the Global Carbon Budget paper in Earth System Science Data, in a statement.
Some countries are making progress. Both the UK and Denmark have managed to achieve economic growth while simultaneously reducing their carbon emissions. In the third quarter of the year, renewable power supplied more energy to homes and businesses in the United Kingdom than fossil fuels for the first time in the nation’s history, according to a report cited by “The Economist”.

Costs of wind and solar power are declining so dramatically that they are cost competitive with natural gas in many parts of the wealthy world and cheaper than coal, according to a study earlier in the year from the International Monetary Fund.
Still, the U.S., the European Union and China account for more than half of all carbon dioxide emissions. Carbon dioxide emissions in the U.S. did decrease year-on-year — projected to decline by 1.7 percent — but it’s not enough to counteract the rising demand from countries like China, where carbon dioxide emissions are expected to rise by 2.6 percent.
And the U.S. has yet to find a way to wean itself off its addiction to cheap gasoline and big cars. It hasn't helped that the country is throwing out emissions requirements for passenger vehicles that would have helped to reduce its contribution to climate change even further. Even so, given what U.S. car ownership rates would mean if replicated around the world, there's a need to radically reinvent transportation.
U.S. oil consumption per person is 16 times greater than in India and six times greater than in China, according to the reports. And the United States has roughly one car per person, while the figure is roughly one for every 40 people in India and one for every six in China. If ownership rates in either country were to rise to levels similar to the U.S., that would put more than 1 billion cars on the road in each country.
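A rough sanity check of that arithmetic, sketched in Python. The ownership rates are as quoted in the reports; the population figures are approximate 2019 estimates we have assumed for illustration:

```python
# Approximate 2019 populations (our assumption, not from the reports).
populations = {"US": 330e6, "India": 1.37e9, "China": 1.40e9}
# Car ownership rates as quoted above.
cars_per_person = {"US": 1.0, "India": 1 / 40, "China": 1 / 6}

for country, pop in populations.items():
    fleet = pop * cars_per_person[country]
    print(f"{country}: ~{fleet / 1e6:,.0f}M cars implied today")

# At a U.S.-style rate of roughly one car per person, India or China
# would each put well over 1 billion cars on the road.
for country in ("India", "China"):
    print(f"{country} at US rate: ~{populations[country] / 1e9:.1f}B cars")
```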
About 40 percent of global carbon dioxide emissions were attributable to coal use, 34 percent from oil, 20 percent from natural gas, and the remaining 6 percent from cement production and other sources, according to a Stanford University statement on the Global Carbon Project report.
“Declining coal use in the U.S. and Europe is reducing emissions, creating jobs and saving lives through cleaner air,” said Jackson, who is also a senior fellow at the Stanford Woods Institute for the Environment and the Precourt Institute for Energy, in a statement. “More consumers are demanding cheaper alternatives such as solar and wind power.”
There’s hope that a combination of policy, technology and changing social habits can still work to reverse course. The adoption of new low-emission vehicles, the development of new energy storage technologies, continued advancements in energy efficiency, and renewable power generation in a variety of new applications holds some promise. As does the social adoption of alternatives to emissions intensive animal farming and crop cultivation.

Reasons to be climate cheerful (ish)

“We need every arrow in our climate quiver,” Jackson said, in a statement. “That means stricter fuel efficiency standards, stronger policy incentives for renewables, even dietary changes and carbon capture and storage technologies.”
 

Atomico Partner Tom Wehmeier reviews ‘The State of European Tech’ 2019 report

Atomico, the European venture capital firm founded by Skype’s Niklas Zennström, has released its latest annual The State of European Tech report, published in partnership with Slush and Orrick.
As part of the report, the authors surveyed 5,000 members of the ecosystem — including 1,000 founders — as well as pulling in robust data from other sources, such as Dealroom and the London Stock Exchange.
This year, the report reveals that the European tech ecosystem continues to mature and shows no sign of slowing — particularly highlighting the contrast with five years ago, when The State of European Tech report made its debut. Almost every key indicator is up and to the right, except, rather depressingly, diversity.
The data shows, for example, that competition for talent and access to the best founders has intensified ferociously. And from a funding perspective, European founders have more choice than ever, especially with U.S. and Asian VC firms investing more and more in the region. Progress on gender diversity has stalled, however, with 92% of funding going to all-male teams.
I caught up with the report’s author Tom Wehmeier, Partner and Head of Insights at Atomico (also sometimes jokingly referred to as the “Mary Meeker of Europe”), where we discuss in more detail some of the key findings and why, it seems, that the rest of the world has finally woken up to Europe’s tech potential.
But first, a few headlines from the report:
European technology companies are on track to raise a record $30B+ in funding in 2019, up from $25B the year before. (Source: Dealroom)
Despite failing to match 2018's level of venture-backed exits, there was a record 40 deals of $100M-plus as of September 2019, a deal size that many European tech sceptics did not believe was possible. (Source: Dealroom)
A number of multi-billion-dollar non-venture backed companies like Nexi and Trainline made their debut on the public markets.
European tech policymaking remains a mystery to many European founders.
When asked to describe the top priority of the European Commission in terms of tech policy, 40% of founders and startup employees say they don’t feel informed enough to comment. (Source: survey)
Despite this reported lack of awareness on policy issues, all respondents voted EU competition commissioner Margrethe Vestager as the person who had the most influence on European tech in 2019, good or bad. (Source: survey)
European parliamentarians aren’t talking about fintech and digital health, two sectors which investors poured a combined $12.7bn into last year (Source: Politico and Dealroom)
Europe’s diversity figures are still grim reading.
In 2019, 92% of funding went to all-male teams, a similar level to 2018. (Source: Dealroom)
There is still only one woman CTO in the 119 companies.

As the new year beckons European investors start moving into new roles

As the Holiday Season approaches, new jobs beckon for players in the tech ecosystem, and this is no less true for investors. Two notable moves have recently happened in the European scene.
The first is that GR Capital, a pan-European VC, is opening an office in London and has lured Jason Ball, who earlier this year left Qualcomm Ventures, where he had been European Managing Director for over a decade. Ball spent ten years as a mentor at Seedcamp and individually invested in more than ten companies. He was understood to be looking for new challenges, either building a new fund or joining another, so now we have our answer as to what he decided.
Founded in 2016 by Roma Ivaniuk in Ukraine, GR Capital specializes in late-stage VC investments. It has over $70M under management and has invested in Lime, Azimo, WeFox, McMakler, Glovo and Meero among others. The fund has traditionally been known for investing in Eastern Europe, but with a London office and the extremely well-networked Ball under its belt, we should be hearing more from them on the wider European scene in future.
Ivaniuk said in a statement that the move “means we can now drive our pan-European business activities from the continent’s most important VC hub, London.”
Ball said “We see a huge opportunity here to connect the dots between West and East. The London ecosystem is an exciting offering for investors in Eastern Europe, which in turn presents unique R&D and growth opportunities for portfolio companies.”
Meanwhile, Jon Bradford was most recently a partner of Motive Partners and a UK investment pioneer — having founded the Springboard Accelerator that merged with Techstars to become Techstars London, as well as helping to co-found F6S and Tech.eu. But he is also on the move, now joining Dynamo Ventures as its newest partner.
Bradford will be joining Dynamo on a full-time basis, having previously been an advisor who helped launch its debut fund. He has invested in over 100 startups over the last decade, including Apiary, Hassle, Tray.io, Flitto (which recently IPO'd in Korea), Sendbird and Chainalysis. Dynamo is a US-EU based seed fund focused on B2B startups in supply chain and mobility. It has invested in 20 startups across the US and overseas, including Sennder (Berlin), Skupos, Stord, Gatik and LEAF Logistics.

European parliament’s NationBuilder contract under investigation by data regulator

Europe’s lead data regulator has issued its first ever sanction of an EU institution — taking enforcement action against the European parliament over its use of US-based digital campaign company, NationBuilder, to process citizens’ voter data ahead of the spring elections.
NationBuilder is a veteran of the digital campaign space (indeed, we first covered the company back in 2011) and has become nearly ubiquitous for digital campaigns in some markets.
But in recent years European privacy regulators have raised questions over whether all its data processing activities comply with regional data protection rules, responding to growing concern around election integrity and data-fuelled online manipulation of voters.
The European parliament had used NationBuilder as a data processor for a public engagement campaign to promote voting in the spring election, which was run via a website called thistimeimvoting.eu.
The website collected personal data from more than 329,000 people interested in the EU election campaign — data that was processed on behalf of the parliament by NationBuilder.
The European Data Protection Supervisor (EDPS), which started an investigation in February 2019, acting on its own initiative — and “taking into account previous controversy surrounding this company” as its press release puts it — found the parliament had contravened regulations governing how EU institutions can use personal data related to the selection and approval of sub-processors used by NationBuilder.
The sub-processors in question are not named. (We’ve asked for more details.)
The parliament received a second reprimand from the EDPS after it failed to publish a compliant Privacy Policy for the thistimeimvoting website within the deadline set by the EDPS, although the regulator says the parliament acted in line with its recommendations in the case of both sanctions.
The EDPS also has an ongoing investigation into whether the Parliament’s use of the voter mobilization website, and related processing operations of personal data, were in accordance with rules applicable to EU institutions (as set out in Regulation (EU) 2018/1725).
The enforcement actions had not been made public until a hearing earlier this week — when assistant data protection supervisor, Wojciech Wiewiórowski, mentioned the matter during a Q&A session in front of MEPs.
He referred to the investigation as “one of the most important cases we did this year”, without naming the data processor. “Parliament was not able to create the real auditing actions at the processor,” he told MEPs. “Neither control the way the contract has been done.”
“Fortunately nothing bad happened with the data but we had to make this contract terminated [and] the data being erased,” he added.
When TechCrunch asked the EDPS for more details about this case on Tuesday a spokesperson told us the matter is “still ongoing” and “being finalized” and that it would communicate about it soon.
Today’s press release looks to be the upshot.
Providing canned commentary in the release, Wiewiórowski writes:
The EU parliamentary elections came in the wake of a series of electoral controversies, both within the EU Member States and abroad, which centred on the threat posed by online manipulation. Strong data protection rules are essential for democracy, especially in the digital age. They help to foster trust in our institutions and the democratic process, through promoting the responsible use of personal data and respect for individual rights. With this in mind, starting in February 2019, the EDPS acted proactively and decisively in the interest of all individuals in the EU to ensure that the European Parliament upholds the highest of standards when collecting and using personal data. It has been encouraging to see a good level of cooperation developing between the EDPS and the European Parliament over the course of this investigation.
One question that arises is why no firmer sanction has been issued to the European parliament — beyond a (now public) reprimand, some nine months after the investigation began.
Another question is why the matter was not more transparently communicated to EU citizens.
The EDPS’ PR emphasizes that its actions “are not limited to reprimands”, without explaining why the two enforcements thus far didn’t merit tougher action. (At the time of writing the EDPS had not responded to questions about why no fines have so far been issued.)
There may be more to come, though.
The regulator says it will “continue to check the parliament’s data protection processes” — revealing that the European Parliament has finished informing individuals of a revised intention to retain personal data collected by the thistimeimvoting website until 2024.
“The outcome of these checks could lead to additional findings,” it warns, adding that it intends to finalise the investigation by the end of this year.
Asked about the case, a spokeswoman for the European parliament told us that the thistimeimvoting campaign had been intended to motivate EU citizens to participate in the democratic process, and that it used a mix of digital tools and traditional campaigning techniques in order to try to reach as many potential voters as possible. 
She said NationBuilder had been used as a customer relations management platform to support staying in touch with potential voters — via an offer to interested citizens to sign up to receive information from the parliament about the elections (including events and general info).
Subscribers were also asked about their interests — which allowed the parliament to send personalized information to people who had signed up.
Some of the regulatory concerns around NationBuilder have centered on how it allows campaigns to match data held in their databases (from people who have signed up) with social media data that’s publicly available, such as an unlocked Twitter account or public Facebook profile.
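For illustration only, here is a minimal sketch of the kind of record-to-profile matching being described. Every name, field and record below is hypothetical, and real matching tools are considerably more sophisticated:

```python
# Hypothetical campaign signup records (first-party database).
signups = [{"email": "a.voter@example.com", "name": "Alex Voter"}]

# Hypothetical publicly visible social profiles.
public_profiles = [{"handle": "@alexvoter", "display_name": "Alex Voter"}]

def normalize(name: str) -> str:
    """Lowercase and strip whitespace so near-identical names compare equal."""
    return "".join(name.lower().split())

# Naive join on normalized display name: the core of 'data matching'.
matches = [
    (s["email"], p["handle"])
    for s in signups
    for p in public_profiles
    if normalize(s["name"]) == normalize(p["display_name"])
]
print(matches)  # [('a.voter@example.com', '@alexvoter')]
```

The privacy concern is precisely this enrichment step: the campaign ends up holding social data that the person who signed up never directly provided.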
In 2017 in France, after an intervention by the national data watchdog, NationBuilder suspended this data matching tool in the market.
The same feature has attracted attention from the UK’s Information Commissioner — which warned last year that political parties should be providing a privacy notice to individuals whose data is collected from public sources such as social media and matched. Yet aren’t.
“The ICO is concerned about political parties using this functionality without adequate information being provided to the people affected,” the ICO said in the report, while stopping short of ordering a ban on the use of the matching feature.
Its investigation confirmed that up to 200 political parties or campaign groups used NationBuilder during the 2017 UK general election.

Twitter to add a way to ‘memorialize’ accounts for deceased users before removing inactive ones

Twitter has changed its tune regarding inactive accounts after receiving a lot of user feedback: It will now be developing a way to “memorialize” user accounts for those who have passed away, before proceeding with a plan it confirmed this week to deactivate accounts that are inactive in order to “present more accurate, credible information” on the service. To the company’s credit, it reacted swiftly after receiving a significant amount of negative feedback on this move, and it seems like the case of deceased users simply wasn’t considered in the decision to proceed with terminating dormant accounts.
After Twitter confirmed the inactive account (those that haven’t tweeted in more than six months) cleanup on Tuesday, a number of users noted that this would also have the effect of erasing the content of accounts whose owners have passed away. TechCrunch alum Drew Olanoff wrote about this impact from a personal perspective, asking Twitter to reconsider their move in light of the human impact and potential emotional cost.
In a thread today detailing its new thinking around inactive accounts, Twitter explained that its current inactive account policy has actually always been in place, but that it hasn't been diligent about enforcing it. It is going to begin doing so in the European Union, partly in accordance with local privacy laws, citing GDPR specifically. But the company also says it will now not be removing any inactive accounts before first implementing a way for inactive accounts belonging to deceased users to be "memorialized," which presumably means preserving their content.
Twitter went on to say that it might expand or refine its inactive account policy to ensure it works with global privacy regulations, but will be sure to communicate these changes broadly before they go into effect.
It’s not yet clear what Twitter will do to offer this ‘memorialization’ of accounts, but there is some precedent they can look to for cues: Facebook has a ‘memorialized accounts’ feature that it introduced for similar reasons.

You can take my Dad’s tweets over my dead body

Volkswagen’s new all-electric concept wagon could be coming to the U.S. by 2022

Volkswagen revealed Tuesday evening a new concept vehicle called the ID Space Vizzion, and despite the crazy Frank Zappaesque name, this one might actually make it into production in Europe and North America.
The ID Space Vizzion is the seventh concept that VW has introduced since 2016 that uses its MEB platform, a flexible modular system — really a matrix of common parts — for producing electric vehicles that VW says make it more efficient and cost-effective.
The first vehicles to use this MEB platform will be under the ID brand, although the platform can and will be used for electric vehicles under other VW Group brands such as Skoda and Seat. The ID.3 is the first model in the new all-electric ID brand and the beginning of the automaker's ambitious plan to sell 1 million EVs annually by 2025.

The ID Space Vizzion is equipped with a rear-mounted 275-horsepower motor and an 82 kilowatt-hour battery pack with a range of up to 300 miles under the EU's WLTP cycle. A second motor can be added to give it all-wheel drive capability and a total output of 355 horsepower.
This concept will likely be described in a number of ways — and during the event at the Petersen Museum in Los Angeles it was — but this is a wagon through and through.

Audi’s next all-electric vehicle, the e-tron Sportback, is a “coupé” SUV

Audi revealed Tuesday evening in Los Angeles the e-tron Sportback as the German automaker begins to chip away at its plan to launch more than 30 electric vehicles and plug-in hybrids by 2025.
The e-tron Sportback reveal ahead of the LA Auto Show follows the launch earlier this year of Audi’s first all-electric vehicle, the 2019 e-tron.
Audi has delivered 18,500 of its all-electric e-tron SUVs globally since March 2019 when the vehicle first came to market. And the company is hoping to grab more, and different, customers with the Sportback.
Audi plans to offer two variants of the vehicle, a Sportback 50 and Sportback 55. The Sportback will come to Europe first in spring 2020. The Sportback 55 will come to the U.S. in fall 2020.

Audi calls this e-tron Sportback an SUV coupé, the latest evidence that automakers are comfortable pushing the boundaries of traditional automotive terminology. This is not a two-door car with a fixed roof and a sloping rear, although there are "coupé" elements in the design.
This is in fact an SUV with a roof that extends flat over the body and then drops steeply to the rear — that's where the coupé name comes in — and into the D pillar of the vehicle. Then there's the classic "Sportback" feature in the body where the lower edge of the side window rises toward the rear.
There are design details repeated throughout the exterior, specifically the four-bar pattern in the headlamps, front grille and wheels. And of course there are special interior and exterior finishes (13 paint colors in all) and a first edition version customers can buy. The base price of the Sportback is €71,350 (~$79,000).
But importantly, besides some styling and design changes, this vehicle boasts longer range and, for everyone outside the U.S., futuristic-looking side mirrors and new lighting tech.
The 2020 Audi e-tron Sportback has an 86.4 kilowatt-hour battery pack with a range of up to 446 kilometers (277.1 miles) on the EU's WLTP cycle. The EPA estimates aren't out yet, but expect the range numbers to be slightly lower.
The company is targeting an EPA range of about 220 miles, up from the 204 miles of range that the regular e-tron gets.
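The unit arithmetic behind those figures is simple, though note that the WLTP and EPA test cycles themselves differ, which is why EPA numbers come in lower than a straight conversion would suggest:

```python
KM_PER_MILE = 1.609344  # standard conversion constant

def km_to_miles(km: float) -> float:
    """Convert kilometres to statute miles."""
    return km / KM_PER_MILE

wltp_range_km = 446
print(f"WLTP range: {km_to_miles(wltp_range_km):.1f} miles")  # ~277.1 miles

# Audi's ~220-mile EPA target is well below the converted WLTP figure,
# reflecting the stricter EPA test cycle rather than a smaller battery.
```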
Audi was able to improve the range by increasing the net battery capacity. It also decoupled the front motor and improved the thermal management.
Lighting and mirrors
Audi is known for its lighting, and the company has made it a key feature of the Sportback. The vehicle has new digital matrix headlights that break light down into tiny pixels, allowing precise, high-resolution illumination.
Inside the headlight is a digital micromirror device (DMD) that acts like a video projector: a small chip from Texas Instruments containing one million micromirrors, each of which can be tilted up to 5,000 times per second.
The upshot: The headlights can project specific patterns on the road or illuminate certain areas more brightly. And for fun, animations like the e-tron or Audi logos can be projected on a wall when the vehicle is stopped.
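As a rough mental model (not Audi's actual control software), you can think of the DMD as a binary array of mirrors, with beam patterns produced by flipping regions on and off. A toy sketch:

```python
import numpy as np

# Toy model of a digital micromirror device: each array element is one
# micromirror, either steering light onto the road (1) or away (0).
# The resolution here is illustrative; the real chip has ~1M mirrors.
H, W = 100, 200
mirrors = np.zeros((H, W), dtype=np.uint8)

# 'Carpet of light': brightly illuminate only the driver's own lane,
# modelled here as a vertical band of the projection area.
mirrors[:, 70:130] = 1

# A lane change just shifts the band; flipping mirrors up to 5,000
# times per second makes the transition appear seamless.
mirrors = np.roll(mirrors, shift=30, axis=1)
print(f"{mirrors.sum()} of {mirrors.size} mirrors currently lit")
```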

The safety piece of this is the most interesting. For instance, on a freeway the light might create a carpet of light that brightly illuminates the driver's own lane and adjusts dynamically when he or she changes lanes.
Then there are the virtual exterior mirrors. The wing-shaped side units don't hold conventional mirror glass; instead, they integrate small cameras. The captured images appear on high-contrast OLED displays inside the car, between the instrument panel and the door.

If the driver moves their finger toward the surface of the touch display, symbols are activated with which the driver can reposition the image. The mirrors can also adjust automatically to three driving situations: highway driving, turning and parking.
Neither the virtual mirrors nor the digital matrix LED lighting is available in the U.S., and neither will be until the government changes its Federal Motor Vehicle Safety Standards (FMVSS), the regulations that dictate the design, construction, performance and durability requirements for motor vehicles.

A 10-point plan to reboot the data industrial complex for the common good

A posthumous manifesto by Giovanni Buttarelli, who until his death this summer was Europe’s chief data protection regulator, seeks to join the dots of surveillance capitalism’s rapacious colonization of human spaces, via increasingly pervasive and intrusive mapping and modelling of our data, with the existential threat posed to life on earth by manmade climate change.
In a dense document rich with insights and ideas around the notion that “data means power” — and therefore that the unequally distributed data-capture capabilities currently enjoyed by a handful of tech platforms sums to power asymmetries and drastic social inequalities — Buttarelli argues there is potential for AI and machine learning to “help monitor degradation and pollution, reduce waste and develop new low-carbon materials”. But only with the right regulatory steerage in place.
“Big data, AI and the internet of things should focus on enabling sustainable development, not on an endless quest to decode and recode the human mind,” he warns. “These technologies should — in a way that can be verified — pursue goals that have a democratic mandate. European champions can be supported to help the EU achieve digital strategic autonomy.”
“The EU’s core values are solidarity, democracy and freedom,” he goes on. “Its conception of data protection has always been the promotion of responsible technological development for the common good. With the growing realisation of the environmental and climatic emergency facing humanity, it is time to focus data processing on pressing social needs. Europe must be at the forefront of this endeavour, just as it has been with regard to individual rights.”
One of his key calls is for regulators to enforce transparency of dominant tech companies — so that “production processes and data flows are traceable and visible for independent scrutiny”.
“Use enforcement powers to prohibit harmful practices, including profiling and behavioural targeting of children and young people and for political purposes,” he also suggests.
Another point in the manifesto urges a moratorium on “dangerous technologies”, citing facial recognition and killer drones as examples, and calling generally for a pivot away from technologies designed for “human manipulation” and toward “European digital champions for sustainable development and the promotion of human rights”.
In an afterword penned by Shoshana Zuboff, the US author and scholar writes in support of the manifesto’s central tenet, warning pithily that: “Global warming is to the planet what surveillance capitalism is to society.”
There’s plenty of overlap between Buttarelli’s ideas and Zuboff’s — who has literally written the book on surveillance capitalism. Data concentration by powerful technology platforms is also resulting in algorithmic control structures that give rise to “a digital underclass… comprising low-wage workers, the unemployed, children, the sick, migrants and refugees who are required to follow the instructions of the machines”, he warns.
“This new instrumentarian power deprives us not only of the right to consent, but also of the right to combat, building a world of no exit in which ignorance is our only alternative to resigned helplessness, rebellion or madness,” she agrees.
There are no fewer than six afterwords attached to the manifesto — a testament to the esteem in which Buttarelli's ideas are held among privacy, digital and human rights campaigners.
The manifesto “goes far beyond data protection”, says writer Maria Farrell in another contribution. “It connects the dots to show how data maximisation exploits power asymmetries to drive global inequality. It spells out how relentless data-processing actually drives climate change. Giovanni’s manifesto calls for us to connect the dots in how we respond, to start from the understanding that sociopathic data-extraction and mindless computation are the acts of a machine that needs to be radically reprogrammed.”
At the core of the document is a 10-point plan for what’s described as “sustainable privacy”, which includes the call for a dovetailing of the EU’s digital priorities with a Green New Deal — to “support a programme for green digital transformation, with explicit common objectives of reducing inequality and safeguarding human rights for all, especially displaced persons in an era of climate emergency”.
Buttarelli also suggests creating a forum for civil liberties advocates, environmental scientists and machine learning experts who can advise on EU funding for R&D to put the focus on technology that “empowers individuals and safeguards the environment”.
Another call is to build a “European digital commons” to support “open-source tools and interoperability between platforms, a right to one’s own identity or identities, unlimited use of digital infrastructure in the EU, encrypted communications, and prohibition of behaviour tracking and censorship by dominant platforms”.
“Digital technology and privacy regulation must become part of a coherent solution for both combating and adapting to climate change,” he suggests in a section dedicated to a digital Green New Deal — even while warning that current applications of powerful AI technologies appear to be contributing to the problem.
“AI’s carbon footprint is growing,” he points out, underlining the environmental wastage of surveillance capitalism. “Industry is investing based on the (flawed) assumption that AI models must be based on mass computation.
“Carbon released into the atmosphere by the accelerating increase in data processing and fossil fuel burning makes climatic events more likely. This will lead to further displacement of peoples and intensification of calls for ‘technological solutions’ of surveillance and border controls, through biometrics and AI systems, thus generating yet more data. Instead, we need to ‘greenjacket’ digital technologies and integrate them into the circular economy.”
Another key call — and one Buttarelli had been making presciently in recent years — is for more joint working between EU regulators towards common sustainable goals.
“All regulators will need to converge in their policy goals — for instance, collusion in safeguarding the environment should be viewed more as an ethical necessity than as a technical breach of cartel rules. In a crisis, we need to double down on our values, not compromise on them,” he argues, going on to voice support for antitrust and privacy regulators to co-operate to effectively tackle data-based power asymmetries.
“Antitrust, democracies’ tool for restraining excessive market power, therefore is becoming again critical. Competition and data protection authorities are realising the need to share information about their investigations and even cooperate in anticipating harmful behaviour and addressing ‘imbalances of power rather than efficiency and consent’.”
On the General Data Protection Regulation (GDPR) specifically — Europe’s current framework for data protection — Buttarelli gives a measured assessment, saying “first impressions indicate big investments in legal compliance but little visible change to data practices”.
He says Europe’s data protection authorities will need to use all the tools at their disposal — and find the necessary courage — to take on the dominant tracking and targeting digital business models fuelling so much exploitation and inequality.
He also warns that GDPR alone “will not change the structure of concentrated markets or in itself provide market incentives that will disrupt or overhaul the standard business model”.
“True privacy by design will not happen spontaneously without incentives in the market,” he adds. “The EU still has the chance to entrench the right to confidentiality of communications in the ePrivacy Regulation under negotiation, but more action will be necessary to prevent further concentration of control of the infrastructure of manipulation.”
Looking ahead, the manifesto paints a bleak picture of where market forces could be headed without regulatory intervention focused on defending human rights. "The next frontier is biometric data, DNA and brainwaves — our thoughts," he suggests. "Data is routinely gathered in excess of what is needed to provide the service; standard tropes, like 'improving our service' and 'enhancing your user experience' serve as decoys for the extraction of monopoly rents."
There is optimism too, though — that technology in service of society can be part of the solution to existential crises like climate change; and that data, lawfully collected, can support public good and individual self-realization.
“Interference with the right to privacy and personal data can be lawful if it serves ‘pressing social needs’,” he suggests. “These objectives should have a clear basis in law, not in the marketing literature of large companies. There is no more pressing social need than combating environmental degradation” — adding that: “The EU should promote existing and future trusted institutions, professional bodies and ethical codes to govern this exercise.”
In instances where platforms are found to have systematically gathered personal data unlawfully, Buttarelli floats the interesting idea of an amnesty for those responsible "to hand over their optimisation assets", as a means not only of resetting power asymmetries and rebalancing the competitive playing field but also of enabling societies to reclaim these stolen assets and reapply them for a common good.
His hope for Europe's Data Protection Board — the body which offers guidance and coordinates interactions between EU Member States' data watchdogs — is that it will be "the driving force supporting the Global Privacy Assembly in developing a common vision and agenda for sustainable privacy".
The manifesto also calls for European regulators to better reflect the diversity of people whose rights they’re being tasked with safeguarding.
The document, which is entitled Privacy 2030: A vision for Europe, has been published on the website of the International Association of Privacy Professionals ahead of its annual conference this week.
Buttarelli had intended — but was finally unable — to publish his thoughts on the future of privacy this year, hoping to inspire discussion in Europe and beyond. In the event, the manifesto has been compiled posthumously by Christian D’Cunha, head of his private office, who writes that he has drawn on discussions with the data protection supervisor in his final months — with the aim of plotting “a plausible trajectory of his most passionate convictions”.

Microsoft announces changes to cloud contract terms following EU privacy probe

Chalk up another win for European data protection: Microsoft has announced changes to commercial cloud contracts following privacy concerns raised by European Union data protection authorities.
The changes to contractual terms will apply globally and to all its commercial customers — whether public or private sector entity, or large or small business, it said today.
The new contractual provisions will be offered to all public sector and enterprise customers at the beginning of 2020, it adds.
In October Europe’s data protection supervisor warned that preliminary results of an investigation into contractual terms for Microsoft’s cloud services had raised serious concerns about compliance with EU data protection rules and the role of the tech giant as a data processor for EU institutions.
Writing on its EU Policy blog, Julie Brill, Microsoft’s corporate VP for global privacy and regulatory affairs and chief privacy officer, announces the update to privacy provisions in the Online Services Terms (OST) of its commercial cloud contracts — saying it’s making the changes as a result of “feedback we’ve heard from our customers”.
“The changes we are making will provide more transparency for our customers over data processing in the Microsoft cloud,” she writes.
She also says the changes reflect those Microsoft developed in consultation with the Dutch Ministry of Justice and Security — which comprised both amended contractual terms and technical safeguards and settings — after the latter carried out risk assessments of Microsoft’s OST earlier this year and also raised concerns.
Specifically, Microsoft is accepting greater data protection responsibilities for additional processing involved in providing enterprise services, such as account management and financial reporting, per Brill:
Through the OST update we are announcing today we will increase our data protection responsibilities for a subset of processing that Microsoft engages in when we provide enterprise services. In the OST update, we will clarify that Microsoft assumes the role of data controller when we process data for specified administrative and operational purposes incident to providing the cloud services covered by this contractual framework, such as Azure, Office 365, Dynamics and Intune. This subset of data processing serves administrative or operational purposes such as account management; financial reporting; combatting cyberattacks on any Microsoft product or service; and complying with our legal obligations.
Microsoft currently designates itself as a data processor, rather than data controller, for these administrative and operational functions that can be linked to provision of commercial cloud services, such as its Azure platform.
But under Europe's General Data Protection Regulation a data controller has the widest obligations around handling personal data — with responsibility under Article 5 of the GDPR for the lawfulness, fairness and security of the data being processed — and therefore also greater legal risk should it fail to meet the standard.
So, from a regulatory point of view, Microsoft’s current commercial contract structure poses a risk for EU institutions of user data ending up being processed under a lower standard of legal protection than is merited.
The announced switch from data processor to data controller should raise the bar of legal protection around these administrative and operational purposes associated with Microsoft's provision of commercial cloud services.
For the provision of the core cloud services themselves, Microsoft says it will remain the data processor, as well as for improving and addressing bugs or other issues related to the service, ensuring security of the services, and keeping the services up to date.
In August a conference organized jointly by the EU's data protection supervisor and the Dutch Ministry brought together EU customers of cloud giants to work on a joint response to regulatory risks related to cloud software provision.
Earlier this year the Dutch Ministry obtained contractual changes and technical safeguards and settings in the amended contracts it agreed with Microsoft.
“The only substantive differences in the updated terms [that will roll out globally for all commercial cloud customers] relate to customer-specific changes requested by the Dutch MOJ, which had to be adapted for the broader global customer base,” Brill writes now.
Microsoft’s blog post also points to other global privacy-related changes it says were made following feedback from the Dutch MOJ and others — including a roll out of new privacy tools across major services; specific changes to Office 365 ProPlus; and increased transparency regarding use of diagnostic data.

Fertility startup Mojo wants to take the trial and error out of IVF

Fertility tech startup Mojo is coming out of stealth to announce a €1.7 million (~$1.8M) seed round of funding led by Nordic seed fund Inventure. Also participating are Doberman and Privilege Ventures (an investor in Ava), plus a number of angel investors including Josefin Landgard (founder and ex-CEO of Kry) and Hampus Jakobsson (partner at BlueYard, BA in Clue & Kind.app).
Mojo’s mission, says co-founder and CEO Mohamed Taha, is to make access to fertility treatment more affordable and accessible by using AI and robotics technology to assist in sperm and egg quality analysis, selection and fertilization to reduce costs for clinics. Only by reducing clinics’ costs will the price fall for couples, he suggests.
“What the AI does in our technology stack from now until our roadmap is completed, product wise, is to look at sperm, look at eggs, look at data and ensure that the woman or the couple get precise treatment or the precise embryo that yields healthy baby,” he tells TechCrunch. “The role of robotics is to ensure that the manipulations/procedures are done precisely and at reduced time compared to nowadays, and also accurately.”
The idea for the business came to Taha after he was misdiagnosed with a kidney condition while still a student. His doctor suggested freezing his sperm as a precaution against deterioration in case he wanted to father a child in the future, so he started having regular sperm tests. “I was super annoyed with one particular fact,” he says of this. “Every time I do a sperm test I get a different result.”
After speaking to doctors the consensus view of male fertility he heard was “I shouldn’t care about my fertility — worst case scenario all that they need from me is one sperm”. He was told it would be his future partner who would be put on IVF to “take the treatment for me”. Doctors also told him there was little research into male fertility, and therefore into sperm quality — such as which sperm might yield a healthy baby or could result in a miscarriage. And after learning about what IVF entailed, Taha says it struck him as a “tough” deal for the woman.
“It’s completely blackbox,” he says of male fertility. “I also learned that in terms of IVF or ART [assisted reproductive technologies] everything, pretty much, is done manually. And everything, pretty much, also is done at random — you select a random sperm, they fertilize it with a random egg. Hopefully the technician who’s doing it manually knows his or her job. And in the end there’s going to be an embryo that will be implanted.”
He says he was also struck by the fact that the 'trial and error' process only works 25% of the time in high-end laboratories, yet can cost prospective parents between €40,000 and €100,000 for each round of treatment. "This is where the idea of the company came from," he adds. Mojo's expectation is that its technology will be able to increase IVF success rates to 75% by 2030.
The team started work in 2016 as a weekend project during their PhDs. Taha initially trained as an electrical engineer before going on to do a PhD in nanotechnology, investigating new and affordable materials for use as biosensors. It was the microscopes and robotic arms that he and his co-founders, Fanny Chesa, Tobias Boecker and Daniel Thomas, were using in the labs to examine nanoparticles and select specific particles for insertion into other media that led them to think: why not adapt this type of technology for use in fertility clinics, as an alternative to purely manual selection and fertilization?
“We just completely automate everything to ensure that the procedure is done faster, better and at the same time more reliably,” Taha says of the concept for Mojo. “No randomness. Understand the good from the bad.”
That — at least — is the theory. To be clear, they don't yet have their proposition robustly proved out or productized at this stage. Their intended first product, called Mojo Pro, is still pending certification as a medical device in the EU, for example. But the aim, should everything go to plan, is to get it to market next summer, starting in the UK.
This product, a combination of microscopy hardware and AI software, will be sold to fertility clinics (under a subscription model) to offer an analysis service consisting of a sperm count and quality check — as a first service for couples to determine whether or not the man has a fertility problem.
Initially, Mojo’s computer vision analysis system is focused on sperm counts, automating what Taha says is currently a manual process, as well as assessing some basic quality signals — such as the speed and morphology of the sperm. For example, a sperm with two heads or two tails would be an easy initial judgement call to weed out as “bad”, he suggests.
“The first product is to look at the sperm and say if this man experiences infertility or not. So we have a smart microscopy — built custom in-house. And this is where the element of the robotics comes in,” he explains. “At the same time we put on it an AI that looks at a moving sperm sample. Then, through looking at this, the system on Mojo Pro will tell us what is the sperm count, what is the sperm mobility (how fast they move) and what is the predominant shape of the sperm.
“The second part is the selection of the sperm [i.e. if the sample is needed for IVF]. Now we ensure that good sperm is being selected. This microscopy will look at the same and visually will guide the embryologist to pick the good sperm — that’s highlighted around, for example, by a green box. Good sperm have green boxes around them, bad sperm have red boxes around them so they can pick up through their current techniques the sperm that are highlighted green.”

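To make the green-box/red-box idea concrete, here is an illustrative overlay sketch using OpenCV. Mojo's actual detector and classifier are proprietary, so both are stubbed out with hypothetical values here:

```python
import cv2  # OpenCV

def looks_good(d: dict) -> bool:
    """Hypothetical stand-in for the classifier: crude motility
    and morphology checks on a tracked cell."""
    return d["speed"] > 25 and d["heads"] == 1 and d["tails"] == 1

frame = cv2.imread("sample_frame.png")  # hypothetical microscopy frame

# Hypothetical detector output: bounding boxes plus per-cell features.
detections = [
    {"box": (40, 60, 90, 110), "speed": 31, "heads": 1, "tails": 1},
    {"box": (150, 30, 200, 80), "speed": 12, "heads": 2, "tails": 1},
]

GREEN, RED = (0, 255, 0), (0, 0, 255)  # OpenCV uses BGR channel order
for d in detections:
    x1, y1, x2, y2 = d["box"]
    color = GREEN if looks_good(d) else RED
    cv2.rectangle(frame, (x1, y1), (x2, y2), color, thickness=2)

cv2.imwrite("annotated_frame.png", frame)  # the embryologist sees this overlay
```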
Based on internal testing of Mojo Pro the system has achieved 97% of the accuracy of a manual sperm count so far, per Taha, who says further optimization is planned.
Though he admits there’s no standardization of sperm counts in the fertility industry — which means such comparative metrics offer limited utility, given the lack of robust benchmarks.
“The way we are going with this is we’re really choosing the best of the best practitioners and we are just comparing our work against them for now,” is the claim. (Mojo’s lab partner for developing the product is TDL.)
“We will try to introduce new standards for ourselves,” he adds.
The current research focus is: “What are the visuals to make sure the sperm is good or bad; how to actually measure the sperm sample, the sperm count; in terms of morphology… how we can incorporate a protocol that can be the gold standard of computer vision or AI looking at sperm?”
The wider goal for the business is to understand much more about the role that individual sperm and eggs play in yielding a healthy (or otherwise) embryo and baby.
Taha says the team’s ultimate goal is “automating the fertilization process”, again with the help of applied AI and robotics (and likely also incorporating genetic testing to screen for diseases).
He points out that in many markets couples are choosing to conceive later in life. The big vision, therefore, is to develop new assisted reproductive technologies that can support older couples to conceive healthy babies.
“Generally speaking we leave our fertility to chance — which is sex… So there’s a little bit of randomness in the process. This doesn’t necessarily mean it’s bad — it’s how the body functions. But when you hit later ages, 30 or 40, we face biological deficiencies which means the quality of the eggs are not good any more, the quality of the sperm might not be good any more, if fertilization happens with old gametes… you are not sure there is a healthy baby. So we need technology to play a role here.
“Imagine a couple at the age of 40 who want to conceive a baby ten, twelve years from now. What happens if this couple have the possibility of the sperm of the man to be shipped somewhere, the egg of the woman to be shipped somewhere and they get fertilized using high end technology, and they get informed once the embryo is ready to be implanted. This is where we believe the consumer game will be in the future,” he says.
“We envisage ourselves going from just working with clinics in the coming ten years… making our AI and our robotics really flawless at manipulation, and then we are envisaging of having as consumer-facing way where we ensure people have healthy babies. Not necessarily this will be a clinic but it will be somehow where fertilization will happen in our facilities.”
“I’m not speaking about super humans or designer babies,” he adds. “I’m speaking about ensuring at a later stage of the conception journey to have a healthy baby. And this is where we see ART can actually be the way to procreate at later stages in order to ensure that the baby is healthy then there should be new technologies that just give you a healthy baby — and not mess up with your body.”
Of course this is pure concept right now. And Taha concedes that Mojo doesn't even have the data to determine "good" sperm from "bad" — beyond some basic signifiers.
But once samples start flowing via customers of the first product they expect to be able to start gathering data (with permission) to support further research into the role played by individual sperm and eggs in reproduction — looking at the whole journey from sperm and egg selection through to embryo and baby.
Though getting permission for all elements of the research they hope to do may be one potential barrier.
“Once the first module is in the market we will be collecting data,” he says. “And this data that we’ll be collecting will go and be associated with the live births or the treatment outcome. And with that we’ll understand more and more what is a good sperm, what is a bad sperm.
“But we need to start from somewhere. And this somewhere right now what we’re relying on is the knowledge that good practitioners have in the field.”
Taha says he and his co-founders actively started building the company in January 2018, taking in some angel investment, along with government grants from France and the EU’s Horizon 2020 research pot.
They’ve been building the startup out of Lyon, France but the commercial team will shortly be moving to the UK ahead of launching Mojo Pro.
In the short term the hope is to attract clinics to adopt the Mojo Pro subscription service as a way for them to serve more customers, while potentially helping couples reduce the number of IVF cycles they have to go through. Longer term, the bet is that changing lifestyles will only see demand for data-fuelled, technology-assisted reproduction grow.
“Now we help streamline laboratory processes in order to help the 180M people who have fertility problems have access to fertility at an affordable price and reliable manner but also we have an eye on the future — what happens when genetic testing… [plays] an important role in the procreation and people will opt for this,” he adds.

It’s a new era for fertility tech

Messaging app Wire confirms $8.2M raise, responds to privacy concerns after moving holding company to the US

Big changes are afoot for Wire, an enterprise-focused end-to-end encrypted messaging app and service that advertises itself as “the most secure collaboration platform”. In February, Wire quietly raised $8.2 million from Morpheus Ventures and others, we’ve confirmed — the first funding amount it has ever disclosed — and alongside that external financing, it moved its holding company in the same month to the US from Luxembourg, a switch that Wire’s CEO Morten Brogger described in an interview as “simple and pragmatic.”
He also said that Wire is planning to introduce a freemium tier to its existing consumer service — which itself has half a million users — while working on a larger round of funding to fuel more growth of its enterprise business. That growth push is a key reason for moving to the US, he added: there is more money to be raised there.
“We knew we needed this funding and additional to support continued growth. We made the decision that at some point in time it will be easier to get funding in North America, where there’s six times the amount of venture capital,” he said.
While Wire has moved its holding company to the US, it is keeping the rest of its operations as is. Customers are licensed and serviced from Wire Switzerland; the software development team is in Berlin, Germany; and hosting remains in Europe.
The news of Wire’s US move and the basics of its February funding — sans value, date or backers — came out this week via a blog post that raises questions about whether a company that trades on the idea of data privacy should itself be more transparent about its activities.
The changes to Wire’s financing and legal structure had not been communicated to users until news started to leak out, which brings up questions not just about transparency, but about how secure Wire’s privacy policy will play out, given the company’s ownership now being on US soil.

So turns out @wire changed ownership, didn’t really notify anyone as per their own privacy policy, and worst of all it’s to a US entity. It’s been proven time after time we shouldn’t place our data (or trust) into US entities. I used wire because it was different. Cc @Snowden https://t.co/i2cwAhMaTQ
— Peter Sunde Kolmisoppi (@brokep) November 12, 2019

It was an issue picked up and amplified by NSA whistleblower Edward Snowden. Via Twitter, he described the move to the US as "not appropriate for a company claiming to provide a secure messenger — claims a large number of human rights defenders relied on."

If you’re a tech journalist, you should be digging into the story behind what’s going on behind the curtain here. This is not appropriate for a company claiming to provide a secure messenger — claims a large number of human rights defenders relied on — and we need facts. https://t.co/iV4tRZwgDR
— Edward Snowden (@Snowden) November 12, 2019

The key question is whether Wire’s shift to the US puts users’ data at risk — a question that Brogger claims is straightforward to answer: “We are in Switzerland, which has the best privacy laws in the world” — it’s subject to Europe’s General Data Protection Regulation framework (GDPR) on top of its own local laws — “and Wire now belongs to a new group holding, but there no change in control.” 
In its blog post published in the wake of blowback from privacy advocates, Wire also claims it “stands by its mission to best protect communication data with state-of-the-art technology and practice” — listing several items in its defence:
All source code has been and will be available for inspection on GitHub (github.com/wireapp).
All communication through Wire is secured with end-to-end encryption — messages, conference calls, files. The decryption keys are only stored on user devices, not on our servers (a toy sketch of this device-held key model follows this list). It also gives companies the option to deploy their own instances of Wire in their own data centers.
Wire has started working on a federated protocol to connect on-premise installations and make messaging and collaboration more ubiquitous.
Wire believes that data protection is best achieved through state-of-the-art encryption and continues to innovate in that space with Messaging Layer Security (MLS).
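To illustrate the device-held key model Wire describes, here is a generic toy example using libsodium via the PyNaCl library. This is not Wire's actual protocol, just the basic shape of end-to-end encryption:

```python
from nacl.public import PrivateKey, Box  # pip install pynacl

# Each device generates and keeps its own private key; a relay server
# only ever sees public keys and ciphertext.
alice_key = PrivateKey.generate()  # stays on Alice's device
bob_key = PrivateKey.generate()    # stays on Bob's device

# Alice encrypts for Bob with her private key and his public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# A server holding only `ciphertext` cannot read the message;
# decryption requires a device-held private key.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```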
But where data privacy and US law are concerned, it’s complicated. Snowden famously leaked scores of classified documents disclosing the extent of US government mass surveillance programs in 2013, including how data-harvesting was embedded in US-based messaging and technology platforms.
Six years on, the political and legal ramifications of that disclosure are still playing out — with a key judgement pending from Europe’s top court which could yet unseat the current data transfer arrangement between the EU and the US.
Privacy versus security
Wire launched at a time when interest in messaging apps was at a high watermark. The company made its debut in the middle of February 2014, and it was only one week later that Facebook acquired WhatsApp for the princely sum of $19 billion. We described Wire's primary selling point at the time as a "reimagining of how a communications tool like Skype should operate had it been built today" rather than in 2003.
That meant encryption and privacy protection, but also better audio tools, file compression and more. It was a pitch that seemed especially compelling considering the background of the company. Skype co-founder Janus Friis and funds connected to him were the startup's first backers (and they remain the largest shareholders); Wire was co-founded by Skype alums Jonathan Christensen and Alan Duric (no longer with the company); and even new investor Morpheus has Skype roots.

Skype Co-Founder Backs Wire, A New Communications App Launching Today On iOS, Android And Mac

Even with the Skype pedigree, the strategy faced a big challenge.
“The consumer messaging market is lost to the Facebooks of the world, which dominate it,” Brogger said today. “However, we made a clear insight, which is the core strength of Wire: security and privacy.”
That, combined with trend around the consumerization of IT that’s brought new tools to business users, is what led Wire to the enterprise market in 2017.
But fast forward to today and it may not be so simple. Even if security and privacy are two sides of the same coin, deciding which to optimise for, in terms of features and future development, is now part of the question, and it is what critics are concerned with.
“Wire was always for profit and planned to follow the typical venture backed route of raising rounds to accelerate growth,” one source familiar with the company told us. “However, it took time to find its niche (B2B, enterprise secure comms).
“It needed money to keep the operations going and growing. [But] the new CEO, who joined late 2017, didn’t really care about the free users, and the way I read it now, the transformation is complete: ‘If Wire works for you, fine, but we don’t really care about what you think about our ownership or funding structure as our corporate clients care about security, not about privacy.’”
And that is the message you get from Brogger, too, who describes individual consumers as “not part of our strategy”, but also not entirely removed from it, either, as the focus shifts to enterprises and their security needs.
Brogger said there are still half a million individuals on the platform, and they will come up with ways to continue to serve them under the same privacy policies and with the same kind of service as the enterprise users. “We want to give them all the same features with no limits,” he added. “We are looking to switch it into a freemium model.”
On the other side, “We are having a lot of inbound requests on how Wire can replace Skype for Business,” he said. “We are the only one who can do that with our level of security. It’s become a very interesting journey and we are super excited.”
Part of the company's push into enterprise has also seen it make a number of hires. This has included bringing in two former Huddle C-suite execs, Brogger as CEO and Rasmus Holst as chief revenue officer — a bench that Wire expanded this week with three new hires from three other B2B businesses: a VP of EMEA sales from New Relic, a VP of finance from Contentful and a VP of Americas sales from Xeebi.
Such growth comes with a price-tag attached to it, clearly. Which is why Wire is opening itself to more funding and more exposure in the US, but also more scrutiny and questions from those who counted on its services before the change.
Brogger said inbound interest has been strong and he expects the startup’s next round to close in the next two to three months.

Dutch court orders Facebook to ban celebrity crypto scam ads after another lawsuit

A Dutch court has ruled that Facebook can be required to use filter technologies to identify and pre-emptively take down fake ads linked to cryptocurrency scams that carry the image of media personality John de Mol.
The Dutch celebrity filed a lawsuit against Facebook in April over the misappropriation of his likeness to shill Bitcoin scams via fake ads run on its platform.
In an immediately enforceable preliminary judgement today the court has ordered Facebook to remove all offending ads within five days, and provide de Mol with data on the accounts running them within a week.
Per the judgement, victims of the crypto scams had reported a total of €1.7 million (~$1.8M) in damages to the Dutch government at the time of the court summons.
The case is similar to a legal action instigated by UK consumer advice personality, Martin Lewis, last year, when he announced defamation proceedings against Facebook — also for misuse of his image in fake ads for crypto scams.
Lewis withdrew the suit at the start of this year after Facebook agreed to apply new measures to tackle the problem: Namely a scam ads report button. It also agreed to provide funding to a UK consumer advice organization to set up a scam advice service.
In the de Mol case the lawsuit was allowed to run its course — resulting in today’s preliminary judgement against Facebook. It’s not yet clear whether the company will appeal but in the wake of the ruling Facebook has said it will bring the scam ads report button to the Dutch market early next month.
In court, the platform giant sought to argue that it could not more proactively remove the Bitcoin scam ads containing de Mol’s image on the grounds that doing so would breach EU law against general monitoring conditions being placed on Internet platforms.
However the court rejected that argument, citing a recent ruling by Europe’s top court related to platform obligations to remove hate speech, and concluding that the requested measures were specific enough that they could not be classified as ‘general obligations of supervision’.
It also rejected arguments by Facebook’s lawyers that restricting the fake scam ads would be restricting the freedom of expression of a natural person, or the right to be freely informed — pointing out that the ‘expressions’ involved are aimed at commercial gain, as well as including fraudulent practices.
Facebook also sought to argue it is already doing all it can to identify and take down the fake scam ads — saying too that its screening processes are not perfect. But the court said there’s no requirement for 100% effectiveness for additional proactive measures to be ordered. Its ruling further notes a striking reduction in fake scam ads using de Mol’s image since the lawsuit was announced. 
Facebook’s argument that it’s just a neutral platform was also rejected, with the court pointing out that its core business is advertising.
It also took the view that requiring Facebook to apply technically complicated measures and extra effort, including in terms of manpower and costs, to more effectively remove offending scam ads is not unreasonable in this context.
The judgement orders Facebook to remove fake scam ads containing de Mol’s likeness from Facebook and Instagram within five days of the order — with a penalty of €10k for each day Facebook fails to comply, up to a maximum of €1M (~$1.1M).
The court order also requires Facebook to provide de Mol with data on the accounts that had been misusing his image within seven days of the judgement, with a further penalty of €1k per day for failure to comply, up to a maximum of €100k. (In both cases the cap corresponds to 100 days of non-compliance.)
Facebook has also been ordered to pay the case costs.
Responding to the judgement in a statement, a Facebook spokesperson told us:
We have just received the ruling and will now look at its implications. We will consider all legal actions, including appeal. Importantly, this ruling does not change our commitment to fighting these types of ads. We cannot stress enough that these types of ads have absolutely no place on Facebook and we remove them when we find them. We take this very seriously and will therefore make our scam ads reporting form available in the Netherlands in early December. This is an additional way to get feedback from people, which in turn helps train our machine learning models. It is in our interest to protect our users from fraudsters and when we find violators we will take action to stop their activity, up to and including taking legal action against them in court.
One legal expert describes the judgement as “pivotal”. Law professor Mireille Hildebrandt told us that it provides an alternative legal route for Facebook users to litigate and pursue collective enforcement of European personal data rights, rather than suing for damages, which entails a high burden of proof.
Injunctions are faster and more effective, Hildebrandt added.

This is a tort case, not based on GDPR. Enforcement of GDPR obligations can also be ensured via such penalty payments (much more effective than fines), ex art. 79 GDPR, which basically requires that tort actions can be filed against controllers if unlawful processing 4/4
— Mireille Hildebrandt (@mireillemoret) November 12, 2019

The judgement also raises questions around the burden of proof for demonstrating Facebook has removed scam ads with sufficient (increased) accuracy; and what specific additional measures it might deploy to improve its takedown rate.
Although the introduction of the ‘report scam ad button’ does provide one clear avenue for measuring takedown performance.
The button was finally rolled out to the UK market in July. And while Facebook has talked since the start of this year about ‘envisaging’ introducing it in other markets it hasn’t exactly been proactive in doing so — until now, with this court order.

Facebook agrees to pay UK data watchdog’s Cambridge Analytica fine but settles without admitting liability

Facebook has reached a settlement with the UK’s data protection watchdog, the ICO, agreeing to pay in full a £500,000 (~$643k) fine following the latter’s investigation into the Cambridge Analytica data misuse scandal.
As part of the arrangement Facebook has agreed to drop its legal appeal against the penalty. But under the terms of the settlement it has not admitted any liability in relation to paying the fine, which is the maximum possible monetary penalty under the applicable UK data protection law. (The Cambridge Analytica scandal predates Europe’s GDPR framework coming into force.)
Facebook’s appeal against the ICO’s penalty was focused on a claim that there was no evidence that UK Facebook users’ data had been misused by Cambridge Analytica.
But there’s a further twist here in that the company had secured a win, from a first tier legal tribunal — which held in June that “procedural fairness and allegations of bias” on the part of the ICO should be considered as part of its appeal.
The decision required the ICO to disclose materials relating to its decision-making process regarding the Facebook fine. The ICO, evidently less than keen for its emails to be trawled through, appealed last month. It’s now withdrawing that appeal as part of the settlement, Facebook having dropped its own legal action.
In a statement laying out the bare bones of the settlement reached, the ICO writes: “The Commissioner considers that this agreement best serves the interests of all UK data subjects who are Facebook users. Both Facebook and the ICO are committed to continuing to work to ensure compliance with applicable data protection laws.”
An ICO spokeswoman did not respond to additional questions — telling us it does not have anything further to add than its public statement.
As part of the settlement, the ICO writes that Facebook is being allowed to retain some (unspecified) “documents” that the ICO had disclosed during the appeal process — to use for “other purposes”, including for furthering its own investigation into issues around Cambridge Analytica.
“Parts of this investigation had previously been put on hold at the ICO’s direction and can now resume,” the ICO adds.
Under the terms of the settlement the ICO and Facebook each pay their own legal costs. While the £500k fine is not kept by the ICO but paid to HM Treasury’s consolidated fund.
Commenting in a statement, deputy commissioner, James Dipple-Johnstone, said:
The ICO welcomes the agreement reached with Facebook for the withdrawal of their appeal against our Monetary Penalty Notice and agreement to pay the fine. The ICO’s main concern was that UK citizen data was exposed to a serious risk of harm. Protection of personal information and personal privacy is of fundamental importance, not only for the rights of individuals, but also as we now know, for the preservation of a strong democracy. We are pleased to hear that Facebook has taken, and will continue to take, significant steps to comply with the fundamental principles of data protection. With this strong commitment to protecting people’s personal information and privacy, we expect that Facebook will be able to move forward and learn from the events of this case.
In its own supporting statement, attached to the ICO’s remarks, Harry Kinmonth, director and associate general counsel at Facebook, added:
We are pleased to have reached a settlement with the ICO. As we have said before, we wish we had done more to investigate claims about Cambridge Analytica in 2015. We made major changes to our platform back then, significantly restricting the information which app developers could access. Protecting people’s information and privacy is a top priority for Facebook, and we are continuing to build new controls to help people protect and manage their information. The ICO has stated that it has not discovered evidence that the data of Facebook users in the EU was transferred to Cambridge Analytica by Dr Kogan. However, we look forward to continuing to cooperate with the ICO’s wider and ongoing investigation into the use of data analytics for political purposes.
A charitable interpretation of what’s gone on here is that both Facebook and the ICO have reached a stalemate where their interests are better served by taking a quick win that puts the issue to bed, rather than dragging on with legal appeals that might also have raised fresh embarrassments. 
That’s quick wins in terms of PR (a paid fine for the ICO, and drawing a line under the issue for Facebook), as well as (potentially) useful data to further Facebook’s internal investigation of the Cambridge Analytica scandal.
We don’t know exactly what it’s getting from the ICO’s document stash. But we do know it’s facing a number of lawsuits and legal challenges over the scandal in the US.
The ICO announced its intention to fine Facebook over the Cambridge Analytica scandal just over a year ago.
In March 2018 it had raided the UK offices of the now defunct data company, after obtaining a warrant, taking away hard drives and computers for analysis. It had also earlier ordered Facebook to withdraw its own investigators from the company’s offices.
Speaking to a UK parliamentary committee a year ago the information commissioner, Elizabeth Denham, and deputy Dipple-Johnstone, discussed their (then) ongoing investigation of data seized from Cambridge Analytica — saying they believed the Facebook user data-set the company had misappropriated could have been passed to more entities than were publicly known.
The ICO said at that point it was looking into “about half a dozen” entities.
It also told the committee it had evidence that, even as recently as early 2018, Cambridge Analytica might have retained some of the Facebook data — despite having claimed it had deleted everything.
“The follow up was less than robust. And that’s one of the reasons that we fined Facebook £500,000,” Denham also said at the time. 
Some of this evidence will likely be very useful for Facebook as it prepares to defend itself in legal challenges related to Cambridge Analytica. As well as aiding its claimed platform audit — when, in the wake of the scandal, Facebook said it would run a historical app audit and challenge all developers who it determined had downloaded large amounts of user data.
The audit, which it announced in March 2018, apparently remains ongoing.

Facebook denies making contradictory claims on Cambridge Analytica and other ‘sketchy’ apps

US search market needs a ‘choice screen’ remedy now, says DuckDuckGo

US regulators shouldn’t be sitting on their hands while the 50+ state, federal and congressional antitrust investigations of Google grind along, search rival DuckDuckGo argues.
It’s put out a piece of research today that suggests choice screens which let smartphone users choose from a number of search engines to be their device default — aka “preference menus” as DuckDuckGo founder Gabe Weinberg prefers to call them — offer an easy and quick win for regulators to reboot competition in the search space by rebalancing markets right now.
“If designed properly we think [preference menus] are a quick and effective key piece in the puzzle for a good remedy,” Weinberg tells TechCrunch. “And that’s because it finally enables people to change the search defaults across the entire device which has been difficult in the past… It’s at a point, during device set-up, where you can prompt the users to take a moment to think about whether they want to try out an alternative search engine.”
Google is already offering such a choice to Android users in Europe, following an EU antitrust decision against Android last year.

DuckDuckGo is concerned US regulators aren’t thinking proactively enough about remedies for competition in the US search market — and is hoping to encourage more of a lean-in approach to support boosting diversity, so that rivals aren’t left waiting years for the courts to issue judgements before any relief is possible.
In a survey of Internet users which it commissioned, polling more than 3,400 adults in the US, UK, Germany and Australia, people were asked to respond to a 4-choice screen design, based on an initial Google Android remedy proposal, as well as an 8-choice variant.
“We found that in each surveyed country, people select the Google alternatives at a rate that could increase their collective mobile market share by 300%-800%, with overall mobile search market share immediately changing by over 10%,” it writes [emphasis its].
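To see how both figures can be true at once, here is a quick back-of-the-envelope illustration in Python. The baseline share below is an assumption for illustration, not a number from DuckDuckGo’s study:

```python
# Hypothetical illustration of the survey claim above.
# `alt_share` is an assumed baseline, not a figure from DuckDuckGo's research.

alt_share = 0.03        # assumed collective mobile search share of Google alternatives
overall_shift = 0.10    # the survey suggests >10% of overall share moves via choice screens

new_alt_share = alt_share + overall_shift
collective_growth = (new_alt_share - alt_share) / alt_share

print(f"Alternatives' collective share: {alt_share:.0%} -> {new_alt_share:.0%}")
print(f"Collective growth: {collective_growth:.0%}")  # ~333%, within the 300%-800% range
print(f"Overall market share change: {overall_shift:+.0%}")
```

A small collective baseline is what lets a modest absolute shift in the overall market register as a several-hundred-percent relative gain for the alternatives.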
Survey takers were also asked about factors that motivate them to switch search engines — with the number one reason given being better quality of search results, and the second being a search engine that doesn’t track their searches or data.
Of course DuckDuckGo stands to gain from any pro-privacy switching, having built an alternative search business by offering non-tracked searches supported by contextual ads. Its model directly contrasts with Google’s, which relies on pervasive tracking of Internet users to determine which ads to serve.
But there’s plenty of evidence consumers hate being tracked. Not least the rise in use of tracker blockers.
“Using the original design puzzle [i.e. that Google devised] we saw a lot of people selecting alternative search engines and we think it would go up from there,” says Weinberg. “But even initially a 10% market share change is really significant.”
He points to regulatory efforts in Europe and also Russia which have resulted in antitrust decisions and enforcements against Google — and where choice screens are already in use promoting alternative search engine choices to Android users.
He also notes that regulators in Australia and the UK are pursuing choice screens — as actual or potential remedies for rebalancing the search market.
Russia has the lead here, with its regulator — the FAS — slapping Google with an order against bundling its services with Android all the way back in 2015, a few months after local search giant Yandex filed a complaint. A choice screen was implemented in 2017 and Russia’s homegrown Internet giant has increased its search market share on Android devices as a result. Google continues to do well in Russia. But the result is greater diversity in the local search market, as a direct result of implementing a choice screen mechanism.
“We think that all regulatory agencies that are now considering search market competition should really implement this remedy immediately,” says Weinberg. “They should do other things… as well but I don’t see any reason why one should wait on not implementing this because it would take a while to roll out and it’s a good start.”
Of course US regulators have yet to issue any antitrust findings against Google — despite there now being tens of investigations into “potential monopolistic behavior”. And Weinberg concedes that US regulators haven’t yet reached the stage of discussing remedies.
“It feels at a very investigatory stage,” he agrees. “But we would like to accelerate that… As well as bigger remedial changes — similar to privacy and how we’re pushing Do Not Track legislation — as something you can do right now as kind of low hanging fruit. I view this preference menu in the same way.”
“It’s a very high leverage thing that you can do immediately to move market share and increase search competition and so one should do it faster and then take the things that need to be slower, slower,” he adds, referring to more radical possible competition interventions — such as breaking a business up.
There is certainly growing concern among policymakers around the world that the current modus operandi of enforcing competition law has failed to keep pace with increasingly powerful technology-driven businesses and platforms — hence ‘winner takes all’ skews which exist in certain markets and marketplaces, reducing choice for consumers and shrinking opportunities for startups to compete.
This concern was raised as a question for Europe’s competition chief, Margrethe Vestager, during her hearing in front of the EU parliament earlier this month. She pointed to the Commission’s use of interim measures in an ongoing case against chipmaker Broadcom as an example of how the EU is trying to speed up its regulatory response, noting it’s the first time such an application has been made for two decades.
In a press conference shortly afterwards, to confirm the application of EU interim measures against Broadcom, Vestager added: “Interim measures are one way to tackle the challenge of enforcing our competition rules in a fast and effective manner. This is why they are important. And especially that in fast moving markets. Whenever necessary I’m therefore committed to making the best possible use of this important tool.”
Weinberg is critical of Google’s latest proposals around search engine choice in Europe — after it released details of its idea to ‘evolve’ the search choice screen — by applying an auction model, starting early next year. Other rivals, such as French pro-privacy engine Qwant, have also blasted the proposal.
Clearly, how choice screens are implemented is key to their market impact.
“The way the current design is my read is smaller search engines, including us and including European search engines will not be on the screen long term the way it’s set up,” says Weinberg. “There will need to be additional changes to get the effects that we were seeing in our studies we made.
“There’s many reasons why us and others would not be those highest bidders,” he says of the proposed auction. “But needless to say the bigger companies can outweigh the smaller ones and so there are alternative ways to set this up.”

Github removes Tsunami Democràtic’s APK after a takedown order from Spain

Microsoft-owned Github has removed the APK of an app for organizing political protests in the autonomous community of Catalonia — acting on a takedown request from Spain’s military police (aka the Guardia Civil).
As we reported earlier this month supporters of independence for Catalonia have regrouped under a new banner — calling itself Tsunami Democràtic — with the aim of rebooting the political movement and campaigning for self-determination by mobilizing street protests and peaceful civil disobedience.
The group has also been developing bespoke technology tools to coordinate protest action. It’s one of these tools, the Tsunami Democràtic app, which was being hosted as an APK on Github and has now been taken down.
The app registers supporters of independence by asking them to communicate their availability and resources for taking part in local protest actions across Catalonia. Users are also asked to register for protest actions and check in when they get there — at which point the app asks them to abide by a promise of non-violence (see point 3 in this sample screengrab):

Users of the app see only upcoming protests relevant to their location and availability — making it different to the one-to-many broadcasts that Tsunami Democràtic also puts out via its channel on the Telegram messaging app.
Essentially, it’s a decentralized tool for mobilizing smaller, localized protest actions vs the largest demos, which continue to be organized via Telegram broadcasts (such as a mass blockade of Barcelona airport earlier this month).
A source with knowledge of Tsunami Democràtic previously told us the sorts of protests intended to be coordinated via the app could include actions such as go-slows to disrupt traffic on local roads and fake shopping sprees in supermarkets, with protestors abandoning carts filled with products in the store.
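Going by that description, the app’s core matching behaviour amounts to a simple filter over each supporter’s registered location, availability and resources. Here is a minimal sketch of that logic in Python (entirely hypothetical: Tsunami Democràtic’s actual code has not been published, and every name below is invented):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Protest:
    region: str
    start: datetime
    end: datetime
    needed_resources: set = field(default_factory=set)  # e.g. {"car"}

@dataclass
class Supporter:
    region: str
    available_from: datetime
    available_to: datetime
    resources: set = field(default_factory=set)

def relevant_protests(supporter: Supporter, protests: list) -> list:
    """Show a supporter only the upcoming actions they can plausibly join."""
    return [
        p for p in protests
        if p.region == supporter.region               # local to the supporter
        and p.start >= supporter.available_from       # within their availability window
        and p.end <= supporter.available_to
        and p.needed_resources <= supporter.resources # they have what the action needs
    ]
```

Filtering in this way is also what distinguishes the app from the one-to-many Telegram broadcasts: each user sees only the subset of actions relevant to them.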
In a section of Github’s site detailing government takedowns the request from the Spanish state to remove the Tsunami Democràtic app sits alongside folders containing historical takedown requests from China and Russia.
“There is an ongoing investigation being carried out by the National High Court where the movement Tsunami Democràtic has been confirmed as a criminal organization driving people to commit terrorist attacks. Tsunami Democràtic’s main goal is coordinating these riots and terrorist actions by using any possible mean,” Spain’s military police write in the letter sent to Github.
We’ve reached out to Microsoft for comment on Github’s decision to remove the app APK.
In a note about government takedowns on Github’s website it writes:
From time to time, GitHub receives requests from governments to remove content that has been declared unlawful in their local jurisdiction. Although we may not always agree with those laws, we may need to block content if we receive a valid request from a government official so that our users in that jurisdiction may continue to have access to GitHub to collaborate and build software.
“GitHub does not endorse or adopt any assertion contained in the following notices,” it adds in a further caveat on the page.
The trigger for the latest wave of street demonstrations in Catalonia was the lengthy jail sentences handed down to a number of Catalan political and cultural leaders by Spain’s Supreme Court earlier this month.
These were people involved in organizing an illegal independence referendum two years ago. The majority of these Catalan leaders were convicted for sedition. None were found guilty of the more serious charge of rebellion — but sentences ran as long as 13 years nonetheless.
This month Spanish judges also reissued a European arrest warrant seeking to extradite the former leader of the Catalan government, Carles Puigdemont, from Brussels to Spain to face trial.  Last year a court in Germany refused his extradition to Spain on charges of rebellion or sedition — only allowing it on lesser grounds of misuse of public funds. A charge which Spain did not pursue.
Puigdemont fled Catalonia in the wake of the failed 2017 independence bid and has remained living in exile in Brussels. He has also since been elected as an MEP but has been unable to take up his seat in the EU parliament after the Spanish state moved to block him from being recognized as a parliamentarian.
Shortly after the latest wave of pro-independence demonstrations took off in Catalonia the Tsunami Democràtic movement’s website was taken offline — also as a result of a takedown request by the Spanish state.

pic.twitter.com/RveFW1YfIu
— Sergio Argüelles (@arguellesergio) October 18, 2019

The website remains offline at the time of writing.
While the Tsunami Democràtic app could be accused of encouraging disruption, the charge of “terrorism” is clearly overblown. Unless your definition of terrorism extends to harnessing the power of peaceful civil resistance to generate momentum for political change. 
And while there has been unrest on the streets of Barcelona and other Catalan towns and cities this month, with fires being lit and projectiles thrown at police, there are conflicting reports about what has triggered these clashes between police and protestors — including criticism of the police response as overly aggressive vs what has been, in the main, large but peaceful crowds of pro-democracy demonstrators.
The police response on the day of the 2017 referendum was also widely condemned as violently disproportionate, with scenes of riot-gear-clad police officers beating up people as they tried to cast a vote.
Local press in Catalonia has reported the European Commission response to Spain’s takedown of the Tsunami Democràtic website — saying the pan-EU body said Spain has a responsibility to find “the right balance between guaranteeing freedom of expression and upholding public order and ensuring security, as well as protecting [citizens] from illegal content”.
Asked what impact the Github takedown of the Tsunami Democràtic app’s APK will have on the app, a source with knowledge of the movement suggested very little — pointing out that the APK is now being hosted on Telegram.
Similarly, the content that was available on the movement’s website is being posted to its 380,000+ subscribers on Telegram — a messaging platform that’s itself been targeted for blocks by authoritarian states in various locations around the world. (Though not, so far, in Spain.)
Another protest support tool that’s been in the works in Catalonia — a live-map for crowdsourcing information about street protests which looks similar to the HKlive.maps app used by pro-democracy campaigners in Hong Kong — is still in testing but expected to launch soon, per the source.

Tech giants still not doing enough to fight fakes, says European Commission

It’s a year since the European Commission got a bunch of adtech giants together to spill ink on a voluntary Code of Practice to do something — albeit, nothing very quantifiable — as a first step to stop the spread of disinformation online.
Its latest report card on this voluntary effort sums to: the platforms could do better.
The Commission said the same in January. And will doubtless say it again. Unless or until regulators grasp the nettle of online business models that profit by maximizing engagement. As the saying goes, lies fly while the truth comes stumbling after. So attempts to shrink disinformation without fixing the economic incentives to spread BS in the first place are mostly dealing in cosmetic tweaks and optics.
Signatories to the Commission’s EU Code of Practice on Disinformation are: Facebook, Google, Twitter, Mozilla, Microsoft and several trade associations representing online platforms, the advertising industry, and advertisers — including the Internet Advertising Bureau (IAB) and World Federation of Advertisers (WFA).
In a press release assessing today’s annual reports, compiled by signatories, the Commission expresses disappointment that no other Internet platforms or advertising companies have signed up since Microsoft joined as a late addition to the Code this year.
“We commend the commitment of the online platforms to become more transparent about their policies and to establish closer cooperation with researchers, fact-checkers and Member States. However, progress varies a lot between signatories and the reports provide little insight on the actual impact of the self-regulatory measures taken over the past year as well as mechanisms for independent scrutiny,” commissioners Věra Jourová, Julian King, and Mariya Gabriel said in a joint statement. [emphasis ours]
“While the 2019 European Parliament elections in May were clearly not free from disinformation, the actions and the monthly reporting ahead of the elections contributed to limiting the space for interference and improving the integrity of services, to disrupting economic incentives for disinformation, and to ensuring greater transparency of political and issue-based advertising. Still, large-scale automated propaganda and disinformation persist and there is more work to be done under all areas of the Code. We cannot accept this as a new normal,” they add.
The risk, of course, is that the Commission’s limp-wristed code risks rapidly cementing a milky jelly of self-regulation in the fuzzy zone of disinformation as the new normal, as we warned when the Code launched last year.
The Commission continues to leave the door open (a crack) to doing something platforms can’t (mostly) ignore — i.e. actual regulation — saying its assessment of the effectiveness of the Code remains ongoing.
But that’s just a dangled stick. At this transitionary point between outgoing and incoming Commissions, it seems content to stay in a ‘must do better’ holding pattern. (Or: “It’s what the Commission says when it has other priorities,” as one source inside the institution put it.)
A comprehensive assessment of how the Code is working is slated as coming in early 2020 — i.e. after the new Commission has taken up its mandate. So, yes, that’s the sound of the can being kicked a few more months on.
Summing up its main findings from signatories’ self-marked ‘progress’ reports, the outgoing Commission says they have reported improved transparency versus a year ago, with closer dialogue on their respective policies against disinformation.
But it flags poor progress on implementing commitments to empower consumers and the research community.
“The provision of data and search tools is still episodic and arbitrary and does not respond to the needs of researchers for independent scrutiny,” it warns. 
Ironically, this is an issue that one of the signatories, Mozilla, has been an active critic of others over — including Facebook, whose political ad API it reviewed damningly this year, finding it not fit for purpose and “designed in ways that hinders the important work of researchers, who inform the public and policymakers about the nature and consequences of misinformation”. So, er, ouch.
The Commission is also critical of what it says are “significant” variations in the scope of actions undertaken by platforms to implement “commitments” under the Code, noting that differences in the implementation of platform policy, cooperation with stakeholders and sensitivity to electoral contexts persist across Member States, as do differences in the EU-specific metrics provided.
But given the Code only ever asked for fairly vague action in some pretty broad areas, without prescribing exactly what platforms were committing themselves to doing, nor setting benchmarks for action to be measured against, inconsistency and variety is really what you’d expect. That and the can being kicked down the road. 
The Code did extract one quasi-firm commitment from signatories — on the issue of bot detection and identification — by getting platforms to promise to “establish clear marking systems and rules for bots to ensure their activities cannot be confused with human interactions”.
A year later it’s hard to see clear sign of progress on that goal. Although platforms might argue that most of their sweat on that front is going into what they claim is increased effort to catch and kill malicious bot accounts before they have a chance to spread any fakes.
Twitter’s annual report, for instance, talks about what it’s doing to fight “spam and malicious automation strategically and at scale” on its platform — saying its focus is “increasingly on proactively identifying problematic accounts and behaviour rather than waiting until we receive a report”; after which it says it aims to “challenge… accounts engaging in spammy or manipulative behavior before users are ​exposed to ​misleading, inauthentic, or distracting content”.
So, in other words, if Twitter does this perfectly — and catches every malicious bot before it has a chance to tweet — it might plausibly argue that bot labels are redundant. Though it’s clearly not in a position to claim it’s won the spam/malicious bot war yet. Ergo, its users remain at risk of consuming inauthentic tweets that aren’t clearly labeled as such (or even as ‘potentially suspect’ by Twitter). Presumably because these are the accounts that continue slipping under its bot-detection radar.
There’s also nothing in Twitter’s report about it labelling even (non-malicious) bot accounts as bots — for the purpose of preventing accidental confusion (after all satire misinterpreted as truth can also result in disinformation). And this despite the company suggesting a year ago that it was toying with adding contextual labels to bot accounts, at least where it could detect them.
In the event it’s resisted adding any more badges to accounts. While an internal reform of its verification policy for verified account badges was put on pause last year.
Facebook’s report also only makes a passing mention of bots, under a section sub-headed “spam” — where it writes circularly: “Content actioned for spam has increased considerably, since we found and took action on more content that goes against our standards.”
It includes some data-points to back up this claim of more spam squashed — citing a May 2019 Community Standards Enforcement report — where it states that in Q4 2018 and Q1 2019 it acted on 1.8 billion pieces of spam in each of the quarters vs 737 million in Q4 2017; 836 million in Q1 2018; 957 million in Q2 2018; and 1.2 billion in Q3 2018. 
Though it’s lagging on publishing more up-to-date spam data now, noting in the report submitted to the EC that: “Updated spam metrics are expected to be available in November 2019 for Q2 and Q3 2019” — i.e. conveniently late for inclusion in this report.
Facebook’s report notes ongoing efforts to put contextual labels on certain types of suspect/partisan content, such as labelling photos and videos which have been independently fact-checked as misleading; labelling state-controlled media; and labelling political ads.
Labelling bots is not discussed in the report — presumably because Facebook prefers to focus attention on self-defined spam-removal metrics vs muddying the water with discussion of how much suspect activity it continues to host on its platform, either through incompetence, lack of resources or because it’s politically expedient for its business to do so.
Labelling all these bots would mean Facebook signposting inconsistencies in how it applies its own policies, in a way that might foreground its own political bias. And there’s no self-regulatory mechanism under the sun that will make Facebook fess up to such double-standards.
For now, the Code’s requirement for signatories to publish an annual report on what they’re doing to tackle disinformation looks to be the biggest win so far. Albeit, it’s very loosely bound self-reporting. While some of these ‘reports’ don’t even run to a full page of A4-text — so set your expectations accordingly.
The Commission has published all the reports here. It has also produced its own summary and assessment of them (here).
“Overall, the reporting would benefit from more detailed and qualitative insights in some areas and from further big-picture context, such as trends,” it writes. “In addition, the metrics provided so far are mainly output indicators rather than impact indicators.”
Of the Code generally — as a “self-regulatory standard” — the Commission argues it has “provided an opportunity for greater transparency into the platforms’ policies on disinformation as well as a framework for structured dialogue to monitor, improve and effectively implement those policies”, adding: “This represents progress over the situation prevailing before the Code’s entry into force, while further serious steps by individual signatories and the community as a whole are still necessary.”

Alexa, where are the legal limits on what Amazon can do with my health data?

The contract between the UK’s National Health Service (NHS) and ecommerce giant Amazon — for a health information licensing partnership involving its Alexa voice AI — has been released following a Freedom of Information request.
The government announced the partnership this summer. But the date on the contract, which was published on the gov.uk contracts finder site months after the FOI was filed, shows the open-ended arrangement to funnel nipped-and-tucked health advice from the NHS’ website to Alexa users in audio form was inked back in December 2018.
The contract is between the UK government and Amazon US (Amazon Digital Services, Delaware) — rather than Amazon UK. 
Nor is it a standard NHS Choices content syndication contract. A spokeswoman for the Department of Health and Social Care (DHSC) confirmed the legal agreement uses an Amazon contract template. She told us the department had worked jointly with Amazon to adapt the template to fit the intended use — i.e. access to publicly funded healthcare information from the NHS’ website.
The NHS does make the same information freely available on its website, of course. As well as via API — to some 1,500 organizations. But Amazon is not just any organization; it’s a powerful US platform giant with a massive ecommerce business.
The contract reflects that power imbalance: it is not a standard NHS content syndication agreement but rather Amazon’s standard terms, tweaked by DHSC.
“It was drawn up between both Amazon UK and the Department for Health and Social Care,” a department spokeswoman told us. “Given that Amazon is in the business of holding standard agreements with content providers they provided the template that was used as the starting point for the discussions but it was drawn up in negotiation with the Department for Health and Social Care, and obviously it was altered to apply to UK law rather than US law.”
In July, when the government officially announced the Alexa-NHS partnership, its PR provided a few sample queries of how Amazon’s voice AI might respond to what it dubbed “NHS-verified” information — such as: “Alexa, how do I treat a migraine?”; “Alexa, what are the symptoms of flu?”; “Alexa, what are the symptoms of chickenpox?”.
But of course as anyone who’s ever googled a health symptom could tell you, the types of stuff people are actually likely to ask Alexa — once they realize they can treat it as an NHS-verified info-dispensing robot, and go down the symptom-querying rabbit hole — is likely to range very far beyond the common cold.
At the official launch of what the government couched as a ‘collaboration’ with Amazon, it explained its decision to allow NHS content to be freely piped through Alexa by suggesting that voice technology has “the potential to reduce the pressure on the NHS and GPs by providing information for common illnesses”.
Its PR cited an unattributed claim that “by 2020, half of all searches are expected to be made through voice-assisted technology”.
This prediction is frequently attributed to ComScore, a media measurement firm that was last month charged with fraud by the SEC. However it actually appears to originate with computer scientist Andrew Ng, from when he was chief scientist at Chinese tech giant Baidu.
Econsultancy noted last year that Mary Meeker included Ng’s claim on a slide in her 2016 Internet Trends report — which is likely how the prediction got so widely amplified.
But on Meeker’s slide you can see that the prediction is in fact “images or speech”, not voice alone…

So it turns out the UK government incorrectly cited a tech giant prediction to push a claim that “voice search has been increasing rapidly” — in turn its justification for funnelling NHS users towards Amazon.
“We want to empower every patient to take better control of their healthcare and technology like this is a great example of how people can access reliable, world-leading NHS advice from the comfort of their home, reducing the pressure on our hardworking GPs and pharmacists,” said health secretary Matt Hancock in a July statement.
Since landing at the health department, the app-loving former digital minister has been pushing a tech-first agenda for transforming the NHS — promising to plug in “healthtech” apps and services, and touting “preventative, predictive and personalised care”. He’s also announced an AI lab housed within a new unit that’s intended to oversee the digitization of the NHS.
Compared with all that, plugging the NHS’ website into Alexa probably seems like an easy ‘on-message’ win. But as soon as the collaboration was announced, concerns were raised that the government is recklessly mixing the streams of critical (and sensitive) national healthcare infrastructure with the rapacious data-appetite of a foreign tech giant with both an advertising and ecommerce business, plus major ambitions of its own in the healthcare space.
On the latter front, just yesterday news broke of Amazon’s second health-related acquisition: Health Navigator, a startup with an API platform for integrating with health services, such as telemedicine and medical call centers, which offers natural language processing tools for documenting health complaints and care recommendations.
Last year Amazon also picked up online pharmacy PillPack — for just under $1BN. While last month it launched a pilot of a healthcare service offering to its own employees in and around Seattle, called Amazon Care. That looks intended to be a road-test for addressing the broader U.S. market down the line. So the company’s commercial designs on healthcare are becoming increasingly clear.
Returning to the UK, in response to early critical feedback on the Alexa-NHS arrangement, the IT delivery arm of the service, NHS Digital, published a blog post going into more detail about the arrangement — following what it couched as “interesting discussion about the challenges for the NHS of working with large commercial organisations like Amazon”.
A core critical “discussion” point is the question of what Amazon will do with people’s medical voice query data, given the partnership is clearly encouraging people to get used to asking Alexa for health advice.
“We have stuck to the fundamental principle of not agreeing a way of working with Amazon that we would not be willing to consider with any single partner – large or small. We have been careful about data, commercialisation, privacy and liability, and we have spent months working with knowledgeable colleagues to get it right,” NHS Digital claimed in July.
In another section of the blog post, responding to questions about what Amazon will do with the data and “what about privacy”, it further asserted there would be no health profiling of customers — writing:
We have worked with the Amazon team to ensure that we can be totally confident that Amazon is not sharing any of this information with third parties. Amazon has been very clear that it is not selling products or making product recommendations based on this health information, nor is it building a health profile on customers. All information is treated with high confidentiality. Amazon restrict access through multi-factor authentication, services are all encrypted, and regular audits run on their control environment to protect it.
Yet it turns out the contract DHSC signed with Amazon is just a content licensing agreement. There are no terms contained in it concerning what can or can’t be done with the medical voice query data Alexa is collecting with the help of “NHS-verified” information.
Per the contract terms, Amazon is required to attribute content to the NHS when Alexa responds to a query with information from the service’s website. (Though the company says Alexa also makes use of medical content from the Mayo Clinic and Wikipedia.) So, from the user’s point of view, they will at times feel like they’re talking to an NHS-branded service.
But without any legally binding confidentiality clauses around what can be done with their medical voice queries it’s not clear how NHS Digital can confidently assert that Amazon isn’t creating health profiles.
The situation seems to sum to, er, trust Amazon. (NHS Digital wouldn’t comment; saying it’s only responsible for delivery not policy setting, and referring us to the DHSC.)
Asked what it does with medical voice query data generated as a result of the NHS collaboration an Amazon spokesperson told us: “We do not build customer health profiles based on interactions with nhs.uk content or use such requests for marketing purposes.”
But the spokesperson could not point to any legally binding contract clauses in the licensing agreement that restrict what Amazon can do with people’s medical queries.
We’ve also asked the company to confirm whether medical voice queries that return NHS content are being processed in the US.
“This collaboration only provides content already available on the NHS.UK website, and absolutely no personal data is being shared by NHS to Amazon or vice versa,” Amazon also told us, eliding the key point that it’s not NHS data being shared with Amazon but NHS users, reassured by the presence of a trusted public brand, being encouraged to feed Alexa sensitive personal data by asking about their ailments and health concerns.
Bizarrely, the Department of Health and Social Care went further. Its spokeswoman claimed in an email that “there will be no data shared, collected or processed by Amazon and this is just an alternative way of providing readily available information from NHS.UK.”
When we spoke to DHSC on the phone prior to this, to raise the issue of medical voice query data generated via the partnership and fed to Amazon — also asking where in the contract are clauses to protect people’s data — the spokeswoman said she would have to get back to us.
All of which suggests the government has a very vague idea (to put it generously) of how cloud-powered voice AIs function.
Presumably no one at DHSC bothered to read the information on Amazon’s own Alexa privacy page — although the department spokeswoman was at least aware this page existed (because she knew Amazon had pointed us to what she called its “privacy notice”, which she said “sets out how customers are in control of their data and utterances”).
If you do read the page you’ll find Amazon offers some broad-brush explanation there which tells you that after an Alexa device has been woken by its wake word, the AI will “begin recording and sending your request to Amazon’s secure cloud”.
Ergo data is collected and processed. And indeed stored on Amazon’s servers. So, yes, data is ‘shared’.
The more detailed Alexa Internet Privacy Notice, meanwhile, sets out broad-brush parameters to enable Amazon’s reuse of Alexa user data — stating that “the information we learn from users helps us personalize and continually improve your Alexa experience and provide information about Internet trends, website popularity and traffic, and related content”. [emphasis ours]
The DHSC sees the matter very differently, though.
With no contractual binds covering the health-related queries UK users of Alexa are being encouraged to whisper into Amazon’s robotic ears (data that’s naturally linked to Alexa and Amazon account IDs, and which the Alexa Internet Privacy Notice also specifies can be accessed by “a limited number of employees”), the government is accepting the tech giant’s standard data processing terms for a commercial, consumer product which is deeply integrated into its increasingly sprawling business empire.
Terms such as indefinite retention of audio recordings — unless users proactively request that they are deleted. And even then Amazon admitted this summer it doesn’t always delete the text transcripts of recordings. So even if you keep deleting all your audio snippets, traces of medical queries may well remain on Amazon’s servers.
Earlier this year it also emerged the company employs contractors around the world to listen in to Alexa recordings as part of internal efforts to improve the performance of the AI.
A number of tech giants recently admitted to the presence of such ‘speech grading’ programs, as they’re sometimes called — though none had been up front and transparent about the fact their shiny AIs needed an army of external human eavesdroppers to pull off a show of faux intelligence.
It’s been journalists highlighting the privacy risks for users of AI assistants, and media exposure generating the public pressure that has forced tech giants to change concealed internal processes which have, by default, treated people’s information as an owned commodity that exists to serve and reserve their own corporate interests.
Data protection? Only if you interpret the term as meaning your personal data is theirs to capture and that they’ll aggressively defend the IP they generate from it.
So, in other words, actual humans — both employed by Amazon directly and not — may be listening to the medical stuff you’re telling Alexa. Unless the user finds and activates a recently added ‘no human review’ option buried in Alexa settings.
Many of these arrangements remain under regulatory scrutiny in Europe. Amazon’s lead data protection regulator in Europe confirmed in August it’s in discussions with it over concerns related to its manual reviews of Alexa recordings. So UK citizens — whose taxes fund the NHS — might be forgiven for expecting more care from their own government around such a ‘collaboration’.
Rather than a wholesale swallowing of tech giant T&Cs in exchange for free access to the NHS brand and “NHS-verified” information which helps Amazon burnish Alexa’s utility and credibility, allowing it to gather valuable insights for its commercial healthcare ambitions.
To date there has been no recognition from DHSC that the government has a duty of care towards NHS users as regards potential risks its content partnership might generate as Alexa harvests their voice queries via a commercial conduit that only affords users very partial controls over what happens to their personal data.
Nor is DHSC considering the value being generously gifted by the state to Amazon — in exchange for a vague supposition that a few citizens might go to the doctor a bit less if a robot tells them what flu symptoms look like.
“The NHS logo is supposed to mean something,” says Sam Smith, coordinator at patient data privacy advocacy group MedConfidential — one of the organizations that makes use of the NHS’ free APIs for health content (but which, he points out, did not get to write its own contract for the government to sign).
“When DHSC signed Amazon’s template contract to put the NHS logo on anything Amazon chooses to do, it left patients to fend for themselves against the business model of Amazon in America.”
In a related development this week, Europe’s data protection supervisor has warned of serious data protection concerns related to standard contracts EU institutions have inked with another tech giant, Microsoft, to use its software and services.
The watchdog recently created a strategic forum that’s intended to bring together the region’s public administrations to work on drawing up standard contracts with fairer terms for the public sector — to shrink the risk of institutions feeling outgunned and pressured into accepting T&Cs written by the same few powerful tech providers.
Such an effort is sorely needed — though it comes too late to hand-hold the UK government into striking more patient-sensitive terms with Amazon US.

EU-US Privacy Shield passes third Commission ‘health check’ — but litigation looms

The third annual review of the EU-US Privacy Shield data transfer mechanism has once again been nodded through by Europe’s executive.
This despite the EU parliament calling last year for the mechanism to be suspended.
The European Commission also issued US counterparts with a compliance deadline last December — saying the US must appoint a permanent ombudsperson to handle EU citizens’ complaints, as required by the arrangement, and do so by February.
This summer the US senate finally confirmed Keith Krach — under secretary of state for economic growth, energy, and the environment — in the ombudsperson role.
The Privacy Shield arrangement was struck between EU and US negotiators back in 2016 — as a rushed replacement for the prior Safe Harbor data transfer pact which in fall 2015 was struck down by Europe’s top court following a legal challenge after NSA whistleblower Edward Snowden revealed US government agencies were liberally helping themselves to digital data from Internet companies.
At heart is a fundamental legal clash between EU privacy rights and US national security priorities.
The intent for the Privacy Shield framework is to paper over those cracks by devising enough checks and balances that the Commission can claim it offers adequate protection for EU citizens’ personal data when taken to the US for processing, despite the lack of a commensurate, comprehensive data protection regime. But critics have argued from the start that the mechanism is flawed.
Even so, around 5,000 companies are now signed up to use Privacy Shield to certify transfers of personal data. So there would be major disruption to businesses were it to go the way of its predecessor — as has looked likely in recent years, since Donald Trump took office as US president.
The Commission remains a staunch defender of Privacy Shield, warts and all, preferring to support data-sharing business as usual than offer a pro-active defence of EU citizens’ privacy rights.
To date it has offered little in the way of objection about how the US has implemented Privacy Shield in these annual reviews, despite some glaring flaws and failures (for example the disgraced political data firm, Cambridge Analytica, was a signatory of the framework, even after the data misuse scandal blew up).
The Commission did lay down one deadline late last year, regarding the ongoing lack of a permanent ombudsperson. So it can now check that box.
It also notes approvingly today that the final two vacancies on the US’ Privacy and Civil Liberties Oversight Board have been filled, meaning it’s fully-staffed for the first time since 2016.
Commenting in a statement, commissioner for justice, consumers and gender equality, Věra Jourová, added: “With around 5,000 participating companies, the Privacy Shield has become a success story. The annual review is an important health check for its functioning. We will continue the digital diplomacy dialogue with our U.S. counterparts to make the Shield stronger, including when it comes to oversight, enforcement and, in a longer-term, to increase convergence of our systems.”
Its press release characterizes US enforcement action related to the Privacy Shield as having “improved” — citing the Federal Trade Commission taking enforcement action in a grand total of seven cases.
It also says vaguely that “an increasing number” of EU individuals are making use of their rights under the Privacy Shield, claiming the relevant redress mechanisms are “functioning well”. (Critics have long suggested the opposite.)
The Commission is recommending further improvements too though, including that the US expand compliance checks, such as those concerning false claims of participation in the framework.
So presumably there’s a bunch of entirely fake compliance claims going unchecked, as well as actual compliance going under-checked…
“The Commission also expects the Federal Trade Commission to further step up its investigations into compliance with substantive requirements of the Privacy Shield and provide the Commission and the EU data protection authorities with information on ongoing investigations,” the EC adds.
All these annual Commission reviews are just fiddling around the edges, though. The real substantive test for Privacy Shield which will determine its long term survival is looming on the horizon — from a judgement expected from Europe’s top court next year.
In July a hearing took place on a key case that’s been dubbed Schrems II. This is a legal challenge which initially targeted Facebook’s use of another EU data transfer mechanism but has been broadened to include a series of legal questions over Privacy Shield — now with the Court of Justice of the European Union.
There is also separate litigation directly targeting Privacy Shield, brought by a French digital rights group which argues it's incompatible with EU law on account of US government mass surveillance practices.
The Commission’s PR notes the pending litigation — writing that this “may also have an impact on the Privacy Shield”. “A hearing took place in July 2019 in case C-311/18 (Schrems II) and, once the Court’s judgement is issued, the Commission will assess its consequences for the Privacy Shield,” it adds.
So, tl;dr, today’s third annual review doesn’t mean Privacy Shield is out of the legal woods.

Google’s Play Store is giving an age-rating finger to Fleksy, a Gboard rival 🖕

Platform power is a helluva drug. Do a search on Google's Play Store in Europe and you'll find the company's own Gboard app has an age rating of PEGI 3 — aka the pan-European game information labelling system, which signifies content is suitable for all age groups.
PEGI 3 means it may still contain a little cartoon violence. Say, for example, an emoji fist or middle finger.
Now do a search on Play for the rival Fleksy keyboard app and you’ll find it has a PEGI 12 age rating. This label signifies the rated content can contain slightly more graphic fantasy violence and mild bad language.
The discrepancy in labelling suggests there's a material difference between Gboard and Fleksy in terms of the content you might encounter. Yet both are pretty similar keyboard apps, with features like predictive emoji and baked-in GIFs. Gboard also lets you create custom emoji, while Fleksy puts mini apps at your fingertips.
A more major difference is that Gboard is made by Play Store owner and platform controller Google, whereas Fleksy is an indie keyboard that has been developed since 2017 by ThingThing, a startup based in Spain.
Fleksy's keyboard didn't always carry a 12+ age rating — this is a new development, based not on its content changing but on Google enforcing its Play Store policies differently.
The Fleksy app, which has been on the Play Store for around eight years at this point — and, per Play Store install stats, has had more than 5M downloads to date — carried a PEGI 3 rating until earlier this month. But then Google stepped in and forced the team to up the rating to 12, which means the Play Store description for Fleksy in Europe now rates it PEGI 12 and specifies it contains "Mild Swearing".

The Play store’s system for age ratings requires developers to fill in a content ratings form, responding to a series of questions about their app’s content, in order to obtain a suggested rating.
Fleksy’s team have done so over the years — and come up with the PEGI 3 rating without issue. But this month they found they were being issued the questionnaire multiple times and then that their latest app update was blocked without explanation — meaning they had to reach out to Play Developer Support to ask what was going wrong.
After some email back and forth with support staff they were told that the app contained age inappropriate emoji content. Here’s what Google wrote:
During review, we found that the content rating is not accurate for your app… Content ratings are used to inform consumers, especially parents, of potentially objectionable content that exists within an app.
For example, we found that your app contains content (e.g. emoji) that is not appropriate for all ages. Please refer to the attached screenshot.
In the attached screenshot, Google's staff fingered the middle finger emoji as the reason for blocking the update.
“We never thought a simple emoji is meant to be 12+,” ThingThing CEO Olivier Plante tells us.
With their update rejected the team was forced to raise the rating of Fleksy to PEGI 12 — just to get their update unblocked so they could push out a round of bug fixes for the app.
That’s not the end of the saga, though. Google’s Play Store team is still not happy with the regional age rating for Fleksy — and wants to push the rating even higher — claiming, in a subsequent email, that “your app contains mature content (e.g. emoji) and should have higher rating”.
Now, to be crystal clear, Google’s own Gboard app also contains the middle finger emoji. We are 100% sure of this because we double-checked…
Emojis available on Google’s Gboard keyboard, including the ‘screw you’ middle finger. Photo credit: Romain Dillet/TechCrunch
This is not surprising. Pretty much any smartphone keyboard — native or add-on — would contain this symbol because it’s a totally standard emoji.
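For the avoidance of doubt, the character's standard status is trivially checkable. A minimal sketch, assuming a Python 3.5+ runtime (whose bundled Unicode tables include the character), looks it up by codepoint:

```python
import unicodedata

# U+1F595 is the middle finger emoji's codepoint, part of the Unicode
# standard since version 7.0 (2014), so the same character ships with
# effectively every modern keyboard.
emoji = "\U0001F595"
print(f"U+{ord(emoji):X}")      # U+1F595
print(unicodedata.name(emoji))  # REVERSED HAND WITH MIDDLE FINGER EXTENDED
```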
But when Plante pointed out to Google that the middle finger emoji can be found in both Fleksy’s and Gboard’s keyboards — and asked them to drop Fleksy’s rating back to PEGI 3 like Gboard — the Play team did not respond.
A PEGI 16 rating means the depiction of violence (or sexual activity) “reaches a stage that looks the same as would be expected in real life”, per official guidance on the labels, while the use of bad language can be “more extreme”, and content may include the use of tobacco, alcohol or illegal drugs.
And remember Google is objecting to “mature” emoji. So perhaps its app reviewers have been clutching at their pearls after finding other standard emojis which depict stuff like glasses of beer, martinis and wine…
Over on the US Play Store, meanwhile, the Fleksy app is rated “teen”.
While Gboard is — yup, you guessed it! — ‘E for Everyone’…
Plante says the double standard Google is imposing on its own app vs third party keyboards is infuriating, and he accuses the platform giant of anti-competitive behavior.
“We’re all-in for competition, it’s healthy… but incumbent players like Google playing it unfair, making their keyboard 3+ with identical emojis, is another showcase of abuse of power,” he tells TechCrunch.
A quick search of the Play Store for other third party keyboard apps unearths a mixture of ratings — most rated PEGI 3 (such as Microsoft-owned SwiftKey and Grammarly Keyboard); some PEGI 12 (such as Facemoji Emoji Keyboard which, per Play Store’s summary contains “violence”).
Only one that we could find among the top listed keyboard apps has a PEGI 16 rating.
This is an app called Classic Big Keyboard — whose listing specifies it contains “Strong Language” (and what keyboard might not, frankly!?). Though, judging by the Play store screenshots, it appears to be a fairly bog standard keyboard that simply offers adjustable key sizes. As well as, yes, standard emoji.
“It came as a surprise,” says Plante describing how the trouble with Play started. “At first, in the past weeks, we started to fill in the rating reviews and I got constant emails the rating form needed to be filled with no details as why we needed to revise it so often (6 times) and then this last week we got rejected for the same reason. This emoji was in our product since day 1 of its existence.”
Asked whether he can think of any trigger for Fleksy to come under scrutiny by Play store reviewers now, he says: “We don’t know why but for sure we’re progressing nicely in the penetration of our keyboard. We’re growing fast for sure but unsure this is the reason.”
“I suspect someone is doubling down on competitive keyboards over there as they lost quite some grip of their search business via the alternative browsers in Europe…. Perhaps there is a correlation?” he adds, referring to the European Commission’s antitrust decision against Google Android last year — when the tech giant was hit with a $5BN fine for various breaches of EU competition law. A fine which it’s appealing.
“I’ll continue to fight for a fair market and am glad that Europe is leading the way in this,” adds Plante.
Following the EU antitrust ruling against Android, which Google is legally compelled to comply with during any appeals process, it now displays choice screens to Android users in Europe — offering alternative search engines and browsers for download, alongside Google's own dominant search and browser (Chrome) apps.
However the company still retains plenty of levers it can pull to shape the presentation of content within its dominant Play Store — influencing how rival apps are perceived by Android users, and so whether or not they choose to download them.
So requiring that a keyboard app rival gets badged with a much higher age rating than Google’s own keyboard app isn’t a good look to say the least.
We reached out to Google for an explanation about the discrepancy in age ratings between Fleksy and Gboard and will update this report with any further response. At first glance a spokesman agreed with us that the situation looks odd.

EU contracts with Microsoft raising “serious” data concerns, says watchdog

Europe’s chief data protection watchdog has raised concerns over contractual arrangements between Microsoft and the European Union institutions which are making use of its software products and services.
The European Data Protection Supervisor (EDPS) opened an enquiry into the contractual arrangements between EU institutions and the tech giant this April, following changes to rules governing EU outsourcing.
Today it writes [with emphasis]: “Though the investigation is still ongoing, preliminary results reveal serious concerns over the compliance of the relevant contractual terms with data protection rules and the role of Microsoft as a processor for EU institutions using its products and services.”
We’ve reached out to Microsoft for comment.
A spokesperson for the company told Reuters: “We are committed to helping our customers comply with GDPR [General Data Protection Regulation], Regulation 2018/1725 and other applicable laws. We are in discussions with our customers in the EU institutions and will soon announce contractual changes that will address concerns such as those raised by the EDPS.”
The preliminary finding follows risk assessments carried out by the Dutch Ministry of Justice and Security, published this summer, which also found similar issues, per the EDPS.
At issue is whether contractual terms are compatible with EU data protection laws intended to protect individual rights across the region.
“Amended contractual terms, technical safeguards and settings agreed between the Dutch Ministry of Justice and Security and Microsoft to better protect the rights of individuals shows that there is significant scope for improvement in the development of contracts between public administration and the most powerful software developers and online service outsourcers,” the watchdog writes today.
“The EDPS is of the opinion that such solutions should be extended not only to all public and private bodies in the EU, which is our short-term expectation, but also to individuals.”
A conference, jointly organized by the EDPS and the Dutch Ministry, which was held in August, brought together EU customers of cloud giants to work on a joint response to tackle regulatory risks related to cloud software provision. The event agenda included a debate on what was billed as “Strategic Vendor Management with respect to hyperscalers such as Microsoft, Amazon Web Services and Google”.
The EDPS says the idea for The Hague Forum — as it’s been named — is to develop a common strategy to “take back control” over IT services and products sold to the public sector by cloud giants.
Such as by creating standard contracts with fair terms for public administration, instead of the EU’s various public bodies feeling forced into accepting T&Cs as written by the same few powerful providers.
Commenting in a statement today, assistant EDPS, Wojciech Wiewiórowski, said: “We expect that the creation of The Hague Forum and the results of our investigation will help improve the data protection compliance of all EU institutions, but we are also committed to driving positive change outside the EU institutions, in order to ensure maximum benefit for as many people as possible. The agreement reached between the Dutch Ministry of Justice and Security and Microsoft on appropriate contractual and technical safeguards and measures to mitigate risks to individuals is a positive step forward. Through The Hague Forum and by reinforcing regulatory cooperation, we aim to ensure that these safeguards and measures apply to all consumers and public authorities living and operating in the EEA.”
EU data protection law means data controllers who make use of third parties to process personal data on their behalf remain accountable for what’s done with the data — meaning EU public institutions have a responsibility to assess risks around cloud provision, and have appropriate contractual and technical safeguards in place to mitigate risks. So there’s a legal imperative to dial up scrutiny of cloud contracts.
In parallel, the EDPS has been pushing for greater transparency in consumer agreements too.
On the latter front Microsoft’s arrangements with consumers using its desktop OS remain under scrutiny in the EU. Earlier this year the Dutch data protection agency referred privacy concerns about how Windows 10 gathers user data to the company’s lead regulator in Europe.
While this summer the company made changes to its privacy policy for its VoIP product Skype and AI assistant Cortana after media reports revealed it employed contractors who could listen in to audio snippets to improve automated translation and inferences.
The French government, meanwhile, has been loudly pursuing a strategy of digital sovereignty to reduce the state’s reliance on foreign tech providers. Though kicking the cloud giant habit may prove harder than ditching Google search.

Europe issues interim antitrust order against Broadcom as probe continues

Europe has ordered chipmaker Broadcom to stop applying exclusivity clauses in agreements with six of its major customers — imposing so-called 'interim measures' based on preliminary findings from an ongoing antitrust investigation.
The move follows a formal statement of objections issued by the Competition Commission in June. At the time the regulator said it would seek to order Broadcom to halt its behaviour while the investigation proceeds — “to avoid any risk of serious and irreparable harm to competition”.
Today Broadcom has been ordered to unilaterally stop applying “anticompetitive provisions” in agreements with six customers, and to inform them it will no longer apply such measures.
It is also barred from agreeing provisions with the same or similar effect, and from taking any retaliatory practices intended to punish customers with an equivalent effect.
Commenting in a statement, antitrust chief Margrethe Vestager, said: “We have strong indications that Broadcom, the world’s leading supplier of chipsets used for TV set-top boxes and modems, is engaging in anticompetitive practices. Broadcom’s behaviour is likely, in the absence of intervention, to create serious and irreversible harm to competition. We cannot let this happen, or else European customers and consumers would face higher prices and less choice and innovation. We therefore ordered Broadcom to immediately stop its conduct.”
We’ve reached out to Broadcom for comment.
The chipmaker has 30 days to comply with the interim measures, though it can choose to challenge the order in court.
Should the order stand, it will apply for up to three years or until the adoption of a final competition decision on the case, whichever comes earlier.
The Commission began investigations into Broadcom a year ago.
“We have reached the conclusion that in first sight — or in legal lingo, prima facie — Broadcom is currently infringing competition rules by abusing its dominant position in the system on a chip market in TV set-top boxes, fiber modems and xDSL modems,” said Vestager today, speaking during a press conference setting out the interim measures decision.
In June, when the Commission issued formal objections, it said it believes the chipmaker holds a dominant position in markets for the supply of systems-on-a-chip for TV set-top boxes and modems — identifying clauses in agreements with manufacturers that it suspected could harm competition.
At the time it flagged seven agreements. That’s now been reduced to six as the scope of the investigation has been limited to three markets, following submissions from Broadcom after the Statement of Objections.
Vestager said the slight reduction in scope is “a reflection of a process having heard Broadcom’s arguments” over the past few months.
The use of interim measures is noteworthy — a sign of how the EU regulator is seeking to evolve competition enforcement to keep up with market activity. It's the first time in 18 years that the Commission has sought to use the tool.
“Interim measures are one way to tackle the challenge of enforcing our competition rules in a fast and effective manner,” said Vestager. “This is why they are important. And especially that in fast moving markets. Whenever necessary I’m therefore committed to making the best possible use of this important tool.”
During a recent hearing in front of the EU parliament — as the commissioner heads towards another five years as Europe’s competition chief combined with an expanded role as an EVP setting digital policy — she suggested she will seek to make greater use of interim orders as an enforcement tool.
Asked today whether she has already identified other cases where interim measures could be applied, she said she hasn’t but added: “The tool is on the table. And if we find cases that live up to the two things that have to be fulfilled at the same time, yes we will indeed use interim measures more often.
“We don’t have a line up of cases [where interim measures might be applied],” she added. “Two quite substantial conditions will have to be met. One we have to prove that it’s likely there will be serious and irreparable harm to competition, and second we’ll have to find that there is an infringement at first sight.
“[It’s] an instrument, a tool, where we still will have to be careful and precise,” she went on, noting that the Broadcom investigation has taken a full year’s investigation work up to this point. “We are careful and we will not compromise on the right for the company in question to defend themself.”
Responding to a question about whether interim measures might be more difficult to apply in digital vs traditional markets, she said the regulator will need to be able to identify harm.
“The thing is for an interim measures case to work obviously you will have to be able to identify the harm. And that of course when markets are fast moving — that is the first sort of port of call. Can we identify harm in this market?” she said. “But… we do a lot of different things to fully grasp how competition works in fast moving, platform-driven, network-driven markets in order to be able to do that. And to be able to use the instrument if we find a case where this would be the thing to do in order to prevent irreparable and serious harm to competition.”

Germany says it won’t ban Huawei or any 5G supplier up front

Germany is resisting US pressure to shut out Chinese tech giant Huawei from its 5G networks — saying it will not ban any supplier for the next-gen mobile networks on an up front basis, per Reuters.
“Essentially our approach is as follows: We are not taking a pre-emptive decision to ban any actor, or any company,” government spokesman, Steffen Seibert, told a news conference in Berlin yesterday.
The country’s Federal Network Agency is slated to be publishing detailed security guidance on the technical and governance criteria for 5G networks in the next few days.
The next-gen mobile technology delivers faster speeds and lower latency than current-gen cellular technologies, as well as supporting many more connections per cell site. So it’s being viewed as the enabling foundation for a raft of futuristic technologies — from connected and autonomous vehicles to real-time telesurgery.
But increased network capabilities supporting many more critical functions mean rising security risk. The complexity of 5G networks — marketed by operators as "intelligent connectivity" — also increases the surface area for attacks. So future network security is now a major geopolitical concern.
German business newspaper Handelsblatt, which says it has reviewed a draft of the incoming 5G security requirements, reports that chancellor Angela Merkel stepped in to intervene to exclude a clause which would have blocked Huawei’s market access — fearing a rift with China if the tech giant is shut out.
Earlier this year, it says, the federal government pledged the highest possible security standards for regulating next-gen mobile networks, also saying that systems should only be sourced from "trusted suppliers". But those commitments have now been watered down by economic considerations at the top of the German government.
The decision not to block Huawei’s access has attracted criticism within Germany, and flies in the face of continued US pressure on allies to ban the Chinese tech giant over security and espionage risks.
The US imposed its own export controls on Huawei in May.
A key concern attached to Huawei is that back in 2017 China’s Communist Party passed a national intelligence law which gives the state swingeing powers to compel assistance from companies and individuals to gather foreign and domestic intelligence.
For network operators outside China the problem is that Huawei has the lead as a global 5G supplier — meaning any ban on it as a supplier would translate into delays to network rollouts, with German operators warning of years of delay and billions of dollars in costs to 5G launches.
Another issue is that Huawei's 5G technology has been criticized on security grounds.
A report this spring by a UK oversight body set up to assess the company’s approach to security was damning — finding “serious and systematic defects” in its software engineering and cyber security competence.
Though a leak shortly afterwards from the UK government suggested it would allow Huawei partial access — to supply non-core elements of networks.
An official UK government decision on Huawei has been delayed, causing ongoing uncertainty for local carriers. In the meantime, a government review of the telecoms supply chain this summer called for tougher security standards and updated regulations — with major fines for failure. So it's possible that stringent UK regulations might sum to a de facto ban if Huawei's approach to security isn't seen to take major steps forward soon.
According to Handelsblatt's report, Germany's incoming guidance for 5G network operators will require carriers to identify critical areas of network architecture and apply an increased level of security to them. (Although it's worth pointing out there's ongoing debate about how to define critical/core network areas in 5G networks.)
The Federal Office for Information Security (BSI) will be responsible for carrying out security inspections of networks.
Last week a pan-EU security threat assessment of 5G technology highlighted risks from “non-EU state or state-backed actors” — in a coded jab at Huawei.
The report also flagged increased security challenges attached to 5G vs current gen networks on account of the expanded role of software in the networks and apps running on 5G. And warned of too much dependence on individual 5G suppliers, and of operators relying overly on a single supplier.
Shortly afterwards the WSJ obtained a private risk assessment by EU governments — which appears to dial up regional concerns over Huawei, focusing on threats linked to 5G providers in countries with “no democratic and legal restrictions in place”.
Among the discussed risks in this non-public report are the insertion of concealed hardware, software or flaws into 5G networks; and the risk of uncontrolled software updates, backdoors or undocumented testing features left in the production version of networking products.
“These vulnerabilities are not ones which can be remedied by making small technical changes, but are strategic and lasting in nature,” a source familiar with the discussions told the WSJ — which implies that short term economic considerations risk translating into major strategic vulnerabilities down the line.
5G alternatives are in short supply, though.
US Senator Mark Warner recently floated the idea of creating a consortium of ‘Five Eyes’ allies — aka the U.S., Australia, Canada, New Zealand and the UK — to finance and build “a Western open-democracy type equivalent” to Huawei.
But any such move would clearly take time, even as Huawei continues selling services around the world and embedding its 5G kit into next-gen networks.

California’s Privacy Act: What you need to know now

This week California’s attorney general, Xavier Becerra, published draft guidance for enforcing the state’s landmark privacy legislation.
The draft text of the regulations under the California Consumer Privacy Act (CCPA) will undergo a public consultation period, including a number of public hearings, with submissions open until December 6 this year.
The CCPA itself will take effect in the state on January 1, with a further six months’ grace period before enforcement of the law begins.
“The proposed regulations are intended to operationalize the CCPA and provide practical guidance to consumers and businesses subject to the law,” writes the State of California’s Department of Justice in a press release announcing the draft text. “The regulations would address some of the open issues raised by the CCPA and would be subject to enforcement by the Department of Justice with remedies provided under the law.”
Translation: Here’s the extra detail we think is needed to make the law work.
The CCPA was signed into law in June 2018 — enshrining protections for a sub-set of US citizens against their data being collected and sold without their knowledge.
The law requires businesses over a certain user and/or revenue threshold to disclose what personal data they collect, the purposes they intend to use the data for and any third parties it will be shared with — as well as requiring that they provide a discrimination-free opt-out from their personal data being sold or shared.
Businesses must also comply with consumer requests for their data to be deleted.
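For a sense of what those obligations translate to in practice, here's a minimal sketch of the two consumer-facing hooks the law describes: a "do not sell" opt-out and a deletion request. It assumes Flask, and every name in it (the routes, the in-memory USERS store) is a hypothetical illustration rather than a compliance implementation.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical stand-in for a real user datastore.
USERS = {"u123": {"email": "jane@example.com", "opted_out_of_sale": False}}

@app.route("/ccpa/opt-out/<user_id>", methods=["POST"])
def opt_out(user_id):
    # CCPA: consumers get a discrimination-free opt-out from their
    # personal data being sold or shared.
    user = USERS.get(user_id)
    if user is None:
        return jsonify(error="unknown user"), 404
    user["opted_out_of_sale"] = True
    return jsonify(status="opted out of data sale")

@app.route("/ccpa/delete/<user_id>", methods=["POST"])
def delete_data(user_id):
    # CCPA: businesses must comply with verified consumer requests
    # for their data to be deleted.
    if USERS.pop(user_id, None) is None:
        return jsonify(error="unknown user"), 404
    return jsonify(status="personal data deleted")

if __name__ == "__main__":
    app.run()
```

A real implementation would, among much else, need to verify the requester's identity before acting on either request; that is exactly the kind of operational detail the draft regulations aim to pin down.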

Amid security concerns, the European Union puts 5G — and Huawei — under the microscope

The European Union is putting the rollout of the new, high-speed mobile networking technology known as 5G under the microscope — a move that could affect the technology's dominant company, Huawei.
Regulators focused on specific security threats linked to technology providers headquartered in countries with “no democratic and legal restrictions in place,” according to a report in The Wall Street Journal. 
The news follows the release of a public report from the European Union that enumerated a number of challenges with 5G technology.
Heightened scrutiny of 5G implementation on European shores actually began back in March as member states wrestled with how to address American pressure to block Huawei from building out new telecommunications infrastructure on the continent.
The report from earlier in the week identified three security concerns relating to an over-reliance on individual technology suppliers — especially where a supplier represents a high degree of risk, given its relationship to the government in its native country.
The new, private assessment reviewed by the WSJ is raising particular concerns about Huawei, according to the latest report.

“These vulnerabilities are not ones which can be remedied by making small technical changes, but are strategic and lasting in nature,” a source familiar with the discussions told the WSJ.
According to the WSJ report, concerns raised by the new EU analysis include: the insertion of concealed hardware, software or flaws into the 5G network; or the risk of uncontrolled software updates, backdoors or undocumented testing features left in the production version of the networking products.
U.S. security experts have long been concerned about Huawei’s dominance over the new telecommunications technology. Indeed, security officials and U.S. regulators have begun advocating for a combined public-private response to the threat Huawei poses (in concert with European allies).
As trade talks resume between the U.S. and China in an effort to end the ongoing trade war between the two countries, the hard-line stance that the U.S. government has taken on China’s telecommunications and networking technology powerhouse may be changing.
Yesterday The New York Times reported that the U.S. government would allow some companies to resume selling to Huawei, reversing course on a ban on technology sales that had been imposed over the summer.
Now, with a preliminary trade deal apparently in place, the fate of Huawei’s 5G ambitions remain up in the air. Both the U.S. and the European Union have significant concerns, but China is likely to bring up Huawei’s ability to sell into foreign markets as part of any agreement.

European risk report flags 5G security challenges

European Union Member States have published a joint risk assessment report into 5G technology which highlights increased security risks that will require a new approach to securing telecoms infrastructure.
The EU has so far resisted pressure from the U.S. to boycott Chinese tech giant Huawei as a 5G supplier on national security grounds, with individual Member States such as the UK also taking their time to chew over the issue.
But the report flags risks to 5G from what it couches as "non-EU state or state-backed actors" — which can be read as diplomatic code for Huawei. Though, as some industry watchers have been quick to point out, the label could be applied rather closer to home in the near future, should Brexit come to pass…

Some parts of the 5G report on risk of non-EU cyberattacks may accidentally gain a new unexpected meaning after #Brexit (https://t.co/o7gyV0hqCv) https://t.co/VgU30kRz4p
— Lukasz Olejnik (@lukOlejnik) October 9, 2019

Back in March, as European telecom industry concern swirled about how to respond to US pressure to block Huawei, the Commission stepped in to issue a series of recommendations — urging Member States to step up individual and collective attention to mitigate potential security risks as they roll out 5G networks.
Today’s risk assessment report follows on from that.
It identifies a number of “security challenges” that the report suggests are “likely to appear or become more prominent in 5G networks” vs current mobile networks — linked to the expanded use of software to run 5G networks; and software and apps that will be enabled by and run on the next-gen networks.
The role of suppliers in building and operating 5G networks is also noted as a security challenge, with the report warning of a “degree of dependency on individual suppliers”, and also of too many eggs being placed in the basket of a single 5G supplier.
Summing up the effects expected to follow from 5G rollouts, the report predicts:
An increased exposure to attacks and more potential entry points for attackers: With 5G networks increasingly based on software, risks related to major security flaws, such as those deriving from poor software development processes within suppliers are gaining in importance. They could also make it easier for threat actors to maliciously insert backdoors into products and make them harder to detect.
Due to new characteristics of the 5G network architecture and new functionalities, certain pieces of network equipment or functions are becoming more sensitive, such as base stations or key technical management functions of the networks.
An incre