
Divesting from one facial recognition startup, Microsoft ends outside investments in the tech

Microsoft is pulling out of an investment in an Israeli facial recognition technology developer as part of a broader policy shift to halt any minority investments in facial recognition startups, the company announced late last week.
The decision to withdraw its investment from AnyVision, an Israeli company developing facial recognition software, came as a result of an investigation into reports that AnyVision’s technology was being used by the Israeli government to surveil residents in the West Bank.
The investigation, conducted by former U.S. Attorney General Eric Holder and his team at Covington & Burling, confirmed that AnyVision’s technology was used to monitor border crossings between the West Bank and Israel, but did not “power a mass surveillance program in the West Bank.”
Microsoft’s venture capital arm, M12 Ventures, backed AnyVision as part of the company’s $74 million financing round, which closed in June 2019. Investors who continue to back the company include DFJ Growth, OG Technology Partners, LightSpeed Venture Partners, Robert Bosch GmbH, Qualcomm Ventures, and Eldridge Industries.
Microsoft first staked out its position on how it would approach facial recognition technologies in 2018, when company President Brad Smith issued a statement calling on the government to come up with clear regulations around facial recognition in the U.S.
Smith’s calls for more regulation and oversight became more strident by the end of the year, when Microsoft issued a statement on its approach to facial recognition.
Smith wrote:
We and other tech companies need to start creating safeguards to address facial recognition technology. We believe this technology can serve our customers in important and broad ways, and increasingly we’re not just encouraged, but inspired by many of the facial recognition applications our customers are deploying. But more than with many other technologies, this technology needs to be developed and used carefully. After substantial discussion and review, we have decided to adopt six principles to manage these issues at Microsoft. We are sharing these principles now, with a commitment and plans to implement them by the end of the first quarter in 2019.
The six principles Microsoft laid out were fairness, transparency, accountability, non-discrimination, notice and consent, and lawful surveillance.
Critics took the company to task for its investment in AnyVision, saying that the decision to back a company working with the Israeli government on wide-scale surveillance ran counter to the principles it had set out for itself.
Now, after determining that it is too difficult to control how facial recognition technologies are deployed by companies in which it holds only a minority stake, the company is suspending its outside investments in the technology.
“For Microsoft, the audit process reinforced the challenges of being a minority investor in a company that sells sensitive technology, since such investments do not generally allow for the level of oversight or control that Microsoft exercises over the use of its own technology,” the company wrote in a statement on its M12 Ventures website. “Microsoft’s focus has shifted to commercial relationships that afford Microsoft greater oversight and control over the use of sensitive technologies.”
 
 

Google’s new T&Cs include a Brexit ‘easter egg’ for UK users

Google has buried a major change in legal jurisdiction for its UK users as part of a wider update to its terms and conditions that’s been announced today and which it says is intended to make its conditions of use clearer for all users.
Google says the update to its T&Cs is the first major revision since 2012, and that it wanted to ensure the policy reflects its current products and applicable laws.
Google says it undertook a major review of the terms, similar to the revision of its privacy policy in 2018, when the EU’s General Data Protection Regulation started being applied. But while it claims the new T&Cs are easier for users to understand — rewritten using simpler language and a clearer structure — there are no other changes involved, such as to how it handles people’s data.
“We’ve updated our Terms of Service to make them easier for people around the world to read and understand — with clearer language, improved organization, and greater transparency about changes we make to our services and products. We’re not changing the way our products work, or how we collect or process data,” Google spokesperson Shannon Newberry said in a statement.
Users of Google products are being asked to review and accept the new terms before March 31 when they are due to take effect.
Reuters reported on the move late yesterday — citing sources familiar with the update who suggested the change of jurisdiction for UK users will weaken legal protections around their data.
However Google disputes there will be any change in privacy standards for UK users as a result of the shift. It told us there will be no change to how it processes UK users’ data; no change to their privacy settings; and no change to the way it treats their information as a result of the move.
We asked the company for further comment on this — including why it chose not to make a UK subsidiary the legal base for UK users — and a spokesperson told us it is making the change as part of its preparations for the UK to leave the European Union (aka Brexit).
“Like many companies, we have to prepare for Brexit,” Google said. “Nothing about our services or our approach to privacy will change, including how we collect or process data, and how we respond to law enforcement demands for users’ information. The protections of the UK GDPR will still apply to these users.”
Heather Burns, a tech policy specialist based in Glasgow, Scotland — who runs a website dedicated to tracking UK policy shifts around the Brexit process — also believes Google has essentially been forced to make the move because the UK government has recently signalled its intent to diverge from European Union standards in future, including on data protection.
“What has changed since January 31 has been [UK prime minister] Boris Johnson making a unilateral statement that the UK will go its own way on data protection, in direct contrast to everything the UK’s data protection regulator and government has said since the referendum,” she told us. “These bombastic, off-the-cuff statements play to his anti-EU base but businesses act on them. They have to.”
“Google’s transfer of UK accounts from the EU to the US is an indication that they do not believe the UK will either seek or receive a data protection adequacy agreement at the end of the transition period. They are choosing to deal with that headache now rather than later. We shouldn’t underestimate how strong a statement this is from the tech sector regarding its confidence in the Johnson premiership,” she added.
Asked whether she believes there will be a reduction in protections for UK users in future as a result of the shift, Burns suggested that will largely depend on Google.
So — in other words — Brexit means, er, trust Google to look after your data.
“The European data protection framework is based around a set of fundamental user rights and controls over the uses of personal data — the everyday data flows to and from all of our accounts. Those fundamental rights have been transposed into UK domestic law through the Data Protection Act 2018, and they will stay, for now. But with the Johnson premiership clearly ready to jettison the European-derived system of user rights for the US-style anything goes model,” Burns suggested.
“Google saying there is no change to the way we process users’ data, no change to their privacy settings and no change to the way we treat their information can be taken as an indication that they stand willing to continue providing UK users with European-style rights over their data — albeit from a different jurisdiction — regardless of any government intention to erode the domestic legal basis for those rights.”
Reuters’ report also raises concerns about the impact of the Cloud Act agreement between the UK and the US — which is due to come into effect this summer — suggesting it will pose a threat to the safety of UK Google users’ data once it’s moved out of an EU jurisdiction (in this case Ireland) to the US where the Act will apply.
The Cloud Act is intended to make it quicker and easier for law enforcement to obtain data stored in the cloud by companies based in the other legal jurisdiction.
So in future, it might be easier for UK authorities to obtain UK Google users’ data using this legal instrument applied to Google US.
It certainly seems clear that as the UK moves away from EU standards as a result of Brexit it is opening up the possibility of the country replacing long-standing data protection rights for citizens with a regime of supercharged mass surveillance. (The UK government has already legislated to give its intelligence agencies unprecedented powers to snoop on ordinary citizens’ digital comms — so it has a proven appetite for bulk data.)
Again, Google told us the shift of legal base for its UK users will make no difference to how it handles law enforcement requests — a process it talks about here — and further claimed this will be true even when the Cloud Act applies. Which is a weasely way of saying it will do exactly what the law requires.
Google confirmed that GDPR will continue to apply for UK users during the transition period between the old and new terms. After that it said UK data protection law will continue to apply — emphasizing that this is modelled after the GDPR. But of course in the post-Brexit future the UK government might choose to model it after something very different.
Asked to confirm whether it’s committing to maintain current data standards for UK users in perpetuity, the company told us it cannot speculate as to what privacy laws the UK will adopt in the future…
We also asked why it hasn’t chosen to elect a UK subsidiary as the legal base for UK users. To which it gave a nonsensical response — saying this is because the UK is no longer in the EU. Which begs the question when did the UK suddenly become the 51st American State?
Returning to the wider T&Cs revision, Google said it’s making the changes in response to litigation in the European Union targeted at its terms.
This includes a case in Germany where consumer rights groups successfully sued the tech giant over its use of overly broad terms which the court agreed last year were largely illegal.
In another case a year ago in France a court ordered Google to pay €30,000 for unfair terms — and ordered it to obtain valid consent from users for tracking their location and online activity.
Since at least 2016 the European Commission has also been pressuring tech giants, including Google, to fix consumer rights issues buried in their T&Cs — including unfair terms. A variety of EU laws apply in this area.
In another change being bundled with the new T&Cs Google has added a description about how its business works to the About Google page — where it explains its business model and how it makes money.
Here, among the usual ‘dead cat’ claims about not ‘selling your information’ (tl;dr adtech giants rent attention; they don’t need to sell actual surveillance dossiers), Google writes that it doesn’t use “your emails, documents, photos or confidential information (such as race, religion or sexual orientation) to personalize the ads we show you”.
Though it could be using all that personal stuff to help it build new products it can serve ads alongside.
Even further towards the end of its business model screed it includes the claim that “if you don’t want to see personalized ads of any kind, you can deactivate them at any time”. So, yes, buried somewhere in Google’s labyrinthine settings there exists an opt-out.
The change in how Google articulates its business model comes in response to growing political and regulatory scrutiny of adtech business models such as Google’s — including on data protection and antitrust grounds.

Europe sets out plan to boost data reuse and regulate “high risk” AIs

European Union lawmakers have set out a first bundle of proposals for a new digital strategy for the bloc, one that’s intended to drive digitalization across all industries and sectors — and enable what Commission president Ursula von der Leyen has described as ‘A Europe fit for the Digital Age’.
It could also be summed up as a ‘scramble for AI’, with the Commission keen to rub out barriers to the pooling of massive European data sets in order to power a new generation of data-driven services as a strategy to boost regional competitiveness vs China and the U.S.
Pushing for the EU to achieve technological sovereignty is a key plank of von der Leyen’s digital policy plan for the 27-Member State bloc.
Presenting the latest on her digital strategy to press in Brussels today, she said: “We want the digital transformation to power our economy and we want to find European solutions in the digital age.”
The top-line proposals are:
AI
Rules for “high risk” AI systems, such as in health, policing, or transport, requiring that such systems be “transparent, traceable and guarantee human oversight”
A requirement that unbiased data is used to train high-risk systems so that they “perform properly, and to ensure respect of fundamental rights, in particular non-discrimination”
Consumer protection rules so authorities can “test and certify” data used by algorithms in a similar way to existing rules that allow for checks to be made on products such as cosmetics, cars or toys
A “broad debate” on the circumstances in which the use of remote biometric identification could be justified
A voluntary labelling scheme for lower risk AI applications
The creation of an EU governance structure to ensure a framework for compliance with the rules and avoid fragmentation across the bloc
Data
A regulatory framework covering data governance, access and reuse between businesses, between businesses and government, and within administrations to create incentives for data sharing, which the Commission says will establish “practical, fair and clear rules on data access and use, which comply with European values and rights such as personal data protection, consumer protection and competition rules” 
A push to make public sector data more widely available by opening up “high-value datasets” to enable their reuse to foster innovation
Support for cloud infrastructure platforms and systems to support the data reuse goals. The Commission says it will contribute to investments in European High Impact projects on European data spaces and trustworthy and energy efficient cloud infrastructures
Sectoral specific actions to build European data spaces that focus on specific areas such as industrial manufacturing, the green deal, mobility or health
The full data strategy proposal can be found here.
While the Commission’s white paper on AI “excellence and trust” is here.
Next steps will see the Commission taking feedback on the plan — as it kicks off public consultation on both proposals.
A final draft is slated by the end of the year after which the various EU institutions will have their chance to chip into (or chip away at) the plan. So how much policy survives for the long haul remains to be seen.
Tech for good
At a press conference following von der Leyen’s statement Margrethe Vestager, the Commission EVP who heads up digital policy, and Thierry Breton, commissioner for the internal market, went into some of the detail around the Commission’s grand plan for “shaping Europe’s digital future”.
The digital policy package is meant to define how we shape Europe’s digital future “in a way that serves us all”, said Vestager.
The strategy aims to unlock access to “more data and good quality data” to fuel innovation and underpin better public services, she added.
The Commission’s digital EVP Margrethe Vestager discussing the AI whitepaper
Collectively, the package is about embracing the possibilities AI creates while managing the risks, she also said, adding: “The point obviously is to create trust, rather than fear.”
She noted that the two policy pieces being unveiled by the Commission today, on AI and data, form part of a more wide-ranging digital and industrial strategy whole with additional proposals still to be set out.
“The picture that will come when we have assembled the puzzle should illustrate three objectives,” she said. “First, that technology should work for people and not the other way round; it is first and foremost about purpose. The development, the deployment, the uptake of technology must work in the same direction to make a real positive difference in our daily lives.
“Second, that we want a fair and competitive economy — a full Single Market where companies of all sizes can compete on equal terms, where the road from garage to scale up is as short as possible. But it also means an economy where the market power held by a few incumbents cannot be used to block competition. It also means an economy where consumers can take it for granted that their rights are being respected and profits are being taxed where they are made.”
Thirdly, she said the Commission plan would support “an open, democratic and sustainable society”.
“This means a society where citizens can control the data that they provide, where digital platforms are accountable for the contents that they feature… This is a fundamental thing — that while we use new digital tools, use AI as a tool, that we build a society based on our fundamental rights,” she added, trailing a forthcoming democracy action plan.
Digital technologies must also actively enable the green transition, said Vestager — pointing to the Commission’s pledge to achieve carbon neutrality by 2050. Digital, satellite, GPS and sensor data would be crucial to this goal, she suggested.
“More than ever a green transition and digital transition goes hand in hand.”
On the data package Breton said the Commission will launch a European and industrial cloud platform alliance to drive interest in building the next gen platforms he said would be needed to enable massive big data sharing across the EU — tapping into 5G and edge computing.
“We want to mobilize up to €2BN in order to create and mobilize this alliance,” he said. “In order to run this data you need to have specific platforms… Most of this data will be created locally and processed locally — thanks to 5G critical network deployments but also locally to edge devices. By 2030 we expect on the planet to have 500BN connected devices… and of course all the devices will exchange information extremely quickly. And here of course we need to have specific mini cloud or edge devices to store this data and to interact locally with the AI applications embedded on top of this.
“And believe me the requirement for these platforms are not at all the requirements that you see on the personal b2c platform… And then we need of course security and cyber security everywhere. You need of course latencies. You need to react in terms of millisecond — not tenths of a second. And that’s a totally different infrastructure.”
“We have everything in Europe to win this battle,” he added. “Because no one has expertise of this battle and the foundation — industrial base — than us. And that’s why we say that maybe the winner of tomorrow will not be the winner of today or yesterday.”
Trustworthy artificial intelligence
On AI Vestager said the major point of the plan is “to build trust” — by using a dual push to create what she called “an ecosystem of excellence” and another focused on trust.
The first piece includes a push by the Commission to stimulate funding, including in R&D and support for research such as by bolstering skills. “We need a lot of people to be able to work with AI,” she noted, saying it would be essential for small and medium sized businesses to be “invited in”.
On trust the plan aims to use risk to determine how much regulation is involved, with the most stringent rules being placed on what it dubs “high risk” AI systems. “That could be when AI tackles fundamental values, it could be life or death situation, any situation that could cause material or immaterial harm or expose us to discrimination,” said Vestager.
To scope this, the Commission’s approach will focus on sectors where such risks might apply — such as energy and recruitment.
If an AI product or service is identified as posing a risk then the proposal is for an enforcement mechanism to test that the product is safe before it is put into use. These proposed “conformity assessments” for high risk AI systems include a number of obligations Vestager said are based on suggestions by the EU’s High Level Expert Group on AI — which put out a slate of AI policy recommendations last year.
The four requirements attached to this bit of the proposals are: 1) that AI systems should be trained using data that “respects European values and rules” and that a record of such data is kept; 2) that an AI system should provide “clear information to users about its purpose, its capabilities but also its limits” and that it be clear to users when they are interacting with an AI rather than a human; 3) AI systems must be “technically robust and accurate in order to be trustworthy”; and 4) they should always ensure “an appropriate level of human involvement and oversight”.
Obviously there are big questions about how such broad-brush requirements will be measured and stood up (as well as actively enforced) in practice.
If an AI product or service is not identified as high risk Vestager noted there would still be regulatory requirements in play — such as the need for developers to comply with existing EU data protection rules.
In her press statement, Commission president von der Leyen highlighted a number of examples of how AI might power a range of benefits for society — from “better and earlier” diagnosis of diseases like cancer to helping with her parallel push for the bloc to be carbon neutral by 2050, such as by enabling precision farming and smart heating — emphasizing that such applications rely on access to big data.
“Artificial intelligence is about big data,” she said. “Data, data and again data. And we all know that the more data we have the smarter our algorithms. This is a very simple equation. Therefore it is so important to have access to data that are out there. This is why we want to give our businesses but also the researchers and the public services better access to data.”
“The majority of data we collect today are never ever used even once. And this is not at all sustainable,” she added. “In these data we collect that are out there lies an enormous amount of precious ideas, potential innovation, untapped potential we have to unleash — and therefore we follow the principle that in Europe we have to offer data spaces where you can not only store your data but also share with others. And therefore we want to create European data spaces where businesses, governments and researchers can not only store their data but also have access to other data they need for their innovation.”
She too impressed the need for AI regulation, including to guard against the risk of biased algorithms — saying “we want citizens to trust the new technology”. “We want the application of these new technologies to deserve the trust of our citizens. This is why we are promoting a responsible, human centric approach to artificial intelligence,” she added.
She said the planned restrictions on high risk AI would apply in fields such as healthcare, recruitment, transportation, policing and law enforcement — and potentially others.
“We will be particularly careful with sectors where essential human interests and rights are at stake,” she said. “Artificial intelligence must serve people. And therefore artificial intelligence must always comply with people’s rights. This is why a person must always be in control of critical decisions and so called ‘high risk AI’ — this is AI that potentially interferes with people’s rights — have to be tested and certified before they reach our single market.”
“Today’s message is that artificial intelligence is a huge opportunity in Europe, for Europe. We do have a lot but we have to unleash this potential that is out there. We want this innovation in Europe,” von der Leyen added. “We want to encourage our businesses, our researchers, the innovators, the entrepreneurs, to develop artificial intelligence and we want to encourage our citizens to feel confident to use it in Europe.”
Towards a rights-respecting common data space
The European Commission has been working on building what it dubs a “data economy” for several years at this point, plugging into its existing Digital Single Market strategy for boosting regional competitiveness.
Its aim is to remove barriers to the sharing of non-personal data within the single market. The Commission has previously worked on regulation to ban most data localization, as well as setting out measures to encourage the reuse of public sector data and open up access to scientific data.
Healthcare data sharing has also been in its sights, with policies to foster interoperability around electronic health records, and it’s been pushing for more private sector data sharing — both b2b and business-to-government.
“Every organisation should be able to store and process data anywhere in the European Union,” it wrote in 2018. It has also called the plan a “common European data space“. Aka “a seamless digital area with the scale that will enable the development of new products and services based on data”.
The focus on freeing up the flow of non-personal data is intended to complement the bloc’s long-standing rules on protecting personal data. The General Data Protection Regulation (GDPR), which came into force in 2018, has reinforced EU citizens’ rights around the processing of their personal information — updating and bolstering prior data protection rules.
The Commission views GDPR as a major success story by virtue of how it’s exported conversations about EU digital standards to a global audience.
But it’s fair to say that back home enforcement of the GDPR remains a work in progress, some 21 months in — with many major cross-border complaints attached to how tech and adtech giants are processing people’s data still sitting on the desk of the Irish Data Protection Commission where multinationals tend to locate their EU HQ as a result of favorable corporate tax arrangements.
The Commission’s simultaneous push to encourage the development of AI arguably risks heaping further pressure on the GDPR — as both private and public sectors have been quick to see model-making value locked up in citizens’ data.
Already across Europe there are multiple examples of companies and/or state authorities working on building personal data-fuelled diagnostic AIs for healthcare; using machine learning for risk scoring of benefits claimants; and applying facial recognition as a security aid for law enforcement, to give three examples.
Controversy has quickly followed such developments, including around issues such as proportionality and the question of consent to legally process people’s data — both under the GDPR and in light of EU fundamental privacy rights, as well as those set out in the European Convention on Human Rights.
Only this month a Dutch court ordered the state to cease use of a blackbox algorithm for assessing the fraud risk of benefits claimants on human rights grounds — objecting to a lack of transparency around how the system functions and therefore also “insufficient” controllability.
The von der Leyen Commission, which took up its five-year mandate in December, is alive to rights concerns about how AI is being applied, even as it has made it clear it intends to supercharge the bloc’s ability to leverage data and machine learning technologies — eyeing economic gains.
Commission president, Ursula von der Leyen, visiting the AI Intelligence Center in Brussels (via the EC’s EbS Live AudioVisual Service)
The Commission president committed to publishing proposals to regulate AI within the first 100 days — saying she wants a European framework to steer application to ensure powerful learning technologies are used ethically and for the public good.
But a leaked draft of the plan to regulate AI last month suggested it would step back from imposing even a temporary ban on the use of facial recognition technology — leaning instead towards tweaks to existing rules and sector/app specific risk-assessments and requirements.
It’s clear there are competing views at the top of the Commission on how much policy intervention is needed on the tech sector.
Breton has previously voiced opposition to regulating AI — telling the EU parliament just before he was confirmed in post that he “won’t be the voice of regulating AI“.
While Vestager has been steady in her public backing for a framework to govern how AI is applied, talking at her hearing before the EU parliament of the importance of people’s trust and Europe having its own flavor of AI that must “serve humans” and have “a purpose”.
“I don’t think that we can be world leaders without ethical guidelines,” she said then. “I think we will lose it if we just say no let’s do as they do in the rest of the world — let’s pool all the data from everyone, no matter where it comes from, and let’s just invest all our money.”
At the same time Vestager signalled a willingness to be pragmatic in the scope of the rules and how they would be devised — emphasizing the need for speed and agreeing the Commission would need to be “very careful not to over-regulate”, suggesting she’d accept a core minimum to get rules up and running.
Today’s proposal steers away from more stringent AI rules — such as a ban on facial recognition in public places. On biometric AI technologies Vestager described some existing uses as “harmless” during today’s press conference — such as unlocking a phone or for automatic border gates — whereas she stressed the difference in terms of rights risks related to the use of remote biometric identification tech such as facial recognition.
“With this white paper the Commission is launching a debate on the specific circumstance — if any — which might justify the use of such technologies in public space,” she said, putting some emphasis on the word ‘any’.
The Commission is encouraging EU citizens to put questions about the digital strategy for Vestager to answer tomorrow, in a live Q&A at 17.45 CET on Facebook, Twitter and LinkedIn — using the hashtag #DigitalEU

Do you want to know more on the EU’s digital strategy? Use #DigitalEU to share your questions and we will ask them to Margrethe Vestager this Thursday. pic.twitter.com/I90hCR6Gcz
— European Commission (@EU_Commission) February 18, 2020

Platform liability
There is more to come from the Commission on the digital policy front — with a Digital Services Act in the works to update pan-EU liability rules around Internet platforms.
That proposal is slated to be presented later this year and both commissioners said today that details remain to be worked out. The possibility that the Commission will propose rules to more tightly regulate online content platforms already has content farming adtech giants like Facebook cranking up their spin cycles.
During today’s press conference Breton said he would always push for what he dubbed “shared governance” but he warned several times that if platforms don’t agree an acceptable way forward “we will have to regulate” — saying it’s not for European society to adapt to the platforms but for the platforms to adapt to the EU.
“We will do this within the next eight months. It’s for sure. And everybody knows the rules,” he said. “Of course we’re entering here into dialogues with these platforms and like with any dialogue we don’t know exactly yet what will be the outcome. We may find at the end of the day a good coherent joint strategy which will fulfil our requirements… regarding the responsibilities of the platform. And by the way this is why personally when I meet with them I will always prefer a shared governance. But we have been extremely clear if it doesn’t work then we will have to regulate.”
Internal market commissioner, Thierry Breton

Sixgill raises $15M to expand its dark web intelligence platform

Sixgill, an Israeli cyber threat intelligence company that specializes in monitoring the deep and dark web, today announced that it has raised a $15 million funding round led by Sonae IM, a fund based in Portugal, and London-based REV Venture Partners. Crowdfunding platform OurCrowd also participated in the round, as did previous investors Elron and Terra Venture Partners.
According to Crunchbase, this brings the company’s total funding to $21 million to date.
Sixgill, which was founded in 2014, plans to use the new funding to expand its efforts in North America, EMEA and APAC. In addition to expanding its geographic focus, Sixgill also plans to expand its product’s capabilities, including its Dynamic CVE Rating.

 
Its current customer base mostly includes large enterprises, law enforcement and other government agencies, as well as other security providers.
Given its focus, that client list doesn’t come as a surprise. The company uses its technology to automatically monitor dark web forums and marketplaces for potential threats and then find those that could affect its clients. Users can either access Sixgill’s service through its SaaS platform or install it on-premises. For enterprises and agencies that don’t have their own staff to run the service, Sixgill also offers access to its internal analysts.
“Sixgill uses advanced automation and artificial intelligence technologies to provide accurate, contextual intelligence to customers. The solution integrates seamlessly into the platforms that security teams use to orchestrate, automate, and manage security events,” said Sharon Wagner, CEO of Sixgill. “The market has made it clear that Sixgill has built a powerful real-time engine for more effective handling of the rapidly expanding threat landscape; this investment will position us for significant growth and expansion in 2020.”

ACLU says it’ll fight DHS efforts to use app locations for deportations

The American Civil Liberties Union plans to fight newly revealed practices by the Department of Homeland Security which used commercially available cell phone location data to track suspected illegal immigrants.
“DHS should not be accessing our location information without a warrant, regardless whether they obtain it by paying or for free. The failure to get a warrant undermines Supreme Court precedent establishing that the government must demonstrate probable cause to a judge before getting some of our most sensitive information, especially our cell phone location history,” said Nathan Freed Wessler, a staff attorney with the ACLU’s Speech, Privacy, and Technology Project.
Earlier today, The Wall Street Journal reported that the Department of Homeland Security, through its Immigration and Customs Enforcement (ICE) and Customs & Border Protection (CBP) agencies, was buying geolocation data from commercial entities to investigate suspects of alleged immigration violations.
The location data, which aggregators acquire from cellphone apps including games, weather, shopping, and search services, is being used by Homeland Security to detect undocumented immigrants and others entering the U.S. unlawfully, the Journal reported.
According to privacy experts interviewed by the Journal, since the data is publicly available for purchase, the government practices don’t appear to violate the law — despite being what may be the largest dragnet ever conducted by the U.S. government using the aggregated data of its citizens.
It’s also an example of how the commercial surveillance apparatus put in place by private corporations in democratic societies can be legally accessed by state agencies to create the same kind of surveillance networks used in more authoritarian countries like China, India, and Russia.

“This is a classic situation where creeping commercial surveillance in the private sector is now bleeding directly over into government,” Alan Butler, general counsel of the Electronic Privacy Information Center, a think tank that pushes for stronger privacy laws, told the newspaper.
Behind the government’s use of commercial data is a company called Venntel. Based in Herndon, Va., the company acts as a government contractor and shares a number of its executive staff with Gravy Analytics, a mobile-advertising marketing analytics company. In all, ICE and the CBP have spent nearly $1.3 million on licenses for software that can provide location data for cell phones. Homeland Security says that the data from these commercially available records is used to generate leads about border crossing and detecting human traffickers.
The ACLU’s Wessler has won these kinds of cases in the past. He successfully argued before the Supreme Court in the case of Carpenter v. United States that geographic location data from cellphones was a protected class of information and couldn’t be obtained by law enforcement without a warrant.

CBP explicitly excludes cell tower data from the information it collects from Venntel, a spokesperson for the agency told the Journal — in part because it has to under the law. The agency also said that it only accesses limited location data and that the data is anonymized.

However, anonymized data can be linked to specific individuals by correlating the anonymous cell phone information with the real-world movements of specific people, which can often be deduced or tracked through other types of public records and publicly available social media.
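To make that re-identification risk concrete, here is a minimal, hypothetical sketch (not any agency’s or data broker’s actual method) of how a pseudonymous location trace could be matched to a named person: cluster a device’s night-time and daytime pings, then compare those clusters to known home and work addresses. The device IDs, coordinates and distance thresholds below are all illustrative assumptions.

```python
from collections import Counter
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def top_spot(pings, hours):
    """Most frequent rounded (lat, lon) among pings whose hour-of-day falls in `hours`."""
    spots = Counter((round(lat, 3), round(lon, 3)) for lat, lon, hour in pings if hour in hours)
    return spots.most_common(1)[0][0] if spots else None

def link_devices_to_people(device_pings, people, max_km=0.2):
    """Match 'anonymous' device IDs to people whose known home/work locations
    sit close to the device's typical night-time and daytime spots."""
    night, day = {22, 23, 0, 1, 2, 3, 4, 5}, set(range(9, 18))
    matches = {}
    for device_id, pings in device_pings.items():
        home_guess, work_guess = top_spot(pings, night), top_spot(pings, day)
        if home_guess and work_guess:
            for name, places in people.items():
                if (haversine_km(home_guess, places["home"]) < max_km
                        and haversine_km(work_guess, places["work"]) < max_km):
                    matches[device_id] = name
    return matches

# Illustrative data only: one device whose pings cluster at a known person's addresses.
people = {"Jane Doe": {"home": (38.921, -77.036), "work": (38.897, -77.036)}}
device_pings = {"device-123": [(38.9211, -77.0361, 23), (38.9209, -77.0362, 2),
                               (38.8971, -77.0359, 10), (38.8969, -77.0361, 14)]}
print(link_devices_to_people(device_pings, people))  # {'device-123': 'Jane Doe'}
```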
ICE is already being sued by the ACLU for another potential privacy violation. Late last year the ACLU said that it was taking the government to court over the DHS service’s use of so-called “stingray” technology that spoofs a cell phone tower to determine someone’s location.


At the time, the ACLU cited a government oversight report in 2016 which indicated that both CBP and ICE collectively spent $13 million on buying dozens of stingrays, which the agencies used to “locate people for arrest and prosecution.”

Ancestry.com rejected a police warrant to access user DNA records on a technicality

DNA profiling company Ancestry.com has narrowly avoided complying with a search warrant in Pennsylvania after the warrant was rejected on technical grounds — a move that is likely to help law enforcement refine their efforts to obtain user information despite the company’s efforts to keep the data private.
Little is known about the demands of the search warrant, only that a court in Pennsylvania approved law enforcement to “seek access” to Utah-based Ancestry.com’s database of more than 15 million DNA profiles.
TechCrunch was not able to identify the search warrant or its associated court case, which was first reported by BuzzFeed News on Monday. But it’s not uncommon for criminal cases still in the early stages of gathering evidence to remain under seal and hidden from public records until a suspect is apprehended.
DNA profiling companies like Ancestry.com are increasingly popular with customers hoping to build up family trees by discovering new family members and better understanding their cultural and ethnic backgrounds. But these companies are also ripe for the picking by law enforcement, which wants access to genetic databases to try to solve crimes from DNA left at crime scenes.
In an email to TechCrunch, the company confirmed that the warrant was “improperly served” on the company and was flatly rejected.
“We did not provide any access or customer data in response,” said spokesperson Gina Spatafore. “Ancestry has not received any follow-up from law enforcement on this matter.”
Ancestry.com, the largest of the DNA profiling companies, would not go into specifics, but the company’s transparency report said it rejected the warrant on “jurisdictional grounds.”
“I would guess it was just an out of state warrant that has no legal effect on Ancestry.com in its home state,” said Orin S. Kerr, law professor at the University of California, Berkeley, in an email to TechCrunch. “Warrants normally are only binding within the state in which they are issued, so a warrant for Ancestry.com issued in a different state has no legal effect,” he added.
But the rejection is likely to only stir tensions between police and the DNA profiling services over access to the data they store.
Ancestry.com’s Spatafore said it would “always advocate for our customers’ privacy and seek to narrow the scope of any compelled disclosure, or even eliminate it entirely.” It’s a sentiment shared by 23andMe, another DNA profiling company, which last year said that it had “successfully challenged” all of its seven legal demands, and as a result has “never turned over any customer data to law enforcement.”
The statements were in response to criticism that rival GEDmatch had controversially allowed law enforcement to search its database of more than a million records. The decision to allow in law enforcement was later revealed as crucial in helping to catch the notorious Golden State Killer, one of the most prolific murderers in U.S. history.
But the move was widely panned by privacy advocates for accepting a warrant to search its database without exhausting its legal options.
It’s not uncommon for companies to receive law enforcement demands for user data. Most tech giants, like Apple, Facebook, Google and Microsoft, publish transparency reports detailing the number of legal demands and orders they receive for user data each year or half-year.
Although both Ancestry.com and 23andMe provide transparency reports, detailing the amount of law enforcement demands for user data they receive, not all are as forthcoming. GEDmatch still does not publish its data demand figures, nor does MyHeritage, which said it “does not cooperate” with law enforcement. FamilyTreeDNA said it was “working” on publishing a transparency report.
But as police continue to demand data from DNA profiling and genealogy companies, they risk turning customers away — a lose-lose for both police and the companies.
Vera Eidelman, staff attorney with the ACLU’s Speech, Privacy, and Technology Project, said it would be “alarming” if law enforcement were able to get access to these databases containing millions of people’s information.
“Ancestry did the right thing in pushing back against the government request, and other companies should follow suit,” said Eidelman.


Amazon quietly publishes its latest transparency report

Just as Amazon was basking in the news of a massive earnings win, the tech giant quietly published — as it always does — its latest transparency report, revealing a slight dip in the number of government demands for user data.
It’s a rarely seen decline in the number of demands received by a tech company, in a year when almost every other tech giant — including Facebook, Google, Microsoft and Twitter — saw an increase in the number of demands they received. Only Apple reported a decline in the number of demands it received.
Amazon said it received 1,841 subpoenas, 440 search warrants and 114 other court orders for user data — such as data from its Echo and Fire devices — during the last six months of 2019.
That’s about a 4% decline on the first six months of the year.
The company’s cloud unit, Amazon Web Services, also saw a decline in the number of demands for data stored by customers, down by about 10%.
Amazon also said it received between 0 and 249 national security requests for both its consumer and cloud services (rules set out by the Justice Department only allow tech and telecom companies to report in ranges).
At the time of writing, Amazon has not yet updated its law enforcement requests page to list the latest report.
Amazon’s biannual transparency report is one of the lightest reads of any company’s figures across the tech industry. We previously reported on how Amazon’s transparency reports have purposefully become more vague over the years rather than clearer — bucking the industry trend. At just three pages, the report spends most of its length explaining how the company responds to each kind of legal demand rather than expanding on the numbers themselves.
The company’s Ring smart camera division, which has faced heavy criticism for its poor security practices and its cozy relationship with law enforcement, still hasn’t released its own data demand figures.


Ring’s new security ‘control center’ isn’t nearly enough

On the same day that a Mississippi family is suing Amazon-owned smart camera maker Ring for not doing enough to prevent hackers from spying on their kids, the company has rolled out its previously announced “control center,” which it hopes will make you forget about its verifiably “awful” security practices.
In a blog post out Thursday, Ring said the new “control center” “empowers” customers to manage their security and privacy settings.
Ring users can check to see if they’ve enabled two-factor authentication, add and remove users from the account, see which third-party services can access their Ring cameras, and opt out of allowing police to access their video recordings without the user’s consent.
But dig deeper and Ring’s latest changes still do practically nothing to change some of its most basic, yet highly criticized security practices.
Questions were raised over these practices months ago after hackers were caught breaking into Ring cameras and remotely watching and speaking to small children. The hackers were using previously compromised email addresses and passwords — a technique known as credential stuffing — to break into the accounts. Some of those credentials, many of which were simple and easy to guess, were later published on the dark web.
Yet, Ring still has not done anything to mitigate this most basic security problem.
TechCrunch ran several passwords through Ring’s sign-up page and found we could enter any easy-to-guess password, like “12345678” and “password” — which have consistently ranked among the most common passwords for several years running.
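For context, this is the kind of basic check a sign-up flow could run to refuse passwords like those — a minimal sketch, not Ring’s actual code; the tiny COMMON_PASSWORDS set stands in for a real breached/common-password corpus, and the length threshold is an arbitrary assumption.

```python
# Minimal sketch of a common-password check at sign-up (illustrative only).
COMMON_PASSWORDS = {"12345678", "password", "123456", "qwerty", "letmein"}

def password_is_acceptable(candidate: str, min_length: int = 10) -> bool:
    """Reject passwords that are too short or appear on a known-common list."""
    if len(candidate) < min_length:
        return False
    if candidate.lower() in COMMON_PASSWORDS:
        return False
    return True

print(password_is_acceptable("12345678"))               # False: short and common
print(password_is_acceptable("correct-horse-battery"))  # True: long and not on the list
```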
To combat the problem, Ring said at the time users should enable two-factor authentication, a security feature that adds an additional check to prevent account breaches like password spraying, where hackers use a list of common passwords in an effort to brute force their way into accounts.
But Ring still uses a weak form of two-factor, sending you a code by text message. Text messages are not secure and can be compromised through interception and SIM swapping attacks. Even NIST, the government’s technology standards body, has deprecated support for text message-based two-factor. Experts say although text-based two-factor is better than not using it at all, it’s far less secure than app-based two-factor, where codes are delivered over an encrypted connection to an app on your phone.
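As an illustration of what app-based two-factor looks like in practice — a generic sketch using the open-source pyotp library, not a description of Ring’s systems — the server and the authenticator app share a secret once at enrolment, and thereafter both derive short-lived codes from it locally, so nothing travels over SMS.

```python
import pyotp  # pip install pyotp

# Enrolment: the service generates a per-user secret and shows it to the user's
# authenticator app, typically as a QR code of this provisioning URI.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCam"))

# Login: the app computes the current 6-digit code from the shared secret and
# the clock; the server verifies it the same way. No code is sent over SMS,
# so it can't be intercepted in transit or redirected by a SIM swap.
code_from_app = totp.now()
print(totp.verify(code_from_app))  # True within the current 30-second window
```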
Ring said it’ll make its two-factor authentication feature mandatory later this year, but has yet to say if it will ever support app-based two-factor authentication in the future.
The smart camera maker has also faced criticism for its cozy relationship with law enforcement, which has lawmakers concerned and demanding answers.
Ring allows police access to users’ videos without a subpoena or a warrant. (Unlike its parent company Amazon, Ring still does not publish the number of times police demand access to customer videos, with or without a legal request.)
Ring now says its control center will allow users to decide if police can access their videos or not.
But don’t be fooled by Ring’s promise that police “cannot see your video recordings unless you explicitly choose to share them by responding to a specific video request.” Police can still get a search warrant or a court order to obtain your videos, which isn’t particularly difficult if police can show there’s reasonable grounds that it may contain evidence — such as video footage — of a crime.
There’s nothing stopping Ring, or any other smart home maker, from offering a zero-knowledge approach to customer data, where only the user has the encryption keys to access their data. Ring cutting itself (and everyone else) out of the loop would be the only meaningful thing it could do if it truly cares about its users’ security and privacy. The company would have to decide if the trade-off is worth it — true privacy for its users versus losing out on access to user data, which would effectively kill its ongoing cooperation with police departments.
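Here is a minimal sketch of what such a zero-knowledge, client-side encryption approach could look like, using the widely available Python cryptography library — purely illustrative, not Ring’s design: the key is derived on the user’s device from a passphrase the vendor never sees, so the vendor stores only ciphertext it cannot read.

```python
import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_passphrase(passphrase: bytes, salt: bytes) -> bytes:
    """Derive a Fernet key from a user passphrase; only the user knows the passphrase."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase))

# On the user's device only: derive the key and encrypt the clip before upload.
salt = os.urandom(16)  # random salt, stored alongside the ciphertext
key = key_from_passphrase(b"correct horse battery staple", salt)
ciphertext = Fernet(key).encrypt(b"<video clip bytes>")

# The vendor stores (salt, ciphertext) but, without the passphrase, has nothing
# readable to hand over — to employees, hackers or police — even if compelled.
restored = Fernet(key_from_passphrase(b"correct horse battery staple", salt)).decrypt(ciphertext)
assert restored == b"<video clip bytes>"
```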
Ring says that security and privacy has “always been our top priority.” But if it’s not willing to work on the basics, its words are little more than empty promises.


London’s Met Police switches on live facial recognition, flying in face of human rights concerns

While EU lawmakers are mulling a temporary ban on the use of facial recognition to safeguard individuals’ rights, as part of a risk-focused plan to regulate AI, London’s Met Police has today forged ahead with deploying the privacy-hostile technology — flipping the switch on operational use of live facial recognition in the UK capital.
The deployment comes after a multi-year period of trials by the Met and police in South Wales.
The Met says its use of the controversial technology will be targeted to “specific locations… where intelligence suggests we are most likely to locate serious offenders”.
“Each deployment will have a bespoke ‘watch list’, made up of images of wanted individuals, predominantly those wanted for serious and violent offences,” it adds.
It also claims cameras will be “clearly signposted”, adding that officers “deployed to the operation will hand out leaflets about the activity”.
“At a deployment, cameras will be focused on a small, targeted area to scan passers-by,” it writes. “The technology, which is a standalone system, is not linked to any other imaging system, such as CCTV, body worn video or ANPR.”
The biometric system is being provided to the Met by Japanese IT and electronics giant, NEC.
In a press statement, assistant commissioner Nick Ephgrave claimed the force is taking a balanced approach to using the controversial tech.
“We all want to live and work in a city which is safe: the public rightly expect us to use widely available technology to stop criminals. Equally I have to be sure that we have the right safeguards and transparency in place to ensure that we protect people’s privacy and human rights. I believe our careful and considered deployment of live facial recognition strikes that balance,” he said.
London has seen a rise in violent crime in recent years, with murder rates hitting a ten-year peak last year.
The surge in violent crime has been linked to cuts to policing services — although the new Conservative government has pledged to reverse cuts enacted by earlier Tory administrations.
The Met says its hope for the AI-powered tech is that it will help it tackle serious crime, including serious violence, gun and knife crime, child sexual exploitation, and “help protect the vulnerable”.
However its phrasing is not a little ironic, given that facial recognition systems can be prone to racial bias, for example, owing to factors such as bias in data-sets used to train AI algorithms.
So in fact there’s a risk that police-use of facial recognition could further harm vulnerable groups who already face a disproportionate risk of inequality and discrimination.
Yet the Met’s PR doesn’t mention the risk of the AI tech automating bias.
Instead it takes pains to couch the technology as an “additional tool” to assist its officers.
“This is not a case of technology taking over from traditional policing; this is a system which simply gives police officers a ‘prompt’, suggesting “that person over there may be the person you’re looking for”, it is always the decision of an officer whether or not to engage with someone,” it adds.
While the use of a new tech tool may start with small deployments, as is being touted here, the history of software development underlines how readily the potential to scale gets baked in.
A ‘targeted’ small-scale launch also prepares the ground for London’s police force to push for wider public acceptance of a highly controversial and rights-hostile technology via a gradual building out process. Aka surveillance creep.
On the flip side, the text of the draft of an EU proposal for regulating AI which leaked last week — floating the idea of a temporary ban on facial recognition in public places — noted that a ban would “safeguard the rights of individuals”. Although it’s not yet clear whether the Commission will favor such a blanket measure, even temporarily.
UK rights groups have reacted with alarm to the Met’s decision to ignore concerns about facial recognition.
Liberty accused the force of ignoring the conclusion of a report it commissioned during an earlier trial of the tech — which it says concluded the Met had failed to consider human rights impacts.
It also suggested such use would not meet key legal requirements.
“Human rights law requires that any interference with individuals’ rights be in accordance with the law, pursue a legitimate aim, and be ‘necessary in a democratic society’,” the report notes, suggesting the Met’s earlier trials of facial recognition tech “would be held unlawful if challenged before the courts”.

When the Met trialled #FacialRecognition tech, it commissioned an independent review of its use.
Its conclusions:
The Met failed to consider the human rights impact of the tech
Its use was unlikely to pass the key legal test of being “necessary in a democratic society”
— Liberty (@libertyhq) January 24, 2020

A petition set up by Liberty to demand a stop to facial recognition in public places has passed 21,000 signatures.
Discussing the legal framework around facial recognition and law enforcement last week, Dr Michael Veale, a lecturer in digital rights and regulation at UCL, told us that in his view the EU’s data protection framework, GDPR, forbids facial recognition by private companies “in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate”.
A UK man who challenged a Welsh police force’s trial of facial recognition has a pending appeal after losing the first round of a human rights challenge. Although in that case the challenge pertains to police use of the tech — rather than, as in the Met’s case, a private company (NEC) providing the service to the police.

EU lawmakers are eyeing risk-based rules for AI, per leaked white paper

The European Commission is considering a temporary ban on the use of facial recognition technology, according to a draft proposal for regulating artificial intelligence obtained by Euroactiv.
Creating rules to ensure AI is ‘trustworthy and human’ has been an early flagship policy promise of the new Commission, led by president Ursula von der Leyen.
But the leaked proposal suggests the EU’s executive body is in fact leaning towards tweaks of existing rules and sector/app specific risk-assessments and requirements, rather than anything as firm as blanket sectoral requirements or bans.
The leaked Commission white paper floats the idea of a three-to-five-year period in which the use of facial recognition technology could be prohibited in public places — to give EU lawmakers time to devise ways to assess and manage risks around the use of the technology, such as to people’s privacy rights or the risk of discriminatory impacts from biased algorithms.
“This would safeguard the rights of individuals, in particular against any possible abuse of the technology,” the Commission writes, adding that: “It would be necessary to foresee some exceptions, notably for activities in the context of research and development and for security purposes.”
However the text raises immediate concerns about imposing even a time-limited ban — which is described as “a far-reaching measure that might hamper the development and uptake of this technology” — and the Commission goes on to state that its preference “at this stage” is to rely on existing EU data protection rules, aka the General Data Protection Regulation (GDPR).
The white paper contains a number of options the Commission is still considering for regulating the use of artificial intelligence more generally.
These range from voluntary labelling; to imposing sectorial requirements for the public sector (including on the use of facial recognition tech); to mandatory risk-based requirements for “high-risk” applications (such as within risky sectors like healthcare, transport, policing and the judiciary, as well as for applications which can “produce legal effects for the individual or the legal entity or pose risk of injury, death or significant material damage”); to targeted amendments to existing EU product safety and liability legislation.
The proposal also emphasizes the need for an oversight governance regime to ensure rules are followed — though the Commission suggests leaving it open to Member States to choose whether to rely on existing governance bodies for this task or create new ones dedicated to regulating AI.
Per the draft white paper, the Commission says its preference for regulating AI is option 3 combined with options 4 & 5: aka mandatory risk-based requirements on developers (of whatever sub-set of AI apps are deemed “high-risk”) that could result in some “mandatory criteria”, combined with relevant tweaks to existing product safety and liability legislation, and an overarching governance framework.
Hence it appears to be leaning towards a relatively light-touch approach, focused on “building on existing EU legislation” and creating app-specific rules for a sub-set of “high-risk” AI apps/uses — and which likely won’t stretch to even a temporary ban on facial recognition technology.
Much of the white paper is also taken up with discussion of strategies for “supporting the development and uptake of AI” and “facilitating access to data”.
“This risk-based approach would focus on areas where the public is at risk or an important legal interest is at stake,” the Commission writes. “This strictly targeted approach would not add any new additional administrative burden on applications that are deemed ‘low-risk’.”
EU commissioner Thierry Breton, who oversees the internal market portfolio, expressed resistance to creating rules for artificial intelligence last year — telling the EU parliament then that he “won’t be the voice of regulating AI“.
For “low-risk” AI apps, the white paper notes that provisions in the GDPR which give individuals the right to receive information about automated processing and profiling, and set a requirement to carry out a data protection impact assessment, would apply.
Albeit the regulation only defines limited rights and restrictions over automated processing — in instances where there’s a legal or similarly significant effect on the people involved. So it’s not clear how extensively it would in fact apply to “low-risk” apps.
If it’s the Commission’s intention to also rely on GDPR to regulate higher risk stuff — such as, for example, police forces’ use of facial recognition tech — instead of creating a more explicit sectoral framework to restrict their use of highly privacy-hostile AI technologies — it could exacerbate an already confusing legislative picture where law enforcement is concerned, according to Dr Michael Veale, a lecturer in digital rights and regulation at UCL.
“The situation is extremely unclear in the area of law enforcement, and particularly the use of public private partnerships in law enforcement. I would argue the GDPR in practice forbids facial recognition by private companies in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate. However, the merchants of doubt at facial recognition firms wish to sow heavy uncertainty into that area of law to legitimise their businesses,” he told TechCrunch.
“As a result, extra clarity would be extremely welcome,” Veale added. “The issue isn’t restricted to facial recognition however: Any type of biometric monitoring, such as voice or gait recognition, should be covered by any ban, because in practice they have the same effect on individuals.”
An advisory body set up to advise the Commission on AI policy set out a number of recommendations in a report last year — including suggesting a ban on the use of AI for mass surveillance and social credit scoring systems of citizens.
But its recommendations were criticized by privacy and rights experts for falling short by failing to grasp wider societal power imbalances and structural inequality issues which AI risks exacerbating — including by supercharging existing rights-eroding business models.
In a paper last year Veale dubbed the advisory body’s work a “missed opportunity” — writing that the group “largely ignore infrastructure and power, which should be one of, if not the most, central concern around the regulation and governance of data, optimisation and ‘artificial intelligence’ in Europe going forwards”.

Amazon fires employees for leaking customer email addresses and phone numbers

Amazon has fired an employee after they shared customer email addresses and phone numbers with a third party “in violation of our policies.”
The email to customers, sent Friday afternoon and seen by TechCrunch, said the employee was “terminated” for sharing the data, and that the company is supporting law enforcement in its prosecution.
“No other information related to your account was shared. This is not a result of anything you have done, and there is no need for you to take any action,” the email to customers read.
Amazon confirmed the incident in an email to TechCrunch. A spokesperson said a number of employees were fired but only one employee shared the customer data with the third party. But little else is known about the employees, when the information was shared and with whom, and how many customers are affected.
It’s not the first time this has happened. Amazon was just as vague about a similar breach of email addresses last year, on which it also declined to comment further.
In a separate incident, Amazon said this week that it fired four employees at Ring, one of the retail giant’s smart camera and doorbell subsidiaries. Ring said it fired the employees for improperly viewing video footage from customer cameras.
Update: headline clarified to reflect that an unknown number of employees were fired but only one shared customer data with the third party.

Amazon admits it exposed customer email addresses, but refuses to give details

 

Plenty of Fish app was leaking users’ hidden names and postal codes

Dating app Plenty of Fish has pushed out a fix for its apps after a security researcher found they were leaking information that users had set to “private” on their profiles.
Plenty of Fish’s servers were silently returning users’ first names and ZIP or postal codes to the app, according to The App Analyst, a mobile expert who writes about his analyses of popular apps on his eponymous blog.
The leaked data was not immediately visible to app users, and it was scrambled to make it difficult to read. But using freely available tools designed to analyze network traffic, the researcher found it was possible to reveal the information about users as their profiles appeared on his phone.
In one case, the App Analyst found enough information to identify where a particular user lived, he told TechCrunch.
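To illustrate the general class of bug — a server returning more profile fields than the app actually displays — here is a minimal sketch of how an analyst might flag over-broad JSON responses using mitmproxy, a freely available traffic-analysis tool. The endpoint path and field names below are hypothetical and are not taken from The App Analyst’s write-up, and the sketch assumes the test device’s traffic can be decrypted (for example by installing the proxy’s certificate on a phone you control).

# Hypothetical mitmproxy addon; endpoint path and field names are invented
# for illustration and are not taken from the researcher's report.
import json
from mitmproxy import http

SENSITIVE_FIELDS = {"first_name", "zip_code", "postal_code"}

def response(flow: http.HTTPFlow) -> None:
    # Only inspect JSON responses from a (hypothetical) profile endpoint.
    if "/api/profile" not in flow.request.pretty_url:
        return
    if "application/json" not in flow.response.headers.get("content-type", ""):
        return
    try:
        body = json.loads(flow.response.get_text() or "")
    except ValueError:
        return
    if not isinstance(body, dict):
        return
    leaked = SENSITIVE_FIELDS & set(body.keys())
    if leaked:
        print(f"[!] {flow.request.pretty_url} returned fields hidden in the UI: {sorted(leaked)}")

Run with something like “mitmdump -s leak_check.py” while a test device’s traffic is routed through the proxy; any response carrying the watched fields gets flagged, which is broadly the kind of inspection such traffic-analysis tools make possible.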
Plenty of Fish has more than 150 million registered users, according to its parent company IAC. In recent years, law enforcement have warned about the threats some people face on dating apps, like Plenty of Fish. Reports suggest sex attacks involving dating apps have risen in the past five years. And those in the LGBTQ+ community on these apps also face safety threats from both individuals and governments, prompting apps like Tinder to proactively warn their LGBTQ+ users when they visit regions and states with restrictive and oppressive laws against same-sex partners.
A fix is said to have rolled out for the information leakage bug earlier this month. A spokesperson for Plenty of Fish did not immediately comment.
Earlier this year, the App Analyst found a number of third-party tools were allowing app developers to record the device’s screen while users engaged with their apps, prompting a crackdown by Apple.

Many popular iPhone apps secretly record your screen without asking

Many smart home device makers still won’t say if they give your data to the government

A year ago, we asked some of the most prominent smart home device makers if they have given customer data to governments. The results were mixed.
The big three smart home device makers — Amazon, Facebook and Google (which includes Nest) — all disclosed in their transparency reports if and when governments demand customer data. Apple said it didn’t need a report, as the data it collects was anonymized.
As for the rest, none had published their government data-demand figures.
In the year that’s passed, the smart home market has grown rapidly, but the remaining device makers have made little to no progress on disclosing their figures. And in some cases, things got worse.
Smart home and other internet-connected devices may be convenient and accessible, but they collect vast amounts of information on you and your home. Smart locks know when someone enters your house, and smart doorbells can capture their face. Smart TVs know which programs you watch and some smart speakers know what you’re interested in. Many smart devices collect data when they’re not in use — and some collect data points you may not even think about, like your wireless network information, for example — and send them back to the manufacturers, ostensibly to make the gadgets — and your home — smarter.
Because the data is stored in the cloud by the device manufacturers, law enforcement and government agencies can demand those companies turn over that data to solve crimes.
But as the amount of data collection increases, companies are not being transparent about the data demands they receive. All we have are anecdotal reports — and there are plenty: Police obtained Amazon Echo data to help solve a murder; Fitbit turned over data that was used to charge a man with murder; Samsung helped catch a sex predator who watched child abuse imagery; Nest gave up surveillance footage to help jail gang members; and recent reporting on Amazon-owned Ring shows close links between the smart home device maker and law enforcement.
Here’s what we found.
Smart lock and doorbell maker August gave the exact same statement as last year, that it “does not currently have a transparency report and we have never received any National Security Letters or orders for user content or non-content information under the Foreign Intelligence Surveillance Act (FISA).” But August spokesperson Stephanie Ng would not comment on the number of non-national security requests — subpoenas, warrants and court orders — that the company has received, only that it complies with “all laws” when it receives a legal demand.
Roomba maker iRobot said, as it did last year, that it has “not received” any government demands for data. “iRobot does not plan to issue a transparency report at this time,” but it may consider publishing a report “should iRobot receive a government request for customer data.”
Arlo, a former Netgear smart home division that spun out in 2018, did not respond to a request for comment. Netgear, which still has some smart home technology, said it does “not publicly disclose a transparency report.”
Amazon-owned Ring, whose cooperation with law enforcement has drawn ire from lawmakers and faced questions over its ability to protect users’ privacy, said last year it planned to release a transparency report in the future, but did not say when. This time around, Ring spokesperson Yassi Shahmiri would not comment and stopped responding to repeated follow-up emails.
Honeywell spokesperson Megan McGovern would not comment and referred questions to Resideo, the smart home division Honeywell spun out a year ago. Resideo’s Bruce Anderson did not comment.
And just as last year, Samsung, a maker of smart devices and internet-connected televisions and other appliances, also did not respond to a request for comment.
On the whole, the companies’ responses were largely the same as last year.
But smart switch and sensor maker Ecobee, which last year promised to publish a transparency report “at the end of 2018,” did not follow through. When we asked why, Ecobee spokesperson Kristen Johnson did not respond to repeated requests for comment.
Based on the best available data, August, iRobot, Ring and the rest of the smart home device makers have hundreds of millions of users and customers around the world, with the potential to give governments vast troves of data — and users and customers are none the wiser.
Transparency reports may not be perfect, and some are less transparent than others. But if big companies — even after bruising headlines and claims of co-operation with surveillance states — disclose their figures, there’s little excuse for the smaller companies.
This time around, some companies fared better than their rivals. But anyone mindful of their privacy can — and should — expect better.

Now even the FBI is warning about your smart TV’s security

ACLU sues Homeland Security over ‘stingray’ cell phone surveillance

One of the largest civil liberties groups in the U.S. is suing two Homeland Security agencies for failing to turn over documents it requested as part of a public records request about a controversial cell phone surveillance technology.
The American Civil Liberties Union filed suit against Customs & Border Protection (CBP) and Immigration & Customs Enforcement (ICE) in federal court on Wednesday after the organization claimed the agencies “failed to produce records” relating to cell site simulators — or “stingrays.”
Stingrays impersonate cell towers to trick cell phones into connecting to them, allowing their operators to collect unique identifiers from those devices and determine their location. The devices are used for surveillance, but also ensnare all other devices in their range. It’s believed newer, more advanced devices can intercept all the phone calls and text messages in range.
A government oversight report in 2016 said both CBP and ICE collectively spent $13 million on buying dozens of stingrays, which the agencies used to “locate people for arrest and prosecution,” the ACLU said.
But little else is known about stingray technology because the cell phone snooping technology is sold exclusively to police departments and federal agencies under strict non-disclosure agreements with the device manufacturer.
The ACLU filed a Freedom of Information Act request in 2017 to learn more about the technology and how it’s used, but both agencies failed to turn over any documents, it said.
The civil liberties organization said there is evidence to suggest that records exist, but has “exhausted all administrative remedies” to obtain the documents. Now it wants the courts to compel the agencies to turn over the records, “not only to shine a light on the government’s use of powerful surveillance technology in the immigration context, but also to assess whether its use of this technology complies with constitutional and legal requirements and is subject to appropriate oversight and control,” the filing said.
The group wants the agencies’ training materials and guidance documents, and records to show where and when stingrays were deployed across the United States.
CBP spokesperson Nathan Peeters said the agency does not comment on pending litigation as a matter of policy. A spokesperson for ICE did not comment.

US border officials are increasingly denying entry to travelers over others’ social media

DHS wants to expand airport face recognition scans to include US citizens

Homeland Security wants to expand facial recognition checks for travelers arriving and departing the U.S. to also include U.S. citizens, who had previously been exempt from the mandatory checks.
In a filing, the department has proposed that all travelers, and not just foreign nationals or visitors, will have to complete a facial recognition check not only before they are allowed to enter the U.S., but also before they leave the country.
Facial recognition for departing flights has increased in recent years as part of Homeland Security’s efforts to catch visitors and travelers who overstay their visas. The department, whose responsibility is to protect the border and control immigration, has a deadline of 2021 to roll out facial recognition scanners to the largest 20 airports in the United States, despite facing a rash of technical challenges.
But although there may not always be a clear way to opt-out of facial recognition at the airport, U.S. citizens and lawful permanent residents — also known as green card holders — have been exempt from these checks, the existing rules say.
Now, the proposed rule change to include citizens has drawn ire from one of the largest civil liberties groups in the country.
“Time and again, the government told the public and members of Congress that U.S. citizens would not be required to submit to this intrusive surveillance technology as a condition of traveling,” said Jay Stanley, a senior policy analyst at the American Civil Liberties Union.
“This new notice suggests that the government is reneging on what was already an insufficient promise,” he said.
“Travelers, including U.S. citizens, should not have to submit to invasive biometric scans simply as a condition of exercising their constitutional right to travel. The government’s insistence on hurtling forward with a large-scale deployment of this powerful surveillance technology raises profound privacy concerns,” he said.
Citing a data breach of close to 100,000 license plate and traveler images in June as well as concerns about a lack of sufficient safeguards to protect the data, Stanley said the government “cannot be trusted” with this technology and that lawmakers should intervene.
A spokesperson for Homeland Security did not immediately comment when reached.

CBP says traveler photos and license plate images stolen in data breach

Another US court says police cannot force suspects to turn over their passwords

The highest court in Pennsylvania has ruled that the state’s law enforcement cannot force suspects to turn over the passwords that would unlock their devices.
The state’s Supreme Court said compelling a password from a suspect is a violation of the Fifth Amendment, the constitutional protection against self-incrimination.
It’s not a surprising ruling, given that other state and federal courts have almost always come to the same conclusion. The Fifth Amendment grants anyone in the U.S. the right to remain silent, which includes the right to not turn over information that could incriminate them in a crime. These days, those protections extend to the passcodes that only a device owner knows.
But the ruling is not expected to affect the ability by police to force suspects to use their biometrics — like their face or fingerprints — to unlock their phone or computer.
Because your passcode is stored in your head and your biometrics are not, prosecutors have long argued that police can compel a suspect into unlocking a device with their biometrics, which they say are not constitutionally protected. The court also did not address biometrics. In a footnote of the ruling, the court said it “need not address” the issue, blaming the U.S. Supreme Court for creating “the dichotomy between physical and mental communication.”
Peter Goldberger, president of the ACLU of Pennsylvania, who presented the arguments before the court, said it was “fundamental” that suspects have the right “to avoid self-incrimination.”
Despite the spate of rulings in recent years, law enforcement have still tried to find their way around compelling passwords from suspects. The now-infamous Apple-FBI case saw the federal agency try to force the tech giant to rewrite its iPhone software in an effort to beat the password on the handset of the terrorist Syed Rizwan Farook, who with his wife killed 14 people in his San Bernardino workplace in 2015. Apple said the FBI’s use of the 200-year-old All Writs Act would be “unduly burdensome” by putting potentially every other iPhone at risk if the rewritten software leaked or was stolen.
The FBI eventually dropped the case without Apple’s help after the agency paid hackers to break into the phone.
Brett Max Kaufman, a senior staff attorney at the ACLU’s Center for Democracy, said the Pennsylvania ruling sends a message to other courts to follow in its footsteps.
“The court rightly rejects the government’s effort to create a giant, digital-age loophole undermining our time-tested Fifth Amendment right against self-incrimination,” he said. “The government has never been permitted to force a person to assist in their own prosecution, and the courts should not start permitting it to do so now simply because encrypted passwords have replaced the combination lock.”
“We applaud the court’s decision and look forward to more courts to follow in the many pending cases to be decided next,” he added.

3D-printed heads let hackers – and cops – unlock your phone

New 5G flaws can track phone locations and spoof emergency alerts

5G is faster and more secure than 4G. But new research shows it also has vulnerabilities that could put phone users at risk.
Security researchers at Purdue University and the University of Iowa have found close to a dozen vulnerabilities, which they say can be used to track a victim’s real-time location, spoof emergency alerts that can trigger panic or silently disconnect a 5G-connected phone from the network altogether.
5G is said to be more secure than its 4G predecessor, able to withstand exploits used to target users of older cellular network protocols like 2G and 3G, such as cell site simulators — known as “stingrays.” But the researchers’ findings confirm that weaknesses undermine the newer security and privacy protections in 5G.
Worse, the researchers said some of the new attacks also could be exploited on existing 4G networks.
The researchers expanded on their previous findings to build a new tool, dubbed 5GReasoner, which was used to find 11 new 5G vulnerabilities. By creating a malicious radio base station, an attacker can carry out several attacks against a target’s connected phone used for both surveillance and disruption.
In one attack, the researchers said they were able to obtain both old and new temporary network identifiers of a victim’s phone, allowing them to discover the paging occasion, which can be used to track the phone’s location — or even hijack the paging channel to broadcast fake emergency alerts. This could lead to “artificial chaos,” the researchers said, similar to when a mistakenly sent emergency alert claimed Hawaii was about to be hit by a ballistic missile amid heightened nuclear tensions between the U.S. and North Korea. (A similar vulnerability was found in the 4G protocol by University of Colorado Boulder researchers in June.)
Another attack could be used to create a “prolonged” denial-of-service condition against a target’s phone from the cellular network.
In some cases, the flaws could be used to downgrade a cellular connection to a less-secure standard, which makes it possible for law enforcement — and capable hackers — to launch surveillance attacks against their targets using specialist “stingray” equipment.
All of the new attacks can be exploited by anyone with practical knowledge of 4G and 5G networks and a low-cost software-defined radio, said Syed Rafiul Hussain, one of the co-authors of the new paper.
Given the nature of the vulnerabilities, the researchers said they have no plans to release their proof-of-concept exploitation code publicly. However, the researchers did notify the GSM Association (GSMA), a trade body that represents cell networks worldwide, of their findings.
Although the researchers were recognized by GSMA’s mobile security “hall of fame,” spokesperson Claire Cranton said the vulnerabilities were “judged as nil or low-impact in practice.” The GSMA did not say if the vulnerabilities would be fixed — or give a timeline for any fixes. But the spokesperson said the researchers’ findings “may lead to clarifications” to the standard where it’s written ambiguously.
Hussain told TechCrunch that while some of the flaws can easily be fixed in the existing design, the remaining vulnerabilities call for “a reasonable amount of change in the protocol.”
It’s the second round of research from the academics released in as many weeks. Last week, the researchers found several security flaws in the baseband protocol of popular Android models — including Huawei’s Nexus 6P and Samsung’s Galaxy S8+ — making them vulnerable to snooping attacks on their owners.

Popular Android phones can be tricked into snooping on their owners

DNA testing startup Veritas Genetics confirms data breach

Veritas Genetics, a DNA testing startup, has said a data breach resulted in the theft of some customer information.
The Danvers, MA-based company said its customer-facing portal had “recently” been breached but did not say when. Although the portal did not contain test results or medical information, the company declined to say what information had been stolen — only that a handful of customers were affected.
The company has not issued a public statement, nor has it acknowledged the breach on its website. A spokesperson for Veritas did not respond to a request for comment.
Bloomberg first reported the news.
Veritas, whose competitors include 23andMe, Ancestry, and MyHeritage, says it can analyze and understand a human genome using a smaller portion of an individual’s DNA, allowing customers to better understand what health risks they may face in later life or pass on to their children.
Although the stolen data did not include personal health information, it’s likely to further fuel concerns that health startups, particularly companies dealing with sensitive DNA and genome information, can’t protect their users’ data.
Privacy remains an emerging concern in genetics testing after law enforcement have served legal demands against DNA collection and genetics testing companies to help identify suspects in criminal cases. Just this week, it was reported that a “game changer” warrant obtained in Florida allowed one police department to search the full database of GEDmatch, a DNA testing company, which was used by police last year to help catch the notorious Golden State Killer.
Some 26 million consumers have used an at-home genetics testing kit.

DNA analysis site that led to the Golden State Killer issues a privacy warning to users

Amazon Ring doorbells exposed home Wi-Fi passwords to hackers

Security researchers have discovered a vulnerability in Ring doorbells that exposed the password of the Wi-Fi network they were connected to.
Bitdefender said the Amazon-owned doorbell was sending its owner’s Wi-Fi password in cleartext over the internet, allowing for nearby hackers to intercept the Wi-Fi password and gain access to the network to launch larger attacks or conduct surveillance.
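For context on why cleartext transmission matters, here is a minimal sketch — not Ring’s actual provisioning code, with host names and endpoint paths invented for illustration — contrasting a credential hand-off sent over plain HTTP with one sent over TLS.

# Minimal sketch (hypothetical hosts and paths) contrasting cleartext and
# TLS-protected transmission of a Wi-Fi credential during device setup.
import requests

wifi_credentials = {"ssid": "HomeNetwork", "password": "correct horse battery staple"}

# Plain HTTP: the password travels as readable text, so anyone positioned to
# observe the traffic (for example on the local network during setup) can read it.
requests.post("http://device.local/provision", json=wifi_credentials, timeout=10)

# HTTPS: the same payload is encrypted in transit and the server's certificate
# is verified, so a passive eavesdropper sees only ciphertext.
requests.post("https://api.device.example/provision", json=wifi_credentials, timeout=10)

The design point is simply that credentials should never leave a device or app unencrypted; which is why, per Bitdefender’s finding, the cleartext hand-off was treated as a vulnerability rather than a cosmetic issue.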
Amazon fixed the vulnerability in all Ring devices in September, but the vulnerability was only disclosed today.
It’s another example of smart home technology suffering from security issues. As much as smart home devices are designed to make our lives easier and homes more secure, researchers keep finding vulnerabilities that allow them to get access to the very thing they’re trying to protect.
Earlier this year, flaws in a popular smart home hub allowed researchers to break into a person’s home by triggering a smart lock to unbolt the door.
Amazon has faced intense scrutiny in recent months for Ring’s work with law enforcement. Several news outlets, including Gizmodo, have detailed the close relationship Ring has with police departments, including their Ring-related messaging.
It was reported this week that Ring had bragged on Instagram about tracking millions of trick-or-treaters this Halloween.

Security flaws in a popular smart home hub let hackers unlock front doors

Disinformation ‘works better than censorship,’ warns internet freedom report

A rise in social media surveillance, warrantless searches of travelers’ devices at the border, and the continued spread of disinformation are among the reasons why the U.S. has declined in internet freedom rankings, according to a leading non-profit watchdog.
Although Freedom House said that the U.S. enjoys some of the greatest internet freedoms in the world, its placement in the worldwide rankings declined for the third year in a row. Last year’s single-point drop was blamed on the repeal of net neutrality.
Iceland and Estonia remained at the top of the charts, according to the rankings, with China and Iran ranking with the least free internet.
The report said that digital platforms, including social media, have emerged as the “new battleground” for democracy, where governments would traditionally use censorship and site-blocking technologies. State and partisan actors have used disinformation and propaganda to distort facts and opinions during elections in dozens of countries over the past year, including the 2018 U.S. midterm elections and the 2019 European Parliament elections.
“Many governments are finding that on social media, propaganda works better than censorship,” said Mike Abramowitz, president of Freedom House.
Freedom House’s 2019 internet freedom rankings. (Image: Freedom House)
Disinformation — or “fake news” as described by some — has become a major headache for both governments and private industry. As the spread of deliberately misleading and false information has become more prevalent, lawmakers have threatened to step in to legislate against the problem.
But as some governments — including the U.S. — have tried to stop the spread of disinformation, Freedom House accused some global leaders — including the U.S. — of “co-opting” social media platforms for their own benefit. Both the U.S. and China are among the 40 countries that have expanded their monitoring of social media platforms, the report said.
“Law enforcement and immigration agencies expanded their surveillance of the public, eschewing oversight, transparency, and accountability mechanisms that might restrain their actions,” the report said.
The encroachment on personal privacy, such as the searching of travelers’ phones at the border without court-approved warrants, also contributed to the U.S.’ decline.
Several stories in the last year revealed how border authorities would deny entry to travelers for the content of social media posts made by other people, following changes to rules that compelled visa holders to disclose their social media handles at the border.
“The future of internet freedom rests on our ability to fix social media,” said Adrian Shahbaz, the non-profit’s research director for technology and democracy.
Given that most social media platforms are based in the U.S., Shahbaz said the U.S. has to be a “leader” in promoting transparency and accountability.
“This is the only way to stop the internet from becoming a Trojan horse for tyranny and oppression,” he said.

It was a really bad month for the internet

By tweeting from a SCIF, House lawmakers put national security at risk

If you thought storming into a highly secured government facility with your electronics but without permission was a smart idea, you’d be wrong.
But that didn’t stop Rep. Matt Gaetz and close to three-dozen of his Republican colleagues on Wednesday from doing exactly that.
Gaetz, a Republican congressman from Florida, proudly announced in his since-deleted tweets: “I led over 30 of my colleagues into the SCIF where Adam Schiff is holding secret impeachment depositions.” At the time, Gaetz was interrupting a hearing of the House Intelligence Committee, where its chairman, Schiff, was deposing a senior government official, Laura Cooper, as part of the Democrats’ impeachment inquiry into President Trump’s dealings with Ukraine.
One of the cardinal rules of entering one of these facilities is that you don’t bring in any electronics. And those lawmakers did exactly that.
No wonder Gaetz deleted his tweets.
A SCIF — a sensitive compartmented information facility — sounds fancy, but in reality it’s just a room designed to be secure for sharing sensitive and secret information at the higher echelons of government secrecy. There are plenty dotted around Washington DC for lawmakers and government officials to huddle in and chit-chat. There are SCIFs in the White House, Congress and every major government department in the capital — even the president’s Florida resort.
The idea is that you go into one of these rooms and it’s safe to discuss state secrets. These rooms vary in size and shape — some are enormous and able to seat an entire congressional committee. Some can be used on the road in the form of a large pop-up tent. But they all do the same job: they’re designed to a specification so that nobody can eavesdrop on what’s being said.
So when a gaggle of Republican lawmakers stormed one of the congressional SCIFs yesterday with their electronics in their pockets, understandably a lot of people were furious.
“No unauthorized electronic devices are allowed in the SCIF precisely because they could be used to exfiltrate decoded, highly classified data,” said Alan Woodward, a professor at the University of Surrey. “Standard operating procedure is to deposit anything like a mobile phone in some storage outside before entering,” he said.
“To force your way into a SCIF and use a mobile device inside is the height of recklessness: you must know that you are endangering material that could cause grave damage to the national interest,” he added.
The rebuke was quick.
“[The lawmakers] endangered our national security and demonstrated they care more about a political stunt than protecting intelligence information,” tweeted Mieke Eoyang, vice-president of Third Way, a national security think tank. “Foreign adversaries are constantly trying to figure out what goes on inside those rooms to figure out what the U.S. knows about them, to out U.S. high-level sources in their governments, to know what the U.S. government knows and use it against us,” she said.
“I cannot emphasize enough how serious this is,” she added.
And neither can the chairman of the House Homeland Security committee, Rep. Bennie Thompson (D-MS), who wrote to the Sergeant-at-Arms, the official in charge of the House’s law enforcement, expressing his anger at the infraction.
“Such action is a blatant breach of security,” said Thompson in the strongly worded letter, demanding that action be taken against the offending House members.
“Inadvertently bringing electronics into a SCIF is a very common security infraction, and it is taken incredibly seriously by agencies,” said Mark S. Zaid, an attorney specializing in national security cases. “It is drilled into people’s heads to never bring their cell phone into a protected area.”
The penalties for the House members could be swift, said Zaid, including pulling their future access to classified material.
“Agencies will not hesitate to revoke someone’s security clearances when multiple infractions occur,” he said. “When it comes to intentional infractions, the repercussions would be swift and severe, as it should be.”

Bloomberg’s spy chip story reveals the murky world of national security reporting

Europe’s recharged antitrust chief makes her five year pitch to be digital EVP

Europe’s competition commissioner Margrethe Vestager, set for a dual role in the next Commission, faced three hours of questions from members of four committees in the European Parliament this afternoon, as MEPs got their chance to interrogate her priorities for a broader legislative role that will shape pan-EU digital strategy for the next five years.
As we reported last month Vestager is headed for an expanded role in the incoming European Commission with president-elect Ursula von der Leyen picking her as an executive VP overseeing a new portfolio called ‘Europe fit for the digital age’.
She is also set to retain her current job as competition commissioner. And a question she faced more than once during today’s hearing in front of MEPs, who have a confirming vote on her appointment, was whether the combined portfolio wasn’t at risk of a conflict of interest.
Or whether she “recognized the tension between objective competition enforcement and industrial policy interests in your portfolio” as one MEP put it, before asking whether she would “build Chinese walls” within it to avoid crossing the streams of enforcement and policymaking.
Vestager responded by saying it was the first question she’d asked herself on being offered the role — before stating flatly that “the independence in law enforcement is non-negotiable”.
“It has always been true that the commissioner for competition has been part of the College. And every decision we take also in competition is a collegial decision,” she said. “What justifies that is of course that every decision is subject to not one but 2x legal scrutiny if need be. And the latest confirmation of this set up was two judgements in 2011 — where it was looked into whether this set up… is in accordance with our human rights and that has been found to be so. So the set up, as such, is as it should be.”
The commissioner and commissioner-designate responded capably to a wide range of questions reflecting the broad span of her new responsibilities — fielding questions on areas including digital taxation; platform power and regulation; a green new deal; AI and data ethics; digital skills and research; and small business regulation and funding, as well as queries around specific pieces of legislation (such as ePrivacy and Copyright Reform). 
Climate change and digital transformation were singled out in her opening remarks as two of Europe’s biggest challenges — ones she said will require both joint working and a focus on fairness.
“Europe is filled with highly skilled people, we have excellent infrastructure, fair and effective laws. Our Single Market gives European businesses the room to grow and innovate, and be the best in the world at what they do,” she said at the top of her pitch to MEPs. “So my pledge is not to make Europe more like China, or America. My pledge is to help make Europe more like herself. To build on our own strengths and values, so our society is both strong and fair. For all Europeans.”
Building trust in digital services
In her opening remarks Vestager said that if confirmed she will work to build trust in digital services — suggesting regulation on how companies collect, use and share data might be necessary to ensure people’s data is used for public good, rather than to concentrate market power.
It’s a suggestion that won’t have gone unnoticed in Silicon Valley.
“I will work on a Digital Services Act that includes upgrading our liability and safety rules for digital platforms, services and products,” she pledged. “We may also need to regulate the way that companies collect and use and share data – so it benefits the whole of our society.”
“As global competition gets tougher we’ll need to work harder to preserve a level playing field,” she also warned.
But asked directly during the hearing whether Europe’s response to platform power might include breaking up overbearing tech giants, Vestager signalled caution — saying such an intrusive intervention should only be used as a last resort, and that she has an obligation to try less drastic measures first. (It’s a position she’s set out before in public.)
“You’re right to say fines are not doing the trick and fines are not enough,” she said in response to one questioner on the topic. Another MEP complained fines on tech giants are essentially just seen as an “operating expense”.
Vestager went on to cite the Google AdSense antitrust case as an example of enforcement that hasn’t succeeded because it has failed to restore competition. “Some of the things that we will of course look into is do we need even stronger remedies for competition to pick up in these markets,” she said. “They stopped their behavior. That’s now two years ago. The market hasn’t picked up. So what do we do in those kind of cases? We have to consider remedies that are much more far reaching.
“Also before we reach for the very, very far reaching remedy to break up a company — we have that tool in our toolbox but obviously it is very far reaching… My obligation is to ensure that we do the least intrusive thing in order to make competition come back. And in that respect, obviously, I am willing to explore what do we need more, in competition cases, for competition to come back.”
Competition law enforcers in Europe will have to consider how to make sure rules enforce fair competition in what Vestager described as a “new phenomenon” of “competition for a market, not just in a market” — meaning that whoever wins the competition becomes “the de facto rule setter in this market”.
Regulating platforms on transparency and fairness is something European legislators have already agreed — earlier this year. Though that platform-to-business regulation has yet to come into force. “But it will also be a question for us as competition law enforcers,” Vestager told MEPs.
Making use of existing antitrust laws but doing so with greater speed and agility, rather than a drastic change of competition approach appeared to be her main message — with the commissioner noting she’d recently dusted off interim measures in an ongoing case against chipmaker Broadcom; the first time such an application has been made for 20 years.
“It’s a good reflection of the fact that we find it a very high priority to speed up what we do,” she said, adding: “There’s a limit as to how fast law enforcement can work, because we will never compromise on due process — on the other hand side we should be able to work as fast as possible.”
Her responses to MEPs on platform power favored greater regulation of digital markets (potentially including data), markets which have become dominated by data-gobbling platforms — rather than an abrupt smashing of the platforms themselves. So not an Elizabeth Warren ‘existential’ threat to big tech, then, but from a platform point of view Vestager’s preferred approach might just amount to death by a thousand legal cuts.
“One of course could consider what kind of tools do we need?” she opined, talking about market reorganization as a means of regulating platform power. “[There are] different ways of trying to re-organize a marketplace if the competition authority finds that the way it’s working is not beneficial for fair competition. And those are tools that can be considered in order to sort of re-organize before harm is done. Then you don’t punish because no infringement is found but you can give very direct almost orders… as to how a market should be organized.”
Artificial intelligence with a purpose
On artificial intelligence — which the current Commission has been working on developing a framework for ethical design and application — Vestager’s opening remarks contained a pledge to publish proposals for this framework — to “make sure artificial intelligence is used ethically, to support human decisions and not undermine them” — and to do so within her first 100 days in office.
That led one MEP to question whether it wasn’t too ambitious and hasty to rush to control a still emerging technology. “It is very ambitious,” she responded. “And one of the things that I think about a lot is of course if we want to build trust then you have to listen.
“You cannot just say I have a brilliant idea, I make it happen all over. You have to listen to people to figure out what would be the right approach here. Also because there is a balance. Because if you’re developing something new then — exactly as you say — you should be very careful not to over-regulate.
“For me, to fulfil these ambitions, obviously we need the feedback from the many, many businesses who have taken upon them to use the assessment list and the principles [recommended by the Commission’s HLEG on AI] of how to create AI you can trust. But I also think, to some degree, we have to listen fast. Because we have to talk with a lot of different people in order to get it right. But it is a reflection of the fact that we are in hurry. We really need to get our AI strategy off the ground and these proposals will be part of that.”
Europe could differentiate itself — and be “a world leader” — by developing “AI with a purpose”, Vestager suggested, pointing to potential applications for the tech such as in healthcare, transportation and combating climate change which she said would also work to further European values.
“I don’t think that we can be world leaders without ethical guidelines,” she said of AI. “I think we will lose it if we just say no let’s do as they do in the rest of the world — let’s pool all the data from everyone, no matter where it comes from, and let’s just invest all our money. I think we will lose out because the AI you create because you want to serve humans. That’s a different sort of AI. This is AI with a purpose.”
On digital taxation — where Vestager will play a strategic role, working with other commissioners — she said her intention is to work towards trying to achieve global agreement on reforming rules to take account of how data and profits flow across borders. But if that’s not possible she said Europe is prepared to act alone — and quickly — by the end of 2020.
“Surprising things can happen,” she said, discussing the challenge of achieving even an EU-wide consensus on tax reform, and noting how many pieces of tax legislation have already been passed in the European Council by unanimity. “So it’s not undoable. The problem is we have a couple of very important pieces of legislation that have not been passed.
“I’m still kind of hopeful in the working way that we can get a global agreement on digital taxation. If that is not the case obviously we will table and push for a European solution. And I admire the Member States who’ve said we want a European or global solution but if that isn’t to be we’re willing to do that by ourselves in order to be able to answer to all the businesses who pay their taxes.”
Vestager also signalled support for exploring the possibility of amending Article 116 of the Treaty on the Functioning of the EU, which relates to competition based distortion of the internal market, in order to enable tax reform to be passed by a qualified majority, instead of unanimously — as a potential strategy for getting past the EU’s own current blocks to tax reform.
“I think definitely we should start exploring what would that entail,” she said in response to a follow up question. “I don’t think it’s a given that it would be successful but it’s important that we take the different tools that the treaty gives us and use these tools if need be.”
During the hearing she also advocated for a more strategic use of public procurement by the EU and Member States — to push for more funding to go into digital research and business innovation that benefits common interests and priorities.
“It means working together with Member States on important projects of common European interest. We will bring together entire value chains, from universities, suppliers, manufacturers all the way to those who recycle the raw material that is used in manufacturing,” she said.
“Public procurement in Europe is… a lot of money,” she added. “And if we also use that to ask for solutions well then we can have also maybe smaller businesses to say I can actually do that. So we can make an artificial intelligence strategy that will push in all different sectors of society.”
She also argued that Europe’s industrial strategy needs to reach beyond its own Single Market — signalling a tougher approach to market access to those outside the bloc.
And implying she might favor less of a free-for-all when it comes to access to publicly funded data — if the value it contains risks further entrenching already data-rich, market-dominating giants at the expense of smaller local players.
“As we get more and more interconnected, we are more dependent and affected by decisions made by others. Europe is the biggest trading partner of some 80 countries, including China and the US. So we are in a strong position to work for a level global playing field. This includes pursuing our proposal to reform the World Trade Organization. It includes giving ourselves the right tools to make sure that foreign state ownership and subsidies do not undermine fair competition in Europe,” she said.
“We have to figure out what constitutes market power,” she went on, discussing how capacity to collect data can influence market position, regardless of whether it’s directly linked to revenue. “We will expand our insights as to how this works. We have learned a lot from some of the merger cases that we have been doing to see how data can work as an asset for innovation but also as a barrier to entry. Because if you don’t have the right data it’s very difficult to produce the services that people are actually asking for. And that becomes increasingly critical when it comes to AI. Because once you have it then you can do even more.
“I think we have to discuss what we do with all the amazing publicly funded data that we make available. It’s not to be overly biblical but we shouldn’t end up in a situation where ‘those who have shall more be given’. If you have a lot already then you also have the capabilities and the technical insight to make very good use of it. And we do have amazing data in Europe. Just think about what can be assessed in our supercomputers… they are world class… And second when it comes to both [EU sat-nav] Galileo and [earth observation program] Copernicus. Also here data is available. Which is an excellent thing for the farmer doing precision farming and saving in pesticides and seeds and all of that. But are we really happy that we also make it available for those who could actually pay for it themselves?
“I think that is a discussion that we will have to have — to make sure that not just the big ones keep taking for themselves but the smaller ones having a fair chance.”
Rights and wrongs
During the hearing Vestager was also asked whether she supported the controversial EU copyright reform.
She said she supports the “compromise” achieved — arguing that the legislation is important to ensure artists are rewarded for the work they do — but stressed that it will be important for the incoming Commission to ensure Member States’ implementations are “coherent” and that fragmentation is avoided.
She also warned against the risk of the same “divisive” debates being reopened afresh, via other pieces of legislation.
“I think now that the copyright issue has been settled it shouldn’t be reopened in the area of the Digital Services Act,” she said. “I think it’s important to be very careful not to do that because then we would lose speed again when it comes to actually making sure there is remuneration for those who hold copyright.”
Asked in a follow up question how, as the directive gets implemented by EU Member States, she will ensure freedom of speech is protected from upload filter technologies — which is what critics of the copyright reform argue the law effectively demands that platforms deploy — Vestager hedged, saying: “[It] will take a lot of discussions and back and forth between Member States and Commission, probably. Also this parliament will follow this very closely. To make sure that we get an implementation in Member States that are similar.”
“One has to be very careful,” she added. “Some of the discussions that we had during the adoption of the copyright directive will come back. Because these are crucial debates. Because it’s a debate between the freedom of speech and actually protecting people who have rights. Which is completely justified… Just as we have fundamental values we also have fundamental discussions because it’s always a balancing act how to get this right.”
The commissioner also voiced support for passing the ePrivacy Regulation. “It will be high priority to make sure that we’re able to pass that,” she told MEPs, dubbing the reform an important building block.
“One of the things I hope is that we don’t just always decentralize to the individual citizens,” she added. “Now you have rights, now you just go and enforce them. Because I know I have rights but one of my frustrations is how to enforce them? Because I am to read page after page after page and if I’m not tired and just forget about it then I sign up anyway. And that doesn’t really make sense. We still have to do more for people to feel empowered to protect themselves.”
She was also asked for her views on adtech-driven microtargeting — as a conduit for disinformation campaigns and political interference — and more broadly as so-called ‘surveillance capitalism’. “Are you willing to tackle adtech-driven business models as a whole?” she was asked by one MEP. “Are you willing to take certain data exploitation practices like microtargeting completely off the table?”
Hesitating slightly before answering, Vestager said: “One of the things I have learned from surveillance capitalism and these ideas is it’s not you searching Google it is Google searching you. And that gives a very good idea about not only what you want to buy but also what you think. So we have indeed a lot to do. I am in complete agreement with what has been done so far — because we needed to do something fast. So the Code of Practice [on disinformation] is a very good start to make sure that we get things right… So I think we have a lot to build on.
“I don’t know yet what should be the details of the Digital Services Act. And I think it’s very important that we make the most of what we have since we’re in a hurry. Also to take stock of what I would call digital citizens’ rights — the GDPR [General Data Protection Regulation] — that we can have national authorities enforce that in full, and hopefully also to have a market response so that we have privacy by design and being able to choose that. Because I think it’s very important that we also get a market response to say well you can actually do things in a very different way than just to allow yourself to feel forced to sign up to whatever terms and conditions that are put in front of you.
“I myself find it very thought-provoking if you have the time just once in a while to read the T&Cs now when they are obliged, thanks to this parliament, to write in a way that you can actually understand that makes it even more scary. And very often it just makes me think thanks but no thanks. And that of course is the other side of that coin. Yes, regulation. But also us as citizens to be much more aware of what kind of life we want to live and what kind of democracy we want to have. Because it cannot just be digital. Then I think we will lose it.”
In her own plea to MEPs Vestager urged them to pass the budget so that the Commission can get on with all the pressing tasks in front of it. “We have proposed that we increase our investments quite a lot in order to be able to do all this kind of stuff,” she said.
“First things first, I’m sorry to say this, we need the money. We need funding. We need the programs. We need to be able to do something so that people can see that businesses can use funds to invest in innovation, so that researchers can make their networks work all over Europe. That they get the funding actually to get there. And in that respect I hope that you will help push for the multi-annual financial framework to be in place. I don’t think that Europeans have any patience for us when it comes to these different things that we would like to be real. That is now, that is here.”

Over 30 civil rights groups demand an end to Amazon Ring’s police partnerships

Over 30 civil rights organizations have penned an open letter that calls on government officials to investigate Amazon Ring’s business practices and end the company’s numerous police partnerships. The letter follows a report by The Washington Post in August that detailed how over 400 police forces across the U.S. have partnered with Ring to gain access to homeowners’ camera footage.
These partnerships have already raised concerns with privacy advocates and civil liberties organizations, who claim the agreements turn neighbors into informants and subject innocent people to greater risk and surveillance.
Had the government itself installed a video network of this size and scope, it would have drawn greater scrutiny. But by quietly working with Ring behind the scenes, law enforcement gets to tap into a massive surveillance network without being directly involved in its creation.
The new letter from the civil rights groups demands that government officials put an end to these behind-the-scenes deals between Amazon and the police.
“With no oversight and accountability, Amazon’s technology creates a seamless and easily automated experience for police to request and access footage without a warrant, and then store it indefinitely,” the letter reads. “In the absence of clear civil liberties and rights-protective policies to govern the technologies and the use of their data, once collected, stored footage can be used by law enforcement to conduct facial recognition searches, target protesters exercising their First Amendment rights, teenagers for minor drug possession, or shared with other agencies like ICE or the FBI,” it says.
Additionally, the letter points out these police deals involve Amazon coaching cops on how to obtain surveillance footage without a warrant. It also notes that Ring allowed employees to share unencrypted customer videos with each other, including in offices based in Ukraine. And it raises concerns about Amazon’s potential plans to integrate facial recognition features into Ring cameras, based on patents it filed.
The groups also point to the map released by Amazon Ring, which now shows over 500 cities with Amazon-police partnerships across the U.S.
The groups’ letter is not the first to demand action.
Senator Edward J. Markey (D-Mass.) also wrote to Amazon last month to get more information about Ring and its relationships with law enforcement agencies.
But unlike Sen. Markey’s investigative letter to Amazon’s Ring, today’s letter has specific demands for action. The groups are asking mayors and city council members to require their local police departments to cancel their Ring partnerships. The groups also want local government officials to pass new surveillance oversight ordinances that will ensure police departments can’t enter into any such partnerships in the future.
And they want Congress to investigate Ring’s dealings with police more closely.
The letter itself was published online and signed by the following organizations:
Fight for the Future, Media Justice, Color of Change, Secure Justice, Demand Progress, Defending Rights & Dissent, Muslim Justice League, X-Lab, Media Mobilizing Project, Restore The Fourth, Inc., Media Alliance, Youth Art & Self Empowerment Project, Center for Human Rights and Privacy, Oakland Privacy, Justice For Muslims Collective, The Black Alliance for Just Immigration (BAJI), Nation Digital Inclusion Alliance, Project On Government Oversight, OpenMedia, Council on American-Islamic Relations-SFBA, Million Hoodies Movement for Justice, Wellstone Democratic Renewal Club, MPower Change, Mijente, Access Humboldt, RAICES, National Immigration Law Center, The Tor Project, United Church of Christ, Office of Communication Inc., the Constitutional Alliance, RootsAction.org, CREDO Action, Presente.org, American-Arab Anti-Discrimination Committee, and United We Dream.
According to Evan Greer, Deputy Director at Fight for the Future, the letter has not yet been mailed. But the plan, going forward, is to use it in local organizing when groups on the ground make deliveries to local officials in cities where the partnerships are live.
“Amazon has created the perfect end-run around our democratic process by entering into for-profit surveillance partnerships with local police departments. Police departments have easy access to surveillance networks without oversight or accountability,” said Greer. “Amazon Ring’s customers provide the company with the footage needed to build their privately owned, nationwide surveillance dragnet. We’re the ones who pay the cost – as they violate our privacy rights and civil liberties. Our elected officials are supposed to protect us, both from abusive policing practices and corporate overreach. These partnerships are a clear case of both,” Greer added.

Facebook is being leant on by US, UK, Australia to ditch its end-to-end encryption expansion plan

Here we go again. Western governments are once again dialling up their attack on end-to-end encryption — calling for either no e2e encryption or backdoored e2e encryption so platforms can be commanded to serve state agents with messaging data in “a readable and usable format”.
US attorney general William Barr, acting US homeland security secretary Kevin McAleenan, UK home secretary Priti Patel and Australia’s minister for home affairs, Peter Dutton, have co-signed an open letter to Facebook calling on the company to halt its plan to roll out e2e encryption across its suite of messaging products unless it can ensure what they describe as “no reduction to user safety and without including a means for lawful access to the content of communications to protect our citizens”, per a draft of the letter obtained by BuzzFeed ahead of publication later today.
If platforms have e2e encryption, a “means for lawful access” to the content of communications sums to a backdoor in the crypto.
Presumably along the lines of the ‘ghost protocol’ that UK spooks have been pushing for the past year. Aka an “exceptional access mechanism” that would require platforms CC’ing a state/law enforcement agent as a silent listener to eavesdrop on a conversation on warranted request.
Facebook-owned WhatsApp was one of a number of tech giants joining an international coalition of civil society organizations, security and policy experts in condemning the proposal as utter folly earlier this year.
The group warned that demanding a special security hole in encryption for law enforcement risks everyone’s security by creating a vulnerability which could be exploited by hackers. Or indeed service providers themselves. But the age-old ‘there’s no such thing as a backdoor just for you’ warning appears to have fallen on deaf ears.
In their open letter to Facebook, the officials write: “Companies should not deliberately design their systems to preclude any form of access to content, even for preventing or investigating the most serious crimes. This puts our citizens and societies at risk by severely eroding a company’s ability to detect and respond to illegal content and activity, such as child sexual exploitation and abuse, terrorism, and foreign adversaries’ attempts to undermine democratic values and institutions, preventing the prosecution of offenders and safeguarding of victims. It also impedes law enforcement’s ability to investigate these and other serious crimes.”
Of course Facebook is not the only messaging company using e2e encryption but it’s in the governments’ crosshairs now on account of a plan to expand its use of e2e crypto — announced earlier this year, as part of a claimed ‘pivot to privacy’. And, well, on account of it having two billion+ users.
The officials claim in the letter that “much” of the investigative activity which is critical to protecting child safety and fighting terrorism “will no longer be possible if Facebook implements its proposals as planned”.
“Risks to public safety from Facebook’s proposals are exacerbated in the context of a single platform that would combine inaccessible messaging services with open profiles, providing unique routes for prospective offenders to identify and groom our children,” they warn, noting that the Facebook founder expressed his own concerns about finding “the right ways to protect both privacy and safety”.
In March Mark Zuckerberg also talked about building “the appropriate safety systems that stop bad actors as much as we possibly can within the limits of an encrypted service”.
Which could, if you’re cynically inclined, be read as Facebook dangling a carrot to governments — along the lines of: ‘We might be able to scratch your security itch, if your regulators don’t break up our business.’
Ironically enough the high profile intervention by officials risks derailing Facebook’s plan to unify the backends of its platforms — widely interpreted as a play to make it harder for regulators to act on competition concerns and break up Facebook’s business empire along messaging product lines: Facebook, WhatsApp, Instagram.
Or, well — alternative scenario — Facebook could choose to strip e2e crypto from WhatsApp. Which is currently the odd one out in its messaging suite on account of having proper crypto. Governments would sure be happy if it did that. But it’s the opposite of what Zuckerberg has said he’s planning.

The government is demanding backdoor access to the private communications of 1.5 billion people using #WhatsApp. If @Facebook agrees, it may be the largest overnight violation of privacy in history. https://t.co/qkxO1pJuUh
— Edward Snowden (@Snowden) October 3, 2019

Curiously the draft letter makes no mention of platform metadata. Which is not shielded by even WhatsApp’s e2e encryption. And thus can be extracted — via a warrant — in a readable format for legit investigative purposes. And let’s not forget US spooks are more than happy to kill people based on metadata.
Instead the officials write: “We must find a way to balance the need to secure data with public safety and the need for law enforcement to access the information they need to safeguard the public, investigate crimes, and prevent future criminal activity. Not doing so hinders our law enforcement agencies’ ability to stop criminals and abusers in their tracks.”
The debate is being framed by spooks and security ministers as all about content.
Yet a scrambled single Facebook backend would undoubtedly yield vastly more metadata, and higher resolution metadata, on account of triangulation across the services. So it really is a curious omission.
We’ve reached out to Facebook for its reaction to the letter. BuzzFeed reports that it sent a statement in which it strongly opposes government attempts to build backdoors. So if Facebook holds firm to that stance it looks like another big crypto fight could well be brewing. A la Apple vs the FBI.
Bilateral Data Access Agreement
In another announcement being made today, the UK and the US have signed a “world first” Bilateral Data Access Agreement that’s intended to greatly speed up electronic data access requests by their respective law enforcement agencies.
The agreement is intended to replace the current process which sees requests for communications data from law enforcement agencies submitted and approved by central governments via a process called Mutual Legal Assistance — which can take months or even years.
Once up and running, the claim is the new arrangement will see the process reduced to a matter of weeks or even days.
The agreement will work reciprocally with the UK getting data from US tech firms, and the US getting access from UK communication service providers (via a US court order).
Any request for data must be made under an authorisation in accordance with the legislation of the country making the request and will be subject to independent oversight or review by a court, judge, magistrate or other independent authority, per the announcement.
The UK also says specifically that it has obtained “assurances” which are in line with the government’s continued opposition to the death penalty in all circumstances. Which is only mildly reassuring given the home secretary’s previous views on the topic.
The announcement also makes a point of noting the data access agreement does not change anything about how companies can use encryption — nor prevent them from encrypting data.
For interfering with proper encryption the plan among this trio of signals intelligence allies is, seemingly, to reach for the old PR lever and apply public pressure. So, yeah, here we go again.

Justice Department has issued draft rules on using consumer genetic data in investigations

The U.S. Department of Justice has issued a preliminary set of guidelines for how law enforcement agencies can use genetic information from consumer DNA analysis services in their investigations.
“Prosecuting violent crimes is a Department priority for many reasons, including to ensure public safety and to bring justice and closure to victims and victims’ families,” said Deputy Attorney General Jeffrey A. Rosen, in a statement. “We cannot fulfill our mission if we cannot identify the perpetrators. Forensic genetic genealogy gets us that much closer to being able to solve the formerly unsolvable. But we must not prioritize this investigative advancement above our commitments to privacy and civil liberties; and that is why we have released our Interim Policy – to provide guidance on maintaining that crucial balance.”
Most critically the Department guidelines clearly state that a suspect “shall not be arrested based solely on a genetic association” generated by a genetic genealogical service.
If a suspect is identified using genetic information, the sample must be directly compared to the forensic profile that had already been uploaded to the FBI’s Combined DNA Index System (called CODIS).
Genetic information from a consumer service can only be used when a case involves an unsolved violent crime or sexual offense and the forensic sample belongs to the person investigators believe to be the perpetrator, or when a case involves the remains of a suspected homicide victim, according to the Justice Department.
Prosecutors have the ability to expand or authorize the use of genetic genealogical data beyond violent crimes when law enforcement is investigating crimes that present “a substantial and ongoing threat to public safety or national security.”
Genetic data from a consumer service can only be used after investigators have searched the FBI’s internal system, and the collected samples that would be correlated with public information must be reviewed by a designated laboratory official (DLO), the Department of Justice said.
“The DLO must determine if the candidate forensic sample is from a single source contributor or is a deduced mixture. The DLO will also assess the candidate forensic sample’s suitability (e.g., quantity, quality, degradation, mixture status, etc.),” for comparison with publicly available genetic records. 
Under the new guidelines, law enforcement agencies can only search consumer genetic databases that provide explicit notifications to their users that law enforcement may use the services to investigate crimes or identify human remains. Investigators also have to receive consent from users of the genealogical service if their genetic information is going to be collected as part of an investigation (unless the consent would compromise the investigation).
These new guidelines follow a series of revelations from earlier in the year centering on the fact that DNA testing services had opened up their services to law enforcement agencies to aid in criminal investigations without their customers’ knowledge or consent.
At the heart of the story was the decision by the genealogy service FamilyTreeDNA to open the genetic records of several million customers to law enforcement agencies without informing them. The story was first reported in January by BuzzFeed.
It wasn’t the first time that law enforcement had turned to genetic evidence to solve a crime. In April 2018, police arrested a man believed to be the “Golden State Killer” in part thanks to DNA evidence collected from online DNA and genealogical databases. It was the first instance of public genetic information being used to solve a crime.
The ensuing outcry over FamilyTreeDNA’s decision brought new attention to the fact that the consumer genetic testing companies are largely unregulated and very few regulations exist governing how these companies can use information once a consumer has given their consent.
“We are nearing a de-facto national DNA database,” Natalie Ram, an assistant law professor at the University of Baltimore who specializes in bioethics and criminal justice, told BuzzFeed News at the time. “We don’t choose our genetic relatives, and I cannot sever my genetic relation to them. There’s nothing voluntary about that.”

Documents reveal how Russia taps phone companies for surveillance

In cities across Russia, large boxes in locked rooms are directly connected to the networks of some of the country’s largest phone and internet companies.
These unassuming boxes, some the size of a washing machine, house equipment that gives the Russian security services access to the calls and messages of millions of citizens. This government surveillance system remains largely shrouded in secrecy, even though phone and web companies operating in Russia are forced by law to install these large devices on their networks.
But documents seen by TechCrunch offer new insight into the scope and scale of the Russian surveillance system — known as SORM (Russian: COPM) — and how Russian authorities gain access to the calls, messages and data of customers of the country’s largest phone provider, Mobile TeleSystems (MTS).
The documents were found on an unprotected backup drive owned by an employee of Nokia Networks (formerly Nokia Siemens Networks), which through a decade-long relationship maintains and upgrades MTS’s network — and ensures its compliance with SORM.
Chris Vickery, director of cyber risk research at security firm UpGuard, found the exposed files and reported the security lapse to Nokia. In a report out Wednesday, UpGuard said Nokia secured the exposed drive four days later.
“A current employee connected a USB drive that contained old work documents to his home computer,” said Nokia spokesperson Katja Antila in a statement. “Due to a configuration mistake, his PC and the USB drive connected to it was accessible from the internet without authentication.”
“After this came to our attention, we contacted the employee and the machine was disconnected and brought to Nokia,” the spokesperson said.
Nokia said its investigation is ongoing.
‘Lawful intercept’
The exposed data — close to 2 terabytes in size — contain mostly internal Nokia files.
But a portion of the documents seen by TechCrunch reveals Nokia’s involvement in providing “lawful intercept” capabilities to phone and internet providers, which Russia mandates by law.
SORM, an acronym for “system for operative investigative activities,” was first developed in 1995 as a lawful intercept system to allow the Federal Security Service (FSB, formerly the KGB) to access telecoms data, including Russians’ call logs and the content of their communications. Changes to the law over the last decade saw the government’s surveillance powers expand to internet providers and web companies, which were compelled to install SORM equipment to allow the interception of web traffic and emails. Tech companies, including messaging apps like Telegram, also have to comply with the law. The state internet regulator, Roskomnadzor, has fined several companies for not installing SORM equipment.
Since the system’s expansion in recent years, several government agencies and police departments can now access citizens’ private data with SORM.
Most countries, including the U.S. and the U.K., have laws to force telecom operators to install lawful intercept equipment so security services can access phone records in compliance with local laws. That’s enabled an entirely new industry of tech companies, primarily network equipment providers like Nokia, to build and install technologies on telecom networks that facilitate lawful intercepts.
Alexander Isavnin, an expert at Roskomsvoboda and the Internet Protection Society, told TechCrunch that work related to SORM, however, is “classified” and requires engineers to obtain special certifications for work. He added that it’s not uncommon for the FSB to demand telecom and internet companies buy and use SORM equipment from a pre-approved company of its choosing.
The documents show that between 2016 and 2017, Nokia planned and proposed changes to MTS’s network as part of the telecom giant’s “modernization” effort.
Nokia planned to improve a number of local MTS-owned phone exchanges in several Russian cities — including Belgorod, Kursk and Voronezh — to comply with the latest changes to the country’s surveillance laws.
TechCrunch reviewed the documents, which included several floor plans and network diagrams for the local exchanges. The documents also show that the installed SORM device on each phone network has direct access to the data that passes through each phone exchange, including calls, messages and data.
MTS’ exchange in Belgorod containing SORM equipment. Authorities can remotely access the system.
The plans contain the physical address — including floor number — of each phone exchange, as well as the location of each locked room with SORM equipment in large bold red font, labeled “COPM.” One document was titled “COPM equipment installation [at] MTS’ mobile switching center,” a core function for handling calls on a cell network.
An unedited floor plan detailing where the SORM equipment is located.
One photo showed the inside of one of the SORM rooms: a sealed metal cabinet of intercept equipment, labeled “COPM” in large letters, next to an air-conditioning unit that keeps the equipment cool.
A photo of a SORM (COPM) device in a locked room at one of MTS’ local phone exchanges.
Nokia says it provides — and its engineers install — the “port” in the network to allow lawful intercept equipment to plug in and intercept data pursuant to a court order, but denied storing, analyzing or processing intercepted data.
That’s where other companies come into play. Russian lawful intercept equipment maker Malvin Systems provides SORM-compatible technology that sits on top of the “port” created by Nokia. That compatible technology allows the collection and storage of citizens’ data.
“As it is a standard requirement for lawful interception in Russia and SORM providers must be approved by the appropriate authorities, we work with other companies to enable SORM capabilities in the networks that we provide,” said Nokia’s spokesperson, who confirmed Malvin as one of those companies.
Nokia’s logo was on Malvin’s website at the time of writing. A representative for Malvin did not return a request for comment.
Another set of documents shows that the “modernized” SORM capabilities on MTS’s network also allows the government access to the telecom’s home location register (HLR) database, which contains records on each subscriber allowed to use the cell network, including their international mobile subscriber identity (IMSI) and SIM card details.
The documents also make reference to Signalling System 7 (SS7), a protocol critical to how cell networks establish and route calls and text messages. The protocol has been widely shown to be insecure and has been exploited by hackers.
MTS spokesperson Elena Kokhanovskaya did not respond to several emails requesting comment.
‘Bulk wiretapping’
Lawful intercept, as its name suggests, allows a government to lawfully acquire data for investigations and countering terrorism.
But as much as it’s recognized that it’s necessary and obligatory in most Western countries — including the U.S. — some have expressed concern at how Russia rolled out and interprets its lawful intercept powers.
Russia has long faced allegations of human rights abuses. In recent years, the Kremlin has cracked down on companies that don’t store citizens’ data within its borders — in some cases actively blocking Western companies like LinkedIn for failing to comply. The country also limits freedom of speech and expression, and dissidents and activists are often arrested for speaking out.
“The companies will always say that with lawful interception, they’re complying with the rule of law,” said Adrian Shahbaz, research director for technology and democracy at Freedom House, a civil liberties and rights watchdog. “But it’s clear when you look at how Russian authorities are using this type of apparatus that it goes far beyond what is normal in a democratic society.”
For Nokia’s part, it says its lawful intercept technology allows telecom companies — like MTS — to “respond to interception requests on targeted individuals received from the legal authority through functionality in our solutions.”
But critics say Russia’s surveillance program is flawed and puts citizens at risk.

“In Russia, the operator installs it and have no control over what is being wiretapped. Only the FSB knows what they collect.”
— Alexander Isavnin, expert

Isavnin, who reviewed and translated some of the files TechCrunch has seen, said Russia’s view of lawful intercept goes far beyond other Western nations with similar laws. He described SORM as “bulk wiretapping.”
He said in the U.S., the Communications Assistance for Law Enforcement Act (CALEA) requires a company to verify the validity of a wiretap order. “In Russia, the operator installs it and have no control over what is being wiretapped,” he said. The law states that the telecom operator is “not able to determine what data is being wiretapped,” he said.
“Only the FSB knows what they collect,” he said. “There is no third-party scrutiny.”
Nokia denied wrongdoing, and said it is “committed” to supporting human rights.
Nokia chief marketing officer David French told TechCrunch in a call that Nokia uses a list of countries that are considered “high-risk” on human rights before it sells equipment that could be used for surveillance.
“When we see a match between a technology that we think has potential risk and a country that has potential risk, we have a process where we review it internally and decide to go forward with the sale,” said French.
When pressed, French declined to say whether Russia was on that list. He added that any equipment that Nokia provides to MTS is covered under non-disclosure agreements.
A spokesperson for the Russian consulate in New York could not be reached by phone prior to publication.
This latest security lapse is the second involving SORM in recent months. In August, a developer found thousands of names, numbers, addresses and geolocations said to have leaked from SORM devices. Using open-source tools, Russian developer Leonid Evdokimov found dozens of “suspicious packet sniffers” in the networks of several Russian internet providers.
It took more than a year for the internet providers to patch those systems.
Ingrid Lunden contributed translations and reporting.
Got a tip? You can send tips securely over Signal and WhatsApp to +1 646-755-8849. You can also send PGP email with the fingerprint: 4D0E 92F2 E36A EC51 DAAE 5D97 CB8C 15FA EB6C EEA5.

What startup CSOs can learn from three enterprise security experts

How do you keep your startup secure?
That’s the big question we explored at TC Sessions: Enterprise earlier this month. No matter the size, every startup is an enterprise. Every startup will grow in size as it builds out. But as a company expands, that rapid growth can lead to a distraction from the foundational principle of any modern company — keeping it secure.
Security isn’t just a buzzword. As some of the largest companies in Silicon Valley have shown, security can be difficult. From storing passwords in plaintext to data breaches galore, how can startups learn from some of the biggest security lapses in the tech industry’s history?
Our panel consisted of three of the brightest minds in enterprise security: Wendy Nather, head of advisory CISOs at Duo Security, is an enterprise security expert; Martin Casado, general partner at Andreessen Horowitz, is a security and enterprise startup investor; and Emily Heath, United’s chief information security officer, oversees the security operations of one of the largest U.S. airlines.
This is the advice they had.
Security from the very start

Federal judge rules that the “terrorist watchlist” database violates U.S. citizens’ rights

A Federal judge appointed by President George W. Bush has ruled that the “terrorist watchlist” database compiled by Federal agencies and used by the Federal Bureau of Investigation and the Department of Homeland Security violates the rights of American citizens who are on it.
The ruling, first reported by The New York Times, raises questions about the constitutionality of the practice, which was initiated in the wake of the September 11 terrorist attacks.
The Terrorist Screening Database is used both domestically and internationally by law enforcement and other federal agencies, and inclusion on the database can have negative consequences — including limiting the ability of citizens whose names are on the list to travel.
The U.S. government has identified more than 1 million people as “known or suspected terrorists” and included them on the watchlist, according to reporting from the Associated Press.
The ruling from U.S. District Judge Anthony Trenga is the culmination of several years of hearings on the complaint, brought to court by roughly two dozen Muslim U.S. citizens with the support of Muslim civil-rights group, the Council on American Islamic Relations.
The methodology the government used to add names to the watchlist was shrouded in secrecy, and citizens placed on the list often had no way of knowing how or why they were on it. Indeed, much of the plaintiffs’ lawsuit hinged on the over-broad and error-prone ways in which the list was updated and maintained.
“The vagueness of the standard for inclusion in the TSDB, coupled with the lack of any meaningful restraint on what constitutes grounds for placement on the Watchlist, constitutes, in essence, the absence of any ascertainable standard for inclusion and exclusion, which is precisely what offends the Due Process Clause,” wrote Judge Trenga.
In court, lawyers for the FBI contended that any difficulties the 21 Muslim plaintiffs suffered were outweighed by the government’s need to combat terrorist threats.
Judge Trenga disagreed. Especially concerning for the judge were the potential risks to an individual’s reputation as a result of their inclusion on the watchlist. That’s because the list isn’t just distributed to federal law enforcement agencies, but also finds its way into the hands of over 18,000 state, local, county, city, university and college, and tribal and federal law enforcement agencies and another 533 private entities. The judge was concerned that mistaken inclusion on the watchlist could have negative implications in interactions with local law enforcement and potential employers or local government services.
“Every step of this case revealed new layers of government secrets, including that the government shares the watchlist with private companies and more than sixty foreign countries,” said CAIR Senior Litigation Attorney Gadeir Abbas. “CAIR will continue its fight until the full scope of the government’s shadowy watchlist activities is disclosed to the American public.”
Federal agencies have consistently expanded the number of names on the watchlist over the years. As of June 2017, 1.16 million people were included on the watchlist, according to government documents filed in the lawsuit and cited by the AP — with roughly 4,600 of those names belonging to U.S. citizens and lawful permanent residents. In 2013, that number was 680,000, according to the AP.
“The fundamental principle of due process is notice and the opportunity to be heard,” said CAIR Trial Attorney Justin Sadowsky. “Today’s opinion provides that due process guarantee to all Americans affected by the watchlist.”

A new app can detect Bluetooth credit card skimmers on gas pumps

A team of computer scientists has built a new app that can wirelessly detect credit card skimmers, often found discreetly placed on gas pumps and bank ATMs.
Gone are the days when card skimmers would take over the entire front facade of a cash machine. Credit card skimmers are tiny, almost invisible — and many contain Bluetooth wireless capabilities, meaning skimming operators can install their credit card data-stealing skimmers just once and never have to take apart a gas pump again. Instead, criminals can just pull up in their car and wirelessly download the stolen card data.
Skimmers are also often connected to the magnetic stripe reader or the keypad, not only to steal your credit card number but also your PIN and ZIP codes.
This new app, dubbed Bluetana, developed by researchers at the University of California, San Diego and the University of Illinois Urbana-Champaign, can detect Bluetooth-enabled skimmers without having to dismantle vulnerable gas pumps.
By detecting Bluetooth signatures, the app aims to find more skimmers without flagging false positives, like speed-limit signs and fleet tracking systems, said Nishant Bhaskar, a PhD student and one of the researchers. Many skimmers use the same components, which when detected can indicate the presence of a skimmer. The prefix of the Bluetooth device’s unique MAC address is then compared to a hit list of prefixes known to be used by skimmers recovered by law enforcement. The app also uses signal strength as a “reliable way” to determine if a Bluetooth skimmer device is located near a gas pump.
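To make that heuristic concrete, here is a minimal sketch, in Python, of the two-part check the researchers describe: match a scanned device’s MAC-address prefix against a hit list and require a strong signal before flagging it. This is not Bluetana’s actual code; the prefix values, the signal-strength threshold and all names below are hypothetical, chosen purely for illustration.

from dataclasses import dataclass

# Hypothetical MAC-address prefixes (OUIs) of Bluetooth modules reused across
# recovered skimmers; a real hit list would come from law enforcement seizures.
KNOWN_SKIMMER_PREFIXES = {"00:1A:7D", "20:16:04"}

# Hypothetical signal-strength threshold in dBm; readings at or above it
# suggest the radio sits inside or right next to the pump being scanned.
NEARBY_RSSI_THRESHOLD = -60

@dataclass
class BluetoothSighting:
    mac: str   # e.g. "00:1A:7D:DA:71:13"
    rssi: int  # received signal strength in dBm, e.g. -48

def looks_like_skimmer(sighting: BluetoothSighting) -> bool:
    """Flag a device whose MAC prefix is on the hit list and whose signal is
    strong enough to place it at the pump being scanned."""
    prefix = sighting.mac.upper()[:8]  # first three octets, the manufacturer prefix
    return prefix in KNOWN_SKIMMER_PREFIXES and sighting.rssi >= NEARBY_RSSI_THRESHOLD

print(looks_like_skimmer(BluetoothSighting("00:1A:7D:DA:71:13", -48)))  # True: listed prefix, strong signal
print(looks_like_skimmer(BluetoothSighting("00:1A:7D:DA:71:13", -85)))  # False: same prefix, signal too weak

Benign fixed Bluetooth devices such as speed-limit signs and fleet trackers typically fail one or both of these tests, which is how the approach keeps false positives down.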
The app was developed after field testers obtained scans of 1,185 Bluetooth gas pump skimmers in six U.S. states.
It’s a new technique aimed at improving on existing efforts designed to detect these tiny, inconspicuously installed skimming devices. Bluetooth skimmers are popular among scammers and fraudsters, not least because they offer a high return on investment. A single device can cost $20 to develop and can be used to steal thousands of dollars in a single day, depending on where the skimmer is located.
So far, the Bluetana app has detected 64 Bluetooth-based skimmers that had evaded other, existing scans, according to the researchers, and cuts down detection time to just a few seconds rather than minutes.
But don’t expect the app to come to consumers any time soon. It is currently in use by law enforcement in several U.S. states, the researchers said.

Card readers at electric vehicle charging stations will weaken security, researchers say

UK High Court rejects human rights challenge to bulk snooping powers

Civil liberties campaign group Liberty has lost its latest challenge to controversial UK surveillance powers that allow state agencies to intercept and retain data in bulk.
The challenge fixed on the presence of so-called ‘bulk’ powers in the 2016 Investigatory Powers Act (IPA): A controversial capability that allows intelligence agencies to legally collect and retain large amounts of data, instead of having to operate via targeted intercepts.
The law even allows for state agents to hack into devices en masse, without per-device grounds for individual suspicion.
Liberty, which was supported in the legal action by the National Union of Journalists, argued that bulk powers are incompatible with European human rights law on the grounds that the IPA contains insufficient safeguards against abuse of these powers.
Two months ago it published examples of what it described as shocking failures by UK state agencies — such as not observing the timely destruction of material; and data being discovered to have been copied and stored in “ungoverned spaces” without the necessary controls — which it said showed MI5 had failed to comply with safeguards requirements since the IPA came into effect.
However the judges disagreed that the examples of serious flaws in spy agency MI5’s “handling procedures” — which the documents also show triggering intervention by the Investigatory Powers Commissioner — sum to a conclusion that the Act itself is incompatible with human rights law.
Rejecting the argument in their July 29 ruling they found that oversight mechanisms the government baked into the legislation (such as the creation of the office of the Investigatory Powers Commissioner to conduct independent oversight of spy agencies’ use of the powers) provide sufficient checks on the risk of abuse, dubbing the regime as “a suite of inter-locking safeguards”.
Liberty expressed disappointment with the ruling — and has said it will appeal.
In a statement the group told the BBC: “This disappointing judgment allows the government to continue to spy on every one of us, violating our rights to privacy and free expression.
“We will challenge this judgment in the courts, and keep fighting for a targeted surveillance regime that respects our rights. These bulk surveillance powers allow the state to Hoover up the messages, calls and web history of hordes of ordinary people who are not suspected of any wrongdoing.”
This is just one of several challenges brought against the IPA.
A separate challenge to bulk collection was lodged by Liberty, Big Brother Watch and others with the European Court of Human Rights (ECHR).
A hearing took place two years ago and the court subsequently found that the UK’s historical regime of bulk interception had violated human rights law. However it did not rule against bulk surveillance powers in principle — which the UK judges note in their judgement, writing that consequently: “There is no requirement for there to be reasonable grounds for suspicion in the case of any individual.”
Earlier this year Liberty et al were granted leave to appeal their case to the ECHR’s highest court. That case is still pending before the Grand Chamber.

Bellingcat journalists targeted by failed phishing attempt

Investigative news site Bellingcat has confirmed several of its staff were targeted by an attempted phishing attack on their Protonmail accounts, which the journalists and the email provider say failed.
“Yet again, Bellingcat finds itself targeted by cyber attacks, almost certainly linked to our work on Russia,” wrote Eliot Higgins, founder of the investigative news site in a tweet. “I guess one way to measure our impact is how frequently agents of the Russian Federation try to attack it, be it their hackers, trolls, or media.”
News emerged that a small number of Protonmail email accounts were targeted during the week — several of which belonged to Bellingcat’s researchers who work on projects relating to activities by the Russian government. A phishing email purportedly from Protonmail itself asked users to change their email account passwords or generate new encryption keys on a similarly named domain set up by the attackers. Records show the fake site was registered anonymously, according to an analysis by security researchers.
In a statement, Protonmail said the phishing attacks “did not succeed” and denied that its systems or user accounts had been hacked or compromised.
“The most practical way to obtain email data from a ProtonMail user’s inbox is by compromising the user, as opposed to trying to compromise the service itself,” said Protonmail’s chief executive Andy Yen. “For this reason, the attackers opted for a phishing campaign that targeted the journalists directly.”
Yen said the attackers tried to exploit an unpatched flaw in third-party software used by Protonmail, which has yet to be fixed or disclosed by the software maker.
“This vulnerability, however, is not widely known and indicates a higher level of sophistication on the part of the attackers,” said Yen.
It’s not known conclusively who was behind the attack. However, both Bellingcat and Protonmail said they believe certain tactics and indicators of the attack — and the fact that the targets were Bellingcat’s researchers working on the ongoing investigation into the downing of flight MH17 by Russian forces and the release of nerve agent in the U.K. — may point to hackers associated with the Russian government.
Higgins said in a tweet that this week’s attempted attack likely targeted a number of people “in the tens” unlike earlier attacks attributed to the Russian government-backed hacker group, known as APT 28 or Fancy Bear.
Bellingcat in the past year has gained critical acclaim for its investigations into the Russian government, uncovering the names of the alleged Russian operatives behind the suspected missile attack that blew up Malaysian airliner MH17 in 2014. The research team also discovered the names of the Russian operatives who were since accused of poisoning former Russian intelligence agent Sergei Skripal and his daughter Yulia in a nerve agent attack in Salisbury, U.K. in 2018.
The researchers use open-source intelligence and information gathering where police, law enforcement and intelligence agencies often fail.
It’s not the first time that hackers have targeted Bellingcat. Its researchers were targeted several times in 2016 and 2017 following the breach of the Democratic National Committee, which saw thousands of internal emails stolen and published online.
A phone call to the Russian consulate in New York requesting comment was not returned.

Microsoft has warned 10,000 victims of state-sponsored hacking

Microsoft makes 3 data sharing agreements available to the community

Microsoft today published three data sharing agreements that it hopes can function as a basis for other organizations who want to create similar documents.
In today’s announcement, Erich Anderson, Corporate Vice President and Chief IP Counsel at Microsoft, argues that while there are plenty of organizations that want to work together on shared datasets, the logistics of creating these agreements — with months of time spent on negotiating and talking to lawyers — often stall or stop these projects.
“We want to help make it easier for individuals and organizations that want to share data to do so,” Anderson writes. “Often, agreements for broad data sharing scenarios are unnecessarily long and complex. We also think there is an important role for agreements that limit rights to computational use for AI. Further, we think the state of the art on data sharing for proprietary and private data sets is changing rapidly and the terms available publicly could be improved and better explained.”
The three agreements focus on slightly different use cases. One is the “Computational Use of Data Agreement,” for sharing data from publicly available sources for computational purposes that don’t involve any personal data. The “Data Use Agreement for Open AI Model Development,” on the other hand, is all about training AI models with data that could include personal data, while the “Open Use of Data Agreement” focuses, as the name implies, on making data publicly available.
Anderson stresses that Microsoft is making these licenses available for community review and input. “Going forward, our aim is to work with interested stakeholders to improve these agreements and to offer additional ones that cover a wide range of data sharing scenarios,” he notes.
 

AG Barr says consumers should accept security risks of encryption backdoors

U.S. attorney general William Barr has said consumers should accept the risks that encryption backdoors pose to their personal cybersecurity to ensure law enforcement can access encrypted communications.
In a speech Tuesday in New York, the U.S. attorney general parroted much of the same rhetoric as his predecessors and other senior staff at the Justice Department, calling on tech companies to do more to help federal authorities gain access to devices with a lawful order.
Encrypted messaging has taken off in recent years, making its way to Apple products, Facebook, Instagram and WhatsApp, a response from Silicon Valley to the abuse of access by the intelligence services in the wake of the Edward Snowden revelations in 2013. But law enforcement says encryption thwarts its access to communications it claims it needs to prosecute criminals.
The government calls this “going dark,” because it cannot see into encrypted communications, and the phrase remains a key talking point for the authorities. Security experts have long said there is no secure way to create “backdoor” access to encrypted communications for law enforcement without potentially allowing malicious hackers to also gain access to people’s private communications.
In remarks, Barr said the “significance of the risk should be assessed based on its practical effect on consumer cybersecurity, as well as its relation to the net risks that offering the product poses for society.”
He suggested that the “residual risk of vulnerability resulting from incorporating a lawful access mechanism is materially greater than those already in the unmodified product.”
“Some argue that, to achieve at best a slight incremental improvement in security, it is worth imposing a massive cost on society in the form of degraded safety,” he said.
The risk, he said, was acceptable because “we are talking about consumer products and services such as messaging, smart phones, e-mail, and voice and data applications,” and “not talking about protecting the nation’s nuclear launch codes.”
The attorney general said it was “untenable” that devices offer uncrackable encryption while offering zero access to law enforcement.
Barr is the latest in a stream of attorneys general to decry an inability by law enforcement to access encrypted communications, despite pushback from the tech companies.
The U.S. is far from alone in calling on tech companies to give law enforcement access.
Earlier this year U.K. authorities proposed a new backdoor mechanism, the so-called “ghost protocol,” which would give law enforcement access to encrypted communications as though they were part of a private conversation. Apple, Google, Microsoft and WhatsApp rejected the proposal.
The FBI inadvertently undermined its “going dark” argument last year when it admitted the number of encrypted devices it claimed it couldn’t gain access to was overestimated by thousands.
FBI director Christopher Wray said the number of devices it couldn’t gain access to was less than a quarter of the claimed 7,800 phones and tablets.
Barr did not rule out pushing legislation to force tech companies to build backdoors.

Apple rebukes Australia’s “dangerously ambiguous” anti-encryption bill
