
Divesting from one facial recognition startup, Microsoft ends outside investments in the tech

Microsoft is pulling out of an investment in an Israeli facial recognition technology developer as part of a broader policy shift to halt any minority investments in facial recognition startups, the company announced late last week.
The decision to withdraw its investment from AnyVision, an Israeli company developing facial recognition software, came as a result of an investigation into reports that AnyVision’s technology was being used by the Israeli government to surveil residents in the West Bank.
The investigation, conducted by former U.S. Attorney General Eric Holder and his team at Covington & Burling, confirmed that AnyVision’s technology was used to monitor border crossings between the West Bank and Israel, but did not “power a mass surveillance program in the West Bank.”
Microsoft’s venture capital arm, M12 Ventures, backed AnyVision as part of the company’s $74 million financing round, which closed in June 2019. Investors who continue to back the company include DFJ Growth, OG Technology Partners, LightSpeed Venture Partners, Robert Bosch GmbH, Qualcomm Ventures, and Eldridge Industries.
Microsoft first staked out its position on how it would approach facial recognition technologies in 2018, when its president, Brad Smith, issued a statement calling on the U.S. government to come up with clear regulations around facial recognition.
Smith’s calls for more regulation and oversight became more strident by the end of the year, when Microsoft issued a statement on its approach to facial recognition.
Smith wrote:
We and other tech companies need to start creating safeguards to address facial recognition technology. We believe this technology can serve our customers in important and broad ways, and increasingly we’re not just encouraged, but inspired by many of the facial recognition applications our customers are deploying. But more than with many other technologies, this technology needs to be developed and used carefully. After substantial discussion and review, we have decided to adopt six principles to manage these issues at Microsoft. We are sharing these principles now, with a commitment and plans to implement them by the end of the first quarter in 2019.
The six principles Microsoft laid out were fairness, transparency, accountability, non-discrimination, notice and consent, and lawful surveillance.
Critics took the company to task for its investment in AnyVision, saying that the decision to back a company working with the Israeli government on wide-scale surveillance ran counter to the principles it had set out for itself.
Now, after determining that controlling how facial recognition technologies are deployed by its minority investments is too difficult, the company is suspending its outside investments in the technology.
“For Microsoft, the audit process reinforced the challenges of being a minority investor in a company that sells sensitive technology, since such investments do not generally allow for the level of oversight or control that Microsoft exercises over the use of its own technology,” the company wrote in a statement on its M12 Ventures website. “Microsoft’s focus has shifted to commercial relationships that afford Microsoft greater oversight and control over the use of sensitive technologies.”
 
 

What are the rules wrapping privacy during COVID-19?

In a public health emergency that relies on people keeping an anti-social distance from each other, to avoid spreading a highly contagious virus for which humans have no pre-existing immunity, governments around the world have been quick to look to technology companies for help.
Background tracking is, after all, what many Internet giants’ ad-targeting business models rely on. Meanwhile, in the US, telcos were recently exposed sharing highly granular location data for commercial ends.
Some of these privacy-hostile practices face ongoing challenges under existing data protection laws in Europe — and/or have at least attracted regulator attention in the US, which lacks a comprehensive digital privacy framework — but a pandemic is clearly an exceptional circumstance. So we’re seeing governments turn to the tech sector for help.
US president Donald Trump was reported last week to have summoned a number of tech companies to the White House to discuss how mobile location data could be used for tracking citizens.
In another development this month, he announced that Google was working on a nationwide coronavirus screening site — in fact the project belongs to Verily, a different division of Alphabet. But concerns were quickly raised that the site requires users to sign in with a Google account, suggesting users’ health-related queries could be linked to other online activity the tech giant monetizes via ads. (Verily has said the data is stored separately and not linked to other Google products, although the privacy policy does allow data to be shared with third parties, including Salesforce, for customer service purposes.)
In the UK the government has also been reported to be in discussions with telcos about mapping mobile users’ movements during the crisis — though not at an individual level. It was reported to have held an early meeting with tech companies to ask what resources they could contribute to the fight against COVID-19.
Elsewhere in Europe, Italy — which remains the European nation worst hit by the virus — has reportedly sought anonymized data from Facebook and local telcos that aggregates users’ movement to help with contact tracing or other forms of monitoring.
While there are clear public health imperatives to ensure populations are following instructions to reduce social contact, the prospect of Western democracies making like China and actively monitoring citizens’ movements raises uneasy questions about the long term impact of such measures on civil liberties.
Plus, if governments seek to expand state surveillance powers by directly leaning on the private sector to keep tabs on citizens it risks cementing a commercial exploitation of privacy — at a time when there’s been substantial push-back over the background profiling of web users for behavioral ads.
“Unprecedented levels of surveillance, data exploitation, and misinformation are being tested across the world,” warns civil rights campaign group Privacy International, which is tracking what it dubs the “extraordinary measures” being taken during the pandemic.
A couple of examples include telcos in Israel sharing location data with state agencies for COVID-19 contact tracing and the UK government tabling emergency legislation that relaxes the rules around intercept warrants.
“Many of those measures are based on extraordinary powers, only to be used temporarily in emergencies. Others use exemptions in data protection laws to share data. Some may be effective and based on advice from epidemiologists, others will not be. But all of them must be temporary, necessary, and proportionate,” it adds. “It is essential to keep track of them. When the pandemic is over, such extraordinary measures must be put to an end and held to account.”

As the pandemic gets worse, we’re going to see increased pressure by all governments to access #private consumer data to identify:
1) who is infected 2) who they’ve been in contact with (“contact tracing”)
This is going to be an incredibly slippery slope if we’re not careful.
— ashkan soltani (@ashk4n) March 18, 2020

At the same time, employers may feel under pressure to monitor their own staff to try to reduce COVID-19 risks right now — which raises questions about how they can contribute to a vital public health cause without overstepping any legal bounds.
We talked to two lawyers from Linklaters to get their perspective on the rules that wrap extraordinary steps such as tracking citizens’ movements and their health data, and how European and US data regulators are responding so far to the coronavirus crisis.
Bear in mind it’s a fast-moving situation — with some governments (including the UK and Israel) legislating to extend state surveillance powers during the pandemic.
The interviews below have been lightly edited for length and clarity.
Europe and the UK
Dr Daniel Pauly, technology, media & telecommunications partner at Linklaters in Frankfurt 
Data protection law has not been suspended, at least when it comes to Europe. So data protection law still applies — without any restrictions. This is the baseline on which we need to work and from which we need to start. Then we need to differentiate between what the government can do and what employers can do in particular.
It’s very important to understand that when we look at governments they do have the means to allow themselves a certain use of data. Because there are opening clauses, flexibility clauses, in particular in the GDPR, when it comes to public health concerns, cross-border threats.
By using the legislative process they may introduce further powers. To give you one example, what the German government did to respond is create a special law — the coronavirus notification regulation. We already have in place a law governing the use of personal data in respect of certain serious infections, and what they did is simply add the coronavirus infection to that list, which now means that hospitals and doctors must notify the competent authority of any COVID-19 infection.
This is pretty far reaching. They need to transmit names, contact details, sex, date of birth and many other details to allow the competent authority to gather that data and to analyze that data.
Another important topic in that field is the use of telecommunications data — in particular mobile phone data. Efficient use of that data might be one of the reasons why they obviously were quite successful in China with reducing the threat from the virus.
In Europe the government may not simply use mobile phone data and movement data — it has to be anonymized first, and this is what has happened in Germany and other European jurisdictions, including the UK: anonymized mobile phone data has been handed over to organizations who analyze that data to get a better view of how people behave, how people move, and what needs to be done to restrict further movement, or to restrict public life. This is the position of governments, at least in Europe and the UK.
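To make that distinction concrete, here is a minimal, purely illustrative sketch — not any telco’s or government’s actual pipeline, and every name and threshold in it is an assumption — of the kind of aggregation being described: pseudonymous location pings are reduced to coarse region-and-hour counts, and any cell covering too few people is suppressed, so what gets handed over describes crowds rather than individuals.

```python
# Illustrative sketch only - not an actual telco/government pipeline.
from datetime import datetime

# Hypothetical pseudonymous pings: (user_id, ISO timestamp, region)
pings = [
    ("u1", "2020-03-20T08:10:00", "Mitte"),
    ("u2", "2020-03-20T08:40:00", "Mitte"),
    ("u3", "2020-03-20T09:05:00", "Kreuzberg"),
    ("u1", "2020-03-20T18:30:00", "Kreuzberg"),
]

MIN_GROUP_SIZE = 3  # assumed threshold: suppress cells with fewer distinct users

def aggregate(pings, threshold=MIN_GROUP_SIZE):
    """Count distinct users per (region, hour) and drop small cells."""
    cells = {}
    for user, ts, region in pings:
        hour = datetime.fromisoformat(ts).strftime("%Y-%m-%d %H:00")
        cells.setdefault((region, hour), set()).add(user)
    # Release only counts, never identifiers, and only for big-enough groups.
    return {cell: len(users) for cell, users in cells.items() if len(users) >= threshold}

print(aggregate(pings))  # with this toy data every cell is too small: {}
```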
Transparency obligations [related to government use of personal data] stem from the GDPR [General Data Protection Regulation]. When they would like to make use of mobile phone data, the relevant law is the ePrivacy directive. This is not as transparent as the GDPR, and the EU did not succeed in replacing that piece of legislation with a new regulation. So the ePrivacy directive again gives the various Member States, including the UK, the possibility to introduce further and more restrictive laws [for public health reasons].
[If Internet companies such as Google were to be asked by European governments to pass data on users for a coronavirus tracking purpose] it has to be taken into consideration that they have not included this in their records of processing activities — in their data protection notifications and information.
So, at least from a pure legal perspective, it would be a huge step — and I’m wondering whether it would be feasible without governments introducing special laws for that.
If [EU] governments were to make use of private companies to provide them with data which has not been collected for such purposes, that would be a huge step from the perspective of the GDPR at least. I’m not aware of something like this. I’ve certainly read there are discussions ongoing with Netflix about reducing net traffic, but I haven’t heard anything about making use of the data Google has.
I wouldn’t expect it in Europe — and particularly in Germany. Tracking people, tracking and monitoring what they are doing this is almost last resort — so I wouldn’t expect that in the next couple of weeks. And I hope then it’s over.
[So far], from my perspective, the European regulators have responded [to the coronavirus crisis] in a pretty reasonable manner by saying that, in particular, any response to the virus must be proportionate.
We still have that law in place and we need to consider that the data we’re talking about is health data — it’s the most protected data of all. Having said that there are some ways at least the GDPR is allowing the government and allowing employers to make use of that data. In particular when it comes to processing for substantial public interest. Or if it’s required for the purposes of preventive medicine or necessary for reasons of public interest.
So the legislator was wise enough to include clauses allowing the use of such data under certain circumstances and there are a number of supervisory authorities who already made public guidelines how to make use of these statutory permissions. And what they basically said was it always needs to be figured out on a case by case basis whether the data is really required in the specific case.
To give you an example, it was made clear that an employer may not ask an employee where he has been during his vacation — but he may ask: have you been in any of the risk areas? And then a yes or no is a sufficient answer. They do not need any further data. So it’s always [about approaching this in] a smart way — by being smart you get the information you need; it’s not that the flood gates have suddenly opened.
You really need to look at the specific case and see how to get the data you need. Usually it’s a yes or no which is sufficient in the particular case.
The US
Caitlin Potratz Metcalf, senior U.S. associate at Linklaters and a Certified Information Privacy Professional (CIPP/US)
Even though you don’t have a structured privacy framework in the US — or one specific regulator that covers privacy — you’ve got some of the same issues. The FTC [Federal Trade Commission] will go after companies that take any action that is inconsistent with their privacy policies, as that would be misleading to consumers. Its initial focus is on consumer protection, not privacy, but in the last couple of years it has been wearing two hats. So there is a focus on privacy, even though we don’t have a national privacy law [equivalent in scope to the EU’s GDPR], but it’s coming from a consumer protection point of view.
So, for example, the FCC back in February actually announced potential sanctions against four major telecoms companies in the US with respect to sharing data related to cell phone tracking — it wasn’t the geolocation in an app but actually pinging off cell towers — and sharing that data with third parties without proper safeguards, because that wasn’t disclosed in their privacy policies.
They haven’t actually issued those fines, but it was announced that they may pursue a $208M total fine against these four companies: AT&T, Verizon*, T-Mobile, Sprint… So they do take very seriously how that data is safeguarded and how it’s being shared. And the fact that we have a state of emergency doesn’t change that emphasis on consumer protection.
You’ll see the same is true for the Department of Health and Human Services (HHS) — that’s responsible for any medical or health data.
That is really limited to entities that are covered entities under HIPAA [Health Insurance Portability and Accountability Act] or their business associates. So it doesn’t apply to everybody across the board. But if you are a hospital, a health plan provider — whether you’re an employer with a group health plan or an insurer — or a business associate supporting one of those covered entities, then you have to comply with HIPAA to the extent you’re handling protected health information. And that’s a bit narrower than the definition of personal data that you’d have under GDPR.
So you’re really looking at identifying information for that patient: their medical status, their birth date, address, things like that that might be very identifiable and related to the person. But you could share things that are more general. For example: a middle-aged man from this county has tested positive for COVID, is being treated at XYZ facility, and his condition is stable — or his condition is critical. So you could share that level of detail — but not further.
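As a rough illustration of that “share the general, withhold the identifiable” point, here is a minimal sketch — not legal guidance and not an HHS-specified method; the field names and the age banding are assumptions made up for the example — of generalizing a record before disclosure:

```python
# Illustrative sketch only - not legal guidance or an HHS-defined procedure.
from datetime import date

record = {
    "name": "Jane Doe",                    # direct identifier: never disclosed
    "dob": date(1972, 5, 14),              # generalized to an age band
    "address": "12 Main St, Springfield",  # dropped in favor of a county
    "county": "Greene County",
    "diagnosis": "COVID-19 positive",
    "status": "stable",
}

def minimum_necessary(rec, today=date(2020, 3, 20)):
    """Return only the generalized fields needed to describe the case."""
    age = (today - rec["dob"]).days // 365
    age_band = f"{(age // 10) * 10}s"      # e.g. 47 -> "40s"
    return {
        "age_band": age_band,
        "county": rec["county"],
        "diagnosis": rec["diagnosis"],
        "status": rec["status"],
    }

print(minimum_necessary(record))
# {'age_band': '40s', 'county': 'Greene County', 'diagnosis': 'COVID-19 positive', 'status': 'stable'}
```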
And so HHS in February had issued a bulletin stressing that you can’t set aside the privacy and security safeguards under HIPAA during an emergency. They stressed to all covered entities that you still have to comply with the law — sanctions are still in place. And to the extent that you do have to disclose some of the protected health information, it has to be to the minimum extent necessary. That can be disclosed either to other hospitals or to a regulator in order to help stem the spread of COVID, and also in order to provide treatment to a patient. So they listed a couple of different exceptions for how you can share that information, but really stressing the minimum necessary.
The same would be true for an employer — like one with a group health plan — if they’re trying to share information about employees, but it’s going to be very narrow in what they can actually share. And they can’t just cite as an exception that it’s for the public health interest. You don’t necessarily have to disclose what country they’ve been to; it’s just: have they been to a region that’s on a restricted list for travel? So it’s finding creative ways to relay the necessary information you need, and if there’s anything less intrusive you’re required to go that route.
That said, just last week HHS also issued another bulletin saying that it would waive HIPAA sanctions and penalties during the nationwide public health emergency. But it was only directed at hospitals — so it doesn’t apply to all covered entities.
They also issued another bulletin saying that they would relax restrictions on sharing data using electronic means. There are normally very heightened restrictions on how you can share data electronically when it relates to medical and health information, and so this was allowing doctors to communicate with patients by FaceTime or video chat and other methods that may not be encrypted or secure. So they’re giving a waiver, or just softening some of the restrictions, related to transferring health data electronically.
So you can see it’s an evolving situation, but they’ve still taken a very reserved and kind of conservative approach — really emphasizing that you do need to comply with your obligation to protect health data. So that’s where you see the strongest implementation. And then the FTC coming at it from a consumer protection point of view.
Going back to the point you made earlier about Google sharing data [with governments] — you could get there, it just depends on how their privacy policies are structured.
In terms of tracking individuals, we don’t have a national statute like GDPR that would prevent that, but it would also be very difficult to anonymize that data because it’s so tied to individuals — it’s like your DNA. You can map a person leaving home, going to work or school, going to a doctor’s office, coming back home — it really is very sensitive information, and because of all the specific data points it’s very difficult to anonymize it and provide it in a format that wouldn’t violate someone’s privacy without their consent. And so while you may not need full consent in the US, you would still need to have notice and transparency about the policies.
Then it would be slightly different if you’re a California resident — under the new California law [CCPA] there’s a greater degree of disclosure required, and individuals must be given the opportunity to opt out if you were to share their information. So in that case, where the telecoms companies are potentially going to be sued by the FCC for sharing data with third parties, that in particular would also violate the new California law if consumers weren’t given the opportunity to opt out of having their information sold.
So there’s a lot of different puzzle pieces that fit together since we have a patchwork quilt of data protection — depending on the different state and federal laws.
The government, I guess, could issue other mandates or regulations [to requisition telco tracking data for a COVID-related public health purpose] — I don’t know that they will. I would envisage more of a call to arms requesting support and assistance from the private sector. Not a mandate that you must share your data, given the way our government is structured. Unless things get incredibly dire I don’t really see a mandate to companies that they have to share certain data in order to be able to track patients.
[If Google makes use of health-related searches/queries to enrich user profiles it uses for commercial purposes] that in and of itself wouldn’t be protected health information.
Google is not a [HIPAA] covered entity. And depending on what type of support it’s providing for covered entities, it may in limited circumstances be considered a business associate that could be subject to HIPAA, but in the context of just collecting data on consumers it wouldn’t be governed by that.
So as long as it’s not doing anything outside the scope of what’s already in its privacy policies then it’s fine — the fact that it’s collecting data based on searches that you run on Google should be in the privacy policy anyway. It doesn’t need to be specific to the type of search that you’re running. So the fact that you’re looking up how to get COVID testing or treatment, or what the symptoms of COVID are, things like that, can all be tied to the data [it holds on users] and enriched. And that can also be shared and sold to third parties — unless you’re a California resident. They have a separate privacy policy for California residents… They just have to be consistent with their privacy policy.
The interesting thing to me is maybe the approach that Asia has taken — where they have a lot more influence over the commercial sector and data tracking — and so you actually have the regulator stepping in and doing more tracking, not just private companies. But private companies are able to provide tracking information.
You see it actually with Uber. They’ve issued additional privacy notices to consumers — saying that to the extent we become aware of a passenger that has had COVID or a driver, we will notify people who have come into contact with that Uber over a given time period. They’re trying to take the initiative to do their own tracking to protect workers and consumers.
And they can do that — they just have to be careful about how much detail they share about personal information. Not naming names of who was impacted [but rather saying something like] ‘in the last 24 hours you may have ridden in an Uber that was impacted or known to have an infected individual in the Uber’.
[When it comes to telehealth platforms and privacy protections] it depends if they’re considered a business associate of a covered entity. They may not be a covered entity themselves, but if they are a business associate supporting a covered entity — for example a hospital, a clinic, or an insurer sharing that data and relying on a telehealth platform — then in that context they would be governed by some of the same privacy and security regulations under HIPAA.
Some of them are slightly different for a business associate compared to a covered entity but generally you step in the shoes of the covered entity if you’re handling the covered entity’s data and have the same restrictions apply to you.
Aggregate data wouldn’t be considered protected health information — so they could [for example] share a symptom heat map that doesn’t identify specific individuals or patients and their health data.
[But] standalone telehealth apps that are collecting data directly from the consumer are not covered by HIPAA.
That’s actually a big loophole in terms of consumer protection, privacy protections related to health data. You have the same issue for all the health fitness apps — whether it’s your fitbit or other health apps or if you’re pregnant and you have an app that tracks your maternity or your period or things like that. Any of that data that’s collected is not protected.
The only protections you have are whatever disclosures are in the privacy policies, and in them having to be transparent and act within that privacy policy. If they don’t, they can face an enforcement action by the FTC, but that is not regulated by the Department of Health and Human Services under HIPAA.
So it’s a very different approach than under GDPR which is much more comprehensive.
That’s not to say in the future we might see a tightening of restrictions on that but individuals are freely giving that information — and in theory should read the privacy policy that’s provided when you log into the app. But most users probably don’t read that and then that data can be shared with other third parties.
They could share it with a regulator, or they could sell it to other third parties, so long as they have the proper disclosure that they may sell your personal information or share it with third parties. It depends on how their privacy policy is crafted — so long as it covers those specific actions. And for California residents it’s a more specific test — there are more disclosures that are required.
For example the type of data that you’re collecting, the purpose that you’re collecting it for, how you intend to process that data, who you intend to share it with and why. So it’s tightened for California residents but for the rest of the US you just have to be consistent with your privacy policy and you aren’t required to have the same level of disclosures.
More sophisticated, larger companies, though, definitely are already complying with GDPR — or endeavouring to comply with the California law — and so they have more sophisticated, detailed privacy notices than are maybe required by law in the US. But they’re kind of operating on a global platform and trying to have a global privacy policy.
*Disclosure: Verizon is TechCrunch’s parent company

Clearview said its facial recognition app was only for law enforcement as it courted private companies

After Clearview AI claimed that it would only sell its controversial facial recognition software to law enforcement agencies, a new report suggests the company is less than discerning about its client base. According to BuzzFeed News, the small, secretive company looks to have shopped its technology far and wide. While Clearview counts ICE, the U.S. Attorney’s Office for the Southern District of New York and the retail giant Macy’s among its paying customers, many more private companies are testing the technology through 30-day free trials. Non-law enforcement entities that appeared on Clearview’s client list include Walmart, Eventbrite, the NBA, Coinbase, Equinox, and many others.
According to the report, even if a company or organization has no formal relationship with Clearview, its individual employees might be testing the software. “In some cases… officials at a number of those places initially had no idea their employees were using the software or denied ever trying the facial recognition tool,” BuzzFeed News reports.
In one example, the NYPD denied a relationship with Clearview even as internal logs showed that as many as 30 officers within the department had conducted 11,000 searches through the software.
A week ago, Clearview’s CEO Hoan Ton-That was quoted on Fox Business stating that his company’s technology is “strictly for law enforcement” — a claim the company’s budding client list appears to contradict.
“This list, if confirmed, is a privacy, security, and civil liberties nightmare,” ACLU Staff Attorney Nathan Freed Wessler said of the revelations. “Government agents should not be running our faces against a shadily assembled database of billions of our photos in secret and with no safeguards against abuse.”
On top of its reputation as an invasive technology, critics argue that facial recognition tech isn’t accurate enough to be used in the high-consequence settings it’s often touted for. Facial recognition software has notoriously struggled to accurately identify non-white, non-male faces, a phenomenon that undergirds arguments that biased data has the potential to create devastating real-world consequences.
Little is known about the technology that powers Clearview’s algorithms or their accuracy, beyond the fact that the company scrapes public images from many online sources, aggregates that data, and allows users to search it for matches. In light of Clearview’s reliance on photos from social networks, Facebook, YouTube, and Twitter have all issued the company cease-and-desist letters for violating their terms of use.
Clearview’s small pool of early investors includes the private equity firm Kirenaga Partners and famed investor and influential tech conservative Peter Thiel. Thiel, who sits on the board of Facebook, also co-founded Palantir, a data analytics company that’s become a favorite of law enforcement.

UCLA backtracks on plan for campus facial recognition tech

After expressing interest in processing campus security camera footage with facial recognition software, UCLA is backing down.
In a letter to Evan Greer of Fight for the Future, a digital privacy advocacy group, UCLA Administrative Vice Chancellor Michael Beck announced the institution would abandon its plans in the face of a backlash from its student body.
“We have determined that the potential benefits are limited and are vastly outweighed by the concerns of the campus community,” Beck wrote.
The decision, deemed a “major victory” for privacy advocates, came as students partnered with Fight for the Future to plan a national day of protest on March 2. UCLA’s interest in facial recognition was a controversial departure from many elite universities that confirmed they have no intention to implement the surveillance technology, including MIT, Brown, and New York University.
UCLA student newspaper the Daily Bruin reported on the school’s interest in facial recognition tech last month, as the university proposed the addition of facial recognition software in a revision of its security camera policy. According to the Daily Bruin, the technology would have been used to screen individuals from restricted campus areas and to identify anyone flagged with a “stay-away order” prohibiting them from being on university grounds. The proposal faced criticism in a January town hall meeting on campus with 200 attendees and momentum against the surveillance technology built from there.
“We hope other universities see that they will not get away with these policies,” Matthew William Richard, UCLA student and vice chair of UCLA’s Campus Safety Alliance, said of the decision. “… Together we can demilitarize and democratize our campuses.”

ACLU says it’ll fight DHS efforts to use app locations for deportations

The American Civil Liberties Union plans to fight newly revealed practices by the Department of Homeland Security which used commercially available cell phone location data to track suspected illegal immigrants.
“DHS should not be accessing our location information without a warrant, regardless of whether they obtain it by paying or for free. The failure to get a warrant undermines Supreme Court precedent establishing that the government must demonstrate probable cause to a judge before getting some of our most sensitive information, especially our cell phone location history,” said Nathan Freed Wessler, a staff attorney with the ACLU’s Speech, Privacy, and Technology Project.
Earlier today, The Wall Street Journal reported that Homeland Security, through its Immigration and Customs Enforcement (ICE) and Customs & Border Protection (CBP) agencies, was buying geolocation data from commercial entities to investigate suspects of alleged immigration violations.
The location data, which aggregators acquire from cellphone apps including games, weather, shopping, and search services, is being used by Homeland Security to detect undocumented immigrants and others entering the U.S. unlawfully, the Journal reported.
According to privacy experts interviewed by the Journal, since the data is publicly available for purchase, the government practices don’t appear to violate the law — despite being what may be the largest dragnet ever conducted by the U.S. government using the aggregated data of its citizens.
It’s also an example of how the commercial surveillance apparatus put in place by private corporations in democratic societies can be legally accessed by state agencies to create the same kind of surveillance networks used in more authoritarian countries like China, India, and Russia.

“This is a classic situation where creeping commercial surveillance in the private sector is now bleeding directly over into government,” Alan Butler, general counsel of the Electronic Privacy Information Center, a think tank that pushes for stronger privacy laws, told the newspaper.
Behind the government’s use of commercial data is a company called Venntel. Based in Herndon, Va., the company acts as a government contractor and shares a number of its executive staff with Gravy Analytics, a mobile advertising and marketing analytics company. In all, ICE and CBP have spent nearly $1.3 million on licenses for software that can provide location data for cell phones. Homeland Security says the data from these commercially available records is used to generate leads about border crossings and to detect human traffickers.
The ACLU’s Wessler has won these kinds of cases in the past. He successfully argued before the Supreme Court in the case of Carpenter v. United States that geographic location data from cellphones was a protected class of information and couldn’t be obtained by law enforcement without a warrant.

CBP explicitly excludes cell tower data from the information it collects from Venntel, a spokesperson for the agency told the Journal — in part because it has to under the law. The agency also said that it only accesses limited location data and that the data is anonymized.

However, anonymized data can be linked to specific individuals by correlating that anonymous cell phone information with the real-world movements of specific people, which can be easily deduced or tracked through other types of public records and publicly available social media.
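A toy sketch makes the point — everything here (IDs, block numbers, the directory) is invented for illustration, not a real re-identification tool: a trace with no name attached can often be matched to a person just by seeing where the device spends its nights and workdays and joining that against public records.

```python
# Illustrative sketch only - invented data, not a real re-identification tool.
from collections import Counter

# Hypothetical trace keyed only by an advertising ID: (ad_id, time, census block)
trace = [
    ("ad-123", "22:30", "block_7841"),  # night
    ("ad-123", "03:00", "block_7841"),  # night
    ("ad-123", "10:00", "block_2210"),  # workday
    ("ad-123", "14:00", "block_2210"),  # workday
]

# Hypothetical public/commercial records mapping people to home and work blocks.
directory = [
    {"name": "J. Smith", "home_block": "block_7841", "work_block": "block_2210"},
    {"name": "A. Jones", "home_block": "block_5555", "work_block": "block_2210"},
]

def infer_home_and_work(trace):
    """Most-visited block at night is likely home; during working hours, likely work."""
    night = Counter(b for _, t, b in trace if t >= "21:00" or t <= "06:00")
    day = Counter(b for _, t, b in trace if "08:00" <= t <= "18:00")
    return night.most_common(1)[0][0], day.most_common(1)[0][0]

home, work = infer_home_and_work(trace)
matches = [p["name"] for p in directory
           if p["home_block"] == home and p["work_block"] == work]
print(matches)  # ['J. Smith'] - the "anonymous" trace now points to one person
```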
ICE is already being sued by the ACLU over another potential privacy violation. Late last year the ACLU said that it was taking the government to court over the DHS agencies’ use of so-called “stingray” technology that spoofs a cell phone tower to determine someone’s location.


At the time, the ACLU cited a government oversight report in 2016 which indicated that both CBP and ICE collectively spent $13 million on buying dozens of stingrays, which the agencies used to “locate people for arrest and prosecution.”

London’s Met Police switches on live facial recognition, flying in face of human rights concerns

While EU lawmakers are mulling a temporary ban on the use of facial recognition to safeguard individuals’ rights, as part of a risk-focused plan to regulate AI, London’s Met Police has today forged ahead with deploying the privacy-hostile technology — flipping the switch on operational use of live facial recognition in the UK capital.
The deployment comes after a multi-year period of trials by the Met and police in South Wales.
The Met says its use of the controversial technology will be targeted to “specific locations… where intelligence suggests we are most likely to locate serious offenders”.
“Each deployment will have a bespoke ‘watch list’, made up of images of wanted individuals, predominantly those wanted for serious and violent offences,” it adds.
It also claims cameras will be “clearly signposted”, adding that officers deployed to the operation will hand out leaflets about the activity.
“At a deployment, cameras will be focused on a small, targeted area to scan passers-by,” it writes. “The technology, which is a standalone system, is not linked to any other imaging system, such as CCTV, body worn video or ANPR.”
The biometric system is being provided to the Met by Japanese IT and electronics giant, NEC.
In a press statement, assistant commissioner Nick Ephgrave claimed the force is taking a balanced approach to using the controversial tech.
“We all want to live and work in a city which is safe: the public rightly expect us to use widely available technology to stop criminals. Equally I have to be sure that we have the right safeguards and transparency in place to ensure that we protect people’s privacy and human rights. I believe our careful and considered deployment of live facial recognition strikes that balance,” he said.
London has seen a rise in violent crime in recent years, with murder rates hitting a ten-year peak last year.
The surge in violent crime has been linked to cuts to policing services — although the new Conservative government has pledged to reverse cuts enacted by earlier Tory administrations.
The Met says its hope is that the AI-powered tech will help it tackle serious crime, including serious violence, gun and knife crime and child sexual exploitation, and “help protect the vulnerable”.
However its phrasing is not a little ironic, given that facial recognition systems can be prone to racial bias, for example, owing to factors such as bias in data-sets used to train AI algorithms.
So in fact there’s a risk that police-use of facial recognition could further harm vulnerable groups who already face a disproportionate risk of inequality and discrimination.
Yet the Met’s PR doesn’t mention the risk of the AI tech automating bias.
Instead it takes pains to couch the technology as an “additional tool” to assist its officers.
“This is not a case of technology taking over from traditional policing; this is a system which simply gives police officers a ‘prompt’, suggesting “that person over there may be the person you’re looking for”, it is always the decision of an officer whether or not to engage with someone,” it adds.
While the use of a new tech tool may start with small deployments, as is being touted here, the history of software development underlines how the potential to scale is readily baked in.
A ‘targeted’ small-scale launch also prepares the ground for London’s police force to push for wider public acceptance of a highly controversial and rights-hostile technology via a gradual building out process. Aka surveillance creep.
On the flip side, the text of the draft of an EU proposal for regulating AI which leaked last week — floating the idea of a temporary ban on facial recognition in public places — noted that a ban would “safeguard the rights of individuals”. Although it’s not yet clear whether the Commission will favor such a blanket measure, even temporarily.
UK rights groups have reacted with alarm to the Met’s decision to ignore concerns about facial recognition.
Liberty accused the force of ignoring the conclusion of a report it commissioned during an earlier trial of the tech — which it says concluded the Met had failed to consider human rights impacts.
It also suggested such use would not meet key legal requirements.
“Human rights law requires that any interference with individuals’ rights be in accordance with the law, pursue a legitimate aim, and be ‘necessary in a democratic society’,” the report notes, suggesting the Met earlier trials of facial recognition tech “would be held unlawful if challenged before the courts”.

When the Met trialled #FacialRecognition tech, it commissioned an independent review of its use.
Its conclusions:
The Met failed to consider the human rights impact of the tech
Its use was unlikely to pass the key legal test of being “necessary in a democratic society”
— Liberty (@libertyhq) January 24, 2020

A petition set up by Liberty to demand a stop to facial recognition in public places has passed 21,000 signatures.
Discussing the legal framework around facial recognition and law enforcement last week, Dr Michael Veale, a lecturer in digital rights and regulation at UCL, told us that in his view the EU’s data protection framework, GDPR, forbids facial recognition by private companies “in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate”.
A UK man who challenged a Welsh police force’s trial of facial recognition has a pending appeal after losing the first round of a human rights challenge. Although in that case the challenge pertains to police use of the tech — rather than, as in the Met’s case, a private company (NEC) providing the service to the police.

EU lawmakers are eyeing risk-based rules for AI, per leaked white paper

The European Commission is considering a temporary ban on the use of facial recognition technology, according to a draft proposal for regulating artificial intelligence obtained by Euractiv.
Creating rules to ensure AI is ‘trustworthy and human’ has been an early flagship policy promise of the new Commission, led by president Ursula von der Leyen.
But the leaked proposal suggests the EU’s executive body is in fact leaning towards tweaks of existing rules and sector/app specific risk-assessments and requirements, rather than anything as firm as blanket sectoral requirements or bans.
The leaked Commission white paper floats the idea of a three-to-five-year period in which the use of facial recognition technology could be prohibited in public places — to give EU lawmakers time to devise ways to assess and manage risks around the use of the technology, such as to people’s privacy rights or the risk of discriminatory impacts from biased algorithms.
“This would safeguard the rights of individuals, in particular against any possible abuse of the technology,” the Commission writes, adding that: “It would be necessary to foresee some exceptions, notably for activities in the context of research and development and for security purposes.”
However the text raises immediate concerns about imposing even a time-limited ban — which is described as “a far-reaching measure that might hamper the development and uptake of this technology” — and the Commission goes on to state that its preference “at this stage” is to rely on existing EU data protection rules, aka the General Data Protection Regulation (GDPR).
The white paper contains a number of options the Commission is still considering for regulating the use of artificial intelligence more generally.
These range from voluntary labelling; to imposing sectorial requirements for the public sector (including on the use of facial recognition tech); to mandatory risk-based requirements for “high-risk” applications (such as within risky sectors like healthcare, transport, policing and the judiciary, as well as for applications which can “produce legal effects for the individual or the legal entity or pose risk of injury, death or significant material damage”); to targeted amendments to existing EU product safety and liability legislation.
The proposal also emphasizes the need for an oversight governance regime to ensure rules are followed — though the Commission suggests leaving it open to Member States to choose whether to rely on existing governance bodies for this task or create new ones dedicated to regulating AI.
Per the draft white paper, the Commission says its preference for regulating AI is option 3 combined with options 4 and 5: aka mandatory risk-based requirements on developers (of whatever sub-set of AI apps are deemed “high-risk”) that could result in some “mandatory criteria”, combined with relevant tweaks to existing product safety and liability legislation, and an overarching governance framework.
Hence it appears to be leaning towards a relatively light-touch approach, focused on “building on existing EU legislation” and creating app-specific rules for a sub-set of “high-risk” AI apps/uses — and which likely won’t stretch to even a temporary ban on facial recognition technology.
Much of the white paper is also taken up with discussion of strategies for “supporting the development and uptake of AI” and “facilitating access to data”.
“This risk-based approach would focus on areas where the public is at risk or an important legal interest is at stake,” the Commission writes. “This strictly targeted approach would not add any new additional administrative burden on applications that are deemed ‘low-risk’.”
EU commissioner Thierry Breton, who oversees the internal market portfolio, expressed resistance to creating rules for artificial intelligence last year — telling the EU parliament then that he “won’t be the voice of regulating AI“.
For “low-risk” AI apps, the white paper notes that provisions in the GDPR which give individuals the right to receive information about automated processing and profiling, and set a requirement to carry out a data protection impact assessment, would apply.
Albeit the regulation only defines limited rights and restrictions over automated processing — in instances where there’s a legal or similarly significant effect on the people involved. So it’s not clear how extensively it would in fact apply to “low-risk” apps.
If it’s the Commission’s intention to also rely on GDPR to regulate higher risk uses — such as, for example, police forces’ use of facial recognition tech — instead of creating a more explicit sectoral framework to restrict their use of highly privacy-hostile AI technologies, it could exacerbate an already confusing legislative picture where law enforcement is concerned, according to Dr Michael Veale, a lecturer in digital rights and regulation at UCL.
“The situation is extremely unclear in the area of law enforcement, and particularly the use of public private partnerships in law enforcement. I would argue the GDPR in practice forbids facial recognition by private companies in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate. However, the merchants of doubt at facial recognition firms wish to sow heavy uncertainty into that area of law to legitimise their businesses,” he told TechCrunch.
“As a result, extra clarity would be extremely welcome,” Veale added. “The issue isn’t restricted to facial recognition however: Any type of biometric monitoring, such as voice or gait recognition, should be covered by any ban, because in practice they have the same effect on individuals.”
An advisory body set up to advise the Commission on AI policy set out a number of recommendations in a report last year — including suggesting a ban on the use of AI for mass surveillance and social credit scoring systems of citizens.
But its recommendations were criticized by privacy and rights experts for falling short by failing to grasp wider societal power imbalances and structural inequality issues which AI risks exacerbating — including by supercharging existing rights-eroding business models.
In a paper last year Veale dubbed the advisory body’s work a “missed opportunity” — writing that the group “largely ignore infrastructure and power, which should be one of, if not the most, central concern around the regulation and governance of data, optimisation and ‘artificial intelligence’ in Europe going forwards”.

ACLU sues Homeland Security over ‘stingray’ cell phone surveillance

One of the largest civil liberties groups in the U.S. is suing two Homeland Security agencies for failing to turn over documents it requested as part of a public records request about a controversial cell phone surveillance technology.
The American Civil Liberties Union filed suit against Customs & Border Protection (CBP) and Immigration & Customs Enforcement (ICE) in federal court on Wednesday after the organization claimed the agencies “failed to produce records” relating to cell site simulators — or “stingrays.”
Stingrays impersonate cell towers to trick cell phones into connecting to them, allowing their operators to collect unique identifiers from those devices and determine their location. The devices are used for surveillance, but also ensnare all other devices in their range. It’s believed newer, more advanced devices can intercept all the phone calls and text messages in range.
A government oversight report in 2016 said both CBP and ICE collectively spent $13 million on buying dozens of stingrays, which the agencies used to “locate people for arrest and prosecution,” the ACLU said.
But little else is known about stingray technology because the cell phone snooping technology is sold exclusively to police departments and federal agencies under strict non-disclosure agreements with the device manufacturer.
The ACLU filed a Freedom of Information Act request in 2017 to learn more about the technology and how it’s used, but both agencies failed to turn over any documents, it said.
The civil liberties organization said there is evidence to suggest that records exist, but has “exhausted all administrative remedies” to obtain the documents. Now it wants the courts to compel the agencies to turn over the records, “not only to shine a light on the government’s use of powerful surveillance technology in the immigration context, but also to assess whether its use of this technology complies with constitutional and legal requirements and is subject to appropriate oversight and control,” the filing said.
The group wants the agencies’ training materials and guidance documents, and records to show where and when stingrays were deployed across the United States.
CBP spokesperson Nathan Peeters said the agency does not comment on pending litigation as a matter of policy. A spokesperson for ICE did not comment.


DHS wants to expand airport face recognition scans to include US citizens

Homeland Security wants to expand facial recognition checks for travelers arriving in and departing the U.S. to also include citizens, who had previously been exempt from the mandatory checks.
In a filing, the department has proposed that all travelers — not just foreign nationals or visitors — will have to complete a facial recognition check before they are allowed to enter the U.S., and also before they leave the country.
Facial recognition for departing flights has increased in recent years as part of Homeland Security’s efforts to catch visitors and travelers who overstay their visas. The department, whose responsibility is to protect the border and control immigration, has a deadline of 2021 to roll out facial recognition scanners to the largest 20 airports in the United States, despite facing a rash of technical challenges.
But although there may not always be a clear way to opt-out of facial recognition at the airport, U.S. citizens and lawful permanent residents — also known as green card holders — have been exempt from these checks, the existing rules say.
Now, the proposed rule change to include citizens has drawn ire from one of the largest civil liberties groups in the country.
“Time and again, the government told the public and members of Congress that U.S. citizens would not be required to submit to this intrusive surveillance technology as a condition of traveling,” said Jay Stanley, a senior policy analyst at the American Civil Liberties Union.
“This new notice suggests that the government is reneging on what was already an insufficient promise,” he said.
“Travelers, including U.S. citizens, should not have to submit to invasive biometric scans simply as a condition of exercising their constitutional right to travel. The government’s insistence on hurtling forward with a large-scale deployment of this powerful surveillance technology raises profound privacy concerns,” he said.
Citing a data breach of close to 100,000 license plate and traveler images in June as well as concerns about a lack of sufficient safeguards to protect the data, Stanley said the government “cannot be trusted” with this technology and that lawmakers should intervene.
A spokesperson for Homeland Security did not immediately comment when reached.


Another US court says police cannot force suspects to turn over their passwords

The highest court in Pennsylvania has ruled that the state’s law enforcement cannot force suspects to turn over the passwords that would unlock their devices.
The state’s Supreme Court said compelling a password from a suspect violates the Fifth Amendment, a constitutional protection against self-incrimination.
It’s not a surprising ruling, given that other state and federal courts have almost always come to the same conclusion. The Fifth Amendment grants anyone in the U.S. the right to remain silent, which includes the right not to turn over information that could incriminate them in a crime. These days, those protections extend to the passcodes that only a device owner knows.
But the ruling is not expected to affect the ability of police to force suspects to use their biometrics — like their face or fingerprints — to unlock their phone or computer.
Because your passcode is stored in your head and your biometrics are not, prosecutors have long argued that police can compel a suspect to unlock a device with their biometrics, which they say are not constitutionally protected. The court did not address biometrics; in a footnote of the ruling, it said it “need not address” the issue, blaming the U.S. Supreme Court for creating “the dichotomy between physical and mental communication.”
Peter Goldberger, president of the ACLU of Pennsylvania, who presented the arguments before the court, said it was “fundamental” that suspects have the right “to avoid self-incrimination.”
Despite the spate of rulings in recent years, law enforcement have still tried to find their way around compelling passwords from suspects. The now-infamous Apple-FBI case saw the federal agency try to force the tech giant to rewrite its iPhone software in an effort to beat the password on the handset of the terrorist Syed Rizwan Farook, who with his wife killed 14 people in his San Bernardino workplace in 2015. Apple said the FBI’s use of the 200-year-old All Writs Act would be “unduly burdensome” by putting potentially every other iPhone at risk if the rewritten software leaked or was stolen.
The FBI eventually dropped the case without Apple’s help after the agency paid hackers to break into the phone.
Brett Max Kaufman, a senior staff attorney at the ACLU’s Center for Democracy, said the Pennsylvania ruling sends a message to other courts to follow in its footsteps.
“The court rightly rejects the government’s effort to create a giant, digital-age loophole undermining our time-tested Fifth Amendment right against self-incrimination,” he said. “The government has never been permitted to force a person to assist in their own prosecution, and the courts should not start permitting it to do so now simply because encrypted passwords have replaced the combination lock.”
“We applaud the court’s decision and look forward to more courts to follow in the many pending cases to be decided next,” he added.


Amnesty International latest to slam surveillance giants Facebook and Google as “incompatible” with human rights

Human rights charity Amnesty International is the latest to call for reform of surveillance capitalism — blasting the business models of “surveillance giants” Facebook and Google in a new report which warns the pair’s market dominating platforms are “enabling human rights harm at a population scale”.
“[D]espite the real value of the services they provide, Google and Facebook’s platforms come at a systemic cost,” Amnesty warns. “The companies’ surveillance-based business model forces people to make a Faustian bargain, whereby they are only able to enjoy their human rights online by submitting to a system predicated on human rights abuse. Firstly, an assault on the right to privacy on an unprecedented scale, and then a series of knock-on effects that pose a serious risk to a range of other rights, from freedom of expression and opinion, to freedom of thought and the right to non-discrimination.”
“This isn’t the internet people signed up for,” it adds.
What’s most striking about the report is the familiarity of the arguments. There is now a huge weight of consensus criticism of surveillance-based decision-making — from Apple’s own Tim Cook through scholars such as Shoshana Zuboff and Zeynep Tufekci to the United Nations — that’s itself been fed by a steady stream of reportage of the individual and societal harms flowing from platforms’ pervasive and consentless capturing and hijacking of people’s information for ad-based manipulation and profit.
This core power asymmetry is maintained and topped off by self-serving policy positions which at best fiddle around the edges of an inherently anti-humanitarian system, while platforms have become practiced in dark-arts PR — offering, at best, a pantomime ear to the latest data-enabled outrage making headlines, without ever actually changing the underlying system. That surveillance capitalism’s abusive modus operandi is now inspiring governments to follow suit — aping the approach by developing their own data-driven control systems to straitjacket citizens — is exceptionally chilling.
But while the arguments against digital surveillance are now very familiar what’s still sorely lacking is an effective regulatory response to force reform of what is at base a moral failure — and one that’s been allowed to scale so big it’s attacking the democratic underpinnings of Western society.
“Google and Facebook have established policies and processes to address their impacts on privacy and freedom of expression – but evidently, given that their surveillance-based business model undermines the very essence of the right to privacy and poses a serious risk to a range of other rights, the companies are not taking a holistic approach, nor are they questioning whether their current business models themselves can be compliant with their responsibility to respect human rights,” Amnesty writes.
“The abuse of privacy that is core to Facebook and Google’s surveillance-based business model is starkly demonstrated by the companies’ long history of privacy scandals. Despite the companies’ assurances over their commitment to privacy, it is difficult not to see these numerous privacy infringements as part of the normal functioning of their business, rather than aberrations.”
Needless to say, Facebook and Google do not agree with Amnesty’s assessment. But, well, they would say that, wouldn’t they?
Amnesty’s report notes there is now a whole surveillance industry feeding this beast — from adtech players to data brokers — while pointing out that the dominance of Facebook and Google, aka the adtech duopoly, over “the primary channels that most of the world relies on to engage with the internet” is itself another harm, as it lends the pair of surveillance giants “unparalleled power over people’s lives online”.
“The power of Google and Facebook over the core platforms of the internet poses unique risks for human rights,” it warns. “For most people it is simply not feasible to use the internet while avoiding all Google and Facebook services. The dominant internet platforms are no longer ‘optional’ in many societies, and using them is a necessary part of participating in modern life.”
Amnesty concludes that it is “now evident that the era of self-regulation in the tech sector is coming to an end” — saying further state-based regulation will be necessary. Its call there is for legislators to follow a human rights-based approach to rein in surveillance giants.
You can read the report in full here (PDF).

U.S. Senator demands answers from Amazon Ring over its police partnerships

Senator Edward J. Markey (D-Mass.) is looking for answers from Amazon about its doorbell camera Ring and the company’s relationships with law enforcement agencies around the U.S. In a letter published on Thursday, he tells the technology giant that partnerships like Ring’s raise “serious privacy and civil liberties concerns,” and asks Amazon to further detail the size and scope of its numerous deals with police.
The Senator’s attention was drawn to the issue following a recent report by The Washington Post. The report revealed that Ring had entered into some 400 video-sharing partnerships with U.S. police forces, which granted the police access to the footage from the homeowners’ internet-connected video cameras.
The police cannot tap into live or ongoing footage, and homeowners can choose to decline police requests, the report also said.
However, the partnerships raised concerns from privacy advocates who believe the program could threaten civil liberties, turn residents into informants, and subject innocent people to greater surveillance and risk, The Washington Post noted.
The same kind of network of surveillance cameras would draw far more scrutiny if the police or the government had installed it themselves; by working with Ring, they avoid being directly involved.
In a letter dated Thursday, September 5, 2019, Senator Markey questions the use of targeted language that encourages Ring owners to opt in to the video-sharing program with police, as well as the way Amazon actively courts law enforcement to increase its use of the Ring system. Sen. Markey additionally points out that Amazon seems to be increasingly working with U.S. police forces, as it did when marketing its facial recognition product, Rekognition.
“The scope and nature of Ring’s partnership with police forces raise additional civil liberties concerns. The integration of Ring’s network of cameras with law enforcement offices could easily create a surveillance network that places dangerous burdens on people of color and feeds racial anxieties in local communities,” Sen. Markey wrote. “I am particularly alarmed to learn that Ring is pursuing facial recognition technology with the potential to flag certain individuals as suspicious based on their biometric information,” he continued, referencing a patent Ring applied for last year that would catch “suspicious” people on camera.
“In light of evidence that existing facial recognition technology disproportionately misidentifies African Americans and Latinos, a product like this has the potential to catalyze racial profiling and harm people of color,” Markey said.
The Senator asks Amazon to detail how long it’s been asking users to share video footage with police, how those policies have changed, and which police departments it’s working with. He also asks for information about how this data is stored, what safeguards are in place, whether the police are sharing that footage with other entities, whether Rekognition capabilities are coming to Ring, and more.
Markey also wants Ring to review its consent prompts for video-sharing with experts to make sure it’s not manipulating consumers into these agreements with police.
And he wants to know if Ring has worked with any experts in civil liberties, criminal justice or other relevant fields to review its doorbells and social network Neighbors to ensure they don’t present unique threats to people of color or other populations.
Ring would not be the first to create a home for racial profiling among neighbors, though its connection to cameras makes it more of an actionable threat than other networks, like Nextdoor or Facebook Groups.
Nextdoor, for example, became well-known for issues around racial profiling, and eventually rolled out tools to try to stem the problem in its app. Crime-tracking app Citizen has also faced controversy for creating a state of paranoid hypervigilance among its users — something that has long-term effects on how people perceive their world and those they share it with. And anyone in a Facebook Group for their neighborhood knows there will at some point be a post about a “suspicious” person with little evidence of any wrongdoing.
Sen. Markey gave Amazon until September 26, 2019 to respond to his questions.


Facebook could face billions in potential damages as court rules facial recognition lawsuit can proceed

Facebook is facing exposure to billions of dollars in potential damages after a federal appeals court on Thursday rejected the company’s arguments to halt a class action lawsuit claiming it illegally collected and stored the biometric data of millions of users.
The class action lawsuit has been working its way through the courts since 2015, when Illinois Facebook users sued the company for alleged violations of the state’s Biometric Information Privacy Act by automatically collecting and identifying people in photographs posted to the service.
Now, thanks to a unanimous decision from the 9th U.S. Circuit Court of Appeals in San Francisco, the lawsuit can proceed.
The most significant language from the circuit court’s decision seems to be this:

 We conclude that the development of a face template using facial-recognition technology without consent (as alleged here) invades an individual’s private affairs and concrete interests. Similar conduct is actionable at common law.
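For readers unfamiliar with the term, a “face template” is essentially a numeric fingerprint of a face: a vector of measurements computed from a photo that can later be compared against other such vectors to decide whether two images show the same person. Here is a minimal, illustrative sketch of the idea using the open-source face_recognition Python library — not Facebook’s own system, and with placeholder file names — assuming one face is visible in each image:

    import face_recognition

    # Compute a 128-number "face template" (embedding) for the face found in each photo.
    known = face_recognition.face_encodings(face_recognition.load_image_file("profile_photo.jpg"))[0]
    candidate = face_recognition.face_encodings(face_recognition.load_image_file("new_upload.jpg"))[0]

    # Identification is a distance comparison between templates, not a pixel-by-pixel match.
    is_same_person = face_recognition.compare_faces([known], candidate, tolerance=0.6)[0]
    print(is_same_person)  # True if the second photo appears to show the same person

The claim in the suit turns on the creation and storage of such templates without consent, not on any single comparison.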

The American Civil Liberties Union came out in favor of the court’s ruling.
“This decision is a strong recognition of the dangers of unfettered use of face surveillance technology,” said Nathan Freed Wessler, staff attorney with the ACLU Speech, Privacy, and Technology Project, in a statement. “The capability to instantaneously identify and track people based on their faces raises chilling potential for privacy violations at an unprecedented scale. Both corporations and the government are now on notice that this technology poses unique risks to people’s privacy and safety.”
As April Glaser noted in “Slate”, Facebook already may have the world’s largest database of faces, and that’s something that should concern regulators and privacy advocates.
“Facebook wants to be able to certify identity in a variety of areas of life just as it has been trying to corner the market on identity verification on the web,” Siva Vaidhyanathan told Slate in an interview. “The payoff for Facebook is to have a bigger and broader sense of everybody’s preferences, both individually and collectively. That helps it not only target ads but target and develop services, too.”
That could apply to facial recognition technologies as well. Facebook, thankfully, doesn’t sell its facial recognition data to third parties, but it does allow companies to use its data to target certain populations. It also allows people to use its information for research and to develop new services that could target Facebook’s billion-strong population of users.
As our own Josh Constine noted in an article about the company’s planned cryptocurrency wallet, the developer community poses as much of a risk to how Facebook’s products and services are used and abused as Facebook itself.

The real risk of Facebook’s Libra coin is crooked developers

Facebook has said that it plans to appeal the decision. “We have always disclosed our use of face recognition technology and that people can turn it on or off at any time,” a spokesman said in an email to “Reuters”.
Now the lawsuit will go back to U.S. District Judge James Donato in San Francisco, who approved the class action last April, for a possible trial.
Under the privacy law in Illinois, negligent violations could be subject to damages of up to $1,000, and intentional violations of privacy are subject to up to $5,000 in penalties. For the potential 7 million Facebook users that could be included in the lawsuit, those figures could amount to real money.
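As a rough, back-of-the-envelope illustration of what “real money” means here — assuming, for simplicity, a single statutory award per class member, which is not necessarily how a court would ultimately compute BIPA damages:

    potential_class = 7_000_000     # Illinois Facebook users potentially covered by the suit
    negligent_award = 1_000         # BIPA statutory damages per negligent violation
    intentional_award = 5_000       # BIPA statutory damages per intentional or reckless violation

    print(f"${potential_class * negligent_award:,}")    # $7,000,000,000 at the low end
    print(f"${potential_class * intentional_award:,}")  # $35,000,000,000 at the high end

That simple multiplication is where the “billions in potential damages” in the headline comes from.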
“BIPA’s innovative protections for biometric information are now enforceable in federal court,” added Rebecca Glenberg, senior staff attorney at the ACLU of Illinois. “If a corporation violates a statute by taking your personal information without your consent, you do not have to wait until your data is stolen or misused to go to court. As our General Assembly understood when it enacted BIPA, a strong enforcement mechanism is crucial to hold companies accountable when they violate our privacy laws. Corporations that misuse Illinoisans’ sensitive biometric data now do so at their own peril.”
These civil damages could come on top of the fine Facebook has agreed to pay the U.S. government for violating its agreement with the Federal Trade Commission over its handling of private user data. That settlement would amount to one of the largest penalties ever levied against a U.S. technology company: Facebook is potentially on the hook for a $5 billion payout, a penalty that is still subject to approval by the Justice Department.

Trueface raises $3.7M to recognise that gun, as it’s being pulled, in real time

Globally, millions of cameras are deployed by companies and organizations every year. All you have to do is look up. Yes, there they are! But the petabytes of data collected by these cameras really only become useful after something untoward has occurred. They can very rarely influence an action in “real time”.
Trueface is a US-based computer vision company that turns camera data into so-called ‘actionable data’ using machine learning and AI, enabling its partners to perform facial recognition, threat detection, age and ethnicity detection, license plate recognition, emotion analysis and object detection. That means, for instance, recognising a gun as it’s pulled in a dime store. Yes folks, welcome to your brave new world.
The company has now raised $3.7M from Lavrock Ventures, Scout Ventures and Advantage Ventures to scale the team and grow partnerships and market share.
Trueface claims it can identify enterprises’ employees for access to a building, detect a weapon as it’s being wielded, or stop fraudulent spoofing attempts. Quite some claims.
However, it’s good enough for the US Air Force, which recently partnered with the company to enhance base security.
Trueface’s computer vision software was originally embedded in a hardware access control device: Chui, one of the first ‘intelligent doorbells’, which was covered by TechCrunch’s Anthony Ha in 2014.
Trueface has multiple solutions that run on an array of clients’ infrastructures, including a dockerized container, SDKs that partners can use to build their own solutions, and a plug-and-play solution that requires no code to get up and running.
The solution can be deployed in various scenarios, from fintech, healthcare and retail to humanitarian aid, age verification, digital identity verification and threat detection. Shaun Moore and Nezare Chafni are the co-founders, serving as CEO and CTO, respectively.
The computer vision market was valued at USD 9.28 billion in 2017 and is now set to reach a valuation of USD 48.32 billion by the end of 2023.
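For context, those two figures imply a compound annual growth rate of a little over 30 percent across the six-year span, as a quick back-of-the-envelope check shows (market-size figures as quoted above, in billions of US dollars):

    start, end, years = 9.28, 48.32, 6
    cagr = (end / start) ** (1 / years) - 1   # compound annual growth rate
    print(f"{cagr:.1%}")                      # roughly 31.7% per year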
San Francisco recently banned the use of facial recognition by city agencies. There are daily news stories about the privacy concerns around facial recognition, especially with regard to how China is using computer vision technology.
However, Trueface is only deployed ‘on-premise’ and includes features like ‘fleeting data’ and blurring for people who have not opted in. It’s good to see a company building in such controls from the word go.
However, it’s then up to the company you work for not to require you to sign a statement saying you are happy to have your face recognized. Interesting times, huh?
And if you want that job, well, that’s a whole other story, as I’m sure you can imagine.

UK High Court rejects human rights challenge to bulk snooping powers

Civil liberties campaign group Liberty has lost its latest challenge to controversial UK surveillance powers that allow state agencies to intercept and retain data in bulk.
The challenge fixed on the presence of so-called ‘bulk’ powers in the 2016 Investigatory Powers Act (IPA): A controversial capability that allows intelligence agencies to legally collect and retain large amounts of data, instead of having to operate via targeted intercepts.
The law even allows for state agents to hack into devices en masse, without per-device grounds for individual suspicion.
Liberty, which was supported in the legal action by the National Union of Journalists, argued that bulk powers are incompatible with European human rights law on the grounds that the IPA contains insufficient safeguards against abuse of these powers.
Two months ago it published examples of what it described as shocking failures by UK state agencies — such as not observing the timely destruction of material; and data being discovered to have been copied and stored in “ungoverned spaces” without the necessary controls — which it said showed MI5 had failed to comply with safeguards requirements since the IPA came into effect.
However the judges disagreed that the examples of serious flaws in spy agency MI5’s “handling procedures” — which the documents also show triggering intervention by the Investigatory Powers Commissioner — sum to a conclusion that the Act itself is incompatible with human rights law.
Rejecting the argument in their July 29 ruling, they found that oversight mechanisms the government baked into the legislation (such as the creation of the office of the Investigatory Powers Commissioner to conduct independent oversight of spy agencies’ use of the powers) provide sufficient checks on the risk of abuse, dubbing the regime “a suite of inter-locking safeguards”.
Liberty expressed disappointment with the ruling — and has said it will appeal.
In a statement the group told the BBC: “This disappointing judgment allows the government to continue to spy on every one of us, violating our rights to privacy and free expression.
“We will challenge this judgment in the courts, and keep fighting for a targeted surveillance regime that respects our rights. These bulk surveillance powers allow the state to Hoover up the messages, calls and web history of hordes of ordinary people who are not suspected of any wrongdoing.”
This is just one of several challenges brought against the IPA.
A separate challenge to bulk collection was lodged by Liberty, Big Brother Watch and others with the European Court of Human Rights (ECHR).
A hearing took place two years ago and the court subsequently found that the UK’s historical regime of bulk interception had violated human rights law. However it did not rule against bulk surveillance powers in principle — which the UK judges note in their judgement, writing that consequently: “There is no requirement for there to be reasonable grounds for suspicion in the case of any individual.”
Earlier this year Liberty et al were granted leave to appeal their case to the ECHR’s highest court. That case is still pending before the Grand Chamber.
