
Using AI responsibly to fight the coronavirus pandemic

Mark Minevich
Contributor


Mark Minevich is president of Going Global Ventures, an advisor at Boston Consulting Group, a digital fellow at IPsoft, and a leading global AI expert, digital cognitive strategist and venture capitalist.


Irakli Beridze
Contributor

Irakli Beridze is head of the Centre for AI and Robotics at the United Nations Interregional Crime and Justice Research Institute (UNICRI).

The emergence of the novel coronavirus has left the world in turmoil. COVID-19, the disease caused by the virus, has reached virtually every corner of the world, with the number of cases exceeding a million and the death toll surpassing 50,000 worldwide. It is a situation that will affect us all in one way or another.
With the imposition of lockdowns, limitations of movement, the closure of borders and other measures to contain the virus, the operating environment of law enforcement agencies and those security services tasked with protecting the public from harm has suddenly become ever more complex. They find themselves thrust into the middle of an unparalleled situation, playing a critical role in halting the spread of the virus and preserving public safety and social order in the process. In response to this growing crisis, many of these agencies and entities are turning to AI and related technologies for support in unique and innovative ways. Enhancing surveillance, monitoring and detection capabilities is high on the priority list.
For instance, early in the outbreak, Reuters reported a case in China wherein the authorities relied on facial recognition cameras to track a man from Hangzhou who had traveled in an affected area. Upon his return home, the local police were there to instruct him to self-quarantine or face repercussions. Police in China and Spain have also started to use technology to enforce quarantine, with drones being used to patrol and broadcast audio messages to the public, encouraging them to stay at home. People flying to Hong Kong airport receive monitoring bracelets that alert the authorities if they breach the quarantine by leaving their home.
In the United States, a surveillance company announced that its AI-enhanced thermal cameras can detect fevers, while in Thailand, border officers at airports are already piloting a biometric screening system using fever-detecting cameras.
Isolated cases or the new norm?
With the number of cases, deaths and countries on lockdown increasing at an alarming rate, we can assume that these will not be isolated examples of technological innovation in response to this global crisis. In the coming days, weeks and months of this outbreak, we will most likely see more and more AI use cases come to the fore.
While the application of AI can play an important role in seizing the reins in this crisis, and even safeguard officers and officials from infection, we must not forget that its use can raise very real and serious human rights concerns that can be damaging and undermine the trust placed in government by communities. Human rights, civil liberties and the fundamental principles of law may be exposed or damaged if we do not tread this path with great caution. There may be no turning back if Pandora’s box is opened.
In a public statement on March 19, the monitors for freedom of expression and freedom of the media for the United Nations, the Inter-American Commission for Human Rights and the Representative on Freedom of the Media of the Organization for Security and Co-operation in Europe issued a joint statement on promoting and protecting access to and free flow of information during the pandemic, and specifically took note of the growing use of surveillance technology to track the spread of the coronavirus. They acknowledged that there is a need for active efforts to confront the pandemic, but stressed that “it is also crucial that such tools be limited in use, both in terms of purpose and time, and that individual rights to privacy, non-discrimination, the protection of journalistic sources and other freedoms be rigorously protected.”
This is not an easy task, but a necessary one. So what can we do?
Ways to responsibly use AI to fight the coronavirus pandemic
Data anonymization: While some countries are tracking individual suspected patients and their contacts, Austria, Belgium, Italy and the U.K. are collecting anonymized data to study the movement of people in a more general manner. This option still provides governments with the ability to track the movement of large groups, but minimizes the risk of infringing data privacy rights.
Purpose limitation: Personal data that is collected and processed to track the spread of the coronavirus should not be reused for another purpose. National authorities should seek to ensure that the large amounts of personal and medical data are exclusively used for public health reasons. This is a concept already in force in Europe, within the context of the European Union’s General Data Protection Regulation (GDPR), but it’s time for this to become a global principle for AI.
Knowledge-sharing and open access data: António Guterres, the United Nations Secretary-General, has insisted that “global action and solidarity are crucial,” and that we will not win this fight alone. This is applicable on many levels, even for the use of AI by law enforcement and security services in the fight against COVID-19. These agencies and entities must collaborate with one another and with other key stakeholders in the community, including the public and civil society organizations. AI use cases and data should be shared and transparency promoted.
Time limitation: Although the end of this pandemic seems rather far away at this point in time, it will come to an end. When it does, national authorities will need to scale back their newly acquired monitoring capabilities. As Yuval Noah Harari observed in his recent article, “temporary measures have a nasty habit of outlasting emergencies, especially as there is always a new emergency lurking on the horizon.” We must ensure that these exceptional capabilities are indeed scaled back and do not become the new norm.
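On the data anonymization point above, one common pattern (an illustrative sketch only, not any particular government’s published method) is to strip identifiers from location pings and aggregate them into coarse region-and-hour counts, suppressing any cell too small to hide an individual. The field names and threshold below are assumptions made for the example:

```python
from collections import Counter

# Hypothetical input: one record per location ping, already stripped of user identifiers.
# Each record keeps only a coarse region code and the hour of the observation.
pings = [
    {"region": "region-01", "hour": "2020-03-30T08"},
    {"region": "region-01", "hour": "2020-03-30T08"},
    {"region": "region-02", "hour": "2020-03-30T09"},
]

K_THRESHOLD = 2  # suppress any (region, hour) cell smaller than this


def aggregate_movement(pings, k=K_THRESHOLD):
    """Count pings per (region, hour) and drop cells below the suppression threshold."""
    counts = Counter((p["region"], p["hour"]) for p in pings)
    return {cell: n for cell, n in counts.items() if n >= k}


# Only cells with enough observations survive; small cells are dropped entirely.
print(aggregate_movement(pings))
```

Even aggregation of this kind is not a silver bullet, since sparse cells and repeated queries can still leak information, which is why purpose and time limits matter alongside it.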
Within the United Nations system, the United Nations Interregional Crime and Justice Research Institute (UNICRI) is working to advance approaches to AI such as these. It has established a specialized Centre for AI and Robotics in The Hague and is one of the few international actors dedicated to specifically looking at AI vis-à-vis crime prevention and control, criminal justice, rule of law and security. It assists national authorities, in particular law enforcement agencies, to understand the opportunities presented by these technologies and, at the same time, to navigate the potential pitfalls associated with these technologies.
Working closely with International Criminal Police Organization (INTERPOL), UNICRI has set up a global platform for law enforcement, fostering discussion on AI, identifying practical use cases and defining principles for responsible use. Much work has been done through this forum, but it is still early days, and the path ahead is long.
While the COVID-19 pandemic has illustrated several innovative use cases, as well as the urgency for governments to do their utmost to stop the spread of the virus, it is important to not let consideration of fundamental principles, rights and respect for the rule of law be set aside. The positive power and potential of AI is real. It can help those embroiled in fighting this battle to slow the spread of this debilitating disease. It can help save lives. But we must stay vigilant and commit to the safe, ethical and responsible use of AI.
It is essential that, even in times of great crisis, we remain conscious of the duality of AI and strive to advance AI for good.

Divesting from one facial recognition startup, Microsoft ends outside investments in the tech

Microsoft is pulling out of an investment in an Israeli facial recognition technology developer as part of a broader policy shift to halt any minority investments in facial recognition startups, the company announced late last week.
The decision to withdraw its investment from AnyVision, an Israeli company developing facial recognition software, came as a result of an investigation into reports that AnyVision’s technology was being used by the Israeli government to surveil residents in the West Bank.
The investigation, conducted by former U.S. Attorney General Eric Holder and his team at Covington & Burling, confirmed that AnyVision’s technology was used to monitor border crossings between the West Bank and Israel, but did not “power a mass surveillance program in the West Bank.”
Microsoft’s venture capital arm, M12 Ventures, backed AnyVision as part of the company’s $74 million financing round, which closed in June 2019. Investors who continue to back the company include DFJ Growth, OG Technology Partners, LightSpeed Venture Partners, Robert Bosch GmbH, Qualcomm Ventures and Eldridge Industries.
Microsoft first staked out its position on how it would approach facial recognition technologies in 2018, when its president, Brad Smith, issued a statement calling on governments to come up with clear regulations around facial recognition in the U.S.
Smith’s calls for more regulation and oversight became more strident by the end of the year, when Microsoft issued a statement on its approach to facial recognition.
Smith wrote:
We and other tech companies need to start creating safeguards to address facial recognition technology. We believe this technology can serve our customers in important and broad ways, and increasingly we’re not just encouraged, but inspired by many of the facial recognition applications our customers are deploying. But more than with many other technologies, this technology needs to be developed and used carefully. After substantial discussion and review, we have decided to adopt six principles to manage these issues at Microsoft. We are sharing these principles now, with a commitment and plans to implement them by the end of the first quarter in 2019.
The principles that Microsoft laid out cover fairness, transparency, accountability, non-discrimination, notice and consent, and lawful surveillance.
Critics took the company to task for its investment in AnyVision, saying that the decision to back a company working with the Israeli government on wide-scale surveillance ran counter to the principles it had set out for itself.
Now, after determining that controlling how facial recognition technologies are deployed by its minority investments is too difficult, the company is suspending its outside investments in the technology.
“For Microsoft, the audit process reinforced the challenges of being a minority investor in a company that sells sensitive technology, since such investments do not generally allow for the level of oversight or control that Microsoft exercises over the use of its own technology,” the company wrote in a statement on its M12 Ventures website. “Microsoft’s focus has shifted to commercial relationships that afford Microsoft greater oversight and control over the use of sensitive technologies.”

India used facial recognition tech to identify 1,100 individuals at a recent riot

Law enforcement agencies in India used facial recognition to identify more than 1,100 individuals who took part in communal violence in the national capital last month, a top minister said in the lower house of the parliament on Wednesday.
In what is the first admission of its kind in the country, Amit Shah, India’s home minister, said law enforcement agencies deployed a facial recognition system and fed it with images from government-issued identity cards, including the 12-digit Aadhaar number issued to more than a billion Indians and driving licenses, “among other databases,” to identify alleged culprits in the communal violence in northeast Delhi on February 25 and 26.
“This is a software. It does not see faith. It does not see clothes. It only sees the face and through the face the person is caught,” said Shah, responding to an individual who had urged New Delhi to not drag innocent people into the facial surveillance.
The admission further demonstrates how the Indian government has rushed to deploy facial recognition technology in the absence of regulation overseeing its usage. Critics have urged the government to hold consultations and formulate a law before deploying the technology.
“The use of Aadhaar for this purpose without any judicial authorisation violates the judgement of the Supreme Court in KS Puttaswamy v. UoI (2019),” said New Delhi-based digital rights advocacy group Internet Freedom Foundation, which also questioned the sophistication of the facial recognition system.
The facial recognition system that the government used in Delhi was first acquired by the Delhi Police to identify missing children. In 2019, the system had an accuracy rate of 1% and it even failed to distinguish between boys and girls, the group said.
“All of this is being done without any clear underlying legal authority and is in clear violation of the Right to Privacy judgment (that the Indian apex court upheld in 2017),” said Apar Gupta, executive director at IFF. “Facial recognition technology is still evolving and the risks of such evolutionary tech being used in policing are significant,” said Gupta.
Several law enforcement agencies have been using facial recognition for years now. In January and early February, police in New Delhi and the northern state of Uttar Pradesh used the technology during protests against a new citizenship law that critics say marginalises Muslims.

R&D Roundup: Smart chips, dream logic and crowdsourcing space

I see far more research articles than I could possibly write up. This column collects the most interesting of those papers and advances, along with notes on why they may prove important in the world of tech and startups.
This week: crowdsourcing in space, vision on a chip, robots underground and under the skin, and other developments.
The eye is the brain
Computer vision is a challenging problem, and the perennial insult added to injury is how effortlessly humans handle it. Part of the reason is that in computers, the “eye” — a photosensitive sensor — merely collects information and relays it to a “brain,” or processing unit. In the human visual system, the eye itself does rudimentary processing before images are even sent to the brain, and when they do arrive, the task of breaking them down is split apart and parallelized in an amazingly effective manner.
The chip, divided into several sub-areas that specialize in detecting different shapes
Researchers at the Vienna University of Technology (TU Wien) have integrated neural network logic directly into the sensor, grouping pixels and subpixels into tiny pattern-recognition engines by individually tuning their sensitivity and carefully analyzing their output. In one demonstration described in Nature, the sensor was set up so that images of simplified letters falling on it would be recognized in nanoseconds because of their distinctive voltage response. That’s way, way faster than sending it off to a distant chip for analysis.
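The Nature demonstration is an analog device, with each photodiode’s tunable responsivity acting as a weight, but the gist can be sketched in software. In the toy NumPy version below, the letter patterns, weight values and training loop are all invented for illustration; only the idea of summing per-pixel “photocurrents” into class outputs mirrors the description above:

```python
import numpy as np

# Toy 3x3 "letter" patterns projected onto the sensor (1 = illuminated pixel).
letters = {
    "n": np.array([[1, 0, 1], [1, 1, 1], [1, 0, 1]], float).ravel(),
    "v": np.array([[1, 0, 1], [1, 0, 1], [0, 1, 0]], float).ravel(),
    "z": np.array([[1, 1, 1], [0, 1, 0], [1, 1, 1]], float).ravel(),
}

rng = np.random.default_rng(0)
# Each pixel's responsivity toward each output line plays the role of a weight.
weights = rng.normal(size=(9, 3))


def train(weights, epochs=200, lr=0.1):
    """Tune per-pixel responsivities so each letter maximizes its own output current."""
    targets = np.eye(3)
    for _ in range(epochs):
        for i, image in enumerate(letters.values()):
            currents = image @ weights                          # summed "photocurrents"
            probs = np.exp(currents) / np.exp(currents).sum()   # softmax readout
            weights += lr * np.outer(image, targets[i] - probs)  # simple gradient step
    return weights


weights = train(weights)
names = list(letters)
for name, image in letters.items():
    predicted = names[int(np.argmax(image @ weights))]
    print(f"{name} -> {predicted}")
```

In the hardware version the “readout” is simply which output line carries the largest current, which is why classification happens in nanoseconds rather than after a round trip to a separate processor.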

Connie Chan of Andreessen Horowitz discusses consumer tech’s winners and losers

Last week, I sat down with Connie Chan, a general partner with Andreessen Horowitz who focuses on investing in consumer tech. She joined the firm in 2011 after working at HP in China.
From her temporary offices located in a modest skyscraper with unobscured views of San Francisco, we talked about where she sees the biggest opportunities right now, along with how big of an impact fears over coronavirus could have on the startup industry — and for how long.
Our conversation has been edited for length. You can also find a longer version of our chat in podcast form.
TechCrunch: There’s so much money flowing into the Bay Area and startups generally from all over the world. What happens if that slows down because of the coronavirus?
Connie Chan: It’s interesting, I was just talking to a friend of mine who is an investor in Asia, in China. And she said that some industries are going to suffer significantly. Restaurants, for example, are hurting [along with] any store that relies on foot traffic [like] bookstores, and so forth. Yet you see a lot of companies also doing really well in this time. You’ll see grocery delivery as something that’s in high demand. Insurance is in very high demand. People are spending more time at home, so whether it’s games or streaming or whatever they’re doing at home is doing well. Lots of my counterparts in China are also taking all their pitches via video conference. They’re still doing work, but they’re all just working from home.
Where do you think we’ll see the biggest impact most immediately?

China Roundup: Amid coronavirus, tech firms offer ways to maintain China’s lifeblood

Hello and welcome back to TechCrunch’s China Roundup, a digest of recent events shaping the Chinese tech landscape and what they mean to people in the rest of the world. The coronavirus outbreak is having a devastating impact on people’s lives and the economy in China, but there is a silver lining: the epidemic may have benefited a few players in the technology industry as the population remains indoors.
The SARS (severe acute respiratory syndrome) virus that infected thousands and killed hundreds in China back in 2002 is widely seen as a catalyst for the country’s fledgling e-commerce industry. People staying indoors to avoid contracting the deadly virus flocked to shop online. Alibaba’s Taobao, an eBay-like digital marketplace, notably launched at the height of the SARS outbreak.
“Although it sickened thousands and killed almost eight hundred people, the outbreak had a curiously beneficial impact on the Chinese internet sector, including Alibaba,” wrote China internet expert Duncan Clark in his biography of Alibaba founder Jack Ma.
Nearly two decades later, as the coronavirus outbreak sends dozens of Chinese cities into various kinds of lockdown, tech giants are again responding to fill consumers’ needs amid the crisis. Others are providing digital tools to help citizens and the government battle the disease.
According to data from analytics company QuestMobile, Chinese people’s average time spent on the mobile internet climbed from 6.1 hours a day in January, to 6.8 hours a day during Chinese New Year, to an astounding daily usage of 7.3 hours post-holiday as businesses delay returning to the office or resuming on-premises operation.
Here’s a look at what some of them are offering.
Remote work apps: Boom and crash
China’s enterprise software industry has been slow to take off in comparison to the West, though it’s slowly picking up steam as the country’s consumer-facing industry becomes crowded, prompting investors and tech behemoths to bet on more business-oriented services. Now remote work apps are witnessing a boom as millions are confined to working from home.
The online education sector is experiencing a similar uptick as schools nationwide are suspended, according to data from research firm Sensor Tower.

The main players trying to tap the nationwide work-from-home practice are Alibaba’s DingTalk, Tencent’s WeChat Work, and ByteDance’s Lark. App rankings compiled by Sensor Tower show that all three apps experienced significant year-over-year growth in downloads from January 22 through February 20, though their user bases vary greatly:
DingTalk: 1,446%
Lark: 6,085%
WeChat Work: 572%
DingTalk, launched in 2014 by an Alibaba team after its failed attempt to take on WeChat, shot up to the most-downloaded free iOS app in China in early February. The app claimed in August that more than 10 million enterprises and over 200 million individual users had registered on its platform.
DingTalk became China’s most-downloaded free iOS app amid the coronavirus outbreak. Data: Sensor Tower
WeChat’s enterprise version WeChat Work, born in 2016, trailed closely behind DingTalk, rising to second place among free iOS apps in the same period. In December, WeChat Work announced it had logged more than 2.5 million enterprises and some 60 million active users.
Lark, launched only in 2019, pales in comparison to its two predecessors, hovering around the 300th mark in early February. Nonetheless, Lark appears to be making a big user acquisition push recently by placing ads on its sibling Douyin, TikTok’s China version. Douyin has emerged as a marketing darling as advertisers rush to embrace vertical, short videos, and Lark can certainly benefit from exposure on the red-hot app. WeChat, despite its colossal one-billion monthly user base, has remained restrained in ad monetization.
The question is whether the sudden boom will develop into a sustainable growth trend for these apps. System crashes on DingTalk and WeChat Work due to user influx at the start of the remote working regime might suggest that neither had projected such traffic volumes on its growth curve. After all, most businesses are expected to resume in-person communication when safety conditions are ensured.
Indeed, the work-from-home model has been poorly received by employees frustrated with intrusive company rules like “keep your webcam on while working from home.” In a more unexpected turn, DingTalk suffered a backlash after it added tools to host online classes for students. Resentful that the app had spoiled their extended holiday, young users rushed to give DingTalk one-star ratings.
Face mask algorithms
To curb the spread of the virus, local governments in China have mandated that people wear masks in public, posing a potential challenge to the country’s omnipresent facial recognition-powered identity checks. But the technologies necessary to handle the situation, such as iris scanning, are already in place.
Travelers I spoke to reported they are now able to pass through train station security without taking their masks off — which could sound an alarm for privacy-conscious individuals. But it’s unclear whether the change is due to more advanced forms of biometric technology or whether the authorities have temporarily loosened checks on low-risk individuals. People still have to scan their ID cards before getting their biometrics verified, and travelers whose identities have been flagged could trigger stricter screening, people familiar with China’s AI industry told me. They added that the latter case is more probable, as it will take time to implement a nationwide infrastructure upgrade.
Digital passes
Local governments have also introduced tools for people to attain digital records of their travel history, which has become some sort of permit to go about their daily life, be it returning to work, their apartment, or even the city they live in.
One example is the web-based app Close Contact Detector, developed by a state-owned company. Users can obtain a record of their travel history by submitting their names, ID numbers and phone numbers. So far the app has drawn more scorn than praise for its role in containing the virus, raising the questions: If the government already has a grip on people’s travel history, why didn’t it react earlier to restrict the free flow of travelers? Why did it introduce the service only a few weeks after the first big outbreak?
All of this could point to the challenge of collecting and consolidating citizen data across departments and regions, despite China’s ongoing efforts to encourage the use of social credits nationwide through the use of real-name registration and big data. The health crisis appears to have accelerated this data-unification process. The pressing question is how the government will utilize these data following the outbreak.

Eg migrants who’d been in Hubei slipped through the cracks while 10s of thousands Hubeiren outside the province are left stranded (what’s all that use of SIM card location tracking +face scans?) and SH gov’t late to disclose affected neighborhoods (data supposedly easy to attain)
— Rita Liao (@ritacyliao) February 12, 2020

Many of these digital permits are powered by WeChat on the merit of the messenger’s ubiquity and broad-ranging functions in Chinese society. In Shenzhen, where WeChat’s parent Tencent is headquartered, cars can only enter the city after the drivers use WeChat to scan a QR code hung by a drone — for the obvious reason of avoiding contact with checkpoint officers — and digitally file their travel history.
Photo: Xinhua News
Citizen reporting
As the fast-spreading virus fuels rumors, individual citizens are playing an active role in combating misinformation. Dxy.cn (丁香园), an online community targeting medical professionals, responded swiftly with a fact-checking feature dedicated to the coronavirus and a national map tracking the development of the outbreak in real time.
Yikuang, the brainchild of several independent developers and app review site Sspai.com, is one of the first WeChat-based services to map neighborhoods with confirmed cases using official data from local governments.
Young citizens have also joined in. A Shanghai-based high school senior and his peers launched a blog that provides Chinese summaries of coronavirus coverage from news organizations around the world.
Dining and entertainment
The nationwide lockdown is almost guaranteed to be a boon to online entertainment. The short video sector recorded 569 million daily active users in the post-holiday period, far exceeding the 492 million seen on a regular day, according to QuestMobile. Video streaming sites are gathering musicians to perform virtually, and movies are premiering online as the virus forces live venues and cinemas to shut.
Many Chinese cities have gone as far as to ban eating in restaurants during the epidemic, putting the burden on food and grocery delivery services. To ensure safety, delivery companies have devised ways to avoid human interaction, such as Meituan Dianping’s “contactless” solution, which is in effect a self-service cabinet that temporarily stores food orders awaiting customer pickup.

China’s food delivery company @meituan launched this “contactless” service that provides zero physical contact between customers and delivery folks amid #coronavirus pic.twitter.com/6BPXPPnI0K
— Keith Zhai (@QiZHAI) February 3, 2020

UCLA backtracks on plan for campus facial recognition tech

After expressing interest in processing campus security camera footage with facial recognition software, UCLA is backing down.
In a letter to Evan Greer of Fight for the Future, a digital privacy advocacy group, UCLA Administrative Vice Chancellor Michael Beck announced the institution would abandon its plans in the face of a backlash from its student body.
“We have determined that the potential benefits are limited and are vastly outweighed by the concerns of the campus community,” Beck wrote.
The decision, deemed a “major victory” for privacy advocates, came as students partnered with Fight for the Future to plan a national day of protest on March 2. UCLA’s interest in facial recognition was a controversial departure from many elite universities that confirmed they have no intention to implement the surveillance technology, including MIT, Brown, and New York University.
UCLA student newspaper the Daily Bruin reported on the school’s interest in facial recognition tech last month, as the university proposed the addition of facial recognition software in a revision of its security camera policy. According to the Daily Bruin, the technology would have been used to screen individuals from restricted campus areas and to identify anyone flagged with a “stay-away order” prohibiting them from being on university grounds. The proposal faced criticism in a January town hall meeting on campus with 200 attendees and momentum against the surveillance technology built from there.
“We hope other universities see that they will not get away with these policies,” Matthew William Richard, UCLA student and vice chair of UCLA’s Campus Safety Alliance, said of the decision. “… Together we can demilitarize and democratize our campuses.”

Europe sets out plan to boost data reuse and regulate “high risk” AIs

European Union lawmakers have set out a first bundle of proposals for a new digital strategy for the bloc, one that’s intended to drive digitalization across all industries and sectors — and enable what Commission president Ursula von der Leyen has described as “A Europe fit for the Digital Age”.
It could also be summed up as a ‘scramble for AI’, with the Commission keen to rub out barriers to the pooling of massive European data sets in order to power a new generation of data-driven services as a strategy to boost regional competitiveness vs China and the U.S.
Pushing for the EU to achieve technological sovereignty is a key plank of von der Leyen’s digital policy plan for the 27-Member State bloc.
Presenting the latest on her digital strategy to press in Brussels today, she said: “We want the digital transformation to power our economy and we want to find European solutions in the digital age.”
The top-line proposals are:
AI
Rules for “high risk” AI systems, such as those used in health, policing or transport, requiring that such systems be “transparent, traceable and guarantee human oversight”
A requirement that unbiased data is used to train high-risk systems so that they “perform properly, and to ensure respect of fundamental rights, in particular non-discrimination”
Consumer protection rules so authorities can “test and certify” data used by algorithms in a similar way to existing rules that allow for checks to be made on products such as cosmetics, cars or toys
A “broad debate” on the circumstances in which the remote use of biometric identification could be justified
A voluntary labelling scheme for lower risk AI applications
The creation of an EU governance structure to ensure a framework for compliance with the rules and avoid fragmentation across the bloc
Data
A regulatory framework covering data governance, access and reuse between businesses, between businesses and government, and within administrations to create incentives for data sharing, which the Commission says will establish “practical, fair and clear rules on data access and use, which comply with European values and rights such as personal data protection, consumer protection and competition rules” 
A push to make public sector data more widely available by opening up “high-value datasets” to enable their reuse to foster innovation
Support for cloud infrastructure platforms and systems to support the data reuse goals. The Commission says it will contribute to investments in European High Impact projects on European data spaces and trustworthy and energy efficient cloud infrastructures
Sectoral specific actions to build European data spaces that focus on specific areas such as industrial manufacturing, the green deal, mobility or health
The full data strategy proposal can be found here.
While the Commission’s white paper on AI “excellence and trust” is here.
Next steps will see the Commission taking feedback on the plan — as it kicks off public consultation on both proposals.
A final draft is slated by the end of the year after which the various EU institutions will have their chance to chip into (or chip away at) the plan. So how much policy survives for the long haul remains to be seen.
Tech for good
At a press conference following von der Leyen’s statement Margrethe Vestager, the Commission EVP who heads up digital policy, and Thierry Breton, commissioner for the internal market, went into some of the detail around the Commission’s grand plan for “shaping Europe’s digital future”.
The digital policy package is meant to define how we shape Europe’s digital future “in a way that serves us all”, said Vestager.
The strategy aims to unlock access to “more data and good quality data” to fuel innovation and underpin better public services, she added.
The Commission’s digital EVP Margrethe Vestager discussing the AI whitepaper
Collectively, the package is about embracing the possibilities AI creates while managing the risks, she also said, adding: “The point obviously is to create trust, rather than fear.”
She noted that the two policy pieces being unveiled by the Commission today, on AI and data, form part of a more wide-ranging digital and industrial strategy whole with additional proposals still to be set out.
“The picture that will come when we have assembled the puzzle should illustrate three objectives,” she said. “First that technology should work for people and not the other way round; it is first and foremost about purpose. The development, the deployment, the uptake of technology must work in the same direction to make a real positive difference in our daily lives.
“Second that we want a fair and competitive economy — a full Single Market where companies of all sizes can compete on equal terms, where the road from garage to scale up is as short as possible. But it also means an economy where the market power held by a few incumbents cannot be used to block competition. It also means an economy where consumers can take it for granted that their rights are being respected and profits are being taxed where they are made.”
Thirdly, she said the Commission plan would support “an open, democratic and sustainable society”.
“This means a society where citizens can control the data that they provide, where digital platforms are accountable for the contents that they feature… This is a fundamental thing — that while we use new digital tools, use AI as a tool, that we build a society based on our fundamental rights,” she added, trailing a forthcoming democracy action plan.
Digital technologies must also actively enable the green transition, said Vestager — pointing to the Commission’s pledge to achieve carbon neutrality by 2050. Digital, satellite, GPS and sensor data would be crucial to this goal, she suggested.
“More than ever a green transition and digital transition goes hand in hand.”
On the data package Breton said the Commission will launch a European and industrial cloud platform alliance to drive interest in building the next gen platforms he said would be needed to enable massive big data sharing across the EU — tapping into 5G and edge computing.
“We want to mobilize up to €2BN in order to create and mobilize this alliance,” he said. “In order to run this data you need to have specific platforms… Most of this data will be created locally and processed locally — thanks to 5G critical network deployments but also locally to edge devices. By 2030 we expect on the planet to have 500BN connected devices… and of course all the devices will exchange information extremely quickly. And here of course we need to have specific mini cloud or edge devices to store this data and to interact locally with the AI applications embedded on top of this.
“And believe me the requirement for these platforms are not at all the requirements that you see on the personal b2c platform… And then we need of course security and cyber security everywhere. You need of course latencies. You need to react in terms of millisecond — not tenths of a second. And that’s a totally different infrastructure.”
“We have everything in Europe to win this battle,” he added. “Because no one has expertise of this battle and the foundation — industrial base — than us. And that’s why we say that maybe the winner of tomorrow will not be the winner of today or yesterday.”
Trustworthy artificial intelligence
On AI Vestager said the major point of the plan is “to build trust” — by using a dual push to create what she called “an ecosystem of excellence” and another focused on trust.
The first piece includes a push by the Commission to stimulate funding, including in R&D and support for research such as by bolstering skills. “We need a lot of people to be able to work with AI,” she noted, saying it would be essential for small and medium sized businesses to be “invited in”.
On trust the plan aims to use risk to determine how much regulation is involved, with the most stringent rules being placed on what it dubs “high risk” AI systems. “That could be when AI tackles fundamental values, it could be life or death situation, any situation that could cause material or immaterial harm or expose us to discrimination,” said Vestager.
To scope this the Commission approach will focus on sectors where such risks might apply — such as energy and recruitment.
If an AI product or service is identified as posing a risk then the proposal is for an enforcement mechanism to test that the product is safe before it is put into use. These proposed “conformity assessments” for high risk AI systems include a number of obligations Vestager said are based on suggestions by the EU’s High Level Expert Group on AI — which put out a slate of AI policy recommendations last year.
The four requirements attached to this bit of the proposals are: 1) that AI systems should be trained using data that “respects European values and rules” and that a record of such data is kept; 2) that an AI system should provide “clear information to users about its purpose, its capabilities but also its limits” and that it be clear to users when they are interacting with an AI rather than a human; 3) AI systems must be “technically robust and accurate in order to be trustworthy”; and 4) they should always ensure “an appropriate level of human involvement and oversight”.
Obviously there are big questions about how such broad-brush requirements will be measured and stood up (as well as actively enforced) in practice.
If an AI product or service is not identified as high risk, Vestager noted, there would still be regulatory requirements in play — such as the need for developers to comply with existing EU data protection rules.
In her press statement, Commission president von der Leyen highlighted a number of examples of how AI might power a range of benefits for society — from “better and earlier” diagnosis of diseases like cancer to helping with her parallel push for the bloc to be carbon neutral by 2050, such as by enabling precision farming and smart heating — emphasizing that such applications rely on access to big data.
“Artificial intelligence is about big data,” she said. “Data, data and again data. And we all know that the more data we have the smarter our algorithms. This is a very simple equation. Therefore it is so important to have access to data that are out there. This is why we want to give our businesses but also the researchers and the public services better access to data.”
“The majority of data we collect today are never ever used even once. And this is not at all sustainable,” she added. “In these data we collect that are out there lies an enormous amount of precious ideas, potential innovation, untapped potential we have to unleash — and therefore we follow the principle that in Europe we have to offer data spaces where you can not only store your data but also share with others. And therefore we want to create European data spaces where businesses, governments and researchers can not only store their data but also have access to other data they need for their innovation.”
She too stressed the need for AI regulation, including to guard against the risk of biased algorithms — saying “we want citizens to trust the new technology”. “We want the application of these new technologies to deserve the trust of our citizens. This is why we are promoting a responsible, human centric approach to artificial intelligence,” she added.
She said the planned restrictions on high risk AI would apply in fields such as healthcare, recruitment, transportation, policing and law enforcement — and potentially others.
“We will be particularly careful with sectors where essential human interests and rights are at stake,” she said. “Artificial intelligence must serve people. And therefore artificial intelligence must always comply with people’s rights. This is why a person must always be in control of critical decisions and so called ‘high risk AI’ — this is AI that potentially interferes with people’s rights — have to be tested and certified before they reach our single market.”
“Today’s message is that artificial intelligence is a huge opportunity in Europe, for Europe. We do have a lot but we have to unleash this potential that is out there. We want this innovation in Europe,” von der Leyen added. “We want to encourage our businesses, our researchers, the innovators, the entrepreneurs, to develop artificial intelligence and we want to encourage our citizens to feel confident to use it in Europe.”
Towards a rights-respecting common data space
The European Commission has been working on building what it dubs a “data economy” for several years at this point, plugging into its existing Digital Single Market strategy for boosting regional competitiveness.
Its aim is to remove barriers to the sharing of non-personal data within the single market. The Commission has previously worked on regulation to ban most data localization, as well as setting out measures to encourage the reuse of public sector data and open up access to scientific data.
Healthcare data sharing has also been in its sights, with policies to foster interoperability around electronic health records, and it’s been pushing for more private sector data sharing — both b2b and business-to-government.
“Every organisation should be able to store and process data anywhere in the European Union,” it wrote in 2018. It has also called the plan a “common European data space“. Aka “a seamless digital area with the scale that will enable the development of new products and services based on data”.
The focus on freeing up the flow of non-personal data is intended to complement the bloc’s long-standing rules on protecting personal data. The General Data Protection Regulation (GDPR), which came into force in 2018, has reinforced EU citizens’ rights around the processing of their personal information — updating and bolstering prior data protection rules.
The Commission views GDPR as a major success story by merit of how it’s exported conversations about EU digital standards to a global audience.
But it’s fair to say that back home enforcement of the GDPR remains a work in progress, some 21 months in — with many major cross-border complaints attached to how tech and adtech giants are processing people’s data still sitting on the desk of the Irish Data Protection Commission where multinationals tend to locate their EU HQ as a result of favorable corporate tax arrangements.
The Commission’s simultaneous push to encourage the development of AI arguably risks heaping further pressure on the GDPR — as both private and public sectors have been quick to see model-making value locked up in citizens’ data.
Already across Europe there are multiple examples of companies and/or state authorities working on building personal data-fuelled diagnostic AIs for healthcare; using machine learning for risk scoring of benefits claimants; and applying facial recognition as a security aid for law enforcement, to give three examples.
There has also been controversy fast following such developments. Including around issues such as proportionality and the question of consent to legally process people’s data — both under GDPR and in light of EU fundamental privacy rights as well as those set out in the European Convention of Human Rights.
Only this month a Dutch court ordered the state to cease use of a blackbox algorithm for assessing the fraud risk of benefits claimants on human rights grounds — objecting to a lack of transparency around how the system functions and therefore also “insufficient” controllability.
The von der Leyen Commission, which took up its five-year mandate in December, is alive to rights concerns about how AI is being applied, even as it has made it clear it intends to supercharge the bloc’s ability to leverage data and machine learning technologies — eyeing economic gains.
Commission president, Ursula von der Leyen, visiting the AI Intelligence Center in Brussels (via the EC’s EbS Live AudioVisual Service)
The Commission president committed to publishing proposals to regulate AI within the first 100 days — saying she wants a European framework to steer application to ensure powerful learning technologies are used ethically and for the public good.
But a leaked draft of the plan to regulate AI last month suggested it would step back from imposing even a temporary ban on the use of facial recognition technology — leaning instead towards tweaks to existing rules and sector/app specific risk-assessments and requirements.
It’s clear there are competing views at the top of the Commission on how much policy intervention is needed on the tech sector.
Breton has previously voiced opposition to regulating AI — telling the EU parliament just before he was confirmed in post that he “won’t be the voice of regulating AI“.
While Vestager has been steady in her public backing for a framework to govern how AI is applied, talking at her hearing before the EU parliament of the importance of people’s trust and Europe having its own flavor of AI that must “serve humans” and have “a purpose”.
“I don’t think that we can be world leaders without ethical guidelines,” she said then. “I think we will lose it if we just say no let’s do as they do in the rest of the world — let’s pool all the data from everyone, no matter where it comes from, and let’s just invest all our money.”
At the same time Vestager signalled a willingness to be pragmatic in the scope of the rules and how they would be devised — emphasizing the need for speed and agreeing the Commission would need to be “very careful not to over-regulate”, suggesting she’d accept a core minimum to get rules up and running.
Today’s proposal steers away from more stringent AI rules — such as a ban on facial recognition in public places. On biometric AI technologies Vestager described some existing uses as “harmless” during today’s press conference — such as unlocking a phone or automatic border gates — whereas she stressed the difference in terms of rights risks related to the use of remote biometric identification tech such as facial recognition.
“With this white paper the Commission is launching a debate on the specific circumstance — if any — which might justify the use of such technologies in public space,” she said, putting some emphasis on the word ‘any’.
The Commission is encouraging EU citizens to put questions about the digital strategy for Vestager to answer tomorrow, in a live Q&A at 17.45 CET on Facebook, Twitter and LinkedIn — using the hashtag #DigitalEU

Do you want to know more on the EU’s digital strategy? Use #DigitalEU to share your questions and we will ask them to Margrethe Vestager this Thursday. pic.twitter.com/I90hCR6Gcz
— European Commission (@EU_Commission) February 18, 2020

Platform liability
There is more to come from the Commission on the digital policy front — with a Digital Services Act in the works to update pan-EU liability rules around Internet platforms.
That proposal is slated to be presented later this year and both commissioners said today that details remain to be worked out. The possibility that the Commission will propose rules to more tightly regulate online content platforms already has content farming adtech giants like Facebook cranking up their spin cycles.
During today’s press conference Breton said he would always push for what he dubbed “shared governance” but he warned several times that if platforms don’t agree an acceptable way forward “we will have to regulate” — saying it’s not up for European society to adapt to the platforms but for them to adapt to the EU.
“We will do this within the next eight months. It’s for sure. And everybody knows the rules,” he said. “Of course we’re entering here into dialogues with these platforms and like with any dialogue we don’t know exactly yet what will be the outcome. We may find at the end of the day a good coherent joint strategy which will fulfil our requirements… regarding the responsibilities of the platform. And by the way this is why personally when I meet with them I will always prefer a shared governance. But we have been extremely clear if it doesn’t work then we will have to regulate.”
Internal market commissioner, Thierry Breton

California’s new privacy law is off to a rocky start

California’s new privacy law was years in the making.
California’s Consumer Privacy Act — or CCPA — took effect on January 1, allowing state residents to reclaim their right to access and control their personal data. Inspired by Europe’s GDPR, the CCPA is the largest statewide privacy law change in a generation. The new law lets users request a copy of the data that tech companies have on them, delete the data when they no longer want a company to have it, and demand that their data isn’t sold to third parties. All of this is much to the chagrin of the tech giants, some of which had spent millions to comply with the law and have many more millions set aside to deal with the anticipated influx of consumer data access requests.
But to say things are going well is a stretch.
Many of the tech giants that kicked and screamed in resistance to the new law have acquiesced and accepted their fate — at least until something different comes along. The California tech scene had more than a year to prepare, but some have made it downright difficult and — ironically — more invasive in some cases for users to exercise their rights, largely because every company has a different interpretation of what compliance should look like.
Alex Davis is just one California resident who tried to use his new rights under the law to make a request to delete his data. He vented his annoyance on Twitter, saying companies have responded to CCPA by making requests “as confusing and difficult as possible in new and worse ways.”
“I’ve never seen such deliberate attempts to confuse with design,” he told TechCrunch. He referred to what he described as “dark patterns,” a type of user interface design that tries to trick users into making certain choices, often against their best interests.
“I tried to make a deletion request but it bogged me down with menus that kept redirecting… things to be turned on and off,” he said.
Despite his frustration, Davis got further than others. Just as some companies have made it easy for users to opt-out of having their data sold by adding the legally required “Do not sell my info” links on their websites, many have not. Some have made it near-impossible to find these “data portals,” which companies set up so users can request a copy of their data or delete it altogether. For now, California companies are still in a grace period — but have until July when the CCPA’s enforcement provisions kick in. Until then, users are finding ways around it — by collating and sharing links to data portals to help others access their data.
“We really see a mixed story on the level of CCPA response right now,” said Jay Cline, who heads up consulting giant PwC’s data privacy practice, describing it as a patchwork of compliance.
PwC’s own data found that only 40% of the largest 600 U.S. companies had a data portal. Only a fraction, Cline said, extended their portals to users outside of California, even though other states are gearing up to push similar laws to the CCPA.
But not all data portals are created equally. Given how much data companies store on us — personal or otherwise — the risks of getting things wrong are greater than ever. Tech companies are still struggling to figure out the best way to verify each data request to access or delete a user’s data without inadvertently giving it away to the wrong person.
Last year, security researcher James Pavur impersonated his fiancee and tricked tech companies into turning over vast amounts of data about her, including credit card information, account logins and passwords and, in one case, a criminal background check. Only a few of the companies asked for verification. Two years ago, Akita founder Jean Yang described someone hacking into her Spotify account and requesting her account data as an “unfortunate consequence” of GDPR, which mandated companies operating on the continent allow users access to their data.
(Image: Twitter/@jeanqasaur)
The CCPA says companies should verify a person’s identity to a “reasonable degree of certainty.” For some, that’s just an email address to send the data to.
Others require sending in even more sensitive information just to prove it’s them.
Indeed, i360, a little-known advertising and data company, until recently asked California residents for a person’s full Social Security number. This recently changed to just the last four digits. Verizon (which owns TechCrunch) wants its customers and users to upload their driver’s license or state ID to verify their identity. Comcast asks for the same, but goes the extra step by asking for a selfie before it will turn over any of a customer’s data.
Comcast asks for the same amount of information to verify a data request as the controversial facial recognition startup, Clearview AI, which recently made headlines for creating a surveillance system made up of billions of images scraped from Facebook, Twitter and YouTube to help law enforcement trace a person’s movements.
As much as CCPA has caused difficulties, it has helped forge an entirely new class of compliance startups ready to help large and small companies alike handle the regulatory burdens to which they are subject. Several startups in the space are taking advantage of the $55 billion expected to be spent on CCPA compliance in the next year — like Segment, which gives customers a consolidated view of the data they store; Osano, which helps companies comply with CCPA; and Securiti, which just raised $50 million to help expand its CCPA offering. With CCPA and GDPR under their belts, their services are designed to scale to accommodate new state or federal laws as they come in.
Another startup, Mine, which lets users “take ownership” of their data by acting as a broker to allow users to easily make requests under CCPA and GDPR, had a somewhat bumpy debut.
The service asks users to grant it access to their inbox, scanning for email subject lines that contain company names and using that data to determine which companies a user can request their data from or have their data deleted. (The service requests access to a user’s Gmail, but the company claims it will “never read” users’ emails.) Last month during a publicity push, Mine inadvertently copied a couple of emailed data requests to TechCrunch, allowing us to see the names and email addresses of two requesters who wanted Crunch, a popular gym chain with a similar name, to delete their data.
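Mine has not published its matching logic, so the snippet below is purely a hypothetical illustration of the mechanism described: match subject lines and sender addresses against a list of known company names, then deduplicate into a set of services a user could send requests to. All names and data here are made up:

```python
import re

# Hypothetical list of companies the broker knows how to contact.
KNOWN_COMPANIES = {"crunch fitness", "spotify", "verizon", "comcast"}

# Hypothetical inbox metadata: only subject lines and sender addresses, not message bodies.
messages = [
    {"subject": "Your Crunch Fitness membership renewal", "from": "billing@crunch.com"},
    {"subject": "Your Spotify Wrapped is here", "from": "no-reply@spotify.com"},
]


def companies_holding_data(messages):
    """Return the known companies mentioned in subject lines or sender addresses."""
    found = set()
    for msg in messages:
        haystack = f"{msg['subject']} {msg['from']}".lower()
        for company in KNOWN_COMPANIES:
            # Match on the first word of the company name as a crude heuristic.
            if re.search(r"\b" + re.escape(company.split()[0]) + r"\b", haystack):
                found.add(company)
    return sorted(found)


print(companies_holding_data(messages))  # candidates for CCPA/GDPR requests
```

Fuzzy name matching of this sort also hints at how a mix-up like the Crunch/TechCrunch incident above can happen when similarly named companies collide.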
(Screenshot: Zack Whittaker/TechCrunch)
TechCrunch alerted Mine — and the two requesters — to the security lapse.
“This was a mix-up on our part where the engine that finds companies’ data protection offices’ addresses identified the wrong email address,” said Gal Ringel, co-founder and chief executive at Mine. “This issue was not reported during our testing phase and we’ve immediately fixed it.”
For now, many startups have caught a break.
The smaller, early-stage startups that don’t yet make $25 million in annual revenue or store personal data on more than 50,000 users or devices will largely escape having to immediately comply with CCPA. But that doesn’t mean startups can be complacent. As early-stage companies grow, so will their legal responsibilities.
“For those who did launch these portals and offer rights to all Americans, they are in the best position to be ready for these additional states,” said Cline. “Smaller companies in some ways have an advantage for compliance if their products or services are commodities, because they can build in these controls right from the beginning,” he said.
CCPA may have gotten off to a bumpy start, but time will tell if things get easier. Just this week, California’s attorney general Xavier Becerra released newly updated guidance aimed at trying to “fine tune” the rules, per his spokesperson. It goes to show that even California’s lawmakers are still trying to get the balance right.
But with the looming threat of hefty fines just months away, time is running out for the non-compliant.


London’s Met Police switches on live facial recognition, flying in face of human rights concerns

While EU lawmakers are mulling a temporary ban on the use of facial recognition to safeguard individuals’ rights, as part of a risk-focused plan to regulate AI, London’s Met Police has today forged ahead with deploying the privacy-hostile technology — flipping the switch on operational use of live facial recognition in the UK capital.
The deployment comes after a multi-year period of trials by the Met and police in South Wales.
The Met says its use of the controversial technology will be targeted to “specific locations… where intelligence suggests we are most likely to locate serious offenders”.
“Each deployment will have a bespoke ‘watch list’, made up of images of wanted individuals, predominantly those wanted for serious and violent offences,” it adds.
It also claims cameras will be “clearly signposted”, adding that officers “deployed to the operation will hand out leaflets about the activity”.
“At a deployment, cameras will be focused on a small, targeted area to scan passers-by,” it writes. “The technology, which is a standalone system, is not linked to any other imaging system, such as CCTV, body worn video or ANPR.”
The biometric system is being provided to the Met by Japanese IT and electronics giant, NEC.
In a press statement, assistant commissioner Nick Ephgrave claimed the force is taking a balanced approach to using the controversial tech.
“We all want to live and work in a city which is safe: the public rightly expect us to use widely available technology to stop criminals. Equally I have to be sure that we have the right safeguards and transparency in place to ensure that we protect people’s privacy and human rights. I believe our careful and considered deployment of live facial recognition strikes that balance,” he said.
London has seen a rise in violent crime in recent years, with murder rates hitting a ten-year peak last year.
The surge in violent crime has been linked to cuts to policing services — although the new Conservative government has pledged to reverse cuts enacted by earlier Tory administrations.
The Met says its hope is that the AI-powered tech will help it tackle serious crime, including serious violence, gun and knife crime and child sexual exploitation, and “help protect the vulnerable”.
However, its phrasing is not a little ironic, given that facial recognition systems can be prone to racial bias, owing, for example, to bias in the data-sets used to train AI algorithms.
So in fact there’s a risk that police-use of facial recognition could further harm vulnerable groups who already face a disproportionate risk of inequality and discrimination.
Yet the Met’s PR doesn’t mention the risk of the AI tech automating bias.
Instead it takes pains to couch the technology as an “additional tool” to assist its officers.
“This is not a case of technology taking over from traditional policing; this is a system which simply gives police officers a ‘prompt’, suggesting “that person over there may be the person you’re looking for”, it is always the decision of an officer whether or not to engage with someone,” it adds.
While the use of a new tech tool may start with small deployments, as is being touted here, the history of software development underlines how the potential to scale is readily baked in.
A ‘targeted’ small-scale launch also prepares the ground for London’s police force to push for wider public acceptance of a highly controversial and rights-hostile technology via a gradual building out process. Aka surveillance creep.
On the flip side, the text of the draft of an EU proposal for regulating AI which leaked last week — floating the idea of a temporary ban on facial recognition in public places — noted that a ban would “safeguard the rights of individuals”. Although it’s not yet clear whether the Commission will favor such a blanket measure, even temporarily.
UK rights groups have reacted with alarm to the Met’s decision to ignore concerns about facial recognition.
Liberty accused the force of ignoring the conclusion of a report it commissioned during an earlier trial of the tech — which it says concluded the Met had failed to consider human rights impacts.
It also suggested such use would not meet key legal requirements.
“Human rights law requires that any interference with individuals’ rights be in accordance with the law, pursue a legitimate aim, and be ‘necessary in a democratic society’,” the report notes, suggesting the Met’s earlier trials of facial recognition tech “would be held unlawful if challenged before the courts”.

When the Met trialled #FacialRecognition tech, it commissioned an independent review of its use.
Its conclusions:
The Met failed to consider the human rights impact of the tech
Its use was unlikely to pass the key legal test of being “necessary in a democratic society”
— Liberty (@libertyhq) January 24, 2020

A petition set up by Liberty to demand a stop to facial recognition in public places has passed 21,000 signatures.
Discussing the legal framework around facial recognition and law enforcement last week, Dr Michael Veale, a lecturer in digital rights and regulation at UCL, told us that in his view the EU’s data protection framework, GDPR, forbids facial recognition by private companies “in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate”.
A UK man who challenged a Welsh police force’s trial of facial recognition has a pending appeal after losing the first round of a human rights challenge. Although in that case the challenge pertains to police use of the tech — rather than, as in the Met’s case, a private company (NEC) providing the service to the police.

Google’s Sundar Pichai doesn’t want you to be clear-eyed about AI’s dangers

Alphabet and Google CEO, Sundar Pichai, is the latest tech giant kingpin to make a public call for AI to be regulated while simultaneously encouraging lawmakers towards a dilute enabling framework that does not put any hard limits on what can be done with AI technologies.
In an op-ed published in today’s Financial Times, Pichai makes a headline-grabbing call for artificial intelligence to be regulated. But his pitch injects a suggestive undercurrent that puffs up the risk for humanity of not letting technologists get on with business as usual and apply AI at population-scale — with the Google chief claiming: “AI has the potential to improve billions of lives, and the biggest risk may be failing to do so” — thereby seeking to frame ‘no hard limits’ as actually the safest option for humanity.
Simultaneously the pitch downplays any negatives that might cloud the greater good that Pichai implies AI will unlock — presenting “potential negative consequences” as simply the inevitable and necessary price of technological progress.
It’s all about managing the level of risk, is the leading suggestion, rather than questioning outright whether the use of a hugely risk-laden technology such as facial recognition should actually be viable in a democratic society.
“Internal combustion engines allowed people to travel beyond their own areas but also caused more accidents,” Pichai writes, raiding history for a self-serving example while ignoring the vast climate costs of combustion engines (and the resulting threat now posed to the survival of countless species on Earth).
“The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread,” he goes on. “These lessons teach us that we need to be clear-eyed about what could go wrong.”
For “clear-eyed” read: Accepting of the technology-industry’s interpretation of ‘collateral damage’. (Which, in the case of misinformation and Facebook, appears to run to feeding democracy itself into the ad-targeting meat-grinder.)
Meanwhile, not at all mentioned in Pichai’s discussion of AI risks: The concentration of monopoly power that artificial intelligence appears to be very good at supercharging.
Funny that.
Of course it’s hardly surprising a tech giant that, in recent years, rebranded an entire research division to ‘Google AI’ — and has previously been called out by some of its own workforce over a project involving applying AI to military weapons technology — should be lobbying lawmakers to set AI ‘limits’ that are as dilute and abstract as possible.
The only thing better than zero regulation is laws made by useful idiots who’ve fallen hook, line and sinker for industry-expounded false dichotomies — such as those claiming it’s ‘innovation or privacy’.
Pichai’s intervention also comes at a strategic moment, with US lawmakers eyeing AI regulation and the White House seemingly throwing itself into alignment with tech giants’ desires for ‘innovation-friendly’ rules which make their business easier. (To wit: This month White House CTO Michael Kratsios warned in a Bloomberg op-ed against “preemptive, burdensome or duplicative rules that would needlessly hamper AI innovation and growth”.)
The new European Commission, meanwhile, has been sounding a firmer line on both AI and big tech.
It has made tech-driven change a key policy priority, with president Ursula von der Leyen making public noises about reining in tech giants. She has also committed to publish “a coordinated European approach on the human and ethical implications of Artificial Intelligence” within her first 100 days in office. (She took up the post on December 1, 2019 so the clock is ticking.)
Last week a leaked draft of the Commission’s proposals for pan-EU AI regulation suggested it’s leaning towards a relatively light-touch approach (albeit, the European version of light touch is considerably more involved and interventionist than anything born in a Trump White House, clearly) — although the paper does float the idea of a temporary ban on the use of facial recognition technology in public places.
The paper notes that such a ban would “safeguard the rights of individuals, in particular against any possible abuse of the technology” — before arguing against such a “far-reaching measure that might hamper the development and uptake of this technology”, in favor of relying on provisions in existing EU law (such as the EU data protection framework, GDPR), in addition to relevant tweaks to current product safety and liability laws.
While it’s not yet clear which way the Commission will jump on regulating AI, even the lightish-touch version it’s considering would likely be a lot more onerous than Pichai would like.
In the op-ed he calls for what he couches as “sensible regulation” — aka taking a “proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities”.
For “social opportunities” read: The plentiful ‘business opportunities’ Google is spying — assuming the hoped-for vast additional revenue it expects from supercharging the expansion of AI-powered services into all sorts of industries and sectors (from health to transportation to everything in between) isn’t derailed by hard legal limits on where AI can actually be applied.
“Regulation can provide broad guidance while allowing for tailored implementation in different sectors,” Pichai urges, setting out a preference for enabling “principles” and post-application “reviews”, to keep the AI spice flowing.
The op-ed only touches very briefly on facial recognition — despite the FT editors choosing to illustrate it with an image of the tech. Here Pichai again seeks to reframe the debate around what is, by nature, an extremely rights-hostile technology — talking only in passing of “nefarious uses” of facial recognition.
Of course this wilfully obfuscates the inherent risks of letting blackbox machines make algorithmic guesses at identity every time a face happens to pass through a public space.
You can’t hope to protect people’s privacy in such a scenario. Many other rights are also at risk, depending on what else the technology is being used for. So, really, any use of facial recognition is laden with individual and societal risk.
But Pichai is seeking to put blinkers on lawmakers. He doesn’t want them to see inherent risks baked into such a potent and powerful technology — pushing them towards only a narrow, ill-intended subset of “nefarious” and “negative” AI uses and “consequences” as being worthy of “real concerns”. 
And so he returns to banging the drum for “a principled and regulated approach to applying AI” [emphasis ours] — putting the emphasis on regulation that, above all, gives the green light for AI to be applied.
What technologists fear most here is rules that tell them when artificial intelligence absolutely cannot be applied.
Ethics and principles are, to a degree, mutable concepts — and ones which the tech giants have become very practiced at claiming as their own, for PR purposes, including by attaching self-styled ‘guard-rails’ to their own AI operations. (But of course there’s nothing legally binding there.)
At the same time data-mining giants like Google are very smooth operators when it comes to gaming existing EU rules around data protection, such as by infesting their user-interfaces with confusing dark patterns that push people to click or swipe their rights away.
But a ban on applying certain types of AI would change the rules of the game. Because it would put society in the driving seat.
Laws that contained at least a moratorium on certain “dangerous” applications of AI — such as facial recognition technology, or autonomous weapons like the drone-based system Google was previously working on — have been called for by some far-sighted regulators.
And a ban would be far harder for platform giants to simply bend to their will.

So for a while I was willing to buy into the whole tech ethics thing but now I’m fully on the side of tech refusal. We need to be teaching refusal.
— Jonathan Senchyne (@jsench) January 16, 2020


EU lawmakers are eyeing risk-based rules for AI, per leaked white paper

The European Commission is considering a temporary ban on the use of facial recognition technology, according to a draft proposal for regulating artificial intelligence obtained by Euractiv.
Creating rules to ensure AI is ‘trustworthy and human’ has been an early flagship policy promise of the new Commission, led by president Ursula von der Leyen.
But the leaked proposal suggests the EU’s executive body is in fact leaning towards tweaks of existing rules and sector/app specific risk-assessments and requirements, rather than anything as firm as blanket sectoral requirements or bans.
The leaked Commission white paper floats the idea of a three-to-five-year period in which the use of facial recognition technology could be prohibited in public places — to give EU lawmakers time to devise ways to assess and manage risks around the use of the technology, such as to people’s privacy rights or the risk of discriminatory impacts from biased algorithms.
“This would safeguard the rights of individuals, in particular against any possible abuse of the technology,” the Commission writes, adding that: “It would be necessary to foresee some exceptions, notably for activities in the context of research and development and for security purposes.”
However the text raises immediate concerns about imposing even a time-limited ban — which is described as “a far-reaching measure that might hamper the development and uptake of this technology” — and the Commission goes on to state that its preference “at this stage” is to rely on existing EU data protection rules, aka the General Data Protection Regulation (GDPR).
The white paper contains a number of options the Commission is still considering for regulating the use of artificial intelligence more generally.
These range from voluntary labelling; to imposing sectorial requirements for the public sector (including on the use of facial recognition tech); to mandatory risk-based requirements for “high-risk” applications (such as within risky sectors like healthcare, transport, policing and the judiciary, as well as for applications which can “produce legal effects for the individual or the legal entity or pose risk of injury, death or significant material damage”); to targeted amendments to existing EU product safety and liability legislation.
The proposal also emphasizes the need for an oversight governance regime to ensure rules are followed — though the Commission suggests leaving it open to Member States to choose whether to rely on existing governance bodies for this task or create new ones dedicated to regulating AI.
Per the draft white paper, the Commission says its preference for regulating AI is option 3 combined with options 4 and 5: Aka mandatory risk-based requirements on developers (of whatever sub-set of AI apps are deemed “high-risk”) that could result in some “mandatory criteria”, combined with relevant tweaks to existing product safety and liability legislation, and an overarching governance framework.
Hence it appears to be leaning towards a relatively light-touch approach, focused on “building on existing EU legislation” and creating app-specific rules for a sub-set of “high-risk” AI apps/uses — and which likely won’t stretch to even a temporary ban on facial recognition technology.
Much of the white paper is also taken up with discussion of strategies for “supporting the development and uptake of AI” and “facilitating access to data”.
“This risk-based approach would focus on areas where the public is at risk or an important legal interest is at stake,” the Commission writes. “This strictly targeted approach would not add any new additional administrative burden on applications that are deemed ‘low-risk’.”
EU commissioner Thierry Breton, who oversees the internal market portfolio, expressed resistance to creating rules for artificial intelligence last year — telling the EU parliament then that he “won’t be the voice of regulating AI”.
For “low-risk” AI apps, the white paper notes that provisions in the GDPR which give individuals the right to receive information about automated processing and profiling, and set a requirement to carry out a data protection impact assessment, would apply.
Albeit the regulation only defines limited rights and restrictions over automated processing — in instances where there’s a legal or similarly significant effect on the people involved. So it’s not clear how extensively it would in fact apply to “low-risk” apps.
If it’s the Commission’s intention to also rely on GDPR to regulate higher risk stuff — such as, for example, police forces’ use of facial recognition tech — instead of creating a more explicit sectoral framework to restrict their use of highly privacy-hostile AI technologies, it could exacerbate an already confusing legislative picture where law enforcement is concerned, according to Dr Michael Veale, a lecturer in digital rights and regulation at UCL.
“The situation is extremely unclear in the area of law enforcement, and particularly the use of public private partnerships in law enforcement. I would argue the GDPR in practice forbids facial recognition by private companies in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate. However, the merchants of doubt at facial recognition firms wish to sow heavy uncertainty into that area of law to legitimise their businesses,” he told TechCrunch.
“As a result, extra clarity would be extremely welcome,” Veale added. “The issue isn’t restricted to facial recognition however: Any type of biometric monitoring, such as voice or gait recognition, should be covered by any ban, because in practice they have the same effect on individuals.”
An advisory body set up to advise the Commission on AI policy set out a number of recommendations in a report last year — including suggesting a ban on the use of AI for mass surveillance and social credit scoring systems of citizens.
But its recommendations were criticized by privacy and rights experts for falling short by failing to grasp wider societal power imbalances and structural inequality issues which AI risks exacerbating — including by supercharging existing rights-eroding business models.
In a paper last year Veale dubbed the advisory body’s work a “missed opportunity” — writing that the group “largely ignore infrastructure and power, which should be one of, if not the most, central concern around the regulation and governance of data, optimisation and ‘artificial intelligence’ in Europe going forwards”.

Here’s how Bosch engineers transformed the regular ol’ sun visor

While vehicles have become increasingly advanced, the sun visor has stayed almost the same for more than nine decades. And yet it remains a problematic feature that can obscure a driver’s field of view, especially at dusk and dawn.
Three Bosch engineers have come up with a novel way to solve that problem using a liquid crystal display, or LCD, a camera, and facial recognition and detection software. Bosch is calling it the Virtual Visor, and it’s making its debut at CES 2020 in Las Vegas. The Virtual Visor was honored as a Best of Innovation pick in the CES 2020 Innovation Awards.
The visor links an LCD panel with a camera that tracks the shadow the sun casts on the driver’s face. The system uses facial recognition and detection to find the driver within the image captured by the camera and to determine landmarks on the face that locate the shadow. Algorithms then analyze the driver’s view and darken only the section of the display where light hits the driver’s eyes. The rest of the display remains transparent, allowing the driver to see the road.
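To make the mapping step concrete, here is a rough sketch (ours, not Bosch’s): given eye positions reported by a face detector, work out which cells of a coarse LCD grid to darken. The frame size, grid size and coordinates below are illustrative assumptions, and the detection step itself is out of scope.

```python
# Assumed camera frame and LCD grid dimensions — illustrative only, not Bosch's parameters.
FRAME_W, FRAME_H = 1280, 720     # camera frame size in pixels
GRID_COLS, GRID_ROWS = 12, 6     # visor LCD divided into a coarse grid of switchable cells

def cells_to_darken(eye_points, radius_px=80):
    """Return the (col, row) LCD cells covering a radius around each detected eye position."""
    cell_w, cell_h = FRAME_W / GRID_COLS, FRAME_H / GRID_ROWS
    cells = set()
    for ex, ey in eye_points:
        for col in range(GRID_COLS):
            for row in range(GRID_ROWS):
                cx, cy = (col + 0.5) * cell_w, (row + 0.5) * cell_h  # cell centre in frame coords
                if (cx - ex) ** 2 + (cy - ey) ** 2 <= radius_px ** 2:
                    cells.add((col, row))
    return cells

# Example eye landmarks as a face detector might report them (hard-coded for this sketch).
print(sorted(cells_to_darken([(520, 300), (640, 305)])))
```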
The project started as a grassroots effort within Bosch as part of the company’s internal innovation activities, according to Jason Zink, a technical expert for Bosch in North America and one of the co-creators of the Virtual Visor.
One of the first breakthroughs came when one of the co-creators, Ryan Todd, was shopping for TVs, Zink explained. During that comparison shopping, Todd realized that LCD panels can be selectively blacked out.
“By using a liquid crystal display instead of a traditional visor, it allowed us to make the visor either transparent or opaque at different parts across the device,” Zink told TechCrunch.
“We discovered early in the development that users adjust their traditional sun visors to always cast a shadow on their own eyes,” Zink said. “This realization was profound in helping simplify the product concept and fuel the design of the technology.”
The visor isn’t headed for vehicles yet, but Zink said Bosch is in discussions with OEMs from the commercial and passenger vehicles markets.
“We have every intention of making this a real product,” Zink said.
“For most drivers around the world, the visor component as we know it is not enough to avoid hazardous sun glare, especially at dawn and dusk when the sun can greatly decrease drivers’ vision,” said Dr. Steffen Berns, president of Bosch Car Multimedia. “Some of the simplest innovations make the greatest impact, and Virtual Visor changes the way drivers see the road.”

DHS wants to expand airport face recognition scans to include US citizens

Homeland Security wants to expand facial recognition checks for travelers arriving in and departing the U.S. to also include citizens, who had previously been exempt from the mandatory checks.
In a filing, the department has proposed that all travelers, and not just foreign nationals or visitors, will have to complete a facial recognition check before they are allowed to enter the U.S. — and also before they leave the country.
Facial recognition for departing flights has increased in recent years as part of Homeland Security’s efforts to catch visitors and travelers who overstay their visas. The department, whose responsibility is to protect the border and control immigration, has a deadline of 2021 to roll out facial recognition scanners to the largest 20 airports in the United States, despite facing a rash of technical challenges.
But although there may not always be a clear way to opt-out of facial recognition at the airport, U.S. citizens and lawful permanent residents — also known as green card holders — have been exempt from these checks, the existing rules say.
Now, the proposed rule change to include citizens has drawn ire from one of the largest civil liberties groups in the country.
“Time and again, the government told the public and members of Congress that U.S. citizens would not be required to submit to this intrusive surveillance technology as a condition of traveling,” said Jay Stanley, a senior policy analyst at the American Civil Liberties Union.
“This new notice suggests that the government is reneging on what was already an insufficient promise,” he said.
“Travelers, including U.S. citizens, should not have to submit to invasive biometric scans simply as a condition of exercising their constitutional right to travel. The government’s insistence on hurtling forward with a large-scale deployment of this powerful surveillance technology raises profound privacy concerns,” he said.
Citing a data breach of close to 100,000 license plate and traveler images in June as well as concerns about a lack of sufficient safeguards to protect the data, Stanley said the government “cannot be trusted” with this technology and that lawmakers should intervene.
A spokesperson for Homeland Security did not immediately comment when reached.


Gift Guide: Essential security and privacy gifts to help protect your friends and family

 
There’s no such thing as perfect privacy or security, but there’s a lot you can do to lock down your online life. And the holiday season is a great time to encourage others to do the same. Some people are more likely to take security into their own hands if they’re given a nudge along the way.
Here we have a selection of gift ideas — from helpful security solutions to unique and interesting gadgets that will keep your information safe, but without breaking the bank.

A hardware security key for two-factor
Your online accounts hold everything about you, and you’ll want to keep them safe. Two-factor authentication is great, but for the more security-minded there’s an even stronger option. A security key is a physical hardware key that’s even stronger than having a two-factor code sent to your phone. These keys plug into your computer’s USB port (or your phone’s charging port) to prove to online services, like Facebook, Google and Twitter, that you are who you say you are. Google’s own data shows security keys offer near-unbeatable protection against even the most powerful and well-resourced nation-state hackers. Yubikeys are our favorite and come in all shapes and sizes. They’re also cheap. Google also has a range of its own branded Titan security keys, one of which offers Bluetooth connectivity.
Price: from $20.
Available from: Yubico Store | Google Store
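For the curious, the core idea behind these keys is a simple challenge-response: the service stores a public key at registration and later checks a signature over a fresh random challenge. The Python sketch below illustrates that idea only — it is not the actual FIDO2/WebAuthn protocol a Yubikey or Titan key speaks — and it assumes the third-party cryptography package is installed.

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# "Registration": the key generates a keypair and the service stores only the public half.
device_key = Ed25519PrivateKey.generate()
registered_public_key = device_key.public_key()

# "Login": the service sends a fresh random challenge; the hardware key signs it.
challenge = os.urandom(32)
signature = device_key.sign(challenge)

# The service verifies the signature against the stored public key; a phished password
# alone can't produce a valid signature.
try:
    registered_public_key.verify(signature, challenge)
    print("Challenge signed by the registered key — login allowed")
except InvalidSignature:
    print("Signature invalid — login refused")
```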

Webcam cover
Surveillance-focused malware, like remote access trojans, can infect computers and remotely switch on your webcam without your permission. Most computer webcams these days have an indicator light that shows you when the camera is active. But what if your camera is blocked, preventing any accidental exposure in the first place? Enter the simple but humble webcam blocker. It slides open when you need to access your camera, and slides to cover the lens when you don’t. Support local businesses and non-profits — you can search for unique and interesting webcam covers on Etsy.
Price: from $5 – $10.
Available from: Etsy | Electronic Frontier Foundation

A microphone blocker
Now you have your webcam cover, what about your microphone? Just as hackers can tap into your webcam, they can also pick up on your audio. Microphone blockers contain a semiconductor that tricks your computer or device into thinking it’s a working microphone, when in fact it isn’t able to pick up any audio. Anyone hacking into your device won’t hear a thing. Some modern Macs already come with Apple’s T2 security chip, which prevents hackers from snooping on your microphone when your laptop’s lid is shut. But a microphone blocker works all the time, even when the lid is open.
Price: $6.99 – $16.99.
Available from: Nope Blocker | Mic Lock

A USB data blocker
You might have heard about “juice-jacking,” where hackers plant malicious implants in USB outlets, which steal a person’s device data when an unsuspecting victim plugs in. It’s a threat that’s almost unheard of, but proof-of-concepts have shown how easy it is to implant malicious components in legitimate-looking cables. A USB data blocker essentially acts as a data barrier, preventing any information going in or out of your device, while letting power through to charge your battery. They’re cheap but effective.
Price: $6.99 and $11.49.
Available from: Amazon | SyncStop

A privacy screen for your computer or phone
How often have you seen someone’s private messages or documents by looking over their shoulder, or from the next aisle over? Privacy screens can protect you from “visual hacking.” These screens make it near-impossible for anyone other than the device’s user to snoop on what you’re working on. And you can get them for all kinds of devices and displays — including phones. But make sure you get the right size!
Price: from about $17.
Available from: Amazon

A password manager subscription
Password managers are a real lifesaver. One strong, unique password lets you into your entire bank of passwords. They’re great for storing your passwords, but also for encouraging you to use better, stronger, unique passwords. And because many are cross-platform, you can bring your passwords with you. Plenty of password managers exist — from LastPass, Lockbox, and Dashlane, to open-source versions like KeePass. Many are free, but a premium subscription often comes with benefits and better features. And if you’re a journalist, 1Password has a free subscription for you.
Price: Many are free; premium offerings start at $35.88 – $44.28 annually.
Available from: 1Password | LastPass | Dashlane | KeePass

Anti-surveillance clothing
Whether you’re lawfully protesting or just want to stay in “incognito mode,” there are — believe it or not — fashion lines that can help prevent facial recognition and other surveillance systems from identifying you. This clothing uses a kind of camouflage that confuses surveillance technology by giving them more interesting things to detect, like license plates and other detectable patterns.
Price: $35.99.
Available from: Adversarial Fashion

Pi-hole
Think of a Pi-hole as a “hardware ad-blocker.” A Pi-hole is essentially a Raspberry Pi mini-computer that runs ad-blocking technology as a box sitting on your network, which means everyone on your home network benefits from ad blocking. Ads may generate revenue for websites, but online ads are notorious for tracking users across the web. Until ads can behave properly, a Pi-hole is a great way to capture and sinkhole bad ad traffic. The hardware may be cheap, but the ad-blocking software is free. Donations to the cause are welcome.
Price: From $35.
Available from: Pi-hole | Raspberry Pi
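Conceptually, the sinkholing boils down to a DNS lookup that answers blocked domains with a dead-end address and forwards everything else upstream. The short Python sketch below is purely illustrative of that logic — it is not how Pi-hole itself is implemented, and the blocklist entries are made up.

```python
import socket

# A toy blocklist; real installs pull curated lists with many thousands of ad and tracker domains.
BLOCKLIST = {"ads.example.com", "tracker.example.net"}

def resolve(domain: str) -> str:
    """Answer blocked domains with a sinkhole address; forward everything else upstream."""
    if domain in BLOCKLIST or any(domain.endswith("." + blocked) for blocked in BLOCKLIST):
        return "0.0.0.0"  # the ad/tracker request goes nowhere
    return socket.gethostbyname(domain)  # normal resolution for everything else

print(resolve("ads.example.com"))   # -> 0.0.0.0
print(resolve("techcrunch.com"))    # -> a real IP address, resolved upstream
```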

And finally, some light reading…
There are two must-read books this year. NSA whistleblower Edward Snowden’s autobiography, “Permanent Record,” covers his journey from the shadowy U.S. intelligence agency to Hong Kong, where he spilled thousands of highly classified government documents to reporters about the scope and scale of its massive global surveillance partnerships and programs. And Andy Greenberg’s “Sandworm” is a beautifully written deep-dive into the group of Russian hackers blamed for the most disruptive cyberattack in history, NotPetya. This incredibly detailed investigative book leaves no stone unturned, unravelling the work of a highly secretive group that caused billions of dollars of damage.
Price: From $14.99.
Available from: Amazon (Permanent Record) | Amazon (Sandworm)

A 10-point plan to reboot the data industrial complex for the common good

A posthumous manifesto by Giovanni Buttarelli, who until his death this summer was Europe’s chief data protection regulator, seeks to join the dots of surveillance capitalism’s rapacious colonization of human spaces, via increasingly pervasive and intrusive mapping and modelling of our data, with the existential threat posed to life on earth by manmade climate change.
In a dense document rich with insights and ideas around the notion that “data means power” — and therefore that the unequally distributed data-capture capabilities currently enjoyed by a handful of tech platforms sums to power asymmetries and drastic social inequalities — Buttarelli argues there is potential for AI and machine learning to “help monitor degradation and pollution, reduce waste and develop new low-carbon materials”. But only with the right regulatory steerage in place.
“Big data, AI and the internet of things should focus on enabling sustainable development, not on an endless quest to decode and recode the human mind,” he warns. “These technologies should — in a way that can be verified — pursue goals that have a democratic mandate. European champions can be supported to help the EU achieve digital strategic autonomy.”
“The EU’s core values are solidarity, democracy and freedom,” he goes on. “Its conception of data protection has always been the promotion of responsible technological development for the common good. With the growing realisation of the environmental and climatic emergency facing humanity, it is time to focus data processing on pressing social needs. Europe must be at the forefront of this endeavour, just as it has been with regard to individual rights.”
One of his key calls is for regulators to enforce transparency of dominant tech companies — so that “production processes and data flows are traceable and visible for independent scrutiny”.
“Use enforcement powers to prohibit harmful practices, including profiling and behavioural targeting of children and young people and for political purposes,” he also suggests.
Another point in the manifesto urges a moratorium on “dangerous technologies”, citing facial recognition and killer drones as examples, and calling generally for a pivot away from technologies designed for “human manipulation” and toward “European digital champions for sustainable development and the promotion of human rights”.
In an afterword penned by Shoshana Zuboff, the US author and scholar writes in support of the manifesto’s central tenet, warning pithily that: “Global warming is to the planet what surveillance capitalism is to society.”
There’s plenty of overlap between Buttarelli’s ideas and Zuboff’s — who has literally written the book on surveillance capitalism. Data concentration by powerful technology platforms is also resulting in algorithmic control structures that give rise to “a digital underclass… comprising low-wage workers, the unemployed, children, the sick, migrants and refugees who are required to follow the instructions of the machines”, he warns.
“This new instrumentarian power deprives us not only of the right to consent, but also of the right to combat, building a world of no exit in which ignorance is our only alternative to resigned helplessness, rebellion or madness,” she agrees.
There are no fewer than six afterwords attached to the manifesto — a testament to the high regard in which Buttarelli’s ideas are held among privacy, digital and human rights campaigners.
The manifesto “goes far beyond data protection”, says writer Maria Farrell in another contribution. “It connects the dots to show how data maximisation exploits power asymmetries to drive global inequality. It spells out how relentless data-processing actually drives climate change. Giovanni’s manifesto calls for us to connect the dots in how we respond, to start from the understanding that sociopathic data-extraction and mindless computation are the acts of a machine that needs to be radically reprogrammed.”
At the core of the document is a 10-point plan for what’s described as “sustainable privacy”, which includes the call for a dovetailing of the EU’s digital priorities with a Green New Deal — to “support a programme for green digital transformation, with explicit common objectives of reducing inequality and safeguarding human rights for all, especially displaced persons in an era of climate emergency”.
Buttarelli also suggests creating a forum for civil liberties advocates, environmental scientists and machine learning experts who can advise on EU funding for R&D to put the focus on technology that “empowers individuals and safeguards the environment”.
Another call is to build a “European digital commons” to support “open-source tools and interoperability between platforms, a right to one’s own identity or identities, unlimited use of digital infrastructure in the EU, encrypted communications, and prohibition of behaviour tracking and censorship by dominant platforms”.
“Digital technology and privacy regulation must become part of a coherent solution for both combating and adapting to climate change,” he suggests in a section dedicated to a digital Green New Deal — even while warning that current applications of powerful AI technologies appear to be contributing to the problem.
“AI’s carbon footprint is growing,” he points out, underlining the environmental wastage of surveillance capitalism. “Industry is investing based on the (flawed) assumption that AI models must be based on mass computation.
“Carbon released into the atmosphere by the accelerating increase in data processing and fossil fuel burning makes climatic events more likely. This will lead to further displacement of peoples and intensification of calls for ‘technological solutions’ of surveillance and border controls, through biometrics and AI systems, thus generating yet more data. Instead, we need to ‘greenjacket’ digital technologies and integrate them into the circular economy.”
Another key call — and one Buttarelli had been making presciently in recent years — is for more joint working between EU regulators towards common sustainable goals.
“All regulators will need to converge in their policy goals — for instance, collusion in safeguarding the environment should be viewed more as an ethical necessity than as a technical breach of cartel rules. In a crisis, we need to double down on our values, not compromise on them,” he argues, going on to voice support for antitrust and privacy regulators to co-operate to effectively tackle data-based power asymmetries.
“Antitrust, democracies’ tool for restraining excessive market power, therefore is becoming again critical. Competition and data protection authorities are realising the need to share information about their investigations and even cooperate in anticipating harmful behaviour and addressing ‘imbalances of power rather than efficiency and consent’.”
On the General Data Protection Regulation (GDPR) specifically — Europe’s current framework for data protection — Buttarelli gives a measured assessment, saying “first impressions indicate big investments in legal compliance but little visible change to data practices”.
He says Europe’s data protection authorities will need to use all the tools at their disposal — and find the necessary courage — to take on the dominant tracking and targeting digital business models fuelling so much exploitation and inequality.
He also warns that GDPR alone “will not change the structure of concentrated markets or in itself provide market incentives that will disrupt or overhaul the standard business model”.
“True privacy by design will not happen spontaneously without incentives in the market,” he adds. “The EU still has the chance to entrench the right to confidentiality of communications in the ePrivacy Regulation under negotiation, but more action will be necessary to prevent further concentration of control of the infrastructure of manipulation.”
Looking ahead, the manifesto paints a bleak picture of where market forces could be headed without regulatory intervention focused on defending human rights. “The next frontier is biometric data, DNA and brainwaves — our thoughts,” he suggests. “Data is routinely gathered in excess of what is needed to provide the service; standard tropes, like ‘improving our service’ and ‘enhancing your user experience’ serve as decoys for the extraction of monopoly rents.”
There is optimism too, though — that technology in service of society can be part of the solution to existential crises like climate change; and that data, lawfully collected, can support public good and individual self-realization.
“Interference with the right to privacy and personal data can be lawful if it serves ‘pressing social needs’,” he suggests. “These objectives should have a clear basis in law, not in the marketing literature of large companies. There is no more pressing social need than combating environmental degradation” — adding that: “The EU should promote existing and future trusted institutions, professional bodies and ethical codes to govern this exercise.”
In instances where platforms are found to have systematically gathered personal data unlawfully, Buttarelli trails the interesting idea of an amnesty for those responsible “to hand over their optimisation assets” — as a means not only of resetting power asymmetries and rebalancing the competitive playing field, but of enabling societies to reclaim these stolen assets and reapply them for a common good.
His hope for Europe’s Data Protection Board — the body which offers guidance and coordinates interactions between EU Member States’ data watchdogs — is that it will be “the driving force supporting the Global Privacy Assembly in developing a common vision and agenda for sustainable privacy”.
The manifesto also calls for European regulators to better reflect the diversity of people whose rights they’re being tasked with safeguarding.
The document, which is entitled Privacy 2030: A vision for Europe, has been published on the website of the International Association of Privacy Professionals ahead of its annual conference this week.
Buttarelli had intended — but was finally unable — to publish his thoughts on the future of privacy this year, hoping to inspire discussion in Europe and beyond. In the event, the manifesto has been compiled posthumously by Christian D’Cunha, head of his private office, who writes that he has drawn on discussions with the data protection supervisor in his final months — with the aim of plotting “a plausible trajectory of his most passionate convictions”.

China Roundup: facial recognition lawsuit and cashless payments for foreigners

Hello and welcome back to TechCrunch’s China Roundup, a digest of recent events shaping the Chinese tech landscape and what they mean to people in the rest of the world. This week, a lawsuit sparked a debate over the deployment of China’s pervasive facial recognition; meanwhile, in some good news, foreigners in China can finally experience cashless payment just like locals.
China’s first lawsuit against face scans
Many argue that China holds an unfair advantage in artificial intelligence because of its citizens’ willingness to easily give up personal data desired by tech companies. But a handful of people are surely getting more privacy-conscious.
This week, a Chinese law professor filed what looks like the country’s first lawsuit against the use of AI-powered face scans, according to Qianjiang Evening News, a local newspaper in the eastern province of Zhejiang. In dispute is the decision by a privately-owned zoo to impose mandatory facial recognition on admission control for all annual pass holders.
“I’ve always been conservative about gathering facial biometrics data. The collection and use of facial biometrics involve very uncertain security risks,” the professor told the paper, adding that he would nonetheless accept such a requirement from the government for the purpose of “public interest.”
Both the government and businesses in China have aggressively embraced facial recognition in wide-ranging scenarios, be it to aid public security checks or speed up payments at supermarket checkouts. The technology will certainly draw more scrutiny from the public as it continues to spread. Already, the zoo case is garnering considerable attention. On Weibo, China’s equivalent of Twitter, posts about the suit have generated some 100 million views and 10,000 comments in less than a week. Many share the professor’s concerns over potential leaks and data abuse.
Scan and pay like a local
The other technology that has become ubiquitous in China is cashless payments. For many years, foreign visitors without a Chinese bank account have not been able to participate in the scan-and-pay craze that’s received extensive coverage in the west. But the fences are now down.
This week, two of the country’s largest payment systems announced almost at the same time that they are making it easier for foreigners to pay through their smartphones. Visitors can now pay at a selection of Chinese merchants after linking their overseas credit cards backed by Visa, MasterCard, American Express, Discover Global Network or JCB to Tencent’s WeChat Pay.
“This is to provide travelers, holding 2.6 billion Mastercard cards around the world, with the ability to make simple and smart payments anytime, anywhere in China,” Mastercard said in a company statement.
Alipay, Alibaba’s payments affiliate, now also allows foreign visitors to top up RMB onto a prepaid virtual card issued by Bank of Shanghai using their international credit or debit cards. The move is a boon to the large numbers of foreign tourists visiting China, who totaled 141 million in 2018.

We heard you. You want to use @Alipay and guess what? Now you can! Visitors to China are now able to #PayWithAlipay. Simply download @Alipay via app stores to start enjoying wallet-free travel! That QR code by the cashier will no longer be a foreign sight. pic.twitter.com/E8zmCovCJ7
— Alipay (@Alipay) November 5, 2019

Also worth your attention
Didi’s controversial carpooling service is finally back this week, more than a year after the feature was suspended following the murders of two female passengers. But the company, which has become synonymous with ride-hailing, was immediately put in the hot seat again. The relaunched feature noticeably included a curfew on women, who were only able to carpool between 5 a.m. and 8 p.m. The public lambasted the decision as humiliating and discriminatory against women, and Didi responded swiftly by extending the same limit to both women and men. The murders triggered a huge backlash against the company, which has since tried to allay safety concerns. At this point, the ride-hailing giant simply can’t afford another publicity debacle.
The government moves to stamp out monopolistic practices by some of China’s largest e-commerce platforms ahead of Singles’ Day, the country’s busiest shopping festival. Merchants have traditionally been forced to be an exclusive supplier for one of these giants, but Beijing wants to put a stop to it and summoned Alibaba, JD.com, Pinduoduo (in Chinese) and other major retail players for talks on anti-competition this week.
Iqiyi, often hailed as the “Netflix of China,” reports a widening net loss of $516.0 million in the third quarter ended September 30. The good news is it has added 25 million new subscribers to its video streaming platform; 99.2% of its 105.8 million-strong user base are now paying members.
36Kr, one of China’s most prominent tech news sites, saw its shares tumble 10% in its Nasdaq debut on Friday. The company generates revenue from subscriptions, advertisements and enterprise “value-added” services. The last segment, according to its prospectus, is designed to “help established companies increase media exposure and brand awareness.”

Elizabeth Warren bites back at Zuckerberg’s leaked threat to K.O. the government

Presidential candidate Senator Elizabeth Warren has responded publicly to a leaked attack on her by Facebook CEO Mark Zuckerberg, saying she won’t be bullied out of taking big tech to task for anticompetitive practices.

I’m not afraid to hold Big Tech companies like Facebook, Google, and Amazon accountable. It’s time to #BreakUpBigTech: https://t.co/o9X9v4noOm
— Elizabeth Warren (@ewarren) October 1, 2019

Warren’s subtweeting of the Facebook founder follows a leak in which the Verge obtained two hours of audio from an internal Q&A session with Zuckerberg — publishing a series of snippets today.
In one snippet the Facebook leader can be heard opining on how Warren’s plan to break up big tech would “suck”.
“You have someone like Elizabeth Warren who thinks that the right answer is to break up the companies … if she gets elected president, then I would bet that we will have a legal challenge, and I would bet that we will win the legal challenge,” he can be heard saying. “Does that still suck for us? Yeah. I mean, I don’t want to have a major lawsuit against our own government. … But look, at the end of the day, if someone’s going to try to threaten something that existential, you go to the mat and you fight.”
Warren responded soon after publication with a pithy zinger, writing on Twitter: “What would really ‘suck’ is if we don’t fix a corrupt system that lets giant companies like Facebook engage in illegal anticompetitive practices, stomp on consumer privacy rights, and repeatedly fumble their responsibility to protect our democracy.”

What would really “suck” is if we don’t fix a corrupt system that lets giant companies like Facebook engage in illegal anticompetitive practices, stomp on consumer privacy rights, and repeatedly fumble their responsibility to protect our democracy. https://t.co/rI0v55KKAi
— Elizabeth Warren (@ewarren) October 1, 2019

In a follow up tweet she added that she would not be afraid to “hold Big Tech companies like Facebook, Google and Amazon accountable”.
The Verge claims it did not obtain the leaked audio from Facebook’s PR machine. But in a public Facebook post following its publication of the audio snippets, Zuckerberg links to the article — and doesn’t exactly sound mad to have what he calls his “unfiltered” views put right out there…

Here are Zuckerberg’s thoughts on the leak. To answer some of the conspiracy tweets I’ve gotten: no, Facebook PR did not give me this audio. I wish! https://t.co/Z3oFgQwKu2 pic.twitter.com/p6Ej8Mb6zF
— Casey Newton (@CaseyNewton) October 1, 2019

Whether the audio was leaked intentionally or not, as many commentators have been quick to point out — Warren principal among them — the fact that a company has gotten so vastly powerful it feels able to threaten to fight and defeat its own government should give pause for civilized thought.
Someone high up in Facebook’s PR department might want to pull Zuckerberg aside and make a major wincing gesture right in his face.

Fortunately Facebook has no power to control what information flows to people https://t.co/cNwSUhB8JS
— David Dayen (@ddayen) October 1, 2019

In another of the audio snippets Zuckerberg extends the threat — arguing that breaking up tech giants would threaten the integrity of elections.
“It’s just that breaking up these companies, whether it’s Facebook or Google or Amazon, is not actually going to solve the issues,” he is heard saying. “And, you know, it doesn’t make election interference less likely. It makes it more likely because now the companies can’t coordinate and work together.”
Elections such as the one Warren hopes to be running in as a US presidential candidate… so er… again this argument is a very strange one to be making when the critics you’re railing against are calling you an overbearing, oversized democracy-denting beast.
Zuckerberg’s remarks also contain the implied threat that a failure to properly police elections, by Facebook, could result in someone like Warren not actually getting elected in the first place.
Given, y’know, the vast power Facebook wields with its content-shaping algorithms which amplify narratives and shape public opinion at cheap, factory farm scale.
Reading between the lines, then, presidential hopefuls should be really careful what they say about important technology companies — or, er, else!

Zuckerberg claims that one particular candidate poses an existential threat to his company. Minutes later, he claims the size of that company lets it effectively police the integrity of the election in which she is running.
He thinks this is an argument *against* breaking it up.
— Silpa Kovvali (@SilpaKov) October 1, 2019

How times change.
Just a few short years ago Zuckerberg was the guy telling everyone that election interference via algorithmically amplified social media fakes was “a pretty crazy idea”.
Now he’s saying only tech behemoths like Facebook can save democracy from, uh, tech behemoths like Facebook…

Zuckerberg now: Breaking up tech firms makes election interference more likely https://t.co/jlBuMScmoj Zuckerberg then: Election interference? Nah man. That’s just a “crazy idea” https://t.co/q9TVEbarAC
— Natasha (@riptari) October 1, 2019

For more on where Zuckerberg’s self-servingly circular logic leads, let’s refer to another of his public talking points: That only Facebook’s continued use of powerful, privacy-hostile AI technologies such as facial recognition can save Western society from a Chinese-style state dystopia in which the presence of your face broadcasts a social credit score for others to determine what you get to access.
This equally uncompelling piece of ‘Zuckerlogic’ sums to: ‘Don’t regulate our privacy hostile shit — or China will get to do worse shit before we can!’
So um… yeah but no.

Passbase grabs $3.6M to power privacy-preserving online ID checks

Digital identity startup Passbase has closed a $3.6 million seed round, led by Cowboy Ventures and Eniac Ventures, with participation from Seedcamp and other European investors.
The startup, founded in 2018, bagged a $600K pre-seed round earlier this year for its full-stack identity engine with a privacy twist.
The latest tranche of funding will go on growing the team and sales channels in the US and Europe, says co-founder Mathias Klenk. “Our goal is to build an API-first company, so building a strong core organization is key for us to be able to fully focus on securing partnerships with complementary services,” he tells TechCrunch.
“By the end of next year, we aim to have our consumer application rolled out so that individuals can leverage the core value proposition of our service and businesses can reap the rewards of seamless reauthentication,” he adds. “In terms of clients, our goal is to move up in scale and conduct pilots with some of the larger players in our target segment.”
Passbase launched an open beta in May and has been running tests over the summer, according to Klenk, who says around 15 companies have been actively testing the platform — claiming 300+ businesses have “expressed interest” in the product.
Early testers hail from industries including healthcare, the gig economy and mobility, with “exciting use cases in the pipeline from recruitment to financial services that will launch soon”, per Klenk.
What is the product? Passbase dubs it ‘Stripe for identity verification’ — meaning it’s offering APIs to make it easy for developers to plug and integrate a range of consumer-friendly identity checks into their digital services. Such as selfie video scans and identity document scanning. (Passbase is itself plugging into ID document verification services from a range of partners, augmented with add-ons such as a liveness check.)
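To picture what a “Stripe for identity verification”-style integration tends to look like, here is a purely hypothetical sketch using Python’s requests library — Passbase’s actual endpoints, field names and responses aren’t documented in this article, so every URL, key and parameter below is an invented stand-in.

```python
import requests

# Hypothetical values — invented for illustration, not Passbase's real API.
API_KEY = "sk_test_example"
VERIFY_URL = "https://api.passbase.example/v1/verifications"

def request_verification(id_document_path: str, selfie_path: str) -> dict:
    """Upload an ID document and a selfie and ask the service to verify that they match."""
    with open(id_document_path, "rb") as doc, open(selfie_path, "rb") as selfie:
        response = requests.post(
            VERIFY_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"document": doc, "selfie": selfie},
        )
    response.raise_for_status()
    return response.json()  # e.g. {"id": "...", "status": "pending"}
```

The appeal of this model for the integrating business is that it handles a verification result rather than the raw documents and biometrics themselves — which is the privacy pitch Passbase is making.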
It touts “NIST-certified facial recognition, forensic ID authenticity analysis, and a patent-pending zero-knowledge sharing architecture” as forming part of its stack. 
The overarching goal is to become a trusted intermediary exchange layer between businesses and end users — aka a “consent layer” — by building out a developer platform to support the integration of verification technologies into web services, while — on the consumer end — allowing web users to limit who gets access to their actual data. Hence the promise of privacy baked in.
“Our vision is to build out an open identity system that encourages services to hold less information, yet be sure of the quality of the result they are receiving,” adds Klenk.
Consumers can submit personal data to verify their ID, such as a facial biometric scan and identity document scan via their webcam, without having to rely on their data being exposed to and potentially mishandled by non-specialists — instead they have to trust Passbase’s tech architecture.
It also plans to launch a (free) consumer app early next year that will provide end users with controls over the information they’re sharing for ID verification and also serve up insights on how it’s being used — to give people “a holistic view and analytics of their data exposure online”, as Klenk puts it. 
Though it won’t require such highly engaged participation from end users — i.e. downloading its app to ‘claim their digital identity’.
“Our aim is to incorporate your digital identity into the verification flow,” he says, adding: “If you do not care enough about your digital footprint, you do not have to claim your digital identity and can process through a transactional relationship like with any other identity verification provider. However, with a combination of your biometrics and unique identifier, we have the first building blocks of creating a universal digital identity.”
Klenk says he expects access management and account recovery to become an important area for Passbase as — or, well, if — consumers adopt its idea of a “verified digital identity” which they can control.
“In terms of businesses accepting this, of course there are network effects in play,” he goes on. “That being said, identity works as a stack and if we manage to tie the root identity to additional credentials (through partnerships) like background checks, credit scores etc, it would be difficult to pass on using such a system. So at the end of the day, it comes down to who can offer the most full-stack solution.”
There’s plentiful and growing competition in the digital identity management space — including for privacy-protecting sign-ins now Apple has skin in the game — so Passbase certainly has its work cut out to get traction. Though it’s targeting fuller ID checks, arguing that a username and password are inadequate for many of the authentication checks which digital services now demand, given there are platforms offering to connect you to pretty much anyone these days, be it a medical professional, babysitter, taxi driver, cleaner, delivery driver or potential life partner.
Klenk says Passbase’s defensibility “comes from the B2B2C approach whereby we are creating a useful service for businesses from day 1, while enabling data ownership for consumers in order to create a more secure and privacy-preserving digital future”.
It does also have patents pending in the US.
“For some of the incumbents in the market, it is complicated to completely shift their business model, whereas for newer competitors, it comes down to the operating model and execution,” he also argues of the competitive landscape.
If Passbase can make its full-stack approach stick, the plan is to monetize via the developer platform, where it will offer businesses their first 50 verifications for free.
“Afterwards, our pricing has a platform access fee combined with a per verification cost. The reason being that as we build out more and more modules (ID document verification, phone number, living address, email, work permit) we plan to move towards a SaaS model, offering businesses all kinds of identification services for a predictable cost,” he says. “This is why our pricing also reflects a lower variable cost and increased subscription fee, as volumes grow.”
A self-service b2b product will launch next month — meaning any business will be able to tap Passbase’s APIs and integrate its verification service. The consumer app will naturally follow later.
“For the consumer, the product will always be free as we believe that the data needs to be given back and belong to consumers,” Klenk adds.

Megvii, the Chinese startup unicorn known for facial recognition tech, files to go public in Hong Kong

Megvii Technology, the Beijing-based artificial intelligence startup known in particular for its facial recognition brand Face++, has filed for a public listing on the Hong Kong stock exchange.
Its prospectus did not disclose share pricing or when the IPO will take place, but Reuters reports that the company plans to raise between $500 million and $1 billion and list in the fourth quarter of this year. Megvii’s investors include Alibaba, Ant Financial and the Bank of China. Its last funding round was a Series D of $750 million announced in May that reportedly brought its valuation to more than $4 billion.
Founded by three Tsinghua University graduates in 2011, Megvii is among China’s leading AI startups, with its peers (and rivals) including SenseTime and Yitu. Its clients include Alibaba, Ant Financial, Lenovo, China Mobile and Chinese government entities.
The company’s decision to list in Hong Kong comes against the backdrop of an economic recession and political unrest, including pro-democracy demonstrations, factors that have contributed to a slump in the value of the benchmark Hang Seng index. Last month, Alibaba reportedly decided to postpone its Hong Kong listing until the political and economic environment becomes more favorable.
Megvii’s prospectus discloses both rapid growth in revenue and widening losses, which the company attributes to changes in the fair value of its preferred shares and investment in research and development. Its revenue grew from 67.8 million RMB in 2016 to 1.42 billion RMB in 2018, representing a compound annual growth rate of about 359%. In the first six months of 2019, it made 948.9 million RMB. Between 2016 and 2018, however, its losses increased from 342.8 million RMB to 3.35 billion RMB, and in the first half of this year, Megvii has already lost 5.2 billion RMB.
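For readers who want to sanity-check that growth figure, the compound annual growth rate implied by the two revenue numbers above can be worked out in a couple of lines (a back-of-the-envelope calculation using only the figures reported in the prospectus):

```python
# Back-of-the-envelope check of Megvii's reported revenue CAGR (2016-2018).
revenue_2016 = 67.8e6    # RMB
revenue_2018 = 1.42e9    # RMB
years = 2

cagr = (revenue_2018 / revenue_2016) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")   # roughly 358%, in line with the ~359% figure cited
```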
Investment risks listed by Megvii include high R&D costs, the U.S.-China trade war and negative publicity over facial recognition technology. Earlier this year, Human Rights Watch published a report that linked Face++ to a mobile app used by Chinese police and officials for mass surveillance of Uighurs in Xinjiang, but it later added a correction that said Megvii’s technology had not been used in the app. Megvii’s prospectus alluded to the report, saying that in spite of the correction, the report “still caused significant damages to our reputation which are difficult to completely mitigate.”
The company also said that despite internal measures to prevent misuse of Megvii’s tech, it cannot assure investors that those measures “will always be effective,” and that AI technology’s risks and challenges include “misuse by third parties for inappropriate purposes, for purposes breaching public confidence or even violate applicable laws and regulations in China and other jurisdictions, bias applications or mass surveillance, that could affect user perception, public opinions and their adoption.”
From a macroeconomic perspective, Megvii’s investment risks include the restrictions and tariffs placed on Chinese exports to the U.S. as part of the ongoing trade war. It also cited reports that Megvii is among the Chinese tech companies the U.S. government may add to trade blacklists. “Although we are not aware of, nor have we received any notification, that we have been added as a target of any such restrictions as of the date this Document, the existence of such media reports itself has already damaged our reputation and diverted our management’s attention,” the prospectus said. “Whether or not we will be included as a target for economic and trade restrictions is beyond our control.”

The renaissance of silicon will create industry giants

Navin Chaddha
Contributor

Navin Chaddha leads Mayfield. The firm invests in early-stage consumer and enterprise technology companies and currently has $2.7 billion under management.

More posts by this contributor
A people-first view of investing in innovation or entrepreneurs who take people for granted will fail
Jason Kilar on founding Vessel and the wonderful world of customer service

Every time we binge on Netflix or install a new internet-connected doorbell to our home, we’re adding to a tidal wave of data. In just 10 years, bandwidth consumption has increased 100 fold, and it will only grow as we layer on the demands of artificial intelligence, virtual reality, robotics and self-driving cars. According to Intel, a single robo car will generate 4 terabytes of data in 90 minutes of driving. That’s more than 3 billion times the amount of data people use chatting, watching videos and engaging in other internet pastimes over a similar period.
Tech companies have responded by building massive data centers full of servers. But growth in data consumption is outpacing even the most ambitious infrastructure build outs. The bottom line: We’re not going to meet the increasing demand for data processing by relying on the same technology that got us here.
The key to data processing is, of course, semiconductors, the transistor-filled chips that power today’s computing industry. For the last several decades, engineers have been able to squeeze more and more transistors onto smaller and smaller silicon wafers — an Intel chip today squeezes more than 1 billion transistors onto a millimeter-sized piece of silicon.
This trend is commonly known as Moore’s Law, for the Intel co-founder Gordon Moore and his famous 1965 observation that the number of transistors on a chip doubles every year (later revised to every two years), thereby doubling the speed and capability of computers.
This exponential growth of power on ever-smaller chips has reliably driven our technology for the past 50 years or so. But Moore’s Law is coming to an end, due to an even more immutable law: material physics. It simply isn’t possible to squeeze more transistors onto the tiny silicon wafers that make up today’s processors.
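As a simple illustration of what “doubles every two years” implies, the snippet below compounds a transistor count over several decades. The 1971 starting point (Intel’s 4004, roughly 2,300 transistors) is used only to show the shape of the curve, not as a precise industry history.

```python
# Illustration of Moore's Law: transistor count doubling every two years.
# The 1971 starting count (Intel 4004, ~2,300 transistors) is used only to
# show the exponential shape of the curve, not as exact industry history.
start_year, start_count = 1971, 2_300

for year in range(start_year, 2021, 10):
    doublings = (year - start_year) / 2
    count = start_count * 2 ** doublings
    print(f"{year}: ~{count:,.0f} transistors")
```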
Compounding matters, the general-purpose chip architecture in wide use today, known as x86, which has brought us to this point, isn’t optimized for computing applications that are now becoming popular.
That means we need a new computing architecture. Or, more likely, multiple new computer architectures. In fact, I predict that over the next few years we will see a flowering of new silicon architectures and designs that are built and optimized for specialized functions, including data intensity, the performance needs of artificial intelligence and machine learning and the low-power needs of so-called edge computing devices.
The new architects
We’re already seeing the roots of these newly specialized architectures on several fronts. These include Graphic Processing Units from Nvidia, Field Programmable Gate Arrays from Xilinx and Altera (acquired by Intel), smart network interface cards from Mellanox (acquired by Nvidia) and a new category of programmable processor called a Data Processing Unit (DPU) from Fungible, a startup Mayfield invested in. DPUs are purpose-built to run all data-intensive workloads (networking, security, storage), and Fungible combines them with a full-stack platform for cloud data centers that works alongside the old workhorse CPU.
These and other purpose-designed silicon will become the engines for one or more workload-specific applications — everything from security to smart doorbells to driverless cars to data centers. And there will be new players in the market to drive these innovations and adoptions. In fact, over the next five years, I believe we’ll see entirely new semiconductor leaders emerge as these services grow and their performance becomes more critical.
Let’s start with the computing powerhouses of our increasingly connected age: data centers.
More and more, storage and computing are being done at the edge; that is, closer to where our devices need them. These include things like the facial recognition software in our doorbells or in-cloud gaming that’s rendered on our VR goggles. Edge computing allows these and other processes to happen within 10 milliseconds or less, which makes them workable for end users.

I commend the entrepreneurs who are putting the silicon back into Silicon Valley.

With the current arithmetic computations of x86 CPU architecture, deploying data services at scale, or at larger volumes, can be a challenge. Driverless cars need massive, data-center-level agility and speed. You don’t want a car buffering when a pedestrian is in the crosswalk. As our workload infrastructure — and the needs of things like driverless cars — becomes ever more data-centric (storing, retrieving and moving large data sets across machines), it requires a new kind of microprocessor.
Another area that requires new processing architectures is artificial intelligence, both in training AI and running inference (the process AI uses to infer things about data, like a smart doorbell recognizing the difference between an in-law and an intruder). Graphic Processing Units (GPUs), which were originally developed to handle gaming, have proven faster and more efficient at AI training and inference than traditional CPUs.
But in order to process AI workloads (both training and inference), for image classification, object detection, facial recognition and driverless cars, we will need specialized AI processors. The math needed to run these algorithms requires vector processing and floating-point computations at dramatically higher performance than general purpose CPUs provide.
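To give a feel for why vector processing matters for these workloads, the short sketch below contrasts an element-by-element Python loop with a single vectorized NumPy operation over the same data. The modest speedup visible on an ordinary CPU is only a stand-in for the far larger gains that purpose-built silicon targets.

```python
# Rough illustration of why vector processing matters: the same floating-point
# multiply-add done element by element versus as one vectorized operation.
import time
import numpy as np

n = 10_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

# Scalar loop: one element at a time (a naive general-purpose execution path).
t0 = time.perf_counter()
out = np.empty(n, dtype=np.float32)
for i in range(n):
    out[i] = a[i] * b[i] + 1.0
loop_time = time.perf_counter() - t0

# Vectorized: the whole array in one call, letting optimized SIMD code do the work.
t0 = time.perf_counter()
out_vec = a * b + 1.0
vec_time = time.perf_counter() - t0

print(f"loop: {loop_time:.2f}s, vectorized: {vec_time:.4f}s, "
      f"speedup: {loop_time / vec_time:.0f}x")
```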
Several startups are working on AI-specific chips, including SambaNova, Graphcore and Habana Labs. These companies have built new AI-specific chips for machine intelligence. They lower the cost of accelerating AI applications and dramatically increase performance. Conveniently, they also provide a software platform for use with their hardware. Of course, the big AI players like Google (with its custom Tensor Processing Unit chips) and Amazon (which has created an AI chip for its Echo smart speaker) are also creating their own architectures.
Finally, we have our proliferation of connected gadgets, also known as the Internet of Things (IoT). Many of our personal and home tools (such as thermostats, smoke detectors, toothbrushes and toasters) operate on ultra-low power.
The ARM processor, which is a family of CPUs, will be tasked for these roles. That’s because gadgets do not require computing complexity or a lot of power. The ARM architecture is perfectly designed for them. It’s made to handle a smaller number of computing instructions, can operate at higher speeds (churning through many millions of instructions per second) and do it at a fraction of the power required for performing complex instructions. I even predict that ARM-based server microprocessors will finally become a reality in cloud data centers.
So with all the new work being done in silicon, we seem to be finally getting back to our original roots. I commend the entrepreneurs who are putting the silicon back into Silicon Valley. And I predict they will create new semiconductor giants.

Traces AI is building a less invasive alternative to facial recognition tracking

With all of the progress we’ve seen in deep learning tech in the past few years, it seems pretty inevitable that security cameras will become smarter and more capable at tracking, but there are more options than we think in how we choose to pull this off.
Traces AI is a new computer vision startup, in Y Combinator’s latest batch of bets, that’s focused on helping cameras track people without relying on facial recognition data, something the founders believe is too invasive of the public’s privacy. The startup’s technology actually blurs out all human faces in frame, only relying on the other physical attributes of a person.
“It’s a combination of different parameters from the visuals. We can use your hair style, whether you have a backpack, your type of shoes and the combination of your clothing,” co-founder Veronica Yurchuk tells TechCrunch.
Tech like this obviously doesn’t scale too well for a multi-day, city-wide manhunt and leaves room for some Jason Bourne-esque criminals to turn their jackets inside out and toss on a baseball cap to evade detection. As a potential customer, why forgo a sophisticated technology just to stave off dystopia? Well, Traces AI isn’t so convinced that facial recognition tech is always the best solution; the founders believe that facial tracking isn’t something every customer wants or needs, and that there should be more variety in terms of solutions.
“The biggest concern [detractors] have is, ‘Okay, you want to ban the technology that is actually protecting people today, and will be protecting this country tomorrow?’ And, that’s hard to argue with, but what we are actually trying to do is propose an alternative that will be very effective but less invasive of privacy,” co-founder Kostya Shysh tells me.
Earlier this year, San Francisco banned government agencies from the use of facial recognition software, and it’s unlikely that they will be the only city to make that choice. In our conversation, Shysh also highlighted some of the backlash to Detroit’s Project Green Light which brought facial recognition surveillance tech city-wide.
Traces AI’s solution can also be a better option for closed venues that have limited data on the people on their premises in the first place. One use case Shysh highlighted was being able to find a lost child in an amusement park with just a little data.
“You can actually give them a verbal description. So if you say it’s a missing 10-year-old boy, and he had blue shorts and a white t-shirt, that will be enough information for us to start a search,” Shysh says.
In addition to being a better way to promote privacy, Shysh also sees the technology as a more effective way to reduce the racial bias of these computer vision systems which have proven less adept at distinguishing non-white faces, and are thus often more prone to false positives.
“The way our technology works, we actually blur faces of the people before sending it to the cloud. We’re doing it intentionally as one of the safety mechanisms to protect from racial and gender biases as well,” Shysh says.
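The face-blurring step Shysh describes can be approximated with off-the-shelf tools. The sketch below uses OpenCV’s bundled Haar cascade face detector to blur every detected face in a frame before it leaves the device; it is a generic illustration of the technique, not Traces AI’s actual pipeline.

```python
# Generic sketch of blurring faces in a frame before uploading it anywhere.
# This illustrates the technique only; it is not Traces AI's actual pipeline.
import cv2

# OpenCV ships a pretrained Haar cascade face detector with its data files.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)

def blur_faces(frame):
    """Return a copy of the frame with every detected face Gaussian-blurred."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    out = frame.copy()
    for (x, y, w, h) in faces:
        roi = out[y:y + h, x:x + w]
        out[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return out

if __name__ == "__main__":
    frame = cv2.imread("camera_frame.jpg")      # placeholder input frame
    cv2.imwrite("camera_frame_blurred.jpg", blur_faces(frame))
```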
The co-founders say that the U.S. and Great Britain are likely going to be their biggest markets due to the high quantity of CCTV cameras, but they’re also pursuing customers in Asian countries like Japan and Singapore where face-obscuring facial masks are often worn and can leave facial tracking software much less effective.

Artificial intelligence can contribute to a safer world

Matt Ocko
Contributor

Share on Twitter

Matt Ocko is co-Managing Partner and co-founder of DCVC (Data Collective).

Alan Cohen
Contributor

Share on Twitter

Alan Cohen is an operating partner at DCVC.

We all see the headlines nearly every day. A drone disrupting the airspace in one of the world’s busiest airports, putting aircraft at risk (and inconveniencing hundreds of thousands of passengers), or attacks on critical infrastructure. Or a shooting in a place of worship, a school, a courthouse. Whether primitive (gunpowder) or cutting-edge (unmanned aerial vehicles), in the wrong hands technology can empower bad actors and put our society at risk, creating a sense of helplessness and frustration.
Current approaches to protecting our public venues are not up to the task and, frankly, appear to meet Einstein’s definition of insanity: “doing the same thing over and over and expecting a different result.” It is time to look past traditional defense technologies and see if newer approaches can tilt the pendulum back in the defender’s favor. Artificial Intelligence (AI) can play a critical role here, helping to identify, classify and promulgate counteractions on potential threats faster than any security personnel.
Using technology to prevent violence, specifically by searching for concealed weapons has a long history. Alexander Graham Bell invented the first metal detector in 1881 in an unsuccessful attempt to locate the fatal slug as President James Garfield lay dying of an assassin’s bullet. The first commercial metal detectors were developed in the 1960s. Most of us are familiar with their use in airports, courthouses and other public venues to screen for guns, knives and bombs.
However, metal detectors are slow and full of false positives – they cannot distinguish between a Smith & Wesson and an iPhone.  It is not enough to simply identify a piece of metal; it is critical to determine whether it is a threat.  Thus, the physical security industry has developed newer approaches, including full-body scanners – which are now deployed on a limited basis. While effective to a point, the systems in use today all have significant drawbacks. One is speed. Full body scanners, for example, can process only about 250 people per hour, not much faster than a metal detector. While that might be okay for low volume courthouses, it’s a significant problem for larger venues like a sporting arena.
Fortunately, new AI technologies are enabling major advances in physical security capabilities. These new systems not only deploy advanced sensors to screen for guns, knives and bombs, they get smarter with each screen, creating an increasingly large database of known and emerging threats while segmenting off alarms for common, non-threatening objects (keys, change, iPads, etc.)
As part of a new industrial revolution in physical security, engineers have developed a welcomed approach to expediting security screenings for threats through machine learning algorithms, facial recognition, and advanced millimeter wave and other RF sensors to non-intrusively screen people as they walk through scanning devices.  It’s like walking through sensors at the door at Nordstrom, the opposite of the prison-like experience of metal detectors with which we are all too familiar.  These systems produce an analysis of what someone may be carrying in about a hundredth of a second, far faster than full body scanners. What’s more, people do not need to empty their pockets during the process, further adding speed. Even so, these solutions can screen for firearms, explosives, suicide vests or belts at a rate of about 900 people per hour through one lane.
Using AI, advanced screening systems enable people to walk through quickly and receive an automated decision without creating a bottleneck. This throughput greatly improves traffic flow while also improving the accuracy of detection, and makes the technology suitable for larger facilities such as stadiums and other public venues, including Lincoln Center in New York City and the Oakland airport.
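To put those per-lane rates in context, here is a rough sizing exercise for an event venue; the 18,000-person crowd and two-hour entry window are assumptions chosen purely for illustration.

```python
# Illustrative lane count needed to admit an assumed 18,000-person arena crowd
# in the two hours before an event; crowd size and window are assumptions.
import math

attendees, hours = 18_000, 2
required_rate = attendees / hours               # people per hour

for name, rate in [("full-body scanner", 250), ("AI walk-through system", 900)]:
    lanes = math.ceil(required_rate / rate)
    print(f"{name}: {lanes} lanes at {rate}/hour")
```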
Apollo Shield’s anti-drone system.
 
So much for the land; what about the air? Increasingly, drones are being used as weapons. Famously, this was seen in a drone attack last year against Venezuelan president Nicolas Maduro. An airport drone incident drew widespread attention when a drone shut down Gatwick Airport in late 2018, inconveniencing and stranding tens of thousands of people.
People are rightly concerned about how easy it is to get a gun. Drones are also easy to acquire and operate, and quite difficult to monitor and to defend against. AI is now being deployed to prevent drone attacks, whether at airports, stadiums, or critical infrastructure. For example, new AI-powered radar technology is being used to detect, classify, monitor and safely capture drones identified as dangerous.
Additionally, these systems can rapidly develop a map of the airspace and effectively create a security “dome” around specific venues or areas. These systems have an integration component to coordinate with on-the-ground security teams and first responders. Some even have a capture drone to incarcerate a suspicious drone. When a threatening drone is detected and classified by the system as dangerous, the capture drone is dispatched and nets the invading drone. The hunter then tows the targeted drone to a safe zone for the threat to be evaluated and, if needed, destroyed.
While there is much dialogue about the potential risk of AI affecting our society, there is also a positive side to these technologies.  Coupled with our best physical security approaches, AI can help prevent violent incidents.

Week in Review: Netflix’s big problem and Apple’s thinnest product yet

Hey. This is Week-in-Review, where I give a heavy amount of analysis and/or rambling thoughts on one story while scouring the rest of the hundreds of stories that emerged on TechCrunch this week to surface my favorites for your reading pleasure.
Last week, I talked about the Capital One breach and how Equifax taught us that irresponsible actions only affect companies in the PR department.


The big story
Disney is going to eat Netflix’s lunch.
The content giant announced this week that when Disney+ launches, it will be shipping a $12.99 bundle that brings its Disney+ streaming service, ESPN+ and ad-supported Hulu together into a single-pay package. That price brings those three services together for the same cost as Netflix and is $5 cheaper than what you would spend on each of the services individually.
This announcement from Disney comes after Netflix stuttered in its most recent earnings, missing big on its subscriber adds while actually losing subscribers in the U.S.

Disney will bundle Hulu, ESPN+ and Disney+ for a monthly price of $12.99

Netflix isn’t the aggregator it once was; its library is consistently shifting, with original series taking the dominant position. As much as Netflix is spending on content, there’s simply no way that it can operate on the same plane as Disney, which has been making massive content buys and is circling around to snap up the market by acquiring its way into consumers’ homes.
Disney has slowly amassed control of Hulu through buying out various stakeholders, but now that it controls the platform, it’s pretty clear that it will use it as a selling point for its time-honed in-house content, which it is still expanding.
The streaming wars have been raging for years, but as the services seem to become more like what they’ve replaced, Disney seems poised to take control.
Send me feedback on Twitter @lucasmtny or email [email protected]
On to the rest of the week’s news.

Trends of the week
Here are a few big news items from big companies, with green links to all the sweet, sweet added context:
Apple Card rolls out
Months after its public debut, Apple has begun rolling out its Apple Card credit card. We got our hands on the new Apple Card app, so check out more about what it’s like here.
Amid a struggling smartphone market, Samsung introduces new flagships
The smartphone market is in a low-key free fall, but there’s not much for hardware makers to do than keep innovating. Samsung announced the release of two new phones for its Note series, with new features including a time-of-flight 3D scanning camera, a larger size and… no headphone jack. Read more here.
FedEx ties up ground contract with Amazon
As Amazon rapidly attempts to build out its own air fleet to compete with FedEx’s planes, FedEx confirmed this week that it’s ending its ground-delivery contract with Amazon. Read more here.

GAFA Gaffes
How did the top tech companies screw up this week? This clearly needs its own section, in order of badness:
Facebook could get fined billions more:
[Facebook could face billions in potential damages as court rules facial recognition lawsuit can proceed]
Instagram gets its own Cambridge Analytica:
[Instagram ad partner secretly sucked up and tracked millions of users’ locations and stories]

Extra Crunch

Our premium subscription service had another week of interesting deep dives. My colleague Sarah Buhr had a few great conversations with VCs in the healthtech space and distilled some of their investment theses into a report.

What leading HealthTech VCs are investing in 
Why is tech still aiming for the healthcare industry? It seems full of endless regulatory hurdles or stories of misguided founders with no knowledge of the space, running headlong into it, only to fall on their faces…
It’s easy to shake our fists at fool-hardy founders hoping to cash in on an industry that cannot rely on the old motto “move fast and break things.” But it doesn’t have to be the code tech lives or dies by.
So which startups have the mojo to keep at it and rise to the top? Venture capitalists often get to see a lot before deciding to invest. So we asked a few of our favorite health VCs to share their insights.
Here are some of our other top reads this week for premium subscribers. This week, we talked about how to raise funding in August, a month not typically known for ease of access to VCs, and my colleague Ron dove into the MapR fire sale that took place this week:
How to fundraise in August
With MapR fire sale, Hadoop’s promise has fallen on hard times
We’re excited to ramp up The Station, a new TechCrunch newsletter all about mobility. Each week, in addition to curating the biggest transportation news, Kirsten Korosec will provide analysis, original reporting and insider tips. Sign up here to get The Station in your inbox beginning this month.

Why AI needs more social workers, with Columbia University’s Desmond Patton

Sometimes it does seem the entire tech industry could use someone to talk to, like a good therapist or social worker. That might sound like an insult, but I mean it mostly earnestly: I am a chaplain who has spent 15 years talking with students, faculty, and other leaders at Harvard (and more recently MIT as well), mostly nonreligious and skeptical people like me, about their struggles to figure out what it means to build a meaningful career and a satisfying life, in a world full of insecurity, instability, and divisiveness of every kind.
In related news, I recently took a year-long paid sabbatical from my work at Harvard and MIT, to spend 2019-20 investigating the ethics of technology and business (including by writing this column at TechCrunch). I doubt it will shock you to hear I’ve encountered a lot of amoral behavior in tech, thus far.
A less expected and perhaps more profound finding, however, has been what the introspective founder Priyag Narula of LeadGenius tweeted at me recently: that behind the hubris and Machiavellianism one can find in tech companies is a constant struggle with anxiety and an abiding feeling of inadequacy among tech leaders.
In tech, just like at places like Harvard and MIT, people are stressed. They’re hurting, whether or not they even realize it.
So when Harvard’s Berkman Klein Center for Internet and Society recently posted an article whose headline began, “Why AI Needs Social Workers…”… it caught my eye.
The article, it turns out, was written by Columbia University Professor Desmond Patton. Patton is a Public Interest Technologist and a pioneer in the use of social media and artificial intelligence in the study of gun violence. He is the founding Director of Columbia’s SAFElab and Associate Professor of Social Work, Sociology and Data Science at Columbia University.
A trained social worker and decorated social work scholar, Patton has also become a big name in AI circles in recent years. If Big Tech ever decided to hire a Chief Social Work Officer, he’d be a sought-after candidate.
It further turns out that Patton’s expertise — in online violence & its relationship to violent acts in the real world — has been all too “hot” a topic this past week, with mass murderers in both El Paso, Texas and Dayton, Ohio having been deeply immersed in online worlds of hatred which seemingly helped lead to their violent acts.
Fortunately, we have Patton to help us understand all of these issues. Here is my conversation with him: on violence and trauma in tech on and offline, and how social workers could help; on deadly hip-hop beefs and “Internet Banging” (a term Patton coined); hiring formerly gang-involved youth as “domain experts” to improve AI; how to think about the likely growing phenomenon of white supremacists live-streaming barbaric acts; and on the economics of inclusion across tech.
Greg Epstein: How did you end up working in both social work and tech?
Desmond Patton: At the heart of my work is an interest in root causes of community-based violence, so I’ve always identified as a social worker that does violence-based research. [At the University of Chicago] my dissertation focused on how young African American men navigated violence in their community on the west side of the city while remaining active in their school environment.
[From that work] I learned more about the role of social media in their lives. This was around 2011, 2012, and one of the things that kept coming through in interviews with these young men was how social media was an important tool for navigating both safe and unsafe locations, but also an environment that allowed them to project a multitude of selves. To be a school self, to be a community self, to be who they really wanted to be, to try out new identities.

Facebook could face billions in potential damages as court rules facial recognition lawsuit can proceed

Facebook is facing exposure to billions of dollars in potential damages as a federal appeals court on Thursday rejected Facebook’s arguments to halt a class action lawsuit claiming it illegally collected and stored the biometric data of millions of users.
The class action lawsuit has been working its way through the courts since 2015, when Illinois Facebook users sued the company for alleged violations of the state’s Biometric Information Privacy Act by automatically collecting and identifying people in photographs posted to the service.
Now, thanks to a unanimous decision from the 9th U.S. Circuit Court of Appeals in San Francisco, the lawsuit can proceed.
The most significant language from the decision from the circuit court seems to be this:

We conclude that the development of a face template using facial-recognition technology without consent (as alleged here) invades an individual’s private affairs and concrete interests. Similar conduct is actionable at common law.

The American Civil Liberties Union came out in favor of the court’s ruling.
“This decision is a strong recognition of the dangers of unfettered use of face surveillance technology,” said Nathan Freed Wessler, staff attorney with the ACLU Speech, Privacy, and Technology Project, in a statement. “The capability to instantaneously identify and track people based on their faces raises chilling potential for privacy violations at an unprecedented scale. Both corporations and the government are now on notice that this technology poses unique risks to people’s privacy and safety.”
As April Glaser noted in “Slate”, Facebook already may have the world’s largest database of faces, and that’s something that should concern regulators and privacy advocates.
“Facebook wants to be able to certify identity in a variety of areas of life just as it has been trying to corner the market on identity verification on the web,” Siva Vaidhyanathan told Slate in an interview. “The payoff for Facebook is to have a bigger and broader sense of everybody’s preferences, both individually and collectively. That helps it not only target ads but target and develop services, too.”
That could apply to facial recognition technologies as well. Facebook, thankfully, doesn’t sell its facial recognition data to other people, but it does allow companies to use its data to target certain populations. It also allows people to use its information for research and to develop new services that could target Facebook’s billion-strong population of users.
As our own Josh Constine noted in an article about the company’s planned cryptocurrency wallet, the developer community poses as much of a risk to how Facebook’s products and services are used and abused as Facebook itself.

The real risk of Facebook’s Libra coin is crooked developers

Facebook has said that it plans to appeal the decision. “We have always disclosed our use of face recognition technology and that people can turn it on or off at any time,” a spokesman said in an email to “Reuters”.
Now, the lawsuit will go back to the court of U.S. District Judge James Donato in San Francisco who approved the class action lawsuit last April for a possible trial.
Under the privacy law in Illinois, negligent violations could be subject to damages of up to $1,000, and intentional violations of privacy are subject to up to $5,000 in penalties. For the potential 7 million Facebook users that could be included in the lawsuit, those figures could amount to real money.
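To put those statutory amounts in perspective, a rough upper-bound calculation of the class-wide exposure looks like this; it simply multiplies the reported potential class size by the per-violation figures and assumes a single violation per class member.

```python
# Rough upper-bound exposure under BIPA, using only the figures reported above
# and assuming one violation per class member.
class_size = 7_000_000
negligent_per_violation = 1_000      # USD
intentional_per_violation = 5_000    # USD

print(f"Negligent violations:   ${class_size * negligent_per_violation:,}")    # $7,000,000,000
print(f"Intentional violations: ${class_size * intentional_per_violation:,}")  # $35,000,000,000
```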
“BIPA’s innovative protections for biometric information are now enforceable in federal court,” added Rebecca Glenberg, senior staff attorney at the ACLU of Illinois. “If a corporation violates a statute by taking your personal information without your consent, you do not have to wait until your data is stolen or misused to go to court. As our General Assembly understood when it enacted BIPA, a strong enforcement mechanism is crucial to hold companies accountable when they violate our privacy laws. Corporations that misuse Illinoisans’ sensitive biometric data now do so at their own peril.”
These civil damages could come on top of fines that Facebook has already paid to the U.S. government for violating its agreement with the Federal Trade Commission over its handling of private user data. That resulted in one of the single largest penalties levied against a U.S. technology company. Facebook is potentially on the hook for a $5 billion payout to the U.S. government. That penalty is still subject to approval by the Justice Department.

Trueface raises $3.7M to recognise that gun, as it’s being pulled, in real time

Globally, millions of cameras are deployed by companies and organizations every year. All you have to do is look up. Yes, there they are! But the petabytes of data collected by these cameras really only become useful after something untoward has occurred. They can very rarely influence an action in “real-time”.
Trueface is a US-based computer vision company that turns camera data into so-called ‘actionable data’ using machine learning and AI by employing partners who can perform facial recognition, threat detection, age and ethnicity detection, license plate recognition, emotion analysis as well as object detection. That means, for instance, recognising a gun, as it’s pulled in a dime store. Yes folks, welcome to your brave new world.
The company has now raised $3.7M from Lavrock Ventures, Scout Ventures, and Advantage Ventures to scale the team growing partnerships and market share.
Trueface claims it can identify enterprises’ employees for access to a building, detect a weapon as it’s being wielded, or stop fraudulent spoofing attempts. Quite some claims.
However, it’s good enough for the US Air Force, which recently partnered with the company to enhance base security.
Trueface’s computer vision software was originally embedded in a hardware access control device: Chui, one of the first ‘intelligent doorbells’, which was covered by TechCrunch’s Anthony Ha in 2014.
Trueface has multiple solutions to run on an array of clients’ infrastructures including a dockerized container, SDKs that partners can use to build their own solutions with, and a plug and play solution that requires no code to get up and running.
The solution can be deployed in various scenarios such as fintech, healthcare, retail to humanitarian aid, age verification, digital identity verification and threat detection. Shaun Moore and Nezare Chafni are the cofounders and CEO and CTO, respectively.
The computer vision market was valued at USD 9.28 billion in 2017 and is now set to reach a valuation of USD 48.32 billion by the end of 2023.
Facial recognition use by government agencies was recently banned in the city of San Francisco. There are daily news stories about privacy concerns around facial recognition, especially in regards to how China is using computer vision technology.
However, Trueface is only deployed ‘on-premise’ and includes features like ‘fleeting data’ and blurring for people who have not opted in. It’s good to see a company building in such controls, from the word go.
However, it’s then up to the company you work for not to require you to sign a statement saying you are happy to have your face recognized. Interesting times, huh?
And if you want that job, well, that’s a whole other story, as I’m sure you can imagine.

Google’s Pixel 4 smartphone will have motion control and face unlock

Google’s Pixel 4 is coming out later this year, and it’s getting the long reveal treatment thanks to a decision this year from Google to go ahead and spill some of the beans early, rather than saving everything for one big final unveiling closer to availability. A new video posted by Google today about the forthcoming Pixel 4 (which likely won’t actually be available until fall) shows off some features new to this generation: Motion control and face unlock.
The new “Motion Sense” feature in the Pixel 4 will detect waves of your hand and translate them into software control, including skipping songs, snoozing alarms and quieting incoming phone call alerts, with more planned features to come, according to Google. It’s based on Soli, a radar-based fine motion detection technology that Google first revealed at its I/O annual developer conference in 2016. Soli can detect very fine movements, including fingers pinched together to mimic a watch-winding motion, and it got approval from the FCC in January, hinting it would finally be arriving in production devices this year.
Pixel 4 is the first shipping device to include Soli, and Google says it’ll be available in “select Pixel countries” at launch (probably due to similar approvals requirements wherever it rolls out to consumers).

Google also teased “Face unlock,” something it has supported in Android previously – but with the Pixel 4, Google is doing it very differently than it has been handled on Android in the past. Once again, Soli is part of its implementation, turning on the face unlock sensors in the device as it detects your hand reaching to pick up the device. Google says this should mean that the phone will be unlocked by the time you’re ready to use it, since it does this all on the fly, and works from pretty much any orientation.
Face unlock will be supported for authorizing payments and logging into Android apps, as well, and all of the facial recognition processing done for face unlock will occur on the device – a privacy-oriented feature that’s similar to how Apple handles its own Face ID. In fact, Google will also be storing all the facial recognition data securely in its own dedicated on-device Titan M security chip, another move similar to Apple’s own approach.
Google made the Pixel 4 official and tweeted photos (or maybe photorealistic renders) of the new smartphone back in June, bucking the trend of keeping things unconfirmed until an official reveal closer to release. Based on this update, it seems likely we can expect to learn more about the new smartphone ahead of its availability, which is probably going to happen sometime around October based on past behavior.

Cities must plan ahead for innovation without leaving people behind

Yung Wu
Contributor

Yung Wu is the CEO of MaRS Discovery District, a Toronto-based innovation hub.

More posts by this contributor
To actually change the world, Big Tech needs to grow up

From Toronto to Tokyo, the challenges faced by cities today are often remarkably similar: climate change, rising housing costs, traffic, economic polarization, unemployment. To tackle these problems, new technology companies and industries have been sprouting and scaling up with innovative digital solutions like ride sharing and home sharing. Without a doubt, the city of the future must be digital. It must be smart. It must work for everyone.
This is a trend civic leaders everywhere need to embrace wholeheartedly. But building a truly operational smart city is going to take a village, and then some. It won’t happen overnight, but progress is already under way.
As tech broadens its urban footprint, there will be more and more potential for conflict between innovation and citizen priorities like privacy and inclusive growth. Last month, we were reminded of that in Toronto, where planning authorities from three levels of government released a 1,500-page plan by Alphabet’s Sidewalk Labs meant to pave the way for a futuristic waterfront development. Months in the making, the plan met with considerably less than universal acclaim.
But whether it’s with Sidewalk or other tech partners, the imperative to resolve these conflicts becomes even stronger for cities like Toronto. If they’re playing this game to win, civic leaders need to minimize the damage and maximize the benefits for the people they represent. They need to develop co-ordinated innovation plans that prioritize transparency, public engagement, data privacy and collaboration.

Transparency
The Sidewalk Labs plan is full of tech-forward proposals for new transit, green buildings and affordable housing, optimized by sensors, algorithms and mountains of data. But even the best intentions of a business or a city can be misconstrued when leaders fail to be transparent about their plans. Openness and engagement are critical for building legitimacy and social license.
Sidewalk says it consulted 21,000 Toronto citizens while developing its proposal. But some critics have already complained that the big decisions were made behind closed doors, with too many public platitudes and not enough debate about issues raised by citizens, city staff and the region’s already thriving innovation ecosystem.
In defense of Sidewalk Labs and Alphabet, their roots are in Internet services. They are relative newcomers to the give and take of community consultation. But they are definitely now hearing how citizens would prefer to be engaged and consulted.
As for the public planners, they have a number of excellent examples to draw from. In Barcelona, for example, the city government opened up its data sets to citizens to encourage shared use among private, public and academic sectors. And in Pittsburgh, which has become a hub for the testing of autonomous vehicles, the city provided open forum opportunities for the public to raise questions, concerns and issues directly with civic decision-makers.
Other forward-looking cities, such as San Francisco, Singapore, Helsinki and Glasgow, are already using digital technology and smart sensors to build futuristic urban services that can serve as real-world case studies for Toronto and others. However, to achieve true success, city officials need to earn residents’ trust and confidence that they are following and adapting best practices.
Data privacy
Access to shared data is crucial to informing and improving tech-enabled urban innovation. But it could also fuel a technologically driven move toward surveillance capitalism or a surveillance state – profiteering or big brother instead of trust and security.
The Sidewalk proposal respects the principles of responsible use of data and artificial intelligence. It outlines principles for guiding the smart-city project’s ethical handling of citizen data and secure use of emerging technologies like facial recognition. But these principles aren’t yet accompanied by clear, enforceable standards.
Members of the MaRS Discovery District recently co-authored an open-source report with fellow design and data governance experts, outlining how privacy conflicts could be addressed by an ethical digital trust. A digital trust ought to be transparently governed by independent, representative third-party trustees. Its trustees should be mandated to make data-use decisions in the public interest: how data could be gathered, how anonymity could be ensured, how requests for use should be dealt with.
Digital trusts come with big questions to be resolved. But if one were developed for the Sidewalk project, it could be adapted and reused in other cities around the world, as civic leaders everywhere grapple with innovation plans of their own.
Collaboration
The private sector creates jobs and economic growth. Academia and education offer ideas, research and a sustainable flow of tech-savvy workers. The public sector provides policy guidance and accountability. Non-profits mobilize public awareness and surplus capital.
As Toronto is learning, it isn’t always easy to get buy-in, because every player in every sector has its own priorities. But civic leaders should be trying to pull all these innovation levers to overcome urban challenges, because when the mission is right, collaboration creates more than the sum of its parts.
One civic example we like to point to is New York, where the development of the High Line park and the rezoning of the West Chelsea Special District created a “halo effect.” A $260-million investment increased property values, boosted city tax revenues by $900-million and brought four million tourists per year to a formerly underused neighborhood.
A mission-oriented innovation ecosystem connects the dots between entrepreneurs and customers, academia and corporates, capital and talent, policymakers and activists, physical and digital infrastructure – and systems financing models can help us predict and more equitably distribute the returns. Organizations like Civic Capital Lab (disclaimer: a MaRS partner) work to repurpose projects like the High Line into real-life frameworks for other cities and communities.
That kind of planning works because the challenges cities face are so similar. When civic leaders are properly prepared to make the best of modern tech-driven innovation, there’s no problem they can’t overcome.
