
DARPA snags Intel to lead its machine learning security tech

Chip maker Intel has been chosen to lead a new initiative from DARPA, the U.S. military’s research wing, aimed at improving cyber-defenses against deception attacks on machine learning models.
Machine learning is a kind of artificial intelligence that allows systems to improve over time with new data and experiences. One of its most common use cases today is object recognition, such as taking a photo and describing what’s in it. That can help those with impaired vision know what’s in a photo they can’t see, for example, but it can also be used by other machines, such as autonomous vehicles, to identify what’s on the road.
But deception attacks, although rare, can meddle with machine learning algorithms. In the case of a self-driving vehicle, subtle changes to real-world objects can have disastrous consequences.
Just a few weeks ago, McAfee researchers tricked a Tesla into accelerating 50 miles per hour above its intended speed by adding a two-inch piece of tape to a speed limit sign. The research was one of the first demonstrations of physically manipulating a device’s machine learning algorithms.
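The McAfee attack was physical, but the same fragility can be demonstrated digitally. Below is a minimal sketch of the fast gradient sign method (FGSM), one well-known way to perturb an image so a classifier mislabels it — included purely as an illustration of this class of attack, not the McAfee team’s technique; the pretrained model and epsilon value are arbitrary choices for the example.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# A pretrained classifier standing in for any vision model.
model = models.resnet18(pretrained=True).eval()

def fgsm_attack(image, true_label, epsilon=0.01):
    """Return a subtly perturbed copy of `image` that pushes the model
    away from `true_label`.

    image: normalized tensor of shape (1, 3, 224, 224)
    true_label: LongTensor of shape (1,), e.g. torch.tensor([243])
    epsilon: bounds the per-pixel change, so the edit stays subtle
    """
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge every pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

The resulting perturbation is nearly invisible to a human but can flip the model’s prediction entirely.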
That’s where DARPA hopes to come into play. The research arm said earlier this year that it’s working on a program known as GARD, or Guaranteeing AI Robustness against Deception. Existing mitigations against machine learning attacks are typically rule-based and pre-defined, but DARPA hopes to develop GARD into a system with broader defenses that address a number of different kinds of attacks.
Intel said today it’ll serve as the prime contractor for the four-year program alongside Georgia Tech.
Jason Martin, principal engineer at Intel Labs who leads Intel’s GARD team, said the chip maker and Georgia Tech will work together to “enhance object detection and to improve the ability for AI and machine learning to respond to adversarial attacks.”
During the first phase of the program, Intel said its focus is on enhancing its object detection technologies using spatial, temporal, and semantic coherence for both still images and video.
DARPA said GARD could be used in a number of settings — such as in biology.
“The kind of broad scenario-based defense we’re looking to generate can be seen, for example, in the immune system, which identifies attacks, wins and remembers the attack to create a more effective response during future engagements,” said Dr. Hava Siegelmann, a program manager in DARPA’s Information Innovation Office.
“We must ensure machine learning is safe and incapable of being deceived,” said Siegelmann.


Lacking eyeballs, Facebook’s ad review system fails to spot coronavirus harm

Facebook’s ad review system is failing to prevent coronavirus misinformation from being targeted at its users, according to an investigation by Consumer Reports.
The not-for-profit consumer advocacy organization set out to test Facebook’s system by setting up a page for a made-up organization, called the Self Preservation Society, and creating ads that contained false or deliberately misleading information about the coronavirus — including messaging that claimed (incorrectly) that people under 30 are “safe”, or that coronavirus is a “HOAX”.
Another of the bogus ads urged people to “stay healthy with SMALL daily doses” of bleach, per the report.
The upshot of the experiment? Facebook’s system waved all the ads through, apparently failing to spot any problems or potential harms. “Facebook approved them all,” writes Consumer Reports. “The advertisements remained scheduled for publication for more than a week without being flagged by Facebook.”
Of course the organization pulled the ads before they were published, saying it made certain no Facebook users were exposed to the false or misleading claims. But the test appears to expose how few barriers there are within Facebook’s current ad review system to picking up and preventing harmful ads that exploit the coronavirus pandemic.
The only ad in the experiment Facebook rejected was flagged because of its image, per Consumer Reports — which says it had used a stock shot of a respirator-style face mask. After swapping the image for a “similar alternative” it says Facebook approved that too.
Last month, as part of its own business response to the threat posed by COVID-19, Facebook announced it was sending home all global content reviewers “until further notice” — saying it would be relying on more automated review as a consequence of this decision.
“As we rely more on our automated systems, we may make mistakes,” it wrote then.
Consumer Reports’ investigation highlights how serious those mistakes can be as a result of Facebook’s decision to lean so heavily on AI moderation, given the company is waving through clearly harmful messages that urge users to ignore public health advice to stay home and socially distance themselves, or even to drink a harmful substance to stay “safe”.
In response to the Consumer Reports investigation, Facebook defended itself, saying it has removed “millions” of listings for policy violations related to the coronavirus. Though it also conceded its enforcement around COVID-19 misinformation is far from perfect.
“While we’ve removed millions of ads and commerce listings for violating our policies related to COVID-19, we’re always working to improve our enforcement systems to prevent harmful misinformation related to this emergency from spreading on our services,” a Facebook spokesperson, Devon Kearns, told Consumer Reports.
When we asked, a Facebook spokeswoman declined to specify how many people the company has working on ad review during the coronavirus crisis. The company did tell Consumer Reports it has a “few thousand” reviewers now able to work from home.
Back in 2018 Facebook reported having some 15,000 people employed doing content review.
It’s never been clear what proportion of those are focused on (user) content review vs ad review. But a “few thousand” vs 15k suggests there has likely been a very considerable drop in the number of eyeballs checking ads. (Pre-COVID, Facebook also liked to refer to having a safety and security team of over 35,000 people globally — with the 15k reviewers sitting within that.)
Facebook’s content review team has clearly shrunk considerably as a result of coronavirus-related disruption to its business. Though the company is refusing to come clean on exactly how many (few) people it has doing content review right now.
It’s also clear that the risk of harm from tools like Facebook’s ad platform — that can be used to easily and cheaply amplify damaging online disinformation — could hardly be higher than during a pandemic, when there is a pressing need for governments and health authorities to be able to communicate facts, official guidance and best practice to their populations to keep them safe.
Facebook’s platform becoming a conduit for false and/or maliciously misleading messaging risks undermining public health at a critical time.
Last month the company was also revealed to have blocked links to legitimate news and other websites that were sharing coronavirus-related content — following its switch to AI-led moderation.
In recent weeks, the company has also faced criticism for failing to live up to a pledge to take down ads for coronavirus masks.
At the same time, Facebook’s platform remains a hotbed of user-generated coronavirus misinformation, with individuals widely reported to be sharing posts that tout bogus home remedies, such as gargling with salt water to kill the virus (it doesn’t), or that play down the seriousness of the COVID-19 pandemic by claiming it’s ‘just the flu’ (it’s not).

Why AI startups’ economics will likely improve over time

Hello and welcome back to our regular morning look at private companies, public markets and the gray space in between.
Back in February, we dug into the question of AI startup gross margins. Venture shop a16z had published an interesting blog on the subject, arguing that AI-focused startups may enjoy strong gross margins, but perhaps not as strong as those posted by SaaS companies.
Modern software startups (SaaS companies) have some of the highest gross margins in business, delivering their digital services over the Internet at little cost. Their high-margin revenue has made them incredibly valuable to private and public investors alike. To see a16z draw a line for AI gross margins a little under SaaS, then, was notable. AI startups might earn lower long-term revenue multiples than SaaS firms, and, if so, they might need to adjust their valuation expectations.
Since that nerdy interlude, the world has fallen apart. The United States has recorded over 337,000 COVID-19 cases, the stock markets have fallen sharply and we’re somewhere between a bear market and a recession. Shit, as they say, has changed.
After our first look at the world of AI margins, for which we asked a number of VCs to weigh in, we wound up talking to one more: David Blumberg of Blumberg Capital, who had some interesting notes on the AI margin question drawn from his portfolio.
Since that conversation, TechCrunch covered Deepgram’s Series A, which brought the subject of AI startups and their margins back into our heads. So, before Q2 really gets under way, a little more on AI and COGS.

R&D Roundup: Ultrasound/AI medical imaging, assistive exoskeletons and neural weather modeling

In the time of COVID-19, much of what transpires from the science world to the general public relates to the virus, and understandably so. But other domains, even within medical research, are still active — and as usual, there are tons of interesting (and heartening) stories out there that shouldn’t be lost in the furious activity of coronavirus coverage. This last week brought good news for several medical conditions as well as some innovations that could improve weather reporting and maybe save a few lives in Cambodia.
Ultrasound and AI promise better diagnosis of arrhythmia
Arrhythmia is a relatively common condition in which the heart beats at an abnormal rate, causing a variety of effects, including, potentially, death. It is detected using an electrocardiogram (ECG), and while the technique is sound and widely used, it has its limitations: first, it relies heavily on an expert interpreting the signal, and second, even an expert’s diagnosis doesn’t give a good idea of what the issue looks like in that particular heart. Knowing exactly where the flaw is makes treatment much easier.
Ultrasound is used for internal imaging in lots of ways, but two recent studies establish it as perhaps the next major step in arrhythmia treatment. Researchers at Columbia University used a form of ultrasound monitoring called Electromechanical Wave Imaging to create 3D animations of the patient’s heart as it beat, which helped specialists predict 96% of arrhythmia locations compared with 71% when using the ECG. The two could be used together to provide a more accurate picture of the heart’s condition before undergoing treatment.

Another approach from Stanford applies deep learning techniques to ultrasound imagery and shows that an AI agent can recognize the parts of the heart and record the efficiency with which it is moving blood with accuracy comparable to experts. As with other medical imagery AIs, this isn’t about replacing a doctor but augmenting them; an automated system can help triage and prioritize effectively, suggest things the doctor might have missed or provide an impartial concurrence with their opinion. The code and data set of EchoNet are available for download and inspection.

Google research makes for an effortless robotic dog trot

As capable as robots are, the original animals after which they tend to be designed are always much, much better. That’s partly because it’s difficult to learn how to walk like a dog directly from a dog — but this research from Google’s AI labs makes it considerably easier.
The goal of this research, a collaboration with UC Berkeley, was to find a way to efficiently and automatically transfer “agile behaviors” like a light-footed trot or spin from their source (a good dog) to a quadrupedal robot. This sort of thing has been done before, but as the researchers’ blog post points out, the established training process can often “require a great deal of expert insight, and often involves a lengthy reward tuning process for each desired skill.”
That doesn’t scale well, naturally, but that manual tuning is necessary to make sure the animal’s movements are approximated well by the robot. Even a very doglike robot isn’t actually a dog, and the way a dog moves may not be exactly the way the robot should, leading the latter to fall down, lock up, or otherwise fail.
The Google AI project addresses this by adding a bit of controlled chaos to the normal order of things. Ordinarily, the dog’s motions would be captured and key points like feet and joints would be carefully tracked. These points would then be mapped onto the robot’s own joints in a digital simulation, where a virtual version of the robot attempts to imitate the motions of the dog with its own, learning as it goes.
So far, so good, but the real problem comes when you try to use the results of that simulation to control an actual robot. The real world isn’t a 2D plane with idealized friction rules and all that. Unfortunately, that means that uncorrected simulation-based gaits tend to walk a robot right into the ground.
To prevent this, the researchers introduced an element of randomness to the physical parameters used in the simulation, making the virtual robot weigh more, or have weaker motors, or experience greater friction with the ground. This made the machine learning model describing how to walk have to account for all kinds of small variances and the complications they create down the line — and how to counteract them.
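Google’s post doesn’t include the training loop itself, so the snippet below is only a schematic of that domain-randomization idea: jitter the simulator’s physical parameters before each episode so the learned gait can’t overfit to one idealized world. The parameter names, ranges and the `simulator`/`policy` objects are hypothetical stand-ins, not the paper’s actual values.

```python
import random

def randomized_sim_params():
    # Illustrative ranges only -- not the values used in the research.
    return {
        "body_mass_scale":   random.uniform(0.8, 1.2),   # heavier or lighter robot
        "motor_strength":    random.uniform(0.7, 1.0),   # weaker actuators
        "ground_friction":   random.uniform(0.5, 1.5),   # slippery vs. grippy floor
        "control_latency_s": random.uniform(0.0, 0.04),  # sensing/actuation lag
    }

def train(policy, simulator, episodes=10_000):
    for _ in range(episodes):
        # A fresh, slightly different world every episode.
        simulator.reset(**randomized_sim_params())
        trajectory = simulator.rollout(policy)
        policy.update(trajectory)  # imitation / reinforcement learning step
```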
Learning to accommodate that randomness made the learned walking method far more robust in the real world, leading to a passable imitation of the target dog walk, and even more complicated moves like turns and spins, without any manual intervention and only a little extra virtual training.
Naturally, manual tweaking could still be added to the mix if desired, but as it stands this is a large improvement over what could previously be done totally automatically.
In a second project described in the same post, another set of researchers describe a robot teaching itself to walk on its own, but imbued with the intelligence to avoid walking outside its designated area and to pick itself up when it falls. With those basic skills baked in, the robot was able to amble around its training area continuously with no human intervention, learning quite respectable locomotion skills.
The paper on learning agile behaviors from animals can be read here, while the one on robots learning to walk on their own (a collaboration with Berkeley and the Georgia Institute of Technology) is here.

OctoML raises $15M to make optimizing ML models easier

OctoML, a startup founded by the team behind the Apache TVM machine learning compiler stack project, today announced that it has raised a $15 million Series A round led by Amplify, with participation from Madrona Ventures, which led its $3.9 million seed round. The core idea behind OctoML and TVM is to use machine learning to optimize machine learning models so they can run more efficiently on different types of hardware.
“There’s been quite a bit of progress in creating machine learning models,” OctoML CEO and University of Washington professor Luis Ceze told me. “But a lot of the pain has moved to, once you have a model, how do you actually make good use of it in the edge and in the clouds?”
That’s where the TVM project comes in, which was launched by Ceze and his collaborators at the University of Washington’s Paul G. Allen School of Computer Science & Engineering. It’s now an Apache incubating project, and because it has seen quite a bit of usage and support from major companies like AWS, ARM, Facebook, Google, Intel, Microsoft, Nvidia, Xilinx and others, the team decided to form a commercial venture around it, which became OctoML. Today, even Amazon Alexa’s wake word detection is powered by TVM.

Ceze described TVM as a modern operating system for machine learning models. “A machine learning model is not code, it doesn’t have instructions, it has numbers that describe its statistical modeling,” he said. “There’s quite a few challenges in making it run efficiently on a given hardware platform because there’s literally billions and billions of ways in which you can map a model to specific hardware targets. Picking the right one that performs well is a significant task that typically requires human intuition.”
And that’s where OctoML and its “Octomizer” SaaS product, which it also announced today, come in. Users can upload their model to the service and it will automatically optimize, benchmark and package it for the hardware they specify and in the format they want. For more advanced users, there’s also the option to add the service’s API to their CI/CD pipelines. These optimized models run significantly faster because they can fully leverage the hardware they run on, but what many businesses may care about even more is that these more efficient models also cost less to run in the cloud, or let them use cheaper, lower-performance hardware to get the same results. For some use cases, TVM already results in 80x performance gains.
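The Octomizer itself is a closed SaaS product, but the open-source TVM workflow it automates looks roughly like the sketch below, which uses TVM’s public Python API to compile an ONNX model for one concrete hardware target. The input name, shape and target string are assumptions, and exact calls vary between TVM releases.

```python
import onnx
import tvm
from tvm import relay

onnx_model = onnx.load("model.onnx")
shape_dict = {"input": (1, 3, 224, 224)}  # assumed input name and shape

# Import the model into Relay, TVM's intermediate representation.
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Pick one concrete mapping out of the "billions": compile the graph
# for a specific hardware target with aggressive optimizations enabled.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm -mcpu=skylake-avx512", params=params)

# The packaged artifact you would actually deploy.
lib.export_library("optimized_model.so")
```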
Currently, the OctoML team consists of about 20 engineers. With this new funding, the company plans to expand its team. Those hires will mostly be engineers, but Ceze also stressed that he wants to hire an evangelist, which makes sense, given the company’s open-source heritage. He also noted that while the Octomizer is a good start, the real goal here is to build a more fully featured MLOps platform. “OctoML’s mission is to build the world’s best platform that automates MLOps,” he said.

Using AI responsibly to fight the coronavirus pandemic

Mark Minevich
Contributor


Mark Minevich is president of Going Global Ventures, an advisor at Boston Consulting Group, a digital fellow at IPsoft and a leading global AI expert and digital cognitive strategist and venture capitalist.


Irakli Beridze
Contributor

Irakli Beridze is head of the Centre for AI and Robotics at the United Nations Interregional Crime and Justice Research Institute (UNICRI).

The emergence of the novel coronavirus has left the world in turmoil. COVID-19, the disease caused by the virus, has reached virtually every corner of the world, with more than a million cases and more than 50,000 deaths worldwide. It is a situation that will affect us all in one way or another.
With the imposition of lockdowns, limitations of movement, the closure of borders and other measures to contain the virus, the operating environment of law enforcement agencies and those security services tasked with protecting the public from harm has suddenly become ever more complex. They find themselves thrust into the middle of an unparalleled situation, playing a critical role in halting the spread of the virus and preserving public safety and social order in the process. In response to this growing crisis, many of these agencies and entities are turning to AI and related technologies for support in unique and innovative ways. Enhancing surveillance, monitoring and detection capabilities is high on the priority list.
For instance, early in the outbreak, Reuters reported a case in China wherein the authorities relied on facial recognition cameras to track a man from Hangzhou who had traveled in an affected area. Upon his return home, the local police were there to instruct him to self-quarantine or face repercussions. Police in China and Spain have also started to use technology to enforce quarantine, with drones being used to patrol and broadcast audio messages to the public, encouraging them to stay at home. People flying to Hong Kong airport receive monitoring bracelets that alert the authorities if they breach the quarantine by leaving their home.
In the United States, a surveillance company announced that its AI-enhanced thermal cameras can detect fevers, while in Thailand, border officers at airports are already piloting a biometric screening system using fever-detecting cameras.
Isolated cases or the new norm?
With the number of cases, deaths and countries on lockdown increasing at an alarming rate, we can assume that these will not be isolated examples of technological innovation in response to this global crisis. In the coming days, weeks and months of this outbreak, we will most likely see more and more AI use cases come to the fore.
While the application of AI can play an important role in seizing the reins in this crisis, and even safeguard officers and officials from infection, we must not forget that its use can raise very real and serious human rights concerns that can be damaging and undermine the trust placed in government by communities. Human rights, civil liberties and the fundamental principles of law may be exposed or damaged if we do not tread this path with great caution. There may be no turning back if Pandora’s box is opened.
In a public statement on March 19, the monitors for freedom of expression and freedom of the media for the United Nations, the Inter-American Commission for Human Rights and the Representative on Freedom of the Media of the Organization for Security and Co-operation in Europe issued a joint statement on promoting and protecting access to and free flow of information during the pandemic, and specifically took note of the growing use of surveillance technology to track the spread of the coronavirus. They acknowledged that there is a need for active efforts to confront the pandemic, but stressed that “it is also crucial that such tools be limited in use, both in terms of purpose and time, and that individual rights to privacy, non-discrimination, the protection of journalistic sources and other freedoms be rigorously protected.”
This is not an easy task, but a necessary one. So what can we do?
Ways to responsibly use AI to fight the coronavirus pandemic
Data anonymization: While some countries are tracking individual suspected patients and their contacts, Austria, Belgium, Italy and the U.K. are collecting anonymized data to study the movement of people in a more general manner. This option still provides governments with the ability to track the movement of large groups, but minimizes the risk of infringing data privacy rights. (A rough sketch of this kind of aggregation follows this list.)
Purpose limitation: Personal data that is collected and processed to track the spread of the coronavirus should not be reused for another purpose. National authorities should seek to ensure that the large amounts of personal and medical data are exclusively used for public health reasons. This is a concept already in force in Europe, within the context of the European Union’s General Data Protection Regulation (GDPR), but it’s time for this to become a global principle for AI.
Knowledge-sharing and open access data: António Guterres, the United Nations Secretary-General, has insisted that “global action and solidarity are crucial,” and that we will not win this fight alone. This is applicable on many levels, even for the use of AI by law enforcement and security services in the fight against COVID-19. These agencies and entities must collaborate with one another and with other key stakeholders in the community, including the public and civil society organizations. AI use cases and data should be shared, and transparency promoted.
Time limitation: Although the end of this pandemic seems rather far away at this point in time, it will come to an end. When it does, national authorities will need to scale back their newly acquired monitoring capabilities. As Yuval Noah Harari observed in his recent article, “temporary measures have a nasty habit of outlasting emergencies, especially as there is always a new emergency lurking on the horizon.” We must ensure that these exceptional capabilities are indeed scaled back and do not become the new norm.
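To make the anonymization point above concrete, here is a toy sketch of that kind of aggregation: individual location pings are collapsed into coarse grid cells, and any cell with fewer than K people is suppressed, so movement trends survive while individual traces do not. The cell size and K value are illustrative, not any government’s actual parameters.

```python
from collections import Counter

K_ANONYMITY = 10   # suppress cells with fewer people than this
CELL_DEG = 0.01    # ~1 km grid cells at mid latitudes (illustrative)

def to_cell(lat, lon):
    return (round(lat / CELL_DEG), round(lon / CELL_DEG))

def aggregate(pings):
    """pings: iterable of (user_id, lat, lon) tuples."""
    per_cell = Counter()
    seen = set()
    for user_id, lat, lon in pings:
        cell = to_cell(lat, lon)
        if (user_id, cell) not in seen:  # count each person once per cell
            seen.add((user_id, cell))
            per_cell[cell] += 1
    # Publish only cells large enough that no individual stands out.
    return {cell: n for cell, n in per_cell.items() if n >= K_ANONYMITY}
```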
Within the United Nations system, the United Nations Interregional Crime and Justice Research Institute (UNICRI) is working to advance approaches to AI such as these. It has established a specialized Centre for AI and Robotics in The Hague and is one of the few international actors dedicated to specifically looking at AI vis-à-vis crime prevention and control, criminal justice, rule of law and security. It assists national authorities, in particular law enforcement agencies, to understand the opportunities presented by these technologies and, at the same time, to navigate their potential pitfalls.
Working closely with the International Criminal Police Organization (INTERPOL), UNICRI has set up a global platform for law enforcement, fostering discussion on AI, identifying practical use cases and defining principles for responsible use. Much work has been done through this forum, but it is still early days, and the path ahead is long.
While the COVID-19 pandemic has illustrated several innovative use cases, as well as the urgency for governments to do their utmost to stop the spread of the virus, it is important not to let consideration of fundamental principles, rights and respect for the rule of law be set aside. The positive power and potential of AI is real. It can help those embroiled in fighting this battle to slow the spread of this debilitating disease. It can help save lives. But we must stay vigilant and commit to the safe, ethical and responsible use of AI.
It is essential that, even in times of great crisis, we remain conscious of the duality of AI and strive to advance AI for good.

You can now buy AWS’ $99 DeepComposer keyboard

AWS today announced that its DeepComposer keyboard is now available for purchase. And no, DeepComposer isn’t a mechanical keyboard for hackers but a small MIDI keyboard for working with the AWS DeepComposer service that uses AI to create songs based on your input.
First announced at AWS re:Invent 2019, the keyboard created a bit of confusion, in part because Amazon’s announcement almost made it seem like a consumer product. DeepComposer, which also works without the actual hardware keyboard, is more of a learning tool, though, and belongs to the same family of AWS hardware as DeepLens and DeepRacer. It’s meant to teach developers about generative adversarial networks, just as DeepLens and DeepRacer focus on other specific machine learning technologies.
Users play a short melody, either on the hardware keyboard or an on-screen one, and the service then automatically generates a backing track based on their choice of musical style. The results I heard at re:Invent last year were a bit uneven (or worse), but that may have improved by now. Either way, this isn’t a tool for creating the next Top 40 song; it’s simply a learning tool. I’m not sure you need the keyboard to get that learning experience out of it, but if you do, you can now head over to Amazon and buy it.
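For the curious, here is a minimal, music-free sketch of the concept DeepComposer is built to teach: a generative adversarial network, in which a generator learns to produce samples a discriminator can’t tell apart from real data. This toy example uses a 1-D Gaussian instead of melodies and is a generic illustration, not AWS’s model.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0      # "real" data: N(2, 0.5)
    fake = G(torch.randn(64, 8))               # generator's attempts

    # Discriminator: label real samples 1, generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator output 1 on its fakes.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```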


Flagship Pioneering raises $1.1 billion to spend on sustainability and health-focused biotech

Flagship Pioneering, the Boston-based biotech company incubator and holding company, said it has raised $1.1 billion for its Flagship Labs unit.
Flagship, which raised $1 billion back in 2019 for growth stage investment vehicles, develops and operates startups that leverage biotechnology innovation to provide goods and services that improve human health and promote sustainable industries.
“We’re honored to have the strong support of our existing Limited Partners, as well as the interest from a select group of new Limited Partners, to support Flagship’s unique form of company origination during this time of unprecedented economic uncertainty,” said Noubar Afeyan, the founder and chief executive of Flagship Pioneering, in a statement.
In addition to its previous focus on health and sustainability, Flagship will use the new funds to focus on new medicines, artificial intelligence and “health security”, which the company says is “designed to create a range of products and therapies to improve societal health defenses by treating pre-disease states before they escalate,” according to Afeyan.
Flagship companies are already at the forefront of the healthcare industry’s efforts to stop the COVID-19 pandemic. Portfolio company Moderna is one of the companies leading efforts to develop a vaccine for the novel coronavirus that causes COVID-19.
Twenty years after its launch, Flagship counts 15 wholly owned companies and another 26 growth-stage companies among its portfolio of investments.
New companies include: Senda Biosciences, Generate Biomedicines, Tessera Therapeutics, Cellarity, Cygnal Therapeutics, Ring Therapeutics, and Integral Health. Growth Companies developed or backed by Flagship include Ohana Biosciences, Kintai Therapeutics, and Repertoire Immune Medicines.
Two of the companies in the Flagship Labs portfolio have already had initial public offerings in the past two years, the company said. Kaleido Biosciences and Axcella Health raised public capital in 2019 and Moderna Therapeutics conducted a $575 million secondary offering earlier this year.

Activity-monitoring startup Zensors repurposes its tech to help coronavirus response

Computer vision techniques used for commercial purposes are turning out to be valuable tools for monitoring people’s behavior during the present pandemic. Zensors, a startup that uses machine learning to track things like restaurant occupancy, lines, and so on, is making its platform available for free to airports and other places desperate to take systematic measures against infection.
The company, founded two years ago but covered by TechCrunch in 2016, was among the early adopters of computer vision as a means to extract value from things like security camera feeds. It may seem obvious now that cameras covering a restaurant can and should count open tables and track that data over time, but a few years ago it wasn’t so easy to come up with or accomplish that.
Since then Zensors has built a suite of tools tailored to specific businesses and spaces, like airports, offices, and retail environments. They can count open and occupied seats, spot trash, estimate lines, and all that kind of thing. Coincidentally, this is exactly the kind of data that managers of these spaces are now very interested in watching closely given the present social distancing measures.
Zensors co-founder Anuraag Jain told Carnegie Mellon University — which the company was spun out of — that it had received a number of inquiries from the likes of airports regarding applying the technology to public health considerations.

Software that counts how many people are in line can be easily adapted to, for example, estimate how close people are standing and send an alert if too many people are congregating or passing through a small space.
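Zensors’ actual pipeline is proprietary, but the adaptation is easy to sketch. Assuming detections have already been mapped from camera pixels onto floor coordinates in meters (the hard computer-vision step), the checks reduce to occupancy counts and pairwise distances; the thresholds below are placeholders.

```python
import math
from itertools import combinations

MIN_DISTANCE_M = 2.0   # illustrative distancing threshold
MAX_OCCUPANCY = 25     # illustrative per-zone limit

def distancing_alerts(centroids):
    """centroids: list of (x, y) floor positions in meters, one per person."""
    alerts = []
    if len(centroids) > MAX_OCCUPANCY:
        alerts.append(f"overcrowded: {len(centroids)} people in zone")
    for a, b in combinations(centroids, 2):
        if math.dist(a, b) < MIN_DISTANCE_M:
            alerts.append(f"pair closer than {MIN_DISTANCE_M} m: {a} and {b}")
    return alerts
```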
“Rather than profiting off them, we thought we would give our help for free,” said Jain. And so, for the next two months at least, Zensors is providing its platform for free to “selected entities who are on the forefront of responding to this crisis, including our airport clients.”
The system has already been augmented to answer COVID-19-specific questions like whether there are too many people in a given area, when a surface was last cleaned and whether cleaning should be expedited, and how many of a given group are wearing face masks.
Airports surely track some of this information already, but perhaps in a much less structured way. Using a system like this could be helpful for maintaining cleanliness and reducing risk, and no doubt Zensors hopes that having had a taste via what amounts to a free trial, some of these users will become paying clients. Interested parties should get in touch with Zensors via its usual contact page.


Google and UCSF collaborate on machine learning tool to help prevent harmful prescription errors

Machine learning experts at Google Health, working in tandem with the University of California, San Francisco (UCSF)’s computational health sciences department, have published a new study describing a machine learning model that can anticipate normal physician drug prescribing patterns, using a patient’s electronic health records (EHR) as input. That’s useful because around 2 percent of patients who end up hospitalized are affected by preventable mistakes in medication prescriptions, some instances of which can even lead to death.
The researchers describe the system as working in a similar manner to the automated, machine learning-based fraud detection tools commonly used by credit card companies to alert customers of possible fraudulent transactions: They essentially build a baseline of normal consumer behavior based on past transactions, and then alert the bank’s fraud department or freeze access when they detect behavior that is not in line with an individual’s baseline.
Similarly, the model trained by Google and UCSF worked by identifying any prescriptions that “looked abnormal for the patient and their current situation.” That’s a much more challenging proposition for prescription drugs than for consumer activity, because courses of medication, their interactions with one another, and the specific needs, sensitivities and conditions of any given patient all present an incredibly complex web to untangle.
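To make the analogy concrete, a toy version of “flag what deviates from the baseline” might look like the sketch below. The real Google/UCSF model is a neural network trained on full EHR sequences; this frequency-based stand-in only illustrates the shape of the problem.

```python
from collections import Counter

def build_baseline(history):
    """history: list of (condition, drug) pairs from past prescriptions."""
    pair_counts = Counter(history)
    condition_counts = Counter(condition for condition, _ in history)
    # Estimate P(drug | condition) from historical co-occurrence.
    return {
        (c, d): n / condition_counts[c]
        for (c, d), n in pair_counts.items()
    }

def is_abnormal(baseline, condition, drug, threshold=0.01):
    # Flag drugs rarely (or never) prescribed for this condition.
    return baseline.get((condition, drug), 0.0) < threshold
```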
To make it possible, the researchers used electronic health records from de-identified patients that include vital signs, lab results, prior medications and medical procedures, as well as diagnoses and changes over time. They paired this historical data with current state information, and came up with various models to attempt to output an accurate prediction of a course of prescription for a given patient.
Their best-performing model was accurate “three quarters of the time,” Google says, meaning it matched what a physician actually decided to prescribe in a large majority of cases. It was even more accurate (93%) at predicting at least one medication that fell within the top ten of a physician’s most likely medicine choices for a patient, even if its top choice didn’t match the doctor’s.
The researchers are quick to note that though the model thus far has been fairly accurate in predicting a normal course of prescription, that doesn’t mean it’s able to successfully detect deviations from that yet with any high degree of accuracy. Still, it’s a good first step upon which to build that kind of flagging system.

NASA issues agency-wide crowdsourcing call for ideas around COVID-19 response

There’s crowdsourcing a problem, and then there’s crowdsourcing a problem within NASA, where some of the smartest, most creative and resourceful problem-solvers in the world solve real-world challenges daily as part of their job. That’s why it’s uplifting to hear that NASA has issued a call to its entire workforce to come up with potential ways the agency and its resources can contribute to the ongoing effort against the current coronavirus pandemic.
NASA is using its crowdsourcing platform NASA @ WORK, which it uses to internally source creative solutions to persistent problems, in order to collect creative ideas about new ways to address the COVID-19 crisis and the various problems it presents. Already, NASA is engaged in a few different ways, including offering supercomputing resources for treatment research, and working on developing AI solutions that can help provide insight into key scientific investigations that are ongoing around the virus.
There is a degree of specificity in the open call NASA put to its workforce: It identified key areas where solutions are most urgently needed, working together with the White House and other government agencies involved in the response, and determined that NASA staff efforts should focus on addressing shortfalls and gaps in the availability of personal protective equipment, ventilation hardware, and ways to monitor and track the coronavirus spread and transmission. That’s not to say NASA doesn’t want to hear solutions about other COVID-19 issues, just that these are the areas where they’ve identified the most current need.
To add some productive time-pressure to this endeavor, NASA is looking for submissions from staff on all the areas above to be made via NASA @ WORK by April 15. Then there’ll be a process of assessing what’s most viable, and allocating resources to make those a reality. Any products or designs that result will be made “open source for any business or country to use,” the agency says – with the caveat that this might not be strictly possible in all cases depending on the specific technologies involved.

DeepMind’s Agent57 AI agent can best human players across a suite of 57 Atari games

Development of artificial intelligence agents is frequently measured by their performance in games, and there’s a good reason for that: games tend to offer a wide proficiency curve, in that they are relatively simple to grasp but difficult to master, and they almost always have a built-in scoring system to evaluate performance. DeepMind’s agents have tackled the board game Go, as well as the real-time strategy video game StarCraft — but the Alphabet company’s most recent feat is Agent57, a learning agent that can beat the average human on each of 57 Atari games with a wide range of difficulty, characteristics and gameplay styles.
Being better than humans at 57 Atari games may seem like an odd benchmark against which to measure the performance of a deep learning agent, but it’s actually a standard that goes all the way back to 2012, with a selection of Atari classics including Pitfall, Solaris, Montezuma’s Revenge and many others. Taken together, these games represent a broad range of difficulty levels, as well as requiring a range of different strategies in order to achieve success.
That’s a great type of challenge for a deep learning agent, because the goal is not to build something that can determine one effective strategy that maximizes the chance of success every time a single game is played. Instead, researchers build these agents and set them to these tasks to develop something that can learn across multiple, shifting scenarios and conditions, with the long-term aim of building a learning agent that approaches general AI — AI that is more human in its ability to apply its intelligence to any problem put before it, including challenges it has never encountered before.
DeepMind’s Agent57 is remarkable because it performs better than human players on each of the 57 games in the Atari57 set. Previous agents have managed to beat human players on average, but only because they were extremely good at some of the simpler games that essentially work via a simple action-reward loop, while remaining terrible at games that require more advanced play, including long-term exploration and memory, like Montezuma’s Revenge.
The DeepMind team addressed this by building a distributed agent, with different computers tackling different aspects of the problem. Some were tuned to focus on novelty rewards (encountering things they haven’t encountered before), with both short- and long-term time horizons for when the novelty value resets. Others sought out simpler exploits, figuring out which repeated pattern provided the biggest reward. All the results are then combined and managed by an agent equipped with a meta-controller, which allows it to weigh the costs and benefits of different approaches based on the game it encounters.
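A drastically simplified version of such a meta-controller is a multi-armed bandit that picks which exploration policy to run each episode, based on the reward each policy has been earning. Agent57’s actual controller is more sophisticated, but the skeleton might look like this:

```python
import math

class MetaController:
    """UCB-style bandit over a set of exploration policies (simplified)."""

    def __init__(self, n_policies, c=1.0):
        self.counts = [0] * n_policies
        self.total_reward = [0.0] * n_policies
        self.c = c  # weight of the exploration bonus

    def choose(self):
        t = sum(self.counts) + 1
        def ucb(i):
            if self.counts[i] == 0:
                return float("inf")  # try every policy at least once
            mean = self.total_reward[i] / self.counts[i]
            return mean + self.c * math.sqrt(math.log(t) / self.counts[i])
        return max(range(len(self.counts)), key=ucb)

    def update(self, policy_idx, episode_reward):
        self.counts[policy_idx] += 1
        self.total_reward[policy_idx] += episode_reward
```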
In the end, Agent57 is an accomplishment, but the team says it can stand to be improved in a few ways. First, it’s incredibly computationally expensive to run, so they will seek to streamline that. Second, it’s actually not as good at some of the simpler games as certain less sophisticated agents, even though it excels at the top five games that most challenged previous intelligent agents. The team says it has ideas for making it better at the simpler games, too.

Pre-school EdTech startup Lingumi raises £4m, adds some free services during Covid-19

In these difficult times, parents are concerned for their children’s education, especially given so much of it has had to move online during the Covid-19 pandemic. But what about pre-schoolers who are missing out?
Pre-school children are sponges for information but don’t get formal training in reading and writing until they enter the classroom, when they are less sponge-like and surrounded by 30 other children. Things are tougher for non-English-speaking children whose parents want them to learn English.
Lingumi, a platform aimed at toddlers learning critical skills, has now raised £4 million in a funding round led by China-based technology fund North Summit Capital — a fund run by Alibaba’s former chief data scientist Dr. Wanli Min — alongside existing investors LocalGlobe, ADV, and Entrepreneur First.
The startup, launched in 2017, is also announcing the launch of daily free activity packs and videos to support children and families during the COVID-19 outbreak, and has pledged to donate 20% of its sales during this period to the Global Children’s Fund.
Lingumi’s interactive courses offer one-to-one tutoring with a kind of ‘social learning,’ and its first course helps introduce key English grammar and vocabulary from the age of 2.
Instead of tuning into live lessons with tutors, which are typically timetabled and expensive, Lingumi’s lessons are delivered through interactive speaking tasks, teacher videos, and games. At the end of each lesson, children can see videos of Lingumi friends speaking the same words and phrases as them. Because the kids are watching videos, Lingumi is cheaper than live courses, and thus more flexible for parents.
The company launched the first Lingumi course in China last year, focused on teaching spoken English to non-English speakers. The platform is now being used by more than 100,000 families globally, including in mainland China, Taiwan, the UK, Germany, Italy, and France. More than 1.5 million English lessons have taken place in China over the past six months, and 40% of active users are playing lessons daily. Lingumi says its user base grew 50% during China’s lockdown and it has seen rapid uptake in Europe.
“Lingumi’s rapid expansion in the Chinese market required a strategic local investor, and Dr. Min and the team had a clear-sighted understanding of the technology and scale opportunity both in China, and globally,” the startup said.
Dr. Wanli Min, general partner at North Summit Capital, commented: “It is only the most privileged children who can access native English speakers for one-on-one tutoring… Lingumi has the potential to democratize English learning and offer every kid a personalized curriculum empowered by AI and Lingumi’s ‘asynchronous teaching’ model.”
Competitors to Lingumi include live teaching solutions like VIPKid, and learning platforms like Jiliguala in China, or Lingokids in the West.

Africa Roundup: Africa’s tech ecosystem responds to COVID-19

In March, the virus gripping the world — COVID-19 — started to spread in Africa. In short order, actors across the continent’s tech ecosystem began to step up to stem the spread.
Early in March, Africa’s coronavirus cases by country were in the single digits, but by mid-month those numbers had spiked, leading the World Health Organization to sound an alarm.
“About 10 days ago we had 5 countries affected, now we’ve got 30,” WHO Regional Director Dr. Matshidiso Moeti said at a press conference on March 19. “It has been an extremely rapid… evolution.”
By the World Health Organization’s stats on Tuesday, there were 3,671 COVID-19 cases in Sub-Saharan Africa and 87 confirmed deaths related to the virus — up from 463 cases and 8 deaths on March 18.
As COVID-19 cases began to grow in major economies, governments and startups in Africa started taking measures to shift a greater volume of transactions toward digital payments and away from cash — which the World Health Organization flagged as a conduit for the spread of the coronavirus.
Africa’s leader in digital payment adoption — Kenya — turned to mobile-money as a public-health tool.
At the urging of the Central Bank and President Uhuru Kenyatta, the country’s largest telecom, Safaricom, implemented a fee-waiver on East Africa’s leading mobile-money product, M-Pesa, to reduce the physical exchange of currency.
The company announced that all person-to-person (P2P) transactions under 1,000 Kenyan shillings (≈ $10) would be free for three months.
Kenya has one of the highest rates of digital finance adoption in the world — largely due to the dominance of M-Pesa in the country — with 32 million of its 53 million people subscribed to mobile-money accounts, according to Kenya’s Communications Authority.
On March 20, Ghana’s central bank directed mobile money providers to waive fees on transactions of up to GH₵100 (≈ $18), with restrictions on transactions to withdraw cash from mobile wallets.
Ghana’s monetary body also eased KYC requirements on mobile-money, allowing citizens to use existing mobile phone registrations to open accounts with the major digital payment providers, according to a March 18 Bank of Ghana release.
Growth in COVID-19 cases in Nigeria, Africa’s most populous nation of 200 million, prompted one of the country’s largest digital payments startups to act.
Lagos-based venture Paga made fee adjustments, allowing merchants to accept payments from Paga customers for free — a measure “aimed to help slow the spread of the coronavirus by reducing cash handling in Nigeria,” according to a company release.


In March, Africa’s largest innovation incubator, CcHub, announced funding and engineering support for tech projects aimed at curbing COVID-19 and its social and economic impact.
The Lagos- and Nairobi-based organization posted an open application on its website to provide $5,000 to $100,000 funding blocks to companies with COVID-19-related projects.
CcHub’s CEO Bosun Tijani expressed concern for Africa’s ability to combat a coronavirus outbreak. “Quite a number of African countries, if they get to the level of Italy or the UK, I don’t think the system… is resilient enough to provide support to something like that,” Tijani said.


Cape Town-based crowdsolving startup Zindi — which uses AI and machine learning to tackle complex problems — opened a challenge to the 12,000 registered engineers on its platform.
The competition, sponsored by AI4D, tasks scientists with creating models that can use data to predict the global spread of COVID-19 over the next three months. The challenge is open until April 19; solutions will be evaluated against future numbers, and the winner will receive $5,000.
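For a sense of what a baseline entry to a challenge like this could look like, the sketch below fits a logistic growth curve to a made-up series of cumulative case counts and extrapolates three months out. Real submissions would need far richer data, features and validation; every number here is invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    # Classic S-curve: growth slows as cases approach a ceiling K.
    return K / (1.0 + np.exp(-r * (t - t0)))

days = np.arange(14)
cases = np.array([3, 5, 9, 16, 28, 47, 80, 130,
                  210, 330, 500, 730, 1000, 1300])  # fabricated example data

params, _ = curve_fit(logistic, days, cases, p0=[10_000, 0.3, 20], maxfev=10_000)
forecast = logistic(np.arange(14, 90), *params)  # roughly the next three months
print(f"projected cumulative cases on day 90: {forecast[-1]:.0f}")
```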
Zindi will also sponsor a hackathon in April to find solutions to coronavirus related problems.
On the digital retail front, Pan-African e-commerce company Jumia announced measures it would take on its network to curb the spread of COVID-19.
The Nigeria-headquartered operation — with online goods and services verticals in 11 African countries — said it would donate certified face masks to health ministries in Kenya, Ivory Coast, Morocco, Nigeria and Uganda, drawing on its supply networks outside Africa.
The company has also offered African governments use of its last-mile delivery network for distribution of supplies to healthcare facilities and workers.
Jumia is reviewing additional assets it can offer the public sector. “If governments find it helpful we’re willing to do it,” CEO Sacha Poignonnec told TechCrunch.


More Africa-related stories @TechCrunch
Visa partners with Paga on payments and fintech for Africa and abroad
Did African startups raise $496M, $1B or $2B in 2019?
A snapshot of the leading startups in Africa’s top VC markets
African tech around the ‘net
Twitter CEO will ‘reevaluate’ plan to spend months in Africa citing coronavirus concerns
EWB Canada launches $24M Africa-focused tech venture fund
Nigeria rolls out broadband to boost growth

Cnvrg.io launches a free version of its data science platform

Cnvrg.io today announced the launch of the free community version of its data science platform. Dubbed ‘CORE,’ this version includes most — but not all — of the standard features in cnvrg’s main commercial offering. It’s an end-to-end solution for building, managing and automating basic ML models; the free version’s limitations mostly center on the production capabilities of the paid premium version and on working with larger teams of data scientists.
As the company’s CEO Yochay Ettun told me, CORE users will be able to use the platform either on-premise or in the cloud, using Nvidia-optimized containers that run on a Kubernetes cluster. Because of this, it natively handles hybrid- and multi-cloud deployments that can automatically scale up and down as needed — and adding new AI frameworks is simply a matter of spinning up new containers, all of which are managed from the platform’s web-based dashboard.

Ettun describes CORE as a ‘lightweight version’ of the original platform, but one that still hews closely to the platform’s original mission. “As was our vision from the very start, cnvrg.io wants to help data scientists do what they do best – build high impact AI,” he said. “With the growing technical complexity of the AI field, the data science community has strayed from the core of what makes data science such a captivating profession — the algorithms. Today’s reality is that data scientists are spending 80 percent of their time on non-data science tasks, and 65 percent of models don’t make it to production. Cnvrg.io CORE is an opportunity to open its end-to-end solution to the community to help data scientists and engineers focus less on technical complexity and DevOps, and more on the core of data science — solving complex problems.”
This has very much been the company’s direction from the outset, and as Ettun noted in a blog post a few days ago, many data scientists today try to build their own stack using open-source tools. They want to remain agile and able to customize their tools to their needs, after all. But he also argues that data scientists are usually hired to build machine learning models, not to build and manage data science platforms.

While other platforms like H2O.ai, for example, are betting on open source and the flexibility that comes with that, cnvrg.io’s focus is squarely on ease of use. Unlike those tools, Jerusalem-based cnvrg.io, which has raised about $8 million so far, doesn’t have the advantage of the free marketing that comes with open source, so it makes sense for the company to now launch this free self-service version.
It’s worth noting that while cnvrg.io features plenty of graphical tools for managing data ingestion flows, models and clusters, it’s very much a code-first platform. With that in mind, Ettun tells me the ideal user is a data scientist, data engineer or a student passionate about machine learning. “As a code-first platform, users with experience and savvy in the data science field will be able to leverage cnvrg CORE features to produce high impact models,” he said. “As our product is built around getting more models to production, users that are deploying their models to real-world applications will see the most value.”


Microsoft brings Teams to consumers and launches Microsoft 365 personal and family plans

Microsoft today announced a slew of new products, but at the core of the release is a major change to how the company is marketing its tools and services to consumers.
Office 365, which has long been the brand for the company’s subscription service for its productivity tools like Word, Excel and Outlook, is going away. On April 21, it’ll be replaced by new Microsoft 365 plans, including new personal and family plans (for up to six people), at $6.99 and $9.99 per month, respectively. That’s the same price as the existing Office 365 Personal and Home plans.
“We are basically evolving our subscription from — in our minds — a set of tools to solutions that help you manage across your work and life,” Yusuf Mehdi, Microsoft’s CVP of Modern Life, Search and Devices, told me ahead of today’s announcement.
Microsoft is making similar branding changes to its business plans for Office 365, though there they are a bit more convoluted, with Office 365 Business Premium now called Microsoft 365 Business Standard and Microsoft 365 Business becoming Microsoft 365 Business Premium. For the most part, this is about branding, while prices stay the same.
These new Microsoft 365 Personal and Family plans will include access to Outlook and the Office desktop apps for Windows and macOS, 1TB of OneDrive storage per person (including unlimited access to the more secure OneDrive Personal Vault service), 50GB of Outlook.com email storage, Skype call recording and 60 minutes of Skype calls to landlines and mobile phones.
And since this is now Microsoft 365 and not Office 365, you can also get Windows 10 technical support with the subscription, as well as additional security features to protect you from phishing and malware attacks.
More than 37 million people currently have personal Office 365 subscriptions, and chances are these new plans will bring more users to the platform in the long run. As Mehdi stressed, Microsoft’s free offerings aren’t going away.
But with today’s release, Microsoft isn’t just changing the branding and launching these new plans, it’s also highlighting quite a few new capabilities in its various applications that are either launching today or in the coming months.
Microsoft Teams gets a personal edition
The highlight of this launch, especially given the current situation around COVID-19, is likely the announcement of Teams for consumers. Teams is already one of Microsoft’s fastest-growing products for businesses, with 44 million people using it. But in its effort to help people bridge their work and personal lives, Microsoft will now launch a new Teams edition for consumers, as well.
Just like you can switch between work and personal accounts in Outlook, you will soon be able to do the same in Teams. The personal Teams view will look a little different, with shared calendars for the family, access to OneDrive vaults, photo sharing, etc., but it sits on the same codebase as the business version. You’ll also be able to make video calls and share to-do lists.

Better writing through AI
About a year ago, Microsoft announced that Word Online would get a new AI-powered editor that would help you write better. You can think of it as a smarter grammar checker that can fix all of your standard grammar mistakes but can also help you avoid overly complex sentences and bias in your word choices.
This editor is now the Microsoft Editor, and the company is expanding it well beyond Word. The new AI-powered service is now available in 20 languages in Word and Outlook.com — and maybe most importantly, it’ll be available as a Microsoft Edge and Google Chrome plug-in, too.
Free users will get basic spelling and grammar features, while Microsoft 365 subscribers will get a number of more advanced features like the ability to ask the editor to suggest a rewrite of a mangled sentence, a plagiarism checker, style analysis to see if your writing is unclear or too formal, and access to an inclusive language critique to help you avoid unintentional bias.
If you’ve used Grammarly in the past, then a lot of this will sound familiar. Both services now offer a similar set of capabilities, but Microsoft may have an edge with its ability to rewrite sentences.

Better presentations through technology
In a similar vein, Microsoft also launched a presentation coach for PowerPoint as a limited test last September. This AI-driven feature helps you avoid filler words and other presentation no-nos.

This feature first launched in the online version of PowerPoint with a basic set of features. Now, Microsoft 365 subscribers will also get two advanced features that can help you avoid both a monotone pitch that puts your audience to sleep and grammar mistakes in your spoken sentences.
Currently, these are still available as a free preview to all but will become Microsoft 365-only features soon.
PowerPoint is also getting an updated Designer to help you create better presentations. It can now easily turn text into a timeline, for example, and when you add an image, it can present you with a set of potential slide layouts.
Microsoft 365 subscribers now also get access to over 8,000 images and 175 looping videos from Getty Images, as well as 300 new fonts and 2,800 new icons.
Excel + Plaid
For you spreadsheet jockeys out there, Microsoft also has some good news, especially if you want to use Excel to manage your personal budgets.
In partnership with Plaid, you can now link your bank accounts to Excel and import all of your expenses into your spreadsheets. With that, you can then categorize your spending and build your own personal Mint. This feature, dubbed “Money in Excel,” will launch in the U.S. in the coming months.
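Microsoft hasn’t published the mechanics, but the described flow (pull transactions from a bank aggregator, then bucket them by merchant) is straightforward to picture. Here is a minimal sketch in Python; the transactions, keyword rules and category names are all invented for illustration, and a real integration would fetch the data via Plaid rather than hardcoding it:

```python
# Hypothetical sketch of the "link accounts, categorize spending" flow.
# All data and rules below are made up; a real version would fetch
# transactions through Plaid's API instead of hardcoding them.

transactions = [
    {"merchant": "Whole Foods", "amount": 84.12},
    {"merchant": "Shell Gas", "amount": 41.50},
    {"merchant": "Netflix", "amount": 12.99},
]

CATEGORY_KEYWORDS = {
    "groceries": ["whole foods", "safeway"],
    "transport": ["shell", "chevron", "uber"],
    "entertainment": ["netflix", "spotify"],
}

def categorize(merchant):
    """Bucket a transaction by matching keywords in the merchant name."""
    name = merchant.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in name for keyword in keywords):
            return category
    return "uncategorized"

totals = {}
for tx in transactions:
    category = categorize(tx["merchant"])
    totals[category] = totals.get(category, 0.0) + tx["amount"]

for category, amount in sorted(totals.items()):
    print(f"{category}: ${amount:.2f}")
```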

In addition, Excel is getting a lot more cloud- and AI-driven data types that now cover over 100 topics, including nutrition, movies, places, chemistry and — because why not — Pokémon. Like some of the previous features, this is an extension of the work Microsoft did on Excel in the last few years, starting with the ability to pull in stock market and geographical data.
And just like with the previous set of features, you’ll need a Microsoft 365 subscription to access these additional data types; otherwise, you’re restricted to the stock market and geography data types. The new types will roll out to Office Insiders in the spring and then to Personal and Family subscribers in the U.S. in the coming months.
Outlook gets more personal
Even though you may want to forget about Outlook and ignore your inbox for a while, Microsoft doesn’t. In Outlook on the web, you can now link your personal and work calendars to ensure you don’t end up with a work meeting in the middle of a personal appointment, because Chris from marketing really needs another sync meeting during your gym time even though a short email would suffice.
Outlook for Android can now summarize and read your emails aloud for you, too. This feature will roll out in the coming months.

Family Safety
While most of the new features here focus on existing applications, Microsoft is also launching one completely new app: Microsoft Family Safety. This app is coming to Microsoft 365 subscribers on iOS and Android and will bring together a set of tools that can help families manage their online activities and track the location of family members.
The app lets families manage the screen time of their kids (and maybe parents, too) across Windows, Android and Xbox, for example. Parents can also set content filters that only allow kids to download age-appropriate apps. But it also allows parents to track their kids in the real world through location tracking and even driving reports. This, as Mehdi stressed, is a feature kids can turn off, though they’ll probably have to explain themselves to their parents afterward. Indeed, he stressed that a lot of what the app does is give parents a chance to have a dialog with their kids. What makes the service unique is that it works across platforms, with iOS support for these controls coming in the future.
This app is launching as a limited preview now and will be available in the coming months (I think you can spot a trend here).

Partner benefits
Mehdi noted that Microsoft is also partnering with companies like Adobe, Bark, Blinkist, Creative Live, Experian, Headspace and TeamSnap to provide Microsoft 365 subscribers with additional benefits like limited-time access to their products and services. Subscribers will get three free months of access to Adobe’s Creative Cloud Photography plan, for example.
At the core of today’s updates, though, is a mission to bring a lot of the productivity tools that people know from their work life to their personal life, too, with the personal edition of Teams being the core example.
“We’re very much excited to bring this type of value — not increase the price of Office 365 — take a big step forward, and then move to this,” Mehdi said. “We think now more than ever, it is valuable for people to have the subscription service for their life that helps them make the most of their time, protects their family, lets them develop and grow. And our goal or aspiration is: Can we give you the most valuable subscription for your life? I know people value their video subscriptions and music subscriptions. Our aspiration is to provide the most valuable subscription for your life via Microsoft 365 Personal and Family.”

‘It’s part of my job as a VC to remain calm,’ says Anorak’s Greg Castle

As the venture landscape adjusts to the COVID-19 pandemic and seismic shifts in public markets, early-stage VCs are reassessing which bets they’re making, along with questions they’re asking of founders who are exploring bleeding-edge technology.
Anorak Ventures is a small seed-investment firm that bets on emerging tech like AR/VR, machine learning and robotics. I recently hopped on a Zoom call with founder Greg Castle to talk about what he’s seen recently in seed investing and how the sector is responding to the crisis. Castle was an early investor in Oculus; his other bets at Anorak include Against Gravity, 6D.ai and Anduril.
Our conversation has been edited for length and clarity.
TechCrunch: Has this pandemic affected the types of companies that you’re looking at?
Greg Castle: From my experience as an investor thus far, being reactive as an investor and looking at “hot” areas has a lot of pitfalls to be mindful of. I think a lot of the areas that excite me as an investor could benefit from what’s going on here, those areas including robotics, automation, immersive entertainment and immersive computing.
Just generally, do you feel like a recession is likely to negatively impact emerging tech more than other areas?

Online tutoring marketplace Preply banks $10M to fuel growth in North America, Europe

Online learning looks likely to be a key beneficiary of the social distancing and quarantine measures that are being applied around the world as countries grapple with the COVID-19 pandemic.
In turn, this looks set to buoy some relative veterans of the space. To wit: Preply, a 2013-founded tutoring marketplace, is today announcing a $10 million Series A. It said the funding will be used to scale the business and beef up its focus on the US market, where it plans to open an office by the end of the year.
The Series A is led by London-based Hoxton Ventures, with European VC funds Point Nine Capital, All Iron Ventures, The Family, EduCapital, and Diligent Capital also participating.
Preply’s press release also notes a number of individual angel investors jumped aboard this round: Arthur Kosten of Booking.com; Gary Swart, former CEO of Upwork; David Helgason, founder of Unity Technologies; and Daniel Hoffer, founder of Couchsurfing.
The startup said it has seen a record number of daily hours booked on its platform this week. It also reports a spike in the number of tutors registering in markets including the U.S., U.K., Germany, France, Italy and Spain — which are among the regions where schools have been closed as a coronavirus response measure.
Preply also said this week that some countries have seen the number of tutor registrations triple vs. the same period in February, while the number of hours students are booking on the platform has doubled in some markets.
The former TechStars Berlin alum closed a $1.3M seed back in 2016 to expand its marketplace in Europe, when it said it had 25,000 “registered” tutors — and was generating revenue from more than 130 countries.
The new funding will be used to help scale mainly in North America, France, Germany, Spain, Italy and the UK, it said today.
Another core intent for the funding is to grow Preply’s current network of 10,000 “verified” tutors, who it says are teaching 50 languages to students in 190 countries around the world. So tackling the level of tutor churn it has evidently experienced over the years — by getting more of those who sign up to stick around teaching for a longer haul — looks to be one of the priorities now it’s flush with Series A cash.
It also plans to spend on building additional data-driven tools — including for assessments and homework.
The aim is to increase the platform’s utility by adding more features for tutors to track students’ progress and better support them in hitting learning goals. “Preply wants to engage and enable tutors to develop alongside the platform, giving them the opportunity to explore training and lesson plans so they can streamline their income and maximize their classes,” it said in a press release.
Another area of focus on the product dev front is mobile. Here, Preply said it will be spending to boost the efficiency and improve the UX of its Android and iOS apps.
“The new funding allows us to bring a more in-depth, immersive and convenient experience to both tutors and learners all over the world. Today, we are laser-focused on language learning, but ultimately, I envision a future where anyone can learn anything using Preply,” said Kirill Bigai, CEO of Preply, in a statement.
“Getting to know Kirill and the team at Preply we were most impressed with their tremendous growth already in the US market as well as the size of the global market in online language tutoring. We believe the team has vast opportunity ahead of it, especially in the English-learning segment of the market where Preply already demonstrates market leadership,” added Hoxton Ventures’ Rob Kniaz in another supporting statement.
To date, Preply says some two million classes have been taken with teachers of 160 nationalities via its marketplace. The platform maintains a strong focus on language learning, although topic-based lessons are also offered — such as maths and physics.
The business model entails taking a lead generation fee — in the form of the entire fee for the first lesson — after which it takes a revenue share of any lessons booked thereafter. The average price of a lesson on the platform is $15 to $20 per hour, per Preply, with tutors having leeway to set prices (within some fixed bounds, such as a minimum per lesson price).
The company currently employs 125 staff, based out of Kyiv (Ukraine) and Barcelona (Spain) and says its revenues have grown tenfold in the last three years.
A core tech component of the marketplace is a machine-learning matching system which it uses to increase the efficiency of pairing tutors with learners — touting this as a way to make “smarter connections” that “crack the code of effective language learning”.
In plainer language, it’s using automated decision-making to help users find a relevant teacher without having to do lots of search legwork themselves, while the platform can use AI-matching to drive bookings by managing the experience of tutor discovery in a way that also avoids students being overwhelmed by too much choice.
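Preply hasn’t disclosed how its matching system works, but the heart of any such matcher is a scoring function over tutor and learner features, with ranking to keep the candidate list short. A toy sketch, with invented features and weights:

```python
# Toy tutor-learner matching score. The features and weights are invented;
# Preply's actual machine-learning system is not public.

def match_score(tutor, learner):
    if learner["language"] not in tutor["languages"]:
        return 0.0                       # the target language is a hard requirement
    score = 1.0
    price_gap = abs(tutor["rate"] - learner["budget"])
    score += max(0.0, 1.0 - price_gap / learner["budget"])  # reward price fit
    score += 0.1 * min(tutor["rating"], 5.0)                # small quality bonus
    return score

tutors = [
    {"name": "Ana", "languages": {"Spanish", "English"}, "rate": 18, "rating": 4.9},
    {"name": "Marc", "languages": {"French"}, "rate": 15, "rating": 4.7},
]
learner = {"language": "Spanish", "budget": 20}

ranked = sorted(tutors, key=lambda t: match_score(t, learner), reverse=True)
print([t["name"] for t in ranked])  # ['Ana', 'Marc']
```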

AI and big data won’t work miracles in the fight against coronavirus

To someone with a hammer, every problem looks like a nail — and as expected, the tech sector is hard at work hammering every nail it can find. But the analytical prowess of the modern data ecosystem is especially limited when attempting to tackle the problem of potential coronavirus treatments.
It’s only to be expected — and of course lauded — that companies with immense computing resources would attempt to dedicate those resources in some way to the global effort to combat the virus.
In some ways these efforts are extremely valuable. For instance, one can apply the context-aware text analysis of Semantic Scholar to the thousands of articles on known coronaviruses to make them searchable by researchers around the globe. And digital collaboration tools available globally to research centers and health authorities are leagues beyond where they were during the last health crisis of (or rather, approaching) this magnitude.
But other efforts may give a false sense of progress. One field in particular where AI and tech have made large advances is in drug discovery. Numerous companies have been founded, and attracted hundreds of millions in funding, on the promise of using AI to speed up the process by which new substances can be identified that may have an effect on a given condition.

BenevolentAI starts AI collaboration with AstraZeneca to accelerate drug discovery

Coronavirus is a natural target for such work, and already some companies and research organizations are touting early numbers: ten or a hundred such substances identified which may be effective against coronavirus. These are the types of announcements that gather headlines around them — “An AI found 10 possible coronavirus cures” and that sort of thing.
It’s not that these applications of AI are bad, but rather that they belong to a set with few actionable outcomes. If your big data analysis of traffic supports or undercuts a proposed policy of limiting transportation options in such and such a way, that’s one thing. If your analysis produces dozens of possible courses of action, any of which might be a dead end or even detrimental to current efforts, it’s quite another.
These companies are tech companies, after all, and by necessity they part ways with their solutions once those are proposed. Any given treatment lead requires a grueling battery of real-life tests even to be excluded as a possibility, let alone found to be effective. Even drugs already approved for other purposes would need to be re-tested for this new application before they could be responsibly deployed at scale.
Furthermore, the novel substances that often result from this type of drug discovery process are not guaranteed to have a realistic path to manufacturing even at the scale of thousands of doses, to say nothing of billions. That’s a completely different problem! (Though it must be said, other AI companies are working on that one, too.)

Molecule.one uses machine learning to make synthesizing new drugs a snap

As a lead generation mechanism these approaches are invaluable, but the problem is not that we have no leads — following up on the leads it already has is all the entire world can manage right now. Again, this is not to say that no one should be doing drug candidate identification, but that these results should be considered for what they are: a list of tasks, with uncertain outcomes, for other people to do.
Similarly, an “AI” system that can, say, automatically analyze chest X-rays is something that could be valuable in the future, and should be pursued — but it’s important to keep expectations in line with reality. A year or two from now there may be telehealth labs set up for that purpose. But no one this spring is going to be given a coronavirus diagnosis by an AI doctor.
Fields where algorithmic predictions and efficiencies would be welcome in ordinary times are going to reject them during an emergency response, where everything needs to be deliberate and triple-checked, not clever and novel. The most attractive and popular approaches for fast-moving startups are rarely the right ones for a global crisis involving millions of lives and thousands of interlocking parts.
We’re happy when a vehicle manufacturer repurposes its factories to produce masks or ventilators, but we don’t expect it to discover new drugs. Similarly, we shouldn’t expect those working on drug discovery to be anything more than that — but AI has a reputation as being something like magic, in that its results are somehow fundamentally superhuman. As has been noted repeatedly before, sometimes “better” processes just get you the wrong answer faster.
The work on the digital bleeding edge of the biotech industry is indispensable in general, yet in the face of a looming health crisis it is uniquely unsuited to mitigating that crisis. Nor should it be expected to, either among the lay public who read only headlines or among the technotopians who find in such advances more promise than is warranted.

Zindi taps 12,000 African data scientists for solutions to COVID-19

Since its inception, Cape Town-based crowdsolving startup Zindi has been building a database of data scientists across Africa.
It now has 12,000 registered on its platform, which uses AI and machine learning to tackle complex problems, and will offer them cash prizes to find solutions to curb COVID-19.
Zindi has an open challenge focused on stemming the spread and havoc of coronavirus and will introduce a hackathon in April. The current competition, sponsored by AI4D, tasks scientists to create models that can use data to predict the global spread of COVID-19 over the next three months.
The challenge is open until April 19; solutions will be evaluated against future numbers and the winner will receive $5,000.
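Winning entries will be far more sophisticated, but a naive baseline illustrates the shape of the task: estimate a recent growth rate and extrapolate it forward. A sketch with fabricated case counts:

```python
# Naive exponential-growth baseline for forecasting case counts.
# The input series is fabricated; real entries would train on actual
# reported data with far richer models.
import math

cases = [10, 14, 20, 29, 41, 58]   # fictional daily cumulative counts

# Average daily growth rate from log-differences of consecutive days.
rates = [math.log(later / earlier) for earlier, later in zip(cases, cases[1:])]
growth = sum(rates) / len(rates)

def forecast(last_count, days_ahead):
    """Extrapolate the fitted exponential trend forward."""
    return [last_count * math.exp(growth * d) for d in range(1, days_ahead + 1)]

print([round(x) for x in forecast(cases[-1], 7)])
```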
The competition fits with Zindi’s business model of building a platform that can aggregate pressing private or public-sector challenges and match the solution seekers to problem solvers.
Founded in 2018, the early-stage venture allows companies, NGOs or government institutions to host online competitions around data-oriented issues.
Zindi’s model has gained the attention of some notable corporate names in and outside of Africa. Those who have hosted competitions include Microsoft, IBM and Liquid Telecom. Public sector actors — such as the government of South Africa and UNICEF — have also tapped Zindi for challenges as varied as traffic safety and disruptions in agriculture.

Image Credits: Zindi
Zindi CEO Celina Lee didn’t imagine a COVID-19 situation precisely, but she sees it as exactly the kind of problem she co-founded Zindi to solve, together with South African Megan Yates and Ghanaian Ekow Duker.

The ability to apply Africa’s data science expertise to solve problems around a complex health crisis such as COVID-19 is what Zindi was meant for, Lee explained to TechCrunch on a call from Cape Town.
“As an online platform, Zindi is well-positioned to mobilize data scientists at scale, across Africa and around the world, from the safety of their homes,” she said.
Lee explained that perception leads many to believe Africa is the victim or source of epidemics and disease. “We wanted to show Africa can actually also contribute to the solution for the globe.”
With COVID-19, Zindi is being employed to alleviate a problem that is also impacting its founder, staff and the world.
Lee spoke to TechCrunch while sheltering in place in Cape Town, as South Africa went into lockdown Friday due to coronavirus. Zindi’s founder explained she also has in-laws in New York and family in San Francisco living under similar circumstances due to the global spread of COVID-19.
Lee believes the startup’s competitions can produce solutions that nations in Africa could tap as the coronavirus spreads. “The government of Kenya just started a task force where they’re including companies from the ICT sector. So I think there could be interest,” she said.
Starting in April, Zindi will launch six weekend hackathons focused on COVID-19.
That could be timely given the trend of COVID-19 in Africa. The continent’s cases by country were in the single digits in early March, but those numbers spiked last week — prompting the World Health Organization’s Regional Director Dr. Matshidiso Moeti to sound an alarm on the rapid evolution of the virus on the continent.
By the WHO’s stats Wednesday, there were 1,691 COVID-19 cases in Sub-Saharan Africa and 29 confirmed deaths related to the virus — up from 463 cases and 10 deaths the previous Wednesday.
The trajectory of the coronavirus in Africa has prompted countries and startups, such as Zindi, to include the continent’s tech sector as part of a broader response. Central banks and fintech companies in Ghana, Nigeria, and Kenya have employed measures to encourage more mobile-money usage, vs. cash — which the World Health Organization flagged as a conduit for the spread of the virus.

Africa turns to mobile payments as a tool to curb COVID-19

The continent’s largest incubator, CcHub, launched a fund and open call for tech projects aimed at curbing COVID-19 and its social and economic impact.
Pan-African e-commerce company Jumia has offered African governments use of its last-mile delivery network for distribution of supplies to healthcare facilities and workers.
Lee anticipates that the startup’s COVID-19-related competitions can provide additional means for policy-makers to combat the spread of the virus.
“The one that’s open right now should hopefully go into informing governments to be able to anticipate the spread of the disease and to more accurately predict the high risk areas in a country,” she said.

Helm.ai raises $13M on its unsupervised learning approach to driverless car AI

Four years ago, mathematician Vlad Voroninski saw an opportunity to remove some of the bottlenecks in the development of autonomous vehicle technology thanks to breakthroughs in deep learning.
Now, Helm.ai, the startup he co-founded in 2016 with Tudor Achim, is coming out of stealth with an announcement that it has raised $13 million in a seed round that includes investment from A.Capital Ventures, Amplo, Binnacle Partners, Sound Ventures, Fontinalis Partners and SV Angel. More than a dozen angel investors also participated, including Berggruen Holdings founder Nicolas Berggruen, Quora co-founders Charlie Cheever and Adam D’Angelo, professional NBA player Kevin Durant, Gen. David Petraeus, Matician co-founder and CEO Navneet Dalal, Quiet Capital managing partner Lee Linden and Robinhood co-founder Vladimir Tenev, among others.
Helm.ai will put the $13 million in seed funding toward advanced engineering and R&D and hiring more employees, as well as locking in and fulfilling deals with customers.
Helm.ai is focused solely on the software. It isn’t building the compute platform or sensors that are also required in a self-driving vehicle. Instead, it is agnostic to those variables. In the most basic terms, Helm.ai is creating software that tries to understand sensor data as well as a human would, in order to be able to drive, Voroninski said.
That aim doesn’t sound different from other companies. It’s Helm.ai’s approach to software that is noteworthy. Autonomous vehicle developers often rely on a combination of simulation and on-road testing, along with reams of data sets that have been annotated by humans, to train and improve the so-called “brain” of the self-driving vehicle.
Helm.ai says it has developed software that can skip those steps, which expedites the timeline and reduces costs. The startup uses an unsupervised learning approach to develop software that can train neural networks without the need for large-scale fleet data, simulation or annotation.
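Helm.ai hasn’t published the details of its technique. Purely as a loose illustration of the unsupervised idea (learning structure from unlabeled sensor frames rather than from human annotations), here is a minimal autoencoder in PyTorch; nothing about it should be read as Helm.ai’s actual method:

```python
# Generic unsupervised representation learning on unlabeled "frames".
# This is an ordinary autoencoder for illustration only; Helm.ai has not
# disclosed its actual training technique.
import torch
import torch.nn as nn

frames = torch.rand(256, 3 * 32 * 32)  # stand-in for unlabeled camera frames

model = nn.Sequential(
    nn.Linear(3 * 32 * 32, 128), nn.ReLU(),  # encoder: compress each frame
    nn.Linear(128, 3 * 32 * 32),             # decoder: reconstruct it
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):
    reconstruction = model(frames)
    loss = loss_fn(reconstruction, frames)  # note: no labels anywhere
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: reconstruction loss {loss.item():.4f}")
```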
“There’s this very long tail end and an endless sea of corner cases to go through when developing AI software for autonomous vehicles,” Voroninski explained. “What really matters is the unit of efficiency of how much does it cost to solve any given corner case, and how quickly can you do it? And so that’s the part that we really innovated on.”
Voroninski first became interested in autonomous driving at UCLA, where he learned about the technology from his undergrad adviser who had participated in the DARPA Grand Challenge, a driverless car competition in the U.S. funded by the Defense Advanced Research Projects Agency. And while Voroninski turned his attention to applied mathematics for the next decade — earning a PhD in math at UC Berkeley and then joining the faculty in the MIT mathematics department — he knew he’d eventually come back to autonomous vehicles. 
By 2016, Voroninski said breakthroughs in deep learning created opportunities to jump in. Voroninski left MIT and Sift Security, a cybersecurity startup later acquired by Netskope, to start Helm.ai with Achim in November 2016.
“We identified some key challenges that we felt like weren’t being addressed with the traditional approaches,” Voroninski said. “We built some prototypes early on that made us believe that we can actually take this all the way.”
Helm.ai is still a small team of about 15 people. Its business aim is to license its software for two use cases — Level 2 (and a newer term called Level 2+) advanced driver assistance systems found in passenger vehicles and Level 4 autonomous vehicle fleets.
Helm.ai does have customers, some of which have gone beyond the pilot phase, Voroninski said, adding that he couldn’t name them.

Espressive lands $30M Series B to build better help chatbots

Espressive, a four-year-old startup from former ServiceNow employees, is working to build a better chatbot to reduce calls to company help desks. Today, the company announced a $30 million Series B investment.
Insight Partners led the round with help from Series A lead investor General Catalyst along with Wing Venture Capital. Under the terms of today’s agreement, Insight founder and managing director Jeff Horing will be joining the Espressive Board. Today’s investment brings the total raised to $53 million, according to the company.
Company founder and CEO Pat Calhoun says that when he was at ServiceNow he observed that, in many companies, employees often got frustrated looking for answers to basic questions. That resulted in a call to a Help Desk requiring human intervention to answer the question.
He believed that there was a way to automate this with AI-driven chatbots, and he founded Espressive to develop a solution. “Our job is to help employees get immediate answers to their questions or solutions or resolutions to their issues, so that they can get back to work,” he said.
They do that by providing a very narrowly focused natural language processing (NLP) engine to understand the question and find answers quickly, while using machine learning to improve on those answers over time.
“We’re not trying to solve every problem that NLP can address. We’re going after a very specific set of use cases which is really around employee language, and as a result, we’ve really tuned our engine to have the highest accuracy possible in the industry,” Calhoun told TechCrunch.
He says what they’ve done to increase accuracy is combine the NLP with image recognition technology. “What we’ve done is we’ve built our NLP engine on top of some image recognition architecture that’s really designed for a high degree of accuracy and essentially breaks down the phrase to understand the true meaning behind the phrase,” he said.
The solution is designed to provide a single immediate answer. If, for some reason, it can’t understand a request, it will open a help ticket automatically and route it to a human to resolve, but they try to keep that to a minimum. He says that when they deploy their solution, they tune it to the individual customers’ buzzwords and terminology.
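That answer-or-escalate flow boils down to a confidence threshold on the engine’s output. A hedged sketch of the pattern, with invented intents and a stand-in scorer where Espressive’s trained NLP model would sit:

```python
# Sketch of the answer-or-escalate pattern described above. The intents,
# answers and scoring function are invented; Espressive's engine is a
# trained NLP model, not keyword overlap.
import string

KNOWLEDGE = {
    "reset password": "Visit the self-service portal and click 'Forgot password'.",
    "vpn setup": "Install the VPN client from the software center, then sign in.",
}
CONFIDENCE_THRESHOLD = 0.75

def classify(question):
    """Stand-in scorer: fraction of intent words present in the question."""
    cleaned = question.lower().translate(str.maketrans("", "", string.punctuation))
    words = set(cleaned.split())
    best_intent, best_score = None, 0.0
    for intent in KNOWLEDGE:
        intent_words = intent.split()
        overlap = len(words & set(intent_words)) / len(intent_words)
        if overlap > best_score:
            best_intent, best_score = intent, overlap
    return best_intent, best_score

def answer(question):
    intent, confidence = classify(question)
    if intent and confidence >= CONFIDENCE_THRESHOLD:
        return KNOWLEDGE[intent]
    # Low confidence: open a ticket and route to a human, as described above.
    return "I've opened a help ticket and routed it to the service desk."

print(answer("How do I reset my password?"))  # confident, direct answer
print(answer("My printer is haunted"))        # low confidence, escalate
```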
So far they have been able to reduce help desk calls by 40% to 60% across customers, with around 85% employee participation, which shows that employees are using the tool and that it’s providing the answers they need. In fact, the product understands 750 million employee phrases out of the box.
The company was founded in 2016. It currently has 65 employees and 35 customers, but with the new funding, both of those numbers should increase.

Monitoring is critical to successful AI

Amit Paka
Contributor

Amit Paka is co-founder and chief product officer at Fiddler Labs, an explainable AI startup that enables enterprises to deploy and scale risk- and bias-free AI applications.

Krishna Gade
Contributor

Krishna Gade is co-founder and CEO at Fiddler Labs, an explainable AI startup that enables enterprises to deploy and scale risk- and bias-free AI applications.

As the world becomes more deeply connected through IoT devices and networks, consumer and business needs and expectations will soon only be sustainable through automation.
Recognizing this, artificial intelligence and machine learning are being rapidly adopted by critical industries such as finance, retail, healthcare, transportation and manufacturing to help them compete in an always-on and on-demand global culture. However, even as AI and ML provide endless benefits — such as increasing productivity while decreasing costs, reducing waste, improving efficiency and fostering innovation in outdated business models — there is tremendous potential for errors that result in unintended, biased results and, worse, abuse by bad actors.
The market for advanced technologies including AI and ML will continue its exponential growth, with market research firm IDC projecting that spending on AI systems will reach $98 billion in 2023, more than two and one-half times the $37.5 billion that was projected to be spent in 2019. Additionally, IDC foresees that retail and banking will drive much of this spending, as the industries invested more than $5 billion in 2019.
These findings underscore the importance, for companies that are leveraging or plan to deploy advanced technologies in their business operations, of understanding how and why those systems make certain decisions. Moreover, having a fundamental understanding of how AI and ML operate is even more crucial for conducting proper oversight in order to minimize the risk of undesired results.
Companies often realize AI and ML performance issues only after the damage has been done, which in some cases has made headlines. Such instances of AI driving unintentional bias include the Apple Card offering women lower credit limits and Google’s AI algorithm for monitoring hate speech on social media being racially biased against African Americans. And there have been far worse examples of AI and ML being used to spread misinformation online through deepfakes, bots and more.
Through real-time monitoring, companies will be given visibility into the “black box” to see exactly how their AI and ML models operate. In other words, explainability will enable data scientists and engineers to know what to look for (a.k.a. transparency) so they can make the right decisions (a.k.a. insight) to improve their models and reduce potential risks (a.k.a. building trust).
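One concrete form such monitoring takes is drift detection: compare the distribution of a model’s live scores against the distribution seen at training time and alert when they diverge. A simple sketch using a Population Stability Index-style check (the data is simulated, and the 0.25 alert threshold is a common rule of thumb rather than a standard):

```python
# Simple drift monitor: compare live prediction scores against a training
# baseline using a Population Stability Index (PSI). Data is simulated and
# the 0.25 threshold is a common rule of thumb, not a standard.
import math
import random

random.seed(0)
baseline = [random.betavariate(2, 5) for _ in range(10_000)]  # training-time scores
live = [random.betavariate(4, 3) for _ in range(10_000)]      # shifted live scores

def psi(expected, actual, bins=10):
    """Sum of (a - e) * ln(a / e) over equal-width score buckets."""
    edges = [i / bins for i in range(bins + 1)]
    def fraction(data, lo, hi):
        count = sum(lo <= x < hi for x in data)
        return max(count / len(data), 1e-6)  # avoid log(0) on empty buckets
    total = 0.0
    for lo, hi in zip(edges, edges[1:]):
        e = fraction(expected, lo, hi)
        a = fraction(actual, lo, hi)
        total += (a - e) * math.log(a / e)
    return total

score = psi(baseline, live)
print(f"PSI = {score:.3f}")
if score > 0.25:
    print("ALERT: significant drift; investigate the model's inputs")
```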
But there are complex operational challenges that must first be addressed in order to achieve risk-free and reliable, or trustworthy, outcomes.
5 key operational challenges in AI and ML models

Stuart Russell on how to make AI ‘human-compatible’

In a career spanning several decades, artificial intelligence researcher and professor Stuart Russell has contributed extensive knowledge on the subject, including foundational textbooks. He joined us onstage at TC Sessions: Robotics + AI to discuss the threat he perceives from AI, and his book, which proposes a novel solution.
Russell’s thesis, which he develops in “Human Compatible: Artificial Intelligence and the Problem of Control,” is that the field of AI has been developed on the false premise that we can successfully define the goals toward which these systems work, and the result is that the more powerful they are, the worse the outcomes they are capable of producing. No one really thinks a paperclip-making AI will consume the Earth to maximize production, but a crime-prevention algorithm could very easily take badly constructed data and objectives and turn them into recommendations that cause real harm.
The solution, Russell suggests, is to create systems that aren’t so sure of themselves — essentially, knowing what they don’t or can’t know and looking to humans to find out.
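That proposal can be caricatured in a few lines of code: an agent that maintains a probability distribution over what the human actually wants, and asks rather than acts when no candidate objective is clearly dominant. A toy sketch, not a rendering of Russell’s formal framework:

```python
# Toy version of an AI that "knows what it doesn't know": the agent holds
# a distribution over candidate human objectives and defers to the human
# when no objective is clearly dominant. Purely illustrative.

def act_or_ask(objective_beliefs, confidence_needed=0.9):
    """objective_beliefs maps each candidate objective to a probability."""
    best = max(objective_beliefs, key=objective_beliefs.get)
    if objective_beliefs[best] >= confidence_needed:
        return f"ACT: pursue '{best}'"
    return "ASK: objective unclear, request clarification from the human"

# Nearly certain what the human wants, so act:
print(act_or_ask({"fetch coffee": 0.95, "clean desk": 0.05}))

# Genuinely unsure, so defer to the human instead of guessing:
print(act_or_ask({"maximize output": 0.55, "preserve safety": 0.45}))
```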
The interview has been lightly edited. My remarks, though largely irrelevant, are retained for context.
TechCrunch: Well, thanks for joining us here today. You’ve written a book; congratulations on it. You’ve been an AI researcher, author and teacher for a long time, and you’ve seen the field of AI graduate from a niche field that academics were working in to a global priority in private industry. But I was a little surprised by the thesis of your book; do you really think that the current approach to AI is fundamentally mistaken?
Stuart Russell: So let me take you back a bit, to even before I started doing AI. So, Alan Turing, who, as you all know, is the father of computer science — that’s why we’re here — wrote a very famous paper in 1950 called “Computing Machinery and Intelligence”; that’s where the Turing test comes from. He laid out a lot of different subfields of AI and proposed that we would need to use machine learning to create sufficiently intelligent programs.

FluSense system tracks sickness trends by autonomously monitoring public spaces

One of the obstacles to accurately estimating the prevalence of sickness in the general population is that most of our data comes from hospitals, not the 99.9 percent of the world that isn’t hospitals. FluSense is an autonomous, privacy-respecting system that counts the people and coughs in public spaces to keep health authorities informed.
Every year has a flu and cold season, of course, though this year’s is far more dire. But it’s like an ordinary flu season in that the main way anyone estimates how many people are sick is by analyzing stats from hospitals and clinics. Patients reporting “influenza-like illness” or certain symptoms get aggregated and tracked centrally. But what about the many folks who just stay home, or go to work sick?
We don’t know what we don’t know here, and that makes estimates of sickness trends — which inform things like vaccine production and hospital staffing — less reliable than they could be. Not only that, but it likely produces biases: Who is less likely to go to a hospital, and more likely to have to work sick? Folks with low incomes and no healthcare.
Researchers at the University of Massachusetts Amherst are attempting to alleviate this data problem with an automated system they call FluSense, which monitors public spaces, counting the people in them and listening for coughing. A few of these strategically placed in a city could give a great deal of valuable data and insight into flu-like illness in the general population.
Tauhidur Rahman and Forsad Al Hossain describe the system in a recent paper published in an ACM journal. FluSense basically consists of a thermal camera, a microphone, and a compact computing system loaded with a machine learning model trained to detect people and the sounds of coughing.
To be clear at the outset, this isn’t recording or recognizing individual faces; like a camera doing face detection in order to set focus, this system only registers that a face and body exist, and uses that to count the people in view. The number of coughs detected is compared to the number of people, and a few other metrics like sneezes and amount of speech, to produce a sort of sickness index — think of it as coughs per person per minute.
A sample setup (above); the FluSense prototype hardware (center); and sample output from the thermal camera, with individuals counted and outlined.
Sure, it’s a relatively simple measurement, but there’s nothing like this out there, even in places like clinic waiting rooms where sick people congregate; admissions staff aren’t keeping a running tally of coughs for daily reporting. One can imagine not only characterizing the types of coughs, but also tracking visual markers like how closely packed people are, and location information like sickness indicators in one part of a city versus another.
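The metric itself really is simple: normalize detected cough events by crowd size over a time window. A short sketch of that arithmetic, with fabricated observation data:

```python
# A "coughs per person per minute" style index, as described above.
# The observation windows below are fabricated sample data.

observations = [
    # (minute, people_in_view, coughs_detected)
    (1, 12, 0),
    (2, 15, 2),
    (3, 14, 1),
    (4, 16, 3),
]

person_minutes = sum(people for _, people, _ in observations)
total_coughs = sum(coughs for _, _, coughs in observations)

index = total_coughs / person_minutes
print(f"sickness index: {index:.3f} coughs per person-minute")
```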
“We believe that FluSense has the potential to expand the arsenal of health surveillance tools used to forecast seasonal flu and other viral respiratory outbreaks, such as the COVID-19 pandemic or SARS,” Rahman told TechCrunch. “By understanding the ebb and flow of the symptoms dynamics across different locations, we can have a better understanding of the severity of a novel infectious disease and that way we can enforce targeted public health intervention such as social distancing or vaccination.”
Obviously privacy is an important consideration with something like this, and Rahman explained that was partly why they decided to build their own hardware, since, as some may have realized already, this is a system that would be possible (though not trivial) to integrate into existing camera systems.
“The researchers canvassed opinions from clinical care staff and the university ethical review committee to ensure the sensor platform was acceptable and well-aligned with patient protection considerations,” he said. “All persons discussed major hesitations about collecting any high-resolution visual imagery in patient areas.”
Similarly, the speech classifier was built specifically not to retain any speech data beyond the fact that someone spoke — you can’t leak sensitive data if you never collect any.
The plan for now is to deploy FluSense “in several large public spaces,” one presumes on the UMass campus, in order to diversify the data. “We are also looking for funding to run a large-scale multi-city trial,” Rahman said.
In time this could be integrated with other first- and second-hand metrics used in forecasting flu cases. It may not be in time to help much with controlling COVID-19, but it could very well help health authorities plan better for the next flu season, something that could potentially save lives.

Hospital droid Diligent Robotics raises $10M to assist nurses

Twenty-eight percent of a nurse’s time is wasted on low-skilled tasks like fetching medical tools. We need nurses focused on the complex and compassionate work of treating patients, especially amid the coronavirus outbreak. Diligent Robotics wants to give them a helper droid that can run errands for them around the hospital. The startup’s bot, Moxi, is equipped with a flexible arm, a gripper hand and full mobility, so it can hunt down lightweight medical resources, navigate a clinic’s hallways and drop them off for the nurse.
With the world facing a critical shortage of medical care professionals, Moxi could help health care centers use their staffs as efficiently as possible. And since robots can’t be infected by COVID-19, they’re one less potential carrier interacting with vulnerable populations.

Today, Diligent Robotics announces its $10 million Series A, which will help it scale up to deliver “more robots to more hospitals,” CEO Andrea Thomaz tells me. “We’ve been designing our product, Moxi, side by side with hospital customers because we don’t just want to give them an automation solution for their materials management problems. We want to give them a robot that frontline staff are delighted to work with and feels like a part of the team.”
The round, led by DNX Ventures, brings Diligent Robotics to $15.75 million in total funding, which has propelled it to the fifth generation of its Moxi robot. It currently has two robots deployed in Dallas, Texas, but is already working with two of the three top hospital networks in the U.S. “As the current pandemic and circumstance has shown, the real heroes are our healthcare providers,” says Q Motiwala, partner at DNX Ventures. The new cash from DNX, True Ventures, Ubiquity Ventures, Next Coast Ventures, Grit Ventures, E14 Fund and Promus Ventures will help Diligent Robotics expand Moxi’s use cases and seamlessly complement nurses’ workflows to help alleviate the talent crunch.
Thomaz came up with the idea for a hospital droid after doing her Ph.D. in social robotics at the MIT Media Lab. Her co-founder and CTO Vivien Chu had done a master’s at UPenn on how to give robots a sense of touch, and then came to work with Thomaz at Georgia Tech. They were inspired by a study revealing how much time nurses spent acting as hospital gofers, so in 2016 they applied for and won a $750,000 National Science Foundation grant that funded a six-month sprint to build a prototype of Moxi.
Since then, 18-person Diligent Robotics has worked with hundreds of nurses to learn exactly what they need from an autonomous assistant. “Today you will go about your day, and you probably won’t interact with any robots… we want to change that,” Thomaz tells me. “The only way you can really bring robots out of the warehouses, off of the factory floors, is to build a robot that can work in our dynamic and messy everyday human environments.” The startup’s intention isn’t to fully replace humans, which it doesn’t think is possible, but to let them focus on the most human elements of their jobs.
Moxi is about the size of a human but designed to look like an ’80s movie robot so as not to engender any uncanny valley cyborg weirdness. Its head and eyes can move to signal intent, like which direction it’s about to move in, while sounds let it communicate with nurses and acknowledge their commands. A moving pillar lets it adjust its height, while its gripper hand and arm can pick up and put down smaller pieces of hospital equipment. Its round shape and courteous navigation make sure it can politely share crowded hallways and travel via elevator.

Diligent Robotics’ solution engineers work with hospitals to teach Moxi how to get around and what they need. The company hopes to eventually build the ability to learn and adapt right into the bot, so nurses can teach it new tasks on the fly. “The team continues to demonstrate unmatched robotics-specific innovation by combining social intelligence and human-guided learning capabilities,” says True Ventures partner and Diligent board member Rohit Sharma.
Hospitals pay an upfront fee to buy Moxi robots, and then there’s a monthly fee for the software, services and maintenance. Thomaz admits that “hospitals are naturally risk-averse, and can be wary to take up new technology,” so the startup is taking a slow and steady approach to deployment, hoping to convince buyers that Moxi is worth the learning curve.
Diligent Robotics will be competing with companies like Aethon’s TUG bot for pulling laundry and pharmacy carts. Other players in the hospital tech space include Xenex’s machine that disinfects rooms with light, and surgical bots like those from Johnson & Johnson’s Auris and Intuitive Surgical.

Diligent Robotics hopes to differentiate itself by building social intelligence into Moxi so it feels more like an intern than a gadget. “Time and again, we hear from our hospital partners that Moxi not only returns time to their day but also brings a smile to their face,” says Thomaz. The company wants to evolve Moxi for other dull, dirty, or dangerous service jobs.
Eventually, Diligent Robotics hopes to bring Moxi into people’s homes. “While we don’t see robots replacing the companionship and the human connection, we do dream of a time when robots could make nursing homes more pleasant by offsetting the often staggering caretaker-to-bed ratios (as bad as 30:1),” Thomaz concludes. That way, Moxi could “help people age with dignity and hold onto their independence for as long as possible.”

Anthony Levandowski pleads guilty to one count of trade secrets theft under plea deal

Anthony Levandowski, the former Google engineer and serial entrepreneur who was at the center of a lawsuit between Uber and Waymo, has pleaded guilty to one count of stealing trade secrets while working at Google, under a plea agreement reached with the U.S. Attorney’s office.
While Levandowski still faces a possible prison sentence of 24 to 30 months, the outcome is much rosier than it could have been. In August, a federal grand jury indicted Levandowski on 33 counts of theft and attempted theft.
“Mr. Levandowski accepts responsibility and is looking forward to resolving this matter. Mr. Levandowski is a young man with enormous talents and much to contribute to the fast-moving world of AI and AV, and we hope that this plea will allow him to move on with his life and focus his energies where they matter most,” his attorney, Miles Ehrlich, said in an emailed statement.
Under the plea agreement, Levandowski admits to downloading thousands of files related to Project Chauffeur, the Google self-driving project that later spun out to become Waymo. Levandowski was an engineer and one of the founding members of Project Chauffeur, which launched in 2009.
He said that in 2015, prior to leaving to start his own self-driving trucking company, he downloaded 14,000 documents from an internal Google server and transferred them to his laptop. Levandowski specifically pleaded guilty to count 33 of the indictment, which relates to taking what was known as the Chauffeur Weekly Update, a spreadsheet that contained a variety of details, including quarterly goals and weekly metrics, the team’s objectives and key results, as well as summaries of 15 technical challenges faced by the program and notes related to previous challenges that had been overcome, according to the filing.
Levandowski said in the plea agreement that he downloaded the Chauffeur Weekly Update to his personal laptop on or about January 17, 2016, and accessed the document after his resignation from Google, which occurred about 10 days later.
“Mr. Levandowski’s guilty plea in a criminal hearing today brings to an end a seminal case for our company and the self-driving industry and underscores the value of Waymo’s intellectual property,” a Waymo spokesperson said in an emailed statement. “Through today’s development and related cases, we are successfully protecting our intellectual property as we build the world’s most experienced driver.”
Levandowski left Google and started Otto, a self-driving trucking company that was then bought by Uber. Waymo later sued Uber for trade secret theft. Waymo alleged in the suit, which went to trial and ended in a settlement, that Levandowski stole trade secrets, which were then used by Uber. Under the settlement, Uber agreed to not incorporate Waymo’s confidential information into their hardware and software. Uber also agreed to pay a financial settlement that included 0.34% of Uber equity, per its Series G-1 round $72 billion valuation. That calculated at the time to about $244.8 million in Uber equity.
The plea deal puts an end to the criminal charges. However, Levandowski still faces a civil matter. An arbitration panel ruled in December that Levandowski and Lior Ron had engaged in unfair competition and breached their contract with Google when they left the company to start a rival autonomous vehicle company focused on trucking, called Otto. Uber acquired Otto in 2016. Earlier this month, a San Francisco County court confirmed the panel’s decision and ordered Levandowski to pay $179 million.
Ron settled with Google in February 2019 for $9.7 million. Levandowski had disputed the ruling, but the San Francisco County Superior Court denied his petition, granting Google’s petition to hold him to the arbitration agreement under which he was liable. Levandowski himself may not have to pay the money personally, as this sort of liability may fall to his employer depending on his contract or other legal quirks. However, Levandowski personally filed for Chapter 11 bankruptcy on March 4, stating that the presumptive $179 million debt far exceeds his assets, which he estimates at somewhere between $50 million and $100 million.

Mya Systems gets $18.75M to keep scaling its recruitment chatbot

Hiring chatbot Mya Systems — which uses a conversational AI to grease the recruitment pipeline by automating sourcing for agencies and large enterprises needing to fill lots of vacancies in areas such as retail and warehouse jobs — has closed an $18.75 million Series C.
The funding round was led by Notion Capital with participation from earlier investors, Foundation Capital and Emergence Capital, along with Workday Ventures. The 2012-founded company, which was previously known as FirstJob, raised an $11.4M Series A back in 2017.
Touting growth over the past year, Mya said it saw 3x customer subscription growth in 2019.
In all, it says it now has more than 460 brands using its tools — including six of the eight largest staffing agencies and 29 of the Fortune 100 — name-checking the likes of Hays, Adecco, L’Oreal, Deloitte and Anheuser-Busch.
Its chatbot approach to engaging and “deeply” screening applicants via a mobile app has led to more than 400,000 interviews being scheduled with “qualified and interested” candidates, it added. 
Pointing to the COVID-19 pandemic, founder and CEO Eyal Grayevsky suggested there could be increased demand for AI job screening as companies face highly dynamic recruitment needs. “Now more than ever, organizations in healthcare, e-commerce, light industrial, transportation, logistics, retail, and other industries impacted by COVID-19 need help scaling recruitment to serve large, unexpected spikes in demand. Mya is uniquely positioned to help organizations with high volume recruitment needs and the increasing reliance on temp and contract-based work,” he said in a statement.
“While some hiring is slowing down due to COVID-19, we are seeing spikes in demand from industries such as healthcare, light industrial, call center, logistics, grocery, and supply chain. We have received multiple requests from our healthcare, light industrial and e-commerce customers seeking additional support to rapidly scale engagement with nurses, in-home care professionals, warehouse workers, call center representatives, etc. to serve rapidly growing demand for those functions,” he also told us, saying the team is prioritizing helping those dealing with spikes in demand as a result of the coronavirus public health emergency.
In addition to conversational AI, Mya has focused on integrating its platform with other tools used for recruitment, including CRM, ATS and HRIS systems — plugging into the likes of Bullhorn, Workday, and SAP SuccessFactors. Asked what the new funding will be put towards, Grayevsky told us deeper integrations with such partners is on the cards, along with expanding use-cases for the product.
“Mya will be using the funds to invest in our platform, further expanding the use cases designed to support the end-to-end recruiting and post hire engagement process, and continuing to deepen our integrations with partner ATS solutions like Bullhorn, Workday, and SAP Successfactors. In addition to deepening our integrations, we are also investing heavily in turnkey, fully-featured solutions built alongside our ATS partners that allow for even greater ease and speed to implement Mya,” he said.
“Mya will also be investing in deepening our core platform and conversational AI technology, specifically to expand our conversational capabilities across new industries. We will further enhance our self-service conversation design and configuration capabilities to make it even easier for our customers to rapidly scale the Mya conversational experience across both high volume, hourly and professional roles. Lastly, we are strengthening our infrastructure and support for global customers who are rapidly scaling internationally (e.g. L’Oreal is now live in 18 countries globally).”
At this point Mya is selling its product into more than 35 countries — predominantly in North America, EMEA and APAC — with a focus on large and mid-sized employers that operate globally, including staffing businesses and corporations across high-volume recruitment industries such as healthcare, light industrial, call center, retail, transportation and logistics, hospitality, grocery and automotive.
“We have teams in both the US and Europe to support our expanding global customer base,” Grayevsky added. “With the new funding, we will continue to invest in the distribution, infrastructure and support needed to address demand across target markets globally.”
He name-checks the likes of Olivia, AllyO, Job Pal, Roborecruiter, XOR, and TextRecruit as main competitors. In terms of differentiation, he points to Mya having processed “tens of millions” of candidate interactions thus far — amassing an “ever-growing domain specific conversational dataset” — which he said enables it to continue to enhance the experience the platform can deliver for candidates.
“We have the most robust conversational technology and platform enabling rich, dynamic, and natural conversational experiences that deliver higher engagement, conversion, and actionable insights,” he claimed. “We have the most in-depth solutions that support use cases across the entire recruiting funnel (e.g. sourcing, talent pool nurturing and re-engagement, screening, scheduling, contractor redeployment, etc.).
“We have the most experience successfully delivering deeply integrated solutions that scale for the largest staffing business and corporations (e.g. our largest customer is now deployed in 600+ locations globally, across hundreds of job roles, and thousands of recruiters on the platform, engaging with millions of both passive and active candidates on an annual basis), and… we are closely partnered with the leading ATS providers such as Workday and Bullhorn, where we have differentiated integrations and channel relationships that give us a competitive advantage.”
We also asked Grayevsky how the recruitment tool complies with different national employment and equality laws, and how it avoids introducing discrimination or bias into its AI-aided screening.
On this he said: “Mya does not apply any advanced AI or machine learning to decision making (i.e. determining fit for a job role), we are squarely focused on developing a robust conversational experience that allows Mya to engage instantly, capture information with high response and completion rates, automate outbound sourcing and scheduling process, and provide ongoing engagement and support with both active applicants and passive candidates in our customer’s talent community.”
He also said the company has put steps in place to confirm the accuracy of candidate responses through the conversation (such as by adding a confirmation step for core requirements), and by “employing processes that check for bias, both before a solution goes into production and once it’s launched”.
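That confirmation step is a telling design detail: rather than trusting a free-text parse of a core requirement, the bot plays the parsed value back to the candidate before recording it. A hypothetical sketch of the pattern (Mya’s real dialogue system is proprietary; this only shows the shape):

```python
# Hypothetical confirmation-step pattern for core screening requirements.
# Mya's real dialogue system is proprietary; this only shows the shape.

def parse_years_experience(text):
    """Crude extraction of a number from a free-text answer."""
    for token in text.split():
        if token.isdigit():
            return int(token)
    return None

def screen(answer_text, confirm):
    years = parse_years_experience(answer_text)
    if years is None:
        return "escalate: could not parse, route to a recruiter"
    # Play the parsed value back to the candidate before recording it.
    if confirm(f"Just to confirm: you have {years} years of experience?"):
        return f"recorded: {years} years"
    return "re-ask: candidate rejected the parsed value"

# Simulated candidate who confirms the parse:
print(screen("I have 7 years in warehouse roles", confirm=lambda prompt: True))
```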
Another step it’s taken to “ensure a positive experience for all candidates,” as he put it, is to provide users an option to reroute a question that the bot does not understand to a recruiter.
“Those queries are immediately submitted into our AI training and annotation pipeline, tested and deployed into production to continually expand our FAQ support within the platform,” he told us.
“Mya’s mission is to create a far more efficient and equitable job market powered by conversational AI, and a core principle of that mission is to level the playing field,” Grayevsky added. “Our AI, engineering and product teams (which includes individuals from highly diverse backgrounds) make all product design decisions to consciously remove bias from the hiring process and ensure that information surfaced to hiring teams is accurate and objective.”
“We are also investing extensively to ensure Mya is in full compliance with local employment laws, data protection and privacy regulations, rules around opt-in and consent, everywhere we operate in the world. Ahead of GDPR, we worked with outside counsel on a Privacy Impact Assessment. That led to our appointment of a Data Protection Officer, and informed our ‘Privacy by Default’ and ‘Privacy by Design’ philosophies. We’re continuing to keep an eye on regulations around AI in the EU, and are already in compliance with the seven key requirements identified in the European Commission’s white paper on artificial intelligence.”
Asked about the disproportionate admin burden automated hiring tools can place on candidates, even as they create efficiencies of scale for employers — i.e. by requiring jobseekers to respond individually to each screening system vs. just submitting a single CV to multiple companies — Grayevsky argued that dynamic, data-capturing recruitment systems such as Mya offer a better experience to job seekers, via increased responsiveness and by expanding potential future job-matching opportunities.
“Traditional job applications often have many steps, and can take a long time to complete, creating a negative experience and drop-off for the employer. Those questionnaires are not dynamic and often lead to no response. That time and effort invested often cannot be leveraged into other opportunities,” he argued.
“With Mya, everyone gets a response instantaneously and around the clock (24/7). They can ask questions and get answers in real-time, providing more transparency and insight into the opportunity, process, and employer. Mya can follow up and if deemed fit by the hiring team, schedule a phone call or on-site interview with the recruiter or hiring manager to help accelerate the process.
“If they are ultimately not deemed fit by the hiring team due to a missing requirement, Mya is able to re-engage to help that candidate connect to a role that is more suitable based on their profile, interests, and availability. The idea is that a candidate can build one profile and be connected to multiple jobs, quickly and efficiently. We built our solution through the lens of the candidate, making it easy to provide information about qualifications, interests, and availability, and get connected to the right opportunities.”

Fear and liability in algorithmic hiring 

Google postpones the online version of Cloud Next until further notice

A few weeks ago, Google canceled the in-person version of Cloud Next, its largest conference of the year, amid a flurry of similar coronavirus-related cancellations of major events. Originally, Cloud Next was scheduled to run from April 6 to 8. Like other companies, Google at the time said it would hold an online version of the show, but as the company announced today, it is now also postponing that. The company did not announce a new date.
“Right now, the most important thing we can do is focus our attention on supporting our customers, partners, and each other,” Alison Wagonfeld, Google Cloud’s Chief Marketing Officer, writes. “Please know that we are fully committed to bringing Google Cloud Next ‘20: Digital Connect to life, but will hold the event when the timing is right. We will share the new date when we have a better sense of the evolving situation.”
Chances are we will see a few more of these announcements in the coming weeks. As companies move to remote work, states enact curfews and social distancing becomes a word everybody suddenly knows, even putting on a streamed keynote is getting harder. From a more cynical point of view, it’s also worth noting that tech companies now face a world where there isn’t all that much interest in their announcements during a relentless news cycle that prioritizes other topics. Over the last few days, we’ve seen a number of companies postpone their pre-planned announcements, most of which weren’t public yet, and more are sure to come.

Google cancels Cloud Next because of coronavirus, goes online-only

Veteran VC Mike Volpi discusses investing and fundraising in ‘a very difficult time’

Last week, I talked with Mike Volpi, a longtime Index Ventures partner who ran M&A at Cisco for many years before that. We originally planned to talk about Index and general market trends, and we did. The topics we discussed included whether self-driving technologies have attracted too much funding and the damage inflicted by SoftBank on its portfolio companies.
Still, few could have predicted how extraordinarily trying the week would be leading up to our interview. Little wonder we spent much of our time talking about who is likely to snap shut their checkbook first, and why, in some cases, the best thing to do now is to keep the money flowing.
We have parts of our conversation available in podcast form here; other excerpts, including some not in the podcast, follow. These have been lightly edited for length.

Mike Volpi on the art of board membership

TechCrunch: Let’s talk first about Index. You closed your last funds in 2018 with $1.65 billion in capital commitments. Are you in the market again now?
Mike Volpi: We raise funds every three-ish years. So, at some point, yeah, we’ll be in the market again. [We are] not specifically at this point in time, but sooner or later we’ll raise another fund.
More broadly speaking — and because the market is tanking so badly as we speak — do LPs tend to snap their checkbooks shut as soon as trouble hits? What’s been your experience over the years?

No mute necessary: Taiv replaces live TV ads at bars with custom content

Live TV is a staple of bars and restaurants, packing sports fans in during playoffs and convincing people to grab one more beer to finish off that late-night X-Files rerun before heading home. But the commercials are a pain. Taiv turns TV ads into opportunities by sensing when they start and immediately switching to the bar’s own promo content.
It’s one of those ideas that seems incredibly obvious in retrospect. With recorded TV you can sometimes skip ads, but it’s harder when they’re broadcast live. In that case the thing people tend to do is hit mute or change the channel — simple enough, but it seems like a waste and a chore for already busy staff. And of course the patrons don’t like ads, either.
Taiv founder and CEO Noah Palansky ran into this issue himself one day and saw an opportunity.
“I was sitting in a bar, the TV cut to commercial, and it showed an ad for, literally, another bar down the street,” he said, and what’s more it was advertising lower prices than what he was paying. “If they were showing me craft beers or events they had going on I’d be super into it, but instead it made me feel like I was getting ripped off.”
If it’s okay for bar owners to switch channels when ads come on, why not something that does it automatically? And instead of surfing for another game or show, why not just switch to pre-made content? It led him to found the company debuting today at Y Combinator’s (virtual) demo day.
Taiv installs hardware at the restaurant that sits between the live feed and the TV, analyzing the image so that it can instantly understand that a commercial has come on, and switch over to custom content before anyone even notices.

“We really feel strongly that what we’re doing helps businesses a lot,” Palansky said. “All it looks like on their end is they say, we just launched a promotion on chicken wings, or we have these new beers on tap. Kind of like restaurants have table toppers or pamphlets — it lets them spread awareness to people, but instead of hurting their atmosphere, it helps. And we make all the content for them, automate it and include it.”
“The first thing that happens is that within a week people call and are like, ‘everyone has noticed this!’ One customer saw a 14 percent revenue increase,” he continued. “They’re a Greek restaurant and they had Greek beers, but no one ordered these. They said, ‘hey, we import these’ on the TV and alcohol sales went up 14 percent.”
The value seems obvious. But the execution puzzled me. Doing computer vision on a live stream and detecting when a commercial comes on is a deceptively difficult task — commercials don’t all look the same, and some look a lot like shows! How could they do it in just a few frames?
My first thought was that of someone who’s been burned too many times by AI startups. I asked whether there was a low-wage worker somewhere being paid to manually switch the content by watching every channel simultaneously.
Palansky laughed — because they had considered it. “Our first idea was that exact idea, we literally thought we could hire someone to sit in a room,” he said. “But we needed to be versatile and work with lots of stations, very local ones. If we only supported major broadcasts we’re limiting what customers could show.”
Co-founders (left to right) Jordan Davis, Noah Palansky, and Avi Stoller holding the Taiv hardware.
My second thought was that there was some kind of “tell” that the system could tap into, a watermark or secret “local affiliates, start rolling ads now” signal. Turns out that used to exist but doesn’t any more, or is wrapped up in strange copyright law.
“There are very strict laws about transmitting signals, even timing signals saying, it’s hockey now, now it’s a commercial. That’s been a surprisingly annoying part of the process,” Palansky said. “We’ve had to go through some really weird design constraints.”
Then I thought they must maintain a database of all known commercials and compare the first few frames to it. Nope. Turns out Taiv recognizes commercials pretty much the way we do.
“The vast majority of our algorithm is the same as how people recognize the difference between a hockey game and a commercial,” he explained. “We pull off a bunch of different heuristics and train models on each one. For example, looking at color balance. If you notice in a 30 second block that the color balance has shifted suddenly, it’s probably different content or a different scene. We use lots of those in combination, all in real time.”
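To make that concrete, here is a minimal sketch, in Python, of the color-balance heuristic Palansky describes; the window and threshold values here are invented. Taiv’s production system combines many such signals in real time, so this illustrates just one of them.

```python
import numpy as np

def color_balance(frame):
    """Mean R/G/B intensity of a frame (an H x W x 3 array)."""
    return frame.reshape(-1, 3).mean(axis=0)

def detect_content_switches(frames, window=30, threshold=40.0):
    """Flag frames whose color balance jumps sharply against the
    trailing window -- one heuristic among the several Taiv says it
    combines. The window and threshold values are invented."""
    switches, history = [], []
    for i, frame in enumerate(frames):
        balance = color_balance(frame)
        if len(history) >= window:
            baseline = np.mean(history[-window:], axis=0)
            if np.linalg.norm(balance - baseline) > threshold:
                switches.append(i)
                history = []  # reset the baseline after a suspected cut
        history.append(balance)
    return switches
```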
The error rate is low, and even when the system slips, the cost is minimal — a few frames of a commercial shown, for instance. The system quickly recognizes when it’s made a mistake and will revert within a fraction of a second.
Taiv provides the hardware and setup for free, charging the business based on how many signals are being analyzed. “It’s usually a few hundred bucks a month,” said Palansky, which sounds like a lot until you realize that businesses often pay well over a thousand a month for a commercial cable connection. And the benefits seem pretty tangible.
My last consideration was worry that Palansky would end up with the local TV equivalent of a horse’s head in his bed. Old media folks can be ruthless. But it turns out some are actually into it — the system could be used to provide customized last-mile ad delivery if it’s the network operating it.
Taiv has raised a $500K seed round led by Conexus Venture Capital Fund with participation from Y Combinator and Golden Opportunities. You can get in contact with the company for more details or a demo at Taiv.tv.

Amazon cancels re:MARS 2020 event amid COVID-19 outbreak

Amazon has canceled re:MARS 2020, an annual AI event that focuses on machine learning, automation, robotics and space, over concerns about the COVID-19 pandemic. The event was scheduled to be held June 16 to 19, 2020 in Las Vegas.
Organizers said in a statement on the event’s website that it is canceled and that guests who purchased tickets will receive a full refund. Here’s the statement:
Thank you for your interest in re:MARS 2020. Our top priority is the well-being of our employees, customers, partners, and event attendees. We have been closely monitoring the situation with COVID-19, and after much consideration, we have made the decision to cancel re:MARS 2020. All guests who have purchased tickets will receive a full refund of registration fees. Hotel rooms booked through our conference website will be canceled free of charge. Over the course of the coming weeks, we will explore other ways to engage the community.
Amazon launched re:MARS in 2019; the inaugural event featured founder and CEO Jeff Bezos, Landing AI founder and CEO Andrew Ng, actor and producer Robert Downey Jr., MIT Media Lab researcher Dr. Kate Darling, Boston Dynamics founder Marc Raibert and Zoox CEO Aicha Evans, among others.
COVID-19 is a disease caused by a new virus in the coronavirus family, a close cousin to the SARS and MERS viruses. It has caused governments and companies to cancel tech, business and automotive events around the world, including the NCAA March Madness basketball tournaments, professional sports games in the NBA and NHL, the Geneva International Motor Show, MWC in Barcelona and the SXSW festival in Austin, Texas. Disneyland and California Adventure will close through the end of the month. On Friday, President Donald Trump declared a national emergency, a designation that allows the government to free up more federal resources that states can access as they respond to the outbreak.
Amazon issued guidance Thursday in response to the COVID-19 outbreak recommending that global employees who are able to work from home do so through the end of March.
“We continue to work closely with public and private medical experts to ensure we are taking the right precautions as the situation continues to evolve,” an Amazon spokesperson said in an email statement. “As a result, we are now recommending that all of our employees globally who are able to work from home do so through the end of March.”
Earlier this week, Amazon said it would provide two weeks of extra paid time off for full and part-time employees who are diagnosed with COVID-19 or placed into quarantine. The company said it will continue to pay all hourly employees, including food service, janitorial and security staff, who support its offices around the world.

Glisten uses computer vision to break down fashion photos to their styles and parts

It’s amazing that in this day and age, the best way to search for new clothes is to click a few check boxes and then scroll through endless pictures. Why can’t you search for “green patterned scoop neck dress” and see one? Glisten is a new startup enabling just that by using computer vision to understand and list the most important aspects of the clothing in any photo.
Now, you may think this already exists. In a way, it does — but not a way that’s helpful. Co-founder Alice Deng encountered this working on a fashion search project of her own while at MIT.
“I was procrastinating by shopping online, and I searched for v-neck crop shirt, and only like 2 things came up. But when I scrolled through there were 20 or so,” she said. “I realized things were tagged in very inconsistent ways — and if the data is that gross when consumers see it, it’s probably even worse in the backend.”
As it turns out, computer vision systems have been trained to identify, really quite effectively, features of all kinds of images, from identifying dog breeds to recognizing facial expressions. When it comes to fashion, they do the same sort of thing: Look at the image and generate a list of features with corresponding confidence levels.
So for a given image, it would produce a sort of tag list, like this:
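(A hypothetical example follows, expressed as Python data; the labels and confidence values are invented for illustration.)

```python
# A hypothetical flat tag list from a generic vision model --
# labels and confidences invented for illustration.
tags = [
    ("shirt", 0.97),
    ("maroon", 0.94),
    ("sleeve", 0.91),
    ("cotton", 0.76),
    ("casual", 0.68),
]
```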

As you can imagine, that’s actually pretty useful. But it also leaves a lot to be desired. The system doesn’t really understand what “maroon” and “sleeve” really mean, except that they’re present in this image. If you asked the system what color the shirt is, it would be stumped unless you manually sorted through the list and said, these two things are colors, these are styles, these are variations of styles, and so on.
That’s not hard to do for one image, but a clothing retailer might have thousands of products, each with a dozen pictures, and new ones coming in weekly. Do you want to be the intern assigned to copying and pasting tags into sorted fields? No, and neither does anyone else. That’s the problem Glisten solves, by making the computer vision engine considerably more context-aware and its outputs much more useful.
Here’s the same image as it might be processed by Glisten’s system:
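(Again, a hypothetical illustration rather than Glisten’s actual API response; the field names and scores are invented.)

```python
# A hypothetical structured response in the spirit of Glisten's output --
# field names and scores invented for illustration.
product = {
    "category": {"value": "shirt", "confidence": 0.97},
    "color": {"value": "maroon", "confidence": 0.94},
    "neckline": {"value": "crew neck", "confidence": 0.81},
    "sleeve_length": {"value": "long", "confidence": 0.88},
    "pattern": {"value": "solid", "confidence": 0.90},
}
```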
Better, right?
“Our API response will be actually, the neckline is this, the color is this, the pattern is this,” Deng said.
That kind of structured data can be plugged far more easily into a database and queried with confidence. Users (not necessarily consumers, as Deng explains later) can mix and match, knowing that when they say “long sleeves” the system has actually looked at the sleeves of the garment and determined that they are long.
The system was trained on a growing library of around 11 million product images and corresponding descriptions, which the system parses using natural language processing to figure out what’s referring to what. That gives important contextual clues that prevent the model from thinking “formal” is a color or “cute” is an occasion. But you’d be right in thinking that it’s not quite as easy as just plugging in the data and letting the network figure it out.
Here’s a sort of idealized version of how it looks:
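(A toy illustration of the parsing idea, not Glisten’s actual pipeline; the vocabulary and attribute names are invented.)

```python
# Hypothetical sketch of the training-data parsing step: map words
# from a product description onto attribute types using a small
# vocabulary, so "maroon" lands under color rather than style.
ATTRIBUTE_VOCAB = {
    "color": {"maroon", "blue", "green"},
    "neckline": {"crew neck", "scoop neck", "v-neck"},
    "sleeve_length": {"long sleeve", "short sleeve"},
    "occasion": {"formal", "casual"},
}

def parse_description(text):
    text = text.lower()
    parsed = {}
    for attribute, terms in ATTRIBUTE_VOCAB.items():
        for term in terms:
            if term in text:
                parsed[attribute] = term
    return parsed

print(parse_description("Maroon long sleeve crew neck shirt"))
# {'color': 'maroon', 'neckline': 'crew neck', 'sleeve_length': 'long sleeve'}
```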

“There’s a lot of ambiguity in fashion terms and that’s definitely a problem,” Deng admitted, but far from an insurmountable one. “When we provide the output for our customers we sort of give each attribute a score. So if it’s ambiguous whether it’s a crew neck or a scoop neck, if the algorithm is working correctly it’ll put a lot of weight on both. If it’s not sure, it’ll give a lower confidence score. Our models are trained on the aggregate of how people labeled things, so you get an average of what people’s opinion is.”
Although shoppers will likely see the benefits of Glisten’s tech in time, the company has found that its customers are actually two steps removed from the point of sale.
“What we realized over time was that the right customer is the customer who feels the pain point of having messy unreliable product data,” Deng explained. “That’s mainly tech companies that work with retailers. Our first customer was actually a pricing optimization company, another was a digital marketing company. Those are pretty outside what we thought the applications would be.”
It makes sense if you think about it. The more you know about the product, the more data you have to correlate with consumer behaviors, trends, and such. Knowing summer dresses are coming back is useful, but knowing blue and green floral designs with 3/4 sleeves are coming back is better.
Glisten co-founders Sarah Wooder (left) and Alice Deng.
The model is initially aimed at fashion and clothing in general, but it can be adapted to other categories without having to reinvent the wheel — the same algorithms could find the defining characteristics of cars, beauty products, and so on.
Competition is mainly internal tagging teams (the manual review we established none of us would like to do) and general-purpose computer vision algorithms, which don’t produce the kind of structured data Glisten does.
Even ahead of Y Combinator’s demo day next week the company is already seeing 5 figures of monthly recurring revenue, with their sales process limited to individual outreach to people they thought would find it useful. “There’s been a crazy amount of sales these past few weeks,” Deng said.
Soon Glisten may be powering many a product search engine online, though ideally you won’t even notice — with luck you’ll just find what you’re looking for that much easier.

WTF is computer vision?

Deep North raises $25.7M for AI that uses CCTV to build retail analytics

Amazon and others have raised awareness of how the in-store shopping experience can be sped up (and into the future) using computer vision to let a person pay for and take away items without ever interacting with a cashier, human or otherwise. Today, a startup is announcing funding for its own take on how to use AI-based video detection to get more insights out of the retail experience. Deep North, which has built an analytics platform that builds insights for retailers based on the videos from the CCTV and other cameras those retailers already use, is today announcing that it has raised $25.7 million in funding, a Series A round that it plans to use to continue expanding its platform.
Deep North’s AI currently measures such parameters as daily entries and exits; occupancy; queue times; conversions and heat maps — a list and product roadmap that it’s planning to continue growing with this latest investment. It says that using cameras to build its insights is more accurate and scalable than current solutions that include devices like beacons, RFID tags, mobile networks, smartphone tracking and shopping data. A typical installation takes a weekend to do.
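For a flavor of how camera-derived metrics like these can work, here is a minimal sketch assuming hypothetical anonymous entry/exit events; it shows how an occupancy count can be computed with no identity data at all. Deep North’s actual models are certainly far more sophisticated.

```python
from datetime import datetime

# Hypothetical anonymous events as a door camera might emit them:
# a timestamp and a direction of travel, with no identity attached.
events = [
    (datetime(2020, 3, 16, 9, 0), "entry"),
    (datetime(2020, 3, 16, 9, 5), "entry"),
    (datetime(2020, 3, 16, 9, 20), "exit"),
]

def occupancy_over_time(events):
    """Running occupancy derived purely from entry/exit counts --
    the kind of aggregate metric that requires no PII."""
    occupancy, series = 0, []
    for ts, kind in sorted(events):
        occupancy += 1 if kind == "entry" else -1
        series.append((ts, occupancy))
    return series

print(occupancy_over_time(events))  # peaks at 2, ends at 1
```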
The funding is being led by London VC Celeres Investments (backer of self-driving startup Phantom AI, among others), with participation also from Engage, AI List Capital and others. The startup is not disclosing its valuation, and previously Deep North has not disclosed how much it has raised.
The Bay Area-based startup, previously known as VMAXX, has customers in the US and Europe, according to CEO and co-founder Rohan Sanil. It does not disclose customer names, but Sanil said the list includes shopping centers, retailers, commercial real estate businesses and transportation hubs.
There are a number of retail analytics plays on the market today, but up to now the vast majority of them have been based on using other kinds of non-visual (and non-video) data to build their pictures of how a business is working, including logs of sales, card payments, in-store beacons, in-store WiFi and smartphone usage.
This list is, indeed, extensive and already provides a startling amount of data on the average shopper, but it has its drawbacks. Some people don’t use in-store WiFi; beacons are not as ubiquitous as CCTV; some shopping data is a false positive, in the sense that if you don’t buy anything, it’s harder to track why not and where things went wrong in getting you to shop; and perhaps most importantly, none of these can see how shoppers are behaving, where they are looking and where they are walking.
“The data collected [by these other means] is only 30-60% accurate and then extrapolated,” Sanil notes in a blog post. And that is not the only challenge. “The other is the enormous cost of the technology along with the software – which requires a team of programmers to get anything beyond stock analysis – plus being locked into a single vendor.”
Video systems “make a lot more sense,” he adds, and so does using those that are already installed in retailers’ locations. “The customers we see have no interest in deploying and paying for additional infrastructure, when the average store has several cameras already, and a typical big box store has dozens. Making our vision work means quantifying what a camera can see – and seeing through the cameras already in use.” The company typically integrates with 60-70% of a company’s installed cameras to run its analytics.
It’s that differentiation that has attracted investors. “Deep North’s platform allows retailers to gain real time insights on data points that were previously unattainable in the physical world. By leveraging existing video footage to understand activity and behavior, operators can now make informed decisions with the help of their prescriptive analytics engine,” said Azhaan Merchant of Celeres Investments, in a statement.
CCTV has had a problematic profile in the world of data privacy, where people pinpoint it as enemy number one in our rapidly expanding surveillance economy, and have ironically pointed out that it rarely is fit for the purpose it was originally set out to serve, which is deterring and identifying shoplifters. It’s notable to me that Deep North doesn’t actually ever use the term CCTV. (“Customers use a variety of terms for their cameras including CCTV, camera networks and loss prevention cameras so we’ve chosen to use a broader term that encompasses them,” a spokesperson said.)
Whatever you choose to call them, if a retailer has already made the leap into having these cameras installed, using them for analytics gives that business another way of getting a better return on investment. Sanil says that in any case, its platform is respectful of privacy.
“Deep North is not able to ascertain the identity of any individual captured via in-store footage,” he said. “We have no capability to link the metadata to any single individual. Further, Deep North does not capture personally identifiable information (PII) and was developed to govern and preserve the integrity of each and every individual by the highest possible standards of anonymization. Deep North does not retain any PII whatsoever, and only stores derived metadata that produces metrics such as number of entries, number of exits, etc. Deep North strives to stay compliant with all existing privacy policies including GDPR and the California Consumer Privacy Act.” (It has operations in Europe where it would need to comply with GDPR.)
Still, Deep North’s combination of computer vision with retail technology is a signal of a bigger trend. Many providers of security cameras have started to incorporate retail analytics into their wider offerings, and those concentrating on checkout, like Amazon but also startups like Trigo, are likely to consider this area too. Longer term, as retailers, but also their IT providers, look to get more intelligence about how their businesses are working in a bid for better margins, we’re likely to see even more players in this space.
For Deep North, that might mean expanding into a wider set of products that not only generate insights into how people shop, but also use those insights to build recommendations for how stores are laid out, or prompts to shoppers for what they might consider next as they browse.

Insurtech startup Akur8 raises $8.9M from BlackFin and MTech Capital

Far from replacing humans, artificial intelligence is actually coming to the aid of a very old profession that has fallen out of fashion to such an extent that people are increasingly not joining it. I speak of the rarefied world of the actuary. Actuaries deal with the measurement and management of risk and uncertainty using what is known as ‘actuarial science’. However, in recent years, university graduates who might once have entered the industry are often drawn instead to the slightly ‘hotter’ world of data science, leaving the actuarial world struggling.
Into this problem steps a Paris-based startup called Akur8. Akur8 is a SaaS insurtech startup specializing in insurance pricing optimization using AI, aiming to address the industry’s ever-increasing pressure to offer quick and appropriate prices for retail and commercial insurance customers.
Akur8 has now raised €8m ($8.9m) in a Series A funding round from BlackFin Capital Partners and MTech Capital, bringing its total funding to €10m ($11.2m) following initial investment and incubation from Kamet Ventures in 2018.
Akur8 says its platform can build risk models more than ten times faster than the traditional manual process, reducing the pricing time to market to hours rather than weeks. It is currently deployed by multiple insurance companies throughout Europe and the Americas.
The startup takes the view that standard machine learning approaches don’t work as well because they are hard to reverse engineer, so its AI algorithm automates the insurance pricing process while ‘showing its working’, as it were, thus providing a level of transparency that’s required in the industry to maintain confidence and comply with regulatory requirements.
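To illustrate why transparency matters here (this is a generic example, not Akur8’s method): insurers have long priced risk with generalized linear models, whose fitted coefficients read off directly as per-factor effects a regulator or actuary can inspect. A minimal sketch with invented data:

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

# Toy claims-frequency model: two rating factors, data invented
# for illustration.
X = np.array([[25, 0], [40, 0], [30, 1], [55, 1], [35, 0], [45, 1]])
y = np.array([2, 0, 1, 0, 1, 1])  # claim counts per policy-year

glm = PoissonRegressor(alpha=1e-4).fit(X, y)

# Each coefficient is a readable per-factor effect on log-frequency --
# the kind of 'working' a black-box model cannot show.
for name, coef in zip(["driver_age", "urban"], glm.coef_):
    print(f"{name}: frequency multiplier per unit = {np.exp(coef):.3f}")
```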
Samuel Falmagne, founder and CEO of Akur8, said in a statement that their product “gives carriers the ability to meet customers’ expectations for real-time pricing while improving the accuracy of their risk assessment, thus significantly reducing their loss ratio.”
Julien Creuzé, managing director at BlackFin Capital Partners, said: “Akur8 has developed a highly differentiated AI-based solution for risk modeling and pricing, with tremendous value potential for the insurers who embrace it.” Kevin McLoughlin, partner and co-founder of MTech Capital, said: “Our investment thesis is centered around backing visionary founders with the ambition to transform insurance through the use of technology. We are proud to be backing Akur8 as a unique player solving a critical issue for the entire industry.”

VisualOne smartens up home security cameras with object and action recognition

“Smart” cameras are to be found in millions of homes, but the truth is they’re not all that smart. Facial recognition and motion detection are their main tricks… but what if you want to know if the dog jumped on the couch, or if your toddler is playing with the stove? VisualOne equips cameras with the intellect to understand a bit more of the world and give you more granular — and important — information.
Founder Mohammad Rafiee said that the idea came to him after he got a puppy (Zula) and was dissatisfied with the options he had for monitoring her activities while he was away. Here she is doing what dogs do best:
There are no bad dogs, but chairs are for people.
“There were specific things I wanted to know were happening, like I wanted to check if the dog got picked up by the dog walker. The cameras’ motion detection is useless — she’s always moving,” he lamented. “In fact, with a lot of these cameras, just a change in the lighting or wind or rain can trigger the motion alert, so it’s completely impractical.”
“My background is in machine learning. I was thinking about it, and realized we’re at a stage where this problem is starting to become solvable,” he continued.
Some tasks in computer vision, indeed, are as good as solved — detecting faces and common objects such as cars and bikes can be done quickly and efficiently. But that’s not always useful — what’s the point of knowing someone rode their bike past your house? In order for this to have value, the objects need to be understood as part of a greater context, and that’s what Rafiee and VisualOne are undertaking.
Unfortunately, it’s far from easy — or else everyone would be doing it already. Identifying a cat is simple, and identifying a table is simple, but identifying a cat on a table is surprisingly hard.
“It’s a very difficult problem. So we’re breaking it down to things we can solve right now, then building on that,” Rafiee explained. “With deep learning techniques we can identify different objects, and we build models on top of those to specify different interactions, or specific objects being in specific locations. Like a car in the wrong spot, or a dog getting on a couch. We can recognize that with high accuracy right now — we have a list of supported objects and models that we’re expanding.”
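As a rough sketch of that layering, not VisualOne’s actual code: a rule sitting on top of a stock object detector might fire “dog on couch” when the two detected boxes substantially overlap. The thresholds and labels here are invented.

```python
# Hypothetical rule layered on top of a stock object detector:
# "dog on couch" fires when a detected dog's box substantially
# overlaps a detected couch's box. Thresholds are invented.

def overlap_ratio(a, b):
    """Fraction of box a's area that falls inside box b.
    Boxes are (x1, y1, x2, y2)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    if w <= 0 or h <= 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    return (w * h) / area_a

def dog_on_couch(detections, min_overlap=0.5):
    """detections: list of dicts like {"label": "dog", "box": (...)}"""
    dogs = [d["box"] for d in detections if d["label"] == "dog"]
    couches = [d["box"] for d in detections if d["label"] == "couch"]
    return any(overlap_ratio(d, c) >= min_overlap
               for d in dogs for c in couches)
```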

In case you’re not convinced that the capabilities are that much advanced from the usual “activity in the living room” or “Kendra is at the front door” notifications, here are a few situations that VisualOne is set up to detect:
Kid playing with the stove
Toddler climbing furniture
Kid holding a knife
Baby left alone for too long
Raccoon getting into garbage
Elderly person taking her medications
Elderly person in bed for too long
Car parked in the wrong spot
Garage door left open
Dog chewing on a shoe
Cat scratching the furniture
The process for creating these triggers is pretty straightforward.
If one of those doesn’t make you think “actually… that would be really good to know” then perhaps a basic security camera is enough for your purposes after all. Not everyone has a knife-curious toddler. But those of you who do are probably scrolling furiously past this paragraph looking for where to buy one of these things.
Unfortunately VisualOne isn’t something you can just install on any old existing system — with the prominent exception of Nest, which it can plug into. Camera workflows are generally too locked down for security and privacy purposes to allow for third-party apps and services to be slipped in. But the company isn’t trying to bankrupt everyone with an ultra-luxury offering. It’s using off-the-shelf cameras from Wyze and loading them with its own software stack.
Rafiee said he pictures VisualOne as a mid-tier option for people who want to have more than a basic camera setup but aren’t convinced by the more expensive plays. That way the company avoids going head-on with commodity hardware’s race to the bottom or the brand warfare taking place between Google and Amazon’s Nest and Ring. Cameras cost $30-$40, and the service is $7 per month currently.
Ultimately the low-end companies may want to license from VisualOne, while the high-end companies will be developing their own full stack at great cost, making it difficult for them to go downmarket. “Hardware is hard, and AI is specialized — unless you’re a giant company it’s hard to do both. I think we can fill the gap in the market for mid-market companies without those resources,” he said.
Of course privacy is paramount as well, and Rafiee said that because of the way their system works, although the AI lives in the cloud and therefore requires the cameras to be online (like most others), no important user data needs to be or will be stored on VisualOne servers. “We do inference in the cloud so we can be hardware agnostic, but we don’t need to store any data. So we don’t add any risk,” he said.
VisualOne is launching today (after a stint in YC’s latest cohort) with an initial set of objects and interactions, and will continue developing more as it observes which use cases prove popular and effective.

Matternet’s new drone landing station looks like a sci-fi movie prop

Drones making deliveries is of course the hot new hyperlocal tech play, but where are these futuristic aircraft supposed to land? On the lawn? Matternet has built a landing station for its cargo drones that looks less like a piece of infrastructure and more like a death ray from a ’60s sci-fi movie.
Far from the free-form delivery network envisioned by Prime Air or the like, Matternet’s drone deployments have been fixed point-to-point affairs focused on quickly connecting a handful of locations that frequently trade time-sensitive deliveries: hospitals.

UCSD hospital gets a drone delivery program powered by Matternet and UPS

The company has performed pilot tests in Switzerland and North Carolina, and just started a new one in San Diego, in which medical facilities are able to send blood samples, medications, and (soon, one hopes) vaccines and other supplies back and forth without worrying about traffic or other complications on the ground.
But there’s the problem of where exactly the drones land, and what happens afterwards. Does someone have to swap out the battery? Who says when it’s safe to approach the drone, and how to detach its payload? Whatever the process is, it could probably be easier and more automated, and that’s what the station aims to accomplish.
With its techno-organic curves and flower-like hatch on top, the 10-foot-tall station seems to channel the likes of Star Trek: The Original Series and Lost in Space, and no doubt it’s intended to be eye-catching as well as functional.

When the drone arrives, the top opens up and the drone lands right in the center, where it is enclosed and grasped by the station’s machinery, unburdened of its payload, and given a fresh battery. The payload is contained in the tower until it is called for by an authorized person, who scans a dongle to receive their package.
If there’s just the one drone, it can live in the top part, the bulb or whatever you’d call it, until it’s needed again. If there are multiple deliveries or drones, however, the one inside will leave and enter a holding pattern about 60 feet above, in an “imaginary donut.”
The station will get its first installation in the second quarter of this year, at one of Matternet’s existing customer hospitals. Presumably it will roll out more widely once this shakeout period ends.
You can see the full operation in the dramatization below:

R&D Roundup: Smart chips, dream logic and crowdsourcing space

I see far more research articles than I could possibly write up. This column collects the most interesting of those papers and advances, along with notes on why they may prove important in the world of tech and startups.
This week: crowdsourcing in space, vision on a chip, robots underground and under the skin and other developments.
The eye is the brain
Computer vision is a challenging problem, but the perennial insult added to this difficulty is the fact that humans process visual information as well as we do. Part of that is because in computers, the “eye” — a photosensitive sensor — merely collects information and relays it to a “brain” or processing unit. In the human visual system, the eye itself does rudimentary processing before images are even sent to the brain, and when they do arrive, the task of breaking them down is split apart and parallelized in an amazingly effective manner.
The chip, divided into several sub-areas, which specialize in detecting different shapes
Researchers at the Vienna University of Technology (TU Wien) have integrated neural network logic directly into the sensor, grouping pixels and subpixels into tiny pattern recognition engines by individually tuning their sensitivity and carefully analyzing their output. In one demonstration described in Nature, the sensor was set up so that images of simplified letters falling on it would be recognized in nanoseconds because of their distinctive voltage response. That’s way, way faster than sending the image off to a distant chip for analysis.
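Conceptually, the demonstration amounts to baking a tiny linear classifier into the sensor itself: each photodiode’s tunable responsivity acts as a weight, and the summed photocurrent is the classification, available the instant light hits the chip. A rough numerical sketch of the idea, with an invented letter set and weights:

```python
import numpy as np

# Rough sketch: per-pixel responsivities act as the weights of a
# linear classifier, so the summed photocurrent *is* the output.
# A 3x3 'image' of a letter; letters and weights are invented.
image = np.array([[1, 0, 1],
                  [1, 0, 1],
                  [1, 1, 1]], dtype=float).ravel()  # a crude "U"

# One weight vector per letter class; on the chip, tuning the
# photodiode sensitivities plays the role of training these.
weights = {
    "U": np.array([1, -1, 1, 1, -1, 1, 1, 1, 1], dtype=float),
    "I": np.array([-1, 1, -1, -1, 1, -1, -1, 1, -1], dtype=float),
}

currents = {letter: w @ image for letter, w in weights.items()}
print(max(currents, key=currents.get))  # -> "U"
```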

Insurance AI startup Synthesized raises $2.8M from IQ Capital and Mundi Ventures

The insurance industry depends on data to support a number of functions the average person on the street is usually completely unaware of, such as “informed risk selection”, underwriting and claims management. Like many industries, it would like to automate much of this, but it’s just not that simple.
Synthesized is a UK startup that aims to reduce the friction of preparing all the data that’s needed, enabling insurers to share data safely while complying with regulations. The more that happens, the more innovation can follow, such as insuring for a low-carbon economy, something that will become increasingly important.
It’s now raised $2.8m in a new round of funding co-led by Cambridge-based IQ Capital and Mundi Ventures, with participation from Seedcamp, Pretiosum Ventures, and a number of finance and technology executives in the UK. Financing from the round will be used to double the number of its employees in London, and build out its sales and product teams.
Cofounder Nicolai Baldin said: “Synthesized substantially reduces the time to develop and comprehensively test data-driven projects and as a result empowers engineers to build better products and services for end-users. With the new funding from IQ Capital and Mundi Ventures, Synthesized is well-positioned to facilitate its business operations to turbocharge development processes across many sectors, such as finance, insurance and healthcare.”
Ed Stacey, managing partner at IQ Capital, said: “Responsible organizations are waking up to the need to ensure that their deployed machine learning systems are fair and unbiased, as well as being robust and accurate. Synthesized’s ability to create multiple, balanced data sets in a flexible way gives organizations and their customers the confidence they need in deployed production systems, while also greatly speeding up the development process.” Javier Santiso, CEO and founder of Alma Mundi Ventures, said that “the prospects for Synthesized are bright and we see the impact of synthetic data permeating almost every industry.”
Synthesized competes in various ways with products from Gretel AI, Snorkel, Tonic AI, Hazy and Mostly AI.

Adtech giant Criteo is being investigated by France’s data watchdog

Adtech giant Criteo is under investigation by the French data protection watchdog, the CNIL, following a complaint filed by privacy rights campaign group Privacy International.
“I can confirm that the CNIL has opened up an investigation into Criteo . We are in the trial phase, so we can’t communicate at this stage,” a CNIL spokesperson told us.
Privacy International has been campaigning for more than a year for European data protection agencies to investigate several adtech players and data brokers involved in programmatic advertising.
Yesterday it said the French regulator has finally opened a probe of Criteo.
“CNIL’s confirmation that they are investigating Criteo is important and we warmly welcome it,” it said in a statement. “The AdTech ecosystem is based on vast privacy infringements, exploiting people’s data on a daily basis. Whether it’s through deceptive consent banners or by infesting mental health websites, these companies enable a surveillance environment where all your moves online are tracked to profile and target you, with little space to contest.”
We’ve reached out to Criteo for comment.
Back in November 2018, a few months after Europe’s updated data protection framework (GDPR) came into force, Privacy International filed complaints against a number of companies operating in the space — including Criteo.
A subsequent investigation by the rights group last year also found adtech trackers on mental health websites sharing sensitive user data for ad targeting purposes.
Last May Ireland’s Data Protection Commission also opened a formal investigation into Quantcast, following Privacy International’s complaint and a swathe of separate GDPR complaints targeting the real-time bidding (RTB) process involved in programmatic advertising.
The crux of the RTB complaints is that the process is inherently insecure since it entails the leaky broadcasting of people’s personal data with no way for it to be controlled once it’s out there vs GDPR’s requirement for personal data to be processed securely.
In June the UK’s Information Commissioner’s Office also fired a warning shot at the behavioral ad industry — saying it had “systemic concerns” about the compliance of RTB. The regulator has so far failed to take any enforcement action, despite issuing another blog post last December in which it discussed the “industry problem” with lawfulness, preferring instead to encourage adtech to reform itself. (Relevant: Google announcing it will phase out support for third party cookies.)
In its 2018 adtech complaint, Privacy International called for France’s CNIL, the UK’s ICO and Ireland’s DPC to investigate Criteo, Quantcast and a third company called Tapad — arguing their processing of Internet users’ data (including special category personal data) has no lawful basis, neither fulfilling GDPR’s requirements for consent nor legitimate interest.
Privacy International’s complaint argued that additional GDPR principles — including transparency, fairness, purpose limitation, data minimisation, accuracy, and integrity and confidentiality — were also not being fulfilled; and called for further investigation to ascertain compliance with other legal rights and safeguards GDPR gives Europeans over their personal data, including the right to information; access; rights related to automated decision making and profiling; data protection by design and default; and data protection impact assessments.
In specific complaints against Criteo, Privacy International raised concerns about its Shopper Graph tool, which is used to predict real-time product interest, and which Criteo has touted as having data on nearly three-quarters of the world’s shoppers, fed by cross-device online tracking of people’s digital activity which is not limited to cookies and gets supplemented by offline data; and its Dynamic Retargeting tool, which enables the retargeting of tracked shoppers with behaviorally targeted ads via Criteo sharing data with scores of ‘partners’, including publishers and ad exchanges involved in the RTB process to auction online ad slots.
At the time of the original complaint Privacy International said Criteo told it it was relying on consent to track individuals obtained via its advertising (and publisher) partners — who, per GDPR, would need to obtain informed, specific and freely given consent up-front before dropping any tracking cookies (or other tracer technologies) — as well as claiming a legal base known as legitimate interest, saying it believed this was a valid ground so that it could comply with its contractual obligations toward its clients and partners.
However, legitimate interest requires a balancing test to be carried out to consider impacts on the individual’s interests, as part of a wider assessment process to determine whether it can be applied.
It’s Privacy International’s contention that legitimate interest is not a valid legal basis in this case.
Now the CNIL will look in detail at Criteo’s data processing to determine whether or not there are GDPR violations. If it finds breaches of the law, the regulation allows for monetary penalties to be issued that can scale as high as 4% of a company’s global turnover. EU data protection agencies can also order changes to how data is processed.
Commenting on the CNIL’s investigation of Criteo’s business, Dr Lukasz Olejnik, an independent privacy researcher and consultant whose research on the privacy implications of RTB predates all the aforementioned complaints, told us: “I am not surprised with the investigation, as in Real-Time Bidding transparency and consent were always very problematic and at best non-obvious. I don’t know how retrospective consent could be reconciled.”
“It is rather beyond doubt that a thorough privacy impact assessment (data protection impact assessment) had to be conducted for many aspects of such systems or its uses, so this particular angle of the complaint should not be controversial,” Olejnik added.
“My long-held view on Real-Time Bidding is that it was not a technology created with a particular focus on security and privacy. As a transformative technology, in the long term it also contributed to broader issues like the dissemination of harmful content, such as political disinformation.”
The CNIL probe certainly adds to Criteo’s business woes, with the company reporting declining revenue last year and predicting more to come in 2020. More aggressive moves by browser makers to bake in tracker blocking is clearly having an impact on its core business.
In a recent interview with Digiday, CEO Megan Clarken talked about wanting to broaden the range of services the company offers advertisers and reduce its reliance on traditional retargeting.
Criteo has also been investing heavily in artificial intelligence in recent years — ploughing in $23M in 2018 to open an AI lab in Paris.

Google’s Vint Cerf voices support for common criteria for political ad targeting

Google VP Vint Cerf has voiced support for a single set of standards for Internet platforms to apply around political advertising.
Speaking to the UK parliament’s Democracy and Digital Technologies Committee today, the longtime Googler — who has been chief Internet evangelist at the tech giant since 2005 — was asked about the targeting criteria Google allows for political ads and whether he thinks there should be a common definition all platforms apply.
“Your idea that there might be common criteria for political advertising I think has a certain merit to it,” he told the committee. “Because then we would see consistency of treatment — and that’s important because there are so many different platforms available for purposes of — not just advertising but political speech.”
“In the US we’ve already experienced the serious side effects of some of the abuse of these platforms and the ability to target specific audiences for purposes of inciting disagreement,” he added. “We should make it difficult for our platforms to be abused in that way.”
The committee had raised the point that Google and Facebook currently apply different criteria around political ads — also asking whether advertisers could use Google’s tools to target political issue ads at a particular geographical region, such as South Bend in Northern Indiana.
“I don’t think that criterion is allowed in our advertising system,” Cerf responded on that specific example. “I don’t think that we’re that refined, particularly in the political space… We have a small number of criteria that are permitted for targeting political ads.”
Last November Google announced limits on political microtargeting — saying it would limit the ability for advertisers to target political demographics, and also committing itself to take action against “demonstrably false claims.”
The move remains in stark contrast to Facebook which dug in at the start of this year — refusing to limit targeting criteria for political ads. Instead it trumpeted a few settings tweaks that it claimed would afford users more controls over ads. As we (and many others) warned at the time, such tweaks offer no meaningful way for Facebook users to prevent the company’s pervasive background profiling of their Internet activity from being repurposed as an attack surface to erode democracy.
Last year some of Facebook’s own staff also criticized its decision not to restrict politicians from lying in ads and called for it to limit the use of Custom Audiences — arguing microtargeting works against the public scrutiny that Facebook claims keeps politicians honest. However the company has held the line on refusing to apply limits to political ads — with the occasional exception.
The committee also asked Cerf if he has any concerns about online misinformation and disinformation emerging on platforms related to the novel coronavirus outbreak.
Cerf responded by saying he’s “very concerned about the abuse of the system and looking for ways to counter that”.
“I use our tools every single day. I don’t think I would survive without having the ability to search through the world wide web — get information — get answers. I exercise critical thinking as much as I can about the sources and the content. I am a very optimistic person with regard to the value of what’s been done so far. I am very concerned about the abuse of the system and looking for ways to counter that — and those ways may be mechanical but they also involve the ‘wet ware’ up here,” he said, gesturing at his head.
“So my position is this is all positive stuff but how do we preserve the value of what we defend against the abuse? … We’re human beings and we should try very hard to make our tools serve us and our society in a positive way.”

Amazon is now selling its cashierless store technology to other retailers

Amazon on Monday announced it will now offer its cashierless store technology, called “Just Walk Out,” to other retailers. The technology uses a combination of cameras, sensors, computer vision techniques and deep learning to allow customers to shop, then leave the store without waiting in line to pay. This is the same technology that today powers the Amazon Go cashierless convenience stores and Amazon’s newly launched Amazon Go Grocery store in Seattle.
Reuters first reported the news just ahead of Amazon’s official announcement, adding also that Amazon says it has signed “several” deals with initial customers interested in using Just Walk Out in their own stores. Amazon did not say who those customers were, however.
Amazon has also now launched a website detailing how Just Walk Out works, and answering several questions about this new business line.
The website says that other retailers have expressed interest in the tech for years, which is why it decided to start offering Just Walk Out for sale. The system Amazon is offering includes “all the necessary technology to enable checkout-free shopping,” the site notes. That would mean Amazon is providing the camera hardware and sensor technology, in addition to the software systems. The site doesn’t mention pricing but says the system also comes with 24/7 support via phone and email.
The setup and installation of the system can take as little as a few weeks once Amazon has access to the retailer’s store, Amazon says. For new builds, Amazon can work with the retailer to integrate Just Walk Out during the construction phase, or it can do the same as a store undergoes a remodel. It can also install the technology in an existing store with minimal disruption to customers.
To be clear, the technology being sold is to allow retailers to offer their own customers the ability to shop and pay for items without having to wait in line to pay at a register. It’s not intended to allow retailers to run a franchise of the Amazon Go convenience stores.
For customers, a cashierless store can save time as it doesn’t require waiting in line to pay. This makes sense for stores like convenience stores or grocers, where people either have limited time to make purchases or where lines can be long as carts are filled with many items. It may not work for a larger department store, where items aren’t all on shelves and there’s much more square footage to cover.
With Amazon’s Just Walk Out, customers can enter the store with their credit card, Amazon’s new website explains. Customers don’t need to have an app installed, nor do they need an Amazon account. As the customer shops, the cameras track the customer’s movements and shelf sensors register if an item is removed or returned. Items picked up by the customers will then be placed in a virtual cart. When the customer leaves, their card is charged for what they bought. Customers can also visit an in-store kiosk if they want a printed receipt, Amazon says. However, one will be emailed automatically, as well.
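Based on that description, the flow is easy to model. Here is a toy sketch, assuming hypothetical event types, of the virtual cart mechanics; it illustrates the described behavior, not Amazon’s implementation.

```python
# A toy model of the described flow -- not Amazon's implementation.
# A shopper is keyed to the card swiped at the gate; shelf-sensor
# and camera events add or remove items; exiting triggers the charge.
from collections import Counter

class VirtualCart:
    def __init__(self, card_token):
        self.card_token = card_token
        self.items = Counter()

    def pick(self, sku):        # shelf sensor: item removed from shelf
        self.items[sku] += 1

    def put_back(self, sku):    # shelf sensor: item returned to shelf
        if self.items[sku] > 0:
            self.items[sku] -= 1

    def exit_store(self, prices):
        """Charge the card on exit; a receipt is emailed automatically."""
        total = sum(prices[sku] * n for sku, n in self.items.items())
        return {"card": self.card_token, "charged": round(total, 2),
                "receipt": dict(self.items)}

cart = VirtualCart(card_token="tok_123")  # card swiped at the gate
cart.pick("soda"); cart.pick("chips"); cart.put_back("chips")
print(cart.exit_store({"soda": 1.99, "chips": 2.49}))  # charges $1.99
```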
It’s unclear if such a system is ultimately a benefit to retailers’ bottom line, given the expense of installation and maintenance — even if it does allow the retailer to reduce headcount. And Amazon, of course, is not marketing the technology as a means of cutting down on store staff. Instead, Amazon says store staff can be repurposed to focus on other activities — like greeting customers and answering questions, stocking the shelves, and more. These are activities the retailer should already be staffed appropriately for, though that’s often not the case, especially as stores have become hubs for online ordering.
Customer reception to such technology is still unknown, too. While Amazon’s stores are still something of a novelty, customers may balk if and when this sort of surveillance-like technology becomes the norm.

Nvidia acquires data storage and management platform SwiftStack

Nvidia today announced that it has acquired SwiftStack, a software-centric data storage and management platform that supports public cloud, on-premises and edge deployments.
SwiftStack’s recent launches focused on improving its support for AI, high-performance computing and accelerated computing workloads, which is surely what Nvidia is most interested in here.
“Building AI supercomputers is exciting to the entire SwiftStack team,” says the company’s co-founder and CPO Joe Arnold in today’s announcement. “We couldn’t be more thrilled to work with the talented folks at NVIDIA and look forward to contributing to its world-leading accelerated computing solutions.”
The two companies did not disclose the price of the acquisition, but SwiftStack had previously raised about $23.6 million in Series A and B rounds led by Mayfield Fund and OpenView Venture Partners. Other investors include Storm Ventures and UMC Capital.

SwiftStack, which was founded in 2011, placed an early bet on OpenStack, the massive open-source project that aimed to give enterprises an AWS-like management experience in their own data centers. The company was one of the largest contributors to OpenStack’s Swift object storage platform and offered a number of services around this, though it seems like in recent years, it has downplayed the OpenStack relationship as that platform’s popularity has fizzled in many verticals.
SwiftStack lists the likes of PayPal, Rogers, data center provider DC Blox, Snapfish and Verizon (TechCrunch’s parent company) on its customer page. Nvidia, too, is a customer.
SwiftStack notes that its team will continue to maintain the existing set of open source tools like Swift, ProxyFS, 1space and Controller.
“SwiftStack’s technology is already a key part of NVIDIA’s GPU-powered AI infrastructure, and this acquisition will strengthen what we do for you,” says Arnold.

SwiftStack Raises $16M For Its Enterprise Object Storage Service

Ada Health built an AI-driven startup by moving slowly and not breaking things

When Ada Health was founded nine years ago, hardly anyone was talking about combining artificial intelligence and physician care — outside of a handful of futurists.
But the chatbot boom gave way to a powerful combination of AI-augmented health care that others, like Babylon Health in 2013 and KRY in 2015, also capitalized on. The journey Ada was about to take was not an obvious one, so I spoke to Dr. Claire Novorol, Ada’s co-founder and chief medical officer, at the Slush conference last year to unpack the company’s process and strategy.
Co-founded with Daniel Nathrath and Dr. Martin Hirsch, the startup initially set out to be an assistant to doctors rather than something that would have a consumer interface. In the beginning, Novorol said, they did not talk about what they were building as AI so much as pure machine learning.
Years later, Ada is a free app, and just like the average chatbot, it asks a series of questions and employs an algorithm to make an initial health assessment. It then proposes next steps, such as making an appointment with a doctor or going to an emergency room. But Ada’s business model is not to supplant doctors but to create partnerships with healthcare providers and encourage patients to use it as an early screening system.
It was Novorol who convinced the company to pivot from creating tools for doctors to a patient-facing app that could save physicians time by providing patients with an initial diagnosis. Since the app launched in 2016, Ada has gone on to raise $69.3 million. In contrast, Babylon Health has raised $635.3 million, while KRY has raised $243.6 million. Ada claims to be the top medical app in 130 countries and has completed more than 15 million assessments to date.

YC-backed Turing uses AI to help speed up the formulation of new consumer packaged goods

One of the more interesting and useful applications of artificial intelligence technology has been in the world of biotechnology and medicine, where more than 220 startups (not to mention universities and bigger pharma companies) are now using AI to accelerate drug discovery, playing out the many permutations resulting from drug and chemical combinations, DNA and other factors.
Now, a startup called Turing — part of the current Y Combinator cohort, due to present at the next Demo Day on March 22 — is taking a similar principle but applying it to the world of building (and ‘discovering’) new consumer packaged goods products.
Turing uses machine learning to simulate different combinations of ingredients plus desired outcomes and figure out optimal formulations for different goods (hence the name, a reference to Alan Turing’s mathematical model of computation, the Turing machine). It is initially addressing the creation of products in home care (e.g. detergents), beauty, and food and beverage.
Turing’s founders claim that it is able to save companies millions of dollars by reducing the average time it takes to formulate and test new products, from an average of 12 to 24 months down to a matter of weeks.
Specifically, the aim is to reduce all the time that it takes to test combinations, giving R&D teams more time to be creative.
“Right now, they are spending more time managing experiments than they are innovating,” Manmit Shrimali, Turing’s co-founder and CEO, said.
Turing is in theory coming out of stealth today, but in fact it has already amassed an impressive customer list. It is already generating revenues by working with eight brands owned by one of the world’s biggest CPG companies, and it is also being trialled by another major CPG behemoth (Turing is not disclosing their names publicly, but suffice it to say, they and their brands are household names).
Turing is co-founded by Shrimali and Ajith Govind, two specialists in data science who had worked together on a previous startup called Dextro Analytics. Dextro had set out to help businesses use AI and other kinds of business analytics to identify trends and support decision making around marketing, business strategy and other operational areas.
While there, they identified a very specific use case for the same principles that was perhaps even more acute: the research and development divisions of CPG companies, which have (ironically, given their focus on the future) often been behind the curve when it comes to the “digital transformation” that has swept up a lot of other corporate departments.
“We were consulting for product companies and realised that they were struggling,” Shrimali said. Add to that the fact that CPG is precisely the kind of legacy industry that is not natively technological but can most definitely benefit from implementing better technology, and that spells out an interesting opportunity for how (and where) to introduce artificial intelligence into the mix.
R&D labs play a specific and critical role in the world of CPG.
This is where products are discovered; tested; tweaked in response to input from customers, marketing, budgetary and manufacturing departments and others; tested again; tweaked again; and so on, before eventually being shipped into production. One of the big clients Turing works with spends close to $400 million on testing alone.
But R&D is under a lot of pressure these days. These departments are seeing their budgets cut even as the demands on them grow. They are still expected to meet timelines in producing new products (or, more often, extensions of existing products) to keep consumers interested. A new host of environmental and health concerns around goods with huge lists of unintelligible ingredients means they have to figure out how to simplify and improve the composition of mass-market products. And smaller direct-to-consumer brands are undercutting their larger competitors by getting to market faster with competitive offerings that meet new consumer tastes and preferences.
“In the CPG world, everyone was focused on marketing, and R&D was a blind spot,” Shrimali said, referring to the extensive investments that CPG companies have made into figuring out how to use digital to track and connect with users, and also how better to distribute their products. “To address how to use technology better in R&D, people need strong domain knowledge, and we are the first in the market to do that.”
Turing’s focus is to speed up the formulation and testing aspects that go into product creation to cut down on some of the extensive overhead that goes into putting new products into the market.
Part of the reason it can take upwards of years to create a new product is all of the permutations that go into building something and making sure it works consistently as a consumer would expect it to (while still being consistent in production and coming in within budget).
“If just one ingredient is changed in a formulation, it can change everything,” Shrimali noted. And so in the case of something like a laundry detergent, this means running hundreds of tests on hundreds of loads of laundry to make sure that it works as it should.
The Turing platform brings in historical data from across a number of past permutations and tests to essentially virtualise all of this: it suggests optimal mixes and outcomes from them without the need to run the costly physical tests, and in turn this teaches the Turing platform to address future tests and formulations. Shrimali said that the Turing platform has already saved one of the brands some $7 million in testing costs.
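Turing hasn’t published its methods, but the pattern described here (learn from historical tests, then score candidate mixes virtually) resembles classic surrogate-model optimization. Below is a minimal, hypothetical sketch in Python; the ingredients, data and scoring function are all made up.

# Hypothetical surrogate-model sketch: fit a model to past formulation
# tests, then rank new candidate mixes without physical experiments.
from itertools import product
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_hist = rng.random((500, 3))                  # past mixes: 3 ingredient ratios
y_hist = 1 - np.abs(X_hist - 0.4).sum(axis=1)  # stand-in performance scores

surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X_hist, y_hist)

# Score a coarse grid of candidate formulations and surface the most
# promising one for a real physical test.
grid = np.array(list(product(np.linspace(0, 1, 11), repeat=3)))
best = grid[surrogate.predict(grid).argmax()]
print("Candidate mix to test physically:", best)

In practice the search space and constraints (cost, stability, regulatory limits) are far richer, but the loop of predict, rank, then physically verify only the top candidates is what compresses months of testing into weeks.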
Turing’s place in working with R&D gives the company some interesting insights into shifts the wider industry is undergoing. Currently, Shrimali said, one of the biggest priorities for CPG giants is addressing the demand for more traceable, natural and organic formulations.
While no single DTC brand will ever fully eat into the market share of any CPG brand, collectively their presence and resonance with consumers is clearly causing a shift. Sometimes that will lead into acquisitions of the smaller brands, but more generally it reflects a change in consumer demands that the CPG companies are trying to meet. 
Longer term, the plan is for Turing to apply its platform to other aspects that are touched by R&D beyond the formulations of products. The thinking is that changing consumer preferences will also lead into a demand for better “formulations” for the wider product, including more sustainable production and packaging. And that, in turn, represents two areas into which Turing can expand, introducing potentially other kinds of AI technology (such as computer vision) into the mix to help optimise how companies build their next generation of consumer goods.

Hailo raises $60M Series B for its AI chips

Israeli AI chipmaker Hailo today announced that it has raised a $60 million Series B funding round led by its existing investors, who were joined by new strategic investor ABB Technology Ventures, the venture arm of Swiss multinational ABB, as well as NEC Corporation and London’s Latitude Ventures. The new funding will help Hailo roll out its Hailo-8 Deep Learning chip and get into new markets and industries.
“This immense vote of confidence from our new strategic and financial investors, along with existing ones, is a testimony to our breakthrough innovation and market potential,” said Orr Danon, CEO and co-founder of Hailo. “The new funding will help us expedite the deployment of new levels of edge computing capabilities in smart devices and intelligent industries around the world, including areas such as mobility, smart cities, industrial automation, smart retail and beyond.”
I last met with the Hailo team at CES in January. At the time, the company was showing off a number of impressive demos, mostly around real-time image recognition. What makes the Hailo chip stand out is its innovative architecture, which can automatically adapt resources to best run its users’ custom neural networks. With this, the chip doesn’t just run faster but is also far more energy efficient. The company promises 26 tera-operations per second (TOPS) from its chip, which it says “outperforms all other edge processors with its small size, high performance, and low power consumption.”

With this round, Hailo’s total funding is now $88 million. In part, the investor enthusiasm for Hailo is surely driven by the success of other Israeli chip startups. Mobileye, after all, went to Intel for $15.3 billion, and Intel also recently acquired Habana Labs. And the time, of course, is ripe for deep learning chips at the edge, now that AI/ML technology is quickly becoming table stakes.
“Hailo is poised to become a defining player in the rapidly emerging market for AI processors,” said Julian Rowe, Partner at Latitude Ventures. “Their Deep Learning edge chip can be disruptive to so many sectors today, while the new, innovative use cases Hailo’s chips can unlock are just starting to reveal themselves. We’re thrilled to join the team for what lies ahead.”

Hailo launches its newest deep learning chip

Google Cloud announces four new regions as it expands its global footprint

Google Cloud today announced its plans to open four new data center regions. These regions will be in Delhi (India), Doha (Qatar), Melbourne (Australia) and Toronto (Canada) and bring Google Cloud’s total footprint to 26 regions. The company previously announced that it would open regions in Jakarta, Las Vegas, Salt Lake City, Seoul and Warsaw over the course of the next year. The announcement also comes only a few days after Google opened its Salt Lake City data center.
GCP already had a data center presence in India, Australia and Canada before this announcement, but with these newly announced regions, it now offers two geographically separate regions in each of those countries, for in-country disaster recovery, for example.

Google notes that the Doha region marks the company’s first strategic collaboration agreement to launch a region in the Middle East, struck with the Qatar Free Zones Authority. One of the launch customers there is Bespin Global, a major managed services provider in Asia.
“We work with some of the largest Korean enterprises, helping to drive their digital transformation initiatives. One of the key requirements that we have is that we need to deliver the same quality of service to all of our customers around the globe,” said John Lee, CEO, Bespin Global. “Google Cloud’s continuous investments in expanding their own infrastructure to areas like the Middle East make it possible for us to meet our customers where they are.”

Google Cloud makes strides but still has a long way to go

Five, the self-driving startup, raises $41M and pivots into B2B, away from building its own fleet

We are still years away from fully autonomous cars that can drive us from A to B, and getting to that point is likely to require hundreds of billions of dollars of investment before it becomes a reality.
That hard truth is now leading to some shifts in the self-driving startup landscape. England’s Five (formerly known as FiveAI), one of the more ambitious companies in the space, is moving away from its original plan of designing its own fully self-driving cars and running fleets of them in its own transportation service. Instead, it plans to license technology it has created — starting with software to help test and measure the accuracy of a vehicle’s driving systems — to others building autonomous cars, as well as to the wider service ecosystem that will exist around them. As part of that pivot, today it’s also announcing a fresh $41 million in funding.
“A year and a bit ago we thought we would probably build the entire thing and take it to market as a whole system,” said co-founder and CEO Stan Boland in an interview. “But we gradually realised just how deep and complex that would be. It was probably through 2019 that we realised that the right thing to do is to focus in on the key pieces.”
The funding, a Series B, includes backing from Trustbridge Partners, insurance giant Direct Line Group and Sistema VC, as well as previous investors Lakestar, Amadeus Capital Partners, Kindred Capital and Notion Capital. The company has now raised $77 million and while it’s not disclosing its valuation, Boland said that it was definitely up on its last round. (Its Series A, in 2017, was for $35 million.)
Five’s change in course is a significant development: the high-profile startup, founded by a team that had previously built and sold several chip companies to the likes of Broadcom, Nvidia and Huawei, had been the leading partner for a big government-backed pilot project, StreetWise, to test and work on autonomous driving systems across boroughs in London. The most recent phase of that project, running driver-assisted rides along a 19-km route across south London, got off the ground only last October after initially getting announced in 2018.
Five might continue to work on research projects like these, Boland said, but the company’s primary business aim will no longer be to build cars itself, but to work on tech that will be sold either to other carmakers or to those building services catering to the autonomous industry.
For example, Direct Line, one of Five’s new investors and also a participant in the StreetWise project, could use Five’s testing and measurement software to determine risk and pricing for insurance packages for different vehicles.
“Autonomous and assisted driving technology is going to play a huge role in the future of cars,” said Gus Park, MD of Motor Insurance at Direct Line Group, in a statement. “We have worked closely with Five on the StreetWise project, and we share a common interest in solving the formidable challenges that will need to be addressed in bringing safe self-driving to market. Insurers will need to build the capability to measure and underwrite new types of risk. We will be collaborating with Five’s world-class team of scientists, mathematicians and engineers to gain the insight needed to build safe, insurable solutions and bring the motoring revolution ever closer.” Park is also joining Five’s board with this round.
There were already a number of big players in the self-driving space when FiveAI launched — they included the likes of Waymo, Cruise, Uber, Argo AI and many more — and you could have argued that the writing was already on the wall then for long-term consolidation in the industry.
Five’s argument for why a UK — and indeed, European — startup was in a good place to build and operate self-driving cars, and the tech underpinning it, was because of the complexity behind building localised systems: a big US or Asian company might be able to map the streets in Europe, but it wouldn’t have as good of a feel for how people behaved on those roads.
Yet while it may have been easy to see the potential, the process of getting to that point proved to be too challenging.
“What’s happened in the last couple of years is that there has been an appreciation across the industry of just how wide and deep the challenges are for bringing self driving to market,” Boland said. “Many pieces of the jigsaw have to be assembled…. The B2C model needs billions [of investment], but others are finding their niche as great providers of technology needed to deliver the systems properly.”
As FiveAI (named after the “Level 5” designation that self-driving systems attain when they are truly autonomous), the company built (hacked) vehicles with dozens of sensors and, through its tests, amassed a significant trove of vehicle technology.
“We could offer tech in a dozen different areas that are hard for autonomous driving companies,” Boland said. Its testing and measuring tools point to one of the toughest challenges among these: how to assure that the deep learning software a company is using is correctly identifying objects, people, weather, and other physical factors when it may have never seen them before.
“We have learned a lot about the types of errors that propagate from perception into planning… and now we can use that for providing absolute confidence” to those testing the systems, he said.
Self-driving cars are one of the biggest AI challenges of our time: not only do they require building, essentially from the ground up, computer systems that behave as well as (or ideally better than) multitasking humans behind the wheel; the consequence of getting it wrong is not just a strange string of words or some other kind of non sequitur, but injury or death. No surprise that there still appears to be a very long way to go before we see anything like Level 5 systems in action, but in the meantime, investors are willing to continue placing their bets. Partly because of how far it got with its car project on relatively little funding, Five remains an interesting company to investors, and Boland hopes that this will help it with its next round down the road.
“We invest in category-leading companies that are delivering transformational change wherever they’re located,” said David Lin of Trustbridge Partners in a statement. “As Europe’s leading self-driving startup, Five is the furthest ahead in developing a clear understanding of the scientific challenges and novel solutions that move the needle for the whole industry. Five has successfully applied Europe’s outstanding science and engineering base to create a world-class team with the energy and ambition to deliver safe self-driving. We are delighted to join them for this next phase of growth.”

Google cancels Cloud Next because of coronavirus

Google today announced that it is canceling the physical part of Cloud Next, its cloud-focused event and its largest annual conference by far with around 30,000 attendees, over concerns around the current spread of COVID-19.
Given all of the recent conference cancellations, this announcement doesn’t come as a huge surprise, especially after Facebook canceled its F8 developer conference only a few days ago.
Cloud Next was scheduled to run from April 6 to 8. Instead of the physical event, Google will now host an online event under the “Google Cloud Next ’20: Digital Connect” moniker. So there will still be keynotes and breakout sessions, as well as the ability to connect with experts.
“Innovation is in Google’s DNA and we are leveraging this strength to bring you an immersive and inspiring event this year without the risk of travel,” the company notes in today’s announcement.
The virtual event will be free and in an email to attendees, Google says that it will automatically refund all tickets to this year’s conference. It will also automatically cancel all hotel reservations made through its conference reservation system.
It now remains to be seen what happens to Google’s other major conference, I/O, which is slated to run from May 12 to 14 in Mountain View. The same holds true for Microsoft’s rival Build conference in Seattle, which is scheduled to start on May 19. These are the two premier annual news events for both companies, but given the current situation, nobody would be surprised if they got canceled, too.

Aurora VP Jinnah Hosein is coming to TC Sessions: Robotics + AI

TechCrunch Sessions: Robotics + AI is tomorrow and we have one more exciting speaker announcement to share.
Jinnah Hosein, the vice president of software engineering at self-driving vehicle startup Aurora, is coming to TC Sessions: Robotics + AI at UC Berkeley on March 3. Hosein will join Ike Robotics CTO and co-founder Jur van den Berg on stage to discuss autonomous vehicles, particularly safety-critical software and the various technical approaches being taken to deliver this game-changing technology.
If Hosein’s name sounds familiar, it should be. After a 10-year stint at Google, where he rose to director of software engineering, Hosein went to SpaceX. While heading up software engineering at SpaceX, he also worked at Elon Musk’s other company, Tesla, where he was interim VP of Autopilot software.
Who else is coming to TC Sessions: Robotics + AI? Nvidia VP of engineering Claire Delaunay; the CEOs of Traptic, FarmWise and Pyka; a packed panel featuring Boston Dynamics construction technologist Brian Ringley, Built Robotics’ Noah Ready-Campbell, Tessa Lau of Dusty Robotics and Toggle’s Daniel Blank; as well as TRI-AD CEO James Kuffner and TRI VP of Robotics Max Bajracharya. And that’s just a few of the speakers, not to mention the demos and exhibits to be found at TC Sessions: Robotics + AI.
Tickets are on sale now for $345; you’ll save $50 when you book now as prices go up at the door.
Student tickets are still available at the super-discounted $50 rate when you book here.

Announcing the agenda for TC Sessions: Mobility 2020

TC Sessions: Mobility is back in San Jose on May 14, and we’re excited to give the first peek of what and who is coming to the main stage. We’re not revealing everything just yet, but already this agenda highlights some of the best and brightest minds in autonomous vehicles, electrification and shared mobility.
We’ve selected the most innovative startups and top leaders from established tech companies working in mobility. This past year saw huge leaps forward, and we’re thrilled to bring the latest and greatest to our stage.
This year, we’re holding a pitch-off competition for early stage mobility companies. More details to come.
Don’t forget that early-bird tickets (including $100 savings) are currently available for a limited time; grab your tickets here before prices increase.
Some speakers have already been announced, and more will be added to the agenda in the coming weeks, so stay tuned. In the meantime, check out this early look at the agenda:
AGENDA
9:35 AM – 10:05 AM
Investing in Mobility: with Reilly Brennan (Trucks VC), Olaf Sakkers (Maniv Mobility) and speakers to be announced.
Reilly Brennan, Olaf Sakkers and two yet-to-be announced venture capitalists will come together to debate the uncertain future of mobility tech and whether VC dollars are enough to push the industry forward.
10:05 AM – 10:25 AM
Coming soon!
10:25 AM – 10:50 AM
The next opportunities in micromobility with Danielle Harris (Elemental Excelerator), Dor Levi (Lyft), and Dmitry Shevelenko (Tortoise)
Worldwide, numerous companies are operating shared micromobility services — so many that the industry is well into a consolidation phase. Despite the over-saturation of the market, there are still opportunities for new players. Dor Levi, head of bikes and scooters at Lyft, Danielle Harris, director of mobility innovation at Elemental Excelerator and Dmitry Shevelenko, founder at Tortoise will discuss.
10:50 AM – 11:10 AM
Waymo Grows Up with Tekedra Mawakana (Waymo)
Waymo Chief Operating Officer Tekedra Mawakana is at the center of Waymo’s future from scaling the autonomous vehicle company’s commercial deployment and directing fleet operations to developing the company’s business path. Tekedra will speak about what lies ahead as Waymo drives forward with its plan to become a grownup business.
11:10 AM – 11:30 AM
Innovation Break
11:30 AM – 11:40 AM
Live Demo. Coming soon!
11:40 AM – 12:00 PM
Setting the Record Straight with Bryan Salesky (Argo AI)
Argo AI has gone from unknown startup to a company providing the autonomous vehicle technology to Ford and VW — not to mention billions in investment from the two global automakers. Co-founder and CEO Bryan Salesky will talk about the company’s journey, what’s next and what it really takes to commercialize autonomous vehicle technology.
1:00 PM – 1:25 PM
Pitch-Off
Select early-stage companies, hand-picked by TechCrunch editors, will take the stage and have five minutes to present their companies.
1:25 PM – 1:45 PM
Building an AV Startup with Nancy Sun (Ike)
Ike co-founder and chief engineer Nancy Sun will share her experiences in the world of automation and robotics, a ride that has taken her from Apple to Otto and Uber before she set off to start a self-driving truck company. Sun will discuss what the future holds for trucking and the challenges and the secrets behind building a successful mobility startup.
1:45 PM – 2:10 PM
Working with Cities, Not Against Them with Euwyn Poon (Spin) and Shin-pei Tsay (Uber)
Many micromobility services got off to a rough start with cities in the early days of the industry. Now, operators are making a point to work more closely with regulators from the very beginning. Hear from Spin co-founder Euwyn Poon and Uber Director of Policy, Cities and Transportation Shin-pei Tsay on what it takes to make a copacetic relationship between operators and cities.
2:10 PM – 2:30 PM
Innovation Break
2:30 PM – 2:50 PM
The electrification of Porsche with Klaus Zellmer (Porsche)
Porsche has undergone a major transformation in the past several years, investing billions into an electric vehicle program and launching the Taycan, its first all-electric vehicle. Now, Porsche is ramping up for more. North America CEO Klaus Zellmer will talk about Porsche’s path, competition and where it’s headed next.
2:50 PM – 3:15 PM
Navigating Self-Driving Car Regulations with Melissa Froelich (Aurora) and Jody Kelman (Lyft)
Autonomous vehicle developers face a patchwork of local, state and federal regulations. Government policy experts Jody Kelman, who leads the self-driving platform team at Lyft, and Melissa Froelich, senior manager of government affairs at Aurora, will discuss how to get your startup on the road safely.
3:15 PM – 3:35 PM
Coming Soon!
3:35 PM – 4:00 PM
The Future of Trucking with Xiaodi Hou (TuSimple) and Boris Sofman (Waymo)
TuSimple co-founder and CTO Xiaodi Hou and Boris Sofman, former Anki Robotics founder and CEO who now leads Waymo’s trucking unit, will discuss the business and the technical challenges of autonomous trucking.
4:00 PM – 4:20 PM
Innovation Break
4:20 PM – 4:30 PM
Live Demo. Coming soon!
4:30 PM – 4:55 PM
Coming soon!
Don’t forget to grab your tickets and join us this May.

R&D roundup: soft 3D printing, backscatter Wi-Fi and other bleeding-edge tech

I see far more research articles than I could possibly write up. This column collects the most interesting of those papers and advances along with notes on why they may prove important in the world of tech and startups.
This week: advances in rocketry, machine learning, wireless transmission and more.
Firing up new rockets
In some ways, rocketry is not so different from its beginnings around WWII, but as other bottlenecks give way it is becoming feasible to experiment with truly innovative types of rocket engines. One such type is the rotating detonation engine, an alternative to the standard means of controlling and directing the combustion that creates thrust. The process is highly chaotic, however, and not yet well enough understood to control properly.

University of Washington researchers set up a test rotating detonation engine and studied the combustion patterns inside using an ultra-high-speed camera. The footage was analyzed to produce the first mathematical model simulating the process. It’s still at a very early stage but understanding the mechanism of a new technology like this is necessary before putting it into practice. When packaged in software, this type of simulation can also be licensed to aerospace firms. You can read the full paper here.

Could lessons from the challenger bank revolution kick-start innovation on the climate crisis?

Now that the world is swimming in data, we may be able to address the climate and environmental risks to the planet. But while there is plenty of capital to invest in things like ClimateTech, a lot of the data needed to tackle this big issue is badly applied, leading to a big misallocation of resources. So to deal with the climate, we have to get the data right. A big part of the solution is open standards and interoperability.
The story of how the Open Banking Standard developed might show a way forward. Its development out of the UK led to regulated sector-wide interoperability (covering a broad range of areas including IP, legal, liability and licensing, and technology to enable data sharing). It’s meant over 300 fintech companies now use the Standard, which has helped to catalyze similar initiatives.
Open Banking has led to the explosion in tech startups that we see today. Revolut, Monzo, Starling Bank — none of them would have existed without Open Banking.
What if someone created something like the Open Banking Standard, but this time to stimulate climate-friendly innovation around financial products? After all, it’s more likely we’ll save the planet if we incentivize firms with financial models to make it work.
Well, it just so happens that one of the key players that developed the Open Banking Standard plans to do the same for data about the climate to allow the insurance industry to engage in the solutions to the climate crisis.

Gavin Starks co-chaired the development of the Open Banking Standard, laying the foundations for regulation and catalyzing international innovation.
But Starks has form in this arena. Prior to co-creating Open Banking, he was the Open Data Institute’s founding CEO, working with Sir Tim Berners-Lee. The ODI may not be well known in Silicon Valley, but it’s launched franchises across 20 countries and trained 10,000 people.
Starks’ previous venture was a pioneer in the climate space: AMEE (Avoidance of Mass Extinctions Engine) organized the world’s environmental data and standards into an open web service, raising $10M before selling in 2015 to PredictX.
Starks also chaired the development of the first Gold Standard Carbon Offset.
But the task Starks has now set himself is different from Open Banking.
His new project is Icebreaker One, a new non-profit which last month raised £1m+ investment, largely funded by the UK’s government-backed body UK Research and Innovation. It’s also supported by a consortium of financial and regulatory institutions.
So what’s the big idea this time?
The idea is to develop an open standard for data sharing that will stimulate climate-friendly financial product innovation and deliver new products.
Just as with the Open Banking Standard, Icebreaker One will steer the development of SERI, the Standard for Environment, Risk and Insurance, which has been created to design, test and develop financial products with Icebreaker One members ahead of the COP26 conference in Glasgow later this year.
SERI could provide a framework for an addressable, open marketplace, built around the needs of both the market and the new reality of climate change. If it works, this would enable insurers to share data robustly, legally and securely, driving the use and adoption of artificial intelligence tools within the insurance sector.
It would mean insurers being able to invest in demonstrably low-carbon financial products and services, based on real, hard data.
The current SERI launch partners are Aon, Arup, Agvesto, Bird & Bird, Brit Insurance, Dais LLP, Lloyd’s Register Group and the University of Cambridge.
The thinking behind the initiative is that as large catastrophic climate events occur with higher frequency, the UK’s insurance market is under pressure to evolve.
By creating the data platform, insurers can invest in low-carbon financial products, rather than ignore them because they can’t be priced right.
Starks says: “The time for theory is over—we need rapid and meaningful action. The threat of climate change to the global economy is tangible, and the increase in catastrophic climate events is capable of bankrupting markets and even nation-states. We are already witnessing insurance in some areas becoming untenable – which is a genuine threat to communities and wider society.”
He adds: “We are working with some of the most influential organizations in the world to plan policies and regulation to protect citizens, our environment and our economy; to unlock the power of unused and underutilized data to enable governments and business to respond effectively, responsibly and sustainably to the threats posed by the climate emergency.”
Arup, the multinational professional services firm best known for large engineering projects, is one of those in the SERI consortium.
Volker Buscher, Chief Data Officer at Arup, says: “Responding to climate change and futureproofing the market is vital – and working with Gavin and senior industry figures is a big opportunity to make real-world data work harder, to evolve investment strategies, shine a light on inefficiencies and better understand risk. It’s of benefit to everyone that we create the working blueprint for the freer sharing and licensing of data-at-scale that can be a shot in the arm to climate-affected financial products and services.”
Icebreaker One plans to overcome the locked, legacy culture of the insurance industry.
The task ahead is a big one. Currently, the valuable data needed to unlock this potential sits in largely closed-off “data lakes.” The goal is to influence $3.6 trillion of investment.
If the insurance industry can innovate around climate change and the new kinds of risk it creates, then the financial industry can create the kind of boom Open Banking did.
And that would mean not just brand new insurance products but also new startups in what’s been described as “InsureTech”.
But the greater prize is, of course, the planet itself.

Facebook brings its 3D photos feature to users with single-camera phones

Facebook first showed off its 3D photos back in 2018, and shared the technical details behind it a month later. But unless you had one of a handful of phones with dual cameras back then (when they weren’t so common), you couldn’t make your own. Today an update brings 3D photos to those of us still rocking a single camera.
In case you don’t remember or haven’t seen one lately, the 3D photos work by analyzing a 2D picture and slicing it into a ton of layers that move separately when you tilt the phone or scroll. I’m not a big fan of 3D anything, and I don’t even use Facebook, but the simple fact is this feature is pretty cool.

The problem was that the feature relied on dual cameras to help the system determine distance, which informed how the picture should be sliced. That meant I, with my beautiful iPhone SE, was out of the running — along with about a billion other people who hadn’t bought into the dual-camera thing yet.
But over the last few years the computer vision team over at Facebook has been working on making it possible to do this without dual-camera input. At last they succeeded, and this blog post explains, in terms technical enough that I’m not even going to attempt to summarize them here, just how they did it.
The advances mean that many — though not all — relatively modern single-camera phones should be able to use the feature. Google’s Pixel series is now supported, and single-camera iPhones from the 7 forward. The huge diversity of Android devices makes it hard to say which will and won’t be supported — it depends on a few things not usually listed on the spec sheet — but you’ll be able to tell once your Facebook app updates and you take a picture.
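Facebook’s blog post is the authoritative source, but the core trick of depth-layered parallax can be sketched simply. In the toy Python example below the depth map is fabricated; a real pipeline would estimate it from the single RGB image with a learned monocular-depth network.

# Toy sketch of depth-layered parallax, the idea behind 3D photos.
# The depth map here is fabricated; real systems predict it from the
# single image with a neural network.
import numpy as np

h, w, n_layers = 4, 8, 3
image = np.arange(h * w).reshape(h, w)             # stand-in for an RGB image
depth = np.tile(np.linspace(0.0, 1.0, w), (h, 1))  # fake per-pixel depth

def render(image, depth, tilt):
    """Shift near layers more than far ones as the viewpoint tilts."""
    layer = np.minimum((depth * n_layers).astype(int), n_layers - 1)
    out = np.zeros_like(image)
    for i in range(n_layers):
        shift = tilt * (n_layers - i)              # nearer layer => larger shift
        out += np.roll(np.where(layer == i, image, 0), shift, axis=1)
    return out

print(render(image, depth, tilt=1))

A production system also has to inpaint the content revealed behind the shifted layers, which is where most of the hard computer vision lives.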

Microsoft’s Cortana drops consumer skills as it refocuses on business users

With the next version of Windows 10, coming this spring, Microsoft’s Cortana digital assistant will lose a number of consumer skills around music and connected home, as well as some third-party skills. That’s very much in line with Microsoft’s new focus for Cortana, but it may still come as a surprise to the dozens of loyal Cortana fans.
Microsoft is also turning off Cortana support in its Microsoft Launcher on Android by the end of April and on older versions of Windows that have reached their end-of-service date, which usually comes about 36 months after the original release.

As the company explained last year, it now mostly thinks of Cortana as a service for business users. The new Cortana is all about productivity, with deep integrations into Microsoft’s suite of Office tools, for example. In this context, consumer services are only a distraction, and Microsoft is leaving that market to the likes of Amazon and Google.
Since the new Cortana experience is all about Microsoft 365, the subscription service that includes access to the Office tools, email, online storage and more, it doesn’t come as a surprise that the assistant’s new features will give you access to data from these tools, including your calendar, Microsoft To Do notes and more.
And while some consumer features are going away, Microsoft stresses that Cortana will still be able to tell you a joke, set alarms and timers, and give you answers from Bing.
For now, all of this only applies to English-speaking users in the U.S. Outside of the U.S., most of the productivity features will launch in the future.

Cortana wants to be your personal executive assistant and read your emails to you, too

Indian research firm Convergence Catalyst is ready for its second act

A 9-year-old is smashing the shuttle far and wide, and frantically pacing back and forth on the court in Bangalore, India, as her competition refuses to back down. Her rival is not a human. She is playing against a machine that is mimicking the game of badminton legend P.V. Sindhu, toned down a few notches to adjust for the age difference.
By the court, her father, Jayanth Kolla, is watching the game and taking notes. Kolla is a familiar name in the tech startup and business ecosystem in India. For the last eight years, he has been helming the research firm Convergence Catalyst, which covers mobility, telecom, AI and IoT.
When his daughter showed interest in badminton, Kolla rushed to explore options, only to realize that the centuries-old sport could use some deep tech.
He reached out to a few friends to explore if they could build a device. “I have always wondered how a younger version of players who have made it to the professional arena must have played like,” he said in an interview.
Months later, they had something better.
Sensate Technologies
Kolla founded Sensate Technologies last year and has hired a number of industry experts and data scientists from Stanford, MIT and India’s IITs. Sensate is building solutions on deep technologies such as AI, ML, advanced analytics, IoT, robotics and blockchain.
In the last year, the bootstrapped startup has developed seven prototypes, five of which are for sports. It holds eight patents. Which brings us back to the court.
One of the prototypes Sensate has built is the machine that Kolla’s daughter is playing against. In a recent interview, he demonstrated how Sensate was able to accurately map how a player moves around the court and smashes the shuttle just by looking at two-dimensional YouTube videos and mobile camera feeds. This was built using computer vision AI.

It then fine-tunes the gameplay to account for the age difference, and the result is fed into a machine that can mimic that player to a great degree, said Kolla.
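Sensate hasn’t detailed its pipeline, but extracting a player’s court position from ordinary 2D footage is a standard computer vision task. A very rough Python sketch with OpenCV follows; the file name is a placeholder, and a real system would use pose estimation and camera calibration rather than simple background subtraction.

# Rough sketch: track the moving player in 2D video via background
# subtraction. "match.mp4" is a hypothetical input clip.
import cv2

cap = cv2.VideoCapture("match.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2()
positions = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        positions.append((x + w // 2, y + h))  # rough feet position per frame

cap.release()
print(f"tracked {len(positions)} frames")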
A handful of startups and established players have sought to address the sports tech market in recent years. SeeHow, another India-based startup, builds and embeds sensors in bats and balls to track specific types of data that batsmen and bowlers generate.
Kolla’s aim is to turn Sensate Technologies into a global deep tech venture foundry and build 20-odd products that would then branch into multiple companies operating in 11 different industries.
Microsoft last year partnered with Indian cricket legend Anil Kumble’s company Spektacom to work on a number of solutions, including a smart sticker for bats that contains sensor tech designed to track performance.
But Kolla’s ambitions go way beyond sports tech.
“The best part about deep technology solutions and platforms is that you build solutions on these technologies to solve a problem in a particular sector and with very little incremental effort, they can solve problems in a completely different sector,” he said.
Kolla, a former product manager at Motorola and Nokia, among other companies, said the startup is also in discussion with one of the world’s biggest companies that is looking to license its tech for their healthcare stack. “This validates our approach.” He declined to name any potential clients as the talks have not materialized yet.

DocuSign acquires Seal Software for $188M to enhance its AI chops

Contract management service DocuSign today announced that it is acquiring Seal Software for $188 million in cash. The acquisition is expected to close later this year. DocuSign, it’s worth noting, previously invested $15 million in Seal Software in 2019.
Seal Software was founded in 2010, and while it may not be a mainstream brand, its customers include the likes of PayPal, Dell, Nokia and DocuSign itself. These companies use Seal for its contract management tools, but also for its analytics, discovery and data extraction services. And it’s these AI smarts, which the company developed over time to help businesses analyze their contracts, that led DocuSign to acquire it; they can significantly reduce the time needed for legal reviews, for example.
“Seal was built to make finding, analyzing, and extracting data from contracts simpler and faster,” Seal Software CEO John O’Melia said in today’s announcement. “We have a natural synergy with DocuSign, and our team is excited to leverage our AI expertise to help make the Agreement Cloud even smarter. Also, given the company’s scale and expansive vision, becoming part of DocuSign will provide great opportunities for our customers and partners.”
DocuSign says it will continue to sell Seal’s analytics tools. What’s surely more important to DocuSign, though, is that it will also leverage the company’s AI tools to bolster its DocuSign CLM offering. CLM is DocuSign’s service for automating the full contract lifecycle, with a graphical interface for creating workflows and collaboration tools for reviewing and tracking changes, among other things. And integration with Seal’s tools, DocuSign argues, will allow it to provide its customers with a “faster, more efficient agreement process,” while Seal’s customers will benefit from deeper integrations with the DocuSign Agreement Cloud.

Fear and liability in algorithmic hiring 

It would be a foolish U.S. business that tried to sell chlorine-washed chicken in Europe — a region where very different food standards apply. But in the high-tech world of algorithmically assisted hiring, it’s a different story.
A number of startups are selling data-driven tech tools designed to comply with U.S. equality laws into the European Union, where their specific flavor of anti-discrimination compliance may be as legally meaningless as the marketing glitter they’re sprinkling — with eye-catching (but unquantifiable) claims of “fairness metrics” and “bias beating” AIs.
First up, if your business is trying to crystal-ball-gaze something as difficult to quantify (let alone predict) as “job fit” and workplace performance, where each individual hire will almost certainly be folded into (and have their performance shaped by) a dynamic mix of other individuals commonly referred to as “a team” — and you’re going about this job matchmaking “astrology” by working off of data sets that are absolutely not representative of our colorful, complex, messy human reality — then the most pressing question is probably, “what are you actually selling?”
Snake oil in software form? Automation of something math won’t ever be able to “fix?” An impossibly reductionist dream of friction-free recruitment?
Deep down in the small print, does your USP sum to claiming to do the least possible damage? And doesn’t that sound, well, kind of awkward?

London-based Gyana raises $3.9M for a no-code approach to data science

Coding and other computer science expertise remain some of the more important skills that a person can have in the working world today, but in the last few years, we have also seen a big rise in a new generation of tools providing an alternative way of reaping the fruits of technology: “no-code” software, which lets anyone — technical or non-technical — build apps, games, AI-based chatbots, and other products that used to be the exclusive terrain of engineers and computer scientists.
Today, one of the newer startups in the category — London-based Gyana, which lets non-technical people run data science analytics on any structured dataset — is announcing a round of £3 million to fuel its next stage of growth.
The round was led by UK firm Fuel Ventures, with participation from Biz Stone of Twitter, Green Shores Capital and U+I, and it brings the total raised by the startup to $6.8 million since it was founded in 2015.
Gyana (Sanskrit for “knowledge”) was co-founded by Joyeeta Das and David Kell, who were both pursuing post-graduate degrees at Oxford: Das, a former engineer, was getting an MBA, and Kell was doing a PhD in physics.

Das said the idea for the tool came out of the fact that the pair could see a big disconnect emerging, not just in their studies but in the world at large — not so much a digital divide as a digital light year, in terms of the distance between those who do and those who don’t know how to work in the realm of data science.
“Everyone talks about using data to inform decision making, and the world becoming data-driven, but actually that proposition is available to less than one percent of the world,” she said.
Out of that, the pair decided to work on building a platform that Das describes as a way to empower “citizen data scientists”, by letting users upload any structured data set (for example, a .CSV file) and running a series of queries on it to be able to visualise trends and other insights more easily.
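Under the hood, the kind of “query” a citizen data scientist runs reduces to a handful of dataframe operations. A hypothetical Python sketch with pandas follows; the file name and column names are invented for illustration.

# Hypothetical sketch of what a no-code "trend" query boils down to;
# "footfall.csv" and its columns are invented for illustration.
import pandas as pd

df = pd.read_csv("footfall.csv")               # any structured dataset
df["date"] = pd.to_datetime(df["date"])

# Aggregate visits by week and location, then surface the busiest spots.
weekly = (df.groupby([pd.Grouper(key="date", freq="W"), "location"])["visits"]
            .sum())
top_locations = weekly.groupby("location").mean().nlargest(5)
print(top_locations)

Gyana’s pitch, in effect, is wrapping steps like these behind a visual interface so the user never has to see the code.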
While the longer term goal may be for any person to be able to produce an analytical insight out of a long list of numbers, the more practical and immediate application has been in enterprise services and building tools for non-technical knowledge workers to make better, data-driven decisions.
To prove out its software, the startup first built an app on the platform that it calls Neera (Sanskrit for “water”), which specifically parses footfall and other “human movement” metrics, useful for applications in retail, real estate and civic planning — for example, to determine how well certain retail locations are performing, to gauge footfall in popular locations, to decide where to place or remove stores, or to price a piece of property.
Starting out with its sights on mid-market and smaller companies — those most likely not to have in-house data scientists to meet their business needs — the startup has already picked up a series of customers that are actually quite a lot bigger than that. They include Vodafone, Barclays, EY, Pret a Manger, Knight Frank and the UK Ministry of Defence. It says it has some £1 million in contracts with these firms currently.
That, in turn, has served as the trigger to raise this latest round of funding and to launch Vayu (Sanskrit for “air”) — a more general purpose app that covers a wider set of parameters that can be applied to a dataset. So far, it has been adopted by academic researchers, financial services employees, and others that use analysis in their work, Das said.

With both Vayu and Neera, the aim — refreshingly — is to make the whole experience as privacy-friendly as possible, Das noted. Currently, you download an app to use Gyana, and you keep your data local as you work on it. Gyana does no “anonymization” and retains no data in its processes, apart from things like analytics around where your cursor hovers, so that Gyana knows how it can improve its product.
“There are always ways to reverse engineer these things,” Das said of anonymization. “We just wanted to make sure that we are not accidentally creating a situation where, despite learning from anonymised materials, you can reverse engineer what people are analysing. We are just not convinced.”
While there is something commendable about building and shipping a tool with a lot of potential to it, Gyana runs the risk of facing what I think of as the “water, water everywhere” problem. Sometimes if a person really has no experience or specific aim, it can be hard to think of how to get started when you can do anything. Das said they have also identified this, and so while currently Gyana already offers some tutorials and helper tools within the app to nudge the user along, the plan is to eventually bring in a large variety of datasets for people to get started with, and also to develop a more intuitive way to “read” the basics of the files in order to figure out what kinds of data inquiries a person is most likely to want to make.
The rise of “no-code” software has been swift in the world of tech, spanning a proliferation of startups, big acquisitions and large funding rounds. Companies like Airtable and DashDash are building analytics tools that lean on interfaces following the basic design of a spreadsheet; AppSheet, a no-code mobile app building platform, was recently acquired by Google; and Roblox (for building games without needing to code) and Unqork (for app development) have both raised significant funding just this week.
Gartner predicts that by 2024, some 65% of all app development will be made on low- or no-code platforms, and Forrester estimates that the no- and low-code market will be worth some $10 billion this year, rising to $21.2 billion by 2024.
That represents a big business opportunity for the likes of Gyana, which has been unique in using the no-code approach specifically to tackle the area of data science.
However, in the spirit of citizen data scientists, the intention is to keep a consumer version of the apps free to use as it works on signing up enterprise users with more enhanced paid products, which will be priced on an annual license basis (currently clients are paying between $6,000 and $12,000 depending on usage, she said).
“We want to do free for as long as we can,” Das said, both in relation to the data tools and the datasets that it will offer to users. “The biggest value add is not about accessing premium data that is hard to get. We are not a data marketplace but we want to provide data that makes sense to access,” adding that even with business users, “we’d like you to do 90% of what you want to do without paying for anything.”

Tractable claims $25M to sell damage-assessing AIs to more insurance giants

London-based insurtech AI startup Tractable, which is applying artificial intelligence to speed up accident and disaster recovery by using computer vision to perform visual damage appraisal instead of getting humans to do the job, has closed a $25 million Series C, led by Canadian investment fund Georgian Partners.
Existing investors also participated, including Insight Partners and Ignition Partners. The round nearly doubles the 2014-founded startup’s total funding, taking it to $55M raised to date.
When TechCrunch spoke to Tractable’s co-founder and CEO Alexandre Dalyac, back in 2018, he said the company’s aim is to speed up insurance-related response times around events like car accidents and natural disasters by as much as 10x.

Tractable is applying AI to accident and disaster appraisal

Two years on, the startup isn’t breaking out any hard metrics — but it says its product is used by a number of multinational insurance firms, including Ageas in the UK, France’s Covéa, Japan’s Tokio Marine and Polish insurer Talanx-Warta, to analyse vehicle damage “effectively and efficiently”.
It also says the technology has been involved in accelerating insurance-related assessments for “hundreds of thousands of people worldwide”.
Tractable’s pitch is that AI appraisals of damage to vehicles/property can take place via its platform “in minutes”, thereby allowing for repairs to begin sooner and people’s livelihoods to be restored more quickly.
Though of course if the AI algorithm denies a person’s claim the opposite would happen.
The startup said its new funding will go toward expanding its market footprint. It has customers across nine markets globally at this point. And in addition to its first offices in the UK and US, it recently opened a permanent office in Japan — with the stated aim of serving new clients in the Asia region.
It also said the Series C will be used for continued product development by further enhancing its AI.
Its current product line up includes AI for assessing damage to vehicles and another focused on the appraisal of damage caused by natural disasters, such as to buildings by hurricanes.
“Our AI solutions capture and process photos of damage and predict repair costs — at scale,” Tractable claims on its website, noting that its proprietary algorithms can be fed by “satellite, drone or smartphone imagery”.
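Tractable’s models are proprietary, but the overall shape (photo in, cost estimate out) is a standard vision-regression setup. Here is a hedged Python sketch with an untrained backbone; without Tractable’s labeled claims data and training, its outputs are of course meaningless.

# Hedged sketch of a photo-to-repair-cost model. The backbone is
# untrained here; a real system would be trained on large volumes of
# labeled claims photos, which is where the real advantage lies.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

backbone = models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)   # regression head
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def estimate_repair_cost(photo_path):
    """Return a (currently meaningless, untrained) cost estimate."""
    x = preprocess(Image.open(photo_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return backbone(x).item()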
Commenting on the funding in a statement Lonne Jaffe, MD at Insight Partners and also Tractable board director, said: “Tractable has achieved tremendous scale in the past year with a customer base across nine countries, a differentiated data asset, and the expansion of their team to over 100 employees across London, New York, and now Tokyo. We are excited to continue to invest in Tractable as the team brings its powerful AI technology to many more countries.”
Emily Walsh, principal at Georgian Partners, added that the startup’s “sophisticated approach to computer vision applied to accident recovery is resonating with the largest players globally, who are using the platform to make real-time, data-driven decisions while dramatically improving the customer experience”.
“We’re incredibly excited to partner with the Tractable team to help them move even faster on bringing the next wave of technological innovation to accident and disaster recovery across the world,” she added.
It’s worth noting that in the EU, citizens have a right under data protection law to (human) review of algorithmic decisions if they have a legal or similarly significant impact — and insurance would likely fall into that category.
EU policymakers also recently laid out a proposal to regulate certain “high risk” AI systems and said they intend to expand the bloc’s consumer protection rules by bringing in a testing and certification program for the data-sets that feed algorithms powering AI-driven services to support product safety.

Grab your ticket: Only one week to TC Sessions: Robotics + AI 2020

It’s T-minus one week to the big day, March 3, when more than 1,000 startuppers will convene in San Jose, Calif. for TC Sessions: Robotics + AI 2020. We’re talking a hefty cross-section representing big companies and exciting new startups. We’re talking some of the most innovative thinkers, makers, researchers, investors and influencers — all focused on creating the future of these two world-changing technologies.
Don’t miss out on this one-day conference of interviews, panel discussions, Q&As, workshops and demos dedicated to every aspect of robotics and A.I. General admission tickets cost $345. Snag your ticket now and save, because prices go up at the door. Want to save even more? Save 15 percent when you buy four or more tickets. Are you a student? Grab a ticket for just $50.
What do we have planned for this TC Session? Here’s a small sample of the fab programming that awaits you, and be sure to check out the full TC Session agenda here.
Q&A with Founders: This is your chance to ask questions of Sébastien Boyer, co-founder and CEO of FarmWise, and Noah Ready-Campbell, founder and CEO of Built Robotics — two of the most successful robotics founders on our stage.
Disney Robotics: Imagineers from Disney will present state-of-the-art robotics built to populate its theme parks.
Investing in Robotics and AI: Lessons from the Industry’s VCs: Dror Berman, founding partner at Innovation Endeavors, Jocelyn Goldfein, managing director at Zetta Venture Partners and Eric Migicovsky, general partner at Y Combinator will discuss the rising tide of venture capital funding in robotics and AI. The investors bring a combination of early stage investing and corporate venture capital expertise, sharing a fondness for the wild world of robotics and AI investing.
And — new this year — don’t miss watching the finalists from our Pitch Night competition. Founders of these early-stage companies, hand-picked by TechCrunch editors, will take the stage and have just five minutes to present their wares.
With just one more week until TC Sessions: Robotics + AI 2020 kicks off, you don’t have much time left to save on tickets. Why pay more at the door? Buy your ticket now and join the best and brightest for a full day dedicated to all things robotics.

Cartesiam helps developers bring AI to microcontrollers

Cartesiam, a startup that aims to bring machine learning to edge devices powered by microcontrollers, has launched a new tool for developers who want an easier way to build services for these devices. The new NanoEdge AI Studio is the first IDE specifically designed for enabling machine learning and inferencing on Arm Cortex-M microcontrollers, which power billions of devices already.
As Cartesiam GM Marc Dupaquier, who co-founded the company in 2016, told me, the company works very closely with Arm, given that both have a vested interest in having developers create new features for these devices. He noted that while the first wave of IoT was all about sending data to the cloud, that has now shifted and most companies now want to limit the amount of data they send out and do a lot more on the device itself. And that’s pretty much one of the founding theses of Cartesiam. “It’s just absurd to send all this data — which, by the way, also exposes the device from a security standpoint,” he said. “What if we could do it much closer to the device itself?”
The company first bet on Intel’s short-lived Curie SoC platform. That obviously didn’t work out all that well, given that Intel axed support for Curie in 2017. Since then, Cartesiam has focused on the Cortex-M platform, which worked out for the better, given how ubiquitous it has become. Since we’re talking about low-powered microcontrollers, though, it’s worth noting that we’re not talking about face recognition or natural language understanding here. Instead, using machine learning on these devices is more about making objects a little bit smarter and, especially in an industrial use case, detecting abnormalities or figuring out when it’s time to do preventive maintenance.
Today, Cartesiam already works with many large corporations that build Cortex-M-based devices. The NanoEdge Studio makes this development work far easier, though. “Developing a smart object must be simple, rapid and affordable — and today, it is not, so we are trying to change it,” said Dupaquier. But the company isn’t trying to pitch its product to data scientists, he stressed. “Our target is not the data scientists. We are actually not smart enough for that. But we are unbelievably smart for the embedded designer. We will resolve 99% of their problems.” He argues that Cartesiam reduced time to market by a factor of 20 to 50, “because you can get your solution running in days, not in multiple years.”
One nifty feature of the NanoEdge Studio is that it automatically tries to find the best algorithm for a given combination of sensors and use cases, and the libraries it generates are extremely small, using between 4K and 16K of RAM.
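The "learn a baseline, then flag deviations" pattern behind this kind of on-device anomaly detection can be sketched in a few lines. This is a conceptual toy only: the class and thresholds are invented, and Cartesiam's actual generated libraries are C code for Cortex-M, not Python.
# Conceptual sketch of on-device "learn, then detect" anomaly detection.
# Names and thresholds are hypothetical; not Cartesiam's actual library.
import statistics

class AnomalyDetector:
    def __init__(self):
        self.mean = 0.0
        self.stdev = 1.0

    def learn(self, baseline_samples: list[float]) -> None:
        # Fit a baseline of "normal" sensor behaviour (e.g. vibration RMS).
        self.mean = statistics.mean(baseline_samples)
        self.stdev = statistics.stdev(baseline_samples)

    def detect(self, sample: float, threshold: float = 3.0) -> bool:
        # Flag samples more than `threshold` standard deviations from normal.
        return abs(sample - self.mean) > threshold * self.stdev

det = AnomalyDetector()
det.learn([0.98, 1.02, 1.01, 0.99, 1.00, 1.03])  # machine running normally
print(det.detect(1.01))  # False: within normal vibration range
print(det.detect(2.40))  # True: abnormality, e.g. a maintenance flag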
NanoEdge Studio for both Windows and Linux is now generally available. Pricing starts at €690/month for a single user or €2,490/month for teams.

Freshworks acquires AnsweriQ

Customer engagement platform Freshworks today announced that it has acquired AnsweriQ, a startup that provides AI tools for self-service solutions and agent-assisted use cases where the ultimate goal is to quickly provide customers with answers and make agents more efficient.
The companies did not disclose the acquisition price. AnsweriQ last raised a funding round in 2017, when it received $5 million in a Series A round from Madrona Venture Group.
Freshworks founder and CEO Girish Mathrubootham tells me that he was introduced to the company through a friend, but that he had also previously come across AnsweriQ as a player in the customer service automation space for large clients in high-volume call centers.
“We really liked the team and the product and their ability to go up-market and win larger deals,” Mathrubootham said. “In terms of using the AI/ML customer service, the technology that they’ve built was perfectly complementary to everything else that we were building.”
He also noted the client base, which doesn’t overlap with Freshworks’, and the talent at AnsweriQ, including the leadership team, made this a no-brainer.
AnsweriQ, which has customers that use Freshworks and competing products, will continue to operate its existing products for the time being. Over time, Freshworks, of course, hopes to convert many of these users into Freshworks users as well. The company also plans to integrate AnsweriQ’s technology into its Freddy AI engine. The exact branding for these new capabilities remains unclear, but Mathrubootham suggested FreshiQ as an option.
As for the AnsweriQ leadership team, CEO Pradeep Rathinam will be joining Freshworks as chief customer officer.
Rathinam told me that the company was at the point where he was looking to raise the next round of funding. “As we were going to raise the next round of funding, our choices were to go out and raise the next round and go down this path, or look for a complementary platform on which we can vet our products and then get faster customer acquisition and really scale this to hundreds or thousands of customers,” he said.
He also noted that as a pure AI player, AnsweriQ had to deal with lots of complex data privacy and residency issues, so a more comprehensive platform like Freshworks made a lot of sense.
Freshworks has always been relatively acquisitive. Last year, the company acquired the customer success service Natero, for example. With the $150 million Series H round it announced last November, the company now also has the cash on hand to make more acquisitions like this one. Freshworks is currently valued at about $3.5 billion and has 2,700 employees in 13 offices. With the acquisition of AnsweriQ, it now also has a foothold in Seattle, which it plans to use to attract local talent to the company.

Freshworks raises $150M Series H on $3.5B valuation

Where top VCs are investing in medical and surgical robotics

The medical and healthcare categories have been leading robotic innovation for decades. Look no further than Intuitive Surgical, whose da Vinci robot has been performing surgery since it received FDA clearance in the early 2000s. These days, the SRI spinoff is valued at more than $60 billion.
There’s a lot of money to be made for established companies and still areas to be explored for young startups, both on and off the operating table. The venture community has been betting big on companies developing everything from new surgical robots and assistive robots for medical facilities to robotic medical aid devices and beyond.
Medical device and robotics startups raised roughly 600-700 rounds of venture capital in 2019, according to data from Pitchbook and Crunchbase, with most deals occurring at the early stage (over 25% of rounds occurred at the seed stage). With our 2020 Robotics+AI sessions event now just one week away, we’re diving back into another robotics sub-sector to see where robotics VCs are actually writing checks. 
Just as we did with warehouse robotics last week and construction robotics the week before, we asked four leading VCs who are actively investing in medical and surgical robotics to share what’s exciting them most and where they see opportunity in the sector:
Rohit Sharma, True Ventures
Duncan Turner, SOSV & HAX
Peter Hebert, Lux Capital
Haomiao Huang, Kleiner Perkins
Rohit Sharma, True Ventures
Which trends are you most excited about in surgical/medical robotics from an investing perspective?

AI chatbot maker Babylon Health attacks clinician in PR stunt after he goes public with safety concerns

UK startup Babylon Health pulled app data on a critical user in order to create a press release in which it publicly attacks the UK doctor who has spent years raising patient safety concerns about its symptom triage chatbot service.
In the press release, published late Monday, Babylon refers to Dr David Watkins — via his Twitter handle — as a “troll” and claims he’s “targeted members of our staff, partners, clients, regulators and journalists and tweeted defamatory content about us”.
It also writes that Watkins has clocked up “hundreds of hours” and 2,400 tests of its service in a bid to discredit his safety concerns — saying he’s raised “fewer than 100 test results which he considered concerning”.
Babylon’s PR also claims that only in 20 instances did Watkins find “genuine errors in our AI”, whereas other instances are couched as ‘misrepresentations’ or “mistakes”, per an unnamed “panel of senior clinicians” which the startup’s PR says “investigated and re-validated every single one” — suggesting the error rate Watkins identified was just 0.8%.
Screengrab from Babylon’s press release, which refers to Dr Watkins’ “Twitter troll tests”
Responding to the attack in a telephone interview with TechCrunch Watkins described Babylon’s claims as “absolute nonsense” — saying, for example, he has not carried out anywhere near 2,400 tests of its service. “There are certainly not 2,400 completed triage assessments,” he told us. “Absolutely not.”
Asked how many tests he thinks he did complete Watkins suggested it’s likely to be between 800 and 900 full runs through “complete triages” (some of which, he points out, would have been repeat tests to see if the company had fixed issues he’d previously noticed).
He said he identified issues in about one in two or one in three instances of testing the bot — though in 2018 says he was finding far more problems, claiming it was “one in one” at that stage for an earlier version of the app.
Watkins suggests that to get to the 2,400 figure Babylon is likely counting instances where he was unable to complete a full triage because the service was lagging or glitchy. “They’ve manipulated data to try and discredit someone raising patient safety concerns,” he said.
“I obviously test in a fashion which is [that] I know what I’m looking for — because I’ve done this for the past three years and I’m looking for the same issues which I’ve flagged before to see have they fixed them. So trying to suggest that my testing is actually any indication of the chatbot is absurd in itself,” he added.
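The two sides' figures imply starkly different error rates. A quick back-of-the-envelope comparison, using only the numbers each party has claimed above (illustrative arithmetic, not an audit of either claim):
# Illustrative arithmetic only, using the figures each side claims above.
babylon_tests, babylon_errors = 2400, 20
print(babylon_errors / babylon_tests)         # ~0.008, the ~0.8% rate Babylon implies

watkins_tests = 850                           # midpoint of his 800-900 estimate
for issue_rate in (1 / 2, 1 / 3):             # "one in two or one in three"
    print(round(watkins_tests * issue_rate))  # ~425 or ~283 concerning triages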
In another pointed attack Babylon writes Watkins has “posted over 6,000 misleading attacks” — without specifying exactly what kind of attacks it’s referring to (or where they’ve been posted).
Watkins told us he hasn’t even tweeted 6,000 times in total since joining Twitter four years ago — though he has spent three years using the platform to raise concerns about diagnosis issues with Babylon’s chatbot.
Such as this series of tweets where he shows a triage for a female patient failing to pick up a potential heart attack.

The @babylonhealth Chatbot has descended to a whole new level of incompetence, with #DeathByChatbot #GenderBias.
Classic #HeartAttack symptoms in a FEMALE, results in a diagnosis of #PanicAttack or #Depression.
The Chatbot ONLY suggests the possibility of a #HeartAttack in MEN! pic.twitter.com/M8ohPDx0LX
— Dr Murphy (aka David Watkins) (@DrMurphy11) September 8, 2019

Watkins told us he has no idea what the 6,000 figure refers to, and accuses Babylon of having a culture of “trying to silence criticism” rather than engage with genuine clinician concerns.
“Not once have Babylon actually approached me and said ‘hey Dr Murphy — or Dr Watkins — what you’ve tweeted there is misleading’,” he added. “Not once.”
Instead, he said the startup has consistently taken a “dismissive approach” to the safety concerns he’s raised. “My overall concern with the way that they’ve approached this is that yet again they have taken a dismissive approach to criticism and again tried to smear and discredit the person raising concerns,” he said.
Watkins, a consultant oncologist at The Royal Marsden NHS Foundation Trust — who has for several years gone by the online (Twitter) moniker of @DrMurphy11, tweeting videos of Babylon’s chatbot triage he says illustrate the bot failing to correctly identify patient presentations — made his identity public on Monday when he attended a debate at the Royal Society of Medicine.

Dr Murphy unmasked. Now for his positional statement. His driving force – patient safety. Can’t argue with that!! @DrMurphy11 #RSMDigiHealth @RoySocMed pic.twitter.com/hOC7kzlNz3
— clive flashman (@cflashman) February 24, 2020

There he gave a presentation calling for less hype and more independent verification of the claims being made by Babylon, as such digital systems continue elbowing their way into healthcare.
In the case of Babylon, the app has a major cheerleader in the current UK Secretary of State for health, Matt Hancock, who has revealed he’s a personal user of the app.
Simultaneously Hancock is pushing the National Health Service to overhaul its infrastructure to enable the plugging in of “healthtech” apps and services. So you can spot the political synergies.
Watkins argues the sector needs more of a focus on robust evidence gathering and independent testing vs mindless ministerial support and partnership ‘endorsements’ as a stand in for due diligence.
He points to the example of Theranos — the disgraced blood testing startup whose co-founder is now facing charges of fraud — saying this should provide a major red flag of the need for independent testing of ‘novel’ health product claims.
“[Over hyping of products] is a tech industry issue which unfortunately seems to have infected healthcare in a couple of situations,” he told us, referring to the startup ‘fake it til you make it’ playbook of hype marketing and scaling without waiting for external verification of heavily marketed claims.
In the case of Babylon, he argues the company has failed to back up puffy marketing with evidence of the sort of extensive clinical testing and validation which he says should be necessary for a health app that’s out in the wild being used by patients. (References to academic studies have not been stood up by providing outsiders with access to data so they can verify its claims, he also says.)
“They’ve got backing from all these people — the founders of Google DeepMind, Bupa, Samsung, Tencent, the Saudis have given them hundreds of millions and they’re a billion dollar company. They’ve got the backing of Matt Hancock. Got a deal with Wolverhampton. It all looks trustworthy,” Watkins went on. “But there is no basis for that trustworthiness. You’re basing the trustworthiness on the ability of a company to partner. And you’re making the assumption that those partners have undertaken due diligence.”
For its part Babylon claims the opposite — saying its app meets existing regulatory standards and pointing to high “patient satisfaction ratings” and a lack of reported harm by users as evidence of safety, writing in the same PR in which it lays into Watkins:
Our track record speaks for itself: our AI has been used millions of times, and not one single patient has reported any harm (a far better safety record than any other health consultation in the world). Our technology meets robust regulatory standards across five different countries, and has been validated as a safe service by the NHS on ten different occasions. In fact, when the NHS reviewed our symptom checker, Healthcheck and clinical portal, they said our method for validating them “has been completed using a robust assessment methodology to a high standard.” Patient satisfaction ratings see over 85% of our patients giving us 5 stars (and 94% giving five and four stars), and the Care Quality Commission recently rated us “Outstanding” for our leadership.
But proposing to judge the efficacy of a health-related service by patients’ ability to complain if something goes wrong seems, at the very least, an unorthodox approach — flipping the Hippocratic oath principle of ‘first do no harm’ on its head. (Plus, speaking theoretically, someone who’s dead would literally be unable to complain — which leaves a rather large loophole in any ‘safety bar’ being claimed via such an assessment methodology.)
On the regulatory point, Watkins argues that the current UK regime is not set up to respond intelligently to a development like AI chatbots and lacks strong enforcement in this new category.
Complaints he’s filed with the MHRA (Medical and Healthcare products Regulatory Agency) have resulted in it asking Babylon to work on issues, with little or no follow up, he says.
He notes, though, that confidentiality clauses limit what can be disclosed by the regulator.
All of that might look like a plum opportunity for a certain kind of startup ‘disruptor’, of course.
And Babylon’s app is one of several now applying AI-type technologies as a diagnostic aid in chatbot form, across several global markets. Users are typically asked to respond to questions about their symptoms and at the end of the triage process get information on what might be a possible cause. Though Babylon’s PR materials are careful to include a footnote where it caveats that its AI tools “do not provide a medical diagnosis, nor are they a substitute for a doctor”.
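The broad shape of such a triage flow is a scripted question tree whose ordered rules map answers to advice. A minimal sketch of that pattern follows; the questions and rules are invented for illustration and do not reflect Babylon's actual, proprietary logic:
# Generic sketch of a rule-based symptom triage flow. The questions and
# rules are invented for illustration; not Babylon's actual logic.
QUESTIONS = [
    ("chest_pain", "Do you have chest pain or pressure? (y/n) "),
    ("short_breath", "Are you short of breath? (y/n) "),
    ("arm_pain", "Does the pain spread to your arm or jaw? (y/n) "),
]

def triage(answers: dict[str, bool]) -> str:
    # Ordered rules: the first matching rule determines the advice shown.
    if answers["chest_pain"] and (answers["short_breath"] or answers["arm_pain"]):
        return "Possible cardiac cause: seek emergency care now."
    if answers["chest_pain"]:
        return "Chest pain without red flags: contact a doctor today."
    return "No red flags found: self-care and monitor symptoms."

# Example run with scripted answers instead of interactive input:
print(triage({"chest_pain": True, "short_breath": False, "arm_pain": True}))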
Yet, says Watkins, if you read certain headlines and claims made for the company’s product in the media you might be forgiven for coming away with a very different impression — and it’s this level of hype that has him worried.
Other less hype-dispensing chatbots are available, he suggests — name-checking Berlin-based Ada Health as taking a more thoughtful approach on that front.
Asked whether there are specific tests he would like to see Babylon do to stand up its hype, Watkins told us: “The starting point is getting a technology which you feel is safe to actually be in the public domain.”
Notably, the European Commission is working on a risk-based regulatory framework for AI applications — including for use-cases in sectors such as healthcare — which would require such systems to be “transparent, traceable and guarantee human oversight”, as well as to use unbiased data for training their AI models.
“Because of the hyperbolic claims that have been put out there previously about Babylon that’s where there’s a big issue. How do they now roll back and make this safe? You can do that by putting in certain warnings with regards to what this should be used for,” said Watkins, raising concerns about the wording used in the app. “Because it presents itself as giving patients diagnosis and it suggests what they should do for them to come out with this disclaimer saying this isn’t giving you any healthcare information, it’s just information — it doesn’t make sense. I don’t know what a patient’s meant to think of that.”
“Babylon always present themselves as very patient-facing, very patient-focused, we listen to patients, we hear their feedback. If I was a patient and I’ve got a chatbot telling me what to do and giving me a suggested diagnosis — at the same time it’s telling me ‘ignore this, don’t use it’ — what is it?” he added. “What’s its purpose?”
“There are other chatbots which I think have defined that far more clearly — where they are very clear in their intent saying we’re not here to provide you with healthcare advice; we will provide you with information which you can take to your healthcare provider to allow you to have a more informed decision discussion with them. And when you put it in that context, as a patient I think that makes perfect sense. This machine is going to give me information so I can have a more informed discussion with my doctor. Fantastic. So there’s simple things which they just haven’t done. And it drives me nuts. I’m an oncologist — it shouldn’t be me doing this.”
Watkins suggested Babylon’s response to his raising “good faith” patient safety concerns is symptomatic of a deeper malaise within the culture of the company. It has also had a negative impact on him — making him into a target for parts of the rightwing media.
“What they have done, although it may not be users’ health data, they have attempted to utilize data to intimidate an identifiable individual,” he said of the company’s attack on him. “As a consequence of them having this threatening approach and attempting to intimidate, other parties have thought ‘let’s bundle in and attack this guy’. So it’s that which is the harm which comes from it. They’ve singled out an individual as someone to attack.”
“I’m concerned that there’s clinicians in that company who, if they see this happening, they’re not going to raise concerns — because you’ll just get discredited in the organization. And that’s really dangerous in healthcare,” Watkins added. “You have to be able to speak up when you see concerns because otherwise patients are at risk of harm and things don’t change. You have to learn from error when you see it. You can’t just carry on doing the same thing again and again and again.”
Others in the medical community have been quick to criticize Babylon for targeting Watkins in such a personal manner and for revealing details about his use of its (medical) service.
As one Twitter user, Sam Gallivan — also a doctor — put it: “Can other high frequency Babylon Health users look forward to having their medical queries broadcast in a press release?”

Can other high frequency @babylonhealth users look forward to having their private medical queries broadcast in a press release?
— Sam Gallivan (@samgal) February 25, 2020

The act certainly raises questions about Babylon’s approach to sensitive health data, if it’s accessing patient information for the purpose of trying to steamroller informed criticism.
We’ve seen similarly ugly stuff in tech before, of course — such as when Uber kept a ‘god-view’ of its ride-hailing service and used it to keep tabs on critical journalists. In that case the misuse of platform data pointed to a toxic culture problem that Uber has had to spend subsequent years sweating to turn around (including changing its CEO).
Babylon’s selective data dump on Watkins is an illustrative example of a digital service’s ability to access and shape individual data at will — pointing to the underlying power asymmetries between the data-capturing technology platforms that are gaining increasing agency over our decisions and the users who only get highly mediated, hyper-controlled access to the databases they help to feed.
Watkins, for example, told us he is no longer able to access his query history in the Babylon app — providing a screenshot of the error screen that he says he now sees when he tries to access chat history in the app. He said he does not know why he is no longer able to access his historical usage information, but says he was using it as a reference — to help with further testing (and no longer can).
If it’s a bug it’s a convenient one for Babylon PR…

We contacted Babylon to ask it to respond to criticism of its attack on Watkins. The company defended its use of his app data to generate the press release — arguing that the “volume” of queries he had run means the usual data protection rules don’t apply, and further claiming it had only shared “non-personal statistical data”, even though this was attached in the PR to his Twitter identity (and therefore, since Monday, to his real name).
In a statement the Babylon spokesperson told us:
If safety related claims are made about our technology, our medical professionals are required to look into these matters to ensure the accuracy and safety of our products. In the case of the recent use data that was shared publicly, it is clear given the volume of use that this was theoretical data (forming part of an accuracy test and experiment) rather than a genuine health concern from a patient. Given the use volume and the way data was presented publicly, we felt that we needed to address accuracy and use information to reassure our users.  The data shared by us was non-personal statistical data, and Babylon has complied with its data protection obligations throughout. Babylon does not publish genuine individualised user health data.
We also asked the UK’s data protection watchdog about the episode and Babylon making Watkins’ app usage public. The ICO told us: “People have the right to expect that organisations will handle their personal information responsibly and securely. If anyone is concerned about how their data has been handled, they can contact the ICO and we will look into the details.”
Babylon’s clinical innovation director, Dr Keith Grimes, attended the same Royal Society debate as Watkins this week — which was entitled Recent developments in AI and digital health 2020 and billed as a conference that will “cut through the hype around AI”.
So it looks to be no accident that its attack press release was timed to follow hard on the heels of a presentation it would have known (since at least last December) was coming that day — and in which Watkins argued that where AI chatbots are concerned, “validation is more important than valuation”.

A little challenge to one of our critics…#RSMDigiHealth https://t.co/XqvQpRYMLX
— Babylon (@babylonhealth) February 24, 2020

Last summer Babylon announced a $550M Series C raise, at a $2BN+ valuation.
Investors in the company include Saudi Arabia’s Public Investment Fund, an unnamed U.S.-based health insurance company, Munich Re’s ERGO Fund, Kinnevik, Vostok New Ventures and DeepMind co-founder Demis Hassabis, to name a few helping to fund its marketing.
“They came with a narrative,” said Watkins of Babylon’s message to the Royal Society. “The debate wasn’t particularly instructive or constructive. And I say that purely because Babylon came with a narrative and they were going to stick to that. The narrative was to avoid any discussion about any safety concerns or the fact that there were problems and just describe it as safe.”
The clinician’s counter message to the event was to pose a question EU policymakers are just starting to consider — calling for the AI maker to show data-sets that stand up its safety claims.

Europe sets out plan to boost data reuse and regulate ‘high risk’ AIs

When that ‘AI company’ isn’t really an AI company

Artificial intelligence is one of the most important fields in technology right now, which makes it ripe for buzzword-savvy startups to leverage for attention. But while machine learning and related technologies are now frequently employed, it’s less common that it’s central to a company’s strategy and IP.
It’s important to note that this sort of posturing doesn’t necessarily mean a company is bad — it’s entirely possible it just has an overzealous communications department or PR firm. Just consider the following points as warning signs — if you hear these terms, dig a little deeper to find out exactly what the company does.
“Powered by AI”
There are innumerable variations on this particular line, which is a red flag that the company is trying to paint itself with the AI brush rather than differentiate by other means.
“Our machine-learning powered ___,” “our proprietary AI,” “leverages machine learning…” all basically mean the same thing: AI is involved somewhere along the line.
Apps that purport to connect users (“our unique AI-powered matching engine…”) with the right people or resources based on AI recommendations are also a common offender.
But machine learning algorithms have been deeply embedded in computing for many years. They can be simple or complex, tried and true or novel, and used for highly visible or completely unknown purposes. There are off-the-shelf algorithms developers can buy to help sort images, parse noisy data and perform many other tasks. Recommendation engines are a dime a dozen. Does using one of these make a product “powered by AI”?
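To see how low the bar can be, here is a complete "AI-powered matching engine" in a dozen lines of off-the-shelf scikit-learn. The user profiles are invented, and this is a hypothetical toy rather than any particular company's product:
# A "proprietary AI-powered matching engine" can be a few lines of
# off-the-shelf scikit-learn, which is the point being made above.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Each row: a user's interest profile (toy features invented for illustration).
users = np.array([
    [5.0, 1.0, 0.2],   # user 0
    [4.8, 0.9, 0.3],   # user 1: very similar to user 0
    [0.1, 4.5, 3.9],   # user 2
])

model = NearestNeighbors(n_neighbors=2).fit(users)
distances, matches = model.kneighbors(users[[0]])
print(matches)  # [[0 1]]: user 0's best match (besides themselves) is user 1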

Announcing the TC Pitch Night: Robotics + AI startups

The night before the Robotics + AI event at UC Berkeley, TechCrunch is hosting a private Pitch Night, featuring innovative startups in robotics and artificial intelligence. After reviewing hundreds of applications, TechCrunch selected the early stage startups below to pitch in front of industry executives, TC writers and our expert panel of judges – Brian Heater (TC’s own Hardware Editor), Aaron Jacobson (NEA), Jennifer Roberts (Grit Ventures) and Rob Coneybeer (Shasta Ventures).
Founders will pitch in front of the crowd followed by a tough Q&A from the judges. After all companies have pitched, the judges will select the top 5 teams to demo on stage at the main event on March 3rd – TC Sessions: Robotics + AI.
Check out the featured companies here:
AirWorks, Augean Robotics, BlinkAI Technologies, KEWAZO GmbH, Olis Robotics, RoboTire, SLAMcore, Tombot and Valyant AI
To see the startups pitching at the main event, book your $345 General Admission ticket today and save $50 before prices go up at the door. But no one likes going to events alone. Why not bring the whole team? Groups of four or more save 15% on tickets when you book here.

A team of Imagineers will discuss Disney’s tech breakthroughs at TC Sessions: Robotics+AI, March 3 at UC Berkeley

With TC Sessions: Robotics + AI a little over a week away, you likely thought we were finished with our big announcements. Understandably. We have top executives from Amazon and Toyota Research, some of the hottest startups and biggest VCs. But there’s still some excitement left to announce.
On March 3, Disney will be returning to the event to discuss some of the breakthroughs the entertainment giant has been making around robotics for its theme parks. We’ll be joined by Disney Imagineers Dawson Dill, Selina Herman and Joe Mohos.
The trio have been working on using robotics to enhance rides at the park, blending physical trackless vehicles with other physical and virtual tools to transport riders both figuratively and literally. The team will discuss their latest breakthroughs in the space and the applications such technologies will have in entertainment and beyond.
Tickets are now available for $345 right here. Take advantage of this discounted pricing now, as prices will go up soon! Students, book a super discounted $50 ticket right now and get in on the action.
Join the TechCrunch team and 1000+ of today’s leading minds in robotics and artificial intelligence for this single-day conference. The event will feature great panels and fireside chats, breakout sessions, and plenty of networking opportunities. There will also be an expo hall packed full of startups looking for their big break and awesome hands-on demos for you to interact with.


Do AI startups have worse economics than SaaS shops?

A few days ago, Andreessen Horowitz’s Martin Casado and Matt Bornstein published an interesting piece digging into the world of artificial intelligence (AI) startups, and, more specifically, how those companies perform as businesses. Core to the argument presented is that while founders and investors are wagering “that AI businesses will resemble traditional software companies,” the well-known venture firm is “not so sure.”
Given that TechCrunch cares a lot about startup business fundamentals, the notion that one oft-discussed and well-funded category of venture-backed startup might sport materially less attractive economics than we expected captured our attention.
The Andreessen Horowitz (a16z) perspective is straightforward, arguing that AI-focused companies have lower gross margins than software companies due to cloud compute and human-input costs, endure issues stemming from “edge-cases” and enjoy less product differentiation from competing companies when compared to software concerns. Today, we’re drilling into the gross margin point, as it’s something inherently numerical that we can get other, informed market participants to weigh in on.
If a16z is correct about AI startups having slimmer gross margins than SaaS companies, they should — all other things held equal — be worth less per dollar of revenue generated; or in simpler terms, they should trade at a revenue multiple discount to SaaS companies, leaving the latter category of technology company still atop the valuation hierarchy.
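To make the multiple logic concrete, here's a toy calculation. The margins and multiples below are invented for illustration; they are not figures from the a16z piece:
# Invented figures, purely to illustrate the revenue-multiple logic above.
revenue = 10_000_000  # $10M annual revenue for both hypothetical companies

saas_gross_margin = 0.80  # assumed healthy SaaS margin
ai_gross_margin = 0.55    # assumed margin compressed by compute and human-input costs

# If buyers are ultimately paying for gross profit, an equal revenue multiple
# overprices the AI company; a margin-adjusted multiple discounts it.
saas_multiple = 10.0
ai_multiple = saas_multiple * (ai_gross_margin / saas_gross_margin)

print(revenue * saas_multiple)  # $100M valuation at the SaaS multiple
print(revenue * ai_multiple)    # ~$68.75M at the margin-adjusted multiple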
This matters, given the amount of capital that AI-focused startups have raised.
Is a16z correct about AI gross margins? I wanted to find out. So this week I spoke to a number of investors from firms that have made AI-focused bets to get a handle on their views. Read the full a16z piece, mind. It’s interesting and worth your time.
Today we’re hearing from Rohit Sharma of True Ventures, Jeremy Kaufmann of Scale Venture Partners, Nick Washburn of Intel Capital and Ben Blume of Atomico. We’ll start with a digest of their responses to our questions, with their unedited notes at the end.
AI economics and optimism
We asked our group of venture investors (selected with the help of research from TechCrunch’s Arman Tabatabai) three questions. The first dealt with margins themselves, the second dealt with resulting valuations and, finally, we asked about their current level of optimism regarding AI-focused companies.

Mangrove Capital’s Mark Tluszcz on the huge mHealth opportunity and why focusing on UX is key

Mangrove Capital Partners’ co-founder and CEO Mark Tluszcz is brimming with enthusiasm for what’s coming down the pipe from health tech startups.
Populations armed with mobile devices and hungry for verified and relevant information, combined with the promise of big data and AI, are converging, as he sees it, into a massive opportunity for businesses to rethink how healthcare is delivered, both as a major platform for plugging gaps in stretched public healthcare systems and in multiple spaces in between — serving up something more specific and intimate.
Think health-focused digital communities, perhaps targeting a single sex or time of life, as we’re increasingly seeing in the femtech space, or health-focused apps and services that can act as supportive spaces and sounding boards that cater to the particular biological needs of different groups of people.
Tluszcz has made some savvy bets in his time. He was an early investor in Skype, turning a $2 million investment into $200 million, and he’s also made a tidy profit backing web building platform Wix, where he remains as chairman. But the long-time, early-stage tech investor has a new focus after a clutch of investments — in period tracking (Flo), AI diagnostics (K Health) and digital therapeutics (Happify) — have garnered enough momentum to make health the dominant theme of Mangrove Capital’s last fund.
“I really don’t think that there’s a bigger area and a more inefficient area today than healthcare,” he tells us. “One of the things that that whole space is missing is just good usability. And that’s something that Internet entrepreneurs do very well.”
Extra Crunch sat down for an in-depth conversation with Tluszcz to dig into the reasons why he’s so excited about mHealth (as Mangrove calls it) and probe him on some of the challenges that arise when building data-led AI businesses with the potential to deeply impact people’s lives.
The fund has also produced a healthcare report setting out some of its thinking.
This interview has been lightly edited for length and clarity.
TechCrunch: Is the breadth of what can fall in the digital health or mHealth category part of why you’re so excited about the opportunities here?
Mark Tluszcz: I think if you take a step back, even from definitions for a moment, and you look around as an investor — and we as a firm, we happen to be thematically driven but no matter who you are — and you say where are there massive pockets of opportunity? And it’s typically in areas where there’s a lot of inefficiency. And anybody who’s tried to go to the doctor anywhere in Europe or around the world or tried to get an appointment with a therapist or whatever realizes how basically inefficient and arcane that process is. From finding out who the right person is, to getting an appointment and going there and paying for it. So healthcare looks to us like one of those arcane industries — the user experience, so to speak, could be so much better. And combine that with the fact that in most cases we know nothing as individuals about health — unless you read a few books and things. But it’s generally the one place where you’re the least informed in life. So you go see your GP and he or she will tell you something and you’re blindly going to take that pill they’re going to give you because you’re not well informed. You don’t understand it.
So I think that’s the exciting part about it. If I now look around and say if I now look at all the industries in the world — and of course there’s interesting stuff happening in financial services, and it continues to happen on commerce, and many, many places — but I really don’t think that there’s a bigger area and a more inefficient area today than healthcare.
You combine that with the power that we’re beginning to see in all these mobile devices — i.e. I have it in my pocket at all times. So that’s factor two. So one is the industry is potentially big and inefficient; two is there are tools that give us easy access to it. And there has been — I think again — a general frustration with healthcare online, I would say: when you go into a search engine, or you go into WebMD or Google or whatever, the general feedback it gives you is you’re about to have a heart attack or you’re about to die, because those products are not designed specifically for that. So you as a consumer are confused because you’re not feeling well so you go online. The next day you go see your doctor and he or she says you didn’t go to Google did you, right? I know you’re probably freaked out at this point. So the second point is the tools are there.
Third I’d say is that artificial intelligence, machine learning, which is kind of in the process of gaining a lot of momentum, has made it that we’re able to start to dream that we could one day crunch sufficient data to get new insights into it. So I think you put those three factors together and say this seems like it could be pretty big, in terms of a space.
One of the things that that whole space is missing is just good usability. And that’s something that Internet entrepreneurs do very well. It’s figure out that usability side of it. How do I make that experience more enjoyable or better or whatever? In fact, you see it in fintech. One of the reasons, largely, that these neobanks are winning is that their apps are much better than what you have from the incumbents. There’s no other reason for it. And so I think there’s this big opportunity that’s out there, and it says all these factors lead you to this big, big industry. And then yes, that industry in itself is extremely large — all the way from dieting apps, you might think, all the way to healthy eating apps to longevity apps, to basic information about a particular disease, to basic general practitioner information. You could then break it down into female-specific products, male-specific products — so the breadth is very, very big.
But I think the common core of that is we as humans are getting more information and knowledge about how we are, and that is going to drive, I think, a massive adoption of these products. It’s knowledge, it’s ease of use, and it’s accessibility that just make it a dream come true if we can pull all these pieces together. And this is just speaking about the developed world. This gets even bigger potentially if I go to the third world countries where they don’t even have access to basic healthcare information or basic nutritional information. So I would say that the addressable market in investors’ jargon is just huge. Much more so than in any other industry that I know of today.
Is the fund trying to break that down into particular areas of focus within that or is the fund potentially interested in everything that falls under this digital health/mHealth umbrella?
We are a generalist investment firm. As a generalist investment firm we find these trends and then anything within these trends is going to pique our interest. Where we have made some investments has been really in three areas so far, and we’ll continue to broaden that base.
We’ve made an investment into a company called Flo. They are the number one app in the world for women to help track their menstrual cycles. So you look at that and go can that be big, not big, I don’t know. I can tell you they have 35M monthly active users, so it’s massive.
Now you might say, ‘Why do women need this to help them track their cycles because they’ve been tracking these menstrual cycles other ways for thousands of years?’ This is where, as an investor, you have to combine something like that with new behavioral patterns in people. And so if you look at the younger generation of people today they’re a generation that’s been growing up on notifications — the concept of being notified to do something. Or reminded to do something. And I think these apps do a lot of that as well.
My wife, who’s had two children, might say — which she did before I invested in the company — why would I ever need such an app? And I told her, “Unfortunately you’re the wrong demographic… because when I speak to an 18-year-old she says, ‘Ah, so cool! And by the way do you have an app to remind me to brush my teeth?’” So notifications are what I think make it interesting for that younger demographic.
And then curiously enough — this is again the magic of what technology can bring and great products can bring — Flo is a company created by two brothers. They had no particular direct experience of the need for the app. They knew the market was big. They obviously hired women who were more contextually savvy to the problem but they were able to build this fantastic product. And did a bunch of things within the product that they had taken from their previous lives and made it so that the user experience was just so much better than looking at a calendar on your phone. So today 35M women every month use this product tells you that there’s something there — that the tech is coming and that people want to use it. And so that’s one type of a problem, and you can think about a number of others that both males and females will have — for whom making that single user experience better could be interesting. And I could go from that to ten things that might be interesting for women and ten things that might specifically be interesting for men — you can imagine breaking that down. This is why, again, the space is so big. There are so many things that we deal with as men and women [related to health and biology].
Now for me the question is, as a venture investor, will that sub-set be big enough?
And that again is no different than if I was looking at any other industry. If I was in the telecommunications industry — well is voice calling big? Is messaging big enough? Is conference calling big enough? All that is around calling, but you start breaking it down and, in some cases, we’re going to conclude that it’s big enough or that it’s not big enough. But we’re going to have to go through the process of looking at these. And we’re seeing these thematic things pop up all over the place right now. All over Europe and in the U.S. as well.
It did take us a little time to say is this big enough [in the case of Flo] but obviously getting pregnant is big enough. And as a business, think about it: once you know a woman’s menstrual cycle process and then she starts feeding into the system, ‘I am pregnant; I’m going to have a child,’ you start having a lot of information about her life and you can feed a lot of other things to her. Because you know when she’s going to have a child, you can propose advice as well around here’s how the first few months go. Because, as we know, when you have your first child, you’re generally a novice. You’re discovering what all that means. And again you have another opportunity to re-engage with that user. So that’s something that I think is interesting as a space.
So the thematic space is going to be big — the femtech side and the male tech side. All of that’s going to play a big role. One could argue always there are the specific apps that are going to be the winners; we can argue about that. But right now I guess Flo is working very well because those people haven’t found such a targeted user experience in the more generic place. They feel as if they’re in a community of like-minded women. They have forums, they can talk, they have articles they can read, and it’s just a comfortable place for them to spend some time.
So Flo is the first example of a very specific play that we did in healthcare about a year and a half ago. The first investment, in fact, that we made in healthcare.
The second example is opposed to that — it’s a much more general play in healthcare. It’s a company called K Health. Now K Health looked at the world… and said what happens when I wake up at night and I have a pain and I do go to Google and I think I’m going to have a heart attack…. So can I build a product that would mimic, if you will, a doctor? So that I might be able to create an experience where I can have immediacy of information and immediacy of diagnostics on my phone. And then I could figure out what to do with that.
This is an Israeli company and they now have 5 million users in the U.S. that are using the app, which is downloadable from the U.S. app store only. What they did is they spent a year and a half building the technology — the AI and the machine learning — because what they did is they bought a very large dataset from an insurance company. The company sold it to them anonymized. It was personal health records for 2.5 million people over 20 years, so they had a lot of information. A lot of this stuff was in handwritten notes. It wasn’t well structured. So it took them a long time to build the software to be able to understand all this information and break it down into billions of data parts that they could now manipulate. And the user experience is just like a WhatsApp chat with a robot.
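The general idea, reduced to a toy, looks something like the following. This is a loose, invented illustration of ranking likely conditions from structured historical records, not K Health's actual (far more sophisticated) model:
# Loose, hypothetical sketch of ranking conditions from past records;
# the history below is invented and this is not K Health's model.
from collections import Counter

# (symptoms observed, final diagnosis) pairs extracted from past records.
HISTORY = [
    ({"fever", "cough"}, "flu"),
    ({"fever", "cough", "fatigue"}, "flu"),
    ({"cough"}, "common_cold"),
    ({"headache", "nausea"}, "migraine"),
    ({"fever", "headache"}, "flu"),
]

def rank_conditions(symptoms: set[str]) -> list[tuple[str, float]]:
    # Score each past diagnosis by how strongly its symptoms overlap.
    scores: Counter = Counter()
    for past_symptoms, diagnosis in HISTORY:
        overlap = len(symptoms & past_symptoms)
        if overlap:
            scores[diagnosis] += overlap / len(symptoms | past_symptoms)
    return scores.most_common()

print(rank_conditions({"fever", "cough"}))  # flu ranks above common_cold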
Their desire is not to do what some other companies are doing, which is ‘answer ten questions and maybe you should talk to a doctor via Skype.’ Because their view was that — at the end of the day — in every developed country there are shortages of doctors. That’s true for the U.K.; it’s true for the U.S. If you predict out to 2030, there’s a huge hole in the number of GPs. Part of that is also totally understandable; who would want to be a GP today? I mean your job in the U.S. and the U.K. is you’re essentially a sausage factory. Come in and you’ve got 3 minutes with your customer. It’s not a great experience for the doctor or the person who goes to the doctor.
So K Health built this fantastic app and what they do is they diagnose you and they say based on the symptoms here’s what K thinks you have, and, by the way, here’s a medicine that people like you were treated with. So there’s an amazing amount of information that you get as a user, and that’s entirely free as a user experience. Their vision is that the diagnostic part will always be free.
There are 5 million people in the U.S. using the app who are diagnosing. There are 25 questions that you go through with the robot, ‘K,’ and she diagnoses you. We call that a virtual doctor’s visit. We’re doing 15,000 of those a day. Think about the scale at which we’ve been able to go in a very short time. And all that’s free.
To some extent it’s great for people who can’t necessarily afford doctors — again, that’s not typically a European problem, because socialized medicine in Europe has made that easy. But it is a problem in the U.S.; it is a problem in Africa, Asia, India and South America. There are about 4 billion people around the world for whom speaking to a doctor is a problem.
K Health’s view is they’re bringing healthcare free to the world. And then ultimately how they make money will be things like if you want to speak to a doctor because you need a prescription for drugs. The doctor has access to K’s diagnostic and either agrees or disagrees with it and gives you a prescription to do that. And what we’re seeing is an interesting relationship which is where we wanted it to be. Of those 15,000 free doctor visits, less than one percent turn into ‘I want to speak to a human’ and hence pay $15 (that’s the price they’re charging in the U.S. to actually converse with a human). In the U.S., by the way, about a quarter of the population — 75 million people — don’t have complementary insurance, so when they go to the doctor it’s $150. Isn’t that a crazy thing? You can’t afford complementary insurance but you could pay the highest price to go see a doctor. Such madness.
And then there’s a whole element of it’s simple, and it’s convenient. You’re sitting at home thinking, “Okay, I’m not feeling so well” and you’ve got to call a doctor, get an appointment, drive however long it takes, and wait in line with other sick people. So what we’re finding is people are discovering new ways of accessing information…. Human doctors also don’t have time to give empathy in an ever stretched socialized medicine country [such as in Spain]. So what we’re seeing also is a very quick change in user behavior. Two and a half years ago [when K Health started], many people would say I don’t know about that. Now they’re saying convenience — at least in Europe — is why that’s interesting. In the U.S. it’s price.
So that’s the second example; much more general company but one which has the ability to come and answer a very basic need: ‘I’m not feeling well.’
We have 5M users, which means we have data on 5M people. On average, a GP in his life will see about 50,000 patients. If you think about just the difference — if you come to K, K has seen 5M people; your GP has seen maybe 50,000, max. So, statistically, the app is likely to be better. What we know today, through benchmarks and all sorts of other stuff, is that the app is more accurate than humans.
So you look at where that’s heading in general medicine we’ve for a long time created this myth that doctors spent eight years learning a lot of information and as a result they’re really brainy people. They are brainy people but I believe that that learning process is going to be done faster and better through a machine. That’s our bet.
The third example of an investment that we’ve made in the health space is a company called Happify. They’re a company that has developed a kind of gamification of online treatment for certain sicknesses. So, for example, if you’re a little depressive you can use their app and the gamification process and they will help you feel healthier. So far you’re probably scratching your head saying ‘I don’t know about that…’ But that was how they started, and then they realized that, hang on, you can either do that or you can take medicine; you can pop a pill, which is in fact what many doctors suggest for people who have anxiety or depression.
So then they started engaging with the drugs companies and they realized that these drug companies have a problem, which is the patent expiry of their medication. And when patents expire you lose a lot of money. And so what’s very typical in the pharma industry is if you’re able to modify a medicine you can typically either extend or have a new patent. So what Happify has done with the pharma companies now is said: instead of modifying the medicine and adding something else to it — another molecule, for instance — could we associate treatments, which is medicine plus online software? Like a digital experience. And that has now been dubbed digital therapeutics — DTx is the common term being used for them. And this company Happify is one of the first in the world to do that. They signed a very large deal with a company called Sanofi — one of the big drug makers. And that’s what they’re going to roll out. When doctors say to their patients I’m diagnosing you with anxiety or depression, Sanofi has a particular medication and they’re going to bundle it now with an online experience — and in all the tests that they’ve done, actually, when you combine the two, the patient is better off at the end of this treatment. So it’s just another example of why this whole space is so large. We never thought we’d be in any business with a pharma business because we’re tech investors. But here all of a sudden the ability to marry tech with medication creates a better end user experience for the patient. And that’s very powerful in itself.
So those are just three areas where we have actually put money in the health space but there are a number of areas that one looks at — either general or more specific.
Yeah, it is big. And I think, for us at least, the more general it stays and is seen, the more open-minded we’re going to be. Because one thing you have to be as an investor, at least early stage like ours, is completely open-minded. And you can’t bias your process by your own experience. It has to stay very broad.
It's also why I think clinician-led companies and investors are not good — because they come with their own baggage. I think in this case, just like in any other industry, you have to say: I'm not going to be polluted by the past, and for me to change the experience going forward in any given area I have to fundamentally be ready to reinvent it.
You could propose Theranos as a counterpoint to that — but do you think investors in the health space have got over any fallout from that high-profile failure at this point?
With that company one could argue whose fault it really was. Clearly the founder lied and did all sorts of stuff, but her investors let her do it. So to some extent the checks and balances just weren't in place. I'm only saying that because I don't think that should be the example by which we judge everything else. That's just a case of a fraudster and dumb investors. That's going to continue to exist forever, and who knows, we might come across some of those, but I don't think it's the benchmark by which one should be judging whether healthcare is a good or viable investment. Again, I look at Flo: 35M active users. I look at K Health: 5M users in the US who are now beginning to use doctors and order medicine through the platform. I think the simplicity and the ease of use make it, for me, undeniable that this industry is going to be completely shaken up by this tech. And we need it, because at least in the Western world our health systems are so stretched they're going to break.
Europe vs the US is interesting — because of the existence of public healthcare vs a lack of public healthcare. What difference does that make to the startup opportunities in health in Europe vs the US? Perhaps in Europe things have to be more supplementary to public healthcare systems but perhaps ultimately there isn’t that much difference if healthcare opportunities are increasingly being broken out and people are being encouraged to be more proactive about looking after their own health needs?
Yeah. Take K Health — you look at it and say, from a usage example, it's clear that everywhere in the world, including the US and Europe, people are going to recognize the simple ease of use and the convenience of it. If I had to spend money to then maybe make money, I would say the US is slightly better because there are 75M people who can't afford a doctor and I might be able to sell them something more, whereas in Europe I might not. I think it becomes a commercial question more than anything else. Certainly in the UK the NHS [National Health Service] is trying to do a lot of things. It is not a great user experience when you go to the doctor there. But at the end of the day I don't think the difference between Europe and the US matters much. This idea of what these apps want to tend towards — healthcare for everybody at a super cheap or free price-point — I think we have an advantage in Europe in thinking of it that way because that's what we've had all our lives. So to some extent what I want to create online is socialized medicine for the world — through K Health. And I learnt that because I live here [in Europe].
Somebody in the US — not the 75M, because they have nothing — but all the others, maybe they don't think there's a problem because they don't recognize it. Our view with K Health is that there's an opportunity to make socialized medicine a global phenomenon, hoping that in 95% of cases access to the app is all you need, and in 5% of cases you're going to go to the specialists who need to see you — and then maybe there's enough money to go around for everybody.
And of course, as an investor, we’re interested in global companies. Again you see the theme: Flo, K Health, Happify, all those have a potential global footprint right off the bat.
I think with healthcare there are going to be plays that are country-specific and may still be decent investments. You see that in financial services. The neobanks are very country-specific — whenever they try to get out of their country, like N26, they realize that life isn't so easy when you go somewhere else. But in healthcare I think we have an easier path to going global because there is such pent-up demand and a need for you to just feel good about yourself… Most of the people who go through [the K Health diagnostic] process just want peace of mind. If 95% of the 15k people who go through that process right now just go, "Phew, I feel okay," then we've accomplished something quite significant. And imagine if it's not 15,000 but 150,000 a day, which seems to be quite an easy goal. So healthcare allows us to dream that the TAM — in investor terms, the total addressable market — is big. I can realistically think that with any one of the three companies I've mentioned we could have hundreds of millions of users around the world. Because there's the need.
There are different regulatory regimes across markets, there are different cultural contexts around the world — do you see this as a winner takes all scenario for health platforms?
No. Not at all. I think ultimately it's the user — in terms of his or her experience using an app — that's going to matter. Flo is not the only menstrual cycle app in the world; it just happens to be by far the biggest. But there are others. So that's the perfect example. I don't think there's going to be one winner that takes it all.
There’s also (UK startup) Babylon Health which sounds quite similar to K Health…
Babylon does something different. They're essentially a symptom checker designed to push you to have a Skype call with a human doctor…. You answer a bunch of questions and it'll say, "Well, we think you have this, let's connect you to a real doctor." We did not want to invest in a company that ever did that, because the real problem is there just aren't enough doctors, and then frankly you and I are not going to want to talk to a doctor from Angola. What's going to happen is there aren't enough doctors in the Western countries, and the solution for those types of companies — Babylon is one; there are others doing similar things — is to become what we call lead generation for doctors, where you get a commission for bringing people to speak to a doctor. You're just displacing the problem from your neighborhood to, broadly speaking, wherever the humans are. And as I said, humans have their failings. If you really want to scale things big and globally you have to let software do it.
No it’s not a winner takes all — for sure.
So the vision is that this stuff starts as a supplement to existing healthcare systems and gradually scales?
Correct. I'll give you an example in the U.S. with K Health. They have a deal with Anthem, the second-largest insurance company in America; its go-to-market brand is Blue Cross Blue Shield. So why is this insurance company interested? Because they know:
that there are not enough doctors;
that the health system in the U.S. is under stress; and
that if they can reduce the number of doctor's visits by promoting an app like K, that's financially beneficial to them.
So they’re going to be proposing it, in various forms, to all their customers by saying, “Before you go see a doctor, why don’t you try K?”
In this particular case with K there are revenue opportunities from the insurance companies and also directly from the consumer, which also makes it interesting.
You did say different regions and different countries have different systems — yes, absolutely, and there's no question that going international requires work. Having said that, I would say a European, an Indonesian and a Brazilian are largely similar. There's sometimes this fallacy that Asians, for instance, are so different from us as Western Europeans. And the truth is, not really — when you look down into the DNA and the functions of the body and things like that. You do have to do that work, though. If we were to take K to Indonesia, for example, you have to make sure that your AI engine has enough data to be able to diagnose some local conditions.
I'll give you an example. When we launched K in the U.S. and we started off with New York, one of the things you have to be able to diagnose is Lyme disease, which is what you get from a tick bite. It's very, very prevalent in the Greater New York area — not so much anywhere else in the States. But in New York, if you don't catch it, it looks like a cold and then you get very sick. That's very much a regional thing that you have to have. And so if we were to go to Indonesia we'd have to have things like malaria and dengue. But all that is not so difficult. Yes, there's some customization.
There are also certain conditions that can be more common for certain ethnicities. There are also differences in how women experience medical conditions vs men. So there can be a lot of issues around how localized health data is…
I would say that that is a very small problem which must be addressed, but it's a much smaller problem than you think it is. Much smaller. For instance, on the male-to-female question — of course medicine sometimes plays out differently — but when you have a database of 5 million of which 3 million are women and 2 million are men, you already have that data embedded. It is true that medications also work better for certain ethnicities. But again, those are very tiny, very small examples. Most doctors know them.
At the big scale that may look very small, but to an individual patient, if a system is not going to pick up on their condition or prescribe them the right medicine, that's obviously catastrophic from their point of view…
Of course.
Which is why, in the healthcare space, when you’re using AI and data-driven tools to do diagnosis there’s a lot of risk — and that’s part of the consideration for everyone playing in this space. So then the question is how do you break down that risk, how do you make that as small as possible and how do you communicate it to the users — if the proposition is free healthcare with some risk vs. not being able to afford going to the doctor at all?
I appreciate that, as a journalist, you're trying to say this is a massive risk. I can tell you, as somebody who's involved in these businesses, that it is a business risk we have to take into consideration, but it is by far not insurmountable. We clearly have a responsibility as businesses to say: if I'm going to go to South East Asia, I need to be sure that I cover all the 'weird' things that we would not have in our database somewhere else. So I need to do that. How I go about doing that, obviously, is the secret sauce of each company. But you simply cannot launch your product in that region if you don't solve for — in this case — malaria and dengue. It doesn't make sense [for a general health app]. You'd have too many flaws and people would stop using you.
I don't think that's so much the case with Flo, for instance… But all the entrepreneurs who are designing these companies are fully aware that it isn't a cookie-cutter, one-size-fits-all — though it is close to that when you look at the exceptions. We're not talking about having to redo the database because of a 30% or 20% difference — it's much, much smaller than that.
And, by the way, at the end of the day, the market will be the judge. In our case, when you go from an Israeli company into the U.S. and you have partners like Blue Cross Blue Shield, they've tested the crap out of your product. And then when you say, "Well, I'm going to do this now in Indonesia," you get local partners who are going to help you do that.
One of the drawbacks of healthcare is, I would say, having to make sure that your product works in all these countries and doesn't have holes on the diagnostic side.
Which seems in many cases to boil down to getting the data. And that can be a big challenge. As you mentioned with K Health, there was also the need to structure the data as well — but fundamentally it’s taken Israeli population data and is using it in the U.S. You would say that model is going to scale? There are some counter examples, such as Google-owned DeepMind, which has big designs on using AI for healthcare diagnostics and has put a lot of effort into getting access to population-level health data from the NHS in the U.K., when — at the same time — Google has acquired a database of health records from the U.S. Department of Veterans Affairs. So there does seem to be a lot of effort going into trying to get very localized data but it’s challenging. Google perhaps has a head start because it’s Google. So the question then is how do startups get the data they need to address these kinds of opportunities?
If we're just looking at K Health then obviously it's a big challenge, because you do have to get data in some way. But take your example: you have a U.S. database — does it match with a U.K. database? Again, it largely does.
In that case the example is quite specific because the dataset Google has from the department of Veterans Affairs skews heavily male (93.6%). So they really do have almost no female data.
But that’s a bad dataset. That’s not anything else but a bad dataset.
It’s instructive that they’re still using it, though. Maybe that illustrates the challenge of getting access to population-level healthcare data for AI model making.
Maybe it does. But I don’t think this is one of those insurmountable things. Again, what we’ve done is we’ve bought a database that had data on 2.5 million patients, data over 20 years. I think that dataset equates extremely well. We’ve now seen it in U.S. markets for over a year. We’ve had nothing but positive feedback. We beat human doctors every time in tests. And so you look at it and you say they’re just business problems that we have to solve. But what we’re seeing is the consumer market is saying holy shit this is just such a better experience than I’ve ever had before.
So the human body — again — is not that complex. Most of the things that we catch are not that complex. And by the way, we've grown our database — from the 2.5M that we bought we now have 5M. So we now have 2.5M Americans mixing into that database. And the way it diagnoses you is based on your age, your size, whether you smoke and so on — perhaps it has 300,000 people in its database like you, and it benchmarks your symptoms against those people. So I think the smart companies are going to do these things very smartly. But you have to know what you're using as a user as well… If you're using that vs just a basic symptom checker — the latter I don't think is a particularly great new user experience, though some companies are going to be successful doing that. In the end the great dream is how you bring all this together and give the consumer a fundamentally better choice and better information. That's K Health.
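To make that mechanism concrete, here is a minimal illustrative sketch of cohort-style benchmarking: find the patients in a database who resemble the user, then rank conditions by how often they occurred among similar patients with overlapping symptoms. This is only a guess at the shape of such a system; the data model, field names and scoring below are invented, and K Health has not published its actual algorithm.

from collections import Counter

# Hypothetical patient records: demographics, reported symptoms and the
# condition that was ultimately diagnosed. Purely illustrative data.
DATABASE = [
    {"age": 34, "smoker": False, "symptoms": {"fever", "cough"}, "diagnosis": "flu"},
    {"age": 31, "smoker": False, "symptoms": {"fever", "rash"}, "diagnosis": "lyme"},
    # ... millions more records in a real system
]

def similar_cohort(user, records, age_window=5):
    # Keep records whose demographics roughly match the user's, i.e. the
    # "300,000 people in the database like you" from the interview.
    return [r for r in records
            if abs(r["age"] - user["age"]) <= age_window
            and r["smoker"] == user["smoker"]]

def rank_conditions(user, records):
    # Among similar patients, score each past diagnosis by how many of
    # the user's symptoms the matching record shares.
    scores = Counter()
    for r in similar_cohort(user, records):
        overlap = len(r["symptoms"] & user["symptoms"])
        if overlap:
            scores[r["diagnosis"]] += overlap
    return scores.most_common()

user = {"age": 33, "smoker": False, "symptoms": {"fever", "cough"}}
print(rank_conditions(user, DATABASE))  # prints [('flu', 2), ('lyme', 1)]

A production system would use clinically validated models rather than raw co-occurrence counts; the point is only that a large database of similar patients is the asset the prediction leans on.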
Why couldn’t Google do the same thing? I don’t know. They just don’t think about it.
That’s a really interesting question — because Google is making big moves in health. They’re consolidating all their projects under one Google Health unit. Amazon is also increasingly interested in the space. What do you make of this big tech interest? Is that a threat or an opportunity for health startups?
Well, if you think of it as an investor, they're all obviously buyers of the companies you're going to build. So that's a long-term opportunity to sell your business. In the shorter term, does it make sense to invest in companies if all of a sudden the mammoth big players are there? By the way, that has been true for many, many other sectors as well. When I first invested in Skype in the early days people would say the telecom guys are going to crush you. Well, they didn't. But all of a sudden telecom — communication — became the currency that the Internet guys wanted; that's why eBay ultimately bought us and why they all had their own messenger.
What the future’s made of we don’t know, but what we do know is that consumers want just the best experience and sometimes the best experience comes from people who are very innovative and very hungry as opposed to people who are working in very large companies. Venture capitalists are always investing in companies that somehow are competing one way or another with Amazon, Facebook, Google and all the big guys. It’s just that when you focus your energy on one thing you tend to do it better than if you don’t. And I’m not suggesting that those companies are not investing a lot of money. They are. And that’s because they realize that one of the currencies of the future is the ability to provide healthcare information, treatment and things like that.
Look at a large retail store like Wal-mart in America. Wal-mart largely serves a population that makes $50k or less, the lower income category in North America. But what are they doing to make you more loyal to them? They're now starting to build doctor's offices into every Wal-mart. Why would they do that? It's because they know that if you make $50k or less there's a high chance you don't have insurance and a high chance that you can't afford to go see a doctor. So they're going to use that to say, "Hey, if you shop with us, instead of paying $150 for a doctor, it'll be cheaper." And we're beginning to see so many examples like this — where all these companies are saying healthcare is actually the biggest and most important thing that somebody thinks about every day, and if we want to make them loyal to our brand we need to offer something in the healthcare space. So the conclusion, and why we're so excited, is that we're seeing it happen in real life.
Wal-mart does that — so when Amazon starts buying an online pharmacy I get why they’re doing that. They want to connect with you on an emotional level which is when you’re not feeling well.
So no, I don’t think we’re particularly worried about them. You have to respect they’re large companies, they have a lot of money and things like that. But that’s always been the case. We think that some of these will likely be bought by those players, some of those will likely build their own businesses. At the end of the day it’s who’s going to get that user experience right.
Google of course would like us all to believe that because they’re the search engine of the world they have the first rights to become the health search engine of the world. I tend to think that’s not true. Actually if you look at the history of Google they were the search engine of the world until they forgot about Amazon. And nowadays if you want to buy anything physical where do you search first? You don’t search on Google anymore — you search on Amazon.
But the space is big and there’s a lot of great entrepreneurs and Europe has a lot to offer I think in terms of taking our history of socialized medicine and saying how can tech power that to make it a better experience?
So for entrepreneurs who are just thinking about this space — what should they be focusing on in terms of things to fix?
Right now the hottest areas are the three that I mentioned — because those are the ones we've put money into, and we've put money in because we think those are the hottest areas. But I'd just say: anything you feel deep conviction about, or where you've had some basic experience with the issue and the problem.
I simply do not think that clinicians can make this change — in any sector. If you look at those companies I mentioned, none of the founders are clinicians in any way, shape or form. And that's why they're successful. Now, I'm not suggesting that you don't have to have doctors on your staff — for sure you do. At K Health, we have 30 doctors…. What we're trying to do is change the experience. The founder, for instance, was a founder of a company called Vroom that buys and sells cars online in the States. When he started he didn't know a whole lot about healthcare, but he said to himself: what I know is I don't like the user experience. It's a horrible user experience. I don't like going to the doctor. I can change that.
So I would say if you're heading into that space your first preoccupation is: how am I going to change the current user experience in a way that's meaningful? Because that's the only thing that people care about.
How is it possible that two guys could come up with Flo? They were just good product people.
For me, that’s the driving factor — if you’re going to go into this, go into it saying you’re there to break an experience and make it just a way better place to be.
On the size of the opportunity I have seen some suggestions that health is overheated in investment terms. But perhaps that’s more true in the U.S. than Europe?
Any time an investor community gets hold of a theme and makes it the theme of the month or the year — like fintech was for ten years — it becomes overfunded, because everybody ploughs into it. I could say yes to that statement, sure. Lots of players, lots of actors. Money's pouring in because people believe the outcome could be big. But I don't think it's overheated. I think we've only scratched the surface with what's been done so far.
Some of the companies in the healthcare space that are either thinking of going public or are going public are companies that are pretty basic companies around connecting you with doctors online, etc. So I think that the innovation is really, really coming. As AI becomes real and we’re able to manage the data in an effective way… But again you’ve got to get the user experience right.
Flo, in my experience — why it's better than anything else — one, it's just a great user experience. And then they have a forum on their app, and the forum is anonymized. And this is curious, right: I think they anonymized it without knowing what it would do. What it did was allow women to talk about stuff that perhaps they were not comfortable talking about if people knew who they were. The number one issue? Abortion.
There’s a stigma out there around abortion and so by anonymizing the chat forum all of a sudden it created this opportunity for people to just exchange an experience. So that’s why I say the user experience for me is just at the core of that revolution that’s coming.
Why should it be such a horrific experience to talk about that subject? Why should women be put in that position? That's why I think user experience is going to be so key to this.
So that's why we're excited. And of course the gamut is large. Think about the examples I gave — you can think of dietary examples, men's health examples. When men turn 50, things start happening. Little things. But there are at least 15 of those things that are 100% predictable… I just turned 50 and, given there's so much disinformation online, I don't know what's true. So I think again there's a fantastic opportunity for somebody to build companies around that theme — again, probably male and female separately.
Menopause would be another obvious one.
Exactly… You don't know who you can talk to in many cases. So that's another opportunity. And wow there are so many things out there. And when I go online today I'm generally not sure if I can believe what I read unless it's from a source that I can trust.
For 50 year old men erectile dysfunction is another taboo — a bit like the abortion taboo is for women. Men don’t even talk to their male friends about it… So if there was a place where you could go and learn about it I think there’s a big opportunity. I don’t think erectile dysfunction is a business, but I think how men age is one.
So it’s opportunities for communities around particular health/well-being issues.
Exactly. Because we’re looking for truths when we’re going through that experience ourselves.
The addressable market is massive. There are men turning 50 every year and they're probably all pretty interested to find out the ten or 15 things that could go wrong for them. There are a lot of opportunities. It's so broad. The challenge is you have to think about building it for people who are 50; you're not building it for an 18-year-old. So the user experience, again, probably has to be somewhat different. And healthcare goes all the way to the seniors. What are you looking for when you're 75? So it treats everyone from 18 all the way up, across a broad-based spectrum of things. It's one of our major themes for the next five to ten years.
And so the idea of it being overheated in investment terms is a bit too abstract because there are specific areas that are very underinvested — like femtech. So it’s a case of spotting the particular bits of the healthcare opportunity that need more attention.
Yes. You’ve described it perfectly. In our more simpleton terms, we look at it and say if I look at the previous hot industry — fintech — you would end up with companies doing credit cards, companies doing bank accounts, companies doing lending, companies doing recovery — so many pieces of the value chain. In this case the value chain is humans.
We are even more complex than financial services have ever been, so I think the opportunities are even broader to break it down and build businesses that are going to satisfy certain sexes, maybe certain demographics, certain ages and all these kind of things that are out there. We are just so different.

These leaders are coming to Robotics+AI on March 3. Why aren’t you?

TechCrunch Sessions: Robotics+AI brings together a wide group of the ecosystem's leading minds on March 3 at UC Berkeley. More than 1,000 attendees are expected from all facets of the robotics and artificial intelligence space – investors, students, engineers, C-levels, technologists, and researchers. We've compiled a small list of highlights of attendee companies and job titles at this year's event below.
ATTENDEE HIGHLIGHTS
ABB Technology Ventures, Vice President
Amazon, Head, re:MARS Product Marketing
Amazon Web Services, Principal Business Development Manager
Autodesk, Director, Robotics
AWS, Principal Technologist
BMW, R&D Engineer
Bosch Venture Capital, Investment Principal
Capital One, President of Critical Stack
Ceres Robotics Inc, CEO
Deloitte, Managing Director
Facebook AI Research, Research Lead
Ford X, Strategy & Operations
Goldman Sachs, Technology Investor
Google, Vice President
Google X, Director, Robotics
Greylock, EIR
Hasbro, Principal Engineer
Honda R&D Americas Inc., Data Engineer
HSBC, Global Relationship Manager
Huawei Technologies, Principal System Architect of Corporate Technology Strategy
Hyundai CRADLE, Industrial Design
Intel, Hardware Engineer
Intuit, Inc., Software Engineer
iRobot, CTO
John Deere, Director, Precision Ag Marketing and Innovation
Kaiser Permanente, Director
Kawasaki Heavy Industries (USA), Inc., Technical Director
LG Electronics, Head of Engineering
Lockheed Martin, Engineering Manager
Moody’s Analytics, Managing Director
Morgan Stanley, Executive Director
NASA, Senior Systems Architect
Nestle, Innovation Manager
NVIDIA, Senior Systems Software Engineer
Qualcomm Ventures, Investment Director
Samsung, Director, Open Innovations & Tech Partnership
Samsung Ventures, Managing Director
Shasta Ventures, Investor
Softbank Ventures Asia, Investor
Surgical Theater, SVP Engineering
Takenaka Corporation, Senior Manager, Technology Planning
Techstars, Managing Director
Tesla, Sr. Machine Learning Engineer
Toyota Research Institute, Manager, Prototyping & Robotics Operations
Uber, Engineering Manager
UPS, Director of Research and Development
STUDENTS & RESEARCHERS FROM:
Columbia University
Georgia Institute of Technology
Harvard University
Northwestern University
Santa Clara University
Stanford University
Texas A&M University
UC Berkeley
UC Davis
UCLA
USC
Yale University
Did you know that TechCrunch provides a white-glove networking app, CrunchMatch, at all our events? You can match with people who meet your specific requirements, message them, and connect right at the conference. How cool is that!?
Want to get in on networking with this caliber of people? Book your $345 General Admission ticket today and save $50 before prices go up at the door. But no one likes going to events alone. Why not bring the whole team? Groups of four or more save 15% on tickets when you book here.


Here’s our pick of the top six startups from Pause Fest

We've been dropping into the Australian startup scene more and more over the years as the ecosystem has been building at an ever-faster pace, most notably at our own TechCrunch Battlefield Australia in 2017. Further evidence that the scene is growing has come recently in the shape of the Pause Fest conference in Melbourne. The event has gone from strength to strength in recent years and is fast becoming a must-attend for Aussie startups aiming for both national and international attention.
I was able to drop in ‘virtually’ to interview a number of those showcased in the Startup Pitch Competition, so here’s a run-down of some of the stand-out companies.
Medinet Australia
Medinet Australia is a health tech startup aiming to make healthcare more convenient and accessible to Australians by allowing doctors to do consultations with patients via an app. Somewhat similar to apps like Babylon Health, Medinet's telehealth app allows patients to obtain clinical advice from a GP remotely; access prescriptions and have medications delivered; access pathology results; directly email their medical certificate to their employer; and access specialist referrals along with upfront information about specialists such as their fees, waitlist, and patient experience. They've raised $3M in angel financing and are looking for institutional funding in due course. Given Australia's vast distances, Medinet is well-placed to capitalize on the shift of the population towards much more convenient telehealth apps. (1st Place Winner)
Everty
Everty allows companies to easily manage, monitor and monetize electric vehicle charging stations. But this isn't about infrastructure. Instead, Everty links workplaces and accounting systems up to the EV charging network, making it more like a "Salesforce for EV charging". It's available for both commercial and home charging tracking. It has also raised an angel round and is poised to raise further funding. (2nd Place Winner)
AI On Spectrum
It's a sad fact that people with autism statistically tend to die younger, and, unfortunately, the suicide rate is much higher among autistic people. AI on Spectrum takes an accessible approach to helping autistic kids and their families find supportive environments and feel empowered. The game encourages autistic kids to explore their emotional side and arms them with coping strategies for when times get tough, applying AI and machine learning in the process to assist the user. (3rd Place Winner)
HiveKeeper
Professional beekeepers need a fast, reliable, easy-to-use record keeper for their bees, and this startup provides just that. But it's also developing software-plus-sensor technology to give beekeepers more accurate analytics, allowing them to get early warning about issues and problems. Their technology could even, in the future, be used to warn of approaching bushfires by sensing the changed behavior of the bees. (Hacker Exchange Additional Winner)
Relectrify
Rechargeable batteries for things like cars can be re-used, but the key to employing them is being able to extend their lives. Relectrify says its battery control software can unlock the full performance of every cell, increasing battery cycle life. It also reduces storage costs by providing AC output without needing a battery inverter, for both new and second-life batteries. Its advanced battery management system combines power and electronics monitoring to rapidly check which cells are stronger and which are weaker, making it possible to get as much as 30% more battery life, as well as to deploy "second-life storage". So far, the company has projects with Nissan and American Electric Power and has raised a $4.5M Series A. (SingularityU Additional Winner)
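As a rough illustration of the cell-level idea (drawing more from strong cells and less from weak ones, instead of letting the weakest cell cap the whole pack), here is a hypothetical sketch. The proportional-sharing rule, variable names and capacity figures are all invented; Relectrify's actual control system is proprietary and certainly more involved.

# Hypothetical cell-aware load sharing: each cell carries load in
# proportion to its measured remaining capacity, so one weak cell no
# longer limits the whole pack. All numbers are illustrative only.
def duty_cycles(cell_capacities_ah):
    total = sum(cell_capacities_ah)
    # Fraction of the pack's load assigned to each cell.
    return [c / total for c in cell_capacities_ah]

pack = [2.8, 2.9, 2.1, 2.7]  # measured amp-hour capacity per cell
for i, share in enumerate(duty_cycles(pack)):
    print(f"cell {i}: carries {share:.1%} of the load")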
Gabriel
Sadly, seniors and patients can contract bedsores if left in one position for too long. People can even die from bedsores. Furthermore, hospitals can end up in litigation over the issue. What's needed is a technology that can prevent this, as well as predict which parts of a patient's body might be worst affected. That's what Gabriel has come up with: using multi-modal technology to prevent and detect both falls and bedsores. Its passive monitoring technology, for use in the home or in hospitals, consists of a resistive sheet with sensors connecting to a system that can understand the pressure on a bed. It has FDA approval, is patent-pending and is already working in some Hawaiian hospitals. It has so far raised $2M in angel funding and is now raising more.
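To show the kind of processing a pressure-sensing sheet implies, here is a hypothetical sketch: the sheet yields a grid of pressure readings, and any cell that stays loaded beyond a time threshold raises an alert. The thresholds, sampling rate and data structure are invented for illustration; Gabriel's actual system is not public.

# Hypothetical pressure-map monitoring: flag grid cells on the sensor
# sheet that stay under sustained pressure long enough to pose a
# bedsore risk. Thresholds and units are invented.
PRESSURE_LIMIT = 60       # arbitrary sensor units
MAX_MINUTES_LOADED = 120  # sustained load before an alert is raised

def update_risk(minutes_loaded, grid):
    # grid: 2-D list of current pressure readings, one per sheet cell.
    alerts = []
    for i, row in enumerate(grid):
        for j, pressure in enumerate(row):
            if pressure > PRESSURE_LIMIT:
                minutes_loaded[(i, j)] = minutes_loaded.get((i, j), 0) + 1
            else:
                minutes_loaded.pop((i, j), None)  # pressure relieved
            if minutes_loaded.get((i, j), 0) >= MAX_MINUTES_LOADED:
                alerts.append((i, j))
    return alerts

# Simulate one reading per minute for two hours; in a real system the
# readings would stream from the resistive sheet's hardware.
state = {}
sample = [[10, 75], [80, 5]]
for _ in range(120):
    hot_spots = update_risk(state, sample)
print("sustained pressure at cells:", hot_spots)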
Here’s a taste of Pause Fest:

Network with CrunchMatch at TC Sessions: Mobility 2020

Got your sights set on attending TC Sessions: Mobility 2020 on May 14 in San Jose? Spend the day with 1,000 or more like-minded founders, makers and leaders across the startup ecosystem. It’s a day-long deep dive dedicated to current and evolving mobility and transportation tech. Think autonomous vehicles, micro-mobility, AI-based mobility applications, battery tech and so much more.
Hold up. Don’t have a ticket yet? Buy your early bird pass right here and save $100.
In addition to taking in all the great speakers (we add more every week), presentations, workshops and demos, you'll want to meet people and build the relationships that foster startup success, amirite? Get ready for a radical networking experience with CrunchMatch. Our free business-matching platform makes finding and connecting with the right people easier than ever. It's both curated and automated, a potent combination that makes networking simple and productive. Hey needle, kiss that haystack goodbye.
Here’s how it works.
When we launch the CrunchMatch platform, we'll email all registered attendees. Simply create a profile, identify your role and list your specific criteria, goals and interests — whomever you want to meet, be it investors, founders or engineers specializing in autonomous cars or ride-hailing apps. The CrunchMatch algorithm then kicks into gear, suggests matches and, subject to your approval, proposes meeting times and sends meeting requests.
CrunchMatch benefits everyone — founders looking for developers, investors in search of hot prospects, founders looking for marketing help — the list is endless, and the tool is free.
You have one programming-packed day to soak up everything this conference offers. Start strategizing now to make the most of your valuable time. CrunchMatch will help you cut through the crowd and network efficiently so that you have time to learn about the latest tech innovations and still connect with people who can help you reach the next level.
TC Sessions: Mobility 2020 takes place on May 14 in San Jose, Calif. Join, meet and learn from the industry’s mightiest minds, makers, innovators and investors. And let CrunchMatch make your time there much easier and more productive. Buy your early bird ticket, and we’ll see you in San Jose!
Is your company interested in sponsoring or exhibiting at TC Sessions: Mobility 2020? Contact our sponsorship sales team by filling out this form.


Snap accelerator names its latest cohort

Yellow, the accelerator program launched by Snap in 2018, has selected ten companies to join its latest cohort.
The new batch of startups — coming from across the U.S. and from international cities like London, Mexico City, Seoul and Vilnius — includes professional social networks for black professionals and blue-collar workers, fashion labels, educational tools in augmented reality, kids' entertainment, and an interactive entertainment production company.
The list of new companies includes:
Brightly — an Oakland, Calif.-based media company angling to be the conscious consumer’s answer to Refinery29.
Charli Cohen — a London-based fashion and lifestyle brand.
Hardworkers — a Cambridge, Mass.-based professional digital community built for blue-collar workers.
Mogul Millennial — this Dallas-based company is a digital media platform for black entrepreneurs and corporate leaders.
Nuggetverse — Los Angeles-based Nuggetverse is creating a children’s media business based on its marquee character, Tubby Nugget.
SketchAR — this Lithuanian company is developing an AI-based mobile app for teaching drawing using augmented reality.
Stipop — a Seoul-based sticker API developer with a library of over 100,000 stickers created by 5,000 artists.
TRASH — using this machine learning-based video editing toolkit, users can quickly create and edit high-quality, short-form video. The company is backed by none other than the National Science Foundation and based in Los Angeles.
Veam — another Seoul-based social networking company, Veam uses Airdrop as a way to create persistent chats with nearby users as a geolocated social network.
Wabisabi Design, Inc. — hailing from Mexico City, this startup makes mini games in augmented reality for brands and advertisers.
The latest cohort from Snap’s Yellow accelerator
Since launching the platform in 2018, startups from the Snap accelerator have gone on to acquisition (like Stop, Breathe, and Think, which was bought by Meredith Corp.) and to raise bigger rounds of funding (like the voiceover video production toolkit, MuzeTV, and the animation studio Toonstar).
Every company in the Yellow portfolio will receive $150,000, mentorship from industry veterans in and out of Snap, creative office space in Los Angeles, and commercial support and partnerships — including Snapchat distribution.
“Building from the momentum of our first two Yellow programs, this new class approaches mobile creativity through the diverse lenses of augmented reality, platforms, commerce and media, yet each company has a clear vision to bring their products to life,” said Mike Su, Director of Yellow. “This class shows us that there’s no shortage of innovation at the intersection of creativity and technology, and we’re excited to be part of each company’s journey.”

VCs to antitrust officials: We’d rather take our chances than see tech regulated

Last week at Stanford, antitrust officials from the U.S. Department of Justice organized a day-long conference that engaged numerous venture capitalists in conversations about big tech. The DOJ wanted to hear from VCs about whether they believe there’s still an opportunity for startups to flourish alongside the likes of Facebook and Google and whether they can anticipate what — if anything — might disrupt the inexorable growth of these giants.
Most of the invited panelists acknowledged there is a problem, but they also said fairly uniformly that they doubted more regulation was the solution.
Some of the speakers dismissed outright the idea that today’s tech incumbents can’t be outmaneuvered. Sequoia’s Michael Moritz talked about various companies that ruled the world across different decades and later receded into the background, suggesting that we merely need to wait and see which startups will eventually displace today’s giants.
He added that if there’s a real threat lurking anywhere, it isn’t in an overly powerful Google, but rather American high schools that are, according to Moritz, a poor match for their Chinese counterparts. “We’re killing ourselves; we’re killing the future technologists… we’re slowly killing the potential for home-brewed invention.”
Renowned angel investor Ram Shriram similarly downplayed the DOJ's concerns, saying specifically that he didn't think "search" as a category could never again be disrupted, or that it benefits from network effects. He observed that Google itself disrupted numerous search companies when it emerged on the scene in 1998.
Somewhat cynically, we would note that those companies — Lycos, Yahoo, Excite — had a roughly four-year lead over Google at the time, and Google has been massively dominant for nearly all of those 22 years (because of, yes, its network effects).

Thomas Kurian on his first year as Google Cloud CEO

“Yes.”
That was Google Cloud CEO Thomas Kurian’s simple answer when I asked if he thought he’d achieved what he set out to do in his first year.
A year ago, he took the helm of Google’s cloud operations — which includes G Suite — and set about giving the organization a sharpened focus by expanding on a strategy his predecessor Diane Greene first set during her tenure.
It’s no secret that Kurian, with his background at Oracle, immediately put the entire Google Cloud operation on a course to focus on enterprise customers, with an emphasis on a number of key verticals.
So it’s no surprise, then, that the first highlight Kurian cited is that Google Cloud expanded its feature lineup with important capabilities that were previously missing. “When we look at what we’ve done this last year, first is maturing our products,” he said. “We’ve opened up many markets for our products because we’ve matured the core capabilities in the product. We’ve added things like compliance requirements. We’ve added support for many enterprise things like SAP and VMware and Oracle and a number of enterprise solutions.” Thanks to this, he stressed, analyst firms like Gartner and Forrester now rank Google Cloud “neck-and-neck with the other two players that everybody compares us to.”
If Google Cloud's previous record made anything clear, though, it's that technical know-how and great features aren't enough. One of the first actions Kurian took was to expand the company's sales team into an organization that looked a bit more like that of a traditional enterprise company. "We were able to specialize our sales teams by industry — added talent into the sales organization and scaled up the sales force very, very significantly — and I think you're starting to see those results. Not only did we increase the number of people, but our productivity improved as well as the sales organization, so all of that was good."

He also cited Google's partner business as a reason for its overall growth. Partner-influenced revenue increased by about 200% in 2019, and its partners brought in 13 times more new customers in 2019 than in the previous year.

Announcing the final agenda for Robotics + AI — March 3 at UC Berkeley

TechCrunch is returning to U.C. Berkeley on March 3 to bring together some of the most influential minds in robotics and artificial intelligence. Each year we strive to bring together a cross-section of big companies and exciting new startups, along with top researchers, VCs and thinkers.
In addition to a main stage that includes the likes of Amazon's Tye Brady, U.C. Berkeley's Stuart Russell, Anca Dragan of Waymo, Claire Delaunay of NVIDIA, James Kuffner of Toyota's TRI-AD, and a surprise interview with Disney Imagineers, we'll also be offering a more intimate Q&A stage featuring speakers from SoftBank Robotics, Samsung, Sony's Innovation Fund, Qualcomm, NVIDIA and more.
Alongside a selection of handpicked demos, we’ll also be showcasing the winners from our first-ever pitch-off competition for early-stage robotics companies. You won’t get a better look at exciting new robotics technologies than that. Tickets for the event are still available. We’ll see you in a couple of weeks at Zellerbach Hall.
Agenda
8:30 AM – 4:00 PM
Registration Open Hours
General Attendees can pick up their badges starting at 8:30 am at Lower Sproul Plaza located in front of Zellerbach Hall. We close registration at 4:00 pm.
10:00 AM – 10:05 AM
Welcome and Introduction from Matthew Panzarino (TechCrunch) and Randy Katz (UC Berkeley)
10:05 AM – 10:25 AM
Saving Humanity from AI with Stuart Russell (UC Berkeley)
The UC Berkeley professor and AI authority argues in his acclaimed new book, “Human Compatible,” that AI will doom humanity unless technologists fundamentally reform how they build AI algorithms.
10:25 AM – 10:45 AM
Engineering for the Red Planet with Lucy Condakchian (Maxar Technologies)
Maxar Technologies has been involved with U.S. space efforts for decades, and is about to send its sixth (!) robotic arm to Mars aboard NASA’s Mars 2020 rover. Lucy Condakchian is general manager of robotics at Maxar and will speak to the difficulty and exhilaration of designing robotics for use in the harsh environments of space and other planets.
10:45 AM – 11:05 AM
Automating Amazon with Tye Brady (Amazon Robotics)
Amazon Robotics’ chief technology officer will discuss how the company is using the latest in robotics and AI to optimize its massive logistics. He’ll also discuss the future of warehouse automation and how humans and robots share a work space. 
11:05 AM – 11:15 AM
Live Demo from the Stanford Robotics Club 
11:30 AM – 12:00 PM
Book signing with Stuart Russell (UC Berkeley)
Join one of the foremost experts in artificial intelligence as he signs copies of his acclaimed new book, Human Compatible.
11:35 AM – 12:05 PM
Building the Robots that Build with Daniel Blank (Toggle Industries), Tessa Lau (Dusty Robotics), Noah Ready-Campbell (Built Robotics) and Brian Ringley (Boston Dynamics)
Can robots help us build structures faster, smarter and cheaper? Built Robotics makes a self-driving excavator. Toggle is developing a new approach to fabricating rebar for reinforced concrete, Dusty builds robot-powered tools, and longtime robotics pioneer Boston Dynamics has recently joined the construction space. We'll talk with the founders and experts from these companies to learn how and when robots will become a part of the construction crew.
12:15 PM – 1:00 PM
Q&A: Corporate VC, Partnering and Acquisitions with Kass Dawson (SoftBank Robotics America), Carlos Kokron (Qualcomm Ventures), and Gen Tsuchikawa (Sony Innovation Fund)
Join this interactive Q&A session on the breakout stage with three of the top minds in corporate VC.
1:00 PM – 1:25 PM
Pitch-off 
Select early-stage companies, hand-picked by TechCrunch editors, will take the stage and have five minutes to present their wares.
1:15 PM – 2:00 PM

Q&A: Founding Robotics Companies with Sebastien Boyer (FarmWise) and Noah Ready-Campbell (Built Robotics)
Your chance to ask questions of some of the most successful robotics founders on our stage.

1:25 PM – 1:50 PM

Investing in Robotics and AI: Lessons from the Industry’s VCs with Dror Berman (Innovation Endeavors), Kelly Chen (DCVC) and Eric Migicovsky (Y Combinator)
Leading investors will discuss the rising tide of venture capital funding in robotics and AI. The investors bring a combination of early-stage investing and corporate venture capital expertise, sharing a fondness for the wild world of robotics and AI investing.
1:50 PM – 2:15 PM
Facilitating Human-Robot Interaction with Mike Dooley (Labrador Systems) and Clara Vu (Veo Robotics)
As robots become an ever more meaningful part of our lives, interactions with humans are increasingly inevitable. These experts will discuss the broad implications of HRI in the workplace and home.
2:15 PM – 2:40 PM
Toward a Driverless Future with Anca Dragan (UC Berkeley/Waymo), Jinnah Hosein (Aurora) and Jur van den Berg (Ike)
Autonomous driving is set to be one of the biggest categories for robotics and AI. But there are plenty of roadblocks standing in its way. Experts will discuss how we get there from here. 
2:15 PM – 3:00 PM
Q&A: Investing in Robotics Startups with Rob Coneybeer (Shasta Ventures), Jocelyn Goldfein (Zetta Venture Partners) and Aaron Jacobson (New Enterprise Associates)
Join this interactive Q&A session on the breakout stage with some of the greatest investors in robotics and AI
2:40 PM – 3:10 PM
Disney Robotics
Imagineers from Disney will present state-of-the-art robotics built to populate its theme parks.
3:10 PM – 3:35 PM
Bringing Robots to Life with Max Bajracharya and James Kuffner (Toyota Research Institute Advanced Development)
This summer’s Tokyo Olympics will be a huge proving ground for Toyota’s TRI-AD. Executive James Kuffner and Max Bajracharya will join us to discuss the department’s plans for assistive robots and self-driving cars.
3:15 PM – 4:00 PM
Q&A: Building Robotics Platforms with Claire Delaunay (NVIDIA) and Steve Macenski (Samsung Research America)
Join this interactive Q&A session on the breakout stage with some of the greatest engineers in robotics and AI.
3:35 PM – 4:00 PM
The Next Century of Robo-Exoticism with Abigail De Kosnik (UC Berkeley), David Ewing Duncan, Ken Goldberg (UC Berkeley), and Mark Pauline (Survival Research Labs)
In 1920, Karel Capek coined the term "robot" in a play about mechanical workers organizing a rebellion to defeat their human overlords. One hundred years later, in the context of increasing inequality and xenophobia, the panelists will discuss cultural views of robots in the context of "Robo-Exoticism," which exaggerates both negative and positive attributes and reinforces old fears, fantasies and stereotypes.
4:00 PM – 4:10 PM 
Live Demo from Somatic
4:10 PM – 4:35 PM
Opening the Black Box with Explainable AI with Trevor Darrell (UC Berkeley), Krishna Gade (Fiddler Labs) and Karen Myers (SRI International)
Machine learning and AI models can be found in nearly every aspect of society today, but their inner workings are often as much a mystery to their creators as to those who use them. UC Berkeley’s Trevor Darrell, Krishna Gade of Fiddler Labs and Karen Myers from SRI will discuss what we’re doing about it and what still needs to be done.
4:35 PM – 5:00 PM 
Cultivating Intelligence in Agricultural Robots with Lewis Anderson (Traptic), Sebastien Boyer (FarmWise) and Michael Norcia (Pyka)
The benefits of robotics in agriculture are undeniable, yet they are only getting started. Lewis Anderson (Traptic) and Sebastien Boyer (FarmWise) will compare notes on the rigors of developing industrial-grade robots that pick crops and weed fields, respectively, and Pyka's Michael Norcia will discuss taking flight over those fields with an autonomous crop-spraying drone.
5:00 PM – 5:25 PM
Fostering the Next Generation of Robotics Startups with Claire Delaunay (NVIDIA), Scott Phoenix (Vicarious) and Joshua Wilson (Freedom Robotics) 
Robotics and AI are the future of many or most industries, but the barrier to entry is still difficult for many startups to surmount. Speakers will discuss the challenges of serving robotics startups and companies that require robotics labor, from bootstrapped startups to large-scale enterprises.
5:30 PM – 7:30 PM
Unofficial After Party, (Cash Bar Only) 
Come hang out at the unofficial After Party at Tap Haus, 2518 Durant Ave, Ste C, Berkeley.
Final Tickets Available
We only have so much space in Zellerbach Hall and tickets are selling out fast. Grab your General Admission Ticket right now for $350 and save 50 bucks as prices go up at the door.
Student tickets are just $50 and can be purchased here. Student tickets are limited.
Startup Exhibitor Packages are sold out!


Tortoise co-founder Dmitry Shevelenko is bringing autonomous scooters to TC Sessions: Mobility

TechCrunch Sessions: Mobility is gearing up to be a lit event. The one-day event, taking place May 14 in San Jose, has just added Dmitry Shevelenko, co-founder and president of Tortoise, an automated repositioning startup for micromobility vehicles. Yes, that means we'll be having autonomous scooters rolling around on stage. #2020
Tortoise, which recently received approval to deploy its tech in San Jose, is looking to become an operating system of sorts for micromobility vehicles. Just as Android is the operating system for a range of mobile phones, Tortoise wants to be the operating system for micromobility vehicles.
Given the volume of micromobility operators in the space today, Tortoise aims to make it easier for these companies to more strategically deploy their respective vehicles and reposition them when needed. Using autonomous technology in tandem with remote human intervention, Tortoise's software enables operators to remotely relocate their scooters and bikes to places where riders need them — or where operators need them to be recharged. On an empty sidewalk, Tortoise may employ autonomous technologies, while on a highly trafficked city block it may rely on humans to remotely control the vehicle.
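That description suggests a simple arbitration between autonomy and remote teleoperation, driven by how busy the surroundings are. The sketch below is a guess at that structure, not Tortoise's actual code; the pedestrian-count input and the threshold are invented for illustration.

# Hypothetical mode arbitration for a remotely repositioned scooter:
# drive autonomously on quiet sidewalks, hand control to a remote
# human operator when pedestrian density rises. Threshold is invented.
PEDESTRIAN_LIMIT = 3  # people nearby before a human takes over

def choose_mode(pedestrians_nearby: int) -> str:
    if pedestrians_nearby <= PEDESTRIAN_LIMIT:
        return "autonomous"        # empty sidewalk: software drives
    return "remote_operator"       # busy block: human teleoperates

for count in (0, 2, 7):
    print(count, "->", choose_mode(count))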
Before co-founding Tortoise, Shevelenko served as Uber's director of business development. While at Uber, he helped the company expand into new mobility and led the acquisition of JUMP Bikes. Needless to say, Shevelenko is well placed to talk about the next opportunities in micromobility.
Other speakers at TC Sessions Mobility include Waymo COO Tekedra Mawakana, Uber Director of Policy, Cities & Transportation Shin-pei Tsay and Argo AI co-founder and CEO Bryan Salesky.
Tickets are on sale right now for $250 (early bird status). After April 9, tickets go up so be sure to get yours before that deadline. If you’re a student, tickets cost just $50.
Early-stage startups in the mobility space can book an exhibitor package for $2,000, which includes four tickets and a demo table. Packages allow you to get in front of some of the biggest names in the industry and meet new customers. Book your tickets here.

SentinelOne raises $200M at a $1.1B valuation to expand its AI-based endpoint security platform

As cybercrime continues to evolve and expand, a startup that is building a business focused on endpoint security has raised a big round of funding. SentinelOne — which provides a machine learning-based solution for monitoring and securing laptops, phones, containerised applications and the many other devices and services connected to a network — has picked up $200 million, a Series E round of funding that it says catapults its valuation to $1.1 billion.
The funding is notable not just for its size but for its velocity: it comes just eight months after SentinelOne announced a Series D of $120 million, which at the time valued the company around $500 million. In other words, the company has more than doubled its valuation in less than a year — a sign of the cybersecurity times.
This latest round is being led by Insight Partners, with Tiger Global Management, Qualcomm Ventures LLC, Vista Public Strategies of Vista Equity Partners, Third Point Ventures, and other undisclosed previous investors all participating.
Tomer Weingarten, CEO and co-founder of the company, said in an interview that while this round gives SentinelOne the flexibility to remain in “startup” mode (privately funded) for some time — especially since it came so quickly on the heels of the previous large round — an IPO “would be the next logical step” for the company. “But we’re not in any rush,” he added. “We have one to two years of growth left as a private company.”
While cybercrime is proving to be a very expensive business (or very lucrative, I guess, depending on which side of the equation you sit on), it has also meant that the market for cybersecurity has significantly expanded.
Endpoint security, the area where SentinelOne concentrates its efforts, last year was estimated to be around an $8 billion market, and analysts project that it could be worth as much as $18.4 billion by 2024.
Driving it is the single biggest trend that has changed the world of work in the last decade. Everyone — whether a road warrior or a desk-based administrator or strategist, a contractor or full-time employee, a front-line sales assistant or back-end engineer or executive — is now connected to the company network, often with more than one device. And that’s before you consider the various other “endpoints” that might be connected to a network, including machines, containers and more. The result is a spaghetti of a problem. One survey from LogMeIn, disconcertingly, even found that some 30% of IT managers couldn’t identify just how many endpoints they managed.
“The proliferation of devices and the expanding network are the biggest issues today,” said Weingarten. “The landscape is expanding and it is getting very hard to monitor not just what your network looks like but what your attackers are looking for.”
This is where an AI-based solution like SentinelOne’s comes into play. The company has roots in the Israeli cyberintelligence community but is based out of Mountain View, and its platform is built around the idea of working automatically not just to detect endpoints and their vulnerabilities, but to apply behavioral models and various modes of protection, detection and response in one go — in a product it calls the Singularity Platform, which works across the entire edge of the network.
“We are seeing more automated and real-time attacks that themselves are using more machine learning,” Weingarten said. “That translates to the fact that you need defence that moves in real time with as much automation as possible.”
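SentinelOne hasn’t detailed the models inside the Singularity Platform, so purely as a loose illustration: behavioral endpoint detection of the kind Weingarten describes can be thought of as scoring a stream of process events against behavioral rules (learned in practice, hand-written here). The event fields and weights below are invented:

# Toy stream of endpoint process events; the fields are invented for illustration.
events = [
    {"process": "winword.exe", "action": "spawn", "child": "powershell.exe"},
    {"process": "chrome.exe", "action": "net_connect", "dest": "update.example.com"},
    {"process": "powershell.exe", "action": "file_encrypt", "count": 400},
]

def score_event(event):
    """Crude hand-written rules standing in for a learned behavioral model."""
    score = 0.0
    if event["action"] == "spawn" and event.get("child") == "powershell.exe":
        score += 0.5  # an Office app launching a shell is a classic phishing tell
    if event["action"] == "file_encrypt" and event.get("count", 0) > 100:
        score += 0.9  # mass encryption from one process resembles ransomware
    return score

# Anything over the threshold becomes an alert instead of a ticket for a human.
alerts = [e for e in events if score_event(e) >= 0.5]
print(alerts)  # flags the winword spawn and the bulk encryption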
SentinelOne is by no means the only company working on endpoint protection. Others include Microsoft, CrowdStrike, Kaspersky, McAfee, Symantec and many more.
But nonetheless, its product has seen strong uptake to date. It currently has some 3,500 customers, including three of the biggest companies in the world and “hundreds” of the Global 2,000 enterprises, with what it says has been 113% year-on-year new bookings growth, revenue growth of 104% year-on-year and 150% year-on-year growth in transactions over $2 million. It has 500 employees today and plans to hire up to 700 by the end of this year.
One of the key differentiators is the focus on using AI, and using it at scale to help mitigate an increasingly complex threat landscape, to take endpoint security to the next level.
“Competition in the endpoint market has cleared with a select few exhibiting the necessary vision and technology to flourish in an increasingly volatile threat landscape,” said Teddie Wardi, MD of Insight Partners, in a statement. “As evidenced by our ongoing financial commitment to SentinelOne along with the resources of Insight Onsite, our business strategy and ScaleUp division, we are confident that SentinelOne has an enormous opportunity to be a market leader in the cybersecurity space.”
Weingarten said that SentinelOne “gets approached every year” to be acquired, although he didn’t name any names. Nevertheless, that also points to the bigger consolidation trend that will be interesting to watch as the company grows. SentinelOne has never made an acquisition to date, but it’s hard to ignore that, as the company works to expand its products and features, it might tap into the wider market to bring other kinds of technology into its stack.
“There are definitely a lot of security companies out there,” Weingarten noted. “Those that serve a very specific market are the targets for consolidation.”

Microsoft Dynamics 365 update is focused on harnessing data

Microsoft announced a major update to its Dynamics 365 product line today, one that speaks to the growing amount of data in the enterprise and the challenge of collecting and understanding that data to produce better customer experiences.
This is, in fact, the goal of all vendors in this space, including Salesforce and Adobe, who are also looking to help improve the customer experience. James Philips, who was promoted to president of Microsoft Business Applications just this week, says that Microsoft has been keenly focused on harnessing the growing amount of data and helping customers make use of it inside the applications he is in charge of.
“To be frank every single thing that we’re doing at Microsoft, not just in business applications but across the entire Microsoft Cloud, is on the back of that vision that data is coming out of everything, and that those organizations that can collect that data, harmonize it and reason over it will be in a position to be proactive versus reactive,” Philips told TechCrunch.
New customer engagement tooling
For starters, the company is adding functionality to its customer data platform (CDP), a concept all major vendors (and a growing group of startups) have embraced. It pulls together all of the customer data from various systems into one place, making it easier to understand how the customer interacts with you with the goal of providing better experiences based on this knowledge. Microsoft’s CDP is called Customer Insights.
The company is adding some new connectors to help complete that picture of the customer. “We’re adding new first- and third-party data connections to Customer Insights that allow our customers to understand, for example audience memberships, brand affinities, demographic, psychographic and other characteristics of customers that are stored and then harnessed from Dynamics 365 Customer Insights,” Philips said.
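Microsoft doesn’t document Customer Insights’ matching internals in this announcement, but the basic CDP move, folding records from several systems into one profile per customer, looks roughly like the sketch below. The source systems, field names and the choice of email as the join key are all hypothetical:

from collections import defaultdict

# Hypothetical exports from three systems of record.
crm = [{"email": "ana@example.com", "name": "Ana Ruiz"}]
support = [{"email": "ana@example.com", "open_tickets": 2}]
web = [{"email": "ana@example.com", "last_visit": "2020-02-18"}]

def unify(*sources):
    """Merge records from every source into one profile per email address."""
    profiles = defaultdict(dict)
    for source in sources:
        for record in source:
            profiles[record["email"]].update(record)
    return dict(profiles)

# One profile now carries CRM, support and web signals for the same person.
print(unify(crm, support, web))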
All of this might make you wonder how the company can collect this level of data and maintain GDPR- and CCPA-style compliance. Philips says that the company has been working on this for some time. “We did work at the company level to build a system that allows us and our customers to search for and then delete information about customers in each product group within Microsoft including my organization,” he explained.
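Philips doesn’t describe the mechanics of that search-and-delete system, so purely as a sketch of the general shape: a compliance layer might fan a single erasure request out to every store that holds customer records and keep the results as an audit trail. The store interface here is an assumption:

from typing import Protocol

class CustomerStore(Protocol):
    # Assumed interface; each product group would implement it over its own storage.
    name: str
    def find(self, customer_id: str) -> list: ...
    def delete(self, customer_id: str) -> int: ...

def handle_erasure_request(customer_id, stores):
    """Fan one GDPR/CCPA erasure request out to every store holding records."""
    report = {}
    for store in stores:
        found = store.find(customer_id)
        deleted = store.delete(customer_id) if found else 0
        report[store.name] = deleted
    return report  # kept as an audit trail of what was erased where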
The company has also added new sales forecasting tools and the Dynamics 365 Sales Engagement Center. The first allows companies to tap into all this data to better predict which of the customers sales is engaging with are most likely to convert. The second gives inside sales teams tools like next-best-action recommendations. These are not revolutionary by any means in the CRM space, but they do provide new capabilities for Microsoft customers.
New operations level tooling
The operations side relates to what happens after the sale, when the company begins to collect money and report revenue. To that end, the company is introducing a new product called Dynamics 365 Finance Insights, which you can think of as Customer Insights, except for money.
“This product is designed to help our customers predict and accelerate their cash flow. It’s designed specifically to identify opportunities where to focus your energy, where you may have the best opportunity to either close accounts payables or receivables or the opportunity to understand where you may have cash shortfalls,” Philips said.
Finally, the company is introducing Dynamics 365 Project Operations, which provides a way for project-based businesses like construction, consulting and law to track the needs of the business.
“Those organizations, who are trying to operate in a project-based way now have with Dynamics 365 Project Operations, what we believe is the most widely used project management capability in Microsoft Project being joined now with all of the back-end capabilities for selling, accounting and planning that Dynamic 365 offers, all built on the same Common Data Platform, so that you can marry your front-end operations and operational planning with your back-end resource planning, workforce planning and operational processes,” he explained.
All of these tools are designed to take advantage of the growing amount of data coming into organizations, and provide ways to run businesses in a more automated and intelligent fashion that removes some of the manual steps involved in running a company.
To be clear, Microsoft is not alone in offering this kind of intelligent functionality. It is part of a growing movement to bring intelligence to all aspects of enterprise software, regardless of vendor.

Microsoft gives Salesforce a shove with new Dynamics 365 integrated cloud platform

Europe sets out plan to boost data reuse and regulate “high risk” AIs

European Union lawmakers have set out a first bundle of proposals for a new digital strategy for the bloc, one that’s intended to drive digitalization across all industries and sectors — and enable what Commission president Ursula von der Leyen has described as ‘A Europe fit for the Digital Age‘.
It could also be summed up as a ‘scramble for AI’, with the Commission keen to rub out barriers to the pooling of massive European data sets in order to power a new generation of data-driven services as a strategy to boost regional competitiveness vs China and the U.S.
Pushing for the EU to achieve technological sovereignty is a key plank of von der Leyen’s digital policy plan for the 27-Member State bloc.
Presenting the latest on her digital strategy to press in Brussels today, she said: “We want the digital transformation to power our economy and we want to find European solutions in the digital age.”
The top-line proposals are:
AI
Rules for “high risk” AI systems, such as in health, policing or transport, requiring that such systems be “transparent, traceable and guarantee human oversight”
A requirement that unbiased data is used to train high-risk systems so that they “perform properly, and to ensure respect of fundamental rights, in particular non-discrimination”
Consumer protection rules so authorities can “test and certify” data used by algorithms in a similar way to existing rules that allow for checks to be made on products such as cosmetics, cars or toys
A “broad debate” on the circumstances where the use of remote biometric identification could be justified
A voluntary labelling scheme for lower risk AI applications
The creation of an EU governance structure to ensure a framework for compliance with the rules and avoid fragmentation across the bloc
Data
A regulatory framework covering data governance, access and reuse between businesses, between businesses and government, and within administrations to create incentives for data sharing, which the Commission says will establish “practical, fair and clear rules on data access and use, which comply with European values and rights such as personal data protection, consumer protection and competition rules” 
A push to make public sector data more widely available by opening up “high-value datasets” to enable their reuse to foster innovation
Support for cloud infrastructure platforms and systems to support the data reuse goals. The Commission says it will contribute to investments in European High Impact projects on European data spaces and trustworthy and energy efficient cloud infrastructures
Sectoral specific actions to build European data spaces that focus on specific areas such as industrial manufacturing, the green deal, mobility or health
The full data strategy proposal can be found here.
While the Commission’s white paper on AI “excellence and trust” is here.
Next steps will see the Commission taking feedback on the plan — as it kicks off public consultation on both proposals.
A final draft is slated for the end of the year, after which the various EU institutions will have their chance to chip into (or chip away at) the plan. So how much policy survives for the long haul remains to be seen.
Tech for good
At a press conference following von der Leyen’s statement Margrethe Vestager, the Commission EVP who heads up digital policy, and Thierry Breton, commissioner for the internal market, went into some of the detail around the Commission’s grand plan for “shaping Europe’s digital future”.
The digital policy package is meant to define how we shape Europe’s digital future “in a way that serves us all”, said Vestager.
The strategy aims to unlock access to “more data and good quality data” to fuel innovation and underpin better public services, she added.
The Commission’s digital EVP Margrethe Vestager discussing the AI whitepaper
Collectively, the package is about embracing the possibilities AI creates while managing the risks, she also said, adding: “The point obviously is to create trust, rather than fear.”
She noted that the two policy pieces being unveiled by the Commission today, on AI and data, form part of a more wide-ranging digital and industrial strategy whole with additional proposals still to be set out.
“The picture that will come when we have assembled the puzzle should illustrate three objectives,” she said. “First, that technology should work for people and not the other way round; it is first and foremost about purpose. The development, the deployment, the uptake of technology must work in the same direction to make a real positive difference in our daily lives.
“Second, that we want a fair and competitive economy — a full Single Market where companies of all sizes can compete on equal terms, where the road from garage to scale-up is as short as possible. But it also means an economy where the market power held by a few incumbents cannot be used to block competition. It also means an economy where consumers can take it for granted that their rights are being respected and profits are being taxed where they are made.”
Thirdly, she said the Commission plan would support “an open, democratic and sustainable society”.
“This means a society where citizens can control the data that they provide, where digital platforms are accountable for the contents that they feature… This is a fundamental thing — that while we use new digital tools, use AI as a tool, that we build a society based on our fundamental rights,” she added, trailing a forthcoming democracy action plan.
Digital technologies must also actively enable the green transition, said Vestager — pointing to the Commission’s pledge to achieve carbon neutrality by 2050. Digital, satellite, GPS and sensor data would be crucial to this goal, she suggested.
“More than ever a green transition and digital transition goes hand in hand.”
On the data package Breton said the Commission will launch a European and industrial cloud platform alliance to drive interest in building the next gen platforms he said would be needed to enable massive big data sharing across the EU — tapping into 5G and edge computing.
“We want to mobilize up to €2BN in order to create and mobilize this alliance,” he said. “In order to run this data you need to have specific platforms… Most of this data will be created locally and processed locally — thanks to 5G critical network deployments but also locally to edge devices. By 2030 we expect on the planet to have 500BN connected devices… and of course all the devices will exchange information extremely quickly. And here of course we need to have specific mini cloud or edge devices to store this data and to interact locally with the AI applications embedded on top of this.
“And believe me the requirement for these platforms are not at all the requirements that you see on the personal b2c platform… And then we need of course security and cyber security everywhere. You need of course latencies. You need to react in terms of millisecond — not tenths of a second. And that’s a totally different infrastructure.”
“We have everything in Europe to win this battle,” he added. “Because no one has expertise of this battle and the foundation — industrial base — than us. And that’s why we say that maybe the winner of tomorrow will not be the winner of today or yesterday.”
Trustworthy artificial intelligence
On AI Vestager said the major point of the plan is “to build trust” — by using a dual push to create what she called “an ecosystem of excellence” and another focused on trust.
The first piece includes a push by the Commission to stimulate funding, including in R&D and support for research such as by bolstering skills. “We need a lot of people to be able to work with AI,” she noted, saying it would be essential for small and medium sized businesses to be “invited in”.
On trust the plan aims to use risk to determine how much regulation is involved, with the most stringent rules being placed on what it dubs “high risk” AI systems. “That could be when AI tackles fundamental values, it could be life or death situation, any situation that could cause material or immaterial harm or expose us to discrimination,” said Vestager.
To scope this the Commission approach will focus on sectors where such risks might apply — such as energy and recruitment.
If an AI product or service is identified as posing a risk then the proposal is for an enforcement mechanism to test that the product is safe before it is put into use. These proposed “conformity assessments” for high risk AI systems include a number of obligations Vestager said are based on suggestions by the EU’s High Level Expert Group on AI — which put out a slate of AI policy recommendations last year.
The four requirements attached to this bit of the proposals are: 1) that AI systems should be trained using data that “respects European values and rules” and that a record of such data is kept; 2) that an AI system should provide “clear information to users about its purpose, its capabilities but also its limits” and that it be clear to users when they are interacting with an AI rather than a human; 3) AI systems must be “technically robust and accurate in order to be trustworthy”; and 4) they should always ensure “an appropriate level of human involvement and oversight”.
Obviously there are big questions about how such broad-brush requirements will be measured and stood up (as well as actively enforced) in practice.
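Take requirement 1, keeping a record of the training data, as one example. The white paper doesn’t say how such records would be kept, but a minimum plausible interpretation is a provenance log: for each dataset a model was trained on, record exactly which bytes were used and where they came from. A sketch, with invented fields and file names:

import hashlib
import json
from datetime import datetime, timezone

def log_training_dataset(path, source, logfile="provenance.jsonl"):
    """Record what a model was trained on: a content hash plus origin metadata."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    entry = {
        "dataset": path,
        "source": source,          # where the data came from
        "sha256": digest,          # pins down exactly which bytes were used
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(logfile, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry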
If an AI product or service is not identified as high risk Vestager noted there would still be regulatory requirements in play — such as the need for developers to comply with existing EU data protection rules.
In her press statement, Commission president von der Leyen highlighted a number of examples of how AI might power a range of benefits for society — from “better and earlier” diagnosis of diseases like cancer to helping with her parallel push for the bloc to be carbon neutral by 2050, such as by enabling precision farming and smart heating — emphasizing that such applications rely on access to big data.
“Artificial intelligence is about big data,” she said. “Data, data and again data. And we all know that the more data we have the smarter our algorithms. This is a very simple equation. Therefore it is so important to have access to data that are out there. This is why we want to give our businesses but also the researchers and the public services better access to data.”
“The majority of data we collect today are never ever used even once. And this is not at all sustainable,” she added. “In these data we collect that are out there lies an enormous amount of precious ideas, potential innovation, untapped potential we have to unleash — and therefore we follow the principle that in Europe we have to offer data spaces where you can not only store your data but also share with others. And therefore we want to create European data spaces where businesses, governments and researchers can not only store their data but also have access to other data they need for their innovation.”
She too impressed the need for AI regulation, including to guard against the risk of biased algorithms — saying “we want citizens to trust the new technology”. “We want the application of these new technologies to deserve the trust of our citizens. This is why we are promoting a responsible, human centric approach to artificial intelligence,” she added.
She said the planned restrictions on high risk AI would apply in fields such as healthcare, recruitment, transportation, policing and law enforcement — and potentially others.
“We will be particularly careful with sectors where essential human interests and rights are at stake,” she said. “Artificial intelligence must serve people. And therefore artificial intelligence must always comply with people’s rights. This is why a person must always be in control of critical decisions and so called ‘high risk AI’ — this is AI that potentially interferes with people’s rights — have to be tested and certified before they reach our single market.”
“Today’s message is that artificial intelligence is a huge opportunity in Europe, for Europe. We do have a lot but we have to unleash this potential that is out there. We want this innovation in Europe,” von der Leyen added. “We want to encourage our businesses, our researchers, the innovators, the entrepreneurs, to develop artificial intelligence and we want to encourage our citizens to feel confident to use it in Europe.”
Towards a rights-respecting common data space
The European Commission has been working on building what it dubs a “data economy” for several years at this point, plugging into its existing Digital Single Market strategy for boosting regional competitiveness.
Its aim is to remove barriers to the sharing of non-personal data within the single market. The Commission has previously worked on regulation to ban most data localization, as well as setting out measures to encourage the reuse of public sector data and open up access to scientific data.
Healthcare data sharing has also been in its sights, with policies to foster interoperability around electronic health records, and it’s been pushing for more private sector data sharing — both b2b and business-to-government.
“Every organisation should be able to store and process data anywhere in the European Union,” it wrote in 2018. It has also called the plan a “common European data space“. Aka “a seamless digital area with the scale that will enable the development of new products and services based on data”.
The focus on freeing up the flow of non-personal data is intended to complement the bloc’s long-standing rules on protecting personal data. The General Data Protection Regulation (GDPR), which came into force in 2018, has reinforced EU citizens’ rights around the processing of their personal information — updating and bolstering prior data protection rules.
The Commission views GDPR as a major success story by virtue of how it’s exported conversations about EU digital standards to a global audience.
But it’s fair to say that back home enforcement of the GDPR remains a work in progress, some 21 months in — with many major cross-border complaints attached to how tech and adtech giants are processing people’s data still sitting on the desk of the Irish Data Protection Commission where multinationals tend to locate their EU HQ as a result of favorable corporate tax arrangements.
The Commission’s simultaneous push to encourage the development of AI arguably risks heaping further pressure on the GDPR — as both private and public sectors have been quick to see model-making value locked up in citizens’ data.
Already across Europe there are multiple examples of companies and/or state authorities working on building personal data-fuelled diagnostic AIs for healthcare; using machine learning for risk scoring of benefits claimants; and applying facial recognition as a security aid for law enforcement, to give three examples.
There has also been controversy fast following such developments. Including around issues such as proportionality and the question of consent to legally process people’s data — both under GDPR and in light of EU fundamental privacy rights as well as those set out in the European Convention of Human Rights.
Only this month a Dutch court ordered the state to cease use of a blackbox algorithm for assessing the fraud risk of benefits claimants on human rights grounds — objecting to a lack of transparency around how the system functions and therefore also “insufficient” controllability.
The von der Leyen Commission, which took up its five-year mandate in December, is alive to rights concerns about how AI is being applied, even as it has made it clear it intends to supercharge the bloc’s ability to leverage data and machine learning technologies — eyeing economic gains.
Commission president, Ursula von der Leyen, visiting the AI Intelligence Center in Brussels (via the EC’s EbS Live AudioVisual Service)
The Commission president committed to publishing proposals to regulate AI within the first 100 days — saying she wants a European framework to steer application to ensure powerful learning technologies are used ethically and for the public good.
But a leaked draft of the plan to regulate AI last month suggested it would step back from imposing even a temporary ban on the use of facial recognition technology — leaning instead towards tweaks to existing rules and sector/app specific risk-assessments and requirements.
It’s clear there are competing views at the top of the Commission on how much policy intervention is needed on the tech sector.
Breton has previously voiced opposition to regulating AI — telling the EU parliament just before he was confirmed in post that he “won’t be the voice of regulating AI“.
While Vestager has been steady in her public backing for a framework to govern how AI is applied, talking at her hearing before the EU parliament of the importance of people’s trust and Europe having its own flavor of AI that must “serve humans” and have “a purpose”.
“I don’t think that we can be world leaders without ethical guidelines,” she said then. “I think we will lose it if we just say no let’s do as they do in the rest of the world — let’s pool all the data from everyone, no matter where it comes from, and let’s just invest all our money.”
At the same time Vestager signalled a willingness to be pragmatic in the scope of the rules and how they would be devised — emphasizing the need for speed and agreeing the Commission would need to be “very careful not to over-regulate”, suggesting she’d accept a core minimum to get rules up and running.
Today’s proposal steers away from more stringent AI rules — such as a ban on facial recognition in public places. On biometric AI technologies Vestager described some existing uses as “harmless” during today’s press conference — such as unlocking a phone or for automatic border gates — whereas she stressed the difference in terms of rights risks related to the use of remote biometric identification tech such as facial recognition.
“With this white paper the Commission is launching a debate on the specific circumstance — if any — which might justify the use of such technologies in public space,” she said, putting some emphasis on the word ‘any’.
The Commission is encouraging EU citizens to put questions about the digital strategy for Vestager to answer tomorrow, in a live Q&A at 17.45 CET on Facebook, Twitter and LinkedIn — using the hashtag #DigitalEU

Do you want to know more on the EU’s digital strategy? Use #DigitalEU to share your questions and we will ask them to Margrethe Vestager this Thursday. pic.twitter.com/I90hCR6Gcz
— European Commission (@EU_Commission) February 18, 2020

Platform liability
There is more to come from the Commission on the digital policy front — with a Digital Services Act in the works to update pan-EU liability rules around Internet platforms.
That proposal is slated to be presented later this year and both commissioners said today that details remain to be worked out. The possibility that the Commission will propose rules to more tightly regulate online content platforms already has content farming adtech giants like Facebook cranking up their spin cycles.
During today’s press conference Breton said he would always push for what he dubbed “shared governance” but he warned several times that if platforms don’t agree an acceptable way forward “we will have to regulate” — saying it’s not up for European society to adapt to the platforms but for them to adapt to the EU.
“We will do this within the next eight months. It’s for sure. And everybody knows the rules,” he said. “Of course we’re entering here into dialogues with these platforms and like with any dialogue we don’t know exactly yet what will be the outcome. We may find at the end of the day a good coherent joint strategy which will fulfil our requirements… regarding the responsibilities of the platform. And by the way this is why personally when I meet with them I will always prefer a shared governance. But we have been extremely clear if it doesn’t work then we will have to regulate.”
Internal market commissioner, Thierry Breton

Join the Q&A with top speakers at TC Sessions: Robotics+AI (March 3)

Over the past four years, TechCrunch has brought together some of the biggest names in robotics (founders, CEOs, VCs and researchers) for TC Sessions: Robotics+AI. The show has provided a unique opportunity to explore the future and present of robotics, AI and the automation technologies that will define our professional and personal lives.
While the panels have been curated and hosted by our editorial staff, we’ve also long been interested in providing show-goers an opportunity to engage with guests. For this reason, we introduced the Q&A stage, where some of the biggest names can more directly engage with attendees.
This year, we’ve got top names from Softbank, Samsung, Sony’s Innovation Fund, Qualcomm, NVIDIA and more joining us on the stage to answer questions. Here’s the full agenda of this year’s Q&A stage.
11:30 – 12:00 Book Signing
Stuart Russell
12:15 – 1:00 Corporate VC, Partnering and Acquisitions
Carlos Kocher (Qualcomm)
Kass Dawson (Softbank)
Gen Tsuchikawa (Sony Innovation Fund)
1:15 – 2:00 Founders
Sebastien Boyer (FarmWise)
Noah Campbell-Ready (Built Robotics)
2:15 – 3:00 VC
Jocelyn Goldfein (Zetta Venture Partners)
Rob Coneybeer (Shasta Ventures)
Aaron Jacobson (New Enterprise Associates)
3:15 – 4:00 Building Robotics Platforms
Steven Macenski (Samsung)
Claire Delaunay (Nvidia)
$345 General Admission tickets are still on sale – book yours here and join 1,000+ of today’s leading minds in the business for networking and discovery. The earlier you book, the better, as prices go up at the door.
Students, save big with a $50 ticket and get full access to the show. Student tickets are available to current students only. Book yours here.

TubeMogul execs launch Arize AI for AI troubleshooting

A new startup called Arize AI is building what it calls a real-time analytics platform for artificial intelligence and machine learning.
The company is led by CEO Jason Lopatecki, who has also served as chief strategy officer and chief innovation officer at TubeMogul, the video ad company acquired by Adobe. TubeMogul’s co-founder and former CEO Brett Wilson is also an investor and board member.
While Arize AI is only coming out of stealth today, it’s already raised $4 million in funding led by Foundation Capital, with participation from Wilson and Trinity Ventures.
And it’s already made an acquisition: a Y Combinator-backed startup called Monitor ML. The entire Monitor ML team is joining Arize, and its CEO Aparna Dhinakaran (who previously built machine learning infrastructure at Uber) is becoming Arize’s co-founder and chief product officer.
Lopatecki and Dhinakaran said that even when they were leading two separate startups, they were trying to solve similar problems — problems that they both saw at big companies.
“Businesses are deploying these complex models that are hard to understand, they’re not easy to troubleshoot or debug,” Lopatecki said. So if an AI or ML model isn’t delivering the desired results, “The state of the art today is: You file a ticket, the data scientist comes back with a complicated answer, everyone’s scratching their head, everyone hopes the problem’s gone away. As you push more and more models into the organization, that’s just not good enough.”
Similarly Dhinakaran said that at Uber, she saw her team spend a lot of time “answering the question, ‘Hey, is the model performing well?’ And diving into that model performance was really a tough problem.”

Machine learning startup Weights & Biases raises $15M

To solve it, she said, “The first phase is: How can we make it easier to get these real-time analytics and insights about your model straight to the people who are monitoring it in production, the data scientist or the product manager or engineering team?”
Lopatecki added that Arize AI is providing more than just “a metric that says it’s good or bad,” but rather a wide range of information that can help teams see how a model is performing — and if there are issues, whether those issues are with the data or with the model itself.
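Arize hasn’t published its methods, but separating “the data changed” from “the model is wrong” usually starts with two independent checks: has the input distribution drifted from the training baseline, and has accuracy dropped on whatever labeled outcomes have arrived? A deliberately tiny sketch of both checks, with invented thresholds and fields:

def mean(xs):
    return sum(xs) / len(xs)

def check_model_health(train_feature, live_feature, labels, predictions,
                       drift_tolerance=0.25, min_accuracy=0.9):
    """Separate 'the data changed' from 'the model is wrong'."""
    # Data check: compare a live feature's mean against the training baseline.
    drift = abs(mean(live_feature) - mean(train_feature)) / (abs(mean(train_feature)) or 1)
    # Model check: accuracy on whatever labeled outcomes have come back.
    accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
    return {
        "input_drift": drift > drift_tolerance,    # likely a data issue
        "accuracy_drop": accuracy < min_accuracy,  # likely a model issue
    }

# Example: the live feature has shifted upward, and accuracy has slipped.
print(check_model_health(
    train_feature=[1.0, 1.1, 0.9],
    live_feature=[1.6, 1.7, 1.5],
    labels=[1, 0, 1, 1],
    predictions=[1, 1, 0, 1],
))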
Besides giving companies a better handle on how their AI and ML models are doing, Lopatecki said this will also allow customers to make better use of their data scientists: “[You don’t want] the smallest, most expensive team troubleshooting and trying to explain whether it was a correct prediction or not … You want insights surfaced up [to other teams], so your head researcher is doing research, not explaining that research to the rest of the team.”
He compared Arize AI’s tools to Google Analytics, but added, “I don’t want to say it’s an executive dashboard, that’s not the right positioning of the platform. It’s an engineering product, similar to Splunk — it’s really for engineers, not the execs.”
Lopatecki also acknowledged that it can be tough to make sense of the AI and ML landscape right now (“I’m technical, I did EECS at Berkeley, I understand ML extremely well, but even I can be confused by some of the companies in this space”). He argued that while most other companies are trying to tackle the entire AI pipeline, “We’re really focusing on production.”

New Uber feature uses machine learning to sort business and personal rides

Elon Musk says all advanced AI development should be regulated, including at Tesla

Tesla and SpaceX CEO Elon Musk is once again sounding a warning note regarding the development of artificial intelligence – the executive and founder tweeted on Monday evening that “all org[anization]s developing advanced AI should be regulated, including Tesla.”
Musk was responding to a new MIT Technology Review profile of OpenAI, an organization founded in 2015 by Musk, along with Sam Altman, Ilya Sutskever, Greg Brockman, Wojciech Zaremba and John Schulman. At first, OpenAI was formed as a non-profit backed by $1 billion in funding from its pooled initial investors, with the aim of pursuing open research into advanced AI with a focus on ensuring it was pursued in the interest of benefiting society, rather than leaving its development in the hands of a small and narrowly-interested few – ie. for-profit technology companies.

All orgs developing advanced AI should be regulated, including Tesla
— Elon Musk (@elonmusk) February 17, 2020

At the time of its founding in 2015, Musk posited that the group essentially arrived at the idea for OpenAI as an alternative to the less effective course of simply either “sit[ting] on the sidelines” or “encourag[ing] regulatory oversight.” Musk also said in 2017 that he believed regulation should be put in place to govern the development of AI, preceded first by the formation of some kind of oversight agency that would study and gain insight into the industry before proposing any rules.
In the intervening years, much has changed – including OpenAI. The organization officially formed a for-profit arm owned by a non-profit parent corporation in 2019, and accepted $1 billion in investment from Microsoft along with the formation of a wide-ranging partnership, seemingly in contravention of its founding principles.
Musk’s comments this week in response to the MIT profile indicate that he’s quite distant from the organization he helped co-found both ideologically and in a more practical, functional sense. The SpaceX founder also noted that he “must agree” that concerns about OpenAI’s mission expressed last year at the time of its Microsoft announcement “are reasonable,” and said that “OpenAI should be more open.” Musk also noted that he has “no control & only very limited insight into OpenAI” and that his “confidence” in Dario Amodei, OpenAI’s research director, “is not high” when it comes to ensuring safe development of AI.
While it might indeed be surprising to see Musk include Tesla in a general call for regulation of the development of advanced AI, it is in keeping with his general stance on the development of artificial intelligence. Musk has repeatedly warned of the risks associated with creating AI that is more independent and advanced, even going so far as to call it a “fundamental risk to the existence of human civilization.”
He also clarified on Monday that he believes advanced AI development should be regulated both by individual national governments as well as by international governing bodies, like the U.N., in response to a clarifying question from a follower. Time is clearly not doing anything to blunt Musk’s beliefs around the potential threat of AI: Perhaps this will encourage him to ramp up his efforts with Neuralink to give humans a way to even the playing field.

African crowdsolving startup Zindi scales 10,000 data scientists

Cape Town-based startup Zindi has registered 10,000 data scientists on its platform, which uses AI and machine learning to crowdsolve complex problems in Africa.
Founded in 2018, the early-stage venture allows companies, NGOs or government institutions to host online competitions around data-oriented challenges.
Zindi opens the contests to the African data scientists on its site, who can join a competition, submit solution sets, move up a leaderboard and win a cash prize payout.
The highest purse so far has been $12,000, according to Zindi co-founder Celina Lee. Competition hosts receive the results, which they can use to create new products or integrate into their existing systems and platforms.
It’s free for data scientists to create a profile on the site, but those who fund the competitions pay Zindi a fee, which is how the startup generates revenue.
Zindi’s model has gained the attention of some notable corporate names in and outside of Africa. Those who have hosted competitions include Microsoft, IBM and Liquid Telecom.
The South African National Roads Agency sponsored a challenge in 2019 to reduce traffic fatalities in South Africa. The stated objective: “to build a machine learning model that accurately predicts when and where the next road incident will occur in Cape Town…to enable South African authorities…to put measures in place that will…ensure safety.”
Attaining 10,000 registered data scientists represents a more than 100% increase for Zindi since August 2019, when TechCrunch last spoke to Lee.
The startup — which is in the process of raising a Series A funding round — plans to connect its larger roster to several new platform initiatives. Zindi will launch a university-wide hackathon competition, called UmojoHack Africa, across 10 countries in March.
“We’re also working on a section on our site that is specifically designed to run hackathons…something that organizations and universities could use to upskill their students or teams specifically,” Lee said.
Lee (who’s originally from San Francisco) co-founded Zindi with South African Megan Yates and Ghanaian Ekow Duker. They lead a team in the company’s Cape Town office.
For Lee the startup is a merger of two facets of her experience.
“It all just came together. I have this math-y tech background and I was working in non-profits and development, but I’d always been trying to join the two worlds,” she said.
That happened with Zindi, which is fully for-profit — though roughly 80% of the startup’s competitions have some social impact angle, according to Lee.
“In an African context, solving problems for for-profit companies can definitely have social impact as well,” she said.
With most of the continent’s VC focused on fintech or e-commerce startups, Zindi joins a unique group of ventures — such as Andela and Gebeya — that are building tech talent in Africa’s data-science and software-engineering space.
If Zindi can convene data-scientists to solve problems for companies and governments across the entire continent that could open up a vast addressable market.
It could also see the startup become an alternative — on many a project — to more expensive consulting firms operating in Africa’s large economies, such as South Africa, Nigeria and Kenya.

Africa-focused Andela cuts 400 staff as it confirms $50M in revenue

Facebook asks for a moat of regulations it already meets

It’s suspiciously convenient that Facebook already fulfills most of the regulatory requirements it’s asking governments to lay on the rest of the tech industry. Facebook CEO Mark Zuckerberg is in Brussels lobbying the European Union’s regulators as they form new laws to govern artificial intelligence, content moderation and more. But if regulators follow Facebook’s suggestions, they might reinforce the social network’s power rather than keep it in check, by hamstringing companies with fewer resources.
We already saw this happen with GDPR. The idea was to strengthen privacy and weaken the exploitative data collection that tech giants like Facebook and Google depend on for their business models. The result was that Facebook and Google actually gained or only slightly lost EU market share while all other adtech vendors got wrecked by the regulation, according to WhoTracksMe.
GDPR went into effect in May 2018, hurting other ad tech vendors’ EU market share much worse than Google and Facebook. Image credit: WhoTracksMe
Tech giants like Facebook have the profits, lawyers, lobbyists, engineers, designers, scale and steady cash flow to navigate regulatory changes. Unless new laws are squarely targeted at the abuses or dominance of these large companies, their collateral damage can loom large. Rather than spend time and money they don’t have in order to comply, some smaller competitors will fold, scale back or sell out.
But at least in the case of GDPR, everyone had to add new transparency and opt-out features. If Facebook’s slate of requests goes through, it will sail forward largely unperturbed while rivals and upstarts scramble to get up to speed. I made this argument in March 2018 in my post “Regulation could protect Facebook, not punish it”. Then GDPR did exactly that.
Google gained market share and Facebook only lost a little in the EU following GDPR. Everyone else fared worse. Image via WhoTracksMe
That doesn’t mean these safeguards aren’t sensible for everyone to follow. But regulators need to consider what Facebook isn’t suggesting if they want to address its scope and brazenness, and what timelines or penalties would be feasible for smaller players.
If we take a quick look at what Facebook is proposing, it becomes obvious that it’s self-servingly suggesting what it’s already accomplished:
User-friendly channels for reporting content – Every post and entity on Facebook can already be flagged by users with an explanation of why
External oversight of policies or enforcement – Facebook is finalizing its independent Oversight Board right now
Periodic public reporting of enforcement data – Facebook publishes a twice-yearly report about enforcement of its Community Standards
Publishing their content standards – Facebook publishes its standards and notes updates to them
Consulting with stakeholders when making significant changes – Facebook consults a Safety Advisory Board and will have its new Oversight Board
Creating a channel for users to appeal a company’s content removal decisions – Facebook’s Oversight Board will review content removal appeals
Incentives to meet specific targets such as keeping the prevalence of violating content below some agreed threshold – Facebook already touts how 99% of removed child nudity content and 80% of removed hate speech was detected proactively, and that it deletes 99% of ISIS and Al Qaeda content (both metrics are sketched below)
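Both of those figures are simple ratios over moderation counts, which is part of why they favor whoever already measures them at scale. A minimal sketch of the two metrics, with made-up numbers:

def proactive_rate(removed_by_ai, removed_total):
    """Share of removed content that was caught before a user reported it."""
    return removed_by_ai / removed_total

def prevalence(violating_views, total_views):
    """Share of all content views that landed on violating content."""
    return violating_views / total_views

# Made-up numbers: 9,900 of 10,000 removals were proactive; 5 views in
# 100,000 hit violating content. A regulator would cap the second figure.
print(f"proactive: {proactive_rate(9_900, 10_000):.1%}")   # 99.0%
print(f"prevalence: {prevalence(5, 100_000):.4%}")         # 0.0050%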
Facebook CEO Mark Zuckerberg arrives at the European Parliament, prior to his hearing on the data privacy scandal on May 22, 2018 at the European Union headquarters in Brussels. (Photo: JOHN THYS/AFP/Getty Images)
Finally, Facebook asks that the rules for what content should be prohibited on the internet “recognize user preferences and the variation among internet services, can be enforced at scale, and allow for flexibility across language, trends and context”. That’s a lot of leeway. Facebook already allows different content in different geographies to comply with local laws, lets Groups self-police themselves more than the News Feed, and Zuckerberg has voiced support for customizable filters on objectionable content with defaults set by local majorities.
“…Can be enforced at scale” is a last push for laws that wouldn’t require tons of human moderators to enforce, which might further drag down Facebook’s share price. ‘100 billion pieces of content come in per day, so don’t make us look at it all.’ Investments in safety for elections, content and cybersecurity already dragged Facebook’s profit growth down from 61% year-over-year in 2018 to just 7% in 2019.
To be clear, it’s great that Facebook is doing any of this already. Little is formally required. If the company was as evil as some make it out to be, it wouldn’t be doing any of this.

Facebook pushes EU for dilute and fuzzy internet content rules

Then again, Facebook earned $18 billion in profit in 2019 off our data while repeatedly proving it hasn’t adequately protected it. The $5 billion fine and settlement with the FTC, under which Facebook has pledged to build more around privacy and transparency, shows it’s still playing catch-up given its role as a ubiquitous communications utility.
There’s plenty more for EU and hopefully US regulators to investigate. Should Facebook pay a tax on the use of AI? How does it treat and pay its human content moderators? Would requiring users be allowed to export their interoperable friends list promote much-needed competition in social networking that could let the market compel Facebook to act better?
As the EU internal market commissioner Thierry Breton told reporters following Zuckerberg’s meetings with regulators, “It’s not for us to adapt to those companies, but for them to adapt to us.”

Unemployment is the top risk of AI. I think a tax on its use by big companies could help pay for job retraining the world will desperately need
— Josh Constine (@JoshConstine) February 17, 2020

Friend portability is the must-have Facebook regulation

Facebook pushes EU for dilute and fuzzy Internet content rules

Facebook founder Mark Zuckerberg is in Europe this week — attending a security conference in Germany over the weekend where he spoke about the kind of regulation he’d like applied to his platform ahead of a slate of planned meetings with digital heavyweights at the European Commission.
“I do think that there should be regulation on harmful content,” said Zuckerberg during a Q&A session at the Munich Security Conference, per Reuters, making a pitch for bespoke regulation.
He went on to suggest “there’s a question about which framework you use”, telling delegates: “Right now there are two frameworks that I think people have for existing industries — there’s like newspapers and existing media, and then there’s the telco-type model, which is ‘the data just flows through you’, but you’re not going to hold a telco responsible if someone says something harmful on a phone line.”
“I actually think where we should be is somewhere in between,” he added, making his plea for Internet platforms to be a special case.
At the conference he also said Facebook now employs 35,000 people to review content on its platform and implement security measures — including suspending around 1 million fake accounts per day, a stat he professed himself “proud” of.
The Facebook chief is due to meet with key commissioners covering the digital sphere this week, including competition chief and digital EVP Margrethe Vestager, internal market commissioner Thierry Breton and Věra Jourová, who is leading policymaking around online disinformation.
The timing of his trip is clearly linked to digital policymaking in Brussels — with the Commission due to set out its thinking around the regulation of artificial intelligence this week. (A leaked draft last month suggested policymakers are eyeing risk-based rules to wrap around AI.)
More widely, the Commission is wrestling with how to respond to a range of problematic online content — from terrorism to disinformation and election interference — which also puts Facebook’s 2BN+ social media empire squarely in regulators’ sights.
Another policymaking plan — a forthcoming Digital Service Act (DSA) — is slated to upgrade liability rules around Internet platforms.
The detail of the DSA has yet to be publicly laid out but any move to rethink platform liabilities could present a disruptive risk for a content distributing giant such as Facebook.
Going into meetings with key commissioners, Zuckerberg made his preference for being considered a ‘special’ case clear — saying he wants his platform to be regulated neither like the media businesses his empire has financially disrupted, nor like a dumb-pipe telco.
On the latter it’s clear — even to Facebook — that the days of Zuckerberg being able to trot out his erstwhile mantra that ‘we’re just a technology platform’, and wash his hands of tricky content stuff, are long gone.
Russia’s 2016 foray into digital campaigning in the US elections and sundry content horrors/scandals before and since have put paid to that — from nation-state backed fake news campaigns to livestreamed suicides and mass murder.
Facebook has been forced to increase its investment in content moderation. Meanwhile it announced a News section launch last year — saying it would hand-pick publishers’ content to show in a dedicated tab.
The ‘we’re just a platform’ line hasn’t been working for years. And EU policymakers are preparing to do something about that.
With regulation looming Facebook is now directing its lobbying energies onto trying to shape a policymaking debate — calling for what it dubs “the ‘right’ regulation”.
Here the Facebook chief looks to be applying a playbook similar to that of Google CEO Sundar Pichai — who recently traveled to Brussels to push for AI rules so dilute they’d act as a tech enabler.
In a blog post published today Facebook pulls its latest policy lever: Putting out a white paper which poses a series of questions intended to frame the debate at a key moment of public discussion around digital policymaking.
Top of this list is a push to foreground free expression, with Facebook questioning “how can content regulation best achieve the goal of reducing harmful speech while preserving free expression?” — before suggesting more of the same: user-generated policing of its platform (which costs its business nothing).
Another suggestion, which aligns with Facebook’s existing moves to steer regulation in a direction it’s comfortable with, is for an appeals channel to be created so users can contest content removal or non-removal decisions. This of course dovetails with the content decision review body Facebook is in the process of setting up — a body that is not in fact independent of Facebook.
Facebook is also lobbying in the white paper to be able to throw platform levers to meet a threshold of ‘acceptable vileness’ — i.e. it wants regulators to sanction a tolerated proportion of law-violating content — with the tech giant suggesting: “Companies could be incentivized to meet specific targets such as keeping the prevalence of violating content below some agreed threshold.”
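To make that “prevalence below some agreed threshold” idea concrete, here is a minimal sketch of how such a metric could be computed and checked. The sampling approach, numbers and names are illustrative assumptions on our part, not Facebook’s actual methodology; prevalence here is read simply as the share of sampled content views that landed on violating material.

```python
# Minimal sketch of a prevalence-style compliance check.
# All numbers and names are hypothetical.

def prevalence(violating_views: int, total_views: int) -> float:
    """Fraction of sampled views that contained violating content."""
    return violating_views / total_views if total_views else 0.0

# Hypothetical audit sample: 27 violating views out of 10,000 sampled.
sample_prevalence = prevalence(violating_views=27, total_views=10_000)

# Hypothetical regulator-agreed target of 0.5%.
AGREED_THRESHOLD = 0.005

print(f"Prevalence: {sample_prevalence:.2%}")  # 0.27%
print("Target met" if sample_prevalence <= AGREED_THRESHOLD else "Target missed")
```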
It’s also pushing for the fuzziest and most dilute definition of “harmful content” possible. On this Facebook argues that existing (national) speech laws — such as, presumably, Germany’s Network Enforcement Act (aka the NetzDG law) which already covers online hate speech in that market — should not apply to Internet content platforms, as it claims moderating this type of content is “fundamentally different”.
“Governments should create rules to address this complexity — that recognize user preferences and the variation among internet services, can be enforced at scale, and allow for flexibility across language, trends and context,” it writes — lobbying for maximum possible leeway to be baked into the coming rules.
“The development of regulatory solutions should involve not just lawmakers, private companies and civil society, but also those who use online platforms,” Facebook’s VP of content policy, Monika Bickert, also writes in the blog.
“If designed well, new frameworks for regulating harmful content can contribute to the internet’s continued success by articulating clear ways for government, companies, and civil society to share responsibilities and work together. Designed poorly, these efforts risk unintended consequences that might make people less safe online, stifle expression and slow innovation,” she adds, ticking off more of the tech giant’s usual talking points at the point policymakers start discussing putting hard limits on its ad business.

Is tech socialism really on the rise?

Greg Epstein
Contributor

Greg M. Epstein is the Humanist Chaplain at Harvard and MIT, and the author of The New York Times bestselling book “Good Without God.” Described as a “godfather to the [humanist] movement” by The New York Times Magazine in recognition of his efforts to build inclusive, inspiring and ethical communities for the nonreligious and allies, Greg was also named “one of the top faith and moral leaders in the United States” by Faithful Internet, a project of the United Church of Christ and the Stanford Law School Center for Internet and Society.

In Part 1 of my conversation with Ben Tarnoff, co-founder of leading tech ethics publication Logic, we covered the history and philosophy of 19th century Luddites and how that relates to what he described in his column for The Guardian as today’s over-computerized world.
I’ve casually called myself a Luddite when expressing general frustration with social media or internet culture, but as it turns out, you can’t intelligently discuss what most people think of as an anti-technology movement without understanding the role of technology in capitalism, and vice versa.
At the end of Part 1, I was badgering Tarnoff to speculate on which technologies ought to be preserved even in a Luddite world, and which ones ought to go the way of the mills the original Luddites destroyed. Arguing for a more nuanced approach to the topic, Tarnoff offered the disability rights movement as an example of the approach he hopes will be taken by an emerging class of tech socialists.
TechCrunch: The Americans with Disabilities Act has been a very powerful body of legislation that has basically forced us to use our technological might to create physical infrastructure, including elevators, buses, vans and the other day-to-day machinery of our lives, that allows people who otherwise wouldn’t be able to go places, do things, see things or experience things to do so. And you’re saying one of the things that we could look at is more technology for that sort of thing, right?
Because I think a lot about how in this society, every single one of us walks around with the insecurity that, “there but for the grace of my health go I.” At any moment I could be injured, I could get sick, I could acquire a disability that’s going to limit my participation in society.
Ben Tarnoff: One of the phrases of the disability rights movement is, “nothing about us without us,” which perfectly encapsulates a more democratic approach to technology. What they’re saying is that if you’re an architect, if you’re an urban planner, if you’re a shopkeeper, whatever it is, you’re making design decisions that have the potential to seriously negatively impact a substantial portion of the population. In substantial ways [you could] restrict their democratic rights. Their access to space.

‘Capitalism generates a lot of wealth depending on the situation’

Greg Epstein
Contributor


Ben Tarnoff is a columnist at The Guardian, a co-founder of tech ethics magazine Logic and arguably one of the world’s top experts on the intersection of tech and socialism.
But what I think you really need to know by way of introduction to the interview below is that reading Tarnoff and his wife Moira Weigel might be the closest you can get today to following the young Jean-Paul Sartre and Simone de Beauvoir in real time.
In September, Tarnoff published a Guardian piece, “To decarbonize we must decomputerize,” in which he argued for a modern Luddism. I’ve casually called myself a Luddite online for many years now:

*Sigh* how have I still tweeted fewer than 1300 times after 4 years on this thing? #firstworldproblemsforluddites
— Greg Epstein (@gregmepstein) May 17, 2013

But I wouldn’t previously have considered writing much about it online, because who in this orbit could possibly identify? Turns out Tarnoff, a leading tech world advocate for Bernie Sanders, does. Which made me wonder: Could Luddism ever become the next trend in Silicon Valley culture?
Of course, I then reviewed exactly who the Luddites actually were and thought, “aha.” Maybe I’ve finally found the topic and the interview that really truly will get me fired from my role as TechCrunch’s ethicist-in-residence; talking to a contemporary tech socialist about the people who famously destroyed machinery because they didn’t feel that it was ethical, humane or in service of their well-being doesn’t necessarily scream “TechCrunch,” does it?
So I began my interview by praising not only his piece on Luddism but several other related pieces he’s written and by asking (with tongue only semi-in-cheek) to please confirm that at least it’s a peaceful Luddism for which he is calling.
Ben Tarnoff (Photo by Richard McBlane/Getty Images for SXSW)
Tarnoff: Thanks for reading the pieces. I really appreciate it.

Nvidia’s Q4 financials look to brighter skies with strong quarterly revenue growth

Major artificial intelligence and graphics chipmaker Nvidia reported its fiscal Q4 2020 financials today (the company’s fiscal fourth quarter ended on January 26, 2020). The company announced revenues of $3.11 billion for the quarter, a jump of 41% from the year-ago quarter and a small bump from the third quarter.
Even more importantly, the company’s gross margin improved remarkably year-over-year, moving from 54.7% to 64.9%. The company reported a net income of $950 million for the quarter. After-hours traders jumped into the stock, with Yahoo Finance reporting a roughly 6.32% increase in the company’s share price immediately following the earnings.
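As a quick back-of-the-envelope check on those figures (the year-ago revenue and net margin below are derived from the reported numbers rather than separately reported):

```python
# Sanity-check the reported quarterly figures.
q4_revenue_b = 3.11    # reported Q4 revenue, in $ billions
yoy_growth = 0.41      # reported 41% year-over-year jump

# Implied revenue for the year-ago quarter (derived, not reported).
print(f"Implied year-ago revenue: ${q4_revenue_b / (1 + yoy_growth):.2f}B")  # ~$2.21B

# Gross margin improvement, in percentage points.
print(f"Margin gain: {(0.649 - 0.547) * 100:.1f} points")  # 10.2

# Net margin implied by the reported $950M net income.
print(f"Implied net margin: {0.95 / q4_revenue_b:.1%}")  # ~30.5%
```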
That positive news didn’t extend to the full-year fiscal numbers, though, which painted a more complicated picture for the company. Revenue was down slightly for the 2020 fiscal year compared to 2019, and operating expenses, operating income, net income, and diluted earnings all headed the wrong way, in some cases by more than 30%.
Nvidia’s struggles in 2019 weren’t unique to the chipmaker, as last year was bruising for the chip industry overall. The industry’s total sales declined at their fastest rate in more than a decade, driven by a number of factors, including weaker demand in some parts of the market, oversupply in others (driving down prices and thus sales revenue), and ongoing trade tensions between the U.S., China, South Korea, and Japan.

W(hy)TF are Japan and South Korea in a trade war?

Nvidia itself has had a huge number of ups and downs in recent years. Riding the crest of the crypto wave, the company’s stock soared as crypto miners sought the company’s GPUs, which were well-positioned to handle the hashing functions at the core of many proof-of-work crypto protocols. Yet the crypto winter crushed the stock, which saw a precipitous decline of 50% at the tail end of 2018.

After losing half its value, Nvidia faces reckoning

The past year though has seen Nvidia turn something of a corner. It started the year with a share price of around $150, and today closed at nearly $271, a gain of more than 80%. Part of that story — as it is with the rest of the chip industry — is the sense that a whole new set of workflows (and therefore markets) are moving to silicon, including in automotive, high-performance computing (where Nvidia acquired Mellanox for $6.9 billion early last year), Internet of Things, and even in 5G.
That excitement on the big corporate side has shown up in the venture world as well. Startups like Cerebras, Nuvia, Graphcore and more are targeting these new workflows, putting pressure on Nvidia, Intel, and other incumbents to outperform the upstarts.

Intuition Robotics raises $36M for its empathetic digital companion

Intuition Robotics, the company best known for its ElliQ robot, a digital home companion for the elderly, today announced that it has raised a $36 million Series B round co-led by SPARX Group and OurCrowd. Toyota AI Ventures, Sompo Holdings, iRobot, Union Tech Ventures, Happiness Capital, Samsung Next, Capital Point and Bloomberg Beta also participated in this round. This brings the total funding for the company, which was founded in 2016, to $58 million.
As the company, which sees it as its mission to build digital assistants that can create emotional bonds between humans and machines, also disclosed today, it is working with the Toyota Research Institute to bring its technology to the automaker’s LQ concept. Toyota previously said that it wanted to bring an empathetic AI assistant to the LQ that could create a bond between driver and car. Intuition Robotics’ Q platform helps power this assistant, which Toyota calls “Yui.”
Intuition Robotics CEO and co-founder Dor Skuler
Intuition Robotics CEO and co-founder Dor Skuler tells me that the company spent the last two years gathering data through ElliQ; in the process, its devices logged more than 10,000 cumulative days in the homes of early users. The youngest of those users were 78 and the oldest 97.
On average, users interacted with ElliQ eight times per day and spent about six minutes on those interactions. When ElliQ made proactive suggestions, users accepted those about half the time.
“We believe that we have been able to prove that she can create an enduring relationship between humans and machines that actually influences people’s feelings and behaviors,” Skuler told me. “That she’s able to create empathy and trust — and anticipate the needs of the users. And that, to us, is the real vision behind the company.”

While Intuition Robotics is most closely identified with ElliQ, that’s only one area the company is focusing on. The other is automotive — and as Skuler stressed, for a small startup focus is key, even though there are some other obvious verticals it could try to get into.
In the car, the empathetic AI assistant will adapt to the individual user and, for example, provide personalized suggestions for trying out new features in the car, or suggest that you open the window and get some fresh air into the car when it senses you are getting tired. As Skuler stressed, the car is actually a great environment for a digital assistant, as it already has plenty of built-in sensors.
“The agent gets the data feed, builds context, looks at the goals and answers three questions: Should I be proactive? Which activity should I promote? And which version to be most effective? And then it controls the outcomes,” Skuler explained. That’s the same process in the car as it would be in ElliQ — and indeed, the same code runs in both.
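Skuler’s description maps onto a fairly classic proactive-agent loop: build context from the data feed, then answer his three questions in order. The sketch below is our illustration of that structure only; the signals, thresholds and names are hypothetical, not Intuition Robotics’ actual code or API.

```python
# Illustrative proactive-agent loop following the three questions Skuler
# describes: should I be proactive? which activity? which version?
# All signals, thresholds and names here are hypothetical.
import random
from dataclasses import dataclass

@dataclass
class Context:
    driver_alertness: float  # 0.0 (drowsy) .. 1.0 (fully alert)
    minutes_driving: int

def build_context(data_feed: dict) -> Context:
    # Step 0: turn the raw data feed into usable context.
    return Context(
        driver_alertness=data_feed.get("alertness", 1.0),
        minutes_driving=data_feed.get("minutes_driving", 0),
    )

def should_be_proactive(ctx: Context) -> bool:
    # Question 1: is there a goal-relevant reason to interrupt at all?
    return ctx.driver_alertness < 0.4 or ctx.minutes_driving > 120

def choose_activity(ctx: Context) -> str:
    # Question 2: which activity best serves the current goal?
    return "suggest_fresh_air" if ctx.driver_alertness < 0.4 else "suggest_break"

def choose_version(activity: str) -> str:
    # Question 3: which variant is likely to be most effective?
    variants = {
        "suggest_fresh_air": ["How about opening a window?", "Some fresh air might help."],
        "suggest_break": ["A short stop could be nice.", "There is a rest area ahead."],
    }
    return random.choice(variants[activity])  # stand-in for a learned policy

ctx = build_context({"alertness": 0.3, "minutes_driving": 95})
if should_be_proactive(ctx):
    print(choose_version(choose_activity(ctx)))
```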
The Intuition team decided that in order to allow third-parties to build these interactions, it needed to develop specialized tools and a new language that would help designers — not programmers — create the outlines of these interactions for the platform.
Unlike ElliQ, though, the assistant in the car doesn’t move, of course. In Toyota’s example, the car uses lights and a small screen to provide additional interactions with the driver. As Skuler also told me, the company is already working with another automotive company to bring its Q platform to more cars, though he wasn’t ready to disclose this second automotive partner.
“Intuition Robotics is creating disruptive technology that will inspire companies to re-imagine how machines might amplify the human experience,” said Jim Adler, founding managing partner at Toyota AI Ventures, who will also join the company’s board of directors.
Intuition Robotics’ team doubled over the course of the last year and the company now has 85 employees, most of whom are engineers. The company has offices in Israel and San Francisco.
Unsurprisingly, the plans for the new funding focus on building out its assistant’s capabilities. “We’re the only company in the world that can create these context-based, nonlinear personalized interactions that we call a digital companion,” Skuler told me. “We assume people will start doing similar things. There’s a lot more work to do. […] A big part of the work is to increase our research activities and increase the tools and the performance of the runtime engine for the agent.” He also told me that the team continues to gather data about ElliQ so it can prove that it improves the quality of life of its users. And in addition to this, the company obviously also will continue to build out its work around cars.
“We cracked something nobody’s cracked before,” Skuler said. “And now we’re on the verge of getting value out of it. And it will be hard work because this is not an app. It’s really hard work but we want to capture that value.”

Fiddler Labs, SRI and Berkeley experts open up the black box of machine learning at TC Sessions: Robotics+AI

As AI permeates the home, work, and public life, it’s increasingly important to be able to understand why and how it makes its decisions. Explainable AI isn’t just a matter of hitting a switch, though; experts from UC Berkeley, SRI, and Fiddler Labs will discuss how we should go about it on stage at TC Sessions: Robotics+AI on March 3.
What does explainability really mean? Do we need to start from scratch? How do we avoid exposing proprietary data and methods? Will there be a performance hit? Whose responsibility will it be, and who will ensure it is done properly?
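For a concrete taste of what explainability can look like in practice, one simple model-agnostic technique is permutation importance: shuffle a single input feature and measure how much the model’s accuracy drops. The toy sketch below is a generic illustration of that idea, not any panelist’s specific method.

```python
# Permutation importance: a feature matters if shuffling it hurts accuracy.
# Toy, generic illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the label depends on feature 0; feature 1 is pure noise.
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)

def model_predict(X):  # stand-in for a trained black-box model
    return (X[:, 0] > 0).astype(int)

baseline = (model_predict(X) == y).mean()

for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    drop = baseline - (model_predict(X_perm) == y).mean()
    print(f"feature {j}: accuracy drop {drop:.2f}")
# feature 0 shows a large drop (~0.5); feature 1 shows ~0.0.
```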
On our panel addressing these questions and more will be two experts, one each from academia and private industry.
Trevor Darrell is a professor at Berkeley’s Computer Science department who helps lead many of the university’s AI-related labs and projects, especially those concerned with the next generation of smart transportation. His research group focuses on perception and human-AI interaction, and he previously led a computer vision group at MIT.
Krishna Gade has passed in his time through Facebook, Pinterest, Twitter and Microsoft, and has seen firsthand how AI is developed privately — and how biases and flawed processes can lead to troubling results. He co-founded Fiddler as an effort to address problems of fairness and transparency by providing an explainable AI framework for enterprise.
Moderating and taking part in the discussion will be SRI International’s Karen Myers, director of the research outfit’s Artificial Intelligence Center and an AI developer herself focused on collaboration, automation, and multi-agent systems.
Save $50 on tickets when you book today. Ticket prices go up at the door, and tickets are selling fast. We have two (yes, two) Startup Demo Packages left – book your package now and get your startup in front of 1,000+ of today’s leading industry minds. Packages come with four tickets – book here.


European founders look to new markets, aim for profitability

To get a better sense of what lies ahead for the European startup ecosystem, we spoke to several investors and entrepreneurs in the region about their impressions and lessons learned from 2019, along with their predictions for 2020.
We asked for blunt responses — and we weren’t disappointed.
These responses have been edited for clarity and length.
Kenny Ewan, founder/CEO, Wefarm (London)
I’ve often been faced with questions around how we can generate revenue in markets like Africa. There has historically been a view that you can do something good, or you can generate revenue — and companies that talk about developing markets usually get squarely lumped into the former. While mission-led companies achieving tremendous growth have been talked about for a while, 2019 was the year I felt conversations with investors and others really begin to shift toward that reality, thanks to more and more proof points being delivered by startups across the board.
As more and more businesses begin to realize they don’t need to wait for the internet to descend from the sky for these markets to become hubs of commerce and innovation — and see that it’s already happening — I believe 2020 will continue to witness more and more historic tech companies shifting their focus to markets like Africa and that there will be more coverage and discussion as a result.

The engineers behind Google’s Bookbot have launched a delivery robot startup

The engineers behind Google’s short-lived Bookbot — a robot created within the company’s Area 120 incubator for experimental products — have launched their own startup to bring the sidewalk delivery bot back to life.
The secretive startup called Cartken was formed in fall 2019 after Google shuttered an internal program to develop a delivery robot — a move that was prompted by the tech giant’s decision to scale back efforts to compete with Amazon in shopping.
Unlike Amazon, which acquired robot maker Dispatch to help build its Scout delivery device, Google harnessed the talent of its own engineers and logistics experts to develop a sidewalk robot within the walls of Google’s Area 120 incubator. But the project faltered after just a few months, as Google pulled back from retail delivery.
Cartken was founded by engineers of the Bookbot program as well as a logistics expert who was once in charge of operations at Google Express, the service integrated last year into Google Shopping.
Area 120 is a low-key version of Google’s famous X moonshot factory, a place where small teams rapidly build new products that they have a personal interest in. Since 2016, Area 120 has produced around a dozen apps and services, including a crowdsourced transit app, an educational video platform, a virtual customer service agent for small businesses, and an emoji-based guessing game.
BookBot stood out as Area 120’s first publicly announced hardware project. The Google project incubator formed a group in early 2018 to explore autonomous robots. Around the same time, the city of Mountain View decided to allow pilot programs for personal delivery devices (PDDs).
Discussions between Area 120 and Mountain View began in the summer of 2018 and by late February 2019, the BookBot began operating one day a week for the city’s library system.
Apart from its book-collecting duties, the electric six-wheeled device worked in a similar way to the delivery robots made by Amazon, Starship Technologies and Marble. The 32-inch tall BookBot, which is pictured below, was equipped with a suite of sensors for autonomous operation and could be remote controlled by a human operator if needed. The robot was designed to carry up to 50 pounds of cargo, and traveled on sidewalks at a maximum speed of 4.5 miles per hour.
The Google Bookbot. Photo from Google
Users could request a pick-up of books via the library’s website. The BookBot would then navigate to their home and text them when it had arrived. Once the user had deposited the books in the cargo compartment, the robot would return to the library, where workers would check in the materials.
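That request-to-return loop amounts to a small state machine. The sketch below is our reconstruction of the workflow as described; all names are hypothetical, since Google never published BookBot’s software.

```python
# Hypothetical reconstruction of the described BookBot pickup workflow:
# request -> navigate to home -> text the user -> load books -> return.
from enum import Enum, auto

class BotState(Enum):
    IDLE = auto()
    EN_ROUTE = auto()
    AWAITING_DEPOSIT = auto()
    RETURNING = auto()

class PickupBot:
    def __init__(self) -> None:
        self.state = BotState.IDLE

    def handle_request(self, address: str) -> None:
        assert self.state is BotState.IDLE
        print(f"Navigating to {address}")
        self.state = BotState.EN_ROUTE

    def on_arrival(self, phone: str) -> None:
        assert self.state is BotState.EN_ROUTE
        print(f"Texting {phone}: your pickup robot has arrived")
        self.state = BotState.AWAITING_DEPOSIT

    def on_cargo_loaded(self) -> None:
        assert self.state is BotState.AWAITING_DEPOSIT
        print("Books secured; returning to the library for check-in")
        self.state = BotState.RETURNING

bot = PickupBot()
bot.handle_request("123 Castro St")   # user requests a pickup online
bot.on_arrival("+1-650-555-0100")     # robot texts on arrival
bot.on_cargo_loaded()                 # user deposits books; bot returns
```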
Google team leader Christian Bersch told SiliconValley.com at the time that the pilot project would last nine months. “Right now, we just want to learn how this would work, how it operates and what kinds of problems we’d run into,” he said.
On its first run on city sidewalks, “people thought it was super cool, and were breaking out their cameras,” Tracy Gray, Mountain View’s Library Services Director, told TechCrunch. “There were no accidents, no technical issues and no vandalism.”
The biggest problem wasn’t interest or operations. It was Google.
The BookBot fell far short of its nine-month pilot. The project quietly ended in June, after less than four months. The BookBot was actually operational in Mountain View for only 12 days, not including two days missed for rain. It covered a total of 60 miles, and served just 36 users, Gray said.
Gray does not know why Area 120 cancelled the BookBot. “It was definitely a benefit for library customers and a great project all around, but I believe Google’s Area 120 went in another direction,” she said.
Area 120 has never explained why it canceled BookBot. Google didn’t comment for this article.
However, BookBot’s demise coincided with a strategic shift within Google. In May, just a month before BookBot ended, Google merged its online shopping service Google Express into Google Shopping, essentially conceding that it could not compete with the retail giants of Amazon and Walmart. As its retail efforts faded, Google spun out its Project Wing drone delivery technology and suspended the BookBot’s development.
That wasn’t the end of the little robot. Bersch left Google in July, along with Jake Stelman, the co-founder of Area 120’s autonomous robotics group, according to LinkedIn profile data. In October, the engineers incorporated Cartken Inc., along with Ryan Quinlan, an operations manager who had worked at both Amazon and Google Express, and another software engineer from the BookBot team.
Cartken is still very much in stealth mode, and declined to comment on this story, as did Google. However, a Korean trade delegation to Silicon Valley in October was told the company had “developed a delivery robot that combines unmanned autonomous vehicles and artificial intelligence.”
Cartken’s website says that it will offer “low-cost delivery through automation,” with an earlier version specifying “low-cost last-mile delivery.” A semi-obscured product image appears to show a matte black variant of the BookBot with wheels, lid, and head- and taillights.
Neither Google nor Cartken would say whether the startup uses any technologies developed at Area 120, nor whether Google was funding the young company.
Google has a tradition of spawning autonomous vehicle companies. The head of its self-driving car project, Chris Urmson, went on to form Aurora, now valued at more than $2.5 billion, while two other Google engineers formed Nuro, which unveiled a road-legal delivery robot last week. But the process of driving away from Google hasn’t always gone as smoothly.
In 2016, a group of engineers led by Anthony Levandowski left Google’s self-driving car program to form their own autonomous logistics company, Otto, that was quickly acquired by Uber. That led to an epic trade secrets battle that Levandowski is still fighting.

NVIDIA VP Claire Delaunay will discuss empowering robotics startups at TC Sessions: Robotics+AI

Robotics, AI and automation are the future of business. This much seemingly everyone can agree on. Finding the resources to implement these technologies, on the other hand, is a different question entirely. Resource-strapped startups and even established companies with sufficient know-how may find it difficult to adapt to changing technologies.
NVIDIA’s VP of Engineering Claire Delaunay will be returning to TC Sessions: Robotics + AI at UC Berkeley on March 3 to discuss the component giant’s work in AI and robotics. After showing off NVIDIA’s Isaac platform at last year’s event, the former Google robotics lead will discuss how the company is working to create a platform to help accelerate robotics development.
Delaunay will be joined by Scott Phoenix, the co-founder and CEO of Vicarious, which works to develop AI and general intelligence for robotics, and Joshua Wilson, co-founder and CEO of Freedom Robotics, which works to help companies launch and scale robotics systems.
Tickets are on sale for $345, and you’ll save $50 when you book now, as prices go up at the door.
We have two Startup Demo Packages left for $2200. Each package comes with a demo space and 4 tickets to the show. Grab yours here.
Student tickets are still available at the super-discounted $50 rate when you book here.


A/O PropTech offers giant sandbox for startups disrupting real estate

A/O PropTech is a European VC that officially launched last week after raising €250 million in what it describes as “permanent capital” to invest in companies disrupting the €230 trillion real estate industry.
This approach sees the firm structured more like a corporation with various shareholders, rather than a traditional venture capital fund with a typical life cycle of two to five years, and positions A/O PropTech as stage-agnostic. The group invests from Series A to later-growth stages and claims to be more patient with regard to the timing of any exits.
A/O PropTech’s investors are described as some of the largest institutional real-estate companies in Europe, holding a pool of residential, commercial and hospitality assets. Notably, proptech companies that the VC backs can potentially leverage these assets as a sandbox to test, pilot and “fast-track the commercial and operational scale” of their offerings, according to a company statement. To date, the firm reports eight proptech companies in its portfolio, spanning 14 countries and serving 200K real estate units.
I put some questions to A/O PropTech founder and CEO Gregory Dewerpe to drill down into the firm’s investment thesis and how it hopes to stand out from other firms investing in the space. More broadly, Dewerpe discusses the billion-dollar opportunities he believes are there for the taking in property tech over the next decade and beyond.
A/O PropTech founder and CEO Gregory Dewerpe

Students: Score $50 tickets to TC Sessions: Robotics + AI 2020

Are you a student enthralled by robots and the AI that powers them? Do you live within striking distance of UC Berkeley? Ready to learn from the greatest minds and makers in the field? Then we want you at TC Sessions: Robotics + AI 2020 on March 3 at UC Berkeley’s Zellerbach Hall.
We’re investing in the next generation of makers by making our day-long conference super-affordable. Buy your $50 student pass right here.
If you’re not familiar with our Robotics/AI session, listen up. It’s a full day of interviews, panel discussions, Q&As, workshops and demos. And it’s all dedicated to these two world-changing technologies. Last year, we hosted 1,500 attendees. We’re talking the industry’s top leaders, founders, investors, technologists, executives and engineering students.
As a student, you’ll rub elbows with the greats. You’ll have ample time to learn and network. Who knows? You might impress the pants off the right person and land an internship, a prime job — or find the co-founder of your dreams.
If networking feels like a chore, never fear. CrunchMatch, our free business matching platform, removes the pain and adds efficiency. Win-win!
You’ll hear from our great slate of speakers, including VCs Eric Migicovsky (Y Combinator), Kelly Chen (DCVC) and Dror Berman (Innovation Endeavors). You’ll also hear from plenty of founders, including experts focused on agricultural, construction and human assistive robotics. And that’s just for starters.
Here are a few more examples of presentations you’ll find in our program agenda.
Fostering the Next Generation of Robotics Startups: Robotics and AI are the future of many or most industries, but the barrier to entry is still difficult to surmount for many startups. Joshua Wilson (co-founder & CEO, Freedom Robotics) and Scott Phoenix (co-founder & CEO, Vicarious) will discuss the challenges of serving robotics startups and companies that require robotics labor, from bootstrapped startups to large-scale enterprises.
Live Demo from the Stanford Robotics Club: It just wouldn’t be a robotics conference without the opportunity to see robots in action. We’ve got you covered.
Pitch Night Pitch-off Finalists: Early-stage companies, hand-picked by TechCrunch editors, will take the stage and have five minutes to present their wares.
Saving Humanity from AI: UC Berkeley’s Stuart Russell argues in his acclaimed new book, “Human Compatible,” that AI will doom humanity unless technologists fundamentally reform how they build AI algorithms.
TC Sessions: Robotics + AI 2020 takes place on March 3. We’re making the event affordable for students, because there’s no future tech without them. Invest $50 in your tomorrow — buy your student ticket today, and join us in Berkeley!
Is your company interested in sponsoring or exhibiting at TC Sessions: Robotics & AI 2020? Contact our sponsorship sales team by filling out this form.


UK public sector failing to be open about its use of AI, review finds

A report into the use of artificial intelligence by the UK’s public sector has warned that the government is failing to be open about automated decision-making technologies which have the potential to significantly impact citizens’ lives.
Ministers have been especially bullish on injecting new technologies into the delivery of taxpayer funded healthcare — with health minister Matt Hancock setting out a tech-fuelled vision of “preventative, predictive and personalised care” in 2018, calling for a root and branch digital transformation of the National Health Service (NHS) to support piping patient data to a new generation of “healthtech” apps and services.
He has also personally championed a chatbot startup, Babylon Health, that’s using AI for healthcare triage — and which is now selling a service in to the NHS.
Policing is another area where AI is being accelerated into UK public service delivery, with a number of police forces trialing facial recognition technology — and London’s Met Police switching over to a live deployment of the AI technology just last month.
However, the rush by cash-strapped public services to tap AI ‘efficiencies’ risks glossing over a range of ethical concerns about the design and implementation of such automated systems — from fears about embedding bias and discrimination into service delivery and scaling harmful outcomes, to questions of consent around access to the data-sets used to build AI models and of human agency over automated outcomes. All of these concerns require transparency into AI systems if there is to be accountability over automated outcomes.
The role of commercial companies in providing AI services to the public sector also raises additional ethical and legal questions.
Only last week, a court in the Netherlands highlighted the risks for governments of rushing to bake AI into legislation, after it ruled that an algorithmic risk-scoring system implemented by the Dutch government to assess the likelihood that social security claimants will commit benefits or tax fraud breached their human rights.
The court objected to a lack of transparency about how the system functions, as well as the associated lack of controllability — ordering an immediate halt to its use.
The UK parliamentary committee which reviews standards in public life has today sounded a similar warning — publishing a series of recommendations for public sector use of AI and warning that the technology challenges three key principles of service delivery: openness, accountability and objectivity.
“Under the principle of openness, a current lack of information about government use of AI risks undermining transparency,” it writes in an executive summary.
“Under the principle of accountability, there are three risks: AI may obscure the chain of organisational accountability; undermine the attribution of responsibility for key decisions made by public officials; and inhibit public officials from providing meaningful explanations for decisions reached by AI. Under the principle of objectivity, the prevalence of data bias risks embedding and amplifying discrimination in everyday public sector practice.”
“This review found that the government is failing on openness,” it goes on, asserting that: “Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government.”
In 2018 the UN’s special rapporteur on extreme poverty and human rights raised concerns about the UK’s rush to apply digital technologies and data tools to socially re-engineer the delivery of public services at scale — warning then that the impact of a digital welfare state on vulnerable people would be “immense”, and calling for stronger laws and enforcement of a rights-based legal framework to ensure the use of technologies like AI for public service provision does not end up harming people.
Per the committee’s assessment, it is “too early to judge if public sector bodies are successfully upholding accountability”.
Parliamentarians also suggest that “fears over ‘black box’ AI… may be overstated” — and rather dub “explainable AI” a “realistic goal for the public sector”.
On objectivity, they write that data bias is “an issue of serious concern, and further work is needed on measuring and mitigating the impact of bias”.
The use of AI in the UK public sector remains limited at this stage, according to the committee’s review, with healthcare and policing currently having the most developed AI programmes — where the tech is being used to identify eye disease and predict reoffending rates, for example.
“Most examples the Committee saw of AI in the public sector were still under development or at a proof-of-concept stage,” the committee writes, further noting that the Judiciary, the Department for Transport and the Home Office are “examining how AI can increase efficiency in service delivery”.
It also heard evidence that local government is working on incorporating AI systems in areas such as education, welfare and social care — noting the example of Hampshire County Council trialling Amazon Echo smart speakers in the homes of adults receiving social care as a tool to bridge the gap between visits from professional carers. The committee also points to a Guardian article which reported that one-third of UK councils use algorithmic systems to make welfare decisions.
But the committee suggests there are still “significant” obstacles to what it describes as “widespread and successful” adoption of AI systems by the UK public sector.
“Public policy experts frequently told this review that access to the right quantity of clean, good-quality data is limited, and that trial systems are not yet ready to be put into operation,” it writes. “It is our impression that many public bodies are still focusing on early-stage digitalisation of services, rather than more ambitious AI projects.”
The report also suggests that the lack of a clear standards framework means many organisations may not feel confident in deploying AI yet.
“While standards and regulation are often seen as barriers to innovation, the Committee believes that implementing clear ethical standards around AI may accelerate rather than delay adoption, by building trust in new technologies among public officials and service users,” it suggests.
Among 15 recommendations set out in the report is a call for a clear legal basis to be articulated for the use of AI by the public sector. “All public sector organisations should publish a statement on how their use of AI complies with relevant laws and regulations before they are deployed in public service delivery,” the committee writes.
Another recommendation is for clarity over which ethical principles and guidance apply to public sector use of AI — with the committee noting that three different sets of principles could currently apply to the public sector, a situation that is generating confusion.
“The public needs to understand the high level ethical principles that govern the use of AI in the public sector. The government should identify, endorse and promote these principles and outline the purpose, scope of application and respective standing of each of the three sets currently in use,” it recommends.
It also wants the Equality and Human Rights Commission to develop guidance on data bias and anti-discrimination to ensure public sector bodies’ use of AI complies with the UK Equality Act 2010.
The committee is not recommending a new regulator should be created to oversee AI — but does call on existing oversight bodies to act swiftly to keep up with the pace of change being driven by automation.
It also advocates for a regulatory assurance body to identify gaps in the regulatory landscape and provide advice to individual regulators and government on the issues associated with AI — supporting the government’s intention for the Centre for Data Ethics and Innovation (CDEI), which was announced in 2017, to perform this role. (A recent report by the CDEI recommended tighter controls on how platform giants can use ad targeting and content personalization.)
Another recommendation is around procurement, with the committee urging the government to use its purchasing power to set requirements that “ensure that private companies developing AI solutions for the public sector appropriately address public standards”.
“This should be achieved by ensuring provisions for ethical standards are considered early in the procurement process and explicitly written into tenders and contractual arrangements,” it suggests.
Responding to the report in a statement, shadow digital minister Chi Onwurah MP accused the government of “driving blind, with no control over who is in the AI driving seat”.
“This serious report sadly confirms what we know to be the case — that the Conservative Government is failing on openness and transparency when it comes to the use of AI in the public sector,” she said. “The Government is driving blind, with no control over who is in the AI driving seat. The Government urgently needs to get a grip before the potential for unintended consequences gets out of control.
“Last year, I argued in parliament that Government should not accept further AI algorithms in decision making processes without introducing further regulation. I will continue to push the Government to go further in sharing information on how AI is currently being used at all levels of Government. As this report shows, there is an urgent need for practical guidance and enforceable regulation that works. It’s time for action.”

White House reportedly aims to double AI research budget to $2B

The White House is pushing to dedicate an additional billion dollars to fund artificial intelligence research, effectively doubling the budget for that purpose outside of Defense Department spending, Reuters reported today, citing people briefed on the plan. Investment in quantum computing would also receive a major boost.
The 2021 budget proposal would reportedly increase AI R&D funding to nearly $2 billion, and quantum to about $860 million, over the next two years.
The U.S. is engaged in what some describe as a “race” with China in the field of AI, though unlike most races this one has no real finish line. Instead, any serious lead means opportunities in business and military applications that may grow to become the next globe-spanning monopoly, a la Google or Facebook — which themselves, as quasi-sovereign powers, invest heavily in the field for their own purposes.
Simply doubling the budget isn’t a magic bullet to take the lead, if anyone can be said to have it, but deploying AI to new fields is not without cost, and an increase in grants and other direct funding will almost certainly enable the technology to be applied more widely. Machine learning has proven to be useful for a huge variety of purposes and for many researchers and labs is a natural next step — but expertise and processing power cost money.
It’s not clear how the funds would be disbursed; it’s possible existing programs like federal Small Business Innovation Research awards could be expanded with this topic in mind, or direct funding to research centers like the National Labs could be increased.

Quantum computing’s ‘Hello World’ moment

Research into quantum computing and related fields is likewise costly. Google’s milestone last fall of achieving “quantum supremacy,” or so the claim goes, is only the beginning for the science, and neither the hardware nor the software involved has much in the way of precedents.
Furthermore, quantum computers as they exist today, and for the foreseeable future, have very few valuable applications, meaning pursuing them is an investment only in the most optimistic sense. However, government funding via SBIR awards and similar grants is intended to de-risk exactly this kind of research.
The proposed budget for NASA is also expected to receive a large increase in order to accelerate and reinforce various efforts within the Artemis Moon landing program. It was not immediately clear how these funds would be raised or from where they would be reallocated.

Trump said to propose roughly $3 billion NASA budget boost for 2021
