What I know about privacy and AI: Rand Hindi
The future holds so many innovations to make our lives easier, but we are often told that they will come at a price.
We'll need to choose between time-saving and control, comfort and privacy. Everybody wants our data, and we'd better get used to giving it away.
At the Hello Tomorrow Summit on October 13-14 in Paris, the high-tech innovations that will change our civilization, from DNA storage to automated vehicles, were on display. On the second biggest stage, a panel tried to answer whether those future innovations will make the world better or scarier.
French-Lebanese entrepreneur Rand Hindi was on that panel. His company, Snips, builds software development kits (SDKs) that enable anyone to create their own artificial intelligence (AI) assistants, but with a very strong focus on privacy.
He believes protecting data shouldn't depend on product makers' goodwill.
AI requires data, and data protection. The key to making AI assistants work is to give them a deep understanding of people's lives. If you ask an assistant to find a good Japanese restaurant near your hotel, it needs to know where your hotel is, so it needs access to your emails, but also to your chat messages, location, calendar, and social networks. If we were to centralize so much data on our servers, we would become the prime target for every hacker and government on the planet. So for us, privacy is not just something that's ethical, it's something that is necessary to make people feel safe about the data we're collecting.
People shouldn't care about privacy. The first question non-tech people have is usually whether those AI assistants will be useful and smart enough to beat Siri. When I explain how you can do that, then obviously privacy becomes a concern. I don't think people should care about privacy, not because it's not important but because it should be the default. It should be assumed that you have privacy. Some of the big companies today are trying to make you believe that you have to make a trade-off between AI and privacy. That is false.
Privacy is possible with the right business model. The reason the big companies are not protecting data is that their business models require them to access it. The implicit value of privacy, and the trust that it builds with your users, is such that you could probably find other ways to monetize that are a lot better.
Privacy pays. Our business model doesn't rely on accessing data. We're a technology provider: we monetize by providing our technology to companies that want to build their own assistant. You can also continue to monetize the service around the data. If you want to do app recommendation, for instance, you can run the algorithm on the user's smartphone, so you don't need the data on the company's server to monetize. You can do memberships as well. The consumer demand for privacy is just starting to emerge; we've seen that with messaging apps and password managers. People are willing to pay for privacy.
Implementing privacy is easy. It shouldn't be your job as a product maker to figure out the AI and the privacy. It's our job as a technology company. I think there are two things you can do. The first one is to push as much of the computation as possible onto the user's device directly, so the data stays on that device. That way, if somebody wants to hack into the data of a hundred million people, they will need to hack into a hundred million phones. The second thing is cryptography. You can compute on encrypted data directly. When you combine the two, you have no reason to access users' data.
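To make the second idea concrete, here is a minimal sketch of computing on masked data, in the spirit of secure aggregation. This is a generic toy illustration, not Snips' actual implementation: each user's device masks its private value with pairwise random masks that cancel out when the server adds everything up, so the server learns only the aggregate and never any individual value. All names (`pairwise_masks`, `mask_value`, `server_aggregate`) are hypothetical.

```python
import random

MODULUS = 2**32  # all arithmetic is done modulo a fixed power of two

def pairwise_masks(n_users, seed=0):
    """Masks r[i][j] shared between users i and j (r[i][j] == r[j][i]).
    In a real protocol each pair would derive this from a shared secret;
    here a seeded RNG stands in for that key exchange."""
    rng = random.Random(seed)
    r = [[0] * n_users for _ in range(n_users)]
    for i in range(n_users):
        for j in range(i + 1, n_users):
            r[i][j] = r[j][i] = rng.randrange(MODULUS)
    return r

def mask_value(i, value, masks):
    """Runs on user i's device: add the mask shared with each j > i,
    subtract the mask shared with each j < i. The result looks random."""
    masked = value
    for j in range(len(masks)):
        if j > i:
            masked += masks[i][j]
        elif j < i:
            masked -= masks[i][j]
    return masked % MODULUS

def server_aggregate(masked_values):
    """The server only ever sees masked values; the pairwise masks
    cancel in the sum, leaving just the true total."""
    return sum(masked_values) % MODULUS

values = [12, 7, 30]  # private per-user values, never sent in the clear
masks = pairwise_masks(len(values))
masked = [mask_value(i, v, masks) for i, v in enumerate(values)]
print(server_aggregate(masked))  # prints 49, the true sum
```

The design point mirrors the interview: the sensitive step (masking) runs on each device, and the server-side computation happens only on data it cannot read individually.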
Feature image via Phil Marden.