When David Weinberger was pursuing his PhD in philosophy and studying the world’s great thinkers, he came to believe that people become who they are through the tools they use, and that these tools shape humanity’s understanding of its place in the world. These days, those tools are the smartphones and supercomputers that, seemingly, are surpassing the abilities of human beings. PW spoke with Weinberger about his new book, Everyday Chaos (Harvard Business Review Press, May), which explores modern technology’s implications for the business world and the ways in which humans and machines, sometimes uneasily, coexist.

You write that machine learning opens doors for innovation and also endangers it. How are both true at once?

It’s not always visible, but when you use your mobile phone, you’re using something powered by machine learning, even if it’s just to get the weather report, use the type-ahead feature, or rely on the spam filter. This is a revolutionary technology, not just in the sorts of things it enables us to do, but in how it tells us the world works. Machine learning doesn’t start with a conceptual model of a domain. It begins with data, and it’s not tied to a model simple enough for humans to understand. One of the dangers is that machine learning is only as good as the data that informs it, and data comes from a world that is full of biases and bad assumptions. Unless you’re careful about what your computer learns, it will be tainted by your own biases and stereotypes; data is inevitably contaminated by human failings. This is the mathematical case for diversity. You need smart engineers who can tune the system, and you need to make sure that you’ve involved a diverse and representative set of people to collect similarly representative data.
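Weinberger’s point that a system trained on biased history reproduces that bias can be made concrete with a toy sketch (this example is not from the book; the groups, records, and numbers are all invented for illustration). A “model” that does nothing more than learn approval frequencies from past decisions will faithfully carry whatever skew those decisions contain:

```python
# Illustrative sketch, not from the book: a trivial "model" that learns
# approval rates from historical records. If the history is biased, the
# learned rates are biased in exactly the same way. Groups and data are
# entirely made up.
from collections import defaultdict

history = [
    ("group_x", True), ("group_x", True), ("group_x", True), ("group_x", False),
    ("group_y", True), ("group_y", False), ("group_y", False), ("group_y", False),
]

def learn_rates(records):
    """Learn per-group approval frequency from past (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += ok  # True counts as 1, False as 0
    return {g: approved[g] / totals[g] for g in totals}

rates = learn_rates(history)
print(rates)  # {'group_x': 0.75, 'group_y': 0.25} — past bias becomes the model
```

Nothing in the code is prejudiced; the skew lives entirely in the data, which is why curating who collects the data, and what it records, matters.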

What are the moral implications of using data to navigate an increasingly complex business world?

Let’s say we have a machine learning system that is leading us to an immoral conclusion: say, that a bank should give home loans only to rich men because rich men have more money. We can change this. We can tell the system to accommodate a higher threshold of risk, to make room for people who are not super-rich. Through very practical problems, machine learning systems are teaching us to have discussions about what our values are. It’s easy to make decisions when we’re designing tornado warnings; when it comes to more value-laden questions, such as what sort of cities we should live in, what sort of hiring practices we should adhere to, and what sort of education we should offer, it’s up to us as humans to decide what our values are, and to have a difficult conversation about what trade-offs we’re willing to make to achieve those values.
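The “higher threshold of risk” Weinberger describes is, mechanically, a single number a human chooses. A minimal sketch (names, scores, and thresholds are all hypothetical, not from the book) shows how moving that number is a value judgment, not a modeling fact:

```python
# Minimal sketch of a human-chosen risk threshold, assuming a model has
# already assigned each applicant a predicted risk score in [0, 1].
# All applicants and numbers are invented for illustration.

def approve(applicants, max_risk):
    """Approve every applicant whose predicted risk is within tolerance."""
    return [a["name"] for a in applicants if a["risk"] <= max_risk]

applicants = [
    {"name": "A", "risk": 0.05},  # very low predicted risk
    {"name": "B", "risk": 0.20},  # moderate predicted risk
    {"name": "C", "risk": 0.35},  # higher predicted risk
]

strict = approve(applicants, max_risk=0.10)   # only the lowest-risk applicant
relaxed = approve(applicants, max_risk=0.25)  # a looser value choice admits more people

print(strict)   # ['A']
print(relaxed)  # ['A', 'B']
```

The model produces the scores, but where to draw the line, and what trade-off between risk and inclusion that line encodes, is the discussion of values he is pointing at.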

In your book, you exhort the reader to “Make. More. Future.” What does this mean?

This is a book about the way the future works. Your job is to figure out which of many possible futures you want, and to do everything you can to make it the real one. A business driven by the internet no longer looks to winnow infinite futures down to a few. We’re engaging in activities that 30 years ago would have seemed weird and counterproductive, designed not so that we can anticipate every customer need, but so that we can interoperate in unexpected ways. This is a fundamental change in how we think the future works, and it’s becoming a core strategy for companies big and small. We’re reversing the flow of the futures: rather than narrowing them, we’re opening up more possibilities.
