The Daily Heller: Humans Teach Computers, Not the Other Way Around!

Josh Clark is a human. He is the founder of the New York design studio Big Medium (originally called Global Moxie from 2002–2015), which specializes in future-friendly interfaces for artificial intelligence, connected devices and responsive websites. He teaches that humans teach while computers learn, not the other way around. We are not slaves to technology. Equal rights for all. He is the author of several books, including Designing for Touch (A Book Apart) and Tapworthy: Designing Great iPhone Apps (O’Reilly). He is one of the rare speakers on what’s next for digital interfaces who avoids clichés and hardboiled jargon; he has keynoted 100+ events in over 20 countries and offered countless more private workshops and executive sessions … and he is good!

Before jumping into cyberspace, Clark was a producer of PBS programs at Boston’s WGBH. In 1996, he created the uber-popular “Couch-to-5K” (C25K) running program, which has helped millions of skeptical would-be exercisers take up running. (His motto is the same for fitness as it is for software user experience: “no pain, no pain.”) At a recent lecture at SVA MFA Design / Designer as Author, he made me believe that there was more to the future than pandemics and wannabe despots. In fact, after listening to him speak, I shifted from an angry technophobe to an enthusiastic adherent. I asked Clark if he'd share his optimistic words and positive manner with us today (and for tomorrow).

Even science fiction writers tend to be wary of future technology running the world. You have a much different attitude, especially about machine learning. Where does your optimism come from?

There’s a stubborn assumption that as technology becomes smarter, it will inevitably replace human judgment and agency. That leads to fears that artificial intelligence will take our jobs, decide everything for us, rule the world. And yeah, that’s a pretty bleak vision.

But that’s not the path that I see, and it’s not even what machine learning is particularly good at. Instead of replacing human judgment, machine learning is best at focusing it. And this is where I’m optimistic. The machines can clear away noise to reveal where to apply our smarts, our creativity, our uniquely human talents. Put another way: Machine learning can help people do what we do best, by letting machines do what they do best. They are almost never the same thing.

Once the desktop computer was introduced, advertisements and PR suggested that we put our faith in its capacities and leave the designing to humans. Will there come a day when the computer outstrips our own faculties?

Well, they already do outstrip us in certain ways. In the case of machine learning, the robots are much, much better than we are at finding needle-in-a-haystack patterns in vast troves of data. In fact, machine learning is essentially pattern matching at unprecedented scale. It peers into ridiculously enormous datasets and extracts patterns and associations. That helps us categorize data, flag problems or outliers, and even make predictions for common scenarios. That means machines are great at all kinds of things that we’re terrible at—tasks that are time-consuming, repetitive, detail-oriented, error-prone and ultimately joyless. But the machines don’t think. They don’t have real intelligence or discernment. They simply understand what’s “normal” in a dataset.

This means they can also approximate creativity by finding patterns in the way that we do things and then try to write, or speak, or paint, or make music the way we would do it. For now, those efforts tend to be fairly nonsensical, beyond certain very narrow applications. The machines don’t think, they don’t reason, they don’t deduce. So while they’re great at processing information, they’re pretty awful in the realms of wisdom, reason, creativity or judgment—the essential stuff that makes us good at design, or good at being human for that matter. I personally think that’s unlikely to change anytime soon.

Instead, machine learning’s ability to detect common patterns—and departures from those patterns—means that it’s really good at calling attention to things that deserve our attention. The opportunity is to build systems that make us better at what we do, rather than replace us.

Here’s an example. We worked with a healthcare company that wanted to help radiologists do their jobs better. Turns out radiologists spend much of their time simply doing triage—looking for some kind of abnormality in X-rays and scans—before they bring their real expertise to bear: figuring out what that abnormality means for the patient. We were able to get computer vision to do a huge amount of that triage—detail-oriented, error-prone, joyless triage—to identify scans that were outside of normal in some way. And then, the machines brought the “interesting” cases to the doctors, so that they could apply their actual expertise. So this does replace some of our work—the joyless tasks we’re bad at—in the service of celebrating and focusing the work that we do best and which is most uniquely human. The machines “set the table” for our most creative efforts. They become companions, not replacements.
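To make that triage pattern concrete, here is a minimal sketch in Python using scikit-learn's IsolationForest to flag items that fall outside "normal" for human review. The feature vectors, thresholds and data are hypothetical stand-ins, not the healthcare system Clark describes.

```python
# A minimal sketch of machine-assisted triage: learn what "normal" looks like,
# then flag the outliers so human experts spend their time on the interesting
# cases. All data here is synthetic; this is not the actual radiology system.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_scans = rng.normal(0, 1, size=(1000, 32))   # features from routine scans
incoming_scans = rng.normal(0, 1, size=(50, 32))   # today's new batch
incoming_scans[:3] += 4                            # a few abnormal-looking ones

# Fit on routine cases, then score the new batch against that baseline.
detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(normal_scans)

flags = detector.predict(incoming_scans)           # -1 = outlier, 1 = looks normal
for_review = np.where(flags == -1)[0]
print(f"Route {len(for_review)} of {len(incoming_scans)} scans to a radiologist:",
      for_review.tolist())
```

The design point is the hand-off: the model never renders a diagnosis; it only decides which cases earn a human's attention first.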

You mentioned that I’m optimistic, and you’re right. I choose optimism, and I say “choose” with intention. It’s a choice to lean into new technologies in ways that help instead of harm. I don’t think that will happen on its own. It takes decision and determination. And the risk is that, if we don’t decide for ourselves, the technology will decide for us. It will auto-pilot us into a future that we haven’t explicitly chosen, and I think none of us want the future to be self-driving.

We’ve seen glimpses of what that could look like in the first generation of mainstream AI products; those products have shown us not only what’s possible, but what can go terribly wrong. Flaws and mistakes range from the comically trivial (looking at you, autocorrect) to the deadly serious. AI systems have wrecked entire lives with biased prison sentencing, bungled medical diagnoses, and plane-crashing autopilots. At an even larger scale, we’ve seen AI damage democracy itself by naively automating the broadcast of propaganda and hate speech with unprecedented reach and targeting. These examples show us the limits and dangers of these systems when they overreach.

But these are not entirely or even mostly technology problems. Instead, they reflect a still-developing understanding of how to put artificial intelligence to work in ways that are not only useful and meaningful, but also respectful and responsible.

That’s a calling that goes beyond the technical accuracy of a system and its data. In my opinion, that’s a designer’s calling.

You've advised that designers should treat machine learning as though they were working with 10 good interns, and treat the computer as though it were a puppy. What do you mean by these bold statements?

Credit where it’s due: Benedict Evans said that machine learning is “like having infinite interns.” And Matt Jones said that smart systems should be as smart as a puppy: “smart things that don’t try to be too smart and fail, and indeed, by design, make endearing failures in their attempts to learn and improve. Like puppies.”

There are two things that I love about both observations. First, they acknowledge that machine learning isn’t as clever as we sometimes assume it is. These systems do not have intelligence, or expertise, or logical inference. They simply offer pattern matching at a vast but ultimately child-like (or puppy-like) level.

Second, presentation matters. We know that these systems will sometimes fail, so let’s be honest in presenting that fact. Our work in designing these systems is to set realistic expectations, and then channel behavior in ways that match the system’s ability. That cushions people from surprises, and also makes them more forgiving of mistakes.

Contrast that with our current set of AI assistants—Alexa and Siri and Google Assistant. The expectation they set is, “you can ask me anything.” And as genuinely remarkable and capable as those systems are, they almost always let us down, because they can’t keep that foundational promise. The expectation is wrong, and they channel behavior in ways that don’t match what the system can actually deliver.

How might we bring a more productive humility to the way we present these systems? Thinking of them as puppies or capable interns, rather than omniscient answer machines, is a good place to start. The presentation of machine-generated results is as important as the underlying algorithm—maybe more so. Here again, this is a design challenge more than a data-science issue.

If machines teach other machines, isn't there a danger that they'll replicate our own flaws?

It’s a huge risk. It’s the old computer-science concept, “garbage in, garbage out.” The machines know only the data they’re given, and if we ask them to derive insights from bad data, then their recommendations and predictions will be all wrong. (Political pollsters may be feeling that particular pain right now, too.)

I’ve mentioned that machine learning is all about identifying what’s normal, and then predicting the next normal thing, or perhaps flagging things that aren’t normal. But what happens if our idea of “normal” is in fact garbage? Or what if we’ve trained the machines to optimize for certain things that, perhaps inadvertently, punish other considerations we care about?

Amazon built a system that used machine learning to sift through job applications and identify the most promising hires. They discovered it was biased against women. Their data came from 10 years of résumés, where the overwhelming majority were men. The system basically taught itself that men were preferable. Its naive understanding of the data said so.

In her book Weapons of Math Destruction, Cathy O’Neil calls this “codifying the past.” While you might think that removing humans from a situation would eliminate racism or stereotypes or any very human bias, the real risk is that we bake our bias—our past history—into the very operating system itself. Beyond hiring, we have algorithms involved in prison sentencing and loan evaluations.

We have to be really vigilant about this. The machines are very goal-oriented. We train their models by letting them know when an outcome—a recommendation, a prediction, a categorization—is right or wrong. In other words, we optimize them for certain outcomes, and then they chase “success.” Sometimes those choices have unintended consequences. The Amazon example optimized for certain employee characteristics in a way that de-optimized a value of gender diversity.
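A toy illustration of that risk, with entirely synthetic data (this is not Amazon's system, just the failure mode O’Neil describes): a model trained on historically skewed hiring decisions learns the skew as though it were signal.

```python
# "Codifying the past": train on biased historical outcomes and the model
# dutifully optimizes for the bias. Synthetic data, purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
skill = rng.normal(0, 1, n)        # the quality we actually care about
gender = rng.integers(0, 2, n)     # 0 or 1; irrelevant to the job itself

# Historical hiring decisions: driven by skill, but also favoring gender == 1.
hired = (skill + 1.5 * gender + rng.normal(0, 0.5, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)
print("learned weights (skill, gender):", model.coef_[0].round(2))
# The gender weight comes out far from zero: the model will keep recommending
# the same skewed outcomes unless someone asks what it was optimized for.
```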

So: What are we optimizing for? That’s a question of values and purpose, and a conversation that designers are well-equipped to participate in, through both our training and demeanor. It’s important to be clear and transparent about that optimization, about what a system is for. That understanding should be shared by the business behind the system, by the people who build it into a service and who evaluate that service’s effectiveness, and perhaps most of all by the customers who use it. It’s the responsibility of digital product designers to cultivate literacy in these systems and set appropriate expectations for what a system is built to deliver.

What, in fact, can we learn from our mechanical friends?

I think there are a couple of broad areas. First, they hold up a mirror to us—sometimes a dark mirror. If there’s any silver lining at all to some of the awful outcomes we’ve seen in these systems delivering racist or sexist or similarly damaging results, it’s that they surface problems that must be addressed. As I’ve said, these systems can be powerful in how and where they focus our attention, and that goes for issues at the cultural or systemic level.

The machines surface bias naively, without obfuscation. They reveal trends and truths, good and bad, that lurk beneath the surface. Perhaps those are problems that we have to address in our data or in our mathematical models—but also in the way our culture operates or, in the case of that Amazon job-applicant example, in the limitations of our narrow professional circles.

When bias is revealed, we can act on it, and it gives us the signals for necessary change. We may not be able to eliminate bias from our data, but we can certainly surface that bias as a call to action. We can make adjustments to what we optimize for, not only in our technical systems, but in our culture at the personal, organizational or general level.

The second and related area is that they can surface invisible patterns that we haven’t noticed before. The machines see the world in a different way than we do, and often latch onto trends or clusters in data that we might not have considered in the way that we typically navigate the world.

For example, many museums are sharing digital scans of their collections with the machines to see what kind of patterns the robots find across the collection. Museum curators discover that machine learning often categorizes their collections in ways that no traditional art historian would. They cluster works of art in ways that defy era or school or medium. But in making these unusual connections, the machines spark a kind of creative friction for the curators, giving them fresh perspective on how they understand their domain. “There was something intriguing about the simple fact that the computer was seeing something different than what we were seeing,” said my friend Martha Lucy, a curator at the Barnes Foundation.

What are the dos and don'ts when it comes to how large our expectations should be with computers?

I have three principles that I think are especially important for us to understand, as both designers and consumers.

  1. The machines are weird. They don’t see the world as we do, and they sometimes misinterpret what might seem obvious to us. So strange or just plain wrong results can follow. The more that I work with machine-generated results and interactions, the more that my work has focused on designing for failure and uncertainty—anticipating and cushioning weird results.

  2. Machine learning is probabilistic. The machines don’t see things in black and white; it’s all statistical likelihood, and that likelihood is never 100%. So even if our digital products present an “answer,” the algorithm itself is only partially confident, and under the hood, the machines are pretty clear about just how confident they are. The responsibility of designers is to treat these results as signals and recommendations, not as absolutes (see the sketch after this list). Expressing the machines’ confidence (or uncertainty) in the answer will only bolster users’ trust in the system overall.

  3. The machines reinforce normal. They are all about the status quo, reflecting data as it exists. That makes them great at predicting what will happen next in the status quo, or flagging when something unexpected happens. But they’re only as strong as their data, and the “normal” that’s reflected in that data.
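As a small sketch of the second principle above, here is what it can look like to surface the model's confidence instead of a flat answer. The classifier, dataset and 80% threshold are placeholders, not anything Clark prescribes.

```python
# Treat model output as a probability, not an answer: present the prediction
# only when confidence is high, and surface the uncertainty when it isn't.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

def present(sample, confident_above=0.8):
    probs = model.predict_proba([sample])[0]
    best = int(probs.argmax())
    if probs[best] >= confident_above:
        return f"Looks like class {best} ({probs[best]:.0%} confident)"
    # Below the threshold, hedge instead of pretending certainty.
    return f"Not sure: best guess is class {best} at {probs[best]:.0%}"

print(present(X[0]))
```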

How should designers prepare themselves for the oncoming AI revolution?

The first thing is to recognize that the revolution is already here. Algorithms are everywhere, animating so many of the digital products we rely upon every day. Machine learning determines the news we see, the movies we watch, the products we buy, the way we drive home.

Even as AI has become pervasive in our individual lives, it is not yet widespread in product organizations. Only a select few companies have adopted machine learning as an ordinary part of doing business and building products. To those few, sprinkling AI into a product or business process is just part of the normal process of product design and software creation. For the managers, designers and developers at these companies, it’s already second nature to collect data around a product and apply machine learning to make that product better or more personalized.

While these organizations may be in the vanguard, we’ll all join them soon. There’s nothing magic about these companies or their underlying technologies. Machine-learning models are readily available, even as plug-and-play services that you can start using within the hour. You don’t have to build the whole thing yourself. The tools are already available to all—not just the tech giants.
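As one illustration of how plug-and-play this has become (Clark doesn't name a specific service), a pre-trained model from Hugging Face's transformers library can be downloaded and running in a handful of lines:

```python
# A readily available, pre-trained model in a few lines of Python.
# Requires: pip install transformers (plus a backend such as PyTorch).
from transformers import pipeline

# Downloads a default pre-trained sentiment model on first run.
classifier = pipeline("sentiment-analysis")

print(classifier("This product made my commute so much easier."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```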

So I think the big headline is: Get involved, experiment, play. The technology is here and accessible as a design material. Data scientists and algorithm engineers have revealed the possible; now design and other fields can join in to shape that potential—and reveal the meaningful.

