AI vs. Potatoes: The Fluid Narrative of Smarter Design
No matter how you choose to slice and dice the definition of smarter machines (AI, machine learning, deep learning, agents, and so on), a few things are relevant to the design narrative. The work performed under the hood of your device is getting more complex, draws on a wider range of sources, and performs more steps without your direct involvement. It’s smarter. There are two ways to deal with this: like a potato, or like a colleague.
The potato lives in the same complex world we do. It simply limits what it is interested in and acts on only a few things, all based on a simple narrative of survival. That narrative is, for all practical purposes, hard-coded in the potato and stays the same for millions of years. Most of our devices and applications today follow this potato model of existence and are built around a few simple hard-coded goals. Push here, get this.
Most interfaces act like potatoes
Humans are arguably more complex than potatoes. But we have to deal with the same core issue: selecting information from infinitely complex surroundings, then organizing the relevant data and events into sequences that create coherence. Coherence is the story we create to give a select set of data-points purpose, direction, and meaning. But unlike potatoes, we work with several interconnected data-points. We refine our narrative by moving or changing data-points in the sequence, or by changing the questions and goals (the context) to better fit a given sequence. The point is that with each change on either end of the sequence, the meaning of the narrative changes. It’s a fluid narrative dialogue.
This is also the kind of exchange we expect from human colleagues. We don’t simply ask them to go fetch specific answers; we include them in the narrative we are building, refining the questions, the goals, and which data-points might be relevant in a specific context. This is the same type of dialogue we should invite when designing for smarter interaction with devices. Using a narrative model as the language for this dialogue will surface the right fluid interaction between human and machine: a potent language we already know well.
Context is the mother of narrative meaning
Now, treating your intelligent machine as a colleague only works if the machine participates in kind. It’s not enough for the machine to toss up random new data-points or tweak goals arbitrarily. The machine needs to propose what it considers interesting in the same narrative terms: if we include this new source of data, then the narrative changes this way; if we change the question slightly, we can include more relevant data-points and make the narrative stronger.
The interface for fluid dialogue will embrace complexity and help surface the narrative process: a puzzle or matrix that is built to change, morphing as the underlying information updates. With each change that we, reality, or the machine puts forth, we see the effect on every other piece of the narrative. We can change the goal. We can change or reorder the sequence of data-points. And we can change the question, the context of the sequence. Instead of pushing Boolean logic until the potato cracks, one limited query at a time, we can focus on the higher-level task of providing context: helping the machine decide what is relevant information and what is noise.
All experiences are stories
This is great news. All experiences are stories, and choosing a narrative language to better capture the increasingly complex work of our machines will push interfaces toward radically more fluid, narrative experiences. Less potato. More smart colleague. With this higher-level dialogue, the interaction starts to focus on the meaning of the machine’s findings: context. That is the real beauty of narrative put to work. “Why” is always a much more interesting question than “what” or “how”, and it clearly sets us apart from our fellow potatoes.