With AI on every designer’s mind, we thought it would be timely to get the skinny on how Carla Diana is feeling about the anxiety, fear, and excitement that pervade the field she has helped to evolve and nurture. Years ago she began working with robots, which are by definition artificial intelligence personified, and pre-COVID she founded Cranbrook’s 4D Design department, which she currently heads. (Images are from students in the 4D program.)

Moxi Healthcare Nurse Assistant Robot by Diligent Robotics, 2023, lead design by Carla Diana.
You’ve been ahead of the curve on robotics and AI. How does this recent release of OpenAI software differ from how you have been experimenting for over a decade?
Great question! You bring up a good point about AI tools having been around and available to academics and professionals in the computer science and robotics fields for some time. In my own work designing interactive products and leading design efforts for Diligent Robotics, AI has been a core part of what makes the systems I work with operate.
What’s so different right now is what I might call the “Pandora’s Box” effect. The fact that the tools are now widespread and commonly available means that their use in all aspects of life has grown exponentially overnight, and this will have public consequences that we did not anticipate. And because of the way AI systems learn and grow through use (the more people use the tools, the more data is fed into the systems), the scale of it is something we haven’t seen before. What’s also different is that mimicking human language and behavior is a public fascination that has not been core to my work, but it is front-and-center in the ways that people are experimenting with and pushing the current software. In other words, I have been working with teams using AI in the “behind the scenes” kind of way that I wrote about in My Robot Gets Me.
In that mode, AI is the engine that drives product behavior: using camera vision to interpret gestures that control a product, for example, or helping our robot navigate hallways by analyzing its surroundings. The current crop of tools has enticed people to explore language-based prompts and full-on conversational exchanges in a new way.

Eoskeleton by Chen Zhuo, 2021, robotic wearable device.
You founded the 4D Design department at Cranbrook. Is this current generation of AI consistent with your vision?
The vision for the Cranbrook 4D Design Department has always been focused on bringing talented creative people together to explore the ways that technology gets folded into everyday life. I had always imagined that AI would play a role in experimental 4D Design techniques, and I am seeing that play out in student projects that use virtual agents, AI-infused poetry, and complex generated imagery in their work. What I hadn’t anticipated is how much the other, more traditional practices at Cranbrook would be interested in using AI in their work. I am seeing it crop up in Painting, Fiber Art, Sculpture, and pretty much every creative practice. It’s a really exciting development and aligns with my desire to have 4D Design act as a digital hub for our community.

Heartbeat Printer by Vikram Kalidindi, 2022, functional prototype.
There is great excitement and trepidation about the outcomes of AI and its unforeseen consequences. What do you think is truly worrisome versus falsely feared?
There are many ethical considerations around the newly emerging AI tools, which is why I think it’s especially important for artists and designers to experiment with them and discuss potential consequences through their work. I think that the racial and gender biases that emerge from AI systems trained on volumes of existing data are an enormous concern and threaten to make societal problems worse. What we saw years ago with Microsoft’s Tay AI system evolving into a neo-Nazi sexbot was just a small foreshadowing of a much larger, truly insidious problem that may be impossible to control. Additionally, the public fascination with emulating humans by trying to create realistic virtual entities that appear as chat agents and maintain an online presence in social networks through text, imagery, and even video footage is concerning. If it becomes impossible to distinguish between a human and a bot, we become vulnerable to all manner of nefarious schemes that can put us at risk online, or really anywhere we rely on communication systems.
The loss of jobs is also a real consideration to be tackled, as we start seeing tasks like fact-checking, copywriting, coding, and legal research get handled by AI. However, I am of the mindset that as a society we have the potential to shift job roles and take advantage of new tools to create new, more engaging jobs for people to enjoy and pursue. The challenge is that new roles are not always created in the circumstances where they are most needed, and the people on the lower rungs of organizations, whose modest salaries are needed most to support basic needs, are affected first and have fewer resources to pursue new opportunities.
I feel that what’s falsely feared is the sci-fi-inspired idea that artificial intelligence will lead to some sort of takeover where a technology-driven super-agent is motivated to control the world, so to speak, and render humans subservient.

Azimuth by Michael Candy, 2021, self-propelling kinetic timepiece.
How are you teaching your students to use AI tools?
The Cranbrook educational model is studio-based and revolves around projects and critique, so I have been bringing in visiting artists and scholars to run hands-on workshops and challenge my students to experiment with AI tools in their own studios. We recently had Ray L.C. from City University of Hong Kong run a workshop on Stable Diffusion, and in past years we have had Google ATAP’s Timi Oyedeji work with students to use machine learning to interpret human gesture as part of design systems. The workshops have given students a basic understanding of how to harness the systems for creative projects, and now I am seeing this knowledge emerge and expand through project work that includes AI as a component of the work. Our own student Claudia Chai has been running workshops that explore prompt creation for MidJourney and ways of using generated visual content as part of augmented reality tests. Beyond that, we have a weekly seminar series that focuses on technology and ethics through readings and discussion, and AI has become a core part of that dialog this year because of its presence in the news.

Fluid Spaces by Merel Noorlander, 2021, projection mapping installation and live performance.
What is behind the curtain? In other words (and to mix metaphors), will designers be able to take some control of this digital genie?
Designers will be able to use this “digital genie” as a powerful tool for all sorts of projects that can improve aspects of work and life in ways that we hadn’t imagined. In health care, for example, AI tools that can process and analyze imagery and other medical data will be able to assist and speed up medical diagnosis in dramatic ways. Designers can fold these kinds of capabilities into new and powerful products that can give us knowledge faster than ever before. AI can also be integrated into the creative process to spark our imaginations or facilitate brainstorming sessions. For example, architect and University of Michigan professor Matias Del Campo recently visited Cranbrook to share his process of generating architectural imagery that invokes fantasy and can assist in visioning exercises with colleagues and clients. While we won’t be able to control what MIT Technology Review’s Melissa Heikkilä calls the “glitchy, spammy, scammy internet” towards which we are now “hurtling,” we can take responsibility for ethical uses of AI in our own projects: by leaning towards transparency of output, giving a clear indication when something that feels like an entity is a “bot” of sorts, and by being discerning about the data sets that train the tools we use.

Timelessness by Ryan Genena, 2022, responsive light sculpture.
What is your hope for the present and future of AI?
My hope is that it gets folded into creative work as a tool that helps us work more efficiently and enhances the clarity and resolution of our ideas. Though this unfortunately means wading through what I call the “awkward teenage years” that we experience with most new technologies, I am hoping we emerge with something we can harness to enhance our work, rather than replace or dilute what we do. I am also hoping that the myopic obsession with emulating human presence dies down so that AI research can be put towards assisting us with complex tasks.
Is there reason to be paranoid?
The thing about a “Pandora’s Box” is that it unleashes the potential for harmful activities to appear in aspects of life that we hadn’t anticipated. We’ll see not just social media posts from AI con artists, but voice calls, video chats, text messages and news stories that could lead us into personal security risks. Should we be paranoid? In my opinion, no, but we should be very, very skeptical for sure.