Back to the Future: Shock & Boredom for Big Tech Design
Tomorrow’s big tech may just be something we’ve already seen. But are we finally ready for it … and do we really need it?
With large high-definition digital displays and augmented/virtual reality available to the public, we now have technology that previously existed only in science fiction. Multitouch displays, pressure-sensitive and gesture input, voice recognition and fingerprint readers have become accepted means of human-computer interaction. And the interfaces seen in Minority Report (2002) and Marvel’s Iron Man series (2008, 2010, 2013) forecast the possible next wave of consumer technology. But how many of us will be able to interact with these things confidently, even comfortably? And will we want what’s next?
If the backtracking—or as some call it, failure—of Google Glass is any indication, perhaps we neither want nor need it. Or we’re embarrassed by it. Donovan Beery, founder of Eleven19, explains, “I don’t see myself wanting to go outside with weird glasses on. Which I assume is why I never saw anyone who was actually wearing Glass in person.” The type of setback Google experienced with Glass has happened before to one tech giant or another.
It’s often the result of big expectations, little consumer interest and out-of-reach pricing, or plain old boredom. Sometimes we’re just not ready for what’s next.
BIG AMBITIONS, LOW ADOPTION
Even Apple—the Herculean giant we know today—was once susceptible to such ills. In 1993, Apple released its Newton personal digital assistant, the MessagePad. It put your notes, address book and calendar in the palm of your hand. To input data, you used a stylus, and handwriting gestures were translated into typography on the Newton’s screen. An added bonus: You could send a fax. And if you were lucky and another Newton was nearby, you could beam data from one to the other wirelessly—the one caveat being you had to align the input/output properly, which was easier said than done.
Despite the notion that the Newton’s handwriting recognition could put an end to the keyboard, the MessagePad wasn’t anything to write home about—whether writing on your Newton or on paper. For all of the whimsy and hype surrounding the device, some consider it one of Apple’s biggest follies: costly, clumsy, impractical, even limited, and blown out of proportion. But the Newton showed consumers that technology could be put into your hands, paving the way for many of the devices that have been released since, including our now-ubiquitous smartphones.
Apple’s Newton was a case study in micro-computing, and it paved the way for many of the portable electronic devices we use today. images from QuickTime promotion
Today, many children learn how to use personal electronic devices before they’re able to utter their first words. In my household, iPods and iPads have been used for entertainment, education and creativity since my sons were in kindergarten.
Although they started out using games and are now adept at Minecraft, they’ve been using creative apps for just as long. Now in elementary school, my sons, along with their classmates, have been exposed to iMovie. My youngest son uses iMovie on an iPad regularly, creating short films with iMovie’s trailer templates to mock up his own coming-soon clips using videos he’s captured of everything from our dog to his LEGO constructions.
As my son toils away on iMovie, his friend leans over his shoulder making suggestions and giving compliments, and then they trade places. With their back-and-forth iPad system allowing for co-direction, they’re having fun, but they’re also learning how to interact with the device and each other. It could be iMovie or Microsoft’s HoloLens, a Nintendo Wii game or Minecraft, and in all cases, most children seem easily impressed. This stands in stark contrast to the debate surrounding wearables such as the Apple Watch, a device that months before its release was considered to be blown out of proportion and unnecessary.
When my father brought home a Newton in the late 1990s, I was unimpressed: no games, no color display, and it appeared to be all business. To me, as a teenager, it too was blown out of proportion. Roymieco Carter, director of the visual arts program and university galleries at North Carolina A&T State University, sees most, if not all, technology in that light, and recognizes the feeling himself. “We all have walked into a tech store in the airport, in the shopping centers, wherever. We enter these stores with the desire to be amazed, and with no apology we leave feeling underwhelmed.” Carter believes that most consumers don’t really care about technology until a friend or neighbor gets it—and then they have to keep up with the Joneses and buy it, or get something bigger or better. It’s not about the wanting or the having as much as it’s about the competition.
“Future shock [is] the shattering stress and disorientation that we induce in individuals by subjecting them to too much change in too short a time.”
– Alvin Toffler
In other cases, it may be too much, too soon. During a phone interview, Carter referenced Alvin Toffler’s book Future Shock. As Toffler says, “Future shock [is] the shattering stress and disorientation that we induce in individuals by subjecting them to too much change in too short a time.” Of the many Tofflerisms, the notion that “Technology feeds on itself—technology makes more technology possible” seems to hold even today as we feed on more and more technology. Maybe we keep updating our phones because we need to, adding devices like tablets and watches to augment them. But that misses the point, according to Nate Voss, senior channel manager of mobility at VML, a full-service global marketing agency. Voss doesn’t see mobile phones or smart watches and other wearables as necessary, but rather, as a convenience.
WANTING, ADOPTING, ACCEPTING
At what point does a convenience become a nagging itch that you always want to scratch, or one that you keep scratching, only you don’t recognize that you’re scratching it? Robert Brunner, founder and partner of the design studio Ammunition, sees the negative side of wearables:
“[They] put information within our reach and vision all the time—not necessarily good.” The success or failure of wearables such as the Apple Watch may be the result of human nature and issues related to Toffler’s Future Shock theories. Or maybe not, because according to Carter, “We are not Future Shocked, we are Future Bored.”
Although smart watches have been on the market for some time, you’ve probably seen few wrists adorned with them.
It could be boredom. Then again, there’s the problem-solving aspect—or the lack thereof. Brunner sees some ideas as “technology in search of a problem,” such as Microsoft’s first Surface touchscreen table (2007) and Google Glass.
The original Microsoft Surface that debuted in 2007 was not a tablet, but a table. image via Microsoft.com
They’re made and released to the public, but with no identifiable purpose, or only a limited one. Google Glass seemed to solve plenty of problems, although its biggest problem, according to Brunner, was that it was intrusive. “Google Glass is a piece of technology on the most sensitive part of the human body—the face. Of course it would be difficult to accept.”
Google Glass came and went, but promised to make everyday tasks easier with its hands-free delivery of content, as well as its ability to capture and stream what users looked at. Signs point to Google Glass making a comeback, with version two quietly being distributed, according to some reports. Google Glass website from 2013 via archive.org
And yet, virtual reality, which requires us to cover our heads, appears to be moving full speed ahead. Nathalie Lawhead, who started out as what she calls a “professional net-artist” during the early days of the internet, later moved into game development and has since started her own company, AlienMelon. Lawhead suspects that games will help virtual and augmented reality devices such as Microsoft’s HoloLens and the Oculus Rift succeed. “I think the success of VR (like Oculus) can be credited with the fact that they enhance, or work so well with first-person games. You don’t necessarily need to create something completely custom to work with that technology.”
Microsoft’s HoloLens promises to change the face of gaming and entertainment, by bringing you a richer interactive experience. image via Microsoft.com
The Oculus Rift Crescent Bay prototype was shown at both CES (https://www.oculus.com/en-us/blog/oculus-ces-2015/) and GDC 2015, and has impressed its users. Developers see virtual reality gaming as the next wave of digital entertainment. images courtesy of Oculus
Based on her observations at the 2015 Game Developers Conference, Lawhead believes VR is here to stay. “This year’s GDC was rife with VR. Everyone had something. It sort of reminded me of when mobile (touchscreen) was taking off and the press was talking about how mobile will kill [the] PC. Although [the] PC is still alive and well, there is that same confidence in VR this time around. So many games are developing for it. I believe its success can largely be credited to independent developers taking the risk with it, or being able to develop for it.” Voss shares Lawhead’s enthusiasm for VR, which will, according to him, deliver “stunning visual experiences” for users, but the “cool factor” might stand in its way.
Voss explains, “No one looks cool wearing virtual reality, no matter how amazing the experience (and let’s be clear, Oculus Rift is intensely amazing). If they can get the general public to try these out for themselves, perhaps in retail locations and without giving everyone pink eye, to break through that final real-world barrier, they certainly have the potential [for] home entertainment.”
Having seen Minecraft through HoloLens, both of my sons want to get their hands on Microsoft’s latest and greatest gizmo, so count me as one of the many people who will make a visit to the Microsoft Store when HoloLens debuts. But even then, something else will be right around the corner. If Steve Jobs’ take on consumers’ wants and needs is any indication of what’s to come, the consumer need not know what they want—even if it’ll make Minecraft “way cooler.” In a 1998 BusinessWeek interview, when Jobs was asked about consumer research for the iMac, he explained, “A lot of times, people don’t know what they want until you show it to them. That’s why a lot of people at Apple get paid a lot of money, because they’re supposed to be on top of these things.”
Apple has proven to be on top of a lot of things, including the iPod, iPhone and iPad. But Apple seems to have made only incremental progress when it comes to big-screen displays, such as television. Voss suspects that long ago Apple could’ve built something much grander than the small set-top box Apple TV. “Apple could have released a television dozens of times by now. Dozens. They could have built a set with a gorgeous screen, plugged in the internet (years ago) and put it out there. And it absolutely would have failed, or at best struggled along, because it wouldn’t have had the conceptual maturity to become a breakthrough product.”
Kate Hollenbach, director of design and computation for Oblong, uses gestural commands to interact with a Kepler Exoplanet Candidates data visualization created by digital artist and designer Jer Thorp. Photo courtesy of Oblong Industries.
FUTURE SHOCK, FUTURE BOREDOM… BE DAMNED
Perhaps there’s something beyond the television, and it might have less to do with curved displays or watching movies and shows in 3D in the comfort of your own home. Maybe it’s the big screen meets the touchscreen—or the multitouch screen. Better yet, what if you didn’t have to touch the screen? Plenty of consumer technology, including Microsoft’s Xbox One with Kinect and Amazon Fire TV, enables voice input, and some Samsung televisions do the same. But what about gesture-based input, which would let you collaborate with others to share your ideas and interact with what John Underkoffler calls a “tapestry of pixels”?
No glasses, no helmet, just a space for you to interact with digital media, and for your friends, family or co-workers to share and partake in the experience. As far back as the 1990s, Underkoffler was researching gesture-based input at the MIT Media Lab, and he would go on to be science adviser for Minority Report, designing computer interfaces for the film. He also advised on Hulk, Aeon Flux and Iron Man. Underkoffler’s research at MIT became the foundation for developing interactive solutions at Oblong Industries Inc. He designed g-speak, the core technology behind Oblong’s spatial operating environments such as Mezzanine, a collaborative conference room solution used by a variety of Fortune 500 firms. When asked about consumer technology, and if there’s a place for Mezzanine in the home, Underkoffler says “absolutely”—“Mezz-like capabilities on the (e.g.) giant TV in your living room is natural and inevitable.”
I tend to agree with Underkoffler, and I know my youngest son would too since the downside of his iMovie collaborations on a personal electronic device is just that: personal. With an iPod or iPad, other people are watching, weighing in and contributing, but it’s more lurking over the shoulder and giving orders than hands-on collaboration.
I showed my son a video on Oblong’s website, where the company showcases g-speak, with glove-wearing users demonstrating the gesture-based input akin to what many of us saw in Minority Report. “How would you like to use something like this?” I asked my son. “Wouldn’t it be great for iMovie?” Without any hesitation, he enthusiastically replied, “Yes! Where can we get the gloves?!”