Let’s say, for the sake of argument, that we are moving away from written language as a primary means of communication and toward an increasingly robust use of visual cues. If this is so, how sophisticated are our visual language skills? Are we capable of reading images critically, interpreting with the same acumen we apply to text? And is the visual environment that we live in, dominated by the screen, the aesthetic adventure we hoped it would be?
In a recent Fast Company article on Microsoft’s newest version of Windows, most notable for replacing the familiar “desktop” metaphor with an array of brightly colored rectangles, writer Austin Carr suggested that the software giant was emulating the Bauhaus and its fidelity to the essence of materials. “The most innovative element,” he writes, is “its shift away from visual metaphors.”
The upshot of the article was that, with this new system, Microsoft was finally pulling away from its old rival, Apple, which was still wedded to the cute little icons it’s been using since graphic designer Susan Kare drew her first wee garbage can in the 1980s. And those icons are always tied to the real world, usually to the very objects made obsolete by the app in question. The telephone function on my iPhone, for instance, is represented by a graphic of an old handset receiver even though we rarely touch those old handsets anymore.
With Microsoft’s new approach, a block of color can take on whatever meaning the user assigns to it. Arguably, this is an advance in visual culture, if not in the functionality of operating systems. It signals a break from our dependence on little graphic anachronisms. So I think this is a good moment to rethink how we use visual language and whether the motley assemblage of little pictures, shapes, and color cues that we “read” without thinking actually works.
Certainly computer graphics and computer-generated environments have become increasingly lifelike, to the point where it’s sometimes hard to tell a computer rendering from a photo. I spend a lot of time staring at architects’ websites trying to figure out if buildings have actually been built or not.
But is photorealism what we actually want from our computer-generated visuals? Shouldn’t electronic environments provide something more . . . exotic? The graphic design profession rebelled vigorously against the grid in the 1980s and ’90s, and now it depends heavily on formats that are even more rigid. And let’s not even talk about Facebook, which is visually driven—the natural destination for an unfathomable number of iPhone snapshots—but not very interesting, aesthetically speaking. It is as rigidly formatted as a postwar subdivision, and often just as banal.
I’m not sure that Microsoft’s array of bright colors is any less rigidly formatted than any other screen environment, but it makes me wonder if it’s time to leave behind the online conventions of little pictograms and dreary photorealism (“Here’s what I’m eating right now!”). Maybe we’re on the brink of creating a new visual language, one specific to the electronic realm, one that finds its roots not in the functionalism of the Bauhaus but in the pure, out-of-control emotional verve of Abstract Expressionism.
With text still the central communication tool of the internet, and the Web closer at hand than ever, I wouldn’t dare say that we are moving away from written language—on the contrary, we could say that the proximity of the textual form makes us read more than before. In any case, regardless of our abilities with text, it is certain that we are not trained to read images critically, and that the photorealism of digital images works against any possible training to do so.
In architecture, for example, photorealistic images are automatically associated with corporate firms and market forces. If we look at young international offices, not the hallowed ones but those that are pushing the field and/or architectural representation further, we find hardly any trace of photorealism in their visual language. Instead, we enter soft humid hairy dreams in the case of Bittertang (New York), cartoonish worlds in Bureau Spectacular (Chicago), childlike atmospheres in Junya Ishigami (Japan), technological baroque in the case of Izaskun Chinchilla or Andres Jaque (Spain), and flat, 2D landscapes in the case of OFFICE (Brussels), just to name a few. There is, then, at least in architecture, a conscious development of knowledge in terms of its production of visual material.
Nevertheless, the question is not only whether we can think critically about images but whether we can think critically at all. The global high speed of cultural production and consumption seems to create an intense anesthesia that doesn’t allow for observing in detail, thinking, or analyzing, let alone being critical. Maybe before acquiring the tools for a more sophisticated analysis of images, it would be important to tone up our consciousness, a process by which we learn how to be systematically aware of visual language that doesn’t make us think, or at least doesn’t offer a margin for different interpretations. Only once we are finally liberated from corporate visual impositions will we be able to go from being passive observers to active agents, ones able to disagree and therefore provoke advancements in visual language overall.
Before considering the possibility of new visual languages to replace current visual languages, it's worth second-guessing whether the presence of more and varied visual cues necessarily signals greater visual literacy. Being sensitive and responsive to visual cues isn't the same as being visually literate, just as understanding spoken language and knowing how to speak isn't the same as knowing how to read and write. Even highly educated adults often lack a vocabulary or framework to analyze visual information. Basic drawing skills are rarer still.
So, who exactly is the “we” moving from predominantly verbal to visual modes? Who is “using” visual cues, and who is merely abiding by them? And is there really a zero-sum relationship between text and image so that more images means fewer words? Does a proliferation of screens and camera phones equal a move away from verbal communication? In the case of computer-interface design, a more significant shift may be towards systems that are not visible at all, or are barely visible, like tiny cameras, microphones, and speakers that approximate the functions of human eyes, ears, and voices and are embedded in walls, garments, and skin. We may look back aghast in fifty years, when we're whispering to ambient interfaces, when we think of how much time people spent tapping and typing at the turn of the century; when the obvious answer to the question “What happened to the grid?” will be “What grid?”
All that aside, if visual literacy is a concern, and if you think increased visual literacy will lead to a society with a more expressive visual language and that this would be an improvement, then the important question becomes how to increase visual literacy. In the case of graphic interfaces (or advertisements, or other major elements in the visual environment), “emotional verve” is not a “pure” expression of an “out-of-control” creator but the very element that most effectively controls users’ or audiences’ behavior—hence the efficacy of avatars in winning human trust and loyalty online, and images of women in selling cars, lingerie, cereal, insurance, contact lenses, and dog food.
New interfaces do seem to anticipate a disappearing world that no longer leaves traces of itself. First we had icons representing objects such as trashcans and pages, or places like “home,” and then everything began moving to “the cloud.” Actual keyboards and buttons disappeared into the screen, virtualized. Of course these disappearances aren’t complete, because the cloud in fact consists of huge refrigerated server farms taking up real space out in the suburbs, and actual low-wage workers are out there manufacturing our touchscreens, just as real sex slaves exist somewhere too. So every disappearance in design involves a displacement and relocation in actual, material terms, and maybe it’s only we the users who really vanish at the end of the day . . . along with the actual common world, where human freedom and action are possible, as Hannah Arendt would say.
As the self is replaced by the “selfie” and the painting by its jpeg, design becomes a means not only of hallucinating disappearance but of making nonexistence easy and fun. The new interface wants to convince us of two things: that we are still somehow here in the midst of our missingness (fun, knowledge, and disappearance must be experienced by somebody); and that we are in control of our disappearance, the one depending on the other.
Microsoft’s upgrade also reminds me of the electronic game Simon, in which players had to imitate a pattern or code by tapping glowing colors in a given sequence. Is this an aesthetic return to the monochrome, a subjective return to infancy, or both? Grids, though, seem patently adult: Manhattan’s urban plan and the sober ordering of information in newspaper columns, Modernist architecture, etc. Abstract Expressionism worked in relation to the grid: rectangular canvases organized on perpendicular walls, its content then mediated via squared catalogue pages. Was expressionist energy ever really unleashed from the grid? Would it operate otherwise? Now I’m thinking of Duchamp’s Standard Stoppages, how the chaotic trajectory of the dropped thread only reads in relation to the standard, straight rule—a kind of antistandard standard. Isn’t this like customizing your smartphone, in a way?
Now design wants to visualize and represent absence and the qualities of the media channel itself. The new interface promotes its power to vanish and puts the user in an imaginary location of control over it. Maybe in the next design wave, design itself will disappear, or seem to, bringing customization to the next level. The monkeylike mimicry of Simon would meet the chaos of Duchamp’s freak meter, as the user feeds the device nonvisual cues like voice, heat, nervousness, hormones. When we pick it up, both design and the user are somehow already gone; we are always redesigning, always drifting in relation to design, which also drifts. This would be the simultaneous, reciprocal self-designing of the device and of the user, each feeding back via the other and neither in complete control. Probably this is how crisis-era “resilience” theory is rethinking the coming relations between humans and their environments, after government finally stops managing us from above and decides we are customized enough to do it ourselves.