By CLIVE THOMPSON
Here’s what you see if you look at my face: a skinny titanium headband stretched across my forehead. It looks like a futuristic pair of sunglasses, minus the lenses. On my right-hand side there’s a computer, a metal frame with a small, clear cube of plastic perched just over my eye. When I tilt my head upward a bit, or run my finger along the side of the frame, the cube lights up. What I see, floating six inches in front of me, is a pinkish, translucent computer screen. It gives me access to a few simple apps: Google search, text messaging, Twitter, a to-do list, some hourly news headlines from CNN (“See a Truck Go Airborne, Fly Over Median,” “Dolphin Deaths Alarm Biologists”). Beside the screen is a teensy camera built into the frame of the glasses, ready to record anything I’m looking at.
Google Glass is the company’s attempt to mainstream what the tech industry calls wearable computing, to take the computer off your desk or out of your pocket and keep it in your field of view. In a world where we’re already peering at screens all day long, pecked at by alerts, the prospect of an eyeball computer can provoke a shudder. But over several weeks of using the device myself, I began to experience some of the intriguing — and occasionally delightful — aspects of this new machine. I got used to glancing up to text and e-mail, dictating to its surprisingly accurate voice-transcription software. (I admit I once texted my wife while riding my bicycle.) I set up calendar reminders that dinged in my ear. I used an app that guided me back to my car in a parking lot. I sent pictures of magazine articles to Evernote, so I would have reminders of what I’d read. I had tweets from friends float across my gaze.
Despite my quick adoption, however, only rarely did I accomplish something with Glass that I couldn’t already do with, say, my mobile phone. When I first heard about the device, I envisioned using it as a next-level brain supplement, accessing brilliant trivia during conversations, making myself seem omniscient (or insufferable, or both). This happened only occasionally: I startled a friend with information about the author of a rare sci-fi book, for example. But generally I found that Googling was pretty hard; you mostly control Glass with voice commands, and speaking queries aloud in front of others was awkward.
The one thing I used regularly was its camera. I enjoyed taking hands-free shots while playing with my kids, and capturing street scenes for which I would probably not have bothered to pull out my phone. I streamed live point-of-view video to friends and family. But it also became clear that the camera is a social bomb. One friend I ran into on the street could focus only on the lens pointing at her. “Can it see into my soul?” she asked. Later, she wrote me an e-mail: “Nice to see you. Or spy you spying, I guess.”
Cameras are everywhere in public, but one fixed to your face sends a more menacing signal: I have the power to record you at a moment’s notice, it seems to declare — and maybe I already am. In the weeks before I got Glass this summer, at least one restaurant banned the device, articles fulminated against it and a parody of its use appeared on “Saturday Night Live.” In public, I sometimes found myself avoiding people’s eyes, as if trying to indicate that I wasn’t recording them. (Of course, if there’s one thing weirder than someone wearing a computer on his face, it’s someone wearing a computer on his face who also refuses to look you in the eye.)
As far as Google is concerned, any social quirks, tensions or paranoias Glass produces now are just temporary side effects — the kind of things we always confront before a new device becomes necessary, accepted, even beloved. Yet there’s always a gulf between how creators intend for their tools to be used and the way people actually use them. There can be a divide, too, between the experience of users and those they interact with. From my perspective, I was wearing a computer, a tool that gave me the constant, easy ability to access information quickly. To everyone else, I was just a guy with a camera on his head. With a technology this strange and new, it’s hard to tell just what it is: a bridge to the rest of the world — or just another screen blocking people out?
In one sense, Glass is nothing new. “We’ve used technology to extend our physicality for thousands of years,” says Genevieve Bell, an anthropologist at Intel who studies the relationships between people and their digital tools. “We had armor, we had boomerangs to extend our reach, we had bows and arrows.” Women in the 18th and 19th centuries wore chatelaines, bags tied to belts that held the tools of their trade: thermometers and scissors for nurses, thimbles and needles for seamstresses. And as Bell points out, our wearable technologies have never been merely practical enhancements. They’re also theatrical, signaling identity, social status and power. A wristwatch isn’t only a means of promoting punctuality; putting one on is also a way to be seen as a punctual person.
As soon as the age of digital computers dawned, its innovators dreamed of wearing them. In 1945, Vannevar Bush, the American inventor and science administrator, envisioned the voracious information seeker of the future wearing a camera “a little larger than a walnut” on his head to capture documents to be stored in his “memex,” a personal collection of documents linked together in a Web-like fashion. In the early ’60s, a mathematician named Edward Thorp teamed up with the information scientist Claude Shannon and built the first wearable digital computer — an easily concealed cigarette-pack-size device they used to beat the roulette wheels at Las Vegas. They worried about being caught — “That was the era of kneebreakers, and worse,” Thorp told me — but it worked.
In the ’80s and ’90s, computer components became small and light enough to attach to your body. A group of students at M.I.T. — often called “the Borg,” after the part-machine, part-organic alien collective of “Star Trek” — began experimenting with wearable designs. One student, Thad Starner, created such a computer in 1993 to solve the problem of taking notes in class. Starner noticed that whenever he wrote down what his professors said, he stopped paying attention to them; his notes were often illegible, too.
“All these lessons I was learning were going in one ear and out the other,” he says. So he put computer parts in a backpack and connected them to an L.E.D. display that he clipped to his head and positioned an inch or two in front of his right eye. To input information, he used a one-handed keyboard called a Twiddler. This way, he figured, he could write notes in class while keeping his head up and following the professor. For the next 20 years, many of them as a professor of computer science at the Georgia Institute of Technology, Starner wore his computer almost daily.
He used the device to capture and instantly retrieve knowledge. During pauses in conversation, while riding in cars, while at a lecture, he’d record the most interesting parts of what people were saying. When I visited him in January, his archive had grown to 2.2 million words. Talking to Starner can be a remarkable experience, because he’ll startle you by bringing up precise details from conversations held months, or even years, earlier.
Starner has evolved strict social protocols about when and how to use his wearable computer, to avoid ignoring people. For example, he never checks e-mail while talking to someone. “Your I.Q. goes down like 40 points,” Starner says.
“You’ve got to make the systems so that they help people pay attention to the world in front of them,” he argues. The distinction, he says, is between trying to juggle rival cognitive tasks — task switching — and using several information streams that are all focused on the same matter and reinforce one another. While Starner gave a presentation to the National Academy of Sciences in 2010, for instance, a group of his students listening in remotely at Georgia Tech texted factoids to him. “My students, seeing me stumbling on something, would throw up a URL,” he says. “It made me seem smarter than I am.” Since the facts were germane to his presentation, they didn’t distract him; he easily incorporated them into his talk.
Today’s mobile phones, of course, can do much of what Starner’s home-built machine could do. But the difference, he claims, is how much faster his wearable was. Though we joke about constantly Googling information on our phones, in reality, he suggests, people don’t pull them out for that purpose all that often. In contrast, he can find something in his archived notes in seconds — so he checks it all the time. “The big thing about augmented memory is access time,” he says.
Starner’s handmade device didn’t have a camera. He says that “people had this allergic reaction to the idea that people would be recorded” in the pre-cameraphone era. Indeed, few of the early wearable pioneers regularly wore cameras, for precisely that reason.
One prominent exception was Steve Mann, probably the first regular user of a wearable computer. His was equipped with a camera, one that would eventually be able to take more than 30 pictures per second. When he was a student at M.I.T. in the early ’90s, “people were really freaking out on it,” he says. “I had one M.I.T. professor that tried to shut me down, and said, ‘Well, we certainly can’t have you doing that.’ ” He was told he couldn’t use it in libraries, and when police officers and security guards realized they were being recorded, they’d tell him to turn it off. (He says two transit police officers once tackled him because of it.) He regarded the picture-taking as a type of note-taking: “Remembering is recording,” he says. “It’s like a black-box device for yourself.” It could be highly practical too, as was the case when he was struck by a car and his camera caught the license-plate number.
Mann says he thinks that society will eventually adapt to omnipresent recording by everyday people. “Sousveillance,” he calls it, punning on the French word for “under,” sous — surveillance by the many rather than the few.
Google has known about wearable computers for a long time. In 1998, the company’s founders, Larry Page and Sergey Brin, met Starner when they were still grad students. (Years later, the company would end up hiring Starner as an adviser to work on Glass. Another pioneer, Greg Priest-Dorman, who had worn his own handmade computer for two decades, was hired full time.) Planners at Google insisted that wearables would be the next big shift in networked computing and would have large implications for the company, much the way mobile phones changed how and where Google serves ads, its chief source of income. Executives wanted to understand how Google could adapt. But since nobody was making a wearable computer for the mass market, Google realized it would have to make one itself.
Steve Lee, a company veteran who had worked on Latitude, a Google service that enables users to broadcast their GPS location to friends, was involved in the project’s initial development. He knew from the start that Glass needed to avoid being clunky; it couldn’t have the snaking cables or one-handed keyboards of the hand-built wearable computers.
“Style is subjective,” he told me when I met him this summer at Google’s headquarters in Mountain View, Calif. “But I think up until that point, most attempts at wearables — there wasn’t even a debate. No one would call them stylish.”
The earliest prototypes of Glass were made by taking the components from phones running Android — Google’s mobile operating system — and gluing them to a pair of safety goggles, with a huge L.C.D. in front of one eye. Heft was a hurdle: the prototypes weighed more than five and a half ounces, creating an untenable amount of “nose-borne weight,” to use an industry term. “If it doesn’t meet a minimum bar for comfort and style, it just doesn’t matter what it will do,” Lee said. Nobody would wear it all day long.
To shrink the device and make it more attractive, Lee hired Isabelle Olsson, a Swedish industrial designer known for her elegant, stripped-down aesthetic. She wasn’t told what she was being hired for. On her first day at work, Olsson was shown the safety-goggle prototype. When she pulled it out of a box and put it on to show me, she looked like a mad scientist.
“My heart skipped a beat,” she said with a laugh. “As a very nontacky person, this idea overwhelmed me a little bit. I’m going to wear a computer on my face? I really felt like we need to simplify this to the extreme. Whatever we can remove, we will remove.” Olsson scattered books of minimalist design around the office to inspire the design team. One of its first major decisions was to make one arm of the glasses function like a trackpad, so that one- and two-finger swipes let users move from screen to screen. So Glass could play audio privately, a “bone-conducting transducer” was put on the inside of the arm; sound plays directly against the skull, making it audible generally only to the user. Lacking a keyboard, the device would be controlled mostly by dictation; the team settled on “O.K., Glass” as the command that would initiate an action.
The team also decided to raise the screen slightly above the right eye, so a user would need to choose to look up at it. This way, the designers felt, it was less likely to get in the way of social interaction. Glass would send another signal when in use: when the screen is on, the glow is visible to people nearby. And they can see a user glancing up.
Glancing, in fact, was the design team’s stated goal for how you interact with the device. Every Glass designer I spoke to insisted that it wasn’t something you were supposed to stare at, zoning out on videos or playing games or reading while ignoring those around you. The expressed hope was that by giving people a quick way to check e-mail and text messages — and to find quick answers while on the go — Glass would encourage them to spend less time, not more, staring at screens. There were technical imperatives at work, too: the device has a short battery life (the screen usually turns off after only a few seconds of inactivity). “We should not be competing with the world,” says Antonio Costa, a Google designer who works on Glass. “We would lose.”
Part of the company’s vision was to integrate Google Now, a virtual assistant that draws on Gmail, Google Calendar and your current location to push information and reminders at you — letting you know an hour before a lunch meeting how long the drive to the restaurant will be based on current traffic, for example. “Glass should be the librarian that knows when to interrupt,” Costa says. It is a by-now-familiar corporate strategy: create products that make life easier while enmeshing you ever more thoroughly in Google’s product ecosystem. The more personal information you give Google, the more useful Now and Glass become — and the more you rely on them.
Glass has always included a camera; it was “in the spec from the get-go just because it’s so powerful,” Lee told me. He and his team believed that users would appreciate being able to take pictures impulsively without having to pull a phone out of a pocket. They knew that a camera would cause privacy concerns, but Lee argued that design elements would prevent Glass from being used for easy covert recording. First, the screen glows while it’s in use; second, to use the camera, you either have to speak aloud — “O.K., Glass, take a picture” — or reach up to touch a button.
“There’s a clear social gesture,” Lee said. Glass’s point-of-view camera is also hard to use secretly, he added, because you need to be looking directly at people to record them.
Google started selling Glass this spring. Two thousand went to software developers; 8,000 went to people who submitted to Google short descriptions of what they’d do with Glass; those selected paid $1,500 for it. (I received mine this way and paid full price.) Once users began wandering into public life a few months ago, gazing into their glowing eye-screens, it became possible to begin answering the question: how would people use wearable computers in their everyday lives?
The camera, it turns out, was the most immediate draw for the roughly two dozen users I spoke to. It’s the simplest thing to do with the device, and everyone experimented avidly with the new angles and picture-taking moments it made possible. Video calls — the real-time sharing of one’s point of view with others — were also popular. In Maine, a surgeon named Rafael Grossmann used Glass at work: he wore his device while inserting a feeding tube into a patient in the operating room and streamed the video live. Normally, he says, only a handful of students can cluster around a teacher, making it hard for them to see from the surgeon’s perspective. “To have someone else see what I see — it’s just amazing in surgical teaching, in medical teaching, in mentoring someone through a problem,” he says. Later, Grossmann reversed the arrangement, having an I.C.U. nurse wear Glass during a procedure while Grossmann and another surgeon offered advice via video call.
Because there isn’t an app store for Glass, its capabilities were initially limited to the few applications preinstalled by Google — sending and receiving texts, taking and sending videos and pictures, getting directions and doing Google searches. (The company says an app store is coming next year, when Glass is available to the general public.) Google created a set of guidelines for designing applications, and by midsummer, private developers had produced a few offerings. A cooking application, for example, displays pictures to go with a recipe as you make it. “It’s like a little chef alongside you, telling you what to do,” says Gary Gonzalez, a petty officer in the Coast Guard who was among the 8,000 chosen.
Cecilia Abadie, a software developer in California, craved a way to make a to-do list, so she created an app called Glass Genie. “Because I’m a working mom, I have my kids asking me things all the time,” she says. Other users figured out ways to get Glass to display notes. Before I went into meetings, I began e-mailing notes to myself, then leaving them on-screen as virtual Post-its.
Many users discovered, as I did, that Googling on Glass was less useful than they expected. It sounds so alluring in theory: all human knowledge, right in front of your eyeball! I had success with simple queries, like checking the weather or finding the title of the next Junie B. Jones book. But Google’s intention to make its computer “glanceable” and “out of the way” also makes sustained reading difficult. When I spent much time on search results, my eyes got tired from gazing upward. (The same was true for videos: more than a minute or two was wearying.) I soon abandoned Googling altogether. One user, Zoë van der Meulen, claims that the relative hassle of a Web search on Glass is a feature, not a bug. She describes Googling on a road trip with her husband and says that if she had done it on her phone she would have most likely checked Facebook or Twitter at the expense of interacting with him. “I think it’s way less distracting than the cellphone,” she says. I discovered a version of this effect with my Twitter usage. While working at my laptop, I set Glass to show only the tweets directed at me personally. Otherwise I ignored Twitter for hours, and paradoxically this meant I was less likely to lose myself in endless Twitter-surfing. It certainly looked ludicrous: a guy sitting at a computer, with another computer on his face. But it did the job.
My own concern about distractions ran in the other direction: the wearable didn’t interrupt me enough. Glass was so busy trying to stay out of the way that it wasn’t as useful as it could have been. I originally hoped to use Glass in the manner of Starner and the other wearable pioneers — as a note-taking machine, a commonplace book grafted on to my consciousness. But the software for that doesn’t exist yet. At best, I could dictate a note that Glass would send to my Evernote account, but I couldn’t search my files or e-mails. To access their pre-existing collections of personal notes and mail through Glass, Starner and Priest-Dorman had to hack it, installing a version of the Linux operating system on the device. Then they used their one-handed keyboards wirelessly, allowing them to type instead of relying on voice recognition.
Other Glass users had reactions similar to mine. Josh Highland, a software developer, says that he was hoping — unrealistically, he knew — for “Terminator vision,” or vast amounts of text and data scrolling into view the way they do in the “Terminator” movies when we see things through Arnold Schwarzenegger’s cyborg eyes. “I expected it to be always on,” Highland says. He says that in trying to prevent too many interruptions, Google overshot the mark. As scholars of multitasking have found, not all distractions are bad. Certainly switching between two unrelated activities can wreck your focus: try playing Angry Birds while listening to a lecture and you’ll do neither well. But if the flurry of alerts and apps you’re juggling is all focused on one piece of work, that might be useful; the distractions are the work, as it were. Like most adroit users of technology, Highland generally tries to avoid diversions. After trying The New York Times and CNN apps on Glass, for example, he quickly turned them off. He didn’t need a news ticker in his eye.
Ultimately it’s difficult to assess how a tool like Glass might change our information habits and everyday behavior, simply because there’s so little software for it now. “Glass is more of a question than an answer,” in the words of Astro Teller, who heads Google X, the company’s “moon shot” skunk works, which supervised Glass’s development; he says he expects to be surprised by what emerges in the way of software. Phil Libin, the C.E.O. of Evernote, told me that my frustrations with Glass were off-base. I was trying to use it to replace a phone or a laptop, but the way head-mounted wearables will be used — assuming the public actually decides to use them — will most likely be very different. “This is not a reshaping of the cellphone,” he added. “This is an entirely new thing.” He predicts that we’ll still use traditional computers and phones for searching the Web, writing and reading documents, doing e-mail. A wearable computer will be more of an awareness device, noting what you’re doing and delivering alerts precisely when you need them, in sync with your other devices: when you’re near a grocery store, you will be told you’re low on vegetables, and an actual shopping list will be sent to your phone, where longer text is more easily read. Depending on your desire for more alerts, this could be regarded as either annoying or lifesaving. But as Libin puts it, “The killer app for this is hyperawareness.”
One software developer and Glass user I talked to, Mike DiGiovanni, described people’s fantasies about what the wearable could do as science fiction. “I’ve had people who thought it could see through clothes,” he says. “I’ve had other people who thought it could see your Facebook profile, your Google profile, by scanning your face.”
Actually, face-scanning may not be that far off. In May, a 24-year-old software engineer named Stephen Balaban received his Glass. Balaban runs Lambda Labs, a wearable-computing company. He created a program for Glass that could take a picture of a face, feed it to Lambda Labs’ server, identify it with face-recognition software, then send an alert to someone. Balaban’s demo wasn’t terribly powerful; it could recognize only one of 12 faces he had prescanned. But it’s precisely the type of application we’re likely to see developed as wearables become common: using the pattern-recognition power of machines to extend our observational powers. “If you’re going to put a machine in the loop that’s coprocessing what you’re seeing, what does the machine do that you can’t do?” he says. “This is providing a sixth or seventh sense to people.”
You can’t use Balaban’s app; Google has banned all facial-recognition programs. According to Steve Lee, the Glass design team “looked into facial-recognition technology early on” but chose not to pursue it. “Clearly, this is something the broader public is concerned about,” Lee says, “so we took the additional step of banning facial-recognition Glassware for Glass.” Realistically, Google can’t stop anyone committed to using such applications; install Linux, for example, and you can run any compatible software you want. Political thinkers have long warned that face-recognition software — deployed worldwide by police forces and spy agencies — will eventually go mainstream, with corporations and individuals scanning people in public to sell them things, track them or simply indulge their voyeuristic curiosity.
Such technology is likely to be developed more quickly than privacy laws can evolve. In May, members of the Congressional Bipartisan Privacy Caucus sent a letter to Google asking about face recognition; Susan Molinari, Google’s vice president of public policy and government relations, wrote back saying it wouldn’t be allowed without “strong privacy protections.” She also argued that Google had designed “social signals” into Glass to make camera use visible. Representative Joe Barton, a Republican from Texas and co-chairman of the caucus, wasn’t convinced.
“They didn’t put it this way,” he tells me, “but their basic response was ‘buyer beware.’ Or ‘bystander beware.’ ” Privacy, he says, has become a major concern for the public, as news of the N.S.A.’s secret systems has spread. But the lobbying force of large technology and marketing firms makes it hard to pass laws giving consumers the ability to control their privacy. “There’s a lot of money to be made in collecting and collating personal information,” Barton says.
Some think Glass’s camera by itself is already an incursion. Ann Cavoukian, the information and privacy commissioner for Ontario, says she likes the idea of wearable computers in principle but isn’t impressed by Google’s “social signals.” Sure, the screen glows, but nobody knows what that means — and in any case, the screen glows whether you’re taking a photo or checking e-mail. In contrast, she points out that in a trial program in Amherstburg, Ontario, police officers who wore video cameras also wore badges making clear that recording was taking place. “I’m just trying to preserve some semblance of freedom in public spaces, such as when you go for a walk,” she says.
It will only get harder for her, as additional covert techniques emerge. Glass includes an infrared sensor, pointed at a user’s right eye; DiGiovanni used it to create Winky, an app that senses when you wink and then takes a picture. (Winky is an Android app, not Glassware, so it has to be “side-loaded” onto Glass via a cable connected to another computer.) Balaban programmed a Linux-based app that takes a picture every four seconds, without turning on the screen at all. “You can imagine,” he adds dryly, “that is something Google most certainly does not support.”
Lee admits that most people have no idea what a glowing Glass screen indicates. “This is going to require user education,” he says. But he also says that people will adapt, much as we did when cameras were added to phones a decade ago. The Glass users I spoke to agreed. Only a few had experienced antagonistic reactions from others; Highland did have one woman reach up and push his face away at a tech-industry event. “There were tons of cameras around,” he says, “but the only thing she cared about was Google Glass.”
Anyone worried about privacy faces a countervailing pressure: people who actively want such technology. Balaban says face recognition could be used in noncreepy ways — at large conferences, for example. When you see someone there, “maybe you’re reminded of the last e-mail you sent to them.”
And many people might crave the new powers, beyond face recognition, that come from marrying computational power to human vision. A camera-equipped wearable could be useful for identifying objects, says Frank Chen, a venture capitalist whose firm, Andreessen Horowitz, invests in apps for Glass. Factory managers could determine inventory at a glance; repairmen could have a wearable computer display “augmented reality” animations to show them precisely how a part fits in a machine. Lecturers could poll an audience instantly and accurately.
“We’re going to be only limited by our imagination here,” Chen says.
As screens have proliferated, cultural critics have worried that we’re devoting too much time and attention to our devices. We’re “alone together,” to use Sherry Turkle’s phrase. The prospect of wearable computers has prompted a new round of apocalyptic op-eds, nervous jokes and satirical responses. In The New Yorker, the novelist Gary Shteyngart wittily dissected the odd physical movements that Glass requires — how viewing a Web page necessitates using your head as a cursor, tilting it up and down and side to side, looking at a page only you can see, for all intents and purposes a private hallucination. In my part of Brooklyn, peers gently mocked me when I wore Glass. Teenagers on the subway, in contrast, cooed, and I got several cries of “Awesome!” on the street. To my elementary-school-age children, Glass became banal; they grew used to me muttering Web queries as I tried to answer their questions. (“Do your glasses know the answer?” they asked me.)
Will a wearable computer become just another screen that comes between people? Most Glass users I spoke to argued it was no stranger than the mobile phone seemed to be in the early ’90s. “I try to smile when I have it on,” says Cynthia Johnston Turner, the director of wind ensembles at Cornell University, “because I get looked at. The people I talk to have been very jazzed about it. As I am. I think it’s the future.” Her husband isn’t quite so enthusiastic. “He sort of shakes his head — ‘This is too weird.’ He has not gone out in public with me with it on.”
After six weeks of trying Glass, I didn’t find that it diverted my attention much. Admittedly, that could easily change as more apps arrive to vie for my eye. But I’ve already been through this cycle of adaptation with my smartphone. Like many people, I’ve developed rules that govern my usage: I rarely check e-mail or social media on weekends; during meals or coffee with friends or while playing with my kids, I try to leave my phone in my pocket and ignore it.
Yet despite my attempt to keep my Glass use low-key, my wife — generally an eager early adopter when it comes to new gadgets — never became comfortable with it. She hated the camera. She didn’t like the way it looked on my face, in part because of the device’s stigma, in part because of its asymmetry. And she kept commenting on the secretive dynamic it created. Most of our other computer devices are semipublic, even social: people can generally peer over your shoulder to see what you’re doing on a laptop or even a smartphone. But with Glass, the contents of your screen are a mystery to others. People are liable to imagine almost anything happening on it while they’re trying to talk to you. Video games? (Probably not; only a few exist for Glass right now.) Porn? (Google has banned porn-specific apps, though you can certainly do a search and call up a pornographic site in the device’s browser.) “I don’t know what’s going on in there,” she said. “And there’s no simple way for you to share it with me.” She was right: technically, Glass has a “screencast” mode that lets you display things on your phone, but it was so cumbersome I never really used it. Even if I wasn’t zoning out, the device made it look as if I was.
Last month I tried a different type of wearable: a wristwatch computer called the MetaWatch Strata. It has a large L.C.D. and displays incoming text messages, alerts and e-mails. Unlike Glass, it can’t reply to messages or do searches, and it has no camera. But socially it was less intrusive. In the 19th century, wristwatches were considered effete and unmanly. But now we’re used to people glancing at their wrists, and nobody remarked on my Dick Tracy-esque accessory. Apple and Microsoft are both rumored to be developing a similar device, and there will surely be other iterations, long before the frequently imagined chip in your head.
With Glass, I eventually settled on a midpoint. I wore it mostly when alone, or when working at my computer, or when hands-free photography would be a boon. But I quickly removed it in social situations — say, before entering a crowded cafe. I’d have to wait until everyone else had one.
This article has been revised to reflect the following correction:
Correction: September 3, 2013
An earlier version of this article misstated the role that Steve Lee played in the development of Google Glass. He was involved in the project's initial development. He did not lead the project.
This article has been revised to reflect the following correction:
Correction: August 30, 2013
An earlier version of this article misspelled part of the name of a venture capital firm. It is Andreessen Horowitz, not Andreesen Horowitz.