Thursday, January 01, 2015

Man Computer Symbiosis

Earlier this year I was working on our online banking platform and kept thinking about the question, “Will we need people in the finance function in the future or will it all be done by computers?” 

I've come to the conclusion that people will be around for a long time. Humans and computers can do a lot more together than they can alone. J. C. R. Licklider (one of the founding fathers of the internet) discussed this concept a long time ago in a paper called Man-Computer Symbiosis. Essentially, machines do the grunt work, allowing humans to focus on things that are more important. Today humans work alongside computers almost constantly. Think about driving to dinner using the computerized maps and GPS on your phone. Or making a call on that phone (another computer). Or even driving the car that is stuffed with tiny computers to help with steering and measure your tire pressure.

I found a wonderful example of Man-Computer Symbiosis from Garry Kasparov -- one of the best chess players ever. He gave a lecture on how humans and computers can partner together when playing chess. I'll summarize the key points below, or you can read a great piece that Kasparov wrote in the New York Review of Books or watch a video of his lecture.
  • The End of Human-Computer Chess? In 1997 the IBM computer Deep Blue beat the world chess champion Garry Kasparov. This was the first time that the best computer in the world beat the best human in the world. Most of the world considered this the end of human-computer chess. Computers would continue to get better each year much faster than people -- leaving human players in the dust.
  • But A New Type of Competition Emerged: The website Playchess.com held a “Freestyle” competition in 2005. People could compete in teams and use computers. Traditionally the use of computers by human players would be considered cheating. The substantial prize money on offer enticed many of the world’s greatest grandmasters -- and the chess supercomputer “Hydra” -- to enter.
  • A Surprise Winner: As it turns out, grandmasters with laptops could easily beat Hydra and the other supercomputers. But the overall winner was a pair of amateur players with 3 laptops. These were neither the best players, nor the best machines, but they had the best process. As Kasparov writes, “Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.”
Another example is the company Palantir -- a software startup that helps “good guys” (e.g., governments, banks) catch “bad guys” (e.g., terrorists, fraudsters). Most people attack this problem from the perspective of “How can we get computers to find the bad guys?” Palantir takes the man-computer symbiosis point of view by providing a tool that makes the good guys much better at their job.

Considering how pervasively computers are woven into the very fabric of our lives, thinking through the model of Man-Computer Symbiosis is critical both to building the best machines and to deploying and training people most effectively.

Thursday, October 06, 2011

In the Beginning

I've always enjoyed first-person accounts of the beginning of the computer age. What was it like to be there? How did people view new technologies before they became part of our everyday lives? I've put together a list of some of my favorite magazine articles that capture that feeling. My previous blog post on The First Computer Interface captures that sentiment, and here are five more. Most of the articles are on Kevin Kelly's list of Best Magazine Articles Ever (with the exception of Inside The Deal That Made Bill Gates $350,000,000). Here are five articles about the beginning of...
  • Silicon Valley (The Tinkerings of Robert Noyce by Tom Wolfe, Esquire Magazine, December 1983) Robert Noyce founded two of the most important startups in Silicon Valley -- Intel and its predecessor Fairchild Semiconductor. Tom Wolfe (yes, that Tom Wolfe) wrote about Noyce exporting the Midwestern Congregationalist ethic to create the modern culture of Silicon Valley. Noyce believed in a strict meritocracy. Wolfe writes "Noyce’s idea was that every employee should feel that he could go as far and as fast in this industry as his talent would take him.... When they first moved into the building, Noyce worked at an old, scratched, secondhand metal desk. As the company expanded, Noyce kept the same desk, and new stenographers, just hired, were given desks that were not only newer but bigger and better than his." At the same time that Noyce was founding Silicon Valley, another set of small town Midwesterners were sending men into space. After the success of the Apollo 11 mission, NASA’s administrator, Tom Paine, happened to remark in conversation: “This was the triumph of the squares.” This may have been the first reference to geeks conquering the earth (and space).
  • Hacking (Secrets of the Little Blue Box by Ron Rosenbaum, Esquire, 10/1971) The original hackers were called "phone phreaks." These were kids who figured out a weakness in the AT&T telephone system that they could exploit. By playing a 2600-hertz tone into their mouthpiece, they could trick the phone company into giving them free calls. The most famous of the phone phreaks was John Draper (aka Captain Crunch), who discovered that a whistle given away in boxes of the children's cereal gave off the magic tone. He also taught Steve Jobs and Steve Wozniak how to phone phreak. The phone phreaks exemplified the original hacker ethic -- to explore a giant system to see how it worked. Of course, like modern hackers, some got a little carried away by the exploration. Toward the end of the article Rosenbaum writes a bit about how many of the phone phreaks were getting into computer hacking -- which was quite a feat in 1971. There is also a great documentary on the history of hacking, from Captain Crunch to Steve Wozniak to Kevin Mitnick, that does a wonderful "where are they now" of the field.
  • Video Games (Spacewar by Stewart Brand, Rolling Stone, 11/7/1972) Stewart Brand wrote a fantastic piece on Spacewar -- the world's first video game. Spacewar was written before most people had thought about putting graphics on a computer. Its hardware didn't even have a multiply or divide function. Brand talks about the computer geeks at Stanford and MIT who were writing the first computer programs meant to be used by other people (as opposed to programs that solve a specific numeric problem). One of the most entertaining program names was a word processing system called "Expensive Typewriter." At the time, the ARPANET only had about 20 computers, but people were starting to understand that if it took hold, it would transform the news and recording industries. As a side note, there is computer code at the end of the article -- probably the only time code was ever published in Rolling Stone magazine.
  • Microsoft (Inside The Deal That Made Bill Gates $350,000,000, Bro Uttal, Fortune, 7/21/1986) You don't hear much about Bill Gates these days -- a man who seems focused on his privacy. The Guardian published an interview with Gates this summer where the most interesting tidbit was that his children liked to tease him by singing the song Billionaire by Bruno Mars. But Microsoft was a very different company in 1986, when a 30-year-old Bill Gates invited Fortune Magazine to spend five months with him while the company went through its IPO. This is one of the few journalistic tales of an IPO ever written. The editor's note reads "I doubt that a story like this has been published before or is likely to be done again." It's amazing to see an early Microsoft where Bill Gates used part of the $1.6 million cash he made on the offering to pay off a $150,000 mortgage. He also pushed to keep the company's value at the IPO below $500MM -- a level he felt was uncomfortably high. But the most interesting insight that Uttal has into the young Gates is that he was "something of a ladies’ man and a fiendishly fast driver who has racked up speeding tickets even in the sluggish Mercedes diesel he bought to restrain himself."
  • Blogging: (You've Got Blog, Rebecca Mead, The New Yorker, 11/13/2000) When I first read this article in 2000, it introduced me to many things "Blog" -- including the words "blog" and "blogger" -- as well as some of the original bloggers: Evhead, Megnut and Kottke.org. Kottke.org is still one of my favorite blogs after a decade. Like many startups, Blogger was a side project that was written over a weekend. Pyra (its parent company) was supposed to be making project management software. It's interesting to see how early bloggers were the mavericks of modern social networking (though some ideas, like putting themselves on webcams 24/7, have thankfully gone away). Blogs made it easier for "regular" people to post -- and social networking makes it even easier. Facebook in many ways is just the extension of that -- allowing everyone to have their own webpage.
As an added bonus, it's worth reading the book The Nudist on the Late Shift by Po Bronson. Po gives a wonderful history of what it was like to be part of the Silicon Valley tech boom of the late '90s. Po's book was so compelling that it pulled many newcomers to the Valley. He felt slightly bad about this after the bust and started apologizing.

Wednesday, July 06, 2011

Lessons From the First Computer Interface (E-Mail)

Errol Morris, the famous documentary director of The Thin Blue Line and other films, wrote a great piece in the New York Times called Did My Brother Invent E-Mail With Tom Van Vleck? (Parts 1 | 2 | 3 | 4 | 5). As it turns out, Morris's brother, Noel Morris, worked at MIT on CTSS, which was the predecessor of Multics, which was the predecessor of Unix -- the ancestor of the operating systems that run the internet as well as the Mac OS. Noel was also the person who (along with Tom Van Vleck) wrote the first email program on CTSS.

Morris writes an homage to his brother that looks at some of the very early history of human-computer interface design. In fact, CTSS offered the first human-computer interface to really exist: typewriters jury-rigged to a computer to allow interactive input. Before that, programmers had to write programs on punch cards, which wasn't much of an interface at all. Fernando Corbató, one of the founders of time-sharing computing systems, describes how frustrating computers were at the time:
FERNANDO CORBATÓ: Back in the early ‘60s, computers were getting bigger. And were expensive. So people resorted to a scheme called batch processing. It was like taking your clothes to the laundromat. You’d take your job in, and leave it in the input bins. The staff people would prerecord it onto these magnetic tapes. The magnetic tapes would be run by the computer. And then, the output would be printed. This cycle would take at best, several hours, or at worst, 24 hours. And it was maddening, because when you’re working on a complicated program, you can make a trivial slip-up — you left out a comma or something — and the program would crash. It was maddening. People are not perfect. You would try very hard to be careful, but you didn’t always make it. You’d design a program. You’d program it. And then you’d have to debug it and get it to work right. A process that could take, literally, a week, weeks, months — 
But visionaries like J. C. R. Licklider realized that computers could be more than a processing device but an extension of a person's abilities. His paper “Man-Computer Symbiosis” with one of the first descriptions of the interdependence that humans and computers would eventually have:
The fig tree is pollinated only by the insect Blastophaga grossorum. The larva of the insect lives in the ovary of the fig tree, and there it gets its food. The tree and the insect are thus heavily interdependent: the tree cannot reproduce without the insect; the insect cannot eat without the tree; together, they constitute not only a viable but a productive and thriving partnership…

Man-computer symbiosis is a subclass of man-machine systems. There are many man-machine systems. At present, however, there are no man-computer symbioses… The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.
In this post, I'd like to point out how similar the challenges faced by the creators of the first time-sharing machines and email are to many of the product management challenges that people still have today.

1. Showing is more powerful than telling:
FERNANDO CORBATÓ: “So that was mostly to convince the skeptics that it was not an impossible task, and also, to get people to get a feel for interactive computing. It was amazing to me, and it is still amazing, that people could not imagine what the psychological difference would be to have an interactive terminal. You can talk about it on a blackboard until you are blue in the face, and people would say, ‘Oh, yes, but why do you need that?’ You know, we used to try to think of all these analogies, like describing it in terms of the difference between mailing a letter to your mother and getting [her] on the telephone. To this day I can still remember people only realizing when they saw a real demo, say, ‘Hey, it talks back. Wow! You just type that and you got an answer.’”
The article does a very good job of showing vs. telling by including an email simulator that provides an interactive demonstration of how the original email program worked on CTSS. It's much more arcane than you would imagine -- even to the point of using typewriters. Try hitting the backspace button when you're typing a message and see what happens.

2. Give an early version to your users because you never know how they might use it 
The strongest impacts of an emergent technology are always unanticipated. You can’t know what people are going to do until they get their hands on it and start using it on a daily basis, using it to make a buck and using it for criminal purpose and all the different things that people do.
— William Gibson, interviewed in The Paris Review, Art of Fiction #211
The original time-sharing machines were created to make programming and debugging much easier. But to the engineers' surprise, people wanted to share data with each other on the machine. In many ways this was the first computer-mediated social network.
TOM VAN VLECK: The idea of time-sharing was to make one big computer look like a lot of different little computers that were completely unconnected to each other. But it turned out that what people really liked about time-sharing was the ability to share data. And so one person would type in a program and then he’d want to give that disk file to someone else. And this was a surprise to the initial CTSS developers who didn’t realize that was going to happen. It’s one of the things that led us to build a new operating system after CTSS — Multics — which was able to do that better. When we wanted to send mail the idea was that you would type a message into a program running on your account and then mail would switch to your addressee’s account and deposit the message there. Only a privileged command that was very carefully written to not do anything bad could do that. And so we had to become trusted enough to be able to write that thing.
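To make the mechanism concrete: what Van Vleck describes is essentially a single trusted routine that is allowed to cross account boundaries. Here's a toy sketch of that idea in Python (the data structures and names are my own invention, not the actual CTSS code):

```python
# Toy model of the CTSS mail scheme Van Vleck describes: user accounts are
# isolated from one another, and only one trusted, carefully written routine
# may deposit a message into another user's mailbox. Everything here is an
# invented stand-in for illustration.
mailboxes = {"noel": [], "tom": []}

def privileged_deposit(sender: str, recipient: str, text: str) -> None:
    """The one trusted path across account boundaries."""
    if recipient not in mailboxes:
        raise ValueError(f"no such user: {recipient}")
    # Deliberately minimal: append to the recipient's mailbox and nothing
    # else -- never read or modify any other account data.
    mailboxes[recipient].append(f"From {sender}: {text}")

privileged_deposit("tom", "noel", "MAIL is working!")
print(mailboxes["noel"])
```

The design point is the same one the quote makes: the smaller and more boring the privileged code, the easier it is to trust.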
3. Incumbents often miss the boat in a big way
IBM missed the boat on time-sharing, but they eventually recovered.
Marvin Minsky, one of the early members of Project MAC and director of its AI group, provides an account of an early meeting about time-sharing at IBM. IBM was committed to batch processing; it was part of their business model.
MARVIN MINSKY: “In fact, we went to visit IBM about using a computer with multiple terminals. And the research director at IBM thought that was a really bad idea. We explained the idea, which is that each time somebody presses a key on a terminal it would interrupt the program that the computer was running and jump over to switch over to the program that was not running for this particular person. And if you had 10 people typing on these terminals at five or 10 characters a second that would mean the poor computer was being interrupted 100 times per second to switch programs. And this research director said, ‘Well why would you want to do that?’ We would say, ‘Well it takes six months to develop a program because you run a batch and then it doesn’t work. And you get the results back and you see it stopped at instruction 94. And you figure out why. And then you punch a new deck of cards and put it in and the next day you try again. Whereas with time-sharing you could correct it — you could change this instruction right now and try it again. And so in one day you could do 50 of these instead of 100 days.’ And he said, ‘Well that’s terrible. Why don’t people just think more carefully and write the program so they’re not full of bugs?’”
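Minsky's numbers are worth a quick back-of-the-envelope check, because they show why the research director's worry was misplaced. Here's a sketch (the per-interrupt cost is my assumption, not a figure from the article):

```python
# Back-of-the-envelope arithmetic for Minsky's scenario: 10 users typing
# 5-10 characters per second means roughly 100 interrupts per second.
# Assuming (hypothetically) that each interrupt costs ~100 microseconds of
# switching work, the "poor computer" loses only about 1% of its time.
USERS = 10
CHARS_PER_SEC = 10          # the upper end of Minsky's "five or 10"
SWITCH_COST_SEC = 100e-6    # assumed per-interrupt cost; not from the article

interrupts_per_sec = USERS * CHARS_PER_SEC        # 100
overhead = interrupts_per_sec * SWITCH_COST_SEC   # 0.01 seconds per second
print(f"{interrupts_per_sec} interrupts/sec -> {overhead:.0%} CPU overhead")
```

A trivial cost to the machine in exchange for turning a months-long debugging cycle into 50 tries a day.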
A far bigger loser was the Post Office:
TOM VAN VLECK: Well, I remember vaguely discussing it with people and worrying about what the U.S. Post Office would think of [e-mail] and whether they would tell us not to do it, or tell us that they had to be involved in it. 
ERROL MORRIS: Well, secretly, you were trying to put the post office out of business. 
TOM VAN VLECK: We didn’t realize that at the time, but we were afraid that they would want us to destroy a first class stamp every time we sent a mail message. 
ERROL MORRIS: Really! There would be Noel Morris and Tom Van Vleck stamps.
TOM VAN VLECK: We didn’t want to ask them because we were afraid they would say, “No, of course not.” Or, “We have a monopoly on that.” Which they did. In those days if you sent a box by UPS and you put a letter in the box, you were supposed to destroy a first class stamp. 
ERROL MORRIS: Is that true? 
TOM VAN VLECK: Oh, yes. The U.S. post office had a monopoly on sending mail. So, we didn’t ask until finally some years later, one of the professors at MIT ran into somebody from the Post Office advanced development organization, or whatever it was, at a conference and said, “Hey, we have this thing. Are you concerned with that, are you interested in it?” And he said, “Oh no, forget it, we’re not interested in that.” And we said, “Great, thanks. That’s what we were hoping to hear.” We didn’t ask again.

Sunday, June 26, 2011

Why the iPad Beat Out the Chromebook

In 2010 we saw the release of the iPad along with the announcement of the Chromebook. I clearly remember my original thoughts on both. I thought the Chromebook was genius. In fact, I'd practically built one myself the previous year. My wife had insisted that her computer was too slow even though she had a pretty fast machine that wasn't even two years old. So after trying a number of solutions, I settled on bringing out a laptop from 2003 and not loading anything on it other than Google Chrome. It was blazingly fast at browsing the web. I thought that many other people would love to buy an optimized version of this machine (my grandparents, for instance). The Chromebook would boot up immediately and have everything needed for an optimal web experience. For the iPad I had almost the exact opposite reaction. I remember listening to an Engadget podcast that asked, "Who really wants a giant iPhone?" and I heartily agreed. Case closed.

But how did things turn out? The iPad turned out to be a transformative device -- single-handedly creating the category of the mass-market tablet. Apple sold over 15 million first-generation iPads and held 96% market share until Q4 of 2010. What I hadn't realized at the time was that companies had been trying to make a great tablet computer for years, but none had been successful at it. An interesting side effect of Apple creating the tablet market was that there is now no need for the Chromebook. Why would anyone buy a PC just to browse the web when an iPad does that so spectacularly? I suspect that Chromebooks might still have a role in businesses -- especially when you can lease one for $20/month. An optimized Chromebook would go well with Google Apps if your company were totally committed to the platform.

But the iPad has allowed others to transform the product landscape. One product category that comes to mind is online news readers. RSS readers are a great technology but have a number of failings. They feel more like email inboxes full of unread messages than like a newspaper. But look at the iPad's best take on the news reader: Flipboard. There are some really great talks online by Evan Doll, one of Flipboard's founders, about what makes Flipboard a great news reader. You can find them at iTunesU in the lectures Designing for the iPad (which was given before Flipboard and the iPad itself were released) and Designing Flipboard. Evan talks about some key things that make Flipboard great:
  • Creating something beautiful that combines design and editorial (like a great magazine)
  • Preventing information overload (an issue of Time Magazine doesn't overwhelm and scare you like your Facebook News Feed might)
  • Leveraging the personal nature of social media to create a personalized magazine
After spending time with it, you realize why Flipboard is a fundamentally different (and better) way of consuming online news.

Sunday, June 05, 2011

Your Product Will Never Be Simple Enough

In a recent article, David Pogue wrote that there is no core curriculum for people to understand technology. People often ask him "obvious" questions about technology that they never learned the answers to. That's probably why he wrote his Missing Manual series. We're all familiar with the problems of complex technology that we can never figure out, but how can we fix this problem? One goal would be to make technology as easy as tying your shoes. Unfortunately, most of us can't even figure out how to do that right.

Even tying your shoes isn't as easy as it should be. At the TED conference in 2005, Terry Moore gave a quick three-minute talk on how to tie your shoes. After a pair of his shoes kept coming untied, he tried to return them. When he went to the store, the salesperson said, "Hey, you're tying them wrong." This was a bit upsetting because at 50 he thought, "If there's one thing that I thought I'd really nailed, it was how to tie my own shoes." The salesman proceeded to explain that most people tie the weak form of the shoelace knot (based on a granny knot) instead of the strong form (based on a square knot). The video is three minutes very well spent, as your shoelaces will never come untied again. By the way, for a more thorough treatise on the topic take a look at Ian's Shoelace Site.

So if even shoelaces aren't idiot-proof, how can we as product managers expect our customers to use our products correctly? Here are a few ideas:
  1. Make the Primary Use Cases Super Clear: Twitter has many complicated features for power users (e.g., hashtags) that many newbies don't understand. But even the most naive user will pick up on the giant "What's Happening" box at the top of the screen. This design feature was so useful that Facebook quickly copied it.
  2. Allow Users To Come Up To Speed Easily: Microsoft Office is the king at this. First of all, Office has keyboard shortcuts, but a new user doesn't know how to use them. So if they want to copy something, they go to Menu -> Edit -> Copy. Then they realize that you can also copy by pressing Ctrl-C -- it's right there next to the Copy menu item. In Office 2007, Microsoft went much further by combining all of the features of menus and toolbars into a single "ribbon." This greatly increases the transparency of the program and brings features much closer to the user. It also provides pre-packaged uses of features (e.g., formatting a table in a pleasing way) that let people leverage the power of Office very easily. Then they can customize the features later. If you're interested, there's a great video on the ribbon with the user interface lead for Microsoft Office.
  3. Getting Started: Many companies post tutorials or how-to lists. YouTube has a good example with their Creators Corner. It has everything that you need in order to create great YouTube videos, including "inspiration." Though this site is quite complete, it's a bit overwhelming and takes a while to find. Probably the best way to get people to understand your product is through video. Google produces videos for many of their products, like the Google Music cloud. Chase Blueprint does a great job of explaining a very complicated product in a way that makes sense to the everyday customer. These are essentially marketing videos that quickly take users through the primary use cases. They can be low cost -- like Google's are -- and still deliver a simple and clear message on how to use the product.
  4. Tip Of The Day: Users want to be up and running as quickly as possible. But once they get the swing of things, they rarely look for additional features. One way to get users more engaged is to add a "tip of the day" so that every time the user opens the application or logs on to your website, they receive a new idea of how to use your product (see the sketch after this list). Some tip providers like Windows Secrets even send a weekly update to subscribers, though such lists usually exist only for the primary software that customers use (e.g., Windows, Google Apps, etc.).
  5. Take Advantage Of Rebellious Users: Customers don't always use your product as expected, but that's a good thing! It's important to know that even if you've designed your product perfectly, power users will figure out interesting ways of using your product that you'd never imagined. UX Myths has a good list of products that were used in ways that were totally unexpected when they were designed. For example, Twitter moved from a site where people shared what they were doing to one where they share what they're thinking about. A classic case is Kleenex, which started as a makeup remover and ended up with a very different use case!
  6. FAQs: And of course, if all else fails, have good FAQs. A well-written FAQ is a really great thing. FAQs were my favorite things on the internet before 1994, when Mosaic kick-started the web revolution. And if you want to get really meta, there is a FAQ about FAQs.
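Here's the tip-of-the-day sketch promised above (the tip text is an invented placeholder): keying the rotation off the calendar date means every user sees the same tip on a given day, and the tip changes daily with no server-side state at all.

```python
# Minimal "tip of the day" sketch: rotate through a fixed list of tips,
# keyed off the calendar date so the tip changes once a day.
# The tips themselves are invented for illustration.
from datetime import date

TIPS = [
    "Press Ctrl-C to copy without opening the Edit menu.",
    "Drag a file onto the app icon to open it instantly.",
    "Use the search box to find any setting by name.",
]

def tip_of_the_day(today=None):
    today = today or date.today()
    return TIPS[today.toordinal() % len(TIPS)]

print(tip_of_the_day())
```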

Wednesday, May 18, 2011

Digital Innovation

It's been a while since I took a broad look at what’s happening in the digital world. Conveniently enough, the Webby Awards were a wonderful conglomeration of many of the great innovations happening on the web and on mobile devices. Here are some of the big trends:
  • Interactive Storytelling: Many companies are merging digital and traditional narrative storytelling. Touching Stories allows you to affect a movie in a number of ways. For example, shaking the iPad makes the room shake – and the people in the movie fall down. Another great interactive video is A Hunter Shoots a Bear, in which a hunter who is about to shoot a bear chickens out, and you get to substitute something else for him to do. It's really well done and does a great job of aligning with the sponsor Tipp-Ex, which produces correction tape. On a slightly different note, the band Sour makes some great videos about how people interact with social media. Sour’s big video last year sparked a very similar Pepsi Refresh commercial. This year's video is even better. The first person to copy it well in the US market will be heralded as a social marketing genius.
  • Leveraging Multiple Devices: One of the coolest things we're starting to see is how applications can coordinate across multiple different devices. Apple’s AirPlay is a simple example of how you can take music or video from your iPhone and throw it onto your Apple TV. CollabraCam links multiple iPhones together to direct a multi-camera movie. It syncs all of the different raw footage and allows the director to piece the movie together in real time. Another application is Remote Palette, in which the iPad is your canvas and you choose your colors from a palette on your iPhone.
  • Mixing Virtual and Real Worlds: One of the most interesting trends is combining the real world with the virtual world. In Toronto, M&Ms ran a “digital treasure hunt” where they secretly hid 30 large red M&M men while the Google Street View cameras were filming. Three of them ended up in the final map, and customers were challenged to find them inside Google Street View. In another digital/reality mash-up, Yahoo set up digital bus stops in San Francisco that allowed riders to challenge other bus stops in games of skill. And for those of you that like limited-edition sneakers, Airwalk sold a small set of limited-edition shoes at “invisible pop-up stores.” To buy the shoes you had to download an app and go to a specific location in order to purchase them.
  • Reality Tagging: Neer is an application that lets consumers tie their real-life behavior to their digital world. It can automatically notify family members when you leave work and even remind you that you’re out of milk when you pass the grocery store. I’ve wanted something like this for a long time. It’s not available on my phone yet, and I’m a bit skeptical about how well it will work. The problem is that GPS uses an enormous amount of battery power, so figuring out when to turn it on and off is a big challenge (one common workaround is sketched after this list). Another really cool use of reality tagging that I’ve heard about (but haven’t figured out how to do) is through Foursquare. The app allows you to tag locations that were reviewed in the New York Times so you can be reminded about them when you are nearby.
  • Nike: For years Nike has dominated the mobile and social space. Nike’s original Nike+ virtual running club was one of the first innovative uses of social media, and they haven’t stopped yet. Nike has an interesting take on augmented reality this year. Almost every version of augmented reality I’ve seen involves adding a graphic element to a real-world scene, but with Nike+ and Nike Boom they do augmented reality with sound. You can link these apps to your Facebook account and play your music through them while you exercise. If somebody likes or comments on your post, you'll hear cheering during your run. In Europe they’ve done quite a bit more. In Amsterdam, Nike let runners draw running tracks that look like graffiti. In London they even created a game that involves getting a team together to run to different locations to win prizes. Not to mention renting the side of a giant skyscraper and posting people’s social media messages on it during the World Cup.
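On the GPS battery problem mentioned above: the usual workaround (a sketch of the general idea, not Neer's actual implementation) is duty-cycling -- rely on cheap, coarse location from cell towers and Wi-Fi most of the time, and only power up the GPS when the coarse fix says you're near a tagged place.

```python
# Duty-cycled geofencing sketch: poll cheap coarse location often, and only
# power up the GPS when the coarse fix says we're near a tagged place.
# Locations and thresholds are invented; a real app would use the platform's
# location APIs instead of these stand-ins.
import math

GROCERY = (40.7359, -73.9911)   # hypothetical tagged place (lat, lon)
COARSE_RADIUS_M = 2000          # wake the GPS inside this radius
TRIGGER_RADIUS_M = 100          # fire the reminder inside this radius

def distance_m(a, b):
    # Equirectangular approximation; accurate enough at city scales.
    lat = math.radians((a[0] + b[0]) / 2)
    dx = math.radians(b[1] - a[1]) * math.cos(lat)
    dy = math.radians(b[0] - a[0])
    return math.hypot(dx, dy) * 6_371_000

def on_coarse_fix(position, read_gps):
    """position: cheap cell/Wi-Fi fix; read_gps: expensive precise fix."""
    if distance_m(position, GROCERY) < COARSE_RADIUS_M:
        if distance_m(read_gps(), GROCERY) < TRIGGER_RADIUS_M:
            print("Reminder: you're out of milk!")

# Simulated run: far away (GPS never turned on), then near the store.
on_coarse_fix((40.7800, -73.9600), lambda: (40.7800, -73.9600))
on_coarse_fix((40.7365, -73.9905), lambda: (40.7360, -73.9910))
```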

    Sunday, March 13, 2011

    The Future of TV

    2010 was the year that TV officially married the Internet. Actually, it was the year that the Internet proposed and TV ran off into the hills. This wasn't the first year that the Internet and TV had been dating. There have been internet-enabled TVs (IPTVs) for many years. In fact, many of the internet programmers of the late 90s were actually refugees from the failed interactive TV industry of the early 90s.

    What made 2010 different was that companies like Boxee and Google developed set-top boxes that make it super simple to interact with your TV. While tech geeks could always connect their TVs to their computers, these devices made it easy for non-techies to watch internet video in the living room. While the underlying technology didn't change, a much better user interface gave a large portion of the population an easy way to view IPTV. This change scared the networks.

    The networks right now aren't really sure what to do with broadcast TV on the internet. They are still experimenting and almost co-creating this new format with their technology-forward consumers. Some are experimenting with new features, like letting the viewer choose one of three commercials during a break. Experimenting was fine when the only people watching TV online were geeky early adopters who had a penchant for small-screen viewing or really enjoyed hooking up an HDMI cable from their computer to their TV. However, now that people can easily browse the web from their TVs, the networks feel like they're being rushed into a medium that they aren't comfortable with yet. We've already started to see the major networks blocking Google TV. Technologically it doesn't really make a difference whether you're watching from a PC or from an embedded PC (Google TV) inside your television. But from a consumer perspective it makes a huge difference. It means that the entire audience for NBC might stop viewing it through their cable operator and start watching it through the web.

    Many of the technology geeks don't seem to get it, though. I was listening to the Engadget podcast, and Josh Topolsky, the editor-in-chief of Engadget, asked, "Why doesn't Google TV just pretend to be a regular Windows-based PC to get around the blockers?" This is a technology solution that would be very easy to implement. The problem is that broadcast TV is a huge priority for the networks, while the internet is still an important but futuristic sideshow in terms of revenue. So rather than let Google TV in, they would shut out all PC viewing.
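    For context, the blocking Topolsky is talking about was reportedly done by checking the browser's User-Agent string, which is exactly why spoofing would be technically trivial -- the client simply declares a different identity. An illustrative sketch (the URL and header value below are placeholders, not a real network's endpoint):

```python
# Illustrative sketch of why User-Agent blocking is easy to evade: a device
# can claim to be any browser it likes. The URL and header value are
# placeholders for illustration.
import urllib.request

req = urllib.request.Request(
    "https://example.com/",
    headers={
        # A Google TV could simply claim to be a desktop browser:
        "User-Agent": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36",
    },
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)
```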

    David Pogue of the New York Times says that the whole advertising system isn't ready for this change:
    The reason they don’t, of course, has to do with ads; in the old model, advertisers pay to have their ads shown at a certain time of day, in certain geographical areas, and so on. The networks and Hulu show different, shorter, punchier ads when you’re watching the shows online. Showing them on your TV would violate their advertiser agreements.
    In theory, "smart" television should be much more targeted and effective than traditional "dumb" television. Networks would move from advertising based on a show to advertising based on a specific audience. For example, instead of creating ads that appeal to people who watch The Office, you could target single males 25-34. You could even make custom ads that focus on your different customer demographics (e.g., car advertisements featuring a single man, a single woman, or a family). While in theory that makes a lot of sense, most advertisers don't really have customized advertisements yet -- or even know how they might create them. Each car company only has a few different TV advertisements -- partly because it costs a lot of money to make a TV advertisement and partly because you can only spin your brand in so many ways.

    So where will TVs go next year? In my view we will start seeing every TV internet-enabled. Even if it's just to watch Netflix and YouTube, that's a lot more attractive than 3D TVs.

    Rob's Future Timeline of Television:
    • Phase 1 (~2011-2012): All TVs are connected to the internet, much like the way the first Wi-Fi-connected Blu-ray players in 2009 were followed by a slew of me-toos the following year. We can all look forward to an internet-connected TV. Google TV is nice and all, but until the TV networks get on board, most people will be watching a lot of YouTube and Netflix anyway. As for the "platform" that got so much talk at the Google TV launch: people mostly want to watch video on their TV. No one wants to check their email there, and the success of concepts like new types of games remains to be seen.
    • Phase 2 (~2012-2013): At this point the networks figure out how to finally move from the dumb TV model to the new smart TV model -- focusing on targeted ads for specific audiences. They will allow Google TV (or its progeny) to stream any and all content. Advertisers will be more effective and everyone will get along swimmingly. The only problem is that it took so long -- this is what should have happened all the way back in 2010.
    • Phase 3 (~2015): Once you can watch all of your TV online, the game starts to change dramatically. We will enter an era of disaggregation where distribution is separated from content -- similar to the way electric companies work today. One company will provide the "pipes" to your home while others offer you various pricing models. Today you can get versions of this by buying your content a la carte (Apple TV) or as a bundle of older movies and TV shows (Netflix streaming). More importantly, you could then buy your entire cable service from anyone: a bundle of channels, shows and movies all together. For example, you could buy a special dinosaur package that had premium Discovery Channel content, interactive games and even museum tickets. Another opportunity would be to buy local programming from where you spent your childhood up in Oregon. And someone else might offer a "DVR in the sky" that provides every possible show on demand.

    Saturday, November 13, 2010

    Why Privacy is About to Change

    The world of data privacy is about to change. Currently most companies feel free to treat your personal data as an asset that can be leveraged. As you no doubt have realized, many companies sell your personal information to other parties so they can cross-sell their products. For example, my friend Marc once entered his dog’s name when answering an online promotion, only to see the dog start to get a lot of related mail over the next few months.

    This problem is getting much worse. With the rise of social networking and people’s dependence on the Internet, much more of our private information is now available online. For example, banks often use “private” information to verify your identity when you call customer service. But now information like a mother’s maiden name is easily accessible via Facebook.

    My prediction is that two things are about to happen. First, people are going to become much more concerned about their privacy as they continue to put more and more information online. Second, some company with lots of private data (like Facebook) is going to play a bit too fast and loose with privacy, causing a public catastrophe. As the importance of privacy increases and companies fail to safeguard it, we’re looking at a major change in public policy on privacy. Likely this will mean that consumers will own all of their data and companies will need explicit permission to share it with others.

    When I talk to people about this, I often get the response, “This is technology; it’s no place for government policy.” But once a technology becomes entrenched in our everyday lives, that is exactly when it starts getting regulated. Remember that 100 years ago electricity and telephones were the top technologies of their day, and now they are two of the most regulated industries on earth.

    As an example, I’d like to talk about another new technology that totally changed the world. It was introduced in Seattle in the early 1960s at the Seattle Artificial Kidney Center. This was one of the first dialysis centers in the world, made possible by advances in technology that allowed a permanent shunt to be placed into the body. This let people have regular treatments where blood is moved outside of the body and cleansed by a machine. The machines were greatly oversubscribed due to their lifesaving nature and extremely limited availability.

    The head of the center, Belding H. Scribner, knew that making a decision on who should get treatment was incredibly serious. He created the Admissions and Policy Committee to decide who deserved treatment the most. These decisions were based on characteristics other than medical fit -- the patients had already been screened by a panel of doctors. The committee was a cross-section of society composed of seven lay people – a lawyer, a minister, a housewife, a state government official, a banker, a labor leader, and a surgeon who served as a "doctor-citizen." The group considered the prospective patient's age, sex, marital status, net worth, income, emotional stability, nature of occupation, extent of education, past performance and future potential. Essentially they needed to determine which of these people was "worth" the most.

    While Scribner’s solution was a good one, it was shocking when it reached the national stage. In November 1962, Life magazine ran an article called "They Decide Who Lives, Who Dies.” While the article started as a study of this wonderful new life-saving technique, it quickly became a study of what the author referred to as The Life and Death Committee.

    This article sparked a national conversation and led to the creation and popularization of bioethics. The inventors of dialysis were amazed that public discussion focused on the decision of who got the treatment rather than on the machines' remarkable ability to transform what was once a death sentence into a chronic condition.

    Today, when life-saving decisions must be made under limited availability (i.e., for transplant organs), a person’s “worth” is no longer considered. Doctors use a number of factors such as age and health to winnow down the list. Once patients are on the list, organs are distributed based on the severity of the condition, the time on the wait list and the geographical distance between the donor center and the hospital.

    It’s tempting to think that bioethics is a much greater social issue than personal privacy. But it’s not. Bioethics has just had more time to mature and enter the social consciousness. In fact, I was once in a business school class where we were presented with the Seattle Artificial Kidney problem of deciding who should live. This was a case study used at both the beginning and end of the course – essentially to show how much we’d learned along the way. However, it wasn’t a bioethics class but a Decision Sciences class!

    We were given the following problem: “Five people were dying of kidney disease and we only had the ability to save one of the five.” We were given short bios of each person (e.g., a 50-year-old doctor with three children who is working to cure cancer) and asked to rank-order which of the people we should save. While the exercise was very interesting and really showed how to rank-order options on a number of criteria, no one brought up any of the ethical issues. The teacher even seemed unaware of them. Even today, with five decades of bioethics behind us, whole classes of students can ignore the social issues when presented with a technical problem to solve.

    In short, technology can often go unhindered while it is being developed; however, once it becomes enmeshed in the social fabric, decisions are no longer made on technical merits but on how they affect society as a whole. What was once a technical issue becomes a social one. Or, to quote Spider-Man's Uncle Ben, “With great power comes great responsibility.”

    Tuesday, May 18, 2010

    Are More Choices Good or Bad?

    I've been following some of the recent research on how people make choices. The following summary presents two sides of an argument about whether more choice is a good or a bad thing. Malcolm Gladwell’s point of view is actually the standard thinking -- that more choice is better. Barry Schwartz, the author of The Paradox of Choice, says that some choice is good but more choice isn’t always better. Below I present their views as a mythical debate on the virtues and vices of choice.

    MR GLADWELL, PLEASE BEGIN YOUR ARGUMENT THAT MORE CHOICE EQUALS MORE HAPPINESS:



    I'd like to tell you all about the person who has added more happiness to the world than anyone else: Howard Moskowitz. Howard is the creator of chunky tomato sauce. A market researcher, Howard discovered that certain customers had fundamentally different tastes in tomato sauce. When Howard started his research, the world thought that there was only one "best" type of tomato sauce that everyone would prefer -- a platonic ideal of spaghetti sauce, captured in the old-world style of sauce (like Ragu). However, what he discovered was that people’s preferences did not converge on one universal platonic ideal but clustered into three groups: regular, spicy and chunky. At the time, no one was manufacturing chunky tomato sauce, and that's exactly what Moskowitz's client -- Prego -- created.

    Giving people the right kind of tomato sauce is like making them the right cup of coffee. If you ask people what kind of coffee they like, they will tell you that they like a dark, rich, hearty roast (but only 25-27% of people actually prefer that). Most of you actually want milky, weak coffee. If we came up with one blend of coffee to suit everyone, the best score you could get is a 60 out of 100 on average. However, if we could segment you into three or four coffee clusters, you would move from a score of 60 to 75 or 78. The difference between a 60 and a 78 is the difference between coffee that makes you wince and coffee that makes you deliriously happy.
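    (An aside from me: the intuition behind Moskowitz's numbers is easy to reproduce with toy data -- score each person by how close a product is to their ideal, then compare one compromise product against one product per cluster. All numbers below are invented for illustration, not Moskowitz's data.)

```python
# Toy illustration of segmentation: satisfaction falls with distance from a
# person's ideal product, so one compromise product scores worse on average
# than one product per taste cluster. All numbers are invented.
import statistics

# Ideal "coffee strength" for ten customers in three taste clusters
# (0 = weak and milky, 1 = dark and hearty).
ideals = [0.2] * 5 + [0.5] * 3 + [0.9] * 2

def satisfaction(product, ideal):
    return max(0.0, 100 * (1 - abs(product - ideal)))

one_blend = statistics.mean(ideals)   # the single compromise product
score_one = statistics.mean(satisfaction(one_blend, i) for i in ideals)

# One product per cluster, each matching that cluster's ideal exactly:
score_clustered = statistics.mean(satisfaction(i, i) for i in ideals)

print(f"one blend: {score_one:.0f}/100, clustered: {score_clustered:.0f}/100")
```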

    MR SCHWARTZ, PLEASE PRESENT YOUR OPPOSING ARGUMENT -- THAT TOO MUCH CHOICE CAN MAKE US LESS HAPPY:



    I agree with the vast majority of what Mr. Gladwell has to say, with one exception. Though some choice is better than no choice, more choice isn’t always better. This is a bit counterintuitive because in Western society we believe that freedom is good and more choice means more freedom.
    However, when I go to the supermarket today there are 175 salad dressings -- and that’s not including the 15 extra virgin olive oils and 42 vinegars I could mix together for customized Italian dressings. There are 75 varieties of iced tea, 230 soups and 40 brands of toothpaste. All of these choices don’t make people any happier -- they actually make them less happy. There are a few reasons for this:

    • Paralysis: With so many options, people find it difficult to choose at all. As an example, one of my colleagues examined employee participation in employer-sponsored retirement accounts. She found that for every 10 additional funds offered, participation actually goes down 2%.
    • Opportunity Cost: With 175 salad dressings, it’s easy to imagine a salad dressing that must be better than the one you have chosen. When there are lots of alternatives to consider, it is easy to imagine the attractive features of the alternatives you haven’t chosen. These untaken choices subtract from the satisfaction of what we’ve chosen, even when we’ve chosen a good option. As the following cartoon suggests, you can never be happy if you’re always wondering if you should be doing something else:


    • Escalation of Expectations: In the old days, jeans never fit right. They were stiff and painful, and if you washed them enough they eventually fit all right. Recently, I went to buy jeans and was completely overwhelmed by the options. I spent an hour trying on jeans and left with the best pair of jeans I've ever had -- but I felt worse. I wrote a whole book, The Paradox of Choice: Why More Is Less, to explain why. Back when there was only one type of jeans, I had no expectations. With 100 pairs to choose from, I expected to find the perfect pair. What I got was good but not perfect, and when I compared what I got to what I expected, I was disappointed. Today everyone expects things to be perfect and you can never be happily surprised – which is a shame.

    IN MY OPINION MR SCHWARTZ WINS THE DEBATE (AND HERE ARE HIS SOLUTIONS)

    In order to make customers happier, part of the solution is to reduce (rather than increase) the number of decisions an individual makes. Think about the freedom of going to a restaurant with a tasting menu. The chef makes the decisions, freeing you from choosing what to order – not to mention choosing what kind of salad dressing to buy. It doesn’t have to be that drastic, though – customers can be happy with customization as long as it requires relatively little work on their part. A good example of customization with little work is the Pandora radio application. You just tell Pandora the songs that you like and Pandora creates your perfect radio station.

    However, businesses can only do so much to raise customers’ happiness. Marketers continue to bring out new and better products – trying to convince consumers that these new products and new choices will make them happier. While these new options may be slightly better, most of the time the improvement is at the margins. Consumers need to lower their expectations of new products and realize that even the most customized product only provides the core benefits of the product itself and maybe a little more. No matter how customized your dishwashing detergent is, it will never make you as happy as the woman using it in the television ad. With that mindset, customers can be happier by ignoring most of the choices and just focusing on the few clusters that really matter – and realizing that the other differences won’t be all that significant.

    If you like this sort of discussion, you should watch the videos themselves – they're very good. Then you might want to listen to the Radiolab episode on Choice, as well as pick up Dan Ariely’s book Predictably Irrational.

    Saturday, February 06, 2010

    Why Default Settings Matter

    In a fascinating talk at the TED conference, Dan Ariely asks the following question: "What determines whether or not someone decides to donate their organs?" When you look at the data, there is a striking difference between two types of countries in the world. Some countries have an organ donation rate of close to 100% while others hover much lower -- not reaching beyond 30%. What could possibly account for this difference, especially among countries with similar cultural and ethnic heritage? Why would Austria be so much higher than Germany? Or Sweden so much higher than Denmark?

    Ariely points us to the paper "Do Defaults Save Lives?" by Eric J. Johnson and Daniel Goldstein. A default choice is the choice a customer makes by doing nothing -- e.g., always using Internet Explorer to browse the web because that's the way things worked when you bought your computer. (In fact, the Microsoft antitrust case in the late 90s revolved to a large extent around these default choices.)

    This startling difference in organ donation participation is tied to how the question is worded at the DMV. In some countries, the question is worded "Check this box if you want to be an organ donor." In others, it is worded "Check this box if you do not want to be an organ donor." If the question presumes that you will donate an organ, you will. As Ariely points out, it's not because people are lazy; it's because the decision is so important that it is paralyzing for people to make it themselves. They shouldn't have to choose what to do -- someone should make the choice for them and "the system should just work."


    When the people at the DMV created these forms, did they realize that they were going to have the single most important effect on the state of organ donation in their country? Certainly not. But that's the point. To a large extent, the default behaviors that the designer creates are those that will stick.

    Certain choices, especially those with high stakes and low information, are very hard for people to make. Let's look at one specific product that people need -- 401(k) programs. Logically, you would think that the more choices you gave someone on their retirement plans, the more likely they would be to join one of the programs. However, as a company adds additional plan choices, the rate of participation actually drops, because consumers don't know what to do with all of the extra choices. Richard Thaler and Cass Sunstein talk about this paradox at length in their book Nudge. They discuss an innovative solution to the 401(k) problem: the "Save More Tomorrow" program. The insight is that most people don't change their 401(k) options once they join a company. Therefore, the plan should, by default, change your options for you. This plan increases your 401(k) contribution at the beginning of each year, around the time most raises are given out, so people don't feel like they are losing any money. The company does the "right" thing for you if you decide to do nothing. This is an example of nudging the customer in the right direction, or in technical speak, "Libertarian Paternalism." It allows people to make their own decisions, but presents the best option for them as the default choice. It's the equivalent of saying, "You can have whatever you want for dinner, but I'm serving skinless chicken with fresh vegetables."
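    Here's a sketch of the pattern in product terms (the field names and rates are invented for illustration, not Thaler and Sunstein's actual plan design): the recommended behavior is the default and takes effect if the user does nothing, while opting out stays one explicit step away.

```python
# Toy sketch of "Libertarian Paternalism" in a 401(k) enrollment flow:
# doing nothing keeps the user on the recommended path (enrolled, with
# annual auto-escalation), while opting out remains a single explicit
# choice. Rates and field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Plan401k:
    enrolled: bool = True            # default: participate
    contribution_rate: float = 0.03  # default starting rate
    auto_escalate: bool = True       # default: Save More Tomorrow

    def new_year(self, got_raise: bool) -> None:
        # Bump the contribution when raises go out, so take-home pay
        # never visibly shrinks.
        if self.enrolled and self.auto_escalate and got_raise:
            self.contribution_rate = min(self.contribution_rate + 0.01, 0.10)

# A user who never touches the settings still ends up saving more each year.
plan = Plan401k()
for year in range(3):
    plan.new_year(got_raise=True)
print(f"contribution after three years: {plan.contribution_rate:.0%}")  # 6%
```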

    As product designers, we often expect customers to make the best choices for themselves given enough options. However, as we have seen, in many areas people don't choose what is best -- they choose what is easiest. There are very few products that customers will engage with deeply enough to learn the myriad features. If you still don't believe me, take a look at this New York Times article on how few applications people download on the iPhone. Remember that the vast majority of consumers won't be changing their default settings, so make sure that the product works great right out of the box.