IUI Example – Creative Inspiration, from Google

If you read any of the “year ahead” predictions for 2019, or even ones from the last few years, one thing you’ll undoubtedly come across is the claim that robots or AI will eventually take everyone’s job… maybe not today, or tomorrow, but eventually. I personally don’t buy this line of thinking, and think that Marc Andreessen had it ‘mostly’ right in a post he wrote way back in 2014: “This is probably a good time to say that I don’t believe that robots will eat all the jobs…”

One of the most interesting topics in modern times is the “robots eat all the jobs” thesis. It boils down to this: Computers can increasingly substitute for human labor, thus displacing jobs and creating unemployment. Your job, and every job, goes to a machine.

This sort of thinking is textbook Luddism, relying on a “lump-of-labor” fallacy – the idea that there is a fixed amount of work to be done…The counterargument to a finite supply of work comes from economist Milton Friedman — Human wants and needs are infinite, which means there is always more to do. I would argue that 200 years of recent history confirms Friedman’s point of view.

Marc Andreessen

Marc then goes on to rip apart the “robots take all the jobs” eventuality with a number of really compelling arguments and thought experiments.

The post is incredible and I’m probably not smart enough to do it justice, but let me try to offer a quick summary of his arguments and then I’ll provide my take… and circle back to the creative inspiration title of this post!

The overly simplistic version of the points in his very long post is:

  1. For the Luddites to be right today (that robots / AI will take all the jobs), one has to believe that there won’t be any new wants or needs (which runs counter to human nature), and that people won’t contribute to creating the things that satisfy them.
  2. While it’s true that automation / technology displaces work (and that is something that must be addressed), the flip side is that the same automation / technology increases standards of living. The way that I’ve always thought about this is through all the safety features in cars: blind-spot monitoring systems were only available in high-end vehicles a few years ago, and now most cars have them. That is a life-saving feature, not just a cheaper big-screen TV, and technology enables both.
  3. He offers three suggestions to help people who are hurt by technological change: first, focus on increasing access to education. Second, let markets work. Third, create & sustain a vigorous social safety net. With those three things in place, humans will do what they’ve always done: create things to address and/or create new wants and needs.

Now, I love all three of those, and it would probably be enough to stop there… but Marc goes MUCH further…

  • The flip side of robots eating jobs is that this same technological advancement also democratizes manufacturing – it puts the power of production in everyone’s hands! I love this thinking!
  • Costs go down / things get cheaper… robots producing things means lower costs, which lead to falling prices, which stretch people’s purchasing power and raise people’s standard of living.

He wraps up the long post by restating his “this is a good time to state that I don’t think robots will eat all the jobs…” and offers the following, which I’ll take one by one…

First, robots and AI are not nearly as powerful and sophisticated as I think people are starting to fear. Really. With my venture capital and technologist hat on I wish they were, but they’re not. There are enormous gaps between what we want them to do, and what they can do.

Marc Andreessen

I think Marc is still right on the first one. I’m a goofy optimist. Always have been, always will be. There is a line that someone used on me once that I’ll never forget: “don’t mistake a clear vision for short distance” and while technically true given the subject we were discussing, one never knows where the next breakthrough is going to come from! Again, he was right in 2014, but probably less so every day.

One other point I’d like to make here is that they don’t have to be as powerful & sophisticated as people in the general press suggest. What I’m focused on are the specific examples of real progress. I think we are a long way from the “Singularity” (great video here), but trying to pinpoint its arrival isn’t really super exciting to me, not practically anyway.

The questions to really be asking are: “which jobs?”, “when?”, and “how?”. My thought is that it won’t be some zero-sum game.

Second, even when robots and AI are far more powerful, there will still be many things that people can do that robots and AI can’t. For example: creativity, innovation, exploration, art, science, entertainment, and caring for others. We have no idea how to make machines do these.

Okay, this is one of the main reasons I wanted to write about this. As someone who works in a creative field, creativity is something that I’m incredibly interested in… so I was a bit floored when I started digging into Magenta…

A primary goal of the Magenta project is to demonstrate that machine learning can be used to enable and enhance the creative potential of all people.

There is a lot to Magenta, so I’ll focus on two parts that really caught my attention.

First is Sketch-RNN – given a source sketch, it will auto-generate additional sketches. The output is pretty rudimentary, but two things stand out: first, it works from just a rough starting image, and second, imagine what this could do over time.

Second is Music Transformer – given a starting sequence, it will generate music with long-term coherence to the original sample provided.
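To make the “given a starting sequence, continue it” idea behind both models concrete, here is a toy sketch of priming a sequence model and sampling a continuation. To be clear, this is not Magenta’s actual API – the event vocabulary, the transition matrix standing in for a trained model, and the temperature sampling are all invented for illustration:

```python
import numpy as np

# Toy stand-in for a trained sequence model: a transition matrix over
# a tiny vocabulary of musical "events". Sketch-RNN does the same kind
# of thing with pen strokes instead of notes.
VOCAB = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "rest"]
rng = np.random.default_rng(42)
transitions = rng.random((len(VOCAB), len(VOCAB)))
transitions /= transitions.sum(axis=1, keepdims=True)  # rows sum to 1

def continue_sequence(primer, steps=8, temperature=1.0):
    """Feed in a primer, then sample a continuation one event at a
    time. Lower temperature = safer, more repetitive output."""
    seq = list(primer)
    for _ in range(steps):
        probs = transitions[VOCAB.index(seq[-1])]
        probs = probs ** (1.0 / temperature)
        probs /= probs.sum()
        seq.append(VOCAB[rng.choice(len(VOCAB), p=probs)])
    return seq

# Prime with a C-major arpeggio and let the "model" riff on it.
print(continue_sequence(["C4", "E4", "G4"], steps=8, temperature=0.8))
```

The real models are, of course, enormously more sophisticated, but the interaction pattern – a person provides the creative seed, the machine proposes continuations – is exactly the same.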

It doesn’t take much imagination to come up with ideas on where we’re going to start seeing uses for this type of innovation. If you’ve ever played with GarageBand on your iOS devices (or the full-blown Logic Studio on a Mac, or similar DAW software), you can start to see that we’re witnessing a further advancement in the democratization of the creation of art & entertainment.

As designers, the sketching stuff should be of particular interest. One of the most important parts of my design process is the Diverge / Emerge / Converge diamond. I can see a future where tools like this will help us quickly explore more divergent ideas.

You should really go check out the samples on that page; they are truly incredible.

And check out some of the other demo apps.

I thought about ending here, but there are two more points to cover, so back to Marc…

Third, when automation is abundant and cheap, human experiences become rare and valuable. It flows from our nature as human beings. We see it all around us. The price of recorded music goes to zero, and the live music touring business explodes. The price of run-of-the-mill drip coffee drops, and the market for handmade gourmet coffee grows. You see this effect throughout luxury goods markets — handmade high-end clothes.

Marc Andreessen

I love this line of thinking. Imagine a future where we see labels in clothes that read “Made by Humans” like the “Made in the USA” labels we see today.

Finally, his last point…

Fourth, just as most of us today have jobs that weren’t even invented 100 years ago, the same will be true 100 years from now.

Marc Andreessen

That final point is so true. Marc actually ends his post stating he is ‘way long’ on human creativity, as am I…

What do you think? Will technology take our creative jobs away? Change them? Thoughts?


What’s Next in Tech – Macro Drivers

A few years ago I had the good fortune of getting to build and help lead an Advanced Technology Lab for a large consumer goods company. During that time I created something called the ‘Future Opportunity Framework’. The goal of the framework was to give the company the ability to “peek around corners” when it came to the future of technology… to create a practical toolkit to drive prioritized investments and action.

There were four main sections of content:

  1. Macro Drivers – these were the big factors that were fundamental in helping to understand where things were headed. I’m going to share a handful of them in this post.
  2. Models of Change – more of a sidebar, actually. Just an outline of the 6 major types of change (Linear, Asymptotic, Cyclic, etc.).
  3. Technology Insights – this was the bulk of the content, a little more than two dozen insights that could be used as input into the ideation process.
  4. Framework Process – a simple process of What, So What, Now What that could be used to leverage the content above to generate hypotheses, identify opportunities, and determine next steps.

I’ll be sharing the Models of Change and some of the Technology Insights in future posts, but wanted to get some of the macro drivers out in order to reference them later. None of these should really be much of a surprise; these were some of the big things going on in the world that helped explain some of the changes we were seeing.

Here are 6 of the original 9 macro drivers from early 2015…  again, these should all be fairly obvious…

Increasing Number of Software Libraries & Open Source Projects

Everything is now an API. From accepting payments (Stripe) to incorporating machine intelligence (Mahout, DeepLearning4J, etc.), developers can find software libraries & open source projects for just about anything. This isn’t just the surface stuff either; we’re seeing an explosion in the availability of software infrastructure & management code as well. A great example is when Netflix open sourced their Simian Army, a suite of tools to improve the availability & reliability of the Netflix service. Netflix is not alone in this; leading tech companies are all donating leading-edge software libraries to the open source community, including Google, Facebook, Twitter, and even Walmart Labs.
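To give a taste of what “everything is an API” means in practice, here is roughly what accepting a card payment looks like with Stripe’s Python library. This uses the older Charges-style API, and the key and token are placeholders, so treat it as a sketch rather than integration-ready code:

```python
import stripe  # pip install stripe

stripe.api_key = "sk_test_..."  # placeholder secret test key

# One call to charge a card -- the kind of capability that used to
# require a merchant account and months of custom integration work.
charge = stripe.Charge.create(
    amount=2000,        # amount in cents ($20.00)
    currency="usd",
    source="tok_visa",  # Stripe's standard test card token
    description="Example charge",
)
print(charge["status"])
```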

Decreasing Cost to Build & Launch

There are two main parts to this driver. First, as mentioned above, the very practical implication of the increasing availability of quality software libraries is that building a product has never been cheaper (or faster). Second, we’re seeing tremendous growth in cloud environments, like AWS (Amazon Web Services). Any developer in the world can now provision a load-balanced, highly available, fault-tolerant, n-tier server environment with a few mouse clicks and only pay for actual usage.
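And “a few mouse clicks” can just as easily be a few lines of code. A minimal sketch using AWS’s boto3 library – it assumes configured credentials, and the AMI id is a placeholder:

```python
import boto3  # pip install boto3; assumes AWS credentials are configured

ec2 = boto3.client("ec2", region_name="us-east-1")

# A single API call where racking a physical server used to take weeks.
response = ec2.run_instances(
    ImageId="ami-12345678",  # placeholder image id
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```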

Shifting Needs & Sources of Funding

This is a potential disruption to Silicon Valley. Given 1 & 2 above, tech companies need less funding to get started. Startups no longer need to go to a VC and give away a boatload of equity when the company is pre-revenue or still in seed stage; it’s pretty easy and cheap to get pretty far, whereas in the past just building out the basic concept of an idea required VC funding. The other major part of this is the tremendous growth of crowdfunding sites, like Kickstarter and Indiegogo: got an idea for a product? Post it on a crowdfunding site and let the community fund the idea to start. Although not every project on crowdfunding sites gets funded, and not all the projects are worthwhile, we have seen some major success stories – like Pebble and Oculus – that are providing evidence of the power of these platforms to rewrite the role of venture funding.

Increasing Interest & Investment in Machine Intelligence

It seems everyone is doing something with AI lately: Watson winning on Jeopardy, Facebook opening an AI lab, Baidu opening an AI lab (and hiring very senior talent away from Google), Google investing in AI companies at a rapid pace to build out its capabilities. Every day, it seems, there is a new advancement made, software library released, or news story about it.

Maturing Hardware Possibilities & Capabilities

For years the only hardware on the market came from large, well-established players, because building hardware is hard and expensive. That is really starting to change, rapidly. There are two main reasons this is happening: first, crowdfunding sites are enabling people to sell products before they’re even built, offsetting the cost component. The second is the rise of manufacturing capabilities, everything from low-cost options in China to specialized firms that will help bring a hardware vision to life by providing experience & expertise to novice hardware companies. One great example of this is the home automation company SmartThings, which recently sold to Samsung; they started on Kickstarter and raised the funds to build out their hardware platform.

Increasing Technology Adoption

This might seem incongruous with the rest of the list, as the other items are focused more on the nuts & bolts of tech, whereas this one takes more of a consumer lens. Why this matters is that people are adopting tech to handle more and more parts of their lives. It started with the web; now we’re all carrying really powerful smartphones and relying on them more and more. There is an increasing comfort people have using tech. This is a virtuous cycle that leads more people to start building tech products, because the audience is growing and it is becoming cheaper & easier to do so.

What do you think? Any that you’d disagree with?

IUI Example – eBay

I wanted to call this post ‘Reducing Decision Fatigue’, but the reality is that most of the posts I’ve written here could have that title! 🙂 As cited in my recent post about Design Principles, I think a core principle of IUI is to help people make smart decisions quickly.

One of the great papers at the 2017 AAAI (Association for the Advancement of Artificial Intelligence) Spring Symposium was ‘Communicating Machine Learned Choices to E-Commerce Users’. It was written by a bunch of folks at eBay… and the basic premise was that you could use machine learning to help guide people through a long list of products by grouping them based on the attributes (new vs. used, seller rating, etc.) most relevant to the purchase decision for a given product… but doing so required making good design decisions.

The abstract:

When a shopper researches a product on eBay’s marketplace, the number of options available often overwhelms their capacity to evaluate and confidently decide which item to purchase. To simplify the user experience and to maximize the value our marketplace delivers to shoppers, we have machine learned filters—each expressed as a set of attributes and values—that we hypothesize will help frame and advance their purchase journey. The introduction of filters to simplify and shortcut the customer decision journey presents challenges with the UX design. In this paper we share findings from the user experience research and how we integrated research findings into our models.

They started by analyzing historical transactions to identify the inherent value placed on specific attributes, and classified them as “global” or “local”. Global attributes are ones that are common across products (e.g. condition) and local attributes are ones that are specific to a subset of products (e.g. the OS version of an Android phone). Some local attributes actually replace the global attributes (e.g. ‘Rating’ for baseball cards replaces ‘Condition’).

They then came up with something they called the ‘Relative Value’ of an attribute, which basically looked at the premium that shoppers paid for a product given the value of that attribute (e.g. a returnable item vs a non-returnable item).

In the above image, we see the higher price paid when an item is returnable.
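The paper doesn’t publish the exact formula, but a back-of-the-envelope version of “relative value” might look something like this – the data below is entirely made up:

```python
import pandas as pd

# Hypothetical sold listings: price plus one attribute of interest.
sales = pd.DataFrame({
    "price":      [120, 95, 130, 88, 140, 90],
    "returnable": [True, False, True, False, True, False],
})

# The premium shoppers paid, on average, when the attribute holds.
by_attr = sales.groupby("returnable")["price"].mean()
relative_value = by_attr[True] / by_attr[False] - 1
print(f"Returnable items sold for {relative_value:.0%} more on average")
```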

They then went on to review behavioral signals, to determine which attributes were “sticky” and which were “impulsive” during a shopper’s decision-making process. Sticky attributes are ones where buyers stick to a specific value (or range) throughout their purchase journey significantly more than random chance would dictate. Impulsive attributes are ones that correlate with impulsive transactions (a short view trail before purchase).
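The paper doesn’t spell out the statistical test behind “significantly more than random chance”, but it suggests something like a permutation check. Here is a toy sketch of that idea, with invented view trails for the ‘condition’ attribute:

```python
import random
from collections import Counter

def stickiness(trail):
    """Share of a shopper's views that match their most common value."""
    return Counter(trail).most_common(1)[0][1] / len(trail)

# Hypothetical view trails (one list per shopper) for "condition".
trails = [
    ["new", "new", "new", "used", "new"],
    ["used", "used", "used", "used"],
    ["new", "used", "refurb", "used", "new"],
]
observed = sum(map(stickiness, trails)) / len(trails)

# Chance baseline: shuffle all views across shoppers many times and
# measure how "sticky" purely random trails of the same lengths look.
pool = [v for t in trails for v in t]
sizes = [len(t) for t in trails]
baselines = []
for _ in range(1000):
    random.shuffle(pool)
    fake, i = [], 0
    for n in sizes:
        fake.append(pool[i:i + n])
        i += n
    baselines.append(sum(map(stickiness, fake)) / len(fake))
chance = sum(baselines) / len(baselines)

print(f"observed stickiness {observed:.2f} vs. chance {chance:.2f}")
```

If the observed stickiness sits well above the shuffled baseline, the attribute is a good candidate for a filter.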

Once they identified the attributes that really mattered, it then came time to figure out how to design the experience… and there were three parts that they covered:

  • Filter Naming – how to communicate understandable and compelling filter titles
  • Filter Overlap – how to communicate that filters are not mutually exclusive
  • Filter Heterogeneity – how to communicate why eBay is displaying unrelated filter sets in close proximity

For the filter naming, each filter could include one or more attributes (global or local), and they were constrained by the need to identify each filter with a human-readable name. For example, for products where people prefer buying things that are new, want the flexibility of returns, and are wary of overseas shipping, they had a theme named ‘Hassle Free’.

Then the usability testing began, where they tested a variety of titles – from “emotive & engaging” to “simple & descriptive”. They discovered a few things:

  1. People overwhelmingly preferred simple titles.
  2. Item condition was the first reference frame most people locked into.
  3. People found longer titles, especially those with compound filters, difficult to understand.

They landed on option B, the descriptive titles split over two lines.

One of the Design Principles that I recently wrote about was ‘Developing Trust’, so it was really cool to see the following:

User study participants also expressed low confidence in our recommendations when the inventory covered using ML filters was smaller than that of search results. For example, when the value based filters are concentrated on one or two attributes, significant inventory may be left out. We addressed this concern by taking inventory coverage into consideration in our ML research.

They then go on to say…

We also added navigation links for shoppers to explore the entire inventory beyond our recommendation, which has helped us gain users trust in our recommendations. These links to “see all inventory” also provide easy access to listings not highlighted by our filter-sort formula, in support of cases where a shopper’s ‘version of perfect’ went undetected by our analysis.

This is such a cool example of leveraging machine learning to help people make decisions.

What do you think?

IUI Example – Google Flights


Google just rolled out a new feature to Google Flights that is pretty cool: they are now predicting if your flight is going to be delayed. As I’ve mentioned previously, I love to travel and really dig any innovation that will improve my travel experience.

As I read about this, I couldn’t help but recall a presentation I saw by Aparna Chennapragada, VP of Product at Google, during an O’Reilly AI conference.

All systems imperfect — there will be a precision / recall tradeoff in almost any system that you rely on. But what you want to pay attention to, as a practitioner, is the cost of getting it wrong. Let me give you an example. Let’s say that you’re building a search system and you return a slightly less relevant article in a search result… it’s not the end of the world. But then let’s say that you build a local search product, where you inform the person searching that, yes, Home Depot is open, you should go now. Then, the person gets in the car, goes to Home Depot, and it’s closed, and they say “What the Hell?”. The cost of doing that, the cost of getting that wrong is higher.

She then gives the example of when they were building Google Assistant…

When we were working on the Google Assistant, and we say, hey, your flight is on time, you can leave right now and it takes 45 minutes to get to the airport, and then you go to the airport and you miss the damn flight and can’t speak at the conference, then the “What the Hell” is much higher.

There are a number of reasons a flight can be delayed or cancelled:

  • Mechanical Issue with the plane
  • Weather (at both the departure as well as the destination airport)
  • Late inbound aircraft
  • Crew
  • Etc.

What Google seems to be doing is tracking the inbound aircraft, by gate number – if a flight to, say, New York is supposed to depart at 8:21 PM and the incoming flight to that gate is delayed, there is a great chance that the New York flight will be delayed. I’m sure they are doing more than that; they probably have tons of historical data and some good algorithms that take things a little further.
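To make that concrete, here is a toy version of the inbound-aircraft heuristic, combined with Aparna’s point about the cost of getting it wrong: only surface a “delayed” prediction when confidence is high, because a false alarm could cost someone a flight. The 45-minute turnaround and the 0.8 threshold are my assumptions, not Google’s actual numbers:

```python
from datetime import datetime, timedelta

MIN_TURNAROUND = timedelta(minutes=45)  # assumed minimum time at the gate

def predict_delay(scheduled_departure, inbound_eta, confidence):
    """Flag a delay only when the inbound plane can't turn around in
    time AND we're confident enough -- a false 'delayed' call could
    make someone miss their flight, so the bar is deliberately high."""
    earliest_departure = inbound_eta + MIN_TURNAROUND
    if earliest_departure > scheduled_departure and confidence >= 0.8:
        return f"Likely delayed until ~{earliest_departure:%H:%M}"
    return "On time, as far as we can tell"

dep = datetime(2019, 1, 15, 20, 21)  # the 8:21 PM New York departure
eta = datetime(2019, 1, 15, 20, 5)   # inbound aircraft lands at 8:05 PM
print(predict_delay(dep, eta, confidence=0.85))  # Likely delayed until ~20:50
```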

As a side note, each one of these causes is well known, and airlines have operational departments to deal with issues as they arise. I even read a great book a few years ago – ‘A New Approach to Disruption Management For Airline Operations Control’ – that went into detail about a proposed multi-agent, intelligent system to improve operations. I also talked about the Smart Airport system in a recent post.

The big takeaway here is that when you’re building things like this, it’s really critical to understand the costs of being wrong and what it means to the person using it!

What do you think?

Defining the Intelligent part of Intelligent User Interfaces

Since I’m talking about “AI” here, and took the time to define IUI, I figured it was probably worth some time offering my thoughts on AI…

Let me start off with a disclaimer: I’m not a machine learning engineer. I’ve never written a line of production AI code… I say production code on purpose, because the truth is, I have written some code that I consider to be “smart”, but more as a hobby. I actually began my career back in 1998 as a software engineer and took an interest in AI quite some time ago. I moved from software development to design about 5 years later and haven’t written production software since about 2003.

I first started digging into AI back in the early 2000s, and the book that served as my introduction was ‘Constructing Intelligent Agents using Java’. The principles that I learned from reading that, and from playing around with some of the sample applications, really became the foundation of my understanding of AI. I’ve built a few little things along the way, mostly just hobby projects – a little spam detection app, and a news reading app that grouped similar stories (so I didn’t have to see 10 different versions of the same “Apple announces new iPhone” story).

As someone who conceives and designs products for a living, I think it’s really helpful to have a solid understanding of what is possible with technology, so I’ve made sure to stay up to date with what is happening in the field as much as possible.

The way that I think about it is, there are really two kinds of “AI”, sometimes referred to as Narrow AI and Strong AI.

Narrow AI applications are optimized for a single problem or domain. As an example, there is a company that makes a system called Smart Airport; the domain is airport logistics – this is narrow AI aimed at optimizing the flow of an airport (planes, people, etc.). Just think about the domino effect of one delayed plane, a bad snowstorm, or mechanical issues with a plane; a smart system can run through all the scenarios of how to recover much faster than humans can. Narrow AI can be incredibly smart; however, if you tried to use a system tuned to the operational efficiency of an airport to run the logistics of a different domain, say a sports venue, it would probably fail miserably.

Strong AI, on the other hand, is what most people think of when they talk about AI: “human-like intelligence”. As an example, we weren’t born with the knowledge of how to use the Internet, but we learned how to use it. That was not “programmed in”. Nor were we born with the knowledge of how to design things; we learned that. Strong AI, or “General Intelligence”, is best characterized by its ability to learn to operate in any domain.

Recently there has been a lot of talk about Deep Learning, and the way that I like to think about this is that it falls somewhere between the two.

DeepMind, in case you haven’t heard of them, is an artificial intelligence company that Google acquired. They developed a deep learning system that actually learned how to play – and win – video games. It started with some of the classic arcade-style video games many of us grew up with, like Pac-Man. Okay, you’re probably thinking: so? Who cares? Well, what made it so astonishing is that they didn’t teach the system the rules of the games; they just let it play until it learned the rules on its own. One of the games it learned to play was Boxing, and not only did it learn how to win, it learned how to optimize winning: it found, on its own, that you could pin the opponent in a corner and run up the score. At the time Google acquired them, it had mastered about two-thirds of the 35 or so games it was learning to play.

Fast forward to 2015, and DeepMind accomplished what most people thought was an impossible task for artificial intelligence: it beat a human champion at Go. In case you’re not familiar with it, Go is an ancient Chinese game in which you place stones on a 19-by-19 board and capture your opponent’s stones by surrounding them. The rules are very simple, but they give rise to a complex, subtle game.

There are a number of articles online that describe why this accomplishment is such a big deal, but the simple explanation is that unlike chess, which computers conquered using brute-force algorithms, Go is different. Every single move in Go gives rise to vastly more possible responses. If the average move in chess could be met by one of about 35 responses, in Go the “branching factor” is about 250. To give you a sense of what that means: if you want to think 2 moves ahead in chess, there are about 1,225 positions to consider (35 x 35). In Go, it is about 62,500 (250 x 250), and three moves ahead would be 15,625,000 (250 x 250 x 250).
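A couple of lines of code make the explosion obvious:

```python
# Positions to consider when looking ahead k moves, given an average
# branching factor b. Chess is merely huge; Go is astronomical.
for game, b in [("chess", 35), ("go", 250)]:
    for k in (2, 3, 4):
        print(f"{game}: {k} moves ahead ~ {b**k:,} positions")
```

By four moves ahead, Go is already at about 3.9 billion positions versus chess’s 1.5 million, and a full game runs a couple of hundred moves.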

Winning at Go requires a sort of intuition about the board, something that a champion develops. This kind of intuition is something that brute-force algorithms just can’t deal with.

Based on what I wrote above, you’d think that DeepMind was Strong AI, because it learned on its own and made decisions to achieve its goals, and both of those things are true. However, the test of whether deep learning is really Strong AI is whether or not it could function in other domains. Could the same deep learning system that learned to play & win those video games also run a supply chain? I’m not sure; my suspicion is that it couldn’t. Therefore, my guess is that it falls somewhere between Narrow & Strong (but honestly, I’m not really smart enough to know for sure).

Just to be complete here, there is actually another category, called Super Intelligence, which is really a natural evolution of General Intelligence: if AGI can learn, then why can’t it learn to improve itself? This is what all the commotion has been about over the last couple of years, with people like Elon Musk warning about our impending AI doom.

Back in 2014, I had the good fortune of attending ICRA, the International Conference on Robotics & Automation. One of the workshops I attended was the Workshop on General Intelligence for Humanoid Robots. The guy who organized that is a pioneer in the field of General Intelligence, Ben Goertzel. During his presentation he said something that has really stuck with me:

…so, AI includes, conceptually, making systems that are intelligent like C3P0, HAL9000 or a thousand times smarter than any human being. The field of AI also includes “Expert Systems“ for say, medical diagnosis, that just goes through a hand coded list of rules or say a neural net control system for a self-driving car, which is highly specialized for that type of car. AI is a very big umbrella, so it’s not particularly clear where AI leaves off and algorithms begin, is there really a big difference between all the algorithms in an AI textbook and the algorithms in an algorithms textbook? Drawing borders between disciplines is not the most interesting thing, the world cuts across all the disciplinary boundaries anyway.

Ben Goertzel

That notion that it’s all just algorithms has always been something that I’ve kept in mind when talking about “AI”.

At the end of the day, for the purposes of this blog, I’m going to consider something intelligent as long as it loosely conforms to the definition I laid out in my first post here (What is IUI?):

Improving the acumen, acuity, and productivity of people by applying computational intelligence to experience design.

What do you think?

IUI Example – Kayak

This post is incredibly personal: I love to travel. Just look at my profile on Twitter…

Anyway… 🙂

One of the biggest questions someone has when they’re looking to book a flight is whether the price will go up or down, in other words, should they buy now or wait?

Kayak offers people an answer to this question with a little indicator “OUR ADVICE”.

PURCHASE ADVICE ON KAYAK

If you read my recent post on IUI Design Principles, the very first one was “Raise People’s Acumen”:

Acumen is roughly defined as the ability to make good decisions, quickly. Where a principle like this works really well are for things like analytical tools. As an example, if you’re designing a dashboard, think about the decisions that someone would make with the data and figure out how you can enable them to make better decisions, faster. Another way that I’ve written this principle is “Help people make smart decisions quickly”.

This is so perfect. They are answering that critical question of whether or not to buy now.

But they don’t stop there. They also follow one of my other main principles for building Intelligent User Interfaces, “Be transparent, the real job is developing trust”:

If a machine is going to do something or make a suggestion for a person, they should have the ability to see how that output was chosen. Look for ways to provide some transparency in the system that gives people trust & confidence.

They put a little ‘i’ icon that people can click to see some of the detail behind the advice.

This is so brilliant.

I’m not sure the explanation is quite as robust as it could be, however…

Let me explain…

This incredible little innovation didn’t originate at Kayak. The company that invented it was actually called Farecast, which Microsoft acquired in 2008… and, shockingly, they don’t offer this when you search for flights on their site.

One of the reasons that I know that the explanation could be better is because I know some of the history of Farecast.

Farecast was founded by Oren Etzioni, a computer science professor at the University of Washington. He came up with the idea back in 2002 when he was on a flight and learned that the people sitting next to him paid much less for their tickets simply by waiting to buy them until a later date. So he had a student go try to forecast whether particular airline fares would increase or decrease as you got closer to the travel date. With just a little bit of data, the student was able to make pretty accurate price predictions on whether someone should buy or wait.

From there, Etzioni built Farecast. It was just like other online airfare search sites (OTAs), with one major addition: an arrow that simply pointed up or down, indicating which direction fares were headed.

The company, which was originally named Hamlet and had the motto “to buy or not to buy”, was built using 50 billion prices that it bought from ITA Software (which was acquired by Google in 2010). ITA is a company that sells price information to airlines, websites, and travel agents, and has information for most of the major carriers. When Farecast bought the data from ITA, it didn’t have prices for JetBlue or Southwest, but it could indirectly predict fares for those carriers based on fares from the carriers it did have pricing data for.

Farecast based its predictions on 115 indicators that were reweighted every day for every market. They paid attention not just to historical pricing patterns, but also to a number of other factors that shifted the demand or supply of tickets – things like the price of fuel, the weather, and non-recurring events like sports championships… anyone buy their tickets for Qatar 2022 yet? 🙂
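Not Farecast’s actual model, of course, but the shape of the problem is easy to sketch: turn each observed fare into a handful of indicators and learn whether fares like it dropped before departure. Everything below – the features, the fake labeling rule, and the data – is invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(1, 90, n),  # days until departure
    rng.random(n),           # current price relative to historical average
    rng.integers(0, 2, n),   # big event at the destination? (0/1)
])
# Fake rule standing in for years of fare history: fares far out,
# priced above average, with no event tend to drop later.
y = ((X[:, 0] > 21) & (X[:, 1] > 0.5) & (X[:, 2] == 0)).astype(int)

model = LogisticRegression().fit(X, y)

query = [[30, 0.7, 0]]  # 30 days out, a bit pricey, quiet week
will_drop = model.predict(query)[0]
print("OUR ADVICE:", "WAIT" if will_drop else "BUY NOW")
```

Swap in 115 real indicators, reweight them daily per market, and you start to see why that ITA data purchase mattered so much.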

As mentioned at the start of this post, I love travel… and there are some other travel examples I’ll be sharing in upcoming posts (including one from Google Flights).

What do you think? Any examples you can think of leveraging IUI for travel??

The 5 Reasons Design Deliverables Still Matter

Picture of a design space from a project I worked on

I’m going to go a bit off topic here, but still very much design related…

One of my favorite articles of this past year was the wonderful (manifesto?) “The only thing that matters” by Josh Clark. I found it via the UX Design Weekly newsletter from Kenny Chen.

The basic idea he argued for was that highly polished design artifacts – wireframes, user journeys, etc. – aren’t as valuable as they may seem. The only thing that matters is the deliverable itself: the product. Those artifacts still have a place, he says, but the place to spend your time is on the product itself, not creating a bunch of pretty design documents.

I found myself nodding my head in agreement continuously as I was reading the article. I think he is spot on in so many ways.

There is much to love about the sentiment here. The artifacts are a means, not an end. We get paid for the product! We get paid for shipping something! SO true! The goal is to make the right thing, not produce a bunch of stuff that no one will ever look at again…

As much as I agree with Josh and nearly everything he outlined, I still think there are a number of reasons why some highly polished design deliverables still matter. This isn’t a rebuttal to what he outlined; it’s more to advocate for an “AND”… to say that there are indeed some reasons for producing some of those highly polished design deliverables…

1.) Synthesis

Experience Map I built for a project I worked on

I still recall the very first Experience Map that I created. I’m not sure that any one of my stakeholders spent more than a few moments looking at it; however, creating it helped me synthesize my understanding of the problem domain. One could argue that you don’t need a highly polished artifact for this, and that’s true, but the “picture” enabled me to quickly zoom into an area that I wanted to focus on. If I was trying to understand the “plan” part of the experience, I knew exactly where to focus my attention.

As I was writing this, the thing that kept running through my head was:

Creative design seems more to be a matter of developing and refining together both the formulation of a problem and ideas for a solution, with constant iteration of analysis, synthesis and evaluation processes between the two notional design ‘spaces’ – problem space and solution space. In creative design, the designer is seeking to generate a matching problem-solution pair, through a ‘co-evolution’ of the problem and the solution.

Or, to simplify that, how does one ensure that they’re “making the right thing”?

The following picture illustrates what I consider to be a very basic design process.

My basic design process

What I want to call out here is the Discovery / Synthesis / Problem Framing loop. The reason I call that out so explicitly is that I think we often get pushed into design too quickly, and I’m a big fan of making sure we’re solving the right problem. There is a line I heard once that I really love: we teach people how to solve problems, but not how to find the right problem to solve. Synthesis, for me, is where that work begins.

2.) Thoroughness

Taking the time to create these helps ensure that I don’t miss anything. If I’m creating a Persona, as an example, the template causes me to spend time considering each of the sections. Yeah, I get that this doesn’t mean the output has to be polished, but why not spend the extra few minutes? I’m sure I’m not the only one that has a template for personas, and the delta between creating one in a spreadsheet and creating one that looks nice is about 15 minutes… well worth the time if you ask me… which leads me to the next one…

3.) Evidence

We did the work, we explored the problem space, and here is the proof. Chances are there are some people on the team whose contribution would be invisible without these “highly polished artifacts”. If you’re selling services to a client, or trying to convince your boss you need to hire another design researcher, why would anyone believe those people are necessary without some evidence? I get that I’m being a bit hyperbolic here, and at the end of the day the thing that matters is the working software (or whatever it is). However, unless the organization you’re working for has a high degree of design maturity, I think some of these highly polished artifacts are extremely valuable as evidence of the work that was done.

Dan Brown has written some of the best stuff out there on design deliverables… including a really great book, Communicating Design.

4.) Validation

They say that a picture is worth a thousand words, and along those lines, showing someone a nice journey map or providing a quick, interactive prototype, instead of making them read through a spreadsheet or word doc is a better experience and helps validate direction. There is an old saying that I really love here:

  • Tell me, and I may hear you.
  • Show me, and I may see it.
  • Involve me, and I’ll completely understand.

Giving someone a prototype and letting them click through can yield some great insights. I’m sure we can all recall some design idea we’ve fallen in love with that didn’t quite hit the mark once people saw it. For me, I want to learn about these types of things as cheaply as possible.

I’m really a huge fan of what Josh proposes in terms of writing code early and favoring working software over prototypes; it really cuts down on the back & forth between design and engineering that happens when we throw designs over the wall (if we’re still doing that).

Interestingly, I recently read ‘Creative Selection: Inside Apple’s Design Process’, and one of the main ideas of the book was that the most important thing you can do is demo your stuff – let people see it and try it, early & often! There is no substitute for feedback!

5.) Portfolio

As funny as this may sound, I think these design deliverables are an important part of a design portfolio. It’s great to see the finished product, but showing the process someone went through to come up with the idea, and the deliverables that were generated along the way, is, to me, just as interesting… it’s a way to get an understanding of how someone thinks, how they approach a problem.

Again, I really love and agree with the ideas Josh proposed, and think we need to move much closer to a just-in-time, collaborative design process! I recently did this on a project, where we brought in a UI Engineer to start building the designs as they were emerging, and it worked out beautifully! Instead of handing off wireframes, we handed off working front-end code, and our usability testing was much more effective because we could code in some conditional logic…

What do you think? Have you changed your design process to be more collaborative? I’d love to hear your thoughts!

IUI Example – AirBnB

One of the principles that I covered in my recent post on IUI Design Principles was “Reduce Cognitive Load”.

Part of the inspiration for that principle was an article I read a few years ago about a pricing tool that AirBnB built for people listing their properties.

The problem, they had discovered, was that critical moment when a person is trying to decide what price to charge when listing their property:

In focus groups, we watched people go through the process of listing their properties on our site—and get stumped when they came to the price field. Many would take a look at what their neighbors were charging and pick a comparable price; this involved opening a lot of tabs in their browsers and figuring out which listings were similar to theirs[clip] …some people, unfortunately, just gave up

Dan Hill, Product Lead @ AirBnB

There is really so much to love about the discovery of that insight. At the risk of stating the totally obvious, if people don’t list their properties, the supply side of the market starts to erode, which isn’t good. 🙂

AirBnB isn’t the only company that has to deal with pricing, obviously! I’ve spent a lot of the last 20 years in the Consumer Goods / Retail domains; think about the number of products on the shelf at your local grocery store – those prices were all set by a person.

The awesome thing is how they solved this, and it is a great example of a truly intelligent interface! They built a machine learning framework called Aerosolve and used it to provide intelligent pricing recommendations to people listing their properties.

They aren’t the only company using algorithms to set prices. Think about ride-sharing apps like Uber, Grab, or Go-Jek; prices change based on different variables: distance, weather, and demand (give or take). But those are pretty simple – all known quantities that, over time, generate a tremendous amount of historical data… e.g. how much more are people willing to pay when it’s raining? In other words, price elasticity.
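Price elasticity sounds fancy, but the core calculation is just the percentage change in demand divided by the percentage change in price. A toy example with invented numbers:

```python
# Rides completed at two price points on comparable rainy evenings.
# All numbers invented for illustration.
price_a, rides_a = 10.0, 1000  # baseline fare
price_b, rides_b = 13.0, 850   # surge fare on a similar night

pct_price_change = (price_b - price_a) / price_a    # +30%
pct_demand_change = (rides_b - rides_a) / rides_a   # -15%
elasticity = pct_demand_change / pct_price_change
print(f"elasticity ~ {elasticity:.2f}")  # -0.50: demand is fairly inelastic
```

With enough history, a ride-sharing service can estimate this for every city, hour, and weather condition.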

Setting prices for a property on a site like AirBnB is a little more complicated. In a technical paper that Dan Hill wrote (which is where I got the quote above), he covers both some of the history of Aerosolve and some of the challenges in building it.

In addition to all the normal things you’d expect to be factors, like number of rooms, WiFi, seasonality, and so on, there are some other interesting factors that come into play. It turns out that the number of reviews plays a large part in pricing; people are willing to pay more for a listing with good reviews (which seems obvious in retrospect).

But what about big events?

Source AirBnB

Consider SXSW, as depicted in the above graphic, or the World Cup. Those are the obvious big examples, but there are events in cities all the time that don’t get that kind of press, and AirBnB has to account for those as well.

It’s interesting to note here that the origin story of AirBnB was that Brian Chesky came up with the idea for the site when he wanted to attend a design conference, realized that all the hotels in SF were sold out, and decided he could pay for his ticket to the conference by renting an air mattress in his apartment to someone who wanted to come to SF but couldn’t get a hotel room.

One of the other really interesting things is the way they had to deal with geographic boundaries for properties. An early version of their algorithm simply drew a circle around a property, and considered anything within that radius a “similar listing”, but what they discovered was that simplistic view had a serious flaw…

Imagine our apartment in Paris for a minute. If the listing is centrally located, say, right by the Pont Neuf just down from the Louvre and Jardin des Tuileries, then our expanding circle quickly begins to encompass very different neighborhoods on opposite sides of the river. In Paris, though both sides of the Seine are safe, people will pay quite different amounts to stay in locations just a hundred meters apart. In other cities there’s an even sharper divide. In London, for instance, prices in the desirable Greenwich area can be more than twice as much as those near the London docks right across the Thames.

We therefore got a cartographer to map the boundaries of every neighborhood in our top cities all over the world. This information created extremely accurate and relevant geospatial definitions we could use to accurately cluster listings around major geographical and structural features like rivers, highways, and transportation connections.

Dan Hill
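The difference between “circle around a listing” and “hand-mapped neighborhood polygons” is easy to see in code. A minimal sketch using the shapely geometry library, with completely invented coordinates standing in for the two banks of the river:

```python
from shapely.geometry import Point, Polygon  # pip install shapely

# Invented rectangles standing in for two neighborhoods across a river.
left_bank = Polygon([(0, 0), (4, 0), (4, 2), (0, 2)])
right_bank = Polygon([(0, 2), (4, 2), (4, 4), (0, 4)])

listing = Point(2, 1.9)    # our listing, firmly on the left bank
candidate = Point(2, 2.1)  # a "nearby" listing just across the water

def same_neighborhood(a, b, neighborhoods):
    """Comparable only if both points fall inside the same polygon."""
    return any(hood.contains(a) and hood.contains(b)
               for hood in neighborhoods)

print(listing.distance(candidate) < 0.5)  # True: a naive radius says "similar"
print(same_neighborhood(listing, candidate, [left_bank, right_bank]))  # False
```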

What a great example of improving the experience people have using a tool by applying computational intelligence.

One of the things I’m going to cover in an upcoming post is how to discover opportunities for IUI.

In the meantime, I’d love to hear what you think about what AirBnB are doing! Thoughts?

IUI Design Principles

I’m a huge fan of principles, and usually try to define them for every project I work on.

As I’ve worked on “smart” applications, like Recommendation Engines and other intelligent apps I’ve found a few that seem to recur and thought that I’d share. This is by no means exhaustive nor will all of these be applicable for every project… however, here are a few that might be a useful starting point:

Raise people’s acumen

Acumen is roughly defined as the ability to make good decisions, quickly. Where a principle like this works really well are for things like analytical tools. As an example, if you’re designing a dashboard, think about the decisions that someone would make with the data and figure out how you can enable them to make better decisions, faster. Another way that I’ve written this principle is “Help people make smart decisions quickly”.

Reduce Cognitive Load

This is really just a rewording of Jakob Nielsen’s age-old heuristic “don’t make people remember information”, but I really like adding some dimension to it (“reduce”), suggesting that it’s measurable.

Support the transition from Creation / Authorship to Review / Approve

As machines get smarter, their ability to create or author content grows. Computers are writing articles, generating images, and creating music. Where we used to build tools for people to create things, the future will require us to think about interfaces for people reviewing & approving content created by machines.

Be transparent, the real job is developing trust

If a machine is going to do something or make a suggestion for a person, they should have the ability to see how that output was chosen. Look for ways to provide some transparency in the system that gives people trust & confidence. This one really does tie nicely to the one above, about supporting a transition from creating to approving.  I’ll share some great examples of this in some upcoming posts.

Strive to provide moments of delight

One of the areas that I’m particularly interested in right now is Discovery: as the amount of content (music, movies, apps, etc.) has grown, how do people find things that they’ll really love? I think there are some great opportunities to leverage computational intelligence to delight people with recommendations.

What do you think? Any you’d like to add?

IUI Example – Intelligent Remote Control

Picture the remote control from your TV / Cable / Satellite provider.

Chances are, the image in your head is similar to the image most other people come up with. We all know what a remote control looks like. There are rows and rows of buttons, some bigger than others, some with alphanumeric characters, some with symbols, some round, and some rectangular. There is something for power, volume, changing the channels, and a whole host of functions that you probably use very infrequently. Remote controls haven’t changed much in years… they are what they are.

I really began thinking about remote controls back in 2011 after reading the really good book Simple & Usable by Giles Colborne.

In the book he outlines Four strategies for simplicity, and he does so by describing an interview exercise that he runs job applicants through. What he does is ask them to “simplify” the remote control for a DVD player.

Back in 2011, most people probably still had a DVD player, and this exercise presents some tricky problems.

Typically, a DVD remote has about forty buttons, many have more than fifty, and, as Giles suggests, that seems excessive for a device that is used to play and pause movies. When something is that complicated, there should be plenty of scope for simplifying it. But the task turns out to be harder than you’d imagine.

Before he reveals his strategies, he suggests people go off and try the exercise themselves, and offers a template to work from. He has even posted a couple of examples of solutions that people came up with.

Giles outlines four basic categories that all the solutions he’s seen fall into.

  • Remove – get rid of all the unnecessary controls until the device is stripped back to its essential functions
  • Organize – arrange the buttons into groups that make more sense
  • Hide – hide all but the most important buttons behind a hatch so that the less frequently used buttons don’t distract people
  • Displace – create a very simple remote control with a few basic features and control the rest via a menu on the TV screen, displacing the complexity of the remote control to the TV.

Some people, he says, do a little of each, but everyone picks a primary strategy. Each has strengths and weaknesses, and he says that those four strategies work whether you’re looking at something large, like an entire website, or something small, like an individual page. He goes on to describe each of the four strategies in more detail, and says that a big part of success comes from choosing the right strategy for the problem at hand.

Here is where I’ll let those people who are interested in learning more about those strategies go get the book.

If you own an Apple TV, you’ll notice Apple really embraced the displace strategy. Their remote is really nice; same with Roku and other modern device makers – they have a simple device that displaces most of the functionality to the screen.

Those are nice, but there is a company that thought there might be a better way to solve this problem, and part of their app includes some IUI.  

The company is named Peel, and they built a smart remote control. They didn’t follow any of the four strategies that Giles outlined; they got rid of the remote altogether and put it on smartphones and tablets. They’re obviously not the only company to do that part – Logitech did the same thing, as did others, including some TV manufacturers and cable providers.

What makes the Peel remote so interesting is that they completely reimagined what a remote control could be. They brought the content down to the device, so it isn’t just a bunch of buttons with alphanumeric characters on it; they actually display imagery for the show, like the poster art for a movie or channel logos for networks.

Although a few years old now, a report from Nielsen back in 2016 indicated that the average consumer watches only about 19 total channels, or about 10% of the channels available to them. From an intelligence standpoint, it wouldn’t take long for a system to learn the ~20 channels someone watches regularly and make those the primary channels displayed in the interface, as sketched below.
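The “learn the top channels” part is almost trivially simple – a frequency count over tune-in events gets you most of the way. The log below is invented, and a real system would also weight by watch time and recency:

```python
from collections import Counter

# Hypothetical tuning log: one entry per channel-change event.
tuning_log = ["ESPN", "CNN", "ESPN", "HBO", "ESPN", "CNN", "AMC",
              "ESPN", "HBO", "CNN", "ESPN", "Food Network"]

# Surface the handful of channels this household actually watches.
top_channels = [ch for ch, _ in Counter(tuning_log).most_common(20)]
print(top_channels[:5])  # ['ESPN', 'CNN', 'HBO', 'AMC', 'Food Network']
```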

But Peel goes even further, they add smart recommendations.

Instead of making people channel surf, they actually make recommendations of shows to watch. I’m not really sure of the efficacy of those recommendations, as I’m not much of a TV person (some news, some soccer, stream movies, etc.). But regardless of how good the recommendations are today, it’s hard to argue with the UX of the Peel remote, and recommendations can always improve. Don’t get me wrong, I’m not suggesting that recommendations are easy – I’m sure Netflix has spent a ton of money on this, including their $1M competition – but consider what recommendations mean in the context of a remote control…

Think about the access they have to a lot of behavioral information: what channels people tune into, how often they change channels, recurrence (same channel at some repeating interval), etc.

As a simplistic example, they know that for the last few weeks, on Monday night, a person has changed the channel to ESPN at roughly the same time. So, about 30 minutes before that time, they could display some graphic for what is coming on when that person normally tunes in. It wouldn’t take long for a smart system to learn about the seasonality of sports and stop suggesting it when that “program” was no longer on. A toy version of that recurrence detection is sketched below.
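Here is a toy version of that recurrence detection. The history is invented, and real signals would be much noisier, but the idea is just counting (hour, weekday, channel) slots:

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical history of (timestamp, channel) tune-in events.
history = [
    (datetime(2019, 1, 7, 20, 0), "ESPN"),    # Monday nights...
    (datetime(2019, 1, 14, 20, 5), "ESPN"),
    (datetime(2019, 1, 21, 19, 55), "ESPN"),
    (datetime(2019, 1, 16, 21, 0), "HBO"),    # a one-off
]

def weekly_habits(events, min_weeks=3):
    """Find (hour, weekday, channel) slots that recur -- candidates for
    a 'coming up in 30 minutes' suggestion. Times are rounded to the
    nearest hour so 7:55 and 8:05 land in the same slot."""
    slots = Counter(
        ((ts + timedelta(minutes=30)).hour, ts.weekday(), ch)
        for ts, ch in events
    )
    return [slot for slot, count in slots.items() if count >= min_weeks]

print(weekly_habits(history))  # [(20, 0, 'ESPN')] -> Mondays around 8 PM
```

Seasonality falls out of the same bookkeeping: when the Monday-night slot stops recurring, the suggestion quietly goes away.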

Both of those require only a bare minimum of intelligence, but I think they still qualify as an intelligent user interface.

What do you think?