Tuesday, 14 July 2015

Career Advice and That Jason Post

Last week Jason Calacanis wrote this post, and I sent a kind of expansion of it to my team.  A few of them came back and told me I should post it, so here goes...

This one stood out from the general noise in ‘career advice tweetstorms’ because, when I read it, I thought to myself holy shit that is exactly how I think of people and exactly how I know my boss thinks about me. That is pretty important intel for anyone hoping to achieve meteoric growth in my kind of company, so I think it is worth you each internalizing what it means for you. Let me break it down a little more first, into some more grounded practicalities:

1 and 2 are items I consider pretty immutable, so just do it. Besides - this isn’t work. If you’re in the right career then this is fun, you’re passionate about it, and you eat it up 24x7 whether you’re paid for it or not. It doesn’t feel like work. Achieving mastery in your craft is its own reward. However I acknowledge that not everyone can be in a job they find personally fulfilling and enlightening and, if you are in a product or technology role just because it pays well, then I’m not going to get all preachy about your motives. Just keep in mind that if you want to stay in that role and keep growing the rewards over time, then you need to do this just as much as (and maybe more than) those who are pursuing their passions. Because they will leave you behind, and they will out-compete you for the best roles.

3 is where Jason and I are going to disagree or – at best – there is a subtlety we agree on for which ‘startup’ is useful shorthand. Where I think we might philosophically agree is that you need to get somewhere where you can be individually visible (not buried in a huge team of homogeneous ‘resources’) and your ability to step up and take on more, to exceed the normal boundaries of your primary responsibility, is not structurally constrained. Big, mature organizations tend to be set up such that the system of production (roles/structure, process, inputs and outputs) is defined in the abstract, and pursued ahead of unique or especially talented individuals who may not fit easily into any one predefined box. And, because these things tend to become more rigid over time, it is difficult to exceed one’s personal remit in a constructive way. Pick a business in a growth industry, and pick a team which is big enough to do cool stuff but not so big as to require Vogon-like bureaucracy, and pick a boss who values utilizing (and stretching) individuals where their passions and aptitude converge over having everyone nicely fit into a tidy box with the ‘right’ label on it. Startups are like this out of necessity – that’s why I can agree that ‘startup’ is a compact way to communicate this kind of sentiment – but they don’t have a monopoly on this. It can be a sustainable lifestyle choice in any phase of a business.

Item 4 is one of the characteristics I have seen in almost every high potential high performer I have had the pleasure of looking after. To what Jason already has there I would add two things. First, working hard and taking on more doesn’t mean being in the office 24x7. It just means intensity, urgency, focus, and prioritization. I won’t personally give you any points for being in the office any longer than me – in fact I am slightly more likely to wonder if you need a little extra support. The second thing is to know your cake from your icing, as a really fabulous CEO I worked for a long time ago told me. Go after more, take on more, over-deliver unexpected surprises, but never at the expense of your core responsibilities. The reason your primary role exists at all is because a lot of customers and colleagues rely on you delivering on time and to a high quality. That’s your cake. And if your cake starts to suffer for the sake of more icing – all the extras – that’s what marks the difference between a high performing, high capacity individual who is obviously in need of a promotion and an irresponsible slacker who doesn’t understand the business needs and is letting the team down.

I work with a few companies at different stages of growth, and I wish I saw a little more of item 5 everywhere I go. You should actively look for chances to do this, not wait until you’re asked to do a brown bag session or something. The fastest path to true mastery of anything is to have to teach it to another. Or, in career advice terms if you prefer: unless there is someone around who is as good as you at what you currently do, this will eventually become a blocker to your personal advancement. Giving you a bigger role is important but creates a difficult Rubik’s Cube-esque puzzle; giving you a bigger role when you have a solid, practiced succession plan ready and waiting becomes a no-brainer.

6 is another JFDI, yup. This one I would enhance with Postel’s Law. If you practice a kind of ‘human equivalent’ of the robustness principle then you will shut down negativity instead of amplifying it, and de-escalate potential conflict. ‘Assuming positive intent’ is such a powerful tool for keeping everything constructive and feeling awesome about yourself and others.

I’d read 7 as never be reluctant to ask for things. And not just rewards; also resources, opportunities, mentoring, training, chances to join senior forums, a place in special projects. Whatever. And I know this town is equity-crazed, and equity is certainly nice, but it isn’t the only way to be rewarded and it isn’t the only path to wealth. Regardless, you should be fairly compensated for the value you create. Amid all today’s rhetoric about leaning in and whatever, just keep in mind that the prerequisite to this is to actually demonstrate your value through real results, consistently over time, first. Similar to my addendum on number 4, this distinguishes the merited from the ‘participation award’ crowd who want it all just handed to them for showing up. You own your career.

8 is table stakes in product. You’re here to define an alternate future, where your company is better tomorrow than it is today. Startups are just one killer feature around which the rest of a business and a complete product emerge, so all the same instincts and behaviors will carry you to success in any product-led business, whether or not you might call it a startup.

So that’s some career advice I think everyone can take something away from. I believe that people achieve more and feel more passionate about the product (and stick around longer!) when they can see how doing so is helping them grow and taking them closer to their idea of success for themselves. That’s why this shit matters to me and it matters that my leadership team take it equally seriously for all my people.

Friday, 16 January 2015

The power and the pitfalls of test and learn

My core philosophy on product is that product/market fit is a journey; it is a function of discovery and learning over time, and it is impractical to expect to be able to make a set of ideal decisions up front (i.e. in advance of any product development and real feedback from real customers).

We now live in a world where the fast eat the slow, and I am going to argue that learning is speed:

At Hotwire we’ve dramatically improved our business performance in the last year by focusing on learning as a primary goal.  It’s a first order metric, along with the ‘hard’ KPIs most businesses are familiar with (revenue, room nights, activations, etc) but - as any good teacher will tell you - measuring learning is extremely hard to do.  Fortunately for us, there’s another signal which is both easy to observe and highly correlated with learning: experimentation.

Whenever you see a high rate of improvement (in nature and in science) you will almost always see a high rate of experimentation.  There’s a great Thomas Edison quote which goes something like: “None of my inventions came by accident. I see a worthwhile need to be met and I make trial after trial until it comes.”  Most of our early learning as human beings is heavily experiment driven; it is interactions with the world that teach us how to behave effectively within it.  But my favorite story about the value of pursuing learning as a primary goal is the Kremer Prize:

In 1959 Henry Kremer, an industrialist and patron of early aviation, established a prize of £50,000 for the first human-powered aircraft to achieve a controlled flight.  Hundreds of attempts were made over nearly 20 years with no success, and most of them came from well-informed experts: aviation companies, universities and the like.  The prize went unclaimed until 1977, when Paul MacCready won it with his Gossamer Condor.  The secret to his success wasn’t being smarter than any of those who previously attempted flights, or knowing something they didn’t; it was that his approach to the problem was fundamentally different.

Most teams spent months designing and building their craft, then they took it out to a field or an airstrip to try it out, then they crashed, then they swept all the pieces up and returned to the hangar for another few months to rebuild.  MacCready focused not on the one airframe that would work, but on a cheap construction which was easy to assemble and disassemble by hand in the field.  Using this he was able to try out more designs in a single day than all the preceding teams combined had managed in the entire previous year.

His formula for success had two main features: variation and repetition.  You can see this in nature too; natural selection tries out variations of organisms over and over again (optimizing to the ultimate KPI - life itself!) as those organisms improve their suitability to their environments.  While we’re not being strictly mathematical here, Fisher’s theorem is a useful formalization of this (and of the other examples we’ve discussed).  It goes something like this: “The rate of increase in fitness of any organism at any time is equal to its genetic variance in fitness at that time.”  Or, more simply:

The capability to try out more hypotheses at any given time is highly correlated with faster improvement.
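
To make that concrete, here is a toy simulation in Python of what MacCready was exploiting.  The numbers are entirely made up and this is not a model of any real experiment programme; it simply shows that the more variations you can evaluate per build-and-test cycle, the faster your best-known design improves.

```python
# Toy hill-climb: each cycle tries some random variations around the current
# best design and keeps the best result. All figures are hypothetical.
import random

def best_fitness_after(cycles, variations_per_cycle, seed=42):
    rng = random.Random(seed)
    best = 0.0
    for _ in range(cycles):
        # Try several variations of the current best in this cycle.
        candidates = [best + rng.gauss(0, 1.0) for _ in range(variations_per_cycle)]
        best = max([best] + candidates)  # keep whatever worked best so far
    return best

for n in (1, 5, 20):
    print(f"{n:>2} variations per cycle -> best fitness {best_fitness_after(50, n):.1f}")
```

Run it and the twenty-variations-per-cycle run finishes far ahead of the one-variation-at-a-time run, which is the whole point of cheap, repeatable trials.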

So learning is speed, and number of experiments is a useful approximation of learning.  But before you just count all a/b tests etc, there are some subtleties to success here:

The first thing I like to watch out for is confirmation bias.  As you embrace test and learn and make experiments cheaper to run you lower the bar for organizational participation.  This is unquestionably another benefit - harnessing the innovation of a larger slice of your org - but not everyone is as academically disciplined as you’d hope, and there is an underlying human tendency to like our own ideas and see them in a less-critical light than the ideas of others.  A while ago I was fortunate enough to be able to spend some time with Alan Kay, and he told me that science is the process that stops people from falling in love with their own ideas.  This is more than just a principle; if you come up with a hypothesis which you’re trying to prove instead of disprove, you will tend to discount contradictory evidence (i.e. proof points that suggest customers do not like it) and waste a lot of time trying marginally different manifestations of the same core idea in the desperate hope that you can somehow make them love it.  You lose speed, not gain speed, this way.
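
One practical guard-rail is to make the default question “could this just be noise?” rather than “how do I show it worked?”.  Here is a minimal sketch in Python, with made-up conversion numbers, of a plain two-sided two-proportion z-test - a check that actively tries to explain an uplift away before anyone celebrates it.

```python
# Two-sided two-proportion z-test on hypothetical A/B conversion counts.
# A large p-value means the observed difference is easy to explain as noise.
from math import sqrt, erf

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Convert |z| to a two-sided p-value via the normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical experiment: control converts 500/10,000, variant 540/10,000.
print(f"p-value = {two_proportion_p_value(500, 10_000, 540, 10_000):.2f}")  # ~0.20: not proof of a winner
```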

A fun way to look at this is to examine the difference between scientific thinking and religious thinking.  Jack Cohen is another wonderful human being, with a vast amount to teach the world, with whom I had the pleasure of spending a little time.  He argues that in religious thinking, all that matters is how hard you believe, especially in the presence of contradictory evidence (it’s there to test your belief), while in scientific thinking, all that matters is how hard you doubt, especially in the presence of confirmatory evidence (easy answers are there to trick you into overgeneralizing your observations).

People will naturally come up with ideas that they like, but good product hygiene is about coming up with ideas that your customer likes.  Rigor in testing is the ultimate arbiter which can make that distinction clear to you, if you let it.  If you’re not open to being wrong - in fact expecting it and actively seeking to make it so - then you cannot, by definition, learn.

Another big-picture mistake is confusing iteration with learning.  It’s a common problem in the whole ‘transition to agile’ world; break up a big-up-front-plan into a number of predetermined phases, label them ‘iterations’ and then cash in your huge cheque as a profound agile coach.  Starting with the same inflexible ideas and delivering them across a higher number of software releases has some benefits in terms of quality and system risk, but it does nothing to improve your product/market fit.  You determined the level of product/market fit at the beginning of the project and you have not improved that fit over the whole, say, year you were working on it, whether it ships as one single release at the end of the 12 months or as a whole series of incremental drops every two weeks.  Learning is about regularly shipping something customers can touch, then watching their interactions with that thing, looking for where it enriches and where it detracts, and only then deciding on the final scope for the next iteration.  The point is that each iteration should reflect the learnings from the previous iteration and improve upon it with respect to the customer experience, and therefore cannot be rigidly determined in advance of that feedback.

To be inclusive and stimulate the organizational pathways for innovation, a pretty broad filter is useful in the beginning.  All ideas are valid, and anything can be reduced to a testable hypothesis.  But, as you execute on a test and learn backlog, selecting ideas which are both coherent with the existing product and carry a higher probability of resulting in an improved experience for the customer becomes important.  Think back to our Kremer Prize example; while MacCready focused on the ability to explore a large number of variants, he was not randomly experimenting with geometry in the hope that he got lucky and found something that would fly.  He was an engineer who ran an aerospace company and he had a detailed understanding of all the mechanics involved in lift and control etc.  In user experience terms that means figuring out what customers are sensitive to, and using those things as inspiration points for ideas.  On the internet these sensitivities are rooted in behavioral science (decision theory, nudge theory etc) and are often things like social signals (how many others have bought the same item or are currently viewing this page), urgency messaging (this is a popular item or this deal expires soon), and recommendation (if you like x you might like y).  Discovering what these sensitivities are for your particular product can get you more ideas aligned to the physics of your particular business.
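
As a sketch of what that filtering can look like in practice (the ideas, probabilities, and effort figures below are all invented for illustration), you can rank a test-and-learn backlog by the customer sensitivity each hypothesis maps to and its estimated chance of paying off for the effort involved:

```python
# Crude backlog ranking: estimated chance of a win per unit of effort.
# Every idea, sensitivity label, and number here is hypothetical.
backlog = [
    {"idea": "show how many people booked this hotel today",
     "sensitivity": "social proof",   "p_win": 0.30, "effort_days": 3},
    {"idea": "countdown timer on every page",
     "sensitivity": "urgency",        "p_win": 0.10, "effort_days": 5},
    {"idea": "similar hotels you might also like",
     "sensitivity": "recommendation", "p_win": 0.25, "effort_days": 8},
]

for item in sorted(backlog, key=lambda i: i["p_win"] / i["effort_days"], reverse=True):
    score = item["p_win"] / item["effort_days"]
    print(f'{score:.3f}  {item["idea"]}  ({item["sensitivity"]})')
```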

When talking about ways to increase the likelihood of ‘winning’ tests I always like to reiterate the value of ‘losers’ too.  A losing test is essentially an idea someone had for a new product or interaction which, without the experiment disproving the hypothesis, would have resulted in a project that would have taken away valuable product development resources for no (or negative) return.  What did you save right there?  How much customer attrition did you avoid by staying away from that unpopular option?
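
A back-of-the-envelope way to put a number on a ‘loser’ (every figure below is hypothetical) is to add up the build cost you avoided and the conversion damage the test caught before it shipped at full scale:

```python
# Hypothetical accounting for one losing test.
engineers, weeks, weekly_cost = 4, 8, 4_000           # the project you now won't build
build_cost_avoided = engineers * weeks * weekly_cost  # 128,000

annual_revenue = 50_000_000                           # hypothetical business size
measured_conversion_drop = 0.01                       # the test showed a 1% hit
revenue_loss_avoided = annual_revenue * measured_conversion_drop

print(f"one losing test saved roughly {build_cost_avoided + revenue_loss_avoided:,.0f}")
```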

There is much more to doing this well - a whole series of posts wouldn’t do it justice - but I like to focus on the quality of the thinking first.

So does learning == speed?  At least in the special case of how rapidly you can get to the right product/market fit and grow, it certainly does here.  Is the number of experiments a good heuristic for organizational learning?  If you chart the improvement in our real business performance and the number of concurrent tests over time, you see very similar growth curves, just as Fisher’s theorem predicted...

Saturday, 31 May 2014

A brief lesson on company culture and local optimum

"Culture governs what happens when the CEO isn't in the room" - or something like that - is one of my favorite little soundbites.  Like all the most profound things I collect I'm not responsible for its genesis and, after a brief google search, I can't definitively pin down a source.  So I'm in the academically-uncomfortable zone of having to leave a quote uncredited... shiver...

I believe in principles and purpose guiding the actions of independent thinkers, rather than in prescribed activities and traversing an entirely predetermined course to an arbitrary long term goal (which discourages learning and responding to the environment).

The behavioural norms established in any collection of individuals will either enable or constrain that freedom.  That is why company culture is important.

At this point it is probably worth noting that I'm not making an argument for one leadership philosophy being inherently superior to another; I think there are many organizational pursuits which are well suited to the repeatable (i.e. low variation by design) and predictable activities yielded by closely micromanaging highly prescriptive processes.  I simply argue that culture happens, whether you actively choose it or not, and you always benefit from understanding your nature and actively creating a culture within which it will be the healthiest.  In the creative and scientific fields where I usually operate, I have a formula that works; it is based on learning loops, some good old AMP, and not trying to tell people smarter than I am what to do.

Like most people, I found that my first few executive roles were the first chance I had to run things however I wanted, to really test my values in the real world without compromise.  As a potentially unsafe generalisation, you pretty much get to set up your department/division/whatever however you see fit.

And so I did.  And I wrought most excellent, high performing teams, who were quick to adopt and enrich ideas and then turn those ideas into some remarkably successful products.  And we all loved every second of our time together, and we always left on Friday smarter than we arrived on Monday.

That was the outcome I'd hoped for, and I was glad to see some of the principles I held dear proven in the dispassionate and indifferent real world; where the strength of feeling you have for your ideas has zero influence on how effective they are.

But that wasn't the lesson:

I had created these cultures as microcosms within larger organizations which had dissimilar global cultures.  I was fooled for a long time - because it worked so well with only a little friction at the edges - but the real test is what happens when you're no longer there to perpetuate it.  To keep this kind of microcosm going within an incompatible host requires constant force to protect the values and establish/defend the space for creativity and (critically) failure.  You can do it, and you'll achieve what you want to achieve, but all systems normalize over time and without the constant force entropy kicks in fast.

One of the things leadership is about is creating lasting change, a journey that is bigger than yourself and continues with or without you, so if you're serious about culture and you want to make a sustainable difference, then you need to infect the host (or just be the CEO!).  You can use your microcosm to prove the effectiveness of a different set of behaviors but, if you can't cause that change to ripple outwards, then it will most certainly revert when you take yourself out of the situation.

And, no matter how new you are in a role and how difficult it is to imagine moving on, you will.  Unless you suck.  Great people always move onwards and upwards; it is so with your teams (and if you're a good boss then you will encourage and enable it) and it is so with yourself (and if you have a good boss he or she will encourage and enable it).

Think about what you'll leave behind, and how that will be perpetuated without your influence.

Friday, 22 November 2013

Natural Learning vs Institutional Learning

A while back I was lucky enough to spend some time with Roger Schank after a conference he spoke at.  More and more I am finding that a thorough appreciation of human learning, cognition, and memory is essential to the study of AI.  His books, especially Scripts, Plans, Goals, and Understanding and Teaching Minds, are timeless essentials.

His core messages at the event were around how to improve our education system, something I take every opportunity to contribute to, and the contrast he drew between how people actually learn and how we structure education was really arresting as a way to pose the core problem:

Natural learning is:
  • Voluntary
  • Interest driven
  • Goal driven
  • Depends on failure
  • Fun
Which, on the surface, certainly resonates.

Institutional learning is:
  • Involuntary
  • Based on the school's goals, not the individual's goals
  • Individuals' interests ignored
  • Failure seen as bad
  • Not fun
I know a lot of people whose school experiences support at least a handful of these.

Makes you think.  I guess I will be joining that PTA after all - although you need to be a P first!

Saturday, 17 August 2013

The business value of technology

There's already plenty of material on articulating the value of technology in a business sense, but it tends to be quite - I don't know - corporate I guess, and most of it focuses on the justification of a particular framework or product.  That might be quite appropriate in an IT environment, but it's less helpful in more visceral engineering endeavors.

So what's the justification for good method, design, and computer science?  How do you map all that academic grounding to real business concerns?  A while back I made a handy reference guide for my business buddies:

I don't believe that's comprehensive, but I do believe it's prototypical.

Sunday, 6 January 2013

AI and Travel

There’s been a bit of coverage recently about our Natural Language Processing search beta, but all that’s being talked about is the semantic search element.  The journey we’re embarking upon is much more ambitious than that, so I want to take a few minutes to fill in the blanks.

NLP is an important ingredient to this product, but it is not the product.  The ‘product’ is a goal-oriented artificial intelligence specialized for solving travel retail problems.  We need natural language only to provide a human-like interface into that intelligence.


We’re modeling our AI on the human-human interactions that travel agents have with real people.  But first let’s talk a little more about the concept of agency.  ‘Agency’ has a few meanings, the most important one here being an actor able to interact with the world.  We develop agents ourselves every day – subprocesses of the mind which are trained to take unsupervised control of complex tasks for which we have developed some proficiency.

Driving a car is an example most of us can relate to.  When you first start, you have to consciously direct all your actions.  Hands at 10 and 2, check the mirror, engage the clutch, watch the speedometer.  After a few years (hey – I’m a slow learner) you develop what you probably call the ‘knack’ for it and you can drive around listening to music or holding a basic conversation.  Those things you had to think so carefully about have receded back from your conscious focus, delegated to a specialized agent who frees up your attention for other things.  You can use that comfortably in any ‘like’ scenario – ie you don’t need to develop a new one when you exchange your Toyota for a Honda.  Neat mental tool and fundamental to learning.


What we’re trying to do at Expedia is mimic this feat of human intelligence with machine reasoning, to give the level of personalized service and helpful, relevant support that a customer would receive from a real, live agent.

That’s why what we’re doing here is so much more than a semantic search service; it is more like a conversation which enables a customer to start with their intent (a beach holiday, a romantic break, cheap ski vacation etc) and, through an iterative exchange of ideas in question-answer format, end up with the most suitable travel arrangements made.
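
To make the shape of that conversation concrete, here is a deliberately tiny sketch in Python.  The slots and questions are invented for illustration - this is not our actual system - but it shows the loop: keep asking until you have enough of the traveller's intent, then hand that intent off to retrieval and ranking.

```python
# Minimal intent-gathering loop with hypothetical slots and questions.
QUESTIONS = {
    "trip_type":  "What kind of trip is this (beach, ski, city break...)?",
    "budget":     "Roughly what budget per person?",
    "dates":      "When would you like to travel?",
    "travellers": "How many people are going?",
}

def plan_trip(answer):
    """answer(slot, question) -> str is supplied by the caller / UI layer."""
    intent = {}
    for slot, question in QUESTIONS.items():
        intent[slot] = answer(slot, question)   # iterative exchange of ideas
    return search_inventory(intent)             # hand off to retrieval and ranking

def search_inventory(intent):
    # Placeholder for the real retrieval and ranking step.
    return f"best matches for {intent}"

if __name__ == "__main__":
    canned = {"trip_type": "beach", "budget": "1500", "dates": "July", "travellers": "2"}
    print(plan_trip(lambda slot, question: canned[slot]))
```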

This isn’t a straightforward journey.  I was recently lucky enough to spend some time with Dr. Steven Pinker discussing this at length, and we concluded that we understand the “A” but we don’t understand the “I” so this kind of project is always part research.  You have to be optimistic to be a computer scientist!

But search can only really get a little bit better before we have to make the leap to AI.  For society, this is the move from easy, simple access to information to the delegation of problems to agents.  Perhaps now is the right time to touch on the bigger picture, what the future might look like:

The future of search

You won't see this page anymore.  The whole search space will be superseded by a network of generalized intelligences and specialized intelligences, and search engines like Google [as you know it today] will become the back end for that network, no longer a user-facing experience.

Specialized intelligences will know how to solve specific problems – they'll have what we call domain expertise – such as changing a washer in a tap, making a candle, or – ahem – planning a vacation.  They'll know how to organize loads of dissimilar data and services into the logical relationships which allow us to achieve those tasks and only those tasks.  Generalized intelligences will be responsible for marshaling these specialized intelligences, so that we don't even have to keep an index of the specialized guys.  So any time you want something you'll consult your generalized intelligence which acts kind of like your e-majordomo; interpreting your wishes, dividing them up into tractable problems, finding solvers and delegating problems to them, then assembling the answer which carries the most confidence and presenting it back to you in human language.  Kind of like how 'people' organizations work today – there are specialists who can undertake specific tasks for you and generalists who can route you to the right specialist (and sometimes have some supervisory function).  It is a pattern that we're used to.
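
Here is a minimal sketch of that generalist/specialist split, with invented solvers and confidence numbers: the generalist asks every registered specialist how confident it is that it owns the problem, delegates to the most confident one, and hands the answer back.

```python
# Toy generalist that routes a problem to the most confident specialist.
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class Specialist:
    name: str
    can_handle: Callable[[str], float]   # 0..1 confidence it owns this problem
    solve: Callable[[str], str]

def generalist(problem: str, specialists: List[Specialist]) -> Optional[str]:
    scored: List[Tuple[float, Specialist]] = [(s.can_handle(problem), s) for s in specialists]
    confidence, best = max(scored, key=lambda cs: cs[0])
    return best.solve(problem) if confidence > 0.5 else None

plumbing = Specialist("plumbing", lambda p: 0.9 if "tap" in p else 0.0,
                      lambda p: "pipe wrench, 1/2 inch washers, how-to guide")
travel   = Specialist("travel",   lambda p: 0.9 if "holiday" in p else 0.0,
                      lambda p: "flights plus a hotel shortlist for your dates")

print(generalist("I have a leaky tap", [plumbing, travel]))
```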

Example; you're going to change your spark plugs (because you have a classic car – we'll all be on hydrogen fuel cells by then!) so, assuming you're not an expert mechanic, you'll first look up the general principles – disconnect HT cables, unscrew old plugs, set gap on new plugs, screw in new plugs, reconnect HT cables.  Then you look up the specific details for your vehicle – Haynes manual kind of stuff – how to remove the rocker cover for easier access, how to check the cam timing etc, and then you get your tools and parts together.  You need to go and find the right plugs (obviously) but also need the correct size socket driver and gaskets and grease etc.  That's quite an assembly of information and collecting of items etc from [potentially] lots of different databases and shops.  Or you could just pose the problem to your personal AI and head straight out to the garage.  Perhaps you’ll also be receiving step-by-step instructions, via your HUD, overlaid in real time on the engine itself as you look at it.

This is a long-term view.  It will happen piecemeal, with sites gradually becoming more intelligent and starting to offer experiences which allow you to pose your problem, rather than hunt out information and evaluate it yourself.  Imagine an Amazon where you could just say “I have a leaky tap” and (perhaps after some Q&A to narrow down the problem) you’d be shown a pipe wrench, a pack of 1/2 inch washers, and some DIY guides showing how to apply those tools without flooding your kitchen.  Right now it shows me a book called “Death and Other Things” by Christopher Hall and a mains powered household gas detector.  Today the onus is on me to know that I need a pipe wrench etc and go looking for those items individually, accumulating as I go.

We kind of already do this.  Back to our car analogy for a second, the complex system that is the modern motor vehicle already contains a number of these agents today.  When I was young (oh no – I have become my parents) my first car had manual everything.  I had to change gears, which meant developing a feel for torque and engine speed.  I had to set the choke, which meant developing an awareness of fuel/air mixture, and I had to switch the radiator fan on and off, forcing me to have explicit knowledge of engine temperature.

In other words, driving a car used to require many more proficiencies than it does today.  The mechanical complexities are managed for us by clever (well, just clever enough…) homunculi so all we need to do is point it in the right direction and push one lever to go faster and another to slow down.

Back to travel

The most important question to ask about any advancement in technology is how it will improve the human experience.  A future in which specialized intelligences take away more of our common problems is essentially connecting intent to effect by fewer intermediary steps.  Appeals to the lazy in all of us.

From our adventures in machine learning specifically, we expect to be able to benefit the traveler and the travel business with:

  • A better user experience; expert guidance through travel planning instead of imposing a significant research burden upon the traveler.
  • A more intuitive interface which can be entirely cohesive across dissimilar devices.
  • Higher conversion rates on sites and apps.
  • Faster support for travelers in-trip, delivered at a lower cost.

To do this well, any machine learning algorithm needs a body of knowledge to train it.  The more comprehensive that body of knowledge, the more confidence you can have in the answers an AI will produce.  Expedia is the world’s largest travel company and has been selling to and supporting millions of travelers for over 15 years.  That experience is captured in petabytes of data generated by instrumenting every aspect of the travel experience.  This puts us in a really good position to do something meaningful for the web travel universe with artificial intelligence, as well as to contribute to the machine learning and machine reasoning disciplines.

That’s what our innovation labs programme is all about: benefiting our partners and contributing to our profession.

Sunday, 29 July 2012

A Few Notes on Leadership

I was recently asked to talk to one of our teams as part of a series they're doing on leadership, taking a selection of different execs from across the business and getting their take on what it is really all about.  Leadership is a topic which resists a definition that is both concise and comprehensive, so I usually prefer to pick out a small selection of what it encompasses and deep dive on those.  Here's what I picked out for them:

Prevent entropy.  Entropy is universal and for anything to progress (or even continue to exist) we need to continually add energy to that thing.  Groups, projects, initiatives, morale, culture, all these things lose momentum when someone isn't adding energy to them, and it is a primary responsibility of leadership to be the dynamo at the center of the important things.

Configure the environment.  Make it OK to do great things, take the friction out of forward progress, teach autonomy, and empower people without making them feel abandoned.  It is a primary responsibility of leadership to establish an environment which encourages talent and quality and is hostile to impediment and waste.

Expand minds.  Not just add new knowledge but also help people to change their mental models of the world around them; to think differently and be able to conceptualize new ideas.  Experimentation, reasoning and analytical skills, and critical thinking are force multipliers for technologists.  It is a primary responsibility of leadership to change what people are capable of, to make their organizations about more than just turning the handle the same way.

Set purpose.  Everyone needs a purpose, and I mean a higher purpose, which transcends job description.  The reason you're really here gives you the confidence to make better decisions.  For example, my guys aren't just here to write some code or hack away at the Linux kernel; they're here to change the way people build web travel apps.  The better you get at describing what that means and why it matters for customers, the more independent and powerful your team can become.  It is a primary responsibility of leadership to favor being highly descriptive over being highly prescriptive, because it creates the space for teams to contribute at a whole other level, not just follow orders, yet still be totally on strategy.

Hire up.  Any great human endeavor is bigger than all of us and can only be accomplished by valuing a smart, talented team over valuing being the smartest, most talented person in the org.  It is a primary responsibility of leadership to constantly raise the bar and bring in people who make you look dumb.

But the most significant thing I think (hope!) this team learned from their research is that leadership is bigger than any one leader.  By now they will have heard a variety of different views from the cross-section of leaders they invited to talk to them.  Humans like to be able to rationalize complex things down into a single answer, or at least a small set of non-conflicting statements, and in a way I hope that they haven't been able to do that here.

The message here is that a large organization is more like an ecology than an individual organism, and therefore needs a healthy dose of heterogeneity to thrive.  As a leadership team we're capable of things that we individually could not be because we leverage that diversity; the different style and substance we each bring intersects enough for cohesion and extends enough to afford us a synoptic view.

Sunday, 8 April 2012

How Alan Kay changed my mind

I was recently fortunate enough to spend an evening with Dr Alan Kay, followed by a half day talk, followed by 2 whole weeks of slow burn cognition for it to really sink in.  Here are my lecture notes, because I recognise the rare gift that is time with someone like Alan and I think the world just might be a better place in some small way if I share it with you.

Firstly, a disclaimer.  I don’t intend to give a comprehensive synopsis of the talk; I’m not sure I could even do it justice.  I intend to highlight some of the points which really spoke to me, and the consequent thinking that they started up in my head.

New vs. News

Alan’s talk was titled New vs. News and was more of a journey than a discourse, covering immense ground without meandering into unrelated territory.  At its core was innovation and invention and how to tap into that je ne sais quoi which leads to something truly different.  More importantly, it was about how to get people to recognise what you have created and get it adopted.

The theory is that news - something incremental to what we already have today - is easy for most people to grok because of context.  We already know so much about the general subject it builds on, and what’s being introduced is easy to understand relative to that.  Example; google instant.  We know all about searching the web, a lot of us have even been ‘trained’ into a new syntax to coax better results out of that magic little window; I mean come on, the word google has even become a verb!  Getting an ever changing list of those tasty results back in real time as we typed was pretty easy to get our heads around.  We also possess the necessary passing appreciation of the other prerequisite concepts, like speed and relevancy and refining.

New on the other hand takes imagination and vision to appreciate, because we lack the tools necessary to understand it - we have no common understanding of something else so similar that we can readily make comparisons and build on that knowledge.  Example; the humble mouse.  When that inseparable companion of the GUI was first rolling around PARC desks in the early 70s it was hard for people to see the application.  Computers were big scary things which occupied whole rooms, were fed punched cards or magnetic tape, and standard out was still something very mechanical indeed.  What would one ever really point to?  Or click on?  We have the opposite problem now - it is hard to imagine a time when mousing around wasn’t core to the experience of computing.  Luckily, nearly 10 years later, a guy you might have heard of incorporated these concepts into the computers he wanted to build, and the rest is history as they say.

“If I had asked people what they wanted, they would have told me faster horses.”  Probably not actually said by, but frequently attributed to, Henry Ford.

My favourite takeaways

  • Argue more, debate less.  Arguing is a constructive process of trying to find out and to illuminate.  Debating is trying to win.  Organisations need to take a hard look at their behaviour here; as soon as you form an ‘organization’ you create a set of agendas which are irrelevant to problem solving.  A group that can put all that aside - along with their personal preferences on the matter - and have rational discussions is powerful indeed.  I have always liked this sentiment from a Tom Clancy novel (believe it or not): “The mark of intellectual honesty is the solicitation of opposing points of view.”
  • Science is the structure which solves the problem of people loving their own ideas.  You have to want to engage in a process which puts some structure into how you conceptualise a problem, explore the ideas, and figure out the best solutions.  Science is that framework.  As a scientist myself I always identified with the ‘study of/organised knowledge’ definitions, but I really like this one because it speaks to how you apply science and why you apply science.  To remove noise.
  • Further to defining science; in some follow-up with me, Alan pointed out yet another way to look at this: the idea of ideas.  It is a disciplined way to free yourself from today, from the constraints of How It Works now.  It is a shame that "thinking outside the box" has become a cliche - this is the prerequisite of thinking outside the box.  It is acknowledging that there is a box, so that one is able to zoom out enough to see the container one was in, and then start to discover what's been excluded by that container.
  • 7 ± 2 and how people think.  George A. Miller’s theory which deals with how people absorb information.  Wikipedia explains this more eloquently than I will, so I will stick to emphasis.  Effectively communicating ideas is a critical skill in science so, along with strong writing and presentation, a solid understanding of how people learn and how brains process input is essential knowledge.
  • Learn to think.  Professional tennis players practice their game for 8 hours a day, so what training do we do to expand our mental toolkit and keep it sharp?  We probably think that we do this a lot - because we’re always thinking - but that stuff is playing Wimbledon, not the preparation and training and learning you need to do to make sure that you’re good enough when you get there...
  • Almost all ideas are mediocre to bad, which is why you get no points for them; points are awarded for the successful execution and adoption of one.
  • IQ is less important than you think.  In terms of what you’re able to actually achieve, the impact that you can have on the world, context [coupled with being smart enough] is far more important.  Example; Leonardo da Vinci invented an incredible number of machines, vehicles, siege weapons, ways to automate industrial tasks, but was unable to manifest any of those creations.  Not because he wasn’t smart enough, but because of the environment.  Back in the late 1400s and early 1500s the world around da Vinci lacked the rigid construction materials and the understanding of chemistry etc which needed to exist to be able to build and to supply motive power to those inventions.  Conversely, it took Karl Benz in 1885 to build a practical, working motor vehicle and Henry Ford in 1914 to shape mass production and make it accessible.  Clever men indeed, but were they da Vinci smart, or did they have an environment around them which had solved the right collection of primitives for them to build on?
  • Convenience is a seldom recognised barrier to progress.  Most people really struggle to give up something near and convenient in order to reach something else bigger and far more beneficial but further away.  Twinkies for you insiders!
  • User experience and the 250ms timeout.  A quarter of a second is how long it takes for a brain to get bored - to have seen an image and processed and interpreted it and be ready to act on the meaning - and feel like it ought to be doing something.  Even if that something is make-work (such as superfluous navigation) because that improves the perception of speed.
  • TEMS.  Tinkering Engineering Math Science.  A kind of sophistication curve societies move through when they start to play with technology (and haven’t most of us moved through this in our education and careers too?)
  • Don’t lose sight of your mission.  Companies always start off with a mission, but can make the mistake of becoming too attached to a specific instantiation of that mission and then start to believe that was the real mission all along.  Example; railroad companies in the US started off in the transportation business, using the technology at the time (rail).  They became attached to rail to the exclusion of progress elsewhere and were outflanked by competitors that they hadn’t even considered to be competition (air and road travel).

Threads this started for me

My overall impression was of how much more there is yet to do in my chosen field.  Invigorated.  And a little bit humbled by the distance yet to travel.  And specifically;

  • So much of the world is just point solutions.  Tiny increments on what we already do and already know.
  • Humans like stories, we’re just wired that way.  Things that can be wrapped up in a narrative are much more digestible by most people.
  • Simplicity is always worth the continual investment and stretch that it requires to achieve.
  • Having a vision in technology is 90% imagination and 10% science; then the job is to transform that into 10% imagination and 90% science so that it becomes buildable today.
  • You have to invest so much more of yourself than you think into understanding the problem if you’re really going to solve for it (or even know what you’re solving for).
  • Good science is timeless.  I have met a lot of other entrepreneurs/innovators/inventors (in the ‘have a patent’ sense of the word) in my few short years on this planet and the majority of them have been lucky tinkerers.  They usually have an interesting story which matches the times but no fundamental underlying lesson which could be applied anywhere.  I have always believed that a rational scientific approach can take anything from point A onward, anytime and in any situation, because it transcends circumstances and specifics.  And this is exactly what was proven out to me through Dr Kay’s talk, in terms more eloquent than I am capable of reciting.
  • And finally; it was eye opening to see how much of this was about humans - the intersection of anthropology and computer science.  If you truly understand people and technology then that’s when you’ll be able to really change the world, not just build things.


That all took 2 weeks to cook in my head.  I hope it is consumable and I hope it starts threads in you that lead you somewhere else too.  Something Dr Kay reiterated several times in his half day session was that his message - the specifics he was trying to communicate to us - was not nearly as important as how that message makes you think for yourself.  And that makes a lot of sense when you reflect upon it from a distance.

Special thanks to the man himself for kindly validating my notes (oh yeah, and for radically expanding my horizons, that too I guess).

Saturday, 3 December 2011

Eachan's 5 Laws of Platform Architecture

Architecture - the shape of the system, the patterns used - is by far the most meaningful thing to get right in any system.  Architectural decisions have far more influence on the capabilities and limitations of any given system than the implementation decisions - frameworks and languages - ever can.

This effect is hugely magnified when you're building a platform, because you're creating tools and raw materials for other developers, and your decisions can enable them to easily build apps any way they can imagine or limit their options to a handful of pre-configured choices.

Side note here; the internet is becoming full of platforms.  It has never been easier to programmatically consume services on the web, even through interfaces originally only intended for humans, and combine existing data in new ways to create whole new experiences.  You might put an app out there for customers, which you consider to be a finished product, and then discover that someone else is using your app as a building block in their app.  One man's front end becomes another man's back end.  I think this is a good thing for momentum and creativity; allowing new takes on current ideas to be spun up and explored quickly and cheaply.

But if you intend to build a platform, if that's what you're explicitly setting out to do, then there are some design guidelines which will lend it greater commercial utility.  Not patterns per se, more like principles, which I'm going to call the 5 laws of platform architecture:

1st Law of Platform Architecture - the value of a platform is directly proportional to the amount of work it causes to be unnecessary.

Most web businesses are more similar than they are different.  There is a set of web basics they all need (identity, profile, cart, payment, analytics...) and there is usually a set of line-of-business basics that most of them have in common (in my case that's things like inventory, price, geography, booking, weather...) and that's an awful lot of code that we're all writing.  If a platform solves the 'common problems' via a handful of simple web service calls, then partners using said platform can put the majority of their time and passion and investment where it will make the biggest difference to their business: building the unique features which distinguish them from the rest of the marketplace.  I like to ask "what don't my partners have to do because we already did it?" and I like to have a long list of things to put into that bucket.
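
To illustrate the 1st law, here is a sketch against an entirely hypothetical platform - the base URL and endpoints below are invented, not a real API.  The partner writes none of the identity or inventory plumbing; their effort goes into the one function that makes their product different.

```python
# Hypothetical partner app built on a fictional platform API (invented URLs).
import requests  # assumes the requests library is installed

PLATFORM = "https://api.example-travel-platform.com/v1"

def build_ski_weekend_page(session_token, resort_id):
    # All of this is work the partner didn't have to build themselves.
    user = requests.get(f"{PLATFORM}/identity/me",
                        headers={"Authorization": session_token}).json()
    hotels = requests.get(f"{PLATFORM}/inventory/hotels",
                          params={"near": resort_id, "sort": "price"}).json()
    # ...and this is where the partner's real time and passion goes.
    return render_ski_weekend(user, hotels)

def render_ski_weekend(user, hotels):
    return f"{len(hotels)} ski deals for {user.get('name', 'you')}"
```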

2nd Law of Platform Architecture - ease of adoption maps directly to business momentum.

Most serious platforms are monetised in the same handful of ways, and in most cases the sooner you can have partners up and running, the sooner you're counting revenue from them; ergo, time to market for your partners is a lever in your own P&L.  If you focus on ease of adoption you will bring on more partners, they will each 'go live' quicker, and it is cheaper than more traditional incentives such as offering improved commercial terms in exchange for faster rollouts.  The kinds of things you ought to consider here go beyond the simplicity of the services you expose (which must be easy for the most pedestrian of developers to grok - in the platform game you are penalised for externally-observable complexity) and into how much you invest in documentation, SDK and client code, and developer community support.  These things pay you back.

3rd Law of Platform Architecture - entrypoint depth multiplies the business flexibility of any given service.

Higher layer abstractions always bake in business logic.  That's what a composite service really is; someone decided that for use case alpha, you always do X then Y then Z, and therefore aggregating calls out to services A and B and C solves that problem in a single call.  You have just created a simpler solution for alpha - which is an admirable achievement and it belongs in your platform exposed to everyone else who has use case alpha.  But let's say I have use case alpha prime?  A basically similar business process but with key differences that no longer require data from B?  That's why your underlying services belong in your platform too; if a client has some business logic which maps quite closely to the higher layer abstractions then that's great orchestration for that app.  If a partner wants to do something a little different, then he might have to consume those composites and do a bunch of his own transformations on the data, perhaps even store the data himself, and then present it to the client apps through his own interfaces.  With the option of consuming services from lower down the stack, clients can build unique apps on top of the platform with far less 'back end' work on the client side (no stripping out the baked-in but irrelevant business logic).
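
A minimal sketch of the 3rd law with invented services: the composite bakes the 'alpha' business logic (A then B then C) into one convenient call, while the atomic services stay exposed so an 'alpha prime' partner can simply leave B out rather than consume it and strip it back out.

```python
# Invented atomic services and a composite built on top of them.
def service_a(query): return {"availability": ["..."]}
def service_b(query): return {"reviews": ["..."]}
def service_c(query): return {"prices": ["..."]}

def composite_alpha(query):
    # Higher layer abstraction: business logic for use case alpha baked in.
    return {**service_a(query), **service_b(query), **service_c(query)}

def partner_alpha_prime(query):
    # Same shaped problem, but this partner has no use for B's data, so they
    # orchestrate the atomic services directly instead of unpicking the composite.
    return {**service_a(query), **service_c(query)}

print(composite_alpha("ski weekend"))
print(partner_alpha_prime("ski weekend"))
```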

4th Law of Platform Architecture - compute work becomes exponentially cheaper and faster as it approaches the edge.

At last something more technical!  If you're building a platform, and you're not planning on comprehensive global coverage, then I would question your commitment to your idea on a web where ubiquitousness has never been cheaper to claim.  So let's assume that you are doing this globally.  Without getting into a discourse on distributed systems, you're going to want to deliver straightforward and performant access to every consumer regardless of where they're based.  That is a distributed system problem, and so your platform should run on a system with a diameter greater than 1, and you should try to service requests as close to the edge (and as high up in your stack) as you can.  There are plenty of mature grid and edge computing infrastructures out there to provide the scaffolding for this.  What does it mean for your partners?  They're using a platform with better durability and lower latency than alternatives that they might choose, which translates into better SLAs and better service which also happens to be cheaper to deliver.
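
A toy illustration of the 4th law, with regions, latencies, and cache contents all invented: answer each request at the nearest point of presence when you can, and only take the slow path back to the origin when the edge can't.

```python
# Hypothetical edge routing: serve from the closest cache, else go to origin.
EDGE_RTT_MS = {"us-west": 120, "eu-west": 20, "ap-south": 180}  # client's RTT to each PoP

def nearest_edge(rtts):
    return min(rtts, key=rtts.get)

def handle(key, edge_caches, origin_lookup):
    region = nearest_edge(EDGE_RTT_MS)
    cached = edge_caches.get(region, {}).get(key)
    return cached if cached is not None else origin_lookup(key)  # slow path

edge_caches = {"eu-west": {"hotel-availability:LON": "cached result"}}
print(handle("hotel-availability:LON", edge_caches, lambda k: f"origin lookup for {k}"))
```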

5th Law of Platform Architecture - as services become more atomic the addressable market segments become more broad.

My 3rd law deals with entrypoints, and this is nearly the same thing.  Lower layer abstractions allow platform consumption without constraining the use cases, but another critical property of those granular services is their atomicity.  Your low level granular services must perform generic, stateless (independent) operations, all be individually consumable, and each should be of business value individually.  When I say individually consumable I think I really mean individually usable.  If one has to consume a handful of other services to be able to make sense of the data returned from a single service, then it clearly isn't too useful in its own right.  Perhaps it needs slightly more data or metadata around it or maybe it is just noise in your stack - keeping the catalogue manageable is actually a harder problem than it may seem.  When your lower layer abstractions are individually useful you find that more kinds of businesses can make use of them.  To round this out with an example: exposing our geography data distinctly (i.e. outside of a composite which applies it to a booking flow) enables us to power navigation and mapping apps, instead of just travel apps.

These are guidelines for architecting a platform to maximise its utility to developers and its commercial success as a set of business building blocks for a range of markets.  But what about the shape of the system(s) behind each service?  That dimension of architecture - how you manifest each of your APIs - is just as important.

We all have our favourite ways of doing things, and pushing one pattern over another without an understanding of the business problem and environment is just bad science, so the best I can do here is suggest establishing some engineering principles to give engineers a decision making framework within which they can own the problem and still consistently meet the higher standards expected from platforms.  Scalability, Maintainability, Quality, Security, Availability, and User Experience have always worked for me.

Whatever you do, keep in mind that if you operate a platform, then you're partly responsible for the availability and integrity and brand and revenue of n number of businesses (i.e. the partners building their apps on your stuff), so you have an even greater responsibility for quality than you do when you're building your own customer-facing apps.  I'm a pretty casual guy about most things, but that's a responsibility I take very seriously indeed...

Saturday, 5 November 2011

The difference between taking a shortcut and cutting a corner

There are two very different things to talk about here, with two very different implications for your software, but I'll admit that which way around you label them can simply be language pedantry.  So for the purposes of this discourse, let's say that:
  • Taking a shortcut is using your knowledge and experience in place of studying things from first principles every time.
  • Cutting a corner is a compromise in quality or good practices in the place of the prudent actions otherwise required.
Both are used to speed progress through a given piece of work.  Using knowledge and experience is a time to market competitive advantage gained through having talented people; compromising quality is a time to market advantage gained by cutting process, albeit purchased at the price of later cost.

Taking Shortcuts

Let's talk about taking shortcuts first.  When you can do this confidently it is a magical place to be.  I like to hire really smart people and give them the freedom to use the experience they've gained throughout their careers to leapfrog us straight over the other organisations who had to do it all from scratch - the organisations that are the sources of the experience I want to tap into.

Experienced people have been through years of doing everything by careful analysis and research, gradually proving out a larger and larger set of assumptions which they can build up from, and using life's great feedback loops to tune their thinking until their instinctive decisions are as good as a noobie's 3 months of science.  That's why we hire the kind of people we hire.

Having all that collective wisdom at hand and not using it is irresponsible leadership and suffocating to the innovators you should be growing.  There are still times when the right thing to do is take things back to first principles and do that 3 months of science, but the beauty of experience is it also tells you when it's right to use your instincts and when it's right to figure it out.

Cutting Corners

And now cutting corners.  Reducing time to market by doing something less well (sacrificing quality) or by skipping validation steps such as doing less testing (taking more risk) works in the short term, but really just exchanges a short term win for a lot more trouble later.  Often more trouble later than you really think, which is why this accelerant is so frequently misused.

I say 'frequently misused' quite deliberately, because in a pragmatic real world it is not always the wrong thing to do.  Occasionally it can even be the path to very big wins, when time is absolutely of the essence and every day counts materially.  But it is always the wrong thing to do to unconsciously blunder into cutting corners, to not be actively managing your quality, and to incur this type of technical debt without having a credible plan to come back and fix it up before it festers too long.

If you are not grown up enough to manage the payback, then you are not grown up enough to incur the debt.


Hire people smarter than you are.  Empower them to use what they know and what they've done for your benefit - that's what they want anyway - and to use their time with you to further expand their horizons.  Carefully manage your quality, incur technical debt strategically, and never do it without paying it back.

You'll do a good job and everyone will have a good time, most importantly your consumers.