Are You Hiring in Your Own Image? Avoid Your Blindside

You’re sitting in a quiet meeting when a stranger suddenly bursts into the room, screaming and ranting – what is your immediate reaction?

Your answer might say something about your personality. If your first instinct is to act – perhaps to tackle the person, or run and hide – you’re likely a “doer”. If your first reaction is empathy – to wonder how the person must be feeling – you’re probably a “feeler”. Lastly, if your first reaction is purely internal – to puzzle over why the person is so mad – you’re a “thinker”.

This simple personality model – doer, feeler, thinker – is of course just one of many. Like any model, it’s an approximation: no one is purely a doer, feeler or thinker (we’d be a weird bunch if we were), but we do tend to have a primary or dominant characteristic. It’s also important to be aware of the weaknesses that come with each of these dominant characteristics.

One compliment that several members of the Wonolo team have paid us over the past 4 years is that they think Yong (CEO), AJ (COO) and yours truly (CTO) make a good exec team because we are all very different.

It’s of course very nice to hear and I think the truth here is that, by compensating for each other’s weaknesses, we achieve more than the sum of our parts.

AJ (COO) is a doer; a man of action. His catchphrase might be “Just Do It”.

Last year, when he heard that the US government recommends people walk 10,000 steps per day, he set himself a goal of doing it every single day. At the time of writing, he hasn’t missed a single day in 326 – despite weather, holidays, travel, vacation, etc – not one! It’s hard for me to imagine having the consistency and commitment to achieve this.

Yong is a feeler. We use Slack for internal communication and Yong’s handle is “sobstory” (handles are generally chosen by the team).

I think it’s only right and proper that we have a feeler as a CEO, given that we are in the people business.  Yong uses his “sob stories” to motivate and inspire the team and to bring empathy and humanity to our business.

That’s not to say that Yong’s not a thinker and a doer too – as well as being one of the nicest guys I’ve ever met, he’s also one of the hardest working. AJ, too, is one of the smartest people I know.  But, again, we’re talking about the dominant aspect of their personalities.

On the flip side, one weakness of “doers” tends to be that their bias to action, or impatience, can make them act before the necessary analysis of the options is done – doers have a low tolerance for long discussions and theory.

Doers also tend to be highly competitive. AJ’s Slack handle is “bookie” owing to his tendency to bet on anything he thinks he can win.

Being competitive is of course a positive quality in many situations.   But, for doers, it can also mean that needing to win the argument is more important than making the right decision, and that achieving the goal can become more important than considering whether the goal in question is actually valuable or important.

As for feelers, their empathy can mean they focus too much on how something feels rather than how it is. They can struggle with decisions they know are right but which negatively impact people, and they can be subject to emotional manipulation by others who know how to exploit the feels.

I myself am a thinker. My Slack handle is “dirtyprofessor” (“knowitall” was another candidate).

As a thinker, one of my weaknesses is that, once I’ve worked out how to do something, I’m less interested in actually doing it. I’m more interested in the theory than the practice; the abstract over the concrete.  Learning for the sake of learning is fun for me.

Another weakness is that by being very analytical and data-driven, I can tend to get disconnected from the real-world, human impact.

AJ and Yong, respectively, definitely help counter these weaknesses.

I first met AJ and Yong in the summer of 2014 when they were working on Wonolo  inside Coca-Cola. I’d love to say that we immediately saw this complementary set of personal styles and that’s why we decided to join forces but the reality, like many things in startup life, is that we simply Got Lucky.

As I’ve aged and learned, I believe I’ve managed to compensate for the weaknesses of being a “thinker” and become a more rounded person but it’s nice to know that Yong and AJ have my back.

What this experience reminds me of is the importance of recognizing your own weaknesses and not hiring solely in your own image. By hiring people who are different from you, you can compensate for your own weaknesses – your blindside. There are no “right” personality types and no one fits precisely in one bucket but, by having a well-rounded team, you will avoid many pitfalls.

Please leave a comment if you have one.


Conducting an Effective Postmortem

Wouldn’t life be great if nothing ever went wrong and technology never broke?

Unfortunately, the reality is that things break all the time.

Your product probably breaks a little bit all the time and sometimes breaks big time (hopefully much less often).

Perhaps your website goes down for 4 hours during your busiest season and you miss $10M in potential revenue. Or, you discover that no payments have been processing for 2 days before anyone noticed.

In these serious to catastrophic cases, you really want to get to the bottom of why the problem occurred, with a view to making sure it never happens again.

That’s where conducting an effective Postmortem becomes vital.

Avoid Blame

The ultimate purpose of the Postmortem is to make sure the problem experienced never occurs again.

In order to do this, a Postmortem involves ferreting out root causes (more on this below). Bringing blame into the Postmortem process itself risks causing defensiveness and “ass-covering”.  This defensiveness tends to obscure the true root causes.

This is not to say that blame isn’t important or can be avoided. Perhaps someone needs to be fired.  But, keep the “blame” part separate from the Postmortem itself in order to get to the true root causes.

The Postmortem Process

I like to conduct the Postmortem process as a group, with all the stakeholders in a room in front of a whiteboard.  It’s important that everyone impacted and/or responsible for the problem is involved and has a voice.

At a high-level, my process for conducting a Postmortem is as follows:

  1. agree and define the impact of the problem on the business – e.g. “we lost $10M in potential revenue”
  2. flush out all the causes of the problem down to their root, as far as possible
  3. agree a set of recommendations aimed at ensuring the problem never occurs again

Let’s use a contrived example for illustration.  Imagine that I fell off my bike and broke my wrist.  We start on the whiteboard with the impact – i.e. I broke my wrist.

Why-Because

My preferred method to analyze causes (step #2 above) is Why-because Analysis.

Why-because is a formalized process but don’t be put off – it can be used more casually with great success and you can add rigor as you become more familiar.

Why-because essentially involves repeatedly asking “Why?” and repeatedly answering with the “because” part.  My 5-year-old son is also great at this.

e.g. “Why did I fall off my bike?” “…because I hit a pothole.”
“Why did you hit a pothole?” “…because I wasn’t looking where I was going.”
“Why weren’t you looking where you were going?” “…because I was distracted.”

…you get the idea.

Why-because is similar to other processes you may be familiar with like “5 Whys”.  (I have found 5 whys to be insufficient because big problems typically have complex causes and the causal chains are often more than 5 levels deep.)

What you end up with at the end of the Why-because Analysis is a graph showing all the contributory causes that led to the impact on your business. More formally, when complete, the Why-because graph should include all the necessary and sufficient causes.

Continuing our example, the Why-because analysis of why I broke my wrist chains the causes together: I broke my wrist because I fell off my bike, because I hit a pothole, because I wasn’t looking where I was going, because I was distracted.

Of course, you need to decide when you’ve gone deep enough and can stop asking Why? There is no hard-and-fast rule here – just use your judgement – but you don’t want to end up drilling down to “because the big bang happened” in every case.

One great thing about Why-because graphs like the one above is that you can test them to make sure they’re complete:

  • for each box on the chart, you can ask: had this not occurred, would the problem still have occurred? If the answer is no, it’s a necessary condition.
  • looking at all the boxes on the chart together, you can ask: if all of these happened again, would the problem occur again? If the answer is no, your conditions are not sufficient and you’re not done yet (see the sketch below).
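
To make these two checks concrete, here is a minimal sketch (in Python, purely illustrative and not part of any formal Why-because tooling) that captures the bike example as a simple cause graph, lists its root causes and generates the necessity question for each box:

    # A minimal sketch of capturing a Why-because graph as data.
    # The node names come from the bike example above; in a real
    # Postmortem you would build this up on the whiteboard first.
    causes = {
        "I broke my wrist": ["I fell off my bike"],
        "I fell off my bike": ["I hit a pothole"],
        "I hit a pothole": ["I wasn't looking where I was going"],
        "I wasn't looking where I was going": ["I was distracted"],
        "I was distracted": [],  # no further "because" recorded: a root cause
    }

    def root_causes(graph):
        """Return the boxes with no recorded causes, i.e. the roots of the graph."""
        return [node for node, why in graph.items() if not why]

    def necessity_questions(graph, impact):
        """Generate the completeness question to ask about each contributory cause."""
        for node in graph:
            if node != impact:
                yield f"Had '{node}' not occurred, would '{impact}' still have occurred?"

    print("Root causes:", root_causes(causes))
    for question in necessity_questions(causes, "I broke my wrist"):
        print(question)

Each effect maps to a list of causes, so the same structure handles the branching graphs you get with real incidents, where a single effect usually has several contributory causes.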

Generally, big problems tend to have complex causes. This is because any reasonably mature organization will have checks and balances in place to avoid obvious and predictable failures.

Therefore, you will likely end up with a complex graph that includes a mixture of technical, operational and human contributory factors. It’s particularly important not to overlook or underplay the human factors since fixing the technical and operational issues alone will not avoid the problem recurring.

You can read more about Why-Because on Wikipedia.

Recommendations

The most important part of the process is to create a list of recommendations to act on, informed by the detailed understanding of the causes from the Why-because analysis.

Don’t forget the human factors – these are often the most important to address, e.g. additional training, more staff or better process.

Again, you can test your recommendations by asking: if we do all of these things, is it highly likely that this problem will not recur? If the answer is no, you haven’t got the right recommendations yet.

Lastly, give each recommendation an owner who is responsible for taking action, and be sure to follow up.
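
If you want to keep the output of the whole process in one reviewable place, a simple structure like the following works. This is just an illustrative sketch (the field names are my own, not a prescribed template): it records the agreed impact, the cause graph and the recommendations, each with an owner to follow up with.

    from dataclasses import dataclass, field

    @dataclass
    class Recommendation:
        description: str
        owner: str          # the person responsible for taking action
        due_date: str = ""  # when to follow up

    @dataclass
    class Postmortem:
        impact: str                                           # e.g. "we lost $10M in potential revenue"
        causes: dict = field(default_factory=dict)            # effect -> list of contributory causes
        recommendations: list = field(default_factory=list)

    pm = Postmortem(
        impact="I broke my wrist",
        causes={
            "I broke my wrist": ["I fell off my bike"],
            "I fell off my bike": ["I hit a pothole"],
        },
        recommendations=[
            Recommendation("Look where I'm going when cycling", owner="Me"),
        ],
    )
    print(pm.impact, "->", [r.description for r in pm.recommendations])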

12 Cognitive Biases that Will Kill your Startup

If you asked me to choose the 3 most important things that determine success or failure of a startup, I would say:

  1. company culture – it has a profound and pervasive effect and it’s essential to consciously and deliberately nurture it.
  2. focus – doing many things badly rather than a few things well is surefire startup suicide.
  3. cognitive biases – the cognitive weaknesses of the human brain can, if unrecognized, wreak havoc.

I’d argue that luck is an even bigger factor than any of these 3 but, by definition, luck is outside your control.

So, let’s talk about #3 – Cognitive Biases…

Definition

“A systematic pattern of deviation from rationality in judgment and decision making.”

Despite what you might like to think, your brain is not a rational, logical computer – you’re a human being.  Your human brain has limitations.  It has bugs.

Cognitive Biases have been experimentally proven (again and again) to exist.

So What?

Cognitive Biases are potentially dangerous to all people and organizations but I think are particularly deadly to startups because:

  • Startups involve making a series of low-data, high-risk decisions. With limited data, Cognitive Biases have a stronger sway.
  • People working on startups are often stressed and working long hours, making them more susceptible.
  • The room for error is often very small because of limited funding runway.
  • Startups are commonly started and staffed by younger people who have less prior knowledge to counteract the impact of Cognitive Biases.

I believe that understanding and being aware of Cognitive Biases leads to better decision making and that better decision making, on balance, leads to better outcomes.

So, let’s look at 12 Cognitive Biases that I have seen are particularly prevalent and particularly dangerous to startups, and how you might spot them and counter them:

1. Correlation vs Causation

Did you know that sleeping with your shoes on is strongly correlated with waking up with a headache?…

…therefore, sleeping with your shoes on causes headaches!

Of course it doesn’t; this is a classic example of confusing correlation with causation.

For some more amusing examples, take a look at the great series of charts Tyler Vigen has created over at tylervigen.com.

In my experience, confusion between correlation and causation is rife in startups.

Partly it’s the limited availability of data…but I’ve also seen that more data can actually make the problem worse.  With so many metrics tools available, people tend to get lost in the data and miss the bigger context.

Startups run at high speed and many things are changing in parallel, with little ability to control variables.  For example, on several occasions I’ve worked to optimize conversion in a product funnel and been happy to see that the optimization worked, only to discover later that conversion improved due to an unrelated change elsewhere – e.g. positive media coverage.

Remember to always ask yourself: does A really cause B? Always consider that, perhaps (a small illustration follows this list):

  • B causes A
  • both A and B are caused by C
  • A and B affect each other
  • it’s a coincidence
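
As a quick, contrived illustration (my own made-up numbers, not one of Tyler Vigen’s charts): two quantities that both simply grow over time will show a very strong correlation even though neither causes the other. This is the “both A and B are caused by C” case, where C is just the passage of time.

    import random

    # Two unrelated quantities that both happen to grow over time,
    # e.g. monthly signups and the number of coffee shops near the office.
    months = range(24)
    signups = [100 + 20 * m + random.gauss(0, 15) for m in months]
    coffee_shops = [3 + 0.5 * m + random.gauss(0, 1) for m in months]

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var_x = sum((x - mx) ** 2 for x in xs)
        var_y = sum((y - my) ** 2 for y in ys)
        return cov / (var_x * var_y) ** 0.5

    # Typically prints a correlation well above 0.9, despite there being
    # no causal link between signups and coffee shops.
    print(round(pearson(signups, coffee_shops), 2))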

2. Confirmation Bias

Confirmation Bias is the tendency to search for, interpret, favor and recall information in a way that confirms one’s pre-existing beliefs or hypotheses.

More commonly, you might call it “cherry picking”.

I think that Confirmation Bias is a particular problem for startups because they generally have a lot invested in a particular view of how the world should be (or will be) but have very little solid data to go on, especially at an early stage.

Life at a startup is ambiguous; startups struggle and go through hard times. Therefore, belief is often what carries a team through the tough times.

Startup founders tend to be “true believers”, with a tendency to get high on their own supply.  One aspect of Confirmation Bias is that it can maintain or strengthen beliefs in the face of contrary evidence.

Always ask yourself whether you’re truly open to evidence that contradicts your existing views and beliefs.  Startups regularly pivot but often pivot too late.

3. Overconfidence & the Planning Fallacy

“the most pervasive and potentially catastrophic of all the cognitive biases to which human beings fall victim” – Svenson (1981)
[Photo: Sydney Opera House under construction, phase 2, 1966]

Sydney Opera House:

  • Planned Completion Date: 1963, Planned Cost: $7M
  • Actual Completion Date: 1973, Actual Cost: $102M

Unfortunately, your confidence in your judgments is reliably greater than the accuracy of those judgments.  You are overconfident.

The real kicker is that this is especially true when your confidence is relatively high.

Read that again: you’re probably wrong and the more confident you are that you’re right, the more likely you are to be wrong.

One particular aspect of Overconfidence is what is referred to as the “Planning Fallacy” – most people who are involved in software development are probably familiar with it.  The Planning Fallacy is the primary reason that software development projects are almost always late.

The Planning Fallacy is the tendency for people to be overly optimistic in how much time will be needed to achieve a task.  Counterintuitively, experience doesn’t seem to eliminate the problem – i.e. knowing that similar tasks have taken longer than expected doesn’t solve the problem.

The good news is that the bias behind the Planning Fallacy can be mitigated with a couple of relatively simple “hacks”:

  1. ask someone else – overconfidence generally only occurs when people are estimating their own tasks and disappears when people are estimating for others. So, never ask the person who will be performing the work how long it will take – ask someone else.  Better still, ask a number of other people.
    In software development, a common technique is “Planning Poker”, where a group of people provide blind estimates (so they don’t influence each other) of how long a task will take and the median is used (a small sketch of this follows the list).
  2. break tasks into smaller chunks – experiments have shown that the estimate for how long a task will take is almost always less than the sum of the estimates for its sub-tasks once they are broken out.
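
As promised above, here is a minimal sketch of the blind-estimates-plus-median idea behind Planning Poker (the numbers are made up for illustration):

    from statistics import median

    def planning_poker(blind_estimates):
        """Combine blind estimates from several people by taking the median."""
        return median(blind_estimates)

    # Each estimator writes down a number without seeing anyone else's,
    # so the estimates don't anchor or influence each other.
    estimates_in_days = [2, 3, 3, 5, 8]
    print(planning_poker(estimates_in_days))  # -> 3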

4. Group Think


“Yes; we are all different!”

“Group Think” is probably a term that most people are familiar with.

My experience is that, in startups, it’s closely linked to the Confirmation Bias problem.  As discussed above, startups are carried forward by true believers (founders) and dissent is often considered heretical.

Perhaps the best way to understand Group Think is to look at what people do that makes it happen:

  • Minimize conflict – with a few exceptions, most people don’t want to fight. Hey, startups are stressful enough. However, some conflicts are necessary.
  • Suppress dissenting viewpoints – startup founders are usually, almost by definition, very strongly opinionated and convincing. The flip side of this is that they tend – consciously or otherwise – to stifle viewpoints that contradict their worldview.
  • Isolate outside influences – some startups have company cultures that are cult-like. While this may be very useful in getting everyone aligned on an objective, taken too far it is dangerous, leading to “not invented here” syndrome and hubris.
  • Appeal to authority – people defer to the founders or execs rather than challenging them. Startup teams must be encouraged to “speak truth to power” and call out the elephant in the room.

5. The Curse of Knowledge

It’s extremely hard to imagine what it’s like to not know what you know.  i.e. you can’t “unknow” something.  This is the Curse of Knowledge.

In the context of a startup, one of the biggest problems is trying to put yourself in the shoes of your user or customer.  The reality is that you can’t.  You’re so deep into the problem that you’re trying to solve that it’s impossible to see it in the same way that an outsider would.

This really underlines the importance of doing User Testing on your product using real users, not internal team members.  Watching people unfamiliar with your business and the problem you are trying to solve use your product is always extremely revealing.  There are always many implicit assumptions that you’ve made and these are only exposed through contact with real users.

6. Anchoring

Anchoring is the tendency to rely too heavily on the first piece of information you get and a tendency not to adjust your position based on further evidence that contradicts it.

Anchoring is what skillful negotiators – e.g. car salespeople – exploit to get the deal they want.  The first price discussed tends to anchor the subsequent negotiation.

In a startup, your first customer deal, your first hire, your first customer loss, etc tend to set a mental template for “how things work” with your business.  It’s important to periodically ask yourself if you’ve become anchored in a world view that isn’t necessarily correct.

7. Sunk Cost Fallacy

You might call this “flogging a dead horse” or “throwing good money after bad”.

Sunk Cost Fallacy is a tendency to continue to rationalize decisions and actions when faced with increasingly negative outcomes.  A “sunk cost” is a cost that you’ve already paid and can’t get back – money that is already spent whether you continue or not.

Sunk Cost Fallacy is, I think, one of the primary reasons that many startups pivot too late and end up running out of money.

Being rigorously data-driven is probably the best antidote to Sunk Cost Fallacy (and other Cognitive Biases).  Set specific metrics that need to be achieved by specific dates in order to assess whether a particular initiative or direction is working and hold yourself and your team to them.

8. Attribution Bias


Attribution Bias is the tendency, when evaluating the causes of the behaviors of a person you dislike, to attribute their positive behaviors to the environment and their negative behaviors to the person’s inherent nature.

We all have to work with people we don’t necessarily like.  It’s important to realize that we probably can’t accurately understand others’ internal motivations and try not to take personally the behaviors that we consider negative.

9. Ostrich Effect / “The Elephant in the Room”


The Ostrich Effect is the avoidance of risky/difficult situations by pretending they do not exist.

The “elephant in the room” is an obvious truth that is being ignored or going unaddressed.

It’s critical to build a company culture in which people are encouraged to call out the elephant in the room.  If you don’t, you will be trampled by the elephant.

10. Hindsight Bias


“I knew it all along!” …actually, you didn’t.

Hindsight Bias is our tendency to see an event, after it has occurred, as having been predictable.

Hindsight Bias is a large and fascinating topic.  I can’t possibly do it justice here.

It’s unfortunately one of the hardest to counteract. The only known way is to ask whether or not alternate hypotheses and predictions would have been equally believable ahead of time.

11. Survivorship Bias

Ever had a conversation like this?  “Uber did X therefore we should be doing X!”

In reality, it’s likely that there were many other companies that did X but which don’t exist anymore…so you won’t be hearing from them.

Survivorship Bias is concentrating on the people or things that “survived” some process and overlooking those that didn’t because of their lack of visibility.

Survivorship Bias is one of my favorites simply because it is extremely common in Silicon Valley.  Beware of people claiming that companies succeeded for specific reasons without data showing that those reasons actually drove the success.

12. Bias Blindspot

Lastly, we all tend to think of our own perceptions and judgments as being rational, accurate, and free of bias.

In a sample of more than 600 residents of the United States, more than 85% believed they were less biased than the average American.

This is despite the overwhelming amount of experimental evidence that they are not.

Summary

Firstly, forget any idea that you can eliminate these biases – you can’t.  However, you can educate yourself about them, build awareness in your team and encourage people to question themselves and call them out when they see them.

Additionally, the #1 thing you can do to help counteract these biases in your startup is be Data Driven.  Data of course does not solve all problems but, by asking the right questions and getting accurate answers, you can cut through many Cognitive Biases.

Lastly, it’s important to note that these Cognitive Biases are subtle and pernicious. Companies do not usually fail because of one Cognitive Bias affecting one decision. Instead, Cognitive Biases take their toll across a series of decisions over time.  Be mindful.

Further Reading

There are a few other Cognitive Biases that didn’t quite make the list but which I still think are hugely relevant and I’ve observed in startups:

  • illusion of validity – belief that additional information generates additional relevant data for predictions, even when it evidently does not
  • information bias – the tendency to seek information even when it cannot affect action
  • zero-risk bias – preference for reducing a small risk to zero over a greater reduction in a larger risk
  • loss aversion – the tendency to focus more on what you might lose from a particular decision than what you might gain

The best place to start for these and many others is the List of Cognitive Biases on Wikipedia.

I Crave Your Feedback

Good, bad or indifferent, please leave a comment.  Thanks.

The Coder’s Oath

Doctors take the Hippocratic Oath.  We need a Coder’s Oath.  Will you take it?

I am a software professional.  I swear to fulfill, to the best of my ability and judgment, this covenant:

I build on the shoulders of the software professionals that have gone before me, recognizing that rarely are truly new programming paradigms invented. I therefore commit myself to fully understanding existing solutions before I reinvent the wheel.

I recognize that the simplest solution is almost always the best solution.  I will not over-engineer or prematurely optimize.

I will always seek out the root causes of problems. I understand that the time taken to seek out and address root causes will yield savings in all but the very shortest term.

I will work to understand my cognitive biases but recognize that I can never fully overcome them. In assessing the effort and time required to complete a task, I will consult with my peers to understand the true scope before making a commitment.

While I always strive to increase my skills and knowledge, I recognize that my work, and the work of my peers, will never be without errors. I accept that all software has bugs and that I myself will write many bugs.  I will allow my work to be scrutinized and critiqued by my peers without taking it personally.  I have the courage to say “I don’t know”.

I do not build software in a vacuum or create software for my own glorification or for technology’s sake. Instead, I create software that is valuable to users.

I accept that users are human beings and that human beings often do not behave rationally. I understand that if I build software expecting people to behave rationally, I will be forever frustrated.

While I may have entered into the software field because I am introverted and/or prefer computers to people, I commit to trying to understand users and the reality of how they use my software.

Frustrating though it may be to me, I understand and accept that most users will lack the time or inclination to understand how software works or why it was built the way it was. I accept that, to users, my software is just a tool to get a job done as quickly and easily as possible.

If I do not violate this oath, may I enjoy life and art, respected while I live and remembered with affection thereafter. May I always act so as to further the software craft and produce software that delivers true value to users.

So, will you take it?  Let me know in the comments.

How Startups can work with Big Companies and not get Killed

(Note: this post was originally published on the Wonolo blog under the title “Collaborate. Innovate. Top Tips for How Large Enterprises and Startups Can Have a Winning Partnership”)

Recently, I was invited by Unum, one of our FORTUNE 500 customers, to participate in a panel session about corporate innovation at Maine Startup and Create Week. At the heart of our discussion was how large companies like Unum can be more innovative and how startups and large companies can work together toward this goal.

It’s a topic that’s close to my heart: I spent the first part of my career in the wireless industry, and back in 1998, I was a part of the founding team of Symbian, one of the first operating system platforms for smartphones (although they weren’t yet called “smartphones” at that point).

Symbian was a joint-venture between Nokia, Ericsson, Motorola, Psion, Panasonic and, later, several others. Their rationale for investing in Symbian was a desire to have a common software platform for smartphones. However, what these companies actually had in common was that they were large, bureaucratic, and they were arch-competitors.

Getting these large companies in the Symbian joint-venture to work together was somewhere between very hard and impossible. (For the full story, see David Wood’s excellent book.) The term of art at the time was “coopetition,” and it didn’t work. None of the participants in Symbian really had any desire to share their product plans with their competitors.

So, the irony was that, while Symbian was arguably at the spearhead of technology innovation, it was frequently stymied from actually being innovative by the inertia and culture of its participants. This left the door open for more focused, agile and independent companies like Google and Apple to dominate the smartphone market of today. In contrast, Nokia had an ignominious end – broken up and sold off, with billions of dollars in market value destroyed.

So, fast-forward to 2016 and my panel discussion at Maine Startup and Create Week…How can big companies be more innovative, and how can startups and large companies work together to the benefit of both?

Designer, Builder or Maintainer?

First, let’s take a look at the kinds of people that tend to work at startups versus larger companies: I have a simplistic but hopefully powerful model that divides people into three groups – “designers,” “builders” and “maintainers.”

Let’s use an analogy: here in San Francisco, arguably our best-known symbol is the Golden Gate Bridge – just look at any tourist tchotchke.

If we think about the Golden Gate Bridge, first there were the designers. In our culture, the designers generally have the “sexy” job – they are the visionaries.


Next come the “builders” who actually constructed the Golden Gate Bridge.


Last come the “maintainers.”  These are the workers who hang on ropes off the bridge, scraping off rust and continuously repainting it in International Orange.


This is the least sexy job in most people’s eyes: which would you rather be – the visionary designer of the Golden Gate Bridge or someone who hangs off it on a rope, scraping rust?

Now, in a startup, what you need for success are just a few designers – these are typically the founders.  You can’t have too many because they tend to butt heads.

What you really need for a startup is a boat-load of builders: these are the doers – people that create and Get Shit Done (GSD). Builders are the backbone of any startup.

What you don’t need in a startup are maintainers: everything in a startup is being created anew, so there isn’t anything to maintain yet. You’re also focused on growth rather than optimization.

Contrast that to a big, established company: there, most people are maintainers. Their job is to ensure that an already successful business continues to be more successful. They are there to grease the wheels and optimize.

So What?

What this means is that there is a cultural mismatch between a large company and a startup.

At the core of the startup mindset is a willingness to fail and an acceptance of it. In a startup, failure is the norm – as the cliché goes, you fail your way to success. Since “failure” has negative connotations, I think it better to simply reframe it as “learning.”

Another important aspect of building a startup is understanding the art of the “good enough.” Because you’re bandwidth-constrained, you are forced to be very selective and very efficient in how you do things. You have to get them done quickly. You have to not let the great be the enemy of the good. You have to focus on delivering 80% of perfection for 20% of the effort.

Naively, when large companies aspire to become more innovative, they trot out clichés like “we reward risk-takers.” This is a lie. The last thing you want when you have a large company generating billions in revenue is to have some cowboy risk-taker come in and break it. What you want are maintainers to keep it working and keep it generating billions of dollars.

To take it back to the Golden Gate Bridge example, would you want a maintenance worker who said, “Let’s see what happens if we take all the bolts out”?

I think it would be better to rephrase it as, “We reward people who make small, smart bets.” Making a series of small, smart bets to test various hypotheses is the basis of iteration, and iteration is how you build great products and great companies.

Recognizing these problems, many large companies have started to take a different approach – they have created specific initiatives intended to foster and drive innovation. Wonolo itself was created through The Coca-Cola Company’s innovation program.

Creating a Great Corporate Innovation Program

So, how can a large company create an innovation group and/or program likely to succeed? These initiatives can take various forms, but I think these are the most important elements:

  1. Set money aside – the budget for innovation can’t come out of the normal, operating budget for any existing business unit. If it does, it competes with the budget needed for maintenance of what’s already working.
  2. The innovation group must report directly into the CEO – this demonstrates genuine commitment to innovation and also helps unblock bureaucracy.
  3. Be clear with objectives – what specifically are you hoping that the innovation program does for your business? What are you looking to achieve? How does it positively impact your core business?
  4. Build the right team – a good mix is designers and builders from outside of the organization, along with some inside players who can help navigate the existing organization, as long as they carry enough weight. You will also need to reassure your best maintainers that they should stick to what they do well rather than trying to join the innovation program because it’s sexy.
  5. Accept failure – as discussed above, you must accept that failure is a vital part of the process. Not all initiatives you start or companies you fund will be successful, but you will learn something important from each.

How Can a Startup Engage with a Big Company and Win?

Big companies can kill startups. I’ve seen it happen.

Big companies can lead startups on and consume lots of their time and bandwidth with no pay-day. At the end of the process, the large company has perhaps lost a few hundred thousand dollars. Meanwhile, the startup has run out of funding and is dead.

Here’s what I’ve learned (the hard way) to avoid that outcome:

  1. Find your champion. Ted Reed is our champion at Unum. Not only is he an all-round great guy, but he also understands the need to be completely transparent with us. A great champion is your guide to the large company – its structure, how it makes decisions and the key players you’ll need to win over.
  2. Seek trust, honesty and transparency – any great relationship is built on mutual trust. Get feedback early and often (from your champion) on your likelihood to succeed.
  3. Don’t over-invest until you have clear commitment – be prepared to scale back or end the relationship if it’s not clear you are on a path to success. The opportunity cost of your time in a startup is huge. Don’t do anything for free – free means there’s no value, and it won’t be taken seriously.
  4. Ensure clarity on objectives and value, and define success – if both sides are not clear on the business value that your product or service is providing to the large customer, be very cautious. Make sure both sides agree on what success means.
  5. Start with a small, well-defined trial – rather than trying to boil the ocean, it’s wise to start with a trial that demonstrates the value your product or service provides to the large company. This has less risk, requires less investment and has a higher likelihood of success. For more tips on how to best go about setting up a pilot, check out our related blog post.

How Can a Big Company Engage with a Startup and Win?

On the other side, how can big companies successfully engage with startups and win? Here’s my personal recipe:

  1. Be honest and transparent – don’t lead startups on. Be honest about chances of success and what it will take.
  2. Be respectful of bandwidth and provide funding, if possible – realize that a startup’s most precious commodities are bandwidth and funding. Do everything you can to reduce the sales cycle. Structure the deal to provide the funding and/or revenue necessary for the startup to succeed.
  3. Have realistic expectations in terms of the maturity of a startup and its processes – don’t try to apply your pre-existing vendor onboarding process when engaging with a startup. For example, a 10-person startup won’t pass your 50-page IT security audit.
  4. Respect the need for independence – you may be providing a startup with revenue, funding and a great customer reference. However, a startup needs to be in control of its own destiny and own its own product roadmap. Don’t treat a startup like a consulting company or development shop, unless that’s how the startup sees themselves.

Overall, the relationship between a large company and a startup can be a marriage made in heaven. I would marry Ted Reed if I could.

How to Fire Someone Humanely

I’m surprised by how often I encounter someone in a leadership role who has never had to fire anyone. I suspect it’s a combination of technology companies generally having pretty flat organizations and also the tendency to have dedicated HR functions in larger companies that insulate people from any “unpleasant business”.

Whenever I’ve had to fire someone, I’ve not slept well the night before. Regardless of the reason, you are changing someone’s life and everyone deserves to be treated humanely. It definitely gets easier after you’ve done it a few times but I hope that it never becomes routine.

Here are my 10 recommendations for how to fire someone humanely:

1.  Immediately set the tone of the conversation upfront.  
It gets super-weird if you have a friendly “how are you?” conversation and then fire someone.  I normally get straight down to business and open with “I’m sorry we have to have a difficult conversation today” before the person even sits down.

2.  Do not use the word “fire”.
The word “fire” creates an emotional reaction – to “fire” someone implies an active act of aggression.  I generally say “this will be your last day”.  That way, it’s not a value judgment or a process – it’s just a fact.

3.  Keep it short.
There is no point in dragging it out.  But, you should at least give the person a short and true reason as to why it’s happening, unless there is a good legal reason not to.

4. Do not get drawn into arguments.  
If someone wants to argue, be clear that this is the decision, it’s already made, that you understand they are angry but that it’s not beneficial for either side to drag it out.

5. Have your paperwork in order and be aware of the local employment law.
For example, here in CA, it’s the law that you have to give an employee their final pay in the form of a check when they leave.

6. Say you’re sorry it didn’t work out.
It helps humanize you in the process. It might seem trite or hollow to say “sorry” but, on balance, I believe it helps soften the blow.

7. Don’t fire someone on a Friday.
You’ll read conflicting recommendations in this regard but I am strongly in the camp that says firing someone on a Friday is a Bad Idea(tm).  It means they are likely to just seethe about it all weekend.  If you fire them during the week, they are more likely to focus on finding another job.

8. Have others ready to cut the cord.
You will need to confide in a small group of people ahead of time so they are cued up to terminate account access, etc as soon as the person leaves the room. Create a list of accounts ahead of time so it is not a rush and nothing gets missed.
You may be tempted to think something like “I’ll just leave Dave’s email on until the end of the day”. Resist this temptation at all costs. Although this may seem counter to the “being humane” approach, the potential benefits in terms of seeming more humane and feeling better are massively outweighed by the downside risk of the person doing something stupid.

9. Treat the person with respect and try to make it as comfortable as possible.
If possible, try to make sure that their team mates are not around to gawp as the person clears their desk, etc. This will again require the confidence of someone you trust.

10. It should not be a surprise.
Firing someone should be the culmination of a process of clear and honest communication over weeks if not months.  If you are a good manager, the person on the receiving end should be clear on the gap between what is expected of them and their performance.
With the exception of gross misconduct, if there hasn’t been such a dialogue, you probably shouldn’t be firing the person.

So, you inherited a codebase… just how screwed are you?

In a perfect world, every software developer would finish school, build a product from scratch, IPO the company and retire. However, that’s very, very rarely the case and the majority of software people will at one time or another in their careers inherit a codebase that they weren’t involved in creating.

The most common situations where this happens are when one company acquires another and when someone joins a new company (because of staff growth or turnover).  But, regardless of the reason, the software developers, managers and executives involved are expected to take the codebase over and continue to make forward progress.

The problem is that not all codebases are created equal – they can range from absolute disasters to dream scenarios; from the ridiculous to the sublime.  Plus, aside from the code itself, there are a number of other factors which have a huge impact on the success or failure of the transition.

So, the purpose of this post is to give a framework for assessing how much of a risk the transition will be.  Put another way, it’s also a framework for asking the right questions and for setting expectations at the right level.

How Screwed Are You?…a formula

Every situation is unique, but this is my rough formula to calculate the “Codebase Risk Factor” (CRF):

[Image: the CRF formula]

If you’ve immediately started to glaze over at the math, don’t worry: I’m going to break it down for you.

This formula factors in what I believe are the 5 main risk factors when you inherit a codebase:

  • Test Coverage (“tc”)
  • Team Availability (“ta”)
  • Team (availability) Duration (“td”)
  • Defect Find/Fix Ratio (“ffr”)
  • Age of Codebase (“ac”)

The formula then weights each of these factors to give an overall score – the higher the number, the more screwed you are.

Let’s look at each of these factors in more detail.

Test Coverage

There are many other places you can go to read more about the benefits of test coverage and Test Driven Development (TDD) in general.  I am not going to make that case here.

Suffice to say that, when you inherit a codebase, Test Coverage is your friend.

For those who are new to the concept, Test Coverage is usually expressed as a percentage and measures how much of the codebase is exercised by automated tests that validate whether it’s working or not.

When you first inherit a codebase, you understand almost nothing about it. However, despite that, in most cases you are going to be expected to a. keep it working and b. add new features. Good Test Coverage lets you add and change things in the codebase with a much lower risk of inadvertently breaking something else that you don’t understand yet.

So, how do you assess Test Coverage? Fortunately, for most platforms and languages (in fact all the platforms and languages I’ve come across), there are one or more automated tools to analyze Test Coverage. These are not perfect and you should always ask the developers who worked on the code originally what they believe the Test Coverage to be (more on that below).
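
For instance, if the codebase you inherit happens to be in Python, the coverage.py package together with pytest can give you a first-pass number. Other languages have their own equivalents; treat this as a sketch rather than a recipe, and note that “mypackage” and “tests” are placeholder names:

    # Requires: pip install coverage pytest
    import coverage
    import pytest

    cov = coverage.Coverage(source=["mypackage"])  # the inherited code under measurement
    cov.start()
    pytest.main(["tests"])                         # run the existing automated tests
    cov.stop()
    cov.save()

    # Prints a per-file table and returns the overall percentage.
    percent = cov.report(show_missing=True)
    print(f"Overall test coverage: {percent:.0f}%")

In practice most teams wire this into CI via the pytest-cov plugin, but the point here is simply to get a number you can sanity-check against what the original developers tell you.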

Also, beware of getting a false sense of security from a raw Test Coverage % number. Just like any codebase, not all Test Coverage is created equal. I’ve seen many cases where tests are written but they don’t actually properly test the code in question.  Again, this is why you should seek out the view of the developers who worked on the codebase originally.

Team Availability

To be blunt, if you inherit a codebase but don’t have any access to the developers that wrote it, I’d say you’re in Big Trouble ™.

Only the original developers know where the bodies are buried: they understand how much Tech Debt has accumulated in the codebase that you’ll need to deal with, they know which parts of the code are solid versus which kept them awake at night and they know which bits of code were written by the good developers and which by that idiot that was fired.

By “developers”, I also count devops people since they are arguably the most important in keeping what you inherit up and running.

Now, by “Team Availability”, I don’t necessarily mean that the developers have to still be working for you full-time (although that would be best) but that they are available to answer questions and provide advice on an as-needed basis.

So, Team Availability is also defined as a percentage. You can either think of it as the % of people (e.g. 2 members of the original team of 10 are available, so 20%) or as a % of the people’s time (e.g. the original developers are available 1 day a week, so 20%).

Team (availability) Duration

Having access to the original team is great but often the follow-on question is, for how long?

From experience, I’d say 6 months is enough for most cases. It’s enough time to learn what’s already there and to make significant forward progress with new features.  After 6 months, the original developers no longer have familiarity with the newest code so their usefulness starts to diminish.

If you’re planning on acquiring/inheriting a codebase, I’d recommend that the absolute minimum you should try to keep the original team available for is 3 months.

Defect Find/Fix Ratio

The Defect Find/Fix Ratio is simply a measure of how rapidly bugs are being found in the code versus fixed in the code. It’s a useful measure of how troubled the codebase is at the point in time you inherit it. If bugs are being found faster than they’re being fixed, that’s an indication of a problem.

In practice, the amount of time allocated to bug fixing varies based on the team’s other commitments and, therefore, the Find/Fix Ratio will tend to oscillate.  So, it’s good to look at the Find/Fix Ratio over a period of time; the past month at least.  The lower the ratio, the better.
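
As a concrete sketch, assuming you can export the dates on which bugs were reported and fixed from whatever tracker the team used (the dates below are made up):

    from datetime import date, timedelta

    def find_fix_ratio(found_dates, fixed_dates, window_days=30, today=None):
        """Bugs found vs bugs fixed over the trailing window; lower is better."""
        today = today or date.today()
        cutoff = today - timedelta(days=window_days)
        found = sum(1 for d in found_dates if d >= cutoff)
        fixed = sum(1 for d in fixed_dates if d >= cutoff)
        return found / fixed if fixed else float("inf")

    # Hypothetical export from the bug tracker:
    found = [date(2017, 5, 2), date(2017, 5, 9), date(2017, 5, 20), date(2017, 5, 28)]
    fixed = [date(2017, 5, 15)]
    print(find_fix_ratio(found, fixed, today=date(2017, 5, 31)))  # -> 4.0

Computing this for each trailing month over the past few months shows you the trend as well as the current value.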

Age of Codebase

While it’s true that code itself doesn’t age or rust, older codebases are, in general, more problematic.

There are a number of reasons for this: firstly, the older the codebase, the more likely that the people who wrote the code are no longer available and/or have forgotten how it works.

Secondly, the older the codebase, the older the versions of the libraries, packages, tools, etc that it depends on, unless there has been a very deliberate effort to keep these up to date (hint: this rarely happens). This makes it much harder to make changes to the codebase, especially if some of the versions it depends on have reached end-of-life or are no longer compatible.  This is all Tech Debt that you will have to address before you can move forward.

An Example

Let’s imagine that ACME Corp acquires Widgets Inc and the poor VP Engineering at ACME Corp inherits Widgets Inc’s codebase.

Widgets Inc’s codebase is 4 years old.  The team has been pretty stable at 10 people but 7 of those people have quit in anger over the acquisition.  The remaining 3 have been persuaded to stay around for 4 months in return for some ongoing stock vesting and a retention bonus.

The VP Engineering quizzes the remaining 3 and runs the test coverage tool.  He also looks at the bug tracking system to see the recent trend.  He discovers that Test Coverage is about 40% and, in the past 3 months, 280 bugs have been reported, of which 80 have been fixed.

So, here are our inputs to the formula:

  • test coverage: tc = 40%
  • team availability: ta = 30%
  • team (availability) duration: td = 4 months
  • defect find/fix ratio: ffr = 280/80 = 3.5
  • age of codebase: ac = 4 years

Plugging them into the formula, here’s what we get:

[Image: the CRF formula with the example values plugged in]

So, this yields a Codebase Risk Factor of 320, compared to (as the mathematically-inclined will have noticed) a perfect score of 20.

In layman’s terms, pretty screwed.

 

Agree or disagree? Please leave a comment.