I spent the better part of last week in Amsterdam, at this year’s edition of the Carrier Ethernet World Congress. The life of a marketer at industry events like this one doesn’t normally look overly glamorous, but in between client and industry analyst meetings (and the odd awards dinner – sadly we did not win this time round..) I did manage to spend a little bit of time in the conference hall listening to a few of the presentations.
To me, the leitmotif of this year’s CEWC was the Cloud – or more specifically, Carrier Ethernet’s role in how Cloud services are delivered and the ways in which operators can capitalize on this opportunity. Telecoms service providers are clearly taking the topic very seriously (at last..) and have either already climbed down the fence they kept sitting on for years, or are planning to do so in the immediate future.
One Cloud related stat that I picked up at the show – and also one of the conference’s eye-openers for me – came from Easynet‘s CTO Justin Fielder. Apparently, 57% of CIOs and IT decision makers in medium/large companies in Europe think that they can efficiently transition their business to the Cloud without the need to upgrade their network – which is absolutely staggering (Easynet surveyed 800 of them across eight European geographies – see here for access to a white paper on the survey).
When you think about the Cloud, things that usually pop into mind are its numerous applications – flexible storage, on-demand compute power, desktop virtualisation, services such as Google Docs or Salesforce.com, etc. As a consumer of these applications, it is perfectly OK for you to stay on that level and let others worry about the plumbing underneath. If you do that as a CIO though, I think it becomes a bit worrying.
After all, at the very heart of the concept of Cloud lies the notion of resources (compute, storage or other) that are geographically distant from their user. An underlying network that can support those Cloud services becomes, in this case, a true “sine qua non” condition for the existence of Cloud itself. What’s more, the performance and reliability of this network is also the single biggest determinant of whether Cloud services work and of the user value they deliver. In other words, the more IT applications a company puts in the Cloud, the higher the capacity (and resiliency) strain put on the network connecting the firm’s offices to those Cloud-based resources. For many cloud services, failing to address this issue upfront could directly impact the Quality of Service (QoS) they deliver.
The fact that only 43% of CIOs seem to appreciate the role of networks in their transition to the Cloud puts telecoms service providers in a bit of a pickle. The next big challenge for them in this area will be getting their business customers to look beyond the “sexy” applications and convincing them of the value of the network as an enabler of Cloud itself.
Whether they can do it is another thing. My thinking in this area sways towards the network joining software, platform and infrastructure in the “X-as-a-Service” family (most people put networks in the IaaS drawer, but I think if it’s flexible enough, the business model will warrant treating it as a separate category – although admittedly, it wouldn’t fit nicely into the cloud computing stack..). A network of this kind would be completely elastic and able to respond dynamically to the requirements of the cloud-enabled world. It would need to be able to automatically allocate bandwidth to specific services, as well as switch between the services it carries – all in a completely seamless, on-demand model. In this way, business customers would pay only for the bandwidth they actually use across their cloud-enabled IT function, rather than purchasing a static pipe as they do today. This, in turn, would translate into an obvious cost benefit – and hopefully convince them that the plumbing actually is worth looking at.
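To make the cost argument concrete, here’s a toy sketch (all prices and usage figures below are made up purely for illustration) comparing a static pipe sized for peak demand with a pay-per-use model that bills only for bandwidth actually consumed:

```python
# Illustrative sketch (all numbers are hypothetical): a static pipe must be
# provisioned for peak demand and paid for in full, while an elastic
# "network as a service" bills only for bandwidth actually consumed.

def static_pipe_cost(peak_mbps, price_per_mbps_month):
    # You pay for the whole peak-sized pipe whether you use it or not.
    return peak_mbps * price_per_mbps_month

def pay_per_use_cost(hourly_usage_mbps, price_per_mbps_hour):
    # Elastic billing: each hour is charged only for what was consumed.
    return sum(u * price_per_mbps_hour for u in hourly_usage_mbps)

# Hypothetical month of 720 hours: mostly 100 Mbps, with bursts to 1 Gbps.
usage = [100] * 600 + [1000] * 120  # Mbps consumed in each hour

static = static_pipe_cost(1000, 10.0)          # pipe sized for the 1 Gbps peak
elastic = pay_per_use_cost(usage, 10.0 / 720)  # same unit price, pro-rated hourly

print(f"static pipe:  {static:,.0f}")   # 10,000
print(f"pay-per-use: {elastic:,.0f}")   # 2,500
```

At the same headline unit price, the bursty customer pays a quarter of the static-pipe cost – which is exactly the kind of number that might get a CIO interested in the plumbing.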
To my knowledge, no solutions are commercially available today that allow this – it’s clearly still early days in the evolution of the network to truly address the Cloud challenge (albeit I have seen some early demos and trials already). Watch this space over the next few years though – I’m pretty sure competition will heat up as service providers start putting pressure on network vendors to deliver the flexibility needed in the redefined, cloud-centric world.
Working with a tech hardware company these days often doesn’t really feel “hardwary” at all. Even though developing products (laptops, servers, routers, switches, storage devices, you name it) is what these companies are often very good at and have done for years, everyone seems to be talking about services, and about how they have transformed, are transforming, or will transform the company into a “services driven business.”
Different companies understand this lock-pick of a phrase differently, but it borders on a rule that at a certain size – and stage in their evolution – most hardware manufacturers stop treating their services business as “necessary evil” and start seeing it as one of the key growth drivers. Together with this recognition also comes better focus on how services, which originally might simply have been given away for free, are monetised for revenue.
I’ve looked, briefly, at the recent sales figures of four global companies with a rich legacy of hardware manufacturing: IBM, Xerox, Ericsson and Hewlett-Packard. The graph above clearly suggests that the services business in all these firms has played a growing role in the overall sales mix. Some of them jumped on the services gravy train earlier than others, of course, but the trend itself is relatively consistent across all of them.
So why is this the case? What makes services, all of a sudden, such an attractive business to dabble in? Here’s a stab at defining four key reasons (there are probably dozens more, but these seem to stick out most to me):
- Regularity of revenue flow
When you’re a hardware manufacturer (especially in the B2B market), your sales can get quite lumpy during the year, as you often rely on big deals that normally have long sales cycles. The stability, security and repeatability of the revenue flow coming from services (many of which are sold on an annuity basis) provide a comfy cushion against top-line variability.
- Stickiness of customer relationships
Two factors come into play here – firstly, services are usually delivered as part of a long-term contract, with the vendor’s employees often based at the customer’s office. Contacts are made, bonds get created, and a sticky symbiosis emerges. Secondly, in a value-added service model, a vendor often takes control of a set of key processes in a customer’s organization. As a result, the customer often loses the capability to execute these processes using its own internal resources – and can end up effectively locked in a relationship with the supplier (admittedly this is a bit on the “dark side” of business, but it is extremely valuable to the vendor).
- Healthy margins and simple cost structure
The gap between the hourly pay of your employees and the hourly fee your customers pay for the services delivered can be huge – certainly big enough to capture a significant amount of the value created in the process. This blog post quotes data suggesting that e.g. Oracle’s services margins can be as high as 74%. For a hardware vendor, traditionally heavily reliant on the cost of components further down the supply chain, this “clean” control of margins is tempting. If you add to this the fact that there is no inventory to maintain and manage (beyond staffing levels, of course, although it feels a bit funny to refer to people as inventory..), the whole cost structure of the business becomes much more transparent.
- Hardware up-sell opportunity
The last one, but also one of the most obvious. Working day-to-day at a customer’s office, a vendor’s employees have a pretty unique opportunity to identify unmet needs and offer to address them – often outside of the formal tender/RFQ process. Once trust is established, the same employees sometimes also become influencers in the purchasing process, helping with initial tender specifications and, effectively, pushing the customer towards decisions favourable to their own company.
Admittedly, none of the four benefits listed is truly ground-breaking or particularly difficult to come up with. They’re quite commonsense, in fact. However, what I find really interesting is the mechanism that makes hardware companies decide to jump onto the services bandwagon – what makes them realize the power and potential of services only after years spent in the box-shifting business? Why do so few hardware vendors have the insight to offer services (especially value-added services) in a meaningful way from the very onset of their existence?
One thought I have on this is around how the value you deliver to your customers changes as both you and they grow. Smaller vendors tend to lock a lot of the value they create in the products they manufacture, their functions and capabilities – as this is what their small customers expect. As they grow the business, their customers also get bigger and those needs evolve, often becoming too complex for a product to deliver, regardless of how technically advanced it is. That’s probably when vendors get an “aha!” moment, realizing that as specialists in their field they are in a unique position to not only address those evolved needs of their customers, but also capture a good deal of additional value while doing it. And so the transformation to the “services driven business” begins.
Pretty much everyone in the UK has an opinion on Marmite – the ambiguously-coloured bread spread that, apparently, you either love or hate, and never anywhere in between. Marmite’s manufacturer Unilever has, in fact, based the product’s whole brand concept on this strong message of conflict (and done it in a very clever way too – see the image of a chewing gum disposal board I saw earlier today close to my home, which in fact prompted me to write this post).
With iPhone 5 reportedly just round the corner, I started to wonder to what extent Apple’s reputation falls into a similar category as Marmite’s – with very clear groups of lovers and haters of both the brand as a whole and the company’s products. More importantly, however, I thought it would be interesting (slightly prophetic, maybe?) to know how many people actually fall into the third category and, basically, don’t have a strong opinion on it at all.
This kind of data isn’t necessarily easy to come by – certainly not with an amateur blogger’s budget. Fortunately, marvels of today’s technology (and one, called SurveyMonkey, in particular) allow you to magically turn yourself from an amateur blogger into an amateur market researcher – and that’s exactly what I did earlier today.
In a few words – this morning I asked my friends on Facebook and LinkedIn, as well as the campus community at London Business School, one simple, multiple choice question:
What’s your opinion on Apple and its products?
- I love it.
- I hate it.
- I don’t have an opinion.
Since then, a pretty amazing (in my mind) 135 people have responded (thanks!). This obviously doesn’t make the results in any way representative of either the UK’s or the globe’s population, but I think it does give us some interesting – and hopefully meaningful – numbers to look at.
First of all, the boring and unsurprising stuff: Apple definitely has little to worry about in the short run. As anticipated, a very strong 72% of respondents (rounded off) love it to bits. I’d personally attribute this remarkable result to the superior quality and excellent usability of Apple’s products (you can read my thoughts about it in this blog post and this discussion on HBR blog), which the company has delivered to its customers over and over again. For decades, too.
What’s interesting, though, is that the second largest group in my little survey wasn’t made up of Apple’s haters. In fact, nearly one fifth of the respondents (19%) seem to not care too much about Apple (which, of course, leaves 9% for the haters). Compared to the massive figure from the previous paragraph, this doesn’t look big – but in my opinion it could have some implications for the company in the longer term.
Think about it this way – there’s only so many sequels to a product you can pull off that will be revolutionary and fundamental to the world. Take the iPhone, for instance. Originally, it truly was a one-of-a-kind gadget, with usability that was unmatched across the market (and market-lagging tech specs, which no one seemed to care too much about – and as it turned out, rightly so). Today, however, we have a multitude of other options to choose from, with smart phones out there that match the iPhone’s performance in most areas and in a few possibly even surpass it – notably the Android-powered ones. And nearly 20% of people already think the iPhone is just another smart phone.
In microeconomic theory, smart people sometimes talk about the indifference curve. This clever little thing is essentially a graph showing different combinations of goods between which a consumer is indifferent. A big behavioural assumption in this theory is that all consumers are rational decision makers who seek to maximize utility (subject to budgetary constraints), which means that consumers purchase the combination of goods and services that will make them happiest given the amount of income they have to spend. In the case of the iPhone, my view is that with every next generation the incremental “utility profit” that people gain from the product gets smaller – and as a result, the proportion of people indifferent to Apple and its products (including the iPhone) should grow.
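The diminishing “utility profit” argument can be illustrated with a toy model – assuming (purely for illustration, these are not Apple’s actual numbers) that the utility of the n-th product generation follows a concave function such as log(1 + n), the marginal gain shrinks with every release:

```python
import math

# Toy model of diminishing marginal utility across product generations.
# The concave utility function is an illustrative assumption, not data.

def utility(generation):
    # Concave in the generation number: later generations still add
    # utility, but each addition is smaller than the last.
    return math.log(1 + generation)

# Marginal "utility profit" delivered by generations 1 through 5.
marginal_gains = [utility(n) - utility(n - 1) for n in range(1, 6)]

for n, gain in enumerate(marginal_gains, start=1):
    print(f"generation {n}: marginal utility gain = {gain:.3f}")
```

Each printed gain is smaller than the previous one – which, if the model is even roughly right, is exactly why the pool of people indifferent to the next iPhone should keep growing.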
This said, my take on the fifth generation of the iPhone is that it could be a very strong product – provided that it’s LTE-capable. Not so long ago AT&T launched 4G services, and so did Verizon, with more telecoms service providers following suit shortly – which could be a huge sales pull for the iPhone 5. Knowing Apple, however, that capability will likely only come with the next release – which could turn next week’s launch into a bit of a disappointment. Fortunately, we won’t have to wait too long to find out.
(the majority of this post was touch-typed on the very nearly obsolete iPhone 4, about which, by the way, I have quite ambivalent feelings)
CAPEX vs. OPEX based business models in networking – thoughts & questions on a speech by Cisco’s Anil Menon
During my recent visit to India with London Business School, I had the opportunity to listen to one of Cisco’s foremost executives, Anil Menon, give a dinner speech on the challenges and specificity of running a business (a technology business, in this case) in emerging markets.
The speech itself was designed to be more “infotaining” than ground-breaking (one trivia fact I learnt was that there are more mobile phones in use in the world today than toothbrushes – take that 8 out of 10 Cats!), but there was one theme that Mr. Menon touched upon that really made me stop and think. So here goes.
At some point towards the end of his speech, Mr. Menon said that he could see a very clear trend in emerging markets towards OPEX-driven (as opposed to CAPEX-driven) relationships with customers. Because of the way things are traditionally set up in places like India (think family-owned businesses, lightning-fast growth, etc.), people would rather pay for a networking solution on a per-use fee basis than build up a whole internal organisation to purchase and maintain its own network.
And so, for instance, a rural district in India operating a distance learning scheme (which there will be more and more of in years to come, one would hope) could let the networking company set everything up and only pay a fixed fee, say, per completed lesson. Large companies also seem to be heading in that direction.
Personally, I wouldn’t say that this way of doing business is unique to emerging markets. Though in a slightly disguised form (that of a carrier managed service), it’s present in the developed world as well. In fact, the pay-as-you-go model has been a growing segment for telecoms service providers for quite a while (an Ovum study carried out back in 2008 projected that the managed service market would be worth USD 66 billion by 2012, with a CAGR of 18%).
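As a quick sanity check of the quoted figures (my own back-of-envelope arithmetic, not Ovum’s): a USD 66 billion market in 2012, after four years of 18% compound annual growth, implies a 2008 base of roughly USD 34 billion:

```python
# Back-of-envelope check of the quoted Ovum projection: a market worth
# USD 66bn in 2012 after four years of 18% CAGR implies a 2008 base of
# 66 / 1.18**4.

cagr = 0.18
value_2012 = 66.0  # USD billion, as quoted
years = 4          # 2008 -> 2012

implied_2008_base = value_2012 / (1 + cagr) ** years
print(f"implied 2008 market size: USD {implied_2008_base:.1f}bn")  # ~34.0
```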
What is more interesting about this business model, though, is that it bears an uncanny resemblance to how cloud computing works – where, rather than buying the infrastructure itself, the customer pays exactly for what they use, in the form of a service. Plenty of clever companies have obviously already monetised this model – among them big names such as Salesforce.com, Amazon Web Services, or Citrix – and undoubtedly plenty more will in the future.
One fundamental difference springs to my mind here, however. In cloud computing, the infrastructure actually providing the service – be it compute power, storage, etc. – can easily be shared across multiple customers. This is, in fact, one of the biggest appeals of cloud computing, and one of the reasons why it makes business sense for cloud service providers (the story of how a company on average only uses ca. 20% of its IT resources and therefore massively overpays for its infrastructure, when it could be buying it as a service from a cloud vendor, is pretty widely known).
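A rough sketch of that utilization story (the 20% and 70% figures below are illustrative assumptions, not measured data): if you use only a fifth of the infrastructure you pay for, your effective cost per unit actually consumed is five times the headline price, while a provider pooling demand across many customers can run at much higher utilization:

```python
# Illustrative utilization economics: the effective price per unit of
# capacity actually used rises as average utilization falls. All figures
# here are assumptions for the sake of the example.

def effective_cost_per_used_unit(total_cost, capacity, avg_utilization):
    # Cost spread only over the capacity that is actually consumed.
    used = capacity * avg_utilization
    return total_cost / used

# Same headline cost and capacity, very different utilization levels.
on_prem = effective_cost_per_used_unit(total_cost=100.0, capacity=100, avg_utilization=0.20)
cloud = effective_cost_per_used_unit(total_cost=100.0, capacity=100, avg_utilization=0.70)

print(f"on-prem effective cost per used unit: {on_prem:.2f}")  # 5.00
print(f"shared-pool effective cost per used unit: {cloud:.2f}")
```

The gap between those two numbers is, in essence, the margin a cloud provider can share with its customers – and it’s precisely this pooling that dedicated on-premises networking kit cannot replicate.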
In the carrier managed service model, however, a lot of the physical equipment deployed remains dedicated to a single user – you simply cannot create a networking solution without placing hardware on your customer’s premises. It follows, then, that the economics of this approach have to be a bit skewed – as sharing that equipment across multiple users isn’t really possible.
I’ll end on a few questions I don’t really have good answers for (but would be very interested to hear anyone’s thoughts):
- How sustainable is this pay-as-you-go model? Given that the growing fleet of equipment needs to sit on SOMEONE’s balance sheet – aren’t service providers at a loss here, on balance?
- How can/do service providers ensure these OPEX-driven relationships are actually delivered in a profitable way?
- What could the role of networking equipment vendors be here? Maybe the “ownership buck” should indeed be passed on down in the value chain to them?
The publication of the final report and recommendations of the Independent Commission on Banking (ICB), led by Sir John Vickers, was by far the most prominent business news story yesterday. In a nutshell, the report proposes a fundamental remodelling of the UK banking industry following the financial and economic crisis we’re just getting ourselves out of (fingers very much crossed), creating a more stable and competitive basis for UK banking in the long term – which translates into greater resilience, better management of risk, effective safeguarding of retail deposits, and so on.
The report has met with general support and its value to the wider society is hardly debatable. However, I think it might have some (unintentional) impact on the fortunes and misfortunes of the technology industry in the UK. Now, what follows is very much free-flow thinking and could be completely wrong (please feel free to disagree with any of the points below) – but here’s how it goes.
One of the report’s main recommendations is to introduce “structural separation” between banks’ retail and wholesale/investment operations. While the commission does not call for an outright split-up of today’s banking groups, it does recommend a significant scope of ring-fencing that would allow the relationship between retail and investment parts of the same group to be “no greater than regulators generally allow with third parties, and […] conducted on an arm’s length basis.”
If my thinking is not flawed (and I’m no finance/banking guru, despite the excellent Corporate Finance course at London Business School that I’m half-way through right now), this approach will likely end up shaking up the way risks are managed across the board. Without the traditional hedge that has so far (ultimately) been provided by its retail operations, a bank’s venture capital and private equity groups are bound to become more conservative in terms of where they allocate their funds.
The problem is, technology ventures traditionally carry a relatively high level of risk, which could mean less funding money at British innovators’ disposal and, as a result, fewer technology ventures budding in the UK. In an (unsurprisingly) excellent article entitled “Start me up” and published a few weeks ago, The Economist points to a shortage of venture capital as one of the main factors that prevents the UK’s technology innovation from developing to its full potential. With more scrutiny imposed after the banking industry reform, this problem might become even bigger.
And all that at a time when the Prime Minister wants to turn East London into a second Silicon Valley and we could really do with people with enough guts to say to the British tech industry: “I’ve got a stash of cash and I’m not afraid to use it.”
Frankly, I really hope I’m wrong on this one.
An interesting story on government IT popped up earlier today on the UK enterprise IT magazine Computing’s website. The article quotes Andy de Vale, co-founder of the Agile Delivery Network (ADN), saying that UK taxpayers are paying up to 10 times more than is necessary for government IT projects. Interestingly enough, the article doesn’t really say what the reasons for this staggering status quo are – apart from hinting at working with large conglomerates vs. SMEs (“De Vale hopes the ADN will prompt the government to use SMEs on more IT projects, as it reduces the risk of dealing with one small organisation.”). Nice plug, btw.
Well, I think I can have a go (maybe slightly controversial, but what the hell) at running a quick supplement to Computing’s article. My five big reasons why government IT projects cost more than they should and – more often than not – end up in disappointment are (in no particular order):
Red tape
Anyone who’s ever been involved in a public sector project knows that any action has to be preceded by countless reports, evaluations, papers, studies, recommendation documents, etc. All this is, obviously, done in the interest of transparency – which is fair enough, but these things take time and cost money.
Politics – with a capital “P”
Where else should we look for it than in government, right? It is not that difficult to find some of these politically motivated projects – just look through a few back issues of Private Eye. No finger-pointing here, by the way.
Decision by committee
Culture of empowerment seems to be a chronic disease of the private sector. In the public sector, you’re meant to be doing things properly – be inclusive, over-communicate, make all stakeholders around you happy, and make sure you don’t take those important decisions on your own. Or else, if things go pear-shaped, someone might find out, God forbid. Making 20 (am I being too optimistic here?) people agree on an issue takes time, and those external suppliers waiting for your decision don’t come cheap.
Lack of internal resources
This, on the other hand, is a chronic disease of the public sector – especially today, when the money runs low and there’s a freeze on hiring budgets. There’s an easy (albeit expensive) way round it – hire full-time consultants to supplement your workforce – and bloat your project costs big time.
You can’t win with elections
Oh, the charms of working in an environment with a deadline (election) that’s completely independent of your IT department plans! It really doesn’t help if your project is scheduled to last more than whatever time is left before the next one – as you’re bound to end up with, at best, an endless list of costly tweaks and alterations in the project scope, usually half-way through its lifetime.
There – anyone to take up the gauntlet with an alternative list?