Office 365 – the beginning of Microsoft’s death spiral?

Thursday, 30 June 2011

Microsoft has finally launched a Cloud version of its Office (word processing, spreadsheets) product, to a less than rapturous reception:

Dennis Howlett calls Microsoft Office 365 “a dud”.

The Register points out that “Office 365 … been designed to complement desktop Office rather than replace it”.

Mary-Jo Foley, normally a Microsoft fan, posts that “Sorry, folks. This is not Office in the cloud”.

Meanwhile their acquisition of Skype has been slagged off as too expensive, with no synergy, and as the dying act of a company desperate to be seen to be still relevant; the Windows line gets worse and worse press; and some investors are calling for CEO Steve Ballmer to be replaced.

Poor Microsoft! I’m not a fan of their products, although I used to be, and I use many every day, but I do really sympathise with their problems.

Addressing the Office 365 issue first: it is almost impossible to move a traditional (try not to call them legacy!) application to the Cloud. Cloud applications behave differently, so trying to be a Cloud application while retaining the look and feel of the old product is impossible. Microsoft is torn between wanting a decent Cloud product and not wanting to kill off the revenue stream from the old product (or its traditional route to market, the channel). Many traditional software companies have this problem, trying to produce modern Cloud applications while tied to their existing user base, channels and revenue streams: SAP, Sage and even Oracle, to name but a few.

Meanwhile competitors like Google Apps, Zoho and Open/LibreOffice eat Microsoft Office’s lunch, and open source (i.e., free) products like Linux and MySQL make inroads into their server-based product line. Really Simple Systems’ Cloud CRM product is gradually moving off Windows Server onto Linux boxes for a myriad of technical (performance) and commercial (price) reasons. Windows Mobile is being killed by Android and the iPhone. The only bright spot is that as yet there is no credible threat to desktop Windows, although Macs are doing well and a few technologists like me are moving to Linux-based desktops such as Ubuntu.

And Steve Ballmer? I’m continually drawn to the parallel with the handover of the British Prime Ministership and Labour party from Tony Blair to Gordon Brown: charismatic but tarnished leader hands over to gruff company man just before the ship goes down. Something I’ve noticed over the years is that when a software company’s original founders leave, the product vision goes with them and the company becomes just another money-making business, milking revenue from the locked-in user base until it gradually drifts away. The names of Sage, Infor, Oracle and good old Computer Associates come to mind. No wonder Apple investors are worried about what happens after Steve Jobs goes. Microsoft’s user base and revenue streams are so large that it will take a decade, but happen it will.

Meanwhile I predict we’ll see Microsoft taking the traditional software vendor death spiral:

  • Short-term investors force Steve Ballmer out, replaced with a bottom-line-focussed CEO.
  • The new CEO cuts swathes of costs, profits recover, the share price is talked up, the short-term investors sell at a healthy profit, and the new CEO pockets a huge bonus and leaves. Products struggle on, directionless.
  • The CFO takes over as CEO. The user base gradually moves on, more costs are cut, products languish.
  • Rinse and repeat until nobody cares any more.

Amazon outage – the view from the mainstream press

Tuesday, 3 May 2011

When a story that gets the IT world excited actually makes it into the mainstream press, then for once the IT world was right to get excited.

So when The Economist covered Amazon’s and Sony’s problems last week, it was proof that cloud computing (and its teething problems) had broken out of the IT world and into general business consciousness.

It is interesting to note that The Economist’s main recommendation was that SaaS vendors should not rely on just one hosting supplier, as I prescribed in my last blog post.


More on Amazon outage – SLAs are not the point

Wednesday, 27 April 2011

While we await Amazon’s autopsy on why their EC2 PaaS (Platform as a Service) went down the toilet for 36 hours, there has been a lot of talk about making sure that users check their hoster’s SLA (Service Level Agreement) to see what uptime it guarantees. But that is missing the point. SLAs are basically an insurance policy that pays out if your site goes down; but in the same way that life insurance doesn’t bring the insured back to life, the hoster missing its SLA doesn’t bring your site back online. And as with many insurance policies, the small print will always get you when you try to claim.

Meanwhile, let’s just check the maths again on what needs to happen if you want the magic “five nines” of uptime:

36 hours down a year = 99.59% uptime
53 minutes down a year = 99.99% uptime
5 minutes down a year = 99.999% uptime
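The arithmetic is easy to check for yourself; a quick sketch (assuming a non-leap year of 525,600 minutes):

```python
# Convert yearly downtime into an uptime percentage.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 in a non-leap year

def uptime_pct(downtime_minutes):
    """Uptime percentage given total downtime per year, in minutes."""
    return 100.0 * (1.0 - downtime_minutes / MINUTES_PER_YEAR)

for label, minutes in [("36 hours", 36 * 60), ("53 minutes", 53), ("5 minutes", 5)]:
    print(f"{label} down a year = {uptime_pct(minutes):.3f}% uptime")
```

Note that five nines strictly allows about 5.26 minutes of downtime a year, so the 5-minute figure above is the round number.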

No matter what Amazon does to learn from this outage, and no matter what SLA you negotiate with them, there is no way that EC2 is going to get to 99.999%. In fact, there is no way ANY single hosting solution will achieve 99.999%. The only way to get there is to have (at least) two hosting solutions from different suppliers, be they PaaS or your own servers, and to be able to fail over automatically between them.
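The failover decision itself can be as simple as a health check across an ordered list of providers. A minimal sketch, with hypothetical hostnames and a stubbed-out health check (in production this logic usually lives in DNS failover or a load balancer, and a real check would be an HTTP request with a short timeout):

```python
# Cross-provider failover sketch: try each hosting endpoint in order and
# serve from the first one that passes its health check.

def pick_live_host(hosts, is_healthy):
    """Return the first host whose health check passes, or None if all are down."""
    for host in hosts:
        if is_healthy(host):
            return host
    return None

# Hypothetical endpoints on two independent suppliers.
hosts = ["primary.hoster-a.example", "standby.hoster-b.example"]

# Simulate the primary being down, as in the EC2 outage.
down = {"primary.hoster-a.example"}
live = pick_live_host(hosts, lambda h: h not in down)
print(live)  # standby.hoster-b.example
```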


SaaS Escrows – useful or pointless?

Friday, 26 November 2010

I keep getting called by traditional software escrow companies who are looking to move into providing such a service for vendors and customers of SaaS products. However, despite the glossy brochures, I just don’t see how it can work.

With a traditional software escrow, a trusted third party, such as a lawyer or a specialist escrow company like the NCC, would hold a copy of the source code in trust for subscribing customers. If the escrow was triggered, by the vendor going out of business or even simply ceasing to support the product, the customers could apply to the escrow company to have the source code released to them. All this made sense when the software product cost 100k or more, the escrow subscription cost the customer a few hundred, and the customer had a large in-house team of programmers who could, in theory, maintain the inherited code. In practice I can’t think of a single example where an escrow was triggered, and I pity the poor programmer who would have had two million lines of COBOL dumped on him or her and been told to update the system for a new tax rate.

But for SaaS the story is more complex. SaaS customers purchase a service, not a product, and what they would ideally like to know is that should the SaaS vendor go out of business, their application will keep running. To deliver the service you need the whole hardware and software stack: servers, firewalls, load balancers, operating system(s), web server, backend databases, plus a whole pile of add-ins for charting, PDF generation and so on. And of course an up-to-date copy of the data, ideally synced in real time. To make sure that all this would work in the event of an escrow being triggered, the whole stack would have to be built and tested, otherwise the chances of it working on the day are minimal. And, just to make life more interesting, whereas traditional software worked on a six-month or yearly release cycle, SaaS systems get updated much more frequently, almost every day in our case. So what you end up with is a replicated datacentre that has to be tested every week to make sure that it still works. Which is basically what we at Really Simple Systems do, keeping a complete system on hot standby for instant switchover.

Doing all this is a lot more work than keeping a CD in a safe, and that cost would have to be borne by either the SaaS vendor or their customers. As most customers are paying very little for their SaaS solution (because that’s the point!), paying the same again for escrow protection doesn’t seem great value. And as customers aren’t clamouring for such protection, it is hard to see why SaaS vendors would stump up a lot of money for something their customers don’t see the value in.

A better solution is for SaaS vendors to put in their contracts that should their businesses fail, then the data legally belongs to the customer. After all, it is the data that is the most important asset in most systems – once you have the CRM data, then moving into another CRM system is not such a large task and could be done within a few days, even for the largest systems.

Which (he said smugly) is exactly what we do here at Really Simple Systems.
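Data ownership is only meaningful if the data can actually leave in a plain format. An illustrative sketch of the kind of CSV round trip that makes CRM records portable (the field names are hypothetical, not any vendor’s schema):

```python
# Sketch: export CRM contacts to CSV, the lowest-common-denominator format,
# then read them back as a new system's import step would.
import csv
import io

contacts = [
    {"name": "Ada Lovelace", "company": "Analytical Engines Ltd", "email": "ada@example.com"},
    {"name": "Charles Babbage", "company": "Analytical Engines Ltd", "email": "cb@example.com"},
]

# Export: dump every contact to CSV.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "company", "email"])
writer.writeheader()
writer.writerows(contacts)

# Import: any other CRM that reads CSV can ingest the same records.
restored = list(csv.DictReader(io.StringIO(buf.getvalue())))
assert restored == contacts
```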


Really Simple Systems goes Free

Thursday, 21 October 2010

This week Really Simple Systems launched its free CRM product, Really Simple Systems Free Edition. Freemium products are a hot topic these days, with commentators divided as to whether they will actually make money rather than just take market share, so I thought I’d run through the logic of what we’re trying to do and why.

The decision to launch a Freemium product was driven by three issues:

  1. We weren’t really making any money from one- or two-user systems by the time we had run through a pre- and post-sales process
  2. We wanted to gain a larger market share in the US, where we have some customers but nothing like the numbers we have in the UK, and we don’t have $1m to throw at offices and marketing
  3. If we didn’t do this, somebody else would

We could probably have sorted out 1) by streamlining our sales process. The traditional solution to 2) is to raise cash from venture capital, but in my experience VCs add a huge amount of pain and cost to small companies and no strategic value, then walk away with a disproportionate amount of the cake at the end. And as to 3), there is no alternative but to be there first, if you believe, as I do, that cloud software will become a commodity offering with correspondingly lower and lower prices.

There is also the challenge that a free product really has to be easy to use, as we can’t afford to talk people through it. And since the whole ethos of the company is to make CRM really simple, doing this will force us to improve the ease of use of the core product, which will be good for everyone.

So, it will be a learning process for us. We need to work on the documentation, help videos and data load process to help people pick up the product more easily. Our target is to sign 10,000 users in twelve months, which would make us the largest (bar none) supplier of cloud CRM systems in the world. All we’ve done so far is put the product on our site and issue a press release, and we’re already signing up 15 users a day, so I think the 10,000 might if anything be on the low side.

As we said in the press release though, “We’re going to make life tough not just for conventional CRM vendors, but for Cloud vendors with high prices and high cost bases.”

Watch this space!


Rigid Procedures push up government IT costs

Wednesday, 11 August 2010

Computer Weekly ran a story on how the UK Government was looking to cut IT costs, and as the Government is currently also making a lot of (good!) noises about wanting small businesses to supply its IT solutions, I sent the following letter to Computer Weekly, which was published this week:

Dear Editor,

I was interested to read your article “How will suppliers cut government IT costs?” (Computerweekly.com/241973.htm). There are three reasons why Government IT costs will always be higher than those in the private sector.

Firstly, government purchasers always demand systems that have been highly modified to their perceived unique application and security requirements, and are reluctant to compromise by accepting off-the-shelf commercial systems. Secondly, tendering processes that are designed to purchase expensive solutions will always end up with expensive solutions: the cost of tendering will rule out cheaper vendors that might have had acceptable solutions. Thirdly, the contracting terms for doing business with the government are rigid and more expensive for suppliers, and those costs are passed on.

This is particularly true for cloud applications, where the economy of scale in delivering standardised solutions to thousands of users is rapidly reducing the cost of ownership. For example, a private company can decide to implement a cloud CRM solution for a nominal monthly fee by simply filling out a web form and entering a credit card. Government users cannot currently benefit from this revolution; to take advantage of such innovations, they need to move away from rigid tendering and vendor evaluation processes towards more open-minded search and selection.


Finally, Less is More

Tuesday, 15 June 2010

My favourite rag The Economist has a great little editorial on how the IT industry is finally coming around to the idea that less is more. The article says that consumers are suffering feature fatigue as vendors pile on new features of limited use that just make the products less easy to use, and that “frugal” innovation, delivering equivalent products for radically less cost, is breaking out of the new economies into the old economies.

All of which is music to my ears. But why does it take a recession for people to realise that products that have everything you need (and no more), and that are easy to use, are actually better than products chock-full of features you’ll never use that just make them slower and more complex? And will we all revert to our lazy ways once the good times come around again?