Google cans Nexus One store

Saturday, 15 May 2010

I’m trying hard not to say “I told you so” having just read that Google is abandoning its attempts to sell its Android ‘phone, the Nexus One, direct. I tried to purchase a Nexus One and gave up, stymied by bugs on the Google site (Google’s Checkout payment acceptance system was down in the UK for days) and disgusted by the haughty, we-don’t-want-to-talk-to-customers attitude of the company. I purchased an HTC Desire from my local ‘phone shop instead; they were helpful, answered my questions, and the device is probably superior to Google’s.

The whole episode reinforces the image of Google as a bunch of great engineers who don’t seem to get out much in the real world.


Google changes channel for Nexus

Wednesday, 28 April 2010

Just when I thought I was the only person who wasn’t impressed by Google’s attempt to sell their new Android ‘phone, the Nexus One, direct (see umpteen previous posts), I was amused to read that Google has given up its hopes of shifting serious quantities of hardware by selling (but not supporting) direct, and will instead be selling through the traditional channels of network providers and retail shops.

Maybe Google has realised that selling retail needs skills and resources that they don’t have.


Nexus One – not for the man in the street – Part 3

Thursday, 8 April 2010

It looks like we now know why I (and I’m sure many other people in the UK) can’t buy a ‘phone from Google – their Google Checkout system has been down over the past few days.

If I were a merchant using Google Checkout I would not be a happy bunny.

Will Google steal the crown for worst customer service from infamous leaders Tiscali, Egg et al.?


Nexus One – not for the man in the street

Monday, 11 January 2010

I was interested to see that existing and potential purchasers of Google’s new iPhone killer, the Nexus One, are unhappy with the level of customer support they get from the Mountain View boys – apparently there is only email support, no telephone number to call.

If Google want to challenge competitors such as Microsoft and mobile phone operators such as AT&T and Vodafone, they will need to turn from being a B2B company that tries to automate away all customer interaction to keep costs down, to being a B2C company that looks after its customers. Not that the telcos are great at the latter, but have you ever tried to call Google with an Adwords problem? Dream on! When Microsoft launched Bing and wanted advertisers to go with them as well as/instead of Google, they really tried hard to woo customers, and part of that was a call centre to help advertisers set up their campaigns.

Google may be the new kids in town – but when it comes to customer service, they are still kids.


In Search of Five Nines

Wednesday, 2 September 2009

Our hosted CRM system, Really Simple Systems, has been running at 99.99% uptime for the last three years, and try as we might we just haven’t been able to achieve the magic ‘five nines’ – 99.999%. In plain English, 99.99% uptime means that the system can be unavailable for a maximum of about one hour a year; 99.999% means down for only about five minutes a year. So Google’s outage of nearly two hours yesterday, plus previous outages over the past few months, is embarrassing for Google, will make business users wonder if GMail is suitable for business, and is generally bad PR for Cloud Computing.
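For anyone who wants to check those figures, here’s a quick back-of-the-envelope sketch of the arithmetic, assuming a 365-day year:

```python
# Downtime budget for a given uptime percentage, assuming a 365-day year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

for uptime in (99.99, 99.999):
    allowed_downtime = MINUTES_PER_YEAR * (100 - uptime) / 100
    print(f"{uptime}% uptime allows about {allowed_downtime:.1f} minutes of downtime a year")

# 99.99%  -> about 52.6 minutes a year (roughly an hour)
# 99.999% -> about 5.3 minutes a year
```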

If I look back at the last three years to see what caused our outages, it’s never ‘our’ fault – that is, our servers and software didn’t go down; the datacentre did. The reasons so far have been: power outage at the datacentre (and the standby power supplies failed); somebody unplugged the main router by mistake; a bug in the router software caused the other routers to crash. But customers don’t care whose fault it is, they just want to be able to access the system all day, every day.

We’ve had a failover system operating in another datacentre for some years, but it has been difficult to switch all the customers to it: the data hasn’t been 100% up to date, and we have to manually tell them the URL so they can log on. So unless an outage can’t be fixed within 30 minutes (and so far they all have been), it hasn’t been worth switching.

Having spent six months looking at the various options for insulating us from datacentre failures, we found only one that looked practical: build a complete duplicate failover system in another datacentre; replicate the data there in real time; and be able to switch from the main datacentre to the failover datacentre instantly.

It’s expensive to build a redundant datacentre, but as we get bigger, so does the inconvenience that an outage causes our customers.

So that’s what we’ve done. The main servers will be in a datacentre in Manchester, with the failover servers and DNS hosting in Maidenhead (200 miles south). The data is replicated in real time (using MySQL replication). If the main datacentre goes down we can switch the DNS to the failover datacentre (almost) instantly; if the failover datacentre goes down, the DNS will stay pointing to the main datacentre anyway.
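To make that ‘almost instant’ switch concrete, here’s a minimal sketch of the kind of watchdog involved. The health-check URL and the switch_dns_to_failover() helper are hypothetical – the real switch would go through whatever API or control panel the DNS host provides, and a low TTL on the DNS records is what makes it take effect quickly.

```python
import time
import urllib.request
import urllib.error

# Hypothetical endpoint and thresholds – the real URL and DNS-switch
# mechanism depend on the hosting and DNS provider.
PRIMARY_HEALTH_URL = "https://manchester.example.com/health"
CHECK_INTERVAL_SECONDS = 30
FAILURES_BEFORE_SWITCH = 3  # avoid flapping on a single blip


def primary_is_up() -> bool:
    """Return True if the main datacentre answers its health check."""
    try:
        with urllib.request.urlopen(PRIMARY_HEALTH_URL, timeout=10) as response:
            return response.status == 200
    except (urllib.error.URLError, OSError):
        return False


def switch_dns_to_failover() -> None:
    """Placeholder: repoint the DNS records at the failover datacentre.

    In practice this would call the DNS host's API (or page an engineer).
    """
    print("Primary datacentre looks down - switching DNS to failover")


def watchdog() -> None:
    consecutive_failures = 0
    while True:
        if primary_is_up():
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures >= FAILURES_BEFORE_SWITCH:
                switch_dns_to_failover()
                break
        time.sleep(CHECK_INTERVAL_SECONDS)


if __name__ == "__main__":
    watchdog()
```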

If anybody can see a hole in this logic, let me know!