Simon Avery wrote:
> That said, I'm sure there's a very big market for people who are very
> happy with 99% uptime

"Come and have a web server that's down 3.5 days a year" isn't a good sales pitch. But I think you can achieve better than 99% relatively easily, without massive investment.

Working somewhere that offers colo in Exeter as a sideline, in a very small way, I'd prefer to be in a bigger computing facility with better aircon and better power. Economies of scale are critical here.

On our main web server....

 uptime
  12:29:54 up 521 days, 17:20,  2 users,  load average: 1.19, 0.73, 0.58

The long overdue upgrade to Etch will stop that.

Probably a bigger issue is that whilst 521 days looks good, this is just uptime: it only reflects when we last lost power to that rack. Service availability hasn't been 100% for 521 days (I wish). Not least, I've rebooted the traffic shaper a couple of times to keep it patched and on a supported release. So even a straight web server has a switch, a traffic shaper, and a router in front of it, all of which can contribute to downtime (on top of loss of power, aircon, or network connectivity).

So the reason for the apparent over-specification of hosting facilities is that downtime doesn't add up nicely, and big customers will want more than 99% expected uptime (since, like us, they'll have quite enough planned downtime for things like upgrades).

A lot, of course, hangs on what happens when you don't achieve the SLA. Our customers understand it is best effort, and we don't have penalty clauses for failing to achieve a given availability. If we did, we'd have to charge a lot more, and we'd have a lot more redundancy to make sure we never gave them any money back.
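
As a rough illustration of why downtime "doesn't add up nicely", here is a minimal Python sketch. The 99.5% per-device figure and the helper names are invented for the example, not anything we actually measure: the point is that for a request to succeed the router, switch, traffic shaper and server all have to be up at once, so their availabilities multiply and the chain is always worse than any single box.

# Rough sketch of availability arithmetic; the per-device figures are invented.
HOURS_PER_YEAR = 365 * 24

def downtime_hours_per_year(availability):
    """Expected downtime per year for a given availability (0..1)."""
    return (1 - availability) * HOURS_PER_YEAR

def serial_availability(devices):
    """Overall availability when every device in the chain must be up."""
    overall = 1.0
    for a in devices:
        overall *= a
    return overall

# 99% availability is roughly 3.65 days of downtime a year.
print(downtime_hours_per_year(0.99) / 24)           # ~3.65

# Router, switch, traffic shaper and server, each 99.5% available:
chain = serial_availability([0.995, 0.995, 0.995, 0.995])
print(chain)                                        # ~0.980, i.e. about 98%
print(downtime_hours_per_year(chain) / 24)          # ~7.2 days a year

On those invented numbers the chain alone uses up a "better than 99%" target before you even count loss of power, aircon, or planned upgrades.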