> I think if you have much more than that you want better tools to manage them.
>
> When I was doing Unix workstations I had a script for "run this command on 23 workstations". Sometimes the command was to open a shell and run the software install app, because the software installer sometimes went wrong, and with 23 workstations you could deal with the one or two that might fail each time manually.

Sounds Old School to me :)

> I suspect there are relatively few organisations so large that they genuinely need more than one server doing the same thing for performance reasons these days (redundancy yes, but performance?). Sure, the Googles, Facebooks and Amazons of this world probably do need a large number of identically configured servers, but then "identically configured" scales a lot easier.

Absolutely agree with this. Not that I am 100% into box setup, but I'm sure these days you SSH into a load-balanced network rather than a single server.

> For Desktop management I've mostly only dealt with numbers in the 100s, and at that point they all want to be identical, or in a very limited number of configurations.
>
> I worked closely with SUN at one point, and was involved with managing lots of systems for them. Their approach was pretty straightforward, and internally they had corporate JumpStart profiles and recommendations, but of course as an IT company they were heavily involved in managing exceptions to these configurations. The tools used weren't amazingly sophisticated, though, and most (all?) came on the regular Solaris install media and the BigAdmin website.

Were they managing exceptions more than using recommendations? Would sound about right. :)

> HP used to have some nice (but expensive) tools to group systems, stage roll-outs, queue up actions for machines that are currently not available, and the like (mostly Microsoft Windows and HP-UX, but they handled some other systems as well). These were surprisingly well polished, so I assume they were widely used at the time.
>
> Recently many tools seem to assume your environment is more anarchic than that: they have some sort of "ideal" configuration, try to identify systems that aren't ideal, and migrate them to the Utopian view. HP Labs in Bristol were also doing some security tools which worked in a similar fashion, which seems to assume that big corporate networks will always be a mess and you just need to know how bad it is.

HAHA I like it :) Brilliant.

> That said, good scripting is essential for this sort of stuff, and the first thing you learnt for big Microsoft Windows desktop deployments was how to run a script when a user logs in, although a lot of what those scripts used to do is now "built-in". I think the distinction here is scripting versus the command line: the command line is cool, but scripts are what allow you to easily specify how to handle variation, and failure.

Yep - I highly doubt I'll ever be writing any long scripts directly at the command line though - more just running them. However, simple bash scripting has become essential for getting me out of web server holes lately!
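
For what it's worth, the kind of thing I mean is nothing fancy - just a loop over a host list that remembers which machines failed so you can deal with them by hand afterwards. A rough, untested sketch (the hostnames and the command here are made up, swap in your own):

    #!/bin/bash
    # Run one command on a list of workstations over SSH, and note
    # any host where it failed so those can be fixed manually.

    HOSTS="ws01 ws02 ws03"    # made-up hostnames - replace with your own
    CMD="uptime"              # made-up command to run on every host

    failed=""
    for host in $HOSTS; do
        echo "== $host =="
        if ! ssh -o ConnectTimeout=5 "$host" "$CMD"; then
            failed="$failed $host"
        fi
    done

    if [ -n "$failed" ]; then
        echo "Failed on:$failed" >&2
        exit 1
    fi

Nothing clever, but it covers the "one or two that might fail" case without babysitting every login.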