On 20/08/13 19:26, Simon Avery wrote:
>> I managed to find a rather nice looking PC, tied it to my bicycle with 3m
>> network cables and cycled it home. =)
>>
> That's a very pleasing image you paint. Steptoe with pedals. Love it.
>
>> Always the type of person who likes to try something new, I was wondering
>> if it is possible to install Linux on to a remote machine. By remote I mean
>> another computer on my LAN. And if it is possible what sort of things do I
>> need to do to achieve this?
>>
> You can't install an OS onto a remote machine you have no physical access
> to; at least not a bare-metal box that you haven't previously configured to
> network boot, and even then it's tricksy. And really, it's just as well, or
> somebody on your LAN might decide to install an OS over your existing
> install while you're off for lunch.
>
> If you do have physical access, then it's CD, USB or PXE to install an OS.
> BA gives more info about the latter. If it's a VM, then your hypervisor
> will possibly have a remote console that'll allow you to do what you need.
>
>> This lovely little PC will be my future file server so any tips on distro
>> choices would be great but I fear I will lean towards Debian... again...
>>
> I'd probably use Debian too. But FreeNAS might be worth a twirl, I know
> some folk who like it for just acting as a NAS. (Although some tend to add
> so many plugins they might as well have installed Debian in the first
> place.)

There is a (big) exception to the no-physical-access gotcha, although it
won't affect you in any way, Daniel, unless you're a *much* stronger cyclist
than you're letting on: most servers these days have an out-of-band
management interface, typically called an IPMI in Intel-compatible land.
Big-ass RISC boxes from IBM/Oracle/HP etc. have similar arrangements using
SPs (service processors). These let you reach a remote box that is, for all
intents and purposes, "off": a low-level management interface, normally on a
completely separate management VLAN/subnet from the usual network, lets you
configure boot orders, attach an ISO image to boot from, and so on. However,
it's fair to guess that there is exactly a 0% chance of your "rather nice
looking PC" having an IPMI. (There's a taste of what one looks like in use
in the PPS below.)

There are also much more old-school, more complicated out-of-band access
methods, such as a good old serial line, often run from a serial multiplexer
tucked away in a corner of the server room with a single dial-in modem still
attached (yes, this is still in use by many sysadmins even in 2013!)

There is also a big exception to the DHCP issue I mentioned, which I should
have come clean on: it's just more complicated, and normally reserved for
much larger orgs with a lot of computers to deal with. I used to do this at
the NHS, for example, where we had thousands of Dells coming in and out per
year. A third party picked up our pallets of PCs from Dell and imaged them
(with our builds) before bringing them to the hospital and moving them to
the correct offices (I know this seems strange, but IT management made us do
a lot of weird things we'd have preferred to keep in house). We'd get a
heads-up that the boxes were in position and ready to be turned on, and we'd
take a CSV from Dell with all the new boxes' MAC addresses in it.

I'd plug that into my config files, linking each box into a first-build VLAN
with the relevant MACs assigned static IPs, so the boot/install/config
daemons knew which box was which, and which subsequent department-specific
roles/software/updates/etc. could be reliably deployed in parallel to each
of them.
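To give a flavour of the "plugging in" (all the names and addresses here are
invented, and ISC dhcpd is just a stand-in for what we actually ran, which
had rather more moving parts), the per-box end of it looked roughly like:

# /etc/dhcp/dhcpd.conf -- hypothetical first-build subnet
subnet 10.50.0.0 netmask 255.255.255.0 {
    option routers 10.50.0.1;
    next-server 10.50.0.10;       # the TFTP/install server
    filename "pxelinux.0";        # PXE boot loader to hand out
}

# one host stanza per MAC from the vendor's CSV,
# pinning each new box to a known static address
host ward7-pc01 {
    hardware ethernet 00:11:22:33:44:55;
    fixed-address 10.50.0.101;
}
host ward7-pc02 {
    hardware ethernet 00:11:22:33:44:56;
    fixed-address 10.50.0.102;
}

Generating stanzas like those from the CSV is a few lines of awk/sed, and
everything downstream can then key off the known fixed addresses.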
A backend logger watched progress, assimilated reports and, when all the
automated setup was done, prodded the install servers to kick each box off
the first-build VLAN and back onto the regular network to start daily use.
Dell have discontinued it now, but they originally provided a tool to
reflash/customise the BIOS from Linux, and, being a huge client, we had a
special internal version from them that we'd use at the end of the config
process to reflash the workstation BIOSes, resetting the boot order and
disabling network boot completely.

So there was quite a large, cumbersome process involved, but with the right
infrastructure and procedures it is possible to completely automate the
install/upgrade/deployment of remote machines, even when they're dumb PCs
with no IPMI and no remote shell tools, running Windows. Before I was
brought in (primarily to get this system fully operational) it was *fully*
automated: the third party even had access to the system accepting the new
target MACs, so the IT guys didn't have to do even that manual step. Funnily
enough, the first thing I did was kick them off, after several copy/paste
errors resulted in already-deployed assets getting sucked back into the
build VLAN and re-imaged... multiple times.

Stay away from FreeNAS - it's BSD-based, and that will just cause you more
headaches (unless you want to learn BSD for fun, of course, in which case
you should just be installing straight FreeBSD anyway). The best thing about
FreeNAS is relatively painless ZFS support, but if that's the draw, just go
with the freebie Nexenta version instead: a proper Solaris kernel with
proper ZFS support, but a familiar Debian userland with apt-get.

Simon's completely right about both people going crazy with FreeNAS plugins
(install ALL the things!) and this whole remote install/deployment thing
being millions of times easier once your infrastructure is mostly
virtualised - it really is. As long as the new PC is up to it, I'd recommend
that you install it primarily as a VM host, and then structure any other
services you want (fileserver, etc.) as VM instances on top of it. There's a
rough sketch of that in the PS below.

Regards
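PS: In case it's useful, "VM host with services as guests" on Debian looks
roughly like the below with KVM/libvirt (the guest name, sizes and ISO path
are made up for illustration, and virt-manager will do the same job
point-and-click):

# one-off: the KVM/libvirt stack on the host
apt-get install qemu-kvm libvirt-bin virtinst

# carve out a small guest to act as the fileserver
virt-install \
  --name fileserver \
  --ram 1024 \
  --vcpus 1 \
  --disk path=/var/lib/libvirt/images/fileserver.img,size=20 \
  --cdrom /srv/iso/debian-netinst.iso \
  --graphics vnc \
  --network bridge=br0

From there the fileserver is just another Debian box that you can rebuild or
replace without ever touching the host.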
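PPS: as promised above, a taste of the IPMI thing for anyone whose box
*does* have one. With the BMC sat on a management subnet, day-to-day remote
work is mostly ipmitool (address and credentials invented here):

# power state and control, over the management LAN
ipmitool -I lanplus -H 10.0.1.50 -U admin -P secret chassis power status
ipmitool -I lanplus -H 10.0.1.50 -U admin -P secret chassis power on

# tell the box to PXE-boot on its next start
ipmitool -I lanplus -H 10.0.1.50 -U admin -P secret chassis bootdev pxe

# serial-over-LAN console, so you can watch an install remotely
ipmitool -I lanplus -H 10.0.1.50 -U admin -P secret sol activate

Attaching an ISO remotely is usually a vendor extra (DRAC, iLO and friends)
driven from their web interface rather than from ipmitool itself.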