Robin Cornelius wrote:
>
> This has made me think, is there any way to mitigate a swap storm, or
> a memory aggressive application to keep it under some kind of control.

The customary approach is to limit memory per process and processes per user, and also to cap open files and other key resources per process. See "man ulimit" and /etc/security/limits.conf or similar (rough sketch in the P.S. below). Most distros default these to "unlimited".

Such things are vital in some circumstances, but for general use they often create more issues than they solve, so folks tend to stick with "unlimited" and stop using software that misbehaves.

Quite a lot of server-side applications use these kinds of limits to contain the scope of mistakes. Apache, for example, doesn't want stray PHP scripts using lots of memory, so a limit is applied (8MB per child spawned by default on most distros; see the P.P.S.).

In general, limiting individual processes to about 50-90% of physical memory should stop anything going too far astray, depending on how you use your computer. But I rarely have that sort of issue on GNU/Linux.

HP-UX used to have quite sophisticated process grouping: you could put users into groups (other things could be grouped too) and then place similar limits on the group as a whole, although I seem to recall it was a premium feature over and above your regular HP-UX fees. It was a cool feature, but no one ever asked me to use it in anger!

Simon, who stuck mod_rewrite in a tight loop yesterday, but still got enough CPU to type /etc/init.d/apache2 stop (just).
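P.S. For anyone wanting to try this, here's a rough sketch of the sort of entries I mean. The group name "@users" and the values are made up for illustration; check "man limits.conf" and "man bash" (the ulimit builtin) for the details on your distro.

    # /etc/security/limits.conf (read by pam_limits at login)
    # <domain>  <type>  <item>   <value>
    @users      hard    as       524288   # address space, in KB (~512MB)
    @users      hard    nproc    100      # max processes per user
    @users      hard    nofile   256      # max open files per process

    # Or per-shell, e.g. in a wrapper script around a greedy app:
    ulimit -v 524288   # cap virtual memory at ~512MB (bash uses KB units)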
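P.P.S. The PHP limit I mentioned lives in php.ini, or can be pinned per-vhost in the Apache config when running mod_php. The 8M value is just the old default; tune to taste.

    ; php.ini
    memory_limit = 8M

    # Or in an Apache <VirtualHost> or <Directory> block (mod_php),
    # where php_admin_value stops individual scripts overriding it:
    php_admin_value memory_limit 8M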