I'll wait for a moment while that sinks in....
And this was on its way down: I caught the load listed at 252 at its peak for the 1-minute average! I took this screenshot because it's where the 15-minute average maxed out. I had the 5-minute average over 100 briefly. The server is no slouch, either: 4 real cores + hyperthreading, 8 GB RAM. But then, it's disk I/O that's causing this.
Needless to say, I'll be calling support in the morning (this is from just a few minutes ago). I've already checked the usual suspects (attack blocker, etc.) and didn't see anything that would cause this kind of mayhem. Support already applied the auto-vacuuming patch. That seemed to fix everything... for just over a day.
Now, for honesty's sake, I'll disclose that it was up that high because I was running du on the box. But in my defense, the reason I was running du is that the 5-minute load was already up in the 40s and the 1-minute in the 60s. It looked like it was all due to disk I/O, and I wanted to see where it was coming from. So I may as well post the interesting results from that du:
/var/spool/squid 11 GB (I'm not using Web Cache right now on any rack)
/var/lib/postgresql 17 GB (the auto-vacuum patch was just recently installed, but disk use is still high here)
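In case anyone wants to check the same thing on their own box, the general idea is a depth-limited du piped through sort; the exact flags below are just my suggestion, adjust to taste:

  # show the biggest consumers under /var, largest first (run as root so du can read everything)
  du -xh --max-depth=2 /var 2>/dev/null | sort -rh | head -20

The -x flag keeps du on one filesystem so it doesn't wander into other mounts while the box is already struggling with disk I/O.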