I too have wondered this and was motivated by your question!
I've collected how close I could come to each of the queues you listed with some information related to each. I welcome comments/feedback, any improvement to monitoring makes things easier to manage!
net.core.somaxconn
net.ipv4.tcp_max_syn_backlog
net.core.netdev_max_backlog
$ netstat -an | grep -c SYN_RECV
This shows the current global count of connections in the queue. You can break it up per port and put it in exec statements in snmpd.conf if you want to poll it from a monitoring application.
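A sketch of that per-port breakdown (it parses a captured sample here so the awk logic is visible; on a live box you would feed it `netstat -an` directly):

```shell
#!/bin/sh
# Sketch: count SYN_RECV connections per local port.
# The sample below stands in for live `netstat -an` output.
sample='tcp        0      0 10.0.0.5:80    192.0.2.1:51515    SYN_RECV
tcp        0      0 10.0.0.5:80    192.0.2.2:51516    SYN_RECV
tcp        0      0 10.0.0.5:443   192.0.2.3:51517    SYN_RECV'

# Take the last ":"-separated piece of the local address (the port),
# tally per port, and sort busiest first.
printf '%s\n' "$sample" |
  awk '$6 == "SYN_RECV" { n = split($4, a, ":"); count[a[n]]++ }
       END { for (p in count) print count[p], p }' |
  sort -rn
```

Each output line is `count port`, so the busiest listener's half-open backlog is at the top.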
From:
netstat -s
These counters show how often packets are being serviced from the backlog queue, dropped from it, or squeezed because of buffer pressure:
146533724 packets directly received from backlog
TCPBacklogDrop: 1029
3805 packets collapsed in receive queue due to low socket buffer
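The same counters can be read directly from /proc/net/netstat (the file `netstat -s` and nstat parse); a sketch pulling out TCPBacklogDrop:

```shell
#!/bin/sh
# Sketch: extract the TCPBacklogDrop counter from /proc/net/netstat.
# The TcpExt: section is a header line of counter names followed by a
# line of values, so we pair names with values by column position.
backlog_drops=$(awk '/^TcpExt:/ {
    if (!have_header) { for (i = 1; i <= NF; i++) name[i] = $i; have_header = 1 }
    else              { for (i = 1; i <= NF; i++) val[name[i]] = $i }
  }
  END { print val["TCPBacklogDrop"] + 0 }' /proc/net/netstat)
echo "TCPBacklogDrop: $backlog_drops"
```

A rising value between polls means the socket backlog is overflowing, so this is the number to graph/alert on.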
fs.file-max
From:
http://linux.die.net/man/5/proc
$ cat /proc/sys/fs/file-nr
2720 0 197774
This (read-only) file gives the number of files presently opened. It
contains three numbers: The number of allocated file handles, the
number of free file handles and the maximum number of file handles.
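Those three fields make the utilization calculation straightforward; a sketch:

```shell
#!/bin/sh
# Sketch: current file-handle utilization against fs.file-max, using the
# three fields of /proc/sys/fs/file-nr described above
# (allocated, free, maximum).
read allocated free_handles maximum < /proc/sys/fs/file-nr
awk -v a="$allocated" -v m="$maximum" \
    'BEGIN { printf "file handles: %d of %d (%.2f%% used)\n", a, m, 100 * a / m }'
```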
net.ipv4.ip_local_port_range
If you can build an exclusion list of services (netstat -an | grep LISTEN) then you can deduce how many connections are being used for ephemeral activity:
netstat -an | egrep -v "MYIP.(PORTS|IN|LISTEN)" | wc -l
Should also monitor (from SNMP):
TCP-MIB::tcpCurrEstab.0
It may also be interesting to collect stats about all the states seen in this tree(established/time_wait/fin_wait/etc):
TCP-MIB::tcpConnState.*
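The range side of that calculation can be read straight from /proc; a sketch (compare your ephemeral-connection count from the netstat pipeline above against this total):

```shell
#!/bin/sh
# Sketch: size of the configured ephemeral port range.
# /proc/sys/net/ipv4/ip_local_port_range holds two numbers: low and high.
read low high < /proc/sys/net/ipv4/ip_local_port_range
range=$((high - low + 1))
echo "ephemeral ports: $low-$high ($range total)"
```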
net.core.rmem_max
net.core.wmem_max
You'd have to dtrace/strace your system for setsockopt requests; I don't think stats for these requests are tracked otherwise. From my understanding this isn't really a value that changes: the application you've deployed will probably ask for a standard amount. I think you could 'profile' your application with strace and configure this value accordingly. (discuss?)
net.ipv4.tcp_rmem
net.ipv4.tcp_wmem
To track how close you are to the limit, you would have to look at the average and max of the tx_queue and rx_queue fields from (polled on a regular basis):
# cat /proc/net/tcp
sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode
0: 00000000:0FB1 00000000:0000 0A 00000000:00000000 00:00000000 00000000 500 0 262030037 1 ffff810759630d80 3000 0 0 2 -1
1: 00000000:A133 00000000:0000 0A 00000000:00000000 00:00000000 00000000 500 0 262029925 1 ffff81076d1958c0 3000 0 0 2 -1
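A sketch of pulling the max of those fields (the fifth column is tx_queue:rx_queue in hex, so the loop converts before comparing):

```shell
#!/bin/sh
# Sketch: report the largest tx_queue and rx_queue seen in /proc/net/tcp.
maxtx=0
maxrx=0
while read -r sl laddr raddr st queues rest; do
  [ "$sl" = "sl" ] && continue            # skip the header line
  tx=$(( 0x${queues%%:*} ))               # hex tx_queue -> decimal
  rx=$(( 0x${queues##*:} ))               # hex rx_queue -> decimal
  [ "$tx" -gt "$maxtx" ] && maxtx=$tx
  [ "$rx" -gt "$maxrx" ] && maxrx=$rx
done < /proc/net/tcp
echo "max tx_queue=$maxtx max rx_queue=$maxrx"
```

Run it from cron and compare the values against your net.ipv4.tcp_wmem / tcp_rmem maximums.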
To track errors related to this:
# netstat -s
40 packets pruned from receive queue because of socket buffer overrun
Should also be monitoring the global 'buffer' pool (via SNMP):
HOST-RESOURCES-MIB::hrStorageDescr.1 = STRING: Memory Buffers
HOST-RESOURCES-MIB::hrStorageSize.1 = INTEGER: 74172456
HOST-RESOURCES-MIB::hrStorageUsed.1 = INTEGER: 51629704
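For alerting you'd turn those two SNMP values into a percentage; with the sample numbers above:

```shell
#!/bin/sh
# Buffer pool utilization computed from the sample hrStorage values above.
used=51629704
size=74172456
awk -v u="$used" -v s="$size" \
    'BEGIN { printf "%.1f%% of buffer pool used\n", 100 * u / s }'
```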
logrotate is used by the system to rotate logs, so you have 2 choices. You can either incorporate the rotation of these app logs into the system's rotations, or set up your own and run them either manually or from the root user's crontab (assuming the Rails app is run as root, given its directory is /root/...).
System rotation
To set up a log rotation within the system's pre-existing ones, simply add a new file to the directory /etc/logrotate.d. Call it railsapp.conf. I'd use the other examples there to construct it; also consult the logrotate man page.
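As a sketch of such a file (the log path is an assumption; point it at wherever your Rails app actually writes its logs):

```
# /etc/logrotate.d/railsapp.conf (hypothetical path to the app's logs)
/root/railsapp/log/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    copytruncate
}
```

copytruncate avoids having to signal the Rails process to reopen its log file; drop it if your app can reopen logs on rotation.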
User rotation
If you want to run your own instance of logrotate you only have to provide it with command-line switches to do so.
- First make a copy of /etc/logrotate.conf to /root/rails_logrotate.conf.
- Edit the file so that it has the log rotation configured the way you want (i.e. keep all logs, rotate weekly, etc.)
Run it
# 1st time
$ logrotate -d -f -s $HOME/my_logrotate.state logrotate.conf
# afterwards
$ logrotate -d -s $HOME/my_logrotate.state logrotate.conf
If things look OK you can re-run these commands without the -d switch. That switch is for debugging purposes only and won't actually do any of the tasks; it merely shows you what it WOULD do.
$ logrotate -s $HOME/my_logrotate.state logrotate.conf
You could also use the -v switch to make it verbose, similar to the output seen when using the -d switch.
Example
Start with this log file.
$ dd if=/dev/zero of=afile bs=1k count=10k
10240+0 records in
10240+0 records out
10485760 bytes (10 MB) copied, 0.0702393 s, 149 MB/s
$ ll afile
-rw-rw-r-- 1 saml saml 10485760 Aug 6 14:37 afile
$ touch -t 201307010101 afile
$ ll afile
-rw-rw-r-- 1 saml saml 10485760 Jul 1 01:01 afile
Now run logrotate
$ logrotate -v -f -s $HOME/my_logrotate.state logrotate.conf
reading config file logrotate.conf
reading config info for /home/saml/afile
Handling 1 logs
rotating pattern: /home/saml/afile forced from command line (1 rotations)
empty log files are rotated, old logs are removed
considering log /home/saml/afile
log needs rotating
rotating log /home/saml/afile, log->rotateCount is 1
dateext suffix '-20130806'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
glob finding old rotated logs failed
renaming /home/saml/afile to /home/saml/afile-20130806
creating new /home/saml/afile mode = 0664 uid = 500 gid = 501
Check the results
$ ll afile*
-rw-rw-r-- 1 saml saml 0 Aug 6 14:40 afile
-rw-rw-r-- 1 saml saml 10485760 Jul 1 01:01 afile-20130806
Weekly Cron
To make this run every Sunday you could create the following crontab entry for the root user.
$ crontab -e
Add the following lines:
# Example of job definition:
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# | | | | |
# * * * * *  command to be executed
0 0 * * sun logrotate -v -f -s $HOME/my_logrotate.state $HOME/logrotate.conf
Then save the above.
You can also use these types of shortcuts instead of specifying the actual minutes, hours, days, etc.
string meaning
------ -------
@reboot Run once, at startup.
@yearly Run once a year, "0 0 1 1 *".
@annually (same as @yearly)
@monthly Run once a month, "0 0 1 * *".
@weekly Run once a week, "0 0 * * 0".
@daily Run once a day, "0 0 * * *".
@midnight (same as @daily)
@hourly Run once an hour, "0 * * * *".
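So the crontab entry above could equivalently be written as:

```
@weekly logrotate -v -f -s $HOME/my_logrotate.state $HOME/logrotate.conf
```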
Yes, all of this has to do with logging. No, none of it has to do with runlevel or "protection ring".
The kernel keeps its logs in a ring buffer. The main reason for this is so that the logs from the system startup get saved until the syslog daemon gets a chance to start up and collect them. Otherwise there would be no record of any logs prior to the startup of the syslog daemon. The contents of that ring buffer can be seen at any time using the dmesg command, and its contents are also saved to /var/log/dmesg just as the syslog daemon is starting up.

All logs that do not come from the kernel are sent to the syslog daemon as they are generated, so they are not kept in any buffers. The kernel logs are also picked up by the syslog daemon as they are generated, but they also continue to be saved (unnecessarily, arguably) to the ring buffer.
The log levels can be seen documented in the syslog(3) manpage and are as follows:

LOG_EMERG, LOG_ALERT, LOG_CRIT, LOG_ERR, LOG_WARNING, LOG_NOTICE, LOG_INFO, LOG_DEBUG
Each level is designed to be less "important" than the previous one. A log file that records logs at one level will also record logs at all of the more important levels too.
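The kernel side uses the same priority scheme to decide which messages reach the console; its current thresholds can be read from /proc/sys/kernel/printk (a sketch; the four values vary per system):

```shell
#!/bin/sh
# /proc/sys/kernel/printk holds four numbers: the current console log
# level, the default level for messages without one, the minimum allowed
# console level, and the boot-time default.
read current default minimum boot < /proc/sys/kernel/printk
echo "console log level: $current (default $default)"
```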
The difference between /var/log/kern.log and /var/log/mail.log (for example) is not to do with the level but with the facility, or category. The categories are also documented on the manpage.