Yes, the command you're looking for is `reset`.

In contrast to `clear`, or Ctrl+L, `reset` will actually completely re-initialise the terminal, instead of just clearing the screen. However, it won't re-instantiate the shell (bash); bash's state remains the same as before, just as if you had merely cleared the screen.
As @Ponkadoodle mentions in the comments, this command should do the same thing more quickly:

`tput reset`
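For reference, both commands ultimately emit the terminal's initialisation sequences. On xterm-like (VT100-compatible) terminals this boils down to a couple of escape sequences you can send directly; a sketch, assuming such a terminal:

```shell
# What clear / Ctrl+L does: move cursor home, then erase the display.
printf '\033[H\033[2J'

# What reset / tput reset does: RIS (Reset to Initial State),
# which re-initialises the whole terminal, not just the screen.
printf '\033c'
```

The exact sequences come from the terminfo entry for your $TERM, which is why `tput` is the portable way to emit them.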
From the other answers:

- You can set a keyboard shortcut to `reset` the terminal, as explained by towolf.
- If you're running Kubuntu and your terminal is Konsole, you need to go to Edit → Clear history instead, since `reset` doesn't work the same way there, as UncleZeiv notes.
Please review my answer to this recent question. I believe the circumstances are identical.
Do not change your MySQL configuration at this point, as MySQL is not the problem -- it's only a symptom of the problem... which is that you appear to have a system with a small amount of memory and zero swap space.
Your server is not crashing "because" memory can't be allocated for the buffer pool. Your server is crashing... and is then unable to restart due to the unavailability of system memory. All of the memory configured for the InnoDB buffer pool is requested from the system at MySQL startup.
When you see this log message...
120926 08:00:51 mysqld_safe Number of processes running now: 0
...your server has already died. If it hasn't logged anything prior to this, it's not going to log anything about the first crash. The subsequent logs are from after the automatic attempt to restart.
Check your syslog and you should find messages where the kernel went looking for processes to kill due to an extreme out-of-memory condition.
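For example (the log path and message wording vary by distribution; these patterns are typical of the Linux OOM killer):

```shell
# Search the system log for OOM-killer activity. The path is
# /var/log/syslog on Debian/Ubuntu, /var/log/messages on RHEL/CentOS.
grep -iE 'out of memory|oom-killer' /var/log/syslog

# Or query the kernel ring buffer directly:
dmesg | grep -iE 'oom|killed process'
```

If the kernel chose mysqld as its victim, you'll typically see a line naming the process and its score right around the timestamp of the mysqld_safe message.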
Step 1 would probably be to add some swap space and/or allocate more RAM, if at all possible.
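A minimal sketch for adding a 1 GiB swap file (requires root; the size and path are illustrative):

```shell
# Create and enable a 1 GiB swap file (run as root):
fallocate -l 1G /swapfile      # or: dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile            # swap files must not be world-readable
mkswap /swapfile               # write the swap signature
swapon /swapfile               # enable it immediately

# Make it persistent across reboots:
echo '/swapfile none swap sw 0 0' >> /etc/fstab
```

Even a modest amount of swap gives the kernel room to page out idle memory instead of reaching for the OOM killer.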
If that isn't possible, you might actually consider decreasing innodb_buffer_pool_size in your configuration. (I never thought I'd actually hear myself say that.) As long as your database is small and your traffic is light, you may not need a buffer pool that large... and since the InnoDB buffer pool memory is all allocated at startup whether it's needed or not, this would free up some of your system's memory for whatever else is demanding it. (The 75% to 80%-of-total-RAM recommendation for sizing the buffer pool only holds if the whole server is dedicated to MySQL.)
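As a sketch, in my.cnf (the value is illustrative; on a shared box, size the pool to your working set rather than to total RAM):

```ini
[mysqld]
# Shrink the buffer pool on a small server that also runs Apache
# (illustrative value; must be large enough to hold your hot data):
innodb_buffer_pool_size = 128M
```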
Step 2 will be to review Apache's forking model and what you might need to do differently in the configuration to prevent it from overwhelming your server. It is pretty likely that uncontrolled growth in quantity or memory requirements of the Apache child processes is starting a cascade of events, resulting in the kernel killing MySQL to try to avoid a complete crash of the entire server.
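With the prefork MPM, for instance, capping the number of child processes bounds Apache's worst-case memory footprint. The values below are illustrative only; a common rule of thumb is to derive MaxRequestWorkers from the RAM you can spare for Apache divided by the average resident size of one child:

```apache
<IfModule mpm_prefork_module>
    StartServers             2
    MinSpareServers          2
    MaxSpareServers          5
    MaxRequestWorkers       25    # called MaxClients before Apache 2.4
    MaxConnectionsPerChild 500    # recycle children to limit memory growth
</IfModule>
```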
Depending on how much flexibility you have, you might even consider two separate virtual machines for Apache and MySQL.
Best Answer
First of all, there is a documented bug that degraded crash-recovery performance in 5.7; it is discussed at https://bugs.mysql.com/bug.php?id=80788 and seems to have been fixed in 5.7.19.
Otherwise, these suggestions might help:
https://www.percona.com/blog/2016/06/07/severe-performance-regression-mysql-5-7-crash-recovery/
https://www.percona.com/blog/2014/12/24/innodb-crash-recovery-speed-mysql-5-6/
Although these are on the Percona blog, the advice is not specific to Percona Server. There are other performance suggestions on the blog, but as you already realise, these largely relate to the setting of innodb_log_file_size.
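For reference, a sketch of the relevant my.cnf settings (values are illustrative; larger redo logs reduce checkpoint pressure during normal operation but give crash recovery more log to scan). Since MySQL 5.6.8, InnoDB rebuilds the redo log files automatically after a clean shutdown when the configured size changes, so this no longer requires deleting ib_logfile* by hand:

```ini
[mysqld]
# Illustrative values; total redo log size is
# innodb_log_file_size * innodb_log_files_in_group:
innodb_log_file_size      = 512M
innodb_log_files_in_group = 2
```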
Disclosure: I work for Percona.