I'm not a fan. It's about as good an idea as creating a relational table named OrdersOrCustomers with columns defined for both. The storage-engine penalty is slightly lower in Cassandra because of the sparse-cell storage under the hood, but it's still bad practice.
This bites you later when you want to map/reduce over your data; each task will have to scan over all your data, and filter out the rows that don't match what you're actually interested in (e.g., customers). And good luck making sense of the statistics that Cassandra tracks per-CF. ("Is this CF the source of 80% of my application reads because of the order data? Or because of the customer sessions it's combined with? Or the other five data types I threw in?")
If you absolutely, positively need tens or hundreds of thousands of CFs? Even then I'd rather run Cassandra without arena allocation than mutilate my data model like this.
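To make the contrast concrete, here is a hypothetical CQL sketch (table and column names are made up for illustration) of keeping each entity type in its own column family instead of a combined one:

```sql
-- Hypothetical sketch: one CF per entity type, rather than a single
-- combined "OrdersOrCustomers" CF holding both.
CREATE TABLE customers (
    customer_id uuid PRIMARY KEY,
    name        text,
    email       text
);

CREATE TABLE orders (
    order_id    uuid PRIMARY KEY,
    customer_id uuid,
    total       decimal,
    created_at  timestamp
);
-- A map/reduce job over orders now scans only order rows, and the
-- per-CF statistics Cassandra tracks stay attributable to one data type.
```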
If your server has only 4GB of RAM, AWE isn't going to help you much. AWE is designed for when you have greater than 4GB of RAM you need to use.
Timeouts are harder to define. The timeout period itself is likely defined in your application, so it isn't inherently SQL Server causing the timeouts; what you're running into are queries that take too long to complete. To troubleshoot this, I would first review the wait stats to identify specifically what is causing things to drag in your SQL Server. Some waits to look for:
- `BACKUPTHREAD` means your queries are waiting on backup operations to complete. Other backup waits could indicate problems as well.
- Memory pressure can show up as any number of waits, but look for things like `RESOURCE_SEMAPHORE` or IO waits. Also, check the trend on your Page Life Expectancy in Perfmon (SQL Server: Buffer Manager -> Page Life Expectancy). If this continually trends low, your pages are cycling out of memory too quickly.
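To review the wait stats, a query along these lines against `sys.dm_os_wait_stats` will surface the top waits (the excluded wait types here are a partial list of benign waits, not a definitive filter — adjust for your environment):

```sql
-- Top waits by cumulative wait time since the last service restart.
SELECT TOP 10
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                        'BROKER_TASK_STOP', 'CLR_AUTO_EVENT')
ORDER BY wait_time_ms DESC;
```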
Another thing you can do is check how long your log shipping backups are taking to complete:
```sql
SELECT
    database_name,
    backup_finish_date,
    DATEDIFF(ss, backup_start_date, backup_finish_date) AS SecondsToBackup
FROM msdb.dbo.backupset
WHERE type = 'L'
ORDER BY backup_finish_date DESC;
```
As for resolution, if your log shipping is all happening at the same time, you might want to stagger it out. If it isn't necessary to log ship every 5 minutes, I'd go with your 15-minute interval and offset each database by 5 minutes so that the backups run separately.
The other thing to consider is a hardware upgrade. The box you're describing sounds fairly out of date, considering there are laptops you can purchase now with more horsepower. If you can move the server to a virtual machine, you'll be able to immediately upgrade to better resources, which could resolve these issues for you.
According to a Datastax representative, the setting that affects schema changes is `request_timeout_in_ms`. It should be noted that these timeout settings are not hard timeouts from the moment your client application connects to the coordinator node: if the connection is lost, the timeout counter restarts. That is why I was seeing some CREATE TABLE commands take 12 seconds even though I had `request_timeout_in_ms` set to 10000 ms.
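For reference, the setting lives in cassandra.yaml (the value shown is the 10-second example from above):

```yaml
# cassandra.yaml -- server-side request timeout, in milliseconds.
# Note: the timeout counter restarts if the client connection is lost,
# so an operation can exceed this wall-clock time as seen by the client.
request_timeout_in_ms: 10000
```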