SQL Server – After processing completes, SSAS database slowly consumes all available disk space, with high memory use

olap, sql-server-2012, ssas

This problem is with an SSAS 2012 SP3 (11.0.6020.0) multi-dimensional server, hosting a single cube database, containing 1 cube and 16 dimensions. Aggregations have been designed using SQL Management Studio (to 30% gain), and extra aggregations added from the Usage Based Optimization wizard. The server has separate disks for Windows and the SSAS Data and Temp folders. This is what happens:

  • The cube database is fully processed successfully, and afterwards the disk containing the Data folder has a healthy 300GB out of 400GB free. This screenshot showed the largest files from a particular partition folder under the .cub folder. All the files had the timestamp 21/04/2016 14:14 (the processing time).
  • Over the next few hours, the Data folder slowly grows, until it has consumed all of the available 400GB on the disk. This screenshot showed the largest files from the same folder as above. Some of the original files could be seen, plus a number of new files. I don't think any of the files prefixed "176" were present immediately after processing.
  • During this time, the SSAS service is consuming almost all the available memory (24GB out of 32GB) and is very unresponsive: queries are very slow, and attempts to access the properties of the server through SQL Server Management Studio fail.
  • Once all the disk space is used, SSAS becomes unusable, and the SSAS service cannot be restarted.
  • Restarting Windows causes the disk containing the Data folder to get its 300GB of free space back: the files dated after 21/04/2016 14:14 have disappeared. However, the process then repeats and the free disk space is eaten up again.

What could be going on and how do I stop it?

Best Answer

I eventually found the answer.

There was a job set up to incrementally process the cube. Incremental cube processing requires a query to be defined that selects only the new fact data to merge into the cube. In my case, this query was badly defined and was selecting the entire set of fact data, i.e. all the data that was already in the cube. Although fully processing the cube did not consume excessive disk space, for some reason incremental processing consumed all the available space, so the process never completed successfully. Rebooting the server caused the files created by the unfinished process to be deleted, but minutes later the processing job would start again.
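For illustration, a correctly bounded incremental query filters the fact table down to rows that have not yet been processed, typically with a watermark. The table and column names below (dbo.FactSales, DateKey, etl.ProcessLog) are hypothetical placeholders, not my actual schema:

```sql
-- Hypothetical sketch: dbo.FactSales is the fact table, and
-- etl.ProcessLog records the highest DateKey already merged into
-- the partition. Only rows newer than that watermark are returned,
-- so repeated incremental (ProcessAdd) runs do not re-merge data
-- the cube already contains.
SELECT f.DateKey,
       f.ProductKey,
       f.CustomerKey,
       f.SalesAmount
FROM dbo.FactSales AS f
WHERE f.DateKey > (SELECT MAX(LastProcessedDateKey)
                   FROM etl.ProcessLog);
```

Without a predicate like this, each incremental run appends the full fact table to the partition again, which matches the symptom of the Data folder growing until the disk fills.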