Q1a: Is the master key password created per DB instance?
A1a: Assuming the question really means "Is the master key created per DB?", then the answer is yes. Each DB has a different master key. There is also the service master key, which is per SQL Server instance.
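As a minimal sketch (database name and password are illustrative), the database master key is created inside each database like this:

    USE YourDatabase;
    GO
    -- One database master key per database, protected by a password
    -- (and, by default, also by the instance's service master key)
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng!MasterKeyPassword';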
Q1b: When I backup that DB (.bak) will I be able to restore the DB to
another Server2?
A1b: Yes. The master key in the restored database can be opened using the original password. Then Server2's service master key encryption can be added to the restored database's master key.
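A sketch of those steps on Server2 after the restore, assuming illustrative names and the original master key password:

    -- On Server2, after RESTORE DATABASE
    USE RestoredDb;
    GO
    -- Open the master key with the password it was created with on Server1
    OPEN MASTER KEY DECRYPTION BY PASSWORD = 'Str0ng!MasterKeyPassword';
    -- Re-encrypt it with Server2's service master key so it opens automatically from now on
    ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;
    CLOSE MASTER KEY;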
Q2: Do I need to backup the certificate, master key or symmetric key
if I want to restore to another server?
A2: No. All these objects are part of the database and are restored along with it. Specific needs to back up individual keys may arise from operational requirements (e.g. key escrow).
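If you do need standalone backups of individual keys (e.g. for escrow), a sketch with illustrative names and paths:

    -- Back up the database master key and a certificate (with its private key) to files
    BACKUP MASTER KEY TO FILE = 'C:\keys\RestoredDb_master_key'
        ENCRYPTION BY PASSWORD = 'EscrowP@ssword1';
    BACKUP CERTIFICATE DataCert TO FILE = 'C:\keys\DataCert.cer'
        WITH PRIVATE KEY (FILE = 'C:\keys\DataCert.pvk',
                          ENCRYPTION BY PASSWORD = 'EscrowP@ssword2');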
Q3: When should the symmetric key be opened?
A3: Keys that require opening have to be opened in the session. Once opened, they stay open until explicitly closed or until the session disconnects. An 'open' key is 'open' only in the session that opened it.
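A sketch of the typical open/use/close pattern within one session (key, certificate, table and column names are illustrative):

    OPEN SYMMETRIC KEY DataKey DECRYPTION BY CERTIFICATE DataCert;
    -- The key stays open for this session until closed or the session ends
    SELECT CONVERT(varchar(64), DECRYPTBYKEY(SSN_enc)) AS SSN
    FROM dbo.Customers;
    CLOSE SYMMETRIC KEY DataKey;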
Q4: Should I worry about who can open and close the symmetric key? ...
A4: Now this is the real question. You have two alternatives really, which correspond to two distinct scenarios:
Scenario A: when the service needs to access encrypted data without asking the user for a password to the data. This is the vast majority of cases. In this case the service needs to open the keys somehow, and the solution is that SQL Server uses the service master key to encrypt the database master key, the database master key is used to encrypt the certificate's private key, the private key is used to encrypt a symmetric key, and the symmetric key encrypts the data. SQL Server itself can reach any data by following this chain, because it has access to the service master key. In this case the data is cryptographically protected only against accidental media loss: a lost laptop with the database on it, or an improperly disposed HDD with database or backup files on it, etc. Against all other threats the data is not cryptographically protected; it is protected only by access control: since SQL Server itself can decrypt the data without needing a password from the user, any user with sufficient privileges can access the data without knowing the password. In other words, a compromised ASP application may allow access to the encrypted data. As a note, scenarios in which the root of encryption is some secret stored on the ASP web application itself are just a (badly designed) variation on this and do not change anything.
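A minimal sketch of the Scenario A chain, with illustrative object and column names:

    -- Scenario A: everything is rooted in the service master key, so the server
    -- can decrypt without any user-supplied password
    USE YourDatabase;
    GO
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng!MasterKeyPassword'; -- also protected by the service master key
    CREATE CERTIFICATE DataCert WITH SUBJECT = 'Data encryption';          -- private key protected by the database master key
    CREATE SYMMETRIC KEY DataKey
        WITH ALGORITHM = AES_256
        ENCRYPTION BY CERTIFICATE DataCert;
    -- Encrypt a column; no password is needed to open the key
    OPEN SYMMETRIC KEY DataKey DECRYPTION BY CERTIFICATE DataCert;
    UPDATE dbo.Customers SET SSN_enc = ENCRYPTBYKEY(KEY_GUID('DataKey'), SSN);
    CLOSE SYMMETRIC KEY DataKey;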
Scenario B: when the service requires the user to provide a password. The password must come from the end user. In a web application, the user types the password into a form in the browser, it is sent over an SSL channel to the ASP application, which passes it on to the SQL session (again over an SSL channel), and SQL Server can now decrypt the data using this password. This scenario is typical for multi-tenant applications in which the tenants provide the data access password. In this scenario the data is cryptographically secured against unauthorized access, because neither SQL Server itself nor the intermediate ASP web application knows the password. Even for a sysadmin user with all privileges it would be impossible to read the data. Data can be moved at will and remains just as inscrutable, as it can only be read by the end user who knows the password at the root of the encryption chain. Note that any 'shortcut' deviation in which the password is 'saved' somewhere along the way instead of being provided by the end user immediately degrades this scenario into Scenario A.
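And a sketch of Scenario B, where the key is protected only by a password that must be supplied by the end user (names and the password placeholder are illustrative):

    -- Scenario B: the key can only be opened with a password SQL Server does not store
    CREATE SYMMETRIC KEY TenantKey
        WITH ALGORITHM = AES_256
        ENCRYPTION BY PASSWORD = '<password typed by the end user>';
    -- Per session, the password travels user -> ASP (over SSL) -> SQL Server
    OPEN SYMMETRIC KEY TenantKey DECRYPTION BY PASSWORD = '<password typed by the end user>';
    SELECT CONVERT(varchar(64), DECRYPTBYKEY(SSN_enc)) AS SSN FROM dbo.Customers;
    CLOSE SYMMETRIC KEY TenantKey;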
A mandatory read for you: Encryption Hierarchy.
Intuitively, if I were doing an OLAP solution for a retail chain, I'd say your infrastructure is really inappropriate for a system with substantial data volumes. This sort of kit has trouble with the data volumes you get in insurance, which are probably a couple of orders of magnitude smaller than I would expect to see in retail.
As gbn states in the comments, SSRS is just a web application. You can set up a farm - start with one server with a few GB of RAM and a couple of virtual CPUs. Monitor it, and expand it if it's overloaded.
The amount of disk space used by SSAS depends on the volume and the aggregations you put on the cubes. The storage format is quite compact - more so than SQL Server, but if you have large volumes of data it will start to get quite big. If it's getting into the 100+GB range then you should also look at partitioning.
A surprisingly applicable generic solution
Now, your client probably doesn't want to hear this, but VMs are not my recommended hardware configuration for any business intelligence solution. They work OK for transactional applications or anything that is not too computationally intensive, but BI for a retail chain is probably a bit aggressive for this sort of infrastructure.
As a generic 'starter for 10', my recommended configuration is a bare metal 2-4 socket server like an HP DL380 or DL580 with direct attach SAS storage. The principal reason is that a machine of this sort is by far the best bang for buck as a B.I. platform and has a relatively modest entry price. If you put an HBA on the machine you can mount a LUN off the SAN for backups.
IOPS for IOPS, this sort of kit is an order of magnitude cheaper than any SAN-based virtual solution, particularly on sequential workloads like B.I. The entry level for a setup of this configuration is peanuts - maybe £10-20,000 - and it's a lot cheaper and easier to get performance out of something like this than out of a VM-based solution. To a first approximation, the only situation where this kit is inappropriate for B.I. work is when the data volumes get too large for it and you need something like Teradata or Netezza.
What can you do with your VMs, though?
A good start would be more RAM - try 32GB. SSAS is a biblical memory hog, and you've got slow infrastructure. If you're stuck with a VM, put as much RAM on it as possible. Disk access is slow on VMs.
A second avenue is to partition the cube over multiple LUNs, where the LUNs are on separate physical arrays. Not all SANs will give you this much control, though. 80GB is getting into the range where you might get a benefit from partitioning, particularly if you've got a platform with slow I/O.
Tune the buggery out of your cube aggregations - try usage based optimisation. The more hits you can get from aggregations the more efficient the server will be.
Without measuring your workload, I doubt anyone here is in a position to make recommendations that are any more specific than that. Although generic, the pointers above should be a reasonable start if you haven't implemented them yet.
If the database has a full-text catalog, you'll need to back up the filegroups pertaining to it. It is possible that there is a filegroup containing only the index for the full-text catalog and nothing else, as seen by "A user-specified filegroup".
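If the full-text catalog does live in its own filegroup, a sketch of backing up just that filegroup (database, filegroup and path names are illustrative):

    -- Back up only the filegroup holding the full-text catalog
    BACKUP DATABASE MyDb
        FILEGROUP = 'ftfg_MyCatalog'
        TO DISK = N'C:\Backups\MyDb_ftfg.bak';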
Now for FILESTREAM, you can see an example of backups and restores here: MSSQL
You can see that the FILESTREAM backup data is classified as type S, as you mentioned. It also shows up in SQL Server as a FILESTREAM type and is its own file to be restored.
To recap, you may run into both F and S types in the filelist. The S type seems more likely, as it's possible the full-text indices reside in a filegroup with tables or other data, in which case they would be seen as type D. You can, however, specify your own filegroup for full-text indices that contains just the indices.
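To check what is actually inside a given backup before restoring, a quick sketch (path is illustrative):

    -- List the files and their types contained in a backup
    RESTORE FILELISTONLY FROM DISK = N'C:\Backups\MyDb.bak';
    -- Type column: D = data, L = log, F = full-text catalog (SQL Server 2005),
    -- S = FILESTREAM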