I presume you have substantial data volumes if you have separate staging and data warehouse servers. From your post, it sounds like your staging server is doing a bit more than just staging the data: you describe merging data into the fact table on the warehouse server, so I presume the staging server runs ETL processing as well.
I would not generally recommend a transform-push approach like this; it is usually better to run the ETL processing from the warehouse and pull from the staging area, with the merge happening on the warehouse. If you need to scale out, you can replicate the warehouse and the same operations will run against each source and target.
However, I guess you're stuck with the transform-push architecture, so you've got a few options:
Use SSIS or another ETL tool to copy the incremental load to a staging table on the warehouse and merge it into the fact tables from there. This is fairly efficient and can be done with set operations - i.e. a MERGE statement, or an INSERT plus an UPDATE.
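As a rough sketch of that merge step - all table and column names here are made up for illustration - the set-based upsert from a warehouse-side staging table might look like:

```sql
-- Hypothetical example: merge an incremental load that SSIS has copied
-- into a warehouse-side staging table into the fact table.
MERGE dbo.FactSales AS tgt
USING staging.FactSales_Incremental AS src
    ON tgt.SalesKey = src.SalesKey
WHEN MATCHED THEN
    UPDATE SET tgt.Quantity = src.Quantity,
               tgt.Amount   = src.Amount
WHEN NOT MATCHED BY TARGET THEN
    INSERT (SalesKey, Quantity, Amount)
    VALUES (src.SalesKey, src.Quantity, src.Amount);
```

Because both tables live on the warehouse server, the whole statement runs as a single local set operation rather than a row-by-row push across servers.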
Replicate the staging area into another database on your warehouse server and execute the merge from there after the ETL processing on the staging server has run. This is probably more fiddly than the first option, and I can't really see any advantages to it.
If your staging and warehouse databases are located on the same database instance, then you can create synonyms in the warehouse and merge through the synonyms. The synonyms avoid the need to hard-code database references.
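A minimal sketch of the synonym approach, again with illustrative names:

```sql
-- Hypothetical example: a synonym in the warehouse database that points
-- at a staging table in a sibling database on the same instance.
CREATE SYNONYM dbo.StagedSales
    FOR StagingDB.dbo.FactSales_Incremental;

-- The load code references only the synonym; no database name is
-- hard-coded in the MERGE itself.
MERGE dbo.FactSales AS tgt
USING dbo.StagedSales AS src
    ON tgt.SalesKey = src.SalesKey
WHEN NOT MATCHED BY TARGET THEN
    INSERT (SalesKey, Quantity, Amount)
    VALUES (src.SalesKey, src.Quantity, src.Amount);
```

If the staging database is later moved or renamed, repointing the synonym with DROP SYNONYM / CREATE SYNONYM is a one-line change and the load code is untouched.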
You've got lots of questions in here:
Q: (The lack of foreign keys) confuses me a lot! It is a good practice (not mandatory) to have Fk's in the DWH for a variety of reasons (data integrity, relations visible for semantic layer, ....)
A: Correct, it's normally a good practice to have foreign keys in a data warehouse. However, clustered columnstore indexes don't support that yet.
Q: So MS advocates Clustered Column store indexes for DWH scenarios, However it can not handle FK relationships?!
A: Microsoft gives you tools. It's up to you how you use those tools.
If your biggest challenge is a lack of data integrity in your data warehouse, then the tool you want is conventional tables with foreign keys.
If your biggest challenge is query performance, and you're willing to check your own data integrity as part of the loading process, then the tool you want is clustered columnstore indexes.
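One way to do that integrity check yourself during the load - a sketch with invented table names, not any prescribed pattern - is an orphan-row query against the dimension before the fact rows are merged:

```sql
-- Hypothetical example: with no foreign key on the columnstore fact
-- table, verify referential integrity as an ETL step instead.
SELECT COUNT(*) AS OrphanRows
FROM   staging.FactSales_Incremental AS f
LEFT JOIN dbo.DimCustomer AS d
       ON d.CustomerKey = f.CustomerKey
WHERE  d.CustomerKey IS NULL;
-- If OrphanRows > 0, fail the batch or divert the offending rows to an
-- error table before loading the fact table.
```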
Q: However SQL 2014 then adds no real new value for DWH??
A: Thankfully, clustered columnstore wasn't the only new feature in SQL Server 2014. For example, check out the new cardinality estimator.
Q: Why am I so angry and bitter about the way my favorite feature was implemented?
A: You caught me - you didn't really ask that question, but I'll answer it anyway. Welcome to the world of third-party software, where not everything is built to your exact specifications. If you feel passionately about a change you'd like to see in a Microsoft product, check out Connect.Microsoft.com. It's Microsoft's feedback process: you submit a change, other people vote it up, and the product team reads it and tells you why they won't implement it. Sometimes. Most of the time they just mark it as "won't fix, works on my machine", but hey, sometimes you do get answers.
As long as an index is smaller than the original table, or sorted differently, it can offer a performance boost - though in practice, on low-cardinality columns it may not.
The main cost of an index is the extra time required for inserts and updates. If your database is updated daily but read frequently, even large indexes may be worthwhile. It depends on your data. Remember: all generalizations are false.
Can you give a more specific example of where you're considering applying an index? If you've already created one, does it show up in execution plans?