Views are typically not implemented for performance. And while you can't make this particular view an indexed view (indexed views are just views that SQL Server materializes and maintains for you, but their definitions are subject to many restrictions, e.g. no CTEs or outer joins), you can certainly maintain those facts manually yourself.
For example, you mention that you currently calculate "whether someone is dead or not" using three CTEs and a CASE expression (sorry, to be pedantic, it's not a statement).
Instead of referencing this set of CTEs every time the view is accessed, why not put that fact in a table (potentially along with other facts that have to be calculated per user), and calculate that periodically in the background? So maybe every 5 minutes (that is just a SWAG, you'll have to determine what's appropriate), you run a SQL Server Agent job that re-populates the table based on what it currently knows is the truth. Now the view just has to reference the table that is the output of that script, instead of calculating it over and over again while the users wait. So for example:
-- one row of pre-computed, per-person facts
CREATE TABLE dbo.PersonProperties
(
    PersonID INT PRIMARY KEY REFERENCES dbo.Persons(PersonID),
    IsDead BIT NOT NULL DEFAULT 0
);
Now the job can simply merge that table with the results of the CTE, and then the view can include a reference to that table, pulling the BIT column along with a join on the PK. This should be MUCH less expensive at query time than re-evaluating all of that logic every time.
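As a rough sketch, the job could run something like this (DateOfDeath is just a hypothetical stand-in for whatever your three CTEs actually evaluate):

;WITH DeathCalc AS
(
    -- stand-in for your three CTEs + CASE expression
    SELECT PersonID,
           IsDead = CASE WHEN DateOfDeath IS NOT NULL THEN 1 ELSE 0 END
    FROM dbo.Persons
)
MERGE dbo.PersonProperties AS tgt
USING DeathCalc AS src
   ON tgt.PersonID = src.PersonID
WHEN MATCHED AND tgt.IsDead <> src.IsDead THEN
    UPDATE SET IsDead = src.IsDead
WHEN NOT MATCHED BY TARGET THEN
    INSERT (PersonID, IsDead) VALUES (src.PersonID, src.IsDead);

And then the view just joins on the pre-computed fact instead of deriving it:

CREATE VIEW dbo.PersonStatus
AS
    SELECT p.PersonID, pp.IsDead
    FROM dbo.Persons AS p
    INNER JOIN dbo.PersonProperties AS pp
        ON pp.PersonID = p.PersonID;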
To minimize blocking (e.g. when users are accessing the view at the same time the job is running), you can implement what I call "schema switch-a-roo," which I blogged about here:
http://www.sqlperformance.com/2012/08/t-sql-queries/t-sql-tuesday-schema-switch-a-roo
So instead of locking resources on the expensive query throughout the operation, the only blocking that happens is when the metadata switch actually takes place.
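In rough outline (the blog post has the full pattern, including error handling; the schema names here are just placeholders):

-- one-time setup: a shadow schema for the copy being rebuilt,
-- and a holder schema used only during the swap
CREATE SCHEMA shadow AUTHORIZATION dbo;
GO
CREATE SCHEMA holder AUTHORIZATION dbo;
GO

-- the job repopulates shadow.PersonProperties, then swaps the
-- two tables using nothing but quick metadata operations:
BEGIN TRANSACTION;
    ALTER SCHEMA holder TRANSFER dbo.PersonProperties;
    ALTER SCHEMA dbo    TRANSFER shadow.PersonProperties;
    ALTER SCHEMA shadow TRANSFER holder.PersonProperties;
COMMIT TRANSACTION;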
This works as long as you can afford some brief periods where the data is not accurate. You can tighten that window to be pretty narrow, but there is always a chance that a person will die between refreshes, and for a brief moment a query will report that they are still alive. If you can't afford that, then make updating this table part of whatever process first introduces the fact to the database, so that both the source tables and the new table reflect the change immediately.
Still not good enough? Then flag a user as "dirty" the second a change for them comes in. The view can union or left join with the stale data for users that are "clean" and go after live data only for the users that are "dirty."
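One way to sketch that (assuming you add an IsDirty bit to dbo.PersonProperties, and again using the hypothetical DateOfDeath column from above):

-- clean users: trust the pre-computed fact
SELECT pp.PersonID, pp.IsDead
FROM dbo.PersonProperties AS pp
WHERE pp.IsDirty = 0
UNION ALL
-- dirty users: compute live (stand-in for your three CTEs)
SELECT p.PersonID,
       CASE WHEN p.DateOfDeath IS NOT NULL THEN 1 ELSE 0 END
FROM dbo.Persons AS p
INNER JOIN dbo.PersonProperties AS pp
    ON pp.PersonID = p.PersonID
WHERE pp.IsDirty = 1;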
If you are doing this to aid debugging, rather than to facilitate restart after a failure, I would suggest temp tables.
Temp tables are visible only to the scope in which they are created. In other words, if two users execute the same stored procedure (SP) simultaneously, each gets their own temp table, and most importantly their data is completely isolated from each other's. Views are globally visible (permissions notwithstanding) and data written through one can be seen by all, with the risk of leakage. If you define a view and write your working values to it, you have to add extra columns to separate your values from other users'.
Temp tables are dropped automatically by the system when they go out of scope. This makes your debug code simpler since you don't have to program around dirty restarts after crashes.
Temp tables can be created on the fly with the SELECT ... INTO syntax, so their schema automatically keeps up with changes to the surrounding schema. Views do not have this advantage.
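For example (dbo.Orders and its columns are placeholders):

-- #WorkingOrders is created on the fly; its schema is derived
-- from the query at run time, so it tracks changes to dbo.Orders
SELECT *
INTO #WorkingOrders
FROM dbo.Orders
WHERE OrderDate >= '20240101';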
If you are processing many rows you can index temp tables, with only the same drawbacks that indexing normal tables incurs. In SQL Server at least (which is my specialty), there are many restrictions a view's definition must satisfy before it can be indexed, which could affect their performance or applicability in your case.
SQL Server makes table variables available as a lighter-weight alternative to temp tables. See this SO question for the differences.
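A minimal sketch of the table-variable equivalent (dbo.SourceTable is a placeholder):

-- scoped to the batch/procedure; no explicit DROP needed
DECLARE @WorkingSet TABLE
(
    ID    INT PRIMARY KEY,
    Value INT NOT NULL
);

INSERT @WorkingSet (ID, Value)
SELECT ID, Value
FROM dbo.SourceTable;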
If you do happen to be on SQL Server, and are lucky enough to have 2014 available, and your row counts are modest, these working tables are an excellent candidate for non-durable, memory-optimized tables (DURABILITY = SCHEMA_ONLY).
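A sketch of such a table (the database first needs a MEMORY_OPTIMIZED_DATA filegroup; names are placeholders):

CREATE TABLE dbo.WorkingSet
(
    ID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    Value INT NOT NULL
)
-- SCHEMA_ONLY: the data itself is never persisted to disk
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);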
Best Answer
It sometimes is. It is a common performance optimization to "materialize" or "spool" intermediate results into a temp table, if putting the logic for returning the intermediate results in the main query turns out to be too complex or expensive.
This is not something you should always do; only when you discover that putting the logic in a single query is not running acceptably.
A good way to organize this is with Common Table Expressions (CTEs), where you can build a "pipeline" of query logic, e.g.:
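For instance, an illustrative pipeline over a hypothetical dbo.OrderLines table:

WITH OrderTotals AS
(
    -- step 1: aggregate the raw rows
    SELECT OrderID, SUM(Quantity * UnitPrice) AS OrderTotal
    FROM dbo.OrderLines
    GROUP BY OrderID
),
BigOrders AS
(
    -- step 2: filter the intermediate result
    SELECT OrderID, OrderTotal
    FROM OrderTotals
    WHERE OrderTotal > 1000
)
SELECT OrderID, OrderTotal
FROM BigOrders;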
Then, if you want to spool one of the subqueries, it's as easy as "lifting" the query logic from the CTE into a temp table load and referencing the temp table in the CTE: