A clustered index will dictate the order of the data in the pages of your table. When your temp table is initially populated, the inserted data will be sorted or, if you're doing multiple inserts to populate, it's possible you will end up with some page fragmentation as the pages must split to accommodate additional data being written.
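If you want to check whether multiple inserts have actually fragmented the temp table, SQL Server exposes this through a DMV. A minimal sketch, assuming a temp table named `#Orders` (the name is illustrative):

```sql
-- Inspect fragmentation caused by page splits on a temp table.
-- Temp tables live in tempdb, so resolve the object there.
SELECT index_id,
       avg_fragmentation_in_percent,   -- logical fragmentation from splits
       page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID('tempdb'),
         OBJECT_ID('tempdb..#Orders'),
         NULL, NULL, 'LIMITED');
```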
Assuming your temp table is populated in a single insert and you have no page fragmentation, the next issue that may or may not affect your performance is the presence of a 4-byte "uniqueifier." Because the clustered key is used as the row locator for each row in the table, it must be unique for that purpose. So if you're using a non-unique clustered index, every nonclustered index (and your clustered index) is going to be 4 bytes wider than you've defined, which will affect page density and the I/O required to perform the query. This will likely have only a very marginal impact, since the uniqueifier is added only to duplicate key values.
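To make the difference concrete, here is a sketch with a hypothetical temp table; widening the clustered key so it is declared unique avoids the hidden uniqueifier entirely:

```sql
CREATE TABLE #Orders
(
    CustomerId int   NOT NULL,
    OrderId    int   NOT NULL,
    Amount     money NOT NULL
);

-- Non-unique clustered index: any duplicate CustomerId values
-- get a hidden 4-byte uniqueifier appended to the key.
CREATE CLUSTERED INDEX cx_Orders ON #Orders (CustomerId);

-- Alternative: widen the key so it is genuinely unique.
-- No uniqueifier is ever added, at the cost of a wider declared key.
-- CREATE UNIQUE CLUSTERED INDEX cx_Orders ON #Orders (CustomerId, OrderId);
```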
The other factor you need to consider is how you are joining to this temp table. SQL Server supports three physical join operators: nested loops, merge, and hash. A merge join can yield the best performance in many situations.
Unlike the nested loops join where the total cost may be proportional
to the product of the number of rows in the input tables, with a merge
join each table is read at most once and the total cost is
proportional to the sum of the number of rows in the inputs. Thus,
merge join is often a better choice for larger inputs.
It is unlikely SQL Server will choose a merge operator unless both inputs being joined are sorted on the join criteria. There are other factors as well, of course, but if you can optimize your table for specific, common usage patterns, all the better.
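A sketch of that pattern, assuming a `dbo.Orders`/`dbo.Customers` schema that exists only for illustration: cluster the temp table on the join column so both inputs arrive pre-sorted.

```sql
-- Temp table clustered on the join key via an inline primary key.
CREATE TABLE #Recent
(
    CustomerId int NOT NULL PRIMARY KEY CLUSTERED
);

-- Single sorted insert, so no page splits on the way in.
INSERT INTO #Recent (CustomerId)
SELECT DISTINCT CustomerId
FROM dbo.Orders
WHERE OrderDate >= '20240101';

-- If dbo.Customers is also clustered on CustomerId, both inputs are
-- ordered by the join column and the optimizer can consider a merge join.
SELECT c.CustomerId, c.Name
FROM dbo.Customers AS c
JOIN #Recent       AS r ON r.CustomerId = c.CustomerId;
```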
The second thing to consider is any other indexes you are going to place on the table, since each nonclustered index will contain the keys specified (in the order specified) plus a pointer to the clustered index to facilitate the lookup and retrieval of any fields not covered by the index. Again, a narrow clustered key yields better density in your nonclustered indexes and thus better I/O performance for index scans and seeks. If your nonclustered indexes require many lookups to the clustered index, either consider including those columns in the index (sacrificing page density for reduced lookups) or consider clustering on that criteria (if it is your primary usage pattern).
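The "include those columns" option looks like this; the table and column names are illustrative:

```sql
CREATE TABLE #Orders
(
    CustomerId int      NOT NULL,
    OrderId    int      NOT NULL,
    Status     tinyint  NOT NULL,
    Amount     money    NOT NULL,
    OrderDate  datetime NOT NULL
);
CREATE UNIQUE CLUSTERED INDEX cx_Orders ON #Orders (CustomerId, OrderId);

-- Covering index: Amount and OrderDate ride along at the leaf level,
-- so a query filtering on Status needs no lookup to the clustered index.
CREATE NONCLUSTERED INDEX ix_Orders_Status
ON #Orders (Status)
INCLUDE (Amount, OrderDate);
```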
The biggest factor that would steer me toward using a primary key over a unique (or non-unique) clustered index is the ability to create the index inline in the CREATE TABLE statement. Historically, I have found that the optimizer is less apt to use indexes created inside my stored procedure after the CREATE TABLE statement. It's easy to create a primary key inline with the CREATE TABLE statement; I'm not sure whether that's possible with a non-constraining index, but you can create a unique clustered index inline.
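Both constraint-backed variants mentioned above can be declared inline; a minimal sketch:

```sql
-- Inline primary key (clustered by default, stated here explicitly).
CREATE TABLE #A
(
    Id int NOT NULL,
    CONSTRAINT pk_A PRIMARY KEY CLUSTERED (Id)
);

-- Inline unique clustered index, expressed as a constraint.
CREATE TABLE #B
(
    Id int NOT NULL,
    CONSTRAINT uq_B UNIQUE CLUSTERED (Id)
);
```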
The distinction is mainly historical. The relational model was developed in part in response to the way IMS handled data.
The presently released version of IMS provides the user with a choice for each file: a choice between no indexing at all (the hierarchic sequential organization) or indexing on the primary key only . . .
Source: A Relational Model of Data for Large Shared Data Banks
Codd retained that term. (Same source.)
Normally, one domain (or combination of domains) of a given relation has values which uniquely identify each element (n-tuple) of that relation. Such a domain (or combination) is called a primary key.
It's not unusual for a relation to have more than one "combination of domains" that uniquely identify each tuple. The relational model offers no theoretical basis for elevating the status of one of them to "primary". As far as the relational model is concerned, all candidate keys are created equal.
As far as standard SQL is concerned, any column or combination of columns that's declared UNIQUE can be the target of a foreign key constraint. In the past, DBMS platforms differed slightly in how they handled NULL in a column declared UNIQUE. That's part of the reason old-school database designers lean toward NOT NULL UNIQUE declarations; the other part is that NOT NULL UNIQUE is functionally equivalent to PRIMARY KEY.
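A sketch of a foreign key targeting a NOT NULL UNIQUE column rather than the primary key (the tables and names are illustrative):

```sql
CREATE TABLE dbo.Product
(
    ProductId int         NOT NULL PRIMARY KEY,
    Sku       varchar(20) NOT NULL UNIQUE      -- alternate key
);

CREATE TABLE dbo.OrderLine
(
    OrderLineId int NOT NULL PRIMARY KEY,
    Sku varchar(20) NOT NULL
        REFERENCES dbo.Product (Sku)           -- targets the UNIQUE column
);
```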
There is a semantic difference between a UNIQUE index and a UNIQUE constraint. I prefer to see constraints expressed as constraints.
That is the default the SQL Server designers chose. A good clustered key is unique (like the primary key), narrow (as most primary keys are, or at least should be), and ever-increasing. So, most of the time, the primary key is a good (often the best) choice for the clustered key; there can be at most one clustered index per table.
Yes, you can, by explicitly defining all indexes, and especially the primary key, as nonclustered. If you think you don't need a clustered index on a table, you can declare the primary key as nonclustered; the unique and NOT NULL constraints will still be enforced.
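A sketch of such a table, which is stored as a heap because no clustered index exists:

```sql
-- No clustered index anywhere: the table is a heap.
-- The primary key still enforces NOT NULL and uniqueness on Code.
CREATE TABLE dbo.Lookup
(
    Code  int         NOT NULL,
    Label varchar(50) NOT NULL,
    CONSTRAINT pk_Lookup PRIMARY KEY NONCLUSTERED (Code)
);
```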