First, verify that what you're seeing matches what SQL Server actually sees, by running this:
USE tempdb;
EXEC sp_helpfile;
If all 3 of your files are listed here, you somehow created them there by accident.
TempDB is a 'special' system database that gets re-created every time the SQL Server service starts.
Set up TempDB the way you want it, restart the service so TempDB recreates itself, then run the query above again to see whether the new settings stuck.
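To check whether the settings stuck without waiting for a restart surprise, you can also compare tempdb's configured metadata in `sys.master_files` (which is what the next startup will use) with what `sp_helpfile` reports. A minimal sketch; the size conversion assumes the standard 8 KB page size:

```sql
-- Configured tempdb file layout, as stored in master.
-- size is in 8 KB pages, so convert to MB.
SELECT name,
       physical_name,
       size * 8 / 1024 AS size_mb
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');
```

If a file shows up here but not in `sp_helpfile` (or vice versa), the metadata and the running instance are out of sync and a service restart should reconcile them.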
If you still want to add a file somewhere manually, you can do this:
ALTER DATABASE tempdb ADD FILE (
    NAME = N'LogicalNameHere'
    ,FILENAME = N'D:\Whatever\tempDB2orWhatever.ndf'
    ,SIZE = xMB
    ,FILEGROWTH = xMB
);
If you want to keep all of your TempDB files but just move them around, get the current file info from sp_helpdb and modify the statements below to use your own paths, files, and logical names:
USE [master];
GO
ALTER DATABASE tempdb MODIFY FILE (name = tempdev, filename = 'E:\Sqldata\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (name = templog, filename = 'E:\Sqldata\templog.ldf');
Note that for tempdb the new file locations only take effect after the SQL Server service restarts.
Note that you should usually have multiple TempDB data files. If you don't, you risk allocation page contention, which is not pretty.
Check out Brent Ozar's blog for more info on SGAM and PFS contention.
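One quick way to see whether you're actually hitting that contention is to look for PAGELATCH waits on tempdb pages while the workload is running. A rough sketch using the `sys.dm_os_waiting_tasks` DMV (the filter on `2:` assumes the usual database_id of 2 for tempdb):

```sql
-- Sessions currently latched on tempdb pages.
-- resource_description is db_id:file_id:page_id, e.g. 2:1:1 (PFS) or 2:1:3 (SGAM).
SELECT session_id,
       wait_type,
       resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
  AND resource_description LIKE '2:%';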
In my opinion, putting tempdb on the SSD without looking at the performance stats is the wrong way to go. SSDs have a high cost per GB and are a premium resource that should be used as effectively as possible.
You will probably get better performance by moving the most heavily used database files to the SSD, assuming you have already split your database into multiple files and isolated the most problematic objects into their own filegroup.
That said, SSDs excel at random reads and writes compared to traditional disks, and random I/O is mostly what data files see. Log files mostly see sequential reads and writes, which mechanical disks handle just as well as SSDs.
It's true that tempdb is a critical resource for the instance and should live on a fast drive; however, you're probably going to see a bigger performance gain from adding more RAM (much cheaper than SSDs).
Your mileage may vary, but a generic question deserves a generic answer.
Best Answer
Just adding files to the existing filegroup will not redistribute the existing data.
I would go the route @Max is suggesting and ensure that the new files you create are all the same size, to make the most of the proportional fill feature. And before you begin, take a full backup.
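Creating the new filegroup with equally sized files could look something like this. A sketch only; the database name, filegroup name, paths, and sizes are placeholders, not values from the question:

```sql
-- Hypothetical names/paths: MyDb, FG_SSD, S:\Data\...
ALTER DATABASE MyDb ADD FILEGROUP FG_SSD;

-- Same SIZE on every file so proportional fill spreads writes evenly.
ALTER DATABASE MyDb ADD FILE (
    NAME = N'MyDb_SSD1', FILENAME = N'S:\Data\MyDb_SSD1.ndf', SIZE = 100GB
) TO FILEGROUP FG_SSD;

ALTER DATABASE MyDb ADD FILE (
    NAME = N'MyDb_SSD2', FILENAME = N'S:\Data\MyDb_SSD2.ndf', SIZE = 100GB
) TO FILEGROUP FG_SSD;
```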
Once you've migrated all of the tables, indexes, etc. over to the new filegroup, you should be able to shrink the mdf down under the 512 GB point, or smaller if you are not going to use the default filegroup anymore.
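The shrink itself is a DBCC SHRINKFILE call against the mdf's logical name. A sketch with placeholder names and a placeholder target size in MB:

```sql
-- 'MyDb_Data' is the hypothetical logical name of the mdf;
-- 500000 is a target size in MB (~488 GB), chosen to land under 512 GB.
DBCC SHRINKFILE (N'MyDb_Data', 500000);
```

Shrinking can be slow and can fragment indexes, so plan for a maintenance window and a reindex afterwards.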
At that point, take another full backup and then restore it to the new disks using the MOVE option of the RESTORE command. This lets you place the new ndf files in the new filegroup across the SSDs.
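That restore might be sketched like this; every name and path here is a placeholder for your own backup location, logical file names, and SSD drive letters:

```sql
-- Restore the full backup, relocating each file with MOVE ... TO.
RESTORE DATABASE MyDb
FROM DISK = N'B:\Backup\MyDb.bak'
WITH MOVE N'MyDb_SSD1' TO N'S:\Data\MyDb_SSD1.ndf',
     MOVE N'MyDb_SSD2' TO N'T:\Data\MyDb_SSD2.ndf',
     REPLACE;
```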
Depending on how you manage the database, you might want to make the new filegroup the default for newly created objects. Otherwise they will still land in the old filegroup, and since it only has one file, the objects and data in it won't benefit from SQL Server's proportional fill feature.
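Changing the default filegroup is a one-liner; again, MyDb and FG_SSD are placeholder names:

```sql
-- New objects created without an explicit ON clause now go to FG_SSD.
ALTER DATABASE MyDb MODIFY FILEGROUP FG_SSD DEFAULT;
```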
Moving file groups reference