First, patch: make sure you're on 2012 Service Pack 1 Cumulative Update 10 or newer. In SQL 2014, Microsoft changed TempDB to be less eager to write to disk, and they awesomely backported it to 2012 SP1 CU10, so that can alleviate a lot of TempDB write pressure.
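Not sure which build you're actually running? A quick sanity check (these SERVERPROPERTY values exist on any modern version) is:

SELECT SERVERPROPERTY('ProductVersion') AS Build,
       SERVERPROPERTY('ProductLevel')   AS ServicePackLevel,
       SERVERPROPERTY('Edition')        AS Edition; /* Compare the build number against Microsoft's build list to confirm you're at or past SP1 CU10 */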
Second, get exact numbers on your latency. Check sys.dm_io_virtual_file_stats to see the average write stall for your TempDB files. My favorite way to do this is either:
sp_BlitzFirst @ExpertMode = 1, @Seconds = 30 /* Checks for 30 seconds */
sp_BlitzFirst @SinceStartup = 1 /* Shows data since startup, but includes overnights */
Look at the file stats section, and focus on the physical writes. The SinceStartup data can be a little misleading since it also includes times when CHECKDB is running, and that can really hammer your TempDB.
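If you'd rather hit the DMV directly instead of (or in addition to) sp_BlitzFirst, here's a minimal sketch that pulls the average write stall per TempDB file:

SELECT mf.name AS FileName,
       mf.physical_name,
       vfs.num_of_writes,
       vfs.io_stall_write_ms,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms * 1.0 / vfs.num_of_writes
       END AS AvgWriteStallMs /* average ms per write since the last restart */
FROM sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON vfs.database_id = mf.database_id
 AND vfs.file_id = mf.file_id;

Keep in mind these numbers are cumulative since startup, so they carry the same CHECKDB-and-overnights caveat as @SinceStartup.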
If your average write latency is over 3ms, then yes, you might have solid state storage in your SAN, but it's still not fast.
Consider local SSDs for TempDB first. Good local SSDs (like Intel's PCIe NVMe cards, which are under $2k USD especially at the sizes you're describing) have extremely low latency, lower than you can achieve with shared storage. However, under virtualization, this comes with a drawback: you can't vMotion the guest from one host to another to react to load or to hardware issues.
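If you do go the local SSD route, moving TempDB itself is just a metadata change plus a service restart. A minimal sketch, assuming a hypothetical T:\TempDB folder on the new drive and the default logical file names tempdev and templog (check sys.master_files if yours differ):

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');
/* The new paths take effect the next time the SQL Server service restarts */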
Consider a RAM drive last. There are two big gotchas with this approach:
First, if you really do have heavy TempDB write activity, the change rate on memory may be so high that you won't be able to vMotion the guest from one host to another without everyone noticing. During vMotion, you have to copy the contents of RAM from one host to another. If it's really changing that fast, faster than you can copy it over your vMotion network, you can run into issues (especially if this box is involved with mirroring, AGs, or a failover cluster).

Second, RAM drives are software. In the load testing that I've done, I haven't been all that impressed with their speed under really heavy TempDB activity. If it's so heavy that an enterprise-grade SSD can't keep up, then you're going to be taxing the RAM drive software, too. You'll really want to load test this heavily before going live - try things like lots of simultaneous index rebuilds on different indexes, all using sort-in-tempdb.
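For that load test, something along these lines works: run several of these concurrently against different large indexes (the table and index names here are hypothetical placeholders for your own):

ALTER INDEX IX_BigTable_SomeColumn ON dbo.BigTable
    REBUILD WITH (SORT_IN_TEMPDB = ON, MAXDOP = 0); /* sort work spills into TempDB instead of the user database */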
First and foremost, look at your SAN vendor's documentation for recommendations regarding storage for SQL Server. This really should be your first step as the vendor hopefully has done a lot of this general analysis for you.
If the documentation doesn't mention hosting databases, do some more digging before you choose to go with tiered storage. Understand how data migration between tiers works on your SAN and how frequently it occurs. Beware of anything that runs a scheduled job tracking how active sectors were during a time frame (often a 24-hour period, but this is often configurable). I've found that when it comes time to migrate the data, that data is often slow to access during the operation, so you'll want to ensure the migration runs during a period of low database activity (which means don't run it at the same time you're backing up your database). If you have a database that's active all day, I'd recommend you NOT use tiered storage at all if this is how tiering works on your SAN.

Another problem with migrating data between tiers based on this kind of analysis is that data access patterns change and aren't consistent throughout the day, which can leave data living on the faster tiers that really shouldn't be there. For instance, when you run index maintenance overnight, the SAN may flag that data as hot and migrate it to a higher tier. If those indexes don't get accessed during the next migration analysis window, you're now wasting I/Os on idle data. Depending on your database usage patterns, this could happen quite often: data gets migrated to the faster tier only to sit there idle.
Again, look at your vendor documentation; I would hope it clearly outlines the recommended approach for their hardware. Also, give Brent Ozar's SAN Storage Best Practices for SQL Server a read. It goes into much more depth on best practices and is well worth the time.
Best Answer
Regarding drives: if you CAN keep them on the same drive, you either have a very small database (in which case, get it on an M.2 / U.2 SSD with enough IOPS), or you have a problem to start with, or the drive is "fake" (like a SAN delivering a LUN that in reality has a lot of hardware behind it). I keep all my volumes these days on shared storage, and at the moment even on relatively slow HDDs (7200 RPM), but it is backed by 6.4 TB of M.2 SSD, which will grow even larger, so my read access time is pretty much guaranteed to be below one millisecond. Even with everything ending up on the same discs, I really want separate volumes so I can check contention with one look.
Same with mdf and ldf - all on their own volume.
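If everything does land on shared spindles, a rough way to get that "one look" at per-volume contention is to aggregate the file stats by drive letter. A sketch, assuming your files live on Windows drive-letter paths:

SELECT LEFT(mf.physical_name, 3) AS Volume, /* e.g. 'T:\' */
       SUM(vfs.num_of_reads)      AS Reads,
       SUM(vfs.num_of_writes)     AS Writes,
       SUM(vfs.io_stall_read_ms)  AS ReadStallMs,
       SUM(vfs.io_stall_write_ms) AS WriteStallMs
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON vfs.database_id = mf.database_id
 AND vfs.file_id = mf.file_id
GROUP BY LEFT(mf.physical_name, 3);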