Normally Windows does not support multiple partitions on a USB flash drive, but Linux has no problems with it. Windows has no trouble seeing the other partitions; it just will not let you assign them drive letters.
One solution is to use a utility such as Lexar BootIt to flip the Removable Media Bit of a USB drive. The utility is intended for Lexar drives; on anything else you use it at your own risk.
For another solution to trick Windows into thinking that the USB drive is an internal disk, see this article:
How To Create Multiple USB Stick Partitions.
The article refers to a product called Hitachi Microdrive whose original download no longer exists, but it can still be found here. However, it only works on 32-bit versions of Windows.
The article Fool the BIOS booting any USB stick as a Hard Disk claims that if the USB drive is already partitioned (which you can do from Linux), many BIOSes will treat it as an internal hard disk. Perhaps yours does too, but the article is not very recent. A sketch of the Linux partitioning step follows below.
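As an illustration of that Linux step, here is a minimal sketch that creates two partitions on a USB stick with parted and formats them as NTFS. The device name /dev/sdX, the partition sizes, and the two-partition layout are my own placeholders, not anything from the articles above, and the commands are destructive, so treat this as an outline rather than a ready-to-run script.

```python
#!/usr/bin/env python3
"""Sketch: partition a USB stick from Linux using parted.

Assumptions (not from the answer above): the device node is /dev/sdX,
parted and mkfs.ntfs (ntfs-3g) are installed, and the script runs as root.
Everything on the device is destroyed, so double-check the device name.
"""
import subprocess

DEVICE = "/dev/sdX"  # placeholder -- replace with the real device, e.g. /dev/sdb


def run(*args):
    """Run a command and fail loudly on a non-zero exit code."""
    print("+", " ".join(args))
    subprocess.run(args, check=True)


# New MBR partition table, then two primary partitions of equal size.
run("parted", "--script", DEVICE, "mklabel", "msdos")
run("parted", "--script", DEVICE, "mkpart", "primary", "ntfs", "1MiB", "50%")
run("parted", "--script", DEVICE, "mkpart", "primary", "ntfs", "50%", "100%")

# Quick-format both partitions as NTFS.
run("mkfs.ntfs", "-f", f"{DEVICE}1")
run("mkfs.ntfs", "-f", f"{DEVICE}2")
```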
In the absence of specific data about how you use your specific system, the optimal partitioning scheme is a single partition. For best performance, put the partition at the outer edge of the disk, make it no larger than needed for the files, and leave the rest of the disk unused.
If you make multiple partitions you risk decreasing performance, as the disk is forced to make head movements between the groups of files at the start of each partition and to maintain multiple sets of filesystem metadata.
In theory you can optimise the placement of frequently accessed files, but partitioning is an extremely crude way to achieve this, and if done without careful gathering of statistics it is likely to fail to achieve any benefit. For example, on my PC I suspect the most used files are the registry and Chrome's cache directory. Constructing a partitioning scheme around that would be difficult; the most used files may be scattered across disparate folders.
Update
As MSalters commented, designers of filesystems like NTFS and ext4 go to considerable lengths to optimise their performance. Of course they also place a high value on reliability and resilience, which means making trade-offs that affect performance.
Opinion: As with so many things, it is therefore often counter-productive for end-users to try to second-guess the decisions made by operating system developers. For most of us it is best to configure systems the way the OS designers expect most people to: set things up in the simplest and most straightforward way and accept most of the defaults suggested by the installer. Only if your use-case is very unusual and performance-critical might it be worthwhile tuning the installation manually. For example, if I were asked to build a commercial cluster of dedicated Oracle DBMS servers, rather than worry about raw vs. cooked filesystems I'd probably just use Oracle's Linux distro and expect it to do the right thing. If serious money were involved I would pay an Oracle consultant to make sure the right configuration options were selected. For the average desktop PC this should be completely unnecessary.
Best Answer
With a drive that big, and planning to use NTFS, I'd highly recommend partitioning -- unless I knew I'd only be using the drive to store large files (DVD-5 ISOs, DVD video files, multitrack audio, etc.).
If you're planning to store small files, you'll get better use out of the drive by splitting it up into partitions of 200-300 GB each. But tweaking for efficiency and performance depends heavily on the type(s) of data you'll be storing.
In particular, look at cluster size in relation to the kind of data you expect to store. The cluster is the smallest chunk of disk space that can be allocated to a file. Windows defaults to 4 KB clusters for 1 TB partitions, but you can override this with the command-line format tool (its /A switch) or a 3rd-party formatting GUI; supported cluster sizes are 1K, 2K, 4K, 8K, 16K, 32K and 64K. A 1 TB partition made of 64 KB clusters can hold DVD-5 ISOs very efficiently, but is very wasteful with very small files.
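To make the small-file penalty concrete, here is a quick back-of-the-envelope calculation (my own illustration with a hypothetical file mix, not figures from this answer). It compares how much space the same set of files consumes at 4 KB versus 64 KB cluster sizes, ignoring NTFS details such as MFT-resident files.

```python
import math


def allocated(file_size: int, cluster: int) -> int:
    """Space actually consumed on disk: whole clusters only (minimum one)."""
    return max(1, math.ceil(file_size / cluster)) * cluster


# Hypothetical mix: many small documents, some medium files, a few DVD-5 ISOs.
files = [2_000] * 100_000 + [150_000] * 20_000 + [4_700_000_000] * 50

data = sum(files)
for cluster in (4 * 1024, 64 * 1024):
    used = sum(allocated(size, cluster) for size in files)
    waste = used - data
    print(f"{cluster // 1024:>2} KB clusters: {used / 1e9:,.1f} GB allocated, "
          f"{waste / 1e9:,.2f} GB of that is slack")
```

With this mix the 64 KB clusters waste several gigabytes on the small files alone, while the large ISOs lose almost nothing either way; that is the trade-off to weigh before picking a cluster size.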
(I could be wrong about this: this article claims that MFT entries can range from 1 KB to 4 KB, so a file smaller than about 2 KB can actually be stored inside the MFT record itself. That should mean better performance for such a file. I'm not sure if or how MFT entry size is related to cluster size.)
From a practical standpoint, I've never found a real need for a 1 TB partition. I need 300-400 gigs for music, 200 gigs for photos and other random documents, and the rest for storing old episodes of Buffy the Vampire Slayer in AVI format. Splitting that into smaller partitions helps me organize my data. The downside is that if I haven't planned my partition sizes well, I may need more space on one partition or another, and resizing partitions is risky.