Because performance is heavily application-specific, you cannot quantitatively compare two video cards with different GPU architectures based on specs alone. Different GPU architectures scale differently with various specs such as memory speed, memory size, memory type, and bus width; and the only way to divine the scaling ability is to look at benchmarks.
The only exception to this is if you are comparing two cards that are "obviously" not in the same performance class, for example a 2 MB S3 ViRGE versus a 4 GB Radeon R9 290. The specs are so incredibly disparate (by several orders of magnitude) that it's not difficult to guess which card is most likely many generations newer and should have better performance.
But for the two cards you list, you have to refer to benchmarks for the exact applications (or very similar ones) that you are going to run. Note that "similar" means more than the same class of application (such as "games" or "bitcoin mining"): some applications within the same class may be better optimized for a particular piece of hardware, or a given piece of hardware may have more mature drivers.
That said, there are several specs you can compare if you're looking at two video cards based on the same GPU architecture. For example, see the Nvidia and AMD tables at Wikipedia.
GPU architecture/code name
GPU clock speeds
Number of shaders
- Unified shaders
- Texture mapping units
- Render output units
Memory
- Size (MB or GB)
- Bus type (DDR3, GDDR5, etc.)
- Bus width (bits, e.g., 64-bit, 128-bit, 256-bit)
- Frequency (MHz)
Power consumption
There are many other specs that are derived by multiplying two or more of these basic hardware specs, such as memory bandwidth (GB/s) or processing power (GFLOPS).
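The arithmetic behind those derived specs is straightforward. As a hedged sketch: peak memory bandwidth is the bus width (in bytes) multiplied by the effective transfer rate, where the multiplier per memory clock depends on the memory type (2 for DDR3, 4 for GDDR5), and peak single-precision throughput is commonly quoted as 2 operations per shader per cycle (a fused multiply-add). The R9 290 figures below are taken from its published specs; the function names are mine, not any standard API.

```python
# Data transfers per memory clock, by memory type (assumption:
# double-pumped DDR3, quad-pumped GDDR5 -- the usual marketing convention).
TRANSFERS_PER_CLOCK = {"DDR3": 2, "GDDR5": 4}

def memory_bandwidth_gbs(bus_width_bits, mem_clock_mhz, mem_type):
    """Peak memory bandwidth in GB/s = bytes per transfer x effective rate."""
    bytes_per_transfer = bus_width_bits / 8
    effective_rate_hz = mem_clock_mhz * 1e6 * TRANSFERS_PER_CLOCK[mem_type]
    return bytes_per_transfer * effective_rate_hz / 1e9

def processing_power_gflops(shader_count, core_clock_mhz):
    """Peak single-precision GFLOPS, counting 2 ops/cycle (multiply + add)."""
    return 2 * shader_count * core_clock_mhz * 1e6 / 1e9

# Radeon R9 290: 512-bit bus, 1250 MHz GDDR5, 2560 shaders at 947 MHz
print(memory_bandwidth_gbs(512, 1250, "GDDR5"))  # 320.0 GB/s
print(processing_power_gflops(2560, 947))        # 4848.64 GFLOPS
```

These are theoretical peaks; as the benchmarks discussed above show, real applications rarely get close to both numbers at once.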
Generally speaking, if you're looking for better performance, you want the newest architecture and the highest numbers for all other specs except power consumption (TDP). (But sometimes one manufacturer's technology may lag behind that of a competitor.)
Again, performance can be highly application-dependent, as you'll notice if you look at the results from any comprehensive benchmark suite for two different cards (especially cards with different architectures). While one application may benefit from larger memory, another might perform better with greater bandwidth (memory frequency, bus type, and bus width) or processing power (core frequency and number of shaders). Depending on your application, improving certain specs may not yield any performance gain whatsoever.
As if divining the computational performance weren't already difficult enough, most people also factor in cost, which may include not only the initial purchase price but also power consumption.
Best Answer
The HDMI standard itself does support audio. However, whether it works for you depends on the specific card; that said, all the recent AMD and Nvidia cards I know of do support audio over HDMI.
Some older cards only have partial support and require an S/PDIF connection to a header on the motherboard or sound card. I had an MSI Nvidia 9500 GT that did this. Example 1. Example 2.
Unfortunately, graphics card manufacturers don't seem to put HDMI audio details in their spec sheets (at least MSI doesn't), so it's difficult to be certain. You may have some luck contacting the manufacturer, asking the retailer, or searching for other users of the same card model. Failing that, you can make some assumptions: a brand-new card probably supports it.
Then there's software support. I'm not certain what the situation is on Linux, but on Windows, HDMI audio out shows up as a separate sound device when you have a discrete video card. This may require special drivers, and you might have to change your default output device.