More often than not you can make it work, eventually. The question is how much time you want to spend tracking down dependencies and fixing paths. You can even install RPMs from Red Hat into Debian's dpkg system, but it takes a bit of time to get everything set up and working. (I've done it before, and there was less hair-pulling than you'd expect, though it depends entirely on which packages you're trying to install.)
The binary files are usually compatible (unless there's some breaking change or customization in the kernel API of the specific branch you're using), and the only real trick to making it work is getting the correct versions of the necessary libraries installed and set up so that the binaries can find them. If the branches are close enough together (like Ubuntu and Debian), I wouldn't even think twice about installing a Debian .deb on an Ubuntu system; it almost always just works.
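A quick way to check whether a foreign binary will actually find the libraries it needs is `ldd`, which lists the shared libraries a dynamically linked executable depends on and where (or whether) each one resolves. A minimal sketch, using `/bin/sh` as a stand-in for whatever binary you're trying to install:

```shell
#!/bin/sh
# List the shared-library dependencies of a binary.
# Any line reading "not found" means the dynamic loader
# cannot resolve that library on this system.
ldd /bin/sh

# Count unresolved libraries; 0 means the binary should load.
missing=$(ldd /bin/sh | grep -c 'not found' || true)
echo "unresolved libraries: $missing"
```

If anything comes back "not found", that's the library version you'll be hunting down before the binary runs.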
So, why is this unique to Unix?
Typical operating systems, prior to Unix, treated files one way and treated each peripheral device according to the characteristics of that device. That is, if the output of a program was written to a file on disk, that was the only place the output could go; you could not send it to the printer or the tape drive. Each program had to be aware of each device used for input and output, and have command options to deal with alternate I/O devices.
Unix treats all devices as files, but with special attributes. To simplify programs, standard input and standard output are the default input and output devices of a program. So program output normally intended for the console screen could go anywhere, to a disk file or a printer or a serial port. This is called I/O redirection.
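Because files and devices share the same interface, the shell can point a program's output at either one without the program knowing or caring. A minimal sketch (the filename is just a placeholder):

```shell
#!/bin/sh
# The same command, different destinations -- the program itself
# is unchanged; only the shell's redirection differs.
echo "hello" > /tmp/demo.txt    # to a regular file on disk
echo "hello" > /dev/null        # to a device file (discarded)
cat < /tmp/demo.txt             # standard input read from a file
rm -f /tmp/demo.txt
```

Swap `/dev/null` for a printer or serial device node and the same mechanism drives real hardware.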
Do other operating systems, such as Windows and the Mac, not operate on files?
Of course all modern OSes support various filesystems and can "operate on files"; the distinction lies in how devices are handled. I don't know about the Mac, but Windows does offer some I/O redirection.
And, compared to what other operating systems is it unique?
Not really, any more. Linux has the same feature. And of course, once an OS adopts I/O redirection, it tends to pick up other Unix features and ends up Unix-like in the end.
Best Answer
Daniel Andersson's comment about POSIX is the real answer here: there is a standard called POSIX which defines the core of a UNIX-like system, both in terms of shell commands and system calls. In theory, if you write software to the POSIX spec, it should be possible to compile and run it on any UNIX, Linux, or BSD system.
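As a small illustration, a script that sticks to POSIX sh constructs (no bashisms like `[[ ]]` or arrays) will behave the same under dash on Debian, ksh on a commercial UNIX, or bash on Linux. A sketch, with a made-up helper function:

```shell
#!/bin/sh
# Pure POSIX sh: `case` for pattern matching and parameter
# expansion, instead of bash-only [[ ]] or string operators.
ext_of() {
    case "$1" in
        *.*) echo "${1##*.}" ;;   # strip everything up to the last dot
        *)   echo "none" ;;
    esac
}

ext_of "archive.tar.gz"   # prints: gz
ext_of "README"           # prints: none
```

Running it with `sh script.sh` rather than `bash script.sh` is a cheap way to catch accidental bashisms early.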
http://pubs.opengroup.org/onlinepubs/009695399/mindex.html will give you a definitive answer as to what constitutes POSIX, but that's not a useful answer for practical purposes. Someone else may have good command references for common differences between Linux systems and other UNIX-like systems.
One specific example: on Linux, "killall" kills all processes with a particular name. On Solaris, it kills every process it can, as part of shutting the system down. It's important not to use the wrong one.
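For that reason, `pkill` is often suggested as the safer way to kill processes by name, since it exists on both Linux and Solaris with the same meaning. A hedged sketch; the distinctive sleep duration is just a way to avoid matching unrelated processes:

```shell
#!/bin/sh
# Start a throwaway background process with a distinctive
# command line, then kill it by name with pkill.
sleep 31449 &
pid=$!

pkill -f 'sleep 31449'     # -f matches against the full command line

wait "$pid" 2>/dev/null    # reap it; exit status reflects the signal
kill -0 "$pid" 2>/dev/null && echo "still running" || echo "gone"
```

`pgrep` with the same arguments lets you preview exactly which processes would be signalled before committing.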