It's both a historical and security restriction.
Historically, most drives weren't removable. So it made sense to restrict mounting to people who had legitimate physical access, and they would likely have access to the root account. The fstab entries allow administrators to delegate mounting to other users for removable drives.
From a security point of view, there are three major problems with allowing arbitrary users to mount arbitrary block devices or filesystem images at arbitrary locations.
- Mounting to a non-owned location shadows the files at that location. For example: mount a filesystem of your choice on `/etc`, with an `/etc/shadow` containing a root password that you know. This is fixed by allowing a user to mount a filesystem only on a directory that he owns.
- Filesystem drivers have often not been tested as thoroughly against malformed filesystems. A buggy filesystem driver could allow a user supplying a malformed filesystem to inject code into the kernel.
- Mounting a filesystem can allow the mounter to cause some files to appear that he would not otherwise have permission to create. Setuid executables and device files are the most obvious examples, and they are fixed by the `nosuid` and `nodev` options, which are implied by having `user` in `/etc/fstab`.
So far, enforcing `user` when `mount` is not called by root is enough. But more generally, being able to create a file owned by another user is problematic: the content of that file risks being attributed to the purported owner instead of the mounter. A casual attribute-preserving copy by root to a different filesystem would produce a file owned by the declared-but-uninvolved owner. Some programs check that a request to use a file is legitimate by checking that the file is owned by a particular user, and this would no longer be safe (the program must also check that the directories on the access path are owned by that user; if arbitrary mounting were allowed, they would also have to check that none of these directories is a mount point whose mount was created by neither root nor the desired user).
For practical purposes, it is possible nowadays to mount a filesystem without being root, through FUSE. FUSE drivers run as the mounting user so there is no risk of privilege escalation by exploiting a bug in kernel code. FUSE filesystems can only expose files that the user has the permission to create, which solves the last issue above.
The link `/dev/$disk` points to the whole of a block device, but, on a partitioned disk without unallocated space, the only part which isn't also represented in `/dev/$disk[num]` is the first 2KB-4MB or so: `$disk`'s partition table. It's just some information written to the raw device in a format that the firmware and/or OS can read. Different systems interpret it in different ways and for different reasons. I will cover three.
On BIOS systems this table is written in the MBR (master boot record) format so the firmware can figure out where to find the bootable executable. It reads the partition table because, in order to boot, BIOS reads in the first 512 bytes of the partition the table marks with the bootable flag and executes them. Those 512 bytes usually contain a bootloader (like `grub` or `lilo` on a lot of Linux systems) that then chainloads another executable (such as the Linux kernel) located on a partition formatted with a filesystem the loader understands.
On EFI systems and/or BIOS systems with newer kernels this partition table can be in the GPT (GUID partition table) format. EFI firmware understands the FAT filesystem, and so it looks for the partition the table describes with the EFI system partition flag, mounts it as FAT, and attempts to execute the path stored in its Boot0000-{GUID} NVRAM variable. This is essentially the same task that BIOS bootloaders are designed to do and, so long as the executable you wish to load can be interpreted by the firmware (such as most Linux kernels since v3.3), obviates their use. EFI firmware is a little more sophisticated.
After boot, if a partition table is present and the kernel understands it, `/dev/${disk}1` is mapped to the 4MB+ offset and ends where the partition table says it does. Partitions really are just arbitrary logical dividers like:
start of disk | partition table | partition 1 | ... and so on | end of disk
Though I suppose it could also be:
s.o.d. | p.t. | --- unallocated raw space --- | partition 1 | ... | e.o.d.
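To make the "table at the head of the disk" idea concrete, here is a sketch of what reading a classic MBR involves: four 16-byte partition entries at byte offset 446 of sector 0, each recording a boot flag, a type byte, a starting LBA, and a sector count (the entry layout is the standard MBR one; the demo partition values are made up):

```python
import struct

def parse_mbr(sector: bytes):
    """Parse the four primary partition entries from a 512-byte MBR sector."""
    assert len(sector) == 512 and sector[510:512] == b"\x55\xaa", "bad MBR signature"
    parts = []
    for i in range(4):
        entry = sector[446 + 16 * i : 446 + 16 * (i + 1)]
        boot_flag = entry[0]                              # 0x80 = bootable
        ptype = entry[4]                                  # partition type byte
        lba_start = struct.unpack_from("<I", entry, 8)[0] # first sector
        num_sectors = struct.unpack_from("<I", entry, 12)[0]
        if ptype != 0:                                    # type 0 = unused slot
            parts.append({"bootable": boot_flag == 0x80, "type": ptype,
                          "start": lba_start, "sectors": num_sectors})
    return parts

# Synthetic sector: one bootable Linux (type 0x83) partition at LBA 2048.
sector = bytearray(512)
sector[446:446 + 16] = struct.pack("<B3xB3xII", 0x80, 0x83, 2048, 204800)
sector[510:512] = b"\x55\xaa"
print(parse_mbr(bytes(sector)))
```

The kernel does essentially this when it scans the device, then exposes each `(start, sectors)` range as `/dev/$disk[num]`.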
It all depends on the layout you define in the partition table - which you can do with tools like `fdisk` for MBR formats or `gdisk` for GPT formats.
- The firmware needs a partition table for the boot device, but the kernel needs one for any subdivided block device on which you wish it to recognize a filesystem. If a disk is partitioned, without the table the kernel would not locate superblocks in a disk scan. It reads the partition table and maps those offsets to links in `/dev/$disk[num]`. At the start of each partition it looks for the superblock. It's just a few KB of data (if that) that tells the kernel what type of filesystem it is. A robust filesystem will distribute backups of its superblock throughout its partition. If the partition does not contain a readable superblock which the kernel understands, the kernel will not recognize a filesystem there at all.
In any case, the point is you don't really need these tables on any disk that need not ever be interpreted by firmware - like disks from which you don't boot (which is also the only workable GPT+BIOS case) - and on which you want only a single filesystem. `/dev/$disk` can be formatted in whole with any filesystem you like. You can `mkfs.fat /dev/$disk` all day if you want - and probably Windows will anyway, as it generally does for device types it marks with the removable flag.
In other words, it is entirely possible to put a filesystem superblock at the head of a disk rather than a partition table, in which case, provided the kernel understands the filesystem, you can:
mount /dev/$disk /path/to/mount/point
But if you want partitions and they are not already there, then you need to create them - meaning write a table mapping their locations to the head of the disk - with tools like `fdisk` or `gdisk` as mentioned.
All of this together leads me to suggest that your problem is one of these three:

- Your disk has no partition table and no filesystem - it was recently wiped, never used, or is otherwise corrupt.
- Your disk's partition table is not recognized by your OS kernel - BIOS and EFI are not the only firmware types. This is especially true in the mobile/embedded realm, where an SDHC card could be especially useful, though many such devices use layers of less-sophisticated filesystems that blur the lines between a filesystem and a partition table.
- Your disk has no partition table and is formatted with a filesystem not recognized by your OS kernel.
After rereading your comment above I'm fairly certain it is the last case. I recommend you get a manual for that TV, try to find out whether you can get whatever filesystem it is using loaded as a kernel module on a desktop Linux, and mount the disk there.
On an ext4 filesystem (like ext2, ext3, and most other Unix-originating filesystems), the effective file permissions don't depend on who mounted the filesystem or on mount options, only on the metadata stored within the filesystem.
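To illustrate that the permission bits live in the file's own metadata: a short standard-library sketch that sets a mode on a temporary file and reads it back with `os.stat` - the result is the stored inode mode, with mount options playing no part:

```python
import os
import stat
import tempfile

# Create a scratch file and stamp a specific mode onto its inode.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o640)

# stat reads the mode back out of the filesystem's metadata.
mode = os.stat(path).st_mode
print(stat.filemode(mode))   # rendered like ls -l, e.g. -rw-r-----

os.unlink(path)
```

Every process, whatever user mounted the filesystem, sees these same bits; that is why remounting as a different user changes nothing on ext4.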
If you have a removable filesystem that uses different user IDs from your system, you can use `bindfs` to provide a view of any filesystem with different ownership or permissions. The removable filesystem must be mounted already, e.g. on `/mnt/sda1`; then, if you want a particular user to appear as the owner of all files, you can run something like