Because `apt-get` (or plain `apt`) is for the pros, and when the pros run `sudo apt-get dist-upgrade` and see a new kernel being installed, they know they have to reboot to activate it. The automatic update is for everyone else out there!
Alternatively, from Ubuntu 16.04 onwards you can enable Canonical Livepatch, which lets you apply kernel updates while the server is running. :-)
For the archives, after much trial and error this is what I've found (in retrospect, some bits make sense, others are still quite confusing).
If you `sudo apt-get install python-xyz`, the package will appear in `pip list`. You can of course import it in Python, but it doesn't appear in `pip.get_installed_distributions()`. (The opposite is not true: if you `pip install xyz`, it will not appear in apt/Synaptic.)
After `sudo apt-get install python-xyz`, if you then run `pip install xyz`, what happens depends on the version of pip you have.
OLD v1.5.6 (the version currently shipped in the Ubuntu repositories):
This version of pip (1.5.6) will just install a new copy of xyz in a different location. You end up with multiple copies, and this causes a huge mess, e.g.:
- numpy from apt-get is 1.8.2 at /usr/local/lib/python2.7/dist-packages
- numpy from pip is 1.10.4 at ~/.local/lib/python2.7/site-packages
If I run `pip install numpy` yet again, it downloads and installs it again, so you can end up with many different versions that you can't really access. I can run `pip install numpy` five times, and then run `pip uninstall numpy` five times! Obviously you can take care not to do that, but sometimes other software's install scripts are a bit careless and can mess things up. It's unbelievable that Ubuntu officially ships this version of pip.
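Which of those duplicate copies actually gets used comes down to `sys.path` ordering: Python imports the first match it finds. A minimal sketch for checking which file a module is really loaded from (using `json` here only as a stand-in for numpy/scipy):

```python
import sys


def imported_from(name):
    """Import a module and return the path it was actually loaded from."""
    __import__(name)
    module = sys.modules[name]
    # Built-in modules have no __file__ attribute
    return getattr(module, "__file__", "(built-in)")


# e.g. prints something like /usr/lib/python2.7/json/__init__.pyc
print(imported_from("json"))
```

Running this for numpy on a machine with both an apt-get copy and a pip copy shows which one shadows the other.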
NEW v8.0.2 (the current version on PyPI):
Newer versions of pip (e.g. 8.0.2, the one on PyPI) will refuse to install the same package, saying the requirement is already satisfied, so you cannot install a duplicate copy. This is good behaviour (more on getting this version of pip later).
In this case you can only upgrade, i.e. install with the -U flag.
However, when you try `pip install -U xyz` on a package installed with `sudo apt-get`, you'll get a permissions error, because apt-get installed it under /usr/ and you need root access to write there. So AFAIK you have no choice but to run `sudo pip install -U xyz` to update it. In this case pip installs the latest package into the same place apt-get wrote to, e.g. in my case /usr/local/lib/python2.7/dist-packages. This is good.
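The directories involved can be listed from Python itself. A small sketch using the stdlib `site` module: the per-user directory (where `pip install --user` lands) is searched before the system-wide ones (where apt-get and `sudo pip` write), which is why user installs shadow system installs:

```python
import site

# Per-user packages, e.g. ~/.local/lib/pythonX.Y/site-packages
print("user site:  ", site.getusersitepackages())

# System-wide package directories (dist-packages on Debian/Ubuntu)
for path in site.getsitepackages():
    print("system site:", path)
```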
It's also worth pointing out that the packages on apt are often considerably older than those on pip (e.g. numpy 1.8.2 vs 1.10.4, scipy 0.14.1 vs 0.17.0, ipython 2.3 vs 4.0.3, spyder 2.3.5 vs 2.3.8).
So my current approach is to get the big things with `sudo apt-get`, e.g.
sudo apt-get install python-numpy python-scipy python-matplotlib ipython ipython-notebook python-pandas python-sympy python-nose spyder
and then update them (or at least some of them) with `sudo pip install -U`.
NB It seems quite important to get the new pip from pip itself (very meta):
sudo apt-get install python-pip
sudo pip install -U pip
P.S. I am aware of virtualenv but I have no need for it right now. I need only one development environment.
And here is a little script to dump package names, versions, and paths (it only sees pip-installed modules, not those from apt-get):
import pip

# get_installed_distributions() only knows about pip-installed packages
dists = pip.get_installed_distributions()
for line in sorted(d.location + "\t" + d.project_name + " (" + d.version + ")" for d in dists):
    print line
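Note that `pip.get_installed_distributions()` is an internal pip API and was removed in later pip releases; a sketch of an equivalent dump via `pkg_resources` (from setuptools, assuming it is installed) keeps working independently of the pip version:

```python
# Same tab-separated "location  name (version)" listing, but via
# pkg_resources.working_set instead of pip internals.
import pkg_resources

for line in sorted("%s\t%s (%s)" % (d.location, d.project_name, d.version)
                   for d in pkg_resources.working_set):
    print(line)
```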
Best Answer
Not always. You can perfectly well use `apt-get` without `sudo`. There are instances where you don't need `sudo` at all: `apt-get download`, which downloads a package to your current directory; `apt-get source`, which downloads the Debian source files to your current directory; `apt-get changelog`, which downloads and prints the changelog of a given package; and any command run with `--simulate`/`--dry-run`/`--no-act` (in the case of `install` you also need `--no-download`). This is because these actions/commands don't require writing to system directories.
Now, why does `apt-get` need `sudo`? Actually it doesn't. You can ditch apt-get, download a package with `wget`, and use `dpkg --extract` to unpack it into whatever directory you like. There's also `dpkg --instdir`, which should work for binary-only packages. Now, why isn't this the default? Because it's a pain. To do what you want, we would need to package everything twice: once the right way, and once again for relocated installs. At build time, binaries normally need to know where the files and libraries they depend on live (in some cases this is hardcoded at compilation).
Now, what can you do instead? chroot into some environment, à la virtualenv, where you can install packages without root.
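For the Python side of that idea, an isolated no-root install target can be sketched with the stdlib `venv` module (Python 3.3+, so this assumes a newer Python than the 2.7 setup discussed above):

```python
import os
import tempfile
import venv

# Create an isolated environment in a scratch directory; with_pip=False
# skips bootstrapping pip so the example stays fast and offline.
target = os.path.join(tempfile.mkdtemp(), "env")
venv.create(target, with_pip=False)

# The environment has its own bin/ (Scripts/ on Windows) and its own
# site-packages directory you can write to without sudo.
print(target)
```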
In summary, this is not the way apt-get was meant to be used, and I don't know of another package manager similar to apt-get that lets you do this. At the end of the day, `apt-get` is just a front-end to dpkg, which can do some of this.