Apt-cacher is a caching proxy for Debian packages, allowing a number of computers to share
a single cache. Packages requested from the cache only need to be downloaded
from the Debian mirrors once, no matter how many local machines need to install
them. This saves network bandwidth, improves performance for users, and reduces
the load on the mirrors.
Stand-alone daemon mode: fork and run in the background.
Inetd daemon mode: only for use from /etc/inetd.conf.
Specify an alternative configuration file (default: /etc/apt-cacher/apt-cacher.conf).
Write the PID of the running process to this file.
Experimental option to chroot to the given directory.
Number of attempts to bind to the daemon port.
Override values in the configuration file. Can be given multiple times.
Print brief usage.
Setting up apt-cacher involves two stages: installing apt-cacher itself on a
single machine on your network to act as a server and configuring all client
machines to use the server's cache.
Apt-cacher can be installed to run either as a daemon [preferred] or as a CGI
script on a web server such as Apache. When a client (apt-get(1), aptitude(8),
synaptic(8) etc.) requests a package from the cache machine, the request is
handled by apt-cacher which checks whether it already has that particular
package. If so, the package is returned immediately to the client for
installation. If not, or if the package in the local cache has been superseded
by a more recent version, the package is fetched from the specified
mirror. While being fetched it is simultaneously streamed to the client, and
also saved to the local cache for future use.
Other client machines on your network do not need apt-cacher installed in order
to use the server cache. The only modification on each client computer is to
direct it to use the server cache. See CLIENT CONFIGURATION below for ways of
doing this.
Apt-cacher can be installed in various ways on the server. The recommended way
is by running the program as a daemon. This should give the best performance and
the lowest overall memory usage.
Edit the file /etc/default/apt-cacher, change AUTOSTART to 1, then start the
daemon (as root) using the init script.
NOTE: in inetd mode access control checks are not performed and the
allowed_hosts and denied_hosts options have no effect. Access controls can be
implemented using inetd or the tcpd wrapper. See README.Debian for further
details.
This is not recommended for long-term use because it has a visible impact on
network performance and server load. By default, the apt-cacher package adds a
default configuration profile to Apache, and clients can then access the cache
through the web server.
SERVER CONFIGURATION OPTIONS
Apt-cacher uses a configuration file for setting important
options. Additionally, there are a few command-line options to control
behaviour. See COMMAND-LINE OPTIONS above.
The default configuration file is /etc/apt-cacher/apt-cacher.conf.
It is read every time the daemon starts or a CGI/inetd slave is
executed. Therefore a stand-alone daemon may need to be restarted or reloaded
using the init script in order to reread its configuration. A running daemon
will also reread the configuration file on receiving SIGHUP (see SIGNALS below).
Each line in the file consists of
configuration_option = value
Long lines can be split by preceding the newlines with '\'. Whitespace is
ignored. Lines beginning with '#' are comments and are ignored. If multiple
assignments of the same option occur, only the last one will take effect. For
binary options, 0 means off or disabled, any other integer means on or
enabled. Options which can accept lists may use either ';' or ',' to separate the
individual list members.
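The syntax rules above can be illustrated with a short hypothetical fragment; the values are placeholders, and of the option names only allowed_hosts, denied_hosts and limit appear elsewhere in this page:

```
# Comments start with '#'; the last assignment of an option wins.
limit = 25k
allowed_hosts = 192.168.0.0/24, 10.0.0.0/8

# Long lines can be continued by escaping the newline with '\'.
denied_hosts = 192.168.0.13; \
               10.1.2.3
```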
The options available in the config file (and their default settings) are:
The directory where apt-cacher will store local copies of all packages
requested. This can grow to many hundreds of MB, so make sure it is on a
partition with plenty of room. NOTE: the cache directory needs to contain some
subdirectories for correct storage management. If you want to create a custom
directory, please use the script /usr/share/apt-cacher/install.pl or use the
initially created cache directory as an example.
The email address of the administrator, displayed on the info page and in
traffic reports.
Avoid any outgoing connection, return files available in the cache and just
return errors if they are missing.
Only allow access to specific upstream mirrors. The requested URL must match an
item in this list for access to be granted. The part of the URL referring to the
apt-cacher server itself (http://apt-cacher.server:port[/apt-cacher]/) is
ignored. Matching begins immediately after that.
A mapping scheme to rewrite URLs, converting the first path component after the
apt-cacher server name into a remote mirror. For example, a (hypothetical) map
entry of 'debian ftp.debian.org/debian' would forward requests beginning with
/debian to that mirror.
Perl regular expression (perlre(1)) which matches all files used by the Debian
installer or Debian Live (files that are uniquely identified by their full path
but don't need to be checked for freshness).
Whether to generate traffic reports daily. Traffic reports can be accessed by
pointing a browser at the /report path on the server (see the FAQ below).
Whether to flush obsolete versions of packages from your cache daily. You can
check what will be done by running the cleanup script in simulation mode,
which just shows what would be done to the contents of the cache. A package
version is not obsolete if any of the distributions (stable, testing, etc) or
architectures you use reference it. It should be safe to leave this on.
Directory to use for the access and error log files and traffic report. The
access log records all successful package requests using a timestamp, whether
the request was fulfilled from cache, the IP address of the requesting computer,
the size of the package transferred, and the name of the package. The error log
records major faults, and is also used for debug messages if the debug directive
is set to 1. Debugging is toggled by sending SIGUSR1 (see SIGNALS below).
How many hours Package and Release files are cached before they are assumed to
be too old and must be re-fetched. Setting 0 means that the validity of these
files is checked on each access by comparing time stamps in HTTP headers on the
server with those stored locally.
Apt-cacher can pass all its requests to an external http proxy like
Squid, which could be very useful if you are using an ISP that blocks
port 80 and requires all web traffic to go through its proxy. The
format is 'hostname:port'.
Use of an external proxy can be turned on or off with this option.
External http proxies sometimes need authentication to grant full access. The
format is 'username:password', e.g. 'proxyuser:proxypass'.
Use of external proxy authentication can be turned on or off with this option.
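Taken together, the proxy-related settings might look like the following sketch. The option names (http_proxy, use_proxy, http_proxy_auth, use_proxy_auth) follow apt-cacher's usual naming but are assumptions here, and the host, port and credentials are placeholders:

```
http_proxy = proxy.example.com:3128
use_proxy = 1
http_proxy_auth = proxyuser:proxypass
use_proxy_auth = 1
```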
Specify a particular interface to use for the upstream connection. Can be an
interface name, IP address or host name. If unset, the default route is used.
Rate limiting sets the maximum rate in bytes per second used for fetching files
from the upstream mirrors. The syntax is that of wget(1): use a 'k' or 'm'
suffix for kilobytes or megabytes per second, e.g. 'limit=25k'. Use 0 or a
negative value for no rate limiting.
The effective user id to change to after allocating the ports.
The effective group id to change to.
Switches on experimental checksum validation of files. Requires
Whether debug mode is enabled. Off by default. When turned on (non-zero), lots
of extra debug information will be written to the error log. This can make the
error log grow quite large, so only use it when trying to debug
problems. Additional information from the libcurl backend can be obtained by
increasing this parameter further.
The daemon can be restricted to listen only on particular local IP
address(es). A single item or a list of IPs. Use with care.
If your apt-cacher server is directly connected to the Internet and you are
worried about unauthorised fetching of packages through it, you can specify a
range of IP addresses that are allowed to use it. Localhost (127.0.0.1) is
always allowed, other addresses must be matched by allowed_hosts and not by
denied_hosts to be permitted to use the cache. Note that by default apt-cacher
will allow requests from any client, so set a range here if you want to restrict
access. This can be a single item, a list, an IP address with netmask, or an IP
range. See the default configuration file for further details and examples.
The opposite of allowed_hosts setting, excludes hosts from the list of allowed
hosts. Not used in inetd daemon mode.
Like allowed_hosts for IPv6 clients.
Like denied_hosts for IPv6 clients.
There are two different ways of configuring clients to use apt-cacher's
cache. Ensure that you do not use a mixture of both methods. Changing both
proxy settings and base URLs can create some confusion.
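The first method, using apt's own proxy setting, can be sketched as follows; the server name and the file name are placeholders, and 3142 is assumed as the daemon's default port:

```
# /etc/apt/apt.conf.d/01proxy (illustrative file name)
Acquire::http::Proxy "http://apt-cacher.server:3142";
```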
Access cache like a mirror
To use the cache in this way, edit /etc/apt/sources.list on each client and
prepend the address of the apt-cacher server to each deb/src line.
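For instance, assuming the server name apt-cacher.server and default daemon port 3142, a line such as

```
deb http://ftp.debian.org/debian stable main
```

would become:

```
deb http://apt-cacher.server:3142/ftp.debian.org/debian stable main
```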
If you configure clients this way and also use apt-listbugs(1), you will need
to exclude bugs.debian.org from proxying, as apt-listbugs sends (unsupported)
POST requests.
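A minimal exclusion via apt's configuration might look like this; the file name is illustrative, and the special value DIRECT tells apt not to proxy requests to that host:

```
# /etc/apt/apt.conf.d/02listbugs (illustrative file name)
Acquire::http::Proxy::bugs.debian.org "DIRECT";
```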
It is not recommended to set the http_proxy environment variable, as this may
affect a wide variety of applications using a variety of URLs. Apt-cacher will
not work as a general-purpose web cache!
Q: Can I just copy some .debs into the cache dir and have it work (precaching)?
A: Almost! A bit of additional work is required to make them usable and
persistent in the cache.
First, alongside the .debs apt-cacher stores additional information: a
flag file to verify that the package is completely downloaded, and a file with
the HTTP headers that were sent by the server.
If you copy .debs straight into the storage directory and don't add those
things, fetching them *will* fail.
Fortunately Apt-cacher now comes with an import helper script to make things
easier. Just put a bunch of .debs into /var/cache/apt-cacher/import (or
a directory called 'import' inside whatever you've set your cache dir to be),
and run /usr/share/apt-cacher/apt-cacher-import.pl (you can specify an
alternative source directory with the first parameter). The script will run
through all the package files it finds in that directory, move them to the
correct locations and create the additional flag/header files. Run it with "-h"
to get more information about additional features - it can discover files
recursively and save space by making links to files located elsewhere in the
filesystem.
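Assuming the default cache directory /var/cache/apt-cacher, a typical import session might look like this illustrative transcript (run as root):

```
# copy some previously downloaded packages into the import directory
cp /var/cache/apt/archives/*.deb /var/cache/apt-cacher/import/
# run the import script; add -h for the full option list
/usr/share/apt-cacher/apt-cacher-import.pl
```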
Second: if the daily cleanup operation is enabled (see clean_cache option above)
and there is no Packages.gz (or .bz2) file that refers to the new files, the
package files will be removed soon. On the other hand, if there are potential
clients that would download these packages and the clients have run
"apt-get update" through apt-cacher at least once, there is no reason to worry.
Q: Does the daily generation of reports or cleaning of the cache depend on
whether apt-cacher is running continuously as a daemon?
A: No, the regular maintenance jobs are independent of a running server. They
are executed by cron and use only static data like logs and cached index files
and package directory listings. However, apt-cacher should be configured
correctly because the cleanup job runs it directly (in inetd mode) to refresh
the cached index files.
Q: Are host names permissible? What if a host is in both lists (a literal
reading of the current description is that the host is denied)?
A: No, you must supply IP addresses.
Unlike some other software such as Apache, there is no configurable checking
order: a client host is checked against both filters, allowed_hosts and
denied_hosts. The following combinations are possible: if allowed_hosts=* and
denied_hosts is empty, every host is allowed; if allowed_hosts=<ip data> and
denied_hosts is empty, only the listed hosts are permitted; if allowed_hosts=*
and denied_hosts=<ip data>, every host is accepted except those matched by
denied_hosts; if allowed_hosts=<ip data> and denied_hosts=<ip data>, only the
clients from allowed_hosts are accepted, except those also matched by
denied_hosts. An empty allowed_hosts blocks everything. If allowed_hosts is
omitted, * is assumed. denied_hosts must not have the value "*"; use an empty
allowed_hosts setting if you want that.
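For instance, to admit only one subnet while blocking a single machine inside it (the addresses are placeholders), the last combination above would be written as:

```
allowed_hosts = 192.168.1.0/24
denied_hosts = 192.168.1.13
```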
Q: generate_reports: how does being able to view the reports depend on the web
server you are running? Are they only available if apt-cacher is running on
port 80?
A: The report is generated by a script (started by a cron job, see above) and
is stored as $logdir/report.html. You can access it using the "/report" path in
the access URL. If apt-cacher is running in CGI mode, the "/report" path is
simply appended to the CGI access URL.
Apt-cacher currently only handles forwarding GET requests to HTTP
sources. Support for other access methods (ftp, rsync) is not currently planned.
Apt-cacher handles the following signals:
SIGHUP: causes the configuration file to be re-read.
SIGUSR1: toggles printing of debug output to /var/log/apt-cacher/error.log.