37 changes: 0 additions & 37 deletions .drone.yml

This file was deleted.

12 changes: 0 additions & 12 deletions .github/FUNDING.yml

This file was deleted.

26 changes: 0 additions & 26 deletions .github/workflows/go.yml

This file was deleted.

1 change: 1 addition & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -1 +1,2 @@
.idea
.vscode/
1 change: 1 addition & 0 deletions .tool-versions
Original file line number Diff line number Diff line change
@@ -0,0 +1 @@
golang 1.20.7
27 changes: 12 additions & 15 deletions README.md
Original file line number Diff line number Diff line change
@@ -1,7 +1,10 @@
[![Build Status](https://drone.buckket.org/api/badges/buckket/pkgproxy/status.svg)](https://drone.buckket.org/buckket/pkgproxy)

**pkgproxy** is a caching proxy server specifically designed for caching Arch GNU/Linux packages for pacman.

_This is a major rewrite of https://github.com/buckket/pkgproxy in order to iron out some bugs and implement
concurrent downloading of the same uncached file. It can be used as a drop-in replacement for the original
`pkgproxy`, with the exception that it does not cache databases (which is transparent to pacman anyway)._


Updating multiple Arch systems in your home network can be a slow process if you have to download every pkg file
for every machine over and over again. One could set up a local Arch Linux mirror, but it takes a considerable amount of
disk space (~60GB). Instead, why not just cache the packages actually downloaded on one machine, since it’s highly likely that
@@ -10,21 +13,15 @@ and saves a copy to disk so that future requests of the same file can be served

## Installation

### From source

go get -u git.buckket.org/buckket/pkgproxy

### Packet manager

- Arch Linux: [pkgproxy](https://aur.archlinux.org/packages/pkgproxy/)<sup>AUR</sup>
go install github.com/binary-manu/pkgproxy/cmd/pkgproxy@latest

## Usage

Update your clients mirror list (`/etc/pacman.d/mirrorlist`) to point to `pkgproxy`:

Server = http://${HOST_WITH_PKGPROXY_RUNNING}:8080/$repo/os/$arch

Run `pkgproxy` manually or use a systemd service file (example provided):
Run `pkgproxy` manually or use a systemd service file.

```
Usage:
@@ -43,12 +40,12 @@ Usage:
Show version information
```

## Limitations
## Things to know

- Multiple incoming requests of the same file are handled sequentially, which may cause pacman to timeout,
especially if a large file is being downloaded.
- All cached files are deleted when `pkgproxy` exits. No files will be deleted by `pkgproxy` as long as
it is running. If you want to limit disk usage create a systemd timer which deletes files older than x days.
- Database files are not cached.
- Packages are cached, and can be downloaded concurrently: there is no blocking while a package is being
stored into the cache.
- All cached files are deleted when `pkgproxy` exits, unless `-keep-cache` is used.

## License

22 changes: 22 additions & 0 deletions cmd/internal/Cache.go
Original file line number Diff line number Diff line change
@@ -0,0 +1,22 @@
package internal

import "sync"

// Cache is a map protected by a Mutex, which can only be accessed
// via the method LockedDo.
type Cache[K comparable, V any] struct {
cache map[K]V
mutex sync.Mutex
}

// LockedDo executes a function with the cache mutex held, so that
// f is the only user at the moment. The mutex is released as soon as
// f returns.
func (c *Cache[K, V]) LockedDo(f func(cache map[K]V) error) error {
c.mutex.Lock()
defer c.mutex.Unlock()
if c.cache == nil {
c.cache = make(map[K]V)
}
return f(c.cache)
}
28 changes: 28 additions & 0 deletions cmd/internal/PerfectWriter.go
Original file line number Diff line number Diff line change
@@ -0,0 +1,28 @@
package internal

import "io"

// PerfectWriter never fails a write: if a write to the wrapped Writer
// fails, it records the error, reports success and silently discards any
// further data. This makes it suitable for io.MultiWriter, where an error
// from any single Writer would otherwise abort the whole copy.
// The first write error is made available via Error().
type PerfectWriter struct {
writer io.Writer
err error
}

// NewPerfectWriter wraps a writer into a PerfectWriter and returns it
func NewPerfectWriter(w io.Writer) *PerfectWriter {
return &PerfectWriter{w, nil}
}

func (w *PerfectWriter) Error() error {
return w.err
}

func (w *PerfectWriter) Write(data []byte) (int, error) {
if w.err == nil {
_, w.err = w.writer.Write(data)
}
return len(data), nil
}
57 changes: 57 additions & 0 deletions cmd/internal/WORMSeekCloser.go
Original file line number Diff line number Diff line change
@@ -0,0 +1,57 @@
package internal

import (
"io"
"sync"
)

// WORMSeekCloser models something which can be read in parallel without
// using the file pointer (hence the ReaderAt), but can be written by one
// writer at a time (Writer). It can also be closed and seeked.
type WORMSeekCloser interface {
io.ReaderAt
io.Writer
io.Closer
io.Seeker
}

// ConcurrentWORMSeekCloser is safe for parallel use. It allows ReadAt
// calls to run in parallel, while other methods are serialized.
type ConcurrentWORMSeekCloser struct {
worm WORMSeekCloser
mutex sync.RWMutex
}

// NewConcurrentWORMSeekCloser wraps a WORMSeekCloser and returns an object safe
// for concurrent use.
func NewConcurrentWORMSeekCloser(inferior WORMSeekCloser) *ConcurrentWORMSeekCloser {
return &ConcurrentWORMSeekCloser{worm: inferior}
}

// ReadAt is safe for concurrent use; multiple readers may run in parallel.
func (worm *ConcurrentWORMSeekCloser) ReadAt(p []byte, off int64) (n int, err error) {
worm.mutex.RLock()
defer worm.mutex.RUnlock()
return worm.worm.ReadAt(p, off)
}

// Seek is safe for concurrent use; it excludes all other methods.
func (worm *ConcurrentWORMSeekCloser) Seek(offset int64, whence int) (int64, error) {
worm.mutex.Lock()
defer worm.mutex.Unlock()
return worm.worm.Seek(offset, whence)
}

// Write is safe for concurrent use; it excludes all other methods.
func (worm *ConcurrentWORMSeekCloser) Write(p []byte) (n int, err error) {
worm.mutex.Lock()
defer worm.mutex.Unlock()
return worm.worm.Write(p)
}

// Close is safe for concurrent use; it excludes all other methods.
func (worm *ConcurrentWORMSeekCloser) Close() error {
worm.mutex.Lock()
defer worm.mutex.Unlock()
return worm.worm.Close()
}