Compare commits
18 Commits
| SHA1 |
|---|
| cc0cc8cdf0 |
| fb13969798 |
| 278258ee9f |
| 9e542cf86b |
| 244e952f79 |
| aa2a8fa223 |
| 467acb47bf |
| 0c0d6b2bfc |
| ce0e5be406 |
| 65ce4c90fa |
| 9897a08d09 |
| f5753ba720 |
| fcf32a935b |
| ec50788987 |
| ac0a2da3b5 |
| 9f84dc42fe |
| 21f9304235 |
| 5cedd22bbd |
README.md (53 changed lines)
```diff
@@ -92,6 +92,7 @@ turn almost any device into a file server with resumable uploads/downloads using
 * [listen on port 80 and 443](#listen-on-port-80-and-443) - become a *real* webserver
 * [reverse-proxy](#reverse-proxy) - running copyparty next to other websites
 * [real-ip](#real-ip) - teaching copyparty how to see client IPs
+* [reverse-proxy performance](#reverse-proxy-performance)
 * [prometheus](#prometheus) - metrics/stats can be enabled
 * [other extremely specific features](#other-extremely-specific-features) - you'll never find a use for these
 * [custom mimetypes](#custom-mimetypes) - change the association of a file extension
```
```diff
@@ -140,6 +141,7 @@ just run **[copyparty-sfx.py](https://github.com/9001/copyparty/releases/latest/
 * or if you cannot install python, you can use [copyparty.exe](#copypartyexe) instead
 * or install [on arch](#arch-package) ╱ [on NixOS](#nixos-module) ╱ [through nix](#nix-package)
 * or if you are on android, [install copyparty in termux](#install-on-android)
+* or maybe you have a [synology nas / dsm](./docs/synology-dsm.md)
 * or if your computer is messed up and nothing else works, [try the pyz](#zipapp)
 * or if you prefer to [use docker](./scripts/docker/) 🐋 you can do that too
 * docker has all deps built-in, so skip this step:
```
```diff
@@ -646,7 +648,7 @@ dragdrop is the recommended way, but you may also:
 
 * select some files (not folders) in your file explorer and press CTRL-V inside the browser window
 * use the [command-line uploader](https://github.com/9001/copyparty/tree/hovudstraum/bin#u2cpy)
-* upload using [curl or sharex](#client-examples)
+* upload using [curl, sharex, ishare, ...](#client-examples)
 
 when uploading files through dragdrop or CTRL-V, this initiates an upload using `up2k`; there are two browser-based uploaders available:
 * `[🎈] bup`, the basic uploader, supports almost every browser since netscape 4.0
```
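The curl-style upload mentioned in the list above is just a raw HTTP PUT of the file body. A minimal Python sketch of the same round-trip; the tiny PUT-handling server here is a hypothetical stand-in so the example is self-contained, not copyparty itself:

```python
# Minimal sketch of what `curl -T file http://host:3923/` does: a raw HTTP PUT.
# The stand-in server below only captures the body and replies 201 Created.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

uploads = {}  # path -> body, captured by the stand-in server

class PutHandler(BaseHTTPRequestHandler):
    def do_PUT(self):
        size = int(self.headers.get("Content-Length", 0))
        uploads[self.path] = self.rfile.read(size)
        self.send_response(201)
        self.end_headers()

    def log_message(self, *a):  # keep the demo quiet
        pass

def upload(url: str, data: bytes) -> int:
    req = urllib.request.Request(url, data=data, method="PUT")
    with urllib.request.urlopen(req) as resp:
        return resp.status

srv = HTTPServer(("127.0.0.1", 0), PutHandler)
threading.Thread(target=srv.serve_forever, daemon=True).start()

status = upload("http://127.0.0.1:%d/notes.txt" % srv.server_port, b"hello")
srv.shutdown()
print(status, uploads["/notes.txt"])  # 201 b'hello'
```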
```diff
@@ -1104,6 +1106,8 @@ on macos, connect from finder:
 
 in order to grant full write-access to webdav clients, the volflag `daw` must be set and the account must also have delete-access (otherwise the client won't be allowed to replace the contents of existing files, which is how webdav works)
 
+> note: if you have enabled [IdP authentication](#identity-providers) then that may cause issues for some/most webdav clients; see [the webdav section in the IdP docs](https://github.com/9001/copyparty/blob/hovudstraum/docs/idp.md#connecting-webdav-clients)
+
 
 ### connecting to webdav from windows
 
```
```diff
@@ -1669,10 +1673,16 @@ some reverse proxies (such as [Caddy](https://caddyserver.com/)) can automatical
 
 for improved security (and a 10% performance boost) consider listening on a unix-socket with `-i unix:770:www:/tmp/party.sock` (permission `770` means only members of group `www` can access it)
 
-example webserver configs:
+example webserver / reverse-proxy configs:
 
+* [nginx config](contrib/nginx/copyparty.conf) -- entire domain/subdomain
+* [apache2 config](contrib/apache/copyparty.conf) -- location-based
-* [apache config](contrib/apache/copyparty.conf)
 * caddy uds: `caddy reverse-proxy --from :8080 --to unix///dev/shm/party.sock`
 * caddy tcp: `caddy reverse-proxy --from :8081 --to http://127.0.0.1:3923`
+* [haproxy config](contrib/haproxy/copyparty.conf)
+* [lighttpd subdomain](contrib/lighttpd/subdomain.conf) -- entire domain/subdomain
+* [lighttpd subpath](contrib/lighttpd/subpath.conf) -- location-based (not optimal, but in case you need it)
-* [nginx config](contrib/nginx/copyparty.conf) -- recommended
+* [traefik config](contrib/traefik/copyparty.yaml)
 
 
 ### real-ip
```
```diff
@@ -1684,6 +1694,38 @@ if you (and maybe everybody else) keep getting a message that says `thank you fo
 
 for most common setups, there should be a helpful message in the server-log explaining what to do, but see [docs/xff.md](docs/xff.md) if you want to learn more, including a quick hack to **just make it work** (which is **not** recommended, but hey...)
 
 
+### reverse-proxy performance
+
+most reverse-proxies support connecting to copyparty either using uds/unix-sockets (`/dev/shm/party.sock`, faster/recommended) or using tcp (`127.0.0.1`)
+
+with copyparty listening on a uds / unix-socket / unix-domain-socket and the reverse-proxy connecting to that:
+
+| index.html   | upload      | download    | software |
+| ------------ | ----------- | ----------- | -------- |
+| 28'900 req/s | 6'900 MiB/s | 7'400 MiB/s | no-proxy |
+| 18'750 req/s | 3'500 MiB/s | 2'370 MiB/s | haproxy  |
+| 9'900 req/s  | 3'750 MiB/s | 2'200 MiB/s | caddy    |
+| 18'700 req/s | 2'200 MiB/s | 1'570 MiB/s | nginx    |
+| 9'700 req/s  | 1'750 MiB/s | 1'830 MiB/s | apache   |
+| 9'900 req/s  | 1'300 MiB/s | 1'470 MiB/s | lighttpd |
+
+when connecting the reverse-proxy to `127.0.0.1` instead (the basic and/or old-fashioned way), speeds are a bit worse:
+
+| index.html   | upload      | download    | software |
+| ------------ | ----------- | ----------- | -------- |
+| 21'200 req/s | 5'700 MiB/s | 6'700 MiB/s | no-proxy |
+| 14'500 req/s | 1'700 MiB/s | 2'170 MiB/s | haproxy  |
+| 11'100 req/s | 2'750 MiB/s | 2'000 MiB/s | traefik  |
+| 8'400 req/s  | 2'300 MiB/s | 1'950 MiB/s | caddy    |
+| 13'400 req/s | 1'100 MiB/s | 1'480 MiB/s | nginx    |
+| 8'400 req/s  | 1'000 MiB/s | 1'000 MiB/s | apache   |
+| 6'500 req/s  | 1'270 MiB/s | 1'500 MiB/s | lighttpd |
+
+in summary, `haproxy > caddy > traefik > nginx > apache > lighttpd`, and use uds when possible (traefik does not support it yet)
+
+* if these results are bullshit because my config examples are bad, please submit corrections!
+
+
 ## prometheus
 
 metrics/stats can be enabled at URL `/.cpr/metrics` for grafana / prometheus / etc (openmetrics 1.0.0)
```
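Restating the two benchmark tables above as a worked comparison: dividing the uds upload speeds by the tcp ones shows how much each proxy gains from unix-sockets (plain arithmetic on the numbers in the diff, nothing more):

```python
# Upload speeds (MiB/s) copied from the two tables above.
uds_upload = {"no-proxy": 6900, "haproxy": 3500, "caddy": 3750,
              "nginx": 2200, "apache": 1750, "lighttpd": 1300}
tcp_upload = {"no-proxy": 5700, "haproxy": 1700, "caddy": 2300,
              "nginx": 1100, "apache": 1000, "lighttpd": 1270}

# ratio > 1.0 means the unix-socket path is faster
gains = {sw: round(uds_upload[sw] / tcp_upload[sw], 2) for sw in uds_upload}
print(gains["haproxy"], gains["nginx"])  # 2.06 2.0
```

haproxy and nginx roughly double their upload throughput over uds, which is why the summary recommends unix-sockets whenever the proxy supports them.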
```diff
@@ -1997,7 +2039,8 @@ interact with copyparty using non-browser clients
 * can be downloaded from copyparty: controlpanel -> connect -> [partyfuse.py](http://127.0.0.1:3923/.cpr/a/partyfuse.py)
 * [rclone](https://rclone.org/) as client can give ~5x performance, see [./docs/rclone.md](docs/rclone.md)
 
-* sharex (screenshot utility): see [./contrib/sharex.sxcu](contrib/#sharexsxcu)
+* sharex (screenshot utility): see [./contrib/sharex.sxcu](./contrib/#sharexsxcu)
+  * and for screenshots on macos, see [./contrib/ishare.iscu](./contrib/#ishareiscu)
+  * and for screenshots on linux, see [./contrib/flameshot.sh](./contrib/flameshot.sh)
 
 * contextlet (web browser integration); see [contrib contextlet](contrib/#send-to-cppcontextletjson)
```
```diff
@@ -31,6 +31,9 @@ plugins in this section should only be used with appropriate precautions:
 * [very-bad-idea.py](./very-bad-idea.py) combined with [meadup.js](https://github.com/9001/copyparty/blob/hovudstraum/contrib/plugins/meadup.js) converts copyparty into a janky yet extremely flexible chromecast clone
   * also adds a virtual keyboard by @steinuil to the basic-upload tab for comfy couch crowd control
   * anything uploaded through the [android app](https://github.com/9001/party-up) (files or links) are executed on the server, meaning anyone can infect your PC with malware... so protect this with a password and keep it on a LAN!
+  * [kamelåså](https://github.com/steinuil/kameloso) is a much better (and MUCH safer) alternative to this plugin
+    * powered by [chicken-curry-banana-pineapple-peanut pizza](https://a.ocv.me/pub/g/i/2025/01/298437ce-8351-4c8c-861c-fa131d217999.jpg?cache) so you know it's good
+    * and, unlike this plugin, kamelåså even has windows support (nice)
 
 
 # dependencies
```
```diff
@@ -6,6 +6,11 @@ WARNING -- DANGEROUS PLUGIN --
 running this plugin, they can execute malware on your machine
 so please keep this on a LAN and protect it with a password
 
+here is a MUCH BETTER ALTERNATIVE (which also works on Windows):
+  https://github.com/steinuil/kameloso
+
+----------------------------------------------------------------------
+
 
 use copyparty as a chromecast replacement:
 * post a URL and it will open in the default browser
 * upload a file and it will open in the default application
```
```diff
@@ -12,14 +12,19 @@
 * assumes the webserver and copyparty is running on the same server/IP
 * modify `10.13.1.1` as necessary if you wish to support browsers without javascript
 
-### [`sharex.sxcu`](sharex.sxcu)
-* sharex config file to upload screenshots and grab the URL
+### [`sharex.sxcu`](sharex.sxcu) - Windows screenshot uploader
+* [sharex](https://getsharex.com/) config file to upload screenshots and grab the URL
   * `RequestURL`: full URL to the target folder
   * `pw`: password (remove the `pw` line if anon-write)
   * the `act:bput` thing is optional since copyparty v1.9.29
   * using an older sharex version, maybe sharex v12.1.1 for example? dw fam i got your back 👉😎👉 [`sharex12.sxcu`](sharex12.sxcu)
 
-### [`flameshot.sh`](flameshot.sh)
+### [`ishare.iscu`](ishare.iscu) - MacOS screenshot uploader
+* [ishare](https://isharemac.app/) config file to upload screenshots and grab the URL
+  * `RequestURL`: full URL to the target folder
+  * `pw`: password (remove the `pw` line if anon-write)
+
+### [`flameshot.sh`](flameshot.sh) - Linux screenshot uploader
 * takes a screenshot with [flameshot](https://flameshot.org/) on Linux, uploads it, and writes the URL to clipboard
 
 ### [`send-to-cpp.contextlet.json`](send-to-cpp.contextlet.json)
```
```diff
@@ -53,5 +58,10 @@ init-scripts to start copyparty as a service
 * [`openrc/copyparty`](openrc/copyparty)
 
 # Reverse-proxy
-copyparty has basic support for running behind another webserver
-* [`nginx/copyparty.conf`](nginx/copyparty.conf)
+copyparty supports running behind another webserver
+* [`apache/copyparty.conf`](apache/copyparty.conf)
+* [`haproxy/copyparty.conf`](haproxy/copyparty.conf)
+* [`lighttpd/subdomain.conf`](lighttpd/subdomain.conf)
+* [`lighttpd/subpath.conf`](lighttpd/subpath.conf)
+* [`nginx/copyparty.conf`](nginx/copyparty.conf) -- recommended
+* [`traefik/copyparty.yaml`](traefik/copyparty.yaml)
```
```diff
@@ -1,14 +1,29 @@
-# when running copyparty behind a reverse proxy,
-# the following arguments are recommended:
+# if you would like to use unix-sockets (recommended),
+# you must run copyparty with one of the following:
 #
-# -i 127.0.0.1    only accept connections from nginx
+#   -i unix:777:/dev/shm/party.sock
+#   -i unix:777:/dev/shm/party.sock,127.0.0.1
+#
+# if you are doing location-based proxying (such as `/stuff` below)
+# you must run copyparty with --rp-loc=stuff
+#
 # on fedora/rhel, remember to setsebool -P httpd_can_network_connect 1
 
 LoadModule proxy_module modules/mod_proxy.so
-ProxyPass "/stuff" "http://127.0.0.1:3923/stuff"
-# do not specify ProxyPassReverse
+
+RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
+# NOTE: do not specify ProxyPassReverse
+
+
+##
+## then, enable one of the below:
+
+# use subdomain proxying to unix-socket (best)
+ProxyPass "/" "unix:///dev/shm/party.sock|http://whatever/"
+
+# use subdomain proxying to 127.0.0.1 (slower)
+#ProxyPass "/" "http://127.0.0.1:3923/"
+
+# use subpath proxying to 127.0.0.1 (slow and maybe buggy)
+#ProxyPass "/stuff" "http://127.0.0.1:3923/stuff"
```
contrib/haproxy/copyparty.conf (new file, 24 lines)
```diff
@@ -0,0 +1,24 @@
+# this config is essentially two separate examples;
+#
+# foo1 connects to copyparty using tcp, and
+# foo2 uses unix-sockets for 27% higher performance
+#
+# to use foo2 you must run copyparty with one of the following:
+#
+#   -i unix:777:/dev/shm/party.sock
+#   -i unix:777:/dev/shm/party.sock,127.0.0.1
+
+defaults
+    mode http
+    option forwardfor
+    timeout connect 1s
+    timeout client 610s
+    timeout server 610s
+
+listen foo1
+    bind *:8081
+    server srv1 127.0.0.1:3923 maxconn 512
+
+listen foo2
+    bind *:8082
+    server srv1 /dev/shm/party.sock maxconn 512
```
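The `foo2` backend above talks to copyparty over a unix-domain socket instead of tcp. The socket mechanics behind that can be sketched in a few lines of Python; the echo server here is a toy stand-in for the real backend, only to make the example self-contained:

```python
# Demonstrates the uds handshake a reverse-proxy performs: bind a unix-domain
# socket (what copyparty's `-i unix:...` does), then connect and exchange bytes.
import os
import socket
import tempfile
import threading

path = os.path.join(tempfile.mkdtemp(), "party.sock")

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)  # the backend side, like `-i unix:777:/dev/shm/party.sock`
srv.listen(1)

def echo_once():
    conn, _ = srv.accept()
    conn.sendall(conn.recv(1024))  # echo the request back
    conn.close()

threading.Thread(target=echo_once, daemon=True).start()

cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(path)  # the proxy side: no tcp handshake, just a filesystem path
cli.sendall(b"GET / HTTP/1.1\r\n\r\n")
reply = cli.recv(1024)
cli.close()
srv.close()
print(reply)  # b'GET / HTTP/1.1\r\n\r\n'
```

skipping the tcp stack and its per-connection handshakes is where the quoted 27% haproxy speedup comes from.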
contrib/ishare.iscu (new file, 10 lines)
```diff
@@ -0,0 +1,10 @@
+{
+    "Name": "copyparty",
+    "RequestURL": "http://127.0.0.1:3923/screenshots/",
+    "Headers": {
+        "pw": "PUT_YOUR_PASSWORD_HERE_MY_DUDE",
+        "accept": "json"
+    },
+    "FileFormName": "f",
+    "ResponseURL": "{{fileurl}}"
+}
```
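The ishare config above is plain JSON, so a quick parse catches a missing comma or field before pointing ishare at it. The required-key list below is an assumption based on the fields used in the example, not ishare's documented schema:

```python
# Sanity-check the .iscu config: valid JSON, expected top-level keys present.
import json

iscu = """
{
    "Name": "copyparty",
    "RequestURL": "http://127.0.0.1:3923/screenshots/",
    "Headers": {
        "pw": "PUT_YOUR_PASSWORD_HERE_MY_DUDE",
        "accept": "json"
    },
    "FileFormName": "f",
    "ResponseURL": "{{fileurl}}"
}
"""

cfg = json.loads(iscu)  # raises ValueError if the JSON is malformed
missing = {"Name", "RequestURL", "FileFormName"} - cfg.keys()
print(missing)  # set()
```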
contrib/lighttpd/subdomain.conf (new file, 24 lines)
```diff
@@ -0,0 +1,24 @@
+# example usage for benchmarking:
+#
+#   taskset -c 1 lighttpd -Df ~/dev/copyparty/contrib/lighttpd/subdomain.conf
+#
+# lighttpd can connect to copyparty using either tcp (127.0.0.1)
+# or a unix-socket, but unix-sockets are 37% faster because
+# lighttpd doesn't reuse tcp connections, so we're doing unix-sockets
+#
+# this means we must run copyparty with one of the following:
+#
+#   -i unix:777:/dev/shm/party.sock
+#   -i unix:777:/dev/shm/party.sock,127.0.0.1
+#
+# on fedora/rhel, remember to setsebool -P httpd_can_network_connect 1
+
+server.port = 80
+server.document-root = "/var/empty"
+server.upload-dirs = ( "/dev/shm", "/tmp" )
+server.modules = ( "mod_proxy" )
+proxy.forwarded = ( "for" => 1, "proto" => 1 )
+proxy.server = ( "" => ( ( "host" => "/dev/shm/party.sock" ) ) )
+
+# if you really need to use tcp instead of unix-sockets, do this instead:
+#proxy.server = ( "" => ( ( "host" => "127.0.0.1", "port" => "3923" ) ) )
```
contrib/lighttpd/subpath.conf (new file, 31 lines)
```diff
@@ -0,0 +1,31 @@
+# example usage for benchmarking:
+#
+#   taskset -c 1 lighttpd -Df ~/dev/copyparty/contrib/lighttpd/subpath.conf
+#
+# lighttpd can connect to copyparty using either tcp (127.0.0.1)
+# or a unix-socket, but unix-sockets are 37% faster because
+# lighttpd doesn't reuse tcp connections, so we're doing unix-sockets
+#
+# this means we must run copyparty with one of the following:
+#
+#   -i unix:777:/dev/shm/party.sock
+#   -i unix:777:/dev/shm/party.sock,127.0.0.1
+#
+# also since this example proxies a subpath instead of the
+# recommended subdomain-proxying, we must also specify this:
+#
+#   --rp-loc files
+#
+# on fedora/rhel, remember to setsebool -P httpd_can_network_connect 1
+
+server.port = 80
+server.document-root = "/var/empty"
+server.upload-dirs = ( "/dev/shm", "/tmp" )
+server.modules = ( "mod_proxy" )
+$HTTP["url"] =~ "^/files" {
+    proxy.forwarded = ( "for" => 1, "proto" => 1 )
+    proxy.server = ( "" => ( ( "host" => "/dev/shm/party.sock" ) ) )
+
+    # if you really need to use tcp instead of unix-sockets, do this instead:
+    #proxy.server = ( "" => ( ( "host" => "127.0.0.1", "port" => "3923" ) ) )
+}
```
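Subpath-proxying as in the lighttpd example means the backend sees request paths that still carry the proxied prefix, which is why copyparty must be told about it with `--rp-loc files`. A sketch of the prefix bookkeeping involved (a hypothetical helper for illustration, not copyparty's actual implementation):

```python
# Map a proxied request path like /files/music/a.flac back to the path the
# backend should serve, given the --rp-loc prefix.
def strip_rp_loc(url_path: str, rp_loc: str) -> str:
    prefix = "/" + rp_loc.strip("/")
    if url_path == prefix:
        return "/"  # the location root itself
    if url_path.startswith(prefix + "/"):
        return url_path[len(prefix):]
    return url_path  # not under the proxied location; leave untouched

print(strip_rp_loc("/files/music/a.flac", "files"))  # /music/a.flac
```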
```diff
@@ -36,9 +36,9 @@ upstream cpp_uds {
     # but there must be at least one unix-group which both
     # nginx and copyparty is a member of; if that group is
     # "www" then run copyparty with the following args:
-    # -i unix:770:www:/tmp/party.sock
+    # -i unix:770:www:/dev/shm/party.sock
 
-    server unix:/tmp/party.sock fail_timeout=1s;
+    server unix:/dev/shm/party.sock fail_timeout=1s;
     keepalive 1;
 }
```
```diff
@@ -61,6 +61,10 @@ server {
     client_max_body_size 0;
     proxy_buffering off;
     proxy_request_buffering off;
+    # improve download speed from 600 to 1500 MiB/s
+    proxy_buffers 32 8k;
+    proxy_buffer_size 16k;
+    proxy_busy_buffers_size 24k;
 
     proxy_set_header Host $host;
     proxy_set_header X-Real-IP $remote_addr;
```
```diff
@@ -1,6 +1,6 @@
 # Maintainer: icxes <dev.null@need.moe>
 pkgname=copyparty
-pkgver="1.16.6"
+pkgver="1.16.7"
 pkgrel=1
 pkgdesc="File server with accelerated resumable uploads, dedup, WebDAV, FTP, TFTP, zeroconf, media indexer, thumbnails++"
 arch=("any")
@@ -21,7 +21,7 @@ optdepends=("ffmpeg: thumbnails for videos, images (slower) and audio, music tag
 )
 source=("https://github.com/9001/${pkgname}/releases/download/v${pkgver}/${pkgname}-${pkgver}.tar.gz")
 backup=("etc/${pkgname}.d/init" )
-sha256sums=("29a119f7e238c44b0697e5858da8154d883a97ae20ecbb10393904406fa4fe06")
+sha256sums=("22178c98513072a8ef1e0fdb85d1044becf345ee392a9f5a336cc340ae16e4e9")
 
 build() {
   cd "${srcdir}/${pkgname}-${pkgver}"
```
```diff
@@ -1,5 +1,5 @@
 {
-  "url": "https://github.com/9001/copyparty/releases/download/v1.16.6/copyparty-sfx.py",
-  "version": "1.16.6",
-  "hash": "sha256-gs2jSaXa0XbVbvpW1H4i/Vzovg68Usry0iHWfbddBCc="
+  "url": "https://github.com/9001/copyparty/releases/download/v1.16.7/copyparty-sfx.py",
+  "version": "1.16.7",
+  "hash": "sha256-mAoZre3hArsdXorZwv0mYESn/mtyMXfcUzcOMwnk8Do="
 }
```
contrib/traefik/copyparty.yaml (new file, 12 lines)
```diff
@@ -0,0 +1,12 @@
+# ./traefik --experimental.fastproxy=true --entrypoints.web.address=:8080 --providers.file.filename=copyparty.yaml
+
+http:
+  services:
+    service-cpp:
+      loadBalancer:
+        servers:
+          - url: "http://127.0.0.1:3923/"
+  routers:
+    my-router:
+      rule: "PathPrefix(`/`)"
+      service: service-cpp
```
```diff
@@ -1,8 +1,8 @@
 # coding: utf-8
 
-VERSION = (1, 16, 7)
+VERSION = (1, 16, 8)
 CODENAME = "COPYparty"
-BUILD_DT = (2024, 12, 23)
+BUILD_DT = (2025, 1, 11)
 
 S_VERSION = ".".join(map(str, VERSION))
 S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)
```
```diff
@@ -1897,7 +1897,7 @@ class HttpCli(object):
             return self.handle_stash(False)
 
         if "save" in opt:
-            post_sz, _, _, _, path, _ = self.dump_to_file(False)
+            post_sz, _, _, _, _, path, _ = self.dump_to_file(False)
             self.log("urlform: %d bytes, %r" % (post_sz, path))
         elif "print" in opt:
             reader, _ = self.get_body_reader()
```
```diff
@@ -1978,11 +1978,11 @@ class HttpCli(object):
         else:
             return read_socket(self.sr, bufsz, remains), remains
 
-    def dump_to_file(self, is_put: bool) -> tuple[int, str, str, int, str, str]:
-        # post_sz, sha_hex, sha_b64, remains, path, url
+    def dump_to_file(self, is_put: bool) -> tuple[int, str, str, str, int, str, str]:
+        # post_sz, halg, sha_hex, sha_b64, remains, path, url
         reader, remains = self.get_body_reader()
         vfs, rem = self.asrv.vfs.get(self.vpath, self.uname, False, True)
-        rnd, _, lifetime, xbu, xau = self.upload_flags(vfs)
+        rnd, lifetime, xbu, xau = self.upload_flags(vfs)
         lim = vfs.get_dbv(rem)[0].lim
         fdir = vfs.canonical(rem)
         if lim:
```
```diff
@@ -2132,12 +2132,14 @@ class HttpCli(object):
                 # small toctou, but better than clobbering a hardlink
                 wunlink(self.log, path, vfs.flags)
 
+        halg = "sha512"
         hasher = None
         copier = hashcopy
         if "ck" in self.ouparam or "ck" in self.headers:
-            zs = self.ouparam.get("ck") or self.headers.get("ck") or ""
+            halg = zs = self.ouparam.get("ck") or self.headers.get("ck") or ""
             if not zs or zs == "no":
                 copier = justcopy
+                halg = ""
             elif zs == "md5":
                 hasher = hashlib.md5(**USED4SEC)
             elif zs == "sha1":
```
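The `ck` handling added above picks a checksum algorithm per upload. Isolated from the request plumbing, the selection logic boils down to something like this sketch (plain `hashlib` constructors stand in for copyparty's `USED4SEC` variants):

```python
# Map a ?ck= value to (algorithm-name, hasher-or-None); "no" disables hashing.
import hashlib

def pick_hasher(ck: str):
    if not ck or ck == "no":
        return "", None  # skip hashing entirely (fastest)
    if ck == "md5":
        return "md5", hashlib.md5()
    if ck == "sha1":
        return "sha1", hashlib.sha1()
    return "sha512", hashlib.sha512()  # the default

halg, h = pick_hasher("md5")
h.update(b"hello")
print(halg, h.hexdigest()[:8])  # md5 5d41402a
```

the empty `halg` for the "no" case is what later tells the JSON response builder to omit the checksum fields.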
```diff
@@ -2171,7 +2173,7 @@ class HttpCli(object):
                 raise
 
         if self.args.nw:
-            return post_sz, sha_hex, sha_b64, remains, path, ""
+            return post_sz, halg, sha_hex, sha_b64, remains, path, ""
 
         at = mt = time.time() - lifetime
         cli_mt = self.headers.get("x-oc-mtime")
```
```diff
@@ -2282,19 +2284,30 @@ class HttpCli(object):
             self.args.RS + vpath + vsuf,
         )
 
-        return post_sz, sha_hex, sha_b64, remains, path, url
+        return post_sz, halg, sha_hex, sha_b64, remains, path, url
 
     def handle_stash(self, is_put: bool) -> bool:
-        post_sz, sha_hex, sha_b64, remains, path, url = self.dump_to_file(is_put)
+        post_sz, halg, sha_hex, sha_b64, remains, path, url = self.dump_to_file(is_put)
         spd = self._spd(post_sz)
         t = "%s wrote %d/%d bytes to %r # %s"
         self.log(t % (spd, post_sz, remains, path, sha_b64[:28]))  # 21
 
-        ac = self.uparam.get(
-            "want", self.headers.get("accept", "").lower().split(";")[-1]
-        )
+        mime = "text/plain; charset=utf-8"
+        ac = self.uparam.get("want") or self.headers.get("accept") or ""
+        if ac:
+            ac = ac.split(";", 1)[0].lower()
+        if ac == "application/json":
+            ac = "json"
         if ac == "url":
             t = url
         elif ac == "json" or "j" in self.uparam:
             jmsg = {"fileurl": url, "filesz": post_sz}
+            if halg:
+                jmsg[halg] = sha_hex[:56]
+                jmsg["sha_b64"] = sha_b64
+
+            mime = "application/json"
+            t = json.dumps(jmsg, indent=2, sort_keys=True)
         else:
             t = "{}\n{}\n{}\n{}\n".format(post_sz, sha_b64, sha_hex[:56], url)
```
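The rewritten negotiation in `handle_stash` above, isolated from the class: a `?want=` parameter wins over the Accept header, any `;`-suffix (like `; charset=utf-8`) is dropped, and `application/json` is folded into the short form `"json"`:

```python
# Standalone version of the response-format negotiation added in the diff.
def response_format(uparam: dict, headers: dict) -> str:
    ac = uparam.get("want") or headers.get("accept") or ""
    if ac:
        ac = ac.split(";", 1)[0].lower()  # drop "; charset=..." style suffixes
    if ac == "application/json":
        ac = "json"
    return ac

print(response_format({}, {"accept": "application/json; charset=utf-8"}))  # json
print(response_format({"want": "url"}, {"accept": "application/json"}))  # url
```

compared to the old code (which split on `;` and kept the *last* piece), this keeps the media type itself, so a normal `Accept: application/json; charset=utf-8` header now selects the JSON response.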
```diff
@@ -2304,7 +2317,7 @@ class HttpCli(object):
             h["X-OC-MTime"] = "accepted"
             t = ""  # some webdav clients expect/prefer this
 
-        self.reply(t.encode("utf-8"), 201, headers=h)
+        self.reply(t.encode("utf-8", "replace"), 201, mime=mime, headers=h)
         return True
 
     def bakflip(
```
```diff
@@ -2983,7 +2996,7 @@ class HttpCli(object):
             self.redirect(vpath, "?edit")
             return True
 
-    def upload_flags(self, vfs: VFS) -> tuple[int, bool, int, list[str], list[str]]:
+    def upload_flags(self, vfs: VFS) -> tuple[int, int, list[str], list[str]]:
         if self.args.nw:
             rnd = 0
         else:
```
```diff
@@ -2991,10 +3004,6 @@ class HttpCli(object):
         if vfs.flags.get("rand"):  # force-enable
             rnd = max(rnd, vfs.flags["nrand"])
 
-        ac = self.uparam.get(
-            "want", self.headers.get("accept", "").lower().split(";")[-1]
-        )
-        want_url = ac == "url"
         zs = self.uparam.get("life", self.headers.get("life", ""))
         if zs:
             vlife = vfs.flags.get("lifetime") or 0
```
```diff
@@ -3004,7 +3013,6 @@ class HttpCli(object):
 
         return (
             rnd,
-            want_url,
             lifetime,
             vfs.flags.get("xbu") or [],
             vfs.flags.get("xau") or [],
```
```diff
@@ -3057,7 +3065,14 @@ class HttpCli(object):
         if not nullwrite:
             bos.makedirs(fdir_base)
 
-        rnd, want_url, lifetime, xbu, xau = self.upload_flags(vfs)
+        rnd, lifetime, xbu, xau = self.upload_flags(vfs)
+        zs = self.uparam.get("want") or self.headers.get("accept") or ""
+        if zs:
+            zs = zs.split(";", 1)[0].lower()
+        if zs == "application/json":
+            zs = "json"
+        want_url = zs == "url"
+        want_json = zs == "json" or "j" in self.uparam
 
         files: list[tuple[int, str, str, str, str, str]] = []
         # sz, sha_hex, sha_b64, p_file, fname, abspath
```
```diff
@@ -3379,7 +3394,9 @@ class HttpCli(object):
                 msg += "\n" + errmsg
 
             self.reply(msg.encode("utf-8", "replace"), status=sc)
-        elif "j" in self.uparam:
+        elif want_json:
+            if len(jmsg["files"]) == 1:
+                jmsg["fileurl"] = jmsg["files"][0]["url"]
             jtxt = json.dumps(jmsg, indent=2, sort_keys=True).encode("utf-8", "replace")
             self.reply(jtxt, mime="application/json", status=sc)
         else:
```
```diff
@@ -4551,12 +4568,12 @@ class HttpCli(object):
             else self.conn.hsrv.nm.map(self.ip) or host
         )
         # safer than html_escape/quotep since this avoids both XSS and shell-stuff
-        pw = re.sub(r"[<>&$?`\"']", "_", self.pw or "pw")
+        pw = re.sub(r"[<>&$?`\"']", "_", self.pw or "hunter2")
         vp = re.sub(r"[<>&$?`\"']", "_", self.uparam["hc"] or "").lstrip("/")
         pw = pw.replace(" ", "%20")
         vp = vp.replace(" ", "%20")
         if pw in self.asrv.sesa:
-            pw = "pwd"
+            pw = "hunter2"
 
         html = self.j2s(
             "svcs",
```
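The placeholder-password sanitizer above, as a standalone expression: strip characters that would be dangerous in both HTML and copy-pasted shell commands, then make the result safe to embed in a URL:

```python
# Same substitution as in the diff: neutralize XSS/shell metacharacters,
# fall back to the "hunter2" placeholder, and percent-encode spaces.
import re

def sanitize(pw: str) -> str:
    pw = re.sub(r"[<>&$?`\"']", "_", pw or "hunter2")
    return pw.replace(" ", "%20")

print(sanitize("it's <secret> & $fun"))  # it_s%20_secret_%20_%20_fun
```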
```diff
@@ -856,7 +856,7 @@ class Up2k(object):
         self.iacct = self.asrv.iacct
         self.grps = self.asrv.grps
 
-        have_e2d = self.args.idp_h_usr
+        have_e2d = self.args.idp_h_usr or self.args.chpw or self.args.shr
         vols = list(all_vols.values())
         t0 = time.time()
```
```diff
@@ -1119,6 +1119,7 @@ class Up2k(object):
         reg = {}
         drp = None
         emptylist = []
+        dotpart = "." if self.args.dotpart else ""
         snap = os.path.join(histpath, "up2k.snap")
         if bos.path.exists(snap):
             with gzip.GzipFile(snap, "rb") as f:
```
```diff
@@ -1131,6 +1132,8 @@ class Up2k(object):
                 except:
                     pass
 
+        reg = reg2  # diff-golf
+
         if reg2 and "dwrk" not in reg2[next(iter(reg2))]:
             for job in reg2.values():
                 job["dwrk"] = job["wark"]
```
```diff
@@ -1138,7 +1141,8 @@ class Up2k(object):
             rm = []
             for k, job in reg2.items():
                 job["ptop"] = ptop
-                if "done" in job:
+                is_done = "done" in job
+                if is_done:
                     job["need"] = job["hash"] = emptylist
                 else:
                     if "need" not in job:
```
```diff
@@ -1146,10 +1150,13 @@ class Up2k(object):
                     if "hash" not in job:
                         job["hash"] = []
 
-                fp = djoin(ptop, job["prel"], job["name"])
+                if is_done:
+                    fp = djoin(ptop, job["prel"], job["name"])
+                else:
+                    fp = djoin(ptop, job["prel"], dotpart + job["name"] + ".PARTIAL")
 
                 if bos.path.exists(fp):
                     reg[k] = job
-                    if "done" in job:
+                    if is_done:
                         continue
                     job["poke"] = time.time()
                     job["busy"] = {}
```
```diff
@@ -1157,11 +1164,18 @@ class Up2k(object):
                     self.log("ign deleted file in snap: %r" % (fp,))
                     if not n4g:
                         rm.append(k)
                     continue
 
             for x in rm:
                 del reg2[x]
 
+            # optimize pre-1.15.4 entries
+            if next((x for x in reg.values() if "done" in x and "poke" in x), None):
+                zsl = "host tnam busy sprs poke t0c".split()
+                for job in reg.values():
+                    if "done" in job:
+                        for k in zsl:
+                            job.pop(k, None)
+
             if drp is None:
                 drp = [k for k, v in reg.items() if not v["need"]]
             else:
```
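The snapshot-loading changes above migrate old `up2k.snap` entries in place: add the missing `dwrk` field, empty the chunk lists of finished uploads, and drop the per-transfer state that pre-1.15.4 versions kept around. The dict surgery, boiled down to a simplified standalone sketch (no filesystem checks, not copyparty's exact code):

```python
# Migrate one snapshot job-dict the way the diff above does.
def migrate_job(job: dict) -> dict:
    if "dwrk" not in job:
        job["dwrk"] = job["wark"]  # older snapshots lack the dedup-wark field
    if "done" in job:
        job["need"] = job["hash"] = []  # finished uploads carry no chunk lists
        for k in "host tnam busy sprs poke t0c".split():
            job.pop(k, None)  # per-transfer state kept by old versions
    return job

job = {"wark": "w1", "done": 1, "poke": 123, "need": ["c0"], "hash": ["c0"]}
print(migrate_job(job))  # {'wark': 'w1', 'done': 1, 'need': [], 'hash': [], 'dwrk': 'w1'}
```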
```diff
@@ -3004,7 +3018,7 @@ class Up2k(object):
         if wark in reg:
             del reg[wark]
         job["hash"] = job["need"] = []
-        job["done"] = True
+        job["done"] = 1
         job["busy"] = {}
 
         if lost:
```
```diff
@@ -9,7 +9,7 @@
 <meta name="theme-color" content="#{{ tcolor }}">
 <link rel="stylesheet" media="screen" href="{{ r }}/.cpr/splash.css?_={{ ts }}">
 <link rel="stylesheet" media="screen" href="{{ r }}/.cpr/ui.css?_={{ ts }}">
-<style>ul{padding-left:1.3em}li{margin:.4em 0}</style>
+<style>ul{padding-left:1.3em}li{margin:.4em 0}.txa{float:right;margin:0 0 0 1em}</style>
 {{ html_head }}
 </head>
```
```diff
@@ -31,15 +31,22 @@
 <br />
 <span class="os win lin mac">placeholders:</span>
 <span class="os win">
-	{% if accs %}<code><b>{{ pw }}</b></code>=password, {% endif %}<code><b>W:</b></code>=mountpoint
+	{% if accs %}<code><b id="pw0">{{ pw }}</b></code>=password, {% endif %}<code><b>W:</b></code>=mountpoint
 </span>
 <span class="os lin mac">
-	{% if accs %}<code><b>{{ pw }}</b></code>=password, {% endif %}<code><b>mp</b></code>=mountpoint
+	{% if accs %}<code><b id="pw0">{{ pw }}</b></code>=password, {% endif %}<code><b>mp</b></code>=mountpoint
 </span>
+<a href="#" id="setpw">use real password</a>
 </p>
 
 
+{% if args.idp_h_usr %}
+<p style="line-height:2em"><b>WARNING:</b> this server is using IdP-based authentication, so this stuff may not work as advertised. Depending on server config, these commands can probably only be used to access areas which don't require authentication, unless you auth using any non-IdP accounts defined in the copyparty config. Please see <a href="https://github.com/9001/copyparty/blob/hovudstraum/docs/idp.md#connecting-webdav-clients">the IdP docs</a></p>
+{% endif %}
+
+
 {% if not args.no_dav %}
 <h1>WebDAV</h1>
```
```diff
@@ -229,6 +236,60 @@
 
 
 
+<div class="os win">
+<h1>ShareX</h1>
+
+<p>to upload screenshots using ShareX <a href="https://github.com/ShareX/ShareX/releases/tag/v12.4.1">v12</a> or <a href="https://getsharex.com/">v15+</a>, save this as <code>copyparty.sxcu</code> and run it:</p>
+
+<pre class="dl" name="copyparty.sxcu">
+{ "Name": "copyparty",
+  "RequestURL": "http{{ s }}://{{ ep }}/{{ rvp }}",
+  "Headers": {
+    {% if accs %}"pw": "<b>{{ pw }}</b>",{% endif %}
+    "accept": "url"
+  },
+  "DestinationType": "ImageUploader, TextUploader, FileUploader",
+  "FileFormName": "f" }
+</pre>
+</div>
+
+
+<div class="os mac">
+<h1>ishare</h1>
+
+<p>to upload screenshots using <a href="https://isharemac.app/">ishare</a>, save this as <code>copyparty.iscu</code> and run it:</p>
+
+<pre class="dl" name="copyparty.iscu">
+{ "Name": "copyparty",
+  "RequestURL": "http{{ s }}://{{ ep }}/{{ rvp }}",
+  "Headers": {
+    {% if accs %}"pw": "<b>{{ pw }}</b>",{% endif %}
+    "accept": "json"
+  },
+  "ResponseURL": "{{ '{{fileurl}}' }}",
+  "FileFormName": "f" }
+</pre>
+</div>
+
+
+<div class="os lin">
+<h1>flameshot</h1>
+
+<p>to upload screenshots using <a href="https://flameshot.org/">flameshot</a>, save this as <code>flameshot.sh</code> and run it:</p>
+
+<pre class="dl" name="flameshot.sh">
+#!/bin/bash
+pw="<b>{{ pw }}</b>"
+url="http{{ s }}://{{ ep }}/{{ rvp }}"
+filename="$(date +%Y-%m%d-%H%M%S).png"
+flameshot gui -s -r | curl -sT- "$url$filename?want=url&pw=$pw" | xsel -ib
+</pre>
+</div>
+
+
 </div>
 <a href="#" id="repl">π</a>
 <script>
```
```diff
@@ -15,6 +15,21 @@ for (var a = 0; a < oa.length; a++) {
 	oa[a].innerHTML = html.replace(rd, '$1').replace(/[ \r\n]+$/, '').replace(/\r?\n/g, '<br />');
 }
 
+function add_dls() {
+	oa = QSA('pre.dl');
+	for (var a = 0; a < oa.length; a++) {
+		var an = 'ta' + a,
+			o = ebi(an) || mknod('a', an, 'download');
+
+		oa[a].setAttribute('id', 'tx' + a);
+		oa[a].parentNode.insertBefore(o, oa[a]);
+		o.setAttribute('download', oa[a].getAttribute('name'));
+		o.setAttribute('href', 'data:text/plain;charset=utf-8,' + encodeURIComponent(oa[a].innerText));
+		clmod(o, 'txa', 1);
+	}
+}
+add_dls();
+
 
 oa = QSA('.ossel a');
 for (var a = 0; a < oa.length; a++)
```
@@ -40,3 +55,21 @@ function setos(os) {
|
||||
}
|
||||
|
||||
setos(WINDOWS ? 'win' : LINUX ? 'lin' : MACOS ? 'mac' : 'idk');
|
||||
|
||||
|
||||
ebi('setpw').onclick = function (e) {
|
||||
ev(e);
|
||||
modal.prompt('password:', '', function (v) {
|
||||
if (!v)
|
||||
return;
|
||||
|
||||
var pw0 = ebi('pw0').innerHTML,
|
||||
oa = QSA('b');
|
||||
|
||||
for (var a = 0; a < oa.length; a++)
|
||||
if (oa[a].innerHTML == pw0)
|
||||
oa[a].textContent = v;
|
||||
|
||||
add_dls();
|
||||
});
|
||||
}
|
||||
|
||||
@@ -881,7 +881,7 @@ function up2k_init(subtle) {
|
||||
bcfg_bind(uc, 'turbo', 'u2turbo', turbolvl > 1, draw_turbo);
|
||||
bcfg_bind(uc, 'datechk', 'u2tdate', turbolvl < 3, null);
|
||||
bcfg_bind(uc, 'az', 'u2sort', u2sort.indexOf('n') + 1, set_u2sort);
|
||||
bcfg_bind(uc, 'hashw', 'hashw', !!WebAssembly && (!subtle || !CHROME || MOBILE || VCHROME >= 107), set_hashw);
|
||||
bcfg_bind(uc, 'hashw', 'hashw', !!WebAssembly && !(CHROME && MOBILE) && (!subtle || !CHROME), set_hashw);
|
||||
bcfg_bind(uc, 'upnag', 'upnag', false, set_upnag);
|
||||
bcfg_bind(uc, 'upsfx', 'upsfx', false, set_upsfx);
|
||||
|
||||
@@ -1972,32 +1972,84 @@ function up2k_init(subtle) {
nchunk = 0,
chunksize = get_chunksize(t.size),
nchunks = Math.ceil(t.size / chunksize),
csz_mib = chunksize / 1048576,
tread = t.t_hashing,
cache_buf = null,
cache_car = 0,
cache_cdr = 0,
hashers = 0,
hashtab = {};

// resolving subtle.digest w/o worker takes 1sec on blur if the actx hack breaks
var use_workers = hws.length && !hws_ng && uc.hashw && (nchunks > 1 || document.visibilityState == 'hidden'),
hash_par = (!subtle && !use_workers) ? 0 : csz_mib < 48 ? 2 : csz_mib < 96 ? 1 : 0;

pvis.setab(t.n, nchunks);
pvis.move(t.n, 'bz');

if (hws.length && !hws_ng && uc.hashw && (nchunks > 1 || document.visibilityState == 'hidden'))
// resolving subtle.digest w/o worker takes 1sec on blur if the actx hack breaks
if (use_workers)
return wexec_hash(t, chunksize, nchunks);

var segm_next = function () {
if (nchunk >= nchunks || bpend)
return false;

var reader = new FileReader(),
nch = nchunk++,
var nch = nchunk++,
car = nch * chunksize,
cdr = Math.min(chunksize + car, t.size);

st.bytes.hashed += cdr - car;
st.etac.h++;

var orz = function (e) {
bpend--;
segm_next();
hash_calc(nch, e.target.result);
if (MOBILE && CHROME && st.slow_io === null && nch == 1 && cdr - car >= 1024 * 512) {
var spd = Math.floor((cdr - car) / (Date.now() + 1 - tread));
st.slow_io = spd < 40 * 1024;
console.log('spd {0}, slow: {1}'.format(spd, st.slow_io));
}

if (cdr <= cache_cdr && car >= cache_car) {
try {
var ofs = car - cache_car,
ofs2 = ofs + (cdr - car),
buf = cache_buf.subarray(ofs, ofs2);

hash_calc(nch, buf);
}
catch (ex) {
vis_exh(ex + '', 'up2k.js', '', '', ex);
}
return;
}

var reader = new FileReader(),
fr_cdr = cdr;

if (st.slow_io) {
var step = cdr - car,
tgt = 48 * 1048576;

while (step && fr_cdr - car < tgt)
fr_cdr += step;
if (fr_cdr - car > tgt && fr_cdr > cdr)
fr_cdr -= step;
if (fr_cdr > t.size)
fr_cdr = t.size;
}

var orz = function (e) {
bpend = 0;
var buf = e.target.result;
if (fr_cdr > cdr) {
cache_buf = new Uint8Array(buf);
cache_car = car;
cache_cdr = fr_cdr;
buf = cache_buf.subarray(0, cdr - car);
}
if (hashers < hash_par)
segm_next();

hash_calc(nch, buf);
};
reader.onload = function (e) {
try { orz(e); } catch (ex) { vis_exh(ex + '', 'up2k.js', '', '', ex); }
};

@@ -2024,17 +2076,20 @@ function up2k_init(subtle) {
toast.err(0, 'y o u b r o k e i t\nfile: ' + esc(t.name + '') + '\nerror: ' + err);
};
bpend++;
reader.readAsArrayBuffer(t.fobj.slice(car, cdr));
bpend = 1;
tread = Date.now();
reader.readAsArrayBuffer(t.fobj.slice(car, fr_cdr));

return true;
};

var hash_calc = function (nch, buf) {
hashers++;
var orz = function (hashbuf) {
var hslice = new Uint8Array(hashbuf).subarray(0, 33),
b64str = buf2b64(hslice);

hashers--;
hashtab[nch] = b64str;
t.hash.push(nch);
pvis.hashed(t);

@@ -3044,7 +3099,7 @@ function up2k_init(subtle) {
new_state = false;
fixed = true;
}
if (new_state === undefined)
if (new_state === undefined && preferred === undefined)
new_state = can_write ? false : have_up2k_idx ? true : undefined;
}

@@ -29,7 +29,7 @@ var wah = '',
HTTPS = ('' + location).indexOf('https:') === 0,
TOUCH = 'ontouchstart' in window,
MOBILE = TOUCH,
CHROME = !!window.chrome,
CHROME = !!window.chrome, // safari=false
VCHROME = CHROME ? 1 : 0,
IE = /Trident\//.test(navigator.userAgent),
FIREFOX = ('netscape' in window) && / rv:/.test(navigator.userAgent),

@@ -25,6 +25,9 @@
## [`changelog.md`](changelog.md)
* occasionally grabbed from github release notes

## [`synology-dsm.md`](synology-dsm.md)
* running copyparty on a synology nas

## [`devnotes.md`](devnotes.md)
* technical stuff

@@ -1,3 +1,34 @@
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1223-0005 `v1.16.7` an idp fix for xmas

# ☃️🎄 **there is still time** 🎅🎁

❄️❄️❄️ please [enjoy some appropriate music](https://a.ocv.me/pub/demo/music/.bonus/#af-55d4554d) -- you'll probably like this more than the idp thing honestly ❄️❄️❄️

## 🧪 new features

* more improvements to the recent-uploads feature 87598dcd
  * move html rendering to clientside
  * any changes to the filter-text apply in real-time
  * loads 50% faster, reduces server-load by 30%
  * inhibits search engines from indexing it

## 🩹 bugfixes

* using idp without e2d could mess with uploads dd6e9ea7
* u2c (commandline uploader): fix window title 946a8c5b
* mDNS/SSDP: fix incorrect log colors when multiple primary IPs are lost 552897ab

## 🔧 other changes

* ui: make it more obvious that the volume-control is a volume-control 7f044372
* copyparty.exe: update deps (jinja2, markupsafe, pyinstaller) c0dacbc4
* improve safety of custom plugins 988a7223
  * if you've made your own plugins which expect certain values (host-header, filekeys) to be html-safe, then you'll want to upgrade
  * also fixes rss-feed xml if password contains special characters


▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1219-0037 `v1.16.6` merry \x58mas

48
docs/chunksizes.py
Executable file
@@ -0,0 +1,48 @@
#!/usr/bin/env python3

# there's far better ways to do this but its 4am and i dont wanna think

# just pypy it my dude

import math


def humansize(sz, terse=False):
    for unit in ["B", "KiB", "MiB", "GiB", "TiB"]:
        if sz < 1024:
            break

        sz /= 1024.0

    ret = " ".join([str(sz)[:4].rstrip("."), unit])

    if not terse:
        return ret

    return ret.replace("iB", "").replace(" ", "")


def up2k_chunksize(filesize):
    chunksize = 1024 * 1024
    stepsize = 512 * 1024
    while True:
        for mul in [1, 2]:
            nchunks = math.ceil(filesize * 1.0 / chunksize)
            if nchunks <= 256 or (chunksize >= 32 * 1024 * 1024 and nchunks <= 4096):
                return chunksize

            chunksize += stepsize
            stepsize *= mul


def main():
    prev = 1048576
    n = n0 = 524288
    while True:
        csz = up2k_chunksize(n)
        if csz > prev:
            print(f"| {n-n0:>18_} | {humansize(n-n0):>8} | {prev:>13_} | {humansize(prev):>8} |".replace("_", " "))
            prev = csz
        n += n0


main()

@@ -6,6 +6,7 @@
* [up2k](#up2k) - quick outline of the up2k protocol
* [why not tus](#why-not-tus) - I didn't know about [tus](https://tus.io/)
* [why chunk-hashes](#why-chunk-hashes) - a single sha512 would be better, right?
* [list of chunk-sizes](#list-of-chunk-sizes) - specific chunksizes are enforced
* [hashed passwords](#hashed-passwords) - regarding the curious decisions
* [http api](#http-api)
* [read](#read)

@@ -95,6 +96,44 @@ hashwasm would solve the streaming issue but reduces hashing speed for sha512 (x

* blake2 might be a better choice since xxh is non-cryptographic, but that gets ~15 MiB/s on slower androids

### list of chunk-sizes

specific chunksizes are enforced depending on total filesize

each pair of filesize/chunksize is the largest filesize which will use its listed chunksize; a 512 MiB file will use chunksize 2 MiB, but if the file is one byte larger than 512 MiB then it becomes 3 MiB

for the purpose of performance (or dodging arbitrary proxy limitations), it is possible to upload combined and/or partial chunks using stitching and/or subchunks respectively

| filesize (bytes) | filesize | chunksize (bytes) | chunksize |
| -----------------: | -------: | ------------: | ------: |
| 268 435 456 | 256 MiB | 1 048 576 | 1.0 MiB |
| 402 653 184 | 384 MiB | 1 572 864 | 1.5 MiB |
| 536 870 912 | 512 MiB | 2 097 152 | 2.0 MiB |
| 805 306 368 | 768 MiB | 3 145 728 | 3.0 MiB |
| 1 073 741 824 | 1.0 GiB | 4 194 304 | 4.0 MiB |
| 1 610 612 736 | 1.5 GiB | 6 291 456 | 6.0 MiB |
| 2 147 483 648 | 2.0 GiB | 8 388 608 | 8.0 MiB |
| 3 221 225 472 | 3.0 GiB | 12 582 912 | 12 MiB |
| 4 294 967 296 | 4.0 GiB | 16 777 216 | 16 MiB |
| 6 442 450 944 | 6.0 GiB | 25 165 824 | 24 MiB |
| 137 438 953 472 | 128 GiB | 33 554 432 | 32 MiB |
| 206 158 430 208 | 192 GiB | 50 331 648 | 48 MiB |
| 274 877 906 944 | 256 GiB | 67 108 864 | 64 MiB |
| 412 316 860 416 | 384 GiB | 100 663 296 | 96 MiB |
| 549 755 813 888 | 512 GiB | 134 217 728 | 128 MiB |
| 824 633 720 832 | 768 GiB | 201 326 592 | 192 MiB |
| 1 099 511 627 776 | 1.0 TiB | 268 435 456 | 256 MiB |
| 1 649 267 441 664 | 1.5 TiB | 402 653 184 | 384 MiB |
| 2 199 023 255 552 | 2.0 TiB | 536 870 912 | 512 MiB |
| 3 298 534 883 328 | 3.0 TiB | 805 306 368 | 768 MiB |
| 4 398 046 511 104 | 4.0 TiB | 1 073 741 824 | 1.0 GiB |
| 6 597 069 766 656 | 6.0 TiB | 1 610 612 736 | 1.5 GiB |
| 8 796 093 022 208 | 8.0 TiB | 2 147 483 648 | 2.0 GiB |
| 13 194 139 533 312 | 12.0 TiB | 3 221 225 472 | 3.0 GiB |
| 17 592 186 044 416 | 16.0 TiB | 4 294 967 296 | 4.0 GiB |
| 26 388 279 066 624 | 24.0 TiB | 6 442 450 944 | 6.0 GiB |
| 35 184 372 088 832 | 32.0 TiB | 8 589 934 592 | 8.0 GiB |
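
the table can be reproduced from the enforcement rule (at most 256 chunks, relaxed to 4096 once the chunksize reaches 32 MiB); here's a minimal sketch mirroring the algorithm in `docs/chunksizes.py`:

```python
import math

def up2k_chunksize(filesize):
    # grow the chunksize until the chunk-count limit is satisfied:
    # max 256 chunks, or max 4096 once chunks reach 32 MiB
    chunksize = 1024 * 1024
    stepsize = 512 * 1024
    while True:
        for mul in [1, 2]:
            nchunks = math.ceil(filesize * 1.0 / chunksize)
            if nchunks <= 256 or (chunksize >= 32 * 1024 * 1024 and nchunks <= 4096):
                return chunksize
            chunksize += stepsize
            stepsize *= mul

# a 512 MiB file gets 2 MiB chunks; one byte more bumps it to 3 MiB
print(up2k_chunksize(536870912))      # 2097152
print(up2k_chunksize(536870912 + 1))  # 3145728
```
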
# hashed passwords

@@ -173,6 +212,7 @@ authenticate using header `Cookie: cppwd=foo` or url param `&pw=foo`
| method | params | body | result |
|--|--|--|--|
| PUT | | (binary data) | upload into file at URL |
| PUT | `?j` | (binary data) | ...and reply with json |
| PUT | `?ck` | (binary data) | upload without checksum gen (faster) |
| PUT | `?ck=md5` | (binary data) | return md5 instead of sha512 |
| PUT | `?gz` | (binary data) | compress with gzip and write into file at URL |

@@ -197,6 +237,7 @@ upload modifiers:
| http-header | url-param | effect |
|--|--|--|
| `Accept: url` | `want=url` | return just the file URL |
| `Accept: json` | `want=json` | return upload info as json; same as `?j` |
| `Rand: 4` | `rand=4` | generate random filename with 4 characters |
| `Life: 30` | `life=30` | delete file after 30 seconds |
| `CK: no` | `ck` | disable serverside checksum (maybe faster) |

22
docs/idp.md
@@ -20,3 +20,25 @@ this means that, if an IdP volume is located inside a folder that is readable by
and likewise -- if the IdP volume is inside a folder that is only accessible by certain users, but the IdP volume is configured to allow access from unauthenticated users, then the contents of the volume will NOT be accessible until it is revived

until this limitation is fixed (if ever), it is recommended to place IdP volumes inside an appropriate parent volume, so they can inherit acceptable permissions until their revival; see the "strategic volumes" at the bottom of [./examples/docker/idp/copyparty.conf](./examples/docker/idp/copyparty.conf)

## Connecting webdav clients

If you use only IdP and want to connect via rclone, you have to adapt a few things.
The following steps are for Authelia, but should be easily adaptable to other IdPs and clients. There may be better/smarter ways to do this, but this is a known solution.

1. Add a rule for your domain and set it to one factor
```
rules:
  - domain: 'sub.domain.tld'
    policy: one_factor
```
2. After you have created your rclone config, find its location with `rclone config file` and add the headers option to it, changing the string to `username:password` base64-encoded. Make sure to set the right url location, otherwise you will get a 401 from copyparty.
```
[servername-dav]
type = webdav
url = https://sub.domain.tld/u/user/priv/
vendor = owncloud
pacer_min_sleep = 0.01ms
headers = Proxy-Authorization,basic base64encodedstring==
```
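
a quick way to produce the base64 string for the `headers` line is to encode `username:password` (shown here with placeholder credentials, not a real login):

```python
import base64

# "username:password" is a placeholder; substitute your real IdP credentials
creds = "username:password"
token = base64.b64encode(creds.encode()).decode()
print("headers = Proxy-Authorization,basic " + token)
# headers = Proxy-Authorization,basic dXNlcm5hbWU6cGFzc3dvcmQ=
```

make sure there is no trailing newline inside the encoded string (e.g. from `echo` in a shell), otherwise the header value will be wrong.
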

@@ -259,6 +259,12 @@ for d in /usr /var; do find $d -type f -size +30M 2>/dev/null; done | while IFS=
for f in {0..255}; do echo $f; truncate -s 256M $f; b1=$(printf '%02x' $f); for o in {0..255}; do b2=$(printf '%02x' $o); printf "\x$b1\x$b2" | dd of=$f bs=2 seek=$((o*1024*1024)) conv=notrunc 2>/dev/null; done; done
# create 6.06G file with 16 bytes of unique data at start+end of each 32M chunk
sz=6509559808; truncate -s $sz f; csz=33554432; sz=$((sz/16)); step=$((csz/16)); ofs=0; while [ $ofs -lt $sz ]; do dd if=/dev/urandom of=f bs=16 count=2 seek=$ofs conv=notrunc iflag=fullblock; [ $ofs = 0 ] && ofs=$((ofs+step-1)) || ofs=$((ofs+step)); done
# same but for chunksizes 16M (3.1G), 24M (4.1G), 48M (128.1G)
sz=3321225472; csz=16777216;
sz=4394967296; csz=25165824;
sz=6509559808; csz=33554432;
sz=138438953472; csz=50331648;
f=csz-$csz; truncate -s $sz $f; sz=$((sz/16)); step=$((csz/16)); ofs=0; while [ $ofs -lt $sz ]; do dd if=/dev/urandom of=$f bs=16 count=2 seek=$ofs conv=notrunc iflag=fullblock; [ $ofs = 0 ] && ofs=$((ofs+step-1)) || ofs=$((ofs+step)); done

# py2 on osx
brew install python@2

140
docs/synology-dsm.md
Normal file
@@ -0,0 +1,140 @@
# running copyparty on synology dsm nas



this has been tested on a `Synology ds218+` NAS with 1 SHR storage-pool and 1 volume, but the same steps should work in more advanced setups too

verified on DSM 7.1 and 7.2, but not on 6.x since my flea-market ds218+ refuses to install it for some reason


# ok let's go

go to controlpanel -> shared-folders, and create the following shared-folders if you don't already have appropriate ones:

* a shared-folder for configuration files, preferably on SSD if you have one
* one or more shared-folders for your actual data/media to share

(btw, when you create the shared-folders, it asks whether you want to enable data checksum and file compression, i would recommend both)

the rest of this doc assumes that these two shared-folders are named `configs` and `media1`, and that you made an empty folder inside the `configs` shared-folder named `cpp`

* your copyparty config file (see below) should be named `something.conf` directly inside that cpp folder, for example `/configs/cpp/copyparty.conf`
* during first start, copyparty will create a folder there named `copyparty`, in other words `/configs/cpp/copyparty` which you should leave alone; that's where copyparty stores its indexes and other runtime config


## recommended copyparty config

open the Package Center and install `Text Editor` (by Synology Inc.) to create and edit your copyparty config:



* note the `copyparty` and `hist` folders in that screenshot which are autogenerated by copyparty and to be left alone

```yaml
[global]
  e2d, e2t         # remember uploads & read media tags
  rss, daw, ver    # some other nice-to-have features
  #dedup           # you may want this, or maybe not
  hist: /cfg/hist  # don't pollute the shared-folder
  name: synology   # shows in the browser, can be anything

[accounts]
  ed: wark  # username ed, password wark

[/]    # share the following at the webroot:
  /w   # the "/w" docker-volume (the shared-folder)
  accs:
    A: ed  # give Admin to username ed

# hide the synology system files by creating a hidden volume
[/@eaDir]
  /w/@eaDir
```

if you ever change the copyparty config file, then [restart the container](https://ocv.me/copyparty/doc/pics/dsm71-02.png) to make the changes take effect

okay now continue with one of these:

* [DSM v7.2 or newer](#dsm-v72-or-newer)
* [all older DSM versions](#dsm-v6x-dsm-v71x-or-older)


# DSM v7.2 or newer

`Docker` was replaced by `Container Manager` in DSM v7.2 but they're almost the same thing;

* open the `Package Center` and install the [Container Manager package](https://ocv.me/copyparty/doc/pics/dsm72-01.png) by `Docker Inc.`
* open the `Container Manager` app
* go to the `Registry` tab and search for `copyparty`
* [doubleclick copyparty/ac](https://ocv.me/copyparty/doc/pics/dsm72-02.png) and keep the [default `latest`](https://ocv.me/copyparty/doc/pics/dsm72-03.png) when it asks you which tag to use
* switch to the `Container` tab and click `Create`
* [choose `copyparty/ac:latest`](https://ocv.me/copyparty/doc/pics/dsm72-04.png) and click `Next`

finally, in the [Advanced Settings](https://ocv.me/copyparty/doc/pics/dsm72-05.png) window,

* under `Port Settings`, type `3923` into the `Local Port` textbox
* click `Add Folder` and select `/configs/cpp` on your nas (the `cpp` folder in the `configs` shared-folder), and change `Mount path` to `/cfg`
* click `Add Folder` and select `/media1` on your nas (the shared-folder that copyparty can share in its web-UI) and change `Mount path` to `/w`
* if you are adding multiple shared-folders for media, then the `Mount path` of the 2nd folder should be something like `/w/share2` or `/w/music`

copyparty will launch and become available at http://192.168.1.9:3923/ (assuming `192.168.1.9` is your nas ip)

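
for reference, the container that the GUI steps above produce is roughly equivalent to this `docker-compose.yml` sketch; the `/volume1/...` host paths are assumptions (the usual DSM mount root), adjust them to match your shared-folders:

```yaml
services:
  copyparty:
    image: copyparty/ac:latest
    ports:
      - "3923:3923"
    volumes:
      # the cpp folder in the configs shared-folder
      - /volume1/configs/cpp:/cfg
      # the media shared-folder which copyparty shares in its web-UI
      - /volume1/media1:/w
    restart: unless-stopped
```
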
# DSM v6.x, DSM v7.1.x or older

if you're using DSM 7.1 or older, then you don't have [Container Manager](https://www.synology.com/en-global/dsm/packages/ContainerManager) yet and you'll have to use [Docker](https://www.synology.com/en-global/dsm/packages/Docker?os_ver=6.2&search=docker) instead. Here's how:

* open the `Package Center` and install the [Docker package](https://ocv.me/copyparty/doc/pics/dsm71-01.png) by `Docker Inc.`
* open the `Docker` app
* go to the `Registry` tab and search for `copyparty`
* [doubleclick copyparty/ac](https://ocv.me/copyparty/doc/pics/dsm71-02.png) and keep the [default `latest`](https://ocv.me/copyparty/doc/pics/dsm71-03.png) when it asks you which tag to use
* switch to the `Container` tab and click `Create`
* [choose `copyparty/ac:latest`](https://ocv.me/copyparty/doc/pics/dsm71-04.png) and `Next`
* in the [Network](https://ocv.me/copyparty/doc/pics/dsm71-05.png) window, keep the default `Use the selected networks: [x] bridge`
* in the [General Settings](https://ocv.me/copyparty/doc/pics/dsm71-06.png) window, just keep everything default (in other words, everything disabled)
* in the [Port Settings](https://ocv.me/copyparty/doc/pics/dsm71-07.png) window, change `Local Port` to `3923` (or choose something else, but it cannot be the default `Auto`)

finally, in the [Volume Settings](https://ocv.me/copyparty/doc/pics/dsm71-08.png) window, add a docker volume for copyparty config, and at least one volume for media-files which copyparty can share in its web-UI

* click `Add Folder` and select `/configs/cpp` on your nas (the `cpp` folder in the `configs` shared-folder), and change `Mount path` to `/cfg`
* click `Add Folder` and select `/media1` on your nas (the shared-folder that copyparty can share in its web-UI) and change `Mount path` to `/w`
* if you are adding multiple shared-folders for media, then the `Mount path` of the 2nd folder should be something like `/w/share2` or `/w/music`

copyparty will launch and become available at http://192.168.1.9:3923/ (assuming `192.168.1.9` is your nas ip)


# misc notes

note that if you only want to share some folders inside your data volume, and not all of it, then you can either give copyparty the whole shared-folder anyways and control/restrict access in the copyparty config file (recommended), or you can add each folder as a new docker volume (not as flexible)


## regarding ram usage

the ram usage indicator in both `Docker` and `Container Manager` is misleading because it also counts the kernel disk cache which makes the number insanely high -- the synology resource monitor shows the correct values, usually less than 100 MiB

to see the actual memory usage by copyparty, see `Resource Monitor` -> `Task Manager` -> `Processes` and look at the `Private Memory` of `python3` which is probably copyparty


## regarding performance

when uploading files to the synology nas with the respective web-UIs,

* `File Station` does about 16 MiB/s,
* `Synology Drive Server` does about 50 MiB/s; deceptively fast upload speeds at first, but when the file is fully uploaded, there is a lengthy "processing" step at the end, reducing the average speed to about 50% of the initial
* copyparty maxes the HDD write-speeds, 99 MiB/s

when uploading to the synology nas over webdav,

* `WebDAV Server` by `Synology Inc.` in the Package Center does 86 MiB/s
* copyparty does 79 MiB/s; the NAS CPU is a bottleneck because copyparty verifies the upload checksum while `WebDAV Server` doesn't

@@ -27,7 +27,7 @@ ac96786e5d35882e0c5b724794329c9125c2b86ae7847f17acfc49f0d294312c6afc1c3f248655de
6df21f0da408a89f6504417c7cdf9aaafe4ed88cfa13e9b8fa8414f604c0401f885a04bbad0484dc51a29284af5d1548e33c6cc6bfb9896d9992c1b1074f332d MarkupSafe-3.0.2-cp312-cp312-win_amd64.whl
8a6e2b13a2ec4ef914a5d62aad3db6464d45e525a82e07f6051ed10474eae959069e165dba011aefb8207cdfd55391d73d6f06362c7eb247b08763106709526e mutagen-1.47.0-py3-none-any.whl
0203ec2551c4836696cfab0b2c9fff603352f03fa36e7476e2e1ca7ec57a3a0c24bd791fcd92f342bf817f0887854d9f072e0271c643de4b313d8c9569ba8813 packaging-24.1-py3-none-any.whl
2be320b4191f208cdd6af183c77ba2cf460ea52164ee45ac3ff17d6dfa57acd9deff016636c2dd42a21f4f6af977d5f72df7dacf599bebcf41757272354d14c1 pillow-10.4.0-cp312-cp312-win_amd64.whl
12d7921dc7dfd8a4b0ea0fa2bae8f1354fcdd59ece3d7f4e075aed631f9ba791dc142c70b1ccd1e6287c43139df1db26bd57a7a217c8da3a77326036495cdb57 pillow-11.1.0-cp312-cp312-win_amd64.whl
f0463895e9aee97f31a2003323de235fed1b26289766dc0837261e3f4a594a31162b69e9adbb0e9a31e2e2d4b5f25c762ed1669553df7dc89a8ba4f85d297873 pyinstaller-6.11.1-py3-none-win_amd64.whl
d550a0a14428386945533de2220c4c2e37c0c890fc51a600f626c6ca90a32d39572c121ec04c157ba3a8d6601cb021f8433d871b5c562a3d342c804fffec90c1 pyinstaller_hooks_contrib-2024.11-py3-none-any.whl
0f623c9ab52d050283e97a986ba626d86b04cd02fa7ffdf352740576940b142b264709abadb5d875c90f625b28103d7210b900e0d77f12c1c140108bd2a159aa python-3.12.8-amd64.exe

@@ -38,7 +38,7 @@ fns=(
MarkupSafe-2.1.5-cp312-cp312-win_amd64.whl
mutagen-1.47.0-py3-none-any.whl
packaging-24.1-py3-none-any.whl
pillow-10.4.0-cp312-cp312-win_amd64.whl
pillow-11.1.0-cp312-cp312-win_amd64.whl
pyinstaller-6.10.0-py3-none-win_amd64.whl
pyinstaller_hooks_contrib-2024.8-py3-none-any.whl
python-3.12.7-amd64.exe

@@ -20,7 +20,7 @@ cat $f | awk '
o{next}
/^#/{s=1;rs=0;pr()}
/^#* *(nix package)/{rs=1}
/^#* *(themes|install on android|dev env setup|just the sfx|complete release|optional gpl stuff|nixos module)|```/{s=rs}
/^#* *(themes|install on android|dev env setup|just the sfx|complete release|optional gpl stuff|nixos module|reverse-proxy perf)|```/{s=rs}
/^#/{
lv=length($1);
sub(/[^ ]+ /,"");