Compare commits
21 Commits
* c1c0ecca13
* ee62836383
* 705f598b1a
* 414de88925
* 53ffd245dd
* cf1b756206
* 22b58e31ef
* b7f9bf5a28
* aba680b6c2
* fabada95f6
* 9ccd8bb3ea
* 1d68acf8f0
* 1e7697b551
* 4a4ec88d00
* 6adc778d62
* 6b7ebdb7e9
* 3d7facd774
* eaee1f2cab
* ff012221ae
* c398553748
* 3ccbcf6185
README.md: 100 changes
@@ -80,12 +80,14 @@ turn almost any device into a file server with resumable uploads/downloads using
* [event hooks](#event-hooks) - trigger a program on uploads, renames etc ([examples](./bin/hooks/))
* [upload events](#upload-events) - the older, more powerful approach ([examples](./bin/mtag/))
* [handlers](#handlers) - redefine behavior with plugins ([examples](./bin/handlers/))
* [ip auth](#ip-auth) - autologin based on IP range (CIDR)
* [identity providers](#identity-providers) - replace copyparty passwords with oauth and such
* [user-changeable passwords](#user-changeable-passwords) - if permitted, users can change their own passwords
* [using the cloud as storage](#using-the-cloud-as-storage) - connecting to an aws s3 bucket and similar
-* [hiding from google](#hiding-from-google) - tell search engines you dont wanna be indexed
+* [hiding from google](#hiding-from-google) - tell search engines you don't wanna be indexed
* [themes](#themes)
* [complete examples](#complete-examples)
* [listen on port 80 and 443](#listen-on-port-80-and-443) - become a *real* webserver
* [reverse-proxy](#reverse-proxy) - running copyparty next to other websites
* [real-ip](#real-ip) - teaching copyparty how to see client IPs
* [prometheus](#prometheus) - metrics/stats can be enabled
@@ -114,7 +116,7 @@ turn almost any device into a file server with resumable uploads/downloads using
* [https](#https) - both HTTP and HTTPS are accepted
* [recovering from crashes](#recovering-from-crashes)
* [client crashes](#client-crashes)
-* [frefox wsod](#frefox-wsod) - firefox 87 can crash during uploads
+* [firefox wsod](#firefox-wsod) - firefox 87 can crash during uploads
* [HTTP API](#HTTP-API) - see [devnotes](./docs/devnotes.md#http-api)
* [dependencies](#dependencies) - mandatory deps
* [optional dependencies](#optional-dependencies) - install these to enable bonus features
@@ -581,7 +583,7 @@ it does static images with Pillow / pyvips / FFmpeg, and uses FFmpeg for video f
* pyvips is 3x faster than Pillow, Pillow is 3x faster than FFmpeg
* disable thumbnails for specific volumes with volflag `dthumb` for all, or `dvthumb` / `dathumb` / `dithumb` for video/audio/images only

-audio files are covnerted into spectrograms using FFmpeg unless you `--no-athumb` (and some FFmpeg builds may need `--th-ff-swr`)
+audio files are converted into spectrograms using FFmpeg unless you `--no-athumb` (and some FFmpeg builds may need `--th-ff-swr`)

images with the following names (see `--th-covers`) become the thumbnail of the folder they're in: `folder.png`, `folder.jpg`, `cover.png`, `cover.jpg`
* the order is significant, so if both `cover.png` and `folder.jpg` exist in a folder, it will pick the first matching `--th-covers` entry (`folder.jpg`)
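in other words, the cover pick is a plain first-match over the configured list; a minimal sketch of that rule (`th_covers` and `files` are illustrative stand-ins, not copyparty's internals):

```python
def pick_cover(th_covers, files):
    # the first --th-covers entry that exists in the folder wins,
    # so config order decides, not on-disk order
    present = set(files)
    for name in th_covers:
        if name in present:
            return name
    return None

# both candidates exist, but folder.jpg comes first in --th-covers:
pick_cover(["folder.png", "folder.jpg", "cover.png", "cover.jpg"],
           ["cover.png", "folder.jpg"])  # -> "folder.jpg"
```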
@@ -667,7 +669,7 @@ see [up2k](./docs/devnotes.md#up2k) for details on how it works, or watch a [dem

**protip:** if you enable `favicon` in the `[⚙️] settings` tab (by typing something into the textbox), the icon in the browser tab will indicate upload progress -- also, the `[🔔]` and/or `[🔊]` switches enable visible and/or audible notifications on upload completion

-the up2k UI is the epitome of polished inutitive experiences:
+the up2k UI is the epitome of polished intuitive experiences:
* "parallel uploads" specifies how many chunks to upload at the same time
* `[🏃]` analysis of other files should continue while one is uploading
* `[🥔]` shows a simpler UI for faster uploads from slow devices
@@ -716,7 +718,7 @@ you can unpost even if you don't have regular move/delete access, however only f

### self-destruct

-uploads can be given a lifetime, afer which they expire / self-destruct
+uploads can be given a lifetime, after which they expire / self-destruct

the feature must be enabled per-volume with the `lifetime` [upload rule](#upload-rules) which sets the upper limit for how long a file gets to stay on the server
@@ -743,7 +745,7 @@ the control-panel shows the ETA for all incoming files, but only for files bei

cut/paste, rename, and delete files/folders (if you have permission)

-file selection: click somewhere on the line (not the link itsef), then:
+file selection: click somewhere on the line (not the link itself), then:
* `space` to toggle
* `up/down` to move
* `shift-up/down` to move-and-select
@@ -777,6 +779,7 @@ semi-intentional limitations:

* cleanup of expired shares only works when global option `e2d` is set, and/or at least one volume on the server has volflag `e2d`
* only folders from the same volume are shared; if you are sharing a folder which contains other volumes, then the contents of those volumes will not be available
* related to [IdP volumes being forgotten on shutdown](https://github.com/9001/copyparty/blob/hovudstraum/docs/idp.md#idp-volumes-are-forgotten-on-shutdown), any shares pointing into a user's IdP volume will be unavailable until that user makes their first request after a restart
* no option to "delete after first access" because tricky
  * when linking something to discord (for example) it'll get accessed by their scraper and that would count as a hit
  * browsers wouldn't be able to resume a broken download unless the requester's IP gets allowlisted for X minutes (ref. tricky)
@@ -936,6 +939,8 @@ see [./srv/expand/](./srv/expand/) for usage and examples

* files named `README.md` / `readme.md` will be rendered after directory listings unless `--no-readme` (but `.epilogue.html` takes precedence)

* and `PREADME.md` / `preadme.md` is shown above directory listings unless `--no-readme` or `.prologue.html`

* `README.md` and `*logue.html` can contain placeholder values which are replaced server-side before embedding into directory listings; see `--help-exp`
@@ -987,7 +992,11 @@ uses [multicast dns](https://en.wikipedia.org/wiki/Multicast_DNS) to give copypa

all enabled services ([webdav](#webdav-server), [ftp](#ftp-server), [smb](#smb-server)) will appear in mDNS-aware file managers (KDE, gnome, macOS, ...)

-the domain will be http://partybox.local if the machine's hostname is `partybox` unless `--name` specifies soemthing else
+the domain will be `partybox.local` if the machine's hostname is `partybox` unless `--name` specifies something else

and the web-UI will be available at http://partybox.local:3923/

* if you want to get rid of the `:3923` so you can use http://partybox.local/ instead then see [listen on port 80 and 443](#listen-on-port-80-and-443)
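a quick way to confirm the name resolves from another machine on the LAN (assuming an mDNS resolver such as avahi or Bonjour is installed there; `partybox` is the example hostname from above):

```python
import socket

# prints the server's IP if .local resolution works;
# on some linux distros this needs the nss-mdns package
print(socket.gethostbyname("partybox.local"))
```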

### ssdp
@@ -1013,7 +1022,7 @@ print a qr-code [(screenshot)](https://user-images.githubusercontent.com/241032/
* `--qrz 1` forces 1x zoom instead of autoscaling to fit the terminal size
* 1x may render incorrectly on some terminals/fonts, but 2x should always work

-it uses the server hostname if [mdns](#mdns) is enbled, otherwise it'll use your external ip (default route) unless `--qri` specifies a specific ip-prefix or domain
+it uses the server hostname if [mdns](#mdns) is enabled, otherwise it'll use your external ip (default route) unless `--qri` specifies a specific ip-prefix or domain


## ftp server
@@ -1038,7 +1047,7 @@ some recommended FTP / FTPS clients; `wark` = example password:

## webdav server

-with read-write support, supports winXP and later, macos, nautilus/gvfs ... a greay way to [access copyparty straight from the file explorer in your OS](#mount-as-drive)
+with read-write support, supports winXP and later, macos, nautilus/gvfs ... a great way to [access copyparty straight from the file explorer in your OS](#mount-as-drive)

click the [connect](http://127.0.0.1:3923/?hc) button in the control-panel to see connection instructions for windows, linux, macos
@@ -1142,8 +1151,8 @@ authenticate with one of the following:
tweaking the ui

* set default sort order globally with `--sort` or per-volume with the `sort` volflag; specify one or more comma-separated columns to sort by, and prefix the column name with `-` for reverse sort
-* the column names you can use are visible as tooltips when hovering over the column headers in the directory listing, for example `href ext sz ts tags/.up_at tags/Cirle tags/.tn tags/Artist tags/Title`
-* to sort in music order (album, track, artist, title) with filename as fallback, you could `--sort tags/Cirle,tags/.tn,tags/Artist,tags/Title,href`
+* the column names you can use are visible as tooltips when hovering over the column headers in the directory listing, for example `href ext sz ts tags/.up_at tags/Circle tags/.tn tags/Artist tags/Title`
+* to sort in music order (album, track, artist, title) with filename as fallback, you could `--sort tags/Circle,tags/.tn,tags/Artist,tags/Title,href`
* to sort by upload date, first enable showing the upload date in the listing with `-e2d -mte +.up_at` and then `--sort tags/.up_at`

see [./docs/rice](./docs/rice) for more, including how to add stuff (css/`<meta>`/...) to the html `<head>` tag, or to add your own translation
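one way to picture the column spec: apply the comma-separated columns right-to-left with a stable sort, so the leftmost column gets the highest priority (a hypothetical re-implementation, not copyparty's actual code; `rows` are dicts keyed by column name):

```python
def sort_listing(rows, spec):
    # python's list.sort is stable, so sorting by the last column first
    # and the first column last yields the usual multi-key ordering
    for col in reversed(spec.split(",")):
        rev = col.startswith("-")
        rows.sort(key=lambda r, c=col.lstrip("-"): r.get(c) or "", reverse=rev)
    return rows

files = [{"href": "b.mp3", "tags/.tn": "2"}, {"href": "a.mp3", "tags/.tn": "1"}]
sort_listing(files, "tags/.tn,href")  # track number first, filename as tiebreaker
```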
@@ -1166,7 +1175,11 @@ if you want to entirely replace the copyparty response with your own jinja2 temp

enable symlink-based upload deduplication globally with `--dedup` or per-volume with volflag `dedup`

-when someone tries to upload a file that already exists on the server, the upload will be politely declined and a symlink is created instead, pointing to the nearest copy on disk, thus reducinc disk space usage
+by default, when someone tries to upload a file that already exists on the server, the upload will be politely declined, and the server will copy the existing file over to where the upload would have gone

if you enable deduplication with `--dedup` then it'll create a symlink instead of a full copy, thus reducing disk space usage

* on the contrary, if your server is hooked up to s3-glacier or similar storage where reading is expensive, and you cannot use `--safe-dedup=1` because you have other software tampering with your files, so you want to entirely disable detection of duplicate data instead, then you can specify `--no-clone` globally or `noclone` as a volflag

**warning:** when enabling dedup, you should also:
* enable indexing with `-e2dsa` or volflag `e2dsa` (see [file indexing](#file-indexing) section below); strongly recommended
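the decision boils down to this (a loose sketch under the assumption of a hash-indexed volume; `index` is a hypothetical wark-to-path map, not copyparty's actual registry):

```python
import os, shutil

def finalize_upload(tmp_path, dst_path, wark, index, dedup=False):
    existing = index.get(wark)
    if existing is None:
        shutil.move(tmp_path, dst_path)   # first copy of this data
        index[wark] = dst_path
        return
    os.remove(tmp_path)                   # identical data already on disk
    if dedup:
        os.symlink(existing, dst_path)    # --dedup: link to the nearest copy
    else:
        shutil.copy2(existing, dst_path)  # default: clone the existing file
```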
@@ -1207,7 +1220,7 @@ through arguments:
* `-e2t` enables metadata indexing on upload
* `-e2ts` also scans for tags in all files that don't have tags yet
* `-e2tsr` also deletes all existing tags, doing a full reindex
-* `-e2v` verfies file integrity at startup, comparing hashes from the db
+* `-e2v` verifies file integrity at startup, comparing hashes from the db
* `-e2vu` patches the database with the new hashes from the filesystem
* `-e2vp` panics and kills copyparty instead
@@ -1420,11 +1433,27 @@ redefine behavior with plugins ([examples](./bin/handlers/))
replace 404 and 403 errors with something completely different (that's it for now)


## ip auth

autologin based on IP range (CIDR), using the global-option `--ipu`

for example, if everyone with an IP that starts with `192.168.123` should automatically log in as the user `spartacus`, then you can either specify `--ipu=192.168.123.0/24=spartacus` as a commandline option, or put this in a config file:

```yaml
[global]
ipu: 192.168.123.0/24=spartacus
```

repeat the option to map additional subnets

**be careful with this one!** if you have a reverseproxy, then you definitely want to make sure you have [real-ip](#real-ip) configured correctly, and it's probably a good idea to nullmap the reverseproxy's IP just in case; so if your reverseproxy is sending requests from `172.24.27.9` then that would be `--ipu=172.24.27.9/32=`
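conceptually the lookup is a first-match of the client IP against the configured subnets; a standard-library sketch of the same idea (copyparty's real implementation is its `NetMap` / `load_ipu` helpers, so treat this as an approximation):

```python
import ipaddress

IPU = [  # (subnet, username); "" means stay unauthenticated
    (ipaddress.ip_network("172.24.27.9/32"), ""),            # nullmapped proxy
    (ipaddress.ip_network("192.168.123.0/24"), "spartacus"),
]

def auto_user(client_ip):
    ip = ipaddress.ip_address(client_ip)
    for net, uname in IPU:
        if ip in net:
            return uname
    return ""

print(auto_user("192.168.123.7"))  # -> spartacus
```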

## identity providers

replace copyparty passwords with oauth and such

-you can disable the built-in password-based login sysem, and instead replace it with a separate piece of software (an identity provider) which will then handle authenticating / authorizing of users; this makes it possible to login with passkeys / fido2 / webauthn / yubikey / ldap / active directory / oauth / many other single-sign-on contraptions
+you can disable the built-in password-based login system, and instead replace it with a separate piece of software (an identity provider) which will then handle authenticating / authorizing of users; this makes it possible to login with passkeys / fido2 / webauthn / yubikey / ldap / active directory / oauth / many other single-sign-on contraptions

a popular choice is [Authelia](https://www.authelia.com/) (config-file based), another one is [authentik](https://goauthentik.io/) (GUI-based, more complex)
@@ -1451,7 +1480,7 @@ if permitted, users can change their own passwords in the control-panel

* if you run multiple copyparty instances with different users you *almost definitely* want to specify separate DBs for each instance

-* if [password hashing](#password-hashing) is enbled, the passwords in the db are also hashed
+* if [password hashing](#password-hashing) is enabled, the passwords in the db are also hashed

* ...which means that all user-defined passwords will be forgotten if you change password-hashing settings
@@ -1471,7 +1500,7 @@ you may improve performance by specifying larger values for `--iobuf` / `--s-rd-

## hiding from google

-tell search engines you dont wanna be indexed, either using the good old [robots.txt](https://www.robotstxt.org/robotstxt.html) or through copyparty settings:
+tell search engines you don't wanna be indexed, either using the good old [robots.txt](https://www.robotstxt.org/robotstxt.html) or through copyparty settings:

* `--no-robots` adds HTTP (`X-Robots-Tag`) and HTML (`<meta>`) headers with `noindex, nofollow` globally
* volflag `[...]:c,norobots` does the same thing for that single volume
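the effect is easy to verify from a quick script (assuming a local instance on the default port, started with `--no-robots`):

```python
import urllib.request

# every response should now carry the noindex header
with urllib.request.urlopen("http://127.0.0.1:3923/") as r:
    print(r.headers.get("X-Robots-Tag"))  # expect: noindex, nofollow
```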
@@ -1546,6 +1575,33 @@ if you want to change the fonts, see [./docs/rice/](./docs/rice/)
`-lo log/cpp-%Y-%m%d-%H%M%S.txt.xz`


## listen on port 80 and 443

become a *real* webserver which people can access by just going to your IP or domain without specifying a port

**if you're on windows,** then you just need to add the commandline argument `-p 80,443` and you're done! nice

**if you're on macos,** sorry, I don't know

**if you're on Linux,** you have the following 4 options:

* **option 1:** set up a [reverse-proxy](#reverse-proxy) -- this one makes a lot of sense if you're running on a proper headless server, because that way you get real HTTPS too

* **option 2:** NAT to port 3923 -- this is cumbersome since you'll need to do it every time you reboot, and the exact command may depend on your linux distribution:
  ```bash
  iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3923
  iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 3923
  ```

* **option 3:** disable the [security policy](https://www.w3.org/Daemon/User/Installation/PrivilegedPorts.html) which prevents the use of 80 and 443; this is *probably* fine:
  ```
  setcap CAP_NET_BIND_SERVICE=+eip $(realpath $(which python))
  python copyparty-sfx.py -p 80,443
  ```

* **option 4:** run copyparty as root (please don't)


## reverse-proxy

running copyparty next to other websites hosted on an existing webserver such as nginx, caddy, or apache
@@ -1894,7 +1950,7 @@ interact with copyparty using non-browser clients

* [igloo irc](https://iglooirc.com/): Method: `post` Host: `https://you.com/up/?want=url&pw=hunter2` Multipart: `yes` File parameter: `f`

-copyparty returns a truncated sha512sum of your PUT/POST as base64; you can generate the same checksum locally to verify uplaods:
+copyparty returns a truncated sha512sum of your PUT/POST as base64; you can generate the same checksum locally to verify uploads:

b512(){ printf "$((sha512sum||shasum -a512)|sed -E 's/ .*//;s/(..)/\\x\1/g')"|base64|tr '+/' '-_'|head -c44;}
b512 <movie.mkv
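the same checksum in Python, for machines without the coreutils one-liner (a sketch that should match the b512 shell function above: the first 33 bytes of the sha512 digest as url-safe base64, 44 chars):

```python
import base64, hashlib

def b512(path):
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for blk in iter(lambda: f.read(1 << 20), b""):
            h.update(blk)
    # 33 bytes -> exactly 44 base64 chars; urlsafe_b64encode does the
    # same +/ to -_ swap as the tr in the shell version
    return base64.urlsafe_b64encode(h.digest()[:33]).decode()

print(b512("movie.mkv"))
```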
@@ -1994,7 +2050,7 @@ when uploading files,
* up to 30% faster uploads if you hide the upload status list by switching away from the `[🚀]` up2k ui-tab (or closing it)
* optionally you can switch to the lightweight potato ui by clicking the `[🥔]`
* switching to another browser-tab also works, the favicon will update every 10 seconds in that case
-* unlikely to be a problem, but can happen when uploding many small files, or your internet is too fast, or PC too slow
+* unlikely to be a problem, but can happen when uploading many small files, or your internet is too fast, or PC too slow


# security
@@ -2042,7 +2098,7 @@ other misc notes:

behavior that might be unexpected

-* users without read-access to a folder can still see the `.prologue.html` / `.epilogue.html` / `README.md` contents, for the purpose of showing a description on how to use the uploader for example
+* users without read-access to a folder can still see the `.prologue.html` / `.epilogue.html` / `PREADME.md` / `README.md` contents, for the purpose of showing a description on how to use the uploader for example
* users can submit `<script>`s which autorun (in a sandbox) for other visitors in a few ways;
  * uploading a `README.md` -- avoid with `--no-readme`
  * renaming `some.html` to `.epilogue.html` -- avoid with either `--no-logues` or `--no-dot-ren`
@@ -2120,13 +2176,13 @@ if [cfssl](https://github.com/cloudflare/cfssl/releases/latest) is installed, co

## client crashes

-### frefox wsod
+### firefox wsod

firefox 87 can crash during uploads -- the entire browser goes, including all other browser tabs, everything turns white

however you can hit `F12` in the up2k tab and use the devtools to see how far you got in the uploads:

-* get a complete list of all uploads, organized by statuts (ok / no-good / busy / queued):
+* get a complete list of all uploads, organized by status (ok / no-good / busy / queued):
  `var tabs = { ok:[], ng:[], bz:[], q:[] }; for (var a of up2k.ui.tab) tabs[a.in].push(a); tabs`

* list of filenames which failed:
@@ -2243,7 +2299,7 @@ then again, if you are already into downloading shady binaries from the internet

## zipapp

-another emergency alternative, [copyparty.pyz](https://github.com/9001/copyparty/releases/latest/download/copyparty.pyz) has less features, is slow, requires python 3.7 or newer, worse compression, and more importantly is unable to benefit from more recent versions of jinja2 and such (which makes it less secure)... lots of drawbacks with this one really -- but it does not unpack any temporay files to disk, so it *may* just work if the regular sfx fails to start because the computer is messed up in certain funky ways, so it's worth a shot if all else fails
+another emergency alternative, [copyparty.pyz](https://github.com/9001/copyparty/releases/latest/download/copyparty.pyz) has less features, is slow, requires python 3.7 or newer, worse compression, and more importantly is unable to benefit from more recent versions of jinja2 and such (which makes it less secure)... lots of drawbacks with this one really -- but it does not unpack any temporary files to disk, so it *may* just work if the regular sfx fails to start because the computer is messed up in certain funky ways, so it's worth a shot if all else fails

run it by doubleclicking it, or try typing `python copyparty.pyz` in your terminal/console/commandline/telex if that fails
bin/u2c.py: 42 changes
@@ -1,8 +1,8 @@
#!/usr/bin/env python3
from __future__ import print_function, unicode_literals

-S_VERSION = "2.1"
-S_BUILD_DT = "2024-09-23"
+S_VERSION = "2.2"
+S_BUILD_DT = "2024-10-13"

"""
u2c.py: upload to copyparty
@@ -728,6 +728,7 @@ def handshake(ar, file, search):
    while True:
        sc = 600
        txt = ""
+       t0 = time.time()
        try:
            zs = json.dumps(req, separators=(",\n", ": "))
            sc, txt = web.req("POST", url, {}, zs.encode("utf-8"), MJ)
@@ -752,7 +753,9 @@ def handshake(ar, file, search):
            print("\nERROR: login required, or wrong password:\n%s" % (txt,))
            raise BadAuth()

-       eprint("handshake failed, retrying: %s\n %s\n\n" % (file.name, em))
+       t = "handshake failed, retrying: %s\n t0=%.3f t1=%.3f td=%.3f\n %s\n\n"
+       now = time.time()
+       eprint(t % (file.name, t0, now, now - t0, em))
        time.sleep(ar.cd)

    try:
@@ -869,8 +872,8 @@ class Ctl(object):
        self.hash_b = 0
        self.up_f = 0
        self.up_c = 0
-       self.up_b = 0
-       self.up_br = 0
+       self.up_b = 0  # num bytes handled
+       self.up_br = 0  # num bytes actually transferred
        self.uploader_busy = 0
        self.serialized = False
@@ -1013,11 +1016,14 @@ class Ctl(object):
            t = "%s eta @ %s/s, %s, %d# left\033[K" % (self.eta, spd, sleft, nleft)
            eprint(txt + "\033]0;{0}\033\\\r{0}{1}".format(t, tail))

+       if self.ar.wlist:
+           self.at_hash = time.time() - self.t0

        if self.hash_b and self.at_hash:
            spd = humansize(self.hash_b / self.at_hash)
            eprint("\nhasher: %.2f sec, %s/s\n" % (self.at_hash, spd))
-       if self.up_b and self.at_up:
-           spd = humansize(self.up_b / self.at_up)
+       if self.up_br and self.at_up:
+           spd = humansize(self.up_br / self.at_up)
            eprint("upload: %.2f sec, %s/s\n" % (self.at_up, spd))

        if not self.recheck:
@@ -1136,10 +1142,16 @@ class Ctl(object):
            self.up_b = self.hash_b

        if self.ar.wlist:
+           vp = file.rel.decode("utf-8")
+           if self.ar.chs:
+               zsl = [
+                   "%s %d %d" % (zsii[0], n, zsii[1])
+                   for n, zsii in enumerate(file.cids)
+               ]
+               print("chs: %s\n%s" % (vp, "\n".join(zsl)))
            zsl = [self.ar.wsalt, str(file.size)] + [x[0] for x in file.kchunks]
            zb = hashlib.sha512("\n".join(zsl).encode("utf-8")).digest()[:33]
            wark = ub64enc(zb).decode("utf-8")
-           vp = file.rel.decode("utf-8")
            if self.ar.jw:
                print("%s %s" % (wark, vp))
            else:
@@ -1177,6 +1189,7 @@ class Ctl(object):
                self.q_upload.put(None)
                return

+           chunksz = up2k_chunksize(file.size)
            upath = file.abs.decode("utf-8", "replace")
            if not VT100:
                upath = upath.lstrip("\\?")
@@ -1236,9 +1249,14 @@ class Ctl(object):
                file.up_c -= len(hs)
                for cid in hs:
                    sz = file.kchunks[cid][1]
+                   self.up_br -= sz
                    self.up_b -= sz
                    file.up_b -= sz

+           if hs and not file.up_b:
+               # first hs of this file; is this an upload resume?
+               file.up_b = chunksz * max(0, len(file.kchunks) - len(hs))

            file.ucids = hs

            if not hs:
@@ -1252,7 +1270,7 @@ class Ctl(object):
                    c1 = c2 = ""

                spd_h = humansize(file.size / file.t_hash, True)
-               if file.up_b:
+               if file.up_c:
                    t_up = file.t1_up - file.t0_up
                    spd_u = humansize(file.size / t_up, True)
@@ -1262,13 +1280,12 @@ class Ctl(object):
                    t = " found %s %s(%.2fs,%s/s)%s"
                    print(t % (upath, c1, file.t_hash, spd_h, c2))
                else:
-                   kw = "uploaded" if file.up_b else " found"
+                   kw = "uploaded" if file.up_c else " found"
                    print("{0} {1}".format(kw, upath))

                self._check_if_done()
                continue

-           chunksz = up2k_chunksize(file.size)
            njoin = (self.ar.sz * 1024 * 1024) // chunksz
            cs = hs[:]
            while cs:
@@ -1365,7 +1382,7 @@ def main():
    cores = (os.cpu_count() if hasattr(os, "cpu_count") else 0) or 2
    hcores = min(cores, 3)  # 4% faster than 4+ on py3.9 @ r5-4500U

-   ver = "{0} v{1} https://youtu.be/BIcOO6TLKaY".format(S_BUILD_DT, S_VERSION)
+   ver = "{0}, v{1}".format(S_BUILD_DT, S_VERSION)
    if "--version" in sys.argv:
        print(ver)
        return
@@ -1403,6 +1420,7 @@ source file/folder selection uses rsync syntax, meaning that:

    ap = app.add_argument_group("file-ID calculator; enable with url '-' to list warks (file identifiers) instead of upload/search")
    ap.add_argument("--wsalt", type=unicode, metavar="S", default="hunter2", help="salt to use when creating warks; must match server config")
+   ap.add_argument("--chs", action="store_true", help="verbose (print the hash/offset of each chunk in each file)")
    ap.add_argument("--jw", action="store_true", help="just identifier+filepath, not mtime/size too")

    ap = app.add_argument_group("performance tweaks")
@@ -1,6 +1,6 @@
# Maintainer: icxes <dev.null@need.moe>
pkgname=copyparty
-pkgver="1.15.3"
+pkgver="1.15.6"
pkgrel=1
pkgdesc="File server with accelerated resumable uploads, dedup, WebDAV, FTP, TFTP, zeroconf, media indexer, thumbnails++"
arch=("any")
@@ -21,7 +21,7 @@ optdepends=("ffmpeg: thumbnails for videos, images (slower) and audio, music tag
)
source=("https://github.com/9001/${pkgname}/releases/download/v${pkgver}/${pkgname}-${pkgver}.tar.gz")
backup=("etc/${pkgname}.d/init" )
-sha256sums=("d4b02a8d618749c317161773fdd3b66992557f682b7cccd8c4c8583497c4cb24")
+sha256sums=("abb5c1705cd80ea553d647d4a7b35b5e1dac5a517200551bcca79aa199f30875")

build() {
  cd "${srcdir}/${pkgname}-${pkgver}"
@@ -1,5 +1,5 @@
{
-  "url": "https://github.com/9001/copyparty/releases/download/v1.15.3/copyparty-sfx.py",
-  "version": "1.15.3",
-  "hash": "sha256-OmoLwakVaZM9QwkujT4wwhqC5KcaS5u81DDTjS2r7MI="
+  "url": "https://github.com/9001/copyparty/releases/download/v1.15.6/copyparty-sfx.py",
+  "version": "1.15.6",
+  "hash": "sha256-0ikt3jv9/XT/w/ew+R4rZxF6s7LwNhUvUYYIZtkQqbk="
}
@@ -16,8 +16,6 @@ except:
    TYPE_CHECKING = False

if True:
-   from types import ModuleType
    from typing import Any, Callable, Optional

PY2 = sys.version_info < (3,)
@@ -110,7 +108,6 @@ RES = set(zs.strip().split("\n"))

class EnvParams(object):
    def __init__(self) -> None:
-       self.pkg: Optional[ModuleType] = None
        self.t0 = time.time()
        self.mod = ""
        self.cfg = ""
@@ -218,8 +218,6 @@ def init_E(EE: EnvParams) -> None:

        raise Exception("could not find a writable path for config")

-   assert __package__  # !rm
-   E.pkg = sys.modules[__package__]
    E.mod = os.path.dirname(os.path.realpath(__file__))
    if E.mod.endswith("__init__"):
        E.mod = os.path.dirname(E.mod)
@@ -782,7 +780,7 @@ def get_sects():
            dedent(
                """
                specify --exp or the "exp" volflag to enable placeholder expansions
-               in README.md / .prologue.html / .epilogue.html
+               in README.md / PREADME.md / .prologue.html / .epilogue.html

                --exp-md (volflag exp_md) holds the list of placeholders which can be
                expanded in READMEs, and --exp-lg (volflag exp_lg) likewise for logues;
@@ -898,7 +896,7 @@ def get_sects():
            dedent(
                """
                the mDNS protocol is multicast-based, which means there are thousands
-               of fun and intersesting ways for it to break unexpectedly
+               of fun and interesting ways for it to break unexpectedly

                things to check if it does not work at all:
@@ -1007,6 +1005,7 @@ def add_upload(ap):
    ap2.add_argument("--hardlink", action="store_true", help="enable hardlink-based dedup; will fallback on symlinks when that is impossible (across filesystems) (volflag=hardlink)")
    ap2.add_argument("--hardlink-only", action="store_true", help="do not fallback to symlinks when a hardlink cannot be made (volflag=hardlinkonly)")
    ap2.add_argument("--no-dupe", action="store_true", help="reject duplicate files during upload; only matches within the same volume (volflag=nodupe)")
+   ap2.add_argument("--no-clone", action="store_true", help="do not use existing data on disk to satisfy dupe uploads; reduces server HDD reads in exchange for much more network load (volflag=noclone)")
    ap2.add_argument("--no-snap", action="store_true", help="disable snapshots -- forget unfinished uploads on shutdown; don't create .hist/up2k.snap files -- abandoned/interrupted uploads must be cleaned up manually")
    ap2.add_argument("--snap-wri", metavar="SEC", type=int, default=300, help="write upload state to ./hist/up2k.snap every \033[33mSEC\033[0m seconds; allows resuming incomplete uploads after a server crash")
    ap2.add_argument("--snap-drop", metavar="MIN", type=float, default=1440.0, help="forget unfinished uploads after \033[33mMIN\033[0m minutes; impossible to resume them after that (360=6h, 1440=24h)")
@@ -1088,6 +1087,7 @@ def add_auth(ap):
    ap2.add_argument("--ses-db", metavar="PATH", type=u, default=ses_db, help="where to store the sessions database (if you run multiple copyparty instances, make sure they use different DBs)")
    ap2.add_argument("--ses-len", metavar="CHARS", type=int, default=20, help="session key length; default is 120 bits ((20//4)*4*6)")
    ap2.add_argument("--no-ses", action="store_true", help="disable sessions; use plaintext passwords in cookies")
+   ap2.add_argument("--ipu", metavar="CIDR=USR", type=u, action="append", help="users with IP matching \033[33mCIDR\033[0m are auto-authenticated as username \033[33mUSR\033[0m; example: [\033[32m172.16.24.0/24=dave]")


def add_chpw(ap):
@@ -1256,7 +1256,7 @@ def add_safety(ap):
    ap2.add_argument("--no-dot-mv", action="store_true", help="disallow moving dotfiles; makes it impossible to move folders containing dotfiles")
    ap2.add_argument("--no-dot-ren", action="store_true", help="disallow renaming dotfiles; makes it impossible to turn something into a dotfile")
    ap2.add_argument("--no-logues", action="store_true", help="disable rendering .prologue/.epilogue.html into directory listings")
-   ap2.add_argument("--no-readme", action="store_true", help="disable rendering readme.md into directory listings")
+   ap2.add_argument("--no-readme", action="store_true", help="disable rendering readme/preadme.md into directory listings")
    ap2.add_argument("--vague-403", action="store_true", help="send 404 instead of 403 (security through ambiguity, very enterprise)")
    ap2.add_argument("--force-js", action="store_true", help="don't send folder listings as HTML, force clients to use the embedded json instead -- slight protection against misbehaving search engines which ignore \033[33m--no-robots\033[0m")
    ap2.add_argument("--no-robots", action="store_true", help="adds http and html headers asking search engines to not index anything (volflag=norobots)")
@@ -1454,7 +1454,7 @@ def add_ui(ap, retry):
    ap2.add_argument("--k304", metavar="NUM", type=int, default=0, help="configure the option to enable/disable k304 on the controlpanel (workaround for buggy reverse-proxies); [\033[32m0\033[0m] = hidden and default-off, [\033[32m1\033[0m] = visible and default-off, [\033[32m2\033[0m] = visible and default-on")
    ap2.add_argument("--md-sbf", metavar="FLAGS", type=u, default="downloads forms popups scripts top-navigation-by-user-activation", help="list of capabilities to ALLOW for README.md docs (volflag=md_sbf); see https://developer.mozilla.org/en-US/docs/Web/HTML/Element/iframe#attr-sandbox")
    ap2.add_argument("--lg-sbf", metavar="FLAGS", type=u, default="downloads forms popups scripts top-navigation-by-user-activation", help="list of capabilities to ALLOW for prologue/epilogue docs (volflag=lg_sbf)")
-   ap2.add_argument("--no-sb-md", action="store_true", help="don't sandbox README.md documents (volflags: no_sb_md | sb_md)")
+   ap2.add_argument("--no-sb-md", action="store_true", help="don't sandbox README/PREADME.md documents (volflags: no_sb_md | sb_md)")
    ap2.add_argument("--no-sb-lg", action="store_true", help="don't sandbox prologue/epilogue docs (volflags: no_sb_lg | sb_lg); enables non-js support")
@@ -1478,6 +1478,7 @@ def add_debug(ap):
    ap2.add_argument("--bak-flips", action="store_true", help="[up2k] if a client uploads a bitflipped/corrupted chunk, store a copy according to \033[33m--bf-nc\033[0m and \033[33m--bf-dir\033[0m")
    ap2.add_argument("--bf-nc", metavar="NUM", type=int, default=200, help="bak-flips: stop if there's more than \033[33mNUM\033[0m files at \033[33m--kf-dir\033[0m already; default: 6.3 GiB max (200*32M)")
    ap2.add_argument("--bf-dir", metavar="PATH", type=u, default="bf", help="bak-flips: store corrupted chunks at \033[33mPATH\033[0m; default: folder named 'bf' wherever copyparty was started")
+   ap2.add_argument("--bf-log", metavar="PATH", type=u, default="", help="bak-flips: log corruption info to a textfile at \033[33mPATH\033[0m")


# fmt: on
@@ -1,8 +1,8 @@
# coding: utf-8

-VERSION = (1, 15, 4)
+VERSION = (1, 15, 7)
CODENAME = "fill the drives"
-BUILD_DT = (2024, 10, 4)
+BUILD_DT = (2024, 10, 14)

S_VERSION = ".".join(map(str, VERSION))
S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)
@@ -924,7 +924,7 @@ class AuthSrv(object):

        for un, gn in un_gn:
            # if ap/vp has a user/group placeholder, make sure to keep
-           # track so the same user/gruop is mapped when setting perms;
+           # track so the same user/group is mapped when setting perms;
            # otherwise clear un/gn to indicate it's a regular volume

            src1 = src0.replace("${u}", un or "\n")
@@ -13,6 +13,7 @@ def vf_bmap() -> dict[str, str]:
        "dav_rt": "davrt",
        "ed": "dots",
        "hardlink_only": "hardlinkonly",
+       "no_clone": "noclone",
        "no_dirsz": "nodirsz",
        "no_dupe": "nodupe",
        "no_forget": "noforget",
@@ -135,7 +136,8 @@ flagcats = {
    "hardlink": "enable hardlink-based file deduplication,\nwith fallback on symlinks when that is impossible",
    "hardlinkonly": "dedup with hardlink only, never symlink;\nmake a full copy if hardlink is impossible",
    "safededup": "verify on-disk data before using it for dedup",
-   "nodupe": "rejects existing files (instead of symlinking them)",
+   "noclone": "take dupe data from clients, even if available on HDD",
+   "nodupe": "rejects existing files (instead of linking/cloning them)",
    "sparse": "force use of sparse files, mainly for s3-backed storage",
    "daw": "enable full WebDAV write support (dangerous);\nPUT-operations will now \033[1;31mOVERWRITE\033[0;35m existing files",
    "nosub": "forces all uploads into the top folder of the vfs",
@@ -76,6 +76,7 @@ class FtpAuth(DummyAuthorizer):
        else:
            raise AuthenticationFailed("banned")

+       args = self.hub.args
        asrv = self.hub.asrv
        uname = "*"
        if username != "anonymous":
@@ -86,6 +87,9 @@ class FtpAuth(DummyAuthorizer):
                    uname = zs
                    break

+       if args.ipu and uname == "*":
+           uname = args.ipu_iu[args.ipu_nm.map(ip)]
+
        if not uname or not (asrv.vfs.aread.get(uname) or asrv.vfs.awrite.get(uname)):
            g = self.hub.gpwd
            if g.lim:
@@ -127,6 +127,10 @@ _ = (argparse, threading)

NO_CACHE = {"Cache-Control": "no-cache"}

+LOGUES = [[0, ".prologue.html"], [1, ".epilogue.html"]]
+
+READMES = [[0, ["preadme.md", "PREADME.md"]], [1, ["readme.md", "README.md"]]]


class HttpCli(object):
    """
@@ -585,6 +589,9 @@ class HttpCli(object):
            or "*"
        )

+       if self.args.ipu and self.uname == "*":
+           self.uname = self.conn.ipu_iu[self.conn.ipu_nm.map(self.ip)]
+
        self.rvol = self.asrv.vfs.aread[self.uname]
        self.wvol = self.asrv.vfs.awrite[self.uname]
        self.avol = self.asrv.vfs.aadmin[self.uname]
@@ -2029,13 +2036,32 @@ class HttpCli(object):
        return True

    def bakflip(
-       self, f: typing.BinaryIO, ofs: int, sz: int, sha: str, flags: dict[str, Any]
+       self,
+       f: typing.BinaryIO,
+       ap: str,
+       ofs: int,
+       sz: int,
+       good_sha: str,
+       bad_sha: str,
+       flags: dict[str, Any],
    ) -> None:
+       now = time.time()
+       t = "bad-chunk: %.3f %s %s %d %s %s %s"
+       t = t % (now, bad_sha, good_sha, ofs, self.ip, self.uname, ap)
+       self.log(t, 5)
+
+       if self.args.bf_log:
+           try:
+               with open(self.args.bf_log, "ab+") as f2:
+                   f2.write((t + "\n").encode("utf-8", "replace"))
+           except Exception as ex:
+               self.log("append %s failed: %r" % (self.args.bf_log, ex))
+
        if not self.args.bak_flips or self.args.nw:
            return

        sdir = self.args.bf_dir
-       fp = os.path.join(sdir, sha)
+       fp = os.path.join(sdir, bad_sha)
        if bos.path.exists(fp):
            return self.log("no bakflip; have it", 6)
@@ -2156,11 +2182,17 @@ class HttpCli(object):
        except UnrecvEOF:
            raise Pebkac(422, "client disconnected while posting JSON")

-       self.log("decoding {} bytes of {} json".format(len(json_buf), enc))
        try:
            body = json.loads(json_buf.decode(enc, "replace"))
+           try:
+               zds = {k: v for k, v in body.items()}
+               zds["hash"] = "%d chunks" % (len(body["hash"]))
+           except:
+               zds = body
+           t = "POST len=%d type=%s ip=%s user=%s req=%r json=%s"
+           self.log(t % (len(json_buf), enc, self.ip, self.uname, self.req, zds))
        except:
-           raise Pebkac(422, "you POSTed invalid json")
+           raise Pebkac(422, "you POSTed %d bytes of invalid json" % (len(json_buf),))

        # self.reply(b"cloudflare", 503)
        # return True
@@ -2358,7 +2390,9 @@ class HttpCli(object):

        if sha_b64 != chash:
            try:
-               self.bakflip(f, cstart[0], post_sz, sha_b64, vfs.flags)
+               self.bakflip(
+                   f, path, cstart[0], post_sz, chash, sha_b64, vfs.flags
+               )
            except:
                self.log("bakflip failed: " + min_ex())
@@ -2415,13 +2449,13 @@ class HttpCli(object):
        finally:
            if locked:
                # now block until all chunks released+confirmed
-               x = broker.ask("up2k.confirm_chunks", ptop, wark, locked)
+               x = broker.ask("up2k.confirm_chunks", ptop, wark, written, locked)
                num_left, t = x.get()
                if num_left < 0:
                    self.loud_reply(t, status=500)
                    return False
                t = "got %d more chunks, %d left"
-               self.log(t % (len(locked), num_left), 6)
+               self.log(t % (len(written), num_left), 6)

        if num_left < 0:
            raise Pebkac(500, "unconfirmed; see serverlog")
@@ -3039,7 +3073,7 @@ class HttpCli(object):
            if ex.errno != errno.ENOENT:
                raise

-       # if file exists, chekc that timestamp matches the client's
+       # if file exists, check that timestamp matches the client's
        if srv_lastmod >= 0:
            same_lastmod = cli_lastmod3 in [-1, srv_lastmod3]
            if not same_lastmod:
@@ -3249,10 +3283,10 @@ class HttpCli(object):

    def _add_logues(
        self, vn: VFS, abspath: str, lnames: Optional[dict[str, str]]
-   ) -> tuple[list[str], str]:
+   ) -> tuple[list[str], list[str]]:
        logues = ["", ""]
        if not self.args.no_logues:
-           for n, fn in enumerate([".prologue.html", ".epilogue.html"]):
+           for n, fn in LOGUES:
                if lnames is not None and fn not in lnames:
                    continue
                fn = "%s/%s" % (abspath, fn)
@@ -3264,25 +3298,31 @@ class HttpCli(object):
                    logues[n], vn.flags.get("exp_lg") or []
                )

-       readme = ""
-       if not self.args.no_readme and not logues[1]:
-           if lnames is None:
-               fns = ["README.md", "readme.md"]
-           elif "readme.md" in lnames:
-               fns = [lnames["readme.md"]]
+       readmes = ["", ""]
+       for n, fns in [] if self.args.no_readme else READMES:
+           if logues[n]:
+               continue
+           elif lnames is None:
+               pass
+           elif fns[0] in lnames:
+               fns = [lnames[fns[0]]]
            else:
                fns = []

+           txt = ""
            for fn in fns:
                fn = "%s/%s" % (abspath, fn)
                if bos.path.isfile(fn):
                    with open(fsenc(fn), "rb") as f:
-                       readme = f.read().decode("utf-8")
+                       txt = f.read().decode("utf-8")
                    break
-           if readme and "exp" in vn.flags:
-               readme = self._expand(readme, vn.flags.get("exp_md") or [])
-
-       return logues, readme
+           if txt and "exp" in vn.flags:
+               txt = self._expand(txt, vn.flags.get("exp_md") or [])
+
+           readmes[n] = txt
+
+       return logues, readmes

    def _expand(self, txt: str, phs: list[str]) -> str:
        for ph in phs:
@@ -4783,7 +4823,7 @@ class HttpCli(object):

        fmt = fmt.format(len(nfmt.format(biggest)))
        retl = [
-           "# {}: {}".format(x, ls[x])
+           ("# %s: %s" % (x, ls[x])).replace(r"</span> // <span>", " // ")
            for x in ["acct", "perms", "srvinf"]
            if x in ls
        ]
@@ -5136,9 +5176,9 @@ class HttpCli(object):
            j2a["no_prism"] = True

        if not self.can_read and not is_dk:
-           logues, readme = self._add_logues(vn, abspath, None)
+           logues, readmes = self._add_logues(vn, abspath, None)
            ls_ret["logues"] = j2a["logues"] = logues
-           ls_ret["readme"] = cgv["readme"] = readme
+           ls_ret["readmes"] = cgv["readmes"] = readmes

        if is_ls:
            return self.tx_ls(ls_ret)
@@ -5395,11 +5435,18 @@ class HttpCli(object):
        else:
            taglist = list(tagset)

-       logues, readme = self._add_logues(vn, abspath, lnames)
+       logues, readmes = self._add_logues(vn, abspath, lnames)
        ls_ret["logues"] = j2a["logues"] = logues
-       ls_ret["readme"] = cgv["readme"] = readme
+       ls_ret["readmes"] = cgv["readmes"] = readmes

-       if not files and not dirs and not readme and not logues[0] and not logues[1]:
+       if (
+           not files
+           and not dirs
+           and not readmes[0]
+           and not readmes[1]
+           and not logues[0]
+           and not logues[1]
+       ):
            logues[1] = "this folder is empty"

        if "descript.ion" in lnames and os.path.isfile(
@@ -5444,7 +5491,11 @@ class HttpCli(object):
        if doc:
            j2a["docname"] = doc
            doctxt = None
-           if next((x for x in files if x["name"] == doc), None):
+           dfn = lnames.get(doc.lower())
+           if dfn and dfn != doc:
+               # found Foo but want FOO
+               dfn = next((x for x in files if x["name"] == doc), None)
+           if dfn:
                docpath = os.path.join(abspath, doc)
                sz = bos.path.getsize(docpath)
                if sz < 1024 * self.args.txt_max:
@@ -59,6 +59,8 @@ class HttpConn(object):
        self.asrv: AuthSrv = hsrv.asrv  # mypy404
        self.u2fh: Util.FHC = hsrv.u2fh  # mypy404
        self.pipes: Util.CachedDict = hsrv.pipes  # mypy404
+       self.ipu_iu: Optional[dict[str, str]] = hsrv.ipu_iu
+       self.ipu_nm: Optional[NetMap] = hsrv.ipu_nm
        self.ipa_nm: Optional[NetMap] = hsrv.ipa_nm
        self.xff_nm: Optional[NetMap] = hsrv.xff_nm
        self.xff_lan: NetMap = hsrv.xff_lan  # type: ignore
@@ -69,6 +69,7 @@ from .util import (
    build_netmap,
    has_resource,
    ipnorm,
+   load_ipu,
    load_resource,
    min_ex,
    shut_socket,
@@ -175,6 +176,11 @@ class HttpSrv(object):
        self.j2 = {x: env.get_template(x + ".html") for x in jn}
        self.prism = has_resource(self.E, "web/deps/prism.js.gz")

+       if self.args.ipu:
+           self.ipu_iu, self.ipu_nm = load_ipu(self.log, self.args.ipu)
+       else:
+           self.ipu_iu = self.ipu_nm = None
+
        self.ipa_nm = build_netmap(self.args.ipa)
        self.xff_nm = build_netmap(self.args.xff_src)
        self.xff_lan = build_netmap("lan")
@@ -473,7 +473,7 @@ class MTag(object):
            sv = str(zv).split("/")[0].strip().lstrip("0")
            ret[sk] = sv or 0

-       # normalize key notation to rkeobo
+       # normalize key notation to rekobo
        okey = ret.get("key")
        if okey:
            key = str(okey).replace(" ", "").replace("maj", "").replace("min", "m")
@@ -84,7 +84,7 @@ class SSDPr(object):
        name = self.args.doctitle
        zs = zs.strip().format(c(ubase), c(url), c(name), c(self.args.zsid))
        hc.reply(zs.encode("utf-8", "replace"))
-       return False  # close connectino
+       return False  # close connection


class SSDPd(MCast):
@@ -60,6 +60,7 @@ from .util import (
    alltrace,
    ansi_re,
    build_netmap,
+   load_ipu,
    min_ex,
    mp,
    odfusion,
@@ -221,6 +222,11 @@ class SvcHub(object):
            noch.update([x for x in zsl if x])
            args.chpw_no = noch

+       if args.ipu:
+           iu, nm = load_ipu(self.log, args.ipu)
+           setattr(args, "ipu_iu", iu)
+           setattr(args, "ipu_nm", nm)
+
        if not self.args.no_ses:
            self.setup_session_db()
@@ -95,7 +95,7 @@ class TcpSrv(object):
                continue

            # binding 0.0.0.0 after :: fails on dualstack
-           # but is necessary on non-dualstakc
+           # but is necessary on non-dualstack
            if successful_binds:
                continue
@@ -357,17 +357,18 @@ class Up2k(object):
            return '[{"timeout":1}]'

        ret: list[tuple[int, str, int, int, int]] = []
+       userset = set([(uname or "\n"), "*"])
        try:
            for ptop, tab2 in self.registry.items():
                cfg = self.flags.get(ptop, {}).get("u2abort", 1)
                if not cfg:
                    continue
                addr = (ip or "\n") if cfg in (1, 2) else ""
-               user = (uname or "\n") if cfg in (1, 3) else ""
+               user = userset if cfg in (1, 3) else None
                for job in tab2.values():
                    if (
                        "done" in job
-                       or (user and user != job["user"])
+                       or (user and job["user"] not in user)
                        or (addr and addr != job["addr"])
                    ):
                        continue
@@ -1015,6 +1016,7 @@ class Up2k(object):
            vpath = k

        _, flags = self._expr_idx_filter(flags)
        n4g = bool(flags.get("noforget"))

        ft = "\033[0;32m{}{:.0}"
        ff = "\033[0;35m{}{:.0}"
@@ -1068,20 +1070,39 @@ class Up2k(object):
        except:
            pass

+       if reg2 and "dwrk" not in reg2[next(iter(reg2))]:
+           for job in reg2.values():
+               job["dwrk"] = job["wark"]
+
        rm = []
        for k, job in reg2.items():
-           fp = djoin(job["ptop"], job["prel"], job["name"])
            job["ptop"] = ptop
+           if "done" in job:
+               job["need"] = job["hash"] = emptylist
+           else:
+               if "need" not in job:
+                   job["need"] = []
+               if "hash" not in job:
+                   job["hash"] = []
+
+           fp = djoin(ptop, job["prel"], job["name"])
            if bos.path.exists(fp):
                reg[k] = job
-               if "done" in job:
-                   job["need"] = job["hash"] = emptylist
-                   continue
                job["poke"] = time.time()
                job["busy"] = {}
            else:
                self.log("ign deleted file in snap: [{}]".format(fp))
                if not n4g:
                    rm.append(k)
                continue

        for x in rm:
            del reg2[x]

        if drp is None:
-           drp = [k for k, v in reg.items() if not v.get("need", [])]
+           drp = [k for k, v in reg.items() if not v["need"]]
        else:
            drp = [x for x in drp if x in reg]
@@ -1545,7 +1566,7 @@ class Up2k(object):
                at = 0

            # skip upload hooks by not providing vflags
-           self.db_add(db.c, {}, rd, fn, lmod, sz, "", "", wark, "", "", ip, at)
+           self.db_add(db.c, {}, rd, fn, lmod, sz, "", "", wark, wark, "", "", ip, at)
            db.n += 1
            tfa += 1
            td = time.time() - db.t
@@ -2779,9 +2800,10 @@ class Up2k(object):

        cj["name"] = sanitize_fn(cj["name"], "")
        cj["poke"] = now = self.db_act = self.vol_act[ptop] = time.time()
-       wark = self._get_wark(cj)
+       wark = dwark = self._get_wark(cj)
        job = None
        pdir = djoin(ptop, cj["prel"])
+       inc_ap = djoin(pdir, cj["name"])
        try:
            dev = bos.stat(pdir).st_dev
        except:
@@ -2796,6 +2818,7 @@ class Up2k(object):
        reg = self.registry[ptop]
        vfs = self.asrv.vfs.all_vols[cj["vtop"]]
        n4g = bool(vfs.flags.get("noforget"))
+       noclone = bool(vfs.flags.get("noclone"))
        rand = vfs.flags.get("rand") or cj.get("rand")
        lost: list[tuple["sqlite3.Cursor", str, str]] = []
@@ -2805,6 +2828,12 @@ class Up2k(object):
        vols = [(ptop, jcur)] if jcur else []
        if vfs.flags.get("xlink"):
            vols += [(k, v) for k, v in self.cur.items() if k != ptop]

+       if noclone:
+           wark = up2k_wark_from_metadata(
+               self.salt, cj["size"], cj["lmod"], cj["prel"], cj["name"]
+           )
+
        if vfs.flags.get("up_ts", "") == "fu" or not cj["lmod"]:
            # force upload time rather than last-modified
            cj["lmod"] = int(time.time())
@@ -2817,10 +2846,10 @@ class Up2k(object):

            if self.no_expr_idx:
                q = r"select * from up where w = ?"
-               argv = [wark]
+               argv = [dwark]
            else:
                q = r"select * from up where substr(w,1,16)=? and +w=?"
-               argv = [wark[:16], wark]
+               argv = [dwark[:16], dwark]

            c2 = cur.execute(q, tuple(argv))
            for _, dtime, dsize, dp_dir, dp_fn, ip, at in c2:
@@ -2828,6 +2857,9 @@ class Up2k(object):
                    dp_dir, dp_fn = s3dec(dp_dir, dp_fn)

                dp_abs = djoin(ptop, dp_dir, dp_fn)
+               if noclone and dp_abs != inc_ap:
+                   continue
+
                try:
                    st = bos.stat(dp_abs)
                    if stat.S_ISLNK(st.st_mode):
@@ -2836,7 +2868,7 @@ class Up2k(object):
                    if st.st_size != dsize:
                        t = "candidate ignored (db/fs desync): {}, size fs={} db={}, mtime fs={} db={}, file: {}"
                        t = t.format(
-                           wark, st.st_size, dsize, st.st_mtime, dtime, dp_abs
+                           dwark, st.st_size, dsize, st.st_mtime, dtime, dp_abs
                        )
                        self.log(t)
                        raise Exception()
@@ -2883,7 +2915,6 @@ class Up2k(object):
            alts.append((score, -len(alts), j, cur, dp_dir, dp_fn))

        job = None
-       inc_ap = djoin(cj["ptop"], cj["prel"], cj["name"])
        for dupe in sorted(alts, reverse=True):
            rj = dupe[2]
            orig_ap = djoin(rj["ptop"], rj["prel"], rj["name"])
@@ -2893,11 +2924,11 @@ class Up2k(object):
                    break
            else:
                self.log("asserting contents of %s" % (orig_ap,))
-               dhashes, st = self._hashlist_from_file(orig_ap)
-               dwark = up2k_wark_from_hashlist(self.salt, st.st_size, dhashes)
-               if wark != dwark:
+               hashes2, st = self._hashlist_from_file(orig_ap)
+               wark2 = up2k_wark_from_hashlist(self.salt, st.st_size, hashes2)
+               if dwark != wark2:
                    t = "will not dedup (fs index desync): fs=%s, db=%s, file: %s"
-                   self.log(t % (dwark, wark, orig_ap))
+                   self.log(t % (wark2, dwark, orig_ap))
                    lost.append(dupe[3:])
                    continue
            data_ok = True
@@ -2962,11 +2993,11 @@ class Up2k(object):

        elif inc_ap != orig_ap and not data_ok and "done" in reg[wark]:
            self.log("asserting contents of %s" % (orig_ap,))
-           dhashes, _ = self._hashlist_from_file(orig_ap)
-           dwark = up2k_wark_from_hashlist(self.salt, st.st_size, dhashes)
-           if wark != dwark:
+           hashes2, _ = self._hashlist_from_file(orig_ap)
+           wark2 = up2k_wark_from_hashlist(self.salt, st.st_size, hashes2)
+           if wark != wark2:
                t = "will not dedup (fs index desync): fs=%s, idx=%s, file: %s"
-               self.log(t % (dwark, wark, orig_ap))
+               self.log(t % (wark2, wark, orig_ap))
                del reg[wark]

        if job or wark in reg:
@@ -3023,6 +3054,7 @@ class Up2k(object):

            job = deepcopy(job)
            job["wark"] = wark
+           job["dwrk"] = dwark
            job["at"] = cj.get("at") or now
            zs = "vtop ptop prel name lmod host user addr poke"
            for k in zs.split():
@@ -3093,7 +3125,7 @@ class Up2k(object):
                raise

        if cur and not self.args.nw:
-           zs = "prel name lmod size ptop vtop wark host user addr at"
+           zs = "prel name lmod size ptop vtop wark dwrk host user addr at"
            a = [job[x] for x in zs.split()]
            self.db_add(cur, vfs.flags, *a)
            cur.connection.commit()
@@ -3123,6 +3155,7 @@ class Up2k(object):

            job = {
                "wark": wark,
+               "dwrk": dwark,
                "t0": now,
                "sprs": sprs,
                "hash": deepcopy(cj["hash"]),
@@ -3165,6 +3198,7 @@ class Up2k(object):
                "lmod": job["lmod"],
                "sprs": job.get("sprs", sprs),
                "hash": job["need"],
+               "dwrk": dwark,
                "wark": wark,
            }
@@ -3191,7 +3225,7 @@ class Up2k(object):
        ):
            sql = "update up set mt=? where substr(w,1,16)=? and +rd=? and +fn=?"
            try:
-               cur.execute(sql, (cj["lmod"], wark[:16], job["prel"], job["name"]))
+               cur.execute(sql, (cj["lmod"], dwark[:16], job["prel"], job["name"]))
                cur.connection.commit()

                ap = djoin(job["ptop"], job["prel"], job["name"])
@@ -3433,19 +3467,19 @@ class Up2k(object):
            self.mutex.release()
            return -1, ""
        try:
-           return self._confirm_chunks(ptop, wark, chashes)
+           return self._confirm_chunks(ptop, wark, chashes, chashes)
        finally:
            self.reg_mutex.release()
            self.mutex.release()

    def confirm_chunks(
-       self, ptop: str, wark: str, chashes: list[str]
+       self, ptop: str, wark: str, written: list[str], locked: list[str]
    ) -> tuple[int, str]:
        with self.mutex, self.reg_mutex:
-           return self._confirm_chunks(ptop, wark, chashes)
+           return self._confirm_chunks(ptop, wark, written, locked)

    def _confirm_chunks(
-       self, ptop: str, wark: str, chashes: list[str]
+       self, ptop: str, wark: str, written: list[str], locked: list[str]
    ) -> tuple[int, str]:
        if True:
            self.db_act = self.vol_act[ptop] = time.time()
@@ -3457,11 +3491,11 @@ class Up2k(object):
        except Exception as ex:
            return -2, "confirm_chunk, wark(%r)" % (ex,)  # type: ignore

-       for chash in chashes:
+       for chash in locked:
            job["busy"].pop(chash, None)

        try:
-           for chash in chashes:
+           for chash in written:
                job["need"].remove(chash)
        except Exception as ex:
            return -2, "confirm_chunk, chash(%s) %r" % (chash, ex)  # type: ignore
@@ -3513,7 +3547,7 @@ class Up2k(object):
        except:
            self.log("failed to utime ({}, {})".format(dst, times))

-       zs = "prel name lmod size ptop vtop wark host user addr"
+       zs = "prel name lmod size ptop vtop wark dwrk host user addr"
        z2 = [job[x] for x in zs.split()]
        wake_sr = False
        try:
@@ -3586,6 +3620,7 @@ class Up2k(object):
        ptop: str,
        vtop: str,
        wark: str,
+       dwark: str,
        host: str,
        usr: str,
        ip: str,
@@ -3608,6 +3643,7 @@ class Up2k(object):
                ptop,
                vtop,
                wark,
+               dwark,
                host,
                usr,
                ip,
@@ -3622,7 +3658,7 @@ class Up2k(object):
                raise

        if "e2t" in self.flags[ptop]:
-           self.tagq.put((ptop, wark, rd, fn, sz, ip, at))
+           self.tagq.put((ptop, dwark, rd, fn, sz, ip, at))
            self.n_tagq += 1

        return True
@@ -3650,6 +3686,7 @@ class Up2k(object):
        ptop: str,
        vtop: str,
        wark: str,
+       dwark: str,
        host: str,
        usr: str,
        ip: str,
@@ -3666,13 +3703,13 @@ class Up2k(object):
        db_ip = "1.1.1.1" if self.args.no_db_ip else ip

        sql = "insert into up values (?,?,?,?,?,?,?)"
-       v = (wark, int(ts), sz, rd, fn, db_ip, int(at or 0))
+       v = (dwark, int(ts), sz, rd, fn, db_ip, int(at or 0))
        try:
            db.execute(sql, v)
        except:
            assert self.mem_cur  # !rm
            rd, fn = s3enc(self.mem_cur, rd, fn)
-           v = (wark, int(ts), sz, rd, fn, db_ip, int(at or 0))
+           v = (dwark, int(ts), sz, rd, fn, db_ip, int(at or 0))
            db.execute(sql, v)

        self.volsize[db] += sz
@@ -3716,11 +3753,11 @@ class Up2k(object):
            for cd in cds:
                # one for each unique cooldown duration
                try:
-                   db.execute(q, (cd, wark[:16], rd, fn))
+                   db.execute(q, (cd, dwark[:16], rd, fn))
                except:
                    assert self.mem_cur  # !rm
                    rd, fn = s3enc(self.mem_cur, rd, fn)
-                   db.execute(q, (cd, wark[:16], rd, fn))
+                   db.execute(q, (cd, dwark[:16], rd, fn))

        if self.xiu_asleep:
            self.xiu_asleep = False
@@ -3812,10 +3849,12 @@ class Up2k(object):
        with self.mutex, self.reg_mutex:
            abrt_cfg = self.flags.get(ptop, {}).get("u2abort", 1)
            addr = (ip or "\n") if abrt_cfg in (1, 2) else ""
-           user = (uname or "\n") if abrt_cfg in (1, 3) else ""
+           user = ((uname or "\n"), "*") if abrt_cfg in (1, 3) else None
            reg = self.registry.get(ptop, {}) if abrt_cfg else {}
            for wark, job in reg.items():
-               if (user and user != job["user"]) or (addr and addr != job["addr"]):
+               if (addr and addr != job["addr"]) or (
+                   user and job["user"] not in user
+               ):
                    continue
                jrem = djoin(job["prel"], job["name"])
                if ANYWIN:
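The abort filter thus changes from exact username equality to membership in a (uname, "*") pair -- presumably so that uploads recorded under the catch-all user `*` can still be aborted by the person at the matching IP. The membership semantics in isolation:

user = ("amelia", "*")  # usernames whose jobs this caller may abort
for job_user in ("amelia", "*", "dave"):
    print(job_user, job_user in user)  # amelia True, * True, dave False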
@@ -4175,6 +4214,7 @@ class Up2k(object):
                dvn.realpath,
                dvn.vpath,
                w,
+               w,
                "",
                "",
                ip or "",
@@ -4857,6 +4897,7 @@ class Up2k(object):
                ptop,
                vtop,
                wark,
+               wark,
                "",
                usr,
                ip,
@@ -665,11 +665,15 @@ class HLog(logging.Handler):


class NetMap(object):
-   def __init__(self, ips: list[str], cidrs: list[str], keep_lo=False) -> None:
+   def __init__(
+       self, ips: list[str], cidrs: list[str], keep_lo=False, strict_cidr=False
+   ) -> None:
        """
        ips: list of plain ipv4/ipv6 IPs, not cidr
        cidrs: list of cidr-notation IPs (ip/prefix)
        """
+       self.mutex = threading.Lock()
+
        if "::" in ips:
            ips = [x for x in ips if x != "::"] + list(
                [x.split("/")[0] for x in cidrs if ":" in x]
@@ -696,7 +700,7 @@ class NetMap(object):
            bip = socket.inet_pton(fam, ip.split("/")[0])
            self.bip.append(bip)
            self.b2sip[bip] = ip.split("/")[0]
-           self.b2net[bip] = (IPv6Network if v6 else IPv4Network)(ip, False)
+           self.b2net[bip] = (IPv6Network if v6 else IPv4Network)(ip, strict_cidr)

        self.bip.sort(reverse=True)
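The second positional argument of IPv4Network/IPv6Network is strict; plumbing strict_cidr through means callers such as the --ipu validation below can reject CIDRs with host bits set instead of silently masking them. Standard-library behavior, for reference:

from ipaddress import IPv4Network

print(IPv4Network("192.168.0.1/16", False))  # 192.168.0.0/16, host bits masked
try:
    IPv4Network("192.168.0.1/16", True)
except ValueError as ex:
    print("rejected:", ex)  # "... has host bits set"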
@@ -707,8 +711,10 @@ class NetMap(object):
        try:
            return self.cache[ip]
        except:
            pass
+       with self.mutex:
+           return self._map(ip)

+   def _map(self, ip: str) -> str:
        v6 = ":" in ip
        ci = IPv6Address(ip) if v6 else IPv4Address(ip)
        bip = next((x for x in self.bip if ci in self.b2net[x]), None)
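The lookup keeps its lock-free fast path (a plain dict hit) and only takes the mutex on a cache miss; roughly this shape, with a placeholder _map standing in for the real longest-prefix match:

import threading

class CachedMap:
    def __init__(self):
        self.cache = {}
        self.mutex = threading.Lock()

    def map(self, ip):
        try:
            return self.cache[ip]  # hot path: no locking
        except KeyError:
            pass
        with self.mutex:  # cold path: serialize the real lookup
            ret = self.cache[ip] = self._map(ip)
            return ret

    def _map(self, ip):
        return ip  # stand-in for the actual network matching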
@@ -1163,7 +1169,7 @@ class Magician(object):
                return ret

        mime = magic.from_file(fpath, mime=True)
-       mime = re.split("[; ]", mime, 1)[0]
+       mime = re.split("[; ]", mime, maxsplit=1)[0]
        try:
            return EXTS[mime]
        except:
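Passing maxsplit positionally to re.split raises a DeprecationWarning as of Python 3.13, hence the keyword form; the behavior is unchanged:

import re

mime = "text/html; charset=utf-8"
print(re.split("[; ]", mime, maxsplit=1)[0])  # text/html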
@@ -2678,6 +2684,31 @@ def build_netmap(csv: str):
    return NetMap(ips, cidrs, True)


+def load_ipu(log: "RootLogger", ipus: list[str]) -> tuple[dict[str, str], NetMap]:
+   ip_u = {"": "*"}
+   cidr_u = {}
+   for ipu in ipus:
+       try:
+           cidr, uname = ipu.split("=")
+           cip, csz = cidr.split("/")
+       except:
+           t = "\n invalid value %r for argument --ipu; must be CIDR=UNAME (192.168.0.0/16=amelia)"
+           raise Exception(t % (ipu,))
+       uname2 = cidr_u.get(cidr)
+       if uname2 is not None:
+           t = "\n invalid value %r for argument --ipu; cidr %s already mapped to %r"
+           raise Exception(t % (ipu, cidr, uname2))
+       cidr_u[cidr] = uname
+       ip_u[cip] = uname
+   try:
+       nm = NetMap(["::"], list(cidr_u.keys()), True, True)
+   except Exception as ex:
+       t = "failed to translate --ipu into netmap, probably due to invalid config: %r"
+       log("root", t % (ex,), 1)
+       raise
+   return ip_u, nm
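This is the whole implementation behind the new ip-auth feature: each --ipu CIDR=UNAME pair feeds both a strict NetMap (for matching) and a dict keyed on the cidr's base IP (for the username). A hypothetical usage sketch, assuming NetMap.map returns the base IP of the matching CIDR and an empty string on no match (hence the "": "*" default):

ip_u, nm = load_ipu(lambda src, msg, c=0: print(msg),
                    ["192.168.0.0/16=amelia", "10.8.0.0/24=dave"])

print(ip_u.get(nm.map("192.168.1.9"), "*"))  # amelia
print(ip_u.get(nm.map("1.2.3.4"), "*"))      # * (unmapped; stays anonymous)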


def yieldfile(fn: str, bufsz: int) -> Generator[bytes, None, None]:
    readsz = min(bufsz, 128 * 1024)
    with open(fsenc(fn), "rb", bufsz) as f:
@@ -3622,10 +3653,10 @@ def stat_resource(E: EnvParams, name: str):
    return None


-def _find_impresource(E: EnvParams, name: str):
+def _find_impresource(pkg: types.ModuleType, name: str):
    assert impresources  # !rm
    try:
-       files = impresources.files(E.pkg)
+       files = impresources.files(pkg)
    except ImportError:
        return None
@@ -3635,7 +3666,7 @@ def _find_impresource(E: EnvParams, name: str):
_rescache_has = {}


-def _has_resource(E: EnvParams, name: str):
+def _has_resource(name: str):
    try:
        return _rescache_has[name]
    except:
@@ -3644,14 +3675,17 @@ def _has_resource(E: EnvParams, name: str):
    if len(_rescache_has) > 999:
        _rescache_has.clear()

+   assert __package__  # !rm
+   pkg = sys.modules[__package__]
+
    if impresources:
-       res = _find_impresource(E, name)
+       res = _find_impresource(pkg, name)
        if res and res.is_file():
            _rescache_has[name] = True
            return True

    if pkg_resources:
-       if _pkg_resource_exists(E.pkg.__name__, name):
+       if _pkg_resource_exists(pkg.__name__, name):
            _rescache_has[name] = True
            return True
@@ -3660,14 +3694,15 @@ def _has_resource(E: EnvParams, name: str):


def has_resource(E: EnvParams, name: str):
-   return _has_resource(E, name) or os.path.exists(os.path.join(E.mod, name))
+   return _has_resource(name) or os.path.exists(os.path.join(E.mod, name))


def load_resource(E: EnvParams, name: str, mode="rb") -> IO[bytes]:
    enc = None if "b" in mode else "utf-8"

    if impresources:
-       res = _find_impresource(E, name)
+       assert __package__  # !rm
+       res = _find_impresource(sys.modules[__package__], name)
        if res and res.is_file():
            if enc:
                return res.open(mode, encoding=enc)
@@ -3676,8 +3711,10 @@ def load_resource(E: EnvParams, name: str, mode="rb") -> IO[bytes]:
            return res.open(mode)

    if pkg_resources:
-       if _pkg_resource_exists(E.pkg.__name__, name):
-           stream = pkg_resources.resource_stream(E.pkg.__name__, name)
+       assert __package__  # !rm
+       pkg = sys.modules[__package__]
+       if _pkg_resource_exists(pkg.__name__, name):
+           stream = pkg_resources.resource_stream(pkg.__name__, name)
            if enc:
                stream = codecs.getreader(enc)(stream)
            return stream
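All of these changes make the resource helpers resolve the package from sys.modules[__package__] instead of carrying it around in EnvParams. The lookup itself is the standard importlib.resources pattern, which works the same whether the package lives on disk or inside a pyz/zip; a condensed standalone version:

import sys
from importlib import resources as impresources

def find_res(pkgname, name):
    # resolve a data file bundled inside an already-imported package;
    # the returned Traversable also works for zip/pyz-packaged modules
    try:
        files = impresources.files(sys.modules[pkgname])
    except ImportError:
        return None
    res = files.joinpath(name)
    return res if res.is_file() else None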
@@ -896,7 +896,7 @@ html.y #path a:hover {
    max-width: 52em;
}
.mdo.sb,
-#epi.logue.mdo>iframe {
+.logue.mdo>iframe {
    max-width: 54em;
}
.mdo,
@@ -22,7 +22,7 @@ var Ls = {
        "vq": "video quality / bitrate",
        "pixfmt": "subsampling / pixel structure",
        "resw": "horizontal resolution",
-       "resh": "veritcal resolution",
+       "resh": "vertical resolution",
        "chs": "audio channels",
        "hz": "sample rate"
    },
@@ -707,7 +707,7 @@ var Ls = {
        "wt_selinv": "inverter utvalg",
        "wt_selzip": "last ned de valgte filene som et arkiv",
        "wt_seldl": "last ned de valgte filene$NSnarvei: Y",
-       "wt_npirc": "kopiér sang-info (irc-formattert)",
+       "wt_npirc": "kopiér sang-info (irc-formatert)",
        "wt_nptxt": "kopiér sang-info",
        "wt_grid": "bytt mellom ikoner og listevisning$NSnarvei: G",
        "wt_prev": "forrige sang$NSnarvei: J",
@@ -886,7 +886,7 @@ var Ls = {

        "f_dls": 'linkene i denne mappen er nå\nomgjort til nedlastningsknapper',

-       "f_partial": "For å laste ned en fil som enda ikke er ferdig opplastet, klikk på filen som har samme filnavn som denne, men uten <code>.PARTIAL</code> på slutten. Da vil serveren passe på at nedlastning går bra. Derfor anbefales det sterkt å trykke ABRYT eller Escape-tasten.\n\nHvis du virkelig ønsker å laste ned denne <code>.PARTIAL</code>-filen på en ukontrollert måte, trykk OK / Enter for å ignorere denne advarselen. Slik vil du høyst sannsynlig motta korrupt data.",
+       "f_partial": "For å laste ned en fil som enda ikke er ferdig opplastet, klikk på filen som har samme filnavn som denne, men uten <code>.PARTIAL</code> på slutten. Da vil serveren passe på at nedlastning går bra. Derfor anbefales det sterkt å trykke AVBRYT eller Escape-tasten.\n\nHvis du virkelig ønsker å laste ned denne <code>.PARTIAL</code>-filen på en ukontrollert måte, trykk OK / Enter for å ignorere denne advarselen. Slik vil du høyst sannsynlig motta korrupt data.",

        "ft_paste": "Lim inn {0} filer$NSnarvei: ctrl-V",
        "fr_eperm": 'kan ikke endre navn:\ndu har ikke “move”-rettigheten i denne mappen',
@@ -7345,6 +7345,9 @@ var treectl = (function () {

        var lg0 = res.logues ? res.logues[0] || "" : "",
            lg1 = res.logues ? res.logues[1] || "" : "",
+           mds = res.readmes && treectl.ireadme,
+           md0 = mds ? res.readmes[0] || "" : "",
+           md1 = mds ? res.readmes[1] || "" : "",
            dirchg = get_evpath() != cdir;

        if (lg1 === Ls.eng.f_empty)
@@ -7354,9 +7357,14 @@ var treectl = (function () {
        if (dirchg)
            sandbox(ebi('epi'), sb_lg, '', lg1);

+       clmod(ebi('pro'), 'mdo');
        clmod(ebi('epi'), 'mdo');
-       if (res.readme && treectl.ireadme)
-           show_readme(res.readme);
+       if (md0)
+           show_readme(md0, 0);
+
+       if (md1)
+           show_readme(md1, 1);
        else if (!dirchg)
            sandbox(ebi('epi'), sb_lg, '', lg1);
@@ -8877,14 +8885,17 @@ function set_tabindex() {
}


-function show_readme(md) {
-   if (!treectl.ireadme)
-       return sandbox(ebi('epi'), '', '', 'a');
+function show_readme(md, n) {
+   var tgt = ebi(n ? 'epi' : 'pro');

-   show_md(md, 'README.md', ebi('epi'));
+   if (!treectl.ireadme)
+       return sandbox(tgt, '', '', 'a');
+
+   show_md(md, n ? 'README.md' : 'PREADME.md', tgt);
}
-if (readme)
-   show_readme(readme);
+for (var a = 0; a < readmes.length; a++)
+   if (readmes[a])
+       show_readme(readmes[a], a);


function sandbox(tgt, rules, cls, html) {
@@ -1,3 +1,80 @@
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1011-2256 `v1.15.6` preadme

## 🧪 new features

* #105 files named `preadme.md` appear at the top of directory listings 1d68acf8
* entirely disable dedup with `--no-clone` / volflag `noclone` 3d7facd7 6b7ebdb7
  * even if a file exists for sure on the server HDD, let the client continue uploading instead of reusing the existing data
  * this option almost never makes sense, unless you're using something like S3 Glacier storage where reads are really expensive but writes are cheap

## 🩹 bugfixes

* up2k jank after detecting a bitflip or network glitch 4a4ec88d
  * instead of resuming the interrupted upload like it should, the upload client could get stuck or start over
* #104 support viewing dotfile documents when dotfiles are hidden 9ccd8bb3
* fix a buttload of typos 6adc778d 1e7697b5



▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1005-1803 `v1.15.5` pyz all the cores

## 🩹 bugfixes

* the pkgres / pyz changes in 1.15.4 broke multiprocessing c3985537

## 🔧 other changes

* pyz: drop easymde to save some bytes + make it a tiny bit faster



▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1004-2319 `v1.15.4` hermetic

## 🧪 new features

* [u2c](https://github.com/9001/copyparty/tree/hovudstraum/bin#u2cpy) (commandline uploader):
  * remove all dependencies; now entirely self-contained 9daeed92
  * made it 3x faster for small files, 2x faster in general
  * improve `-x` behavior to not traverse into excluded folders b9c5c7bb
* [partyfuse](https://github.com/9001/copyparty/tree/hovudstraum/bin#partyfusepy) (fuse client; mount a copyparty server as a local filesystem):
  * 9x faster directory listings 03f0f994
  * 4x faster downloads on high-latency connections 847a2bdc
  * embed `fuse.py` (its only dependency) -- can be downloaded from the connect-page 44f2b63e
  * support mounting nginx and iis servers too, not just copyparty c81e8984
* reduce ram usage down to 10% when running without `-e2d` 88a1c5ca
  * does not affect servers with `-e2d` enabled (was already optimal)
* share folders as qr-codes e4542064
  * when creating a share, you get a qr-code for quick access
  * buttons in the shares controlpanel to reshow it, optionally with the password embedded into the qr-code
* #98 read embedded webdeps and templates with `pkg_resources`; thx @shizmob! a462a644 d866841c
  * [copyparty.pyz](https://github.com/9001/copyparty/releases/latest/download/copyparty.pyz) now runs straight from the source file without unpacking anything to disk
  * ...and is now much slower at returning resource GETs, but that is fine
* og / opengraph / discord embeds: support filekeys ae982006
* add option for natural sorting; thx @oshiteku! 9804f25d
* eyecandy timer bar on toasts 0dfe1d5b
* smb-server: impacket 0.12 is out! dc4d0d8e
  * now *possible* to list folders with more than 400 files (it's REALLY slow)

## 🩹 bugfixes

* webdav:
  * support `<allprop/>` in propfind dc157fa2
  * list volumes when root is unmapped 480ac254
    * previously, clients couldn't connect to the root of a copyparty server unless a volume existed at `/`
* #101 show `.prologue.html` and `.epilogue.html` in directory listings even if user cannot see hidden files 21be82ef
* #100 confusing toast when pressing F2 without selecting anything 2715ee6c
* fix prometheus metrics 678675a9

## 🔧 other changes

* #100 allow uploading `.prologue.html` and `.epilogue.html` 19a5985f
* #102 make translation easier when running in docker



▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0916-0107 `v1.15.3` incoming eta

|
||||
python3 -m venv .venv
|
||||
. .venv/bin/activate
|
||||
pip install jinja2 strip_hints # MANDATORY
|
||||
pip install argon2-cffi # password hashing
|
||||
pip install mutagen # audio metadata
|
||||
pip install pyftpdlib # ftp server
|
||||
pip install partftpy # tftp server
|
||||
|
||||
@@ -257,6 +257,8 @@ for d in /usr /var; do find $d -type f -size +30M 2>/dev/null; done | while IFS=
|
||||
|
||||
# up2k worst-case testfiles: create 64 GiB (256 x 256 MiB) of sparse files; each file takes 1 MiB disk space; each 1 MiB chunk is globally unique
|
||||
for f in {0..255}; do echo $f; truncate -s 256M $f; b1=$(printf '%02x' $f); for o in {0..255}; do b2=$(printf '%02x' $o); printf "\x$b1\x$b2" | dd of=$f bs=2 seek=$((o*1024*1024)) conv=notrunc 2>/dev/null; done; done
|
||||
# create 6.06G file with 16 bytes of unique data at start+end of each 32M chunk
|
||||
sz=6509559808; truncate -s $sz f; csz=33554432; sz=$((sz/16)); step=$((csz/16)); ofs=0; while [ $ofs -lt $sz ]; do dd if=/dev/urandom of=f bs=16 count=2 seek=$ofs conv=notrunc iflag=fullblock; [ $ofs = 0 ] && ofs=$((ofs+step-1)) || ofs=$((ofs+step)); done
|
||||
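For reference, the same worst-case idea as a rough python equivalent (not part of the repo): a sparse file where each 1 MiB chunk carries a unique 2-byte tag, so every up2k chunk hashes differently while costing almost no disk space:

def make_testfile(path, fileno, nchunks=256):
    with open(path, "wb") as f:
        f.truncate(nchunks * 1024 * 1024)  # sparse; allocates no data blocks
        for o in range(nchunks):
            f.seek(o * 1024 * 1024)
            f.write(bytes([fileno, o]))  # 2 bytes of real data per 1 MiB chunk

make_testfile("0", 0)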
|
||||
# py2 on osx
|
||||
brew install python@2
|
||||
|
||||
@@ -71,7 +71,7 @@ def cnv(src):
|
||||
|
||||
def main():
|
||||
src = readclip()
|
||||
src = re.split("0{100,200}", src[::-1], 1)[1][::-1]
|
||||
src = re.split("0{100,200}", src[::-1], maxsplit=1)[1][::-1]
|
||||
with open("helptext.html", "wb") as fo:
|
||||
for ln in cnv(iter(src.split("\n")[:-3])):
|
||||
fo.write(ln.encode("utf-8") + b"\r\n")
|
||||
|
||||
@@ -11,6 +11,15 @@ gtar=$(command -v gtar || command -v gnutar) || true
|
||||
realpath() { grealpath "$@"; }
|
||||
}
|
||||
|
||||
tmv() {
|
||||
touch -r "$1" t
|
||||
mv t "$1"
|
||||
}
|
||||
ised() {
|
||||
sed -r "$1" <"$2" >t
|
||||
tmv "$2"
|
||||
}
|
||||
|
||||
targs=(--owner=1000 --group=1000)
|
||||
[ "$OSTYPE" = msys ] &&
|
||||
targs=()
|
||||
@@ -35,6 +44,12 @@ cd pyz
|
||||
cp -pR ../sfx/{copyparty,partftpy} .
|
||||
cp -pR ../sfx/{ftp,j2}/* .
|
||||
|
||||
true && {
|
||||
rm -rf copyparty/web/mde.* copyparty/web/deps/easymde*
|
||||
echo h > copyparty/web/mde.html
|
||||
ised '/edit2">edit \(fancy/d' copyparty/web/md.html
|
||||
}
|
||||
|
||||
ts=$(date -u +%s)
|
||||
hts=$(date -u +%Y-%m%d-%H%M%S)
|
||||
ver="$(cat ../sfx/ver)"
|
||||
|
||||
@@ -98,6 +98,7 @@ def tc1(vflags):
|
||||
|
||||
args = [
|
||||
"-q",
|
||||
"-j0",
|
||||
"-p4321",
|
||||
"-e2dsa",
|
||||
"-e2tsr",
|
||||
|
||||
@@ -103,7 +103,7 @@ var tl_browser = {
|
||||
"vq": "video quality / bitrate",
|
||||
"pixfmt": "subsampling / pixel structure",
|
||||
"resw": "horizontal resolution",
|
||||
"resh": "veritcal resolution",
|
||||
"resh": "vertical resolution",
|
||||
"chs": "audio channels",
|
||||
"hz": "sample rate"
|
||||
},
|
||||
|
||||
@@ -122,13 +122,13 @@ class Cfg(Namespace):
|
||||
def __init__(self, a=None, v=None, c=None, **ka0):
|
||||
ka = {}
|
||||
|
||||
ex = "chpw daw dav_auth dav_inf dav_mac dav_rt e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp early_ban ed emp exp force_js getmod grid gsel hardlink ih ihead magic hardlink_only nid nih no_acode no_athumb no_dav no_db_ip no_del no_dirsz no_dupe no_lifetime no_logues no_mv no_pipe no_poll no_readme no_robots no_sb_md no_sb_lg no_scandir no_tarcmp no_thumb no_vthumb no_zip nrand nw og og_no_head og_s_title q rand re_dirsz smb srch_dbg stats uqe vague_403 vc ver write_uplog xdev xlink xvol zs"
|
||||
ex = "chpw daw dav_auth dav_inf dav_mac dav_rt e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp early_ban ed emp exp force_js getmod grid gsel hardlink ih ihead magic hardlink_only nid nih no_acode no_athumb no_clone no_dav no_db_ip no_del no_dirsz no_dupe no_lifetime no_logues no_mv no_pipe no_poll no_readme no_robots no_sb_md no_sb_lg no_scandir no_tarcmp no_thumb no_vthumb no_zip nrand nw og og_no_head og_s_title q rand re_dirsz smb srch_dbg stats uqe vague_403 vc ver write_uplog xdev xlink xvol zs"
|
||||
ka.update(**{k: False for k in ex.split()})
|
||||
|
||||
ex = "dedup dotpart dotsrch hook_v no_dhash no_fastboot no_fpool no_htp no_rescan no_sendfile no_ses no_snap no_up_list no_voldump re_dhash plain_ip"
|
||||
ka.update(**{k: True for k in ex.split()})
|
||||
|
||||
ex = "ah_cli ah_gen css_browser hist js_browser js_other mime mimes no_forget no_hash no_idx nonsus_urls og_tpl og_ua"
|
||||
ex = "ah_cli ah_gen css_browser hist ipu js_browser js_other mime mimes no_forget no_hash no_idx nonsus_urls og_tpl og_ua"
|
||||
ka.update(**{k: None for k in ex.split()})
|
||||
|
||||
ex = "hash_mt safe_dedup srch_time u2abort u2j u2sz"
|
||||
|
||||
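The test harness fills its config defaults in bulk from these space-separated flag lists, which is why the new no_clone and ipu options each need a one-word addition here. A condensed version of the same pattern, using just a few of the flags:

from argparse import Namespace

ka = {}
ka.update({k: False for k in "no_clone no_dupe nw".split()})  # off-switches
ka.update({k: True for k in "dedup dotsrch".split()})         # on-switches
ka.update({k: None for k in "ipu og_tpl".split()})            # unset options
cfg = Namespace(**ka)
print(cfg.no_clone, cfg.dedup, cfg.ipu)  # False True None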