Compare commits

...

71 Commits

Author SHA1 Message Date
ed
ce3cab0295 v1.16.0 2024-11-10 19:32:37 +00:00
ed
c784e5285e u2c: adaptive connection:keepalive expiration 2024-11-10 17:43:40 +00:00
ed
2bf9055cae detect free RAM on startup for sane defaults
* if free ram on startup is less than 2 GiB,
   use smaller chunks for parallel file hashing

* if --th-max-ram is lower than 0.25 (256 MiB),
   print a warning that thumbnails will not work

* make the thumbnail cleaner immediately do a sweep on startup,
   forgetting any failed conversions so they can be retried
   in case the memory limit was increased since the last run
2024-11-10 15:43:19 +00:00
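a minimal sketch of that startup probe (not copyparty's actual code; the helper name, the /proc/meminfo approach, and the chunk sizes are illustrative, and Linux is assumed):

```python
import sys

def read_free_ram_bytes():
    # illustrative: parse MemAvailable from /proc/meminfo (Linux only);
    # other platforms would need their own probe
    try:
        with open("/proc/meminfo") as f:
            for ln in f:
                if ln.startswith("MemAvailable:"):
                    return int(ln.split()[1]) * 1024  # value is in KiB
    except OSError:
        pass
    return None  # unknown; keep the regular defaults

free = read_free_ram_bytes()
if free is not None and free < 2 * 1024 * 1024 * 1024:
    hash_chunk_sz = 4 * 1024 * 1024   # smaller chunks for parallel hashing
else:
    hash_chunk_sz = 32 * 1024 * 1024  # regular default

th_max_ram = 0.2  # pretend value of --th-max-ram, in GiB
if th_max_ram < 0.25:
    print("WARNING: thumbnails will not work (--th-max-ram < 0.25)", file=sys.stderr)
```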
ed
8aba5aed4f list active downloads in controlpanel 2024-11-10 02:12:18 +00:00
ed
0ce7cf5e10 update comparison / versus.md 2024-11-09 14:44:03 +00:00
ed
96edcbccd7 https://ocv.me/stuff/goed-gedaan.jpg 2024-11-08 22:11:33 +00:00
ed
4603afb6de don't consume ctrl-shift-c (devtools inspector) 2024-11-08 21:51:54 +00:00
ed
56317b00af filecopy: ui for resolving name conflicts 2024-11-08 02:12:28 +00:00
ed
cacec9c1f3 support copying files/folders; closes #115
behaves according to the target volume's deduplication config;
will create symlinks / hardlinks instead if dedup is enabled
2024-11-07 21:41:53 +00:00
ed
44ee07f0b2 IdP: async reload; closes #114
whenever a new idp user is registered, up2k will continuously
reload in the background until all users have been processed

just like before, this blocks up2k uploads from each user
until said user makes it into a reload, but as of now,
reloads will batch and execute without interrupting read-access

needs further testing before next release,
probably some rough edges to sand down
2024-11-04 22:31:48 +00:00
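roughly the shape of such a batched background reload, as a sketch -- all names here (`pending_users`, `reload_config`, ...) are made up, not the actual up2k internals:

```python
import threading

def reload_config(users):
    # stand-in for the real config reload (which also rescans volumes)
    print("reloading; now including:", ", ".join(sorted(users)))

pending_users = set()   # IdP users waiting to be included in a reload
lock = threading.Lock()
wake = threading.Event()

def on_new_idp_user(uname):
    # uploads from this user stay blocked until a reload has picked it up
    with lock:
        pending_users.add(uname)
    wake.set()

def reload_worker():
    while True:
        wake.wait()
        with lock:
            batch = set(pending_users)
            wake.clear()
        reload_config(batch)  # read-access continues while this runs
        with lock:
            pending_users.difference_update(batch)
            if pending_users:
                wake.set()  # users that arrived mid-reload get the next one

threading.Thread(target=reload_worker, daemon=True).start()
on_new_idp_user("alice")
```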
ed
6a8d5e1731 ui: batch-rename: remember last regex + format 2024-11-02 18:06:39 +00:00
ed
d9962f65b3 ui: folder loading indicator stole focus
show a spinning halfcircle around the +/- instead of
moving the focus to the selected folder in the sidebar,
since that could mess with keyboard scrolling
2024-11-02 17:58:30 +00:00
ed
119e88d87b bubble OS-filesystem errors to client
send a 500 or 404 if a folder is inaccessible or does not exist

previously it would return an empty directory listing instead
2024-11-02 17:38:17 +00:00
ed
71d9e010d9 ui: make hotkey-help less eager to show itself
would appear when typing `?` into textboxes
2024-10-30 19:40:48 +00:00
ed
5718caa957 ui: url-options to set grid/thumbs on/off 2024-10-30 19:24:00 +00:00
ed
efd8a32ed6 ui: show switch-to-https on 403s too 2024-10-28 03:38:15 +00:00
ed
b22d700e16 update pkgs to 1.15.10 2024-10-27 09:27:38 +00:00
ed
ccdacea0c4 v1.15.10 2024-10-27 07:51:11 +00:00
ed
4bdcbc1cb5 shares: allow upload, unpost
* files can be uploaded into writeable shares

* add "write-only" button to the create-share ui

* unpost is possible while viewing the relevant share
2024-10-26 21:36:07 +00:00
ed
833c6cf2ec partyfuse: bump dircache size
dircache size should exceed the max dir depth, because the OS
may periodically listdir all parents of the current folder
2024-10-26 18:25:21 +00:00
ed
dd6dbdd90a http 304: client-option to force-disable cache
an extremely brutish workaround for issues such as #110 where
browsers receive an HTTP 304 and misinterpret it as HTTP 200

option `--no304=1` adds the button `no304` to the controlpanel
which can be enabled to force-disable caching in that browser

the button is default-disabled; by specifying `--no304=2`
instead of `--no304=1` the button becomes default-enabled

can also always be enabled by accessing `/?setck=no304=y`
2024-10-26 17:56:54 +00:00
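in terms of response headers, force-disabling the cache amounts to something like the following sketch; the exact header values here are assumptions, only the `no304` cookie name comes from the commit above:

```python
def cache_headers(no304_cookie_set):
    if no304_cookie_set:
        # never reply 304; tell the browser to not cache at all
        return {"Cache-Control": "no-store, max-age=0"}
    # normal behavior: allow conditional requests / 304 replies
    return {"Cache-Control": "max-age=0, must-revalidate"}
```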
ed
63013cc565 http 304: k304 obsoleted for ie11 by Vary
the Vary header killed caching in all versions of internet explorer,
so there's no point conditionally enabling k304 for trident anymore
2024-10-25 22:32:58 +00:00
ed
912402364a http 304: strip Content-Length and Content-Type
these response headers are usually not included in 304 replies,
and their presence is suspected to confuse some clients (#110)

also strip `out_headerlist` (primarily cookie assignments)
2024-10-25 22:24:40 +00:00
ed
159f51b12b http 304: if-range, backdating
add support for the `If-Range` header, which is generally used to
prevent resuming a partial download after the source file on the
server has been modified, by returning HTTP 200 instead of a 206

also simplifies `If-Modified-Since` and `If-Range` handling;
previously this was a spec-compliant timestamp comparison,
now it's a basic string comparison instead. The server will now
reply 200 even when the server mtime is older than the client's.
This is technically not according to spec, but should be safer,
as it allows backdating timestamps without purging the client cache
2024-10-25 22:05:59 +00:00
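as a sketch, the simplified handling boils down to exact string equality on the Last-Modified value (illustrative code, not the actual request handler):

```python
def pick_status(req_headers, file_mtime_httpdate, want_range):
    # exact string match, not a temporal comparison: any mismatch
    # (newer OR older server mtime) invalidates the client's copy
    ims = req_headers.get("If-Modified-Since")
    if ims and not want_range:
        return 304 if ims == file_mtime_httpdate else 200
    ifr = req_headers.get("If-Range")
    if ifr and want_range:
        # matching mtime: resume with 206; otherwise restart with 200
        return 206 if ifr == file_mtime_httpdate else 200
    return 206 if want_range else 200
```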
ed
7678a91b0e debug: --ohead (log response headers) 2024-10-25 20:00:19 +00:00
ed
b13899c63d make --u2sz more intuitive
previously, it only accepted the 3-tuple `min,default,max`

if given a single integer (or any other unexpected value),
the up2k js would enter an infinite loop, eat all the ram
and crash the browser (nice)

fix this by accepting a single integer (for example 96)
and translating it to `1,96,96`
2024-10-22 21:37:51 +00:00
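the translation rule itself is tiny; a sketch in python (the real option is parsed by the server and the up2k js, so the function name here is illustrative):

```python
def parse_u2sz(val, default="1,64,96"):
    try:
        nums = [int(p.strip()) for p in str(val).split(",")]
    except ValueError:
        nums = []
    if len(nums) == 3:
        return nums          # already min,default,max
    if len(nums) == 1:
        n = nums[0]
        return [1, n, n]     # "96" becomes 1,96,96
    return [int(x) for x in default.split(",")]  # anything else: defaults

assert parse_u2sz("96") == [1, 96, 96]
assert parse_u2sz("2,48,96") == [2, 48, 96]
```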
ed
3a0d882c5e fix NetMap -j0 compat
would crash on startup if `-j0` was
combined with `--ipa` or `--ipu`
2024-10-22 20:53:19 +00:00
ed
cb81f0ad6d readme: add nintendo 3ds to supported browsers 2024-10-21 00:06:13 +00:00
ed
518bacf628 add pingvin-share to comparison 2024-10-20 01:18:07 +00:00
ed
ca63b03e55 update pkgs to 1.15.9 2024-10-18 23:54:46 +00:00
ed
cecef88d6b v1.15.9 2024-10-18 23:42:20 +00:00
ed
7ffd805a03 add RSS feed output; closes #109 2024-10-18 23:24:12 +00:00
ed
a7e2a0c981 up2k: fix chinese-specific js crash; closes #108
the client-side ETA, included as metadata in POSTs,
would crash the js with the initial "Starting..." text
2024-10-18 19:04:22 +00:00
ed
2a570bb4ca fix --df for webdav; closes #107
PUT uploads, as used by webdav, would stat the absolute
path of the file to be created, which would throw ENOENT

strip components until the path is an existing directory

and also try to enforce disk space / volume size limits
even when the incoming file is of unknown size
2024-10-18 18:14:35 +00:00
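the component-stripping part of the fix is essentially this (a sketch; the function name is made up):

```python
import os

def nearest_existing_dir(abspath):
    # strip path components until we reach a directory that exists,
    # so free-space / volume-size checks have something real to stat
    path = abspath
    while path and not os.path.isdir(path):
        parent = os.path.dirname(path)
        if parent == path:
            break  # reached the filesystem root
        path = parent
    return path

# the free-space check can then run even for uploads of unknown size:
# shutil.disk_usage(nearest_existing_dir(dst)).free
```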
ed
5ca8f0706d up2k.js: detect broken webworkers;
the first time a file is to be hashed after a website refresh,
a set of webworkers is launched for efficient parallelization

in the unlikely event of a network outage exactly at this point,
the workers fail to start, and hashing never begins

add a ping/pong sequence to smoketest the workers, and
fallback to hashing on the main-thread when necessary
2024-10-18 16:50:15 +00:00
ed
a9b4436cdc up2k: improve upload retry/timeout
* `js:` make handshake retries more aggressive
* `u2c:` reduce chunks timeout + ^
* `main:` reduce tcp timeout to 128sec (js is 42s)
* `httpcli:` less confusing log messages
2024-10-18 16:24:31 +00:00
ed
5f91999512 update pkgs to 1.15.8 2024-10-16 22:22:29 +00:00
ed
9f000beeaf v1.15.8 2024-10-16 21:53:23 +00:00
ed
ff0a71f212 gallery: play m4v videos 2024-10-16 21:36:11 +00:00
ed
22dfc6ec24 ui-toast: hide countdown if infinite 2024-10-16 21:32:47 +00:00
ed
48147c079e subchunks: fix eta, cfg-ui 2024-10-16 21:17:00 +00:00
ed
d715479ef6 add chickenbit to force hashwasm 2024-10-16 20:23:02 +00:00
ed
fc8298c468 up2k: avoid cloudflare upload size-limit
previously, the biggest file that could be uploaded through
cloudflare was 383 GiB, due to max num chunks being 4096

`--u2sz`, which takes three ints (min-size, target, max-size),
can now be used to enforce a max chunksize; chunks larger
than max-size get split into smaller subchunks / chunklets

subchunks cannot be stitched/joined, and subchunks of the
same chunk must be uploaded sequentially, one at a time

if a subchunk fails due to bitflips or connection-loss,
then the entire chunk must (and will) be reuploaded
2024-10-16 19:29:08 +00:00
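a sketch of the splitting rule (illustrative; the real client does this inside its FileSlice logic, visible further down in this diff):

```python
def subchunks(chunk_ofs, chunk_sz, max_sz):
    # split one up2k chunk into sequential subchunks of at most max_sz;
    # these must be uploaded one at a time, in order, and cannot be
    # stitched together with neighboring chunks
    out = []
    ofs = 0
    while ofs < chunk_sz:
        sz = min(max_sz, chunk_sz - ofs)
        out.append((chunk_ofs + ofs, sz))
        ofs += sz
    return out

# a 160 MiB chunk with a 96 MiB ceiling becomes two POSTs:
M = 1024 * 1024
assert subchunks(0, 160 * M, 96 * M) == [(0, 96 * M), (96 * M, 64 * M)]
```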
ed
e94ca5dc91 up2k: improve logging 2024-10-16 15:41:19 +00:00
ed
114b71b751 up2k: fix filesystem toctou
previously and currently, as an upload completes, its "done" flag
is not set until all the data has been flushed to disk

however, the list of missing chunks becomes empty before the flush,
and that list was incorrectly used to determine completion state
in some dedup-related logic

as a result, duplicate uploads could initially fail, and would
succeed after the client automatically retried a handful of times
2024-10-16 15:32:58 +00:00
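schematically, the fix is to key completion on the flush-complete flag instead of the emptied need-list (hypothetical field names):

```python
def is_complete(job):
    # WRONG (the toctou): the need-list empties before data hits the disk
    #   return not job.needed_chunks
    # RIGHT: the done flag is only set after everything is flushed
    return job.done
```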
ed
b2770a2087 u2c: support more crazy filenames
newlines, invalid utf8, and worst of all... %20 (whitespace)

due to up2k protocol limitations,
filenames are normalized when they hit the server,
but folders get to keep their intended jank
2024-10-15 23:01:07 +00:00
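the `%20` pain comes from `+` meaning a space only in form-encoding, not in URL paths -- which is why the `quotep` change further down in this diff switches from `+` to `%20`. easy to verify with the stdlib:

```python
from urllib.parse import quote, unquote

# in a URL path, "+" is a literal plus sign, so spaces must be %20:
assert quote("my file.txt", safe="/") == "my%20file.txt"
assert unquote("my+file.txt") == "my+file.txt"  # NOT "my file.txt"
```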
ed
cba1878bb2 u2c: don't get stuck at fifos and such 2024-10-15 22:53:55 +00:00
ed
a2e037d6af u2c: fix chunksize calculation
files which were exactly 128 GiB large would fail
(you can't make this shit up)
2024-10-15 22:39:48 +00:00
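(a worked example of the boundary, based on the `up2k_chunksize` change visible further down in this diff: 128 GiB is exactly 4096 chunks of 32 MiB, and the old condition `nchunks < 4096` rejected that chunksize, so the client presumably stepped up to a larger chunksize than the server expected, making the two chunk lists disagree; `nchunks <= 4096` accepts it.)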
ed
65a2b6a223 u2c: fix excessive FDs
it would open separate FDs for all chunks to be uploaded...

open and close files as they are needed during upload instead
2024-10-15 22:30:15 +00:00
ed
9ed799e803 update pkgs to 1.15.7 2024-10-13 23:07:31 +00:00
ed
c1c0ecca13 v1.15.7 2024-10-13 22:44:57 +00:00
ed
ee62836383 bitflip logging 2024-10-13 22:37:35 +00:00
ed
705f598b1a up2k non-e2d fixes:
* respect noforget when loading snaps
* ...but actually forget deleted files otherwise
* insert empty need/hash as necessary
2024-10-13 22:10:27 +00:00
ed
414de88925 u2c v2.2 2024-10-13 22:07:41 +00:00
ed
53ffd245dd u2c: fix progress indicator for resumed uploads 2024-10-13 22:07:07 +00:00
ed
cf1b756206 u2c: option to list chunk hashes 2024-10-13 22:06:02 +00:00
ed
22b58e31ef unpost: authed users can see anon on same ip 2024-10-13 22:00:15 +00:00
ed
b7f9bf5a28 cidr-based autologin 2024-10-13 21:56:26 +00:00
ed
aba680b6c2 update pkgs to 1.15.6 2024-10-11 23:16:24 +00:00
ed
fabada95f6 v1.15.6 2024-10-11 22:56:10 +00:00
ed
9ccd8bb3ea support viewing dotfile docs; closes #104 2024-10-11 22:06:43 +00:00
ed
1d68acf8f0 add preadme.md; closes #105 2024-10-11 21:52:44 +00:00
ed
1e7697b551 misc cleanup;
* more typos
* python 3.13 deprecations
2024-10-11 20:46:40 +00:00
ed
4a4ec88d00 up2k: fix hs after bitflips / net-glitch
chunk stitching could cause handshakes to initiate
a new upload rather than resume an ongoing one
2024-10-11 19:48:44 +00:00
ed
6adc778d62 fix a buttload of typos 2024-10-11 18:58:14 +00:00
ed
6b7ebdb7e9 upgrade old snaps to dwrk + fix ptop
ptop would be wrong if a volume was moved on-disk since the last run
2024-10-09 06:05:55 +00:00
ed
3d7facd774 add option to entirely disable dedup
global-option `--no-clone` / volflag `noclone` entirely disables
serverside deduplication; clients will then fully upload dupe files

can be useful when `--safe-dedup=1` is not an option due to other
software tampering with the on-disk files, and your filesystem has
prohibitively slow or expensive reads
2024-10-08 21:27:19 +00:00
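(as a hypothetical example of the volflag form, following the `c,flag` volume syntax used elsewhere in the README: `-v /srv/pub:pub:rw:c,noclone` for a single volume, or just `--no-clone` globally.)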
ed
eaee1f2cab update pkgs to 1.15.5 2024-10-05 18:20:20 +00:00
ed
ff012221ae v1.15.5 2024-10-05 18:03:04 +00:00
ed
c398553748 pkgres: fix multiprocessing 2024-10-05 17:32:08 +00:00
ed
3ccbcf6185 update pkgs to 1.15.4 2024-10-04 23:56:45 +00:00
56 changed files with 2624 additions and 613 deletions

README.md
View File

@@ -47,6 +47,7 @@ turn almost any device into a file server with resumable uploads/downloads using
* [file manager](#file-manager) - cut/paste, rename, and delete files/folders (if you have permission)
* [shares](#shares) - share a file or folder by creating a temporary link
* [batch rename](#batch-rename) - select some files and press `F2` to bring up the rename UI
* [rss feeds](#rss-feeds) - monitor a folder with your RSS reader
* [media player](#media-player) - plays almost every audio format there is
* [audio equalizer](#audio-equalizer) - and [dynamic range compressor](https://en.wikipedia.org/wiki/Dynamic_range_compression)
* [fix unreliable playback on android](#fix-unreliable-playback-on-android) - due to phone / app settings
@@ -80,12 +81,14 @@ turn almost any device into a file server with resumable uploads/downloads using
* [event hooks](#event-hooks) - trigger a program on uploads, renames etc ([examples](./bin/hooks/))
* [upload events](#upload-events) - the older, more powerful approach ([examples](./bin/mtag/))
* [handlers](#handlers) - redefine behavior with plugins ([examples](./bin/handlers/))
* [ip auth](#ip-auth) - autologin based on IP range (CIDR)
* [identity providers](#identity-providers) - replace copyparty passwords with oauth and such
* [user-changeable passwords](#user-changeable-passwords) - if permitted, users can change their own passwords
* [using the cloud as storage](#using-the-cloud-as-storage) - connecting to an aws s3 bucket and similar
* [hiding from google](#hiding-from-google) - tell search engines you dont wanna be indexed
* [hiding from google](#hiding-from-google) - tell search engines you don't wanna be indexed
* [themes](#themes)
* [complete examples](#complete-examples)
* [listen on port 80 and 443](#listen-on-port-80-and-443) - become a *real* webserver
* [reverse-proxy](#reverse-proxy) - running copyparty next to other websites
* [real-ip](#real-ip) - teaching copyparty how to see client IPs
* [prometheus](#prometheus) - metrics/stats can be enabled
@@ -114,7 +117,7 @@ turn almost any device into a file server with resumable uploads/downloads using
* [https](#https) - both HTTP and HTTPS are accepted
* [recovering from crashes](#recovering-from-crashes)
* [client crashes](#client-crashes)
* [frefox wsod](#frefox-wsod) - firefox 87 can crash during uploads
* [firefox wsod](#firefox-wsod) - firefox 87 can crash during uploads
* [HTTP API](#HTTP-API) - see [devnotes](./docs/devnotes.md#http-api)
* [dependencies](#dependencies) - mandatory deps
* [optional dependencies](#optional-dependencies) - install these to enable bonus features
@@ -217,7 +220,7 @@ also see [comparison to similar software](./docs/versus.md)
* upload
* ☑ basic: plain multipart, ie6 support
* ☑ [up2k](#uploading): js, resumable, multithreaded
* **no filesize limit!** ...unless you use Cloudflare, then it's 383.9 GiB
* **no filesize limit!** even on Cloudflare
* ☑ stash: simple PUT filedropper
* ☑ filename randomizer
* ☑ write-only folders
@@ -425,7 +428,7 @@ configuring accounts/volumes with arguments:
permissions:
* `r` (read): browse folder contents, download files, download as zip/tar, see filekeys/dirkeys
* `w` (write): upload files, move files *into* this folder
* `w` (write): upload files, move/copy files *into* this folder
* `m` (move): move files/folders *from* this folder
* `d` (delete): delete files/folders
* `.` (dots): user can ask to show dotfiles in directory listings
@@ -505,7 +508,8 @@ the browser has the following hotkeys (always qwerty)
* `ESC` close various things
* `ctrl-K` delete selected files/folders
* `ctrl-X` cut selected files/folders
* `ctrl-V` paste
* `ctrl-C` copy selected files/folders to clipboard
* `ctrl-V` paste (move/copy)
* `Y` download selected files
* `F2` [rename](#batch-rename) selected file/folder
* when a file/folder is selected (in not-grid-view):
@@ -574,6 +578,7 @@ click the `🌲` or pressing the `B` hotkey to toggle between breadcrumbs path (
press `g` or `` to toggle grid-view instead of the file listing and `t` toggles icons / thumbnails
* can be made default globally with `--grid` or per-volume with volflag `grid`
* enable by adding `?imgs` to a link, or disable with `?imgs=0`
![copyparty-thumbs-fs8](https://user-images.githubusercontent.com/241032/129636211-abd20fa2-a953-4366-9423-1c88ebb96ba9.png)
@@ -581,7 +586,7 @@ it does static images with Pillow / pyvips / FFmpeg, and uses FFmpeg for video f
* pyvips is 3x faster than Pillow, Pillow is 3x faster than FFmpeg
* disable thumbnails for specific volumes with volflag `dthumb` for all, or `dvthumb` / `dathumb` / `dithumb` for video/audio/images only
audio files are covnerted into spectrograms using FFmpeg unless you `--no-athumb` (and some FFmpeg builds may need `--th-ff-swr`)
audio files are converted into spectrograms using FFmpeg unless you `--no-athumb` (and some FFmpeg builds may need `--th-ff-swr`)
images with the following names (see `--th-covers`) become the thumbnail of the folder they're in: `folder.png`, `folder.jpg`, `cover.png`, `cover.jpg`
* the order is significant, so if both `cover.png` and `folder.jpg` exist in a folder, it will pick the first matching `--th-covers` entry (`folder.jpg`)
@@ -652,7 +657,7 @@ up2k has several advantages:
* uploads resume if you reboot your browser or pc, just upload the same files again
* server detects any corruption; the client reuploads affected chunks
* the client doesn't upload anything that already exists on the server
* no filesize limit unless imposed by a proxy, for example Cloudflare, which blocks uploads over 383.9 GiB
* no filesize limit, even when a proxy limits the request size (for example Cloudflare)
* much higher speeds than ftp/scp/tarpipe on some internet connections (mainly american ones) thanks to parallel connections
* the last-modified timestamp of the file is preserved
@@ -667,7 +672,7 @@ see [up2k](./docs/devnotes.md#up2k) for details on how it works, or watch a [dem
**protip:** if you enable `favicon` in the `[⚙️] settings` tab (by typing something into the textbox), the icon in the browser tab will indicate upload progress -- also, the `[🔔]` and/or `[🔊]` switches enable visible and/or audible notifications on upload completion
the up2k UI is the epitome of polished inutitive experiences:
the up2k UI is the epitome of polished intuitive experiences:
* "parallel uploads" specifies how many chunks to upload at the same time
* `[🏃]` analysis of other files should continue while one is uploading
* `[🥔]` shows a simpler UI for faster uploads from slow devices
@@ -688,6 +693,8 @@ note that since up2k has to read each file twice, `[🎈] bup` can *theoreticall
if you are resuming a massive upload and want to skip hashing the files which already finished, you can enable `turbo` in the `[⚙️] config` tab, but please read the tooltip on that button
if the server is behind a proxy which imposes a request-size limit, you can configure up2k to sneak below the limit with server-option `--u2sz` (the default is 96 MiB to support Cloudflare)
### file-search
@@ -716,7 +723,7 @@ you can unpost even if you don't have regular move/delete access, however only f
### self-destruct
uploads can be given a lifetime, afer which they expire / self-destruct
uploads can be given a lifetime, after which they expire / self-destruct
the feature must be enabled per-volume with the `lifetime` [upload rule](#upload-rules) which sets the upper limit for how long a file gets to stay on the server
@@ -743,7 +750,7 @@ the control-panel shows the ETA for all incoming files , but only for files bei
cut/paste, rename, and delete files/folders (if you have permission)
file selection: click somewhere on the line (not the link itsef), then:
file selection: click somewhere on the line (not the link itself), then:
* `space` to toggle
* `up/down` to move
* `shift-up/down` to move-and-select
@@ -751,10 +758,11 @@ file selection: click somewhere on the line (not the link itsef), then:
* shift-click another line for range-select
* cut: select some files and `ctrl-x`
* copy: select some files and `ctrl-c`
* paste: `ctrl-v` in another folder
* rename: `F2`
you can move files across browser tabs (cut in one tab, paste in another)
you can copy/move files across browser tabs (cut/copy in one tab, paste in another)
## shares
@@ -777,6 +785,7 @@ semi-intentional limitations:
* cleanup of expired shares only works when global option `e2d` is set, and/or at least one volume on the server has volflag `e2d`
* only folders from the same volume are shared; if you are sharing a folder which contains other volumes, then the contents of those volumes will not be available
* related to [IdP volumes being forgotten on shutdown](https://github.com/9001/copyparty/blob/hovudstraum/docs/idp.md#idp-volumes-are-forgotten-on-shutdown), any shares pointing into a user's IdP volume will be unavailable until that user makes their first request after a restart
* no option to "delete after first access" because tricky
* when linking something to discord (for example) it'll get accessed by their scraper and that would count as a hit
* browsers wouldn't be able to resume a broken download unless the requester's IP gets allowlisted for X minutes (ref. tricky)
@@ -840,6 +849,30 @@ or a mix of both:
the metadata keys you can use in the format field are the ones in the file-browser table header (whatever is collected with `-mte` and `-mtp`)
## rss feeds
monitor a folder with your RSS reader , optionally recursive
must be enabled per-volume with volflag `rss` or globally with `--rss`
the feed includes itunes metadata for use with podcast readers such as [AntennaPod](https://antennapod.org/)
a feed example: https://cd.ocv.me/a/d2/d22/?rss&fext=mp3
url parameters:
* `pw=hunter2` for password auth
* `recursive` to also include subfolders
* `title=foo` changes the feed title (default: folder name)
* `fext=mp3,opus` only include mp3 and opus files (default: all)
* `nf=30` only show the first 30 results (default: 250)
* `sort=m` sort by mtime (file last-modified), newest first (default)
* `u` = upload-time; NOTE: non-uploaded files have upload-time `0`
* `n` = filename
* `a` = filesize
* uppercase = reverse-sort; `M` = oldest file first
## media player
plays almost every audio format there is (if the server has FFmpeg installed for on-demand transcoding)
@@ -936,6 +969,8 @@ see [./srv/expand/](./srv/expand/) for usage and examples
* files named `README.md` / `readme.md` will be rendered after directory listings unless `--no-readme` (but `.epilogue.html` takes precedence)
* and `PREADME.md` / `preadme.md` is shown above directory listings unless `--no-readme` or `.prologue.html`
* `README.md` and `*logue.html` can contain placeholder values which are replaced server-side before embedding into directory listings; see `--help-exp`
@@ -987,7 +1022,11 @@ uses [multicast dns](https://en.wikipedia.org/wiki/Multicast_DNS) to give copypa
all enabled services ([webdav](#webdav-server), [ftp](#ftp-server), [smb](#smb-server)) will appear in mDNS-aware file managers (KDE, gnome, macOS, ...)
the domain will be http://partybox.local if the machine's hostname is `partybox` unless `--name` specifies soemthing else
the domain will be `partybox.local` if the machine's hostname is `partybox` unless `--name` specifies something else
and the web-UI will be available at http://partybox.local:3923/
* if you want to get rid of the `:3923` so you can use http://partybox.local/ instead then see [listen on port 80 and 443](#listen-on-port-80-and-443)
### ssdp
@@ -1013,7 +1052,7 @@ print a qr-code [(screenshot)](https://user-images.githubusercontent.com/241032/
* `--qrz 1` forces 1x zoom instead of autoscaling to fit the terminal size
* 1x may render incorrectly on some terminals/fonts, but 2x should always work
it uses the server hostname if [mdns](#mdns) is enbled, otherwise it'll use your external ip (default route) unless `--qri` specifies a specific ip-prefix or domain
it uses the server hostname if [mdns](#mdns) is enabled, otherwise it'll use your external ip (default route) unless `--qri` specifies a specific ip-prefix or domain
## ftp server
@@ -1038,7 +1077,7 @@ some recommended FTP / FTPS clients; `wark` = example password:
## webdav server
with read-write support, supports winXP and later, macos, nautilus/gvfs ... a greay way to [access copyparty straight from the file explorer in your OS](#mount-as-drive)
with read-write support, supports winXP and later, macos, nautilus/gvfs ... a great way to [access copyparty straight from the file explorer in your OS](#mount-as-drive)
click the [connect](http://127.0.0.1:3923/?hc) button in the control-panel to see connection instructions for windows, linux, macos
@@ -1142,8 +1181,8 @@ authenticate with one of the following:
tweaking the ui
* set default sort order globally with `--sort` or per-volume with the `sort` volflag; specify one or more comma-separated columns to sort by, and prefix the column name with `-` for reverse sort
* the column names you can use are visible as tooltips when hovering over the column headers in the directory listing, for example `href ext sz ts tags/.up_at tags/Cirle tags/.tn tags/Artist tags/Title`
* to sort in music order (album, track, artist, title) with filename as fallback, you could `--sort tags/Cirle,tags/.tn,tags/Artist,tags/Title,href`
* the column names you can use are visible as tooltips when hovering over the column headers in the directory listing, for example `href ext sz ts tags/.up_at tags/Circle tags/.tn tags/Artist tags/Title`
* to sort in music order (album, track, artist, title) with filename as fallback, you could `--sort tags/Circle,tags/.tn,tags/Artist,tags/Title,href`
* to sort by upload date, first enable showing the upload date in the listing with `-e2d -mte +.up_at` and then `--sort tags/.up_at`
see [./docs/rice](./docs/rice) for more, including how to add stuff (css/`<meta>`/...) to the html `<head>` tag, or to add your own translation
@@ -1166,7 +1205,11 @@ if you want to entirely replace the copyparty response with your own jinja2 temp
enable symlink-based upload deduplication globally with `--dedup` or per-volume with volflag `dedup`
when someone tries to upload a file that already exists on the server, the upload will be politely declined and a symlink is created instead, pointing to the nearest copy on disk, thus reducinc disk space usage
by default, when someone tries to upload a file that already exists on the server, the upload will be politely declined, and the server will copy the existing file over to where the upload would have gone
if you enable deduplication with `--dedup` then it'll create a symlink instead of a full copy, thus reducing disk space usage
* on the contrary, if your server is hooked up to s3-glacier or similar storage where reading is expensive, and you cannot use `--safe-dedup=1` because you have other software tampering with your files, and you therefore want to entirely disable detection of duplicate data instead, then you can specify `--no-clone` globally or `noclone` as a volflag
**warning:** when enabling dedup, you should also:
* enable indexing with `-e2dsa` or volflag `e2dsa` (see [file indexing](#file-indexing) section below); strongly recommended
@@ -1207,7 +1250,7 @@ through arguments:
* `-e2t` enables metadata indexing on upload
* `-e2ts` also scans for tags in all files that don't have tags yet
* `-e2tsr` also deletes all existing tags, doing a full reindex
* `-e2v` verfies file integrity at startup, comparing hashes from the db
* `-e2v` verifies file integrity at startup, comparing hashes from the db
* `-e2vu` patches the database with the new hashes from the filesystem
* `-e2vp` panics and kills copyparty instead
@@ -1420,11 +1463,27 @@ redefine behavior with plugins ([examples](./bin/handlers/))
replace 404 and 403 errors with something completely different (that's it for now)
## ip auth
autologin based on IP range (CIDR) , using the global-option `--ipu`
for example, if everyone with an IP that starts with `192.168.123` should automatically log in as the user `spartacus`, then you can either specify `--ipu=192.168.123.0/24=spartacus` as a commandline option, or put this in a config file:
```yaml
[global]
ipu: 192.168.123.0/24=spartacus
```
repeat the option to map additional subnets
**be careful with this one!** if you have a reverseproxy, then you definitely want to make sure you have [real-ip](#real-ip) configured correctly, and it's probably a good idea to nullmap the reverseproxy's IP just in case; so if your reverseproxy is sending requests from `172.24.27.9` then that would be `--ipu=172.24.27.9/32=`
## identity providers
replace copyparty passwords with oauth and such
you can disable the built-in password-based login sysem, and instead replace it with a separate piece of software (an identity provider) which will then handle authenticating / authorizing of users; this makes it possible to login with passkeys / fido2 / webauthn / yubikey / ldap / active directory / oauth / many other single-sign-on contraptions
you can disable the built-in password-based login system, and instead replace it with a separate piece of software (an identity provider) which will then handle authenticating / authorizing of users; this makes it possible to login with passkeys / fido2 / webauthn / yubikey / ldap / active directory / oauth / many other single-sign-on contraptions
a popular choice is [Authelia](https://www.authelia.com/) (config-file based), another one is [authentik](https://goauthentik.io/) (GUI-based, more complex)
@@ -1451,7 +1510,7 @@ if permitted, users can change their own passwords in the control-panel
* if you run multiple copyparty instances with different users you *almost definitely* want to specify separate DBs for each instance
* if [password hashing](#password-hashing) is enbled, the passwords in the db are also hashed
* if [password hashing](#password-hashing) is enabled, the passwords in the db are also hashed
* ...which means that all user-defined passwords will be forgotten if you change password-hashing settings
@@ -1471,7 +1530,7 @@ you may improve performance by specifying larger values for `--iobuf` / `--s-rd-
## hiding from google
tell search engines you dont wanna be indexed, either using the good old [robots.txt](https://www.robotstxt.org/robotstxt.html) or through copyparty settings:
tell search engines you don't wanna be indexed, either using the good old [robots.txt](https://www.robotstxt.org/robotstxt.html) or through copyparty settings:
* `--no-robots` adds HTTP (`X-Robots-Tag`) and HTML (`<meta>`) headers with `noindex, nofollow` globally
* volflag `[...]:c,norobots` does the same thing for that single volume
@@ -1546,6 +1605,33 @@ if you want to change the fonts, see [./docs/rice/](./docs/rice/)
`-lo log/cpp-%Y-%m%d-%H%M%S.txt.xz`
## listen on port 80 and 443
become a *real* webserver which people can access by just going to your IP or domain without specifying a port
**if you're on windows,** then you just need to add the commandline argument `-p 80,443` and you're done! nice
**if you're on macos,** sorry, I don't know
**if you're on Linux,** you have the following 4 options:
* **option 1:** set up a [reverse-proxy](#reverse-proxy) -- this one makes a lot of sense if you're running on a proper headless server, because that way you get real HTTPS too
* **option 2:** NAT to port 3923 -- this is cumbersome since you'll need to do it every time you reboot, and the exact command may depend on your linux distribution:
```bash
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3923
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 3923
```
* **option 3:** disable the [security policy](https://www.w3.org/Daemon/User/Installation/PrivilegedPorts.html) which prevents the use of 80 and 443; this is *probably* fine:
```
setcap CAP_NET_BIND_SERVICE=+eip $(realpath $(which python))
python copyparty-sfx.py -p 80,443
```
* **option 4:** run copyparty as root (please don't)
## reverse-proxy
running copyparty next to other websites hosted on an existing webserver such as nginx, caddy, or apache
@@ -1601,6 +1687,7 @@ scrape_configs:
currently the following metrics are available,
* `cpp_uptime_seconds` time since last copyparty restart
* `cpp_boot_unixtime_seconds` same but as an absolute timestamp
* `cpp_active_dl` number of active downloads
* `cpp_http_conns` number of open http(s) connections
* `cpp_http_reqs` number of http(s) requests handled
* `cpp_sus_reqs` number of 403/422/malicious requests
@@ -1850,6 +1937,9 @@ quick summary of more eccentric web-browsers trying to view a directory index:
| **ie4** and **netscape** 4.0 | can browse, upload with `?b=u`, auth with `&pw=wark` |
| **ncsa mosaic** 2.7 | does not get a pass, [pic1](https://user-images.githubusercontent.com/241032/174189227-ae816026-cf6f-4be5-a26e-1b3b072c1b2f.png) - [pic2](https://user-images.githubusercontent.com/241032/174189225-5651c059-5152-46e9-ac26-7e98e497901b.png) |
| **SerenityOS** (7e98457) | hits a page fault, works with `?b=u`, file upload not-impl |
| **nintendo 3ds** | can browse, upload, view thumbnails (thx bnjmn) |
<p align="center"><img src="https://github.com/user-attachments/assets/88deab3d-6cad-4017-8841-2f041472b853" /></p>
# client examples
@@ -1894,7 +1984,7 @@ interact with copyparty using non-browser clients
* [igloo irc](https://iglooirc.com/): Method: `post` Host: `https://you.com/up/?want=url&pw=hunter2` Multipart: `yes` File parameter: `f`
copyparty returns a truncated sha512sum of your PUT/POST as base64; you can generate the same checksum locally to verify uplaods:
copyparty returns a truncated sha512sum of your PUT/POST as base64; you can generate the same checksum locally to verify uploads:
b512(){ printf "$((sha512sum||shasum -a512)|sed -E 's/ .*//;s/(..)/\\x\1/g')"|base64|tr '+/' '-_'|head -c44;}
b512 <movie.mkv
@@ -1994,7 +2084,7 @@ when uploading files,
* up to 30% faster uploads if you hide the upload status list by switching away from the `[🚀]` up2k ui-tab (or closing it)
* optionally you can switch to the lightweight potato ui by clicking the `[🥔]`
* switching to another browser-tab also works, the favicon will update every 10 seconds in that case
* unlikely to be a problem, but can happen when uploding many small files, or your internet is too fast, or PC too slow
* unlikely to be a problem, but can happen when uploading many small files, or your internet is too fast, or PC too slow
# security
@@ -2042,7 +2132,7 @@ other misc notes:
behavior that might be unexpected
* users without read-access to a folder can still see the `.prologue.html` / `.epilogue.html` / `README.md` contents, for the purpose of showing a description on how to use the uploader for example
* users without read-access to a folder can still see the `.prologue.html` / `.epilogue.html` / `PREADME.md` / `README.md` contents, for the purpose of showing a description on how to use the uploader for example
* users can submit `<script>`s which autorun (in a sandbox) for other visitors in a few ways;
* uploading a `README.md` -- avoid with `--no-readme`
* renaming `some.html` to `.epilogue.html` -- avoid with either `--no-logues` or `--no-dot-ren`
@@ -2120,13 +2210,13 @@ if [cfssl](https://github.com/cloudflare/cfssl/releases/latest) is installed, co
## client crashes
### frefox wsod
### firefox wsod
firefox 87 can crash during uploads -- the entire browser goes, including all other browser tabs, everything turns white
however you can hit `F12` in the up2k tab and use the devtools to see how far you got in the uploads:
* get a complete list of all uploads, organized by statuts (ok / no-good / busy / queued):
* get a complete list of all uploads, organized by status (ok / no-good / busy / queued):
`var tabs = { ok:[], ng:[], bz:[], q:[] }; for (var a of up2k.ui.tab) tabs[a.in].push(a); tabs`
* list of filenames which failed:
@@ -2243,7 +2333,7 @@ then again, if you are already into downloading shady binaries from the internet
## zipapp
another emergency alternative, [copyparty.pyz](https://github.com/9001/copyparty/releases/latest/download/copyparty.pyz) has less features, is slow, requires python 3.7 or newer, worse compression, and more importantly is unable to benefit from more recent versions of jinja2 and such (which makes it less secure)... lots of drawbacks with this one really -- but it does not unpack any temporay files to disk, so it *may* just work if the regular sfx fails to start because the computer is messed up in certain funky ways, so it's worth a shot if all else fails
another emergency alternative, [copyparty.pyz](https://github.com/9001/copyparty/releases/latest/download/copyparty.pyz) has less features, is slow, requires python 3.7 or newer, worse compression, and more importantly is unable to benefit from more recent versions of jinja2 and such (which makes it less secure)... lots of drawbacks with this one really -- but it does not unpack any temporary files to disk, so it *may* just work if the regular sfx fails to start because the computer is messed up in certain funky ways, so it's worth a shot if all else fails
run it by doubleclicking it, or try typing `python copyparty.pyz` in your terminal/console/commandline/telex if that fails

View File

@@ -2,7 +2,7 @@ standalone programs which are executed by copyparty when an event happens (uploa
these programs either take zero arguments, or a filepath (the affected file), or a json message with filepath + additional info
run copyparty with `--help-hooks` for usage details / hook type explanations (xm/xbu/xau/xiu/xbr/xar/xbd/xad/xban)
run copyparty with `--help-hooks` for usage details / hook type explanations (xm/xbu/xau/xiu/xbc/xac/xbr/xar/xbd/xad/xban)
> **note:** in addition to event hooks (the stuff described here), copyparty has another api to run your programs/scripts while providing way more information such as audio tags / video codecs / etc and optionally daisychaining data between scripts in a processing pipeline; if that's what you want then see [mtp plugins](../mtag/) instead

View File

@@ -393,7 +393,8 @@ class Gateway(object):
if r.status != 200:
self.closeconn()
info("http error %s reading dir %r", r.status, web_path)
raise FuseOSError(errno.ENOENT)
err = errno.ENOENT if r.status == 404 else errno.EIO
raise FuseOSError(err)
ctype = r.getheader("Content-Type", "")
if ctype == "application/json":
@@ -1128,7 +1129,7 @@ def main():
# dircache is always a boost,
# only want to disable it for tests etc,
cdn = 9 # max num dirs; 0=disable
cdn = 24 # max num dirs; keep larger than max dir depth; 0=disable
cds = 1 # numsec until an entry goes stale
where = "local directory"

View File

@@ -1,8 +1,8 @@
#!/usr/bin/env python3
from __future__ import print_function, unicode_literals
S_VERSION = "2.1"
S_BUILD_DT = "2024-09-23"
S_VERSION = "2.6"
S_BUILD_DT = "2024-11-10"
"""
u2c.py: upload to copyparty
@@ -62,6 +62,9 @@ else:
unicode = str
WTF8 = "replace" if PY2 else "surrogateescape"
VT100 = platform.system() != "Windows"
@@ -151,6 +154,7 @@ class HCli(object):
self.tls = tls
self.verify = ar.te or not ar.td
self.conns = []
self.hconns = []
if tls:
import ssl
@@ -170,7 +174,7 @@ class HCli(object):
"User-Agent": "u2c/%s" % (S_VERSION,),
}
def _connect(self):
def _connect(self, timeout):
args = {}
if PY37:
args["blocksize"] = 1048576
@@ -182,9 +186,11 @@ class HCli(object):
if self.ctx:
args = {"context": self.ctx}
return C(self.addr, self.port, timeout=999, **args)
return C(self.addr, self.port, timeout=timeout, **args)
def req(self, meth, vpath, hdrs, body=None, ctype=None):
now = time.time()
hdrs.update(self.base_hdrs)
if self.ar.a:
hdrs["PW"] = self.ar.a
@@ -195,7 +201,11 @@ class HCli(object):
0 if not body else body.len if hasattr(body, "len") else len(body)
)
c = self.conns.pop() if self.conns else self._connect()
# large timeout for handshakes (safededup)
conns = self.hconns if ctype == MJ else self.conns
while conns and self.ar.cxp < now - conns[0][0]:
conns.pop(0)[1].close()
c = conns.pop()[1] if conns else self._connect(999 if ctype == MJ else 128)
try:
c.request(meth, vpath, body, hdrs)
if PY27:
@@ -204,8 +214,15 @@ class HCli(object):
rsp = c.getresponse()
data = rsp.read()
self.conns.append(c)
conns.append((time.time(), c))
return rsp.status, data.decode("utf-8")
except http_client.BadStatusLine:
if self.ar.cxp > 4:
t = "\nWARNING: --cxp probably too high; reducing from %d to 4"
print(t % (self.ar.cxp,))
self.ar.cxp = 4
c.close()
raise
except:
c.close()
raise
@@ -228,7 +245,7 @@ class File(object):
self.lmod = lmod # type: float
self.abs = os.path.join(top, rel) # type: bytes
self.name = self.rel.split(b"/")[-1].decode("utf-8", "replace") # type: str
self.name = self.rel.split(b"/")[-1].decode("utf-8", WTF8) # type: str
# set by get_hashlist
self.cids = [] # type: list[tuple[str, int, int]] # [ hash, ofs, sz ]
@@ -267,10 +284,41 @@ class FileSlice(object):
raise Exception(9)
tlen += clen
self.len = tlen
self.len = self.tlen = tlen
self.cdr = self.car + self.len
self.ofs = 0 # type: int
self.f = open(file.abs, "rb", 512 * 1024)
self.f = None
self.seek = self._seek0
self.read = self._read0
def subchunk(self, maxsz, nth):
if self.tlen <= maxsz:
return -1
if not nth:
self.car0 = self.car
self.cdr0 = self.cdr
self.car = self.car0 + maxsz * nth
if self.car >= self.cdr0:
return -2
self.cdr = self.car + min(self.cdr0 - self.car, maxsz)
self.len = self.cdr - self.car
self.seek(0)
return nth
def unsub(self):
self.car = self.car0
self.cdr = self.cdr0
self.len = self.tlen
def _open(self):
self.seek = self._seek
self.read = self._read
self.f = open(self.file.abs, "rb", 512 * 1024)
self.f.seek(self.car)
# https://stackoverflow.com/questions/4359495/what-is-exactly-a-file-like-object-in-python
@@ -282,10 +330,15 @@ class FileSlice(object):
except:
pass # py27 probably
def close(self, *a, **ka):
return # until _open
def tell(self):
return self.ofs
def seek(self, ofs, wh=0):
def _seek(self, ofs, wh=0):
assert self.f # !rm
if wh == 1:
ofs = self.ofs + ofs
elif wh == 2:
@@ -299,12 +352,22 @@ class FileSlice(object):
self.ofs = ofs
self.f.seek(self.car + ofs)
def read(self, sz):
def _read(self, sz):
assert self.f # !rm
sz = min(sz, self.len - self.ofs)
ret = self.f.read(sz)
self.ofs += len(ret)
return ret
def _seek0(self, ofs, wh=0):
self._open()
return self.seek(ofs, wh)
def _read0(self, sz):
self._open()
return self.read(sz)
class MTHash(object):
def __init__(self, cores):
@@ -557,13 +620,17 @@ def walkdir(err, top, excl, seen):
for ap, inf in sorted(statdir(err, top)):
if excl.match(ap):
continue
yield ap, inf
if stat.S_ISDIR(inf.st_mode):
yield ap, inf
try:
for x in walkdir(err, ap, excl, seen):
yield x
except Exception as ex:
err.append((ap, str(ex)))
elif stat.S_ISREG(inf.st_mode):
yield ap, inf
else:
err.append((ap, "irregular filetype 0%o" % (inf.st_mode,)))
def walkdirs(err, tops, excl):
@@ -609,11 +676,12 @@ def walkdirs(err, tops, excl):
# mostly from copyparty/util.py
def quotep(btxt):
# type: (bytes) -> bytes
quot1 = quote(btxt, safe=b"/")
if not PY2:
quot1 = quot1.encode("ascii")
return quot1.replace(b" ", b"+") # type: ignore
return quot1.replace(b" ", b"%20") # type: ignore
# from copyparty/util.py
@@ -641,7 +709,7 @@ def up2k_chunksize(filesize):
while True:
for mul in [1, 2]:
nchunks = math.ceil(filesize * 1.0 / chunksize)
if nchunks <= 256 or (chunksize >= 32 * 1024 * 1024 and nchunks < 4096):
if nchunks <= 256 or (chunksize >= 32 * 1024 * 1024 and nchunks <= 4096):
return chunksize
chunksize += stepsize
@@ -720,7 +788,7 @@ def handshake(ar, file, search):
url = file.url
else:
if b"/" in file.rel:
url = quotep(file.rel.rsplit(b"/", 1)[0]).decode("utf-8", "replace")
url = quotep(file.rel.rsplit(b"/", 1)[0]).decode("utf-8")
else:
url = ""
url = ar.vtop + url
@@ -728,6 +796,7 @@ def handshake(ar, file, search):
while True:
sc = 600
txt = ""
t0 = time.time()
try:
zs = json.dumps(req, separators=(",\n", ": "))
sc, txt = web.req("POST", url, {}, zs.encode("utf-8"), MJ)
@@ -752,7 +821,9 @@ def handshake(ar, file, search):
print("\nERROR: login required, or wrong password:\n%s" % (txt,))
raise BadAuth()
eprint("handshake failed, retrying: %s\n %s\n\n" % (file.name, em))
t = "handshake failed, retrying: %s\n t0=%.3f t1=%.3f td=%.3f\n %s\n\n"
now = time.time()
eprint(t % (file.name, t0, now, now - t0, em))
time.sleep(ar.cd)
try:
@@ -763,15 +834,15 @@ def handshake(ar, file, search):
if search:
return r["hits"], False
file.url = r["purl"]
file.url = quotep(r["purl"].encode("utf-8", WTF8)).decode("utf-8")
file.name = r["name"]
file.wark = r["wark"]
return r["hash"], r["sprs"]
def upload(fsl, stats):
# type: (FileSlice, str) -> None
def upload(fsl, stats, maxsz):
# type: (FileSlice, str, int) -> None
"""upload a range of file data, defined by one or more `cid` (chunk-hash)"""
ctxt = fsl.cids[0]
@@ -789,21 +860,34 @@ def upload(fsl, stats):
if stats:
headers["X-Up2k-Stat"] = stats
nsub = 0
try:
sc, txt = web.req("POST", fsl.file.url, headers, fsl, MO)
while nsub != -1:
nsub = fsl.subchunk(maxsz, nsub)
if nsub == -2:
return
if nsub >= 0:
headers["X-Up2k-Subc"] = str(maxsz * nsub)
headers.pop(CLEN, None)
nsub += 1
if sc == 400:
if (
"already being written" in txt
or "already got that" in txt
or "only sibling chunks" in txt
):
fsl.file.nojoin = 1
sc, txt = web.req("POST", fsl.file.url, headers, fsl, MO)
if sc >= 400:
raise Exception("http %s: %s" % (sc, txt))
if sc == 400:
if (
"already being written" in txt
or "already got that" in txt
or "only sibling chunks" in txt
):
fsl.file.nojoin = 1
if sc >= 400:
raise Exception("http %s: %s" % (sc, txt))
finally:
fsl.f.close()
if fsl.f:
fsl.f.close()
if nsub != -1:
fsl.unsub()
class Ctl(object):
@@ -869,8 +953,8 @@ class Ctl(object):
self.hash_b = 0
self.up_f = 0
self.up_c = 0
self.up_b = 0
self.up_br = 0
self.up_b = 0 # num bytes handled
self.up_br = 0 # num bytes actually transferred
self.uploader_busy = 0
self.serialized = False
@@ -935,7 +1019,7 @@ class Ctl(object):
print(" %d up %s" % (ncs - nc, cid))
stats = "%d/0/0/%d" % (nf, self.nfiles - nf)
fslice = FileSlice(file, [cid])
upload(fslice, stats)
upload(fslice, stats, self.ar.szm)
print(" ok!")
if file.recheck:
@@ -1013,11 +1097,14 @@ class Ctl(object):
t = "%s eta @ %s/s, %s, %d# left\033[K" % (self.eta, spd, sleft, nleft)
eprint(txt + "\033]0;{0}\033\\\r{0}{1}".format(t, tail))
if self.ar.wlist:
self.at_hash = time.time() - self.t0
if self.hash_b and self.at_hash:
spd = humansize(self.hash_b / self.at_hash)
eprint("\nhasher: %.2f sec, %s/s\n" % (self.at_hash, spd))
if self.up_b and self.at_up:
spd = humansize(self.up_b / self.at_up)
if self.up_br and self.at_up:
spd = humansize(self.up_br / self.at_up)
eprint("upload: %.2f sec, %s/s\n" % (self.at_up, spd))
if not self.recheck:
@@ -1051,7 +1138,7 @@ class Ctl(object):
print(" ls ~{0}".format(srd))
zt = (
self.ar.vtop,
quotep(rd.replace(b"\\", b"/")).decode("utf-8", "replace"),
quotep(rd.replace(b"\\", b"/")).decode("utf-8"),
)
sc, txt = web.req("GET", "%s%s?ls&lt&dots" % zt, {})
if sc >= 400:
@@ -1060,13 +1147,16 @@ class Ctl(object):
j = json.loads(txt)
for f in j["dirs"] + j["files"]:
rfn = f["href"].split("?")[0].rstrip("/")
ls[unquote(rfn.encode("utf-8", "replace"))] = f
ls[unquote(rfn.encode("utf-8", WTF8))] = f
except Exception as ex:
print(" mkdir ~{0} ({1})".format(srd, ex))
if self.ar.drd:
dp = os.path.join(top, rd)
lnodes = set(os.listdir(dp))
try:
lnodes = set(os.listdir(dp))
except:
lnodes = list(ls) # fs eio; don't delete
if ptn:
zs = dp.replace(sep, b"/").rstrip(b"/") + b"/"
zls = [zs + x for x in lnodes]
@@ -1074,7 +1164,7 @@ class Ctl(object):
lnodes = [x.split(b"/")[-1] for x in zls]
bnames = [x for x in ls if x not in lnodes and x != b".hist"]
vpath = self.ar.url.split("://")[-1].split("/", 1)[-1]
names = [x.decode("utf-8", "replace") for x in bnames]
names = [x.decode("utf-8", WTF8) for x in bnames]
locs = [vpath + srd + "/" + x for x in names]
while locs:
req = locs
@@ -1136,10 +1226,16 @@ class Ctl(object):
self.up_b = self.hash_b
if self.ar.wlist:
vp = file.rel.decode("utf-8")
if self.ar.chs:
zsl = [
"%s %d %d" % (zsii[0], n, zsii[1])
for n, zsii in enumerate(file.cids)
]
print("chs: %s\n%s" % (vp, "\n".join(zsl)))
zsl = [self.ar.wsalt, str(file.size)] + [x[0] for x in file.kchunks]
zb = hashlib.sha512("\n".join(zsl).encode("utf-8")).digest()[:33]
wark = ub64enc(zb).decode("utf-8")
vp = file.rel.decode("utf-8")
if self.ar.jw:
print("%s %s" % (wark, vp))
else:
@@ -1177,6 +1273,7 @@ class Ctl(object):
self.q_upload.put(None)
return
chunksz = up2k_chunksize(file.size)
upath = file.abs.decode("utf-8", "replace")
if not VT100:
upath = upath.lstrip("\\?")
@@ -1236,9 +1333,14 @@ class Ctl(object):
file.up_c -= len(hs)
for cid in hs:
sz = file.kchunks[cid][1]
self.up_br -= sz
self.up_b -= sz
file.up_b -= sz
if hs and not file.up_b:
# first hs of this file; is this an upload resume?
file.up_b = chunksz * max(0, len(file.kchunks) - len(hs))
file.ucids = hs
if not hs:
@@ -1252,7 +1354,7 @@ class Ctl(object):
c1 = c2 = ""
spd_h = humansize(file.size / file.t_hash, True)
if file.up_b:
if file.up_c:
t_up = file.t1_up - file.t0_up
spd_u = humansize(file.size / t_up, True)
@@ -1262,14 +1364,13 @@ class Ctl(object):
t = " found %s %s(%.2fs,%s/s)%s"
print(t % (upath, c1, file.t_hash, spd_h, c2))
else:
kw = "uploaded" if file.up_b else " found"
kw = "uploaded" if file.up_c else " found"
print("{0} {1}".format(kw, upath))
self._check_if_done()
continue
chunksz = up2k_chunksize(file.size)
njoin = (self.ar.sz * 1024 * 1024) // chunksz
njoin = self.ar.sz // chunksz
cs = hs[:]
while cs:
fsl = FileSlice(file, cs[:1])
@@ -1321,7 +1422,7 @@ class Ctl(object):
)
try:
upload(fsl, stats)
upload(fsl, stats, self.ar.szm)
except Exception as ex:
t = "upload failed, retrying: %s #%s+%d (%s)\n"
eprint(t % (file.name, cids[0][:8], len(cids) - 1, ex))
@@ -1365,7 +1466,7 @@ def main():
cores = (os.cpu_count() if hasattr(os, "cpu_count") else 0) or 2
hcores = min(cores, 3) # 4% faster than 4+ on py3.9 @ r5-4500U
ver = "{0} v{1} https://youtu.be/BIcOO6TLKaY".format(S_BUILD_DT, S_VERSION)
ver = "{0}, v{1}".format(S_BUILD_DT, S_VERSION)
if "--version" in sys.argv:
print(ver)
return
@@ -1403,14 +1504,17 @@ source file/folder selection uses rsync syntax, meaning that:
ap = app.add_argument_group("file-ID calculator; enable with url '-' to list warks (file identifiers) instead of upload/search")
ap.add_argument("--wsalt", type=unicode, metavar="S", default="hunter2", help="salt to use when creating warks; must match server config")
ap.add_argument("--chs", action="store_true", help="verbose (print the hash/offset of each chunk in each file)")
ap.add_argument("--jw", action="store_true", help="just identifier+filepath, not mtime/size too")
ap = app.add_argument_group("performance tweaks")
ap.add_argument("-j", type=int, metavar="CONNS", default=2, help="parallel connections")
ap.add_argument("-J", type=int, metavar="CORES", default=hcores, help="num cpu-cores to use for hashing; set 0 or 1 for single-core hashing")
ap.add_argument("--sz", type=int, metavar="MiB", default=64, help="try to make each POST this big")
ap.add_argument("--szm", type=int, metavar="MiB", default=96, help="max size of each POST (default is cloudflare max)")
ap.add_argument("-nh", action="store_true", help="disable hashing while uploading")
ap.add_argument("-ns", action="store_true", help="no status panel (for slow consoles and macos)")
ap.add_argument("--cxp", type=float, metavar="SEC", default=57, help="assume http connections expired after SEConds")
ap.add_argument("--cd", type=float, metavar="SEC", default=5, help="delay before reattempting a failed handshake/upload")
ap.add_argument("--safe", action="store_true", help="use simple fallback approach")
ap.add_argument("-z", action="store_true", help="ZOOMIN' (skip uploading files if they exist at the destination with the ~same last-modified timestamp, so same as yolo / turbo with date-chk but even faster)")
@@ -1436,6 +1540,9 @@ source file/folder selection uses rsync syntax, meaning that:
if ar.dr:
ar.ow = True
ar.sz *= 1024 * 1024
ar.szm *= 1024 * 1024
ar.x = "|".join(ar.x or [])
setattr(ar, "wlist", ar.url == "-")

View File

@@ -1,6 +1,6 @@
# Maintainer: icxes <dev.null@need.moe>
pkgname=copyparty
pkgver="1.15.3"
pkgver="1.15.10"
pkgrel=1
pkgdesc="File server with accelerated resumable uploads, dedup, WebDAV, FTP, TFTP, zeroconf, media indexer, thumbnails++"
arch=("any")
@@ -21,7 +21,7 @@ optdepends=("ffmpeg: thumbnails for videos, images (slower) and audio, music tag
)
source=("https://github.com/9001/${pkgname}/releases/download/v${pkgver}/${pkgname}-${pkgver}.tar.gz")
backup=("etc/${pkgname}.d/init" )
sha256sums=("d4b02a8d618749c317161773fdd3b66992557f682b7cccd8c4c8583497c4cb24")
sha256sums=("070d5bdebe57c247427ceea6b9029f93097e4b8996cabfc59d0ec248d063b993")
build() {
cd "${srcdir}/${pkgname}-${pkgver}"

View File

@@ -1,5 +1,5 @@
{
"url": "https://github.com/9001/copyparty/releases/download/v1.15.3/copyparty-sfx.py",
"version": "1.15.3",
"hash": "sha256-OmoLwakVaZM9QwkujT4wwhqC5KcaS5u81DDTjS2r7MI="
"url": "https://github.com/9001/copyparty/releases/download/v1.15.10/copyparty-sfx.py",
"version": "1.15.10",
"hash": "sha256-9InxXpCfgnsvcNdRwWhQ74TpI24Osdr0lN0IwIULL3I="
}

View File

@@ -16,8 +16,6 @@ except:
TYPE_CHECKING = False
if True:
from types import ModuleType
from typing import Any, Callable, Optional
PY2 = sys.version_info < (3,)
@@ -82,6 +80,7 @@ web/deps/prismd.css
web/deps/scp.woff2
web/deps/sha512.ac.js
web/deps/sha512.hw.js
web/iiam.gif
web/md.css
web/md.html
web/md.js
@@ -110,7 +109,6 @@ RES = set(zs.strip().split("\n"))
class EnvParams(object):
def __init__(self) -> None:
self.pkg: Optional[ModuleType] = None
self.t0 = time.time()
self.mod = ""
self.cfg = ""

View File

@@ -50,6 +50,8 @@ from .util import (
PARTFTPY_VER,
PY_DESC,
PYFTPD_VER,
RAM_AVAIL,
RAM_TOTAL,
SQLITE_VER,
UNPLICATIONS,
Daemon,
@@ -218,8 +220,6 @@ def init_E(EE: EnvParams) -> None:
raise Exception("could not find a writable path for config")
assert __package__ # !rm
E.pkg = sys.modules[__package__]
E.mod = os.path.dirname(os.path.realpath(__file__))
if E.mod.endswith("__init__"):
E.mod = os.path.dirname(E.mod)
@@ -686,6 +686,8 @@ def get_sects():
\033[36mxbu\033[35m executes CMD before a file upload starts
\033[36mxau\033[35m executes CMD after a file upload finishes
\033[36mxiu\033[35m executes CMD after all uploads finish and volume is idle
\033[36mxbc\033[35m executes CMD before a file copy
\033[36mxac\033[35m executes CMD after a file copy
\033[36mxbr\033[35m executes CMD before a file rename/move
\033[36mxar\033[35m executes CMD after a file rename/move
\033[36mxbd\033[35m executes CMD before a file delete
@@ -782,7 +784,7 @@ def get_sects():
dedent(
"""
specify --exp or the "exp" volflag to enable placeholder expansions
in README.md / .prologue.html / .epilogue.html
in README.md / PREADME.md / .prologue.html / .epilogue.html
--exp-md (volflag exp_md) holds the list of placeholders which can be
expanded in READMEs, and --exp-lg (volflag exp_lg) likewise for logues;
@@ -898,7 +900,7 @@ def get_sects():
dedent(
"""
the mDNS protocol is multicast-based, which means there are thousands
of fun and intersesting ways for it to break unexpectedly
of fun and interesting ways for it to break unexpectedly
things to check if it does not work at all:
@@ -1007,6 +1009,7 @@ def add_upload(ap):
ap2.add_argument("--hardlink", action="store_true", help="enable hardlink-based dedup; will fallback on symlinks when that is impossible (across filesystems) (volflag=hardlink)")
ap2.add_argument("--hardlink-only", action="store_true", help="do not fallback to symlinks when a hardlink cannot be made (volflag=hardlinkonly)")
ap2.add_argument("--no-dupe", action="store_true", help="reject duplicate files during upload; only matches within the same volume (volflag=nodupe)")
ap2.add_argument("--no-clone", action="store_true", help="do not use existing data on disk to satisfy dupe uploads; reduces server HDD reads in exchange for much more network load (volflag=noclone)")
ap2.add_argument("--no-snap", action="store_true", help="disable snapshots -- forget unfinished uploads on shutdown; don't create .hist/up2k.snap files -- abandoned/interrupted uploads must be cleaned up manually")
ap2.add_argument("--snap-wri", metavar="SEC", type=int, default=300, help="write upload state to ./hist/up2k.snap every \033[33mSEC\033[0m seconds; allows resuming incomplete uploads after a server crash")
ap2.add_argument("--snap-drop", metavar="MIN", type=float, default=1440.0, help="forget unfinished uploads after \033[33mMIN\033[0m minutes; impossible to resume them after that (360=6h, 1440=24h)")
@@ -1018,7 +1021,7 @@ def add_upload(ap):
ap2.add_argument("--sparse", metavar="MiB", type=int, default=4, help="windows-only: minimum size of incoming uploads through up2k before they are made into sparse files")
ap2.add_argument("--turbo", metavar="LVL", type=int, default=0, help="configure turbo-mode in up2k client; [\033[32m-1\033[0m] = forbidden/always-off, [\033[32m0\033[0m] = default-off and warn if enabled, [\033[32m1\033[0m] = default-off, [\033[32m2\033[0m] = on, [\033[32m3\033[0m] = on and disable datecheck")
ap2.add_argument("--u2j", metavar="JOBS", type=int, default=2, help="web-client: number of file chunks to upload in parallel; 1 or 2 is good for low-latency (same-country) connections, 4-8 for android clients, 16 for cross-atlantic (max=64)")
ap2.add_argument("--u2sz", metavar="N,N,N", type=u, default="1,64,96", help="web-client: default upload chunksize (MiB); sets \033[33mmin,default,max\033[0m in the settings gui. Each HTTP POST will aim for this size. Cloudflare max is 96. Big values are good for cross-atlantic but may increase HDD fragmentation on some FS. Disable this optimization with [\033[32m1,1,1\033[0m]")
ap2.add_argument("--u2sz", metavar="N,N,N", type=u, default="1,64,96", help="web-client: default upload chunksize (MiB); sets \033[33mmin,default,max\033[0m in the settings gui. Each HTTP POST will aim for \033[33mdefault\033[0m, and never exceed \033[33mmax\033[0m. Cloudflare max is 96. Big values are good for cross-atlantic but may increase HDD fragmentation on some FS. Disable this optimization with [\033[32m1,1,1\033[0m]")
ap2.add_argument("--u2sort", metavar="TXT", type=u, default="s", help="upload order; [\033[32ms\033[0m]=smallest-first, [\033[32mn\033[0m]=alphabetical, [\033[32mfs\033[0m]=force-s, [\033[32mfn\033[0m]=force-n -- alphabetical is a bit slower on fiber/LAN but makes it easier to eyeball if everything went fine")
ap2.add_argument("--write-uplog", action="store_true", help="write POST reports to textfiles in working-directory")
@@ -1038,7 +1041,7 @@ def add_network(ap):
else:
ap2.add_argument("--freebind", action="store_true", help="allow listening on IPs which do not yet exist, for example if the network interfaces haven't finished going up. Only makes sense for IPs other than '0.0.0.0', '127.0.0.1', '::', and '::1'. May require running as root (unless net.ipv6.ip_nonlocal_bind)")
ap2.add_argument("--s-thead", metavar="SEC", type=int, default=120, help="socket timeout (read request header)")
ap2.add_argument("--s-tbody", metavar="SEC", type=float, default=186.0, help="socket timeout (read/write request/response bodies). Use 60 on fast servers (default is extremely safe). Disable with 0 if reverse-proxied for a 2%% speed boost")
ap2.add_argument("--s-tbody", metavar="SEC", type=float, default=128.0, help="socket timeout (read/write request/response bodies). Use 60 on fast servers (default is extremely safe). Disable with 0 if reverse-proxied for a 2%% speed boost")
ap2.add_argument("--s-rd-sz", metavar="B", type=int, default=256*1024, help="socket read size in bytes (indirectly affects filesystem writes; recommendation: keep equal-to or lower-than \033[33m--iobuf\033[0m)")
ap2.add_argument("--s-wr-sz", metavar="B", type=int, default=256*1024, help="socket write size in bytes")
ap2.add_argument("--s-wr-slp", metavar="SEC", type=float, default=0.0, help="debug: socket write delay in seconds")
@@ -1088,6 +1091,7 @@ def add_auth(ap):
ap2.add_argument("--ses-db", metavar="PATH", type=u, default=ses_db, help="where to store the sessions database (if you run multiple copyparty instances, make sure they use different DBs)")
ap2.add_argument("--ses-len", metavar="CHARS", type=int, default=20, help="session key length; default is 120 bits ((20//4)*4*6)")
ap2.add_argument("--no-ses", action="store_true", help="disable sessions; use plaintext passwords in cookies")
ap2.add_argument("--ipu", metavar="CIDR=USR", type=u, action="append", help="users with IP matching \033[33mCIDR\033[0m are auto-authenticated as username \033[33mUSR\033[0m; example: [\033[32m172.16.24.0/24=dave]")
def add_chpw(ap):
@@ -1201,6 +1205,8 @@ def add_hooks(ap):
ap2.add_argument("--xbu", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m before a file upload starts")
ap2.add_argument("--xau", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m after a file upload finishes")
ap2.add_argument("--xiu", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m after all uploads finish and volume is idle")
ap2.add_argument("--xbc", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m before a file copy")
ap2.add_argument("--xac", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m after a file copy")
ap2.add_argument("--xbr", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m before a file move/rename")
ap2.add_argument("--xar", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m after a file move/rename")
ap2.add_argument("--xbd", metavar="CMD", type=u, action="append", help="execute \033[33mCMD\033[0m before a file delete")
@@ -1233,6 +1239,7 @@ def add_optouts(ap):
ap2.add_argument("--no-dav", action="store_true", help="disable webdav support")
ap2.add_argument("--no-del", action="store_true", help="disable delete operations")
ap2.add_argument("--no-mv", action="store_true", help="disable move/rename operations")
ap2.add_argument("--no-cp", action="store_true", help="disable copy operations")
ap2.add_argument("-nth", action="store_true", help="no title hostname; don't show \033[33m--name\033[0m in <title>")
ap2.add_argument("-nih", action="store_true", help="no info hostname -- don't show in UI")
ap2.add_argument("-nid", action="store_true", help="no info disk-usage -- don't show in UI")
@@ -1256,7 +1263,7 @@ def add_safety(ap):
ap2.add_argument("--no-dot-mv", action="store_true", help="disallow moving dotfiles; makes it impossible to move folders containing dotfiles")
ap2.add_argument("--no-dot-ren", action="store_true", help="disallow renaming dotfiles; makes it impossible to turn something into a dotfile")
ap2.add_argument("--no-logues", action="store_true", help="disable rendering .prologue/.epilogue.html into directory listings")
ap2.add_argument("--no-readme", action="store_true", help="disable rendering readme.md into directory listings")
ap2.add_argument("--no-readme", action="store_true", help="disable rendering readme/preadme.md into directory listings")
ap2.add_argument("--vague-403", action="store_true", help="send 404 instead of 403 (security through ambiguity, very enterprise)")
ap2.add_argument("--force-js", action="store_true", help="don't send folder listings as HTML, force clients to use the embedded json instead -- slight protection against misbehaving search engines which ignore \033[33m--no-robots\033[0m")
ap2.add_argument("--no-robots", action="store_true", help="adds http and html headers asking search engines to not index anything (volflag=norobots)")
@@ -1307,6 +1314,7 @@ def add_logging(ap):
ap2.add_argument("--log-conn", action="store_true", help="debug: print tcp-server msgs")
ap2.add_argument("--log-htp", action="store_true", help="debug: print http-server threadpool scaling")
ap2.add_argument("--ihead", metavar="HEADER", type=u, action='append', help="print request \033[33mHEADER\033[0m; [\033[32m*\033[0m]=all")
ap2.add_argument("--ohead", metavar="HEADER", type=u, action='append', help="print response \033[33mHEADER\033[0m; [\033[32m*\033[0m]=all")
ap2.add_argument("--lf-url", metavar="RE", type=u, default=r"^/\.cpr/|\?th=[wj]$|/\.(_|ql_|DS_Store$|localized$)", help="dont log URLs matching regex \033[33mRE\033[0m")
@@ -1315,9 +1323,12 @@ def add_admin(ap):
ap2.add_argument("--no-reload", action="store_true", help="disable ?reload=cfg (reload users/volumes/volflags from config file)")
ap2.add_argument("--no-rescan", action="store_true", help="disable ?scan (volume reindexing)")
ap2.add_argument("--no-stack", action="store_true", help="disable ?stack (list all stacks)")
ap2.add_argument("--dl-list", metavar="LVL", type=int, default=2, help="who can see active downloads in the controlpanel? [\033[32m0\033[0m]=nobody, [\033[32m1\033[0m]=admins, [\033[32m2\033[0m]=everyone")
def add_thumbnail(ap):
th_ram = (RAM_AVAIL or RAM_TOTAL or 9) * 0.6
th_ram = int(max(min(th_ram, 6), 1) * 10) / 10
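# e.g. 4.0 GiB available -> 2.4 GiB; clamped into [1.0, 6.0], truncated to one decimal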
ap2 = ap.add_argument_group('thumbnail options')
ap2.add_argument("--no-thumb", action="store_true", help="disable all thumbnails (volflag=dthumb)")
ap2.add_argument("--no-vthumb", action="store_true", help="disable video thumbnails (volflag=dvthumb)")
@@ -1325,7 +1336,7 @@ def add_thumbnail(ap):
ap2.add_argument("--th-size", metavar="WxH", default="320x256", help="thumbnail res (volflag=thsize)")
ap2.add_argument("--th-mt", metavar="CORES", type=int, default=CORES, help="num cpu cores to use for generating thumbnails")
ap2.add_argument("--th-convt", metavar="SEC", type=float, default=60.0, help="conversion timeout in seconds (volflag=convt)")
ap2.add_argument("--th-ram-max", metavar="GB", type=float, default=6.0, help="max memory usage (GiB) permitted by thumbnailer; not very accurate")
ap2.add_argument("--th-ram-max", metavar="GB", type=float, default=th_ram, help="max memory usage (GiB) permitted by thumbnailer; not very accurate")
ap2.add_argument("--th-crop", metavar="TXT", type=u, default="y", help="crop thumbnails to 4:3 or keep dynamic height; client can override in UI unless force. [\033[32my\033[0m]=crop, [\033[32mn\033[0m]=nocrop, [\033[32mfy\033[0m]=force-y, [\033[32mfn\033[0m]=force-n (volflag=crop)")
ap2.add_argument("--th-x3", metavar="TXT", type=u, default="n", help="show thumbs at 3x resolution; client can override in UI unless force. [\033[32my\033[0m]=yes, [\033[32mn\033[0m]=no, [\033[32mfy\033[0m]=force-yes, [\033[32mfn\033[0m]=force-no (volflag=th3x)")
ap2.add_argument("--th-dec", metavar="LIBS", default="vips,pil,ff", help="image decoders, in order of preference")
@@ -1357,6 +1368,14 @@ def add_transcoding(ap):
ap2.add_argument("--ac-maxage", metavar="SEC", type=int, default=86400, help="delete cached transcode output after \033[33mSEC\033[0m seconds")
def add_rss(ap):
ap2 = ap.add_argument_group('RSS options')
ap2.add_argument("--rss", action="store_true", help="enable RSS output (experimental)")
ap2.add_argument("--rss-nf", metavar="HITS", type=int, default=250, help="default number of files to return (url-param 'nf')")
ap2.add_argument("--rss-fext", metavar="E,E", type=u, default="", help="default list of file extensions to include (url-param 'fext'); blank=all")
ap2.add_argument("--rss-sort", metavar="ORD", type=u, default="m", help="default sort order (url-param 'sort'); [\033[32mm\033[0m]=last-modified [\033[32mu\033[0m]=upload-time [\033[32mn\033[0m]=filename [\033[32ms\033[0m]=filesize; Uppercase=oldest-first. Note that upload-time is 0 for non-uploaded files")
def add_db_general(ap, hcores):
noidx = APPLESAN_TXT if MACOS else ""
ap2 = ap.add_argument_group('general db options')
@@ -1452,9 +1471,10 @@ def add_ui(ap, retry):
ap2.add_argument("--pb-url", metavar="URL", type=u, default="https://github.com/9001/copyparty", help="powered-by link; disable with \033[33m-np\033[0m")
ap2.add_argument("--ver", action="store_true", help="show version on the control panel (incompatible with \033[33m-nb\033[0m)")
ap2.add_argument("--k304", metavar="NUM", type=int, default=0, help="configure the option to enable/disable k304 on the controlpanel (workaround for buggy reverse-proxies); [\033[32m0\033[0m] = hidden and default-off, [\033[32m1\033[0m] = visible and default-off, [\033[32m2\033[0m] = visible and default-on")
ap2.add_argument("--no304", metavar="NUM", type=int, default=0, help="configure the option to enable/disable no304 on the controlpanel (workaround for buggy caching in browsers); [\033[32m0\033[0m] = hidden and default-off, [\033[32m1\033[0m] = visible and default-off, [\033[32m2\033[0m] = visible and default-on")
ap2.add_argument("--md-sbf", metavar="FLAGS", type=u, default="downloads forms popups scripts top-navigation-by-user-activation", help="list of capabilities to ALLOW for README.md docs (volflag=md_sbf); see https://developer.mozilla.org/en-US/docs/Web/HTML/Element/iframe#attr-sandbox")
ap2.add_argument("--lg-sbf", metavar="FLAGS", type=u, default="downloads forms popups scripts top-navigation-by-user-activation", help="list of capabilities to ALLOW for prologue/epilogue docs (volflag=lg_sbf)")
ap2.add_argument("--no-sb-md", action="store_true", help="don't sandbox README.md documents (volflags: no_sb_md | sb_md)")
ap2.add_argument("--no-sb-md", action="store_true", help="don't sandbox README/PREADME.md documents (volflags: no_sb_md | sb_md)")
ap2.add_argument("--no-sb-lg", action="store_true", help="don't sandbox prologue/epilogue docs (volflags: no_sb_lg | sb_lg); enables non-js support")
@@ -1478,6 +1498,7 @@ def add_debug(ap):
ap2.add_argument("--bak-flips", action="store_true", help="[up2k] if a client uploads a bitflipped/corrupted chunk, store a copy according to \033[33m--bf-nc\033[0m and \033[33m--bf-dir\033[0m")
ap2.add_argument("--bf-nc", metavar="NUM", type=int, default=200, help="bak-flips: stop if there's more than \033[33mNUM\033[0m files at \033[33m--kf-dir\033[0m already; default: 6.3 GiB max (200*32M)")
ap2.add_argument("--bf-dir", metavar="PATH", type=u, default="bf", help="bak-flips: store corrupted chunks at \033[33mPATH\033[0m; default: folder named 'bf' wherever copyparty was started")
ap2.add_argument("--bf-log", metavar="PATH", type=u, default="", help="bak-flips: log corruption info to a textfile at \033[33mPATH\033[0m")
# fmt: on
@@ -1525,6 +1546,7 @@ def run_argparse(
add_db_metadata(ap)
add_thumbnail(ap)
add_transcoding(ap)
add_rss(ap)
add_ftp(ap)
add_webdav(ap)
add_tftp(ap)
@@ -1747,6 +1769,9 @@ def main(argv: Optional[list[str]] = None) -> None:
if al.ihead:
al.ihead = [x.lower() for x in al.ihead]
if al.ohead:
al.ohead = [x.lower() for x in al.ohead]
if HAVE_SSL:
if al.ssl_ver:
configure_ssl_ver(al)

View File

@@ -1,8 +1,8 @@
# coding: utf-8
VERSION = (1, 15, 4)
CODENAME = "fill the drives"
BUILD_DT = (2024, 10, 4)
VERSION = (1, 16, 0)
CODENAME = "COPYparty"
BUILD_DT = (2024, 11, 10)
S_VERSION = ".".join(map(str, VERSION))
S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)

View File

@@ -66,6 +66,7 @@ if PY2:
LEELOO_DALLAS = "leeloo_dallas"
SEE_LOG = "see log for details"
SEESLOG = " (see serverlog for details)"
SSEELOG = " ({})".format(SEE_LOG)
BAD_CFG = "invalid config; {}".format(SEE_LOG)
SBADCFG = " ({})".format(BAD_CFG)
@@ -164,8 +165,11 @@ class Lim(object):
self.chk_rem(rem)
if sz != -1:
self.chk_sz(sz)
self.chk_vsz(broker, ptop, sz, volgetter)
self.chk_df(abspath, sz) # side effects; keep last-ish
else:
sz = 0
self.chk_vsz(broker, ptop, sz, volgetter)
self.chk_df(abspath, sz) # side effects; keep last-ish
ap2, vp2 = self.rot(abspath)
if abspath == ap2:
@@ -205,7 +209,15 @@ class Lim(object):
if self.dft < time.time():
self.dft = int(time.time()) + 300
self.dfv = get_df(abspath)[0] or 0
df, du, err = get_df(abspath, True)
if err:
t = "failed to read disk space usage for [%s]: %s"
self.log(t % (abspath, err), 3)
self.dfv = 0xAAAAAAAAA # 42.6 GiB
else:
self.dfv = df or 0
for j in list(self.reg.values()) if self.reg else []:
self.dfv -= int(j["size"] / (len(j["hash"]) or 999) * len(j["need"]))
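# i.e. subtract each unfinished upload's not-yet-received bytes:
# size * (remaining chunks / total chunks)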
@@ -540,15 +552,14 @@ class VFS(object):
return self._get_dbv(vrem)
shv, srem = src
return shv, vjoin(srem, vrem)
return shv._get_dbv(vjoin(srem, vrem))
def _get_dbv(self, vrem: str) -> tuple["VFS", str]:
dbv = self.dbv
if not dbv:
return self, vrem
tv = [self.vpath[len(dbv.vpath) :].lstrip("/"), vrem]
vrem = "/".join([x for x in tv if x])
vrem = vjoin(self.vpath[len(dbv.vpath) :].lstrip("/"), vrem)
return dbv, vrem
def canonical(self, rem: str, resolve: bool = True) -> str:
@@ -580,10 +591,11 @@ class VFS(object):
scandir: bool,
permsets: list[list[bool]],
lstat: bool = False,
throw: bool = False,
) -> tuple[str, list[tuple[str, os.stat_result]], dict[str, "VFS"]]:
"""replaces _ls for certain shares (single-file, or file selection)"""
vn, rem = self.shr_src # type: ignore
abspath, real, _ = vn.ls(rem, "\n", scandir, permsets, lstat)
abspath, real, _ = vn.ls(rem, "\n", scandir, permsets, lstat, throw)
real = [x for x in real if os.path.basename(x[0]) in self.shr_files]
return abspath, real, {}
@@ -594,11 +606,12 @@ class VFS(object):
scandir: bool,
permsets: list[list[bool]],
lstat: bool = False,
throw: bool = False,
) -> tuple[str, list[tuple[str, os.stat_result]], dict[str, "VFS"]]:
"""return user-readable [fsdir,real,virt] items at vpath"""
virt_vis = {} # nodes readable by user
abspath = self.canonical(rem)
real = list(statdir(self.log, scandir, lstat, abspath))
real = list(statdir(self.log, scandir, lstat, abspath, throw))
real.sort()
if not rem:
# no vfs nodes in the list of real inodes
@@ -660,6 +673,10 @@ class VFS(object):
"""
recursively yields from ./rem;
rel is a unix-style user-defined vpath (not vfs-related)
NOTE: don't invoke this function from a dbv; subvols are only
descended into when rem is blank, due to the `if not rem:` in _ls,
whose intent is to prevent unintended access to subvols
"""
fsroot, vfs_ls, vfs_virt = self.ls(rem, uname, scandir, permsets, lstat=lstat)
@@ -900,7 +917,7 @@ class AuthSrv(object):
self._reload()
return True
broker.ask("_reload_blocking", False).get()
broker.ask("reload", False, True).get()
return True
def _map_volume_idp(
@@ -924,7 +941,7 @@ class AuthSrv(object):
for un, gn in un_gn:
# if ap/vp has a user/group placeholder, make sure to keep
# track so the same user/gruop is mapped when setting perms;
# track so the same user/group is mapped when setting perms;
# otherwise clear un/gn to indicate it's a regular volume
src1 = src0.replace("${u}", un or "\n")
@@ -1370,7 +1387,7 @@ class AuthSrv(object):
flags[name] = True
return
zs = "mtp on403 on404 xbu xau xiu xbr xar xbd xad xm xban"
zs = "mtp on403 on404 xbu xau xiu xbc xac xbr xar xbd xad xm xban"
if name not in zs.split():
if value is True:
t = "└─add volflag [{}] = {} ({})"
@@ -1925,7 +1942,7 @@ class AuthSrv(object):
vol.flags[k] = odfusion(getattr(self.args, k), vol.flags[k])
# append additive args from argv to volflags
hooks = "xbu xau xiu xbr xar xbd xad xm xban".split()
hooks = "xbu xau xiu xbc xac xbr xar xbd xad xm xban".split()
for name in "mtp on404 on403".split() + hooks:
self._read_volflag(vol.flags, name, getattr(self.args, name), True)
@@ -2376,7 +2393,7 @@ class AuthSrv(object):
self._reload()
return True, "new password OK"
broker.ask("_reload_blocking", False, False).get()
broker.ask("reload", False, False).get()
return True, "new password OK"
def setup_chpw(self, acct: dict[str, str]) -> None:
@@ -2628,7 +2645,7 @@ class AuthSrv(object):
]
csv = set("i p th_covers zm_on zm_off zs_on zs_off".split())
zs = "c ihead mtm mtp on403 on404 xad xar xau xiu xban xbd xbr xbu xm"
zs = "c ihead ohead mtm mtp on403 on404 xac xad xar xau xiu xban xbc xbd xbr xbu xm"
lst = set(zs.split())
askip = set("a v c vc cgen exp_lg exp_md theme".split())
fskip = set("exp_lg exp_md mv_re_r mv_re_t rm_re_r rm_re_t".split())

View File

@@ -43,6 +43,9 @@ class BrokerMp(object):
self.procs = []
self.mutex = threading.Lock()
self.retpend: dict[int, Any] = {}
self.retpend_mutex = threading.Lock()
self.num_workers = self.args.j or CORES
self.log("broker", "booting {} subprocesses".format(self.num_workers))
for n in range(1, self.num_workers + 1):
@@ -54,6 +57,8 @@ class BrokerMp(object):
self.procs.append(proc)
proc.start()
Daemon(self.periodic, "mp-periodic")
def shutdown(self) -> None:
self.log("broker", "shutting down")
for n, proc in enumerate(self.procs):
@@ -90,8 +95,10 @@ class BrokerMp(object):
self.log(*args)
elif dest == "retq":
# response from previous ipc call
raise Exception("invalid broker_mp usage")
with self.retpend_mutex:
retq = self.retpend.pop(retq_id)
retq.put(args[0])
else:
# new ipc invoking managed service in hub
@@ -109,7 +116,6 @@ class BrokerMp(object):
proc.q_pend.put((retq_id, "retq", rv))
def ask(self, dest: str, *args: Any) -> Union[ExceptionalQueue, NotExQueue]:
# new non-ipc invoking managed service in hub
obj = self.hub
for node in dest.split("."):
@@ -121,17 +127,30 @@ class BrokerMp(object):
retq.put(rv)
return retq
def wask(self, dest: str, *args: Any) -> list[Union[ExceptionalQueue, NotExQueue]]:
# call from hub to workers
ret = []
for p in self.procs:
retq = ExceptionalQueue(1)
retq_id = id(retq)
with self.retpend_mutex:
self.retpend[retq_id] = retq
p.q_pend.put((retq_id, dest, list(args)))
ret.append(retq)
return ret
def say(self, dest: str, *args: Any) -> None:
"""
send message to non-hub component in other process,
returns a Queue object which eventually contains the response if want_retval
(not-impl here since nothing uses it yet)
"""
if dest == "listen":
if dest == "httpsrv.listen":
for p in self.procs:
p.q_pend.put((0, dest, [args[0], len(self.procs)]))
elif dest == "set_netdevs":
elif dest == "httpsrv.set_netdevs":
for p in self.procs:
p.q_pend.put((0, dest, list(args)))
@@ -140,3 +159,19 @@ class BrokerMp(object):
else:
raise Exception("what is " + str(dest))
def periodic(self) -> None:
while True:
time.sleep(1)
tdli = {}
tdls = {}
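# merge download-info and download-state from every worker process,
# then broadcast the combined totals back to all of them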
qs = self.wask("httpsrv.read_dls")
for q in qs:
qr = q.get()
dli, dls = qr
tdli.update(dli)
tdls.update(dls)
tdl = (tdli, tdls)
for p in self.procs:
p.q_pend.put((0, "httpsrv.write_dls", tdl))

View File

@@ -82,37 +82,38 @@ class MpWorker(BrokerCli):
while True:
retq_id, dest, args = self.q_pend.get()
# self.logw("work: [{}]".format(d[0]))
if dest == "retq":
# response from previous ipc call
with self.retpend_mutex:
retq = self.retpend.pop(retq_id)
retq.put(args)
continue
if dest == "shutdown":
self.httpsrv.shutdown()
self.logw("ok bye")
sys.exit(0)
return
elif dest == "reload":
if dest == "reload":
self.logw("mpw.asrv reloading")
self.asrv.reload()
self.logw("mpw.asrv reloaded")
continue
elif dest == "reload_sessions":
if dest == "reload_sessions":
with self.asrv.mutex:
self.asrv.load_sessions()
continue
elif dest == "listen":
self.httpsrv.listen(args[0], args[1])
obj = self
for node in dest.split("."):
obj = getattr(obj, node)
elif dest == "set_netdevs":
self.httpsrv.set_netdevs(args[0])
elif dest == "retq":
# response from previous ipc call
with self.retpend_mutex:
retq = self.retpend.pop(retq_id)
retq.put(args)
else:
raise Exception("what is " + str(dest))
rv = obj(*args) # type: ignore
if retq_id:
self.say("retq", rv, retq_id=retq_id)
def ask(self, dest: str, *args: Any) -> Union[ExceptionalQueue, NotExQueue]:
retq = ExceptionalQueue(1)
@@ -123,5 +124,5 @@ class MpWorker(BrokerCli):
self.q_yield.put((retq_id, dest, list(args)))
return retq
def say(self, dest: str, *args: Any) -> None:
self.q_yield.put((0, dest, list(args)))
def say(self, dest: str, *args: Any, retq_id=0) -> None:
self.q_yield.put((retq_id, dest, list(args)))

View File

@@ -53,11 +53,11 @@ class BrokerThr(BrokerCli):
return NotExQueue(obj(*args)) # type: ignore
def say(self, dest: str, *args: Any) -> None:
if dest == "listen":
if dest == "httpsrv.listen":
self.httpsrv.listen(args[0], 1)
return
if dest == "set_netdevs":
if dest == "httpsrv.set_netdevs":
self.httpsrv.set_netdevs(args[0])
return

View File

@@ -13,6 +13,7 @@ def vf_bmap() -> dict[str, str]:
"dav_rt": "davrt",
"ed": "dots",
"hardlink_only": "hardlinkonly",
"no_clone": "noclone",
"no_dirsz": "nodirsz",
"no_dupe": "nodupe",
"no_forget": "noforget",
@@ -45,6 +46,7 @@ def vf_bmap() -> dict[str, str]:
"og_no_head",
"og_s_title",
"rand",
"rss",
"xdev",
"xlink",
"xvol",
@@ -101,10 +103,12 @@ def vf_cmap() -> dict[str, str]:
"mte",
"mth",
"mtp",
"xac",
"xad",
"xar",
"xau",
"xban",
"xbc",
"xbd",
"xbr",
"xbu",
@@ -135,7 +139,8 @@ flagcats = {
"hardlink": "enable hardlink-based file deduplication,\nwith fallback on symlinks when that is impossible",
"hardlinkonly": "dedup with hardlink only, never symlink;\nmake a full copy if hardlink is impossible",
"safededup": "verify on-disk data before using it for dedup",
"nodupe": "rejects existing files (instead of symlinking them)",
"noclone": "take dupe data from clients, even if available on HDD",
"nodupe": "rejects existing files (instead of linking/cloning them)",
"sparse": "force use of sparse files, mainly for s3-backed storage",
"daw": "enable full WebDAV write support (dangerous);\nPUT-operations will now \033[1;31mOVERWRITE\033[0;35m existing files",
"nosub": "forces all uploads into the top folder of the vfs",
@@ -209,6 +214,8 @@ flagcats = {
"xbu=CMD": "execute CMD before a file upload starts",
"xau=CMD": "execute CMD after a file upload finishes",
"xiu=CMD": "execute CMD after all uploads finish and volume is idle",
"xbc=CMD": "execute CMD before a file copy",
"xac=CMD": "execute CMD after a file copy",
"xbr=CMD": "execute CMD before a file rename/move",
"xar=CMD": "execute CMD after a file rename/move",
"xbd=CMD": "execute CMD before a file delete",

View File

@@ -76,6 +76,7 @@ class FtpAuth(DummyAuthorizer):
else:
raise AuthenticationFailed("banned")
args = self.hub.args
asrv = self.hub.asrv
uname = "*"
if username != "anonymous":
@@ -86,6 +87,9 @@ class FtpAuth(DummyAuthorizer):
uname = zs
break
if args.ipu and uname == "*":
uname = args.ipu_iu[args.ipu_nm.map(ip)]
if not uname or not (asrv.vfs.aread.get(uname) or asrv.vfs.awrite.get(uname)):
g = self.hub.gpwd
if g.lim:
@@ -292,6 +296,7 @@ class FtpFs(AbstractedFS):
self.uname,
not self.args.no_scandir,
[[True, False], [False, True]],
throw=True,
)
vfs_ls = [x[0] for x in vfs_ls1]
vfs_ls.extend(vfs_virt.keys())

File diff suppressed because it is too large

View File

@@ -59,6 +59,8 @@ class HttpConn(object):
self.asrv: AuthSrv = hsrv.asrv # mypy404
self.u2fh: Util.FHC = hsrv.u2fh # mypy404
self.pipes: Util.CachedDict = hsrv.pipes # mypy404
self.ipu_iu: Optional[dict[str, str]] = hsrv.ipu_iu
self.ipu_nm: Optional[NetMap] = hsrv.ipu_nm
self.ipa_nm: Optional[NetMap] = hsrv.ipa_nm
self.xff_nm: Optional[NetMap] = hsrv.xff_nm
self.xff_lan: NetMap = hsrv.xff_lan # type: ignore

View File

@@ -1,6 +1,7 @@
# coding: utf-8
from __future__ import print_function, unicode_literals
import hashlib
import math
import os
import re
@@ -69,6 +70,7 @@ from .util import (
build_netmap,
has_resource,
ipnorm,
load_ipu,
load_resource,
min_ex,
shut_socket,
@@ -79,6 +81,7 @@ from .util import (
)
if TYPE_CHECKING:
from .authsrv import VFS
from .broker_util import BrokerCli
from .ssdp import SSDPr
@@ -128,6 +131,12 @@ class HttpSrv(object):
self.bans: dict[str, int] = {}
self.aclose: dict[str, int] = {}
dli: dict[str, tuple[float, int, "VFS", str, str]] = {} # info
dls: dict[str, tuple[float, int]] = {} # state
self.dli = self.tdli = dli
self.dls = self.tdls = dls
self.iiam = '<img src="%s.cpr/iiam.gif" />' % (self.args.SRS,)
self.bound: set[tuple[str, int]] = set()
self.name = "hsrv" + nsuf
self.mutex = threading.Lock()
@@ -143,6 +152,7 @@ class HttpSrv(object):
self.t_periodic: Optional[threading.Thread] = None
self.u2fh = FHC()
self.u2sc: dict[str, tuple[int, "hashlib._Hash"]] = {}
self.pipes = CachedDict(0.2)
self.metrics = Metrics(self)
self.nreq = 0
@@ -175,6 +185,11 @@ class HttpSrv(object):
self.j2 = {x: env.get_template(x + ".html") for x in jn}
self.prism = has_resource(self.E, "web/deps/prism.js.gz")
if self.args.ipu:
self.ipu_iu, self.ipu_nm = load_ipu(self.log, self.args.ipu)
else:
self.ipu_iu = self.ipu_nm = None
self.ipa_nm = build_netmap(self.args.ipa)
self.xff_nm = build_netmap(self.args.xff_src)
self.xff_lan = build_netmap("lan")
@@ -197,6 +212,9 @@ class HttpSrv(object):
self.start_threads(4)
if nid:
self.tdli = {}
self.tdls = {}
if self.args.stackmon:
start_stackmon(self.args.stackmon, nid)
@@ -571,3 +589,32 @@ class HttpSrv(object):
ident += "a"
self.u2idx_free[ident] = u2idx
def read_dls(
self,
) -> tuple[
dict[str, tuple[float, int, str, str, str]], dict[str, tuple[float, int]]
]:
"""
mp-broker asking for local dl-info + dl-state;
reduce overhead by sending just the vfs vpath
"""
dli = {k: (a, b, c.vpath, d, e) for k, (a, b, c, d, e) in self.dli.items()}
return (dli, self.dls)
def write_dls(
self,
sdli: dict[str, tuple[float, int, str, str, str]],
dls: dict[str, tuple[float, int]],
) -> None:
"""
mp-broker pushing total dl-info + dl-state;
swap out the vfs vpath with the vfs node
"""
dli: dict[str, tuple[float, int, "VFS", str, str]] = {}
for k, (a, b, c, d, e) in sdli.items():
vn = self.asrv.vfs.all_vols[c]
dli[k] = (a, b, vn, d, e)
self.tdli = dli
self.tdls = dls

View File

@@ -72,6 +72,9 @@ class Metrics(object):
v = "{:.3f}".format(self.hsrv.t0)
addug("cpp_boot_unixtime", "seconds", v, t)
t = "number of active downloads"
addg("cpp_active_dl", str(len(self.hsrv.tdls)), t)
t = "number of open http(s) client connections"
addg("cpp_http_conns", str(self.hsrv.ncli), t)
@@ -128,7 +131,7 @@ class Metrics(object):
addbh("cpp_disk_size_bytes", "total HDD size of volume")
addbh("cpp_disk_free_bytes", "free HDD space in volume")
for vpath, vol in allvols:
free, total = get_df(vol.realpath)
free, total, _ = get_df(vol.realpath, False)
if free is None or total is None:
continue

View File

@@ -473,7 +473,7 @@ class MTag(object):
sv = str(zv).split("/")[0].strip().lstrip("0")
ret[sk] = sv or 0
# normalize key notation to rkeobo
# normalize key notation to rekobo
okey = ret.get("key")
if okey:
key = str(okey).replace(" ", "").replace("maj", "").replace("min", "m")

View File

@@ -84,7 +84,7 @@ class SSDPr(object):
name = self.args.doctitle
zs = zs.strip().format(c(ubase), c(url), c(name), c(self.args.zsid))
hc.reply(zs.encode("utf-8", "replace"))
return False # close connectino
return False # close connection
class SSDPd(MCast):

View File

@@ -60,6 +60,7 @@ from .util import (
alltrace,
ansi_re,
build_netmap,
load_ipu,
min_ex,
mp,
odfusion,
@@ -111,7 +112,7 @@ class SvcHub(object):
self.stopping = False
self.stopped = False
self.reload_req = False
self.reloading = 0
self.reload_mutex = threading.Lock()
self.stop_cond = threading.Condition()
self.nsigs = 3
self.retcode = 0
@@ -210,6 +211,15 @@ class SvcHub(object):
t = "WARNING: --s-rd-sz (%d) is larger than --iobuf (%d); this may lead to reduced performance"
self.log("root", t % (args.s_rd_sz, args.iobuf), 3)
zs = ""
if args.th_ram_max < 0.22:
zs = "generate thumbnails"
elif args.th_ram_max < 1:
zs = "generate audio waveforms or spectrograms"
if zs:
t = "WARNING: --th-ram-max is very small (%.2f GiB); will not be able to %s"
self.log("root", t % (args.th_ram_max, zs), 3)
if args.chpw and args.idp_h_usr:
t = "ERROR: user-changeable passwords is incompatible with IdP/identity-providers; you must disable either --chpw or --idp-h-usr"
self.log("root", t, 1)
@@ -221,9 +231,15 @@ class SvcHub(object):
noch.update([x for x in zsl if x])
args.chpw_no = noch
if args.ipu:
iu, nm = load_ipu(self.log, args.ipu, True)
setattr(args, "ipu_iu", iu)
setattr(args, "ipu_nm", nm)
if not self.args.no_ses:
self.setup_session_db()
args.shr1 = ""
if args.shr:
self.setup_share_db()
@@ -372,6 +388,14 @@ class SvcHub(object):
self.broker = Broker(self)
# create netmaps early to avoid firewall gaps,
# but the mutex blocks multiprocessing startup
for zs in "ipu_iu ftp_ipa_nm tftp_ipa_nm".split():
try:
getattr(args, zs).mutex = threading.Lock()
except:
pass
def setup_session_db(self) -> None:
if not HAVE_SQLITE3:
self.args.no_ses = True
@@ -446,6 +470,7 @@ class SvcHub(object):
raise Exception(t)
al.shr = "/%s/" % (al.shr,)
al.shr1 = al.shr[1:]
create = True
modified = False
@@ -755,8 +780,8 @@ class SvcHub(object):
al.idp_h_grp = al.idp_h_grp.lower()
al.idp_h_key = al.idp_h_key.lower()
al.ftp_ipa_nm = build_netmap(al.ftp_ipa or al.ipa)
al.tftp_ipa_nm = build_netmap(al.tftp_ipa or al.ipa)
al.ftp_ipa_nm = build_netmap(al.ftp_ipa or al.ipa, True)
al.tftp_ipa_nm = build_netmap(al.tftp_ipa or al.ipa, True)
mte = ODict.fromkeys(DEF_MTE.split(","), True)
al.mte = odfusion(mte, al.mte)
@@ -803,6 +828,24 @@ class SvcHub(object):
if len(al.tcolor) == 3: # fc5 => ffcc55
al.tcolor = "".join([x * 2 for x in al.tcolor])
zs = al.u2sz
zsl = zs.split(",")
if len(zsl) not in (1, 3):
t = "invalid --u2sz; must be either one number, or a comma-separated list of three numbers (min,default,max)"
raise Exception(t)
if len(zsl) < 3:
zsl = ["1", zs, zs]
zi2 = 1
for zs in zsl:
zi = int(zs)
# arbitrary constraint (anything above 2 GiB is probably unintended)
if zi < 1 or zi > 2047:
raise Exception("invalid --u2sz; minimum is 1, max is 2047")
if zi < zi2:
raise Exception("invalid --u2sz; values must be equal or ascending")
zi2 = zi
al.u2sz = ",".join(zsl)
return True
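
In effect, the validation above accepts either a single chunksize or an ascending min,default,max triple, with every value between 1 and 2047 MiB; for example:

    # values the --u2sz parsing above would accept / reject
    ok = [
        "64",         # expanded to "1,64,64"
        "1,64,96",    # explicit min,default,max
        "16,16,16",   # equal values are allowed
    ]
    bad = [
        "1,64",       # must be one number or three
        "0,64,96",    # below the minimum of 1
        "96,64,1",    # values must be equal or ascending
        "1,64,4096",  # above the 2047 cap
    ]
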
def _ipa2re(self, txt) -> Optional[re.Pattern]:
@@ -970,41 +1013,18 @@ class SvcHub(object):
except:
self.log("root", "ssdp startup failed;\n" + min_ex(), 3)
def reload(self) -> str:
with self.up2k.mutex:
if self.reloading:
return "cannot reload; already in progress"
self.reloading = 1
Daemon(self._reload, "reloading")
return "reload initiated"
def _reload(self, rescan_all_vols: bool = True, up2k: bool = True) -> None:
with self.up2k.mutex:
if self.reloading != 1:
return
self.reloading = 2
def reload(self, rescan_all_vols: bool, up2k: bool) -> str:
t = "config has been reloaded"
with self.reload_mutex:
self.log("root", "reloading config")
self.asrv.reload(9 if up2k else 4)
if up2k:
self.up2k.reload(rescan_all_vols)
t += "; volumes are now reinitializing"
else:
self.log("root", "reload done")
self.broker.reload()
self.reloading = 0
def _reload_blocking(self, rescan_all_vols: bool = True, up2k: bool = True) -> None:
while True:
with self.up2k.mutex:
if self.reloading < 2:
self.reloading = 1
break
time.sleep(0.05)
# try to handle multiple pending IdP reloads at once:
time.sleep(0.2)
self._reload(rescan_all_vols=rescan_all_vols, up2k=up2k)
return t
def _reload_sessions(self) -> None:
with self.asrv.mutex:
@@ -1018,7 +1038,7 @@ class SvcHub(object):
if self.reload_req:
self.reload_req = False
self.reload()
self.reload(True, True)
self.shutdown()

View File

@@ -100,7 +100,7 @@ def gen_hdr(
# spec says to put zeros when !crc if bit3 (streaming)
# however infozip does actual sz and it even works on winxp
# (same reasning for z64 extradata later)
# (same reasoning for z64 extradata later)
vsz = 0xFFFFFFFF if z64 else sz
ret += spack(b"<LL", vsz, vsz)

View File

@@ -95,7 +95,7 @@ class TcpSrv(object):
continue
# binding 0.0.0.0 after :: fails on dualstack
# but is necessary on non-dualstakc
# but is necessary on non-dualstack
if successful_binds:
continue
@@ -371,7 +371,7 @@ class TcpSrv(object):
if self.args.q:
print(msg)
self.hub.broker.say("listen", srv)
self.hub.broker.say("httpsrv.listen", srv)
self.srv = srvs
self.bound = bound
@@ -379,7 +379,7 @@ class TcpSrv(object):
self._distribute_netdevs()
def _distribute_netdevs(self):
self.hub.broker.say("set_netdevs", self.netdevs)
self.hub.broker.say("httpsrv.set_netdevs", self.netdevs)
self.hub.start_zeroconf()
gencert(self.log, self.args, self.netdevs)
self.hub.restart_ftpd()

View File

@@ -269,6 +269,7 @@ class Tftpd(object):
"*",
not self.args.no_scandir,
[[True, False]],
throw=True,
)
dnames = set([x[0] for x in vfs_ls if stat.S_ISDIR(x[1].st_mode)])
dirs1 = [(v.st_mtime, v.st_size, k + "/") for k, v in vfs_ls if k in dnames]

View File

@@ -20,7 +20,6 @@ from .util import (
FFMPEG_URL,
Cooldown,
Daemon,
Pebkac,
afsenc,
fsenc,
min_ex,
@@ -164,6 +163,7 @@ class ThumbSrv(object):
self.ram: dict[str, float] = {}
self.memcond = threading.Condition(self.mutex)
self.stopping = False
self.rm_nullthumbs = True # forget failed conversions on startup
self.nthr = max(1, self.args.th_mt)
self.q: Queue[Optional[tuple[str, str, str, VFS]]] = Queue(self.nthr * 4)
@@ -862,7 +862,6 @@ class ThumbSrv(object):
def cleaner(self) -> None:
interval = self.args.th_clean
while True:
time.sleep(interval)
ndirs = 0
for vol, histpath in self.asrv.vfs.histtab.items():
if histpath.startswith(vol):
@@ -876,6 +875,8 @@ class ThumbSrv(object):
self.log("\033[Jcln err in %s: %r" % (histpath, ex), 3)
self.log("\033[Jcln ok; rm {} dirs".format(ndirs))
self.rm_nullthumbs = False
time.sleep(interval)
def clean(self, histpath: str) -> int:
ret = 0
@@ -896,7 +897,9 @@ class ThumbSrv(object):
prev_b64 = None
prev_fp = ""
try:
t1 = statdir(self.log_func, not self.args.no_scandir, False, thumbpath)
t1 = statdir(
self.log_func, not self.args.no_scandir, False, thumbpath, False
)
ents = sorted(list(t1))
except:
return 0
@@ -937,6 +940,10 @@ class ThumbSrv(object):
continue
if self.rm_nullthumbs and not inf.st_size:
bos.unlink(fp)
continue
if b64 == prev_b64:
self.log("rm replaced [{}]".format(fp))
bos.unlink(prev_fp)

View File

@@ -95,7 +95,7 @@ class U2idx(object):
uv: list[Union[str, int]] = [wark[:16], wark]
try:
return self.run_query(uname, vols, uq, uv, False, 99999)[0]
return self.run_query(uname, vols, uq, uv, False, True, 99999)[0]
except:
raise Pebkac(500, min_ex())
@@ -301,7 +301,7 @@ class U2idx(object):
q += " lower({}) {} ? ) ".format(field, oper)
try:
return self.run_query(uname, vols, q, va, have_mt, lim)
return self.run_query(uname, vols, q, va, have_mt, True, lim)
except Exception as ex:
raise Pebkac(500, repr(ex))
@@ -312,6 +312,7 @@ class U2idx(object):
uq: str,
uv: list[Union[str, int]],
have_mt: bool,
sort: bool,
lim: int,
) -> tuple[list[dict[str, Any]], list[str], bool]:
if self.args.srch_dbg:
@@ -458,7 +459,8 @@ class U2idx(object):
done_flag.append(True)
self.active_id = ""
ret.sort(key=itemgetter("rp"))
if sort:
ret.sort(key=itemgetter("rp"))
return ret, list(taglist.keys()), lim < 0 and not clamped

File diff suppressed because it is too large

View File

@@ -213,6 +213,9 @@ except:
ansi_re = re.compile("\033\\[[^mK]*[mK]")
BOS_SEP = ("%s" % (os.sep,)).encode("ascii")
surrogateescape.register_surrogateescape()
if WINDOWS and PY2:
FS_ENCODING = "utf-8"
@@ -433,6 +436,27 @@ UNHUMANIZE_UNITS = {
VF_CAREFUL = {"mv_re_t": 5, "rm_re_t": 5, "mv_re_r": 0.1, "rm_re_r": 0.1}
def read_ram() -> tuple[float, float]:
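# best-effort: parse /proc/meminfo into (total_gib, available_gib) with two
# decimals; returns (0, 0) on non-linux systems or if the file is unreadable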
a = b = 0
try:
with open("/proc/meminfo", "rb", 0x10000) as f:
zsl = f.read(0x10000).decode("ascii", "replace").split("\n")
p = re.compile("^MemTotal:.* kB")
zs = next((x for x in zsl if p.match(x)))
a = int((int(zs.split()[1]) / 0x100000) * 100) / 100
p = re.compile("^MemAvailable:.* kB")
zs = next((x for x in zsl if p.match(x)))
b = int((int(zs.split()[1]) / 0x100000) * 100) / 100
except:
pass
return a, b
RAM_TOTAL, RAM_AVAIL = read_ram()
pybin = sys.executable or ""
if EXE:
pybin = ""
@@ -665,11 +689,22 @@ class HLog(logging.Handler):
class NetMap(object):
def __init__(self, ips: list[str], cidrs: list[str], keep_lo=False) -> None:
def __init__(
self,
ips: list[str],
cidrs: list[str],
keep_lo=False,
strict_cidr=False,
defer_mutex=False,
) -> None:
"""
ips: list of plain ipv4/ipv6 IPs, not cidr
cidrs: list of cidr-notation IPs (ip/prefix)
"""
# a Lock cannot be pickled for multiprocessing; defer assignment
self.mutex: Optional[threading.Lock] = None if defer_mutex else threading.Lock()
if "::" in ips:
ips = [x for x in ips if x != "::"] + list(
[x.split("/")[0] for x in cidrs if ":" in x]
@@ -696,7 +731,7 @@ class NetMap(object):
bip = socket.inet_pton(fam, ip.split("/")[0])
self.bip.append(bip)
self.b2sip[bip] = ip.split("/")[0]
self.b2net[bip] = (IPv6Network if v6 else IPv4Network)(ip, False)
self.b2net[bip] = (IPv6Network if v6 else IPv4Network)(ip, strict_cidr)
self.bip.sort(reverse=True)
@@ -707,8 +742,13 @@ class NetMap(object):
try:
return self.cache[ip]
except:
pass
# intentionally crash the calling thread if unset:
assert self.mutex # type: ignore # !rm
with self.mutex:
return self._map(ip)
def _map(self, ip: str) -> str:
v6 = ":" in ip
ci = IPv6Address(ip) if v6 else IPv4Address(ip)
bip = next((x for x in self.bip if ci in self.b2net[x]), None)
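
A minimal sketch of the deferred-mutex pattern above, assuming the map is built before the worker processes exist and armed afterwards, the way the hub does for ipu_iu / ftp_ipa_nm / tftp_ipa_nm:

    import threading

    # built early; the Lock is deferred since it cannot be pickled for multiprocessing
    nm = NetMap(["::"], ["172.16.24.0/24"], keep_lo=True, strict_cidr=True, defer_mutex=True)

    # once the subprocesses are up, arm the mutex and the map becomes usable:
    nm.mutex = threading.Lock()
    print(nm.map("172.16.24.7"))  # -> "172.16.24.0" (base-ip of the matching net)
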
@@ -1011,6 +1051,7 @@ class MTHash(object):
self.sz = 0
self.csz = 0
self.stop = False
self.readsz = 1024 * 1024 * (2 if (RAM_AVAIL or 2) < 1 else 12)
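# use 2 MiB reads when less than 1 GiB RAM is free (12 MiB otherwise, or if unknown)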
self.omutex = threading.Lock()
self.imutex = threading.Lock()
self.work_q: Queue[int] = Queue()
@@ -1086,7 +1127,7 @@ class MTHash(object):
while chunk_rem > 0:
with self.imutex:
f.seek(ofs)
buf = f.read(min(chunk_rem, 1024 * 1024 * 12))
buf = f.read(min(chunk_rem, self.readsz))
if not buf:
raise Exception("EOF at " + str(ofs))
@@ -1163,7 +1204,7 @@ class Magician(object):
return ret
mime = magic.from_file(fpath, mime=True)
mime = re.split("[; ]", mime, 1)[0]
mime = re.split("[; ]", mime, maxsplit=1)[0]
try:
return EXTS[mime]
except:
@@ -2185,6 +2226,23 @@ def unquotep(txt: str) -> str:
return w8dec(unq2)
def vroots(vp1: str, vp2: str) -> tuple[str, str]:
"""
input("q/w/e/r","a/s/d/e/r") output("/q/w/","/a/s/d/")
"""
while vp1 and vp2:
zt1 = vp1.rsplit("/", 1) if "/" in vp1 else ("", vp1)
zt2 = vp2.rsplit("/", 1) if "/" in vp2 else ("", vp2)
if zt1[1] != zt2[1]:
break
vp1 = zt1[0]
vp2 = zt2[0]
return (
"/%s/" % (vp1,) if vp1 else "/",
"/%s/" % (vp2,) if vp2 else "/",
)
def vsplit(vpath: str) -> tuple[str, str]:
if "/" not in vpath:
return "", vpath
@@ -2486,23 +2544,28 @@ def wunlink(log: "NamedLogger", abspath: str, flags: dict[str, Any]) -> bool:
return _fs_mvrm(log, abspath, "", False, flags)
def get_df(abspath: str) -> tuple[Optional[int], Optional[int]]:
def get_df(abspath: str, prune: bool) -> tuple[Optional[int], Optional[int], str]:
try:
# some fuses misbehave
assert ctypes # type: ignore # !rm
ap = fsenc(abspath)
while prune and not os.path.isdir(ap) and BOS_SEP in ap:
# strip leaf segments until an existing folder is hit
ap = ap.rsplit(BOS_SEP, 1)[0]
if ANYWIN:
assert ctypes # type: ignore # !rm
abspath = fsdec(ap)
bfree = ctypes.c_ulonglong(0)
ctypes.windll.kernel32.GetDiskFreeSpaceExW( # type: ignore
ctypes.c_wchar_p(abspath), None, None, ctypes.pointer(bfree)
)
return (bfree.value, None)
return (bfree.value, None, "")
else:
sv = os.statvfs(fsenc(abspath))
sv = os.statvfs(ap)
free = sv.f_frsize * sv.f_bfree
total = sv.f_frsize * sv.f_blocks
return (free, total)
except:
return (None, None)
return (free, total, "")
except Exception as ex:
return (None, None, repr(ex))
if not ANYWIN and not MACOS:
@@ -2640,7 +2703,7 @@ def list_ips() -> list[str]:
return list(ret)
def build_netmap(csv: str):
def build_netmap(csv: str, defer_mutex: bool = False):
csv = csv.lower().strip()
if csv in ("any", "all", "no", ",", ""):
@@ -2675,7 +2738,34 @@ def build_netmap(csv: str):
cidrs.append(zs)
ips = [x.split("/")[0] for x in cidrs]
return NetMap(ips, cidrs, True)
return NetMap(ips, cidrs, True, False, defer_mutex)
def load_ipu(
log: "RootLogger", ipus: list[str], defer_mutex: bool = False
) -> tuple[dict[str, str], NetMap]:
ip_u = {"": "*"}
cidr_u = {}
for ipu in ipus:
try:
cidr, uname = ipu.split("=")
cip, csz = cidr.split("/")
except:
t = "\n invalid value %r for argument --ipu; must be CIDR=UNAME (192.168.0.0/16=amelia)"
raise Exception(t % (ipu,))
uname2 = cidr_u.get(cidr)
if uname2 is not None:
t = "\n invalid value %r for argument --ipu; cidr %s already mapped to %r"
raise Exception(t % (ipu, cidr, uname2))
cidr_u[cidr] = uname
ip_u[cip] = uname
try:
nm = NetMap(["::"], list(cidr_u.keys()), True, True, defer_mutex)
except Exception as ex:
t = "failed to translate --ipu into netmap, probably due to invalid config: %r"
log("root", t % (ex,), 1)
raise
return ip_u, nm
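
For example, a single --ipu rule becomes the ip-to-username dict plus a strict netmap, and a client address then resolves in two steps; the no-op logger below is just a stand-in for RootLogger:

    log = lambda src, msg, c=0: None  # stand-in RootLogger for this sketch

    ip_u, nm = load_ipu(log, ["172.16.24.0/24=dave"])
    # ip_u == {"": "*", "172.16.24.0": "dave"}

    uname = ip_u[nm.map("172.16.24.9")]  # -> "dave"
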
def yieldfile(fn: str, bufsz: int) -> Generator[bytes, None, None]:
@@ -2692,10 +2782,12 @@ def yieldfile(fn: str, bufsz: int) -> Generator[bytes, None, None]:
def hashcopy(
fin: Generator[bytes, None, None],
fout: Union[typing.BinaryIO, typing.IO[Any]],
slp: float = 0,
max_sz: int = 0,
hashobj: Optional["hashlib._Hash"],
max_sz: int,
slp: float,
) -> tuple[int, str, str]:
hashobj = hashlib.sha512()
if not hashobj:
hashobj = hashlib.sha512()
tlen = 0
for buf in fin:
tlen += len(buf)
@@ -2721,7 +2813,10 @@ def sendfile_py(
bufsz: int,
slp: float,
use_poll: bool,
dls: dict[str, tuple[float, int]],
dl_id: str,
) -> int:
sent = 0
remains = upper - lower
f.seek(lower)
while remains > 0:
@@ -2738,6 +2833,10 @@ def sendfile_py(
except:
return remains
if dl_id:
sent += len(buf)
dls[dl_id] = (time.time(), sent)
return 0
@@ -2750,6 +2849,8 @@ def sendfile_kern(
bufsz: int,
slp: float,
use_poll: bool,
dls: dict[str, tuple[float, int]],
dl_id: str,
) -> int:
out_fd = s.fileno()
in_fd = f.fileno()
@@ -2762,7 +2863,7 @@ def sendfile_kern(
while ofs < upper:
stuck = stuck or time.time()
try:
req = min(2 ** 30, upper - ofs)
req = min(0x2000000, upper - ofs) # 32 MiB
if use_poll:
poll.poll(10000)
else:
@@ -2786,13 +2887,16 @@ def sendfile_kern(
return upper - ofs
ofs += n
if dl_id:
dls[dl_id] = (time.time(), ofs - lower)
# print("sendfile: ok, sent {} now, {} total, {} remains".format(n, ofs - lower, upper - ofs))
return 0
def statdir(
logger: Optional["RootLogger"], scandir: bool, lstat: bool, top: str
logger: Optional["RootLogger"], scandir: bool, lstat: bool, top: str, throw: bool
) -> Generator[tuple[str, os.stat_result], None, None]:
if lstat and ANYWIN:
lstat = False
@@ -2828,6 +2932,12 @@ def statdir(
logger(src, "[s] {} @ {}".format(repr(ex), fsdec(abspath)), 6)
except Exception as ex:
if throw:
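# surface a missing directory as a 404; any other error propagates as-is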
zi = getattr(ex, "errno", 0)
if zi == errno.ENOENT:
raise Pebkac(404, str(ex))
raise
t = "{} @ {}".format(repr(ex), top)
if logger:
logger(src, t, 1)
@@ -2836,7 +2946,7 @@ def statdir(
def dir_is_empty(logger: "RootLogger", scandir: bool, top: str):
for _ in statdir(logger, scandir, False, top):
for _ in statdir(logger, scandir, False, top, False):
return False
return True
@@ -2849,7 +2959,7 @@ def rmdirs(
top = os.path.dirname(top)
depth -= 1
stats = statdir(logger, scandir, lstat, top)
stats = statdir(logger, scandir, lstat, top, False)
dirs = [x[0] for x in stats if stat.S_ISDIR(x[1].st_mode)]
dirs = [os.path.join(top, x) for x in dirs]
ok = []
@@ -3622,10 +3732,10 @@ def stat_resource(E: EnvParams, name: str):
return None
def _find_impresource(E: EnvParams, name: str):
def _find_impresource(pkg: types.ModuleType, name: str):
assert impresources # !rm
try:
files = impresources.files(E.pkg)
files = impresources.files(pkg)
except ImportError:
return None
@@ -3635,7 +3745,7 @@ def _find_impresource(E: EnvParams, name: str):
_rescache_has = {}
def _has_resource(E: EnvParams, name: str):
def _has_resource(name: str):
try:
return _rescache_has[name]
except:
@@ -3644,14 +3754,17 @@ def _has_resource(E: EnvParams, name: str):
if len(_rescache_has) > 999:
_rescache_has.clear()
assert __package__ # !rm
pkg = sys.modules[__package__]
if impresources:
res = _find_impresource(E, name)
res = _find_impresource(pkg, name)
if res and res.is_file():
_rescache_has[name] = True
return True
if pkg_resources:
if _pkg_resource_exists(E.pkg.__name__, name):
if _pkg_resource_exists(pkg.__name__, name):
_rescache_has[name] = True
return True
@@ -3660,14 +3773,15 @@ def _has_resource(E: EnvParams, name: str):
def has_resource(E: EnvParams, name: str):
return _has_resource(E, name) or os.path.exists(os.path.join(E.mod, name))
return _has_resource(name) or os.path.exists(os.path.join(E.mod, name))
def load_resource(E: EnvParams, name: str, mode="rb") -> IO[bytes]:
enc = None if "b" in mode else "utf-8"
if impresources:
res = _find_impresource(E, name)
assert __package__ # !rm
res = _find_impresource(sys.modules[__package__], name)
if res and res.is_file():
if enc:
return res.open(mode, encoding=enc)
@@ -3676,8 +3790,10 @@ def load_resource(E: EnvParams, name: str, mode="rb") -> IO[bytes]:
return res.open(mode)
if pkg_resources:
if _pkg_resource_exists(E.pkg.__name__, name):
stream = pkg_resources.resource_stream(E.pkg.__name__, name)
assert __package__ # !rm
pkg = sys.modules[__package__]
if _pkg_resource_exists(pkg.__name__, name):
stream = pkg_resources.resource_stream(pkg.__name__, name)
if enc:
stream = codecs.getreader(enc)(stream)
return stream

View File

@@ -32,7 +32,7 @@ window.baguetteBox = (function () {
scrollCSS = ['', ''],
scrollTimer = 0,
re_i = /^[^?]+\.(a?png|avif|bmp|gif|heif|jpe?g|jfif|svg|webp)(\?|$)/i,
re_v = /^[^?]+\.(webm|mkv|mp4)(\?|$)/i,
re_v = /^[^?]+\.(webm|mkv|mp4|m4v)(\?|$)/i,
anims = ['slideIn', 'fadeIn', 'none'],
data = {}, // all galleries
imagesElements = [],

View File

@@ -896,7 +896,7 @@ html.y #path a:hover {
max-width: 52em;
}
.mdo.sb,
#epi.logue.mdo>iframe {
.logue.mdo>iframe {
max-width: 54em;
}
.mdo,
@@ -1710,6 +1710,18 @@ html.dz .btn {
background: var(--btn-1-bg);
text-shadow: none;
}
#tree ul a.ld::before {
font-weight: bold;
font-family: sans-serif;
display: inline-block;
text-align: center;
width: 1em;
margin: 0 .3em 0 -1.3em;
color: var(--fg-max);
opacity: 0;
content: '◠';
animation: .5s linear infinite forwards spin, ease .25s 1 forwards fadein;
}
#tree ul a.par {
color: var(--fg-max);
}
@@ -1931,11 +1943,10 @@ html.y #tree.nowrap .ntree a+a:hover {
#rn_f.m td+td {
width: 50%;
}
#rn_f .err td {
background: var(--err-bg);
color: var(--fg-max);
}
#rn_f .err input[readonly] {
#rn_f .err td,
#rn_f .err input[readonly],
#rui .ng input[readonly] {
color: var(--err-fg);
background: var(--err-bg);
}
#rui input[readonly] {

View File

@@ -1,7 +1,7 @@
"use strict";
var XHR = XMLHttpRequest,
img_re = /\.(a?png|avif|bmp|gif|heif|jpe?g|jfif|svg|webp|webm|mkv|mp4)(\?|$)/i;
img_re = /\.(a?png|avif|bmp|gif|heif|jpe?g|jfif|svg|webp|webm|mkv|mp4|m4v)(\?|$)/i;
var Ls = {
"eng": {
@@ -22,7 +22,7 @@ var Ls = {
"vq": "video quality / bitrate",
"pixfmt": "subsampling / pixel structure",
"resw": "horizontal resolution",
"resh": "veritcal resolution",
"resh": "vertical resolution",
"chs": "audio channels",
"hz": "sample rate"
},
@@ -37,8 +37,9 @@ var Ls = {
["T", "toggle thumbnails / icons"],
["🡅 A/D", "thumbnail size"],
["ctrl-K", "delete selected"],
["ctrl-X", "cut selected"],
["ctrl-V", "paste into folder"],
["ctrl-X", "cut selection to clipboard"],
["ctrl-C", "copy selection to clipboard"],
["ctrl-V", "paste (move/copy) here"],
["Y", "download selected"],
["F2", "rename selected"],
@@ -83,7 +84,7 @@ var Ls = {
["I/K", "prev/next file"],
["M", "close textfile"],
["E", "edit textfile"],
["S", "select file (for cut/rename)"],
["S", "select file (for cut/copy/rename)"],
]
],
@@ -133,6 +134,7 @@ var Ls = {
"wt_ren": "rename selected items$NHotkey: F2",
"wt_del": "delete selected items$NHotkey: ctrl-K",
"wt_cut": "cut selected items &lt;small&gt;(then paste somewhere else)&lt;/small&gt;$NHotkey: ctrl-X",
"wt_cpy": "copy selected items to clipboard$N(to paste them somewhere else)$NHotkey: ctrl-C",
"wt_pst": "paste a previously cut / copied selection$NHotkey: ctrl-V",
"wt_selall": "select all files$NHotkey: ctrl-A (when file focused)",
"wt_selinv": "invert selection",
@@ -327,6 +329,7 @@ var Ls = {
"fr_emore": "select at least one item to rename",
"fd_emore": "select at least one item to delete",
"fc_emore": "select at least one item to cut",
"fcp_emore": "select at least one item to copy to clipboard",
"fs_sc": "share the folder you're in",
"fs_ss": "share the selected files",
@@ -379,16 +382,28 @@ var Ls = {
"fc_ok": "cut {0} items",
"fc_warn": 'cut {0} items\n\nbut: only <b>this</b> browser-tab can paste them\n(since the selection is so absolutely massive)',
"fp_ecut": "first cut some files / folders to paste / move\n\nnote: you can cut / paste across different browser tabs",
"fp_ename": "these {0} items cannot be moved here (names already exist):",
"fcc_ok": "copied {0} items to clipboard",
"fcc_warn": 'copied {0} items to clipboard\n\nbut: only <b>this</b> browser-tab can paste them\n(since the selection is so absolutely massive)',
"fp_apply": "use these names",
"fp_ecut": "first cut or copy some files / folders to paste / move\n\nnote: you can cut / paste across different browser tabs",
"fp_ename": "{0} items cannot be moved here because the names are already taken. Give them new names below to continue, or blank the name to skip them:",
"fcp_ename": "{0} items cannot be copied here because the names are already taken. Give them new names below to continue, or blank the name to skip them:",
"fp_emore": "there are still some filename collisions left to fix",
"fp_ok": "move OK",
"fcp_ok": "copy OK",
"fp_busy": "moving {0} items...\n\n{1}",
"fcp_busy": "copying {0} items...\n\n{1}",
"fp_err": "move failed:\n",
"fcp_err": "copy failed:\n",
"fp_confirm": "move these {0} items here?",
"fcp_confirm": "copy these {0} items here?",
"fp_etab": 'failed to read clipboard from other browser tab',
"fp_name": "uploading a file from your device. Give it a name:",
"fp_both_m": '<h6>choose what to paste</h6><code>Enter</code> = Move {0} files from «{1}»\n<code>ESC</code> = Upload {2} files from your device',
"fcp_both_m": '<h6>choose what to paste</h6><code>Enter</code> = Copy {0} files from «{1}»\n<code>ESC</code> = Upload {2} files from your device',
"fp_both_b": '<a href="#" id="modal-ok">Move</a><a href="#" id="modal-ng">Upload</a>',
"fcp_both_b": '<a href="#" id="modal-ok">Copy</a><a href="#" id="modal-ng">Upload</a>',
"mk_noname": "type a name into the text field on the left before you do that :p",
@@ -400,7 +415,7 @@ var Ls = {
"tvt_dl": "download this file$NHotkey: Y\">💾 download",
"tvt_prev": "show previous document$NHotkey: i\">⬆ prev",
"tvt_next": "show next document$NHotkey: K\">⬇ next",
"tvt_sel": "select file &nbsp; ( for cut / delete / ... )$NHotkey: S\">sel",
"tvt_sel": "select file &nbsp; ( for cut / copy / delete / ... )$NHotkey: S\">sel",
"tvt_edit": "open file in text editor$NHotkey: E\">✏️ edit",
"gt_vau": "don't show videos, just play the audio\">🎧",
@@ -605,8 +620,9 @@ var Ls = {
["T", "miniatyrbilder på/av"],
["🡅 A/D", "ikonstørrelse"],
["ctrl-K", "slett valgte"],
["ctrl-X", "klipp ut"],
["ctrl-V", "lim inn"],
["ctrl-X", "klipp ut valgte"],
["ctrl-C", "kopiér til utklippstavle"],
["ctrl-V", "lim inn (flytt/kopiér)"],
["Y", "last ned valgte"],
["F2", "endre navn på valgte"],
@@ -702,12 +718,13 @@ var Ls = {
"wt_ren": "gi nye navn til de valgte filene$NSnarvei: F2",
"wt_del": "slett de valgte filene$NSnarvei: ctrl-K",
"wt_cut": "klipp ut de valgte filene &lt;small&gt;(for å lime inn et annet sted)&lt;/small&gt;$NSnarvei: ctrl-X",
"wt_pst": "lim inn filer (som tidligere ble klippet ut et annet sted)$NSnarvei: ctrl-V",
"wt_cpy": "kopiér de valgte filene til utklippstavlen$N(for å lime inn et annet sted)$NSnarvei: ctrl-C",
"wt_pst": "lim inn filer (som tidligere ble klippet ut / kopiert et annet sted)$NSnarvei: ctrl-V",
"wt_selall": "velg alle filer$NSnarvei: ctrl-A (mens fokus er på en fil)",
"wt_selinv": "inverter utvalg",
"wt_selzip": "last ned de valgte filene som et arkiv",
"wt_seldl": "last ned de valgte filene$NSnarvei: Y",
"wt_npirc": "kopiér sang-info (irc-formattert)",
"wt_npirc": "kopiér sang-info (irc-formatert)",
"wt_nptxt": "kopiér sang-info",
"wt_grid": "bytt mellom ikoner og listevisning$NSnarvei: G",
"wt_prev": "forrige sang$NSnarvei: J",
@@ -845,7 +862,7 @@ var Ls = {
"mt_oscv": "vis album-cover på infoskjermen\">bilde",
"mt_follow": "bla slik at sangen som spilles alltid er synlig\">🎯",
"mt_compact": "tettpakket avspillerpanel\">⟎",
"mt_uncache": "prøv denne hvis en sang ikke spiller riktig\">uncache",
"mt_uncache": "prøv denne hvis en sang ikke spiller riktig\">oppfrisk",
"mt_mloop": "repeter hele mappen\">🔁 gjenta",
"mt_mnext": "hopp til neste mappe og fortsett\">📂 neste",
"mt_cflac": "konverter flac / wav-filer til opus\">flac",
@@ -886,7 +903,7 @@ var Ls = {
"f_dls": 'linkene i denne mappen er nå\nomgjort til nedlastningsknapper',
"f_partial": "For å laste ned en fil som enda ikke er ferdig opplastet, klikk på filen som har samme filnavn som denne, men uten <code>.PARTIAL</code> på slutten. Da vil serveren passe på at nedlastning går bra. Derfor anbefales det sterkt å trykke ABRYT eller Escape-tasten.\n\nHvis du virkelig ønsker å laste ned denne <code>.PARTIAL</code>-filen på en ukontrollert måte, trykk OK / Enter for å ignorere denne advarselen. Slik vil du høyst sannsynlig motta korrupt data.",
"f_partial": "For å laste ned en fil som enda ikke er ferdig opplastet, klikk på filen som har samme filnavn som denne, men uten <code>.PARTIAL</code> på slutten. Da vil serveren passe på at nedlastning går bra. Derfor anbefales det sterkt å trykke AVBRYT eller Escape-tasten.\n\nHvis du virkelig ønsker å laste ned denne <code>.PARTIAL</code>-filen på en ukontrollert måte, trykk OK / Enter for å ignorere denne advarselen. Slik vil du høyst sannsynlig motta korrupt data.",
"ft_paste": "Lim inn {0} filer$NSnarvei: ctrl-V",
"fr_eperm": 'kan ikke endre navn:\ndu har ikke “move”-rettigheten i denne mappen',
@@ -896,6 +913,7 @@ var Ls = {
"fr_emore": "velg minst én fil som skal få nytt navn",
"fd_emore": "velg minst én fil som skal slettes",
"fc_emore": "velg minst én fil som skal klippes ut",
"fcp_emore": "velg minst én fil som skal kopieres til utklippstavlen",
"fs_sc": "del mappen du er i nå",
"fs_ss": "del de valgte filene",
@@ -948,16 +966,28 @@ var Ls = {
"fc_ok": "klippet ut {0} filer",
"fc_warn": 'klippet ut {0} filer\n\nmen: kun <b>denne</b> nettleserfanen har mulighet til å lime dem inn et annet sted, siden antallet filer er helt hinsides',
"fp_ecut": "du må klippe ut noen filer / mapper først\n\nmerk: du kan gjerne jobbe på kryss av nettleserfaner; klippe ut i én fane, lime inn i en annen",
"fp_ename": "disse {0} filene kan ikke flyttes til målmappen fordi det allerede finnes filer med samme navn:",
"fcc_ok": "kopierte {0} filer til utklippstavlen",
"fcc_warn": 'kopierte {0} filer til utklippstavlen\n\nmen: kun <b>denne</b> nettleserfanen har mulighet til å lime dem inn et annet sted, siden antallet filer er helt hinsides',
"fp_apply": "bekreft og lim inn nå",
"fp_ecut": "du må klippe ut eller kopiere noen filer / mapper først\n\nmerk: du kan gjerne jobbe på kryss av nettleserfaner; klippe ut i én fane, lime inn i en annen",
"fp_ename": "{0} filer kan ikke flyttes til målmappen fordi det allerede finnes filer med samme navn. Gi dem nye navn nedenfor, eller gi dem et blankt navn for å hoppe over dem:",
"fcp_ename": "{0} filer kan ikke kopieres til målmappen fordi det allerede finnes filer med samme navn. Gi dem nye navn nedenfor, eller gi dem et blankt navn for å hoppe over dem:",
"fp_emore": "det er fortsatt flere navn som må endres",
"fp_ok": "flytting OK",
"fcp_ok": "kopiering OK",
"fp_busy": "flytter {0} filer...\n\n{1}",
"fcp_busy": "kopierer {0} filer...\n\n{1}",
"fp_err": "flytting feilet:\n",
"fcp_err": "kopiering feilet:\n",
"fp_confirm": "flytt disse {0} filene hit?",
"fcp_confirm": "kopiér disse {0} filene hit?",
"fp_etab": 'kunne ikke lese listen med filer ifra den andre nettleserfanen',
"fp_name": "Laster opp én fil fra enheten din. Velg filnavn:",
"fp_both_m": '<h6>hva skal limes inn her?</h6><code>Enter</code> = Flytt {0} filer fra «{1}»\n<code>ESC</code> = Last opp {2} filer fra enheten din',
"fcp_both_m": '<h6>hva skal limes inn her?</h6><code>Enter</code> = Kopiér {0} filer fra «{1}»\n<code>ESC</code> = Last opp {2} filer fra enheten din',
"fp_both_b": '<a href="#" id="modal-ok">Flytt</a><a href="#" id="modal-ng">Last opp</a>',
"fcp_both_b": '<a href="#" id="modal-ok">Kopiér</a><a href="#" id="modal-ng">Last opp</a>',
"mk_noname": "skriv inn et navn i tekstboksen til venstre først :p",
@@ -1176,6 +1206,7 @@ var Ls = {
["🡅 A/D", "缩略图大小"],
["ctrl-K", "删除选中项"],
["ctrl-X", "剪切选中项"],
["ctrl-C", "复制选中项"], //m
["ctrl-V", "粘贴到文件夹"],
["Y", "下载选中项"],
["F2", "重命名选中项"],
@@ -1271,6 +1302,7 @@ var Ls = {
"wt_ren": "重命名选中的项目$N快捷键: F2",
"wt_del": "删除选中的项目$N快捷键: ctrl-K",
"wt_cut": "剪切选中的项目&lt;small&gt;(然后粘贴到其他地方)&lt;/small&gt;$N快捷键: ctrl-X",
"wt_cpy": "将选中的项目复制到剪贴板&lt;small&gt;(然后粘贴到其他地方)&lt;/small&gt;$N快捷键: ctrl-C", //m
"wt_pst": "粘贴之前剪切/复制的选择$N快捷键: ctrl-V",
"wt_selall": "选择所有文件$N快捷键: ctrl-A当文件被聚焦时",
"wt_selinv": "反转选择",
@@ -1465,6 +1497,7 @@ var Ls = {
"fr_emore": "选择至少一个项目以重命名",
"fd_emore": "选择至少一个项目以删除",
"fc_emore": "选择至少一个项目以剪切",
"fcp_emore": "选择至少一个要复制到剪贴板的项目", //m
"fs_sc": "分享你所在的文件夹",
"fs_ss": "分享选定的文件",
@@ -1517,16 +1550,28 @@ var Ls = {
"fc_ok": "剪切 {0} 项",
"fc_warn": '剪切 {0} 项\n\n但只有 <b>这个</b> 浏览器标签页可以粘贴它们\n因为选择非常庞大',
"fp_ecut": "首先剪切一些文件/文件夹以粘贴/移动\n\n注意你可以在不同的浏览器标签页之间剪切/粘贴",
"fp_ename": "这些 {0} 项不能移动到这里(名称已存在):",
"fcc_ok": "已将 {0} 项复制到剪贴板", //m
"fcc_warn": '已将 {0} 项复制到剪贴板\n\n但只有 <b>这个</b> 浏览器标签页可以粘贴它们\n因为选择非常庞大', //m
"fp_apply": "确认并立即粘贴", //m
"fp_ecut": "首先剪切或复制一些文件/文件夹以粘贴/移动\n\n注意你可以在不同的浏览器标签页之间剪切/粘贴", //m
"fp_ename": "{0} 项不能移动到这里,因为名称已被占用。请在下方输入新名称以继续,或将名称留空以跳过这些项:", //m
"fcp_ename": "{0} 项不能复制到这里,因为名称已被占用。请在下方输入新名称以继续,或将名称留空以跳过这些项:", //m
"fp_emore": "还有一些文件名冲突需要解决", //m
"fp_ok": "移动成功",
"fcp_ok": "复制成功", //m
"fp_busy": "正在移动 {0} 项...\n\n{1}",
"fcp_busy": "正在复制 {0} 项...\n\n{1}", //m
"fp_err": "移动失败:\n",
"fcp_err": "复制失败:\n", //m
"fp_confirm": "将这些 {0} 项移动到这里?",
"fcp_confirm": "将这些 {0} 项复制到这里?", //m
"fp_etab": '无法从其他浏览器标签页读取剪贴板',
"fp_name": "从你的设备上传一个文件。给它一个名字:",
"fp_both_m": '<h6>选择粘贴内容</h6><code>Enter</code> = 从 «{1}» 移动 {0} 个文件\n<code>ESC</code> = 从你的设备上传 {2} 个文件',
"fcp_both_m": '<h6>选择粘贴内容</h6><code>Enter</code> = 从 «{1}» 复制 {0} 个文件\n<code>ESC</code> = 从你的设备上传 {2} 个文件', //m
"fp_both_b": '<a href="#" id="modal-ok">移动</a><a href="#" id="modal-ng">上传</a>',
"fcp_both_b": '<a href="#" id="modal-ok">复制</a><a href="#" id="modal-ng">上传</a>', //m
"mk_noname": "在左侧文本框中输入名称,然后再执行此操作 :p",
@@ -1771,6 +1816,7 @@ ebi('widget').innerHTML = (
' href="#" id="fren" tt="' + L.wt_ren + '">✎<span>name</span></a><a' +
' href="#" id="fdel" tt="' + L.wt_del + '">⌫<span>del.</span></a><a' +
' href="#" id="fcut" tt="' + L.wt_cut + '">✂<span>cut</span></a><a' +
' href="#" id="fcpy" tt="' + L.wt_cpy + '">⧉<span>copy</span></a><a' +
' href="#" id="fpst" tt="' + L.wt_pst + '">📋<span>paste</span></a>' +
'</span><span id="wzip"><a' +
' href="#" id="selall" tt="' + L.wt_selall + '">sel.<br />all</a><a' +
@@ -2118,8 +2164,9 @@ function set_files_html(html) {
// actx breaks background album playback on ios
var ACtx = !IPHONE && (window.AudioContext || window.webkitAudioContext),
ACB = sread('au_cbv') || 1,
noih = /[?&]v\b/.exec('' + location),
hash0 = location.hash,
sloc0 = '' + location,
noih = /[?&]v\b/.exec(sloc0),
ldks = [],
dks = {},
dk, mp;
@@ -4376,6 +4423,7 @@ var fileman = (function () {
var bren = ebi('fren'),
bdel = ebi('fdel'),
bcut = ebi('fcut'),
bcpy = ebi('fcpy'),
bpst = ebi('fpst'),
bshr = ebi('fshr'),
t_paste,
@@ -4388,14 +4436,19 @@ var fileman = (function () {
catch (ex) { }
r.render = function () {
if (r.clip === null)
if (r.clip === null) {
r.clip = jread('fman_clip', []).slice(1);
r.ccp = r.clip.length && r.clip[0] == '//c';
if (r.ccp)
r.clip.shift();
}
var sel = msel.getsel(),
nsel = sel.length,
enren = nsel,
endel = nsel,
encut = nsel,
encpy = nsel,
enpst = r.clip && r.clip.length,
hren = !(have_mv && has(perms, 'write') && has(perms, 'move')),
hdel = !(have_del && has(perms, 'delete')),
@@ -4409,6 +4462,7 @@ var fileman = (function () {
clmod(bren, 'en', enren);
clmod(bdel, 'en', endel);
clmod(bcut, 'en', encut);
clmod(bcpy, 'en', encpy);
clmod(bpst, 'en', enpst);
clmod(bshr, 'en', 1);
@@ -4510,9 +4564,12 @@ var fileman = (function () {
'<tr><td>perms</td><td class="sh_axs">',
];
for (var a = 0; a < perms.length; a++)
if (perms[a] != 'admin')
if (!has(['admin', 'move'], perms[a]))
html.push('<a href="#" class="tgl btn">' + perms[a] + '</a>');
if (has(perms, 'write'))
html.push('<a href="#" class="btn">write-only</a>');
html.push('</td></tr></div');
shui.innerHTML = html.join('\n');
@@ -4576,6 +4633,9 @@ var fileman = (function () {
function shspf() {
clmod(this, 'on', 't');
if (this.textContent == 'write-only')
for (var a = 0; a < pbtns.length; a++)
clmod(pbtns[a], 'on', pbtns[a].textContent == 'write');
}
clmod(pbtns[0], 'on', 1);
@@ -4697,9 +4757,9 @@ var fileman = (function () {
var html = sel.length > 1 ? ['<div>'] : [
'<div>',
'<button class="rn_dec" n="0" tt="' + L.frt_dec + '</button>',
'<button class="rn_dec" id="rn_dec_0" tt="' + L.frt_dec + '</button>',
'//',
'<button class="rn_reset" n="0" tt="' + L.frt_rst + '</button>'
'<button class="rn_reset" id="rn_reset_0" tt="' + L.frt_rst + '</button>'
];
html = html.concat([
@@ -4726,8 +4786,8 @@ var fileman = (function () {
if (sel.length == 1)
html.push(
'<div><table id="rn_f">\n' +
'<tr><td>old:</td><td><input type="text" id="rn_old" n="0" readonly /></td></tr>\n' +
'<tr><td>new:</td><td><input type="text" id="rn_new" n="0" /></td></tr>');
'<tr><td>old:</td><td><input type="text" id="rn_old_0" readonly /></td></tr>\n' +
'<tr><td>new:</td><td><input type="text" id="rn_new_0" /></td></tr>');
else {
html.push(
'<div><table id="rn_f" class="m">' +
@@ -4736,10 +4796,10 @@ var fileman = (function () {
html.push(
'<tr><td>' +
(cheap ? '</td>' :
'<button class="rn_dec" n="' + a + '">decode</button>' +
'<button class="rn_reset" n="' + a + '">' + t_rst + '</button></td>') +
'<td><input type="text" id="rn_new" n="' + a + '" /></td>' +
'<td><input type="text" id="rn_old" n="' + a + '" readonly /></td></tr>');
'<button class="rn_dec" id="rn_dec_' + a + '">decode</button>' +
'<button class="rn_reset" id="rn_reset_' + a + '">' + t_rst + '</button></td>') +
'<td><input type="text" id="rn_new_' + a + '" /></td>' +
'<td><input type="text" id="rn_old_' + a + '" readonly /></td></tr>');
}
html.push('</table></div>');
@@ -4753,9 +4813,8 @@ var fileman = (function () {
rui.innerHTML = html.join('\n');
for (var a = 0; a < f.length; a++) {
var k = '[n="' + a + '"]';
f[a].iold = QS('#rn_old' + k);
f[a].inew = QS('#rn_new' + k);
f[a].iold = ebi('rn_old_' + a);
f[a].inew = ebi('rn_new_' + a);
f[a].inew.value = f[a].iold.value = f[a].ofn;
if (!cheap)
@@ -4766,11 +4825,11 @@ var fileman = (function () {
if (kc.endsWith('Enter'))
return rn_apply();
};
QS('.rn_dec' + k).onclick = function (e) {
ebi('rn_dec_' + a).onclick = function (e) {
ev(e);
f[a].inew.value = uricom_dec(f[a].inew.value);
};
QS('.rn_reset' + k).onclick = function (e) {
ebi('rn_reset_' + a).onclick = function (e) {
ev(e);
rn_reset(a);
};
@@ -4813,6 +4872,9 @@ var fileman = (function () {
inew = ebi('rn_pnew'),
defp = '$lpad((tn),2,0). [(artist) - ](title).(ext)';
ire.value = sread('cpp_rn_re') || '';
ifmt.value = sread('cpp_rn_fmt') || '';
var presets = {};
presets[defp] = ['', defp];
presets = jread("rn_pre", presets);
@@ -4903,6 +4965,8 @@ var fileman = (function () {
function rn_apply(e) {
ev(e);
swrite('cpp_rn_re', ire.value);
swrite('cpp_rn_fmt', ifmt.value);
if (r.win || r.slash) {
var changed = 0;
for (var a = 0; a < f.length; a++) {
@@ -4952,7 +5016,6 @@ var fileman = (function () {
};
r.delete = function (e) {
ev(e);
var sel = msel.getsel(),
vps = [];
@@ -4962,6 +5025,8 @@ var fileman = (function () {
if (!sel.length)
return toast.err(3, L.fd_emore);
ev(e);
if (clgot(bdel, 'hide'))
return toast.err(3, L.fd_eperm);
@@ -5001,13 +5066,15 @@ var fileman = (function () {
};
r.cut = function (e) {
ev(e);
var sel = msel.getsel(),
vps = [];
stamp = Date.now(),
vps = [stamp];
if (!sel.length)
return toast.err(3, L.fc_emore);
ev(e);
if (clgot(bcut, 'hide'))
return toast.err(3, L.fc_eperm);
@@ -5034,9 +5101,11 @@ var fileman = (function () {
catch (ex) { }
}, 1);
r.ccp = false;
r.clip = vps.slice(1);
try {
var stamp = Date.now();
vps = JSON.stringify([stamp].concat(vps));
vps = JSON.stringify(vps);
if (vps.length > 1024 * 1024)
throw 'a';
@@ -5050,6 +5119,60 @@ var fileman = (function () {
}
};
r.cpy = function (e) {
var sel = msel.getsel(),
stamp = Date.now(),
vps = [stamp, '//c'];
if (!sel.length)
return toast.err(3, L.fcp_emore);
ev(e);
var els = [], griden = thegrid.en;
for (var a = 0; a < sel.length; a++) {
vps.push(sel[a].vp);
if (sel.length < 100)
try {
if (griden)
els.push(QS('#ggrid>a[ref="' + sel[a].id + '"]'));
else
els.push(ebi(sel[a].id).closest('tr'));
clmod(els[a], 'fcut');
}
catch (ex) { }
}
setTimeout(function () {
try {
for (var a = 0; a < els.length; a++)
clmod(els[a], 'fcut', 1);
}
catch (ex) { }
}, 1);
if (vps.length < 3)
vps.pop();
r.ccp = true;
r.clip = vps.slice(2);
try {
vps = JSON.stringify(vps);
if (vps.length > 1024 * 1024)
throw 'a';
swrite('fman_clip', vps);
r.tx(stamp);
if (sel.length)
toast.inf(1.5, L.fcc_ok.format(sel.length));
}
catch (ex) {
toast.warn(30, L.fcc_warn.format(sel.length));
}
};
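// shape of the shared clipboard written by r.cut / r.cpy above (values
// hypothetical): a timestamp so other tabs can spot a fresh clip, the
// optional '//c' marker meaning "copy" rather than "cut", then the
// vpaths to paste later:
// fman_clip = [1699999999999, '//c', '/music/a.flac', '/music/b.flac']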
document.onpaste = function (e) {
var xfer = e.clipboardData || window.clipboardData;
if (!xfer || !xfer.files || !xfer.files.length)
@@ -5065,9 +5188,9 @@ var fileman = (function () {
return r.clip_up(files);
var src = r.clip.length == 1 ? r.clip[0] : vsplit(r.clip[0])[0],
msg = L.fp_both_m.format(r.clip.length, src, files.length);
msg = (r.ccp ? L.fcp_both_m : L.fp_both_m).format(r.clip.length, src, files.length);
modal.confirm(msg, r.paste, function () { r.clip_up(files); }, null, L.fp_both_b);
modal.confirm(msg, r.paste, function () { r.clip_up(files); }, null, (r.ccp ? L.fcp_both_b : L.fp_both_b));
};
r.clip_up = function (files) {
@@ -5123,65 +5246,147 @@ var fileman = (function () {
if (clgot(bpst, 'hide'))
return toast.err(3, L.fp_eperm);
var req = [],
exists = [],
var html = [
'<div>',
'<button id="rn_cancel" tt="' + L.frt_abrt + '</button>',
'<button id="rn_apply">✅ ' + L.fp_apply + '</button>',
' &nbsp; src: ' + esc(r.clip[0].replace(/[^/]+$/, '')),
'</div>',
'<p id="cnmt"></p>',
'<div><table id="rn_f" class="m">',
'<tr><td>' + L.fr_lnew + '</td><td>' + L.fr_lold + '</td></tr>',
],
ui = false,
f = [],
indir = [],
srcdir = vsplit(r.clip[0])[0],
links = QSA('#files tbody td:nth-child(2) a');
for (var a = 0, aa = links.length; a < aa; a++)
indir.push(vsplit(noq_href(links[a]))[1]);
indir.push(uricom_dec(vsplit(noq_href(links[a]))[1]));
for (var a = 0; a < r.clip.length; a++) {
var found = false;
for (var b = 0; b < indir.length; b++) {
if (r.clip[a].endsWith('/' + indir[b])) {
exists.push(r.clip[a]);
found = true;
var t = {
'ok': true,
'src': r.clip[a],
'dst': uricom_dec(r.clip[a].split('/').pop()),
};
f.push(t);
for (var b = 0; b < indir.length; b++)
if (t.dst == indir[b]) {
t.ok = false;
ui = true;
}
}
if (!found)
req.push(r.clip[a]);
html.push('<tr' + (!t.ok ? ' class="ng"' : '') + '><td><input type="text" id="rn_new_' + a + '" value="' + esc(t.dst) + '" /></td><td><input type="text" id="rn_old_' + a + '" value="' + esc(t.dst) + '" readonly /></td></tr>');
}
if (exists.length)
toast.warn(30, L.fp_ename.format(exists.length) + '<ul>' + uricom_adec(exists, true).join('') + '</ul>');
if (!req.length)
return;
function paster() {
var xhr = new XHR(),
vp = req.shift();
if (!vp) {
toast.ok(2, L.fp_ok);
var t = f.shift();
if (!t) {
toast.ok(2, r.ccp ? L.fcp_ok : L.fp_ok);
treectl.goto();
r.tx(srcdir);
return;
}
toast.show('inf r', 0, esc(L.fp_busy.format(req.length + 1, uricom_dec(vp))));
if (!t.dst)
return paster();
var dst = get_evpath() + vp.split('/').pop();
toast.show('inf r', 0, esc((r.ccp ? L.fcp_busy : L.fp_busy).format(f.length + 1, uricom_dec(t.src))));
xhr.open('POST', vp + '?move=' + dst, true);
var xhr = new XHR(),
act = r.ccp ? '?copy=' : '?move=',
dst = get_evpath() + uricom_enc(t.dst);
xhr.open('POST', t.src + act + dst, true);
xhr.onload = xhr.onerror = paste_cb;
xhr.send();
}
function paste_cb() {
if (this.status !== 201) {
var msg = unpre(this.responseText);
toast.err(9, L.fp_err + msg);
toast.err(9, (r.ccp ? L.fcp_err : L.fp_err) + msg);
return;
}
paster();
}
modal.confirm(L.fp_confirm.format(req.length) + '<ul>' + uricom_adec(req, true).join('') + '</ul>', function () {
function okgo() {
paster();
jwrite('fman_clip', [Date.now()]);
}, null);
};
}
if (!ui) {
var src = [];
for (var a = 0; a < f.length; a++)
src.push(f[a].src);
return modal.confirm((r.ccp ? L.fcp_confirm : L.fp_confirm).format(f.length) + '<ul>' + uricom_adec(src, true).join('') + '</ul>', okgo, null);
}
var rui = ebi('rui');
if (!rui) {
rui = mknod('div', 'rui');
document.body.appendChild(rui);
}
html.push('</table>');
rui.innerHTML = html.join('\n');
tt.att(rui);
function rn_apply(e) {
for (var a = 0; a < f.length; a++)
if (!f[a].ok) {
toast.err(30, L.fp_emore);
return setcnmt(true);
}
rn_cancel(e);
okgo();
}
function rn_cancel(e) {
ev(e);
rui.parentNode.removeChild(rui);
}
ebi('rn_cancel').onclick = rn_cancel;
ebi('rn_apply').onclick = rn_apply;
var first_bad = 0;
function setcnmt(sel) {
var nbad = 0;
for (var a = 0; a < f.length; a++) {
if (f[a].ok)
continue;
if (!nbad)
first_bad = a;
nbad += 1;
}
ebi('cnmt').innerHTML = (r.ccp ? L.fcp_ename : L.fp_ename).format(nbad);
if (sel && nbad) {
var el = ebi('rn_new_' + first_bad);
el.focus();
el.setSelectionRange(0, el.value.lastIndexOf('.'), "forward");
}
}
setcnmt(true);
for (var a = 0; a < f.length; a++)
(function (a) {
var inew = ebi('rn_new_' + a);
inew.onkeydown = function (e) {
if (((e.code || e.key) + '').endsWith('Enter'))
return rn_apply();
};
inew.oninput = function (e) {
f[a].dst = this.value;
f[a].ok = true;
if (f[a].dst)
for (var b = 0; b < indir.length; b++)
if (indir[b] == this.value)
f[a].ok = false;
clmod(this.closest('tr'), 'ng', !f[a].ok);
setcnmt();
};
})(a);
}
function onmsg(msg) {
r.clip = null;
@@ -5219,6 +5424,7 @@ var fileman = (function () {
bren.onclick = r.rename;
bdel.onclick = r.delete;
bcut.onclick = r.cut;
bcpy.onclick = r.cpy;
bpst.onclick = r.paste;
bshr.onclick = r.share;
@@ -6041,6 +6247,19 @@ var thegrid = (function () {
toast.warn(10, L.ul_btnlk);
};
if (/[?&]grid\b/.exec(sloc0))
swrite('griden', /[?&]grid=0\b/.exec(sloc0) ? 0 : 1)
if (/[?&]thumb\b/.exec(sloc0))
swrite('thumbs', /[?&]thumb=0\b/.exec(sloc0) ? 0 : 1)
if (/[?&]imgs\b/.exec(sloc0)) {
var n = /[?&]imgs=0\b/.exec(sloc0) ? 0 : 1;
swrite('griden', n);
if (n)
swrite('thumbs', 1);
}
bcfg_bind(r, 'thumbs', 'thumbs', true, r.setdirty);
bcfg_bind(r, 'ihop', 'ihop', true);
bcfg_bind(r, 'vau', 'gridvau', false);
@@ -6120,8 +6339,6 @@ function tree_neigh(n) {
links[act].click();
else
treectl.treego.call(links[act]);
links[act].focus();
}
@@ -6219,9 +6436,6 @@ var ahotkeys = function (e) {
ae = document.activeElement,
aet = ae && ae != document.body ? ae.nodeName.toLowerCase() : '';
if (e.key == '?')
return hkhelp();
if (k == 'Escape' || k == 'Esc') {
ae && ae.blur();
tt.hide();
@@ -6302,15 +6516,24 @@ var ahotkeys = function (e) {
if (aet && aet != 'a' && aet != 'tr' && aet != 'td' && aet != 'div' && aet != 'pre')
return;
if (ctrl(e)) {
if (e.key == '?')
return hkhelp();
if (!e.shiftKey && ctrl(e)) {
var sel = window.getSelection && window.getSelection() || {};
sel = sel && !sel.isCollapsed && sel.direction != 'none';
if (k == 'KeyX' || k == 'x')
return fileman.cut();
return fileman.cut(e);
if ((k == 'KeyC' || k == 'c') && !sel)
return fileman.cpy(e);
if (k == 'KeyV' || k == 'v')
return fileman.d_paste();
return fileman.d_paste(e);
if (k == 'KeyK' || k == 'k')
return fileman.delete();
return fileman.delete(e);
return;
}
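// note on the '!sel' guard above: if the user has highlighted some text
// on the page, ctrl-C must keep its native copy-text behavior, so
// fileman.cpy only fires while the text-selection is collapsed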
@@ -7226,6 +7449,7 @@ var treectl = (function () {
r.reqls(href, true);
r.dir_cb = tree_scrollto;
thegrid.setvis(true);
clmod(this, 'ld', 1);
}
r.reqls = function (url, hpush, back, hydrate) {
@@ -7345,6 +7569,9 @@ var treectl = (function () {
var lg0 = res.logues ? res.logues[0] || "" : "",
lg1 = res.logues ? res.logues[1] || "" : "",
mds = res.readmes && treectl.ireadme,
md0 = mds ? res.readmes[0] || "" : "",
md1 = mds ? res.readmes[1] || "" : "",
dirchg = get_evpath() != cdir;
if (lg1 === Ls.eng.f_empty)
@@ -7354,9 +7581,14 @@ var treectl = (function () {
if (dirchg)
sandbox(ebi('epi'), sb_lg, '', lg1);
clmod(ebi('pro'), 'mdo');
clmod(ebi('epi'), 'mdo');
if (res.readme && treectl.ireadme)
show_readme(res.readme);
if (md0)
show_readme(md0, 0);
if (md1)
show_readme(md1, 1);
else if (!dirchg)
sandbox(ebi('epi'), sb_lg, '', lg1);
@@ -7372,6 +7604,9 @@ var treectl = (function () {
r.ls_cb = null;
fun();
}
if (window.have_shr && QS('#op_unpost.act') && (cdir.startsWith(SR + have_shr) || get_evpath().startsWith(SR + have_shr)))
goto('unpost');
}
r.chk_index_html = function (top, res) {
@@ -8877,14 +9112,17 @@ function set_tabindex() {
}
function show_readme(md) {
if (!treectl.ireadme)
return sandbox(ebi('epi'), '', '', 'a');
function show_readme(md, n) {
var tgt = ebi(n ? 'epi' : 'pro');
show_md(md, 'README.md', ebi('epi'));
if (!treectl.ireadme)
return sandbox(tgt, '', '', 'a');
show_md(md, n ? 'README.md' : 'PREADME.md', tgt);
}
if (readme)
show_readme(readme);
for (var a = 0; a < readmes.length; a++)
if (readmes[a])
show_readme(readmes[a], a);
function sandbox(tgt, rules, cls, html) {
@@ -9104,7 +9342,7 @@ var unpost = (function () {
r.me = me;
}
var q = SR + '/?ups';
var q = get_evpath() + '?ups';
if (filt.value)
q += '&filter=' + uricom_enc(filt.value, true);

copyparty/web/iiam.gif (new binary file, 230 B)

@@ -90,6 +90,10 @@ table {
text-align: left;
white-space: nowrap;
}
.vols td:empty,
.vols th:empty {
padding: 0;
}
.num {
border-right: 1px solid #bbb;
}
@@ -222,3 +226,6 @@ html.bz {
color: #bbd;
background: #11121d;
}
html.bz .vols img {
filter: sepia(0.8) hue-rotate(180deg);
}


@@ -44,6 +44,18 @@
</table>
{%- endif %}
{%- if dls %}
<h1 id="ae">active downloads:</h1>
<table class="vols">
<thead><tr><th>%</th><th>sent</th><th>speed</th><th>eta</th><th>idle</th><th></th><th>dir</th><th>file</th></tr></thead>
<tbody>
{% for u in dls %}
<tr><td>{{ u[0] }}</td><td>{{ u[1] }}</td><td>{{ u[2] }}</td><td>{{ u[3] }}</td><td>{{ u[4] }}</td><td>{{ u[5] }}</td><td><a href="{{ u[6] }}">{{ u[7]|e }}</a></td><td>{{ u[8] }}</td></tr>
{% endfor %}
</tbody>
</table>
{%- endif %}
{%- if avol %}
<h1>admin panel:</h1>
<table><tr><td> <!-- hehehe -->
@@ -129,11 +141,20 @@
{% if k304 or k304vis %}
{% if k304 %}
<li><a id="h" href="{{ r }}/?k304=n">disable k304</a> (currently enabled)
<li><a id="h" href="{{ r }}/?cc&setck=k304=n">disable k304</a> (currently enabled)
{%- else %}
<li><a id="i" href="{{ r }}/?k304=y" class="r">enable k304</a> (currently disabled)
<li><a id="i" href="{{ r }}/?cc&setck=k304=y" class="r">enable k304</a> (currently disabled)
{% endif %}
<blockquote id="j">enabling this will disconnect your client on every HTTP 304, which can prevent some buggy proxies from getting stuck (suddenly not loading pages), <em>but</em> it will also make things slower in general</blockquote></li>
<blockquote id="j">enabling k304 will disconnect your client on every HTTP 304, which can prevent some buggy proxies from getting stuck (suddenly not loading pages), <em>but</em> it will also make things slower in general</blockquote></li>
{% endif %}
{% if no304 or no304vis %}
{% if no304 %}
<li><a id="ab" href="{{ r }}/?cc&setck=no304=n">disable no304</a> (currently enabled)
{%- else %}
<li><a id="ac" href="{{ r }}/?cc&setck=no304=y" class="r">enable no304</a> (currently disabled)
{% endif %}
<blockquote id="ad">enabling no304 will disable all caching; try this if k304 wasn't enough. This will waste a huge amount of network traffic!</blockquote></li>
{% endif %}
<li><a id="k" href="{{ r }}/?reset" class="r" onclick="localStorage.clear();return true">reset client settings</a></li>


@@ -34,6 +34,10 @@ var Ls = {
"ta2": "gjenta for å bekrefte nytt passord:",
"ta3": "fant en skrivefeil; vennligst prøv igjen",
"aa1": "innkommende:",
"ab1": "skru av no304",
"ac1": "skru på no304",
"ad1": "no304 stopper all bruk av cache. Hvis ikke k304 var nok, prøv denne. Vil mangedoble dataforbruk!",
"ae1": "utgående:",
},
"eng": {
"d2": "shows the state of all active threads",
@@ -80,6 +84,10 @@ var Ls = {
"ta2": "重复以确认新密码:",
"ta3": "发现拼写错误;请重试",
"aa1": "正在接收的文件:", //m
"ab1": "关闭 k304",
"ac1": "开启 k304",
"ad1": "启用 no304 将禁用所有缓存;如果 k304 不够,可以尝试此选项。这将消耗大量的网络流量!", //m
"ae1": "正在下载:", //m
}
};


@@ -73,9 +73,9 @@ html {
position: absolute;
height: 1px;
top: 1px;
right: 1%;
width: 99%;
animation: toastt var(--tmtime) steps(var(--tmstep)) forwards;
right: 1px;
left: 1px;
animation: toastt var(--tmtime) 0.07s steps(var(--tmstep)) forwards;
transform-origin: right;
}
@keyframes toastt {


@@ -17,10 +17,14 @@ function goto_up2k() {
var up2k = null,
up2k_hooks = [],
hws = [],
hws_ok = 0,
hws_ng = false,
sha_js = WebAssembly ? 'hw' : 'ac', // ff53,c57,sa11
m = 'will use ' + sha_js + ' instead of native sha512 due to';
try {
if (sread('nosubtle') || window.nosubtle)
throw 'chickenbit';
var cf = crypto.subtle || crypto.webkitSubtle;
cf.digest('SHA-512', new Uint8Array(1)).then(
function (x) { console.log('sha-ok'); up2k = up2k_init(cf); },
@@ -242,7 +246,7 @@ function U2pvis(act, btns, uc, st) {
p = bd * 100.0 / sz,
nb = bd - bd0,
spd = nb / (td / 1000),
eta = (sz - bd) / spd;
eta = spd ? (sz - bd) / spd : 3599;
return [p, s2ms(eta), spd / (1024 * 1024)];
};
@@ -853,8 +857,13 @@ function up2k_init(subtle) {
setmsg(suggest_up2k, 'msg');
var u2szs = u2sz.split(','),
u2sz_min = parseInt(u2szs[0]),
u2sz_tgt = parseInt(u2szs[1]),
u2sz_max = parseInt(u2szs[2]);
var parallel_uploads = ebi('nthread').value = icfg_get('nthread', u2j),
stitch_tgt = ebi('u2szg').value = icfg_get('u2sz', u2sz.split(',')[1]),
stitch_tgt = ebi('u2szg').value = icfg_get('u2sz', u2sz_tgt),
uc = {},
fdom_ctr = 0,
biggest_file = 0;
@@ -1353,6 +1362,10 @@ function up2k_init(subtle) {
for (var a = 0; a < Math.min(navigator.hardwareConcurrency || 4, 16); a++)
hws.push(new Worker(SR + '/.cpr/w.hash.js?_=' + TS));
if (!subtle)
for (var a = 0; a < hws.length; a++)
hws[a].postMessage('nosubtle');
console.log(hws.length + " hashers");
}
@@ -1863,10 +1876,12 @@ function up2k_init(subtle) {
function chill(t) {
var now = Date.now();
if ((t.coolmul || 0) < 2 || now - t.cooldown < t.coolmul * 700)
if ((t.coolmul || 0) < 5 || now - t.cooldown < t.coolmul * 700)
t.coolmul = Math.min((t.coolmul || 0.5) * 2, 32);
t.cooldown = Math.max(t.cooldown || 1, Date.now() + t.coolmul * 1000);
var cd = now + 1000 * (t.coolmul + Math.random() * 4 + 2);
t.cooldown = Math.floor(Math.max(cd, t.cooldown || 1));
return t;
}
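// standalone sketch (not part of copyparty) tracing the backoff curve of
// chill() above: coolmul doubles 0.5 -> 1 -> 2 -> ... and caps at 32, so
// retry delays grow from roughly 3-7 sec up to a ceiling of 34-38 sec
var sim = {};
for (var i = 1; i <= 8; i++) {
    var now = Date.now();
    if ((sim.coolmul || 0) < 5 || now - sim.cooldown < sim.coolmul * 700)
        sim.coolmul = Math.min((sim.coolmul || 0.5) * 2, 32);
    var cd = now + 1000 * (sim.coolmul + Math.random() * 4 + 2);
    sim.cooldown = Math.floor(Math.max(cd, sim.cooldown || 1));
    console.log('retry', i, 'wait-ms', sim.cooldown - now);
}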
/////
@@ -1951,7 +1966,7 @@ function up2k_init(subtle) {
pvis.setab(t.n, nchunks);
pvis.move(t.n, 'bz');
if (hws.length && uc.hashw && (nchunks > 1 || document.visibilityState == 'hidden'))
if (hws.length && !hws_ng && uc.hashw && (nchunks > 1 || document.visibilityState == 'hidden'))
// resolving subtle.digest w/o worker takes 1sec on blur if the actx hack breaks
return wexec_hash(t, chunksize, nchunks);
@@ -2060,16 +2075,27 @@ function up2k_init(subtle) {
free = [],
busy = {},
nbusy = 0,
init = 0,
hashtab = {},
mem = (MOBILE ? 128 : 256) * 1024 * 1024;
if (!hws_ok)
init = setTimeout(function() {
hws_ng = true;
toast.warn(30, 'webworkers failed to start\n\nwill be a bit slower due to\nhashing on main-thread');
apop(st.busy.hash, t);
st.todo.hash.unshift(t);
exec_hash();
}, 5000);
for (var a = 0; a < hws.length; a++) {
var w = hws[a];
free.push(w);
w.onmessage = onmsg;
if (init)
w.postMessage('ping');
if (mem > 0)
free.push(w);
mem -= chunksize;
if (mem <= 0)
break;
}
function go_next() {
@@ -2099,6 +2125,12 @@ function up2k_init(subtle) {
d = d.data;
var k = d[0];
if (k == "pong")
if (++hws_ok == hws.length) {
clearTimeout(init);
go_next();
}
if (k == "panic")
return vis_exh(d[1], 'up2k.js', '', '', d[1]);
@@ -2161,7 +2193,8 @@ function up2k_init(subtle) {
tasker();
}
}
go_next();
if (!init)
go_next();
}
/////
@@ -2259,8 +2292,7 @@ function up2k_init(subtle) {
console.log('handshake onerror, retrying', t.name, t);
apop(st.busy.handshake, t);
st.todo.handshake.unshift(t);
t.cooldown = Date.now() + 5000 + Math.floor(Math.random() * 3000);
st.todo.handshake.unshift(chill(t));
t.keepalive = keepalive;
};
var orz = function (e) {
@@ -2273,8 +2305,7 @@ function up2k_init(subtle) {
}
catch (ex) {
apop(st.busy.handshake, t);
st.todo.handshake.unshift(t);
t.cooldown = Date.now() + 5000 + Math.floor(Math.random() * 3000);
st.todo.handshake.unshift(chill(t));
var txt = t.t_uploading ? L.u_ehsfin : t.srch ? L.u_ehssrch : L.u_ehsinit;
return toast.err(0, txt + '\n\n' + L.badreply + ':\n\n' + unpre(xhr.responseText));
}
@@ -2453,6 +2484,7 @@ function up2k_init(subtle) {
else {
pvis.seth(t.n, 1, "ERROR");
pvis.seth(t.n, 2, L.u_ehstmp, t);
apop(st.busy.handshake, t);
var err = "",
cls = "ERROR",
@@ -2466,7 +2498,6 @@ function up2k_init(subtle) {
var penalty = rsp.replace(/.*rate-limit /, "").split(' ')[0];
console.log("rate-limit: " + penalty);
t.cooldown = Date.now() + parseFloat(penalty) * 1000;
apop(st.busy.handshake, t);
st.todo.handshake.unshift(t);
return;
}
@@ -2489,8 +2520,6 @@ function up2k_init(subtle) {
cls = 'defer';
}
}
if (rsp.indexOf('server HDD is full') + 1)
return toast.err(0, L.u_ehsdf + "\n\n" + rsp.replace(/.*; /, ''));
if (err != "") {
if (!t.t_uploading)
@@ -2500,10 +2529,15 @@ function up2k_init(subtle) {
pvis.seth(t.n, 2, err);
pvis.move(t.n, 'ng');
apop(st.busy.handshake, t);
tasker();
return;
}
st.todo.handshake.unshift(chill(t));
if (rsp.indexOf('server HDD is full') + 1)
return toast.err(0, L.u_ehsdf + "\n\n" + rsp.replace(/.*; /, ''));
err = t.t_uploading ? L.u_ehsfin : t.srch ? L.u_ehssrch : L.u_ehsinit;
xhrchk(xhr, err + "\n\nfile: " + t.name + "\n\nerror ", "404, target folder not found", "warn", t);
}
@@ -2574,8 +2608,7 @@ function up2k_init(subtle) {
nparts = upt.nparts,
pcar = nparts[0],
pcdr = nparts[nparts.length - 1],
snpart = pcar == pcdr ? pcar : ('' + pcar + '~' + pcdr),
tries = 0;
maxsz = (u2sz_max > 1 ? u2sz_max : 2040) * 1024 * 1024;
if (t.done)
return console.log('done; skip chunk', t.name, t);
@@ -2595,6 +2628,30 @@ function up2k_init(subtle) {
if (cdr >= t.size)
cdr = t.size;
if (cdr - car <= maxsz)
return upload_sub(t, upt, pcar, pcdr, car, cdr, chunksize, car, []);
var car0 = car, subs = [];
while (car < cdr) {
subs.push([car, Math.min(cdr, car + maxsz)]);
car += maxsz;
}
upload_sub(t, upt, pcar, pcdr, 0, 0, chunksize, car0, subs);
}
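// e.g. with an assumed 96 MiB chunk and a 64 MiB maxsz, the loop above
// yields subs = [[0, 64Mi], [64Mi, 96Mi]]; upload_sub() then POSTs each
// range separately, with X-Up2k-Subc telling the server the offset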
function upload_sub(t, upt, pcar, pcdr, car, cdr, chunksize, car0, subs) {
var nparts = upt.nparts,
is_sub = subs.length;
if (is_sub) {
var x = subs.shift();
car = x[0];
cdr = x[1];
}
var snpart = is_sub ? ('' + pcar + '(' + (car-car0) +'+'+ (cdr-car)) :
pcar == pcdr ? pcar : ('' + pcar + '~' + pcdr);
var orz = function (xhr) {
st.bytes.inflight -= xhr.bsent;
var txt = unpre((xhr.response && xhr.response.err) || xhr.responseText);
@@ -2608,6 +2665,10 @@ function up2k_init(subtle) {
return;
}
if (xhr.status == 200) {
car = car0;
if (subs.length)
return upload_sub(t, upt, pcar, pcdr, 0, 0, chunksize, car0, subs);
var bdone = cdr - car;
for (var a = pcar; a <= pcdr; a++) {
pvis.prog(t, a, Math.min(bdone, chunksize));
@@ -2616,6 +2677,7 @@ function up2k_init(subtle) {
st.bytes.finished += cdr - car;
st.bytes.uploaded += cdr - car;
t.bytes_uploaded += cdr - car;
t.cooldown = t.coolmul = 0;
st.etac.u++;
st.etac.t++;
}
@@ -2674,7 +2736,7 @@ function up2k_init(subtle) {
toast.warn(9.98, L.u_cuerr.format(snpart, Math.ceil(t.size / chunksize), t.name), t);
t.nojoin = t.nojoin || t.postlist.length; // maybe rproxy postsize limit
console.log('chunkpit onerror,', ++tries, t.name, t);
console.log('chunkpit onerror,', t.name, t);
orz2(xhr);
};
@@ -2692,9 +2754,13 @@ function up2k_init(subtle) {
xhr.open('POST', t.purl, true);
xhr.setRequestHeader("X-Up2k-Hash", ctxt);
xhr.setRequestHeader("X-Up2k-Wark", t.wark);
if (is_sub)
xhr.setRequestHeader("X-Up2k-Subc", car - car0);
xhr.setRequestHeader("X-Up2k-Stat", "{0}/{1}/{2}/{3} {4}/{5} {6}".format(
pvis.ctr.ok, pvis.ctr.ng, pvis.ctr.bz, pvis.ctr.q, btot, btot - bfin,
st.eta.t.split(' ').pop()));
st.eta.t.indexOf('/s, ')+1 ? st.eta.t.split(' ').pop() : 'x'));
xhr.setRequestHeader('Content-Type', 'application/octet-stream');
if (xhr.overrideMimeType)
xhr.overrideMimeType('Content-Type', 'application/octet-stream');
@@ -2812,13 +2878,13 @@ function up2k_init(subtle) {
}
var read_u2sz = function () {
var el = ebi('u2szg'), n = parseInt(el.value), dv = u2sz.split(',');
var el = ebi('u2szg'), n = parseInt(el.value);
stitch_tgt = n = (
isNaN(n) ? dv[1] :
n < dv[0] ? dv[0] :
n > dv[2] ? dv[2] : n
isNaN(n) ? u2sz_tgt :
n < u2sz_min ? u2sz_min :
n > u2sz_max ? u2sz_max : n
);
if (n == dv[1]) sdrop('u2sz'); else swrite('u2sz', n);
if (n == u2sz_tgt) sdrop('u2sz'); else swrite('u2sz', n);
if (el.value != n) el.value = n;
};
ebi('u2szg').addEventListener('blur', read_u2sz);
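// standalone sketch of the clamp above, assuming the server was started
// with --u2sz 1,64,64 (so u2sz_min=1, u2sz_tgt=64, u2sz_max=64):
function clamp_u2sz(n, min, tgt, max) {
    return isNaN(n) ? tgt : n < min ? min : n > max ? max : n;
}
console.log(clamp_u2sz(100, 1, 64, 64)); // 64 (capped at max)
console.log(clamp_u2sz(NaN, 1, 64, 64)); // 64 (fell back to target)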


@@ -1527,21 +1527,26 @@ var toast = (function () {
if (sec)
te = setTimeout(r.hide, sec * 1000);
var tb = ebi('toastt');
if (same && delta < 1000 && tb) {
tb.style.animation = 'none';
tb.offsetHeight;
tb.style.animation = null;
if (same && delta < 1000) {
var tb = ebi('toastt');
if (tb) {
tb.style.animation = 'none';
tb.offsetHeight;
tb.style.animation = null;
}
return;
}
if (txt.indexOf('<body>') + 1)
txt = txt.slice(0, txt.indexOf('<')) + ' [...]';
setcvar('--tmtime', sec + 's');
setcvar('--tmstep', sec * 15);
obj.innerHTML = '<div id="toastt"></div><a href="#" id="toastc">x</a><div id="toastb">' + lf2br(txt) + '</div>';
var html = '';
if (sec) {
setcvar('--tmtime', (sec - 0.15) + 's');
setcvar('--tmstep', Math.floor(sec * 20));
html += '<div id="toastt"></div>';
}
obj.innerHTML = html + '<a href="#" id="toastc">x</a><div id="toastb">' + lf2br(txt) + '</div>';
obj.className = cl;
sec += obj.offsetWidth;
obj.className += ' vis';
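// note on the 'tb.offsetHeight' trick above: reading offsetHeight forces
// a synchronous reflow between 'animation: none' and restoring the
// animation, which restarts the countdown bar's animation from zero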


@@ -20,6 +20,7 @@ catch (ex) {
function load_fb() {
subtle = null;
importScripts('deps/sha512.hw.js');
console.log('using fallback hasher');
}
@@ -29,6 +30,12 @@ var reader = null,
onmessage = (d) => {
if (d.data == 'nosubtle')
return load_fb();
if (d.data == 'ping')
return postMessage(['pong']);
if (busy)
return postMessage(["panic", 'worker got another task while busy']);


@@ -1,3 +1,195 @@
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1027-0751 `v1.15.10` temporary upload links
## 🧪 new features
* [shares](https://github.com/9001/copyparty#shares) can now be uploaded into, and unpost works too 4bdcbc1c
* useful to create temporary URLs for other people to upload to
* shares can be write-only, so visitors can't browse or see any files
* #110 HTTP 304 (caching):
* support `If-Range` for HTTP 206 159f51b1
* add server-side and client-side options to force-disable cache dd6dbdd9
* `--no304=1` shows a button in the controlpanel to disable caching
* `--no304=2` makes that button auto-enabled
* even when `--no304` is not specified, accessing the URL `/?setck=no304=y` force-disables cache
* when cache is force-disabled, browsers will waste a lot of network traffic / data usage
* might help to avoid bugs in browsers or proxies, for example if media files suddenly stop loading
* but such bugs should be exceedingly rare, so do not enable this unless actually necessary
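* a rough client-side sketch of the effect (the filename is hypothetical): once the no304 cookie is set, conditional requests stop returning 304
  ```js
  await fetch('/?setck=no304=y'); // force-disable caching for this browser
  const r = await fetch('/some.css', {
      headers: { 'If-Modified-Since': new Date().toUTCString() },
  });
  console.log(r.status); // expected 200 (full body) instead of 304
  ```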
## 🩹 bugfixes
* #110 HTTP 304 (caching):
* remove `Content-Length` and `Content-Type` response headers from 304 replies 91240236
* browsers don't need these, and some middlewares might get confused if they're present
* #113 fix crash on startup if `-j0` was combined with `--ipa` or `--ipu` 3a0d882c
* #111 fix javascript crash if `--u2sz` was set to an invalid value b13899c6
## 🔧 other changes
* #110 HTTP 304 (caching):
* never automatically enable k304 because the `Vary` header killed support for caching in msie anyways 63013cc5
* change time comparison for `If-Modified-Since` to require an exact timestamp match, instead of the intended "modified since". This technically violates the http-spec, but should be safer for backdating file mtimes 159f51b1
* new option `--ohead` to log response headers 7678a91b
* added [nintendo 3ds](https://github.com/user-attachments/assets/88deab3d-6cad-4017-8841-2f041472b853) to the [list of supported browsers](https://github.com/9001/copyparty#browser-support) cb81f0ad
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1018-2342 `v1.15.9` rss server
## 🧪 new features
* #109 [rss feed generator](https://github.com/9001/copyparty#rss-feeds) 7ffd805a
* monitor folders recursively with RSS readers
## 🩹 bugfixes
* #107 `--df` diskspace limits was incompatible with webdav 2a570bb4
* #108 up2k javascript crash (only affected the Chinese translation) a7e2a0c9
## 🔧 other changes
* up2k: detect buggy webworkers 5ca8f070
* up2k: improve upload retry/timeout logic a9b4436c
* js: make handshake retries more aggressive
* u2c: reduce chunks timeout + ^
* main: reduce tcp timeout to 128sec (js is 42s)
* httpcli: less confusing log messages
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1016-2153 `v1.15.8` the sky is the limit
## 🧪 new features
* subchunks; avoid the Cloudflare filesize limit entirely fc8298c4 48147c07
* the previous max filesize was `383.9 GiB`, now only the sky is the limit
* if you're using another proxy with a more restrictive limit than Cloudflare's 100 MiB, for example 64 MiB, then `--u2sz 1,64,64`
* m4v videos can be played in the gallery ff0a71f2
## 🩹 bugfixes
* up2k: uploading duplicate files could initially fail (but would succeed after a few automatic retries) due to a toctou 114b71b7
* [u2c](https://github.com/9001/copyparty/blob/hovudstraum/bin/README.md#u2cpy) / commandline uploader:
* directory scanner got stuck if it found a FIFO cba1878b
* excessive number of FDs when uploading large files 65a2b6a2
* chunksize calculation; only affected files exactly 128 GiB large a2e037d6
* support filenames with newlines and invalid utf-8 b2770a20
* invalid utf-8 is replaced by `?` when they hit the server
## 🔧 other changes
* don't show the toast countdown bar if duration is infinite 22dfc6ec
* chickenbit to disable the browser's built-in sha512 implementation and force the bundled wasm instead d715479e
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1013-2244 `v1.15.7` the 'a' in "ip address" stands for authentication
## 🧪 new features
* [cidr-based autologin](https://github.com/9001/copyparty#ip-auth) b7f9bf5a
* map a cidr ip-range to a username; anyone connecting from that ip-range will autologin as that user
* thx to @byteturtle for the idea!
* [u2c](https://github.com/9001/copyparty/blob/hovudstraum/bin/README.md#u2cpy) / commandline uploader:
* option `--chs` to list individual chunk hashes cf1b7562
* fix progress indicator when resuming an upload 53ffd245
* up2k: verbose logging of detected/corrected bitflips ee628363
* *foreshadowing intensifies* (story still developing)
## 🩹 bugfixes
* up2k with database disabled / running without `-e2d` 705f598b
* respect `noforget` when loading snaps
* ...but actually forget deleted files otherwise
* snap-loader adds empty need/hash entries as necessary
## 🔧 other changes
* authed users can now unpost recent uploads of unauthed users from the same IP 22b58e31
* would have become problematic now that cidr-based autologin is a thing
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1011-2256 `v1.15.6` preadme
## 🧪 new features
* #105 files named `preadme.md` appear at the top of directory listings 1d68acf8
* entirely disable dedup with `--no-clone` / volflag `noclone` 3d7facd7 6b7ebdb7
* even if a file exists for sure on the server HDD, let the client continue uploading instead of reusing the existing data
* using this option almost never makes sense, unless you're using something like S3 Glacier storage where reading is really expensive but writing is cheap
## 🩹 bugfixes
* up2k jank after detecting a bitflip or network glitch 4a4ec88d
* instead of resuming the interrupted upload like it should, the upload client could get stuck or start over
* #104 support viewing dotfile documents when dotfiles are hidden 9ccd8bb3
* fix a buttload of typos 6adc778d 1e7697b5
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1005-1803 `v1.15.5` pyz all the cores
## 🩹 bugfixes
* the pkgres / pyz changes in 1.15.4 broke multiprocessing c3985537
## 🔧 other changes
* pyz: drop easymde to save some bytes + make it a tiny bit faster
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-1004-2319 `v1.15.4` hermetic
## 🧪 new features
* [u2c](https://github.com/9001/copyparty/tree/hovudstraum/bin#u2cpy) (commandline uploader):
* remove all dependencies; now entirely self-contained 9daeed92
* made it 3x faster for small files, 2x faster in general
* improve `-x` behavior to not traverse into excluded folders b9c5c7bb
* [partyfuse](https://github.com/9001/copyparty/tree/hovudstraum/bin#partyfusepy) (fuse client; mount a copyparty server as a local filesystem):
* 9x faster directory listings 03f0f994
* 4x faster downloads on high-latency connections 847a2bdc
* embed `fuse.py` (its only dependency) -- can be downloaded from the connect-page 44f2b63e
* support mounting nginx and iis servers too, not just copyparty c81e8984
* reduce ram usage down to 10% when running without `-e2d` 88a1c5ca
* does not affect servers with `-e2d` enabled (was already optimal)
* share folders as qr-codes e4542064
* when creating a share, you get a qr-code for quick access
* buttons in the shares controlpanel to reshow it, optionally with the password embedded into the qr-code
* #98 read embedded webdeps and templates with `pkg_resources`; thx @shizmob! a462a644 d866841c
* [copyparty.pyz](https://github.com/9001/copyparty/releases/latest/download/copyparty.pyz) now runs straight from the source file without unpacking anything to disk
* ...and is now much slower at returning resource GETs, but that is fine
* og / opengraph / discord embeds: support filekeys ae982006
* add option for natural sorting; thx @oshiteku! 9804f25d
* eyecandy timer bar on toasts 0dfe1d5b
* smb-server: impacket 0.12 is out! dc4d0d8e
* now *possible* to list folders with more than 400 files (it's REALLY slow)
## 🩹 bugfixes
* webdav:
* support `<allprop/>` in propfind dc157fa2
* list volumes when root is unmapped 480ac254
* previously, clients couldn't connect to the root of a copyparty server unless a volume existed at `/`
* #101 show `.prologue.html` and `.epilogue.html` in directory listings even if user cannot see hidden files 21be82ef
* #100 confusing toast when pressing F2 without selecting anything 2715ee6c
* fix prometheus metrics 678675a9
## 🔧 other changes
* #100 allow uploading `.prologue.html` and `.epilogue.html` 19a5985f
* #102 make translation easier when running in docker
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2024-0916-0107 `v1.15.3` incoming eta


@@ -140,6 +140,7 @@ authenticate using header `Cookie: cppwd=foo` or url param `&pw=foo`
| GET | `?tar&j` | pregenerate jpg thumbnails |
| GET | `?tar&p` | pregenerate audio waveforms |
| GET | `?shares` | list your shared files/folders |
| GET | `?dls` | show active downloads (do this as admin) |
| GET | `?ups` | show recent uploads from your IP |
| GET | `?ups&filter=f` | ...where URL contains `f` |
| GET | `?mime=foo` | specify return mimetype `foo` |
@@ -163,6 +164,7 @@ authenticate using header `Cookie: cppwd=foo` or url param `&pw=foo`
| method | params | result |
|--|--|--|
| POST | `?copy=/foo/bar` | copy the file/folder at URL to /foo/bar |
| POST | `?move=/foo/bar` | move/rename the file/folder at URL to /foo/bar |
| method | params | body | result |
@@ -208,6 +210,12 @@ upload modifiers:
| method | params | result |
|--|--|--|
| GET | `?pw=x` | logout |
| GET | `?grid` | ui: show grid-view |
| GET | `?imgs` | ui: show grid-view with thumbnails |
| GET | `?grid=0` | ui: show list-view |
| GET | `?imgs=0` | ui: show list-view |
| GET | `?thumb` | ui, grid-mode: show thumbnails |
| GET | `?thumb=0` | ui, grid-mode: show icons |
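a minimal sketch of scripting the `?copy` / `?move` endpoints above (paths and password are hypothetical):

```js
// copy a file, then rename the copy; both endpoints reply 201 on success
// (plain-ascii paths assumed; uri-encode the path segments otherwise)
async function cpmv(src, act, dst) {
    const r = await fetch(src + '?' + act + '=' + dst + '&pw=hunter2', { method: 'POST' });
    if (r.status != 201) throw new Error(act + ' failed: ' + await r.text());
}
await cpmv('/docs/a.txt', 'copy', '/backup/a.txt');
await cpmv('/backup/a.txt', 'move', '/backup/b.txt');
```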
# event hooks
@@ -279,6 +287,7 @@ the rest is mostly optional; if you need a working env for vscode or similar
python3 -m venv .venv
. .venv/bin/activate
pip install jinja2 strip_hints # MANDATORY
pip install argon2-cffi # password hashing
pip install mutagen # audio metadata
pip install pyftpdlib # ftp server
pip install partftpy # tftp server


@@ -257,6 +257,8 @@ for d in /usr /var; do find $d -type f -size +30M 2>/dev/null; done | while IFS=
# up2k worst-case testfiles: create 64 GiB (256 x 256 MiB) of sparse files; each file takes 1 MiB disk space; each 1 MiB chunk is globally unique
for f in {0..255}; do echo $f; truncate -s 256M $f; b1=$(printf '%02x' $f); for o in {0..255}; do b2=$(printf '%02x' $o); printf "\x$b1\x$b2" | dd of=$f bs=2 seek=$((o*1024*1024)) conv=notrunc 2>/dev/null; done; done
# create 6.06G file with 16 bytes of unique data at start+end of each 32M chunk
sz=6509559808; truncate -s $sz f; csz=33554432; sz=$((sz/16)); step=$((csz/16)); ofs=0; while [ $ofs -lt $sz ]; do dd if=/dev/urandom of=f bs=16 count=2 seek=$ofs conv=notrunc iflag=fullblock; [ $ofs = 0 ] && ofs=$((ofs+step-1)) || ofs=$((ofs+step)); done
# py2 on osx
brew install python@2


@@ -58,7 +58,9 @@ currently up to date with [awesome-selfhosted](https://github.com/awesome-selfho
* [h5ai](#h5ai)
* [autoindex](#autoindex)
* [miniserve](#miniserve)
* [pingvin-share](#pingvin-share)
* [briefly considered](#briefly-considered)
* [notes](#notes)
# recommendations
@@ -106,6 +108,7 @@ some softwares not in the matrixes,
* [h5ai](#h5ai)
* [autoindex](#autoindex)
* [miniserve](#miniserve)
* [pingvin-share](#pingvin-share)
symbol legend,
* `█` = absolutely
@@ -426,6 +429,10 @@ symbol legend,
| gimme-that | python | █ mit | 4.8 MB |
| ass | ts | █ isc | • |
| linx | go | ░ gpl3 | 20 MB |
| h5ai | php | █ mit | • |
| autoindex | go | █ mpl2 | 11 MB |
| miniserve | rust | █ mit | 2 MB |
| pingvin-share | node | █ bsd2 | 487 MB |
* `size` = binary (if available) or installed size of program and its dependencies
* copyparty size is for the [standalone python](https://github.com/9001/copyparty/releases/latest/download/copyparty-sfx.py) file; the [windows exe](https://github.com/9001/copyparty/releases/latest/download/copyparty.exe) is **6 MiB**
@@ -458,11 +465,13 @@ symbol legend,
## [hfs3](https://rejetto.com/hfs/)
* nodejs; cross-platform
* vfs with gui config, per-volume permissions
* tested locally, v0.53.2 on archlinux
* 🔵 uploads are resumable
* ⚠️ uploads are not segmented; max upload size 100 MiB on cloudflare
* ⚠️ uploads are not accelerated (copyparty is 3x faster across the atlantic)
* ⚠️ uploads are not integrity-checked
* ⚠️ copies the file after upload; need twice filesize free disk space
* ⚠️ uploading small files is decent; `107` files per sec (copyparty does `670`/sec, 6x faster)
* ⚠️ doesn't support crazy filenames
* ✅ config GUI
* ✅ download counter
@@ -471,11 +480,12 @@ symbol legend,
## [nextcloud](https://github.com/nextcloud/server)
* php, mariadb
* tested locally, [linuxserver/nextcloud](https://hub.docker.com/r/linuxserver/nextcloud) v30.0.2 (sqlite)
* ⚠️ [isolated on-disk file hierarchy] in per-user folders
* not that bad, can probably be remedied with bindmounts or maybe symlinks
* ⚠️ uploads not resumable / accelerated / integrity-checked
* ⚠️ on cloudflare: max upload size 100 MiB
* ⚠️ uploading small files is slow; `2.2` files per sec (copyparty does `87`/sec), tested locally with [linuxserver/nextcloud](https://hub.docker.com/r/linuxserver/nextcloud) (sqlite)
* ⚠️ uploading small files is slow; `4` files per sec (copyparty does `670`/sec, 160x faster)
* ⚠️ no write-only / upload-only folders
* ⚠️ http/webdav only; no ftp, zeroconf
* ⚠️ less awesome music player
@@ -491,11 +501,12 @@ symbol legend,
## [seafile](https://github.com/haiwen/seafile)
* c, mariadb
* tested locally, [official container](https://manual.seafile.com/latest/docker/deploy_seafile_with_docker/) v11.0.13
* ⚠️ [isolated on-disk file hierarchy](https://manual.seafile.com/maintain/seafile_fsck/), incompatible with other software
* *much worse than nextcloud* in that regard
* ⚠️ uploads not resumable / accelerated / integrity-checked
* ⚠️ on cloudflare: max upload size 100 MiB
* ⚠️ uploading small files is slow; `2.7` files per sec (copyparty does `87`/sec), tested locally with [official container](https://manual.seafile.com/docker/deploy_seafile_with_docker/)
* ⚠️ uploading small files is slow; `4.7` files per sec (copyparty does `670`/sec, 140x faster)
* ⚠️ no write-only / upload-only folders
* ⚠️ big folders cannot be zip-downloaded
* ⚠️ http/webdav only; no ftp, zeroconf
@@ -519,9 +530,11 @@ symbol legend,
## [dufs](https://github.com/sigoden/dufs)
* rust; cross-platform (windows, linux, macos)
* tested locally, v0.43.0 on archlinux (plain binary)
* ⚠️ uploads not resumable / accelerated / integrity-checked
* ⚠️ on cloudflare: max upload size 100 MiB
* ⚠️ across the atlantic, copyparty is 3x faster
* ⚠️ uploading small files is decent; `97` files per sec (copyparty does `670`/sec, 7x faster)
* ⚠️ doesn't support crazy filenames
* ✅ per-url access control (copyparty is per-volume)
* 🔵 basic but really snappy ui
@@ -564,10 +577,12 @@ symbol legend,
## [filebrowser](https://github.com/filebrowser/filebrowser)
* go; cross-platform (windows, linux, mac)
* tested locally, v2.31.2 on archlinux (plain binary)
* 🔵 uploads are resumable and segmented
* 🔵 multiple files are uploaded in parallel, but...
* ⚠️ big files are not accelerated (copyparty is 5x faster across the atlantic)
* ⚠️ uploads are not integrity-checked
* ⚠️ uploading small files is decent; `69` files per sec (copyparty does `670`/sec, 9x faster)
* ⚠️ http only; no webdav / ftp / zeroconf
* ⚠️ doesn't support crazy filenames
* ⚠️ no directory tree nav
@@ -605,6 +620,7 @@ symbol legend,
* ⚠️ no zeroconf (mdns/ssdp)
* ⚠️ impractical directory URLs
* ⚠️ AGPL licensed
* 🔵 uploading small files is fast; `340` files per sec (copyparty does `670`/sec)
* 🔵 ftp, ftps, webdav
* ✅ sftp server
* ✅ settings gui
@@ -719,7 +735,31 @@ symbol legend,
* 🔵 upload, tar/zip download, qr-code
* ✅ faster at loading huge folders
## [pingvin-share](https://github.com/stonith404/pingvin-share)
* node; linux (docker)
* mainly for uploads, not a general file server
* 🔵 uploads are segmented (avoids cloudflare size limit)
* 🔵 segments are written directly to target file (HDD-friendly)
* ⚠️ uploads not resumable after a browser or laptop crash
* ⚠️ uploads are not accelerated / integrity-checked
* ⚠️ across the atlantic, copyparty is 3x faster
* measured with chunksize 96 MiB; pingvin's default 10 MiB is much slower
* ⚠️ can't upload folders with subfolders
* ⚠️ no upload ETA
* 🔵 expiration times, shares, upload-undo
* ✅ config + user-registration gui
* ✅ built-in OpenID and LDAP support
* 💾 [IdP middleware](https://github.com/9001/copyparty#identity-providers) and config-files
* ✅ probably more than one person who understands the code
# briefly considered
* [pydio](https://github.com/pydio/cells): python/agpl3, looks great, fantastic ux -- but needs mariadb, systemwide install
* [gossa](https://github.com/pldubouilh/gossa): go/mit, minimalistic, basic file upload, text editor, mkdir and rename (no delete/move)
# notes
* high-latency connections (cross-atlantic uploads) can be accurately simulated with `tc qdisc add dev eth0 root netem delay 100ms`


@@ -25,6 +25,7 @@ classifiers = [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: Jython",
"Programming Language :: Python :: Implementation :: PyPy",


@@ -71,7 +71,7 @@ def cnv(src):
def main():
src = readclip()
src = re.split("0{100,200}", src[::-1], 1)[1][::-1]
src = re.split("0{100,200}", src[::-1], maxsplit=1)[1][::-1]
with open("helptext.html", "wb") as fo:
for ln in cnv(iter(src.split("\n")[:-3])):
fo.write(ln.encode("utf-8") + b"\r\n")


@@ -11,6 +11,15 @@ gtar=$(command -v gtar || command -v gnutar) || true
realpath() { grealpath "$@"; }
}
tmv() {
touch -r "$1" t
mv t "$1"
}
ised() {
sed -r "$1" <"$2" >t
tmv "$2"
}
targs=(--owner=1000 --group=1000)
[ "$OSTYPE" = msys ] &&
targs=()
@@ -35,6 +44,12 @@ cd pyz
cp -pR ../sfx/{copyparty,partftpy} .
cp -pR ../sfx/{ftp,j2}/* .
true && {
rm -rf copyparty/web/mde.* copyparty/web/deps/easymde*
echo h > copyparty/web/mde.html
ised '/edit2">edit \(fancy/d' copyparty/web/md.html
}
ts=$(date -u +%s)
hts=$(date -u +%Y-%m%d-%H%M%S)
ver="$(cat ../sfx/ver)"


@@ -94,6 +94,7 @@ copyparty/web/deps/prismd.css,
copyparty/web/deps/scp.woff2,
copyparty/web/deps/sha512.ac.js,
copyparty/web/deps/sha512.hw.js,
copyparty/web/iiam.gif,
copyparty/web/md.css,
copyparty/web/md.html,
copyparty/web/md.js,


@@ -98,6 +98,7 @@ def tc1(vflags):
args = [
"-q",
"-j0",
"-p4321",
"-e2dsa",
"-e2tsr",


@@ -54,7 +54,7 @@ var tl_cpanel = {
"cc1": "other stuff:",
"h1": "disable k304", // TLNote: "j1" explains what k304 is
"i1": "enable k304",
"j1": "enabling this will disconnect your client on every HTTP 304, which can prevent some buggy proxies from getting stuck (suddenly not loading pages), <em>but</em> it will also make things slower in general",
"j1": "enabling k304 will disconnect your client on every HTTP 304, which can prevent some buggy proxies from getting stuck (suddenly not loading pages), <em>but</em> it will also make things slower in general",
"k1": "reset client settings",
"l1": "login for more:",
"m1": "welcome back,", // TLNote: "welcome back, USERNAME"
@@ -76,6 +76,10 @@ var tl_cpanel = {
"ta2": "repeat to confirm new password:",
"ta3": "found a typo; please try again",
"aa1": "incoming files:",
"ab1": "disable no304",
"ac1": "enable no304",
"ad1": "enabling no304 will disable all caching; try this if k304 wasn't enough. This will waste a huge amount of network traffic!",
"ae1": "active downloads:",
},
};
@@ -103,7 +107,7 @@ var tl_browser = {
"vq": "video quality / bitrate",
"pixfmt": "subsampling / pixel structure",
"resw": "horizontal resolution",
"resh": "veritcal resolution",
"resh": "vertical resolution",
"chs": "audio channels",
"hz": "sample rate"
},
@@ -118,8 +122,9 @@ var tl_browser = {
["T", "toggle thumbnails / icons"],
["🡅 A/D", "thumbnail size"],
["ctrl-K", "delete selected"],
["ctrl-X", "cut selected"],
["ctrl-V", "paste into folder"],
["ctrl-X", "cut selection to clipboard"],
["ctrl-C", "copy selection to clipboard"],
["ctrl-V", "paste (move/copy) here"],
["Y", "download selected"],
["F2", "rename selected"],
@@ -164,7 +169,7 @@ var tl_browser = {
["I/K", "prev/next file"],
["M", "close textfile"],
["E", "edit textfile"],
["S", "select file (for cut/rename)"],
["S", "select file (for cut/copy/rename)"],
]
],
@@ -214,6 +219,7 @@ var tl_browser = {
"wt_ren": "rename selected items$NHotkey: F2",
"wt_del": "delete selected items$NHotkey: ctrl-K",
"wt_cut": "cut selected items &lt;small&gt;(then paste somewhere else)&lt;/small&gt;$NHotkey: ctrl-X",
"wt_cpy": "copy selected items to clipboard$N(to paste them somewhere else)$NHotkey: ctrl-C",
"wt_pst": "paste a previously cut / copied selection$NHotkey: ctrl-V",
"wt_selall": "select all files$NHotkey: ctrl-A (when file focused)",
"wt_selinv": "invert selection",
@@ -408,6 +414,7 @@ var tl_browser = {
"fr_emore": "select at least one item to rename",
"fd_emore": "select at least one item to delete",
"fc_emore": "select at least one item to cut",
"fcp_emore": "select at least one item to copy",
"fs_sc": "share the folder you're in",
"fs_ss": "share the selected files",
@@ -460,16 +467,26 @@ var tl_browser = {
"fc_ok": "cut {0} items",
"fc_warn": 'cut {0} items\n\nbut: only <b>this</b> browser-tab can paste them\n(since the selection is so absolutely massive)',
"fp_ecut": "first cut some files / folders to paste / move\n\nnote: you can cut / paste across different browser tabs",
"fcc_ok": "copied {0} items to clipboard",
"fcc_warn": 'copied {0} items to clipboard\n\nbut: only <b>this</b> browser-tab can paste them\n(since the selection is so absolutely massive)',
"fp_ecut": "first cut or copy some files / folders to paste / move\n\nnote: you can cut / paste across different browser tabs",
"fp_ename": "these {0} items cannot be moved here (names already exist):",
"fcp_ename": "these {0} items cannot be copied here (names already exist):",
"fp_ok": "move OK",
"fcp_ok": "copy OK",
"fp_busy": "moving {0} items...\n\n{1}",
"fcp_busy": "copying {0} items...\n\n{1}",
"fp_err": "move failed:\n",
"fcp_err": "copy failed:\n",
"fp_confirm": "move these {0} items here?",
"fcp_confirm": "copy these {0} items here?",
"fp_etab": 'failed to read clipboard from other browser tab',
"fp_name": "uploading a file from your device. Give it a name:",
"fp_both_m": '<h6>choose what to paste</h6><code>Enter</code> = Move {0} files from «{1}»\n<code>ESC</code> = Upload {2} files from your device',
"fcp_both_m": '<h6>choose what to paste</h6><code>Enter</code> = Copy {0} files from «{1}»\n<code>ESC</code> = Upload {2} files from your device',
"fp_both_b": '<a href="#" id="modal-ok">Move</a><a href="#" id="modal-ng">Upload</a>',
"fcp_both_b": '<a href="#" id="modal-ok">Copy</a><a href="#" id="modal-ng">Upload</a>',
"mk_noname": "type a name into the text field on the left before you do that :p",
@@ -481,7 +498,7 @@ var tl_browser = {
"tvt_dl": "download this file$NHotkey: Y\">💾 download",
"tvt_prev": "show previous document$NHotkey: i\">⬆ prev",
"tvt_next": "show next document$NHotkey: K\">⬇ next",
"tvt_sel": "select file &nbsp; ( for cut / delete / ... )$NHotkey: S\">sel",
"tvt_sel": "select file &nbsp; ( for cut / copy / delete / ... )$NHotkey: S\">sel",
"tvt_edit": "open file in text editor$NHotkey: E\">✏️ edit",
"gt_vau": "don't show videos, just play the audio\">🎧",


@@ -89,7 +89,7 @@ var tl_cpanel = {{
"cc1": "other stuff:",
"h1": "disable k304", // TLNote: "j1" explains what k304 is
"i1": "enable k304",
"j1": "enabling this will disconnect your client on every HTTP 304, which can prevent some buggy proxies from getting stuck (suddenly not loading pages), <em>but</em> it will also make things slower in general",
"j1": "enabling k304 will disconnect your client on every HTTP 304, which can prevent some buggy proxies from getting stuck (suddenly not loading pages), <em>but</em> it will also make things slower in general",
"k1": "reset client settings",
"l1": "login for more:",
"m1": "welcome back,", // TLNote: "welcome back, USERNAME"
@@ -111,6 +111,9 @@ var tl_cpanel = {{
"ta2": "repeat to confirm new password:",
"ta3": "found a typo; please try again",
"aa1": "incoming files:",
"ab1": "disable no304",
"ac1": "enable no304",
"ad1": "enabling no304 will disable all caching; try this if k304 wasn't enough. This will waste a huge amount of network traffic!",
}},
}};


@@ -108,6 +108,7 @@ args = {
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: Jython",
"Programming Language :: Python :: Implementation :: PyPy",

tests/test_cp.py (new file, 109 lines)

@@ -0,0 +1,109 @@
#!/usr/bin/env python3
# coding: utf-8
from __future__ import print_function, unicode_literals

import os
import shutil
import tempfile
import unittest
from itertools import product

from copyparty.authsrv import AuthSrv
from copyparty.httpcli import HttpCli
from tests import util as tu
from tests.util import Cfg


class TestDedup(unittest.TestCase):
    def setUp(self):
        self.td = tu.get_ramdisk()

    def tearDown(self):
        os.chdir(tempfile.gettempdir())
        shutil.rmtree(self.td)

    def reset(self):
        # build a small tree under ./vfs; each file contains its own relpath
        td = os.path.join(self.td, "vfs")
        if os.path.exists(td):
            shutil.rmtree(td)
        os.mkdir(td)
        os.chdir(td)
        for a in "abc":
            os.mkdir(a)
            for b in "fg":
                d = "%s/%s%s" % (a, a, b)
                os.mkdir(d)
                for fn in "x":
                    fp = "%s/%s%s%s" % (d, a, b, fn)
                    with open(fp, "wb") as f:
                        f.write(fp.encode("utf-8"))
        return td

    def cinit(self):
        # restart the server stubs, carrying the fstab cache across restarts
        if self.conn:
            self.fstab = self.conn.hsrv.hub.up2k.fstab
            self.conn.hsrv.hub.up2k.shutdown()
        self.asrv = AuthSrv(self.args, self.log)
        self.conn = tu.VHttpConn(self.args, self.asrv, self.log, b"", True)
        if self.fstab:
            self.conn.hsrv.hub.up2k.fstab = self.fstab

    def test(self):
        tc_dedup = ["sym", "no"]
        vols = [".::A", "a/af:a/af:r", "b:a/b:r"]
        # each testcase: "SRC?copy=DST" followed by the expected
        # space-separated listing of all files after the copy
        tcs = [
            "/a?copy=/c/a /a/af/afx /a/ag/agx /a/b/bf/bfx /a/b/bg/bgx /b/bf/bfx /b/bg/bgx /c/a/af/afx /c/a/ag/agx /c/a/b/bf/bfx /c/a/b/bg/bgx /c/cf/cfx /c/cg/cgx",
            "/b?copy=/d /a/af/afx /a/ag/agx /a/b/bf/bfx /a/b/bg/bgx /b/bf/bfx /b/bg/bgx /c/cf/cfx /c/cg/cgx /d/bf/bfx /d/bg/bgx",
            "/b/bf?copy=/d /a/af/afx /a/ag/agx /a/b/bf/bfx /a/b/bg/bgx /b/bf/bfx /b/bg/bgx /c/cf/cfx /c/cg/cgx /d/bfx",
            "/a/af?copy=/d /a/af/afx /a/ag/agx /a/b/bf/bfx /a/b/bg/bgx /b/bf/bfx /b/bg/bgx /c/cf/cfx /c/cg/cgx /d/afx",
            "/a/af?copy=/ /a/af/afx /a/ag/agx /a/b/bf/bfx /a/b/bg/bgx /afx /b/bf/bfx /b/bg/bgx /c/cf/cfx /c/cg/cgx",
            "/a/af/afx?copy=/afx /a/af/afx /a/ag/agx /a/b/bf/bfx /a/b/bg/bgx /afx /b/bf/bfx /b/bg/bgx /c/cf/cfx /c/cg/cgx",
        ]

        self.conn = None
        self.fstab = None
        self.ctr = 0  # 2304
        for dedup, act_exp in product(tc_dedup, tcs):
            action, expect = act_exp.split(" ", 1)
            t = "dedup:%s action:%s" % (dedup, action)
            print("\n\n\033[0;7m# ", t, "\033[0m")

            ka = {"dav_inf": True}
            if dedup == "hard":
                ka["hardlink"] = True
            elif dedup == "no":
                ka["no_dedup"] = True
            self.args = Cfg(v=vols, a=[], **ka)
            self.reset()
            self.cinit()

            self.do_cp(action)
            zs = self.propfind()
            fns = " ".join(zs[1])
            self.assertEqual(expect, fns)

    def do_cp(self, action):
        # server-side copy: a bodyless POST to "SRC?copy=DST"
        hdr = "POST %s HTTP/1.1\r\nConnection: close\r\nContent-Length: 0\r\n\r\n"
        buf = (hdr % (action,)).encode("utf-8")
        print("CP [%s]" % (action,))
        HttpCli(self.conn.setbuf(buf)).run()
        ret = self.conn.s._reply.decode("utf-8").split("\r\n\r\n", 1)
        print("CP <-- ", ret)
        self.assertIn(" 201 Created", ret[0])
        self.assertEqual("k\r\n", ret[1])
        return ret

    def propfind(self):
        # WebDAV PROPFIND; returns (response headers, sorted list of file hrefs)
        h = "PROPFIND / HTTP/1.1\r\nConnection: close\r\n\r\n"
        HttpCli(self.conn.setbuf(h.encode("utf-8"))).run()
        h, t = self.conn.s._reply.decode("utf-8").split("\r\n\r\n", 1)
        fns = t.split("<D:response><D:href>")[1:]
        fns = [x.split("</D", 1)[0] for x in fns]
        fns = [x for x in fns if not x.endswith("/")]
        fns.sort()
        return h, fns

    def log(self, src, msg, c=0):
        print(msg)
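A minimal sketch for running just this new suite, assuming it is invoked from the repository root with the usual unittest layout (invocation is illustrative, not part of the diff):

import unittest

# load tests/test_cp.py by module path and run it verbosely
suite = unittest.defaultTestLoader.loadTestsFromName("tests.test_cp")
unittest.TextTestRunner(verbosity=2).run(suite)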

@@ -57,6 +57,7 @@ class TestMetrics(unittest.TestCase):
ptns = r"""
cpp_uptime_seconds [0-9]\.[0-9]{3}$
cpp_boot_unixtime_seconds [0-9]{7,10}\.[0-9]{3}$
+cpp_active_dl 0$
cpp_http_reqs_created [0-9]{7,10}$
cpp_http_reqs_total -1$
cpp_http_conns 9$
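The new `cpp_active_dl` gauge joins the pattern list above; as a quick illustration (not part of the changeset, and assuming the suite matches each pattern per-line), the expected exposition line looks like this:

import re

# hypothetical two-line metrics body; the "$" anchor ends each line
body = "cpp_uptime_seconds 1.234\ncpp_active_dl 0\n"
assert re.search(r"^cpp_active_dl 0$", body, re.MULTILINE)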

@@ -122,22 +122,22 @@ class Cfg(Namespace):
def __init__(self, a=None, v=None, c=None, **ka0):
ka = {}
ex = "chpw daw dav_auth dav_inf dav_mac dav_rt e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp early_ban ed emp exp force_js getmod grid gsel hardlink ih ihead magic hardlink_only nid nih no_acode no_athumb no_dav no_db_ip no_del no_dirsz no_dupe no_lifetime no_logues no_mv no_pipe no_poll no_readme no_robots no_sb_md no_sb_lg no_scandir no_tarcmp no_thumb no_vthumb no_zip nrand nw og og_no_head og_s_title q rand re_dirsz smb srch_dbg stats uqe vague_403 vc ver write_uplog xdev xlink xvol zs"
ex = "chpw daw dav_auth dav_inf dav_mac dav_rt e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp early_ban ed emp exp force_js getmod grid gsel hardlink ih ihead magic hardlink_only nid nih no_acode no_athumb no_clone no_cp no_dav no_db_ip no_del no_dirsz no_dupe no_lifetime no_logues no_mv no_pipe no_poll no_readme no_robots no_sb_md no_sb_lg no_scandir no_tarcmp no_thumb no_vthumb no_zip nrand nw og og_no_head og_s_title ohead q rand re_dirsz rss smb srch_dbg stats uqe vague_403 vc ver write_uplog xdev xlink xvol zs"
ka.update(**{k: False for k in ex.split()})
ex = "dedup dotpart dotsrch hook_v no_dhash no_fastboot no_fpool no_htp no_rescan no_sendfile no_ses no_snap no_up_list no_voldump re_dhash plain_ip"
ka.update(**{k: True for k in ex.split()})
ex = "ah_cli ah_gen css_browser hist js_browser js_other mime mimes no_forget no_hash no_idx nonsus_urls og_tpl og_ua"
ex = "ah_cli ah_gen css_browser hist ipu js_browser js_other mime mimes no_forget no_hash no_idx nonsus_urls og_tpl og_ua"
ka.update(**{k: None for k in ex.split()})
ex = "hash_mt safe_dedup srch_time u2abort u2j u2sz"
ka.update(**{k: 1 for k in ex.split()})
ex = "au_vol mtab_age reg_cap s_thead s_tbody th_convt"
ex = "au_vol dl_list mtab_age reg_cap s_thead s_tbody th_convt"
ka.update(**{k: 9 for k in ex.split()})
ex = "db_act k304 loris re_maxage rproxy rsp_jtr rsp_slp s_wr_slp snap_wri theme themes turbo"
ex = "db_act k304 loris no304 re_maxage rproxy rsp_jtr rsp_slp s_wr_slp snap_wri theme themes turbo"
ka.update(**{k: 0 for k in ex.split()})
ex = "ah_alg bname chpw_db doctitle df exit favico idp_h_usr ipa html_head lg_sbf log_fk md_sbf name og_desc og_site og_th og_title og_title_a og_title_v og_title_i shr tcolor textfiles unlist vname xff_src R RS SR"
@@ -146,7 +146,7 @@ class Cfg(Namespace):
ex = "ban_403 ban_404 ban_422 ban_pw ban_url"
ka.update(**{k: "no" for k in ex.split()})
ex = "grp on403 on404 xad xar xau xban xbd xbr xbu xiu xm"
ex = "grp on403 on404 xac xad xar xau xban xbc xbd xbr xbu xiu xm"
ka.update(**{k: [] for k in ex.split()})
ex = "exp_lg exp_md"
@@ -254,6 +254,8 @@ class VHttpSrv(object):
self.broker = NullBroker(args, asrv)
self.prism = None
self.bans = {}
+self.tdls = self.dls = {}
+self.tdli = self.dli = {}
self.nreq = 0
self.nsus = 0
@@ -292,6 +294,8 @@ class VHttpConn(object):
self.args = args
self.asrv = asrv
self.bans = {}
+self.tdls = self.dls = {}
+self.tdli = self.dli = {}
self.freshen_pwd = 0.0
Ctor = VHttpSrvUp2k if use_up2k else VHttpSrv
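Note the chained assignments added to both stubs: each binds two attribute names to a single shared dict, which is sufficient for these tests since reads through either name observe the same state. A minimal illustration of the idiom:

# chained assignment: one dict object, two names
tdls = dls = {}
dls["dl-id"] = "active"
assert tdls is dls and tdls["dl-id"] == "active"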