Compare commits
36 Commits
| Author | SHA1 | Date |
|---|---|---|
| | 389a00ce59 | |
| | 7a460de3c2 | |
| | 8ea1f4a751 | |
| | 1c69ccc6cd | |
| | 84b5bbd3b6 | |
| | 9ccd327298 | |
| | 11df36f3cf | |
| | f62dd0e3cc | |
| | ad18b6e15e | |
| | c00b80ca29 | |
| | 92ed4ba3f8 | |
| | 7de9775dd9 | |
| | 5ce9060e5c | |
| | f727d5cb5a | |
| | 4735fb1ebb | |
| | c7d05cc13d | |
| | 51c152ff4a | |
| | eeed2a840c | |
| | 4aaa111925 | |
| | e31248f018 | |
| | 8b4cf022f2 | |
| | 4e7455268a | |
| | 680f8ae814 | |
| | 90555a4cea | |
| | 56a62db591 | |
| | cf51997680 | |
| | f05cc18d61 | |
| | 5384c2e0f5 | |
| | 9bfbf80a0e | |
| | f874d7754f | |
| | a669f79480 | |
| | 1c3894743a | |
| | 75cdf17df4 | |
| | de7dd1e60a | |
| | 0ee574a718 | |
| | faac894706 | |
.gitignore (vendored) -- 14 lines changed

```diff
@@ -5,14 +5,16 @@ __pycache__/
 MANIFEST.in
 MANIFEST
 copyparty.egg-info/
-buildenv/
-build/
-dist/
-py2/
-sfx/
-unt/
 .venv/
 
+/buildenv/
+/build/
+/dist/
+/py2/
+/sfx/
+/unt/
+/log/
+
 # ide
 *.sublime-workspace
```
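The notable change: the build-artifact patterns are now anchored with a leading slash so they only match at the repository root, and `/log/` is newly ignored. The gitignore semantics in play:

```gitignore
build/     # old form: matches a "build" directory at any depth
/build/    # new form: matches only ./build/ at the repo root
```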
README.md -- 58 lines changed

```diff
@@ -56,10 +56,11 @@ try the **[read-only demo server](https://a.ocv.me/pub/demo/)** 👀 running fro
 * [searching](#searching) - search by size, date, path/name, mp3-tags, ...
 * [server config](#server-config) - using arguments or config files, or a mix of both
 * [ftp-server](#ftp-server) - an FTP server can be started using `--ftp 3921`
-* [file indexing](#file-indexing)
-* [exclude-patterns](#exclude-patterns)
-* [periodic rescan](#periodic-rescan) - filesystem monitoring;
-* [upload rules](#upload-rules) - set upload rules using volume flags
+* [file indexing](#file-indexing) - enables dedup and music search ++
+* [exclude-patterns](#exclude-patterns) - to save some time
+* [filesystem guards](#filesystem-guards) - avoid traversing into other filesystems
+* [periodic rescan](#periodic-rescan) - filesystem monitoring
+* [upload rules](#upload-rules) - set upload rules using volflags
 * [compress uploads](#compress-uploads) - files can be autocompressed on upload
 * [database location](#database-location) - in-volume (`.hist/up2k.db`, default) or somewhere else
 * [metadata from audio files](#metadata-from-audio-files) - set `-e2t` to index tags on upload
```
```diff
@@ -248,12 +249,18 @@ some improvement ideas
 * Windows: if the `up2k.db` (filesystem index) is on a samba-share or network disk, you'll get unpredictable behavior if the share is disconnected for a bit
   * use `--hist` or the `hist` volflag (`-v [...]:c,hist=/tmp/foo`) to place the db on a local disk instead
 * all volumes must exist / be available on startup; up2k (mtp especially) gets funky otherwise
+* [the database can get stuck](https://github.com/9001/copyparty/issues/10)
+  * has only happened once but that is once too many
+  * luckily not dangerous for file integrity and doesn't really stop uploads or anything like that
+  * but would really appreciate some logs if anyone ever runs into it again
 * probably more, pls let me know
 
 ## not my bugs
 
 * [Chrome issue 1317069](https://bugs.chromium.org/p/chromium/issues/detail?id=1317069) -- if you try to upload a folder which contains symlinks by dragging it into the browser, the symlinked files will not get uploaded
 
+* [Chrome issue 1352210](https://bugs.chromium.org/p/chromium/issues/detail?id=1352210) -- plaintext http may be faster at filehashing than https (but also extremely CPU-intensive)
+
 * iPhones: the volume control doesn't work because [apple doesn't want it to](https://developer.apple.com/library/archive/documentation/AudioVideo/Conceptual/Using_HTML5_Audio_Video/Device-SpecificConsiderations/Device-SpecificConsiderations.html#//apple_ref/doc/uid/TP40009523-CH5-SW11)
   * *future workaround:* enable the equalizer, make it all-zero, and set a negative boost to reduce the volume
   * "future" because `AudioContext` is broken in the current iOS version (15.1), maybe one day...
```
```diff
@@ -311,7 +318,7 @@ examples:
   * `u1` can open the `inc` folder, but cannot see the contents, only upload new files to it
   * `u2` can browse it and move files *from* `/inc` into any folder where `u2` has write-access
 * make folder `/mnt/ss` available at `/i`, read-write for u1, get-only for everyone else, and enable accesskeys: `-v /mnt/ss:i:rw,u1:g:c,fk=4`
-  * `c,fk=4` sets the `fk` volume-flag to 4, meaning each file gets a 4-character accesskey
+  * `c,fk=4` sets the `fk` volflag to 4, meaning each file gets a 4-character accesskey
   * `u1` can upload files, browse the folder, and see the generated accesskeys
   * other users cannot browse the folder, but can access the files if they have the full file URL with the accesskey
 
```
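For context, an accesskey link produced by `fk=4` looks roughly like this (the key itself is hypothetical); `u1` sees the `?k=` suffix in listings, while `g` users must arrive with it or get a 404:

```
https://example.com/i/screenshot.png?k=x3kq
```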
```diff
@@ -658,7 +665,9 @@ an FTP server can be started using `--ftp 3921`, and/or `--ftps` for explicit T
 
 ## file indexing
 
-file indexing relies on two database tables, the up2k filetree (`-e2d`) and the metadata tags (`-e2t`), stored in `.hist/up2k.db`. Configuration can be done through arguments, volume flags, or a mix of both.
+enables dedup and music search ++
+
+file indexing relies on two database tables, the up2k filetree (`-e2d`) and the metadata tags (`-e2t`), stored in `.hist/up2k.db`. Configuration can be done through arguments, volflags, or a mix of both.
 
 through arguments:
 * `-e2d` enables file indexing on upload
```
```diff
@@ -671,7 +680,7 @@ through arguments:
 * `-e2vu` patches the database with the new hashes from the filesystem
 * `-e2vp` panics and kills copyparty instead
 
-the same arguments can be set as volume flags, in addition to `d2d`, `d2ds`, `d2t`, `d2ts`, `d2v` for disabling:
+the same arguments can be set as volflags, in addition to `d2d`, `d2ds`, `d2t`, `d2ts`, `d2v` for disabling:
 * `-v ~/music::r:c,e2dsa,e2tsr` does a full reindex of everything on startup
 * `-v ~/music::r:c,d2d` disables **all** indexing, even if any `-e2*` are on
 * `-v ~/music::r:c,d2t` disables all `-e2t*` (tags), does not affect `-e2d*`
```
```diff
@@ -685,7 +694,7 @@ note:
 
 ### exclude-patterns
 
-to save some time, you can provide a regex pattern for filepaths to only index by filename/path/size/last-modified (and not the hash of the file contents) by setting `--no-hash \.iso$` or the volume-flag `:c,nohash=\.iso$`, this has the following consequences:
+to save some time, you can provide a regex pattern for filepaths to only index by filename/path/size/last-modified (and not the hash of the file contents) by setting `--no-hash \.iso$` or the volflag `:c,nohash=\.iso$`, this has the following consequences:
 * initial indexing is way faster, especially when the volume is on a network disk
 * makes it impossible to [file-search](#file-search)
 * if someone uploads the same file contents, the upload will not be detected as a dupe, so it will not get symlinked or rejected
```
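A sketch of how the global flag and the per-volume override combine (paths hypothetical):

```sh
# skip content-hashing of isos everywhere,
# but restore full hashing for the music volume
python3 -m copyparty --no-hash '\.iso$' \
    -v /srv/pub:pub:r \
    -v /srv/music:music:r:c,nohash=
```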
```diff
@@ -694,6 +703,14 @@ similarly, you can fully ignore files/folders using `--no-idx [...]` and `:c,noi
 
 if you set `--no-hash [...]` globally, you can enable hashing for specific volumes using flag `:c,nohash=`
 
+### filesystem guards
+
+avoid traversing into other filesystems using `--xdev` / volflag `:c,xdev`, skipping any symlinks or bind-mounts to another HDD for example
+
+and/or you can `--xvol` / `:c,xvol` to ignore all symlinks leaving the volume's top directory, but still allow bind-mounts pointing elsewhere
+
+**NB: only affects the indexer** -- users can still access anything inside a volume, unless shadowed by another volume
+
 ### periodic rescan
 
 filesystem monitoring; if copyparty is not the only software doing stuff on your filesystem, you may want to enable periodic rescans to keep the index up to date
```
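A hypothetical invocation combining both guards, using the comma-joined volflag form seen in the file-indexing examples above:

```sh
# index /mnt/media without crossing into other filesystems,
# and without following symlinks that leave the volume root
python3 -m copyparty -v /mnt/media:media:r:c,xdev,xvol
```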
```diff
@@ -705,7 +722,7 @@ uploads are disabled while a rescan is happening, so rescans will be delayed by
 
 ## upload rules
 
-set upload rules using volume flags, some examples:
+set upload rules using volflags, some examples:
 
 * `:c,sz=1k-3m` sets allowed filesize between 1 KiB and 3 MiB inclusive (suffixes: `b`, `k`, `m`, `g`)
 * `:c,df=4g` block uploads if there would be less than 4 GiB free disk space afterwards
```
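For example, a write-only dropbox enforcing both rules might be declared like this (paths hypothetical, and assuming multiple `:c,` groups may be chained):

```sh
# 1 KiB..3 MiB per file, and refuse uploads that would leave <4 GiB free
python3 -m copyparty -v /srv/inc:inc:w:c,sz=1k-3m:c,df=4g
```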
```diff
@@ -727,16 +744,16 @@ you can also set transaction limits which apply per-IP and per-volume, but these
 
 files can be autocompressed on upload, either on user-request (if config allows) or forced by server-config
 
-* volume flag `gz` allows gz compression
-* volume flag `xz` allows lzma compression
-* volume flag `pk` **forces** compression on all files
+* volflag `gz` allows gz compression
+* volflag `xz` allows lzma compression
+* volflag `pk` **forces** compression on all files
 * url parameter `pk` requests compression with server-default algorithm
 * url parameter `gz` or `xz` requests compression with a specific algorithm
 * url parameter `xz` requests xz compression
 
 things to note,
 * the `gz` and `xz` arguments take a single optional argument, the compression level (range 0 to 9)
-* the `pk` volume flag takes the optional argument `ALGORITHM,LEVEL` which will then be forced for all uploads, for example `gz,9` or `xz,0`
+* the `pk` volflag takes the optional argument `ALGORITHM,LEVEL` which will then be forced for all uploads, for example `gz,9` or `xz,0`
 * default compression is gzip level 9
 * all upload methods except up2k are supported
 * the files will be indexed after compression, so dupe-detection and file-search will not work as expected
```
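As a sketch (volume and filename hypothetical, and the exact url-parameter spelling should be double-checked against the docs), a gz-enabled volume plus a client requesting level-9 gz could look like:

```sh
# server: permit gz compression on the "up" volume
python3 -m copyparty -v /srv/up:up:w:c,gz

# client: PUT a file, requesting gz level 9 via url parameter
curl -T access.log 'http://127.0.0.1:3923/up/?gz=9'
```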
```diff
@@ -756,7 +773,7 @@ in-volume (`.hist/up2k.db`, default) or somewhere else
 
 copyparty creates a subfolder named `.hist` inside each volume where it stores the database, thumbnails, and some other stuff
 
-this can instead be kept in a single place using the `--hist` argument, or the `hist=` volume flag, or a mix of both:
+this can instead be kept in a single place using the `--hist` argument, or the `hist=` volflag, or a mix of both:
 * `--hist ~/.cache/copyparty -v ~/music::r:c,hist=-` sets `~/.cache/copyparty` as the default place to put volume info, but `~/music` gets the regular `.hist` subfolder (`-` restores default behavior)
 
 note:
```
```diff
@@ -794,7 +811,7 @@ see the beautiful mess of a dictionary in [mtag.py](https://github.com/9001/copy
 
 provide custom parsers to index additional tags, also see [./bin/mtag/README.md](./bin/mtag/README.md)
 
-copyparty can invoke external programs to collect additional metadata for files using `mtp` (either as argument or volume flag), there is a default timeout of 30sec, and only files which contain audio get analyzed by default (see ay/an/ad below)
+copyparty can invoke external programs to collect additional metadata for files using `mtp` (either as argument or volflag), there is a default timeout of 30sec, and only files which contain audio get analyzed by default (see ay/an/ad below)
 
 * `-mtp .bpm=~/bin/audio-bpm.py` will execute `~/bin/audio-bpm.py` with the audio file as argument 1 to provide the `.bpm` tag, if that does not exist in the audio metadata
 * `-mtp key=f,t5,~/bin/audio-key.py` uses `~/bin/audio-key.py` to get the `key` tag, replacing any existing metadata tag (`f,`), aborting if it takes longer than 5sec (`t5,`)
```
```diff
@@ -835,8 +852,8 @@ if this becomes popular maybe there should be a less janky way to do it actually
 tell search engines you dont wanna be indexed, either using the good old [robots.txt](https://www.robotstxt.org/robotstxt.html) or through copyparty settings:
 
 * `--no-robots` adds HTTP (`X-Robots-Tag`) and HTML (`<meta>`) headers with `noindex, nofollow` globally
-* volume-flag `[...]:c,norobots` does the same thing for that single volume
-* volume-flag `[...]:c,robots` ALLOWS search-engine crawling for that volume, even if `--no-robots` is set globally
+* volflag `[...]:c,norobots` does the same thing for that single volume
+* volflag `[...]:c,robots` ALLOWS search-engine crawling for that volume, even if `--no-robots` is set globally
 
 also, `--force-js` disables the plain HTML folder listing, making things harder to parse for search engines
 
```
```diff
@@ -997,6 +1014,10 @@ this is due to `crypto.subtle` [not yet](https://github.com/w3c/webcrypto/issues
 
 as a result, the hashes are much less useful than they could have been (search the server by sha512, provide the sha512 in the response http headers, ...)
 
+however it allows for hashing multiple chunks in parallel, greatly increasing upload speed from fast storage (NVMe, raid-0 and such)
+
+* both the [browser uploader](#uploading) and the [commandline one](https://github.com/9001/copyparty/blob/hovudstraum/bin/up2k.py) does this now, allowing for fast uploading even from plaintext http
+
 hashwasm would solve the streaming issue but reduces hashing speed for sha512 (xxh128 does 6 GiB/s), and it would make old browsers and [iphones](https://bugs.webkit.org/show_bug.cgi?id=228552) unsupported
 
 * blake2 might be a better choice since xxh is non-cryptographic, but that gets ~15 MiB/s on slower androids
```
```diff
@@ -1030,6 +1051,7 @@ when uploading files,
 
 * if you're cpu-bottlenecked, or the browser is maxing a cpu core:
   * up to 30% faster uploads if you hide the upload status list by switching away from the `[🚀]` up2k ui-tab (or closing it)
+    * optionally you can switch to the lightweight potato ui by clicking the `[🥔]`
   * switching to another browser-tab also works, the favicon will update every 10 seconds in that case
   * unlikely to be a problem, but can happen when uploding many small files, or your internet is too fast, or PC too slow
 
```
```diff
@@ -1059,7 +1081,7 @@ some notes on hardening
 other misc notes:
 
 * you can disable directory listings by giving permission `g` instead of `r`, only accepting direct URLs to files
-  * combine this with volume-flag `c,fk` to generate per-file accesskeys; users which have full read-access will then see URLs with `?k=...` appended to the end, and `g` users must provide that URL including the correct key to avoid a 404
+  * combine this with volflag `c,fk` to generate per-file accesskeys; users which have full read-access will then see URLs with `?k=...` appended to the end, and `g` users must provide that URL including the correct key to avoid a 404
 
 
 ## gotchas
```
```diff
@@ -42,7 +42,7 @@ run [`install-deps.sh`](install-deps.sh) to build/install most dependencies requ
 * `mtp` modules will not run if a file has existing tags in the db, so clear out the tags with `-e2tsr` the first time you launch with new `mtp` options
 
 
-## usage with volume-flags
+## usage with volflags
 
 instead of affecting all volumes, you can set the options for just one volume like so:
 
```
```diff
@@ -47,8 +47,8 @@ CONDITIONAL_UPLOAD = True
 
 
 def main():
-    if CONDITIONAL_UPLOAD:
-        fp = sys.argv[1]
+    fp = sys.argv[1]
+    if CONDITIONAL_UPLOAD:
         zb = sys.stdin.buffer.read()
         zs = zb.decode("utf-8", "replace")
         md = json.loads(zs)
@@ -97,7 +97,7 @@ def main():
     zs = (
         "ffmpeg -y -hide_banner -nostdin -v warning"
         + " -err_detect +crccheck+bitstream+buffer+careful+compliant+aggressive+explode"
-        " -xerror -i"
+        + " -xerror -i"
     )
 
     cmd = zs.encode("ascii").split(b" ") + [fsenc(fp)]
```
bin/up2k.py -- 170 lines changed

```diff
@@ -3,7 +3,7 @@ from __future__ import print_function, unicode_literals
 
 """
 up2k.py: upload to copyparty
-2022-06-16, v0.15, ed <irc.rizon.net>, MIT-Licensed
+2022-08-13, v0.18, ed <irc.rizon.net>, MIT-Licensed
 https://github.com/9001/copyparty/blob/hovudstraum/bin/up2k.py
 
 - dependencies: requests
```
```diff
@@ -22,12 +22,29 @@ import atexit
 import signal
 import base64
 import hashlib
-import argparse
 import platform
 import threading
 import datetime
 
-import requests
+try:
+    import argparse
+except:
+    m = "\n  ERROR: need 'argparse'; download it here:\n  https://github.com/ThomasWaldmann/argparse/raw/master/argparse.py\n"
+    print(m)
+    raise
+
+try:
+    import requests
+except:
+    if sys.version_info > (2, 7):
+        m = "\n  ERROR: need 'requests'; run this:\n   python -m pip install --user requests\n"
+    else:
+        m = "requests/2.18.4 urllib3/1.23 chardet/3.0.4 certifi/2020.4.5.1 idna/2.7"
+        m = [" https://pypi.org/project/" + x + "/#files" for x in m.split()]
+        m = "\n  ERROR: need these:\n" + "\n".join(m) + "\n"
+
+    print(m)
+    raise
 
 
 # from copyparty/__init__.py
```
```diff
@@ -126,6 +143,89 @@ class FileSlice(object):
         return ret
 
 
+class MTHash(object):
+    def __init__(self, cores):
+        self.f = None
+        self.sz = 0
+        self.csz = 0
+        self.omutex = threading.Lock()
+        self.imutex = threading.Lock()
+        self.work_q = Queue()
+        self.done_q = Queue()
+        self.thrs = []
+        for _ in range(cores):
+            t = threading.Thread(target=self.worker)
+            t.daemon = True
+            t.start()
+            self.thrs.append(t)
+
+    def hash(self, f, fsz, chunksz, pcb=None, pcb_opaque=None):
+        with self.omutex:
+            self.f = f
+            self.sz = fsz
+            self.csz = chunksz
+
+            chunks = {}
+            nchunks = int(math.ceil(fsz / chunksz))
+            for nch in range(nchunks):
+                self.work_q.put(nch)
+
+            ex = ""
+            for nch in range(nchunks):
+                qe = self.done_q.get()
+                try:
+                    nch, dig, ofs, csz = qe
+                    chunks[nch] = [dig, ofs, csz]
+                except:
+                    ex = ex or qe
+
+                if pcb:
+                    pcb(pcb_opaque, chunksz * nch)
+
+            if ex:
+                raise Exception(ex)
+
+            ret = []
+            for n in range(nchunks):
+                ret.append(chunks[n])
+
+            self.f = None
+            self.csz = 0
+            self.sz = 0
+            return ret
+
+    def worker(self):
+        while True:
+            ofs = self.work_q.get()
+            try:
+                v = self.hash_at(ofs)
+            except Exception as ex:
+                v = str(ex)
+
+            self.done_q.put(v)
+
+    def hash_at(self, nch):
+        f = self.f
+        ofs = ofs0 = nch * self.csz
+        hashobj = hashlib.sha512()
+        chunk_sz = chunk_rem = min(self.csz, self.sz - ofs)
+        while chunk_rem > 0:
+            with self.imutex:
+                f.seek(ofs)
+                buf = f.read(min(chunk_rem, 1024 * 1024 * 12))
+
+            if not buf:
+                raise Exception("EOF at " + str(ofs))
+
+            hashobj.update(buf)
+            chunk_rem -= len(buf)
+            ofs += len(buf)
+
+        digest = hashobj.digest()[:33]
+        digest = base64.urlsafe_b64encode(digest).decode("utf-8")
+        return nch, digest, ofs0, chunk_sz
+
+
 _print = print
 
 
```
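A minimal usage sketch of the new class (filename hypothetical; assumes this script's own imports and its `up2k_chunksize` helper): `hash()` queues one index per chunk, the workers seek-and-read under `imutex` so reads never interleave, and the digests come back reassembled in file order.

```python
import os

mth = MTHash(3)  # three hasher threads, like running with -J 3

fp = "bigfile.bin"  # hypothetical input
fsz = os.path.getsize(fp)
chunksz = up2k_chunksize(fsz)  # the chunk size up2k would pick for this filesize

with open(fp, "rb") as f:
    # one [base64-digest, offset, length] per chunk, in file order
    for dig, ofs, sz in mth.hash(f, fsz, chunksz):
        print("{0:12d} {1:9d} {2}".format(ofs, sz, dig))
```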
```diff
@@ -230,8 +330,8 @@ def _scd(err, top):
         abspath = os.path.join(top, fh.name)
         try:
             yield [abspath, fh.stat()]
-        except:
-            err.append(abspath)
+        except Exception as ex:
+            err.append((abspath, str(ex)))
 
 
 def _lsd(err, top):
```
```diff
@@ -240,8 +340,8 @@ def _lsd(err, top):
         abspath = os.path.join(top, name)
         try:
             yield [abspath, os.stat(abspath)]
-        except:
-            err.append(abspath)
+        except Exception as ex:
+            err.append((abspath, str(ex)))
 
 
 if hasattr(os, "scandir"):
```
```diff
@@ -250,15 +350,21 @@ else:
     statdir = _lsd
 
 
-def walkdir(err, top):
+def walkdir(err, top, seen):
     """recursive statdir"""
+    atop = os.path.abspath(os.path.realpath(top))
+    if atop in seen:
+        err.append((top, "recursive-symlink"))
+        return
+
+    seen = seen[:] + [atop]
     for ap, inf in sorted(statdir(err, top)):
         if stat.S_ISDIR(inf.st_mode):
             try:
-                for x in walkdir(err, ap):
+                for x in walkdir(err, ap, seen):
                     yield x
-            except:
-                err.append(ap)
+            except Exception as ex:
+                err.append((ap, str(ex)))
         else:
             yield ap, inf
 
```
```diff
@@ -273,7 +379,7 @@ def walkdirs(err, tops):
         stop = os.path.dirname(top)
 
         if os.path.isdir(top):
-            for ap, inf in walkdir(err, top):
+            for ap, inf in walkdir(err, top, []):
                 yield stop, ap[len(stop) :].lstrip(sep), inf
         else:
             d, n = top.rsplit(sep, 1)
```
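A tiny demo of the new recursion guard (hypothetical paths, and using str paths for brevity where the script itself walks bytes): a symlink cycle gets recorded in `err` instead of recursing forever.

```python
import os

root = "/tmp/walk-demo"
os.makedirs(root + "/sub", exist_ok=True)
if not os.path.islink(root + "/sub/loop"):
    os.symlink(root, root + "/sub/loop")  # points back at the tree's own root

err = []
for ap, inf in walkdir(err, root, []):
    print(ap)

print(err)  # expect [("/tmp/walk-demo/sub/loop", "recursive-symlink")]
```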
```diff
@@ -322,8 +428,8 @@ def up2k_chunksize(filesize):
 
 
 # mostly from copyparty/up2k.py
-def get_hashlist(file, pcb):
-    # type: (File, any) -> None
+def get_hashlist(file, pcb, mth):
+    # type: (File, any, any) -> None
     """generates the up2k hashlist from file contents, inserts it into `file`"""
 
     chunk_sz = up2k_chunksize(file.size)
```
```diff
@@ -331,7 +437,12 @@ def get_hashlist(file, pcb):
     file_ofs = 0
     ret = []
     with open(file.abs, "rb", 512 * 1024) as f:
+        if mth and file.size >= 1024 * 512:
+            ret = mth.hash(f, file.size, chunk_sz, pcb, file)
+            file_rem = 0
+
         while file_rem > 0:
+            # same as `hash_at` except for `imutex` / bufsz
             hashobj = hashlib.sha512()
             chunk_sz = chunk_rem = min(chunk_sz, file_rem)
             while chunk_rem > 0:
```
```diff
@@ -388,8 +499,9 @@ def handshake(req_ses, url, file, pw, search):
         try:
             r = req_ses.post(url, headers=headers, json=req)
             break
-        except:
-            eprint("handshake failed, retrying: {0}\n".format(file.name))
+        except Exception as ex:
+            em = str(ex).split("SSLError(")[-1]
+            eprint("handshake failed, retrying: {0}\n {1}\n\n".format(file.name, em))
             time.sleep(1)
 
     try:
```
```diff
@@ -398,7 +510,7 @@ def handshake(req_ses, url, file, pw, search):
         raise Exception(r.text)
 
     if search:
-        return r["hits"]
+        return r["hits"], False
 
     try:
         pre, url = url.split("://")
```
```diff
@@ -470,12 +582,19 @@ class Ctl(object):
 
         if err:
             eprint("\n# failed to access {0} paths:\n".format(len(err)))
-            for x in err:
-                eprint(x.decode("utf-8", "replace") + "\n")
+            for ap, msg in err:
+                if ar.v:
+                    eprint("{0}\n `-{1}\n\n".format(ap.decode("utf-8", "replace"), msg))
+                else:
+                    eprint(ap.decode("utf-8", "replace") + "\n")
 
             eprint("^ failed to access those {0} paths ^\n\n".format(len(err)))
 
+            if not ar.v:
+                eprint("hint: set -v for detailed error messages\n")
+
             if not ar.ok:
-                eprint("aborting because --ok is not set\n")
+                eprint("hint: aborting because --ok is not set\n")
                 return
 
         eprint("found {0} files, {1}\n\n".format(nfiles, humansize(nbytes)))
```
```diff
@@ -516,6 +635,8 @@ class Ctl(object):
         self.st_hash = [None, "(idle, starting...)"]  # type: tuple[File, int]
         self.st_up = [None, "(idle, starting...)"]  # type: tuple[File, int]
 
+        self.mth = MTHash(ar.J) if ar.J > 1 else None
+
         self._fancy()
 
     def _safe(self):
```
```diff
@@ -526,7 +647,7 @@ class Ctl(object):
             upath = file.abs.decode("utf-8", "replace")
 
             print("{0} {1}\n hash...".format(self.nfiles - nf, upath))
-            get_hashlist(file, None)
+            get_hashlist(file, None, None)
 
             burl = self.ar.url[:12] + self.ar.url[8:].split("/")[0] + "/"
             while True:
```
```diff
@@ -679,7 +800,7 @@ class Ctl(object):
 
                 time.sleep(0.05)
 
-            get_hashlist(file, self.cb_hasher)
+            get_hashlist(file, self.cb_hasher, self.mth)
             with self.mutex:
                 self.hash_f += 1
                 self.hash_c += len(file.cids)
```
```diff
@@ -808,6 +929,9 @@ def main():
     if not VT100:
         os.system("rem")  # enables colors
 
+    cores = os.cpu_count() if hasattr(os, "cpu_count") else 4
+    hcores = min(cores, 3)  # 4% faster than 4+ on py3.9 @ r5-4500U
+
     # fmt: off
     ap = app = argparse.ArgumentParser(formatter_class=APF, epilog="""
 NOTE:
```
```diff
@@ -818,11 +942,13 @@ source file/folder selection uses rsync syntax, meaning that:
 
     ap.add_argument("url", type=unicode, help="server url, including destination folder")
     ap.add_argument("files", type=unicode, nargs="+", help="files and/or folders to process")
+    ap.add_argument("-v", action="store_true", help="verbose")
    ap.add_argument("-a", metavar="PASSWORD", help="password")
    ap.add_argument("-s", action="store_true", help="file-search (disables upload)")
    ap.add_argument("--ok", action="store_true", help="continue even if some local files are inaccessible")
    ap = app.add_argument_group("performance tweaks")
    ap.add_argument("-j", type=int, metavar="THREADS", default=4, help="parallel connections")
+    ap.add_argument("-J", type=int, metavar="THREADS", default=hcores, help="num cpu-cores to use for hashing; set 0 or 1 for single-core hashing")
    ap.add_argument("-nh", action="store_true", help="disable hashing while uploading")
    ap.add_argument("--safe", action="store_true", help="use simple fallback approach")
    ap.add_argument("-z", action="store_true", help="ZOOMIN' (skip uploading files if they exist at the destination with the ~same last-modified timestamp, so same as yolo / turbo with date-chk but even faster)")
```
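Putting the new knobs together, a hypothetical invocation (server URL and password invented):

```sh
# hash on 3 cpu-cores, upload over 4 connections, verbose error reporting
python3 up2k.py -J 3 -j 4 -v -a hunter2 https://example.com/inc/ ~/Music
```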
```diff
@@ -41,12 +41,14 @@ async function a_up2k_namefilter(good_files, nil_files, bad_files, hooks) {
         textdec = new TextDecoder('latin1'),
         md_ptn = new TextEncoder().encode('youtube.com/watch?v='),
         file_ids = [], // all IDs found for each good_files
+        md_only = [], // `${id} ${fn}` where ID was only found in metadata
         mofs = 0,
         mnchk = 0,
         mfile = '';
 
     for (var a = 0; a < good_files.length; a++) {
         var [fobj, name] = good_files[a],
+            cname = name, // will clobber
             sz = fobj.size,
             ids = [],
             id_ok = false,
```
```diff
@@ -57,23 +59,23 @@ async function a_up2k_namefilter(good_files, nil_files, bad_files, hooks) {
 
         // look for ID in filename; reduce the
         // metadata-scan intensity if the id looks safe
-        m = /[\[(-]([\w-]{11})[\])]?\.(?:mp4|webm|mkv)$/i.exec(name);
+        m = /[\[(-]([\w-]{11})[\])]?\.(?:mp4|webm|mkv|flv|opus|ogg|mp3|m4a|aac)$/i.exec(name);
         id_ok = !!m;
 
         while (true) {
             // fuzzy catch-all;
             // some ytdl fork did %(title)-%(id).%(ext) ...
-            m = /(?:^|[^\w])([\w-]{11})(?:$|[^\w-])/.exec(name);
+            m = /(?:^|[^\w])([\w-]{11})(?:$|[^\w-])/.exec(cname);
             if (!m)
                 break;
 
-            name = name.replace(m[1], '');
+            cname = cname.replace(m[1], '');
             yt_ids.add(m[1]);
             ids.push(m[1]);
         }
 
         // look for IDs in video metadata,
-        if (/\.(mp4|webm|mkv)$/i.exec(name)) {
+        if (/\.(mp4|webm|mkv|flv|opus|ogg|mp3|m4a|aac)$/i.exec(name)) {
             toast.show('inf r', 0, `analyzing file ${a + 1} / ${good_files.length} :\n${name}\n\nhave analysed ${++mnchk} files in ${(Date.now() - t0) / 1000} seconds, ${humantime((good_files.length - (a + 1)) * (((Date.now() - t0) / 1000) / mnchk))} remaining,\n\nbiggest offset so far is ${mofs}, in this file:\n\n${mfile}`);
 
             // check first and last 128 MiB;
```
```diff
@@ -108,8 +110,10 @@ async function a_up2k_namefilter(good_files, nil_files, bad_files, hooks) {
 
                     console.log(`found ${m} @${bofs}, ${name} `);
                     yt_ids.add(m);
-                    if (!has(ids, m))
+                    if (!has(ids, m)) {
                         ids.push(m);
+                        md_only.push(`${m} ${name}`);
+                    }
 
                     // bail after next iteration
                     chunk = nchunks - 1;
```
```diff
@@ -128,6 +132,13 @@ async function a_up2k_namefilter(good_files, nil_files, bad_files, hooks) {
         }
     }
 
+    if (md_only.length)
+        console.log('recovered the following youtube-IDs by inspecting metadata:\n\n' + md_only.join('\n'));
+    else if (yt_ids.size)
+        console.log('did not discover any additional youtube-IDs by inspecting metadata; all the IDs also existed in the filenames');
+    else
+        console.log('failed to find any youtube-IDs at all, sorry');
+
     if (false) {
         var msg = `finished analysing ${mnchk} files in ${(Date.now() - t0) / 1000} seconds,\n\nbiggest offset was ${mofs} in this file:\n\n${mfile}`,
             mfun = function () { toast.ok(0, msg); };
```
```diff
@@ -138,21 +149,19 @@ async function a_up2k_namefilter(good_files, nil_files, bad_files, hooks) {
         return hooks[0]([], [], [], hooks.slice(1));
     }
 
-    toast.inf(5, `running query for ${yt_ids.size} videos...`);
+    toast.inf(5, `running query for ${yt_ids.size} youtube-IDs...`);
 
     var xhr = new XHR();
     xhr.open('POST', '/ytq', true);
     xhr.setRequestHeader('Content-Type', 'text/plain');
     xhr.onload = xhr.onerror = function () {
         if (this.status != 200)
-            return toast.err(0, `sorry, database query failed; _; \n\nplease let us know so we can look at it, thx!!\n\nerror ${this.status}: ${(this.response && this.response.err) || this.responseText} `);
+            return toast.err(0, `sorry, database query failed ;_;\n\nplease let us know so we can look at it, thx!!\n\nerror ${this.status}: ${(this.response && this.response.err) || this.responseText}`);
 
         process_id_list(this.responseText);
     };
     xhr.send(Array.from(yt_ids).join('\n'));
 
-    setTimeout(function () { process_id_list('Nf-nN1wF5Xo\n'); }, 500);
-
     function process_id_list(txt) {
         var wanted_ids = new Set(txt.trim().split('\n')),
             wanted_names = new Set(), // basenames with a wanted ID
```
```diff
@@ -164,7 +173,7 @@ async function a_up2k_namefilter(good_files, nil_files, bad_files, hooks) {
                 if (wanted_ids.has(file_ids[a][b])) {
                     wanted_files.add(good_files[a]);
 
-                    var m = /(.*)\.(mp4|webm|mkv)$/i.exec(name);
+                    var m = /(.*)\.(mp4|webm|mkv|flv|opus|ogg|mp3|m4a|aac)$/i.exec(name);
                     if (m)
                         wanted_names.add(m[1]);
 
```
```diff
@@ -197,7 +206,7 @@ async function a_up2k_namefilter(good_files, nil_files, bad_files, hooks) {
         }
 
         var n_skip = good_files.length - wanted_files.size,
-            msg = `you added ${good_files.length} files; ${n_skip} of them were skipped --\neither because we already have them,\nor because there is no youtube-ID in your filename.\n\n<code>OK</code> / <code>Enter</code> = continue uploading just the ${wanted_files.size} files we definitely need\n\n<code>Cancel</code> / <code>ESC</code> = override the filter; upload ALL the files you added`;
+            msg = `you added ${good_files.length} files; ${good_files.length == n_skip ? 'all' : n_skip} of them were skipped --\neither because we already have them,\nor because there is no youtube-ID in your filenames.\n\n<code>OK</code> / <code>Enter</code> = continue uploading just the ${wanted_files.size} files we definitely need\n\n<code>Cancel</code> / <code>ESC</code> = override the filter; upload ALL the files you added`;
 
         if (!n_skip)
             upload_filtered();
```
copyparty/__main__.py

```diff
@@ -24,7 +24,18 @@ from .__init__ import ANYWIN, PY2, VT100, WINDOWS, E, unicode
 from .__version__ import CODENAME, S_BUILD_DT, S_VERSION
 from .authsrv import re_vol
 from .svchub import SvcHub
-from .util import IMPLICATIONS, align_tab, ansi_re, min_ex, py_desc, termsize, wrap
+from .util import (
+    IMPLICATIONS,
+    JINJA_VER,
+    PYFTPD_VER,
+    SQLITE_VER,
+    align_tab,
+    ansi_re,
+    min_ex,
+    py_desc,
+    termsize,
+    wrap,
+)
 
 try:
     from types import FrameType
```
```diff
@@ -107,12 +118,13 @@ class BasicDodge11874(
 
 
 def lprint(*a: Any, **ka: Any) -> None:
-    txt: str = " ".join(unicode(x) for x in a) + ka.get("end", "\n")
+    eol = ka.pop("end", "\n")
+    txt: str = " ".join(unicode(x) for x in a) + eol
     printed.append(txt)
     if not VT100:
         txt = ansi_re.sub("", txt)
 
-    print(txt, **ka)
+    print(txt, end="", **ka)
 
 
 def warn(msg: str) -> None:
```
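The `lprint` fix closes a double-newline bug: the old code baked the caller's `end` into `txt` but then still handed the terminator to `print` (via `**ka`, or print's default `end="\n"`), applying it a second time. Distilled:

```python
ka = {}

# old behaviour: the line terminator got applied twice
txt = "hello" + ka.get("end", "\n")  # txt already ends with "\n"
print(txt, **ka)                     # print appends another "\n" -> blank line

# new behaviour: consume `end`, neutralize print's own
eol = ka.pop("end", "\n")
txt = "hello" + eol
print(txt, end="", **ka)
```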
```diff
@@ -127,7 +139,7 @@ def ensure_locale() -> None:
     ]:
         try:
             locale.setlocale(locale.LC_ALL, x)
-            lprint("Locale:", x)
+            lprint("Locale: {}\n".format(x))
             break
         except:
             continue
```
```diff
@@ -324,6 +336,7 @@ def run_argparse(argv: list[str], formatter: Any, retry: bool) -> argparse.Names
     fk_salt = "hunter2"
 
     cores = os.cpu_count() if hasattr(os, "cpu_count") else 4
+    hcores = min(cores, 3)  # 4% faster than 4+ on py3.9 @ r5-4500U
 
     sects = [
         [
```
```diff
@@ -397,10 +410,12 @@ def run_argparse(argv: list[str], formatter: Any, retry: bool) -> argparse.Names
   \033[36md2t\033[35m disables metadata collection, overrides -e2t*
   \033[36md2v\033[35m disables file verification, overrides -e2v*
   \033[36md2d\033[35m disables all database stuff, overrides -e2*
-  \033[36mnohash=\\.iso$\033[35m skips hashing file contents if path matches *.iso
-  \033[36mnoidx=\\.iso$\033[35m fully ignores the contents at paths matching *.iso
   \033[36mhist=/tmp/cdb\033[35m puts thumbnails and indexes at that location
   \033[36mscan=60\033[35m scan for new files every 60sec, same as --re-maxage
+  \033[36mnohash=\\.iso$\033[35m skips hashing file contents if path matches *.iso
+  \033[36mnoidx=\\.iso$\033[35m fully ignores the contents at paths matching *.iso
+  \033[36mxdev\033[35m do not descend into other filesystems
+  \033[36mxvol\033[35m skip symlinks leaving the volume root
 
 \033[0mdatabase, audio tags:
   "mte", "mth", "mtp", "mtm" all work the same as -mte, -mth, ...
```
```diff
@@ -537,9 +552,10 @@ def run_argparse(argv: list[str], formatter: Any, retry: bool) -> argparse.Names
     ap2.add_argument("--no-robots", action="store_true", help="adds http and html headers asking search engines to not index anything")
     ap2.add_argument("--logout", metavar="H", type=float, default="8086", help="logout clients after H hours of inactivity (0.0028=10sec, 0.1=6min, 24=day, 168=week, 720=month, 8760=year)")
 
-    ap2 = ap.add_argument_group('yolo options')
+    ap2 = ap.add_argument_group('shutdown options')
     ap2.add_argument("--ign-ebind", action="store_true", help="continue running even if it's impossible to listen on some of the requested endpoints")
     ap2.add_argument("--ign-ebind-all", action="store_true", help="continue running even if it's impossible to receive connections at all")
+    ap2.add_argument("--exit", metavar="WHEN", type=u, default="", help="shutdown after WHEN has finished; for example 'idx' will do volume indexing + metadata analysis")
 
     ap2 = ap.add_argument_group('logging options')
     ap2.add_argument("-q", action="store_true", help="quiet")
```
```diff
@@ -595,6 +611,10 @@ def run_argparse(argv: list[str], formatter: Any, retry: bool) -> argparse.Names
     ap2.add_argument("--hist", metavar="PATH", type=u, help="where to store volume data (db, thumbs)")
     ap2.add_argument("--no-hash", metavar="PTN", type=u, help="regex: disable hashing of matching paths during e2ds folder scans")
     ap2.add_argument("--no-idx", metavar="PTN", type=u, help="regex: disable indexing of matching paths during e2ds folder scans")
+    ap2.add_argument("--no-dhash", action="store_true", help="disable rescan acceleration; do full database integrity check -- makes the db ~5%% smaller and bootup/rescans 3~10x slower")
+    ap2.add_argument("--xdev", action="store_true", help="do not descend into other filesystems (symlink or bind-mount to another HDD, ...)")
+    ap2.add_argument("--xvol", action="store_true", help="skip symlinks leaving the volume root")
+    ap2.add_argument("--hash-mt", metavar="CORES", type=int, default=hcores, help="num cpu cores to use for file hashing; set 0 or 1 for single-core hashing")
     ap2.add_argument("--re-maxage", metavar="SEC", type=int, default=0, help="disk rescan volume interval, 0=off, can be set per-volume with the 'scan' volflag")
     ap2.add_argument("--db-act", metavar="SEC", type=float, default=10, help="defer any scheduled volume reindexing until SEC seconds after last db write (uploads, renames, ...)")
     ap2.add_argument("--srch-time", metavar="SEC", type=int, default=30, help="search deadline -- terminate searches running for more than SEC seconds")
```
```diff
@@ -610,9 +630,9 @@ def run_argparse(argv: list[str], formatter: Any, retry: bool) -> argparse.Names
     ap2.add_argument("--mtag-v", action="store_true", help="verbose tag scanning; print errors from mtp subprocesses and such")
     ap2.add_argument("-mtm", metavar="M=t,t,t", type=u, action="append", help="add/replace metadata mapping")
     ap2.add_argument("-mte", metavar="M,M,M", type=u, help="tags to index/display (comma-sep.)",
-        default="circle,album,.tn,artist,title,.bpm,key,.dur,.q,.vq,.aq,vc,ac,res,.fps,ahash,vhash")
+        default="circle,album,.tn,artist,title,.bpm,key,.dur,.q,.vq,.aq,vc,ac,fmt,res,.fps,ahash,vhash")
     ap2.add_argument("-mth", metavar="M,M,M", type=u, help="tags to hide by default (comma-sep.)",
-        default=".vq,.aq,vc,ac,res,.fps")
+        default=".vq,.aq,vc,ac,fmt,res,.fps")
     ap2.add_argument("-mtp", metavar="M=[f,]BIN", type=u, action="append", help="read tag M using program BIN to parse the file")
 
     ap2 = ap.add_argument_group('ui options')
```
```diff
@@ -634,6 +654,7 @@ def run_argparse(argv: list[str], formatter: Any, retry: bool) -> argparse.Names
     ap2.add_argument("--no-htp", action="store_true", help="disable httpserver threadpool, create threads as-needed instead")
     ap2.add_argument("--stackmon", metavar="P,S", type=u, help="write stacktrace to Path every S second")
     ap2.add_argument("--log-thrs", metavar="SEC", type=float, help="list active threads every SEC")
+    ap2.add_argument("--log-fk", metavar="REGEX", type=u, default="", help="log filekey params for files where path matches REGEX; '.' (a single dot) = all files")
     # fmt: on
 
     ap2 = ap.add_argument_group("help sections")
```
```diff
@@ -659,10 +680,17 @@ def main(argv: Optional[list[str]] = None) -> None:
     if argv is None:
         argv = sys.argv
 
-    desc = py_desc().replace("[", "\033[1;30m[")
-
-    f = '\033[36mcopyparty v{} "\033[35m{}\033[36m" ({})\n{}\033[0m\n'
-    lprint(f.format(S_VERSION, CODENAME, S_BUILD_DT, desc))
+    f = '\033[36mcopyparty v{} "\033[35m{}\033[36m" ({})\n{}\033[0;36m\n sqlite v{} | jinja2 v{} | pyftpd v{}\n\033[0m'
+    f = f.format(
+        S_VERSION,
+        CODENAME,
+        S_BUILD_DT,
+        py_desc().replace("[", "\033[1;30m["),
+        SQLITE_VER,
+        JINJA_VER,
+        PYFTPD_VER,
+    )
+    lprint(f)
 
     ensure_locale()
     if HAVE_SSL:
```
copyparty/__version__.py

@@ -1,8 +1,8 @@
 # coding: utf-8

-VERSION = (1, 3, 8)
+VERSION = (1, 3, 13)
 CODENAME = "god dag"
-BUILD_DT = (2022, 7, 27)
+BUILD_DT = (2022, 8, 15)

 S_VERSION = ".".join(map(str, VERSION))
 S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)
copyparty/authsrv.py

@@ -17,6 +17,7 @@ from .bos import bos
 from .util import (
     IMPLICATIONS,
     META_NOBOTS,
+    SQLITE_VER,
     Pebkac,
     absreal,
     fsenc,
@@ -707,7 +708,7 @@ class AuthSrv(object):
                     raise Exception('invalid mountpoint "{}"'.format(vol_dst))

                 # cfg files override arguments and previous files
-                vol_src = bos.path.abspath(vol_src)
+                vol_src = absreal(vol_src)
                 vol_dst = vol_dst.strip("/")
                 self._map_volume(vol_src, vol_dst, mount, daxs, mflags)
                 continue
@@ -728,12 +729,12 @@ class AuthSrv(object):
         self, lvl: str, uname: str, axs: AXS, flags: dict[str, Any]
     ) -> None:
         if lvl.strip("crwmdg"):
-            raise Exception("invalid volume flag: {},{}".format(lvl, uname))
+            raise Exception("invalid volflag: {},{}".format(lvl, uname))

         if lvl == "c":
             cval: Union[bool, str] = True
             try:
-                # volume flag with arguments, possibly with a preceding list of bools
+                # volflag with arguments, possibly with a preceding list of bools
                 uname, cval = uname.split("=", 1)
             except:
                 # just one or more bools
@@ -818,7 +819,7 @@ class AuthSrv(object):
                 src = uncyg(src)

             # print("\n".join([src, dst, perms]))
-            src = bos.path.abspath(src)
+            src = absreal(src)
             dst = dst.strip("/")
             self._map_volume(src, dst, mount, daxs, mflags)

@@ -847,7 +848,7 @@ class AuthSrv(object):
         if not mount:
             # -h says our defaults are CWD at root and read/write for everyone
             axs = AXS(["*"], ["*"], None, None)
-            vfs = VFS(self.log_func, bos.path.abspath("."), "", axs, {})
+            vfs = VFS(self.log_func, absreal("."), "", axs, {})
         elif "" not in mount:
             # there's volumes but no root; make root inaccessible
             vfs = VFS(self.log_func, "", "", AXS(), {})
@@ -1029,10 +1030,15 @@ class AuthSrv(object):
                 vol.flags["dathumb"] = True
                 vol.flags["dithumb"] = True

+        have_fk = False
         for vol in vfs.all_vols.values():
             fk = vol.flags.get("fk")
             if fk:
                 vol.flags["fk"] = int(fk) if fk is not True else 8
+                have_fk = True
+
+        if have_fk and re.match(r"^[0-9\.]+$", self.args.fk_salt):
+            self.log("filekey salt: {}".format(self.args.fk_salt))

         for vol in vfs.all_vols.values():
             if "pk" in vol.flags and "gz" not in vol.flags and "xz" not in vol.flags:
@@ -1061,7 +1067,7 @@ class AuthSrv(object):
                 if ptn:
                     vol.flags[vf] = re.compile(ptn)

-            for k in ["e2t", "e2ts", "e2tsr", "e2v", "e2vu", "e2vp"]:
+            for k in ["e2t", "e2ts", "e2tsr", "e2v", "e2vu", "e2vp", "xdev", "xvol"]:
                 if getattr(self.args, k):
                     vol.flags[k] = True

@@ -1079,7 +1085,7 @@ class AuthSrv(object):
             if "mth" not in vol.flags:
                 vol.flags["mth"] = self.args.mth

-            # append parsers from argv to volume-flags
+            # append parsers from argv to volflags
             self._read_volflag(vol.flags, "mtp", self.args.mtp, True)

             # d2d drops all database features for a volume
@@ -1142,7 +1148,7 @@ class AuthSrv(object):

             for mtp in local_only_mtp:
                 if mtp not in local_mte:
-                    t = 'volume "/{}" defines metadata tag "{}", but doesnt use it in "-mte" (or with "cmte" in its volume-flags)'
+                    t = 'volume "/{}" defines metadata tag "{}", but doesnt use it in "-mte" (or with "cmte" in its volflags)'
                     self.log(t.format(vol.vpath, mtp), 1)
                     errors = True

@@ -1151,7 +1157,7 @@ class AuthSrv(object):
         tags = [y for x in tags for y in x.split(",")]
         for mtp in tags:
             if mtp not in all_mte:
-                t = 'metadata tag "{}" is defined by "-mtm" or "-mtp", but is not used by "-mte" (or by any "cmte" volume-flag)'
+                t = 'metadata tag "{}" is defined by "-mtm" or "-mtp", but is not used by "-mte" (or by any "cmte" volflag)'
                 self.log(t.format(mtp), 1)
                 errors = True

@@ -1160,7 +1166,7 @@ class AuthSrv(object):

         vfs.bubble_flags()

-        e2vs = []
+        have_e2d = False
         t = "volumes and permissions:\n"
         for zv in vfs.all_vols.values():
             if not self.warn_anonwrite:
@@ -1179,24 +1185,27 @@ class AuthSrv(object):
                 u = u if u else "\033[36m--none--\033[0m"
                 t += "\n| {}: {}".format(txt, u)

-            if "e2v" in zv.flags:
-                e2vs.append(zv.vpath or "/")
+            if "e2d" in zv.flags:
+                have_e2d = True

             t += "\n"

-        if e2vs:
-            t += "\n\033[33me2v enabled for the following volumes;\nuploads will be blocked until scan has finished:\n \033[0m"
-            t += " ".join(e2vs) + "\n"
-
-        if self.warn_anonwrite and not self.args.no_voldump:
-            self.log(t)
+        if self.warn_anonwrite:
+            if not self.args.no_voldump:
+                self.log(t)
+
+            if have_e2d:
+                t = self.chk_sqlite_threadsafe()
+                if t:
+                    self.log("\n\033[{}\033[0m\n".format(t))

         try:
             zv, _ = vfs.get("/", "*", False, True)
             if self.warn_anonwrite and os.getcwd() == zv.realpath:
-                self.warn_anonwrite = False
                 t = "anyone can write to the current directory: {}\n"
                 self.log(t.format(zv.realpath), c=1)
+
+            self.warn_anonwrite = False
         except Pebkac:
             self.warn_anonwrite = True

@@ -1210,6 +1219,23 @@ class AuthSrv(object):
         if pwds:
             self.re_pwd = re.compile("=(" + "|".join(pwds) + ")([]&; ]|$)")

+    def chk_sqlite_threadsafe(self) -> str:
+        v = SQLITE_VER[-1:]
+
+        if v == "1":
+            # threadsafe (linux, windows)
+            return ""
+
+        if v == "2":
+            # module safe, connections unsafe (macos)
+            return "33m your sqlite3 was compiled with reduced thread-safety;\n database features (-e2d, -e2t) SHOULD be fine\n but MAY cause database-corruption and crashes"
+
+        if v == "0":
+            # everything unsafe
+            return "31m your sqlite3 was compiled WITHOUT thread-safety!\n database features (-e2d, -e2t) will PROBABLY cause crashes!"
+
+        return "36m cannot verify sqlite3 thread-safety; strange but probably fine"
+
     def dbg_ls(self) -> None:
         users = self.args.ls
         vol = "*"
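Judging by `chk_sqlite_threadsafe` and its comments, copyparty's `SQLITE_VER` string ends with sqlite's compile-time THREADSAFE digit (0 = single-thread, 1 = serialized, 2 = multi-thread), so `SQLITE_VER[-1:]` recovers it. A minimal sketch of how such a digit can be read, using only the stdlib `sqlite3` module (the helper name is made up, and this is not copyparty's actual implementation):

```python
import sqlite3

def sqlite_threadsafe_digit() -> str:
    db = sqlite3.connect(":memory:")
    try:
        # some builds omit the pragma entirely, hence the fallback
        for (opt,) in db.execute("pragma compile_options"):
            if opt.startswith("THREADSAFE="):
                return opt.partition("=")[2]  # "0", "1" or "2"
    except sqlite3.Error:
        pass
    finally:
        db.close()
    return "?"  # unknown; treat as "probably fine"
```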
copyparty/fstab.py

@@ -1,21 +1,17 @@
 # coding: utf-8
 from __future__ import print_function, unicode_literals

-try:
-    import ctypes
-except:
-    pass
-
 import os
 import re
 import time

 from .__init__ import ANYWIN, MACOS
 from .authsrv import AXS, VFS
+from .bos import bos
 from .util import chkcmd, min_ex

 try:
-    from typing import Any, Optional, Union
+    from typing import Optional, Union

     from .util import RootLogger
 except:
@@ -44,13 +40,9 @@ class Fstab(object):
         msg = "failed to determine filesystem at [{}]; assuming {}\n{}"

         if ANYWIN:
-            fs = "vfat"  # can smb do sparse files? gonna guess no
+            fs = "vfat"
             try:
-                # good enough
-                disk = path.split(":", 1)[0]
-                disk = "{}:\\".format(disk).lower()
-                assert len(disk) == 3
-                path = disk
+                path = self._winpath(path)
             except:
                 self.log(msg.format(path, fs, min_ex()), 3)
                 return fs
@@ -71,6 +63,19 @@ class Fstab(object):
         self.log("found {} at {}".format(fs, path))
         return fs

+    def _winpath(self, path: str) -> str:
+        # try to combine volume-label + st_dev (vsn)
+        path = path.replace("/", "\\")
+        vid = path.split(":", 1)[0].strip("\\").split("\\", 1)[0]
+        try:
+            return "{}*{}".format(vid, bos.stat(path).st_dev)
+        except:
+            return vid
+
+    def build_fallback(self) -> None:
+        self.tab = VFS(self.log_func, "idk", "/", AXS(), {})
+        self.trusted = False
+
     def build_tab(self) -> None:
         self.log("building tab")

@@ -100,6 +105,9 @@ class Fstab(object):
     def relabel(self, path: str, nval: str) -> None:
         assert self.tab
         self.cache = {}
+        if ANYWIN:
+            path = self._winpath(path)
+
         path = path.lstrip("/")
         ptn = re.compile(r"^[^\\/]*")
         vn, rem = self.tab._find(path)
@@ -128,8 +136,7 @@ class Fstab(object):
         except:
             # prisonparty or other restrictive environment
             self.log("failed to build tab:\n{}".format(min_ex()), 3)
-            self.tab = VFS(self.log_func, "idk", "/", AXS(), {})
-            self.trusted = False
+            self.build_fallback()

         assert self.tab
         ret = self.tab._find(path)[0]
@@ -139,43 +146,9 @@ class Fstab(object):
         return "idk"

     def get_w32(self, path: str) -> str:
-        # list mountpoints: fsutil fsinfo drives
-        assert ctypes
-        from ctypes.wintypes import BOOL, DWORD, LPCWSTR, LPDWORD, LPWSTR, MAX_PATH
-
-        def echk(rc: int, fun: Any, args: Any) -> None:
-            if not rc:
-                raise ctypes.WinError(ctypes.get_last_error())
-            return None
-
-        k32 = ctypes.WinDLL("kernel32", use_last_error=True)
-        k32.GetVolumeInformationW.errcheck = echk
-        k32.GetVolumeInformationW.restype = BOOL
-        k32.GetVolumeInformationW.argtypes = (
-            LPCWSTR,
-            LPWSTR,
-            DWORD,
-            LPDWORD,
-            LPDWORD,
-            LPDWORD,
-            LPWSTR,
-            DWORD,
-        )
-
-        bvolname = ctypes.create_unicode_buffer(MAX_PATH + 1)
-        bfstype = ctypes.create_unicode_buffer(MAX_PATH + 1)
-        serial = DWORD()
-        max_name_len = DWORD()
-        fs_flags = DWORD()
-
-        k32.GetVolumeInformationW(
-            path,
-            bvolname,
-            ctypes.sizeof(bvolname),
-            ctypes.byref(serial),
-            ctypes.byref(max_name_len),
-            ctypes.byref(fs_flags),
-            bfstype,
-            ctypes.sizeof(bfstype),
-        )
-        return bfstype.value
+        if not self.tab:
+            self.build_fallback()
+
+        assert self.tab
+        ret = self.tab._find(path)[0]
+        return ret.realpath
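The old ctypes/GetVolumeInformationW probe is gone; `_winpath` instead reduces a windows path to a cheap cache key of drive letter (or share name) plus `st_dev`, so one fstab entry covers the whole volume. A hypothetical standalone restatement of that key function, using plain `os.stat` instead of copyparty's `bos` wrapper:

```python
import os

def winpath_key(path: str) -> str:
    # "C:/Users/ed" -> "C*<st_dev>", r"\\nas\music" -> "nas*<st_dev>";
    # falls back to just the drive/share id if stat fails
    path = path.replace("/", "\\")
    vid = path.split(":", 1)[0].strip("\\").split("\\", 1)[0]
    try:
        return "{}*{}".format(vid, os.stat(path).st_dev)
    except OSError:
        return vid
```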
copyparty/ftpd.py

@@ -391,7 +391,7 @@ class Ftpd(object):
         for h, lp in hs:
             FTPServer((ip, int(lp)), h, ioloop)

-        thr = threading.Thread(target=ioloop.loop)
+        thr = threading.Thread(target=ioloop.loop, name="ftp")
         thr.daemon = True
         thr.start()
copyparty/httpcli.py

@@ -42,6 +42,7 @@ from .util import (
     exclude_dotfiles,
     fsenc,
     gen_filekey,
+    gen_filekey_dbg,
     gencookie,
     get_df,
     get_spd,
@@ -108,6 +109,7 @@ class HttpCli(object):
         self.u2fh = conn.u2fh  # mypy404
         self.log_func = conn.log_func  # mypy404
         self.log_src = conn.log_src  # mypy404
+        self.gen_fk = self._gen_fk if self.args.log_fk else gen_filekey
         self.tls: bool = hasattr(self.s, "cipher")

         # placeholders; assigned by run()
@@ -177,6 +179,9 @@ class HttpCli(object):
         if rem.startswith("/") or rem.startswith("../") or "/../" in rem:
             raise Exception("that was close")

+    def _gen_fk(self, salt: str, fspath: str, fsize: int, inode: int) -> str:
+        return gen_filekey_dbg(salt, fspath, fsize, inode, self.log, self.args.log_fk)
+
     def j2s(self, name: str, **ka: Any) -> str:
         tpl = self.conn.hsrv.j2[name]
         ka["ts"] = self.conn.hsrv.cachebuster()
@@ -711,7 +716,7 @@ class HttpCli(object):
         reader, remains = self.get_body_reader()
         vfs, rem = self.asrv.vfs.get(self.vpath, self.uname, False, True)
         lim = vfs.get_dbv(rem)[0].lim
-        fdir = os.path.join(vfs.realpath, rem)
+        fdir = vfs.canonical(rem)
         if lim:
             fdir, rem = lim.all(self.ip, rem, remains, fdir)

@@ -813,7 +818,7 @@ class HttpCli(object):

         vsuf = ""
         if self.can_read and "fk" in vfs.flags:
-            vsuf = "?k=" + gen_filekey(
+            vsuf = "?k=" + self.gen_fk(
                 self.args.fk_salt,
                 path,
                 post_sz,
@@ -921,7 +926,7 @@ class HttpCli(object):
         except:
             raise Pebkac(422, "you POSTed invalid json")

-        # self.reply(b" DD" + b"oS Protection ", 503)
+        # self.reply(b"cloudflare", 503)
         # return True

         if "srch" in self.uparam or "srch" in body:
@@ -950,7 +955,7 @@ class HttpCli(object):

         if rem:
             try:
-                dst = os.path.join(vfs.realpath, rem)
+                dst = vfs.canonical(rem)
                 if not bos.path.isdir(dst):
                     bos.makedirs(dst)
             except OSError as ex:
@@ -1185,7 +1190,7 @@ class HttpCli(object):
         sanitized = sanitize_fn(new_dir, "", [])

         if not nullwrite:
-            fdir = os.path.join(vfs.realpath, rem)
+            fdir = vfs.canonical(rem)
             fn = os.path.join(fdir, sanitized)

             if not bos.path.isdir(fdir):
@@ -1224,7 +1229,7 @@ class HttpCli(object):
         sanitized = sanitize_fn(new_file, "", [])

         if not nullwrite:
-            fdir = os.path.join(vfs.realpath, rem)
+            fdir = vfs.canonical(rem)
             fn = os.path.join(fdir, sanitized)

             if bos.path.exists(fn):
@@ -1245,7 +1250,7 @@ class HttpCli(object):

         upload_vpath = self.vpath
         lim = vfs.get_dbv(rem)[0].lim
-        fdir_base = os.path.join(vfs.realpath, rem)
+        fdir_base = vfs.canonical(rem)
         if lim:
             fdir_base, rem = lim.all(self.ip, rem, -1, fdir_base)
             upload_vpath = "{}/{}".format(vfs.vpath, rem).strip("/")
@@ -1286,7 +1291,7 @@ class HttpCli(object):
                 else:
                     open_args = {}
                     tnam = fname = os.devnull
-                    fdir = ""
+                    fdir = abspath = ""

                 if lim:
                     lim.chk_bup(self.ip)
@@ -1318,12 +1323,15 @@ class HttpCli(object):
                         lim.chk_bup(self.ip)
                         lim.chk_nup(self.ip)
                     except:
+                        if not nullwrite:
                             bos.unlink(tabspath)
                             bos.unlink(abspath)
                         fname = os.devnull
                         raise

+                if not nullwrite:
                     atomic_move(tabspath, abspath)

                 files.append(
                     (sz, sha_hex, sha_b64, p_file or "(discarded)", fname, abspath)
                 )
@@ -1373,9 +1381,9 @@ class HttpCli(object):
         for sz, sha_hex, sha_b64, ofn, lfn, ap in files:
             vsuf = ""
             if self.can_read and "fk" in vfs.flags:
-                vsuf = "?k=" + gen_filekey(
+                vsuf = "?k=" + self.gen_fk(
                     self.args.fk_salt,
-                    abspath,
+                    ap,
                     sz,
                     0 if ANYWIN or not ap else bos.stat(ap).st_ino,
                 )[: vfs.flags["fk"]]
@@ -1453,7 +1461,7 @@ class HttpCli(object):
             raise Pebkac(411)

         rp, fn = vsplit(rem)
-        fp = os.path.join(vfs.realpath, rp)
+        fp = vfs.canonical(rp)
         lim = vfs.get_dbv(rem)[0].lim
         if lim:
             fp, rp = lim.all(self.ip, rp, clen, fp)
@@ -2310,7 +2318,7 @@ class HttpCli(object):

         if not is_dir and (self.can_read or self.can_get):
             if not self.can_read and "fk" in vn.flags:
-                correct = gen_filekey(
+                correct = self.gen_fk(
                     self.args.fk_salt, abspath, st.st_size, 0 if ANYWIN else st.st_ino
                 )[: vn.flags["fk"]]
                 got = self.uparam.get("k")
@@ -2534,7 +2542,7 @@ class HttpCli(object):
             if add_fk:
                 href = "{}?k={}".format(
                     quotep(href),
-                    gen_filekey(
+                    self.gen_fk(
                         self.args.fk_salt, fspath, sz, 0 if ANYWIN else inf.st_ino
                     )[:add_fk],
                 )
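`self.gen_fk` picks the filekey generator once, at construction time: the plain `gen_filekey` normally, or a logging wrapper when `--log-fk` is set, so the hot paths call through one attribute with no per-request branching. A generic sketch of that select-once-then-call pattern (names and the hash recipe here are illustrative, not copyparty's actual `gen_filekey`):

```python
import hashlib
import logging

logging.basicConfig(level=logging.INFO)

def make_key(salt: str, path: str, size: int, inode: int) -> str:
    data = "{} {} {} {}".format(salt, path, size, inode)
    return hashlib.sha512(data.encode("utf-8")).hexdigest()

def make_key_dbg(salt: str, path: str, size: int, inode: int) -> str:
    # same result, plus a log line showing what went into the key
    k = make_key(salt, path, size, inode)
    logging.info("filekey(%r, %d, %d) = %s...", path, size, inode, k[:8])
    return k

class Client:
    def __init__(self, debug: bool) -> None:
        self.gen_fk = make_key_dbg if debug else make_key
```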
copyparty/httpsrv.py

@@ -102,7 +102,7 @@ class HttpSrv(object):
             start_log_thrs(self.log, self.args.log_thrs, nid)

         self.th_cfg: dict[str, Any] = {}
-        t = threading.Thread(target=self.post_init)
+        t = threading.Thread(target=self.post_init, name="hsrv-init2")
         t.daemon = True
         t.start()

@@ -165,13 +165,13 @@ class HttpSrv(object):
         """listens on a shared tcp server"""
         ip, port = srv_sck.getsockname()
         fno = srv_sck.fileno()
-        msg = "subscribed @ {}:{} f{}".format(ip, port, fno)
+        msg = "subscribed @ {}:{} f{} p{}".format(ip, port, fno, os.getpid())
         self.log(self.name, msg)

         def fun() -> None:
             self.broker.say("cb_httpsrv_up")

-        threading.Thread(target=fun).start()
+        threading.Thread(target=fun, name="sig-hsrv-up1").start()

         while not self.stopping:
             if self.args.log_conn:
copyparty/ico.py

@@ -15,7 +15,7 @@ class Ico(object):
     def get(self, ext: str, as_thumb: bool) -> tuple[str, bytes]:
         """placeholder to make thumbnails not break"""

-        zb = hashlib.md5(ext.encode("utf-8")).digest()[:2]
+        zb = hashlib.sha1(ext.encode("utf-8")).digest()[2:4]
         if PY2:
             zb = [ord(x) for x in zb]

copyparty/mtag.py

@@ -178,7 +178,7 @@ def parse_ffprobe(txt: str) -> tuple[dict[str, tuple[int, Any]], dict[str, list[
     ]

     if typ == "format":
-        kvm = [["duration", ".dur"], ["bit_rate", ".q"]]
+        kvm = [["duration", ".dur"], ["bit_rate", ".q"], ["format_name", "fmt"]]

     for sk, rk in kvm:
         v1 = strm.get(sk)
@@ -239,6 +239,9 @@ def parse_ffprobe(txt: str) -> tuple[dict[str, tuple[int, Any]], dict[str, list[
     if ".q" in ret:
         del ret[".q"]

+    if "fmt" in ret:
+        ret["fmt"] = ret["fmt"].split(",")[0]
+
     if ".resw" in ret and ".resh" in ret:
         ret["res"] = "{}x{}".format(ret[".resw"], ret[".resh"])

@@ -310,6 +313,7 @@ class MTag(object):
                 "tope",
             ],
             "title": ["title", "tit2", "\u00a9nam"],
+            "comment": ["comment"],
             "circle": [
                 "album-artist",
                 "tpe2",
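ffprobe's `format_name` is often a comma-separated list of every demuxer that matches the container, so only the first entry is kept as the `fmt` tag; for example:

```python
# mp4-family files report all compatible container names at once
ret = {"fmt": "mov,mp4,m4a,3gp,3g2,mj2"}
if "fmt" in ret:
    ret["fmt"] = ret["fmt"].split(",")[0]
print(ret["fmt"])  # "mov"
```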
copyparty/svchub.py

@@ -6,6 +6,7 @@ import base64
 import calendar
 import gzip
 import os
+import re
 import shlex
 import signal
 import socket
@@ -19,7 +20,7 @@ try:
     from types import FrameType

     import typing
-    from typing import Optional, Union
+    from typing import Any, Optional, Union
 except:
     pass

@@ -29,7 +30,15 @@ from .mtag import HAVE_FFMPEG, HAVE_FFPROBE
 from .tcpsrv import TcpSrv
 from .th_srv import HAVE_PIL, HAVE_VIPS, HAVE_WEBP, ThumbSrv
 from .up2k import Up2k
-from .util import ansi_re, min_ex, mp, start_log_thrs, start_stackmon, alltrace
+from .util import (
+    VERSIONS,
+    alltrace,
+    ansi_re,
+    min_ex,
+    mp,
+    start_log_thrs,
+    start_stackmon,
+)


 class SvcHub(object):
@@ -49,10 +58,12 @@ class SvcHub(object):
         self.logf: Optional[typing.TextIO] = None
         self.logf_base_fn = ""
         self.stop_req = False
-        self.reload_req = False
         self.stopping = False
+        self.stopped = False
+        self.reload_req = False
         self.reloading = False
         self.stop_cond = threading.Condition()
+        self.nsigs = 3
         self.retcode = 0
         self.httpsrv_up = 0

@@ -113,6 +124,9 @@ class SvcHub(object):
         if not args.hardlink and args.never_symlink:
             args.no_dedup = True

+        if args.log_fk:
+            args.log_fk = re.compile(args.log_fk)
+
         # initiate all services to manage
         self.asrv = AuthSrv(self.args, self.log)
         if args.ls:
@@ -132,8 +146,8 @@ class SvcHub(object):
         self.args.th_dec = list(decs.keys())
         self.thumbsrv = None
         if not args.no_thumb:
-            t = "decoder preference: {}".format(", ".join(self.args.th_dec))
-            self.log("thumb", t)
+            t = ", ".join(self.args.th_dec) or "(None available)"
+            self.log("thumb", "decoder preference: {}".format(t))

             if "pil" in self.args.th_dec and not HAVE_WEBP:
                 msg = "disabling webp thumbnails because either libwebp is not available or your Pillow is too old"
@@ -192,6 +206,9 @@ class SvcHub(object):
             self.log("root", t, 1)

             self.retcode = 1
+            self.sigterm()
+
+    def sigterm(self) -> None:
         os.kill(os.getpid(), signal.SIGTERM)

     def cb_httpsrv_up(self) -> None:
@@ -255,7 +272,7 @@ class SvcHub(object):
     def run(self) -> None:
         self.tcpsrv.run()

-        thr = threading.Thread(target=self.thr_httpsrv_up)
+        thr = threading.Thread(target=self.thr_httpsrv_up, name="sig-hsrv-up2")
         thr.daemon = True
         thr.start()

@@ -283,7 +300,9 @@ class SvcHub(object):
                 pass

             self.shutdown()
-            thr.join()
+            # cant join; eats signals on win10
+            while not self.stopped:
+                time.sleep(0.1)
         else:
             self.stop_thr()

@@ -292,7 +311,7 @@ class SvcHub(object):
             return "cannot reload; already in progress"

         self.reloading = True
-        t = threading.Thread(target=self._reload)
+        t = threading.Thread(target=self._reload, name="reloading")
         t.daemon = True
         t.start()
         return "reload initiated"
@@ -319,9 +338,22 @@ class SvcHub(object):

     def signal_handler(self, sig: int, frame: Optional[FrameType]) -> None:
         if self.stopping:
+            if self.nsigs <= 0:
+                try:
+                    threading.Thread(target=self.pr, args=("OMBO BREAKER",)).start()
+                    time.sleep(0.1)
+                except:
+                    pass
+
+                if ANYWIN:
+                    os.system("taskkill /f /pid {}".format(os.getpid()))
+                else:
+                    os.kill(os.getpid(), signal.SIGKILL)
+            else:
+                self.nsigs -= 1
             return

-        if sig == signal.SIGUSR1:
+        if not ANYWIN and sig == signal.SIGUSR1:
             self.reload_req = True
         else:
             self.stop_req = True
@@ -342,9 +374,7 @@ class SvcHub(object):

         ret = 1
         try:
-            with self.log_mutex:
-                print("OPYTHAT")
+            self.pr("OPYTHAT")

             self.tcpsrv.shutdown()
             self.broker.shutdown()
             self.up2k.shutdown()
@@ -357,22 +387,23 @@ class SvcHub(object):
                         break

                     if n == 3:
-                        print("waiting for thumbsrv (10sec)...")
+                        self.pr("waiting for thumbsrv (10sec)...")

-            print("nailed it", end="")
+            self.pr("nailed it", end="")
             ret = self.retcode
         except:
-            print("\033[31m[ error during shutdown ]\n{}\033[0m".format(min_ex()))
+            self.pr("\033[31m[ error during shutdown ]\n{}\033[0m".format(min_ex()))
             raise
         finally:
             if self.args.wintitle:
                 print("\033]0;\033\\", file=sys.stderr, end="")
                 sys.stderr.flush()

-            print("\033[0m")
+            self.pr("\033[0m")
             if self.logf:
                 self.logf.close()

+            self.stopped = True
             sys.exit(ret)

     def _log_disabled(self, src: str, msg: str, c: Union[int, str] = 0) -> None:
@@ -439,6 +470,10 @@ class SvcHub(object):
         if self.logf:
             self.logf.write(msg)

+    def pr(self, *a: Any, **ka: Any) -> None:
+        with self.log_mutex:
+            print(*a, **ka)
+
     def check_mp_support(self) -> str:
         vmin = sys.version_info[1]
         if WINDOWS:
@@ -511,7 +546,8 @@ class SvcHub(object):
             return

         self.tstack = time.time()
-        zb = alltrace().encode("utf-8", "replace")
+        zs = "{}\n{}".format(VERSIONS, alltrace())
+        zb = zs.encode("utf-8", "replace")
         zb = gzip.compress(zb)
         zs = base64.b64encode(zb).decode("ascii")
         self.log("stacks", zs)
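`nsigs` turns repeated interrupts into an escalating shutdown: the first signal requests a graceful stop, the next few are absorbed while shutdown runs, and once the counter hits zero the process nukes itself (taskkill on windows, SIGKILL elsewhere). A standalone sketch of the same idea, posix-only for brevity:

```python
import os
import signal
import time

stopping = False
nsigs = 3

def handler(sig, frame):
    global stopping, nsigs
    if stopping:
        if nsigs <= 0:
            # graceful shutdown is stuck; force-quit
            # (windows would need os.system("taskkill /f /pid %d" % os.getpid()))
            os.kill(os.getpid(), signal.SIGKILL)
        else:
            nsigs -= 1
        return
    stopping = True
    print("shutting down; keep interrupting to force-quit")

signal.signal(signal.SIGINT, handler)

while not stopping:  # stand-in for the real main loop
    time.sleep(0.1)
```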
copyparty/tcpsrv.py

@@ -1,6 +1,7 @@
 # coding: utf-8
 from __future__ import print_function, unicode_literals

+import os
 import re
 import socket
 import sys
@@ -128,7 +129,7 @@ class TcpSrv(object):
         srv.listen(self.args.nc)
         ip, port = srv.getsockname()
         fno = srv.fileno()
-        msg = "listening @ {}:{} f{}".format(ip, port, fno)
+        msg = "listening @ {}:{} f{} p{}".format(ip, port, fno, os.getpid())
         self.log("tcpsrv", msg)
         if self.args.q:
             print(msg)
copyparty/up2k.py

@@ -28,12 +28,13 @@ from .mtag import MParser, MTag
 from .util import (
     HAVE_SQLITE3,
     SYMTIME,
+    MTHash,
     Pebkac,
     ProgressPrinter,
     absreal,
     atomic_move,
-    djoin,
     db_ex_chk,
+    djoin,
     fsenc,
     min_ex,
     quotep,
@@ -45,6 +46,7 @@ from .util import (
     s3enc,
     sanitize_fn,
     statdir,
+    vjoin,
     vsplit,
     w8b64dec,
     w8b64enc,
@@ -143,7 +145,8 @@ class Up2k(object):
             if self.sqlite_ver < (3, 9):
                 self.no_expr_idx = True
         else:
-            self.log("could not initialize sqlite3, will use in-memory registry only")
+            t = "could not initialize sqlite3, will use in-memory registry only"
+            self.log(t, 3)

         if ANYWIN:
             # usually fails to set lastmod too quickly
@@ -154,6 +157,11 @@ class Up2k(object):

         self.fstab = Fstab(self.log_func)

+        if self.args.hash_mt < 2:
+            self.mth: Optional[MTHash] = None
+        else:
+            self.mth = MTHash(self.args.hash_mt)
+
         if self.args.no_fastboot:
             self.deferred_init()

@@ -175,6 +183,9 @@ class Up2k(object):
         all_vols = self.asrv.vfs.all_vols
         have_e2d = self.init_indexes(all_vols, [])

+        if not self.pp and self.args.exit == "idx":
+            return self.hub.sigterm()
+
         thr = threading.Thread(target=self._snapshot, name="up2k-snapshot")
         thr.daemon = True
         thr.start()
@@ -209,6 +220,7 @@ class Up2k(object):
     def _unblock(self) -> None:
         if self.blocked is not None:
             self.blocked = None
+            if not self.stop:
                 self.log("uploads are now possible", 2)

     def get_state(self) -> str:
@@ -563,7 +575,6 @@ class Up2k(object):
                 t = "online (running mtp)"
                 if scan_vols:
                     thr = threading.Thread(target=self._run_all_mtp, name="up2k-mtp-scan")
-                    thr.daemon = True
             else:
                 self.pp = None
                 t = "online, idle"
@@ -572,6 +583,7 @@ class Up2k(object):
                 self.volstate[vol.vpath] = t

         if thr:
+            thr.daemon = True
             thr.start()

         return have_e2d
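`--hash-mt` now selects between the old single-threaded chunk loop and a new `MTHash` worker pool; the pool itself lives in util.py and is not part of this diff. A minimal sketch of the underlying idea, hashing fixed-size chunks of one file from several threads; unlike the real MTHash (which, per the diff's comment, shares one file handle behind an `imutex`), this sketch gives each worker its own handle:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def hash_chunk(path: str, ofs: int, sz: int) -> bytes:
    # each worker opens its own handle, so no seek/read locking is needed
    with open(path, "rb") as f:
        f.seek(ofs)
        return hashlib.sha512(f.read(sz)).digest()

def mt_hashlist(path: str, fsz: int, csz: int, threads: int = 4) -> list[bytes]:
    jobs = [(ofs, min(csz, fsz - ofs)) for ofs in range(0, fsz, csz)]
    with ThreadPoolExecutor(threads) as ex:
        return list(ex.map(lambda j: hash_chunk(path, j[0], j[1]), jobs))
```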
@@ -671,6 +683,11 @@ class Up2k(object):
         top = vol.realpath
         rei = vol.flags.get("noidx")
         reh = vol.flags.get("nohash")

+        dev = 0
+        if vol.flags.get("xdev"):
+            dev = bos.stat(top).st_dev
+
         with self.mutex:
             reg = self.register_vpath(top, vol.flags)
             assert reg and self.pp
@@ -688,20 +705,42 @@ class Up2k(object):
         excl += list(self.asrv.vfs.histtab.values())
         if WINDOWS:
             excl = [x.replace("/", "\\") for x in excl]
+        else:
+            # ~/.wine/dosdevices/z:/ and such
+            excl += ["/dev", "/proc", "/run", "/sys"]
+
         rtop = absreal(top)
         n_add = n_rm = 0
         try:
-            n_add = self._build_dir(db, top, set(excl), top, rtop, rei, reh, [])
+            n_add = self._build_dir(
+                db,
+                top,
+                set(excl),
+                top,
+                rtop,
+                rei,
+                reh,
+                [],
+                dev,
+                bool(vol.flags.get("xvol")),
+            )
             n_rm = self._drop_lost(db.c, top, excl)
         except Exception as ex:
-            db_ex_chk(self.log, ex, db_path)
             t = "failed to index volume [{}]:\n{}"
             self.log(t.format(top, min_ex()), c=1)
+            if db_ex_chk(self.log, ex, db_path):
+                self.hub.log_stacks()

         if db.n:
             self.log("commit {} new files".format(db.n))

+        if self.args.no_dhash:
+            if db.c.execute("select d from dh").fetchone():
+                db.c.execute("delete from dh")
+                self.log("forgetting dhashes in {}".format(top))
+        elif n_add or n_rm:
+            self._set_tagscan(db.c, True)
+
         db.c.connection.commit()

         return True, bool(n_add or n_rm or do_vac)
@@ -716,39 +755,51 @@ class Up2k(object):
         rei: Optional[Pattern[str]],
         reh: Optional[Pattern[str]],
         seen: list[str],
+        dev: int,
+        xvol: bool,
     ) -> int:
+        if xvol and not rcdir.startswith(top):
+            self.log("skip xvol: [{}] -> [{}]".format(cdir, rcdir), 6)
+            return 0
+
         if rcdir in seen:
             t = "bailing from symlink loop,\n prev: {}\n curr: {}\n from: {}"
             self.log(t.format(seen[-1], rcdir, cdir), 3)
             return 0

+        ret = 0
         seen = seen + [rcdir]
+        unreg: list[str] = []
+        files: list[tuple[int, int, str]] = []
+
         assert self.pp and self.mem_cur
         self.pp.msg = "a{} {}".format(self.pp.n, cdir)
-        ret = 0
-        unreg: list[str] = []
-        seen_files = {}  # != inames; files-only for dropcheck
+
+        rd = cdir[len(top) :].strip("/")
+        if WINDOWS:
+            rd = rd.replace("\\", "/").strip("/")

         g = statdir(self.log_func, not self.args.no_scandir, False, cdir)
         gl = sorted(g)
-        inames = {x[0]: 1 for x in gl}
+        partials = set([x[0] for x in gl if "PARTIAL" in x[0]])
         for iname, inf in gl:
             if self.stop:
                 return -1

+            rp = vjoin(rd, iname)
             abspath = os.path.join(cdir, iname)
-            rp = abspath[len(top) :].lstrip("/")
-            if WINDOWS:
-                rp = rp.replace("\\", "/").strip("/")

             if rei and rei.search(abspath):
                 unreg.append(rp)
                 continue

-            nohash = reh.search(abspath) if reh else False
             lmod = int(inf.st_mtime)
             sz = inf.st_size
             if stat.S_ISDIR(inf.st_mode):
                 rap = absreal(abspath)
+                if dev and inf.st_dev != dev:
+                    self.log("skip xdev {}->{}: {}".format(dev, inf.st_dev, abspath), 6)
+                    continue
                 if abspath in excl or rap in excl:
                     unreg.append(rp)
                     continue
@@ -757,7 +808,9 @@ class Up2k(object):
                     continue
                 # self.log(" dir: {}".format(abspath))
                 try:
-                    ret += self._build_dir(db, top, excl, abspath, rap, rei, reh, seen)
+                    ret += self._build_dir(
+                        db, top, excl, abspath, rap, rei, reh, seen, dev, xvol
+                    )
                 except:
                     t = "failed to index subdir [{}]:\n{}"
                     self.log(t.format(abspath, min_ex()), c=1)
@@ -765,19 +818,53 @@ class Up2k(object):
                 self.log("skip type-{:x} file [{}]".format(inf.st_mode, abspath))
             else:
                 # self.log("file: {}".format(abspath))
-                seen_files[iname] = 1
                 if rp.endswith(".PARTIAL") and time.time() - lmod < 60:
                     # rescan during upload
                     continue

                 if not sz and (
-                    "{}.PARTIAL".format(iname) in inames
-                    or ".{}.PARTIAL".format(iname) in inames
+                    "{}.PARTIAL".format(iname) in partials
+                    or ".{}.PARTIAL".format(iname) in partials
                 ):
                     # placeholder for unfinished upload
                     continue

-                rd, fn = rp.rsplit("/", 1) if "/" in rp else ["", rp]
+                files.append((sz, lmod, iname))
+
+        # folder of 1000 files = ~1 MiB RAM best-case (tiny filenames);
+        # free up stuff we're done with before dhashing
+        gl = []
+        partials.clear()
+        if not self.args.no_dhash:
+            if len(files) < 9000:
+                zh = hashlib.sha1(str(files).encode("utf-8", "replace"))
+            else:
+                zh = hashlib.sha1()
+                _ = [zh.update(str(x).encode("utf-8", "replace")) for x in files]
+
+            dhash = base64.urlsafe_b64encode(zh.digest()[:12]).decode("ascii")
+            sql = "select d from dh where d = ? and h = ?"
+            try:
+                c = db.c.execute(sql, (rd, dhash))
+                drd = rd
+            except:
+                drd = "//" + w8b64enc(rd)
+                c = db.c.execute(sql, (drd, dhash))
+
+            if c.fetchone():
+                return ret
+
+        seen_files = set([x[2] for x in files])  # for dropcheck
+        for sz, lmod, fn in files:
+            if self.stop:
+                return -1
+
+            rp = vjoin(rd, fn)
+            abspath = os.path.join(cdir, fn)
+            nohash = reh.search(abspath) if reh else False
+
+            if fn:  # diff-golf

                 sql = "select w, mt, sz from up where rd = ? and fn = ?"
                 try:
                     c = db.c.execute(sql, (rd, fn))
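The new `dh` table gives each folder a digest of its `(size, mtime, name)` listing, so on rescan an unchanged folder is skipped before any per-file `up`-table queries. A condensed sketch of that skip check, assuming a `dh(d text, h text)` table like the one `_add_dhash_tab` creates further down:

```python
import base64
import hashlib
import os

def dir_digest(cdir: str) -> str:
    # same recipe as the diff: sha1 over the stringified file list
    files = []
    for de in sorted(os.scandir(cdir), key=lambda d: d.name):
        if de.is_file():
            st = de.stat()
            files.append((st.st_size, int(st.st_mtime), de.name))
    zh = hashlib.sha1(str(files).encode("utf-8", "replace"))
    return base64.urlsafe_b64encode(zh.digest()[:12]).decode("ascii")

def unchanged(cur, rd: str, cdir: str) -> bool:
    q = "select d from dh where d = ? and h = ?"
    return cur.execute(q, (rd, dir_digest(cdir))).fetchone() is not None
```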
@@ -794,7 +881,7 @@ class Up2k(object):
                     self.log(t.format(top, rp, len(in_db), rep_db))
                     dts = -1

-                if dts == lmod and dsz == sz and (nohash or dw[0] != "#"):
+                if dts == lmod and dsz == sz and (nohash or dw[0] != "#" or not sz):
                     continue

                 t = "reindex [{}] => [{}] ({}/{}) ({}/{})".format(
@@ -808,7 +895,7 @@ class Up2k(object):

             self.pp.msg = "a{} {}".format(self.pp.n, abspath)

-            if nohash:
+            if nohash or not sz:
                 wark = up2k_wark_from_metadata(self.salt, sz, lmod, rd, fn)
             else:
                 if sz > 1024 * 1024:
@@ -822,6 +909,9 @@ class Up2k(object):
                     self.log("hash: {} @ [{}]".format(repr(ex), abspath))
                     continue

+                if not hashes:
+                    return -1
+
                 wark = up2k_wark_from_hashlist(self.salt, sz, hashes)

             self.db_add(db.c, wark, rd, fn, lmod, sz, "", 0)
@@ -834,6 +924,10 @@ class Up2k(object):
                 db.n = 0
                 db.t = time.time()

+        if not self.args.no_dhash:
+            db.c.execute("delete from dh where d = ?", (drd,))
+            db.c.execute("insert into dh values (?,?)", (drd, dhash))
+
         if self.stop:
             return -1

@@ -852,15 +946,14 @@ class Up2k(object):
             t = "forgetting {} shadowed autoindexed files in [{}] > [{}]"
             self.log(t.format(n, top, rd))

+            q = "delete from dh where (d = ? or d like ?||'%')"
+            db.c.execute(q, (erd, erd + "/"))
+
             q = "delete from up where (rd = ? or rd like ?||'%') and at == 0"
             db.c.execute(q, (erd, erd + "/"))
             ret += n

         # drop missing files
-        rd = cdir[len(top) + 1 :].strip("/")
-        if WINDOWS:
-            rd = rd.replace("\\", "/").strip("/")
-
         q = "select fn from up where rd = ?"
         try:
             c = db.c.execute(q, (rd,))
@@ -911,6 +1004,7 @@ class Up2k(object):

         self.log("forgetting {} deleted dirs, {} files".format(len(rm), n_rm))
         for rd in rm:
+            cur.execute("delete from dh where d = ?", (rd,))
             cur.execute("delete from up where rd = ?", (rd,))

         # then shadowed deleted files
@@ -1023,7 +1117,7 @@ class Up2k(object):
             sz2 = st.st_size
             mt2 = int(st.st_mtime)

-            if nohash:
+            if nohash or not sz2:
                 w2 = up2k_wark_from_metadata(self.salt, sz2, mt2, rd, fn)
             else:
                 if sz2 > 1024 * 1024 * 32:
@@ -1035,6 +1129,9 @@ class Up2k(object):
                     self.log("hash: {} @ [{}]".format(repr(ex), abspath))
                     continue

+                if not hashes:
+                    return -1
+
                 w2 = up2k_wark_from_hashlist(self.salt, sz2, hashes)

             if w == w2:
@@ -1069,10 +1166,43 @@ class Up2k(object):
         reg = self.register_vpath(ptop, vol.flags)

         assert reg and self.pp
+        cur = self.cur[ptop]
+
+        if not self.args.no_dhash:
+            with self.mutex:
+                c = cur.execute("select k from kv where k = 'tagscan'")
+                if not c.fetchone():
+                    return 0, 0, bool(self.mtag)
+
+        ret = self._build_tags_index_2(ptop)
+
+        with self.mutex:
+            self._set_tagscan(cur, False)
+            cur.connection.commit()
+
+        return ret
+
+    def _set_tagscan(self, cur: "sqlite3.Cursor", need: bool) -> bool:
+        if self.args.no_dhash:
+            return False
+
+        c = cur.execute("select k from kv where k = 'tagscan'")
+        if bool(c.fetchone()) == need:
+            return False
+
+        if need:
+            cur.execute("insert into kv values ('tagscan',1)")
+        else:
+            cur.execute("delete from kv where k = 'tagscan'")
+
+        return True
+
+    def _build_tags_index_2(self, ptop: str) -> tuple[int, int, bool]:
         entags = self.entags[ptop]
         flags = self.flags[ptop]
         cur = self.cur[ptop]

+        n_add = 0
         n_rm = 0
         if "e2tsr" in flags:
             with self.mutex:
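`tagscan` is a single marker row in the per-volume `kv` table: file-indexing sets it whenever rows were added or removed, and the tag-scanner clears it once tags are up to date, letting an idle rescan skip the expensive tag pass entirely. The state machine in miniature, assuming the same `kv(k, v)` table:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("create table kv (k text, v int)")

def set_tagscan(cur, need: bool) -> bool:
    # returns True if the flag actually changed
    have = bool(cur.execute("select k from kv where k = 'tagscan'").fetchone())
    if have == need:
        return False
    if need:
        cur.execute("insert into kv values ('tagscan',1)")
    else:
        cur.execute("delete from kv where k = 'tagscan'")
    return True

set_tagscan(db, True)   # file-index found changes; tags are stale
set_tagscan(db, False)  # tag-scan finished; volume is clean again
```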
@@ -1102,7 +1232,7 @@ class Up2k(object):
|
|||||||
with self.mutex:
|
with self.mutex:
|
||||||
cur.connection.commit()
|
cur.connection.commit()
|
||||||
|
|
||||||
# bail if a volume flag disables indexing
|
# bail if a volflag disables indexing
|
||||||
if "d2t" in flags or "d2d" in flags:
|
if "d2t" in flags or "d2d" in flags:
|
||||||
return 0, n_rm, True
|
return 0, n_rm, True
|
||||||
|
|
||||||
@@ -1156,9 +1286,10 @@ class Up2k(object):
|
|||||||
|
|
||||||
w = bw[:-1].decode("ascii")
|
w = bw[:-1].decode("ascii")
|
||||||
|
|
||||||
q = "select rd, fn from up where w = ?"
|
with self.mutex:
|
||||||
try:
|
try:
|
||||||
rd, fn = cur.execute(q, (w,)).fetchone()
|
q = "select rd, fn from up where substr(w,1,16)=? and +w=?"
|
||||||
|
rd, fn = cur.execute(q, (w[:16], w)).fetchone()
|
||||||
except:
|
except:
|
||||||
# file modified/deleted since spooling
|
# file modified/deleted since spooling
|
||||||
continue
|
continue
|
||||||
@@ -1168,7 +1299,6 @@ class Up2k(object):
 
         if "mtp" in flags:
             q = "insert into mt values (?,'t:mtp','a')"
-            with self.mutex:
-                cur.execute(q, (w[:16],))
+            cur.execute(q, (w[:16],))
 
         abspath = os.path.join(ptop, rd, fn)

@@ -1182,6 +1312,7 @@ class Up2k(object):
 
                 n_add += n_tags
                 n_buf += n_tags
+                nq -= 1
 
                 td = time.time() - last_write
                 if n_buf >= 4096 or td >= max(1, self.timeout - 1):

@@ -1232,7 +1363,11 @@ class Up2k(object):
         return tf, n
 
     def _unspool(self, tf: tempfile.SpooledTemporaryFile[bytes]) -> None:
-        self.spools.remove(tf)
+        try:
+            self.spools.remove(tf)
+        except:
+            return
 
         try:
             tf.close()
         except Exception as ex:

@@ -1263,6 +1398,9 @@ class Up2k(object):
             if "OFFLINE" not in self.volstate[k]:
                 self.volstate[k] = "online, idle"
 
+        if self.args.exit == "idx":
+            self.hub.sigterm()
+
     def _run_one_mtp(self, ptop: str, gid: int) -> None:
         if gid != self.gid:
             return

@@ -1555,6 +1693,7 @@ class Up2k(object):
             write_cur.execute(q, (wark[:16], k, v))
             ret += 1
 
+        self._set_tagscan(write_cur, True)
         return ret
 
     def _orz(self, db_path: str) -> "sqlite3.Cursor":

@@ -1578,6 +1717,11 @@ class Up2k(object):
                 self.log("WARN: failed to upgrade from v4", 3)
 
         if ver == DB_VER:
+            try:
+                self._add_dhash_tab(cur)
+            except:
+                pass
+
             try:
                 nfiles = next(cur.execute("select count(w) from up"))[0]
                 self.log("OK: {} |{}|".format(db_path, nfiles))

@@ -1666,7 +1810,7 @@ class Up2k(object):
         ]:
             cur.execute(cmd)
 
-        cur.connection.commit()
+        self._add_dhash_tab(cur)
         self.log("created DB at {}".format(db_path))
         return cur
 

@@ -1681,6 +1825,17 @@ class Up2k(object):
 
         cur.connection.commit()
 
+    def _add_dhash_tab(self, cur: "sqlite3.Cursor") -> None:
+        # v5 -> v5a
+        for cmd in [
+            r"create table dh (d text, h text)",
+            r"create index dh_d on dh(d)",
+            r"insert into kv values ('tagscan',1)",
+        ]:
+            cur.execute(cmd)
+
+        cur.connection.commit()
+
     def _job_volchk(self, cj: dict[str, Any]) -> None:
         if not self.register_vpath(cj["ptop"], cj["vcfg"]):
            if cj["ptop"] not in self.registry:
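The `dh` table created above is the directory-hash cache behind the bootup speedup described in the release notes at the end of this comparison: each directory gets one row mapping its path to a hash of its file listing, so a rescan only has to descend into directories whose listing actually changed. This excerpt does not show how the hash is produced or consulted, so the following is a hedged sketch of the idea rather than copyparty's implementation; `dir_hash` and `need_rescan` are hypothetical helpers:

```python
import hashlib
import os

def dir_hash(abspath: str) -> str:
    # hash the listing (names, sizes, mtimes); cheap compared to file contents
    h = hashlib.sha1()
    for de in sorted(os.scandir(abspath), key=lambda x: x.name):
        st = de.stat(follow_symlinks=False)
        h.update("{}\n{}\n{}\n".format(de.name, st.st_size, st.st_mtime).encode("utf-8"))
    return h.hexdigest()

def need_rescan(cur, rd: str, abspath: str) -> bool:
    dh = dir_hash(abspath)
    row = cur.execute("select h from dh where d=?", (rd,)).fetchone()
    if row and row[0] == dh:
        return False  # listing unchanged since last scan; skip this directory
    cur.execute("delete from dh where d=?", (rd,))
    cur.execute("insert into dh values (?,?)", (rd, dh))
    return True
```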
@@ -1870,7 +2025,11 @@ class Up2k(object):
                     job["need"].append(k)
                     lut[k] = 1
 
-            self._new_upload(job)
+            try:
+                self._new_upload(job)
+            except:
+                self.registry[job["ptop"]].pop(job["wark"], None)
+                raise
 
         purl = "{}/{}".format(job["vtop"], job["prel"]).strip("/")
         purl = "/{}/".format(purl) if purl else "/"

@@ -2577,11 +2736,21 @@ class Up2k(object):
         fsz = bos.path.getsize(path)
         csz = up2k_chunksize(fsz)
         ret = []
+        suffix = " MB, {}".format(path)
         with open(fsenc(path), "rb", 512 * 1024) as f:
+            if self.mth and fsz >= 1024 * 512:
+                tlt = self.mth.hash(f, fsz, csz, self.pp, prefix, suffix)
+                ret = [x[0] for x in tlt]
+                fsz = 0
+
             while fsz > 0:
+                # same as `hash_at` except for `imutex` / bufsz
+                if self.stop:
+                    return []
+
                 if self.pp:
                     mb = int(fsz / 1024 / 1024)
-                    self.pp.msg = "{}{} MB, {}".format(prefix, mb, path)
+                    self.pp.msg = prefix + str(mb) + suffix
 
                 hashobj = hashlib.sha512()
                 rem = min(csz, fsz)

@@ -2822,8 +2991,17 @@ class Up2k(object):
         abspath = os.path.join(ptop, rd, fn)
         self.log("hashing " + abspath)
         inf = bos.stat(abspath)
-        hashes = self._hashlist_from_file(abspath)
-        wark = up2k_wark_from_hashlist(self.salt, inf.st_size, hashes)
+        if not inf.st_size:
+            wark = up2k_wark_from_metadata(
+                self.salt, inf.st_size, int(inf.st_mtime), rd, fn
+            )
+        else:
+            hashes = self._hashlist_from_file(abspath)
+            if not hashes:
+                return
+
+            wark = up2k_wark_from_hashlist(self.salt, inf.st_size, hashes)
 
         with self.mutex:
             self.idx_wark(ptop, wark, rd, fn, inf.st_mtime, inf.st_size, ip, at)
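Zero-byte files are special-cased above because every empty file has the same content digest, so a content-derived wark (the sha512-based file identity) would collide across all of them; deriving it from salt, path, size, and mtime keeps warks unique, which is also what stops empty files from being reindexed on every startup (see the release notes below). `up2k_wark_from_metadata` itself is not shown in this excerpt; a hedged sketch of the general shape, where the exact field order and formatting are assumptions:

```python
import base64
import hashlib

def wark_from_metadata(salt: str, sz: int, lastmod: int, rd: str, fn: str) -> str:
    # illustrative only -- copyparty's actual format may differ
    ident = "\n".join([salt, str(lastmod), str(sz), rd, fn]).encode("utf-8", "replace")
    dig = hashlib.sha512(ident).digest()[:33]
    return base64.urlsafe_b64encode(dig).decode("ascii")
```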
@@ -2839,6 +3017,9 @@ class Up2k(object):
     def shutdown(self) -> None:
         self.stop = True
 
+        if self.mth:
+            self.mth.stop = True
+
         for x in list(self.spools):
             self._unspool(x)
 
@@ -4,6 +4,7 @@ from __future__ import print_function, unicode_literals
 import base64
 import contextlib
 import hashlib
+import math
 import mimetypes
 import os
 import platform

@@ -21,7 +22,10 @@ import traceback
 from collections import Counter
 from datetime import datetime
 
+from queue import Queue
+
 from .__init__ import ANYWIN, PY2, TYPE_CHECKING, VT100, WINDOWS
+from .__version__ import S_BUILD_DT, S_VERSION
 from .stolen import surrogateescape
 
 try:

@@ -46,7 +50,7 @@ try:
     from collections.abc import Callable, Iterable
 
     import typing
-    from typing import Any, Generator, Optional, Protocol, Union
+    from typing import Any, Generator, Optional, Pattern, Protocol, Union
 
     class RootLogger(Protocol):
         def __call__(self, src: str, msg: str, c: Union[int, str] = 0) -> None:

@@ -83,8 +87,6 @@ else:
     from urllib import quote  # pylint: disable=no-name-in-module
     from urllib import unquote  # pylint: disable=no-name-in-module
 
-_: Any = (mp, BytesIO, quote, unquote)
-__all__ = ["mp", "BytesIO", "quote", "unquote"]
 
 try:
     struct.unpack(b">i", b"idgi")

@@ -213,6 +215,72 @@ REKOBO_KEY = {
 REKOBO_LKEY = {k.lower(): v for k, v in REKOBO_KEY.items()}
 
 
+def py_desc() -> str:
+    interp = platform.python_implementation()
+    py_ver = ".".join([str(x) for x in sys.version_info])
+    ofs = py_ver.find(".final.")
+    if ofs > 0:
+        py_ver = py_ver[:ofs]
+
+    try:
+        bitness = struct.calcsize(b"P") * 8
+    except:
+        bitness = struct.calcsize("P") * 8
+
+    host_os = platform.system()
+    compiler = platform.python_compiler()
+
+    m = re.search(r"([0-9]+\.[0-9\.]+)", platform.version())
+    os_ver = m.group(1) if m else ""
+
+    return "{:>9} v{} on {}{} {} [{}]".format(
+        interp, py_ver, host_os, bitness, os_ver, compiler
+    )
+
+
+def _sqlite_ver() -> str:
+    try:
+        co = sqlite3.connect(":memory:")
+        cur = co.cursor()
+        try:
+            vs = cur.execute("select * from pragma_compile_options").fetchall()
+        except:
+            vs = cur.execute("pragma compile_options").fetchall()
+
+        v = next(x[0].split("=")[1] for x in vs if x[0].startswith("THREADSAFE="))
+        cur.close()
+        co.close()
+    except:
+        v = "W"
+
+    return "{}*{}".format(sqlite3.sqlite_version, v)
+
+
+try:
+    SQLITE_VER = _sqlite_ver()
+except:
+    SQLITE_VER = "(None)"
+
+try:
+    from jinja2 import __version__ as JINJA_VER
+except:
+    JINJA_VER = "(None)"
+
+try:
+    from pyftpdlib.__init__ import __ver__ as PYFTPD_VER
+except:
+    PYFTPD_VER = "(None)"
+
+
+VERSIONS = "copyparty v{} ({})\n{}\n sqlite v{} | jinja v{} | pyftpd v{}".format(
+    S_VERSION, S_BUILD_DT, py_desc(), SQLITE_VER, JINJA_VER, PYFTPD_VER
+)
+
+
+_: Any = (mp, BytesIO, quote, unquote, SQLITE_VER, JINJA_VER, PYFTPD_VER)
+__all__ = ["mp", "BytesIO", "quote", "unquote", "SQLITE_VER", "JINJA_VER", "PYFTPD_VER"]
+
+
 class Cooldown(object):
     def __init__(self, maxage: float) -> None:
         self.maxage = maxage
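The version probing above feeds the startup banner; the `*{n}` suffix on the sqlite version is its THREADSAFE compile option (0, 1, or 2), which determines whether connections may be shared across threads. Assuming a copyparty checkout (or installed package) on the import path, the combined string can be inspected directly:

```python
# prints the interpreter, sqlite (with its THREADSAFE compile option),
# jinja and pyftpdlib versions; assumes copyparty is importable
from copyparty.util import VERSIONS
print(VERSIONS)
```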
@@ -429,6 +497,104 @@ class ProgressPrinter(threading.Thread):
             sys.stdout.flush()  # necessary on win10 even w/ stderr btw
 
 
+class MTHash(object):
+    def __init__(self, cores: int):
+        self.pp: Optional[ProgressPrinter] = None
+        self.f: Optional[typing.BinaryIO] = None
+        self.sz = 0
+        self.csz = 0
+        self.stop = False
+        self.omutex = threading.Lock()
+        self.imutex = threading.Lock()
+        self.work_q: Queue[int] = Queue()
+        self.done_q: Queue[tuple[int, str, int, int]] = Queue()
+        self.thrs = []
+        for n in range(cores):
+            t = threading.Thread(target=self.worker, name="mth-" + str(n))
+            t.daemon = True
+            t.start()
+            self.thrs.append(t)
+
+    def hash(
+        self,
+        f: typing.BinaryIO,
+        fsz: int,
+        chunksz: int,
+        pp: Optional[ProgressPrinter] = None,
+        prefix: str = "",
+        suffix: str = "",
+    ) -> list[tuple[str, int, int]]:
+        with self.omutex:
+            self.f = f
+            self.sz = fsz
+            self.csz = chunksz
+
+            chunks: dict[int, tuple[str, int, int]] = {}
+            nchunks = int(math.ceil(fsz / chunksz))
+            for nch in range(nchunks):
+                self.work_q.put(nch)
+
+            ex = ""
+            for nch in range(nchunks):
+                qe = self.done_q.get()
+                try:
+                    nch, dig, ofs, csz = qe
+                    chunks[nch] = (dig, ofs, csz)
+                except:
+                    ex = ex or str(qe)
+
+                if pp:
+                    mb = int((fsz - nch * chunksz) / 1024 / 1024)
+                    pp.msg = prefix + str(mb) + suffix
+
+            if ex:
+                raise Exception(ex)
+
+            ret = []
+            for n in range(nchunks):
+                ret.append(chunks[n])
+
+            self.f = None
+            self.csz = 0
+            self.sz = 0
+            return ret
+
+    def worker(self) -> None:
+        while True:
+            ofs = self.work_q.get()
+            try:
+                v = self.hash_at(ofs)
+            except Exception as ex:
+                v = str(ex)  # type: ignore
+
+            self.done_q.put(v)
+
+    def hash_at(self, nch: int) -> tuple[int, str, int, int]:
+        f = self.f
+        ofs = ofs0 = nch * self.csz
+        chunk_sz = chunk_rem = min(self.csz, self.sz - ofs)
+        if self.stop:
+            return nch, "", ofs0, chunk_sz
+
+        assert f
+        hashobj = hashlib.sha512()
+        while chunk_rem > 0:
+            with self.imutex:
+                f.seek(ofs)
+                buf = f.read(min(chunk_rem, 1024 * 1024 * 12))
+
+            if not buf:
+                raise Exception("EOF at " + str(ofs))
+
+            hashobj.update(buf)
+            chunk_rem -= len(buf)
+            ofs += len(buf)
+
+        bdig = hashobj.digest()[:33]
+        udig = base64.urlsafe_b64encode(bdig).decode("utf-8")
+        return nch, udig, ofs0, chunk_sz
+
+
 def uprint(msg: str) -> None:
     try:
         print(msg, end="")
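MTHash is the server-side half of the multithreaded hashing: workers pull chunk numbers off `work_q`, and `imutex` serializes each seek+read pair so the pool can share one file handle safely while the sha512 work itself runs in parallel. The `_hashlist_from_file` hunk further up shows the real call site (`self.mth.hash(f, fsz, csz, ...)`); here is a self-contained usage sketch, assuming the class above is in scope and using an arbitrary 1 MiB chunk size instead of `up2k_chunksize`:

```python
import os
import tempfile

mth = MTHash(os.cpu_count() or 4)

with tempfile.TemporaryFile() as f:
    f.write(os.urandom(3 * 1024 * 1024 + 123))
    fsz = f.tell()
    csz = 1024 * 1024  # copyparty derives this from up2k_chunksize(fsz)
    for udig, ofs, sz in mth.hash(f, fsz, csz):  # 4 chunks, in order
        print("{:>8} {:>8} {}".format(ofs, sz, udig))
```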
@@ -932,6 +1098,24 @@ def gen_filekey(salt: str, fspath: str, fsize: int, inode: int) -> str:
     ).decode("ascii")
 
 
+def gen_filekey_dbg(
+    salt: str,
+    fspath: str,
+    fsize: int,
+    inode: int,
+    log: "NamedLogger",
+    log_ptn: Optional[Pattern[str]],
+) -> str:
+    ret = gen_filekey(salt, fspath, fsize, inode)
+
+    assert log_ptn
+    if log_ptn.search(fspath):
+        t = "fk({}) salt({}) size({}) inode({}) fspath({})"
+        log(t.format(ret[:8], salt, fsize, inode, fspath))
+
+    return ret
+
+
 def gencookie(k: str, v: str, dur: Optional[int]) -> str:
     v = v.replace(";", "")
     if dur:

@@ -1143,6 +1327,10 @@ def vsplit(vpath: str) -> tuple[str, str]:
     return vpath.rsplit("/", 1)  # type: ignore
 
 
+def vjoin(rd: str, fn: str) -> str:
+    return rd + "/" + fn if rd else fn
+
+
 def w8dec(txt: bytes) -> str:
     """decodes filesystem-bytes to wtf8"""
     if PY2:

@@ -1206,7 +1394,7 @@ def db_ex_chk(log: "NamedLogger", ex: Exception, db_path: str) -> bool:
     if str(ex) != "database is locked":
         return False
 
-    thr = threading.Thread(target=lsof, args=(log, db_path))
+    thr = threading.Thread(target=lsof, args=(log, db_path), name="dbex")
     thr.daemon = True
     thr.start()
 

@@ -1741,29 +1929,6 @@ def gzip_orig_sz(fn: str) -> int:
     return sunpack(b"I", rv)[0]  # type: ignore
 
 
-def py_desc() -> str:
-    interp = platform.python_implementation()
-    py_ver = ".".join([str(x) for x in sys.version_info])
-    ofs = py_ver.find(".final.")
-    if ofs > 0:
-        py_ver = py_ver[:ofs]
-
-    try:
-        bitness = struct.calcsize(b"P") * 8
-    except:
-        bitness = struct.calcsize("P") * 8
-
-    host_os = platform.system()
-    compiler = platform.python_compiler()
-
-    m = re.search(r"([0-9]+\.[0-9\.]+)", platform.version())
-    os_ver = m.group(1) if m else ""
-
-    return "{:>9} v{} on {}{} {} [{}]".format(
-        interp, py_ver, host_os, bitness, os_ver, compiler
-    )
-
-
 def align_tab(lines: list[str]) -> list[str]:
     rows = []
     ncols = 0
@@ -699,18 +699,12 @@ window.baguetteBox = (function () {
            showOverlay(index);
            return true;
        }
-       if (index < 0) {
-           if (options.animation)
-               bounceAnimation('left');
-
-           return false;
-       }
-       if (index >= imagesElements.length) {
-           if (options.animation)
-               bounceAnimation('right');
-
-           return false;
-       }
+       if (index < 0)
+           return bounceAnimation('left');
+       if (index >= imagesElements.length)
+           return bounceAnimation('right');
 
        var v = vid();
        if (v) {

@@ -893,10 +887,11 @@ window.baguetteBox = (function () {
    }
 
    function bounceAnimation(direction) {
-       slider.className = 'bounce-from-' + direction;
+       slider.className = options.animation == 'slideIn' ? 'bounce-from-' + direction : 'eog';
        setTimeout(function () {
            slider.className = '';
-       }, 400);
+       }, 300);
+       return false;
    }
 
    function updateOffset() {
@@ -259,7 +259,7 @@ html.bz {
    --bg-d2: #34384e;
    --bg-d3: #34384e;
 
-   --row-alt: rgba(139, 150, 205, 0.06);
+   --row-alt: #181a27;
 
    --btn-bg: #202231;
    --btn-h-bg: #2d2f45;

@@ -309,7 +309,7 @@ html.c {
    --a-gray: #0ae;
 
    --tab-alt: #6ef;
-   --row-alt: rgba(180,0,255,0.3);
+   --row-alt: #47237d;
    --scroll: #ff0;
 
    --btn-fg: #fff;

@@ -504,7 +504,7 @@ html.dy {
    --a: #000;
    --a-b: #000;
    --a-hil: #000;
-   --a-gray: #000;
+   --a-gray: #bbb;
    --a-dark: #000;
 
    --btn-fg: #000;

@@ -544,6 +544,9 @@ html.dy {
 
    --tree-bg: #fff;
 
+   --g-sel-bg: #000;
+   --g-fsel-bg: #444;
+   --g-fsel-ts: #000;
    --g-fg: a;
    --g-bg: a;
    --g-b1: a;

@@ -707,6 +710,7 @@ html.y #files thead th {
 #files td {
    margin: 0;
    padding: .3em .5em;
+   background: var(--bg);
 }
 #files tr:nth-child(2n) td {
    background: var(--row-alt);

@@ -1595,9 +1599,6 @@ html.y #tree.nowrap .ntree a+a:hover {
    margin: .7em 0 .7em .5em;
    padding-left: .5em;
 }
-.opwide>div.fill {
-   display: block;
-}
 .opwide>div>div>a {
    line-height: 2em;
 }

@@ -1908,10 +1909,13 @@ html.y #bbox-overlay figcaption a {
    transition: left .2s ease, transform .2s ease;
 }
 .bounce-from-right {
-   animation: bounceFromRight .4s ease-out;
+   animation: bounceFromRight .3s ease-out;
 }
 .bounce-from-left {
-   animation: bounceFromLeft .4s ease-out;
+   animation: bounceFromLeft .3s ease-out;
+}
+.eog {
+   animation: eog .2s;
 }
 @keyframes bounceFromRight {
    0% {margin-left: 0}

@@ -1923,6 +1927,9 @@ html.y #bbox-overlay figcaption a {
    50% {margin-left: 30px}
    100% {margin-left: 0}
 }
+@keyframes eog {
+   0% {filter: brightness(1.5)}
+}
 #bbox-next,
 #bbox-prev {
    top: 50%;
@@ -11,6 +11,7 @@ var Ls = {
        "q": "quality / bitrate",
        "Ac": "audio codec",
        "Vc": "video codec",
+       "Fmt": "format / container",
        "Ahash": "audio checksum",
        "Vhash": "video checksum",
        "Res": "resolution",

@@ -106,6 +107,7 @@ var Ls = {
 
        "ct_thumb": "in icon view, toggle icons or thumbnails$NHotkey: T",
        "ct_dots": "show hidden files (if server permits)",
+       "ct_dir1st": "sort folders before files",
        "ct_readme": "show README.md in folder listings",
 
        "cut_turbo": "the yolo button, you probably DO NOT want to enable this:$N$Nuse this if you were uploading a huge amount of files and had to restart for some reason, and want to continue the upload ASAP$N$Nthis replaces the hash-check with a simple <em>&quot;does this have the same filesize on the server?&quot;</em> so if the file contents are different it will NOT be uploaded$N$Nyou should turn this off when the upload is done, and then &quot;upload&quot; the same files again to let the client verify them",

@@ -116,6 +118,8 @@ var Ls = {
 
        "cut_az": "upload files in alphabetical order, rather than smallest-file-first$N$Nalphabetical order can make it easier to eyeball if something went wrong on the server, but it makes uploading slightly slower on fiber / LAN",
 
+       "cut_mt": "use multithreading to accelerate file hashing$N$Nthis uses web-workers and requires$Nmore RAM (up to 512 MiB extra)$N$N30% faster https, 4.5x faster http,$Nand 5.3x faster on android phones",
+
        "cft_text": "favicon text (blank and refresh to disable)",
        "cft_fg": "foreground color",
        "cft_bg": "background color",

@@ -286,8 +290,9 @@ var Ls = {
 
        "u_https1": "you should",
        "u_https2": "switch to https",
-       "u_https3": "for much better performance",
+       "u_https3": "for better performance",
        "u_ancient": 'your browser is impressively ancient -- maybe you should <a href="#" onclick="goto(\'bup\')">use bup instead</a>',
+       "u_nowork": "need firefox 53+ or chrome 57+ or iOS 11+",
        "u_enpot": 'switch to <a href="#">potato UI</a> (may improve upload speed)',
        "u_depot": 'switch to <a href="#">fancy UI</a> (may reduce upload speed)',
        "u_gotpot": 'switching to the potato UI for improved upload speed,\n\nfeel free to disagree and switch back!',

@@ -308,6 +313,7 @@ var Ls = {
        "u_upping": 'uploading',
        "u_cuerr": "failed to upload chunk {0} of {1};\nprobably harmless, continuing\n\nfile: {2}",
        "u_cuerr2": "server rejected upload (chunk {0} of {1});\n\nfile: {2}\n\nerror ",
+       "u_ehstmp": "will retry; see bottom-right",
        "u_ehsfin": "server rejected the request to finalize upload",
        "u_ehssrch": "server rejected the request to perform search",
        "u_ehsinit": "server rejected the request to initiate upload",

@@ -343,6 +349,7 @@ var Ls = {
        "q": "kvalitet / bitrate",
        "Ac": "lyd-format",
        "Vc": "video-format",
+       "Fmt": "format / innpakning",
        "Ahash": "lyd-kontrollsum",
        "Vhash": "video-kontrollsum",
        "Res": "oppløsning",

@@ -438,6 +445,7 @@ var Ls = {
 
        "ct_thumb": "vis miniatyrbilder istedenfor ikoner$NSnarvei: T",
        "ct_dots": "vis skjulte filer (gitt at serveren tillater det)",
+       "ct_dir1st": "sorter slik at mapper kommer foran filer",
        "ct_readme": "vis README.md nedenfor filene",
 
        "cut_turbo": "forenklet befaring ved opplastning; bør sannsynlig <em>ikke</em> skrus på:$N$Nnyttig dersom du var midt i en svær opplastning som måtte restartes av en eller annen grunn, og du vil komme igang igjen så raskt som overhodet mulig.$N$Nnår denne er skrudd på så forenkles befaringen kraftig; istedenfor å utføre en trygg sjekk på om filene finnes på serveren i god stand, så sjekkes kun om <em>filstørrelsen</em> stemmer. Så dersom en korrupt fil skulle befinne seg på serveren allerede, på samme sted med samme størrelse og navn, så blir det <em>ikke oppdaget</em>.$N$Ndet anbefales å kun benytte denne funksjonen for å komme seg raskt igjennom selve opplastningen, for så å skru den av, og til slutt &quot;laste opp&quot; de samme filene én gang til -- slik at integriteten kan verifiseres",

@@ -448,6 +456,8 @@ var Ls = {
 
        "cut_az": "last opp filer i alfabetisk rekkefølge, istedenfor minste-fil-først$N$Nalfabetisk kan gjøre det lettere å anslå om alt gikk bra, men er bittelitt tregere på fiber / LAN",
 
+       "cut_mt": "raskere befaring ved å bruke hele CPU'en$N$Ndenne funksjonen anvender web-workers$Nog krever mer RAM (opptil 512 MiB ekstra)$N$N30% raskere https, 4.5x raskere http,$Nog 5.3x raskere på android-telefoner",
+
        "cft_text": "ikontekst (blank ut og last siden på nytt for å deaktivere)",
        "cft_fg": "farge",
        "cft_bg": "bakgrunnsfarge",

@@ -618,8 +628,9 @@ var Ls = {
 
        "u_https1": "du burde",
        "u_https2": "bytte til https",
-       "u_https3": "for mye høyere hastighet",
+       "u_https3": "for høyere hastighet",
        "u_ancient": 'nettleseren din er prehistorisk -- mulig du burde <a href="#" onclick="goto(\'bup\')">bruke bup istedenfor</a>',
+       "u_nowork": "krever firefox 53+, chrome 57+, eller iOS 11+",
        "u_enpot": 'bytt til <a href="#">enkelt UI</a> (gir sannsynlig raskere opplastning)',
        "u_depot": 'bytt til <a href="#">snæsent UI</a> (gir sannsynlig tregere opplastning)',
        "u_gotpot": 'byttet til et enklere UI for å laste opp raskere,\n\ndu kan gjerne bytte tilbake altså!',

@@ -640,6 +651,7 @@ var Ls = {
        "u_upping": 'sender',
        "u_cuerr": "kunne ikke laste opp del {0} av {1};\nsikkert harmløst, fortsetter\n\nfil: {2}",
        "u_cuerr2": "server nektet opplastningen (del {0} av {1});\n\nfile: {2}\n\nerror ",
+       "u_ehstmp": "prøver igjen; se mld nederst",
        "u_ehsfin": "server nektet forespørselen om å ferdigstille filen",
        "u_ehssrch": "server nektet forespørselen om å utføre søk",
        "u_ehsinit": "server nektet forespørselen om å begynne en ny opplastning",
@@ -820,6 +832,7 @@ ebi('op_cfg').innerHTML = (
    ' <a id="griden" class="tgl btn" href="#" tt="' + L.wt_grid + '">田 the grid</a>\n' +
    ' <a id="thumbs" class="tgl btn" href="#" tt="' + L.ct_thumb + '">🖼️ thumbs</a>\n' +
    ' <a id="dotfiles" class="tgl btn" href="#" tt="' + L.ct_dots + '">dotfiles</a>\n' +
+   ' <a id="dir1st" class="tgl btn" href="#" tt="' + L.ct_dir1st + '">📁 first</a>\n' +
    ' <a id="ireadme" class="tgl btn" href="#" tt="' + L.ct_readme + '">📜 readme</a>\n' +
    ' </div>\n' +
    '</div>\n' +

@@ -839,6 +852,7 @@ ebi('op_cfg').innerHTML = (
    '<div>\n' +
    ' <h3>' + L.cl_uopts + '</h3>\n' +
    ' <div>\n' +
+   ' <a id="hashw" class="tgl btn" href="#" tt="' + L.cut_mt + '">mt</a>\n' +
    ' <a id="u2turbo" class="tgl btn ttb" href="#" tt="' + L.cut_turbo + '">turbo</a>\n' +
    ' <a id="u2tdate" class="tgl btn ttb" href="#" tt="' + L.cut_datechk + '">date-chk</a>\n' +
    ' <a id="flag_en" class="tgl btn" href="#" tt="' + L.cut_flag + '">💤</a>\n' +

@@ -856,7 +870,7 @@ ebi('op_cfg').innerHTML = (
    ' </div>\n' +
    '</div>\n' +
    '<div><h3>' + L.cl_keytype + '</h3><div id="key_notation"></div></div>\n' +
-   '<div class="fill"><h3>' + L.cl_hiddenc + ' <a href="#" id="hcolsr">' + L.cl_reset + '</h3><div id="hcols"></div></div>'
+   '<div><h3>' + L.cl_hiddenc + ' <a href="#" id="hcolsr">' + L.cl_reset + '</h3><div id="hcols"></div></div>'
 );

@@ -909,7 +923,7 @@ function opclick(e) {
    goto(dest);
 
    var input = QS('.opview.act input:not([type="hidden"])')
-   if (input && !is_touch) {
+   if (input && !TOUCH) {
        tt.skip = true;
        input.focus();
    }

@@ -1679,7 +1693,7 @@ var vbar = (function () {
        if (e.button === 0)
            can.onmousemove = null;
    };
-   if (is_touch) {
+   if (TOUCH) {
        can.ontouchstart = mousedown;
        can.ontouchmove = mousemove;
    }

@@ -1784,7 +1798,7 @@ function playpause(e) {
        seek_au_mul(x * 1.0 / rect.width);
    };
 
-   if (!is_touch)
+   if (!TOUCH)
        bar.onwheel = function (e) {
            var dist = Math.sign(e.deltaY) * 10;
            if (Math.abs(e.deltaY) < 30 && !e.deltaMode)

@@ -1824,7 +1838,7 @@ var mpui = (function () {
        if (++nth > 69) {
            // android-chrome breaks aspect ratio with unannounced viewport changes
            nth = 0;
-           if (is_touch) {
+           if (MOBILE) {
                nth = 1;
                pbar.onresize();
                vbar.onresize();

@@ -2477,7 +2491,8 @@ function sortfiles(nodes) {
    if (!nodes.length)
        return nodes;
 
-   var sopts = jread('fsort', [["href", 1, ""]]);
+   var sopts = jread('fsort', [["href", 1, ""]]),
+       dir1st = sread('dir1st') !== '0';
 
    try {
        var is_srch = false;

@@ -2508,14 +2523,10 @@ function sortfiles(nodes) {
 
            if ((v + '').indexOf('<a ') === 0)
                v = v.split('>')[1];
-           else if (name == "href" && v) {
-               if (v.split('?')[0].slice(-1) == '/')
-                   v = '\t' + v;
-
+           else if (name == "href" && v)
                v = uricom_dec(v)[0];
-           }
 
-           nodes[b]._sv = v;
+           nodes[b]._sv = v
        }
    }

@@ -2544,6 +2555,13 @@ function sortfiles(nodes) {
            if (is_srch)
                delete nodes[b].ext;
        }
+       if (dir1st) {
+           var r1 = [], r2 = [];
+           for (var b = 0, bb = nodes.length; b < bb; b++)
+               (nodes[b].href.split('?')[0].slice(-1) == '/' ? r1 : r2).push(nodes[b]);
+
+           nodes = r1.concat(r2);
+       }
    }
    catch (ex) {
        console.log("failed to apply sort config: " + ex);
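The new folders-first pass is a stable partition: one linear sweep over the already-sorted rows splits directories (href ending in `/`) from files and concatenates the two groups, so the user's chosen sort order is preserved within each group. The same logic, transcribed to Python for clarity (the `nodes` shape is assumed to mirror the JS objects above):

```python
def dir1st(nodes):
    # stable partition: folders keep their relative order, files keep theirs
    r1, r2 = [], []
    for n in nodes:
        (r1 if n["href"].split("?")[0].endswith("/") else r2).append(n)
    return r1 + r2

print(dir1st([{"href": "b.txt"}, {"href": "a/"}, {"href": "c/?x"}]))
# [{'href': 'a/'}, {'href': 'c/?x'}, {'href': 'b.txt'}]
```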
@@ -4198,7 +4216,7 @@ document.onkeydown = function (e) {
            clearTimeout(defer_timeout);
            clearTimeout(search_timeout);
            search_timeout = setTimeout(do_search,
-               v && v.length < (is_touch ? 4 : 3) ? 1000 : 500);
+               v && v.length < (MOBILE ? 4 : 3) ? 1000 : 500);
        }
    }

@@ -4437,6 +4455,9 @@ var treectl = (function () {
    bcfg_bind(r, 'dots', 'dotfiles', false, function (v) {
        r.goto(get_evpath());
    });
+   bcfg_bind(r, 'dir1st', 'dir1st', true, function (v) {
+       treectl.gentab(get_evpath(), treectl.lsc);
+   });
    setwrap(bcfg_bind(r, 'wtree', 'wraptree', true, setwrap));
    setwrap(bcfg_bind(r, 'parpane', 'parpane', true, onscroll));
    bcfg_bind(r, 'htree', 'hovertree', false, reload_tree);

@@ -4633,9 +4654,9 @@ var treectl = (function () {
        return ta[a];
    };
 
-   r.goto = function (url, push) {
+   r.goto = function (url, push, back) {
        get_tree("", url, true);
-       r.reqls(url, push, true);
+       r.reqls(url, push, true, back);
    };
 
    function get_tree(top, dst, rst) {

@@ -4804,9 +4825,10 @@ var treectl = (function () {
            thegrid.setvis(true);
    }
 
-   r.reqls = function (url, hpush, no_tree) {
+   r.reqls = function (url, hpush, no_tree, back) {
        var xhr = new XHR();
        xhr.top = url;
+       xhr.back = back
        xhr.hpush = hpush;
        xhr.ts = Date.now();
        xhr.open('GET', xhr.top + '?ls' + (r.dots ? '&dots' : ''), true);

@@ -4874,6 +4896,12 @@ var treectl = (function () {
        if (res.readme)
            show_readme(res.readme);
 
+       if (this.hpush && !this.back) {
+           var ofs = ebi('wrap').offsetTop;
+           if (document.documentElement.scrollTop > ofs)
+               document.documentElement.scrollTop = ofs;
+       }
+
        wintitle();
        var fun = r.ls_cb;
        if (fun) {

@@ -4883,6 +4911,7 @@ var treectl = (function () {
    }
 
    r.gentab = function (top, res) {
+       r.lsc = res;
        var nodes = res.dirs.concat(res.files),
            html = mk_files_header(res.taglist),
            seen = {};

@@ -4895,7 +4924,6 @@ var treectl = (function () {
                bhref = tn.href.split('?')[0],
                fname = uricom_dec(bhref)[0],
                hname = esc(fname),
-               sortv = (bhref.slice(-1) == '/' ? '\t' : '') + hname,
                id = 'f-' + ('00000000' + crc32(fname)).slice(-8),
                lang = showfile.getlang(fname);
 

@@ -4910,8 +4938,8 @@ var treectl = (function () {
                tn.lead = '<a href="?doc=' + tn.href + '" class="doc' + (lang ? ' bri' : '') +
                    '" hl="' + id + '" name="' + hname + '">-txt-</a>';
 
-           var ln = ['<tr><td>' + tn.lead + '</td><td sortv="' + sortv +
-               '"><a href="' + top + tn.href + '" id="' + id + '">' + hname + '</a>', tn.sz];
+           var ln = ['<tr><td>' + tn.lead + '</td><td><a href="' +
+               top + tn.href + '" id="' + id + '">' + hname + '</a>', tn.sz];
 
            for (var b = 0; b < res.taglist.length; b++) {
                var k = res.taglist[b],

@@ -5049,7 +5077,7 @@ var treectl = (function () {
        if (url.search.indexOf('doc=') + 1 && hbase == cbase)
            return showfile.show(hbase + showfile.sname(url.search), true);
 
-       r.goto(url.pathname);
+       r.goto(url.pathname, false, true);
    };
 
    hist_replace(get_evpath() + window.location.hash);
@@ -16,6 +16,7 @@ function goto_up2k() {
 // usually it's undefined but some chromes throw on invoke
 var up2k = null,
    up2k_hooks = [],
+   hws = [],
    sha_js = window.WebAssembly ? 'hw' : 'ac', // ff53,c57,sa11
    m = 'will use ' + sha_js + ' instead of native sha512 due to';

@@ -718,6 +719,13 @@ function up2k_init(subtle) {
        "gotallfiles": [gotallfiles] // hooks
    };
 
+   if (window.WebAssembly) {
+       for (var a = 0; a < Math.min(navigator.hardwareConcurrency || 4, 16); a++)
+           hws.push(new Worker('/.cpr/w.hash.js'));
+
+       console.log(hws.length + " hashers ready");
+   }
+
    function showmodal(msg) {
        ebi('u2notbtn').innerHTML = msg;
        ebi('u2btn').style.display = 'none';

@@ -747,7 +755,7 @@ function up2k_init(subtle) {
        showmodal('<h1>loading ' + fn + '</h1>');
        import_js('/.cpr/deps/' + fn, unmodal);
 
-   if (is_https) {
+   if (HTTPS) {
        // chrome<37 firefox<34 edge<12 opera<24 safari<7
        m = L.u_ancient;
        setmsg('');

@@ -790,7 +798,6 @@ function up2k_init(subtle) {
    var parallel_uploads = icfg_get('nthread'),
        uc = {},
        fdom_ctr = 0,
-       min_filebuf = 0,
        biggest_file = 0;
 
    bcfg_bind(uc, 'multitask', 'multitask', true, null, false);

@@ -801,6 +808,7 @@ function up2k_init(subtle) {
    bcfg_bind(uc, 'turbo', 'u2turbo', turbolvl > 1, draw_turbo, false);
    bcfg_bind(uc, 'datechk', 'u2tdate', turbolvl < 3, null, false);
    bcfg_bind(uc, 'az', 'u2sort', u2sort.indexOf('n') + 1, set_u2sort, false);
+   bcfg_bind(uc, 'hashw', 'hashw', !!window.WebAssembly, set_hashw, false);
 
    var st = {
        "files": [],

@@ -838,6 +846,7 @@ function up2k_init(subtle) {
            "t": ""
        },
        "car": 0,
+       "slow_io": null,
        "modn": 0,
        "modv": 0,
        "mod0": null

@@ -1288,8 +1297,13 @@ function up2k_init(subtle) {
 
        if (!nhash) {
            var h = L.u_etadone.format(humansize(st.bytes.hashed), pvis.ctr.ok + pvis.ctr.ng);
-           if (st.eta.h !== h)
+           if (st.eta.h !== h) {
                st.eta.h = ebi('u2etah').innerHTML = h;
+               console.log('{0} hash, {1} up, {2} busy'.format(
+                   f2f(st.time.hashing, 1),
+                   f2f(st.time.uploading, 1),
+                   f2f(st.time.busy, 1)));
+           }
        }
 
        if (!nsend && !nhash) {

@@ -1665,6 +1679,7 @@ function up2k_init(subtle) {
        var t = st.todo.hash.shift();
        st.busy.hash.push(t);
        st.nfile.hash = t.n;
+       t.t_hashing = Date.now();
 
        var bpend = 0,
            nchunk = 0,

@@ -1675,30 +1690,23 @@ function up2k_init(subtle) {
        pvis.setab(t.n, nchunks);
        pvis.move(t.n, 'bz');
 
+       if (nchunks > 1 && hws.length && uc.hashw)
+           return wexec_hash(t, chunksize, nchunks);
+
        var segm_next = function () {
-           if (nchunk >= nchunks || (bpend > chunksize && bpend >= min_filebuf))
+           if (nchunk >= nchunks || bpend)
                return false;
 
            var reader = new FileReader(),
                nch = nchunk++,
                car = nch * chunksize,
-               cdr = car + chunksize,
-               t0 = Date.now();
-
-           if (cdr >= t.size)
-               cdr = t.size;
-
-           bpend += cdr - car;
+               cdr = Math.min(chunksize + car, t.size);
+
            st.bytes.hashed += cdr - car;
 
            function orz(e) {
-               if (!min_filebuf && nch == 1) {
-                   min_filebuf = 1;
-                   var td = Date.now() - t0;
-                   if (td > 50) {
-                       min_filebuf = 32 * 1024 * 1024;
-                   }
-               }
+               bpend--;
+               segm_next();
                hash_calc(nch, e.target.result);
            }
            reader.onload = function (e) {

@@ -1726,6 +1734,7 @@ function up2k_init(subtle) {
 
                toast.err(0, 'y o u b r o k e i t\nfile: ' + esc(t.name + '') + '\nerror: ' + err);
            };
+           bpend++;
            reader.readAsArrayBuffer(
                bobslice.call(t.fobj, car, cdr));
 

@@ -1733,8 +1742,6 @@ function up2k_init(subtle) {
        };
 
        var hash_calc = function (nch, buf) {
-           while (segm_next());
-
            var orz = function (hashbuf) {
                var hslice = new Uint8Array(hashbuf).subarray(0, 33),
                    b64str = buf2b64(hslice);

@@ -1742,15 +1749,12 @@ function up2k_init(subtle) {
                hashtab[nch] = b64str;
                t.hash.push(nch);
                pvis.hashed(t);
-               bpend -= buf.byteLength;
-               if (t.hash.length < nchunks) {
+               if (t.hash.length < nchunks)
                    return segm_next();
-               }
 
                t.hash = [];
-               for (var a = 0; a < nchunks; a++) {
+               for (var a = 0; a < nchunks; a++)
                    t.hash.push(hashtab[a]);
-               }
 
                t.t_hashed = Date.now();
 
@@ -1782,11 +1786,117 @@ function up2k_init(subtle) {
            }
        }, 1);
    };
 
-       t.t_hashing = Date.now();
        segm_next();
    }
 
+   function wexec_hash(t, chunksize, nchunks) {
+       var nchunk = 0,
+           reading = 0,
+           max_readers = 1,
+           opt_readers = 2,
+           free = [],
+           busy = {},
+           nbusy = 0,
+           hashtab = {},
+           mem = (MOBILE ? 128 : 256) * 1024 * 1024;
+
+       for (var a = 0; a < hws.length; a++) {
+           var w = hws[a];
+           free.push(w);
+           w.onmessage = onmsg;
+           mem -= chunksize;
+           if (mem <= 0)
+               break;
+       }
+
+       function go_next() {
+           if (st.slow_io && uc.multitask)
+               // android-chrome filereader latency is ridiculous but scales linearly
+               // (unlike every other platform which instead suffers on parallel reads...)
+               max_readers = opt_readers = free.length;
+
+           if (reading >= max_readers || !free.length || nchunk >= nchunks)
+               return;
+
+           var w = free.pop(),
+               car = nchunk * chunksize,
+               cdr = Math.min(chunksize + car, t.size);
+
+           //console.log('[P ] %d read bgin (%d reading, %d busy)', nchunk, reading + 1, nbusy + 1);
+           w.postMessage([nchunk, t.fobj, car, cdr]);
+           busy[nchunk] = w;
+           nbusy++;
+           reading++;
+           nchunk++;
+       }
+
+       function onmsg(d) {
+           d = d.data;
+           var k = d[0];
+
+           if (k == "panic")
+               return vis_exh(d[1], 'up2k.js', '', '', d[1]);
+
+           if (k == "fail") {
+               pvis.seth(t.n, 1, d[1]);
+               pvis.seth(t.n, 2, d[2]);
+               console.log(d[1], d[2]);
+
+               pvis.move(t.n, 'ng');
+               apop(st.busy.hash, t);
+               st.bytes.finished += t.size;
+               return;
+           }
+
+           if (k == "ferr")
+               return toast.err(0, 'y o u b r o k e i t\nfile: ' + esc(t.name + '') + '\nerror: ' + d[1]);
+
+           if (k == "read") {
+               reading--;
+               if (MOBILE && CHROME && st.slow_io === null && d[1] == 1 && d[2] > 1024 * 512) {
+                   var spd = Math.floor(d[2] / d[3]);
+                   st.slow_io = spd < 40 * 1024;
+                   console.log('spd {0}, slow: {1}'.format(spd, st.slow_io));
+               }
+               //console.log('[P ] %d read DONE (%d reading, %d busy)', d[1], reading, nbusy);
+               return go_next();
+           }
+
+           if (k == "done") {
+               var nchunk = d[1],
+                   hslice = d[2],
+                   sz = d[3];
+
+               free.push(busy[nchunk]);
+               delete busy[nchunk];
+               nbusy--;
+
+               //console.log('[P ] %d HASH DONE (%d reading, %d busy)', nchunk, reading, nbusy);
+
+               hashtab[nchunk] = buf2b64(hslice);
+               st.bytes.hashed += sz;
+               t.hash.push(nchunk);
+               pvis.hashed(t);
+
+               if (t.hash.length < nchunks)
+                   return nbusy < opt_readers && go_next();
+
+               t.hash = [];
+               for (var a = 0; a < nchunks; a++)
+                   t.hash.push(hashtab[a]);
+
+               t.t_hashed = Date.now();
+
+               pvis.seth(t.n, 2, L.u_hashdone);
+               pvis.seth(t.n, 1, '📦 wait');
+               apop(st.busy.hash, t);
+               st.todo.handshake.push(t);
+               tasker();
+           }
+       }
+       go_next();
+   }
+
 	/////
 	////
 	///  head
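`wexec_hash` deliberately starts conservative: one chunk being read at a time, at most two workers busy. Only when the first sizable read clocks in below roughly 40 MB/s (the android-chrome case flagged in the comments above) does it let every worker read in parallel, since there per-read latency dominates and parallel reads scale linearly. A hedged sketch of that probe, in Python since the examples in this rewrite are Python; `pick_readers` is a hypothetical helper, not copyparty code:

```python
import time

def pick_readers(f, nworkers: int, probe_sz: int = 1024 * 1024) -> int:
    t0 = time.time()
    f.read(probe_sz)  # time one sizable read
    spd = probe_sz / max(time.time() - t0, 1e-6)  # bytes/sec
    slow_io = spd < 40 * 1024 * 1024  # ~40 MB/s cutoff, like the JS above
    return nworkers if slow_io else 1  # go wide only when reads are the bottleneck
```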
@@ -2000,6 +2110,9 @@ function up2k_init(subtle) {
            tasker();
        }
        else {
+           pvis.seth(t.n, 1, "ERROR");
+           pvis.seth(t.n, 2, L.u_ehstmp);
+
            var err = "",
                rsp = (xhr.responseText + ''),
                ofs = rsp.lastIndexOf('\nURL: ');

@@ -2209,7 +2322,7 @@ function up2k_init(subtle) {
    window.addEventListener('resize', onresize);
    onresize();
 
-   if (is_touch) {
+   if (MOBILE) {
        // android-chrome wobbles for a bit; firefox / iOS-safari are OK
        setTimeout(onresize, 20);
        setTimeout(onresize, 100);

@@ -2363,6 +2476,13 @@ function up2k_init(subtle) {
            localStorage.removeItem('u2sort');
    }
 
+   function set_hashw() {
+       if (!window.WebAssembly) {
+           bcfg_set('hashw', false);
+           toast.err(10, L.u_nowork);
+       }
+   }
+
    ebi('nthread_add').onclick = function (e) {
        ev(e);
        bumpthread(1);
@@ -7,12 +7,28 @@ if (!window['console'])
 
 
 var wah = '',
-   is_touch = 'ontouchstart' in window,
-   is_https = (window.location + '').indexOf('https:') === 0,
-   IPHONE = is_touch && /iPhone|iPad|iPod/i.test(navigator.userAgent),
+   HALFMAX = 8192 * 8192 * 8192 * 8192,
+   HTTPS = (window.location + '').indexOf('https:') === 0,
+   TOUCH = 'ontouchstart' in window,
+   MOBILE = TOUCH,
+   CHROME = !!window.chrome,
+   IPHONE = TOUCH && /iPhone|iPad|iPod/i.test(navigator.userAgent),
    WINDOWS = navigator.platform ? navigator.platform == 'Win32' : /Windows/.test(navigator.userAgent);
 
 
+try {
+   if (navigator.userAgentData.mobile)
+       MOBILE = true;
+
+   if (navigator.userAgentData.platform == 'Windows')
+       WINDOWS = true;
+
+   if (navigator.userAgentData.brands.some(function (d) { return d.brand == 'Chromium' }))
+       CHROME = true;
+}
+catch (ex) { }
+
+
 var ebi = document.getElementById.bind(document),
    QS = document.querySelector.bind(document),
    QSA = document.querySelectorAll.bind(document),

@@ -459,6 +475,16 @@ function sortTable(table, col, cb) {
        }
        return reverse * (a.localeCompare(b));
    });
+   if (sread('dir1st') !== '0') {
+       var r1 = [], r2 = [];
+       for (var i = 0; i < tr.length; i++) {
+           var cell = tr[vl[i][1]].cells[1],
+               href = cell.getAttribute('sortv') || cell.textContent.trim();
+
+           (href.split('?')[0].slice(-1) == '/' ? r1 : r2).push(vl[i]);
+       }
+       vl = r1.concat(r2);
+   }
    for (i = 0; i < tr.length; ++i) tb.appendChild(tr[vl[i][1]]);
    if (cb) cb();
 }

@@ -935,7 +961,7 @@ var tt = (function () {
        return r.show.bind(this)();
 
    tev = setTimeout(r.show.bind(this), 800);
-   if (is_touch)
+   if (TOUCH)
        return;
 
    this.addEventListener('mousemove', r.move);

@@ -1522,13 +1548,13 @@ function xhrchk(xhr, prefix, e404) {
    var errtxt = (xhr.response && xhr.response.err) || xhr.responseText,
        fun = toast.err;
 
-   if (xhr.status == 503 && /\bDD(?:wah){0}[o]S [Pp]rote[c]tion|>Just a mo[m]ent|#cf-b[u]bbles|Chec[k]ing your br[o]wser/.test(errtxt)) {
+   if (xhr.status == 503 && /[Cc]loud[f]lare|>Just a mo[m]ent|#cf-b[u]bbles|Chec[k]ing your br[o]wser/.test(errtxt)) {
        var now = Date.now(), td = now - cf_cha_t;
        if (td < 15000)
            return;
 
        cf_cha_t = now;
-       errtxt = 'Cloudflare DD' + wah + 'oS protection kicked in\n\n<strong>trying to fix it...</strong>';
+       errtxt = 'Clou' + wah + 'dflare protection kicked in\n\n<strong>trying to fix it...</strong>';
        fun = toast.warn;
 
        qsr('#cf_frame');
copyparty/web/w.hash.js (new file, 77 lines)
@@ -0,0 +1,77 @@
+"use strict";
+
+
+function hex2u8(txt) {
+   return new Uint8Array(txt.match(/.{2}/g).map(function (b) { return parseInt(b, 16); }));
+}
+
+
+var subtle = null;
+try {
+   subtle = crypto.subtle || crypto.webkitSubtle;
+   subtle.digest('SHA-512', new Uint8Array(1)).then(
+       function (x) { },
+       function (x) { load_fb(); }
+   );
+}
+catch (ex) {
+   load_fb();
+}
+function load_fb() {
+   subtle = null;
+   importScripts('/.cpr/deps/sha512.hw.js');
+}
+
+
+onmessage = (d) => {
+   var [nchunk, fobj, car, cdr] = d.data,
+       t0 = Date.now(),
+       reader = new FileReader();
+
+   reader.onload = function (e) {
+       try {
+           //console.log('[ w] %d HASH bgin', nchunk);
+           postMessage(["read", nchunk, cdr - car, Date.now() - t0]);
+           hash_calc(e.target.result);
+       }
+       catch (ex) {
+           postMessage(["panic", ex + '']);
+       }
+   };
+   reader.onerror = function () {
+       var err = reader.error + '';
+
+       if (err.indexOf('NotReadableError') !== -1 || // win10-chrome defender
+           err.indexOf('NotFoundError') !== -1 // macos-firefox permissions
+       )
+           return postMessage(["fail", 'OS-error', err + ' @ ' + car]);
+
+       postMessage(["ferr", err]);
+   };
+   //console.log('[ w] %d read bgin', nchunk);
+   reader.readAsArrayBuffer(
+       File.prototype.slice.call(fobj, car, cdr));
+
+
+   var hash_calc = function (buf) {
+       var hash_done = function (hashbuf) {
+           try {
+               var hslice = new Uint8Array(hashbuf).subarray(0, 33);
+               //console.log('[ w] %d HASH DONE', nchunk);
+               postMessage(["done", nchunk, hslice, cdr - car]);
+           }
+           catch (ex) {
+               postMessage(["panic", ex + '']);
+           }
+       };
+
+       if (subtle)
+           subtle.digest('SHA-512', buf).then(hash_done);
+       else {
+           var u8buf = new Uint8Array(buf);
+           hashwasm.sha512(u8buf).then(function (v) {
+               hash_done(hex2u8(v))
+           });
+       }
+   };
+}
@@ -1,3 +1,217 @@
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2022-0812-2258 `v1.3.12` quickboot

* read-only demo server at https://a.ocv.me/pub/demo/
* latest gzip edition of the sfx: [v1.0.14](https://github.com/9001/copyparty/releases/tag/v1.0.14#:~:text=release-specific%20notes)

## new features
*but wait, there's more!* not only do you get the [multithreaded file hashing](https://github.com/9001/copyparty/releases/tag/v1.3.11) but also --
* faster bootup and volume reindexing when `-e2ds` (file indexing) is enabled
  * `3x` faster is probably the average on most instances; more files per folder = faster
  * `9x` faster on a 36 TiB zfs music/media nas with `-e2ts` (metadata indexing), dropping from 46sec to 5sec
  * and `34x` on another zfs box, 63sec -> 1.8sec
  * new arg `--no-dhash` disables the speedhax in case it's buggy (skipping files or audio tags)
* add option `--exit idx` to abort and shutdown after volume indexing has finished (sketch below)

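a sketch of how these could be used together, assuming the sfx is saved as `copyparty-sfx.py`:

```sh
# build/refresh the file index, then shut down; handy for cron-driven reindexing
python3 copyparty-sfx.py -e2ds --exit idx

# if the quickboot speedhax ever skips files or tags, disable it
python3 copyparty-sfx.py -e2ds --no-dhash
```
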
## bugfixes
* [u2cli](https://github.com/9001/copyparty/tree/hovudstraum/bin#up2kpy): detect and skip uploading from recursive symlinks
* stop reindexing empty files on startup
* support fips-compliant cpython builds
  * replaces md5 with sha1, changing the filetype-associated colors in the gallery view


▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2022-0810-2135 `v1.3.11` webworkers

* read-only demo server at https://a.ocv.me/pub/demo/
* latest gzip edition of the sfx: [v1.0.14](https://github.com/9001/copyparty/releases/tag/v1.0.14#:~:text=release-specific%20notes)

## new features
* multithreaded file hashing! **300%** average speed increase
  * when uploading files through the browser client, based on web-workers
    * `4.5x` faster on http from a laptop -- `146` -> `670` MiB/s
    * ` 30%` faster on https from a laptop -- `552` -> `716` MiB/s
    * `4.2x` faster on http from android -- `13.5` -> `57.1` MiB/s
    * `5.3x` faster on https from android -- `13.8` -> `73.3` MiB/s
    * can be disabled using the `mt` togglebtn in the settings pane, for example if your phone runs out of memory (it eats ~250 MiB extra RAM)
  * `2.3x` faster [u2cli](https://github.com/9001/copyparty/tree/hovudstraum/bin#up2kpy) (cmd-line client) -- `398` -> `930` MiB/s (sketch below)
  * `2.4x` faster filesystem indexing on the server
* thx to @kipukun for the webworker suggestion!

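a hypothetical u2cli invocation; the URL, port, and filenames are placeholders, not from these notes:

```sh
# upload a file and a folder with the command-line client
python3 up2k.py http://127.0.0.1:3923/inc/ song.flac album1/
```
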
## bugfixes
* ux: reset scroll when navigating into a new folder
* u2cli: better errormsg if the server's tls certificate got rejected
* js: more futureproof cloudflare-challenge detection (they got a new one recently)

## other changes
* print warning if the python interpreter was built with an unsafe sqlite
* u2cli: add helpful messages on how to make it run on python 2.6

**trivia:** due to a [chrome bug](https://bugs.chromium.org/p/chromium/issues/detail?id=1352210), http can sometimes be faster than https now ¯\\\_(ツ)\_/¯


▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2022-0803-2340 `v1.3.10` folders first

* read-only demo server at https://a.ocv.me/pub/demo/
* latest gzip edition of the sfx: [v1.0.14](https://github.com/9001/copyparty/releases/tag/v1.0.14#:~:text=release-specific%20notes)

## new features
* faster
  * tag scanner
  * on windows: uploading to fat32 or smb
* toggle-button to sort folders before files (default-on)
  * almost the same as before, but now also when sorting by size / date
* repeatedly hit `ctrl-c` to force-quit if everything dies
* new file-indexing guards (sketch below)
  * `--xdev` / volflag `:c,xdev` stops if it hits another filesystem (bindmount/symlink)
  * `--xvol` / volflag `:c,xvol` does not follow symlinks pointing outside the volume
  * only affects file indexing -- does NOT prevent access!

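a minimal sketch of enabling both guards on one volume, using the `-v src:dst:perms:c,flags` syntax (paths are examples):

```sh
# index ./srv without crossing into other filesystems,
# and without following symlinks that leave the volume
python3 copyparty-sfx.py -e2ds -v srv::r:c,xdev,xvol
```
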
## bugfixes
* forget uploads that failed to initialize (allows retry in another folder)
* wrong filekeys in upload response if volume path contained a symlink
* faster shutdown on `ctrl-c` while hashing huge files
* ux: fix navpane covering files on horizontal scroll

## other changes
* include version info in the base64 crash-message
* ux: make upload errors more visible on mobile


▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2022-0727-1407 `v1.3.8` more async

* read-only demo server at https://a.ocv.me/pub/demo/
* latest gzip edition of the sfx: [v1.0.14](https://github.com/9001/copyparty/releases/tag/v1.0.14#:~:text=release-specific%20notes)

## new features
* new arg `--df 4` and volflag `:c,df=4g` to guarantee 4 GiB free disk space by rejecting uploads (sketch below)
* some features no longer block new uploads while they're processing
  * `-e2v` file integrity checker
  * `-e2ts` initial tag scanner
  * hopefully fixes a [deadlock](https://www.youtube.com/watch?v=DkKoMveT_jo&t=3s) someone ran into (but probably doesn't)
    * (the "deadlock" link is an addictive demoscene banger -- the actual issue is #10)
* reduced the impact of some features which still do
  * defer `--re-maxage` reindexing if there was a write (upload/rename/...) recently
    * `--db-act` sets minimum idle period before reindex can start (default 10sec)
* bbox / image-viewer: add video hotkeys 0..9 to seek 0%..90%
* audio-player: add audio crossfeed (left-right channel mixer / vocal isolation)
* splashpage (`/?h`) shows time since the most recent write

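a sketch of the disk-space guard together with the reindex pacing; the values are examples:

```sh
# reject uploads once free space drops below 4 GiB,
# and wait for 30sec of write-inactivity before reindexing
python3 copyparty-sfx.py --df 4 --db-act 30

# or enforce the free-space limit on a single volume instead
python3 copyparty-sfx.py -v srv::rw:c,df=4g
```
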
## bugfixes
* a11y:
  * enter-key should always trigger onclick
  * only focus password box if in-bounds
  * improve skip-to-files
* prisonparty: volume labeling in root folders
* other minor stuff
  * forget deleted shadowed files from the db
  * be less noisy if a client disconnects mid-reply
  * up2k.js less eager to thrash slow server HDDs

## other changes
* show client's upload ETA in server log
* dump stacks and issue `lsof` on the db if a transaction is stuck
  * will hopefully help if there's any more deadlocks
* [up2k-hook-ytid](https://github.com/9001/copyparty/blob/hovudstraum/contrib/plugins/up2k-hook-ytid.js) (the overengineered up2k.js plugin example) now has an mp4/webm/mkv metadata parser


▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2022-0716-1848 `v1.3.7` faster

* read-only demo server at https://a.ocv.me/pub/demo/
* latest gzip edition of the sfx: [v1.0.14](https://github.com/9001/copyparty/releases/tag/v1.0.14#:~:text=release-specific%20notes)

## new features
* `up2k.js`: **improved upload speeds!**
  * **...when there's many small files** (or the browser is slow)
    * add [potato mode](https://user-images.githubusercontent.com/241032/179336639-8ecc01ea-2662-4cb6-8048-5be3ad599f33.png) -- lightweight UI for faster uploads from slow boxes
      * enables automatically if it detects a cpu bottleneck (not very accurate)
  * **...on really fast connections (LAN / fiber)**
    * batch progress updates to reduce repaints
  * **...when there is a mix of big and small files**
    * sort the uploads by size, smallest first, for optimal cpu/network usage
      * can be overridden to alphabetical order in the settings tab
      * new arg `--u2sort` changes the default + overrides the override button
  * improve upload pacing when alphabetical order is enabled
    * mainly affecting single files that are 300 GiB +
* `up2k.js`: add [up2k hooks](https://github.com/9001/copyparty/blob/hovudstraum/contrib/plugins/up2k-hooks.js)
  * specify *client-side* rules to reject files as they are dropped into the browser
  * not a hard-reject since people can use [up2k.py](https://github.com/9001/copyparty/blob/hovudstraum/bin/up2k.py) and whatnot, more like a hint
* `up2k.py`: add file integrity checker
  * new arg `-e2v` to scan volumes and verify file checksums on startup (sketch below)
  * `-e2vu` updates the db on mismatch, `-e2vp` panics
  * uploads are blocked while the scan is running -- might get fixed at some point
    * for now it prints a warning
* bbox / image-viewer: doubletap a picture to enter fullscreen mode
* md-editor: `ctrl-c/x` affects current line if no selection, and `ctrl-e` is fullscreen
* tag-parser plugins:
  * add support for passing metadata from one mtp to another (parser dependencies)
    * the `p` flag in [vidchk](https://github.com/9001/copyparty/blob/hovudstraum/bin/mtag/vidchk.py) usage makes it run after the base parser, eating its output
  * add [rclone uploader](https://github.com/9001/copyparty/blob/hovudstraum/bin/mtag/rclone-upload.py) which optionally and by default depends on vidchk

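a sketch of the three verifier modes (assuming plain `-e2v` only reports mismatches):

```sh
python3 copyparty-sfx.py -e2v    # verify checksums on startup, report mismatches
python3 copyparty-sfx.py -e2vu   # ...and update the db on mismatch
python3 copyparty-sfx.py -e2vp   # ...or panic and abort on mismatch
```
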
## bugfixes
* sfx would crash if it got the same PID as recently (for example across two reboots)
* audio equalizer on recent chromes
  * still can't figure out why chrome sometimes drops the mediasession
* bbox: don't attach click events to videos
* up2k.py:
  * more sensible behavior w/ blank files
  * avoid some extra directory scans when deleting files
  * faster shutdown on `ctrl-c` during volume indexing
* warning from the thumbnail cleaner if the volume has no thumbnails
* `>fixing py2 support` `>2022`

## other changes
* up2k.js:
  * sends a summary of the upload queue to [the server log](https://github.com/9001/copyparty#up2k)
  * shows a toast while loading huge filedrops to indicate it's still alive
* sfx: disable guru meditation unless running on windows
  * avoids hanging systemd on certain crashes
* logs the state of all threads if sqlite hits a timeout


▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2022-0706-0029 `v1.3.5` sup cloudflare

* read-only demo server at https://a.ocv.me/pub/demo/
* latest gzip edition of the sfx: [v1.0.14](https://github.com/9001/copyparty/releases/tag/v1.0.14#:~:text=release-specific%20notes)

## new features
* detect + recover from cloudflare ddos-protection memes during upload
  * while carefully avoiding any mention of "DDoS" in the JS because enterprise firewalls do not enjoy that
* new option `--favico` to specify a default favicon (sketch below)
  * set to `🎉` by default, which also enables the fancy upload progress donut 👌
* baguettebox (image/video viewer):
  * toolbar button `⛶` to enter fullscreen mode (same as hotkey `F`)
  * tap middle of screen to show/hide toolbar
  * tap left/right-side of pics to navigate prev/next
  * hotkeys `[` and `]` to set A-B loop in videos
    * and [URL parameters](https://a.ocv.me/pub/demo/pics-vids/#gf-e2e482ae&t=4.2-6) for that + [initial seekpoint](https://a.ocv.me/pub/demo/pics-vids/#gf-c04bb0f6&t=26s) (same as the audio player)

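a sketch of overriding it, assuming `--favico` accepts the icon text directly:

```sh
# replace the default 🎉 with a different icon
python3 copyparty-sfx.py --favico 🎈
```
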
## bugfixes
* when a tag-parser hits the timeout, `pkill` all its descendants too
  * and a [new mtp flag](https://github.com/9001/copyparty/#file-parser-plugins) to override that; `kt` (kill tree, default), `km` (kill main, old default), `kn` (kill none)
* cpu-wasting spin while waiting for the final handful of files to finish tag-scraping
* detection of sparse-files support inside [prisonparty](https://github.com/9001/copyparty/tree/hovudstraum/bin#prisonpartysh) and other strict jails
* baguettebox (image/video viewer):
  * crash on swipe during close
* didn't reset terminal color at the end of `?ls=v`
* don't try to thumbnail empty files (harmless but dumb)

## other changes
* ux improvements
  * hide the uploads table until something happens
* bump codemirror to 5.65.6


▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
# 2022-0627-2057 `v1.3.3` sdcardfs

@@ -185,7 +185,7 @@ brew install python@2
 pip install virtualenv
 
 # readme toc
-cat README.md | awk 'function pr() { if (!h) {return}; if (/^ *[*!#|]/||!s) {printf "%s\n",h;h=0;return}; if (/.../) {printf "%s - %s\n",h,$0;h=0}; }; /^#/{s=1;pr()} /^#* *(file indexing|exclude-patterns|install on android|dev env setup|just the sfx|complete release|optional gpl stuff)|`$/{s=0} /^#/{lv=length($1);sub(/[^ ]+ /,"");bab=$0;gsub(/ /,"-",bab); h=sprintf("%" ((lv-1)*4+1) "s [%s](#%s)", "*",$0,bab);next} !h{next} {sub(/ .*/,"");sub(/[:,]$/,"")} {pr()}' > toc; grep -E '^## readme toc' -B1000 -A2 <README.md >p1; grep -E '^## quickstart' -B2 -A999999 <README.md >p2; (cat p1; grep quickstart -A1000 <toc; cat p2) >README.md; rm p1 p2 toc
+cat README.md | awk 'function pr() { if (!h) {return}; if (/^ *[*!#|]/||!s) {printf "%s\n",h;h=0;return}; if (/.../) {printf "%s - %s\n",h,$0;h=0}; }; /^#/{s=1;pr()} /^#* *(install on android|dev env setup|just the sfx|complete release|optional gpl stuff)|`$/{s=0} /^#/{lv=length($1);sub(/[^ ]+ /,"");bab=$0;gsub(/ /,"-",bab); h=sprintf("%" ((lv-1)*4+1) "s [%s](#%s)", "*",$0,bab);next} !h{next} {sub(/ .*/,"");sub(/[:;,]$/,"")} {pr()}' > toc; grep -E '^## readme toc' -B1000 -A2 <README.md >p1; grep -E '^## quickstart' -B2 -A999999 <README.md >p2; (cat p1; grep quickstart -A1000 <toc; cat p2) >README.md; rm p1 p2 toc
 
 # fix firefox phantom breakpoints,
 # suggestions from bugtracker, doesnt work (debugger is not attachable)
@@ -14,10 +14,6 @@ gtar=$(command -v gtar || command -v gnutar) || true
     realpath() { grealpath "$@"; }
 }
 
-which md5sum 2>/dev/null >/dev/null &&
-    md5sum=md5sum ||
-    md5sum="md5 -r"
-
 mode="$1"
 
 [ -z "$mode" ] &&
@@ -26,6 +26,11 @@ help() { exec cat <<'EOF'
 # (browsers will try to use 'Consolas' instead)
 #
 # `no-dd` saves ~2k by removing the mouse cursor
+#
+# ---------------------------------------------------------------------
+#
+# if you are on windows, you can use msys2:
+#   PATH=/c/Users/$USER/AppData/Local/Programs/Python/Python310:"$PATH" ./make-sfx.sh fast
 
 EOF
 }
@@ -64,6 +69,9 @@ pybin=$(command -v python3 || command -v python) || {
     exit 1
 }
 
+[ $CSN ] ||
+    CSN=sfx
+
 langs=
 use_gz=
 zopf=2560
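
the `CSN` override is what rls.sh further down uses to run several builds in parallel without clobbering each other's workdirs; a usage sketch (the name `sfx2` is arbitrary):

```sh
# build into sfx2/ and dist/copyparty-sfx2.py instead of the defaults
CSN=sfx2 ./make-sfx.sh fast
```
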
@@ -94,9 +102,9 @@ stamp=$(
     done | sort | tail -n 1 | sha1sum | cut -c-16
 )
 
-rm -rf sfx/*
-mkdir -p sfx build
-cd sfx
+rm -rf $CSN/*
+mkdir -p $CSN build
+cd $CSN
 
 tmpdir="$(
     printf '%s\n' "$TMPDIR" /tmp |
@@ -190,7 +198,7 @@ tmpdir="$(
     done
 
     # remove type hints before build instead
-    (cd copyparty; python3 ../../scripts/strip_hints/a.py; rm uh)
+    (cd copyparty; "$pybin" ../../scripts/strip_hints/a.py; rm uh)
 }
 
 ver=
@@ -232,7 +240,7 @@ ts=$(date -u +%s)
 hts=$(date -u +%Y-%m%d-%H%M%S) # --date=@$ts (thx osx)
 
 mkdir -p ../dist
-sfx_out=../dist/copyparty-sfx
+sfx_out=../dist/copyparty-$CSN
 
 echo cleanup
 find -name '*.pyc' -delete
@@ -366,7 +374,7 @@ gzres() {
 }
 
 
-zdir="$tmpdir/cpp-mksfx"
+zdir="$tmpdir/cpp-mk$CSN"
 [ -e "$zdir/$stamp" ] || rm -rf "$zdir"
 mkdir -p "$zdir"
 echo a > "$zdir/$stamp"
@@ -397,8 +405,8 @@ sed -r 's/(.*)\.(.*)/\2 \1/' | LC_ALL=C sort |
 sed -r 's/([^ ]*) (.*)/\2.\1/' | grep -vE '/list1?$' > list1
 
 for n in {1..50}; do
-    (grep -vE '\.(gz|br)$' list1; grep -E '\.(gz|br)$' list1 | shuf) >list || true
-    s=$(md5sum list | cut -c-16)
+    (grep -vE '\.(gz|br)$' list1; grep -E '\.(gz|br)$' list1 | (shuf||gshuf) ) >list || true
+    s=$( (sha1sum||shasum) < list | cut -c-16)
     grep -q $s "$zdir/h" && continue
     echo $s >> "$zdir/h"
     break
@@ -418,7 +426,7 @@ pe=bz2
 
 echo compressing tar
 # detect best level; bzip2 -7 is usually better than -9
-for n in {2..9}; do cp tar t.$n; $pc -$n t.$n & done; wait; mv -v $(ls -1S t.*.$pe | tail -n 1) tar.bz2
+for n in {2..9}; do cp tar t.$n; nice $pc -$n t.$n & done; wait; mv -v $(ls -1S t.*.$pe | tail -n 1) tar.bz2
 rm t.* || true
 exts=()
@@ -8,7 +8,7 @@ cmd = sys.argv[1]
 if cmd == "cpp":
     from copyparty.__main__ import main
 
-    argv = ["__main__", "-v", "srv::r", "-v", "../../yt:yt:r"]
+    argv = ["__main__", "-vsrv::r:c,e2ds,e2ts"]
     main(argv=argv)
 
 elif cmd == "test":
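
the new argv uses the condensed volume syntax, sharing `./srv` read-only with file and tag indexing as volflags; outside the profiler, the equivalent launch would be something like this (a sketch, assuming the copyparty module is importable):

```sh
# serve ./srv read-only with file + tag indexing enabled
python3 -m copyparty -v srv::r:c,e2ds,e2ts
```
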
@@ -29,6 +29,6 @@ else:
 #
 # python -m vmprof -o prof --lines ./scripts/profile.py test
 
-# linux: ~/.local/bin/vmprofshow prof tree | grep -vF '[1m 0.'
-# macos: ~/Library/Python/3.9/bin/vmprofshow prof tree | grep -vF '[1m 0.'
+# linux: ~/.local/bin/vmprofshow prof tree | awk '$2>1{n=5} !n{next} 1;{n--} !n{print""}'
+# macos: ~/Library/Python/3.9/bin/vmprofshow prof tree
 # win: %appdata%\..\Roaming\Python\Python39\Scripts\vmprofshow.exe prof tree
4 scripts/py2/queue/__init__.py Normal file
@@ -0,0 +1,4 @@
# coding: utf-8
from __future__ import print_function, unicode_literals

from Queue import Queue, LifoQueue, PriorityQueue, Empty, Full
@@ -1,6 +1,8 @@
 #!/bin/bash
 set -e
 
+parallel=2
+
 cd ~/dev/copyparty/scripts
 
 v=$1
@@ -21,16 +23,31 @@ v=$1
     ./make-tgz-release.sh $v
 }
 
-rm -f ../dist/copyparty-sfx.*
+rm -f ../dist/copyparty-sfx*
 shift
 ./make-sfx.sh "$@"
-f=../dist/copyparty-sfx.py
-[ -e $f ] ||
-    f=../dist/copyparty-sfx-gz.py
+f=../dist/copyparty-sfx
+[ -e $f.py ] ||
+    f=../dist/copyparty-sfx-gz
+
+$f.py -h >/dev/null
+
+[ $parallel -gt 1 ] && {
+    printf '\033[%s' s 2r H "0;1;37;44mbruteforcing sfx size -- press enter to terminate" K u "7m $* " K $'27m\n'
+    trap "rm -f .sfx-run; printf '\033[%s' s r u" INT TERM EXIT
+    touch .sfx-run
+    for ((a=0; a<$parallel; a++)); do
+        while [ -e .sfx-run ]; do
+            CSN=sfx$a ./make-sfx.sh re "$@"
+            mv $f$a.py $f.$(wc -c <$f$a.py | awk '{print$1}').py
+        done &
+    done
+    read
+    exit
+}
 
-$f -h
 while true; do
-    mv $f $f.$(wc -c <$f | awk '{print$1}')
+    mv $f.py $f.$(wc -c <$f.py | awk '{print$1}').py
     ./make-sfx.sh re "$@"
 done
@@ -77,3 +77,4 @@ copyparty/web/splash.js,
 copyparty/web/ui.css,
 copyparty/web/up2k.js,
 copyparty/web/util.js,
+copyparty/web/w.hash.js,
@@ -213,11 +213,11 @@ def yieldfile(fn):
 
 
 def hashfile(fn):
-    h = hashlib.md5()
+    h = hashlib.sha1()
     for block in yieldfile(fn):
         h.update(block)
 
-    return h.hexdigest()
+    return h.hexdigest()[:24]
 
 
 def unpack():
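
since `hashfile()` now returns the first 24 hex chars of a sha1 digest, a matching value can be spot-checked from a shell (the filename is just a placeholder):

```sh
# first 24 hex chars of the sha1, same as h.hexdigest()[:24]
sha1sum some-file | cut -c-24
```
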
@@ -94,7 +94,7 @@ class Cfg(Namespace):
     def __init__(self, a=None, v=None, c=None):
         ka = {}
 
-        ex = "e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp ed emp force_js ihead no_acode no_athumb no_del no_logues no_mv no_readme no_robots no_scandir no_thumb no_vthumb no_zip nid nih nw"
+        ex = "e2d e2ds e2dsa e2t e2ts e2tsr e2v e2vu e2vp xdev xvol ed emp force_js ihead no_acode no_athumb no_del no_logues no_mv no_readme no_robots no_scandir no_thumb no_vthumb no_zip nid nih nw"
         ka.update(**{k: False for k in ex.split()})
 
         ex = "no_rescan no_sendfile no_voldump"
@@ -106,7 +106,7 @@ class Cfg(Namespace):
         ex = "re_maxage rproxy rsp_slp s_wr_slp theme themes turbo df"
         ka.update(**{k: 0 for k in ex.split()})
 
-        ex = "doctitle favico html_head mth textfiles"
+        ex = "doctitle favico html_head mth textfiles log_fk"
         ka.update(**{k: "" for k in ex.split()})
 
         super(Cfg, self).__init__(