Compare commits
37 Commits
- fd7df5c952
- a270019147
- 55e0209901
- 2b255fbbed
- 8a2345a0fb
- bfa9f535aa
- f757623ad8
- 3c7465e268
- 108665fc4f
- ed519c9138
- 2dd2e2c57e
- 6c3a976222
- 80cc26bd95
- 970fb84fd8
- 20cbcf6931
- 8fcde2a579
- b32d1f8ad3
- 03513e0cb1
- e041a2b197
- d7d625be2a
- 4121266678
- 22971a6be4
- efbf8d7e0d
- 397396ea4a
- e59b077c21
- 4bc39f3084
- 21c3570786
- 2f85c1fb18
- 1e27a4c2df
- 456f575637
- 51546c9e64
- 83b4b70ef4
- a5120d4f6f
- c95941e14f
- 0dd531149d
- 67da1b5219
- 919bd16437
README.md (40 changed lines)
```diff
@@ -23,6 +23,7 @@ turn your phone or raspi into a portable file server with resumable uploads/downloads
 * [on debian](#on-debian)
 * [notes](#notes)
 * [status](#status)
+* [testimonials](#testimonials)
 * [bugs](#bugs)
 * [general bugs](#general-bugs)
 * [not my bugs](#not-my-bugs)
@@ -45,6 +46,7 @@ turn your phone or raspi into a portable file server with resumable uploads/downloads
 * [browser support](#browser-support)
 * [client examples](#client-examples)
 * [up2k](#up2k)
+* [performance](#performance)
 * [dependencies](#dependencies)
 * [optional dependencies](#optional-dependencies)
 * [install recommended deps](#install-recommended-deps)
@@ -143,6 +145,13 @@ summary: all planned features work! now please enjoy the bloatening
 * ☑ editor (sure why not)
 
 
+## testimonials
+
+small collection of user feedback
+
+`good enough`, `surprisingly correct`, `certified good software`, `just works`, `why`
+
+
 # bugs
 
 * Windows: python 3.7 and older cannot read tags with ffprobe, so use mutagen or upgrade
@@ -191,10 +200,16 @@ the browser has the following hotkeys
 * `G` toggle list / grid view
 * `T` toggle thumbnails / icons
 * when playing audio:
-* `0..9` jump to 10%..90%
-* `U/O` skip 10sec back/forward
 * `J/L` prev/next song
+* `U/O` skip 10sec back/forward
+* `0..9` jump to 10%..90%
 * `P` play/pause (also starts playing the folder)
+* when viewing images / playing videos:
+* `J/L, Left/Right` prev/next file
+* `Home/End` first/last file
+* `U/O` skip 10sec back/forward
+* `P/K/Space` play/pause video
+* `Esc` close viewer
 * when tree-sidebar is open:
 * `A/D` adjust tree width
 * in the grid view:
@@ -486,6 +501,23 @@ quick outline of the up2k protocol, see [uploading](#uploading) for the web-client
 * client does another handshake with the hashlist; server replies with OK or a list of chunks to reupload
 
 
+# performance
+
+defaults are good for most cases, don't mind the `cannot efficiently use multiple CPU cores` message, it's very unlikely to be a problem
+
+below are some tweaks roughly ordered by usefulness:
+
+* `-q` disables logging and can help a bunch, even when combined with `-lo` to redirect logs to file
+* `--http-only` or `--https-only` (unless you want to support both protocols) will reduce the delay before a new connection is established
+* `--hist` pointing to a fast location (ssd) will make directory listings and searches faster when `-e2d` or `-e2t` is set
+* `--no-hash` when indexing a networked disk if you don't care about the actual filehashes and only want the names/tags searchable
+* `-j` enables multiprocessing (actual multithreading) and can make copyparty perform better in cpu-intensive workloads, for example:
+* huge amount of short-lived connections
+* really heavy traffic (downloads/uploads)
+
+...however it adds an overhead to internal communication so it might be a net loss, see if it works 4 u
+
+
 # dependencies
 
 * `jinja2` (is built into the SFX)
@@ -618,6 +650,7 @@ roughly sorted by priority
 * reduce up2k roundtrips
 * start from a chunk index and just go
 * terminate client on bad data
+* logging to file
 
 discarded ideas
 
@@ -637,3 +670,6 @@ discarded ideas
 * nah
 * look into android thumbnail cache file format
 * absolutely not
+* indexedDB for hashes, cfg enable/clear/sz, 2gb avail, ~9k for 1g, ~4k for 100m, 500k items before autoeviction
+* blank hashlist when up-ok to skip handshake
+* too many confusing side-effects
```
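The `-j` tip added above comes down to the GIL: CPU-bound work inside one python process cannot spread across cores with threads, but separate processes can, at the cost of the IPC overhead mentioned in the same paragraph. A self-contained illustration of that trade-off (not copyparty code; the workload and numbers are made up):

```python
# standalone sketch: CPU-bound work scales with processes, not threads (GIL)
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor


def burn(n=2_000_000):
    # hypothetical cpu-heavy task standing in for hashing / checksumming
    x = 0
    for i in range(n):
        x += i * i
    return x


def bench(pool_cls):
    t0 = time.time()
    with pool_cls(max_workers=4) as ex:
        list(ex.map(burn, [2_000_000] * 8))
    return time.time() - t0


if __name__ == "__main__":
    print("threads:   {:.2f}s".format(bench(ThreadPoolExecutor)))
    print("processes: {:.2f}s".format(bench(ProcessPoolExecutor)))
```

For a mostly-idle file server the process-spawning and message-passing overhead can outweigh the gain, which is why the paragraph hedges with "see if it works 4 u".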
```diff
@@ -1,7 +1,15 @@
-# when running copyparty behind a reverse-proxy,
-# make sure that copyparty allows at least as many clients as the proxy does,
-# so run copyparty with -nc 512 if your nginx has the default limits
-# (worker_processes 1, worker_connections 512)
+# when running copyparty behind a reverse proxy,
+# the following arguments are recommended:
+#
+# -nc 512 important, see next paragraph
+# --http-only lower latency on initial connection
+# -i 127.0.0.1 only accept connections from nginx
+#
+# -nc must match or exceed the webserver's max number of concurrent clients;
+# nginx default is 512 (worker_processes 1, worker_connections 512)
+#
+# you may also consider adding -j0 for CPU-intensive configurations
+# (not that i can really think of any good examples)
 
 upstream cpp {
     server 127.0.0.1:3923;
```
```diff
@@ -9,6 +9,9 @@ import os
 PY2 = sys.version_info[0] == 2
 if PY2:
     sys.dont_write_bytecode = True
+    unicode = unicode
+else:
+    unicode = str
 
 WINDOWS = False
 if platform.system() == "Windows":
```
```diff
@@ -20,7 +20,7 @@ import threading
 import traceback
 from textwrap import dedent
 
-from .__init__ import E, WINDOWS, VT100, PY2
+from .__init__ import E, WINDOWS, VT100, PY2, unicode
 from .__version__ import S_VERSION, S_BUILD_DT, CODENAME
 from .svchub import SvcHub
 from .util import py_desc, align_tab, IMPLICATIONS, alltrace
@@ -31,6 +31,8 @@ try:
 except:
     HAVE_SSL = False
 
+printed = ""
+
 
 class RiceFormatter(argparse.HelpFormatter):
     def _get_help_string(self, action):
@@ -61,8 +63,15 @@ class Dodge11874(RiceFormatter):
         super(Dodge11874, self).__init__(*args, **kwargs)
 
 
+def lprint(*a, **ka):
+    global printed
+
+    printed += " ".join(unicode(x) for x in a) + ka.get("end", "\n")
+    print(*a, **ka)
+
+
 def warn(msg):
-    print("\033[1mwarning:\033[0;33m {}\033[0m\n".format(msg))
+    lprint("\033[1mwarning:\033[0;33m {}\033[0m\n".format(msg))
 
 
 def ensure_locale():
```
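The new `lprint` helper tees everything that would normally go to stdout into the module-level `printed` buffer while still printing it; a later hunk passes that buffer to `SvcHub(al, argv, printed)`, presumably so the early startup output can also reach the new `-lo` logfile. A minimal sketch of the same pattern in isolation (names shortened, hypothetical consumer class):

```python
# sketch: tee console output into a buffer so a service hub can replay it later
printed = ""


def lprint(*a, **ka):
    global printed
    printed += " ".join(str(x) for x in a) + ka.get("end", "\n")
    print(*a, **ka)


class Hub:
    def __init__(self, early_output):
        # hypothetical consumer; everything printed before construction is handed over here
        self.early_output = early_output


lprint("starting up on port", 3923)
hub = Hub(printed)  # the buffered startup lines are now available to the hub
```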
```diff
@@ -73,7 +82,7 @@ def ensure_locale():
     ]:
         try:
             locale.setlocale(locale.LC_ALL, x)
-            print("Locale:", x)
+            lprint("Locale:", x)
             break
         except:
             continue
@@ -94,7 +103,7 @@ def ensure_cert():
 
     try:
         if filecmp.cmp(cert_cfg, cert_insec):
-            print(
+            lprint(
                 "\033[33m using default TLS certificate; https will be insecure."
                 + "\033[36m\n certificate location: {}\033[0m\n".format(cert_cfg)
             )
@@ -123,7 +132,7 @@ def configure_ssl_ver(al):
     if "help" in sslver:
         avail = [terse_sslver(x[6:]) for x in flags]
         avail = " ".join(sorted(avail) + ["all"])
-        print("\navailable ssl/tls versions:\n " + avail)
+        lprint("\navailable ssl/tls versions:\n " + avail)
         sys.exit(0)
 
     al.ssl_flags_en = 0
@@ -143,7 +152,7 @@ def configure_ssl_ver(al):
 
     for k in ["ssl_flags_en", "ssl_flags_de"]:
         num = getattr(al, k)
-        print("{}: {:8x} ({})".format(k, num, num))
+        lprint("{}: {:8x} ({})".format(k, num, num))
 
     # think i need that beer now
 
@@ -160,13 +169,13 @@ def configure_ssl_ciphers(al):
     try:
         ctx.set_ciphers(al.ciphers)
     except:
-        print("\n\033[1;31mfailed to set ciphers\033[0m\n")
+        lprint("\n\033[1;31mfailed to set ciphers\033[0m\n")
 
     if not hasattr(ctx, "get_ciphers"):
-        print("cannot read cipher list: openssl or python too old")
+        lprint("cannot read cipher list: openssl or python too old")
     else:
         ciphers = [x["description"] for x in ctx.get_ciphers()]
-        print("\n ".join(["\nenabled ciphers:"] + align_tab(ciphers) + [""]))
+        lprint("\n ".join(["\nenabled ciphers:"] + align_tab(ciphers) + [""]))
 
     if is_help:
         sys.exit(0)
@@ -249,30 +258,32 @@ def run_argparse(argv, formatter):
         ),
     )
     # fmt: off
-    ap.add_argument("-c", metavar="PATH", type=str, action="append", help="add config file")
-    ap.add_argument("-nc", metavar="NUM", type=int, default=64, help="max num clients")
-    ap.add_argument("-j", metavar="CORES", type=int, default=1, help="max num cpu cores")
-    ap.add_argument("-a", metavar="ACCT", type=str, action="append", help="add account, USER:PASS; example [ed:wark")
-    ap.add_argument("-v", metavar="VOL", type=str, action="append", help="add volume, SRC:DST:FLAG; example [.::r], [/mnt/nas/music:/music:r:aed")
-    ap.add_argument("-ed", action="store_true", help="enable ?dots")
-    ap.add_argument("-emp", action="store_true", help="enable markdown plugins")
-    ap.add_argument("-mcr", metavar="SEC", type=int, default=60, help="md-editor mod-chk rate")
-    ap.add_argument("--dotpart", action="store_true", help="dotfile incomplete uploads")
-    ap.add_argument("--sparse", metavar="MiB", type=int, default=4, help="up2k min.size threshold (mswin-only)")
-    ap.add_argument("--urlform", metavar="MODE", type=str, default="print,get", help="how to handle url-forms; examples: [stash], [save,get]")
+    u = unicode
+    ap2 = ap.add_argument_group('general options')
+    ap2.add_argument("-c", metavar="PATH", type=u, action="append", help="add config file")
+    ap2.add_argument("-nc", metavar="NUM", type=int, default=64, help="max num clients")
+    ap2.add_argument("-j", metavar="CORES", type=int, default=1, help="max num cpu cores")
+    ap2.add_argument("-a", metavar="ACCT", type=u, action="append", help="add account, USER:PASS; example [ed:wark")
+    ap2.add_argument("-v", metavar="VOL", type=u, action="append", help="add volume, SRC:DST:FLAG; example [.::r], [/mnt/nas/music:/music:r:aed")
+    ap2.add_argument("-ed", action="store_true", help="enable ?dots")
+    ap2.add_argument("-emp", action="store_true", help="enable markdown plugins")
+    ap2.add_argument("-mcr", metavar="SEC", type=int, default=60, help="md-editor mod-chk rate")
+    ap2.add_argument("--dotpart", action="store_true", help="dotfile incomplete uploads")
+    ap2.add_argument("--sparse", metavar="MiB", type=int, default=4, help="up2k min.size threshold (mswin-only)")
+    ap2.add_argument("--urlform", metavar="MODE", type=u, default="print,get", help="how to handle url-forms; examples: [stash], [save,get]")
 
     ap2 = ap.add_argument_group('network options')
-    ap2.add_argument("-i", metavar="IP", type=str, default="0.0.0.0", help="ip to bind (comma-sep.)")
-    ap2.add_argument("-p", metavar="PORT", type=str, default="3923", help="ports to bind (comma/range)")
+    ap2.add_argument("-i", metavar="IP", type=u, default="0.0.0.0", help="ip to bind (comma-sep.)")
+    ap2.add_argument("-p", metavar="PORT", type=u, default="3923", help="ports to bind (comma/range)")
     ap2.add_argument("--rproxy", metavar="DEPTH", type=int, default=1, help="which ip to keep; 0 = tcp, 1 = origin (first x-fwd), 2 = cloudflare, 3 = nginx, -1 = closest proxy")
 
     ap2 = ap.add_argument_group('SSL/TLS options')
     ap2.add_argument("--http-only", action="store_true", help="disable ssl/tls")
     ap2.add_argument("--https-only", action="store_true", help="disable plaintext")
-    ap2.add_argument("--ssl-ver", metavar="LIST", type=str, help="set allowed ssl/tls versions; [help] shows available versions; default is what your python version considers safe")
-    ap2.add_argument("--ciphers", metavar="LIST", help="set allowed ssl/tls ciphers; [help] shows available ciphers")
+    ap2.add_argument("--ssl-ver", metavar="LIST", type=u, help="set allowed ssl/tls versions; [help] shows available versions; default is what your python version considers safe")
+    ap2.add_argument("--ciphers", metavar="LIST", type=u, help="set allowed ssl/tls ciphers; [help] shows available ciphers")
     ap2.add_argument("--ssl-dbg", action="store_true", help="dump some tls info")
-    ap2.add_argument("--ssl-log", metavar="PATH", help="log master secrets")
+    ap2.add_argument("--ssl-log", metavar="PATH", type=u, help="log master secrets")
 
     ap2 = ap.add_argument_group('opt-outs')
     ap2.add_argument("-nw", action="store_true", help="disable writes (benchmark)")
```
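The `u = unicode` alias (the real unicode type on py2, `str` on py3, per the `__init__` hunk above) replaces `type=str` throughout the argument definitions; on python 2, `type=str` keeps arguments as byte-strings, and mixing those with text later tends to misbehave. A small standalone illustration of the difference, assuming ASCII input (not copyparty's actual handling):

```python
# sketch: why the argparse "type" matters on python 2
import argparse
import sys

PY2 = sys.version_info[0] == 2
if PY2:
    u = unicode  # noqa: F821 -- builtin on py2 only
else:
    u = str

ap = argparse.ArgumentParser()
ap.add_argument("-v", type=u, help="volume spec, kept as text on both py2 and py3")
args = ap.parse_args(["-v", "/mnt/nas/music:/music:r"])

# on py2 this prints a u'...' value instead of a byte-string, so later
# .format() calls and path handling behave the same on both interpreters
print(type(args.v), args.v)
```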
```diff
@@ -281,14 +292,16 @@ def run_argparse(argv, formatter):
     ap2.add_argument("--no-zip", action="store_true", help="disable download as zip/tar")
 
     ap2 = ap.add_argument_group('safety options')
-    ap2.add_argument("--ls", metavar="U[,V[,F]]", help="scan all volumes; arguments USER,VOL,FLAGS; example [**,*,ln,p,r]")
-    ap2.add_argument("--salt", type=str, default="hunter2", help="up2k file-hash salt")
+    ap2.add_argument("--ls", metavar="U[,V[,F]]", type=u, help="scan all volumes; arguments USER,VOL,FLAGS; example [**,*,ln,p,r]")
+    ap2.add_argument("--salt", type=u, default="hunter2", help="up2k file-hash salt")
 
     ap2 = ap.add_argument_group('logging options')
     ap2.add_argument("-q", action="store_true", help="quiet")
+    ap2.add_argument("-lo", metavar="PATH", type=u, help="logfile, example: cpp-%%Y-%%m%%d-%%H%%M%%S.txt.xz")
     ap2.add_argument("--log-conn", action="store_true", help="print tcp-server msgs")
-    ap2.add_argument("--ihead", metavar="HEADER", action='append', help="dump incoming header")
-    ap2.add_argument("--lf-url", metavar="RE", type=str, default=r"^/\.cpr/|\?th=[wj]$", help="dont log URLs matching")
+    ap2.add_argument("--log-htp", action="store_true", help="print http-server threadpool scaling")
+    ap2.add_argument("--ihead", metavar="HEADER", type=u, action='append', help="dump incoming header")
+    ap2.add_argument("--lf-url", metavar="RE", type=u, default=r"^/\.cpr/|\?th=[wj]$", help="dont log URLs matching")
 
     ap2 = ap.add_argument_group('admin panel options')
     ap2.add_argument("--no-rescan", action="store_true", help="disable ?scan (volume reindexing)")
@@ -303,9 +316,9 @@ def run_argparse(argv, formatter):
     ap2.add_argument("--th-no-webp", action="store_true", help="disable webp output")
     ap2.add_argument("--th-ff-jpg", action="store_true", help="force jpg for video thumbs")
     ap2.add_argument("--th-poke", metavar="SEC", type=int, default=300, help="activity labeling cooldown")
-    ap2.add_argument("--th-clean", metavar="SEC", type=int, default=43200, help="cleanup interval")
+    ap2.add_argument("--th-clean", metavar="SEC", type=int, default=43200, help="cleanup interval; 0=disabled")
     ap2.add_argument("--th-maxage", metavar="SEC", type=int, default=604800, help="max folder age")
-    ap2.add_argument("--th-covers", metavar="N,N", type=str, default="folder.png,folder.jpg,cover.png,cover.jpg", help="folder thumbnails to stat for")
+    ap2.add_argument("--th-covers", metavar="N,N", type=u, default="folder.png,folder.jpg,cover.png,cover.jpg", help="folder thumbnails to stat for")
 
     ap2 = ap.add_argument_group('database options')
     ap2.add_argument("-e2d", action="store_true", help="enable up2k database")
@@ -314,24 +327,25 @@ def run_argparse(argv, formatter):
     ap2.add_argument("-e2t", action="store_true", help="enable metadata indexing")
     ap2.add_argument("-e2ts", action="store_true", help="enable metadata scanner, sets -e2t")
     ap2.add_argument("-e2tsr", action="store_true", help="rescan all metadata, sets -e2ts")
-    ap2.add_argument("--hist", metavar="PATH", type=str, help="where to store volume state")
+    ap2.add_argument("--hist", metavar="PATH", type=u, help="where to store volume state")
     ap2.add_argument("--no-hash", action="store_true", help="disable hashing during e2ds folder scans")
     ap2.add_argument("--no-mutagen", action="store_true", help="use ffprobe for tags instead")
     ap2.add_argument("--no-mtag-mt", action="store_true", help="disable tag-read parallelism")
-    ap2.add_argument("-mtm", metavar="M=t,t,t", action="append", type=str, help="add/replace metadata mapping")
-    ap2.add_argument("-mte", metavar="M,M,M", type=str, help="tags to index/display (comma-sep.)",
+    ap2.add_argument("-mtm", metavar="M=t,t,t", type=u, action="append", help="add/replace metadata mapping")
+    ap2.add_argument("-mte", metavar="M,M,M", type=u, help="tags to index/display (comma-sep.)",
         default="circle,album,.tn,artist,title,.bpm,key,.dur,.q,.vq,.aq,ac,vc,res,.fps")
-    ap2.add_argument("-mtp", metavar="M=[f,]bin", action="append", type=str, help="read tag M using bin")
+    ap2.add_argument("-mtp", metavar="M=[f,]bin", type=u, action="append", help="read tag M using bin")
     ap2.add_argument("--srch-time", metavar="SEC", type=int, default=30, help="search deadline")
 
     ap2 = ap.add_argument_group('appearance options')
-    ap2.add_argument("--css-browser", metavar="L", help="URL to additional CSS to include")
+    ap2.add_argument("--css-browser", metavar="L", type=u, help="URL to additional CSS to include")
 
     ap2 = ap.add_argument_group('debug options')
     ap2.add_argument("--no-sendfile", action="store_true", help="disable sendfile")
     ap2.add_argument("--no-scandir", action="store_true", help="disable scandir")
     ap2.add_argument("--no-fastboot", action="store_true", help="wait for up2k indexing")
-    ap2.add_argument("--stackmon", metavar="P,S", help="write stacktrace to Path every S second")
+    ap2.add_argument("--no-htp", action="store_true", help="disable httpserver threadpool, create threads as-needed instead")
+    ap2.add_argument("--stackmon", metavar="P,S", type=u, help="write stacktrace to Path every S second")
 
     return ap.parse_args(args=argv[1:])
     # fmt: on
@@ -348,7 +362,7 @@ def main(argv=None):
     desc = py_desc().replace("[", "\033[1;30m[")
 
     f = '\033[36mcopyparty v{} "\033[35m{}\033[36m" ({})\n{}\033[0m\n'
-    print(f.format(S_VERSION, CODENAME, S_BUILD_DT, desc))
+    lprint(f.format(S_VERSION, CODENAME, S_BUILD_DT, desc))
 
     ensure_locale()
     if HAVE_SSL:
@@ -362,7 +376,7 @@ def main(argv=None):
             continue
 
         msg = "\033[1;31mWARNING:\033[0;1m\n {} \033[0;33mwas replaced with\033[0;1m {} \033[0;33mand will be removed\n\033[0m"
-        print(msg.format(dk, nk))
+        lprint(msg.format(dk, nk))
         argv[idx] = nk
         time.sleep(2)
 
@@ -416,7 +430,7 @@ def main(argv=None):
 
     # signal.signal(signal.SIGINT, sighandler)
 
-    SvcHub(al).run()
+    SvcHub(al, argv, printed).run()
 
 
 if __name__ == "__main__":
```
```diff
@@ -1,8 +1,8 @@
 # coding: utf-8
 
-VERSION = (0, 11, 30)
+VERSION = (0, 11, 35)
 CODENAME = "the grid"
-BUILD_DT = (2021, 7, 1)
+BUILD_DT = (2021, 7, 11)
 
 S_VERSION = ".".join(map(str, VERSION))
 S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)
```
```diff
@@ -10,13 +10,14 @@ import hashlib
 import threading
 
 from .__init__ import WINDOWS
-from .util import IMPLICATIONS, uncyg, undot, Pebkac, fsdec, fsenc, statdir, nuprint
+from .util import IMPLICATIONS, uncyg, undot, Pebkac, fsdec, fsenc, statdir
 
 
 class VFS(object):
     """single level in the virtual fs"""
 
-    def __init__(self, realpath, vpath, uread=[], uwrite=[], uadm=[], flags={}):
+    def __init__(self, log, realpath, vpath, uread=[], uwrite=[], uadm=[], flags={}):
+        self.log = log
         self.realpath = realpath  # absolute path on host filesystem
         self.vpath = vpath  # absolute path in the virtual filesystem
         self.uread = uread  # users who can read this
@@ -62,6 +63,7 @@ class VFS(object):
             return self.nodes[name].add(src, dst)
 
         vn = VFS(
+            self.log,
             os.path.join(self.realpath, name) if self.realpath else None,
             "{}/{}".format(self.vpath, name).lstrip("/"),
             self.uread,
@@ -79,7 +81,7 @@ class VFS(object):
 
         # leaf does not exist; create and keep permissions blank
         vp = "{}/{}".format(self.vpath, dst).lstrip("/")
-        vn = VFS(src, vp)
+        vn = VFS(self.log, src, vp)
         vn.dbv = self.dbv or self
         self.nodes[dst] = vn
         return vn
@@ -181,7 +183,7 @@ class VFS(object):
         """return user-readable [fsdir,real,virt] items at vpath"""
         virt_vis = {}  # nodes readable by user
         abspath = self.canonical(rem)
-        real = list(statdir(nuprint, scandir, lstat, abspath))
+        real = list(statdir(self.log, scandir, lstat, abspath))
         real.sort()
         if not rem:
             for name, vn2 in sorted(self.nodes.items()):
@@ -208,8 +210,13 @@ class VFS(object):
             rem, uname, scandir, incl_wo=False, lstat=lstat
         )
 
-        if seen and not fsroot.startswith(seen[-1]) and fsroot in seen:
-            print("bailing from symlink loop,\n {}\n {}".format(seen[-1], fsroot))
+        if (
+            seen
+            and (not fsroot.startswith(seen[-1]) or fsroot == seen[-1])
+            and fsroot in seen
+        ):
+            m = "bailing from symlink loop,\n prev: {}\n curr: {}\n from: {}/{}"
+            self.log("vfs.walk", m.format(seen[-1], fsroot, self.vpath, rem), 3)
             return
 
         seen = seen[:] + [fsroot]
```
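The reworked condition keeps the same underlying idea: `walk()` carries a `seen` list of resolved filesystem roots and bails when it lands on a root it has already visited without simply descending further into the previous one, which indicates a symlink loop. A stripped-down sketch of that guard using plain directory recursion instead of the VFS (hypothetical helper, not copyparty code):

```python
# sketch: bail out of a directory walk when a symlink loops back into a path we already visited
import os


def walk(path, seen=()):
    fsroot = os.path.realpath(path)

    # same shape as the condition in the diff: we are not simply descending
    # into the previous root, yet the resolved path was already visited
    if seen and (not fsroot.startswith(seen[-1]) or fsroot == seen[-1]) and fsroot in seen:
        print("bailing from symlink loop: {} -> {}".format(seen[-1], fsroot))
        return

    seen = tuple(seen) + (fsroot,)
    yield fsroot
    for name in sorted(os.listdir(fsroot)):
        ap = os.path.join(fsroot, name)
        if os.path.isdir(ap):  # follows symlinks, which is what makes loops possible
            for sub in walk(ap, seen):
                yield sub
```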
```diff
@@ -242,6 +249,10 @@ class VFS(object):
         if flt:
             flt = {k: True for k in flt}
 
+        f1 = "{0}.hist{0}up2k.".format(os.sep)
+        f2a = os.sep + "dir.txt"
+        f2b = "{0}.hist{0}".format(os.sep)
+
         for vpath, apath, files, rd, vd in self.walk(
             "", vrem, [], uname, dots, scandir, False
         ):
@@ -275,7 +286,11 @@ class VFS(object):
                 del vd[x]
 
             # up2k filetring based on actual abspath
-            files = [x for x in files if "{0}.hist{0}up2k.".format(os.sep) not in x[1]]
+            files = [
+                x
+                for x in files
+                if f1 not in x[1] and (not x[1].endswith(f2a) or f2b not in x[1])
+            ]
 
             for f in [{"vp": v, "ap": a, "st": n[1]} for v, a, n in files]:
                 yield f
@@ -466,7 +481,7 @@ class AuthSrv(object):
                     )
                 except:
                     m = "\n\033[1;31m\nerror in config file {} on line {}:\n\033[0m"
-                    print(m.format(cfg_fn, self.line_ctr))
+                    self.log(m.format(cfg_fn, self.line_ctr), 1)
                     raise
 
         # case-insensitive; normalize
@@ -482,10 +497,10 @@ class AuthSrv(object):
 
         if not mount:
             # -h says our defaults are CWD at root and read/write for everyone
-            vfs = VFS(os.path.abspath("."), "", ["*"], ["*"])
+            vfs = VFS(self.log_func, os.path.abspath("."), "", ["*"], ["*"])
         elif "" not in mount:
             # there's volumes but no root; make root inaccessible
-            vfs = VFS(None, "")
+            vfs = VFS(self.log_func, None, "")
             vfs.flags["d2d"] = True
 
         maxdepth = 0
@@ -497,7 +512,13 @@ class AuthSrv(object):
             if dst == "":
                 # rootfs was mapped; fully replaces the default CWD vfs
                 vfs = VFS(
-                    mount[dst], dst, mread[dst], mwrite[dst], madm[dst], mflags[dst]
+                    self.log_func,
+                    mount[dst],
+                    dst,
+                    mread[dst],
+                    mwrite[dst],
+                    madm[dst],
+                    mflags[dst],
                 )
                 continue
 
@@ -780,7 +801,7 @@ class AuthSrv(object):
             msg = [x[1] for x in files]
 
             if msg:
-                nuprint("\n".join(msg))
+                self.log("\n" + "\n".join(msg))
 
             if n_bads and flag_p:
                 raise Exception("found symlink leaving volume, and strict is set")
```
```diff
@@ -4,17 +4,11 @@ from __future__ import print_function, unicode_literals
 import time
 import threading
 
-from .__init__ import PY2, WINDOWS, VT100
 from .broker_util import try_exec
 from .broker_mpw import MpWorker
 from .util import mp
 
 
-if PY2 and not WINDOWS:
-    from multiprocessing.reduction import ForkingPickler
-    from StringIO import StringIO as MemesIO  # pylint: disable=import-error
-
-
 class BrokerMp(object):
     """external api; manages MpWorkers"""
 
@@ -42,7 +36,6 @@ class BrokerMp(object):
             proc.q_yield = q_yield
             proc.nid = n
             proc.clients = {}
-            proc.workload = 0
 
             thr = threading.Thread(
                 target=self.collector, args=(proc,), name="mp-collector"
@@ -53,13 +46,6 @@ class BrokerMp(object):
             self.procs.append(proc)
             proc.start()
 
-        if not self.args.q:
-            thr = threading.Thread(
-                target=self.debug_load_balancer, name="mp-dbg-loadbalancer"
-            )
-            thr.daemon = True
-            thr.start()
-
     def shutdown(self):
         self.log("broker", "shutting down")
         for n, proc in enumerate(self.procs):
@@ -89,20 +75,6 @@ class BrokerMp(object):
             if dest == "log":
                 self.log(*args)
 
-            elif dest == "workload":
-                with self.mutex:
-                    proc.workload = args[0]
-
-            elif dest == "httpdrop":
-                addr = args[0]
-
-                with self.mutex:
-                    del proc.clients[addr]
-                    if not proc.clients:
-                        proc.workload = 0
-
-                self.hub.tcpsrv.num_clients.add(-1)
-
             elif dest == "retq":
                 # response from previous ipc call
                 with self.retpend_mutex:
@@ -128,38 +100,9 @@ class BrokerMp(object):
         returns a Queue object which eventually contains the response if want_retval
           (not-impl here since nothing uses it yet)
         """
-        if dest == "httpconn":
-            sck, addr = args
-            sck2 = sck
-            if PY2:
-                buf = MemesIO()
-                ForkingPickler(buf).dump(sck)
-                sck2 = buf.getvalue()
-
-            proc = sorted(self.procs, key=lambda x: x.workload)[0]
-            proc.q_pend.put([0, dest, [sck2, addr]])
-
-            with self.mutex:
-                proc.clients[addr] = 50
-                proc.workload += 50
+        if dest == "listen":
+            for p in self.procs:
+                p.q_pend.put([0, dest, [args[0], len(self.procs)]])
 
         else:
             raise Exception("what is " + str(dest))
-
-    def debug_load_balancer(self):
-        fmt = "\033[1m{}\033[0;36m{:4}\033[0m "
-        if not VT100:
-            fmt = "({}{:4})"
-
-        last = ""
-        while self.procs:
-            msg = ""
-            for proc in self.procs:
-                msg += fmt.format(len(proc.clients), proc.workload)
-
-            if msg != last:
-                last = msg
-                with self.hub.log_mutex:
-                    print(msg)
-
-            time.sleep(0.1)
```
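The new `"listen"` branch replaces the old per-connection handoff, where each accepted socket was pickled and sent to the least-loaded worker, with a simpler model: every worker process receives the shared listening socket once and calls `accept()` on it directly, letting the kernel distribute connections. A minimal standalone sketch of that model (hypothetical port and worker count, assumes the default fork start method on POSIX, not the broker itself):

```python
# sketch: several worker processes accepting on one shared listening socket
import socket
import multiprocessing as mp


def worker(srv_sck, wid):
    # each process accepts straight off the inherited socket;
    # the kernel decides which process gets each connection
    while True:
        sck, addr = srv_sck.accept()
        sck.sendall(b"handled by worker %d\n" % wid)
        sck.close()


if __name__ == "__main__":
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 8086))  # hypothetical port
    srv.listen(64)

    # fork-based inheritance; on windows a listening socket cannot be passed like this
    for n in range(2):  # hypothetical worker count
        mp.Process(target=worker, args=(srv, n), daemon=True).start()

    mp.Event().wait()  # park the parent; the workers do all the accepting
```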
```diff
@@ -3,18 +3,13 @@ from __future__ import print_function, unicode_literals
 from copyparty.authsrv import AuthSrv
 
 import sys
-import time
 import signal
 import threading
 
-from .__init__ import PY2, WINDOWS
 from .broker_util import ExceptionalQueue
 from .httpsrv import HttpSrv
 from .util import FAKE_MP
 
-if PY2 and not WINDOWS:
-    import pickle  # nosec
-
 
 class MpWorker(object):
     """one single mp instance"""
@@ -25,10 +20,11 @@ class MpWorker(object):
         self.args = args
         self.n = n
 
+        self.log = self._log_disabled if args.q and not args.lo else self._log_enabled
+
         self.retpend = {}
         self.retpend_mutex = threading.Lock()
         self.mutex = threading.Lock()
-        self.workload_thr_alive = False
 
         # we inherited signal_handler from parent,
         # replace it with something harmless
@@ -40,7 +36,6 @@ class MpWorker(object):
 
         # instantiate all services here (TODO: inheritance?)
         self.httpsrv = HttpSrv(self, True)
-        self.httpsrv.disconnect_func = self.httpdrop
 
         # on winxp and some other platforms,
         # use thr.join() to block all signals
@@ -53,15 +48,15 @@ class MpWorker(object):
             # print('k')
             pass
 
-    def log(self, src, msg, c=0):
+    def _log_enabled(self, src, msg, c=0):
         self.q_yield.put([0, "log", [src, msg, c]])
 
+    def _log_disabled(self, src, msg, c=0):
+        pass
+
     def logw(self, msg, c=0):
         self.log("mp{}".format(self.n), msg, c)
 
-    def httpdrop(self, addr):
-        self.q_yield.put([0, "httpdrop", [addr]])
-
     def main(self):
         while True:
             retq_id, dest, args = self.q_pend.get()
@@ -73,24 +68,8 @@ class MpWorker(object):
                 sys.exit(0)
                 return
 
-            elif dest == "httpconn":
-                sck, addr = args
-                if PY2:
-                    sck = pickle.loads(sck)  # nosec
-
-                if self.args.log_conn:
-                    self.log("%s %s" % addr, "|%sC-qpop" % ("-" * 4,), c="1;30")
-
-                self.httpsrv.accept(sck, addr)
-
-                with self.mutex:
-                    if not self.workload_thr_alive:
-                        self.workload_thr_alive = True
-                        thr = threading.Thread(
-                            target=self.thr_workload, name="mpw-workload"
-                        )
-                        thr.daemon = True
-                        thr.start()
+            elif dest == "listen":
+                self.httpsrv.listen(args[0], args[1])
 
             elif dest == "retq":
                 # response from previous ipc call
@@ -114,16 +93,3 @@ class MpWorker(object):
 
         self.q_yield.put([retq_id, dest, args])
         return retq
-
-    def thr_workload(self):
-        """announce workloads to MpSrv (the mp controller / loadbalancer)"""
-        # avoid locking in extract_filedata by tracking difference here
-        while True:
-            time.sleep(0.2)
-            with self.mutex:
-                if self.httpsrv.num_clients() == 0:
-                    # no clients rn, termiante thread
-                    self.workload_thr_alive = False
-                    return
-
-            self.q_yield.put([0, "workload", [self.httpsrv.workload]])
```
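The `self.log = self._log_disabled if args.q and not args.lo else self._log_enabled` line picks the logging implementation once at startup instead of checking `-q` on every call. A tiny sketch of that pattern on its own (hypothetical class, not the worker):

```python
# sketch: choose a no-op or real logger once, instead of branching on every log call
class Worker:
    def __init__(self, quiet, logfile=None):
        # quiet without a logfile means nobody will ever see the messages,
        # so skip the formatting and i/o entirely
        self.log = self._log_disabled if quiet and not logfile else self._log_enabled

    def _log_enabled(self, src, msg, c=0):
        print("[{}] {}".format(src, msg))

    def _log_disabled(self, src, msg, c=0):
        pass


Worker(quiet=True).log("mp0", "this goes nowhere")  # no formatting, no i/o
Worker(quiet=False).log("mp0", "this is printed")
```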
```diff
@@ -3,7 +3,6 @@ from __future__ import print_function, unicode_literals
 
 import threading
 
-from .authsrv import AuthSrv
 from .httpsrv import HttpSrv
 from .broker_util import ExceptionalQueue, try_exec
 
@@ -21,7 +20,6 @@ class BrokerThr(object):
 
         # instantiate all services here (TODO: inheritance?)
         self.httpsrv = HttpSrv(self)
-        self.httpsrv.disconnect_func = self.httpdrop
 
     def shutdown(self):
         # self.log("broker", "shutting down")
@@ -29,12 +27,8 @@ class BrokerThr(object):
             pass
 
     def put(self, want_retval, dest, *args):
-        if dest == "httpconn":
-            sck, addr = args
-            if self.args.log_conn:
-                self.log("%s %s" % addr, "|%sC-qpop" % ("-" * 4,), c="1;30")
-
-            self.httpsrv.accept(sck, addr)
+        if dest == "listen":
+            self.httpsrv.listen(args[0], 1)
 
         else:
             # new ipc invoking managed service in hub
@@ -51,6 +45,3 @@ class BrokerThr(object):
         retq = ExceptionalQueue(1)
         retq.put(rv)
         return retq
-
-    def httpdrop(self, addr):
-        self.hub.tcpsrv.num_clients.add(-1)
```
```diff
@@ -13,15 +13,12 @@ import ctypes
 from datetime import datetime
 import calendar
 
-from .__init__ import E, PY2, WINDOWS, ANYWIN
+from .__init__ import E, PY2, WINDOWS, ANYWIN, unicode
 from .util import *  # noqa # pylint: disable=unused-wildcard-import
 from .authsrv import AuthSrv
 from .szip import StreamZip
 from .star import StreamTar
 
-if not PY2:
-    unicode = str
-
 
 NO_CACHE = {"Cache-Control": "no-cache"}
 NO_STORE = {"Cache-Control": "no-store; max-age=0"}
@@ -98,9 +95,13 @@ class HttpCli(object):
             try:
                 self.mode, self.req, self.http_ver = headerlines[0].split(" ")
             except:
-                raise Pebkac(400, "bad headers:\n" + "\n".join(headerlines))
+                msg = " ]\n#[ ".join(headerlines)
+                raise Pebkac(400, "bad headers:\n#[ " + msg + " ]")
 
         except Pebkac as ex:
+            self.mode = "GET"
+            self.req = "[junk]"
+            self.http_ver = "HTTP/1.1"
             # self.log("pebkac at httpcli.run #1: " + repr(ex))
             self.keepalive = self._check_nonfatal(ex)
             self.loud_reply(unicode(ex), status=ex.code)
@@ -478,15 +479,17 @@ class HttpCli(object):
         addr = self.ip.replace(":", ".")
         fn = "put-{:.6f}-{}.bin".format(time.time(), addr)
         path = os.path.join(fdir, fn)
+        if self.args.nw:
+            path = os.devnull
 
         with open(fsenc(path), "wb", 512 * 1024) as f:
-            post_sz, _, sha_b64 = hashcopy(self.conn, reader, f)
+            post_sz, _, sha_b64 = hashcopy(reader, f)
 
-        vfs, vrem = vfs.get_dbv(rem)
-        self.conn.hsrv.broker.put(
-            False, "up2k.hash_file", vfs.realpath, vfs.flags, vrem, fn
-        )
+        if not self.args.nw:
+            vfs, vrem = vfs.get_dbv(rem)
+            self.conn.hsrv.broker.put(
+                False, "up2k.hash_file", vfs.realpath, vfs.flags, vrem, fn
+            )
 
         return post_sz, sha_b64, remains, path
 
@@ -607,13 +610,14 @@ class HttpCli(object):
                 os.makedirs(fsenc(dst))
             except OSError as ex:
                 self.log("makedirs failed [{}]".format(dst))
-                if ex.errno == 13:
-                    raise Pebkac(500, "the server OS denied write-access")
+                if not os.path.isdir(fsenc(dst)):
+                    if ex.errno == 13:
+                        raise Pebkac(500, "the server OS denied write-access")
 
                 if ex.errno == 17:
                     raise Pebkac(400, "some file got your folder name")
 
                 raise Pebkac(500, min_ex())
             except:
                 raise Pebkac(500, min_ex())
 
@@ -711,7 +715,7 @@ class HttpCli(object):
 
         with open(fsenc(path), "rb+", 512 * 1024) as f:
             f.seek(cstart[0])
-            post_sz, _, sha_b64 = hashcopy(self.conn, reader, f)
+            post_sz, _, sha_b64 = hashcopy(reader, f)
 
         if sha_b64 != chash:
             raise Pebkac(
@@ -878,7 +882,7 @@ class HttpCli(object):
                 with ren_open(fname, "wb", 512 * 1024, **open_args) as f:
                     f, fname = f["orz"]
                     self.log("writing to {}/{}".format(fdir, fname))
-                    sz, sha512_hex, _ = hashcopy(self.conn, p_data, f)
+                    sz, sha512_hex, _ = hashcopy(p_data, f)
                     if sz == 0:
                         raise Pebkac(400, "empty files in post")
 
@@ -1061,7 +1065,7 @@ class HttpCli(object):
             raise Pebkac(400, "expected body, got {}".format(p_field))
 
         with open(fsenc(fp), "wb", 512 * 1024) as f:
-            sz, sha512, _ = hashcopy(self.conn, p_data, f)
+            sz, sha512, _ = hashcopy(p_data, f)
 
         new_lastmod = os.stat(fsenc(fp)).st_mtime
         new_lastmod3 = int(new_lastmod * 1000)
@@ -1251,8 +1255,7 @@ class HttpCli(object):
             if use_sendfile:
                 remains = sendfile_kern(lower, upper, f, self.s)
             else:
-                actor = self.conn if self.is_mp else None
-                remains = sendfile_py(lower, upper, f, self.s, actor)
+                remains = sendfile_py(lower, upper, f, self.s)
 
             if remains > 0:
                 logmsg += " \033[31m" + unicode(upper - remains) + "\033[0m"
@@ -1470,7 +1473,7 @@ class HttpCli(object):
             raise Pebkac(500, x)
 
     def tx_stack(self):
-        if not self.readable or not self.writable:
+        if not self.avol:
             raise Pebkac(403, "not admin")
 
         if self.args.no_stack:
@@ -1560,7 +1563,9 @@ class HttpCli(object):
             raise Pebkac(404)
 
         if self.readable:
-            if rem.startswith(".hist/up2k."):
+            if rem.startswith(".hist/up2k.") or (
+                rem.endswith("/dir.txt") and rem.startswith(".hist/th/")
+            ):
                 raise Pebkac(403)
 
         is_dir = stat.S_ISDIR(st.st_mode)
```
```diff
@@ -45,7 +45,6 @@ class HttpConn(object):
         self.stopping = False
         self.nreq = 0
         self.nbyte = 0
-        self.workload = 0
         self.u2idx = None
         self.log_func = hsrv.log
         self.lf_url = re.compile(self.args.lf_url) if self.args.lf_url else None
@@ -184,11 +183,6 @@ class HttpConn(object):
         self.sr = Unrecv(self.s)
 
         while not self.stopping:
-            if self.is_mp:
-                self.workload += 50
-                if self.workload >= 2 ** 31:
-                    self.workload = 100
-
             self.nreq += 1
             cli = HttpCli(self)
             if not cli.run():
```
```diff
@@ -4,8 +4,8 @@ from __future__ import print_function, unicode_literals
 import os
 import sys
 import time
+import math
 import base64
-import struct
 import socket
 import threading
 
@@ -26,9 +26,15 @@ except ImportError:
     )
     sys.exit(1)
 
-from .__init__ import E, MACOS
+from .__init__ import E, PY2, MACOS
+from .util import spack, min_ex
 from .httpconn import HttpConn
 
+if PY2:
+    import Queue as queue
+else:
+    import queue
+
 
 class HttpSrv(object):
     """
@@ -43,12 +49,19 @@ class HttpSrv(object):
         self.log = broker.log
         self.asrv = broker.asrv
 
-        self.disconnect_func = None
+        self.name = "httpsrv-i{:x}".format(os.getpid())
         self.mutex = threading.Lock()
+        self.stopping = False
 
-        self.clients = {}
-        self.workload = 0
-        self.workload_thr_alive = False
+        self.tp_nthr = 0  # actual
+        self.tp_ncli = 0  # fading
+        self.tp_time = None  # latest worker collect
+        self.tp_q = None if self.args.no_htp else queue.LifoQueue()
+
+        self.srvs = []
+        self.ncli = 0  # exact
+        self.clients = {}  # laggy
+        self.nclimax = 0
         self.cb_ts = 0
         self.cb_v = 0
 
```
```diff
@@ -65,10 +78,105 @@ class HttpSrv(object):
         else:
             self.cert_path = None
 
+        if self.tp_q:
+            self.start_threads(4)
+
+            t = threading.Thread(target=self.thr_scaler)
+            t.daemon = True
+            t.start()
+
+    def start_threads(self, n):
+        self.tp_nthr += n
+        if self.args.log_htp:
+            self.log(self.name, "workers += {} = {}".format(n, self.tp_nthr), 6)
+
+        for _ in range(n):
+            thr = threading.Thread(
+                target=self.thr_poolw,
+                name="httpsrv-poolw",
+            )
+            thr.daemon = True
+            thr.start()
+
+    def stop_threads(self, n):
+        self.tp_nthr -= n
+        if self.args.log_htp:
+            self.log(self.name, "workers -= {} = {}".format(n, self.tp_nthr), 6)
+
+        for _ in range(n):
+            self.tp_q.put(None)
+
+    def thr_scaler(self):
+        while True:
+            time.sleep(2 if self.tp_ncli else 30)
+            with self.mutex:
+                self.tp_ncli = max(self.ncli, self.tp_ncli - 2)
+                if self.tp_nthr > self.tp_ncli + 8:
+                    self.stop_threads(4)
+
+    def listen(self, sck, nlisteners):
+        self.srvs.append(sck)
+        self.nclimax = math.ceil(self.args.nc * 1.0 / nlisteners)
+        t = threading.Thread(target=self.thr_listen, args=(sck,))
+        t.daemon = True
+        t.start()
+
+    def thr_listen(self, srv_sck):
+        """listens on a shared tcp server"""
+        ip, port = srv_sck.getsockname()
+        fno = srv_sck.fileno()
+        msg = "subscribed @ {}:{} f{}".format(ip, port, fno)
+        self.log(self.name, msg)
+        while not self.stopping:
+            if self.args.log_conn:
+                self.log(self.name, "|%sC-ncli" % ("-" * 1,), c="1;30")
+
+            if self.ncli >= self.nclimax:
+                self.log(self.name, "at connection limit; waiting", 3)
+                while self.ncli >= self.nclimax:
+                    time.sleep(0.1)
+
+            if self.args.log_conn:
+                self.log(self.name, "|%sC-acc1" % ("-" * 2,), c="1;30")
+
+            try:
+                sck, addr = srv_sck.accept()
+            except (OSError, socket.error) as ex:
+                self.log(self.name, "accept({}): {}".format(fno, ex), c=6)
+                time.sleep(0.02)
+                continue
+
+            if self.args.log_conn:
+                m = "|{}C-acc2 \033[0;36m{} \033[3{}m{}".format(
+                    "-" * 3, ip, port % 8, port
+                )
+                self.log("%s %s" % addr, m, c="1;30")
+
+            self.accept(sck, addr)
+
     def accept(self, sck, addr):
         """takes an incoming tcp connection and creates a thread to handle it"""
-        if self.args.log_conn:
-            self.log("%s %s" % addr, "|%sC-cthr" % ("-" * 5,), c="1;30")
+        now = time.time()
+
+        if self.tp_time and now - self.tp_time > 300:
+            self.tp_q = None
+
+        if self.tp_q:
+            self.tp_q.put((sck, addr))
+            with self.mutex:
+                self.ncli += 1
+                self.tp_time = self.tp_time or now
+                self.tp_ncli = max(self.tp_ncli, self.ncli + 1)
+                if self.tp_nthr < self.ncli + 4:
+                    self.start_threads(8)
+            return
+
+        if not self.args.no_htp:
+            m = "looks like the httpserver threadpool died; please make an issue on github and tell me the story of how you pulled that off, thanks and dog bless\n"
+            self.log(self.name, m, 1)
+
+        with self.mutex:
+            self.ncli += 1
 
         thr = threading.Thread(
             target=self.thr_client,
```
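The block above swaps the old per-connection workload counters for a lazily-scaled thread pool: `accept()` pushes `(socket, addr)` tuples onto a LifoQueue, `start_threads`/`stop_threads` grow and shrink the pool (a `None` task tells one idle worker to exit), `thr_scaler` trims workers when traffic fades, and `listen()` caps concurrent clients per listening socket at `ceil(nc / nlisteners)`, which is why the nginx notes earlier want `-nc` to match `worker_connections`. A stripped-down sketch of the same mechanics with a hypothetical callable task instead of an http connection:

```python
# sketch: a LifoQueue-backed worker pool that grows on demand and shrinks when idle
import math
import queue
import threading
import time


class Pool:
    def __init__(self, nc=64, nlisteners=1):
        self.q = queue.LifoQueue()
        self.mutex = threading.Lock()
        self.nthr = 0  # workers alive
        self.busy = 0  # tasks in flight
        self.climax = math.ceil(nc * 1.0 / nlisteners)  # per-listener cap, like -nc
        self.start_threads(4)
        threading.Thread(target=self.scaler, daemon=True).start()

    def start_threads(self, n):
        self.nthr += n
        for _ in range(n):
            threading.Thread(target=self.worker, daemon=True).start()

    def stop_threads(self, n):
        self.nthr -= n
        for _ in range(n):
            self.q.put(None)  # poison pill; one idle worker exits per None

    def scaler(self):
        while True:
            time.sleep(2)
            with self.mutex:
                if self.nthr > self.busy + 8:
                    self.stop_threads(4)

    def submit(self, task):
        while self.busy >= self.climax:  # mirror the "at connection limit; waiting" loop
            time.sleep(0.1)
        with self.mutex:
            self.busy += 1
            if self.nthr < self.busy + 4:
                self.start_threads(8)
        self.q.put(task)

    def worker(self):
        while True:
            task = self.q.get()
            if task is None:
                return
            try:
                task()  # hypothetical callable; the http server handles a socket here
            finally:
                with self.mutex:
                    self.busy -= 1


if __name__ == "__main__":
    p = Pool(nc=8)
    for i in range(5):
        p.submit(lambda i=i: print("task", i))
    time.sleep(1)
```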
@@ -78,11 +186,34 @@ class HttpSrv(object):
             thr.daemon = True
             thr.start()
 
-    def num_clients(self):
-        with self.mutex:
-            return len(self.clients)
+    def thr_poolw(self):
+        while True:
+            task = self.tp_q.get()
+            if not task:
+                break
+
+            with self.mutex:
+                self.tp_time = None
+
+            try:
+                sck, addr = task
+                me = threading.current_thread()
+                me.name = (
+                    "httpsrv-{}-{}".format(addr[0].split(".", 2)[-1][-6:], addr[1]),
+                )
+                self.thr_client(sck, addr)
+                me.name = "httpsrv-poolw"
+            except:
+                self.log(self.name, "thr_client: " + min_ex(), 3)
 
     def shutdown(self):
+        self.stopping = True
+        for srv in self.srvs:
+            try:
+                srv.close()
+            except:
+                pass
+
         clients = list(self.clients.keys())
         for cli in clients:
             try:
@@ -90,7 +221,14 @@ class HttpSrv(object):
             except:
                 pass
 
-        self.log("httpsrv-n", "ok bye")
+        if self.tp_q:
+            self.stop_threads(self.tp_nthr)
+            for _ in range(10):
+                time.sleep(0.05)
+                if self.tp_q.empty():
+                    break
+
+        self.log("httpsrv-i" + str(os.getpid()), "ok bye")
 
     def thr_client(self, sck, addr):
         """thread managing one tcp client"""
@@ -100,25 +238,15 @@ class HttpSrv(object):
         with self.mutex:
             self.clients[cli] = 0
 
-            if self.is_mp:
-                self.workload += 50
-                if not self.workload_thr_alive:
-                    self.workload_thr_alive = True
-                    thr = threading.Thread(
-                        target=self.thr_workload, name="httpsrv-workload"
-                    )
-                    thr.daemon = True
-                    thr.start()
-
         fno = sck.fileno()
         try:
             if self.args.log_conn:
-                self.log("%s %s" % addr, "|%sC-crun" % ("-" * 6,), c="1;30")
+                self.log("%s %s" % addr, "|%sC-crun" % ("-" * 4,), c="1;30")
 
             cli.run()
 
         except (OSError, socket.error) as ex:
-            if ex.errno not in [10038, 10054, 107, 57, 9]:
+            if ex.errno not in [10038, 10054, 107, 57, 49, 9]:
                 self.log(
                     "%s %s" % addr,
                     "run({}): {}".format(fno, ex),
@@ -128,7 +256,7 @@ class HttpSrv(object):
         finally:
             sck = cli.s
             if self.args.log_conn:
-                self.log("%s %s" % addr, "|%sC-cdone" % ("-" * 7,), c="1;30")
+                self.log("%s %s" % addr, "|%sC-cdone" % ("-" * 5,), c="1;30")
 
             try:
                 fno = sck.fileno()
@@ -152,35 +280,7 @@ class HttpSrv(object):
         finally:
             with self.mutex:
                 del self.clients[cli]
-                if self.disconnect_func:
-                    self.disconnect_func(addr)  # pylint: disable=not-callable
-
-    def thr_workload(self):
-        """indicates the python interpreter workload caused by this HttpSrv"""
-        # avoid locking in extract_filedata by tracking difference here
-        while True:
-            time.sleep(0.2)
-            with self.mutex:
-                if not self.clients:
-                    # no clients rn, termiante thread
-                    self.workload_thr_alive = False
-                    self.workload = 0
-                    return
-
-            total = 0
-            with self.mutex:
-                for cli in self.clients.keys():
-                    now = cli.workload
-                    delta = now - self.clients[cli]
-                    if delta < 0:
-                        # was reset in HttpCli to prevent overflow
-                        delta = now
-
-                    total += delta
-                    self.clients[cli] = now
-
-            self.workload = total
+                self.ncli -= 1
 
     def cachebuster(self):
         if time.time() - self.cb_ts < 1:
@@ -199,7 +299,7 @@ class HttpSrv(object):
         except:
             pass
 
-        v = base64.urlsafe_b64encode(struct.pack(">xxL", int(v)))
+        v = base64.urlsafe_b64encode(spack(b">xxL", int(v)))
         self.cb_v = v.decode("ascii")[-4:]
         self.cb_ts = time.time()
         return self.cb_v
@@ -7,12 +7,9 @@ import json
 import shutil
 import subprocess as sp
 
-from .__init__ import PY2, WINDOWS
+from .__init__ import PY2, WINDOWS, unicode
 from .util import fsenc, fsdec, uncyg, REKOBO_LKEY
 
-if not PY2:
-    unicode = str
-
 
 def have_ff(cmd):
     if PY2:
@@ -5,11 +5,12 @@ import re
 import os
 import sys
 import time
+import shlex
 import threading
 from datetime import datetime, timedelta
 import calendar
 
-from .__init__ import PY2, WINDOWS, MACOS, VT100
+from .__init__ import E, PY2, WINDOWS, MACOS, VT100
 from .util import mp
 from .authsrv import AuthSrv
 from .tcpsrv import TcpSrv
@@ -28,14 +29,18 @@ class SvcHub(object):
     put() can return a queue (if want_reply=True) which has a blocking get() with the response.
     """
 
-    def __init__(self, args):
+    def __init__(self, args, argv, printed):
         self.args = args
+        self.argv = argv
+        self.logf = None
+
         self.ansi_re = re.compile("\033\\[[^m]*m")
         self.log_mutex = threading.Lock()
         self.next_day = 0
 
         self.log = self._log_disabled if args.q else self._log_enabled
+        if args.lo:
+            self._setup_logfile(printed)
 
         # initiate all services to manage
         self.asrv = AuthSrv(self.args, self.log, False)
@@ -69,6 +74,52 @@ class SvcHub(object):
 
         self.broker = Broker(self)
 
+    def _logname(self):
+        dt = datetime.utcfromtimestamp(time.time())
+        fn = self.args.lo
+        for fs in "YmdHMS":
+            fs = "%" + fs
+            if fs in fn:
+                fn = fn.replace(fs, dt.strftime(fs))
+
+        return fn
+
+    def _setup_logfile(self, printed):
+        base_fn = fn = sel_fn = self._logname()
+        if fn != self.args.lo:
+            ctr = 0
+            # yup this is a race; if started sufficiently concurrently, two
+            # copyparties can grab the same logfile (considered and ignored)
+            while os.path.exists(sel_fn):
+                ctr += 1
+                sel_fn = "{}.{}".format(fn, ctr)
+
+            fn = sel_fn
+
+        try:
+            import lzma
+
+            lh = lzma.open(fn, "wt", encoding="utf-8", errors="replace", preset=0)
+
+        except:
+            import codecs
+
+            lh = codecs.open(fn, "w", encoding="utf-8", errors="replace")
+
+        lh.base_fn = base_fn
+
+        argv = [sys.executable] + self.argv
+        if hasattr(shlex, "quote"):
+            argv = [shlex.quote(x) for x in argv]
+        else:
+            argv = ['"{}"'.format(x) for x in argv]
+
+        msg = "[+] opened logfile [{}]\n".format(fn)
+        printed += msg
+        lh.write("t0: {:.3f}\nargv: {}\n\n{}".format(E.t0, " ".join(argv), printed))
+        self.logf = lh
+        print(msg, end="")
+
     def run(self):
         thr = threading.Thread(target=self.tcpsrv.run, name="svchub-main")
         thr.daemon = True
@@ -99,9 +150,36 @@ class SvcHub(object):
             print("nailed it", end="")
         finally:
             print("\033[0m")
+            if self.logf:
+                self.logf.close()
 
     def _log_disabled(self, src, msg, c=0):
-        pass
+        if not self.logf:
+            return
+
+        with self.log_mutex:
+            ts = datetime.utcfromtimestamp(time.time())
+            ts = ts.strftime("%Y-%m%d-%H%M%S.%f")[:-3]
+            self.logf.write("@{} [{}] {}\n".format(ts, src, msg))
+
+            now = time.time()
+            if now >= self.next_day:
+                self._set_next_day()
+
+    def _set_next_day(self):
+        if self.next_day and self.logf and self.logf.base_fn != self._logname():
+            self.logf.close()
+            self._setup_logfile("")
+
+        dt = datetime.utcfromtimestamp(time.time())
+
+        # unix timestamp of next 00:00:00 (leap-seconds safe)
+        day_now = dt.day
+        while dt.day == day_now:
+            dt += timedelta(hours=12)
+
+        dt = dt.replace(hour=0, minute=0, second=0)
+        self.next_day = calendar.timegm(dt.utctimetuple())
 
     def _log_enabled(self, src, msg, c=0):
         """handles logging from all components"""
@@ -110,14 +188,7 @@ class SvcHub(object):
         if now >= self.next_day:
             dt = datetime.utcfromtimestamp(now)
             print("\033[36m{}\033[0m\n".format(dt.strftime("%Y-%m-%d")), end="")
-
-            # unix timestamp of next 00:00:00 (leap-seconds safe)
-            day_now = dt.day
-            while dt.day == day_now:
-                dt += timedelta(hours=12)
-
-            dt = dt.replace(hour=0, minute=0, second=0)
-            self.next_day = calendar.timegm(dt.utctimetuple())
+            self._set_next_day()
 
         fmt = "\033[36m{} \033[33m{:21} \033[0m{}\n"
         if not VT100:
@@ -144,20 +215,20 @@ class SvcHub(object):
         except:
             print(msg.encode("ascii", "replace").decode(), end="")
 
+        if self.logf:
+            self.logf.write(msg)
+
     def check_mp_support(self):
         vmin = sys.version_info[1]
         if WINDOWS:
             msg = "need python 3.3 or newer for multiprocessing;"
-            if PY2:
-                # py2 pickler doesn't support winsock
-                return msg
-            elif vmin < 3:
+            if PY2 or vmin < 3:
                 return msg
         elif MACOS:
             return "multiprocessing is wonky on mac osx;"
         else:
-            msg = "need python 2.7 or 3.3+ for multiprocessing;"
-            if not PY2 and vmin < 3:
+            msg = "need python 3.3+ for multiprocessing;"
+            if PY2 or vmin < 3:
                 return msg
 
         try:
@@ -189,5 +260,5 @@ class SvcHub(object):
         if not err:
             return True
         else:
-            self.log("root", err)
+            self.log("svchub", err)
             return False
@@ -4,15 +4,14 @@ from __future__ import print_function, unicode_literals
 import os
 import time
 import zlib
-import struct
 from datetime import datetime
 
 from .sutil import errdesc
-from .util import yieldfile, sanitize_fn
+from .util import yieldfile, sanitize_fn, spack, sunpack
 
 
 def dostime2unix(buf):
-    t, d = struct.unpack("<HH", buf)
+    t, d = sunpack(b"<HH", buf)
 
     ts = (t & 0x1F) * 2
     tm = (t >> 5) & 0x3F
@@ -36,13 +35,13 @@ def unixtime2dos(ts):
 
     bd = ((dy - 1980) << 9) + (dm << 5) + dd
     bt = (th << 11) + (tm << 5) + ts // 2
-    return struct.pack("<HH", bt, bd)
+    return spack(b"<HH", bt, bd)
 
 
 def gen_fdesc(sz, crc32, z64):
     ret = b"\x50\x4b\x07\x08"
-    fmt = "<LQQ" if z64 else "<LLL"
-    ret += struct.pack(fmt, crc32, sz, sz)
+    fmt = b"<LQQ" if z64 else b"<LLL"
+    ret += spack(fmt, crc32, sz, sz)
     return ret
 
 
@@ -66,7 +65,7 @@ def gen_hdr(h_pos, fn, sz, lastmod, utf8, crc32, pre_crc):
     req_ver = b"\x2d\x00" if z64 else b"\x0a\x00"
 
     if crc32:
-        crc32 = struct.pack("<L", crc32)
+        crc32 = spack(b"<L", crc32)
     else:
         crc32 = b"\x00" * 4
 
@@ -87,14 +86,14 @@ def gen_hdr(h_pos, fn, sz, lastmod, utf8, crc32, pre_crc):
     # however infozip does actual sz and it even works on winxp
     # (same reasning for z64 extradata later)
     vsz = 0xFFFFFFFF if z64 else sz
-    ret += struct.pack("<LL", vsz, vsz)
+    ret += spack(b"<LL", vsz, vsz)
 
     # windows support (the "?" replace below too)
     fn = sanitize_fn(fn, ok="/")
     bfn = fn.encode("utf-8" if utf8 else "cp437", "replace").replace(b"?", b"_")
 
     z64_len = len(z64v) * 8 + 4 if z64v else 0
-    ret += struct.pack("<HH", len(bfn), z64_len)
+    ret += spack(b"<HH", len(bfn), z64_len)
 
     if h_pos is not None:
         # 2b comment, 2b diskno
@@ -106,12 +105,12 @@ def gen_hdr(h_pos, fn, sz, lastmod, utf8, crc32, pre_crc):
         ret += b"\x01\x00\x00\x00\xa4\x81"
 
         # 4b local-header-ofs
-        ret += struct.pack("<L", min(h_pos, 0xFFFFFFFF))
+        ret += spack(b"<L", min(h_pos, 0xFFFFFFFF))
 
     ret += bfn
 
     if z64v:
-        ret += struct.pack("<HH" + "Q" * len(z64v), 1, len(z64v) * 8, *z64v)
+        ret += spack(b"<HH" + b"Q" * len(z64v), 1, len(z64v) * 8, *z64v)
 
     return ret
 
@@ -136,7 +135,7 @@ def gen_ecdr(items, cdir_pos, cdir_end):
     need_64 = nitems == 0xFFFF or 0xFFFFFFFF in [csz, cpos]
 
     # 2b tnfiles, 2b dnfiles, 4b dir sz, 4b dir pos
-    ret += struct.pack("<HHLL", nitems, nitems, csz, cpos)
+    ret += spack(b"<HHLL", nitems, nitems, csz, cpos)
 
     # 2b comment length
     ret += b"\x00\x00"
@@ -163,7 +162,7 @@ def gen_ecdr64(items, cdir_pos, cdir_end):
 
     # 8b tnfiles, 8b dnfiles, 8b dir sz, 8b dir pos
     cdir_sz = cdir_end - cdir_pos
-    ret += struct.pack("<QQQQ", len(items), len(items), cdir_sz, cdir_pos)
+    ret += spack(b"<QQQQ", len(items), len(items), cdir_sz, cdir_pos)
 
     return ret
 
@@ -178,7 +177,7 @@ def gen_ecdr64_loc(ecdr64_pos):
     ret = b"\x50\x4b\x06\x07"
 
     # 4b cdisk, 8b start of ecdr64, 4b ndisks
-    ret += struct.pack("<LQL", 0, ecdr64_pos, 1)
+    ret += spack(b"<LQL", 0, ecdr64_pos, 1)
 
     return ret
 
@@ -2,11 +2,9 @@
 from __future__ import print_function, unicode_literals
 
 import re
-import time
 import socket
-import select
 
-from .util import chkcmd, Counter
+from .util import chkcmd
 
 
 class TcpSrv(object):
@@ -20,7 +18,6 @@ class TcpSrv(object):
         self.args = hub.args
         self.log = hub.log
 
-        self.num_clients = Counter()
         self.stopping = False
 
         ip = "127.0.0.1"
@@ -66,44 +63,13 @@ class TcpSrv(object):
         for srv in self.srv:
             srv.listen(self.args.nc)
             ip, port = srv.getsockname()
-            self.log("tcpsrv", "listening @ {0}:{1}".format(ip, port))
-
-        while not self.stopping:
-            if self.args.log_conn:
-                self.log("tcpsrv", "|%sC-ncli" % ("-" * 1,), c="1;30")
-
-            if self.num_clients.v >= self.args.nc:
-                time.sleep(0.1)
-                continue
-
-            if self.args.log_conn:
-                self.log("tcpsrv", "|%sC-acc1" % ("-" * 2,), c="1;30")
-
-            try:
-                # macos throws bad-fd
-                ready, _, _ = select.select(self.srv, [], [])
-            except:
-                ready = []
-                if not self.stopping:
-                    raise
-
-            for srv in ready:
-                if self.stopping:
-                    break
-
-                sck, addr = srv.accept()
-                sip, sport = srv.getsockname()
-                if self.args.log_conn:
-                    self.log(
-                        "%s %s" % addr,
-                        "|{}C-acc2 \033[0;36m{} \033[3{}m{}".format(
-                            "-" * 3, sip, sport % 8, sport
-                        ),
-                        c="1;30",
-                    )
-
-                self.num_clients.add()
-                self.hub.broker.put(False, "httpconn", sck, addr)
+            fno = srv.fileno()
+            msg = "listening @ {}:{} f{}".format(ip, port, fno)
+            self.log("tcpsrv", msg)
+            if self.args.q:
+                print(msg)
+
+            self.hub.broker.put(False, "listen", srv)
 
     def shutdown(self):
         self.stopping = True
@@ -9,15 +9,11 @@ import hashlib
 import threading
 import subprocess as sp
 
-from .__init__ import PY2
+from .__init__ import PY2, unicode
 from .util import fsenc, runcmd, Queue, Cooldown, BytesIO, min_ex
 from .mtag import HAVE_FFMPEG, HAVE_FFPROBE, ffprobe
 
 
-if not PY2:
-    unicode = str
-
-
 HAVE_PIL = False
 HAVE_HEIF = False
 HAVE_AVIF = False
@@ -53,7 +49,7 @@ except:
 # https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html
 # ffmpeg -formats
 FMT_PIL = "bmp dib gif icns ico jpg jpeg jp2 jpx pcx png pbm pgm ppm pnm sgi tga tif tiff webp xbm dds xpm"
-FMT_FF = "av1 asf avi flv m4v mkv mjpeg mjpg mpg mpeg mpg2 mpeg2 h264 avc h265 hevc mov 3gp mp4 ts mpegts nut ogv ogm rm vob webm wmv"
+FMT_FF = "av1 asf avi flv m4v mkv mjpeg mjpg mpg mpeg mpg2 mpeg2 h264 avc mts h265 hevc mov 3gp mp4 ts mpegts nut ogv ogm rm vob webm wmv"
 
 if HAVE_HEIF:
     FMT_PIL += " heif heifs heic heics"
@@ -134,9 +130,10 @@ class ThumbSrv(object):
             msg += ", ".join(missing)
             self.log(msg, c=3)
 
-        t = threading.Thread(target=self.cleaner, name="thumb-cleaner")
-        t.daemon = True
-        t.start()
+        if self.args.th_clean:
+            t = threading.Thread(target=self.cleaner, name="thumb-cleaner")
+            t.daemon = True
+            t.start()
 
     def log(self, msg, c=0):
         self.log_func("thumb", msg, c)
@@ -103,13 +103,15 @@ class Up2k(object):
             self.deferred_init()
         else:
             t = threading.Thread(
-                target=self.deferred_init,
-                name="up2k-deferred-init",
+                target=self.deferred_init, name="up2k-deferred-init", args=(0.5,)
             )
             t.daemon = True
             t.start()
 
-    def deferred_init(self):
+    def deferred_init(self, wait=0):
+        if wait:
+            time.sleep(wait)
+
         all_vols = self.asrv.vfs.all_vols
         have_e2d = self.init_indexes(all_vols)
 
@@ -342,7 +344,15 @@ class Up2k(object):
             for k, v in flags.items()
         ]
         if a:
-            self.log(" ".join(sorted(a)) + "\033[0m")
+            vpath = "?"
+            for k, v in self.asrv.vfs.all_vols.items():
+                if v.realpath == ptop:
+                    vpath = k
+
+            if vpath:
+                vpath += "/"
+
+            self.log("/{} {}".format(vpath, " ".join(sorted(a))), "35")
 
         reg = {}
         path = os.path.join(histpath, "up2k.snap")
@@ -401,7 +411,7 @@ class Up2k(object):
         if WINDOWS:
             excl = [x.replace("/", "\\") for x in excl]
 
-        n_add = self._build_dir(dbw, top, set(excl), top, nohash)
+        n_add = self._build_dir(dbw, top, set(excl), top, nohash, [])
         n_rm = self._drop_lost(dbw[0], top)
         if dbw[1]:
             self.log("commit {} new files".format(dbw[1]))
@@ -409,11 +419,25 @@ class Up2k(object):
 
         return True, n_add or n_rm or do_vac
 
-    def _build_dir(self, dbw, top, excl, cdir, nohash):
+    def _build_dir(self, dbw, top, excl, cdir, nohash, seen):
+        rcdir = cdir
+        if not ANYWIN:
+            try:
+                # a bit expensive but worth
+                rcdir = os.path.realpath(cdir)
+            except:
+                pass
+
+        if rcdir in seen:
+            m = "bailing from symlink loop,\n prev: {}\n curr: {}\n from: {}"
+            self.log(m.format(seen[-1], rcdir, cdir), 3)
+            return 0
+
+        seen = seen + [cdir]
         self.pp.msg = "a{} {}".format(self.pp.n, cdir)
         histpath = self.asrv.vfs.histtab[top]
         ret = 0
-        g = statdir(self.log, not self.args.no_scandir, False, cdir)
+        g = statdir(self.log_func, not self.args.no_scandir, False, cdir)
         for iname, inf in sorted(g):
             abspath = os.path.join(cdir, iname)
             lmod = int(inf.st_mtime)
@@ -422,7 +446,7 @@ class Up2k(object):
                 if abspath in excl or abspath == histpath:
                     continue
                 # self.log(" dir: {}".format(abspath))
-                ret += self._build_dir(dbw, top, excl, abspath, nohash)
+                ret += self._build_dir(dbw, top, excl, abspath, nohash, seen)
             else:
                 # self.log("file: {}".format(abspath))
                 rp = abspath[len(top) + 1 :]
@@ -1047,8 +1071,9 @@ class Up2k(object):
             pdir = os.path.join(cj["ptop"], cj["prel"])
             job["name"] = self._untaken(pdir, cj["name"], now, cj["addr"])
             dst = os.path.join(job["ptop"], job["prel"], job["name"])
-            os.unlink(fsenc(dst))  # TODO ed pls
-            self._symlink(src, dst)
+            if not self.args.nw:
+                os.unlink(fsenc(dst))  # TODO ed pls
+                self._symlink(src, dst)
 
         if not job:
             job = {
@@ -42,6 +42,20 @@ else:
     from Queue import Queue  # pylint: disable=import-error,no-name-in-module
     from StringIO import StringIO as BytesIO
 
 
+try:
+    struct.unpack(b">i", b"idgi")
+    spack = struct.pack
+    sunpack = struct.unpack
+except:
+
+    def spack(f, *a, **ka):
+        return struct.pack(f.decode("ascii"), *a, **ka)
+
+    def sunpack(f, *a, **ka):
+        return struct.unpack(f.decode("ascii"), *a, **ka)
+
+
 surrogateescape.register_surrogateescape()
 FS_ENCODING = sys.getfilesystemencoding()
 if WINDOWS and PY2:
@@ -123,20 +137,6 @@ REKOBO_KEY = {
 REKOBO_LKEY = {k.lower(): v for k, v in REKOBO_KEY.items()}
 
 
-class Counter(object):
-    def __init__(self, v=0):
-        self.v = v
-        self.mutex = threading.Lock()
-
-    def add(self, delta=1):
-        with self.mutex:
-            self.v += delta
-
-    def set(self, absval):
-        with self.mutex:
-            self.v = absval
-
-
 class Cooldown(object):
     def __init__(self, maxage):
         self.maxage = maxage
@@ -231,7 +231,7 @@ def nuprint(msg):
 
 def rice_tid():
     tid = threading.current_thread().ident
-    c = struct.unpack(b"B" * 5, struct.pack(b">Q", tid)[-5:])
+    c = sunpack(b"B" * 5, spack(b">Q", tid)[-5:])
     return "".join("\033[1;37;48;5;{}m{:02x}".format(x, x) for x in c) + "\033[0m"
 
 
@@ -284,13 +284,11 @@ def alltrace():
 
 def min_ex():
     et, ev, tb = sys.exc_info()
-    tb = traceback.extract_tb(tb, 2)
-    ex = [
-        "{} @ {} <{}>: {}".format(fp.split(os.sep)[-1], ln, fun, txt)
-        for fp, ln, fun, txt in tb
-    ]
-    ex.append("{}: {}".format(et.__name__, ev))
-    return "\n".join(ex)
+    tb = traceback.extract_tb(tb)
+    fmt = "{} @ {} <{}>: {}"
+    ex = [fmt.format(fp.split(os.sep)[-1], ln, fun, txt) for fp, ln, fun, txt in tb]
+    ex.append("[{}] {}".format(et.__name__, ev))
+    return "\n".join(ex[-8:])
 
 
 @contextlib.contextmanager
@@ -904,16 +902,10 @@ def yieldfile(fn):
         yield buf
 
 
-def hashcopy(actor, fin, fout):
-    is_mp = actor.is_mp
+def hashcopy(fin, fout):
     hashobj = hashlib.sha512()
     tlen = 0
     for buf in fin:
-        if is_mp:
-            actor.workload += 1
-            if actor.workload > 2 ** 31:
-                actor.workload = 100
-
         tlen += len(buf)
         hashobj.update(buf)
         fout.write(buf)
@@ -924,15 +916,10 @@ def hashcopy(actor, fin, fout):
     return tlen, hashobj.hexdigest(), digest_b64
 
 
-def sendfile_py(lower, upper, f, s, actor=None):
+def sendfile_py(lower, upper, f, s):
     remains = upper - lower
     f.seek(lower)
     while remains > 0:
-        if actor:
-            actor.workload += 1
-            if actor.workload > 2 ** 31:
-                actor.workload = 100
-
         # time.sleep(0.01)
         buf = f.read(min(1024 * 32, remains))
         if not buf:
@@ -979,8 +966,7 @@ def statdir(logger, scandir, lstat, top):
                     try:
                         yield [fsdec(fh.name), fh.stat(follow_symlinks=not lstat)]
                     except Exception as ex:
-                        msg = "scan-stat: \033[36m{} @ {}"
-                        logger(msg.format(repr(ex), fsdec(fh.path)))
+                        logger(src, "[s] {} @ {}".format(repr(ex), fsdec(fh.path)), 6)
         else:
             src = "listdir"
             fun = os.lstat if lstat else os.stat
@@ -989,11 +975,10 @@ def statdir(logger, scandir, lstat, top):
                 try:
                     yield [fsdec(name), fun(abspath)]
                 except Exception as ex:
-                    msg = "list-stat: \033[36m{} @ {}"
-                    logger(msg.format(repr(ex), fsdec(abspath)))
+                    logger(src, "[s] {} @ {}".format(repr(ex), fsdec(abspath)), 6)
 
     except Exception as ex:
-        logger("{}: \033[31m{} @ {}".format(src, repr(ex), top))
+        logger(src, "{} @ {}".format(repr(ex), top), 1)
 
 
 def unescape_cookie(orig):
@@ -1035,7 +1020,7 @@ def guess_mime(url, fallback="application/octet-stream"):
     if ";" not in ret:
         if ret.startswith("text/") or ret.endswith("/javascript"):
            ret += "; charset=UTF-8"
 
     return ret
 
 
@@ -1070,10 +1055,7 @@ def gzip_orig_sz(fn):
    with open(fsenc(fn), "rb") as f:
        f.seek(-4, 2)
        rv = f.read(4)
-       try:
-           return struct.unpack(b"I", rv)[0]
-       except:
-           return struct.unpack("I", rv)[0]
+       return sunpack(b"I", rv)[0]
 
 
 def py_desc():
@@ -28,7 +28,8 @@ window.baguetteBox = (function () {
         isOverlayVisible = false,
         touch = {}, // start-pos
         touchFlag = false, // busy
-        regex = /.+\.(gif|jpe?g|png|webp)/i,
+        re_i = /.+\.(gif|jpe?g|png|webp)/i,
+        re_v = /.+\.(webm|mp4)/i,
         data = {}, // all galleries
         imagesElements = [],
         documentLastFocus = null;
@@ -96,10 +97,6 @@ window.baguetteBox = (function () {
         data[selector] = selectorData;
 
         [].forEach.call(galleryNodeList, function (galleryElement) {
-            if (userOptions && userOptions.filter) {
-                regex = userOptions.filter;
-            }
-
             var tagsNodeList = [];
             if (galleryElement.tagName === 'A') {
                 tagsNodeList = [galleryElement];
@@ -109,7 +106,7 @@ window.baguetteBox = (function () {
 
             tagsNodeList = [].filter.call(tagsNodeList, function (element) {
                 if (element.className.indexOf(userOptions && userOptions.ignoreClass) === -1) {
-                    return regex.test(element.href);
+                    return re_i.test(element.href) || re_v.test(element.href);
                 }
             });
             if (tagsNodeList.length === 0) {
@@ -209,24 +206,36 @@ window.baguetteBox = (function () {
         bindEvents();
     }
 
-    function keyDownHandler(event) {
-        switch (event.keyCode) {
-            case 37: // Left
-                showPreviousImage();
-                break;
-            case 39: // Right
-                showNextImage();
-                break;
-            case 27: // Esc
-                hideOverlay();
-                break;
-            case 36: // Home
-                showFirstImage(event);
-                break;
-            case 35: // End
-                showLastImage(event);
-                break;
-        }
+    function keyDownHandler(e) {
+        if (e.ctrlKey || e.altKey || e.metaKey || e.isComposing)
+            return;
+
+        var k = e.code + '';
+
+        if (k == "ArrowLeft" || k == "KeyJ")
+            showPreviousImage();
+        else if (k == "ArrowRight" || k == "KeyL")
+            showNextImage();
+        else if (k == "Escape")
+            hideOverlay();
+        else if (k == "Home")
+            showFirstImage(e);
+        else if (k == "End")
+            showLastImage(e);
+        else if (k == "Space" || k == "KeyP" || k == "KeyK")
+            playpause();
+        else if (k == "KeyU" || k == "KeyO")
+            relseek(k == "KeyU" ? -10 : 10);
+    }
+
+    function keyUpHandler(e) {
+        if (e.ctrlKey || e.altKey || e.metaKey || e.isComposing)
+            return;
+
+        var k = e.code + '';
+
+        if (k == "Space")
+            ev(e);
     }
 
     var passiveSupp = false;
@@ -325,6 +334,7 @@ window.baguetteBox = (function () {
         }
 
         bind(document, 'keydown', keyDownHandler);
+        bind(document, 'keyup', keyUpHandler);
         currentIndex = chosenImageIndex;
         touch = {
             count: 0,
@@ -366,6 +376,7 @@ window.baguetteBox = (function () {
 
     function hideOverlay(e) {
         ev(e);
+        playvid(false);
         if (options.noScrollbars) {
             document.documentElement.style.overflowY = 'auto';
             document.body.style.overflowY = 'auto';
@@ -375,6 +386,7 @@ window.baguetteBox = (function () {
         }
 
         unbind(document, 'keydown', keyDownHandler);
+        unbind(document, 'keyup', keyUpHandler);
         // Fade out and hide the overlay
         overlay.className = '';
         setTimeout(function () {
@@ -398,8 +410,8 @@ window.baguetteBox = (function () {
             return; // out-of-bounds or gallery dirty
         }
 
-        if (imageContainer.getElementsByTagName('img')[0]) {
-            // image is loaded, cb and bail
+        if (imageContainer.querySelector('img, video')) {
+            // was loaded, cb and bail
             if (callback) {
                 callback();
             }
@@ -408,7 +420,7 @@ window.baguetteBox = (function () {
 
         var imageElement = galleryItem.imageElement,
             imageSrc = imageElement.href,
-            thumbnailElement = imageElement.getElementsByTagName('img')[0],
+            thumbnailElement = imageElement.querySelector('img, video'),
             imageCaption = typeof options.captions === 'function' ?
                 options.captions.call(currentGallery, imageElement) :
                 imageElement.getAttribute('data-caption') || imageElement.title;
@@ -428,16 +440,20 @@ window.baguetteBox = (function () {
         }
         imageContainer.appendChild(figure);
 
-        var image = mknod('img');
-        image.onload = function () {
+        var is_vid = re_v.test(imageSrc),
+            image = mknod(is_vid ? 'video' : 'img');
+
+        clmod(imageContainer, 'vid', is_vid);
+
+        image.addEventListener(is_vid ? 'loadedmetadata' : 'load', function () {
             // Remove loader element
             var spinner = document.querySelector('#baguette-img-' + index + ' .baguetteBox-spinner');
             figure.removeChild(spinner);
-            if (!options.async && callback) {
+            if (!options.async && callback)
                 callback();
-            }
-        };
+        });
         image.setAttribute('src', imageSrc);
+        image.setAttribute('controls', 'controls');
         image.alt = thumbnailElement ? thumbnailElement.alt || '' : '';
         if (options.titleTag && imageCaption) {
             image.title = imageCaption;
@@ -498,6 +514,7 @@ window.baguetteBox = (function () {
             return false;
         }
 
+        playvid(false);
         currentIndex = index;
         loadImage(currentIndex, function () {
             preloadNext(currentIndex);
@@ -512,6 +529,26 @@ window.baguetteBox = (function () {
         return true;
     }
 
+    function vid() {
+        return imagesElements[currentIndex].querySelector('video');
+    }
+
+    function playvid(play) {
+        if (vid())
+            vid()[play ? 'play' : 'pause']();
+    }
+
+    function playpause() {
+        var v = vid();
+        if (v)
+            v[v.paused ? "play" : "pause"]();
+    }
+
+    function relseek(sec) {
+        if (vid())
+            vid().currentTime += sec;
+    }
+
     /**
      * Triggers the bounce animation
      * @param {('left'|'right')} direction - Direction of the movement
@@ -534,6 +571,8 @@ window.baguetteBox = (function () {
         } else {
             slider.style.transform = 'translate3d(' + offset + ',0,0)';
         }
+        playvid(false);
+        playvid(true);
     }
 
     function preloadNext(index) {
@@ -566,6 +605,7 @@ window.baguetteBox = (function () {
         unbindEvents();
         clearCachedData();
         unbind(document, 'keydown', keyDownHandler);
+        unbind(document, 'keyup', keyUpHandler);
         document.getElementsByTagName('body')[0].removeChild(ebi('baguetteBox-overlay'));
         data = {};
         currentGallery = [];
@@ -577,6 +617,8 @@ window.baguetteBox = (function () {
         show: show,
         showNext: showNextImage,
         showPrevious: showPreviousImage,
+        relseek: relseek,
+        playpause: playpause,
         hide: hideOverlay,
         destroy: destroyPlugin
     };
@@ -29,10 +29,10 @@ body {
     position: fixed;
     max-width: 34em;
     background: #222;
-    border: 0 solid #555;
+    border: 0 solid #777;
     overflow: hidden;
     margin-top: 1em;
-    padding: 0 1em;
+    padding: 0 1.3em;
     height: 0;
     opacity: .1;
     transition: opacity 0.14s, height 0.14s, padding 0.14s;
@@ -40,19 +40,31 @@ body {
     border-radius: .4em;
     z-index: 9001;
 }
+#tt.b {
+    padding: 0 2em;
+    border-radius: .5em;
+    box-shadow: 0 .2em 1em #000;
+}
 #tt.show {
-    padding: 1em;
+    padding: 1em 1.3em;
+    border-width: .4em 0;
     height: auto;
-    border-width: .2em 0;
     opacity: 1;
 }
+#tt.show.b {
+    padding: 1.5em 2em;
+    border-width: .5em 0;
+}
 #tt code {
     background: #3c3c3c;
-    padding: .2em .3em;
+    padding: .1em .3em;
     border-top: 1px solid #777;
     border-radius: .3em;
     font-family: monospace, monospace;
-    line-height: 2em;
+    line-height: 1.7em;
+}
+#tt em {
+    color: #f6a;
 }
 #path,
 #path * {
@@ -812,11 +824,13 @@ input.eq_gain {
     border-bottom: 1px solid #555;
 }
 #thumbs,
-#au_osd_cv {
+#au_osd_cv,
+#u2tdate {
     opacity: .3;
 }
 #griden.on+#thumbs,
-#au_os_ctl.on+#au_osd_cv {
+#au_os_ctl.on+#au_osd_cv,
+#u2turbo.on+#u2tdate {
     opacity: 1;
 }
 #ghead {
@@ -921,13 +935,16 @@ html.light {
 }
 html.light #tt {
     background: #fff;
-    border-color: #888;
+    border-color: #888 #000 #777 #000;
     box-shadow: 0 .3em 1em rgba(0,0,0,0.4);
 }
 html.light #tt code {
     background: #060;
     color: #fff;
 }
+html.light #tt em {
+    color: #d38;
+}
 html.light #ops,
 html.light .opbox,
 html.light #srch_form {
@@ -1157,7 +1174,8 @@ html.light #tree::-webkit-scrollbar {
     margin: 0;
     height: 100%;
 }
-#baguetteBox-overlay .full-image img {
+#baguetteBox-overlay .full-image img,
+#baguetteBox-overlay .full-image video {
     display: inline-block;
     width: auto;
     height: auto;
@@ -1166,6 +1184,9 @@ html.light #tree::-webkit-scrollbar {
     vertical-align: middle;
     box-shadow: 0 0 8px rgba(0, 0, 0, 0.6);
 }
+#baguetteBox-overlay .full-image video {
+    background: #333;
+}
 #baguetteBox-overlay .full-image figcaption {
     display: block;
     position: absolute;
@@ -133,6 +133,13 @@ ebi('op_cfg').innerHTML = (
     (have_zip ? (
         '<div><h3>folder download</h3><div id="arc_fmt"></div></div>\n'
     ) : '') +
+    '<div>\n' +
+    ' <h3>up2k switches</h3>\n' +
+    ' <div>\n' +
+    ' <a id="u2turbo" class="tgl btn ttb" href="#" tt="the yolo button, you probably DO NOT want to enable this:$N$Nuse this if you were uploading a huge amount of files and had to restart for some reason, and want to continue the upload ASAP$N$Nthis replaces the hash-check with a simple <em>"does this have the same filesize on the server?"</em> so if the file contents are different it will NOT be uploaded$N$Nyou should turn this off when the upload is done, and then "upload" the same files again to let the client verify them">turbo</a>\n' +
+    ' <a id="u2tdate" class="tgl btn ttb" href="#" tt="has no effect unless the turbo button is enabled$N$Nreduces the yolo factor by a tiny amount; checks whether the file timestamps on the server matches yours$N$Nshould <em>theoretically</em> catch most unfinished/corrupted uploads, but is not a substitute for doing a verification pass with turbo disabled afterwards">date-chk</a>\n' +
+    ' </div>\n' +
+    '</div>\n' +
     '<div><h3>key notation</h3><div id="key_notation"></div></div>\n' +
     '<div class="fill"><h3>hidden columns</h3><div id="hcols"></div></div>'
 );
@@ -1749,7 +1756,7 @@ document.onkeydown = function (e) {
     if (e.ctrlKey || e.altKey || e.metaKey || e.isComposing)
         return;
 
-    var k = (e.code + ''), pos = -1, n;
+    var k = e.code + '', pos = -1, n;
 
     if (e.shiftKey && k != 'KeyA' && k != 'KeyD')
         return;
@@ -35,7 +35,7 @@
 </table>
 </td></tr></table>
 <div class="btns">
-    <a href="{{ avol[0] }}?stack">dump stack</a>
+    <a href="/?stack">dump stack</a>
 </div>
 {%- endif %}
 
@@ -225,7 +225,7 @@ function U2pvis(act, btns) {
     this.hashed = function (fobj) {
         var fo = this.tab[fobj.n],
             nb = fo.bt * (++fo.nh / fo.cb.length),
-            p = this.perc(nb, 0, fobj.size, fobj.t1);
+            p = this.perc(nb, 0, fobj.size, fobj.t_hashing);
 
         fo.hp = '{0}%, {1}, {2} MB/s'.format(
             p[0].toFixed(2), p[1], p[2].toFixed(2)
@@ -248,7 +248,7 @@ function U2pvis(act, btns) {
         fo.cb[nchunk] = cbd;
         fo.bd += delta;
 
-        var p = this.perc(fo.bd, fo.bd0, fo.bt, fobj.t3);
+        var p = this.perc(fo.bd, fo.bd0, fo.bt, fobj.t_uploading);
         fo.hp = '{0}%, {1}, {2} MB/s'.format(
             p[0].toFixed(2), p[1], p[2].toFixed(2)
         );
@@ -308,6 +308,12 @@ function U2pvis(act, btns) {
             throw 42;
         }
 
+        //console.log("oldcat %s %d, newcat %s %d, head=%d, tail=%d, file=%d, act.old=%s, act.new=%s, bz_act=%s",
+        //    oldcat, this.ctr[oldcat],
+        //    newcat, this.ctr[newcat],
+        //    this.head, this.tail, nfile,
+        //    this.is_act(oldcat), this.is_act(newcat), bz_act);
+
         fo.in = newcat;
         this.ctr[oldcat]--;
         this.ctr[newcat]++;
@@ -319,7 +325,7 @@ function U2pvis(act, btns) {
             this.addrow(nfile);
         }
         else if (this.is_act(oldcat)) {
-            while (this.head < Math.min(this.tab.length, this.tail) && (this.head == nfile || !this.is_act(this.tab[this.head].in)))
+            while (this.head < Math.min(this.tab.length, this.tail) && this.precard[this.tab[this.head].in])
                 this.head++;
 
             if (!bz_act) {
@@ -327,9 +333,10 @@ function U2pvis(act, btns) {
                 tr.parentNode.removeChild(tr);
             }
         }
-        if (bz_act) {
+        else return;
+
+        if (bz_act)
             this.bzw();
-        }
     };
 
     this.bzw = function () {
@@ -343,7 +350,8 @@ function U2pvis(act, btns) {
 
         while (this.head - first > this.wsz) {
            var obj = ebi('f' + (first++));
-           obj.parentNode.removeChild(obj);
+           if (obj)
+               obj.parentNode.removeChild(obj);
        }
        while (last - this.tail < this.wsz && last < this.tab.length - 2) {
            var obj = ebi('f' + (++last));
@@ -376,6 +384,8 @@ function U2pvis(act, btns) {
 
     this.changecard = function (card) {
         this.act = card;
+        this.precard = has(["ok", "ng", "done"], this.act) ? {} : this.act == "bz" ? { "ok": 1, "ng": 1 } : { "ok": 1, "ng": 1, "bz": 1 };
+        this.postcard = has(["ok", "ng", "done"], this.act) ? { "bz": 1, "q": 1 } : this.act == "bz" ? { "q": 1 } : {};
         this.head = -1;
         this.tail = -1;
         var html = [];
@@ -390,22 +400,23 @@ function U2pvis(act, btns) {
             }
         }
         if (this.head == -1) {
-            var precard = has(["ok", "ng", "done"], this.act) ? {} : this.act == "bz" ? { "ok": 1, "ng": 1 } : { "ok": 1, "ng": 1, "bz": 1 },
-                postcard = has(["ok", "ng", "done"], this.act) ? { "bz": 1, "q": 1 } : this.act == "bz" ? { "q": 1 } : {};
-
             for (var a = 0; a < this.tab.length; a++) {
                 var rt = this.tab[a].in;
-                if (precard[rt]) {
+                if (this.precard[rt]) {
                     this.head = a + 1;
                     this.tail = a;
                 }
-                else if (postcard[rt]) {
+                else if (this.postcard[rt]) {
                     this.head = a;
                     this.tail = a - 1;
                     break;
                 }
             }
         }
+
+        if (this.head < 0)
+            this.head = 0;
+
         if (card == "bz") {
             for (var a = this.head - 1; a >= this.head - this.wsz && a >= 0; a--) {
                 html.unshift(this.genrow(a, true).replace(/><td>/, "><td>a "));
@@ -452,6 +463,8 @@ function U2pvis(act, btns) {
             that.changecard(newtab);
         };
     }
+
+    this.changecard(this.act);
 }
 
 
@@ -548,17 +561,21 @@ function up2k_init(subtle) {
         ask_up = bcfg_get('ask_up', true),
         flag_en = bcfg_get('flag_en', false),
         fsearch = bcfg_get('fsearch', false),
+        turbo = bcfg_get('u2turbo', false),
+        datechk = bcfg_get('u2tdate', true),
         fdom_ctr = 0,
         min_filebuf = 0;
 
     var st = {
         "files": [],
         "todo": {
+            "head": [],
             "hash": [],
             "handshake": [],
             "upload": []
         },
         "busy": {
+            "head": [],
             "hash": [],
             "handshake": [],
             "upload": []
@@ -569,6 +586,15 @@ function up2k_init(subtle) {
         }
     };
 
+    function push_t(arr, t) {
+        var sort = arr.length && arr[arr.length - 1].n > t.n;
+        arr.push(t);
+        if (sort)
+            arr.sort(function (a, b) {
+                return a.n < b.n ? -1 : 1;
+            });
+    }
+
     var pvis = new U2pvis("bz", '#u2cards');
 
     var bobslice = null;
@@ -612,7 +638,7 @@ function up2k_init(subtle) {
|
|||||||
}
|
}
|
||||||
else files = e.target.files;
|
else files = e.target.files;
|
||||||
|
|
||||||
if (!files || files.length == 0)
|
if (!files || !files.length)
|
||||||
return alert('no files selected??');
|
return alert('no files selected??');
|
||||||
|
|
||||||
more_one_file();
|
more_one_file();
|
||||||
@@ -715,8 +741,7 @@ function up2k_init(subtle) {
|
|||||||
|
|
||||||
pf.push(name);
|
pf.push(name);
|
||||||
dn.file(function (fobj) {
|
dn.file(function (fobj) {
|
||||||
var idx = pf.indexOf(name);
|
apop(pf, name);
|
||||||
pf.splice(idx, 1);
|
|
||||||
try {
|
try {
|
||||||
if (fobj.size > 0) {
|
if (fobj.size > 0) {
|
||||||
good.push([fobj, name]);
|
good.push([fobj, name]);
|
||||||
@@ -739,7 +764,7 @@ function up2k_init(subtle) {
}

function gotallfiles(good_files, bad_files) {
- if (bad_files.length > 0) {
+ if (bad_files.length) {
var ntot = bad_files.length + good_files.length,
msg = 'These {0} files (of {1} total) were skipped because they are empty:\n'.format(bad_files.length, ntot);

@@ -797,7 +822,10 @@ function up2k_init(subtle) {
], fobj.size, draw_each);

st.files.push(entry);
- st.todo.hash.push(entry);
+ if (turbo)
+ push_t(st.todo.head, entry);
+ else
+ push_t(st.todo.hash, entry);
}
if (!draw_each) {
pvis.drawcard("q");
@@ -837,15 +865,32 @@ function up2k_init(subtle) {
//

function handshakes_permitted() {
- var lim = multitask ? 1 : 0;
- if (lim <
- st.todo.upload.length +
- st.busy.upload.length)
+ if (!st.todo.handshake.length)
+ return true;
+
+ var t = st.todo.handshake[0],
+ cd = t.cooldown;
+
+ if (cd && cd - Date.now() > 0)
return false;

- var cd = st.todo.handshake.length ? st.todo.handshake[0].cooldown : 0;
- if (cd && cd - Date.now() > 0)
+ // keepalive or verify
+ if (t.keepalive ||
+ t.t_uploaded)
+ return true;
+
+ if (parallel_uploads <
+ st.busy.handshake.length)
+ return false;
+
+ if (st.busy.handshake.length)
+ for (var n = t.n - 1; n >= t.n - parallel_uploads && n >= 0; n--)
+ if (st.files[n].t_uploading)
+ return false;
+
+ if ((multitask ? 1 : 0) <
+ st.todo.upload.length +
+ st.busy.upload.length)
return false;

return true;
@@ -880,13 +925,16 @@ function up2k_init(subtle) {
clearTimeout(tto);
running = true;
while (window['vis_exh']) {
- var is_busy = 0 !=
- st.todo.hash.length +
- st.todo.handshake.length +
- st.todo.upload.length +
- st.busy.hash.length +
- st.busy.handshake.length +
- st.busy.upload.length;
+ var now = Date.now(),
+ is_busy = 0 !=
+ st.todo.head.length +
+ st.todo.hash.length +
+ st.todo.handshake.length +
+ st.todo.upload.length +
+ st.busy.head.length +
+ st.busy.hash.length +
+ st.busy.handshake.length +
+ st.busy.upload.length;

if (was_busy != is_busy) {
was_busy = is_busy;
@@ -897,7 +945,6 @@ function up2k_init(subtle) {

if (flag) {
if (is_busy) {
- var now = Date.now();
flag.take(now);
if (!flag.ours)
return defer();
@@ -909,43 +956,52 @@ function up2k_init(subtle) {

var mou_ikkai = false;

- if (st.busy.handshake.length > 0 &&
- st.busy.handshake[0].busied < Date.now() - 30 * 1000
+ if (st.busy.handshake.length &&
+ st.busy.handshake[0].t_busied < now - 30 * 1000
) {
console.log("retrying stuck handshake");
var t = st.busy.handshake.shift();
st.todo.handshake.unshift(t);
}

- if (st.todo.handshake.length > 0 &&
- st.busy.handshake.length == 0 && (
- st.todo.handshake[0].t4 || (
- handshakes_permitted() &&
- st.busy.upload.length < parallel_uploads
- )
- )
- ) {
- exec_handshake();
+ var nprev = -1;
+ for (var a = 0; a < st.todo.upload.length; a++) {
+ var nf = st.todo.upload[a].nfile;
+ if (nprev == nf)
+ continue;
+
+ nprev = nf;
+ var t = st.files[nf];
+ if (now - t.t_busied > 1000 * 30 &&
+ now - t.t_handshake > 1000 * (21600 - 1800)
+ ) {
+ apop(st.todo.handshake, t);
+ st.todo.handshake.unshift(t);
+ t.keepalive = true;
+ }
+ }
+
+ if (st.todo.head.length &&
+ st.busy.head.length < parallel_uploads) {
+ exec_head();
mou_ikkai = true;
}

if (handshakes_permitted() &&
- st.todo.handshake.length > 0 &&
- st.busy.handshake.length == 0 &&
- st.busy.upload.length < parallel_uploads) {
+ st.todo.handshake.length) {
exec_handshake();
mou_ikkai = true;
}

- if (st.todo.upload.length > 0 &&
+ if (st.todo.upload.length &&
st.busy.upload.length < parallel_uploads) {
exec_upload();
mou_ikkai = true;
}

if (hashing_permitted() &&
- st.todo.hash.length > 0 &&
- st.busy.hash.length == 0) {
+ st.todo.hash.length &&
+ !st.busy.hash.length) {
exec_hash();
mou_ikkai = true;
}
@@ -1080,7 +1136,7 @@ function up2k_init(subtle) {

if (handled) {
pvis.move(t.n, 'ng');
- st.busy.hash.splice(st.busy.hash.indexOf(t), 1);
+ apop(st.busy.hash, t);
st.bytes.uploaded += t.size;
return tasker();
}
@@ -1113,15 +1169,15 @@ function up2k_init(subtle) {
t.hash.push(hashtab[a]);
}

- t.t2 = Date.now();
+ t.t_hashed = Date.now();
if (t.n == 0 && window.location.hash == '#dbg') {
- var spd = (t.size / ((t.t2 - t.t1) / 1000.)) / (1024 * 1024.);
- alert('{0} ms, {1} MB/s\n'.format(t.t2 - t.t1, spd.toFixed(3)) + t.hash.join('\n'));
+ var spd = (t.size / ((t.t_hashed - t.t_hashing) / 1000.)) / (1024 * 1024.);
+ alert('{0} ms, {1} MB/s\n'.format(t.t_hashed - t.t_hashing, spd.toFixed(3)) + t.hash.join('\n'));
}

pvis.seth(t.n, 2, 'hashing done');
pvis.seth(t.n, 1, '📦 wait');
- st.busy.hash.splice(st.busy.hash.indexOf(t), 1);
+ apop(st.busy.hash, t);
st.todo.handshake.push(t);
tasker();
};
@@ -1144,10 +1200,57 @@ function up2k_init(subtle) {
}, 1);
};

- t.t1 = Date.now();
+ t.t_hashing = Date.now();
segm_next();
}

+ /////
+ ////
+ /// head
+ //
+
+ function exec_head() {
+ var t = st.todo.head.shift();
+ st.busy.head.push(t);
+
+ var xhr = new XMLHttpRequest();
+ xhr.onerror = function () {
+ console.log('head onerror, retrying', t);
+ apop(st.busy.head, t);
+ st.todo.head.unshift(t);
+ tasker();
+ };
+ function orz(e) {
+ var ok = false;
+ if (xhr.status == 200) {
+ var srv_sz = xhr.getResponseHeader('Content-Length'),
+ srv_ts = xhr.getResponseHeader('Last-Modified');
+
+ ok = t.size == srv_sz;
+ if (ok && datechk) {
+ srv_ts = new Date(srv_ts) / 1000;
+ ok = Math.abs(srv_ts - t.lmod) < 2;
+ }
+ }
+ apop(st.busy.head, t);
+ if (!ok)
+ return push_t(st.todo.hash, t);
+
+ t.done = true;
+ st.bytes.hashed += t.size;
+ st.bytes.uploaded += t.size;
+ pvis.seth(t.n, 1, 'YOLO');
+ pvis.seth(t.n, 2, "turbo'd");
+ pvis.move(t.n, 'ok');
+ };
+ xhr.onload = function (e) {
+ try { orz(e); } catch (ex) { vis_exh(ex + '', '', '', '', ex); }
+ };
+
+ xhr.open('HEAD', t.purl + t.name, true);
+ xhr.send();
+ }
+
/////
////
/// handshake
@@ -1155,30 +1258,41 @@ function up2k_init(subtle) {

function exec_handshake() {
var t = st.todo.handshake.shift(),
+ keepalive = t.keepalive,
me = Date.now();

st.busy.handshake.push(t);
- t.busied = me;
+ t.keepalive = undefined;
+ t.t_busied = me;
+
+ if (keepalive)
+ console.log("sending keepalive handshake", t);
+
var xhr = new XMLHttpRequest();
xhr.onerror = function () {
- if (t.busied != me) {
+ if (t.t_busied != me) {
console.log('zombie handshake onerror,', t);
return;
}
console.log('handshake onerror, retrying', t);
- st.busy.handshake.splice(st.busy.handshake.indexOf(t), 1);
+ apop(st.busy.handshake, t);
st.todo.handshake.unshift(t);
+ t.keepalive = keepalive;
tasker();
};
function orz(e) {
- if (t.busied != me) {
+ if (t.t_busied != me) {
console.log('zombie handshake onload,', t);
return;
}
if (xhr.status == 200) {
- var response = JSON.parse(xhr.responseText);
+ t.t_handshake = Date.now();
+ if (keepalive) {
+ apop(st.busy.handshake, t);
+ return;
+ }
+
+ var response = JSON.parse(xhr.responseText);
if (!response.name) {
var msg = '',
smsg = '';
@@ -1202,7 +1316,7 @@ function up2k_init(subtle) {
pvis.seth(t.n, 2, msg);
pvis.seth(t.n, 1, smsg);
pvis.move(t.n, smsg == '404' ? 'ng' : 'ok');
- st.busy.handshake.splice(st.busy.handshake.indexOf(t), 1);
+ apop(st.busy.handshake, t);
st.bytes.uploaded += t.size;
t.done = true;
tasker();
@@ -1244,31 +1358,41 @@ function up2k_init(subtle) {
var done = true,
msg = '🎷🐛';

- if (t.postlist.length > 0) {
+ if (t.postlist.length) {
+ var arr = st.todo.upload,
+ sort = arr.length && arr[arr.length - 1].nfile > t.n;
+
for (var a = 0; a < t.postlist.length; a++)
- st.todo.upload.push({
+ arr.push({
'nfile': t.n,
'npart': t.postlist[a]
});

msg = 'uploading';
done = false;
+
+ if (sort)
+ arr.sort(function (a, b) {
+ return a.nfile < b.nfile ? -1 :
+ /* */ a.nfile > b.nfile ? 1 :
+ a.npart < b.npart ? -1 : 1;
+ });
}
pvis.seth(t.n, 1, msg);
- st.busy.handshake.splice(st.busy.handshake.indexOf(t), 1);
+ apop(st.busy.handshake, t);

if (done) {
t.done = true;
st.bytes.uploaded += t.size - t.bytes_uploaded;
- var spd1 = (t.size / ((t.t2 - t.t1) / 1000.)) / (1024 * 1024.),
- spd2 = (t.size / ((t.t4 - t.t3) / 1000.)) / (1024 * 1024.);
+ var spd1 = (t.size / ((t.t_hashed - t.t_hashing) / 1000.)) / (1024 * 1024.),
+ spd2 = (t.size / ((t.t_uploaded - t.t_uploading) / 1000.)) / (1024 * 1024.);

pvis.seth(t.n, 2, 'hash {0}, up {1} MB/s'.format(
spd1.toFixed(2), spd2.toFixed(2)));

pvis.move(t.n, 'ok');
}
- else t.t4 = undefined;
+ else t.t_uploaded = undefined;

tasker();
}
@@ -1287,7 +1411,7 @@ function up2k_init(subtle) {
var penalty = rsp.replace(/.*rate-limit /, "").split(' ')[0];
console.log("rate-limit: " + penalty);
t.cooldown = Date.now() + parseFloat(penalty) * 1000;
- st.busy.handshake.splice(st.busy.handshake.indexOf(t), 1);
+ apop(st.busy.handshake, t);
st.todo.handshake.unshift(t);
return;
}
@@ -1306,7 +1430,7 @@ function up2k_init(subtle) {
pvis.seth(t.n, 2, err);
pvis.move(t.n, 'ng');

- st.busy.handshake.splice(st.busy.handshake.indexOf(t), 1);
+ apop(st.busy.handshake, t);
tasker();
return;
}
@@ -1347,8 +1471,8 @@ function up2k_init(subtle) {
var npart = upt.npart,
t = st.files[upt.nfile];

- if (!t.t3)
- t.t3 = Date.now();
+ if (!t.t_uploading)
+ t.t_uploading = Date.now();

pvis.seth(t.n, 1, "🚀 send");

@@ -1374,10 +1498,10 @@ function up2k_init(subtle) {
xhr.status, t.name) + (txt || "no further information"));
return;
}
- st.busy.upload.splice(st.busy.upload.indexOf(upt), 1);
- t.postlist.splice(t.postlist.indexOf(npart), 1);
- if (t.postlist.length == 0) {
- t.t4 = Date.now();
+ apop(st.busy.upload, upt);
+ apop(t.postlist, npart);
+ if (!t.postlist.length) {
+ t.t_uploaded = Date.now();
pvis.seth(t.n, 1, 'verifying');
st.todo.handshake.unshift(t);
}
@@ -1504,6 +1628,35 @@ function up2k_init(subtle) {
set_fsearch(!fsearch);
}

+ function tgl_turbo() {
+ turbo = !turbo;
+ bcfg_set('u2turbo', turbo);
+ draw_turbo();
+ }
+
+ function tgl_datechk() {
+ datechk = !datechk;
+ bcfg_set('u2tdate', datechk);
+ }
+
+ function draw_turbo() {
+ var msgu = '<p class="warn">WARNING: turbo enabled, <span> client may not detect and resume incomplete uploads; see turbo-button tooltip</span></p>',
+ msgs = '<p class="warn">WARNING: turbo enabled, <span> search may give false-positives; see turbo-button tooltip</span></p>',
+ msg = fsearch ? msgs : msgu,
+ omsg = fsearch ? msgu : msgs,
+ html = ebi('u2foot').innerHTML,
+ ohtml = html;
+
+ if (turbo && html.indexOf(msg) === -1)
+ html = html.replace(omsg, '') + msg;
+ else if (!turbo)
+ html = html.replace(msgu, '').replace(msgs, '');
+
+ if (html !== ohtml)
+ ebi('u2foot').innerHTML = html;
+ }
+ draw_turbo();
+
function set_fsearch(new_state) {
var fixed = false;

@@ -1541,6 +1694,7 @@ function up2k_init(subtle) {
}
catch (ex) { }

+ draw_turbo();
onresize();
}

@@ -1585,6 +1739,8 @@ function up2k_init(subtle) {
ebi('multitask').addEventListener('click', tgl_multitask, false);
ebi('ask_up').addEventListener('click', tgl_ask_up, false);
ebi('flag_en').addEventListener('click', tgl_flag_en, false);
+ ebi('u2turbo').addEventListener('click', tgl_turbo, false);
+ ebi('u2tdate').addEventListener('click', tgl_datechk, false);
var o = ebi('fsearch');
if (o)
o.addEventListener('click', tgl_fsearch, false);
@@ -215,9 +215,31 @@
color: #fff;
font-style: italic;
}
+ #u2foot .warn {
+ font-size: 1.3em;
+ padding: .5em .8em;
+ margin: 1em -.6em;
+ color: #f74;
+ background: #322;
+ border: 1px solid #633;
+ border-width: .1em 0;
+ text-align: center;
+ }
+ #u2foot .warn span {
+ color: #f86;
+ }
+ html.light #u2foot .warn {
+ color: #b00;
+ background: #fca;
+ border-color: #f70;
+ }
+ html.light #u2foot .warn span {
+ color: #930;
+ }
#u2foot span {
color: #999;
font-size: .9em;
+ font-weight: normal;
}
#u2footfoot {
margin-bottom: -1em;
@@ -389,6 +389,13 @@ function has(haystack, needle) {
}


+ function apop(arr, v) {
+ var ofs = arr.indexOf(v);
+ if (ofs !== -1)
+ arr.splice(ofs, 1);
+ }


function jcp(obj) {
return JSON.parse(JSON.stringify(obj));
}

@@ -507,8 +514,10 @@ var tt = (function () {

var pos = this.getBoundingClientRect(),
left = pos.left < window.innerWidth / 2,
- top = pos.top < window.innerHeight / 2;
+ top = pos.top < window.innerHeight / 2,
+ big = this.className.indexOf(' ttb') !== -1;

+ clmod(r.tt, 'b', big);
r.tt.style.top = top ? pos.bottom + 'px' : 'auto';
r.tt.style.bottom = top ? 'auto' : (window.innerHeight - pos.top) + 'px';
r.tt.style.left = left ? pos.left + 'px' : 'auto';
51 docs/hls.html (new file)
@@ -0,0 +1,51 @@
+ <!DOCTYPE html><html lang="en"><head>
+ <meta charset="utf-8">
+ <title>hls-test</title>
+ <meta http-equiv="X-UA-Compatible" content="IE=edge">
+ </head><body>
+
+ <video id="vid" controls></video>
+ <script src="hls.light.js"></script>
+ <script>
+
+ var video = document.getElementById('vid');
+ var hls = new Hls({
+ debug: true,
+ autoStartLoad: false
+ });
+ hls.loadSource('live/v.m3u8');
+ hls.attachMedia(video);
+ hls.on(Hls.Events.MANIFEST_PARSED, function() {
+ hls.startLoad(0);
+ });
+ hls.on(Hls.Events.MEDIA_ATTACHED, function() {
+ video.muted = true;
+ video.play();
+ });
+
+ /*
+ general good news:
+ - doesn't need fixed-length segments; ok to let x264 pick optimal keyframes and slice on those
+ - hls.js polls the m3u8 for new segments, scales the duration accordingly, seeking works great
+ - the sfx will grow by 66 KiB since that's how small hls.js can get, wait thats not good
+
+ # vod, creates m3u8 at the end, fixed keyframes, v bad
+ ffmpeg -hide_banner -threads 0 -flags -global_header -i ..\CowboyBebopMovie-OP1.webm -vf scale=1280:-4,format=yuv420p -ac 2 -c:a libopus -b:a 128k -c:v libx264 -preset slow -crf 24 -maxrate:v 5M -bufsize:v 10M -g 120 -keyint_min 120 -sc_threshold 0 -hls_time 4 -hls_playlist_type vod -hls_segment_filename v%05d.ts v.m3u8
+
+ # live, updates m3u8 as it goes, dynamic keyframes, streamable with hls.js
+ ffmpeg -hide_banner -threads 0 -flags -global_header -i ..\..\CowboyBebopMovie-OP1.webm -vf scale=1280:-4,format=yuv420p -ac 2 -c:a libopus -b:a 128k -c:v libx264 -preset slow -crf 24 -maxrate:v 5M -bufsize:v 10M -f segment -segment_list v.m3u8 -segment_format mpegts -segment_list_flags live v%05d.ts
+
+ # fmp4 (fragmented mp4), doesn't work with hls.js, gets duration 149:07:51 (536871s), probably the tkhd/mdhd 0xffffffff (timebase 8000? ok)
+ ffmpeg -re -hide_banner -threads 0 -flags +cgop -i ..\..\CowboyBebopMovie-OP1.webm -vf scale=1280:-4,format=yuv420p -ac 2 -c:a libopus -b:a 128k -c:v libx264 -preset slow -crf 24 -maxrate:v 5M -bufsize:v 10M -f segment -segment_list v.m3u8 -segment_format fmp4 -segment_list_flags live v%05d.mp4
+
+ # try 2, works, uses tempfiles for m3u8 updates, good, 6% smaller
+ ffmpeg -re -hide_banner -threads 0 -flags +cgop -i ..\..\CowboyBebopMovie-OP1.webm -vf scale=1280:-4,format=yuv420p -ac 2 -c:a libopus -b:a 128k -c:v libx264 -preset slow -crf 24 -maxrate:v 5M -bufsize:v 10M -f hls -hls_segment_type fmp4 -hls_list_size 0 -hls_segment_filename v%05d.mp4 v.m3u8
+
+ more notes
+ - adding -hls_flags single_file makes duration wack during playback (for both fmp4 and ts), ok once finalized and refreshed, gives no size reduction anyways
+ - bebop op has good keyframe spacing for testing hls.js, in particular it hops one seg back and immediately resumes if it hits eof with the explicit hls.startLoad(0); otherwise it jumps into the middle of a seg and becomes art
+ - can probably -c:v copy most of the time, is there a way to check for cgop? todo
+
+ */
+ </script>
+ </body></html>
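
(aside, not part of this commit: regarding the "-c:v copy / cgop" todo in the notes above, one rough way to eyeball keyframe spacing before trusting stream-copy is to dump per-frame picture types with ffprobe; the command below is a sketch with input.webm as a placeholder filename, and it only shows how far apart the I-frames are, it does not prove the GOPs are closed)

# prints one of I/P/B per video frame; long runs between "I" lines mean sparse keyframes, awkward to segment with -c:v copy
ffprobe -v error -select_streams v:0 -show_entries frame=pict_type -of csv=p=0 input.webm
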
@@ -30,6 +30,7 @@ class Cfg(Namespace):
c=c,
rproxy=0,
ed=False,
+ nw=False,
no_zip=False,
no_scandir=False,
no_sendfile=True,

@@ -17,7 +17,7 @@ from copyparty import util

class Cfg(Namespace):
def __init__(self, a=[], v=[], c=None):
- ex = {k: False for k in "e2d e2ds e2dsa e2t e2ts e2tsr".split()}
+ ex = {k: False for k in "nw e2d e2ds e2dsa e2t e2ts e2tsr".split()}
ex2 = {
"mtp": [],
"mte": "a",