fgtk
====

A set of misc tools to work with files and processes.

Various oldish helper scripts/binaries I wrote to help myself with day-to-day tasks.

License for all scripts is `WTFPL <http://www.wtfpl.net/>`__
(public domain-ish), feel free to just copy and use these in whatever way you like.


.. contents::
  :backlinks: none



Scripts
-------


[-root-] Various console/system things
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

File/dir/fs management
^^^^^^^^^^^^^^^^^^^^^^

File/link/dir and filesystem manipulation tools.

scim set
''''''''

A set of tools to bind a bunch of scattered files to a single path, with
completely unrelated internal path structure. Intended usage is to link
configuration files to scm-controlled path (repository).

Actually started as `cfgit project`_, but then evolved away from git vcs into a
more generic, not necessarily vcs-related, solution.

.. _cfgit project: http://fraggod.net/code/git/configit/

scim-ln
```````

Adds a new link (symlink or catref) to a manifest (links-list), also moving file
to scim-tree (repository) on fs-level.

scim
````

Main tool to check binding and metadata of files under scim-tree. Basic
operation boils down to two (optional) steps:

* Check files' metadata (uid, gid, mode, acl, posix capabilities) against
  metadata-list (``.scim_meta``, by default), if any, updating the metadata/list
  if requested, except for exclusion-patterns (``.scim_meta_exclude``).

* Check tree against links-list (``.scim_links``), warning about any files /
  paths in the same root, which aren't on the list, yet not in exclusion
  patterns (``.scim_links_exclude``).


fs
''

Complex tool for high-level fs operations. Reference is built-in.

Copy files, setting mode and ownership for the destination::

  fs -m600 -o root:wheel cp * /somepath

Temporarily (1hr) change attributes (i.e. to edit file from user's
editor)::

  fs -t3600 -m600 -o someuser expose /path/to/file

Copy ownership/mode from one file to another::

  fs cps /file1 /file2

fatrace-pipe
''''''''''''

fatrace_-based script to read filesystem write events via linux fanotify_ system
and match them against specific path and app name, sending matches to a FIFO
pipe.

Use-case is to, for example, setup watcher for development project dir changes,
sending instant "refresh" signals to something that renders the project or shows
changes' results otherwise.

FIFO is there because fanotify requires root privileges, and running some
potentially-rm-rf-/ ops as uid=0 is a damn bad idea. User's pid can read lines
from the fifo and react to these safely instead.

Example - run "make" on any change to ``~user/hatch/project`` files::

  (root) ~# fatrace-pipe ~user/hatch/project
  (user) project% xargs -in1 </path/to/fifo make

(-p to also echo events to stdout, "-f W" will filter file writes,
D - deletions, <> - renames)

findx
'''''

Wrapper around GNU find to accept paths at the end of argv if none are passed
before query.

Makes it somewhat more consistent with most other commands that accept options
and a list of paths (almost always after opts), but still warns when/if
reordering takes place.

No matter how many years I'm using that tool, still can't get used to typing
paths before query there, so decided to patch around that frustrating issue one
day.

patch-nspawn-ids
''''''''''''''''

Python3 script to "shift" or "patch" uid/gid values with new container-id
according to systemd-nspawn schema, i.e. set upper 16 bits to the specified
container-id value and keep lower 16 bits as the uid/gid inside the container.

Similar operation to what systemd-nspawn's --private-users-chown option does
(described in nspawn-patch-uid.c), but standalone, doesn't bother with ACLs or
checks on filesystem boundaries.

Main purpose is to update uids when migrating systemd-nspawn containers or
adding paths/filesystems to these without clobbering ownership info there.

Should be safe to use anywhere, as in most non-nspawn cases upper bits of
uid/gid are always zero, hence any changes can be easily reverted by running
this tool again with -c0.
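The id-shifting operation itself is trivial - a minimal sketch of the schema
described above (not the actual script, which also walks the filesystem and
applies this via chown):

```python
def patch_id(xid, container_id):
    # systemd-nspawn userns schema: upper 16 bits hold the container-id,
    # lower 16 bits hold the uid/gid as seen inside the container.
    return ((container_id & 0xFFFF) << 16) | (xid & 0xFFFF)

# uid=1000 inside container 21 maps to a unique host-level uid
host_uid = patch_id(1000, 21)
# Reverting is just patching again with container-id 0 (-c0)
assert patch_id(host_uid, 0) == 1000
```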

bindfs-idmap
''''''''''''

`bindfs <https://bindfs.org/>`_ wrapper script to setup id-mapping from uid of
the mountpoint to uid/gid of the source directory.

I.e. after ``bindfs-idmap /var/lib/machines/home/src-user ~dst-user/tmp``,
``~dst-user/tmp`` will be accessible to dst-user as if they were src-user, with
all operations proxied to src-user's dir.

Anything created under ``~dst-user/tmp`` will have uid/gid of the src dir.

Useful to allow temporary access to some uid's files in a local container to
user acc in a main namespace.

For long-term access (e.g. for some daemon), there probably are better options
than such bindfs hack - e.g. bind-mounts, shared uids/gids, ACLs, etc.

docker-ln
'''''''''

Simple bash script to symlink uppermost "merged" overlayfs layer of a running
docker-compose setup container, to allow easy access to temporary files there.

Useful for testing stuff without the need to rebuild and restart whole container
or a bunch of compose stuff after every one-liner tweak to some script that's
supposed to be running in there, or to experiment-with and debug things.

These paths are very likely to change between container and docker-compose
restarts for many reasons, so such symlinks are generally only valid during
container runtime, and script needs a re-run to update these too.

fast-disk-wipe
''''''''''''''

Very simple "write 512B, skip N * 512B, repeat" binary for wiping some block
device in a hurry.

Idea is not to erase every trace of data or to hide it, but just to make files
probabilistically unusable due to such junk blocks all over the place.

With low-enough intervals it should also corrupt filesystem pretty badly,
making metadata hard to access.

Fast loop of 512B writes to a device directly will likely hang that binary until
it's done, as that's how such direct I/O seems to work on linux.

Writes only stop when write() or lseek() starts returning errors, so using this
on some extendable file will result in it eating up all space available to it.

See head of the file for build and usage info.
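The write/skip pattern can be sketched in a few lines (the actual tool is a
small compiled binary writing to a block device; this python version is bounded
by file size to sidestep the file-extension caveat above):

```python
def fast_wipe(path, skip_blocks=4, bs=512):
    # Overwrite bs bytes, seek forward skip_blocks * bs, repeat -
    # scattering junk blocks all over the place, as described above.
    junk = b'\xaa' * bs
    with open(path, 'r+b') as dev:
        size = dev.seek(0, 2)  # stop at current size, don't extend file
        dev.seek(0)
        while dev.tell() < size:
            dev.write(junk[:min(bs, size - dev.tell())])
            dev.seek(skip_blocks * bs, 1)
```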



Generic file contents manglers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Things that manipulate arbitrary file contents.

repr
''''

Ever needed to check if file has newlines or BOM in it, yet every editor is
user-friendly by default and hides these from actual file contents?

One fix is hexdump or switching to binary mode, but these are usually terrible
for looking at text, and tend to display all non-ASCII as "." instead of nicer
\\r \\t \\n ... escapes, not to mention unicode chars.

This trivial script prints each line in a file via python3's repr(), which is
usually very nice, has none of the above issues and doesn't dump byte codes on
you for anything it can interpret as char/codepoint or some neat escape code.

Has opts for text/byte mode and stripping "universal newlines" (see newline= in
built-in open() func).

Can also do encoding/newline conversion via -c option, as iconv can't do BOM or
newlines, and sometimes you just want "MS utf-8 mode" (``repr -c utf-8-sig+r``).
Using that with +i flag as e.g. ``repr -c utf-8-sig+ri file1 file2 ...``
converts encoding+newlines+BOM for files in-place at no extra hassle.
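The core trick is just repr() per line in bytes mode, something like (a sketch -
the actual script has text-mode, newline-stripping and conversion options):

```python
def repr_lines(path):
    # Read raw bytes so BOM, \r and other invisibles survive into repr()
    with open(path, 'rb') as src:
        return [repr(line) for line in src]
```

A UTF-8 BOM and CRLF line-ending then show up as ``b'\xef\xbb\xbfhello\r\n'``
instead of being silently hidden by an editor.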

color
'''''

Outputs terminal color sequences, making important output more distinctive.

Also can be used to interleave "tail -f" of several logfiles in the same
terminal::

  % t -f /var/log/app1.log | color red - &
  % t -f /var/log/app2.log | color green - &
  % t -f /var/log/app2.log | color blue - &

Or to get color-escape-magic for your bash script: ``color red bold p``

resolve-hostnames
'''''''''''''''''

Script (py3) to find all specified (either directly, or by regexp) hostnames and
replace these with corresponding IP addresses, resolved through getaddrinfo(3).

Examples::

  % cat cjdroute.conf
  ... "fraggod.net:21987": { ... },
      "localhost:21987": { ... },
      "fraggod.net:12345": { ... }, ...

  % resolve-hostnames fraggod.net localhost < cjdroute.conf
  ... "192.168.0.11:21987": { ... },
      "127.0.0.1:21987": { ... },
      "192.168.0.11:12345": { ... }, ...

  % resolve-hostnames -m '"(?P<host>[\w.]+):\d+"' < cjdroute.conf
  % resolve-hostnames fraggod.net:12345 < cjdroute.conf
  % resolve-hostnames -a inet6 fraggod.net localhost < cjdroute.conf
  ...

  % cat nftables.conf
  define set.gw.ipv4 = { !ipv4.name1.local, !ipv4.name2.local }
  define set.gw.ipv6 = { !ipv6.name1.local, !ipv6.name2.local }
  ...
  # Will crash nft-0.6 because it treats names in anonymous sets as AF_INET (ipv4 only)

  % resolve-hostnames -rum '!(\S+\.local)\b' -f nftables.conf
  define set.gw.ipv4 = { 10.12.34.1, 10.12.34.2 }
  define set.gw.ipv6 = { fd04::1, fd04::2 }
  ...

Useful as a conf-file pre-processor for tools that cannot handle names properly
(e.g. introduce ambiguity, can't deal with ipv4/ipv6, use weird resolvers, do it
dynamically, etc) or should not be allowed to handle these, or to convert lists
of names (in some arbitrary format) to IP addresses, and such.

Has all sorts of failure-handling and getaddrinfo-control cli options, can
resolve port/protocol names as well.
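Core operation boils down to a regexp substitution with getaddrinfo(3) lookups,
roughly like this (a simplified sketch, without the script's failure-handling
and getaddrinfo-control options):

```python
import re, socket

def resolve_hostnames(text, names, af=socket.AF_INET):
    # Swap "host:port" for "ip:port" whenever host is in the specified set
    def sub(m):
        host, port = m.groups()
        if host not in names:
            return m.group(0)
        addr = socket.getaddrinfo(host, None, af, socket.SOCK_STREAM)[0][4][0]
        return f'{addr}:{port}'
    return re.sub(r'([\w.-]+):(\d+)', sub, text)
```

E.g. ``resolve_hostnames('"localhost:21987"', {'localhost'})`` swaps in the
resolved loopback address while leaving the port intact.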

resolve-conf
''''''''''''

Python-3/Jinja2 script to produce a text file from a template, focused
specifically on templating configuration files, somewhat similar to
"resolve-hostnames" above or templating provided by ansible/saltstack.

Jinja2 env for template has following filters and values:

- ``dns(host [, af, proto, sock, default, force_unique=True])`` filter/global.

  getaddrinfo(3) wrapper to resolve ``host`` (name or address) with optional
  parameters to a single address, raising exception if it's non-unique by default.

  af/proto/sock values can be either enum value names (without AF/SOL/SOCK
  prefix) or integers.

- ``hosts`` - /etc/hosts as a mapping.

  For example, hosts-file line ``1.2.3.4 sub.host.example.org`` will produce
  following mapping (represented as yaml)::

    sub.host.example.org: 1.2.3.4
    host.example.org:
      sub: 1.2.3.4
    org:
      example:
        host:
          sub: 1.2.3.4

  | Can be used as reliable dns/network-independent names.
  | ``--hosts-opts`` cli option allows some tweaks wrt how that file is parsed.
  | See also HostsNode object for various helper methods to lookup those.

- ``iface`` - current network interfaces and IPv4/IPv6 addresses assigned there
  (fetched from libc getifaddrs via ctypes).

  Example value structure (as yaml)::

    enp1s0:
      - 10.0.0.134
      - fd00::134
      - 2001:470:1f0b:11de::134
      - fe80::c646:19ff:fe64:632f
    enp2s7:
      - 10.0.1.1
    lo:
      - 127.0.0.1
      - ::1
    ip_vti0: []

  Probably a good idea to use this stuff only when IPs are static and get
  assigned strictly before templating.

- ``{% comment_out_if value[, comment-prefix] %}...{% comment_out_end %}``

  Custom template block to prefix each non-empty line within it with specified
  string (defaults to "#") if value is not false-y.

  Can be used when format doesn't have block comments, but it's still desirable
  to keep disabled things in dst file (e.g. for manual tinkering) instead of
  using if-blocks around these, or to make specific lines easier to uncomment manually.

- ``it`` - itertools, ``_v``/``v_``/``_v_`` - global funcs for adding spaces
  before/after/around non-empty strings.

- Whatever is loaded from ``--conf-file/--conf-dir`` (JSON/YAML files), if specified.
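The nested ``hosts`` mapping above can be built along these lines (a sketch of
the idea only - the actual HostsNode object has more lookup helpers and
``--hosts-opts`` tweaks):

```python
def hosts_entries(line):
    # One /etc/hosts line -> flat + nested name keys, as in the yaml example
    addr, *names = line.split()
    entries = {}
    for name in names:
        labels = name.split('.')
        for i in range(len(labels)):
            node = addr
            for lb in labels[:i]:  # wrap stripped prefix labels, innermost first
                node = {lb: node}
            entries['.'.join(labels[i:])] = node
    return entries
```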

Use-case is a simple conf-file pre-processor for autonomous templating on
service startup with a minimal toolbox on top of jinja2, without huge dep-tree
or any other requirements and complexity, that is not scary to run from
``ExecStartPre=`` line as root.

temp-patch
''''''''''

Tool to temporarily modify (patch) a file - until reboot or for a specified
amount of time. Uses bind-mounts from tmpfs to make sure file will be reverted
to the original state eventually.

Useful to e.g. patch ``/etc/hosts`` with (pre-defined) stuff from LAN on a
laptop (so these changes will be reverted on reboot), or a notification filter
file for a short "busy!" time period (with a time limit, so it'll auto-revert
after), or stuff like that.

Even though dst file is mounted with "-o ro" by default (there's "-w" option to
disable that), linux doesn't seem to care about that option and mounts the thing
as "rw" anyway, so "chmod a-w" gets run on temp file instead to prevent
accidental modification (that can be lost).

There're also "-t" and "-m" flags to control timestamps during the whole
process.

term-pipe
'''''''''

Py3 script with various terminal input/output piping helpers and tools.

Has multiple modes for different use-cases, collected in same script mostly
because they're pretty simple and not worth remembering separate ones.

out-paste
`````````

Disables terminal echo and outputs line-buffered stdin to stdout.

Example use-case can be grepping through huge multiline strings
(e.g. webpage source) pasted into terminal, i.e.::

  % term-pipe | grep -o '\<error\>.*'

Redirection like ">/tmp/errors.log" can be added at the end there.

Check options of this subcommand for rate-limiting and some other tweaks.

yaml-to-pretty-json
'''''''''''''''''''

Converts yaml files to an indented json, which is a bit more readable and
editable by hand than the usual compact one-liner serialization.

Due to yaml itself being json superset, can be used to convert json to
pretty-json as well.

yaml-flatten
''''''''''''

Converts yaml/json files to a flat "key: value" lines.

Nested keys are flattened to a dot-separated "level1.level2.level3" keys,
replacing dots, spaces and colons there, to avoid confusing level separators
with the keys themselves.

Values are also processed to always be one-liners, handling long values
and empty lists/dicts and such in a readable manner too.

Output is intended for a human reader, to easily see value paths and such,
and definitely can't be converted back to yaml or any kind of data safely.
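The key-flattening logic looks roughly like this (a sketch for mappings only -
the script also handles lists, long values and such):

```python
def flatten(data, parent=''):
    # Nested mapping -> {"level1.level2.level3": value}, with dots,
    # spaces and colons in the keys themselves replaced by underscores
    out = {}
    for k, v in data.items():
        k = str(k).translate(str.maketrans('. :', '___'))
        key = f'{parent}.{k}' if parent else k
        if isinstance(v, dict) and v:
            out.update(flatten(v, key))
        else:
            out[key] = v
    return out
```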

hz
''

Same thing as the common "head" tool, but works with \\x00 (aka null character,
null byte, NUL, ␀, \\0, \\z, \\000, \\u0000, %00, ^@) delimiters.

Can be done with putting "tr" in the pipeline before and after "head", but this
one is probably less fugly.

Allows replacing input null-bytes with newlines in the output
(--replace-with-newlines option) and vice-versa.

Common use-case probably has something to do with filenames and xargs, e.g.::

  % find -type f -print0 | shuf -z | hz -10 | xargs -0 some-cool-command
  % ls -1 | hz -z | xargs -0 some-other-command

I have "h" as an alias for "head" in shells, so "head -z" (if there were such
option) would be aliased neatly to "hz", hence the script name.

Defaults to reading ALL lines, not just arbitrary number (like 10, which is
default for regular "head")!
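Core of it is just "head" logic over \x00-delimited chunks, something like this
sketch (the actual script streams stdin and has more options):

```python
def hz(data, n=None, delim=b'\0'):
    # "head" over delim-separated items; n=None reads ALL of them,
    # matching the script's default (unlike regular head's 10 lines)
    items = data.split(delim)
    items.pop()  # trailing bytes after the last delimiter
    if n is not None:
        items = items[:n]
    return b''.join(item + delim for item in items)
```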

liac
''''

"Log Interleaver And Colorizer" python script.

.. figure:: http://blog.fraggod.net/images/liac_interleaved_colorized_output.jpg
   :alt: interleaved_and_colorized_output_image

Reads lines from multiple files, ordering them by the specified field in the
output (default - first field, e.g. ISO8601 timestamp) and outputs each with
(optional) unique-filename-part prefix and unique (ansi-terminal, per-file)
color.

Most useful for figuring out sequence of events from multiple timestamped logs.

To have safely-rotated logs with nice timestamps from any arbitrary command's
output, something like ``stdbuf -oL <command> | svlogd -r _ -ttt <log-dir>``
can be used.
Note "stdbuf" coreutils tool, used there to tweak output buffering, which
usually breaks such timestamps, and "svlogd" from runit_ suite (no deps, can be
built separately).
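The interleaving itself maps nicely onto heapq.merge over per-file line
streams, keyed by the timestamp field - roughly like this (coloring aside, a
sketch only):

```python
import heapq

def tag(prefix, lines):
    # Pair each line with its sort key (first field, e.g. ISO8601 timestamp)
    # and prepend the unique filename-part prefix to the output line
    for line in lines:
        yield line.split(None, 1)[0], f'{prefix} {line}'

def interleave(files):
    # files: {prefix: lines_sorted_by_first_field} -> merged line list
    streams = [tag(prefix, lines) for prefix, lines in files.items()]
    return [line for ts, line in heapq.merge(*streams)]
```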

See `blog post about liac tool`_ for more info.

.. _runit: http://smarden.org/runit/
.. _blog post about liac tool: http://blog.fraggod.net/2015/12/29/tool-to-interleave-and-colorize-lines-from-multiple-log-or-any-other-files.html

html-embed
''''''''''

Script to create "fat" HTML files, embedding all linked images
(as base64-encoded data-urls), stylesheets and js into them.

All src= and href= paths must be local (e.g. "js/script.js" or "/css/main.css"),
and will simply be treated as path components (stripping slashes on the left)
from html dir, nothing external (e.g. "//site.com/stuff.js") will be fetched.

Doesn't need anything but Python-3, based on stdlib html.parser module.

Not optimized for huge amounts of embedded data, storing all the substitutions
in memory while it runs, and is unsafe to run on random html files, as it can
embed something sensitive that local src=/href= paths happen to point at - no
extra checks there.

Use-case is to easily produce single-file webapps or pages to pass around (or
share somewhere), e.g. some d3-based interactive chart page or an html report
with a few embedded images.

someml-indent
'''''''''''''

Simple and dirty regexp + backreferences something-ML (SGML/HTML/XML) parser to
indent tags/values in a compact way without messing-up anything else in there.

I.e. non-closed tags are FINE, something like <@> doesn't cause parser to
explode, etc.

Does not add any XML headers, does not mangle (or "canonize") tags/attrs/values
in any way, except for stripping/adding those spaces.

Kinda like BeautifulSoup, except not limited to html and trivial enough so that
it can be trusted not to do anything unnecessary like stuff mentioned above.

For cases when ``xmllint --format`` fails and/or breaks such kinda-ML-but-not-XML files.

entropy
'''''''

Python (2 or 3) script to feed /dev/random linux entropy pool, to e.g. stop dumb
tools like gpg blocking forever on ``pacman --init`` in a throwaway chroot.

Basically haveged or rngd replacement for bare-bones chroots that don't have
either, but do have python.

Probably a bad idea to use it for anything other than very brief workarounds for
such tools on isolated systems that don't run anything else crypto-related.

Shouldn't compromise deterministic stuff though, e.g. dm-crypt operation (except
new key generation in cryptsetup or such).

crypt
'''''

Trivial file/stream encryption tool using `PyNaCl's`_
crypto_secretstream_xchacha20poly1305 authenticated encryption API.

Key can be either specified on the command line for simplicity or read from a
file, and is always processed via scrypt, as it's likely some short string.

Usage examples::

  % crypt -ek my-secret-key secret.tar secret.tar.enc
  % crypt -dk my-secret-key secret.tar.enc secret.tar.test
  % crypt -ek @~/.secret.key secret.tar.enc

Intended for an ad-hoc temporary encryption when transferring stuff via a usb
stick, making a temporary backup to a random untrusted disk or whatever.

Does not support any kind of appending/resuming or partial operation, which can
be bad if there's a flipped bit anywhere in the encrypted data - decryption will
stop and throw error at that point.

.. _PyNaCl's: https://pynacl.readthedocs.io/



Kernel sources/build/version management
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

kernel-patch
''''''''''''

Simple stateless script to update sources in /usr/src/linux to some (specified)
stable version.

Looks for "patch-X.Y.Z.xz" files (as provided on kernel.org) under
/usr/src/distfiles (configurable at the top of the script), or downloads them
there from kernel.org.

Does update (or rollback) by grabbing current patchset version from Makefile and
doing essentially ``patch -R <patch-current && patch <patch-new`` - i.e.
rolling-back the current patchset, then applying new patch.

Always does ``patch --dry-run`` first to make sure there will be no mess left
over by the tool and updates will be all-or-nothing.

In short, allows running e.g. ``kernel-patch 3.14.22`` to get 3.14.22 in
``/usr/src/linux`` from any other clean 3.14.\* version, or just
``kernel-patch`` to have the latest 3.14 patchset.

kernel-conf-check
'''''''''''''''''

Ad-hoc python3 script to check any random snippet with linux kernel
``CONFIG_...`` values (e.g. "this is stuff you want to set" block on some wiki)
against kernel config file, current config in /proc/config.gz or such.

Reports what matches and what doesn't to stdout, trivial regexp matching.

clean-boot
''''''''''

Script to remove older kernel versions (as installed by ``/sbin/installkernel``)
from ``/boot`` or similar dir.

Always keeps version linked as "vmlinuz", and prioritizes removal of older
patchset versions from each major one, and only then latest per-major patchset,
until free space goal (specified percentage, 20% by default) is met.

Also keeps specified number of last-to-remove versions, can prioritize cleanup
of ".old" version variants, keep ``config-*`` files... and other stuff (see
--help).

Example::

  # clean-boot --debug --dry-run -f 100
  DEBUG:root:Preserved versions (linked version, its ".old" variant, --keep-min): 4
  DEBUG:root: - 3.9.9.1 - System.map-3.9.9-fg.mf_master
  DEBUG:root: - 3.9.9.1 - config-3.9.9-fg.mf_master
  DEBUG:root: - 3.9.9.1 - vmlinuz-3.9.9-fg.mf_master
  DEBUG:root: - 3.10.27.1 - vmlinuz-3.10.27-fg.mf_master
  ...
  DEBUG:root: - 3.12.19.1 - System.map-3.12.19-fg.mf_master
  DEBUG:root: - 3.12.20.1 - config-3.12.20-fg.mf_master
  DEBUG:root: - 3.12.20.1 - System.map-3.12.20-fg.mf_master
  DEBUG:root: - 3.12.20.1 - vmlinuz-3.12.20-fg.mf_master
  DEBUG:root:Removing files for version (df: 58.9%): 3.2.0.1
  DEBUG:root: - System.map-3.2.0-fg.mf_master
  DEBUG:root: - config-3.2.0-fg.mf_master
  DEBUG:root: - vmlinuz-3.2.0-fg.mf_master
  DEBUG:root:Removing files for version (df: 58.9%): 3.2.1.0
  ... (removal of older patchsets for each major version, 3.2 - 3.12)
  DEBUG:root:Removing files for version (df: 58.9%): 3.12.18.1
  ... (this was the last non-latest patchset-per-major)
  DEBUG:root:Removing files for version (df: 58.9%): 3.2.16.1
  ... (removing latest patchset for each major version, starting from oldest - 3.2 here)
  DEBUG:root:Removing files for version (df: 58.9%): 3.7.9.1
  ...
  DEBUG:root:Removing files for version (df: 58.9%): 3.8.11.1
  ...
  DEBUG:root:Finished (df: 58.9%, versions left: 4, versions removed: 66).

("df" doesn't rise here because of --dry-run, ``-f 100`` = "remove all
non-preserved" - as df can't really get to 100%)

Note how 3.2.0.1 (non-.old 3.2.0) gets removed first, then 3.2.1, 3.2.2, and so
on, but 3.2.16 (latest of 3.2.X) gets removed towards the very end, among other
"latest patchset for major" versions, except those that are preserved
unconditionally (listed at the top).



ZNC log helpers
^^^^^^^^^^^^^^^

Tools to manage `ZNC IRC bouncer <https://znc.in/>`_ logs - archive, view, search, etc.

znc-log-aggregator
''''''''''''''''''

Tool to process znc chat logs, produced by "log" module (global, per-user or
per-network - looks everywhere) and store them using the following schema::

  <net>/chat/<channel>__<yy>-<mm>.log.xz
  <net>/priv/<nick>__<yy>-<mm>.log.xz

Where "priv" differs from "chat" in the latter having names prefixed by "#" or "&".
Values there are parsed according to any one of these (whichever matches
first):

* ``users/<user>/moddata/log/<chan>_<date>.log``

* ``moddata/log/<user>_default_<chan>_<date>.log`` (no "_" in ``<user>`` allowed)

* ``moddata/log/<user>_<net>_<chan>_<date>.log`` (no "_" in ``<user>`` or
  ``<net>`` allowed)

Each line gets processed by regexp to do ``[HH:MM:SS] <nick> some msg`` ->
``[yy-mm-dd HH:MM:SS] <nick> some msg``.
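That per-line conversion is a one-liner regexp substitution, e.g. (a sketch -
in the script the date value comes from the log filename):

```python
import re

def add_date(line, date):
    # "[HH:MM:SS] <nick> msg" -> "[yy-mm-dd HH:MM:SS] <nick> msg"
    return re.sub(r'^\[(\d{2}:\d{2}:\d{2})\]', f'[{date} \\1]', line)
```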

Latest (current day) logs are skipped. New logs for each run are concatenated to
the monthly .xz file.

Should be safe to stop at any time without any data loss - all the resulting
.xz's get written to temporary files and renamed at the very end (followed only
by unlinking of the source files).

All temp files are produced in the destination dir and should be cleaned-up on
any abort/exit/finish.

Idea is to have more convenient hierarchy and less files for easier shell
navigation/grepping (xzless/xzgrep), plus not worrying about excessive space
usage in the long run.

znc-log-reader
''''''''''''''

Same as znc-log-aggregator above, but seeks/reads specific tail ("last n lines")
or time range (with additional filtering by channel/nick and network) from all
the current and aggregated logs.



systemd
^^^^^^^

systemd-dashboard
'''''''''''''''''

Python3 script to list all currently active and non-transient systemd units,
so that these can be tracked as a "system state",
and e.g. any deviations there detected/reported (simple diff can do it).

Gets unit info by parsing Dump() snapshot fetched via sd-bus API of libsystemd
(using ctypes to wrap it), which is same as e.g. "systemd-analyze dump" gets.

Has -m/--machines option to query state from all registered machines as well,
which requires root (for sd_bus_open_system_machine) due to current systemd limitations.

See `Dashboard-for-... blog post`_ for extended rationale,
though it's probably obsolete otherwise since this thing was rewritten.

.. _Dashboard-for-... blog post: http://blog.fraggod.net/2011/2/Dashboard-for-enabled-services-in-systemd

systemd-watchdog
''''''''''''''''

Trivial script to ping systemd watchdog and do some trivial actions in-between
to make sure the OS still works.

Wrote it after yet another silent non-crash, where linux kernel refuses to
create new pids (with some backtraces) and seem to hang on some fs ops, blocking
syslog/journal, but leaving most simple daemons running ok-ish for a while.

So this trivial script, tied into systemd-controlled watchdog timers, tries to
create pids every once in a while, with either hang or crash bubbling-up to
systemd (pid-1), which should reliably reboot/crash the system via hardware wdt.
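The watchdog ping itself is just the sd_notify(3) protocol - a datagram sent to
the socket systemd passes via $NOTIFY_SOCKET (the script uses the systemd
python3 module for this; below is a raw-socket sketch of the same thing):

```python
import os, socket

def watchdog_ping():
    # Send "WATCHDOG=1" to pid-1 via the sd_notify(3) datagram socket,
    # resetting the WatchdogSec= timer for this Type=notify service
    addr = os.environ.get('NOTIFY_SOCKET')
    if not addr:
        return False  # not running under systemd
    if addr.startswith('@'):  # abstract-namespace socket
        addr = '\0' + addr[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as s:
        s.sendto(b'WATCHDOG=1', addr)
    return True
```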

Example watchdog.service::

  [Service]
  Type=notify
  ExecStart=/usr/local/bin/systemd-watchdog -i30 -n \
    -f /var/log/wdt-fail.log \
    -x 'ip link' -x 'ip addr' -x 'ip ro' -x 'journalctl -an30'

  WatchdogSec=60
  TimeoutStartSec=15
  Restart=on-failure
  RestartSec=20
  StartLimitInterval=10min
  StartLimitBurst=5
  StartLimitAction=reboot-force

  [Install]
  WantedBy=multi-user.target

(be sure to tweak timeouts and test without "reboot-force" first though,
e.g. pick RestartSec= for transient failures to not trigger StartLimitAction)

Can optionally get IP of (non-local) gateway to 1.1.1.1 (or any specified IPv4)
via libmnl (also used by iproute2, so always available) and check whether it
responds to `fping <https://fping.org/>`_ probes, crashing if it does not - see
-n/--check-net-gw option.

That's mainly for remote systems which can become unreachable if kernel network
stack, local firewall, dhcp, ethernet or whatever other link fails (usually due
to some kind of local tinkering), ignoring more mundane internet failures.

To avoid reboot loops (in absence of any networking), it might be a good idea
to only start script with this option manually (e.g. right before messing with
the network, or on first successful access).

-f/--fail-log option is to log date/time of any failures for latest boot
and run -x/--fail-log-cmd command(s) on any python exceptions (note: kernel
hangs probably won't cause these), logging their stdout/stderr there -
e.g. to dump network configuration info as in example above.

Useless without systemd and requires systemd python3 module, plus fping tool if
-n/--check-net-gw option is used.

cgrc
''''

Wrapper for `systemd.resource control`_ stuff to run commands in transient
scopes within pre-defined slices, as well as wait for these and list pids
within them easily.

Replacement for things like libcgroup, cgmanager and my earlier `cgroup-tools
project`_, compatible with `unified cgroup-v2 hierarchy`_ and working on top of
systemd (use ``systemd.unified_cgroup_hierarchy`` on cmdline, if non-default).

Resource limits for cgrc scopes should be defined via hierarchical slices like these::

  # apps.slice
  [Slice]

  CPUWeight=30
  IOWeight=30

  MemoryHigh=5G
  MemoryMax=8G
  MemorySwapMax=1G

  # apps-browser.slice
  [Slice]
  CPUWeight=30
  IOWeight=30
  MemoryHigh=3G

And then script can be used to start things there::

  % cgrc apps-browser -- chromium
  % cgrc -u ff apps-browser -- firefox --profile myprofile

Where e.g. last command would end up running something like this::

  % systemd-run -q --user --scope --unit ff \
    --slice apps-browser -- firefox --profile myprofile

Note that .scope cgroups are always transient (vanish after run), and only
.slice ones can be pre-defined with limits.
Both get started/stopped by systemd on as-needed basis.

The tool also allows checking or listing pids within scopes/slices with -c/-l
options (to e.g. check if named scope already started or something running in a
slice), as well as waiting on these (-q option, can be used to queue/run
commands in sequence) and manipulating associated cgroup limits easily (-v option).

Run without any args/opts or with -h/--help to get more detailed usage info.

.. _systemd.resource control: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html
.. _cgroup-tools project: https://github.com/mk-fg/cgroup-tools
.. _unified cgroup-v2 hierarchy: https://www.kernel.org/doc/Documentation/cgroup-v2.txt



SSH and WireGuard related
^^^^^^^^^^^^^^^^^^^^^^^^^

See also "backup" subsection.

ssh-fingerprint
'''''''''''''''

ssh-keyscan, but outputting each key in every possible format.

Imagine you have an incoming IM message "hey, someone haxxors me, it says 'ECDSA
key fingerprint is f5:e5:f9:b6:a4:6b:fd:b3:07:15:f6:d9:0c:f5:47:54', what do?" -
this tool allows dumping any such fingerprint for a remote host, with::

  % ssh-fingerprint congo.fg.nym
  ...
  congo.fg.nym ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNo...zoU04g=
  256 MD5:f5:e5:f9:b6:a4:6b:fd:b3:07:15:f6:d9:0c:f5:47:54 /tmp/.ssh_keyscan.key.kc3ur3C (ECDSA)
  256 SHA256:lFLzFQR...2ZBmIgQi/w /tmp/.ssh_keyscan.key.kc3ur3C (ECDSA)
  ---- BEGIN SSH2 PUBLIC KEY ----
  ...

The only way I know to get that
"f5:e5:f9:b6:a4:6b:fd:b3:07:15:f6:d9:0c:f5:47:54" secret-sauce is to either do
your own md5 + hexdigest on ssh-keyscan output (and not mess-up due to some
extra space or newline), or store one of the keys from there with first field
cut off into a file and run ``ssh-keygen -l -E md5 -f key.pub``.
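That md5 + hexdigest step, for reference, is just this, computed over the
base64-decoded key blob (i.e. the base64 field of ssh-keyscan output):

```python
import base64, hashlib

def md5_fingerprint(b64_blob):
    # Colon-separated md5 hex over the raw public key blob -
    # same value that "ssh-keygen -l -E md5" prints after "MD5:"
    digest = hashlib.md5(base64.b64decode(b64_blob)).hexdigest()
    return ':'.join(digest[i:i+2] for i in range(0, 32, 2))
```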

Note how "intuitive" it is to confirm something that ssh prints (and it prints
only that md5-fp thing!) for every new host you connect to with just openssh.

With this command, just running it on the remote host - presumably from diff
location, or even localhost - should give (hopefully) any possible gibberish
permutation that openssh (or something else) may decide to throw at you.

ssh-keyparse
''''''''''''

Python3 script to extract raw private key string from ed25519 ssh keys.

Main purpose is easy backup of ssh private keys and derivation of new secrets
from these for other purposes.

For example::

  % ssh-keygen -t ed25519 -f test-key
  ...

  % cat test-key
  -----BEGIN OPENSSH PRIVATE KEY-----
  b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
  QyNTUxOQAAACDaKUyc/3dnDL+FS4/32JFsF88oQoYb2lU0QYtLgOx+yAAAAJi1Bt0atQbd
  GgAAAAtzc2gtZWQyNTUxOQAAACDaKUyc/3dnDL+FS4/32JFsF88oQoYb2lU0QYtLgOx+yA
  AAAEAc5IRaYYm2Ss4E65MYY4VewwiwyqWdBNYAZxEhZe9GpNopTJz/d2cMv4VLj/fYkWwX
  zyhChhvaVTRBi0uA7H7IAAAAE2ZyYWdnb2RAbWFsZWRpY3Rpb24BAg==
  -----END OPENSSH PRIVATE KEY-----

  % ssh-keyparse test-key
  HOSEWmGJtkrOBOuTGGOFXsMIsMqlnQTWAGcRIWXvRqQ=

That one line at the end contains 32-byte ed25519 seed (with urlsafe-base64
encoding) - "secret key" - all the necessary info to restore the blob above,
without extra openssh wrapping (as per PROTOCOL.key).

Original OpenSSH format (as produced by ssh-keygen) stores "magic string",
ciphername ("none"), kdfname ("none"), kdfoptions (empty string), public key and
index for that, two "checkint" numbers, seed + public key string, comment and a
bunch of extra padding at the end. All string values there are length-prefixed,
so take extra 4 bytes, even when empty.

Gist is that it's a ton of stuff that's not the actual key, which ssh-keyparse
extracts.
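That layout is all length-prefixed strings, so extracting the seed is a short
walk through the blob - a simplified sketch of the idea (unencrypted single-key
case only, field names as per PROTOCOL.key):

```python
import struct

def sshstr(b):  # length-prefixed string, as used throughout the format
    return struct.pack('>I', len(b)) + b

def ed25519_seed(blob):
    def rd(buf, off):  # read one length-prefixed string
        n, = struct.unpack_from('>I', buf, off)
        return buf[off+4:off+4+n], off + 4 + n
    magic = b'openssh-key-v1\0'
    assert blob.startswith(magic)
    off = len(magic)
    for _ in range(3):  # ciphername, kdfname, kdfoptions
        _s, off = rd(blob, off)
    off += 4  # number of keys (assumed to be 1 here)
    _pub, off = rd(blob, off)  # public key blob
    priv, off = rd(blob, off)  # private key section
    p = 8  # skip the two matching "checkint" values
    ktype, p = rd(priv, p)
    assert ktype == b'ssh-ed25519'
    _pk, p = rd(priv, p)  # 32B public key
    sk, p = rd(priv, p)  # 64B: seed + public key again
    return sk[:32]
```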

To restore key from seed, use -d/--patch-key option on any existing ed25519 key,
e.g. ``ssh-keygen -t ed25519 -N '' -f test-key && ssh-keyparse -d <seed> test-key``

If key is encrypted with passphrase, ``ssh-keygen -p`` will be run on a
temporary copy of it to decrypt, with a big warning in case it's not desirable.

There's also an option (--pbkdf2) to run the thing through PBKDF2 (tunable via
--pbkdf2-opts) and various output encodings available::

  % ssh-keyparse test-key  # default is urlsafe-base64 encoding
  HOSEWmGJtkrOBOuTGGOFXsMIsMqlnQTWAGcRIWXvRqQ=

  % ssh-keyparse test-key --hex
  1ce4845a6189b64ace04eb931863855ec308b0caa59d04d60067112165ef46a4

  % ssh-keyparse test-key --base32
  3KJ8-8PK1-H6V4-NKG4-XE9H-GRW5-BV1G-HC6A-MPEG-9NG0-CW8J-2SFF-8TJ0-e

  % ssh-keyparse test-key --base32-nodashes
  3KJ88PK1H6V4NKG4XE9HGRW5BV1GHC6AMPEG9NG0CW8J2SFF8TJ0e

  % ssh-keyparse test-key --raw >test-key.bin

With encoding like --base32 (`Douglas Crockford's human-oriented Base32`_,
last digit/lowercase-letter there is a checksum), it's easy to even read the
thing over some voice channel, if necessary.

.. _Douglas Crockford's human-oriented Base32: http://www.crockford.com/wrmg/base32.html

ssh-key-init
''''''''''''

Bash script to generate (init) ssh key (via ssh-keygen) without asking about
various legacy and uninteresting options and safe against replacing existing
keys.

I.e. don't ever want RSA, ECDSA or such nonsense (Ed25519 is the norm), don't
need passwords for 99.999% of the keys, don't care about any of the ssh-keygen
output, don't need any interactivity, but do care about silently overwriting
existing key and want the thing to create parent dirs properly (which -f fails
to do).

Has -m option to init key for an nspawn container under ``/var/lib/machines``
(e.g. ``ssh-key-init -m mymachine``) and -r option to replace any existing keys.
Sets uid/gid of the parent path for all new ones and -m700.

ssh-tunnel
''''''''''

| Script to keep persistent, unique and reasonably responsive ssh tunnels.
| Mostly just a bash wrapper with collection of options for such use-case.
|

I.e. to run ``ssh-tunnel -ti 60 2223:nexthop:22 user@host -p2222`` instead of
some manual loop (re-)connecting every 60s in the background using something like::

  ssh \
    -oControlPath=none -oControlMaster=no \
    -oConnectTimeout=5 -oServerAliveInterval=3 -oServerAliveCountMax=5 \
    -oPasswordAuthentication=no -oNumberOfPasswordPrompts=0 \
    -oBatchMode=yes -oExitOnForwardFailure=yes -TnNqy \
    -p2222 -L 2223:nexthop:22 user@host

Which are all pretty much required for proper background tunnel operation.

| Has opts for reverse-tunnels and using tping tool instead of ssh/sleep loop.
| Keeps pidfiles in /tmp and allows to kill running tunnel-script via same command with -k/kill appended.

ssh-reverse-mux-\*
''''''''''''''''''

Python 3.6+ (asyncio) scripts to establish multiple ssh reverse-port-forwarding
("ssh -R") connections to the same tunnel-server from multiple hosts using the
same exact configuration on each.

Normally, first client host will bind the "ssh -R" listening port and all others
will fail, but these two scripts negotiate unique port within specified range to
each host, so there are no clashes and all tunnels work fine.

Tunnel server also stores allocated ports in a db file, so that each client gets
more-or-less persistent listening port.

Each client negotiates port before exec'ing "ssh -R" command, identifying itself
via --ident-\* string (derived from /etc/machine-id by default), and both
client/server need to use same -s/--auth-secret to create/validate MACs in each
packet.

Note that all --auth-secret is used for is literally handing-out sequential
numbers, and isn't expected to be strong protection against anything,
unlike ssh auth that should come after that.
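
That per-packet MAC scheme can be illustrated with a hypothetical hmac-sha256
sketch (an illustration of the symmetric-MAC idea, not the actual wire format
of these scripts):

```python
import hmac, hashlib

def mac_wrap(auth_secret: bytes, payload: bytes) -> bytes:
    # append MAC over payload, keyed by the shared -s/--auth-secret
    return payload + hmac.new(auth_secret, payload, hashlib.sha256).digest()

def mac_unwrap(auth_secret: bytes, pkt: bytes) -> bytes:
    payload, tag = pkt[:-32], pkt[-32:]
    if not hmac.compare_digest(
            tag, hmac.new(auth_secret, payload, hashlib.sha256).digest() ):
        raise ValueError('MAC mismatch - wrong -s/--auth-secret?')
    return payload
```

Note that anyone without the secret can still read the payload here, which
matches the "not strong protection" caveat above.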

wg-mux-\*
'''''''''

Same thing as ssh-reverse-mux-\* scripts above, but for negotiating WireGuard
tunnels, with persistent host tunnel IPs tracked via --ident-\* strings with
simple auth via MACs on UDP packets derived from symmetric -s/--auth-secret.

Client identity, wg port, public key and tunnel IPs are sent in the clear with
relatively weak authentication (hmac of -s/--auth-secret string), but wg server
is also authenticated by pre-shared public key (and --wg-psk, if specified).

Such setup is roughly equivalent to a password-protected (--auth-secret) public network.

Runs "wg set" commands to update configuration, which need privileges,
but can be wrapped in sudo or suid/caps via --wg-cmd to avoid root in the rest
of the script.

Does not touch or handle WireGuard private keys in any way by itself,
and probably should not have direct access to these
(though note that unrestricted access to "wg" command can reveal them anyway).

Example systemd unit for server::

  # wg.service + auth.secret psk.secret key.secret
  # useradd -s /usr/bin/nologin wg && mkdir -m700 ~wg && chown wg: ~wg
  # cd ~wg && cp /usr/bin/wg . && chown root:wg wg && chmod 4110 wg
  [Unit]
  Wants=network.target
  After=network.target

  [Service]
  Type=exec
  User=wg
  WorkingDirectory=~
  Restart=always
  RestartSec=60
  StandardInput=file:/home/wg/auth.secret
  StandardOutput=journal
  ExecStartPre=+sh -c 'ip link add wg type wireguard 2>/dev/null; \
    ip addr add 10.123.0.1/24 dev wg 2>/dev/null; ip link set wg up'
  ExecStartPre=+wg set wg listen-port 1500 private-key key.secret
  ExecStart=wg-mux-server --mux-port=1501 --wg-port=1500 \
    --wg-net=10.123.0.0/24 --wg-cmd=./wg --wg-psk=psk.secret

  [Install]
  WantedBy=multi-user.target

Client::

  # wg.service + auth.secret psk.secret
  # useradd -s /usr/bin/nologin wg && mkdir -m700 ~wg && chown wg: ~wg
  # cd ~wg && cp /usr/bin/wg . && chown root:wg wg && chmod 4110 wg
  # cd ~wg && cp /usr/bin/ip . && chown root:wg ip && chmod 4110 ip
  [Unit]
  Wants=network.target
  After=network.target

  [Service]
  Type=exec
  User=wg
  WorkingDirectory=~
  Restart=always
  RestartSec=10
  StandardInput=file:/home/wg/auth.secret
  StandardOutput=journal
  ExecStartPre=+sh -c '[ -e key.secret ] || { umask 077; wg genkey >key.secret; }'
  ExecStartPre=+sh -c '[ -e key.public ] || wg pubkey <key.secret >key.public'
  ExecStartPre=+sh -c 'ip link add wg type wireguard 2>/dev/null; ip link set wg up'
  ExecStartPre=+wg set wg private-key key.secret
  ExecStart=wg-mux-client \
    20.88.203.92:1501 BcOn/q9D5zcqK0hrWmXGQHtaEKGGf6g5nTxZUZ0P4HY= key.public \
    --ident-rpi --wg-net=10.123.0.0/24 --wg-cmd=./wg --ip-cmd=./ip --wg-psk=psk.secret \
    --ping-cmd='ping -q -w15 -c3 -i3 10.123.0.1' --ping-silent

  [Install]
  WantedBy=multi-user.target

When enabled, these should be enough to setup reliable tunnel up on client boot,
and then keep it alive from there indefinitely (via --ping-cmd + systemd restart).

Explicit iface/IP init in these units can be replaced by systemd-networkd
.netdev + .network stuff, as it supports wireguard configuration there.

ssh-tunnels-cleanup
'''''''''''''''''''

Bash script to list or kill users' sshd pids, created for "ssh -R" tunnels, that
don't have a listening socket associated with them or don't show ssh protocol
greeting (e.g. "SSH-2.0-OpenSSH_7.4") there.

These seem to occur when ssh client suddenly dies and reconnects to create new
tunnel - old pid can still hog listening socket (even though there's nothing on
the other end), but new pid won't exit and hang around uselessly.

Solution is to a) check for sshd pids that don't have a listening socket, and
b) connect to sshd pids' sockets and see if anything responds there, killing
both non-listening and unresponsive pids.
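
The protocol-greeting probe from that second check is easy to reproduce -
a simplified sketch of the idea (the script itself shells out to ss/ncat
instead):

```python
import socket

def ssh_greeting_check(host, port, timeout=2.0):
    # connect and expect an "SSH-2.0-..." protocol greeting from sshd
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            return s.recv(64).startswith(b'SSH-2.0-')
    except OSError:  # refused / reset / timed out
        return False
```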

Only picks sshd pids for users with specific prefix, e.g. "tun-" by default, to
be sure not to kill anything useful (i.e. anything that's not for "ssh -R").

Uses ps, ss, gawk and ncat (comes with nmap), only prints pids by default
(without -k/--kill option).

Also has -s/--cleanup-sessions option to remove all "abandoned" login sessions
(think loginctl) for user with specified prefix, i.e. any leftover stuff after
killing those useless ssh pids.

See also: `autossh `_ and such.

mosh-nat / mosh-nat-bind.c
''''''''''''''''''''''''''

Python (3.6+) wrapper for mosh-server binary to do UDP hole punching through
local NAT setup before starting it.

Comes with mosh-nat-bind.c source for LD_PRELOAD=./mnb.so lib to force
mosh-client on the other side to use specific local port that was used in
"mosh-nat".

Example usage (server at 84.217.173.225, client at 74.59.38.152)::

  server% ./mosh-nat 74.59.38.152
  mosh-client command:
    MNB_PORT=34730 LD_PRELOAD=./mnb.so
      MOSH_KEY=rYt2QFJapgKN5GUqKJH2NQ mosh-client  34730

  client% MNB_PORT=34730 LD_PRELOAD=./mnb.so \
    MOSH_KEY=rYt2QFJapgKN5GUqKJH2NQ mosh-client 84.217.173.225 34730

Notes:

- mnb.so is mosh-nat-bind.c lib. Check its header for command to build it.
- Both mnb.so and mosh-nat only work with IPv4, IPv6 shouldn't use NAT anyway.
- Should only work like that when NAT on either side doesn't rewrite src ports.
- 34730 is default for -c/--client-port and -s/--server-port opts.
- Started mosh-server waits for 60s (default) for mosh-client to connect.
- Continuous operation relies on mosh keepalive packets without interruption.
- No roaming of any kind is possible here.
- New MOSH_KEY is generated by mosh-server on every run.

Useful for direct and fast connection when there's some other means of access
available already, e.g. ssh through some slow/indirect tunnel or port forwarding
setup.
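
The punching step itself is just an outgoing datagram from the right source
port, so that stateful NAT/conntrack will pass replies back in - a simplified
sketch of the idea (not the actual script, which also spawns mosh-server on
that port):

```python
import socket

def udp_punch(local_port, remote_host, remote_port):
    # send a datagram from the local mosh port towards the remote end,
    # creating an outgoing NAT mapping that replies can come back through
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(('0.0.0.0', local_port))
    s.sendto(b'mosh-nat-hello', (remote_host, remote_port))
    return s  # this port is then handed over to mosh-server
```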

| For more hands-off hole-punching, similar approach to what
  `pwnat `_ does can be used.
| See `mobile-shell/mosh#623 `_
  for more info and links on such feature implemented in mosh directly.
| Source for LD_PRELOAD lib is based on https://github.com/yongboy/bindp/

tping
'''''

Python-3 (asyncio) tool to try connecting to specified TCP port until connection
can be established, then just exit, i.e. to wait until some remote port is accessible.

Can be used to wait for host to reboot before trying to ssh into it, e.g.::

  % tping myhost && ssh root@myhost

(default -p/--port is 22 - ssh, see also -s/--ssh option)

Tries establishing new connection (forcing new SYN, IPv4/IPv6 should both work)
every -r/--retry-delay seconds (default: 1), only discarding (closing) "in
progress" connections after -t/--timeout seconds (default: 3), essentially
keeping rotating pool of establishing connections until one of them succeeds.

This means that with e.g. ``-r1 -t5`` there will be 5 establishing connections
(to account for slow-to-respond remote hosts) rotating every second, so ratio of
these delays shouldn't be too high to avoid spawning too many connections.
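
That rotating-pool logic can be sketched roughly like this (a simplified
approximation, not the script's actual code):

```python
import asyncio

async def tcp_wait(host, port, retry_delay=1.0, timeout=3.0):
    # spawn a new connection attempt every retry_delay seconds,
    # discarding each one only after its own timeout expires
    pending = set()
    try:
        while True:
            pending.add(asyncio.ensure_future(
                asyncio.wait_for(asyncio.open_connection(host, port), timeout) ))
            done, pending = await asyncio.wait(
                pending, timeout=retry_delay, return_when=asyncio.FIRST_COMPLETED )
            for task in done:
                try: _, writer = task.result()
                except (OSError, asyncio.TimeoutError): continue  # keep trying
                writer.close()
                return
    finally:
        for task in pending: task.cancel()
        await asyncio.gather(*pending, return_exceptions=True)
```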

Host/port names specified on the command line are resolved synchronously on
script startup (same as with e.g. "ping" tool), so it can't be used to wait
until hostname resolves, only for connection itself.

Above example can also be shortened via -s/--ssh option, e.g.::

  % tping -s myhost 1234
  % tping -s root@myhost:1234 # same thing as above
  % tping -s -p1234 myhost # same thing as above

Will exec ``ssh -p1234 root@myhost`` immediately after successful tcp connection.

Uses python3 stdlib stuff, namely asyncio, to juggle multiple connections in an
efficient manner.



WiFi / Bluetooth helpers
^^^^^^^^^^^^^^^^^^^^^^^^

adhocapd
''''''''

Picks first wireless dev from ``iw dev`` and runs hostapd_ + udhcpd (from
busybox) on it.

Use-case is plugging wifi usb dongle and creating temporary AP on it - kinda
like "tethering" functionality in Android and such.

Configuration for both is generated using reasonable defaults - distinctive
(picked from ``ssid_list`` at the top of the script) AP name and random password
(using ``passgen`` from this repo, falling back to a ``tr -cd '[:alnum:]'``
pipe over /dev/urandom).

bt-pan
''''''

Script to setup a Bluetooth PAN (personal area network) link via BlueZ dbus
interfaces, with "server" and "client" commands for either end of it.

The server command will probably complain that "bnep" bridge is missing and
list commands to bring it up (brctl, ip).

Default mode for both "server" and "client" is NAP (AP mode, like with WiFi).

Both commands make bluetoothd (that should be running) create "bnepX" network
interfaces, connected to server/clients, and "server" also automatically (as
clients are connecting) adds these to specified bridge.

Not sure how PANU and GN "ad-hoc" modes are supposed to work - both BlueZ
"NetworkServer" and "Network" (client) interfaces support these, so I suppose
one might need to run both or either of server/client commands (with e.g. "-u
panu" option).

Couldn't get either one of ad-hoc modes to work myself, but didn't try
particulary hard, and it might be hardware issue as well, I guess.



Misc
^^^^

Misc one-off scripts that don't group well with anything else.

at
''

Replacement for standard unix'ish "atd" daemon in the form of a bash script.

| It just forks out and waits for however long it needs before executing the given command.
| Unlike atd proper, such tasks won't survive reboot, obviously.

::

  Usage: ./at [ -h | -v ] when < sh_script
  With -v flag ./at mails script output if it's not empty even if exit code is zero.

wgets
'''''

Simple script to grab a file using wget and then validate checksum of the
result, e.g.:

.. code:: console

  $ wgets -c http://os.archlinuxarm.org/os/ArchLinuxARM-sun4i-latest.tar.gz cea5d785df19151806aa5ac3a917e41c
  Using hash: md5
  Using output filename: ArchLinuxARM-sun4i-latest.tar.gz
  --2014-09-27 00:04:45--  http://os.archlinuxarm.org/os/ArchLinuxARM-sun4i-latest.tar.gz
  Resolving os.archlinuxarm.org (os.archlinuxarm.org)... 142.4.223.96, 67.23.118.182, 54.203.244.41, ...
  Connecting to os.archlinuxarm.org (os.archlinuxarm.org)|142.4.223.96|:80... connected.
  HTTP request sent, awaiting response... 416 Requested Range Not Satisfiable

      The file is already fully retrieved; nothing to do.

  Checksum matched

Basic invocation syntax is ``wgets [ wget_opts ] url checksum``, checksum is
hex-decoded and hash func is auto-detected from its length (md5, sha-1, all
sha-2's are supported).
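
That length-based hash detection boils down to a small lookup table - a sketch
of the idea (hypothetical code, not an excerpt from the script):

```python
import hashlib

# map hex-digest length to a hash name - md5, sha-1 and the sha-2 family
hash_name_by_len = { 32: 'md5', 40: 'sha1', 56: 'sha224',
    64: 'sha256', 96: 'sha384', 128: 'sha512' }

def checksum_matches(path, checksum):
    bytes.fromhex(checksum)  # raises ValueError for non-hex input
    digest = hashlib.new(hash_name_by_len[len(checksum)])
    with open(path, 'rb') as src:
        for chunk in iter(lambda: src.read(2**20), b''): digest.update(chunk)
    return digest.hexdigest() == checksum.lower()
```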

Idea is that - upon encountering an http link with either checksum on the page
or in the file nearby - you can easily run the thing providing both link and
checksum to fetch the file.

If checksum is available in e.g. \*.sha1 file alongside the original one, it
might be a good idea to fetch that checksum from any remote host (e.g. via
"curl" from any open ssh session), making spoofing of both checksum and the
original file a bit harder.

mail
''''

Simple bash wrapper for sendmail command, generating From/Date headers and
stuff, just like mailx would do, but also allowing to pass custom headers
(useful for filtering error reports by-source), which some implementations of
"mail" fail to do.

passgen
'''''''

Uses aspell english dictionary to generate an easy-to-remember passphrase -
a `Diceware-like`_ method.

Use -e option to get a rough entropy estimate for the resulting passphrase,
based on number of words in aspell dictionary dump that is being used.

Other options allow for picking number of words and sanity-checks like min/max length
(to avoid making it too unwieldy or easy to bruteforce via other methods).

.. _Diceware-like: https://en.wikipedia.org/wiki/Diceware

hhash
'''''

Produces lower-entropy "human hash" phrase consisting of aspell english
dictionary words for input arg(s) or data on stdin.

It works by first calculating BLAKE2 hash of input string/data via libsodium_,
and then encoding it using consistent word-alphabet, exactly like something like
base32 or base64 does.

Example::

  % hhash -e AAAAC3NzaC1lZDI1NTE5AAAAIPh5/VmxDwgtJI0HiFBqZkbyV1I1YK+2DVjGjYydNp5o
  allan avenues regrade windups flours
  entropy-stats: word-count=5 dict-words=126643 word-bits=17.0 total-bits=84.8

Here -e is used to print entropy estimate for produced words.

Note that resulting entropy values can be fractional when the word-alphabet
ends up padded (with repeated words) to map to a whole number of bits (e.g. 17
bits above) - repeated words mean slightly less than 17 bits of distinct values
per word.
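
The entropy-stats numbers from the example above can be reproduced from the
dictionary size alone, assuming the reading of them described here:

```python
import math

dict_words = 126_643  # aspell dictionary size from the example above
word_count = 5

# alphabet gets padded (with repeated words) up to the next power of two,
# so each word encodes a whole number of hash bits...
word_bits = math.ceil(math.log2(dict_words))
# ...but entropy of distinct values per word is the fractional log2
total_bits = word_count * math.log2(dict_words)

print(word_bits, round(total_bits, 1))  # 17 84.8
```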

Written in OCaml, linked against libsodium_ (for the BLAKE2 hash function) via
small C glue code, build with::

  % ocamlopt -o hhash -O2 unix.cmxa str.cmxa \
     -cclib -lsodium -ccopt -Wl,--no-as-needed hhash.ml hhash.ml.c
  % strip hhash

Caches dictionary into ~/.cache/hhash.dict (-c option) on first run to produce
consistent results on the same machine. Updating that dictionary will change outputs!

.. _libsodium: https://libsodium.org/

urlparse
''''''''

Simple script to parse long URL with lots of parameters, decode and print it out
in an easily readable ordered YAML format or diff (that is, just using "diff"
command on two outputs) with another URL.

No more squinting at some huge incomprehensible ecommerce URLs before scraping
the hell out of them!
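
A rough single-URL approximation of what it does, using python stdlib (the
script itself prints ordered YAML and handles the diffing):

```python
from urllib.parse import urlsplit, parse_qsl, unquote

def url_dump(url):
    # decode %-escapes and sort query parameters for easy reading/diffing
    u = urlsplit(url)
    lines = [ f'scheme: {u.scheme}', f'netloc: {u.netloc}',
        f'path: {unquote(u.path)}', 'query:' ]
    lines.extend(f'  {k}: {v}' for k, v in sorted(parse_qsl(u.query)))
    return '\n'.join(lines)

print(url_dump('https://shop.example.com/cart/item%20list?b=2&a=hi%21&sid=xyz'))
```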

ip-ext
''''''

Some minor tools for network configuration from console/scripts, which iproute2
seem to be lacking, in a py3 script.

For instance, if network interface on a remote machine was (mis-)configured in
initramfs or wherever to not have link-local IPv6 address, there seem to be no
tool to restore it without whole "ip link down && ip link up" dance, which can
be a bad idea.

``ipv6-lladdr`` subcommand handles that particular case, generating ipv6-lladdr
from mac, as per RFC 4291 (as implemented in "netaddr" module) and can assign
resulting address to the interface, if missing:

.. code:: console

  # ip-ext --debug ipv6-lladdr -i enp0s9 -x
  DEBUG:root:Got lladdr from interface (enp0s9): 00:e0:4c:c2:78:86
  DEBUG:root:Assigned ipv6_lladdr (fe80::2e0:4cff:fec2:7886) to interface: enp0s9
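
The derivation itself is simple enough to sketch with stdlib ipaddress instead
of the netaddr module:

```python
import ipaddress

def ipv6_lladdr_from_mac(mac):
    # RFC 4291 modified EUI-64: flip the universal/local bit in the first
    # octet, insert 0xfffe in the middle, prepend the fe80::/64 prefix
    octets = bytearray(int(o, 16) for o in mac.split(':'))
    octets[0] ^= 0x02
    eui64 = bytes(octets[:3]) + b'\xff\xfe' + bytes(octets[3:])
    return ipaddress.IPv6Address(b'\xfe\x80' + b'\x00' * 6 + eui64)

print(ipv6_lladdr_from_mac('00:e0:4c:c2:78:86'))  # fe80::2e0:4cff:fec2:7886
```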

``ipv6-dns`` tool generates \*.ip.arpa and djbdns records for specified IPv6.

``ipv6-name`` encodes or hashes name into IPv6 address suffix to produce an
easy-to-remember static ones.

``iptables-flush`` removes all iptables/ip6tables rules from all tables,
including any custom chains, using iptables-save/restore command-line tools, and
sets policy for default chains to ACCEPT.

blinky
''''''

Script to blink gpio-connected leds via ``/sys/class/gpio`` interface.

Includes oneshot mode, countdown mode (with some interval scaling option),
direct on-off phase delay control (see --pre, --post and --interval\* options),
cooperation between several instances using same gpio pin, "until" timestamp
spec, and generally everything I can think of being useful (mostly for use from
other scripts though).

openssl-fingerprint
'''''''''''''''''''

Grabs certificate from the specified site via ``openssl s_client`` and runs it
through ``openssl x509 -fingerprint`` to print just the fingerprint.

nsh
'''

Script to "login" into a systemd-nspawn container - i.e. enter its namespaces
and get a root login shell there.

Does not ask for user/password and does not start a new "systemd --user"
session, just runs ``su -`` to get a root login shell.

Essentially same as ``machinectl shell``, but doesn't require systemd-225+
or machine being registered with systemd at all.

If running ``tty`` there says ``not a tty`` and e.g. ``screen`` bails out with
``Must be connected to a terminal.``, just run extra ``getty tty`` there - will
ask to login (be mindful of /etc/securetty if login fails), and everything
tty-related should work fine afterwards.

If run without argument or with -l/--list option, will list running machines.

See also: lsns(1), nsenter(1), unshare(1)

pam-run
'''''''

Wrapper that opens specified PAM session (as per one of the configs in
``/etc/pam.d``, e.g. "system-login"), switches to specified uid/gid and runs
some command there.

My use-case is to emulate proper "login" session for systemd-logind, which
neither "su" nor "sudo" can do (nor should do!) in default pam configurations
for them, as they don't load pam_systemd.so (as opposed to something like
``machinectl shell myuser@ -- ...``).

This script can load any pam stack however, so e.g. running it as::

  # pam-run -s system-login -u myuser -t :1 \
    -- bash -c 'systemctl --user import-environment \
      && systemctl --user start xorg.target && sleep infinity'

Should initiate proper systemd-logind session (and close it afterwards) and
start "xorg.target" in "myuser"-specific "systemd --user" instance (started by
logind with the session).

Can be used as a GDM-less way to start/keep such sessions (with proper
display/tty and class/type from env) without much hassle or other weirdness like
"agetty --autologin" or "login" in some pty (see also `mk-fg/de-setup
`_ repo), or for whatever other pam wrapping
or testing (e.g. try logins with passwords from file), as it has nothing
specific (or even related) to desktops.

Self-contained python-3 script, using libpam via ctypes.

Warning: this script is no replacement for su/sudo wrt uid/gid-switching, and
doesn't implement all the checks and sanitization these tools do, so only
intended to be run from static, clean or trusted environment (e.g. started by
systemd or manually).

primes
''''''

Python3 script to print prime numbers in specified range.

For small ranges only, as it does brute-force [2, sqrt(n)] division checks,
and intended to generate primes for non-overlapping "tick % n" workload spacing,
not any kind of crypto operations.
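
The whole check fits in a few lines - roughly (not the script verbatim):

```python
import math

def primes(a, b):
    # brute-force trial division over [2, sqrt(n)] - small ranges only
    found = list()
    for n in range(max(a, 2), b + 1):
        if all(n % d for d in range(2, math.isqrt(n) + 1)): found.append(n)
    return found

print(primes(2, 30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```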

boot-patcher
''''''''''''

Py3 script to run on early boot, checking specific directory for update-files
and unpack/run these, recording names to skip applied ones on subsequent boots.

Idea for it is to be very simple, straightforward, single-file drop-in script to
put on distributed .img files to avoid re-making these on every one-liner change,
sending tiny .update files instead.

Update-file format:

- Either zip or bash script with .update suffix.
- Script/zip detected by python's zipfile.is_zipfile() (zip file magic).
- If zip, should contain "_install" (update-install) script inside.
- Update-install script shebang is optional, defaults to "#!/bin/bash".
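
The zip-vs-script distinction is easy to demonstrate with python stdlib
(illustration only, not code from the script):

```python
import io, zipfile

# a plain bash script has no zip magic - used as update-install script directly
script = io.BytesIO(b'#!/bin/bash\necho applying update\n')
assert not zipfile.is_zipfile(script)

# a zip update should have an "_install" script inside
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as z:
    z.writestr('_install', 'cd "$BP_UPDATE_DIR"\n# ...install commands...\n')
assert zipfile.is_zipfile(buf)
with zipfile.ZipFile(buf) as z: assert z.namelist() == ['_install']
```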

Update-install script env:

- BP_UPDATE_ID: name of the update (without .update suffix, e.g. "001.test").

- BP_UPDATE_DIR: unpacked update zip dir in tmpfs.

  Will only have "_install" file in it for standalone scripts (non-zip).

- BP_UPDATE_STATE: /var/lib/boot-patcher/

  Persistent dir created for this update, can be used to backup various
  updated/removed files, just in case.

  If left empty, removed after update-install script is done.

- BP_UPDATE_STATE_ROOT: /var/lib/boot-patcher

- BP_UPDATE_REBOOT: reboot-after flag-file (on tmpfs) to touch.

  | If reboot is required after this update, create (touch) file at that path.
  | Reboot will be done immediately after this particular update, not after all of them.

- BP_UPDATE_REAPPLY: flag-file (on tmpfs) to re-run this update on next boot.

  Can be used to retry failed updates by e.g. creating it at the start of the
  script and removing on success.

Example update-file contents:

- 2017-10-27.001.install-stuff.zip.update

  ``_install``::

    cd "$BP_UPDATE_DIR"
    exec pacman --noconfirm -U *.pkg.tar.xz

  ``*.pkg.tar.xz`` - any packages to install, zipped alongside that ^^^

- 2017-10-28.001.disable-console-logging.update (single update-install file)::

    patch -l /boot/boot.ini <<'EOF'
    --- /boot/boot.ini.old  2017-10-28 04:11:15.836588509 +0000
    +++ /boot/boot.ini      2017-10-28 04:11:38.000000000 +0000
    @@ -6,7 +6,7 @@
     hdmitx edid

     setenv condev "console=ttyAML0,115200n8 console=tty0"
    -setenv bootargs "root=/dev/mmcblk1p2 ... video=HDMI-A-1:1920x1080@60e"
    +setenv bootargs "root=/dev/mmcblk1p2 ... video=HDMI-A-1:1920x1080@60e loglevel=1"

     setenv loadaddr "0x1080000"
     setenv dtb_loadaddr "0x1000000"
    EOF
    touch "$BP_UPDATE_REBOOT"

- 2017-10-28.002.apply-patches-from-git.zip.update

  ``_install``::

    set -e -o pipefail
    cd /srv/app
    for p in "$BP_UPDATE_DIR"/*.patch ; do patch -p1 -i "$p"; done

  ``*.patch`` - patches for "app" from the repo, made by e.g. ``git format-patch -3``.

Misc notes:

- Update-install exit code is not checked.

- After update-install is finished, and if BP_UPDATE_REAPPLY was not created,
  ".done" file is created in BP_UPDATE_STATE_ROOT and update is
  skipped on all subsequent runs.

- Update ordering is simple alphasort, dependencies can be checked by update
  scripts via .done files (also mentioned in previous item).

- No auth (e.g. signature checks) for update-files, so be sure to send these
  over secure channels.

- Run as ``boot-patcher --print-systemd-unit`` for the only bit of setup it needs.

audit-follow
''''''''''''

Simple py3 script to decode audit messages from "journalctl -af -o json" output,
i.e. stuff like this::

  Jul 24 17:14:01 malediction audit: PROCTITLE
    proctitle=7368002D630067726570202D652044... (loooong hex-encoded string)
  Jul 24 17:14:01 malediction audit: SOCKADDR saddr=020000517F0000010000000000000000

Into this::

  PROCTITLE proctitle='sh -c grep -e Dirty: -e Writeback: /proc/meminfo'
  SOCKADDR saddr=127.0.0.1:81

Filters for audit messages only and strips long audit-id/time prefixes from
these, unless -a/--all is specified; puts separators between multi-line audit
reports and can add relative and/or differential timestamps (-r/--reltime and
-d/--difftime opts).
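
For instance, the SOCKADDR value above is a hex-dumped linux sockaddr struct -
a minimal AF_INET-only decoding sketch:

```python
import socket, struct

def decode_saddr_inet(saddr_hex):
    # sockaddr_in: u16 family (native-endian), u16 port (network byte
    # order), 4 address bytes, zero padding up to 16 bytes
    raw = bytes.fromhex(saddr_hex)
    family, = struct.unpack_from('=H', raw)
    if family != socket.AF_INET:  # AF_INET == 2 on linux
        raise ValueError(f'not an AF_INET sockaddr, family={family}')
    port, = struct.unpack_from('!H', raw, 2)
    return f'{socket.inet_ntoa(raw[4:8])}:{port}'

print(decode_saddr_inet('020000517F0000010000000000000000'))  # 127.0.0.1:81
```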

Audit subsystem can be very useful to understand which process modifies some
path, what's the command-line of some /bin/bash being run from somewhere
occasionally, or what process/command-line connects to some specific IP and what
scripts it opens beforehand - all without need for gdb/strace, or where they're
inapplicable.

Some useful incantations (cheatsheet)::

  # auditctl -e 1
  # auditctl -a exit,always -S execve -F path=/bin/bash
  # auditctl -a exit,always -F auid=1001 -S open -S openat
  # auditctl -w /some/important/path/ -p rwxa
  # auditctl -a exit,always -F arch=b64 -S connect

  # audit-follow -ro='--since=-30min SYSLOG_IDENTIFIER=audit' |
    grep --line-buffered -B1000 -F some-interesting-stuff | tee -a audit.log

  # auditctl -e 0
  # auditctl -D

| auditd + ausearch can be used as an offline/advanced alternative to such script.
| More powerful options for such task on linux can be sysdig and various BPF tools.

tui-binary-conv
'''''''''''''''

Simple ncurses-based interactive (TUI) decimal/hex/binary
py3 converter script for the terminal.

Main purpose is to easily experiment with flipping bits and digits in values,
seeing immediate changes in other outputs, nicely aligned/formatted/highlighted,
while doubling as an easy converter tool.

Controls are: cursor keys, home/end, backspace, insert (insert/replace mode),
0/1 + digits + a-f, q to quit.

There's a picture of it `on the blog page here`_.

.. _on the blog page here: http://blog.fraggod.net/2019/01/10/tui-console-dechexbinary-converter-tool.html

maildir-cat
'''''''''''

Python3 script to iterate over all messages in all folders of a maildir and
print (decoded) headers and plain + html body of each (decoded) message, with
every line prefixed by its filename.

Intended use is to produce a text dump of a maildir for searching or processing
it via any simple tools like grep or awk.

So using e.g. ``maildir-cat | grep 'important-word'`` will produce same output
as ``grep -r 'important-word' email-texts/`` would if emails+headers were dumped
as simple text files there.

| Can also be pointed to maildir subdirs (same thing) or individual files.
| Uses python stdlib email.* modules for all processing.

dns-update-proxy
''''''''''''''''

Small py3/asyncio UDP listener that receives ~100B ``pk || box(name:addr)``
libnacl-encrypted packets, decrypts (name, addr) tuples from there,
checking that:

- Public key of the sender is in -a/--auth-key list.
- Name doesn't resolve to same IP already, among any others (-c/--check option).
- Name has one of the allowed domain suffixes (-d/--update option).

If all these pass, specified BIND-format zone-file (for e.g. nsd_) is updated.
