Age | Commit message | Author |
|
|
|
|
|
This affects both the usage() text, and the error message if the
`/.arch-chroot` version doesn't match. The latter is the one that I
really care about, and motivates this change.
On Parabola, the `arch-nspawn` program isn't in PATH, it's somewhere
under `/usr/lib/`, and gets called as a helper to user-facing
programs; and the error message is displayed directly to the user.
These programs consistently put two spaces after a period when
printing a message to the terminal.
|
|
|
|
- Use `read -r` instead of other forms of read or looping.
- Use arrays instead of strings with whitespace.
- In one instance, use ${var%%.*} instead of $(echo $var | cut -d. -f1).
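For illustration, a hedged before/after sketch of those three kinds of
changes (the variable and file names are hypothetical, not taken from the
actual diff):

    # before: unsafe read, string accumulation, useless echo|cut
    pkgs=
    while read line; do pkgs="$pkgs $line"; done < pkglist.txt
    major=$(echo $version | cut -d. -f1)

    # after: read -r, an array, and a parameter expansion
    pkgs=()
    while read -r line; do pkgs+=("$line"); done < pkglist.txt
    major=${version%%.*}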
|
|
These changes are all strictly "slap some double-quotes in there".
Anything more than that is not included in this commit.
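A hypothetical example of the kind of change (the variable and path are
made up):

    copydir='/var/tmp/copy with spaces'
    stat $copydir      # before: word-splits into three arguments
    stat "$copydir"    # after:  a single argument, as intended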
|
|
These are purely stylistic changes that make shellcheck complain less.
This does NOT include things like quoting currently unquoted variables.
|
|
|
|
The default m4 quote characters, `QUOTE', are troublesome, because ` is
fairly likely to pop up in a shell script (if not for a subshell, then
in comments and user-facing messages).
So, this changes the quote characters to [[[QUOTE]]], as it is unlikely
to see three brackets together like that, let alone in unbalanced sets.
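A minimal sketch of what the new quoting looks like in practice (the
macro name here is hypothetical):

    changequote([[[,]]])dnl
    define([[[CHROOT_VERSION]]], [[[v4]]])dnl
    # Backticks and apostrophes now pass through m4 untouched:
    echo "Can't find \`$0'; chroot version is CHROOT_VERSION"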
|
|
The absence of it was allowing an (m4-produced) syntax error in a change
I had made to be masked.
|
|
What this is really doing is fixing a conflict that I had incorrectly
resolved when rebasing what became 2fd5931 onto cda9cf4. Of course,
because of dynamic scoping, everything still worked as intended.
Before cda9cf4, it was appropriate for download_sources to take src_owner
as an argument, but after cda9cf4, it is appropriate for it to take
makepkg_user instead. However, it still takes src_owner as an argument
and pays no attention to it, instead looking at makepkg_user, which it
happily inherits because of dynamic scoping.
So change it to take makepkg_user as the argument.
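A minimal, self-contained sketch of the trap (not the actual devtools
code; names are simplified):

    #!/bin/bash
    download_sources() {
        local src_owner=$1
        # The body actually uses $makepkg_user, which it inherits from the
        # caller through dynamic scoping, so the wrong argument goes unnoticed.
        echo "downloading as $makepkg_user (argument \$1=$src_owner is ignored)"
    }

    main() {
        local makepkg_user=builduser src_owner=root
        download_sources "$src_owner"     # old call site: works only by accident
        download_sources "$makepkg_user"  # what this commit changes it to
    }
    main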
|
|
The `-xdev` flag to `find` keeps it from recursing into subvolumes, so it
only finds subvolumes nested one level deep. Fix this by having the
function call itself recursively.
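A hedged sketch of the recursive approach (the function name and exact
flags are assumptions; is_subvolume is assumed to be defined elsewhere):

    subvolume_delete_recursive() {
        local path=$1 subvol
        is_subvolume "$path" || return 0
        # -xdev stops find at each nested subvolume root, so recurse into each
        while read -r subvol; do
            subvolume_delete_recursive "$subvol"
        done < <(find "$path" -mindepth 1 -xdev -inum 256)
        btrfs subvolume delete "$path"
    }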
|
|
This is inspired by the thought that went in to the delete_chroot
is_subvolume commit.
sync_chroot($chrootdir, $copydir) copies `$chrootdir/root` to `$copydir`.
That seems a little silly; why do we care about "$chrootdir"? Have it just
be sync_chroot(source, destination) like every other sync/copy command.
Where this becomes tricky is the check that decides whether we are going
to use btrfs subvolumes or not. We don't care if "$source/.." is on
btrfs; the root could be a directly-mounted subvolume, and the
destination could be another subvolume of the same btrfs mounted
somewhere else.
The things we do care about are:
- The source is a btrfs subvolume (so that we can snapshot it)
- The source is on the same filesystem as the directory that the copy will
be created in.
- If the destination exists:
* that it is not a mountpoint (so that we can delete and recreate it)
* that it is a btrfs subvolume (so that we can quickly delete it)
On the last point, it isn't necessary for creating the new snapshot, just
for quick deletion. That can be a separate check, where we use regular
`rm` for deleting the existing copy, but use subvolume snapshots for
creating the new one.
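A hedged sketch of those checks (helper names like is_subvolume are
assumptions, and the filesystem-ID comparison is just one way to test
"same filesystem"; the real code may differ):

    src_fsid=$(stat -f -c %i "$source")                  # filesystem ID of the source
    dest_fsid=$(stat -f -c %i "$(dirname -- "$dest")")   # ...and of the copy's parent dir

    use_snapshot=false
    if is_subvolume "$source" && [[ $src_fsid == "$dest_fsid" ]] &&
       { [[ ! -e $dest ]] || ! mountpoint -q "$dest"; }; then
        use_snapshot=true
    fi

    # Deleting an existing copy is a separate decision; the subvolume check
    # only buys us a faster delete.
    if [[ -e $dest ]]; then
        if is_subvolume "$dest"; then
            btrfs subvolume delete "$dest"
        else
            rm -rf --one-file-system "$dest"
        fi
    fi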
|
|
Also, shorten the "Synchronizing" message to only include the full path
to the copy if it was specified.
The capslocked variable names in the Usage comment were references to
things in Parabola's tools that didn't make much sense here, out of
context.
|
|
First of all, it ran `is_btrfs "$chrootdir"` to decide if it was on
btrfs, but $chrootdir wasn't defined locally; it just happens to work
because $chrootdir was defined in main(). (I noticed this because in
Parabola, it is called differently, so $chrootdir was empty).
So I was tempted to just change it to `is_btrfs "$copydir"`, but if
$copydir is just a regular directory on a btrfs filesystem, that check
would still pass, and the subvolume-based deletion would leave much of
$copydir intact. What we really care about is whether $copydir is a
btrfs subvolume, which we can check by combining the is_btrfs check with
inspecting the inum of the directory.
I put this combined check in lib/archroot.sh:is_subvolume.
https://lists.archlinux.org/pipermail/arch-projects/2013-September/003901.html
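A minimal sketch of that combined check, assuming GNU stat (the real
lib/archroot.sh may be written differently):

    is_btrfs() {
        [[ "$(stat -f -c %T "$1")" == btrfs ]]
    }

    is_subvolume() {
        # btrfs subvolume roots always have inode number 256
        [[ "$(stat -f -c %T "$1")" == btrfs && "$(stat -c %i "$1")" == 256 ]]
    }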
|
|
This means wrapping variable initialization in init_variables(), and the
main program routine in main().
I did NOT put `shopt -s nullglob` into a function.
It might make sense to move init_variables() down into the main()
function, instead of having it as a separate function up top (if this
were done, then the `-g` flag passed to `declare` in init_variables()
could be dropped). However, in the interest of keeping the `diff -w`
small, and merges/rebases simpler, this isn't done here.
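Roughly, the resulting layout looks like this (the variable names and
defaults are placeholders, not the actual script):

    shopt -s nullglob    # deliberately left outside of any function

    init_variables() {
        declare -g chrootdir=
        declare -g makepkg_args=(--syncdeps --noconfirm)
    }

    main() {
        :    # argument parsing and the former top-level routine go here
    }

    init_variables
    main "$@"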
|
|
I overlooked this one. Fixes FS#53513.
|
|
|
|
A previous iteration of this change (libretools commit d7dcce53396d)
simply inserted `env -i` to clear the environment.
However, that led to it ignoring proxy settings, which some users had
problems with:
https://labs.parabola.nu/issues/487:
> To fix other bugs, the pacstrap environment is blank, which also
> means that the proxy settings are blank.
So (in libretools commit d17d1d82349f), I changed it to use `declare
-x` to inspect the environment, and create a version of it only
consisting of variables ending with "_proxy" (case-insensitive).
I honestly don't remember what "other bugs" prompted me to clear the
environment in the first place.
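The idea, as a hedged sketch (the real code parses `declare -x` output;
this version uses `compgen -e` for brevity):

    proxy_env=()
    while read -r varname; do
        if [[ ${varname,,} == *_proxy ]]; then       # case-insensitive suffix match
            proxy_env+=("$varname=${!varname}")
        fi
    done < <(compgen -e)
    # run with an otherwise-empty environment, keeping only the proxy settings
    env -i "${proxy_env[@]}" env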
|
|
In sync_chroot(), this makes the messages be a bit more precise with
exactly which thing they are syncing where. This is based on my users
expressing confusion at what is going on (especially when something is
taking a long time, and they have to blame something for blocking).
With these changes, I haven't gotten such confusion in a long time
(but maybe my users just got used to it).
In delete_chroot(), this changes "temporary copy" to "chroot copy",
since in Parabola's version of the tools, the function can get called
from other places, and it isn't necessarily operating on a temporary
copy.
|
|
This allows us to run an ARM chroot on an x86 box, as the binfmt
runner will set the architecture for us, and the x86
`/usr/bin/setarch` program won't know about the ARM architecture
string.
|
|
This allows us to copy in files like `qemu-arm-static`, which is
necessary for running an ARM chroot on an x86 box.
|
|
Even though main() doesn't call `set -u`, this way the functions will
continue to work if copied into an environment with `set -u`, and we
are ready if we ever want to start using `set -u`.
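The usual pattern for this is a `${var:-default}` expansion; a
hypothetical example (the variable name is made up here):

    # errors out under `set -u` if $clean_first was never assigned:
    #   if $clean_first; then ...
    # safe under `set -u`, defaulting to false:
    if "${clean_first:-false}"; then
        echo "cleaning first"
    fi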
|
|
Rather than them simply being named blocks of code with braces around
them.
That is: have them take things via arguments rather than global
variables.
Specific notes:
- create_chroot->sync_chroot:
I pulled out locking the destination chroot; getting that lock is
now the caller's responsibility. It still handles locking the
source chroot though.
I pulled the `if [[ ! -d $copydir ]] || $clean_first;` check out; it is
now the caller's responsibility to use that check when deciding whether
to call sync_chroot.
However, when pulling that check out, I left it as `if true;`, to
keep an indentation level. This patch has had to be rebased/merged
many times, and changing the indentation is a sure way to make that
go less smoothly; I'm not going to re-indent this block until I see
the check removed in the git.archlinux.org/devtools.git repository.
- install_packages:
1. Receive the list of packages as arguments, rather than a global
variable.
2. Make the caller responsible for looking at PKGBUILD. From the
name and arguments, one would never expect it to look at PKGBUILD.
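For the install_packages part, a hedged before/after sketch (the pacman
invocation is simplified; the real function runs it inside the chroot):

    # before: reads a global array set up elsewhere
    install_packages() {
        pacman -U --noconfirm -- "${install_pkgs[@]}"
    }
    install_packages

    # after: the caller passes the package files explicitly
    install_packages() {
        pacman -U --noconfirm -- "$@"
    }
    install_packages "${install_pkgs[@]}"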
|
|
|
|
This is similar to common C #ifdef guards.
I was tempted to wrap the entire thing in the if/fi, rather than use
'return' to bail early. However, that means it won't execute anything
until after it reaches 'fi'. And if `shopt -s extglob` isn't executed
before parsing, then it will syntax-error on the extended globs. One
solution would have been to move `shopt -s extglob` up above the
include-guard. But the committed solution is all-around simpler.
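The committed pattern looks roughly like this (the guard variable name is
an assumption; the file is meant to be sourced, not executed):

    # Include guard: bail out early if this file was already sourced.
    # Using an early `return` (instead of wrapping everything in if/fi)
    # keeps the `shopt -s extglob` below executing before the parser
    # reaches any extended globs later in the file.
    [[ -z ${_INCLUDE_COMMON_SH:-} ]] || return 0
    _INCLUDE_COMMON_SH=true

    shopt -s extglob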
|
|
|
|
|
|
|
|
embedding.
|
|
It was displaying the value of the `makepkg_args` variable, which may
have already been changed by the argument parsing by the time it gets
to `-h`. Now there is a separate `default_makepkg_args` variable.
|
|
This involves extending the signature of lib/common.sh's `stat_busy()`,
`lock()`, and `slock()`. The `mesg=$1; shift` in stat_busy even suggests
that this is what was originally intended for it.
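A hedged sketch of the printf-style signature and a call (color handling
omitted; details may differ from the real lib/common.sh):

    stat_busy() {
        local mesg=$1; shift
        printf "==> ${mesg}..." "$@" >&2
    }
    stat_done() {
        printf "done\n" >&2
    }

    copy=testing-x86_64
    stat_busy "Creating clean working copy [%s]" "$copy"
    sleep 1    # stand-in for the long-running work
    stat_done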
|
|
In cases where there is no license specified, the file is tagged as
"License: Unspecified". Obviously, that is not ideal, but it
highlights the fact, and I hope that it encourages whoever has the
authority to specify the license to do so.
On that note, to anyone who may have the authority to specify the
license of files in devtools: the current license of many files is
GPLv2 with no option for later versions; I implore you to re-license
them to have the "or any later version" option.
|
|
It was confusing Emacs and screwing up the syntax highlighting and
auto-indentation for the rest of the file.
|
|
This provides a cross-editor hint that the syntax of the file is Bash.
|
|
|
|
|
|
Allow for locks to be inherited. Inheriting the lock is something that
mkarchroot could do previously, but has since lost the ability to do. This
allows the programs to be more composable.
Do this by, instead of unconditionally opening $file on $fd, first
checking whether $file is already open on $fd, and using it if it is.
The naive way of doing this would be to `$(readlink /dev/fd/$fd)` and
compare that to `$file`. However, if `$file` is itself a symlink, or there
is a symlink somewhere in the path to `$file`, then this could easily fail.
Instead, check `[[ "/dev/fd/$fd" -ef "$file" ]]`. Even though the Bash
documentation (`help test`) says that `-ef` checks whether the two files
are hard links to each other, because it uses stat(3) (which resolves
symlinks) to do this check, it also works with the /dev/fd/ soft links.
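A hedged sketch of the resulting open-if-needed logic (the helper name is
hypothetical; in devtools this lives inside lock()/slock(), which also
handle the flock and messaging):

    acquire_fd() {
        local fd=$1 file=$2
        # Open $file on $fd only if it isn't already open there; `-ef`
        # resolves the /dev/fd/ symlink, so it also works when $file is
        # itself a symlink.
        if [[ ! "/dev/fd/$fd" -ef "$file" ]]; then
            eval "exec $fd>"'"$file"'
        fi
    }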
|
|
`lock_close FD` is easier to remember than `exec FD>&-`; and is especially
easier if FD is a variable (though that isn't actually taken advantage of
here).
This uses Bash 4.1+ `exec {var}>&-`, rather than the clunkier
`eval exec "$var>&-"` that was necessary in older versions of Bash.
Thanks to Dave Reisner for pointing this new bit of syntax out to me
the last time I submitted this (back in 2014, 4.1 had just come out).
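Which amounts to something like this (it matches the description above;
the real function may differ in detail):

    lock_close() {
        local fd=$1
        # Bash 4.1+: {fd} takes the descriptor number from the variable
        exec {fd}>&-
    }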
|
|
|
|
The systemd package creates a subvolume at /var/lib/machines (through
tmpfiles), if it can. We need to delete this subvolume before we can
delete the parent subvolume.
Look through the root for inodes with the number 256. These identify
subvolume roots.
|
|
|
|
Move the function and save the orig_argv right along with it.
|
|
|
|
makepkg --asroot was removed with pacman 4.2. Allow specifying a
separate makepkg user on the command line instead.
Fixes FS#43432
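A hypothetical invocation showing the idea (the option letters used here
are assumptions, not a definitive reference):

    makechrootpkg -r /var/lib/archbuild/extra-x86_64 -U builduser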
|
|
The way in which makechrootpkg reads variables from makepkg.conf(5) is
different from makepkg, in that it reads a subset of defined
variables, and only if they were not set in the environment before.
Mention this in the usage text.
Fixes FS#44827
|
|
|
|
This removes the preservation of HOME being /build just for the pacman
sudo call. The former leads to unbuildable packages when a to-be-installed
dependency writes something into the HOME dir (e.g. .config). The
resulting directories won't be writable by the builduser, as they are
owned by root:root, and ultimately anything that requires writing to
them will fail to build.
|
|
|
|
|