Xen is a trademark of Cloud Software Group; we're not packaging
Xen(Server), but the Xen Project Hypervisor, which is open source and
owned by the Linux Foundation.
This is based on advice from Kelly Choi, the Xen Project Community
Manager, who has assisted us with the branding aspects of packaging.
Signed-off-by: Fernando Rodrigues <alpha@sigmasquadron.net>
Also, all URLs in package and module comments are updated.
At the time of this writing, the "Update History" page
(release notes) for tsm-client >=8.1.19 does not list any
"APARs" ("Authorized Program Analysis Reports") for 8.1.24.0.
Running the migrations in a systemd ExecStartPre was a mistake. The
migrations can take a long time to run and easily time out.
Moving this to a proper oneshot service solves this issue and fits the
systemd execution model better. We can now easily filter the migration
logs.
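A minimal sketch of what such a oneshot unit looks like; the service name,
package, and migrate command are hypothetical:

```nix
# Minimal sketch; "myapp" and its migrate subcommand are hypothetical.
systemd.services.myapp-migrations = {
  description = "Run myapp database migrations";
  # Run to completion before the main service starts.
  before = [ "myapp.service" ];
  requiredBy = [ "myapp.service" ];
  serviceConfig = {
    Type = "oneshot";
    # Long migrations no longer race an ExecStartPre timeout, and their
    # output can be filtered with `journalctl -u myapp-migrations`.
    ExecStart = "${pkgs.myapp}/bin/myapp migrate";
  };
};
```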
The fully-qualified name would certainly be verbose here, but `with` can
still be unclear even with a narrow scope. A short `let` adds clarity
without significantly increasing verbosity.
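A contrived comparison of the three options; the `lib.generators.toINI` call
and `cfg.settings` are just placeholders:

```nix
# Contrived example; cfg.settings is a placeholder.
# Fully qualified: unambiguous but noisy when repeated:
#   lib.generators.toINI { } cfg.settings
# `with`: short, but the reader has to guess where toINI comes from:
#   with lib.generators; toINI { } cfg.settings
# A short `let` keeps the origin explicit at little extra cost:
let inherit (lib.generators) toINI; in toINI { } cfg.settings
```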
This was incorrectly using `lib.version`, which is e.g.
`"24.11pre-git"`, but should have been the ZFS package version. Moreover,
the condition, at least per the comment, is reversed and should instead
be `versionOlder cfgZfs.package.version "2.2.0"`. However, the
entire premise seems to be incorrect, as ZFS 2.2.6 includes the spl
module. Since the previous condition here was effectively always true,
it would initially seem that the best move is to remove the conditional
altogether and always include the spl kmod. However, going back to
4360a87c45, where this condition was added,
the intent appears to be that spl was no longer needed here in the
then-pre-release ZFS (long since the case in all supported versions), due
to it being merged into ZFS mainline. Given that intent, and that our boot
tests on all versions succeed without including it in the initrd, remove
it.
These can be either an integer or a range.
Range options are necessary for `FREE_LIMIT` to take effect when used in
conjunction with `TIMELINE_LIMIT_*`.
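For example, a config along these lines (a sketch assuming the existing
`services.snapper.configs` layout) keeps between 5 and 10 hourly snapshots and
lets `FREE_LIMIT` decide within that window:

```nix
# Sketch, assuming the existing services.snapper.configs layout.
services.snapper.configs.home = {
  SUBVOLUME = "/home";
  TIMELINE_CREATE = true;
  TIMELINE_CLEANUP = true;
  # A range "min-max" instead of a plain integer: at least 5 snapshots are
  # always kept, and up to 10 as long as FREE_LIMIT is not exceeded.
  TIMELINE_LIMIT_HOURLY = "5-10";
  FREE_LIMIT = "0.2";
};
```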
It is currently tied to `services.avahi.enable`, which might not be
desirable.
With this change it is possible to disable the service with
`services.printing.browsed.enable = false`.
Factor out part of the provisioning script into a
wait-until-service-is-ready script, and put it unconditionally in
front of ExecStartPost=, so that services that depend on influxdb2 are
not started until influxdb2 responds to requests.
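A rough sketch of the readiness check, assuming the default 127.0.0.1:8086
listen address and the /health endpoint; in the module the script is prepended
to any existing ExecStartPost= entries:

```nix
# Rough sketch only; endpoint and port are the influxdb2 defaults.
systemd.services.influxdb2.serviceConfig.ExecStartPost = [
  (pkgs.writeShellScript "wait-for-influxdb2" ''
    set -euo pipefail
    # Block until the HTTP API answers, so dependent services see a
    # working influxdb2 once the unit is reported as started.
    until ${pkgs.curl}/bin/curl -sf http://127.0.0.1:8086/health > /dev/null; do
      sleep 1
    done
  '')
];
```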
Fixes https://github.com/NixOS/nixpkgs/issues/317017 ("Scrutiny tries to start before influxdb has started")
We currently package all CUDA versions from 10.0 onwards. In
some cases, CUDA is the only thing preventing us from removing old
versions of GCC. Since we currently don’t deprecate or remove CUDA
versions, this will be an increasing drag on compiler maintenance in
Nixpkgs going forward unless we establish a sensible policy. After
discussing this with @SomeoneSerge in the context of old versions
of GCC, I learned that there was already a desire to remove at least
versions prior to 11.3, as those versions were only packaged in the
old “runfile” format, but that it was blocked on someone doing
the work to warn about the upcoming deprecation for a release cycle.
This change adds a release note and warnings indicating that CUDA 10.x
and 11.x will be removed in Nixpkgs 25.05, about 8 months from now.
I chose this version cut‐off because these versions of CUDA require
GCC < 12. GCC releases a major version every year, and seems to
support about four releases at a time, releasing the last update to
the oldest version and marking it as unsupported on their site around
the time of the release of the next major version. Therefore, by the
time of the 25.05 release, we should expect GCC 15 to be released
and GCC 11 to become unsupported. Adding a warning and communicating
the policy of only shipping CUDA versions that work with supported
compilers in the release notes means that we should be able to
clean up old versions as required without any issue or extensive
deprecation period in future, without obligating us to do so if there
is a strongly compelling reason to be more lenient. That should help
avoid both an indefinitely‐growing list of CUDA versions and an
indefinitely‐growing list of GCC and LLVM versions.
As I’m not a user of CUDA myself, I can’t be sure of how sensible
this version support policy is, but I think it’s fair to say that
it’s reasonable for Nixpkgs to choose not to maintain compiler
versions that are unsupported upstream just for the sake of versions
of CUDA that are also unmaintained. CUDA 11.x has not received an
update for two years already, and would only become unsupported in
Nixpkgs in over half a year’s time.
CUDA 10.x is currently unused in‐tree except for the unmaintained
Caffe and NVIDIA DCGM, which depends on multiple CUDA versions solely
so that it can provide plugins for those versions. The latest DCGM
version has already removed support for CUDA 10.x and is just awaiting
an update in Nixpkgs. They maintain a list of supported versions to
build plugins for in their CMake build system, so it should be simple
enough for us to only build support for the versions of CUDA that we
support in Nixpkgs.
From what I can tell, CUDA 11.x is currently used by the following
packages other than DCGM:
* `catboost`, because of
<https://github.com/catboost/catboost/issues/2540>. It looks like
upstream has since redesigned this part of their build system, so
perhaps the problem is no longer present, or would be easier to fix.
* `magma_2_6_2`, an old version from before upstream added CUDA
12 support. This seems okay to break to me; that version is not
maintained and will never be updated for new CUDA versions, and
the CUDA support is optional.
* `paddlepaddle`, which, uh, also requires OpenSSL 1.1 of all
things. <https://github.com/PaddlePaddle/Paddle/issues/67571>
states that PaddlePaddle supports up to 12.3.
* `python3Packages.cupy`, which is listed as “possibly incompatible
with cutensor 2.0 that comes with `cudaPackages_12`”. I’m
not sure what the “possibly” means here, but according to
<https://github.com/cupy/cupy/tree/v13.3.0?tab=readme-ov-file#installation>
they ship binary wheels using CUDA 12.x so I think this should
be fine.
* `python3Packages.tensorrt`, which supports CUDA 12.x going by
<https://github.com/NVIDIA/TensorRT/blob/release/10.4/CMakeLists.txt#L111>.
* TensorFlow, which has a link to
<https://www.tensorflow.org/install/source#gpu> above the
`python3Packages.tensorflow-bin` definition, but that page lists
the versions we package as supporting CUDA 12.x.
Given the years since CUDA 11.x received any update upstream, and the
seemingly very limited set of packages that truly require it, I think
the policy of being able to drop versions that require unsupported
compilers starting from the next Nixpkgs release is a reasonable
one, but of course I’m open to feedback from the CUDA maintainers
about this.
When `diskImage = null`, the root fs is a tmpfs instead of
`/dev/vda`. Thus, it doesn't have to wait for virtio modules to load
before being mounted. The root fs is a dependency of shared
directories by nature of being their parent directory. With the root fs
no longer depending on `/dev/vda`, these shared directories may attempt
to mount before the virtio modules are loaded.
In preparation for the deprecation of `stdenv.isX`.
These shorthands are not conducive to cross-compilation because they
hide the platforms.
Darwin might get cross-compilation support, for which the continued usage of `stdenv.isDarwin` would get in the way.
One example of why this is bad, and how it especially affects compiler packages:
https://www.github.com/NixOS/nixpkgs/pull/343059
There are too many files to go through manually, but a treewide change should
get users thinking when they see a `hostPlatform.isX` in a place where it
doesn't make sense.
```
fd --type f "\.nix" | xargs sd --fixed-strings "stdenv.is" "stdenv.hostPlatform.is"
fd --type f "\.nix" | xargs sd --fixed-strings "stdenv'.is" "stdenv'.hostPlatform.is"
fd --type f "\.nix" | xargs sd --fixed-strings "clangStdenv.is" "clangStdenv.hostPlatform.is"
fd --type f "\.nix" | xargs sd --fixed-strings "gccStdenv.is" "gccStdenv.hostPlatform.is"
fd --type f "\.nix" | xargs sd --fixed-strings "stdenvNoCC.is" "stdenvNoCC.hostPlatform.is"
fd --type f "\.nix" | xargs sd --fixed-strings "inherit (stdenv) is" "inherit (stdenv.hostPlatform) is"
fd --type f "\.nix" | xargs sd --fixed-strings "buildStdenv.is" "buildStdenv.hostPlatform.is"
fd --type f "\.nix" | xargs sd --fixed-strings "effectiveStdenv.is" "effectiveStdenv.hostPlatform.is"
fd --type f "\.nix" | xargs sd --fixed-strings "originalStdenv.is" "originalStdenv.hostPlatform.is"
```
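For illustration, a typical call site then changes like this (package and
inputs chosen arbitrarily):

```nix
# Purely illustrative call site:
# before:
#   buildInputs = lib.optionals stdenv.isDarwin [ libiconv ];
# after: it is now explicit that the *host* platform is meant
buildInputs = lib.optionals stdenv.hostPlatform.isDarwin [ libiconv ];
```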
Unlike regular input-addressed or fixed-output derivations, floating and
deferred derivations do not have their store path available at evaluation time,
so their outPath is a placeholder. The following changes are needed for
replaceDependencies to continue working:
* Detect the placeholder and retrieve the store path using another IFD hack
when collecting the rewrite plan.
* Try to obtain the derivation name needed for replaceDirectDependencies from
the derivation arguments if a placeholder is detected.
* Move the length mismatch detection to build time, since the placeholder has a
fixed length that is unrelated to the length of the actual store path.
The tests cannot be directly built by Hydra, because replaceDependencies relies
on IFD. Instead, they are put inside a NixOS test where they are built on the
guest.
Move replaceRuntimeDependencies to the replaceDependencies namespace,
where the structure is more consistent with the replaceDependencies
function. This makes space for wiring up cutoffPackages as an option
too.
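Usage would then look roughly like this sketch; the option and attribute names
are assumed from the existing replaceDependency interface and may differ:

```nix
# Sketch only; exact option and attribute names are assumed.
# patchedOpenssl stands for a hypothetical locally patched derivation.
system.replaceDependencies.replacements = [
  { oldDependency = pkgs.openssl; newDependency = patchedOpenssl; }
];
```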
By default, the system's initrd is excluded. The replacement process does not
work properly anyway due to the structure of the initrd (the files being copied
into it, and it being compressed). In the worst case (which has been observed
to actually occur in practice), a store path makes it into the incompressible
parts of the archive, checksums are broken, and the system won't boot.
Instead of iterating over all replacements and applying them one by one,
use the newly introduced replaceDependencies function to apply them all
at once for replaceRuntimeDependencies. The advantages are twofold in
case there are multiple replacements:
* Performance is significantly improved, because there is only one pass
over the closure to be made.
* Correctness is improved, because replaceDependencies also replaces
dependencies of the replacements themselves if applicable.
Fixes: https://github.com/NixOS/nixpkgs/issues/4336
4b836fb680 added `pkgs.grub2_efi` to `environment.systemPackages` so that it would be in the Nix store and available for install. But `pkgs.grub2` is already in the list. This causes the various paths of the two GRUB2 versions to collide. To fix this, put `pkgs.grub2_efi` into `system.extraDependencies` instead. This should achieve the same effect of adding the second GRUB2 version to the Nix store without the paths colliding in the environment.
To reproduce the problem, execute `nix-build nixos -I nixos-config=nixos/modules/installer/cd-dvd/iso-image.nix -A config.system.build.isoImage` and look for messages like
```
warning: collision between `/nix/store/9jk1p9n5dl431lcm4w9p6x6x8a00dm0q-grub-2.12/bin/grub-install' and `/nix/store/809l0i6aydg4zhn3kqf723brjyp2qm8h-grub-2.12/bin/grub-install'
```
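The fix amounts to something like this sketch (the ISO image module sets more
than shown here):

```nix
# Illustrative sketch of the fix:
# only one GRUB variant stays linked into the system environment...
environment.systemPackages = [ pkgs.grub2 ];
# ...while the EFI build is still pulled into the Nix store, where its
# paths cannot collide with anything.
system.extraDependencies = [ pkgs.grub2_efi ];
```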
The nixpkgs/nixos version includes a suffix like "pre-git" or
"pre676716.6f16e67b4921", which does not match the conventional
"XX.YY" format of system.stateVersion.
Unifying the format to "XX.YY" allows for (stricter) validation (see #317858),
and the introduction in 3a5ff9a68c was
only concerned with silencing warnings, so the addition of the "pre.*"
suffix into stateVersion was probably unintentional.
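One way to express the intended "XX.YY" form, as a sketch (not necessarily the
exact change made here):

```nix
# Sketch; not necessarily the exact change made here.
# lib.version                          == "24.11pre-git"
# lib.versions.majorMinor lib.version  == "24.11"   (the conventional "XX.YY")
system.stateVersion = lib.versions.majorMinor lib.version;
```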
Adding custom plugins causes the `vim` command to be a wrapper script
running `vim -u ...`, which prevents it from loading the default ~/.vimrc.
(This is analogous to #177375 about neovim.)
As of Vim 9, the syntax-highlighting portion of the nix plugin is
upstream; the full plugin is only needed for indentation etc. (see also
e261eb152b). So, using regular pkgs.vim
works around this behavior/bug and causes any ~/.vimrc to get loaded,
without regressing the syntax highlighting support that motivated the
change being reverted here.
This reverts commit 0b5a0cbc69.
I'm maintaining the associated packages, so it makes sense to add myself
to their modules as well.
Signed-off-by: Felix Singer <felixsinger@posteo.net>
The GUI of GlobalProtect-openconnect is unfree software, while the CLI is
licensed as GPLv3-only. This packaging work focuses on the CLI and the
components required for it.
Link: https://github.com/yuezk/GlobalProtect-openconnect
Signed-off-by: Rahul Rameshbabu <sergeantsagara@protonmail.com>
The 1.x iteration of globalprotect-openconnect is no longer being
developed. Remove related components from nixpkgs.
Signed-off-by: Rahul Rameshbabu <sergeantsagara@protonmail.com>
It thinks we want to expand the `*` regex expressions inside the `sed`
commands. We do not.
Signed-off-by: Fernando Rodrigues <alpha@sigmasquadron.net>
tpm2.target was functionally useless without these services and this
generator. When systemd-cryptsetup-generator creates
systemd-cryptsetup@.service units, they are ordered after
systemd-tpm2-setup-early.service, not tpm2.target. These services are
themselves ordered after tpm2.target.
Note: The systemd-tpm2-setup(-early) services will serve no *function*
under a normal NixOS system at the moment. Because of their
ConditionSecurity=measured-uki, they will always be skipped, unless
you are building an appliance with the system.build.uki feature. Thus,
these are enabled solely for their systemd unit ordering properties.
This module provides some abstraction for a multi-stage build to create
a dm-verity protected NixOS repart image.
The opinionated approach realized by this module is to first create an
immutable, verity-protected Nix store partition, then embed the root
hash of the corresponding verity hash partition in a UKI, which is then
injected into the ESP of the resulting image.
The UKI can then precisely identify the corresponding data from which
the entire system is bootstrapped.
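A very rough sketch of the resulting partition layout, following the existing
image.repart module; the partition names and repart keys here are assumptions,
and the real module wires this up across two build stages:

```nix
# Very rough sketch; names and keys are assumptions, not the module's API.
image.repart.partitions = {
  "10-esp".repartConfig = {
    Type = "esp";
    Format = "vfat";
    # The UKI carrying the embedded verity root hash is injected here.
  };
  "20-store-verity".repartConfig = {
    Type = "usr-verity";      # hash partition for the store
    Verity = "hash";
    VerityMatchKey = "store";
  };
  "30-store".repartConfig = {
    Type = "usr";             # immutable, verity-protected Nix store
    Format = "erofs";
    Verity = "data";
    VerityMatchKey = "store";
  };
};
```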
The module comes with a script that checks that the UKI used in the final
image corresponds to the intermediate image created in the first step.
This is necessary to notice incompatible substitutions of
non-reproducible store paths, for example when working with distributed
builds, or when offline-signing the UKI.
For some reason, chromium (still the nixpkgs version) hangs inside the
normal test VM, while working fine in .driverInteractive.
I suspect this might have to do with the existence of a display in
.driverInteractive, although neither VM runs X11 or Wayland.
The assertion message should include the `nixpkgs.config` value; however,
it currently includes the entire `nixpkgs.config` _option_.
This means the type, declarations, definitions, etc. are all printed.