Contour was broken for aarch64 in #253334, and then broke completely
for all platforms in #344788.
This removes the broken package, and adds a notice to remove broken
packages in the future. aarch64 users have waited a year for this to be
fixed, so I think we should lean towards removing broken packages more
eagerly in general; the fix can then come when it is ready, instead of
letting it block this.
Resolves: #258515
Signed-off-by: Christina Sørensen <christina@cafkafk.com>
In order to not expose Redmine on all interfaces, allow configuring the
IP address it should bind to. Listen on 0.0.0.0 by default.
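A minimal usage sketch, assuming the new option is exposed as
`services.redmine.address` (the exact option name and values here are
illustrative):

```
services.redmine = {
  enable = true;
  # Bind only to localhost instead of the 0.0.0.0 default mentioned above.
  address = "127.0.0.1";
  port = 3000;
};
```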
Signed-off-by: Felix Singer <felixsinger@posteo.net>
Just noticed that I apparently dropped this check while restructuring
the Nextcloud tests[1], effectively disabling it.
This patch re-adds the test and adjusts the code accordingly.
I also noticed that the old check of whether the cache is actually used
(`test "[]" = "$(redis-cli --json KEYS "*")"`) was broken: the
`nextcloud.fail()` wrapper hid the fact that the `redis-cli` invocation
was failing due to a missing password. Fixed the subtest accordingly.
[1] 0b31ada92b
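The gist of the fixed check, as a hedged sketch (the password literal
and exact invocation are illustrative, not the committed test code):

```
testScript = ''
  # Only meaningful once redis-cli can authenticate; without a password the
  # invocation fails outright, which the old fail()-based check masked.
  nextcloud.succeed(
      "test \"$(redis-cli -a secret --json KEYS '*')\" != '[]'"
  )
'';
```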
Having access to the original Nix partition definitions in the builder
should make it a bit easier to manipulate them and still provide access
to the manipulated results.
This only ever worked for the session, not for the greeter. Writing the information out to a file should be more consistent.
To make sure that this works, and keeps working, for both the greeter and the session, also add a new VM test.
The majority of users these days will install NixOS on SSD/NVMe-based
storage. Enabling fstrim ensures that the TRIM operation on this type of
storage is run at least once a week. This will improve the performance
and lifetime of said devices. This also works in virtual machines, as
formats such as qcow2 or vmdk support TRIM.
Ubuntu has had a similar systemd timer enabled by default for quite a
while.
Enabling this service will not increase the dependency closure as
util-linux is already part of the base system.
In case only filesystems that are not supported by fstrim are used, the
overhead is negligible, as fstrim runs in less than a second once a week.
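For illustration, opting out remains a one-liner for users who do not
want the weekly TRIM job (assuming the new default is set as a plain
option default):

```
# Explicit opt-out in a user configuration; the module default becomes true.
services.fstrim.enable = false;
```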
The previous hardening change restricted the unit too much, breaking
legitimate functionality of logrotate.
Unfortunately this was not covered by our NixOS test.
Xen is a trademark of the Cloud Software Group; we're not packaging
Xen(Server), we're packaging the Xen Project Hypervisor, which is open
source and owned by the Linux Foundation.
This is based on advice from Kelly Choi, the Xen Project Community
Manager, who has assisted us in the branding aspects of packaging.
Signed-off-by: Fernando Rodrigues <alpha@sigmasquadron.net>
Also, all URLs in package and module comments are updated.
At the time of this writing, the "Update History" page
(release notes) for tsm-client >=8.1.19 does not list any
"APARs" ("Authorized Program Analysis Reports") for 8.1.24.0.
Running the migrations in a systemd ExecStartPre was a mistake. The
migrations can take a long time to run and easily time out.
Moving them to a proper oneshot service solves this issue and fits the
systemd execution model better. We can now also easily filter the
migration logs.
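A rough sketch of the pattern, with hypothetical service and package
names (the actual unit wiring in this change may differ):

```
systemd.services.myapp-migrations = {
  description = "myapp database migrations";
  # Ordered before the main service, with its own timeout and its own log
  # stream, so slow migrations no longer hit the ExecStartPre start-up timeout.
  before = [ "myapp.service" ];
  requiredBy = [ "myapp.service" ];
  serviceConfig = {
    Type = "oneshot";
    RemainAfterExit = true;
    ExecStart = "${pkgs.myapp}/bin/myapp migrate";
  };
};
```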
The fully-qualified name would certainly be a lot here, but `with` can
still be unclear even with narrow scope. A short `let` adds clarity
without significantly increasing verbosity.
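An illustrative contrast (not the actual code this change touches):

```
# `with` leaves the origin of names implicit, even in a narrow scope:
options.foo.enable = with lib; mkEnableOption "foo";
# a short `let` keeps the origin explicit at little cost in verbosity:
options.bar.enable = let inherit (lib) mkEnableOption; in mkEnableOption "bar";
```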
This was incorrectly getting `lib.version` which is e.g.
`"24.11pre-git"`, but should have been the ZFS package version. However,
the condition, at least per the comment, is reversed and should instead
be `versionOlder cfgZfs.package.version "2.2.0"`. However, the
entire premise seems to be incorrect, as ZFS 2.2.6 includes the spl
module. Since the previous condition here was effectively always true,
it would initially seem the best move is to remove the conditional
altogether and always include the spl kmod. However, going back to
4360a87c45 where this condition was added,
the intent appears to be that spl was no longer needed here for the
then-pre-release ZFS (long since true of all supported versions), due to it
being merged into ZFS mainline. Given that intent and that our boot
tests on all versions succeed without including it in the initrd, remove
it.
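For reference, had the conditional been kept, the corrected check would
have read roughly as follows (the surrounding option path is an
assumption; the change instead drops spl from the initrd entirely):

```
boot.initrd.kernelModules = [ "zfs" ]
  ++ lib.optional (lib.versionOlder cfgZfs.package.version "2.2.0") "spl";
```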
These can be either an integer or a range.
Range options are necessary for `FREE_LIMIT` to take effect when used in
conjunction with `TIMELINE_LIMIT_*`.
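For example (values illustrative; the option path assumes the snapper
module passes these settings through verbatim):

```
services.snapper.configs.home = {
  SUBVOLUME = "/home";
  TIMELINE_CREATE = true;
  TIMELINE_CLEANUP = true;
  # A "min-max" range lets the space-aware cleanup driven by FREE_LIMIT trim
  # snapshots down to the minimum; a plain integer pins the count instead.
  TIMELINE_LIMIT_HOURLY = "5-10";
  FREE_LIMIT = "0.2";
};
```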
It is currently tied to `services.avahi.enable`, which might not be
desirable.
With this change it is possible to disable the service with
`services.printing.browsed.enable = false`.
Factor out part of the provisioning script into a
wait-until-service-is-ready script, and put it unconditionally in
front of ExecStartPost=, so that services that depend on influxdb2 are
not started until influxdb2 responds to requests.
Fixes https://github.com/NixOS/nixpkgs/issues/317017 ("Scrutiny tries to start before influxdb has started")
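A rough sketch of the idea (not the module's exact script; port and
health endpoint are the influxdb2 defaults):

```
systemd.services.influxdb2.serviceConfig.ExecStartPost = [
  # Block here until the HTTP API responds, so units ordered after influxdb2
  # only start once the service actually accepts requests.
  (pkgs.writeShellScript "wait-for-influxdb2" ''
    until ${lib.getExe pkgs.curl} -fsS http://127.0.0.1:8086/health > /dev/null; do
      sleep 1
    done
  '')
];
```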
We currently package all CUDA versions from 10.0 onwards. In
some cases, CUDA is the only thing preventing us from removing old
versions of GCC. Since we currently don’t deprecate or remove CUDA
versions, this will be an increasing drag on compiler maintenance in
Nixpkgs going forward unless we establish a sensible policy. After
discussing this with @SomeoneSerge in the context of old versions
of GCC, I learned that there was already a desire to remove at least
versions prior to 11.3, as those versions were only packaged in the
old “runfile” format, but that it was blocked on someone doing
the work to warn about the upcoming deprecation for a release cycle.
This change adds a release note and warnings indicating that CUDA 10.x
and 11.x will be removed in Nixpkgs 25.05, about 8 months from now.
I chose this version cut‐off because these versions of CUDA require
GCC < 12. GCC releases a major version every year, and seems to
support about four releases at a time, releasing the last update to
the oldest version and marking it as unsupported on their site around
the time of the release of the next major version. Therefore, by the
time of the 25.05 release, we should expect GCC 15 to be released
and GCC 11 to become unsupported. Adding a warning and communicating
the policy of only shipping CUDA versions that work with supported
compilers in the release notes means that we should be able to
clean up old versions as required without any issue or extensive
deprecation period in future, without obligating us to do so if there
is a strongly compelling reason to be more lenient. That should help
solve both shipping an indefinitely‐growing list of CUDA versions
and an indefinitely‐growing list of GCC and LLVM versions.
As I’m not a user of CUDA myself, I can’t be sure of how sensible
this version support policy is, but I think it’s fair to say that
it’s reasonable for Nixpkgs to choose not to maintain compiler
versions that are unsupported upstream just for the sake of versions
of CUDA that are also unmaintained. CUDA 11.x has not received an
update for two years already, and would only become unsupported in
Nixpkgs in over half a year’s time.
CUDA 10.x is currently unused in‐tree except for the unmaintained
Caffe and NVIDIA DCGM, which depends on multiple CUDA versions solely
so that it can provide plugins for those versions. The latest DCGM
version has already removed support for CUDA 10.x and is just awaiting
an update in Nixpkgs. They maintain a list of supported versions to
build plugins for in their CMake build system, so it should be simple
enough for us to only build support for the versions of CUDA that we
support in Nixpkgs.
From what I can tell, CUDA 11.x is currently used by the following
packages other than DCGM:
* `catboost`, because of
<https://github.com/catboost/catboost/issues/2540>. It looks like
upstream has since redesigned this part of their build system, so
perhaps the problem is no longer present, or would be easier to fix.
* `magma_2_6_2`, an old version from before upstream added CUDA
12 support. This seems okay to break to me; that version is not
maintained and will never be updated for new CUDA versions, and
the CUDA support is optional.
* `paddlepaddle`, which, uh, also requires OpenSSL 1.1 of all
things. <https://github.com/PaddlePaddle/Paddle/issues/67571>
states that PaddlePaddle supports up to 12.3.
* `python3Packages.cupy`, which is listed as “possibly incompatible
with cutensor 2.0 that comes with `cudaPackages_12`”. I’m
not sure what the “possibly” means here, but according to
<https://github.com/cupy/cupy/tree/v13.3.0?tab=readme-ov-file#installation>
they ship binary wheels using CUDA 12.x so I think this should
be fine.
* `python3Packages.tensorrt`, which supports CUDA 12.x going by
<https://github.com/NVIDIA/TensorRT/blob/release/10.4/CMakeLists.txt#L111>.
* TensorFlow, which has a link to
<https://www.tensorflow.org/install/source#gpu> above the
`python3Packages.tensorflow-bin` definition, but that page lists
the versions we package as supporting CUDA 12.x.
Given the years since CUDA 11.x received any update upstream, and the
seemingly very limited set of packages that truly require it, I think
the policy of being able to drop versions that require unsupported
compilers starting from the next Nixpkgs release is a reasonable
one, but of course I’m open to feedback from the CUDA maintainers
about this.
When `diskImage = null`, the root fs is a tmpfs instead of
`/dev/vda`. Thus, it doesn't have to wait for virtio modules to load
before being mounted. The root fs is a dependency of shared
directories by nature of being their parent directory. Without
depending on `/dev/vda`, these shared directories may attempt to mount
without virtio modules being loaded.
The package has been updated to 0.4, which will result in an auto-migration of the config. This updates our config to match the new expected format. Assertions have been added to warn users that they need to migrate their configuration.
In preparation for the deprecation of `stdenv.isX`.
These shorthands are not conducive to cross-compilation because they
hide the platforms.
Darwin might get cross-compilation support, for which the continued usage of `stdenv.isDarwin` would get in the way.
One example of why this is bad, and of how it especially affects compiler packages:
https://www.github.com/NixOS/nixpkgs/pull/343059
There are too many files to go through manually but a treewide should
get users thinking when they see a `hostPlatform.isX` in a place where it
doesn't make sense.
```
fd --type f "\.nix" | xargs sd --fixed-strings "stdenv.is" "stdenv.hostPlatform.is"
fd --type f "\.nix" | xargs sd --fixed-strings "stdenv'.is" "stdenv'.hostPlatform.is"
fd --type f "\.nix" | xargs sd --fixed-strings "clangStdenv.is" "clangStdenv.hostPlatform.is"
fd --type f "\.nix" | xargs sd --fixed-strings "gccStdenv.is" "gccStdenv.hostPlatform.is"
fd --type f "\.nix" | xargs sd --fixed-strings "stdenvNoCC.is" "stdenvNoCC.hostPlatform.is"
fd --type f "\.nix" | xargs sd --fixed-strings "inherit (stdenv) is" "inherit (stdenv.hostPlatform) is"
fd --type f "\.nix" | xargs sd --fixed-strings "buildStdenv.is" "buildStdenv.hostPlatform.is"
fd --type f "\.nix" | xargs sd --fixed-strings "effectiveStdenv.is" "effectiveStdenv.hostPlatform.is"
fd --type f "\.nix" | xargs sd --fixed-strings "originalStdenv.is" "originalStdenv.hostPlatform.is"
```
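The mechanical effect on a typical call site looks like this
(illustrative; not a specific file from this change):

```
# before: ambiguous about which platform is meant when build != host
buildInputs = lib.optionals stdenv.isDarwin [ libiconv ];
# after: explicit that the host platform is being queried
buildInputs = lib.optionals stdenv.hostPlatform.isDarwin [ libiconv ];
```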
Unlike regular input-addressed or fixed-output derivations, floating and
deferred derivations do not have their store path available at evaluation time,
so their outPath is a placeholder. The following changes are needed for
replaceDependencies to continue working:
* Detect the placeholder and retrieve the store path using another IFD hack
when collecting the rewrite plan.
* Try to obtain the derivation name needed for replaceDirectDependencies from
the derivation arguments if a placeholder is detected.
* Move the length mismatch detection to build time, since the placeholder has a
fixed length which is unrelated to the store path.
The tests cannot be directly built by Hydra, because replaceDependencies relies
on IFD. Instead, they are put inside a NixOS test where they are built on the
guest.
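For context, the placeholder detection mentioned above can be thought of
roughly like this; this is a sketch under the assumption that placeholder
outPaths never live under the store directory, not the exact implementation:

```
# A floating/deferred derivation's outPath at eval time is a hash-only
# placeholder rather than a /nix/store path.
isPlaceholder = p: !(lib.hasPrefix builtins.storeDir (toString p));
```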
Move replaceRuntimeDependencies to the replaceDependencies namespace,
where the structure is more consistent with the replaceDependencies
function. This makes space for wiring up cutoffPackages as an option
too.
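For reference, the pre-existing interface being moved (the new option
names under the `replaceDependencies` namespace are not spelled out
here; `patchedOpenssl` is a stand-in):

```
system.replaceRuntimeDependencies = [
  { original = pkgs.openssl; replacement = patchedOpenssl; }
];
```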
By default, the system's initrd is excluded. The replacement process does not
work properly anyway due to the structure of the initrd (the files being copied
into it, and it being compressed). In the worst case (which has been observed
to actually occur in practice), a store path makes it into the incompressible
parts of the archive, checksums are broken, and the system won't boot.
Instead of iterating over all replacements and applying them one by one,
use the newly introduced replaceDependencies function to apply them all
at once for replaceRuntimeDependencies. The advantages are twofold in
case there are multiple replacements:
* Performance is significantly improved, because there is only one pass
over the closure to be made.
* Correctness is improved, because replaceDependencies also replaces
dependencies of the replacements themselves if applicable.
Fixes: https://github.com/NixOS/nixpkgs/issues/4336
4b836fb680 added `pkgs.grub2_efi` to `environment.systemPackages` so that it would be in the Nix store and available for install. But `pkgs.grub2` is already in the list. This causes the various paths of the two GRUB2 versions to collide. To fix this, put `pkgs.grub2_efi` into `system.extraDependencies` instead. This should achieve the same effect of adding the second GRUB2 version to the Nix store without the paths colliding in the environment.
To reproduce the problem, execute `nix-build nixos -I nixos-config=nixos/modules/installer/cd-dvd/iso-image.nix -A config.system.build.isoImage` and look for messages like
```
warning: collision between `/nix/store/9jk1p9n5dl431lcm4w9p6x6x8a00dm0q-grub-2.12/bin/grub-install' and `/nix/store/809l0i6aydg4zhn3kqf723brjyp2qm8h-grub-2.12/bin/grub-install'
```
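A sketch of the change described above (module context omitted):

```
# before: both GRUB builds end up in the system path and their files collide
environment.systemPackages = [ pkgs.grub2 pkgs.grub2_efi ];

# after: keep the EFI build in the Nix store without linking it into the environment
environment.systemPackages = [ pkgs.grub2 ];
system.extraDependencies = [ pkgs.grub2_efi ];
```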