Merge branch 'staging-next' into PR 82342
Hydra nixpkgs: ?compare=1586582
This commit is contained in: commit 5eaabaf089

2  .github/PULL_REQUEST_TEMPLATE.md  (vendored)
@@ -15,7 +15,7 @@ Reviewing guidelines: https://hydra.nixos.org/job/nixpkgs/trunk/manual/latest/do
 <!-- Please check what applies. Note that these are not hard requirements but merely serve as information for reviewers. -->
-- [ ] Tested using sandboxing ([nix.useSandbox](http://nixos.org/nixos/manual/options.html#opt-nix.useSandbox) on NixOS, or option `sandbox` in [`nix.conf`](http://nixos.org/nix/manual/#sec-conf-file) on non-NixOS linux)
+- [ ] Tested using sandboxing ([nix.useSandbox](https://nixos.org/nixos/manual/options.html#opt-nix.useSandbox) on NixOS, or option `sandbox` in [`nix.conf`](https://nixos.org/nix/manual/#sec-conf-file) on non-NixOS linux)
 - Built on platform(s)
 - [ ] NixOS
 - [ ] macOS
@@ -111,7 +111,7 @@
 </para>
 <para>
  The exact syntax and semantics of the Nix expression language, including the built-in function, are described in the Nix manual in the <link
-xlink:href="http://hydra.nixos.org/job/nix/trunk/tarball/latest/download-by-type/doc/manual/#chap-writing-nix-expressions">chapter on writing Nix expressions</link>.
+xlink:href="https://hydra.nixos.org/job/nix/trunk/tarball/latest/download-by-type/doc/manual/#chap-writing-nix-expressions">chapter on writing Nix expressions</link>.
 </para>
 </listitem>
 <listitem>
@@ -167,7 +167,7 @@ parameters that the SDK composition function (the function shown in the
 previous section) supports.

 This build function is particularly useful when it is desired to use
-[Hydra](http://nixos.org/hydra): the Nix-based continuous integration solution
+[Hydra](https://nixos.org/hydra): the Nix-based continuous integration solution
 to build Android apps. An Android APK gets exposed as a build product and can be
 installed on any Android device with a web browser by navigating to the build
 result page.
@@ -40,6 +40,23 @@
 </para>
 </section>

+<section xml:id="ssec-icon-theme-packaging">
+ <title>Packaging icon themes</title>
+
+ <para>
+  Icon themes may inherit from other icon themes. The inheritance is specified using the <literal>Inherits</literal> key in the <filename>index.theme</filename> file distributed with the icon theme. According to the <link xlink:href="https://specifications.freedesktop.org/icon-theme-spec/icon-theme-spec-latest.html">icon theme specification</link>, icons not provided by the theme are looked for in its parent icon themes. Therefore the parent themes should be installed as dependencies for a more complete experience regarding the icon sets used.
+ </para>
+
+ <para>
+  The package <package>hicolor-icon-theme</package> provides a setup hook which makes symbolic links for the parent themes into the directory <filename>share/icons</filename> of the current theme directory in the nix store, making sure they can be found at runtime. For that to work the packages providing parent icon themes should be listed as propagated build dependencies, together with <package>hicolor-icon-theme</package>.
+ </para>
+
+ <para>
+  Also make sure that <filename>icon-theme.cache</filename> is installed for each theme provided by the package, and set <code>dontDropIconThemeCache</code> to <code>true</code> so that the cache file is not removed by the <package>gtk3</package> setup hook.
+ </para>
+
+</section>
+
 <section xml:id="ssec-gnome-themes">
 <title>GTK Themes</title>
@@ -103,8 +103,8 @@ command displays the complete list of available compilers:
 $ nix-env -f "<nixpkgs>" -qaP -A haskell.compiler
 haskell.compiler.ghc8101                 ghc-8.10.1
 haskell.compiler.integer-simple.ghc8101  ghc-8.10.1
-haskell.compiler.ghcHEAD                 ghc-8.11.20200403
-haskell.compiler.integer-simple.ghcHEAD  ghc-8.11.20200403
+haskell.compiler.ghcHEAD                 ghc-8.11.20200505
+haskell.compiler.integer-simple.ghcHEAD  ghc-8.11.20200505
 haskell.compiler.ghc822Binary            ghc-8.2.2-binary
 haskell.compiler.ghc844                  ghc-8.4.4
 haskell.compiler.ghc863Binary            ghc-8.6.3-binary
@@ -21,6 +21,7 @@
 <xi:include href="node.section.xml" />
 <xi:include href="ocaml.xml" />
 <xi:include href="perl.xml" />
+<xi:include href="php.section.xml" />
 <xi:include href="python.section.xml" />
 <xi:include href="qt.xml" />
 <xi:include href="r.section.xml" />
@@ -18,7 +18,7 @@ The primary objective of this project is to use the Nix expression language to
 specify how iOS apps can be built from source code, and to automatically spawn
 iOS simulator instances for testing.

-This component also makes it possible to use [Hydra](http://nixos.org/hydra),
+This component also makes it possible to use [Hydra](https://nixos.org/hydra),
 the Nix-based continuous integration server to regularly build iOS apps and to
 do wireless ad-hoc installations of enterprise IPAs on iOS devices through
 Hydra.
@@ -1,10 +1,8 @@
-# PHP
+# PHP {#sec-php}

-## User Guide
+## User Guide {#ssec-php-user-guide}

-### Using PHP
-
-#### Overview
+### Overview {#ssec-php-user-guide-overview}

 Several versions of PHP are available on Nix, each of which having a
 wide variety of extensions and libraries available.
@@ -36,7 +34,7 @@ opcache extension shipped with PHP is available at
 `php.extensions.opcache` and the third-party ImageMagick extension at
 `php.extensions.imagick`.

-#### Installing PHP with extensions
+### Installing PHP with extensions {#ssec-php-user-guide-installing-with-extensions}

 A PHP package with specific extensions enabled can be built using
 `php.withExtensions`. This is a function which accepts an anonymous
@@ -64,7 +62,7 @@ To build your list of extensions from the ground up, you can simply
 ignore `enabled`:

 ```nix
-php.withExtensions ({ all, ... }: with all; [ opcache imagick ])
+php.withExtensions ({ all, ... }: with all; [ imagick opcache ])
 ```

 `php.withExtensions` provides extensions by wrapping a minimal php
@@ -89,14 +87,14 @@ php.buildEnv {
 }
 ```

-##### Example setup for `phpfpm`
+#### Example setup for `phpfpm` {#ssec-php-user-guide-installing-with-extensions-phpfpm}

 You can use the previous examples in a `phpfpm` pool called `foo` as
 follows:

 ```nix
 let
-  myPhp = php.withExtensions ({ all, ... }: with all; [ opcache imagick ]);
+  myPhp = php.withExtensions ({ all, ... }: with all; [ imagick opcache ]);
 in {
   services.phpfpm.pools."foo".phpPackage = myPhp;
 };
@@ -113,7 +111,7 @@ in {
 };
 ```

-##### Example usage with `nix-shell`
+#### Example usage with `nix-shell` {#ssec-php-user-guide-installing-with-extensions-nix-shell}

 This brings up a temporary environment that contains a PHP interpreter
 with the extensions `imagick` and `opcache` enabled:
@@ -121,3 +119,19 @@ with the extensions `imagick` and `opcache` enabled:
 ```sh
 nix-shell -p 'php.withExtensions ({ all, ... }: with all; [ imagick opcache ])'
 ```
+
+### Installing PHP packages with extensions {#ssec-php-user-guide-installing-packages-with-extensions}
+
+All interactive tools use the PHP package you get them from, so all
+packages at `php.packages.*` use the `php` package with its default
+extensions. Sometimes this default set of extensions isn't enough and
+you may want to extend it. A common case of this is the `composer`
+package: a project may depend on certain extensions and `composer`
+won't work with that project unless those extensions are loaded.
+
+Example of building `composer` with additional extensions:
+```nix
+(php.withExtensions ({ all, enabled }:
+  enabled ++ (with all; [ imagick redis ]))
+).packages.composer
+```
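The composer example added by this hunk can be exercised in a small shell expression; a minimal sketch, assuming the `imagick` and `redis` extensions exist in the nixpkgs checkout being used:

```nix
# shell.nix — hypothetical usage of the composer example from the new docs.
# The extension names `imagick` and `redis` are taken from the example
# above; availability depends on the nixpkgs revision.
with import <nixpkgs> {};

mkShell {
  buildInputs = [
    # composer built from a PHP that has the extra extensions loaded
    (php.withExtensions ({ all, enabled }:
      enabled ++ (with all; [ imagick redis ])
    )).packages.composer
  ];
}
```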
@@ -9,7 +9,7 @@
 Several versions of the Python interpreter are available on Nix, as well as a
 high amount of packages. The attribute `python` refers to the default
 interpreter, which is currently CPython 2.7. It is also possible to refer to
-specific versions, e.g. `python35` refers to CPython 3.5, and `pypy` refers to
+specific versions, e.g. `python38` refers to CPython 3.8, and `pypy` refers to
 the default PyPy interpreter.

 Python is used a lot, and in different ways. This affects also how it is
@@ -25,10 +25,10 @@ however, are in separate sets, with one set per interpreter version.
 The interpreters have several common attributes. One of these attributes is
 `pkgs`, which is a package set of Python libraries for this specific
 interpreter. E.g., the `toolz` package corresponding to the default interpreter
-is `python.pkgs.toolz`, and the CPython 3.5 version is `python35.pkgs.toolz`.
+is `python.pkgs.toolz`, and the CPython 3.8 version is `python38.pkgs.toolz`.
 The main package set contains aliases to these package sets, e.g.
-`pythonPackages` refers to `python.pkgs` and `python35Packages` to
-`python35.pkgs`.
+`pythonPackages` refers to `python.pkgs` and `python38Packages` to
+`python38.pkgs`.

 #### Installing Python and packages

@@ -50,7 +50,7 @@ to create an environment with `python.buildEnv` or `python.withPackages` where
 the interpreter and other executables are able to find each other and all of the
 modules.

-In the following examples we create an environment with Python 3.5, `numpy` and
+In the following examples we create an environment with Python 3.8, `numpy` and
 `toolz`. As you may imagine, there is one limitation here, and that's that
 you can install only one environment at a time. You will notice the complaints
 about collisions when you try to install a second environment.
@@ -61,7 +61,7 @@ Create a file, e.g. `build.nix`, with the following expression
 ```nix
 with import <nixpkgs> {};

-python35.withPackages (ps: with ps; [ numpy toolz ])
+python38.withPackages (ps: with ps; [ numpy toolz ])
 ```
 and install it in your profile with
 ```shell
@@ -79,7 +79,7 @@ Nixpkgs set, e.g. using `config.nix`,
 { # ...

   packageOverrides = pkgs: with pkgs; {
-    myEnv = python35.withPackages (ps: with ps; [ numpy toolz ]);
+    myEnv = python38.withPackages (ps: with ps; [ numpy toolz ]);
   };
 }
 ```
@@ -101,7 +101,7 @@ environment system-wide.
 { # ...

   environment.systemPackages = with pkgs; [
-    (python35.withPackages(ps: with ps; [ numpy toolz ]))
+    (python38.withPackages(ps: with ps; [ numpy toolz ]))
   ];
 }
 ```
@@ -118,7 +118,7 @@ recommended method is to create an environment with `python.buildEnv` or
 `python.withPackages` and load that. E.g.

 ```sh
-$ nix-shell -p 'python35.withPackages(ps: with ps; [ numpy toolz ])'
+$ nix-shell -p 'python38.withPackages(ps: with ps; [ numpy toolz ])'
 ```

 opens a shell from which you can launch the interpreter
@@ -131,7 +131,7 @@ The other method, which is not recommended, does not create an environment and
 requires you to list the packages directly,

 ```sh
-$ nix-shell -p python35.pkgs.numpy python35.pkgs.toolz
+$ nix-shell -p python38.pkgs.numpy python38.pkgs.toolz
 ```

 Again, it is possible to launch the interpreter from the shell. The Python
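The `nix-shell` invocations updated in these hunks can be captured in a reusable expression; a minimal sketch using only the `numpy` and `toolz` packages that the surrounding examples assume:

```nix
# shell.nix — sketch combining the documented python38.withPackages
# pattern with mkShell; package names come from the examples above.
with import <nixpkgs> {};

mkShell {
  buildInputs = [
    # one self-consistent Python 3.8 environment, rather than separate
    # -p arguments that would each pull in their own interpreter
    (python38.withPackages (ps: with ps; [ numpy toolz ]))
  ];
}
```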
@@ -140,14 +140,14 @@ that specific interpreter.

 ##### Load environment from `.nix` expression
 As explained in the Nix manual, `nix-shell` can also load an
-expression from a `.nix` file. Say we want to have Python 3.5, `numpy`
+expression from a `.nix` file. Say we want to have Python 3.8, `numpy`
 and `toolz`, like before, in an environment. Consider a `shell.nix` file
 with

 ```nix
 with import <nixpkgs> {};

-(python35.withPackages (ps: [ps.numpy ps.toolz])).env
+(python38.withPackages (ps: [ps.numpy ps.toolz])).env
 ```

 Executing `nix-shell` gives you again a Nix shell from which you can run Python.
@@ -158,7 +158,7 @@ What's happening here?
    imports the `<nixpkgs>` function, `{}` calls it and the `with` statement
    brings all attributes of `nixpkgs` in the local scope. These attributes form
    the main package set.
-2. Then we create a Python 3.5 environment with the `withPackages` function.
+2. Then we create a Python 3.8 environment with the `withPackages` function.
 3. The `withPackages` function expects us to provide a function as an argument
    that takes the set of all python packages and returns a list of packages to
    include in the environment. Here, we select the packages `numpy` and `toolz`
@@ -170,7 +170,7 @@ To combine this with `mkShell` you can:
 with import <nixpkgs> {};

 let
-  pythonEnv = python35.withPackages (ps: [
+  pythonEnv = python38.withPackages (ps: [
     ps.numpy
     ps.toolz
   ]);
@@ -188,13 +188,13 @@ option, with which you can execute a command in the `nix-shell`. We can
 e.g. directly open a Python shell

 ```sh
-$ nix-shell -p python35Packages.numpy python35Packages.toolz --run "python3"
+$ nix-shell -p python38Packages.numpy python38Packages.toolz --run "python3"
 ```

 or run a script

 ```sh
-$ nix-shell -p python35Packages.numpy python35Packages.toolz --run "python3 myscript.py"
+$ nix-shell -p python38Packages.numpy python38Packages.toolz --run "python3 myscript.py"
 ```

 ##### `nix-shell` as shebang
@@ -231,11 +231,11 @@ building Python libraries is `buildPythonPackage`. Let's see how we can build th

 buildPythonPackage rec {
   pname = "toolz";
-  version = "0.7.4";
+  version = "0.10.0";

   src = fetchPypi {
     inherit pname version;
-    sha256 = "43c2c9e5e7a16b6c88ba3088a9bfc82f7db8e13378be7c78d6c14a5f8ed05afd";
+    sha256 = "08fdd5ef7c96480ad11c12d472de21acd32359996f69a5259299b540feba4560";
   };

   doCheck = false;
@@ -260,8 +260,8 @@ information. The output of the function is a derivation.

 An expression for `toolz` can be found in the Nixpkgs repository. As explained
 in the introduction of this Python section, a derivation of `toolz` is available
-for each interpreter version, e.g. `python35.pkgs.toolz` refers to the `toolz`
-derivation corresponding to the CPython 3.5 interpreter.
+for each interpreter version, e.g. `python38.pkgs.toolz` refers to the `toolz`
+derivation corresponding to the CPython 3.8 interpreter.
 The above example works when you're directly working on
 `pkgs/top-level/python-packages.nix` in the Nixpkgs repository. Often though,
 you will want to test a Nix expression outside of the Nixpkgs tree.
@@ -273,13 +273,13 @@ and adds it along with a `numpy` package to a Python environment.
 with import <nixpkgs> {};

 ( let
-  my_toolz = python35.pkgs.buildPythonPackage rec {
+  my_toolz = python38.pkgs.buildPythonPackage rec {
     pname = "toolz";
-    version = "0.7.4";
+    version = "0.10.0";

-    src = python35.pkgs.fetchPypi {
+    src = python38.pkgs.fetchPypi {
       inherit pname version;
-      sha256 = "43c2c9e5e7a16b6c88ba3088a9bfc82f7db8e13378be7c78d6c14a5f8ed05afd";
+      sha256 = "08fdd5ef7c96480ad11c12d472de21acd32359996f69a5259299b540feba4560";
     };

     doCheck = false;
@@ -290,12 +290,12 @@ with import <nixpkgs> {};
   };
 };

-in python35.withPackages (ps: [ps.numpy my_toolz])
+in python38.withPackages (ps: [ps.numpy my_toolz])
 ).env
 ```

 Executing `nix-shell` will result in an environment in which you can use
-Python 3.5 and the `toolz` package. As you can see we had to explicitly mention
+Python 3.8 and the `toolz` package. As you can see we had to explicitly mention
 for which Python version we want to build a package.

 So, what did we do here? Well, we took the Nix expression that we used earlier
@@ -435,7 +435,7 @@ If we create a `shell.nix` file which calls `buildPythonPackage`, and if `src`
 is a local source, and if the local source has a `setup.py`, then development
 mode is activated.

-In the following example we create a simple environment that has a Python 3.5
+In the following example we create a simple environment that has a Python 3.8
 version of our package in it, as well as its dependencies and other packages we
 like to have in the environment, all specified with `propagatedBuildInputs`.
 Indeed, we can just add any package we like to have in our environment to
@@ -443,7 +443,7 @@ Indeed, we can just add any package we like to have in our environment to

 ```nix
 with import <nixpkgs> {};
-with python35Packages;
+with python38Packages;

 buildPythonPackage rec {
   name = "mypackage";
@@ -505,9 +505,9 @@ with import <nixpkgs> {};

 ( let
   toolz = callPackage /path/to/toolz/release.nix {
-    buildPythonPackage = python35Packages.buildPythonPackage;
+    buildPythonPackage = python38Packages.buildPythonPackage;
   };
-in python35.withPackages (ps: [ ps.numpy toolz ])
+in python38.withPackages (ps: [ ps.numpy toolz ])
 ).env
 ```

@@ -515,8 +515,8 @@ Important to remember is that the Python version for which the package is made
 depends on the `python` derivation that is passed to `buildPythonPackage`. Nix
 tries to automatically pass arguments when possible, which is why generally you
 don't explicitly define which `python` derivation should be used. In the above
-example we use `buildPythonPackage` that is part of the set `python35Packages`,
-and in this case the `python35` interpreter is automatically used.
+example we use `buildPythonPackage` that is part of the set `python38Packages`,
+and in this case the `python38` interpreter is automatically used.

 ## Reference

@@ -662,7 +662,7 @@ following are specific to `buildPythonPackage`:
   variables which will be available when the binary is run. For example,
   `makeWrapperArgs = ["--set FOO BAR" "--set BAZ QUX"]`.
 * `namePrefix`: Prepends text to `${name}` parameter. In case of libraries, this
-  defaults to `"python3.5-"` for Python 3.5, etc., and in case of applications
+  defaults to `"python3.8-"` for Python 3.8, etc., and in case of applications
   to `""`.
 * `pythonPath ? []`: List of packages to be added into `$PYTHONPATH`. Packages
   in `pythonPath` are not propagated (contrary to `propagatedBuildInputs`).
@@ -960,7 +960,7 @@ has security implications and is relevant for those using Python in a

 When the environment variable `DETERMINISTIC_BUILD` is set, all bytecode will
 have timestamp 1. The `buildPythonPackage` function sets `DETERMINISTIC_BUILD=1`
-and [PYTHONHASHSEED=0](https://docs.python.org/3.5/using/cmdline.html#envvar-PYTHONHASHSEED).
+and [PYTHONHASHSEED=0](https://docs.python.org/3.8/using/cmdline.html#envvar-PYTHONHASHSEED).
 Both are also exported in `nix-shell`.

@@ -1014,7 +1014,7 @@ with import <nixpkgs> {};
   packageOverrides = self: super: {
     pandas = super.pandas.overridePythonAttrs(old: {name="foo";});
   };
-  in pkgs.python35.override {inherit packageOverrides;};
+  in pkgs.python38.override {inherit packageOverrides;};

 in python.withPackages(ps: [ps.pandas])).env
 ```
@@ -1036,7 +1036,7 @@ with import <nixpkgs> {};
   packageOverrides = self: super: {
     scipy = super.scipy_0_17;
   };
-in (pkgs.python35.override {inherit packageOverrides;}).withPackages (ps: [ps.blaze])
+in (pkgs.python38.override {inherit packageOverrides;}).withPackages (ps: [ps.blaze])
 ).env
 ```

@@ -1049,12 +1049,12 @@ If you want the whole of Nixpkgs to use your modifications, then you can use
 ```nix
 let
   pkgs = import <nixpkgs> {};
-  newpkgs = import pkgs.path { overlays = [ (pkgsself: pkgssuper: {
-    python27 = let
-      packageOverrides = self: super: {
-        numpy = super.numpy_1_10;
+  newpkgs = import pkgs.path { overlays = [ (self: super: {
+    python38 = let
+      packageOverrides = python-self: python-super: {
+        numpy = python-super.numpy_1_18.3;
       };
-    in pkgssuper.python27.override {inherit packageOverrides;};
+    in super.python38.override {inherit packageOverrides;};
   } ) ]; };
 in newpkgs.inkscape
 ```
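The overlay pattern updated in the last hunk can be checked in isolation. A sketch of the same `python38`-override-inside-an-overlay idea, using only functions known to exist (`override`, `packageOverrides`, `overridePythonAttrs`); the choice of `toolz` as the overridden package is illustrative only, not part of the diff:

```nix
# overlay-check.nix — hypothetical standalone version of the documented
# overlay pattern; builds a Python environment instead of inkscape.
let
  pkgs = import <nixpkgs> {};
  newpkgs = import pkgs.path {
    overlays = [ (self: super: {
      python38 = super.python38.override {
        packageOverrides = python-self: python-super: {
          # any python-package-level change goes here
          toolz = python-super.toolz.overridePythonAttrs (old: {
            doCheck = false;
          });
        };
      };
    }) ];
  };
in newpkgs.python38.withPackages (ps: [ ps.toolz ])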
@@ -42,7 +42,7 @@ distributed as soon as all tests for that channel pass, e.g.
 [this table](https://hydra.nixos.org/job/nixpkgs/trunk/unstable#tabs-constituents)
 shows the status of tests for the `nixpkgs` channel.

-The tests are conducted by a cluster called [Hydra](http://nixos.org/hydra/),
+The tests are conducted by a cluster called [Hydra](https://nixos.org/hydra/),
 which also builds binary packages from the Nix expressions in Nixpkgs for
 `x86_64-linux`, `i686-linux` and `x86_64-darwin`.
 The binaries are made available via a [binary cache](https://cache.nixos.org).
@@ -286,7 +286,7 @@ export NIX_MIRRORS_sourceforge=http://osdn.dl.sourceforge.net/sourceforge/</prog
 <note>
 <para>
  This release of Nixpkgs requires <link
-xlink:href='http://nixos.org/releases/nix/nix-0.10/'>Nix 0.10</link> or higher.
+xlink:href='https://nixos.org/releases/nix/nix-0.10/'>Nix 0.10</link> or higher.
 </para>
 </note>

@@ -436,7 +436,7 @@ stdenv.mkDerivation {
 <listitem>
 <para>
  Distribution files have been moved to <link
-xlink:href="http://nixos.org/" />.
+xlink:href="https://nixos.org/" />.
 </para>
 </listitem>
 <listitem>
@@ -145,7 +145,7 @@ genericBuild
 </listitem>
 <listitem>
 <para>
-GNU Make. It has been patched to provide <quote>nested</quote> output that can be fed into the <command>nix-log2xml</command> command and <command>log2html</command> stylesheet to create a structured, readable output of the build steps performed by Make.
+GNU Make.
 </para>
 </listitem>
 <listitem>
@@ -758,7 +758,7 @@
     name = "Jonathan Glines";
   };
   avaq = {
-    email = "avaq+nixos@xs4all.nl";
+    email = "nixpkgs@account.avaq.it";
     github = "avaq";
     githubId = 1217745;
     name = "Aldwin Vlasblom";
@@ -1662,6 +1662,12 @@
     }
   ];
   };
+  cyplo = {
+    email = "nixos@cyplo.dev";
+    github = "cyplo";
+    githubId = 217899;
+    name = "Cyryl Płotnicki";
+  };
   d-goldin = {
     email = "dgoldin+github@protonmail.ch";
     github = "d-goldin";
@@ -4019,12 +4025,6 @@
     fingerprint = "8992 44FC D291 5CA2 0A97 802C 156C 88A5 B0A0 4B2A";
     }];
   };
-  kjuvi = {
-    email = "quentin.vaucher@pm.me";
-    github = "kjuvi";
-    githubId = 17534323;
-    name = "Quentin Vaucher";
-  };
   kkallio = {
     email = "tierpluspluslists@gmail.com";
     name = "Karn Kallio";
@@ -4084,6 +4084,12 @@
     githubId = 6346418;
     name = "Kolby Crouch";
   };
+  kolloch = {
+    email = "info@eigenvalue.net";
+    github = "kolloch";
+    githubId = 339354;
+    name = "Peter Kolloch";
+  };
   konimex = {
     email = "herdiansyah@netc.eu";
     github = "konimex";
@@ -4423,6 +4429,16 @@
     fingerprint = "74F5 E5CC 19D3 B5CB 608F 6124 68FF 81E6 A785 0F49";
     }];
   };
+  lourkeur = {
+    name = "Louis Bettens";
+    email = "louis@bettens.info";
+    github = "lourkeur";
+    githubId = 15657735;
+    keys = [{
+      longkeyid = "ed25519/0xDFE1D4A017337E2A";
+      fingerprint = "5B93 9CFA E8FC 4D8F E07A 3AEA DFE1 D4A0 1733 7E2A";
+    }];
+  };
   luis = {
     email = "luis.nixos@gmail.com";
     github = "Luis-Hebendanz";
@@ -5048,6 +5064,12 @@
     githubId = 3269878;
     name = "Miguel Madrid Mencía";
   };
+  mindavi = {
+    email = "rol3517@gmail.com";
+    github = "Mindavi";
+    githubId = 9799623;
+    name = "Rick van Schijndel";
+  };
   minijackson = {
     email = "minijackson@riseup.net";
     github = "minijackson";
@@ -5846,6 +5868,16 @@
     githubId = 131844;
     name = "Igor Pashev";
   };
+  patryk27 = {
+    email = "wychowaniec.patryk@gmail.com";
+    github = "Patryk27";
+    githubId = 3395477;
+    name = "Patryk Wychowaniec";
+    keys = [{
+      longkeyid = "rsa4096/0xF62547D075E09767";
+      fingerprint = "196A BFEC 6A1D D1EC 7594 F8D1 F625 47D0 75E0 9767";
+    }];
+  };
   patternspandemic = {
     email = "patternspandemic@live.com";
     github = "patternspandemic";
@@ -7130,6 +7162,12 @@
     githubId = 602439;
     name = "Serguei Narojnyi";
   };
+  snicket2100 = {
+    email = "57048005+snicket2100@users.noreply.github.com";
+    github = "snicket2100";
+    githubId = 57048005;
+    name = "snicket2100";
+  };
   snyh = {
     email = "snyh@snyh.org";
     github = "snyh";
@@ -7592,12 +7630,6 @@
     githubId = 1141680;
     name = "Thane Gill";
   };
-  the-kenny = {
-    email = "moritz@tarn-vedra.de";
-    github = "the-kenny";
-    githubId = 31167;
-    name = "Moritz Ulrich";
-  };
   thedavidmeister = {
     email = "thedavidmeister@gmail.com";
     github = "thedavidmeister";
@@ -7650,12 +7682,24 @@
     githubId = 7709;
     name = "Thomaz Leite";
   };
+  thomasdesr = {
+    email = "git@hive.pw";
+    github = "thomasdesr";
+    githubId = 681004;
+    name = "Thomas Desrosiers";
+  };
   ThomasMader = {
     email = "thomas.mader@gmail.com";
     github = "ThomasMader";
     githubId = 678511;
     name = "Thomas Mader";
   };
+  thomasjm = {
+    email = "tom@codedown.io";
+    github = "thomasjm";
+    githubId = 1634990;
+    name = "Tom McLaughlin";
+  };
   thoughtpolice = {
     email = "aseipp@pobox.com";
     github = "thoughtpolice";
@@ -8311,6 +8355,12 @@
     githubId = 1297598;
     name = "Konrad Borowski";
   };
+  xiorcale = {
+    email = "quentin.vaucher@pm.me";
+    github = "xiorcale";
+    githubId = 17534323;
+    name = "Quentin Vaucher";
+  };
   xnaveira = {
     email = "xnaveira@gmail.com";
     github = "xnaveira";
43  maintainers/scripts/build.nix  (new file)
@@ -0,0 +1,43 @@
+{ maintainer }:
+
+# based on update.nix
+# nix-build build.nix --argstr maintainer <yourname>
+
+let
+  pkgs = import ./../../default.nix {};
+  maintainer_ = pkgs.lib.maintainers.${maintainer};
+  packagesWith = cond: return: set:
+    (pkgs.lib.flatten
+      (pkgs.lib.mapAttrsToList
+        (name: pkg:
+          let
+            result = builtins.tryEval
+              (
+                if pkgs.lib.isDerivation pkg && cond name pkg
+                then [ (return name pkg) ]
+                else if pkg.recurseForDerivations or false || pkg.recurseForRelease or false
+                then packagesWith cond return pkg
+                else [ ]
+              );
+          in
+            if result.success then result.value
+            else [ ]
+        )
+        set
+      )
+    );
+in
+packagesWith
+  (name: pkg:
+    (
+      if builtins.hasAttr "maintainers" pkg.meta
+      then (
+        if builtins.isList pkg.meta.maintainers
+        then builtins.elem maintainer_ pkg.meta.maintainers
+        else maintainer_ == pkg.meta.maintainers
+      )
+      else false
+    )
+  )
+  (name: pkg: pkg)
+  pkgs
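The new `build.nix` returns a flat list of the maintainer's derivations, as its header comment documents. A sketch of a hypothetical wrapper that turns the result into an attribute set for `nix-build -A` selection; the file name and the `"yournick"` handle are placeholders, not part of the commit:

```nix
# check-maintainer.nix — hypothetical wrapper around the new script.
let
  # build.nix evaluates to a list of derivations maintained by "yournick"
  drvs = import ./build.nix { maintainer = "yournick"; };
  pkgs = import ./../../default.nix {};
in
# key each derivation by its name so individual packages can be built
# with `nix-build check-maintainer.nix -A <name>`
pkgs.lib.listToAttrs
  (map (d: { name = d.name; value = d; }) drvs)
```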
@@ -79,7 +79,7 @@ def cli(jobset):
     and print a summary of failed builds
     """

-    url = "http://hydra.nixos.org/jobset/{}".format(jobset)
+    url = "https://hydra.nixos.org/jobset/{}".format(jobset)

     # get the last evaluation
     click.echo(click.style(
@@ -2,4 +2,4 @@

 NixOS is a Linux distribution based on the purely functional package
 management system Nix. More information can be found at
-http://nixos.org/nixos and in the manual in doc/manual.
+https://nixos.org/nixos and in the manual in doc/manual.
@@ -11,7 +11,7 @@
 the package to your clone, and (optionally) submit a patch or pull request to
 have it accepted into the main Nixpkgs repository. This is described in
 detail in the <link
-xlink:href="http://nixos.org/nixpkgs/manual">Nixpkgs
+xlink:href="https://nixos.org/nixpkgs/manual">Nixpkgs
 manual</link>. In short, you clone Nixpkgs:
 <screen>
 <prompt>$ </prompt>git clone https://github.com/NixOS/nixpkgs
@@ -14,7 +14,7 @@
 when managing complex systems. The syntax and semantics of the Nix language
 are fully described in the
 <link
-xlink:href="http://nixos.org/nix/manual/#chap-writing-nix-expressions">Nix
+xlink:href="https://nixos.org/nix/manual/#chap-writing-nix-expressions">Nix
 manual</link>, but here we give a short overview of the most important
 constructs useful in NixOS configuration files.
 </para>
@@ -10,7 +10,7 @@
 expression language. It’s not complete. In particular, there are many other
 built-in functions. See the
 <link
-xlink:href="http://nixos.org/nix/manual/#chap-writing-nix-expressions">Nix
+xlink:href="https://nixos.org/nix/manual/#chap-writing-nix-expressions">Nix
 manual</link> for the rest.
 </para>

@@ -16,11 +16,11 @@
 effects, some example settings:
 <programlisting>
 <link linkend="opt-services.picom.enable">services.picom</link> = {
-  <link linkend="opt-services.picom.enable">enable</link> = true;
-  <link linkend="opt-services.picom.fade">fade</link> = true;
-  <link linkend="opt-services.picom.inactiveOpacity">inactiveOpacity</link> = "0.9";
-  <link linkend="opt-services.picom.shadow">shadow</link> = true;
-  <link linkend="opt-services.picom.fadeDelta">fadeDelta</link> = 4;
+  <link linkend="opt-services.picom.enable">enable</link> = true;
+  <link linkend="opt-services.picom.fade">fade</link> = true;
+  <link linkend="opt-services.picom.inactiveOpacity">inactiveOpacity</link> = 0.9;
+  <link linkend="opt-services.picom.shadow">shadow</link> = true;
+  <link linkend="opt-services.picom.fadeDelta">fadeDelta</link> = 4;
 };
 </programlisting>
 </para>
@@ -57,7 +57,7 @@
 <listitem>
 <para>
  <link xlink:href="https://github.com/NixOS/nixos-org-configurations/pull/18">
- Make sure a channel is created at http://nixos.org/channels/. </link>
+ Make sure a channel is created at https://nixos.org/channels/. </link>
 </para>
 </listitem>
 <listitem>
@@ -37,7 +37,7 @@

imports =
[ # Use postgresql service from nixos-unstable channel.
# sudo nix-channel --add http://nixos.org/channels/nixos-unstable nixos-unstable
# sudo nix-channel --add https://nixos.org/channels/nixos-unstable nixos-unstable
<nixos-unstable/nixos/modules/services/databases/postgresql.nix>
];

@@ -7,7 +7,7 @@
<para>
NixOS ISO images can be downloaded from the
<link
xlink:href="http://nixos.org/nixos/download.html">NixOS download
xlink:href="https://nixos.org/nixos/download.html">NixOS download
page</link>. There are a number of installation options. If you happen to
have an optical drive and a spare CD, burning the image to CD and booting
from that is probably the easiest option. Most people will need to prepare a
@@ -26,7 +26,7 @@ xlink:href="https://nixos.wiki/wiki/NixOS_Installation_Guide#Making_the_installa
<para>
Using virtual appliances in Open Virtualization Format (OVF) that can be
imported into VirtualBox. These are available from the
<link xlink:href="http://nixos.org/nixos/download.html">NixOS download
<link xlink:href="https://nixos.org/nixos/download.html">NixOS download
page</link>.
</para>
</listitem>

@@ -24,16 +24,6 @@
</arg>
</group>
</arg>
<arg>
<group choice='req'>
<arg choice='plain'>
<option>--print-build-logs</option>
</arg>
<arg choice='plain'>
<option>-L</option>
</arg>
</group>
</arg>
<arg>
<arg choice='plain'>
<option>-I</option>
@@ -178,12 +168,6 @@
<para>Please note that this option may be specified repeatedly.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><option>--print-build-logs</option> / <option>-L</option></term>
<listitem>
<para>Print the full build logs of <command>nix build</command> to stderr.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<option>--root</option>

@@ -49,7 +49,7 @@
<para>
Nix has been updated to 1.7
(<link
xlink:href="http://nixos.org/nix/manual/#ssec-relnotes-1.7">details</link>).
xlink:href="https://nixos.org/nix/manual/#ssec-relnotes-1.7">details</link>).
</para>
</listitem>
<listitem>

@@ -22,7 +22,7 @@
in excess of 8,000 Haskell packages. Detailed instructions on how to use
that infrastructure can be found in the
<link
xlink:href="http://nixos.org/nixpkgs/manual/#users-guide-to-the-haskell-infrastructure">User's
xlink:href="https://nixos.org/nixpkgs/manual/#users-guide-to-the-haskell-infrastructure">User's
Guide to the Haskell Infrastructure</link>. Users migrating from an earlier
release may find helpful information below, in the list of
backwards-incompatible changes. Furthermore, we distribute 51(!) additional
@@ -555,7 +555,7 @@ nix-env -f "<nixpkgs>" -iA haskellPackages.pandoc
the compiler now is the <literal>haskellPackages.ghcWithPackages</literal>
function. The
<link
xlink:href="http://nixos.org/nixpkgs/manual/#users-guide-to-the-haskell-infrastructure">User's
xlink:href="https://nixos.org/nixpkgs/manual/#users-guide-to-the-haskell-infrastructure">User's
Guide to the Haskell Infrastructure</link> provides more information about
this subject.
</para>

@@ -54,7 +54,7 @@
xlink:href="https://reproducible-builds.org/specs/source-date-epoch/">SOURCE_DATE_EPOCH</envar>
to a deterministic value, and Nix has
<link
xlink:href="http://nixos.org/nix/manual/#ssec-relnotes-1.11">gained
xlink:href="https://nixos.org/nix/manual/#ssec-relnotes-1.11">gained
an option</link> to repeat a build a number of times to test determinism.
An ongoing project, the goal of exact reproducibility is to allow binaries
to be verified independently (e.g., a user might only trust binaries that

@@ -55,6 +55,12 @@
The new <varname>virtualisation.containers</varname> module manages configuration shared by the CRI-O and Podman modules.
</para>
</listitem>
<listitem>
<para>
Declarative Docker containers are renamed from <varname>docker-containers</varname> to <varname>virtualisation.oci-containers.containers</varname>.
This is to make it possible to use <literal>podman</literal> instead of <literal>docker</literal>.
</para>
</listitem>
</itemizedlist>
</section>

@@ -311,6 +317,15 @@ php.override {
<manvolnum>5</manvolnum></citerefentry> for details.
</para>
</listitem>
<listitem>
<para>
In the <literal>picom</literal> module, several options that accepted
floating point numbers encoded as strings (for example
<xref linkend="opt-services.picom.activeOpacity"/>) have been changed
to the (relatively) new native <literal>float</literal> type. To migrate
your configuration simply remove the quotes around the numbers.
</para>
</listitem>
</itemizedlist>
</section>

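The quoted-float migration described in the picom note above is mechanical; a before/after sketch (option name taken from the note, the value itself is illustrative):

```nix
# 20.03 and earlier: opacity options were strings
services.picom.activeOpacity = "0.8";

# after the change: native float type, quotes removed
services.picom.activeOpacity = 0.8;
```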
@@ -1,135 +0,0 @@
<?xml version="1.0"?>

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

<xsl:output method='html' encoding="UTF-8"
doctype-public="-//W3C//DTD HTML 4.01//EN"
doctype-system="http://www.w3.org/TR/html4/strict.dtd" />

<xsl:template match="logfile">
<html>
<head>
<script type="text/javascript" src="jquery.min.js"></script>
<script type="text/javascript" src="jquery-ui.min.js"></script>
<script type="text/javascript" src="treebits.js" />
<link rel="stylesheet" href="logfile.css" type="text/css" />
<title>Log File</title>
</head>
<body>
<h1>VM build log</h1>
<p>
<a href="javascript:" class="logTreeExpandAll">Expand all</a> |
<a href="javascript:" class="logTreeCollapseAll">Collapse all</a>
</p>
<ul class='toplevel'>
<xsl:for-each select='line|nest'>
<li>
<xsl:apply-templates select='.'/>
</li>
</xsl:for-each>
</ul>

<xsl:if test=".//*[@image]">
<h1>Screenshots</h1>
<ul class="vmScreenshots">
<xsl:for-each select='.//*[@image]'>
<li><a href="{@image}"><xsl:value-of select="@image" /></a></li>
</xsl:for-each>
</ul>
</xsl:if>

</body>
</html>
</xsl:template>


<xsl:template match="nest">

<!-- The tree should be collapsed by default if all children are
unimportant or if the header is unimportant. -->
<xsl:variable name="collapsed" select="not(./head[@expanded]) and count(.//*[@error]) = 0"/>

<xsl:variable name="style"><xsl:if test="$collapsed">display: none;</xsl:if></xsl:variable>

<xsl:if test="line|nest">
<a href="javascript:" class="logTreeToggle">
<xsl:choose>
<xsl:when test="$collapsed"><xsl:text>+</xsl:text></xsl:when>
<xsl:otherwise><xsl:text>-</xsl:text></xsl:otherwise>
</xsl:choose>
</a>
<xsl:text> </xsl:text>
</xsl:if>

<xsl:apply-templates select='head'/>

<!-- Be careful to only generate <ul>s if there are <li>s, otherwise it’s malformed. -->
<xsl:if test="line|nest">

<ul class='nesting' style="{$style}">
<xsl:for-each select='line|nest'>

<!-- Is this the last line? If so, mark it as such so that it
can be rendered differently. -->
<xsl:variable name="class"><xsl:choose><xsl:when test="position() != last()">line</xsl:when><xsl:otherwise>lastline</xsl:otherwise></xsl:choose></xsl:variable>

<li class='{$class}'>
<span class='lineconn' />
<span class='linebody'>
<xsl:apply-templates select='.'/>
</span>
</li>
</xsl:for-each>
</ul>
</xsl:if>

</xsl:template>


<xsl:template match="head|line">
<code>
<xsl:if test="@error">
<xsl:attribute name="class">errorLine</xsl:attribute>
</xsl:if>
<xsl:if test="@warning">
<xsl:attribute name="class">warningLine</xsl:attribute>
</xsl:if>
<xsl:if test="@priority = 3">
<xsl:attribute name="class">prio3</xsl:attribute>
</xsl:if>

<xsl:if test="@type = 'serial'">
<xsl:attribute name="class">serial</xsl:attribute>
</xsl:if>

<xsl:if test="@machine">
<xsl:choose>
<xsl:when test="@type = 'serial'">
<span class="machine"><xsl:value-of select="@machine"/># </span>
</xsl:when>
<xsl:otherwise>
<span class="machine"><xsl:value-of select="@machine"/>: </span>
</xsl:otherwise>
</xsl:choose>
</xsl:if>

<xsl:choose>
<xsl:when test="@image">
<a href="{@image}"><xsl:apply-templates/></a>
</xsl:when>
<xsl:otherwise>
<xsl:apply-templates/>
</xsl:otherwise>
</xsl:choose>
</code>
</xsl:template>


<xsl:template match="storeref">
<em class='storeref'>
<span class='popup'><xsl:apply-templates/></span>
<span class='elided'>/...</span><xsl:apply-templates select='name'/><xsl:apply-templates select='path'/>
</em>
</xsl:template>

</xsl:stylesheet>
@@ -1,129 +0,0 @@
body {
font-family: sans-serif;
background: white;
}

h1
{
color: #005aa0;
font-size: 180%;
}

a {
text-decoration: none;
}


ul.nesting, ul.toplevel {
padding: 0;
margin: 0;
}

ul.toplevel {
list-style-type: none;
}

.line, .head {
padding-top: 0em;
}

ul.nesting li.line, ul.nesting li.lastline {
position: relative;
list-style-type: none;
}

ul.nesting li.line {
padding-left: 2.0em;
}

ul.nesting li.lastline {
padding-left: 2.1em; /* for the 0.1em border-left in .lastline > .lineconn */
}

li.line {
border-left: 0.1em solid #6185a0;
}

li.line > span.lineconn, li.lastline > span.lineconn {
position: absolute;
height: 0.65em;
left: 0em;
width: 1.5em;
border-bottom: 0.1em solid #6185a0;
}

li.lastline > span.lineconn {
border-left: 0.1em solid #6185a0;
}


em.storeref {
color: #500000;
position: relative;
width: 100%;
}

em.storeref:hover {
background-color: #eeeeee;
}

*.popup {
display: none;
/* background: url('http://losser.st-lab.cs.uu.nl/~mbravenb/menuback.png') repeat; */
background: #ffffcd;
border: solid #555555 1px;
position: absolute;
top: 0em;
left: 0em;
margin: 0;
padding: 0;
z-index: 100;
}

em.storeref:hover span.popup {
display: inline;
width: 40em;
}


.logTreeToggle {
text-decoration: none;
font-family: monospace;
font-size: larger;
}

.errorLine {
color: #ff0000;
font-weight: bold;
}

.warningLine {
color: darkorange;
font-weight: bold;
}

.prio3 {
font-style: italic;
}

code {
white-space: pre-wrap;
}

.serial {
color: #56115c;
}

.machine {
color: #002399;
font-style: italic;
}

ul.vmScreenshots {
padding-left: 1em;
}

ul.vmScreenshots li {
font-family: monospace;
list-style: square;
}
@@ -143,7 +143,7 @@ class Logger:
self.logfile = os.environ.get("LOGFILE", "/dev/null")
self.logfile_handle = codecs.open(self.logfile, "wb")
self.xml = XMLGenerator(self.logfile_handle, encoding="utf-8")
self.queue: "Queue[Dict[str, str]]" = Queue(1000)
self.queue: "Queue[Dict[str, str]]" = Queue()

self.xml.startDocument()
self.xml.startElement("logfile", attrs={})
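A minimal sketch (standard-library only, not from the driver itself) of the difference behind the `Queue(1000)` → `Queue()` change above: a bounded queue raises `Full` (or blocks) once `maxsize` items are enqueued, which can stall the log producer, while the unbounded form never does.

```python
from queue import Full, Queue

# Bounded: puts fail once maxsize items are waiting.
bounded = Queue(maxsize=2)
bounded.put("a")
bounded.put("b")
try:
    bounded.put("c", block=False)
    overflowed = False
except Full:
    overflowed = True

# Unbounded: maxsize=0 means "no limit", so puts always succeed.
unbounded = Queue()
for i in range(10_000):
    unbounded.put(i)

print(overflowed)         # True
print(unbounded.qsize())  # 10000
```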
@@ -391,11 +391,11 @@ class Machine:
def execute(self, command: str) -> Tuple[int, str]:
self.connect()

out_command = "( {} ); echo '|!EOF' $?\n".format(command)
out_command = "( {} ); echo '|!=EOF' $?\n".format(command)
self.shell.send(out_command.encode())

output = ""
status_code_pattern = re.compile(r"(.*)\|\!EOF\s+(\d+)")
status_code_pattern = re.compile(r"(.*)\|\!=EOF\s+(\d+)")

while True:
chunk = self.shell.recv(4096).decode(errors="ignore")
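A self-contained sketch of the sentinel parsing used by `Machine.execute` above (the input string is simulated, not real VM output): the shell echoes a marker plus `$?` after each command, and the regex splits the interleaved output into the command's output and its exit status. The marker was changed from `|!EOF` to `|!=EOF` here, making a collision with ordinary command output less likely.

```python
import re

# Same pattern as the new driver code: everything before the marker is the
# command output, the trailing digits are the exit status.
status_code_pattern = re.compile(r"(.*)\|\!=EOF\s+(\d+)")

raw = "hello from the vm |!=EOF 42"  # simulated shell output
match = status_code_pattern.match(raw)
assert match is not None
output, status = match.group(1), int(match.group(2))
print(repr(output))  # 'hello from the vm '
print(status)        # 42
```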
@@ -1,30 +0,0 @@
$(document).ready(function() {

/* When a toggle is clicked, show or hide the subtree. */
$(".logTreeToggle").click(function() {
if ($(this).siblings("ul:hidden").length != 0) {
$(this).siblings("ul").show();
$(this).text("-");
} else {
$(this).siblings("ul").hide();
$(this).text("+");
}
});

/* Implementation of the expand all link. */
$(".logTreeExpandAll").click(function() {
$(".logTreeToggle", $(this).parent().siblings(".toplevel")).map(function() {
$(this).siblings("ul").show();
$(this).text("-");
});
});

/* Implementation of the collapse all link. */
$(".logTreeCollapseAll").click(function() {
$(".logTreeToggle", $(this).parent().siblings(".toplevel")).map(function() {
$(this).siblings("ul").hide();
$(this).text("+");
});
});

});
@@ -62,25 +62,11 @@ in rec {

requiredSystemFeatures = [ "kvm" "nixos-test" ];

buildInputs = [ libxslt ];

buildCommand =
''
mkdir -p $out/nix-support
mkdir -p $out

LOGFILE=$out/log.xml tests='exec(os.environ["testScript"])' ${driver}/bin/nixos-test-driver

# Generate a pretty-printed log.
xsltproc --output $out/log.html ${./test-driver/log2html.xsl} $out/log.xml
ln -s ${./test-driver/logfile.css} $out/logfile.css
ln -s ${./test-driver/treebits.js} $out/treebits.js
ln -s ${jquery}/js/jquery.min.js $out/
ln -s ${jquery}/js/jquery.js $out/
ln -s ${jquery-ui}/js/jquery-ui.min.js $out/
ln -s ${jquery-ui}/js/jquery-ui.js $out/

touch $out/nix-support/hydra-build-products
echo "report testlog $out log.html" >> $out/nix-support/hydra-build-products
LOGFILE=/dev/null tests='exec(os.environ["testScript"])' ${driver}/bin/nixos-test-driver

for i in */xchg/coverage-data; do
mkdir -p $out/coverage-data

|
||||
|
||||
requiredSystemFeatures = [ "kvm" "nixos-test" ];
|
||||
|
||||
buildInputs = [ libxslt ];
|
||||
|
||||
buildCommand =
|
||||
''
|
||||
mkdir -p $out/nix-support
|
||||
mkdir -p $out
|
||||
|
||||
LOGFILE=$out/log.xml tests='eval $ENV{testScript}; die $@ if $@;' ${driver}/bin/nixos-test-driver
|
||||
|
||||
# Generate a pretty-printed log.
|
||||
xsltproc --output $out/log.html ${./test-driver/log2html.xsl} $out/log.xml
|
||||
ln -s ${./test-driver/logfile.css} $out/logfile.css
|
||||
ln -s ${./test-driver/treebits.js} $out/treebits.js
|
||||
ln -s ${jquery}/js/jquery.min.js $out/
|
||||
ln -s ${jquery-ui}/js/jquery-ui.min.js $out/
|
||||
|
||||
touch $out/nix-support/hydra-build-products
|
||||
echo "report testlog $out log.html" >> $out/nix-support/hydra-build-products
|
||||
LOGFILE=/dev/null tests='eval $ENV{testScript}; die $@ if $@;' ${driver}/bin/nixos-test-driver
|
||||
|
||||
for i in */xchg/coverage-data; do
|
||||
mkdir -p $out/coverage-data
|
||||
|
@@ -8,30 +8,22 @@ let

# only with nscd up and running we can load NSS modules that are not integrated in NSS
canLoadExternalModules = config.services.nscd.enable;
myhostname = canLoadExternalModules;
mymachines = canLoadExternalModules;
# XXX Move these to their respective modules
nssmdns = canLoadExternalModules && config.services.avahi.nssmdns;
nsswins = canLoadExternalModules && config.services.samba.nsswins;
ldap = canLoadExternalModules && (config.users.ldap.enable && config.users.ldap.nsswitch);
resolved = canLoadExternalModules && config.services.resolved.enable;

hostArray = mkMerge [
(mkBefore [ "files" ])
(mkIf mymachines [ "mymachines" ])
(mkIf nssmdns [ "mdns_minimal [NOTFOUND=return]" ])
(mkIf nsswins [ "wins" ])
(mkIf resolved [ "resolve [!UNAVAIL=return]" ])
(mkAfter [ "dns" ])
(mkIf nssmdns (mkOrder 1501 [ "mdns" ])) # 1501 to ensure it's after dns
(mkIf myhostname (mkOrder 1600 [ "myhostname" ])) # 1600 to ensure it's always the last
];

passwdArray = mkMerge [
(mkBefore [ "files" ])
(mkIf ldap [ "ldap" ])
(mkIf mymachines [ "mymachines" ])
(mkIf canLoadExternalModules (mkAfter [ "systemd" ]))
];

shadowArray = mkMerge [
@@ -134,11 +126,6 @@ in {
assertion = config.system.nssModules.path != "" -> canLoadExternalModules;
message = "Loading NSS modules from path ${config.system.nssModules.path} requires nscd being enabled.";
}
{
# resolved does not need to add to nssModules, therefore needs an extra assertion
assertion = resolved -> canLoadExternalModules;
message = "Loading systemd-resolved's nss-resolve NSS module requires nscd being enabled.";
}
];

# Name Service Switch configuration file. Required by the C
@@ -164,12 +151,5 @@ in {
hosts = hostArray;
services = mkBefore [ "files" ];
};

# Systemd provides nss-myhostname to ensure that our hostname
# always resolves to a valid IP address. It returns all locally
# configured IP addresses, or ::1 and 127.0.0.2 as
# fallbacks. Systemd also provides nss-mymachines to return IP
# addresses of local containers.
system.nssModules = (optionals canLoadExternalModules [ config.systemd.package.out ]);
};
}

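The `mkMerge` pieces in `hostArray` above are assembled by module-system priority (`mkBefore` ≈ 500, plain entries 1000, `mkAfter` ≈ 1500, plus the explicit `mkOrder 1501`/`1600` values), so with Avahi mDNS and systemd-resolved both enabled the generated `hosts:` line of `/etc/nsswitch.conf` would come out roughly as follows. This is an illustrative reconstruction, not taken from the diff:

```
hosts: files mymachines mdns_minimal [NOTFOUND=return] resolve [!UNAVAIL=return] dns mdns myhostname
```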
@@ -15,7 +15,6 @@ mountPoint=/mnt
channelPath=
system=
verbosity=()
buildLogs=

while [ "$#" -gt 0 ]; do
i="$1"; shift 1
@@ -60,9 +59,6 @@ while [ "$#" -gt 0 ]; do
-v*|--verbose)
verbosity+=("$i")
;;
-L|--print-build-logs)
buildLogs="$i"
;;
*)
echo "$0: unknown option \`$i'"
exit 1
@@ -91,8 +87,11 @@ if [[ ! -e $NIXOS_CONFIG && -z $system ]]; then
fi

# A place to drop temporary stuff.
tmpdir="$(mktemp -d -p $mountPoint)"
trap "rm -rf $tmpdir" EXIT
tmpdir="$(mktemp -d)"

# store temporary files on target filesystem by default
export TMPDIR=${TMPDIR:-$tmpdir}

sub="auto?trusted=1"

@@ -100,9 +99,9 @@ sub="auto?trusted=1"
if [[ -z $system ]]; then
echo "building the configuration in $NIXOS_CONFIG..."
outLink="$tmpdir/system"
nix build --out-link "$outLink" --store "$mountPoint" "${extraBuildFlags[@]}" \
nix-build --out-link "$outLink" --store "$mountPoint" "${extraBuildFlags[@]}" \
--extra-substituters "$sub" \
-f '<nixpkgs/nixos>' system -I "nixos-config=$NIXOS_CONFIG" ${verbosity[@]} ${buildLogs}
'<nixpkgs/nixos>' -A system -I "nixos-config=$NIXOS_CONFIG" ${verbosity[@]}
system=$(readlink -f $outLink)
fi

@@ -984,9 +984,9 @@
./virtualisation/container-config.nix
./virtualisation/containers.nix
./virtualisation/nixos-containers.nix
./virtualisation/oci-containers.nix
./virtualisation/cri-o.nix
./virtualisation/docker.nix
./virtualisation/docker-containers.nix
./virtualisation/ecs-agent.nix
./virtualisation/libvirtd.nix
./virtualisation/lxc.nix

@@ -75,7 +75,7 @@ in
};

link = mkOption {
default = "http://planet.nixos.org";
default = "https://planet.nixos.org";
type = types.str;
description = ''
Link to the main page.

@@ -87,19 +87,19 @@ let
default = {};
example = literalExample ''
{
"example.org" = "/srv/http/nginx";
"example.org" = null;
"mydomain.org" = null;
}
'';
description = ''
A list of extra domain names, which are included in the one certificate to be issued, with their
own server roots if needed.
A list of extra domain names, which are included in the one certificate to be issued.
Setting a distinct server root is deprecated and not functional in 20.03+
'';
};

keyType = mkOption {
type = types.str;
default = "ec384";
default = "ec256";
description = ''
Key type to use for private keys.
For an up to date list of supported values check the --key-type option
@@ -250,7 +250,7 @@ in
"example.com" = {
webroot = "/var/www/challenges/";
email = "foo@example.com";
extraDomains = { "www.example.com" = null; "foo.example.com" = "/var/www/foo/"; };
extraDomains = { "www.example.com" = null; "foo.example.com" = null; };
};
"bar.example.com" = {
webroot = "/var/www/challenges/";

@@ -6,65 +6,49 @@
<title>SSL/TLS Certificates with ACME</title>
<para>
NixOS supports automatic domain validation & certificate retrieval and
renewal using the ACME protocol. This is currently only implemented by and
for Let's Encrypt. The alternative ACME client <literal>lego</literal> is
used under the hood.
renewal using the ACME protocol. Any provider can be used, but by default
NixOS uses Let's Encrypt. The alternative ACME client <literal>lego</literal>
is used under the hood.
</para>
<para>
Automatic cert validation and configuration for Apache and Nginx virtual
hosts is included in NixOS, however if you would like to generate a wildcard
cert or you are not using a web server you will have to configure DNS
based validation.
</para>
<section xml:id="module-security-acme-prerequisites">
<title>Prerequisites</title>

<para>
You need to have a running HTTP server for verification. The server must
have a webroot defined that can serve
To use the ACME module, you must accept the provider's terms of service
by setting <literal><xref linkend="opt-security.acme.acceptTerms" /></literal>
to <literal>true</literal>. The Let's Encrypt ToS can be found
<link xlink:href="https://letsencrypt.org/repository/">here</link>.
</para>

<para>
You must also set an email address to be used when creating accounts with
Let's Encrypt. You can set this for all certs with
<literal><xref linkend="opt-security.acme.email" /></literal>
and/or on a per-cert basis with
<literal><xref linkend="opt-security.acme.certs._name_.email" /></literal>.
This address is only used for registration and renewal reminders,
and cannot be used to administer the certificates in any way.
</para>

<para>
Alternatively, you can use a different ACME server by changing the
<literal><xref linkend="opt-security.acme.server" /></literal> option
to a provider of your choosing, or just change the server for one cert with
<literal><xref linkend="opt-security.acme.certs._name_.server" /></literal>.
</para>

<para>
You will need an HTTP server or DNS server for verification. For HTTP,
the server must have a webroot defined that can serve
<filename>.well-known/acme-challenge</filename>. This directory must be
writeable by the user that will run the ACME client.
</para>

<para>
For instance, this generic snippet could be used for Nginx:
<programlisting>
http {
server {
server_name _;
listen 80;
listen [::]:80;

location /.well-known/acme-challenge {
root /var/www/challenges;
}

location / {
return 301 https://$host$request_uri;
}
}
}
</programlisting>
</para>
</section>
<section xml:id="module-security-acme-configuring">
<title>Configuring</title>

<para>
To enable ACME certificate retrieval & renewal for a certificate for
<literal>foo.example.com</literal>, add the following in your
<filename>configuration.nix</filename>:
<programlisting>
<xref linkend="opt-security.acme.certs"/>."foo.example.com" = {
<link linkend="opt-security.acme.certs._name_.webroot">webroot</link> = "/var/www/challenges";
<link linkend="opt-security.acme.certs._name_.email">email</link> = "foo@example.com";
};
</programlisting>
</para>

<para>
The private key <filename>key.pem</filename> and certificate
<filename>fullchain.pem</filename> will be put into
<filename>/var/lib/acme/foo.example.com</filename>.
</para>
<para>
Refer to <xref linkend="ch-options" /> for all available configuration
options for the <link linkend="opt-security.acme.certs">security.acme</link>
module.
writeable by the user that will run the ACME client. For DNS, you must
set up credentials with your provider/server for use with lego.
</para>
</section>
<section xml:id="module-security-acme-nginx">

@@ -80,12 +64,27 @@ http {
</para>

<programlisting>
<xref linkend="opt-security.acme.acceptTerms" /> = true;
<xref linkend="opt-security.acme.email" /> = "admin+acme@example.com";
services.nginx = {
<link linkend="opt-services.nginx.enable">enable = true;</link>
<link linkend="opt-services.nginx.enable">enable</link> = true;
<link linkend="opt-services.nginx.virtualHosts">virtualHosts</link> = {
"foo.example.com" = {
<link linkend="opt-services.nginx.virtualHosts._name_.forceSSL">forceSSL</link> = true;
<link linkend="opt-services.nginx.virtualHosts._name_.enableACME">enableACME</link> = true;
# All serverAliases will be added as <link linkend="opt-security.acme.certs._name_.extraDomains">extra domains</link> on the certificate.
<link linkend="opt-services.nginx.virtualHosts._name_.serverAliases">serverAliases</link> = [ "bar.example.com" ];
locations."/" = {
<link linkend="opt-services.nginx.virtualHosts._name_.locations._name_.root">root</link> = "/var/www";
};
};

# We can also add a different vhost and reuse the same certificate
# but we have to append extraDomains manually.
<link linkend="opt-security.acme.certs._name_.extraDomains">security.acme.certs."foo.example.com".extraDomains."baz.example.com"</link> = null;
"baz.example.com" = {
<link linkend="opt-services.nginx.virtualHosts._name_.forceSSL">forceSSL</link> = true;
<link linkend="opt-services.nginx.virtualHosts._name_.useACMEHost">useACMEHost</link> = "foo.example.com";
locations."/" = {
<link linkend="opt-services.nginx.virtualHosts._name_.locations._name_.root">root</link> = "/var/www";
};
@@ -94,4 +93,162 @@ services.nginx = {
}
</programlisting>
</section>
<section xml:id="module-security-acme-httpd">
<title>Using ACME certificates in Apache/httpd</title>

<para>
Using ACME certificates with Apache virtual hosts is identical
to using them with Nginx. The attribute names are all the same, just replace
"nginx" with "httpd" where appropriate.
</para>
</section>
<section xml:id="module-security-acme-configuring">
<title>Manual configuration of HTTP-01 validation</title>

<para>
First off you will need to set up a virtual host to serve the challenges.
This example uses a vhost called <literal>certs.example.com</literal>, with
the intent that you will generate certs for all your vhosts and redirect
everyone to HTTPS.
</para>

<programlisting>
<xref linkend="opt-security.acme.acceptTerms" /> = true;
<xref linkend="opt-security.acme.email" /> = "admin+acme@example.com";
services.nginx = {
<link linkend="opt-services.nginx.enable">enable</link> = true;
<link linkend="opt-services.nginx.virtualHosts">virtualHosts</link> = {
"acmechallenge.example.com" = {
# Catchall vhost, will redirect users to HTTPS for all vhosts
<link linkend="opt-services.nginx.virtualHosts._name_.serverAliases">serverAliases</link> = [ "*.example.com" ];
# /var/lib/acme/.challenges must be writable by the ACME user
# and readable by the Nginx user.
# By default, this is the case.
locations."/.well-known/acme-challenge" = {
<link linkend="opt-services.nginx.virtualHosts._name_.locations._name_.root">root</link> = "/var/lib/acme/.challenges";
};
locations."/" = {
<link linkend="opt-services.nginx.virtualHosts._name_.locations._name_.return">return</link> = "301 https://$host$request_uri";
};
};
};
}
# Alternative config for Apache
services.httpd = {
<link linkend="opt-services.httpd.enable">enable = true;</link>
<link linkend="opt-services.httpd.virtualHosts">virtualHosts</link> = {
"acmechallenge.example.com" = {
# Catchall vhost, will redirect users to HTTPS for all vhosts
<link linkend="opt-services.httpd.virtualHosts._name_.serverAliases">serverAliases</link> = [ "*.example.com" ];
# /var/lib/acme/.challenges must be writable by the ACME user and readable by the Apache user.
# By default, this is the case.
<link linkend="opt-services.httpd.virtualHosts._name_.documentRoot">documentRoot</link> = "/var/lib/acme/.challenges";
<link linkend="opt-services.httpd.virtualHosts._name_.extraConfig">extraConfig</link> = ''
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteCond %{REQUEST_URI} !^/\.well-known/acme-challenge [NC]
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301]
'';
};
};
}
</programlisting>

<para>
Now you need to configure ACME to generate a certificate.
</para>

<programlisting>
<xref linkend="opt-security.acme.certs"/>."foo.example.com" = {
<link linkend="opt-security.acme.certs._name_.webroot">webroot</link> = "/var/lib/acme/.challenges";
<link linkend="opt-security.acme.certs._name_.email">email</link> = "foo@example.com";
# Since we have a wildcard vhost to handle port 80,
# we can generate certs for anything!
# Just make sure your DNS resolves them.
<link linkend="opt-security.acme.certs._name_.extraDomains">extraDomains</link> = [ "mail.example.com" ];
};
</programlisting>

<para>
The private key <filename>key.pem</filename> and certificate
<filename>fullchain.pem</filename> will be put into
<filename>/var/lib/acme/foo.example.com</filename>.
</para>

<para>
Refer to <xref linkend="ch-options" /> for all available configuration
options for the <link linkend="opt-security.acme.certs">security.acme</link>
module.
</para>
</section>
<section xml:id="module-security-acme-config-dns">
<title>Configuring ACME for DNS validation</title>

<para>
This is useful if you want to generate a wildcard certificate, since
ACME servers will only hand out wildcard certs over DNS validation.
There are a number of supported DNS providers and servers you can utilise,
see the <link xlink:href="https://go-acme.github.io/lego/dns/">lego docs</link>
for provider/server specific configuration values. For the sake of these
docs, we will provide a fully self-hosted example using bind.
</para>

<programlisting>
services.bind = {
<link linkend="opt-services.bind.enable">enable</link> = true;
<link linkend="opt-services.bind.extraConfig">extraConfig</link> = ''
include "/var/lib/secrets/dnskeys.conf";
'';
<link linkend="opt-services.bind.zones">zones</link> = [
rec {
name = "example.com";
file = "/var/db/bind/${name}";
master = true;
extraConfig = "allow-update { key rfc2136key.example.com.; };";
}
];
}

# Now we can configure ACME
<xref linkend="opt-security.acme.acceptTerms" /> = true;
|
||||
<xref linkend="opt-security.acme.email" /> = "admin+acme@example.com";
|
||||
<xref linkend="opt-security.acme.certs" />."example.com" = {
|
||||
<link linkend="opt-security.acme.certs._name_.domain">domain</link> = "*.example.com";
|
||||
<link linkend="opt-security.acme.certs._name_.dnsProvider">dnsProvider</link> = "rfc2136";
|
||||
<link linkend="opt-security.acme.certs._name_.credentialsFile">credentialsFile</link> = "/var/lib/secrets/certs.secret";
|
||||
# We don't need to wait for propagation since this is a local DNS server
|
||||
<link linkend="opt-security.acme.certs._name_.dnsPropagationCheck">dnsPropagationCheck</link> = false;
|
||||
};
|
||||
</programlisting>
|
||||
|
||||
<para>
|
||||
The <filename>dnskeys.conf</filename> and <filename>certs.secret</filename>
|
||||
must be kept secure and thus you should not keep their contents in your
|
||||
Nix config. Instead, generate them one time with these commands:
|
||||
</para>
|
||||
|
||||
<programlisting>
|
||||
mkdir -p /var/lib/secrets
|
||||
tsig-keygen rfc2136key.example.com > /var/lib/secrets/dnskeys.conf
|
||||
chown named:root /var/lib/secrets/dnskeys.conf
|
||||
chmod 400 /var/lib/secrets/dnskeys.conf
|
||||
|
||||
# Copy the secret value from the dnskeys.conf, and put it in
|
||||
# RFC2136_TSIG_SECRET below
|
||||
|
||||
cat > /var/lib/secrets/certs.secret << EOF
|
||||
RFC2136_NAMESERVER='127.0.0.1:53'
|
||||
RFC2136_TSIG_ALGORITHM='hmac-sha256.'
|
||||
RFC2136_TSIG_KEY='rfc2136key.example.com'
|
||||
RFC2136_TSIG_SECRET='your secret key'
|
||||
EOF
|
||||
chmod 400 /var/lib/secrets/certs.secret
|
||||
</programlisting>
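The "copy the secret value" step above can be scripted. A minimal Python sketch that pulls the base64 secret out of the `key` block that `tsig-keygen` writes (the sample input mimics the file format; verify it against your actual `dnskeys.conf`):

```python
import re

def extract_tsig_secret(conf: str) -> str:
    """Pull the base64 secret out of a BIND key block."""
    match = re.search(r'secret\s+"([^"]+)"', conf)
    if not match:
        raise ValueError("no secret found in key block")
    return match.group(1)

# Illustrative sample in the shape tsig-keygen produces.
sample = '''
key "rfc2136key.example.com." {
    algorithm hmac-sha256;
    secret "SGVsbG8gd29ybGQhIHRoaXMgaXMgYSB0ZXN0Cg==";
};
'''
print(extract_tsig_secret(sample))
```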

<para>
Now you're all set to generate certs! You should monitor the first invocation
by running <literal>systemctl start acme-example.com.service &
journalctl -fu acme-example.com.service</literal> and watching its log output.
</para>
</section>
</chapter>

@ -372,6 +372,41 @@ in
and <citerefentry><refentrytitle>zfs</refentrytitle><manvolnum>8</manvolnum></citerefentry>
for more info.
'';
features.sendRaw = mkEnableOption ''
sendRaw feature which adds the option <literal>-w</literal> to the
<command>zfs send</command> command. For encrypted source datasets this
instructs zfs not to decrypt before sending, which results in a remote
backup that can't be read without the encryption key/passphrase, useful
when the remote isn't fully trusted or not physically secure. This
option must be used consistently: raw incrementals cannot be based on
non-raw snapshots and vice versa.
'';
features.skipIntermediates = mkEnableOption ''
Enable the skipIntermediates feature to send a single increment
between the latest common snapshot and the newly made one. It may skip
several source snaps if the destination was offline for some time, and
it should skip snapshots not managed by znapzend. Normally for online
destinations, the new snapshot is sent as soon as it is created on the
source, so there are no automatic increments to skip.
'';
features.lowmemRecurse = mkEnableOption ''
use lowmemRecurse on systems where you have too many datasets, so a
recursive listing of attributes to find backup plans exhausts the
memory available to <command>znapzend</command>: instead, go the slower
way to first list all impacted dataset names, and then query their
configs one by one.
'';
features.zfsGetType = mkEnableOption ''
use zfsGetType if your <command>zfs get</command> supports a
<literal>-t</literal> argument for filtering by dataset type at all AND
lists properties for snapshots by default when recursing, so that there
is too much data to process while searching for backup plans.
If these two conditions apply to your system, the time needed for a
<literal>--recursive</literal> search for backup plans can literally
differ by hundreds of times (depending on the number of snapshots in
that dataset tree... and a decent backup plan will ensure you have a lot
of those), so you would benefit from requesting this feature.
'';
};
};

@ -1,160 +1,494 @@
{ config, lib, pkgs, ... }:

with lib;

let
cfg = config.services.gitlab-runner;
configFile =
if (cfg.configFile == null) then
(pkgs.runCommand "config.toml" {
buildInputs = [ pkgs.remarshal ];
preferLocalBuild = true;
} ''
remarshal -if json -of toml \
< ${pkgs.writeText "config.json" (builtins.toJSON cfg.configOptions)} \
> $out
'')
else
cfg.configFile;
hasDocker = config.virtualisation.docker.enable;
hashedServices = with builtins; (mapAttrs' (name: service: nameValuePair
"${name}_${config.networking.hostName}_${
substring 0 12
(hashString "md5" (unsafeDiscardStringContext (toJSON service)))}"
service)
cfg.services);
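The `hashedServices` binding above derives a unique runner name from the service name, the hostname, and a 12-character prefix of the MD5 hash of the service's JSON representation, so a changed service definition shows up as a new runner to register. A Python sketch of the same naming scheme (the service dict and hostname are made-up values, and `json.dumps` will not byte-for-byte match Nix's `builtins.toJSON`):

```python
import hashlib
import json

def hashed_service_name(name: str, hostname: str, service: dict) -> str:
    """Mirror the Nix expression: <name>_<hostname>_<first 12 hex chars of md5(json)>."""
    digest = hashlib.md5(json.dumps(service).encode()).hexdigest()
    return f"{name}_{hostname}_{digest[:12]}"

# Any change to the service definition changes the hash, hence the name,
# which is what triggers re-registration.
a = hashed_service_name("nix", "ci-host", {"executor": "docker"})
b = hashed_service_name("nix", "ci-host", {"executor": "shell"})
print(a != b)  # → True
```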
configPath = "$HOME/.gitlab-runner/config.toml";
configureScript = pkgs.writeShellScriptBin "gitlab-runner-configure" (
if (cfg.configFile != null) then ''
mkdir -p $(dirname ${configPath})
cp ${cfg.configFile} ${configPath}
# make config file readable by service
chown -R --reference=$HOME $(dirname ${configPath})
'' else ''
export CONFIG_FILE=${configPath}

mkdir -p $(dirname ${configPath})

# remove no longer existing services
gitlab-runner verify --delete

# current and desired state
NEEDED_SERVICES=$(echo ${concatStringsSep " " (attrNames hashedServices)} | tr " " "\n")
REGISTERED_SERVICES=$(gitlab-runner list 2>&1 | grep 'Executor' | awk '{ print $1 }')

# difference between current and desired state
NEW_SERVICES=$(grep -vxF -f <(echo "$REGISTERED_SERVICES") <(echo "$NEEDED_SERVICES") || true)
OLD_SERVICES=$(grep -vxF -f <(echo "$NEEDED_SERVICES") <(echo "$REGISTERED_SERVICES") || true)
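The two `grep -vxF -f` calls compute set differences between the desired and registered runner names: names to newly register, and stale names to unregister. The same logic, sketched in Python with illustrative names:

```python
def diff_services(needed, registered):
    """Return (new, old): names to register and names to unregister."""
    needed, registered = set(needed), set(registered)
    return needed - registered, registered - needed

new, old = diff_services(
    needed=["nix_host_aaa", "shell_host_bbb"],
    registered=["shell_host_bbb", "docker_host_ccc"],
)
print(sorted(new))  # → ['nix_host_aaa']
print(sorted(old))  # → ['docker_host_ccc']
```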

# register new services
${concatStringsSep "\n" (mapAttrsToList (name: service: ''
if echo "$NEW_SERVICES" | grep -xq ${name}; then
bash -c ${escapeShellArg (concatStringsSep " \\\n " ([
"set -a && source ${service.registrationConfigFile} &&"
"gitlab-runner register"
"--non-interactive"
"--name ${name}"
"--executor ${service.executor}"
"--limit ${toString service.limit}"
"--request-concurrency ${toString service.requestConcurrency}"
"--maximum-timeout ${toString service.maximumTimeout}"
] ++ service.registrationFlags
++ optional (service.buildsDir != null)
"--builds-dir ${service.buildsDir}"
++ optional (service.preCloneScript != null)
"--pre-clone-script ${service.preCloneScript}"
++ optional (service.preBuildScript != null)
"--pre-build-script ${service.preBuildScript}"
++ optional (service.postBuildScript != null)
"--post-build-script ${service.postBuildScript}"
++ optional (service.tagList != [ ])
"--tag-list ${concatStringsSep "," service.tagList}"
++ optional service.runUntagged
"--run-untagged"
++ optional service.protected
"--access-level ref_protected"
++ optional service.debugTraceDisabled
"--debug-trace-disabled"
++ map (e: "--env ${escapeShellArg e}") (mapAttrsToList (name: value: "${name}=${value}") service.environmentVariables)
++ optionals (service.executor == "docker") (
assert (
assertMsg (service.dockerImage != null)
"dockerImage option is required for docker executor (${name})");
[ "--docker-image ${service.dockerImage}" ]
++ optional service.dockerDisableCache
"--docker-disable-cache"
++ optional service.dockerPrivileged
"--docker-privileged"
++ map (v: "--docker-volumes ${escapeShellArg v}") service.dockerVolumes
++ map (v: "--docker-extra-hosts ${escapeShellArg v}") service.dockerExtraHosts
++ map (v: "--docker-allowed-images ${escapeShellArg v}") service.dockerAllowedImages
++ map (v: "--docker-allowed-services ${escapeShellArg v}") service.dockerAllowedServices
)
))} && sleep 1
fi
'') hashedServices)}

# unregister old services
for NAME in $(echo "$OLD_SERVICES")
do
[ ! -z "$NAME" ] && gitlab-runner unregister \
--name "$NAME" && sleep 1
done

# update global options
remarshal --if toml --of json ${configPath} \
| jq -cM '.check_interval = ${toString cfg.checkInterval} |
.concurrent = ${toString cfg.concurrent}' \
| remarshal --if json --of toml \
| sponge ${configPath}

# make config file readable by service
chown -R --reference=$HOME $(dirname ${configPath})
'');
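The registration command above is assembled with `optional`/`optionals`, which append flags only when the corresponding option is set. A Python sketch of that conditional flag-assembly pattern (field names follow the module's options; the helper itself is illustrative, not part of the module):

```python
def build_register_flags(service: dict) -> list:
    """Assemble gitlab-runner register flags, adding optional ones conditionally."""
    flags = ["gitlab-runner", "register", "--non-interactive",
             "--name", service["name"],
             "--executor", service["executor"]]
    if service.get("buildsDir"):                # mirrors: optional (… != null)
        flags += ["--builds-dir", service["buildsDir"]]
    if service.get("tagList"):                  # mirrors: optional (… != [ ])
        flags += ["--tag-list", ",".join(service["tagList"])]
    if service.get("runUntagged"):              # mirrors a bare boolean optional
        flags.append("--run-untagged")
    if service["executor"] == "docker":
        # mirrors the assertMsg guard for docker executors
        assert service.get("dockerImage"), "dockerImage is required for docker executor"
        flags += ["--docker-image", service["dockerImage"]]
    return flags

flags = build_register_flags({"name": "nix", "executor": "docker",
                              "dockerImage": "alpine", "tagList": ["nix"]})
```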
startScript = pkgs.writeShellScriptBin "gitlab-runner-start" ''
export CONFIG_FILE=${configPath}
exec gitlab-runner run --working-directory $HOME
'';
in
{
options.services.gitlab-runner = {
enable = mkEnableOption "Gitlab Runner";

configFile = mkOption {
type = types.nullOr types.path;
default = null;
description = ''
Configuration file for gitlab-runner.
Use this option in favor of configOptions to avoid placing CI tokens in the nix store.

<option>configFile</option> takes precedence over <option>configOptions</option>.
<option>configFile</option> takes precedence over <option>services</option>.
<option>checkInterval</option> and <option>concurrent</option> will be ignored too.

Warning: Not using <option>configFile</option> will potentially result in secrets
leaking into the WORLD-READABLE nix store.
This option is deprecated, please use <option>services</option> instead.
You can use <option>registrationConfigFile</option> and
<option>registrationFlags</option>
for settings not covered by this module.
'';
type = types.nullOr types.path;
};

configOptions = mkOption {
checkInterval = mkOption {
type = types.int;
default = 0;
example = literalExample "with lib; (length (attrNames config.services.gitlab-runner.services)) * 3";
description = ''
Configuration for gitlab-runner
<option>configFile</option> will take precedence over this option.

Warning: all Configuration, especially CI token, will be stored in a
WORLD-READABLE file in the Nix Store.

If you want to protect your CI token use <option>configFile</option> instead.
Defines the interval length, in seconds, between new jobs check.
The default value is 3;
if set to 0 or lower, the default value will be used.
See <link xlink:href="https://docs.gitlab.com/runner/configuration/advanced-configuration.html#how-check_interval-works">runner documentation</link> for more information.
'';
};
concurrent = mkOption {
type = types.int;
default = 1;
example = literalExample "config.nix.maxJobs";
description = ''
Limits how many jobs globally can be run concurrently.
The uppermost limit of jobs using all defined runners.
0 does not mean unlimited.
'';
type = types.attrs;
example = {
concurrent = 2;
runners = [{
name = "docker-nix-1.11";
url = "https://CI/";
token = "TOKEN";
executor = "docker";
builds_dir = "";
docker = {
host = "";
image = "nixos/nix:1.11";
privileged = true;
disable_cache = true;
cache_dir = "";
};
}];
};
};

gracefulTermination = mkOption {
default = false;
type = types.bool;
default = false;
description = ''
Finish all remaining jobs before stopping, restarting or reconfiguring.
If not set gitlab-runner will stop immediately without waiting for jobs to finish,
which will lead to failed builds.
Finish all remaining jobs before stopping.
If not set gitlab-runner will stop immediately without waiting
for jobs to finish, which will lead to failed builds.
'';
};

gracefulTimeout = mkOption {
default = "infinity";
type = types.str;
default = "infinity";
example = "5min 20s";
description = ''Time to wait until a graceful shutdown is turned into a forceful one.'';
description = ''
Time to wait until a graceful shutdown is turned into a forceful one.
'';
};

workDir = mkOption {
default = "/var/lib/gitlab-runner";
type = types.path;
description = "The working directory used";
};

package = mkOption {
description = "Gitlab Runner package to use";
type = types.package;
default = pkgs.gitlab-runner;
defaultText = "pkgs.gitlab-runner";
type = types.package;
example = literalExample "pkgs.gitlab-runner_1_11";
description = "Gitlab Runner package to use.";
};

packages = mkOption {
default = [ pkgs.bash pkgs.docker-machine ];
defaultText = "[ pkgs.bash pkgs.docker-machine ]";
extraPackages = mkOption {
type = types.listOf types.package;
default = [ ];
description = ''
Packages to add to PATH for the gitlab-runner process.
Extra packages to add to PATH for the gitlab-runner process.
'';
};
services = mkOption {
description = "GitLab Runner services.";
default = { };
example = literalExample ''
{
# runner for building in docker via host's nix-daemon
# nix store will be readable in runner, might be insecure
nix = {
# File should contain at least these two variables:
# `CI_SERVER_URL`
# `REGISTRATION_TOKEN`
registrationConfigFile = "/run/secrets/gitlab-runner-registration";
dockerImage = "alpine";
dockerVolumes = [
"/nix/store:/nix/store:ro"
"/nix/var/nix/db:/nix/var/nix/db:ro"
"/nix/var/nix/daemon-socket:/nix/var/nix/daemon-socket:ro"
];
dockerDisableCache = true;
preBuildScript = pkgs.writeScript "setup-container" '''
mkdir -p -m 0755 /nix/var/log/nix/drvs
mkdir -p -m 0755 /nix/var/nix/gcroots
mkdir -p -m 0755 /nix/var/nix/profiles
mkdir -p -m 0755 /nix/var/nix/temproots
mkdir -p -m 0755 /nix/var/nix/userpool
mkdir -p -m 1777 /nix/var/nix/gcroots/per-user
mkdir -p -m 1777 /nix/var/nix/profiles/per-user
mkdir -p -m 0755 /nix/var/nix/profiles/per-user/root
mkdir -p -m 0700 "$HOME/.nix-defexpr"

. ''${pkgs.nix}/etc/profile.d/nix.sh

''${pkgs.nix}/bin/nix-env -i ''${concatStringsSep " " (with pkgs; [ nix cacert git openssh ])}

''${pkgs.nix}/bin/nix-channel --add https://nixos.org/channels/nixpkgs-unstable
''${pkgs.nix}/bin/nix-channel --update nixpkgs
''';
environmentVariables = {
ENV = "/etc/profile";
USER = "root";
NIX_REMOTE = "daemon";
PATH = "/nix/var/nix/profiles/default/bin:/nix/var/nix/profiles/default/sbin:/bin:/sbin:/usr/bin:/usr/sbin";
NIX_SSL_CERT_FILE = "/nix/var/nix/profiles/default/etc/ssl/certs/ca-bundle.crt";
};
tagList = [ "nix" ];
};
# runner for building docker images
docker-images = {
# File should contain at least these two variables:
# `CI_SERVER_URL`
# `REGISTRATION_TOKEN`
registrationConfigFile = "/run/secrets/gitlab-runner-registration";
dockerImage = "docker:stable";
dockerVolumes = [
"/var/run/docker.sock:/var/run/docker.sock"
];
tagList = [ "docker-images" ];
};
# runner for executing stuff on host system (very insecure!)
# make sure to add required packages (including git!)
# to `environment.systemPackages`
shell = {
# File should contain at least these two variables:
# `CI_SERVER_URL`
# `REGISTRATION_TOKEN`
registrationConfigFile = "/run/secrets/gitlab-runner-registration";
executor = "shell";
tagList = [ "shell" ];
};
# runner for everything else
default = {
# File should contain at least these two variables:
# `CI_SERVER_URL`
# `REGISTRATION_TOKEN`
registrationConfigFile = "/run/secrets/gitlab-runner-registration";
dockerImage = "debian:stable";
};
}
'';
type = types.attrsOf (types.submodule {
options = {
registrationConfigFile = mkOption {
type = types.path;
description = ''
Absolute path to a file with environment variables
used for gitlab-runner registration.
A list of all supported environment variables can be found in
<literal>gitlab-runner register --help</literal>.

Ones that you probably want to set are

<literal>CI_SERVER_URL=<CI server URL></literal>

<literal>REGISTRATION_TOKEN=<registration secret></literal>
'';
};
registrationFlags = mkOption {
type = types.listOf types.str;
default = [ ];
example = [ "--docker-helper-image my/gitlab-runner-helper" ];
description = ''
Extra command-line flags passed to
<literal>gitlab-runner register</literal>.
Execute <literal>gitlab-runner register --help</literal>
for a list of supported flags.
'';
};
environmentVariables = mkOption {
type = types.attrsOf types.str;
default = { };
example = { NAME = "value"; };
description = ''
Custom environment variables injected into the build environment.
For secrets you can use <option>registrationConfigFile</option>
with the <literal>RUNNER_ENV</literal> variable set.
'';
};
executor = mkOption {
type = types.str;
default = "docker";
description = ''
Select executor, e.g. shell, docker, etc.
See <link xlink:href="https://docs.gitlab.com/runner/executors/README.html">runner documentation</link> for more information.
'';
};
buildsDir = mkOption {
type = types.nullOr types.path;
default = null;
example = "/var/lib/gitlab-runner/builds";
description = ''
Absolute path to a directory where builds will be stored
in the context of the selected executor (locally, Docker, SSH).
'';
};
dockerImage = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
Docker image to be used.
'';
};
dockerVolumes = mkOption {
type = types.listOf types.str;
default = [ ];
example = [ "/var/run/docker.sock:/var/run/docker.sock" ];
description = ''
Bind-mount a volume and create it
if it doesn't exist prior to mounting.
'';
};
dockerDisableCache = mkOption {
type = types.bool;
default = false;
description = ''
Disable all container caching.
'';
};
dockerPrivileged = mkOption {
type = types.bool;
default = false;
description = ''
Give extended privileges to the container.
'';
};
dockerExtraHosts = mkOption {
type = types.listOf types.str;
default = [ ];
example = [ "other-host:127.0.0.1" ];
description = ''
Add a custom host-to-IP mapping.
'';
};
dockerAllowedImages = mkOption {
type = types.listOf types.str;
default = [ ];
example = [ "ruby:*" "python:*" "php:*" "my.registry.tld:5000/*:*" ];
description = ''
Whitelist allowed images.
'';
};
dockerAllowedServices = mkOption {
type = types.listOf types.str;
default = [ ];
example = [ "postgres:9" "redis:*" "mysql:*" ];
description = ''
Whitelist allowed services.
'';
};
preCloneScript = mkOption {
type = types.nullOr types.path;
default = null;
description = ''
Runner-specific command script executed before code is pulled.
'';
};
preBuildScript = mkOption {
type = types.nullOr types.path;
default = null;
description = ''
Runner-specific command script executed after code is pulled,
just before build executes.
'';
};
postBuildScript = mkOption {
type = types.nullOr types.path;
default = null;
description = ''
Runner-specific command script executed after code is pulled
and just after build executes.
'';
};
tagList = mkOption {
type = types.listOf types.str;
default = [ ];
description = ''
Tag list.
'';
};
runUntagged = mkOption {
type = types.bool;
default = false;
description = ''
Register to run untagged builds; defaults to
<literal>true</literal> when <option>tagList</option> is empty.
'';
};
limit = mkOption {
type = types.int;
default = 0;
description = ''
Limit how many jobs can be handled concurrently by this service.
0 (default) simply means don't limit.
'';
};
requestConcurrency = mkOption {
type = types.int;
default = 0;
description = ''
Limit number of concurrent requests for new jobs from GitLab.
'';
};
maximumTimeout = mkOption {
type = types.int;
default = 0;
description = ''
The maximum timeout (in seconds) that will be set for a
job when using this Runner. 0 (default) simply means don't limit.
'';
};
protected = mkOption {
type = types.bool;
default = false;
description = ''
When set to true, the Runner will only run on pipelines
triggered on protected branches.
'';
};
debugTraceDisabled = mkOption {
type = types.bool;
default = false;
description = ''
When set to true, the Runner will disable the possibility of
using the <literal>CI_DEBUG_TRACE</literal> feature.
'';
};
};
});
};
};

config = mkIf cfg.enable {
warnings = optional (cfg.configFile != null) "services.gitlab-runner.`configFile` is deprecated, please use services.gitlab-runner.`services`.";
environment.systemPackages = [ cfg.package ];
systemd.services.gitlab-runner = {
path = cfg.packages;
environment = config.networking.proxy.envVars // {
# Gitlab runner will not start if the HOME variable is not set
HOME = cfg.workDir;
};
description = "Gitlab Runner";
documentation = [ "https://docs.gitlab.com/runner/" ];
after = [ "network.target" ]
++ optional hasDocker "docker.service";
requires = optional hasDocker "docker.service";
wantedBy = [ "multi-user.target" ];
environment = config.networking.proxy.envVars // {
HOME = "/var/lib/gitlab-runner";
};
path = with pkgs; [
bash
gawk
jq
moreutils
remarshal
utillinux
cfg.package
] ++ cfg.extraPackages;
reloadIfChanged = true;
restartTriggers = [
config.environment.etc."gitlab-runner/config.toml".source
];
serviceConfig = {
# Set `DynamicUser` under `systemd.services.gitlab-runner.serviceConfig`
# to `lib.mkForce false` in your configuration to run this service as root.
# You can also set `User` and `Group` options to run this service as the desired user.
# Make sure to restart the service or changes won't apply.
DynamicUser = true;
StateDirectory = "gitlab-runner";
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
ExecStart = ''${cfg.package}/bin/gitlab-runner run \
--working-directory ${cfg.workDir} \
--config /etc/gitlab-runner/config.toml \
--service gitlab-runner \
--user gitlab-runner \
'';

} // optionalAttrs (cfg.gracefulTermination) {
SupplementaryGroups = optional hasDocker "docker";
ExecStartPre = "!${configureScript}/bin/gitlab-runner-configure";
ExecStart = "${startScript}/bin/gitlab-runner-start";
ExecReload = "!${configureScript}/bin/gitlab-runner-configure";
} // optionalAttrs (cfg.gracefulTermination) {
TimeoutStopSec = "${cfg.gracefulTimeout}";
KillSignal = "SIGQUIT";
KillMode = "process";
};
};

# Make the gitlab-runner command available so users can query the runner
environment.systemPackages = [ cfg.package ];

# Make sure the config can be reloaded on change
environment.etc."gitlab-runner/config.toml".source = configFile;

users.users.gitlab-runner = {
group = "gitlab-runner";
extraGroups = optional hasDocker "docker";
uid = config.ids.uids.gitlab-runner;
home = cfg.workDir;
createHome = true;
};

users.groups.gitlab-runner.gid = config.ids.gids.gitlab-runner;
# Enable docker if `docker` executor is used in any service
virtualisation.docker.enable = mkIf (
any (s: s.executor == "docker") (attrValues cfg.services)
) (mkDefault true);
};
imports = [
(mkRenamedOptionModule [ "services" "gitlab-runner" "packages" ] [ "services" "gitlab-runner" "extraPackages" ] )
(mkRemovedOptionModule [ "services" "gitlab-runner" "configOptions" ] "Use services.gitlab-runner.services option instead" )
(mkRemovedOptionModule [ "services" "gitlab-runner" "workDir" ] "You should move contents of workDir (if any) to /var/lib/gitlab-runner" )
];
}

@ -231,6 +231,10 @@ in

};

meta = {
maintainers = lib.maintainers.mic92;
};


###### implementation


@ -34,13 +34,7 @@ in

services.postgresql = {

enable = mkOption {
type = types.bool;
default = false;
description = ''
Whether to run PostgreSQL.
'';
};
enable = mkEnableOption "PostgreSQL Server";

package = mkOption {
type = types.package;

@ -294,7 +294,7 @@ https://nixos.org/nixpkgs/manual/#sec-modify-via-packageOverrides
If you are not on NixOS or want to install this particular Emacs only for
yourself, you can do so by adding it to your
<filename>~/.config/nixpkgs/config.nix</filename> (see
<link xlink:href="http://nixos.org/nixpkgs/manual/#sec-modify-via-packageOverrides">Nixpkgs
<link xlink:href="https://nixos.org/nixpkgs/manual/#sec-modify-via-packageOverrides">Nixpkgs
manual</link>):
<example xml:id="module-services-emacs-config-nix">
<title>Custom Emacs in <filename>~/.config/nixpkgs/config.nix</filename></title>

@ -407,7 +407,7 @@ in

after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
restartTriggers = [ cfg.configFile ];
restartTriggers = [ cfg.configFile modulesDir ];

serviceConfig = {
ExecStart = "${dovecotPkg}/sbin/dovecot -F";

@ -14,53 +14,9 @@ let
RUN_USER = ${cfg.user}
RUN_MODE = prod

[database]
DB_TYPE = ${cfg.database.type}
${optionalString (usePostgresql || useMysql) ''
HOST = ${if cfg.database.socket != null then cfg.database.socket else cfg.database.host + ":" + toString cfg.database.port}
NAME = ${cfg.database.name}
USER = ${cfg.database.user}
PASSWD = #dbpass#
''}
${optionalString useSqlite ''
PATH = ${cfg.database.path}
''}
${optionalString usePostgresql ''
SSL_MODE = disable
''}
${generators.toINI {} cfg.settings}

[repository]
ROOT = ${cfg.repositoryRoot}

[server]
DOMAIN = ${cfg.domain}
HTTP_ADDR = ${cfg.httpAddress}
HTTP_PORT = ${toString cfg.httpPort}
ROOT_URL = ${cfg.rootUrl}
STATIC_ROOT_PATH = ${cfg.staticRootPath}
LFS_JWT_SECRET = #jwtsecret#

[session]
COOKIE_NAME = session
COOKIE_SECURE = ${boolToString cfg.cookieSecure}

[security]
SECRET_KEY = #secretkey#
INSTALL_LOCK = true

[log]
ROOT_PATH = ${cfg.log.rootPath}
LEVEL = ${cfg.log.level}

[service]
DISABLE_REGISTRATION = ${boolToString cfg.disableRegistration}

${optionalString (cfg.mailerPasswordFile != null) ''
[mailer]
PASSWD = #mailerpass#
''}

${cfg.extraConfig}
${optionalString (cfg.extraConfig != null) cfg.extraConfig}
'';
in

@ -279,9 +235,36 @@ in
'';
};

settings = mkOption {
type = with types; attrsOf (attrsOf (oneOf [ bool int str ]));
default = {};
description = ''
Gitea configuration. Refer to <link xlink:href="https://docs.gitea.io/en-us/config-cheat-sheet/"/>
for details on supported values.
'';
example = literalExample ''
{
"cron.sync_external_users" = {
RUN_AT_START = true;
SCHEDULE = "@every 24h";
UPDATE_EXISTING = true;
};
mailer = {
ENABLED = true;
MAILER_TYPE = "sendmail";
FROM = "do-not-reply@example.org";
SENDMAIL_PATH = "${pkgs.system-sendmail}/bin/sendmail";
};
other = {
SHOW_FOOTER_VERSION = false;
};
}
'';
};
|
||||
|
||||
extraConfig = mkOption {
|
||||
type = types.str;
|
||||
default = "";
|
||||
type = with types; nullOr str;
|
||||
default = null;
|
||||
description = "Configuration lines appended to the generated gitea configuration file.";
|
||||
};
|
||||
};
|
||||
@ -294,6 +277,62 @@ in
|
||||
}
|
||||
];
|
||||
|
||||
services.gitea.settings = {
|
||||
database = mkMerge [
|
||||
{
|
||||
DB_TYPE = cfg.database.type;
|
||||
}
|
||||
(mkIf (useMysql || usePostgresql) {
|
||||
HOST = if cfg.database.socket != null then cfg.database.socket else cfg.database.host + ":" + toString cfg.database.port;
|
||||
NAME = cfg.database.name;
|
||||
USER = cfg.database.user;
|
||||
PASSWD = "#dbpass#";
|
||||
})
|
||||
(mkIf useSqlite {
|
||||
PATH = cfg.database.path;
|
||||
})
|
||||
(mkIf usePostgresql {
|
||||
SSL_MODE = "disable";
|
||||
})
|
||||
];
|
||||
|
||||
repository = {
|
||||
ROOT = cfg.repositoryRoot;
|
||||
};
|
||||
|
||||
server = {
|
||||
DOMAIN = cfg.domain;
|
||||
HTTP_ADDR = cfg.httpAddress;
|
||||
HTTP_PORT = cfg.httpPort;
|
||||
ROOT_URL = cfg.rootUrl;
|
||||
STATIC_ROOT_PATH = cfg.staticRootPath;
|
||||
LFS_JWT_SECRET = "#jwtsecret#";
|
||||
};
|
||||
|
||||
session = {
|
||||
COOKIE_NAME = "session";
|
||||
COOKIE_SECURE = cfg.cookieSecure;
|
||||
};
|
||||
|
||||
security = {
|
||||
SECRET_KEY = "#secretkey#";
|
||||
INSTALL_LOCK = true;
|
||||
};
|
||||
|
||||
log = {
|
||||
ROOT_PATH = cfg.log.rootPath;
|
||||
LEVEL = cfg.log.level;
|
||||
};
|
||||
|
||||
service = {
|
||||
DISABLE_REGISTRATION = cfg.disableRegistration;
|
||||
};
|
||||
|
||||
mailer = mkIf (cfg.mailerPasswordFile != null) {
|
||||
PASSWD = "#mailerpass#";
|
||||
};
|
||||
};
|
||||
|
||||
services.postgresql = optionalAttrs (usePostgresql && cfg.database.createDatabase) {
|
||||
enable = mkDefault true;
|
||||
|
||||
@ -435,9 +474,12 @@ in
|
||||
|
||||
users.groups.gitea = {};
|
||||
|
||||
warnings = optional (cfg.database.password != "")
|
||||
''config.services.gitea.database.password will be stored as plaintext
|
||||
in the Nix store. Use database.passwordFile instead.'';
|
||||
warnings =
|
||||
optional (cfg.database.password != "") ''
|
||||
config.services.gitea.database.password will be stored as plaintext in the Nix store. Use database.passwordFile instead.'' ++
|
||||
optional (cfg.extraConfig != null) ''
|
||||
services.gitea.`extraConfig` is deprecated, please use services.gitea.`settings`.
|
||||
'';
|
||||
|
||||
# Create database passwordFile default when password is configured.
|
||||
services.gitea.database.passwordFile =
|
||||
|
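The new free-form `settings` option shown in this diff supersedes `extraConfig` for structured values. A minimal migration sketch, assuming the module as patched here (the `SSH_PORT` value is a hypothetical example, not taken from the diff):

```nix
{
  services.gitea = {
    enable = true;
    # Previously expressed as free-text ini lines via extraConfig:
    #   extraConfig = ''
    #     [server]
    #     SSH_PORT = 2222
    #   '';
    # With the new option the same setting is a typed attribute and can be
    # merged and overridden through the module system:
    settings.server.SSH_PORT = 2222;
  };
}
```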
@@ -283,7 +283,7 @@ in
trustedBinaryCaches = mkOption {
type = types.listOf types.str;
default = [ ];
example = [ "http://hydra.nixos.org/" ];
example = [ "https://hydra.nixos.org/" ];
description = ''
List of binary cache URLs that non-root users can use (in
addition to those specified using
@@ -510,8 +510,7 @@ in

system.activationScripts.nix = stringAfter [ "etc" "users" ]
''
# Create directories in /nix.
${nix}/bin/nix ping-store --no-net
install -m 0755 -d /nix/var/nix/{gcroots,profiles}/per-user

# Subscribe the root user to the NixOS channel by default.
if [ ! -e "/root/.nix-channels" ]; then
@@ -17,9 +17,9 @@ let

cfgUpdate = pkgs.writeText "octoprint-config.yaml" (builtins.toJSON fullConfig);

pluginsEnv = pkgs.python.buildEnv.override {
extraLibs = cfg.plugins pkgs.octoprint-plugins;
};
pluginsEnv = package.python.withPackages (ps: [ps.octoprint] ++ (cfg.plugins ps));

package = pkgs.octoprint;

in
{
@@ -106,7 +106,6 @@ in
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
path = [ pluginsEnv ];
environment.PYTHONPATH = makeSearchPathOutput "lib" pkgs.python.sitePackages [ pluginsEnv ];

preStart = ''
if [ -e "${cfg.stateDir}/config.yaml" ]; then
@@ -119,7 +118,7 @@ in
'';

serviceConfig = {
ExecStart = "${pkgs.octoprint}/bin/octoprint serve -b ${cfg.stateDir}";
ExecStart = "${pluginsEnv}/bin/octoprint serve -b ${cfg.stateDir}";
User = cfg.user;
Group = cfg.group;
};
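The change above folds OctoPrint and its plugins into one `withPackages` environment and launches the daemon from it, so plugins are importable without a separate PYTHONPATH. A usage sketch, assuming the module as patched (the plugin name is hypothetical; available plugins depend on what the plugin set in nixpkgs actually provides):

```nix
{
  services.octoprint = {
    enable = true;
    # `plugins` is a function over the OctoPrint plugin set; its result is
    # installed into the same Python environment as octoprint itself.
    plugins = plugins: with plugins; [ someplugin ];  # hypothetical name
  };
}
```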
@@ -37,9 +37,7 @@ let
baseService = recursiveUpdate commonEnv {
wants = [ "ipfs-init.service" ];
# NB: migration must be performed prior to pre-start, else we get the failure message!
preStart = ''
ipfs repo fsck # workaround for BUG #4212 (https://github.com/ipfs/go-ipfs/issues/4214)
'' + optionalString cfg.autoMount ''
preStart = optionalString cfg.autoMount ''
ipfs --local config Mounts.FuseAllowOther --json true
ipfs --local config Mounts.IPFS ${cfg.ipfsMountDir}
ipfs --local config Mounts.IPNS ${cfg.ipnsMountDir}
@@ -20,12 +20,14 @@ let
ssid=${cfg.ssid}
hw_mode=${cfg.hwMode}
channel=${toString cfg.channel}
${optionalString (cfg.countryCode != null) ''country_code=${cfg.countryCode}''}
${optionalString (cfg.countryCode != null) ''ieee80211d=1''}

# logging (debug level)
logger_syslog=-1
logger_syslog_level=2
logger_syslog_level=${toString cfg.logLevel}
logger_stdout=-1
logger_stdout_level=2
logger_stdout_level=${toString cfg.logLevel}

ctrl_interface=/run/hostapd
ctrl_interface_group=${cfg.group}
@@ -147,6 +149,35 @@ in
'';
};

logLevel = mkOption {
default = 2;
type = types.int;
description = ''
Levels (minimum value for logged events):
0 = verbose debugging
1 = debugging
2 = informational messages
3 = notification
4 = warning
'';
};

countryCode = mkOption {
default = null;
example = "US";
type = with types; nullOr str;
description = ''
Country code (ISO/IEC 3166-1). Used to set regulatory domain.
Set as needed to indicate country in which device is operating.
This can limit available channels and transmit power.
These two octets are used as the first two octets of the Country String
(dot11CountryString).
If set this enables IEEE 802.11d. This advertises the countryCode and
the set of allowed channels and transmit power levels based on the
regulatory limits.
'';
};

extraConfig = mkOption {
default = "";
example = ''
@@ -167,6 +198,8 @@ in

environment.systemPackages = [ pkgs.hostapd ];

services.udev.packages = optional (cfg.countryCode != null) [ pkgs.crda ];

systemd.services.hostapd =
{ description = "hostapd wireless AP";
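The two options introduced above interact with the generated hostapd.conf: `countryCode` emits `country_code=` plus `ieee80211d=1` and pulls in the crda udev rules, while `logLevel` replaces the previously hardcoded level 2. A sketch, assuming the module as patched (interface and ssid are placeholders):

```nix
{
  services.hostapd = {
    enable = true;
    interface = "wlan0";  # placeholder
    ssid = "example";     # placeholder
    countryCode = "US";   # sets country_code=US and ieee80211d=1
    logLevel = 1;         # debugging; the default stays at 2 (informational)
  };
}
```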
@@ -382,6 +382,11 @@ let
default = "en";
description = "Default room language.";
};
extraConfig = mkOption {
type = types.lines;
default = "";
description = "Additional MUC specific configuration";
};
};
};

@@ -792,6 +797,8 @@ in

https_ports = ${toLua cfg.httpsPorts}

${ cfg.extraConfig }

${lib.concatMapStrings (muc: ''
Component ${toLua muc.domain} "muc"
modules_enabled = { "muc_mam"; ${optionalString muc.vcard_muc ''"vcard_muc";'' } }
@@ -809,8 +816,8 @@ in
muc_room_default_change_subject = ${toLua muc.roomDefaultChangeSubject}
muc_room_default_history_length = ${toLua muc.roomDefaultHistoryLength}
muc_room_default_language = ${toLua muc.roomDefaultLanguage}

'') cfg.muc}
${ muc.extraConfig }
'') cfg.muc}

${ lib.optionalString (cfg.uploadHttp != null) ''
Component ${toLua cfg.uploadHttp.domain} "http_upload"
@@ -820,8 +827,6 @@ in
http_upload_path = ${toLua cfg.uploadHttp.httpUploadPath}
''}

${ cfg.extraConfig }

${ lib.concatStringsSep "\n" (lib.mapAttrsToList (n: v: ''
VirtualHost "${v.domain}"
enabled = ${boolToString v.enabled};
@@ -142,7 +142,7 @@ in {
description = ''
Extra packages available at runtime to enable Deluge's plugins. For example,
extraction utilities are required for the built-in "Extractor" plugin.
This always contains unzip, gnutar, xz, p7zip and bzip2.
This always contains unzip, gnutar, xz and bzip2.
'';
};

@@ -187,7 +187,7 @@ in {
);

# Provide a default set of `extraPackages`.
services.deluge.extraPackages = with pkgs; [ unzip gnutar xz p7zip bzip2 ];
services.deluge.extraPackages = with pkgs; [ unzip gnutar xz bzip2 ];

systemd.tmpfiles.rules = [
"d '${cfg.dataDir}' 0770 ${cfg.user} ${cfg.group}"
@@ -188,7 +188,7 @@ let
name = "icalevents";
# Download the plugin from the dokuwiki site
src = pkgs.fetchurl {
url = https://github.com/real-or-random/dokuwiki-plugin-icalevents/releases/download/2017-06-16/dokuwiki-plugin-icalevents-2017-06-16.zip;
url = "https://github.com/real-or-random/dokuwiki-plugin-icalevents/releases/download/2017-06-16/dokuwiki-plugin-icalevents-2017-06-16.zip";
sha256 = "e40ed7dd6bbe7fe3363bbbecb4de481d5e42385b5a0f62f6a6ce6bf3a1f9dfa8";
};
sourceRoot = ".";
@@ -216,7 +216,7 @@ let
name = "bootstrap3";
# Download the theme from the dokuwiki site
src = pkgs.fetchurl {
url = https://github.com/giterlizzi/dokuwiki-template-bootstrap3/archive/v2019-05-22.zip;
url = "https://github.com/giterlizzi/dokuwiki-template-bootstrap3/archive/v2019-05-22.zip";
sha256 = "4de5ff31d54dd61bbccaf092c9e74c1af3a4c53e07aa59f60457a8f00cfb23a6";
};
# We need unzip to build this package
@@ -91,41 +91,47 @@ in {
description = "Unit App Server";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
path = with pkgs; [ curl ];
preStart = ''
test -f '${cfg.stateDir}/conf.json' || rm -f '${cfg.stateDir}/conf.json'
[ ! -e '${cfg.stateDir}/conf.json' ] || rm -f '${cfg.stateDir}/conf.json'
'';
postStart = ''
curl -X PUT --data-binary '@${configFile}' --unix-socket '/run/unit/control.unit.sock' 'http://localhost/config'
${pkgs.curl}/bin/curl -X PUT --data-binary '@${configFile}' --unix-socket '/run/unit/control.unit.sock' 'http://localhost/config'
'';
serviceConfig = {
Type = "forking";
PIDFile = "/run/unit/unit.pid";
ExecStart = ''
${cfg.package}/bin/unitd --control 'unix:/run/unit/control.unit.sock' --pid '/run/unit/unit.pid' \
--log '${cfg.logDir}/unit.log' --state '${cfg.stateDir}' --no-daemon \
--log '${cfg.logDir}/unit.log' --state '${cfg.stateDir}' \
--user ${cfg.user} --group ${cfg.group}
'';
# User and group
User = cfg.user;
Group = cfg.group;
# Capabilities
AmbientCapabilities = [ "CAP_NET_BIND_SERVICE" "CAP_SETGID" "CAP_SETUID" ];
ExecStop = ''
${pkgs.curl}/bin/curl -X DELETE --unix-socket '/run/unit/control.unit.sock' 'http://localhost/config'
'';
# Runtime directory and mode
RuntimeDirectory = "unit";
RuntimeDirectoryMode = "0750";
# Access write directories
ReadWritePaths = [ cfg.stateDir cfg.logDir ];
# Security
NoNewPrivileges = true;
# Sandboxing
ProtectSystem = "full";
ProtectSystem = "strict";
ProtectHome = true;
RuntimeDirectory = "unit";
RuntimeDirectoryMode = "0750";
PrivateTmp = true;
PrivateDevices = true;
ProtectHostname = true;
ProtectKernelTunables = true;
ProtectKernelModules = true;
ProtectControlGroups = true;
RestrictAddressFamilies = [ "AF_UNIX" "AF_INET" "AF_INET6" ];
LockPersonality = true;
MemoryDenyWriteExecute = true;
RestrictRealtime = true;
RestrictSUIDSGID = true;
PrivateMounts = true;
# System Call Filtering
SystemCallArchitectures = "native";
};
};

@@ -109,7 +109,7 @@ in

# Without this, elementary LightDM greeter will pre-select non-existent `default` session
# https://github.com/elementary/greeter/issues/368
services.xserver.displayManager.defaultSession = "pantheon";
services.xserver.displayManager.defaultSession = mkDefault "pantheon";

services.xserver.displayManager.sessionCommands = ''
if test "$XDG_CURRENT_DESKTOP" = "Pantheon"; then
@@ -37,7 +37,7 @@ in
# If there is any package configured in modulePackages, we generate the
# loaders.cache based on that and set the environment variable
# GDK_PIXBUF_MODULE_FILE to point to it.
config = mkIf (cfg.modulePackages != [] || pkgs.stdenv.hostPlatform != pkgs.stdenv.buildPlatform) {
config = mkIf (cfg.modulePackages != []) {
environment.variables = {
GDK_PIXBUF_MODULE_FILE = "${loadersCache}";
};
@@ -1,39 +1,48 @@
{ config, lib, pkgs, ... }:

with lib;
with builtins;

let

cfg = config.services.picom;

pairOf = x: with types; addCheck (listOf x) (y: length y == 2);
pairOf = x: with types;
addCheck (listOf x) (y: length y == 2)
// { description = "pair of ${x.description}"; };

floatBetween = a: b: with lib; with types;
addCheck str (x: versionAtLeast x a && versionOlder x b);
floatBetween = a: b: with types;
let
# toString prints floats with hardcoded high precision
floatToString = f: builtins.toJSON f;
in
addCheck float (x: x <= b && x >= a)
// { description = "a floating point number in " +
"range [${floatToString a}, ${floatToString b}]"; };

toConf = attrs: concatStringsSep "\n"
(mapAttrsToList
(k: v: let
sep = if isAttrs v then ":" else "=";
# Basically a tinkered lib.generators.mkKeyValueDefault
mkValueString = v:
if isBool v then boolToString v
else if isInt v then toString v
else if isFloat v then toString v
else if isString v then ''"${escape [ ''"'' ] v}"''
else if isList v then "[ "
+ concatMapStringsSep " , " mkValueString v
+ " ]"
else if isAttrs v then "{ "
+ concatStringsSep " "
(mapAttrsToList
(key: value: "${toString key}=${mkValueString value};")
v)
+ " }"
else abort "picom.mkValueString: unexpected type (v = ${v})";
in "${escape [ sep ] k}${sep}${mkValueString v};")
attrs);
mkDefaultAttrs = mapAttrs (n: v: mkDefault v);

# Basically a tinkered lib.generators.mkKeyValueDefault
# It either serializes a top-level definition "key: { values };"
# or an expression "key = { values };"
mkAttrsString = top:
mapAttrsToList (k: v:
let sep = if (top && isAttrs v) then ":" else "=";
in "${escape [ sep ] k}${sep}${mkValueString v};");

# This serializes a Nix expression to the libconfig format.
mkValueString = v:
if types.bool.check v then boolToString v
else if types.int.check v then toString v
else if types.float.check v then toString v
else if types.str.check v then "\"${escape [ "\"" ] v}\""
else if builtins.isList v then "[ ${concatMapStringsSep " , " mkValueString v} ]"
else if types.attrs.check v then "{ ${concatStringsSep " " (mkAttrsString false v) } }"
else throw ''
invalid expression used in option services.picom.settings:
${v}
'';

toConf = attrs: concatStringsSep "\n" (mkAttrsString true cfg.settings);

configFile = pkgs.writeText "picom.conf" (toConf cfg.settings);

@@ -61,7 +70,7 @@ in {
};

fadeDelta = mkOption {
type = types.addCheck types.int (x: x > 0);
type = types.ints.positive;
default = 10;
example = 5;
description = ''
@@ -70,12 +79,11 @@ in {
};

fadeSteps = mkOption {
type = pairOf (floatBetween "0.01" "1.01");
default = [ "0.028" "0.03" ];
example = [ "0.04" "0.04" ];
type = pairOf (floatBetween 0.01 1);
default = [ 0.028 0.03 ];
example = [ 0.04 0.04 ];
description = ''
Opacity change between fade steps (in and out).
(numbers in range 0.01 - 1.0)
'';
};

@@ -111,11 +119,11 @@ in {
};

shadowOpacity = mkOption {
type = floatBetween "0.0" "1.01";
default = "0.75";
example = "0.8";
type = floatBetween 0 1;
default = 0.75;
example = 0.8;
description = ''
Window shadows opacity (number in range 0.0 - 1.0).
Window shadows opacity.
'';
};

@@ -134,29 +142,29 @@ in {
};

activeOpacity = mkOption {
type = floatBetween "0.0" "1.01";
default = "1.0";
example = "0.8";
type = floatBetween 0 1;
default = 1.0;
example = 0.8;
description = ''
Opacity of active windows (number in range 0.0 - 1.0).
Opacity of active windows.
'';
};

inactiveOpacity = mkOption {
type = floatBetween "0.1" "1.01";
default = "1.0";
example = "0.8";
type = floatBetween 0.1 1;
default = 1.0;
example = 0.8;
description = ''
Opacity of inactive windows (number in range 0.1 - 1.0).
Opacity of inactive windows.
'';
};

menuOpacity = mkOption {
type = floatBetween "0.0" "1.01";
default = "1.0";
example = "0.8";
type = floatBetween 0 1;
default = 1.0;
example = 0.8;
description = ''
Opacity of dropdown and popup menu (number in range 0.0 - 1.0).
Opacity of dropdown and popup menu.
'';
};

@@ -210,7 +218,7 @@ in {
};

refreshRate = mkOption {
type = types.addCheck types.int (x: x >= 0);
type = types.ints.unsigned;
default = 0;
example = 60;
description = ''
@@ -218,54 +226,69 @@ in {
'';
};

settings = let
configTypes = with types; oneOf [ bool int float str ];
# types.loaOf converts lists to sets
loaOf = t: with types; either (listOf t) (attrsOf t);
settings = with types;
let
scalar = oneOf [ bool int float str ]
// { description = "scalar types"; };

libConfig = oneOf [ scalar (listOf libConfig) (attrsOf libConfig) ]
// { description = "libconfig type"; };

topLevel = attrsOf libConfig
// { description = ''
libconfig configuration. The format consists of an attributes
set (called a group) of settings. Each setting can be a scalar type
(boolean, integer, floating point number or string), a list of
scalars or a group itself
'';
};

in mkOption {
type = loaOf (types.either configTypes (loaOf (types.either configTypes (loaOf configTypes))));
default = {};
type = topLevel;
default = { };
example = literalExample ''
blur =
{ method = "gaussian";
size = 10;
deviation = 5.0;
};
'';
description = ''
Additional Picom configuration.
Picom settings. Use this option to configure Picom settings not exposed
in a NixOS option or to bypass one. For the available options see the
CONFIGURATION FILES section at <literal>picom(1)</literal>.
'';
};
};

config = mkIf cfg.enable {
services.picom.settings = let
# Hard conversion to float, literally lib.toInt but toFloat
toFloat = str: let
may_be_float = builtins.fromJSON str;
in if builtins.isFloat may_be_float
then may_be_float
else throw "Could not convert ${str} to float.";
in {
services.picom.settings = mkDefaultAttrs {
# fading
fading = mkDefault cfg.fade;
fade-delta = mkDefault cfg.fadeDelta;
fade-in-step = mkDefault (toFloat (elemAt cfg.fadeSteps 0));
fade-out-step = mkDefault (toFloat (elemAt cfg.fadeSteps 1));
fade-exclude = mkDefault cfg.fadeExclude;
fading = cfg.fade;
fade-delta = cfg.fadeDelta;
fade-in-step = elemAt cfg.fadeSteps 0;
fade-out-step = elemAt cfg.fadeSteps 1;
fade-exclude = cfg.fadeExclude;

# shadows
shadow = mkDefault cfg.shadow;
shadow-offset-x = mkDefault (elemAt cfg.shadowOffsets 0);
shadow-offset-y = mkDefault (elemAt cfg.shadowOffsets 1);
shadow-opacity = mkDefault (toFloat cfg.shadowOpacity);
shadow-exclude = mkDefault cfg.shadowExclude;
shadow = cfg.shadow;
shadow-offset-x = elemAt cfg.shadowOffsets 0;
shadow-offset-y = elemAt cfg.shadowOffsets 1;
shadow-opacity = cfg.shadowOpacity;
shadow-exclude = cfg.shadowExclude;

# opacity
active-opacity = mkDefault (toFloat cfg.activeOpacity);
inactive-opacity = mkDefault (toFloat cfg.inactiveOpacity);
active-opacity = cfg.activeOpacity;
inactive-opacity = cfg.inactiveOpacity;

wintypes = mkDefault cfg.wintypes;
wintypes = cfg.wintypes;

opacity-rule = mkDefault cfg.opacityRules;
opacity-rule = cfg.opacityRules;

# other options
backend = mkDefault cfg.backend;
vsync = mkDefault cfg.vSync;
refresh-rate = mkDefault cfg.refreshRate;
backend = cfg.backend;
vsync = cfg.vSync;
refresh-rate = cfg.refreshRate;
};

systemd.user.services.picom = {
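After this rewrite, the typed options (`fadeSteps`, `shadowOpacity`, and so on) take real floats instead of strings, and they only seed `services.picom.settings` with `mkDefault` values, so a user-supplied `settings` entry wins. A sketch, assuming the module as patched:

```nix
{
  services.picom = {
    enable = true;
    fadeSteps = [ 0.04 0.04 ];  # floats now, no longer strings
    # Free-form libconfig settings; these override the mkDefault'ed
    # values derived from the typed options above.
    settings.blur = {
      method = "gaussian";
      size = 10;
      deviation = 5.0;
    };
  };
}
```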
@@ -83,6 +83,12 @@ in
Authorized keys for the root user on initrd.
'';
};

extraConfig = mkOption {
type = types.lines;
default = "";
description = "Verbatim contents of <filename>sshd_config</filename>.";
};
};

imports =
@@ -126,6 +132,8 @@ in
'' else ''
UseDNS no
''}

${cfg.extraConfig}
'';
in mkIf (config.boot.initrd.network.enable && cfg.enable) {
assertions = [
@@ -138,6 +138,10 @@ in

users.users.resolved.group = "systemd-resolve";

# add resolve to nss hosts database if enabled and nscd enabled
# system.nssModules is configured in nixos/modules/system/boot/systemd.nix
system.nssDatabases.hosts = optional config.services.nscd.enable "resolve [!UNAVAIL=return]";

systemd.additionalUpstreamSystemUnits = [
"systemd-resolved.service"
];
@@ -405,6 +405,8 @@ let
"hibernate" "hybrid-sleep" "suspend-then-hibernate" "lock"
];

proxy_env = config.networking.proxy.envVars;

in

{
@@ -827,6 +829,27 @@ in

system.build.units = cfg.units;

# Systemd provides various NSS modules to look up dynamic users, locally
# configured IP addresses and local container hostnames.
# On NixOS, these can only be passed to the NSS system via nscd (and its
# LD_LIBRARY_PATH), which is why it's usually a very good idea to have nscd
# enabled (also see the config.nscd.enable description).
# While there is already an assertion in place complaining loudly about
# having nssModules configured and nscd disabled, for some reason we still
# check for nscd being enabled before adding to nssModules.
system.nssModules = optional config.services.nscd.enable systemd.out;
system.nssDatabases = mkIf config.services.nscd.enable {
hosts = (mkMerge [
[ "mymachines" ]
(mkOrder 1600 [ "myhostname" ] # 1600 to ensure it's always the last
)
]);
passwd = (mkMerge [
[ "mymachines" ]
(mkAfter [ "systemd" ])
]);
};

environment.systemPackages = [ systemd ];

environment.etc = let
@@ -1035,6 +1058,7 @@ in
systemd.targets.remote-fs.unitConfig.X-StopOnReconfiguration = true;
systemd.targets.network-online.wantedBy = [ "multi-user.target" ];
systemd.services.systemd-binfmt.wants = [ "proc-sys-fs-binfmt_misc.mount" ];
systemd.services.systemd-importd.environment = proxy_env;

# Don't bother with certain units in containers.
systemd.services.systemd-remount-fs.unitConfig.ConditionVirtualization = "!container";
@@ -4,6 +4,11 @@ with lib;

let
cfg = config.virtualisation.cri-o;

# Copy configuration files to avoid having the entire sources in the system closure
copyFile = filePath: pkgs.runCommandNoCC (builtins.unsafeDiscardStringContext (builtins.baseNameOf filePath)) {} ''
cp ${filePath} $out
'';
in
{
imports = [
@@ -45,9 +50,9 @@ in
config = mkIf cfg.enable {
environment.systemPackages = with pkgs;
[ cri-o cri-tools conmon iptables runc utillinux ];
environment.etc."crictl.yaml".text = ''
runtime-endpoint: unix:///var/run/crio/crio.sock
'';

environment.etc."crictl.yaml".source = copyFile "${pkgs.cri-o.src}/crictl.yaml";

environment.etc."crio/crio.conf".text = ''
[crio]
storage_driver = "${cfg.storageDriver}"
@@ -66,23 +71,7 @@ in
manage_network_ns_lifecycle = true
'';

environment.etc."cni/net.d/20-cri-o-bridge.conf".text = ''
{
"cniVersion": "0.3.1",
"name": "crio-bridge",
"type": "bridge",
"bridge": "cni0",
"isGateway": true,
"ipMasq": true,
"ipam": {
"type": "host-local",
"subnet": "10.88.0.0/16",
"routes": [
{ "dst": "0.0.0.0/0" }
]
}
}
'';
environment.etc."cni/net.d/10-crio-bridge.conf".source = copyFile "${pkgs.cri-o.src}/contrib/cni/10-crio-bridge.conf";

# Enable common /etc/containers configuration
virtualisation.containers.enable = true;
@ -1,17 +1,20 @@
|
||||
{ config, lib, pkgs, ... }:
|
||||
{ config, options, lib, pkgs, ... }:
|
||||
|
||||
with lib;
|
||||
let
|
||||
cfg = config.docker-containers;
|
||||
cfg = config.virtualisation.oci-containers;
|
||||
proxy_env = config.networking.proxy.envVars;
|
||||
|
||||
dockerContainer =
|
||||
defaultBackend = options.virtualisation.oci-containers.backend.default;
|
||||
|
||||
containerOptions =
|
||||
{ ... }: {
|
||||
|
||||
options = {
|
||||
|
||||
image = mkOption {
|
||||
type = with types; str;
|
||||
description = "Docker image to run.";
|
||||
description = "OCI image to run.";
|
||||
example = "library/hello-world";
|
||||
};
|
||||
|
||||
@ -58,18 +61,19 @@ let
|
||||
|
||||
log-driver = mkOption {
|
||||
type = types.str;
|
||||
default = "none";
|
||||
default = "journald";
|
||||
description = ''
|
||||
Logging driver for the container. The default of
|
||||
<literal>"none"</literal> means that the container's logs will be
|
||||
handled as part of the systemd unit. Setting this to
|
||||
<literal>"journald"</literal> will result in duplicate logging, but
|
||||
the container's logs will be visible to the <command>docker
|
||||
logs</command> command.
|
||||
<literal>"journald"</literal> means that the container's logs will be
|
||||
handled as part of the systemd unit.
|
||||
|
||||
For more details and a full list of logging drivers, refer to the
|
||||
<link xlink:href="https://docs.docker.com/engine/reference/run/#logging-drivers---log-driver">
|
||||
Docker engine documentation</link>
|
||||
For more details and a full list of logging drivers, refer to respective backends documentation.
|
||||
|
||||
For Docker:
|
||||
<link xlink:href="https://docs.docker.com/engine/reference/run/#logging-drivers---log-driver">Docker engine documentation</link>
|
||||
|
||||
For Podman:
|
||||
Refer to the docker-run(1) man page.
|
||||
'';
|
||||
};
|
||||
|
||||
@ -172,10 +176,10 @@ let
|
||||
description = ''
|
||||
Define which other containers this one depends on. They will be added to both After and Requires for the unit.
|
||||
|
||||
Use the same name as the attribute under <literal>services.docker-containers</literal>.
|
||||
Use the same name as the attribute under <literal>virtualisation.oci-containers</literal>.
|
||||
'';
|
||||
example = literalExample ''
|
||||
services.docker-containers = {
|
||||
virtualisation.oci-containers = {
|
||||
node1 = {};
|
||||
node2 = {
|
||||
dependsOn = [ "node1" ];
|
||||
@ -184,10 +188,10 @@ let
|
||||
'';
|
||||
};
|
||||
|
||||
extraDockerOptions = mkOption {
|
||||
extraOptions = mkOption {
|
||||
type = with types; listOf str;
|
||||
default = [];
|
||||
description = "Extra options for <command>docker run</command>.";
|
||||
description = "Extra options for <command>${defaultBackend} run</command>.";
|
||||
example = literalExample ''
|
||||
["--network=host"]
|
||||
'';
|
||||
@ -205,24 +209,31 @@ let
|
||||
};
|
||||
|
||||
mkService = name: container: let
|
||||
mkAfter = map (x: "docker-${x}.service") container.dependsOn;
|
||||
in rec {
|
||||
dependsOn = map (x: "${cfg.backend}-${x}.service") container.dependsOn;
|
||||
in {
|
||||
wantedBy = [] ++ optional (container.autoStart) "multi-user.target";
|
||||
after = [ "docker.service" "docker.socket" ] ++ mkAfter;
|
||||
requires = after;
|
||||
path = [ pkgs.docker ];
|
||||
after = lib.optionals (cfg.backend == "docker") [ "docker.service" "docker.socket" ] ++ dependsOn;
|
||||
requires = dependsOn;
|
||||
environment = proxy_env;
|
||||
|
||||
path =
|
||||
if cfg.backend == "docker" then [ pkgs.docker ]
|
||||
else if cfg.backend == "podman" then [ config.virtualisation.podman.package ]
|
||||
else throw "Unhandled backend: ${cfg.backend}";
|
||||
|
||||
preStart = ''
|
||||
docker rm -f ${name} || true
|
||||
${cfg.backend} rm -f ${name} || true
|
||||
${optionalString (container.imageFile != null) ''
|
||||
docker load -i ${container.imageFile}
|
||||
${cfg.backend} load -i ${container.imageFile}
|
||||
''}
|
||||
'';
|
||||
postStop = "docker rm -f ${name} || true";
|
||||
|
||||
postStop = "${cfg.backend} rm -f ${name} || true";
|
||||
|
||||
serviceConfig = {
|
||||
StandardOutput = "null";
|
||||
StandardError = "null";
|
||||
ExecStart = concatStringsSep " \\\n " ([
|
||||
"${pkgs.docker}/bin/docker run"
|
||||
"${config.system.path}/bin/${cfg.backend} run"
|
||||
"--rm"
|
||||
"--name=${name}"
|
||||
"--log-driver=${container.log-driver}"
|
||||
@ -233,12 +244,12 @@ let
|
||||
++ optional (container.user != null) "-u ${escapeShellArg container.user}"
|
||||
++ map (v: "-v ${escapeShellArg v}") container.volumes
|
||||
++ optional (container.workdir != null) "-w ${escapeShellArg container.workdir}"
|
||||
++ map escapeShellArg container.extraDockerOptions
|
||||
++ map escapeShellArg container.extraOptions
|
||||
++ [container.image]
|
||||
++ map escapeShellArg container.cmd
|
||||
);

ExecStop = ''${pkgs.bash}/bin/sh -c "[ $SERVICE_RESULT = success ] || docker stop ${name}"'';
ExecStop = ''${pkgs.bash}/bin/sh -c "[ $SERVICE_RESULT = success ] || ${cfg.backend} stop ${name}"'';

### There is no generalized way of supporting `reload` for docker
### containers. Some containers may respond well to SIGHUP sent to their
@@ -263,19 +274,50 @@ let
};

in {
imports = [
(
lib.mkChangedOptionModule
[ "docker-containers" ]
[ "virtualisation" "oci-containers" ]
(oldcfg: {
backend = "docker";
containers = lib.mapAttrs (n: v: builtins.removeAttrs (v // {
extraOptions = v.extraDockerOptions or [];
}) [ "extraDockerOptions" ]) oldcfg.docker-containers;
})
)
];

options.docker-containers = mkOption {
default = {};
type = types.attrsOf (types.submodule dockerContainer);
description = "Docker containers to run as systemd services.";
};
options.virtualisation.oci-containers = {

config = mkIf (cfg != {}) {
backend = mkOption {
type = types.enum [ "podman" "docker" ];
default =
# TODO: Once https://github.com/NixOS/nixpkgs/issues/77925 is resolved default to podman
# if versionAtLeast config.system.stateVersion "20.09" then "podman"
# else "docker";
"docker";
description = "The underlying Docker implementation to use.";
};

systemd.services = mapAttrs' (n: v: nameValuePair "docker-${n}" (mkService n v)) cfg;

virtualisation.docker.enable = true;
containers = mkOption {
default = {};
type = types.attrsOf (types.submodule containerOptions);
description = "OCI (Docker) containers to run as systemd services.";
};

};

config = lib.mkIf (cfg.containers != {}) (lib.mkMerge [
{
systemd.services = mapAttrs' (n: v: nameValuePair "${cfg.backend}-${n}" (mkService n v)) cfg.containers;
}
(lib.mkIf (cfg.backend == "podman") {
virtualisation.podman.enable = true;
})
(lib.mkIf (cfg.backend == "docker") {
virtualisation.docker.enable = true;
})
]);

}
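The `lib.mkChangedOptionModule` call above migrates old `docker-containers.<name>` definitions into `virtualisation.oci-containers.containers.<name>`, renaming `extraDockerOptions` to `extraOptions` and pinning the backend to `docker`. A sketch of that attribute rewrite in plain Python (hypothetical helper, for illustration only):

```python
# Sketch of the option migration performed by mkChangedOptionModule above:
# each container keeps its settings, extraDockerOptions is renamed to
# extraOptions, and the backend is pinned to "docker" for migrated configs.
def migrate_docker_containers(old):
    containers = {}
    for name, opts in old.items():
        opts = dict(opts)  # copy, so the old definition is untouched
        opts["extraOptions"] = opts.pop("extraDockerOptions", [])
        containers[name] = opts
    return {"backend": "docker", "containers": containers}

new_cfg = migrate_docker_containers(
    {"nginx": {"image": "nginx-container", "extraDockerOptions": ["--net=host"]}}
)
print(new_cfg["containers"]["nginx"]["extraOptions"])  # → ['--net=host']
```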
@@ -8,13 +8,11 @@ let

# Provides a fake "docker" binary mapping to podman
dockerCompat = pkgs.runCommandNoCC "${podmanPackage.pname}-docker-compat-${podmanPackage.version}" {
outputs = [ "out" "bin" "man" ];
outputs = [ "out" "man" ];
inherit (podmanPackage) meta;
} ''
mkdir $out

mkdir -p $bin/bin
ln -s ${podmanPackage.bin}/bin/podman $bin/bin/docker
mkdir -p $out/bin
ln -s ${podmanPackage}/bin/podman $out/bin/docker

mkdir -p $man/share/man/man1
for f in ${podmanPackage.man}/share/man/man1/*; do
@@ -88,11 +86,21 @@ in
};
};

package = lib.mkOption {
type = types.package;
default = podmanPackage;
internal = true;
description = ''
The final Podman package (including extra packages).
'';
};

};

config = lib.mkIf cfg.enable {

environment.systemPackages = [ podmanPackage ]
environment.systemPackages = [ cfg.package ]
++ lib.optional cfg.dockerCompat dockerCompat;

environment.etc."containers/libpod.conf".text = ''
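The `dockerCompat` derivation above boils down to exposing the podman binary under the name `docker` via a symlink. A stand-alone demo of the same trick (paths and the stub script are made up for the demo; on NixOS the link target is the podman package's binary):

```python
import os, subprocess, tempfile

# Demo of the docker->podman compatibility shim built above: a "docker"
# symlink that resolves to the podman binary. A stub shell script stands
# in for the real podman here.
demo = tempfile.mkdtemp()
stub = os.path.join(demo, "podman")
with open(stub, "w") as f:
    f.write('#!/bin/sh\necho "podman $@"\n')
os.chmod(stub, 0o755)
os.makedirs(os.path.join(demo, "bin"))
docker = os.path.join(demo, "bin", "docker")
os.symlink(stub, docker)

# Invoking "docker" now runs the (stub) podman binary.
out = subprocess.run([docker, "ps"], capture_output=True, text=True).stdout
print(out.strip())  # → podman ps
```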
@@ -189,9 +189,18 @@ let
mkdir /boot/grub
echo '(hd0) /dev/vda' > /boot/grub/device.map

# Install GRUB and generate the GRUB boot menu.
touch /etc/NIXOS
# This is needed for systemd-boot to find ESP, and udev is not available here to create this
mkdir -p /dev/block
ln -s /dev/vda2 /dev/block/254:2

# Set up system profile (normally done by nixos-rebuild / nix-env --set)
mkdir -p /nix/var/nix/profiles
ln -s ${config.system.build.toplevel} /nix/var/nix/profiles/system-1-link
ln -s /nix/var/nix/profiles/system-1-link /nix/var/nix/profiles/system

# Install bootloader
touch /etc/NIXOS
export NIXOS_INSTALL_BOOTLOADER=1
${config.system.build.toplevel}/bin/switch-to-configuration boot

umount /boot

@@ -24,7 +24,6 @@ in
_3proxy = handleTest ./3proxy.nix {};
acme = handleTest ./acme.nix {};
atd = handleTest ./atd.nix {};
automysqlbackup = handleTest ./automysqlbackup.nix {};
avahi = handleTest ./avahi.nix {};
babeld = handleTest ./babeld.nix {};
bcachefs = handleTestOn ["x86_64-linux"] ./bcachefs.nix {}; # linux-4.18.2018.10.12 is unsupported on aarch64
@@ -70,7 +69,7 @@ in
dhparams = handleTest ./dhparams.nix {};
dnscrypt-proxy2 = handleTestOn ["x86_64-linux"] ./dnscrypt-proxy2.nix {};
docker = handleTestOn ["x86_64-linux"] ./docker.nix {};
docker-containers = handleTestOn ["x86_64-linux"] ./docker-containers.nix {};
oci-containers = handleTestOn ["x86_64-linux"] ./oci-containers.nix {};
docker-edge = handleTestOn ["x86_64-linux"] ./docker-edge.nix {};
docker-preloader = handleTestOn ["x86_64-linux"] ./docker-preloader.nix {};
docker-registry = handleTest ./docker-registry.nix {};
@@ -85,6 +84,7 @@ in
ecryptfs = handleTest ./ecryptfs.nix {};
ejabberd = handleTest ./xmpp/ejabberd.nix {};
elk = handleTestOn ["x86_64-linux"] ./elk.nix {};
enlightenment = handleTest ./enlightenment.nix {};
env = handleTest ./env.nix {};
etcd = handleTestOn ["x86_64-linux"] ./etcd.nix {};
etcd-cluster = handleTestOn ["x86_64-linux"] ./etcd-cluster.nix {};
@@ -143,6 +143,7 @@ in
initrdNetwork = handleTest ./initrd-network.nix {};
installer = handleTest ./installer.nix {};
iodine = handleTest ./iodine.nix {};
ipfs = handleTest ./ipfs.nix {};
ipv6 = handleTest ./ipv6.nix {};
jackett = handleTest ./jackett.nix {};
jellyfin = handleTest ./jellyfin.nix {};
@@ -164,7 +165,6 @@ in
kubernetes.rbac = handleTestOn ["x86_64-linux"] ./kubernetes/rbac.nix {};
latestKernel.hardened = handleTest ./hardened.nix { latestKernel = true; };
latestKernel.login = handleTest ./login.nix { latestKernel = true; };
ldap = handleTest ./ldap.nix {};
leaps = handleTest ./leaps.nix {};
lidarr = handleTest ./lidarr.nix {};
lightdm = handleTest ./lightdm.nix {};
@@ -176,6 +176,8 @@ in
magnetico = handleTest ./magnetico.nix {};
magic-wormhole-mailbox-server = handleTest ./magic-wormhole-mailbox-server.nix {};
mailcatcher = handleTest ./mailcatcher.nix {};
mariadb-galera-mariabackup = handleTest ./mysql/mariadb-galera-mariabackup.nix {};
mariadb-galera-rsync = handleTest ./mysql/mariadb-galera-rsync.nix {};
mathics = handleTest ./mathics.nix {};
matomo = handleTest ./matomo.nix {};
matrix-synapse = handleTest ./matrix-synapse.nix {};
@@ -197,9 +199,10 @@ in
munin = handleTest ./munin.nix {};
mutableUsers = handleTest ./mutable-users.nix {};
mxisd = handleTest ./mxisd.nix {};
mysql = handleTest ./mysql.nix {};
mysqlBackup = handleTest ./mysql-backup.nix {};
mysqlReplication = handleTest ./mysql-replication.nix {};
mysql = handleTest ./mysql/mysql.nix {};
mysql-autobackup = handleTest ./mysql/mysql-autobackup.nix {};
mysql-backup = handleTest ./mysql/mysql-backup.nix {};
mysql-replication = handleTest ./mysql/mysql-replication.nix {};
nagios = handleTest ./nagios.nix {};
nat.firewall = handleTest ./nat.nix { withFirewall = true; };
nat.firewall-conntrack = handleTest ./nat.nix { withFirewall = true; withConntrackHelpers = true; };
@@ -297,6 +300,7 @@ in
syncthing-relay = handleTest ./syncthing-relay.nix {};
systemd = handleTest ./systemd.nix {};
systemd-analyze = handleTest ./systemd-analyze.nix {};
systemd-boot = handleTestOn ["x86_64-linux"] ./systemd-boot.nix {};
systemd-confinement = handleTest ./systemd-confinement.nix {};
systemd-timesyncd = handleTest ./systemd-timesyncd.nix {};
systemd-networkd-vrf = handleTest ./systemd-networkd-vrf.nix {};
@@ -320,6 +324,7 @@ in
trickster = handleTest ./trickster.nix {};
tuptime = handleTest ./tuptime.nix {};
udisks2 = handleTest ./udisks2.nix {};
unit-php = handleTest ./web-servers/unit-php.nix {};
upnp = handleTest ./upnp.nix {};
uwsgi = handleTest ./uwsgi.nix {};
vault = handleTest ./vault.nix {};
@@ -37,7 +37,7 @@ mapAttrs (channel: chromiumPkg: makeTest rec {
</head>
<body onload="javascript:document.title='startup done'">
<img src="file://${pkgs.fetchurl {
url = "http://nixos.org/logo/nixos-hex.svg";
url = "https://nixos.org/logo/nixos-hex.svg";
sha256 = "07ymq6nw8kc22m7kzxjxldhiq8gzmc7f45kq2bvhbdm0w5s112s4";
}}" />
</body>
@@ -1,27 +0,0 @@
# Test Docker containers as systemd units

import ./make-test-python.nix ({ pkgs, lib, ... }: {
name = "docker-containers";
meta = {
maintainers = with lib.maintainers; [ benley mkaito ];
};

nodes = {
docker = { pkgs, ... }: {
virtualisation.docker.enable = true;

docker-containers.nginx = {
image = "nginx-container";
imageFile = pkgs.dockerTools.examples.nginx;
ports = ["8181:80"];
};
};
};

testScript = ''
start_all()
docker.wait_for_unit("docker-nginx.service")
docker.wait_for_open_port(8181)
docker.wait_until_succeeds("curl http://localhost:8181 | grep Hello")
'';
})
@@ -5,7 +5,7 @@ let
name = "bootstrap3";
# Download the theme from the dokuwiki site
src = pkgs.fetchurl {
url = https://github.com/giterlizzi/dokuwiki-template-bootstrap3/archive/v2019-05-22.zip;
url = "https://github.com/giterlizzi/dokuwiki-template-bootstrap3/archive/v2019-05-22.zip";
sha256 = "4de5ff31d54dd61bbccaf092c9e74c1af3a4c53e07aa59f60457a8f00cfb23a6";
};
# We need unzip to build this package
@@ -20,7 +20,7 @@ let
name = "icalevents";
# Download the plugin from the dokuwiki site
src = pkgs.fetchurl {
url = https://github.com/real-or-random/dokuwiki-plugin-icalevents/releases/download/2017-06-16/dokuwiki-plugin-icalevents-2017-06-16.zip;
url = "https://github.com/real-or-random/dokuwiki-plugin-icalevents/releases/download/2017-06-16/dokuwiki-plugin-icalevents-2017-06-16.zip";
sha256 = "e40ed7dd6bbe7fe3363bbbecb4de481d5e42385b5a0f62f6a6ce6bf3a1f9dfa8";
};
# We need unzip to build this package
@@ -108,7 +108,7 @@ in {
inherit image;
sshPublicKey = snakeOilPublicKey;

# ### http://nixos.org/channels/nixos-unstable nixos
# ### https://nixos.org/channels/nixos-unstable nixos
userData = ''
{ pkgs, ... }:
101
nixos/tests/enlightenment.nix
Normal file
@@ -0,0 +1,101 @@
import ./make-test-python.nix ({ pkgs, ...} :
{
name = "enlightenment";

meta = with pkgs.stdenv.lib.maintainers; {
maintainers = [ romildo ];
};

machine = { ... }:
{
imports = [ ./common/user-account.nix ];
services.xserver.enable = true;
services.xserver.desktopManager.enlightenment.enable = true;
services.xserver.displayManager.lightdm = {
enable = true;
autoLogin = {
enable = true;
user = "alice";
};
};
hardware.pulseaudio.enable = true; # needed for the getfacl test; /dev/snd/* exists without it, but udev doesn't care then
virtualisation.memorySize = 1024;
environment.systemPackages = [ pkgs.xdotool ];
services.acpid.enable = true;
services.connman.enable = true;
services.connman.package = pkgs.connmanMinimal;
};

enableOCR = true;

testScript = { nodes, ... }: let
user = nodes.machine.config.users.users.alice;
in ''
with subtest("Ensure x starts"):
machine.wait_for_x()
machine.wait_for_file("${user.home}/.Xauthority")
machine.succeed("xauth merge ${user.home}/.Xauthority")

with subtest("Check that logging in has given the user ownership of devices"):
machine.succeed("getfacl -p /dev/snd/timer | grep -q ${user.name}")

with subtest("First time wizard"):
machine.wait_for_text("Default") # Language
machine.succeed("xdotool mousemove 512 185 click 1") # Default Language
machine.screenshot("wizard1")
machine.succeed("xdotool mousemove 512 740 click 1") # Next

machine.wait_for_text("English") # Keyboard (default)
machine.screenshot("wizard2")
machine.succeed("xdotool mousemove 512 740 click 1") # Next

machine.wait_for_text("Standard") # Profile (default)
machine.screenshot("wizard3")
machine.succeed("xdotool mousemove 512 740 click 1") # Next

machine.wait_for_text("Title") # Sizing (default)
machine.screenshot("wizard4")
machine.succeed("xdotool mousemove 512 740 click 1") # Next

machine.wait_for_text("clicked") # Window Focus
machine.succeed("xdotool mousemove 512 370 click 1") # Click
machine.screenshot("wizard5")
machine.succeed("xdotool mousemove 512 740 click 1") # Next

machine.wait_for_text("bindings") # Mouse Modifiers (default)
machine.screenshot("wizard6")
machine.succeed("xdotool mousemove 512 740 click 1") # Next

machine.wait_for_text("Connman") # Network Management (default)
machine.screenshot("wizard7")
machine.succeed("xdotool mousemove 512 740 click 1") # Next

machine.wait_for_text("BlusZ") # Bluetooth Management (default)
machine.screenshot("wizard8")
machine.succeed("xdotool mousemove 512 740 click 1") # Next

machine.wait_for_text("Compositing") # Compositing (default)
machine.screenshot("wizard9")
machine.succeed("xdotool mousemove 512 740 click 1") # Next

machine.wait_for_text("update") # Updates
machine.succeed("xdotool mousemove 512 495 click 1") # Disable
machine.screenshot("wizard10")
machine.succeed("xdotool mousemove 512 740 click 1") # Next

machine.wait_for_text("taskbar") # Taskbar
machine.succeed("xdotool mousemove 480 410 click 1") # Enable
machine.screenshot("wizard11")
machine.succeed("xdotool mousemove 512 740 click 1") # Next

machine.wait_for_text("Home") # The desktop
machine.screenshot("wizard12")

with subtest("Run Terminology"):
machine.succeed("terminology &")
machine.sleep(5)
machine.send_chars("ls --color -alF\n")
machine.sleep(2)
machine.screenshot("terminology")
'';
})
@@ -1,55 +1,25 @@

import ./make-test.nix ({ pkgs, ...} : {
import ./make-test-python.nix ({ pkgs, ...} : {
name = "ipfs";
meta = with pkgs.stdenv.lib.maintainers; {
maintainers = [ mguentner ];
};

nodes = {
adder =
{ ... }:
{
services.ipfs = {
enable = true;
defaultMode = "norouting";
gatewayAddress = "/ip4/127.0.0.1/tcp/2323";
apiAddress = "/ip4/127.0.0.1/tcp/2324";
};
networking.firewall.allowedTCPPorts = [ 4001 ];
};
getter =
{ ... }:
{
services.ipfs = {
enable = true;
defaultMode = "norouting";
autoMount = true;
};
networking.firewall.allowedTCPPorts = [ 4001 ];
};
nodes.machine = { ... }: {
services.ipfs = {
enable = true;
apiAddress = "/ip4/127.0.0.1/tcp/2324";
};
};

testScript = ''
startAll;
$adder->waitForUnit("ipfs-norouting");
$getter->waitForUnit("ipfs-norouting");
start_all()
machine.wait_for_unit("ipfs")

# wait until api is available
$adder->waitUntilSucceeds("ipfs --api /ip4/127.0.0.1/tcp/2324 id");
my $addrId = $adder->succeed("ipfs --api /ip4/127.0.0.1/tcp/2324 id -f=\"<id>\"");
my $addrIp = (split /[ \/]+/, $adder->succeed("ip -o -4 addr show dev eth1"))[3];
machine.wait_until_succeeds("ipfs --api /ip4/127.0.0.1/tcp/2324 id")
ipfs_hash = machine.succeed(
"echo fnord | ipfs --api /ip4/127.0.0.1/tcp/2324 add | awk '{ print $2 }'"
)

$adder->mustSucceed("[ -n \"\$(ipfs --api /ip4/127.0.0.1/tcp/2324 config Addresses.Gateway | grep /ip4/127.0.0.1/tcp/2323)\" ]");

# wait until api is available
$getter->waitUntilSucceeds("ipfs --api /ip4/127.0.0.1/tcp/5001 id");
my $ipfsHash = $adder->mustSucceed("echo fnord | ipfs --api /ip4/127.0.0.1/tcp/2324 add | cut -d' ' -f2");
chomp($ipfsHash);

$adder->mustSucceed("[ -n \"\$(echo fnord | ipfs --api /ip4/127.0.0.1/tcp/2324 add | grep added)\" ]");

$getter->mustSucceed("ipfs --api /ip4/127.0.0.1/tcp/5001 swarm connect /ip4/$addrIp/tcp/4001/ipfs/$addrId");
$getter->mustSucceed("[ -n \"\$(ipfs --api /ip4/127.0.0.1/tcp/5001 cat /ipfs/$ipfsHash | grep fnord)\" ]");
$getter->mustSucceed("[ -n \"$(cat /ipfs/$ipfsHash | grep fnord)\" ]");
'';
machine.succeed(f"ipfs cat /ipfs/{ipfs_hash.strip()} | grep fnord")
'';
})
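The rewritten test extracts the content hash from `ipfs add` output with `awk '{ print $2 }'` rather than the old `cut -d' ' -f2`; both pick the second field of a line like `added <hash> <name>`, but awk splits on runs of whitespace, which is the more robust choice. A quick stand-alone check of that field extraction (the hash value is made up):

```python
# "ipfs add" prints lines like "added <hash> <name>"; str.split(), like
# awk's default field splitting, breaks the line on runs of whitespace,
# so the second field is the content hash.
line = "added QmExampleHash fnord"
ipfs_hash = line.split()[1]
print(ipfs_hash)  # → QmExampleHash
```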
@@ -1,405 +0,0 @@
import ./make-test-python.nix ({ pkgs, lib, ...} :

let
unlines = lib.concatStringsSep "\n";
unlinesAttrs = f: as: unlines (lib.mapAttrsToList f as);

dbDomain = "example.com";
dbSuffix = "dc=example,dc=com";
dbAdminDn = "cn=admin,${dbSuffix}";
dbAdminPwd = "admin-password";
# NOTE: slappasswd -h "{SSHA}" -s '${dbAdminPwd}'
dbAdminPwdHash = "{SSHA}i7FopSzkFQMrHzDMB1vrtkI0rBnwouP8";
ldapUser = "test-ldap-user";
ldapUserId = 10000;
ldapUserPwd = "user-password";
# NOTE: slappasswd -h "{SSHA}" -s '${ldapUserPwd}'
ldapUserPwdHash = "{SSHA}v12XICMZNGT6r2KJ26rIkN8Vvvp4QX6i";
ldapGroup = "test-ldap-group";
ldapGroupId = 10000;

mkClient = useDaemon:
{ lib, ... }:
{
virtualisation.memorySize = 256;
virtualisation.vlans = [ 1 ];
security.pam.services.su.rootOK = lib.mkForce false;
users.ldap.enable = true;
users.ldap.daemon = {
enable = useDaemon;
rootpwmoddn = "cn=admin,${dbSuffix}";
rootpwmodpwFile = "/etc/nslcd.rootpwmodpw";
};
users.ldap.loginPam = true;
users.ldap.nsswitch = true;
users.ldap.server = "ldap://server";
users.ldap.base = "ou=posix,${dbSuffix}";
users.ldap.bind = {
distinguishedName = "cn=admin,${dbSuffix}";
passwordFile = "/etc/ldap/bind.password";
};
# NOTE: passwords stored in clear in Nix's store, but this is a test.
environment.etc."ldap/bind.password".source = pkgs.writeText "password" dbAdminPwd;
environment.etc."nslcd.rootpwmodpw".source = pkgs.writeText "rootpwmodpw" dbAdminPwd;
};
in

{
name = "ldap";
meta = with pkgs.stdenv.lib.maintainers; {
maintainers = [ montag451 ];
};

nodes = {

server =
{ pkgs, config, ... }:
let
inherit (config.services) openldap;

slapdConfig = pkgs.writeText "cn=config.ldif" (''
dn: cn=config
objectClass: olcGlobal
#olcPidFile: /run/slapd/slapd.pid
# List of arguments that were passed to the server
#olcArgsFile: /run/slapd/slapd.args
# Read slapd-config(5) for possible values
olcLogLevel: none
# The tool-threads parameter sets the actual amount of CPU's
# that is used for indexing.
olcToolThreads: 1

dn: olcDatabase={-1}frontend,cn=config
objectClass: olcDatabaseConfig
objectClass: olcFrontendConfig
# The maximum number of entries that is returned for a search operation
olcSizeLimit: 500
# Allow unlimited access to local connection from the local root user
olcAccess: to *
  by dn.exact=gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth manage
  by * break
# Allow unauthenticated read access for schema and base DN autodiscovery
olcAccess: to dn.exact=""
  by * read
olcAccess: to dn.base="cn=Subschema"
  by * read

dn: olcDatabase=config,cn=config
objectClass: olcDatabaseConfig
olcRootDN: cn=admin,cn=config
#olcRootPW:
# NOTE: access to cn=config, system root can be manager
# with SASL mechanism (-Y EXTERNAL) over unix socket (-H ldapi://)
olcAccess: to *
  by dn.exact="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" manage
  by * break

dn: cn=schema,cn=config
objectClass: olcSchemaConfig

include: file://${pkgs.openldap}/etc/schema/core.ldif
include: file://${pkgs.openldap}/etc/schema/cosine.ldif
include: file://${pkgs.openldap}/etc/schema/nis.ldif
include: file://${pkgs.openldap}/etc/schema/inetorgperson.ldif

dn: cn=module{0},cn=config
objectClass: olcModuleList
# Where the dynamically loaded modules are stored
#olcModulePath: /usr/lib/ldap
olcModuleLoad: back_mdb

''
+ unlinesAttrs (olcSuffix: {conf, ...}:
"include: file://" + pkgs.writeText "config.ldif" conf
) slapdDatabases
);

slapdDatabases = {
${dbSuffix} = {
conf = ''
dn: olcBackend={1}mdb,cn=config
objectClass: olcBackendConfig

dn: olcDatabase={1}mdb,cn=config
olcSuffix: ${dbSuffix}
olcDbDirectory: ${openldap.dataDir}/${dbSuffix}
objectClass: olcDatabaseConfig
objectClass: olcMdbConfig
# NOTE: checkpoint the database periodically in case of system failure
# and to speed up slapd shutdown.
olcDbCheckpoint: 512 30
# Database max size is 1G
olcDbMaxSize: 1073741824
olcLastMod: TRUE
# NOTE: database superuser. Needed for syncrepl,
# and used to auth as admin through a TCP connection.
olcRootDN: cn=admin,${dbSuffix}
olcRootPW: ${dbAdminPwdHash}
#
olcDbIndex: objectClass eq
olcDbIndex: cn,uid eq
olcDbIndex: uidNumber,gidNumber eq
olcDbIndex: member,memberUid eq
#
olcAccess: to attrs=userPassword
  by self write
  by anonymous auth
  by dn="cn=admin,${dbSuffix}" write
  by dn="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" write
  by * none
olcAccess: to attrs=shadowLastChange
  by self write
  by dn="cn=admin,${dbSuffix}" write
  by dn="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" write
  by * none
olcAccess: to dn.sub="ou=posix,${dbSuffix}"
  by self read
  by dn="cn=admin,${dbSuffix}" read
  by dn="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" read
olcAccess: to *
  by self read
  by * none
'';
data = ''
dn: ${dbSuffix}
objectClass: top
objectClass: dcObject
objectClass: organization
o: ${dbDomain}

dn: cn=admin,${dbSuffix}
objectClass: simpleSecurityObject
objectClass: organizationalRole
description: ${dbDomain} LDAP administrator
roleOccupant: ${dbSuffix}
userPassword: ${ldapUserPwdHash}

dn: ou=posix,${dbSuffix}
objectClass: top
objectClass: organizationalUnit

dn: ou=accounts,ou=posix,${dbSuffix}
objectClass: top
objectClass: organizationalUnit

dn: ou=groups,ou=posix,${dbSuffix}
objectClass: top
objectClass: organizationalUnit
''
+ lib.concatMapStrings posixAccount [
{ uid=ldapUser; uidNumber=ldapUserId; gidNumber=ldapGroupId; userPassword=ldapUserPwdHash; }
]
+ lib.concatMapStrings posixGroup [
{ gid=ldapGroup; gidNumber=ldapGroupId; members=[]; }
];
};
};

# NOTE: create a user account using the posixAccount objectClass.
posixAccount =
{ uid
, uidNumber ? null
, gidNumber ? null
, cn ? ""
, sn ? ""
, userPassword ? ""
, loginShell ? "/bin/sh"
}: ''

dn: uid=${uid},ou=accounts,ou=posix,${dbSuffix}
objectClass: person
objectClass: posixAccount
objectClass: shadowAccount
cn: ${cn}
gecos:
${if gidNumber == null then "#" else "gidNumber: ${toString gidNumber}"}
homeDirectory: /home/${uid}
loginShell: ${loginShell}
sn: ${sn}
${if uidNumber == null then "#" else "uidNumber: ${toString uidNumber}"}
${if userPassword == "" then "#" else "userPassword: ${userPassword}"}
'';

# NOTE: create a group using the posixGroup objectClass.
posixGroup =
{ gid
, gidNumber
, members
}: ''

dn: cn=${gid},ou=groups,ou=posix,${dbSuffix}
objectClass: top
objectClass: posixGroup
gidNumber: ${toString gidNumber}
${lib.concatMapStrings (member: "memberUid: ${member}\n") members}
'';
in
{
virtualisation.memorySize = 256;
virtualisation.vlans = [ 1 ];
networking.firewall.allowedTCPPorts = [ 389 ];
services.openldap.enable = true;
services.openldap.dataDir = "/var/db/openldap";
services.openldap.configDir = "/var/db/slapd";
services.openldap.urlList = [
"ldap:///"
"ldapi:///"
];
systemd.services.openldap = {
preStart = ''
set -e
# NOTE: slapd's config is always re-initialized.
rm -rf "${openldap.configDir}"/cn=config \
"${openldap.configDir}"/cn=config.ldif
install -D -d -m 0700 -o "${openldap.user}" -g "${openldap.group}" "${openldap.configDir}"
# NOTE: olcDbDirectory must be created before adding the config.
'' +
unlinesAttrs (olcSuffix: {data, ...}: ''
# NOTE: database is always re-initialized.
rm -rf "${openldap.dataDir}/${olcSuffix}"
install -D -d -m 0700 -o "${openldap.user}" -g "${openldap.group}" \
"${openldap.dataDir}/${olcSuffix}"
'') slapdDatabases
+ ''
# NOTE: slapd is supposed to be stopped while in preStart,
# hence slap* commands can safely be used.
umask 0077
${pkgs.openldap}/bin/slapadd -n 0 \
-F "${openldap.configDir}" \
-l ${slapdConfig}
chown -R "${openldap.user}:${openldap.group}" "${openldap.configDir}"
# NOTE: slapadd(8): To populate the config database slapd-config(5),
# use -n 0 as it is always the first database.
# It must physically exist on the filesystem prior to this, however.
'' +
unlinesAttrs (olcSuffix: {data, ...}: ''
# NOTE: load database ${olcSuffix}
# (as root to avoid depending on sudo or chpst)
${pkgs.openldap}/bin/slapadd \
-F "${openldap.configDir}" \
-l ${pkgs.writeText "data.ldif" data}
'' + ''
# NOTE: redundant with default openldap's preStart, but does no harm.
chown -R "${openldap.user}:${openldap.group}" \
"${openldap.dataDir}/${olcSuffix}"
'') slapdDatabases;
};
};

client1 = mkClient true; # use nss_pam_ldapd
client2 = mkClient false; # use nss_ldap and pam_ldap
};

testScript = ''
def expect_script(*commands):
script = ";".join(commands)
return f"${pkgs.expect}/bin/expect -c '{script}'"

server.start()
server.wait_for_unit("default.target")

with subtest("slapd: auth as database admin with SASL and check a POSIX account"):
server.succeed(
'test "$(ldapsearch -LLL -H ldapi:// -Y EXTERNAL '
+ "-b 'uid=${ldapUser},ou=accounts,ou=posix,${dbSuffix}' "
+ "-s base uidNumber | "
+ "sed -ne 's/^uidNumber: \\(.*\\)/\\1/p')\" -eq ${toString ldapUserId}"
)

with subtest("slapd: auth as database admin with password and check a POSIX account"):
server.succeed(
"test \"$(ldapsearch -LLL -H ldap://server -D 'cn=admin,${dbSuffix}' "
+ "-w '${dbAdminPwd}' -b 'uid=${ldapUser},ou=accounts,ou=posix,${dbSuffix}' "
+ "-s base uidNumber | "
+ "sed -ne 's/^uidNumber: \\(.*\\)/\\1/p')\" -eq ${toString ldapUserId}"
)

client1.start()
client1.wait_for_unit("default.target")

with subtest("password: su with password to a POSIX account"):
client1.succeed(
expect_script(
'spawn su "${ldapUser}"',
'expect "Password:"',
'send "${ldapUserPwd}\n"',
'expect "*"',
'send "whoami\n"',
'expect -ex "${ldapUser}" {exit}',
"exit 1",
)
)

with subtest("password: change password of a POSIX account as root"):
client1.succeed(
"chpasswd <<<'${ldapUser}:new-password'",
expect_script(
'spawn su "${ldapUser}"',
'expect "Password:"',
'send "new-password\n"',
'expect "*"',
'send "whoami\n"',
'expect -ex "${ldapUser}" {exit}',
"exit 1",
),
"chpasswd <<<'${ldapUser}:${ldapUserPwd}'",
)

with subtest("password: change password of a POSIX account from itself"):
client1.succeed(
"chpasswd <<<'${ldapUser}:${ldapUserPwd}' ",
expect_script(
"spawn su --login ${ldapUser} -c passwd",
'expect "Password: "',
'send "${ldapUserPwd}\n"',
'expect "(current) UNIX password: "',
'send "${ldapUserPwd}\n"',
'expect "New password: "',
'send "new-password\n"',
'expect "Retype new password: "',
'send "new-password\n"',
'expect "passwd: password updated successfully" {exit}',
"exit 1",
),
expect_script(
'spawn su "${ldapUser}"',
'expect "Password:"',
'send "${ldapUserPwd}\n"',
'expect "su: Authentication failure" {exit}',
"exit 1",
),
expect_script(
'spawn su "${ldapUser}"',
'expect "Password:"',
'send "new-password\n"',
'expect "*"',
'send "whoami\n"',
'expect -ex "${ldapUser}" {exit}',
"exit 1",
),
"chpasswd <<<'${ldapUser}:${ldapUserPwd}'",
)

client2.start()
client2.wait_for_unit("default.target")

with subtest("NSS"):
client1.succeed(
"test \"$(id -u '${ldapUser}')\" -eq ${toString ldapUserId}",
"test \"$(id -u -n '${ldapUser}')\" = '${ldapUser}'",
"test \"$(id -g '${ldapUser}')\" -eq ${toString ldapGroupId}",
"test \"$(id -g -n '${ldapUser}')\" = '${ldapGroup}'",
"test \"$(id -u '${ldapUser}')\" -eq ${toString ldapUserId}",
"test \"$(id -u -n '${ldapUser}')\" = '${ldapUser}'",
"test \"$(id -g '${ldapUser}')\" -eq ${toString ldapGroupId}",
"test \"$(id -g -n '${ldapUser}')\" = '${ldapGroup}'",
)

with subtest("PAM"):
client1.succeed(
"echo ${ldapUserPwd} | su -l '${ldapUser}' -c true",
"echo ${ldapUserPwd} | su -l '${ldapUser}' -c true",
)
'';
})
@@ -44,7 +44,7 @@ in {

# Create a test bucket on the server
machine.succeed(
"mc config host add minio http://localhost:9000 ${accessKey} ${secretKey} S3v4"
"mc config host add minio http://localhost:9000 ${accessKey} ${secretKey} --api s3v4"
)
machine.succeed("mc mb minio/test-bucket")
machine.succeed("${minioPythonScript}")
223
nixos/tests/mysql/mariadb-galera-mariabackup.nix
Normal file
@ -0,0 +1,223 @@
import ./../make-test-python.nix ({ pkgs, ...} :

let
  mysqlenv-common = pkgs.buildEnv { name = "mysql-path-env-common"; pathsToLink = [ "/bin" ]; paths = with pkgs; [ bash gawk gnutar inetutils which ]; };
  mysqlenv-mariabackup = pkgs.buildEnv { name = "mysql-path-env-mariabackup"; pathsToLink = [ "/bin" ]; paths = with pkgs; [ gzip iproute netcat procps pv socat ]; };

in {
  name = "mariadb-galera-mariabackup";
  meta = with pkgs.stdenv.lib.maintainers; {
    maintainers = [ izorkin ];
  };

  # The test creates a Galera cluster with 3 nodes and is checking if mariabackup-based SST works. The cluster is tested by creating a DB and an empty table on one node,
  # and checking the table's presence on the other node.

  nodes = {
    galera_01 =
      { pkgs, ... }:
      {
        networking = {
          interfaces.eth1 = {
            ipv4.addresses = [
              { address = "192.168.1.1"; prefixLength = 24; }
            ];
          };
          extraHosts = ''
            192.168.1.1 galera_01
            192.168.1.2 galera_02
            192.168.1.3 galera_03
          '';
          firewall.allowedTCPPorts = [ 3306 4444 4567 4568 ];
          firewall.allowedUDPPorts = [ 4567 ];
        };
        users.users.testuser = { };
        systemd.services.mysql = with pkgs; {
          path = [ mysqlenv-common mysqlenv-mariabackup ];
        };
        services.mysql = {
          enable = true;
          package = pkgs.mariadb;
          ensureDatabases = [ "testdb" ];
          ensureUsers = [{
            name = "testuser";
            ensurePermissions = {
              "testdb.*" = "ALL PRIVILEGES";
            };
          }];
          initialScript = pkgs.writeText "mariadb-init.sql" ''
            GRANT ALL PRIVILEGES ON *.* TO 'check_repl'@'localhost' IDENTIFIED BY 'check_pass' WITH GRANT OPTION;
            FLUSH PRIVILEGES;
          '';
          settings = {
            mysqld = {
              bind_address = "0.0.0.0";
            };
            galera = {
              wsrep_on = "ON";
              wsrep_debug = "OFF";
              wsrep_retry_autocommit = "3";
              wsrep_provider = "${pkgs.mariadb-galera_25}/lib/galera/libgalera_smm.so";
              wsrep_cluster_address = "gcomm://";
              wsrep_cluster_name = "galera";
              wsrep_node_address = "192.168.1.1";
              wsrep_node_name = "galera_01";
              wsrep_sst_method = "mariabackup";
              wsrep_sst_auth = "check_repl:check_pass";
              binlog_format = "ROW";
              enforce_storage_engine = "InnoDB";
              innodb_autoinc_lock_mode = "2";
            };
          };
        };
      };

    galera_02 =
      { pkgs, ... }:
      {
        networking = {
          interfaces.eth1 = {
            ipv4.addresses = [
              { address = "192.168.1.2"; prefixLength = 24; }
            ];
          };
          extraHosts = ''
            192.168.1.1 galera_01
            192.168.1.2 galera_02
            192.168.1.3 galera_03
          '';
          firewall.allowedTCPPorts = [ 3306 4444 4567 4568 ];
          firewall.allowedUDPPorts = [ 4567 ];
        };
        users.users.testuser = { };
        systemd.services.mysql = with pkgs; {
          path = [ mysqlenv-common mysqlenv-mariabackup ];
        };
        services.mysql = {
          enable = true;
          package = pkgs.mariadb;
          settings = {
            mysqld = {
              bind_address = "0.0.0.0";
            };
            galera = {
              wsrep_on = "ON";
              wsrep_debug = "OFF";
              wsrep_retry_autocommit = "3";
              wsrep_provider = "${pkgs.mariadb-galera_25}/lib/galera/libgalera_smm.so";
              wsrep_cluster_address = "gcomm://galera_01,galera_02,galera_03";
              wsrep_cluster_name = "galera";
              wsrep_node_address = "192.168.1.2";
              wsrep_node_name = "galera_02";
              wsrep_sst_method = "mariabackup";
              wsrep_sst_auth = "check_repl:check_pass";
              binlog_format = "ROW";
              enforce_storage_engine = "InnoDB";
              innodb_autoinc_lock_mode = "2";
            };
          };
        };
      };

    galera_03 =
      { pkgs, ... }:
      {
        networking = {
          interfaces.eth1 = {
            ipv4.addresses = [
              { address = "192.168.1.3"; prefixLength = 24; }
            ];
          };
          extraHosts = ''
            192.168.1.1 galera_01
            192.168.1.2 galera_02
            192.168.1.3 galera_03
          '';
          firewall.allowedTCPPorts = [ 3306 4444 4567 4568 ];
          firewall.allowedUDPPorts = [ 4567 ];
        };
        users.users.testuser = { };
        systemd.services.mysql = with pkgs; {
          path = [ mysqlenv-common mysqlenv-mariabackup ];
        };
        services.mysql = {
          enable = true;
          package = pkgs.mariadb;
          settings = {
            mysqld = {
              bind_address = "0.0.0.0";
            };
            galera = {
              wsrep_on = "ON";
              wsrep_debug = "OFF";
              wsrep_retry_autocommit = "3";
              wsrep_provider = "${pkgs.mariadb-galera_25}/lib/galera/libgalera_smm.so";
              wsrep_cluster_address = "gcomm://galera_01,galera_02,galera_03";
              wsrep_cluster_name = "galera";
              wsrep_node_address = "192.168.1.3";
              wsrep_node_name = "galera_03";
              wsrep_sst_method = "mariabackup";
              wsrep_sst_auth = "check_repl:check_pass";
              binlog_format = "ROW";
              enforce_storage_engine = "InnoDB";
              innodb_autoinc_lock_mode = "2";
            };
          };
        };
      };
  };

  testScript = ''
    galera_01.start()
    galera_01.wait_for_unit("mysql")
    galera_01.wait_for_open_port(3306)
    galera_01.succeed(
        "sudo -u testuser mysql -u testuser -e 'use testdb; create table db1 (test_id INT, PRIMARY KEY (test_id)) ENGINE = InnoDB;'"
    )
    galera_01.succeed(
        "sudo -u testuser mysql -u testuser -e 'use testdb; insert into db1 values (37);'"
    )
    galera_02.start()
    galera_02.wait_for_unit("mysql")
    galera_02.wait_for_open_port(3306)
    galera_03.start()
    galera_03.wait_for_unit("mysql")
    galera_03.wait_for_open_port(3306)
    galera_02.succeed(
        "sudo -u testuser mysql -u root -e 'use testdb; select test_id from db1;' -N | grep 37"
    )
    galera_02.succeed(
        "sudo -u testuser mysql -u root -e 'use testdb; create table db2 (test_id INT, PRIMARY KEY (test_id)) ENGINE = InnoDB;'"
    )
    galera_02.succeed("systemctl stop mysql")
    galera_01.succeed(
        "sudo -u testuser mysql -u testuser -e 'use testdb; insert into db2 values (38);'"
    )
    galera_03.succeed(
        "sudo -u testuser mysql -u root -e 'use testdb; create table db3 (test_id INT, PRIMARY KEY (test_id)) ENGINE = InnoDB;'"
    )
    galera_01.succeed(
        "sudo -u testuser mysql -u testuser -e 'use testdb; insert into db3 values (39);'"
    )
    galera_02.succeed("systemctl start mysql")
    galera_02.wait_for_open_port(3306)
    galera_02.succeed(
        "sudo -u testuser mysql -u root -e 'show status' -N | grep 'wsrep_cluster_size.*3'"
    )
    galera_03.succeed(
        "sudo -u testuser mysql -u root -e 'show status' -N | grep 'wsrep_local_state_comment.*Synced'"
    )
    galera_01.succeed(
        "sudo -u testuser mysql -u root -e 'use testdb; select test_id from db3;' -N | grep 39"
    )
    galera_02.succeed(
        "sudo -u testuser mysql -u root -e 'use testdb; select test_id from db2;' -N | grep 38"
    )
    galera_03.succeed(
        "sudo -u testuser mysql -u root -e 'use testdb; select test_id from db1;' -N | grep 37"
    )
    galera_01.succeed("sudo -u testuser mysql -u testuser -e 'use testdb; drop table db3;'")
    galera_02.succeed("sudo -u testuser mysql -u root -e 'use testdb; drop table db2;'")
    galera_03.succeed("sudo -u testuser mysql -u root -e 'use testdb; drop table db1;'")
  '';
})
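The cluster-health assertions above grep the `-N` (skip column names) output of `mysql -e 'show status'` for `wsrep_cluster_size.*3` and `wsrep_local_state_comment.*Synced`. The same check in plain Python, applied to a sample output string (the sample is assumed for illustration, not captured from a real run):

```python
import re

# Assumed sample of "mysql -e 'show status' -N" output:
# tab-separated variable/value pairs, one per line.
sample = "wsrep_cluster_size\t3\nwsrep_local_state_comment\tSynced\n"

def cluster_size_ok(show_status_output, expected=3):
    # Equivalent of the test's grep 'wsrep_cluster_size.*3'.
    pattern = rf"wsrep_cluster_size\s+{expected}\b"
    return re.search(pattern, show_status_output) is not None
```

A size of 3 confirms that the node that was stopped and restarted rejoined the cluster after the state transfer.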
216
nixos/tests/mysql/mariadb-galera-rsync.nix
Normal file
@ -0,0 +1,216 @@
import ./../make-test-python.nix ({ pkgs, ...} :

let
  mysqlenv-common = pkgs.buildEnv { name = "mysql-path-env-common"; pathsToLink = [ "/bin" ]; paths = with pkgs; [ bash gawk gnutar inetutils which ]; };
  mysqlenv-rsync = pkgs.buildEnv { name = "mysql-path-env-rsync"; pathsToLink = [ "/bin" ]; paths = with pkgs; [ lsof procps rsync stunnel ]; };

in {
  name = "mariadb-galera-rsync";
  meta = with pkgs.stdenv.lib.maintainers; {
    maintainers = [ izorkin ];
  };

  # The test creates a Galera cluster with 3 nodes and is checking if rsync-based SST works. The cluster is tested by creating a DB and an empty table on one node,
  # and checking the table's presence on the other node.

  nodes = {
    galera_04 =
      { pkgs, ... }:
      {
        networking = {
          interfaces.eth1 = {
            ipv4.addresses = [
              { address = "192.168.2.1"; prefixLength = 24; }
            ];
          };
          extraHosts = ''
            192.168.2.1 galera_04
            192.168.2.2 galera_05
            192.168.2.3 galera_06
          '';
          firewall.allowedTCPPorts = [ 3306 4444 4567 4568 ];
          firewall.allowedUDPPorts = [ 4567 ];
        };
        users.users.testuser = { };
        systemd.services.mysql = with pkgs; {
          path = [ mysqlenv-common mysqlenv-rsync ];
        };
        services.mysql = {
          enable = true;
          package = pkgs.mariadb;
          ensureDatabases = [ "testdb" ];
          ensureUsers = [{
            name = "testuser";
            ensurePermissions = {
              "testdb.*" = "ALL PRIVILEGES";
            };
          }];
          settings = {
            mysqld = {
              bind_address = "0.0.0.0";
            };
            galera = {
              wsrep_on = "ON";
              wsrep_debug = "OFF";
              wsrep_retry_autocommit = "3";
              wsrep_provider = "${pkgs.mariadb-galera_25}/lib/galera/libgalera_smm.so";
              wsrep_cluster_address = "gcomm://";
              wsrep_cluster_name = "galera-rsync";
              wsrep_node_address = "192.168.2.1";
              wsrep_node_name = "galera_04";
              wsrep_sst_method = "rsync";
              binlog_format = "ROW";
              enforce_storage_engine = "InnoDB";
              innodb_autoinc_lock_mode = "2";
            };
          };
        };
      };

    galera_05 =
      { pkgs, ... }:
      {
        networking = {
          interfaces.eth1 = {
            ipv4.addresses = [
              { address = "192.168.2.2"; prefixLength = 24; }
            ];
          };
          extraHosts = ''
            192.168.2.1 galera_04
            192.168.2.2 galera_05
            192.168.2.3 galera_06
          '';
          firewall.allowedTCPPorts = [ 3306 4444 4567 4568 ];
          firewall.allowedUDPPorts = [ 4567 ];
        };
        users.users.testuser = { };
        systemd.services.mysql = with pkgs; {
          path = [ mysqlenv-common mysqlenv-rsync ];
        };
        services.mysql = {
          enable = true;
          package = pkgs.mariadb;
          settings = {
            mysqld = {
              bind_address = "0.0.0.0";
            };
            galera = {
              wsrep_on = "ON";
              wsrep_debug = "OFF";
              wsrep_retry_autocommit = "3";
              wsrep_provider = "${pkgs.mariadb-galera_25}/lib/galera/libgalera_smm.so";
              wsrep_cluster_address = "gcomm://galera_04,galera_05,galera_06";
              wsrep_cluster_name = "galera-rsync";
              wsrep_node_address = "192.168.2.2";
              wsrep_node_name = "galera_05";
              wsrep_sst_method = "rsync";
              binlog_format = "ROW";
              enforce_storage_engine = "InnoDB";
              innodb_autoinc_lock_mode = "2";
            };
          };
        };
      };

    galera_06 =
      { pkgs, ... }:
      {
        networking = {
          interfaces.eth1 = {
            ipv4.addresses = [
              { address = "192.168.2.3"; prefixLength = 24; }
            ];
          };
          extraHosts = ''
            192.168.2.1 galera_04
            192.168.2.2 galera_05
            192.168.2.3 galera_06
          '';
          firewall.allowedTCPPorts = [ 3306 4444 4567 4568 ];
          firewall.allowedUDPPorts = [ 4567 ];
        };
        users.users.testuser = { };
        systemd.services.mysql = with pkgs; {
          path = [ mysqlenv-common mysqlenv-rsync ];
        };
        services.mysql = {
          enable = true;
          package = pkgs.mariadb;
          settings = {
            mysqld = {
              bind_address = "0.0.0.0";
            };
            galera = {
              wsrep_on = "ON";
              wsrep_debug = "OFF";
              wsrep_retry_autocommit = "3";
              wsrep_provider = "${pkgs.mariadb-galera_25}/lib/galera/libgalera_smm.so";
              wsrep_cluster_address = "gcomm://galera_04,galera_05,galera_06";
              wsrep_cluster_name = "galera-rsync";
              wsrep_node_address = "192.168.2.3";
              wsrep_node_name = "galera_06";
              wsrep_sst_method = "rsync";
              binlog_format = "ROW";
              enforce_storage_engine = "InnoDB";
              innodb_autoinc_lock_mode = "2";
            };
          };
        };
      };
  };

  testScript = ''
    galera_04.start()
    galera_04.wait_for_unit("mysql")
    galera_04.wait_for_open_port(3306)
    galera_04.succeed(
        "sudo -u testuser mysql -u testuser -e 'use testdb; create table db1 (test_id INT, PRIMARY KEY (test_id)) ENGINE = InnoDB;'"
    )
    galera_04.succeed(
        "sudo -u testuser mysql -u testuser -e 'use testdb; insert into db1 values (41);'"
    )
    galera_05.start()
    galera_05.wait_for_unit("mysql")
    galera_05.wait_for_open_port(3306)
    galera_06.start()
    galera_06.wait_for_unit("mysql")
    galera_06.wait_for_open_port(3306)
    galera_05.succeed(
        "sudo -u testuser mysql -u root -e 'use testdb; select test_id from db1;' -N | grep 41"
    )
    galera_05.succeed(
        "sudo -u testuser mysql -u root -e 'use testdb; create table db2 (test_id INT, PRIMARY KEY (test_id)) ENGINE = InnoDB;'"
    )
    galera_05.succeed("systemctl stop mysql")
    galera_04.succeed(
        "sudo -u testuser mysql -u testuser -e 'use testdb; insert into db2 values (42);'"
    )
    galera_06.succeed(
        "sudo -u testuser mysql -u root -e 'use testdb; create table db3 (test_id INT, PRIMARY KEY (test_id)) ENGINE = InnoDB;'"
    )
    galera_04.succeed(
        "sudo -u testuser mysql -u testuser -e 'use testdb; insert into db3 values (43);'"
    )
    galera_05.succeed("systemctl start mysql")
    galera_05.wait_for_open_port(3306)
    galera_05.succeed(
        "sudo -u testuser mysql -u root -e 'show status' -N | grep 'wsrep_cluster_size.*3'"
    )
    galera_06.succeed(
        "sudo -u testuser mysql -u root -e 'show status' -N | grep 'wsrep_local_state_comment.*Synced'"
    )
    galera_04.succeed(
        "sudo -u testuser mysql -u root -e 'use testdb; select test_id from db3;' -N | grep 43"
    )
    galera_05.succeed(
        "sudo -u testuser mysql -u root -e 'use testdb; select test_id from db2;' -N | grep 42"
    )
    galera_06.succeed(
        "sudo -u testuser mysql -u root -e 'use testdb; select test_id from db1;' -N | grep 41"
    )
    galera_04.succeed("sudo -u testuser mysql -u testuser -e 'use testdb; drop table db3;'")
    galera_05.succeed("sudo -u testuser mysql -u root -e 'use testdb; drop table db2;'")
    galera_06.succeed("sudo -u testuser mysql -u root -e 'use testdb; drop table db1;'")
  '';
})
@ -1,4 +1,4 @@
import ./make-test-python.nix ({ pkgs, lib, ... }:
import ./../make-test-python.nix ({ pkgs, lib, ... }:

{
  name = "automysqlbackup";
@ -1,5 +1,5 @@
# Test whether mysqlBackup option works
import ./make-test-python.nix ({ pkgs, ... } : {
import ./../make-test-python.nix ({ pkgs, ... } : {
  name = "mysql-backup";
  meta = with pkgs.stdenv.lib.maintainers; {
    maintainers = [ rvl ];
@ -1,4 +1,4 @@
import ./make-test-python.nix ({ pkgs, ...} :
import ./../make-test-python.nix ({ pkgs, ...} :

let
  replicateUser = "replicate";
@ -1,4 +1,4 @@
import ./make-test-python.nix ({ pkgs, ...} : {
import ./../make-test-python.nix ({ pkgs, ...} : {
  name = "mysql";
  meta = with pkgs.stdenv.lib.maintainers; {
    maintainers = [ eelco shlevy ];
43
nixos/tests/oci-containers.nix
Normal file
@ -0,0 +1,43 @@
{ system ? builtins.currentSystem
, config ? {}
, pkgs ? import ../.. { inherit system config; }
, lib ? pkgs.lib
}:

let

  inherit (import ../lib/testing-python.nix { inherit system pkgs; }) makeTest;

  mkOCITest = backend: makeTest {
    name = "oci-containers-${backend}";

    meta = {
      maintainers = with lib.maintainers; [ adisbladis benley mkaito ];
    };

    nodes = {
      ${backend} = { pkgs, ... }: {
        virtualisation.oci-containers = {
          inherit backend;
          containers.nginx = {
            image = "nginx-container";
            imageFile = pkgs.dockerTools.examples.nginx;
            ports = ["8181:80"];
          };
        };
      };
    };

    testScript = ''
      start_all()
      ${backend}.wait_for_unit("${backend}-nginx.service")
      ${backend}.wait_for_open_port(8181)
      ${backend}.wait_until_succeeds("curl http://localhost:8181 | grep Hello")
    '';
  };

in
lib.foldl' (attrs: backend: attrs // { ${backend} = mkOCITest backend; }) {} [
  "docker"
  "podman"
]
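The final `lib.foldl'` builds an attribute set mapping each backend name to its generated test. In Python terms the same fold is just a dict comprehension; a sketch for illustration, where `mk_oci_test` is a stand-in for the Nix `mkOCITest` function:

```python
def mk_oci_test(backend):
    # Stand-in for the Nix mkOCITest function above; returns a
    # placeholder where Nix would return a full test derivation.
    return f"oci-containers-{backend}"

# Equivalent of: lib.foldl' (attrs: backend:
#   attrs // { ${backend} = mkOCITest backend; }) {} [ "docker" "podman" ]
tests = {backend: mk_oci_test(backend) for backend in ["docker", "podman"]}
```

The result exposes one test attribute per container backend, so `nixos.tests.oci-containers.docker` and `.podman` can be built independently.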
@ -1,247 +0,0 @@
import ./make-test.nix ({ pkgs, ... }:

with pkgs.lib;

let
  ksExt = pkgs.writeText "ks-ext4" ''
    clearpart --all --initlabel --drives=vdb

    part /boot --recommended --label=boot --fstype=ext2 --ondisk=vdb
    part swap --recommended --label=swap --fstype=swap --ondisk=vdb
    part /nix --size=500 --label=nix --fstype=ext3 --ondisk=vdb
    part / --recommended --label=root --fstype=ext4 --ondisk=vdb
  '';

  ksBtrfs = pkgs.writeText "ks-btrfs" ''
    clearpart --all --initlabel --drives=vdb,vdc

    part swap1 --recommended --label=swap1 --fstype=swap --ondisk=vdb
    part swap2 --recommended --label=swap2 --fstype=swap --ondisk=vdc

    part btrfs.1 --grow --ondisk=vdb
    part btrfs.2 --grow --ondisk=vdc

    btrfs / --data=0 --metadata=1 --label=root btrfs.1 btrfs.2
  '';

  ksF2fs = pkgs.writeText "ks-f2fs" ''
    clearpart --all --initlabel --drives=vdb

    part swap --recommended --label=swap --fstype=swap --ondisk=vdb
    part /boot --recommended --label=boot --fstype=f2fs --ondisk=vdb
    part / --recommended --label=root --fstype=f2fs --ondisk=vdb
  '';

  ksRaid = pkgs.writeText "ks-raid" ''
    clearpart --all --initlabel --drives=vdb,vdc

    part raid.01 --size=200 --ondisk=vdb
    part raid.02 --size=200 --ondisk=vdc

    part swap1 --size=500 --label=swap1 --fstype=swap --ondisk=vdb
    part swap2 --size=500 --label=swap2 --fstype=swap --ondisk=vdc

    part raid.11 --grow --ondisk=vdb
    part raid.12 --grow --ondisk=vdc

    raid /boot --level=1 --fstype=ext3 --device=md0 raid.01 raid.02
    raid / --level=1 --fstype=xfs --device=md1 raid.11 raid.12
  '';

  ksRaidLvmCrypt = pkgs.writeText "ks-lvm-crypt" ''
    clearpart --all --initlabel --drives=vdb,vdc

    part raid.1 --grow --ondisk=vdb
    part raid.2 --grow --ondisk=vdc

    raid pv.0 --level=1 --encrypted --passphrase=x --device=md0 raid.1 raid.2

    volgroup nixos pv.0

    logvol /boot --size=200 --fstype=ext3 --name=boot --vgname=nixos
    logvol swap --size=500 --fstype=swap --name=swap --vgname=nixos
    logvol / --size=1000 --grow --fstype=ext4 --name=root --vgname=nixos
  '';
in {
  name = "partitiion";

  machine = { pkgs, ... }: {
    environment.systemPackages = [
      pkgs.pythonPackages.nixpart0
      pkgs.file pkgs.btrfs-progs pkgs.xfsprogs pkgs.lvm2
    ];
    virtualisation.emptyDiskImages = [ 4096 4096 ];
  };

  testScript = ''
    my $diskStart;
    my @mtab;

    sub getMtab {
      my $mounts = $machine->succeed("cat /proc/mounts");
      chomp $mounts;
      return map [split], split /\n/, $mounts;
    }

    sub parttest {
      my ($desc, $code) = @_;
      $machine->start;
      $machine->waitForUnit("default.target");

      # Gather mounts and superblock
      @mtab = getMtab;
      $diskStart = $machine->succeed("dd if=/dev/vda bs=512 count=1");

      subtest($desc, $code);
      $machine->shutdown;
    }

    sub ensureSanity {
      # Check whether the filesystem in /dev/vda is still intact
      my $newDiskStart = $machine->succeed("dd if=/dev/vda bs=512 count=1");
      if ($diskStart ne $newDiskStart) {
        $machine->log("Something went wrong, the partitioner wrote " .
                      "something into the first 512 bytes of /dev/vda!");
        die;
      }

      # Check whether nixpart has unmounted anything
      my @currentMtab = getMtab;
      for my $mount (@mtab) {
        my $path = $mount->[1];
        unless (grep { $_->[1] eq $path } @currentMtab) {
          $machine->log("The partitioner seems to have unmounted $path.");
          die;
        }
      }
    }

    sub checkMount {
      my $mounts = $machine->succeed("cat /proc/mounts");

    }

    sub kickstart {
      $machine->copyFileFromHost($_[0], "/kickstart");
      $machine->succeed("nixpart -v /kickstart");
      ensureSanity;
    }

    sub ensurePartition {
      my ($name, $match) = @_;
      my $path = $name =~ /^\// ? $name : "/dev/disk/by-label/$name";
      my $out = $machine->succeed("file -Ls $path");
      my @matches = grep(/^$path: .*$match/i, $out);
      if (!@matches) {
        $machine->log("Partition on $path was expected to have a " .
                      "file system that matches $match, but instead has: $out");
        die;
      }
    }

    sub ensureNoPartition {
      $machine->succeed("test ! -e /dev/$_[0]");
    }

    sub ensureMountPoint {
      $machine->succeed("mountpoint $_[0]");
    }

    sub remountAndCheck {
      $machine->nest("Remounting partitions:", sub {
        # XXX: "findmnt -ARunl -oTARGET /mnt" seems to NOT print all mounts!
        my $getmounts_cmd = "cat /proc/mounts | cut -d' ' -f2 | grep '^/mnt'";
        # Insert canaries first
        my $canaries = $machine->succeed($getmounts_cmd . " | while read p;" .
                                         " do touch \"\$p/canary\";" .
                                         " echo \"\$p/canary\"; done");
        # Now unmount manually
        $machine->succeed($getmounts_cmd . " | tac | xargs -r umount");
        # /mnt should be empty or non-existing
        my $found = $machine->succeed("find /mnt -mindepth 1");
        chomp $found;
        if ($found) {
          $machine->log("Cruft found in /mnt:\n$found");
          die;
        }
        # Try to remount with nixpart
        $machine->succeed("nixpart -vm /kickstart");
        ensureMountPoint("/mnt");
        # Check if our beloved canaries are dead
        chomp $canaries;
        $machine->nest("Checking canaries:", sub {
          for my $canary (split /\n/, $canaries) {
            $machine->succeed("test -e '$canary'");
          }
        });
      });
    }

    parttest "ext2, ext3 and ext4 filesystems", sub {
      kickstart("${ksExt}");
      ensurePartition("boot", "ext2");
      ensurePartition("swap", "swap");
      ensurePartition("nix", "ext3");
      ensurePartition("root", "ext4");
      ensurePartition("/dev/vdb4", "boot sector");
      ensureNoPartition("vdb6");
      ensureNoPartition("vdc1");
      remountAndCheck;
      ensureMountPoint("/mnt/boot");
      ensureMountPoint("/mnt/nix");
    };

    parttest "btrfs filesystem", sub {
      $machine->succeed("modprobe btrfs");
      kickstart("${ksBtrfs}");
      ensurePartition("swap1", "swap");
      ensurePartition("swap2", "swap");
      ensurePartition("/dev/vdb2", "btrfs");
      ensurePartition("/dev/vdc2", "btrfs");
      ensureNoPartition("vdb3");
      ensureNoPartition("vdc3");
      remountAndCheck;
    };

    parttest "f2fs filesystem", sub {
      $machine->succeed("modprobe f2fs");
      kickstart("${ksF2fs}");
      ensurePartition("swap", "swap");
      ensurePartition("boot", "f2fs");
      ensurePartition("root", "f2fs");
      remountAndCheck;
      ensureMountPoint("/mnt/boot", "f2fs");
    };

    parttest "RAID1 with XFS", sub {
      kickstart("${ksRaid}");
      ensurePartition("swap1", "swap");
      ensurePartition("swap2", "swap");
      ensurePartition("/dev/md0", "ext3");
      ensurePartition("/dev/md1", "xfs");
      ensureNoPartition("vdb4");
      ensureNoPartition("vdc4");
      ensureNoPartition("md2");
      remountAndCheck;
      ensureMountPoint("/mnt/boot");
    };

    parttest "RAID1 with LUKS and LVM", sub {
      kickstart("${ksRaidLvmCrypt}");
      ensurePartition("/dev/vdb1", "data");
      ensureNoPartition("vdb2");
      ensurePartition("/dev/vdc1", "data");
      ensureNoPartition("vdc2");

      ensurePartition("/dev/md0", "luks");
      ensureNoPartition("md1");

      ensurePartition("/dev/nixos/boot", "ext3");
      ensurePartition("/dev/nixos/swap", "swap");
      ensurePartition("/dev/nixos/root", "ext4");

      remountAndCheck;
      ensureMountPoint("/mnt/boot");
    };
  '';
})
@ -179,7 +179,7 @@ in import ./make-test-python.nix {
    s3.succeed(
        "mc config host add minio "
        + "http://localhost:${toString minioPort} "
        + "${s3.accessKey} ${s3.secretKey} S3v4",
        + "${s3.accessKey} ${s3.secretKey} --api s3v4",
        "mc mb minio/thanos-bucket",
    )

31
nixos/tests/systemd-boot.nix
Normal file
@ -0,0 +1,31 @@
{ system ? builtins.currentSystem,
  config ? {},
  pkgs ? import ../.. { inherit system config; }
}:

with import ../lib/testing-python.nix { inherit system pkgs; };
with pkgs.lib;

makeTest {
  name = "systemd-boot";
  meta.maintainers = with pkgs.stdenv.lib.maintainers; [ danielfullmer ];

  machine = { pkgs, lib, ... }: {
    virtualisation.useBootLoader = true;
    virtualisation.useEFIBoot = true;
    boot.loader.systemd-boot.enable = true;
  };

  testScript = ''
    machine.start()
    machine.wait_for_unit("multi-user.target")

    machine.succeed("test -e /boot/loader/entries/nixos-generation-1.conf")

    # Ensure we actually booted using systemd-boot.
    # Magic number is the vendor UUID used by systemd-boot.
    machine.succeed(
        "test -e /sys/firmware/efi/efivars/LoaderEntrySelected-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f"
    )
  '';
}
@ -3,7 +3,7 @@ import ./make-test-python.nix ({ pkgs, ... }:
let

  stick = pkgs.fetchurl {
    url = "http://nixos.org/~eelco/nix/udisks-test.img.xz";
    url = "https://nixos.org/~eelco/nix/udisks-test.img.xz";
    sha256 = "0was1xgjkjad91nipzclaz5biv3m4b2nk029ga6nk7iklwi19l8b";
  };

47
nixos/tests/web-servers/unit-php.nix
Normal file
@ -0,0 +1,47 @@
import ../make-test-python.nix ({pkgs, ...}:
let
  testdir = pkgs.writeTextDir "www/info.php" "<?php phpinfo();";

in {
  name = "unit-php-test";
  meta.maintainers = with pkgs.stdenv.lib.maintainers; [ izorkin ];

  machine = { config, lib, pkgs, ... }: {
    services.unit = {
      enable = true;
      config = ''
        {
          "listeners": {
            "*:9074": {
              "application": "php_74"
            }
          },
          "applications": {
            "php_74": {
              "type": "php 7.4",
              "processes": 1,
              "user": "testuser",
              "group": "testgroup",
              "root": "${testdir}/www",
              "index": "info.php"
            }
          }
        }
      '';
    };
    users = {
      users.testuser = {
        isNormalUser = false;
        uid = 1074;
        group = "testgroup";
      };
      groups.testgroup = {
        gid = 1074;
      };
    };
  };
  testScript = ''
    machine.wait_for_unit("unit.service")
    assert "PHP Version ${pkgs.php74.version}" in machine.succeed("curl -vvv -s http://127.0.0.1:9074/")
  '';
})
@ -6,6 +6,11 @@ import ../make-test-python.nix {
      environment.systemPackages = [
        (pkgs.callPackage ./xmpp-sendmessage.nix { connectTo = nodes.server.config.networking.primaryIPAddress; })
      ];
      networking.extraHosts = ''
        ${nodes.server.config.networking.primaryIPAddress} example.com
        ${nodes.server.config.networking.primaryIPAddress} conference.example.com
        ${nodes.server.config.networking.primaryIPAddress} uploads.example.com
      '';
    };
    server = { config, pkgs, ... }: {
      nixpkgs.overlays = [
@ -18,6 +23,8 @@ import ../make-test-python.nix {
      ];
      networking.extraHosts = ''
        ${config.networking.primaryIPAddress} example.com
        ${config.networking.primaryIPAddress} conference.example.com
        ${config.networking.primaryIPAddress} uploads.example.com
      '';
      networking.firewall.enable = false;
      services.prosody = {
@ -39,6 +46,14 @@ import ../make-test-python.nix {
          domain = "example.com";
          enabled = true;
        };
        muc = [
          {
            domain = "conference.example.com";
          }
        ];
        uploadHttp = {
          domain = "uploads.example.com";
        };
      };
    };
    mysql = { config, pkgs, ... }: {
@ -60,6 +60,5 @@ stdenv.mkDerivation rec {
    homepage = "http://audacityteam.org/";
    license = licenses.gpl2Plus;
    platforms = intersectLists platforms.linux platforms.x86; # fails on ARM
    maintainers = with maintainers; [ the-kenny ];
  };
}
@ -2,13 +2,13 @@

stdenv.mkDerivation rec {
  pname = "BSEQuencer";
  version = "1.2.0";
  version = "1.4.0";

  src = fetchFromGitHub {
    owner = "sjaehn";
    repo = pname;
    rev = "${version}";
    sha256 = "08xwz5v8wrar0rx7qdr9pkpjz2k9sw6bn5glhpn6sp6453fabf8q";
    sha256 = "1zz1cirmx4wm4im4gjdp691f2042c8d1i8np1ns71f6kqdj9ps3k";
  };

  nativeBuildInputs = [ pkgconfig ];
@ -3,12 +3,12 @@
}:

stdenv.mkDerivation rec {
  version = "1.4.0.0";
  version = "1.4.0.1";
  pname = "chuck";

  src = fetchurl {
    url = "http://chuck.cs.princeton.edu/release/files/chuck-${version}.tgz";
    sha256 = "1b17rsf7bv45gfhyhfmpz9d4rkxn24c0m2hgmpfjz3nlp0rf7bic";
    sha256 = "1m0fhndbqaf0lii1asyc50c66bv55ib6mbnm8fzk5qc5ncs0r8hi";
  };

  nativeBuildInputs = [ flex bison which ];
@ -2,11 +2,11 @@

mkDerivation rec {
  pname = "drumkv1";
  version = "0.9.13";
  version = "0.9.14";

  src = fetchurl {
    url = "mirror://sourceforge/drumkv1/${pname}-${version}.tar.gz";
    sha256 = "1h88sakxs0b20k8v2sh14y05fin1zqmhnid6h9mk9c37ixxg58ia";
    sha256 = "0fr7pkp55zvjxf7p22drs93fsjgvqhbd55vxi0srhp2s2wzz5qak";
  };

  buildInputs = [ libjack2 alsaLib libsndfile liblo lv2 qt5.qtbase qt5.qttools ];
@ -2,7 +2,7 @@
, gtk2
, jack2Full
, alsaLib
, opencv
, opencv2
, libsndfile
}:

@ -20,7 +20,7 @@ faust.wrapWithBuildEnv {
    gtk2
    jack2Full
    alsaLib
    opencv
    opencv2
    libsndfile
  ];
@ -1,6 +1,6 @@
{ faust
, jack2Full
, opencv
, opencv2
, qt4
, libsndfile
, which
@ -17,7 +17,7 @@ faust.wrapWithBuildEnv {

  propagatedBuildInputs = [
    jack2Full
    opencv
    opencv2
    qt4
    libsndfile
    which
@ -8,13 +8,13 @@ in

stdenv.mkDerivation rec {
  pname = "freewheeling";
  version = "0.6.5";
  version = "0.6.6";

  src = fetchFromGitHub {
    owner = "free-wheeling";
    repo = "freewheeling";
    rev = "v${version}";
    sha256 = "1gjii2kndffj9iqici4vb9zrkrdqj1hs9q43x7jv48wv9872z78r";
    sha256 = "1xff5whr02cixihgd257dc70hnyf22j3zamvhsvg4lp7zq9l2in4";
  };

  nativeBuildInputs = [ pkgconfig autoreconfHook libtool ];
@ -7,13 +7,13 @@

stdenv.mkDerivation rec {
  pname = "ft2-clone";
  version = "1.15";
  version = "1.23";

  src = fetchFromGitHub {
    owner = "8bitbubsy";
    repo = "ft2-clone";
    rev = "v${version}";
    sha256 = "19xgdaij71gpvq216zjlp60zmfdl2a8kf8sc3bpk8a4d4xh4n151";
    sha256 = "03prdifc2nz7smmzdy19flp33m927vb7j5bhdc46gak753pikw7d";
  };

  nativeBuildInputs = [ cmake ];
@ -1,23 +1,41 @@
{ lib, stdenv, fetchFromGitHub, autoreconfHook,
  fltk, jansson, rtmidi, libsamplerate, libsndfile,
  jack2, alsaLib, libpulseaudio,
  libXpm, libXinerama, libXcursor }:
{ stdenv
, fetchFromGitHub
, autoreconfHook
, fltk
, jansson
, rtmidi
, libsamplerate
, libsndfile
, jack2
, alsaLib
, libpulseaudio
, libXpm
, libXinerama
, libXcursor
, catch2
, nlohmann_json
}:

stdenv.mkDerivation rec {
  pname = "giada";
  version = "0.16.1";
  version = "0.16.2.2";

  src = fetchFromGitHub {
    owner = "monocasual";
    repo = pname;
    rev = "v${version}";
    sha256 = "0b3lhjs6myml5r5saky15523sbc3qr43r9rh047vhsiafmqdvfq1";
    sha256 = "0rpg5qmw3b76xcra869shb8fwk5wfzpzw216n96hxa5s6k69cm0p";
  };

  configureFlags = [ "--target=linux" ];
  configureFlags = [
    "--target=linux"
    "--enable-system-catch"
  ];

  nativeBuildInputs = [
    autoreconfHook
  ];

  buildInputs = [
    fltk
    libsndfile
@ -30,9 +48,16 @@ stdenv.mkDerivation rec {
    libpulseaudio
    libXinerama
    libXcursor
    catch2
    nlohmann_json
  ];

  meta = with lib; {
  postPatch = ''
    sed -i 's:"deps/json/single_include/nlohmann/json\.hpp":<nlohmann/json.hpp>:' \
      src/core/{conf,init,midiMapConf,patch}.cpp
  '';

  meta = with stdenv.lib; {
    description = "A free, minimal, hardcore audio tool for DJs, live performers and electronic musicians";
    homepage = "https://giadamusic.com/";
    license = licenses.gpl3;
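The new `postPatch` in the giada derivation swaps the vendored copy of nlohmann's `json.hpp` for the system header supplied by `nlohmann_json`. A standalone sketch of the same `sed` call, run against scratch files rather than the real giada sources (the file names only mirror the ones the derivation patches):

```shell
# Create scratch stand-ins for the four sources the derivation patches.
mkdir -p /tmp/giada-demo/src/core
for f in conf init midiMapConf patch; do
  printf '#include "deps/json/single_include/nlohmann/json.hpp"\n' \
    > /tmp/giada-demo/src/core/$f.cpp
done

# Same sed expression as the postPatch hook; ':' is used as the s///
# delimiter so the slashes in the path need no escaping.
sed -i 's:"deps/json/single_include/nlohmann/json\.hpp":<nlohmann/json.hpp>:' \
  /tmp/giada-demo/src/core/{conf,init,midiMapConf,patch}.cpp

cat /tmp/giada-demo/src/core/conf.cpp
# → #include <nlohmann/json.hpp>
```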
@ -5,14 +5,14 @@

python3Packages.buildPythonApplication rec {
  pname = "gpodder";
  version = "3.10.13";
  version = "3.10.15";
  format = "other";

  src = fetchFromGitHub {
    owner = pname;
    repo = pname;
    rev = version;
    sha256 = "1h542syaxsx1hslfzlk3fx1nbp190zjw35kigw7a1kx1jwvfwapg";
    sha256 = "0ghbanj142n0hgydzfjmnkdgri2kswsjal3mn10c723kih4ir4yr";
  };

  patches = [
@ -2,11 +2,11 @@

python3Packages.buildPythonApplication rec {
  pname = "Mopidy-Iris";
  version = "3.46.0";
  version = "3.47.0";

  src = python3Packages.fetchPypi {
    inherit pname version;
    sha256 = "0c7b6zbcj4bq5qsxvhjwqclrl1k2hs3wb50pfjbw7gs7m3gm2b7d";
    sha256 = "1lvq5qsnn2djwkgbadzr7rr6ik2xh8yyj0p3y3hck9pl96ms7lfv";
  };

  propagatedBuildInputs = [
@ -4,11 +4,11 @@

stdenv.mkDerivation rec {
  pname = "mup";
  version = "6.7";
  version = "6.8";

  src = fetchurl {
    url = "http://www.arkkra.com/ftp/pub/unix/mup${builtins.replaceStrings ["."] [""] version}src.tar.gz";
    sha256 = "1y1qknhib1isdjsbv833w3nxzyfljkfgp1gmjwly60l55q60frpk";
    sha256 = "06bv5nyl8rcibyb83zzrfdq6x6f93g3rgnv47i5gsjcaw5w6l31y";
  };

  nativeBuildInputs = [ autoreconfHook bison flex ghostscript groff netpbm ];
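The mup URL builds the tarball name with `builtins.replaceStrings ["."] [""] version`, which drops the dot from the version string. Bash's `${var//./}` expansion does the same transformation, so the bumped URL can be checked quickly outside Nix (a sketch, not part of the derivation):

```shell
# "6.8" -> "68", matching builtins.replaceStrings ["."] [""] "6.8" in Nix.
version="6.8"
stripped="${version//./}"
url="http://www.arkkra.com/ftp/pub/unix/mup${stripped}src.tar.gz"
echo "$url"
# → http://www.arkkra.com/ftp/pub/unix/mup68src.tar.gz
```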
Some files were not shown because too many files have changed in this diff.