Merge branch 'staging-next' into staging

This commit is contained in:
Vladimír Čunát 2019-02-01 09:42:53 +01:00
commit 8ba516664b
No known key found for this signature in database
GPG Key ID: E747DF1F9575A3AA
458 changed files with 9823 additions and 5655 deletions

View File

@ -11,6 +11,8 @@
<xi:include href="functions/overrides.xml" /> <xi:include href="functions/overrides.xml" />
<xi:include href="functions/generators.xml" /> <xi:include href="functions/generators.xml" />
<xi:include href="functions/debug.xml" /> <xi:include href="functions/debug.xml" />
<xi:include href="functions/fetchers.xml" />
<xi:include href="functions/trivial-builders.xml" />
<xi:include href="functions/fhs-environments.xml" /> <xi:include href="functions/fhs-environments.xml" />
<xi:include href="functions/shell.xml" /> <xi:include href="functions/shell.xml" />
<xi:include href="functions/dockertools.xml" /> <xi:include href="functions/dockertools.xml" />

View File

@ -24,7 +24,7 @@
<para> <para>
This function is analogous to the <command>docker build</command> command, This function is analogous to the <command>docker build</command> command,
in that can used to build a Docker-compatible repository tarball containing in that it can be used to build a Docker-compatible repository tarball containing
a single image with one or multiple layers. As such, the result is suitable a single image with one or multiple layers. As such, the result is suitable
for being loaded in Docker with <command>docker load</command>. for being loaded in Docker with <command>docker load</command>.
</para> </para>
@ -190,11 +190,11 @@ buildImage {
By default <function>buildImage</function> will use a static date of one By default <function>buildImage</function> will use a static date of one
second past the UNIX Epoch. This allows <function>buildImage</function> to second past the UNIX Epoch. This allows <function>buildImage</function> to
produce binary reproducible images. When listing images with produce binary reproducible images. When listing images with
<command>docker list images</command>, the newly created images will be <command>docker images</command>, the newly created images will be
listed like this: listed like this:
</para> </para>
<screen><![CDATA[ <screen><![CDATA[
$ docker image list $ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE REPOSITORY TAG IMAGE ID CREATED SIZE
hello latest 08c791c7846e 48 years ago 25.2MB hello latest 08c791c7846e 48 years ago 25.2MB
]]></screen> ]]></screen>
@ -217,7 +217,7 @@ pkgs.dockerTools.buildImage {
and now the Docker CLI will display a reasonable date and sort the images and now the Docker CLI will display a reasonable date and sort the images
as expected: as expected:
<screen><![CDATA[ <screen><![CDATA[
$ docker image list $ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE REPOSITORY TAG IMAGE ID CREATED SIZE
hello latest de2bf4786de6 About a minute ago 25.2MB hello latest de2bf4786de6 About a minute ago 25.2MB
]]></screen> ]]></screen>
@ -402,7 +402,7 @@ pkgs.dockerTools.buildLayeredImage {
<para> <para>
This function is analogous to the <command>docker pull</command> command, in This function is analogous to the <command>docker pull</command> command, in
that can be used to pull a Docker image from a Docker registry. By default that it can be used to pull a Docker image from a Docker registry. By default
<link xlink:href="https://hub.docker.com/">Docker Hub</link> is used to pull <link xlink:href="https://hub.docker.com/">Docker Hub</link> is used to pull
images. images.
</para> </para>
@ -484,7 +484,7 @@ sha256:20d9485b25ecfd89204e843a962c1bd70e9cc6858d65d7f5fadc340246e2116b
<para> <para>
This function is analogous to the <command>docker export</command> command, This function is analogous to the <command>docker export</command> command,
in that can used to flatten a Docker image that contains multiple layers. It in that it can be used to flatten a Docker image that contains multiple layers. It
is in fact the result of the merge of all the layers of the image. As such, is in fact the result of the merge of all the layers of the image. As such,
the result is suitable for being imported in Docker with <command>docker the result is suitable for being imported in Docker with <command>docker
import</command>. import</command>.
@ -557,7 +557,7 @@ buildImage {
<para> <para>
Creating base files like <literal>/etc/passwd</literal> or Creating base files like <literal>/etc/passwd</literal> or
<literal>/etc/login.defs</literal> are necessary for shadow-utils to <literal>/etc/login.defs</literal> is necessary for shadow-utils to
manipulate users and groups. manipulate users and groups.
</para> </para>
</section> </section>
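The trade-off discussed above can be made concrete with a small sketch (the package and Cmd are illustrative; only the created attribute is the point here):

    pkgs.dockerTools.buildImage {
      name = "hello";
      tag = "latest";
      # "now" gives a human-friendly timestamp in `docker images`,
      # at the cost of the image no longer being binary reproducible.
      created = "now";
      contents = pkgs.hello;
      config.Cmd = [ "/bin/hello" ];
    }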

doc/functions/fetchers.xml — new file (206 lines)
View File

@ -0,0 +1,206 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-pkgs-fetchers">
<title>Fetcher functions</title>
<para>
When using Nix, you will frequently need to download source code
and other files from the internet. Nixpkgs comes with a few helper
functions that allow you to fetch fixed-output derivations in a
structured way.
</para>
<para>
The two fetcher primitives are <function>fetchurl</function> and
<function>fetchzip</function>. Both of these have two required
arguments, a URL and a hash. The hash is typically
<literal>sha256</literal>, although many more hash algorithms are
supported. Nixpkgs contributors are currently recommended to use
<literal>sha256</literal>. This hash will be used by Nix to
identify your source. A typical usage of fetchurl is provided
below.
</para>
<programlisting><![CDATA[
{ stdenv, fetchurl }:
stdenv.mkDerivation {
name = "hello";
src = fetchurl {
url = "http://www.example.org/hello.tar.gz";
sha256 = "1111111111111111111111111111111111111111111111111111";
};
}
]]></programlisting>
<para>
The main difference between <function>fetchurl</function> and
<function>fetchzip</function> is in how they store the contents.
<function>fetchurl</function> will store the unaltered contents of
the URL within the Nix store. <function>fetchzip</function> on the
other hand will decompress the archive for you, making files and
directories directly accessible in the future.
<function>fetchzip</function> can only be used with archives.
Despite the name, <function>fetchzip</function> is not limited to
.zip files and can also be used with any tarball.
</para>
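A minimal sketch of the <function>fetchzip</function> variant, using the same placeholder URL and hash style as the <function>fetchurl</function> example above:

    { stdenv, fetchzip }:

    stdenv.mkDerivation {
      name = "hello";
      src = fetchzip {
        url = "http://www.example.org/hello.tar.gz";
        # hash of the extracted directory, not of the tarball itself,
        # so it differs from the fetchurl hash for the same file
        sha256 = "1111111111111111111111111111111111111111111111111111";
      };
    }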
<para>
<function>fetchpatch</function> works very similarly to
<function>fetchurl</function>, expecting the same arguments. It
expects patch files as a source and performs normalization on
them before computing the checksum. For example, it will remove
comments or other unstable parts that are sometimes added by
version control systems and can change over time.
</para>
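As a hedged illustration (the URL and hash are placeholders, not taken from this commit), a <function>fetchpatch</function> call is typically placed in a derivation's <literal>patches</literal> list:

    fetchpatch {
      # normalization keeps this hash stable even if the forge adds
      # changing metadata around the patch
      url = "https://www.example.org/fix-build.patch";
      sha256 = "0000000000000000000000000000000000000000000000000000";
    }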
<para>
Other fetcher functions allow you to add source code directly from
a VCS such as Subversion or Git. They are mostly named after the
command used by the VCS in question. Because they give you a
working repository, they act most like
<function>fetchzip</function>.
</para>
<variablelist>
<varlistentry>
<term>
<literal>fetchsvn</literal>
</term>
<listitem>
<para>
Used with Subversion. Expects a <literal>url</literal> pointing to
a Subversion directory, a <literal>rev</literal>, and a
<literal>sha256</literal>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchgit</literal>
</term>
<listitem>
<para>
Used with Git. Expects a <literal>url</literal> pointing to a Git
repository, a <literal>rev</literal>, and a <literal>sha256</literal>.
<literal>rev</literal> in this case can be the full Git commit ID
(SHA-1 hash) or a tag name like
<literal>refs/tags/v1.0</literal>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchfossil</literal>
</term>
<listitem>
<para>
Used with Fossil. Expects a <literal>url</literal> pointing to a
Fossil archive, a <literal>rev</literal>, and a <literal>sha256</literal>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchcvs</literal>
</term>
<listitem>
<para>
Used with CVS. Expects <literal>cvsRoot</literal>,
<literal>tag</literal>, and <literal>sha256</literal>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchhg</literal>
</term>
<listitem>
<para>
Used with Mercurial. Expects <literal>url</literal>,
<literal>rev</literal>, and <literal>sha256</literal>.
</para>
</listitem>
</varlistentry>
</variablelist>
<para>
A number of fetcher functions wrap part of
<function>fetchurl</function> and <function>fetchzip</function>.
They are mainly convenience functions intended for commonly used
destinations of source code in Nixpkgs. These wrapper fetchers are
listed below.
</para>
<variablelist>
<varlistentry>
<term>
<literal>fetchFromGitHub</literal>
</term>
<listitem>
<para>
<function>fetchFromGitHub</function> expects four arguments.
<literal>owner</literal> is a string corresponding to the
GitHub user or organization that controls this repository.
<literal>repo</literal> corresponds to the name of the
software repository. These are located at the top of every
GitHub HTML page as
<literal>owner</literal>/<literal>repo</literal>.
<literal>rev</literal> corresponds to the Git commit hash or
tag (e.g. <literal>v1.0</literal>) that will be downloaded from
Git. Finally, <literal>sha256</literal> corresponds to the
hash of the extracted directory. Again, other hash algorithms
are also available, but <literal>sha256</literal> is currently
preferred. A full example is sketched after this list.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchFromGitLab</literal>
</term>
<listitem>
<para>
This is used with GitLab repositories. The arguments expected
are very similar to fetchFromGitHub above.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchFromBitbucket</literal>
</term>
<listitem>
<para>
This is used with Bitbucket repositories. The arguments expected
are very similar to fetchFromGitHub above.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchFromSavannah</literal>
</term>
<listitem>
<para>
This is used with Savannah repositories. The arguments expected
are very similar to fetchFromGitHub above.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchFromRepoOrCz</literal>
</term>
<listitem>
<para>
This is used with repo.or.cz repositories. The arguments
expected are very similar to fetchFromGitHub above.
</para>
</listitem>
</varlistentry>
</variablelist>
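The following sketch ties the <function>fetchFromGitHub</function> arguments described above together; owner, repo, rev and hash are placeholders rather than real values:

    { stdenv, fetchFromGitHub }:

    stdenv.mkDerivation {
      name = "example-1.0";
      src = fetchFromGitHub {
        owner = "example-org";   # GitHub user or organization
        repo = "example";        # repository name
        rev = "v1.0";            # tag or full commit hash
        # hash of the extracted directory
        sha256 = "0000000000000000000000000000000000000000000000000000";
      };
    }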
</section>

View File

@ -0,0 +1,124 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-trivial-builders">
<title>Trivial builders</title>
<para>
Nixpkgs provides a couple of functions that help with building
derivations. The most important one,
<function>stdenv.mkDerivation</function>, has already been
documented above. The following functions wrap
<function>stdenv.mkDerivation</function>, making it easier to use
in certain cases.
</para>
<variablelist>
<varlistentry>
<term>
<literal>runCommand</literal>
</term>
<listitem>
<para>
This takes three arguments, <literal>name</literal>,
<literal>env</literal>, and <literal>buildCommand</literal>.
<literal>name</literal> is just the name that Nix will append
to the store path in the same way that
<literal>stdenv.mkDerivation</literal> uses its
<literal>name</literal> attribute. <literal>env</literal> is an
attribute set specifying environment variables that will be set
for this derivation. These attributes are then passed to the
wrapped <literal>stdenv.mkDerivation</literal>.
<literal>buildCommand</literal> specifies the commands that
will be run to create this derivation. Note that you will need
to create <literal>$out</literal> for Nix to register the
command as successful.
</para>
<para>
An example of using <literal>runCommand</literal> is provided
below.
</para>
<programlisting>
(import &lt;nixpkgs&gt; {}).runCommand "my-example" {} ''
echo My example command is running
mkdir $out
echo I can write data to the Nix store > $out/message
echo I can also run basic commands like:
echo ls
ls
echo whoami
whoami
echo date
date
''
</programlisting>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>runCommandCC</literal>
</term>
<listitem>
<para>
This works just like <literal>runCommand</literal>. The only
difference is that it also provides a C compiler in
<literal>buildCommand</literal>'s environment. To minimize your
dependencies, you should only use this if you are sure you will
need a C compiler as part of running your command.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>writeTextFile</literal>, <literal>writeText</literal>,
<literal>writeTextDir</literal>, <literal>writeScript</literal>,
<literal>writeScriptBin</literal>
</term>
<listitem>
<para>
These functions write <literal>text</literal> to the Nix store.
This is useful for creating scripts from Nix expressions.
<literal>writeTextFile</literal> takes an attribute set and
expects two arguments, <literal>name</literal> and
<literal>text</literal>. <literal>name</literal> corresponds to
the name used in the Nix store path. <literal>text</literal>
will be the contents of the file. You can also set
<literal>executable</literal> to true to make this file have
the executable bit set.
</para>
<para>
Many more commands wrap <literal>writeTextFile</literal>
including <literal>writeText</literal>,
<literal>writeTextDir</literal>,
<literal>writeScript</literal>, and
<literal>writeScriptBin</literal>. These are convenience
functions over <literal>writeTextFile</literal>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>symlinkJoin</literal>
</term>
<listitem>
<para>
This can be used to put many derivations into the same directory
structure. It works by creating a new derivation and adding
symlinks to each of the paths listed. It expects two arguments,
<literal>name</literal> and <literal>paths</literal>.
<literal>name</literal> is the name used in the Nix store path
for the created derivation. <literal>paths</literal> is a list of
paths that will be symlinked. These paths can point to Nix store
derivations or to any subdirectory contained within them. A
combined sketch is given after this list.
</para>
</listitem>
</varlistentry>
</variablelist>
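A combined sketch of the builders above (package choice and script contents are illustrative, not from this commit):

    with import <nixpkgs> {};

    let
      # writeScriptBin places the script at $out/bin/hello-script and makes it executable
      hello-script = writeScriptBin "hello-script" ''
        #!${stdenv.shell}
        echo "hello from the Nix store"
      '';
    in
    # symlinkJoin merges both store paths into one tree of symlinks
    symlinkJoin {
      name = "my-tools";
      paths = [ hello-script hello ];
    }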
</section>

View File

@ -2204,10 +2204,130 @@ addEnvHooks "$hostOffset" myBashFunction
</para> </para>
<para> <para>
Here are some packages that provide a setup hook. Since the mechanism is First, let's cover some setup hooks that are part of Nixpkgs
modular, this probably isn't an exhaustive list. Then again, since the default stdenv. This means that they are run for every package
mechanism is only to be used as a last resort, it might be. built using <function>stdenv.mkDerivation</function>. Some of
<variablelist> these are platform specific, so they may run on Linux but not
Darwin or vice-versa.
<variablelist>
<varlistentry>
<term>
<literal>move-docs.sh</literal>
</term>
<listitem>
<para>
This setup hook moves any installed documentation to the
<literal>/share</literal> subdirectory. This includes
the man, doc and info directories. This is needed for legacy
programs that do not know how to use the
<literal>share</literal> subdirectory.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>compress-man-pages.sh</literal>
</term>
<listitem>
<para>
This setup hook compresses any man pages that have been
installed. The compression is done using the gzip program. This
helps to reduce the installed size of packages.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>strip.sh</literal>
</term>
<listitem>
<para>
This runs the strip command on installed binaries and
libraries. This removes unnecessary information like debug
symbols when they are not needed. This also helps to reduce the
installed size of packages.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>patch-shebangs.sh</literal>
</term>
<listitem>
<para>
This setup hook patches installed scripts to use the full path
to the shebang interpreter. A shebang interpreter is the first
commented line of a script telling the operating system which
program will run the script (e.g. <literal>#!/bin/bash</literal>). In
Nix, we want an exact path to that interpreter to be used. This
often replaces <literal>/bin/sh</literal> with a path in the
Nix store.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>audit-tmpdir.sh</literal>
</term>
<listitem>
<para>
This verifies that no references are left from the installed
binaries to the directory used to build those binaries. This
ensures that the binaries do not need things outside the Nix
store. This is currently supported in Linux only.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>multiple-outputs.sh</literal>
</term>
<listitem>
<para>
This setup hook adds configure flags that tell packages to
install files into any one of the proper outputs listed in
<literal>outputs</literal>. This behavior can be turned off by setting
<literal>setOutputFlags</literal> to false in the derivation
environment. See <xref linkend="chap-multiple-output"/> for
more information.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>move-sbin.sh</literal>
</term>
<listitem>
<para>
This setup hook moves any binaries installed in the sbin
subdirectory into bin. In addition, a link is provided from
sbin to bin for compatibility.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>move-lib64.sh</literal>
</term>
<listitem>
<para>
This setup hook moves any libraries installed in the lib64
subdirectory into lib. In addition, a link is provided from
lib64 to lib for compatibility.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>set-source-date-epoch-to-latest.sh</literal>
</term>
<listitem>
<para>
This sets <literal>SOURCE_DATE_EPOCH</literal> to the
modification time of the most recent file.
</para>
</listitem>
</varlistentry>
<varlistentry> <varlistentry>
<term> <term>
Bintools Wrapper Bintools Wrapper
@ -2314,6 +2434,15 @@ addEnvHooks "$hostOffset" myBashFunction
</para> </para>
</listitem> </listitem>
</varlistentry> </varlistentry>
</variablelist>
</para>
<para>
Here are some more packages that provide a setup hook. Since the
list of hooks is extensible, this is not an exhaustive list. Then
again, since the mechanism is only to be used as a last resort, it
might cover most uses.
<variablelist>
<varlistentry> <varlistentry>
<term> <term>
Perl Perl

View File

@ -1,57 +1,21 @@
{ lib { lib, version }:
# we pass the kernel version here to keep a nice syntax `whenOlder "4.13"`
# kernelVersion, e.g., config.boot.kernelPackages.version
, version
, mkValuePreprocess ? null
}:
with lib; with lib;
rec { rec {
# Common patterns # Common patterns/legacy
when = cond: opt: if cond then opt else null; whenAtLeast = ver: mkIf (versionAtLeast version ver);
whenAtLeast = ver: when (versionAtLeast version ver); whenOlder = ver: mkIf (versionOlder version ver);
whenOlder = ver: when (versionOlder version ver); # range is (inclusive, exclusive)
whenBetween = verLow: verHigh: when (versionAtLeast version verLow && versionOlder version verHigh); whenBetween = verLow: verHigh: mkIf (versionAtLeast version verLow && versionOlder version verHigh);
# Keeping these around in case we decide to change this horrible implementation :) # Keeping these around in case we decide to change this horrible implementation :)
option = x: if x == null then null else "?${x}"; option = x:
yes = "y"; x // { optional = true; };
no = "n";
module = "m";
mkValue = val: yes = { tristate = "y"; };
let no = { tristate = "n"; };
isNumber = c: elem c ["0" "1" "2" "3" "4" "5" "6" "7" "8" "9"]; module = { tristate = "m"; };
in freeform = x: { freeform = x; };
if val == "" then "\"\""
else if val == yes || val == module || val == no then val
else if all isNumber (stringToCharacters val) then val
else if substring 0 2 val == "0x" then val
else val; # FIXME: fix quoting one day
# generate nix intermediate kernel config file of the form
#
# VIRTIO_MMIO m
# VIRTIO_BLK y
# VIRTIO_CONSOLE n
# NET_9P_VIRTIO? y
#
# Use mkValuePreprocess to preprocess option values, aka mark 'modules' as
# 'yes' or vice-versa
# Borrowed from copumpkin https://github.com/NixOS/nixpkgs/pull/12158
# returns a string, expr should be an attribute set
generateNixKConf = exprs: mkValuePreprocess:
let
mkConfigLine = key: rawval:
let
val = if builtins.isFunction mkValuePreprocess then mkValuePreprocess rawval else rawval;
in
if val == null
then ""
else if hasPrefix "?" val
then "${key}? ${mkValue (removePrefix "?" val)}\n"
else "${key} ${mkValue val}\n";
mkConf = cfg: concatStrings (mapAttrsToList mkConfigLine cfg);
in mkConf exprs;
} }
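Reading the right-hand (new) column above, the helpers now return structured values and module-system conditionals instead of strings. A rough usage sketch, where the import path and the option names are hypothetical and only the helper names come from the diff:

    { lib, version }:

    let
      kernelLib = import ./kernel-config-lib.nix { inherit lib version; };  # hypothetical path
    in
    with kernelLib;
    {
      VIRTIO_BLK    = yes;                           # { tristate = "y"; }
      VIRTIO_MMIO   = whenAtLeast "4.9" module;      # applied only on kernels >= 4.9
      NET_9P_VIRTIO = whenBetween "4.4" "4.20" yes;  # range is (inclusive, exclusive)
      CMDLINE       = freeform "console=ttyS0";      # non-tristate value
    }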

View File

@ -214,23 +214,25 @@ rec {
qux = [ "module.hidden=baz,value=bar" "module.hidden=fli,value=gne" ]; qux = [ "module.hidden=baz,value=bar" "module.hidden=fli,value=gne" ];
} }
*/ */
byName = attr: f: modules: foldl' (acc: module: byName = attr: f: modules:
foldl' (inner: name: foldl' (acc: module:
inner // { ${name} = (acc.${name} or []) ++ (f module module.${attr}.${name}); } acc // (mapAttrs (n: v:
) acc (attrNames module.${attr}) (acc.${n} or []) ++ f module v
) {} modules; ) module.${attr}
)
) {} modules;
# an attrset 'name' => list of submodules that declare name. # an attrset 'name' => list of submodules that declare name.
declsByName = byName "options" declsByName = byName "options" (module: option:
(module: option: [{ inherit (module) file; options = option; }]) [{ inherit (module) file; options = option; }]
options; ) options;
# an attrset 'name' => list of submodules that define name. # an attrset 'name' => list of submodules that define name.
defnsByName = byName "config" (module: value: defnsByName = byName "config" (module: value:
map (config: { inherit (module) file; inherit config; }) (pushDownProperties value) map (config: { inherit (module) file; inherit config; }) (pushDownProperties value)
) configs; ) configs;
# extract the definitions for each loc # extract the definitions for each loc
defnsByName' = byName "config" defnsByName' = byName "config" (module: value:
(module: value: [{ inherit (module) file; inherit value; }]) [{ inherit (module) file; inherit value; }]
configs; ) configs;
in in
(flip mapAttrs declsByName (name: decls: (flip mapAttrs declsByName (name: decls:
# We're descending into attribute name. # We're descending into attribute name.
@ -362,7 +364,6 @@ rec {
values = defs'''; values = defs''';
inherit (defs'') highestPrio; inherit (defs'') highestPrio;
}; };
defsFinal = defsFinal'.values; defsFinal = defsFinal'.values;
# Type-check the remaining definitions, and merge them. # Type-check the remaining definitions, and merge them.
@ -475,22 +476,8 @@ rec {
optionSet to options of type submodule. FIXME: remove optionSet to options of type submodule. FIXME: remove
eventually. */ eventually. */
fixupOptionType = loc: opt: fixupOptionType = loc: opt:
let if opt.type.getSubModules or null == null
options = opt.options or then opt // { type = opt.type or types.unspecified; }
(throw "Option `${showOption loc'}' has type optionSet but has no option attribute, in ${showFiles opt.declarations}.");
f = tp:
let optionSetIn = type: (tp.name == type) && (tp.functor.wrapped.name == "optionSet");
in
if tp.name == "option set" || tp.name == "submodule" then
throw "The option ${showOption loc} uses submodules without a wrapping type, in ${showFiles opt.declarations}."
else if optionSetIn "attrsOf" then types.attrsOf (types.submodule options)
else if optionSetIn "loaOf" then types.loaOf (types.submodule options)
else if optionSetIn "listOf" then types.listOf (types.submodule options)
else if optionSetIn "nullOr" then types.nullOr (types.submodule options)
else tp;
in
if opt.type.getSubModules or null == null
then opt // { type = f (opt.type or types.unspecified); }
else opt // { type = opt.type.substSubModules opt.options; options = []; }; else opt // { type = opt.type.substSubModules opt.options; options = []; };

View File

@ -48,8 +48,6 @@ rec {
visible ? null, visible ? null,
# Whether the option can be set only once # Whether the option can be set only once
readOnly ? null, readOnly ? null,
# Obsolete, used by types.optionSet.
options ? null
} @ attrs: } @ attrs:
attrs // { _type = "option"; }; attrs // { _type = "option"; };

View File

@ -58,6 +58,7 @@ rec {
"netbsd" = "NetBSD"; "netbsd" = "NetBSD";
"freebsd" = "FreeBSD"; "freebsd" = "FreeBSD";
"openbsd" = "OpenBSD"; "openbsd" = "OpenBSD";
"wasm" = "Wasm";
}.${final.parsed.kernel.name} or null; }.${final.parsed.kernel.name} or null;
# uname -p # uname -p

View File

@ -284,8 +284,7 @@ rec {
(mergeDefinitions (loc ++ [name]) elemType defs).optionalValue (mergeDefinitions (loc ++ [name]) elemType defs).optionalValue
) )
# Push down position info. # Push down position info.
(map (def: listToAttrs (mapAttrsToList (n: def': (map (def: mapAttrs (n: v: { inherit (def) file; value = v; }) def.value) defs)));
{ name = n; value = { inherit (def) file; value = def'; }; }) def.value)) defs)));
getSubOptions = prefix: elemType.getSubOptions (prefix ++ ["<name>"]); getSubOptions = prefix: elemType.getSubOptions (prefix ++ ["<name>"]);
getSubModules = elemType.getSubModules; getSubModules = elemType.getSubModules;
substSubModules = m: attrsOf (elemType.substSubModules m); substSubModules = m: attrsOf (elemType.substSubModules m);
@ -470,10 +469,7 @@ rec {
# Obsolete alternative to configOf. It takes its option # Obsolete alternative to configOf. It takes its option
# declarations from the options attribute of containing option # declarations from the options attribute of containing option
# declaration. # declaration.
optionSet = mkOptionType { optionSet = builtins.throw "types.optionSet is deprecated; use types.submodule instead" "optionSet";
name = builtins.trace "types.optionSet is deprecated; use types.submodule instead" "optionSet";
description = "option set";
};
# Augment the given type with an additional type check function. # Augment the given type with an additional type check function.
addCheck = elemType: check: elemType // { check = x: elemType.check x && check x; }; addCheck = elemType: check: elemType // { check = x: elemType.check x && check x; };

View File

@ -892,6 +892,11 @@
github = "cko"; github = "cko";
name = "Christine Koppelt"; name = "Christine Koppelt";
}; };
clacke = {
email = "claes.wallin@greatsinodevelopment.com";
github = "clacke";
name = "Claes Wallin";
};
cleverca22 = { cleverca22 = {
email = "cleverca22@gmail.com"; email = "cleverca22@gmail.com";
github = "cleverca22"; github = "cleverca22";
@ -1768,6 +1773,11 @@
github = "dguibert"; github = "dguibert";
name = "David Guibert"; name = "David Guibert";
}; };
groodt = {
email = "groodt@gmail.com";
github = "groodt";
name = "Greg Roodt";
};
guibou = { guibou = {
email = "guillaum.bouchard@gmail.com"; email = "guillaum.bouchard@gmail.com";
github = "guibou"; github = "guibou";
@ -3661,6 +3671,11 @@
github = "PsyanticY"; github = "PsyanticY";
name = "Psyanticy"; name = "Psyanticy";
}; };
ptival = {
email = "valentin.robert.42@gmail.com";
github = "Ptival";
name = "Valentin Robert";
};
puffnfresh = { puffnfresh = {
email = "brian@brianmckenna.org"; email = "brian@brianmckenna.org";
github = "puffnfresh"; github = "puffnfresh";

View File

@ -23,7 +23,7 @@ $ diskutil list
[..] [..]
$ diskutil unmountDisk diskN $ diskutil unmountDisk diskN
Unmount of all volumes on diskN was successful Unmount of all volumes on diskN was successful
$ sudo dd bs=1000000 if=nix.iso of=/dev/rdiskN $ sudo dd if=nix.iso of=/dev/rdiskN
</programlisting> </programlisting>
Using the 'raw' <command>rdiskN</command> device instead of Using the 'raw' <command>rdiskN</command> device instead of
<command>diskN</command> completes in minutes instead of hours. After <command>diskN</command> completes in minutes instead of hours. After

View File

@ -363,17 +363,29 @@
<para> <para>
The <literal>pam_unix</literal> account module is now loaded with its The <literal>pam_unix</literal> account module is now loaded with its
control field set to <literal>required</literal> instead of control field set to <literal>required</literal> instead of
<literal>sufficient</literal>, so that later pam account modules that <literal>sufficient</literal>, so that later PAM account modules that
might do more extensive checks are being executed. might do more extensive checks are being executed.
Previously, the whole account module verification was exited prematurely Previously, the whole account module verification was exited prematurely
in case a nss module provided the account name to in case a nss module provided the account name to
<literal>pam_unix</literal>. <literal>pam_unix</literal>.
The LDAP and SSSD NixOS modules already add their NSS modules when The LDAP and SSSD NixOS modules already add their NSS modules when
enabled. In case your setup breaks due to some later pam account module enabled. In case your setup breaks due to some later PAM account module
previously shadowed, or failing NSS lookups, please file a bug. You can previously shadowed, or failing NSS lookups, please file a bug. You can
get back the old behaviour by manually setting get back the old behaviour by manually setting
<literal><![CDATA[security.pam.services.<name?>.text]]></literal>. <literal><![CDATA[security.pam.services.<name?>.text]]></literal>.
</para> </para>
</listitem>
<listitem>
<para>
The <literal>pam_unix</literal> password module is now loaded with its
control field set to <literal>sufficient</literal> instead of
<literal>required</literal>, so that passwords managed only
by later PAM password modules can still be changed.
Previously, for example, changing an LDAP account's password through PAM
was not possible: the whole password module verification
was exited prematurely by <literal>pam_unix</literal>,
preventing <literal>pam_ldap</literal> from managing the password as it should.
</para>
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
@ -382,6 +394,22 @@
See the <literal>fish</literal> <link xlink:href="https://github.com/fish-shell/fish-shell/releases/tag/3.0.0">release notes</link> for more information. See the <literal>fish</literal> <link xlink:href="https://github.com/fish-shell/fish-shell/releases/tag/3.0.0">release notes</link> for more information.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
The ibus-table input method has had a change in config format, which
causes all previous settings to be lost. See
<link xlink:href="https://github.com/mike-fabian/ibus-table/commit/f9195f877c5212fef0dfa446acb328c45ba5852b">this commit message</link>
for details.
</para>
</listitem>
<listitem>
<para>
Support for NixOS module system type <literal>types.optionSet</literal> and
<literal>lib.mkOption</literal> argument <literal>options</literal> is removed.
Use <literal>types.submodule</literal> instead.
(<link xlink:href="https://github.com/NixOS/nixpkgs/pull/54637">#54637</link>)
</para>
</listitem>
</itemizedlist> </itemizedlist>
</section> </section>
@ -434,6 +462,12 @@
of maintainers. of maintainers.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
The httpd service now saves log files with a .log file extension by default for
easier integration with the logrotate service.
</para>
</listitem>
<listitem> <listitem>
<para> <para>
The owncloud server packages and httpd subservice module were removed The owncloud server packages and httpd subservice module were removed
@ -455,6 +489,20 @@
use <literal>nixos-rebuild boot; reboot</literal>. use <literal>nixos-rebuild boot; reboot</literal>.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
Flat volumes are now disabled by default in <literal>hardware.pulseaudio</literal>.
This has been done to prevent applications that are unaware of this feature from setting
their volumes to 100% on startup, potentially harming your audio hardware and your ears.
</para>
<note>
<para>
With this change, application-specific volumes are relative to the master volume, which can be
adjusted independently, whereas before they were absolute; in effect, the device volume was
scaled to the volume of the loudest application.
</para>
</note>
</listitem>
</itemizedlist> </itemizedlist>
</section> </section>
</section> </section>

View File

@ -38,6 +38,8 @@ let
bind_timelimit ${toString cfg.bind.timeLimit} bind_timelimit ${toString cfg.bind.timeLimit}
${optionalString (cfg.bind.distinguishedName != "") ${optionalString (cfg.bind.distinguishedName != "")
"binddn ${cfg.bind.distinguishedName}" } "binddn ${cfg.bind.distinguishedName}" }
${optionalString (cfg.daemon.rootpwmoddn != "")
"rootpwmoddn ${cfg.daemon.rootpwmoddn}" }
${optionalString (cfg.daemon.extraConfig != "") cfg.daemon.extraConfig } ${optionalString (cfg.daemon.extraConfig != "") cfg.daemon.extraConfig }
''; '';
}; };
@ -126,6 +128,26 @@ in
the end of the nslcd configuration file (nslcd.conf). the end of the nslcd configuration file (nslcd.conf).
'' ; '' ;
} ; } ;
rootpwmoddn = mkOption {
default = "";
example = "cn=admin,dc=example,dc=com";
type = types.str;
description = ''
The distinguished name to use to bind to the LDAP server
when the root user tries to modify a user's password.
'';
};
rootpwmodpw = mkOption {
default = "";
example = "/run/keys/nslcd.rootpwmodpw";
type = types.str;
description = ''
The path to a file containing the credentials with which
to bind to the LDAP server if the root user tries to change a user's password
'';
};
}; };
bind = { bind = {
@ -203,9 +225,11 @@ in
system.activationScripts = mkIf insertLdapPassword { system.activationScripts = mkIf insertLdapPassword {
ldap = stringAfter [ "etc" "groups" "users" ] '' ldap = stringAfter [ "etc" "groups" "users" ] ''
if test -f "${cfg.bind.password}" ; then if test -f "${cfg.bind.password}" ; then
echo "bindpw "$(cat ${cfg.bind.password})"" | cat ${ldapConfig.source} - > /etc/ldap.conf.bindpw umask 0077
mv -fT /etc/ldap.conf.bindpw /etc/ldap.conf conf="$(mktemp)"
chmod 600 /etc/ldap.conf printf 'bindpw %s\n' "$(cat ${cfg.bind.password})" |
cat ${ldapConfig.source} - >"$conf"
mv -fT "$conf" /etc/ldap.conf
fi fi
''; '';
}; };
@ -232,21 +256,31 @@ in
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
preStart = '' preStart = ''
mkdir -p /run/nslcd umask 0077
rm -f /run/nslcd/nslcd.pid; conf="$(mktemp)"
chown nslcd.nslcd /run/nslcd {
${optionalString (cfg.bind.distinguishedName != "") '' cat ${nslcdConfig.source}
if test -s "${cfg.bind.password}" ; then test -z '${cfg.bind.distinguishedName}' -o ! -f '${cfg.bind.password}' ||
ln -sfT "${cfg.bind.password}" /run/nslcd/bindpw printf 'bindpw %s\n' "$(cat '${cfg.bind.password}')"
fi test -z '${cfg.daemon.rootpwmoddn}' -o ! -f '${cfg.daemon.rootpwmodpw}' ||
''} printf 'rootpwmodpw %s\n' "$(cat '${cfg.daemon.rootpwmodpw}')"
} >"$conf"
mv -fT "$conf" /etc/nslcd.conf
''; '';
# NOTE: because one cannot pass a custom config path to `nslcd`
# (which is only able to use `/etc/nslcd.conf`)
# changes in `nslcdConfig` won't change `serviceConfig`,
# and thus won't restart `nslcd`.
# Therefore `restartTriggers` is used on `/etc/nslcd.conf`.
restartTriggers = [ nslcdConfig.source ];
serviceConfig = { serviceConfig = {
ExecStart = "${nss_pam_ldapd}/sbin/nslcd"; ExecStart = "${nss_pam_ldapd}/sbin/nslcd";
Type = "forking"; Type = "forking";
PIDFile = "/run/nslcd/nslcd.pid"; PIDFile = "/run/nslcd/nslcd.pid";
Restart = "always"; Restart = "always";
RuntimeDirectory = [ "nslcd" ];
}; };
}; };
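Given the two daemon options added above, a hedged configuration sketch (the users.ldap.daemon prefix is assumed from the surrounding module; the DN and key path are placeholders):

    {
      users.ldap.daemon = {
        enable = true;
        # DN used when root changes another user's password through PAM
        rootpwmoddn = "cn=admin,dc=example,dc=com";
        # file whose contents are appended to nslcd.conf as rootpwmodpw at start-up
        rootpwmodpw = "/run/keys/nslcd.rootpwmodpw";
      };
    }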

View File

@ -180,7 +180,7 @@ in {
type = types.attrsOf types.unspecified; type = types.attrsOf types.unspecified;
default = {}; default = {};
description = ''Config of the pulse daemon. See <literal>man pulse-daemon.conf</literal>.''; description = ''Config of the pulse daemon. See <literal>man pulse-daemon.conf</literal>.'';
example = literalExample ''{ flat-volumes = "no"; }''; example = literalExample ''{ realtime-scheduling = "yes"; }'';
}; };
}; };
@ -242,6 +242,9 @@ in {
source = writeText "libao.conf" "default_driver=pulse"; } source = writeText "libao.conf" "default_driver=pulse"; }
]; ];
# Disable flat volumes to enable relative ones
hardware.pulseaudio.daemon.config.flat-volumes = mkDefault "no";
# Allow PulseAudio to get realtime priority using rtkit. # Allow PulseAudio to get realtime priority using rtkit.
security.rtkit.enable = true; security.rtkit.enable = true;

View File

@ -635,9 +635,10 @@ $bootLoaderConfig
# services.xserver.desktopManager.plasma5.enable = true; # services.xserver.desktopManager.plasma5.enable = true;
# Define a user account. Don't forget to set a password with passwd. # Define a user account. Don't forget to set a password with passwd.
# users.users.guest = { # users.users.jane = {
# isNormalUser = true; # isNormalUser = true;
# uid = 1000; # uid = 1000;
# extraGroups = [ "wheel" ]; # Enable sudo for the user.
# }; # };
# This value determines the NixOS release with which your system is to be # This value determines the NixOS release with which your system is to be

View File

@ -684,6 +684,7 @@
./services/security/hologram-server.nix ./services/security/hologram-server.nix
./services/security/hologram-agent.nix ./services/security/hologram-agent.nix
./services/security/munge.nix ./services/security/munge.nix
./services/security/nginx-sso.nix
./services/security/oauth2_proxy.nix ./services/security/oauth2_proxy.nix
./services/security/oauth2_proxy_nginx.nix ./services/security/oauth2_proxy_nginx.nix
./services/security/physlock.nix ./services/security/physlock.nix

View File

@ -13,5 +13,5 @@ with lib;
documentation.enable = mkDefault false; documentation.enable = mkDefault false;
services.nixosManual.enable = mkDefault false; documentation.nixos.enable = mkDefault false;
} }

View File

@ -48,6 +48,23 @@ in
https://github.com/zsh-users/zsh-syntax-highlighting/blob/master/docs/highlighters/pattern.md https://github.com/zsh-users/zsh-syntax-highlighting/blob/master/docs/highlighters/pattern.md
''; '';
}; };
styles = mkOption {
default = {};
type = types.attrsOf types.string;
example = literalExample ''
{
"alias" = "fg=magenta,bold";
}
'';
description = ''
Specifies custom styles to be highlighted by zsh-syntax-highlighting.
Please refer to the docs for more information about the usage:
https://github.com/zsh-users/zsh-syntax-highlighting/blob/master/docs/highlighters/main.md
'';
};
}; };
}; };
@ -73,6 +90,11 @@ in
pattern: design: pattern: design:
"ZSH_HIGHLIGHT_PATTERNS+=('${pattern}' '${design}')" "ZSH_HIGHLIGHT_PATTERNS+=('${pattern}' '${design}')"
) cfg.patterns) ) cfg.patterns)
++ optionals (length(attrNames cfg.styles) > 0)
(mapAttrsToList (
styles: design:
"ZSH_HIGHLIGHT_STYLES[${styles}]='${design}'"
) cfg.styles)
); );
}; };
} }
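A usage sketch for the new styles option, assuming the module is exposed as programs.zsh.syntaxHighlighting like the existing patterns option (the values themselves are illustrative):

    {
      programs.zsh.syntaxHighlighting = {
        enable = true;
        styles = {
          # rendered as: ZSH_HIGHLIGHT_STYLES[alias]='fg=magenta,bold'
          "alias"   = "fg=magenta,bold";
          "comment" = "fg=cyan";
        };
      };
    }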

View File

@ -69,6 +69,9 @@ with lib;
(mkRemovedOptionModule [ "security" "setuidOwners" ] "Use security.wrappers instead") (mkRemovedOptionModule [ "security" "setuidOwners" ] "Use security.wrappers instead")
(mkRemovedOptionModule [ "security" "setuidPrograms" ] "Use security.wrappers instead") (mkRemovedOptionModule [ "security" "setuidPrograms" ] "Use security.wrappers instead")
# PAM
(mkRenamedOptionModule [ "security" "pam" "enableU2F" ] [ "security" "pam" "u2f" "enable" ])
(mkRemovedOptionModule [ "services" "rmilter" "bindInetSockets" ] "Use services.rmilter.bindSocket.* instead") (mkRemovedOptionModule [ "services" "rmilter" "bindInetSockets" ] "Use services.rmilter.bindSocket.* instead")
(mkRemovedOptionModule [ "services" "rmilter" "bindUnixSockets" ] "Use services.rmilter.bindSocket.* instead") (mkRemovedOptionModule [ "services" "rmilter" "bindUnixSockets" ] "Use services.rmilter.bindSocket.* instead")

View File

@ -37,12 +37,14 @@ let
}; };
u2fAuth = mkOption { u2fAuth = mkOption {
default = config.security.pam.enableU2F; default = config.security.pam.u2f.enable;
type = types.bool; type = types.bool;
description = '' description = ''
If set, users listed in If set, users listed in
<filename>~/.config/Yubico/u2f_keys</filename> are able to log in <filename>$XDG_CONFIG_HOME/Yubico/u2f_keys</filename> (or
with the associated U2F key. <filename>$HOME/.config/Yubico/u2f_keys</filename> if XDG variable is
not set) are able to log in with the associated U2F key. Path can be
changed using <option>security.pam.u2f.authFile</option> option.
''; '';
}; };
@ -320,8 +322,8 @@ let
"auth sufficient ${pkgs.pam_ssh_agent_auth}/libexec/pam_ssh_agent_auth.so file=~/.ssh/authorized_keys:~/.ssh/authorized_keys2:/etc/ssh/authorized_keys.d/%u"} "auth sufficient ${pkgs.pam_ssh_agent_auth}/libexec/pam_ssh_agent_auth.so file=~/.ssh/authorized_keys:~/.ssh/authorized_keys2:/etc/ssh/authorized_keys.d/%u"}
${optionalString cfg.fprintAuth ${optionalString cfg.fprintAuth
"auth sufficient ${pkgs.fprintd}/lib/security/pam_fprintd.so"} "auth sufficient ${pkgs.fprintd}/lib/security/pam_fprintd.so"}
${optionalString cfg.u2fAuth ${let u2f = config.security.pam.u2f; in optionalString cfg.u2fAuth
"auth sufficient ${pkgs.pam_u2f}/lib/security/pam_u2f.so"} "auth ${u2f.control} ${pkgs.pam_u2f}/lib/security/pam_u2f.so ${optionalString u2f.debug "debug"} ${optionalString (u2f.authFile != null) "authfile=${u2f.authFile}"} ${optionalString u2f.interactive "interactive"} ${optionalString u2f.cue "cue"}"}
${optionalString cfg.usbAuth ${optionalString cfg.usbAuth
"auth sufficient ${pkgs.pam_usb}/lib/security/pam_usb.so"} "auth sufficient ${pkgs.pam_usb}/lib/security/pam_usb.so"}
${let oath = config.security.pam.oath; in optionalString cfg.oathAuth ${let oath = config.security.pam.oath; in optionalString cfg.oathAuth
@ -368,7 +370,7 @@ let
auth required pam_deny.so auth required pam_deny.so
# Password management. # Password management.
password requisite pam_unix.so nullok sha512 password sufficient pam_unix.so nullok sha512
${optionalString config.security.pam.enableEcryptfs ${optionalString config.security.pam.enableEcryptfs
"password optional ${pkgs.ecryptfs}/lib/security/pam_ecryptfs.so"} "password optional ${pkgs.ecryptfs}/lib/security/pam_ecryptfs.so"}
${optionalString cfg.pamMount ${optionalString cfg.pamMount
@ -527,11 +529,96 @@ in
''; '';
}; };
security.pam.enableU2F = mkOption { security.pam.u2f = {
default = false; enable = mkOption {
description = '' default = false;
Enable the U2F PAM module. type = types.bool;
''; description = ''
Enables U2F PAM (<literal>pam-u2f</literal>) module.
If set, users listed in
<filename>$XDG_CONFIG_HOME/Yubico/u2f_keys</filename> (or
<filename>$HOME/.config/Yubico/u2f_keys</filename> if XDG variable is
not set) are able to log in with the associated U2F key. The path can
be changed using <option>security.pam.u2f.authFile</option> option.
File format is:
<literal>username:first_keyHandle,first_public_key: second_keyHandle,second_public_key</literal>
This file can be generated using <command>pamu2fcfg</command> command.
More information can be found <link
xlink:href="https://developers.yubico.com/pam-u2f/">here</link>.
'';
};
authFile = mkOption {
default = null;
type = with types; nullOr path;
description = ''
By default the <literal>pam-u2f</literal> module reads the keys from
<filename>$XDG_CONFIG_HOME/Yubico/u2f_keys</filename> (or
<filename>$HOME/.config/Yubico/u2f_keys</filename> if the XDG variable is
not set).
If you want to change the auth file location or centralize the database (for
example, use <filename>/etc/u2f-mappings</filename>), you can set this
option.
File format is:
<literal>username:first_keyHandle,first_public_key: second_keyHandle,second_public_key</literal>
This file can be generated using <command>pamu2fcfg</command> command.
More information can be found <link
xlink:href="https://developers.yubico.com/pam-u2f/">here</link>.
'';
};
control = mkOption {
default = "sufficient";
type = types.enum [ "required" "requisite" "sufficient" "optional" ];
description = ''
This option sets pam "control".
If you want to have multi factor authentication, use "required".
If you want to use U2F device instead of regular password, use "sufficient".
Read
<citerefentry>
<refentrytitle>pam.conf</refentrytitle>
<manvolnum>5</manvolnum>
</citerefentry>
for better understanding of this option.
'';
};
debug = mkOption {
default = false;
type = types.bool;
description = ''
Debug output to stderr.
'';
};
interactive = mkOption {
default = false;
type = types.bool;
description = ''
Set to prompt a message and wait before testing the presence of a U2F device.
Recommended if your device doesn't have a tactile trigger.
'';
};
cue = mkOption {
default = false;
type = types.bool;
description = ''
By default the <literal>pam-u2f</literal> module does not inform the user
that they need to use the U2F device; it just waits without a prompt.
If you set this option to <literal>true</literal>, the
<literal>cue</literal> option is added to the <literal>pam-u2f</literal>
module and a reminder message will be displayed.
'';
};
}; };
security.pam.enableEcryptfs = mkOption { security.pam.enableEcryptfs = mkOption {
@ -563,7 +650,7 @@ in
++ optionals config.krb5.enable [pam_krb5 pam_ccreds] ++ optionals config.krb5.enable [pam_krb5 pam_ccreds]
++ optionals config.security.pam.enableOTPW [ pkgs.otpw ] ++ optionals config.security.pam.enableOTPW [ pkgs.otpw ]
++ optionals config.security.pam.oath.enable [ pkgs.oathToolkit ] ++ optionals config.security.pam.oath.enable [ pkgs.oathToolkit ]
++ optionals config.security.pam.enableU2F [ pkgs.pam_u2f ]; ++ optionals config.security.pam.u2f.enable [ pkgs.pam_u2f ];
boot.supportedFilesystems = optionals config.security.pam.enableEcryptfs [ "ecryptfs" ]; boot.supportedFilesystems = optionals config.security.pam.enableEcryptfs [ "ecryptfs" ];
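A hedged NixOS configuration sketch using the new security.pam.u2f options introduced above (the values themselves are illustrative):

    {
      security.pam.u2f = {
        enable = true;
        # "required" makes the key a second factor in addition to the password;
        # the default "sufficient" lets the key replace the password instead.
        control = "required";
        # central key database generated with pamu2fcfg; leave at null to use
        # the per-user $XDG_CONFIG_HOME/Yubico/u2f_keys
        authFile = "/etc/u2f-mappings";
        cue = true;           # print a reminder to touch the device
        interactive = false;
      };
    }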

View File

@ -249,6 +249,7 @@ in
after = [ "network.target" ]; after = [ "network.target" ];
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
restartTriggers = [ config.environment.etc."my.cnf".source ];
unitConfig.RequiresMountsFor = "${cfg.dataDir}"; unitConfig.RequiresMountsFor = "${cfg.dataDir}";

View File

@ -15,6 +15,19 @@ let
mkName = p: "pki/fwupd/${baseNameOf (toString p)}"; mkName = p: "pki/fwupd/${baseNameOf (toString p)}";
mkEtcFile = p: nameValuePair (mkName p) { source = p; }; mkEtcFile = p: nameValuePair (mkName p) { source = p; };
in listToAttrs (map mkEtcFile cfg.extraTrustedKeys); in listToAttrs (map mkEtcFile cfg.extraTrustedKeys);
# We cannot include the file in $out and rely on filesInstalledToEtc
# to install it because it would create a cyclic dependency between
# the outputs. We also need to enable the remote,
# which should not be done by default.
testRemote = if cfg.enableTestRemote then {
"fwupd/remotes.d/fwupd-tests.conf" = {
source = pkgs.runCommand "fwupd-tests-enabled.conf" {} ''
sed "s,^Enabled=false,Enabled=true," \
"${pkgs.fwupd.installedTests}/etc/fwupd/remotes.d/fwupd-tests.conf" > "$out"
'';
};
} else {};
in { in {
###### interface ###### interface
@ -40,7 +53,7 @@ in {
blacklistPlugins = mkOption { blacklistPlugins = mkOption {
type = types.listOf types.string; type = types.listOf types.string;
default = []; default = [ "test" ];
example = [ "udev" ]; example = [ "udev" ];
description = '' description = ''
Allow blacklisting specific plugins Allow blacklisting specific plugins
@ -55,6 +68,15 @@ in {
Installing a public key allows firmware signed with a matching private key to be recognized as trusted, which may require less authentication to install than for untrusted files. By default trusted firmware can be upgraded (but not downgraded) without the user or administrator password. Only very few keys are installed by default. Installing a public key allows firmware signed with a matching private key to be recognized as trusted, which may require less authentication to install than for untrusted files. By default trusted firmware can be upgraded (but not downgraded) without the user or administrator password. Only very few keys are installed by default.
''; '';
}; };
enableTestRemote = mkOption {
type = types.bool;
default = false;
description = ''
Whether to enable test remote. This is used by
<link xlink:href="https://github.com/hughsie/fwupd/blob/master/data/installed-tests/README.md">installed tests</link>.
'';
};
}; };
}; };
@ -78,7 +100,7 @@ in {
''; '';
}; };
} // originalEtc // extraTrustedKeys; } // originalEtc // extraTrustedKeys // testRemote;
services.dbus.packages = [ pkgs.fwupd ]; services.dbus.packages = [ pkgs.fwupd ];

View File

@ -292,6 +292,7 @@ in
# execute redmine required commands prior to starting the application # execute redmine required commands prior to starting the application
# NOTE: su required in case using mysql socket authentication # NOTE: su required in case using mysql socket authentication
/run/wrappers/bin/su -s ${pkgs.bash}/bin/bash -m -l redmine -c '${bundle} exec rake db:migrate' /run/wrappers/bin/su -s ${pkgs.bash}/bin/bash -m -l redmine -c '${bundle} exec rake db:migrate'
/run/wrappers/bin/su -s ${pkgs.bash}/bin/bash -m -l redmine -c '${bundle} exec rake redmine:plugins:migrate'
/run/wrappers/bin/su -s ${pkgs.bash}/bin/bash -m -l redmine -c '${bundle} exec rake redmine:load_default_data' /run/wrappers/bin/su -s ${pkgs.bash}/bin/bash -m -l redmine -c '${bundle} exec rake redmine:load_default_data'

View File

@ -10,9 +10,14 @@ let
ln -s /run/wrappers/bin/apps.plugin $out/libexec/netdata/plugins.d/apps.plugin ln -s /run/wrappers/bin/apps.plugin $out/libexec/netdata/plugins.d/apps.plugin
''; '';
plugins = [
"${pkgs.netdata}/libexec/netdata/plugins.d"
"${wrappedPlugins}/libexec/netdata/plugins.d"
] ++ cfg.extraPluginPaths;
localConfig = { localConfig = {
global = { global = {
"plugins directory" = "${pkgs.netdata}/libexec/netdata/plugins.d ${wrappedPlugins}/libexec/netdata/plugins.d"; "plugins directory" = concatStringsSep " " plugins;
}; };
web = { web = {
"web files owner" = "root"; "web files owner" = "root";
@ -78,6 +83,24 @@ in {
}; };
}; };
extraPluginPaths = mkOption {
type = types.listOf types.path;
default = [ ];
example = literalExample ''
[ "/path/to/plugins.d" ]
'';
description = ''
Extra paths to add to the netdata global "plugins directory"
option. Useful for when you want to include your own
collection scripts.
</para><para>
Details about writing a custom netdata plugin are available at:
<link xlink:href="https://docs.netdata.cloud/collectors/plugins.d/"/>
</para><para>
Cannot be combined with configText.
'';
};
config = mkOption { config = mkOption {
type = types.attrsOf types.attrs; type = types.attrsOf types.attrs;
default = {}; default = {};

View File

@ -13,7 +13,7 @@ let
overrides = ${cfg.privateConfig} overrides = ${cfg.privateConfig}
[server:main] [server:main]
use = egg:Paste#http use = egg:gunicorn
host = ${cfg.listen.address} host = ${cfg.listen.address}
port = ${toString cfg.listen.port} port = ${toString cfg.listen.port}
@ -30,6 +30,8 @@ let
audiences = ${removeSuffix "/" cfg.publicUrl} audiences = ${removeSuffix "/" cfg.publicUrl}
''; '';
user = "syncserver";
group = "syncserver";
in in
{ {
@ -126,15 +128,14 @@ in
config = mkIf cfg.enable { config = mkIf cfg.enable {
systemd.services.syncserver = let systemd.services.syncserver = {
syncServerEnv = pkgs.python.withPackages(ps: with ps; [ syncserver pasteScript requests ]);
user = "syncserver";
group = "syncserver";
in {
after = [ "network.target" ]; after = [ "network.target" ];
description = "Firefox Sync Server"; description = "Firefox Sync Server";
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
path = [ pkgs.coreutils syncServerEnv ]; path = [
pkgs.coreutils
(pkgs.python.withPackages (ps: [ pkgs.syncserver ps.gunicorn ]))
];
serviceConfig = { serviceConfig = {
User = user; User = user;
@ -166,14 +167,17 @@ in
chown ${user}:${group} ${defaultDbLocation} chown ${user}:${group} ${defaultDbLocation}
fi fi
''; '';
serviceConfig.ExecStart = "${syncServerEnv}/bin/paster serve ${syncServerIni}";
script = ''
gunicorn --paste ${syncServerIni}
'';
}; };
users.users.syncserver = { users.users.${user} = {
group = "syncserver"; inherit group;
isSystemUser = true; isSystemUser = true;
}; };
users.groups.syncserver = {}; users.groups.${group} = {};
}; };
} }

View File

@ -142,7 +142,6 @@ in
description = "Collection of named nylon instances"; description = "Collection of named nylon instances";
type = with types; loaOf (submodule nylonOpts); type = with types; loaOf (submodule nylonOpts);
internal = true; internal = true;
options = [ nylonOpts ];
}; };
}; };

View File

@ -11,7 +11,7 @@ let
userOptions = { userOptions = {
openssh.authorizedKeys = { options.openssh.authorizedKeys = {
keys = mkOption { keys = mkOption {
type = types.listOf types.str; type = types.listOf types.str;
default = []; default = [];
@ -320,7 +320,7 @@ in
}; };
users.users = mkOption { users.users = mkOption {
options = [ userOptions ]; type = with types; loaOf (submodule userOptions);
}; };
}; };

View File

@ -184,4 +184,5 @@ in
}; };
meta.maintainers = with lib.maintainers; [ erictapen ];
} }

View File

@ -1,4 +1,4 @@
{ config, lib, pkgs, ... }: { config, lib, pkgs, utils, ... }:
with lib; with lib;
@ -193,7 +193,7 @@ in {
# FIXME: start a separate wpa_supplicant instance per interface. # FIXME: start a separate wpa_supplicant instance per interface.
systemd.services.wpa_supplicant = let systemd.services.wpa_supplicant = let
ifaces = cfg.interfaces; ifaces = cfg.interfaces;
deviceUnit = interface: [ "sys-subsystem-net-devices-${interface}.device" ]; deviceUnit = interface: [ "sys-subsystem-net-devices-${utils.escapeSystemdPath interface}.device" ];
in { in {
description = "WPA Supplicant"; description = "WPA Supplicant";

View File

@ -30,13 +30,20 @@ let
preStart = '' preStart = ''
${concatStringsSep " \\\n" (["mkdir -p"] ++ map escapeShellArg specPaths)} ${concatStringsSep " \\\n" (["mkdir -p"] ++ map escapeShellArg specPaths)}
${pkgs.certmgr}/bin/certmgr -f ${certmgrYaml} check ${cfg.package}/bin/certmgr -f ${certmgrYaml} check
''; '';
in in
{ {
options.services.certmgr = { options.services.certmgr = {
enable = mkEnableOption "certmgr"; enable = mkEnableOption "certmgr";
package = mkOption {
type = types.package;
default = pkgs.certmgr;
defaultText = "pkgs.certmgr";
description = "Which certmgr package to use in the service.";
};
defaultRemote = mkOption { defaultRemote = mkOption {
type = types.str; type = types.str;
default = "127.0.0.1:8888"; default = "127.0.0.1:8888";
@ -187,7 +194,7 @@ in
serviceConfig = { serviceConfig = {
Restart = "always"; Restart = "always";
RestartSec = "10s"; RestartSec = "10s";
ExecStart = "${pkgs.certmgr}/bin/certmgr -f ${certmgrYaml}"; ExecStart = "${cfg.package}/bin/certmgr -f ${certmgrYaml}";
}; };
}; };
}; };

View File

@ -50,7 +50,7 @@ in
path = [ pkgs.munge pkgs.coreutils ]; path = [ pkgs.munge pkgs.coreutils ];
preStart = '' preStart = ''
chmod 0700 ${cfg.password} chmod 0400 ${cfg.password}
mkdir -p /var/lib/munge -m 0711 mkdir -p /var/lib/munge -m 0711
chown -R munge:munge /var/lib/munge chown -R munge:munge /var/lib/munge
mkdir -p /run/munge -m 0755 mkdir -p /run/munge -m 0755

View File

@ -0,0 +1,58 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.nginx.sso;
pkg = getBin pkgs.nginx-sso;
configYml = pkgs.writeText "nginx-sso.yml" (builtins.toJSON cfg.configuration);
in {
options.services.nginx.sso = {
enable = mkEnableOption "nginx-sso service";
configuration = mkOption {
type = types.attrsOf types.unspecified;
default = {};
example = literalExample ''
{
listen = { addr = "127.0.0.1"; port = 8080; };
providers.token.tokens = {
myuser = "MyToken";
};
acl = {
rule_sets = [
{
rules = [ { field = "x-application"; equals = "MyApp"; } ];
allow = [ "myuser" ];
}
];
};
}
'';
description = ''
nginx-sso configuration
(<link xlink:href="https://github.com/Luzifer/nginx-sso/wiki/Main-Configuration">documentation</link>)
as a Nix attribute set.
'';
};
};
config = mkIf cfg.enable {
systemd.services.nginx-sso = {
description = "Nginx SSO Backend";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
ExecStart = ''
${pkg}/bin/nginx-sso \
--config ${configYml} \
--frontend-dir ${pkg}/share/frontend
'';
Restart = "always";
DynamicUser = true;
};
};
};
}
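A usage sketch for the new module, mirroring the literalExample embedded in the configuration option above:

    {
      services.nginx.sso = {
        enable = true;
        configuration = {
          listen = { addr = "127.0.0.1"; port = 8080; };
          providers.token.tokens.myuser = "MyToken";
          acl.rule_sets = [
            {
              rules = [ { field = "x-application"; equals = "MyApp"; } ];
              allow = [ "myuser" ];
            }
          ];
        };
      };
    }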

View File

@ -5,6 +5,9 @@ with lib;
let let
cfg = config.services.sks; cfg = config.services.sks;
sksPkg = cfg.package; sksPkg = cfg.package;
dbConfig = pkgs.writeText "DB_CONFIG" ''
${cfg.extraDbConfig}
'';
in { in {
meta.maintainers = with maintainers; [ primeos calbrecht jcumming ]; meta.maintainers = with maintainers; [ primeos calbrecht jcumming ];
@ -39,6 +42,20 @@ in {
''; '';
}; };
extraDbConfig = mkOption {
type = types.str;
default = "";
description = ''
Set contents of the files "KDB/DB_CONFIG" and "PTree/DB_CONFIG" within
the ''${dataDir} directory. This is used to configure options for the
database for the sks key server.
Documentation of available options is available in the file named
"sampleConfig/DB_CONFIG" in the following repository:
https://bitbucket.org/skskeyserver/sks-keyserver/src
'';
};
hkpAddress = mkOption { hkpAddress = mkOption {
default = [ "127.0.0.1" "::1" ]; default = [ "127.0.0.1" "::1" ];
type = types.listOf types.str; type = types.listOf types.str;
@ -99,6 +116,17 @@ in {
${lib.optionalString (cfg.webroot != null) ${lib.optionalString (cfg.webroot != null)
"ln -sfT \"${cfg.webroot}\" web"} "ln -sfT \"${cfg.webroot}\" web"}
mkdir -p dump mkdir -p dump
# Check that both database configs are symlinks before overwriting them
if [ -e KDB/DB_CONFIG ] && [ ! -L KDB/DB_CONFIG ]; then
echo "KDB/DB_CONFIG exists but is not a symlink." >&2
exit 1
fi
if [ -e PTree/DB_CONFIG ] && [ ! -L PTree/DB_CONFIG ]; then
echo "PTree/DB_CONFIG exists but is not a symlink." >&2
exit 1
fi
ln -sf ${dbConfig} KDB/DB_CONFIG
ln -sf ${dbConfig} PTree/DB_CONFIG
${sksPkg}/bin/sks build dump/*.gpg -n 10 -cache 100 || true #*/ ${sksPkg}/bin/sks build dump/*.gpg -n 10 -cache 100 || true #*/
${sksPkg}/bin/sks cleandb || true ${sksPkg}/bin/sks cleandb || true
${sksPkg}/bin/sks pbuild -cache 20 -ptree_cache 70 || true ${sksPkg}/bin/sks pbuild -cache 20 -ptree_cache 70 || true
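Since extraDbConfig is written verbatim into both DB_CONFIG files, standard Berkeley DB tuning directives can be dropped in directly; a sketch with illustrative values (see sampleConfig/DB_CONFIG upstream for sensible defaults):

    services.sks = {
      enable = true;
      extraDbConfig = ''
        set_mp_mmapsize   268435456
        set_cachesize     0 134217728 1
        set_flags         DB_LOG_AUTOREMOVE
      '';
    };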

View File

@ -4,6 +4,7 @@ with lib;
let let
cfg = config.services.sshguard; cfg = config.services.sshguard;
in { in {
###### interface ###### interface
@ -77,65 +78,65 @@ in {
Systemd services sshguard should receive logs of. Systemd services sshguard should receive logs of.
''; '';
}; };
}; };
}; };
###### implementation ###### implementation
config = mkIf cfg.enable { config = mkIf cfg.enable {
environment.systemPackages = [ pkgs.sshguard pkgs.iptables pkgs.ipset ];
environment.etc."sshguard.conf".text = let environment.etc."sshguard.conf".text = let
list_services = ( name: "-t ${name} "); args = lib.concatStringsSep " " ([
in '' "-afb"
BACKEND="${pkgs.sshguard}/libexec/sshg-fw-ipset" "-p info"
LOGREADER="LANG=C ${pkgs.systemd}/bin/journalctl -afb -p info -n1 ${toString (map list_services cfg.services)} -o cat" "-o cat"
"-n1"
] ++ (map (name: "-t ${escapeShellArg name}") cfg.services));
in ''
BACKEND="${pkgs.sshguard}/libexec/sshg-fw-ipset"
LOGREADER="LANG=C ${pkgs.systemd}/bin/journalctl ${args}"
'';
systemd.services.sshguard = {
description = "SSHGuard brute-force attacks protection system";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
partOf = optional config.networking.firewall.enable "firewall.service";
path = with pkgs; [ iptables ipset iproute systemd ];
postStart = ''
${pkgs.ipset}/bin/ipset -quiet create -exist sshguard4 hash:ip family inet
${pkgs.ipset}/bin/ipset -quiet create -exist sshguard6 hash:ip family inet6
${pkgs.iptables}/bin/iptables -I INPUT -m set --match-set sshguard4 src -j DROP
${pkgs.iptables}/bin/ip6tables -I INPUT -m set --match-set sshguard6 src -j DROP
''; '';
systemd.services.sshguard = preStop = ''
{ description = "SSHGuard brute-force attacks protection system"; ${pkgs.iptables}/bin/iptables -D INPUT -m set --match-set sshguard4 src -j DROP
${pkgs.iptables}/bin/ip6tables -D INPUT -m set --match-set sshguard6 src -j DROP
'';
wantedBy = [ "multi-user.target" ]; unitConfig.Documentation = "man:sshguard(8)";
after = [ "network.target" ];
partOf = optional config.networking.firewall.enable "firewall.service";
path = [ pkgs.iptables pkgs.ipset pkgs.iproute pkgs.systemd ]; serviceConfig = {
Type = "simple";
postStart = '' ExecStart = let
mkdir -p /var/lib/sshguard args = lib.concatStringsSep " " ([
${pkgs.ipset}/bin/ipset -quiet create -exist sshguard4 hash:ip family inet "-a ${toString cfg.attack_threshold}"
${pkgs.ipset}/bin/ipset -quiet create -exist sshguard6 hash:ip family inet6 "-p ${toString cfg.blocktime}"
${pkgs.iptables}/bin/iptables -I INPUT -m set --match-set sshguard4 src -j DROP "-s ${toString cfg.detection_time}"
${pkgs.iptables}/bin/ip6tables -I INPUT -m set --match-set sshguard6 src -j DROP (optionalString (cfg.blacklist_threshold != null) "-b ${toString cfg.blacklist_threshold}:${cfg.blacklist_file}")
''; ] ++ (map (name: "-w ${escapeShellArg name}") cfg.whitelist));
in "${pkgs.sshguard}/bin/sshguard ${args}";
preStop = '' Restart = "always";
${pkgs.iptables}/bin/iptables -D INPUT -m set --match-set sshguard4 src -j DROP ProtectSystem = "strict";
${pkgs.iptables}/bin/ip6tables -D INPUT -m set --match-set sshguard6 src -j DROP ProtectHome = "tmpfs";
''; RuntimeDirectory = "sshguard";
StateDirectory = "sshguard";
unitConfig.Documentation = "man:sshguard(8)"; CapabilityBoundingSet = "CAP_NET_ADMIN CAP_NET_RAW";
serviceConfig = {
Type = "simple";
ExecStart = let
list_whitelist = ( name: "-w ${name} ");
in ''
${pkgs.sshguard}/bin/sshguard -a ${toString cfg.attack_threshold} ${optionalString (cfg.blacklist_threshold != null) "-b ${toString cfg.blacklist_threshold}:${cfg.blacklist_file} "}-i /run/sshguard/sshguard.pid -p ${toString cfg.blocktime} -s ${toString cfg.detection_time} ${toString (map list_whitelist cfg.whitelist)}
'';
PIDFile = "/run/sshguard/sshguard.pid";
Restart = "always";
ReadOnlyDirectories = "/";
ReadWriteDirectories = "/run/sshguard /var/lib/sshguard";
RuntimeDirectory = "sshguard";
StateDirectory = "sshguard";
CapabilityBoundingSet = "CAP_NET_ADMIN CAP_NET_RAW";
};
}; };
};
}; };
} }
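After this refactor the services and whitelist lists are shell-escaped and folded into the journalctl and sshguard command lines respectively. A configuration exercising those paths could look like this sketch (values are illustrative):

    services.sshguard = {
      enable = true;
      services  = [ "sshd" ];            # ends up as `-t 'sshd'` in LOGREADER
      whitelist = [ "192.168.0.0/24" ];  # ends up as `-w '192.168.0.0/24'` on the sshguard command line
      attack_threshold = 30;
      blocktime = 120;
    };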

View File

@ -143,6 +143,9 @@ in
${getLib pkgs.lz4}/lib/liblz4*.so* mr, ${getLib pkgs.lz4}/lib/liblz4*.so* mr,
${getLib pkgs.libkrb5}/lib/lib*.so* mr, ${getLib pkgs.libkrb5}/lib/lib*.so* mr,
${getLib pkgs.keyutils}/lib/libkeyutils*.so* mr, ${getLib pkgs.keyutils}/lib/libkeyutils*.so* mr,
${getLib pkgs.utillinuxMinimal.out}/lib/libblkid.so.* mr,
${getLib pkgs.utillinuxMinimal.out}/lib/libmount.so.* mr,
${getLib pkgs.utillinuxMinimal.out}/lib/libuuid.so.* mr,
@{PROC}/sys/kernel/random/uuid r, @{PROC}/sys/kernel/random/uuid r,
@{PROC}/sys/vm/overcommit_memory r, @{PROC}/sys/vm/overcommit_memory r,

View File

@ -151,7 +151,7 @@ let
loggingConf = (if mainCfg.logFormat != "none" then '' loggingConf = (if mainCfg.logFormat != "none" then ''
ErrorLog ${mainCfg.logDir}/error_log ErrorLog ${mainCfg.logDir}/error.log
LogLevel notice LogLevel notice
@ -160,7 +160,7 @@ let
LogFormat "%{Referer}i -> %U" referer LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent LogFormat "%{User-agent}i" agent
CustomLog ${mainCfg.logDir}/access_log ${mainCfg.logFormat} CustomLog ${mainCfg.logDir}/access.log ${mainCfg.logFormat}
'' else '' '' else ''
ErrorLog /dev/null ErrorLog /dev/null
''); '');
@ -261,8 +261,8 @@ let
'' else ""} '' else ""}
${if !isMainServer && mainCfg.logPerVirtualHost then '' ${if !isMainServer && mainCfg.logPerVirtualHost then ''
ErrorLog ${mainCfg.logDir}/error_log-${cfg.hostName} ErrorLog ${mainCfg.logDir}/error-${cfg.hostName}.log
CustomLog ${mainCfg.logDir}/access_log-${cfg.hostName} ${cfg.logFormat} CustomLog ${mainCfg.logDir}/access-${cfg.hostName}.log ${cfg.logFormat}
'' else ""} '' else ""}
${optionalString (robotsTxt != "") '' ${optionalString (robotsTxt != "") ''

View File

@ -0,0 +1,137 @@
{ lib, config, ... }:
with lib;
let
findWinner = candidates: winner:
any (x: x == winner) candidates;
# winners is an ordered list where the first item wins over the second, etc.
mergeAnswer = winners: locs: defs:
let
values = map (x: x.value) defs;
inter = intersectLists values winners;
winner = head winners;
in
if defs == [] then abort "This case should never happen."
else if winner == [] then abort "Provide a valid list of winners"
else if inter == [] then mergeOneOption locs defs
else if findWinner values winner then
winner
else
mergeAnswer (tail winners) locs defs;
mergeFalseByDefault = locs: defs:
if defs == [] then abort "This case should never happen."
else if any (x: x == false) defs then false
else true;
kernelItem = types.submodule {
options = {
tristate = mkOption {
type = types.enum [ "y" "m" "n" null ] // {
merge = mergeAnswer [ "y" "m" "n" ];
};
default = null;
internal = true;
visible = true;
description = ''
Use this field for tristate kernel options expecting a "y" or "m" or "n".
'';
};
freeform = mkOption {
type = types.nullOr types.str // {
merge = mergeEqualOption;
};
default = null;
example = ''MMC_BLOCK_MINORS.freeform = "32";'';
description = ''
Freeform description of a kernel configuration item value.
'';
};
optional = mkOption {
type = types.bool // { merge = mergeFalseByDefault; };
default = false;
description = ''
Whether the option should generate a failure when it is unused.
'';
};
};
};
mkValue = with lib; val:
let
isNumber = c: elem c ["0" "1" "2" "3" "4" "5" "6" "7" "8" "9"];
in
if (val == "") then "\"\""
else if val == "y" || val == "m" || val == "n" then val
else if all isNumber (stringToCharacters val) then val
else if substring 0 2 val == "0x" then val
else val; # FIXME: fix quoting one day
# generate nix intermediate kernel config file of the form
#
# VIRTIO_MMIO m
# VIRTIO_BLK y
# VIRTIO_CONSOLE n
# NET_9P_VIRTIO? y
#
# Borrowed from copumpkin https://github.com/NixOS/nixpkgs/pull/12158
# returns a string, expr should be an attribute set
# Use mkValuePreprocess to preprocess option values, aka mark 'modules' as 'yes' or vice-versa
# use the identity if you don't want to override the configured values
generateNixKConf = exprs:
let
mkConfigLine = key: item:
let
val = if item.freeform != null then item.freeform else item.tristate;
in
if val == null
then ""
else if (item.optional)
then "${key}? ${mkValue val}\n"
else "${key} ${mkValue val}\n";
mkConf = cfg: concatStrings (mapAttrsToList mkConfigLine cfg);
in mkConf exprs;
in
{
options = {
intermediateNixConfig = mkOption {
readOnly = true;
type = types.lines;
example = ''
USB? y
DEBUG n
'';
description = ''
The result of converting the structured kernel configuration in settings
to an intermediate string that can be parsed by generate-config.pl to
answer the questions asked by the kernel's `make defconfig`.
'';
};
settings = mkOption {
type = types.attrsOf kernelItem;
example = literalExample '' with lib.kernel; {
"9P_NET" = yes;
USB = optional yes;
MMC_BLOCK_MINORS = freeform "32";
}'';
description = ''
Structured kernel configuration.
'';
};
};
config = {
intermediateNixConfig = generateNixKConf config.settings;
};
}
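To make the mapping concrete: a settings value such as the sketch below, written with the raw submodule attributes rather than the lib.kernel helpers from the example, is rendered by generateNixKConf into the intermediate format documented above.

    {
      VIRTIO_BLK       = { tristate = "y"; };                   # -> VIRTIO_BLK y
      VIRTIO_MMIO      = { tristate = "m"; };                   # -> VIRTIO_MMIO m
      NET_9P_VIRTIO    = { tristate = "y"; optional = true; };  # -> NET_9P_VIRTIO? y
      MMC_BLOCK_MINORS = { freeform = "32"; };                  # -> MMC_BLOCK_MINORS 32
    }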

View File

@ -525,16 +525,18 @@ in
}; };
fileSystems = mkOption { fileSystems = mkOption {
options.neededForBoot = mkOption { type = with lib.types; loaOf (submodule {
default = false; options.neededForBoot = mkOption {
type = types.bool; default = false;
description = '' type = types.bool;
If set, this file system will be mounted in the initial description = ''
ramdisk. By default, this applies to the root file system If set, this file system will be mounted in the initial
and to the file system containing ramdisk. By default, this applies to the root file system
<filename>/nix/store</filename>. and to the file system containing
''; <filename>/nix/store</filename>.
}; '';
};
});
}; };
}; };
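With fileSystems now typed as loaOf (submodule …), the per-filesystem neededForBoot flag keeps its old meaning; a sketch of a mount that opts in (device and mount point are placeholders):

    fileSystems."/var/lib/secrets" = {
      device = "/dev/disk/by-label/secrets";  # placeholder device
      fsType = "ext4";
      neededForBoot = true;                   # mount it already in the initial ramdisk
    };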

View File

@ -210,6 +210,15 @@ in rec {
''; '';
}; };
startLimitIntervalSec = mkOption {
type = types.int;
description = ''
Configure unit start rate limiting. Units which are started
more than burst times within this time interval are
not permitted to start any more.
'';
};
}; };
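A unit using the new option might be declared as in the sketch below; the burst counterpart has no dedicated option here, so it is assumed to go through unitConfig (the service name is a placeholder):

    systemd.services.my-flaky-daemon = {
      startLimitIntervalSec = 60;       # emitted as StartLimitIntervalSec= in the unit
      unitConfig.StartLimitBurst = 3;   # assumption: set manually, no dedicated option
      serviceConfig.Restart = "on-failure";
    };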

View File

@ -193,7 +193,7 @@ let
let mkScriptName = s: "unit-script-" + (replaceChars [ "\\" "@" ] [ "-" "_" ] (shellEscape s) ); let mkScriptName = s: "unit-script-" + (replaceChars [ "\\" "@" ] [ "-" "_" ] (shellEscape s) );
in pkgs.writeTextFile { name = mkScriptName name; executable = true; inherit text; }; in pkgs.writeTextFile { name = mkScriptName name; executable = true; inherit text; };
unitConfig = { config, ... }: { unitConfig = { config, options, ... }: {
config = { config = {
unitConfig = unitConfig =
optionalAttrs (config.requires != []) optionalAttrs (config.requires != [])
@ -219,7 +219,9 @@ let
// optionalAttrs (config.documentation != []) { // optionalAttrs (config.documentation != []) {
Documentation = toString config.documentation; } Documentation = toString config.documentation; }
// optionalAttrs (config.onFailure != []) { // optionalAttrs (config.onFailure != []) {
OnFailure = toString config.onFailure; OnFailure = toString config.onFailure; }
// optionalAttrs (options.startLimitIntervalSec.isDefined) {
StartLimitIntervalSec = toString config.startLimitIntervalSec;
}; };
}; };
}; };

View File

@ -12,28 +12,28 @@ let
encryptedFSOptions = { encryptedFSOptions = {
encrypted = { options.encrypted = {
enable = mkOption { enable = mkOption {
default = false; default = false;
type = types.bool; type = types.bool;
description = "The block device is backed by an encrypted one, adds this device as a initrd luks entry."; description = "The block device is backed by an encrypted one, adds this device as a initrd luks entry.";
}; };
blkDev = mkOption { options.blkDev = mkOption {
default = null; default = null;
example = "/dev/sda1"; example = "/dev/sda1";
type = types.nullOr types.str; type = types.nullOr types.str;
description = "Location of the backing encrypted device."; description = "Location of the backing encrypted device.";
}; };
label = mkOption { options.label = mkOption {
default = null; default = null;
example = "rootfs"; example = "rootfs";
type = types.nullOr types.str; type = types.nullOr types.str;
description = "Label of the unlocked encrypted device. Set <literal>fileSystems.&lt;name?&gt;.device</literal> to <literal>/dev/mapper/&lt;label&gt;</literal> to mount the unlocked device."; description = "Label of the unlocked encrypted device. Set <literal>fileSystems.&lt;name?&gt;.device</literal> to <literal>/dev/mapper/&lt;label&gt;</literal> to mount the unlocked device.";
}; };
keyFile = mkOption { options.keyFile = mkOption {
default = null; default = null;
example = "/mnt-root/root/.swapkey"; example = "/mnt-root/root/.swapkey";
type = types.nullOr types.str; type = types.nullOr types.str;
@ -47,10 +47,10 @@ in
options = { options = {
fileSystems = mkOption { fileSystems = mkOption {
options = [encryptedFSOptions]; type = with lib.types; loaOf (submodule encryptedFSOptions);
}; };
swapDevices = mkOption { swapDevices = mkOption {
options = [encryptedFSOptions]; type = with lib.types; listOf (submodule encryptedFSOptions);
}; };
}; };
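With the options expressed as a proper submodule, an encrypted root file system could be declared like this sketch (device paths, label and key file are placeholders):

    fileSystems."/" = {
      device = "/dev/mapper/rootfs";          # the unlocked device, /dev/mapper/<label>
      fsType = "ext4";
      encrypted = {
        enable  = true;
        label   = "rootfs";
        blkDev  = "/dev/sda2";                # backing encrypted partition (placeholder)
        keyFile = "/mnt-root/root/.cryptkey"; # optional key file (placeholder)
      };
    };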

View File

@ -92,23 +92,24 @@ let
exit($mainRes & 127 ? 255 : $mainRes << 8); exit($mainRes & 127 ? 255 : $mainRes << 8);
''; '';
opts = { config, name, ... }: {
options.runner = mkOption {
internal = true;
description = ''
A script that runs the service outside of systemd,
useful for testing or for using NixOS services outside
of NixOS.
'';
};
config.runner = makeScript name config;
};
in in
{ {
options = { options = {
systemd.services = mkOption { systemd.services = mkOption {
options = type = with types; attrsOf (submodule opts);
{ config, name, ... }:
{ options.runner = mkOption {
internal = true;
description = ''
A script that runs the service outside of systemd,
useful for testing or for using NixOS services outside
of NixOS.
'';
};
config.runner = makeScript name config;
};
}; };
}; };
} }

View File

@ -36,7 +36,7 @@ let
#! ${pkgs.runtimeShell} -e #! ${pkgs.runtimeShell} -e
# Initialise the container side of the veth pair. # Initialise the container side of the veth pair.
if [ -n "$HOST_ADDRESS" ] || [ -n "$LOCAL_ADDRESS" ]; then if [ -n "$HOST_ADDRESS" ] || [ -n "$LOCAL_ADDRESS" ] || [ -n "$HOST_BRIDGE" ]; then
ip link set host0 name eth0 ip link set host0 name eth0
ip link set dev eth0 up ip link set dev eth0 up
@ -90,18 +90,20 @@ let
if [ -n "$HOST_ADDRESS" ] || [ -n "$LOCAL_ADDRESS" ]; then if [ -n "$HOST_ADDRESS" ] || [ -n "$LOCAL_ADDRESS" ]; then
extraFlags+=" --network-veth" extraFlags+=" --network-veth"
if [ -n "$HOST_BRIDGE" ]; then fi
extraFlags+=" --network-bridge=$HOST_BRIDGE"
fi if [ -n "$HOST_PORT" ]; then
if [ -n "$HOST_PORT" ]; then OIFS=$IFS
OIFS=$IFS IFS=","
IFS="," for i in $HOST_PORT
for i in $HOST_PORT do
do extraFlags+=" --port=$i"
extraFlags+=" --port=$i" done
done IFS=$OIFS
IFS=$OIFS fi
fi
if [ -n "$HOST_BRIDGE" ]; then
extraFlags+=" --network-bridge=$HOST_BRIDGE"
fi fi
extraFlags+=" ${concatStringsSep " " (mapAttrsToList nspawnExtraVethArgs cfg.extraVeths)}" extraFlags+=" ${concatStringsSep " " (mapAttrsToList nspawnExtraVethArgs cfg.extraVeths)}"

View File

@ -153,6 +153,7 @@ in
nfs4 = handleTest ./nfs.nix { version = 4; }; nfs4 = handleTest ./nfs.nix { version = 4; };
nghttpx = handleTest ./nghttpx.nix {}; nghttpx = handleTest ./nghttpx.nix {};
nginx = handleTest ./nginx.nix {}; nginx = handleTest ./nginx.nix {};
nginx-sso = handleTest ./nginx-sso.nix {};
nix-ssh-serve = handleTest ./nix-ssh-serve.nix {}; nix-ssh-serve = handleTest ./nix-ssh-serve.nix {};
novacomd = handleTestOn ["x86_64-linux"] ./novacomd.nix {}; novacomd = handleTestOn ["x86_64-linux"] ./novacomd.nix {};
nsd = handleTest ./nsd.nix {}; nsd = handleTest ./nsd.nix {};
@ -162,6 +163,7 @@ in
osquery = handleTest ./osquery.nix {}; osquery = handleTest ./osquery.nix {};
ostree = handleTest ./ostree.nix {}; ostree = handleTest ./ostree.nix {};
pam-oath-login = handleTest ./pam-oath-login.nix {}; pam-oath-login = handleTest ./pam-oath-login.nix {};
pam-u2f = handleTest ./pam-u2f.nix {};
pantheon = handleTest ./pantheon.nix {}; pantheon = handleTest ./pantheon.nix {};
peerflix = handleTest ./peerflix.nix {}; peerflix = handleTest ./peerflix.nix {};
pgjwt = handleTest ./pgjwt.nix {}; pgjwt = handleTest ./pgjwt.nix {};

View File

@ -45,6 +45,19 @@ import ./make-test.nix ({ pkgs, ...} : {
}; };
}; };
containers.web-noip =
{
autoStart = true;
privateNetwork = true;
hostBridge = "br0";
config =
{ services.httpd.enable = true;
services.httpd.adminAddr = "foo@example.org";
networking.firewall.allowedTCPPorts = [ 80 ];
};
};
virtualisation.pathsInNixDB = [ pkgs.stdenv ]; virtualisation.pathsInNixDB = [ pkgs.stdenv ];
}; };
@ -56,6 +69,10 @@ import ./make-test.nix ({ pkgs, ...} : {
# Start the webserver container. # Start the webserver container.
$machine->succeed("nixos-container status webserver") =~ /up/ or die; $machine->succeed("nixos-container status webserver") =~ /up/ or die;
# Check if bridges exist inside containers
$machine->succeed("nixos-container run webserver -- ip link show eth0");
$machine->succeed("nixos-container run web-noip -- ip link show eth0");
"${containerIp}" =~ /([^\/]+)\/([0-9+])/; "${containerIp}" =~ /([^\/]+)\/([0-9+])/;
my $ip = $1; my $ip = $1;
chomp $ip; chomp $ip;

View File

@ -8,6 +8,8 @@ import ./make-test.nix ({ pkgs, ... }: {
machine = { pkgs, ... }: { machine = { pkgs, ... }: {
services.fwupd.enable = true; services.fwupd.enable = true;
services.fwupd.blacklistPlugins = []; # don't blacklist test plugin
services.fwupd.enableTestRemote = true;
environment.systemPackages = with pkgs; [ gnome-desktop-testing ]; environment.systemPackages = with pkgs; [ gnome-desktop-testing ];
environment.variables.XDG_DATA_DIRS = [ "${pkgs.fwupd.installedTests}/share" ]; environment.variables.XDG_DATA_DIRS = [ "${pkgs.fwupd.installedTests}/share" ];
virtualisation.memorySize = 768; virtualisation.memorySize = 768;

View File

@ -73,7 +73,7 @@ in {
$hass->succeed("curl http://localhost:8123/api/states/binary_sensor.mqtt_binary_sensor -H 'x-ha-access: ${apiPassword}' | grep -qF '\"state\": \"on\"'"); $hass->succeed("curl http://localhost:8123/api/states/binary_sensor.mqtt_binary_sensor -H 'x-ha-access: ${apiPassword}' | grep -qF '\"state\": \"on\"'");
# Toggle a binary sensor using hass-cli # Toggle a binary sensor using hass-cli
$hass->succeed("${hassCli} entity get binary_sensor.mqtt_binary_sensor | grep -qF '\"state\": \"on\"'"); $hass->succeed("${hassCli} --output json entity get binary_sensor.mqtt_binary_sensor | grep -qF '\"state\": \"on\"'");
$hass->succeed("${hassCli} entity edit binary_sensor.mqtt_binary_sensor --json='{\"state\": \"off\"}'"); $hass->succeed("${hassCli} entity edit binary_sensor.mqtt_binary_sensor --json='{\"state\": \"off\"}'");
$hass->succeed("curl http://localhost:8123/api/states/binary_sensor.mqtt_binary_sensor -H 'x-ha-access: ${apiPassword}' | grep -qF '\"state\": \"off\"'"); $hass->succeed("curl http://localhost:8123/api/states/binary_sensor.mqtt_binary_sensor -H 'x-ha-access: ${apiPassword}' | grep -qF '\"state\": \"off\"'");

View File

@ -1,41 +1,23 @@
import ./make-test.nix ({ pkgs, lib, ...} : import ./make-test.nix ({ pkgs, lib, ...} :
let let
unlines = lib.concatStringsSep "\n";
unlinesAttrs = f: as: unlines (lib.mapAttrsToList f as);
dbDomain = "example.com";
dbSuffix = "dc=example,dc=com"; dbSuffix = "dc=example,dc=com";
dbPath = "/var/db/openldap";
dbAdminDn = "cn=admin,${dbSuffix}"; dbAdminDn = "cn=admin,${dbSuffix}";
dbAdminPwd = "test"; dbAdminPwd = "admin-password";
serverUri = "ldap:///"; # NOTE: slappasswd -h "{SSHA}" -s '${dbAdminPwd}'
dbAdminPwdHash = "{SSHA}i7FopSzkFQMrHzDMB1vrtkI0rBnwouP8";
ldapUser = "test-ldap-user"; ldapUser = "test-ldap-user";
ldapUserId = 10000; ldapUserId = 10000;
ldapUserPwd = "test"; ldapUserPwd = "user-password";
# NOTE: slappasswd -h "{SSHA}" -s '${ldapUserPwd}'
ldapUserPwdHash = "{SSHA}v12XICMZNGT6r2KJ26rIkN8Vvvp4QX6i";
ldapGroup = "test-ldap-group"; ldapGroup = "test-ldap-group";
ldapGroupId = 10000; ldapGroupId = 10000;
setupLdif = pkgs.writeText "test-ldap.ldif" ''
dn: ${dbSuffix}
dc: ${with lib; let dc = head (splitString "," dbSuffix); dcName = head (tail (splitString "=" dc)); in dcName}
o: ${dbSuffix}
objectclass: top
objectclass: dcObject
objectclass: organization
dn: cn=${ldapUser},${dbSuffix}
sn: ${ldapUser}
objectClass: person
objectClass: posixAccount
uid: ${ldapUser}
uidNumber: ${toString ldapUserId}
gidNumber: ${toString ldapGroupId}
homeDirectory: /home/${ldapUser}
loginShell: /bin/sh
userPassword: ${ldapUserPwd}
dn: cn=${ldapGroup},${dbSuffix}
objectClass: posixGroup
gidNumber: ${toString ldapGroupId}
memberUid: ${ldapUser}
'';
mkClient = useDaemon: mkClient = useDaemon:
{ lib, ... }: { lib, ... }:
{ {
@ -43,13 +25,24 @@ let
virtualisation.vlans = [ 1 ]; virtualisation.vlans = [ 1 ];
security.pam.services.su.rootOK = lib.mkForce false; security.pam.services.su.rootOK = lib.mkForce false;
users.ldap.enable = true; users.ldap.enable = true;
users.ldap.daemon.enable = useDaemon; users.ldap.daemon = {
enable = useDaemon;
rootpwmoddn = "cn=admin,${dbSuffix}";
rootpwmodpw = "/etc/nslcd.rootpwmodpw";
};
# NOTE: the password is stored in the clear in the Nix store, but this is a test.
environment.etc."nslcd.rootpwmodpw".source = pkgs.writeText "rootpwmodpw" dbAdminPwd;
users.ldap.loginPam = true; users.ldap.loginPam = true;
users.ldap.nsswitch = true; users.ldap.nsswitch = true;
users.ldap.server = "ldap://server"; users.ldap.server = "ldap://server";
users.ldap.base = "${dbSuffix}"; users.ldap.base = "ou=posix,${dbSuffix}";
users.ldap.bind = {
distinguishedName = "cn=admin,${dbSuffix}";
password = "/etc/ldap/bind.password";
};
# NOTE: the password is stored in the clear in the Nix store, but this is a test.
environment.etc."ldap/bind.password".source = pkgs.writeText "password" dbAdminPwd;
}; };
in in
{ {
@ -61,28 +54,237 @@ in
nodes = { nodes = {
server = server =
{ pkgs, ... }: { pkgs, config, ... }:
let
inherit (config.services) openldap;
slapdConfig = pkgs.writeText "cn=config.ldif" (''
dn: cn=config
objectClass: olcGlobal
#olcPidFile: /run/slapd/slapd.pid
# List of arguments that were passed to the server
#olcArgsFile: /run/slapd/slapd.args
# Read slapd-config(5) for possible values
olcLogLevel: none
# The tool-threads parameter sets the actual number of CPUs
# that are used for indexing.
olcToolThreads: 1
dn: olcDatabase={-1}frontend,cn=config
objectClass: olcDatabaseConfig
objectClass: olcFrontendConfig
# The maximum number of entries that is returned for a search operation
olcSizeLimit: 500
# Allow unlimited access to local connection from the local root user
olcAccess: to *
by dn.exact=gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth manage
by * break
# Allow unauthenticated read access for schema and base DN autodiscovery
olcAccess: to dn.exact=""
by * read
olcAccess: to dn.base="cn=Subschema"
by * read
dn: olcDatabase=config,cn=config
objectClass: olcDatabaseConfig
olcRootDN: cn=admin,cn=config
#olcRootPW:
# NOTE: for access to cn=config, the system root user can act as manager
# via the SASL EXTERNAL mechanism (-Y EXTERNAL) over the unix socket (-H ldapi://)
olcAccess: to *
by dn.exact="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" manage
by * break
dn: cn=schema,cn=config
objectClass: olcSchemaConfig
include: file://${pkgs.openldap}/etc/schema/core.ldif
include: file://${pkgs.openldap}/etc/schema/cosine.ldif
include: file://${pkgs.openldap}/etc/schema/nis.ldif
include: file://${pkgs.openldap}/etc/schema/inetorgperson.ldif
dn: cn=module{0},cn=config
objectClass: olcModuleList
# Where the dynamically loaded modules are stored
#olcModulePath: /usr/lib/ldap
olcModuleLoad: back_mdb
''
+ unlinesAttrs (olcSuffix: {conf, ...}:
"include: file://" + pkgs.writeText "config.ldif" conf
) slapdDatabases
);
slapdDatabases = {
"${dbSuffix}" = {
conf = ''
dn: olcBackend={1}mdb,cn=config
objectClass: olcBackendConfig
dn: olcDatabase={1}mdb,cn=config
olcSuffix: ${dbSuffix}
olcDbDirectory: ${openldap.dataDir}/${dbSuffix}
objectClass: olcDatabaseConfig
objectClass: olcMdbConfig
# NOTE: checkpoint the database periodically in case of system failure
# and to speed up slapd shutdown.
olcDbCheckpoint: 512 30
# Database max size is 1G
olcDbMaxSize: 1073741824
olcLastMod: TRUE
# NOTE: database superuser. Needed for syncrepl,
# and used to authenticate as admin over a TCP connection.
olcRootDN: cn=admin,${dbSuffix}
olcRootPW: ${dbAdminPwdHash}
#
olcDbIndex: objectClass eq
olcDbIndex: cn,uid eq
olcDbIndex: uidNumber,gidNumber eq
olcDbIndex: member,memberUid eq
#
olcAccess: to attrs=userPassword
by self write
by anonymous auth
by dn="cn=admin,${dbSuffix}" write
by dn="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" write
by * none
olcAccess: to attrs=shadowLastChange
by self write
by dn="cn=admin,${dbSuffix}" write
by dn="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" write
by * none
olcAccess: to dn.sub="ou=posix,${dbSuffix}"
by self read
by dn="cn=admin,${dbSuffix}" read
by dn="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" read
olcAccess: to *
by self read
by * none
'';
data = ''
dn: ${dbSuffix}
objectClass: top
objectClass: dcObject
objectClass: organization
o: ${dbDomain}
dn: cn=admin,${dbSuffix}
objectClass: simpleSecurityObject
objectClass: organizationalRole
description: ${dbDomain} LDAP administrator
roleOccupant: ${dbSuffix}
userPassword: ${ldapUserPwdHash}
dn: ou=posix,${dbSuffix}
objectClass: top
objectClass: organizationalUnit
dn: ou=accounts,ou=posix,${dbSuffix}
objectClass: top
objectClass: organizationalUnit
dn: ou=groups,ou=posix,${dbSuffix}
objectClass: top
objectClass: organizationalUnit
''
+ lib.concatMapStrings posixAccount [
{ uid=ldapUser; uidNumber=ldapUserId; gidNumber=ldapGroupId; userPassword=ldapUserPwdHash; }
]
+ lib.concatMapStrings posixGroup [
{ gid=ldapGroup; gidNumber=ldapGroupId; members=[]; }
];
};
};
# NOTE: create a user account using the posixAccount objectClass.
posixAccount =
{ uid
, uidNumber ? null
, gidNumber ? null
, cn ? ""
, sn ? ""
, userPassword ? ""
, loginShell ? "/bin/sh"
}: ''
dn: uid=${uid},ou=accounts,ou=posix,${dbSuffix}
objectClass: person
objectClass: posixAccount
objectClass: shadowAccount
cn: ${cn}
gecos:
${if gidNumber == null then "#" else "gidNumber: ${toString gidNumber}"}
homeDirectory: /home/${uid}
loginShell: ${loginShell}
sn: ${sn}
${if uidNumber == null then "#" else "uidNumber: ${toString uidNumber}"}
${if userPassword == "" then "#" else "userPassword: ${userPassword}"}
'';
# NOTE: create a group using the posixGroup objectClass.
posixGroup =
{ gid
, gidNumber
, members
}: ''
dn: cn=${gid},ou=groups,ou=posix,${dbSuffix}
objectClass: top
objectClass: posixGroup
gidNumber: ${toString gidNumber}
${lib.concatMapStrings (member: "memberUid: ${member}\n") members}
'';
in
{ {
virtualisation.memorySize = 256; virtualisation.memorySize = 256;
virtualisation.vlans = [ 1 ]; virtualisation.vlans = [ 1 ];
networking.firewall.allowedTCPPorts = [ 389 ]; networking.firewall.allowedTCPPorts = [ 389 ];
services.openldap.enable = true; services.openldap.enable = true;
services.openldap.dataDir = dbPath; services.openldap.dataDir = "/var/db/openldap";
services.openldap.configDir = "/var/db/slapd";
services.openldap.urlList = [ services.openldap.urlList = [
serverUri "ldap:///"
"ldapi:///"
]; ];
services.openldap.extraConfig = '' systemd.services.openldap = {
include ${pkgs.openldap.out}/etc/schema/core.schema preStart = ''
include ${pkgs.openldap.out}/etc/schema/cosine.schema set -e
include ${pkgs.openldap.out}/etc/schema/inetorgperson.schema # NOTE: slapd's config is always re-initialized.
include ${pkgs.openldap.out}/etc/schema/nis.schema rm -rf "${openldap.configDir}"/cn=config \
"${openldap.configDir}"/cn=config.ldif
database mdb install -D -d -m 0700 -o "${openldap.user}" -g "${openldap.group}" "${openldap.configDir}"
suffix ${dbSuffix} # NOTE: olcDbDirectory must be created before adding the config.
rootdn ${dbAdminDn} '' +
rootpw ${dbAdminPwd} unlinesAttrs (olcSuffix: {data, ...}: ''
directory ${dbPath} # NOTE: database is always re-initialized.
''; rm -rf "${openldap.dataDir}/${olcSuffix}"
install -D -d -m 0700 -o "${openldap.user}" -g "${openldap.group}" \
"${openldap.dataDir}/${olcSuffix}"
'') slapdDatabases
+ ''
# NOTE: slapd is supposed to be stopped while in preStart,
# hence slap* commands can safely be used.
umask 0077
${pkgs.openldap}/bin/slapadd -n 0 \
-F "${openldap.configDir}" \
-l ${slapdConfig}
chown -R "${openldap.user}:${openldap.group}" "${openldap.configDir}"
# NOTE: slapadd(8): To populate the config database slapd-config(5),
# use -n 0 as it is always the first database.
# It must physically exist on the filesystem prior to this, however.
'' +
unlinesAttrs (olcSuffix: {data, ...}: ''
# NOTE: load database ${olcSuffix}
# (as root to avoid depending on sudo or chpst)
${pkgs.openldap}/bin/slapadd \
-F "${openldap.configDir}" \
-l ${pkgs.writeText "data.ldif" data}
'' + ''
# NOTE: redundant with the default openldap preStart, but does no harm.
chown -R "${openldap.user}:${openldap.group}" \
"${openldap.dataDir}/${olcSuffix}"
'') slapdDatabases;
};
}; };
client1 = mkClient true; # use nss_pam_ldapd client1 = mkClient true; # use nss_pam_ldapd
@ -91,15 +293,91 @@ in
}; };
testScript = '' testScript = ''
startAll; $server->start;
$server->waitForUnit("default.target"); $server->waitForUnit("default.target");
subtest "slapd", sub {
subtest "auth as database admin with SASL and check a POSIX account", sub {
$server->succeed(join ' ', 'test',
'"$(ldapsearch -LLL -H ldapi:// -Y EXTERNAL',
'-b \'uid=${ldapUser},ou=accounts,ou=posix,${dbSuffix}\' ',
'-s base uidNumber |',
'sed -ne \'s/^uidNumber: \\(.*\\)/\\1/p\' ',
')" -eq ${toString ldapUserId}');
};
subtest "auth as database admin with password and check a POSIX account", sub {
$server->succeed(join ' ', 'test',
'"$(ldapsearch -LLL -H ldap://server',
'-D \'cn=admin,${dbSuffix}\' -w \'${dbAdminPwd}\' ',
'-b \'uid=${ldapUser},ou=accounts,ou=posix,${dbSuffix}\' ',
'-s base uidNumber |',
'sed -ne \'s/^uidNumber: \\(.*\\)/\\1/p\' ',
')" -eq ${toString ldapUserId}');
};
};
$client1->start;
$client1->waitForUnit("default.target"); $client1->waitForUnit("default.target");
subtest "password", sub {
subtest "su with password to a POSIX account", sub {
$client1->succeed("${pkgs.expect}/bin/expect -c '" . join ';',
'spawn su "${ldapUser}"',
'expect "Password:"',
'send "${ldapUserPwd}\n"',
'expect "*"',
'send "whoami\n"',
'expect -ex "${ldapUser}" {exit}',
'exit 1' . "'");
};
subtest "change password of a POSIX account as root", sub {
$client1->succeed("chpasswd <<<'${ldapUser}:new-password'");
$client1->succeed("${pkgs.expect}/bin/expect -c '" . join ';',
'spawn su "${ldapUser}"',
'expect "Password:"',
'send "new-password\n"',
'expect "*"',
'send "whoami\n"',
'expect -ex "${ldapUser}" {exit}',
'exit 1' . "'");
$client1->succeed('chpasswd <<<\'${ldapUser}:${ldapUserPwd}\' ');
};
subtest "change password of a POSIX account from itself", sub {
$client1->succeed('chpasswd <<<\'${ldapUser}:${ldapUserPwd}\' ');
$client1->succeed("${pkgs.expect}/bin/expect -c '" . join ';',
'spawn su --login ${ldapUser} -c passwd',
'expect "Password: "',
'send "${ldapUserPwd}\n"',
'expect "(current) UNIX password: "',
'send "${ldapUserPwd}\n"',
'expect "New password: "',
'send "new-password\n"',
'expect "Retype new password: "',
'send "new-password\n"',
'expect "passwd: password updated successfully" {exit}',
'exit 1' . "'");
$client1->succeed("${pkgs.expect}/bin/expect -c '" . join ';',
'spawn su "${ldapUser}"',
'expect "Password:"',
'send "${ldapUserPwd}\n"',
'expect "su: Authentication failure" {exit}',
'exit 1' . "'");
$client1->succeed("${pkgs.expect}/bin/expect -c '" . join ';',
'spawn su "${ldapUser}"',
'expect "Password:"',
'send "new-password\n"',
'expect "*"',
'send "whoami\n"',
'expect -ex "${ldapUser}" {exit}',
'exit 1' . "'");
$client1->succeed('chpasswd <<<\'${ldapUser}:${ldapUserPwd}\' ');
};
};
$client2->start;
$client2->waitForUnit("default.target"); $client2->waitForUnit("default.target");
$server->succeed("ldapadd -D '${dbAdminDn}' -w ${dbAdminPwd} -H ${serverUri} -f '${setupLdif}'"); subtest "NSS", sub {
# NSS tests
subtest "nss", sub {
$client1->succeed("test \"\$(id -u '${ldapUser}')\" -eq ${toString ldapUserId}"); $client1->succeed("test \"\$(id -u '${ldapUser}')\" -eq ${toString ldapUserId}");
$client1->succeed("test \"\$(id -u -n '${ldapUser}')\" = '${ldapUser}'"); $client1->succeed("test \"\$(id -u -n '${ldapUser}')\" = '${ldapUser}'");
$client1->succeed("test \"\$(id -g '${ldapUser}')\" -eq ${toString ldapGroupId}"); $client1->succeed("test \"\$(id -g '${ldapUser}')\" -eq ${toString ldapGroupId}");
@ -110,8 +388,7 @@ in
$client2->succeed("test \"\$(id -g -n '${ldapUser}')\" = '${ldapGroup}'"); $client2->succeed("test \"\$(id -g -n '${ldapUser}')\" = '${ldapGroup}'");
}; };
# PAM tests subtest "PAM", sub {
subtest "pam", sub {
$client1->succeed("echo ${ldapUserPwd} | su -l '${ldapUser}' -c true"); $client1->succeed("echo ${ldapUserPwd} | su -l '${ldapUser}' -c true");
$client2->succeed("echo ${ldapUserPwd} | su -l '${ldapUser}' -c true"); $client2->succeed("echo ${ldapUserPwd} | su -l '${ldapUser}' -c true");
}; };

44
nixos/tests/nginx-sso.nix Normal file
View File

@ -0,0 +1,44 @@
import ./make-test.nix ({ pkgs, ... }: {
name = "nginx-sso";
meta = {
maintainers = with pkgs.stdenv.lib.maintainers; [ delroth ];
};
machine = {
services.nginx.sso = {
enable = true;
configuration = {
listen = { addr = "127.0.0.1"; port = 8080; };
providers.token.tokens = {
myuser = "MyToken";
};
acl = {
rule_sets = [
{
rules = [ { field = "x-application"; equals = "MyApp"; } ];
allow = [ "myuser" ];
}
];
};
};
};
};
testScript = ''
startAll;
$machine->waitForUnit("nginx-sso.service");
$machine->waitForOpenPort(8080);
# No valid user -> 401.
$machine->fail("curl -sSf http://localhost:8080/auth");
# Valid user but no matching ACL -> 403.
$machine->fail("curl -sSf -H 'Authorization: Token MyToken' http://localhost:8080/auth");
# Valid user and matching ACL -> 200.
$machine->succeed("curl -sSf -H 'Authorization: Token MyToken' -H 'X-Application: MyApp' http://localhost:8080/auth");
'';
})

23
nixos/tests/pam-u2f.nix Normal file
View File

@ -0,0 +1,23 @@
import ./make-test.nix ({ ... }:
{
name = "pam-u2f";
machine =
{ ... }:
{
security.pam.u2f = {
control = "required";
cue = true;
debug = true;
enable = true;
interactive = true;
};
};
testScript =
''
$machine->waitForUnit('multi-user.target');
$machine->succeed('egrep "auth required .*/lib/security/pam_u2f.so.*debug.*interactive.*cue" /etc/pam.d/ -R');
'';
})

View File

@ -1,8 +1,8 @@
{ stdenv, fetchurl, alsaLib, bzip2, cairo, dpkg, freetype, gdk_pixbuf { stdenv, fetchurl, alsaLib, bzip2, cairo, dpkg, freetype, gdk_pixbuf
, glib, gtk2, harfbuzz, jdk, lib, xorg , wrapGAppsHook, gtk2, gtk3, harfbuzz, jdk, lib, xorg
, libbsd, libjack2, libpng , libbsd, libjack2, libpng, ffmpeg
, libxkbcommon , libxkbcommon
, makeWrapper, pixman , makeWrapper, pixman, autoPatchelfHook
, xdg_utils, zenity, zlib }: , xdg_utils, zenity, zlib }:
stdenv.mkDerivation rec { stdenv.mkDerivation rec {
@ -14,22 +14,21 @@ stdenv.mkDerivation rec {
sha256 = "0n0fxh9gnmilwskjcayvjsjfcs3fz9hn00wh7b3gg0cv3qqhich8"; sha256 = "0n0fxh9gnmilwskjcayvjsjfcs3fz9hn00wh7b3gg0cv3qqhich8";
}; };
nativeBuildInputs = [ dpkg makeWrapper ]; nativeBuildInputs = [ dpkg makeWrapper autoPatchelfHook wrapGAppsHook ];
unpackCmd = "mkdir root ; dpkg-deb -x $curSrc root"; unpackCmd = "mkdir root ; dpkg-deb -x $curSrc root";
dontBuild = true; dontBuild = true;
dontPatchELF = true; dontWrapGApps = true; # we only want $gappsWrapperArgs here
dontStrip = true;
libPath = with xorg; lib.makeLibraryPath [ buildInputs = with xorg; [
alsaLib bzip2.out cairo freetype gdk_pixbuf glib gtk2 harfbuzz libX11 libXau alsaLib bzip2.out cairo freetype gdk_pixbuf gtk2 gtk3 harfbuzz libX11 libXau
libXcursor libXdmcp libXext libXfixes libXrender libbsd libjack2 libpng libxcb libXcursor libXdmcp libXext libXfixes libXrender libbsd libjack2 libpng libxcb
libxkbfile pixman xcbutil xcbutilwm zlib libxkbfile pixman xcbutil xcbutilwm zlib
]; ];
binPath = lib.makeBinPath [ binPath = lib.makeBinPath [
xdg_utils zenity xdg_utils zenity ffmpeg
]; ];
installPhase = '' installPhase = ''
@ -49,6 +48,16 @@ stdenv.mkDerivation rec {
rm -rf $out/libexec/lib/jre rm -rf $out/libexec/lib/jre
ln -s ${jdk.home}/jre $out/libexec/lib/jre ln -s ${jdk.home}/jre $out/libexec/lib/jre
mkdir -p $out/bin
ln -s $out/libexec/bitwig-studio $out/bin/bitwig-studio
cp -r usr/share $out/share
substitute usr/share/applications/bitwig-studio.desktop \
$out/share/applications/bitwig-studio.desktop \
--replace /usr/bin/bitwig-studio $out/bin/bitwig-studio
'';
postFixup = ''
# Bitwigs `libx11-windowing-system.so` has several problems: # Bitwigs `libx11-windowing-system.so` has several problems:
# #
# • has some old version of libxkbcommon linked statically (ಠ_ಠ), # • has some old version of libxkbcommon linked statically (ಠ_ಠ),
@ -67,22 +76,11 @@ stdenv.mkDerivation rec {
-not -name '*.so' \ -not -name '*.so' \
-not -path '*/resources/*' | \ -not -path '*/resources/*' | \
while IFS= read -r f ; do while IFS= read -r f ; do
patchelf \
--set-interpreter $(cat ${stdenv.cc}/nix-support/dynamic-linker) \
$f && \
wrapProgram $f \ wrapProgram $f \
--prefix PATH : "${binPath}" \ --prefix PATH : "${binPath}" \
--prefix LD_LIBRARY_PATH : "${libPath}" \ "''${gappsWrapperArgs[@]}" \
--set LD_PRELOAD "${libxkbcommon.out}/lib/libxkbcommon.so" || true --set LD_PRELOAD "${libxkbcommon.out}/lib/libxkbcommon.so" || true
done done
mkdir -p $out/bin
ln -s $out/libexec/bitwig-studio $out/bin/bitwig-studio
cp -r usr/share $out/share
substitute usr/share/applications/bitwig-studio.desktop \
$out/share/applications/bitwig-studio.desktop \
--replace /usr/bin/bitwig-studio $out/bin/bitwig-studio
''; '';
meta = with stdenv.lib; { meta = with stdenv.lib; {

View File

@ -1,18 +1,16 @@
{ stdenv, fetchurl, bitwig-studio1, { stdenv, fetchurl, bitwig-studio1,
xdg_utils, zenity, ffmpeg }: xdg_utils, zenity, ffmpeg, pulseaudio }:
bitwig-studio1.overrideAttrs (oldAttrs: rec { bitwig-studio1.overrideAttrs (oldAttrs: rec {
name = "bitwig-studio-${version}"; name = "bitwig-studio-${version}";
version = "2.3.5"; version = "2.4.3";
src = fetchurl { src = fetchurl {
url = "https://downloads.bitwig.com/stable/${version}/bitwig-studio-${version}.deb"; url = "https://downloads.bitwig.com/stable/${version}/bitwig-studio-${version}.deb";
sha256 = "1v62z08hqla8fz5m7hl9ynf2hpr0j0arm0nb5lpd99qrv36ibrsc"; sha256 = "17754y4ni0zj9vjxl8ldivi33gdb0nk6sdlcmlpskgffrlx8di08";
}; };
buildInputs = bitwig-studio1.buildInputs ++ [ ffmpeg ]; runtimeDependencies = [
pulseaudio
binPath = stdenv.lib.makeBinPath [
ffmpeg xdg_utils zenity
]; ];
}) })

View File

@ -1,4 +1,4 @@
{ stdenv, fetchurl, fetchpatch, boost, cmake, chromaprint, gettext, gst_all_1, liblastfm { stdenv, fetchFromGitHub, fetchpatch, boost, cmake, chromaprint, gettext, gst_all_1, liblastfm
, qt4, taglib, fftw, glew, qjson, sqlite, libgpod, libplist, usbmuxd, libmtp , qt4, taglib, fftw, glew, qjson, sqlite, libgpod, libplist, usbmuxd, libmtp
, libpulseaudio, gvfs, libcdio, libechonest, libspotify, pcre, projectm, protobuf , libpulseaudio, gvfs, libcdio, libechonest, libspotify, pcre, projectm, protobuf
, qca2, pkgconfig, sparsehash, config, makeWrapper, gst_plugins }: , qca2, pkgconfig, sparsehash, config, makeWrapper, gst_plugins }:
@ -11,14 +11,16 @@ let
version = "1.3.1"; version = "1.3.1";
src = fetchurl { src = fetchFromGitHub {
url = https://github.com/clementine-player/Clementine/archive/1.3.1.tar.gz; owner = "clementine-player";
sha256 = "0z7k73wyz54c3020lb6x2dgw0vz4ri7wcl3vs03qdj5pk8d971gq"; repo = "Clementine";
rev = version;
sha256 = "0i3jkfs8dbfkh47jq3cnx7pip47naqg7w66vmfszk4d8vj37j62j";
}; };
patches = [ patches = [
./clementine-spotify-blob.patch ./clementine-spotify-blob.patch
# Required so as to avoid adding libspotify as a build dependency (as it is # Required so as to avoid adding libspotify as a build dependency (as it is
# unfree and thus would prevent us from having a free package). # unfree and thus would prevent us from having a free package).
./clementine-spotify-blob-remove-from-build.patch ./clementine-spotify-blob-remove-from-build.patch
(fetchpatch { (fetchpatch {

View File

@ -5,7 +5,7 @@
python3.pkgs.buildPythonApplication rec { python3.pkgs.buildPythonApplication rec {
pname = "lollypop"; pname = "lollypop";
version = "0.9.914"; version = "0.9.915";
format = "other"; format = "other";
doCheck = false; doCheck = false;
@ -14,7 +14,7 @@ python3.pkgs.buildPythonApplication rec {
url = "https://gitlab.gnome.org/World/lollypop"; url = "https://gitlab.gnome.org/World/lollypop";
rev = "refs/tags/${version}"; rev = "refs/tags/${version}";
fetchSubmodules = true; fetchSubmodules = true;
sha256 = "0nkwic6mq4fs467c696m5w0wqrii5rzvf2il6vkw861my1bl9wzj"; sha256 = "133qmqb015ghif4d4zh6sf8585fpfgbq00rv6qdj5xn13wziipwh";
}; };
nativeBuildInputs = [ nativeBuildInputs = [

View File

@ -1,4 +1,4 @@
{ stdenv, lib, fetchFromGitHub, cmake, pkgconfig { stdenv, lib, fetchzip, cmake, pkgconfig
, alsaLib, freetype, libjack2, lame, libogg, libpulseaudio, libsndfile, libvorbis , alsaLib, freetype, libjack2, lame, libogg, libpulseaudio, libsndfile, libvorbis
, portaudio, portmidi, qtbase, qtdeclarative, qtscript, qtsvg, qttools , portaudio, portmidi, qtbase, qtdeclarative, qtscript, qtsvg, qttools
, qtwebengine, qtxmlpatterns , qtwebengine, qtxmlpatterns
@ -6,13 +6,12 @@
stdenv.mkDerivation rec { stdenv.mkDerivation rec {
name = "musescore-${version}"; name = "musescore-${version}";
version = "3.0"; version = "3.0.1";
src = fetchFromGitHub { src = fetchzip {
owner = "musescore"; url = "https://download.musescore.com/releases/MuseScore-${version}/MuseScore-${version}.zip";
repo = "MuseScore"; sha256 = "1l9djxq5hdfqiya2jwcag7qq4dhmb9qcv68y27dlza19imrnim80";
rev = "v${version}"; stripRoot = false;
sha256 = "0g8n8xpw5d6wh8bwbvy12sinl9i0ir009sr28i4izr28lr4x8v50";
}; };
patches = [ patches = [

View File

@ -3,13 +3,13 @@
stdenv.mkDerivation rec { stdenv.mkDerivation rec {
name = "ncpamixer-${version}"; name = "ncpamixer-${version}";
version = "1.2"; version = "1.3";
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "fulhax"; owner = "fulhax";
repo = "ncpamixer"; repo = "ncpamixer";
rev = version; rev = version;
sha256 = "01kvd0pg5yraymlln5xdzqj1r6adxfvvza84wxn2481kcxfral54"; sha256 = "02v8vsx26w3wrzkg61457diaxv1hyzsh103p53j80la9vglamdsh";
}; };
buildInputs = [ ncurses libpulseaudio ]; buildInputs = [ ncurses libpulseaudio ];

View File

@ -1,14 +1,16 @@
{ stdenv, python3Packages, fetchurl, gettext, chromaprint }: { stdenv, python3Packages, fetchFromGitHub, gettext, chromaprint }:
let let
pythonPackages = python3Packages; pythonPackages = python3Packages;
in pythonPackages.buildPythonApplication rec { in pythonPackages.buildPythonApplication rec {
pname = "picard"; pname = "picard";
version = "2.1"; version = "2.1.2";
src = fetchurl { src = fetchFromGitHub {
url = "http://ftp.musicbrainz.org/pub/musicbrainz/picard/picard-${version}.tar.gz"; owner = "metabrainz";
sha256 = "054a37q5828q59jzml4npkyczsp891d89kawgsif9kwpi0dxa06c"; repo = pname;
rev = "release-${version}";
sha256 = "1p2bvfzby0nk1vh04yfmsvjcldgkj6m6s1hcv9v13hc8q1cbdfk5";
}; };
buildInputs = [ gettext ]; buildInputs = [ gettext ];
@ -29,8 +31,6 @@ in pythonPackages.buildPythonApplication rec {
substituteInPlace setup.cfg --replace "" "'" substituteInPlace setup.cfg --replace "" "'"
''; '';
doCheck = false;
meta = with stdenv.lib; { meta = with stdenv.lib; {
homepage = http://musicbrainz.org/doc/MusicBrainz_Picard; homepage = http://musicbrainz.org/doc/MusicBrainz_Picard;
description = "The official MusicBrainz tagger"; description = "The official MusicBrainz tagger";

View File

@ -31,6 +31,7 @@
, zam-plugins , zam-plugins
, rubberband , rubberband
, mda_lv2 , mda_lv2
, lsp-plugins
, hicolor-icon-theme , hicolor-icon-theme
}: }:
@ -38,6 +39,7 @@ let
lv2Plugins = [ lv2Plugins = [
calf # limiter, compressor exciter, bass enhancer and others calf # limiter, compressor exciter, bass enhancer and others
mda_lv2 # loudness mda_lv2 # loudness
lsp-plugins # delay
]; ];
ladspaPlugins = [ ladspaPlugins = [
rubberband # pitch shifting rubberband # pitch shifting
@ -45,13 +47,13 @@ let
]; ];
in stdenv.mkDerivation rec { in stdenv.mkDerivation rec {
pname = "pulseeffects"; pname = "pulseeffects";
version = "4.4.6"; version = "4.4.7";
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "wwmm"; owner = "wwmm";
repo = "pulseeffects"; repo = "pulseeffects";
rev = "v${version}"; rev = "v${version}";
sha256 = "0zvcj2qliz2rlcz59ag4ljrs78qb7kpyaph16qvi07ij7c5bm333"; sha256 = "14sxwy3mayzn9k5hy58mjzhxaj4wqxvs257xaj03mwvm48k7c7ia";
}; };
nativeBuildInputs = [ nativeBuildInputs = [

View File

@ -1,6 +1,6 @@
{ fetchurl, stdenv, squashfsTools, xorg, alsaLib, makeWrapper, openssl, freetype { fetchurl, stdenv, squashfsTools, xorg, alsaLib, makeWrapper, openssl, freetype
, glib, pango, cairo, atk, gdk_pixbuf, gtk2, cups, nspr, nss, libpng , glib, pango, cairo, atk, gdk_pixbuf, gtk2, cups, nspr, nss, libpng
, libgcrypt, systemd, fontconfig, dbus, expat, ffmpeg_0_10, curl, zlib, gnome3 , libgcrypt, systemd, fontconfig, dbus, expat, ffmpeg, curl, zlib, gnome3
, at-spi2-atk , at-spi2-atk
}: }:
@ -26,7 +26,7 @@ let
curl curl
dbus dbus
expat expat
ffmpeg_0_10 ffmpeg
fontconfig fontconfig
freetype freetype
gdk_pixbuf gdk_pixbuf
@ -118,6 +118,9 @@ stdenv.mkDerivation {
ln -s ${nspr.out}/lib/libnspr4.so $libdir/libnspr4.so ln -s ${nspr.out}/lib/libnspr4.so $libdir/libnspr4.so
ln -s ${nspr.out}/lib/libplc4.so $libdir/libplc4.so ln -s ${nspr.out}/lib/libplc4.so $libdir/libplc4.so
ln -s ${ffmpeg.out}/lib/libavcodec.so.56 $libdir/libavcodec-ffmpeg.so.56
ln -s ${ffmpeg.out}/lib/libavformat.so.56 $libdir/libavformat-ffmpeg.so.56
rpath="$out/share/spotify:$libdir" rpath="$out/share/spotify:$libdir"
patchelf \ patchelf \

View File

@ -555,12 +555,12 @@ rec {
spotbugs = buildEclipseUpdateSite rec { spotbugs = buildEclipseUpdateSite rec {
name = "spotbugs-${version}"; name = "spotbugs-${version}";
version = "3.1.10"; version = "3.1.11";
src = fetchzip { src = fetchzip {
stripRoot = false; stripRoot = false;
url = "https://github.com/spotbugs/spotbugs/releases/download/${version}/eclipsePlugin.zip"; url = "https://github.com/spotbugs/spotbugs/releases/download/${version}/eclipsePlugin.zip";
sha256 = "0xrflgw0h05z3za784ach2fx6dh04lgmfr426m1q235vv2ibds5y"; sha256 = "0aanqwx3gy1arpbkqd846381hiy6272lzwhfjl94x8jhfykpqqbj";
}; };
meta = with stdenv.lib; { meta = with stdenv.lib; {

View File

@ -213,6 +213,13 @@ self:
# upstream issue: missing file header # upstream issue: missing file header
qiita = markBroken super.qiita; qiita = markBroken super.qiita;
racer = super.racer.overrideAttrs (attrs: {
postPatch = attrs.postPatch or "" + ''
substituteInPlace racer.el \
--replace /usr/local/src/rust/src ${external.rustPlatform.rustcSrc}
'';
});
# upstream issue: missing file footer # upstream issue: missing file footer
seoul256-theme = markBroken super.seoul256-theme; seoul256-theme = markBroken super.seoul256-theme;

View File

@ -4,12 +4,12 @@ with stdenv.lib;
stdenv.mkDerivation rec { stdenv.mkDerivation rec {
name = "kakoune-unstable-${version}"; name = "kakoune-unstable-${version}";
version = "2018.10.27"; version = "2019.01.20";
src = fetchFromGitHub { src = fetchFromGitHub {
repo = "kakoune"; repo = "kakoune";
owner = "mawww"; owner = "mawww";
rev = "v${version}"; rev = "v${version}";
sha256 = "1w7jmq57h8gxxbzg0n3lgd6cci77xb9mziy6lr8330nzqc85zp9p"; sha256 = "04ak1jm7b1i03sx10z3fxw08rn692y2fj482jn5kpzfzj91b2ila";
}; };
nativeBuildInputs = [ pkgconfig ]; nativeBuildInputs = [ pkgconfig ];
buildInputs = [ ncurses asciidoc docbook_xsl libxslt ]; buildInputs = [ ncurses asciidoc docbook_xsl libxslt ];

View File

@ -3,13 +3,13 @@
stdenv.mkDerivation rec { stdenv.mkDerivation rec {
name = "tiled-${version}"; name = "tiled-${version}";
version = "1.2.1"; version = "1.2.2";
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "bjorn"; owner = "bjorn";
repo = "tiled"; repo = "tiled";
rev = "v${version}"; rev = "v${version}";
sha256 = "077fv3kn3fy06z8f414r3ny4a04l05prppmkyvjqhnwf1i1jck1w"; sha256 = "1yqw10izqhsnqwgxvws2n4ymcwawbh86srv7qmjwbsay752pfgfh";
}; };
nativeBuildInputs = [ pkgconfig qmake ]; nativeBuildInputs = [ pkgconfig qmake ];

View File

@ -1,92 +1,36 @@
{ stdenv, fetchurl, dpkg, lib, glib, dbus, makeWrapper, gnome2, gnome3, gtk3, atk, cairo, pango { stdenv, fetchurl, makeWrapper, electron_3, dpkg, gtk3, glib, gnome3, wrapGAppsHook }:
, gdk_pixbuf, freetype, fontconfig, nspr, nss, xorg, alsaLib, cups, expat, udev, wrapGAppsHook }:
stdenv.mkDerivation rec { stdenv.mkDerivation rec {
name = "typora-${version}"; pname = "typora";
version = "0.9.53"; version = "0.9.64";
src = src = fetchurl {
if stdenv.hostPlatform.system == "x86_64-linux" then url = "https://www.typora.io/linux/typora_${version}_amd64.deb";
fetchurl { sha256 = "0dffydc11ys2i38gdy8080ph1xlbbzhcdcc06hyfv0dr0nf58a09";
url = "https://www.typora.io/linux/typora_${version}_amd64.deb"; };
sha256 = "02k6x30l4mbjragqbq5rn663xbw3h4bxzgppfxqf5lwydswldklb";
}
else
fetchurl {
url = "https://www.typora.io/linux/typora_${version}_i386.deb";
sha256 = "1wyq1ri0wwdy7slbd9dwyrdynsaa644x44c815jl787sg4nhas6y";
}
;
rpath = stdenv.lib.makeLibraryPath [ nativeBuildInputs = [ dpkg makeWrapper wrapGAppsHook ];
alsaLib
gnome2.GConf
gdk_pixbuf
pango
gnome3.defaultIconTheme
expat
gtk3
atk
nspr
nss
stdenv.cc.cc
glib
cairo
cups
dbus
udev
fontconfig
freetype
xorg.libX11
xorg.libXi
xorg.libXext
xorg.libXtst
xorg.libXfixes
xorg.libXcursor
xorg.libXdamage
xorg.libXrender
xorg.libXrandr
xorg.libXcomposite
xorg.libxcb
xorg.libXScrnSaver
];
nativeBuildInputs = [ wrapGAppsHook ]; buildInputs = [ gtk3 glib gnome3.gsettings-desktop-schemas ];
unpackPhase = "dpkg-deb -x $src .";
dontWrapGApps = true; dontWrapGApps = true;
buildInputs = [ dpkg makeWrapper ];
unpackPhase = "true";
installPhase = '' installPhase = ''
mkdir -p $out mkdir -p $out/bin $out/share/typora
dpkg -x $src $out {
mv $out/usr/bin $out cd usr
mv $out/usr/share $out mv share/typora/resources/app/* $out/share/typora
rm $out/bin/typora mv share/applications $out/share
rmdir $out/usr mv share/icons $out/share
mv share/doc $out/share
}
# Otherwise it looks "suspicious" makeWrapper ${electron_3}/bin/electron $out/bin/typora \
chmod -R g-w $out --add-flags $out/share/typora \
'';
postFixup = ''
patchelf \
--set-interpreter "$(cat $NIX_CC/nix-support/dynamic-linker)" \
--set-rpath "$out/share/typora:${rpath}" "$out/share/typora/Typora"
makeWrapper $out/share/typora/Typora $out/bin/typora
wrapProgram $out/bin/typora \
"''${gappsWrapperArgs[@]}" \ "''${gappsWrapperArgs[@]}" \
--suffix XDG_DATA_DIRS : "${gtk3}/share/gsettings-schemas/${gtk3.name}/" \ --prefix LD_LIBRARY_PATH : "${stdenv.lib.makeLibraryPath [ stdenv.cc.cc ]}"
--prefix XDG_DATA_DIRS : "${gnome3.defaultIconTheme}/share"
# Fix the desktop link
substituteInPlace $out/share/applications/typora.desktop \
--replace /usr/bin/ $out/bin/ \
--replace /usr/share/ $out/share/
''; '';
meta = with stdenv.lib; { meta = with stdenv.lib; {
@ -94,6 +38,6 @@ stdenv.mkDerivation rec {
homepage = https://typora.io; homepage = https://typora.io;
license = licenses.unfree; license = licenses.unfree;
maintainers = with maintainers; [ jensbin ]; maintainers = with maintainers; [ jensbin ];
platforms = [ "x86_64-linux" "i686-linux" ]; inherit (electron_3.meta) platforms;
}; };
} }

View File

@ -4,6 +4,8 @@
let let
executableName = "code" + lib.optionalString isInsiders "-insiders"; executableName = "code" + lib.optionalString isInsiders "-insiders";
longName = "Visual Studio Code" + lib.optionalString isInsiders " - Insiders";
shortName = "Code" + lib.optionalString isInsiders " - Insiders";
plat = { plat = {
"i686-linux" = "linux-ia32"; "i686-linux" = "linux-ia32";
@ -45,12 +47,40 @@ in
desktopItem = makeDesktopItem { desktopItem = makeDesktopItem {
name = executableName; name = executableName;
desktopName = longName;
comment = "Code Editing. Redefined.";
genericName = "Text Editor";
exec = executableName; exec = executableName;
icon = "@out@/share/pixmaps/code.png"; icon = "@out@/share/pixmaps/code.png";
comment = "Code editor redefined and optimized for building and debugging modern web and cloud applications"; startupNotify = "true";
desktopName = "Visual Studio Code" + lib.optionalString isInsiders " Insiders"; categories = "Utility;TextEditor;Development;IDE;";
mimeType = "text/plain;inode/directory;";
extraEntries = ''
StartupWMClass=${shortName}
Actions=new-empty-window;
Keywords=vscode;
[Desktop Action new-empty-window]
Name=New Empty Window
Exec=${executableName} --new-window %F
Icon=@out@/share/pixmaps/code.png
'';
};
urlHandlerDesktopItem = makeDesktopItem {
name = executableName + "-url-handler";
desktopName = longName + " - URL Handler";
comment = "Code Editing. Redefined.";
genericName = "Text Editor"; genericName = "Text Editor";
categories = "GNOME;GTK;Utility;TextEditor;Development;"; exec = executableName + " --open-url %U";
icon = "@out@/share/pixmaps/code.png";
startupNotify = "true";
categories = "Utility;TextEditor;Development;IDE;";
mimeType = "x-scheme-handler/vscode;";
extraEntries = ''
NoDisplay=true
Keywords=vscode;
'';
}; };
buildInputs = if stdenv.hostPlatform.system == "x86_64-darwin" buildInputs = if stdenv.hostPlatform.system == "x86_64-darwin"
@ -73,6 +103,8 @@ in
mkdir -p $out/share/applications mkdir -p $out/share/applications
substitute $desktopItem/share/applications/${executableName}.desktop $out/share/applications/${executableName}.desktop \ substitute $desktopItem/share/applications/${executableName}.desktop $out/share/applications/${executableName}.desktop \
--subst-var out --subst-var out
substitute $urlHandlerDesktopItem/share/applications/${executableName}-url-handler.desktop $out/share/applications/${executableName}-url-handler.desktop \
--subst-var out
mkdir -p $out/share/pixmaps mkdir -p $out/share/pixmaps
cp $out/lib/vscode/resources/app/resources/linux/code.png $out/share/pixmaps/code.png cp $out/lib/vscode/resources/app/resources/linux/code.png $out/share/pixmaps/code.png

View File

@ -8,13 +8,13 @@ assert useUnrar -> unrar != null;
stdenv.mkDerivation rec { stdenv.mkDerivation rec {
name = "ahoviewer-${version}"; name = "ahoviewer-${version}";
version = "1.6.4"; version = "1.6.5";
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "ahodesuka"; owner = "ahodesuka";
repo = "ahoviewer"; repo = "ahoviewer";
rev = version; rev = version;
sha256 = "144jmk8w7dnmqy4w81b3kzama7i97chx16pgax2facn72a92921q"; sha256 = "1avdl4qcpznvf3s2id5qi1vnzy4wgh6vxpnrz777a1s4iydxpcd8";
}; };
enableParallelBuilding = true; enableParallelBuilding = true;

View File

@ -1,14 +1,14 @@
{ stdenv, python3, fetchFromGitHub, fetchpatch }: { stdenv, python3, fetchFromGitHub, fetchpatch }:
with python3.pkgs; buildPythonApplication rec { with python3.pkgs; buildPythonApplication rec {
version = "3.8"; version = "4.1";
pname = "buku"; pname = "buku";
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "jarun"; owner = "jarun";
repo = "buku"; repo = "buku";
rev = "v${version}"; rev = "v${version}";
sha256 = "0gv26c4rr1akcaiff1nrwil03sv7d58mfxr86pgsw6nwld67ns0r"; sha256 = "166l1fmpqn4hys4l0ssc4yd590mmav1w62vm9l5ijhjhmlnrzfax";
}; };
checkInputs = [ checkInputs = [
@ -33,8 +33,17 @@ with python3.pkgs; buildPythonApplication rec {
arrow arrow
werkzeug werkzeug
click click
html5lib
vcrpy
]; ];
postPatch = ''
# Jailbreak problematic dependencies
sed -i \
-e "s,'PyYAML.*','PyYAML',g" \
setup.py
'';
preCheck = '' preCheck = ''
# Fixes two tests for wrong encoding # Fixes two tests for wrong encoding
export PYTHONIOENCODING=utf-8 export PYTHONIOENCODING=utf-8

View File

@ -1,21 +1,30 @@
{ stdenv, buildPythonApplication, fetchFromGitHub, requests, dmenu }: { stdenv, buildPythonApplication, fetchFromGitHub, substituteAll, requests, dmenu }:
buildPythonApplication rec { buildPythonApplication rec {
name = "dmensamenu-${version}"; pname = "dmensamenu";
version = "1.1.1"; version = "1.2.1";
propagatedBuildInputs = [
requests
dmenu
];
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "dotlambda"; owner = "dotlambda";
repo = "dmensamenu"; repo = "dmensamenu";
rev = version; rev = version;
sha256 = "0gc23k2zbv9zfc0v27y4spiva8cizxavpzd5pch5qbawh2lak6a3"; sha256 = "15c8g2vdban3dw3g979icypgpx52irpvv39indgk19adicgnzzqp";
}; };
patches = [
(substituteAll {
src = ./dmenu-path.patch;
inherit dmenu;
})
];
propagatedBuildInputs = [
requests
];
# No tests implemented
doCheck = false;
meta = with stdenv.lib; { meta = with stdenv.lib; {
homepage = https://github.com/dotlambda/dmensamenu; homepage = https://github.com/dotlambda/dmensamenu;
description = "Print German canteen menus using dmenu and OpenMensa"; description = "Print German canteen menus using dmenu and OpenMensa";
View File
@ -0,0 +1,13 @@
diff --git a/dmensamenu/dmensamenu.py b/dmensamenu/dmensamenu.py
index 7df49f2..052ef1b 100644
--- a/dmensamenu/dmensamenu.py
+++ b/dmensamenu/dmensamenu.py
@@ -99,7 +99,7 @@ def main():
parser.add_argument('--city',
help='When searching for a canteen, only show the ones from the city specified'
+' (case-insensitive).')
- parser.add_argument('--dmenu', metavar='CMD', default='dmenu -i -l "$lines" -p "$date"',
+ parser.add_argument('--dmenu', metavar='CMD', default='@dmenu@/bin/dmenu -i -l "$lines" -p "$date"',
help='Command to execute. '
'Can be used to pass custom parameters to dmenu. '
'The shell variable $lines will be set to the number of items on the menu '
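The @dmenu@ placeholder in the patch above is filled in by the substituteAll call in the dmensamenu expression, so the patched default invokes dmenu by its store path instead of relying on $PATH. A minimal sketch of that general pattern; the names someTool and use-store-path.patch are illustrative and not taken from this diff:

  { stdenv, fetchFromGitHub, substituteAll, someTool }:

  stdenv.mkDerivation rec {
    pname = "example";
    version = "1.0";
    src = fetchFromGitHub {
      owner = "example"; repo = "example"; rev = version;
      sha256 = stdenv.lib.fakeSha256;   # placeholder, not a real hash
    };
    patches = [
      # substituteAll copies the patch and replaces every @someTool@
      # occurrence with the store path of the someTool derivation.
      (substituteAll {
        src = ./use-store-path.patch;
        inherit someTool;
      })
    ];
  }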
View File
@ -1,14 +1,14 @@
{ stdenv, fetchFromGitHub, makeWrapper, xdg_utils, file, coreutils }: { stdenv, fetchFromGitHub, makeWrapper, xdg_utils, file, coreutils }:
stdenv.mkDerivation rec { stdenv.mkDerivation rec {
name = "fff"; pname = "fff";
version = "1.5"; version = "2.0";
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "dylanaraps"; owner = "dylanaraps";
repo = name; repo = pname;
rev = version; rev = version;
sha256 = "0jvv9mwj0qw3rmg1f17wbvx9fl5kxzmkp6j1113l3a6w1na83js0"; sha256 = "0pqxqg1gnl3kgqma5vb0wcy4n9xbm0dp7g7dxl60cwcyqvd4vm3i";
}; };
pathAdd = stdenv.lib.makeSearchPath "bin" [ xdg_utils file coreutils ]; pathAdd = stdenv.lib.makeSearchPath "bin" [ xdg_utils file coreutils ];
View File
@ -5,13 +5,13 @@
stdenv.mkDerivation rec { stdenv.mkDerivation rec {
pname = "font-manager"; pname = "font-manager";
version = "0.7.4.1"; version = "0.7.4.2";
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "FontManager"; owner = "FontManager";
repo = "master"; repo = "master";
rev = version; rev = version;
sha256 = "1zy419zzc95h4gxvl88acqjbwlnmwybj23rx3vkc62j3v3w4nlay"; sha256 = "15814czap0qg2h9nkcn9fg4i4xxa1lgw1vi6h3hi242qfwc7fh3i";
}; };
nativeBuildInputs = [ nativeBuildInputs = [
@ -49,6 +49,10 @@ stdenv.mkDerivation rec {
patchShebangs meson_post_install.py patchShebangs meson_post_install.py
''; '';
postInstall = ''
rm $out/share/applications/mimeinfo.cache
'';
meta = { meta = {
homepage = https://fontmanager.github.io/; homepage = https://fontmanager.github.io/;
description = "Simple font management for GTK+ desktop environments"; description = "Simple font management for GTK+ desktop environments";
View File
@ -7,7 +7,7 @@
with python3Packages; with python3Packages;
buildPythonApplication rec { buildPythonApplication rec {
version = "0.13.2"; version = "0.13.3";
name = "kitty-${version}"; name = "kitty-${version}";
format = "other"; format = "other";
@ -15,7 +15,7 @@ buildPythonApplication rec {
owner = "kovidgoyal"; owner = "kovidgoyal";
repo = "kitty"; repo = "kitty";
rev = "v${version}"; rev = "v${version}";
sha256 = "1w93fq4rks6va0aapz6f6l1cn6zhchrfq8fv39xb6x0llx78dimx"; sha256 = "1y0vd75j8g61jdj8miml79w5ri3pqli5rv9iq6zdrxvzfa4b2rmb";
}; };
buildInputs = [ buildInputs = [
View File
@ -4,13 +4,13 @@
stdenv.mkDerivation rec { stdenv.mkDerivation rec {
name = "libosmocore-${version}"; name = "libosmocore-${version}";
version = "0.12.1"; version = "1.0.1";
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "osmocom"; owner = "osmocom";
repo = "libosmocore"; repo = "libosmocore";
rev = version; rev = version;
sha256 = "140c9jii0qs00s50kji1znc2339s22x8sz259x4pj35rrjzyyjgp"; sha256 = "08xbj2calh1zkp79kxbq01vnh0y7nkgd4cgsivrzlyqahilbzvd9";
}; };
propagatedBuildInputs = [ propagatedBuildInputs = [
View File
@ -1,6 +1,6 @@
{ stdenv, fetchFromGitHub, qt4, qmake4Hook, libpulseaudio }: { stdenv, fetchFromGitHub, qt4, qmake4Hook, libpulseaudio }:
let let
version = "1.1.6"; version = "1.1.7";
in in
stdenv.mkDerivation { stdenv.mkDerivation {
name = "multimon-ng-${version}"; name = "multimon-ng-${version}";
@ -9,7 +9,7 @@ stdenv.mkDerivation {
owner = "EliasOenal"; owner = "EliasOenal";
repo = "multimon-ng"; repo = "multimon-ng";
rev = "${version}"; rev = "${version}";
sha256 = "1a166mh73x77yrrnhhhzk44qrkgwav26vpidv1547zj3x3m8p0bm"; sha256 = "11wfk8jw86z44y0ji4jr4s8ig3zwxp6g9h3sl81pvk6l3ipqqbgi";
}; };
buildInputs = [ qt4 libpulseaudio ]; buildInputs = [ qt4 libpulseaudio ];
View File
@ -4,13 +4,13 @@
stdenv.mkDerivation rec { stdenv.mkDerivation rec {
name = "${product}-${version}"; name = "${product}-${version}";
product = "pdfpc"; product = "pdfpc";
version = "4.3.0"; version = "4.3.1_0";
src = fetchFromGitHub { src = fetchFromGitHub {
repo = "pdfpc"; repo = "pdfpc";
owner = "pdfpc"; owner = "pdfpc";
rev = "v${version}"; rev = "v${version}";
sha256 = "1ild2p2lv89yj74fbbdsg3jb8dxpzdamsw0l0xs5h20fd2lsrwcd"; sha256 = "04bvgpdy3l030jd1f87a94lz4lky29skpak3k0bzazsajwpywprd";
}; };
nativeBuildInputs = [ nativeBuildInputs = [
View File
@ -10,12 +10,12 @@
stdenv.mkDerivation rec { stdenv.mkDerivation rec {
name = "polar-bookshelf-${version}"; name = "polar-bookshelf-${version}";
version = "1.8.0"; version = "1.9.0";
# fetching a .deb because there's no easy way to package this Electron app # fetching a .deb because there's no easy way to package this Electron app
src = fetchurl { src = fetchurl {
url = "https://github.com/burtonator/polar-bookshelf/releases/download/v${version}/polar-bookshelf-${version}-amd64.deb"; url = "https://github.com/burtonator/polar-bookshelf/releases/download/v${version}/polar-bookshelf-${version}-amd64.deb";
sha256 = "0zbk8msc5p6ivldkznab8klzsgd31hd4hs5kkjzw1iy082cmrjv5"; sha256 = "1kvgmb7kvqc6pzcr0yp8x9mxwymiy85yr0cx3k2sclqlksrc5dzx";
}; };
buildInputs = [ buildInputs = [
View File
@ -6,12 +6,12 @@ let inherit (python3Packages) python buildPythonApplication fetchPypi;
in buildPythonApplication rec { in buildPythonApplication rec {
name = "${pname}-${version}"; name = "${pname}-${version}";
pname = "safeeyes"; pname = "safeeyes";
version = "2.0.6"; version = "2.0.8";
namePrefix = ""; namePrefix = "";
src = fetchPypi { src = fetchPypi {
inherit pname version; inherit pname version;
sha256 = "0s14pxicgq33srvhf6bvfq48wv3z4rlsmzkccz4ky9vh3gfx7zka"; sha256 = "08acrf9sngjjmplszjxzfq3af9xg4xscga94q0lkck2l1kqckc2l";
}; };
buildInputs = [ buildInputs = [
View File
@ -7,7 +7,7 @@
let let
version = "0.7.0"; version = "0.7.1";
modulesVersion = with lib; versions.major version + "." + versions.minor version; modulesVersion = with lib; versions.major version + "." + versions.minor version;
modulesPath = "lib/SoapySDR/modules" + modulesVersion; modulesPath = "lib/SoapySDR/modules" + modulesVersion;
extraPackagesSearchPath = lib.makeSearchPath modulesPath extraPackages; extraPackagesSearchPath = lib.makeSearchPath modulesPath extraPackages;
@ -19,7 +19,7 @@ in stdenv.mkDerivation {
owner = "pothosware"; owner = "pothosware";
repo = "SoapySDR"; repo = "SoapySDR";
rev = "soapy-sdr-${version}"; rev = "soapy-sdr-${version}";
sha256 = "14fjwnfj7jz9ixvim2gy4f52y6s7d4xggzxn2ck7g4q35d879x13"; sha256 = "1rbnd3w12kzsh94fiywyn4vch7h0kf75m88fi6nq992b3vnmiwvl";
}; };
nativeBuildInputs = [ cmake makeWrapper pkgconfig ]; nativeBuildInputs = [ cmake makeWrapper pkgconfig ];
View File
@ -3,13 +3,13 @@
python3Packages.buildPythonApplication rec { python3Packages.buildPythonApplication rec {
name = "urh-${version}"; name = "urh-${version}";
version = "2.5.4"; version = "2.5.5";
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "jopohl"; owner = "jopohl";
repo = "urh"; repo = "urh";
rev = "v${version}"; rev = "v${version}";
sha256 = "06mz35jnmy6rchsnlk2s81fdwnc7zvx496q4ihjb9qybhyka79ay"; sha256 = "14aw8bvqb32976qmm124i5sv99nwv1jvs1r9ylbsmlg31dvla7ql";
}; };
buildInputs = [ hackrf rtl-sdr airspy limesuite ]; buildInputs = [ hackrf rtl-sdr airspy limesuite ];
View File
@ -2,13 +2,13 @@
python3Packages.buildPythonApplication rec { python3Packages.buildPythonApplication rec {
pname = "urlscan"; pname = "urlscan";
version = "0.9.1"; version = "0.9.2";
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "firecat53"; owner = "firecat53";
repo = pname; repo = pname;
rev = version; rev = version;
sha256 = "0np7w38wzs72kxap9fsdliafqs0xfqnfj01i7b0fh7k235bgrapz"; sha256 = "16cc1vvvhylrl9208d253k11rqzi95mg7hrf7xbd0bqxvd6rmxar";
}; };
propagatedBuildInputs = [ python3Packages.urwid ]; propagatedBuildInputs = [ python3Packages.urwid ];
View File
@ -4,13 +4,13 @@
buildPythonApplication rec { buildPythonApplication rec {
name = "${pname}-${version}"; name = "${pname}-${version}";
pname = "visidata"; pname = "visidata";
version = "1.5.1"; version = "1.5.2";
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "saulpw"; owner = "saulpw";
repo = "visidata"; repo = "visidata";
rev = "v${version}"; rev = "v${version}";
sha256 = "1pflv7nnv9nyfhynrdbh5pgvjxzj53hgqd972dis9rwwwkla26ng"; sha256 = "19gs8i6chrrwibz706gib5sixx1cjgfzh7v011kp3izcrn524mc0";
}; };
propagatedBuildInputs = [dateutil pyyaml openpyxl xlrd h5py fonttools propagatedBuildInputs = [dateutil pyyaml openpyxl xlrd h5py fonttools
View File
@ -4,13 +4,13 @@
stdenv.mkDerivation rec { stdenv.mkDerivation rec {
name = "xmrig-${version}"; name = "xmrig-${version}";
version = "2.8.3"; version = "2.10.0";
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "xmrig"; owner = "xmrig";
repo = "xmrig"; repo = "xmrig";
rev = "v${version}"; rev = "v${version}";
sha256 = "144i24c707fja89iqcc511b4077p53q8w2cq5zd26hry2i4i3abi"; sha256 = "10nqwxj8j2ciw2h178g2z5lrzv48xsi2a4v6s0ha93hfbjzvag5a";
}; };
nativeBuildInputs = [ cmake ]; nativeBuildInputs = [ cmake ];
View File
@ -12,6 +12,7 @@
, utillinux, alsaLib , utillinux, alsaLib
, bison, gperf , bison, gperf
, glib, gtk2, gtk3, dbus-glib , glib, gtk2, gtk3, dbus-glib
, glibc
, libXScrnSaver, libXcursor, libXtst, libGLU_combined , libXScrnSaver, libXcursor, libXtst, libGLU_combined
, protobuf, speechd, libXdamage, cups , protobuf, speechd, libXdamage, cups
, ffmpeg, libxslt, libxml2, at-spi2-core , ffmpeg, libxslt, libxml2, at-spi2-core
@ -163,6 +164,17 @@ let
'return sandbox_binary;' \ 'return sandbox_binary;' \
'return base::FilePath(GetDevelSandboxPath());' 'return base::FilePath(GetDevelSandboxPath());'
substituteInPlace services/audio/audio_sandbox_hook_linux.cc \
--replace \
'/usr/share/alsa/' \
'${alsaLib}/share/alsa/' \
--replace \
'/usr/lib/x86_64-linux-gnu/gconv/' \
'${glibc}/lib/gconv/' \
--replace \
'/usr/share/locale/' \
'${glibc}/share/locale/'
sed -i -e 's@"\(#!\)\?.*xdg-@"\1${xdg_utils}/bin/xdg-@' \ sed -i -e 's@"\(#!\)\?.*xdg-@"\1${xdg_utils}/bin/xdg-@' \
chrome/browser/shell_integration_linux.cc chrome/browser/shell_integration_linux.cc
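The substituteInPlace added above rewrites paths that the Chromium audio-service sandbox hook expects under /usr, which do not exist on NixOS, into Nix store paths, so the sandboxed audio service can still read ALSA configuration, glibc gconv modules and locale data. The ${alsaLib} and ${glibc} interpolations expand to concrete store paths at build time; roughly, with hash and version as placeholders rather than real values:

  '${alsaLib}/share/alsa/'  becomes  '/nix/store/<hash>-alsa-lib-<version>/share/alsa/'
  '${glibc}/lib/gconv/'     becomes  '/nix/store/<hash>-glibc-<version>/lib/gconv/'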
View File
@ -1,18 +1,18 @@
# This file is autogenerated from update.sh in the same directory. # This file is autogenerated from update.sh in the same directory.
{ {
beta = { beta = {
sha256 = "1xcdbf5yia3xm0kil0gyd1mlj3m902w1px3lzpdqv31mr2lzaz08"; sha256 = "01l0vlvcckpag376mjld7qprv63l0z8li689k0h6v3h0i7irzs6z";
sha256bin64 = "0pcbz3201nyl07psdxwphb3z9shqj4crj16f97xclyvjnwpl1jnp"; sha256bin64 = "1dwxys43hn72inxja27jqq3mkiri6nf7ysrfwnnlvyg2iqz83avx";
version = "72.0.3626.28"; version = "72.0.3626.81";
}; };
dev = { dev = {
sha256 = "1vlpcafg3xx6bpnf74xs6ifqjbpb5bpxp10r55w4784yr57pmhq3"; sha256 = "1mdna7k715bxxd6cli4zryclp2p5l6i2dvfgzsfifgvgf2915hiz";
sha256bin64 = "02y974zbxy1gbiv9q8hp7nfl0l5frn35ggmgc44g90pbry48h8rg"; sha256bin64 = "01w05dpmc7h0pwh0rjslr3iqaxhmnb12nmj4rs7w1yq9c58zf1qr";
version = "73.0.3642.0"; version = "73.0.3679.0";
}; };
stable = { stable = {
sha256 = "0icxdg4fvz30jzq0xvl11zlwc9anb3lr9lb8sn1lqxr513isjmhw"; sha256 = "01l0vlvcckpag376mjld7qprv63l0z8li689k0h6v3h0i7irzs6z";
sha256bin64 = "07kiqx5bpk54il0ynxl61bs5yscxb470q2bw3sx6cxjbhmnvbcn2"; sha256bin64 = "09fsj90sjw3srkrq12l2bh39r172s783riyzi5y2g0wlyhxalpql";
version = "71.0.3578.98"; version = "72.0.3626.81";
}; };
} }
View File
@ -1,6 +1,7 @@
{ pname, ffversion, meta, updateScript ? null { pname, ffversion, meta, updateScript ? null
, src, unpackPhase ? null, patches ? [] , src, unpackPhase ? null, patches ? []
, extraNativeBuildInputs ? [], extraConfigureFlags ? [], extraMakeFlags ? [] , extraNativeBuildInputs ? [], extraConfigureFlags ? [], extraMakeFlags ? []
, isIceCatLike ? false, icversion ? null
, isTorBrowserLike ? false, tbversion ? null }: , isTorBrowserLike ? false, tbversion ? null }:
{ lib, stdenv, pkgconfig, pango, perl, python2, zip, libIDL { lib, stdenv, pkgconfig, pango, perl, python2, zip, libIDL
@ -25,7 +26,7 @@
## privacy-related options ## privacy-related options
, privacySupport ? isTorBrowserLike , privacySupport ? isTorBrowserLike || isIceCatLike
# WARNING: NEVER set any of the options below to `true` by default. # WARNING: NEVER set any of the options below to `true` by default.
# Set to `privacySupport` or `false`. # Set to `privacySupport` or `false`.
@ -75,17 +76,37 @@ let
default-toolkit = if stdenv.isDarwin then "cairo-cocoa" default-toolkit = if stdenv.isDarwin then "cairo-cocoa"
else "cairo-gtk${if gtk3Support then "3" else "2"}"; else "cairo-gtk${if gtk3Support then "3" else "2"}";
binaryName = if isIceCatLike then "icecat" else "firefox";
binaryNameCapitalized = lib.toUpper (lib.substring 0 1 binaryName) + lib.substring 1 (-1) binaryName;
browserName = if stdenv.isDarwin then binaryNameCapitalized else binaryName;
execdir = if stdenv.isDarwin execdir = if stdenv.isDarwin
then "/Applications/${browserName}.app/Contents/MacOS" then "/Applications/${binaryNameCapitalized}.app/Contents/MacOS"
else "/bin"; else "/bin";
browserName = if stdenv.isDarwin then "Firefox" else "firefox";
browserVersion = if isIceCatLike then icversion
else if isTorBrowserLike then tbversion
else ffversion;
browserPatches = [
./env_var_for_system_dir.patch
] ++ patches;
in in
stdenv.mkDerivation rec { stdenv.mkDerivation rec {
name = "${pname}-unwrapped-${version}"; name = "${pname}-unwrapped-${version}";
version = if !isTorBrowserLike then ffversion else tbversion; version = browserVersion;
inherit src unpackPhase patches meta; inherit src unpackPhase meta;
patches = browserPatches;
# Ignore trivial whitespace changes in patches; this fixes compatibility of # Ignore trivial whitespace changes in patches; this fixes compatibility of
# ./env_var_for_system_dir.patch with Firefox >=65 without having to track
# two patches.
patchFlags = [ "-p1" "-l" ];
buildInputs = [ buildInputs = [
gtk2 perl zip libIDL libjpeg zlib bzip2 gtk2 perl zip libIDL libjpeg zlib bzip2
@ -265,22 +286,22 @@ stdenv.mkDerivation rec {
installPhase = if stdenv.isDarwin then '' installPhase = if stdenv.isDarwin then ''
mkdir -p $out/Applications mkdir -p $out/Applications
cp -LR dist/Firefox.app $out/Applications cp -LR dist/${binaryNameCapitalized}.app $out/Applications
'' else null; '' else null;
postInstall = lib.optionalString stdenv.isLinux '' postInstall = lib.optionalString stdenv.isLinux ''
# Remove SDK cruft. FIXME: move to a separate output? # Remove SDK cruft. FIXME: move to a separate output?
rm -rf $out/share/idl $out/include $out/lib/firefox-devel-* rm -rf $out/share/idl $out/include $out/lib/${binaryName}-devel-*
# Needed to find Mozilla runtime # Needed to find Mozilla runtime
gappsWrapperArgs+=(--argv0 "$out/bin/.firefox-wrapped") gappsWrapperArgs+=(--argv0 "$out/bin/.${binaryName}-wrapped")
''; '';
postFixup = lib.optionalString stdenv.isLinux '' postFixup = lib.optionalString stdenv.isLinux ''
# Fix notifications. LibXUL uses dlopen for this, unfortunately; see #18712. # Fix notifications. LibXUL uses dlopen for this, unfortunately; see #18712.
patchelf --set-rpath "${lib.getLib libnotify patchelf --set-rpath "${lib.getLib libnotify
}/lib:$(patchelf --print-rpath "$out"/lib/firefox*/libxul.so)" \ }/lib:$(patchelf --print-rpath "$out"/lib/${binaryName}*/libxul.so)" \
"$out"/lib/firefox*/libxul.so "$out"/lib/${binaryName}*/libxul.so
''; '';
doInstallCheck = true; doInstallCheck = true;
@ -292,6 +313,7 @@ stdenv.mkDerivation rec {
passthru = { passthru = {
inherit version updateScript; inherit version updateScript;
isFirefox3Like = true; isFirefox3Like = true;
inherit isIceCatLike;
inherit isTorBrowserLike; inherit isTorBrowserLike;
gtk = gtk2; gtk = gtk2;
inherit nspr; inherit nspr;
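A quick way to check what the naming helpers added above evaluate to, assuming the "icecat" binaryName used for the IceCat variants introduced below; this snippet is only an illustration and can be evaluated with nix-instantiate --eval:

  let
    lib = (import <nixpkgs> {}).lib;
    binaryName = "icecat";
    binaryNameCapitalized =
      lib.toUpper (lib.substring 0 1 binaryName)
      + lib.substring 1 (-1) binaryName;                  # "Icecat"
  in "/Applications/${binaryNameCapitalized}.app/Contents/MacOS"
  # evaluates to "/Applications/Icecat.app/Contents/MacOS", i.e. the Darwin execdir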
View File
@ -0,0 +1,23 @@
diff -ur firefox-65.0-orig/docshell/base/nsAboutRedirector.cpp firefox-65.0/docshell/base/nsAboutRedirector.cpp
--- firefox-65.0-orig/docshell/base/nsAboutRedirector.cpp 2019-01-23 00:48:28.988747428 +0100
+++ firefox-65.0/docshell/base/nsAboutRedirector.cpp 2019-01-23 00:51:13.378188397 +0100
@@ -67,8 +67,6 @@
{"about", "chrome://global/content/aboutAbout.xhtml", 0},
{"addons", "chrome://mozapps/content/extensions/extensions.xul",
nsIAboutModule::ALLOW_SCRIPT},
- {"buildconfig", "chrome://global/content/buildconfig.html",
- nsIAboutModule::URI_SAFE_FOR_UNTRUSTED_CONTENT},
{"checkerboard", "chrome://global/content/aboutCheckerboard.xhtml",
nsIAboutModule::URI_SAFE_FOR_UNTRUSTED_CONTENT |
nsIAboutModule::ALLOW_SCRIPT},
diff -ur firefox-65.0-orig/toolkit/content/jar.mn firefox-65.0/toolkit/content/jar.mn
--- firefox-65.0-orig/toolkit/content/jar.mn 2019-01-23 00:48:35.033372506 +0100
+++ firefox-65.0/toolkit/content/jar.mn 2019-01-23 00:50:45.126565924 +0100
@@ -36,7 +36,6 @@
content/global/plugins.css
content/global/browser-child.js
content/global/browser-content.js
-* content/global/buildconfig.html
content/global/buildconfig.css
content/global/contentAreaUtils.js
content/global/datepicker.xhtml
View File
@ -4,24 +4,20 @@ let
common = opts: callPackage (import ./common.nix opts) {}; common = opts: callPackage (import ./common.nix opts) {};
nixpkgsPatches = [
./env_var_for_system_dir.patch
];
in in
rec { rec {
firefox = common rec { firefox = common rec {
pname = "firefox"; pname = "firefox";
ffversion = "64.0.2"; ffversion = "65.0";
src = fetchurl { src = fetchurl {
url = "mirror://mozilla/firefox/releases/${ffversion}/source/firefox-${ffversion}.source.tar.xz"; url = "mirror://mozilla/firefox/releases/${ffversion}/source/firefox-${ffversion}.source.tar.xz";
sha512 = "2xvzbx20i2qwld04g3wl9j6j8bkcja3i83sf9cpngayllhrjki29020izrwjxrgm0z3isg7zijw656v1v2zzmhlfkpkbk71n2gjj7md"; sha512 = "39bx76whgf53rkfqqy8gfhd44wikh89zpnqr930v4grqg3v0pfr8mbvp7xzjjlf5r7bski0wxibn9vyyy273fp99zyj1w2m5ihh9aqh";
}; };
patches = nixpkgsPatches ++ [ patches = [
./no-buildconfig.patch ./no-buildconfig-ffx65.patch
]; ];
extraNativeBuildInputs = [ python3 ]; extraNativeBuildInputs = [ python3 ];
@ -39,6 +35,11 @@ rec {
}; };
}; };
# Do not remove. This is the last version of Firefox that supports
# the old plugins. While this package is unsafe to use for browsing
# the web, there are many old useful plugins targeting offline
# activities (e.g. ebook readers, synchronous translation, etc.) that
# will probably never be ported to WebExtensions API.
firefox-esr-52 = common rec { firefox-esr-52 = common rec {
pname = "firefox-esr"; pname = "firefox-esr";
ffversion = "52.9.0esr"; ffversion = "52.9.0esr";
@ -47,7 +48,7 @@ rec {
sha512 = "bfca42668ca78a12a9fb56368f4aae5334b1f7a71966fbba4c32b9c5e6597aac79a6e340ac3966779d2d5563eb47c054ab33cc40bfb7306172138ccbd3adb2b9"; sha512 = "bfca42668ca78a12a9fb56368f4aae5334b1f7a71966fbba4c32b9c5e6597aac79a6e340ac3966779d2d5563eb47c054ab33cc40bfb7306172138ccbd3adb2b9";
}; };
patches = nixpkgsPatches ++ [ patches = [
# this one is actually an omnipresent bug # this one is actually an omnipresent bug
# https://bugzilla.mozilla.org/show_bug.cgi?id=1444519 # https://bugzilla.mozilla.org/show_bug.cgi?id=1444519
./fix-pa-context-connect-retval.patch ./fix-pa-context-connect-retval.patch
@ -66,14 +67,14 @@ rec {
firefox-esr-60 = common rec { firefox-esr-60 = common rec {
pname = "firefox-esr"; pname = "firefox-esr";
ffversion = "60.4.0esr"; ffversion = "60.5.0esr";
src = fetchurl { src = fetchurl {
url = "mirror://mozilla/firefox/releases/${ffversion}/source/firefox-${ffversion}.source.tar.xz"; url = "mirror://mozilla/firefox/releases/${ffversion}/source/firefox-${ffversion}.source.tar.xz";
sha512 = "3a2r2xyxqw86ihzbmzmxmj8wh3ay4mrjqrnyn73yl6ry19m1pjqbmy1fxnsmxnykfn35a1w18gmbj26kpn1yy7hif37cvy05wmza6c1"; sha512 = "3n7l146gdjwhi0iq85awc0yykvi4x5m91mcylxa5mrq911bv6xgn2i92nzhgnhdilqap5218778vgvnalikzsh67irrncx1hy5f6iyx";
}; };
patches = nixpkgsPatches ++ [ patches = [
./no-buildconfig.patch ./no-buildconfig-ffx65.patch
# this one is actually an omnipresent bug # this one is actually an omnipresent bug
# https://bugzilla.mozilla.org/show_bug.cgi?id=1444519 # https://bugzilla.mozilla.org/show_bug.cgi?id=1444519
@ -92,6 +93,81 @@ rec {
} // (let } // (let
iccommon = args: common (args // {
pname = "icecat";
isIceCatLike = true;
meta = (args.meta or {}) // {
description = "The GNU version of the Firefox web browser";
longDescription = ''
GNUzilla is the GNU version of the Mozilla suite, and GNU
IceCat is the GNU version of the Firefox web browser.
Notable differences from mainline Firefox:
- entirely free software, no non-free plugins, addons,
artwork,
- no telemetry, no "studies",
- sane privacy and security defaults (for instance, unlike
Firefox, IceCat does _zero_ network requests on startup by
default, which means that with IceCat you won't need to
unplug your Ethernet cable each time you want to create a
new browser profile without announcing that action to a
bunch of data-hungry corporations),
- all essential privacy and security settings can be
configured directly from the main screen,
- optional first party isolation (like TorBrowser),
- comes with HTTPS Everywhere (like TorBrowser), Tor Browser
Button (like TorBrowser Bundle), LibreJS, and SpyBlock
plugins out of the box.
This package can be installed together with Firefox and
TorBrowser; it will use distinct binary names and profile
directories.
'';
homepage = "https://www.gnu.org/software/gnuzilla/";
platforms = lib.platforms.unix;
license = with lib.licenses; [ mpl20 gpl3Plus ];
};
});
in rec {
icecat = iccommon rec {
ffversion = "60.3.0";
icversion = "${ffversion}-gnu1";
src = fetchurl {
url = "mirror://gnu/gnuzilla/${ffversion}/icecat-${icversion}.tar.bz2";
sha256 = "0icnl64nxcyf7dprpdpygxhabsvyhps8c3ixysj9bcdlj9q34ib1";
};
patches = [
./no-buildconfig.patch
];
};
# Similarly to firefox-esr-52 above.
icecat-52 = iccommon rec {
ffversion = "52.6.0";
icversion = "${ffversion}-gnu1";
src = fetchurl {
url = "mirror://gnu/gnuzilla/${ffversion}/icecat-${icversion}.tar.bz2";
sha256 = "09fn54glqg1aa93hnz5zdcy07cps09dbni2b4200azh6nang630a";
};
patches = [
# this one is actually an omnipresent bug
# https://bugzilla.mozilla.org/show_bug.cgi?id=1444519
./fix-pa-context-connect-retval.patch
];
meta.knownVulnerabilities = [ "Support ended in August 2018." ];
};
}) // (let
tbcommon = args: common (args // { tbcommon = args: common (args // {
pname = "tor-browser"; pname = "tor-browser";
isTorBrowserLike = true; isTorBrowserLike = true;
@ -107,9 +183,7 @@ rec {
find . -exec touch -d'2010-01-01 00:00' {} \; find . -exec touch -d'2010-01-01 00:00' {} \;
''; '';
patches = nixpkgsPatches; meta = (args.meta or {}) // {
meta = {
description = "A web browser built from TorBrowser source tree"; description = "A web browser built from TorBrowser source tree";
longDescription = '' longDescription = ''
This is a version of TorBrowser with bundle-related patches This is a version of TorBrowser with bundle-related patches
@ -138,9 +212,9 @@ rec {
Or just use `tor-browser-bundle` package that packs this Or just use `tor-browser-bundle` package that packs this
`tor-browser` back into a sanely-built bundle. `tor-browser` back into a sanely-built bundle.
''; '';
homepage = https://www.torproject.org/projects/torbrowser.html; homepage = "https://www.torproject.org/projects/torbrowser.html";
platforms = lib.platforms.linux; platforms = lib.platforms.unix;
license = lib.licenses.bsd3; license = with lib.licenses; [ mpl20 bsd3 ];
}; };
}); });
@ -163,16 +237,16 @@ in rec {
}; };
tor-browser-8-0 = tbcommon rec { tor-browser-8-0 = tbcommon rec {
ffversion = "60.3.0esr"; ffversion = "60.5.0esr";
tbversion = "8.0.3"; tbversion = "8.0.5";
# FIXME: fetchFromGitHub is not ideal, unpacked source is >900Mb # FIXME: fetchFromGitHub is not ideal, unpacked source is >900Mb
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "SLNOS"; owner = "SLNOS";
repo = "tor-browser"; repo = "tor-browser";
# branch "tor-browser-60.3.0esr-8.0-1-slnos" # branch "tor-browser-60.5.0esr-8.0-1-slnos"
rev = "bd512ad9c40069adfc983f4f03dbd9d220cdf2f9"; rev = "7f113a4ea0539bd2ea9687fe4296c880f2b006c4";
sha256 = "1j349aqiqrf58zrx8pkqvh292w41v1vwr7x7dmd74hq4pi2iwpn8"; sha256 = "11qbhwy2q9rinfw8337b9f78x0r26lnxg25581z85vxshp2jszdq";
}; };
}; };
View File
@ -8,12 +8,12 @@
}: }:
stdenv.mkDerivation rec { stdenv.mkDerivation rec {
version = "2.17"; version = "2.18";
name = "links2-${version}"; name = "links2-${version}";
src = fetchurl { src = fetchurl {
url = "${meta.homepage}/download/links-${version}.tar.bz2"; url = "${meta.homepage}/download/links-${version}.tar.bz2";
sha256 = "0dh2gbzcw8kxy81z4ggsynibnqs56b83vy8qgz7illsag1irff6q"; sha256 = "0mwhh61klicn2vwk39nc7y4cw4mygzdi2nljn4r0gjbw6jmw3337";
}; };
buildInputs = with stdenv.lib; buildInputs = with stdenv.lib;
View File
@ -89,7 +89,7 @@ let
fteLibPath = makeLibraryPath [ stdenv.cc.cc gmp ]; fteLibPath = makeLibraryPath [ stdenv.cc.cc gmp ];
# Upstream source # Upstream source
version = "8.0.4"; version = "8.0.5";
lang = "en-US"; lang = "en-US";
@ -99,15 +99,15 @@ let
"https://github.com/TheTorProject/gettorbrowser/releases/download/v${version}/tor-browser-linux64-${version}_${lang}.tar.xz" "https://github.com/TheTorProject/gettorbrowser/releases/download/v${version}/tor-browser-linux64-${version}_${lang}.tar.xz"
"https://dist.torproject.org/torbrowser/${version}/tor-browser-linux64-${version}_${lang}.tar.xz" "https://dist.torproject.org/torbrowser/${version}/tor-browser-linux64-${version}_${lang}.tar.xz"
]; ];
sha256 = "1hclxqk54w1diyr8lrgirhy6cwmw2rccg174hgv39zrj2a5ajvmm"; sha256 = "0afrq5vy6rxj4p2dm7kaiq3d3iv4g8ivn7nfqx0z8h1wikyaf5di";
}; };
"i686-linux" = fetchurl { "i686-linux" = fetchurl {
urls = [ urls = [
"https://github.com/TheTorProject/gettorbrowser/releases/download/v${version}/tor-browser-linux32-${version}_${lang}.tar.xz"
"https://dist.torproject.org/torbrowser/${version}/tor-browser-linux32-${version}_${lang}.tar.xz" "https://dist.torproject.org/torbrowser/${version}/tor-browser-linux32-${version}_${lang}.tar.xz"
"https://github.com/TheTorProject/gettorbrowser/releases/download/v${version}/tor-browser-linux32-${version}_${lang}.tar.xz"
]; ];
sha256 = "16393icjcck7brng1kq1vf4nacllcz1m3q3w2vs9rdkjfsazqh42"; sha256 = "113vn2fyw9sjxz24b2m6z4kw46rqgxglrna1lg9ji6zhkfb044vv";
}; };
}; };
in in
View File
@ -0,0 +1,24 @@
{ lib, buildGoPackage, fetchFromGitHub }:
buildGoPackage rec {
name = "argo-${version}";
version = "2.2.1";
src = fetchFromGitHub {
owner = "argoproj";
repo = "argo";
rev = "v${version}";
sha256 = "0x3aizwbqkg2712021wcq4chmwjhw2df702wbr6zd2a2cdypwb67";
};
goDeps = ./deps.nix;
goPackagePath = "github.com/argoproj/argo";
meta = with lib; {
description = "Container native workflow engine for Kubernetes";
homepage = https://github.com/argoproj/argo;
license = licenses.asl20;
maintainers = with maintainers; [ groodt ];
platforms = platforms.unix;
};
}
View File
@ -0,0 +1,687 @@
# file generated from Gopkg.lock using dep2nix (https://github.com/nixcloud/dep2nix)
[
{
goPackagePath = "cloud.google.com/go";
fetch = {
type = "git";
url = "https://code.googlesource.com/gocloud";
rev = "64a2037ec6be8a4b0c1d1f706ed35b428b989239";
sha256 = "149v3ci17g6wd2pm18mzcncq5qpl9hwdjnz3rlbn5rfidyn46la1";
};
}
{
goPackagePath = "github.com/Knetic/govaluate";
fetch = {
type = "git";
url = "https://github.com/Knetic/govaluate";
rev = "9aa49832a739dcd78a5542ff189fb82c3e423116";
sha256 = "12klijhq4fckzbhv0cwygbazj6lvhmdqksha9y6jgfmwzv51kwv5";
};
}
{
goPackagePath = "github.com/PuerkitoBio/purell";
fetch = {
type = "git";
url = "https://github.com/PuerkitoBio/purell";
rev = "0bcb03f4b4d0a9428594752bd2a3b9aa0a9d4bd4";
sha256 = "0vsxyn1fbm7g873b8kf3hcsgqgncb5nmfq3zfsc35a9yhzarka91";
};
}
{
goPackagePath = "github.com/PuerkitoBio/urlesc";
fetch = {
type = "git";
url = "https://github.com/PuerkitoBio/urlesc";
rev = "de5bf2ad457846296e2031421a34e2568e304e35";
sha256 = "0n0srpqwbaan1wrhh2b7ysz543pjs1xw2rghvqyffg9l0g8kzgcw";
};
}
{
goPackagePath = "github.com/argoproj/pkg";
fetch = {
type = "git";
url = "https://github.com/argoproj/pkg";
rev = "1aa3e0c55668da17703adba5c534fff6930db589";
sha256 = "0lr1dimm443qq3zzcrpialvxq9bl8pb3317zn34gmf1sycqh4iii";
};
}
{
goPackagePath = "github.com/beorn7/perks";
fetch = {
type = "git";
url = "https://github.com/beorn7/perks";
rev = "3a771d992973f24aa725d07868b467d1ddfceafb";
sha256 = "1l2lns4f5jabp61201sh88zf3b0q793w4zdgp9nll7mmfcxxjif3";
};
}
{
goPackagePath = "github.com/davecgh/go-spew";
fetch = {
type = "git";
url = "https://github.com/davecgh/go-spew";
rev = "346938d642f2ec3594ed81d874461961cd0faa76";
sha256 = "0d4jfmak5p6lb7n2r6yvf5p1zcw0l8j74kn55ghvr7zr7b7axm6c";
};
}
{
goPackagePath = "github.com/docker/spdystream";
fetch = {
type = "git";
url = "https://github.com/docker/spdystream";
rev = "bc6354cbbc295e925e4c611ffe90c1f287ee54db";
sha256 = "08746a15snvmax6cnzn2qy7cvsspxbsx97vdbjpdadir3pypjxya";
};
}
{
goPackagePath = "github.com/dustin/go-humanize";
fetch = {
type = "git";
url = "https://github.com/dustin/go-humanize";
rev = "9f541cc9db5d55bce703bd99987c9d5cb8eea45e";
sha256 = "1kqf1kavdyvjk7f8kx62pnm7fbypn9z1vbf8v2qdh3y7z7a0cbl3";
};
}
{
goPackagePath = "github.com/emicklei/go-restful";
fetch = {
type = "git";
url = "https://github.com/emicklei/go-restful";
rev = "3eb9738c1697594ea6e71a7156a9bb32ed216cf0";
sha256 = "1zqcjhg4q7788hyrkhwg4b6r1vc4qnzbw8c5j994mr18x42brxzg";
};
}
{
goPackagePath = "github.com/emirpasic/gods";
fetch = {
type = "git";
url = "https://github.com/emirpasic/gods";
rev = "f6c17b524822278a87e3b3bd809fec33b51f5b46";
sha256 = "1zhkppqzy149fp561pif8d5d92jd9chl3l9z4yi5f8n60ibdmmjf";
};
}
{
goPackagePath = "github.com/evanphx/json-patch";
fetch = {
type = "git";
url = "https://github.com/evanphx/json-patch";
rev = "afac545df32f2287a079e2dfb7ba2745a643747e";
sha256 = "1d90prf8wfvndqjn6nr0k405ykia5vb70sjw4ywd49s9p3wcdyn8";
};
}
{
goPackagePath = "github.com/fsnotify/fsnotify";
fetch = {
type = "git";
url = "https://github.com/fsnotify/fsnotify";
rev = "c2828203cd70a50dcccfb2761f8b1f8ceef9a8e9";
sha256 = "07va9crci0ijlivbb7q57d2rz9h27zgn2fsm60spjsqpdbvyrx4g";
};
}
{
goPackagePath = "github.com/ghodss/yaml";
fetch = {
type = "git";
url = "https://github.com/ghodss/yaml";
rev = "c7ce16629ff4cd059ed96ed06419dd3856fd3577";
sha256 = "10cyv1gy3zwwkr04kk8cvhifb7xddakyvnk5s13yfcqj9hcjz8d1";
};
}
{
goPackagePath = "github.com/go-ini/ini";
fetch = {
type = "git";
url = "https://github.com/go-ini/ini";
rev = "358ee7663966325963d4e8b2e1fbd570c5195153";
sha256 = "1zr51xaka7px1pmfndm12fvg6a3cr24kg77j28zczbfcc6h339gy";
};
}
{
goPackagePath = "github.com/go-openapi/jsonpointer";
fetch = {
type = "git";
url = "https://github.com/go-openapi/jsonpointer";
rev = "3a0015ad55fa9873f41605d3e8f28cd279c32ab2";
sha256 = "02an755ashhckqwxyq2avgn8mm4qq3hxda2jsj1a3bix2gkb45v7";
};
}
{
goPackagePath = "github.com/go-openapi/jsonreference";
fetch = {
type = "git";
url = "https://github.com/go-openapi/jsonreference";
rev = "3fb327e6747da3043567ee86abd02bb6376b6be2";
sha256 = "0zwsrmqqcihm0lj2pc18cpm7wnn1dzwr4kvrlyrxf0lnn7dsdsbm";
};
}
{
goPackagePath = "github.com/go-openapi/spec";
fetch = {
type = "git";
url = "https://github.com/go-openapi/spec";
rev = "bce47c9386f9ecd6b86f450478a80103c3fe1402";
sha256 = "0agys8v5rkfyinvmjd8hzgwvb20hnqninwkxwqkwbbsnakhi8shk";
};
}
{
goPackagePath = "github.com/go-openapi/swag";
fetch = {
type = "git";
url = "https://github.com/go-openapi/swag";
rev = "2b0bd4f193d011c203529df626a65d63cb8a79e8";
sha256 = "14c998wkycmy69jhjqkrah8acrr9xfam1dxbzl0lf4s2ghwn7bdn";
};
}
{
goPackagePath = "github.com/gogo/protobuf";
fetch = {
type = "git";
url = "https://github.com/gogo/protobuf";
rev = "636bf0302bc95575d69441b25a2603156ffdddf1";
sha256 = "1525pq7r6h3s8dncvq8gxi893p2nq8dxpzvq0nfl5b4p6mq0v1c2";
};
}
{
goPackagePath = "github.com/golang/glog";
fetch = {
type = "git";
url = "https://github.com/golang/glog";
rev = "23def4e6c14b4da8ac2ed8007337bc5eb5007998";
sha256 = "0jb2834rw5sykfr937fxi8hxi2zy80sj2bdn9b3jb4b26ksqng30";
};
}
{
goPackagePath = "github.com/golang/protobuf";
fetch = {
type = "git";
url = "https://github.com/golang/protobuf";
rev = "b4deda0973fb4c70b50d226b1af49f3da59f5265";
sha256 = "0ya4ha7m20bw048m1159ppqzlvda4x0vdprlbk5sdgmy74h3xcdq";
};
}
{
goPackagePath = "github.com/google/gofuzz";
fetch = {
type = "git";
url = "https://github.com/google/gofuzz";
rev = "24818f796faf91cd76ec7bddd72458fbced7a6c1";
sha256 = "0cq90m2lgalrdfrwwyycrrmn785rgnxa3l3vp9yxkvnv88bymmlm";
};
}
{
goPackagePath = "github.com/googleapis/gnostic";
fetch = {
type = "git";
url = "https://github.com/googleapis/gnostic";
rev = "7c663266750e7d82587642f65e60bc4083f1f84e";
sha256 = "0yh3ckd7m0r9h50wmxxvba837d0wb1k5yd439zq4p1kpp4390z12";
};
}
{
goPackagePath = "github.com/gorilla/websocket";
fetch = {
type = "git";
url = "https://github.com/gorilla/websocket";
rev = "ea4d1f681babbce9545c9c5f3d5194a789c89f5b";
sha256 = "1bhgs2542qs49p1dafybqxfs2qc072xv41w5nswyrknwyjxxs2a1";
};
}
{
goPackagePath = "github.com/hashicorp/golang-lru";
fetch = {
type = "git";
url = "https://github.com/hashicorp/golang-lru";
rev = "0fb14efe8c47ae851c0034ed7a448854d3d34cf3";
sha256 = "0vg4yn3088ym4sj1d34kr13lp4v5gya7r2nxshp2bz70n46fsqn2";
};
}
{
goPackagePath = "github.com/howeyc/gopass";
fetch = {
type = "git";
url = "https://github.com/howeyc/gopass";
rev = "bf9dde6d0d2c004a008c27aaee91170c786f6db8";
sha256 = "1jxzyfnqi0h1fzlsvlkn10bncic803bfhslyijcxk55mgh297g45";
};
}
{
goPackagePath = "github.com/imdario/mergo";
fetch = {
type = "git";
url = "https://github.com/imdario/mergo";
rev = "9f23e2d6bd2a77f959b2bf6acdbefd708a83a4a4";
sha256 = "1lbzy8p8wv439sqgf0n21q52flf2wbamp6qa1jkyv6an0nc952q7";
};
}
{
goPackagePath = "github.com/inconshreveable/mousetrap";
fetch = {
type = "git";
url = "https://github.com/inconshreveable/mousetrap";
rev = "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75";
sha256 = "1mn0kg48xkd74brf48qf5hzp0bc6g8cf5a77w895rl3qnlpfw152";
};
}
{
goPackagePath = "github.com/jbenet/go-context";
fetch = {
type = "git";
url = "https://github.com/jbenet/go-context";
rev = "d14ea06fba99483203c19d92cfcd13ebe73135f4";
sha256 = "0q91f5549n81w3z5927n4a1mdh220bdmgl42zi3h992dcc4ls0sl";
};
}
{
goPackagePath = "github.com/json-iterator/go";
fetch = {
type = "git";
url = "https://github.com/json-iterator/go";
rev = "1624edc4454b8682399def8740d46db5e4362ba4";
sha256 = "11wn4hpmrs8bmpvd93wqk49jfbbgylakhi35f9k5qd7jd479ci4s";
};
}
{
goPackagePath = "github.com/kevinburke/ssh_config";
fetch = {
type = "git";
url = "https://github.com/kevinburke/ssh_config";
rev = "9fc7bb800b555d63157c65a904c86a2cc7b4e795";
sha256 = "102icrla92zmr5zngipc8c9yfbqhf73zs2w2jq6s7p0gdjifigc8";
};
}
{
goPackagePath = "github.com/mailru/easyjson";
fetch = {
type = "git";
url = "https://github.com/mailru/easyjson";
rev = "03f2033d19d5860aef995fe360ac7d395cd8ce65";
sha256 = "0r62ym6m1ijby7nwplq0gdnhak8in63njyisrwhr3xpx9vkira97";
};
}
{
goPackagePath = "github.com/matttproud/golang_protobuf_extensions";
fetch = {
type = "git";
url = "https://github.com/matttproud/golang_protobuf_extensions";
rev = "c12348ce28de40eed0136aa2b644d0ee0650e56c";
sha256 = "1d0c1isd2lk9pnfq2nk0aih356j30k3h1gi2w0ixsivi5csl7jya";
};
}
{
goPackagePath = "github.com/minio/minio-go";
fetch = {
type = "git";
url = "https://github.com/minio/minio-go";
rev = "70799fe8dae6ecfb6c7d7e9e048fce27f23a1992";
sha256 = "0xvvnny59v4p1y2kbvz90ga5xvc5sq1gc4wv6cym82rdbvgzb2ax";
};
}
{
goPackagePath = "github.com/mitchellh/go-homedir";
fetch = {
type = "git";
url = "https://github.com/mitchellh/go-homedir";
rev = "58046073cbffe2f25d425fe1331102f55cf719de";
sha256 = "0kwflrwsjdsy8vbhyzicc4c2vdi7lhdvn4rarfr18x1qsrb7n1bx";
};
}
{
goPackagePath = "github.com/modern-go/concurrent";
fetch = {
type = "git";
url = "https://github.com/modern-go/concurrent";
rev = "bacd9c7ef1dd9b15be4a9909b8ac7a4e313eec94";
sha256 = "0s0fxccsyb8icjmiym5k7prcqx36hvgdwl588y0491gi18k5i4zs";
};
}
{
goPackagePath = "github.com/modern-go/reflect2";
fetch = {
type = "git";
url = "https://github.com/modern-go/reflect2";
rev = "4b7aa43c6742a2c18fdef89dd197aaae7dac7ccd";
sha256 = "1721y3yr3dpx5dx5ashf063qczk2awy5zjir1jvp1h5hn7qz4i49";
};
}
{
goPackagePath = "github.com/pelletier/go-buffruneio";
fetch = {
type = "git";
url = "https://github.com/pelletier/go-buffruneio";
rev = "c37440a7cf42ac63b919c752ca73a85067e05992";
sha256 = "0l83p1gg6g5mmhmxjisrhfimhbm71lwn1r2w7d6siwwqm9q08sd2";
};
}
{
goPackagePath = "github.com/pkg/errors";
fetch = {
type = "git";
url = "https://github.com/pkg/errors";
rev = "645ef00459ed84a119197bfb8d8205042c6df63d";
sha256 = "001i6n71ghp2l6kdl3qq1v2vmghcz3kicv9a5wgcihrzigm75pp5";
};
}
{
goPackagePath = "github.com/pmezard/go-difflib";
fetch = {
type = "git";
url = "https://github.com/pmezard/go-difflib";
rev = "792786c7400a136282c1664665ae0a8db921c6c2";
sha256 = "0c1cn55m4rypmscgf0rrb88pn58j3ysvc2d0432dp3c6fqg6cnzw";
};
}
{
goPackagePath = "github.com/prometheus/client_golang";
fetch = {
type = "git";
url = "https://github.com/prometheus/client_golang";
rev = "c5b7fccd204277076155f10851dad72b76a49317";
sha256 = "1xqny3147g12n4j03kxm8s9mvdbs3ln6i56c655mybrn9jjy48kd";
};
}
{
goPackagePath = "github.com/prometheus/client_model";
fetch = {
type = "git";
url = "https://github.com/prometheus/client_model";
rev = "5c3871d89910bfb32f5fcab2aa4b9ec68e65a99f";
sha256 = "04psf81l9fjcwascsys428v03fx4fi894h7fhrj2vvcz723q57k0";
};
}
{
goPackagePath = "github.com/prometheus/common";
fetch = {
type = "git";
url = "https://github.com/prometheus/common";
rev = "c7de2306084e37d54b8be01f3541a8464345e9a5";
sha256 = "11dqfm2d0m4sjjgyrnayman96g59x2apmvvqby9qmww2qj2k83ig";
};
}
{
goPackagePath = "github.com/prometheus/procfs";
fetch = {
type = "git";
url = "https://github.com/prometheus/procfs";
rev = "05ee40e3a273f7245e8777337fc7b46e533a9a92";
sha256 = "0f6fnczxa42b9rys2h3l0m8fy3x5hrhaq707vq0lbx5fcylw8lis";
};
}
{
goPackagePath = "github.com/sergi/go-diff";
fetch = {
type = "git";
url = "https://github.com/sergi/go-diff";
rev = "1744e2970ca51c86172c8190fadad617561ed6e7";
sha256 = "0swiazj8wphs2zmk1qgq75xza6m19snif94h2m6fi8dqkwqdl7c7";
};
}
{
goPackagePath = "github.com/sirupsen/logrus";
fetch = {
type = "git";
url = "https://github.com/sirupsen/logrus";
rev = "3e01752db0189b9157070a0e1668a620f9a85da2";
sha256 = "029irw2lsbqi944gdrbkwdw0m2794sqni4g21gsnmz142hbzds8c";
};
}
{
goPackagePath = "github.com/spf13/cobra";
fetch = {
type = "git";
url = "https://github.com/spf13/cobra";
rev = "7c4570c3ebeb8129a1f7456d0908a8b676b6f9f1";
sha256 = "16amh0prlzqrrbg5j629sg0f688nfzfgn9sair8jyybqampr3wc7";
};
}
{
goPackagePath = "github.com/spf13/pflag";
fetch = {
type = "git";
url = "https://github.com/spf13/pflag";
rev = "583c0c0531f06d5278b7d917446061adc344b5cd";
sha256 = "0nr4mdpfhhk94hq4ymn5b2sxc47b29p1akxd8b0hx4dvdybmipb5";
};
}
{
goPackagePath = "github.com/src-d/gcfg";
fetch = {
type = "git";
url = "https://github.com/src-d/gcfg";
rev = "f187355171c936ac84a82793659ebb4936bc1c23";
sha256 = "1hrdxlha4kkcpyydmjqd929rmwn5a9xq7arvwhryxppxq7502axk";
};
}
{
goPackagePath = "github.com/stretchr/objx";
fetch = {
type = "git";
url = "https://github.com/stretchr/objx";
rev = "477a77ecc69700c7cdeb1fa9e129548e1c1c393c";
sha256 = "0iph0qmpyqg4kwv8jsx6a56a7hhqq8swrazv40ycxk9rzr0s8yls";
};
}
{
goPackagePath = "github.com/stretchr/testify";
fetch = {
type = "git";
url = "https://github.com/stretchr/testify";
rev = "f35b8ab0b5a2cef36673838d662e249dd9c94686";
sha256 = "0dlszlshlxbmmfxj5hlwgv3r22x0y1af45gn1vd198nvvs3pnvfs";
};
}
{
goPackagePath = "github.com/tidwall/gjson";
fetch = {
type = "git";
url = "https://github.com/tidwall/gjson";
rev = "1e3f6aeaa5bad08d777ea7807b279a07885dd8b2";
sha256 = "0b0kvpzq0xxk2fq4diy3ab238yjx022s56h5jv1lc9hglds80lnn";
};
}
{
goPackagePath = "github.com/tidwall/match";
fetch = {
type = "git";
url = "https://github.com/tidwall/match";
rev = "1731857f09b1f38450e2c12409748407822dc6be";
sha256 = "14nv96h0mjki5q685qx8y331h4yga6hlfh3z9nz6acvnv284q578";
};
}
{
goPackagePath = "github.com/valyala/bytebufferpool";
fetch = {
type = "git";
url = "https://github.com/valyala/bytebufferpool";
rev = "e746df99fe4a3986f4d4f79e13c1e0117ce9c2f7";
sha256 = "01lqzjddq6kz9v41nkky7wbgk7f1cw036sa7ldz10d82g5klzl93";
};
}
{
goPackagePath = "github.com/valyala/fasttemplate";
fetch = {
type = "git";
url = "https://github.com/valyala/fasttemplate";
rev = "dcecefd839c4193db0d35b88ec65b4c12d360ab0";
sha256 = "0kkxn0ad5a36533djh50n9l6wsylmnykridkm91dqlqbjirn7216";
};
}
{
goPackagePath = "github.com/xanzy/ssh-agent";
fetch = {
type = "git";
url = "https://github.com/xanzy/ssh-agent";
rev = "640f0ab560aeb89d523bb6ac322b1244d5c3796c";
sha256 = "069nlriymqswg52ggiwi60qhwrin9nzhd2g65a7h59z2qbcvk2hy";
};
}
{
goPackagePath = "golang.org/x/crypto";
fetch = {
type = "git";
url = "https://go.googlesource.com/crypto";
rev = "f027049dab0ad238e394a753dba2d14753473a04";
sha256 = "026475grqvylk9n2ld4ygaxmzck6v97j48sc2x58jjsmqflnhzld";
};
}
{
goPackagePath = "golang.org/x/net";
fetch = {
type = "git";
url = "https://go.googlesource.com/net";
rev = "f9ce57c11b242f0f1599cf25c89d8cb02c45295a";
sha256 = "1m507gyjd9246cr3inpn6lgv3vnc3i11x4fgz0k0hdxv3cn9dyx2";
};
}
{
goPackagePath = "golang.org/x/oauth2";
fetch = {
type = "git";
url = "https://go.googlesource.com/oauth2";
rev = "3d292e4d0cdc3a0113e6d207bb137145ef1de42f";
sha256 = "0jvivlvx7snacd6abd1prqxa7h1z6b7s6mqahn8lpqlag3asryrl";
};
}
{
goPackagePath = "golang.org/x/sys";
fetch = {
type = "git";
url = "https://go.googlesource.com/sys";
rev = "904bdc257025c7b3f43c19360ad3ab85783fad78";
sha256 = "1pmj9axkj898bk4i4lny03b3l0zbkpvxj03gyjckliabqimqz0az";
};
}
{
goPackagePath = "golang.org/x/text";
fetch = {
type = "git";
url = "https://go.googlesource.com/text";
rev = "f21a4dfb5e38f5895301dc265a8def02365cc3d0";
sha256 = "0r6x6zjzhr8ksqlpiwm5gdd7s209kwk5p4lw54xjvz10cs3qlq19";
};
}
{
goPackagePath = "golang.org/x/time";
fetch = {
type = "git";
url = "https://go.googlesource.com/time";
rev = "fbb02b2291d28baffd63558aa44b4b56f178d650";
sha256 = "0jjqcv6rzihlgg4i797q80g1f6ch5diz2kxqh6488gwkb6nds4h4";
};
}
{
goPackagePath = "golang.org/x/tools";
fetch = {
type = "git";
url = "https://go.googlesource.com/tools";
rev = "ca6481ae56504398949d597084558e50ad07117a";
sha256 = "0pza1pd0wy9r0pf9b9hham9ldr2byyg1slqf8p56dhf8b6j9jw9v";
};
}
{
goPackagePath = "google.golang.org/appengine";
fetch = {
type = "git";
url = "https://github.com/golang/appengine";
rev = "b1f26356af11148e710935ed1ac8a7f5702c7612";
sha256 = "1pz202zszg8f35dk5pfhwgcdi3r6dx1l4yk6x6ly7nb4j45zi96x";
};
}
{
goPackagePath = "gopkg.in/inf.v0";
fetch = {
type = "git";
url = "https://github.com/go-inf/inf";
rev = "d2d2541c53f18d2a059457998ce2876cc8e67cbf";
sha256 = "00k5iqjcp371fllqxncv7jkf80hn1zww92zm78cclbcn4ybigkng";
};
}
{
goPackagePath = "gopkg.in/src-d/go-billy.v4";
fetch = {
type = "git";
url = "https://github.com/src-d/go-billy";
rev = "83cf655d40b15b427014d7875d10850f96edba14";
sha256 = "18fghcyk69g460px8rvmhmqldkbhw17dpnhg45qwdvaq90b0bkx9";
};
}
{
goPackagePath = "gopkg.in/src-d/go-git.v4";
fetch = {
type = "git";
url = "https://github.com/src-d/go-git";
rev = "3bd5e82b2512d85becae9677fa06b5a973fd4cfb";
sha256 = "1krg24ncckwalmhzs2vlp8rwyk4rfnhfydwg8iw7gaywww2c1wfc";
};
}
{
goPackagePath = "gopkg.in/warnings.v0";
fetch = {
type = "git";
url = "https://github.com/go-warnings/warnings";
rev = "ec4a0fea49c7b46c2aeb0b51aac55779c607e52b";
sha256 = "1kzj50jn708cingn7a13c2wdlzs6qv89dr2h4zj8d09647vlnd81";
};
}
{
goPackagePath = "gopkg.in/yaml.v2";
fetch = {
type = "git";
url = "https://github.com/go-yaml/yaml";
rev = "5420a8b6744d3b0345ab293f6fcba19c978f1183";
sha256 = "0dwjrs2lp2gdlscs7bsrmyc5yf6mm4fvgw71bzr9mv2qrd2q73s1";
};
}
{
goPackagePath = "k8s.io/api";
fetch = {
type = "git";
url = "https://github.com/kubernetes/api";
rev = "0f11257a8a25954878633ebdc9841c67d8f83bdb";
sha256 = "1y8k0b03ibr8ga9dr91dc2imq2cbmy702a1xqggb97h8lmb6jqni";
};
}
{
goPackagePath = "k8s.io/apimachinery";
fetch = {
type = "git";
url = "https://github.com/kubernetes/apimachinery";
rev = "e386b2658ed20923da8cc9250e552f082899a1ee";
sha256 = "0lgwpsvx0gpnrdnkqc9m96xwkifdq50l7cj9rvh03njws4rbd8jz";
};
}
{
goPackagePath = "k8s.io/client-go";
fetch = {
type = "git";
url = "https://github.com/kubernetes/client-go";
rev = "a312bfe35c401f70e5ea0add48b50da283031dc3";
sha256 = "0z360np4iv7jdgacw576gdxbzl8ss810kbqwyrjk39by589rfkl9";
};
}
{
goPackagePath = "k8s.io/code-generator";
fetch = {
type = "git";
url = "https://github.com/kubernetes/code-generator";
rev = "9de8e796a74d16d2a285165727d04c185ebca6dc";
sha256 = "09858ykfrd3cyzkkpafzhqs6h7bk3n90s3p52x3axn4f7ikjh7k4";
};
}
{
goPackagePath = "k8s.io/gengo";
fetch = {
type = "git";
url = "https://github.com/kubernetes/gengo";
rev = "c42f3cdacc394f43077ff17e327d1b351c0304e4";
sha256 = "05vbrqfa96izm5j2q9f4yiyrbyx23nrkj5yv4fhfc7pvwb35iy04";
};
}
{
goPackagePath = "k8s.io/kube-openapi";
fetch = {
type = "git";
url = "https://github.com/kubernetes/kube-openapi";
rev = "e3762e86a74c878ffed47484592986685639c2cd";
sha256 = "1n9j08dwnj77iflzj047hrk0zg6nh1m4a5pljjdsvvf3xgka54pz";
};
}
]
View File
@ -2,13 +2,13 @@
stdenv.mkDerivation rec { stdenv.mkDerivation rec {
name = "kubetail-${version}"; name = "kubetail-${version}";
version = "1.6.5"; version = "1.6.6";
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "johanhaleby"; owner = "johanhaleby";
repo = "kubetail"; repo = "kubetail";
rev = "${version}"; rev = "${version}";
sha256 = "0q8had1bi1769wd6h1c43gq0cvr5qj1fvyglizlyq1gm8qi2dx7n"; sha256 = "0fd3xmhn20wmbwxdqs49nvwhl6vc3ipns83j558zir8x4fgq0yrr";
}; };
installPhase = '' installPhase = ''
View File
@ -4,10 +4,10 @@
}: }:
let let
version = "1.29.0"; version = "1.30.0";
# Update these on version bumps according to Makefile # Update these on version bumps according to Makefile
centOsIsoVersion = "v1.13.0"; centOsIsoVersion = "v1.14.0";
openshiftVersion = "v3.11.0"; openshiftVersion = "v3.11.0";
in buildGoPackage rec { in buildGoPackage rec {
@ -18,7 +18,7 @@ in buildGoPackage rec {
owner = "minishift"; owner = "minishift";
repo = "minishift"; repo = "minishift";
rev = "v${version}"; rev = "v${version}";
sha256 = "17scvv60hgk7s9fy4s9z26sc8a69ryh33rhr1f7p92kb5wfh2x40"; sha256 = "0p7g7r4m3brssy2znw7pd60aph6m6absqy23x88c07n5n4mv9wj8";
}; };
nativeBuildInputs = [ pkgconfig go-bindata makeWrapper ]; nativeBuildInputs = [ pkgconfig go-bindata makeWrapper ];
View File
@ -2,7 +2,7 @@
buildGoPackage rec { buildGoPackage rec {
name = "nomad-${version}"; name = "nomad-${version}";
version = "0.8.6"; version = "0.8.7";
rev = "v${version}"; rev = "v${version}";
goPackagePath = "github.com/hashicorp/nomad"; goPackagePath = "github.com/hashicorp/nomad";
@ -12,7 +12,7 @@ buildGoPackage rec {
owner = "hashicorp"; owner = "hashicorp";
repo = "nomad"; repo = "nomad";
inherit rev; inherit rev;
sha256 = "1786hbgby9q3p4x28xdc06v12n8qvxqwis70mr80axb6r4kd7yqw"; sha256 = "0nkqiqkrccfmn7qkbhd48m9m56ix4xb0a3ar0z0pl4sbm25rlj0b";
}; };
meta = with stdenv.lib; { meta = with stdenv.lib; {

View File
{ stdenv, fetchFromGitHub, meson, ninja, pkgconfig, vala_0_40, gettext, python3 { stdenv, fetchFromGitHub, fetchpatch, meson, ninja, pkgconfig, vala_0_40, gettext, python3
, appstream-glib, desktop-file-utils, glibcLocales, wrapGAppsHook , appstream-glib, desktop-file-utils, glibcLocales, wrapGAppsHook
, curl, glib, gnome3, gst_all_1, json-glib, libnotify, libsecret, sqlite, gumbo , curl, glib, gnome3, gst_all_1, json-glib, libnotify, libsecret, sqlite, gumbo
}: }:
let stdenv.mkDerivation rec {
pname = "FeedReader"; pname = "feedreader";
version = "2.6.1"; version = "2.6.2";
in stdenv.mkDerivation {
name = "${pname}-${version}";
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "jangernert"; owner = "jangernert";
repo = pname; repo = pname;
rev = "v" + version; rev = "v${version}";
sha256 = "01r00b2jrb12x46fvd207s5lkhc13kmzg0w1kqbdkwkwsrdzb0jy"; sha256 = "1x5milynfa27zyv2jkzyi7ikkszrvzki1hlzv8c2wvcmw60jqb8n";
}; };
patches = [
# See: https://github.com/jangernert/FeedReader/pull/842
(fetchpatch {
url = "https://github.com/jangernert/FeedReader/commit/f4ce70932c4ddc91783309708402c7c42d627455.patch";
sha256 = "076fpjn973xg2m35lc6z4h7g5x8nb08sghg94glsqa8wh1ig2311";
})
];
nativeBuildInputs = [ nativeBuildInputs = [
meson ninja pkgconfig vala_0_40 gettext appstream-glib desktop-file-utils meson ninja pkgconfig vala_0_40 gettext appstream-glib desktop-file-utils
python3 glibcLocales wrapGAppsHook python3 glibcLocales wrapGAppsHook
@ -30,9 +36,6 @@ in stdenv.mkDerivation {
gstreamer gst-plugins-base gst-plugins-good gstreamer gst-plugins-base gst-plugins-good
]); ]);
# TODO: fix https://github.com/NixOS/nixpkgs/issues/39547
LIBRARY_PATH = stdenv.lib.makeLibraryPath [ curl ];
# vcs_tag function fails with UnicodeDecodeError # vcs_tag function fails with UnicodeDecodeError
LC_ALL = "en_US.UTF-8"; LC_ALL = "en_US.UTF-8";
@ -41,7 +44,7 @@ in stdenv.mkDerivation {
''; '';
meta = with stdenv.lib; { meta = with stdenv.lib; {
description = "A modern desktop application designed to complement existing web-based RSS accounts."; description = "A modern desktop application designed to complement existing web-based RSS accounts";
homepage = https://jangernert.github.io/FeedReader/; homepage = https://jangernert.github.io/FeedReader/;
license = licenses.gpl3Plus; license = licenses.gpl3Plus;
maintainers = with maintainers; [ edwtjo ]; maintainers = with maintainers; [ edwtjo ];
View File
@ -3,13 +3,13 @@
with stdenv.lib; with stdenv.lib;
stdenv.mkDerivation rec { stdenv.mkDerivation rec {
name = "bitlbee-facebook-${version}"; name = "bitlbee-facebook-${version}";
version = "1.1.2"; version = "1.2.0";
src = fetchFromGitHub { src = fetchFromGitHub {
rev = "v${version}"; rev = "v${version}";
owner = "bitlbee"; owner = "bitlbee";
repo = "bitlbee-facebook"; repo = "bitlbee-facebook";
sha256 = "0kz2sc10iq01vn0hvf06bcdc1rsxz1j77z3mw55slf3j08xr07in"; sha256 = "11068zhb1v55b1x0nhjc4f3p0glccxpcyk5c1630hfdzkj7vyqhn";
}; };
nativeBuildInputs = [ autoconf automake libtool pkgconfig ]; nativeBuildInputs = [ autoconf automake libtool pkgconfig ];
Some files were not shown because too many files have changed in this diff.