Merge pull request #2632 from bgilbert/v4.11.8

sys-kernel/coreos-*: bump to v4.11.8
Benjamin Gilbert 2017-06-29 16:21:34 -07:00 committed by GitHub
commit 11f0ce5847
31 changed files with 141 additions and 1040 deletions

View File

@@ -2,7 +2,7 @@
 # Distributed under the terms of the GNU General Public License v2

 EAPI=5
-COREOS_SOURCE_REVISION="-r1"
+COREOS_SOURCE_REVISION=""
 inherit coreos-kernel

 DESCRIPTION="CoreOS Linux kernel"
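
For context, COREOS_SOURCE_REVISION carries a Gentoo-style "-rN" revision suffix that ties this ebuild to the matching coreos-sources revision; clearing it means the fresh 4.11.8 sources start over at revision zero. A minimal sketch of how such a suffix is typically consumed (illustrative only — the variable names beyond COREOS_SOURCE_REVISION and PV are assumptions, and the real coreos-kernel.eclass may compose them differently):

# Illustrative sketch, not the actual eclass code.
COREOS_SOURCE_REVISION=""                                # empty on a fresh upstream bump
COREOS_SOURCE_VERSION="${PV}${COREOS_SOURCE_REVISION}"   # e.g. "4.11.8", or "4.11.8-r1" after a respin
DEPEND="=sys-kernel/coreos-sources-${COREOS_SOURCE_VERSION}"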

View File

@@ -2,7 +2,7 @@
 # Distributed under the terms of the GNU General Public License v2

 EAPI=5
-COREOS_SOURCE_REVISION="-r1"
+COREOS_SOURCE_REVISION=""
 inherit coreos-kernel savedconfig

 DESCRIPTION="CoreOS Linux kernel modules"

View File

@@ -1,2 +1,2 @@
 DIST linux-4.11.tar.xz 95447768 SHA256 b67ecafd0a42b3383bf4d82f0850cbff92a7e72a215a6d02f42ddbafcf42a7d6 SHA512 6610eed97ffb7207c71771198c36179b8244ace7222bebb109507720e26c5f17d918079a56d5febdd8605844d67fb2df0ebe910fa2f2f53690daf6e2a8ad09c3 WHIRLPOOL f577b7c5c209cb8dfef2f1d56d77314fbd53323743a34b900e2559ab0049b7c2d6262bda136dd3d005bc0527788106e0484e46558448a8720dac389a969e5886
-DIST patch-4.11.6.xz 195064 SHA256 00c0b804ccda18d6ed4a32ba0be049a80363aa2bc084733a22da03f435d992a4 SHA512 e0e2de7d721575cd2770fa4fa61a1ecdfd54bb4239725363a90ab3b670aab44531a7c0f198ff769080643e86ce7e4806d26bb436a43437747e123715061b278b WHIRLPOOL 282c10df0fbbaa6a607ea6918f319becd14c81350f717d82f7b84eac8b03c9e7684571b7751c8be38379665a26ad24d38de1f8f7f24e73491a1fbb1c128799a5
+DIST patch-4.11.8.xz 239352 SHA256 c390540524e9647efa3752550cb04b02f47a60a5d45f26d56a07cd8a67501929 SHA512 9fed139ec4658d373ea6f25b0cc0cd9384e3bf61a05d30a523c13d8b5e673b461cf3cc8d97da2c69ca3a6c718319529f7ccfd90ca38b81d68986b7e63f2db297 WHIRLPOOL a72ef2cebcae11425c5eccb29619d5c9be99624cc48f439f30e6c4499ba7a404abc1bb768a07689ee05e9c086d16f5de0f8eb914c33d1295c0e1450dd60c154c
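
Each Manifest entry records the distfile name, its size in bytes, and the SHA256/SHA512/WHIRLPOOL digests Portage verifies at fetch time. A quick manual spot-check of the new distfile (assuming it has already been fetched, and that the distdir is in the conventional location):

# Verify the bumped distfile against the Manifest entry above.
cd /usr/portage/distfiles      # conventional distdir; adjust if relocated
stat -c %s patch-4.11.8.xz     # expect 239352
sha256sum patch-4.11.8.xz      # compare with the SHA256 field above
sha512sum patch-4.11.8.xz      # compare with the SHA512 field above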

View File

@@ -44,6 +44,5 @@ UNIPATCH_LIST="
 	${PATCH_DIR}/z0022-Lock-down-TIOCSSERIAL.patch \
 	${PATCH_DIR}/z0023-kbuild-derive-relative-path-for-KBUILD_SRC-from-CURD.patch \
 	${PATCH_DIR}/z0024-Add-arm64-coreos-verity-hash.patch \
-	${PATCH_DIR}/z0025-mm-larger-stack-guard-gap-between-vmas.patch \
-	${PATCH_DIR}/z0026-mm-fix-new-crash-in-unmapped_area_topdown.patch \
+	${PATCH_DIR}/z0025-ext4-handle-the-rest-of-ext4_mb_load_buddy-ENOMEM-er.patch \
 "

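UNIPATCH_LIST is consumed by the kernel-2 eclass, whose unipatch logic applies the listed patches in sequence on top of the unpacked kernel tree — which is why dropping the two mm patches (now in upstream 4.11.8) and renumbering z0025 keeps the queue contiguous. Roughly what that amounts to by hand (a sketch only; the eclass itself also handles strip levels and error reporting):

# Approximate the eclass's patch application by hand.
cd "${S}"   # the unpacked linux-4.11.8 source tree
for p in "${PATCH_DIR}"/z00*.patch; do
    patch -p1 --no-backup-if-mismatch < "$p" || exit 1
done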
View File

@ -1,7 +1,7 @@
From fd884cf2511d381bbf180714adabbf49f3b2779a Mon Sep 17 00:00:00 2001 From 5eb64704322cfac6e12d26abe602c2e702df1312 Mon Sep 17 00:00:00 2001
From: Josh Boyer <jwboyer@fedoraproject.org> From: Josh Boyer <jwboyer@fedoraproject.org>
Date: Mon, 21 Nov 2016 23:55:55 +0000 Date: Mon, 21 Nov 2016 23:55:55 +0000
Subject: [PATCH 01/26] efi: Add EFI_SECURE_BOOT bit Subject: [PATCH 01/25] efi: Add EFI_SECURE_BOOT bit
UEFI machines can be booted in Secure Boot mode. Add a EFI_SECURE_BOOT bit UEFI machines can be booted in Secure Boot mode. Add a EFI_SECURE_BOOT bit
that can be passed to efi_enabled() to find out whether secure boot is that can be passed to efi_enabled() to find out whether secure boot is

View File

@ -1,7 +1,7 @@
From 031d0e66222dcc1f8e659ea4dec906828739b442 Mon Sep 17 00:00:00 2001 From 17572853d2658797d83a347b569970095be67666 Mon Sep 17 00:00:00 2001
From: David Howells <dhowells@redhat.com> From: David Howells <dhowells@redhat.com>
Date: Mon, 21 Nov 2016 23:36:17 +0000 Date: Mon, 21 Nov 2016 23:36:17 +0000
Subject: [PATCH 02/26] Add the ability to lock down access to the running Subject: [PATCH 02/25] Add the ability to lock down access to the running
kernel image kernel image
Provide a single call to allow kernel code to determine whether the system Provide a single call to allow kernel code to determine whether the system

View File

@ -1,7 +1,7 @@
From 8b8192d581d483984d4bff7ba86acfb748bb13c0 Mon Sep 17 00:00:00 2001 From 8fdc73845896fe16b1743eeee0984ce8530ede37 Mon Sep 17 00:00:00 2001
From: David Howells <dhowells@redhat.com> From: David Howells <dhowells@redhat.com>
Date: Mon, 21 Nov 2016 23:55:55 +0000 Date: Mon, 21 Nov 2016 23:55:55 +0000
Subject: [PATCH 03/26] efi: Lock down the kernel if booted in secure boot mode Subject: [PATCH 03/25] efi: Lock down the kernel if booted in secure boot mode
UEFI Secure Boot provides a mechanism for ensuring that the firmware will UEFI Secure Boot provides a mechanism for ensuring that the firmware will
only load signed bootloaders and kernels. Certain use cases may also only load signed bootloaders and kernels. Certain use cases may also

View File

@ -1,7 +1,7 @@
From 44c06553478bda830c83cfcff1169386757bfa5e Mon Sep 17 00:00:00 2001 From b952ea662bd2b88a712706bad504826fb5e47f00 Mon Sep 17 00:00:00 2001
From: David Howells <dhowells@redhat.com> From: David Howells <dhowells@redhat.com>
Date: Wed, 23 Nov 2016 13:22:22 +0000 Date: Wed, 23 Nov 2016 13:22:22 +0000
Subject: [PATCH 04/26] Enforce module signatures if the kernel is locked down Subject: [PATCH 04/25] Enforce module signatures if the kernel is locked down
If the kernel is locked down, require that all modules have valid If the kernel is locked down, require that all modules have valid
signatures that we can verify. signatures that we can verify.

View File

@ -1,7 +1,7 @@
From ebcf469083241dcddd27f65d8465957d9c5374c9 Mon Sep 17 00:00:00 2001 From ae791b7f235c63639fe7756bd779e646c2492c7a Mon Sep 17 00:00:00 2001
From: Matthew Garrett <matthew.garrett@nebula.com> From: Matthew Garrett <matthew.garrett@nebula.com>
Date: Tue, 22 Nov 2016 08:46:16 +0000 Date: Tue, 22 Nov 2016 08:46:16 +0000
Subject: [PATCH 05/26] Restrict /dev/mem and /dev/kmem when the kernel is Subject: [PATCH 05/25] Restrict /dev/mem and /dev/kmem when the kernel is
locked down locked down
Allowing users to write to address space makes it possible for the kernel to Allowing users to write to address space makes it possible for the kernel to

View File

@ -1,7 +1,7 @@
From 9db5ea1dbc604754bf41fab3383fd8743ae6a42f Mon Sep 17 00:00:00 2001 From a9a6794e3d50a2bc3bf638e2a7e151e1483a87a0 Mon Sep 17 00:00:00 2001
From: Matthew Garrett <matthew.garrett@nebula.com> From: Matthew Garrett <matthew.garrett@nebula.com>
Date: Tue, 22 Nov 2016 08:46:15 +0000 Date: Tue, 22 Nov 2016 08:46:15 +0000
Subject: [PATCH 06/26] kexec: Disable at runtime if the kernel is locked down Subject: [PATCH 06/25] kexec: Disable at runtime if the kernel is locked down
kexec permits the loading and execution of arbitrary code in ring 0, which kexec permits the loading and execution of arbitrary code in ring 0, which
is something that lock-down is meant to prevent. It makes sense to disable is something that lock-down is meant to prevent. It makes sense to disable

View File

@ -1,7 +1,7 @@
From 84196308f898ed6739af65d69e2b077b541153e1 Mon Sep 17 00:00:00 2001 From 8659ee3435108bf03df0b1a0155720f051ceabaa Mon Sep 17 00:00:00 2001
From: Dave Young <dyoung@redhat.com> From: Dave Young <dyoung@redhat.com>
Date: Tue, 22 Nov 2016 08:46:15 +0000 Date: Tue, 22 Nov 2016 08:46:15 +0000
Subject: [PATCH 07/26] Copy secure_boot flag in boot params across kexec Subject: [PATCH 07/25] Copy secure_boot flag in boot params across kexec
reboot reboot
Kexec reboot in case secure boot being enabled does not keep the secure Kexec reboot in case secure boot being enabled does not keep the secure

View File

@ -1,7 +1,7 @@
From 6d464109d41e58169e6121d844765443a23f0a37 Mon Sep 17 00:00:00 2001 From 147b43aeaffaec0a0809314bbfe7afa7bfce9fef Mon Sep 17 00:00:00 2001
From: "Lee, Chun-Yi" <joeyli.kernel@gmail.com> From: "Lee, Chun-Yi" <joeyli.kernel@gmail.com>
Date: Wed, 23 Nov 2016 13:49:19 +0000 Date: Wed, 23 Nov 2016 13:49:19 +0000
Subject: [PATCH 08/26] kexec_file: Disable at runtime if securelevel has been Subject: [PATCH 08/25] kexec_file: Disable at runtime if securelevel has been
set set
When KEXEC_VERIFY_SIG is not enabled, kernel should not loads image When KEXEC_VERIFY_SIG is not enabled, kernel should not loads image

View File

@ -1,7 +1,7 @@
From ca4d2b0d492a011f3f04ca27112dc897afa6cd6c Mon Sep 17 00:00:00 2001 From 639fe1050f8f7ac809d6429023b9e135aa1408a8 Mon Sep 17 00:00:00 2001
From: Josh Boyer <jwboyer@fedoraproject.org> From: Josh Boyer <jwboyer@fedoraproject.org>
Date: Tue, 22 Nov 2016 08:46:15 +0000 Date: Tue, 22 Nov 2016 08:46:15 +0000
Subject: [PATCH 09/26] hibernate: Disable when the kernel is locked down Subject: [PATCH 09/25] hibernate: Disable when the kernel is locked down
There is currently no way to verify the resume image when returning There is currently no way to verify the resume image when returning
from hibernate. This might compromise the signed modules trust model, from hibernate. This might compromise the signed modules trust model,

View File

@ -1,7 +1,7 @@
From 71a51cb3bf8ccadcd8909fd83d69ded308654c17 Mon Sep 17 00:00:00 2001 From 0cf20c96adc7e09d5a7155153d274ed60fd8f323 Mon Sep 17 00:00:00 2001
From: Matthew Garrett <mjg59@srcf.ucam.org> From: Matthew Garrett <mjg59@srcf.ucam.org>
Date: Wed, 23 Nov 2016 13:28:17 +0000 Date: Wed, 23 Nov 2016 13:28:17 +0000
Subject: [PATCH 10/26] uswsusp: Disable when the kernel is locked down Subject: [PATCH 10/25] uswsusp: Disable when the kernel is locked down
uswsusp allows a user process to dump and then restore kernel state, which uswsusp allows a user process to dump and then restore kernel state, which
makes it possible to modify the running kernel. Disable this if the kernel makes it possible to modify the running kernel. Disable this if the kernel

View File

@ -1,7 +1,7 @@
From 723299a61788af79dde4257a756aeba12ba1ae4a Mon Sep 17 00:00:00 2001 From 83624e2a7733314685e2722586e27830b482abd3 Mon Sep 17 00:00:00 2001
From: Matthew Garrett <matthew.garrett@nebula.com> From: Matthew Garrett <matthew.garrett@nebula.com>
Date: Tue, 22 Nov 2016 08:46:15 +0000 Date: Tue, 22 Nov 2016 08:46:15 +0000
Subject: [PATCH 11/26] PCI: Lock down BAR access when the kernel is locked Subject: [PATCH 11/25] PCI: Lock down BAR access when the kernel is locked
down down
Any hardware that can potentially generate DMA has to be locked down in Any hardware that can potentially generate DMA has to be locked down in

View File

@ -1,7 +1,7 @@
From 6082b23ef0f4f4e8ab59d3bb4a9f0fd5847f560e Mon Sep 17 00:00:00 2001 From 44aae071f73313b7c3b8e62955d82a7130dac637 Mon Sep 17 00:00:00 2001
From: Matthew Garrett <matthew.garrett@nebula.com> From: Matthew Garrett <matthew.garrett@nebula.com>
Date: Tue, 22 Nov 2016 08:46:16 +0000 Date: Tue, 22 Nov 2016 08:46:16 +0000
Subject: [PATCH 12/26] x86: Lock down IO port access when the kernel is locked Subject: [PATCH 12/25] x86: Lock down IO port access when the kernel is locked
down down
IO port access would permit users to gain access to PCI configuration IO port access would permit users to gain access to PCI configuration

View File

@ -1,7 +1,7 @@
From c281b90cf4a02a233765fcf5901b9d6ec3718966 Mon Sep 17 00:00:00 2001 From 7a5fcee2005bf31f04fee37f7f99b72633631261 Mon Sep 17 00:00:00 2001
From: Matthew Garrett <matthew.garrett@nebula.com> From: Matthew Garrett <matthew.garrett@nebula.com>
Date: Tue, 22 Nov 2016 08:46:17 +0000 Date: Tue, 22 Nov 2016 08:46:17 +0000
Subject: [PATCH 13/26] x86: Restrict MSR access when the kernel is locked down Subject: [PATCH 13/25] x86: Restrict MSR access when the kernel is locked down
Writing to MSRs should not be allowed if the kernel is locked down, since Writing to MSRs should not be allowed if the kernel is locked down, since
it could lead to execution of arbitrary code in kernel mode. Based on a it could lead to execution of arbitrary code in kernel mode. Based on a

View File

@ -1,7 +1,7 @@
From 3991f2855a05f21641d223f05b822abc46b388b1 Mon Sep 17 00:00:00 2001 From 375f4e4c2885875f901bd1d773fcbd1387b4d891 Mon Sep 17 00:00:00 2001
From: Matthew Garrett <matthew.garrett@nebula.com> From: Matthew Garrett <matthew.garrett@nebula.com>
Date: Tue, 22 Nov 2016 08:46:16 +0000 Date: Tue, 22 Nov 2016 08:46:16 +0000
Subject: [PATCH 14/26] asus-wmi: Restrict debugfs interface when the kernel is Subject: [PATCH 14/25] asus-wmi: Restrict debugfs interface when the kernel is
locked down locked down
We have no way of validating what all of the Asus WMI methods do on a given We have no way of validating what all of the Asus WMI methods do on a given

View File

@ -1,7 +1,7 @@
From 8d62701b2c57b2e472a80393e3e976f1ade21dac Mon Sep 17 00:00:00 2001 From d58cf0867fb90f8705b2517446f46acd040b811b Mon Sep 17 00:00:00 2001
From: Matthew Garrett <matthew.garrett@nebula.com> From: Matthew Garrett <matthew.garrett@nebula.com>
Date: Tue, 22 Nov 2016 08:46:16 +0000 Date: Tue, 22 Nov 2016 08:46:16 +0000
Subject: [PATCH 15/26] ACPI: Limit access to custom_method when the kernel is Subject: [PATCH 15/25] ACPI: Limit access to custom_method when the kernel is
locked down locked down
custom_method effectively allows arbitrary access to system memory, making custom_method effectively allows arbitrary access to system memory, making

View File

@ -1,7 +1,7 @@
From 953a0fc5063cd15031a4d6b328b5c9f1d2e71902 Mon Sep 17 00:00:00 2001 From 384fb2a457962ce0929750a3ac1ba024b8e0d98c Mon Sep 17 00:00:00 2001
From: Josh Boyer <jwboyer@redhat.com> From: Josh Boyer <jwboyer@redhat.com>
Date: Tue, 22 Nov 2016 08:46:16 +0000 Date: Tue, 22 Nov 2016 08:46:16 +0000
Subject: [PATCH 16/26] acpi: Ignore acpi_rsdp kernel param when the kernel has Subject: [PATCH 16/25] acpi: Ignore acpi_rsdp kernel param when the kernel has
been locked down been locked down
This option allows userspace to pass the RSDP address to the kernel, which This option allows userspace to pass the RSDP address to the kernel, which

View File

@ -1,7 +1,7 @@
From 7ad375dfa5b163a2d1918647f245d4f18811fbdf Mon Sep 17 00:00:00 2001 From 23dfa5b9a48b4fa6e563eebaa7be8d077251a98b Mon Sep 17 00:00:00 2001
From: Linn Crosetto <linn@hpe.com> From: Linn Crosetto <linn@hpe.com>
Date: Wed, 23 Nov 2016 13:32:27 +0000 Date: Wed, 23 Nov 2016 13:32:27 +0000
Subject: [PATCH 17/26] acpi: Disable ACPI table override if the kernel is Subject: [PATCH 17/25] acpi: Disable ACPI table override if the kernel is
locked down locked down
From the kernel documentation (initrd_table_override.txt): From the kernel documentation (initrd_table_override.txt):

View File

@ -1,7 +1,7 @@
From 0aaecda5c1b5f825b9cd2046e40d82b7ab811a95 Mon Sep 17 00:00:00 2001 From 518e8705e6343a43074ef9ef7f3a62cf4d1525d4 Mon Sep 17 00:00:00 2001
From: Linn Crosetto <linn@hpe.com> From: Linn Crosetto <linn@hpe.com>
Date: Wed, 23 Nov 2016 13:39:41 +0000 Date: Wed, 23 Nov 2016 13:39:41 +0000
Subject: [PATCH 18/26] acpi: Disable APEI error injection if the kernel is Subject: [PATCH 18/25] acpi: Disable APEI error injection if the kernel is
locked down locked down
ACPI provides an error injection mechanism, EINJ, for debugging and testing ACPI provides an error injection mechanism, EINJ, for debugging and testing

View File

@ -1,7 +1,7 @@
From cbdbd3c0ff6d98dba590cd3f4978c9b318ef1656 Mon Sep 17 00:00:00 2001 From 5fd3f4124512e23197efa6bcbca4b41f513f045b Mon Sep 17 00:00:00 2001
From: "Lee, Chun-Yi" <jlee@suse.com> From: "Lee, Chun-Yi" <jlee@suse.com>
Date: Wed, 23 Nov 2016 13:52:16 +0000 Date: Wed, 23 Nov 2016 13:52:16 +0000
Subject: [PATCH 19/26] bpf: Restrict kernel image access functions when the Subject: [PATCH 19/25] bpf: Restrict kernel image access functions when the
kernel is locked down kernel is locked down
There are some bpf functions can be used to read kernel memory: There are some bpf functions can be used to read kernel memory:

View File

@ -1,7 +1,7 @@
From 32c85f7a1d68ae1b947d305b2f73c1e2c46ecb1c Mon Sep 17 00:00:00 2001 From c6cf9a02898f6b70f35c6436d04b9d151fc1b5d7 Mon Sep 17 00:00:00 2001
From: David Howells <dhowells@redhat.com> From: David Howells <dhowells@redhat.com>
Date: Tue, 22 Nov 2016 10:10:34 +0000 Date: Tue, 22 Nov 2016 10:10:34 +0000
Subject: [PATCH 20/26] scsi: Lock down the eata driver Subject: [PATCH 20/25] scsi: Lock down the eata driver
When the kernel is running in secure boot mode, we lock down the kernel to When the kernel is running in secure boot mode, we lock down the kernel to
prevent userspace from modifying the running kernel image. Whilst this prevent userspace from modifying the running kernel image. Whilst this

View File

@ -1,7 +1,7 @@
From e835b3d609297875784bc7835cde55bfc8a40f7e Mon Sep 17 00:00:00 2001 From ae40d25c6273aee1875301ead7918aed44242342 Mon Sep 17 00:00:00 2001
From: David Howells <dhowells@redhat.com> From: David Howells <dhowells@redhat.com>
Date: Fri, 25 Nov 2016 14:37:45 +0000 Date: Fri, 25 Nov 2016 14:37:45 +0000
Subject: [PATCH 21/26] Prohibit PCMCIA CIS storage when the kernel is locked Subject: [PATCH 21/25] Prohibit PCMCIA CIS storage when the kernel is locked
down down
Prohibit replacement of the PCMCIA Card Information Structure when the Prohibit replacement of the PCMCIA Card Information Structure when the

View File

@ -1,7 +1,7 @@
From 9b09194823ad294e0a41de6b7ff9ee47e8e1e9cb Mon Sep 17 00:00:00 2001 From 1c7e0fcdc01d7d0c6e8002b82f913eea786f045a Mon Sep 17 00:00:00 2001
From: David Howells <dhowells@redhat.com> From: David Howells <dhowells@redhat.com>
Date: Wed, 7 Dec 2016 10:28:39 +0000 Date: Wed, 7 Dec 2016 10:28:39 +0000
Subject: [PATCH 22/26] Lock down TIOCSSERIAL Subject: [PATCH 22/25] Lock down TIOCSSERIAL
Lock down TIOCSSERIAL as that can be used to change the ioport and irq Lock down TIOCSSERIAL as that can be used to change the ioport and irq
settings on a serial port. This only appears to be an issue for the serial settings on a serial port. This only appears to be an issue for the serial

View File

@@ -1,7 +1,7 @@
-From cec28fd85530cf618a0c5412e5845130cdec93ad Mon Sep 17 00:00:00 2001
+From 95fabc958dfc9e39bea8e9cad7c065e0382ae00f Mon Sep 17 00:00:00 2001
 From: Vito Caputo <vito.caputo@coreos.com>
 Date: Wed, 25 Nov 2015 02:59:45 -0800
-Subject: [PATCH 23/26] kbuild: derive relative path for KBUILD_SRC from CURDIR
+Subject: [PATCH 23/25] kbuild: derive relative path for KBUILD_SRC from CURDIR

 This enables relocating source and build trees to different roots,
 provided they stay reachable relative to one another. Useful for
@@ -12,7 +12,7 @@ by some undesirable path component.
 1 file changed, 2 insertions(+), 1 deletion(-)

 diff --git a/Makefile b/Makefile
-index e46e99c..46a482c 100644
+index 8c5c94c..8c63105 100644
 --- a/Makefile
 +++ b/Makefile
 @@ -149,7 +149,8 @@ $(filter-out _all sub-make $(CURDIR)/Makefile, $(MAKECMDGOALS)) _all: sub-make

View File

@ -1,7 +1,7 @@
From 6869be30ef74913549956bcaa4c90f98e85d9ee2 Mon Sep 17 00:00:00 2001 From e546f8455c33b339a3b84b55f95d4fcb9fe07571 Mon Sep 17 00:00:00 2001
From: Geoff Levand <geoff@infradead.org> From: Geoff Levand <geoff@infradead.org>
Date: Fri, 11 Nov 2016 17:28:52 -0800 Date: Fri, 11 Nov 2016 17:28:52 -0800
Subject: [PATCH 24/26] Add arm64 coreos verity hash Subject: [PATCH 24/25] Add arm64 coreos verity hash
Signed-off-by: Geoff Levand <geoff@infradead.org> Signed-off-by: Geoff Levand <geoff@infradead.org>
--- ---

View File

@@ -0,0 +1,88 @@
From 53bcfff6ac09aa20b49b67233f729f06d4eff9a8 Mon Sep 17 00:00:00 2001
From: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Date: Sun, 21 May 2017 22:35:23 -0400
Subject: [PATCH 25/25] ext4: handle the rest of ext4_mb_load_buddy() ENOMEM
errors
I've got another report about breaking ext4 by ENOMEM error returned from
ext4_mb_load_buddy() caused by memory shortage in memory cgroup.
This time inside ext4_discard_preallocations().
This patch replaces ext4_error() with ext4_warning() where errors returned
from ext4_mb_load_buddy() are not fatal and handled by caller:
* ext4_mb_discard_group_preallocations() - called before generating ENOSPC,
we'll try to discard other group or return ENOSPC into user-space.
* ext4_trim_all_free() - just stop trimming and return ENOMEM from ioctl.
Some callers cannot handle errors, thus __GFP_NOFAIL is used for them:
* ext4_discard_preallocations()
* ext4_mb_discard_lg_preallocations()
Fixes: adb7ef600cc9 ("ext4: use __GFP_NOFAIL in ext4_free_blocks()")
Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
---
fs/ext4/mballoc.c | 23 ++++++++++++++---------
1 file changed, 14 insertions(+), 9 deletions(-)
diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 354dc1a..3942815 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -3887,7 +3887,8 @@ ext4_mb_discard_group_preallocations(struct super_block *sb,
err = ext4_mb_load_buddy(sb, group, &e4b);
if (err) {
- ext4_error(sb, "Error loading buddy information for %u", group);
+ ext4_warning(sb, "Error %d loading buddy information for %u",
+ err, group);
put_bh(bitmap_bh);
return 0;
}
@@ -4044,10 +4045,11 @@ void ext4_discard_preallocations(struct inode *inode)
BUG_ON(pa->pa_type != MB_INODE_PA);
group = ext4_get_group_number(sb, pa->pa_pstart);
- err = ext4_mb_load_buddy(sb, group, &e4b);
+ err = ext4_mb_load_buddy_gfp(sb, group, &e4b,
+ GFP_NOFS|__GFP_NOFAIL);
if (err) {
- ext4_error(sb, "Error loading buddy information for %u",
- group);
+ ext4_error(sb, "Error %d loading buddy information for %u",
+ err, group);
continue;
}
@@ -4303,11 +4305,14 @@ ext4_mb_discard_lg_preallocations(struct super_block *sb,
spin_unlock(&lg->lg_prealloc_lock);
list_for_each_entry_safe(pa, tmp, &discard_list, u.pa_tmp_list) {
+ int err;
group = ext4_get_group_number(sb, pa->pa_pstart);
- if (ext4_mb_load_buddy(sb, group, &e4b)) {
- ext4_error(sb, "Error loading buddy information for %u",
- group);
+ err = ext4_mb_load_buddy_gfp(sb, group, &e4b,
+ GFP_NOFS|__GFP_NOFAIL);
+ if (err) {
+ ext4_error(sb, "Error %d loading buddy information for %u",
+ err, group);
continue;
}
ext4_lock_group(sb, group);
@@ -5127,8 +5132,8 @@ ext4_trim_all_free(struct super_block *sb, ext4_group_t group,
ret = ext4_mb_load_buddy(sb, group, &e4b);
if (ret) {
- ext4_error(sb, "Error in loading buddy "
- "information for %u", group);
+ ext4_warning(sb, "Error %d loading buddy information for %u",
+ ret, group);
return ret;
}
bitmap = e4b.bd_bitmap;
--
2.9.4
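
The policy this new patch installs is worth spelling out: where the caller of ext4_mb_load_buddy() can recover, the report is downgraded from ext4_error() to ext4_warning() and the caller falls back; where it cannot, GFP_NOFS|__GFP_NOFAIL makes the allocator retry internally, so memory pressure can no longer surface as ENOMEM there. A self-contained userspace analogue of those two shapes (the ext4 internals are not reproducible here, so load_buddy() is a stand-in and the retry loop models __GFP_NOFAIL semantics):

/* Userspace sketch of the patch's two error-handling shapes. */
#include <errno.h>
#include <stdio.h>

static int attempts;

static int load_buddy(void)
{
	/* Pretend memory pressure fails the first two attempts. */
	return ++attempts < 3 ? -ENOMEM : 0;
}

int main(void)
{
	int err;

	/* Shape 1: recoverable caller -- warn, then fall back
	 * (in ext4: try the next group, or return ENOSPC). */
	err = load_buddy();
	if (err)
		fprintf(stderr, "warning: error %d, trying next group\n", err);

	/* Shape 2: no fallback -- "__GFP_NOFAIL" semantics: the retry
	 * happens internally, so the caller never observes ENOMEM. */
	while ((err = load_buddy()) == -ENOMEM)
		;
	printf("loaded after %d attempts\n", attempts);
	return 0;
}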

View File

@@ -1,936 +0,0 @@
From f87c64a5210a044c70a3f3b1e1f94c0d5e77e25d Mon Sep 17 00:00:00 2001
From: Hugh Dickins <hughd@google.com>
Date: Mon, 19 Jun 2017 04:03:24 -0700
Subject: [PATCH 25/26] mm: larger stack guard gap, between vmas
commit 1be7107fbe18eed3e319a6c3e83c78254b693acb upstream.
Stack guard page is a useful feature to reduce a risk of stack smashing
into a different mapping. We have been using a single page gap which
is sufficient to prevent having stack adjacent to a different mapping.
But this seems to be insufficient in the light of the stack usage in
userspace. E.g. glibc uses as large as 64kB alloca() in many commonly
used functions. Others use constructs like gid_t buffer[NGROUPS_MAX]
which is 256kB or stack strings with MAX_ARG_STRLEN.
This will become especially dangerous for suid binaries and the default
no limit for the stack size limit because those applications can be
tricked to consume a large portion of the stack and a single glibc call
could jump over the guard page. These attacks are not theoretical,
unfortunately.
Make those attacks less probable by increasing the stack guard gap
to 1MB (on systems with 4k pages; but make it depend on the page size
because systems with larger base pages might cap stack allocations in
the PAGE_SIZE units) which should cover larger alloca() and VLA stack
allocations. It is obviously not a full fix because the problem is
somehow inherent, but it should reduce attack space a lot.
One could argue that the gap size should be configurable from userspace,
but that can be done later when somebody finds that the new 1MB is wrong
for some special case applications. For now, add a kernel command line
option (stack_guard_gap) to specify the stack gap size (in page units).
Implementation wise, first delete all the old code for stack guard page:
because although we could get away with accounting one extra page in a
stack vma, accounting a larger gap can break userspace - case in point,
a program run with "ulimit -S -v 20000" failed when the 1MB gap was
counted for RLIMIT_AS; similar problems could come with RLIMIT_MLOCK
and strict non-overcommit mode.
Instead of keeping gap inside the stack vma, maintain the stack guard
gap as a gap between vmas: using vm_start_gap() in place of vm_start
(or vm_end_gap() in place of vm_end if VM_GROWSUP) in just those few
places which need to respect the gap - mainly arch_get_unmapped_area(),
and the vma tree's subtree_gap support for that.
Original-patch-by: Oleg Nesterov <oleg@redhat.com>
Original-patch-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Tested-by: Helge Deller <deller@gmx.de> # parisc
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[wt: backport to 4.11: adjust context]
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
Documentation/admin-guide/kernel-parameters.txt | 7 ++
arch/arc/mm/mmap.c | 2 +-
arch/arm/mm/mmap.c | 4 +-
arch/frv/mm/elf-fdpic.c | 2 +-
arch/mips/mm/mmap.c | 2 +-
arch/parisc/kernel/sys_parisc.c | 15 ++-
arch/powerpc/mm/hugetlbpage-radix.c | 2 +-
arch/powerpc/mm/mmap.c | 4 +-
arch/powerpc/mm/slice.c | 2 +-
arch/s390/mm/mmap.c | 4 +-
arch/sh/mm/mmap.c | 4 +-
arch/sparc/kernel/sys_sparc_64.c | 4 +-
arch/sparc/mm/hugetlbpage.c | 2 +-
arch/tile/mm/hugetlbpage.c | 2 +-
arch/x86/kernel/sys_x86_64.c | 4 +-
arch/x86/mm/hugetlbpage.c | 2 +-
arch/xtensa/kernel/syscall.c | 2 +-
fs/hugetlbfs/inode.c | 2 +-
fs/proc/task_mmu.c | 4 -
include/linux/mm.h | 53 ++++-----
mm/gup.c | 5 -
mm/memory.c | 38 ------
mm/mmap.c | 149 ++++++++++++++----------
23 files changed, 152 insertions(+), 163 deletions(-)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index facc20a..952319b 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3779,6 +3779,13 @@
spia_pedr=
spia_peddr=
+ stack_guard_gap= [MM]
+ override the default stack gap protection. The value
+ is in page units and it defines how many pages prior
+ to (for stacks growing down) resp. after (for stacks
+ growing up) the main stack are reserved for no other
+ mapping. Default value is 256 pages.
+
stacktrace [FTRACE]
Enabled the stack tracer on boot up.
diff --git a/arch/arc/mm/mmap.c b/arch/arc/mm/mmap.c
index 3e25e8d..2e13683 100644
--- a/arch/arc/mm/mmap.c
+++ b/arch/arc/mm/mmap.c
@@ -65,7 +65,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
diff --git a/arch/arm/mm/mmap.c b/arch/arm/mm/mmap.c
index 2239fde..f0701d8 100644
--- a/arch/arm/mm/mmap.c
+++ b/arch/arm/mm/mmap.c
@@ -90,7 +90,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
@@ -141,7 +141,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
addr = PAGE_ALIGN(addr);
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
diff --git a/arch/frv/mm/elf-fdpic.c b/arch/frv/mm/elf-fdpic.c
index da82c25..46aa289 100644
--- a/arch/frv/mm/elf-fdpic.c
+++ b/arch/frv/mm/elf-fdpic.c
@@ -75,7 +75,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
addr = PAGE_ALIGN(addr);
vma = find_vma(current->mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
goto success;
}
diff --git a/arch/mips/mm/mmap.c b/arch/mips/mm/mmap.c
index 64dd8bd..28adeab 100644
--- a/arch/mips/mm/mmap.c
+++ b/arch/mips/mm/mmap.c
@@ -93,7 +93,7 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
diff --git a/arch/parisc/kernel/sys_parisc.c b/arch/parisc/kernel/sys_parisc.c
index e528863..378a754 100644
--- a/arch/parisc/kernel/sys_parisc.c
+++ b/arch/parisc/kernel/sys_parisc.c
@@ -90,7 +90,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
unsigned long len, unsigned long pgoff, unsigned long flags)
{
struct mm_struct *mm = current->mm;
- struct vm_area_struct *vma;
+ struct vm_area_struct *vma, *prev;
unsigned long task_size = TASK_SIZE;
int do_color_align, last_mmap;
struct vm_unmapped_area_info info;
@@ -117,9 +117,10 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
else
addr = PAGE_ALIGN(addr);
- vma = find_vma(mm, addr);
+ vma = find_vma_prev(mm, addr, &prev);
if (task_size - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)) &&
+ (!prev || addr >= vm_end_gap(prev)))
goto found_addr;
}
@@ -143,7 +144,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
const unsigned long len, const unsigned long pgoff,
const unsigned long flags)
{
- struct vm_area_struct *vma;
+ struct vm_area_struct *vma, *prev;
struct mm_struct *mm = current->mm;
unsigned long addr = addr0;
int do_color_align, last_mmap;
@@ -177,9 +178,11 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
addr = COLOR_ALIGN(addr, last_mmap, pgoff);
else
addr = PAGE_ALIGN(addr);
- vma = find_vma(mm, addr);
+
+ vma = find_vma_prev(mm, addr, &prev);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)) &&
+ (!prev || addr >= vm_end_gap(prev)))
goto found_addr;
}
diff --git a/arch/powerpc/mm/hugetlbpage-radix.c b/arch/powerpc/mm/hugetlbpage-radix.c
index 35254a6..a2b2d97 100644
--- a/arch/powerpc/mm/hugetlbpage-radix.c
+++ b/arch/powerpc/mm/hugetlbpage-radix.c
@@ -65,7 +65,7 @@ radix__hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
addr = ALIGN(addr, huge_page_size(h));
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
/*
diff --git a/arch/powerpc/mm/mmap.c b/arch/powerpc/mm/mmap.c
index a5d9ef5..04a6493 100644
--- a/arch/powerpc/mm/mmap.c
+++ b/arch/powerpc/mm/mmap.c
@@ -107,7 +107,7 @@ radix__arch_get_unmapped_area(struct file *filp, unsigned long addr,
addr = PAGE_ALIGN(addr);
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
@@ -143,7 +143,7 @@ radix__arch_get_unmapped_area_topdown(struct file *filp,
addr = PAGE_ALIGN(addr);
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index 2b27458..c4d5c9c 100644
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -105,7 +105,7 @@ static int slice_area_is_free(struct mm_struct *mm, unsigned long addr,
if ((mm->task_size - len) < addr)
return 0;
vma = find_vma(mm, addr);
- return (!vma || (addr + len) <= vma->vm_start);
+ return (!vma || (addr + len) <= vm_start_gap(vma));
}
static int slice_low_has_vma(struct mm_struct *mm, unsigned long slice)
diff --git a/arch/s390/mm/mmap.c b/arch/s390/mm/mmap.c
index 5061861..531e9d5 100644
--- a/arch/s390/mm/mmap.c
+++ b/arch/s390/mm/mmap.c
@@ -100,7 +100,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
addr = PAGE_ALIGN(addr);
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
@@ -138,7 +138,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
addr = PAGE_ALIGN(addr);
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
diff --git a/arch/sh/mm/mmap.c b/arch/sh/mm/mmap.c
index 08e7af0..6a1a129 100644
--- a/arch/sh/mm/mmap.c
+++ b/arch/sh/mm/mmap.c
@@ -64,7 +64,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
@@ -114,7 +114,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
diff --git a/arch/sparc/kernel/sys_sparc_64.c b/arch/sparc/kernel/sys_sparc_64.c
index ef4520e..043544d 100644
--- a/arch/sparc/kernel/sys_sparc_64.c
+++ b/arch/sparc/kernel/sys_sparc_64.c
@@ -120,7 +120,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
vma = find_vma(mm, addr);
if (task_size - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
@@ -183,7 +183,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
vma = find_vma(mm, addr);
if (task_size - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
index 7c29d38..88855e3 100644
--- a/arch/sparc/mm/hugetlbpage.c
+++ b/arch/sparc/mm/hugetlbpage.c
@@ -120,7 +120,7 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
addr = ALIGN(addr, huge_page_size(h));
vma = find_vma(mm, addr);
if (task_size - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
if (mm->get_unmapped_area == arch_get_unmapped_area)
diff --git a/arch/tile/mm/hugetlbpage.c b/arch/tile/mm/hugetlbpage.c
index cb10153..03e5cc4 100644
--- a/arch/tile/mm/hugetlbpage.c
+++ b/arch/tile/mm/hugetlbpage.c
@@ -233,7 +233,7 @@ unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
addr = ALIGN(addr, huge_page_size(h));
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
if (current->mm->get_unmapped_area == arch_get_unmapped_area)
diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index 50215a4..3123e6d 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -141,7 +141,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
addr = PAGE_ALIGN(addr);
vma = find_vma(mm, addr);
if (end - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
@@ -184,7 +184,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
addr = PAGE_ALIGN(addr);
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
index c5066a2..7d09827 100644
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -145,7 +145,7 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
addr = ALIGN(addr, huge_page_size(h));
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
if (mm->get_unmapped_area == arch_get_unmapped_area)
diff --git a/arch/xtensa/kernel/syscall.c b/arch/xtensa/kernel/syscall.c
index 0693792..74afbf0 100644
--- a/arch/xtensa/kernel/syscall.c
+++ b/arch/xtensa/kernel/syscall.c
@@ -88,7 +88,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
/* At this point: (!vmm || addr < vmm->vm_end). */
if (TASK_SIZE - len < addr)
return -ENOMEM;
- if (!vmm || addr + len <= vmm->vm_start)
+ if (!vmm || addr + len <= vm_start_gap(vmm))
return addr;
addr = vmm->vm_end;
if (flags & MAP_SHARED)
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index dde8613..d44f545 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -200,7 +200,7 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
addr = ALIGN(addr, huge_page_size(h));
vma = find_vma(mm, addr);
if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 3125780..f401682 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -300,11 +300,7 @@ show_map_vma(struct seq_file *m, struct vm_area_struct *vma, int is_pid)
/* We don't show the stack guard page in /proc/maps */
start = vma->vm_start;
- if (stack_guard_page_start(vma, start))
- start += PAGE_SIZE;
end = vma->vm_end;
- if (stack_guard_page_end(vma, end))
- end -= PAGE_SIZE;
seq_setwidth(m, 25 + sizeof(void *) * 6 - 1);
seq_printf(m, "%08lx-%08lx %c%c%c%c %08llx %02x:%02x %lu ",
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 018b134..cec423b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1381,12 +1381,6 @@ int clear_page_dirty_for_io(struct page *page);
int get_cmdline(struct task_struct *task, char *buffer, int buflen);
-/* Is the vma a continuation of the stack vma above it? */
-static inline int vma_growsdown(struct vm_area_struct *vma, unsigned long addr)
-{
- return vma && (vma->vm_end == addr) && (vma->vm_flags & VM_GROWSDOWN);
-}
-
static inline bool vma_is_anonymous(struct vm_area_struct *vma)
{
return !vma->vm_ops;
@@ -1402,28 +1396,6 @@ bool vma_is_shmem(struct vm_area_struct *vma);
static inline bool vma_is_shmem(struct vm_area_struct *vma) { return false; }
#endif
-static inline int stack_guard_page_start(struct vm_area_struct *vma,
- unsigned long addr)
-{
- return (vma->vm_flags & VM_GROWSDOWN) &&
- (vma->vm_start == addr) &&
- !vma_growsdown(vma->vm_prev, addr);
-}
-
-/* Is the vma a continuation of the stack vma below it? */
-static inline int vma_growsup(struct vm_area_struct *vma, unsigned long addr)
-{
- return vma && (vma->vm_start == addr) && (vma->vm_flags & VM_GROWSUP);
-}
-
-static inline int stack_guard_page_end(struct vm_area_struct *vma,
- unsigned long addr)
-{
- return (vma->vm_flags & VM_GROWSUP) &&
- (vma->vm_end == addr) &&
- !vma_growsup(vma->vm_next, addr);
-}
-
int vma_is_stack_for_current(struct vm_area_struct *vma);
extern unsigned long move_page_tables(struct vm_area_struct *vma,
@@ -2210,6 +2182,7 @@ void page_cache_async_readahead(struct address_space *mapping,
pgoff_t offset,
unsigned long size);
+extern unsigned long stack_guard_gap;
/* Generic expand stack which grows the stack according to GROWS{UP,DOWN} */
extern int expand_stack(struct vm_area_struct *vma, unsigned long address);
@@ -2238,6 +2211,30 @@ static inline struct vm_area_struct * find_vma_intersection(struct mm_struct * m
return vma;
}
+static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
+{
+ unsigned long vm_start = vma->vm_start;
+
+ if (vma->vm_flags & VM_GROWSDOWN) {
+ vm_start -= stack_guard_gap;
+ if (vm_start > vma->vm_start)
+ vm_start = 0;
+ }
+ return vm_start;
+}
+
+static inline unsigned long vm_end_gap(struct vm_area_struct *vma)
+{
+ unsigned long vm_end = vma->vm_end;
+
+ if (vma->vm_flags & VM_GROWSUP) {
+ vm_end += stack_guard_gap;
+ if (vm_end < vma->vm_end)
+ vm_end = -PAGE_SIZE;
+ }
+ return vm_end;
+}
+
static inline unsigned long vma_pages(struct vm_area_struct *vma)
{
return (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
diff --git a/mm/gup.c b/mm/gup.c
index fb87cbf..1466f35 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -387,11 +387,6 @@ static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
/* mlock all present pages, but do not fault in new pages */
if ((*flags & (FOLL_POPULATE | FOLL_MLOCK)) == FOLL_MLOCK)
return -ENOENT;
- /* For mm_populate(), just skip the stack guard page. */
- if ((*flags & FOLL_POPULATE) &&
- (stack_guard_page_start(vma, address) ||
- stack_guard_page_end(vma, address + PAGE_SIZE)))
- return -ENOENT;
if (*flags & FOLL_WRITE)
fault_flags |= FAULT_FLAG_WRITE;
if (*flags & FOLL_REMOTE)
diff --git a/mm/memory.c b/mm/memory.c
index 2437dc0..44a4dfc 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2855,40 +2855,6 @@ int do_swap_page(struct vm_fault *vmf)
}
/*
- * This is like a special single-page "expand_{down|up}wards()",
- * except we must first make sure that 'address{-|+}PAGE_SIZE'
- * doesn't hit another vma.
- */
-static inline int check_stack_guard_page(struct vm_area_struct *vma, unsigned long address)
-{
- address &= PAGE_MASK;
- if ((vma->vm_flags & VM_GROWSDOWN) && address == vma->vm_start) {
- struct vm_area_struct *prev = vma->vm_prev;
-
- /*
- * Is there a mapping abutting this one below?
- *
- * That's only ok if it's the same stack mapping
- * that has gotten split..
- */
- if (prev && prev->vm_end == address)
- return prev->vm_flags & VM_GROWSDOWN ? 0 : -ENOMEM;
-
- return expand_downwards(vma, address - PAGE_SIZE);
- }
- if ((vma->vm_flags & VM_GROWSUP) && address + PAGE_SIZE == vma->vm_end) {
- struct vm_area_struct *next = vma->vm_next;
-
- /* As VM_GROWSDOWN but s/below/above/ */
- if (next && next->vm_start == address + PAGE_SIZE)
- return next->vm_flags & VM_GROWSUP ? 0 : -ENOMEM;
-
- return expand_upwards(vma, address + PAGE_SIZE);
- }
- return 0;
-}
-
-/*
* We enter with non-exclusive mmap_sem (to exclude vma changes,
* but allow concurrent faults), and pte mapped but not yet locked.
* We return with mmap_sem still held, but pte unmapped and unlocked.
@@ -2904,10 +2870,6 @@ static int do_anonymous_page(struct vm_fault *vmf)
if (vma->vm_flags & VM_SHARED)
return VM_FAULT_SIGBUS;
- /* Check if we need to add a guard page to the stack */
- if (check_stack_guard_page(vma, vmf->address) < 0)
- return VM_FAULT_SIGSEGV;
-
/*
* Use pte_alloc() instead of pte_alloc_map(). We can't run
* pte_offset_map() on pmds where a huge pmd might be created
diff --git a/mm/mmap.c b/mm/mmap.c
index bfbe885..116ea08 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -183,6 +183,7 @@ SYSCALL_DEFINE1(brk, unsigned long, brk)
unsigned long retval;
unsigned long newbrk, oldbrk;
struct mm_struct *mm = current->mm;
+ struct vm_area_struct *next;
unsigned long min_brk;
bool populate;
LIST_HEAD(uf);
@@ -229,7 +230,8 @@ SYSCALL_DEFINE1(brk, unsigned long, brk)
}
/* Check against existing mmap mappings. */
- if (find_vma_intersection(mm, oldbrk, newbrk+PAGE_SIZE))
+ next = find_vma(mm, oldbrk);
+ if (next && newbrk + PAGE_SIZE > vm_start_gap(next))
goto out;
/* Ok, looks good - let it rip. */
@@ -253,10 +255,22 @@ SYSCALL_DEFINE1(brk, unsigned long, brk)
static long vma_compute_subtree_gap(struct vm_area_struct *vma)
{
- unsigned long max, subtree_gap;
- max = vma->vm_start;
- if (vma->vm_prev)
- max -= vma->vm_prev->vm_end;
+ unsigned long max, prev_end, subtree_gap;
+
+ /*
+ * Note: in the rare case of a VM_GROWSDOWN above a VM_GROWSUP, we
+ * allow two stack_guard_gaps between them here, and when choosing
+ * an unmapped area; whereas when expanding we only require one.
+ * That's a little inconsistent, but keeps the code here simpler.
+ */
+ max = vm_start_gap(vma);
+ if (vma->vm_prev) {
+ prev_end = vm_end_gap(vma->vm_prev);
+ if (max > prev_end)
+ max -= prev_end;
+ else
+ max = 0;
+ }
if (vma->vm_rb.rb_left) {
subtree_gap = rb_entry(vma->vm_rb.rb_left,
struct vm_area_struct, vm_rb)->rb_subtree_gap;
@@ -352,7 +366,7 @@ static void validate_mm(struct mm_struct *mm)
anon_vma_unlock_read(anon_vma);
}
- highest_address = vma->vm_end;
+ highest_address = vm_end_gap(vma);
vma = vma->vm_next;
i++;
}
@@ -541,7 +555,7 @@ void __vma_link_rb(struct mm_struct *mm, struct vm_area_struct *vma,
if (vma->vm_next)
vma_gap_update(vma->vm_next);
else
- mm->highest_vm_end = vma->vm_end;
+ mm->highest_vm_end = vm_end_gap(vma);
/*
* vma->vm_prev wasn't known when we followed the rbtree to find the
@@ -856,7 +870,7 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
vma_gap_update(vma);
if (end_changed) {
if (!next)
- mm->highest_vm_end = end;
+ mm->highest_vm_end = vm_end_gap(vma);
else if (!adjust_next)
vma_gap_update(next);
}
@@ -941,7 +955,7 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
* mm->highest_vm_end doesn't need any update
* in remove_next == 1 case.
*/
- VM_WARN_ON(mm->highest_vm_end != end);
+ VM_WARN_ON(mm->highest_vm_end != vm_end_gap(vma));
}
}
if (insert && file)
@@ -1787,7 +1801,7 @@ unsigned long unmapped_area(struct vm_unmapped_area_info *info)
while (true) {
/* Visit left subtree if it looks promising */
- gap_end = vma->vm_start;
+ gap_end = vm_start_gap(vma);
if (gap_end >= low_limit && vma->vm_rb.rb_left) {
struct vm_area_struct *left =
rb_entry(vma->vm_rb.rb_left,
@@ -1798,7 +1812,7 @@ unsigned long unmapped_area(struct vm_unmapped_area_info *info)
}
}
- gap_start = vma->vm_prev ? vma->vm_prev->vm_end : 0;
+ gap_start = vma->vm_prev ? vm_end_gap(vma->vm_prev) : 0;
check_current:
/* Check if current node has a suitable gap */
if (gap_start > high_limit)
@@ -1825,8 +1839,8 @@ unsigned long unmapped_area(struct vm_unmapped_area_info *info)
vma = rb_entry(rb_parent(prev),
struct vm_area_struct, vm_rb);
if (prev == vma->vm_rb.rb_left) {
- gap_start = vma->vm_prev->vm_end;
- gap_end = vma->vm_start;
+ gap_start = vm_end_gap(vma->vm_prev);
+ gap_end = vm_start_gap(vma);
goto check_current;
}
}
@@ -1890,7 +1904,7 @@ unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info)
while (true) {
/* Visit right subtree if it looks promising */
- gap_start = vma->vm_prev ? vma->vm_prev->vm_end : 0;
+ gap_start = vma->vm_prev ? vm_end_gap(vma->vm_prev) : 0;
if (gap_start <= high_limit && vma->vm_rb.rb_right) {
struct vm_area_struct *right =
rb_entry(vma->vm_rb.rb_right,
@@ -1903,7 +1917,7 @@ unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info)
check_current:
/* Check if current node has a suitable gap */
- gap_end = vma->vm_start;
+ gap_end = vm_start_gap(vma);
if (gap_end < low_limit)
return -ENOMEM;
if (gap_start <= high_limit && gap_end - gap_start >= length)
@@ -1929,7 +1943,7 @@ unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info)
struct vm_area_struct, vm_rb);
if (prev == vma->vm_rb.rb_right) {
gap_start = vma->vm_prev ?
- vma->vm_prev->vm_end : 0;
+ vm_end_gap(vma->vm_prev) : 0;
goto check_current;
}
}
@@ -1967,7 +1981,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
unsigned long len, unsigned long pgoff, unsigned long flags)
{
struct mm_struct *mm = current->mm;
- struct vm_area_struct *vma;
+ struct vm_area_struct *vma, *prev;
struct vm_unmapped_area_info info;
if (len > TASK_SIZE - mmap_min_addr)
@@ -1978,9 +1992,10 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
if (addr) {
addr = PAGE_ALIGN(addr);
- vma = find_vma(mm, addr);
+ vma = find_vma_prev(mm, addr, &prev);
if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)) &&
+ (!prev || addr >= vm_end_gap(prev)))
return addr;
}
@@ -2003,7 +2018,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
const unsigned long len, const unsigned long pgoff,
const unsigned long flags)
{
- struct vm_area_struct *vma;
+ struct vm_area_struct *vma, *prev;
struct mm_struct *mm = current->mm;
unsigned long addr = addr0;
struct vm_unmapped_area_info info;
@@ -2018,9 +2033,10 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
/* requesting a specific address */
if (addr) {
addr = PAGE_ALIGN(addr);
- vma = find_vma(mm, addr);
+ vma = find_vma_prev(mm, addr, &prev);
if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&
- (!vma || addr + len <= vma->vm_start))
+ (!vma || addr + len <= vm_start_gap(vma)) &&
+ (!prev || addr >= vm_end_gap(prev)))
return addr;
}
@@ -2155,21 +2171,19 @@ find_vma_prev(struct mm_struct *mm, unsigned long addr,
* update accounting. This is shared with both the
* grow-up and grow-down cases.
*/
-static int acct_stack_growth(struct vm_area_struct *vma, unsigned long size, unsigned long grow)
+static int acct_stack_growth(struct vm_area_struct *vma,
+ unsigned long size, unsigned long grow)
{
struct mm_struct *mm = vma->vm_mm;
struct rlimit *rlim = current->signal->rlim;
- unsigned long new_start, actual_size;
+ unsigned long new_start;
/* address space limit tests */
if (!may_expand_vm(mm, vma->vm_flags, grow))
return -ENOMEM;
/* Stack limit test */
- actual_size = size;
- if (size && (vma->vm_flags & (VM_GROWSUP | VM_GROWSDOWN)))
- actual_size -= PAGE_SIZE;
- if (actual_size > READ_ONCE(rlim[RLIMIT_STACK].rlim_cur))
+ if (size > READ_ONCE(rlim[RLIMIT_STACK].rlim_cur))
return -ENOMEM;
/* mlock limit tests */
@@ -2207,17 +2221,30 @@ static int acct_stack_growth(struct vm_area_struct *vma, unsigned long size, uns
int expand_upwards(struct vm_area_struct *vma, unsigned long address)
{
struct mm_struct *mm = vma->vm_mm;
+ struct vm_area_struct *next;
+ unsigned long gap_addr;
int error = 0;
if (!(vma->vm_flags & VM_GROWSUP))
return -EFAULT;
/* Guard against wrapping around to address 0. */
- if (address < PAGE_ALIGN(address+4))
- address = PAGE_ALIGN(address+4);
- else
+ address &= PAGE_MASK;
+ address += PAGE_SIZE;
+ if (!address)
return -ENOMEM;
+ /* Enforce stack_guard_gap */
+ gap_addr = address + stack_guard_gap;
+ if (gap_addr < address)
+ return -ENOMEM;
+ next = vma->vm_next;
+ if (next && next->vm_start < gap_addr) {
+ if (!(next->vm_flags & VM_GROWSUP))
+ return -ENOMEM;
+ /* Check that both stack segments have the same anon_vma? */
+ }
+
/* We must make sure the anon_vma is allocated. */
if (unlikely(anon_vma_prepare(vma)))
return -ENOMEM;
@@ -2261,7 +2288,7 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
if (vma->vm_next)
vma_gap_update(vma->vm_next);
else
- mm->highest_vm_end = address;
+ mm->highest_vm_end = vm_end_gap(vma);
spin_unlock(&mm->page_table_lock);
perf_event_mmap(vma);
@@ -2282,6 +2309,8 @@ int expand_downwards(struct vm_area_struct *vma,
unsigned long address)
{
struct mm_struct *mm = vma->vm_mm;
+ struct vm_area_struct *prev;
+ unsigned long gap_addr;
int error;
address &= PAGE_MASK;
@@ -2289,6 +2318,17 @@ int expand_downwards(struct vm_area_struct *vma,
if (error)
return error;
+ /* Enforce stack_guard_gap */
+ gap_addr = address - stack_guard_gap;
+ if (gap_addr > address)
+ return -ENOMEM;
+ prev = vma->vm_prev;
+ if (prev && prev->vm_end > gap_addr) {
+ if (!(prev->vm_flags & VM_GROWSDOWN))
+ return -ENOMEM;
+ /* Check that both stack segments have the same anon_vma? */
+ }
+
/* We must make sure the anon_vma is allocated. */
if (unlikely(anon_vma_prepare(vma)))
return -ENOMEM;
@@ -2343,28 +2383,25 @@ int expand_downwards(struct vm_area_struct *vma,
return error;
}
-/*
- * Note how expand_stack() refuses to expand the stack all the way to
- * abut the next virtual mapping, *unless* that mapping itself is also
- * a stack mapping. We want to leave room for a guard page, after all
- * (the guard page itself is not added here, that is done by the
- * actual page faulting logic)
- *
- * This matches the behavior of the guard page logic (see mm/memory.c:
- * check_stack_guard_page()), which only allows the guard page to be
- * removed under these circumstances.
- */
+/* enforced gap between the expanding stack and other mappings. */
+unsigned long stack_guard_gap = 256UL<<PAGE_SHIFT;
+
+static int __init cmdline_parse_stack_guard_gap(char *p)
+{
+ unsigned long val;
+ char *endptr;
+
+ val = simple_strtoul(p, &endptr, 10);
+ if (!*endptr)
+ stack_guard_gap = val << PAGE_SHIFT;
+
+ return 0;
+}
+__setup("stack_guard_gap=", cmdline_parse_stack_guard_gap);
+
#ifdef CONFIG_STACK_GROWSUP
int expand_stack(struct vm_area_struct *vma, unsigned long address)
{
- struct vm_area_struct *next;
-
- address &= PAGE_MASK;
- next = vma->vm_next;
- if (next && next->vm_start == address + PAGE_SIZE) {
- if (!(next->vm_flags & VM_GROWSUP))
- return -ENOMEM;
- }
return expand_upwards(vma, address);
}
@@ -2386,14 +2423,6 @@ find_extend_vma(struct mm_struct *mm, unsigned long addr)
#else
int expand_stack(struct vm_area_struct *vma, unsigned long address)
{
- struct vm_area_struct *prev;
-
- address &= PAGE_MASK;
- prev = vma->vm_prev;
- if (prev && prev->vm_end == address) {
- if (!(prev->vm_flags & VM_GROWSDOWN))
- return -ENOMEM;
- }
return expand_downwards(vma, address);
}
@@ -2491,7 +2520,7 @@ detach_vmas_to_be_unmapped(struct mm_struct *mm, struct vm_area_struct *vma,
vma->vm_prev = prev;
vma_gap_update(vma);
} else
- mm->highest_vm_end = prev ? prev->vm_end : 0;
+ mm->highest_vm_end = prev ? vm_end_gap(prev) : 0;
tail_vma->vm_next = NULL;
/* Kill the cache */
--
2.9.4
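
Operationally, the knob this (now-upstream) patch documents is expressed in pages: the default of 256 pages is 1 MiB on a 4 KiB-page system. Checking and overriding it at boot might look like the following (the GRUB file path and update-grub command are distro-specific assumptions):

# Is a non-default gap active on the running kernel?
tr ' ' '\n' < /proc/cmdline | grep stack_guard_gap

# Request a 4 MiB gap (1024 pages at 4 KiB) on the next boot.
echo 'GRUB_CMDLINE_LINUX="stack_guard_gap=1024"' >> /etc/default/grub
update-grub && reboot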

View File

@@ -1,50 +0,0 @@
From c462b13be57c29509b945f12b239bb90eba89d3c Mon Sep 17 00:00:00 2001
From: Hugh Dickins <hughd@google.com>
Date: Tue, 20 Jun 2017 02:10:44 -0700
Subject: [PATCH 26/26] mm: fix new crash in unmapped_area_topdown()
Trinity gets kernel BUG at mm/mmap.c:1963! in about 3 minutes of
mmap testing. That's the VM_BUG_ON(gap_end < gap_start) at the
end of unmapped_area_topdown(). Linus points out how MAP_FIXED
(which does not have to respect our stack guard gap intentions)
could result in gap_end below gap_start there. Fix that, and
the similar case in its alternative, unmapped_area().
Cc: stable@vger.kernel.org
Fixes: 1be7107fbe18 ("mm: larger stack guard gap, between vmas")
Reported-by: Dave Jones <davej@codemonkey.org.uk>
Debugged-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
---
mm/mmap.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/mm/mmap.c b/mm/mmap.c
index 116ea08..ad54b9f 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1817,7 +1817,8 @@ unsigned long unmapped_area(struct vm_unmapped_area_info *info)
/* Check if current node has a suitable gap */
if (gap_start > high_limit)
return -ENOMEM;
- if (gap_end >= low_limit && gap_end - gap_start >= length)
+ if (gap_end >= low_limit &&
+ gap_end > gap_start && gap_end - gap_start >= length)
goto found;
/* Visit right subtree if it looks promising */
@@ -1920,7 +1921,8 @@ unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info)
gap_end = vm_start_gap(vma);
if (gap_end < low_limit)
return -ENOMEM;
- if (gap_start <= high_limit && gap_end - gap_start >= length)
+ if (gap_start <= high_limit &&
+ gap_end > gap_start && gap_end - gap_start >= length)
goto found;
/* Visit left subtree if it looks promising */
--
2.9.4
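
To make that failure mode concrete: MAP_FIXED places a mapping at exactly the requested address without running the kernel's gap search, so a fixed mapping can sit inside the region vm_start_gap()/vm_end_gap() would otherwise reserve, leaving gap_end below gap_start when a later search walks past it. A small userspace illustration of that bypass (a sketch, not the Trinity reproducer):

/* Demonstrates that MAP_FIXED bypasses the unmapped-area search
 * (and therefore its guard-gap policy). Build: cc -o demo demo.c */
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);

	/* Ordinary mapping: the kernel's gap search picks the address. */
	char *base = mmap(NULL, page, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (base == MAP_FAILED)
		return 1;

	/* MAP_FIXED: placed exactly where we ask, directly adjacent,
	 * with no gap search and no guard-gap enforcement. */
	char *fixed = mmap(base + page, page, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
	printf("base=%p fixed=%p\n", (void *)base, (void *)fixed);
	return 0;
}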